01/18/20: Spring Semester Goals

The next few weeks are the final weeks of production for this case study - in two weeks, Tori and I are flying to Florida to demo at the Zora Neale Hurston Festival. We then have the month of February and the first week of March to make any modifications. At this point, production is mostly cleanup of the main case study experience, plus small prototype experiments on avatar proximity and on how to fully construct the environment.

Reaching the end of last semester, I had a rough draft of what our final experience was going to look like and a head start on the writing portion of my thesis. I created a priority list addressing our deadline for the Zora festival in two weeks and for the end of March:

Priority List for final production of Ruby Bridges case study.

Project Process

Based on the priority list, I wanted to line up my goals for the Zora festival and for the end of the semester so I could work on them simultaneously. The first thing I’ve been working on is keeping a consistent proximity between the user, Lucille, and the two federal marshals during the walk. At the end of December, all three characters would reach the end with the user, but they often felt too far away during the walk itself. It didn’t seem natural to have Lucille so far from the user in a hostile situation, or to have the marshals walking three meters up the sidewalk while the mob avatars crowd in.

In my test navigation scene, I set up the three avatars and put in a base user animation with some speed variations.

Group Walk Test using test prototyping space.

One of the biggest problems was getting the speed and animation adjustments right regardless of what the user is doing. If the avatars are slowing down, it means the user isn’t looking straight ahead down the sidewalk, which gives me a little leeway in the adjustments I make. Slowing the avatars to an unreasonable speed (they often look like they’re moving through water) doesn’t matter as much, because they speed back up as soon as the user looks directly ahead.
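To make the catch-up behavior concrete, here is a minimal sketch of how an escort avatar might throttle its own speed based on how far it has pulled ahead of the user. This is an illustration of the idea only - the class name, the "Speed" animator parameter, and all of the values are assumptions rather than the project’s actual script.

```csharp
// Illustrative sketch only; names and values are assumptions.
using UnityEngine;
using UnityEngine.AI;

public class GroupProximity : MonoBehaviour
{
    public Transform user;            // the VR camera rig
    public float normalSpeed = 1.2f;  // matches the walk cycle
    public float slowSpeed = 0.4f;    // the "moving through water" floor
    public float maxLead = 1.5f;      // meters the avatar may lead the user

    NavMeshAgent agent;
    Animator animator;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // How far ahead of the user is this avatar along its walking direction?
        float lead = Vector3.Dot(transform.position - user.position, transform.forward);

        // Slow down when leading too far; resume normal speed as the user catches up.
        float t = Mathf.Clamp01((lead - maxLead) / maxLead);
        agent.speed = Mathf.Lerp(normalSpeed, slowSpeed, t);

        // Keep the walk animation in step with how fast the agent actually moves.
        animator.SetFloat("Speed", agent.velocity.magnitude / normalSpeed);
    }
}
```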

Implementing this in the main project scene was going to require reorganizing how the main control script ran the scene. Initially, one script controlled all of the dialogue, audio, and motion triggers, which got messy and difficult to debug. Using that script as a template, I created Scene1_MainControl.cs to house the booleans that indicate scene status and to run the timing for each phase of the experience. From there, I created separate scripts to control the motion for all of the avatars in the scene (including the user) and the audio/dialogue. With that separation, I’ll have a better handle on debugging down the road.

New control script setup.
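For context, here is a rough sketch of what that kind of split can look like. The fields, phase names, and timing below are placeholders for illustration, not the real contents of Scene1_MainControl.cs.

```csharp
// Placeholder sketch of the control script's role; not the actual project code.
using UnityEngine;

public class Scene1_MainControl : MonoBehaviour
{
    [Header("Scene Status")]
    public bool carSequenceComplete;
    public bool walkStarted;
    public bool schoolReached;

    [Header("Phase Timing (seconds)")]
    public float carDialogueLength = 25f;   // illustrative value

    void Start()
    {
        StartCoroutine(RunCarSequence());
    }

    System.Collections.IEnumerator RunCarSequence()
    {
        // The separate motion and audio scripts watch these booleans instead of
        // keeping their own timers, so each one can be debugged in isolation.
        yield return new WaitForSeconds(carDialogueLength);
        carSequenceComplete = true;
        walkStarted = true;
    }
}
```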

The audio script also took some prototyping to get right. Last semester I was having problems with all of the mob members playing at once instead of after a randomized wait time. Distributing the AudioSources in the scene and layering these sounds still needs a lot of work, which Tori and I have already reached out for help with. For now, I focused strictly on timing the audio and ensuring the mob chants are properly randomized from a set number of clips.
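The randomized timing itself is simple; here is a minimal sketch, assuming each mob member carries its own AudioSource (the clip list and delay range are made up for illustration):

```csharp
// Sketch of randomized mob audio; clip sets and delays are assumptions.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class MobVoice : MonoBehaviour
{
    public AudioClip[] chantClips;                    // the fixed set of recorded clips
    public Vector2 waitRange = new Vector2(2f, 8f);   // min/max seconds between chants

    AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        StartCoroutine(ChantLoop());
    }

    System.Collections.IEnumerator ChantLoop()
    {
        while (true)
        {
            // A random wait keeps every mob member from firing at once on scene start.
            yield return new WaitForSeconds(Random.Range(waitRange.x, waitRange.y));
            source.PlayOneShot(chantClips[Random.Range(0, chantClips.Length)]);
        }
    }
}
```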

Next Steps

Those scripts are currently in the scene and functional, so in the next two days I will be turning my attention to the environmental setup of the scene. This is where my Zora goals and end-of-semester goals overlap - I’ll be using a separate prototyping scene in Unity to place prototyped blocks and houses, determine the best placement for these assets in the world, and explore different configurations for the mob. I thought about using Tilt Brush or Maquette for this, but I found it’s much more efficient to use Unity because I can mix and match the assets I already have. I have already finished assigning all the textures and materials in the scene itself, and will continue to add environment assets in between setting up the houses and cars. I also need to time the door animations for the user’s exit from the car, and clean up that exit itself.

Next week I will have documented the prototyping scene and the resulting changes to the environment, as well as a runthrough of the experience with the separated script setup. Tori and I will be taking an Oculus Rift with us to demo the project in Florida, so we will be conducting these tests on both an Oculus Rift and a Vive to check for any other issues.

12/06/19: Five Week Summary

I’ve reached the end of my in-class window for this project, and it’s time to go back and review what’s happened in the last month.

I started this project fairly optimistic about what I would be able to achieve. My initial task list was ambitious, but not out of the realm of expectation given the number of prototypes I had behind me. What I didn’t anticipate were the hang-ups on character animations and locomotion. Tori and I had been working on getting the motion capture data processed and ready for the project over the previous four months, so I focused on some initial isolated problems while I waited on the data. As it started to come in during Week 2, we realized some of the characters had major offsets, and occasionally walk cycles with broken knees. I was also having problems aligning them with the car model I brought into the scene. Tori took those models, fixed the animations, and adjusted them to fit the car, focusing mostly on the introductory car sequence. By the last week of the project, all of the animations had been turned over to me. As I was bringing them in, I realized the scene was cluttered and that I was going to need a different method to bring the characters down the sidewalk. To refresh the project (and myself), I started over with a fresh scene and spent the last three days of work bringing in the final animations, implementing the new locomotion system, and cleaning up my personal scripting workflow.

12/06/19: Progress update. Video is a little choppy due to all the functions not yet being fully implemented!

What I Learned

For as many issues as I had with this project, I did learn a lot.

NAV MESH USE

I had only used NavMesh once or twice prior to this project, and had to learn how it worked fairly quickly in order to time my characters’ motion through the script. I had some issues aligning the animation with the motion, but really that was just a chance for me to get better at setting up Animator Controllers.
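The basic pattern I mean by timing motion through the script is roughly a waypoint loop like the one below - a sketch with assumed names and distances, not the project’s actual code:

```csharp
// Illustrative waypoint loop for a NavMeshAgent; names and values are assumptions.
using UnityEngine;
using UnityEngine.AI;

public class WaypointWalker : MonoBehaviour
{
    public Transform[] waypoints;   // empties placed along the sidewalk
    int index;
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.SetDestination(waypoints[0].position);
    }

    void Update()
    {
        // Advance to the next waypoint once the agent has effectively arrived.
        if (!agent.pathPending && agent.remainingDistance < 0.2f && index < waypoints.Length - 1)
        {
            index++;
            agent.SetDestination(waypoints[index].position);
        }
    }
}
```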

CHARACTER ANIMATIONS

As an animator, I stay true to my corner of environment art and development; I generally don’t enjoy working with characters beyond the occasional sculpt. Prior to this prototype I spent most of my time working on interactions or the user themselves, not on cleaning up animations or transitions, so there was a rough learning curve this time around. I had to link up multiple animations and make sure the transitions worked with the NavMeshAgent’s motion. While I still don’t enjoy the process, I feel much more confident about troubleshooting these areas of a project in the future.

PROJECT/SCENE ORGANIZATION

I’m speaking specifically here about keeping character versions in line and how my scripts are organized. I’m the only one really working on this project in Unity, so I will sometimes let my organization slip for the sake of efficiency… which later becomes a pain in the butt. Tori and I were constantly testing different versions of the Federal Marshal and Lucille characters, to the point where I eventually lost track of which animation was associated with which FBX file. Cleaning out my project helped enormously, but I eventually realized I needed to be more attentive to my archive folder and to communicate file structures with Tori for when she sends me Unity Packages of characters.

Additionally, my scripting has been much more organized. I began adding headers to the Inspector to avoid the paragraph of variable inputs, which keeps the script much more organized on my end. I also stuck to one primary control script to keep track of events in the scene and overall story elements, while keeping mob behavior and audio as their own separate (but linked) scripts. I’ve since been able to work much more efficiently knowing where specific functions are housed.
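The headers are just Unity’s [Header] attribute; a tiny example with hypothetical fields:

```csharp
// Hypothetical fields, just to show the Inspector grouping.
using UnityEngine;

public class MobBehavior : MonoBehaviour
{
    [Header("Audio")]
    public AudioClip[] phrases;
    public float minDelay = 2f;

    [Header("Proximity")]
    public float reactDistance = 1.5f;
}
```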

PROTOTYPING SCENE

I usually have a practice scene early in the development process, but it tends to get abandoned once the project grows in complexity. This time I kept the test scene close at hand. I’ve found that complexity often makes it harder to tell where problems are arising, so bringing a prefab into that scene and testing specific scripts and functions has made troubleshooting much faster. I used it for the head rotation work, for testing several animation transitions with the characters, for ensuring the Mob Behavior script was handling audio properly, and as an initial NavMesh learning space.

What’s Next

As we know, this prototype doesn’t end here. At the end of the five weeks I managed to debug and troubleshoot plenty of these issues, but there is still much further to go. Tori and I will both be discussing this project within the next week, which requires it to be functionally sound. In January we will be showing our experience at the Zora Neale Hurston Festival in Eatonville, FL, so I will be focusing on getting the visuals up to par and working with Tori to adjust any animations that still need attention. While I have begun many of these tasks, this is the current to-do list:

  • Grouping function for the primary characters.

Because the user can control the speed at which they move down the sidewalk, the Federal Marshals’ and Lucille’s avatars need to be able to slow down and resume normal speed based on how far away the user is.

  • Mob encounters

As part of the Mob Behavior script, certain characters will react based on user proximity. This needs to be tied in with the audio (already implemented) and an additional animation trigger.

  • Audio mixing

Mob audio needs to sound “muffled” while the user is inside the car and play at full volume out on the sidewalk (see the sketch after this list). Additional audio effects can be tested here as well for the outdoor city scenario/background audio. I have begun this process with the Mob Behavior script, looking at individual phrases and sayings for the characters, but the mob chant, background city audio, and the car sounds still need to be brought in.

  • Complete Car Exit

    • Characters now exit the car appropriately, but timing needs to be adjusted for the user’s exit.

  • Implement scaling script

    • Needs importing from prior prototypes.

  • Prologue and Main Menu scene

    • This project only focused on Scene 01. I will create a package of the Prologue sequence from the previous project to be imported and applied to this one. A main menu and additional “end sequence” still need to be created.

  • Looking at environment calibration

This is a reach, but it could be used in the main menu to determine the user’s position in the playspace and adjust the environment to them. Not strictly necessary, but it would make future demos easier.

  • Visual Polish

Set aside for the moment; final textures, assets, and post-processing need to be applied to the scene. This also includes additional terrain assets such as grass, trees, and plants.
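On the “muffled” car audio in the list above: one plausible approach is a low-pass filter on the user’s AudioListener. This is only a sketch with guessed cutoff values and an assumed hook, not a decided implementation.

```csharp
// Sketch: muffle mob audio while the user is inside the car.
// Cutoff values are guesses; the SetInsideCar hook is an assumption.
using UnityEngine;

[RequireComponent(typeof(AudioLowPassFilter))]
public class CarMuffle : MonoBehaviour   // lives on the user's AudioListener
{
    public float insideCutoff = 800f;     // Hz; voices sound muffled
    public float outsideCutoff = 22000f;  // effectively unfiltered

    AudioLowPassFilter filter;

    void Start()
    {
        filter = GetComponent<AudioLowPassFilter>();
    }

    public void SetInsideCar(bool inside)
    {
        // Called by the main control script when the user enters or exits the car.
        filter.cutoffFrequency = inside ? insideCutoff : outsideCutoff;
    }
}
```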

With as many weird curves as this prototype threw at me, I’m pretty proud of what I was able to learn and accomplish, particularly in the past three days. I think that in the next week I will be able to address many of the points on my task list above and get some real feedback on the state of the project. I will continue to post updates next week on where I ended up and the feedback I receive from my thesis committee!

12/05/19: Time for a Reboot

Sometimes you just reach a point where what you’re doing in the project isn’t working, and it’s time to start fresh.

I hit that point this week with the Ruby Bridges case study.

All of the issues that Tori and I have had with the various animations came to a head when I was trying to coordinate the sidewalk animations. I realized that the system we were using to drive all the characters’ locomotion (other than the user’s) just wasn’t working, and I was spending more time fixing little timing problems than actually setting up the scene. The project was also getting cluttered with previous versions of animations (we found another issue with the male walk cycle causing broken knees). I started over with a fresh project on Monday night, brought in only the finalized animations as Tori sent them to me, and set up Scene 1 to be driven by NavMeshAgents.

I had only used NavMeshes once or twice before, so this was a bit of a learning curve. I had to figure out how to coordinate the agents getting on and off the meshes for the car scene, make sure there wouldn’t be any interference from the car’s agent while all the characters were inside, and then determine the best way to drive them using waypoints. Even with as much time as the reset took, I’m convinced I got more done, and in a much more organized fashion, than before. I kept a test scene to work out the animation issues I was having, then brought the solutions over into the primary Scene 1.
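The on/off handoff for the car boils down to keeping the agent disabled until the character is actually standing on the mesh. A simplified sketch of that idea (the ExitCar hook and the names are assumptions):

```csharp
// Sketch of handing a character from the car animation over to the NavMesh.
using UnityEngine;
using UnityEngine.AI;

public class CarToSidewalk : MonoBehaviour
{
    public Transform curbPoint;   // where the character steps onto the mesh
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.enabled = false;    // no pathfinding while riding in the car
    }

    public void ExitCar()
    {
        // Called once the door animation finishes.
        agent.enabled = true;
        agent.Warp(curbPoint.position);   // snap cleanly onto the NavMesh
    }
}
```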

Still from Test Scene, testing character animation transitions.

Using Test Scene to troubleshoot and test HeadLookController.cs

(originally found at https://wiki.unity3d.com/index.php/HeadLookController#Description)

Getting the characters to move along with the NavMeshAgent was easily one of the most frustrating parts. The instructions are outlined pretty clearly on the Unity website, but some of the animations were jumpy or wouldn’t run once the blend tree was triggered. Some of these issues are still unresolved; I avoided further hours of blend tree debugging by taking the one functional Animator Controller I had and rebuilding all of the character animations on top of it.

Animator controller for Federal Marshal 1, showing all transitions and triggers from the car sequence to the end confrontation with the State Officials.
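The core of the coupling is just feeding the agent’s velocity into the blend tree every frame. A stripped-down sketch of that pattern (the "Speed" parameter name is an assumption):

```csharp
// Minimal agent-to-animator coupling; "Speed" drives the idle/walk blend tree.
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent), typeof(Animator))]
public class AgentAnimationSync : MonoBehaviour
{
    NavMeshAgent agent;
    Animator animator;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        animator = GetComponent<Animator>();
        animator.applyRootMotion = false;   // let the agent move the transform
    }

    void Update()
    {
        // Blend between idle and walk based on how fast the agent actually moves.
        animator.SetFloat("Speed", agent.velocity.magnitude);
    }
}
```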

I’m trying to be a little smarter with my scripting this time around as well, writing generalized functions flexible enough to be applied to multiple characters, which decreases the repetition in the script. There’s more dialogue in this prototype than in the past, and I had some problems making sure the characters’ audio sources were functioning properly. But we made it through, and the car dialogue now works well. I’ll be able to test this more thoroughly once the mob is in and I’m coordinating the audio changes there.

Current State

Status update of characters walking with navmesh. Dec 5 2019.

What’s Left?

As we can see from the above video, there’s plenty of cleanup that needs to happen once the base functions are complete: implementing door animations, making sure that characters are facing the right way, and adding in the final pieces of the puzzle so that booleans are triggered on time.

I still have a fairly large to-do list now that I’ve got the characters walking together and able to look at specific targets. Fully implementing that script is the next immediate task. I’m split on what comes after - I could begin bringing in the mob characters and setting them up, which has to be done before I can start working on any interactive events. But I also need to add the user to the primary group moving down the sidewalk. I focused on getting the characters behaving in their ideal patterns before introducing the user to interrupt them, but I now need the user in place to test a script that will keep the group in close proximity. Below is the current list of tasks left:

  • Fully implement “Look At” script for primary group

  • Mob arrangement

  • SteamVR cam setup

    • Includes animation down the sidewalk, group proximity, and coding mob interactions.

  • Scaling environment script (needs importing)

  • Import Prologue scene from previous project version

  • Audio mixing for Scene 01

  • Main Menu

It’s a lot, but things are already going significantly better. I’m starting to adopt more organized practices, such as including headers in my scripts to make the Inspector a little kinder on the eyes. It’s a small thing, but it makes me feel better when I have to use it.

The next update will be an overview of how the project went, a video of its current state, and a rundown of what still needs to happen to complete the project for a January exhibition.

11/30/19: Tiny Steps Forward

After the previous update, Tori and I ran into some time-consuming animation difficulties on our case study. We had animations that needed to be adjusted to fit the car we’re using in the scene, and Tori needed time to set everyone up in Maya and work those bugs out before the characters made their way back to me. So most of last week was spent cleaning old animations out of the project, testing the new ones Tori sent me, then cleaning them out again.

The car sequence is nearly complete with the new characters; everyone is sitting where they’re supposed to be, the animations are timed, and the user can now successfully get out of the car and move down the sidewalk with Lucille and both federal marshals individually. Making sure they stay together as a group is another challenge. The doors aren’t fully timed and animated yet because I chose to save that for a later round of tweaking; right now, I need to get all the broad strokes happening in the scene to make up some ground. There are a few flaws with the characters - one of the marshals has an arm that breaks and spins when opening Lucille’s door, and the walk cycle we’re using breaks all the rig’s knees, for some reason, when it’s applied. That’s a problem to solve in January; right now, I just want everything in the scene and timed.

User view exiting the car with Lucille.

This iteration has especially tested my workflow with Tori. In a future iteration, I think we need to work more closely together on the motion capture process, because a large portion of my time on this scene was spent just getting all of the characters in the right place at the right time. I have to split up the animations I’m given because they don’t match the audio cut for the instructions or the length of the drive up to the school. Creating more intentional shots, with a bigger emphasis on timing and physical props, might reduce some of the cleanup and issues we’ve seen this time around. The bottleneck between our skill sets has been frustrating, as development halted while we solved the animation issues and replaced assets.

Now that I’m moving forward, I’m working with the assets as they come to me. I began testing “encounters” between the mob and the federal marshals, using empty game objects in Unity to represent mob members. As the group moves down the sidewalk, the distances to these tagged objects are tracked; once the group is close enough, the object is destroyed and the federal marshal reacts by telling the mob member to back off. This will be useful for setting up encounters throughout the scene once the mob members are actually in place.

Test encounter with empty game object. Also visible: the leg deformation with the Federal Marshal walk cycle.
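The distance check behind those encounters is roughly the following - the tag name, threshold, and audio-only reaction are stand-ins for the real Mob Behavior logic:

```csharp
// Sketch of the placeholder encounter test; tag, threshold, and reaction are assumptions.
using UnityEngine;

public class EncounterTrigger : MonoBehaviour   // lives on the federal marshal
{
    public float triggerDistance = 2f;
    public AudioSource voice;        // plays the "back off" line
    public AudioClip backOffClip;

    void Update()
    {
        // Empty game objects tagged "MobMember" stand in for the real avatars.
        foreach (GameObject mob in GameObject.FindGameObjectsWithTag("MobMember"))
        {
            if (Vector3.Distance(transform.position, mob.transform.position) < triggerDistance)
            {
                voice.PlayOneShot(backOffClip);   // the marshal reacts
                Destroy(mob);                     // placeholder consumed once triggered
            }
        }
    }
}
```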

Moving Forward

This is technically the last week of development for this project, and in terms of goals I’m still back in Week 2. I am prioritizing completion of the group’s sidewalk locomotion, the encounter with the state official at the end, and the mob placement by the end of the week. A reach goal would be to spend time integrating audio into the scene, assuming I can get all the assets in. Visual polish and environment development will have to wait until cleanup time in January; for now, a functional scene for the user is all I’m looking for.

11/17/19: Animation Setbacks

With as much success as I had last week making progress and getting the project set up, the end of this week put me several steps backwards.

I started out prepping some of the audio clips we need for the project, because so much of this experience relies on the mob members yelling unrelentingly terrible things at you. In the past we recorded a few basic chants we found in our research and put them in as placeholders on repeat, with a few individualized phrases yelled by specific avatars. There’s a little more dialogue in this version, and Tori recorded a variety of different mob interactions so that we could create a more dynamic audio experience. I went in and cleaned up each of the clips, balanced the audio, and then compiled some of the takes into a “group chant” to help ease the load in the Unity scene.

Group chant compilation in Adobe Audition.

With that task complete, I started in on the car introduction. I had all the avatars in last week; I just needed to separate some of the animation clips, set up triggers, and begin timing everything out. I quickly got in over my head setting up the parameters for all four characters at once, so I scaled back and got the timing down for Federal Marshal 1 (driving) first, then added in the second federal marshal and Lucille Bridges. I had a slight scripting issue with the audio repeating, but I jumped that hurdle and got to where all of the characters were reasonably aligned.

From here I was ready to develop the user’s transition from the car to the sidewalk. At the end of the video above I noticed an issue with one of the marshals snapping back into the driver’s seat - as it turns out, the rig in Unity was still set to Generic while the walk animation was imported as Humanoid, so the walk animation wouldn’t play and the avatar reset. I switched the rig, and chaos ensued. The rig rotations caused intense offsets from where I originally had the avatars set, to the point where making adjustments was nearly impossible. I brought the problem to Tori and we discussed it a bit - this is a video I sent her last night of the issue I was having:

We decided the best move from here was to have Tori animate the sequence in Maya with the car model, so all I have to do is bring it in and apply the scripts. It’s going to take a little time to get that process going, so I’m currently at a bit of a standstill (I’m also waiting on the rest of the mocap characters). Once I have that animation back, it will probably take a bit of script adjustment to get everything cooperating again, but at least all of the characters will stay where they’re supposed to be on start.

Next Steps

From here, it’s kind of a waiting game; I need the assets from Tori to move forward. Once I have them, I’ll set up the intro sequence again and get the characters moving down the sidewalk by the end of the week, ideally starting on the interaction at the end. It’s a bit of a setback, but I think I’m still in good shape to finish out this coming week on schedule.

11/09/19: Updates & RB Final Build Begins

Semester Updates

Since my first post this semester is about three weeks before the semester ends, I have a bit of catching up to do here!

The majority of these updates will be about the final build for the Ruby Bridges case study and my thoughts on relating it to my paper writing. The earlier part of my semester was spent working on the Oculus Quest with a five-week project based on house museums. My team and I made a prototype for a house museum on Jackson Pollock, which was a fun foray into Quest development. I also spent some time playing with Vuforia and AR by making a Lord of the Rings companion app for iOS that provides additional information and context when looking at a map of Middle-earth, such as the paths taken by notable characters throughout The Hobbit and The Lord of the Rings trilogy. I’m updating the MFA section of this site with the details and documentation of those projects.


Planning

Since then, Tori and I have been working steadily on what will be the final version of this project. Taking into account everything we’ve learned from the last seven or so versions, I compiled a list of key tasks to address in this one:

  • Main Menu:

    • Creating a natural introduction to the gaze-based interaction. In previous versions the user was required to look at a specific button to trigger it, but I would like to make this a little more related to the content of the experience itself.

  • Prologue

Adjusting the position of the images and text in the prologue for a seated, static user. The new form of locomotion (discussed below) does not require the user to stand, so the circular configuration of images can be difficult to see. Instead, I will re-arrange the images onto a single plane across from the user, gallery-style.

  • Scene 01

Locomotion. This has been an issue since the very first prototype, and we’ve gone through several formats trying to find the optimal amount of user agency while keeping the user’s attention on the scene around them. Our last prototype led us to conclude that it wasn’t reliable or reasonable to explicitly direct a user’s gaze in order to move them forward through the space, as it takes attention away from the events around them. When we talked to Scott, he suggested a middle ground: putting the user back on a rail, but allowing them to control their speed by looking around.

    • Construct a full narrative scene using new mocap characters. Tori has been working hard this summer and semester on cleaning new mocap data for the scene. We’ve run into some software issues so it’s taken much longer than expected, but I will be placing all the new data into this scene. This includes a new interaction between the federal marshals and the state officials once reaching the front of the school.

    • Audio. Bringing in new variations to the audio so that the mob doesn’t appear to be repeating the same phrases every 10 seconds, as well as arranging dialogue.

  • Ending Sequence

Previous demos have ended with the walk to the school. Because no other scenes currently follow it, I will book-end Scene 01 with a sequence leading the user out of the space, with commentary on what happened next and connections drawn to today. The scene will have a similar setup to the Prologue, leading the user back out to the Main Menu so the experience can loop seamlessly for demo purposes.

Schedule

Initial schedule for Final RB Case Study, created 10/28/19

Week 1 Progress

Just as my schedule shows, this week I started a clean project and began importing all the assets. I made a timeline to plot out specific interactions in the project, which has helped me visualize a few of the scripting decisions I’ve had to make so far - instead of relying on timing, I will be using global booleans in the scene to determine when specific actions occur. Giving the user the ability to control their speed down the sidewalk makes the timing variable anyway, and this is the best way to ensure all of the interactions occur regardless of the user’s pace.

Timeline for project flow.
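A tiny example of what I mean by boolean-gated events, with illustrative names (the real scene uses many more of these):

```csharp
// Illustrative boolean gate: the event keys off scene state, not elapsed time.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class SchoolConfrontation : MonoBehaviour
{
    public static bool userReachedSchool;   // set by a trigger at the school steps
    bool played;

    void Update()
    {
        if (userReachedSchool && !played)
        {
            played = true;
            GetComponent<AudioSource>().Play();   // the state officials' dialogue
        }
    }
}
```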

From there, I focused my attention on working through the gaze-based rail system. Expecting pitfalls, I gave myself two weeks to figure it out and troubleshoot. It turned out to take only two hours, and it integrated well with the scaling system I had set up. At this point, if the user’s camera rotation is between 50 and 130 degrees (with 90 degrees facing straight down the sidewalk), the speed of the animation down the sidewalk remains at 1. If the user passes either threshold, the speed decreases based on how far past the threshold they have turned.

Screencap of locomotion script in-play.

Locomotion script written for rail system.
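For reference, the rule in that screenshot boils down to something like the sketch below. The field names are assumptions, and how quickly the speed falls off past the thresholds is a guess; the 50-130 degree window is as described above.

```csharp
// Sketch of the gaze-based rail speed rule; 90 degrees faces down the sidewalk.
using UnityEngine;

public class RailLocomotion : MonoBehaviour
{
    public Transform head;         // the VR camera
    public Animator railAnimator;  // plays the walk-down-the-sidewalk animation
    public float falloff = 50f;    // degrees past a threshold before a full stop (a guess)

    void Update()
    {
        float yaw = head.eulerAngles.y;

        // Inside the 50-130 degree window the rail runs at full speed;
        // outside it, speed drops with how far past the threshold the user has turned.
        float overshoot = 0f;
        if (yaw < 50f) overshoot = 50f - yaw;
        else if (yaw > 130f) overshoot = yaw - 130f;

        railAnimator.speed = Mathf.Clamp01(1f - overshoot / falloff);
    }
}
```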

With this out of the way, I began work on the next steps in my schedule. I rebuilt the prologue as a gallery wall instead of a circular arrangement and added the timing and narration. This has prompted further conversation with Shadrick and Maria about the sequencing of the images, the narration being used, and how I might incorporate the gaze-based mechanics into this scene. I’m still not convinced about adding gaze functions here - I don’t want this to be a space where users linger, I want it to be an area for gaining context before the scene ahead. Now that the scene’s functions are set up, I’ll be looking at the sequencing and narration to see if I can build a stronger narrative for the user going in.

Initial Prologue setup.

I began bringing in the finished data as Tori uploaded it. The federal marshals are in, as well as Lucille. I found a car asset with an interior and doors that open (harder to find than you would expect), and began constructing the introductory car scene. The clips for the federal marshals are separated out so they can be timed in the script, and I’m in the process of locating the walk/idle/talking motions for the later dialogue sequences.

Goals: Next Week

I have to adjust my schedule a little to account for the locomotion work finishing early. This is what I plan to have done by next week’s update:

  • Car sequence complete with anims and dialogue attached

  • User transition from car to rail animation

  • Additional animation imports for Lucille

  • Main menu scene addition with stand-in gaze button.

By next week I should have some in-game footage to show, with the animations intact and some additional models added to the environment. I will also be introducing some of my thesis writing concepts and how they relate to the decisions I’m making here in the project.