Animated Control

The main focus of the past week was working out how to control animations in Unity and creating a menu for users to do so within the scene. This was going to be the biggest challenge for me so far, as my experience with the Animator is minimal. Tori imported some of our first mocap data into the scene last week and I watched that process, so I understood generally how to navigate the windows and what the basic settings would do.

At first, I thought working with the Animator and swapping between animation states would be the best way to do this. That was not the case at all; I spent two hours on it before realizing I had misunderstood how animation states work.

Enter the Timeline, a feature of Unity that I had no idea existed until I started sifting through the tutorials on the Unity website. We weren't trying to blend different animations together, we just wanted to be able to pause, play, and restart whatever was playing. The Timeline allowed me to do this. I was able to access the Playable Director component on each game object in the Timeline, and use Unity Events with a UI Button to attach play/pause functions. 
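To illustrate the hookup described above, here is a minimal sketch of the kind of component this takes. The class name `AnimationControls` and the field names are my own placeholders, not necessarily what ended up in the project; the general pattern is a MonoBehaviour exposing public methods that a UI Button's OnClick Unity Event can call, each forwarding to the Playable Director driving the Timeline.

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Hypothetical sketch: public methods a UI Button's OnClick event can
// target, forwarding play/pause to the Timeline's Playable Director.
public class AnimationControls : MonoBehaviour
{
    // Assigned in the Inspector to the object the Timeline animates.
    public PlayableDirector director;

    public void PlayTimeline()
    {
        director.Play();
    }

    public void PauseTimeline()
    {
        director.Pause();
    }
}
```

In the Inspector, each button's OnClick list would point at this component and pick the matching method, which is the Unity Events wiring mentioned above.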

Screenshot in Unity of Animation Controls added to Headset Menu.

This is basic functionality, and there are definitely still bugs that need to be worked out. While the animations do restart, they restart from the position they were in when the button was pressed. I would just need to write a script to start them back from their original places at the beginning of the scene. For now, connecting the button with the action completed my goal. I spent Tuesday watching more of the Unity tutorials to understand the Animator a bit more and how it connects with the Timeline, so in the future editing animations will make more sense.
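One likely shape for that restart script, sketched here as an assumption rather than the project's actual fix: rewinding the Playable Director's playhead to zero and re-evaluating the Timeline should snap the animated objects back to their first-frame positions before playback resumes.

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Hypothetical sketch of a restart fix: rewind the Timeline to its
// first frame so objects return to their original positions.
public class TimelineRestart : MonoBehaviour
{
    public PlayableDirector director;

    // Wired to a Restart button's OnClick event in the Inspector.
    public void RestartTimeline()
    {
        director.time = 0;   // move the playhead back to the beginning
        director.Evaluate(); // apply the first frame immediately
        director.Play();     // resume playback from the start
    }
}
```

A caveat: this works when the Timeline tracks hold absolute keyframes; if an Animator is accumulating root motion on top, the objects' transforms may still need to be reset separately.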

I built the animation controls into the menu, and attached that menu to every other navigational scene. 

Unity screenshot: Active game with Pause button pulled up in the Inspector, showing the button function using the Playable Director.

On Thursday, Joe critiqued the level that we had and identified several issues that needed some work. Some were similar to those pointed out by Maggie.

  • Controllers. The Left and Right controllers are getting confused, even by me. He suggested finding a way to distinguish them in-game, perhaps by changing the colors of the models, in order to prevent incorrect instructions.

  • 2D scene has mislabeled Menu options. This has been fixed in the most recent version.

  • Text for the 2D map is too close to the periphery of the mask. It needs to be relocated closer to the center of the screen.

  • Correct laser activation. Though I did fix the laser switches, I need to make sure every level restricts menu control to the Left controller only. This is a matter of making sure all the correct options are selected in each level; I believe one or two got overlooked.

Joe also presented two ideas on how we might organize the whole experience based on the scenes he saw. 

  • Using the controllers to pick up books around the scene in order to gain information about it. Similar to picking up "notes" in popular games, but giving the same control as the interactive cubes in the scene. Promoting immersion and interactivity.

  • Starting the experience with the user as Ruby, no matter what. They must experience the scene as her first walking up to the school. Then afterwards, reloading the scene and giving control of the scene to the user via navigation and animation controls.

His second comment harmonized with other discussions I've had with Tori and Maria about how user control can be used to emphasize narrative elements. We've had suggestions about making the user Ruby's height as they navigate the scene, which would definitely create an impression. On the flip side, for a young student there would be a fine line between creating an impact and pushing too far into scaring them. That is something we have to consider when we make these choices.

I did like the idea of starting the experience with a prologue-type event, and really pushing that lack of control on the user to encapsulate her experience. So I created a scene where the user follows an animated null object up the path to the school. When the user reaches the front steps there's a pause and the scene changes to one of our test navigation scenes. 

I ran into some new technical issues with this scene. For one, while the user is bound to the animation of the null object, they can still step away from the object into the scene. I will need to lock the user's transforms and force them to experience her route as it is. I also need to deactivate the teleportation, lasers, and menu controls. A non-user-related problem was the actual scene transition. When the player reaches the steps, a scene does load, but it is not the one I selected. There's a trigger at the top of the steps, and I think something may be wrong with the colliders and tagging system, so I'll need to try a few other methods and make sure there isn't something in the scene interfering. More debugging to come.
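For reference while debugging, the staircase trigger probably looks something like the sketch below. The class name, the `"Player"` tag, and the `sceneName` field are my assumptions, not the project's actual code; the pattern is a trigger collider whose OnTriggerEnter checks the entering object's tag before asking the Scene Manager to load the next scene.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical sketch of the trigger at the top of the steps. Assumes the
// player rig is tagged "Player", the collider on this object has
// "Is Trigger" checked, and sceneName matches an entry in Build Settings.
public class SceneTransitionTrigger : MonoBehaviour
{
    public string sceneName;

    private void OnTriggerEnter(Collider other)
    {
        // Loading the wrong scene often comes down to a second trigger
        // firing first, or a tag mismatch letting another collider through.
        if (other.CompareTag("Player"))
        {
            SceneManager.LoadScene(sceneName);
        }
    }
}
```

If the wrong scene keeps loading, the usual suspects are another trigger volume overlapping this one, a collider on the rig other than the one expected entering first, or a stale scene name in the component's Inspector field.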

Tori and I also sat down and started working out our research questions for this project. We figured that a good format would be to have one overall statement for the project, and then two other questions for ourselves that relate directly to the areas we're exploring. That conversation started two weeks ago when we discussed our general directions, but we were able to distill those ideas down into working statements. As it stands, my questions are:

  • "How does the combination of motion capture and VR enhance the fostering of empathy in elementary school literature as an educational tool?" (Main research question)

    • "How does user interaction with environmental elements reinforce informational transfer?"

    • "What forms of navigation promote exploration of a narrative scene in a virtual environment?"

The phrasing of the statement is still being worked out and edited, but it feels like we're making progress defining the goals and direction of our project.


With these four weeks behind us, I'll be going through the scene and pulling together all of my process documentation. Tori and I will be presenting our four week proof of concept, and our presentation will include a full video showing all of our progress to date. It will also give us time to pause and discuss our plans moving forward, and consider everything that we've been learning and researching during this time.