Building in Unity

This week started with the completion of the Explainer Video, which I've placed below. Creating this video really did help me organize my thoughts from last semester and display what I've been working on. Seeing it all in one place made it easier to find my path forward. It also gave me the chance to work in After Effects again and brush up on some old skills.

Tori and I began discussing the Ruby Bridges project last weekend and had a general plan in place for production to begin. On Wednesday, we spent some time in the Motion Capture Lab learning how to stream an actor's motions directly into Unity. I have spent very little time in the Motion Capture Lab in the past and am unfamiliar with the programs Tori has to use to capture data, so seeing this process gave me a general idea of her pipeline. Our classmate Taylor put on the suit, and we started by setting up tracking as if we were recording his movements. Tori is very familiar with this process, and it's what we'll be using to test out animations.

We then pulled up Unity and learned how to stream an actor live directly into the scene. This did require some tweaking and setup for the basic character we were using, but the end result was being able to put on the headset, see Taylor's character in the HTC Vive, and interact with him live.

Tori viewing Taylor's motions in the HTC Vive. 

This is going to be especially valuable once we have the final set built and can interact with actors in the space. His movements were very clear, though we didn't properly orient the character and the camera around the origin: whenever Taylor moved to his left, it appeared to me that he was moving straight towards me. Making sure all of our transforms are correct should clear up this issue; a rough sketch of that fix is below.
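
The fix is probably just a matter of putting the streamed character's root at the scene origin and giving the camera rig a known position and facing, so the viewer and the actor share the same frame of reference. Here's a minimal sketch of that idea; the component and field names are placeholders rather than our actual setup.

```csharp
using UnityEngine;

// Hypothetical helper: places a streamed mocap character at the scene origin
// and points the camera rig at it, so left/right reads correctly in the headset.
public class AlignStreamedActor : MonoBehaviour
{
    [Tooltip("Root transform of the streamed mocap character (placeholder reference).")]
    public Transform actorRoot;

    [Tooltip("The VR camera rig the viewer is using (placeholder reference).")]
    public Transform cameraRig;

    void Start()
    {
        // Zero out the actor's root so the capture volume's coordinate
        // system doesn't carry over into the Unity scene.
        actorRoot.position = Vector3.zero;
        actorRoot.rotation = Quaternion.identity;

        // Start the viewer a couple of meters back, facing the actor.
        cameraRig.position = new Vector3(0f, 0f, -2f);
        cameraRig.LookAt(actorRoot.position);
    }
}
```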

I made a demo Unity project earlier in the week that was just a base set: a flat plane with some boxes and house-like representations, just to have a place to test out interaction. When I went to show Tori, I realized that I had forgotten to load the SteamVR asset package. Trying to reload it caused a whole host of problems, and I found it was easier to start from scratch and build up a demo scene with a layout similar to our story. I spent Thursday building up a new set using the Prototyping asset package that comes with Unity. Because interaction is my priority, I'm choosing not to focus on the models and just work with representations. This new map features a school, front yard, and street.

Screenshot of new Unity scene.

I chose to make this scene fairly large, so we have room to experiment with navigating larger environments. This also means more room for figures once we start importing the motion capture data. 

From there, I followed some of the VRTK tutorials (found HERE) to set up the camera, basic teleportation system, and a few interactable objects. There's a table off to the side with 6 cubes on it, each with different properties. One functions as a control and cannot be picked up using the hand controls. The other five have varying highlighting settings, and react differently when picked up by the controllers. This helped me learn a bit more about how the hand controllers are set up to work with interactive objects, and what options I have for modifying these interactions.  
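
For reference, the kind of per-cube settings I was toggling look roughly like the sketch below. This assumes VRTK 3.x, where each object gets a VRTK_InteractableObject component; the specific values here are just an example, not our final configuration, and a grabbable cube also needs a grab-attach mechanic assigned for the grab to fully work.

```csharp
using UnityEngine;
using VRTK;

// Rough sketch of configuring one of the test cubes at runtime (VRTK 3.x assumed).
// In the actual scene these settings are mostly set in the inspector instead.
public class SetupTestCube : MonoBehaviour
{
    void Start()
    {
        var interactable = gameObject.AddComponent<VRTK_InteractableObject>();

        // Allow the cube to be picked up by the hand controllers.
        interactable.isGrabbable = true;

        // Highlight the cube when a controller touches it.
        interactable.touchHighlightColor = Color.yellow;
    }
}
```

The control cube would simply leave isGrabbable off, which is enough to make it ignore the hand controls.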

Screenshot of interactive table, with one of the pick-up cubes selected.

I had to make several decisions this week about which types of interaction, specifically, I would be exploring. I knew it would fall under three broad topics: navigation, object interaction, and time. But I broke that down and really thought about what I want to explore in those areas, beginning with navigation.

  • Teleportation:

    • Using VRTK, the standard simple teleport function. I did switch the pointer to a bezier pointer, which seems easier to use than the straight pointer: it's easier to determine a final destination, whereas the straight pointer tends to overshoot. I learned this week how to set that up from scratch, which was my first goal.

    • Point and click navigation. In this scenario, the user determines their destination, but we (the designers) control the actual teleportation. The scene would be divided into sections, and at the border of a section the user would have a cue to move into the next area, always appearing in the same spot. It will be interesting to investigate whether it's easier to move this way and take the user's focus off of the controls. (A rough sketch of this idea follows after these navigation notes.)

    • 2D Map. Using a menu function to determine which area the user wants to teleport to. In this case, having a map available to toggle on a hand control, or a series of options; something like "School Entrance" would teleport them to the front of the school doors.

I took inspiration from games like Myst, Dreadhalls, and The Sims when considering these layouts and how players interact with larger maps and different navigation techniques.
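
To make the point-and-click idea a little more concrete, here's a very rough sketch of how the sectioned teleport could work: a trigger volume at the border of a section shows a cue, and when the user confirms, the camera rig moves to a fixed arrival point in the next area. All of the names here are placeholders, it isn't wired to the Vive controllers yet (a keyboard key stands in for the button press), and it isn't VRTK-specific.

```csharp
using UnityEngine;

// Rough sketch of "point and click" navigation between sections: the user reaches
// the border of one section, sees a cue, and on confirmation is moved to a fixed
// arrival point in the next section.
public class SectionBorder : MonoBehaviour
{
    [Tooltip("Fixed spot where the user appears in the next section (placeholder).")]
    public Transform arrivalPoint;

    [Tooltip("The VR camera rig to move (placeholder reference).")]
    public Transform cameraRig;

    [Tooltip("Cue object, e.g. a glowing marker, shown while the user is in the border zone.")]
    public GameObject cue;

    bool userInBorder;

    void OnTriggerEnter(Collider other)
    {
        // Assumes the camera rig carries a collider tagged "Player".
        if (other.CompareTag("Player"))
        {
            userInBorder = true;
            cue.SetActive(true);
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            userInBorder = false;
            cue.SetActive(false);
        }
    }

    void Update()
    {
        // Stand-in for a controller button press; the real version would listen
        // to the Vive controller instead of the keyboard.
        if (userInBorder && Input.GetKeyDown(KeyCode.Space))
        {
            cameraRig.position = arrivalPoint.position;
            cameraRig.rotation = arrivalPoint.rotation;
            cue.SetActive(false);
            userInBorder = false;
        }
    }
}
```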

  • UI/Menus

    • Scrolling

    • Moving windows

    • Typing (for potential classroom uses)

    • Buttons

  • Animation

    • Starting and stopping animations with a button to "pause" the scene while retaining player movement.

    • Play with time: the ability to move backwards and forwards, implementing those scroll bars from the UI/Menus exploration. Similar to resting in Skyrim. (A rough sketch of both ideas follows below.)

(Skyrim) An example of time sliders, potentially incorporated into the scene to replay a moment or action.
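
As a first pass on the pause and time ideas, the simplest route is probably to zero out each scene actor's Animator speed while leaving the player rig alone, then layer scrubbing on top by replaying the current state at a chosen normalized time, which is what a time slider would drive. Below is a minimal sketch of that approach; the names are placeholders and it isn't hooked up to any UI yet.

```csharp
using UnityEngine;

// Rough sketch of pausing the scene's animated figures while the player keeps moving,
// plus a crude "scrub" that jumps each actor to a chosen point in its current animation.
public class SceneTimeControl : MonoBehaviour
{
    [Tooltip("Animators on the motion-captured figures (placeholder list).")]
    public Animator[] actors;

    bool paused;

    // Toggle pause: setting Animator.speed to 0 freezes playback without
    // touching the player's own movement or the camera rig.
    public void TogglePause()
    {
        paused = !paused;
        foreach (var actor in actors)
        {
            actor.speed = paused ? 0f : 1f;
        }
    }

    // Jump every actor to a point in its current state, where t runs from 0 to 1.
    // This is the piece a UI slider (the time scroll bar) would call.
    public void Scrub(float t)
    {
        foreach (var actor in actors)
        {
            var state = actor.GetCurrentAnimatorStateInfo(0);
            actor.Play(state.fullPathHash, 0, Mathf.Clamp01(t));
        }
    }
}
```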

Tori and I also discussed the hardware being used. While we know we're going to be developing for the Vive, we also have access to the Leap Motion sensors for hand controls. I looked into development for these, and I think it would be a valuable area to explore. Being able to see and reach out to grab objects, or incorporating gestures into navigation, could be an interesting space to work in. For now, I've decided to accomplish the above goals using the HTC Vive, but to keep researching and looking up Leap Motion resources in case we decide to take that route in the future.

Below is a video from Leap Motion previewing their VR hand tracking software in 2016, just to give a general idea of the type of interaction we could be looking at. 

The past week has been full of development and decision-making to get our prototype moving. In the next week, I will be finishing up some tests for the point-and-click navigation and getting the menu-based navigation in place. I have a few ideas for how to accomplish this, but I need to do some research and see if there are any cleaner or simpler paths.

I will be sorting through and organizing readings tomorrow; another classmate showed me the Mendeley app, and I'd like to try using it to keep my research together. I also need to get some time in the Sim Lab to test out the level I made and make sure these interactions are functioning the way they're supposed to. The simulator in Unity isn't running properly for me in this scene. While it would be useful, it's just not a priority right now, and I can work on fixing it once a few other tasks are accomplished.