2/17/19: Phase 1 Begins

Projects like Orion and spending time on other VR applications have been a welcome break for exploration, but this week brings the return of thesis. We’re working on projects in phases, with Phase 1 lasting for the next five weeks.

I’ve been thinking about our prototype of Scene 1 (Ten Week Prototype) from the Ruby Bridges case study last semester. The final result was not a functional experience, technically or visually, and after speaking with peers and receiving feedback I realized that I needed to go back to some fundamental concepts and examine some of the decisions made in designing the experience, such as timing, sequencing, motion, and scene composition. I feel that our last project started focusing on production value too soon, when we should have been focusing on the bigger questions: how does the user move through the virtual space? How much control do we give them over that movement? What variations in scale and proximity will most contribute to the experience? These are the questions we started with and seemingly lost sight of.

In developing the proposal for my project I also began considering more specifically what I’m going to be writing about in my thesis - and, more importantly, beginning to put language to those thoughts. Recent projects have allowed me to question what parameters designers operate with when designing for a VR narrative experience. It gets even more complicated when we start breaking down the types of narratives being designed for. In this case, the Ruby Bridges case study is a historical narrative - how would those parameters shift between a historical narrative and a mythological one? What questions overlap? Orion was a great project for examining a design process for narrative, and now, shifting to another, I’m interested to see how that process carries over.

Phase 1: Pitch

Production Schedule for Phase 1

I will be creating two test scenes to address issues faced in the 10 Week Prototype. The first will address motion - how can a user progress through this space in the direction and manner necessary for the narrative while still maintaining interest and time for immersion? And does giving this method of progression to the user benefit the scene more than the designer controlling their motion? In the previous prototype we chose to animate the user’s progression at a specific pace. This time, I will be testing a “blink” style teleporting approach, allowing the user to move between points in the scene. Each of these points creates an opportunity for me as a designer to have compositional control while still allowing the user control over their pace and time spent in that moment. This also provides an opportunity for gamified elements to be introduced, which is something I will be exploring as I move through the project.
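
To make the idea concrete, here’s a minimal sketch of what I mean by a “blink” teleport in Unity: fade to black, move the camera rig to a designer-placed point, fade back in. This is just an illustration of the concept rather than the SteamVR implementation; the cameraRig and fadeOverlay references are assumptions about how the scene is set up.

```csharp
using System.Collections;
using UnityEngine;

// Minimal "blink" teleport sketch: fade a full-screen overlay in, move the
// camera rig to a designer-placed point, then fade back out.
// `cameraRig` and `fadeOverlay` are assumptions about the scene setup.
public class BlinkTeleport : MonoBehaviour
{
    public Transform cameraRig;          // parent object of the tracked VR camera
    public CanvasGroup fadeOverlay;      // full-screen black image on a screen-space canvas
    public float fadeDuration = 0.15f;   // short enough to read as a "blink"

    bool teleporting;

    // Call with one of the designer-placed points in the scene.
    public void TeleportTo(Transform point)
    {
        if (!teleporting)
            StartCoroutine(Blink(point));
    }

    IEnumerator Blink(Transform point)
    {
        teleporting = true;
        yield return Fade(0f, 1f);   // fade to black
        cameraRig.SetPositionAndRotation(point.position, point.rotation);
        yield return Fade(1f, 0f);   // fade back in at the new point
        teleporting = false;
    }

    IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            fadeOverlay.alpha = Mathf.Lerp(from, to, t / fadeDuration);
            yield return null;
        }
        fadeOverlay.alpha = to;
    }
}
```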

The second scene addresses proximity and scale, creating a scene where the user adopts the height of a six-year-old child and the world around them is scaled accordingly - even to the point of exaggeration, so I can experience that feeling for myself. It was suggested in a critique last semester that I create these small experiences and go through them just to understand how they feel, and I agree with this method - more experience would certainly help inform the final design decisions. I will again be experimenting with the composition and density of the mob outside of the school to create some of these experiences.

Week 1

I purposefully scheduled Week 1 to focus on planning out the rest of the project and getting a strong foundation built. I planned out what I was going to do specifically in each scene and brainstormed various ways to solve technical issues. Writing my project proposal had already helped solidify these plans, and I’ve developed a back-and-forth process with my writing. My sketchbook helps me get general concepts and ideas going, and the proposal then puts formal language to those ideas. While writing the proposal I usually find a couple of other threads that I hadn’t considered, which brings me back to the sketchbook, after which I update the proposal… the cycle continues, but it has been especially productive over the last two weeks.

I focused on getting the overall environmental scaling and test space created this week using assets from our previous prototype. The main issue was having the user start the experience at the right scale and position every time. Locking the camera in VR is a pretty big “NO”, and Unity makes it especially difficult, as the VR camera overrides any attempt to manually shift it to its proper spot.

Scaling was much easier to figure out than I expected - rather than forcing the user to be a height that physically doesn’t make sense to them, I’m scaling the entire set to account for the user’s height at any given point relative to the height of a six-year-old (1.14 m). I expected this code to be much more difficult, but so far it seems to work pretty consistently when I test it at various heights.
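
The core of the scaling logic is just a ratio between the user’s measured height and the target height. Here’s a rough sketch of the idea, not my exact script - the environmentRoot reference and the floor-level rig assumption are placeholders.

```csharp
using UnityEngine;

// Sketch of the "scale the set, not the user" idea: measure the user's eye
// height and scale the environment so the world reads as it would to someone
// 1.14 m tall. Assumes `environmentRoot` is a single parent for the whole set
// and `hmd` is the tracked head transform, with the rig origin at floor level.
public class ChildScaleEnvironment : MonoBehaviour
{
    public Transform environmentRoot;
    public Transform hmd;
    public float childHeight = 1.14f;   // approximate height of a six-year-old

    public void ApplyScale()
    {
        // Local y of the HMD relative to the play-area floor approximates eye height.
        float userHeight = hmd.localPosition.y;
        if (userHeight <= 0f) return;   // tracking not ready yet

        // A 1.8 m user viewing a world scaled by 1.8 / 1.14 sees it roughly
        // the way a 1.14 m child would see the unscaled world.
        float factor = userHeight / childHeight;
        environmentRoot.localScale = Vector3.one * factor;
    }
}
```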

I’m still working on getting the recentering function to work. I found a lot of old documentation from 2015 and 2016 that doesn’t account for all the changes in Unity and SteamVR. There are some good concepts in there, and even a simple button press would be fine for now. I plan to keep exploring this, and I expect I’ll be working on it throughout Phase 1.
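
The general approach I keep circling back to is to move the rig’s parent rather than the VR camera itself, since the camera’s transform is owned by tracking. Below is a rough sketch of that idea, triggered by a keyboard press for now; rigRoot and startPoint are assumptions about how the scene is set up, not a finished solution.

```csharp
using UnityEngine;

// Recentering sketch: since the VR camera's transform is driven by tracking,
// shift the rig's parent so that the headset ends up at a chosen start point.
// Assumes the camera sits under `rigRoot` and `startPoint` marks the desired
// position and facing in the scene.
public class RecenterOnKey : MonoBehaviour
{
    public Transform rigRoot;     // parent of the tracked VR camera
    public Transform hmd;         // the tracked camera transform
    public Transform startPoint;  // where the user should begin the experience

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.R))
            Recenter();
    }

    void Recenter()
    {
        // Rotate the rig around the headset so its flattened forward matches the start point.
        Vector3 hmdForward = Vector3.ProjectOnPlane(hmd.forward, Vector3.up);
        float angle = Vector3.SignedAngle(hmdForward, startPoint.forward, Vector3.up);
        rigRoot.RotateAround(hmd.position, Vector3.up, angle);

        // Then translate the rig so the headset lands on the start point (keeping height).
        Vector3 offset = startPoint.position - hmd.position;
        offset.y = 0f;
        rigRoot.position += offset;
    }
}
```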

NEXT

  • Begin Blink teleport testing through the scene.

    • When I made this schedule, I didn’t realize that SteamVR has a Teleport Point prefab. So, yay! Production time cut down! I’ll be using that spare time to add in primitives simulating the placement of the crowd and brainstorming potential gamification/timing. I may also go on a search for some audio and add that to the scene as part of my testing.

  • Experiment with button pressing versus gaze direction. How does the scene feel without controllers? Would gaze navigation be effective here?

  • Playtest #1 with peers, gathering feedback on the button or gaze mechanisms and other developments made during the week. I will also gather feedback on the scaling and positioning of the user.


OUTSIDE RESEARCH

The games I played this week were all very physically involved, with a lot of motion required on the part of the player. However, none of them used methods that required teleporting or “artificial motion” via joysticks or touchpads; all were based on the motion of the player’s body. Even more interesting, I experienced a stronger sense of flow in these games than in past titles, though each for different reasons. Considering my thesis, which will not be this action oriented, it’s helpful to see how specific components in these games - sound, motion, repetition - are utilized in ways that ultimately make a flow state possible.

FLOW VIA SOUND: Beat Saber

Beat Saber is a VR rhythm game that operates as a standing experience, where players use their arms and lean to hit cubes with sabers on the beat and in the indicated direction. Unlike the others, I’ve been playing this game for a few weeks and have had time to examine my increase in skill level as well as what kind of experience I was having. It was initially very difficult to get used to the cubes flying directly at me and to react to the arrows indicated on them - a longer adjustment than I expected, actually. I play games like this on my phone using my thumbs, and my body knew what it needed to do… but had a difficult time getting my arms to react. After a couple of weeks I can now play the above song on Hard mode, which is what I’m including for this group of games.

Every time I play a song, I usually get to a point where I experience flow - able to react to the cubes as they come and follow the rhythm without really even thinking about it (and significantly better than if I am thinking about it). It’s a state that feels instinctual and occasionally feels as though time slows down, a common description of flow. Sound is what’s driving that experience; without the music this would be much more anxiety-inducing and stressful than enjoyable.

After playing I was thinking a lot about Csikszentmihalyi’s book Flow, where he outlines several important features of a flow activity: rules requiring the learning of skills, goals, feedback, and the possibility of control. Even with varying definitions of what is considered a game, most require those components in one way or another. He references French psychological anthropologist Roger Caillois and his four classes of games - based on those, Beat Saber is an agonistic game, one in which competition is the main feature. In this case the competition is against yourself to improve skills and against others to move up the leaderboards. However, as frequently as I fell into flow, I also fell out of it easily when a level grew too difficult or beyond my skills.

FLOW VIA MOTION: SUPERHOT VR

I’m not quite sure how to categorize Superhot VR, but it’s the most physical game I’ve ever played in VR. Players can pick up items or use their fists to destroy enemies making their way towards them in changing environments… the twist is, time only moves if you move. Every time I rotate my head the enemies get a little closer, or if I reach out to pick up a weapon I suddenly have to dodge a projectile. As the number of enemies increased with each level I found myself kneeling, crouching, and dodging. There is no teleportation or motion beyond your own physical movement.

Everything here is reactionary. I experienced a strong level of flow, unlike the intermittent experience I tend to have in Beat Saber. Time being distorted here and used as a game mechanic almost seemed to echo those flow states. The stages are all different with minimal indication of what is coming next, and often the scene starts with enemies within reach. I didn’t have to think about what buttons or motions were required to move, it was a natural interface - I could just move my body to throw punches or duck behind walls. While this was effectively immersive and did result in a strong flow state, I was pulled out of it immediately every time I ran into a wall in my office or accidentally attacked an innocent stack of items sitting on my desk.

Sound was minimal, which I very much appreciated, but it sets this game in stark contrast to Beat Saber. The focus of this game is motion, not music or rhythm. On a continuing side note from the last two weeks, death states in Superhot VR were much less disruptive than in the other games. The entire environment is white, so the fade to white and return to the menu isn’t very jarring or disruptive to the experience. It was easy to jump back into the level and begin again. This may be an interesting point for transitioning my thesis between scenes - having a fade or transition that is close to the environment rather than just doing the standard “fade to black”. I suppose it depends on the sequence I’m designing… a thought for next week.

FLOW VIA REPETITION: Elven Assassin VR

And last, a game that combines a little bit of everything. Elven Assassin VR requires you to take the role of an archer fending off waves of orcs planning to invade your town. Your position is generally static, with some ducking and leaning and the ability to teleport to different vantage points within the scene. This game deals in precision, speed, and the physical motion of firing the bow. The satisfaction of hitting a target was immense, and I ended up playing until my arms hurt. The flow in this game comes from the rhythm of motion - every shot requires you to nock, draw, aim, and release the arrow to take down one enemy. There isn’t really a narrative occurring in this game at the moment; it tends to operate more like target practice, and the concentration required was what induced that flow state.

Falling out of flow was a little easier here thanks to technical glitches - tracking on my controllers would get disrupted and my bow would fly across the world while I fell to a random orc sneaking through the town. The multiplayer function is also really interesting, and the social aspect may be an avenue worth exploring with this game.

Conclusions

I didn’t actually expect to talk about flow at all; it was just a happy side effect. These are three VERY different games, and that experience of flow was the strongest commonality between them. This goes back to game design as a whole rather than specifically VR design, but the little differences in how each game approached physical action and reaction to the environment really drove that point home for me. Where Elven Assassin VR focused on action that was repetitive and chaotic, Beat Saber focused on the rhythm of those actions and applied them to the template of the song. Superhot VR left the chosen action up to you, but suggested some paths and required movement to occur in order to advance. The result was neither repetitive nor rhythmic, but required control.

I am not planning on making experiences as heavily focused on action and movement as these, but bringing what I’ve seen here - from the choice of motion to smaller actions and interactions with the environment - into my thesis work might help me answer some of the design questions I’m exploring in the Phase 1 project. How can a user move through a space? I’m considering teleporting from point to point, but have not yet thought about the potential secondary actions on behalf of the user - those spaces where gamification could occur. These games re-framed motion for me, reminding me to define more specifically the type of motion expected of the user and to ensure that the motion (or lack thereof) enhances the experience itself.

02/10/19: Reviewing Orion

After five weeks, the Orion project has come to a close. And as with most projects, the final result was vastly different from what I anticipated when I began.

Textured image of Orion in UE4 editor

PROCESS

When I began Orion I anticipated a fairly straightforward process: I would be working with Quill and Unreal to learn the pipeline for each and between the two. What I had forgotten is that I have never created an observational narrative experience from scratch in VR. I am usually planning for some form of interaction, or, in the case of my thesis project, the narrative and environment are already described for me. Traditional storyboarding and animatic techniques were not going to work, which is where my foray into Maquette and Tilt Brush came in. Every step of the process was steamrolling through technical issues to see what worked and what didn’t.

Process path for Orion

I realized that I really just needed more time to learn the painting and animation techniques for Quill along with all its quirks. I was excited about painting the cabin last week, but ultimately the asset ended up not working and I built it using Maya and Substance Painter instead… I have never been so happy to be back in Maya, to be honest. I used the terrain tools in UE4 and the “Forest Knoll” asset pack I purchased a few years ago to build the rest of the environment. I used a few Quill animations, such as the candle and the stars, as “accents” to the rest of the scene.

On a personal process note, while putting together the scene I made the decision not to use any visual reference at all. This was for two reasons: to avoid hyperfocus on unnecessary details, and to operate within the essence of memory. The project description was to show the essence of our memory in 15 seconds - well, 15 seconds is a very short time in VR. That’s usually the amount of time it takes for a viewer to orient themselves and focus in on the story. I didn’t want to overwhelm with an overly detailed environment that misses the point of my memory. And if I used visual reference, I would shift focus from my own memories to what it “should be”.

CONCLUSIONS

Even with all of the roundabout processes, I felt the final result was remarkably close to what I remember - closer, I suspect, than the original storyboards would have been. I think those would have been visually exciting and fun to watch, but that’s not what this moment was about. It was a quiet fifteen seconds on the deck in nature with just the stars and the sound of the trees. I have yet to share this experience with my partner to see how her memory might differ from my own.

I also learned a lot about the technical aspects of these tools, and personally did not enjoy using Quill for most of my painting time. It was fun to make some looping animations, but I doubt I’ll ever actually use this again for a project in the near (or distant) future. The final result was something I probably could have made in three or four days of work in Maya and Unreal, but I feel that I’m at a good point to move forward if I want to use Unreal for future VR experiences and feel more informed about the pipeline options available to me.

NEXT

  • Documenting Orion. I’m having a difficult time getting a video of the full experience because the scene is so dark. In the headset it’s easy to see, but the screen recordings I have taken so far have been really low quality and dark. I’m currently working on some rendering options in Unreal that may produce a better result.

  • Begin Phase 1 Project. I will dedicate next week’s post to the Phase 1 project centered around my thesis, but currently I’m still working out a final plan and some language to describe the project itself.


OUTSIDE RESEARCH

Continuing my theme of playing VR games and experiences for research, this week I took a bit of a different track. I did some digging around in the Oculus and Steam stores and was able to play four of the games I had lined up.

BOARD GAMES IN VR

I think the initial question here for me was “why?” I enjoy board games, specifically the social aspect: sitting around with friends chatting, accusing each other of hiding cards, accidentally bumping the board and sending pieces flying. It’s all part of the experience. I noticed the Oculus store has several chess applications, so naturally I had to download one. I also found a Catan VR app that I wanted to try.

(I found out while writing this post that both applications are made by the same studio called Experiment 7)

The only real thing separating a board game in VR from a board game in real life is the social aspect, which is exactly what these games are trying to recreate. Catan’s environment looks like a mountain lodge with views of mountains outside and a nice soundtrack, with four chairs sitting around a table. Chess is similar, taking place in a library by default. I played against the AI in both; in Chess that produced a little robot figure watching me across the table, while the Catan foes were painted portraits with moving eyes and facial expressions. That bit was a little unnerving, to be honest.

I got absolutely destroyed in both games, but I was surprised how much I enjoyed the experience of sitting in a chair interacting with the other “players”. The animated board in Catan was a nice touch, although things move so quickly it took some getting used to. Being able to physically pick up a chess piece and hesitate or fiddle with it before moving was a great improvement over playing typical browser games. I felt present in the world and able to interact with the other players, feeling real frustration with them when I lost resources or had a bad roll. I was worried that these games would simply animate the board and leave it at that, but the efforts made to engage the players in the space and with each other made for a much more effective experience.

EXPLORATION and PUZZLES

The first game I played is called “I Expect You to Die”, where the player is a secret agent going on missions in which the path forward must be determined from actions and clues in the space - and often the process of figuring out that path results in a gruesome death. I played the first level of this game a few years ago, but since then they’ve added a beautiful animated introduction and several new levels. This game is meant to be played seated, with the player reaching out or leaning to move, or using their telekinetic prowess to bring objects to them.

In this case, the lack of locomotion around the scene increases the challenge and still makes for an enjoyable experience. It becomes accessible for all kinds of players and play spaces, and the missions themselves have good variety… though the deaths are still extremely startling in VR. The controls worked especially well, and I enjoyed a great level of dexterity in the scene, switching between objects and using them with ease.

The last game was “Internal Light”, an escape room style game where the user must navigate a creepy dark building to make it to the outside, with a tiny ball of light as their guide.

Now, when I started this game, I didn’t know what it was going to look like or what kind of gameplay there was going to be. You start off in a cell chained to a bed, in a scene that looks like it’s out of Resident Evil. There will be a week when I go into horror games, but I was not planning on it being today and, well, I’m a chicken. I immediately started sweating and wanted to leave (escape?). The game itself is not a horror game, it’s just creepy. But the environment was effective for building suspense and tension.

What really sticks out for me here is the locomotion. The player moves by holding a button and alternately swinging their arms back and forth in a skiing motion. I have NEVER seen this before, and it was oddly effective. To navigate, the player is required to crouch and dodge security, and there’s a special kind of anxiety in swinging your arms to move from one cover to the next, hoping you’re moving fast enough. Even though I was standing I didn’t get motion sick, and I was able to run through most of the game fairly quickly. I didn’t see any options to adjust these settings.

CONCLUSIONS

All four experiences used their environments to create presence in the space, and included a level of AI “social” interaction. Whether it was the calm atmosphere conducive to board games, the action-hero-inspired music and imagery, or the anxiety-inducing horror themes, the environments were really the selling point of the experience. The social interaction between computer and user (with the potential for multiple users) negates the isolation that VR can sometimes induce, as discussed last week. I’m still curious about why that form of motion in Internal Light worked so well, and I want to see if any similar methods turn up as I continue to explore what VR experiences are out there.

2/3/19: Painting and Planning

This has been a production week: I’ve been splitting my time between getting audio set up in Unreal and getting objects made in Quill. The last few days are where I get to put it all together with the final animated assets.

I’m operating a little in the dark right now (pun intended) on what the final look of this piece is going to be. I timed out some atmospheric fog to reveal the scene slowly and made some cues for the sound effects: a match lighting, trees swaying in the wind, ambient noise for the surrounding scene. The narration is in the scene, but I still need to adjust the timing and put it all together.

Quill has been easier to use for static objects. I painted the cabin setting for the user - quicker than I expected, using the straight line tools and some colorize to get the final shading in. The candle currently in the scene feels a little too bright, so I tried going darker to see what the lighting in Unreal can do. Something annoying about painting things like this in Quill: if you’re painting a lot with a specific color, the lack of lighting tools in the program makes it really difficult to see the cursor against those colors. I sometimes got lost trying to find where my brush was in the cabin, even though my hand was right in front of my face. Click the images below to check it out, though they’re really dark when not in the program.

This last leap is about putting all the pieces together and testing it out. By the end of today I should have all the assets in and will be doing the final bit in Unreal. We were able to get both Quill and Unreal working in the labs, which significantly sped up my production.

What’s Next

  • Finishing the last few Quill assets

  • Compositing the 3D and Quill assets

  • Finishing audio timing

  • Adding a “Restart” button, so that the experience can loop at the viewer’s choice or provide an easy restart between viewers (a reach, but it would be ideal)

  • Troubleshooting


Outside Research

NARRATIVE

Throughout this project I’ve been thinking about how to direct the viewer’s attention to the events you most want them to see, while taking into account that they have agency over the camera itself. Part of that has to do with seeing the viewer as an actor within the scene and the designer as a form of director - and with how that production process differs in VR compared to the traditional 3D workspace, something I’ve been struggling with myself in this project while finding that path in a very short amount of time. Then an article about “Cycles”, a VR short that Disney released late last year, came across my path.

Disney’s “Cycles”, from AWN article. Source

“Cycles” has a really interesting visual feature in that, when a viewer looks away from the central action to an area off to the side or behind them, those features desaturate and become darker. I also read that they used Quill to create storyboards for the film and developed a number of virtual tools to experience each stage of the process both inside and outside of VR.

GAMES

Moving away from the Quill project and towards my Thesis, I decided to use our unexpected Snow Day to conduct some VR research… using my chunk of time to experience the variety of things available on Steam and broaden my understanding of what techniques are being used.

I started with The Talos Principle VR, a game that I enjoy playing on the PC. When VR was first released, many game studios started porting their current titles over to VR by just changing out the controls and letting the content flow just the same. I wanted to be able to do a direct comparison of the two.

Screencap from Youtube playthrough by Bangkokian1967 (source)

What I was really exploring here was how they approached movement. The Talos Principle is incredibly nonlinear: players generally get to choose how and where they go, and what path they take to get there. It’s a puzzle game with generally realistic assets, and movement to avoid enemies is a huge part of successfully completing each stage.

The game gives the player an enormous amount of control over how they want to move, exposing options for how you move and how the camera adjusts for that movement. I started with teleporting, which works okay for getting across long spaces. But in confined spots with enemies that require you to move quickly, the few seconds it takes to acclimate to your new location tended to result in the death of my character.

Oh yeah - dying in VR? More disturbing than I thought it would be. It’s just a little explosion sound and a fade to black, but still very startling.

Walking using the touchpad didn’t make me as sick as I thought it would after I adjusted the vignette over the camera and made sure to stay seated. Standing resulted in a quick loss of balance and motion sickness, and I noticed that movement in a direction where I wasn’t looking also made me a little queasy.

Overall, I thought the adjustments made to the motion in the game worked well, and I was able to play for over an hour before taking off the headset. I’m not sure the experience was especially different from playing on a PC, but I also already know the story and how the game works, which makes it difficult to judge how immersed I was.

EXPERIENCE

The last thing I wanted to look at was an experience called Where Thoughts Go: Prologue, available on Steam. The user sits in an environment and is presented with a question; they can listen to the anonymous answers of other participants and then record their own to move on to the next. There are five questions, and I still spent over an hour in this experience.

Where Thoughts Go: Prologue, Chapter 2.

Each environment changes to suit the question, from lighthearted for the first question to darker and more somber for the last. The experience was incredibly meditative - the environments are pleasant to sit in. The little orbs in the image are the responses of previous people. You listen to their voices answering, and I was shocked by how open and honest the answers were. Being able to hear someone’s voice crack a little bit as they talk about a sad event, or get higher discussing an upcoming wedding to their love, just pulls me further into the space.

VR can be considered isolating, as for the most part we’re all just sitting by ourselves in a headset in our own worlds. This took an isolating experience and turned it into a communal one, a place where you can be vulnerable without risk. There are no usernames or accounts, just a recording. When you add your own recording to the space, you pick up the orb you’ve just made and pass it off to join the world. It provides a sense of closure and just enough participation that I felt like part of the experience.

Where Thoughts Go: Prologue, Chapter 2

Conclusions

I realized that I haven’t been very involved in what’s happening in VR outside of the academic research world, and I need to continue going through these experiences alongside my own research. As I go, I’m keeping a journal of notes on each experience and what I can take away from it. Next week I would like to play a made-for-VR game and see how it feels compared to a port like The Talos Principle, and search for other community-based experiences like Where Thoughts Go.

1/27/19: Quill to Unreal

Where last week was full of conceptual challenges, I encountered all of the technical challenges over the last few days. Before moving too far into production in Quill, I wanted to make sure this pipeline was feasible and functional within the time span I have. I also wanted to double up my time by taking on the technical challenges while still working on an animatic.

Maquette was a good first step last week. I have found that planning for VR in VR is a key part of the development process… yet it’s still difficult to choose a tool. While Maquette presented many opportunities for rapid spatial iteration, what I really needed - light placement, pipeline development, and audio - just wasn’t viable in that environment. I needed to get Unreal functional. I was able to get Quill and Unreal working together on the same computer in the lab, but still faced frequent crashing and Quill being incredibly picky about tracking. I haven’t had those issues at home, so I’ve been doing most of my experimentation on my own setup.

I’m still getting used to the controls in Quill. They don’t feel especially intuitive, though I am slowly getting better with time. Working on another project in Tilt Brush earlier in the week, I was able to get more of a feel for painting techniques in a program that is much more pared down and easier to iterate in. Jumping back into Quill afterwards felt a little more comfortable, and I am getting faster. To test the pipeline I animated a candle flame and sparks that are a major lighting source in my story:

Animated flame in Quill

I watched Goro Fujito’s video on how to animate brush strokes for falling leaves - incredibly helpful, though I was still having a rough time getting the hang of that painted lathe trick to make solid objects.

I exported that out of Quill as an Alembic file, and that’s where some of my trouble began. There is very little documentation on the pipeline for bringing a Quill animation into Unreal. Unity has its own Alembic importer and shader for Quill, and Maya has some decent documentation. I tried bringing it into Maya, exporting into different formats, and bringing those into UE4 and Unity - and had a ton of issues getting the animations to play properly and the textures to display.

It turns out the final process was to export the alembic from Quill, import into Unreal as the experimental Geometry Cache component, and make my own material with a vertex color node. I’m pretty sure I can separate individual layers in Maya and export them as their own alembic files for use, but that’s a process for the more complex elements in the scene and I haven’t tested it yet.

I started building a scene around that, getting some lighting in there and blocking out some of the bigger landscape features that I’ll later be painting.

UE4: Blocking in geometry.

Unreal has a pretty good Sky Sphere with illuminated stars; I’m using that as a stand-in right now. As I blocked in shapes I made sure to check periodically in the Oculus that the scale made sense for what I’m trying to accomplish. I am also familiar with the Sequencer tool in Unreal, so I have been using it to key values in the scene and create a basic animatic. The result is a developing block-in for my project that functions as a VR animatic while I get more familiar with Unreal. The viewer starts in a dark fog, which then lightens briefly to reveal the candle lighting. Over time the fog recedes and the stars show. I plan on guiding the user’s attention with specific lighting cues, the first already in the scene with the rising sparks on the candle.

Current state of the sequencer, used to coordinate my animations.

Going through the process, I think my biggest question right now comes down to scale (again). I want the viewer to feel small in the beginning, but then transition to feeling close and connected. Scenes in VR have a tendency to feel very large, and distances seem much farther than they should be. I’m interested to see how much more effective a dramatic shift in proximity to the viewer can be in a VR space. On a project last semester I fell into the habit of testing the scene in the VR Simulator instead of in the headset. The result was a scene that felt too large for the user, and I’m already starting to catch those instances just by working in the headset more frequently. Animating in Quill has been really helpful as well, as I’m able to use my body as a reference.

Next Steps

The first step this week will be adding in the narrative audio to work out the timing, then bringing in more Quill animations and static models by the end. Now that I understand the pipeline a little better and know how to move around Unreal, I’ll be able to bring in new work and add it to the scene as I go.

I have also begun collecting sound effects, and will be using those to build my scene up throughout the week.

1/20/19: Considering the Narrative VR Pipeline

Planning a narrative experience in VR requires its own pipeline and structure. Logically I already knew this, but I still went into development with the same animation mindset. I spent this week focusing on fleshing out the narrative itself, creating storyboards, and determining which technical paths are viable - a process that reworked itself along the way.

Gathering References

I began gathering some reference this week, looking for potential lighting inspiration and trying to determine how these scenes are created. Goro Fujito’s work was great inspiration, but I realized that watching his renders and videos is exactly the same as watching a traditional animation - I needed to experience the scene itself, to see where the lighting takes place behind the viewer, above the viewer. How does the scene play as a whole?
Tilt Brush to the rescue. Tilt Brush provides the ability for users to select scenes uploaded by other artists and watch them being painted in VR or skip ahead to the final result. I went through many of the scenes while in the Vive, focusing on those whose lighting most closely matched my own or whose style would be useful to observe.

Keeping in mind that Quill does not seem to have the wide variety of playful brushes available here, watching how the artists structured these scenes gave me some ideas for potential visual styles and techniques. After-Hours Artist is the only one I experienced that used 3D models that were then painted on top of, something I mean to explore further in Quill. Backyard View showed a series of single paint strokes layered in front of each other, then used a “fog” brush to emit a tiny bit of light and create depth - incredibly effective and dramatic in this case. And in Straits of Mackinac, the artist created the illusion of water by setting the background to a dark blue and implying reflection with only a few brush strokes.

Just by being in VR I found I was able to more fully deconstruct the scenes than I would in a still render, setting the path for my way forward.

Story(board)

At the same time, I have been fleshing out what it is I want to happen in this experience. The result was the following initial concept:

As I was working through the story, I grew frustrated.

Storyboards are a standard part of the animation pipeline, and I fell into the process of making one without realizing that the end result would be nearly useless for conveying what I am trying to create for this experience. Storyboards assume that designers have control of the frame, that what is presented to the viewer is a carefully constructed composition flowing from one scene to the next. At this point in VR, the designers have next to no control over the camera. I can choose which direction the viewer may start out facing. I can provide a limited scene with nothing else to focus attention on. I can attempt to draw their attention with sound cues and peripheral movement. At the end of the day, the viewer gets to control which details they experience within this world. Creating these storyboards did help me generally work out what I would like to happen within the experience, though I do not believe they are useful in helping me convey that to others.

I’m currently taking a Narrative Performance in VR class that is discussing many of these topics, and one helpful thing from this week was a Variety interview quote from John Kahrs discussing the making of Age of Sail. Kahrs comes from an animation background and talks about having to break that pipeline in order to develop an animated VR cinematic: “I was told not to storyboard it and just dive into the 3D layout process, which, I think, was excellent advice.” In that same lecture, this diagram from the AWN article “‘Back to the Moon’ VR Doodle Celebrates Georges Méliès” was presented:

The designers for that experience split the scene into sections and mapped out the action occurring in each part of the scene at each point in time. Thinking about the scene in this way - as a production rather than a composition - changes the way I’m approaching both the narrative itself and the production process.

Time to change tactics. VR manipulates space, not a frame. It then follows that I should begin feeling out that space in order to “storyboard” my animations.

I moved into Microsoft Maquette.

Maquette makes it easy to rough out material. I can place basic 3D shapes at all scales, use the painting and text tools, and create multiple scenes that can easily be scanned back and forth to watch the progression. I can view these scenes from a distance or at the viewer’s level. After experimenting with the tools, I began building a primitive scene to understand spatially what manipulations I wanted to happen. The result is an odd combination of an animatic and a storyboard.

Technical Progress

I did some experimenting in Tilt Brush, first with painting and then with the export pipeline. I am currently still waiting on Quill and Unreal Engine to be available in the lab, but will be spending this weekend working on my Oculus at home to see the results. Tilt Brush gave me some practice working with painting in a virtual space, specifically dealing with depth and object manipulation. I chose to create one of the chairs from my scene with the candle sitting on it as a test subject.

Painting in Tilt Brush of a candle sitting on the arm of a chair.

I turned most of the lights down in Tilt Brush to get a feel for what the scene would actually be like, and see what the various brushes would produce in terms of light. Not very much, as we can barely see in the image above.

What I really wanted to test was the export process from Tilt Brush to Unreal Engine. Tilt Brush exports as an FBX with the textures, but upon importing to UE4 I realized that the FBX is split into pieces based on which brush you used for each stroke. Further, the materials don’t seem to work without undergoing a process in between to assign a vertex color map to the object. I’m still a bit hazy on this process, though from my understanding Quill exports in a different file format that will seemingly not require this middle step.

Unreal Test - bringing a Tilt Brush model in, without functional textures.

Unity, however, has a package made to work with Tilt Brush materials called Tilt Brush Toolkit. Once downloaded from Github and loaded into a fresh Unity scene, I was able to import my model without any issues from the textures. All I had to do was drag it into the hierarchy.

Unity Test - bringing in the Tilt Brush object after importing Tilt Brush Toolkit.

Next Steps

My steps forward are really just finishing up where I’m at now and moving towards solid production.

  • Spending time animating in Quill. The next week will be spent getting some of these base animations down in Quill and trying to export them into Unreal.

  • Determining which 3D models I’ll be creating and starting work on that, while blocking out their presence in UE4.

  • Finish creating Maquette scene mockups. Finalize story.

1/13/19: Investigating Quill

This week marked the start of a short project on experiential storytelling, memory, and light. We were asked to think about three memories in which lighting was an important factor and write a short description of each moment. Emphasis was put on the word moment - this is not meant to be a life story. The idea is to bring the viewer into this moment, understand what it is that’s happening, and then exit in 15 seconds.

I started thinking about memories with a specific focus on lighting, and found it was more difficult than expected. There were plenty of memories where I could remember what the lighting was and appreciated it, but I had to find three where it really stood out to me. I found that when writing about them, I was walking a fine line between what I’m saying to the viewer and what they’ll actually be seeing in the experience. The descriptions were going to be recorded as part of the audio. For each chosen memory I already had a vague impression of what I wanted to accomplish; the most difficult part was deciding how much visual detail to include along with the narrative, and how specific that narrative would be.


Chosen Path

“I was looking for Orion. He’s there, as always, but tonight he brought friends to fill the usually empty sky. Standing barefoot on the deck with only the glow of the candle, we stared at each other over the hills, making introductions.”

This memory is from last May, standing on the deck of our house in North Carolina. My partner and I drove down from Columbus for my birthday. The area isn’t very populated, mostly woods - on a clear night it’s easy to see all of the stars. We stood outside the first night we got there, with all the lights out except a candle, just looking at the stars. As a child I would always look for Orion every time I walked outside; it was the only constellation you could usually see from where we lived in Miami. The light from the stars, the candle, and the houses on the other hill really stands out to me in that memory, and I chose it because I feel that I could bring the essence of this moment to a viewer with varying levels of abstraction.

Panorama off the deck in North Carolina at sunset. Original scene where the memory takes place.

Process

In the past, my research has required virtually the same visual pipeline every step of the way: block modeling in Maya, some texturing in Substance Painter (occasionally), and then putting it all in Unity and adding lights. The focus was on making the program itself function rather than imparting an experience visually. I want to take a step back and create something that imparts meaning without necessarily requiring the viewer to actively be a part of it.

Oculus Quill presents some really interesting opportunities for animating in virtual reality. I spent some time looking around and finding examples of these animations that might be similar to my own.

Artist Goro Fujito spends his time creating animations in Oculus Quill, showing a variety of scenes and perspectives. Viking Rockstar is a great example of the type of color and lighting I want to use in my own scene, and it includes multiple shots and sound design. I wouldn’t categorize this as a virtual experience, but as an animation it’s beautiful and on the right track stylistically.

This short looped animation puts the viewer in the perspective of driving through the rain. With the sound design and lighting, it’s incredibly effective and shows how the user can be brought into an experience.

Fortunately, Fujito also posts videos where he shows his process and explains his animation workflow. I watched this to get a better understanding of how Quill functions and if it would be a good option for me moving forward.

The official website provides some resources on how to export animations and FBXs to Unity, though I needed to look externally for information on how to do this in Unreal Engine. I was considering using UE4 specifically for its lighting capabilities. I worked on the lighting for Project Sphincter while at CCAD, and Unity just hasn’t been able to compare. As of now, I am leaning towards this option.

Putting the final software choice aside for a moment, I decided to get into Quill and see if this was really something I wanted to commit to. Granted, I’ve spent maybe a grand total of 3 hours in it and probably need to watch some more tutorials, but the learning curve is pretty rough. I had a difficult time getting the hang of the controls, which are not well explained when first entering the program beyond a little diagram that pops up by default. These were the initial sketch results:

capture00000.png

Next Steps

After spending some time in the Oculus, I don’t think it’s practical to do the entire scene this way - at least, not from scratch. I need to investigate bringing in models and animating over top of them, possibly as reference. It’s very difficult to gauge depth in there once the scene is moved around. I also need to look into animating only certain objects in the scene with Quill rather than the entire environment, or blending the two together. This will help me determine a production schedule for the next two weeks.

Beyond the pipeline research, I will spend this next week gathering my final visual reference, sketching out a storyboard, and recording my story for timing. I have also been gathering information on lighting and technical terminology so I can properly discuss the lighting decisions I’m making in the scene, and will get into that more next week.

Looking Back on Liminality

A few weeks have passed, and I wanted to wrap up the work we did on Ter(li)minal from September!

Our end result was an experience that sought to place the participant in a liminal space, forced into a sense of waiting. You are constrained by a lack of movement and activity, only able to observe by looking around and rotating your head. Upon starting the application, the user sees a space sparsely populated by seated and walking figures. As you wait, small changes begin to occur. Once-empty seats are filled with figures. Seated figures may change positions when you look away. The departure board gains more and more red delayed flights as time passes. Babies scream, planes take off, and the space becomes more chaotic and crowded with each delay announcement. Figures begin to break away from their straightforward march along the walkway and defy gravity, floating through the ceilings and moving sideways out to the landing planes. Finally the scene calms as the boarding call is made, and the player is allowed to move on.

In the process of development, I learned a lot about linking layered timed events in Unity. Once the sightline script was functional and able to be applied to multiple objects, it became a matter of making sure these events were allowed to occur at the proper times in the application. I had to go back to the basics: public booleans and instantiations. The sightline function ended up working out thanks to a tip from Alan to use Transform.InverseTransformPoint instead of the frustum planes approach. I was able to get the function working that same day, and then it all became about timing. Sara made a rhythm chart for the project that I based all of the interactions on:

rhythmchart_3.jpg

Some critique points that came up:

  • Move camera back. The player’s face is too far forward over the model, and it’s nearly impossible to see the models placed nearby.

  • Add interaction to the models around the player. The boarding pass, phone, and books around the player were actually supposed to be moving. I ran out of time and just didn't get around to it.

  • The departure board is very hard to see. In our presentation, we talked a lot about being able to see these subtle changes in the environment around you. But the departure board was placed so far away that it was difficult to read and see these changes as they occurred. Moving it to one of the pillars by the player might be more effective.

  • Animated figures- be more intentional. Some technical issues came up with the loops on the animations and timing them out. While the number of figures does increase over time, it’s difficult to see once they start walking through the floors and ceiling. This happens fairly quickly- waiting for more time to pass and spacing out these occurrences would make it feel like less of an accident (although… to be honest… it definitely was an accident).

  • The line renderer on the objects seems to glitch a lot as they walk, which gets confusing and makes the figures difficult to see.

  • Audio. Needs to be louder overall- very hard to hear on the phone even with headphones.

Overall I was very happy to learn the process for Android development and get to work a bit with the Daydream. It was much easier than expected and very quick to prototype. I would like to revisit this in the future and make adjustments, though that may be more of a personal Christmas project. Learning about the sightlines is going to be especially useful for the 10 Week Ruby Bridges iteration that Tori and I are currently getting started - but the journey for that so far deserves its own post. More soon!

Weeks 1-2: Sightlines, Airports, and Liminal Spaces

Year 2 is now off and running! 

Most of my energy over the past three weeks has been focused on the first project of the year: a five-week team effort for 6400 - the same project that produced the MoCap Music Video last year.

Concept

Our team was told the due date and to make something... very open to interpretation. My team includes two 2nd year DAIM students (Taylor Olsen, Leah Coleman) and one first year student (Sara Caudill). We eventually settled on creating a VR experience based on liminal spaces, specifically taking place in an airport, with the viewer losing time and identity as the experience goes on.

Liminal spaces are typically said to be spaces of transition, or "in-between" - a threshold. Common examples are school hallways on the weekend, elevators, or truck stops. Time can feel distorted, reality a bit altered, and boundaries begin to diminish. They serve as a place of transition - the destination is usually before or after them. The sense of prolonged waiting and distortion of reality is what we intend to recreate in this experience. By placing the viewer at the gate of an airport and letting them observe the altered effects around them, such as compressed/expanded time, we will bring the viewer into our own liminal space.

All of our team members had an interest in working with VR and with games, so I looked for environmental examples of what might be considered a liminal space already existing within a game. The Stanley Parable sets the player in an office building by themselves, seemingly at night, which contributes to the odd feeling of the game - you never see another human, and the goal is to escape. The presence of a narrator and instructions (despite the player choosing whether or not to follow them) prevents this from being a true liminal space, but I feel that the setting itself creates a strong nod in that direction.

Silent Hills P.T. is much closer to the feeling we're going for. The player constantly traverses the same hallway, though with each pass the hallway is slightly altered. There is minimal player identity, the passage of time is uncertain, and the player is constantly in a state of transition looking for the end.

Sightline: The Chair became an important source material for us. Developed early on for the Oculus, it seats the player in a chair to look around at their environment - one that constantly morphs and shifts around them. The key point is that these changes occur when the player looks away, and are already in place when the player looks back. This is an element I very much want to incorporate into our game. It really messes with the flow of time and creates a surreal feeling. Importantly, the player cannot interact with any of the objects around them - they must simply sit and wait for the changes to occur.

Progress

From there, we met as a team and began planning out the experience- interactions, the layout of the airport, how time would pass, what events would be happening. An asset list was formed and placed online, as well as a schedule for development. We wanted to make sure everyone on the team was learning new skills they were interested in, and teaching others the skills that they have. Sara and Leah focused on visual and concept development- the color keys, the rhythm of the experience, etc. Taylor worked on finding reference photos, and began modeling the 3D assets we would need for the airport. 

I spent the last few days focused on modeling the airport environment and beginning some of the interaction work in Unity. Based on the layout we created in the team meeting, I was able to finish the airport shell and start working on some of the other environmental assets - a gate desk, a vending machine, gate doors.

I brought those models into Unity to start working on developing some code. Taylor made the chairs for the gate, so I placed those and got a basic setup going. 

090418_AirportEnvUnity1.jpg
090418_AirportEnvUnity3.jpg

I began working on some audio scripts to randomly generate background noise and events - an assistance cart beeping by, announcements being made, and planes taking off and landing. That's about done, and I'll be posting an update video soon with the progress made.
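
The gist of those scripts is just a loop that waits a random interval and then plays a random one-shot. Here's a stripped-down sketch; the clip pool and delay range are placeholders, not our actual settings.

```csharp
using System.Collections;
using UnityEngine;

// Stripped-down ambient event generator: wait a random interval, then play a
// random one-shot (cart beeping, announcement, plane taking off) from a pool.
[RequireComponent(typeof(AudioSource))]
public class RandomAirportAmbience : MonoBehaviour
{
    public AudioClip[] eventClips;                       // placeholder pool of airport one-shots
    public Vector2 delayRange = new Vector2(8f, 25f);    // seconds between events (placeholder)

    AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        StartCoroutine(EventLoop());
    }

    IEnumerator EventLoop()
    {
        while (true)
        {
            yield return new WaitForSeconds(Random.Range(delayRange.x, delayRange.y));
            if (eventClips.Length > 0)
                source.PlayOneShot(eventClips[Random.Range(0, eventClips.Length)]);
        }
    }
}
```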

The current problem I'm having is the script that changes items when the viewer isn't looking at them. I found GeometryUtility.TestPlanesAABB in the scripting API, which takes planes formed from the camera's frustum and calculates whether an object's bounding box is between them or colliding with them - in other words, is the object somewhere the player can see it? I can successfully determine that an object is present, but when it is deactivated to switch to another GameObject, the first object is still detected and causes issues with the script I've written to swap it with another. I got it to work with two objects, but three is revealing the issue in full force. I may try instantiating objects next instead of just activating them - either way, this test has taught me a lot about how Unity determines what's "visible".
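
For context, the visibility check itself looks roughly like this - a simplified sketch of the frustum-planes version described above, not the actual script (the swap-once-per-glance logic is my own simplification to keep the example short).

```csharp
using UnityEngine;

// Sightline sketch using the frustum-planes test: each frame, check whether
// the current object's bounds intersect the camera frustum, and only swap to
// the next variant after the player has seen it and then looked away.
// Note: this test does not account for objects blocking the view (occlusion).
public class SwapWhenUnseen : MonoBehaviour
{
    public Camera playerCamera;
    public Renderer[] variants;   // e.g. the different seated-figure states
    int current;
    bool seenSinceSwap;

    void Update()
    {
        Plane[] frustum = GeometryUtility.CalculateFrustumPlanes(playerCamera);
        bool visible = GeometryUtility.TestPlanesAABB(frustum, variants[current].bounds);

        if (visible)
        {
            seenSinceSwap = true;
        }
        else if (seenSinceSwap)
        {
            // The player looked at it and has now looked away: swap once,
            // so the change is never seen happening.
            variants[current].gameObject.SetActive(false);
            current = (current + 1) % variants.Length;
            variants[current].gameObject.SetActive(true);
            seenSinceSwap = false;
        }
    }
}
```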

Next? 

This weekend, I'll be continuing to work on the sightline script for the camera and hopefully finding a solution. I also have several other environmental assets to model, and will begin texturing the ones I've already completed. On Sunday I plan on posting a progress video of the application as is. We still haven't decided whether to use the Vive or attempt mobile VR, something that I've been especially interested in. Alan suggested letting the project develop organically and then making a decision near the end. I'm leaning towards the Vive currently for the familiarity and extra power, but on mobile the player is forced to be stationary and lacks control. More thoughts on that soon.

First Year Wrap and Ruby Bridges: 6 Week Conclusion

In the first week of May, Tori and I completed our work on the 6 Week Prototype for the Ruby Bridges Project. It was presented, and then folded into a much larger presentation about our progress throughout the first year of our MFA program. As classes are starting back up, I wanted to make a post summarizing my journey over last year, the results of Ruby Bridges, and my current starting point. 

At the beginning of the year, I focused my efforts on the interactions between game design, education, and virtual reality. For me, this meant a lot of exploration and a technical education in these areas. 

My early projects focused on improving my skills in Unity. I worked on team projects for the first time in Computer Game I and gained a real introduction to game design and game thinking. This also allowed me to develop my own workflow and organization in Unity. While exploring my personal workflow, I was interested in potentially using VR with Google Cardboard to organize materials and form connections across the scope of a project. The result was the MindMap project, which was a great introduction to mobile development and Google Cardboard but provided limited usefulness for my work. It was tested using materials from my Hurricane Preparedness Project, a 10 week prototype developed to provide virtual disaster training for those in areas threatened by hurricanes. This was my first time using Unity for VR and developing with the HTC Vive. The topics explored, including player awareness in VR, organization of emotional content, and player movement in a game space, would eventually become the basis of my work on the Ruby Bridges Project. 

There has been a clear evolution in my own design process and focus, mainly a shift from visual organization to functional prototyping. Earlier in the year I still had a heavy focus on visual elements and art assets, though in the game design projects that experience suffered because the games were never fully functional. By the spring, I had shifted completely into prototyping and non-art assets. All of these projects challenged my process and boosted my technical skills, and I then brought those technical developments into a narrative context. 

EDUCATIONAL AND EMOTIONAL STORYTELLING THROUGH IMMERSIVE DIGITAL APPLICATIONS

In the Spring, Tori Campbell and I began working on our concept for the Ruby Bridges Project. Working together, we would like to use motion capture and virtual reality to explore immersive and interactive storytelling. Ultimately, we are examining how these concepts can be used to change audience perception of the narratives and of themselves. Ruby Bridges' experience on her first day of school is the narrative we've chosen to focus on. 

Ruby was one of five African-American girls to be integrated into an all-white school in New Orleans, LA in 1960. She was the only one of those girls to attend William Frantz Elementary School at 6 years old, told only that she would be attending a new school and to behave herself. That morning, four U.S. Federal Marshals escorted her to her new school. Mobs surrounded the front of the school and the sidewalks, protesting the desegregation of schools by shouting at Ruby, threatening her, and showing black baby dolls in coffins. 

This scene outside the front of the school became our prototype in VR. 

The Four Week Prototype focused on developing technical skills that we would need moving forward, specifically navigation, menu/UI, and animation controls. In doing so, I learned not just how to make these functions work, but the pros and cons of each.  This allowed me to make more educated decisions in the design of our Six Week Prototype. We gathered motion capture data from actors to work with the data in a VR space, and to help experiment with controlling the animations. 

My goal with the Six Week Prototype was to create a fully functional framework for the experience, something with a beginning, middle, and end. I created a main menu, a narrative transition into the Prologue, the Prologue scene itself where the user sees from Ruby's perspective in her avatar, and then an interactive scene where the user can examine the environment from a third person view. This view would provide background information and historical context, and let the user drop into the scene from another perspective. Where the broad goal of the Four Week Prototype was technical development, this project examined different levels of user control, their effects on the experience of the scene, and how to create an experience that flows smoothly from scene to scene even with those different levels of control. 

This prototype became a great first step into a much larger project. We learned a lot about creating narrative in VR, and through demonstrations with an Open House audience we discovered just how much impact a simple scene with basic elements can have on the viewer. 

THEORY

Broadly, my thread going into the year was how virtual reality can be combined with game design for educational purposes. Through these experiences, I was able to refine that to how immersion and environmental interaction along with game design can be used to form an educational narrative experience. 

Tori and I are focusing on different but connected elements while working on this project. I am working specifically with theories concerning self-perception, learning, and gamification. Structured together, these form a framework for my research. Self-perception theory is connected through the concept of perspective-taking, representing the user and how they reflect back on themselves and their experiences. Gamification represents the interaction the user has with their environment- it provides the virtual framework for the experience using game design concepts. Learning theory places the whole experience in the context of education and the "big picture". 

WHAT'S NEXT? 

Over the next year, I will be continuing to work with Tori on the next stages of the Ruby Bridges Project. While we are still discussing our next steps, I would like to explore more environment building and structures for the experience. The Six Week Prototype was a great learning experience in setting up a narrative flow and working through different levels of interactivity and user experience, but there are still so many other directions to push forward with it: having the crowd react to Ruby by throwing objects, yelling specifically at her, or even keeping all of their eyes constantly fixed on her to further increase the menacing presence; playing with perspective-taking so users can switch back and forth between different members of a scene, and determining whether that ability contributes positively to the experience; and pushing other concepts of gamification, such as giving users a task while they are in the scene to highlight aspects of the environment (the closeness of the crowd, the size of Ruby, etc.). Manipulating these environmental aspects will likely be the next step for me. 

I will continue to research the theoretical framework highlighted above and will likely be making modifications as I start to delve more into these topics. My classes begin next week, and as part of that I will be taking Psychobiology of Learning and Memory- this will likely have an impact on the theoretical framework, but I'm very excited to take what we learn in there and potentially apply it to the experiences.

On the technical side, I will be conducting small-scale rapid prototypes to test these concepts as main development on Ruby Bridges continues. Furthermore, I would like to experiment with mobile development on the side to see if a similar experience to our prototype could be offered with various mobile technologies, such as Google Cardboard or GearVR, perhaps even the Oculus Go. 

For now, I'll be organizing my research and getting ready to hit the ground running. 

1000 Ways How Not To Control Cameras

This week, plus three more days, is the final stretch of development for the Ruby Bridges 6 Week Prototype, and last week I outlined the functions I wanted to implement in this week's build. 

The good news is, I learned a lot about how the SteamVR camera likes to operate. The bad news is, it took me all week to learn these lessons and adjust our prototype accordingly. 

Debug list from 04/21/18

Most of the issues I ran into had to do with moving the camera around the scene. The third person documentary view that I'm building initially included a zoom function. I went through a couple of different methods to get it working: sliders, touchpad walking, scaling the environment. I finally got it working with a UI slider, but I discovered that the effect was extremely jarring and didn't really add anything for the user- if they're going to be able to take on perspectives within the scene itself, the zoom function becomes redundant. I have decided to fix the camera at one point away from the environment and allow the user to rotate the scene manually to examine the tooltips. 
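A minimal sketch of the manual rotation idea, assuming the whole diorama sits under a single environment root and the slider is wired up in the inspector (the names here are placeholders):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: map a UI slider (0-1) to a full rotation of the environment root.
public class SceneRotator : MonoBehaviour
{
    public Transform environmentRoot; // parent of the whole diorama
    public Slider rotationSlider;     // assigned in the inspector

    void Start()
    {
        rotationSlider.onValueChanged.AddListener(OnSliderChanged);
    }

    void OnSliderChanged(float value)
    {
        // Rotate around the vertical axis; 0-1 maps to 0-360 degrees.
        environmentRoot.rotation = Quaternion.Euler(0f, value * 360f, 0f);
    }
}
```

The same value could just as easily come from the controller's touchpad instead of a slider; the important part is that the camera stays fixed and only the environment turns.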

The other issue was locking the camera to Ruby's head. I could parent the camera to her motion without a problem, but the height of the user would still influence the Y value of the camera transform. I wasn't able to find a way to lock this even with research (although some online forums mention that locking head transforms in VR is extremely disorienting). To solve this problem for now, users will complete the experience in a seated position. This should have the added benefit of reducing the motion sickness caused by the motion of Ruby's walk. 
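For my own notes, the counter-offset idea I keep coming back to looks roughly like the sketch below- untested, and given the forum warnings about locked head transforms I'm sticking with the seated setup for now. The object names are assumptions:

```csharp
using UnityEngine;

// Sketch of a height counter-offset (untested; locked head transforms can be
// disorienting, so we're using a seated setup instead for now).
public class LockRigHeight : MonoBehaviour
{
    public Transform cameraRig; // SteamVR [CameraRig] / play area
    public Transform hmd;       // the eye camera inside the rig
    public Transform rubyHead;  // target head position on Ruby's avatar

    void LateUpdate()
    {
        // Shift the rig so that, whatever the user's real height,
        // the headset ends up at Ruby's head height after her animation runs.
        float heightError = hmd.position.y - rubyHead.position.y;
        cameraRig.position -= new Vector3(0f, heightError, 0f);
    }
}
```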

On Saturday, I had a debug day and tried to work through all the issues that came up from testing on the Vive instead of the simulator. This included getting the menu buttons working properly, fixing the pointers, and dealing with disappearing controllers. The controller issue has to do with how I parent the camera to Ruby's head- they still function, but you can't see them. Still working on a solution for that. I also found that the environment itself was not centered and had tons of weird offsets, so I started a fresh scene with the environment in the right place- that solved a lot of the camera transform issues. 

Screenshot of current camera view for interactive scene.

NEXT

With the camera issues relatively sorted, I have to place the object tooltips into the scene and place the background/historical information on them. These will also include the buttons for perspective view in each part of the scene. Tori worked on creating a crowd using the new character models she made and did a great job offsetting the animations, so I'll be placing those into both scenes as well and cleaning up the overall function. 

Interactive Building

Last week of development! After taking into account all our feedback, Tori and I really had to think about how to round out this project. 

Tori will be working with the character animations and models, fixing some of the technical issues like locking the feet to the ground and replacing the robotic models with the avatars she created. While the current animations were still effective, the unedited animations and floating characters do crack the immersion. 

On my end, I had some technical issues that I wanted to fix too: locking the Camera to Ruby, crowd simulation with offset animations, and editing the audio to be more cohesive. But along with that I wanted to round out the experience. We put the interactive level aside to focus on putting together the prologue and receiving feedback on that experience. 
For the last bit of this project, I'll also be putting together a basic prototype scene to explore a 3rd person documentary view. The user will be able to rotate the scene and zoom closer, then use the tooltips to gain historical background. Within these tooltips will be a button that, upon clicking, will allow the user to join the scene on the ground, much like Google Street View. It's not a fully fleshed out experience, but it will allow us to broadly explore some of the concepts we discussed back at the beginning of the project about how to convey that information. This is a good starting point- users will still have control of their experience, and the information will be there for them to uncover at their own pace in a variety of ways. Meanwhile, we still have the perspective-taking ability there to continue the experience the user had as Ruby or other members of the scene. 

The scans below from my sketchbook show some of the notes taken while discussing how to set up this level. 

I did take some inspiration from Assassin's Creed, as discussed last week. The series itself has always included a wealth of historical information embedded within menus and the occasional quest. However, as a player, you have to go searching for this information, and the reveal tends to be a wall of text with the occasional image. It's underwhelming after running around a richly animated recreation of Rome or Havana. The new Discovery Mode provides text, images, audio, and video from both the game and reality. I found myself much more excited to experience a multi-modal presentation rather than reading text block after text block. This much text (as shown in the images below) really doesn't work well in VR- it's difficult to read the panels unless they take up the full screen and overall the immersion is just lost. I would rather focus on using the environment to explore and convey information rather than relying on text. 

In a similar vein, the newest installations of Tomb Raider include historical information with artifacts that players collect throughout the course of the game. Removed from the world gameplay, a screen comes up and players can examine 3D recreations of these items with a basic description of what it is in the context of the game/world. Granted, it's usually only a sentence or two, but not something really required by the game. It allows players to view the item up close and learn a little bit more about the culture of the world around them without overwhelming with too much detail. It's another way for players to experience this information. I thought about this when considering the 3D manipulation of the scene and engaging the user in the content. 

Another great example of this came from one of our readings (experiences?) for class this week. Refugee Republic, an interactive documentary, takes the viewer on a journey through a Syrian refugee camp in Iraq by scrolling through a panoramic illustration depicting different parts of life in the camp. The media often presents an inaccurate view of refugee camps, and the team who created it set out to create a more truthful image of life in this camp. While the landscape itself is mostly drawings, as the user scrolls along it transitions into film, photography, and text. The result is incredibly dynamic and provides a lot of depth to the experience, as each medium is used for its strengths. It plays to every sense, and that's what we're trying to do with this interactive level. I began thinking about how to choose what media and what information I present in this 3rd person view, and what media might work best from the perspective-taking option. I'm going to start researching more experiences and games that provide a similar media overlap. 

With this in mind, I was able to make decent progress on getting the level set up this week.

  • Prologue: the camera is finally locked to Ruby. All users will experience the walk at her height, without accidentally walking away from her body. In the interactive level, I'm contemplating giving the user the ability to walk around as Ruby without her set animation. We've discussed multiple times how impactful the scene could be if the user sees it all from Ruby's height and explores at their own pace. I don't think we'll have the time to get that in this time around, but it's a future feature to consider.

  • Created the new scene with a third person camera. Began implementing camera movement and manipulation functions, such as zooming in with a UI slider (harder than anticipated) and working on rotating the environment using the pointer from the controller.

NEXT

This week is going to be straight work on this level. Getting those features in will mostly mean shifting the camera around, and once I have the process down it should go fairly quickly. It will also mean compiling Tori's work and mine into a final build and debugging as much as possible. I have yet to test progress on the new level in the Vive, so I'll be doing that tomorrow and every other day until it's due, just to make sure the changes work in the headset as well as the simulator. 


2 Weeks In: Crowd Building and Playtesting

Over the last two weeks, all of my efforts for the Ruby Bridges project have been focused on the Prologue experience. This included creating a crowd that surrounds the user, adequate audio, attaching the camera to a moving Ruby, bringing all of these animations into the same scene together, and a smooth transition from the introductory sequence to the actual experience. 

Troubleshooting the Prototype before the Open House. April 3, 2018

The crowd building was a real technical challenge for us, and we still haven't completely nailed it down. For playtesting purposes we took the captured data we had for four figures and duplicated it into a crowd, then instantiated that crowd once the scene started. Eventually I would like to use a crowd simulation to offset the animations of the figures- looking at the crowd as it is, it's very easy to spot patterns where we duplicated groups and where figures are floating above the ground plane. It would also help us create a more faithful representation of the scene; I looked at images taken from Ruby's first few days of school to gauge where the crowd harassed her along the sidewalk and how close they were to her. Based on these, the crowd was most aggressive on the sidewalk around the school but was kept away from the front doors, as the school had a fence all the way around it. 
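Even before a proper crowd simulation, one small fix would be to offset the duplicated animations so the copies don't move in sync. A minimal sketch of that idea, assuming each crowd member has an Animator with a single looping state ("ProtestLoop" is a placeholder name, not our actual animation):

```csharp
using UnityEngine;

// Sketch: spawn duplicated crowd figures with randomized animation offsets.
public class CrowdSpawner : MonoBehaviour
{
    public GameObject[] crowdPrefabs; // captured protester animations
    public Transform[] spawnPoints;   // placed along the sidewalk

    void Start()
    {
        foreach (Transform point in spawnPoints)
        {
            GameObject prefab = crowdPrefabs[Random.Range(0, crowdPrefabs.Length)];
            GameObject figure = Instantiate(prefab, point.position, point.rotation);

            // Start each copy's loop at a random point so duplicates are harder to spot.
            Animator anim = figure.GetComponent<Animator>();
            if (anim != null)
            {
                anim.Play("ProtestLoop", 0, Random.value);
            }
        }
    }
}
```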

For the transition into this scene, we wanted to give the user context for where they were and whose shoes they would be standing in. Upon starting the experience, the user is in an almost completely dark room listening to audio of Ruby talking about her first day of school from her perspective. Text cues come up with Ruby's name and the interview we're pulling the audio from, followed by the school, the date of the event, and the location as she's talking. The scene then fades and the user reappears in front of the school.

While listening to interviews with Ruby talking about her first day, I noticed that some of these podcasts and interviews included audio of the crowds yelling at her. I was able to cut up this audio and loop the crowd yelling into the scene, along with some stock effects of neighborhood environmental noise. Tori recorded some of our classmates yelling specific phrases, such as "We don't want you here!" and "We're FOR segregation!", to add in amongst the crowd. With the volume all the way up, this audio can be very chaotic and confusing. After a few moments of just standing there with the headset on, I found it easy to lose track of where I was. The audio completely obscures anything in the outside world, and the added chants ground the user in the event and the time period. 

On Friday, Tori and I were able to demonstrate the current version of our prototype at the ACCAD Open House. Other than the two of us and the occasional classmate, we haven't been receiving much feedback from sources outside of the Design world. Here we were able to get fantastic feedback from a wide variety of people of all ages, races, and levels of experience with virtual reality. The topic itself raised a lot of interest with those walking by, and after a quick background on who Ruby was and our intentions with the project, most were eager to see what we had. 

After taking off the headset, we had a table set up with the children's book and post-it notes for guests to provide written feedback for us. We only had two written notes, but most of the guests asked questions and gave us their impressions afterwards.

  • One of the most frequent comments we received was "wow, it feels like you're really there! It's very immersive." I do take that with a grain of salt, especially as many of the guests were experiencing virtual reality for the first time. However, the fact that we were able to gain that reaction from so many of those who experienced a prototype with primitive forms and non-recognizable humanoid figures was very promising. Guests gave different reasons for feeling this way- the audio being powerful and negative, the crowd surrounding the user, seeing the crowd animated in VR.

  • Some guests cited brief dizziness during the movement as Ruby walks up the sidewalk. I experienced this myself when testing the prototype before the Open House. The fact that it was significant enough to mention after only a 3 second motion is important, as we're going to be putting a longer walk and animation into the scene in the future. After the motion stopped, users adjusted to the world. Part of this could be resolved by having guests sit for the experience- it can be disorienting to be standing while the character is moving. Though if we continue with the interactive portion of the experience, guests would ideally be standing and moving around. I have seen other solutions in VR ports of games like Skyrim, where the periphery of the screen is blurred while the player is moving and the blur fades once the player has stopped. This may be a good area to explore when we have longer animated sequences in the scene (a rough sketch of the idea follows below).
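A rough sketch of that comfort-vignette idea, assuming a radial vignette Image on a camera-attached canvas that fades in whenever the rig is moving (everything here is hypothetical, not something we've built yet):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of a comfort vignette: fade in a peripheral overlay while the rig is moving.
public class ComfortVignette : MonoBehaviour
{
    public Transform cameraRig; // the moving rig (or Ruby's root during the walk)
    public Image vignette;      // dark ring with a transparent center, on a camera-attached canvas
    public float fadeSpeed = 4f;

    Vector3 lastPosition;

    void Start()
    {
        lastPosition = cameraRig.position;
    }

    void Update()
    {
        bool moving = (cameraRig.position - lastPosition).sqrMagnitude > 0.0001f;
        lastPosition = cameraRig.position;

        // Fade toward darkened edges while moving, back to clear when stopped.
        float target = moving ? 1f : 0f;
        Color c = vignette.color;
        c.a = Mathf.MoveTowards(c.a, target, fadeSpeed * Time.deltaTime);
        vignette.color = c;
    }
}
```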

I had several conversations with guests who are instructors or educators, and all mentioned seeing the uses for this in the classroom.

  • One guest asked me if I would be working with educators in the development of this experience. Ideally, yes- this experience is meant to be implemented in the classroom, not to replace the classroom itself. It's very far in the future, but gaining feedback from instructors as to how they could best utilize this would be absolutely necessary.

  • Several guests commented on whether the experience would be appropriate for elementary-age students, after asking what our target audience is. To be honest, there is very little research on how kids those ages react to virtual reality. There have been studies that suggest kids ages 6-18 perceive virtual experiences to be much more "real" than adults (as discussed and referenced here), and that children ages 6-8 can create false memories after experiencing a virtual event (source). While we want to stay faithful to Ruby's account, Tori and I will have to discuss the implications of how "real" of an experience we create.

  • Following up on that question, another guest asked whether we had considered leaving the avatars of the characters as these robotic figures rather than assigning them race. She was interested in how the user might project onto these figures if a race was not assigned, and how that would change the experience for the user. I understand her point, and this question is being addressed in several studies dealing with racial bias and stereotyping- in that realm, leaving the user "colorblind" may be an interesting area to study. One such study involves changing the race of the user's avatar and observing how users of different races demonstrated bias when the avatar was different from their own race (finding it reduced explicit bias, with no impact on implicit bias- an interesting study to consider when we're having users experience Ruby's walk. Source). However, our purpose is to craft a world similar to the one Ruby experienced, to promote empathy, understanding, and connection between the student and Ruby. Race is a vital part of her story, and understanding that this was just one of many moments during this time when she encountered aggressive racism is essential to this experience.

The question of interaction was addressed when discussing the scene where the user would be able to explore the world. Guests asked what kinds of interactions they might experience- would the crowd react to their presence? Would they be able to move around the scene? One guest suggested using gaze-tracking to trigger the crowd into throwing things at you when walking around the scene. In past critiques, it was suggested that having the crowd's heads all turn to follow you no matter where you are would certainly be intimidating (or even menacing).

It really comes down to what we want the user to gain from that freedom to explore. Initially it was to provide background knowledge of the event and learn more about the long-term effects/major components in the scene- how Louisiana fought her attendance, how the community reacted, what the rest of Ruby's education was like. The major question is how to go about delivering this information. Looking at perspective-taking, the user could embody different characters in the scene and listen to their internal monologue as a way of understanding different points of view. Or the user could walk around as their own avatar objectively, as if at a museum.

An Open House guest gave me a great case study for this "virtual museum" experience created by Assassin's Creed Origins. The game takes place in Ancient Egypt, and your character is part of a vast open-world environment. Ubisoft recently released a Discovery Mode for the game, featuring guided tours through landmarks and buildings. The player can run around the landscape at will as their own character. When a tour is activated, a guided trail is illuminated along with interactive checkpoints that features a narrator and extra written information/artwork added into a menu archive for later inspection. 

This seems to be a great way to keep player autonomy and the general elements of gamification consistent in the game while still conveying the relevant information. I own the game and have yet to explore Discovery Mode myself, but I will be doing so this week and discussing ways to move forward with Tori. 

NEXT

Tori and I will be meeting this week to discuss our next steps and compiling the feedback received from the Open House. With the current course, we will likely be working on the crowd simulation and the user animation for Ruby. The current walk is very short, and we will need to work on the animation cycles (and creating an idle state) so the characters do not just stop after a three second experience. We will also be testing out model applications for the crowd and adjusting the audio. 

Project Framework and Flow

Tori and I discussed the notes I made last week on the flow of the project and finalized our plans for the next six weeks of development. 

Notes from planning out the experience structure.

We sat down together and discussed the flow of the experience that I had outlined. The first thing the user encounters is a start menu, with start, quit, and options buttons. Upon pressing start, there's a transition in which the scene fades to black and displays the date and time to set the scene. Maria's feedback here was to place the user more firmly in the experience by providing additional background, so we will build on that and present more information about the story during the transition through audio and images. 

After the transition, the scene fades back in with the user as Ruby. This is a passive experience with no navigational control available. The user will start at the sidewalk and experience Ruby's walk up to the door with the teacher. We debated whether to give the user menu control, or even the ability to exit the experience- functionally, I think it would be detrimental and difficult if the user had to force quit the experience in order to restart when something goes wrong. We'll be getting Alan's opinion on this and other questions regarding gamified elements later this week. 

From there, we transition into the Interactive Mode, where the user respawns at a neutral placement on the map. They are not part of a particular group, but initially respawn as an outside, impartial observer. The scene with Ruby has restarted, and they will view the walk they just took from other areas in the scene. The user has full navigational abilities. The animation will begin as they collect icons, each prompting the user with a question, a fact, or an experience to witness. The idea is that the user moves along with Ruby but avoids being constricted into linear gameplay, since they can pursue the icons in whichever order they desire. They will also have the ability to replay each checkpoint from a secondary menu. 

Further questions we asked had to do with the avatar of the player. When starting the Prologue, the player sees from Ruby's perspective and embodies her avatar. After that, what would the player's avatar look like? Would they even have one? I've been considering these questions alongside the Proteus effect discussed last week, and thinking about how this visualization would change the experience for the user.

Following more critique from Maria, we're moving forward with crafting the Prologue experience first. This week I did some research on the area and sketched a rough map of what our prototype will look like. William Frantz Elementary School has been restored as a historical site, and though it has a new academic center attached to it, the original building and neighborhood have changed very little from 1960 to now. I tried to keep the general shape of the building and placement of nearby streets/houses historically accurate for the prototype. 

Sketch of map for 6 week prototype.

I began working on the framework for the experience in Unity. I built the general environment, set up the player camera/controllers using SteamVR and VRTK, and started putting together a functional menu system to transition between each scene.  
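The scene-switching part of the menu is simple enough that a sketch captures it: a button method that loads the next scene by name. "Prologue" here is a stand-in for whatever we end up naming the scene, and the scene has to be added to Build Settings.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the menu-to-scene wiring; scene names are placeholders.
public class MenuSceneLoader : MonoBehaviour
{
    // Hooked up to the Start button's OnClick event in the inspector.
    public void LoadPrologue()
    {
        SceneManager.LoadScene("Prologue"); // the scene must be listed in Build Settings
    }

    // Hooked up to the Quit button's OnClick event.
    public void QuitExperience()
    {
        Application.Quit();
    }
}
```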

Tori is going to be working on adding the animations captured in the last prototype to the scene, and choreographing their interaction. Once they're added in, I'll be making sure the cameras attach to Ruby's character for the Prologue and work on the animation controls for the interaction scene. For now, our priority is going to be completing the prologue experience and getting those elements functional. 

The theoretical framework for this project has been a work in progress, but I've been narrowing down the key theories and concepts we're working with. Most of what I've been examining comes from self-perception theory, learning theory, and gamification. When presenting for critique, the feedback I received was to be less specific with the framework. I have plenty of information on the psychological aspects and even some on game theory, but very little on virtual reality itself. I'll be doing more research this week to fill those gaps. Several professors and classmates have told me to read Art as Education by John Dewey, so I'll be adding that to my reading list as well. 

Current breakdown of theoretical framework.

NEXT: 

  • Finishing up framework for the whole project build.

  • Functional Ruby experience in the Prologue

  • Transitions between scenes started.

  • Main menu complete with options.

  • Research

Phase 2: Continuing the Prototype

After completing our 4 week project, Tori and I had a talk about where we would go with the next 6 weeks to advance this project. We decided to continue in the direction outlined in my last post- creating the first steps of a vertical slice from the story of Ruby Bridges- Tori focusing on organizing the animations and drama, and myself focused on creating a full build in Unity. 

Our four week prototype had a loose menu structure that I created to make it easier for us to test out different functions and for me to understand how they work. Those were purely technical exercises. This time, I will be creating a prototype that contains a full narrative. The user will begin the experience as Ruby, with minimal control over their surroundings. From there, the scene will restart and the user will gain the ability to navigate the environment. There will be interactable objects to collect and examine, containing background information from the time period and location. While we want to avoid creating a full-fledged game with this experience, I will be using game design elements to encourage exploration of the environment so students will actually find this information. 

We took into consideration the critique that we received from our initial prototype. Our objectives were reframed to focus on the story and less on the technology, and we will continue to focus on function and interaction instead of aesthetic appearance. These are questions we can begin examining after this project. Our research has already begun expanding to include psychology, learning theory, and empathy. 

Proposed work schedule for 6 Week Prototype.

Above is the working schedule I've created for my part of the prototype. Tori's schedule lines up with mine so that we're both generally working at the same pace and form of development. 

I began working on some of the general layout for our project, considering the flow of the experience and what functions would be available in each. While this is still a broad layout, it's a sketch of the experience from the start screen all the way to the end of interaction. Tori and I will be meeting this week to finalize this plan and discuss details. I will also be starting the general layout of the experience, with a blocked in environment and basic navigation for the user. 

Image of notes on the layout of the experience.

I also continued reading some of the research gathered over the last four weeks: 

These readings covered a wide range of topics. Research on the effects of virtual immersion on younger children is nearly nonexistent, and that is mentioned several times throughout these papers. A few of them had to do with digital representations and how users' behavior changes when their avatar reflects a different identity. Children develop self-recognition around the age of 3 or 4, and these connections grow with executive functions. It was also shown that children between the ages of 6-18 report higher levels of realness in virtual environments than adults, and that children have developed false memories from virtual reality experiences, believing events in the virtual environment actually occurred. I was also introduced to the Proteus effect, which suggests that changing self-representations in VR has an impact on how a person behaves in a virtual environment. By placing a student in Ruby's avatar, we may also shift their judgments of Ruby toward situational ones and create an increased overlap between the student and the character. When we're thinking about placing a student in Ruby Bridges' shoes and considering aspects such as the aesthetic appearance of the environment and the interaction between Ruby and the other characters, we have to remember that this experience may be much more intense for younger students, who experience a higher level of environmental immersion than adults.


Over Spring Break I spent my time at the Creating Reality Hackathon in Los Angeles, CA, where I got to collaborate with some great people in the AR industry and work with the Microsoft Hololens for two days. Our group worked on a social AR tabletop game platform called ARena, using chess as a sample project. While we were not successful, it was a great lesson in AR development and approach. I also gained exposure to other headsets and devices from the workshops and sponsors- the Mira headset runs from a phone placed inside it, and there are a variety of Mixed Reality headsets that use the same Microsoft toolkit as the Hololens. 

Workshop showing the Mixed Reality Toolkit with the Hololens.

While the Hackathon was a great technical and collaborative experience, it also opened up other possibilities for our current project in the long run. Part of our research is discovering what virtual reality itself brings to this learning experience beyond just being cool or fun to experience. We already know that this experience is not meant to replace the reading of the book or any in-class lecture- it provides another medium for students to experience and understand this story. After spending the week working and thinking with AR, I was thinking about how we can better bridge that gap between the physical experience in the classroom and the virtual experience. Using an AR to VR transition that interacts with the physical book would be an interesting concept to explore related to this.

The technology doesn't quite seem to be there yet- there's no headset out there that can switch from AR to fully immersive VR. But Vuforia seems to offer this functionality, and it could possibly be accomplished on a mobile device. There's even a demonstration recorded at the Vision Summit in 2016 showing this ability (at time 22:00), documentation on Vuforia's website about AR to VR in-game transitions, and a quick search on Youtube shows other proof-of-concept projects. This isn't a function that will be explored until much further down the line, and it may not be possible until the right technology exists, but it raises questions about how we can create that transition between the physical and the virtual. 

From some of the participants at this hackathon, I also learned about the Stanford Immersive Media Conference this May, which will feature talks by several of the authors of the papers we've been reading for research and others involved with the Stanford Virtual Human Interaction Lab. This is potentially a great way to interact with others who are doing work in the same areas of VR and AR, and discuss their research. 

4 Week Wrap Up

Framing the Project

Over the last four weeks, Tori and I have been working on a proof of concept utilizing VR and Motion Capture. The larger goal for our project is to form an educational VR experience that promotes the development of empathy in elementary school literature. We have chosen to examine this by developing an experience based on the story of Ruby Bridges, though for this four week prototype we focused our efforts on overcoming new technology and starting research.

Why is this a problem? 

Virtual reality has been rapidly developing in areas such as video games and medical research, but less work has been done in the areas of children, empathy, story, and how we can tie these factors into education. 

This creates a unique challenge for me as a designer. VR creates a new range of possibilities for interaction, especially when combined with game design concepts. Examining these interactions and their implications in the context of the story will be a large role for me in the development of this project, and something I began to do with the prototype. 

Results

Above is a recap of my progress over the last four weeks. It includes a demonstration of each of the navigation, UI, and animation controls, as well as a sample scene created from the point of view as Ruby. 

My personal goals were to examine the interaction possibilities in VR, and learn to develop them for use in the Vive. I specifically worked with navigation, UI menus/panels, and controlling our animations using these menu assets. In terms of this prototype, I did not know how to do any of this in Unity using VRTK or SteamVR. I had to learn how the technology worked and explore those potential options in order to move forward. 

In the grand scheme of the project, these factors are very important to how the user experiences the scene, and they allow us to start asking questions about their impact. For example, if we limit a user's ability to navigate based on the role they're playing in the scene, how will that impact their impression of that role? If, as Ruby, the user has no control over their environment, will this convey the lack of control a six year old would experience? And if the user is able to navigate, is it less immersive to present them with a menu of tasks or to create more freeform navigation using the pointer and no text? Tori and I don't have answers to these questions yet; our research will help point us toward the type of experience we want to create. 

Above are the slides from our presentation in class.

Feedback

We received great feedback over the last four weeks. Here are some of the thoughts given on our prototype.

  • Consider looking at Suzanne Keene, Dr. Bruce Perry, and Mary Jordan in your research.

    • Bruce gave us some great avenues to explore. Although I haven't gone too deep into their research yet, Keene does some interesting work with how multimedia setups can be used in a museum setting and Dr. Perry works with children's mental health and psychology. I need to speak with Bruce again about what work Mary Jordan does.

  • Drop back on your aesthetic choices for now. Focus on function, determine what you want the experience to be first.

    • In our next steps, I agree that the aesthetics are going to need to be considered but not fully developed until we know exactly what we want out of this project. Tori and I have discussed in the past the fine line we walk between making an impact and creating something that actually scares students. While it's certainly not a focus just yet, we know that we don't want the final experience to be too realistic in appearance but still authentic.

  • Can you cut this project back to a more accessible technology for the classroom?

    • A fair point. HTC Vives aren't cost-friendly and are unlikely to be found in any public school environment. Google Cardboards have already been implemented in classrooms and are much more feasible at scale. I think we can absolutely adapt what we've learned to a smaller scale and create something cross-platform; we just have to adjust for different mechanical experiences and technological issues. A conversation for the upcoming phases.

  • More research on developmental psychology and learning theory.

    • Absolutely. It's already on the list of topics we're gathering media on. I'll also be taking a class next semester on learning and memory, which will hopefully be bringing me more materials and context for this project.

  • What are the qualities of VR that would enable empathy or learning? Work towards answering this.

    • This ties in with choosing more accessible technology. Something we'll be working on answering - or at least guessing at - in the next couple of weeks.

  • Write your next steps with your objectives in mind- exploring empathy and storyness. Start bringing context into your interactions.

    • I am already adjusting my next steps to bring the narrative and the concepts we've been researching into these design decisions (discussed below).

Next Steps

Thinking about the prototype and the feedback we got, I would like to move forward and use these tools to develop a full narrative prototype. For me, this means mapping out a narrative framework for the experience with a beginning, middle, and end. This will include a more developed environment, with a blocked in scene for the school, neighborhood, and general placement of props as they would be in the actual location; a beginning scene with the user as Ruby, experiencing her walk up to the schoolhouse with protestors surrounding the sidewalk and sound in the scene this time; and the user forced to walk her path before being returned to the scene to navigate on their own. The final result would be a prototype of a vertical slice- not polished, not focused on visuals, but purely interaction and narrative. 

While still complex, this scene will not require learning as much technical skill as our four week proof of concept. Therefore I will be using that time to continue to catch up with research and focus on the design of the experience. This step feels like a good leap from our technical exploration to trying out what we learned in a narrative setting. 

Animated Control

The main focus of the past week was working out how to control animations in Unity, and create a menu for the users to do so within the scene. This was going to be the biggest challenge for me so far, as my experience with the Animator is minimal. Tori imported some of our first mocap data into the scene last week and I watched that process, so I understood generally how to navigate the windows and what basic settings would do. 

At first, I thought working with the Animator and swapping between animation states would be the best way to do this. Not the case at all, and I spent two hours on it before realizing I misunderstood how animation states work. 

Enter the Timeline, a feature of Unity that I had no idea existed until I started sifting through the tutorials on the Unity website. We weren't trying to blend different animations together, we just wanted to be able to pause, play, and restart whatever was playing. The Timeline allowed me to do this. I was able to access the Playable Director component on each game object in the Timeline, and use Unity Events with a UI Button to attach play/pause functions. 

Screenshot in Unity of Animation Controls added to Headset Menu.

This is basic functionality, and there are definitely still bugs that need to be worked out. While the animations do restart, they restart from the position that they were in when you pressed the button. I would just need to write a script to start them back from their original places at the beginning of the scene. For now, connecting the button with the action completed my goal. I spent Tuesday watching more of the Unity tutorials to understand the Animator a bit more and how it connects with the Timeline, so in the future editing animations will make more sense. 
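For reference, the control script boils down to something like the sketch below: Play and Pause wired to the menu buttons through Unity Events, plus the restart fix described above (resetting the director's time to 0 and evaluating before playing). The field names are placeholders.

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Sketch of the Timeline controls wired to the headset menu buttons.
public class TimelineControls : MonoBehaviour
{
    public PlayableDirector[] directors; // one Playable Director per animated character

    public void PlayAll()
    {
        foreach (PlayableDirector d in directors) d.Play();
    }

    public void PauseAll()
    {
        foreach (PlayableDirector d in directors) d.Pause();
    }

    public void RestartAll()
    {
        foreach (PlayableDirector d in directors)
        {
            // Jumping back to time 0 and evaluating snaps the characters
            // to their starting poses instead of wherever they currently are.
            d.time = 0;
            d.Evaluate();
            d.Play();
        }
    }
}
```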

I built the animation controls into the menu, and attached that menu to every other navigational scene. 

Unity screenshot: Active game with Pause button pulled up in the Inspector, showing the button function using the Playable Director.

On Thursday, Joe critiqued the level that we had and identified several issues that needed some work. Some were similar to those pointed out by Maggie.

  • Controllers. Left and Right controllers are getting confused, even by me. He suggested finding a way to distinguish them in-game, maybe by changing the colors of the models, to prevent incorrect inputs.

  • 2D scene has mislabeled Menu options. This has been fixed in the most recent version.

  • Text for the 2D map is too close to the periphery of the mask. It needs to be relocated closer to the center of the screen.

  • Correct laser activation. Though I did fix the laser switches, I need to make sure every level assigns menu control only to the Left controller. This is a matter of making sure all the correct options are selected in each level; I believe one or two got overlooked.

Joe also presented two ideas on how we might organize the whole experience based on the scenes he saw. 

  • Using the controllers to pick up books around the scene in order to gain information about it. Similar to picking up "notes" in popular games, but giving the same control as the interactive cubes in the scene. Promoting immersion and interactivity.

  • Starting the experience with the user as Ruby, no matter what. They must experience the scene as her first walking up to the school. Then afterwards, reloading the scene and giving control of the scene to the user via navigation and animation controls.

His second comment harmonized with other discussions I've had with Tori and Maria about how user control can be used to emphasize narrative elements. We've had suggestions about making the user Ruby's height as they navigate the scene, which would definitely create an impression. On the flip side, for a young student there would be a fine line between creating an impact and pushing too far into scaring the student. Something that we have to consider when we make these choices.

I did like the idea of starting the experience with a prologue-type event, and really pushing that lack of control on the user to encapsulate her experience. So I created a scene where the user follows an animated null object up the path to the school. When the user reaches the front steps there's a pause and the scene changes to one of our test navigation scenes. 

I ran into some new technical issues with this scene. For one, while the user is bound to the animation of the null object, they can still physically step away from the object and into the scene. I will need to lock the transforms for the user and force them to experience her route as it is. I also need to deactivate the teleportation, lasers, and menu controls. A non-user-related problem was the actual scene transition. When the player reaches the steps, a scene does load, but it is not the one that I selected. There's a trigger at the top of the steps, and I think something may be wrong with the colliders and tagging system, so I'll need to try a few other methods and make sure there isn't something else in the scene interfering. More debugging to come. 
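The trigger itself should reduce to something like the sketch below. The tag and scene name are placeholders, it assumes the player rig carries a collider (with a Rigidbody on one side of the pair) and a "Player" tag, and loading by an explicit name rather than a build index is one of the things I'll check while debugging.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the end-of-walk trigger at the top of the steps (placeholder names).
public class StepsTrigger : MonoBehaviour
{
    public string sceneToLoad = "NavigationTest"; // must be listed in Build Settings

    void OnTriggerEnter(Collider other)
    {
        // Only react to the player rig, not crowd characters brushing the collider.
        if (other.CompareTag("Player"))
        {
            SceneManager.LoadScene(sceneToLoad);
        }
    }
}
```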

Tori and I also sat down and started working out our research questions for this project. We figured that a good format would be to have one overall statement for the project, and then two other questions for ourselves that relate directly to the areas we're exploring. That conversation started two weeks ago with a discussion of our general directions, but we were able to distill them down into working statements. As it stands, my questions are: 

  • "How does the combination of motion capture and VR enhance the fostering of empathy in elementary school literature as an educational tool?" (Main research question)

    • "How does user interaction with environmental elements reinforce informational transfer?"

    • "What forms of navigation promote exploration of a narrative scene in a virtual environment?"

The phrasing of the statement is still being worked out and edited, but it feels like we're making progress defining the goals and direction of our project.

NEXT:

With these four weeks behind us, I'll be going through the scene and pulling together all of my process documentation. Tori and I will be presenting our four week proof of concept, and our presentation will include a full video showing all of our progress to date. It will also give us time to pause and discuss our plans moving forward, and consider everything that we've been learning and researching during this time. 

Debugging Movement

Last weekend, Tori and I worked with two actors in the motion capture lab to gather data for our scene. Zach and Shaun played four different roles in their captures: angry protestors, the police, Ruby's mom, and the teacher. The capture itself went fine- both actors were great and gave us a good variety of motion to work with and fill the scene. We also included a marker for Ruby, just for our reference later in the scene. 

This weekend, we took more recorded captures with Cynthia and Mckenna from the Theater department. We captured both of them at the same time acting together in similar roles- as mother and daughter, teacher and student, protestor and cop. Their captures should give us enough variety in motion that between last weekend and this weekend we'll be able to populate the scene. 

Still from motion capture data capture, 2/18/18. Cop confronting a protester.

While all of the recorded captures went really well, I ran into some issues last weekend with Unity and SteamVR. There seems to be a bug in the SteamVR Beta where the controllers are deactivated when playing the game in Unity. They still function when pressing the menu button, but in the actual scene- nothing. Lakshika was kind enough to come in this weekend and work with me once our recorded captures were done. We discovered that bringing old SteamVR files into the project makes the controllers function properly. Truthfully I have no idea why that's the case, but I'm not going to question functional technology. In a clean file, we were able to bring the actors in live and have them put on the headset to see the scene as they were acting. They were also able to use basic teleport functionality with the controllers, although I noticed that the character models start to offset with each teleport. We should be able to fix that with a quick script that attaches the character to the headset as the player teleports, then releases it. 

On Thursday I brought in two other forms of navigation: a radial menu attached to the right controller, and a 2D map that can be accessed in the main menu at any point by the user.

The radial menu can be accessed just by touching the touchpad on the right controller and circling to whichever destination the user wants. Clicking the touchpad will then take the player to that specific beacon. There are currently four: at the front door of the school, at the end of the sidewalk, in the street, and over by the table of interactive cubes. 

In the 2D menu, I took a screenshot of an orthographic top view of the map. I then placed buttons over these same four areas where the beacons are. When the user pulls up the menu, a button there will activate the 2D map. From there, the user selects any of the buttons on the map and they will be transported to the beacon in that area. 

In-Game screenshot of 2D Map on the headset menu.

In-Game screenshot of radial menu.
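Both menus ultimately do the same thing: move the CameraRig so the player ends up at a beacon. A minimal sketch of that, with the headset's offset inside the rig taken into account (the names are placeholders, and each map button gets its matching beacon assigned in its OnClick field):

```csharp
using UnityEngine;

// Sketch of the beacon teleport shared by the radial menu and the 2D map.
public class MapTeleport : MonoBehaviour
{
    public Transform cameraRig; // SteamVR [CameraRig]
    public Transform headset;   // eye camera inside the rig

    // Called by a menu/map button with its beacon assigned in the inspector.
    public void TeleportTo(Transform beacon)
    {
        // Offset by the headset's position within the rig so the *player*,
        // not the rig origin, lands on the beacon; keep the height unchanged.
        Vector3 offset = headset.position - cameraRig.position;
        offset.y = 0f;
        cameraRig.position = beacon.position - offset;
    }
}
```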

Of course, I spent some time this weekend debugging the issues we found after play-testing all the navigation changes. The major issues found: 

  • TOUCHPAD SCENE:

    • Teleportation on right controller turns back on after first successful jump using the Touchpad.

    • Teleporting with the Pointer and then attempting to use the Touchpad results in an offset from the beacons that continues to grow.

    • After fixing the first issue, menu laser was no longer turning back on for selections.

  • 2D MAP:

    • Menu missing entirely, though the pointer still appears.

    • Teleport offset (again)

    • Floating off the ground (again)

I chose to limit the Pointer teleportation to the left controller in all levels, for operational consistency and less confusion over the controls. This solved the problems with the Touchpad presses when trying to use the radial menu to navigate. The pointers turning on when using the Touchpad were resolved with some scripting changes, and the teleport offset was just a matter of making sure the CameraRig was being used as a transform reference instead of its parent, the SteamVR object- it was being pulled off course by the SteamVR object in the hierarchy. Simple fix, frustrating to actually figure out what the issue was. 

Debugging while in-game. Bezier pointer still appearing and enabling teleport while using the radial menu.

Achieving victory over the debug list meant it was time to bring in someone else for some outside feedback. My friend Maggie tried out all of the scenes and functions and offered some critique on the button mapping and overall function. 

  • Confusion over the controls. The trigger and touchpad can be confusing as there's no tutorial or hints other than me explaining the controls in person. For someone unfamiliar with the Vive, this really makes navigation difficult.

  • 2D map needs some color and highlighting to indicate the interactive areas for the user.

  • Placing colliders around the buildings to avoid "phasing" into spaces we don't want players to be. Very disorienting and it breaks the immersion significantly.

As a result, I went ahead and added controller tooltips so users can understand the button functions. These tooltips appear for 15 seconds when the scene starts, then deactivate. However, there is a toggle in the main menu if the user needs help or would just like to reactivate them. 

Tooltips present on the controllers at the beginning of the scene.

Tooltips present on the controllers at the beginning of the scene.
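The timing itself is a small coroutine- roughly the sketch below, with the same method reused for the menu toggle (names are placeholders):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the tooltip timing: show controller tooltips for 15 seconds, then hide,
// with a menu toggle to bring them back.
public class TooltipTimer : MonoBehaviour
{
    public GameObject[] controllerTooltips;
    public float showDuration = 15f;

    void Start()
    {
        StartCoroutine(HideAfterDelay());
    }

    IEnumerator HideAfterDelay()
    {
        SetTooltips(true);
        yield return new WaitForSeconds(showDuration);
        SetTooltips(false);
    }

    // Wired to the main-menu toggle's OnValueChanged event.
    public void SetTooltips(bool visible)
    {
        foreach (GameObject tip in controllerTooltips)
        {
            tip.SetActive(visible);
        }
    }
}
```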

Thursday night, Tori was able to get one of our recorded captures into Unity and functioning with a character model. We started with one of the protester walks just to get the process down, and she'll be working on getting a variety of animations and characters into the scene. I started researching how to control animations in Unity, and it's not a clearly defined process. There's confusion between legacy scripting in Unity and what works with the current Animator, and I keep getting mixed up between the two, accidentally working with methods that are no longer used in newer versions of Unity. It seems most of the play-pause approaches involve stopping time in the game, but when I attempted this it also stopped my use of the controllers in the Simulator. Definitely going to need some more research in this area. 

NEXT:

This next week is going to be all about animation controls. I've considered using sliders, buttons, and toggles, but really it's going to come down to what I find in research and what actually works in the scene. Once I get this working on one of the characters in the scene, I'll move on to applying it to multiple characters/animations. I'll also be working on some smaller tasks: adding proximity tooltips to objects around the scene (information transfer), colliders around the school, and adjusting the 2D map with button highlights and a smaller surface (it currently sits too far into the periphery).

Navigation and Menus

As of today, our Unity file is set up to operate as a good testing ground for interaction! Last week was really focused on honing our test ideas and getting the basic scene set up. This week, I focused on implementing navigation, interaction, and a functional menu to control our testing in the future. 

On Monday I went ahead and tested the basic navigation and interaction functions I set up last week in the Vive. I set up the teleportation using the straight line pointer at first, and realized that this can prove difficult when moving around a large space. The straight line renderer in VRTK makes it hard to gauge distances, though it works well for UI selections. Keeping this in mind, I changed the teleportation to a bezier pointer. The arced line is easier to see in the space, especially when navigating elevated elements such as stairs or ramps.

Using the Bezier pointer for teleportation.
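For reference, the pointer swap is mostly an Inspector change in VRTK - the pointer component and its renderer are separate scripts, and the pointer just references whichever renderer you want. The sketch below shows that relationship in code, assuming VRTK 3's VRTK_Pointer, VRTK_StraightPointerRenderer, and VRTK_BezierPointerRenderer components; in practice I simply assigned the bezier renderer in the Inspector.

```csharp
using UnityEngine;
using VRTK;

// Rough sketch: both renderer components sit on the controller alias, and this
// toggles which one the VRTK_Pointer uses. (Assumes VRTK 3.x, where the pointer
// and its renderer are separate components referenced through pointerRenderer.)
public class PointerRendererSwitcher : MonoBehaviour
{
    public VRTK_Pointer pointer;
    public VRTK_StraightPointerRenderer straightRenderer;
    public VRTK_BezierPointerRenderer bezierRenderer;

    public void UseBezier(bool useBezier)
    {
        // Only one renderer should be active at a time.
        straightRenderer.enabled = !useBezier;
        bezierRenderer.enabled = useBezier;

        pointer.pointerRenderer = useBezier
            ? (VRTK_BasePointerRenderer)bezierRenderer
            : straightRenderer;
    }
}
```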

I also tested out the interactive cubes. After some debugging, all the different highlighting settings worked. Some have outlines, some turn to solid colors, and they can be grabbed/tossed around the scene. I did run into one issue where the cubes could not be retrieved from the ground plane- it turns out the player was floating 0.5 units above the ground. Adjusting this value fixed the problem pretty quickly. 

Initial test in the Vive, 2/5/18.

The Simulator now works in this project file, so I can roughly test functions without having to load into the Vive every time. In the past, troubleshooting often took a long time because I would playtest only after making fifty changes, which made it difficult to pin down which change caused an issue. It's been a conscious effort this time around to test every major implementation, and it's paying off.

I switched gears for a bit to work on UI elements in the scene. Because Tori and I are going to be testing a variety of interactive properties (some of them variations, others contradicting each other completely), I made a main menu for the whole project so that we can easily switch between scenes. This also includes toggles for the interactive objects if we don't need them in the scene. In the future, I will be adding controls for the motion capture data and animations. While I've made menus in Unity before, this one is attached to the headset and moves around with the player. Following the VRTK tutorial for a Headset Menu gave me a great start on the format, and then I adjusted it to fit our purposes.

State of Headset Menu, as of 2/8/18
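The headset attachment is the interesting part of the menu. The real version follows the VRTK Headset Menu tutorial, but the rough idea is a world-space canvas that trails the headset instead of being rigidly parented to it - something like the simplified sketch below (the field names and values are placeholders).

```csharp
using UnityEngine;

// Simplified sketch of the headset menu attachment: a world-space canvas that
// follows the headset at a fixed distance and eases into place so it isn't
// glued rigidly to the user's face.
public class HeadsetMenuFollow : MonoBehaviour
{
    public Transform headset;        // the VR camera (eye) transform
    public float distance = 1.5f;    // how far in front of the user the menu sits
    public float followSpeed = 4f;   // easing speed

    private void LateUpdate()
    {
        // Target position: straight ahead of the headset on the horizontal plane.
        Vector3 forward = headset.forward;
        forward.y = 0f;
        forward.Normalize();

        Vector3 targetPosition = headset.position + forward * distance;
        transform.position = Vector3.Lerp(transform.position, targetPosition, followSpeed * Time.deltaTime);

        // Keep the menu facing the user.
        transform.rotation = Quaternion.LookRotation(transform.position - headset.position);
    }
}
```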

Tori and I did a playtest together of the first basic teleportation level, just to make sure the buttons work and debug a few issues from Tuesday. Below is a video of our test and working through the ground plane issues. 

Once the project framework was in place, I nailed down exactly what the navigation functions were going to look like and looked for any holes in my logic, thinking through what the player impact would be, what level of control they would have, and what purpose these changes would serve. All four of these options address very different concerns in the environment and are viable options to explore for potential user interaction. I have a pretty good idea of how to set these up in Unity, but I'm going to spend this weekend actually doing that. Next week will have more information on the development and results. On Tuesday, Tori and I will do another playtest to determine whether we need further exploration in each scene.

Planning out navigation functions for the first phase of my work.

On the theoretical side, we had a conversation today about what our research question was looking like for this project as a whole. Tori is exploring the immersion side of the environment- telling the story through acting and environment. I'm exploring the interaction- how users experience and navigate through this story. We're both still pretty new to writing proper research questions and the results of the next four weeks are going to determine a lot about where we go moving forward. We just needed to start putting language to our work and determine how it all fits together in the big picture. The picture below is our start on this conversation, though it will be developing over the next few weeks as we work on writing our own questions and bring them together. 

Working on formulating research questions

I mentioned using Mendeley last week for organizing my research- the reading has officially begun. I downloaded the desktop app and started adding all the current studies that Tori and I have gathered. While waiting for the uploads to complete, I read about the Virtual Human Interaction Lab (VHIL) at Stanford University, which studies human interaction in virtual reality and its larger societal effects. Their website has a large archive of research papers from the past ten years, and I left with 16 studies on topics ranging from interaction to children's education to racial bias.

Tori is pretty far ahead of me on readings right now, but I've started prioritizing them and working my way through the list based on those most relevant to my development in Unity. So far I've read:

The cybersickness reading really tied in with the navigation issues I'm currently working through. Because the user will be navigating a large space (and in the endgame, the user will be a younger student), our tests should address comfort in navigation as well as functionality. Maria mentioned thinking about how these navigation forms could influence the story itself- for example, having limited movement when playing as Ruby, or losing the ability to navigate the space altogether. That would really emphasize the role of the child, and Ruby in particular, as someone with minimal control over their world. Although I wonder if that effect would be as prominent if the user is already a child who may experience this in their own life. Unless it's emphasized to a new degree? Food for thought.

"How to Do Things With Videogames", by Ian Bogost, explores the variety of uses games have been applied to. Some of these uses are unrelated to us at the moment- the debate over whether video games qualify as art is interesting but not really what we're exploring. But there are chapters on empathy, reverence, and work that give great examples of games to look at and how they deal with these topics. The empathy chapter discusses two games made by USC graduate students called "Darfur is Dying" and "Hush", both dealing with genocide and fostering empathy for the people trying to survive in these situations.

It also introduced the concept of the vignette in games, giving an impression of experience rather than advancing a narrative. Bogost also wrote an article on Gamasutra explaining his thoughts on video game vignettes. Our experience does not focus on one particular aspect of Ruby's walk to the school but would highlight multiple things she would face- confusion, loud crowds, angry faces, lack of control. Because of this I do not believe we could call this experience a vignette, but it's a good reminder to consider breaking down each aspect of her experience and how we portray that to students.

I've started making progress on the Vygotsky paper Maria sent us on Imagination in Childhood; I'm about 11 pages in (out of 92). So far he's discussing what imagination actually is, its origins in childhood, and imagination's basis in reality. I'm interested to see where this goes as far as discussing the perception of reality in VR, and how that ties in with the other papers I downloaded from VHIL.

Screenshot of my current Mendeley setup, with readings uploaded.

Annotations for "New VR Navigation Techniques to Reduce Cybersickness"

NEXT: 

From here on I'll be developing the other three teleportation techniques and making progress on the readings. In working with the teleportation, I'll be learning more about UI and setting up the controllers for specific functions, thus knocking out two of my three goals. 

On Sunday, Tori and I have three actors from the Theater Department coming into the motion capture lab to capture data from and potentially interact with in the scene. I should be able to teleport and move around them while they act. This will serve as a good test of scale in the environment, and we'll be able to run through the procedures we learned last week. Once Tori has this data ready to go, we can bring it into the scene and I'll start playing with user control over animations/time.

Building in Unity

This week started with the completion of the Explainer Video, which I've placed below. Creating this video really did help me organize my thoughts from last semester and display what I've been working on. Seeing this together made it easier to find my path forward. It also gave me the chance to work in After Effects again and brush up on some old skillsets. 

Tori and I began discussing the Ruby Bridges project last weekend and had a general plan in place for production to begin. On Wednesday, we spent some time in the Motion Capture Lab learning how to stream an actor's motions directly into Unity. I have spent very little time in the Motion Capture Lab in the past and am unfamiliar with the programs that Tori has to use in order to capture data, so seeing this process gave me a general idea of her pipeline. Our classmate Taylor put on the suit and we started by setting up tracking as if we were doing a recording of his movements. Tori is very familiar with this process, and it's something we'll be using to test out animations.

We then pulled up Unity and learned how to stream an actor live directly into the scene, which did require some tweaking and setup for the basic character we were using. But the end result was being able to put on the headset, see Taylor's character in the HTC Vive, and interact with him live. 

Tori viewing Taylor's motions in the HTC Vive. 

This is going to be especially valuable once we have the final set built and can interact with actors in the space. His movements were very clear, though we didn't properly orient the character and the camera around the origin. Whenever Taylor moved to his left, it appeared to me that he was moving straight towards me. Just making sure all of our transforms are correct should clear up this issue. 

I made a demo Unity project earlier in the week that was just a base set- a flat plane with some boxes and house-like representations, just to have a place to test out interaction. When I went to show Tori, I realized that I had forgotten to load the SteamVR asset package. Trying to reload it caused a whole host of problems, and I found it was easier to start from scratch and build up a demo scene with a layout similar to our story. I spent Thursday building up a new set using the Prototyping asset package that comes with Unity. Because interaction is my priority, I'm choosing not to focus on the models and just work with representations. This new map features a school, front yard, and street.

Screenshot of new Unity scene.

I chose to make this scene fairly large, so we have room to experiment with navigating larger environments. This also means more room for figures once we start importing the motion capture data. 

From there, I followed some of the VRTK tutorials (found HERE) to set up the camera, basic teleportation system, and a few interactable objects. There's a table off to the side with 6 cubes on it, each with different properties. One functions as a control and cannot be picked up using the hand controls. The other five have varying highlighting settings, and react differently when picked up by the controllers. This helped me learn a bit more about how the hand controllers are set up to work with interactive objects, and what options I have for modifying these interactions.  

Screenshot of interactive table, with one of the pick-up cubes selected.
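For my notes, the cube setup is mostly just VRTK components. The sketch below shows the gist of it, assuming VRTK 3's VRTK_InteractableObject and grab attach mechanics- the "control" cube simply leaves isGrabbable off, and the highlight variations were configured per-cube in the Inspector rather than in code.

```csharp
using UnityEngine;
using VRTK;
using VRTK.GrabAttachMechanics;

// Sketch of the test cube setup (assumes VRTK 3.x component names).
// Each cube also has a Collider and Rigidbody added in the editor so it can
// be grabbed and tossed around the scene.
public class TestCubeSetup : MonoBehaviour
{
    public bool grabbable = true;   // the control cube sets this to false

    private void Awake()
    {
        var interactable = gameObject.AddComponent<VRTK_InteractableObject>();
        interactable.isGrabbable = grabbable;

        if (grabbable)
        {
            // The grab attach mechanic controls how the cube follows the controller.
            var grabAttach = gameObject.AddComponent<VRTK_ChildOfControllerGrabAttach>();
            interactable.grabAttachMechanicScript = grabAttach;
        }
    }
}
```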

I had to make several decisions this week about what types of interaction specifically I would be exploring. I knew it would be three broad topics: navigation, object interaction, and time. But I broke that down and really thought about what I want to explore in those areas, beginning with navigation. 

  • Teleportation:

    • Using VRTK, the standard simple teleport function. I did switch the pointer to be a bezier pointer, which seems easier to use than the straight pointer. It's easier to determine a final destination, whereas the straight pointer tends to overshoot. I learned this week how to set that up from scratch, which was my first goal.

    • Point and click navigation. In this scenario, the user determines their destination, but we (the designers) control the actual teleportation. The scene would be divided up into sections, and when at the border of a section the user will get a cue to move into the next area. The user will appear in the same spot each time. It will be interesting to investigate whether it's easier to move this way and take the user's focus off of the controls (a rough sketch of this idea follows just below).

    • 2D Map. Using a menu function to determine which area the user wants to teleport to. In this case, having a map available to toggle on a hand control, or a series of options. Something like "School Entrance" would teleport them to the front of the school doors.

I took inspiration from games like Myst, Dreadhalls, and The Sims when considering these layouts and how the player interacts with a larger map or navigation techniques. 
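To make the point and click idea a little more concrete, here's a rough sketch of how I'm picturing it: trigger volumes at the section borders show a cue, and confirming the move drops the user at a fixed arrival point in the next section, so the composition of each stop stays under designer control. Everything here is hypothetical- nothing in the project is built this way yet.

```csharp
using UnityEngine;

// Hypothetical section-border trigger for point and click navigation.
// Assumes the play area carries a collider tagged "Player" so it can enter the trigger.
public class SectionBorder : MonoBehaviour
{
    public Transform playArea;       // the [CameraRig]
    public Transform arrivalPoint;   // fixed spot the user appears at in the next section
    public GameObject moveCue;       // e.g. a floating arrow or prompt

    private bool userInRange;

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            userInRange = true;
            moveCue.SetActive(true);
        }
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            userInRange = false;
            moveCue.SetActive(false);
        }
    }

    // Called by whatever confirm input we settle on (e.g. a controller button event).
    public void ConfirmMove()
    {
        if (!userInRange)
        {
            return;
        }

        Vector3 destination = arrivalPoint.position;
        destination.y = playArea.position.y;   // stay on the floor
        playArea.position = destination;
        moveCue.SetActive(false);
    }
}
```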

  • UI/Menus

    • Scrolling

    • Moving windows

    • Typing (for potential classroom uses)

    • Buttons

  • Animation

    • Starting and stopping animations with a button to "pause" the scene while retaining player movement.

    • Play with time: the ability to move backwards and forwards, implementing those scroll bars from the UI Menu exploration. Similar to resting in Skyrim (a rough scrubbing sketch follows below).

(Skyrim) An example of time sliders, potentially incorporated into the scene to replay a moment or action.
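The time slider idea could be sketched roughly like this: a UI slider drives the Animator's normalized time, so dragging it scrubs a single animation backwards and forwards. This is just a thought experiment for now- the state name and slider are placeholders, and it assumes the animator is parked on one looping state.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of the Skyrim-style time slider idea: a UI slider scrubs one animation
// by driving the Animator's normalized time. (Hypothetical names throughout.)
public class AnimationScrubber : MonoBehaviour
{
    public Animator targetAnimator;
    public Slider timeSlider;        // slider from the UI/menu exploration, range 0-1
    public string stateName = "Walk";

    private void Start()
    {
        // Freeze automatic playback; the slider takes over.
        targetAnimator.speed = 0f;
        timeSlider.onValueChanged.AddListener(Scrub);
    }

    private void Scrub(float normalizedTime)
    {
        // Jump the state to the chosen point in time and evaluate a single frame.
        targetAnimator.Play(stateName, 0, normalizedTime);
        targetAnimator.Update(0f);
    }
}
```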

Tori and I also discussed the hardware being used. While we know we're going to be developing for the Vive, we also have access to the Leap Motion sensors for hand controls. I looked into development for these, and I think it would be a valuable area to explore. Being able to see and reach out to grab objects, or incorporating gestures into navigation, could be an interesting space to work in. For now, I've decided to accomplish the above goals using the HTC Vive, but to keep researching and looking up Leap Motion resources in case we decide to take that route in the future.

Below is a video from Leap Motion previewing their VR hand tracking software in 2016, just to give a general idea of the type of interaction we could be looking at. 

The past week has been full of development and decisions to start moving forward on our prototype. In the next week, I will be finishing up some tests for the point and click navigation and getting the menu-based navigation in place. I have a few ideas for how to accomplish this, but I need to do some research and see if there are any cleaner or simpler paths.

I will be sorting through and organizing readings tomorrow- another classmate showed me the Mendeley app, and I'd like to try to use that to keep research together. I also need to get some time in the Sim lab to test out the level I made, and make sure these interactions are functioning the way they're supposed to. The simulator in Unity isn't running properly for me in this scene - while it would be useful, it's just not a priority right now and I can work on fixing that once a few other tasks are accomplished. 

Finalized Proposal and Work Documentation

APPROACH

The majority of this week was spent gathering footage and replacing all of the storyboards in my animatic. I went back into each project and did screen recordings along with footage of the players, from the HTC Vive to the Google Cardboard. Syncing up the footage was actually easier than expected using screen context clues. I also added background music and an intro sequence. As of right now, Explainer Video 1 is about 90% complete. The only things missing are the credits and some tweaks on sound/text. 

Still of title sequence from Explainer Video 1

Screenshot working in After Effects

CHOICES MADE

Tori and I submitted our project proposals for the next four weeks, and that meant making decisions on what exactly I wanted to investigate for my portion of the project. We discussed working on technical exercises, trying to nail down the pipeline and techniques we might use for future development. My current plan for the next four weeks is to focus on: 

  • Navigation: How does the user move about the scene? I will use VRTK in Unity to experiment with teleportation, walking, and top-down maps as ways for the user to explore a given area. I've used the teleportation tools before in my Hurricane Prep project, so this will be a familiar area to start with. 
  • Object Interaction: How can the UI tools in VRTK and Unity be used to convey information to the user? There are a variety of methods for pop-ups and object selection. I will set up different objects throughout the scene and apply these different menu types/functions to them in order to test them out (a rough proximity pop-up sketch follows below this list). 
  • Time: Tori's working on getting the motion capture pipeline down, from getting the data to bringing it into Unity. Once those animations are present, I would like users to be able to pause the action in the scene and explore a frozen moment in time at will. I will test this technique by importing a simple animated object in place of the mocap data, then applying it to the figures once they are in the scene. 
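As an early sketch for the object interaction tests mentioned above, one simple version of "conveying information" is a proximity pop-up: a world-space panel that appears when the user gets close to an object and turns to face them. The names below are placeholders for however we end up structuring the scene.

```csharp
using UnityEngine;

// Early sketch for the object-interaction tests: a world-space info panel that
// appears when the user gets close to an object and hides again when they leave.
public class ProximityInfoPanel : MonoBehaviour
{
    public Transform headset;        // the VR camera transform
    public GameObject infoPanel;     // world-space canvas with the object's information
    public float showDistance = 2f;  // metres

    private void Update()
    {
        float distance = Vector3.Distance(headset.position, transform.position);
        bool shouldShow = distance <= showDistance;

        if (infoPanel.activeSelf != shouldShow)
        {
            infoPanel.SetActive(shouldShow);
        }

        if (shouldShow)
        {
            // Keep the panel facing the user so the text stays readable.
            infoPanel.transform.rotation =
                Quaternion.LookRotation(infoPanel.transform.position - headset.position);
        }
    }
}
```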

RELEVANT SOURCES/INSPIRATION

I gathered a ton of relevant sources this week, all from different areas that we're investigating. 

I found an article titled "In Their Shoes: 10 Empathetic VR Experiences" that features VR projects covering a vast span of topics, from refugee camps to solitary confinement. One that stood out is a project from Derek Ham (NC State University) titled "I Am A Man", part of an exhibition that will be on display at the National Civil Rights Museum. The exhibit is about the 1968 Memphis Sanitation Strike, and his experience takes you back to that scene. Ham documents the production on his LinkedIn page, which has provided valuable insight into his process. One particular entry detailed his thought process on whether his project should be a documentary-like experience or a fictional narrative, and how the presence of the user in the scene as themselves automatically alters the accuracy of the historical retelling. I've linked that page here, and placed the trailer for his experience below: 

On another note, I was recently linked to the IEEE Conference on Virtual Reality through an educational AR/VR Facebook group. While unfortunately this year's conference is in Germany (unrealistic), the site listed papers from past conferences. I picked up several papers on using VR and immersive technologies in schools, and have added those to my reading list. Maria also linked Tori and me to a few readings on educational theory, and sent us more looking specifically at how elementary-age children learn.

CURRENT QUESTIONS/NEEDS RAISED

As I start on this four-week project, my questions are going to be technically focused. I'll need to begin working with VRTK again and dive into some tools that I only understood at a surface level for the hurricane project. Through these readings I'll be gathering information on the current opinion on VR in classrooms, and how empathy plays in. The biggest need raised this week is simply the time to read all of these sources.

LIKELY NEXT STEPS

This week I will be finishing up my Explainer Video 1 and posting it on my site for viewing. I'll also be creating a base Unity file for our four week project and getting the teleportation tool functional, hopefully by mid-week. Tori and I will be getting motion capture data on Wednesday and learning how to live-stream the data into Unity. I'll be documenting the process and some of that footage will probably be in the blog post next week, along with some Unity snapshots.