05/12/20: Thesis Completion and Moving Forward

Thesis is complete! I turned in my paper and successfully defended on April 8, 2020, and have since graduated from OSU with my MFA.

Most of the last few months were spent writing and revising my thesis paper, but I didn’t really post any updates along the way. That’s what this post is for.

As of the end of January, all work on our Narrative Performance case study prototype has come to an end. Taking the final steps in production to have the experience ready for the Zora Neale Hurston Festival signaled my transition from production to data analysis and writing.

While the prototype was far from polished, it was stable enough for us to test some of the design decisions we arrived at over our past two years of work.

Final prototype video, April 2020.

Zora Neale Hurston Festival and ACCAD Playtest Day

Tori and I ready to present at the Zora Neale Hurston Festival, Jan 30 2020.

Tori and I were set up next to the academic conference at the Zora festival. Our IRB submission was finally approved in December, so we came equipped with our verbal consent script, user surveys, and screen recording software. Over the two days of the conference, we gathered 13 user responses and screen recordings, primarily from users who had never experienced VR before or had only tried it minimally in the past. We had some really wonderful conversations with conference attendees about the potential impact of this experience in a museum, as well as several revealing comments about the experience itself. A few users were unsure of their role in the experience, mistaking Lucille Bridges' character for Ruby, and many pointed out the technical flaws in the mob's animations and audio. Looking back at their comments, I wonder if working on this project constantly has desensitized me to the uncanny valley effect that some of these avatars may be creating when they intersect with the user on the walk down the sidewalk. Spotting the technical issues didn't seem to overwhelm the users' focus on the experience, but the flaws definitely stayed in their minds as they exited and discussed their thoughts with us. I tried to relieve this effect after the first test of the day by moving some of the avatars further back off the sidewalk and away from each other, but collisions still sometimes occurred with the federal marshals or Lucille Bridges.

Some of the users also mentioned not realizing that turning their head was impacting their speed. I did not inform them of this ahead of time, as I wanted to see what would happen if it was a mechanic users weren't consciously aware of. I forgot to account for the VR acclimation period that occurs in first-time users - I have noticed during our demos and on other VR projects that users who aren't as familiar with VR tend not to move their heads or bodies much in the experience unless reminded to do so. In spacing out the avatars walking with the users, I assumed that there would be more variation in speed. However, when users look straight ahead the entire time, the speed remains at 1 for the duration of the experience, and there wasn't enough distance being created between the users and the avatars walking in front of them. I had to adjust the walking distance allowed between the avatars to keep users from passing through them.

The overall experience was rewarding, and the small number of users gave Tori and me a chance to work out the kinks in our presentation flow and delivery.

A few weeks later in February, we presented the same project at ACCAD as part of Playtest Day, an event for students to gather feedback on their in-progress projects. We collected over 30 responses from a user base that tended to have more VR experience, and added those survey responses to our collective data.

Thesis Writing and Defense with COVID-19

Following the ACCAD Playtest Day, most of March was spent finishing my thesis paper. I haven't written much on my blog about my writing process or concept; to summarize, I was building a framework for designers creating narrative VR content that takes into account the contributions of the user along with those of the designer. How do we as designers consider the user's role in the design process? I used the concept of the magic circle as a starting point for much of this, outlining the components of a VR magic circle and how one would be constructed by the designer. I then analyzed the design process for the Narrative Performance prototype and our collected data within this framework, along with the implications of the user responses to the prototype. I'll post a link to the actual paper in the future, as I'll probably be referencing parts of it in personal projects moving forward.

The actual writing process was surprisingly satisfying, despite being interrupted by COVID-19. Tori and I were able to complete all our data gathering and project work prior to the university closing down. Spring Break was the second week of March and I never went back after that. It took some time to get into the groove of writing at home with my partner and dog present - it was hard to sit at the same desk for 12 hours a day, not moving between my GRA hours and schoolwork. It became a matter of “if you’re in the mood to write, do it” and adjusting the rest of my schedule around that to get it done.

I finished my paper on April 1 and defended on April 8 over Zoom, which was… odd. There were roughly 18 people on the call with their videos off, so I was just talking to my PowerPoint for the majority of it.

But it was successful, and I only had to make a few minor changes to my paper before final submission to the grad school. I was able to complete and submit my GIS in Cognitive Science not long after.

WHAT’S NEXT

I’m at a point now where I’m finishing up my GRA hours and transitioning into a new full-time job - more on that next time. But what I’m seeing now is a need to strengthen my portfolio beyond prototyping.

Quarantine Cards

So naturally, I started by branching out and making a Cards Against Humanity clone.

Social distancing and quarantine have really done a number on my love of game nights. I've been FaceTiming with my sister and brother-in-law every Saturday night for the last five weeks to play a networked Cards Against Humanity game. Except every site that we've used has come with its issues - a limited number of cards/expansions, no way to swap out problematic or terrible cards, and text that's difficult to read or cards that are hard to move around on different platforms. Well, I've never done networked play or text parsing in Unity - time to learn!

I started building the card logic first, entering card options in an Excel spreadsheet and loading them into Unity. The spreadsheet is shuffled and then random entries are assigned to each card; entries are removed from the list once a card is swapped or discarded.

Test video for card logic of Cards Against Humanity clone.
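For the curious, here's a rough sketch of that deck logic (this isn't my actual script - the CSV name, class name, and values are stand-ins, and it assumes the spreadsheet has been exported as a one-entry-per-line CSV in a Resources folder):

```csharp
// Minimal sketch of the card-deck idea described above (not the actual project code).
// Assumes the spreadsheet is exported as "white_cards.csv" in a Resources folder,
// with one card entry per line.
using System.Collections.Generic;
using UnityEngine;

public class CardDeck : MonoBehaviour
{
    private List<string> cards = new List<string>();

    void Awake()
    {
        // Load the exported spreadsheet as a text asset and split it into entries.
        TextAsset csv = Resources.Load<TextAsset>("white_cards");
        cards.AddRange(csv.text.Split('\n'));

        // Fisher-Yates shuffle so cards are dealt in a random order.
        for (int i = cards.Count - 1; i > 0; i--)
        {
            int j = Random.Range(0, i + 1);
            string temp = cards[i];
            cards[i] = cards[j];
            cards[j] = temp;
        }
    }

    // Deal the next entry and remove it from the list so it can't come up again
    // after a swap or discard.
    public string DrawCard()
    {
        if (cards.Count == 0) return null;
        string card = cards[cards.Count - 1];
        cards.RemoveAt(cards.Count - 1);
        return card;
    }
}
```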

Making this has meant being able to include all our expansions, create custom cards, and throw out the ones that will never be played or are a little TOO problematic. So that's been a fun learning experience and side project so far. I'm going to jump into Photon later this week to learn how to set up networked play and figure out the second half of the card logic - how to get cards submitted to the card czar.

GOALS:

  • Return to environment art on a small scale, using static vignettes of a wider story.

  • Consider how to insert users into a narrative by adjusting a specific narrative moment to suit VR affordances.

  • Polished portfolio pieces

  • Consider the VR magic circle in the creation of the environment, and analyze for persistent patterns. I want to continue my thesis discussions here.

  • Explore Unreal Engine and become familiar again. I’ve lost touch in the last two years and want to be proficient again. I may start this project in Unity and move to Unreal as I work out the timing and scope of the project.

  • And on a personal note: reconnect with reading my favorite novels, because graduation has opened up some time for hobbies!

01/18/20: Spring Semester Goals

The next few weeks are the final weeks of production for this case study - in two weeks, Tori and I are flying to Florida to demo at the Zora Neale Hurston Festival. We then have the month of February and the first week of March to make any modifications. At this point, production is more about cleanup of the main case study experience, plus tiny prototype experiments based on proximity to other avatars and on how to fully construct the environment.

Reaching the end of last semester, I had a rough draft of what our final experience was going to look like and a head start on the writing portion of my thesis. I created a priority list addressing our deadline for the Zora festival in two weeks and for the end of March:

Priority List for final production of Ruby Bridges case study.

Project Process

Based on the priority list, I wanted to line up my goals for the Zora festival and for the end of the semester so I could work on them simultaneously. The first thing I've been working on has been maintaining consistent proximity during the walk between the user, Lucille, and the two federal marshals. At the end of December, all three characters would reach the end with the user, but would often feel too far away during the walk itself. It didn't seem natural to have Lucille so far away from the user in a hostile situation, or to have the marshals walking three meters up the sidewalk while the mob avatars were crowding in.

In my test navigation scene, I set up the three avatars and put in a base user animation with some speed variations.

Group Walk Test using test prototyping space.

One of the biggest problems was getting the speed and animation adjustments right regardless of what the user is doing. From the user’s perspective, if they’re slowing down then it means they’re not looking straight ahead down the sidewalk, which gives me a little bit of leeway in the adjustments I make. Slowing down the avatars to an unreasonable speed (often looking like they’re moving through water) doesn’t matter as much because they will speed back up when the user looks directly ahead.
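Conceptually, the pacing adjustment boils down to something like this (a simplified sketch, not the project's actual script - the distances and field names are placeholders, and it assumes the escort avatars are driven by NavMeshAgents):

```csharp
// Rough sketch of the pacing idea described above, not the project's actual script.
// Each escort avatar (Lucille, the marshals) slows toward zero as it gets too far
// ahead of the user and returns to its normal speed once the user catches up.
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class GroupPacing : MonoBehaviour
{
    public Transform user;              // the VR camera rig
    public float comfortableGap = 1.5f; // meters the avatar may lead the user
    public float maxGap = 4f;           // gap at which the avatar stops entirely
    public float normalSpeed = 1f;      // NavMeshAgent speed during a steady walk

    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        float gap = Vector3.Distance(transform.position, user.position);

        // Scale speed down linearly between the comfortable gap and the max gap.
        float t = Mathf.InverseLerp(maxGap, comfortableGap, gap);
        agent.speed = normalSpeed * t;
    }
}
```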

Implementing this in the main project scene was going to require reorganizing how the main control script was running the scene. Initially, that script controlled all of the dialogue, audio, and motion triggers, which got messy and difficult to debug. Using the original control script as a template, I created Scene1_MainControl.cs to house the booleans that indicate scene status and to run the timing for each phase of the experience. From there, I created separate scripts to control the motion for all of the avatars in the scene (including the user) and the audio/dialogue. With that separation I'm able to get a better handle on debugging down the road.

New control script setup.
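A simplified sketch of that split is below - Scene1_MainControl is the real script name, but the booleans, controller classes, and methods here are illustrative stand-ins rather than the actual project code:

```csharp
// Simplified sketch of the separation described above. Scene1_MainControl is the
// real script name; the booleans, controller classes, and methods are illustrative.
using UnityEngine;

public class AvatarMotionControl : MonoBehaviour
{
    // Would drive the motion for all avatars in the scene (including the user).
    public void BeginSidewalkWalk() { Debug.Log("Starting sidewalk walk."); }
}

public class AudioDialogueControl : MonoBehaviour
{
    // Would handle dialogue playback and mob audio timing.
    public void BeginMobAudio() { Debug.Log("Starting mob audio."); }
}

public class Scene1_MainControl : MonoBehaviour
{
    // Booleans indicating scene status.
    public bool carSequenceComplete;
    public bool sidewalkWalkStarted;

    // Linked controllers that own the details.
    public AvatarMotionControl motionControl;
    public AudioDialogueControl audioControl;

    void Update()
    {
        // The main script only watches phase booleans and hands off to the
        // specialized controllers when a new phase begins.
        if (carSequenceComplete && !sidewalkWalkStarted)
        {
            sidewalkWalkStarted = true;
            motionControl.BeginSidewalkWalk();
            audioControl.BeginMobAudio();
        }
    }
}
```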

The audio script also took some prototyping to get right. Last semester I was having problems with the mob members' audio all playing at once instead of starting after randomized wait times. Distributing the AudioSources in the scene and layering these sounds still needs a lot of work, which Tori and I have already reached out for help with. For now I focused strictly on timing the audio and ensuring that the mob chants are appropriately randomized from a set number of clips.
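Something along these lines handles the randomized timing (a minimal sketch, not the actual Mob Behavior script - the delay range and field names are placeholders):

```csharp
// One way to stagger mob audio, sketched from the description above; this is
// not the project's actual script.
using System.Collections;
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class MobChant : MonoBehaviour
{
    public AudioClip[] chantClips;      // the set of recorded chants to pick from
    public float minDelay = 0.5f;       // shortest wait before/between chants
    public float maxDelay = 4f;         // longest wait before/between chants

    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        StartCoroutine(ChantLoop());
    }

    IEnumerator ChantLoop()
    {
        while (true)
        {
            // A random wait keeps mob members from all starting at once.
            yield return new WaitForSeconds(Random.Range(minDelay, maxDelay));

            // Pick a random clip from the fixed set and wait for it to finish.
            AudioClip clip = chantClips[Random.Range(0, chantClips.Length)];
            source.PlayOneShot(clip);
            yield return new WaitForSeconds(clip.length);
        }
    }
}
```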

Next Steps

Those scripts are currently in the scene and functional, so in the next two days I will be turning my attention to the environmental setup of the scene. This is where my Zora goals and end-of-semester goals overlap - I'll be using a separate prototyping scene in Unity to place prototyped blocks and houses in order to determine the best placement for these assets in the world, and to explore different configurations for the mob. I thought about using Tilt Brush or Maquette for this, but I found it's much more efficient to use Unity because I can mix and match the assets I already have. I have already finished assigning all the textures and materials in the scene itself, and will continue to add environment assets in between setting up the houses and cars. I will also need to time the door animations for the user's exit from the car and clean up that exit sequence itself.

Next week I will have documented the prototyping scene and the resulting changes in the environment, as well as a runthrough of the experience with the separated script setup. Tori and I will be taking an Oculus Rift with us to demo the project in Florida, so we will be conducting these tests using both an Oculus Rift and a Vive to check for any other issues.

12/06/19: Five Week Summary

I’ve reached the end of my in-class window for this project, and it’s time to go back and review what’s happened in the last month.

I started this project fairly optimistic about what I would be able to achieve. My initial task list was ambitious but not out of the realm of expectation given the number of prototypes I had behind me. What I didn't anticipate were the hangups with character animations and locomotion. Tori and I had been working on getting the motion capture data processed and ready for the project over the previous four months, so I focused on some initial isolated problems while I was waiting on the data. As it started to come in during Week 2, we realized there were issues with the characters having major offsets and, occasionally, walk cycles with broken knees. I was also having problems aligning them with the car model I brought into the scene. Tori took those models, adjusted the animations to fit the car, and fixed the broken cycles, focusing mostly on the introductory car sequence. Within the last week of the project, we had all of the animations turned over to me. As I was bringing them in, I realized that the scene was cluttered and I was going to need a different method to bring the characters down the sidewalk. To refresh the project (and myself), I started over with a fresh scene and spent the last three days of work bringing in the final animations, implementing the new locomotion system, and cleaning up my personal scripting workflow.

12/06/19: Progress update. Video is a little choppy due to all the functions not yet being fully implemented!

What I Learned

For as many issues as I had with this project, I did learn a lot.

NAV MESH USE

I had only used NavMesh once or twice prior to this project, and had to learn how it worked fairly quickly in order to time my characters' motion through the script. I had some issues aligning the animation with the motion, but really that was just a chance for me to get better at setting up Animation Controllers.

CHARACTER ANIMATIONS

As an animator, I stay really true to my corner of environment art and development. I generally don't enjoy working with characters beyond the occasional sculpt. Prior to this prototype I spent most of my time working on interactions or the user themselves, not so much on cleaning up animations or transitions. There was a rough learning curve this time around. I had to link up multiple animations and make sure the transitions worked with the NavMeshAgent's motion. While I still don't enjoy the process, I feel much more confident about troubleshooting these areas of the project in the future.

PROJECT/SCENE ORGANIZATION

Speaking specifically here about keeping character versions in line and how my scripts are organized. I'm the only one really working on this project in Unity, so I will sometimes let my organization slip for the sake of efficiency… which later becomes a pain in the butt. Tori and I were constantly testing different versions of characters for the federal marshals and Lucille, to the point where I lost track of which animation was associated with which FBX file. Cleaning out my project helped enormously, but I eventually figured out I needed to be more attentive to my archive folder and to communicate file structures with Tori for when she sends me Unity packages of characters.

Additionally, my scripting has been much more organized. I began implementing headers for the Inspector to avoid the paragraph of variable inputs, which keeps the script much more organized on my end this time. I also stuck to using one primary control script to keep track of events in the scene and overall story elements, while keeping mob behavior and audio as their own separate (but linked) scripts. I've since been able to work much more efficiently knowing where specific functions are housed.
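For anyone unfamiliar, the headers are just attributes above the public fields - the variable names here are placeholders rather than my actual script:

```csharp
// Example of the Inspector headers mentioned above; the fields are illustrative,
// not the actual project variables.
using UnityEngine;

public class MobBehaviorExample : MonoBehaviour
{
    [Header("Audio")]
    public AudioClip[] chantClips;
    public float chantDelay = 2f;

    [Header("Proximity")]
    public Transform user;
    public float reactionDistance = 1.5f;

    [Header("Animation")]
    public Animator mobAnimator;
}
```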

PROTOTYPING SCENE

I usually have a practice scene early on in the development process, but it tends to get abandoned once the project has increased in complexity. I kept this test scene close at hand this time around. I've found that the complexity often makes it harder for me to tell where problems are arising, so bringing a prefab into that scene and testing specific scripts and functions has made troubleshooting much faster. I used it for the Head Rotation scene, for testing several animation transitions with the characters, for ensuring the Mob Behavior script was handling audio properly, and as an initial NavMesh learning space.

What’s Next

As we know, this prototype doesn't end here. At the end of the five weeks I managed to debug and troubleshoot plenty of these issues, but there is still much further to go. Tori and I will both be discussing this project in the next week, which requires it to be functionally sound. In January we will be showing our experience at the Zora Neale Hurston Festival in Eatonville, FL, so I will be focusing on getting the visuals up to par and working with Tori to adjust any animations that still need attention. While I have begun many of these tasks, this is the current to-do list:

  • Grouping function for the primary characters.

    • Because the user can control the speed at which they move down the sidewalk, the Federal Marshals and Lucille’s avatars need to be able to slow down and resume normal speed based on how far away the user is.

  • Mob encounters

    • As part of the Mob Behavior script, certain characters will react based on user proximity. This needs to be tied in with the audio (already implemented) and an additional animation trigger.

  • Audio mixing

    • Mob audio needs to be set to sound “muffled” while the user is inside of the car and at full volume when out on the sidewalk. Additional audio effects can be tested here as well for an outdoor city scenario/background audio. I have begun this process with the Mob Behavior script looking at individual phrases and sayings for the characters, but the mob chant, background city audio, and the car sounds still need to be brought in.

  • Complete Car Exit

    • Characters now exit the car appropriately, but timing needs to be adjusted for the user’s exit.

  • Implement scaling script

    • Needs importing from prior prototypes.

  • Prologue and Main Menu scene

    • This project only focused on Scene 01. I will create a package of the Prologue sequence from the previous project to be imported and applied to this one. A main menu and additional “end sequence” still need to be created.

  • Looking at environment calibration

    • A reach, but could be utilized in the main menu to determine the user's position in the playspace and adjust the environment to them. Not necessary, but it would make future demos easier.

  • Visual Polish

    • Set to the side for the moment; final textures, assets, and post-processing need to be applied to the scene. This also includes additional terrain assets such as grass, trees, and plants.

With as many weird curves as this prototype threw at me, I'm pretty proud of what I was able to learn and accomplish, particularly in the past three days. I think that in the next week I will be able to address many of the points on my task list and really be able to get some feedback on the state of the project. I will post an update next week on where I ended up and the feedback I receive from my thesis committee!

12/05/19: Time for a Reboot

Sometimes you just reach a point where what you’re doing in the project isn’t working, and it’s time to start fresh.

I hit that point this week with the Ruby Bridges case study.

All of the issues that Tori and I have had with the various animations really came to a head when I was trying to get the sidewalk animations coordinated. I realized that the system we were using to drive all the characters' locomotion (other than the user's) just wasn't working, and I was spending more time trying to fix little timing problems than actually setting up the scene. The project was also getting cluttered with previous versions of animations (we found another issue with the male walk cycle causing broken knees). I started over with a fresh project on Monday night, only bringing in the finalized animations as Tori sent them to me, and set up Scene 1 to be driven by NavMeshAgents.

I've only actually used NavMeshes maybe once or twice, so this was a bit of a learning curve. I had to figure out how to coordinate the agents getting on and off the meshes for the car scene, make sure there wouldn't be any interference from the car agent while all the characters were inside, and then determine the best way to drive them using waypoints. For as much time as the reset took me, I'm still convinced that I got more done, and in a much more organized fashion, than I had before. I kept a test scene to work out the animation issues I was having, and then brought the solutions over into the primary Scene 1.
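The waypoint-driving approach boils down to something like this (a minimal sketch, not the project script - the field names and threshold are placeholders):

```csharp
// Minimal waypoint-driving sketch based on the approach described above;
// not the actual project script.
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class WaypointWalker : MonoBehaviour
{
    public Transform[] waypoints;        // ordered points along the sidewalk
    public float arriveThreshold = 0.3f; // how close counts as "arrived"

    private NavMeshAgent agent;
    private int index;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        if (waypoints.Length > 0)
            agent.SetDestination(waypoints[0].position);
    }

    void Update()
    {
        // Advance to the next waypoint once the agent is close enough
        // to its current destination.
        if (index < waypoints.Length &&
            !agent.pathPending &&
            agent.remainingDistance <= arriveThreshold)
        {
            index++;
            if (index < waypoints.Length)
                agent.SetDestination(waypoints[index].position);
        }
    }
}
```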

Still from Test Scene, testing character animation transitions.

Using Test Scene to troubleshoot and test HeadLookController.cs

(originally found at https://wiki.unity3d.com/index.php/HeadLookController#Description)

Getting the characters to move along with the NavMeshAgent was easily one of the most frustrating parts. The instructions are outlined pretty clearly on the Unity website, but some of the animations were jumpy or wouldn't run once the blend tree was triggered. Some of these issues are still unresolved; I avoided further hours of blend tree debugging by taking the one functional animator I had and rebuilding all of the character animations on top of it.
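The general pattern for keeping a walk blend tree in step with a NavMeshAgent looks roughly like this (a sketch in the spirit of Unity's "Coupling Animation and Navigation" documentation - the "Speed" parameter name is an assumption, not necessarily how my animators are set up):

```csharp
// Common pattern for keeping a walk blend tree in sync with a NavMeshAgent;
// the "Speed" parameter name is an assumption, not the project's actual setup.
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent), typeof(Animator))]
public class AgentAnimatorSync : MonoBehaviour
{
    private NavMeshAgent agent;
    private Animator animator;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        animator = GetComponent<Animator>();

        // Keep the blend tree's root motion from fighting the agent's movement.
        animator.applyRootMotion = false;
    }

    void Update()
    {
        // Feed the agent's current speed into the blend tree so the walk cycle
        // only plays (and scales) while the agent is actually moving.
        animator.SetFloat("Speed", agent.velocity.magnitude);
    }
}
```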

Animator controller for Federal Marshal 1, showing all transitions and triggers from the car sequence to the end confrontation with the State Officials.

I'm trying to be a little smarter with my scripting this time around as well, making generalized functions that are flexible enough to be applied to multiple characters, which decreases the repetition in the script. There's more dialogue in this prototype than in the past, and I had some problems making sure the audio sources for the characters were functioning properly. But we made it through, and now the car dialogue works well. I'll be able to test this more thoroughly once the mob is in and I'm coordinating the audio changes there.

Current State

Status update of characters walking with navmesh. Dec 5 2019.

What’s Left?

As we can see from the video above, there's plenty of cleanup that needs to happen once the base functions are complete: implementing door animations, making sure that characters are facing the right way, and adding in the final pieces of the puzzle so that booleans are triggered on time.

I still have a fairly large to-do list now that I’ve got the characters walking together and able to look at specific targets. Implementing that script is the next immediate task. I’m split on what comes after that - I could begin bringing in the mob characters and setting them up. This has to be done before I can start working on any interactive events. But I also need to add the user to the primary group moving down the sidewalk. I focused on getting the characters behaving in their ideal patterns before introducing the user to the scene to interrupt them, but I now need the user to test a script that will keep the group within close proximity of each other. Below is the current list of tasks left:

  • Fully implement “Look At” script for primary group

  • Mob arrangement

  • SteamVR cam setup

    • Includes animation down the sidewalk, group proximity, and coding mob interactions.

  • Scaling environment script (needs importing)

  • Import Prologue scene from previous project version

  • Audio mixing for Scene 01

  • Main Menu

It’s a lot, but things are already going significantly better. I’m starting to implement more organized practices for myself, such as including headers in my scripts to make the Inspector a little kinder to the eyes. Small thing, but it makes me feel better when I have to use it.

The next update will be my overview of how the project went and a video of its state, as well as an overview of the things that need to happen to complete the project for a January exhibition.

11/30/19: Tiny Steps Forward

After the previous update, Tori and I ran into some time-consuming animation difficulties on our case study. We had animations that needed to be adjusted to fit the car we're using in the scene, and Tori needed time to set everyone up in Maya and get those bugs worked out before the files made their way back to me. So most of last week was spent cleaning old animations out of the project, testing the new things Tori sent me, then cleaning them out again.

I have the car sequence nearly completed with the new characters; everyone is sitting where they're supposed to be, the animations are timed, and the user can now successfully get out of the car and move down the sidewalk with Lucille and both federal marshals individually. Making sure they stay together as a group is another challenge. The doors aren't fully timed and animated yet because I chose to save that for a later round of tweaking. Right now, I need to get all the broad strokes happening in the scene to make up some ground. There are a few flaws with the characters - one of the marshals has an arm that breaks and spins when opening Lucille's door, and the walk cycle that we're using for some reason breaks all the rig's knees when it's applied. A problem to solve in January; right now, I just want everything in the scene and timed.

User view exiting the car with Lucille.

This iteration has especially been testing my workflow with Tori. I think in a future iteration we need to work more closely together on the motion capture process, because a large portion of my time on this scene was spent just getting all of the characters in the right place at the right time. I have to split up the animations I'm given because they don't match the audio cut for the instructions or the length of the drive up to the school. Creating more intentional shots with a bigger emphasis on timing and physical props might reduce some of the cleanup and issues we've seen this time around. The bottleneck in our skills has been frustrating, as development halted while we solved the animation issues and replaced assets.

Now that I'm moving forward, I'm working with the assets as they come to me. I began testing "encounters" with the mob and federal marshals, using empty game objects in Unity to represent mob members. As the group approaches down the sidewalk, the distance to each of these tagged objects is tracked. Once the group is close enough, the object is destroyed and the federal marshal reacts by telling the mob member to back off. This will be useful for setting up encounters throughout the scene once the mob members are actually in place.

Test encounter with empty game object. Also visible: the leg deformation with the Federal Marshal walk cycle.
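Conceptually, the placeholder encounter check works something like this (a simplified sketch - the tag name, trigger distance, and marshal reaction are illustrative, not the actual script):

```csharp
// Sketch of the placeholder encounter test described above; the tag name,
// trigger distance, and marshal reaction are illustrative assumptions.
using UnityEngine;

public class EncounterCheck : MonoBehaviour
{
    public Animator marshalAnimator;   // federal marshal who reacts
    public AudioSource marshalVoice;   // "back off" line
    public float triggerDistance = 2f;

    void Update()
    {
        // Empty game objects tagged as mob stand-ins are checked each frame.
        foreach (GameObject placeholder in GameObject.FindGameObjectsWithTag("MobPlaceholder"))
        {
            if (Vector3.Distance(transform.position, placeholder.transform.position) < triggerDistance)
            {
                // Close enough: remove the stand-in and have the marshal react.
                Destroy(placeholder);
                marshalAnimator.SetTrigger("BackOff");
                marshalVoice.Play();
            }
        }
    }
}
```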

Moving Forward

This is technically the last week of development for this project and in terms of goals, I’m still back in Week 2. I am prioritizing the completion of the sidewalk locomotion with the group, the encounter at the end with the state official, and the mob placement by the end of the week. A reach goal would be to spend time integrating audio with the scene, assuming I can get all the assets in. Visual polish and environment development will have to wait until cleanup time in January; for now, a functional scene for the user is all I’m looking for.

11/17/19: Animation Setbacks

With as much success as I had last week making progress and getting the project set up, the end of this week put me several steps backwards.

I started out prepping some of the audio clips we need for the project, because so much of this experience relies on the mob members yelling unrelentingly terrible things at you. In the past we recorded a few basic chants that we heard in our research and put them in as placeholders on repeat, with a few individualized phrases yelled by avatars. There's a little more dialogue in this version, and Tori recorded a variety of different mob interactions so that we could create a more dynamic audio experience. I went in and cleaned up each of the clips, balanced the audio, and then compiled some of the takes into a "group chant" to help ease the load in the Unity scene.

Group chant compilation in Adobe Audition.

With that task complete, I started in on getting the car introduction completed. I had all the avatars in last week; I just needed to separate some of the animation clips, set up triggers, and begin timing it out. I quickly got in over my head setting up all the parameters for all four characters, so I scaled back and began just getting the timing down for Federal Marshal 1 (driving). Then I added in the second federal marshal and Lucille Bridges. I had a slight scripting issue with the audio repeating, but I jumped that hurdle and got to where all of the characters were reasonably aligned.

From here I was ready to develop the transition from the car to the sidewalk for the user. At the end of the video above I noticed an issue with one of the marshals snapping back into the driver's seat - as it turns out, the rig in Unity was still set to Generic, while the walk animation was imported and set to Humanoid. The walk animation wouldn't play, and the avatar would reset. I switched the rig, and chaos ensued. The rotations of the rigs were causing intense offsets from where I originally had the avatars set, to the point where making the adjustments was nearly impossible. I brought the problem to Tori and we discussed it a bit - this is a video I sent her last night of the issue I was having:

We decided the best move from here was to have Tori animate the sequence in Maya with the car model, so all I would have to do is bring it in and apply the scripts. It's going to take a little time to get that process going, so I'm currently at a bit of a standstill (also waiting for the rest of the mocap characters). Once I have that animation back it will probably take a bit of script adjusting to get everything cooperating again, but at least all of the characters will stay where they're supposed to be on start.

Next Steps

From here, it's kind of a waiting game. I need the assets from Tori to move forward. Once I have them, I'll set up the intro sequence again and get the characters moving down the sidewalk by the end of the week, ideally starting on the interaction at the end as well. It's a bit of a setback, but I think I'm still in good shape to finish out this coming week on schedule.

11/09/19: Updates & RB Final Build Begins

Semester Updates

Since my first post this semester comes about three weeks before the semester ends, I have a bit of catching up to do here!

The majority of these updates will be about the final build for the Ruby Bridges case study and thoughts about relating it to my paper writing. The earlier part of my semester was spent working a bit on the Oculus Quest with a five-week project based on house museums. My team and I made a prototype for a house museum on Jackson Pollock, which was a fun foray into Quest development. I also spent a bit of time playing with Vuforia and AR by making a Lord of the Rings companion app for iOS that provides additional information and context when looking at a map of Middle Earth, such as the paths taken by notable characters throughout The Hobbit and the Lord of the Rings trilogy. I’m updating the MFA section of this site with the details and documentation of those projects.


Planning

From there, Tori and I have been working steadily on what will be our final version of this project. Taking into account all of the things we’ve learned from the last seven versions or so, I compiled a list of key tasks to address in this version:

  • Main Menu:

    • Creating a natural introduction to the gaze-based interaction. In previous versions the user was required to look at a specific button to trigger it, but I would like to make this a little more related to the content of the experience itself.

  • Prologue

    • Adjusting the position of the images and text in the prologue for a seated static user. The new form of locomotion (discussed below) does not require a user to stand, and so the images in a circular configuration can be difficult to see. Instead I will be re-arranging the images to be on a single plane across from the user, gallery-style.

  • Scene 01

    • Locomotion. This has been an issue since the very first prototype, and we've gone through several formats as we try to determine the optimal amount of user agency while keeping the user's attention on the scene around them. Our last prototype led us to the conclusion that it wasn't reliable or reasonable to explicitly direct a user's gaze in order to move them forward through the space, as it takes attention away from the events around them. When we talked to Scott, he presented us with a middle ground: putting the user back on a rail, but allowing them to control the speed by looking around.

    • Construct a full narrative scene using new mocap characters. Tori has been working hard this summer and semester on cleaning new mocap data for the scene. We’ve run into some software issues so it’s taken much longer than expected, but I will be placing all the new data into this scene. This includes a new interaction between the federal marshals and the state officials once reaching the front of the school.

    • Audio. Bringing in new variations to the audio so that the mob doesn’t appear to be repeating the same phrases every 10 seconds, as well as arranging dialogue.

  • Ending Sequence

    • Previous demos have ended with the walk to the school. Because there are no other scenes currently following this up, I will be book-ending Scene 01 with a sequence leading the user out of the space, with commentary on what happened next and connections drawn to today. The scene will have a similar setup to the Prologue, leading the user back out to the Main Menu so it can loop seamlessly for demo purposes.

Schedule

Initial schedule for Final RB Case Study, created 10/28/19

Week 1 Progress

Just as my schedule shows, this week I started a clean project and began importing all the assets. I made a timeline to plot out specific interactions in the project. It’s helped me visualize a few of the scripting decisions I’ve had to make so far - instead of relying on timing, I will be using global booleans in the scene to determine when specific actions occur. Giving the user the ability to control their speed down the sidewalk makes the timing variable anyways, and this is the best way to ensure all of the interactions occur regardless of the user’s pace.

Timeline for project flow.

From there, I focused my attention on working through the gaze-based rail system. Expecting pitfalls, I gave myself two weeks to figure it out and troubleshoot. It turned out to take only two hours, and it integrated well with the scaling system I had set up. At this point, if the user's camera rotation is between 50 and 130 degrees (with 90 degrees facing straight down the sidewalk), the speed of the animation down the sidewalk remains at 1. If the user passes either of these thresholds, the speed decreases based on how far past the threshold the user has turned.

Screencap of locomotion script in-play.

Locomotion script written for rail system.
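As an illustration of that angle-to-speed mapping, here's a reconstruction for the blog rather than the actual script shown above - it assumes yaw is read off the camera with 90 degrees facing down the sidewalk, uses a placeholder falloff value, and ignores angle wrap-around for simplicity:

```csharp
// Illustrative reconstruction of the rail-speed mapping described above, not
// the actual locomotion script from the figure. Assumes the camera's yaw is
// measured so that 90 degrees faces straight down the sidewalk; wrap-around
// past 0/360 degrees is ignored for simplicity.
using UnityEngine;

public class GazeRailSpeed : MonoBehaviour
{
    public Transform headCamera;    // the VR camera
    public Animator railAnimator;   // plays the ride-on-rails animation
    public float minAngle = 50f;    // lower threshold for full speed
    public float maxAngle = 130f;   // upper threshold for full speed
    public float falloff = 60f;     // degrees past a threshold before speed hits 0

    void Update()
    {
        float yaw = headCamera.eulerAngles.y;

        // Within 50-130 degrees the walk plays at full speed; past either
        // threshold it slows based on how far the user has turned away.
        float speed = 1f;
        if (yaw < minAngle)
            speed = Mathf.Clamp01(1f - (minAngle - yaw) / falloff);
        else if (yaw > maxAngle)
            speed = Mathf.Clamp01(1f - (yaw - maxAngle) / falloff);

        railAnimator.speed = speed;
    }
}
```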

With this out of the way, I began to work on the next steps on my schedule. I built the prologue into a gallery wall instead of a circular arrangement and added in the timing/narration. This has prompted further conversation with Shadrick and Maria about the sequencing of the images, the narration being used, and how I might incorporate the gaze-based mechanics into this scene. I'm still not convinced about adding any gaze functions to these scenes - I don't want this to be a space for users to linger, I want it to be an area to gain context for the scene ahead. Now that the functions for the scene are set up, I'll be looking at the sequencing and narration to see if I can build a stronger narrative for the user going in.

Initial Prologue setup.

I began bringing in the finished data as Tori uploads it. The federal marshals are in, as well as Lucille. I found a car asset with an interior and doors that open (harder to find than you would expect), and began to construct the introductory car scene. The clips for the federal marshals were separated out so that they can be timed in the script, and I'm in the process of locating the walk/idle/talking motions for them for the later dialogue sequences.

Goals: Next Week

I have to adjust my schedule a little bit to make up for the fast locomotion work. This is what I plan on having done by the update next week:

  • Car sequence complete with anims and dialogue attached

  • User transition from car to rail animation

  • Additional animation imports for Lucille

  • Main menu scene addition with stand-in gaze button.

By next week I should have some in-game footage to show with the animations intact and some additional models added to the environment. I will also be introducing some of my thesis writing concepts and how they relate to the decisions I'm making here in the project.

04/20/19: Phase 2 Implementation

PROGRESS

Last week’s successful gaze development paid off. Scene 01 no longer requires the use of controllers!

The big achievement of the week was adding all the gaze-based teleport points to the scene I constructed for demos two weeks ago. The actual script for applying a ray to the camera and teleporting to a point with a trigger didn't require much modification. The trouble I had was deconstructing EVERYTHING that was attached to the original, time-based teleporting and attaching it to the new points, then making sure to activate/deactivate the next and previous points to prevent the user from going backwards. I also added a light to each point that increases in intensity while the user is staring at the teleport point, as an indicator for myself and other players.

Me testing gaze teleporting in Scene 01. 04/19/19

After adding these features and testing myself, I found a few errors shown in the video above.

The range of the headset's ray was set too short, which made it difficult to reach the next point. It prevents the user from triggering anything too far ahead of their position in the scene, but unfortunately it also prevented me from reaching some of the teleport points. I had arranged the mob variations before a clear view of the next point was actually required to progress, but because the ray only reacts to objects on the Teleport layer, the user can look straight through the legs of the mob and still activate the next point without ever gaining direct visual contact. And the doors of the school are intended to be a trigger that ends the experience - instead, the user just keeps activating the teleport function and re-spawning directly in front of them. After addressing these issues (minus the points visible through the mob's legs), I added a gaze-based "start" button at the main menu to create a completely controller-free experience and introduce this concept before actually entering the scene.

Script commenting for

For my own process, I realized that many of the scripts I'm writing/using will be useful to us moving forward in development and in various other projects. I took a quick break from scene development to add comments to the scripts for my own sanity and ease of understanding down the line. Just a new habit that I'm trying to develop, thanks to Matt's Programming for Designers class.

We were fortunate enough to host a VR Columbus meeting on Friday evening at ACCAD and demo this new scene with gaze teleportation for the first time. Below is one of the recordings I took of a user moving through the experience from the main menu all the way to the end, with sound from the experience included.

User playtest at ACCAD, 04/19/19.

While users were in the experience, I was watching the scene editor (as pictured above) to get an idea of where people were looking and changes that needed to be made for easier gaze detection. Blue lines in the scene above indicate where the ray is being cast from the user camera. When the line turns yellow, the user is making contact with the teleport point.

After watching a few users go through, I think I experienced a case of "designer blindness", where after working on an experience for a certain amount of time you get so proficient at moving through it that you miss some potential user issues. I was really surprised at how much people tend to move their heads in VR! The teleport points require you to hold your gaze on the collider for 3 seconds before activating, and most people would only manage two before their head twitched and the count restarted. From this, I imagine making the colliders larger would help. The further away the points are, the more difficult they are to activate. Users would tell me "I'm looking right at it!" when really the ray was hitting the floor just below the trigger point. The light cue was somewhat helpful, but I think the user needs more than that to figure out where their gaze is actually falling. I think adding a light reticle to the camera will help, and I'll be testing it in the next week just to see how it feels. I'm concerned that the reticle will add a layer of separation between the user and the scene, reminding them again of the technology and breaking flow/immersion.

I know this is just a prototyped proof of concept, but the teleporting points are not especially obvious in the scene and their function is not clear from the start. Tori and I are still required to brief the user before the experience begins, reminding them to look around, to stand up, and that the points are even there. We've been planning on using Lucille Bridges' character as a means of progression, walking ahead and calling our attention with audio cues, but even then I'm not sure how to transfer these attributes of "focus" and indication of action to a human figure. Or even if it's required - maybe a glance at Lucille Bridges' face is enough to move the user forward. This is a point of experimentation for Tori and myself beyond the current prototype.

What’s Next

Overall, I think this is a good foundation to build from. Now that I have an understanding of user action and progression, I feel I can start layering smaller interactions from the user and mob into the scene. This phase is due next week - I’ll save my thoughts on the summer for then. But in the immediate future, Phase 2 requires troubleshooting and adjustment of all the colliders in the scene. I will also be testing some reactions to proximity and gaze with one or two of the crowd members. Ideally, a user will look someone in the eyes and cause an insult to be hurled or an aggressive motion to occur. To aid in user gaze accuracy, I will add a reticle to the camera to see what that effect is like.

OUTSIDE RESEARCH

Research this week included three experiences on the Oculus Rift: Dreams of Dali, The Night Cafe, and Phone of the Wind. The descriptions of all three really drew me to them, as they're meant to be contemplative experiences requiring you to navigate space and uncover the narrative (or lack thereof).

I started with Dreams of Dali. This experience was on display at The Dali Museum in St. Petersburg, FL for over two years, and explores Dali's painting "Archaeological Reminiscence of Millet's 'Angelus'". Looking at their page about this experience, I noticed that it is available in multiple formats for VR headsets as well as a "linear 360" view. This might be the first time I've seen that much variety available on a museum page. The VR experience was also covered by admission to the museum - nice of them; the ones I've done so far on-location have required additional ticket purchases.

"Archaeological Reminiscence of Millet's 'Angelus'", Dali.

I had to laugh when the experience started up. The very first screen was a set of instructions to stare at a glowing orb for 3 seconds to move around, with a glowing orb included to begin the game. An interesting case study for the problems I was discussing in my Phase 2 project! This experience required me to move around large distances, and the inclusion of a reticle helped enormously. It only appeared when my gaze was near an orb, which left me free to explore the rest of the world without obstruction. In the actual experience, I was able to navigate in whatever order I pleased. Some orbs were only accessible from certain points, and at other points a new event was triggered. I moved out to the fringe of the desert on the other side of the structures and encountered elephants with the legs of an insect towering over me and making their way past; they continued to walk throughout the scene. Or I turned a corner and encountered a lobster sitting on top of a phone. Audio in the scene included soft rumblings and ambient effects, said to represent Dali's potential thoughts in the scene. Few words were distinguishable to me, but it really added to the dreamlike state of the place.

In the teleport actions themselves, the user actually slides quickly through the space to the given point. There's no fade in/fade out or blink. You're able to see the ground moving below you and your destination. The only time this became an issue for me was when ascending or descending the long spiral stairs in the tower - I didn't realize the next orb would just throw me directly up to the top. Not too dissimilar from when Saruman propels Gandalf up to the top of Isengard in The Lord of the Rings: The Two Towers.

Anyways. I feel that context is important in this experience. Had I been visiting the museum, I would probably have had more of an appreciation for the things included in the experience. I have a very basic knowledge from taking Art History in early college, but my understanding of Dali doesn't go much further than that. As a user at home, that additional information must be sought out independently from the experience itself. I also wonder if the "linear 360" experience is crafted to form a particular narrative or is just a path that covers all of the points. I didn't have time to go into that this time around, but I'd like to make a closer comparison in the future.

I moved on to The Night Cafe: A VR Tribute to Van Gogh, made by Borrowed Light Studios.

I'm going to have to revisit this experience, as the only way to navigate was with a console controller. Kind of odd to make that the only source of input, but until I can get that set up I'll just give my static impressions of the first scene. The assets and animations are very beautiful, and the style of the room definitely matches the painting. In the spaces where they had to guess at detail, such as the wall and door behind me, the makers said they took reference from other paintings and were able to match his style pretty well. The intro leading up to this sequence was an image of The Night Cafe painting before fading into the actual scene.

The last experience was Phone of the Wind, an interactive film based on a phone booth in Japan used to connect and speak to departed loved ones.

This phone booth is well documented; it actually sits in the town of Otsuchi, Japan, and was built as a way for people to grieve and heal after the 2011 earthquake and tsunami. In the experience, you listen to three people talking to their loved ones in the booth. I was really surprised by the types of visual content included; users begin the experience from the perspective of a drone flying over the booth. As each story begins, the world transforms into an animated scene representing what is being said. At the beginning and end of the experience, the world is made of 3D assets. I'm not sure the transitions were smooth, since they are full world transformations, but they definitely added variety and personality to each story.

The interactive aspect of this film comes in at the very end. The user is given the option to enter the booth themselves and leave a message for a loved one. I really love that this was part of the experience, and I can see some similarities between this and Where Thoughts Go. Users can choose to skip through and move on, or take a moment to privately reflect. The few instances of movement here, with the flying drone or the user entering the telephone box, are forced; there is little control over your location in the scene.

It was difficult to find any information about this experience beyond what's given on the Oculus store - the developer's website is now private. But snooping around the reviews was… its own experience. Some users loved it and were crying; others thought it was stupid and shouldn't be allowed on the store due to it not being "fun". From their comments I gather that many, like me, had no idea this was a real place with its own history and meaning to a community, not just a filmmaker's idea. While I don't think that information is necessary for the purpose of the experience, I wonder why it isn't more readily given and attached to the real-life events. Knowing the history helped ground the story for me.

CONCLUSIONS

These were very different interpretations of real objects or places. It was interesting to see how some of the gaps in information were filled in with reference material, though for the two I was able to fully experience, the outside context was not fully filled in for the user. I feel that I needed that additional information to truly enjoy and understand the content to its fullest extent. I think designers are taking these experiences that were initially in exhibitions and putting them on the Oculus or Steam stores, but not accounting for that missing information and for how the experience outside of the headset is part of the overall design process. These outside research experiences this week have really made these points clear to me, and have been helpful in clarifying my thoughts about how to organize the content outside of VR in my framework.

04/13/19: Gazing into Phase 2

Phase 2 Updates

Progress has been made! I've been focusing on getting gaze detection into the scenes I put together for the demos last week, and I think I finally have some momentum going. Initially my schedule for Phase 2 was to start small: activate a button, make something happen by looking at it. I saw a few scripts included with the SteamVR SDK, but there's very little documentation on their actual usage. I even looked through the Google SDK for Daydream and the Oculus SDK, but those scripts were not especially helpful.

So I just built it myself. I have a general understanding of the process: write a script sending a ray from the camera to collide with objects, isolate the objects to their own layer, and then have something happen once that collision occurs. In this case, the test was to change a cube from blue to red when looking at it. Initial tests had the raycast changing the cube's color even when I was pointing at the ground rather than the cube.

Raycast test in Unity - cube still changes color even when looking away from it.

With some research and experimentation, I found out the issue was in my definition of the layer mask. I wanted the raycast to only affect objects on this particular layer, and I wasn't representing that layer correctly in the script. Everything worked properly after fixing this line, and I was able to move on.

Successful raycast test, with fixed script shown.
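For reference, a minimal gaze raycast restricted to a single layer looks something like this (a sketch rather than my actual script - the "GazeTarget" layer name is a placeholder; the key point is building a proper bitmask instead of passing a raw layer index):

```csharp
// Minimal gaze-raycast example restricted to one layer, reflecting the kind of
// mask fix described above; the "GazeTarget" layer name is a placeholder.
using UnityEngine;

public class GazeColorTest : MonoBehaviour
{
    public float maxDistance = 5f;
    private int gazeMask;

    void Start()
    {
        // Build a bitmask from the layer name; passing the raw layer index here
        // instead of a bitmask is an easy way to hit the wrong objects.
        gazeMask = LayerMask.GetMask("GazeTarget");
    }

    void Update()
    {
        Ray gazeRay = new Ray(transform.position, transform.forward);
        RaycastHit hit;

        if (Physics.Raycast(gazeRay, out hit, maxDistance, gazeMask))
        {
            // Only objects on the GazeTarget layer ever get here.
            Renderer rend = hit.collider.GetComponent<Renderer>();
            if (rend != null) rend.material.color = Color.red;
        }
    }
}
```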

This detection is great, and I can definitely use that to trigger reaction animations in the mob characters within our scene. But the next step was using it as a means of transport through the scene. I made another cube and expanded the script to include a second layer specifically for teleportation, wrote a function that would change the color so I knew I was looking at it, and delayed the teleport by a variable time (3 seconds) so it became an intentional action. This script is flexible enough to identify different objects and teleport points, and gain information about those spaces. It also includes a distance cap so that objects beyond a certain point (5 units in the test scene) cannot be activated.

Gaze Teleport Test: 4/13/19.
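A simplified sketch of the dwell-timer teleport is below (not the actual script - the layer name is a placeholder, though the 3-second delay and 5-unit cap match the values mentioned above):

```csharp
// Simplified sketch of the gaze teleport described above; not the actual
// project script. The "Teleport" layer name is a placeholder, while the dwell
// time and distance cap follow the values mentioned in the post.
using UnityEngine;

public class GazeTeleport : MonoBehaviour
{
    public Transform playerRig;     // the rig that gets moved
    public float dwellTime = 3f;    // seconds of continuous gaze required
    public float maxDistance = 5f;  // points farther than this can't activate

    private int teleportMask;
    private float gazeTimer;

    void Start()
    {
        teleportMask = LayerMask.GetMask("Teleport");
    }

    void Update()
    {
        Ray gazeRay = new Ray(transform.position, transform.forward);
        RaycastHit hit;

        if (Physics.Raycast(gazeRay, out hit, maxDistance, teleportMask))
        {
            // Count up only while the gaze stays on a teleport point.
            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellTime)
            {
                playerRig.position = hit.collider.transform.position;
                gazeTimer = 0f;
            }
        }
        else
        {
            // Looking away resets the timer, keeping the teleport intentional.
            gazeTimer = 0f;
        }
    }
}
```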

Next Steps

Troubleshooting. I did notice that gaze precision is difficult to manage in the headset. I suspect that scaling up the colliders to be larger than the objects themselves will make it much easier to move from place to place. I also saw a little jump without a fade during the playtest in the above video - I have yet to recreate this, but I'm going to keep an eye out for it.

The next step for this mechanic, once it's properly adjusted in this test scene, is to bring it into Scene 01 of the project Tori and I share. I have a few adjustments to make there before another demo on Tuesday, but I'd like to have this in as a means of locomotion before then. After that, I'll be using it to trigger additional animations or environmental effects to see what they add to the scene, and experiment with the temporal and spatial placement of these triggers.

OUTSIDE RESEARCH

This week's outside research was inspired by my Intro to Cognitive Science course. One of my required response papers was based on an article titled "The Mind's Eye" by Oliver Sacks, published July 2003 in The New Yorker. Sacks, a neurologist, writes about the varying experiences and adaptations of the blind to the loss of vision of the physical world, based on personal accounts. He begins with John Hull, whose vision began deteriorating at the age of 13. Hull eventually progressed to total blindness by age forty-eight, and along the way kept journals and audio recordings discussing the nature of his condition. Not long after losing visual input, Hull experienced what he called "deep blindness", a complete loss of mental imagery in which even the concept of seeing had disappeared. To account for the loss of visual input, the brain (in all of its wonderful, weird plasticity) heightened his other senses. Sound connected him deeply with nature and the world around him, bringing him true joy and even producing a landscape of its own for him.

Sacks realized that Hull's experience was not universal. He discusses other accounts from people who can utilize a mental landscape to solve problems, produce powerful mental scenes, and manipulate this "inner canvas". He questions whether the ability to consciously construct mental imagery is even all that important, eventually concluding that the heightened sensitivity resulting from blindness is just another reproduction of reality, one that is not the result of one sense but an intertwined collaboration of all the senses at all levels of consciousness.

I mention this because I found “Notes on Blindness” on the Oculus store, a VR experience based on the audio recordings made by John Hull.

I have to say, the trailer really doesn’t do it justice.

The entire experience is based on Hull’s strong connection to natural audio. I went through this experience seated, but standing would work just as well. There are six scenes or chapters to play, each themed on a particular point: “How does it feel to be blind”, “On Panic”, “Cognition is Beautiful”. In the initial scene, you appear in a landscape built of tiny dots. I could make out surfaces and the shapes of trees, but overall you are alone. As the audio plays, Hull describes the individual sounds of the park and they build into this thriving scene - with the point that objects only appear if they are making some form of ambient sound.

Since writing this article, Sacks has published a book under the same name discussing broader sensory losses, such as the loss of facial recognition or the ability to read.

Screenshot from Scene 01: “How does it feel to be blind” of Notes on Blindness.

While the vast majority of this experience is observational, there are points where the user is required to interact with the scene. In one scene, I am given control of the wind to blow and reveal trees and a creaky swing set at a park. In another, I am required to gaze at highlighted footsteps in order to move forward, and I am given a cane in one hand to tap on the ground, illuminating the immediate area below me. The designers made smart choices with where they implemented these methods - the footstep gazing and the cane occur in a scene about panic and anxiety, one where I as a user feel useless despite being given an action, while the wind emphasized the revealing power of nature. All along the way, Hull narrates his feelings about these sounds and how they give him power where sighted people may disregard or even fear them.

The sound design throughout is phenomenal. And in the scene about panic, I absolutely felt it. The sounds that had previously signified release and peacefulness turned against me, and the world became hostile and unidentifiable, with disorganized structure and an intense color switch. The visuals emphasized a different kind of seeing, but were still stunning to look at and representative of the descriptions being given. Even though the world is visually beautiful, the sound was always clearly the priority, keeping the emphasis on cognition.

I did find a VR game title that offers another perspective, though. Where Notes on Blindness functions as a storytelling experience, the game Blind uses these interactions as game mechanics in a psychological thriller. The main character wakes up not knowing where she is and without her sight, forcing her to use echolocation to visualize the world around her. I thought this shift in focus and mechanic would make for an interesting comparison.

In all honesty, I only played the first 20 minutes of the game due to time constraints - and the fact that I could feel my anxiety skyrocketing the first time I looked down a dark hallway with no indication of what lay ahead. I’ve played enough horror games to want no part in that.

However, I was able to experiment with some of the puzzle-solving mechanics and the interactions the user has with sound. Much like Notes on Blindness, when no sound is playing I am unable to see ANYTHING in the scene. No sense of space. That, combined with the complete silence, makes for an eerie atmosphere. Throwing objects will temporarily illuminate sections of the scene, and in the introduction the user is guided by a gramophone producing sound to illuminate a path or lead the user to a specific spot. I have watched a walkthrough of the entire game and know that later on you receive a cane to use. The requirement for environmental interaction is more pronounced than in most games - without it, the game does not exist at all.

The beginning of the game includes a short story sequence shown in a comic-style format before the user “awakes” in a dark space. The intro level has three basic puzzles that introduce the user to the mechanics - a safe, a maze, and sound buttons. In the safe puzzle, the user can see two dials but no markings, and must rely on the vibrations from the controllers to unlock them. The maze is located inside a box - by turning a handle, you navigate a small ball through the passages and illuminate the interior of the box. And the sound puzzle forces you to focus on a particular melody and play its segments in order.

Navigation through the scene is a bit odd. Because I was playing on the Oculus without a third sensor, I was unable to turn my back on the two sensors above my monitor. The limited motion was a little frustrating when I just wanted to turn around to open a drawer. The user walks by pushing the joystick, sliding forward or backward, and there are minimal options for user motion beyond turning off strafing.

CONCLUSIONS

I found it really interesting how the designers of both experiences were able to take the same base information and shape it into unique narratives and interactions. I can actually plot both experiences within the framework that I’m building, as far as the roles of user and designer and the definition of the experience. It’s hard to find narrative content with the same source material right now, and I expect I’ll start seeing more patterns once I add them to my research experience spreadsheet. I’m also starting to see many of the same design decisions I’m now making in my own prototypes present in these built experiences, which suggests that designers are asking themselves many of the same questions along this process.

04/07/19: Rebuilding for Phase 2

Over the last week, the majority of my focus has been on showing demos of our thesis project at the ACCAD Open House and the Student Art Collective. Quite some time has passed since Tori and I were able to show our progress to anyone outside of ACCAD, and since we didn’t have a working prototype from last semester… we needed to make one that could be shown to and experienced by the public.

I took what was our Fall prototype and completely rebuilt it between Saturday and Tuesday evening. Part of this was to bring the project forward into a new version of Unity, but I also wanted to include the height adjustment from Phase 1 and a different mob configuration. This build would also require the user to begin the experience sitting on a bench before standing to progress, an interaction I have not tested before in this scene.

My Phase 2 project was temporarily put to the side in order to get this ready for the public, so I was not able to test out the gaze-based interaction. I decided instead to hit a middle ground between Phase 1 and Phase 2: timed teleportation. It’s not in the control of the user at all, but it’s a little less disturbing than the sliding motion we previously used. This included a fade in/fade out to signal that the motion was about to occur - a fairly simple visual, but one that caused a ton of technical issues. The fade would show up on the monitor but not in the headset. For future reference, there is a SteamVR_Fade script that you need to use in order to make the fade appear properly in the headset - normal UI settings do not seem to work in this scenario!
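
For anyone (read: future me) hitting the same wall, the fix boiled down to calling that script’s fade from the teleport logic. A minimal sketch, assuming the SteamVR_Fade component is attached to the in-headset camera - the durations, names, and coroutine structure here are mine, not the project’s actual code (and depending on the plugin version you may need a using directive for its namespace):

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch of the fade + timed teleport, assuming the SteamVR_Fade
// component from the SteamVR plugin is attached to the in-headset camera.
public class TimedTeleport : MonoBehaviour
{
    public Transform cameraRig;       // the play-space root that gets moved
    public float fadeDuration = 0.4f; // placeholder timing

    public void TeleportTo(Transform target)
    {
        StartCoroutine(FadeAndMove(target));
    }

    private IEnumerator FadeAndMove(Transform target)
    {
        // Fade the headset view to black (this is what finally showed up in
        // the HMD instead of only on the desktop mirror).
        SteamVR_Fade.Start(Color.black, fadeDuration);
        yield return new WaitForSeconds(fadeDuration);

        // Move the rig while the view is dark.
        cameraRig.position = target.position;

        // Fade back to a fully transparent color.
        SteamVR_Fade.Start(Color.clear, fadeDuration);
    }
}
```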

The new environment height scaling feature also changed how I placed certain assets in the scene and parented things to each other, as offset pivot points and the use of a Unity Terrain asset caused some weird placement issues when the scene was run. Through both demos this week we also faced some audio problems, with the volume being either too low or coming out of the wrong ear. Two solutions to this: better headphones, and making sure the VR camera has an audio listener attached. The SteamVR Camera prefab does not have one attached automatically! And yes, it took me way too long to figure that out.
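
A tiny guard script along these lines is probably worth keeping on the VR camera so this never bites us again - purely my own convenience component, not part of SteamVR:

```csharp
using UnityEngine;

// Small guard for the VR camera: if the camera prefab in use doesn't come
// with an AudioListener, add one so spatialized audio follows the headset.
// (Remember to remove the listener from any leftover non-VR Main Camera so
// Unity doesn't complain about duplicates.)
public class EnsureAudioListener : MonoBehaviour
{
    void Awake()
    {
        if (GetComponent<AudioListener>() == null)
        {
            gameObject.AddComponent<AudioListener>();
        }
    }
}
```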

I rebuilt the Prologue sequence based on some feedback from earlier on in the semester, including more images of Ruby and taking into account the order in which the images appear to create better flow in the scene. For demo purposes, I also included a “start menu” triggered by the operators (Tori and I - spacebar to start the prologue), and an “End of Phase 1” scene that loops back to the start menu.

The Student Art Collective on Tuesday went well - we were set up at a space in Knowlton with the Vive Pro and Wireless Adapter. That was actually my first time using the Wireless for anything, and it was perfect for this project. Most of the attendees were students, though we did have a few parents and professors show up and try out the scene. It was a 3-hour exhibition, which gave Tori and me a good measure of how long mobile setup would take and let us get back in the groove of giving a 30-second explanation/VR prep to new users. There was a short calibration process during setup with the bench to make sure users were facing the right way and the bench was in the center of the play space, but everything ran smoothly after that.

Friday afternoon was the ACCAD Open House. Tori and I showed our Six-Week prototype at last year’s event, and played that video on a screen this time around to show our progress in the year since. We didn’t have the Wireless for this event, but the scene worked just as well with a tether. We had some wonderful conversations with guests about our work and where it’s going. It was easier for me this time around to speak about our project - I felt much more informed and confident now that we’ve grown from the “exploring technology” phase to the “conceptual development” phase.

FEEDBACK and CRITIQUE

Both of the demos provided valuable information. The most common reaction and comment we received after users exited the experience was about the height change in the scene. Having the mob members towering over and chanting at users, some of whom are used to towering over others, was intimidating and placed them in the right mindset for this experience. We also heard that they appreciated the prologue as framing for the experience. It seemed that more of the guests this year had heard of Ruby Bridges before, and one teacher even told us she had a classroom of middle schoolers who love Ruby’s story.

The main issues we experienced were technical or had to do with user flow in the experience. Audio was a real issue in the beginning - fixed by cranking the volume to accommodate the noise of the space and using better headphones (thanks, Tori!). The fade in/fade out of the scene seemed to be fixed by having both the SteamVR_Fade script and the original Fade image active, though it would sometimes flicker between teleports. In the Prologue sequence, images appear around the user in a circle - which would be no problem if the user were in a swivel chair, but on a bench it tends to break the flow when they have to turn their head all the way back around to continue looking. Some users who stand to get out of the car will continue to walk around, while others stand in place - not really an issue, but it poses a risk of tripping over the bench unless Tori or I move it. This was especially dangerous with the Wireless demo - users without a tether are more likely to forget their surroundings and take off. The Student Art Collective demo required one of us to stand at the periphery of the lighthouses once a user was in the headset to make sure they didn’t wander off into the crowd or walk into something.

CONCLUSIONS

Overall it was a great experience, and I appreciated getting to see how far we’ve come in the last year. And now we have this great demo that I can use to prototype for Phase 2! The upcoming week has calmed slightly in GRA and interview obligations, so I will be able to actually catch up on my production schedule and begin implementing Phase 2 work into this prototype. Along the way I’d also like to polish away some of the issues that came up this week, such as the asset pivot problems I discovered and the weird fade flickering.

03/24/19: Phase 1 Conclusions

Reaching the end of my Phase 1 investigations led to the reiteration of one very powerful concept: context is key.

My work over the last five weeks has been an investigation into how designers move a user in VR - specifically, how much agency to give users and how designers can direct them down a particular path. Within this particular scene, that path is a long sidewalk that takes quite some time to traverse. I experimented with teleporting using the prefabs available in SteamVR, with the scale and location of the user in VR, and with the transitions between different kinds of motion - from a moving car to teleport points to a teleport plane.

Even though I was not able to achieve everything I outlined in my initial schedule, what I did build was functional and pretty neat in the scene.

However, feedback that I received after demonstrating the scene, and then going through it myself, was that teleporting actually takes the user out of the experience. The appearance of the teleport points is unnatural in the space that I am trying to create, and using a controller at all is arguably a hazard to immersion. It brings the user’s thoughts back around to what they’re doing instead of what the people around them are doing. I’m incredibly grateful for that insight - I hadn’t thought of it from that perspective before, but having made the scene, I have to agree.

I was really lucky to get to show this scene to Shadrick Addy, a designer and MFA student who worked on the I Am A Man VR experience, who sat down with Tori and me for an hour to discuss our work on this project. He offered much of the same critique about the teleporting, pointing out that it doesn’t make sense in context. Masking this motion with something that fits the story would be much more effective - for example, using gaze detection to trigger the movement forward. A mother urging a daughter forward might look back, gesture, or verbally ask her to keep moving. From this, we could build a mechanic where the user looking at the mother after one of these triggers generates their motion forward in the scene and along the narrative.
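
We haven’t built this yet, but a first sketch of that kind of gaze trigger might look something like the following - a raycast from the headset toward the mother’s character, with a short dwell before the motion fires. All of the names and the dwell value below are placeholders of mine, not anything from the actual project:

```csharp
using UnityEngine;
using UnityEngine.Events;

// Rough sketch of gaze-triggered progression: when the user holds their gaze
// on the mother character for a short dwell time, fire an event that the
// movement system can listen to. Names and values are placeholders.
public class GazeProgressTrigger : MonoBehaviour
{
    public Transform headCamera;        // the VR camera transform
    public Collider motherTarget;       // collider on the mother's character
    public float dwellTime = 1.5f;      // seconds of sustained gaze required
    public UnityEvent onGazeTriggered;  // e.g. start the next walk segment

    private float gazeTimer;

    void Update()
    {
        Ray gaze = new Ray(headCamera.position, headCamera.forward);

        if (motherTarget.Raycast(gaze, out RaycastHit hit, 20f))
        {
            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellTime)
            {
                onGazeTriggered.Invoke();
                gazeTimer = 0f; // reset until the next trigger point
            }
        }
        else
        {
            gazeTimer = 0f; // looking away resets the dwell
        }
    }
}
```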

Using gaze detection would have the benefit of eliminating controllers completely, something I discussed previously but didn’t fully understand the benefit of until having these conversations. In discussing the immersion this can bring, I asked him about a common structure I’ve seen in VR experiences so far - this sandwiching of VR between two informational sessions. Our project does this with an introductory Prologue as well, but my question was whether to add information in the experience as it progresses or to leave it alone. He suggested that adding information would only serve as a distraction from the scene itself, a distraction that might prevent the emotional reaction and/or conversation that Tori and I are attempting to create. There are some really interesting layers there: how the main scene is encased in narrative, how the prologue and “epilogue” scenes frame the experience, how the use of VR itself is encased within a system that provides context for the technology, and how that space is designated and placed within an exhibit discussing larger themes - all informing each other.

Coming back from the tangent, Phase 1 helped answer my question as far as the type of motion I should be considering within this space and why. I was able to work out some of the smaller technical bugs, fixes that will go a long way in the long run. And I was able to spend a lot of time doing outside research on VR experiences to help understand the decisions currently being made in other projects.

What’s Next

Phase 2 naturally follows Phase 1, and I think the best option here is to build up from what I have. I learned a lot from the last few weeks and I would really love to develop some gaze control mechanics. Being able to move forward through a crowd powered by gaze, and testing the crowd reactions that I didn’t get to in Phase 1, would go a long way toward development this summer and in the fall. I’ll articulate this plan a little better next weekend once I’ve written the proposal and work has begun. I will also be recording Phase 1 and uploading that video to my MFA page for documentation.


OUTSIDE RESEARCH

Spring Break happened since my last post and I took advantage of the time.

Museum: Rosa Parks VR

Over break I found myself at the Underground Railroad Freedom Center in Cincinnati, OH, where they’re currently hosting a Rosa Parks VR exhibit.

I was really interested in how this experience was going to be situated in the Freedom Center, and what the VR content was going to consist of. This was my first time using VR in a public space, and I came in not knowing much about the experience itself. I tried not to watch videos or read articles - although looking for the video above, I realized it’s incredibly difficult to find any information about it. Up on the 3rd floor of the Freedom Center, in a corner to the right, are four seats from a school bus on a low platform and a table for the center attendant to take tickets and give information. The experience was made in Unity and uses Samsung Galaxy S9s in mobile headsets (which were cleaned after every person), along with headphones.

Full disclosure, my headset had something VERY wrong with the lens spacing and I ended up watching the experience with one eye closed.

I saw the same sandwich structure here as what Tori and I are using - an introductory sequence, an uninterrupted 360 video experience of the user being confronted first by the bus driver and then by a police officer, and then an exit sequence discussing historical ramifications and present-day context, all narrated by a voice actor speaking as Rosa. Having users sit on the bus seats was a really nice haptic touch that I enjoyed - that weird texture and smell just can’t be faked. The user is embodied in the experience, able to look down and see what Rosa was wearing on that day. Each time slot is a group of four people every 5-10 minutes, and in our time waiting I saw people of all ages coming over to go through the experience. In order to start, the user has to look directly into the eyes of an image of Rosa Parks for a certain length of time.

I thought that the embodiment was a really effective choice for a static, seated experience that requires little to no active participation. The user is reminded by the attendant at the beginning that they can look around in all directions. I was most surprised by how it was situated in the Center. Fortunately, Rosa Parks is a pretty well-known figure in history, but if you didn’t know anything about her, there was nothing in the surrounding area to inform you. The informational segments in the experience spoke mostly of what was happening in that time period. A standalone attraction sparks curiosity about the experience itself, while being part of a larger exhibition may give greater context in the long run… so I suppose where this is placed depends on your goals for the user. I think I personally would have liked more information surrounding the experience, especially considering how complex the other two exhibits on the floor were.

I think I would need to do this experience again to examine how I felt coming out of it. I wasn’t especially affected - more distracted by the odd 360 video editing happening in the middle to try to increase depth and the funky lens adjustment in my headset - but I did appreciate the nature of the experience itself and its placement in the center. And I was able to find parallels between their development and what Tori and I are working on.

Museum: Jurassic Flight

This was the other VR experience I got to do inside a museum. We discovered it completely by accident in the Museum of Natural History and Science in the Cincinnati Museum Center.

Me flying as a pterodactyl in Jurassic Flight. Skip to 0:19 for the actual experience start.

After Rosa Parks VR, this was as opposite of a VR experience as I could manage. Jurassic Flight makes use of equipment called Birdly, which I last saw in a video of its prototyping stage. The experience requires you to lie on your stomach on this device, arms out to the side, Vive Pro strapped to your head. You take flight as a pterodactyl, soaring above trees, rivers, and mountains, observing the other dinosaurs living their lives. There is no goal here, no informational aspect to the experience. It’s all about the haptic feedback. There’s a fan at the front of the device that increases and decreases with the user’s air speed, the device tilts forward and backward based on your pitch in the game, and you control direction with the paddles at the end of the “wings” of the device.

The experience is situated just to the right of a big dinosaur exhibit, which provided plenty of context before actually going into it. It’s not particularly thought-provoking or educational, but it does add to the content already addressed in the museum from a fun perspective. It’s about 5 minutes long, very scenic and peaceful (minus the initial few seconds of motion sickness during a dive), and it was made in Unreal, so the environment and lighting were really stunning.

Again, I’m not really able to make a connection here (pre)historically or in the structure of the experience, but I was really fascinated by the haptic feedback and the novelty of the flying experience.

Anne Frank House VR

I found this experience on the Oculus Store and had to give it a go. Unlike the past two, I went through this experience at home on my own machine. I read Anne Frank’s diary in elementary school, though most of the details escaped me as an adult. This experience recreates the Franks’ annex as it was while they were in hiding from the Nazis.

Again I found the informational sandwich structure. The user is offered the option to go through the annex in a story mode or a tour mode - I chose the story mode. This begins with a fairly long narrated introduction with historical images, followed by the exploration of the annex. Very little interaction is required beyond pointing and clicking to move to the next point. Once there, narration begins, and we hear Anne Frank telling us about her daily life in each of these spaces. It’s a linear path through the space, and the only real interaction is moving from point to point.

It’s beautifully recreated. The quality of the environment really invited me to examine it closely. I wanted to see the pictures on the walls, the books scattered over the bed, what crossword questions were in the paper. Each progression revealed a little more about the family and what their everyday life was like. Hearing these stories in contrast with the empty spaces the user explores creates a wistful mood. I didn’t want to make any noise myself, between the emptiness and hearing about how the family had to remain quiet during the day to avoid arousing suspicion.

The whole tour took me around 20 minutes, and I felt like I really did learn a lot just from seeing the space and hearing fragments about life related to each segment of the house. The choice of motion likely comes from the fact that this is an experience made for mobile platforms and thus requires a controller of some kind. Beyond the pointer’s cursor, I never see the controller itself. It seems like a good compromise that doesn’t threaten the immersion of the experience.

Traveling While Black

I’ve been meaning to do this experience for the last three months. Now that I have, I can almost guarantee I’m going to need to do it again.

Traveling While Black addresses the issues black Americans have faced while traveling across the country, starting in the 1940s and ending in 2014. The interactions and interviews all take place in Ben’s Chili Bowl, which serves as a hub and safe space throughout the experience. Every interview occurs in a booth, with the user switching locations from one seat to another with each new person. The visuals themselves are beautifully executed and edited, running strong parallels between past and present. At the very end of the experience, the user sits across the table from Samaria Rice as she speaks about the day her 12-year-old son was shot by police in Cleveland, OH.

There was no embodiment, no interaction - the user is watching and listening throughout the whole experience. Placing the user in an intimate setting like a diner booth and in close proximity to those being interviewed allows the user to feel like part of the scene. It’s a 360 video that’s about 20 minutes long, ending with the point that safety is still not guaranteed for black Americans.

I came out of the experience strongly emotional and had to sit for a while to really absorb everything. Truthfully, I’m going to need to do the experience again to actually analyze the structure and think about the decisions being made, and how decisions for 360 video may differ from those for animated VR. But I do know that this is the kind of effect I want to have on the viewers of our project. And I wonder whether something stylized, rather than film, could create that level of personally jarring human connection.

Conclusions

Having gone through a few historical VR experiences now, I’m seeing this sandwich pattern of information more and more. And I think I understand, for the most part, why this is occurring. I’m also seeing how multiple narratives are being organized cohesively, as well as how one narrative can be distilled to give a whole picture without saturating the user with information. I’m going to continue with this next week and see what other kinds of thought-provoking narrative work I can find, as well as any other existing museum VR exhibits - of any kind - that I might go and explore.

03/08/19: Video Update on Phase 1

This is going to be a relatively short update on how far Phase 1 has progressed in the last few days, but finally including some video footage of the scene working, along with some of the tools I’ve been brushing up on to apply this week.

The above video is a quick demo of the teleport point placement and scaling in the scene.

What was most surprising for me was just how long the sidewalk actually became. It felt like our last prototype was dealing with issues of time because the walk down the sidewalk was too short or the walking motion was too fast. At the height of a child, the building itself becomes this mammoth, imposing object rather than just a set piece or a destination. The teleporting really emphasizes the distance too - all of the points are just at the edge of the teleport curve. I think I got lucky there. Overall this layout feels smoother, and I’m excited to start putting in the other scene elements.

On some technical notes:

  • During our demo on Thursday it was pointed out that some objects aren’t keeping scale with the ground or street planes. In the video I can definitely see the lamp posts hovering off of the ground- this may just be a matter of making sure the final assets in the scene are combined into one set object. Still experimenting with that.

  • I found in this scene that the teleport point on top of the stairs was actually really hard to get to - you can actually see me struggling with it in the video. I underestimated how large the stairs would become at that height.

  • Which leads me to the suspicion that this height ratio isn’t quite right. I recorded this experience while seated, so I thought it might just be something wrong with the math. I repeated the same thing while standing and had the same issue. I can play with some numbers to get that right.

  • This was my first time testing SteamVR with a headset other than the Vive. Up until now, all of my development has been with the Vive headset and controllers. The Oculus is what’s available to me at the moment, so I took the opportunity - it connected no problem! Teleport was already mapped to the joystick on the Oculus Rift controller. Cue my sigh of relief for a more versatile development process.

I have begun working with the car animation, starting with placing the user.

Screenshot: placing the user in the block car prototype for Phase 1.

I made the loosest possible version of a block car in Maya with separate doors and brought it in just to have something to prototype with. This is where the user’s location in space is going to become an issue - I have to make sure they’re aligned with the driver’s seat. We’re going to have the user sitting in the demo anyway, so we might be able to just calibrate the seat with the environment and have the user sit on the bench. A rough sketch of what I mean is below.
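
The idea, roughly: at calibration time, measure where the headset is and shift the whole rig so the head lands on an empty object placed at the car seat. A loose sketch, assuming a [CameraRig]-style parent with the HMD camera as a child - the field and method names are mine, not anything in the project yet:

```csharp
using UnityEngine;

// Sketch of seat calibration: offset the whole VR rig so the user's head
// lands on an empty GameObject placed at the car seat's head position.
// Assumes a [CameraRig]-style parent with the HMD camera as a child.
public class SeatCalibration : MonoBehaviour
{
    public Transform cameraRig;   // the play-space root
    public Transform head;        // the HMD camera under the rig
    public Transform seatAnchor;  // empty object at the driver's seat headrest

    // Call this once the user is seated on the physical bench (e.g. key press).
    public void Recenter()
    {
        // How far the head currently is from where it should be, ignoring height.
        Vector3 offset = seatAnchor.position - head.position;
        offset.y = 0f; // keep the floor height from the tracking system

        cameraRig.position += offset;
    }
}
```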

Working on a GRA assignment this week I also learned how to use the Audio Mixer in Unity. Turns out I can group all my different audio tracks together and transition between various parameter states. Who knew!

Apparently not me. I suspect this is going to fix A LOT of the audio issues we were having in the last prototype, especially the ones having to do with consistency - some of the volume levels were… jarring, and not in the intentional-design kind of way.
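
For future me: the transition piece really is just a call to TransitionTo on a snapshot. Here’s a rough sketch of how I’m thinking about wiring it into the sidewalk scene - the snapshot names, the trigger volume, and the “Player” tag are all placeholders, not anything that exists in the project yet:

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Sketch of using Audio Mixer snapshots for consistency: blend between a calm
// ambient state and a louder mob state as the user moves down the sidewalk.
public class MobAudioTransition : MonoBehaviour
{
    public AudioMixerSnapshot calmAmbience; // starting levels near the car
    public AudioMixerSnapshot nearTheMob;   // mob group up, ambience ducked
    public float transitionTime = 2f;       // seconds to blend between states

    void OnTriggerEnter(Collider other)
    {
        // Assumes this sits on a trigger volume partway down the sidewalk
        // and the VR rig's collider is tagged "Player".
        if (other.CompareTag("Player"))
        {
            nearTheMob.TransitionTo(transitionTime);
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            calmAmbience.TransitionTo(transitionTime);
        }
    }
}
```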

Critique

In class, I think I opened up the wrong version of the project, because all of the environmental objects were scaling without the teleport points attached. When I got home I realized that it was all fixed on my current version! One less thing to tackle.

Going away from the technical for a moment, Taylor posed an interesting question to me: how do we categorize this experience? I realize I’ve just been using the word “experience” but we’ve also discussed “simulation”. Adding that to the long list of queries for this open week ahead of me - confirming a proper term for what we’re working on, and justifying that definition.

What’s Next

  • Car animation

  • Composing Crowds

  • Connecting theory with my actions

  • Resuming my lineup of VR experiences

03/03/19: Phase 1, Midway

In the last two weeks, the physical production of my Phase 1 project has slowed in favor of investigating the theories and plans behind my thesis investigation. I came to the realization midway through Week 2 that I was approaching this prototype much the same way I approached the last three, without weighing my theoretical framework or design goals in the decision-making process.

Starting with the main project development, here are some of the achievements from the last two weeks:

  • Fixed a bug where the environment was adjusting to the player height but leaving behind the Start Point, causing the player to appear way off the mark.

  • Getting the start point to actually move the player to the right spot. I can move the play space, which at least gets us to the right area. This may be more of an issue in the car scene, but the teleport points will cut down potential issues of running through objects or agents in the crowd.

  • Added teleport points to the scene.

    • This was another check on my scale assumptions. I initially had only three points along the sidewalk, and on testing it in the Vive I found that the pointer from the controllers couldn’t even reach the first point! To compensate for the user’s smaller relative size, I added two extra points, made the space in front of the school a teleport plane for free movement (to be explored in the crowd composition portion of the project), and placed a point on top of the stairs to avoid awkward stair collisions in Unity.

  • Major debugging time with SteamVR Input.

    • This was a huge issue, once again. But I’m slowly getting better at figuring out where the misstep is between Unity and my controller bindings. I brought a project from home to the Vive at school, and that particular computer had bindings that had somehow disconnected. Nearly two hours later, we had them satisfactorily connected and shut off the haptic feedback - for some reason, the default teleport had the controllers vibrating every 5 seconds.

    • Also came to the realization that controller actions only show up in the SteamVR Input Live window if there are functions in the scene that require the bindings to be active. So if I pulled up the window to check the bindings before, say, having the teleport prefab in the scene… it would look like the buttons aren’t working. But it’s because they aren’t being called! One of those tiny little victorious moments of understanding.

Phase 1: Next Steps

I am certainly behind in development for this scene - I should be finishing up the car animation. Next week is Spring Break and I will be here in Columbus cranking out work for the majority of it, which should make up some of the lost time from this week. Therefore the goals for this week are:

  • Complete the car animation

  • Troubleshoot/Playtest on Thursday with classmates

  • Ensure a smooth transition from the car to the sidewalk

  • Check in with Tori about any potential new data to add to the crowd/car, and be in a good position to move forward next week.

Theoretical Developments

Over the course of last week, I had several meetings about where this project is conceptually and where it’s going. I briefly mentioned this in my previous blog post introducing Phase 1, but want to begin documenting my progress here as I work through the language and questions required to articulate my thesis.

My thesis goal is to articulate a framework for designers of VR narrative experiences based on the weight of specific VR design elements (gamification, user identity, movement, visual design, etc.), stemming from my interest in how to direct users through a scene when they have high levels of implied agency (control over the camera). The Ruby Bridges project is operating as the first case study for this framework, as a historical narrative. After completing Scene 01, I will use another narrative of a contrasting “genre” - currently I’m thinking mythological fantasy - to test the framework and compare how it holds up when presented with two different stories.

A huge part of this is recognizing the specific roles that users and designers take on within the scene. In film, these roles are fairly distinct: the “designers” (writers) operate as the authors of the story being told. The directors and crew operate as the storytellers, visually interpreting the material that has been given to them. And the “user” in this case is a viewer, an audience member whose role is to view the narrative that has been visually curated and placed before them. These lines get a bit blurred when we consider video games. There are still writers and designers operating as the authors and storytellers. Users become players, who function as an audience for the world put before them and, to a limited degree, as authors of their own experiences. Players have a degree of agency that allows them to act within and impart change on the world of the game, though the storytellers can still choose to restrict this agency by placing boundaries at the edge of the world or controlling camera movements. Yet every player will play a game differently.

Virtual reality requires the creation of new roles. Users in a virtual space have more inherent agency than ever before, with control over the camera and their physical pose. Designers still function as authors and storytellers, but also as directors, responsible for guiding a user through the scene. Users, through their newfound agency within the world, then become part of the world as actors.

Working map of the roles of designers and users across film, games, and VR.

With these roles in mind, I’ve begun constructing a loose pathway for defining the goals of the experience and the elements that should be considered when working within VR. I designed this with a top-down path in mind, though it’s brought up some side questions about whether a bottom-up approach beginning with the exploration of one particular element would be possible. The map below is a working representation of the pieces I’m currently trying to put together, although I know this is a sliver of the questions that are asked during the design process.

Working framework map connecting experience goals with VR design elements.

It was pointed out to me last week that the Phase 1 project is tackling questions of the role of the User as an Author/Actor. I’m focusing on how the user moves through this scene, and whether giving them that agency is right for what the scene demands.

I haven’t added any VR games or experiences to my list recently - moving apartments has me at a bit of a disadvantage at the moment. But I have instead begun building a spreadsheet to examine the various elements in the games I’ve been talking about and how they compare across a wide range of experiences.

Tori will begin adding her thoughts and experiences to this list. Next weekend I’ll be going to the Rosa Parks VR experience at the National Underground Railroad Freedom Center, and I was given some good references for experiences to examine over the next week - Traveling While Black among them.

Connecting my theoretical framework with my developing project, outlining specific goals, and being very clear about what I want from these experiences is going to be the priority for the next few weeks.

02/17/19: Phase 1 Begins

Projects like Orion and spending time in other VR applications have been a welcome break for exploration, but this week brings the return of thesis. We’re working on projects in phases, with Phase 1 lasting for the next five weeks.

I’ve been thinking about our prototype of Scene 1 (Ten Week Prototype) from the Ruby Bridges case study last semester. The final result was not a functional experience technically or visually, and after speaking with peers and receiving feedback I realized that I needed to go back to some fundamental concepts to examine some of the decisions made in designing the experience, such as timing, sequencing, motion, and scene composition. I feel that our last project started getting into the production value too soon when we should have been focusing on the bigger questions: how does the user move through the virtual space? How much control do we give them over that movement? What variations in scale and proximity will most contribute to the experience? These are the questions we started with and seemingly lost sight of.

In developing the proposal for my project, I also began considering more specifically what I’m going to be writing about in my thesis. And, more importantly, I began putting language to those thoughts. Recent projects have allowed me to question what parameters designers operate with when designing a VR narrative experience. It gets even more complicated when we start breaking down the types of narratives being designed for. In this case, the Ruby Bridges case study is a historical narrative - how would those parameters shift between a historical narrative and a mythological narrative? What questions overlap? Orion was a great project for examining the design process for narrative, and now, in shifting to another, I’m interested to see how that process carries over.

Phase 1: Pitch

Production Schedule for Phase 1

I will be creating two test scenes to address issues faced in the 10 Week Prototype. The first will address motion - how can a user progress through this space in the direction and manner necessary for the narrative while still maintaining interest and time for immersion? And does giving this method of progression to the user benefit the scene more than the designer controlling their motion? In the previous prototype we chose to animate the user’s progression at a specific pace. This time, I will be testing a “blink”-style teleporting approach, allowing the user to move between points in the scene. Each of these points creates an opportunity for me as a designer to have compositional control while still giving the user control over their pace and the time spent in each moment. This also provides an opportunity for gamified elements to be introduced, which is something I will be exploring as I move through the project.

The second scene addresses proximity and scale, creating a scene where the user adopts the height of a six-year-old child and the world around them is scaled accordingly - even to the point of exaggeration, so I can experience that feeling for myself. It was suggested in a critique last semester that I create these small experiences and go through them just to understand how they feel, and I agree with this method - more firsthand experience would certainly help inform the final design decisions. I will again be experimenting with the composition and density of the mob outside of the school to create some of these experiences.

Week 1

I purposefully scheduled Week 1 to focus on planning out the rest of the project and getting a strong foundation built. I planned out what I was going to do in each scene and brainstormed various ways to solve technical issues. Writing my project proposal had already helped solidify these plans, but I’ve developed a back-and-forth process with my writing. My sketchbook helps me get general concepts and ideas going, while the proposal puts formal language to those ideas. While writing the proposal I usually find a couple of other threads that I hadn’t considered, which brings me back to the sketchbook, after which I update the proposal… the cycle continues, but it has been especially productive over the last two weeks.

I focused on getting the overall environmental scaling and test space created this week using assets from our previous prototype. The issue was having the user start the experience in the right scale and position every time. Locking in the camera in VR is a pretty big “NO”, and Unity makes it especially difficult as the VR Camera overrides any attempts to manually shift it to its proper spot.

Scaling was much easier to figure out than I expected - rather than forcing a user to be a height that physically doesn’t make sense to them, I’m scaling the entire set to account for the user’s height at any given point relative to the height of a six-year-old (1.14 m). I expected this code to be much more difficult, but so far it seems to work pretty consistently when I test it at various heights.
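
For reference, the core of it is just a ratio applied to the set’s root transform. A simplified sketch of the approach - the field names and the moment it runs are placeholders, and 1.14 m is the child height I’m designing around:

```csharp
using UnityEngine;

// Sketch of the environment scaling approach: instead of shrinking the user,
// scale the whole set up by (measured headset height / target child height),
// so the user experiences the world at roughly a six-year-old's scale.
public class ChildScaleEnvironment : MonoBehaviour
{
    public Transform environmentRoot; // parent of the whole set
    public Transform head;            // the HMD camera under the rig
    public float childHeight = 1.14f; // target height in meters

    // Run once at the start of the experience, after the user is in position.
    public void ApplyScale()
    {
        // Headset height above the play-space floor; treated as approximate
        // user height for the purposes of this ratio.
        float userHeight = head.localPosition.y;
        float factor = userHeight / childHeight;

        environmentRoot.localScale = Vector3.one * factor;
    }
}
```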

I’m still working on getting the recentering function to work. I found a lot of old documentation from 2015 and 2016 that doesn’t account for all the changes in Unity and SteamVR. There are some good concepts in there, and even a button press would be fine for now. I’m planning to keep exploring this, and I expect I’ll be working on it throughout Phase 1.

NEXT

  • Begin Blink teleport testing through the scene.

    • When I made this schedule, I didn’t realize that SteamVR has a Teleport Point prefab. So, yay! Production time cut down! I’ll be using that spare time to add in primitives simulating the placement of the crowd and brainstorming potential gamification/timing. I may also go on a search for some audio and add that to the scene as part of my testing.

  • Experiment with button pressing versus gaze direction. How does the scene feel without controllers? Would gaze navigation be effective here?

  • Playtest #1 with peers, gaining feedback on the button or gaze mechanisms and other developments made during the week. Will also gain feedback on the scaling and positioning of the user.


OUTSIDE RESEARCH

The games I played this week were all very physically involved, with a lot of motion required on the part of the player. However, none of these games used methods that required teleporting or “artificial motion” via joysticks or touchpads - all were based on the motion of the player’s body. Even more interesting, I experienced a stronger sense of flow in these games than in past titles, though each for different reasons. Considering my thesis, which will not be this action-oriented, it’s helpful to see how specific components in these games - sound, motion, repetition - are used in ways that ultimately make a flow state possible.

FLOW VIA SOUND: Beat Saber

Beat Saber is a VR rhythm game that operates as a standing experience, where players use their arms and lean to hit cubes with sabers on the beat and in the indicated direction. Unlike the others, I’ve been playing this game for a few weeks and have had time to examine my increase in skill level as well as the kind of experience I was having. It was initially very difficult to get used to the cubes flying directly at me and to react to the arrows indicated on them - a longer adjustment than I expected, actually. I play games like this on my phone using my thumbs, and my body knew what it needed to do… but had a difficult time getting my arms to react. After a couple of weeks I can now play the above song on Hard mode, which is what I’m including for this group of games.

Every time I play a song, I usually get to a point where I experience flow - able to react to the cubes as they come and follow the rhythm without really even thinking about it (and significantly better than if I am thinking about it). It’s a state that feels instinctual and occasionally feels as though time slows down, a common description of flow. Sound is what drives that experience; without the music, this would be much more anxiety-inducing and stressful than enjoyable.

After playing, I was thinking a lot about Csikszentmihalyi’s book Flow, where he outlines several important features of a flow activity: rules requiring the learning of skills, goals, feedback, and the possibility of control. Even with varying definitions of what is considered a game, most require those components in one way or another. He references French psychological anthropologist Roger Caillois and his four classes of games - based on those, Beat Saber is an agonistic game, one in which competition is the main feature. In this case the competition is against yourself, to improve your skills, and against others, to move up the leaderboards. However, as frequently as I fell into flow, I also fell out of it easily when a level grew too difficult or beyond my skills.

FLOW VIA MOTION: SUPERHOT VR

I’m not quite sure how to categorize Superhot VR, but it’s the most physical game I’ve ever played in VR. Players can pick up items or use their fists to destroy enemies making their way toward them in changing environments… the twist is, time only moves if you move. Every time I rotate my head the enemies get a little closer, and if I reach out to pick up a weapon I suddenly have to dodge a projectile. As the number of enemies increased with each level, I found myself kneeling, crouching, or dodging. There is no teleportation or motion beyond your own physical movement.

Everything here is reactionary. I experienced a strong level of flow, unlike the intermittent experience I tend to have in Beat Saber. Time being distorted here and used as a game mechanic almost seemed to echo those flow states. The stages are all different with minimal indication of what is coming next, and often the scene starts with enemies within reach. I didn’t have to think about what buttons or motions were required to move, it was a natural interface - I could just move my body to throw punches or duck behind walls. While this was effectively immersive and did result in a strong flow state, I was pulled out of it immediately every time I ran into a wall in my office or accidentally attacked an innocent stack of items sitting on my desk.

Sound was minimal, which I very much appreciated, but that sets this game in stark contrast to Beat Saber. The focus of this game is motion, not music or rhythm. On a continuing side note from the last two weeks, death states in Superhot VR were much less disruptive than in the other games. The entire environment is white, so the fade to white and the restart menu aren’t very jarring or disruptive to the experience. It was easy to jump back into the level and begin again. This may be an interesting point for transitioning between scenes in my thesis - having a fade or transition that is close to the environment rather than just doing the standard “fade to black”. I suppose it depends on the sequence I’m designing… a thought for next week.

Elven Assassin VR

And last, this is a game that combines a little bit of everything. Elven Assassin VR requires you to take the role of an archer fending off waves of orcs planning to invade your town. Your position is generally static with some ducking and leaning, and the ability to teleport to different vantage points within the scene. This deals in precision and speed, and the physical motion of firing the bow. The satisfaction of hitting a target in this game was immense, and I ended up playing until my arms hurt. The flow in this game comes from the rhythm of motion - every shot requires you to nock, draw, aim, and release the arrow to take down one enemy. There isn’t really a narrative occurring in this game at the moment. It tends to operate more like target practice, and the concentration required was what induced that flow state.

Falling out of flow was a little easier here due to technical glitches - tracking on my controllers would get disrupted and my bow would fly across the world while I fell to a random orc sneaking through the town. The multiplayer function is also really interesting here, and the social aspect may be an avenue worth exploring with this game.

Conclusions

I didn’t actually expect to talk about flow at all - it was just a happy side effect. These are three VERY different games, and that experience of flow was the strongest commonality between them. This goes back to game design as a whole rather than VR design specifically, but the little differences in how each game approached physical action and reaction to the environment really drove the point home for me. Where Elven Assassin VR focused on action that was repetitive and chaotic, Beat Saber focused on the rhythm of those actions and applied them to the template of the song. Superhot VR left the chosen action up to you, but suggested some paths and required movement to occur in order to advance. The result was neither repetitive nor rhythmic, but it required control.

I am not planning on making experiences as heavily focused on action and movement as these, but bringing what I’ve seen here - from the choice of motion to smaller actions or interactions with the environment - into my thesis work might help me answer some of the design questions I’m exploring in the Phase 1 project. How can a user move through a space? I’m considering teleporting from point to point, but I have not yet thought about the potential secondary actions on behalf of the user - those spaces where gamification could occur. These games re-framed motion for me, reminding me to define more specifically the type of motion expected of the user and to ensure that the motion (or lack thereof) enhances the experience itself.

02/10/19: Reviewing Orion

After five weeks, the Orion project has come to a close. And as with most projects, the final result was vastly different from what I anticipated when I began.

Textured image of Orion in UE4 editor

PROCESS

When I began Orion I anticipated a fairly straightforward process: I would be working with Quill and Unreal to learn the pipeline for each and between the two. What I had forgotten was that I had never created an observational narrative experience from scratch in VR. I am usually planning for some form of interaction, or, in the case of my thesis project, the narrative and environment are already described for me. Traditional storyboarding and animatic techniques were not going to work, which is where my foray into Maquette and Tilt Brush came in. Every step of the process meant steamrolling through technical issues to see what worked and what didn’t.

Process path for Orion

I realized that I really just needed more time to learn the painting and animation techniques for Quill, along with all its quirks. I was excited about painting the cabin last week, but ultimately the asset ended up not working and I built it in Maya and Substance Painter instead… I have never been so happy to be back in Maya, to be honest. I used the terrain tools in UE4 and the “Forest Knoll” asset pack I purchased a few years ago to build the rest of the environment. I used a few Quill animations, such as the candle and the stars, as “accents” to the rest of the scene.

On a personal process note, while putting together the scene I made the decision not to use any visual reference at all. This was for two reasons: to avoid hyperfocus on unnecessary details, and to operate within the essence of memory. The project description was to show the essence of a memory in 15 seconds - and 15 seconds is a very short time in VR. That’s usually about how long it takes for a viewer to orient themselves and focus in on the story. I didn’t want to overwhelm the viewer with an overly detailed environment that misses the point of my memory, and if I used visual reference, I would shift focus from my own memories to what the scene “should be”.

CONCLUSIONS

Even with all of the roundabout processes, I felt the final result was remarkably close to what I remember. Closer than I expect the original storyboards would have been. I think those would have been visually exciting and fun to watch, but that’s not what this moment was about. It was a quiet fifteen seconds on the deck in nature with just the stars and the sound of the trees. I have yet to share this experience with my partner to see how her memory might differ from my own.

I also learned a lot about the technical aspects of these tools, and personally did not enjoy using Quill for most of my painting time. It was fun to make some looping animations, but I doubt I’ll ever actually use this again for a project in the near (or distant) future. The final result was something I probably could have made in three or four days of work in Maya and Unreal, but I feel that I’m at a good point to move forward if I want to use Unreal for future VR experiences and feel more informed about the pipeline options available to me.

NEXT

  • Documenting Orion. I’m having a difficult time getting a video of the full experience because the scene is so dark. In the headset it’s easy to see, but the screen recordings I have taken so far have been really low quality and dark. I’m currently working on some rendering options in Unreal that may produce a better result.

  • Begin Phase 1 Project. I will dedicate next week’s post to the Phase 1 project centered around my thesis, but currently I’m still working out a final plan and some language to describe the project itself.


OUTSIDE RESEARCH

Continuing my theme of playing VR games and experiences for research, this week I went for a bit of a different track. I did some digging around in the Oculus and Steam stores, and I was able to play four of the games I had lined up.

BOARD GAMES IN VR

I think the initial question here for me was “why?” I enjoy board games, specifically the social aspect: sitting around with friends chatting, accusing each other of hiding cards, accidentally bumping the board and sending pieces flying. It’s all part of the experience. I noticed the Oculus store has several chess applications, so naturally I had to download one. I also found a Catan VR app that I wanted to try.

(I found out while writing this post that both applications are made by the same studio, Experiment 7.)

The main thing a board game in VR is missing compared to a board game in real life is that social aspect, which is really what these games are trying to recreate. Catan’s environment looks like a mountain lodge, with views of mountains outside, a nice soundtrack, and four chairs sitting around a table. Chess is similar, taking place in a library by default. I played against the AI in both; in Chess this produced a little robot figure watching me across the table, while the Catan foes were painted portraits with moving eyes and facial expressions. That bit was a little unnerving, to be honest.

I got absolutely destroyed in both games, but I was surprised how much I enjoyed the experience of sitting in a chair interacting with the other “players”. The animated board in Catan was a nice touch, although things move so quickly it took some getting used to. Being able to physically pick up a chess piece and hesitate or fiddle with it before moving was a great improvement over playing typical browser games. I felt present in the world and able to interact with the other players, feeling real frustration with them when I lost resources or had a bad roll. I was worried that these games would simply animate the board and leave it at that, but the efforts made to engage the players in the space and with each other made for a much more effective experience.

EXPLORATION and PUZZLES

The first game I played is called “I Expect You to Die”, in which the player is a secret agent going on missions where the path forward must be determined from actions and clues in the space - and often the process of figuring out that path results in a gruesome death. I played the first level of this game a few years ago, but since then they’ve added a beautiful animated introduction and several new levels. This game is meant to be played seated, with the player reaching out or leaning to move, or using their telekinetic prowess to bring objects to them.

In this case, the lack of locomotion around the scene increases the challenge and still makes for an enjoyable experience. It becomes accessible for all kinds of players and play spaces, and the missions themselves have good variety… though the deaths are still extremely startling in VR. The controls especially just seemed to work, and I enjoyed a great level of dexterity in the scene, switching between objects and using them with ease.

The last game was “Internal Light”, an escape-room-style game where the user must navigate a creepy, dark building to make it outside, with a tiny ball of light as their guide.

Now, when I started this game, I didn’t know what it was going to look like or what kind of gameplay there was going to be. You start off in a cell chained to a bed, in a scene that looks like it’s out of Resident Evil. There will be a week when I go into horror games, but I was not planning on it being today and, well, I’m a chicken. I immediately started sweating and wanted to leave (escape?). The game itself is not a horror game, it’s just creepy. But the environment was effective for building suspense and tension.

What really sticks out for me here is locomotion. The player moves by holding a button and alternately swinging their arms back and forth in a skiing motion. I have NEVER seen this before, and it was oddly effective. To navigate, the player is required to crouch and dodge security, and there’s a special kind of anxiety in swinging your arms to move from one cover to the next, hoping you’re moving fast enough. Even though I was standing, I didn’t get motion sick, and I was able to run through most of the game fairly quickly. I didn’t see any options to adjust these settings.

CONCLUSIONS

All four experiences used their environments to create presence in the space, and included a level of AI “social” interaction. Whether it was the calm atmosphere conducive to board games, the action-hero-inspired music and imagery, or the anxiety-inducing horror themes, the environments were really the selling point for each experience. The social interaction between computer and user (with the potential for multiple users) counteracts the isolation that VR can sometimes induce, as discussed last week. I’m still curious about why that form of motion worked so well for Internal Light, and I want to see whether any similar methods show up as I continue to explore what VR experiences are out there.

2/3/19: Painting and Planning

As a production week, I’ve been splitting my time between getting audio set up in Unreal and getting objects made in Quill. These last few days are where I get to put it all together with the final bits of animated assets.

I’m operating a little in the dark right now (pun intended) on what the final look of this piece is going to be. I timed out some atmospheric fog to reveal the scene slowly and made some cues for the sound effects: a match lighting, trees swaying in the wind, ambient noise for the surrounding scene. The narration is in the scene, but I still need to adjust the timing and put it all together.

Quill has been easier for making static objects. I painted the cabin setting for the user - quicker than I expected using the straight line tools and some colorize to get the final shading in. The candle currently in the scene feels a little too bright, so I tried going darker to see what the lighting in Unreal can do. One annoying thing about painting scenes like this in Quill: if you’re painting a lot with a specific color, the lack of lighting tools in the program makes it really difficult to see the cursor against those colors. I sometimes got lost trying to find where my brush was in the cabin even though my hand was right in front of my face. Click the images below to check it out, though they’re really dark when not in the program.

This last leap is about putting all the pieces together and testing it out. By the end of today I should have all the assets in and will be doing the final bit in Unreal. We were able to get both Quill and Unreal working in the labs, which has significantly increased the time I can spend in production.

What’s Next

  • Finishing the last few Quill assets

  • Compositing the 3D and Quill assets

  • Finishing audio timing

  • Adding a “Restart” button, so that the experience can loop at the viewer’s choice or provide an easy restart between viewers (a reach, but would be ideal)

  • Troubleshooting


Outside Research

NARRATIVE

Throughout this project I’ve been thinking about how to direct the viewer’s attention to the events you most want them to see, while taking into account that they have agency over the camera itself. Part of that has to do with seeing the viewer themselves as an actor within the scene, and the designer as a form of director. I’ve also been thinking about how that production process differs in VR compared to the traditional 3D workspace - something I’ve been struggling with myself on this project, trying to find that path in a very short amount of time. Then an article about “Cycles”, a VR short that Disney released late last year, came across my path.

Disney’s “Cycles”, from AWN article. Source

“Cycles” has a really interesting visual feature: when a viewer looks away from the central action to an area off to the side or behind them, those areas desaturate and become darker. I also read that they used Quill to create storyboards for the film and developed a number of virtual tools to experience each stage of the process both inside and outside of VR.
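
As far as I know Disney hasn’t published how that effect is implemented, but the basic idea is easy to sketch: measure how far an object sits from the center of the viewer’s gaze and blend its color toward a darker, desaturated version as that angle grows. A rough Unity C# version might look like the following - everything here (GazeDesaturate, innerAngle, outerAngle, the assumption of a simple material color) is my own hypothetical stand-in, not their actual technique:

```csharp
using UnityEngine;

// Hypothetical sketch of a "Cycles"-style gaze desaturation (not Disney's code).
// Attach to an object with a Renderer whose material exposes a plain color.
public class GazeDesaturate : MonoBehaviour
{
    public Transform viewer;        // the VR camera
    public float innerAngle = 30f;  // fully saturated inside this angle
    public float outerAngle = 90f;  // fully desaturated/darkened beyond this angle

    private Renderer rend;
    private Color baseColor;

    void Start()
    {
        rend = GetComponent<Renderer>();
        baseColor = rend.material.color;
    }

    void Update()
    {
        // Angle between the viewer's gaze direction and the direction to this object.
        Vector3 toObject = (transform.position - viewer.position).normalized;
        float angle = Vector3.Angle(viewer.forward, toObject);

        // 0 near the center of gaze, 1 far off to the side or behind the viewer.
        float t = Mathf.InverseLerp(innerAngle, outerAngle, angle);

        // Blend toward a darker, fully desaturated version of the original color.
        float gray = baseColor.grayscale * 0.4f;
        Color dimmed = new Color(gray, gray, gray, baseColor.a);
        rend.material.color = Color.Lerp(baseColor, dimmed, t);
    }
}
```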

GAMES

Moving away from the Quill project and towards my Thesis, I decided to use our unexpected Snow Day to conduct some VR research… using my chunk of time to experience the variety of things available on Steam and broaden my understanding of what techniques are being used.

I started with The Talos Principle VR, a game that I enjoy playing on the PC. When VR headsets were first released, many game studios started porting their existing titles over to VR by just swapping out the controls and leaving the content otherwise unchanged. I wanted to be able to do a direct comparison of the two.

Screencap from Youtube playthrough by Bangkokian1967 (source)

What I was really exploring here was how they approached movement. The Talos Principle is incredibly nonlinear: players generally get to choose how and where they go, and what path they take to get there. It’s a puzzle game with generally realistic assets, and moving to avoid enemies is a huge part of successfully completing each stage.

The game gives the player an enormous amount of control over how they want to move, exposing every option for how you move and how the camera adjusts for that movement. I started with teleporting, which works okay for crossing long spaces. But in confined spots with enemies that require you to move quickly, the few seconds it takes to acclimate to your new location tended to get my character killed.

Oh yeah - dying in VR? More disturbing than I thought it would be. It’s just a little explosion sound and a fade to black, but still very startling.

Walking using the touchpad didn’t make me as sick as I thought it would after I adjusted the vignette over the camera and made sure to stay seated. Standing resulted in a quick loss of balance and motion sickness, though I noticed that movement in a direction where I wasn’t looking also made me a little queasy.

Overall, I thought the adjustments made to the motion in the game worked well, and I was able to play for over an hour before taking off the headset. I’m not sure the game experience was especially different from playing on a PC, but I’m also aware that I already know the story, and it may be difficult to judge how immersed I was in a game I already know how to play.

EXPERIENCE

The last thing I wanted to look at was an experience called Where Thoughts Go: Prologue, available on Steam. The user sits in an environment and is presented with a question; they can listen to the anonymous answers of other participants and then record their own to move on to the next. There are five questions, and I still spent over an hour in this experience.

Where Thoughts Go: Prologue, Chapter 2.

Each environment changes to suit the question, from lighthearted for the first question to darker and more somber for the last. The experience was incredibly meditative - the environments are pleasant to sit in. The little orbs in the image are the responses of previous people. You listen to their voices answering, and I was shocked by how open and honest the answers were. Being able to hear someone’s voice crack a little bit as they talk about a sad event, or get higher discussing an upcoming wedding to their love, just pulls me further into the space.

VR can be considered isolating, as for the most part we’re all just sitting by ourselves in a headset in our own worlds. This took an isolating experience and turned it into a communal feeling, a place where you can be vulnerable without risk. There are no usernames or accounts, just a recording. When you add your own recording to the space, you pick up the orb you’ve just made and pass it off to join the world. That small ritual provides a sense of closure and just enough participation that I felt like part of the experience.

Where Thoughts Go: Prologue, Chapter 2

Conclusions

I realized that I haven’t been very involved in what’s happening in VR outside of the academic research world, and need to continue going through these experiences alongside my own research. As I go through I’m keeping a journal of notes from each experience and what I can take away from them. I would like to play a made-for-VR game next week and see how that feels compared to a port like The Talos Principle, and search for other more community-based experiences like Where Thoughts Go.

1/27/19: Quill to Unreal

Where last week was full of conceptual challenges, I encountered all of the technical challenges over the last few days. Before moving too far into production in Quill, I wanted to make sure this was a pipeline that was feasible and functional within the time span I have. I also wanted to double up my time by tackling the technical challenges while still working on an animatic.

Maquette was a good first step last week. I have found that planning for VR in VR is a key part of the development process… yet it’s still difficult to choose a tool. While Maquette presented many opportunities for rapid spatial iteration, what I really needed - light placement, pipeline development, and audio - just wasn’t viable in that environment. I needed to get Unreal functional. I was able to get Quill and Unreal working together on the same computer in the lab, but I still face issues there with frequent crashing and Quill being incredibly picky about tracking. I haven’t run into those issues at home, so I’ve been doing most of my experimentation on my own setup.

I’m still getting used to the controls in Quill. They don’t feel especially intuitive, though I am slowly getting better with time. Working on another project in Tilt Brush earlier in the week, I got more of a feel for painting techniques in a program that is much more pared down and easier to iterate in. Jumping back into Quill afterwards felt a little more comfortable, and I am getting faster. To test the pipeline I animated a candle flame and sparks that are a major lighting source in my story:

Animated flame in Quill

I watched Goro Fujito’s video on how to animate brush strokes for falling leaves - incredibly helpful, though I was still having a rough time getting the hang of the painted lathe trick for making solid objects.

I exported that out of Quill as an Alembic file, and that’s where some of my trouble began. There is very little documentation on the pipeline for bringing a Quill animation into Unreal. Unity has its own Alembic importer and shader for Quill, and Maya has some decent documentation. I tried bringing it into Maya, exporting into different formats, and bringing those into UE4 and Unity, but I kept having a ton of issues getting the animations to play properly and the textures to display.

It turns out the final process was to export the Alembic from Quill, import it into Unreal as the experimental Geometry Cache component, and make my own material with a vertex color node. I’m pretty sure I can separate individual layers in Maya and export them as their own Alembic files, but that’s a process for the more complex elements in the scene and I haven’t tested it yet.

I started building a scene around that, getting some lighting in there and blocking out some of the bigger landscape features that I’ll later be painting.

UE4: Blocking in geometry.

Unreal has a pretty good Sky Sphere with illuminated stars, so I’m using that as a stand-in right now. As I blocked in shapes I made sure to check periodically in the Oculus that the scale made sense for what I’m trying to accomplish. I am also familiar with the Sequencer tool in Unreal, so I have been using it to key values in the scene and create a basic animatic. The result is a developing block-in for my project that functions as a VR animatic while I get more familiar with Unreal. The viewer starts in a dark fog, which then lightens briefly to reveal the candle lighting. Over time the fog recedes and the stars show. I plan on guiding the user’s attention with specific lighting cues, the first already in the scene with the rising sparks on the candle.

Current state of the sequencer, which I’m using to coordinate my animations.

Going through the process, I think my biggest question right now comes down to scale (again). I want the viewer to feel small in the beginning, but then transition to feeling close and connected. Scenes in VR have a tendency to feel very large, and distances seem much farther than they should be. I’m interested to see how much more effective a dramatic shift in proximity to the viewer can be in a VR space. On a project last semester I fell into the habit of testing the scene in the VR Simulator instead of in the headset. The result was a scene that felt too large for the user, and I’m already starting to catch those instances just by working in the headset more frequently. Animating in Quill has been really helpful as well, as I’m able to use my body as a reference.

Next Steps

Adding in the narration audio to work out the timing will be the first step this week, followed by bringing in more Quill animations and static models by the end of it. Now that I understand the pipeline a little better and how to move around Unreal, I’ll be able to bring in all new work and add it to the scene as I go.

I have also begun collecting sound effects, and will be using those to build my scene up throughout the week.

1/20/19: Considering the Narrative VR Pipeline

Planning a narrative experience in VR requires its own pipeline and structure. Logically I already knew this, but I still went into development with the same animation mindset. I spent this week focusing on fleshing out the narrative itself, creating storyboards, and determining which technical paths are viable - a process that reworked itself along the way.

Gathering References

I began gathering some reference this week, looking for potential lighting inspiration and trying to determine how these scenes are created. Goro Fujito’s work was great inspiration, but I realized that watching his renders and videos is exactly the same as watching a traditional animation - I needed to experience the scene itself, to see where the lighting takes place behind the viewer, above the viewer. How does the scene play as a whole?

Tilt Brush to the rescue. Tilt Brush lets users select scenes uploaded by other artists and watch them being painted in VR, or skip ahead to the final result. I went through many of the scenes while in the Vive, focusing on those whose lighting most closely matched my own or whose style would be useful to observe.

Keeping in mind that Quill does not seem to have the wide variety of playful brushes available here, watching how the artists structured these scenes gave me some ideas for potential visual styles and techniques. After-Hours Artist is the only one I experienced that used 3D models that were then painted on top of, something I mean to explore further in Quill. Backyard View showed a series of single paint strokes layered in front of each other, then used a “fog” brush to emit a tiny bit of light and create depth - incredibly effective and dramatic in this case. And in Straits of Mackinac, the artist created the illusion of water by setting the background to a dark blue and implying reflection with only a few brush strokes.

Just by being in VR I found I was able to more fully deconstruct the scenes than I would in a still render, setting the path for my way forward.

Story(board)

At the same time, I have been fleshing out what it is I want to happen in this experience. The result was the following initial concept:

As I was working through the story, I grew frustrated.

Storyboards are a standard part of the animation pipeline, and I fell into the process of making one without realizing that the end result would be nearly useless for conveying what I am trying to create for this experience. Storyboards assume that designers have control of the frame, that what is presented to the viewer is a carefully constructed composition flowing from one scene to the next. At this point in VR, the designers have next to no control over the camera. I can choose which direction the viewer may start out facing. I can provide a limited scene with nothing else to focus attention on. I can attempt to draw their attention with sound cues and peripheral movement. At the end of the day, the viewer gets to control which details they experience within this world. Creating these storyboards did help me generally work out what I would like to happen within the experience, though I do not believe they are useful in helping me convey that to others.

I’m currently taking a Narrative Performance in VR class that is discussing many of these topics, and one helpful thing from this week was a Variety interview quote from John Kahrs discussing the making of Age of Sail. Kahrs comes from an animation background and talks about having to break that pipeline in order to develop an animated VR cinematic: “I was told not to storyboard it and just dive into the 3D layout process, which, I think, was excellent advice.” In that same lecture, this diagram from the AWN article “‘Back to the Moon’ VR Doodle Celebrates Georges Méliès” was presented:

The designers for that experience split the scene into sections and mapped out the action occurring in each part of the scene at every point in time. Thinking about the scene in this way, as a production rather than a composition, changes the way I’m approaching both the narrative itself and the production process.

Time to change tactics. VR manipulates space, not a frame. It then follows that I should begin feeling out that space in order to “storyboard” my animations.

I moved into Microsoft Maquette.

Maquette makes it easy to rough things out. I can place basic 3D shapes at all scales, use a painting tool and a text tool, and create multiple scenes that can easily be flipped back and forth to watch a progression. I can view these scenes from a distance or at the viewer’s level. After experimenting with the tools, I began building a primitive scene to understand spatially what manipulations I wanted to happen. The result is an odd combination of an animatic and a storyboard.

Technical Progress

I did some experimenting in Tilt Brush, first with painting and then with the export pipeline. I am currently still waiting on Quill and Unreal Engine to be available in the lab, but will be spending this weekend working on my Oculus at home to see the results. Tilt Brush gave me some practice working with painting in a virtual space, specifically dealing with depth and object manipulation. I chose to create one of the chairs from my scene with the candle sitting on it as a test subject.

Painting in Tilt Brush of a candle sitting on the arm of a chair.

I turned most of the lights down in Tilt Brush to get a feel for what the scene would actually be like, and see what the various brushes would produce in terms of light. Not very much, as we can barely see in the image above.

What I really wanted to test was the export process from Tilt Brush to Unreal Engine. Tilt Brush exports as an FBX with the textures, but upon importing to UE4 I realized that the FBX is split into pieces based on which brush you used for each stroke. Further, the materials don’t seem to work without undergoing a process in between to assign a vertex color map to the object. I’m still a bit hazy on this process, though from my understanding Quill exports in a different file format that will seemingly not require this middle step.

Unreal Test - bringing a Tilt Brush model in, without functional textures.

Unity, however, has a package made to work with Tilt Brush materials called Tilt Brush Toolkit. Once I downloaded it from GitHub and loaded it into a fresh Unity scene, I was able to import my model without any issues with the textures. All I had to do was drag it into the hierarchy.

Unity Test - bringing in the Tilt Brush object after importing Tilt Brush Toolkit.

Next Steps

My steps forward are really just finishing up where I’m at now and making some real progress towards solid production.

  • Spending time animating in Quill. The next week will be about getting some of these base animations down and trying to export them into Unreal.

  • Determining which 3D models I’ll be creating and starting work on that, while blocking out their presence in UE4.

  • Finish creating Maquette scene mockups. Finalize story.

1/13/19: Investigating Quill

This week marked the start of a short project on experiential storytelling, memory, and light. We were asked to think about three memories in which lighting was an important factor, and to write a short description of each moment. Emphasis was put on the word moment - this is not meant to be a life story. The idea is to bring the viewer into the moment, let them understand what’s happening, and then exit, all within 15 seconds.

I started thinking about memories with a specific focus on lighting, and found it was more difficult than expected. There were plenty of memories where I could remember what the lighting was and appreciated it, but I had to find three where it really stood out to me. I found that when writing about them, I was walking a fine line between what I’m saying to the viewer and what they’ll actually be seeing in the experience. The descriptions were going to be recorded and become part of the audio. For each chosen memory I already had a vague impression of what I wanted to accomplish; the most difficult part was deciding how much visual detail to include along with the narration, and how specific that narration would be.


Chosen Path

“I was looking for Orion. He’s there, as always, but tonight he brought friends to fill the usually empty sky. Standing barefoot on the deck with only the glow of the candle, we stared at each other over the hills, making introductions.”

This memory is from last May, standing on the deck of our house in North Carolina. My partner and I drove down from Columbus for my birthday. The area isn’t very populated, mostly woods - on a clear night it’s easy to see all of the stars. We stood outside the first night we got there, all the lights out except a candle, just looking at the stars. As a child I would look for Orion every time I walked outside; it was the only constellation you could usually see from where we lived in Miami. The light from the stars, the candle, and the houses on the other hill really stand out to me in that memory, and I chose it because I feel I could bring the essence of this moment to a viewer with varying levels of abstraction.

Panorama off the deck in North Carolina at sunset. Original scene where the memory takes place.

Process

In the past, my research has required virtually the same visual pipeline every step of the way: block modeling in Maya, some texturing in Substance Painter (occasionally), and then putting it in Unity and adding lights. The focus was on making the program itself function rather than imparting an experience visually. I want to take a step back and create something that imparts meaning without necessarily requiring the viewer to actively be a part of it.

Oculus Quill presents some really interesting opportunities for animating in virtual reality. I spent some time looking around and finding examples of these animations that might be similar to my own.

Artist Goro Fujito spends his time creating animations in Oculus Quill, showing a variety of scenes and perspectives. Viking Rockstar is a great example of the type of color and lighting I want to use in my own scene, and it includes multiple shots and sound design. I wouldn’t categorize this as a virtual experience, but as an animation it’s beautiful and on the right track stylistically.

This short looped animation puts the viewer in the perspective of driving through the rain. With the sound design and lighting, it’s incredibly effective and shows how the user can be brought into an experience.

Fortunately, Fujito also posts videos where he shows his process and explains his animation workflow. I watched this to get a better understanding of how Quill functions and if it would be a good option for me moving forward.

The official website provides some resources on how to export animations and FBXs to Unity, though I needed to look externally for information on how to do this in Unreal Engine. I was considering using UE4 specifically for its lighting capabilities. I worked on the lighting for Project Sphincter while at CCAD, and Unity just hasn’t been able to compare. As of now, I am leaning towards this option.

Putting the final software choice aside for a moment, I decided to get into Quill and see if this was really something I wanted to commit to. Granted, I’ve spent maybe a grand total of 3 hours in it and probably need to watch some more tutorials, but the learning curve is pretty rough. I had a difficult time getting the hang of the controls, which are not well explained when first entering the program beyond a little diagram that pops up by default. These were the initial sketch results:

capture00000.png

Next Steps

After spending some time in the Oculus, I don’t think it’s practical to do the entire scene in this way - at least, not from scratch. I need to investigate bringing in models and animating over top of them, possibly as reference. It’s very difficult to gauge depth in there once the scene is moved around. I also need to look into animating only certain objects in the scene with Quill rather than the entire environment, or blending the two together. This will help me determine a production schedule for the next two weeks.

Beyond the pipeline research, I will spend this next week gathering my final visual reference, sketching out a storyboard, and recording my story for timing. I have also been gathering some information on lighting and technical terminology so that I can actually discuss the decisions I’m making about the lighting in the scene, and will get into that more next week as well.

Looking Back on Liminality

A few weeks have passed, and I wanted to wrap up the work we did on Ter(li)minal from September!

Our end result was an environment in which we sought to place the participant in a liminal space, forced into a sense of waiting. You are constrained by lack of movement and lack of activity, only able to observe by looking around and rotating your head. Upon starting the application, the user sees a space sparsely populated by seated and walking figures. As you wait, small changes begin to occur. Once-empty seats are filled with figures. Seated figures may change positions when you look away. The departure board gains more and more red delayed flights as time passes. Babies scream, planes take off, and the space becomes more chaotic and crowded with each delay announcement. Figures begin to break away from their straightforward march along the walkway and defy gravity, floating through the ceilings and moving sideways out to the planes landing. Finally the scene calms as the boarding call is made, and the player is allowed to move on.

In the process of development, I learned a lot about linking layered, timed events in Unity. Once the sightline script was functional and able to be applied to multiple objects, it became a matter of making sure these events were allowed to occur at the proper times in the application. I had to go back to the basics: public booleans and instantiations. The sightline function ended up working out thanks to a tip from Alan to use Transform.InverseTransformPoint instead of the frustum-planes approach. I was able to get the function working that same day, and then it all became about timing. Sara made a rhythm chart for the project that I based all of the interactions on:

rhythmchart_3.jpg
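
For anyone curious, the core of the sightline trick is just expressing a target’s position in the camera’s local space and checking whether it falls behind the viewer or outside a rough view cone. The sketch below is my own minimal reconstruction of that idea rather than the actual project script - the class name, the maxViewRatio threshold, and the IsHiddenFromViewer helper are all hypothetical:

```csharp
using UnityEngine;

// Minimal reconstruction of the sightline idea (not the actual Ter(li)minal script).
// An object counts as "unseen" when it sits behind the camera or well outside a
// rough view cone - which is when it's safe to swap poses, fill a seat, etc.
public class SightlineCheck : MonoBehaviour
{
    public Transform viewer;           // the player's camera
    public float maxViewRatio = 0.6f;  // roughly how wide the "seen" cone is

    public bool IsHiddenFromViewer()
    {
        // Express this object's position in the camera's local space.
        Vector3 local = viewer.InverseTransformPoint(transform.position);

        // Behind the camera: definitely not visible.
        if (local.z <= 0f) return true;

        // Outside a rough cone around the camera's forward axis: treat as unseen.
        return Mathf.Abs(local.x) / local.z > maxViewRatio
            || Mathf.Abs(local.y) / local.z > maxViewRatio;
    }

    void Update()
    {
        if (IsHiddenFromViewer())
        {
            // Safe to reposition or swap this figure without the player noticing.
        }
    }
}
```

In the actual project this kind of check was layered with the timers and public booleans mentioned above, so changes only fired at the right beats of the rhythm chart.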

Some critique points that came up:

  • Move camera back. The player’s face is too far forward over the model, and it’s nearly impossible to see the models placed nearby.

  • Add interaction to the models around the player. The boarding pass, phone, and books around the player were actually supposed to be moving. I ran out of time and just didn’t get around to it.

  • The departure board is very hard to see. In our presentation, we talked a lot about being able to see these subtle changes in the environment around you. But the departure board was placed so far away that it was difficult to read and see these changes as they occurred. Moving it to one of the pillars by the player might be more effective.

  • Animated figures - be more intentional. Some technical issues came up with the loops on the animations and timing them out. While the number of figures does increase over time, it’s difficult to see once they start walking through the floors and ceiling. This happens fairly quickly - waiting for more time to pass and spacing out these occurrences would make it feel like less of an accident (although… to be honest… it definitely was an accident).

  • The line renderer on the figures seems to glitch a lot as they walk, which gets confusing and makes the figures difficult to see.

  • Audio. Needs to be louder overall - it’s very hard to hear on the phone, even with headphones.

Overall I was very happy to learn the process for Android development and to get to work a bit with the Daydream. It was much easier than expected and very quick to prototype. I would like to revisit this in the future and make adjustments, though that may be more of a Christmas personal project. Learning about the sightlines is going to be especially useful for the 10-week Ruby Bridges iteration that Tori and I are currently getting started - but the journey for that so far deserves its own post. More soon!