03/24/19: Phase 1 Conclusions

Reaching the end of my Phase 1 investigations reiterated one very powerful concept: context is key.

My work over the last five weeks has been investigating how designers move a user in VR - specifically, how much agency users should have and how designers can direct them down a particular path. Within this particular scene, that path was a long sidewalk that takes quite some time to traverse. I experimented with teleporting using the prefabs available in SteamVR, with the scale and location of the user in VR, and with the transitions between different kinds of motion - from a moving car to teleport points to a teleport plane.

Even though I was not able to achieve everything I outlined in my initial schedule, everything I did build was functional and pretty neat in the scene.

However, feedback that I received after demonstrating the scene and then going through it myself was that teleporting actually takes the user out of the experience. The appearance of the teleport points is unnatural in this space that I am trying to create, and using a controller itself is arguably a hazard to immersion. It brings the user’s thought back around to what they’re doing instead of what the people around them are doing. I’m incredibly grateful for that insight - I hadn’t thought of it from that perspective before, but having made the scene I have to agree.

I was really lucky to get to show this scene to Shadrick Addy, a designer and MFA student who worked on the I Am A Man VR experience, who sat down with Tori and me for an hour to discuss our work on this project. He offered much of the same critique about the teleporting, pointing out that in context it doesn’t make sense. Masking this motion with something that fits the context of the story would be much more effective - for example, using gaze detection to trigger the movement forward. A mother urging a daughter forward might look back, gesture, or verbally ask her to keep moving. From this, we could build a mechanic where the user looking at the mother after one of these cues generates their motion forward in the scene and along the narrative.
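To get that idea out of my head and into something concrete, here is a rough sketch of how such a gaze trigger might look in Unity. To be clear, this is hypothetical - we haven't built it yet, and every name, distance, and timing here is a placeholder:

```csharp
using UnityEngine;

// Sketch of the gaze mechanic: after the mother beckons, holding your gaze
// on her for a moment pulls the player rig forward along the path.
public class GazeAdvance : MonoBehaviour
{
    public Transform rig;             // play-space root (moved instead of the camera)
    public Transform hmd;             // tracked headset camera
    public Collider motherCollider;   // collider on the mother character
    public float dwellSeconds = 1.5f; // how long the gaze must be held
    public float stepDistance = 2f;   // how far one "urging" moves the user

    [HideInInspector] public bool beckoning; // set true when the mother gestures or speaks
    private float gazeTimer;

    private void Update()
    {
        if (!beckoning) { gazeTimer = 0f; return; }

        // Cast a ray straight out of the headset to see if the user is looking at her.
        Ray gaze = new Ray(hmd.position, hmd.forward);
        RaycastHit hit;
        if (motherCollider.Raycast(gaze, out hit, 50f))
        {
            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellSeconds)
            {
                // Move the rig toward the mother along the ground plane.
                Vector3 dir = motherCollider.transform.position - rig.position;
                dir.y = 0f;
                rig.position += dir.normalized * stepDistance;
                beckoning = false;
                gazeTimer = 0f;
            }
        }
        else
        {
            gazeTimer = 0f; // gaze broke; require a fresh dwell
        }
    }
}
```

The nice property of a dwell timer like this is that a stray glance doesn't trigger motion - the user has to deliberately attend to the mother, which is exactly the narrative beat we want.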

Using this gaze detection would have the benefit of eliminating controllers completely, something I discussed previously but didn’t fully understand the benefit of until having these conversations. In discussing the immersion this can bring, I asked him about a common structure I’ve seen in VR experiences so far - this sandwiching of VR between two informational sessions. Our project does this with an introductory Prologue as well, but my question was whether to add information in the experience as it progresses or to leave it alone. He suggested that the addition of information would only serve as a distraction from the scene itself, a distraction that might prevent the emotional reaction and/or conversation that Tori and I are attempting to create. There are some really interesting layers there: how the main scene is encased in narrative, how the prologue and “epilogue” scenes frame the experience, how the use of VR itself is encased within a system that provides context for the technology, and how that space is designated and placed within an exhibit discussing larger themes - all informing each other.

Coming back from the tangent, Phase 1 helped answer my question as far as the type of motion I should be considering within this space and why. I was able to work out some of the smaller technical bugs, which will pay off in the long run. And I was able to spend a lot of time doing some outside research on VR experiences to help understand the decisions currently being made in other projects.

What’s Next

Phase 2 naturally follows Phase 1, and I think here the best option would be to build up. I learned a lot from the last few weeks and I would really love to develop some gaze control mechanics. Being able to move forward in the space powered by gaze in a crowd, and testing the crowd reactions that I didn’t get to in Phase 1, would go a long way towards development this summer and in the fall. I’ll be articulating this plan a little better next weekend, after I’ve written the proposal and work has begun. I will also be recording Phase 1 and uploading the video to my MFA page for documentation.


OUTSIDE RESEARCH

Spring Break happened since my last post and I took advantage of the time.

Museum: Rosa Parks VR

Over break I found myself at the National Underground Railroad Freedom Center in Cincinnati, OH, where they’re currently hosting a Rosa Parks VR exhibit.

I was really interested in how this experience was going to be placed in the Freedom Center, and what the VR content was going to consist of. This was my first time using VR in a public space, plus I came in not knowing much about the experience itself. I tried not to watch videos or read articles - although looking for the video above, I realized it’s incredibly difficult to find any information about it. Up on the 3rd floor of the Freedom Center, in a corner to the right, are four seats from a school bus on a low platform, and a table for the center attendant to take tickets and give information. The experience was made in Unity and runs on Samsung Galaxy S9 phones in mobile headsets (which were cleaned after every person), with headphones.

Full disclosure, my headset had something VERY wrong with the lens spacing and I ended up watching the experience with one eye closed.

I saw the same sandwich structure here as what Tori and I are using - an introductory sequence, an uninterrupted 360 video experience of the user being confronted by first the bus driver and then a police officer, and then an exit sequence discussing historical ramifications and present-day context, all narrated by a voice actor speaking as Rosa. Having the users sit on the bus seats was a really nice haptic touch that I enjoyed - that weird texture and smell just can’t be faked. The user is embodied in the experience, being able to look down and see what she was wearing on that day. Each time slot is a group of four people every 5-10 minutes, and in our time waiting I saw people of all ages coming over to go through the experience. In order to start, the user has to look directly into the eyes of an image of Rosa Parks for a certain length of time.

I thought that the embodiment was a really effective choice for a static seated experience that requires little to no active participation. The user is reminded by the attendant at the beginning that they can look around in all directions. I was most surprised by how it was situated in the center. Fortunately Rosa Parks is a pretty well known figure in history, but if you didn’t know anything about her, there was nothing in the surrounding area to inform you. The informational segments in the experience spoke mostly of what was happening in that time period. A standalone attraction sparks curiosity about the experience itself, while being part of a larger exhibition may give greater context in the long run… so I suppose where it’s placed depends on your goals for the user. I think I personally would have liked more information to surround the experience, especially considering how complex the other two exhibits on the floor were.

I think I would need to do this experience again to examine how I felt coming out of it. I wasn’t especially affected - more distracted by the odd 360 video editing happening in the middle to try to increase depth and the funky lens adjustment in my headset - but I did appreciate the nature of the experience itself and its placement in the center. And I was able to find parallels between their development and what Tori and I are working on.

Museum: Jurassic Flight

This was the other VR experience I got to do inside of a museum. We discovered it completely by accident in the Museum of Natural History and Science at the Cincinnati Museum Center.

Me flying as a pterodactyl in Jurassic Flight. Skip to 0:19 for the actual experience start.

After Rosa Parks VR, this was as opposite of a VR experience as I could manage. Jurassic Flight makes use of equipment called Birdly, which I last saw in a video of its prototyping stage. The experience requires you to lie on your stomach on this device, arms out to the side, Vive Pro strapped to your head. You take flight as a pterodactyl, soaring above trees, rivers, and mountains, observing the other dinosaurs living their lives. There is no goal here, no informational aspect to the experience. It’s all about the haptic feedback. There’s a fan at the front of the device that increases and decreases with the user’s air speed, the device tilts forward and backward based on your pitch in the game, and you control direction with the paddles at the end of the “wings” of the device.

The experience is situated just to the right of a big dinosaur exhibit, which provided plenty of context before actually going into it. It’s not particularly thought-provoking or educational, but it does add to the content already addressed in the museum from a fun perspective. It’s about 5 minutes, very scenic and peaceful (minus the initial few seconds of motion sickness during a dive), and it was made in Unreal, so the environment and lighting were really stunning.

Again, I wasn’t really able to make a connection here (pre)historically or in the structure of the experience, but I was really fascinated by the haptic feedback and that novel flying experience.

Anne Frank House VR

I found this experience on the Oculus Store and had to give it a go. Unlike the past two, I went through this experience at home on my own machines. I read Anne Frank’s diary in elementary school, though most of the details escaped me as an adult. This experience recreates the Franks’ annex as it was while they were in hiding from the Nazis.

Again I found the informational sandwich structure. The user is offered the option to go through a story mode or a tour mode of the annex - I chose the story mode. This begins with a fairly long narrated introduction with historical images, followed by the exploration of the annex. The experience requires very little interaction from the user beyond pointing and clicking to move to the next point. Once there, narration begins, and we hear Anne Frank telling us about her daily life in each of these spaces. It’s a linear path through the space, and the only interaction is really moving from point to point.

It’s beautifully recreated. The quality of the environment really called me to examine it closely. I wanted to see the pictures on the walls, the books scattered over the bed, what crossword questions were in the paper. Each progression revealed a little more about the family and what their everyday life was like. Hearing these stories in contrast with the empty spaces the user explores creates a wistful mood. I didn’t want to make any noise myself due to the emptiness and hearing about how the family had to remain quiet during the day to avoid rousing suspicion.

The whole tour took me around 20 minutes, and I felt like I really did learn a lot just from seeing the space and hearing fragments about life related to each segment of the house. The choice of motion seems to come from the fact that this is an experience made for mobile, which requires a controller of some kind. Beyond the point of the cursor, I never saw the controller itself. It seems like a good compromise that doesn’t threaten the immersion in the experience.

Traveling While Black

I’ve been meaning to do this experience for the last three months. Now that I have, I can almost guarantee I’m going to need to do it again.

Traveling While Black addresses the issues black Americans have faced traveling across the country, starting in the 1940s and ending in 2014. The interactions and interviews all take place in Ben’s Chili Bowl, which serves as a hub and safe space throughout the course of the experience. Every interview occurs in a booth, with the user switching locations from one seat to another with each new person. The visuals themselves are beautifully executed and edited, running strong parallels between past and present. At the very end of the experience, the user sits across the table from Samaria Rice as she speaks about the day her 12-year-old son was shot by police in Cleveland, OH.

There was no embodiment, no interaction - the user is watching and listening throughout the whole experience. Placing the user in an intimate setting like a diner booth and in close proximity to those being interviewed allows the user to feel like part of the scene. It’s a 360 video that’s about 20 minutes long, ending with the point that safety is still not guaranteed for black Americans.

I came out of the experience strongly emotional and had to sit for a while to really absorb everything. Truthfully, I’m going to need to do the experience again to actually analyze the structure and think about the decisions being made, and how decisions in 360 video may differ from those in animated VR. But I do know that this is the kind of effect I want to have on the viewers of our project. And I wonder if something stylized, not film, could create that level of personally jarring human connection.

Conclusions

Having gone through a few historical VR experiences now, I’m seeing this sandwich pattern of information more and more. And I think I understand for the most part why this is occurring. I’m also seeing how multiple narratives are being organized cohesively, as well as how one narrative can be distilled to give a whole picture without saturating the user with information. I’m going to continue with this next week and see what other kinds of thought-provoking narrative experiences I can find, as well as any other existing museum VR exhibits - of any kind - that I might go and explore.

03/08/19: Video Update on Phase 1

This is going to be a relatively short update on how far Phase 1 has progressed in the last few days, but finally including some video footage of the scene working, along with some of the tools I’ve been brushing up on to apply this week.

The above video is a quick demo of the teleport point placement and scaling in the scene.

What was most surprising for me was just how long the sidewalk actually became. It felt like our last prototype was dealing with issues of time because the walk down the sidewalk was too short or the walking motion was too fast. At the height of a child, the building itself becomes this mammoth imposing object rather than just a set piece or a destination. The teleporting really emphasizes the distance too; all of the points are just at the edge of the teleport curve. I think I got lucky there. Overall this layout feels smoother and I’m excited to start putting in the other scene elements.

On some technical notes:

  • During our demo on Thursday it was pointed out that some objects aren’t keeping scale with the ground or street planes. In the video I can definitely see the lamp posts hovering off of the ground - this may just be a matter of making sure the final assets in the scene are combined into one set object. Still experimenting with that.

  • I found in this scene that the teleport point on top of the stairs was actually really hard to get to - you can actually see me struggling with it in the video. I underestimated how large the stairs would become at that height.

  • Which leads me to the suspicion that this height ratio isn’t quite right. I recorded this experience while seated, so at first I thought my seated height was simply throwing off the math - but I repeated the same thing while standing and had the same issue. I can play with some numbers to get that right.

  • This was my first time testing SteamVR with a headset other than the Vive. Up until now all of my development has been using the Vive headset and controllers. Oculus is what’s available to me in this moment so I took the opportunity - it connected no problem! Teleport was already mapped to the joystick on the Oculus Rift controller. Cue my sigh of relief for a more versatile development process.

I have begun working with the car animation, starting with placing the user.

030819_Phase1a.PNG

I made the loosest possible version of a block car in Maya with separate doors and brought it in just to have something to prototype with. This is where the user’s location in space is going to become an issue - I have to make sure they’re aligned with the driver’s seat. We’re going to have the user sitting in the demo anyway, so we might be able to just calibrate the seat with the environment and have the user sit on the bench.
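As a note to myself, the alignment can probably reduce to offsetting the rig rather than the camera - since the VR camera can't be placed directly, its parent can be moved so the tracked head lands on the seat. A minimal sketch, with placeholder names and assuming the camera sits under a movable rig root:

```csharp
using UnityEngine;

// Align the seated user with the driver's seat by moving the whole rig:
// the camera can't be positioned directly, but its parent can.
public class SeatAlign : MonoBehaviour
{
    public Transform rig;        // play-space root
    public Transform hmd;        // tracked camera under the rig
    public Transform seatTarget; // empty placed where the driver's head should be

    // Call once the user is seated and the headset is tracking.
    public void Align()
    {
        Vector3 offset = seatTarget.position - hmd.position;
        rig.position += offset; // shift the rig; the tracked camera follows
    }
}
```

Calibrating against a seated user would mean calling something like Align() after they settle onto the bench, rather than at application start.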

Working on a GRA assignment this week, I also learned how to use the Audio Mixer in Unity. Turns out I can group all my different audio tracks together and transition between various parameter states. Who knew!

Apparently not me. I suspect this is going to fix A LOT of the audio issues we were having in the last prototype, especially having to do with consistency - some of the volume levels were… jarring, and not in the intentional design type of way.
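The gist of the workflow, as a minimal sketch (the snapshot names here are made up, not from our project): route every track to a group on one AudioMixer, save the desired volume states as snapshots, then blend between them at runtime instead of hand-tuning individual sources.

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Each AudioSource routes to a group on a shared AudioMixer; the desired
// volume states are saved as snapshots and blended between at runtime.
public class AmbienceMix : MonoBehaviour
{
    public AudioMixerSnapshot insideCar;  // e.g. muffled street, louder interior
    public AudioMixerSnapshot onSidewalk; // e.g. full crowd and street volume

    public void StepOutOfCar()
    {
        // Fade every grouped track to the sidewalk state over two seconds,
        // rather than adjusting each AudioSource volume by hand.
        onSidewalk.TransitionTo(2f);
    }
}
```

Snapshot transitions like this should smooth out exactly the kind of sudden volume jumps we had between moments in the last prototype.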

Critique

In class, I think I opened up the wrong version of the project, because all of the environmental objects were scaling without the teleport points attached. When I got home I realized that it was all fixed on my current version! One less thing to tackle.

Going away from the technical for a moment, Taylor posed an interesting question to me: how do we categorize this experience? I realize I’ve just been using the word “experience” but we’ve also discussed “simulation”. Adding that to the long list of queries for this open week ahead of me - confirming a proper term for what we’re working on, and justifying that definition.

What’s Next

  • Car animation

  • Composing Crowds

  • Connecting theory with my actions

  • Resuming my lineup of VR experiences

03/03/19: Phase 1, Midway

In the last two weeks, the physical production of my Phase 1 project has slowed in favor of investigating the theories and plans behind my thesis investigation. I came to the realization midway through Week 2 that I was approaching this prototype much the same way I had approached the last three, without weighing my theoretical framework or design goals in the decision-making process.

Starting with the main project development, here are some of the achievements from the last two weeks:

  • Fixed a bug where the environment was adjusting to the player height but leaving the Start Point behind, causing the player to appear way off the mark.

  • Getting the start point to actually move the player to the right spot. I can move the play space, which at least gets us to the right area. This may be more of an issue in the car scene, but the teleport points will cut down potential issues of running through objects or agents in the crowd.

  • Added teleport points to the scene.

    • This actually checked me on my scale once again. I initially only had three points along the sidewalk, and on testing it in the Vive I found that the pointer from the controllers couldn’t even reach the first point! To compensate for the user’s smaller relative size, I added two extra points, made the space in front of the school a teleport plane for free movement (to be explored in the crowd composition portion of the project), and placed a point on top of the stairs to avoid awkward stair collisions in Unity.

  • Major debugging time with SteamVR Input.

    • This was a huge issue, once again. But I’m slowly getting better at figuring out where the misstep is between Unity and my controller bindings. I brought a project from home to the Vive at school, and that particular computer had bindings that had somehow become disconnected. Nearly two hours later, we had them satisfactorily connected and shut off the haptic feedback - for some reason, the default teleport in Unity had the controllers vibrating every 5 seconds.

    • Also came to the realization that controller actions only show up in the SteamVR Input Live window if there are functions in the scene that require the bindings to be active. So if I pulled up the window to check the bindings before, say, having the teleport prefab in the scene… it would look like the buttons aren’t working. But it’s because they aren’t being called! One of those tiny little victorious moments of understanding.
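To remember that lesson, a tiny probe like the sketch below is enough to make an action register as live in the window. This assumes the default Teleport boolean action that the SteamVR Input system generates; the class name and log message are mine:

```csharp
using UnityEngine;
using Valve.VR;

// An action only shows activity in the SteamVR Input Live view once
// something in the scene actually listens for it - this probe is that
// "something" for the default Teleport action.
public class TeleportInputProbe : MonoBehaviour
{
    public SteamVR_Action_Boolean teleportAction;

    private void OnEnable()
    {
        if (teleportAction == null)
            teleportAction = SteamVR_Actions.default_Teleport;
        // Registering this listener is what makes the binding "active".
        teleportAction.AddOnStateDownListener(OnDown, SteamVR_Input_Sources.Any);
    }

    private void OnDisable()
    {
        teleportAction.RemoveOnStateDownListener(OnDown, SteamVR_Input_Sources.Any);
    }

    private void OnDown(SteamVR_Action_Boolean action, SteamVR_Input_Sources source)
    {
        Debug.Log("Teleport pressed on " + source);
    }
}
```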

Phase 1: Next Steps

I am certainly behind in development for this scene - I should be finishing up the car animation. Next week is Spring Break and I will be here in Columbus cranking out work for the majority of it, which should make up some of the lost time from this week. Therefore the goals for this week are:

  • Complete the car animation

  • Troubleshoot/Playtest on Thursday with classmates

  • Ensure a smooth transition from the car to the sidewalk

  • Check in with Tori about any potential new data to add to the crowd/car, and be in a good position to move forward next week.

Theoretical Developments

Over the course of last week, I had several meetings about where this project is conceptually and where it’s going. I briefly mentioned this in my previous blog post introducing Phase 1, but want to begin documenting my progress here as I work through the language and questions required to articulate my thesis.

My thesis goal is to articulate a framework for designers of VR narrative experiences based on the weight of specific VR design elements (gamification, user identity, movement, visual design, etc.), stemming from my interest in how to direct users through a scene with high levels of implied agency (control over the camera). The Ruby Bridges project is operating as the first case study for this framework as a historical narrative. After completion of Scene 01, I will be utilizing another narrative of a contrasting “genre” - currently thinking about mythological fantasy - to test this framework and compare how it holds up when presented with two different stories.

A huge part of this is recognizing the specific roles that users and designers take on within the scene. In film, these roles are fairly distinct: the “designers” (writers) operate as the authors of the story being told. The directors and crew operate as the storytellers, visually interpreting the material that has been given to them. And the “user” in this case is a viewer, an audience member whose role is to view the narrative that has been visually curated and placed before them. These lines get a bit blurred when we consider video games. There are still writers and designers operating as the authors and storytellers. Users become players, who function as an audience for the world put before them and, to a limited degree, as authors of their own experiences. Players have a degree of agency that allows them to act within and impart change on the world of the game, though the storytellers can still choose to restrict this agency by placing boundaries at the edge of the world or controlling camera movements. Yet every player will play a game differently.

Virtual reality requires the creation of new roles. Users in a virtual space have more inherent agency than ever before, with control over the camera and their physical pose. Designers still function as authors and storytellers, but also as directors, responsible for guiding a user through the scene. Users, through their newfound agency within the world, then become part of the world as actors.

030319_Roles.PNG

With these roles in mind, I’ve begun constructing a loose pathway for defining the goals of the experience and the elements that should be considered when working within VR. I designed this with a top-down path in mind, though it’s brought up some side questions about whether a bottom-up approach, beginning with the exploration of one particular element, would be possible. The map below is a working representation of the pieces I’m currently trying to put together, although I know this is a sliver of the questions asked during the design process.

030319_WorkingFramework.PNG

It was pointed out to me last week that the Phase 1 project is tackling questions of the role of the User as an Author/Actor. I’m focusing on how the user moves through this scene, and whether giving them that agency is right for what the scene demands.

I haven’t added any VR games or experiences to my list recently - moving apartments has me at a bit of a disadvantage in this moment. But I have instead begun tackling a spreadsheet to examine various elements in these games I’ve been talking about and how they compare across a wide range of experiences.

Tori will begin adding her thoughts and experiences to this list. Next weekend I’ll be going to the Rosa Parks VR experience at the National Underground Railroad Freedom Center, and I was given some good references for experiences to examine over the next week - Traveling While Black among them.

Connecting my theoretical framework with my developing project, outlining specific goals, and being very clear about what I want these experiences to accomplish is going to be the priority here for the next few weeks.

02/17/19: Phase 1 Begins

Projects like Orion and spending time on other VR applications have been a welcome break for exploration, but this week brings the return of thesis. We’re working on projects in phases, with Phase 1 lasting for the next five weeks.

I’ve been thinking about our prototype of Scene 1 (Ten Week Prototype) from the Ruby Bridges case study last semester. The final result was not a functional experience technically or visually, and after speaking with peers and receiving feedback I realized that I needed to go back to some fundamental concepts to examine some of the decisions made in designing the experience, such as timing, sequencing, motion, and scene composition. I feel that our last project started getting into the production value too soon when we should have been focusing on the bigger questions: how does the user move through the virtual space? How much control do we give them over that movement? What variations in scale and proximity will most contribute to the experience? These are the questions we started with and seemingly lost sight of.

In developing the proposal for my project I also began considering more specifically what I’m going to be writing about in my thesis. And, more importantly, beginning to put language to those thoughts. Recent projects have been allowing me to question what parameters designers operate with when we’re designing for a VR narrative experience. It gets even more complicated when we start breaking down the types of narratives being designed for. In this case, the Ruby Bridges case study is a historical narrative - how would those parameters shift between a historical narrative and a mythological narrative? What questions overlap? Orion was a great project for examining design process for narrative, and now in shifting to another, I’m interested to see how that process carries over here.

Phase 1: Pitch

Production Schedule for Phase 1

I will be creating two test scenes to address issues faced in the 10 Week Prototype. The first will address Motion - how can a user progress through this space in the direction and manner necessary for the narrative while still maintaining interest and time for immersion? And does giving this method of progression to the user benefit the scene more than the designer controlling their motion? In the previous prototype we chose to animate the user’s progression at a specific pace. This time, I will be testing a “blink” style teleporting approach, allowing the user to move between points in the scene. Each of these points creates an opportunity for myself as a designer to have compositional control while still allowing the user control over their pace and time spent in that moment. This also provides an opportunity for gamified elements to be introduced, which is something I will be exploring as I move through the project.
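Conceptually, the “blink” style reduces to fade out, move, fade in - a minimal sketch of that idea (all names, timings, and the fade mechanism here are illustrative only, not a finished implementation):

```csharp
using System.Collections;
using UnityEngine;

// Illustrative "blink" teleport: fade to black, move the rig so the head
// lands on the chosen point, then fade back in.
public class BlinkTeleport : MonoBehaviour
{
    public Transform rig;          // play-space root
    public Transform hmd;          // tracked camera under the rig
    public CanvasGroup blackout;   // full-screen black image parented to the camera
    public float fadeSeconds = 0.15f;

    public IEnumerator BlinkTo(Vector3 point)
    {
        yield return Fade(0f, 1f);              // eyes "close"
        Vector3 offset = point - hmd.position;  // where the head needs to go
        offset.y = 0f;                          // keep the user on the floor
        rig.position += offset;                 // relocate while the screen is dark
        yield return Fade(1f, 0f);              // eyes "open"
    }

    private IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeSeconds; t += Time.deltaTime)
        {
            blackout.alpha = Mathf.Lerp(from, to, t / fadeSeconds);
            yield return null;
        }
        blackout.alpha = to;
    }
}
```

Whatever selects a destination point would kick this off with StartCoroutine(BlinkTo(targetPoint)); the brief blackout is what keeps the instantaneous relocation from reading as a jarring jump.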

The second scene addresses proximity and scale, creating a scene where the user adopts the height of a six-year-old child and scaling the world around them accordingly - even to the point of exaggeration, so I can experience that feeling for myself. It was suggested in a critique last semester that I create these small experiences and go through them just to understand how they feel for my own knowledge, and I agree with this method - more experience would certainly help inform the final design decisions. I will again be experimenting with the composition and density of the mob outside of the school to create some of these experiences.

Week 1

I purposefully scheduled Week 1 to focus on planning out the rest of the project and getting a strong foundation built. I planned out what I was going to do specifically in each scene and brainstormed various ways to solve technical issues. Writing my project proposal had already helped solidify these plans, but I’ve developed this back and forth process with my writing. My sketchbook helps me get general concepts and ideas going, where the proposal then puts formal language to these ideas. While writing the proposal I usually find a couple of other threads that I hadn’t considered, which brings me back to the sketchbook where I then update the proposal… the cycle continues, but it has been especially productive over the last two weeks.

I focused on getting the overall environmental scaling and test space created this week using assets from our previous prototype. The issue was having the user start the experience in the right scale and position every time. Locking in the camera in VR is a pretty big “NO”, and Unity makes it especially difficult as the VR Camera overrides any attempts to manually shift it to its proper spot.

Scaling was much easier to figure out than I expected - I’m just scaling the entire set to account for the height of the user at any given point based on the height of a six year old (1.14 m) rather than forcing a user to be a height that physically doesn’t make sense to them. I expected this code to be much more difficult, but so far it seems to work pretty consistently when I test it at various heights.
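The math itself is just a ratio. A minimal sketch of the idea, assuming the tracked camera sits under a rig placed at floor level (names here are placeholders, not my actual script):

```csharp
using UnityEngine;

// Scale the whole set by (user height / child height) so the world looks
// as large to the user as it would to a 1.14 m six-year-old.
public class ChildScaleWorld : MonoBehaviour
{
    public Transform environmentRoot; // root object holding the entire set
    public Transform hmd;             // tracked camera, a child of the rig

    private const float ChildHeight = 1.14f; // six-year-old height from the post

    // Call once the headset is tracking, e.g. at the start of the experience.
    public void ApplyScale()
    {
        // Eye height above the play-space floor (rig assumed at floor level).
        float userHeight = hmd.localPosition.y;
        if (userHeight <= 0f) return; // not tracking yet

        float factor = userHeight / ChildHeight;
        environmentRoot.localScale = Vector3.one * factor;
    }
}
```

A 1.7 m user, for example, gets a factor of about 1.49, so the set grows to roughly one and a half times its authored size - which is exactly why the stairs and the building feel so imposing.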

I’m still working on getting the recentering function to work. I found a lot of old documentation from 2015 and 2016 that doesn’t account for all the changes in Unity and SteamVR. There are some good concepts in it, and even a button press would be great for now. I’m still planning on exploring this continuously, and I expect I’ll be working on it throughout Phase 1.

NEXT

  • Begin Blink teleport testing through the scene.

    • When I made this schedule, I didn’t realize that SteamVR has a Teleport Point prefab. So, yay! Production time cut down! I’ll be using that spare time to add in primitives simulating the placement of the crowd and brainstorming potential gamification/timing. I may also go on a search for some audio and add that to the scene as part of my testing.

  • Experiment with button pressing versus gaze direction. How does the scene feel without controllers? Would gaze navigation be effective here?

  • Playtest #1 with peers, gaining feedback on the button or gaze mechanisms and other developments made during the week, as well as on the scaling and positioning of the user.


OUTSIDE RESEARCH

The games I played this week were all very physically involved, with a lot of motion required on the part of the player. However, none of these games used methods that required teleporting or “artificial motion” via joysticks or touchpads. All were based on the motion of the player’s body. Even more interesting, I experienced a stronger sense of flow in these games than in past titles, though each for different reasons. Considering my thesis, which would not be this action-oriented, it’s helpful to see how specific components in these games - sound, motion, repetition - are utilized in ways that ultimately make a flow state possible.

FLOW VIA SOUND: Beat Saber

Beat Saber is a VR rhythm game that operates as a standing experience, where players use their arms and lean to hit the cubes with sabers on the beat and in the indicated direction. Unlike the others, I’ve been playing this game for a few weeks and have had time to examine my increase in skill level as well as what kind of experience I was having. It was initially very difficult to get used to the cubes flying directly at me and to be able to react to the arrows indicated on the cubes - a longer adjustment than I expected, actually. I play games like this on my phone using my thumbs, and my brain knew what it needed to do… but had a difficult time getting my arms to react to it. After a couple of weeks I can now play the above song on Hard mode, which is what I’m including for this group of games.

Every time I play a song, I usually get to a point where I experience flow - able to react to the cubes as they come and follow the rhythm without really even thinking about it (and significantly better than if I am thinking about it). It’s a state that feels instinctual and occasionally feels as though time slows down, a common description of flow. Sound is what’s driving that experience; without the music, this would be much more anxiety-inducing and stressful than enjoyable.

After playing I was thinking a lot about Csikszentmihalyi’s book Flow, where he outlines several important features in a flow activity: rules requiring learning of skills, goals, feedback, and the possibility of control. Even with varying definitions of what is considered a game, most require those components in one way or another. He references French psychological anthropologist Roger Caillois and his four classes of games in the world - based on those, Beat Saber is an agonistic game, one in which competition is the main feature. In this case the competition is against yourself to improve skills and others to move up in the leaderboards. However, as frequently as I did fall into flow, I also fell out of it easily when a level grew too difficult or beyond my skills.

FLOW VIA MOTION: SUPERHOT VR

I’m not quite sure how to categorize Superhot VR, but it’s the most physical game I’ve ever played in VR. Players can pick up items or use their fists to destroy enemies making their way towards them in changing environments… the twist is, time only moves if you move. Every time I rotate my head the enemies get a little closer, or if I reach out to pick up a weapon suddenly I have to dodge a projectile. As the number of enemies increased with each level I found myself kneeling, crouching, or dodging. There is no teleportation or motion beyond your own physical movement.

Everything here is reactionary. I experienced a strong level of flow, unlike the intermittent experience I tend to have in Beat Saber. Time being distorted here and used as a game mechanic almost seemed to echo those flow states. The stages are all different with minimal indication of what is coming next, and often the scene starts with enemies within reach. I didn’t have to think about what buttons or motions were required to move, it was a natural interface - I could just move my body to throw punches or duck behind walls. While this was effectively immersive and did result in a strong flow state, I was pulled out of it immediately every time I ran into a wall in my office or accidentally attacked an innocent stack of items sitting on my desk.

Sound was minimal, which I very much appreciated, but this sets the game in stark contrast to Beat Saber. The focus of this game is motion, not music or rhythm. On a continuing side note from the last two weeks, death states in Superhot VR were much less disruptive than in the other games. The entire environment is white, so the fade to white and restarting of the menu isn’t very jarring or disruptive to the experience. It was easy to jump back into the level and begin again. This may be an interesting point for transitioning my thesis between scenes - having a fade or transition that is close to the environment rather than just doing the standard “fade to black”. I suppose it depends on the sequence I’m designing… a thought for next week.

Elven Assassin VR

And last, this is a game that combines a little bit of everything. Elven Assassin VR requires you to take the role of an archer fending off waves of orcs planning to invade your town. Your position is generally static with some ducking and leaning, and the ability to teleport to different vantage points within the scene. This deals in precision and speed, and the physical motion of firing the bow. The satisfaction of hitting a target in this game was immense, and I ended up playing until my arms hurt. The flow in this game comes from the rhythm of motion - every shot requires you to nock, draw, aim, and release the arrow to take down one enemy. There isn’t really a narrative occurring in this game at the moment. It tends to operate more like target practice, and the concentration required was what induced that flow state.

Falling out of flow was a little easier here thanks to technical glitches - tracking on my controllers would get disrupted and my bow would fly across the world while I fell to a random orc sneaking through the town. The multiplayer function is also really interesting here, and the social aspect may be a worthwhile avenue to explore with this game.

Conclusions

I didn’t actually expect to talk about flow at all - it was just a happy side effect. These are three VERY different games, and that experience of flow was the strongest commonality between them. This kind of goes back to game design as a whole rather than specifically VR design. But those little differences in how each game approached physical action and reaction to the environment really drove that point home for me. Where Elven Assassin VR focused on action that was repetitive and chaotic, Beat Saber focused on the rhythm of those actions and applied them to the template of the song. Superhot VR left the chosen action up to you, but suggested some paths and required movement to occur in order to advance. The result was neither repetitive nor rhythmic, but required control.

I am not planning on making experiences as heavily focused on action and movement as these, but bringing what I’ve seen here - from the choice of motion to smaller actions and interactions with the environment - into my thesis work might help me answer some of the design questions I’m exploring in the Phase 1 project. How can a user move through a space? I’m considering teleporting from point to point, but have not yet thought about the potential secondary actions on behalf of the user - those spaces where gamification could occur. These games re-framed motion for me, reminding me to define more specifically the type of motion expected of the user, and to ensure that the motion (or lack thereof) enhances the experience itself.