2/17/19: Phase 1 Begins

Projects like Orion and time spent with other VR applications have been a welcome break for exploration, but this week brings the return of thesis work. We're working on projects in phases, with Phase 1 lasting for the next five weeks.

I've been thinking about our prototype of Scene 1 (Ten Week Prototype) from the Ruby Bridges case study last semester. The final result was not a functional experience, technically or visually, and after speaking with peers and receiving feedback I realized that I needed to go back to some fundamental concepts and examine the decisions made in designing the experience, such as timing, sequencing, motion, and scene composition. I feel that our last project got into production value too soon, when we should have been focusing on the bigger questions: how does the user move through the virtual space? How much control do we give them over that movement? What variations in scale and proximity will most contribute to the experience? These are the questions we started with and seemingly lost sight of.

In developing the proposal for my project, I also began considering more specifically what I'm going to be writing about in my thesis and, more importantly, putting language to those thoughts. Recent projects have allowed me to question what parameters designers operate with when designing a VR narrative experience. It gets even more complicated when we start breaking down the types of narratives being designed for. In this case, the Ruby Bridges case study is a historical narrative - how would those parameters shift between a historical narrative and a mythological one? What questions overlap? Orion was a great project for examining a design process for narrative, and now, shifting to another project, I'm interested to see how that process carries over.

Phase 1: Pitch

Production Schedule for Phase 1

I will be creating two test scenes to address issues faced in the 10 Week Prototype. The first will address motion - how can a user progress through this space in the direction and manner necessary for the narrative while still maintaining interest and time for immersion? And does giving this method of progression to the user benefit the scene more than the designer controlling their motion? In the previous prototype we chose to animate the user's progression at a specific pace. This time, I will be testing a "blink" style teleporting approach, allowing the user to move between points in the scene. Each of these points creates an opportunity for me as a designer to have compositional control while still allowing the user control over their pace and the time spent in that moment. This also provides an opportunity for gamified elements to be introduced, which is something I will be exploring as I move through the project.
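To make the idea concrete, here's a minimal sketch of the kind of blink teleport I have in mind, written against a generic Unity rig. The cameraRig, fadeOverlay, and TeleportTo names are my own placeholders, not anything from SteamVR - the real version will likely lean on SteamVR's own components instead:

```csharp
using System.Collections;
using UnityEngine;

// Minimal blink-teleport sketch. Assumptions: "cameraRig" is the root object that
// moves the player, "fadeOverlay" is a full-screen black CanvasGroup on a
// camera-attached canvas.
public class BlinkTeleporter : MonoBehaviour
{
    public Transform cameraRig;      // root of the VR rig
    public CanvasGroup fadeOverlay;  // black overlay used for the "blink"
    public float fadeTime = 0.15f;

    bool teleporting;

    // Call this with one of the authored points in the scene.
    public void TeleportTo(Transform point)
    {
        if (!teleporting) StartCoroutine(Blink(point));
    }

    IEnumerator Blink(Transform point)
    {
        teleporting = true;
        yield return Fade(0f, 1f);   // eyes closed
        cameraRig.SetPositionAndRotation(point.position, point.rotation);
        yield return Fade(1f, 0f);   // eyes open at the new spot
        teleporting = false;
    }

    IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeTime; t += Time.deltaTime)
        {
            fadeOverlay.alpha = Mathf.Lerp(from, to, t / fadeTime);
            yield return null;
        }
        fadeOverlay.alpha = to;
    }
}
```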

The second scene addresses proximity and scale, creating a space where the user adopts the height of a six year old child and the world around them is scaled accordingly, even to the point of exaggeration, so I can experience that feeling for myself. It was suggested in a critique last semester that I create these small experiences and go through them just to understand how they feel, and I agree with this method - more experience would certainly help inform the final design decisions. I will again be experimenting with the composition and density of the mob outside of the school to create some of these experiences.

Week 1

I purposefully scheduled Week 1 to focus on planning out the rest of the project and getting a strong foundation built. I planned out what I was going to do in each scene and brainstormed ways to solve various technical issues. Writing my project proposal had already helped solidify these plans, and I've developed a back-and-forth process with my writing: my sketchbook helps me get general concepts and ideas going, and the proposal then puts formal language to those ideas. While writing the proposal I usually find a couple of threads I hadn't considered, which brings me back to the sketchbook, after which I update the proposal… the cycle continues, but it has been especially productive over the last two weeks.

I focused on getting the overall environmental scaling and test space created this week using assets from our previous prototype. The issue was having the user start the experience at the right scale and position every time. Locking the camera in VR is a pretty big "NO", and Unity makes it especially difficult, as the VR camera overrides any attempt to manually shift it to its proper spot.

Scaling was much easier to figure out than I expected - rather than forcing the user to be a height that physically doesn't make sense to them, I'm scaling the entire set to account for the user's actual height, based on the height of a six year old (1.14 m). I expected this code to be much more difficult, but so far it seems to work pretty consistently when I test it at various heights.
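The logic really is just one ratio. Here's a simplified sketch of what I mean, assuming an environmentRoot parent for the whole set and a tracked hmd transform - both placeholder names, not the actual project code:

```csharp
using UnityEngine;

// Sketch of the set-scaling approach. Assumptions: "environmentRoot" is the parent
// of all set geometry, "hmd" is the tracked headset transform, and the rig's origin
// sits on the floor so hmd.localPosition.y is roughly the user's eye height.
public class ChildScaleEnvironment : MonoBehaviour
{
    public Transform environmentRoot;     // everything in the set lives under this
    public Transform hmd;                 // tracked headset transform
    public float childHeight = 1.14f;     // height of a six year old, per my notes

    // Call this once tracking has settled (e.g. at the start of the experience).
    public void ApplyScale()
    {
        // Measure the user's current head height relative to the play-area floor.
        float userHeight = hmd.localPosition.y;
        if (userHeight < 0.5f) return;    // ignore bad tracking / headset on the floor

        // Enlarging the set by this ratio makes the user effectively 1.14 m tall.
        float scale = userHeight / childHeight;
        environmentRoot.localScale = Vector3.one * scale;
    }
}
```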

I'm still working on getting the recentering function to work. I found a lot of old documentation from 2015 and 2016 that doesn't account for all the changes in Unity and SteamVR. There are some good concepts in there, and even a simple button-press recenter would be great for now. I plan to keep exploring this, and I expect I'll be working on it throughout Phase 1.
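The workaround I keep circling back to is moving the rig (the camera's parent) rather than the camera itself. A rough sketch of what a button-press recenter might look like, with rig, head, and anchor as assumed placeholder references and a keyboard key standing in for the real controller input:

```csharp
using UnityEngine;

// Rough recentering sketch. Assumptions: "rig" is the camera's parent (which we ARE
// allowed to move), "head" is the tracked HMD camera, "anchor" is where the user
// should start the experience.
public class RecenterOnButton : MonoBehaviour
{
    public Transform rig;     // camera rig root (e.g. the [CameraRig] object)
    public Transform head;    // tracked HMD camera
    public Transform anchor;  // desired start position and facing in the scene

    void Update()
    {
        // Placeholder input: swap in the actual SteamVR action or controller button.
        if (Input.GetKeyDown(KeyCode.R)) Recenter();
    }

    public void Recenter()
    {
        // Rotate the rig around the head so the head's (flattened) forward
        // matches the anchor's forward.
        Vector3 headForward = Vector3.ProjectOnPlane(head.forward, Vector3.up);
        float yawOffset = Vector3.SignedAngle(headForward, anchor.forward, Vector3.up);
        rig.RotateAround(head.position, Vector3.up, yawOffset);

        // Then shift the rig so the head lands on the anchor, keeping the floor height.
        Vector3 positionOffset = anchor.position - head.position;
        positionOffset.y = 0f;
        rig.position += positionOffset;
    }
}
```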

NEXT

  • Begin Blink teleport testing through the scene.

    • When I made this schedule, I didn’t realize that SteamVR has a Teleport Point prefab. So, yay! Production time cut down! I’ll be using that spare time to add in primitives simulating the placement of the crowd and brainstorming potential gamification/timing. I may also go on a search for some audio and add that to the scene as part of my testing.

  • Experiment with button pressing versus gaze direction. How does the scene feel without controllers? Would gaze navigation be effective here?

  • Playtest #1 with peers, gaining feedback on the button or gaze mechanisms and other developments made during the week. Will also gain feedback on the scaling and positioning of the user.


OUTSIDE RESEARCH

The games I played this week were all very physically involved, with a lot of motion required on the part of the player. However, none of them used methods that required teleporting or "artificial motion" via joysticks or touchpads - all were based on the motion of the player's body. Even more interesting, I experienced a stronger sense of flow in these games than in past titles, though each for different reasons. Considering my thesis, which would not be this action oriented, it's helpful to see how specific components in these games - sound, motion, repetition - are utilized in ways that ultimately make a flow state possible.

FLOW VIA SOUND: Beat Saber

Beat Saber is a VR rhythm game played as a standing experience, where players use their arms and lean to hit cubes with sabers on the beat and in the indicated direction. Unlike the others, I've been playing this game for a few weeks and have had time to examine the increase in my skill level as well as what kind of experience I was having. It was initially very difficult to get used to the cubes flying directly at me and to react to the arrows indicated on them - a longer adjustment than I expected, actually. I play games like this on my phone using my thumbs, and my body knew what it needed to do… but had a difficult time getting my arms to react. After a couple of weeks I can now play the above song on Hard mode, which is what I'm including for this group of games.

Every time I play a song, I usually get to a point where I experience flow - able to react to the cubes as they come and follow the rhythm without really even thinking about it (and significantly better than if I am thinking about it). It's a state that feels instinctual and occasionally feels as though time slows down, a common description of flow. Sound is what drives that experience; without the music this would be much more anxiety-inducing and stressful than enjoyable.

After playing I was thinking a lot about Csikszentmihalyi's book Flow, where he outlines several important features of a flow activity: rules requiring the learning of skills, goals, feedback, and the possibility of control. Even with varying definitions of what is considered a game, most require those components in one way or another. He references French psychological anthropologist Roger Caillois and his four classes of games - based on those, Beat Saber is an agonistic game, one in which competition is the main feature. In this case the competition is against yourself, to improve skills, and against others, to move up the leaderboards. However, as frequently as I fell into flow, I also fell out of it easily when a level grew too difficult or moved beyond my skills.

FLOW VIA MOTION: SUPERHOT VR

I'm not quite sure how to categorize Superhot VR, but it's the most physical game I've ever played in VR. Players pick up items or use their fists to destroy enemies making their way toward them in changing environments… the twist is, time only moves if you move. Every time I rotate my head the enemies get a little closer, and if I reach out to pick up a weapon I suddenly have to dodge a projectile. As the number of enemies increased with each level, I found myself kneeling, crouching, and dodging. There is no teleportation or motion beyond your own physical movement.

Everything here is reactionary. I experienced a strong level of flow, unlike the intermittent experience I tend to have in Beat Saber. Time being distorted and used as a game mechanic almost seemed to echo those flow states. The stages are all different, with minimal indication of what is coming next, and often a scene starts with enemies already within reach. I didn't have to think about what buttons or motions were required to move - it was a natural interface, and I could just move my body to throw punches or duck behind walls. While this was effectively immersive and did result in a strong flow state, I was pulled out of it immediately every time I ran into a wall in my office or accidentally attacked an innocent stack of items sitting on my desk.

Sound was minimal, which I very much appreciated, but it sets this game in stark contrast to Beat Saber: the focus here is motion, not music or rhythm. On a continuing side note from the last two weeks, death states in Superhot VR were much less disruptive than in the other games. The entire environment is white, so the fade to white and return to the menu isn't very jarring, and it was easy to jump back into the level and begin again. This may be an interesting point for transitions between scenes in my thesis - having a fade or transition that is close to the environment rather than the standard fade to black. I suppose it depends on the sequence I'm designing… a thought for next week.

Elven Assassin VR

And last, a game that combines a little bit of everything. Elven Assassin VR has you take the role of an archer fending off waves of orcs planning to invade your town. Your position is generally static, with some ducking and leaning, plus the ability to teleport to different vantage points within the scene. The game deals in precision, speed, and the physical motion of firing the bow. The satisfaction of hitting a target was immense, and I ended up playing until my arms hurt. The flow in this game comes from the rhythm of motion - every shot requires you to nock, draw, aim, and release the arrow to take down one enemy. There isn't really a narrative in this game at the moment; it tends to operate more like target practice, and the concentration required was what induced that flow state.

Falling out of flow was a little easier here because of technical glitches - tracking on my controllers would get disrupted and my bow would fly across the world while I fell to a random orc sneaking through the town. The multiplayer function is also really interesting, and the social aspect may be an avenue worth exploring with this game.

Conclusions

I didn't actually expect to talk about flow at all; it was just a happy side effect. These are three VERY different games, and that experience of flow was the strongest commonality between them. This goes back to game design as a whole rather than specifically VR design, but the little differences in how each game approached physical action and reaction to the environment really drove the point home for me. Where Elven Assassin VR focused on action that was repetitive and chaotic, Beat Saber focused on the rhythm of those actions and applied them to the template of the song. Superhot VR left the chosen action up to you, but suggested some paths and required movement to occur in order to advance - the result was neither repetitive nor rhythmic, but required control.

I am not planning on making experiences as heavily focused on action and movement as these, but bringing what I've seen here - from the choice in motion down to smaller actions and interactions with the environment - into my thesis work might help me answer some of the design questions I'm exploring in the Phase 1 project. How can a user move through a space? I'm considering teleporting from point to point, but have not yet thought about potential secondary actions on the part of the user - those spaces where gamification could occur. These games re-framed motion for me, reminding me to define more specifically the type of motion expected of the user, and to ensure that the motion (or lack thereof) enhances the experience itself.

02/10/19: Reviewing Orion

After five weeks, the Orion project has come to a close. And as with most projects, the final result was vastly different from what I anticipated when I began.

Textured image of Orion in UE4 editor


PROCESS

When I began Orion I anticipated a fairly straightforward process: I would be working with Quill and Unreal to learn the pipeline for each and between the two. What I had forgotten is that I had never created an observational narrative experience from scratch in VR. I am usually planning for some form of interaction, or, in the case of my thesis project, the narrative and environment are already described for me. Traditional storyboarding and animatic techniques were not going to work, which is where my foray into Maquette and Tilt Brush came in. Every step of the process was steamrolling through technical issues to see what worked and what didn't.

Process path for Orion

I realized that I really just needed more time to learn the painting and animation techniques for Quill, along with all its quirks. I was excited about painting the cabin last week, but ultimately that asset ended up not working and I built it in Maya and Substance Painter instead… I have never been so happy to be back in Maya, to be honest. I used the terrain tools in UE4 and the "Forest Knoll" asset pack, purchased a few years ago, to build the rest of the environment. I used a few Quill animations, such as the candle and the stars, as "accents" to the rest of the scene.

On a personal process note, while putting together the scene I made the decision not to use any visual reference at all. This was for two reasons: to avoid hyperfocus on unnecessary details, and to operate within the essence of memory. The project description was to show the essence of our memory in 15 seconds - well, 15 seconds is a very short time in VR. That’s usually the amount of time it takes for a viewer to orient themselves and focus in on the story. I didn’t want to overwhelm with an overly detailed environment that misses the point of my memory. And if I used visual reference, I would shift focus from my own memories to what it “should be”.

CONCLUSIONS

Even with all of the roundabout processes, I felt the final result was remarkably close to what I remember. Closer than I expect the original storyboards would have been. I think those would have been visually exciting and fun to watch, but that’s not what this moment was about. It was a quiet fifteen seconds on the deck in nature with just the stars and the sound of the trees. I have yet to share this experience with my partner to see how her memory might differ from my own.

I also learned a lot about the technical aspects of these tools, and personally did not enjoy using Quill for most of my painting time. It was fun to make some looping animations, but I doubt I’ll ever actually use this again for a project in the near (or distant) future. The final result was something I probably could have made in three or four days of work in Maya and Unreal, but I feel that I’m at a good point to move forward if I want to use Unreal for future VR experiences and feel more informed about the pipeline options available to me.

NEXT

  • Documenting Orion. I’m having a difficult time getting a video of the full experience because the scene is so dark. In the headset it’s easy to see, but the screen recordings I have taken so far have been really low quality and dark. I’m currently working on some rendering options in Unreal that may produce a better result.

  • Begin Phase 1 Project. I will dedicate next week’s post to the Phase 1 project centered around my thesis, but currently I’m still working out a final plan and some language to describe the project itself.


OUTSIDE RESEARCH

Continuing my theme of playing VR games and experiences for research, this week I went for a bit of a different track. I did some digging around in the Oculus and Steam stores, and I was able to play four of the games I had lined up.

BOARD GAMES IN VR

I think the initial question here for me was “why”? I enjoy board games, specifically the social aspect. Sitting around with friends chatting, accusing each other of hiding cards, accidentally bumping the board and sending pieces flying. It’s all part of the experience. I noticed on the Oculus store they have several Chess applications, so naturally I had to download one. And I also found a Catan VR app that I wanted to try.

(I found out while writing this post that both applications are made by the same studio called Experiment 7)

The only real difference we have between a board game in VR and a board game in real life is the social aspect, which is really what these games are trying to create. Catan’s environment looks like a mountain lodge with scenes of mountains outside and a nice soundtrack, with four chairs sitting around a table. Chess is similar, taking place in a library by default. I played against the AI for both, which in Chess produced a little robot figure watching me across the table while the Catan foes were painted portraits that had moving eyes and facial expressions. That bit was a little unnerving, to be honest.

I got absolutely destroyed in both games, but I was surprised by how much I enjoyed the experience of sitting in a chair interacting with other "players". The animated board in Catan was a nice touch, although things move so quickly that it took some getting used to. Being able to physically pick up a chess piece and hesitate or fiddle with it before moving was a great improvement over playing typical browser games. I felt present in the world and able to interact with the other players, feeling real frustration with them when I lost resources or had a bad roll. I was worried that these games would simply animate the board and leave it at that, but the efforts made to engage the players in the space and with each other made for a much more effective experience.

EXPLORATION and PUZZLES

The first game I played is called "I Expect You to Die", in which the player is a secret agent going on missions where the path forward must be determined from actions and clues in the space - and often the process of figuring out that path results in a gruesome death. I played the first level of this game a few years ago, but since then they've added a beautiful animated introduction and several new levels. The game is meant to be played seated, with the player reaching out or leaning to move, or using their telekinetic prowess to bring objects to them.

In this case, the lack of locomotion around the scene increases the challenge and still makes for an enjoyable experience. It becomes accessible for all kinds of players and play spaces, and the missions themselves have good variety… though the deaths are still extremely startling in VR. The controls in particular worked well, and I enjoyed a great level of dexterity in the scene, switching between objects and using them with ease.

The last game was "Internal Light", an escape room style game where the user must navigate a creepy, dark building to make it outside, with a tiny ball of light as their guide.

Now, when I started this game, I didn't know what it was going to look like or what kind of gameplay there was going to be. You start off in a cell, chained to a bed, in a scene that looks like it's out of Resident Evil. There will be a week when I go into horror games, but I was not planning on it being today and, well, I'm a chicken. I immediately started sweating and wanted to leave (escape?). The game itself is not a horror game, it's just creepy, but the environment is effective for building suspense and tension.

What really sticks out for me here is locomotion. The player moves by holding a button and alternately swinging their arms back and forth in a skiing motion. I have NEVER seen this before, and it was oddly effective. To navigate, the player has to crouch and dodge security, and there's a special kind of anxiety in swinging your arms to move from one cover to the next, hoping you're moving fast enough. Even though I was standing I didn't get motion sick, and I was able to run through most of the game fairly quickly. I didn't see any options to adjust these settings.
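I have no idea how Internal Light actually implements it, but my best guess at an arm-swing locomotion scheme in Unity would look something like the sketch below, with placeholder transforms and a stand-in keyboard key for the real movement button:

```csharp
using UnityEngine;

// Not Internal Light's code - just a guess at how arm-swing locomotion might work.
// Assumes "rig", "head", and the two tracked hand transforms are assigned in the inspector.
public class ArmSwingLocomotion : MonoBehaviour
{
    public Transform rig, head, leftHand, rightHand;
    public float speedScale = 1.2f;   // how much hand motion translates into travel

    Vector3 prevLeft, prevRight;

    void Update()
    {
        // Placeholder input: swap in the controller button actually used for movement.
        bool moving = Input.GetKey(KeyCode.Space);

        // How fast are the hands swinging? (vertical motion only, in meters per second)
        float swing = (Mathf.Abs(leftHand.localPosition.y - prevLeft.y) +
                       Mathf.Abs(rightHand.localPosition.y - prevRight.y)) / Time.deltaTime;
        prevLeft = leftHand.localPosition;
        prevRight = rightHand.localPosition;

        if (moving)
        {
            // Travel in the direction the player is facing, flattened to the ground plane.
            Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
            rig.position += forward * swing * speedScale * Time.deltaTime;
        }
    }
}
```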

CONCLUSIONS

All four experiences used their environments to create presence in the space, and all included a level of AI "social" interaction. Whether it was the calm atmosphere conducive to board games, the action-hero music and imagery, or the anxiety-inducing horror themes, the environments were really the selling point of each experience. The social interaction between computer and user (with the potential for multiple users) negates the isolation that VR can sometimes induce, as discussed last week. I'm still curious about why that form of motion in Internal Light worked so well, and I want to see whether similar methods show up as I continue to explore what VR experiences are out there.

2/3/19: Painting and Planning

As a production week, I've been splitting my time between getting audio set up in Unreal and getting objects made in Quill. The last few days are where I get to put it all together with the final animated assets.

I'm operating a little in the dark right now (pun intended) on what the final look of this piece is going to be. I timed out some atmospheric fog to reveal the scene slowly and made cues for the sound effects: a match lighting, trees swaying in the wind, ambient noise for the surrounding scene. The narration is in the scene, but I still need to adjust the timing and put it all together.

Quill has been easier to use for static objects. I painted the cabin setting for the user - quicker than I expected, using the straight line tools and some colorize to get the final shading in. The candle currently in the scene feels a little too bright, so I tried going darker to see what the lighting in Unreal can do. One annoying thing about painting like this in Quill: if you're painting a lot with a specific color, the lack of lighting tools in the program makes it really difficult to see the cursor against those colors. I sometimes got lost trying to find where my brush was in the cabin, even though my hand was right in front of my face. Click the images below to check it out, though they're really dark when not in the program.

This last leap is about putting all the pieces together and testing it out. By the end of today I should have all the assets in and will be doing the final bit in Unreal. We were able to get both Quill and Unreal working in the labs, which has significantly increased my available production time.

What’s Next

  • Finishing the last few Quill assets

  • Compositing the 3D and Quill assets

  • Finishing audio timing

  • Adding a "Restart" button, so that the experience can loop at the viewer's choice or provide an easy restart between viewers (a reach, but it would be ideal)

  • Troubleshooting


Outside Research

NARRATIVE

Throughout this project I've been thinking about how to direct the viewer's attention to the events you most want them to see, while taking into account that they have agency over the camera itself. Part of that has to do with seeing the viewer as an actor within the scene, and the designer as a form of director - and with how that production process differs in VR compared to the 3D workspace. That's something I've been struggling with in this project, finding that path in a very short amount of time. Then an article about "Cycles", a VR short that Disney released late last year, came my way.

Disney's "Cycles", from the AWN article (source)

“Cycles” has a really interesting visual feature in that, when a viewer looks away from the central action to an area off to the side or behind them, those features desaturate and become darker. I also read that they used Quill to create storyboards for the film and developed a number of virtual tools to experience each stage of the process both inside and outside of VR.

GAMES

Moving away from the Quill project and towards my Thesis, I decided to use our unexpected Snow Day to conduct some VR research… using my chunk of time to experience the variety of things available on Steam and broaden my understanding of what techniques are being used.

I started with The Talos Principle VR, a game that I enjoy playing on the PC. When VR was first released, many game studios started porting their existing titles over to VR by just changing out the controls and leaving the content otherwise the same. I wanted to be able to do a direct comparison of the two.

Screencap from Youtube playthrough by Bangkokian1967 (source)


What I was really exploring here was how they approached movement. The Talos Principle is incredibly nonlinear: players generally choose how and where they go, and what path they take to get there. It's a puzzle game with generally realistic assets, and movement to avoid enemies is a huge part of successfully completing each stage.

The player gets an enormous amount of control over how they want to move through the game, with options for both how you move and how the camera adjusts to that movement. I started with teleporting, which works okay for getting across long spaces, but in confined spots with enemies that require you to move quickly, the few seconds it takes to acclimate to your new location tended to result in the death of my character.

Oh yeah - dying in VR? More disturbing than I thought it would be. It’s just a little explosion sound and a fade to black, but still very startling.

Walking using the touchpad didn't make me as sick as I thought it would, once I adjusted the vignette over the camera and made sure to stay seated. Standing resulted in a quick loss of balance and motion sickness, and I noticed that moving in a direction I wasn't looking also made me a little queasy.

Overall, I thought the adjustments made to the motion in the game worked well, and I was able to play for over an hour before taking off the headset. I'm not sure the experience was especially different from playing on a PC, but I'm also aware that I already know the story and how the game works, which makes it difficult to judge how immersed I really was.

EXPERIENCE

The last thing I wanted to look at was an experience called Where Thoughts Go: Prologue, available on Steam. The user sits in an environment and is presented with a question; they can listen to the anonymous answers of other participants and then record their own to move on to the next. There are five questions, and I still spent over an hour in this experience.

Where Thoughts Go: Prologue, Chapter 2.

Each environment changes to suit the question, from lighthearted for the first question to darker and more somber for the last. The experience was incredibly meditative - the environments are pleasant to sit in. The little orbs in the image are the responses of previous participants. You listen to their voices answering, and I was shocked by how open and honest the answers were. Being able to hear someone's voice crack a little as they talk about a sad event, or get higher discussing an upcoming wedding to their love, just pulls me further into the space.

VR can be considered isolating, as for the most part we're all just sitting by ourselves in a headset in our own worlds. This took an isolating experience and turned it into a communal one, a place where you can be vulnerable without risk. There are no usernames or accounts, just a recording. When you add your own recording to the space, you pick up the orb you've just made and pass it off to join the world. It creates a sense of closure and just enough participation that I felt like part of the experience.

Where Thoughts Go: Prologue, Chapter 2

Conclusions

I realized that I haven’t been very involved in what’s happening in VR outside of the academic research world, and need to continue going through these experiences alongside my own research. As I go through I’m keeping a journal of notes from each experience and what I can take away from them. I would like to play a made-for-VR game next week and see how that feels compared to a port like The Talos Principle, and search for other more community-based experiences like Where Thoughts Go.

Weeks 1-2: Sightlines, Airports, and Liminal Spaces

Year 2 is now off and running! 

Most of my energy over the past three weeks has been focused on the first project of the year: a five week team effort for 6400. The same project that produced the MoCap Music Video last year. 

Concept

Our team was told the due date and to make something... very open to interpretation. My team includes two 2nd year DAIM students (Taylor Olsen and Leah Coleman) and one first year student (Sara Caudill). We eventually settled on creating a VR experience based on liminal spaces, specifically taking place in an airport, with the viewer losing time and identity as the experience goes on.

Liminal spaces are typically described as spaces of transition, or "in-between" - a threshold. Common examples are school hallways on the weekend, elevators, or truck stops. Time can feel distorted, reality a bit altered, and boundaries begin to diminish. They serve as a place of transition - the destination usually lies before or after them. That sense of prolonged waiting and distortion of reality is what we intend to recreate in this experience. By placing the viewer at the gate of an airport and letting them observe the altered effects around them, such as compressed or expanded time, we will bring the viewer into our own liminal space.

All of our team members had an interest in working with VR and with games, so I looked for environmental examples of what might be considered a liminal space already existing within a game. The Stanley Parable sets the player in an office building by themselves, seemingly at night, which contributes to the odd feeling of the game - you never see another human, and the goal is to escape. The presence of a narrator and instructions (despite the player choosing whether or not to follow them) prevents this from being a true liminal space, but I feel that the setting itself is a strong nod in that direction.

Silent Hills P.T. is much closer to the feeling we're going for. The player constantly traverses the same hallway, though with each pass the hallway is slightly altered. There is minimal player identity, the passage of time is uncertain, and the player is constantly in a state of transition, looking for the end.

Sightline: The Chair became an important source material for us. Developed early on for the Oculus, it seats the player in a chair to look around at an environment that constantly morphs and shifts around them. The key point is that these changes occur when the player looks away, and are already in place when the player looks back. This is an element I very much want to incorporate into our game - it really messes with the flow of time and creates a surreal feeling. Importantly, the player cannot interact with any of the objects around them; they must simply sit and wait for the changes to occur.

Progress

From there, we met as a team and began planning out the experience- interactions, the layout of the airport, how time would pass, what events would be happening. An asset list was formed and placed online, as well as a schedule for development. We wanted to make sure everyone on the team was learning new skills they were interested in, and teaching others the skills that they have. Sara and Leah focused on visual and concept development- the color keys, the rhythm of the experience, etc. Taylor worked on finding reference photos, and began modeling the 3D assets we would need for the airport. 

I spent the last few days focused on modeling the airport environment and beginning some of the interaction work in Unity. Based on the layout we created in the team meeting, I was able to finish the airport shell and start working on some of the other environmental assets - a gate desk, vending machine, gate doors.

I brought those models into Unity to start working on developing some code. Taylor made the chairs for the gate, so I placed those and got a basic setup going. 

Airport environment in progress in Unity

I began working on some audio scripts to randomly generate background noise and events - an assistance cart beeping by, announcements being made, and planes taking off and landing. That's about done, and I'll be posting an update video soon with the progress made.
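Under the hood it's just a looping coroutine with random delays - roughly like this sketch (the clip list and timing ranges here are placeholders rather than the actual project values):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the random ambient-event idea: every so often, play one of the
// one-shot airport sounds through the attached AudioSource.
[RequireComponent(typeof(AudioSource))]
public class RandomAirportAmbience : MonoBehaviour
{
    public AudioClip[] eventClips;        // cart beeping, announcements, planes, etc.
    public float minDelay = 8f, maxDelay = 25f;

    AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        StartCoroutine(PlayRandomEvents());
    }

    IEnumerator PlayRandomEvents()
    {
        while (true)
        {
            // Wait a random amount of time, then fire off a random one-shot event.
            yield return new WaitForSeconds(Random.Range(minDelay, maxDelay));
            if (eventClips.Length > 0)
                source.PlayOneShot(eventClips[Random.Range(0, eventClips.Length)]);
        }
    }
}
```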

The current problem I'm having is the script that changes items when the viewer isn't looking at them. I found GeometryUtility.TestPlanesAABB in the scripting API, which takes planes formed from the camera's frustum and calculates whether an object's bounding box is between them or colliding with them - in other words, is the object somewhere the player can see it? I can successfully determine that an object is present, but when it's deactivated to switch to another GameObject, the first object is still detected and causes issues with the script I've written to swap it with another. It works with two objects, but three reveals the issue in full force. I may try instantiating objects next instead of just activating them - either way, this test has taught me a lot about how Unity determines what's "visible".
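For context, the basic structure of what I'm attempting looks something like this stripped-down version (the variant array is a placeholder for however the final objects get organized, and it only tests the frustum, so an object hidden behind a wall still counts as "visible"):

```csharp
using UnityEngine;

// Stripped-down version of the "swap it while you aren't looking" test.
// Cycles to the next variant once each time the active one leaves the camera frustum.
public class SwapWhenUnseen : MonoBehaviour
{
    public Camera viewerCamera;
    public GameObject[] variants;   // only one of these is active at a time
    int current;
    bool swappedWhileHidden;

    void Start()
    {
        // Make sure exactly one variant starts active.
        for (int i = 0; i < variants.Length; i++)
            variants[i].SetActive(i == current);
    }

    void Update()
    {
        Renderer rend = variants[current].GetComponentInChildren<Renderer>();
        if (rend == null || viewerCamera == null) return;

        // Build the frustum planes for this frame and test the active variant's bounds.
        // Note: this only checks the frustum, not occlusion by walls or other objects.
        Plane[] frustum = GeometryUtility.CalculateFrustumPlanes(viewerCamera);
        bool inView = GeometryUtility.TestPlanesAABB(frustum, rend.bounds);

        if (inView)
        {
            swappedWhileHidden = false;       // reset once the player has seen it again
        }
        else if (!swappedWhileHidden)
        {
            // Player can't see it: deactivate the current variant and activate the next.
            variants[current].SetActive(false);
            current = (current + 1) % variants.Length;
            variants[current].SetActive(true);
            swappedWhileHidden = true;        // only swap once per "look away"
        }
    }
}
```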

Next? 

This weekend, I'll be continuing to work on this sightline script for the camera and hopefully finding a solution. I also have several other environmental assets to model, and I will begin textures for the ones that are already completed. On Sunday I plan on posting a progress video of the application as it stands. We still haven't decided whether to use the Vive or attempt mobile VR, something I've been especially interested in. Alan suggested letting the project develop organically and then making a decision near the end. I'm currently leaning towards the Vive for its familiarity and extra power, but on mobile the player is forced to be stationary and lacks control. More thoughts on that soon.

First Year Wrap and Ruby Bridges: 6 Week Conclusion

In the first week of May, Tori and I completed our work on the 6 Week Prototype for the Ruby Bridges Project. It was presented, and then folded into a much larger presentation about our progress throughout the first year of our MFA program. As classes are starting back up, I wanted to make a post summarizing my journey over last year, the results of Ruby Bridges, and my current starting point. 

At the beginning of the year, I focused my efforts on the interactions between game design, education, and virtual reality. For me, this meant a lot of exploration and a technical education in these areas. 

My early projects focused on improving my skills in Unity. I worked on team projects for the first time in Computer Game I and got a real introduction to game design and game thinking. This also allowed me to develop my own workflow and organization in Unity. While exploring my personal workflow, I was interested in potentially using VR, via Google Cardboard, to organize materials and form connections across the scope of a project. The result was the MindMap project, which was a great introduction to mobile development and Google Cardboard, but provided limited usefulness for my work. It was tested using materials from my Hurricane Preparedness Project, a 10 week prototype developed to provide virtual disaster training for those in areas threatened by hurricanes. That project was my first time using Unity for VR and developing with the HTC Vive. The topics explored, including player awareness in VR, organization of emotional content, and player movement in a game space, would eventually become the basis of my work on the Ruby Bridges Project.

There has been a clear evolution in my own design process and focus, mainly a shift from visual organization to functional prototyping. Earlier in the year I still had a heavy focus on visual elements and art assets, though in the game design projects the experience suffered because the games were not fully functional. By the spring, I had shifted completely into prototyping and non-art assets. All of these projects challenged my process and boosted my technical skills, and then I brought those technical developments into a narrative context.

EDUCATIONAL AND EMOTIONAL STORYTELLING THROUGH IMMERSIVE DIGITAL APPLICATIONS

In the Spring, Tori Campbell and I began working on our concept for the Ruby Bridges Project. Working together, we would like to use motion capture and virtual reality to explore immersive and interactive storytelling. Ultimately, we are examining how these concepts can be used to change audience perception of the narratives and of themselves. Ruby Bridges' experience on her first day of school is the narrative we've chosen to focus on. 

Ruby was one of five African-American girls to be integrated into an all-white school in New Orleans, LA in 1960. She was the only one of those girls to attend William Frantz Elementary School; at six years old, she was told only that she would be attending a new school and to behave herself. That morning, four U.S. Federal Marshals escorted her to her new school. Mobs surrounded the front of the school and the sidewalks, protesting the desegregation of schools by shouting at Ruby, threatening her, and displaying black baby dolls in coffins.

This scene outside the front of the school became our prototype in VR. 

The Four Week Prototype focused on developing technical skills that we would need moving forward, specifically navigation, menu/UI, and animation controls. In doing so, I learned not just how to make these functions work, but the pros and cons of each.  This allowed me to make more educated decisions in the design of our Six Week Prototype. We gathered motion capture data from actors to work with the data in a VR space, and to help experiment with controlling the animations. 

My goal with the Six Week Prototype was to create a fully functional framework for the experience, something with a beginning, middle, and end. I created a main menu, a narrative transition into a Prologue scene, the Prologue scene itself where the user inhabits Ruby's avatar and sees from her perspective, and then an interactive scene where the user can examine the environment from a third person view. This view provides background information and historical context, and lets the user drop into the scene from another perspective. Where the broad goal of the Four Week Prototype was technical development, this project examined different levels of user control, the effects of those levels on the experience of the scene, and how to create an experience that flows smoothly from scene to scene even with these different levels of control.

This prototype became a great first step into a much larger project. We learned a lot about creating narrative in VR, and through demonstrations with an Open House audience we discovered just how much impact a simple scene with basic elements can have on the viewer.

THEORY

Broadly, my thread going into the year was how virtual reality can be combined with game design for educational purposes. Through these experiences, I was able to refine that to how immersion and environmental interaction along with game design can be used to form an educational narrative experience. 

Tori and I are focusing on different but connected elements of this project. I am working specifically with theories concerning self-perception, learning, and gamification; structured together, they form a framework for my research. Self-perception theory is connected through the concept of perspective-taking, representing the user and how they reflect back on themselves and their experiences. Gamification represents the interaction the user has with their environment - it provides the virtual framework for the experience using game design concepts. Learning theory places the whole experience in the context of education and the "big picture".

WHAT'S NEXT? 

Over the next year, I will continue to work with Tori on the next stages of the Ruby Bridges Project. While we are still discussing our next steps, I would like to explore more environment building and the structure of the experience. The Six Week Prototype was a great learning experience in setting up a narrative flow and working through different levels of interactivity and user experience, but there are still so many other directions to push it: having the crowd react back to Ruby by throwing objects, yelling specifically at her, or even having all of their eyes constantly gazing down at her, further increasing the menacing presence; playing with perspective-taking so users can switch back and forth between different members of a scene, and determining whether that ability contributes positively to the scene; and pushing other concepts of gamification, such as giving users a task while they are in the scene to highlight aspects of the environment (the closeness of the crowd, the size of Ruby, etc.). Manipulating these environmental aspects will likely be the next step for me.

I will continue to research the theoretical framework highlighted above and will likely be making modifications as I start to delve more into these topics. My classes begin next week, and as part of that I will be taking Psychobiology of Learning and Memory- this will likely have an impact on the theoretical framework, but I'm very excited to take what we learn in there and potentially apply it to the experiences.

On the technical side, I will be conducting small-scale rapid prototypes to test these concepts as main development on Ruby Bridges continues. Furthermore, I would like to experiment with mobile development on the side to see if a similar experience to our prototype could be offered with various mobile technologies, such as Google Cardboard or GearVR, perhaps even the Oculus Go. 

For now, I'll be organizing my research and getting ready to hit the ground running. 

Reality Virtually Hackathon!

Earlier this month I was able to attend and compete in the Reality Virtually VR/AR Hackathon, hosted by the MIT Media Lab. I registered, was accepted, and started connecting with other participants via a Facebook page. Everybody was really friendly and excited about working with VR/AR technology! I saw people from all kinds of fields and backgrounds, from students to industry professionals. About two weeks before the Hackathon, everyone started posting their bios and work experience to see who was interested in working together or finding a team. I spoke with several participants, but one reached out and asked me to join their team. All they knew was that they were interested in working with the recently released ARKit, and all of the team members were iOS developers. They needed someone from the 3D world.

So I drove out to Boston for the Hackathon, and that first night we had a brainstorming session, just throwing ideas around until something stuck. We decided to tackle the problem of collaborative AR - something that had not been done successfully in ARKit before. And by the end of the two days, we had it! It was definitely more of a technical challenge than an artistic one, but I made the art assets we used to demonstrate its capabilities and tried to get the team to think about a design process as well as an engineering process.

The video above was made during the competition to show our platform in action. I'll be creating a more comprehensive video in the next few weeks. 

"Team Two" ended up winning our category, Architecture, Engineering, and Construction, and Best Everyday AR Hack from Samsung! 

Team 2 after the Closing Ceremony

The overall experience was amazing. This group worked well together and was able to solve a problem that opens up a lot of opportunity for developers. I learned a lot from them - I had never worked with mobile development and had no idea what was involved in developing for iOS, or with AR for that matter. The workshops before the event were a great way to get into the headspace of VR/AR development and ask questions about various aspects of the industry. The Facebook group is still alive, and I made a lot of connections at the event. I'm planning on attending again next year, and maybe trying the one at Penn State as well.


While I was at the Hackathon, I was also working on a game level for my Computer Game 1 class. This was a team project centered around the theme of a broken bridge. Each of us had to create a level using different game mechanics to get around the bridge. Mine was to collect planks that had washed downriver and carry them back to the bridge in order to repair it. I found, especially during this project, that my scripting skills in C# are improving a lot and I'm starting to understand Unity a lot better. Of course, I still get a little overexcited when building scenes so... even though this was a prototyping assignment I got to play with all kinds of fun settings. 

The next couple of weeks are going to be intense. I have a VR prototype that I'm working on involving Hurricane Preparedness (more on that soon), and an AR MindMap project I'm working on to explore my own process a bit more. Next week I should have a computer game final project in the works as well- not too sure what that's going to look like just yet. There will be plenty of process work to post on here! 

GDEX 2017

I spent this weekend at GDEX- the Game Development Expo held here in Columbus, Ohio! This was my second year attending the Expo, their largest yet. I volunteered in the morning and then was scheduled to show my Roll-A-Ball mod at the ACCAD table for an hour. My volunteer shift ran a little long and I ended up not showing my game, but I was still able to get a semi-functional version complete last week: 

Screenshot of Roll-A-Ball Mod

The idea of the mod was to find your way to the center of the maze, hitting switches as you go to lower walls around the final pickup. However, the lights are off and the only things you have to navigate by are the glow of the ball, the light trail that illuminates the path you've already taken, and the map in the corner showing the entire maze. Scattered throughout the maze are light pickups that illuminate the maze for a brief period of time, allowing the player to see potential paths ahead. 

Screenshot after light pickup. 
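The pickup behavior itself boils down to a trigger and a timer - something like the sketch below, assuming a single maze-wide light and the ball tagged "Player" (names here are placeholders, not my actual scripts):

```csharp
using System.Collections;
using UnityEngine;

// Rough version of the light pickup: briefly raises the maze lighting when the
// ball rolls through it, then fades the maze back into darkness.
public class LightPickup : MonoBehaviour
{
    public Light mazeLight;          // scene light normally kept at 0 intensity
    public float litIntensity = 1f;
    public float duration = 4f;      // how long the maze stays visible

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;

        // Hide the pickup but keep the GameObject alive so the coroutine can finish.
        GetComponent<Renderer>().enabled = false;
        GetComponent<Collider>().enabled = false;
        StartCoroutine(LightUp());
    }

    IEnumerator LightUp()
    {
        mazeLight.intensity = litIntensity;
        yield return new WaitForSeconds(duration);
        mazeLight.intensity = 0f;
        Destroy(gameObject);
    }
}
```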

There's still a long way to go before my C# skills are considered proficient, but by the end of this process I was starting to get the hang of it. 

This weekend I'll be attending the Reality, Virtually 2017 VR/AR Hackathon in Cambridge, MA. It'll be my first hackathon and I'm excited to meet other people working with this technology in different industries and roles. Following that, I'll be starting development on a Hurricane Preparedness VR game and another small collaborative platforming game in Unity, so plenty of updates to be posted this week! 

Music Mocap and Wrecked Cabin are Up!

I spent today updating my website with the finals from my Wrecked Cabin project and our most recent project: Music Mocap.

This project was my first experience ever with motion capture, and OSU has a fantastic motion capture studio. I wrote a little bit about it in the last post, but we had a great time working with the dancers and I learned a lot about the actual process... especially because I had to get in a suit to do a test capture. 


Other than Music MoCap, I'm taking a Computer Game class that's really expanding my knowledge of Unity and game design. In the past, my game classes have just given us a few pieces to cobble together and didn't really take the time to go into how things work or why games are designed that way. But now I'm actually learning how C# functions and why it works.

We started by following the Roll-A-Ball tutorial from the Unity website, but now we have to create a mod that changes the gameplay or introduces a new dynamic. I'm working with this idea of creating a timed maze scenario, but haven't quite narrowed down my exact plan just yet. Here are a few pages from my sketchbook, just jotting down ideas:

Over the weekend I'll be working out the final plans for this mod and getting a working prototype started. With Unity projects in the past I was focusing all of my efforts on asset creation and less on the game itself, so this is going to present a new challenge and experience for me. More updates to come!