Debugging Movement

Last weekend, Tori and I worked with two actors in the motion capture lab to gather data for our scene. Zach and Shaun played four different roles in their captures: angry protestors, the police, Ruby's Mom, and the teacher. The capture itself went fine; both actors were great and gave us a good variety of motion to work with and fill the scene. We also included a marker for Ruby, just for our reference later in the scene.

This weekend, we took more recorded captures with Cynthia and Mckenna from the Theater department. We captured both of them at the same time, acting together in similar roles: as mother and daughter, teacher and student, protestor and cop. Their captures, combined with last weekend's, should give us enough variety of motion to populate the scene.

Still from the motion capture session, 2/18/18. Cop confronting a protester.

While all of the recorded captures went really well, I ran into some issues last weekend with Unity and SteamVR. There seems to be a bug in the SteamVR Beta where the controllers are deactivated when playing the game in Unity. They still function when pressing the menu button, but in the actual scene, nothing. Lakshika was kind enough to come in this weekend and work with me once our recorded captures were done. We discovered that bringing older SteamVR files into the project makes the controllers function properly. Truthfully, I have no idea why that's the case, but I'm not going to question functional technology.

In a clean file, we were able to bring the actors in live and have them put on the headset to see the scene as they were acting. They were also able to use basic teleport functionality with the controllers, although I noticed that the character models start to offset with each teleport. We should be able to fix that with a quick script that attaches the character to the headset as the player teleports, then releases it.
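The fix I have in mind looks roughly like the sketch below: parent the streamed character to the headset while the teleport happens, then release it afterward. This is just a rough sketch in plain Unity C#; the field names ("headset", "characterRoot") are placeholders for however we end up wiring it into the scene.

```csharp
using UnityEngine;

// Rough sketch (plain Unity, untested): keep a live-streamed character model
// aligned with the headset during a teleport so repeated jumps don't
// accumulate an offset. "headset" and "characterRoot" are placeholder names.
public class KeepActorWithHeadset : MonoBehaviour
{
    public Transform headset;        // the eye camera under the CameraRig
    public Transform characterRoot;  // root of the streamed character model

    // Call as the teleport happens...
    public void SnapActorToHeadset()
    {
        characterRoot.SetParent(headset, worldPositionStays: true);
    }

    // ...and release once the new position has settled.
    public void ReleaseActor()
    {
        characterRoot.SetParent(null, worldPositionStays: true);
    }
}
```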

On Thursday I brought in two other forms of navigation: a radial menu attached to the right controller, and a 2D map that can be accessed in the main menu at any point by the user.

The radial menu can be accessed just by touching the touchpad on the right controller and circling to whichever destination the user wants. Clicking the touchpad will then take the player to that specific beacon. There are currently four: at the front door of the school, at the end of the sidewalk, in the street, and over by the table of interactive cubes.
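Under the hood, the selection boils down to mapping the touchpad angle to one of the four beacons. A minimal sketch of that idea in plain Unity C# (in the project, VRTK's radial menu handles most of this; "playArea" and "beacons" here are placeholders):

```csharp
using UnityEngine;

// Sketch of the beacon selection logic: the touchpad angle (0-360 degrees,
// measured from "up") picks one of four 90-degree slices, and a click moves
// the play area to the matching beacon.
public class RadialBeaconTeleport : MonoBehaviour
{
    public Transform playArea;                      // the [CameraRig]
    public Transform[] beacons = new Transform[4];  // school door, sidewalk, street, cube table

    // Called when the touchpad is clicked, with the current touch angle.
    public void OnTouchpadClicked(float touchpadAngle)
    {
        int index = Mathf.FloorToInt(((touchpadAngle + 45f) % 360f) / 90f);
        playArea.position = beacons[index].position;
    }
}
```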

For the 2D map, I took a screenshot of an orthographic top-down view of the scene and placed buttons over the same four areas where the beacons are. When the user pulls up the menu, a button there activates the 2D map. From there, the user selects any of the buttons on the map and is transported to the beacon in that area.
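Each map button just moves the play area to its beacon when clicked. A quick sketch of how one button is wired up, assuming a standard Unity UI Button; the field names are placeholders:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of a single 2D map button: clicking it moves the play area to the
// beacon this button represents.
public class MapButtonTeleport : MonoBehaviour
{
    public Transform playArea;  // the [CameraRig]
    public Transform beacon;    // destination for this button
    public Button mapButton;    // the button placed over the map screenshot

    private void Awake()
    {
        mapButton.onClick.AddListener(() => playArea.position = beacon.position);
    }
}
```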

In-Game screenshot of 2D Map on the headset menu.

In-Game screenshot of radial menu.

Of course, I spent some time this weekend debugging the issues we found after play-testing all the navigation changes. The major issues found: 

  • TOUCHPAD SCENE:

    • Teleportation on the right controller turns back on after the first successful jump using the Touchpad.

    • Teleporting with the Pointer and then attempting to use the Touchpad results in an offset from the beacons that continues to grow.

    • After fixing the first issue, the menu laser no longer turned back on for selections.

  • 2D MAP:

    • Menu missing entirely, though the pointer still appears.

    • Teleport offset (again)

    • Floating off the ground (again)

I chose to limit the Pointer teleportation to the left controller in all levels, for operational consistency and less confusion overall in the controls. This solved the problems with the Touchpad presses when trying to use the radial menu to navigate. The pointers turning on when using the Touchpad were resolved with some scripting changes, and the teleport offset was just a matter of making sure the CameraRig was being used as a transform reference instead of its parent, the SteamVR object, which was pulling the destination off course. A simple fix, but frustrating to actually track down.
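In case anyone hits the same problem, the fix amounts to something like the sketch below: move the [CameraRig] itself and never reference its [SteamVR] parent in the teleport math. This is a simplified stand-in for what the actual scripts do; the field names are placeholders.

```csharp
using UnityEngine;

// Simplified sketch of the offset fix: the teleport moves the [CameraRig]
// transform directly. Referencing its parent [SteamVR] object instead is what
// kept dragging the destination off course.
public class BeaconTeleporter : MonoBehaviour
{
    public Transform cameraRig;  // the [CameraRig] itself, NOT its [SteamVR] parent

    public void TeleportTo(Transform beacon)
    {
        // Land on the beacon while keeping the rig's current height.
        cameraRig.position = new Vector3(beacon.position.x, cameraRig.position.y, beacon.position.z);
    }
}
```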

Debugging while in-game. Bezier pointer still appearing and enabling teleport while using the radial menu.

Achieving victory over the debug list meant it was time to bring in someone else for some outside feedback. My friend Maggie tried out all of the scenes and functions and offered some critique on the button mapping and overall function:

  • Confusion over the controls. There's no tutorial or hints for the trigger and touchpad other than me explaining the controls in person. For someone unfamiliar with the Vive, this really makes navigation difficult.

  • 2D map needs some color and highlighting to indicate the interactive areas for the user.

  • Colliders are needed around the buildings to keep players from "phasing" into spaces we don't want them in. It's very disorienting and breaks the immersion significantly.

As a result, I went ahead and added controller tooltips immediately so users can understand the button functions. These tooltips appear for 15 seconds when the scene starts, then deactivate. However, there is a toggle in the main menu if the user needs help or would just like to reactivate them.
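The timing itself is simple enough that a coroutine covers it. A rough sketch of the behaviour, with a placeholder "tooltipRoot" object and a public method for the menu toggle to call:

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the tooltip timer: show the controller tooltips for the first
// 15 seconds of the scene, then hide them. The main menu toggle can call
// SetTooltipsVisible() to bring them back.
public class TooltipTimer : MonoBehaviour
{
    public GameObject tooltipRoot;   // parent object holding the tooltip visuals
    public float showSeconds = 15f;

    private IEnumerator Start()
    {
        tooltipRoot.SetActive(true);
        yield return new WaitForSeconds(showSeconds);
        tooltipRoot.SetActive(false);
    }

    // Wired to the help toggle on the headset menu.
    public void SetTooltipsVisible(bool visible)
    {
        tooltipRoot.SetActive(visible);
    }
}
```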

Tooltips present on the controllers at the beginning of the scene.

Thursday night, Tori was able to get one of our recorded captures into Unity and functioning with a character model. We started with one of the protester walks just to get the process down, and she'll be working on getting a variety of animations and characters into the scene. I started researching how to control animations in Unity, and it's not a clearly defined process. There's confusion between legacy animation scripting in Unity and what currently works with the Animator, and I keep getting mixed up between the two and accidentally working with methods that are no longer used in newer versions of Unity. It seems most of the play/pause approaches have to do with stopping time in the game, but when I attempted this it stopped my use of the controllers in the Simulator. Definitely going to need some more research in this area.
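The direction I'm leaning toward, instead of freezing Time.timeScale, is zeroing out the Animator speed on just the mocap characters so the controllers keep working. A hedged, untested sketch of that idea; the array of Animators is a placeholder for however we end up tracking the characters:

```csharp
using UnityEngine;

// Untested sketch: pause the scene's characters by setting each Animator's
// speed to 0 instead of freezing Time.timeScale, so controller input and
// navigation keep working while the mocap animation is held in place.
public class ScenePauseController : MonoBehaviour
{
    public Animator[] characterAnimators;  // Animators on the mocap characters

    public void SetScenePaused(bool paused)
    {
        float speed = paused ? 0f : 1f;
        foreach (Animator animator in characterAnimators)
        {
            animator.speed = speed;
        }
    }
}
```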

NEXT:

This next week is going to be all about animation controls. I've considered using sliders, buttons, and toggles, but really it's going to come down to what I find in research and what actually works in the scene. Once I get this working on one of the characters in the scene, I'll move on to applying it to multiple characters/animations. I'll also be working on some smaller tasks: adding proximity tooltips to objects around the scene (information transfer), colliders around the school, and adjusting the 2D map with button highlights and a smaller surface (it currently extends too far into the periphery).

Navigation and Menus

As of today, our Unity file is set up to operate as a good testing ground for interaction! Last week was really focused on honing our test ideas and getting the basic scene set up. This week, I focused on implementing navigation, interaction, and a functional menu to control our testing in the future. 

On Monday I went ahead and tested the basic navigation and interaction functions I set up last week in the Vive. I set up the teleportation using the straight line pointer at first, and realized that this can prove difficult when moving around a large space. The straight line renderer in VRTK makes it hard to gauge distances, though it works very well for UI selections. Keeping this in mind, I changed the teleportation to a bezier pointer. The arced line is easier to see in the space, especially when navigating elevated elements such as stairs or ramps.
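The swap itself is just pointing the controller's pointer component at a bezier renderer instead of the straight one. I did it in the inspector, but scripted it would look roughly like this (assuming VRTK 3.x component and field names, which may differ in other versions):

```csharp
using UnityEngine;
using VRTK;  // assumes VRTK 3.x; names may differ in other releases

// Rough sketch of the pointer swap done in code rather than the inspector:
// point the controller's VRTK_Pointer at a bezier renderer so teleportation
// uses the arced line instead of the straight one.
public class UseBezierPointer : MonoBehaviour
{
    private void Awake()
    {
        var pointer = GetComponent<VRTK_Pointer>();
        var bezier = GetComponent<VRTK_BezierPointerRenderer>();
        pointer.pointerRenderer = bezier;  // previously a VRTK_StraightPointerRenderer
    }
}
```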

Using the Bezier pointer for teleportation.

I also tested out the interactive cubes. After some debugging, all the different highlighting settings worked. Some have outlines, some turn to solid colors, and they can be grabbed and tossed around the scene. I did run into one issue where the cubes could not be retrieved from the ground plane; it turns out the player was floating 0.5 units above the ground. Adjusting this value fixed the problem pretty quickly.

Initial test in the Vive, 2/5/18.

The Simulator now works in this project file, so I can roughly test functions without having to load into the Vive every time. In the past, troubleshooting often took a long time because I would playtest only after making fifty changes, which made it difficult to find issues when they arose. It's been a conscious effort this time around to test every major implementation, and it's paying off.

I switched gears for a bit to work on UI elements in the scene. Because Tori and I are going to be testing a variety of interactive properties (some are variations of each other, others contradict each other completely), I made a main menu for the whole project so that we can easily switch between scenes. It also includes toggles for the interactive objects if we don't need them in a scene. In the future, I will be adding controls for the motion capture data and animations. While I've made menus in the past for Unity, this one is attached to the headset and moves around with the player. Following the VRTK tutorial for a Headset Menu gave me a great start on the format, and then I adjusted it to fit our purposes.
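The menu follows the player because it lives on a world-space canvas that gets repositioned in front of the headset. The VRTK tutorial handles this more gracefully, but the core idea is roughly the sketch below (field names are placeholders):

```csharp
using UnityEngine;

// Sketch of a headset-following menu: each frame, place the world-space
// canvas a fixed distance in front of the headset and face it toward the player.
public class FollowHeadsetMenu : MonoBehaviour
{
    public Transform headset;      // the eye camera under the CameraRig
    public float distance = 1.5f;  // how far in front of the player the menu sits

    private void LateUpdate()
    {
        Vector3 forwardFlat = Vector3.ProjectOnPlane(headset.forward, Vector3.up).normalized;
        transform.position = headset.position + forwardFlat * distance;
        transform.rotation = Quaternion.LookRotation(forwardFlat, Vector3.up);
    }
}
```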

State of Headset Menu, as of 2/8/18

Tori and I did a playtest together of the first basic teleportation level, just to make sure the buttons worked and to debug a few issues from Tuesday. Below is a video of our test and of working through the ground plane issues.

Once the project framework was in place, I nailed down what exactly the navigation functions were going to look like and looked for any holes in my logic, thinking through what the player impact would be, what level of control they would have, and what purpose these changes would serve. All four of these options address very different concerns in the environment and are viable options to explore for potential user interaction. I have a pretty good idea of how to set these up in Unity, but I'm going to spend this weekend actually doing that. Next week will have more information on the development and results. On Tuesday, Tori and I will do another playtest to determine whether we need further exploration in each scene.

Planning out navigation functions for the first phase of my work.

On the theoretical side, we had a conversation today about what our research question is looking like for this project as a whole. Tori is exploring the immersion side of the environment: telling the story through acting and environment. I'm exploring the interaction: how users experience and navigate through this story. We're both still pretty new to writing proper research questions, and the results of the next four weeks are going to determine a lot about where we go moving forward. We just needed to start putting language to our work and determine how it all fits together in the big picture. The picture below is our start on this conversation, though it will keep developing over the next few weeks as we work on writing our own questions and bringing them together.

Working on formulating research questions

I mentioned using Mendeley last week for organizing my research; the reading has officially begun. I downloaded the desktop app and started adding all the current studies that Tori and I have gathered. While waiting for the uploads to complete, I read about the Virtual Human Interaction Lab (VHIL) at Stanford University, which studies human interaction in virtual reality and its larger societal effects. Their website has a large archive of research papers from the past ten years, and I left with 16 studies on topics ranging from interaction to children's education to racial bias.

Tori is pretty far ahead of me on readings right now, but I've started prioritizing them and working my way through the list based on those most relevant to my development in Unity. So far I've read:

The cybersickness reading really tied in with the current issues I'm facing with navigation. Because the user will be navigating a large space (and in the endgame, the user will be a younger student), our tests should address comfort in navigation as well as functionality. Maria mentioned thinking about how these navigation forms could influence the story itself: for example, having limited movement when playing as Ruby, or losing the ability to navigate the space altogether. That would really emphasize the role of the child, Ruby particularly, as one with minimal control over their world. Although I wonder if that effect would be as prominent if the user is already a child who may experience this in their life. Unless it's emphasized to a new degree? Food for thought.

"How to Do Things With Videogames", by Ian Bogost, explores the variety of uses games have been applied to. Some of these uses are unrelated to us at the moment- the debate over whether video games qualify as art is interesting but not really what we're exploring. But there are chapters on empathy, reverence, and work that give great examples of games to look at and how they deal with these topics. The empathy chapter discusses two games made by USC graduate students called "Darfur is Dying" and "Hush", both dealing with genocide and fostering empathy for the people trying to survive in these situations.

It also introduced the concept of the vignette in games, which gives an impression of an experience rather than advancing a narrative. Bogost also wrote an article on Gamasutra explaining his thoughts on video game vignettes. Our experience does not focus on one particular aspect of Ruby's walk to the school but would highlight multiple things she would face: confusion, loud crowds, angry faces, lack of control. Because of this, I do not believe we could call this experience a vignette, but it's a good reminder to consider breaking down each aspect of her experience and how we portray that to students.

I've started making progress on the Vygotsky paper Maria sent us about imagination in childhood; I'm about 11 pages in (out of 92). So far he's discussing what imagination actually is, its origins in childhood, and imagination's basis in reality. I'm interested to see where this goes as far as discussing the perception of reality in VR, and how that ties in with the other papers I downloaded from VHIL.

Screenshot of my current Mendeley setup, with readings uploaded.

Annotations for "New VR Navigation Techniques to Reduce Cybersickness"

NEXT: 

From here on I'll be developing the other three teleportation techniques and making progress on the readings. In working with the teleportation, I'll be learning more about UI and setting up the controllers for specific functions, thus knocking out two of my three goals. 

On Sunday, Tori and I have three actors from the Theater Department coming into the motion capture lab to capture data and potentially to interact with live in the scene. I should be able to teleport and move around them while they act. This will serve as a good test of scale in the environment, and we'll be able to run through the procedures we learned last week. Once Tori has this data ready to go, we can bring it into the scene and I'll start playing with user control over animations/time.

Building in Unity

This week started with the completion of the Explainer Video, which I've placed below. Creating this video really did help me organize my thoughts from last semester and display what I've been working on. Seeing this together made it easier to find my path forward. It also gave me the chance to work in After Effects again and brush up on some old skillsets. 

Tori and I began discussing the Ruby Bridges project last weekend and had a general plan in place for production to begin. On Wednesday, we spent some time in the Motion Capture Lab learning how to stream an actor's motions directly into Unity. I have spent very little time in the Motion Capture Lab in the past and am unfamiliar with the programs Tori has to use to capture data, so seeing this process gave me a general idea of her pipeline. Our classmate Taylor put on the suit, and we started by setting up tracking as if we were recording his movements. Tori is very familiar with this process, and it's something we'll be using to test out animations.

We then pulled up Unity and learned how to stream an actor live directly into the scene, which did require some tweaking and setup for the basic character we were using. But the end result was being able to put on the headset, see Taylor's character in the HTC Vive, and interact with him live. 

Tori viewing Taylor's motions in the HTC Vive. 

This is going to be especially valuable once we have the final set built and can interact with actors in the space. His movements were very clear, though we didn't properly orient the character and the camera around the origin. Whenever Taylor moved to his left, it appeared to me that he was moving straight towards me. Just making sure all of our transforms are correct should clear up this issue. 

I made a demo Unity project earlier in the week that was just a base set: a flat plane with some boxes and house-like representations, just to have a place to test out interaction. When I went to show Tori, I realized that I had forgotten to load the SteamVR asset package. Trying to reload it caused a whole host of problems, and I found it was easier to start from scratch and build up a demo scene with a layout similar to our story. I spent Thursday building a new set using the Prototyping asset package that comes with Unity. Because interaction is my priority, I'm choosing not to focus on the models and just work with representations. This new map features a school, front yard, and street.

Screenshot of new Unity scene.

I chose to make this scene fairly large, so we have room to experiment with navigating larger environments. This also means more room for figures once we start importing the motion capture data. 

From there, I followed some of the VRTK tutorials (found HERE) to set up the camera, basic teleportation system, and a few interactable objects. There's a table off to the side with 6 cubes on it, each with different properties. One functions as a control and cannot be picked up using the hand controls. The other five have varying highlighting settings, and react differently when picked up by the controllers. This helped me learn a bit more about how the hand controllers are set up to work with interactive objects, and what options I have for modifying these interactions.  
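Most of the cube setup happens in the inspector, but the scripted part is small. A minimal sketch of what one grabbable cube needs, assuming VRTK 3.x component names (which may vary between releases):

```csharp
using UnityEngine;
using VRTK;  // assumes VRTK 3.x; names may differ in other releases

// Sketch of a grabbable test cube: mark the interactable object as grabbable
// and make sure a Rigidbody exists so the cube can be tossed after release.
// The control cube simply leaves isGrabbable off.
public class SetupTestCube : MonoBehaviour
{
    private void Awake()
    {
        var interactable = GetComponent<VRTK_InteractableObject>();
        interactable.isGrabbable = true;

        if (GetComponent<Rigidbody>() == null)
        {
            gameObject.AddComponent<Rigidbody>();
        }
    }
}
```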

Screenshot of interactive table, with one of the pick-up cubes selected.

I had to make several decisions this week about what specific types of interaction I would be exploring. I knew it would be three broad topics: navigation, object interaction, and time. But I broke that down and really thought about what I want to explore in those areas, beginning with navigation.

  • Teleportation:

    • Using VRTK, the standard simple teleport function. I did switch the pointer to a bezier pointer, which seems easier to use than the straight pointer: it's easier to determine a final destination, whereas the straight pointer tends to overshoot. I learned this week how to set that up from scratch, which was my first goal.

    • Point and click navigation. In this scenario, the user determines their destination but we (the designers) control the actual teleportation. The scene would be divided up into sections, and at the border of a section the user will have a cue to move into the next area. The user will appear in the same spot each time. It will be interesting to investigate whether it's easier to move this way and whether it takes the user's focus off of the controls. (There's a rough sketch of this idea after the list.)

    • 2D Map. Using a menu function to determine which area the user wants to teleport to. In this case, having a map available to toggle on a hand control, or a series of options. Something like "School Entrance" would teleport them to the front of the school doors.
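Here's the rough sketch of how I picture the point-and-click version working: a trigger volume sits at each section border, shows a cue, and a confirmation drops the player at that section's fixed spawn point. Everything here (tags, field names) is a placeholder until I actually build it:

```csharp
using UnityEngine;

// Sketch of section-based, point-and-click navigation: a trigger volume at a
// section border shows a cue, and confirming the move places the player at a
// fixed spawn point in the next section. Assumes the play area has a collider
// tagged "Player" so the trigger can detect it.
public class SectionBorderCue : MonoBehaviour
{
    public Transform playArea;    // the CameraRig
    public Transform spawnPoint;  // fixed arrival spot for the next section
    public GameObject cuePrompt;  // e.g. a floating "move ahead?" indicator

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player")) cuePrompt.SetActive(true);
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player")) cuePrompt.SetActive(false);
    }

    // Called when the user confirms (e.g. a controller click) while the cue is visible.
    public void ConfirmMove()
    {
        playArea.position = spawnPoint.position;
        cuePrompt.SetActive(false);
    }
}
```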

I took inspiration from games like Myst, Dreadhalls, and The Sims when considering these layouts and how the player interacts with a larger map or navigation techniques. 

  • UI/Menus

    • Scrolling

    • Moving windows

    • Typing (for potential classroom uses)

    • Buttons

  • Animation

    • Starting and stopping animations with a button to "pause" the scene while retaining player movement.

    • Play with time: the ability to move backwards and forwards, implementing those scroll bars from the UI/Menus exploration. Similar to resting in Skyrim. (There's a rough sketch of the slider idea below.)
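The slider idea, sketched very roughly and untested: drive the normalized time of a character's current animation state from a UI slider so a moment can be scrubbed back and forth. The Animator and Slider references are placeholders.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Untested sketch of time scrubbing: a UI slider (0..1) jumps the character's
// current animation state to that point in time and holds the pose there.
public class AnimationTimeSlider : MonoBehaviour
{
    public Animator characterAnimator;
    public Slider timeSlider;

    private void Awake()
    {
        timeSlider.onValueChanged.AddListener(Scrub);
    }

    private void Scrub(float normalizedTime)
    {
        AnimatorStateInfo state = characterAnimator.GetCurrentAnimatorStateInfo(0);
        characterAnimator.Play(state.fullPathHash, 0, normalizedTime);
        characterAnimator.speed = 0f;  // hold the pose until playback resumes
    }
}
```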

(Skyrim) An example of time sliders, potentially incorporated into the scene to replay a moment or action.

Tori and I also discussed the hardware being used. While we know we're going to be developing for the Vive, we also have access to the Leap Motion sensors for hand controls. I looked into development for these, and I think it would be a valuable area to explore. Being able to see and reach out to grab objects, or incorporating gestures into navigation, could be an interesting space to work in. For now, I've decided to accomplish the above goals using the HTC Vive, but to keep researching and looking up Leap Motion resources in case we decide to take that route in the future.

Below is a video from Leap Motion previewing their VR hand tracking software in 2016, just to give a general idea of the type of interaction we could be looking at. 

The past week has been full of development and making decisions to start moving forward on our prototype. In the next week, I will be finishing up some tests for the point and click navigation and getting the menu-based navigation in place. I have a few ideas for how to accomplish this, but I need to do some research and see if there are any cleaner or simpler paths.

I will also be sorting through and organizing readings tomorrow; another classmate showed me the Mendeley app, and I'd like to try using it to keep my research together. I also need to get some time in the Sim lab to test out the level I made and make sure these interactions are functioning the way they're supposed to. The simulator in Unity isn't running properly for me in this scene; while it would be useful, it's just not a priority right now, and I can work on fixing it once a few other tasks are accomplished.

Finalized Proposal and Work Documentation

APPROACH

The majority of this week was spent gathering footage and replacing all of the storyboards in my animatic. I went back into each project and did screen recordings along with footage of the players, from the HTC Vive to the Google Cardboard. Syncing up the footage was actually easier than expected using screen context clues. I also added background music and an intro sequence. As of right now, Explainer Video 1 is about 90% complete. The only things missing are the credits and some tweaks on sound/text. 

Still of title sequence from Explainer Video 1

Screenshot working in After Effects

CHOICES MADE

Tori and I submitted our project proposals for the next four weeks, and that meant making decisions on what exactly I wanted to investigate for my portion of the project. We discussed working on technical exercises, trying to nail down the pipeline and techniques we might use for future development. My current plan for the next four weeks is to focus on: 

  • Navigation: How does the user move about the scene? I will use VRTK in Unity to experiment with teleportation, walking, and top-down maps as ways for the user to explore a given area. I've used the teleportation tools before in my Hurricane Prep project, so this will be a familiar area to start with. 
  • Object Interaction: How can the UI tools in VRTK and Unity be used to convey information to the user? There are a variety of methods for pop-ups and object selection. I will set up different objects throughout the scene and apply these different menu types/functions to them in order to test them out. 
  • Time: Tori's working on getting the motion capture pipeline down, from capturing the data to bringing it into Unity. Once those animations are present, I would like users to be able to pause the action in the scene and explore a frozen moment in time at will. I will test this technique by importing a simple animated object in place of the mocap data, then applying it to the figures once they are in the scene. 

RELEVANT SOURCES/INSPIRATION

I gathered a ton of relevant sources this week, all from different areas that we're investigating. 

I found an article titled "In Their Shoes: 10 Empathetic VR Experiences" that features VR projects covering a vast span of topics, from refugee camps to solitary confinement. One that stood out is a project from Derek Ham (NC State University) titled "I Am A Man", part of an exhibition that will be on display at the National Civil Rights Museum. The exhibit is about the 1968 Memphis Sanitation Strike, and his experience takes you back to that scene. Ham documents the production on his LinkedIn page, which has provided valuable insight into his process. One particular entry I read detailed his thought process on whether his project should be a documentary-like experience or a fictional narrative, and how the presence of the user in the scene as themselves automatically alters the accuracy of the historical retelling. I've linked that page here, and placed the trailer for his experience below: 

On another note, I was recently linked to the IEEE Conference on Virtual Reality through an educational AR/VR Facebook group. While unfortunately this year's conference is in Germany (unrealistic for us), the site lists papers from past conferences. I picked up several papers on using VR and immersive technologies in schools and have added those to my reading list. Maria also linked Tori and me to a few readings on educational theory, and sent us more looking specifically at how elementary-age children learn.

CURRENT QUESTIONS/NEEDS RAISED

As I start on this four-week project, my questions are going to be technical. I'll need to begin working with VRTK again and dive into some tools that I only understood at a surface level for the hurricane project. Through these readings, I'll be gathering information on the current opinion of VR in classrooms and how empathy plays in. The biggest need raised this week is the need to read all of these sources.

LIKELY NEXT STEPS

This week I will be finishing up my Explainer Video 1 and posting it on my site for viewing. I'll also be creating a base Unity file for our four week project and getting the teleportation tool functional, hopefully by mid-week. Tori and I will be getting motion capture data on Wednesday and learning how to live-stream the data into Unity. I'll be documenting the process and some of that footage will probably be in the blog post next week, along with some Unity snapshots. 

Explainer Video 1 Progress

APPROACH

Having organized all of my assets last week, I was able to use this week to finalize my script and storyboards. I focused on three main points: my technical exploration of VR and AR, research work, and collaborative experience. From there I briefly detail my direction moving forward for the next semester. After a few drafts, I felt comfortable with the script and began rearranging the storyboards into a formal template. 

Final storyboard template, with timing and narration. 

While it was recommended that we work in Premiere, I have more experience in After Effects and chose to use it for this video. The most difficult part of this week was recording the narration. I did a few test recordings using old script drafts, then a new one with the final script. The pace of my speech would vary, and I learned that there are certain sounds that are very difficult for me to say clearly. The animatic I produced at the end of the week still uses rough audio that needs to be edited for timing, but I was able to begin dropping some of the footage I already have into the composition.

Screenshot of Animatic work in After Effects

CHOICES MADE

The Explainer Video was already solidly planned for work this week, and my choices there were made early on with content organization and script editing. Tori and I have chosen to meet every Tuesday morning to discuss research findings and thoughts on project development for the potential Ruby Bridges project, though we still communicate frequently about this project at other points in the week. 

RELEVANT SOURCES/INSPIRATION 

I was sent several relevant sources this week. Joe pointed me in the direction of the VR/AR Association Online Conference, taking place from January 16-30. There are speeches being given in a variety of tracks, including Education and Storytelling, and they are recorded for viewing at any point. I also found that there are committees for each track with links to relevant articles. There are several talks in the Education track that I will be listening to this week, one specifically being "VR in Education: from Perception to Immersion" by Steve Barnbury. (Linked HERE)

Maria also sent Tori and me several relevant sources throughout the week addressing some of the questions I mentioned in my last post. While we're not entirely sure that we're going with the Ruby Bridges story, part of our conversation this week was figuring out which books students are reading and how to narrow down that search. An article titled "The Confounding Science of Children's Literature" tells us that nobody can agree on why or how kids pick certain books to read. Some subjects and genres are overwhelmingly more popular than others, and books with narratives are generally preferred, but it also mentioned that kids pick books that can be part of a social experience, something they can talk to their friends about. There seems to be a small body of research in this area to get into, and this will likely be a talking point for Tori and me this week.

The Blue Eyes/Brown Eyes Exercise by Jane Elliott has also been part of the discussion this week. This project ultimately centers on fostering empathy in students, and the exercise run by Elliott gives a classroom of students the experience of being a minority. The documentary for this project is linked below. While the two do have some crossover, I was thinking about how she uses the social dynamics of a classroom to immerse students in the experience, and whether virtual reality (an individual experience) is able to create the same impact.

On the topic of empathy, another article sent by Maria actually argues a different side: that VR experiences can be misleading and misrepresent their topics. Full immersion is interrupted by factors such as the player's safety and the short-term nature of these experiences. The article ("It's Ridiculous to use Virtual Reality to Empathize with Refugees") discusses VR in terms of disability simulations and refugee situations, but it does make a good point about the factors of time and player awareness. If the player is aware that this is a simulation, will that lessen the impact because the element of fear no longer exists? I feel that fear can be simulated to some degree in VR, but so many of these situations come from fear being experienced over an extended period of time. Those feelings cannot be replicated, and that is a point worth remembering.

CURRENT QUESTIONS/NEEDS RAISED

Last week I was thinking about broader implications of the project, and most of those questions have not been answered, so they still stand. The needs raised this week have to do with information: I need to sit down and gather all the information on the topics being discussed (empathy in VR, VR in education, narratives in VR), then see what questions are left or need to be reframed.

LIKELY NEXT STEPS

This week I will be completing Explainer Video 1. This will involve editing audio, recording gameplay, and creating a rough cut of the video for Tuesday.

The rest of my work for my project with Tori will be reading and gathering information. I'm still reading Flow, though the sections this week were not really relevant to my work. I have the videos from the AR/VR Conference to sort through about VR in the classroom, and a TED talk linked in the source above about empathy in VR. 

Start of Spring: Explainer Video 1

APPROACH

I spent this week sorting and analyzing my work from the previous semester to see where the common threads were and to articulate a direction for my research. I made a list of all the projects and experiences I had, and what I gained from each. The majority of these projects were intended to be technical explorations; I wanted to gain more experience working in virtual and augmented reality to understand the mediums and become more comfortable with Unity. This came with good results: I gained more experience in C#, developed a better collaborative workflow by combining multiple projects in one game, and was even able to take a step towards mobile development.

While organizing these projects, I noticed many of them had to do with player interaction and how players move throughout a space. In some cases, like the Hurricane Preparedness prototype, the player has the ability to move through a space and interact with objects based on the goal of the level. The VR MindMap project was a purely passive experience with no user interaction. I chose to make this the focus of my video: examining how the interactive nature of VR and AR technologies can be used in an educational environment. From there, I sketched out thumbnails for my storyboards and wrote a draft script detailing the connections made between these projects and my path forward. 

CHOICES MADE

Once I decided on the direction of my work, I had a conversation with my classmate Tori about a potential project. She proposed the idea of creating an immersive virtual reality experience for students in elementary/middle school that would recreate a scene from "The Story of Ruby Bridges", about the first African American student to integrate an all-white elementary school in New Orleans. The scene in question stems from the photos of Ruby walking up to the doors of the school with protestors shouting at her from across the street. While we're still examining other impactful novels that students are reading today, I have decided to join this project and work with Tori to create an immersive experience giving students the option to move through these scenes at their own pace, exploring the world and gaining more information.

RELEVANT SOURCES/INSPIRATION

After discussing this project with Tori, we brought it to Maria who recommended looking into some studies on VR immersion and emotion. I have started collecting several studies and books on VR interaction and narrative, one in particular titled "Advances in Interaction with 3D Environments". It makes a point of discussing different methods for wayfinding and navigation through a 3D space, and the efficiency of different manipulation techniques for 3D objects. 


I also began reading "Flow" by Mihaly Csikszentmihalyi, which discusses the psychological state of flow. I have only ever heard of this concept in the context of game design, and did not realize this was a much broader theory. The book itself is written for the reader to understand how to achieve happiness. Flow is defined as "...the state in which people are so involved in an activity that nothing else seems to matter...", and is often manipulated in games to create emotional impact in between high-action moments. This feels especially relevant for the Ruby Bridges project; if the intended goal is to create an educational experience for the student through emotion, it's important to consider how interactivity may interrupt that flow or enhance it. 

CURRENT QUESTIONS/NEEDS RAISED

I'm starting to narrow down what part of "interactive" I'm choosing to focus on, but most of my questions from this week have to do with further defining this in the context of the Ruby Bridges project. 

  • What degree of realism should be achieved for the emotional impact we're seeking? 
  • Would allowing the students to interact with the scene decrease this impact, or draw them away from the narrative? 
  • What specific mechanics would I want to focus on for the scene, and are they appropriate for the age of the students? 
  • What form of hardware would the students be using to experience the scene? A Google Cardboard or a full headset? 
  • If this is the narrative we choose to pursue, which individuals or organizations should be involved to ensure a respectful, accurate portrayal? 

LIKELY NEXT STEPS

As far as the needs of the video, I will be working on recordings of gameplay from my projects and getting footage of the Hurricane Preparedness prototype being played by others on the Vive. I received permission to show footage from the app used for the VR Physics Education Study, so I'll be recording a section of that as well. I will have solidified the storyboards and script this weekend, and will do another run-through in the sound room to start putting my animatic together. 

I will continue reading Flow and searching for more sources on emotion and narrative in virtual reality, as well as immersion. Some of these sources need to be ordered through the library, so I'll be taking care of that and adding them to my reading list. Tori and I will also be having meetings on Tuesdays to work through some of our research and discuss further details on the project.