Narrative Framework for Educational Virtual Reality Content
Case Study: Ruby Bridges
An ongoing virtual reality project created by Tori Campbell and Abigail Ayers for the Design MFA program at Ohio State University utilizes motion capture and virtual reality tools to bring children’s stories into an immersive world, examining how these narratives can be explored using new technologies and methods of production.
Our current case study is the story of Ruby Bridges, one of the African-American girls selected to integrate all-white schools in New Orleans, Louisiana, in 1960, and the only one of them to attend William Frantz Elementary School. She was six years old. Her parents had not prepared her for what she was about to encounter on her first day; they simply told her that she would be attending a new school, and to behave herself.
That morning, four U.S. Federal Marshals arrived at her home to escort her to her new school. They instructed her not to look around, but to look straight ahead and follow them up the front steps of the school. When they arrived, mobs filled the area in front of the school and the nearby sidewalks, protesting the desegregation of schools. People shouted at Ruby, telling her they would poison her and showing her black dolls in coffins.
By examining different forms of interaction and immersion, we are working towards designing an experience with strong emotional and empathetic impact for the user.
Process work and further details can be found on my blog.
Phase 1, 1.5, and 2 (Spring 2019)
Ten Week Prototype (Fall 2018)
Six Week Prototype (Spring 2018)
Four Week Prototype (Spring 2018)
PHASE 1 AND 2 PROTOTYPES
March 2019 - April 2019
The primary focus of these “phase” prototypes was to take a step back and examine some of the issues that emerged from the Ten Week Prototype.
PHASE 1: User Movement and Environmental Response
In Phase 1, I wanted to address how to move a user through a sequence while still giving them a limited amount of agency. The animated walk from the previous prototype felt unnatural, and no user interaction occurred in the scene, which led to a passive viewing experience rather than an active, engaged one. Since a physical walk would not be possible in the space, I decided to try teleport points in a simplified environment. Teleport points let me constrain the user to a space while still giving them agency over the time they spend within it. Additionally, I can construct scene interactions around that specific point in space and time rather than scattering them along the route of the sidewalk.
Another point of interest for me was scale and transition. Users in the Six Week Prototype were constrained to an avatar roughly the height of a six-year-old, yet this only moved the position of the camera. The floor would then appear to be at knee height for taller users, which can be disorienting when physically moving around a space. In Phase 1, I created an environment that scales itself proportionally to the height of the user, so that every user is the height of a six-year-old child in the space regardless of their actual height.
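The core of this proportional-scaling idea can be illustrated with a short, framework-agnostic sketch. The project itself runs in Unity, where this would live in a C# script applied to the environment's root transform; the child-height constant and function names below are illustrative assumptions, not the project's actual code:

```python
# Illustrative sketch of proportional environment scaling (assumed names/values).
# Scaling the world up by user_height / child_height makes every user perceive
# the scene at the proportions a six-year-old would.

CHILD_HEIGHT = 1.15  # assumed approximate height of a six-year-old, in meters


def environment_scale(user_height: float, child_height: float = CHILD_HEIGHT) -> float:
    """Uniform scale factor for the environment so any user reads as child-height."""
    if user_height <= 0:
        raise ValueError("user height must be positive")
    return user_height / child_height


def scale_environment(asset_sizes: dict[str, float], user_height: float) -> dict[str, float]:
    """Apply the factor to each asset's base size (heights in meters)."""
    s = environment_scale(user_height)
    return {name: size * s for name, size in asset_sizes.items()}
```

For a 1.80 m user the factor is about 1.57, so a 2.0 m doorway is scaled to roughly 3.13 m: the same proportion it has to a 1.15 m child. In Unity this would amount to setting a uniform `localScale` on the environment root after measuring the headset's height.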
Phase 1 Conclusions:
While teleportation was an efficient method of getting down the sidewalk, it was too efficient. Users could simply jump from point to point without taking in the scene around them. Additionally, teleportation is an unnatural movement in this space - it pulls the user out of the flow of the experience and reminds them of the technology rather than the narrative the technology carries.
The presence of the controllers required to teleport is distracting in the scene. User focus in this experience should be on the content, not on learning controls.
Scaling of the environment works well for all heights, though adjustments need to be made for certain assets to ensure they do not float above the ground plane, or otherwise spin wildly around the scene.
PHASE 1.5: Demo
Between Phase 1 and 2, Tori and I were conducting demos at the ACCAD Spring Open House and the Student Art Collective. This was an excellent opportunity to rebuild the scene from the Ten Week Prototype and implement some of the features I had been working on, such as the environment scaling and updating the visuals in the Prologue sequence. Due to its timing between Phase 1 and 2, I chose to use timed teleportation to move the user through the space rather than bringing back the teleport points.
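The timed-teleportation approach used in this demo can be sketched in a few lines: the user is advanced to the next point along the sidewalk after a fixed hold at each one. This is a framework-agnostic illustration in Python (the project is built in Unity/C#), and the waypoint names and hold duration are hypothetical:

```python
# Hypothetical sketch of timed teleportation: advance through a fixed sequence
# of points, holding at each for hold_seconds, and stay at the final point.

def waypoint_at(waypoints: list[str], hold_seconds: float, t: float) -> str:
    """Return the waypoint the user should occupy at elapsed time t."""
    if hold_seconds <= 0:
        raise ValueError("hold_seconds must be positive")
    index = min(int(t // hold_seconds), len(waypoints) - 1)
    return waypoints[index]
```

In the scene, a per-frame update would call something like this with the running clock and move the camera rig whenever the returned waypoint changes.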
The user feedback we received was extremely helpful in gauging our progress over the last year. At the Student Art Collective, Tori and I were able to try out our scene with the Wireless Vive adapter on the Vive Pro, and at the ACCAD Spring Open House we saw users who had experienced our Six Week Prototype last year.
Scaling of the environment is especially effective with the mobs in the scene. Users reported feeling small and intimidated.
PHASE 2: Gaze-Based Interaction
Phase 2’s primary focus was to remove the controllers from the scene altogether and trigger user interactions with gaze or proximity. Beginning with simple prototypes, I first learned how to change the colors of cubes based on where my gaze was located. I then triggered teleportation to the cube being gazed at, and finally teleportation after the gaze had rested there for a set amount of time. Once I reached this point, I brought gaze triggers into the Phase 1.5 scene to see how users move through the space, and adapted the main menu to include a gaze-triggered button that starts the experience.
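The dwell-based gaze trigger described above can be sketched independently of any engine. In Unity this logic would sit in a C# component fed by a raycast from the camera each frame; the class and parameter names in this Python sketch are illustrative assumptions:

```python
# Illustrative dwell-timer sketch for gaze-triggered interaction (assumed names).
# The trigger fires once gaze has rested on a target for dwell_time seconds;
# looking away resets the timer.

class GazeDwellTrigger:
    def __init__(self, dwell_time: float, on_trigger):
        self.dwell_time = dwell_time
        self.on_trigger = on_trigger  # callback fired once the dwell completes
        self.elapsed = 0.0
        self.fired = False

    def update(self, gazing_at_target: bool, dt: float) -> float:
        """Call once per frame with the frame's delta time.
        Returns fill progress in [0, 1], usable for visual feedback."""
        if self.fired:
            return 1.0
        if gazing_at_target:
            self.elapsed += dt
            if self.elapsed >= self.dwell_time:
                self.fired = True
                self.on_trigger()
        else:
            self.elapsed = 0.0  # gaze left the target: reset the dwell
        return min(self.elapsed / self.dwell_time, 1.0)
```

The returned progress value is what makes the technique legible to users: it can drive a filling reticle or a brightening light, so they know their gaze is registering before the trigger fires.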
In the middle of this phase, Tori and I gave a demo for VR Columbus, who came into ACCAD to see what we were working on and chat about our research. I noticed that users had a particularly difficult time knowing whether they were looking at the right spot. I had provided visual feedback in the form of a light turning on and dimming, but this is difficult to notice in a daytime scene. Following their critique, I added a reticle for the user to “aim” their gaze at the right spot, which proved incredibly helpful and effective. As an additional interaction, I included a member of the mob whose animation changes when the user’s gaze rests on him: this character gets in the user’s face, bending over aggressively before returning to his initial animation.
Knowing that these functions work individually is great, but the next step is to weave them into the narrative naturally. Navigation in the final scene will not be based on the user staring at spotlights on the sidewalk, but gazing in response to a character’s directions or voice.
Next steps will require building this scene with final assets and motion capture, defining the interactions that will occur at each stage of the sequence, and determining how audio will be layered into these moments.
Ten Week Prototype
October 2018 - December 2018
In moving forward, Tori and I considered the direction of our project and how best to present key moments within the narrative. Previous prototypes served as technical exercises; the next step was examining how to bring the narrative itself forward and place the user within it.
The Ten Week Prototype extended the first scene of Ruby walking up to the school. The user begins with the same Prologue, with audio of Ruby Bridges from an interview; the Prologue now includes two historical images from the period. The scene fades into the user sitting in a car with a government official and Lucille Bridges. As the car slows, the official turns to direct the user and Lucille offers her hand to exit the car. As the user reaches out to touch her hand, the camera rises, indicating that the user should stand as well. The camera then moves along the sidewalk through the crowd in short segments. Each segment is broken by a fade to black and another clip of Ruby speaking about the experience, and upon fading back in the mob is closer, denser, and slightly larger. The scene ends with the user standing at the top of the school steps before fading into the next scene.
At this point, we have begun narrowing down how we would like the rest of the project to play out. Following the Prologue and Scene 01, there will be two other scenes broken by audio transitions similar to the Prologue. Scene 02 will place the user inside the Principal’s office as children are removed from the school in increasing numbers, and Scene 03 will take place a few days later in a classroom where the user is isolated with only the teacher. This is the general format we have decided on for the experience, although we have yet to begin developing the other two scenes.
Important things learned from this prototype:
Proximity. In this experience, everything in the space felt just a little too far away. Designing for a properly scaled experience and really pushing the possibilities of proximity in a stressful VR environment is something to examine in the next iteration of this level.
Movement through a space. We spent a lot of time thinking through how to move the user through this area in a way that made sense with the scene. The result for this prototype was a crude “sliding” animation, which felt very artificial and did not account for the varying heights of users (a technical issue to be addressed). The “blinks” in the middle were meant to help break up that motion, though we have also discussed using the proximity of the crowd to “teleport” the user to progressive points, or creating our own static standing “blinks” that let the user absorb a moment without having to process forced movement - a discussion for the future.
User identity. We consistently went back and forth on whether or not the user should have an avatar - a difficult question. By providing an avatar, do we suggest that the user is embodying Ruby and is therefore supposed to be her? This doesn’t seem a realistic expectation. We could choose a neutral avatar, and for this prototype we decided not to use one at all. However, not having an avatar of some sort can be jarring in a VR experience. I conducted some “virtual mirror” tests as an attempt at a middle ground: perhaps the user doesn’t have an avatar, but we can remind them of Ruby’s identity through environmental reflections or cues. Either way, this remains an open point of research.
Audio. The audio in this prototype offers more variety than the Six Week Prototype, though it still needs additional layers and environmental ambient sound. Other factors such as volume tweaking and component adjustments will need to be considered for a more solid sound design in the future, along with optional captioning.
Technical Compatibility. The release of SteamVR 2.0 in the middle of this prototype did not play well with the VRTK assets we were using. I will be finding alternate ways to accomplish the interactive portions that align with SteamVR.
Our next steps will require further examination of these issues, some small-scale prototyping to determine potential paths forward in the space, and initial consideration of stylization for the scene.
Six Week Prototype
February 2018 - April 2018
A continuation of the Four Week Prototype, this project took the technical skills we gained and applied them to the Ruby Bridges narrative.
I worked on creating a framework for the full experience. This included a functional start menu, introduction featuring an audio interview of Ruby, a Prologue sequence with the user seeing from Ruby's point of view, and an interactive sequence to give background and historical context. We spent the first four weeks of development working on the Prologue, then took feedback we received from that level to adjust it and form the interactive scene. I experimented with various levels of user control and perspectives within the scene, as well as different forms of menu UI and navigation.
Four Week Prototype
January 2018 - February 2018
The Four Week Prototype served as a technical exploration of setting up a virtual reality space and of potential challenges we might encounter further along in production. I focused on three main areas: navigation, UI/menus, and animation controls. My goal was to learn how to set up each of these in VR and how they function within the space. This enabled me to make more educated decisions in later prototypes and boosted my technical skills. It also became an opportunity for Tori and me to learn about each other’s mediums. We captured motion data for each of the characters using actors, and used this data to experiment with animations in Unity.