First Year Wrap and Ruby Bridges: 6 Week Conclusion

In the first week of May, Tori and I completed our work on the 6 Week Prototype for the Ruby Bridges Project. It was presented, then folded into a much larger presentation about our progress throughout the first year of our MFA program. As classes start back up, I wanted to write a post summarizing my journey over the last year, the results of Ruby Bridges, and my current starting point. 

At the beginning of the year, I focused my efforts on the interactions between game design, education, and virtual reality. For me, this meant a lot of exploration and a technical education in these areas. 

My early projects focused on improving my skills in Unity. I worked on team projects for the first time in Computer Game I and got a real introduction to game design and game thinking. This also allowed me to develop my own workflow and organization in Unity. While exploring that workflow, I became interested in using VR with Google Cardboard to organize materials and form connections across the scope of a project. The result was the MindMap project, which was a great introduction to mobile development and Google Cardboard but proved of limited use for my work. It was tested using materials from my Hurricane Preparedness Project, a 10 week prototype developed to provide virtual disaster training for those in areas threatened by hurricanes. That project was my first time using Unity for VR and developing for the HTC Vive. The topics it explored, including player awareness in VR, organization of emotional content, and player movement in a game space, would eventually become the basis of my work on the Ruby Bridges Project. 

There has been a clear evolution in my design process and focus, mainly a shift from visual organization to functional prototyping. Earlier in the year I still leaned heavily on visual elements and art assets, and in game design projects the experience suffered because the games were never fully functional. By the spring, I had shifted completely to functional prototypes built with placeholder assets. All of these projects challenged my process and boosted my technical skills, and I then brought those technical developments into a narrative context. 

EDUCATIONAL AND EMOTIONAL STORYTELLING THROUGH IMMERSIVE DIGITAL APPLICATIONS

In the Spring, Tori Campbell and I began working on our concept for the Ruby Bridges Project. Together, we want to use motion capture and virtual reality to explore immersive and interactive storytelling. Ultimately, we are examining how these tools can change the audience's perception of a narrative and of themselves. The narrative we've chosen to focus on is Ruby Bridges' experience on her first day of school. 

Ruby was one of five African-American girls to be integrated into an all-white school in New Orleans, LA in 1960. She was the only one of those girls to attend William Frantz Elementary School. At six years old, she was told only that she would be attending a new school and that she should behave herself. That morning, four U.S. Federal Marshals escorted her to her new school. Mobs surrounded the front of the school and the sidewalks, protesting the desegregation of schools by shouting at Ruby, threatening her, and displaying black baby dolls in coffins. 

This scene outside the front of the school became our prototype in VR. 

The Four Week Prototype focused on developing technical skills we would need moving forward, specifically navigation, menus/UI, and animation controls. In doing so, I learned not just how to make these functions work, but the pros and cons of each approach. This allowed me to make more informed decisions in the design of our Six Week Prototype. We also gathered motion capture data from actors so we could work with that data in a VR space and experiment with controlling the animations. 

My goal with the Six Week Prototype was to create a fully functional framework for the experience, something with a beginning, middle, and end. I created a main menu, a narrative transition into the Prologue, the Prologue scene itself, where the user sees from Ruby's perspective through her avatar, and an interactive scene where the user can examine the environment from a third-person view. This view provides background information and historical context, and lets the user drop into the scene from another perspective. Where the broad goal of the Four Week Prototype was technical development, this project examined different levels of user control, their effects on the experience of the scene, and how to create an experience that flows smoothly from scene to scene even with those different levels of control. 

This prototype became a great first step toward a much larger project. We learned a lot about creating narrative in VR, and through demonstrations with an Open House audience we discovered just how much impact a simple scene with basic elements can have on the viewer. 

THEORY

Broadly, my thread going into the year was how virtual reality can be combined with game design for educational purposes. Through these experiences, I was able to refine that question: how can immersion and environmental interaction, together with game design, be used to form an educational narrative experience? 

Tori and I are focusing on different but connected elements of this project. I am working specifically with theories concerning self-perception, learning, and gamification; structured together, they form the framework for my research. Self-perception theory connects through the concept of perspective-taking, representing the user and how they reflect back on themselves and their experiences. Gamification represents the user's interaction with their environment and provides the virtual framework for the experience using game design concepts. Learning theory places the whole experience in the context of education and the "big picture". 

WHAT'S NEXT? 

Over the next year, I will continue working with Tori on the next stages of the Ruby Bridges Project. While we are still discussing our next steps, I would like to explore more environment building and the structure of the experience. The Six Week Prototype was a great lesson in setting up a narrative flow and working through different levels of interactivity and user experience, but there are still many directions to push it further: having the crowd react back to Ruby by throwing objects, yelling specifically at her, or keeping their eyes constantly trained down on her, further increasing their menacing presence; playing with perspective-taking so users can switch back and forth between different members of the scene, and determining whether that ability contributes positively; and pushing other concepts of gamification, such as giving users a task that highlights aspects of the environment (the closeness of the crowd, the size of Ruby, etc.). Manipulating these environmental aspects will likely be my next step. 

I will continue researching the theoretical framework highlighted above and will likely make modifications as I delve further into these topics. My classes begin next week, and as part of that I will be taking Psychobiology of Learning and Memory. This will likely shape the theoretical framework, and I'm very excited to take what we learn there and potentially apply it to these experiences.

On the technical side, I will be conducting small-scale rapid prototypes to test these concepts as main development on Ruby Bridges continues. Furthermore, I would like to experiment with mobile development on the side to see if a similar experience to our prototype could be offered with various mobile technologies, such as Google Cardboard or GearVR, perhaps even the Oculus Go. 

For now, I'll be organizing my research and getting ready to hit the ground running. 

1000 Ways How Not To Control Cameras

We're now in the final week of development for the Ruby Bridges 6 Week Prototype, with three days to go. Last week I outlined the functions I wanted to implement in this week's build. 

The good news is, I learned a lot about how the SteamVR camera likes to operate. The bad news is, it took me all week to learn these lessons and adjust our prototype accordingly. 

Debug list from 04/21/18

Most of the issues I ran into had to do with moving the camera around the scene. The third-person documentary view I'm building initially included a zoom function, and I went through a few different methods to make it work: sliders, touchpad walking, scaling the environment. I finally got it working with a UI slider, but the effect was extremely jarring and didn't really add anything for the user- if they're going to be able to take on perspectives in the scene themselves, the zoom function becomes redundant. I've decided instead to fix the camera at a point away from the environment and let the user rotate the scene manually to examine the tooltips. 
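For reference, here's a minimal sketch of that fixed-camera approach in Unity. The names are my own placeholders, wired up in the Inspector, and it assumes all the scene geometry sits under a single root object:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Rotates the environment root around its Y axis from a UI slider,
// leaving the VR camera fixed in place. Attach to any scene object
// and wire the slider and environment root in the Inspector.
public class EnvironmentRotator : MonoBehaviour
{
    [SerializeField] private Transform environmentRoot; // parent of all scene geometry
    [SerializeField] private Slider rotationSlider;     // 0..1 mapped to 0..360 degrees

    private void Start()
    {
        rotationSlider.onValueChanged.AddListener(OnSliderChanged);
    }

    private void OnSliderChanged(float value)
    {
        // Rotating the world instead of the headset camera keeps the
        // user's viewpoint still, which avoids the jarring zoom effect.
        environmentRoot.rotation = Quaternion.Euler(0f, value * 360f, 0f);
    }
}
```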

The other issue was locking the camera to Ruby's head. I could parent the camera to her motion without a problem, but the height of the user would influence the Y value of the camera transform. Even after some research, I wasn't able to find a way to lock this (several online forums noted that locking head transforms in VR is extremely disorienting). To solve the problem for now, users will complete the experience seated. This should have the added benefit of reducing the motion sickness caused by the motion of Ruby's walk. 
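A rough sketch of how the follow behavior can work, since SteamVR writes the headset pose onto the camera itself and the camera can't be moved directly. All names here are placeholders; the rig is the camera's parent object:

```csharp
using UnityEngine;

// Keeps the VR rig positioned so the tracked HMD sits at Ruby's head bone.
// SteamVR applies the headset pose to the camera, so instead of moving the
// camera we offset the rig (the camera's parent) every frame.
public class FollowHeadBone : MonoBehaviour
{
    [SerializeField] private Transform headBone;  // Ruby's animated head joint
    [SerializeField] private Transform vrCamera;  // the SteamVR eye camera
    [SerializeField] private Transform rig;       // the camera rig / play area root

    private void LateUpdate()
    {
        // How far the HMD currently sits from the rig origin; the Y part
        // of this offset is what the user's real height contributes.
        Vector3 hmdOffset = vrCamera.position - rig.position;

        // Move the rig so the HMD lands exactly on the head bone.
        // Fully compensating Y is the part forums warn is disorienting;
        // a seated experience keeps the correction small.
        rig.position = headBone.position - hmdOffset;
    }
}
```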

On Saturday, I had a debug day and worked through the issues that came up from testing on the Vive instead of the simulator: menu buttons not working properly, pointers, and disappearing controllers. The controller issue comes from how I parent the camera to Ruby's head- the controllers still function, but you can't see them. I'm still working on a solution for that. I also found that the environment itself was not centered and had tons of weird offsets, so I started a fresh scene with the environment in the right place- that solved a lot of the camera transform issues. 

Screenshot of current camera view for interactive scene.

NEXT

With the camera issues relatively sorted, I have to add the object tooltips to the scene and attach the background/historical information to them. These will also include the buttons for perspective view in each part of the scene. Tori created a crowd using the new character models she made and did a great job offsetting the animations, so I'll be placing those into both scenes as well and cleaning up the overall functionality. 

Interactive Building

Last week of development! After taking into account all our feedback, Tori and I really had to think about how to round out this project. 

Tori will be working on the character animations and models, fixing technical issues like locking the feet to the ground and replacing the robotic placeholder models with the avatars she created. While the current animations are still effective, the unedited animations and floating characters do break the immersion. 

On my end, I had some technical issues to fix too: locking the camera to Ruby, simulating the crowd with offset animations, and editing the audio to be more cohesive. Along with that, I wanted to round out the experience. We had put the interactive level aside to focus on putting together the prologue and gathering feedback on that experience. 
For the last bit of this project, I'll also be putting together a basic prototype scene to explore a third-person documentary view. The user will be able to rotate the scene and zoom closer, then use the tooltips to gain historical background. Within each tooltip will be a button that, upon clicking, lets the user join the scene on the ground, much like Google Street View. It's not a fully fleshed-out experience, but it will let us broadly explore some of the concepts we discussed at the beginning of the project about how to convey that information. It's a good starting point: users still control their experience, and the information is there for them to uncover at their own pace in a variety of ways. Meanwhile, the perspective-taking ability is still there to continue the experience the user had as Ruby or another member of the scene. 
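As a sketch of what one of those tooltips might look like in Unity (all names are placeholders, and the real version would hook into our pointer interactions rather than a bare button):

```csharp
using UnityEngine;
using UnityEngine.UI;

// A tooltip in the documentary view: shows historical text and offers a
// "join the scene" button that drops the user to a ground-level viewpoint,
// similar to entering Street View.
public class SceneTooltip : MonoBehaviour
{
    [TextArea]
    [SerializeField] private string historicalInfo;     // background text for this spot
    [SerializeField] private Text infoLabel;            // world-space UI text panel
    [SerializeField] private Button joinSceneButton;    // the "drop in" button
    [SerializeField] private Transform groundViewpoint; // where the user lands
    [SerializeField] private Transform cameraRig;       // SteamVR rig root

    private void Start()
    {
        infoLabel.text = historicalInfo;
        joinSceneButton.onClick.AddListener(JoinScene);
    }

    private void JoinScene()
    {
        // Move the whole rig to the ground-level perspective for this tooltip.
        cameraRig.SetPositionAndRotation(groundViewpoint.position, groundViewpoint.rotation);
    }
}
```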

The scans below from my sketchbook show some of the notes taken while discussing how to set up this level. 

I did take some inspiration from Assassin's Creed, as discussed last week. The series has always included a wealth of historical information embedded in menus and the occasional quest. As a player, though, you have to go searching for this information, and the reveal tends to be a wall of text with the occasional image- underwhelming after running around a richly animated recreation of Rome or Havana. The new Discovery Mode provides text, images, audio, and video from both the game and reality. I found myself much more excited to experience a multi-modal presentation than to read text block after text block. That much text (as shown in the images below) really doesn't work well in VR- it's difficult to read the panels unless they take up the full view, and the immersion is simply lost. I would rather use the environment to explore and convey information than rely on text. 

In a similar vein, the newest installments of Tomb Raider include historical information with artifacts that players collect over the course of the game. Removed from world gameplay, a screen comes up and players can examine 3D recreations of these items with a basic description of what each is in the context of the game's world. Granted, it's usually only a sentence or two, and it isn't something the game really requires, but it allows players to view an item up close and learn a little more about the culture of the world around them without being overwhelmed by detail. It's another way for players to experience this information. I thought about this when considering 3D manipulation of the scene and engaging the user in the content. 

Another great example came from one of our readings (experiences?) for class this week. Refugee Republic, an interactive documentary, takes the viewer on a journey through a Syrian refugee camp in Iraq by scrolling through a panoramic illustration depicting different parts of life in the camp. The media often presents an inaccurate view of refugee camps, and the team who created it set out to create a truer image of life in this one. While the landscape itself is mostly drawn, as the user scrolls along it transitions into film, photography, and text. The result is incredibly dynamic and gives the experience a lot of depth, as each medium is used for its strengths. It plays to every sense, and that's what we're trying to do with this interactive level. I began thinking about how to choose which media and which information to present in this third-person view, and which media might work best from the perspective-taking option. I'm going to start researching more experiences and games that provide a similar media overlap. 

With this in mind, I was able to make decent progress on getting the level set up this week.

  • Prologue: the camera is finally locked to Ruby. All users will experience the walk at her height, without accidentally walking away from her body. In the interactive level, I'm contemplating giving the user the ability to walk around as Ruby without her set animation. We've discussed multiple times how impactful the scene could be if the user sees it all from Ruby's height and explores at their own pace. I don't think we'll have time to get that in this time around, but it's a future feature to consider.

  • Created the new scene with a third-person camera. Began implementing camera movement and manipulation functions, such as zooming with a UI slider (harder than anticipated- a sketch of the approach appears below) and rotating the environment using the pointer from the controller.
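Here is roughly how that slider zoom works, moving the rig rather than the headset camera (a simplified sketch with placeholder names; the vantage points are empty transforms placed in the scene):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Third-person "zoom" for the documentary view: the slider moves the camera
// rig along the line between a far and a near vantage point. Moving the rig
// (not the HMD camera itself) is what SteamVR expects.
public class RigZoom : MonoBehaviour
{
    [SerializeField] private Slider zoomSlider;   // 0 = far, 1 = near
    [SerializeField] private Transform cameraRig; // SteamVR rig root
    [SerializeField] private Transform farPoint;  // overview vantage
    [SerializeField] private Transform nearPoint; // close-up vantage

    private void Start()
    {
        zoomSlider.onValueChanged.AddListener(value =>
            cameraRig.position = Vector3.Lerp(farPoint.position, nearPoint.position, value));
    }
}
```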

NEXT

This week is going to be straight work on this level. Getting those features in will mostly be a matter of shifting the camera around, and once I have the process down it should go fairly quickly. It will also mean compiling Tori's work and mine into a final build and debugging as much as possible. I have yet to test progress on the new level in the Vive, so I'll be doing that tomorrow and every other day until the deadline, just to make sure the changes work in the headset as well as in the simulator. 


2 Weeks In: Crowd Building and Playtesting

Over the last two weeks, all of my efforts on the Ruby Bridges project have been focused on the Prologue experience: creating a crowd that surrounds the user, building adequate audio, attaching the camera to a moving Ruby, bringing all of these animations into the same scene, and making a smooth transition from the introductory sequence into the actual experience. 

Troubleshooting the Prototype before the Open House. April 3, 2018

The crowd building was a real technical challenge for us, and we still haven't completely nailed it down. For playtesting purposes, we took the captured data we had for four figures, duplicated it into a crowd, and instantiated that crowd when the scene starts. Eventually I would like to use a crowd simulation to offset the animations of the figures- looking at the crowd as it is, it's very easy to spot patterns where we duplicated groups and where figures float above the ground plane. That would also help us create a more faithful representation of the scene; I looked at images taken from Ruby's first few days of school to gauge where the crowd harassed her along the sidewalk and how close they got to her. Based on these, the crowd was most aggressive on the sidewalk around the school but was kept away from the front doors, as the school had a fence all the way around it. 
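For now, the duplication looks roughly like this minimal sketch (prefab and animation state names are placeholders; randomizing each copy's animation start time is the cheap version of the offsets we eventually want):

```csharp
using UnityEngine;

// Instantiates a crowd from a handful of captured figures, randomizing
// which figure appears at each hand-placed spawn point and where its
// animation starts, so duplicated groups are harder to spot.
public class CrowdSpawner : MonoBehaviour
{
    [SerializeField] private GameObject[] figurePrefabs; // the four captured figures
    [SerializeField] private Transform[] spawnPoints;    // placed along the sidewalk

    private void Start()
    {
        foreach (Transform point in spawnPoints)
        {
            GameObject prefab = figurePrefabs[Random.Range(0, figurePrefabs.Length)];
            GameObject figure = Instantiate(prefab, point.position, point.rotation);

            // Offset each copy's looping animation so identical figures
            // don't move in sync. "Protest" is a placeholder state name.
            Animator animator = figure.GetComponent<Animator>();
            animator.Play("Protest", 0, Random.value); // random normalized start time
        }
    }
}
```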

For the transition into this scene, we wanted to give the user context for where they were and whose shoes they would be standing in. Upon starting the experience, the user is in an almost completely dark room listening to audio of Ruby talking about her first day of school from her perspective. Text cues come up with Ruby's name and the interview the audio is pulled from, followed by the school, the date of the event, and the location as she talks. The scene then fades, and the user reappears in front of the school.
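A simplified sketch of that intro sequence (scene and field names are illustrative, and the fade panel is a black overlay on a world-space canvas held in front of the camera, since a screen-space overlay won't cover the view in VR):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.SceneManagement;

// Dark-room intro: text cues appear in sequence while Ruby's interview
// audio plays, then the view fades to black and the school scene loads.
public class PrologueIntro : MonoBehaviour
{
    [SerializeField] private Text cueLabel;         // world-space text panel
    [SerializeField] private string[] cues;         // name, interview, school, date, place
    [SerializeField] private float secondsPerCue = 4f;
    [SerializeField] private Image fadePanel;       // black overlay in front of the camera
    [SerializeField] private string nextScene = "Prologue"; // placeholder scene name

    // Start can run as a coroutine in Unity, which keeps the sequencing simple.
    private IEnumerator Start()
    {
        foreach (string cue in cues)
        {
            cueLabel.text = cue;
            yield return new WaitForSeconds(secondsPerCue);
        }

        // Fade to black over two seconds, then load the school scene.
        for (float t = 0f; t < 1f; t += Time.deltaTime / 2f)
        {
            fadePanel.color = new Color(0f, 0f, 0f, t);
            yield return null;
        }
        SceneManager.LoadScene(nextScene);
    }
}
```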

While listening to interviews of Ruby talking about her first day, I noticed that some of the podcasts and interviews included audio of the crowds yelling at her. I was able to cut up this audio and loop the crowd yelling into the scene, along with some stock neighborhood environmental noise. Tori recorded some of our classmates yelling specific phrases, such as "We don't want you here!" and "We're FOR segregation!", to mix in amongst the crowd. With the volume all the way up, this audio can be very chaotic and confusing. After a few moments of just standing there with the headset on, I found it easy to lose track of where I was; the audio completely obscures the outside world. The added chants ground the user in the event and the time period. 

On Friday, Tori and I demonstrated the current version of our prototype at the ACCAD Open House. Other than the two of us and the occasional classmate, we hadn't been getting much feedback from outside the design world. Here we got fantastic feedback from a wide variety of people of all ages, races, and levels of experience with virtual reality. The topic itself drew a lot of interest from those walking by, and after a quick background on who Ruby was and our intentions for the project, most were eager to try what we had. 

After guests took off the headset, we had a table set up with the children's book and post-it notes for written feedback. We only received two written notes, but most guests asked questions and gave us their impressions in person.

  • One of the most frequent comments we received was "wow, it feels like you're really there! It's very immersive." I do take that with a grain of salt, especially as many of the guests were experiencing virtual reality for the first time. However, the fact that we were able to gain that reaction from so many of those who experienced a prototype with primitive forms and non-recognizable humanoid figures was very promising. Guests gave different reasons for feeling this way- the audio being powerful and negative, the crowd surrounding the user, seeing the crowd animated in VR.

  • Some guests cited brief dizziness during the movement as Ruby walks up the sidewalk. I experienced this myself when testing the prototype before the Open House. That it was significant enough to mention after only a three-second motion is important, as we're going to put a longer walk and animation into the scene in the future. After the motion stopped, users adjusted to the world. Part of this could be resolved by having guests sit for the experience- it can be disorienting to stand while your character is moving- though if we continue with the interactive portion, guests would ideally be standing and moving around. I have seen other solutions in VR ports of games like Skyrim, where the periphery of the view is blurred while the player moves and the blur fades once the player stops (a rough sketch of this idea follows below). This may be a good area to explore once we have longer animated sequences in the scene.
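Here is a rough sketch of what that comfort vignette could look like, assuming a ring-shaped overlay image on a canvas attached in front of the camera (all names are mine; a darkened border is the simple stand-in for a true peripheral blur):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Comfort vignette: fades in a dark border while the viewpoint is moving
// and fades it back out once motion stops, similar to the tunneling effect
// some VR games use to reduce motion sickness.
public class MotionVignette : MonoBehaviour
{
    [SerializeField] private Transform rig;   // object whose motion we watch
    [SerializeField] private Image vignette;  // ring-shaped overlay image
    [SerializeField] private float maxAlpha = 0.8f;
    [SerializeField] private float fadeSpeed = 2f;

    private Vector3 lastPosition;

    private void Update()
    {
        // Treat any frame-to-frame displacement of the rig as "moving".
        bool moving = (rig.position - lastPosition).sqrMagnitude > 1e-6f;
        lastPosition = rig.position;

        // Ease the vignette toward fully visible while moving, clear when still.
        float target = moving ? maxAlpha : 0f;
        Color c = vignette.color;
        c.a = Mathf.MoveTowards(c.a, target, fadeSpeed * Time.deltaTime);
        vignette.color = c;
    }
}
```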

I had several conversations with guests who are instructors or educators, and all mentioned seeing the uses for this in the classroom.

  • One guest asked me if I would be working with educators in the development of this experience. Ideally, yes- this experience is meant to be implemented in the classroom, not to replace the classroom itself. It's very far in the future, but gaining feedback from instructors as to how they could best utilize this would be absolutely necessary.

  • Several guests, after asking what our target audience is, wondered whether the experience would be appropriate for elementary-age students. To be honest, there is very little research on how kids that age react to virtual reality. Studies suggest that kids ages 6-18 perceive virtual experiences as much more "real" than adults do (as discussed and referenced here), and that children ages 6-8 can form false memories after experiencing a virtual event (source). While we want to stay faithful to Ruby's account, Tori and I will have to discuss the implications of how "real" an experience we create.

  • Following up on that question, another guest asked whether we had considered leaving the characters' avatars as robotic figures rather than assigning them a race. She was interested in how users might project onto these figures if no race were assigned, and how that would change the experience. I understand her point, and this question is being addressed in several studies on racial bias and stereotyping- in that realm, leaving the user "colorblind" may be an interesting area to study. One such study changed the race of the user's avatar and observed how users of different races demonstrated bias when the avatar differed from their own race (it found reduced explicit bias but no impact on implicit bias- an interesting result to consider when we're having users experience Ruby's walk; source). However, our purpose is to craft a world similar to the one Ruby experienced in order to promote empathy, understanding, and connection between the student and Ruby. Race is central to her story, and understanding that this was just one of many moments in this period when she encountered aggressive racism is vital to the experience.

The question of interaction came up when discussing the scene where the user can explore the world. Guests asked what kinds of interactions they might experience- would the crowd react to their presence? Would they be able to move around the scene? One guest suggested using gaze-tracking to trigger the crowd into throwing things at you as you walk around. In past critiques, it was suggested that having the crowd's heads all turn to follow you, no matter where you are, would certainly be intimidating (even menacing).

It really comes down to what we want the user to gain from that freedom to explore. Initially, it was to provide background knowledge of the event and its long-term effects and major components- how Louisiana fought her attendance, how the community reacted, what the rest of Ruby's education was like. The major question is how to deliver this information. Looking at perspective-taking, the user could embody different characters in the scene and listen to their internal monologues as a way of understanding different points of view. Or the user could walk around objectively as their own avatar, as if in a museum. 

An Open House guest pointed me to a great case study for this "virtual museum" experience: Assassin's Creed Origins. The game takes place in Ancient Egypt, and your character inhabits a vast open-world environment. Ubisoft recently released a Discovery Mode for the game, featuring guided tours through landmarks and buildings. The player can run around the landscape at will as their own character. When a tour is activated, a guided trail is illuminated along with interactive checkpoints that feature a narrator and extra written information and artwork, which are added to a menu archive for later inspection. 

This seems to be a great way to keep player autonomy and the general elements of gamification consistent in the game while still conveying the relevant information. I own the game and have yet to explore Discovery Mode myself, but I will be doing so this week and discussing ways to move forward with Tori. 

NEXT

Tori and I will meet this week to discuss our next steps and compile the feedback from the Open House. On the current course, we will likely work on the crowd simulation and the user animation for Ruby. The current walk is very short, and we will need to work on the animation cycles (and create an idle state) so the characters do not just stop after a three-second experience. We will also be testing model applications for the crowd and adjusting the audio. 

Project Framework and Flow

Tori and I discussed the notes I made last week on the flow of the project and finalized our plans for the next six weeks of development. 

Notes from planning out the experience structure.

We sat down together and walked through the flow of the experience I had outlined. The first thing the user encounters is a start menu, with start, options, and quit buttons. Upon pressing start, there's a transition in which the scene fades to black and displays the date and time to set the scene. Maria's feedback here was to place the user in the experience with more background information, so we will build on this transition, adding more about the story through audio and images. 

After the transition, the scene fades back in with the user as Ruby. This is a passive experience with no navigational control. The user starts at the sidewalk and experiences Ruby's walk up to the door with the teacher. We debated whether to give the user menu control, or at least the ability to exit the experience- functionally, I think forcing the user to quit the application in order to restart if something goes wrong would be detrimental. We'll be getting Alan's opinion on this and other questions about gamified elements later this week. 

From there, we transition into the Interactive Mode, where the user respawns at an objective vantage point on the map. They are not part of any particular group; they initially respawn as an outside, impartial observer. The scene with Ruby restarts, and they can view the walk they just took from other areas in the scene, this time with full navigational abilities. The animation advances as they collect icons, each prompting the user with a question, a fact, or an experience to witness (a sketch of one such icon follows below). The idea is that the user moves along with Ruby but avoids being constricted into linear gameplay, because they can pursue the icons in whichever order they desire. They will also be able to replay each checkpoint from a secondary menu. 
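As a sketch, one of those icons could be as simple as a trigger volume. Names here are placeholders; it assumes the user's rig is tagged "Player", the icon's collider is set as a trigger, and the event is wired in the Inspector to advance Ruby's walk:

```csharp
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.UI;

// One collectible icon in the interactive scene: when the user's rig enters
// its trigger volume, it shows its prompt (a question, fact, or moment to
// witness) and fires an event the scene can use to advance Ruby's walk.
// Icons can be collected in any order.
[RequireComponent(typeof(Collider))]
public class CheckpointIcon : MonoBehaviour
{
    [TextArea]
    [SerializeField] private string prompt;          // question or fact to show
    [SerializeField] private Text promptLabel;       // shared world-space text panel
    [SerializeField] private UnityEvent onCollected; // e.g. resume Ruby's animation

    private void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return; // only the user's rig counts

        promptLabel.text = prompt;
        onCollected.Invoke();
        gameObject.SetActive(false);             // each icon is collected once
    }
}
```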

Further questions had to do with the avatar of the player. During the Prologue, the player sees from Ruby's perspective and embodies her avatar. After that, what would the player's avatar look like? Would they even have one? I've been considering these questions alongside the Proteus effect discussed last week, and thinking about how this visualization would change the experience for the user.

Following more critique from Maria, we're moving forward with crafting the Prologue experience first. This week I did some research on the area and sketched a rough map of what our prototype will look like. William Frantz Elementary School has been restored as a historical site, and though it has a new academic center attached to it, the original building and neighborhood have changed very little from 1960 to now. I tried to keep the general shape of the building and placement of nearby streets/houses historically accurate for the prototype. 

Sketch of map for 6 week prototype.

I began working on the framework for the experience in Unity. I built the general environment, set up the player camera/controllers using SteamVR and VRTK, and started putting together a functional menu system to transition between each scene.  
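The menu wiring itself is simple; here's a minimal sketch (the scene name is a placeholder for whatever ends up in the Build Settings list):

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.SceneManagement;

// Main menu wiring: Start loads the prologue transition scene, Quit exits
// the build. The scene must be listed in Build Settings to load by name.
public class MainMenu : MonoBehaviour
{
    [SerializeField] private Button startButton;
    [SerializeField] private Button quitButton;
    [SerializeField] private string firstScene = "PrologueTransition"; // placeholder

    private void Start()
    {
        startButton.onClick.AddListener(() => SceneManager.LoadSceneAsync(firstScene));
        quitButton.onClick.AddListener(() => Application.Quit());
    }
}
```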

Tori is going to be working on adding the animations captured in the last prototype to the scene, and choreographing their interaction. Once they're added in, I'll be making sure the cameras attach to Ruby's character for the Prologue and work on the animation controls for the interaction scene. For now, our priority is going to be completing the prologue experience and getting those elements functional. 

The theoretical framework for this project is a work in progress, but I've been narrowing down the key theories and concepts we're working with. Most of what I've been examining comes from self-perception theory, learning theory, and gamification. When I presented for critique, the feedback was to be less specific with the framework: I have plenty of information on the psychological aspects and even some on game theory, but very little on virtual reality itself. I'll be doing more research this week to fill those gaps. Several professors and classmates have told me to read John Dewey's Art as Experience, so I'll be adding it to my reading list as well. 

Current breakdown of theoretical framework.

NEXT: 

  • Finishing up framework for the whole project build.

  • Functional Ruby experience in the Prologue

  • Transitions between scenes started.

  • Main menu complete with options.

  • Research

Phase 2: Continuing the Prototype

After completing our 4 week project, Tori and I talked about where to take the next 6 weeks to advance this project. We decided to continue in the direction outlined in my last post- creating the first steps of a vertical slice from the story of Ruby Bridges- with Tori focusing on organizing the animations and drama, and me focusing on creating a full build in Unity. 

Our four week prototype had a loose menu structure that I created to make it easier for us to test different functions and for me to understand how they work; those were purely technical exercises. This time, we will be creating a prototype that contains a full narrative. The user will begin the experience as Ruby, with minimal control of their surroundings. From there, the scene will restart and the user will gain the ability to navigate the environment. There will be interactable objects to collect and examine, containing background information about the time period and location. While we want to avoid creating a full-fledged game with this experience, I will be using game design elements to encourage exploration of the environment so students will actually find this information. 

We took into consideration the critique that we received from our initial prototype. Our objectives were reframed to focus on the story and less on the technology, and we will continue to focus on function and interaction instead of aesthetic appearance. These are questions we can begin examining after this project. Our research has already begun expanding to include psychology, learning theory, and empathy. 

Proposed work schedule for 6 Week Prototype.

Above is the working schedule I've created for my part of the prototype. Tori's schedule lines up with mine so that we're both generally working at the same pace and form of development. 

I began working on the general layout of our project, considering the flow of the experience and what functions would be available in each part. While this is still a broad layout, it sketches the experience from the start screen all the way to the end of interaction. Tori and I will meet this week to finalize this plan and discuss details. I will also be starting the general layout of the experience, with a blocked-in environment and basic navigation for the user. 

Image of notes on the layout of the experience.

I also continued reading some of the research gathered over the last four weeks. 

These readings covered a wide range of topics. Research on the effects of virtual immersion on young children is nearly nonexistent, a gap mentioned several times throughout these papers. A few dealt with digital representations and how users' behavior changes when their avatar reflects a different identity. Children develop self-recognition around the age of 3 or 4, and these connections grow with executive function. It was also shown that children between the ages of 6 and 18 report higher levels of realness in virtual environments than adults, and that children have developed false memories from virtual reality experiences, believing events in the virtual environment actually occurred. I was also introduced to the Proteus effect, which suggests that changing a person's self-representation in VR changes how they behave in the virtual environment. By placing a student in Ruby's avatar, we may also shift their judgments of Ruby toward situational ones and create an increased overlap between the student and the character. When we think about placing a student in Ruby Bridges' shoes and consider aspects such as the aesthetic appearance of the environment and the interactions between Ruby and the other characters, we have to remember that this experience may be much more intense for younger students, who experience a higher level of environmental immersion than adults.


Over Spring Break I spent my time at the Creating Reality Hackathon in Los Angeles, CA, where I got to collaborate with some great people in the AR industry and work with the Microsoft Hololens for two days. Our group worked on a social AR tabletop game platform called ARena, using chess as a sample project. While we were not successful, it was a great lesson in AR development and approach. I also gained exposure to other headsets and devices from the workshops and sponsors: the Mira headset runs from a phone placed inside the headset, and a variety of Mixed Reality headsets use the same Microsoft toolkit as the Hololens. 

Workshop showing the Mixed Reality Toolkit with the Hololens.

While the Hackathon was a great technical and collaborative experience, it also opened up other long-run possibilities for our current project. Part of our research is discovering what virtual reality itself brings to this learning experience beyond being cool or fun. We already know this experience is not meant to replace reading the book or any in-class lecture; it provides another medium through which students can experience and understand this story. After a week of working and thinking in AR, I started considering how we might better bridge the gap between the physical experience in the classroom and the virtual experience. An AR-to-VR transition that interacts with the physical book would be an interesting concept to explore.

The technology doesn't quite seem to be there yet- no headset can currently switch from AR to fully immersive VR. But Vuforia seems to offer this capability, and it could possibly be accomplished on a mobile device. There's a demonstration recorded at the Vision Summit in 2016 showing this ability (at 22:00), documentation on Vuforia's website about AR-to-VR in-game transitions, and a quick search on Youtube turns up other proof-of-concept projects. This isn't a function we'll really be able to explore until much further down the line, and it may not be possible until the right technology exists, but it raises questions about how we can create that transition between the physical and the virtual. 
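To make the idea concrete, here is a deliberately SDK-agnostic sketch of the handoff. It avoids any actual Vuforia calls- a real version would hook the target-found and interaction events of whichever AR SDK we end up using:

```csharp
using UnityEngine;

// A generic sketch of an AR-to-VR handoff: when the user "enters" the
// virtual doorway anchored to the physical book, the AR rig (camera feed
// plus book-tracked overlay) is disabled and a full VR rig takes over.
public class ArToVrTransition : MonoBehaviour
{
    [SerializeField] private GameObject arRig; // AR camera + book-tracked content
    [SerializeField] private GameObject vrRig; // immersive scene camera rig

    // Called, for example, from a tap or gaze interaction on the doorway
    // anchored to the physical book.
    public void EnterVirtualScene()
    {
        arRig.SetActive(false);
        vrRig.SetActive(true);
    }

    // Returns the user to the tracked book view.
    public void ReturnToBook()
    {
        vrRig.SetActive(false);
        arRig.SetActive(true);
    }
}
```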

From some of the participants at this hackathon, I also learned about the Stanford Immersive Media Conference this May, which will feature talks by several of the authors of the papers we've been reading for research and others involved with the Stanford Virtual Human Interaction Lab. This is potentially a great way to interact with others who are doing work in the same areas of VR and AR, and discuss their research.