After completing our four-week project, Tori and I talked about where we would go with the next six weeks to advance this project. We decided to continue in the direction outlined in my last post- creating the first steps of a vertical slice from the story of Ruby Bridges- with Tori focusing on organizing the animations and drama, and me focusing on creating a full build in Unity.
Our four-week prototype had a loose menu structure that I created to make it easier for us to test out different functions and for me to understand how they work. Those were purely technical exercises. For this phase, I will be creating a prototype that contains a full narrative. The user will begin the experience as Ruby, with minimal control of their surroundings. From there, the scene will restart and the user will gain the ability to navigate the environment. There will be interactable objects to collect and examine, containing background information from the time period and location. While we want to avoid turning this experience into a full-fledged game, I will be using game design elements to encourage exploration of the environment so students will actually find this information.
We took into consideration the critique we received on our initial prototype. Our objectives were reframed to focus more on the story and less on the technology, and we will continue to prioritize function and interaction over aesthetic appearance. Those are questions we can begin examining after this project. Our research has already begun expanding to include psychology, learning theory, and empathy.
Above is the working schedule I've created for my part of the prototype. Tori's schedule lines up with mine so that we're both working at roughly the same pace and stage of development.
I began working on the general layout for our project, considering the flow of the experience and what functions would be available at each stage. While this is still a broad layout, it's a sketch of the experience from the start screen all the way to the end of interaction. Tori and I will be meeting this week to finalize this plan and discuss details. I will also be starting the general layout of the experience itself, with a blocked-in environment and basic navigation for the user.
I also continued reading some of the research gathered over the last four weeks:
These readings covered a wide range of topics. Research on the effects of virtual immersion on younger children is nearly nonexistent, a gap mentioned several times throughout these papers. A few of them dealt with digital representations and how a user's behavior changes when their avatar reflects a different identity. Children develop self-recognition around the age of 3 or 4, and these connections grow alongside their executive functions. Children between the ages of 6 and 18 also report higher levels of realness in virtual environments than adults do, and children have developed false memories from virtual reality experiences, believing that events in the virtual environment actually occurred. I was also introduced to the Proteus effect, which suggests that changing a person's self-representation in VR affects how they behave in a virtual environment. By placing a student in Ruby's avatar, we may also shift their judgments of Ruby toward situational ones and create an increased overlap between the student and the character. When we're thinking about placing a student in Ruby Bridges' shoes and considering aspects such as the aesthetic appearance of the environment and the interaction between Ruby and the other characters, we have to remember that this experience may be much more intense for younger students, who experience a higher level of environmental immersion than adults.
Over Spring Break I spent my time at the Creating Reality Hackathon in Los Angeles, CA, where I got to collaborate with some great people in the AR industry and work with the Microsoft HoloLens for two days. Our group worked on a social AR tabletop game platform called ARena, using chess as a sample project. While we were not successful, it was a great lesson in AR development and approach. I also gained exposure to other headsets and devices through the workshops and sponsors- the Mira headset, for example, runs off a phone placed inside it, and a variety of Mixed Reality headsets use the same Microsoft toolkit as the HoloLens.
While the hackathon was a great technical and collaborative experience, it also opened up other long-term possibilities for our current project. Part of our research is discovering what virtual reality itself brings to this learning experience beyond just being cool or fun. We already know that this experience is not meant to replace reading the book or any in-class lecture- it provides another medium for students to experience and understand this story. After spending the week working in AR, I began thinking about how we can better bridge the gap between the physical experience in the classroom and the virtual experience. One concept worth exploring is an AR-to-VR transition that interacts with the physical book.
The technology doesn't quite seem to be there yet- no current headset can switch from AR to fully immersive VR. But Vuforia appears to offer this function, and it could possibly be accomplished on a mobile device. There's even a demonstration recorded at the 2016 Vision Summit showing this ability (at time 22:00), documentation on Vuforia's website about AR to VR in-game transitions, and a quick search on YouTube turns up other proof-of-concept projects. This isn't a function we'll be able to explore until much further down the line, and it may not be possible until the right technology exists, but it raises questions about how we can create that transition between the physical and the virtual.
From some of the participants at this hackathon, I also learned about the Stanford Immersive Media Conference this May, which will feature talks by several of the authors of the papers we've been reading for research and others involved with the Stanford Virtual Human Interaction Lab. This is potentially a great way to interact with others who are doing work in the same areas of VR and AR, and discuss their research.