04/20/19: Phase 2 Implementation

PROGRESS

Last week’s successful gaze development paid off. Scene 01 no longer requires the use of controllers!

The big achievement of the week was adding all the gaze-based teleport points to the scene I constructed for demos two weeks ago. The actual script for casting a ray from the camera and teleporting to a point on a trigger didn’t require much modification. The trouble I had was deconstructing EVERYTHING that was attached to the original, time-based teleporting and attaching it to the new points, then making sure to activate/deactivate the next and previous points to prevent the user from going backwards. I also added a light to each point that increases in intensity while staring at the teleport point, as an indicator for myself and other players.
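For my own notes, each point’s behavior boils down to something like the sketch below. This is a simplified mock-up rather than the actual script; the field names (nextPoint, previousPoint, gazeLight) and the intensity values are placeholders.

```csharp
using UnityEngine;

// Sketch of a single teleport point in Scene 01 (placeholder names, not the real script).
public class TeleportPoint : MonoBehaviour
{
    [SerializeField] private TeleportPoint nextPoint;      // activated once the user arrives here
    [SerializeField] private TeleportPoint previousPoint;  // deactivated so the user can't go backwards
    [SerializeField] private Light gazeLight;              // indicator light on this point
    [SerializeField] private float maxIntensity = 3f;
    [SerializeField] private float rampSpeed = 1.5f;

    private bool gazedAt;

    // Called by the gaze/raycast script each frame the ray is hitting this point's collider.
    public void SetGazedAt(bool value) => gazedAt = value;

    private void Update()
    {
        // Brighten while the user stares at the point, dim again when they look away.
        float target = gazedAt ? maxIntensity : 0.5f;
        gazeLight.intensity = Mathf.MoveTowards(gazeLight.intensity, target, rampSpeed * Time.deltaTime);
    }

    // Called by the gaze script once the dwell completes and the user has been moved here.
    public void OnArrived()
    {
        if (nextPoint != null) nextPoint.gameObject.SetActive(true);
        if (previousPoint != null) previousPoint.gameObject.SetActive(false);
    }
}
```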

Me testing gaze teleporting in Scene 01. 04/19/19

After adding these features and testing myself, I found a few errors shown in the video above.

The range of the headset’s ray was set too short, which made it difficult to reach the next point. That cap prevents the user from triggering anything too far ahead of their position in the scene, but unfortunately it also prevented me from reaching some of the teleport points. I had also organized the mob variations in front of the next point, before a visual of the next step was actually required to progress; but because the ray only reacts to objects in the Teleport layer, the user can look straight through the legs of the mob and still activate the next point without ever gaining direct visual contact. And the doors of the school are intended to be a trigger that ends the experience; instead, the user just keeps activating the teleport function and re-spawning directly in front of them. After addressing these issues (minus the points visible through the mob’s legs), I added a gaze-based “start” button at the main menu to create a completely controller-free experience and introduce this concept before actually entering the scene.
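The menu button itself is simple. Roughly, it looks like the sketch below (placeholder names, not the exact script): the gaze script reports whether the ray is on the button, and an event fires once the dwell time is met.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Rough sketch of the gaze-activated "start" button idea; dwellTime and onActivated
// are placeholders rather than the actual menu script.
public class GazeStartButton : MonoBehaviour
{
    [SerializeField] private float dwellTime = 3f;    // seconds of sustained gaze required
    [SerializeField] private UnityEvent onActivated;  // e.g. hook up "load Scene 01" in the inspector

    private float gazeTimer;
    private bool fired;

    // Called by the gaze raycast script every frame; true while the ray is hitting this button.
    public void SetGazedAt(bool gazedAt)
    {
        if (fired) return;

        // Accumulate while gazed at, reset the moment the gaze leaves.
        gazeTimer = gazedAt ? gazeTimer + Time.deltaTime : 0f;

        if (gazeTimer >= dwellTime)
        {
            fired = true;
            onActivated.Invoke();
        }
    }
}
```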

Script commenting.

For my own process, I realized that many of the scripts I’m writing/using will be useful to us moving forward in development and in various other projects. I took a quick break from scene development to add comments to the scripts, for my own sanity and for ease of understanding down the line. Just a new habit I’m trying to develop, thanks to Matt’s Programming for Designers class.

We were fortunate enough to host a VR Columbus meeting on Friday evening at ACCAD and demo this new scene with gaze teleportation for the first time. Below is one of the recordings I took of a user moving through the experience from the main menu all the way to the end, with sound from the experience included.

User playtest at ACCAD, 04/19/19.

While users were in the experience, I was watching the scene editor (as pictured above) to get an idea of where people were looking and changes that needed to be made for easier gaze detection. Blue lines in the scene above indicate where the ray is being cast from the user camera. When the line turns yellow, the user is making contact with the teleport point.

After watching a few users go through, I think I experienced a case of “designer blindness”: after working on an experience for a certain amount of time, you get so proficient at moving through it that you miss potential user issues. I was really surprised at how much people tend to move their heads in VR! The teleport points require you to hold your gaze on the collider for 3 seconds before activating, and most people would only make it to two before their head twitched and the count restarted. From this, I imagine making the colliders larger would help. The further away a point is, the more difficult it is to activate. Users would tell me “I’m looking right at it!” when really the ray was hitting the floor just below the trigger point. The light cue was somewhat helpful, but I think the user needs more than that to figure out where their gaze is actually falling. I think adding a light reticle to the camera will help, and I’ll be testing it in the next week just to see how it feels. My concern is that the reticle will add a layer of separation between the user and the scene, reminding them again of the technology and breaking flow/immersion.
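The reticle test I have in mind is roughly this: a small world-space marker repositioned every frame to wherever the gaze ray lands, so the user can see exactly where their gaze is falling. Just a sketch; the reticle object and the distance value are placeholders.

```csharp
using UnityEngine;

// Sketch of the reticle idea to test next week (placeholder fields, not a final script).
public class GazeReticle : MonoBehaviour
{
    [SerializeField] private Transform reticle;           // small quad/sprite placed in world space
    [SerializeField] private float defaultDistance = 5f;  // where to float it when nothing is hit

    private void Update()
    {
        Transform cam = Camera.main.transform;
        Ray ray = new Ray(cam.position, cam.forward);

        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            // Sit the reticle on the surface, nudged toward the camera to avoid z-fighting.
            reticle.position = hit.point - cam.forward * 0.01f;
        }
        else
        {
            // Nothing hit: float the reticle at a fixed distance in front of the user.
            reticle.position = cam.position + cam.forward * defaultDistance;
        }

        reticle.rotation = Quaternion.LookRotation(cam.forward);
    }
}
```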

I know this is just a prototyped proof of concept, but the teleport points are not especially obvious in the scene and their function is not clear from the start. Tori and I still have to brief the user before the experience begins, reminding them to look around, to stand up, and that the points are even there. We’ve been planning on using Lucille Bridges’ character as a means of progression, walking ahead and calling for our attention with audio cues, but even then I’m not sure how to transfer these attributes of “focus” and indication of action to a human figure. Or whether it’s even required; maybe a glance at Lucille Bridges’ face is enough to move the user forward. This is a point of experimentation for Tori and myself beyond the current prototype.

What’s Next

Overall, I think this is a good foundation to build from. Now that I have an understanding of user action and progression, I feel I can start layering smaller interactions from the user and mob into the scene. This phase is due next week - I’ll save my thoughts on the summer for then. But in the immediate future, Phase 2 requires troubleshooting and adjustment of all the colliders in the scene. I will also be testing some reactions to proximity and gaze with one or two of the crowd members. Ideally, a user will look someone in the eyes and cause an insult to be hurled or an aggressive motion to occur. To aid in user gaze accuracy, I will add a reticle to the camera to see what that effect is like.
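For the crowd reaction test, I’m imagining something along these lines: a check for proximity plus how directly the user is looking at a character’s head, which then fires an audio clip or an aggressive animation. All of the names, thresholds, and the "Aggressive" trigger below are placeholders, not final values.

```csharp
using UnityEngine;

// Sketch of a mob member reacting to being looked at up close (placeholder values).
public class CrowdGazeReaction : MonoBehaviour
{
    [SerializeField] private Transform head;              // this character's head bone
    [SerializeField] private float reactionDistance = 4f; // how close the user must be
    [SerializeField] private float gazeAngle = 10f;       // how directly the user must look
    [SerializeField] private AudioSource insultAudio;
    [SerializeField] private Animator animator;
    [SerializeField] private float cooldown = 8f;         // don't re-trigger every frame

    private float lastReaction = -999f;

    private void Update()
    {
        Transform cam = Camera.main.transform;

        bool closeEnough = Vector3.Distance(cam.position, head.position) <= reactionDistance;
        // Angle between where the user is looking and the direction to this character's head.
        bool lookingAtMe = Vector3.Angle(cam.forward, head.position - cam.position) <= gazeAngle;

        if (closeEnough && lookingAtMe && Time.time - lastReaction > cooldown)
        {
            lastReaction = Time.time;
            insultAudio.Play();
            animator.SetTrigger("Aggressive"); // hypothetical animator trigger name
        }
    }
}
```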

OUTSIDE RESEARCH

Research this week included three experiences on the Oculus Rift: Dreams of Dali, The Night Cafe, and Phone of the Wind. The descriptions of all three really drew me to them, as they’re meant to be contemplative experiences that ask you to navigate space and uncover the narrative (or lack thereof).

I started with Dreams of Dali. This experience was on display at The Dali Museum in St. Petersburg, FL for over two years, and explores Dali’s painting “Archaeological Reminiscence of Millet’s ‘Angelus’”. Looking at their page about the experience, I noticed that it’s available in multiple formats: for VR headsets and as a “linear 360” view. This might be the first time I’ve seen that much variety available on a museum page. The VR experience was also covered by admission to the museum, which was nice of them; the on-location experiences I’ve done so far have required additional ticket purchases.

“Archaeological Reminiscence of Millet’s ‘Angelus’”, Dali.

I had to laugh when the experience started up. The very first screen was a set of instructions to stare at a glowing orb for 3 seconds to move around, glowing orb included so you can begin the game. An interesting case study for the problems I was discussing in my Phase 2 project! This experience required me to move across large distances, and their inclusion of a reticle helped enormously. It only appeared when my gaze was near an orb, which left me free to explore the rest of the world without obstruction. In the actual experience, I was able to navigate in whatever order I pleased. Some orbs were only accessible from certain points, and at other points a new event was triggered. I moved out to the fringe of the desert on the other side of the structures and encountered elephants with the legs of an insect towering over me and making their way past; they continued to walk throughout the scene. Or I turned a corner and encountered a lobster sitting on top of a phone. Audio in the scene included soft rumblings and ambient effects, said to represent Dali’s potential thoughts in the scene. Few words were distinguishable to me, but it really added to the dreamlike state of the place.

In the teleport actions themselves, the user actually slides quickly through the space to the given point. There’s no fade in/fade out or blink occurring; you’re able to see the ground moving below you and your destination. The only time this became an issue for me was when ascending or descending the long spiral stairs in the tower - I didn’t realize the next orb would just throw me directly up to the top. Not too dissimilar from when Saruman propels Gandalf up to the top of Isengard in The Lord of the Rings: The Two Towers.
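This isn’t their code, but the slide style is easy to picture as moving the player rig over a short duration instead of fading out and snapping. A rough Unity-style sketch of that difference (rig and duration are placeholders):

```csharp
using System.Collections;
using UnityEngine;

// Rough sketch of a "slide" teleport: interpolate the rig to the destination
// over a short duration rather than blinking/fading to it.
public class SlideTeleport : MonoBehaviour
{
    [SerializeField] private Transform rig;           // the player rig to move
    [SerializeField] private float slideDuration = 0.6f;

    public void SlideTo(Vector3 destination)
    {
        StopAllCoroutines();
        StartCoroutine(Slide(destination));
    }

    private IEnumerator Slide(Vector3 destination)
    {
        Vector3 start = rig.position;
        float t = 0f;
        while (t < 1f)
        {
            t += Time.deltaTime / slideDuration;
            // The ground visibly moves past the user instead of cutting to black.
            rig.position = Vector3.Lerp(start, destination, t);
            yield return null;
        }
        rig.position = destination;
    }
}
```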

Anyways. I feel that context is important in this experience. Had I been visiting the museum, I would probably have had more of an appreciation for the things included in it. I have a very basic knowledge from taking Art History early in college, but my understanding of Dali doesn’t go much further than that. As a user at home, I have to seek out that additional information independently of the experience itself. I also wonder if the “linear 360” experience is crafted to form a particular narrative or is just a path that covers all of the points. I didn’t have time to go into that this time around, but I’d like to make a closer comparison in the future.

I moved on to The Night Cafe: A VR Tribute to Van Gogh, made by Borrowed Light Studios.

I’m going to have to revisit this experience, as the only way to navigate was with a console controller. It’s kind of odd to make that the only source of input, but until I can get that set up I’ll just give my static impressions of the first scene. The assets and animations are very beautiful, and the style of the room definitely matches the painting. In the spaces where they had to guess at detail, such as the wall and door behind me, the makers said they took reference from other paintings and were able to match his style pretty well. The intro leading up to this sequence was an image of The Night Cafe painting before fading into the actual scene.

The last experience was Phone of the Wind, an interactive film based on a phone booth in Japan used to connect and speak to departed loved ones.

This phone booth is well documented; it actually sits in the town of Otsuchi, Japan, and was built as a way for people to grieve and heal after the 2011 earthquake and tsunami. In the experience, you listen to three people talking to their loved ones in the booth. I was really surprised by the types of visual content included; users begin the experience from the perspective of a drone actually flying over the booth. As each story begins, the world transforms into an animated scene representing what is being said. At the beginning and end of the experience, the world is made of 3D assets. The transitions weren’t always smooth, since they are full world transformations, but they definitely added variety and personality to each story.

The interactive aspect of this film comes in at the very end. The user is given the option to enter the booth themselves and leave a message for a loved one. I really love that this was part of the experience, and I can see some similarities between this and Where Thoughts Go. Users can choose to skip it and move on, or to take a moment to privately reflect. The few instances of movement in here, the flying drone and the user entering the telephone box, are forced; there is little control over your location in the scene.

It was difficult to find any information about this experience beyond what’s given on the Oculus store - the developer’s website is now private. But snooping around the reviews was… its own experience. Some users loved it and were crying, others thought it was stupid and shouldn’t be allowed on the store due to it not being “fun”. From their comments I gather that many, like me, had no idea this was a real place with its own history and meaning to a community, not just a filmmaker’s idea. While I don’t think that information is necessary for the purpose of the experience, I wonder why this information isn’t more readily given and attached to real life events. Knowing the history helped ground the story for me.

CONCLUSIONS

These were very different interpretations of real objects and places. It was interesting to see how some of the gaps in information were filled in with reference material, though for the two I was able to fully experience, the outside context was not filled in for the user. I feel that I needed that additional information to truly enjoy and understand the content to its fullest extent. Designers are taking experiences that were initially shown in exhibitions and putting them on the Oculus or Steam stores, but not accounting for that missing information, or for how the experience outside of the headset is part of the overall design process. This week’s outside research really made these points clear to me, and it has been helpful in clarifying my thoughts about how to organize the content outside of VR in my framework.









04/13/19: Gazing into Phase 2

Phase 2 Updates

Progress has been made! I’ve been focusing on getting gaze detection into the scenes I put together for the demos last week, and I think I finally have some momentum going. Initially my schedule for Phase 2 was to start small: activate a button, make something happen by looking at it. I saw a few scripts included with the SteamVR SDK, but there’s very little documentation on their actual usage. I even looked through the Google SDK for Daydream and the Oculus SDK, but those scripts were not especially helpful either.

So I just built it myself. I have a general understanding of the process: write a script that sends a ray from the camera to collide with objects, isolate those objects on their own layer, and then have something happen once that collision occurs. In this case, the test was to change a cube from blue to red when looking at it. Initial tests had the raycast changing the cube’s color when pointing at the ground as well as when pointing at the cube itself.

Raycast test in Unity - cube still changes color even when looking away from it.

With some research and experimentation, I found out the issue was in my definition of the mask. I wanted the raycast to only affect objects on that particular layer, and I wasn’t representing the layer correctly in the script. Everything worked properly after fixing this line, and I was able to move on.
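For anyone hitting the same wall: Physics.Raycast expects a layer bitmask, and a common version of this mistake is handing it a layer index instead. Below is a minimal sketch of that kind of fix, not my exact script; the layer name "Gaze" and the color fields are placeholders.

```csharp
using UnityEngine;

// Sketch of a gaze color-change test with the mask defined correctly.
public class GazeColorTest : MonoBehaviour
{
    [SerializeField] private Color gazeColor = Color.red;
    [SerializeField] private Color idleColor = Color.blue;
    [SerializeField] private float maxDistance = 5f;

    private int gazeMask;
    private Renderer current;

    private void Start()
    {
        // Wrong (illustrative): NameToLayer returns a layer *index*, which Raycast
        // silently treats as a bitmask for entirely different layers.
        // gazeMask = LayerMask.NameToLayer("Gaze");

        // Right: GetMask builds the actual bitmask for the named layer.
        gazeMask = LayerMask.GetMask("Gaze");
    }

    private void Update()
    {
        Transform cam = Camera.main.transform;
        Renderer hitRenderer = null;

        if (Physics.Raycast(cam.position, cam.forward, out RaycastHit hit, maxDistance, gazeMask))
            hitRenderer = hit.collider.GetComponent<Renderer>();

        // Restore the previous cube's color when the gaze moves off it.
        if (current != null && current != hitRenderer)
            current.material.color = idleColor;

        if (hitRenderer != null)
            hitRenderer.material.color = gazeColor;

        current = hitRenderer;
    }
}
```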

Successful raycast test, with fixed script shown.

This detection is great, and I can definitely use that to trigger reaction animations in the mob characters within our scene. But the next step was using it as a means of transport through the scene. I made another cube and expanded the script to include a second layer specifically for teleportation, wrote a function that would change the color so I knew I was looking at it, and delayed the teleport by a variable time (3 seconds) so it became an intentional action. This script is flexible enough to identify different objects and teleport points, and gain information about those spaces. It also includes a distance cap so that objects beyond a certain point (5 units in the test scene) cannot be activated.
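Put together, the expanded script looks roughly like the sketch below: a raycast limited to the Teleport layer with the 5-unit distance cap, a color change while a point is in focus, and a timer that has to reach the full dwell time before the move happens. Field names, layer names, and the player rig reference are placeholders, not the exact code.

```csharp
using UnityEngine;

// Sketch of the expanded gaze teleport script (placeholder names and values).
public class GazeTeleporter : MonoBehaviour
{
    [SerializeField] private Transform playerRig;     // object that actually gets moved
    [SerializeField] private float dwellTime = 3f;    // delay so teleporting is an intentional action
    [SerializeField] private float maxDistance = 5f;  // distance cap for activating points
    [SerializeField] private Color focusColor = Color.red;

    private int teleportMask;
    private float gazeTimer;
    private Transform currentTarget;

    private void Start()
    {
        // Second layer used only for teleport targets.
        teleportMask = LayerMask.GetMask("Teleport");
    }

    private void Update()
    {
        Transform cam = Camera.main.transform;

        if (Physics.Raycast(cam.position, cam.forward, out RaycastHit hit, maxDistance, teleportMask))
        {
            if (hit.transform != currentTarget)
            {
                currentTarget = hit.transform;
                gazeTimer = 0f;                        // new target: restart the count
            }

            // Color change so I know the gaze is registering on this target.
            hit.collider.GetComponent<Renderer>().material.color = focusColor;

            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellTime)
            {
                // Move the rig to the target, keeping the rig's own height.
                playerRig.position = new Vector3(hit.transform.position.x,
                                                 playerRig.position.y,
                                                 hit.transform.position.z);
                gazeTimer = 0f;
            }
        }
        else
        {
            currentTarget = null;
            gazeTimer = 0f;                            // looked away: the count resets
        }
    }
}
```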

Gaze Teleport Test: 4/13/19.

Next Steps

Troubleshooting. I did notice that precise gazing is difficult in the headset. I suspect that scaling the colliders up to be larger than the objects themselves will make it much easier to move from place to place. I also saw a little jump without a fade during the playtest in the above video; I have yet to recreate it, but I’m going to keep an eye out for it.

The next step for this device, once it’s properly adjusted in this test scene, is to bring it into Scene 01 of the project Tori and I are building. I have a few adjustments to make there before another demo on Tuesday, but I’d like to have this in as a means of locomotion before then. After that, I’ll be using it to trigger additional animations or environmental effects to see what they add to the scene, and experimenting with the temporal and spatial placement of these triggers.

OUTSIDE RESEARCH

This week’s outside research was inspired by my Intro to Cognitive Science course. One of my required response papers was based on an article titled “The Mind’s Eye” by Oliver Sacks, published in July 2003 in The New Yorker. Sacks, a neurologist, draws on personal accounts to write about the varying ways blind people experience and adapt to the loss of vision of the physical world. He begins with John Hull, whose vision began deteriorating at age 13 and who progressed to total blindness by 48; along the way he kept journals and audio recordings discussing the nature of his condition. Not long after losing visual input, Hull experienced what he calls “deep blindness”, a complete loss of mental imagery in which even the concept of seeing had disappeared. To account for the loss of visual input, the brain (in all of its wonderful, weird plasticity) heightened his other senses. Sound connected him deeply with nature and the world around him, bringing him true joy and even producing a landscape of its own.

Sacks realized that Hull’s experience was not universal. He discusses other accounts of people who can use a mental landscape to solve problems, produce powerful mental scenes, and manipulate this “inner canvas”. He questions whether the ability to consciously construct mental imagery is even all that important, eventually concluding that the heightened sensitivity resulting from blindness is just another reproduction of reality, one that is not the product of a single sense but an intertwined collaboration of all the senses at all levels of consciousness.

I mention this because I found “Notes on Blindness” on the Oculus store, a VR experience based on the audio recordings made by John Hull.

I have to say, the trailer really doesn’t do it justice.

The entire experience is built on Hull’s strong connection to natural audio. I went through it seated, but standing would work just as well. There are six scenes or chapters to play, each themed on a particular point: “How does it feel to be blind”, “On Panic”, “Cognition is Beautiful”. In the initial scene, you appear in a landscape built of tiny dots. I could make out surfaces and the shapes of trees, but overall I was alone. As the audio plays, Hull describes the individual sounds of the park and they build into a thriving scene, with the point that objects only appear if they are making some form of ambient sound.

Since writing this article, Sacks has published a book under the same name discussing broader sensory losses, such as the ability to recognize faces or to read.

Screenshot from Scene 01: “How does it feel to be blind” of Notes on Blindness.

While the vast majority of this experience is observational, there are points where the user is required to interact with the scene. In one scene, I am given control of the wind, blowing to reveal trees and a creaky swing set at a park. In another, I am required to gaze at highlighted footsteps in order to move forward, and I am given a cane in one hand to tap on the ground, illuminating the immediate area below me. The designers made smart choices about where they implemented these methods: gazing at the footsteps and tapping the cane occur in a scene about panic and anxiety, one where I as a user feel useless despite being given an action, while the wind emphasizes the revealing power of nature. All along the way, Hull narrates his feelings about these sounds and how they give him power where sighted people might disregard or even fear them.

The sound design throughout is phenomenal. And in the scene about panic, I absolutely felt it. The sounds that had previously signified release and peacefulness turned against me, and the world became hostile and unidentifiable, with disorganized structure and an intense color shift. The visuals emphasized a different kind of seeing, but they were still stunning to look at and representative of the descriptions being given. Even though the world is visually beautiful, the sound is always clearly the priority, with the emphasis on cognition.

I did find a VR game title that offers another perspective, though. Where Notes on Blindness functions as a storytelling experience, the game Blind uses these interactions as game mechanics in a psychological thriller. The main character wakes up not knowing where she is and missing her sight, and must use echolocation to visualize the world around her. I thought this shift in focus and mechanic would make for an interesting comparison.

In all honesty, I only played the first 20 minutes of the game, due to time constraints and the fact that I could feel my anxiety skyrocketing the first time I looked down a dark hallway with no indication of what lay ahead. I’ve played enough horror games to want no part in that.

However, I was able to experiment with some of the puzzle-solving mechanics and the interactions the user has with sound. Much like Notes on Blindness, when no sound is playing I am unable to see ANYTHING in the scene. No sense of space. That, combined with the complete silence, makes for an eerie atmosphere. Throwing objects will temporarily illuminate sections of the scene, and in the introduction the user is guided by a gramophone producing sound to illuminate a path or lead the user to a specific spot. I have watched a walkthrough of the entire game and know that later on you receive a cane to use. The requirement for environmental interaction is more present here than in most games; without it, the game does not exist at all.

The beginning of the game includes a short story sequence shown in a comic-style format before the user “awakes” in a dark space. In the intro level there are three basic puzzles to complete that introduce the user to the mechanics: a safe, a maze, and sound buttons. In the safe, the user can see two dials but no markings and must rely on the vibrations from the controllers to unlock them. The maze is located inside a box; by moving a handle around, you navigate a small ball through the passages and illuminate the interior of the box. And the sound puzzle forces you to focus on a particular melody and play its segments in order.

Navigation through the scene is a bit odd. Because I was playing on the Oculus without a third sensor, I was unable to turn my back to the two sensors above my monitor. The limited motion was a little frustrating when I just wanted to turn around to open a drawer. The user walks by pushing the joystick, sliding forward/backward, and there are minimal options for user motion beyond turning off strafing.

CONCLUSIONS

I found it really interesting how the designers of both experiences were able to take the same base information and turn it into unique narratives and interactions. I can actually plot both experiences within the framework I’m building, in terms of the roles of user and designer and the definition of the experience. It’s hard to find narrative content with the same basis right now, and I expect I’ll start seeing more patterns when I add them to my research experience spreadsheet. I’m also starting to see a lot of the design decisions I’m now making in my own prototypes present in these built experiences, which suggests that designers are asking themselves many of the same questions along the way.