04/13/19: Gazing into Phase 2

Phase 2 Updates

Progress has been made! I’ve been focusing on getting gaze detection into the scenes I put together for last week’s demos, and I think I finally have some momentum going. My initial plan for Phase 2 was to start small: activate a button, make something happen by looking at it. I saw a few scripts included with the SteamVR SDK, but there’s very little documentation on their actual usage. I even looked through the Google Daydream and Oculus SDKs, but those scripts were not especially helpful either.

So I just built it myself. I had a general understanding of the process: write a script that casts a ray from the camera to collide with objects, isolate those objects on their own layer, and have something happen once that collision occurs. In this case, the test was to change a cube from blue to red when looking at it. Initial tests had the raycast changing the cube’s color when pointing at the ground as well as at the cube itself.
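Here’s roughly what that first attempt looked like, rebuilt as a minimal sketch (the class and field names are placeholders, not my exact script):

```csharp
using UnityEngine;

// Attached to the VR camera: fires a ray straight ahead each frame
// and recolors the test cube when the ray registers a hit.
public class GazeColor : MonoBehaviour
{
    public Renderer cube;              // the test cube's renderer
    public int gazeLayer = 8;          // index of the custom "Gaze" layer
    public float maxDistance = 100f;

    void Update()
    {
        Ray ray = new Ray(transform.position, transform.forward);

        // The bug hid in the last argument: Unity expects a layer *mask*
        // here, not a layer *index*, so the ray matched the wrong set of
        // layers (ground included), as explained below.
        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance, gazeLayer))
            cube.material.color = Color.red;
        else
            cube.material.color = Color.blue;
    }
}
```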

Raycast test in Unity - cube still changes color even when looking away from it.

With some research and experimentation, I found that the issue was in how I defined the layer mask. I want the raycast to register only the objects on that particular layer, and I wasn’t representing the layer correctly in the script. Everything worked properly after fixing that line, and I was able to move on.
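The fix itself was one line. The mask argument of Physics.Raycast is a bitfield, so the layer index has to be shifted into a mask (or built with LayerMask.GetMask). Continuing the sketch above:

```csharp
// Fixed: shift the layer index into a bitmask. A serialized LayerMask
// field or LayerMask.GetMask("Gaze") accomplishes the same thing.
if (Physics.Raycast(ray, out RaycastHit hit, maxDistance, 1 << gazeLayer))
    cube.material.color = Color.red;
else
    cube.material.color = Color.blue;
```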

Successful raycast test, with fixed script shown.

This detection is great, and I can definitely use it to trigger reaction animations in the mob characters within our scene. But the next step was using it as a means of transport through the scene. I made another cube and expanded the script with a second layer specifically for teleportation, wrote a function that changes the target’s color so I know I’m looking at it, and delayed the teleport by a variable time (3 seconds) so it becomes an intentional action. The script is flexible enough to identify different objects and teleport points and to gather information about those spaces, and it includes a distance cap so that objects beyond a certain range (5 units in the test scene) cannot be activated.
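A minimal sketch of how those pieces fit together (names are again placeholders; the dwell time and distance cap mirror the 3-second and 5-unit test values):

```csharp
using UnityEngine;

// Gaze-teleport sketch: a second layer, "Teleport", marks valid
// destinations. Staring at one for dwellTime seconds moves the player
// rig there; targets beyond maxDistance are ignored.
public class GazeTeleport : MonoBehaviour
{
    public Transform playerRig;          // the rig that actually moves
    public LayerMask teleportMask;       // set to the "Teleport" layer in the editor
    public float dwellTime = 3f;         // delay so the teleport is intentional
    public float maxDistance = 5f;       // distance cap from the test scene

    float gazeTimer;
    Renderer currentTarget;

    void Update()
    {
        Ray ray = new Ray(transform.position, transform.forward);

        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance, teleportMask))
        {
            Renderer target = hit.collider.GetComponent<Renderer>();

            // Restart the timer whenever the gaze moves to a new target.
            if (target != currentTarget)
            {
                ClearTarget();
                currentTarget = target;
            }

            // Color feedback so I know the gaze is registering.
            if (currentTarget != null)
                currentTarget.material.color = Color.red;

            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellTime)
            {
                // Land on the target point, keeping the player's height.
                Vector3 dest = hit.point;
                dest.y = playerRig.position.y;
                playerRig.position = dest;
                ClearTarget();
            }
        }
        else
        {
            ClearTarget();
        }
    }

    void ClearTarget()
    {
        if (currentTarget != null)
            currentTarget.material.color = Color.blue;
        currentTarget = null;
        gazeTimer = 0f;
    }
}
```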

Gaze Teleport Test: 4/13/19.

Next Steps

Troubleshooting. I noticed that aiming the gaze precisely is difficult in the headset. I suspect that scaling the colliders up to be larger than the objects themselves will make it much easier to move from place to place; a possible approach is sketched below. I also saw a small jump without a fade during the playtest in the video above. I have yet to recreate it, but I’m keeping an eye out for it.
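If the collider idea pans out, it might be as simple as resizing each target’s collider in the inspector, or a tiny hypothetical helper along these lines:

```csharp
using UnityEngine;

// Hypothetical helper: pads a target's box collider beyond its visible
// mesh so the gaze ray has a more forgiving hit area.
public class GazePadding : MonoBehaviour
{
    public float padding = 1.5f;   // scale factor for the hit area

    void Start()
    {
        BoxCollider col = GetComponent<BoxCollider>();
        if (col != null)
            col.size *= padding;   // the collider grows; the rendered mesh does not
    }
}
```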

The next step for this mechanic, once it’s properly tuned in the test scene, is to bring it into Scene 01 of the project Tori and I are building. I have a few adjustments to make there before another demo on Tuesday, but I’d like to have this in as a means of locomotion before then. After that, I’ll use it to trigger additional animations and environmental effects to see what they add to the scene, and experiment with the temporal and spatial placement of these triggers.

Outside Research

This week’s outside research was inspired by my Intro to Cognitive Science course. One of my required response papers was based on an article titled “The Mind’s Eye” by Oliver Sacks, published in The New Yorker in July 2003. Sacks, a neurologist, draws on personal accounts to describe the varying experiences and adaptations of the blind to the loss of the visual world. He begins with John Hull, whose vision started deteriorating at age 13 and who was totally blind by 48; along the way, Hull kept journals and audio recordings discussing the nature of his condition. Not long after losing visual input, Hull experienced what he calls “deep blindness”, a complete loss of mental imagery in which even the concept of seeing had disappeared. To account for the loss of visual input, the brain (in all of its wonderful, weird plasticity) heightened his other senses. Sound connected him deeply with nature and the world around him, bringing him true joy and even producing a landscape of its own.

Sacks realized that Hull’s experience was not universal. He discusses other accounts from people who can use a mental landscape to solve problems, produce powerful mental scenes, and manipulate this “inner canvas”. He questions whether the ability to consciously construct mental imagery is even all that important, eventually concluding that the heightened sensitivity resulting from blindness is just another reproduction of reality, one that is not the product of a single sense but an intertwined collaboration of all the senses at all levels of consciousness. (Since writing the article, Sacks has also published a book under the same name, covering broader sensory losses such as the loss of facial recognition or the ability to read.)

I mention this because I found “Notes on Blindness” on the Oculus store, a VR experience based on the audio recordings made by John Hull.

I have to say, the trailer really doesn’t do it justice.

The entire experience is built on Hull’s strong connection to natural audio. I went through it seated, but standing would work just as well. There are six scenes, or chapters, each themed on a particular point: “How does it feel to be blind”, “On Panic”, “Cognition is Beautiful”. In the initial scene, you appear in a landscape built of tiny dots. I could make out surfaces and the shapes of trees, but overall I was alone. As the audio plays, Hull describes the individual sounds of the park, and they build into a thriving scene; the point is that objects only appear if they are making some form of ambient sound.


Screenshot from Scene 01: “How does it feel to be blind” of Notes on Blindness.

While the vast majority of the experience is observational, there are points where the user is required to interact with the scene. In one scene, I am given control of the wind, blowing to reveal trees and a creaky swing set at a park. In another, I am required to gaze at highlighted footsteps in order to move forward, and I’m given a cane in one hand to tap on the ground, illuminating the area immediately below me. The designers made smart choices about where they implemented these methods: gazing at the footsteps and tapping the cane occur in a scene about panic and anxiety, one where I as a user felt useless despite being given an action, while the wind emphasized nature’s power to reveal. All along the way, Hull narrates his feelings about these sounds and how they give him power where sighted people might disregard or even fear them.

The sound design throughout is phenomenal, and in the scene about panic, I absolutely felt it. The sounds that had previously signified release and peacefulness turned against me; the world became hostile and unidentifiable, its structure disorganized, underscored by an intense color shift. The visuals emphasized a different kind of seeing, but they were still stunning to look at and representative of the descriptions being given. Beautiful as the world was, the sound was always clearly the priority, keeping the emphasis on cognition.

I did find a VR game that offers another perspective, though. Where Notes on Blindness functions as a storytelling experience, the game Blind uses these interactions as mechanics in a psychological thriller. The main character wakes up not knowing where she is, missing her sight, and must rely on echolocation to visualize the world around her. I thought this shift in focus and mechanics would make for an interesting comparison.

In all honesty, I only played the first 20 minutes of the game, partly due to time constraints and partly because I could feel my anxiety skyrocketing the first time I looked down a dark hallway with no indication of what lay ahead. I’ve played enough horror games to want no part in that.

However, I was able to experiment with some of the puzzle-solving mechanics and the user’s interactions with sound. Much like in Notes on Blindness, when no sound is playing I am unable to see ANYTHING in the scene: no sense of space. That, combined with the complete silence, makes for an eerie atmosphere. Throwing objects will temporarily illuminate sections of the scene, and in the introduction a gramophone’s sound illuminates a path or draws the user to a specific spot. (I’ve watched a walkthrough of the full game and know that later on you receive a cane.) The requirement for environmental interaction is more present than in most games: without it, the game does not exist at all.

The game opens with a short story sequence in a comic-style format before the user “awakes” in a dark space. The intro level has three basic puzzles that introduce the mechanics: a safe, a maze, and sound buttons. On the safe, the user can see two dials but no markings, and must rely on vibrations from the controllers to unlock them. The maze is inside a box: by moving a handle, you navigate a small ball through the passages, illuminating the interior as it goes. And the sound puzzle forces you to focus on a particular melody and play its segments in order.

Navigation through the scene is a bit odd. Because I was playing on the Oculus Rift without a third sensor, I couldn’t turn my back to the two sensors above my monitor, and the limited motion was a little frustrating when I just wanted to turn around and open a drawer. The user walks by pushing the joystick, sliding forward or backward, and there are minimal options for user motion beyond turning off strafing.

Conclusions

I found it really interesting how the designers of both experiences took the same base material and brought it into unique narratives and interactions. I can actually plot both experiences within the framework I’m building, in terms of the roles of user and designer and the definition of the experience. It’s hard to find other narrative content with the same basis right now, and I expect I’ll start seeing more patterns once I add these to my research experience spreadsheet. I’m already seeing many of the design decisions I’m now making in my own prototypes present in these built experiences, which suggests designers are asking themselves many of the same questions along this process.