Phase 2’s primary focus was to remove the controllers from the scene altogether and trigger user interactions with gaze or proximity. Beginning with simple prototypes, I first learned how to change the color of cubes based on where I was looking. Then I teleported to a cube when my gaze landed on it, and finally teleported only after my gaze had rested there for a set amount of time. Once I reached this point, I brought gaze triggers into the Phase 1.5 scene to see how users move through the space, and adapted the main menu to include a gaze-triggered button that would start the experience.
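At its core, the timed teleport is just a small dwell timer that only accumulates while the gaze ray stays on its target. Below is a minimal, engine-agnostic sketch of that logic in Python; the names (`GazeDwellTrigger`, `update`, `on_trigger`) are my own illustration, not the actual project scripts, and in practice the `gazing` flag would come from a per-frame raycast out of the headset.

```python
class GazeDwellTrigger:
    """Fires a callback once gaze has rested on a target long enough (sketch)."""

    def __init__(self, dwell_seconds, on_trigger):
        self.dwell_seconds = dwell_seconds
        self.on_trigger = on_trigger
        self.elapsed = 0.0
        self.fired = False

    def update(self, gazing, dt):
        if not gazing:
            # Gaze left the target: reset so a passing glance doesn't count.
            self.elapsed = 0.0
            self.fired = False
            return
        self.elapsed += dt
        if not self.fired and self.elapsed >= self.dwell_seconds:
            self.fired = True
            self.on_trigger()  # e.g. start the teleport or light the cube


# Example: teleport after the user has looked at a spot for two seconds.
trigger = GazeDwellTrigger(2.0, lambda: print("teleport!"))
for _ in range(180):              # simulate ~3 seconds of frames at 60 fps
    trigger.update(gazing=True, dt=1 / 60)
```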
In the middle of this phase, Tori and I did a demo for members of VR Columbus, who came into ACCAD to see what we were working on and chat about our research. I noticed that users had a particularly difficult time knowing whether they were looking at the right spot. I had provided visual feedback in the form of a light turning on and dimming, but that is difficult to notice in a daytime scene. Following their critique, I added a reticle so the user could “aim” their gaze at the right spot, which proved incredibly helpful and effective. As an additional interaction, I included a member of the mob whose animation changes when the user’s gaze rests on him. This particular character gets in the user’s face, bending over aggressively before returning to his initial animation.
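Part of why the reticle works is that it can double as feedback: rather than a light washed out by daylight, the reticle itself can fill as the dwell timer runs. The gaze-reactive mob member follows the same pattern, except the trigger swaps an animation and reverts when the gaze leaves. A rough sketch of both ideas, again with hypothetical names, building on the trigger above:

```python
def reticle_fill(trigger):
    """Fraction (0..1) of the dwell completed, for driving a reticle fill ring."""
    return min(trigger.elapsed / trigger.dwell_seconds, 1.0)


class GazeReactiveCharacter:
    """Plays an aggressive lean-in while gazed at, reverts to idle when gaze leaves."""

    def __init__(self):
        self.current_animation = "idle"

    def update(self, gazing):
        wanted = "lean_in_aggressive" if gazing else "idle"
        if wanted != self.current_animation:
            self.current_animation = wanted
            print(f"play animation: {wanted}")  # stand-in for the engine's animation call
```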
Knowing that these functions work individually is great, but the next step is to weave them into the narrative naturally. Navigation in the final scene will not be based on the user staring at spotlights on the sidewalk, but on gazing in response to a character’s directions or voice.
Next steps will require building this scene with final assets and motion capture, defining the interactions that will occur at each stage of the sequence, and determining how audio will be layered into these moments.