Interactive Building

Last week of development! After taking into account all our feedback, Tori and I really had to think about how to round out this project. 

Tori will be working on the character animations and models, fixing some of the technical issues, like locking the feet to the ground, and replacing the robotic models with the avatars she created. While the current animations are still effective, the unedited motions and floating characters do break the immersion.

On my end, I had some technical issues I wanted to fix too: locking the camera to Ruby, offsetting the crowd animations so the simulated crowd doesn't move in lockstep, and editing the audio to be more cohesive. Along with that, I wanted to round out the experience. We had put the interactive level aside to focus on assembling the prologue and gathering feedback on that experience. 
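For the crowd offsets, the idea is simple enough to sketch. Assuming a Unity setup where every crowd member shares the same walk state on its Animator, a small script like the one below (the class and state names are my own placeholders, not the project's actual code) can start each character at a random point in the clip so identical models don't march in sync:

```csharp
using UnityEngine;

// Hypothetical sketch: desynchronize crowd members that share one
// walk animation by starting each Animator at a random offset.
public class CrowdAnimationOffset : MonoBehaviour
{
    [SerializeField] private string stateName = "Walk"; // assumed state name

    void Start()
    {
        var animator = GetComponent<Animator>();
        // Play the walk state from a random normalized time (0..1)
        // so every crowd member is at a different point in the cycle.
        animator.Play(stateName, 0, Random.value);
    }
}
```

Attached to each crowd prefab, this keeps a single shared animation clip while making the crowd read as individuals.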
For the last bit of this project, I'll also be putting together a basic prototype scene to explore a third-person documentary view. The user will be able to rotate the scene and zoom closer, then use tooltips to gain historical background. Within these tooltips will be a button that, when clicked, lets the user join the scene on the ground, much like Google Street View. It's not a fully fleshed-out experience, but it will let us broadly explore some of the concepts we discussed back at the beginning of the project about how to convey that information. It's a good starting point: users will still have control of their experience, and the information will be there for them to uncover at their own pace in a variety of ways. Meanwhile, the perspective-taking ability is still there to continue the experience the user had as Ruby or other members of the scene. 

The scans below from my sketchbook show some of the notes taken while discussing how to set up this level. 

I did take some inspiration from Assassin's Creed, as discussed last week. The series has always included a wealth of historical information embedded within menus and the occasional quest. As a player, though, you have to go searching for this information, and the reveal tends to be a wall of text with the occasional image. It's underwhelming after running around a richly animated recreation of Rome or Havana. The new Discovery Mode provides text, images, audio, and video from both the game and reality, and I found myself much more excited to experience a multi-modal presentation than to read text block after text block. That much text (as shown in the images below) really doesn't work well in VR: the panels are difficult to read unless they take up the full screen, and the immersion is lost. I would rather use the environment to explore and convey information than rely on text. 

In a similar vein, the newest installments of Tomb Raider include historical information with artifacts that players collect over the course of the game. Removed from the world gameplay, a screen comes up and players can examine 3D recreations of these items with a basic description of what each is in the context of the game's world. Granted, it's usually only a sentence or two, and nothing the game actually requires, but it lets players view an item up close and learn a little more about the culture of the world around them without being overwhelmed by detail. It's another way for players to experience this information, and I thought about it when considering the 3D manipulation of the scene and how to engage the user in the content. 

Another great example came from one of our readings (experiences?) for class this week. Refugee Republic, an interactive documentary, takes the viewer on a journey through a Syrian refugee camp in Iraq by scrolling through a panoramic illustration depicting different parts of life in the camp. The media often presents an inaccurate view of refugee camps, and the team behind the project set out to create a truer image of life there. While the landscape itself is mostly drawn, as the user scrolls along it transitions into film, photography, and text. The result is incredibly dynamic and gives the experience a lot of depth, as each medium is used for its strengths. It plays to every sense, and that's what we're trying to do with this interactive level. I began thinking about which media and which information to present in this third-person view, and which media might work best from the perspective-taking option. I'm going to start researching more experiences and games that provide a similar overlap of media. 

With this in mind, I was able to make decent progress on getting the level set up this week.

  • Prologue: the camera is finally locked to Ruby. All users will experience the walk at her height, without accidentally walking away from her body. In the interactive level, I'm contemplating giving the user the ability to walk around as Ruby without her set animation. We discussed multiple times how impactful the scene could be if the user sees it all from Ruby's height and explores at their own pace. I don't think we'll have time to get that in this round, but it's a feature to consider in the future.

  • Created the new scene with a third-person camera. Began implementing camera movement and manipulation functions, such as zooming with a UI slider (harder than anticipated) and rotating the environment using the pointer from the controller.
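The prologue's camera lock boils down to pinning the camera rig to Ruby every frame. A rough Unity sketch of that approach is below; the field names are placeholders, and a real Vive setup would need tuning since the HMD contributes its own tracked offset:

```csharp
using UnityEngine;

// Hypothetical sketch: keep the camera rig pinned to the character so
// the user experiences the walk at her height and can't drift away.
public class LockCameraToCharacter : MonoBehaviour
{
    [SerializeField] private Transform head;    // e.g. Ruby's head bone (assumed)
    [SerializeField] private Vector3 eyeOffset; // small tweak to sit at eye level

    void LateUpdate()
    {
        // Follow position only; the headset still supplies rotation,
        // so the user can look around while staying on Ruby's path.
        transform.position = head.position + eyeOffset;
    }
}
```

Running this in LateUpdate, after the animation has posed the skeleton for the frame, avoids the camera lagging one frame behind the walk cycle.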
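The slider-driven zoom and pointer rotation for the third-person scene might look roughly like this in Unity. Everything here (sceneRoot, zoomSlider, the speed and distance values, and the Rotate entry point) is an assumption for illustration, not the actual implementation:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch of the third-person documentary controls:
// a UI Slider drives camera zoom, and horizontal drag from the
// controller pointer rotates the environment root.
public class ThirdPersonSceneControls : MonoBehaviour
{
    [SerializeField] private Slider zoomSlider;
    [SerializeField] private Transform sceneRoot;     // parent of the whole scene
    [SerializeField] private float minDistance = 2f;
    [SerializeField] private float maxDistance = 12f;
    [SerializeField] private float rotateSpeed = 90f; // degrees per unit of drag

    void Start()
    {
        // Re-zoom whenever the slider moves.
        zoomSlider.onValueChanged.AddListener(SetZoom);
    }

    // Slider value 0..1 maps to a distance along the camera's view axis.
    void SetZoom(float t)
    {
        float distance = Mathf.Lerp(maxDistance, minDistance, t);
        transform.position = sceneRoot.position - transform.forward * distance;
    }

    // Called while the controller pointer drags; dx is the horizontal delta.
    public void Rotate(float dx)
    {
        sceneRoot.Rotate(Vector3.up, dx * rotateSpeed, Space.World);
    }
}
```

Rotating the environment root rather than the camera keeps the user's frame of reference stable, which tends to be more comfortable in VR.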


This week is going to be straight work on this level. Getting those features in will mostly be a matter of shifting the camera around, and once I have the process down it should go fairly quickly. It will also mean compiling Tori's work and mine into a final build and debugging as much as possible. I have yet to test the new level in the Vive, so I'll be doing that tomorrow and every other day until it's due, just to make sure the changes work in the headset as well as in the simulator.