05/12/20: Thesis Completion and Moving Forward

Thesis is complete! I turned in my paper and successfully defended on April 8, 2020, and have since graduated from OSU with my MFA.

Most of the last few months were spent writing and revising my thesis paper, but I didn’t really post any updates along the way. That’s what this post is for.

As of the end of January, all work on our Narrative Performance case study prototype has come to an end. Taking the final steps in production to have the experience ready for the Zora Neale Hurston Festival signaled my transition from production to data analysis and writing.

While the prototype is not at all in a polished state, it was stable enough for us to test some of the design decisions arrived at from our past two years of work.

Final prototype video, April 2020.

Zora Neale Hurston Festival and ACCAD Playtest Day

Tori and I ready to present at the Zora Neale Hurston Festival, Jan 30 2020.

Tori and I were set up next to the academic conference at the Zora festival. Our IRB submission was finally approved in December, so we came equipped with our verbal consent script, user surveys, and screen recording software. Over the two days of the conference, we gathered 13 user responses and screen recordings, primarily from users who had never experienced VR before or had only minimal exposure to it. We had some really wonderful conversations with conference attendees about the potential impact of this experience in a museum, as well as several revealing comments about the experience itself. A few users were unsure of their role in the experience, mistaking Lucille Bridges' character for Ruby, and many pointed out the technical flaws in the mob's animations and audio. Looking back at their comments, I wonder if working on this project constantly has desensitized me to the uncanny valley effect that some of these avatars may be creating when they intersect with the user on the walk down the sidewalk. Spotting the technical issues didn't seem to overwhelm the users' focus on the experience, but it definitely stayed in their minds as they exited and discussed their thoughts with us. I tried to relieve this effect after the first test of the day by moving some of the avatars further back off the sidewalk and away from each other, but collisions still sometimes occurred with the federal marshals or Lucille Bridges.

Some of the users also mentioned not realizing that turning their head was impacting their speed. I did not inform them of this ahead of time, as I wanted to see what would happen if it was a mechanic users weren't consciously aware of. I forgot to account for the VR acclimation period that occurs in first-time users - I have noticed during our demos and on other VR projects that users who aren't as familiar with VR tend not to move their heads or bodies much in the experience unless reminded to do so. In spacing out the avatars walking with the users, I assumed that there would be more variation in speed. However, when users look straight ahead the entire time, the speed remains at 1 for the duration of the experience, and there wasn't enough distance being created between the users and the avatars walking in front of them. I had to adjust the allowed walking distance between the avatars to cut down on users passing through them.
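For context, here's a minimal sketch of how a head-rotation speed mechanic like this could be wired up in Unity - the class, field names, and values are hypothetical, not the project's actual script:

```csharp
using UnityEngine;

// Hypothetical sketch: scale the user's forward speed by how closely their
// gaze lines up with the sidewalk direction. Looking straight ahead gives
// full speed (1); looking off to the side slows the walk down.
public class HeadTurnSpeed : MonoBehaviour
{
    public Transform hmd;           // the VR camera (head) transform
    public Transform pathForward;   // a transform whose forward axis points down the sidewalk
    public float maxSpeed = 1f;     // speed when looking straight ahead

    void Update()
    {
        // Dot product is ~1 when looking down the sidewalk, ~0 when looking sideways.
        float alignment = Mathf.Clamp01(Vector3.Dot(hmd.forward, pathForward.forward));
        float speed = maxSpeed * alignment;

        // Move the rig along the sidewalk at the scaled speed.
        transform.position += pathForward.forward * speed * Time.deltaTime;
    }
}
```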

The overall experience was rewarding, and the small number of users gave Tori and me a chance to work out the kinks in our presentation flow and delivery.

A few weeks later in February, we presented the same project at ACCAD as part of Playtest Day, an event for students to gather feedback on their in-progress projects. We gathered over 30 responses from a user base that tended to have more VR experience, and added their survey responses to our collected data.

Thesis Writing and Defense with COVID-19

Following the ACCAD Playtest Day, most of March was spent finishing my thesis paper. I haven't written much on my blog about my writing process or concept; to summarize, I was creating a framework for designers making narrative VR content, one that takes into account the contributions of the user along with those of the designer. How do we as designers consider the user's role in the design process? I used the concept of the magic circle as a starting point for much of this, outlining what the components of a VR magic circle would be and how one would be constructed by the designer. The design process for the Narrative Performance prototype and our collected data were then analyzed within this framework, along with the implications of the user responses to the prototype. I'll post a link to the actual paper in the future, as I'll probably be referencing parts of it in personal projects moving forward.

The actual writing process was surprisingly satisfying, despite being interrupted by COVID-19. Tori and I were able to complete all our data gathering and project work before the university closed down. Spring Break was the second week of March, and I never went back after that. It took some time to get into the groove of writing at home with my partner and dog present - it was hard to sit at the same desk for 12 hours a day, no longer moving between my GRA hours and schoolwork. It became a matter of "if you're in the mood to write, do it" and adjusting the rest of my schedule around that to get it done.

I finished my paper on April 1 and defended on April 8 over Zoom, which was… odd. There were roughly 18 people on the call with their videos off, so I was mostly just talking to my PowerPoint slides.

But it was successful, and I only had to make a few minor changes to my paper before final submission to the grad school. I was able to complete and submit my GIS in Cognitive Science not long after.

WHAT’S NEXT

I’m at a point now where I’m finishing up my GRA hours and transitioning into a new full-time job - more on that next time. But what I’m seeing now is a need to strengthen my portfolio beyond prototyping.

Quarantine Cards

So naturally, I started by branching out and making a Cards Against Humanity clone.

Social distancing and quarantine have really done a number on my love of game nights. I've been FaceTiming with my sister and brother-in-law every Saturday night for the last five weeks to play a networked Cards Against Humanity game. Except every site that we've used has come with its own issues: a limited number of cards/expansions, no way to switch out problematic or terrible cards, and text that's hard to read or cards that are hard to move around on different platforms. Well, I've never done networked play or text parsing in Unity - time to learn!

I started building the card logic first, entering card options in an Excel spreadsheet and loading them into Unity. The spreadsheet is shuffled, random entries are assigned to each card, and entries are removed from the list once the card is swapped or discarded.

Test video for card logic of Cards Against Humanity clone.
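As a rough sketch of that deck logic - the file format, class, and method names here are illustrative assumptions, not the actual project code:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of the card deck idea: load card text exported from the
// spreadsheet (one card per line), shuffle it, and remove entries as they are
// dealt so a card can't come up twice.
public class CardDeck : MonoBehaviour
{
    public TextAsset cardFile;                       // text/CSV export of the spreadsheet
    private List<string> deck = new List<string>();

    void Awake()
    {
        deck.AddRange(cardFile.text.Split('\n'));
        Shuffle(deck);
    }

    // Fisher-Yates shuffle.
    void Shuffle(List<string> list)
    {
        for (int i = list.Count - 1; i > 0; i--)
        {
            int j = Random.Range(0, i + 1);
            string temp = list[i];
            list[i] = list[j];
            list[j] = temp;
        }
    }

    // Deal the next card and remove it from the deck.
    public string Draw()
    {
        string card = deck[deck.Count - 1];
        deck.RemoveAt(deck.Count - 1);
        return card;
    }
}
```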

Making this has meant being able to include all our expansions, create custom cards, and throw out the ones that will never be played or are a little TOO problematic. So it's been a fun learning experience and side project so far. I'm going to jump into Photon later this week to learn how to set up networked play and figure out the second half of the card logic - how to get cards submitted to the card czar.

GOALS:

  • Return to environment art on a small scale, using static vignettes of a wider story.

  • Consider how to insert users into a narrative by adjusting a specific narrative moment to suit VR affordances.

  • Create polished portfolio pieces.

  • Consider the VR magic circle in the creation of the environment, and analyze it for persistent patterns. I want to continue my thesis discussions here.

  • Explore Unreal Engine and become familiar again. I’ve lost touch in the last two years and want to be proficient again. I may start this project in Unity and move to Unreal as I work out the timing and scope of the project.

  • And on a personal note: reconnect with reading my favorite novels, because graduation has opened up some time for hobbies!

01/18/20: Spring Semester Goals

The next few weeks are the final weeks of production for this case study - in two weeks, Tori and I are flying to Florida to demo at the Zora Neale Hurston Festival. We then have the month of February and first week of March to make any modifications. At this point, production is more about cleanup of the main case study experience, and tiny prototype experiments based on proximity to other avatars and how to fully construct the environment.

Reaching the end of last semester, I had a rough draft of what our final experience was going to look like and a head start on the writing portion of my thesis. I created a priority list addressing our deadline for the Zora festival in two weeks and for the end of March:

Priority List for final production of Ruby Bridges case study.

Project Process

Based on the priority list, I wanted to line up my goals for the Zora festival and for the end of the semester so I could work on them simultaneously. The first thing I've been working on is maintaining a consistent proximity during the walk between the user, Lucille, and the two federal marshals. At the end of December, all three characters would reach the end with the user, but they often felt too far away during the walk itself. It didn't seem natural to have Lucille so far away from the user in a hostile situation, or to have the marshals walking three meters up the sidewalk while the mob avatars are crowding in.

In my test navigation scene, I set up the three avatars and put in a base user animation with some speed variations.

Group Walk Test using test prototyping space.

One of the biggest problems was getting the speed and animation adjustments right regardless of what the user is doing. From the user’s perspective, if they’re slowing down then it means they’re not looking straight ahead down the sidewalk, which gives me a little bit of leeway in the adjustments I make. Slowing down the avatars to an unreasonable speed (often looking like they’re moving through water) doesn’t matter as much because they will speed back up when the user looks directly ahead.
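To illustrate the kind of pacing adjustment I'm describing, here's a minimal sketch of distance-based speed scaling for one of the walking avatars - the names and values are placeholders, and the real script handles more cases than this:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Placeholder sketch: slow a walking avatar's NavMeshAgent as it pulls ahead
// of the user, and let it return to normal speed once the user catches up.
// This assumes the avatar stays ahead of or beside the user on the sidewalk.
public class GroupPacing : MonoBehaviour
{
    public Transform user;          // the VR rig
    public float normalSpeed = 1.2f;
    public float maxLead = 2f;      // meters the avatar is allowed to get ahead
    private NavMeshAgent agent;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        float lead = Vector3.Distance(transform.position, user.position);

        // Scale speed toward a slow crawl as the gap approaches maxLead,
        // never stopping completely so the walk doesn't look frozen.
        float t = Mathf.Clamp01(1f - (lead / maxLead));
        agent.speed = normalSpeed * Mathf.Max(t, 0.1f);
    }
}
```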

Implementing this in the main project scene was going to require reorganizing how the main control script was running the scene. Initially, the script controlled all of the dialogue, audio, and motion triggers, which got messy and difficult to debug. Using this primary control script as a template, I created Scene1_MainControl.cs to house the booleans that indicate scene status and to run the timing for each phase of the experience. From that, I created separate scripts to control the motion for all of the avatars in the scene (including the user) and the audio/dialogue. With that separation I'll have a better handle on debugging down the road.

New control script setup.
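A loose sketch of what that separation might look like - the flags, phase names, and timings below are placeholders rather than the actual project code:

```csharp
using UnityEngine;

// Placeholder sketch of Scene1_MainControl: it only tracks scene state and
// phase timing. The avatar motion and audio/dialogue scripts read these flags
// instead of being driven directly by one monolithic controller.
public class Scene1_MainControl : MonoBehaviour
{
    [Header("Scene Status")]
    public bool carSequenceDone;
    public bool sidewalkWalkStarted;
    public bool confrontationReached;

    [Header("Phase Timing")]
    public float carSequenceLength = 30f;   // seconds, illustrative value
    private float phaseTimer;

    void Update()
    {
        phaseTimer += Time.deltaTime;

        // When the car phase ends, flip the flags the other scripts listen for.
        if (!carSequenceDone && phaseTimer >= carSequenceLength)
        {
            carSequenceDone = true;
            sidewalkWalkStarted = true;
        }
    }
}
```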

The audio script also took some prototyping to get right. Last semester I was having problems with the mob members' audio all playing at once instead of starting after randomized wait times. Distributing the AudioSources in the scene and layering these sounds still needs a lot of work, which Tori and I have already reached out for help with. For now I focused strictly on timing the audio and ensuring that the mob chants are appropriately randomized from a set number of clips.
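Roughly, the timing works like the coroutine below - a sketch with placeholder names and wait values, not the exact script:

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the randomized chant timing: each mob member waits a random
// amount of time before playing a chant chosen from a shared set of clips,
// so they don't all start at once.
[RequireComponent(typeof(AudioSource))]
public class MobChant : MonoBehaviour
{
    public AudioClip[] chantClips;
    public Vector2 waitRange = new Vector2(2f, 8f);   // min/max seconds between chants
    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        StartCoroutine(ChantLoop());
    }

    IEnumerator ChantLoop()
    {
        while (true)
        {
            // Random stagger so the mob members don't fire simultaneously.
            yield return new WaitForSeconds(Random.Range(waitRange.x, waitRange.y));

            source.clip = chantClips[Random.Range(0, chantClips.Length)];
            source.Play();

            // Let the current chant finish before scheduling the next one.
            yield return new WaitWhile(() => source.isPlaying);
        }
    }
}
```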

Next Steps

Those scripts are currently in the scene and functional, so in the next two days I will be turning my attention to the environmental setup of the scene. This is where my Zora goals and end-of-semester goals overlap - I'll be using a separate prototyping scene in Unity to place prototyped blocks and houses in order to determine the best placement for these assets in the world, and to explore different configurations for the mob. I thought about using Tilt Brush or Maquette for this, but I found it's much more efficient to use Unity because I can mix and match the assets I already have. I have already finished assigning all the textures and materials in the scene itself, and will continue to add environment assets in between setting up the houses and cars. I will also need to time the door animations for the user to exit the car, and clean up the user's exit from the car itself.

Next week I will have documented the prototyping scene and the resulting changes to the environment, as well as a runthrough of the experience with the separated script setup. Tori and I will be taking an Oculus Rift with us to demo the project in Florida, so we will be conducting these tests on both an Oculus Rift and a Vive to check for any other issues.

12/06/19: Five Week Summary

I’ve reached the end of my in-class window for this project, and it’s time to go back and review what’s happened in the last month.

I started this project fairly optimistic about what I would be able to achieve. My initial task list was ambitious but not out of the realm of expectation given the number of prototypes I had behind me. What I didn't anticipate were the hangups on character animations and locomotion. Tori and I have been working on getting this motion capture data processed and ready for the project over the last four months, so I focused on some initial isolated problems while I was waiting on the data. As it started to come in during Week 2, we realized there were issues with the characters having major offsets and the occasional walk cycle with broken knees. I was also having problems aligning them with the car model I brought into the scene. Tori took those models, fixed the issues, and adjusted the animations to fit the car, mostly focusing on the introductory car sequence. Within the last week of the project, all of the animations were turned over to me. As I was bringing them in, I realized that the scene was cluttered and that I was going to need a different method to bring the characters down the sidewalk. To refresh the project (and myself), I started over with a fresh scene and spent the last three days of work bringing in the final animations, implementing the new locomotion system, and cleaning up my personal scripting workflow.

12/06/19: Progress update. Video is a little choppy due to all the functions not yet being fully implemented!

What I Learned

For as many issues as I had with this project, I did learn a lot.

NAV MESH USE

I had only used NavMesh once or twice prior to this project, and had to learn how it worked fairly quickly in order to time my characters' motion through the script. I had some issues aligning the animation with the motion, but really that was just a chance for me to get better at setting up Animator Controllers.

CHARACTER ANIMATIONS

As an animator, I stay pretty close to my corner of environment art and development. I generally don't enjoy working with characters beyond the occasional sculpt. Prior to this prototype I spent most of my time working on interactions or the user themselves, not so much worrying about cleaning up animations or transitions. There was a rough learning curve this time around. I had to link up multiple animations and make sure the transitions worked with the NavMeshAgent's motion. While I still don't enjoy the process, I feel much more confident about troubleshooting these areas of a project in the future.

PROJECT/SCENE ORGANIZATION

I'm speaking specifically here about keeping character versions in line and how my scripts are organized. I'm the only one really working on this project in Unity, so I will sometimes let my organization slip for the sake of efficiency… which later becomes a pain in the butt. Tori and I were constantly testing different versions of the Federal Marshal and Lucille characters, to the point where I eventually lost track of which animation was associated with which FBX file. Cleaning out my project helped enormously, but I eventually figured out I needed to be more attentive to my archive folder and to communicate file structures with Tori for when she sends me Unity packages of characters.

Additionally, my scripting has been much more organized. I began using headers in the Inspector to avoid the paragraph of variable inputs, which keeps the scripts much more organized on my end. I also stuck to one primary control script to keep track of events in the scene and overall story elements, while keeping mob behavior and audio as their own separate (but linked) scripts. I've since been able to work much more efficiently knowing where specific functions are housed.
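A small example of what those Inspector headers look like in practice - the component and field names are placeholders:

```csharp
using UnityEngine;

// Example of grouping public fields with [Header] so the Inspector reads as
// labeled sections instead of one long list of inputs. Names are illustrative.
public class ExampleMobMember : MonoBehaviour
{
    [Header("Proximity")]
    public Transform user;
    public float reactDistance = 1.5f;

    [Header("Audio")]
    public AudioClip[] reactionClips;

    [Header("Animation")]
    public Animator animator;
    public string reactionTrigger = "React";
}
```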

PROTOTYPING SCENE

I usually have a practice scene early on in the development process, but it tends to get abandoned once the project has increased in complexity. I kept this test scene close at hand this time around. I've found that the complexity often makes it harder for me to tell where problems are arising, so bringing a prefab into that scene and testing specific scripts and functions has made troubleshooting much faster. I used it for the Head Rotation scene, for testing several animation transitions with the characters, for ensuring the Mob Behavior script was handling audio properly, and as an initial NavMesh learning space.

What’s Next

As we know, this prototype doesn't end here. At the end of the five weeks I managed to debug and troubleshoot plenty of these issues, but there is still much further to go. Tori and I will both be discussing this project in the next week, which requires it to be functionally sound. In January we will be showing our experience at the Zora Neale Hurston Festival in Eatonville, FL, so I will be focusing on getting the visuals up to par and working with Tori to adjust any animations that still need attention. While I have begun many of these tasks, this is the current to-do list:

  • Grouping function for the primary characters.

    • Because the user can control the speed at which they move down the sidewalk, the Federal Marshals and Lucille’s avatars need to be able to slow down and resume normal speed based on how far away the user is.

  • Mob encounters

    • As part of the Mob Behavior script, certain characters will react based on user proximity. This needs to be tied in with the audio (already implemented) and an additional animation trigger.

  • Audio mixing

    • Mob audio needs to sound "muffled" while the user is inside the car and play at full volume once they're out on the sidewalk. Additional audio effects can be tested here as well for outdoor city/background ambience. I have begun this process with the Mob Behavior script, looking at individual phrases and sayings for the characters, but the mob chant, background city audio, and car sounds still need to be brought in.

  • Complete Car Exit

    • Characters now exit the car appropriately, but timing needs to be adjusted for the user’s exit.

  • Implement scaling script

    • Needs importing from prior prototypes.

  • Prologue and Main Menu scene

    • This project only focused on Scene 01. I will create a package of the Prologue sequence from the previous project to be imported and applied to this one. A main menu and additional “end sequence” still need to be created.

  • Looking at environment calibration

    • A reach, but it could be used in the main menu to determine the user's position in the playspace and adjust the environment to them? Not necessary, but it would make future demos easier.

  • Visual Polish

    • Set to the side for the moment; final textures, assets, and post-processing need to be applied to the scene. This also includes additional terrain assets such as grass, trees, and plants.

For all the weird curves this prototype threw at me, I'm pretty proud of what I was able to learn and accomplish, particularly in the past three days. I think that in the next week I will be able to address many of the points on my task list and get some real feedback on the state of the project. I will post an update next week on where I ended up and the feedback I receive from my thesis committee!

12/05/19: Time for a Reboot

Sometimes you just reach a point where what you’re doing in the project isn’t working, and it’s time to start fresh.

I hit that point this week with the Ruby Bridges case study.

All of the issues that Tori and I have had with the various animations really came to a head when I was trying to get the sidewalk animations coordinated. I realized that the system we were using to drive all the characters' locomotion (other than the user's) just wasn't working, and I was spending more time fixing little timing problems than actually setting up the scene. The project was also getting cluttered with previous versions of animations (we found another issue with the male walk cycle causing broken knees). I started over with a fresh project on Monday night, bringing in only the finalized animations as Tori sent them to me, and set up Scene 1 to be driven by NavMeshAgents.

I've only actually used NavMeshes maybe once or twice, so this was a bit of a learning curve. I had to figure out how to coordinate the agents getting on and off the meshes for the car scene, make sure there wouldn't be any interference from the car's agent while all the characters were inside, and then determine the best way to drive them using waypoints. Even with the time the reset took, I'm convinced I got more done, and in a much more organized fashion, than I had before. I kept a test scene to work out the animation issues I was having, and then brought the solutions over into the primary Scene 1.
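The waypoint-driving part boils down to something like the sketch below - the waypoint setup, threshold, and names are assumptions for illustration:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Bare-bones waypoint walker: when the NavMeshAgent gets close enough to its
// current destination, hand it the next waypoint in the list.
[RequireComponent(typeof(NavMeshAgent))]
public class WaypointWalker : MonoBehaviour
{
    public Transform[] waypoints;        // ordered points down the sidewalk
    public float arriveThreshold = 0.5f;
    private NavMeshAgent agent;
    private int index;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.SetDestination(waypoints[index].position);
    }

    void Update()
    {
        bool arrived = !agent.pathPending && agent.remainingDistance <= arriveThreshold;

        if (arrived && index < waypoints.Length - 1)
        {
            index++;
            agent.SetDestination(waypoints[index].position);
        }
    }
}
```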

Still from Test Scene, testing character animation transitions.

Using Test Scene to troubleshoot and test HeadLookController.cs

(originally found at https://wiki.unity3d.com/index.php/HeadLookController#Description)

Getting the characters to move along with the NavMeshAgent was easily one of the most frustrating parts. The instructions are outlined pretty clearly on the Unity website, but some of the animations were jumpy or wouldn't run once the blend tree was triggered. Some of these issues are still unresolved; I avoided further hours of blend tree debugging by taking the one functional animator I had and rebuilding all of the character animations on top of it.
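For reference, the basic coupling of agent motion to the blend tree looks roughly like this - the "Speed" parameter name is an assumption, and the real setup has more transitions layered on top:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch of coupling a NavMeshAgent to a locomotion blend tree: the agent's
// current velocity drives the animator parameter that blends between idle
// and the walk cycle, so the animation tracks the agent's motion.
[RequireComponent(typeof(NavMeshAgent), typeof(Animator))]
public class AgentAnimationSync : MonoBehaviour
{
    private NavMeshAgent agent;
    private Animator animator;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // Normalized speed (0 = standing still, 1 = full walking speed).
        float speed = agent.speed > 0f ? agent.velocity.magnitude / agent.speed : 0f;
        animator.SetFloat("Speed", speed);
    }
}
```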

Animator controller for Federal Marshal 1, showing all transitions and triggers from the car sequence to the end confrontation with the State Officials.

I'm trying to be a little smarter with my scripting this time around as well, writing generalized functions that are flexible enough to be applied to multiple characters, which cuts down on repetition in the script. There's more dialogue in this prototype than in the past, and I had some problems making sure the audio sources for the characters were functioning properly. But we made it through, and the car dialogue now works well. I'll be able to test this more thoroughly once the mob is in and I'm coordinating the audio changes there.

Current State

Status update of characters walking with navmesh. Dec 5 2019.

What’s Left?

As we can see from the above video, there’s plenty of cleanup that needs to happen once the base functions are complete. Implementing door animations, making sure that characters are facing the right way, and adding in the final pieces of the puzzle so that booleans are triggered on time.

I still have a fairly large to-do list now that I’ve got the characters walking together and able to look at specific targets. Implementing that script is the next immediate task. I’m split on what comes after that - I could begin bringing in the mob characters and setting them up. This has to be done before I can start working on any interactive events. But I also need to add the user to the primary group moving down the sidewalk. I focused on getting the characters behaving in their ideal patterns before introducing the user to the scene to interrupt them, but I now need the user to test a script that will keep the group within close proximity of each other. Below is the current list of tasks left:

  • Fully implement “Look At” script for primary group

  • Mob arrangement

  • SteamVR cam setup

    • Includes animation down the sidewalk, group proximity, and coding mob interactions.

  • Scaling environment script (needs importing)

  • Import Prologue scene from previous project version

  • Audio mixing for Scene 01

  • Main Menu

It’s a lot, but things are already going significantly better. I’m starting to implement more organized practices for myself, such as including headers in my scripts to make the Inspector a little kinder to the eyes. Small thing, but it makes me feel better when I have to use it.

The next update will be my overview of how the project went and a video of its state, as well as an overview of the things that need to happen to complete the project for a January exhibition.

11/30/19: Tiny Steps Forward

After the previous update, Tori and I ran into some time consuming animation difficulties on our case study. We had animations that needed to be adjusted to fit the car we’re using in the scene, and Tori needed time to set everyone up in Maya and get those bugs worked out before they made their way back to me. So most of last week was spent cleaning out the project of old animations, testing the new things Tori sent me, then cleaning them out again.

I have the car sequence nearly completed with the new characters; everyone is sitting where they're supposed to be, the animations are timed, and the user can now successfully get out of the car and move down the sidewalk with Lucille and both federal marshals individually. Making sure they stay together as a group is another challenge. The doors aren't fully timed and animated yet because I chose to save that for a later round of tweaking. Right now, I need to get all the broad strokes happening in the scene to make up some ground. There are a few flaws with the characters - one of the marshals has an arm that breaks and spins when opening Lucille's door, and the walk cycle that we're using for some reason breaks all the rig's knees when it's applied. A problem to solve in January; right now, I just want everything in the scene and timed.

User view exiting the car with Lucille.

This iteration has especially been testing my workflow with Tori. I think in a future iteration we need to work more closely together on the motion capture process, because a large portion of my time on this scene was spent just getting all of the characters in the right place at the right time. I have to divide the animations I'm given because they don't match the audio cut for the instructions or the length of the drive up to the school. Creating more intentional shots with a bigger emphasis on timing and physical props might reduce some of the cleanup and issues we've seen this time around. The bottleneck in our skills has been frustrating, as development halted while we solved the animation issues and replaced assets.

Now that I'm moving forward, I'm working with the assets as they come to me. I began testing "encounters" between the mob and the federal marshals, using empty game objects in Unity to represent mob members. As the group moves down the sidewalk, its distance from these tagged objects is tracked. Once the group is close enough, the object is destroyed and the federal marshal reacts by telling the mob member to back off. This will be useful for setting up encounters throughout the scene once the mob members are actually in place.

Test encounter with empty game object. Also visible: the leg deformation with the Federal Marshal walk cycle.
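The test logic is roughly the sketch below - the tag, trigger name, and distance are placeholders, and the final version ties into the Mob Behavior script rather than standing alone:

```csharp
using UnityEngine;

// Sketch of the encounter test: placeholder objects tagged "MobMember" are
// checked for distance each frame; once the group is close enough, the object
// is destroyed and the marshal's "back off" reaction is triggered.
public class EncounterTrigger : MonoBehaviour
{
    public Animator marshalAnimator;
    public AudioSource marshalVoice;        // optional "back off" line
    public float encounterDistance = 2f;

    void Update()
    {
        foreach (GameObject mob in GameObject.FindGameObjectsWithTag("MobMember"))
        {
            float distance = Vector3.Distance(transform.position, mob.transform.position);

            if (distance < encounterDistance)
            {
                marshalAnimator.SetTrigger("WarnMob");
                if (marshalVoice != null) marshalVoice.Play();
                Destroy(mob);   // placeholder removed once the encounter fires
            }
        }
    }
}
```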

Moving Forward

This is technically the last week of development for this project and in terms of goals, I’m still back in Week 2. I am prioritizing the completion of the sidewalk locomotion with the group, the encounter at the end with the state official, and the mob placement by the end of the week. A reach goal would be to spend time integrating audio with the scene, assuming I can get all the assets in. Visual polish and environment development will have to wait until cleanup time in January; for now, a functional scene for the user is all I’m looking for.

04/07/19: Rebuilding for Phase 2

Over the last week, the majority of my focus has been on showing demos of our thesis project at the ACCAD Open House and the Student Art Collective. Quite some time has passed since Tori and I were able to show our progress to anyone outside of ACCAD, and since we didn't have a working prototype from last semester… we needed to make one that could be shown and experienced by the public.

I took what was our Fall prototype and completely rebuilt it between Saturday and Tuesday evening. Part of this was to bring the project forward into a new version of Unity, but I also wanted to include the height adjustment from Phase 1 and a different mob configuration. This build would also require the user to begin the experience sitting on a bench before standing to progress, an interaction I have not tested before in this scene.

My Phase 2 project was temporarily put to the side in order to get this ready for the public, so I was not able to test out the gaze-based interaction. I decided instead to hit a middle ground between Phase 1 and Phase 2: timed teleportations. Not under the user's control at all, but a little less disturbing than the sliding motion we previously used. This included a fade in/fade out to signal that the motion was about to occur - a fairly simple visual that actually caused a ton of technical issues. The fade would show up on the monitor but not in the headset. For future reference, there is a SteamVR_Fade script that you're required to use in order to make the fade appear properly in the headset - normal UI settings do not seem to work in this scenario!
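As a hedged sketch of how that fade-then-teleport can be wired up with the SteamVR plugin (the timings are placeholders, and the namespace/placement of SteamVR_Fade can vary between plugin versions):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of a timed teleport with an in-headset fade. SteamVR_Fade.Start
// draws the fade on the HMD itself, unlike a screen-space UI image, which
// only shows on the monitor. The SteamVR_Fade component is expected to be
// on the VR camera for this to render.
public class TimedTeleport : MonoBehaviour
{
    public Transform rig;           // the [CameraRig] / play area root
    public float fadeTime = 0.5f;

    public IEnumerator FadeAndTeleport(Vector3 target)
    {
        SteamVR_Fade.Start(Color.black, fadeTime);    // fade out in the headset
        yield return new WaitForSeconds(fadeTime);

        rig.position = target;                        // move the play area

        SteamVR_Fade.Start(Color.clear, fadeTime);    // fade back in
    }
}
```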

The new environment height scaling feature also changed how I placed certain assets in the scene and parented things to each other, as offset pivot points and the use of a Unity Terrain asset caused some weird placement issues when the scene was run. And through both demos this week we faced some audio problems, with the volume being either too low or coming out of the wrong ear. Two solutions to this: better headphones, and making sure the VR camera has an AudioListener attached. The SteamVR camera prefab does not have one attached automatically! And yes, it took me way too long to figure that out.

I rebuilt the Prologue sequence based on some feedback from earlier on in the semester, including more images of Ruby and taking into account the order in which the images appear to create better flow in the scene. For demo purposes, I also included a “start menu” triggered by the operators (Tori and I - spacebar to start the prologue), and an “End of Phase 1” scene that loops back to the start menu.

The Student Art Collective on Tuesday went well - we were set up at a space in Knowlton with the Vive Pro and Wireless Adapter. That was actually my first time using the Wireless Adapter for anything, and it was perfect for this project. Most of the attendees were students, though we did have a few parents and professors show up and try out the scene. It was a three-hour exhibition, which gave Tori and me a good measure of how long mobile setup would take and a chance to get back in the groove of giving a 30-second explanation/VR prep to new users. There was a short calibration process during setup with the bench to make sure users were facing the right way and the bench was in the center of the play space, but everything ran smoothly after that.

Friday afternoon was the ACCAD Open House. Tori and I showed our six-week prototype at last year's event, and this time around we played a video of it on a screen to show our progress in the year since. We didn't have the Wireless Adapter for this event, but the scene worked just as well with a tether. We had some wonderful conversations with guests about our work and where it's going. It was easier for me to speak about our project this time around - I felt much more informed and confident now that we've grown from the "exploring technology" phase to the "conceptual development" phase.

FEEDBACK and CRITIQUE

Both of the demos provided valuable information. The most common reaction and comment we received after users exited the experience was about the height change in the scene. Having the mob members towering over and chanting at users, some of whom are used to towering over others, was intimidating and placed them in the correct mindset for this experience. We also heard that they appreciated the prologue at the beginning as framing for the experience. It seemed that more of the guests this year had heard of Ruby Bridges before, and one teacher even told us she has a classroom of middle school kids who love Ruby's story.

The main issues we experienced were technical or had to do with user flow in the experience. Audio was a real issue in the beginning - fixed by cranking the volume to accommodate the noise of the space and using better headphones (thanks, Tori!). The fade in/fade out of the scene seemed to be fixed by having both the SteamVR_Fade script and the original Fade image active, though it would sometimes flicker between teleports. In the Prologue sequence, images appear around the user in a circle - which would be no problem if the user were in a spinning chair, but on a bench it tends to break flow when they have to turn their head all the way back around to continue looking. Some users who stand to get out of the car will continue to walk around, while others stand in place - not really an issue, but it poses a risk of tripping over the bench unless Tori or I move it. This was especially dangerous with the wireless demo - users without a tether are more likely to forget about it and take off. The Student Art Collective demo required one of us to stand at the periphery of the lighthouses once a user was in, to make sure they didn't wander off into the crowd or walk into something.

CONCLUSIONS

Overall it was a great experience, and I appreciated getting to see how far we've come in the last year. And now we have this great demo that I can use to prototype for Phase 2! The upcoming week has calmed slightly in GRA and interview obligations, so I will be able to actually catch up on my production schedule and begin implementing it in this prototype. Along the way I'd also like to address some of the issues that came up this week and smooth them out, such as the asset pivot problems I discovered and the weird fade flickering.

Phase 2: Continuing the Prototype

After completing our 4 week project, Tori and I had a talk about where we would go with the next 6 weeks to advance this project. We decided to continue in the direction outlined in my last post - creating the first steps of a vertical slice from the story of Ruby Bridges - with Tori focusing on organizing the animations and drama, and myself focusing on creating a full build in Unity.

Our four week prototype had a loose menu structure that I created to make it easier for us to test different functions and for me to understand how they work. Those were purely technical exercises. This time, I will be creating a prototype that contains a full narrative. The user will begin the experience as Ruby, with minimal control of their surroundings. From there, the scene will restart and the user will gain the ability to navigate the environment. There will be interactable objects to collect and examine, containing background information from the time period and location. While we want to avoid creating a full-fledged game with this experience, I will be using game design elements to encourage exploration of the environment so students will actually find this information.

We took into consideration the critique that we received from our initial prototype. Our objectives were reframed to focus on the story and less on the technology, and we will continue to focus on function and interaction instead of aesthetic appearance. Aesthetics are a question we can begin examining after this project. Our research has already begun expanding to include psychology, learning theory, and empathy.

Proposed work schedule for 6 Week Prototype.

Above is the working schedule I've created for my part of the prototype. Tori's schedule lines up with mine so that we're both generally working at the same pace and form of development. 

I began working on the general layout for our project, considering the flow of the experience and what functions would be available at each stage. While this is still a broad layout, it's a sketch of the experience from the start screen all the way to the end of interaction. Tori and I will be meeting this week to finalize this plan and discuss details. I will also be starting the general layout of the experience, with a blocked-in environment and basic navigation for the user.

Image of notes on the layout of the experience.

I also continued reading some of the research gathered over the last four weeks: 

These readings covered a wide range of topics. Research on the effects of virtual immersion on younger children is nearly nonexistent, and that is mentioned several times throughout these papers. A few of them had to do with digital representations and how users' behavior changes when their avatar reflects a different identity. Children develop self-recognition around the age of 3 or 4, and these connections grow along with executive functions. It has also been shown that children between the ages of 6 and 18 report higher levels of realness in virtual environments than adults, and that children have developed false memories from virtual reality experiences, believing events in the virtual environment actually occurred. I was also introduced to the Proteus effect, which suggests that changing a person's self-representation in VR has an impact on how they behave in a virtual environment. By placing a student in Ruby's avatar, we would also shift their judgments of Ruby toward situational ones and create an increased overlap between the student and the character. When we're thinking about placing a student in Ruby Bridges' shoes and considering aspects such as the aesthetic appearance of the environment and the interaction between Ruby and the other characters, we have to remember that this experience may be much more intense for younger students, who experience a higher level of environmental immersion than adults.


Over Spring Break I spent my time at the Creating Reality Hackathon in Los Angeles, CA, where I got to collaborate with some great people in the AR industry and work with the Microsoft HoloLens for two days. Our group worked on a social AR tabletop game platform called ARena, using chess as a sample project. While we were not successful, it was a great lesson in AR development and approach. I also gained exposure to other headsets and devices through the workshops and sponsors - the Mira headset runs from a phone placed inside the headset, and there are a variety of Mixed Reality headsets that use the same Microsoft toolkit as the HoloLens.

Workshop showing the Mixed Reality Toolkit with the Hololens.

While the Hackathon was a great technical and collaborative experience, it also opened up other possibilities for our current project in the long run. Part of our research is discovering what virtual reality itself brings to this learning experience beyond just being cool or fun to experience. We already know that this experience is not meant to replace reading the book or any in-class lecture - it provides another medium for students to experience and understand this story. After spending the week working and thinking in AR, I started thinking about how we could better bridge the gap between the physical experience in the classroom and the virtual experience. An AR-to-VR transition that interacts with the physical book would be an interesting concept to explore here.

The technology doesn't quite seem to be there yet - there's no headset out there that can switch from AR to fully immersive VR. But Vuforia seems to have this function available, and it could possibly be accomplished on a mobile device. There's even a demonstration recorded at the Vision Summit in 2016 showing this ability (at time 22:00), documentation on Vuforia's website about AR-to-VR in-game transitions, and a quick search on YouTube shows other proof-of-concept projects with this ability. This isn't a function that will really be explorable until much further down the line, and it potentially won't be possible until the right technology exists, but it raises questions about how we can create that transition between the physical and the virtual.

From some of the participants at this hackathon, I also learned about the Stanford Immersive Media Conference this May, which will feature talks by several of the authors of the papers we've been reading for research and others involved with the Stanford Virtual Human Interaction Lab. This is potentially a great way to interact with others who are doing work in the same areas of VR and AR, and discuss their research. 

Reality Virtually Hackathon!

Earlier this month I was able to attend and compete in the Reality Virtually VR/AR Hackathon, hosted by the MIT Media Lab. I registered, was accepted, and started connecting with other participants via a Facebook page. Everybody was really friendly and excited about working with VR/AR technology! I saw people from all kinds of fields and backgrounds, from students to industry professionals. About two weeks before the Hackathon, everyone started posting their bios and work experience to see who was interested in working together or finding a team. I spoke to several participants, but one reached out and wanted me to join their team. They knew they wanted to work with the recently released ARKit, all of the team members were iOS developers, and they needed someone from the 3D world.

So I drove out to Boston for the Hackathon, and that first night we had a brainstorming session, just throwing ideas around until something stuck. We decided to tackle the problem of collaborative AR - something that had not been done successfully in ARKit before. And by the end of the two days, we had it! It was definitely more of a technical challenge than an artistic one, but I made the art assets we used to demonstrate its capabilities and tried to get the team to think through a design process as well as an engineering process.

The video above was made during the competition to show our platform in action. I'll be creating a more comprehensive video in the next few weeks. 

"Team Two" ended up winning our category, Architecture, Engineering, and Construction, and Best Everyday AR Hack from Samsung! 

Team 2 after the Closing Ceremony

The overall experience was amazing. This group worked well together and was able to solve a problem that opens up a lot of opportunity for developers. I learned a lot from them - I had never worked with mobile development and had no idea what was involved in developing for iOS, or for AR for that matter. The workshops before the event were a great way to get into the headspace of VR/AR development and ask questions about various aspects of the industry. The Facebook group is still alive, and I made a lot of connections from the event. I'm planning on attending again next year and maybe trying to go to the one at Penn State as well.


While I was at the Hackathon, I was also working on a game level for my Computer Game 1 class. This was a team project centered around the theme of a broken bridge. Each of us had to create a level using different game mechanics to get around the bridge. Mine was to collect planks that had washed downriver and carry them back to the bridge in order to repair it. I found, especially during this project, that my C# scripting skills are improving a lot and I'm starting to understand Unity a lot better. Of course, I still get a little overexcited when building scenes, so... even though this was a prototyping assignment, I got to play with all kinds of fun settings.

The next couple of weeks are going to be intense. I have a VR prototype that I'm working on involving Hurricane Preparedness (more on that soon), and an AR MindMap project I'm working on to explore my own process a bit more. Next week I should have a computer game final project in the works as well- not too sure what that's going to look like just yet. There will be plenty of process work to post on here! 

Updates!

It's been about 9 months since I last posted on here, but now that life is settled a bit I can explain what I've been up to and what's happening next! 

I graduated from CCAD in December and immediately began the process of applying to the Design MFA program at The Ohio State University. Between work, moving apartments, and seeing family, I decided to take a step back from animation and assess what my next move was going to be. I found out in March that I got accepted to OSU as part of the Digital Animation and Interactive Media track, and spent the summer working as a Residential and Teaching Assistant at CCAD as part of their College Preview Program. At OSU I also work as a Graduate Teaching Assistant for an undergraduate Design Foundations course- it's been fun working in a college classroom, and I've really enjoyed taking a step closer to education and the teaching process. 

I started the MFA program last month and hit the ground running. My current focus is on virtual and augmented reality research and its potential applications to education and lifestyle. That's... quite a large topic, so I'm hoping to spend the next year experimenting and researching to narrow down my interests. I've already been able to experiment with some new processes in class - we used the motion capture lab here at OSU to capture data for an animated music video. Another teammate and I used a combination of particles, fluids, and fur in Maya and applied them to the figures. I haven't worked with dynamics in a while, and never with motion capture, so this was a fun learning process. I'll be posting the full project on my home page soon with the completed video.

I'll begin posting more project progress on here again and writing a little more about what kinds of research I'm doing for the future. For now I'm just excited to get started! 

CTNX!

I went on a brief hiatus this past week because I flew to California to attend the CTN Expo! It was my first year attending the expo and my first time on the west coast, so even though I was still getting work done, I stuck to Instagram and Facebook for updates while traveling. I got to meet a bunch of fantastic artists and was lucky enough to get a few incredibly useful portfolio reviews. I saw the Pacific Ocean for the first time while waiting around the airport (some fun sketches below), and had some spare time to go see Griffith Observatory. And on the Thursday before the convention, we visited Blizzard's Irvine campus!

The best part for me was getting to be around a lot of people who clearly love what they do. I didn't take many pictures of the actual convention because as it turns out, I'm terrible at taking pictures of events I go to. But it was bigger than I anticipated and I managed to come home with some pretty great prints that I may be doing some practice environments with. 

I tried to do some work in between being at the convention and boarding various planes. Most of it was more wood sculpts and finally getting my normal maps working properly. I did a hi-res sculpt of my landscape and put it in Unreal just to see how it was looking; here are the results:

I won't be able to get into the labs to work on foliage until at least Monday night, so I'm also working on my cabin and getting all the details fixed up for that. Then I can start moving fully into Unreal and putting it all together. I'm excited about how it's coming along, but deadlines are looming, and I'm hoping to have some more filled-out screenshots in the next few days.

 

Cabin Progress

These updates are a little late - I was volunteering at GDEX this year, the game development expo for the Midwest. It's hosted in Columbus, and I helped out as a presentation room attendant. Afterwards I got to walk around and see what other game developers in the area are doing, and was really surprised at the variety! CCAD also had a booth at GDEX and was showing Project Sphincter to everybody - I got to watch people play our game level and actually really enjoy it! That was pretty great, honestly.

On Friday I made some more progress on my cabin, using some of the plank sculpts in ZBrush to start putting together a rough pass of all the wooden parts of the cabin: 

So this is a little late, but my next two milestones are to block in some rocks in the environment and model out the plane (minus the engine detail). I'd also like to go in and fix my landscape for the level. Those brushes I downloaded to do the planks also have some great landscape tools, and I'd like to get that going more realistically. If I've got some spare time in between all of that, I plan on doing some physics tests in Unreal just to see what I could do to get some water flowing in the level. 

 

A Break

Time for some good news: I finished my demo reel! It's now on the main page of my site as Environment Reel: Oct 2016. I got some good critique for it and rearranged a few things, and have submitted it for a few possible opportunities. 

Now it's the last day of my Fall Break (and likely the last day I'll have to do absolutely nothing for a while). Yesterday I tried to step away from the computer for a while and did some sketching. Not quite my Inktober schedule or prompts, but it was relaxing to just sit and draw what was around me for a bit.

Sketches of Mr. Otis, the black lab that I was dogsitting this week! 

Unfortunately with my school schedule I've been totally wrapped up in making games but not so much in playing them. Today I dust off my PS3 and spend a few hours as a pirate in Assassin's Creed: Black Flag! 

Welcome to my blog!

Hello! 

In the past I've hosted my blog on Wordpress, but I've decided to move my posts here. My previous blog was sporadically updated and contains work from early on in my time at CCAD. I'll keep it up and you're more than welcome to visit it here, but from now on I'll be posting all updates on this page. 

Right now I'm heading into my last semester here at CCAD, and my focus has shifted a little bit. I'm gearing my art towards the role of Environment Artist and I'm working on building up my portfolio. Most of what I'll be posting will be updates from current projects including sketches, models, project planning, and progress videos. But I'll also be showing the development of my personal branding, personal projects and sketches, and research into other artists and studios. 

I'm working on some projects that I'm pretty excited about, and I'm looking forward to sharing the journey with you! 

Abby