4 Week Wrap Up

Framing the Project

Over the last four weeks, Tori and I have been working on a proof of concept using VR and motion capture. The larger goal for our project is to create an educational VR experience that promotes the development of empathy through elementary school literature. We have chosen to examine this by developing an experience based on the story of Ruby Bridges, though for this four-week prototype we focused our efforts on learning new technology and starting research.

Why is this a problem? 

Virtual reality has been rapidly developing in areas such as video games and medical research, but less work has been done with children, empathy, and story, and how these factors can be tied into education. 

This creates a unique challenge for me as a designer. VR creates a new range of possibilities for interaction, especially when combined with game design concepts. Examining these interactions and their implications in the context of the story will be a large role for me in the development of this project, and something I began to do with the prototype. 

Results

Above is a recap of my progress over the last four weeks. It includes a demonstration of each of the navigation, UI, and animation controls, as well as a sample scene created from the point of view of Ruby. 

My personal goals were to examine the interaction possibilities in VR and learn to develop them for use with the Vive. I specifically worked with navigation, UI menus/panels, and controlling our animations using these menu assets. Going into this prototype, I did not know how to do any of this in Unity using VRTK or SteamVR. I had to learn how the technology worked and explore the potential options in order to move forward. 

In the grand scheme of the project, these factors are very important to how the user experiences the scene, and they allow us to start asking questions about their impact. For example, if we limit a user's ability to navigate based on the role they're playing in the scene, how will that impact their impression of that role? If the user has no control over their environment as Ruby, will this convey the lack of control a six-year-old would experience? And if the user is able to navigate, is it less immersive to present them with a menu of tasks or to create more freeform navigation using the pointer and no text? Tori and I don't have answers to these questions yet; our research will help point us toward the type of experience we will create. 

Above are the slides from our presentation in class.

Feedback

We received great feedback over the last four weeks. Here are some of the thoughts given on our prototype.

  • Consider looking at Suzanne Keene, Dr. Bruce Perry, and Mary Jordan in your research.

    • Bruce gave us some great avenues to explore. Although I haven't gone too deep into their research yet, Keene does some interesting work with how multimedia setups can be used in a museum setting and Dr. Perry works with children's mental health and psychology. I need to speak with Bruce again about what work Mary Jordan does.

  • Drop back on your aesthetic choices for now. Focus on function, determine what you want the experience to be first.

    • In our next steps, I agree that the aesthetics are going to need to be considered but not fully developed until we know exactly what we want out of this project. Tori and I have discussed in the past the fine line we walk between making an impact and creating something that actually scares students. While it's certainly not a focus just yet, we know that we don't want the final experience to be too realistic in appearance but still authentic.

  • Can you cut this project back to a more accessible technology for the classroom?

    • A fair point. HTC Vives aren't cost-friendly and are unlikely to be found in any public school environment. Google Cardboards have already been implemented in classrooms and are much more feasible at a large scale. I think we can absolutely adapt what we've learned on a smaller scale and create something cross-platform; we just have to adjust for different mechanical experiences and technological issues. A conversation for the upcoming phases.

  • More research on developmental psychology and learning theory.

    • Absolutely. It's already on the list of topics we're gathering media on. I'll also be taking a class next semester on learning and memory, which will hopefully be bringing me more materials and context for this project.

  • What are the qualities of VR that would enable empathy or learning? Work towards answering this.

    • This ties in with choosing more accessible technology. Something we'll be working on answering - or at least guessing at - in the next couple of weeks.

  • Write your next steps with your objectives in mind- exploring empathy and storyness. Start bringing context into your interactions.

    • I am already adjusting my next steps to bring the narrative and the concepts we've been researching into these design decisions (discussed below).

Next Steps

Thinking about the prototype and the feedback we got, I would like to move forward and use these tools to develop a full narrative prototype. For me, this means mapping out a narrative framework for the experience with a beginning, middle, and end. This will include a more developed environment, with a blocked-in scene for the school, the neighborhood, and the general placement of props as they would be in the actual location. The experience would open with the user as Ruby, walking up to the schoolhouse with protestors surrounding the sidewalk - and sound in the scene this time. The user would be forced to walk her path before being returned to the scene to navigate on their own. The final result would be a prototype of a vertical slice - not polished, not focused on visuals, but purely interaction and narrative. 

While still complex, this scene will not require learning as much technical skill as our four week proof of concept. Therefore I will be using that time to continue to catch up with research and focus on the design of the experience. This step feels like a good leap from our technical exploration to trying out what we learned in a narrative setting. 

Animated Control

The main focus of the past week was working out how to control animations in Unity, and create a menu for the users to do so within the scene. This was going to be the biggest challenge for me so far, as my experience with the Animator is minimal. Tori imported some of our first mocap data into the scene last week and I watched that process, so I understood generally how to navigate the windows and what basic settings would do. 

At first, I thought working with the Animator and swapping between animation states would be the best way to do this. That turned out not to be the case at all, and I spent two hours on it before realizing I had misunderstood how animation states work. 

Enter the Timeline, a feature of Unity that I had no idea existed until I started sifting through the tutorials on the Unity website. We weren't trying to blend different animations together; we just wanted to be able to pause, play, and restart whatever was playing. The Timeline allowed me to do this. I was able to access the Playable Director component on each game object driven by a Timeline, and use Unity Events with a UI Button to attach play/pause functions. 

Screenshot in Unity of Animation Controls added to Headset Menu.

This is basic functionality, and there are definitely still bugs that need to be worked out. While the animations do restart, they restart from the position that they were in when you pressed the button. I would just need to write a script to start them back from their original places at the beginning of the scene. For now, connecting the button with the action completed my goal. I spent Tuesday watching more of the Unity tutorials to understand the Animator a bit more and how it connects with the Timeline, so in the future editing animations will make more sense. 
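The play/pause wiring, and the restart fix mentioned above, could look something like this minimal sketch (the class and method names are my own; it assumes the Playable Director is assigned in the Inspector and the methods are hooked to the menu's UI Buttons via Unity Events):

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Sketch of the menu-based animation controls. Assumes the scene's
// Playable Director is assigned in the Inspector, and these methods
// are wired to UI Buttons through Unity Events.
public class AnimationMenuControls : MonoBehaviour
{
    public PlayableDirector director;

    public void PlayAnimation()  { director.Play(); }
    public void PauseAnimation() { director.Pause(); }

    // Restart from the beginning of the Timeline rather than from the
    // current pose, which is the restart bug described above.
    public void RestartAnimation()
    {
        director.time = 0;
        director.Play();
    }
}
```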

I built the animation controls into the menu, and attached that menu to every other navigational scene. 

Unity screenshot: Active game with Pause button pulled up in the Inspector, showing the button function using the Playable Director.

On Thursday, Joe critiqued the level that we had and identified several issues that needed some work. Some were similar to those pointed out by Maggie.

  • Controllers. The Left and Right controllers are getting confused, even by me. He suggested finding a way to distinguish them in-game, maybe by changing the colors of the models, in order to prevent incorrect instructions.

  • 2D scene has mislabeled Menu options. This has been fixed in the most recent version.

  • Text for the 2D map is too close to the periphery of the mask. It needs to be relocated closer to the center of the screen.

  • Correct laser activation. Though I did fix the laser switches, I need to make sure every level assigns menu control only to the Left controller. This is a matter of making sure all the correct options are selected in each level; I believe one or two got overlooked.

Joe also presented two ideas on how we might organize the whole experience based on the scenes he saw. 

  • Using the controllers to pick up books around the scene in order to gain information about it. Similar to picking up "notes" in popular games, but giving the same control as the interactive cubes in the scene. Promoting immersion and interactivity.

  • Starting the experience with the user as Ruby, no matter what. They must experience the scene as her first walking up to the school. Then afterwards, reloading the scene and giving control of the scene to the user via navigation and animation controls.

His second comment harmonized with other discussions I've had with Tori and Maria about how user control can be used to emphasize narrative elements. We've had suggestions about making the user Ruby's height as they navigate the scene, which would definitely create an impression. On the flip side, for a young student there is a fine line between creating an impact and pushing too far into scaring the student - something we have to consider when we make these choices.

I did like the idea of starting the experience with a prologue-type event, and really pushing that lack of control on the user to encapsulate her experience. So I created a scene where the user follows an animated null object up the path to the school. When the user reaches the front steps there's a pause and the scene changes to one of our test navigation scenes. 

I ran into some new technical issues for this scene. For one, while the user is bound to the animation of the null object, they can still physically step away from the object into the scene. I will need to lock the transforms for the user and force them to experience her route as it is. I also need to deactivate the teleportation, lasers, and menu controls. A non-user-related problem was the actual scene transition. When the player reaches the steps, a scene does load, but it is not the one that I selected. There's a trigger at the top of the steps, and I think something may be wrong with the colliders and tagging system, so I'll need to try a few other methods and make sure there isn't something in the scene interfering. More debugging to come. 
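For reference, a sketch of what that trigger could look like once debugged (the scene name and "Player" tag are my own assumptions, and the target scene would need to be listed in Build Settings):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the scene-change trigger at the top of the steps.
// Assumes the trigger object has a Collider with Is Trigger enabled,
// and the player's collider is tagged "Player". "NavigationTest" is
// a placeholder scene name.
public class StepsTrigger : MonoBehaviour
{
    public string sceneToLoad = "NavigationTest";

    private void OnTriggerEnter(Collider other)
    {
        // Only the player (not the animated null object) should fire this.
        if (other.CompareTag("Player"))
        {
            SceneManager.LoadScene(sceneToLoad);
        }
    }
}
```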

Tori and I also sat down and started working out our research questions for this project. We figured that a good format would be to have one overall statement for the project, and then two other questions that relate directly to the areas we're each exploring. That conversation started two weeks ago when we discussed our general directions, but we were able to distill them down into working statements. As it stands, my questions are: 

  • "How does the combination of motion capture and VR enhance the fostering of empathy in elementary school literature as an educational tool?" (Main research question)

    • "How does user interaction with environmental elements reinforce informational transfer?"

    • "What forms of navigation promote exploration of a narrative scene in a virtual environment?"

The phrasing of the statement is still being worked out and edited, but it feels like we're making progress defining the goals and direction of our project.

NEXT:

With these four weeks behind us, I'll be going through the scene and pulling together all of my process documentation. Tori and I will be presenting our four week proof of concept, and our presentation will include a full video showing all of our progress to date. It will also give us time to pause and discuss our plans moving forward, and consider everything that we've been learning and researching during this time. 

Debugging Movement

Last weekend, Tori and I worked with two actors in the motion capture lab to gather data for our scene. Zach and Shaun played four different roles in their captures: angry protestors, the police, Ruby's mom, and the teacher. The capture itself went fine - both actors were great and gave us a good variety of motion to work with and fill the scene. We also included a marker for Ruby, just for our reference later in the scene. 

This weekend, we took more recorded captures with Cynthia and Mckenna from the Theater department. We captured both of them at the same time acting together in similar roles- as mother and daughter, teacher and student, protestor and cop. Their captures should give us enough variety in motion that between last weekend and this weekend we'll be able to populate the scene. 

Still from motion capture data capture, 2/18/18. Cop confronting a protester.

While all of the recorded captures went really well, I ran into some issues last weekend with Unity and SteamVR. There seems to be a bug in the SteamVR beta where the controllers are deactivated when playing the game in Unity. They still function when pressing the menu button, but in the actual scene - nothing. Lakshika was kind enough to come in this weekend and work with me once our recorded captures were done. We discovered that by bringing old SteamVR files into the project, the controllers will function properly. Truthfully, I have no idea why that's the case, but I'm not going to question functional technology. In a clean file, we were able to bring the actors in live and have them put on the headset to see the scene as they were acting. They were also able to use basic teleport functionality with the controllers, although I noticed that the character models start to offset with each teleport. We should be able to fix that with a quick script that attaches the character to the headset as the player teleports, then releases it. 
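A rough sketch of that attach-and-release fix, assuming the two methods get hooked into VRTK's teleport events (all names here are placeholders):

```csharp
using UnityEngine;

// Sketch of the teleport-offset fix: parent the live character model
// to the headset for the duration of the teleport, then release it.
// Assumes these methods are subscribed to VRTK's teleporting and
// teleported events; field names are placeholders.
public class CharacterTeleportAnchor : MonoBehaviour
{
    public Transform headset;        // e.g. the CameraRig's camera (eye)
    public Transform characterModel; // the streamed mocap character

    public void OnTeleportStart()
    {
        // Keep the character's world position while re-parenting.
        characterModel.SetParent(headset, worldPositionStays: true);
    }

    public void OnTeleportEnd()
    {
        characterModel.SetParent(null, worldPositionStays: true);
    }
}
```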

On Thursday I brought in two other forms of navigation: a radial menu attached to the right controller, and a 2D map that can be accessed in the main menu at any point by the user.

The radial menu can be accessed just by touching the touchpad on the right controller and circling to whichever destination the user wants. Clicking the touchpad will then take the player to that specific beacon. There are currently four: at the front door of the school, at the end of the sidewalk, in the street, and over by the table of interactive cubes. 

In the 2D menu, I took a screenshot of an orthographic top view of the map. I then placed buttons over these same four areas where the beacons are. When the user pulls up the menu, a button there will activate the 2D map. From there, the user selects any of the buttons on the map and they will be transported to the beacon in that area. 

In-Game screenshot of 2D Map on the headset menu.

In-Game screenshot of radial menu.

Of course, I spent some time this weekend debugging the issues we found after play-testing all the navigation changes. The major issues found: 

  • TOUCHPAD SCENE:

    • Teleportation on right controller turns back on after first successful jump using the Touchpad.

    • Teleporting with the Pointer and then attempting to use the Touchpad results in an offset from the beacons that continues to grow.

    • After fixing the first issue, menu laser was no longer turning back on for selections.

  • 2D MAP:

    • Menu missing entirely, though the pointer still appears.

    • Teleport offset (again)

    • Floating off the ground (again)

I chose to limit the Pointer teleportation to the left controller in all levels, for operational consistency and less confusion overall in the controls. This solved the problems with the Touchpad presses when trying to use the radial menu to navigate. The pointers turning on when using the Touchpad were resolved with some scripting changes, and the teleport offset was just a matter of making sure the CameraRig was being used as the transform reference instead of its parent, the SteamVR object, which was pulling it off course in the hierarchy. A simple fix, but frustrating to actually figure out what the issue was. 

Debugging while in-game. Bezier pointer still appearing and enabling teleport while using the radial menu.

Achieving victory over the debug list meant it was time to bring in someone else for some outside feedback. My friend Maggie tried out all of the scenes and functions and offered some critique on the button mapping and overall function. 

  • Confusion over the controls. The trigger and touchpad can be confusing as there's no tutorial or hints other than me explaining the controls in person. For someone unfamiliar with the Vive, this really makes navigation difficult.

  • 2D map needs some color and highlighting to indicate the interactive areas for the user.

  • Placing colliders around the buildings to avoid "phasing" into spaces we don't want players to be. Very disorienting and it breaks the immersion significantly.

As a result, I went ahead and immediately added controller tooltips so users can understand the button functions. These tooltips appear for 15 seconds when the scene starts, then deactivate. However, there is a toggle in the main menu if the user needs help or would just like to reactivate them. 

Tooltips present on the controllers at the beginning of the scene.

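A minimal sketch of how that timer and toggle could work (names are my own; "tooltips" would point at the controller tooltip objects, and the toggle method would be wired to the menu via a Unity Event):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the tooltip behavior described above: show the controller
// tooltips for 15 seconds at scene start, then hide them. The menu's
// help toggle calls ToggleTooltips. "tooltips" is a placeholder for
// the tooltip game object.
public class TooltipTimer : MonoBehaviour
{
    public GameObject tooltips;
    public float showSeconds = 15f;

    // Unity runs Start as a coroutine when it returns IEnumerator.
    private IEnumerator Start()
    {
        tooltips.SetActive(true);
        yield return new WaitForSeconds(showSeconds);
        tooltips.SetActive(false);
    }

    public void ToggleTooltips(bool on)
    {
        tooltips.SetActive(on);
    }
}
```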

Thursday night, Tori was able to get one of our recorded captures into Unity and functioning with a character model. We started with one of the protester walks just to get the process down, and she'll be working on getting a variety of animations and characters into the scene. I started researching how to control animations in Unity, and it's not a clearly defined process. There's confusion between legacy scripting in Unity and what currently works with the Animator, and I keep getting mixed up between the two and accidentally working with methods that are no longer used in newer versions of Unity. It seems most of the play/pause functions have to do with stopping time in the game, but when I attempted this it stopped my use of the controllers in the Simulator. Definitely going to need some more research in this area. 

NEXT:

This next week is going to be all about animation controls. I've considered using sliders, buttons, and toggles but really it's going to come down to what I find in research and what actually works in the scene. Once I get this working on one of the characters in the scene, I'll move on to applying this to multiple characters/animations. I'll also be working on some smaller tasks: adding proximity tooltips to objects around the scene (information transfer), colliders around the school, and adjusting the 2D map with button highlights and a smaller surface (too far into the periphery). 

Navigation and Menus

As of today, our Unity file is set up to operate as a good testing ground for interaction! Last week was really focused on honing our test ideas and getting the basic scene set up. This week, I focused on implementing navigation, interaction, and a functional menu to control our testing in the future. 

On Monday I went ahead and tested, in the Vive, the basic navigation and interaction functions I set up last week. I set up the teleportation using the straight-line pointer at first, and realized that this can prove difficult when moving around a large space. The straight-line renderer in VRTK makes it hard to gauge distances, though it works excellently for UI selections. Keeping this in mind, I changed the teleportation to a bezier pointer. The arced line is easier to see in the space, especially when navigating elevated elements such as stairs or ramps. 

Using the Bezier pointer for teleportation.

I also tested out the interactive cubes. After some debugging, all the different highlighting settings worked. Some have outlines, some turn to solid colors, and they can be grabbed and tossed around the scene. I did run into one issue where the cubes could not be retrieved from the ground plane - it turned out the player was floating 0.5 units above the ground. Adjusting this value fixed the problem pretty quickly. 

Initial test in the Vive, 2/5/18.

The Simulator now works in this project file, so I can roughly test functions without having to load into the Vive every time. In the past, troubleshooting often took a long time because I would playtest only after making fifty changes, which made it difficult to find issues when they arose. It's been a conscious effort this time around to test every major implementation, and it's paying off. 

I switched gears for a bit to work on UI elements in the scene. Because Tori and I are going to be testing a variety of interactive properties (some variations of each other, others contradicting each other completely), I made a main menu for the whole project so that we can easily switch between scenes. It also includes toggles for the interactive objects if we don't need them in the scene. In the future, I will be adding controls for the motion capture data and animations. While I've made menus for Unity in the past, this one is attached to the headset and moves around with the player. Following the VRTK tutorial for a Headset Menu gave me a great start on the format, and I then adjusted it to fit our purposes. 

State of Headset Menu, as of 2/8/18

Tori and I did a playtest together of the first basic teleportation level, just to make sure the buttons work and debug a few issues from Tuesday. Below is a video of our test and working through the ground plane issues. 

Once the project framework was in place, I nailed down what exactly the navigation functions were going to look like and looked for any holes in my logic - thinking through what the impact on the player would be, what level of control they would have, and what purpose these changes would serve. All four of these options address very different concerns in the environment and are viable options to explore for potential user interaction. I have a pretty good idea of how to set these up in Unity, and I'm going to spend this weekend actually doing that. Next week will have more information on the development and results. On Tuesday, Tori and I will do another playtest to determine whether we need further exploration in each scene. 

Planning out navigation functions for the first phase of my work.

On the theoretical side, we had a conversation today about what our research question looks like for this project as a whole. Tori is exploring the immersion side of the environment - telling the story through acting and environment. I'm exploring the interaction - how users experience and navigate through this story. We're both still pretty new to writing proper research questions, and the results of the next four weeks are going to determine a lot about where we go moving forward. We just needed to start putting language to our work and determine how it all fits together in the big picture. The picture below is our start on this conversation, though it will develop over the next few weeks as we work on writing our own questions and bringing them together. 

Working on formulating research questions

I mentioned using Mendeley last week for organizing my research - the reading has officially begun. I downloaded the desktop app and started adding all the current studies that Tori and I have gathered. While waiting for the uploads to complete, I read about the Virtual Human Interaction Lab (VHIL) at Stanford University, which studies the impact of human interaction in virtual reality and its larger societal effects. Their website has a large archive of research papers from the past ten years, and I left with 16 studies on topics ranging from interaction to children's education to racial bias.

Tori is pretty far ahead of me on readings right now, but I've started prioritizing them and working my way through the list based on those most relevant to my development in Unity. Here's what I've read so far.

The cybersickness reading really tied in with the current issues I'm tackling with navigation. Because the user will be navigating a large space (and in the end, the user will be a younger student), our tests should address comfort in navigation as well as functionality. Maria mentioned thinking about how these navigation forms could influence the story itself - for example, having limited movement when playing as Ruby, or losing the ability to navigate the space altogether - really emphasizing the role of the child as one with minimal control over their world, Ruby particularly. Although I wonder if that effect would be as prominent if the user is already a child who may experience this in their life. Unless it's emphasized to a new degree? Food for thought.

"How to Do Things With Videogames", by Ian Bogost, explores the variety of uses games have been applied to. Some of these uses are unrelated to us at the moment- the debate over whether video games qualify as art is interesting but not really what we're exploring. But there are chapters on empathy, reverence, and work that give great examples of games to look at and how they deal with these topics. The empathy chapter discusses two games made by USC graduate students called "Darfur is Dying" and "Hush", both dealing with genocide and fostering empathy for the people trying to survive in these situations.

It also introduced the concept of the vignette in games: giving an impression of an experience rather than advancing a narrative. Bogost also wrote an article on Gamasutra explaining his thoughts on video game vignettes. Our experience does not focus on one particular aspect of Ruby's walk to the school but would highlight multiple things she would face - confusion, loud crowds, angry faces, lack of control. Because of this, I do not believe we could call this experience a vignette, but it's a good reminder to consider breaking down each aspect of her experience and how we portray it to students. 

I've started making progress on the Vygotsky paper Maria sent us about imagination in childhood - about 11 pages in (out of 92). So far he's discussing what imagination actually is, its origins in childhood, and imagination's basis in reality. I'm interested to see where this goes as far as discussing the perception of reality in VR, and how that ties in with the other papers I downloaded from VHIL. 

Screenshot of my current Mendeley setup, with readings uploaded.

Annotations for "New VR Navigation Techniques to Reduce Cybersickness"

NEXT: 

From here on I'll be developing the other three teleportation techniques and making progress on the readings. In working with the teleportation, I'll be learning more about UI and setting up the controllers for specific functions, thus knocking out two of my three goals. 

On Sunday, Tori and I have three actors from the Theater Department coming into the motion capture lab to capture data and potentially interact with us in the scene. I should be able to teleport and move around them while they act. This will serve as a good test of scale in the environment, and we'll be able to run through the procedures we learned last week. Once Tori has this data ready to go, we can bring it into the scene and I'll start playing with user control over animations/time. 

Building in Unity

This week started with the completion of the Explainer Video, which I've placed below. Creating this video really did help me organize my thoughts from last semester and display what I've been working on. Seeing this together made it easier to find my path forward. It also gave me the chance to work in After Effects again and brush up on some old skillsets. 

Tori and I began discussing the Ruby Bridges project last weekend and had a general plan in place for production to begin. On Wednesday, we spent some time in the Motion Capture Lab learning how to stream an actor's motions directly into Unity. I have spent very little time in the Motion Capture Lab in the past and am unfamiliar with the programs Tori has to use to capture data, so seeing this process gave me a general idea of her pipeline. Our classmate Taylor put on the suit, and we started by setting up tracking as if we were recording his movements. Tori is very familiar with this process, and it will be something we use to test out animations. 

We then pulled up Unity and learned how to stream an actor live directly into the scene, which did require some tweaking and setup for the basic character we were using. But the end result was being able to put on the headset, see Taylor's character in the HTC Vive, and interact with him live. 

Tori viewing Taylor's motions in the HTC Vive. 

This is going to be especially valuable once we have the final set built and can interact with actors in the space. His movements were very clear, though we didn't properly orient the character and the camera around the origin. Whenever Taylor moved to his left, it appeared to me that he was moving straight towards me. Just making sure all of our transforms are correct should clear up this issue. 

I made a demo Unity project earlier in the week that was just a base set - a flat plane with some boxes and house-like representations, just to have a place to test out interaction. When I went to show Tori, I realized that I had forgotten to load the SteamVR asset package. Trying to reload it caused a whole host of problems, and I found it was easier to start from scratch and build a demo scene with a layout similar to our story. I spent Thursday building a new set using the Prototyping asset package that comes with Unity. Because interaction is my priority, I'm choosing not to focus on the models and just work with representations. This new map features a school, front yard, and street.

Screenshot of new Unity scene.


I chose to make this scene fairly large, so we have room to experiment with navigating larger environments. This also means more room for figures once we start importing the motion capture data. 

From there, I followed some of the VRTK tutorials (found HERE) to set up the camera, a basic teleportation system, and a few interactable objects. There's a table off to the side with six cubes on it, each with different properties. One functions as a control and cannot be picked up using the hand controls. The other five have varying highlighting settings and react differently when picked up by the controllers. This helped me learn a bit more about how the hand controllers are set up to work with interactive objects, and what options I have for modifying these interactions.

Screenshot of interactive table, with one of the pick-up cubes selected.


I had to make several decisions this week about what type of interaction specifically I would be searching for. I knew that it would be three broad topics: navigation, object interaction, and time. But I broke that down and really thought about what I want to explore in those areas, beginning with navigation. 

  • Teleportation:

    • Using VRTK, the standard simple teleport function. I did switch the pointer to a bezier pointer, which seems easier to use than the straight pointer - it's easier to determine a final destination, whereas the straight pointer tends to overshoot. I learned this week how to set that up from scratch, which was my first goal.

    • Point and click navigation. In this scenario, the user determines their destination but we (the designers) control the actual teleportation. The scene would be divided up into sections, and when at the border of a section the user will have a cue to move into the next area. The user will appear in the same spot each time. It will be interesting to investigate whether it's easier to move this way and take the user's focus off of the controls.

    • 2D Map. Using a menu function to determine which area the user wants to teleport to - in this case, having a map available to toggle on a hand control, or a series of options. Something like "School Entrance" would teleport them to the front of the school doors.

I took inspiration from games like Myst, Dreadhalls, and The Sims when considering these layouts and how the player interacts with a larger map or navigation techniques. 

  • UI/Menus

    • Scrolling

    • Moving windows

    • Typing (for potential classroom uses)

    • Buttons

  • Animation

    • Starting and stopping animations with a button to "pause" the scene while retaining player movement.

    • Play with time: ability to move backwards and forwards, implementing those scroll bars from the UI Menu exploration. Similar to resting in Skyrim.

(Skyrim) An example of time sliders, potentially incorporated into the scene to replay a moment or action.


Tori and I also discussed the hardware being used. While we know we're going to be developing for the Vive, we also have access to the Leap Motion sensors for hand controls. I looked into development for these, and I think it would be a valuable area to explore. Being able to see and reach out to grab objects, or incorporating gestures into navigation, could be an interesting space to work in. For now, I've decided to accomplish the above goals using the HTC Vive, but to keep researching and looking up Leap Motion resources in case we decide to take that route in the future.

Below is a video from Leap Motion previewing their VR hand tracking software in 2016, just to give a general idea of the type of interaction we could be looking at. 

The past week has been full of development and making decisions to start moving forward on our prototype. In the next week, I will be finishing up some tests for the point and click navigation and getting the menu-based navigation in place. I have a few ideas for how to accomplish this, but I need to do some research and see if there are any cleaner or simpler paths.

I will be sorting through and organizing readings tomorrow - another classmate showed me the Mendeley app, and I'd like to try to use that to keep research together. I also need to get some time in the Sim lab to test out the level I made, and make sure these interactions are functioning the way they're supposed to. The simulator in Unity isn't running properly for me in this scene - while it would be useful, it's just not a priority right now and I can work on fixing that once a few other tasks are accomplished.