1/27/19: Quill to Unreal

Where last week was full of conceptual challenges, the last few days were full of technical ones. Before moving too far into production in Quill, I wanted to make sure this was a pipeline that was feasible and functional within the time span I have. I also wanted to double up my time by facing the technical challenges while still working on an animatic.

Maquette was a good first step last week. I have found that planning for VR in VR is a key part of the development process… yet it's still difficult to choose a tool. While Maquette presented many opportunities for rapid spatial iteration, what I really needed - light placement, pipeline development, and audio - just wasn't viable in that environment. I needed to get Unreal functional. I was able to get Quill and Unreal working together on the same computer in the lab, but still faced issues with frequent crashing and with Quill being incredibly picky about tracking. I haven't run into those issues at home, so I've been doing most of my experimentation on my own setup.

I’m still getting used to the controls in Quill. They don’t feel especially intuitive, though I am slowly getting better with time. Working on another project in Tilt Brush earlier in the week, I was able to get more of a feel for painting techniques in a program that I feel is much more pared down and easier to iterate in. Jumping back into Quill afterwards felt a little more comfortable and I am getting faster. To test the pipeline I animated a candle flame and sparks that are a major lighting source in my story:

Animated flame in Quill

I watched Goro Fujita's video on how to animate brush strokes for falling leaves - incredibly helpful, though I was still having a rough time getting the hang of the painted lathe trick for making solid objects.

I exported that out of Quill as an alembic file, and that's where some of my trouble began. There is very little documentation on the pipeline for bringing a Quill animation into Unreal. Unity has its own alembic importer and shader for Quill, and Maya has some decent documentation. I tried bringing it into Maya, exporting into different formats, and bringing those into UE4 and Unity, but had a ton of issues getting the animations to play properly and the textures to display.

It turns out the final process was to export the alembic from Quill, import into Unreal as the experimental Geometry Cache component, and make my own material with a vertex color node. I’m pretty sure I can separate individual layers in Maya and export them as their own alembic files for use, but that’s a process for the more complex elements in the scene and I haven’t tested it yet.

I started building a scene around that, getting some lighting in there and blocking out some of the bigger landscape features that I’ll later be painting.

UE4: Blocking in geometry.

Unreal has a pretty good Sky Sphere with illuminated stars, so I'm using that as a stand-in right now. As I blocked in shapes I made sure to check periodically in the Oculus that the scale made sense for what I'm trying to accomplish. I'm also familiar with the Sequencer tool in Unreal, so I've been using it to key values in the scene and create a basic animatic. The result is a developing block-in for my project that functions as a VR animatic while I get more familiar with Unreal. The viewer starts in a dark fog, which then lightens briefly to reveal the candle lighting. Over time the fog recedes and the stars show. I plan on guiding the user's attention with specific lighting cues, the first already in the scene with the rising sparks on the candle.

Current state of the sequencer, which I'm using to coordinate my animations.

Going through the process, I think my biggest question right now comes down to scale (again). I want the viewer to feel small in the beginning, but then transition to feeling close and connected. Scenes in VR have a tendency to feel very large; distances seem much farther than they should be. I'm interested to see how much more effective a dramatic shift in proximity to the viewer can be in a VR space. On a project last semester I fell into the habit of testing the scene in the VR Simulator instead of in the headset. The result was a scene that felt too large for the user, and I'm already starting to catch those instances just by working in the headset more frequently. Animating in Quill has been really helpful as well, since I'm able to use my body as a reference.

Next Steps

Adding in the narrative audio will be the first step this week so I can work out the timing, followed by bringing in more Quill animations and static models by the end of the week. Now that I understand the pipeline a little better and know how to move around Unreal, I'll be able to bring in new work and add it to the scene as I go.

I have also begun collecting sound effects, and will be using those to build my scene up throughout the week.

1/20/19: Considering the Narrative VR Pipeline

Planning a narrative experience in VR requires its own pipeline and structure. Logically I already knew this, but I still went into development with the same animation mindset. I spent this week focusing on fleshing out the narrative itself, creating storyboards, and determining which technical paths are viable - a process that reworked itself along the way.

Gathering References

I began gathering some references this week, looking for potential lighting inspiration and trying to determine how these scenes are created. Goro Fujita's work was great inspiration, but I realized that watching his renders and videos is exactly the same as watching a traditional animation - I needed to experience the scene itself, see where the lighting takes place behind the viewer, above the viewer. How does the scene play as a whole?

Tilt Brush to the rescue. Tilt Brush lets users select scenes uploaded by other artists and watch them being painted in VR, or skip ahead to the final result. I went through many of the scenes while in the Vive, focusing on those whose lighting most closely matched my own or whose style would be useful to observe.

Keeping in mind that Quill does not seem to have the wide variety of playful brushes available here, watching how the artists structured these scenes gave me some ideas for potential visual styles and techniques. After-Hours Artist is the only one I experienced that used 3D models that were then painted on top of, something I mean to explore further in Quill. Backyard View showed a series of single paint strokes layered in front of each other, then used a "fog" brush to emit a tiny bit of light and create depth. It was incredibly effective and dramatic in this case. And in Straits of Mackinac, the artist created the illusion of water by setting the background to a dark blue and implying reflection with only a few brush strokes.

Just by being in VR I found I was able to more fully deconstruct the scenes than I would in a still render, setting the path for my way forward.


At the same time, I have been fleshing out what it is I want to happen in this experience. The result was the following initial concept:

As I was working through the story, I grew frustrated.

Storyboards are a standard part of the animation pipeline, and I fell into the process of making one without realizing that the end result would be nearly useless for conveying what I am trying to create for this experience. Storyboards assume that designers have control of the frame, that what is presented to the viewer is a carefully constructed composition flowing from one scene to the next. At this point in VR, the designers have next to no control over the camera. I can choose which direction the viewer may start out facing. I can provide a limited scene with nothing else to focus attention on. I can attempt to draw their attention with sound cues and peripheral movement. At the end of the day, the viewer gets to control which details they experience within this world. Creating these storyboards did help me generally work out what I would like to happen within the experience, though I do not believe they are useful in helping me convey that to others.

I'm currently taking a Narrative Performance in VR class that is discussing many of these topics, and one helpful thing from this week was a Variety interview quote from John Kahrs discussing the making of Age of Sail. Kahrs comes from an animation background and talks about having to break that pipeline in order to develop an animated VR cinematic. "I was told not to storyboard it and just dive into the 3D layout process, which, I think, was excellent advice." In that same lecture, this diagram from the AWN article "'Back to the Moon' VR Doodle Celebrates Georges Méliès" was presented:

The designers for that experience split the scene into sections and mapped out the action occurring in each part of the scene at every time. Thinking about the scene in this way, as a production rather than a composition, changes the way I’m approaching both the narrative itself and the production process.

Time to change tactics. VR manipulates space, not a frame. It then follows that I should begin feeling out that space in order to “storyboard” my animations.

I moved into Microsoft Maquette.

Maquette's appeal is how easy it makes rough spatial work. I can place basic 3D shapes at all scales, use painting and text tools, and create multiple scenes that can easily be flipped back and forth to watch a progression. I can view these scenes from a distance or at the viewer's level. After experimenting with the tools, I began building a primitive scene to understand spatially what manipulations I wanted to happen. The result is an odd combination of an animatic and a storyboard.

Technical Progress

I did some experimenting in Tilt Brush, first with painting and then with the export pipeline. I am currently still waiting on Quill and Unreal Engine to be available in the lab, but will be spending this weekend working on my Oculus at home to see the results. Tilt Brush gave me some practice working with painting in a virtual space, specifically dealing with depth and object manipulation. I chose to create one of the chairs from my scene with the candle sitting on it as a test subject.

Painting in Tilt Brush of a candle sitting on the arm of a chair.

I turned most of the lights down in Tilt Brush to get a feel for what the scene would actually be like, and see what the various brushes would produce in terms of light. Not very much, as we can barely see in the image above.

What I really wanted to test was the export process from Tilt Brush to Unreal Engine. Tilt Brush exports an FBX with the textures, but upon importing to UE4 I realized that the FBX is split into pieces based on which brush was used for each stroke. Further, the materials don't seem to work without an in-between process to assign a vertex color map to the object. I'm still a bit hazy on this process, though from my understanding Quill exports in a different file format that seemingly won't require this middle step.

Unreal Test - bringing a Tilt Brush model in, without functional textures.

Unity, however, has a package made to work with Tilt Brush materials called the Tilt Brush Toolkit. Once downloaded from GitHub and loaded into a fresh Unity scene, I was able to import my model without any texture issues. All I had to do was drag it into the hierarchy.

Unity Test - bringing in the Tilt Brush object after importing Tilt Brush Toolkit.

Next Steps

My steps forward are really just finishing up where I’m at now and making some real steps towards solid production.

  • Spending time animating in Quill. The next week will go towards getting some of these base animations down in Quill and trying to export them into Unreal.

  • Determining which 3D models I’ll be creating and starting work on that, while blocking out their presence in UE4.

  • Finish creating Maquette scene mockups. Finalize story.

1/13/19: Investigating Quill

This week marked the start of a short project on experiential storytelling, memory, and light. We were asked to think about three memories in which lighting was an important factor, and write a short description of this moment. Emphasis was put on the word moment - this is not meant to be a life story. The idea is to bring the viewer into this moment, understand what it is that’s happening, and then exit in 15 seconds.

I started thinking about memories with a specific focus on lighting, and found it was more difficult than expected. There were plenty of memories where I could remember what the lighting was and appreciated it, but I had to find three where it really stood out to me. I found that when writing about them, I was walking a fine line between what I'm saying to the viewer and what they'll actually be seeing in the experience. The descriptions were going to be recorded as part of the audio. For each chosen memory I already had a vague impression of what I wanted to accomplish; the most difficult part was deciding how much visual detail to include along with the narrative, and how specific that narrative would be.

Chosen Path

“I was looking for Orion. He’s there, as always, but tonight he brought friends to fill the usually empty sky. Standing barefoot on the deck with only the glow of the candle, we stared at each other over the hills, making introductions.”

This memory is from last May, standing on the deck of our house in North Carolina. My partner and I drove down from Columbus for my birthday. The area isn't very populated, mostly woods - on a clear night it's easy to see all of the stars. We stood outside the first night we got there, all the lights out except a candle, just looking at the stars. As a child I would look for Orion every time I walked outside; it was the only constellation you could usually see from where we lived in Miami. The light from the stars, the candle, and the houses on the other hill really stands out to me in that memory, and I chose it because I feel I could bring the essence of this moment to a viewer with varying levels of abstraction.

Panorama off the deck in North Carolina at sunset. The original scene where the memory takes place.


In the past, my research has required virtually the same pipeline visually every step of the way: block modeling in Maya, some texturing in Substance Painter (occasionally), and then putting it in Unity and adding lights. The focus was on making the program itself function rather than on imparting an experience visually. I want to take a step back and create something that imparts meaning without necessarily requiring the viewer to actively be a part of it.

Oculus Quill presents some really interesting opportunities for animating in virtual reality. I spent some time looking around and finding examples of these animations that might be similar to my own.

Artist Goro Fujita spends his time creating animations in Quill, showing a variety of scenes and perspectives. Viking Rockstar is a great example of the type of color and lighting I want to use in my own scene, and includes multiple shots and sound design. I wouldn't categorize this as a virtual experience, but as an animation it's beautiful and on the right track stylistically.

This short looped animation puts the viewer in the perspective of driving through the rain. With the sound design and lighting, it’s incredibly effective and shows how the user can be brought into an experience.

Fortunately, Fujita also posts videos where he shows his process and explains his animation workflow. I watched these to get a better understanding of how Quill functions and whether it would be a good option for me moving forward.

The official website provides some resources on how to export animations and FBXs to Unity, though I needed to look externally for information on how to do this in Unreal Engine. I was considering using UE4 specifically for its lighting capabilities. I worked on the lighting for Project Sphincter while at CCAD, and Unity just hasn’t been able to compare. As of now, I am leaning towards this option.

Putting the final software choice aside for a moment, I decided to get into Quill and see if this was really something I wanted to commit to. Granted, I've spent maybe a grand total of 3 hours in it and probably need to watch some more tutorials, but the learning curve is pretty rough. I had a difficult time getting the hang of the controls, which are not well explained when first entering the program beyond a little diagram that pops up by default. These were the initial sketch results:


Next Steps

After spending some time in the Oculus, I don't think it's practical to do the entire scene in this way. At least, not from scratch. I need to investigate bringing in models and animating over top of them, possibly as reference. It's very difficult to gauge depth in there once the scene is moved around. I also need to look into animating only certain objects in the scene with Quill rather than the entire environment, or blending the two together. This will help me determine a production schedule for the next two weeks.

Beyond the pipeline research, I will spend this next week gathering my final visual reference, sketching out a storyboard, and recording my story for timing. I have been gathering some information on lighting and technical terms in order to actually discuss the decisions I’m making on the lighting in the scene, and will be discussing that more next week as well.

Looking Back on Liminality

A few weeks have passed, and I wanted to wrap up the work we did on Ter(li)minal from September!

Our end result was an experience that sought to place the participant in a liminal space, forced into a sense of waiting. You are constrained by lack of movement and lack of activity, only able to observe by looking around and rotating your head. Upon starting the application, the user sees a space sparsely populated by seated and walking figures. As you wait, small changes begin to occur. Once-empty seats are filled with figures. Seated figures may change positions when you look away. The departure board gains more and more red delayed flights as time passes. Babies scream, planes take off, and the space becomes more chaotic and crowded with each delay announcement. Figures begin to break away from their straightforward march along the walkway and defy gravity, floating through the ceilings and moving sideways out to the landing planes. Finally the scene calms as the boarding call is made, and the player is allowed to move on.

In the process of development, I learned a lot about linking layered timed events in Unity. Once the sightline script was functional and able to be applied to multiple objects, it became a matter of making sure these events occurred at the proper times in the application. I had to go back to the basics: public booleans and instantiations. The sightline function ended up working thanks to a tip from Alan to use Transform.InverseTransformPoint instead of the function with the frustum planes. I was able to get the function working that same day, and then it all became about timing. Sara made a rhythm chart for the project that I based all of the interactions on:


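As a note for future me, the core of that InverseTransformPoint trick is just expressing an object's position in the camera's local frame and checking the sign of the forward (z) component. Here's the math sketched in Python rather than the actual Unity C# - simplified to a yaw-only camera, since Unity's Transform.InverseTransformPoint handles the full rotation, and none of these names come from the project script:

```python
import math

def inverse_transform_point(cam_pos, cam_yaw_deg, point):
    """Express a world-space point in the camera's local frame.

    Simplified to a yaw-only camera (no pitch or roll).
    """
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    yaw = math.radians(cam_yaw_deg)
    # Undo the camera's yaw so +z means "in front of the camera".
    local_x = dx * math.cos(yaw) - dz * math.sin(yaw)
    local_z = dx * math.sin(yaw) + dz * math.cos(yaw)
    return (local_x, dy, local_z)

def is_behind_camera(cam_pos, cam_yaw_deg, point):
    # Negative z in camera space means the point is behind the viewer.
    return inverse_transform_point(cam_pos, cam_yaw_deg, point)[2] < 0
```

A swap script can then wait for is_behind_camera to come back true before touching the object, which is exactly when the player can't see the change happen.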
Some critique points that came up:

  • Move camera back. The player’s face is too far forward over the model, and it’s nearly impossible to see the models placed nearby.

  • Add interaction to the models around the player. The boarding pass, phone, and books around the player were actually supposed to be moving. I ran out of time and just didn't get around to it.

  • The departure board is very hard to see. In our presentation, we talked a lot about being able to see these subtle changes in the environment around you. But the departure board was placed so far away that it was difficult to read and see these changes as they occurred. Moving it to one of the pillars by the player might be more effective.

  • Animated figures - be more intentional. Some technical issues came up with looping the animations and timing them out. While the number of figures does increase over time, it's difficult to notice once they start walking through the floors and ceiling. This happens fairly quickly - letting more time pass and spacing out these occurrences would make it feel like less of an accident (although… to be honest… it definitely was an accident).

  • The line renderer on the figures seems to glitch a lot as they walk, which gets confusing and makes the figures difficult to see.

  • Audio. Needs to be louder overall - it's very hard to hear on the phone, even with headphones.

Overall I was very happy to learn the process for Android development and to get to work a bit with the Daydream. It was much easier than expected and very quick to prototype. I would like to revisit this in the future and make adjustments, though that may be more of a Christmas personal project. Learning about the sightlines is going to be especially useful for the 10 Week Ruby Bridges iteration that Tori and I are currently getting started - but the journey for that so far deserves its own post. More soon!

Weeks 1-2: Sightlines, Airports, and Liminal Spaces

Year 2 is now off and running! 

Most of my energy over the past three weeks has been focused on the first project of the year: a five week team effort for 6400. The same project that produced the MoCap Music Video last year. 


Our team was told the due date and to make something... very open to interpretation. My team includes two 2nd year DAIM students (Taylor Olsen, Leah Coleman) and one first year student (Sara Caudill). We eventually settled on creating a VR experience based on liminal spaces, specifically taking place in an airport, with the viewer losing time and identity as the experience goes on.

Liminal spaces are typically said to be spaces of transition, or "in-between" - a threshold. Common examples are school hallways on the weekend, elevators, or truck stops. Time can feel distorted, reality a bit altered, and boundaries begin to diminish. They serve as a place of transition - the destination is usually before or after them. That sense of prolonged waiting and distortion of reality is what we intend to recreate in this experience. By placing the viewer in the gate of an airport and having them observe the altered effects around them, such as compressed/expanded time, we will bring the viewer into our own liminal space.

All of our team members had an interest in working with VR and with games, so I looked for environmental examples of what might be considered a liminal space already existing within a game. The Stanley Parable sets the player in an office building by themselves, seemingly at night, which contributes to the odd feeling of the game - you never see another human, and the goal is to escape. The presence of a narrator and instructions (despite the player choosing whether or not to follow them) prevents this from being a true liminal space, but I feel that the setting itself creates a strong nod in that direction.

Silent Hills P.T. is much closer to the feeling we're going for. The player constantly traverses the same hallway, though with each pass the hallway is slightly altered. There is minimal player identity, the passage of time is uncertain, and the player is constantly in a state of transition looking for the end.

Sightline: The Chair became an important source material for us. Developed early on for the Oculus, it seats the player in a chair to look around at their environment - one that constantly morphs and shifts around them. The key point is that these changes occur when the player looks away, and are already in place when the player looks back. This is an element I very much want to incorporate into our game. It really messes with the flow of time and creates a surreal feeling. Importantly, the player cannot interact with any of the objects around them - they must simply sit and wait for the changes to occur.


From there, we met as a team and began planning out the experience- interactions, the layout of the airport, how time would pass, what events would be happening. An asset list was formed and placed online, as well as a schedule for development. We wanted to make sure everyone on the team was learning new skills they were interested in, and teaching others the skills that they have. Sara and Leah focused on visual and concept development- the color keys, the rhythm of the experience, etc. Taylor worked on finding reference photos, and began modeling the 3D assets we would need for the airport. 

I've spent the last few days focused on modeling the airport environment and beginning some of the interaction work in Unity. Based on the layout we created in the team meeting, I was able to finish the airport shell and start working on some of the other environmental assets - a gate desk, a vending machine, gate doors.

I brought those models into Unity to start working on developing some code. Taylor made the chairs for the gate, so I placed those and got a basic setup going. 


I began working on some audio scripts to randomly generate background noise and events - an assistance cart beeping by, announcements being made, and planes taking off and landing. That's about done, and I'll be posting an update video soon with the progress made.
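The shape of those audio scripts is roughly "wait a random interval, pick a weighted random event, repeat." Here's that logic sketched in Python - the event names and weights are made up for illustration, not the actual clip list:

```python
import random

# Hypothetical event names and relative weights - the real script maps
# sounds like these to AudioClips; none of these identifiers come from
# the project.
EVENTS = {
    "cart_beeping": 3,
    "announcement": 2,
    "plane_landing": 2,
    "plane_takeoff": 1,
}

def build_ambient_timeline(duration, min_gap=4.0, max_gap=15.0, seed=None):
    """Schedule randomly spaced background events across `duration` seconds."""
    rng = random.Random(seed)
    names, weights = list(EVENTS), list(EVENTS.values())
    timeline, t = [], 0.0
    while True:
        t += rng.uniform(min_gap, max_gap)  # random quiet gap between events
        if t >= duration:
            break
        timeline.append((round(t, 1), rng.choices(names, weights=weights)[0]))
    return timeline
```

In Unity the equivalent would likely run as a coroutine, yielding a WaitForSeconds between clips instead of building the whole timeline up front.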

The current problem I'm having is the script to change items when the viewer isn't looking at them. I found GeometryUtility.TestPlanesAABB in the scripting API, which forms planes where the camera's frustum is and then calculates whether an object's bounding box is between them or colliding with them - in other words, is the object where the player can see it? I can successfully determine that an object is present, but when it's deactivated to switch to another GameObject, the first object is still detected and causes issues with the script I've written to swap it with another. I got it to work with two objects, but three is revealing this issue in full force. I may try instantiating objects next instead of just activating them - either way, this test has allowed me to learn a lot about how Unity determines what's "visible".


This weekend, I'll be continuing to work on this sightline script for the camera and hopefully finding a solution. I also have several other environmental assets to model, and will begin textures for the ones that I already have completed. On Sunday I plan on posting a progress video of the application as is. We still haven't decided whether to use the Vive or attempt mobile VR, something that I've been especially interested in. Alan suggested letting the project develop organically and then make a decision near the end- I'm leaning towards the Vive for this currently for the familiarity and extra power, but on mobile the player is forced to be stationary and lacks control. More thoughts on that soon. 

Reality Virtually Hackathon!

Earlier this month I was able to attend and compete in the Reality Virtually VR/AR Hackathon, hosted by the MIT Media Lab. I registered, was accepted, and started connecting with other participants via a Facebook page. Everybody was really friendly and excited about working with VR/AR technology! I saw people from all kinds of fields and backgrounds, from students to industry professionals. About two weeks before the Hackathon, everyone started posting their bios and work experience to see who was interested in working together or finding a team. I spoke to several participants, but one team reached out and wanted me to join. All they knew was that they were interested in working with the recently released ARKit, and all of the team members were iOS developers. They needed someone from the 3D world.

So I drove out to Boston for the Hackathon, and that first night we had a brainstorming session, just throwing ideas around until something stuck. We decided to tackle the problem of collaborative AR - something that had not been done successfully in ARKit before. By the end of the two days, we had it! It was definitely more of a technical challenge than an artistic one, but I made the art assets we used to demonstrate its capabilities and tried to get the team to think in terms of a design process as well as an engineering process.

The video above was made during the competition to show our platform in action. I'll be creating a more comprehensive video in the next few weeks. 

"Team Two" ended up winning our category, Architecture, Engineering, and Construction, and Best Everyday AR Hack from Samsung! 

Team 2 after the Closing Ceremony

The overall experience was amazing. This group worked well together and was able to solve a problem that opens up a lot of opportunity for developers. I learned a lot from them - I had never worked with mobile development and had no idea what was involved in developing for iOS, or with AR for that matter. The workshops before the event were a great way to get into the headspace of VR/AR development and ask questions about various aspects of the industry. The Facebook group is still alive, and I made a lot of connections from the event. I'm planning on attending again next year and maybe trying to go to the one at Penn State as well.

While I was at the Hackathon, I was also working on a game level for my Computer Game 1 class. This was a team project centered around the theme of a broken bridge. Each of us had to create a level using different game mechanics to get around the bridge. Mine was to collect planks that had washed downriver and carry them back to the bridge in order to repair it. I found, especially during this project, that my scripting skills in C# are improving a lot and I'm starting to understand Unity a lot better. Of course, I still get a little overexcited when building scenes so... even though this was a prototyping assignment I got to play with all kinds of fun settings. 

The next couple of weeks are going to be intense. I have a VR prototype that I'm working on involving Hurricane Preparedness (more on that soon), and an AR MindMap project I'm working on to explore my own process a bit more. Next week I should have a computer game final project in the works as well- not too sure what that's going to look like just yet. There will be plenty of process work to post on here! 

Cabin Finals!

Yesterday morning, at around 5:15am, I rendered out these final stills from my cabin environment project:

Initial concept by Olga Orlova

I was pretty happy with how this environment came together. This was my first time attempting a large scale project in Unreal by myself, and I learned a lot from it. I did run into some technical issues with the foliage and lightmaps. I'm going to have to do some research and actually try to understand lightmapping because it was really killing my final build and I just had no clue how to fix it. Even with those issues, I feel that I was able to get fairly close to the initial concept. I plan on going back and texturing the environment to really complete it for my portfolio, although for now I'm going to render out a flythrough of the landscape to show it off in its current state. I'll post that here when it's ready! 

I've also got an idea for a personal project I'll be starting during the Holidays involving some tiny landscapes... more on that soon. Graduation has opened up the ability for me to start making my own art for a little bit and I'll be taking full advantage of it! 



Cabin Moves to Unreal and 3December Begins

Finals are ramping up here at CCAD, and all my projects are starting to hit the end stages of production. Last weekend I started building my world in Unreal Engine 4 and getting the basic features laid out and working. 

Overall I'm really happy with the layout and the lighting, and I've had some good results with the water. I'm using packs from the UE4 marketplace to fill out the mass foliage and for the water spray effects around the rocks. 

I did run into an issue with the cabin, though: it imported without smoothing groups and looks pretty funky, so I'm going to have to fix that along with some of the UVs. The normals are doing weird things right now. Once that's fixed, I can move some of the smaller world detail into the scene- scattered rubble and wood, various personal effects, maybe some signs and a few other basic structures. I just want to really build out this hero area and make sure the rest of the level within the colliders is still worth exploring.

I'm also participating in 3December, where you're supposed to do something in 3D every day and post it on social media. Because of my finals schedule I've only posted 3 of 6 days so far, but I'm getting more consistent. Last night I sat down and sculpted something just for myself- I haven't done that in a long time. I sketched out a character for my storyboarding final earlier and had such a fun time with it that I spent an hour sculpting his head: 

I was pretty proud of myself for sticking to my self-imposed time limit and making something that was just for fun. I'll be posting the rest of my 3December pieces on here as they happen, but for other updates check out my Instagram (@abbytheturkey). 


I went on a brief hiatus this past week because I flew to California to attend the CTN Expo! It was my first year attending the expo and my first time on the west coast, so even though I was still getting work done, I stuck to Instagram and Facebook for my updates while traveling. I got to meet a bunch of fantastic artists and was lucky enough to get a few incredibly useful portfolio reviews. I saw the Pacific Ocean for the first time while waiting around the airport (some fun sketches below), and I had some spare time to go see Griffith Observatory. On the Thursday before the convention, we even visited Blizzard's Irvine campus!

The best part for me was getting to be around a lot of people who clearly love what they do. I didn't take many pictures of the actual convention because as it turns out, I'm terrible at taking pictures of events I go to. But it was bigger than I anticipated and I managed to come home with some pretty great prints that I may be doing some practice environments with. 

I tried to get some work done in between the convention and boarding various planes. Most of it was more wood sculpts and finally getting my normal maps working properly. I did a hi-res sculpt of my landscape and dropped it into Unreal just to see how it was looking- here are the results: 

I won't be able to get into the labs to work on foliage until at least Monday night, so in the meantime I'm working on my cabin and getting all its details fixed up. Then I can move fully into Unreal and put it all together. I'm excited about how it's coming along, but deadlines are looming, and I'm hoping to have some more filled-out screenshots in the next few days. 


Landscape Improvements

On Tuesday I went in and worked on my landscape for my river cabin scene (I need to name that soon... working on too many cabins!). It felt really blobby and just needed the finishing touches. Here's what the comp looks like together now in Maya: 

This is a slightly lower-res version that I brought into Maya to retopologize, which will be happening over the next day or two. I'm working on modeling the plane parts of the cabin for tomorrow and finishing up the wooden support assets, really trying to get this thing grounded in the landscape.

For the foliage, we'll be learning SpeedTree in a few weeks, and I'll have to look into getting that river system up and running. I ordered a book, Botany for the Artist, to help me get an idea of what types of plants I should be looking at. Plus, my plant knowledge needs a boost anyway!

Cabin Progress

These updates are a little late- I was volunteering at GDEX this year, the Game Development Expo for the Midwest. It's hosted in Columbus, and I helped out as a presentation room attendant. Afterwards I got to walk around and see what other game developers in the area are doing, and I was really surprised at the variety! CCAD also had a booth at GDEX and was showing Project Sphincter to everybody- I got to watch people play our game level and actually really enjoy it! That was pretty great, honestly. 

On Friday I made some more progress on my cabin, using some of the plank sculpts in ZBrush to start putting together a rough pass of all the wooden parts of the cabin: 

So this is a little late, but my next two milestones are to block in some rocks in the environment and model out the plane (minus the engine detail). I'd also like to go in and fix my landscape for the level. The brushes I downloaded for the planks also come with some great landscape tools, and I'd like to use them to make the terrain more realistic. If I've got some spare time in between all of that, I plan on doing some physics tests in Unreal just to see what I can do to get some water flowing in the level. 


Sculpting Wood

I've hit the part of my cabin environment where I get to start sculpting final assets, beginning with some varied wood planks for the cabin itself. I initially started these with the intention of using them as roof shingles.

These three planks took about 2 hours total. I used Michael Dunnam's wood brush set combined with another wood brush set by Jonas Ronnegard.  

I still have a ways to go and I'm working on creating the main planks for the house. The three roof shingles I completed have new topology and are ready to go. By Friday I should start building the final cabin with these planks! 

Spooky Haunted House Layouts

I decided to take a character and environment class this semester because I've spent so long in 3D that my drawing skills have begun to suffer. For this environment project, though, we were encouraged to use 3D programs to help with composition, perspective, and lighting... something I probably should have figured out sooner, but now that I know it, it's changing everything. 

We're supposed to be creating a haunted house, and I tried to steer towards a more stylized form. I chose a dark cabin in the woods with shapes taken from your stereotypical witch's hat. Then I went into 3D, did the most basic of block models, and used Maya's tree brushes to create this bent pathway. Here are some of the process shots: 

I'm really liking what's happening with the exterior, and I'm enjoying playing with the lighting. I still have to do an interior, which is happening tomorrow afternoon, but so far I like where it's going and will continue posting progress as I go! 

Cabin Progress

I've finally been able to come back and work on this project, and it's going to be dominating the majority of my time for the next few weeks. Here's what I've got on the cabin right now: 

I'm going to go ahead and solidify my terrain and then start making assets in ZBrush to get final models going. The goal is to have a rough final cabin for next Friday, then start focusing on the machinery in the plane and the cloth. 

Reeling It In

So Inktober has gone on hiatus as I've spent the last few days finding time for my demo reel. Rendering in UE4 has been a bit of a challenge, and I've spent a good amount of time with Google trying to figure out why something doesn't work. I have a couple of good shots from my renders; unfortunately, I can't use them all and still keep the reel under a minute. That's where the trouble comes in. I'm not totally happy with what I have so far, but I still have a little time to keep pushing and polish it up! 

I'll post some screencaps in the meantime: 

I've mostly been busy trying to round up models and get them into Unreal, render out a pass, and then get them together cohesively in After Effects. I made great progress tonight, and hopefully by tomorrow night I'll have a new video up on the home page with the renders I've just worked on. 

Cabin Blocks

I started block modeling my cabin for the environment concept I'm working on- just a few screenshots here to show what's going on.

Most of this was just about figuring out what goes where, and modifying the desk so that it made sense. I was planning on doing an interior for this house but I'm still not sure if that's something I want to pursue or if I want to find another way to work on some props. I added a cave system to my revised layout in Unreal so I may just add some excavation equipment to give the idea of exploration and preservation. Maybe add some tents in a clearing off a little trail.

Next step is to add the plane in and do a basic simulation to get the canvas on top of it. Then I can go into ZBrush and start adding in some of those nice woodgrain textures and smaller details inside the plane.

UE4 Marathon

I sat down this afternoon to start working on an environment to render my demo reel in. I worked on rendering bits and pieces from Project Sphincter in Unreal Engine 4 this week, and now that I've got the cinematics process down I figured I would just build a new environment to render all my game props in and put it all together. 

So I modeled this airplane hangar/warehouse type of space and started playing with the lighting and materials in UE4. 

Screenshot from my setup. 

As I started putting the materials together and playing with lighting, I got sucked into some of the finer aspects of UE4- like building glass. I had never gone too in-depth with the materials editor, and I really wanted to see what was involved. I found a tutorial for creating glass HERE and started following it. The results still aren't quite what I'm looking for, but the refraction is at least looking pretty good. I'm hoping to play with it a little more once I've got more of my models in the scene. 

Screenshot of the materials editor after following the tutorial above. 

Screenshot of the materials editor after following the tutorial above. 

I'll be focusing on another environment project this week but I'll be posting updates from both on here, as well as Inktober! 

Working on Renders

It's demo reel time! I've been working pretty steadily this weekend to render different projects from this summer. Normally this wouldn't be such a huge challenge but since most of my models are game assets, I decided to render them in Unreal Engine 4. 

Working on cinematic renders of Project Sphincter. 

I'm still working out some of the quirks- I followed a tutorial that used Matinee, only to find out that UE4 has moved on to Sequencer. A few hours and some tests later, I was ready to get into it. I'm also going to build another environment and use it to create a more dynamic props reel. I like the idea of adding some interactivity instead of just staring at turnarounds and wireframes.

Still shot from one of the cinematic renders. 

Still shot from one of the cinematic renders. 

I found a few errors after the initial render but I've pretty much got the cinematics down for this part of the demo! Next up, building the demo room. 

Blocking It Out

I've finally opened up Unreal and started blocking out the general structure and landscape of my environment. The landscape itself is pretty non-specific; I mainly focused on getting the cabin in scale. I used the preset third-person character mesh to measure everything, and right now I've got the general shape locked down. 

I'm still using presets just to figure out the positioning of everything, but my goal for the rest of the day is to play with particles and learn how to create a slightly more believable water system.