Project Bespin
Not so long ago in a galaxy not so far away....

Final Submission

Meet Team 77

“Team 77” is a group of graduate students from the Entertainment Technology Center at Carnegie Mellon University.
Yujin Ariza (Sound Designer / Programmer)
Caleb Biasco (Rendering Engineer)
Tai Ching Cheung (Texture Artist)
Bryan Kim (3D Artist)
Euna Park (Animation and Camera / Storyboard Artist)
Jacob Wilson (Systems Programmer)
Hyun Jong Won (Environment / Level Designer)

Narrative Inspiration

We imagined a dystopian future where overpopulation has led to the construction of hyper-vertical cities whose buildings pierce the stratosphere. These superstructures have eclipsed most of the Earth, making the surface uninhabitable and obsolete for humans.
The thick smog serves as the barrier in a polarized society. The upper layer is called Eden, and the lower layer Hell. Eden is a utopia where humans are granted the luxuries of retirement and pleasure from birth. Work doesn't exist, only life. In contrast, Hell is a sub-society that exists solely to serve the upper class. Through technological advancement, humanity has been able to rapidly produce and mobilize an entire robotic workforce to both maintain and facilitate industrial production. Life doesn't exist, only work. The social hierarchy is directly reflected in the environment's verticality.
Consumerism at the scale of such a population has been the driver of heavy industrial production, ultimately resulting in significant air pollution. Thick smog pervades the lower parts of the stratosphere, rendering the outdoor environment uninhabitable. This lower region serves as the industrial district where the worker drones exist.

Scene Conceptualization

Visual Inspiration: Blade Runner, Neo Seoul, Tron, Coco

Using the above narrative as a starting point, we set out to make two distinct scenes vertically connected by mega skyscrapers. Due to the lack of sun in the lower levels, the dystopian industrial scene was envisioned as a dimly lit environment punctuated by building lights. The buildings fade out in the distance, evoking a sense of mystery and claustrophobia; the intention here was to communicate the depressing tone of the slaves' mindless workplace. On the upper levels, the polished towers emerge from the smog and reflective surfaces shine against the brilliant sun. Because of the curvature of the Earth, the towers grow sparser at higher altitudes, creating a sense of openness and expansiveness that reflects the luxury the humans enjoy.
We started off by collecting concept art and sketches that matched the description above. Given the narrative, we focused on the monumentality of ancient building styles and on environments that are both terrifying and mysterious.
As the team further developed the art direction of the environment, we became interested in expressing the larger building modules as a complex amalgamation of smaller volumes. Much like components on a computer chip, these small buildings were treated as attachments onto the larger underlying base geometry.
Since scale and proportion were very important in conveying the desired visual effect, we rapidly prototyped several design schemes to determine the most appropriate shapes, sizes, proportions, and spacings for the building modules.
Facing the unique challenge of devising a visual language that could speak to both the upper and the lower layer, we strove to combine archaic and futuristic building styles into a hybrid that could cater to the two seemingly opposite environments. We achieved this contrasting effect with material differentiation, juxtaposing reflective, polished textures against rough, rusted ones.

Production Process

In the initial stages of the project, the team conducted frequent scrums to rapidly prototype the storyboards prior to full production. This process was especially important not only because it eliminated many redundancies and dependencies between team members later down the production pipeline, but also because each team member was working remotely over winter break. Once the rough storyboard, level blocking, and general art direction were determined, all members of the team were able to jump into production concurrently.
The team members primarily communicated through Skype and Slack. Version control was handled using Perforce.

Storyboarding

To give the team direction before we all went our separate ways for winter break, we rapidly iterated on storyboards to establish a general feel for what the video would show and where the main character fit in the scene. The resulting animatic gave us the most direction, and while many elements changed, it gave the team a more unified vision for how the world would be presented.


Multi-Scene Timeline Setup

Development with Cinemachine and Timeline was closely intertwined; we learned each by using the other. We initially tried to replicate the Adam pipeline as much as possible after reading the Adam Timeline blog, since it is considered the ideal use case. However, just hours into replicating this workflow, we realized that Timeline and Cinemachine weren't created with multi-scene editing in mind. We first noticed this when the Timeline properties all became disconnected and our work was lost; because Timeline saves during play mode, our work was reset.
We were committed to multi-scene editing: since we were all working remotely and couldn't afford someone locking the only scene for one purpose, we needed to limit dependencies. After researching whether Timeline had any feature that could make it multi-scene capable, we found two forum posts in which Unity community members suggested remedies. By investigating their ideas and closely studying the Adam development blog, we accomplished multi-scene editing by using control tracks with defined names, such as "Character," in the main Timeline, then creating a Timeline in the other scene with a track group of the same name as the control track. In this way, we were able to link the Timelines through the shared string.
We built a master Timeline scene containing track groups and control tracks with unique names. A script attached to this "director" Timeline holds a dictionary mapping track-group names to Timeline playables. Each Timeline in the other scenes has a track group matching a control track in the master Timeline, as well as a script that adds all of its track-group names to the master Timeline's dictionary and associates each with the current Timeline playable. There is slight redundancy in that not all track groups were necessary; however, this gave the artists the flexibility to put whichever ones they wanted into the master Timeline. Having individual scene Timelines also prevented interdependencies when working on different sections of the project, such as character and FX.
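A minimal sketch of this name-based linking is below. The class names and the use of a static dictionary are illustrative assumptions, not our exact production scripts; the key idea is that each sub-scene registers its PlayableDirector under its track-group names, and the master director binds its control tracks by looking those names up.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.Timeline;

// Each sub-scene registers its PlayableDirector under its root track-group names.
public class SceneTimelineRegistrar : MonoBehaviour
{
    public static readonly Dictionary<string, PlayableDirector> Registry =
        new Dictionary<string, PlayableDirector>();

    public PlayableDirector sceneDirector;

    void Awake()
    {
        var timeline = sceneDirector.playableAsset as TimelineAsset;
        if (timeline == null) return;

        // Register under every root track-group name, e.g. "Character".
        foreach (TrackAsset root in timeline.GetRootTracks())
            if (root is GroupTrack)
                Registry[root.name] = sceneDirector;
    }
}

// The master scene binds each control track to the registered sub-scene
// director whose track group shares the control track's name.
public class MasterTimelineBinder : MonoBehaviour
{
    public PlayableDirector masterDirector;

    void Start() // Start runs after all Awake calls, so the registry is full.
    {
        var timeline = masterDirector.playableAsset as TimelineAsset;
        if (timeline == null) return;

        foreach (TrackAsset track in timeline.GetOutputTracks())
        {
            PlayableDirector sub;
            if (!SceneTimelineRegistrar.Registry.TryGetValue(track.name, out sub))
                continue;

            foreach (TimelineClip clip in track.GetClips())
            {
                // Point the control clip's exposed reference at the
                // sub-scene's timeline object.
                var control = clip.asset as ControlPlayableAsset;
                if (control != null)
                    masterDirector.SetReferenceValue(
                        control.sourceGameObject.exposedName, sub.gameObject);
            }
        }
    }
}
```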


Multi-Scene Cinemachine Implementation

Cinemachine was used exclusively to control the camera, with the Cinemachine Brain placed on the original Main Camera. Each camera type listed in the Cinemachine menu drop-down was initially placed into the scene to examine its functionality and where, if anywhere, it could be useful for our storyboard. The shots were then blocked out using stationary, basic virtual cameras to capture the storyboard image, with each camera shot taking an equal amount of time in the camera Timeline.
We initially had difficulty assigning Follow and Look At targets, because the targets lived in different scenes due to multi-scene editing. For the blockout, a duplicate character was placed into the camera scene, which felt wrong but was necessary to see the shots in the Timeline. During play mode, the placeholder objects are disabled and an assignment script associates the script field with the actual object in its scene. Again, this seems like a bad approach because any change needed to be made twice, but we assumed the placeholders themselves wouldn't be altered unless the actual object was. Once the camera work could be seen in its basic form, we iterated on it for both technical restrictions and artistic feel, though the cameras were mostly left stationary.
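A hedged sketch of that assignment script follows; the name-based lookup and the field names are illustrative assumptions rather than our exact mechanism.

```csharp
using Cinemachine;
using UnityEngine;

// At play time, swap a vcam's editing placeholder for the real cross-scene target.
public class VcamTargetSwap : MonoBehaviour
{
    public CinemachineVirtualCamera vcam;
    public GameObject placeholder;           // duplicate living in the camera scene
    public string actualObjectName = "Kyle"; // real object in the gameplay scene

    void Start()
    {
        GameObject actual = GameObject.Find(actualObjectName);
        if (actual == null) return;

        vcam.Follow = actual.transform;
        vcam.LookAt = actual.transform;
        placeholder.SetActive(false); // hide the stand-in used for editing
    }
}
```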
Transitions between cameras were also tuned: some were removed while others were extended. After these tweaks, I grew skeptical of how the camera Timeline was set up, with a specific camera for each shot. At one point, because of the scope, we had over 40 separate cameras, which seemed excessive, and I wondered why not have a few cameras whose positions were animated rather than relying on the Timeline's transitions. Hand-animating the transitions would give the artists more control over speed, direction, and the view frustum; however, the major drawback would be the time required. There doesn't seem to be a convention yet for long cinematics in Cinemachine, but given the design of the Cinemachine Brain, I assumed that many cameras was the approach the Cinemachine team had designed for, so we continued with it.

Artist-Facing Tooling

Authoring script tooling that interfaced well with the Timeline workflow proved to be a challenge for multiple reasons. For one, all of our custom movement scripts needed to be deterministic so they could be scrubbed through in the Timeline. Component scripts also needed to be designed to work well with a ScriptableObject-based workflow that minimized the locking of scene files in version control.
We achieved the latter objective mostly by refactoring the "settings" variables in component scripts into their own "Profile" assets, effectively separating each effect's data from its functionality. However, we then found it difficult to control these values dynamically through the Timeline, because Timeline tracks only operate on scene objects, not project assets. Moreover, the complexity of the fog and cloud settings required a large number of variables to be tweaked simultaneously in the Timeline.
To overcome this, we introduced an additional “CrossfadeProfile” to our assets, allowing the fog and cloud effects to crossfade between two preset Profiles. Animating between these two settings was just a matter of animating the crossfade slider in Timeline, from 0 to 1. In order to enable this workflow optimization, significant refactors were made to Unity’s VolumetricLighting package, Keijiro Takahashi’s KinoFog package, and our own Shadertoy-based Volumetric Clouds package.
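The sketch below illustrates the pattern under assumed names and fields (FogProfile, FogProfileCrossfade); the shipped profiles carry many more settings. The crossfade lives on a scene object so its blend slider can be keyed from a Timeline animation track, while the presets stay in version-control-friendly assets.

```csharp
using UnityEngine;

// Preset data lives in project assets that artists can edit without
// locking scene files (field names are illustrative).
[CreateAssetMenu(menuName = "Bespin/Fog Profile")]
public class FogProfile : ScriptableObject
{
    public Color tint = Color.gray;
    public float density = 0.02f;
    public float heightFalloff = 0.5f;
}

// Scene-side crossfade: Timeline animates the single 'blend' field,
// which lerps every setting between two preset profile assets. The
// fog renderer reads the blended properties each frame, so scrubbing
// the Timeline stays deterministic.
public class FogProfileCrossfade : MonoBehaviour
{
    public FogProfile from;
    public FogProfile to;

    [Range(0f, 1f)]
    public float blend; // 0 = 'from' profile, 1 = 'to' profile

    public Color Tint => Color.Lerp(from.tint, to.tint, blend);
    public float Density => Mathf.Lerp(from.density, to.density, blend);
    public float HeightFalloff => Mathf.Lerp(from.heightFalloff, to.heightFalloff, blend);
}
```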

Sound Design

When designing sound for this piece, the aim was to create something monumental and heavy. We took audio inspiration from the "Blade Runner" soundtrack, sound effects from "Howl's Moving Castle," and machinery sounds from Playdead's "Limbo." Most machinery sounds were pitched down to give a sense of heaviness, and granular synthesis was used to stretch out metallic groans and the wails of sirens.
The primary music track in the piece was sourced from a relatively unintelligible machine-like sound, fittingly titled “zupzup.”
After pitch-shifting the sound by 13 semitones, the result formed the basis for the pulsing bassline.
After segmenting the sound by frequency, and then adding reverb, auto-pan, and distortion, we get the final result.
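For reference, the shift amount maps to a frequency ratio by standard equal-temperament arithmetic (this is general math, not project code): a shift of n semitones scales frequency by 2^(n/12), so 13 semitones is a ratio of about 2.12, just past a full octave.

```csharp
using System;

// Equal-tempered pitch math (reference only).
static class Pitch
{
    // Shifting by n semitones scales frequency by 2^(n/12).
    public static double Ratio(int semitones) => Math.Pow(2.0, semitones / 12.0);
    // Ratio(13) ≈ 2.12
}
```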

Environment Art

Due to the growing number of different building modules, it was imperative that the master model be updated as assets were completed by the artists. Maya's instancing and locator features made this iterative process more expedient, allowing us to test multiple tower configurations early on.
Once the rough configuration was set, the number of distinct modules and their sizes were communicated to the 3D artists, who modeled each unique building module within the given parameters. The model was then handed to the texture artist, who textured each module using Substance Painter and Substance Designer, and to the level designer, who replaced the blocking instances with the detailed models. Once the entire set of building modules had gone through this process, the building as a whole was brought into Unity, where the final materials were applied.

MODELING
Through the design process, the 3D artist modeled a diverse set of modules in Autodesk Maya. For variety, the modules are categorized into large, medium, and small, with at least three variations in each category. To give the rotation a dimensional feel, each module has its own unique standout feature. Furthermore, to convey a futuristic feel, the modules have hexagonal forms with multiple bevels rather than cube shapes and sharp edges. Accessories on the modules are also thoroughly modeled. The gears are styled futuristically as well: one is shaped like a wheel rather than a gear, and others have distinctive holes in the middle to suggest the mystery of future technology. The chimneys have their own futuristic design, each capped with a cylindrical lid. Every accessory and module is designed so that, combined, they evoke the feeling of a computer circuit. Bridges and elevators follow a more traditional design but carry their own unique mechanisms to stay futuristic: the elevators are octagonal, and the bridges have connectors that complete the look.
TEXTURING
The textures in this project were created in Substance Designer, with Substance Painter used to apply them to the different models. Procedural materials were created with adjustable variables for age (rust, dust, and oxidation), and seamless textures were created for efficiency when covering large surfaces. The main material is copper, with a strong contrast between fresh and oxidized copper to separate the two environments shown in the project. The heavy use of emissive mapping shows the energy that powers these heavily industrial buildings, exposed on the surfaces by the rough, crude design and worn-down protection.
ENVIRONMENT ANIMATIONS
The mega skyscrapers are envisioned as vertical stacks of individual building volumes that rotate independently of the overall tower. As each volume rotates, the gears in the reveal between the volumes also rotate, adding to the mechanical visual effect. Once a volume is rotated to its desired orientation, the gate opens and the skybridge extends across to the paired tower. This environmental animation was intended to allow any building volume to connect to any of its neighboring towers via skybridges, creating the impression of a networked city. The scene starts on one of these bridges.
The original vision was large modular towers with bridges spanning the horizon, so we needed a network of buildings those bridges could reach. The buildings were first placed at varying distances, which required bridges of varying lengths; the problem was that a bridge, if not planned properly, could collide with other buildings, and the variety meant additional work for the artists. To avoid both issues, a city was designed where all the buildings were equidistant, so all bridges were the same length; however, this created alleys the camera could look down, making for boring shots. The approach we implemented was extending and retracting bridges that telescoped into one another like Russian dolls, and eventually into the building module itself, prior to rotating. This allowed buildings to be placed anywhere within a range of distances rather than at exact positions, and we could network any module to any other without worrying about collisions. A module's sequence was to rotate its bridge gate to face a target module while that target rotated one of its faces back toward it. The bridge gate doors would then open, revealing the bridge and extending it toward the target. The bridge stayed extended for a set time, then retracted, the gates closed, and the module could rotate on to the next building. During the rotation, extension, and retraction phases, gears on the module and bridge spun to give the impression of mechanical movement; the gears were driven by an animation curve to add some randomness without hand-animating each one. A sketch of this cycle appears below.
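This is a hypothetical sketch of the rotate, open, extend, hold, retract cycle; the timings, names, and telescoping math are illustrative assumptions, not the shipped animation code.

```csharp
using System.Collections;
using UnityEngine;

// Drives one module's bridge cycle; run with StartCoroutine(RunCycle(target)).
public class BridgeModule : MonoBehaviour
{
    public Transform gate;          // rotating face holding the bridge gate
    public Transform[] segments;    // nested "Russian doll" bridge segments
    public float extendDuration = 2f;
    public float holdTime = 5f;

    public IEnumerator RunCycle(Transform target)
    {
        // Rotate the gate to face the target module (yaw only).
        Vector3 toTarget = target.position - transform.position;
        toTarget.y = 0f;
        Quaternion goal = Quaternion.LookRotation(toTarget);
        while (Quaternion.Angle(gate.rotation, goal) > 0.5f)
        {
            gate.rotation = Quaternion.RotateTowards(gate.rotation, goal, 45f * Time.deltaTime);
            yield return null;
        }

        yield return Slide(0f, 1f);           // extend the telescoping segments
        yield return new WaitForSeconds(holdTime);
        yield return Slide(1f, 0f);           // retract before the next rotation
    }

    // Slides each nested segment proportionally along the gate's forward axis.
    IEnumerator Slide(float from, float to)
    {
        for (float t = 0f; t < extendDuration; t += Time.deltaTime)
        {
            float s = Mathf.Lerp(from, to, t / extendDuration);
            for (int i = 0; i < segments.Length; i++)
                segments[i].localPosition = Vector3.forward * s * (i + 1) * 2f;
            yield return null;
        }
        for (int i = 0; i < segments.Length; i++) // snap to the exact end pose
            segments[i].localPosition = Vector3.forward * to * (i + 1) * 2f;
    }
}
```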
Another piece of the design was a transition level in which the poor, rusted buildings thin out as the smog gives way to clouds and a luxury skyline. Due to scope this level was cut; however, Unity editor tools were created in case it was added back. A building generator randomly stacks modules with different angular orientations and saves each stack as a single building asset file. These asset files can then be passed to an environment generator, which places buildings at random locations around a center point, given the number of small, medium, and large buildings to place and the spacing interval between any two buildings. We decided not to use these tools for either the industrial or the luxury level, because of the randomness and how close the camera gets to that detail. A sketch of the stack generator follows.
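This sketch shows the shape of the stack generator; the parameters and the 45-degree orientation steps are illustrative assumptions. In practice a tool like this would be invoked from a custom inspector button.

```csharp
using UnityEngine;

// Stacks randomly chosen, randomly oriented modules into one building.
public class BuildingGenerator : MonoBehaviour
{
    public GameObject[] modulePrefabs; // small/medium/large module variants
    public int stackHeight = 10;
    public float moduleSpacing = 25f;

    public void Generate()
    {
        for (int i = 0; i < stackHeight; i++)
        {
            GameObject prefab = modulePrefabs[Random.Range(0, modulePrefabs.Length)];
            // Random orientation in 45-degree steps to vary the silhouette.
            Quaternion rot = Quaternion.Euler(0f, Random.Range(0, 8) * 45f, 0f);
            Vector3 pos = transform.position + Vector3.up * i * moduleSpacing;
            Instantiate(prefab, pos, rot, transform);
        }
    }
}
```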

CHARACTERS
After exploring several pre-made character rigs to populate the lower environment, we settled on Space Robot Kyle because he fit the aesthetic and theming of our environment. The Kyles are animated with three walk cycles created in Maya 2018. Our storyboards and initial camera work called for more animations supporting a simple story focused on Kyle in this world; however, that work was cut in favor of polishing the environment, VFX, and camera work.

Post Production

Lighting

The two main scenes of the project employ two distinct lighting strategies. In the dimly lit industrial scene, the towers are primarily defined by the emissive highlights of their materials, complemented by area lights that keep the building surfaces partly visible. The scene above is lit mainly by Unity's directional light.

Clouds and Fog

The clouds in our environment are rendered using a technique very familiar to the industry at this point: raymarching. Our initial concept came from the famous Clouds entry on Shadertoy, where Inigo Quilez uses value-noise sampling and uniform raymarching to render clouds on the GPU. Of course, raymarching is not natively supported in Unity, so we needed to bring the solution over and adapt it. This ended in lukewarm results.
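For readers unfamiliar with the technique, here is the core of a uniform raymarch, transcribed into C# for readability; in the project this logic lives in a fragment shader, and SampleDensity stands in for the value-noise lookup of the Shadertoy original.

```csharp
using UnityEngine;

// Uniform raymarch: step along the view ray at a fixed interval,
// compositing density samples front to back until opaque.
static class CloudMarch
{
    const int Steps = 64;
    const float StepSize = 0.5f;

    public static Color March(Vector3 origin, Vector3 dir,
                              System.Func<Vector3, float> SampleDensity)
    {
        Color accum = Color.clear;
        Vector3 p = origin;
        for (int i = 0; i < Steps && accum.a < 0.99f; i++)
        {
            float density = SampleDensity(p);
            if (density > 0f)
            {
                // Front-to-back alpha compositing of a constant-colored sample.
                Color sample = new Color(1f, 1f, 1f, density * StepSize);
                sample.r *= sample.a; sample.g *= sample.a; sample.b *= sample.a;
                accum += sample * (1f - accum.a);
            }
            p += dir * StepSize;
        }
        return accum;
    }
}
```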
After some more research, we found a SIGGRAPH presentation on fast and stable volume rendering, with source code on GitHub. Combining this with the previous work done on the Clouds shader, we achieved a less complex but equally accurate representation of the clouds.
Unfortunately, this came at a cost. A huge one on the GPU, to be exact. Volumetric rendering isn't fast, and doing it without a highly optimized or intelligent solution will always drag a simulation down to detestable framerates. Further optimizations to the volume itself added a much-needed speedup, but we were still watching a good three-quarters of our render time get eaten by this one pass.
Fortunately, we had prior experience with a GitHub library that uses upsampled rendering, and porting over the upsampling scripts was straightforward. As one might expect, upsampling the clouds from a smaller resolution did wonders for the framerate since we only had to render a quarter or half of the pixels. On top of this, the clouds looked almost exactly the same!
Thanks to the fluffy, undetailed nature of clouds, most of the render looked equivalent to the full-resolution version. Two problem areas remained: the empty edges of the clouds, and the edges shared with geometry. The empty edges aren't terrible and tend to hide in the distance, so we left them as-is, but we could not go on without fixing the shared edges. After a couple of attempts at fudging the results with blur, we altered the cloud rendering script to make two rendering passes: one of full-resolution clouds and one of less-than-full-resolution clouds. The trick to keeping performance up was to render the full-resolution clouds only where there is geometry, then overlay that on top of the quarter- or half-resolution render. Since our raymarched sampling is deterministic, no strange breaks appear between pure clouds and the geometry. A sketch of the compositing setup follows.
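This sketch shows the shape of that two-pass setup under assumptions: the material, pass indices, and the _LowResClouds property name are illustrative, not our exact shader interface.

```csharp
using UnityEngine;

// Low-res clouds everywhere; full-res clouds only where geometry exists.
[RequireComponent(typeof(Camera))]
public class TwoPassCloudRenderer : MonoBehaviour
{
    public Material cloudMaterial;
    [Range(1, 4)] public int downsample = 2;

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        int w = src.width / downsample, h = src.height / downsample;

        // Pass 0: cheap clouds at reduced resolution.
        RenderTexture lowRes = RenderTexture.GetTemporary(w, h, 0, src.format);
        Graphics.Blit(src, lowRes, cloudMaterial, 0);

        // Pass 1: full-resolution clouds, written only where the depth
        // buffer reports geometry; pure-sky pixels fall back to the
        // upsampled low-res result.
        cloudMaterial.SetTexture("_LowResClouds", lowRes);
        Graphics.Blit(src, dst, cloudMaterial, 1);

        RenderTexture.ReleaseTemporary(lowRes);
    }
}
```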
At this point, the properties of the cloud renderer started to get unwieldy, so we ported all of the fields over to a ScriptableObject that our artists or programmers could check out and change without issue. It would have been nice to be done with it here, but some parts of the render were still bothersome. For instance, one of our shots pans the camera over the vast cloudscape at a point fairly high above the clouds, but in the distance the clouds become a flat mess of pixels. To fix this, we introduced a distance-based “cloud granularity” property which would influence how the clouds grow larger as they get further from the camera. As one might expect, this causes problems when that factor grows too large, as the samples will have wider steps and eventually break altogether. Fortunately for our case, we never allow this problem to occur.
Alongside the distant-cloud property, we optimized one more portion of the code. We still weren't getting the breadth we wanted (i.e., clouds to the horizon), so we made the step size of the raymarcher dependent on its distance from the camera, as sketched below. It worked like a charm, and clouds could render all the way to the horizon, at the cost of only slightly sparser clouds and some sampling error at large step-multiplier values or for very, very distant clouds.
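A sketch of the idea; the linear growth term is an assumption about the exact curve we used.

```csharp
static class AdaptiveMarch
{
    // Near the camera we take fine steps; farther out each step widens,
    // letting the march reach the horizon within a fixed step budget.
    public static float StepAt(float distance, float baseStep, float growth)
    {
        return baseStep * (1f + distance * growth);
    }
}
```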

Rendering

Project Assets

Hand-Authored Environment Assets

Since the initial idea called for very particular requirements in terms of environment design, we hand-authored almost all of the environment assets from scratch. You can find them on Sketchfab here. We used Autodesk Maya 2018 for modeling and UV-mapping the art assets as well as for scene blocking.

Unity Store Assets

As we put a strong focus on authoring the environment assets, we took advantage of the Unity Asset Store for supplementary assets. This way, we could concentrate our efforts on the most important part of the project without compromising its other necessary parts.
  • Kyle - with hand-authored texture and animation

Code

  • Shadertoy Clouds
  • SIGGRAPH Presentation, Fast and Stable Volume Rendering (GitHub)
  • Volumetric Lights for Unity 5
  • Keijiro Takahashi: KinoFog

Thank you for reading!

