Quarantine Zone Omega
Outbreaks are commonplace; the city keeps on ticking...

INTRODUCTION

My name is Jeffrey Hepburn; I’m a 21-year-old freelance filmmaker, photographer, and digital artist. I’ve been working with Blender for going on 7 years now as a hard-surface artist, and specifically with Unity and the Allegorithmic Substance suite for about 2 years. Both toolsets have greatly elevated my abilities as an artist and filmmaker, and I am excited to see the future of film and animation as the technology behind realtime visualisation and games, both hardware and software, only continues to improve. In particular, I find that the Unity engine, compared to other options out there for indie developers, is one of the most inviting and easiest to get into, with a vast library of first- and third-party tutorials, good documentation, and a thriving community behind it.

PLUGINS USED

  • SEGI
  • Rainbow Folders
  • BOLT
  • ProCore Tools
  • Builder
  • Grids
  • Post Processing Stack V1
  • Advanced FPS Counter
  • Next-Gen Soft Shadows
  • UBER Shaders
  • Cinemachine
  • HBAO
  • SE Natural Bloom
  • Substance Source for Unity
  • Instant Screenshot
  • Screen Space Subsurface Scattering
  • Mesh Combine Studio
  • Deep Sky Haze
  • Play Mode Saver

GETTING STARTED ON THE SCENE

Considering the themes of the Neon Challenge, my first thoughts obviously turned to classic films like Blade Runner, Star Wars, 2001: A Space Odyssey, Alien, The Fifth Element… I even toyed with the idea of Mad Max themes. In the end I settled on a cross between the neo-noir themes of Blade Runner and the snowy, dystopian NYC of Ubisoft’s The Division. I wanted to depict an overrun but still somewhat familiar future: something grungy, sparkling and glowing with the lights of capitalism, but decrepit, militaristic, and diseased. For direct inspiration I grabbed screenshots from Blade Runner, The Division, Fallout 4, Mass Effect, Sleeping Dogs, and short films like State Zero and The Leviathan, as well as, of course, stock photos of city streets in New York, Japan, and Hong Kong at night.
Getting into Unity, I had a decent starting place prepared. Having worked on a number of smaller scenes in the past, I had a pre-existing default package for Unity 2017.2 (though I ended up upgrading to 2017.3 halfway through the project) set up with my usual quality settings and a list of my usual post-processing presets. Most of my work is on the “tech demo” side of things; I like to push the boundaries of what is possible at runtime, so in many ways my presets aren’t best practice for game optimisation, but are instead designed for maximum possible visual fidelity whilst maintaining realtime framerates on my system. I’m using Unity here more as a realtime cinematic tool than simply a game engine. As such, tools like HBAO, Next-Gen Soft Shadows, Screen Space Subsurface Scattering, Deep Sky Haze, and SEGI allowed me to really push the boundaries of realtime lighting and shading in the Unity engine.
At this point, I began work on my basic environment lighting, which would set the mood and time of day for all the other objects in my scene. I chose a free HDRI environment skybox from HDRI-Skies.com, brought that into Unity, and set up my directional sun lamp to roughly match the position, colour, and intensity of the sun in the skybox. Then I went to the lighting options and disabled both Realtime and Baked Global Illumination. If I were developing this Unity scene for an actual game, I would likely have used baked lighting, as baking the GI would save a lot of runtime performance, but instead I chose to go with SEGI, an experimental realtime voxel-based global illumination solution. SEGI gave me more flexibility to work at runtime without having to stop to bake lighting. Of course, you’re trading runtime performance for the time saved by not pre-baking, but knowing this was a cinematic tech-demo scene rather than a game that has to be playable and responsive on a multitude of systems, I was willing and able to make that tradeoff.
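For anyone who prefers to script their project setup, those two checkboxes can also be flipped from an editor script. A minimal sketch, assuming the 2017-era UnityEditor.Lightmapping API (the menu path is just something I made up for illustration):

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: the scripted equivalent of unchecking Realtime
// and Baked Global Illumination in the Lighting window.
public static class DisableBuiltInGI
{
    [MenuItem("Tools/Disable Built-In GI")] // hypothetical menu path
    static void Disable()
    {
        Lightmapping.realtimeGI = false; // no realtime (Enlighten) GI
        Lightmapping.bakedGI = false;    // no lightmap baking
        Debug.Log("Built-in GI disabled; SEGI will handle bounce lighting.");
    }
}
```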

DEVELOPING A CHARACTER WORKFLOW

Once I had my reference images and initial project setup, the first major thing I wanted to establish for my world was the characters that would inhabit it. I was working solo, and being a hard-surface environment/prop artist, I didn’t really have the experience necessary to sculpt, model, and animate humanoid characters and outfits for the scene, but I also didn’t want to just go to the Asset Store and buy generic character models and animations. As such, my biggest concern going into this project was being able to create characters that were at least realistic enough (under the right lighting in my evening scene) to help bring the world to life. My plan was to find a workflow for quickly creating characters, and then use their scale and costumes to guide the development of the rest of the environment. For this project, I chose Adobe’s Fuse CC to create my characters. Fuse allowed me to create detailed game-quality characters, customise their bodies and clothing to my desires using a simple RPG-like interface, and then export those models to the Mixamo cloud service to be automatically rigged for animation. From there, I had access to Mixamo’s vast library of animations, giving me almost any generic action I could possibly want for the characters in my world: running, jumping, idling, texting, talking, waving, fighting, walking… I had a huge amount of flexibility to choose from, and because all the models shared the same bone structure, I was able to use the animation sets interchangeably across all characters as needed.

For the characters’ materials, I ran into a small problem: the texture sets from Fuse weren’t channel-packed the way Unity expects, so I grabbed a tool from GameTextures.com that uses Substance Player to pack texture sets for Unity. (I used this tool again later when packing textures for some of the environment decals and debris I grabbed from the Quixel Megascans library.)
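If you don’t have that Substance Player tool handy, the same packing can be approximated with a few lines of Unity editor code. A rough sketch, assuming Unity’s Standard-shader convention of metallic in RGB and smoothness (inverted roughness) in alpha, and that both source textures are the same size and marked Read/Write enabled:

```csharp
using System.IO;
using UnityEngine;

// Sketch of Unity-style channel packing: metallic in RGB,
// smoothness (1 - roughness) in the alpha channel.
public static class TexturePacker
{
    public static void PackMetallicSmoothness(Texture2D metallic, Texture2D roughness, string outPath)
    {
        var packed = new Texture2D(metallic.width, metallic.height, TextureFormat.RGBA32, false);
        Color[] m = metallic.GetPixels();
        Color[] r = roughness.GetPixels();

        for (int i = 0; i < m.Length; i++)
            m[i].a = 1f - r[i].r; // smoothness = inverted roughness

        packed.SetPixels(m);
        packed.Apply();
        File.WriteAllBytes(outPath, packed.EncodeToPNG());
    }
}
```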
Once in Unity, I decided to use the UBER shaders from the Asset Store in place of the standard PBR shaders. For a number of the smaller props and assets in the scene I stuck with the standard shaders, and they blend together seamlessly with UBER, but I knew I wanted the rain-washed look of Blade Runner in my scene, and the dynamic wetness provided by the UBER shaders would let me add dampness to my characters and environment as needed in-engine, without having to tweak and rebake texture sets. I also knew I was going to be using a lot of brick walls for the buildings in my scene, and the parallax occlusion mapping in the UBER shaders would really make those surfaces pop.
With UBER shaders applied to my characters, something was still missing though; they just didn’t look right yet- they looked too flat. They were missing subsurface scattering: that effect you get when you shine a flashlight through your hand and your flesh glows, the result of light diffusing beneath the surface of a material like flesh or colloids. Without that effect- in particular around the nose, fingers, and ears- the characters just looked disconnected from the scene: too opaque, plasticky, and flat. Luckily I found an incredible (and free) screen-space SSS effect on the Unity Asset Store, which applies itself to any object with the correct script attached. This worked perfectly with the Fuse models, as they came already broken up into separate objects, with all the skin of the body on its own separate mesh layer. With the screen-space SSS applied, the characters’ skin looked far more waxy up close- not perfect, but good enough for my purposes. I knew I wasn’t going to get too close to any of the characters in my scene; my focus with the camera would be on demonstrating the environment at large as a single living, breathing world, not going in for dramatic close-ups on the characters. The humans in my scene would be more a part of the cityscape itself- elements to fill the negative space of the streets and contribute to the overall story without individually being the focus- and from a distance the waxier look of the skin softened their appearance and gave it more depth.

DEVELOPING MY INITIAL LOOK

With my first set of characters finished, and a good workflow for designing and animating them using the Mixamo library established, I decided to finish dialing in my initial look for the scene.
I applied SE Natural Bloom, which I find is a soft and quite pleasing bloom asset that nicely recreates how bloom actually appears in-camera when using pro-mist filters. (The built-in bloom effect in the new Post-Processing Stack is pretty good, and the ability to tweak the falloff toe meant I could have used it just as well; I just had SE Bloom already and prefer the more subtle effect it provides.) Lastly, I added my post-processing stack. For this scene I chose V1 of the Post-Processing Stack. V2 has some really interesting ideas and features, specifically the ability to create processing volumes which can dynamically change your post-processing as you move through the world, for example changing the look and mood as you go from a forest to a field to a cave to a radioactive dump. However, V2 is still very experimental on GitHub, I was already more familiar and comfortable with V1’s workflow, and for my single environment scene the ability to change my look dynamically with volumes wasn’t really needed, so I played it safe and stuck with V1. With V1 I added FXAA to smooth out the edges of my models; Screen Space Reflections to provide a mirror-like shine off surfaces perpendicular to the camera frame, such as the reflections in the puddles on the ground; motion blur to simulate a camera shutter; eye adaptation to give myself auto-exposure; colour grading with ACES for cinematic tonemapping; and finally chromatic aberration, grain, and vignette to finish simulating that real cinema-camera look.
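All of this was configured in the V1 profile inspector, but for the curious, here is a rough scripted sketch of the same choices. I’m writing the UnityEngine.PostProcessing member names from memory, so treat them as assumptions and check them against the V1 source:

```csharp
using UnityEngine.PostProcessing;

// Rough sketch of the same look configured in code. V1 settings are
// structs: copy, modify, assign back. Member names from memory.
public static class LookSetup
{
    public static void Apply(PostProcessingProfile profile)
    {
        var aa = profile.antialiasing.settings;
        aa.method = AntialiasingModel.Method.Fxaa; // smooth model edges
        profile.antialiasing.settings = aa;

        var cg = profile.colorGrading.settings;
        cg.tonemapping.tonemapper = ColorGradingModel.Tonemapper.ACES; // filmic tonemapping
        profile.colorGrading.settings = cg;

        profile.screenSpaceReflection.enabled = true; // puddle reflections
        profile.motionBlur.enabled = true;            // camera shutter
        profile.eyeAdaptation.enabled = true;         // auto-exposure
        profile.chromaticAberration.enabled = true;   // lens fringing
        profile.grain.enabled = true;                 // film grain
        profile.vignette.enabled = true;              // darkened corners
    }
}
```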
Finally, I had my basic look dialed in. My initial environment lighting was how I wanted it, and my post effects were pretty much set as they would remain for the rest of the project. I had my first handful of characters with working animations, and decided to finish filling out the cast with about 10 unique characters that I would scatter and repeat throughout the scene. I wanted to strike a balance between the amount of work I would have to put into creating and animating characters and minimising noticeable repetition of assets. I also wanted the look of the characters to help drive the story I was going to tell. While I had originally planned a bustling city street much like the early scenes of Blade Runner, the characters I was designing were pushing me in a new direction. I had created a handful of security forces, and this inspired me to build more of a quarantine zone. The focal point of my environment would be a VTOL dropship landing behind a barricaded containment zone. This would keep my environment less populated by civilians, and introduce some interesting new elements to my planned design, such as armoured personnel carriers, drones, barricades, and debris.
(The idea of the VTOL was dropped as I got further into the project, replaced instead by spotlight-bearing hunter drones.)

DEVELOPING THE ENVIRONMENT

At this point, I had mostly decided what I wanted my scene to roughly resemble, but I had to make some decisions about limiting the scope of the project. I only had about a month to work on this scene, making most of the assets from scratch, and only in my free time between work, so I needed to make some clever decisions to downscale the scene to something manageable while still giving off the illusion of grand scale. By going with the diseased themes of something like The Division, I could artificially limit the amount of cityscape I actually had to design. Instead of building a massive open world, I could focus on a couple of small sections of street and alley, and cut off visibility down the longer stretches of road with barricades and fences. This allowed me to focus on the relatively small area around the intersection at the heart of my scene, rather than trying to build whole blocks of city. I would still have to be cognisant of the skyline, and build facades for the tall buildings reaching up several stories, but it would be far easier to kitbash modular tilesets for those than to populate the streets themselves with all the unique signs and props you’d expect to be littering the street level of the city.
From here, I began a rough blockout of my street layout.
One thing I found very helpful is how nicely the rule of 3 applies to urban cityscapes. The width of a car lane is on average about 3m, the height of a building story is about 3m, and a person is roughly 1.5m tall; this means the tileset for my scene could be based entirely around 3x3m sections that easily fit and snap together.
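As a trivial illustration of how convenient that modulus is, here’s a hypothetical snap helper (not something from the project itself) that keeps every placed prop aligned to the 3m grid:

```csharp
using UnityEngine;

// Hypothetical helper: snap a position to the 3x3m tile grid so
// modular wall and storefront sections line up exactly.
public static class TileGrid
{
    const float TileSize = 3f;

    public static Vector3 Snap(Vector3 p)
    {
        return new Vector3(
            Mathf.Round(p.x / TileSize) * TileSize,
            Mathf.Round(p.y / TileSize) * TileSize, // stories are ~3m too
            Mathf.Round(p.z / TileSize) * TileSize);
    }
}
```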
At this point I had my layout for the streets; now I had to populate them with objects and give them meaning and context. The first thing to decide was what these blockouts would actually represent. I started looking through Google Images for stock photos of New York streets, searching specifically for storefronts. I compiled a list of ideas for the different shops that could make up my ground-floor building facades: book stores, bakeries, barber shops, coffee bars, pizza restaurants, jewelers, salons, vintage records, pharmacies, grocers, massage parlors, etc… At the same time, because many NYC shops also form the ground floor of apartments, I went looking for a handful of different styles of brick and concrete architecture for the upper floors above the street-level shopping district. Now I could start going down my list and putting together modular tilesets for these shop exteriors. At this point it was just a matter of building tilesets- there isn’t much to be said except to provide screenshots documenting the progress as I built more and more models to increase the detail. In general I kept looking for negative space in my scene and trying to figure out what to fill those voids with in order to give the illusion of a lived-in and naturally built environment.
(The original list of storefronts would slowly get reduced to just a couple of fronts, as both the focal point of the scene and most of my attention moved towards the quarantine zone at its centre.)

Making Progress!

For the buildings in the far distance I found an amazing free texture pack of orthographic building facades. With it I was able to construct “skybox” buildings in the mid-distance out of quads. They’re exceptionally low-poly and low-detail, but because the camera never moves beyond the intersection in the middle of the scene, and the fog in the distance reduces visibility, it’s really not noticeable. These assets simply help to fill in negative space around the skyline and provide the illusion of close neighbouring blocks and the densely packed city we are in the middle of, rather than a disconnected set hovering in the middle of a void. I also found lots of city skylines on Wikimedia Commons, which I brought into Photoshop, cut out, and then placed in the very far distance to become part of the skybox horizon. They also proved useful as a kind of light cookie in the foreground, breaking up and adding shape to the shadows cast on the nearby buildings by the sun lamp. Altogether these provided the illusion of a much larger cityscape than I was actually building for this small, localised scene.
After I had my skyline roughly finished, I proceeded to work on the actual quarantine zone that would be the centrepiece of my scene. For this I took heavy inspiration from Ubisoft’s The Division, and in particular the work of environment artists Tommy Alvarez and James Trevett. I liked the look of the trusses covered in plastic canvas, the neon decontamination tubes, and the military wire fencing. I slowly built up a modular tileset that let me construct the walled-off streets and the decontamination booth.

Nearing Completion!

I then began the finishing touches on my scene- drones in the sky, flying cars passing at random, trash and litter across the roads, road paint, sandbags, signs, air ducts, road blocks, etc.- objects to fill the negative space in my scene and make it feel more alive. For most of the small debris and decal assets I used the Quixel Megascans library. They have a plethora of great scanned assets for all your environment needs, from trash to foliage, rocks, rubble, and more. I ended up using some of their construction rubble, urban asphalt decals, dirt, and trash. These little details are what took the scene from feeling flat, empty, and rather mediocre to feeling naturally worn-in and detailed.

VEHICLE MOTION (DRONES & FLYING CARS)

While developing the lighting for the scene, I decided at one point that a spotlight splashing across the back wall really helped make the detail pop. Having lights and shadows all over the place really helped pull out the detail in the environment, and this spotlight effect in particular was a perfect fit for the drones I wanted. I did decide, however, that I wanted to “animate” them procedurally- hand-animating a drone hover would be too time-consuming and too slow to iterate on if I wanted more or less of the effect in-engine. My solution was simple, yet results in quite natural motion (which, being procedural, can loop indefinitely and be tweaked at runtime if desired).
The problem: make a natural-looking hover movement for the drone that remains locked in the vicinity of its initial starting position and doesn’t drift away over time. The solution: sine waves! While my final graph in BOLT may look somewhat complex, that’s primarily because I broke the Vector3 coordinates up into their X, Y, and Z components to affect them all separately, keeping the effect even less synchronised and more “random” looking. To get my hovering effect I essentially wanted cyclical motion of the drone bobbing up and down and left and right across all axes, and because a sine wave oscillates between -1 and 1, I was able to use it to push and pull the drone around its initial position without it ever drifting further than a fixed bound from that location. One level of sine wave, however, was too mechanical: the drone simply followed a continuous, predictable loop in the air. So I added two more levels of sine waves, which affected the overall motion of the drone at different scales and different speeds. In this way I had three layers of waves, out of phase and interacting with one another constructively and destructively, and while the mathematical result is still a predictable, consistent loop over large time scales, to a human watching, the motion appears too sporadic and unpredictable to read as mechanical. It looks much more like a drone actually struggling against the wind.
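Since the original graph lives in BOLT, here’s a minimal C# sketch of the same three-layer sine idea; the amplitudes, frequencies, and phase offsets are illustrative values, not the ones from my graph:

```csharp
using UnityEngine;

// C# sketch of the BOLT hover graph: three sine "octaves" per axis,
// each with its own frequency and amplitude, summed and added to the
// drone's starting position. Values are illustrative.
public class DroneHover : MonoBehaviour
{
    [SerializeField] float[] amplitudes = { 0.4f, 0.15f, 0.05f };
    [SerializeField] float[] frequencies = { 0.3f, 1.1f, 2.7f };

    Vector3 origin;

    void Start() { origin = transform.position; }

    void Update()
    {
        Vector3 offset = Vector3.zero;
        for (int i = 0; i < amplitudes.Length; i++)
        {
            float t = Time.time * frequencies[i];
            // Different phase offsets per axis keep X, Y, Z desynchronised.
            offset.x += Mathf.Sin(t) * amplitudes[i];
            offset.y += Mathf.Sin(t + 1.7f * (i + 1)) * amplitudes[i];
            offset.z += Mathf.Sin(t + 4.2f * (i + 1)) * amplitudes[i];
        }
        // Sine oscillates in [-1, 1], so the drone never drifts further
        // than the summed amplitudes from its origin.
        transform.position = origin + offset;
    }
}
```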
The flying cars solution was far simpler. I just wanted them to fly past from a set direction and come back to fly by again at a slightly random interval. For this, on start I recorded each vehicle’s initial position, and then in the update loop I had a timer set to a random interval between 15 and 35 seconds, after which it would reset the vehicle to its starting position to fly by again… and on and on.
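In C# form, that logic is just a few lines; again a sketch, with the speed value made up for illustration:

```csharp
using UnityEngine;

// Sketch of the fly-by loop: record the start position, fly forward,
// and after a random 15-35 second interval teleport back to start.
public class FlyByVehicle : MonoBehaviour
{
    [SerializeField] float speed = 30f; // illustrative value

    Vector3 startPosition;
    float timer;

    void Start()
    {
        startPosition = transform.position;
        timer = Random.Range(15f, 35f);
    }

    void Update()
    {
        transform.position += transform.forward * speed * Time.deltaTime;

        timer -= Time.deltaTime;
        if (timer <= 0f)
        {
            transform.position = startPosition; // reset for the next pass
            timer = Random.Range(15f, 35f);     // new random interval
        }
    }
}
```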

ANIMATING THE CAMERA WITH CINEMACHINE & TIMELINE

With my environment finally at a point where I was fairly satisfied, it was time to move on to animating the camera for the demo. While building the scene over the preceding weeks I’d been playing with the editor scene-view camera, finding interesting angles and fly-bys, so I knew generally the kinds of shots I wanted to grab. Essentially I wanted the scene to play out like a tech demo, showing off its fidelity and letting the story play out continuously and indefinitely, like a play being performed around the camera, rather than a singular linear narrative that ends as soon as the camera stops moving. I wanted people to be able to launch the scene and let it run on loop, noticing new details in the scene each time it went by. I also wanted to give the perception of an ongoing, living city, one which existed before the story begins and will continue on after the demo is closed.
Thanks to Cinemachine I was able to very quickly block out lots of cool camera moves on the dolly, add natural hand-held camera shake, and animate zooms and pans. I finished in a single afternoon what would have taken days to animate by hand with a “dumb” camera. Timeline made it very quick and easy to animate, position clips, and activate cameras as needed to pull off my effect.

FINAL COLOUR GRADE

With everything put together, the last thing to do was to put together the final colour grade for my look. For this I saved out a screenshot of the scene and brought it into Photoshop. I tweaked the colours until I was happy with a cinematic sci-fi appearance, applied those same adjustments to one of the LUT strips provided in the Post-Processing Stack resources, and saved it out as a new LUT I could apply in the stack- and I think it really helps the whole scene come together into something gorgeous, elevating it from the flat, “log”-like raw output of the scene into something vibrant and cinematic. Enjoy some in-engine screenshots of the final product!
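For completeness, the graded strip gets wired up through V1’s User Lut effect; one more sketch, with member names recalled from memory of the V1 API, so treat them as assumptions:

```csharp
using UnityEngine;
using UnityEngine.PostProcessing;

// Sketch: assign the Photoshop-graded LUT strip to V1's User Lut
// effect at full strength.
public static class GradeSetup
{
    public static void ApplyLut(PostProcessingProfile profile, Texture2D lut)
    {
        var settings = profile.userLut.settings;
        settings.lut = lut;         // the graded LUT strip
        settings.contribution = 1f; // full strength
        profile.userLut.settings = settings;
        profile.userLut.enabled = true;
    }
}
```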

Jeffrey Hepburn
Hardsurface 3D + PBR Textures