QAHAL - Neon Challenge
My name is Guilherme Zanchett. I'm a Game Designer based in Brazil, and I developed this project alone. This is my submission to the Neon Challenge.
In this article I'll try to describe my whole process since day one. The idea is not only to submit a fine piece of environment/animation, but also to contribute to the community with some of the things I learned during production. At the end of 2017 I was running up and down with lectures, so it took me a while to realize the challenge had been announced. Since I was already a couple of days behind, I chose to wait until all my classes were over to start production, so I wouldn't have any student work to review or other distractions during development. I officially started on December 11th, 2017.

My time to work on the project was limited due to other jobs, and I knew I was not gonna be able to work on this every day, so I needed to keep it as simple and organized as possible. The first thing I did was read the rules and plan the whole thing out. This seems like obvious stuff to do, and you're probably thinking it's needless to say (or document) any of this, but I can't tell you how many people I've seen fail their projects for lack of planning. That said, I'll try to describe everything the best way I can, hoping it helps someone...


Planning is as important as executing. I would say it's 50% of the job, and skipping it, or doing it poorly, raises the chances of failure exponentially. So I took about 5 days to plan the whole thing out and sketched an initial schedule (a rough draft of dates that I refined once everything was defined). First I read the guidelines for the project (announcement, briefing and the terms of service). This is really important, and people often just skip it because they think it's a waste of time. After that I took some time to analyze the concept art provided by Unity's Art Director Georgi Simeonov. I chose to work with his art for three main reasons: 1 - His concepts looked amazing. 2 - I suck at painting/concepts. 3 - Time. From moment one, the concept that captured my attention was this one:
With the concept chosen, I started thinking about a story for it. At first I thought it could be a refugee camp built by humans to shelter discontinued robots that would otherwise be destroyed. But for some reason I didn't like this idea, so I started thinking about something different and presented my ideas to a couple of friends. After listening to their input, I took what I thought were the strongest points of the whole thing and made a new (kind of...) story for the project (I'll come back to this later). With the whole story/concept ready, it was time to work on an asset list, which would allow me to plan what I would need to model/texture (only 3D modeling being considered here, since I wouldn't have time to sculpt anything) and what to get from the asset store/internet.
Another important part was to determine/identify the tools and workflows I was gonna go with (I'll detail this later). This is very important, and takes some careful research. At this point I was looking to answer questions like "Which software do I need to do this?", "Is this a Unity feature or will I need to find it somewhere else?", "I don't know how to use this, how long will it take me to learn?". Seems silly, but trust me, it's not. This can save you lots of time during the project, and even avoid failure. You don't wanna plan the whole thing out just to realize later that the time you had planned for creating something now has to be spent learning some new technique you didn't know was required (which actually happened to me, even considering all this).
This was all the information I needed to plan the production. Here I defined the priorities of all tasks in order to create a refined schedule. I divided the whole thing into: Pre Production > Production > Post Production.
To manage all of the project information and the schedule I chose to work with Dapulse, which has Gantt charts (they help me a lot to keep track of deadlines) and pulses (cards) that can hold information (comments, lists, links, etc.) and be associated with dates. It took me half a day to take all the information I had collected up to this point and create a schedule entry for each task in Dapulse. Planning a schedule can be tricky, and one of the big mistakes I see people make very often is not accounting for unexpected problems/events. So to sum up, my schedule was built like this:
Guidelines (briefing, terms of service, etc...) + Concept + Story + Assets + Tools + Production + Post Production + Article + Problems + Events (Holidays + friends/family visits during this time of the year).


In the production stage, I used Photoshop to sketch some rough concepts (which look terrible, but work for me) of the setting. This was the first rough sketch of the fort, done in literally 5 minutes:
In the beginning, the fort was supposed to have 4 entrances, but I didn't like that, so it ended up with only 2. The other thing I did after drawing the fort was to test the concept of setting the scene on a mountain. The whole idea was to have 4 paths leading to the main place. There would be a giant precipice between the mountain and the fort, all connected by bridges.


So I made this quick "mountain top" using World Machine. Did this in 10 minutes just to test the composition/shapes. Didn't like it.
The first design led to another problem: I would need more stuff to fill my background, because the edge of the terrain would be visible. So after trying a couple of layouts, and a lot of tweaking, I decided that a "valley" on a "mountain top" would be a better fit. This is the terrain I ended up with:
I wanted to keep it simple, so I used only two textures on the terrain: One for the snow and one for the cliffs. To do that I used the Terrain Toolkit 2017, which is free, very easy to use and presents wonderful results.
These were the texturing configs I used to get the result you see above. The Toolkit supports lots of textures, allowing you to select the desired slope for each one of them. This speeds up the workflow, saving both time and money.
For my terrain, I just added the textures I wanted (in my case, one for the cliffs and another for the snow), adjusted the slope, and pressed "Apply procedural texture" to watch the toolkit work its magic. It's a pretty good tool for free, and a perfect solution for someone like me who didn't want to spend money on expensive terrain tools. One feature I was hoping to use was splat maps, but I couldn't find it in the toolkit. In the end, though, it did a very good job with its procedural texturing "magic".
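For anyone curious what slope-based texturing looks like under the hood, here's a rough sketch of the idea using Unity's built-in terrain API. This is my own illustration of the technique, not the toolkit's actual code, and the layer order and 45-degree threshold are assumptions:

```csharp
using UnityEngine;

// Sketch of slope-based terrain texturing, similar in spirit to the toolkit's
// "Apply procedural texture" button. Assumes terrain layer 0 = snow and
// layer 1 = cliff; the 45-degree threshold is an arbitrary example value.
public class SlopeTexturing : MonoBehaviour
{
    public Terrain terrain;
    public float cliffAngle = 45f; // slopes steeper than this get the cliff texture

    void Start()
    {
        TerrainData data = terrain.terrainData;
        int w = data.alphamapWidth, h = data.alphamapHeight;
        float[,,] map = new float[h, w, 2];

        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                // Steepness in degrees at this point (normalized coordinates).
                float steep = data.GetSteepness((float)x / w, (float)y / h);
                // Blend over a 20-degree band instead of a hard cut.
                float cliff = Mathf.InverseLerp(cliffAngle - 10f, cliffAngle + 10f, steep);
                map[y, x, 0] = 1f - cliff; // snow on flat ground
                map[y, x, 1] = cliff;      // cliff texture on steep slopes
            }
        }
        data.SetAlphamaps(0, 0, map);
    }
}
```

Attach it to an object, assign the terrain, and press play; the soft blend band is what keeps the transition between snow and rock from looking like a hard painted line.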
The Terrain Toolkit 2017 can be found here.


After setting up the terrain I started producing assets and populating the scene with them. Since I was working against the clock, I chose not to sculpt anything, and went straight to polygon modeling. I modeled everything in 3DS Max 2018.
To save time, I also decided not to go with the high poly > low poly > bake workflow. Instead I modeled everything as a final model, and only baked the diffuse/normal maps to use with Quixel. This workflow required creating most of the details in NDO/DDO. The results were pretty nice. I also wasn't that worried about the level of detail on the models, because there would be no close shot of any single asset (except the flying robot). Here are some examples of assets I made with this workflow and used in the scene:
I modeled a total of 5 tent types. In order to reduce the impact of the small number of props in the scene, I created 3 different textures for all the steel tents. This made the scene look a little more natural.
The fort was made in modular pieces so I could change and replicate whatever I wanted, which actually happened. As I said before, the fort had 4 entrances in the original concept, but after organizing the scene I chose to make it just two. The fort's modularity allowed me to do that really fast. I could also use the fort's tower rocks to create single towers all over the map, just by switching the combination of rocks and watch towers on top of them.
The assets I made followed a single texturing workflow, all done inside the Quixel Suite: Albedo + Metallic + Normals, in a standard Unity material. I chose this workflow because it's faster and doesn't compromise the assets' quality.
Almost all assets in this project were made by me, with the exception of a few that I grabbed from the asset store to save some precious time. Here's the complete asset list I used in this project (hope I don't forget anything):
48 Personal Assets:
  • Alarm / Barrels (2) / Torch Pyre / Military Boxes (2) / Gas Cylinders / Cord Spool / The Cube (Enemy) / The Dome / Energy and Light Poles (4) / Generators (2) / Pallets / Planks (5) / Power Cell / Messenger Robot / Ruin Arch / Tents (11) / Trash Can / Watch Towers (4) / Wooden Crane / Rock Paths (4) / Robot Arm / The fort.
6 Store Assets:
  • Army Turret (FREE)
  • AllSky 3 ($10)
  • Industrial Storage Tanks (FREE)
  • Pipes Kit (FREE)
  • Rocks and Boulders 2 (FREE)
  • Snowed Fence (FREE)
Tools & Packages:
  • Unity Particle Pack (FREE)
  • Recorder (FREE)
  • Cinemachine
  • Post-processing Stack V.2
  • Volumetric Lights
  • Terrain Toolkit 2017
I also used the Robot Kyle + Dude models and animations from the old Unity 4.0 Mecanim tutorial. You can find the tutorial files here. Originally I was using them as placeholders, to be replaced by MCS Male and Female, which are both free and great but require a little tweaking. Since my time was short, I decided to keep them, since they wouldn't be the point of interest in any shot. The only adjustment I made was to paint the Dude's face and feet black, so they matched his outfit, looking like one single piece. I also changed the material on Robot Kyle a little!


As I mentioned before, the original story was about a robot refugee camp. But it seemed a little weak to me. I wanted something with a meaning, or a message if you will. So after long consideration, I decided to go with the following story: In the first years of the 22nd century, a great war between humans and machines started. Robots were fighting to be accepted as people, to have rights, and to exclude humans from high-responsibility tasks, since humans could be corrupted and miscalculate things. Humans were not pleased about this and decided to shut down the Common Mind, an A.I. brain that shared information and processing power between all machines in the world. When the humans' plan was discovered, the machines gathered to defend the Mind, and the war started. In the 3rd year of the war, without warning, an alien shaped like a cube showed up in the sky, and as fast as it appeared, it started draining power from every machine and exterminating everything on planet Earth. Only a few survived. These survivors, humans and machines, decided to leave their differences behind and live as one people in order to survive. They found a mountain beyond the mist plains and built a fort on the mountaintop. Using scraps of what was left of the old world, they built a dome with advanced ghosting technology, capable of making them invisible to radar. And over many years, they rebuilt a new society of men and machines. Until one day the cube-shaped alien returned to finish its job, and just like on its first visit, nobody could see it coming.
This story was something I had had in mind for quite some time. I think the biggest inspiration for it was Horizon Zero Dawn. I thought adding a Fallout touch to the voice-over would be pretty nice too. Also, every time I looked at the concept, I thought of it as a sanctuary or a refugee-camp type of place. A sanctuary felt too cliché, so I decided to make it a refugee center for every single being that remains on Earth.
If you want to take a look at the screenplay here's the link.
The main idea was to have a "cold truth" dramatic tragedy of a story, with a message behind it. This was really fun to work on, and I already have lots of ideas for other projects based on this storyline.


I wanted to keep the fort as the point of interest of the whole thing, and the Dome as a point of interest inside the fort. The valley idea was pretty good to support this, since I could put the fort in the center of a valley with narrow paths. I could also add a little "flavor" by populating the surroundings with a couple of smaller villages on the top of the mountain and near the fort's base.
Inside the fort, the idea was the same: the Dome in the middle, surrounded by little tents. All the buildings and props had a "built in the future from scraps" vibe. I chose to set the scene at night with a darker mood, to keep up with the emotional storyline.
At first I was going to use some vegetation, but after looking at a lot of reference images, I realized this kind of environment usually doesn't have any, so I went with a rocky scenario. I also had to take a lot of care with the positioning and amount of rocks, to keep it natural.
To compose the background, I decided to create watch towers, using the same towers from the fort. The idea was to create the impression that there were other outposts besides the fort, watching out for something.

Project Timelapse

Here's a little timelapse of the entire production of the scene:

Timeline + Cinemachine

These tools are insanely powerful and user-friendly. Adding new cameras or moving them around was incredibly easy with Cinemachine, as was controlling their timing and transitions in Timeline. Another thing I liked about this combo was the organization Timeline provides. I have this really weird way of organizing things: for instance, I created multiple camera tracks so I could orient myself better between frames without having to keep clicking and dragging the time bar around. There's no real need for this; it's just part of my weird way of working, which these tools completely supported! Timeline also lets you create track/sub-track groups, to keep the whole thing even more organized.
The whole concept is to let you create new cameras, position them around and blend between them in seconds, without a single line of code. Really just drag and drop. You can literally learn these tools and start creating a cool scene in 30 minutes. That easy. Brackeys has good, simple tutorials on both of them if you want to take a look:
Cinemachine Tutorial
Timeline Tutorial
Cinemachine and Timeline also did a great job with post-process effect blending. To do that, all you have to do is set up your cameras with the post-processing values you want and create a blending state between them in Timeline. Unity has a really nice tutorial on the subject; if you're interested, here it is (note that you'll need the Post-processing Stack V.2, since V.1 doesn't support effect blending between cameras). And here's an example of effect blending in my scene (for some reason I couldn't upload the GIF, so here's a video):
Another thing Timeline made easy was creating specific animations very quickly. I had lots of objects in the scene that shared one single animation, which can simply be done with an animator and one animation. But the problem was I needed specific animations, activated at specific times, for specific objects (a very specific situation!).
Creating a state machine for all this would be too much work, so what to do? Enter Timeline. All I had to do was create an animation track, drag the desired object in, and drop the animation at the exact time I wanted. I used two different animation configurations in Timeline. In the first, I wanted the animation to keep running until told otherwise. This actually comes by default with the track; however, if for some reason yours is not working correctly, all you have to do is switch "post-extrapolate" to "Hold" and extend the animation track to cover the time you need it to run. In the second configuration, I needed the animation to run just once and then stop at the final keyframe. To do this, first make sure the "Loop Time" option is unchecked in your animation's settings. After that, switch "post-extrapolate" to "Continue" and position the animation in the timeline.
Something else I used Timeline for was activation tracks to manage the scene. For instance, I didn't want the Cube to be in the scene, casting shadows, before a certain time. I would also have needed to position it really far away from the camera so it would not be captured in the sky before intended. This was simply solved by creating an activation track: the Cube only shows up and starts moving exactly when I want it to, without the need to hide it in the scene. I did this with lots of other objects, but the Cube is just one example (again, the GIF wouldn't upload, so here's the video).
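In the editor you build activation tracks entirely by drag and drop, but if you ever need to construct a timeline procedurally, the same thing can be done from an editor script. A minimal sketch, where the track name and clip times are made-up examples:

```csharp
using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.Timeline;

// Sketch: add an activation track so "cube" only exists in the scene
// between t = 10s and t = 20s. Times and names are illustrative only.
public static class ActivationTrackExample
{
    public static void AddActivation(PlayableDirector director, GameObject cube)
    {
        var timeline = (TimelineAsset)director.playableAsset;

        // Create the track and one activation clip on it.
        var track = timeline.CreateTrack<ActivationTrack>(null, "Cube Activation");
        var clip = track.CreateDefaultClip();
        clip.start = 10.0;     // the cube appears here...
        clip.duration = 10.0;  // ...and stays active for 10 seconds

        // Bind the track to the object it should activate/deactivate.
        director.SetGenericBinding(track, cube);
    }
}
```

Outside the clip's range the bound object is deactivated, which is exactly the "no hiding it far away from the camera" trick described above.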
In general, Timeline and Cinemachine are very powerful, easy-to-learn tools. They're user-friendly, and if you're totally new to Unity, don't worry, they're really simple to use. They work so well together that they actually feel like one tool. Plus, there are plenty of good tutorials out there explaining everything you need to know about them.


The lighting process was something I spent a lot of time thinking and reading about (not enough, and yet too much for this project's deadline), which, after some tests, made me change my mind a couple of times about what I was doing. The main reason for all this trouble was my complete lack of knowledge about lighting. I actually accounted for that when building my schedule; the fact is, it was a little more complex than I expected, so I had to find a way around the situation. In the beginning I was working in the Gamma color space, which was resulting in overexposure of some lights (totally not what I wanted). Switching to the Linear color space made the scene look a lot more realistic and gave me more control over the lights. You can read a little more about color space here.
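The switch itself lives under Edit > Project Settings > Player > Other Settings > Color Space. If you prefer, the same setting can be flipped from an editor-only script, which is handy in build automation (the menu path string below is my own choice):

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: switch the project from Gamma to Linear color space,
// equivalent to Edit > Project Settings > Player > Other Settings > Color Space.
public static class ColorSpaceSwitcher
{
    [MenuItem("Tools/Use Linear Color Space")]
    public static void UseLinear()
    {
        PlayerSettings.colorSpace = ColorSpace.Linear;
        Debug.Log("Color space is now: " + PlayerSettings.colorSpace);
    }
}
```

Note that changing color space forces Unity to reimport a lot of data, so expect a wait on large projects.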
I worked only with realtime lights, for two main reasons: I was not going to have performance issues, and baking the lights was causing lots of problems with animated emissive lights (and like I said, I'm not experienced with lighting) that I could not solve. This got me really frustrated, and I figured the only thing that could be causing these crazy white blurs was the baked data from realtime emissive materials. So I decided to clear all baked data and go "full realtime". Best solution? Probably not (taking suggestions here). But I think the result was good enough, and it allowed me to finish the project. (Self-note: learn about lighting before the next project!!)
Another topic I want to cover is volumetric lights. I used SlightlyMad's Volumetric Lights, which is free, easy to set up, and delivers high quality. You can get it here. To use it, all you have to do is drag the VolumetricLightRenderer script onto your Main Camera and set a default cookie texture for spot lights (one comes with the files, but you can create or use any spot texture you want). Then just drag the VolumetricLight script onto every light you want the volumetric effect applied to. The only thing to keep in mind is that the effect is only visible in play mode.
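The same wiring can be done from a script instead of the Inspector. A sketch under an assumption: `VolumetricLightRenderer` and `VolumetricLight` are the component names as they appear in the asset's scripts, so verify them against the version you download:

```csharp
using UnityEngine;

// Sketch: set up SlightlyMad's volumetric lighting from code rather than by
// dragging components in the Inspector. The two component class names come
// from the asset itself; check them against your downloaded version.
public class VolumetricSetup : MonoBehaviour
{
    public Camera mainCamera;   // the camera that renders the effect
    public Light[] spotLights;  // lights that should look volumetric

    void Start()
    {
        // The renderer component goes on the camera...
        mainCamera.gameObject.AddComponent<VolumetricLightRenderer>();

        // ...and each participating light gets its own VolumetricLight.
        foreach (var light in spotLights)
            light.gameObject.AddComponent<VolumetricLight>();

        // Remember: the effect is only visible in play mode, and the renderer
        // still needs a default spot cookie assigned (one ships with the asset).
    }
}
```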
Something I wanted to avoid was making the whole thing about volumetric lights. It's a cool effect that makes you want to put it everywhere, so I needed to watch out for that uncontrollable urge to add more and more volumetric lights. The effect should reduce the "artificial" look, not make the scene more and more fake. So you can see I actually used it a lot, but in a really controlled way.

Post Process

Post-processing makes a huge difference to the final product. I used the Post-processing Stack V.2, which can be found here. The reason I used V.2 (which is still in beta) was the effect blending between cameras; the Post-processing Stack V.1 doesn't support camera blending, and that was a major feature for me. Using the Post-processing Stack with Cinemachine is really simple, and you can find a good tutorial about this topic here. When working with post-processing, you need to be really careful not to overuse it, or you will end up with something that looks artificial or amateurish. I'm only going to talk about the effects on my main camera, because all the other cameras just have specific overrides of the desired effects. The only effect I didn't add to the main camera, adding it instead to individual cameras, was Depth of Field (I'll explain this later). So here's a little about my post-processing configs:
  • Color Grading
Color grading is a major part of the job. The main idea behind it is to modify the color values of the original shot to achieve a certain look. Color grading is also very important for making shots with different color tones look alike. Of course, this is an oversimplification of what color grading actually means, but you get the idea. Unity gives us plenty to work with when it comes to this effect, so I'll describe my configs for the main camera very briefly.
I was working with HDR on my cameras, so as you can see, I have High Definition Range selected in the "Mode" tab. Tonemapping has three options: None, Neutral (default) and Filmic (ACES). Since I was working with HDR I couldn't use "None" (HDR requires tonemapping, as far as I know), so taking a closer look at both alternatives, I chose Neutral, since it adds less contrast than Filmic, keeping the "dead" look I was after.
Here's a comparison between how the scene looked with and without the color grading effect:
  • Bloom
According to the post-processing stack documentation, bloom is the optical effect where light from a bright source (such as a glint) appears to leak into surrounding objects. This effect is very cool, and also very easy to overuse (because it's so cool, you can never get enough of it). I strongly advise you to review all of your cameras really carefully if you're using bloom, to make sure there are no over-the-top instances of it, and to override the effect if needed. In this project I used a medium-high value on the main camera, which I kept in shots where the bright source was very far away (in some flying shots I actually bumped the intensity up to 7). For closeups I overrode the intensity to around 0.5 or less.
Bloom also comes with cool lens dirt textures that you can use to make the effect even more realistic.
  • Vignette
Vignette is the effect that darkens the corners of the screen, maybe for artistic purposes, maybe to simulate lenses; it's up to you. In my case, you can see I'm barely using it. The idea was to give the image a slightly more introspective mood by darkening the corners a little.

  • Depth of Field
This is my favorite effect in photography. Depth of field gives you the power to direct the viewer through the scene, drawing their attention to wherever you want, whenever you want. I only use it when I have objects close to the camera; shots where the object is far from the camera often benefit very little from this effect. This is the main reason I didn't put it on my main camera. I like to keep the main camera's effects global for cameras that are a little more distant from the scene; on the closer cameras I add the effect and configure it accordingly.
I like to use this effect to reveal the scene to the viewer; to do that, I usually move the focal point around to the place I want. One cool feature for calibrating depth of field and other effects is the debug layer of the post-processing stack. To use it, you just have to add the component to the camera and select the desired debug mode. This tool is a huge time saver, since it makes it much easier to see the effect configs you're applying.
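With the post-processing stack V.2, a per-camera Depth of Field override like the one described above can also be built in code, by adding the setting to a profile and overriding only the parameters you need. A minimal sketch; the focus values here are placeholders, not the ones from my scene:

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Sketch: give a close-up camera its own Depth of Field via a local volume.
// Focus distance and aperture values are illustrative placeholders.
public class CloseupDof : MonoBehaviour
{
    void Start()
    {
        var profile = ScriptableObject.CreateInstance<PostProcessProfile>();

        var dof = profile.AddSettings<DepthOfField>();
        dof.enabled.Override(true);
        dof.focusDistance.Override(2.5f); // focal point ~2.5 m from the camera
        dof.aperture.Override(5.6f);      // smaller f-number = shallower focus

        // Local volumes need a trigger collider to define their bounds.
        var box = gameObject.AddComponent<BoxCollider>();
        box.isTrigger = true;

        var volume = gameObject.AddComponent<PostProcessVolume>();
        volume.profile = profile;
        volume.isGlobal = false;
        volume.blendDistance = 1f; // fade the effect in/out near the volume
    }
}
```

Because the stack blends volumes by priority and proximity, this override only kicks in when the camera is near the close-up volume, leaving the main camera's global settings untouched everywhere else.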
  • Anti Aliasing
This was actually a very fast decision. Since I had no performance problems, I chose TAA, since it delivers the best quality of all the options.


This project was an incredible opportunity to learn and improve. I love working on projects like this, when there is no time (due to my other jobs) to do everything I wanted, and I need to find a way to do something as good as intended in the time available. This kind of pressure challenges me to push myself beyond my limits, and that is how I improve! It was a good opportunity to learn a little more about scope, since I needed to plan everything really well and know the quality limit I could deliver in the given amount of time! I knew that if I focused on creating higher-quality assets or on terrain texturing I would not be able to deliver the whole thing, so I shifted the workflows to deliver a good-looking piece, with a good story, showing the power of Unity and its tools for creating games and cinematic sequences!
I hope people enjoy the video and this article. I tried to detail the process as much as possible so it may help someone! If you have any feedback, please leave it in the comments, or feel free to message me!!
Best regards
Guilherme Zanchett
Game Design Teacher and Consultant - Educator