A Journey Through 3D Modeling
Learn how the process of 3D Modeling actually works.
Do you ever sit back and wonder … how is it done? How does a plane stay in the air once in flight? How does iCloud work? How do artists create massive hordes of orcs in Lord of the Rings? Questions like these seem difficult to find a concrete answer for, and if an answer is out there, it’s hidden behind technical jargon and our eyes gloss over the details. It all seems so elaborate or far too confusing to truly comprehend, but ordinarily there are simple, underlying explanations. So, whether it’s hordes of beautifully detailed orcs in Mordor or rebelling pedestrians in Tehran, the answer you’re looking for lies in understanding 3D modeling. Without losing you in the details and jargon of this industry, let’s venture through the development pipeline with 3D modeling as our lens.
In 1979 Revolution, the process of creating characters can be broken down into three steps: first, the creation of concept art and assets; then, bringing those ideas into 3D modeling; and finally, the actual implementation and animation in the game engine. To paint the clearest picture, I figured the best place to start was deep in the iNK Stories server, where I managed to track down some of the original concept art for the game. For someone who has played about a hundred iterations of the game by now, it was gratifying to see just how far we’ve come, and to wonder... What if we went in this direction instead?
1979 Revolution: Black Friday was actually originally designed as a cartoonish 2D platformer… (It wasn’t)
Concept art represents the first stage of creating characters - it provides the visual foundation, or groundwork, for the game's aesthetics. Picking the right art style for the characters is a crucial element and should match the themes and concepts the game is trying to convey. With that being said, in 1979 we wanted to create a world that is historically authentic to late-70’s Iran, so when designing the characters we used photographs as historical references for fashion and culture.
Below you may recognize an early concept version of Bibi (a character in the game)
Once the initial character concepts are completed, it’s time to bring them into the 3D realm. Using 3D software like Autodesk’s Maya and Pixologic’s ZBrush, the characters are sculpted into usable assets for the game.
Typically, the character models come from humble beginnings - nothing more than a cube. From here, the cube is subdivided, and we are able to extrude various faces into distinguishable features, such as the limbs and head. Once this is in place, the edges and vertices are manipulated and further subdivided into shapes that begin to resemble actual human anatomy. The edges are rounded and then moved around with consideration for the shape's deformation (how the geometry reacts during animation). This process continues until the mesh takes the form of the particular character being modeled. What's left upon completion is called the base mesh.
At this point, the character mesh is in “quads” - the individual faces throughout the mesh are composed, for the most part, of four edges. To optimize the mesh for gaming purposes, it is converted to a triangle mesh. Why, you ask? To put it simply, four or more points may not lie on the same plane, but three points always do, which means triangles will always be quicker to render on screen. This conversion can be done procedurally or manually (for better control of edge flow), again for the purpose of good deformation. At this stage the character will look like a very basic version of itself, hence “base mesh”. The finalized base mesh is then ready to be exported into a digital sculpting tool like ZBrush or Mudbox. Before that happens, though, we have to create UV maps, or a character unwrap. These basically serve as a 2D “skinned” version of the 3D model. To simplify, think back to elementary school, when you had to draw a box grid in the shape of a “T” on a piece of paper to cut out and fold into a cube. This process is similar, though instead of a box we are unwrapping a complex, human character. Hold on to that notion, as we’ll talk about why this is important a little later on.
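To make the triangle conversion concrete, here is a minimal sketch of the procedural approach (a hypothetical illustration, not our actual pipeline code): each quad face is split along a diagonal into two triangles, while faces that are already triangles pass through untouched.

```python
def triangulate(faces):
    """Convert a list of faces (tuples of vertex indices) so every
    face is a triangle. Quads are split along the a-c diagonal."""
    triangles = []
    for face in faces:
        if len(face) == 3:        # already a triangle
            triangles.append(face)
        elif len(face) == 4:      # quad -> two triangles
            a, b, c, d = face
            triangles.append((a, b, c))
            triangles.append((a, c, d))
    return triangles

# A single quad becomes two triangles sharing the 0-2 diagonal:
print(triangulate([(0, 1, 2, 3)]))  # [(0, 1, 2), (0, 2, 3)]
```

Picking which diagonal to split along is exactly the "edge flow" choice an artist makes when triangulating manually - the wrong diagonal can cause ugly creases when the mesh deforms.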
ZBrush, or any other digital sculpting software, works exactly as it sounds - we use it to add all of the little details and refinements that make the characters look good. This is where we’re able to add all those stylized aesthetics based upon our concept art references; things like wrinkles, seams, hair strands, pockets, even pores and scratches are sculpted into the mesh. Because of this level of detail, the character becomes very complicated, or dense: to achieve something as small as a seam in a pair of pants, for example, you must subdivide your mesh. Picture a square split in half both ways to create four new squares, over and over until there is enough detail. Each level of subdivision creates more data, and all of that data has to be processed in real time. If you were to put this sculpted version of the character into the game, it would never run properly trying to process all of that information, especially if we attempted this level of detail for all of the many characters and NPCs in our game. This brings us back to the “skinned” version of our character we talked about earlier, and why it was important to have an optimized base mesh at the beginning of the process.
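To get a feel for how quickly subdivision gets expensive, consider a hypothetical base mesh of 1,000 faces (the number is made up for illustration). Each subdivision level splits every quad into four, so the face count quadruples per level:

```python
# Face count grows as base_faces * 4**level, since each
# subdivision level splits every quad face into four.
base_faces = 1000  # hypothetical base-mesh size
for level in range(6):
    print(f"level {level}: {base_faces * 4 ** level} faces")
```

By level five this imaginary mesh is already past a million faces - fine for sculpting on a workstation, hopeless for a real-time game full of NPCs.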
With this high-poly, or highly detailed, character completed, we’re able to generate something called normal maps. Normal maps are RGB (red, green, blue) images where each channel of RGB encodes surface detail along one of the XYZ axes.
Here’s what this looks like:
Pretty weird...
These maps trick the lighting into rendering the base mesh as if it had the detail of the high-poly mesh. The final product is a detailed-looking character at the computational cost of the base mesh, allowing the game to run smoothly while still having detailed characters.
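The RGB-to-XYZ mapping itself is simple arithmetic. A unit normal has components in the range [-1, 1], while an 8-bit image channel stores [0, 255], so baking remaps each axis. Here's a sketch assuming the common 8-bit convention (individual tools' exporters may differ):

```python
def encode_normal(nx, ny, nz):
    """Pack a unit normal vector into an 8-bit RGB pixel:
    channel = (component * 0.5 + 0.5) * 255."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in (nx, ny, nz))

def decode_normal(r, g, b):
    """Recover the approximate normal from an RGB pixel."""
    return tuple(c / 255 * 2 - 1 for c in (r, g, b))

# A normal pointing straight out of the surface, (0, 0, 1), encodes
# to the familiar lavender-blue color that dominates normal maps:
print(encode_normal(0.0, 0.0, 1.0))  # (128, 128, 255)
```

That encoding is why the image above looks "pretty weird": flat areas of the surface all map to the same purplish blue.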
I know this is a lot to digest, but stick with me here!
At this stage, the character models are complete, and the next step is to texture them. Otherwise they’re just going to be a good-looking grey blob. We went about texturing in a couple of ways. One way is to continue working in ZBrush, as this allows us to paint directly onto a specific character model and then export the result to a diffuse map (a texture map with just the color information of your character). We also took the UV map and/or the normal map and brought them into Photoshop. By removing the saturation of the normal map image to get shadow and highlight detail, we can paint textures in 2D. Either method works, and we often found that some combination of the two is the most effective approach.
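The "remove the saturation" step is just collapsing the three color channels into one grey value. As a rough illustration, here is a per-pixel desaturation using standard Rec. 601 luma weights (an assumption for the sketch - different image editors use slightly different formulas):

```python
def desaturate(r, g, b):
    """Collapse an RGB pixel (0-255 per channel) to a single grey
    value, using Rec. 601 luma weights."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# The flat normal-map blue (128, 128, 255) becomes a mid grey,
# leaving only the shadow/highlight variation to paint over:
print(desaturate(128, 128, 255))
```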
The result of either looks something like this:  
Again still pretty creepy, but definitely getting there.
The final map we create is the specular map - a black and white image that dictates an object's shininess. Specularity helps define the material of an object. For example, metals are extremely specular (represented by white in the image), whereas natural or unfinished wood has very little specularity (represented by black or dark grey). In relation to our characters, think about the difference between eyes and skin: eyes are extremely specular, while skin is only moderately so, with different areas being more or less shiny. Here's what the specular map for the character we’ve been showing looks like:
As you can see in the image, things like teeth, nails and parts of the character that protrude out have more specularity than areas of recession or duller materials.
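To see what the renderer does with that black-and-white value, here is a toy Phong-style highlight calculation (illustrative only - the game's real shaders are more involved). The sampled map value simply scales the strength of the highlight:

```python
def phong_specular(spec_map_value, cos_angle, shininess=32):
    """spec_map_value: 0.0 (matte, black) .. 1.0 (shiny, white).
    cos_angle: cosine of the angle between the reflected light
    direction and the view direction (1.0 = looking straight
    into the reflection)."""
    return spec_map_value * max(cos_angle, 0.0) ** shininess

# Eyes (near-white in the map) get a strong highlight;
# skin (mid grey) gets a much weaker one at the same angle:
print(phong_specular(0.9, 0.98))  # eye
print(phong_specular(0.3, 0.98))  # skin
```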
Now that we have the base mesh and our maps (normal, diffuse, and specular), we have a completed character. We can drop the model into the game and it would be ready to go if what we essentially wanted was a good looking statue. Now we need this character to move.
Enter rigging. Put simply, rigging is the process of creating a skeletal system with user-friendly controls that allow the animators and/or motion capture data to drive the movement and deformation of the character.   
As you can see in the images, a rig looks a lot like a simple skeleton. In fact, those little spheres at the points of articulation are called joints. Each of those joints manipulates a set amount of the mesh surrounding it. The process of setting how much geometry each joint controls is called weight painting. Below you can see a picture of what that process looks like.
Admittedly, this may just look like a bad Icy Hot commercial sans Shaq. What it shows is the range of control the top spinal joint has over the character mesh. Joints can share control over areas of geometry, and it's this blend of control that helps the character deform in a realistic way.
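Under the hood, this weight blend is typically linear blend skinning: a vertex's final position is the weighted sum of where each influencing joint would carry it. Here's a deliberately tiny 2D, translation-only sketch (real skinning uses full joint transform matrices, and weights come from the painting step above):

```python
def skin_vertex(vertex, joint_offsets, weights):
    """Linear blend skinning, toy version.
    vertex: (x, y) rest position.
    joint_offsets: per-joint (dx, dy) translations this frame.
    weights: painted influence per joint (should sum to 1.0)."""
    x, y = vertex
    out_x = out_y = 0.0
    for (dx, dy), w in zip(joint_offsets, weights):
        out_x += w * (x + dx)  # each joint "carries" the vertex
        out_y += w * (y + dy)  # by its own offset, scaled by weight
    return (out_x, out_y)

# A vertex weighted 50/50 between a joint moving up by 2 and a
# joint staying put lands halfway between the two destinations:
print(skin_vertex((1.0, 0.0), [(0.0, 2.0), (0.0, 0.0)], [0.5, 0.5]))
```

Shared weights are what keep an elbow or spine from creasing sharply: vertices near the joint boundary move partly with each bone.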
Once your rig is set up and all of your weights are painted, you finally have a complete character that you can animate and apply mocap data to. It is officially game ready!
You can find 1979 Revolution: Black Friday on Steam and the Apple AppStore.
Navid Khonsari