An overview of the rendering pipeline can be understood through the life cycle of a mesh. When an app or game is running, it requests to load a mesh and then requests to render it. When Unity (or any application) requests to load the mesh, the mesh is loaded from disk into RAM.
So the file exists on disk in a format such as .obj or .fbx, and once loaded it lives in memory as mesh information.
Once the mesh is loaded, the interaction between the CPU (Central Processing Unit) and the GPU (Graphics Processing Unit) begins.
The CPU does not talk to the GPU directly; the communication happens through a queue called the command buffer. It is also called a ring buffer.
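The command-buffer idea can be sketched as a simple producer/consumer queue. This is a toy Python sketch, not a real graphics API; the class name and the command strings are made up for illustration:

```python
from collections import deque

class CommandBuffer:
    """Toy ring buffer: the CPU enqueues draw commands, the GPU dequeues them."""
    def __init__(self, capacity):
        # deque with maxlen behaves ring-like: oldest entries drop when full (toy behaviour)
        self.queue = deque(maxlen=capacity)

    def submit(self, command):
        """CPU side: the producer appends a command."""
        self.queue.append(command)

    def consume(self):
        """GPU side: the consumer takes commands in FIFO order."""
        return self.queue.popleft() if self.queue else None

cb = CommandBuffer(capacity=4)
cb.submit("SetRenderState(shader=Unlit)")
cb.submit("DrawMesh(cube)")
print(cb.consume())  # the GPU sees the state change before the draw
```

The key point is decoupling: the CPU keeps filling the queue while the GPU drains it at its own pace.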
The GPU also has its own RAM, called VRAM (Video RAM).
When the CPU requests the GPU to draw something, that request is made in two steps:
1. Setting the render state (the state or environment in which the mesh will be drawn)
2. Drawing the mesh
What is the render state?
It is the state or environment in which the mesh will be drawn. The render state contains the following:
1. Vertex shader
2. Pixel shader
3. Texture (used to texture the mesh)
4. Lighting environment (the lighting settings under which the mesh will be rendered)
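The four items above can be pictured as one bundle of data. A minimal sketch, assuming illustrative placeholder values (none of these names come from a real engine):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RenderState:
    """The four pieces of render state listed in the notes; values are placeholders."""
    vertex_shader: str
    pixel_shader: str
    texture: str
    lighting_env: str

state = RenderState("StandardVert", "StandardFrag", "brick_albedo.png", "Directional")
print(state)
```

Making the state one immutable value is what lets meshes that share it be grouped together later (see batching below).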
So once we have set the state, we can execute the command to draw the mesh. During this step the texture and mesh information are transferred over to VRAM. VRAM is the GPU's own RAM, and the GPU can access it a lot faster than our system's RAM; that is why the information is transferred to VRAM, and that is where the GPU reads it from.
What is the batching process?
Once we have set the render state, we can draw all the meshes that use the same state. Setting the state is a much heavier operation than issuing a draw call, so meshes sharing a state are drawn together; that grouping is the process of batching.
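Batching can be sketched as sorting the draw list by render state so the expensive state change happens once per group. An illustrative sketch, not Unity's actual batcher; the mesh and state names are made up:

```python
from itertools import groupby

# (mesh, state) pairs; names are invented for illustration.
draw_list = [("hut", "opaque"), ("tree", "opaque"),
             ("glass", "transparent"), ("rock", "opaque")]

# Sort by state, then group: one state change per group instead of one per mesh.
draw_list.sort(key=lambda item: item[1])
batches = {render_state: [mesh for mesh, _ in group]
           for render_state, group in groupby(draw_list, key=lambda item: item[1])}
print(batches)  # {'opaque': ['hut', 'tree', 'rock'], 'transparent': ['glass']}
```

With three opaque meshes batched, the "opaque" state is set once instead of three times.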
Once we are done with this process, the actual execution begins.
1. First, the execution of the vertex shader begins.
The vertex shader reads attributes from the mesh, such as vertex positions, normals, tangents, etc. The mesh is initially in object space; it is converted into world space, then view space, and finally projection space. The projection matrix is based on the frustum of the camera. Once this information is ready, it can be sent to the rasteriser. But before it is sent, an optional step can happen, which is called culling.
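The chain of space conversions is just a series of 4x4 matrix multiplications applied to each vertex. A minimal pure-Python sketch using only translation matrices (a real pipeline would also multiply in rotation, scale, and a perspective projection matrix, omitted here for brevity):

```python
def matmul(a, b):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    """A 4x4 translation matrix."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def apply(m, v):
    """Transform a homogeneous point [x, y, z, w] by a 4x4 matrix."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

model = translate(5, 0, 0)       # object space -> world space
view = translate(0, 0, -10)      # world space -> view space (toy camera)
mvp = matmul(view, model)        # a projection matrix would be multiplied in here too
print(apply(mvp, [1, 0, 0, 1]))  # vertex (1, 0, 0) ends up at [6, 0, -10, 1]
```

Because matrix multiplication composes, the whole object-to-projection chain can be baked into one matrix and applied once per vertex.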
What is culling?
Culling is a process in which front faces and back faces are differentiated. They are differentiated by the winding order of the three vertices that make up each triangle.
There are two types of culling:
Front-face culling
Back-face culling
If the vertices are ordered in a clockwise manner, the triangle is a front face; if they are ordered anticlockwise, it is a back face.
In this step we can discard one set of faces: either the front-face information or the back-face information. By default, the back faces are discarded.
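The winding order can be read off a screen-space triangle from the sign of its doubled signed area. A minimal sketch; note that which sign counts as "front" is a convention that differs between APIs and coordinate systems, so the choice below is only illustrative:

```python
def signed_area(p0, p1, p2):
    """Twice the signed area of a 2D triangle; the sign encodes the winding order."""
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])

def is_back_face(p0, p1, p2):
    # Assumed convention for this sketch: negative area = back face.
    return signed_area(p0, p1, p2) < 0

front = [(0, 0), (1, 0), (0, 1)]   # one winding order
back = [(0, 0), (0, 1), (1, 0)]    # the same triangle, opposite winding
print(is_back_face(*front), is_back_face(*back))
```

Reversing any two vertices flips the sign, which is exactly why winding order distinguishes the two faces of the same triangle.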
The process of rasterisation
After that, the rasterisation process comes in. Based on the normalised screen space, the rasteriser determines which pixels will be drawn to show the mesh on the screen. Portions of the mesh that fall off the screen were clipped when the information was converted into projection space; because that part of the mesh is out of the screen, it is ignored.
1. The rasteriser finds which pixels will be drawn.
2. It interpolates the information from the vertex shader before sending it to the pixel shader.
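The interpolation in step 2 is typically barycentric: each pixel inside the triangle gets a weighted mix of the three vertices' attributes. A minimal sketch with a made-up per-vertex attribute:

```python
def barycentric(p, a, b, c):
    """Barycentric weights of point p with respect to screen-space triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    u = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    v = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return u, v, 1 - u - v

# Interpolate a per-vertex value (e.g. a colour channel) at the triangle's centroid.
a, b, c = (0, 0), (3, 0), (0, 3)
values = (30.0, 60.0, 90.0)          # made-up attribute at each vertex
w = barycentric((1, 1), a, b, c)
print(sum(wi * vi for wi, vi in zip(w, values)))  # equal weights at the centroid
```

The same weights are reused for every interpolated attribute: colours, UVs, normals, and depth.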
Once the pixel locations have been determined, the fragment shader (also called the pixel shader) is executed.
When the fragment shader is executed, its output is only a candidate pixel, because later stages may still change or discard it before it affects the final colour on screen.
The fragment shader gives us three things: the colour of the pixel, the alpha of the pixel, and the Z depth of the pixel.
In order to render objects properly in the scene, a depth test is done. This is called Z-depth testing, in which the distance of each pixel from the camera is compared.
Based on these depths, the renderer decides which pixel will be drawn and which pixel will be ignored because it is behind another.
For example, a tree appears in front of a hut because the Z distance of the tree is smaller than the Z distance of the hut.
So after the fragment shader, the Z test is done. The result is either pass or fail.
The screen is where we output or display all the objects we want to render.
The screen is divided into pixels, and for every pixel of the screen, at least two buffers are maintained.
For example, for a screen of 10 rows and 10 columns, there will be two buffers, each of size 100, indexed 0-99.
One buffer is called the colour buffer.
The other buffer is called the Z buffer (also known as the depth buffer).
Just as the colour buffer holds the final colour output for each pixel of the screen, the Z buffer holds a Z depth for every pixel, updated as a result of the Z test. These are the final colour value and Z value for each pixel drawn on the screen.
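The two buffers and the Z test can be sketched together. A toy Python version of the 10x10 example above; the colour names are placeholders:

```python
import math

WIDTH, HEIGHT = 10, 10
colour_buffer = ["black"] * (WIDTH * HEIGHT)  # one colour per pixel, indices 0-99
z_buffer = [math.inf] * (WIDTH * HEIGHT)      # every pixel starts "infinitely far away"

def write_pixel(x, y, colour, depth):
    """Z test: keep the new pixel only if it is nearer than what is already stored."""
    i = y * WIDTH + x
    if depth < z_buffer[i]:        # pass: closer to the camera than the current occupant
        z_buffer[i] = depth
        colour_buffer[i] = colour  # fail would leave both buffers untouched

write_pixel(4, 4, "hut", depth=8.0)
write_pixel(4, 4, "tree", depth=3.0)  # the tree is nearer, so it wins
write_pixel(4, 4, "rock", depth=9.0)  # fails the Z test and is ignored
print(colour_buffer[4 * WIDTH + 4])
```

This is why draw order does not matter for opaque objects: whichever fragment is nearest wins regardless of when it arrives.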
If a pixel passes the Z test, blending happens next. Blending is also an optional step, performed for transparent or translucent pixels.
For example ,
if a translucent card comes in front of an opaque object, that is when you want to perform a blending function.
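The common blending function is the "over" operation: the incoming (source) colour is weighted by its alpha and the colour already in the buffer (destination) by the remainder. A minimal sketch with made-up colours:

```python
def blend(src, dst, alpha):
    """Standard 'over' blend: src * alpha + dst * (1 - alpha), per colour channel."""
    return tuple(s * alpha + d * (1 - alpha) for s, d in zip(src, dst))

opaque_red = (1.0, 0.0, 0.0)        # already in the colour buffer (the opaque object)
translucent_blue = (0.0, 0.0, 1.0)  # incoming fragment (the translucent card)
print(blend(translucent_blue, opaque_red, alpha=0.5))  # -> (0.5, 0.0, 0.5)
```

Because the result depends on what is already in the buffer, translucent objects are usually drawn after the opaque ones.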
After blending, the next step is the stencil test (an optional step).
Here we set a stencil for the screen before drawing any object, so that each object can be tested against this stencil before it is drawn.
So when a pyramid is drawn, only the pixels whose stencil value is set to 1 will actually be drawn on the screen.
Stencils are used when you want to restrict drawing to an area of the screen. For example, you have a whole scene and you are watching it through a window: only the part of the scene visible through the window, after cropping, is drawn on the screen. That is the stencil.
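The window example can be sketched as a per-pixel mask that fragments are tested against. A toy Python version, assuming the simplest rule (stencil value equal to 1 passes); real stencil tests support more comparison functions:

```python
WIDTH, HEIGHT = 10, 10
stencil = [0] * (WIDTH * HEIGHT)  # stencil buffer, one value per pixel

# Mark a "window" region before drawing: only these pixels may receive the scene.
for y in range(3, 7):
    for x in range(3, 7):
        stencil[y * WIDTH + x] = 1

def stencil_test(x, y):
    """A fragment survives only where the stencil value equals 1."""
    return stencil[y * WIDTH + x] == 1

print(stencil_test(4, 4))  # inside the window
print(stencil_test(0, 0))  # outside the window
```

Everything outside the marked region fails the test and never reaches the colour buffer, which is exactly the cropping effect described above.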