Ultrasound simulation with shaders
Introduction to shaders
When we look at a Full HD monitor we’re looking at over 2 million pixels that the GPU is sending to the screen. The GPU knows what to send because we give it that information. We don’t describe the image pixel by pixel; instead we give the GPU something like a 3D scene and it uses various tools to turn that into a 2D array of pixels.
Graphics APIs like OpenGL and DirectX are some of those tools. Our data goes through a series of steps, usually referred to as a graphics pipeline. Each graphics API has a slightly different one but there are usually a few common steps:
1. Prepare vertex stream
The graphics pipeline needs a list of vertices to work on, so the first thing that happens is it prepares the data for all the vertices that make up the objects in our scene. Their positions, normals, and texture coordinates are passed into the pipeline to be processed by the next steps.
2. Process vertex data with vertex shaders
A vertex shader is a function that takes the data for one vertex as input and, at a minimum, outputs the processed vertex position. It usually also outputs additional data that is needed further down the pipeline.
3. Process primitive data with geometry and tessellation shaders
Newer versions of graphics APIs offer the option to use geometry shaders to process primitive data (usually triangles) and even create new primitives. One modern use for these is improving the quality of shadows in 3D scenes with shadow volumes. Tessellation shaders are another modern addition to graphics APIs and they allow lower-detail meshes to be rendered at a higher-detail level, reducing memory footprint at the cost of GPU processing time.
We don’t need to use these features to get a convincing simulation of ultrasound images so we won’t be going deeper into them in this material.
4. Process fragment data with fragment shaders
A fragment shader (also called a pixel shader) receives interpolated data from the vertex shader and returns per-fragment data. Interpolated means that each fragment’s input values are calculated automatically by blending the outputs of the vertices of the primitive it belongs to, weighted by the fragment’s position within that primitive.
The fragment shader is where we can look up textures for our 3D objects, calculate the correct lighting and return the color of a particular pixel on our screen.
5. Generate final pixel value from previous data and depth/stencil buffers
Finally, depth and stencil tests are performed on the calculated fragment data to decide if it will be drawn and if yes, how to combine (blend) it with other data that’s available.
There’s an important difference between how shader code runs on the GPU and how regular code runs on the CPU. A shader function is executed many times per frame for different input values (vertices, fragments, primitives) and many of these executions are parallel. This allows the GPU to perform optimizations that wouldn’t be possible in sequential execution but it also means that each execution of the shader function is independent of the rest.

Performance considerations

Vertex shaders are run once for each vertex per frame, while fragment shaders are run for each covered pixel. In a simple scene with a rectangular mesh that covers the entire screen, the vertex shader will run 4 times per frame (once for each vertex), while the fragment shader can run more than 2 million times (for a 1920×1080 resolution). It’s therefore efficient to move as many calculations as we can to the vertex shader.
That said, modern hardware is very capable of performing large amounts of calculations and GPU optimizations help reduce the load.
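Still, as a quick illustration of moving work to the vertex shader, here is a minimal sketch (not tied to Unity yet; the uniform and struct names are made up for the example). Anything that varies linearly across a surface, such as scaled texture coordinates, can be computed once per vertex and interpolated by the rasterizer instead of being recomputed for every fragment.

// A minimal sketch: the UV scaling below could just as well be done per fragment,
// but doing it per vertex means 4 calculations per frame for a fullscreen quad
// instead of roughly 2 million. "modelViewProjection" and "tiling" are placeholder uniforms.
struct VertexInput
{
    float4 position : POSITION;
    float2 uv : TEXCOORD0;
};

struct VertexOutput
{
    float4 position : SV_POSITION;
    float2 scaledUv : TEXCOORD0;
};

float4x4 modelViewProjection;
float tiling;

VertexOutput vert(VertexInput input)
{
    VertexOutput output;
    output.position = mul(modelViewProjection, input.position);
    // computed once per vertex; the rasterizer interpolates it for every fragment
    output.scaledUv = input.uv * tiling;
    return output;
}

float4 frag(VertexOutput input) : SV_Target
{
    // the fragment shader just uses the interpolated value
    return float4(input.scaledUv, 0, 1);
}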

Shaders in Unity

Unity allows us to program the shaders at each step using Cg/HLSL code wrapped in a declarative language called ShaderLab, which describes the shader’s properties, subshaders and passes.
Shader "FancyShader" { Properties { // properties passed to the shader from unity cs scripts } SubShader { Pass { // shader code } } }
The shaders themselves are simply functions written in Cg/HLSL. Technically, Cg and HLSL are two languages that share the same syntax, since they were developed in collaboration between Microsoft and Nvidia. Microsoft’s HLSL documentation can be used as a reliable source of information for learning. We tell the compiler which function corresponds to which shader using #pragma statements.
CGPROGRAM
// the vert function will be used as a vertex shader
#pragma vertex vert
// the frag function will be used as a fragment shader
#pragma fragment frag

#include "UnityCG.cginc"

float4 vert(float4 vertexPosition : POSITION) : POSITION
{
    // the output position needs to be transformed from object space coordinates to clip space coordinates,
    // easily done with a built-in function found in UnityCG.cginc
    float4 clipSpacePosition = UnityObjectToClipPos(vertexPosition);
    return clipSpacePosition;
}

float4 frag(float4 fragmentPosition : SV_POSITION) : SV_Target
{
    float4 red = float4(1, 0, 0, 1);
    return red;
}
ENDCG
The above shader simply outputs a red screen. We hard-coded a red color (RGBA) in the fragment shader, so regardless of input the output will always be red. The vertex position returned by the vertex shader is expected to be transformed from object space to clip space, so we use a function provided by Unity.

The keywords after some parameter and function declarations in the above code are called semantics. They specify what role a particular input or output variable plays. For example, a vertex shader input parameter marked with the POSITION semantic will contain the vertex position in object space. The processed vertex position in the output must also be marked with POSITION. More info on semantics can be found in the Unity documentation.
Shader input and output data is read according to the semantics we use.
Shaders can have user-defined structs as input or output, as long as the struct members are marked with the right semantics. Structs included with UnityCG.cginc by Unity are also available.
CGPROGRAM
#pragma vertex vert
#pragma fragment frag

#include "UnityCG.cginc"

// vertex to fragment struct
struct v2f
{
    float4 position : SV_POSITION;
};

// appdata_base is defined in UnityCG.cginc
v2f vert(appdata_base vertInput)
{
    v2f vertOutput;
    vertOutput.position = UnityObjectToClipPos(vertInput.vertex);
    return vertOutput;
}

float4 frag(v2f fragInput) : SV_Target
{
    float4 red = float4(1, 0, 0, 1);
    return red;
}
ENDCG
If we want to replace the fragment shader’s red output with a texture we’ll need a way to pass that texture into the shader. This is what the Properties block is for. We’ll use this chance to add texture coordinates to the data passed from the vertex shader to the fragment shader.
Shader "FancyShader" { Properties { _FancyTexture ("Fancy Texture", 2D) = "white" {} } SubShader { Pass { CGPROGRAM #pragma vertex vert #pragma fragment frag #include "UnityCG.cginc" struct v2f { float4 position : SV_POSITION; float2 textureCoordinates : TEXCOORD0; }; // we need to declare the texture in the CGPROGRAM block sampler2D _FancyTexture; v2f vert(appdata_base vertInput) { v2f vertOutput; vertOutput.position = UnityObjectToClipPos(vertInput.vertex); vertOutput.textureCoordinates = vertInput.texcoord; return vertOutput; } float4 frag(v2f fragInput) : SV_Target { // we sample the texture at the coordinates provided in the input struct return tex2D(_FancyTexture, fragInput.textureCoordinates); } ENDCG } } }
It’s a simple shader but at least now we know where to start.

Other options

Unity offers a few alternatives to writing vertex and fragment shaders by hand.
One of them is surface shaders. They let us write lit shaders much more quickly than incorporating lighting into hand-written vertex and fragment shaders, which can take a lot of work. Surface shaders don’t correspond directly to a step in the graphics pipeline; Unity compiles them into vertex and fragment shaders.
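For reference, a minimal surface shader looks roughly like this (the shader name and the single property are placeholders):

Shader "Custom/FancySurfaceShader"
{
    Properties
    {
        _MainTex ("Albedo", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }

        CGPROGRAM
        // Unity generates the vertex and fragment shaders (including lighting) from surf
        #pragma surface surf Lambert

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex;
        };

        void surf(Input IN, inout SurfaceOutput o)
        {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}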
Unity 2018.1 also introduced Shader Graph, a visual node editor, useful for creating shaders without directly writing code.

Unity utilities

If we want to add post-processing effects to a Camera the best place to do it is the MonoBehaviour.OnRenderImage method. If you implement it in a script and you attach your script to a GameObject with a Camera component, that Camera’s output will be passed to this method as a RenderTexture. This allows you to process it before it’s rendered to the screen.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // process "source" and write the result to "destination"
}
The recommended way to process it is to use Graphics.Blit. Passing the source RenderTexture as the first parameter and destination as the second will simply copy the data from one to the other. Setting the third parameter to a material will process the texture with the material’s shader. Unity will automatically assign the source texture to the shader’s _MainTex property.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    Graphics.Blit(source, destination, fancyMaterial);
}
You can do this repeatedly if you need to apply multiple effects and one shader pass isn’t enough. Passing the same texture as both the source and the destination of Blit will overwrite it in place, but this can be inefficient on some hardware, especially mobile GPUs. The recommended approach is to allocate a temporary texture and release it when we’re done.
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    RenderTexture temporaryTexture = RenderTexture.GetTemporary(source.width, source.height);

    // process it once and write it to a temporary texture
    Graphics.Blit(source, temporaryTexture, firstFancyMaterial);
    // process it again and write it to destination
    Graphics.Blit(temporaryTexture, destination, secondFancyMaterial);

    // release the temporary texture so it can be reused or cleaned up
    RenderTexture.ReleaseTemporary(temporaryTexture);
}

Ultrasound image

How medical ultrasound works

The creation of a medical ultrasound (aka ultrasonography) image can be broken up into three parts:
  1. Generating a sound wave and sending it into the tissue we want to scan. This is done with a device called an ultrasonic transducer, usually built into a hand-held probe.
  2. Receiving echoes from the tissue, also done with the transducer.
  3. Interpreting the echoes into a useful image.
The resulting image represents a slice of the scanned body, each pixel telling us how strong the echo from a particular spot in the tissue is. A stronger echo usually corresponds to a brighter pixel.
We can see the interior of different organs and tissues because they are not perfectly homogeneous. They have tiny irregularities that scatter sound waves in multiple directions. This is referred to as diffuse reflection. Some of the sound echoes back to the transducer and usually shows up as gray in our final image. Note that homogeneous liquids and air appear black because they send little to no scattered sound back to the transducer.
When sound passes from one tissue into another whose density is sufficiently different, part of it is reflected back. This shows up as a bright line between the tissues in the final image. The strength of this specular reflection is proportional to the difference in density.
Diffuse scattering also produces many small echoes from tiny structures such as cell walls. These echoes interfere with each other, giving the image a grainy, noise-like quality referred to as speckle.
Once sound reaches a structure that causes a strong reflection, the portion that continues past it is weakened. This causes shadows to appear in the parts of the image behind such structures.
As sound passes through the body it is gradually absorbed and reflected, so its strength is attenuated with depth. This means the echo will be strongest for tissue nearest the ultrasound probe and weaker for tissue farther away.

Creating a simulation

To create a convincing simulation we need to handle the elements highlighted in the previous section: diffuse reflection, specular reflection, speckle, shadows and attenuation.
We’ll do most of our work in the MonoBehaviour.OnRenderImage method, which means that from a 3D perspective all we’re doing is rendering a rectangle the size of our screen. Our vertex shaders will run four times per frame, once for each corner, and the fragment shader data will be interpolated between those four corners.
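Before building this up step by step, here is a rough sketch of how these effects could come together in a single fragment shader once we have a 2D slice of the tissue data. This is only an illustration, not the implementation we will arrive at: _SliceTex, _Attenuation, the noise function and the assumption that the probe sits at the top of the image (depth increasing with the y coordinate) are all placeholders.

// Illustration only: one possible way to express the effects above for a single pixel.
sampler2D _SliceTex;
float _Attenuation;

struct v2f
{
    float4 position : SV_POSITION;
    float2 textureCoordinates : TEXCOORD0;
};

// cheap hash noise, standing in for the interference that produces speckle
float speckleNoise(float2 p)
{
    return frac(sin(dot(p, float2(12.9898, 78.233))) * 43758.5453);
}

float4 frag(v2f fragInput) : SV_Target
{
    float2 uv = fragInput.textureCoordinates;

    // diffuse reflection: the tissue value at this point of the slice
    float echo = tex2D(_SliceTex, uv).r;

    // specular reflection: a large difference between neighboring tissues along
    // the beam direction shows up as a bright line
    float next = tex2D(_SliceTex, uv + float2(0, 1.0 / 256.0)).r;
    float specular = abs(next - echo) * 4.0;

    // attenuation and shadows: the sound loses strength on its way down to this depth,
    // both gradually and wherever it passes strongly reflecting tissue above
    float strength = 1.0;
    const int steps = 64;
    for (int i = 0; i < steps; i++)
    {
        float2 above = float2(uv.x, uv.y * i / (float)steps);
        strength *= 1.0 - _Attenuation * tex2D(_SliceTex, above).r / steps;
    }

    // speckle: grainy multiplicative noise
    float speckle = 0.5 + speckleNoise(uv * 512.0);

    float value = (echo * speckle + specular) * strength;
    return float4(value, value, value, 1);
}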

Prepare a 3D texture

Our main source of information will be a 3D texture representing a body, such as an MRI scan.
We identify the different tissues in the texture by their “color” value. We’ll use these values as the base for our simulated image, simulating diffuse scattering. We want to make sure that there is enough contrast between the colors. If they are too similar they will be less distinguishable in the final image, with weaker reflections.
To make things easier we’ll use a simple demo texture, a 256×256×256 cube with a sphere (r=100) inside it and a smaller cube (80×80×80) inside the sphere.
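A demo volume like that could be generated with something along these lines; this is a sketch rather than the article’s own code, and the grayscale values are arbitrary, chosen only to give the three shapes enough contrast.

// Generates a 256x256x256 volume: a cube containing a sphere containing a smaller cube.
using UnityEngine;

public class DemoVolume : MonoBehaviour
{
    public Texture3D CreateDemoTexture()
    {
        const int size = 256;
        Color[] colors = new Color[size * size * size];
        Vector3 center = new Vector3(size / 2f, size / 2f, size / 2f);

        for (int z = 0; z < size; z++)
        {
            for (int y = 0; y < size; y++)
            {
                for (int x = 0; x < size; x++)
                {
                    Vector3 point = new Vector3(x, y, z);

                    float value = 0.3f; // outer cube
                    if (Vector3.Distance(point, center) < 100f)
                        value = 0.6f; // sphere, r = 100
                    if (Mathf.Abs(x - center.x) < 40f &&
                        Mathf.Abs(y - center.y) < 40f &&
                        Mathf.Abs(z - center.z) < 40f)
                        value = 0.9f; // inner 80x80x80 cube

                    colors[x + y * size + z * size * size] = new Color(value, value, value, 1f);
                }
            }
        }

        Texture3D texture = new Texture3D(size, size, size, TextureFormat.RGBA32, false);
        texture.SetPixels(colors);
        texture.Apply();
        return texture;
    }
}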

Get a slice of the texture

The first thing we need to do is get the slice of the texture that would be scanned by our virtual ultrasound probe. We’ll use a shader to do this.
Since we’ll be processing this texture several times before we reach something that looks like an ultrasound image we can downscale it to improve performance. We do this by simply using a smaller destination texture for Graphics.Blit.
We’re passing our 3D texture to Graphics.Blit as the first parameter, so Unity will assign it to the shader’s _MainTex property. We’re also passing the position of the slicing rectangle’s bottom-left corner and its x and y axes. These are expressed in the texture’s normalized space, so (0,0,0) and (1,1,1) are opposite corners of the texture.
Texture3D volumeTexture;
Shader volumeSliceShader;
Material volumeSliceMat;

void Start()
{
    volumeSliceMat = new Material(volumeSliceShader);
}

void Update()
{
    // ...
    volumeSliceMat.SetVector("_RectOrigin", rectOrigin);
    volumeSliceMat.SetVector("_RectXAxis", rectXAxis);
    volumeSliceMat.SetVector("_RectYAxis", rectYAxis);
}

void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    RenderTexture mainTex = RenderTexture.GetTemporary(256, 256, 0);
    Graphics.Blit(volumeTexture, mainTex, volumeSliceMat);
    // ...
}
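The slice shader itself isn’t shown here, but its core could look something like this; the property names match the C# above, and the rest is an assumption:

CGPROGRAM
#pragma vertex vert
#pragma fragment frag

#include "UnityCG.cginc"

// _MainTex is the 3D volume that Graphics.Blit assigns automatically;
// the rectangle parameters are the ones set from the C# script above
sampler3D _MainTex;
float3 _RectOrigin;
float3 _RectXAxis;
float3 _RectYAxis;

struct v2f
{
    float4 position : SV_POSITION;
    float2 textureCoordinates : TEXCOORD0;
};

v2f vert(appdata_base vertInput)
{
    v2f vertOutput;
    vertOutput.position = UnityObjectToClipPos(vertInput.vertex);
    vertOutput.textureCoordinates = vertInput.texcoord;
    return vertOutput;
}

float4 frag(v2f fragInput) : SV_Target
{
    float2 uv = fragInput.textureCoordinates;
    // walk from the rectangle's origin along its two axes to find where to sample the volume
    float3 samplePosition = _RectOrigin + uv.x * _RectXAxis + uv.y * _RectYAxis;
    return tex3D(_MainTex, samplePosition);
}
ENDCG

From here, the slice can be processed further in additional Blit passes to add the effects described earlier: speckle, shadows and attenuation.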