
TECHNICAL ART: Comprehensive Cheatsheet on Unity 2024/2025: CG Shader Operations & ShaderLab


by Pavel Zosim · v1.3


1. Introduction

This post outlines core principles and practical tips for writing CG shaders in Unity. Whether you’re transitioning from ShaderGraph or diving into hand-written shaders, this guide provides you with an in-depth look at shader code structure, property declaration, and how the GPU processes shader code. We’ll also explore special ShaderLab features such as drawer attributes, tags, and SubShaders, and discuss how these concepts vary across render pipelines (Built-in, URP, HDRP).


What Is a CG Shader and ShaderLab?

  • CG Shader: Uses the Cg/HLSL language to write shader programs. A shader file typically contains sections enclosed by CGPROGRAM and ENDCG (or HLSLPROGRAM for URP/HDRP) that are compiled into GPU instructions.

  • ShaderLab: A Unity-specific wrapper that defines properties, subshaders, and passes. It controls how your shader interacts with the Inspector and the rendering pipeline.

Critical Changes (2024):

  • CGPROGRAM Fully Deprecated in URP/HDRP:

    In URP and HDRP, the recommended block is now HLSLPROGRAM/ENDHLSL to fully support modern shader targets.

  • URP Now Supports 3D Textures (v12+):

    Recent updates allow 3D textures in URP.

  • HDRP Requires DirectX 12/Vulkan or Metal 3:

    For full HDRP features, including advanced lighting and ray tracing, these graphics APIs are needed.

 

Pipeline Comparison

Capability        | Built-in | URP (v12+) | HDRP (2024+)
Stereo Instancing | ✓        | ✓          | ✓
Ray Traced AO     | ✗        | ✗          | ✓ (DX12/Vulkan)
Shader Model      | 3.0      | 4.5        | 6.5+
VRSS Support      |          |            |

 

2. CG Shader Execution Sequence

The GPU reads shader code sequentially — from top to bottom. The typical structure is:


Code Reading Order & Structure:
[Properties] Defines parameters exposed in the Inspector. 
	↓
[SubShader Setup] Contains tags and passes.
	↓
[Variable Declarations & Data Structures] Global variables and helper structures.
	↓
[Helper Functions] Common routines used by the shader.
	↓
[Vertex Shader] Transforms geometry.
	↓
[Rasterization] Converts vertices to pixels.
	↓
[Fragment Shader] Colors individual pixels.
	↓
[Frame Buffer] Final rendered image

Key Execution Rules:

  • Function Declaration: Functions must be declared before they’re called.

  • Shader Stages: The vertex shader always runs before the fragment shader.

  • Parallel Execution: The GPU executes shader code in massively parallel batches.

  • No Fixed Order: There’s no guaranteed order for processing individual vertices or pixels.



Data flow:
CPU → [Vertex Shader] → [Rasterizer] → [Fragment Shader] → [Frame Buffer]
                |                            |
                ↳ (Parallel Batches)         ↳ (Pixel Interpolation)

  • Code Example: Basic Color Shader
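
A minimal sketch of such a shader (Built-in pipeline, CGPROGRAM syntax; names like "Custom/BasicColor" are illustrative):

Shader "Custom/BasicColor"
{
    Properties
    {
        _Color ("Main Color", Color) = (1,1,1,1)   // note: no semicolon here
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _Color;   // auto-linked to the property above

            struct appdata { float4 vertex : POSITION; };
            struct v2f     { float4 pos : SV_POSITION; };

            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);  // object space -> clip space
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return _Color;   // flat color output
            }
            ENDCG
        }
    }
}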


Explanation:

  • Properties Block:

    Declares _Color with display name “Main Color” and a default white color.

    Remember: no semicolon at the end!

  • Global Variable Usage:

    The auto-generated uniform fixed4 _Color is used in the shader code.

  • Shader Stages:

    The vertex shader (vert) transforms geometry, and the fragment shader (frag) applies the final color.


 

3. Shader Declaration & Properties

Before you dive into writing shader code, it's essential to declare your shader’s name and define its properties. This section serves as the entry point for both the developer and the Unity Editor, allowing you to:


  • Establish the Shader’s Identity: The shader’s name not only determines how it appears in Unity’s material and shader menus but also categorizes it into logical folders (e.g., Custom/ExampleFlow). This organizational structure is vital for maintaining a clean project and ensuring that artists and technical users can quickly locate and apply the shader.

  • Expose Inspector-Editable Properties: The Properties block is where you declare variables (textures, colors, floats, etc.) that users can adjust in the Inspector. These properties drive the look and behavior of your shader without requiring direct code edits. For instance, a color tint or a texture input can be modified in real time to see the impact on your rendered material.


First Step: Define shader name and inspector-visible properties.


  • Syntax:
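
The general declaration skeleton looks like this (all names are placeholders):

Shader "Category/ShaderName"
{
    Properties
    {
        // inspector-visible parameters
    }
    SubShader
    {
        // tags, render states, passes
    }
    Fallback "Diffuse"   // optional
}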



3.1 Defining the Shader Name (Inspector Path)

The shader name determines its hierarchical location in the Unity shader dropdown menus.


  • Default Path:
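
For example, Unity's shader template places a new unlit shader here:

Shader "Unlit/NewUnlitShader"   // appears under Unlit → NewUnlitShader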


  • Custom path:
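
For example:

Shader "Custom/ExampleFlow"   // appears under Custom → ExampleFlow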


Shader path in Unity Inspector panel

3.2 Properties Syntax & Data Types

Within the shader declaration, the Properties block allows you to expose parameters to the user. The basic syntax for each property is:


  • Syntax:
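
In general form (PropertyName, Display Name, DataType, and DefaultValue are placeholders):

_PropertyName ("Display Name", DataType) = DefaultValue

// e.g., a texture property:
_MainTex ("Main Texture", 2D) = "white" {}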


Components of the Syntax:

  • PropertyName: This is the internal identifier (e.g., _MainTex), which the shader code uses to reference the property.

  • Display Name: The human-readable label that appears in the Inspector. This makes it easier for users to understand what the property does.

  • Data Type: Specifies the kind of data the property holds. (read sec. 3.2.x)

  • Default Value: This is the initial value assigned to the property when the shader is first used. For textures, this might be a reference like "white" or "defaultTexture".


 

3.2.1 Numeric Data Types - These properties control scalar values such as factors, intensity, or other numerical parameters.


  • Float - A single-precision floating-point value. Useful for parameters that need fractional precision.

  • Int - An integer value for discrete parameters. When the value should be a whole number.

  • Range - A float that’s constrained between a minimum and maximum value and is displayed as a slider in the Inspector. Ideal for values where you want to restrict input to a specific range.
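
Example declarations for the numeric types above (property names are illustrative):

_Intensity ("Intensity", Float) = 1.0
_Steps     ("Steps", Int) = 2
_Cutoff    ("Alpha Cutoff", Range(0.0, 1.0)) = 0.5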


 

3.2.2 Vector & Color Types


  • Vector - A four-component vector. Often used to pass directional or positional data. Can be used for positions, directions, or any custom 4D data.

 
  • Color - Represents a color using four components (R, G, B, A). Commonly used for tinting, diffuse colors, etc.

  • [HDR] - For HDR effects in HDRP, (Attribute Drawer sec. 3.3.4) - When using high-intensity colors (e.g., for emission), the [HDR] attribute is required to ensure proper tonemapping.
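
Example declarations (property names are illustrative):

_Direction      ("Direction", Vector) = (0, 1, 0, 0)
_Tint           ("Tint", Color) = (1, 1, 1, 1)
[HDR] _Emission ("Emission", Color) = (0, 0, 0, 1)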


 

3.2.3 Texture Types - Textures provide image-based data for surfaces. Unity includes several texture types with built-in defaults (e.g., "white", "black", "gray", "bump").


  • 2D - Represents a 2D image, typically used for diffuse maps, normal maps, etc.

  • Cube - A cubemap used for environmental mapping or reflections.

  • 3D -  A volumetric texture, used less frequently. May have additional constraints; consult HDRP documentation for detailed behavior.
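
Example declarations with built-in defaults (property names are illustrative):

_MainTex   ("Albedo", 2D) = "white" {}
_NormalMap ("Normal Map", 2D) = "bump" {}
_EnvCube   ("Environment", Cube) = "" {}
_Volume    ("Volume", 3D) = "white" {}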


 

3.3 Attribute Drawers

Attribute drawers in ShaderLab allow you to customize how properties appear in the Unity Inspector and how they affect shader variants without needing to swap materials. They serve two main purposes:


  • Inspector Organization: They help arrange properties visually for better usability and clarity.

  • Shader Functionality & Variants: They can automatically generate shader keywords for conditional compilation, ensuring that toggled effects are efficiently managed.


 

3.3.1 Organization Drawers

Organization drawers control the visibility and layout of properties in the Inspector.


  • [HideInInspector] - Hides the property value from the material Inspector. Used for internal or technical parameters that shouldn’t be modified by artists.

 
  • [NoScaleOffset] - Prevents the display of tiling and offset fields for texture properties. Use this when the texture’s transform is fixed or controlled elsewhere.

 
  • [Header] & [Space] - Organize properties with headers and spacing.
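
Example usage of the drawers above (property names are illustrative):

[HideInInspector] _InternalFactor ("Internal Factor", Float) = 0
[NoScaleOffset]   _NormalMap ("Normal Map", 2D) = "bump" {}

[Header(Surface Settings)]
[Space(10)]
_Smoothness ("Smoothness", Range(0, 1)) = 0.5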


Pipeline Note: These organization attributes work consistently across Built-in, URP, and HDRP. However, clarity in grouping and hiding properties is especially valuable in complex HDRP shaders where many technical settings are involved. 
 

3.3.2 Range & Numeric Drawers

These drawers customize how numeric inputs are displayed, offering more control than a standard slider.


  • [PowerSlider] - Provides a slider with a non-linear (exponential) scaling factor.

  • [IntRange] - Generates a slider for integer values.
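
For example (values are illustrative):

[PowerSlider(3.0)] _Shininess ("Shininess", Range(0.01, 1)) = 0.08
[IntRange]         _Layers ("Layers", Range(1, 8)) = 4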


 

3.3.3 Shader Variant Drawers

These drawers enable the creation of shader variants and toggle features without needing multiple materials. Since ShaderLab doesn’t support booleans directly, these attributes simulate boolean toggles using float properties.


  • [Toggle(ENABLE_FANCY)] - Simulates a boolean toggle (0 = Off, 1 = On). With an explicit keyword in parentheses, the material enables that keyword (here ENABLE_FANCY) when toggled on; a bare [Toggle] auto-generates a keyword from the property name (e.g., _FANCY_ON).

  • [KeywordEnum] - Creates a dropdown with multiple options that generate keywords, such as for blend modes or other multi-option settings.

 
  • [Enum] - Generates dropdown menus for selecting from a list of options like blend modes, depth testing functions, or culling modes.
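
Example usage of the drawers above, together with the pragmas that consume the generated keywords (names are illustrative):

[Toggle(ENABLE_FANCY)] _Fancy ("Enable Fancy Effect", Float) = 0
// pair with: #pragma shader_feature ENABLE_FANCY

[KeywordEnum(Off, Add, Multiply)] _Overlay ("Overlay Mode", Float) = 0
// pair with: #pragma multi_compile _OVERLAY_OFF _OVERLAY_ADD _OVERLAY_MULTIPLY

[Enum(UnityEngine.Rendering.CullMode)] _Cull ("Cull Mode", Float) = 2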

 

Pipeline Note: Shader variant drawers work similarly across render pipelines. However, when working with HDRP, you might have additional keywords or variant controls to handle advanced features (like volumetric lighting) that require conditional compilation.

 

3.3.4 Texture & Color Specific Drawers

These drawers tailor the display and handling of texture and color properties to match their intended use.


  • [Normal] - Indicates that the texture is a normal map, helping the editor set up the correct import settings.

  • [HDR] - Marks a color property as high-dynamic range, which is crucial for proper tonemapping in HDRP.


Pipeline Note: While Built-in and URP handle standard color values, HDRP requires the [HDR] attribute to ensure correct tonemapping.
  • [Gamma] - Denotes that a float or vector property is defined in sRGB (gamma) space. Unity uses this to convert values appropriately, especially in projects set to a linear color space.

  • [PerRendererData] - Signals that the property may be overridden per renderer, for example using a MaterialPropertyBlock, allowing for instance-specific data.


    • Syntax
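
Example declarations for the drawers above (property names are illustrative):

[Normal] _NormalMap ("Normal Map", 2D) = "bump" {}
[HDR] _EmissionColor ("Emission", Color) = (0, 0, 0, 1)
[Gamma] _Exponent ("Exponent", Float) = 1
[PerRendererData] _InstanceColor ("Instance Color", Color) = (1, 1, 1, 1)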

 

4. SubShader Setup

Every shader must include at least one SubShader. In the SubShader, you define one or more passes (each of which contains your actual shader code) and specify metadata that informs Unity how and when to use that SubShader. This setup is crucial because Unity selects the most compatible SubShader based on your target hardware and active render pipeline.


  • Basic Structure Example
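
A minimal skeleton (shader name is illustrative):

Shader "Custom/Example"
{
    Properties { }
    SubShader
    {
        Tags { "RenderType"="Opaque" "Queue"="Geometry" }
        LOD 100
        Pass
        {
            // shader program (CGPROGRAM ... ENDCG or HLSLPROGRAM ... ENDHLSL)
        }
    }
    Fallback "Diffuse"
}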


 

4.1 SubShader Tags

Tags are key–value pairs that provide essential metadata for a SubShader. They affect shader behavior, drawing order, and how the shader interacts with replacement systems or post-processing. Below is an overview of the main tags and their pipeline-specific nuances.

4.1.1 Queue Tags

Queue tags determine the order in which objects are drawn by the GPU. The render queue affects layering, transparency, and masking. The GPU sorts draw calls by these numerical values:

  • Background - This render queue is rendered before any others. You’d typically use this for things that really need to be in the background.

    • Range: 0–1499 (default 1000)

  • Geometry - For opaque objects; the default for most shaders.

    • Range: 1500–2399 (default 2000)

  • AlphaTest - Suitable for objects with cutout transparency (e.g., foliage, vegetation) that need to be rendered after opaque objects.

    • Range: 2400–2699 (default 2450)

  • Transparent - For transparent objects; drawn back-to-front.

    • Range: 2700–3599 (default 3000)

  • Overlay - For UI or overlay elements rendered last.

    • Range: 3600–5000 (default 4000)
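
Queue tags can use a named queue or an offset from one, for example:

Tags { "Queue"="Transparent" }    // named queue (3000)
Tags { "Queue"="Geometry+1" }     // offset from a named queue (resolves to 2001)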


Pipeline-Specific Notes:

  • Built-in & URP: Queue tags are set directly in the shader and appear in the material Inspector.

  • HDRP (2024/2025): Queue tags can be overridden by Material Settings (Rendering Priority). To enforce draw order in HDRP, set the queue in the SubShader tags and adjust Material Order in the Inspector.


 

4.1.2 RenderType Tags

The RenderType tag categorizes shaders into logical groups for global replacements or post-processing. This tag helps Unity identify which shaders can be replaced at runtime (e.g., via Camera.RenderWithShader).


  • Syntax:
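
Tags { "RenderType"="Opaque" }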


Common RenderType Values:

  • Opaque - Default for most non-transparent materials. (Normal, Self Illuminated, Reflective, terrain shaders)

  • Transparent - Used for shaders with blending, such as glass or particles.  (Transparent, Particle, Font, terrain additive pass shaders).

  • TransparentCutout - For materials that use alpha testing (masked transparency; Transparent Cutout and two-pass vegetation shaders).

  • Background - For skyboxes and distant scenery.

  • Overlay - For UI and similar overlay elements. (GUITexture, Halo, Flare shaders.)


  • Built-in & URP: Directly specified and visible.

  • HDRP: Although still used, HDRP may manage some transparency or advanced effects via additional material settings.


 

4.1.3 Specialized Tags

These RenderType values are used by Unity’s terrain engine for trees and grass:


  • TreeOpaque - Used for opaque tree bark.

  • TreeTransparentCutout - Used for tree leaves that use alpha cutout.

  • TreeBillboard - Used for trees rendered as billboards.

  • Grass - For full 3D grass models.

  • GrassBillboard - For grass rendered as billboards to improve performance.


 

4.1.4 Additional Tags

Beyond Queue and RenderType, several other tags refine shader behavior:


  • RequireOptions - Ensures a pass is rendered only if specific quality settings (e.g., SoftVegetation) are enabled.

  • DisableBatching - Controls whether dynamic batching is allowed (values: True, False, LODFading).

  • ForceNoShadowCasting - Prevents the shader from casting shadows. (Note: In HDRP, this does not prevent contact shadows.)



  • IgnoreProjector - (Built-in only) Excludes the geometry from being affected by Projectors; Unity ignores projectors when rendering it.

  • CanUseSpriteAtlas - Indicates that the shader supports batching through a Sprite Atlas.

  • PreviewType - Specifies how the shader should be previewed in the editor (e.g., Sphere, Plane, Skybox).

  • GrabPass - Captures the current screen contents into a texture. The plain form GrabPass { } makes the grabbed screen available to later passes as _GrabTexture, but performs the expensive screen-grab operation once for every object that uses it. The named form GrabPass { "TextureName" } grabs the screen only once per frame for the first object that uses the given texture name, and later passes access it by that name; this is the more performant option when multiple objects in the scene use a grab pass.

    Additionally, GrabPass can use Name and Tags commands.
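
For example ("_SceneGrabTexture" is an arbitrary example name):

// Unnamed grab: performed per object; read back as _GrabTexture
GrabPass { }

// Named grab: performed once per frame for the first object using this name
GrabPass { "_SceneGrabTexture" }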



  • UsePass - Reuses passes from another shader. Inserts all passes with a given name from a given shader; "Shader/Name" contains the name of the shader and the name of the pass, separated by a slash character. Note that only the first supported SubShader is taken into account.
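
For example (shader path and pass name are hypothetical; pass names are matched in uppercase):

UsePass "Custom/OtherShader/MYPASS"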



Usage: RenderType tags help with shader replacement systems (e.g., using Camera.RenderWithShader) and enable filtering when applying post-processing effects.


Pipeline-Specific Notes:

  • Built-in & URP: RenderType tags work as defined and help organize shader behavior.

  • HDRP: Although available, HDRP’s advanced material system often uses additional parameters to control transparency and rendering effects.


Note (Built-in/URP only): Tags are labels that indicate how and when our shaders are processed. Like a GameObject tag, they can be used to determine how a shader will be rendered or how a group of shaders will behave graphically.


 

4.2 SubShader Blending

Blending is the cornerstone of creating visually rich materials like glass, fire, smoke, and holograms. Let’s break down this critical stage in the rendering pipeline.


Blending combines the fragment shader’s output color (SrcValue) with the color already in the render target (DstValue). Think of it as layering in Photoshop—but in real-time 3D. This “merging” stage occurs after the fragment shader and is responsible for incorporating transparency, depth, and stencil data into the final pixel color.


Blending options can be written in different blocks: within the SubShader or within a Pass. The placement depends on the number of passes and the final result we need.
field, the position will depend on the number of passes and the final result that we need.


  • Default Behavior: When blending is disabled (default: Blend Off), Source color overwrites the destination (no transparency).

  • Activating Blending: Enable blending + set "Queue"="Transparent" to render after opaque objects.
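
A minimal sketch of both steps together:

SubShader
{
    Tags { "Queue"="Transparent" "RenderType"="Transparent" }
    Pass
    {
        Blend SrcAlpha OneMinusSrcAlpha   // classic alpha blending
        ZWrite Off                        // typical for transparent surfaces
        // ...
    }
}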

 

Blending combines fragment shader output (SrcValue) with the existing render target color (DstValue) using the equation:

FinalColor = SrcFactor * SrcValue [OP] DstFactor * DstValue

Default operation ([OP]) is Add, making the typical equation:

FinalColor = SrcFactor * SrcValue + DstFactor * DstValue

Blend Factors Cheat Sheet

Factor           | RGB Value       | Use Case
One              | (1, 1, 1)       | Full color contribution (e.g., additive glow).
SrcAlpha         | (A, A, A)       | Standard transparency (UI, glass).
OneMinusSrcAlpha | (1-A, 1-A, 1-A) | Inverse alpha masking (soft particles).
DstColor         | (R, G, B)       | Multiplicative effects (stained glass).

 

Numerical Example

Let’s say we have an RGB pixel with the following destination values:

  • DstValue = [0.5R, 0.45G, 0.35B]


Step 1: Multiply DstValue by SrcFactor One (which is [1, 1, 1]):

  • Result: [0.5R, 0.45G, 0.35B]


Step 2: Assume the DstFactor also equals the current DstValue ([0.5, 0.45, 0.35]). Multiply each component:

  • Calculation:

    • R: 0.5 × 0.5 = 0.25

    • G: 0.45 × 0.45 ≈ 0.20

    • B: 0.35 × 0.35 ≈ 0.12

  • Result: [0.25R, 0.20G, 0.12B]


Step 3: Add the two results (using Add as [OP]):

  • Calculation:

    • R: 0.5 + 0.25 = 0.75

    • G: 0.45 + 0.20 = 0.65

    • B: 0.35 + 0.12 ≈ 0.47

  • Final Result: [0.75R, 0.65G, 0.47B]

 

Common Blend Factors

Some frequently used blend factors include:

  • Off - Disables blending.

  • One - (1, 1, 1) – Leaves the value unchanged.

  • Zero - (0, 0, 0) – Eliminates the value.

  • SrcColor / SrcAlpha - Uses the corresponding components from the source.

  • OneMinusSrcColor / OneMinusSrcAlpha - Uses (1 - source component) values.

  • DstColor / DstAlpha - Uses the destination color or alpha.

  • OneMinusDstColor / OneMinusDstAlpha - Uses (1 - destination component) values.


To use blending in your shader, you'll need to modify the “Queue” tag—by default, it’s set to “Geometry” (making the object opaque). Change it to “Transparent” to indicate that the object should be rendered after opaque objects and blended accordingly.

 

Blend presets provide standard configurations for frequently used blending effects. The table below outlines these presets, the associated Blend command, and a brief description of their typical usage.

Blend Preset               | Blend Command                   | Description
Traditional Alpha Blending | Blend SrcAlpha OneMinusSrcAlpha | The classic method for transparent materials, where the source’s alpha determines transparency.
Additive Blending          | Blend One One                   | Often used for particles or glow effects, as it adds color values, making bright effects stand out.
Mild Additive Blending     | Blend OneMinusDstColor One      | A softer additive effect where the influence of the destination color is partially accounted for.
Multiplicative Blending    | Blend DstColor Zero             | Multiplies the source with the destination, useful for effects like stained glass.
Multiplicative Blending x2 | Blend DstColor SrcColor         | An intensified version of multiplicative blending, doubling the effect for richer colors.
Overlay Blending           | Blend SrcColor One              | Uses the source color as the dominant factor, creating an overlay effect.
Soft Light Blending        | Blend OneMinusSrcColor One      | Produces a soft lighting effect, ideal for subtle highlights and shadows.
Negative Color Blending    | Blend Zero OneMinusSrcColor     | Creates a negative color effect by inverting the source color’s contribution.

 

Pipeline-Specific Notes on Blending

  • Built-in Render Pipeline: Blending commands are written directly in the shader. Depth writes (ZWrite) are typically disabled for transparent materials.

  • URP (2024): You can set blend factors in the shader; however, URP’s material system may also allow artists to adjust these settings via the Inspector. Use HLSLPROGRAM/ENDHLSL blocks in URP shaders.

  • HDRP: Blending is often managed indirectly via material settings—especially for transparency. In HDRP, you might not see explicit Blend commands in the shader code because transparency is handled through the Surface Type setting (Opaque vs. Transparent) in the material Inspector.


Common blend presets can be applied in two ways, as shown below:


  • Option 1: Hardcode in the shader
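
A minimal sketch with the preset fixed in the shader:

Pass
{
    Blend SrcAlpha OneMinusSrcAlpha   // preset hardcoded in the shader
    // ...
}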


  • Option 2: Let artists control blending via the Material Inspector
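
A sketch using enum drawers so the factors become Inspector dropdowns (property names are illustrative; 5 and 10 are the enum values of SrcAlpha and OneMinusSrcAlpha):

Properties
{
    [Enum(UnityEngine.Rendering.BlendMode)] _SrcBlend ("Src Factor", Float) = 5    // SrcAlpha
    [Enum(UnityEngine.Rendering.BlendMode)] _DstBlend ("Dst Factor", Float) = 10   // OneMinusSrcAlpha
}
SubShader
{
    Pass
    {
        Blend [_SrcBlend] [_DstBlend]   // artists pick the factors in the Inspector
        // ...
    }
}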


  • Built-in Render Pipeline
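
A sketch of a transparent pass in the Built-in pipeline:

Pass
{
    Blend SrcAlpha OneMinusSrcAlpha
    ZWrite Off
    CGPROGRAM
    // vertex/fragment code
    ENDCG
}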


  • URP (2024)
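
The equivalent sketch in URP, using the HLSL block and pipeline tag:

SubShader
{
    Tags { "Queue"="Transparent" "RenderPipeline"="UniversalPipeline" }
    Pass
    {
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off
        HLSLPROGRAM
        // vertex/fragment code
        ENDHLSL
    }
}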

 

Troubleshooting Blending

Issue

Fix

HDRP Transparency Not Working

Set Surface Type → Transparent in Material Inspector.

URP Additive Artifacts

Adjust render queue: Tags { "Queue"="Transparent+100" }.

Depth Sorting Issues

Use Offset -1, -1 to manually adjust depth in Built-in Pipeline.

 

4.3 SubShader AlphaToMask

In some cases, standard blending (such as using the "SrcAlpha OneMinusSrcAlpha" blend mode) produces smooth gradients of transparency using fractional alpha values. However, for certain effects—like vegetation cutouts or space portal effects—you might require a binary (on/off) approach to transparency. This is where AlphaToMask comes into play.


How AlphaToMask Works

  • Purpose: AlphaToMask forces the shader to treat the alpha channel as a binary mask rather than a continuous gradient. That means the alpha output can only be 0 or 1 (off or on), producing a harsher, more defined transparency edge.

  • Usage: This technique is particularly useful when you need crisp, hard-edged transparency (for example, for foliage or portal effects) that blending cannot provide.

  • Declaration: The command accepts only two values: On or Off (with the default being Off). It can be declared at either the SubShader or Pass level. When enabled, the shader automatically converts the alpha output to a binary mask.
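
A minimal sketch of the declaration:

SubShader
{
    AlphaToMask On   // alpha output is treated as a binary mask (0 or 1)
    Pass { /* ... */ }
}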


Note: Unlike blending, you don't need to modify the Render Queue or add additional transparency tags when using AlphaToMask. The fourth color channel “A” is automatically treated as a binary mask.

Pipeline-Specific Considerations for AlphaToMask

  • General Behavior: AlphaToMask forces the shader’s alpha channel to be treated as a binary mask (0 or 1) rather than allowing smooth gradients. This produces hard-edged transparency effects, which are ideal for vegetation, portals, or any scenario where crisp cutouts are needed.

  • Built-in Render Pipeline: Works as expected when enabled. Simply adding AlphaToMask On in the SubShader or Pass converts the alpha output into a binary mask without any further changes.

  • URP (2024/2025): In URP, AlphaToMask is fully supported. Just as in the Built-in pipeline, you declare it (preferably within an HLSLPROGRAM block for full compatibility), and it forces the alpha channel into a binary state. No additional transparency tags are needed.

  • HDRP (2024/2025): HDRP primarily manages transparency via material settings (like the Surface Type property). Although HDRP often relies on these high-level controls for transparency, AlphaToMask remains available for cases where a hard, masked transparency is required. The command works the same way as in other pipelines, but you might see it used less often because HDRP typically employs its own transparency workflow.


 

4.4 SubShader ColorMask

The ColorMask command allows you to restrict the GPU to writing only specific color channels (Red, Green, Blue, or Alpha). By default, the GPU writes all channels (RGBA), but you might want to limit the output for certain effects or optimizations.


How ColorMask Works

  • Purpose: ColorMask lets you select which components of the output color should be written to the render target. This is useful, for example, if you want to create a red-only effect or isolate one channel for debugging.

  • Usage Examples:

    • ColorMask R: Only the red channel is written, so the output appears red.

    • ColorMask G: Only the green channel is written.

    • ColorMask B: Only the blue channel is written.

    • ColorMask A: Only the alpha channel is written, which can be used to debug transparency.

    • ColorMask RG: Writes both red and green channels, allowing for channel mixing.
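
For example, a red-only output:

SubShader
{
    ColorMask R   // write only the red channel
    Pass { /* ... */ }
}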


Pipeline Note: The ColorMask command is compatible with both the Built-in Render Pipeline and Scriptable Render Pipelines (URP/HDRP). You can declare it at either the SubShader or Pass level, depending on the desired scope.

Pipeline-Specific Considerations for ColorMask

  • General Behavior: The ColorMask command restricts the GPU to writing only specific color channels (R, G, B, and/or A). By default, all channels (RGBA) are written, but you can limit this to isolate or blend specific outputs (e.g., output only the red channel).

  • Built-in Render Pipeline: ColorMask works identically to how it’s defined in ShaderLab—its syntax and function are the same. It’s commonly used for effects like channel isolation or performance optimizations.

  • URP (2024/2025): URP supports ColorMask without any changes. When writing custom shaders using HLSLPROGRAM blocks, you can still use ColorMask in the SubShader or Pass as usual. Artists can also control these settings via the material’s inspector if exposed.

  • HDRP (2024/2025): HDRP fully supports ColorMask as well. However, HDRP’s advanced color management (including HDR tonemapping and color space conversion) means that if you’re isolating channels or mixing them, you may need to ensure that your shader’s color values are correctly tagged (e.g., using the [HDR] attribute) so that the final output appears as expected.


 

4.5 SubShader Culling and Depth Testing

To fully grasp culling and depth testing, you must first understand how the Z-Buffer (or Depth Buffer) works. Every pixel rendered on screen has an associated depth value stored in the Z-Buffer. This value determines whether an object appears in front of or behind another. In simple terms, objects are rendered from the closest to the farthest from the camera, which allows the GPU to discard pixels that should remain hidden when geometry overlaps.


Before processing a given pixel, the GPU compares the pixel’s depth against the existing value in the Z-Buffer. If the new pixel is closer than the stored value (or meets the specified comparison criteria), it is drawn and its depth is updated; otherwise, it is discarded.


By manipulating depth-related commands, you can generate various visual effects. Three important ShaderLab commands related to this are Cull, ZWrite, and ZTest.



Core Concepts

  • Z-Buffer / Depth Buffer: A dedicated memory space that holds a depth value for each rendered pixel, representing its distance from the camera. The closer an object is, the lower its depth value (on some devices the convention is reversed). When multiple objects contribute to the same pixel, the one with the lowest depth value (closest to the camera) overwrites the others.

The Z-Buffer stores the depth of the objects in the scene, and the Color Buffer stores the RGBA color information

  • Depth Testing: A conditional check that determines whether a pixel should be drawn, based on its depth value relative to what’s already stored in the Z-Buffer. The most common depth tests are "Less" and "LEqual," meaning only pixels closer to (or at the same distance as) the camera overwrite existing ones. If a pixel passes the test, it is drawn and its depth value is written to the Z-Buffer.


  • Rendering Order: For opaque objects, a front-to-back order ensures that only the visible (closest) pixels are processed. For transparent objects, Unity (by default) uses a back-to-front order so that blending occurs correctly.


Like Tags, culling and depth testing options can be written in different fields: within the SubShader field or the Pass field. The position will depend on the result we want to achieve and the number of passes we want to work with.


Additional Notes on Transparent Shaders and Depth Buffer Management

Transparent shaders introduce additional complexity in depth management due to the blending process. Here are key points and challenges:


  • Opaque vs. Transparent Rendering: Opaque objects are rendered in a front-to-back order, which allows the Z-Buffer to correctly discard hidden pixels. Transparent objects are typically rendered in a back-to-front order (though with Scriptable Render Pipelines, this can be customized) to blend correctly.


  • Depth Buffer Behavior in Transparent Objects: Transparent shaders often do not update the Z-Buffer (using ZWrite Off) to prevent artifacts. This is because updating the depth buffer could cause partially overlapped transparent pixels to be incorrectly discarded, leading to issues like parts of an object not being rendered.

    For example, consider a cube with a semi-transparent material and two-sided rendering. If the cube is composed of quads (for simplicity), the GPU might render some polygons later even though a closer polygon should have prevented them via ZTest. If depth values were updated, pixels of later-drawn (but farther) polygons might incorrectly overwrite the closer ones.


  • Sorting Transparent Objects: When rendering two transparent objects (e.g., with alpha = 0.5), the expected outcome is that the first object blends with the opaque background, and then the second object blends with the updated background. This requires careful management of the Z-Buffer—if updated, artifacts can appear (e.g., parts of one object may vanish behind another).


  • Practical Solutions: There is no one-size-fits-all fix. Some approaches include:

    1. Shader Variants: Write separate shader variants that use different ZWrite settings depending on the object's alpha. For instance, one variant with ZWrite On for alpha values in [1.0, 0.5) (to render an "external shell") and another with ZWrite Off for alpha values in [0.5, 0].

    2. Two Different Shaders: Use two shaders (one with ZWrite On and one with ZWrite Off) and combine their output to handle different transparency levels.

    3. Custom Render Pipeline Adjustments: With Scriptable Render Pipelines, you can sometimes modify the transparent rendering order (for example, using front-to-back transparent rendering) to better suit your needs.


    Example Problem Scenario: Imagine placing one transparent object (e.g., a helmet) inside another (e.g., a wall) so that the helmet appears on both sides of the wall. If ZWrite is enabled (writing depth), half of the helmet might not render because the wall’s depth overrides it. With ZWrite Off, both objects blend correctly, though this may affect overall scene depth sorting.


  • Shader Graph Consideration: Unfortunately, Shader Graph currently does not offer easy access to change subshader properties like ZWrite directly (unlike some alternatives such as Amplify Shader Editor). In such cases, you may need to create a custom shader (for example, by starting with a Standard Surface Shader and modifying the generated code) to achieve the desired depth behavior.


 

4.5.1 SubShader Cull

Controls which faces of a polygon are rendered (Back, Front, or Off). Works consistently across pipelines; additional HDRP material settings may enhance its usage. In 3D models, each polygon has a front and a back face:


  • Cull Back (default): Only the front faces are rendered. This is typically more efficient.

  • Cull Front: Only the back faces are rendered.

  • Cull Off: Both faces are rendered. This can be useful when you need to display double-sided materials, such as thin cloth or foliage.


Example:
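
A minimal sketch for a double-sided material:

SubShader
{
    Cull Off   // render both front and back faces
    Pass { /* ... */ }
}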


You can also expose culling as a parameter in the Inspector using an Enum:
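
A sketch using the CullMode enum drawer (property name is illustrative):

Properties
{
    [Enum(UnityEngine.Rendering.CullMode)] _Cull ("Cull Mode", Float) = 2   // 0 = Off, 1 = Front, 2 = Back
}
SubShader
{
    Cull [_Cull]
    Pass { /* ... */ }
}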


Another helpful option is the SV_IsFrontFace semantic, which allows us to project different colors and textures onto the two faces of a mesh. To do so, we simply declare a boolean variable with this semantic as an argument in the fragment shader stage, as shown below.
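
A minimal fragment sketch (assumes _FrontColor and _BackColor are declared color properties, and Cull Off so both faces are rasterized):

fixed4 frag (v2f i, bool face : SV_IsFrontFace) : SV_Target
{
    // pick a different color per mesh face
    return face ? _FrontColor : _BackColor;
}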



 

4.5.2 ShaderLab ZWrite

ZWrite controls whether a shader writes depth information (i.e., the pixel’s distance to the camera) into the Z-Buffer. In other words, it lets us respect or ignore the depth of an object’s surface pixels, which is especially important when dealing with transparency, e.g., when the Blending options are active. It is typically On for opaque objects and Off for transparent ones to avoid Z-fighting.


  • ZWrite On (default): The shader writes depth information. This is typical for opaque materials.

  • ZWrite Off: Depth values are not written. This is usually used for transparent materials to prevent issues like Z-fighting (flickering when objects overlap).
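
A typical transparent setup:

SubShader
{
    Tags { "Queue"="Transparent" }
    Pass
    {
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off   // transparent surfaces don't write depth
        // ...
    }
}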



Z-fighting occurs when two or more objects are at the same distance from the camera, producing identical values in the Z-Buffer. Since the Z-Buffer cannot determine which element is behind the other, it produces flickering lines that change shape depending on the camera’s position. To correct this issue, we can disable depth writing for one of the surfaces with the ZWrite Off command.


 

4.5.3 ShaderLab ZTest

ZTest defines how depth testing is performed by comparing each pixel’s depth to the Z-Buffer.

It supports several comparison functions:


  • Less (<): Draws the pixel if its depth is less than the stored depth.

  • Greater (>): Draws if its depth is greater.

  • LEqual (≤): (Default) Draws if the pixel’s depth is less than or equal to the stored depth.

  • GEqual (≥): Draws if the pixel’s depth is greater than or equal.

  • Equal (==): Draws if the depths are exactly equal.

  • NotEqual (!=): Draws if the depths differ.

  • Always: Ignores depth testing and draws every pixel regardless of depth.


ZTest Less (<) draws the objects in front and ignores objects that are at the same distance as, or behind, the shader’s object.

To understand this command, consider the following exercise: suppose we have two objects in our scene, a Cube and a Sphere. The Sphere is in front of the Cube relative to the camera, and the pixel depth is as expected.

If we position the Sphere behind the Cube, then again the depth values are as expected. Why? Because the Z-Buffer stores a depth value for each pixel on the screen, calculated from each object’s proximity to the camera.

Now, what would happen if we activated ZTest Always? In this case, no depth testing is done, so all pixels appear at the same depth on screen.


Example:
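
A minimal sketch matching the exercise above:

Pass
{
    ZTest Always   // skip depth testing: useful for x-ray or overlay effects
    // ...
}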


 

4.5.4 ShaderLab Stencil

The Stencil Buffer is a specialized buffer that stores an 8-bit integer (values 0–255) for each pixel in the Frame Buffer. Before executing the fragment shader for a pixel, the GPU can perform a Stencil Test — comparing the current value in the Stencil Buffer with a specified reference value. If the test passes, the GPU then performs the depth test; if it fails, the GPU skips further processing for that pixel. Essentially, the Stencil Buffer acts as a mask, letting you control which pixels are drawn and which are discarded.

How the Stencil Buffer Works

  • Storage: The Stencil Buffer is like a “texture” that covers the entire frame, with each pixel storing an integer between 0 and 255.

  • Stencil Test: Before a fragment is processed, the GPU compares the pixel’s current stencil value (from the buffer) with a reference value that you supply (StencilRef), using a mask (StencilReadMask) and a comparison function (Comp).The test can be conceptually described as:
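
In pseudocode:

if ((StencilRef & StencilReadMask) Comp (StencilBufferValue & StencilReadMask))
    // pass: continue to the depth test
else
    // fail: discard the pixel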


  • StencilRef: The reference value you want to compare against. Think of it as an ID you write into the Stencil Buffer.

  • StencilReadMask: A mask that determines which bits to consider during the comparison.

  • Comp: A comparison function that evaluates to true or false (see below).


Available Comparison Functions

The following operators can be used for the stencil comparison:


  • Comp Never: Always fails the test.

  • Comp Less: Passes if the reference is less than the buffer value.

  • Comp Equal: Passes if the values are equal.

  • Comp LEqual: Passes if the reference is less than or equal to the buffer value.

  • Comp Greater: Passes if the reference is greater than the buffer value.

  • Comp NotEqual: Passes if the values are not equal.

  • Comp GEqual: Passes if the reference is greater than or equal to the buffer value.

  • Comp Always: Always passes the test.


Practical Example

Suppose you have three objects in your scene—a Cube, a Sphere, and a Square—and you want to use the Square as a mask so that only the Sphere inside the Cube is visible.


1. Mask Shader (USB_stencil_ref)

This shader writes a reference value (for example, 2) into the Stencil Buffer for all pixels covered by the mask (the Square). Since we only need to mark the pixels without drawing any color, we disable color output and depth writing.
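
A minimal sketch following that description (queue, ZWrite, and ColorMask settings are explained at the end of this section):

Shader "Custom/USB_stencil_ref"
{
    SubShader
    {
        Tags { "Queue"="Geometry-1" }   // 1999: process the mask before regular geometry
        ZWrite Off                      // the mask writes no depth
        ColorMask 0                     // and no color: it stays invisible
        Pass
        {
            Stencil
            {
                Ref 2           // reference value written into the buffer
                Comp Always     // always pass the stencil test
                Pass Replace    // replace the stored value with Ref
            }
        }
    }
}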


2. Masked Object Shader (USB_stencil_value)

This shader renders the object (e.g., the Cube) but only where the Stencil Test indicates it is not part of the mask. It compares the current Stencil Buffer value with the reference value (2) and uses a comparison function to decide whether to draw a pixel.
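
A minimal sketch of the masked object’s stencil block:

Shader "Custom/USB_stencil_value"
{
    SubShader
    {
        Pass
        {
            Stencil
            {
                Ref 2
                Comp NotEqual   // draw only where the buffer value is NOT 2
            }
            // regular CGPROGRAM/HLSLPROGRAM code rendering the object goes here
        }
    }
}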


In this example:

  • The mask shader (USB_stencil_ref) writes a stencil value of 2 over the area of the Square.

  • The masked object shader (USB_stencil_value) renders only where the stencil value is not equal to 2—hiding the Cube where the mask is applied, allowing the Sphere behind it to be visible.


We set "Queue" to "Geometry-1". Since Geometry defaults to 2000, this equals 1999, so our mask is processed before the regular geometry (by default, Unity orders objects by their scene position relative to the camera; lowering the queue overrides this). We set ZWrite Off because the mask needs no depth information, and ColorMask 0 to discard the mask’s pixels in the Frame Buffer, leaving them invisible.


 

4.5.5 ShaderLab Pass

A Pass in rendering refers to generating different layers (e.g., color, light, occlusion) separately in 3D software like Maya or Blender. Each Pass renders one object at a time, equivalent to a draw call, so minimizing passes is crucial to avoid significant graphic load.


A Pass in ShaderLab represents a single render pass—a discrete set of rendering instructions executed by the GPU for an object. By default, Unity adds one pass inside the SubShader. However, you can define multiple passes if needed. Each pass is equivalent to one draw call, which means that an object with multiple passes is rendered multiple times. This can be useful for creating layered visual effects (such as separate passes for base color, lighting, and occlusion) but can also significantly increase the rendering workload.


Since each pass equals a draw call, increasing the number of passes can lead to a heavier GPU load. It's best to use the minimum number of passes necessary for your effect.


Below is an example of a simple shader with a single pass, similar to a basic color shader:
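
A condensed skeleton (the full program would go inside the block, as in the basic color shader of section 2):

Shader "Custom/SinglePass"
{
    SubShader
    {
        Pass   // one pass = one draw call
        {
            CGPROGRAM
            // vertex + fragment program
            ENDCG
        }
    }
}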


If you want to create a shader with two passes (for example, one pass for the base color and another for an additional effect), you would structure your shader like this:
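
A sketch of the two-pass structure:

Shader "Custom/TwoPasses"
{
    SubShader
    {
        Pass   // pass 1: base color
        {
            CGPROGRAM
            // ...
            ENDCG
        }
        Pass   // pass 2: additional effect (renders the object a second time)
        {
            CGPROGRAM
            // ...
            ENDCG
        }
    }
}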



Render Pipeline Considerations (2024/2025)

  • Built-in Render Pipeline: Passes are typically written using CGPROGRAM/ENDCG. This is the classic method.

  • URP/HDRP: For Scriptable Render Pipelines, it is recommended to use HLSLPROGRAM/ENDHLSL instead. Although the concept of a pass remains the same, the newer syntax provides better compatibility with modern shader targets. Regardless of the pipeline, the idea of each pass representing a separate draw call stays consistent.


 

5. CGPROGRAM Section

The sections we have examined earlier are composed in the ShaderLab declarative language. The true challenge in graphics programming begins here with the CGPROGRAM or HLSLPROGRAM declaration.



 

6. Render Pipeline Fundamentals


Key Unity Rendering Architectures:

  1. Built-In Render Pipeline (BIRP)

    • Legacy system using CG/HLSL

    • Uses CGPROGRAM and UnityCG.cginc

    • Good for: Simple projects, mobile (with careful optimization)

  2. Universal Render Pipeline (URP)

    • Modern replacement for BIRP

    • Requires HLSLPROGRAM and Core.hlsl includes

    • Features: Single-pass VR, 2D Renderer

    • Good for: Mobile, PC, consoles (scalable graphics)

  3. High Definition Render Pipeline (HDRP)

    • Physically-based rendering (PBR)

    • Uses compute shaders and complex lighting models

    • Good for: AAA graphics, PC/consoles with high-end hardware


Pipeline Comparison Table:

Feature              | BIRP          | URP       | HDRP
Shader Language      | CG/HLSL       | HLSL      | HLSL
Base Include File    | UnityCG.cginc | Core.hlsl | Common.hlsl
Lightmap Support     | ✓             | ✓         | ✓
Deferred Rendering   | ✓             | Limited   | ✓
Shader Graph Support | No            | Yes       | Yes

 

7. GPU Optimization Essentials


Critical Performance Considerations:

  • Math Operations:

GPUs are optimized for parallel multiply-accumulate (MAC) operations - the core building block of linear algebra:

  • Multiplication is implemented directly in dedicated silicon

  • Division requires iterative approximation algorithms (Newton-Raphson method) or lookup tables

// Simplified GPU execution pattern
Multiply: [Operand A] → [Multiplier Unit] → Result (1-2 cycles)
Divide:   [Operand A] → [Reciprocal Unit] → [Multiplier Unit] → Result (5-20 cycles)

When to Actually Use Division

  1. Dynamic divisors (values changing per-pixel)

    float t = saturate(time / duration);  // Unavoidable

  2. Reciprocal not precomputable

    float specular = energy / (distance * distance); 

  3. Numerical stability required

    // Division preserves precision better in some cases
    float3 normalized = vector / length(vector);


 

  • Data Type Optimization:
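
A rough precision guide (the normalWS field is an assumed interpolator name):

// float : 32-bit; positions, UVs, depth-dependent math
// half  : 16-bit; directions, HDR colors (a big win on mobile)
// fixed : ~11-bit; simple 0-1 color math (Built-in/legacy targets only)
half3 n = normalize(i.normalWS);   // half precision is usually enough for normals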

 
  • Texture Sampling:

    • Prefer tex2Dlod with explicit mip levels for distant objects

    • Use texture compression formats (ASTC, ETC2)

    • Limit texture lookups in fragment shaders


 

  • Branching Efficiency:
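
A common pattern is to replace per-pixel branches with arithmetic selection (colA/colB are illustrative):

float mask = step(0.5, value);        // 0 or 1, no divergence
float3 col = lerp(colA, colB, mask);  // select without branching
// Branches on uniform (per-material) values are generally cheap.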

 
  • SRP Batcher Compatibility:
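
For SRP Batcher compatibility in URP, material properties go into one UnityPerMaterial constant buffer (property names are illustrative):

CBUFFER_START(UnityPerMaterial)
    float4 _BaseMap_ST;
    half4  _BaseColor;
CBUFFER_END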

 
  • Vectorization Tips:
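
For example, combining scalar work into one vector multiply-add (_Tiling/_Offset are assumed properties):

// One MAD on a float2 instead of two scalar multiply-adds:
float2 uv = i.uv * _Tiling.xy + _Offset.xy;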

 
  • Mipmap Calculations:
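
For example, forcing an explicit mip level instead of letting the GPU derive it:

// Sample mip level 4 (cheap, pre-blurred lookup for distant objects):
fixed4 c = tex2Dlod(_MainTex, float4(i.uv, 0, 4));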

 
  • Shader Variant Reduction:
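
A sketch of the usual guidance (keyword names are illustrative):

// shader_feature variants are stripped when unused; _local keywords
// don't count against the global keyword limit:
#pragma shader_feature_local ENABLE_FANCY
// multi_compile keeps every combination in the build, so use it sparingly:
#pragma multi_compile _OVERLAY_OFF _OVERLAY_ADD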

 
  • Debugging & Profiling

    1. Frame Debugger: Analyze draw call order and state changes

    2. Platform-Specific Tweaks:
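
For example, restricting which graphics APIs a shader is compiled for:

#pragma only_renderers d3d11 vulkan metal   // compile only for these APIs
#pragma exclude_renderers gles3             // or exclude specific ones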


  • Precision Analysis:

    • Use asfloat(asuint(value) & 0xFFFF0000) to simulate half precision

    • Check NaN values with isnan()


 

8. Cross-Pipeline Shader Adaptation


URP Shader Example:
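
A minimal URP unlit color shader might look like this (names are illustrative):

Shader "Custom/URPUnlitColor"
{
    Properties
    {
        _BaseColor ("Base Color", Color) = (1, 1, 1, 1)
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" "RenderPipeline"="UniversalPipeline" }
        Pass
        {
            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            // SRP Batcher-compatible material constants
            CBUFFER_START(UnityPerMaterial)
                half4 _BaseColor;
            CBUFFER_END

            struct Attributes { float4 positionOS : POSITION; };
            struct Varyings  { float4 positionHCS : SV_POSITION; };

            Varyings vert (Attributes IN)
            {
                Varyings OUT;
                OUT.positionHCS = TransformObjectToHClip(IN.positionOS.xyz);
                return OUT;
            }

            half4 frag (Varyings IN) : SV_Target
            {
                return _BaseColor;
            }
            ENDHLSL
        }
    }
}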


Key Pipeline Differences:

  1. Includes:

    • URP: Core.hlsl instead of UnityCG.cginc

    • HDRP: Common.hlsl and ShaderVariables.hlsl

  2. Matrix Operations:

    • BIRP: UnityObjectToClipPos(v.vertex)

    • URP: TransformObjectToHClip(positionOS.xyz)

  3. Texture Sampling:

    • URP/HDRP use macro-based sampling:



 

References:

  • High Definition Render Pipeline overview

  • Z-Sorting in transparent shaders

  • The Stencil Buffer in Unity3D

  • Unity Shaders Cheat Sheet

 
