
Houdini Procedural Explosion VFX — Systems Breakdown

  • Writer: Pavel Zosim
  • Mar 2
  • 8 min read

Updated: Mar 30

Why This Breakdown Exists

This Houdini procedural explosion VFX setup is built as a deterministic, layer-based system designed for simulation accuracy and export flexibility.


What I find genuinely interesting in VFX work is not any single effect — it's the patterns underneath. Most effects, when you look closely enough, share the same structural logic. Find that logic, build a system around it, and you get something reusable, predictable, and fast to iterate.


Houdini procedural explosion VFX to Unity
Unity Explosion VFX

This breakdown is about that process — using a procedural explosion as the example. Not because explosions are unique, but because they're a good stress test: they have clear physical reference, multiple simultaneous layers, and they need to work across very different rendering contexts. Getting all of that into one coherent system requires thinking about architecture first and aesthetics second.


The techniques covered here — layered point animation, velocity field construction, Pyro sourcing, VAT export, Six Point Lighting — are not explosion-specific. The same approach applies to any effect where organic behavior and pipeline reusability both matter.


One tool. Predictable behavior. Any engine.




Before Opening Houdini: Scale and Reference

The first thing I do before touching any node is establish ground truth — physically correct scene scale and a solid visual reference.


Scale matters more than most people realize. In Houdini's geometry context (SOPs), units are dimensionless — one unit can mean anything. But the moment you move into simulation (DOPs), everything changes. Pyro, RBD, and FLIP solvers operate on MKS units: meters, kilograms, seconds. Gravity, viscosity, density, buoyancy — all of these are calculated against real physical values. The industry standard is simple: 1 unit = 1 meter. Break that convention and your simulation misbehaves before you've changed a single solver parameter.
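To make the cost of breaking the 1 unit = 1 meter convention concrete, here is a minimal Python sketch — illustrative only, not part of the HDA — of how free-fall timing scales with scene size. An asset modeled 10x too large reads as 10 meters to the solver, and everything plays back in apparent slow motion:

```python
import math

G = 9.81  # Pyro/RBD/FLIP solvers assume MKS: meters, kilograms, seconds

def fall_time(height_m):
    """Free-fall time for a dropped object: t = sqrt(2h / g)."""
    return math.sqrt(2.0 * height_m / G)

# A 1 m debris drop at correct scale:
t_correct = fall_time(1.0)        # ~0.45 s

# The same asset modeled 10x too large: the solver reads 10 m,
# so the motion plays back sqrt(10) ~ 3.16x slower.
t_oversized = fall_time(10.0)     # ~1.43 s

print(round(t_correct, 2), round(t_oversized, 2))
print(round(t_oversized / t_correct, 2))  # scale-induced slowdown factor
```

This is why a mis-scaled explosion "misbehaves" before any solver parameter is touched: the physics is correct, the scene is not.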


For reference, I used real footage of a 122mm high-explosive artillery shell detonation. Before building anything, I needed to understand what I was actually looking at.


Any VFX effect is a combination of static shape and animation. Break it into layers and you get: flame, smoke, debris, displaced ground, shockwave, crater. There's an impact, a detonation, an active expansion phase, and a decay. In animation terms: sharp collision → explosive expansion → dissipation. These patterns — extracted from reference — became the architecture of the tool.




Houdini Procedural Explosion VFX Architecture

I built the system as two connected HDAs.


HDA 1 — initial mesh setup: Generates velocity and prepares all point data for simulation. No simulation runs here. Everything is deterministic and procedural.


HDA 2 — simulation: Takes the output of the first HDA, rasterizes it into VDB volumes, and runs the Pyro solver. A Python script reads the layer count and types from HDA 1 and auto-configures the interface — the designer doesn't touch the graph structure.

The pipeline looks like this:


explosion_guide → se_initial_mesh_setup → se_simulation → Export → Unity / Unreal


The core idea is layer-based organization. At a high level, an explosion is smoke, fire, and fragments. That became the foundation. Each layer is independent: main burst, shockwave, debris, pump impulse. The graph stays closed — only parameters are exposed, grouped into a clean interface.




HDA 1: Initial Mesh and Velocity

Everything starts with two objects: a sphere that defines the shape and scale of the explosion, and a point that defines the direction. These are the two inputs to the first HDA. From there, three tabs.


Houdini procedural explosion VFX layer-based HDA architecture
Initial mesh setup

Tab 1: Initial Shape

This tab controls the base geometry from which velocity is calculated.


Houdini procedural explosion VFX layer-based HDA architecture

A turbulent Perlin noise is applied to the sphere. This creates the characteristic spikes radiating from the epicenter — clearly visible in the first frames of any real explosion. The noise is masked through the dot product of the mesh normals: near the ground, the shape automatically smooths out. This is physically correct — the blast wave flattens as it meets surface resistance.
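In the actual setup this masking is a one-liner in a wrangle; as an illustration of the logic, here is a Python sketch (the function name and parameters are hypothetical, not the HDA's):

```python
import math

def spike_amplitude(normal, base_amp=1.0):
    """Mask noise displacement by how much the normal faces up.

    dot(N, up) is 1 at the top of the sphere and 0 at the equator
    (near the ground plane), so spikes fade out exactly where the
    blast wave meets surface resistance.
    """
    up = (0.0, 1.0, 0.0)
    d = sum(n * u for n, u in zip(normal, up))
    return base_amp * max(d, 0.0)  # clamp: no inverted spikes below ground

# Top of the sphere: full spike amplitude
print(spike_amplitude((0.0, 1.0, 0.0)))                                  # 1.0
# 45 degrees off vertical: reduced
print(round(spike_amplitude((0.0, math.sqrt(0.5), math.sqrt(0.5))), 3))  # 0.707
# Equator (ground level): fully smoothed
print(spike_amplitude((1.0, 0.0, 0.0)))                                  # 0.0
```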





A separate parameter — Point Direction Interpolation (Bias) — interpolates between the original point position and the reference direction point (second input). At Bias = 0, the explosion is symmetric. Offset it and you get a directional blast matching the shell's angle of entry. No manual geometry editing required.
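The Bias parameter is a straightforward linear interpolation per point. A small Python sketch of the idea (names are illustrative, not the HDA's internals):

```python
def apply_direction_bias(p, direction_point, bias):
    """Interpolate a point toward the reference direction point.

    bias = 0 keeps the symmetric explosion; bias > 0 skews the
    shape toward the shell's angle of entry.
    """
    return tuple(a + (b - a) * bias for a, b in zip(p, direction_point))

center = (0.0, 1.0, 0.0)   # original point position
entry  = (2.0, 3.0, 0.0)   # reference direction point (second HDA input)

print(apply_direction_bias(center, entry, 0.0))   # unchanged: (0.0, 1.0, 0.0)
print(apply_direction_bias(center, entry, 0.5))   # halfway:   (1.0, 2.0, 0.0)
```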



Houdini procedural explosion VFX layer-based HDA architecture
An optional Flow Noise layer can be enabled on top for additional organic variation.

Tab 2: Burst Animation

This tab controls how the explosion lives in time.

  • Burst Duration — total length of the burst phase in frames

  • Min Life / Max Life — randomized per-point lifetime range. Points die at different times, so the explosion dissolves organically rather than cutting off in a single frame

  • Burst Time Remap — a curve that remaps the animation time progress. This is the main artistic tool for controlling the "character" of the blast — sharp start with slow decay, or a delayed peak

  • Death Rate — a separate decay curve, independent of duration. Together with Time Remap, these two curves give full control over the temporal shape of the effect

Houdini procedural explosion VFX layer-based HDA architecture
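How per-point lifetime and the Time Remap curve interact can be sketched in Python. Everything here is an illustrative stand-in for the HDA's ramp parameters — the ease-out lambda plays the role of the Burst Time Remap curve:

```python
import random

def point_age_curve(frame, birth_frame, life, remap=lambda t: t):
    """Normalized, remapped age of a point: 0 at birth, 1 at death.

    `remap` stands in for the Burst Time Remap curve: feed it an
    ease-out to get a sharp start with slow decay.
    """
    t = (frame - birth_frame) / life
    if t < 0.0 or t > 1.0:
        return None  # point not born yet, or already dead
    return remap(t)

rng = random.Random(7)  # fixed seed: the setup stays deterministic
min_life, max_life = 12.0, 24.0
lives = [rng.uniform(min_life, max_life) for _ in range(5)]

# Sharp start, slow decay: a simple ease-out as the remap curve
ease_out = lambda t: 1.0 - (1.0 - t) ** 2

# At frame 18, short-lived points are already dead while long-lived
# ones persist -- the organic dissolve instead of a one-frame cutoff
ages = [point_age_curve(18, 0, life, ease_out) for life in lives]
print([round(a, 2) if a is not None else None for a in ages])
```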

Tab 3: Layers

This is where the core logic lives. The current setup has four layers, each independent, plus a separate debris section.



Layer 1 — Main burst. Scale (1,1,1), shape taken directly from Initial Shape. Two independent Jitter passes — one for large-scale spread, one for fine detail. This creates two-level organic point distribution.


Layer 2 — Curl Noise velocity layer. Animated Curl Noise with custom swirl and turbulence parameters. Curl Noise is divergence-free — it creates only rotation in the velocity field, no sources or sinks. This is what produces the organic billowing behavior of real smoke.
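Why divergence-free matters can be verified numerically: build the velocity as the curl of a scalar potential and the divergence vanishes by construction. A small sketch — the potential here is a smooth stand-in for animated noise, not Houdini's actual Curl Noise implementation:

```python
import math

def psi(x, y):
    """Scalar noise potential (a smooth stand-in for animated noise)."""
    return math.sin(1.3 * x) * math.cos(1.7 * y)

def curl_velocity(x, y, h=1e-5):
    """2D curl-noise velocity: v = (d(psi)/dy, -d(psi)/dx).

    Built this way, the field is divergence-free by construction:
    pure rotation, no sources or sinks.
    """
    dpsi_dx = (psi(x + h, y) - psi(x - h, y)) / (2 * h)
    dpsi_dy = (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    return (dpsi_dy, -dpsi_dx)

def divergence(x, y, h=1e-4):
    """Numerical divergence of the velocity field: dvx/dx + dvy/dy."""
    dvx_dx = (curl_velocity(x + h, y)[0] - curl_velocity(x - h, y)[0]) / (2 * h)
    dvy_dy = (curl_velocity(x, y + h)[1] - curl_velocity(x, y - h)[1]) / (2 * h)
    return dvx_dx + dvy_dy

# Divergence vanishes everywhere (up to finite-difference error)
for p in [(0.2, 0.5), (1.0, -0.7), (2.3, 1.1)]:
    assert abs(divergence(*p)) < 1e-4
print("divergence-free: OK")
```

No sources or sinks means smoke pushed by this field only swirls — density is never artificially created or destroyed by the velocity layer, which is exactly the billowing look the layer is for.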


Layer 3 — Shockwave. Scale compressed heavily on Y, expanded on X/Z, rotated -90° on Y. The sphere becomes a flat disc. This shape generates the horizontal outward blast wave.


Layer 4 — Pump. "Layer is Pump" enabled, with a 2-frame Time Shift delay. Scale inverted relative to Layer 3: narrow on X/Z, extended on Y. This creates a vertical column — the rising impulse that forms the characteristic mushroom after detonation.


Layer Debris — a separate section for fragments. Points scatter per fragment, additional vertical velocity is added, Curl Noise controls trajectory variation, and a Spinning Force range drives per-fragment random rotation.


Houdini procedural explosion VFX layer-based HDA architecture


The output of HDA 1 is several separate streams: point cloud layers with velocity and animation, the pump layer, debris points with trails, and a crater mask.


No simulation has run yet. All of this is deterministic and procedural — recalculates in seconds when any parameter changes.



HDA 2: Rasterization and Pyro

The second HDA receives the data from the first. On initialization, a Python script reads the layer count and types and auto-configures the interface. The designer gets a ready-made interface that matches whatever was set up in HDA 1.


Houdini procedural explosion VFX layer-based HDA architecture
se_simulation with LAYERS / DEBRIS / PUMP / SIM branches

Change the layer count in the first HDA — the second one updates automatically.


Tab 1: Pyro Setup

Per-layer noise settings for density and temperature — type, frequency, detail. This is where the character of the smoke is defined: how lumpy, how uniform, how the edges behave.


For the main burst layer, I used Alligator noise — it produces the characteristic bumpy structure close to how real smoke behaves at cloud boundaries. Fractal parameters control detail at different scales.


The Debris layer has a separate Velocity tab with optional Curl Noise for additional swirl along fragment trajectories.


Tab 2: Rasterize Attributes

This controls how the point cloud is converted into a VDB volume. Each layer has a Particle Scale — the influence radius of each point during rasterization. Too large and you lose detail. Too small and you get discretization artifacts.

The debris layer uses a small particle scale by design — fragments are point sources, leaving a minimal volumetric trail.


Tab 3: Smoke Simulation

Pyro solver settings. Voxel Size balances detail against simulation time. Per-layer multipliers for Density, Temperature, and Velocity determine the weight of each layer in the final simulation. The shockwave and curl layers get a higher velocity multiplier to amplify their influence on smoke behavior.




Export: The Engine Doesn't Matter

After simulation, the output is exported in whatever format the pipeline requires:

  • VDB — for cinematic rendering in Karma, Arnold, or any renderer

  • Vector Fields — for real-time particle simulation in-engine

  • VAT (Vertex Animation Texture) — for playback in Unity and Unreal without runtime simulation

  • Static mesh or cache — for geometry


The engine doesn't matter because all the physics and shape were determined at the point animation stage. By the time you export, it's data — not logic. The same setup delivers cinematic renders and real-time game effects. The fundamentals are always the same. Only the output format changes.


I've covered various export methods — VAT, Vector Fields, mesh caching — in a separate breakdown: FX Flame Simulation & Vertex Animation Export


Camera Setup

Camera configuration is often overlooked, but it directly affects both render time and export quality — and the setup is completely different depending on your target.


For cinematic rendering, you can use camera frustum culling to clip parts of the simulation that extend beyond the frame — with an offset to avoid cutting too aggressively. This can save meaningful simulation and render time on large explosions. One important caveat: before enabling frustum culling, check whether any out-of-frame geometry contributes to shadows or indirect lighting in the shot. Culling shadow casters that aren't visible can break the lighting in ways that are hard to diagnose.


For game engine export (VAT / flipbook), the camera requirements are entirely different:

  • Set projection to Orthographic — perspective introduces distortion that breaks the VAT reconstruction in-engine

  • Resolution must be a power of two (512, 1024, 2048) — non-power-of-two textures cause issues with mip generation and sampling in most engines

  • Keep the animation within 16 frames for the spritesheet — this is the practical limit before texture memory becomes a problem. A 9×9 flipbook (81 frames) works for higher quality, but know your target platform's constraints before committing
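A quick sanity check for spritesheet layouts, assuming a square frame grid (the helper function is hypothetical, not part of the export pipeline):

```python
import math

def flipbook_layout(frame_count, frame_px):
    """Spritesheet layout: grid size, sheet resolution, power-of-two check.

    Assumes a square grid; the power-of-two check is what most
    engines want for clean mip generation and sampling.
    """
    grid = math.ceil(math.sqrt(frame_count))
    sheet_px = grid * frame_px
    is_pot = sheet_px > 0 and (sheet_px & (sheet_px - 1)) == 0
    return grid, sheet_px, is_pot

# 16 frames at 256 px per frame: 4x4 grid, 1024 px sheet, power of two
print(flipbook_layout(16, 256))   # (4, 1024, True)

# 81 frames (9x9) at 256 px: 2304 px sheet, NOT a power of two --
# you'd have to drop per-frame resolution to land on 2048
print(flipbook_layout(81, 256))   # (9, 2304, False)
```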


Getting the camera wrong at this stage means redoing the render. Lock it down before the simulation cache is written.


Houdini VAT export orthographic camera setup

Motion Vectors — Frame Blending

A Motion Vector texture stores the per-pixel direction and magnitude of movement between frames. In a game engine, the shader uses this data to interpolate between flipbook frames at runtime — effectively faking in-between frames that were never rendered.


Why it matters: without Motion Vectors, a flipbook plays as a hard cut between frames — you see the stutter, especially at low frame counts. With Motion Vectors enabled, the engine blends between frames based on actual movement direction, so 16 or 24 rendered frames can look as smooth as 60. This directly reduces texture memory and render time without sacrificing perceived quality.


One important note: Motion Vector Influence should be kept low — typically between 0.0005 and 0.001. Too high and the interpolation overshoots, causing the effect to smear or swim. Find the minimum value that eliminates stutter and stop there.
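The blending itself can be sketched on a 1D "image". Note that `influence` here works in pixel units so the effect is visible in a toy example; the in-engine parameter operates in normalized UV space, which is why its useful range (0.0005–0.001) is so small:

```python
def sample(frame, x):
    """Point-sample a 1D 'frame' with clamped indexing."""
    i = max(0, min(len(frame) - 1, int(round(x))))
    return frame[i]

def blend_with_motion_vectors(frame_a, frame_b, mv, t, influence=1.0):
    """Motion-vector frame blending for one row of pixels.

    Instead of a hard crossfade, each frame is sampled along the
    stored motion direction before mixing, so features travel
    between frames instead of ghosting. `influence` scales the
    offset -- the parameter you keep low to avoid smearing.
    """
    out = []
    for x in range(len(frame_a)):
        # advect sample positions along the motion vector
        a = sample(frame_a, x - mv * t * influence)
        b = sample(frame_b, x + mv * (1.0 - t) * influence)
        out.append(a * (1.0 - t) + b * t)
    return out

# a bright feature moves 4 pixels to the right between frames
frame_a = [0, 0, 1, 0, 0, 0, 0, 0]
frame_b = [0, 0, 0, 0, 0, 0, 1, 0]

# halfway: a plain crossfade would show two half-bright ghosts;
# motion-vector blending reconstructs one feature mid-travel
print(blend_with_motion_vectors(frame_a, frame_b, mv=4, t=0.5))
# -> [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
```

Overshooting `influence` in this sketch would pull the samples past the true feature position — the 1D equivalent of the smearing and swimming you see in-engine.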


Six Point Lighting — Why It Matters for Real-Time


Standard sprite or flipbook effects are lit as flat surfaces — they receive light from one direction and ignore everything else. This looks fine in a static scene but breaks the moment a dynamic light source moves, or the effect is placed near colored environment lighting.


Unity six point lighting flipbook comparison

Six Point Lighting solves this. During the Houdini render, six separate cameras capture the volume from six directions — Top, Left, Right, Bottom, Back, Front. The resulting two textures (TLR and BBF) store baked illumination for each voxel from all six axes.


In-engine, the shader reconstructs approximate volumetric lighting by blending these six samples based on the current light direction. The result: the explosion responds correctly to any light in the scene — a muzzle flash nearby, rotating sunlight, colored fill from an environment — without any volumetric rendering, without runtime simulation.
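The directional blend can be sketched as normalized, clamped dot products against the six capture axes. The exact reconstruction varies by shader, and the axis-to-texture mapping here is an assumption for illustration:

```python
def six_point_weights(light_dir):
    """Blend weights for the six baked lighting directions.

    Each axis contributes max(dot(L, axis), 0); weights are then
    normalized so the reconstructed lighting stays energy-stable
    as the light rotates.
    """
    axes = {
        "right":  ( 1.0,  0.0,  0.0), "left":   (-1.0,  0.0,  0.0),
        "top":    ( 0.0,  1.0,  0.0), "bottom": ( 0.0, -1.0,  0.0),
        "front":  ( 0.0,  0.0,  1.0), "back":   ( 0.0,  0.0, -1.0),
    }
    raw = {name: max(sum(l * a for l, a in zip(light_dir, axis)), 0.0)
           for name, axis in axes.items()}
    total = sum(raw.values()) or 1.0
    return {name: w / total for name, w in raw.items()}

# light directly overhead: only the Top bake contributes
w = six_point_weights((0.0, 1.0, 0.0))
print(w["top"], w["bottom"])                       # 1.0 0.0

# light at 45 degrees between top and right: equal split
w = six_point_weights((0.7071, 0.7071, 0.0))
print(round(w["top"], 2), round(w["right"], 2))    # 0.5 0.5
```

The final pixel is then a weighted sum of the six baked samples read from the TLR and BBF textures, tinted by the light's color — no volumetric rendering at runtime.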


This is the key difference between a VFX asset that integrates into a scene and one that sits on top of it. Six Point Lighting is what makes a flipbook explosion feel like it belongs in the world rather than being composited over it.


six point lighting setup houdini
Houdini 6-point Lighting Preparation for Rendering

 

Like this post? ( ´◔ ω◔`) ノシ

Support: Buy Me a Coffee | Patreon | GitHub | Gumroad | YouTube



© pavelzosim.com — Technical Artist (Houdini, Game Engines)
