
Deferred Rendering Basics


Deferred Rendering, in layman's terms:

Normally:

Vertex shader calculates positions and passes data to Fragment Shader

Fragment shader takes data like position, color, lighting, normal, etc., and combines it into a final color. Then it pushes the final color to the back buffer.

Push the back buffer to the screen.
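
For contrast, here's a minimal sketch of what that normal (forward-shaded) fragment shader might look like, written as GLSL embedded in a C++ string literal. The single directional light and all of the uniform names are just assumptions for illustration, not anything from the actual project:

```cpp
// Hypothetical forward-shading fragment shader: lighting is computed
// immediately, and only the final color leaves the shader.
static const char* kForwardFrag = R"GLSL(
#version 330 core
in vec3 vNormal;     // interpolated from the vertex shader
in vec2 vTexCoord;

uniform sampler2D uDiffuseTex;
uniform vec3 uLightDir;    // single directional light (assumed), world space
uniform vec3 uLightColor;

out vec4 fragColor;

void main() {
    vec3  albedo = texture(uDiffuseTex, vTexCoord).rgb;
    float ndotl  = max(dot(normalize(vNormal), -normalize(uLightDir)), 0.0);
    // Final color goes straight toward the back buffer.
    fragColor = vec4(albedo * uLightColor * ndotl, 1.0);
}
)GLSL";
```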

Deferred:

Vertex shader calculates positions and passes data to Fragment Shader

Fragment shader does as little calculation as possible, and instead stores all the data by writing it out to one or more (usually more) textures.

Then take the textures and redo the Vertex shader and Fragment shader steps, except now the vertices are for a billboard covering the front of the screen, and the textures are the ones you created in the previous step.

The final fragment shader run takes all of the data for each pixel, combines it into one final color with SCIENCE, and pushes the color to the back buffer.

Push the back buffer to the screen.
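
Here's a minimal sketch of the two deferred fragment shaders described above, again as GLSL in C++ string literals. The texture layout matches the one described further down, and every name (uniforms, outputs, texture bindings) is a placeholder:

```cpp
// Hypothetical geometry-pass fragment shader: no lighting, just write the
// surface data out to multiple render targets (the deferred textures).
static const char* kGeometryFrag = R"GLSL(
#version 330 core
in vec3 vNormal;
in vec2 vTexCoord;

uniform sampler2D uDiffuseTex;
uniform float uSpecIntensity;

layout(location = 0) out vec4 gDiffuseSpec;  // RGB diffuse, A specular
layout(location = 1) out vec4 gNormalSpec;   // XYZ normal, W spare specular component

void main() {
    gDiffuseSpec = vec4(texture(uDiffuseTex, vTexCoord).rgb, uSpecIntensity);
    // Assumes a float texture for normals; with an 8-bit texture you'd
    // store normal * 0.5 + 0.5 instead and decode in the lighting pass.
    gNormalSpec  = vec4(normalize(vNormal), 0.0);
    // Depth is written automatically to the depth attachment.
}
)GLSL";

// Hypothetical lighting-pass fragment shader: runs on the full-screen
// billboard and combines the stored data into the final color.
static const char* kLightingFrag = R"GLSL(
#version 330 core
in vec2 vTexCoord;                 // 0..1 across the screen

uniform sampler2D uDiffuseSpecTex;
uniform sampler2D uNormalSpecTex;
uniform sampler2D uDepthTex;       // used once position matters (point/spot lights, specular)
uniform vec3 uLightDir;
uniform vec3 uLightColor;

out vec4 fragColor;

void main() {
    vec3  albedo = texture(uDiffuseSpecTex, vTexCoord).rgb;
    vec3  normal = normalize(texture(uNormalSpecTex, vTexCoord).xyz);
    float ndotl  = max(dot(normal, -normalize(uLightDir)), 0.0);
    fragColor = vec4(albedo * uLightColor * ndotl, 1.0);   // the "SCIENCE" step
}
)GLSL";
```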

Steps as concisely as possible:

  1. Set render targets to textures used in deferred rendering
  2. Push objects to render pipeline as normal
  3. Vertex shader does usual stuff, nothing new here
  4. Fragment shader does simple calculations, stores values in correct textures
  5. First render pass ends
  6. Set render target to back buffer (or texture again if we want post-processing)
  7. Push billboard mesh to render pipeline with textures
  8. A very simple vertex shader is all that's needed, since you can pass in screen-space coordinates directly
  9. Fragment shader reads each channel of the textures and does the appropriate math on them as needed, then pushes the result to the back buffer
  10. Push back buffer to screen (or do post-processing steps)
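
In (OpenGL-flavored) C++ the flow above might look roughly like this. The FBO, textures, shader programs, and draw helpers are assumed to exist elsewhere; the names are placeholders:

```cpp
#include <GL/glew.h>   // or whatever GL loader the project uses

// Assumed to be created elsewhere (see the texture setup sketch below).
extern GLuint gbufferFBO;
extern GLuint diffuseSpecTex, normalSpecTex, depthTex;
extern GLuint geometryProgram, lightingProgram;
extern void drawSceneGeometry();       // hypothetical: submits the scene meshes
extern void drawFullscreenBillboard(); // hypothetical: submits the screen-space quad

void renderFrame() {
    // Pass 1: render the scene into the deferred textures (steps 1-5).
    glBindFramebuffer(GL_FRAMEBUFFER, gbufferFBO);
    glEnable(GL_DEPTH_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(geometryProgram);
    drawSceneGeometry();

    // Pass 2: render the full-screen billboard into the back buffer (steps 6-9).
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // 0 = default framebuffer / back buffer
    glDisable(GL_DEPTH_TEST);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(lightingProgram);
    // The sampler uniforms are assumed to have been set to units 0/1/2 at init time.
    glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, diffuseSpecTex);
    glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, normalSpecTex);
    glActiveTexture(GL_TEXTURE2); glBindTexture(GL_TEXTURE_2D, depthTex);
    drawFullscreenBillboard();

    // Step 10: the platform's swap-buffers call pushes the back buffer to the screen.
}
```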

Textures To Render:

Deferred rendering at its core should require about 2-3 textures, depending on the effects you want to support. Generally, here's what we want:

Texture 1 – Diffuse & Specular – 4-channel; the first 3 are diffuse RGB, the fourth is specular color or intensity

Texture 2 – Normal & Specular – 4-channel; the first 3 are the normal XYZ, and the fourth is the other component of specular if you need it

Texture 3 – Depth – single channel, storing depth. This is necessary to reconstruct the position for things like lighting checks

Optional: if you are less worried about memory efficiency, the 3rd texture can be 3-channel and store the actual position instead. However, this is generally unnecessary.
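
Here's a sketch of how those textures could be allocated and attached with OpenGL. The formats shown are one reasonable choice, not the only one, and the names are again placeholders:

```cpp
#include <GL/glew.h>

GLuint gbufferFBO, diffuseSpecTex, normalSpecTex, depthTex;

// Call this at startup. If the window is resized, the textures need to be
// recreated (or re-specified with glTexImage2D) at the new size.
void createGBuffer(int width, int height) {
    glGenFramebuffers(1, &gbufferFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, gbufferFBO);

    // Texture 1: diffuse RGB + specular in alpha (8 bits per channel is usually enough).
    glGenTextures(1, &diffuseSpecTex);
    glBindTexture(GL_TEXTURE_2D, diffuseSpecTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, diffuseSpecTex, 0);

    // Texture 2: normal XYZ + spare specular component (float so signed normals fit as-is).
    glGenTextures(1, &normalSpecTex);
    glBindTexture(GL_TEXTURE_2D, normalSpecTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, normalSpecTex, 0);

    // Texture 3: single-channel depth, attached as the depth buffer but sampled later.
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

    // Tell GL that the fragment shader writes to both color attachments.
    GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, drawBuffers);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error (log, assert, etc.)
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```

Using a float format for the normal texture avoids having to remap the signed normal into [0,1]; an RGBA8 texture works too if you do that remap in the shader.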

Additional Considerations:

• To convert the stored depth (plus the pixel's screen coordinates) back into a world-space position, you'll need the inverse of the ViewProjection matrix (see the sketch after this list)

• I never got to doing point/spot lights with this, so there might be a handful of things I’m missing that we need for those

• The deferred rendering textures need to be resized to match the window size if the window changes.
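
Here's a sketch of the position reconstruction mentioned in the first bullet, once more as GLSL in a C++ string. It assumes the depth texture from the first pass and a uInvViewProj uniform that the application fills with the inverse of the camera's ViewProjection matrix:

```cpp
// Hypothetical snippet for the lighting-pass fragment shader: rebuild the
// world-space position of a pixel from its screen coordinate and stored depth.
static const char* kReconstructPosition = R"GLSL(
uniform sampler2D uDepthTex;
uniform mat4 uInvViewProj;   // inverse of the camera's ViewProjection matrix

vec3 worldPositionFromDepth(vec2 texCoord) {
    float depth = texture(uDepthTex, texCoord).r;   // 0..1 depth-buffer value
    vec4 clip   = vec4(texCoord * 2.0 - 1.0,        // x,y in NDC (-1..1)
                       depth    * 2.0 - 1.0,        // z in NDC (-1..1)
                       1.0);
    vec4 world  = uInvViewProj * clip;              // back through projection and view
    return world.xyz / world.w;                     // perspective divide
}
)GLSL";
```

On the CPU side that uniform would be filled with something like glm::inverse(projection * view), assuming GLM or a similar math library is in use.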