
Decals (deferred rendering)

I knew I needed something to spruce up my monotonous terrain, so I recently implemented a decal system. This lets me paste moss, leaves, footprints, bullet holes and anything else “flat” anywhere on my terrain (or any object in the world).

 


Some decals on a cliff and brick wall (admittedly pretty poor examples right now).

 

A deferred rendering engine offers an opportunity for easy-to-implement decals. Instead of trying to generate decal geometry that matches the scene geometry (a very complicated proposition), we can just use the information in our G-buffer to project the decal textures onto the scene.

Some literature:

 

The basic premise is to render a rectangular box enclosing the area where you want the decal projected. This geometry must be rendered after the regular G-buffer pass, since it requires the depth buffer (and possibly the normal buffer). It could be done during the lighting pass, but then you’d need to apply lighting to each decal.

Instead, my decals are rendered to the albedo part of the G-buffer, but using the data from the depth buffer. I also support normal-mapped decals, which means I might also render to the normal buffer. This has some implications which I’ll go into later on.

 


Decal projected onto a cliff, showing the box geometry we use to render it.

 

Note that you aren’t necessarily limited to rendering a rectangular box; it could be any geometry, as long as it covers the area you need and doesn’t cover too much (which would result in the decal being projected over too large an area).

 


The decal box was too big, and ended up projecting the texture onto the white cylinder too.

 

 

A rectangular box also provides a simple way to determine the texture coordinates for your decal sample. Given your depth buffer sample, you calculate the pixel’s world position. That position is then transformed back into the rectangular box’s object space using the inverse World matrix. From there we can just use the position’s xy coordinates to determine the decal texture coordinate.
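My exact depth encoding isn’t important here (I store a linear view-space depth, which changes the details slightly), but as a rough sketch of the reconstruction, assuming a depth target that stores post-projection depth and using illustrative names for the sampler and inverse view-projection matrix:

	// Sample the depth render target at this pixel's screen-space texture coordinate.
	float depth = tex2D(DepthTextureSampler, screenTexCoord).r;
	// Rebuild the clip-space position and transform it back into world space.
	float4 screenPosition = float4(screenTexCoord.x * 2 - 1, -(screenTexCoord.y * 2 - 1), depth, 1);
	float4 worldPositionFromDepthBuffer = mul(screenPosition, InverseViewProjection);
	worldPositionFromDepthBuffer /= worldPositionFromDepthBuffer.w;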

 

	float4 objectPosition = mul(worldPositionFromDepthBuffer, InverseWorld);
	// objectPosition gives us a position inside (or not) of our 1x1x1 cube centered at (0, 0, 0).
	//  Reject anything outside.
	clip(0.5 - abs(objectPosition.xyz));
	// Add 0.5 to get our texture coordinates.
	float2 decalTexCoord = objectPosition.xy + 0.5;
	float4 color = tex2D(DecalTextureSampler, decalTexCoord);

 

 


Some leaf decals projected onto the ground.

 

Normal-mapped decals

Often you’ll want your decals to include a normal map. Otherwise they’ll “inherit” the normal information that is in your G-buffer.

 


The leaves on the right don’t have a normal map associated with them, so they get the normal information from the underlying grass.

 

If you’re using a normal map, you obviously need a normal (not to mention a binormal and tangent). But you can’t sample from the normal buffer if you’re also writing to it (yes, you could make a copy of it – at some cost). The information in the depth buffer can give you the normal, however. Not only that, but there is a way to construct a tangent basis from this information too. I won’t go into the details here (maybe I’ll save that for a future post), but the link describes in great detail how to do this and provides a shader snippet.
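In condensed form it goes something like this (a sketch using the variables from the snippet above; NormalMapSampler is an illustrative name, and cotangent_frame is the function listed in full in the comments below):

	// Hard-edged surface normal from screen-space derivatives of the reconstructed world position.
	float3 pixelNormal = normalize(cross(ddy(worldPositionFromDepthBuffer.xyz), ddx(worldPositionFromDepthBuffer.xyz)));
	// Build a tangent basis from that normal, the world position and the decal texture coordinate.
	float3x3 tbn = cotangent_frame(pixelNormal, worldPositionFromDepthBuffer.xyz, decalTexCoord);
	// Sample the decal's normal map and rotate the result into world space.
	float3 normalMapSample = tex2D(NormalMapSampler, decalTexCoord).xyz * 2 - 1;
	float3 worldNormal = normalize(mul(normalMapSample, tbn));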

One thing to note is that if you extract normals from the depth buffer, you’ll be getting “hard-edged” normals from the scene geometry. That’s not always desirable. As I mentioned before, you can use the normal buffer (if you make a copy of it) – although that has its own problems, since these are the “normal-mapped normals”, not the base smooth surface normals (which is ideally what you want, since this is what was used to generate the original image).

As an additional alternative, I allow using a normal based off the actual decal box. This means the decal has a single constant normal all the way across. That’s fine when the surface being projected on is flat and the decal box is well-aligned with it. Here’s an image that shows the differences.

 


The depth buffer-based normal shows the hard angles of the underlying geometry. The decal-based normal has very obvious “edges” at the edge of the decal, but in an actual-use case this would be mitigated by alpha blended edges.
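For completeness, the decal-box-based normal is trivial: a single constant normal taken from the box itself. A minimal sketch (assuming the box projects along its local z axis, as the texture coordinate code above implies, and that DecalWorld is the box’s world matrix):

	// One constant normal for the whole decal: the box's projection axis rotated into world space.
	float3 decalBoxNormal = normalize(mul(float3(0, 0, -1), (float3x3)DecalWorld));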

 

I also support decals with just normal maps and no regular texture. I use this to have footprints in the snow, for instance:

 


Who left these?

Alpha blending

I was worried about how blending normals into the normal part of the G-buffer would work, and also about blending in the sRGB color space. It turns out neither is a big visual problem.

I use the spheremap transform for storing my normals. I haven’t done a mathematical analysis, but it seems to behave fine when blending new normals in (i.e. there’s nothing too crazy going on on the edges of my decals).
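For reference, this is the widely used spheremap transform encode/decode pair (a sketch of the commonly published version; my shader may differ in minor details):

	// Encode a unit normal into two channels using the spheremap transform.
	float2 encodeNormal(float3 n)
	{
		float p = sqrt(n.z * 8 + 8);
		return n.xy / p + 0.5;
	}

	// Decode two channels back into a unit normal.
	float3 decodeNormal(float2 enc)
	{
		float2 fenc = enc * 4 - 2;
		float f = dot(fenc, fenc);
		float g = sqrt(1 - f / 4);
		return float3(fenc * g, 1 - f / 2);
	}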

Now is a good time to talk a bit more about my G-buffer layout. My albedo buffer also contains an emissive channel (in A), and the normal buffer has specular intensity and power (in B and A). I simply turn off color writes to those channels when drawing decals.

Emissive

Eventually I’ll support modification of the specular channels with decals, but I decided I definitely wanted modification of the emissive channel now. This makes it easy to put mysterious glowing marks on cliffs, for instance.

 


What does it mean?

 

I was worried that I wouldn’t be able to draw both into the color part of my albedo buffer (RGB) and into the emissive part (A), since I need the alpha to control which parts of the decal are transparent, or that I would have to draw the decal twice to achieve this.

However, as long as I’m OK with a constant emissive value over the entire decal, I can do this with a single draw operation. The trick is to control the color blending with the (source) alpha channel, but control the alpha channel blending with the blend factor:

 

        BlendState blendStateTexture = new BlendState
        {
            AlphaBlendFunction = BlendFunction.Add,
            ColorSourceBlend = Blend.One,
            AlphaSourceBlend = Blend.BlendFactor,
            ColorDestinationBlend = Blend.InverseSourceAlpha,
            AlphaDestinationBlend = Blend.InverseSourceAlpha,
        };

 

Then when I draw, I set the GraphicsDevice’s BlendFactor to the emissive value I require.

 

Restricting which objects are affected

If you project a texture against a cliff, you don’t want a character to walk through the decal projection box and suddenly have the decal appear on the character. People solve this issue in various ways. One method might be to draw moving objects after your decal pass. The way I solve this is to write an object type id into my G-buffer. I have a channel reserved for this in my depth buffer (very convenient). So when the decal shader samples from the depth buffer, it can tell what kind of object that pixel is from. I can set the decal to match only certain types of objects. This is more flexible than a simple dynamic/static choice and imposes fewer restrictions on render order. I can, for instance, have a decal projected onto the ground but not affect low-to-the-ground shrubs.
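As a sketch of what the decal shader does with that id (the channel layout and parameter names here are illustrative):

	// Object type id packed into one 8-bit channel of the depth render target (illustrative layout).
	float objectTypeId = round(tex2D(DepthTextureSampler, screenTexCoord).b * 255);
	// DecalObjectTypes holds the ids this particular decal is allowed to affect.
	bool accepted = (objectTypeId == DecalObjectTypes.x) || (objectTypeId == DecalObjectTypes.y);
	// Reject pixels belonging to any other kind of object.
	clip(accepted ? 1.0 : -1.0);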

 


Neither the character nor the cactus is affected by the decal on the ground, despite being inside the decal projection box.

 

I’ll try to follow this up in a while with a post giving more details on the tangent space reconstruction, perhaps some shader instruction counts, and any other issues I come across (like fading out the decal texture when it’s projected at oblique angles, which I haven’t addressed yet).


24 comments on “Decals (deferred rendering)”

  1. Very nice!
    I’ve integrated this into my engine (it’s called “Motor”).

    BTW:
    It seems like your alpha blending is incorrect on the blood. It fades to white instead of simply becoming transparent (in the first screenshot).

  2. Hey,

    I really like your articles and the information you provide with them.
    At the moment I’m trying to implement screen space decals into my deferred rendering engine as well.

    I draw the decals after the gbuffer pass on my albedo and normal buffer.

    Unfortunately I’m not able to get normal maps to work with my decals properly. Whenever I rotate or move my camera, the normal map changes its color. It seems like I’m missing something. I’m using the algorithm you provided in the link above from Christian Schüler.

    Is there any chance that you might share some more information on your normal map implementation? I think there might be something wrong with my normals I provide to the cotangent-function or with the view-vector.

    I really appreciate any help. Thanks in advance!

    Cheers

    • Hi!
      For your normal gbuffer, are you using world-space or view-space normals? (If the normal buffer changes color when you rotate the camera, that suggests there might be a mismatch somewhere – i.e. your decals are writing world-space normals, but your deferred lighting is expecting view-space or something)

    • In any case, here are the shader snippets for when I construct the tangent space based on the depth buffer:

      // These are hard-edged, but oriented to the scene geometry.
      float3 pixelNormal = normalize(cross(ddy(worldPosition), ddx(worldPosition)));
      float3x3 tbn = cotangent_frame(pixelNormal, worldPosition, decalTexCoord);

      // So now we can sample from the normal map, and turn it into a world normal, which we then encode.
      float3 normal = mul(normalMap.xyz, tbn);
      normal = normalize(normal); // this is the worldspace normal

      and then:

      // N is the vertex normal.
      // uv is the texcoord for the decal.
      // p is the worldspace position.
      float3x3 cotangent_frame(float3 N, float3 p, float2 uv)
      {
          // get edge vectors of the pixel triangle
          float3 dp1 = ddx(p);
          float3 dp2 = ddy(p);
          float2 duv1 = ddx(uv);
          float2 duv2 = ddy(uv);

          // solve the linear system
          float3 dp2perp = cross(dp2, N);
          float3 dp1perp = cross(N, dp1);
          float3 T = dp2perp * duv1.x + dp1perp * duv2.x;
          float3 B = dp2perp * duv1.y + dp1perp * duv2.y;

          // construct a scale-invariant frame
          float invmax = rsqrt(max(dot(T,T), dot(B,B)));
          return float3x3(T * invmax, B * invmax, N);
      }

  3. Thank you very much for your answer and your shader snippet. Your comment about view-space and world-space normals was just the right hint. I was writing world-space normals but my deferred lighting expects view-space normals.

    I think I’ve got that going now.

    But unfortunately I have another problem. The quality of the normals calculated by the line normalize(cross(ddy(wsPos), ddx(wsPos))) seems to be very bad. When I move the camera the normals are flickering and my specular mapping is flickering too. Do you have any solution for improving the quality of the normals?

    Thank you very much for your support.
    I really appreciate any help!

    Cheers

    • I haven’t noticed this before, but maybe I just haven’t used the right combination of geometry, normal map textures, etc… Can you describe a bit more what your situation is? What does your normal map look like? (what happens if you just use a smooth normal map, to validate things?) What kind of underlying geometry is it being projected on?

    • So here’s what the normals g-buffer looks like for a smooth (flat normal map) decal projected onto some cliff geometry, as seen from different angles:

      You can see there is some banding on the surface. I suspect this is due to similar depths and poor depth resolution. It’s not obvious from the picture, but the patterns change quickly with small viewing angle changes, so I suspect a strong specular lighting would “flicker” quite a bit (I don’t yet support specular in my decals)

      • Thanks again for your answer! Yeah, that’s exactly what I meant. The patterns are even worse with my implementation. My normals look almost like a noise effect since I’m using a 32 bit floating point single buffer for depth. It’s a mix of streaks and flickering points. I will post a screenshot later.

        I think the only solution to avoid this is to use a copy of the original normal buffer, but like you mentioned in your post, this comes at additional cost and with other problems, too. And rendering a 2nd normal buffer with only world normals and without normal mapping seems quite expensive for decals only.

        I could use leftover channels for tagging where decals apply and where they don’t, like you did. But then again, rendering a 4th G-buffer target is very expensive in DX9.

        Is there a way to calculate more accurate normals with linear depth maybe?

        Anyway thanks again!
        Cheers

      • As promised, here’s an image showing my problem. I’m not able to insert images here, so I’ll just drop a link here.

        The normal buffer is showing depth-normals only. I disabled normal mapping of the decals. On the right you can see the lighting buffer with specular lighting. As you can see, the quality is very bad. I don’t know what’s causing the problem. Maybe the depth normals are just too inaccurate.

        Thanks!

    • Hmm, I’m not convinced your screenshot is showing the same artifact as mine.

      I’m using 16 bit linear viewspace depth for my depth values. You’re using 32 bit floating point (and non-linear)? Like basically what would be written to the actual depth buffer?

      In any case, your normal reconstruction should be more accurate than mine, unless you’re using just a narrow band of the floating point range and losing precision. What are the typical range of depth values across your scene? Maybe you need to adjust your near/far planes.

    • So I think specular lighting might just be incompatible with this form of normal reconstruction. First of all, even if you could solve the precision issues we’re seeing, you’ll still get discontinuities at the geometry boundaries.

      Here’s a high specular intensity sphere with a plain decal projected onto it:

      In addition to the artifacts within the triangles, you can see the obvious discontinuities along the geometry edges. Any effect that changes drastically with small changes in the normal will have this.

      It’s definitely less noticeable with a strong normal map applied in the decal (since this “overwhelms” the base normal). It’s still a tad flickery when the camera moves, but possibly acceptable.

  4. I’m writing a 32-bit Single floating point depth buffer with non-linear view depth (ViewPos.z / ViewPos.w).

    I reconstruct linear depth for water and other stuff on the fly in the pixel shader where it is needed to achieve effects like soft edged water or for SSAO.

    I reconstruct the pixel’s WorldPosition this way

    float2 postProjToScreen(float4 position)
    {
    float2 screenPos = position.xy / position.w;
    return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
    }

    float2 halfPixel()
    {
    return float2(1 / ViewPortWidth, 1 / ViewPortHeight);
    }

    // Calculate UVs
    float2 UV = postProjToScreen(input.ViewPosition) + halfPixel() / 2;

    // sample the Depth from the Depthsampler
    float Depth = tex2D(SamplerDepth, UV).r;

    // Calculate Worldposition by recreating it out of the coordinates and depth-sample
    float4 ScreenPosition;
    ScreenPosition.x = UV.x * 2.0f - 1.0f;
    ScreenPosition.y = -(UV.y * 2.0f - 1.0f);
    ScreenPosition.z = Depth;
    ScreenPosition.w = 1.0f;

    // Transform position from screen space to world space
    float4 WorldPosition = mul(ScreenPosition, InverseViewProjection);
    WorldPosition.xyz /= WorldPosition.w;
    WorldPosition.w = 1.0f;

    I use this reconstruction for shadow mapping and lighting as well, and it worked fine up until now.

    The WorldPosition is then used to get the local position for the decal, to calculate the decal UVs, and to create the pixel’s normal like you pointed out above.

    // Calculate the ObjectSpace Position of the RasterizerBox
    float4 localPos = mul(WorldPosition, InverseWorld);

    // Calculate the Decal’s UV Coordinates
    float2 decalUV = localPos.xz + 0.5f;

    // Calculate the Surface Normal from depth
    float3 pixelNormal = normalize(cross(ddy(WorldPosition.xyz), ddx(WorldPosition.xyz)));

    I think there might be something wrong with the normal’s spaces again…
    Unfortunately I’m no professional, so I get confused now and then by the spaces.

    For general normal mapping I shift the normal sample from the normal map to view space. Then it is multiplied by a WorldToTangentSpace matrix, which is precalculated and passed to the shader with vertex elements. After that I encode the normal to the R and G channels of the G-buffer’s normal buffer.

    float3 NormalMap = 2.0 * (tex2D(SamplerNormalmap, input.UV.xy)) - 1.0f;
    NormalMap = normalize(mul(NormalMap, input.WorldToTangentSpace));
    output.Normal.xy = encodeNormal(NormalMap);

    For decals it’s easier to calculate this matrix on the fly. So here I’m using your method.

    float3x3 TBN = CotangentFrame(pixelNormal, WorldPosition.xyz, decalUV);
    float3 normalMap = 2.0f * tex2D(SamplerNormalmap, decalUV).xyz - 1.0f;
    float3 wsNormal = normalize(mul(normalMap.xyz, TBN));
    output.Normal.xy = encodeNormal(wsNormal);

    And I think there might be a flaw here concerning the spaces. Maybe you can spot a mistake or something. If you spend hours looking at the same code over and over again it’s easy to overlook something.

    Thanks again and I really appreciate your help!

    Cheers

  5. I think I found out what the problem is. Since I’m using XNA and DirectX 9, I’m limited to point filtering for the depth buffer when using a render target with Single format.

    The patterns on my normal G-buffer are not that different from yours, with the exception that I have a strong noise effect. This effect appears because of the point filtering. When I change my depth render target to surface format Color and use texture filter LINEAR instead of POINT, it looks similar to your implementation.

    I think the only solution for a high quality implementation is to write the normal-map-unaffected world normals to another render target in the G-buffer. Then I could use this target to sample the world normal in the decal pass. After that I’m able to use these normals for clipping the banding artifacts as well as for calculating the tangents on the fly. And the specular mapping should result in good quality, too.

    But then again, this is very expensive. But I’m in need of another channel as well.
    I will try this and have a look on the performance hit.

    Thanks for your help, you really helped me to think this through and finally get normal mapped decals to work.

    Cheers

    • Well I have no doubt that linear filtering would smooth noise, but it might also produce some strange artifacts along depth discontinuities – since it doesn’t generally make sense to filter depth values. But hey, if you got something that looks good, then I suppose that’s all that matters… but it would be good to truly understand what’s going on.

      (fyi, I’m using point sampling when I sample my depth texture).

      How did you store depth when you switched it to color? If you didn’t change your shader code, you’re now just using an 8 bit component for depth, which surely isn’t enough.

      I use two channels of an RGBA8 render target, reconstructed like this:
      depthBufferValue.r + depthBufferValue.g / 255
      And of course if you do this, you can’t apply linear filtering anymore anyway.

  6. Yeah, I think you’re right. I think I really need to work out the way shaders work and to catch up on the math that is going on. Unfortunately it’s not easy to find resources on the internet explaining this stuff in depth, especially if English is not your main language.

    I think that the engine we have now is sufficient for our first project. It’s my first time programming a game engine ever anyway and I’m not a professional. When we finish our project we’re going to drop XNA and dx9 and move on to dx11 or dx12.

    Then I really need to buy a book or something to fully understand what’s going on.

    However, I implemented the method I described above (additional render target) and it works great; it looks really nice too. So I think I got it to work properly now thanks to you. The performance hit wasn’t too bad, so I think with a little optimization and batched drawing of the decals I’m able to make them pay off the cost of the additional target.

    Finally, I stored depth the same way as I do now, so the results were very poor. 8 bits aren’t enough for storing depth properly. But it was a test anyway, since I wanted to find out if the point filtering was the culprit for the noise-like effect. And in my case it seems like this was just it.

    Thanks again for all your help and good luck to you.
    I hope I’ll read some more interesting articles in the future on your blog!

    Cheers

  7. Hey,

    I am Noah Zuo from China, I read your blog and it is really cool.

    I want to translate it into Chinese, can I have your permission please?

    Cheers

  8. Hi, I’m a little confused about how to render the decal. I’m guessing the idea, for a simple decal, is to write to the diffuse texture in the G-buffer (overriding certain pixels), but since this FBO would need to be bound for writing I can’t see how I can bind the depth texture for reading in the fragment shader. Do you use a separate FBO for the decals and mix it together later? An explanation for this step would be appreciated.

    • It’s been a while since I looked at this, but no, they’re just rendered straight to the GBuffer. The decal shader reads from the depth render target of the GBuffer (not the real depth buffer), but only writes to the normal and diffuse RT’s of the GBuffer.

        Thanks. I have that side of things working now. So far I have the decal projecting within the box bounds as expected, but I appear to be limited in which axis the decal texture is applied to. For example, it will be aligned properly to a surface in front of me but will be stretched on the other two axes, even if the decal is only applied to those surfaces.

        This is how I am getting the UVs just before I clip:

        vec2 screenPosition = InData.mWorldPositionCS.xy / InData.mWorldPositionCS.w;
        vec2 UV = 0.5 * (screenPosition + 1.0);
        float depth = texture2D(gDepthMap, UV).x * 2.0 - 1.0;
        vec4 sp = vec4(UV * 2.0 - 1.0, depth, 1.0);
        vec4 wp = gInvViewProj * sp;
        wp.xyz /= wp.w;
        wp.w = 1.0;
        wp = gInvModelMatrix * wp;

        I then (if the clip function doesn’t discard) add 0.5 to wp.xy and use that to access the decal diffuse texture, and then write that output to diffuse in the G-buffer.

        gInvViewProj = camera’s viewProjection inverted.
        gInvModelMatrix = decal box (rasterizer) model matrix inverted.
        InData.mWorldPositionCS = MVP * vertexPosition from the vertex shader.

        I’m using OpenGL.

      • This image should illustrate the problem:

        I know there are techniques to dismiss the stretching using an angle threshold, but the problem is that I don’t have any control over the projected planes. I’m guessing I am missing a vital step.

      • I don’t think I ever solved that problem robustly. I just don’t place decals in those places (or ensure the edges of the decal are transparent). It seems like you could compare the normal of the decal with the normal of the surface you’re applying it to – and fade the decal as it becomes significantly different.

  9. I ended up clipping them and will figure out a way to orient the box dynamically on the CPU end (bullet decals etc.). I used the GLSL dFdx() and dFdy() functions, which I believe are the equivalent of the HLSL ddx and ddy functions.

    Thanks for the guide. 🙂
