Water Shader part 3 – Deferred Rendering

In this post, I’ll talk about various issues I encountered when trying to fit my water shader (discussed previously in part one and part two) into the engine for my game, which uses deferred rendering. Not all of the discussion is relevant or particular to deferred rendering.

Just a pretty picture to start things off.

Overview

I’ll just quickly describe how I integrated it into the deferred engine.

We render opaque geometry to the G-buffer as usual, and then run the lighting pass into the final frame buffer. Following that, the water is rendered into the frame buffer, followed by other translucent geometry (particles, fog, precipitation, etc.).
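
Sketched at a high level, a frame looks something like this (the pass names are just illustrative, not actual functions from my engine):

```c
/* High-level frame order, as described above. The pass functions are
   hypothetical names standing in for the engine's actual passes. */
extern void render_opaque_to_gbuffer(void); /* depth, diffuse, normals, ... */
extern void run_lighting_pass(void);        /* G-buffer -> frame buffer */
extern void render_water_forward(void);     /* samples G-buffer depth/diffuse */
extern void render_translucents(void);      /* particles, fog, precipitation */

void render_frame(void)
{
    render_opaque_to_gbuffer();
    run_lighting_pass();
    render_water_forward();
    render_translucents();
}
```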

The forward-rendered water uses the depth and diffuse buffers from the G-buffer. The diffuse buffer provides the correct color to show for the water bottom, and the depth buffer is used to determine the world position of the terrain at the bottom of the water. This is necessary to calculate the amount of “accumulated water” along the eye vector (see my previous posts for an explanation). I may also end up using the normal buffer, though I don’t currently (see later in this post for an explanation).

One thing to note is that I need to do a special trick to store two separate depths in my G-buffer. Why is this? Well, the lighting passes and some of the post-lighting passes (fog, soft particles) need to know the scene depth, usually to reconstruct world position.

The issue is: are they interested in the world position on the water surface, or of the terrain at the bottom of the water? The answer is “both”!

For instance, fog and particles (most likely) want to know the surface of the water. However, the water pass itself needs to know the position of the objects/terrain underneath water in order to correctly calculate light extinction and scattering (simply using water depth – as opposed to accumulated water along the eye vector – does not produce satisfactory results).
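
To make the distinction concrete, here is a rough sketch of what “accumulated water along the eye vector” means, assuming a flat, y-up water surface (the helpers are mine, for illustration only):

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3  vsub(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float vlen(vec3 v) { return sqrtf(v.x * v.x + v.y * v.y + v.z * v.z); }

/* eye: camera position; bottom: underwater world position reconstructed
   from the G-buffer depth; water_height: height of the (flat) surface.
   Assumes the eye is above the water and the ray actually crosses it. */
static float accumulated_water(vec3 eye, vec3 bottom, float water_height)
{
    vec3 ray = vsub(bottom, eye);
    /* Find where the eye ray crosses the water surface plane. */
    float t = (water_height - eye.y) / ray.y;
    vec3 entry = { eye.x + t * ray.x, eye.y + t * ray.y, eye.z + t * ray.z };
    /* The underwater portion of the ray, not just the vertical depth. */
    return vlen(vsub(bottom, entry));
}
```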

So how can I store two depths? I have a 32-bit depth buffer, 16 bits of which are reserved for the actual depth. Another 8 bits are for an ambient occlusion term, and the final 8 bits have no purpose in my engine (yet). It turns out I’m not particularly interested in the ambient occlusion term where there is water. Thus, where there is water, I can use the other 16 bits to store the depth of the water surface.

So my G-buffer now looks like this (please excuse the crude drawing):

The “secondary depth” is denoted by D’ above.

How do I put it there? At the end of the G-buffer pass, I render the water geometry, but with color writes enabled only on the B and A channels of the Depth buffer. Then, later on, when most clients request depth from the Depth buffer, I take the max of D and D’. Special clients – such as the water shader trying to figure out accumulated water – specifically request just the primary D.
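
Here’s a minimal sketch of the packing scheme, assuming the Depth buffer is an RGBA8 target with D split across R/G and D’ across B/A (my reading of the drawing above; the real layout may differ):

```c
#include <stdint.h>

/* Pack a normalized depth in [0,1] into two 8-bit channels. */
static void pack_depth16(float depth, uint8_t *hi, uint8_t *lo)
{
    uint16_t q = (uint16_t)(depth * 65535.0f + 0.5f);
    *hi = (uint8_t)(q >> 8);
    *lo = (uint8_t)(q & 0xFF);
}

static float unpack_depth16(uint8_t hi, uint8_t lo)
{
    return (float)(((uint16_t)hi << 8) | lo) / 65535.0f;
}

/* Most clients just want "the" depth: per the scheme above, that is
   the max of the primary depth D (R/G) and the water depth D' (B/A). */
static float resolve_depth(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    float d  = unpack_depth16(r, g);  /* terrain/objects */
    float dp = unpack_depth16(b, a);  /* water surface, 0 where no water */
    return d > dp ? d : dp;
}
```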

Whew! Now onto the interesting stuff…

Lighting artifacts at the edge

Once I got it up and running, one of the first things I noticed was a bright rim around the edge of the water. What could cause this?

I use simple hemispheric ambient lighting in my game. The ambient value is modulated by the direction that the normal faces. This makes it brighter for geometry that faces the sky, and darker for geometry that faces the ground (typically you would lerp between different colors, but currently I just adjust the intensity). I was applying this correction in my water shader too. However, I was doing it blindly, based on the water normal which points to the sky.

It’s clear from the above picture that it would suddenly get brighter at the water surface, since terrain near the edge generally doesn’t face upward. To fix this, I instead use a normal that is tangent to the water surface. This of course has the opposite effect of darkening the water edge, especially when seen adjacent to terrain that faces generally upward. But this is much less objectionable. None of this is physically based anyway.
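
A sketch of the idea, with illustrative names and a plain intensity lerp standing in for whatever curve the engine actually uses:

```c
typedef struct { float x, y, z; } vec3;

/* Hemispheric ambient: brighter for normals facing the sky (+y),
   darker for normals facing the ground. n is assumed normalized. */
static float hemi_ambient_intensity(vec3 n, float sky, float ground)
{
    float up = 0.5f * (n.y + 1.0f);       /* map n.y from [-1,1] to [0,1] */
    return ground + (sky - ground) * up;
}

/* Evaluating this with the water's actual normal (straight up) made the
   edge pop bright next to terrain facing sideways. Using a horizontal,
   tangent-to-surface normal instead yields the neutral midpoint. */
static float water_ambient_intensity(float sky, float ground)
{
    vec3 tangent_normal = { 1.0f, 0.0f, 0.0f }; /* any horizontal direction */
    return hemi_ambient_intensity(tangent_normal, sky, ground);
}
```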

Here’s an example of the problem and the fix:

Once this was done, I still noticed additional obvious “white rim” problems. For example, the left shoreline here:

This was of course because I had not yet implemented shadows on the water surface. The left shoreline is shadowed, but the water edge is not. I’ll discuss shadows in more depth later in this post, but here is an example with and without shadows (this time on the right shore):

Of course, there are yet additional issues here. Have a look at the following image and note the white rim along the edge prior to shadows taking over (so it couldn’t be from the aforementioned shadow problem).

Of course, this is happening because I’ve made the correction for ambient light, but not the directional light (sunlight). I can’t do what I did for ambient light – use a normal that is tangent to the water surface – because directional light actually comes from a particular direction on the world surface plane! If I chose an arbitrary tangent vector, I would get vastly different results depending on the direction of the light.

Probably the right thing to do here is to actually apply correct lighting to the surface underwater. However, this would require an additional sample from the G-buffer, which comes with some performance penalty. I haven’t yet decided if fixing the artifact is worth the cost.
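
If I do end up fixing it, the fix would look roughly like this: fetch the underwater surface’s normal from the G-buffer (the extra sample that costs) and evaluate the sun term with it. A sketch with a hypothetical fetch helper:

```c
typedef struct { float x, y, z; } vec3;

/* Hypothetical helper standing in for a G-buffer normal texture fetch. */
extern vec3 fetch_gbuffer_normal(float u, float v);

/* to_light points toward the sun; both vectors assumed normalized. */
static float underwater_sun_term(float u, float v, vec3 to_light)
{
    vec3 n = fetch_gbuffer_normal(u, v);  /* the additional sample */
    float ndotl = n.x * to_light.x + n.y * to_light.y + n.z * to_light.z;
    return ndotl > 0.0f ? ndotl : 0.0f;   /* clamped Lambert term */
}
```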

Shadows

I didn’t really think about this until implementing the shadows to fix one of the white rim artifacts mentioned above, but you have a decision to make: shadows on the water surface, or the bottom of the water? Or both?

The more transparent the water is, the more it makes sense to do them on the bottom. But for silty water, that would look completely wrong. The correct solution is to calculate the shadows twice. Even this isn’t completely correct though, as the light hitting the bottom of the water gets there through a much more complicated path. All we’re trying to do is to approximate the process and come up with something reasonable.
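
Just to illustrate, calculating both and blending by turbidity might look like this (a sketch of one plausible combination, not what I actually do, since I settled on surface shadows as explained below):

```c
/* Evaluate the shadow term at both the surface and the bottom, then
   weight toward the surface term as the water gets siltier. Purely
   illustrative; turbidity runs from 0 (clear) to 1 (silty). */
static float water_shadow(float surface_shadow, float bottom_shadow,
                          float turbidity)
{
    return bottom_shadow + (surface_shadow - bottom_shadow) * turbidity;
}
```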

You can compare the four different scenarios here:

Bottom shadows for silty water just look horrible (even disregarding the artifacts present due to a quick and dirty implementation), and surface shadows for clear water are somewhat reasonable. So I will stick with surface shadows for now.

Refraction

I’ve come across a number of articles/posts on the internet talking about how deferred rendering makes refractive effects for water easy. In some ways it does – because we have a diffuse buffer from which to sample – but there are some big drawbacks.

You can read all about refraction here, but basically light waves change direction when they pass from air to water (or vice-versa).

When rendering the point on the water’s surface above, if we just read from the G-buffer at that screen position, we’ll get the point directly behind the water surface. However, since water has a different refractive index than air, we actually want the point determined by the refraction vector shown above.

It’s straightforward to calculate the refraction vector via Snell’s law (or a rough estimation that is more suitable for a shader). However, figuring out which point to sample from in the G-buffer is not. We could perform an expensive ray march until we find the best point, but this comes at too significant a cost in performance.

Generally what is done is that we use the incorrect point (directly behind the water surface from the viewer), but offset the texture coordinates by some amount based on the ideal refraction vector. I won’t go into details on how to do this – an internet search will yield several options (here is one).
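
A sketch of that approach, assuming a y-up water plane and an eyeballed scale factor (the constants, and the mapping from the refraction vector to a UV offset, are illustrative; real implementations account for the view transform):

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;
typedef struct { float u, v; } vec2;

/* Snell's-law refraction; I points from eye toward the surface, N points
   up out of the water, both normalized. eta = n_air / n_water. */
static vec3 refract3(vec3 I, vec3 N, float eta)
{
    float cosi = -(I.x * N.x + I.y * N.y + I.z * N.z);
    float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if (k < 0.0f) { vec3 zero = { 0 }; return zero; } /* total internal reflection */
    float t = eta * cosi - sqrtf(k);
    vec3 r = { eta * I.x + t * N.x, eta * I.y + t * N.y, eta * I.z + t * N.z };
    return r;
}

/* Offset the G-buffer sample coordinate by the refraction vector's
   horizontal components, scaled by how much water lies below. This is
   the crude screen-space hack, not a correct reprojection. */
static vec2 refracted_uv(vec2 uv, vec3 view, vec3 normal, float water_depth)
{
    const float eta   = 1.0f / 1.33f; /* air into water */
    const float scale = 0.05f;        /* tuned by eye */
    vec3 r = refract3(view, normal, eta);
    vec2 out_uv = { uv.u + r.x * water_depth * scale,
                    uv.v + r.z * water_depth * scale };
    return out_uv;
}
```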

This produces satisfactory results, save for one thing. Since it just gives us a texture coordinate for the G-buffer, we might be sampling from geometry that was rendered in front of the water. And in fact, the point we want might not be in the G-buffer (this is a common problem with screen-space algorithms of all types). If you look at the diagram above, I’ve lightly shaded (may be tough to see on some monitors) an area underwater that is invisible from the user’s perspective, where our ideal refraction sample would lie.

The artifact is fairly noticeable. Look at the area surrounding the leaves on this plant. We’re essentially seeing the plant leaves underwater!

My workaround – which looks more acceptable in a static image than when in motion – is to check the depth of our sample point (an additional texture fetch). If it’s in front of the water surface, then abandon it and sample from the original point (additional texture fetch, or dependent texture fetch). The result:
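
In sketch form, the check looks like this (fetch_depth is a hypothetical stand-in for the depth-buffer read, and depths are assumed to increase with distance):

```c
typedef struct { float u, v; } vec2;

/* Hypothetical helper standing in for a depth-buffer texture fetch. */
extern float fetch_depth(vec2 uv);

/* If the refracted sample belongs to geometry in front of the water
   surface, reject it and fall back to the unrefracted coordinate.
   Costs one extra fetch here, plus the dependent fetch that follows. */
static vec2 choose_refraction_uv(vec2 refracted, vec2 original,
                                 float water_surface_depth)
{
    if (fetch_depth(refracted) < water_surface_depth)
        return original;
    return refracted;
}
```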

It’s better, but it’s also significantly more expensive. I may make refraction effects an optional thing that can be enabled/disabled depending on performance (and given that in my game things are viewed from a fair distance above the world surface, refraction doesn’t provide a huge visual benefit anyway).

An additional problem comes when you need to sample from outside the G-buffer. The artifact you’ll notice here is streaking along the edges of your image. To fix this (I haven’t tried it yet), I think we can just gradually scale the refractive texture offset to zero near the edges of the image. This will produce some motion artifacts, but hopefully they won’t be too noticeable.
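
Something like the following, a sketch of that untried idea, where the offset is damped to zero within some margin of the screen edge:

```c
/* Returns a factor in [0,1]: 1 well inside the screen, falling to 0 at
   the edge. Multiply the refraction UV offset by this before sampling.
   The margin (in UV units, e.g. 0.1) is a made-up tunable. */
static float edge_fade(float u, float v, float margin)
{
    float du = u < 0.5f ? u : 1.0f - u;  /* distance to nearest vertical edge */
    float dv = v < 0.5f ? v : 1.0f - v;  /* distance to nearest horizontal edge */
    float d  = du < dv ? du : dv;
    float f  = d / margin;
    return f < 0.0f ? 0.0f : (f > 1.0f ? 1.0f : f);
}
```

Usage would be something like offset_uv = uv + refraction_offset * edge_fade(uv.u, uv.v, 0.1f), so samples near the border collapse back to the unrefracted point.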

Conclusion

That’s it for this post, hope you enjoyed. I suspect I’ll make at least one more post about water, as there are some additional issues I want to explore.

2 comments on “Water Shader part 3 – Deferred Rendering”

  1. To get good bottom shadows in silty water, I render the water surface as a semitransparent object on top of the frame buffer after lighting. The turbidity of the water determines the alpha, and thus shadows will be faint to nonexistent in deep water.
    The problem with that approach is, of course, refraction. At the moment I am not doing any refractions at all. But in theory they should be doable using an additional pass with the lit frame buffer as the source texture.

  2. Thanks for the comment. Yeah, I had thought about the possibility of alpha-blending my water shader over the post-lit frame buffer (currently it renders as opaque and samples from the G-buffer). I’m not sure the water shader equations I’m currently using can be expressed as an alpha-blending operation, but it’s something I may look into if I have time. Performing a regular lighting pass on the geometry underwater isn’t “physically correct”, but it may look better than what I’m doing now.

    Although then you incur double the cost of lighting/shadowing wherever there is water, while now I can avoid doing a lighting/shadowing pass on the underwater geometry (using stencil operations I suppose – haven’t implemented that yet), since it will be overwritten by the water shader.
