These are some of the recent changes and visual polish I’ve been working on (since submitting to PAX10).
- Soft particles
I finally got around to implementing soft particles. This is visual polish that avoids the ugly seams that appear where particle billboards intersect the scene geometry. There were a number of challenges for me here. I had tried this before and given up after some roadblocks I didn’t have the skills to solve at the time.
The main problem was the construction of the depth buffer. If you don’t get this exactly right, things won’t work, and only now am I comfortable enough with PIX to debug some of the trickier problems. One mistake I see many people make (looking at online tutorials) is using a depth calculation that passes (position.z / position.w) from the vertex shader to the pixel shader. Because you are dividing by w, this value will not be interpolated properly from vertex to vertex. This only shows up if you have triangles of varying size (as I do, for example, on my floor – which is one big quad – and the walls, which are much smaller): the depth value will not be continuous where touching geometry meets. The shadow mapping sample on the official XNA site is guilty of this, but of course it doesn’t show up as a problem with the scene objects they use. The solution is not to divide by w: leave it as z, or divide by a constant if you want to keep the output value between 0 and 1 for more floating-point accuracy.
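A minimal sketch of what I mean, with illustrative parameter names (`FarClip` as the constant divisor is an assumption, not necessarily what any particular sample uses):

```hlsl
// Hypothetical depth-pass shaders; parameter names are illustrative.
float4x4 WorldViewProjection;
float FarClip; // constant divisor, keeps the output in [0, 1]

struct DepthVSOut
{
    float4 Position : POSITION;
    float  Depth    : TEXCOORD0;
};

DepthVSOut DepthVS(float4 pos : POSITION)
{
    DepthVSOut o;
    o.Position = mul(pos, WorldViewProjection);
    // Pass z divided by a CONSTANT, not by w: a per-vertex w divide
    // breaks interpolation across triangles of different sizes.
    o.Depth = o.Position.z / FarClip;
    return o;
}

float4 DepthPS(DepthVSOut i) : COLOR0
{
    return float4(i.Depth, 0, 0, 1);
}
```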
Another problem I encountered only on the Xbox 360 was discontinuities in the depth buffer (I was using a 32-bit floating-point buffer): strange gaps in the (presumably interpolated) depth values. With some tests, I found they were occurring at “even” numbers, such as 0.5 or 0.0625. I was finally able to repro the problem just by drawing a simple gradient to the buffer. It turns out the culprit was using something other than point sampling when reading the depth buffer. Anything else (on the Xbox) will cause issues with floating-point surfaces.
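In effect-file terms, the fix is just forcing point sampling on the sampler that reads the depth texture (sampler and texture names here are illustrative):

```hlsl
// The depth texture must be read with point sampling on the Xbox 360;
// any filtering of a 32-bit float surface produced the gaps described above.
texture DepthTexture;
sampler DepthSampler = sampler_state
{
    Texture   = <DepthTexture>;
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = Point;
    AddressU  = Clamp;
    AddressV  = Clamp;
};
```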
With those problems solved, I figured I would first try just drawing a quad to the screen using the “soft particle” technique. The vortex was the obvious choice, because it looked terrible where it intersected the tanks (the vortex *is* a particle system, but by the time it is drawn to the screen it has already been rendered to a texture, so at that point it no longer is one). This worked well, so I followed up by converting the particle systems too. Things were working great!
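The core of the technique fits in a few lines of pixel shader: compare the particle’s depth against the scene depth behind it, and fade the alpha out as they converge instead of letting the billboard clip hard. This is a sketch under my own naming assumptions (`FadeDistance`, the samplers, and the depth encoding are all illustrative):

```hlsl
// Soft-particle fade sketch; names and depth encoding are assumptions.
sampler ParticleSampler;  // the particle's own texture
sampler DepthSampler;     // scene depth, point-sampled (see above)
float FadeDistance;       // depth range over which the particle fades out

float4 SoftParticlePS(float2 uv            : TEXCOORD0,
                      float2 screenUV      : TEXCOORD1,
                      float  particleDepth : TEXCOORD2) : COLOR0
{
    float4 color = tex2D(ParticleSampler, uv);
    float sceneDepth = tex2D(DepthSampler, screenUV).r;

    // Alpha goes to zero as the billboard approaches the geometry behind it,
    // replacing the hard seam with a gradual fade.
    float fade = saturate((sceneDepth - particleDepth) / FadeDistance);
    color.a *= fade;
    return color;
}
```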
… until I tried running it on the Xbox. No dice. Eventually I observed that things worked fine on the top half of the screen, but not in the bottom half. Uh-oh: predicated tiling.
Eventually I realized it was the convenient VPOS semantic I was using. This gives you the screen position in the pixel shader, which I use to look up the correct spot in the depth texture. VPOS apparently “resets” for each tile, and since the XNA framework handles the tiling for you, there is no way to know which tile you’re rendering from within the pixel shader. No problem – I can calculate the screen position in the vertex shader and pass it to the pixel shader in a TEXCOORD.
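The workaround amounts to passing the clip-space position along and doing the divide yourself (a sketch; a half-pixel offset may also be needed on D3D9-class hardware, which I’m omitting here):

```hlsl
// Deriving screen-space UVs from clip-space position, instead of VPOS
// (which resets per tile under predicated tiling).
struct QuadVSOut
{
    float4 Position : POSITION;
    float4 ClipPos  : TEXCOORD1; // copy of the clip-space position
};

QuadVSOut QuadVS(float4 pos : POSITION)
{
    QuadVSOut o;
    o.Position = mul(pos, WorldViewProjection);
    o.ClipPos  = o.Position;
    return o;
}

float2 ScreenUV(float4 clipPos)
{
    // Perspective divide in the pixel shader, then map [-1,1] NDC to [0,1] UV
    // (y is flipped, since UV space runs top-down).
    float2 ndc = clipPos.xy / clipPos.w;
    return float2(ndc.x * 0.5f + 0.5f, -ndc.y * 0.5f + 0.5f);
}
```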
Except – I’m using point sprites for the particle systems. That workaround is fine for a quad (the vortex), but won’t work for point sprites (since each is a single point, and there is nothing to interpolate). I certainly didn’t want to rewrite the complex particle system to use quads.
Eventually I realized I could reconstruct the screen position using: 1) the screen position of the sprite’s center, 2) the current texture coordinate, and 3) the pixel size of the sprite. And so that’s what I did, and so far it seems to work. Since TEXCOORDs don’t work for point sprites, I had to use a COLOR semantic to pass all these values to the pixel shader. COLOR semantics apparently have only 8-bit precision on many PC graphics cards (insufficient for my purposes here), but luckily on the Xbox they have greater precision (likely 32 bits).
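The reconstruction looks roughly like this (a sketch of the idea; the packing layout and names are my own here, not necessarily the shipping code – the one texture coordinate a point sprite *does* get is the hardware-generated 0..1 coordinate across the sprite, which is what makes this work):

```hlsl
// Reconstructing a point-sprite fragment's screen position.
// The vertex shader packs the sprite center's screen UV and its on-screen
// size into a COLOR interpolator, since custom TEXCOORDs are unavailable.
float2 ViewportSize; // in pixels

float2 SpriteScreenUV(float4 packed   : COLOR1,    // xy = center UV, z = size in pixels
                      float2 spriteUV : TEXCOORD0) // hardware-generated, 0..1 across the sprite
{
    float2 centerUV   = packed.xy;
    float  sizePixels = packed.z;
    // spriteUV runs 0..1 across the sprite, so (spriteUV - 0.5) * size
    // is the fragment's pixel offset from the sprite's center.
    float2 offsetPixels = (spriteUV - 0.5f) * sizePixels;
    return centerUV + offsetPixels / ViewportSize;
}
```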
One additional hurdle: I was hoping to use MRT (multiple render targets) to help render the depth texture, so I wouldn’t need to render the scene geometry twice. This worked fine on the PC, but again – not on the Xbox. I needed to switch out the depth render target (at index 1) so I could read from it, while keeping the main render target (index 0) to continue rendering to. However, the contents of the main render target are lost when any other render target is removed. This is due to the nature of the Xbox’s 10MB of EDRAM and predicated tiling, and it is obvious why it has to happen if you think about it.
So in the end I had to go with a separate rendering pass. I’m still getting 60 FPS at 1080p, so so far so good.
This post is too long already, so I will discuss HDR in another post.