A (probably final) quick post about this. The previous post is here.
I’ve been finishing up my new terrain implementation (including using a scrolling terrain grid, instead of the chunk-based mechanism I used before).
I’ve made one more visual improvement regarding the normals. I already align height along the shape of the terrain by interpolating between the two most similar heights, instead of all four adjacent heightmap samples, as explained in the last post; this mimics the quad re-orientation of my original implementation. Now I’m doing the same with normals too. This results in a significant improvement, especially in pathological cases (extreme changes in terrain height over small distances). So we are essentially doing the quad re-orientation for both height and normals separately (possible because of the way I triangulate the terrain).
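For concreteness, here is a minimal CPU-side sketch of that interpolation as I understand it (the real version lives in the vertex shader, and all names here are mine): pick the quad diagonal whose endpoint heights are most similar, then interpolate within whichever triangle of that split contains the sample point. The same per-quad selection can be applied to each normal component.

```python
def reoriented_sample(h00, h10, h01, h11, fx, fz):
    """Interpolate a height inside a heightmap quad, mimicking quad
    re-orientation: choose the diagonal whose endpoints are most
    similar in height, then interpolate within the containing
    triangle. fx, fz are the fractional position in [0, 1]."""
    if abs(h00 - h11) <= abs(h10 - h01):
        # Diagonal runs from corner 00 to corner 11.
        if fx >= fz:
            return h00 + fx * (h10 - h00) + fz * (h11 - h10)
        return h00 + fz * (h01 - h00) + fx * (h11 - h01)
    # Diagonal runs from corner 10 to corner 01.
    if fx + fz <= 1.0:
        return h00 + fx * (h10 - h00) + fz * (h01 - h00)
    return h11 + (1 - fx) * (h01 - h11) + (1 - fz) * (h10 - h11)
```

On a “saddle” quad (two opposite corners high, two low), plain bilinear filtering would put the centre at the average, while this follows the chosen diagonal, which is exactly what suppresses the jaggies on steep banks.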
As a result, the “jaggies” I mentioned in my previous post are nearly eliminated. Here’s an example from a river bank:
It’s easier to see the problem looking at the normal G-buffer, in the following pathological case:
Another scene from last time, with jaggies fixed:
Doing this costs me about 4 vertex shader instructions each, for the height and for the normal.
Compared with my original implementation, this is much cheaper on the CPU (preparing the terrain VBOs when new chunks were loaded caused a noticeable hitch on the Xbox), but is slightly more expensive on the GPU. I have quadruple the number of vertices, and the vertex shader makes 5 texture samples where before it made none.
Now that I’m using a scrolling terrain grid, however, I can reduce the number of vertices I need. In my original implementation, I would render between 4 and 9 chunks of 2048 triangles each, so roughly between 8000 and 18000 triangles.
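I haven’t described the scrolling mechanism itself, but the usual way to implement a camera-following grid (and an assumption about how mine would work, not a quote from my code) is to snap the grid’s world-space origin to the heightmap sample spacing, so vertices always land on the same world positions as the camera moves and the terrain doesn’t “swim”. A hypothetical sketch:

```python
import math

def grid_origin(cam_x, cam_z, spacing):
    # Snap the camera position down to a multiple of the heightmap
    # sample spacing. The grid then scrolls in whole-texel steps,
    # so a vertex at grid cell (i, j) always samples the same texel
    # until the camera crosses into the next cell.
    return (math.floor(cam_x / spacing) * spacing,
            math.floor(cam_z / spacing) * spacing)
```

A vertex at cell (i, j) then sits at that origin plus (i, j) times the spacing, and the vertex shader fetches the height (and its neighbours, for the re-orientation) at that world position.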
With the new implementation I render a 48×48 grid, “doubly tessellated”, so also roughly 18000 triangles.
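As a sanity check on those figures (under my reading of “doubly tessellated” as each grid cell being split once along each axis, which is an assumption):

```python
# Old: up to 9 chunks of 2048 triangles each.
old_tris = 9 * 2048

# New: a 48x48 grid of cells, 2 triangles per cell, x4 for the
# extra tessellation level (one split along each axis per cell).
new_tris = 48 * 48 * 2 * 4

print(old_tris, new_tris)  # both 18432, i.e. "roughly 18000"
```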
I made some haphazard performance measurements, and got an increased cost of about 0.4ms per frame on the PC, and 0.7ms on the Xbox. I’m suspicious of those measurements though, because if I disable the terrain pixel shader completely (by having it output a solid color), the difference on the Xbox is only 0.2ms. That doesn’t really make sense, since the pixel shader shouldn’t have much effect on the performance comparison – so I’ll just chalk it up to complicated GPU behavior!
I may make one more post about this, going into more detail on how I set up my vertices and on the vertex shader implementation.