The USS Quad Damage

Hardware Tessellation

I discover that hardware tessellation isn't really a feature, but isn't not a feature either.

That's why they're artists and you're a programmer

You may have heard the term hardware tessellation bandied about recently, and wondered what it was, and how it was revolutionary. The simple answer is, it isn’t. All it does is take older technology and make it slightly more useful.

Tessellation, in concept, is really simple. You take a mesh, and you tessellate it to make a finer mesh: more, smaller triangles covering the same surface. You should know about this from maths class in high school. One could tessellate like this:

  1. Pick a large triangle.
  2. Choose the largest edge of the triangle.
  3. Bifurcate the edge.
  4. Voilà! Two triangles.
  5. Repeat until bored.
  6. ...
  7. Profit.
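The steps above can be sketched in a few lines. This is an illustration, not production code; the function names and the list-of-tuples mesh representation are made up for the example:

```python
# Split triangles by bisecting the longest edge, as in the steps above.
# Triangles are tuples of 2D points; all names here are invented.

def edge_length_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def bisect(tri):
    """Steps 1-4: find the longest edge, split it, get two triangles."""
    a, b, c = tri
    edges = [((a, b), c), ((b, c), a), ((c, a), b)]
    (p, q), opposite = max(edges, key=lambda e: edge_length_sq(*e[0]))
    m = midpoint(p, q)
    return [(p, m, opposite), (m, q, opposite)]

def tessellate(tris, passes):
    """Step 5: repeat until bored (or until `passes` runs out)."""
    for _ in range(passes):
        tris = [half for tri in tris for half in bisect(tri)]
    return tris
```

Each pass doubles the triangle count without changing the shape at all, which is exactly the problem.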

Why would you do this? The answer is, by itself, it’s a waste of time. After all, drawing 4 triangles is the same as drawing 1 triangle.

<clarkson>Except it isn’t.</clarkson>

To explain why, a little history is required. However, before we get to that, I’m going to talk about software tessellation for a bit.

You tend to do “tessellation” in software at times to implement different mesh levels of detail (LOD). Meshes far away can have a lower LOD and meshes closer need a greater LOD. This can be done by having multiple meshes (so, a soldier could have 4 meshes depending on how far he is), and there are techniques for “smoothing” these meshes out so there’s no mesh “popping” (which used to happen with older games). Alternately, you could use some sort of software tessellation. More on this later.
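A minimal sketch of the multiple-meshes approach, with invented distance cutoffs and mesh names:

```python
# Pick one of several artist-made meshes by distance to the camera.
# The cutoffs and mesh names are invented for illustration.

SOLDIER_LODS = [
    (10.0, "soldier_high"),        # closer than 10 units: full detail
    (30.0, "soldier_medium"),
    (60.0, "soldier_low"),
    (float("inf"), "soldier_billboard"),
]

def pick_lod(distance):
    """Return the first mesh whose cutoff the soldier falls within."""
    for cutoff, mesh in SOLDIER_LODS:
        if distance < cutoff:
            return mesh
```

The hard part isn’t the selection; it’s hiding the switch so the mesh doesn’t visibly “pop” from one LOD to the next.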

A long time ago, ATI came up with hardware tessellation on the card which didn’t require software support (the feature was called TruForm, if memory serves). The “sell” was that you’d buy one of these cards and your existing games would look better than if you got an NVidia card. What it would do is use a vertex’s normal (a vertex has a position, as well as a normal, which is a direction pointing away from the surface at that vertex) to guess where the tessellated vertices should go. The upside of this was that it made the meshes look really smooth.

The downside was that it made the meshes look really smooth.

If you’ve ever played on a large HD screen using Snes9x with the best pixel smoothing algorithm turned on, you’ll know what I mean. The screen still looks like balls, but now it looks blurry too. In the mesh case, the meshes looked smooth even when they weren’t meant to. Sharp edges got rounded off. It was basically a dumb gimmick.
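To see how a card can smooth a mesh from positions and normals alone, here’s a rough sketch of placing a new edge midpoint using only the two endpoint vertices. It follows the cubic “curved PN triangles” construction; whether ATI’s hardware did exactly this is an assumption, not a claim:

```python
# Place a new vertex on the curved edge implied by two endpoint
# normals. Follows the PN-triangles cubic midpoint; whether the
# hardware used exactly this formula is an assumption.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def smooth_midpoint(p0, n0, p1, n1):
    # How much each endpoint normal "disagrees" with the straight edge.
    w01 = dot([b - a for a, b in zip(p0, p1)], n0)
    w10 = dot([a - b for a, b in zip(p0, p1)], n1)
    # Straight midpoint, pulled off the line toward the implied curve.
    return [(a + b) / 2 - (w01 * x + w10 * y) / 8
            for a, b, x, y in zip(p0, p1, n0, n1)]
```

If both normals are perpendicular to the edge, the weights are zero and the midpoint stays put: flat stays flat. Tilt the normals outward, as on a low-poly sphere, and every new vertex bulges toward the sphere, including across edges the artist meant to keep sharp.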

Another part of it was that 3D artists put in a lot of effort to minimise the number of vertices on their meshes and still get them looking nice. Those vertices are golden. Messing with them always makes ’em look worse. That’s why they’re artists and you’re a programmer.

Later, games like Doom 3 introduced dynamic bump mapping. Bump mapping is basically a technique where you light a surface as if it were really high resolution. You do this by using the bump map to perturb, per pixel, the normals used in the lighting calculation. The nice part of this was that you were effectively just texturing the mesh like you normally do, but now it looked like a really high res mesh.
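A hedged sketch of the trick using simple Lambert diffuse lighting; the normal-map sample here is invented:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def lambert(normal, light_dir):
    """Diffuse intensity: max(N . L, 0)."""
    return max(0.0, sum(a * b for a, b in zip(normal, light_dir)))

# A flat wall facing +z, lit from slightly off to the side.
face_normal = [0.0, 0.0, 1.0]
light = normalize([0.3, 0.0, 1.0])

# Without bump mapping, every pixel of the wall gets this intensity.
plain = lambert(face_normal, light)

# With it, each pixel substitutes a normal read from the bump map,
# so a flat brick wall shades as if the bricks stuck out.
texel_normal = normalize([0.3, 0.0, 1.0])   # invented sample from the map
bumped = lambert(texel_normal, light)
```

The geometry never changes, only the N in N·L. That’s the whole trick, and also the reason it falls apart when you stop looking at it head on.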

At least, as long as you were looking at it head on (or close). When you looked at a wall you were all “man those bricks are really coming out at me”. When you looked at the wall from an angle, it was easy to tell that it was all a trick. The wall isn’t coming out at all, it’s just lit as if it were.

The solution is displacement mapping, where you actually displace the mesh instead of pretending to displace it. This technique has been around for as long as vertex shaders have. It’s occasionally been used by some games — you keep a displacement map and manually (i.e. in software) tessellate when objects get closer.
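A minimal sketch of the displacement step itself, assuming a height sample is already available per vertex (the mesh, normals, and scale are invented):

```python
# Actually move each vertex along its normal by the value sampled
# from the displacement map. All data here is invented.

def displace(positions, normals, heights, scale=0.1):
    """Push each vertex out along its normal by its sampled height."""
    return [
        [p + scale * h * n for p, n in zip(pos, nrm)]
        for pos, nrm, h in zip(positions, normals, heights)
    ]

# A flat quad facing +z, one displacement sample per vertex.
quad = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 0.0]]
quad_normals = [[0.0, 0.0, 1.0]] * 4
samples = [0.0, 1.0, 0.5, 0.0]   # pretend reads from the height map

moved = displace(quad, quad_normals, samples)
```

Unlike bump mapping, the silhouette changes: look at this quad edge-on and the bumps are still there. The catch is that you need enough vertices to displace, which is exactly where tessellation comes back in.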

But often, there’s a question of “why would you do this?” Why not just bake the displacement into a higher LOD mesh, instead of telling the card about a flat mesh and then adding bumps? Now that the tessellation is in hardware, there’s finally an answer: you don’t need to ship all that tessellated (and, as it turns out, redundant) data to the card. You send the flat mesh and the displacement map, and the card generates the detail itself.

In conclusion, then, what hardware tessellation does is make displacement mapping more useful. It’s not a feature in itself, and it’s not something you couldn’t do anyway. It doesn’t need DX11, and the technology has been around since I was in high school.