
What is a normal map? An object normal map?

Normal maps preserve the lighting fidelity of a high polygon count model when it is reduced to a lower polygon count. Model designers tend to work with models that are far too detailed to be handled in typical real-time applications, like games. The models are reduced from this high count to a low count, producing a normal map in the process. This map is used during rendering to help recreate the details of the original model.

This article is based on how I use normal maps in my game Radial Blitz, a high-paced 3D reaction game for iPad and iPhone.

Normals

Lighting plays a vital role in the rendering of a mesh (the vertices that define the shape of the model). One of the key variables in lighting is the face normal: the direction the triangle is facing. This is combined with the light sources to determine how much the individual pixels are illuminated.
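How a normal feeds into lighting can be sketched with a Lambertian diffuse term, one common way of combining the normal with a light source. This is a minimal Python illustration of the idea; the function names are mine, not from the game's code:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def lambert(face_normal, light_dir):
    # Diffuse intensity: how directly the surface points at the light,
    # clamped so faces pointing away receive no diffuse light.
    return max(0.0, dot(normalize(face_normal), normalize(light_dir)))

# A face pointing straight up, lit from directly above: full intensity.
print(lambert((0, 1, 0), (0, 1, 0)))   # 1.0
# The same face lit from the side: no diffuse contribution.
print(lambert((0, 1, 0), (1, 0, 0)))   # 0.0
```

Every pixel covered by the face uses whatever normal it is given, which is why the choice of normal matters so much below.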

In a very high-poly mesh it’s sufficient to use a single normal per face, since each triangle may only cover a few pixels. With a low-poly model, however, each triangle covers a large number of pixels. If we used a single normal for the entire surface the result would have hard edges: we’d clearly see the polygon construction of the object and each face would look quite flat.

Most meshes actually use a different normal for each vertex, in which case the pixels use an interpolated value. The principle above still applies.
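That per-vertex interpolation can be sketched as a barycentric blend of the three vertex normals, renormalized for each pixel. A minimal Python sketch with hypothetical names:

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def interp_normal(n0, n1, n2, u, v):
    # (u, v, 1-u-v) are the pixel's barycentric coordinates in the
    # triangle; blend the three vertex normals, then renormalize,
    # since a blend of unit vectors is generally shorter than unit length.
    w = 1.0 - u - v
    blended = tuple(u * a + v * b + w * c for a, b, c in zip(n0, n1, n2))
    return normalize(blended)

# Halfway along the edge between two vertices with perpendicular normals:
n = interp_normal((1, 0, 0), (0, 1, 0), (0, 0, 1), 0.5, 0.5)
# roughly (0.707, 0.707, 0.0)
```

This smooths the shading, but the normal still only varies linearly across each triangle, which is the limitation the normal map removes.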

This is where a normal map comes in. Instead of using the same normal for the whole face, we use a per-pixel normal. It’s easy enough to encode such information in an image, just like any other texture map. But where do the values for this image come from?

[Image: tunnel segments normal map]
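Encoding a normal in an image is typically done by remapping each component from the [-1, 1] range into the [0, 255] range of an RGB channel. A small Python sketch of one common scheme (the exact encoding a given engine uses may differ):

```python
def encode(normal):
    # Map each component of a unit normal from [-1, 1] to a byte [0, 255].
    return tuple(round((c * 0.5 + 0.5) * 255) for c in normal)

def decode(rgb):
    # Recover the approximate normal from the stored bytes.
    return tuple(c / 255 * 2 - 1 for c in rgb)

# A normal pointing straight out of the surface, (0, 0, 1), stores as
# (128, 128, 255) -- the light-blue tint typical of tangent-space maps.
print(encode((0.0, 0.0, 1.0)))
```

The quantization to bytes loses a little precision, which is why decoded normals are usually renormalized in the shader.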

Calculating a normal map

The high-poly mesh has a distinct normal per face. When it’s reduced to a low-poly mesh those normals can be burned into a map on the new surface.

Let’s use a 2D line segment as an example. The top is the high-poly segment, and on the bottom it has been reduced to a single segment (one face). Notice how the per-face normals from the top have been transcribed to positions along the reduced model. Though we’ve lost the faces of the high-poly model, we’ve retained their normals. This allows the scene lighting to treat the surface as though it were still composed of many faces instead of just one.
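The baking step for this 1D example can be sketched as: for each texel position along the reduced segment, look up which high-poly face covered that position and store its normal. A toy Python version (real bakers ray-cast from the low-poly surface to the high-poly one; this only illustrates the transcription):

```python
def bake_normals(high_faces, texels):
    # high_faces: list of (start_x, end_x, normal) for the original segments.
    # texels: x positions sampled along the single reduced segment.
    # For each texel, record the normal of the face that covers it.
    baked = []
    for x in texels:
        for start, end, normal in high_faces:
            if start <= x <= end:
                baked.append(normal)
                break
    return baked

# Three tiny faces with different slopes, flattened onto one segment.
high = [(0.0, 1.0, (0.0, 1.0)),
        (1.0, 2.0, (0.7, 0.7)),
        (2.0, 3.0, (-0.7, 0.7))]
print(bake_normals(high, [0.5, 1.5, 2.5]))
# [(0.0, 1.0), (0.7, 0.7), (-0.7, 0.7)]
```

The reduced segment is flat, but the baked map still records three distinct directions along it.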

The tangents

The values in the normal map are relative to the face they occupy. The world normal of each pixel must be calculated to do the lighting. This involves transforming the normal vector by the tangent matrix.

    public float3 Normal: Transform(Normal, Tangent3x3);

That seems simple, except that Tangent3x3 is a rather complicated construction.

    public float3 WorldNormal: Normalize(pixel Transform(VertexNormal, WorldRotation));
    public float3 WorldTangent: Normalize(pixel Transform(VertexTangent, WorldRotation));
    public float3 WorldBitangent: Normalize(pixel Transform(VertexBitangent, WorldRotation));
    float3x3 Tangent3x3: float3x3(WorldTangent, WorldBitangent, WorldNormal);

Each vertex of a face has three attributes: VertexNormal, VertexTangent, and VertexBitangent. These are stored with the mesh as vertex attributes. The VertexNormal is the easiest to understand: it says which direction the vertex is pointing. This is relative to the object, thus we rotate it by the WorldRotation.

At some point in the history of computer graphics the Bitangent became known as the Binormal. You’ll very frequently see the term Binormal used instead of Bitangent. There is actually a difference, but so long as you are using the right formula in rendering, it won’t matter which term you choose.

VertexTangent and VertexBitangent are a bit more complicated. In order to define a 3D space we need three vectors, one for each of the X, Y, and Z axes. The VertexNormal defines only one of these axes; the VertexTangent and VertexBitangent define the others. Together they complete the Tangent3x3 matrix, which transforms a vector from the local face space to world space.
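What the Tangent3x3 multiplication does can be mimicked in plain Python: treat the world-space tangent, bitangent, and normal as the rows of a matrix and transform the sampled map normal by it. A sketch of the math, not the engine’s actual shading language:

```python
def transform(v, m):
    # Multiply row-vector v by a 3x3 matrix m given as a tuple of rows.
    return tuple(sum(v[i] * m[i][j] for i in range(3)) for j in range(3))

def tangent_to_world(map_normal, tangent, bitangent, normal):
    # Rows of the matrix are the world-space tangent-frame vectors, so a
    # tangent-space (x, y, z) becomes x*T + y*B + z*N in world space.
    tbn = (tangent, bitangent, normal)
    return transform(map_normal, tbn)

# With an identity frame the map normal passes through unchanged.
n = tangent_to_world((0.0, 0.0, 1.0),
                     (1.0, 0.0, 0.0),   # world tangent
                     (0.0, 1.0, 0.0),   # world bitangent
                     (0.0, 0.0, 1.0))   # world normal
```

The z component of the map value rides along the surface normal, which is why a flat patch of normal map leaves the lighting unchanged.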

Typically the vectors of a 3-space are orthogonal, forming a Euclidean space. Knowing this, the tangent and bitangent could actually be calculated from the normal (well, mostly: there are a few missing details). There is however no need for a model to export an orthogonal basis. If the source model is curved, the vertices may be better represented by non-orthogonal axes.
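The “calculated from the normal” shortcut can be sketched by crossing the normal with an arbitrary helper axis. One of the missing details is visible right in the code: the tangent’s rotation about the normal is arbitrary, which is one reason meshes export explicit tangents instead. A hypothetical Python sketch:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def frame_from_normal(normal):
    # Build an orthonormal tangent and bitangent from the normal alone.
    # The helper axis is arbitrary (chosen only to avoid being parallel
    # to the normal), so the resulting frame's orientation around the
    # normal is arbitrary too.
    helper = (0.0, 1.0, 0.0) if abs(normal[0]) > 0.9 else (1.0, 0.0, 0.0)
    tangent = normalize(cross(helper, normal))
    bitangent = cross(normal, tangent)
    return tangent, bitangent
```

A normal map authored against the mesh’s real UV-aligned tangents would shade incorrectly with such an arbitrary frame, hence the explicit vertex attributes.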

An object normal map

The calculation of the Tangent3x3 can be quite expensive on a mobile device, the target of my Radial Blitz game, whereas on a desktop this cost is not really an issue. To reduce this cost many of our models use “object normal maps” instead of regular “normal maps”.

In a normal map (for contrast we’ll call it a face normal map from now on) the normal vector is encoded relative to the polygon face. In an object normal map the normal vector is encoded relative to the center of the model.

[Image: common target object normal map]

Note how the object normal map, when displayed as an image, has more colors than the face normal map we showed earlier. This is because the vectors here point in all directions. The face normal map has limited variation since the face normals point roughly away from their face, not in every direction.

This greatly reduces the calculation cost of the per-pixel normal. There is no need to calculate a tangent matrix.

    Normal: Vector.Transform( ObjectNormal, Rotation );
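In plain Python terms, the per-pixel work collapses to a single transform by the model’s world rotation; no per-vertex tangent frame is built or interpolated. A sketch, assuming a row-vector convention where the matrix rows are the rotated object axes:

```python
def rotate(v, m):
    # Apply the model's world rotation (3x3, rows = rotated object axes).
    return tuple(sum(v[i] * m[i][j] for i in range(3)) for j in range(3))

# A 90-degree rotation about Y: the model's +Z direction now points
# along world +X.
rot_y_90 = ((0.0, 0.0, -1.0),
            (0.0, 1.0, 0.0),
            (1.0, 0.0, 0.0))

# An object-space normal that pointed along +Z follows the model around.
n = rotate((0.0, 0.0, 1.0), rot_y_90)   # (1.0, 0.0, 0.0)
```

Compared with the tangent-matrix path, this saves three per-pixel vector rotations, three normalizations, and the matrix construction.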

This major efficiency boost comes with a couple of extreme limitations however:

• The model must be static. Since the normals are relative to the center of the model, the polygon faces can never change the direction they face. This precludes their use with animated models, like the skeletal models used for game characters. The main objects in Radial Blitz aren’t animated, so an object normal map works fine.
• The texture must be hard-baked to one model. There is no ability to reuse the texture across multiple models, or for dynamically constructed meshes. In Radial Blitz this is an issue with the tunnel itself, which is a dynamic mesh using stock textures. These textures thus use a face normal map, not an object normal map.

Radial Blitz uses object normal maps for the modelled targets and uses face normal maps for the dynamically constructed objects, like the tunnel, and several small objects like the rocks. The use of object maps definitely made a difference in frame rate, but I’ve unfortunately lost the exact numbers.

2 replies »

1. Tim Stoev says:

> The model must be static. Since the normals are relative to the center of the model, the polygon faces can never change the direction they face. This precludes their use with animated models, like skeletal models used in game characters. The main objects in Radial Blitz aren’t animated so an object normal map works fine.

What if we take the animated model (a complex one) and reduce it down to a set of static models? If we take a human body, for example, we re-define the model of the whole body as a mathematical system that sets constraints for the joints, and use the static models (defined around their center of gravity, for example) to compose the entity that will be the body.
We will have to do some more work around the joints themselves, but since a joint is a logical center point that links two static models we should be able to do some magic.
Going that way we end up having three types of models for one entity:
1. mechanical model: a master entity composed from static elements, joints, and constraints. Can also expose parameters that enable generation of multiple similar entities of the type it describes (a human body in the example)
2. component models: static models, can be described by parameters once published to the entity. Can be linked with joints (defined as an interface in the mechanical model and then registered as instances there)
3. joint models: semi-static models, allow additional rendering around two different components. Accept “commands” from the entity (an instance of the type in a functional environment). Each command goes through the mechanical model where the constraints are applied.

Depending on the complexity of the mechanical model one can implement a state machine to deal with constraints: for example, if a foot moves forward to the point where a particular constraint is reached, the body changes positions for the related joints.

• mortoray says:

This can be done, but it may not be worth the effort. If you split up the model you’ll greatly increase the number of objects that need to be drawn, which will hurt performance significantly on many devices (especially mobile). It is nonetheless how some of the bosses are done in the game: several objects orbiting a larger one.

There are lots of ways to do skeletal models, though I must admit I’ve not worked much with them. I guess technically some of my groups in Radial Blitz could be called skeletons, but it’s a bit of a stretch.