Perspective Corrected Textures
When applying textures to polygons with linear texture mapping we get distortions, especially when a polygon is tilted so that one edge is close to the observer and the other far away. A good example is the walls of a tunnel: they appear to bend outwards:
This is caused by distorting the texture uniformly in the depth direction, without taking into account whether the distorted part of the texture is nearer to or farther from the observer. The next picture illustrates this underlying problem:
What can we do about this problem? For each pixel of the polygon to be painted we have to determine the perspectively correct texture coordinates. We are given the texture and screen coordinates of the polygon vertices. The relationship between screen coordinates and depth is a hyperbola, with depth going to infinity when approaching the vanishing point. To determine the texture coordinates of a pixel we divide the texture coordinates by the depth at each vertex, interpolate these quotients linearly along the edges and scanlines, and for each pixel multiply them by the pixel's depth value to obtain its texture coordinates.
How do we get the depth value of each pixel? Depth varies hyperbolically with the screen coordinates, going to infinity when approaching the vanishing point, which means that the reciprocal of depth varies linearly. Therefore we calculate the reciprocal of depth at each vertex, again interpolate this value along the edges and scanlines, and recover the depth of each pixel by taking the reciprocal again. The next picture summarizes the technique:
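As a sketch, the whole per-pixel procedure for one scanline might look like this in C (all names are illustrative; u and v are the texture coordinates, z is depth):

```c
/* Perspective-correct texturing of one scanline.
   At both ends we are given texture coordinates (u, v) and depth z.
   We interpolate u/z, v/z and 1/z linearly across the scanline and
   recover u and v per pixel by dividing by the interpolated 1/z. */
void textured_scanline(float u0, float v0, float z0,
                       float u1, float v1, float z1,
                       int len, float *out_u, float *out_v)
{
    float uoz = u0 / z0, voz = v0 / z0, ooz = 1.0f / z0; /* u/z, v/z, 1/z */
    float duoz = (u1 / z1 - uoz) / len;                  /* per-pixel steps */
    float dvoz = (v1 / z1 - voz) / len;
    float dooz = (1.0f / z1 - ooz) / len;

    for (int x = 0; x < len; ++x) {
        float z = 1.0f / ooz;   /* one division per pixel            */
        out_u[x] = uoz * z;     /* two multiplications give u and v  */
        out_v[x] = voz * z;
        uoz += duoz; voz += dvoz; ooz += dooz;
    }
}
```

This is exactly the cost noted below: one division plus two multiplications per pixel.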
In principle we have solved the problem, but the computational expense has risen considerably. We now have two divisions per pixel, or at least one division to calculate the depth from its reciprocal followed by two multiplications to get the texture coordinates. It would be nice to cut these costs down. I now present some techniques to do so. The first two are based on calculating exact values for some pixels only and using linear texturing for the pixels in between.
The first thing that comes to mind is that linear texturing does not look that bad, at least not for small polygons, so why not divide the bigger polygons into smaller ones? We calculate exact texture coordinates for the vertices of all the smaller polygons and texture the small polygons linearly. The next picture shows two different subdivision schemes, both of which can be applied recursively to get even smaller polygons:
The left scheme is better than the right one because the right one tends to create long thin triangles, which are again prone to the distortions of linear texture mapping. The polygon subdivision technique has some disadvantages. First, we have to analyze the size of the polygon to decide how far to go with the subdivision to reach a reasonable polygon size. Second, there is some computational expense for administering the created triangles. Third, assuming we use the (simpler and better) left scheme, it is still badly suited to the limited register set of microprocessors, because at least four interpolations have to be done: the perspectively correct interpolations along the two edges of the big triangle, in steps depending on how far we went with the subdivision, plus the pixelwise linear interpolation along one edge of the small triangles and the interpolation along the scanline.
An algorithm called scanline subdivision partially solves these problems. We still interpolate perspectively correctly along the polygon edges, but within a scanline we calculate accurate texture coordinates only every 16 or 32 pixels, using linear texturing in between. This is simpler, polygon size is corrected for automatically, and only three interpolations are left. As disadvantages, we have to calculate more perspective-corrected texture coordinates, and we need some boundary treatment for scanlines whose length is not a multiple of 16 or 32. Despite these drawbacks this algorithm is in practice the best and is used in PC games like Descent and Tomb Raider.
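A minimal sketch of scanline subdivision in C, assuming spans of 16 pixels and illustrative names (only the u coordinate is shown; v is handled identically):

```c
#define SPAN 16  /* pixels between exact perspective divisions */

/* uoz/ooz are u/z and 1/z at the left end of the scanline,
   duoz/dooz their linear per-pixel steps across the scanline. */
void subdivided_scanline(float uoz, float duoz,
                         float ooz, float dooz,
                         int len, float *out_u)
{
    float u_left = uoz / ooz;          /* exact u at the left edge */
    int x = 0;
    while (x < len) {
        /* boundary treatment for lengths not a multiple of SPAN */
        int run = (len - x < SPAN) ? (len - x) : SPAN;
        float uoz_r = uoz + duoz * run;
        float ooz_r = ooz + dooz * run;
        float u_right = uoz_r / ooz_r; /* one exact division per span */
        float du = (u_right - u_left) / run;
        for (int i = 0; i < run; ++i)  /* linear texturing in between */
            out_u[x + i] = u_left + du * i;
        u_left = u_right;
        uoz = uoz_r; ooz = ooz_r;
        x += run;
    }
}
```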
The next algorithm uses a different approach: the edges are computed accurately, but along the scanlines the hyperbola is approximated by a parabola, which is much easier to compute using forward differencing. A parabola is a quadratic function of the form

f(x) = a*x^2 + b*x + c
To accelerate computation one uses a special scheme called forward differencing. For a fixed step size delta the function value changes as follows:

d(x) = f(x + delta) - f(x) = 2*a*delta*x + a*delta^2 + b*delta
Now we repeat this step with d(x):

d(x + delta) - d(x) = 2*a*delta^2
After initializing f, d and the constant 2*a*delta^2 for each scanline, we are left with two additions per pixel, namely:

f(x + delta) = f(x) + d(x)
d(x + delta) = d(x) + 2*a*delta^2
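A small sketch of forward differencing with step size 1 (names are illustrative; per scanline, f would be the texture coordinate):

```c
/* Evaluate f(x) = a*x^2 + b*x + c at x = 0, 1, 2, ...
   using only two additions per step. */
void forward_diff(float a, float b, float c, int n, float *out)
{
    float f  = c;            /* f(0)                        */
    float d  = a + b;        /* f(1) - f(0) for step size 1 */
    float dd = 2.0f * a;     /* constant second difference  */
    for (int x = 0; x < n; ++x) {
        out[x] = f;
        f += d;              /* two additions per pixel */
        d += dd;
    }
}
```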
Now that we can calculate the parabola quickly, the remaining issue is how to determine the constants a, b and c for each scanline to set up a well-approximating parabola. A polynomial with three unknown coefficients can meet three conditions. One might think that matching three points, one at each end of the scanline and one in the middle, would be a good idea, but this causes problems: it is not assured that the function values of the parabola stay within the range of valid texture coordinates, causing garbage to show up at the far end of the texture:
A better choice is to match the two end points of the scanline and the first derivative at the end farther away. This ensures that the parabola is strictly monotone within the range of the scanline and that its function values do not leave the valid range. With this condition, quadratic approximation is a good alternative to piecewise linear approximation.
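Assuming the scanline runs from x = 0 to x = L with the far end at x = L, the three conditions f(0) = u0, f(L) = uL and f'(L) = g (where g is the exact derivative of the hyperbola at the far end) determine the coefficients. A sketch in C, with illustrative names:

```c
/* Set up the parabola f(x) = a*x^2 + b*x + c over a scanline of
   length L so that f(0) = u0, f(L) = uL and f'(L) = g.
   Solving the three conditions by hand gives:
     c = u0
     a = (g - (uL - u0)/L) / L
     b = 2*(uL - u0)/L - g        */
void setup_parabola(float u0, float uL, float g, float L,
                    float *a, float *b, float *c)
{
    float slope = (uL - u0) / L;   /* average slope over the scanline */
    *c = u0;
    *a = (g - slope) / L;
    *b = 2.0f * slope - g;
}
```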
Constant Z Mapping
The last technique I want to talk about is constant z mapping. It is based on the observation that on each polygon there is a direction along which depth does not change:
This fact is most obvious for floors, walls and ceilings. When looking horizontally with an upright head, all walls have a vertical direction of constant depth, while ceilings and floors have horizontal directions of constant z. Under these circumstances we calculate two accurate texture coordinates at the ends of each screen column (walls) or row (floors and ceilings) and do linear texturing in between. For the reason given above, this technique is flawless. It is widely used in computer games like Mario Kart, Wolfenstein 3D, Doom I & II and even the fancy-looking Duke Nukem 3D. All tilting of the observer in Duke Nukem 3D is faked, with inaccurate texturing and perspective. Nevertheless, if you know in advance that all polygons are either vertical or horizontal and the observer only looks horizontally and holds his head upright, this method is the fastest known.
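A minimal sketch of constant-z mapping for one vertical wall column, in the spirit of the games mentioned above (names are illustrative). Because depth is the same for every pixel of the column, the texture coordinate can be stepped linearly with no per-pixel division:

```c
/* Draw one vertical wall column.  v_top and v_bottom are the
   (already perspective-corrected) texture coordinates at the two
   ends; in between, linear stepping is exact since z is constant. */
void wall_column(int y_top, int y_bottom, float v_top, float v_bottom,
                 const unsigned char *tex_column, int tex_h,
                 unsigned char *screen_column)
{
    int   h  = y_bottom - y_top;
    float v  = v_top;
    float dv = (v_bottom - v_top) / h;   /* constant per-pixel step */
    for (int y = y_top; y < y_bottom; ++y) {
        int t = (int)(v * tex_h);        /* texel row, clamped */
        if (t < 0) t = 0;
        if (t >= tex_h) t = tex_h - 1;
        screen_column[y] = tex_column[t];
        v += dv;
    }
}
```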
On the other hand, when polygons are arbitrarily oriented, the lines of constant z are diagonal lines. When rendering a polygon you then have to fill it not along scanlines or columns but along these diagonal lines. While this is certainly possible, you cannot use an unmodified Bresenham algorithm, because the diagonal lines would leave gaps or paint pixels twice, which is computationally expensive. Setting up lines that fill the polygon completely while staying oriented along the constant-z direction is not trivial and involves a lot of overhead, making this method slower than the ones mentioned above.
Questions and corrections: firstname.lastname@example.org
Author: Arno Schödl
Last change: Fri Aug 22 1997
© PLUM project, Technical University of Berlin, 1996