Texturing allows us to add detail to images without using extra polygons.
A Texture object needs to convert texture coordinates into pixel data (RGBA) from a texture file (PNG, BMP, etc.). Texture coords usually run from (0,0) to (1,1). It’s important for texture files to be square as a result of this.
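As a very rough sketch of that (s, t) → texel lookup (assuming the texture is stored as rows of RGBA tuples and we just do a nearest-neighbour lookup):

```python
def sample_nearest(texture, s, t):
    """Map (s, t) in [0,1] x [0,1] to an RGBA texel (nearest neighbour).
    texture is assumed to be a list of rows of (r, g, b, a) tuples."""
    height, width = len(texture), len(texture[0])
    x = max(0, min(int(s * width), width - 1))    # scale + clamp to texel indices
    y = max(0, min(int(t * height), height - 1))
    return texture[y][x]

# 2x2 black/white checker texture
tex = [[(255, 255, 255, 255), (0, 0, 0, 255)],
       [(0, 0, 0, 255), (255, 255, 255, 255)]]
print(sample_nearest(tex, 0.9, 0.1))   # -> (0, 0, 0, 255)
```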
Rasterisation converts polygons into collections of fragments. A fragment is a single image pixel with extra information attached. More on this further down.
Texture Mapping
To add textures to surfaces on our model we set texture coordinates for each vertex. These coordinates are then interpolated for each point on the face after the object has been rasterised (split into pixel fragments).
Unfortunately bilinear interpolation doesn’t work because it doesn’t take into account foreshortening on tilted objects (we need hyperbolic interpolation).
But the maths details of hyperbolic interpolation are beyond us :D
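We skip the derivation, but here’s a tiny sketch (not from the lecture) of the difference between naive lerping and hyperbolic (perspective-correct) interpolation, assuming we know the clip-space w at each end of a screen-space span:

```python
def lerp(a, b, t):
    return (1 - t) * a + t * b

def perspective_correct(u0, w0, u1, w1, t):
    """Hyperbolic interpolation of a texture coord u across a screen-space span.
    u0, u1: coords at the endpoints; w0, w1: clip-space w (roughly, depth)."""
    num = lerp(u0 / w0, u1 / w1, t)    # u/w interpolates linearly in screen space
    den = lerp(1.0 / w0, 1.0 / w1, t)  # so does 1/w
    return num / den

# Near endpoint (w=1, u=0) and far endpoint (w=4, u=1), sampled at the screen midpoint:
print(lerp(0.0, 1.0, 0.5))                           # naive: 0.5
print(perspective_correct(0.0, 1.0, 1.0, 4.0, 0.5))  # correct: 0.2 (the near half of the surface covers most of the screen)
```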
Textures + Shading
There are a couple of ways of handling how textures interact with lighting:
- (Simplest): Remove the effects of lighting entirely
- Replace I(P) with T(s(P), t(P))
- i.e. take the texture value at that point and use it directly as the illumination value
- Modulate ambient and diffuse based on the texture value. (Specular is left alone because texturing a surface shouldn’t change its shininess, apparently.) See the sketch below.
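A minimal sketch of that last option (texture modulates ambient + diffuse, specular left alone), assuming all colours are (r, g, b) tuples in [0, 1]:

```python
def shade(texture_rgb, ambient, diffuse, specular):
    """Texture modulates ambient + diffuse; the specular highlight is added untouched."""
    return tuple(min(1.0, t * (a + d) + s)
                 for t, a, d, s in zip(texture_rgb, ambient, diffuse, specular))

# Reddish texel, dim ambient, mid diffuse, and a white-ish specular highlight
print(shade((0.8, 0.2, 0.2), (0.1, 0.1, 0.1), (0.5, 0.5, 0.5), (0.3, 0.3, 0.3)))
```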
3D Textures
Rather than using a 2D texture and wrapping it around a 3D object, we can add an extra texture coordinate (for z) and ‘carve out’ the 3D object from a 3D texture block.
The actual file type of a 3D texture is not explored.
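Conceptually the lookup just gains a third coordinate; a rough sketch (assuming the block is stored as texture[z][y][x]):

```python
def sample_3d(texture, s, t, r):
    """Nearest-neighbour lookup into a 3D texture block; s, t, r are all in [0, 1]."""
    depth, height, width = len(texture), len(texture[0]), len(texture[0][0])
    x = min(int(s * width), width - 1)
    y = min(int(t * height), height - 1)
    z = min(int(r * depth), depth - 1)
    return texture[z][y][x]

# 2x2x2 block: every surface point indexes straight into the block,
# so there is no 2D unwrapping (and hence no seams).
block = [[[(x + y + z) % 2 for x in range(2)] for y in range(2)] for z in range(2)]
print(sample_3d(block, 0.7, 0.2, 0.9))
```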
Pros:
- Eliminates weird seams and distortions from wrapping
- Easier calculations of texture coords
Procedural/Generated Textures
To create more natural-looking textures we can generate them, rather than using a repeating image.
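For example, a tiny made-up “marble” function (the name and parameters are just for illustration) that generates values from a couple of sines instead of sampling an image:

```python
import math

def marble(x, y, stripes=6.0, warp=2.0):
    """Sine stripes warped by a second sine: a vaguely marble-like greyscale value in [0, 1]."""
    v = math.sin(stripes * math.pi * (x + warp * math.sin(math.pi * y)))
    return 0.5 * (v + 1.0)

# Sample a coarse 8x8 grid of the generated texture
for j in range(8):
    print(" ".join(f"{marble(i / 7, j / 7):.2f}" for i in range(8)))
```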
Minification (Aliasing) and Magnification
Texture pixels (texels) are individual pixels of a texture image file. These are distinct from screen pixels - sometimes multiple texels make one screen pixel, sometimes 1:1, sometimes 1 texel is spread out over multiple screen pixels (when we’re super zoomed in).
Both mismatches cause issues: minification leads to aliasing artifacts, magnification leads to blocky pixelation.
Magnification is fixed by smoothing out the pixelation with bilinear filtering (interpolating between texel values).
Minification is fixed by anti-aliasing/filtering - we average out the texels that each pixel draws from.
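A sketch of bilinear filtering on a greyscale texture (stored as rows of floats), blending the four texels around the sample point:

```python
import math

def sample_bilinear(texture, s, t):
    """Blend the four texels surrounding (s, t); texel centres sit at half-integer positions."""
    height, width = len(texture), len(texture[0])
    fx, fy = s * width - 0.5, t * height - 0.5
    x0 = max(0, min(int(math.floor(fx)), width - 1))
    y0 = max(0, min(int(math.floor(fy)), height - 1))
    x1, y1 = min(x0 + 1, width - 1), min(y0 + 1, height - 1)
    ax, ay = fx - math.floor(fx), fy - math.floor(fy)
    top = (1 - ax) * texture[y0][x0] + ax * texture[y0][x1]
    bottom = (1 - ax) * texture[y1][x0] + ax * texture[y1][x1]
    return (1 - ay) * top + ay * bottom

tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(sample_bilinear(tex, 0.5, 0.5))   # 0.5 — a smooth blend rather than a hard block edge
```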
MIP Maps
Pixel averaging is expensive to do on the fly, so to speed things up we pre-compute low resolution versions of the texture.
Starting with a 512x512 texture we compute and store 256x256, 128x128, 64x64, 32x32, 16x16, 8x8, 4x4, 2x2 and 1x1 versions. Altogether this takes about 4/3 of the original texture’s memory.
The simplest approach is to use the next smallest mipmap for the required resolution.
E.g. To render a 40x40 pixel image, use the 32x32 pixel mipmap and magnify using bilinear filtering.
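A sketch of building the chain with a 2x2 box filter (assuming a square, power-of-two, greyscale texture):

```python
def next_mip(level):
    """Halve the resolution by averaging each 2x2 block of texels."""
    n = len(level) // 2
    return [[(level[2*y][2*x] + level[2*y][2*x + 1] +
              level[2*y + 1][2*x] + level[2*y + 1][2*x + 1]) / 4.0
             for x in range(n)] for y in range(n)]

def build_mip_chain(texture):
    """Pre-compute the whole chain, e.g. 512x512 -> 256x256 -> ... -> 1x1."""
    chain = [texture]
    while len(chain[-1]) > 1:
        chain.append(next_mip(chain[-1]))
    return chain

tex = [[float((x + y) % 2) for x in range(4)] for y in range(4)]   # 4x4 checkerboard
chain = build_mip_chain(tex)
print([len(level) for level in chain])   # [4, 2, 1]
print(chain[-1])                          # [[0.5]] — the 1x1 level is the overall average
```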
Trilinear Filtering
Use the mip maps above and below the desired resolution (e.g. for 190 pixels use the 128 and 256 levels), sample both, then INTERPOLATE between the two results.
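A sketch of that blend, assuming a 512-texel base texture so a 190-pixel target falls between level 1 (256) and level 2 (128); the per-level samples are faked here just to show the lerp:

```python
import math

def trilinear(sample_from_level, level_of_detail):
    """Bilinearly sample the mip levels either side of the desired resolution,
    then lerp between the two results by the fractional level of detail."""
    lower = int(math.floor(level_of_detail))
    frac = level_of_detail - lower
    return (1 - frac) * sample_from_level(lower) + frac * sample_from_level(lower + 1)

lod = math.log2(512 / 190)        # ~1.43: between the 256 and 128 mip levels
fake_samples = {1: 0.8, 2: 0.4}   # pretend bilinear results from each level
print(trilinear(lambda lvl: fake_samples[lvl], lod))
```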
Anisotropic (Aniso) Filtering
So far we have been assuming that minification is happening equally in all dimensions. However if the object is tilted this may not be true.
This may lead to problems where the texels become warped: the mip level that is right for, say, the vertical direction uses a reduction that is wrong for the horizontal direction, so your pixels end up lerping from those over-averaged texels.
Which will lead to a really messed-up looking chessboard (white, white, black, white, white).
Aniso filtering solves this by treating each axis separately.
Instead of using MIP maps we can use RIP maps: instead of storing just 256x256, 128x128, … etc. we store every width/height combination, i.e. 256x256, 256x128, 256x64, …, 128x256, 128x128, 128x64, …
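A sketch of generating RIP maps by halving each axis independently (greyscale, power-of-two texture assumed); storing every width/height combination is what costs the extra memory listed below:

```python
def halve_x(level):
    """Average horizontal pairs: width halves, height stays the same."""
    return [[(row[2*x] + row[2*x + 1]) / 2.0 for x in range(len(row) // 2)]
            for row in level]

def halve_y(level):
    """Average vertical pairs: height halves, width stays the same."""
    return [[(level[2*y][x] + level[2*y + 1][x]) / 2.0 for x in range(len(level[0]))]
            for y in range(len(level) // 2)]

def build_rip_maps(texture):
    """Pre-filter every (width, height) combination so a tilted surface can pick
    a different reduction per axis."""
    rips, col = {}, texture
    while True:
        row = col
        while True:
            rips[(len(row[0]), len(row))] = row
            if len(row[0]) == 1:
                break
            row = halve_x(row)
        if len(col) == 1:
            break
        col = halve_y(col)
    return rips

tex = [[float((x + y) % 2) for x in range(4)] for y in range(4)]
print(sorted(build_rip_maps(tex).keys()))   # all 9 (width, height) combinations from (1,1) to (4,4)
```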
Limitations of RIP maps:
- Diagonal anisotropy (tilt that isn’t aligned with the texture axes) will rip RIP maps apart
- RIP maps require 4 times the memory of a regular texture
Rendering to a Texture
Malcolm doesn’t go into detail about this, but one way to do security cameras on screens, portals, or mirrors is to render the scene (from the correct viewpoint) into an offscreen buffer. This buffer is then converted to a texture image, which is applied to the screen/portal/mirror surface.
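A conceptual sketch only (render_scene is a hypothetical stand-in, not anything from the lecture): render into an offscreen buffer, then sample that buffer like any other texture when drawing the in-world surface:

```python
def render_scene(width, height, view):
    """Stand-in for the real renderer: just returns a width x height greyscale gradient."""
    return [[(x + y) / (width + height - 2) for x in range(width)] for y in range(height)]

def sample_nearest(texture, s, t):
    height, width = len(texture), len(texture[0])
    return texture[min(int(t * height), height - 1)][min(int(s * width), width - 1)]

# 1. Render the scene from the security camera's viewpoint into an offscreen buffer.
offscreen = render_scene(64, 64, view="security_camera")
# 2. Use that buffer as a texture when shading the in-world screen that shows the feed.
print(sample_nearest(offscreen, 0.5, 0.5))
```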