Ray Tracing

Models specular reflection/refraction.

Ray tracing is a different approach to rendering. In the OpenGL pipeline we model objects as meshes, which are converted into fragments for display. In ray tracing, models are 'implicit surfaces/forms'*, and we compute each pixel by casting a ray and finding which model it intersects.

*An implicit surface is one defined by a function that assigns a value to every point in space; the surface is the set of points where that value is zero.

# The Maths

## Intersections

In order to compute where a ray intersects an object, we take the implicit form of the shape's equation, written f(P) = 0 (or F(x, y, z) = 0), and substitute in the ray equation R(t). Solving for t gives the intersection points.

(Figure: working and results for the unit sphere.)

## Generic Forms
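As a worked example, substituting R(t) = S + t·d into the unit sphere's implicit form f(P) = x^2 + y^2 + z^2 - 1 = 0 gives a quadratic in t; a sketch:

```python
import math

def intersect_unit_sphere(origin, direction):
    """Smallest positive t where origin + t*direction hits x^2+y^2+z^2 = 1,
    or None if the ray misses. Vectors are (x, y, z) tuples."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    # Substituting R(t) into f(P) = |P|^2 - 1 = 0 gives a*t^2 + b*t + c = 0:
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - 1.0
    disc = b*b - 4.0*a*c
    if disc < 0:
        return None              # ray misses the sphere entirely
    sq = math.sqrt(disc)
    for t in sorted(((-b - sq) / (2*a), (-b + sq) / (2*a))):
        if t > 1e-9:             # nearest hit in front of the ray origin
            return t
    return None
```

A ray fired from (0, 0, -5) straight down the z axis hits the sphere at t = 4 (the point (0, 0, -1)).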

For generic planes and cubes the calculations are pretty straightforward. A generic plane gives a single linear equation in t:

(Figure: working for generic planes.)

For the (-1,-1,-1) to (1,1,1) cube we use the Cyrus-Beck clipping algorithm, generalised to 3D planes.
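For the axis-aligned generic cube, the Cyrus-Beck-style clip reduces to intersecting the ray's parameter interval with the three pairs of bounding planes ("slabs") - a sketch:

```python
def intersect_unit_cube(origin, direction, eps=1e-12):
    """Clip the ray against the three pairs of planes bounding the
    (-1,-1,-1) to (1,1,1) cube. Returns (t_near, t_far), or None on a miss."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d in zip(origin, direction):
        if abs(d) < eps:
            if o < -1.0 or o > 1.0:
                return None          # parallel to this slab and outside it
            continue
        t1, t2 = (-1.0 - o) / d, (1.0 - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return None              # per-axis intervals do not overlap
    return (t_near, t_far)
```

A ray from (0, 0, -5) along +z enters the cube at t = 4 and leaves at t = 6.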

## Non-Generic Forms

To avoid writing many specific implementations, we figure out how the shape was transformed from its generic version, apply the inverse of that transform to the ray, and then test the new ray against the generic shape.

## Pseudo Code

Once we know we have hit an object, we can use the information about the object hit to calculate the illumination and texture coords to compute the pixel colour.
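The inverse-transform trick can be sketched as follows (a sketch with hypothetical helper names; points are transformed with w = 1 and directions with w = 0, so translations do not affect the direction):

```python
import math

def mat_vec(m, v, w):
    """Apply a 4x4 row-major matrix to (x, y, z, w); return the (x, y, z) part."""
    x, y, z = v
    return tuple(m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3]*w for r in range(3))

def intersect_transformed(origin, direction, inverse_transform, generic_hit):
    """Test a world-space ray against a transformed generic shape by
    inverse-transforming the ray into the shape's local space."""
    local_o = mat_vec(inverse_transform, origin, 1.0)
    local_d = mat_vec(inverse_transform, direction, 0.0)  # not re-normalised,
    return generic_hit(local_o, local_d)  # so t stays valid in world space

def unit_sphere_hit(o, d):
    """Generic unit-sphere test: nearest positive root of the quadratic."""
    a = sum(di*di for di in d)
    b = 2.0 * sum(oi*di for oi, di in zip(o, d))
    c = sum(oi*oi for oi in o) - 1.0
    disc = b*b - 4*a*c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2*a)
    return t if t > 1e-9 else None

# Example: a sphere scaled by 2 -> its inverse transform scales by 0.5.
inv_scale_half = [[0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 0.5, 0], [0, 0, 0, 1]]
```

A ray from (0, 0, -5) along +z hits the radius-2 sphere at t = 3, exactly where a direct world-space test would put it.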

# Anti Aliasing

Adaptive sampling and super sampling are useful here - e.g. cast many more rays per pixel than strictly necessary and average the results.
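The super-sampling case can be sketched as follows (`trace_ray` is an assumed per-ray colour function, not something defined in these notes):

```python
import random

def supersample_pixel(trace_ray, px, py, samples=16, rng=None):
    """Super sampling: average many jittered rays through one pixel.
    trace_ray(x, y) returns an (r, g, b) tuple for a sub-pixel position."""
    rng = rng or random.Random(0)
    r = g = b = 0.0
    for _ in range(samples):
        # Jitter each ray to a random position inside the pixel square.
        cr, cg, cb = trace_ray(px + rng.random(), py + rng.random())
        r, g, b = r + cr, g + cg, b + cb
    return (r / samples, g / samples, b / samples)
```

Adaptive sampling refines this by only adding extra rays where neighbouring samples disagree strongly.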

# Efficiency Optimisations

## Tight Fits - Computing Extents

An extent is a bounding box/sphere that contains an object. Given that it's very cheap to test a ray against a box or a sphere, we can do a quick check to see whether the ray intersects the extent at all, and only do the finer-detail testing if it does. It's important to find the best (tightest) possible fit.

Box:

• Take the min and max x, y, z coords over all the points involved

Sphere:

• Find the center of all the vertices by averaging all coords
• Radius = distance to farthest vertex from center
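Both constructions follow directly from the bullet points above (a sketch; note the averaged-centre sphere is simple but not necessarily the tightest possible fit):

```python
import math

def box_extent(points):
    """Axis-aligned bounding box: per-axis min and max over all vertices."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def sphere_extent(points):
    """Bounding sphere: centre = average of all vertices,
    radius = distance from the centre to the farthest vertex."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    r2 = max((p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 for p in points)
    return (cx, cy, cz), math.sqrt(r2)
```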

## Projection Extent

A projection extent is an extent calculated in screen space - a box that includes all pixels that would be in the image of the object. These are computed by projecting everything into screen space and finding the min/max x,y coords.
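A sketch of that computation, where `project` is a hypothetical stand-in for the full world-to-screen transform:

```python
def projection_extent(vertices, project):
    """Screen-space extent: project every vertex and take min/max x, y.
    project(v) maps a world-space point to 2D pixel coordinates."""
    pts = [project(v) for v in vertices]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys)), (max(xs), max(ys))
```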

## BSP Trees

We can split all the objects in the world into the nodes of a BSP tree; then, rather than examining every object, we only examine the objects in the node regions that the ray actually passes through. Presumably each node of the BSP tree stores the region of space it covers, so checking whether the ray enters a node is cheap.

(Figure: algorithm for traversal.)

# Shadows

Shadows are easy - we use shadow feelers: from every point we hit, we cast a ray towards each light source and use only the lights the feeler reaches (without hitting other objects) in the illumination equation. To eliminate self-shadows (we'll always get an intersection at t = 0 with the object itself) we have to determine whether the light source is on the other side of the object, or simply offset the feeler's start point slightly.
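A sketch of the shadow-feeler loop, assuming hypothetical `occluded` and `shade` callbacks supplied by the rest of the tracer:

```python
def direct_lighting(hit_point, normal, lights, occluded, shade):
    """For each light, cast a feeler from the hit point toward the light and
    include that light only if nothing blocks the feeler. occluded(p, light)
    says whether the feeler is blocked; shade(light, n) gives the light's
    contribution to the illumination equation."""
    eps = 1e-4
    # Nudge the start point along the normal to avoid the t = 0 self-hit.
    start = tuple(p + eps * n for p, n in zip(hit_point, normal))
    total = 0.0
    for light in lights:
        if not occluded(start, light):
            total += shade(light, normal)
    return total
```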

# Reflections

Ray tracing allows for easy reflection: every time a ray hits a reflective object we cast a new ray off it. We usually stop after a fixed number of levels of reflection to avoid infinite recursion (e.g. between two facing mirrors).
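A minimal sketch of the bounce recursion, assuming a hypothetical `intersect` callback that returns the local colour, a reflectivity coefficient and the reflected ray for each hit:

```python
def reflect(d, n):
    """Mirror direction r = d - 2(d.n)n, for a unit surface normal n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

def trace(ray, depth, intersect, max_depth=4):
    """Recursive reflection with a depth cut-off so mirrored mirrors
    cannot recurse forever. intersect(ray) returns None on a miss, or a
    (local_colour, reflectivity, reflected_ray) tuple on a hit."""
    if depth > max_depth:
        return (0.0, 0.0, 0.0)       # contribute nothing past the cut-off
    hit = intersect(ray)
    if hit is None:
        return (0.0, 0.0, 0.0)       # background colour
    local, kr, reflected_ray = hit
    bounce = trace(reflected_ray, depth + 1, intersect, max_depth)
    return tuple(l + kr * b for l, b in zip(local, bounce))
```

With an "infinite mirror" scene that always reflects half the light, the depth cut-off turns the would-be infinite series into a finite sum.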

# Transparency

Transparency can be handled similarly by passing a ray through a transparent object. We end up with a tree of rays, with new ones cast at every hit (in the missing diagram, presumably r = reflection rays and t = transparency rays).

# Illumination

The lighting equation has to include reflection and transparency components, computed recursively. These gain material coefficients to weight them appropriately - e.g. I = I_local + k_r·I_reflected + k_t·I_transmitted.

# Refraction

To handle transparency properly we need to take into account the refraction of light. Light bends as it moves between mediums - Snell's Law describes this: sin θ1 / sin θ2 = c1 / c2, where c1 and c2 are the speeds of light in each medium.

Different wavelengths of light move at different speeds (except in a vacuum), so for maximum realism we should calculate different paths for different colours (rainbows!).
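The standard vector form of Snell's law gives the transmitted ray direction; a sketch using refractive indices (eta = c_vacuum / c_medium, so eta1·sin θ1 = eta2·sin θ2):

```python
import math

def refract(d, n, eta1, eta2):
    """Bend a unit direction d crossing a surface with unit normal n
    (pointing against d) from a medium with index eta1 into eta2.
    Returns the transmitted direction, or None on total internal reflection."""
    cos_i = -sum(a * b for a, b in zip(d, n))
    ratio = eta1 / eta2
    k = 1.0 - ratio * ratio * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                  # total internal reflection: no ray passes
    scale = ratio * cos_i - math.sqrt(k)
    return tuple(ratio * a + scale * b for a, b in zip(d, n))
```

At normal incidence the ray passes straight through; at 45 degrees into glass (eta about 1.5) the sine of the angle shrinks by the ratio 1/1.5, as Snell's law requires.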

# Volumetric Ray Tracing/Marching

Volumetric objects (e.g. smoke or fire, which are made up of many small particles) can also be displayed with ray tracing.

We treat all the particles as being part of one object (the 'volume'), and represent the region it covers as a 3D array - each index is a point, and the array stores the colour and the transparency of that point. The volume is thus represented with two functions: a colour and an opacity at each point.

Points without explicit data are generated with interpolation. The actual functions to determine this data could be based on density, lighting or other physical properties.
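The interpolation step is typically trilinear - a sketch over a nested-list stand-in for the 3D array, assuming the query point lies strictly inside the grid:

```python
def trilinear(volume, x, y, z):
    """Interpolate a value at fractional position (x, y, z) in a 3D grid
    stored as volume[i][j][k], by blending the 8 surrounding grid points."""
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0

    def v(i, j, k):
        return volume[x0 + i][y0 + j][z0 + k]

    # Weight each corner by its distance from the query point on each axis.
    return sum(v(i, j, k)
               * (fx if i else 1 - fx)
               * (fy if j else 1 - fy)
               * (fz if k else 1 - fz)
               for i in (0, 1) for j in (0, 1) for k in (0, 1))
```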

## Sampling

To actually determine how the volume looks at a pixel, we cast a ray through the volume and take N + 1 samples at fixed intervals along it. We then alpha-blend the samples (C_i, α_i), combining each with whatever lies behind it.

C_N is the background behind the volume, and α_N = 1 (as something opaque must eventually be drawn behind the volume).

We're trying to come up with a single colour value for the pixel, so we work from front to back combining values until we reach the end, or the accumulated transparency is small enough that nothing further back can be seen. As a closed formula (summing i from 0 to N):

C = Σ (i = 0 to N) C_i · α_i · Π (j = 0 to i−1) (1 − α_j)

i.e. each sample's colour, weighted by its own opacity and by the transparency of everything in front of it.
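The front-to-back loop can be sketched as follows (scalar colours for brevity; `samples` runs from the nearest sample to the background, whose alpha is 1):

```python
def composite_front_to_back(samples):
    """Alpha-blend (colour, alpha) samples taken front to back along a ray:
    C = sum_i C_i * a_i * prod_{j<i} (1 - a_j)."""
    colour = 0.0
    transmittance = 1.0              # product of (1 - a_j) seen so far
    for c, a in samples:
        colour += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:
            break                    # nothing further back can be seen
    return colour
```

With one half-transparent white sample in front of a black opaque background, half the white survives; a fully opaque first sample hides everything behind it.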

## OpenGL Implementation

This kind of Ray Tracing doesn’t require a full ray tracing engine - it can be implemented as a fragment shader applied to a cube with a 3D texture.

page revision: 5, last edited: 24 Sep 2014 12:21