Chapter 3: Cameras, Eyes, and Ray Tracing

3.1 Images distort reality

Ray tracing is a technique inspired by the functioning of the human eye and of devices like film cameras and TV cameras. Eyes and cameras receive light from a three-dimensional environment and focus it onto a two-dimensional surface. Whether that surface is flat (as in a film camera or a digital camera) or curved (as in the eye's retina), the focused image has lost a dimension compared to the reality it represents. Just as no flat paper map can represent the spherical Earth without distortion, a focused image in a camera or eye introduces distortions: distant objects appear smaller than nearby ones, and parallel lines such as railroad tracks appear to converge as they recede. Such distortions are so familiar to us in our everyday lives, and our brains so adept at processing images to extract useful information from them, that it takes deliberate thought even to notice these effects as distortion.

These effects on the image are real, not imaginary; to portray a 3D scene accurately, they must be incorporated into the construction of any 2D image, or a person will not perceive that image as realistic.

3.2 Perspective in art

Any trained artist will appreciate the technical skill required to depict a three-dimensional scene convincingly on a flat canvas. Mastering the subtle use of light and shadow and the precise geometry of perspective requires a great deal of practice and refinement.

But we cannot expect a computer to have a creative and artistic sense of expression. For a computer to create a realistic image, it must be told both what objects to draw and how to draw them. The three-dimensional objects must be modeled as mathematical formulas, and the optics of a camera must be simulated somehow in order to create a flat image from these formulas.
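To make the idea of "objects as mathematical formulas" concrete, here is a minimal sketch, not code from this book and with all names hypothetical, showing a sphere represented purely by numbers: a center point and a radius. A point lies on the sphere's surface exactly when its distance from the center equals the radius.

    #include <cmath>

    // Hypothetical illustration types; a real ray tracer will have its own.
    struct Vector { double x, y, z; };

    struct Sphere
    {
        Vector center;   // location of the sphere's center
        double radius;   // distance from the center to the surface
    };

    // A sphere is "just math": point p is on the surface when
    // |p - center| equals radius, i.e. (p - c).(p - c) == r^2.
    bool OnSurface(const Sphere& s, const Vector& p, double tolerance = 1.0e-9)
    {
        const double dx = p.x - s.center.x;
        const double dy = p.y - s.center.y;
        const double dz = p.z - s.center.z;
        return std::fabs(dx*dx + dy*dy + dz*dz - s.radius*s.radius) < tolerance;
    }

Everything a renderer needs to know about this sphere is captured by those few numbers; the rest of the chapter explains how to turn such formulas into a flat image.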

Contemplating this, one might despair after inspecting the innards of any modern camera. Whether film or digital, a high-end camera contains a complex arrangement of lenses, along with mechanical parts that move them relative to each other and to the focal plane, where they work together to form images. Even the simplest practical camera has a single lens of exquisitely precise shape, multiple layers of materials with different optical properties, and special chemical coatings that reduce internal reflections and glare.

Fortunately, we can entirely avoid the conceptual difficulties of optical lenses in our computer simulations. We can do this by looking back in history. Over 2000 years ago, Aristotle (and others) recorded that a flat surface with a small hole, placed between a candle flame and a wall, causes an inverted image of the flame to appear on the wall. Aristotle correctly surmised from this observation that light travels in straight lines.

3.3 Cameras

In the Renaissance, artists learned to draw landscapes and buildings with accurate perspective by sitting in a darkened room with a small hole in one wall. On a bright, sunny day, an upside-down image of the outdoor scenery would be projected on the wall opposite the hole. The artist could then trace outlines of the scene on paper or canvas, and simply turn it right-side-up when finished. In 1604 the famous astronomer Johannes Kepler used the Latin phrase for "darkened room" to describe this setup: camera obscura, hence the origin of the word camera.

The discovery of photochemicals in the 1800s, combined with a miniaturized camera obscura, led to the invention of pinhole cameras. Instead of an entire room, a small closed box with a pinhole in one side would form an image on a photographic plate situated on the opposite side of the box. (See Figure 3.2.)

Figure 3.2: A pinhole camera.

The ray tracing algorithm presented here simulates an idealized pinhole camera. It models light traveling in straight lines, passing through an extremely small hole (a perfect mathematical point, actually) and landing on a flat screen to form an image. However, for the sake of computational speed, ray tracing reverses the direction of the light rays. In reality, rays of light reflect from an object in all directions, and only a tiny portion of them pass through the camera's pinhole. In ray tracing, we instead start with each pixel on the imaginary screen and draw a straight line from that pixel through the pinhole and beyond, into the scene outside the camera. This way, we ignore all rays of light that never pass through the pinhole and are therefore irrelevant to the intended image. If the line strikes a point on the surface of some object in the scene, say a point on a bright red ball, we know that the associated pixel at the other end of the line should be colored red.
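The following sketch shows what this backward tracing loop might look like. It is an illustration under stated assumptions, not this book's actual code: the pinhole sits at the origin, the imaginary screen lies at distance zscreen along the -z axis, pixel (i, j) maps linearly onto that screen, and TraceRay is a hypothetical routine that returns the color seen along one ray.

    #include <cmath>

    struct Vector { double x, y, z; };
    struct Color  { double r, g, b; };

    // Hypothetical stub: a real tracer would intersect the ray with the
    // scene and compute the color of whatever surface it strikes first.
    Color TraceRay(const Vector& direction)
    {
        (void)direction;
        return Color{0.0, 0.0, 0.0};   // black background for now
    }

    // Fire one backward ray per pixel, from the pixel through the pinhole.
    void RenderImage(int width, int height, double zscreen)
    {
        for (int j = 0; j < height; ++j)
        {
            for (int i = 0; i < width; ++i)
            {
                // Direction from the pinhole (origin) through pixel (i, j).
                Vector dir;
                dir.x = i - width/2.0 + 0.5;
                dir.y = height/2.0 - j - 0.5;   // +y points up on the screen
                dir.z = -zscreen;               // camera looks down the -z axis

                // Normalize the direction to unit length.
                const double mag = std::sqrt(dir.x*dir.x + dir.y*dir.y + dir.z*dir.z);
                dir.x /= mag;  dir.y /= mag;  dir.z /= mag;

                Color color = TraceRay(dir);
                // ... store 'color' in the output image at (i, j) ...
                (void)color;
            }
        }
    }

Note that only one ray per pixel is needed here; that economy is the whole point of reversing the rays.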

In a real pinhole camera, the light would have reflected from that spot on the red ball in all directions, but the only ray that would find its way onto the imaging screen would be the one that travels in the exact direction of the pinhole from that spot. Other spots on the ball (for example, the parts facing away from the camera) would be blocked from the pinhole's view, and rays of light striking them would have no effect on the image at all. Reversing the direction of the light rays saves a tremendous amount of computing effort by eliminating consideration of all the irrelevant light rays.
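As a sketch of how the "line strikes a point" test might work for the red ball: substituting the ray origin + t*direction into the sphere equation |P - center|^2 = radius^2 yields a quadratic in t, and the smallest positive root is the visible intersection. This is the standard ray-sphere derivation, shown here as an assumption rather than as the exact routine this book develops later.

    #include <cmath>

    struct Vector { double x, y, z; };

    static double Dot(const Vector& a, const Vector& b)
    {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }

    // Find where the ray origin + t*direction (t > 0) first hits a sphere.
    // With a unit-length direction, the quadratic is t^2 + b*t + c = 0.
    // Returns true and fills 't' on a hit.
    bool RaySphere(
        const Vector& origin, const Vector& direction,
        const Vector& center, double radius, double& t)
    {
        const Vector oc = {
            origin.x - center.x, origin.y - center.y, origin.z - center.z
        };
        const double b = 2.0 * Dot(direction, oc);
        const double c = Dot(oc, oc) - radius*radius;
        const double disc = b*b - 4.0*c;   // discriminant (a == 1)
        if (disc < 0.0)
            return false;                  // the ray misses the sphere entirely
        const double root = std::sqrt(disc);
        const double t1 = (-b - root) / 2.0;   // nearer intersection
        const double t2 = (-b + root) / 2.0;   // farther intersection
        t = (t1 > 0.0) ? t1 : t2;
        return t > 0.0;
    }

Taking the nearer positive root matters: it ensures the camera sees the front of the ball, not the far side or the parts facing away from it.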

