Stereo Rendering in ioquake3
Foreword on stereo rendering with Quake3
As of SVN revision 1339, ioquake3 officially supports the rendering of stereoscopic images. id Software appears to have researched this topic before point release 1.17, and possibly earlier, as several references to stereo rendering can be found in the cgame source code released at that time. Support for it seems to have been discontinued, however, and we're happy we can finally introduce it again. Since most people will at some point in their life have seen something as mundane as a corn flakes box with a 3D image and a pair of special 3D glasses on it to get kids to buy the product, probably everyone knows what this stereo vision page is about. Suffice it to say that the technique behind stereo viewing is to render two displaced images, one for the left eye and one for the right, to create a considerably enhanced illusion of depth in the 3D scenes rendered by the Quake3 engine. This page tells you how best to achieve this with ioquake3.
Methods for creating stereo images
A well-tested method to achieve the illusion of depth is the anaglyph rendering mode, which renders the left- and right-eye images in separate colours, so that special glasses with matching colour filters give you a vastly improved perception of depth. It can be enabled and disabled on the fly in-game and yields acceptable results at low cost (namely the price of buying those glasses, or of the materials if you want to make them yourself).
Then there are shutter glasses, and even newer technologies working with polarized light, which achieve basically the same effect without some of the problems of viewing anaglyph images, such as colour distortion. That type of rendering (enabled via r_stereoEnabled 1) has not been tested yet, as we lack the hardware to do so.
Enabling stereo rendering in ioquake3
The first method can be enabled by setting r_anaglyphMode to a non-zero value. There are different glasses with different colour combinations.
The most common combination is red-cyan, meaning a red colour filter for the left eye and a cyan colour filter for the right eye. As cyan is composed of green and blue, you can view coloured stereo images, though the colours will of course look a bit strange, since the left eye only sees red and the right eye only green and blue. This is also problematic when viewing objects that have little or no red colour component, as you then see the object with one eye only, which is uncomfortable, and the stereo impression for that object fails.
Another common type of glasses are red-green glasses, and finally there are red-blue glasses, which are the best glasses for avoiding the so-called "ghosting effect". Here is a list of all possible values for r_anaglyphMode:
* 1: red-cyan
* 2: red-blue
* 3: red-green
* 4: cyan-red
* 5: blue-red
* 6: green-red
OpenGL stereo rendering
This method writes to different OpenGL buffers to create the stereo images. Enable it by setting r_stereoEnabled 1. It will only work on hardware that supports quad buffers, and since we don't have such hardware, we don't know whether it works at all.
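For reference, quad-buffered stereo in OpenGL generally follows the pattern sketched below. This is a generic illustration of the technique, not ioquake3's renderer code, and the helper functions render_scene_from_eye() and swap_buffers() are hypothetical placeholders.

```c
#include <GL/gl.h>

/* Hypothetical placeholders -- not real ioquake3 functions. */
void render_scene_from_eye(float eye_offset);
void swap_buffers(void);

/* One frame of quad-buffered stereo. The GL context must have been
 * created with a stereo-capable pixel format, which only works on
 * hardware and drivers exposing quad buffers -- hence r_stereoEnabled 1
 * being untestable without such hardware. */
void draw_stereo_frame(float eye_separation)
{
    /* Left eye into the back-left buffer. */
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    render_scene_from_eye(-eye_separation / 2.0f);

    /* Right eye into the back-right buffer. */
    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    render_scene_from_eye(+eye_separation / 2.0f);

    /* Swapping presents both buffers; shutter glasses or a polarized
     * display then route each one to the correct eye. */
    swap_buffers();
}
```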
Perceiving 3D stereo images
At least in anaglyph mode, you will probably not be able to play Quake3 competitively, as watching anaglyph images for longer periods is generally very strenuous for your eyes. But you can take steps to ease the strain, and here is a rough guide to setting up sane cvar values in ioquake3 to get your eyes used to these kinds of images.
First of all, 3D image rendering works by putting a point-like observer in front of a projection plane, which is ultimately mapped to your computer screen. Objects in front of that projection plane will appear to be in front of your computer screen, and objects behind the projection plane will appear to be behind your screen. A very good explanation of all this, including graphics, can be found elsewhere online. To change the distance of the observer to this projection plane, you can modify the cvar r_zProj. The measuring unit is Quake3 standard units, where eight units equal one foot.
A second cvar influencing the way stereo images are created is r_stereoSeparation. The actual eye separation used in the engine, in Quake3 standard units, is obtained by the division r_zProj / r_stereoSeparation. If you read the first part of the text mentioned above, you'll notice that for objects at large distances, the stereo separation of those objects asymptotically approaches your eye separation, which is about five centimeters or two inches for most people. As that is quite a large separation, the visual system of your brain may have difficulties merging the two images it gets from your eyes into a single three-dimensional impression. I will discuss this in a minute. So at the beginning, a good value to set this to is something like 64, which will already give you a depth illusion for near objects. As you get used to it, you can start decreasing that value to maybe 32, 16 or even 8.
Now, why can images with large stereo separations no longer be merged so easily? The reason is that the brain uses several depth cues to perceive the depth of the objects it sees; one of them is said stereo separation, which is a result of the eyes being apart. Since the 3D images created by the engine are only a projection of a scene onto a two-dimensional plane, an important piece of information your brain needs to merge these images is lost: focus. From years of training, your brain knows that to view distant objects, your eyes firstly need to be rotated such that their view directions are parallel, and secondly the lens must be relaxed so that distant objects are in focus. As the second piece of information is lost, you'll have a harder time merging the images the further apart they are. However, there's a way to make up for this loss by providing more information: movement. When moving, your brain knows that distant objects appear to move more slowly than near objects. This means that if you're moving your character in Quake3, you'll be able to merge the images even at separations as low as 6, which corresponds to about your eye separation! Another curious thing you may notice is that the image looks blurry, and this is due to the reason I gave above: your brain expects the objects on screen to be very far away, so you lose focus, because the actual focal length your eye needs to adjust to is the distance to the computer screen right in front of you.
Another nice effect is objects appearing to be in front of your computer screen, for a "coming at ya" effect. This is generally more difficult to achieve; the best way is to put the object in question centered on your screen and move close up to it. Also make sure to set r_zProj to a value of at least 64 or even 128. The object must not touch the edge of your display, as the illusion would fail there. Try to "grab" the object with your hand; you'll notice a funny "surprise" effect, as your brain expects your hand to touch something solid instead of thin air.