Stereo Rendering

Foreword on stereo rendering with Quake3

As of SVN revision 1339, ioquake3 officially supports the rendering of stereoscopic images. id Software seems to have done research on this topic before point release 1.17 and possibly even earlier, as several references to stereo rendering can be found in the cgame source code released at that time. However, support for it was apparently discontinued, and we're happy we can finally introduce it again. Since most people will at some point in their life have seen something as mundane as a corn flakes box with a 3D image on it and special 3D glasses inside to get kids to buy the product, probably everyone knows what this stereo vision page is about. Suffice it to say that the technique behind stereo viewing is to create two displaced images, one for the left eye and one for the right eye, to produce a considerably enhanced illusion of depth for the 3D scenes rendered by the Quake3 engine. This page tells you how best to achieve this with ioquake3.

Methods for creating stereo images

A well tested method to achieve the illusion of depth is the anaglyph rendering mode, which uses a separate colour for each eye: with special glasses containing matching colour filters you get a vastly better perception of depth in the rendered images. It can be enabled/disabled on the fly in-game and yields acceptable results at low cost (namely the price of buying those glasses, or of the materials if you want to make them yourself).

Then there are shutter glasses, or even newer technologies working with polarized light, which achieve basically the same effect but without some of the problems of viewing anaglyph images, such as colour distortion. That type of rendering (enabled via r_stereoEnabled 1) has not been tested well, as we lack the hardware to do so.

Enabling stereo rendering in ioquake3

Anaglyph rendering

The first method can be enabled by setting r_anaglyphMode to a non-zero value. There are different glasses with different colour combinations.

The most common combination is red-cyan, meaning a red colour filter for the left eye and a cyan colour filter for the right eye. As cyan contains green and blue, you can view coloured stereo images, although the colours will of course look a bit strange, since the left eye only sees red and the right eye only green and blue. This also becomes a problem with objects whose colour is pure red or contains no red at all, as you then see the object with one eye only: that is uncomfortable, and the stereo impression for that object fails.

Another common type of glasses is red-green, and finally there are red-blue glasses, which are the best choice to avoid the so-called "ghosting" effect, i.e. leakage of the image meant for one eye into the other eye. Here is a list of all possible values for r_anaglyphMode:

* 1: red-cyan
* 2: red-blue
* 3: red-green
* 4: cyan-red
* 5: blue-red
* 6: green-red

For any colour combination other than red-cyan you probably want to enable r_greyscale, as this creates black-and-white images and thus avoids the problem of objects containing only a single colour.
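
As a concrete example, a minimal snippet along these lines could go into your autoexec.cfg (the values are only illustrative; r_anaglyphMode itself can also be changed on the fly from the console):

  // red-blue glasses, value 2 from the list above
  seta r_anaglyphMode "2"
  // greyscale rendering avoids the single-colour problem described above;
  // depending on your build this cvar may require a vid_restart to take effect
  seta r_greyscale "1"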

OpenGL stereo rendering

This method renders into separate OpenGL draw buffers to create the stereo images. Enable it by setting r_stereoEnabled 1. It will only work on hardware and drivers that support quad-buffered visuals. Usually shutter glasses go with this solution, but in the future there may be other techniques that take advantage of stereo rendering. There have been several success reports on the internet from people using this technique. If you have the appropriate hardware, you should definitely try it out!
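
If you want to give it a try, the console commands would look roughly like this (a sketch, assuming your driver actually exposes a quad-buffered visual; without one you will simply not get stereo output):

  // enable quad-buffered OpenGL stereo
  seta r_stereoEnabled "1"
  // the setting affects how the OpenGL context is created, so restart the renderer
  vid_restart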

Controlling perception of ioquake3 3D stereo images

At least in anaglyph mode you will probably not really be able to play Quake3 competitively, as watching anaglyph images for a longer time is generally very tiring for your eyes. You can, however, take steps to ease the strain, so here is a rough guide for setting up sane cvar values in ioquake3 to get your eyes used to this type of image.

First of all, 3D image rendering works by putting a point-like observer in front of a projection plane, of which a rectangle is cut out and mapped to your computer screen. With stereo effects added, objects in front of that projection plane will appear to stick out of your screen, and objects behind it will appear to lie behind your screen. A very good explanation of all this, including illustrations, can be found here. To change the distance between the observer and this projection plane, you can modify the cvar r_zProj. It is measured in Quake3 standard units, where eight units equal one foot.
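
To get a feeling for the scale (using the eight-units-per-foot figure above, and assuming 64 is the default value), a few r_zProj values translate as follows:

  // r_zProj is given in Quake3 units, 8 units = 1 foot
  // r_zProj 32  -> projection plane  4 feet (~1.2 m) in front of the observer
  // r_zProj 64  -> projection plane  8 feet (~2.4 m)
  // r_zProj 128 -> projection plane 16 feet (~4.9 m)
  seta r_zProj "64"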

A second cvar that influences the way stereo images are created is r_stereoSeparation. The actual eye separation used by the engine, in Quake3 standard units, is obtained by the division r_zProj / r_stereoSeparation. If you have read the first part of the text linked above, you will know that for objects at large distances the stereo separation of those objects asymptotically approaches your eye separation, which is about five centimeters or two inches for most people. As that is quite a large separation, the visual system of your brain may have difficulties merging the two images it gets from your eyes into one single three-dimensional impression; more on that in a moment. So, at the beginning, a good value for r_stereoSeparation is something like 64, which will already give you a depth illusion for near objects. Once you get used to it, you can start decreasing that value to maybe 32, 16 or even 8.
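
To make the division concrete, here is the plain arithmetic for the values mentioned above, assuming r_zProj is left at 64:

  // effective eye separation in Quake3 units = r_zProj / r_stereoSeparation
  // r_stereoSeparation 64 -> 64 / 64 = 1 unit   (mild effect, good starting point)
  // r_stereoSeparation 16 -> 64 / 16 = 4 units  (stronger effect)
  // r_stereoSeparation  8 -> 64 /  8 = 8 units  (strong effect, one foot at the scale above)
  seta r_stereoSeparation "64"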

Now, why can large stereo separations no longer be merged so easily? The reason is that the brain uses several depth cues to perceive the depth of the objects it sees; one of them is the aforementioned stereo separation, which is a result of the eyes being set apart. Since the 3D images created by the engine are only a projection of a scene onto a two-dimensional plane, another very important piece of information your brain needs to merge these images gets lost: focus. Your brain knows that to view distant objects your eyes firstly need to be rotated so that their view directions are parallel, and secondly the lens must be relaxed so that distant objects are in focus. As this second piece of information is lost, you will have a harder time merging the images the further apart they are.

However, there is a way to make up for this loss of information by providing a third depth cue: movement. When you move, your brain knows that distant objects appear to move more slowly than near objects. This means that if you keep moving your character in Quake3, you will be able to merge the images again even if you have decreased r_stereoSeparation to as much as six, which corresponds to about your real eye separation! Another curious thing you may notice is that the image appears blurry, for the reason given above: your brain expects the objects on screen to be very far away, so you lose focus, because the actual focal distance your eyes need to adjust to is the distance to the computer screen right in front of you.

Another nice effect is objects appearing to be in front of your computer screen, for a "coming at ya" effect. This is generally more difficult to achieve; the best way is to put the object in question in the center of your screen and move up close to it. Also make sure to set r_zProj to a value of at least 64 or even 128. The object must not touch the edge of your display, as the illusion fails there. Try grabbing the object with your hand: you'll get a funny "surprise" effect, as your brain expects your hand to touch something solid instead of thin air.
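
A hedged starting point for experimenting with this pop-out effect, using the numbers from the advice above:

  // push the projection plane further away so nearby, centered objects end up in front of it
  seta r_zProj "128"
  // then walk up close to an object in the middle of the screen and keep it away from the display edges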

3D Crosshair

For larger stereo separations, focusing on distant objects makes you see two crosshairs, which makes targeting very difficult. This has been remedied: when rendering in stereo mode, the crosshair is added as a sprite into the 3D scene at the same distance as the object you are looking at. This makes accurate targeting possible, but it only works if both server and client use updated VMs. We'll release newer VMs containing this change in future versions of ioquake3; in the meantime you will need to deal with two crosshairs.

A few final thoughts

As I already said, this 3D stuff will probably not be suitable for competitive play. But it may be a nice thing for the machinima community, or if you just want to see your favorite maps in 3D for once. I don't know whether non-anaglyph technology like shutter glasses improves the situation, as I don't have the hardware for that, but if you do, we'd love to hear what you have to say about it via one of the means we provide for contacting us.
