
OmniStereo for Panoramic Virtual Environment Display Systems

Andreas Simon, Fraunhofer Institute, IMK, [email protected]
Randall C. Smith, General Motors R&D, [email protected]
Richard R. Pawlicki, R.R.P. and Associates

Abstract

This paper discusses the use of omnidirectional stereo for panoramic virtual environments. It presents two methods for real-time rendering of omnistereo images.

Conventional perspective stereo is correct everywhere in the visual field, but only in one view direction. Omnistereo is correct in every view direction, but only in the center of the visual field, degrading in the periphery. Omnistereo images make it possible to use wide field of view virtual environment display systems, like the CAVE™, without head tracking, and still show correct stereoscopic depth over the full 360º viewing circle. This allows the use of these systems as true multi-user displays, where viewers can look around and browse a panoramic scene independently. Because there is no need to rerender the image according to view direction, we can also use this technique to present static omnistereo images, generated by offline rendering or real image capture, in panoramic displays.

We have implemented omnistereo in a four-sided CAVE™ and in a 240º i-Cone™ curved screen projection system. Informal user evaluation confirms that omnistereo images present a seamless image with correct stereoscopic depth in every view direction without head tracking.

1. Introduction

Panoramic virtual environment display systems, such as the CAVE™ [1] or curved screen systems like the i-Cone™ [2], are designed to present a stereoscopic image with a very wide (up to 360º) field of view. In these systems, a number of graphics channels combine images to form one seamless image on the screen. This image is continuously updated according to the view position and view direction of a single head-tracked viewer.

Fig. 1. Viewers browsing a panoramic scene in the 240º i-Cone™ display

Stereo images are computed and drawn for two eye points in each channel. Each eye point and channel determines an off-axis perspective projection. One can consider a camera located at the eye point, oriented towards the screen. The view frustum of that camera extends from the eye point through the screen and beyond. The associated projection captures objects within the frustum as the screen's image. For a box surrounding the eye point, the view frusta associated with the graphics channels meet exactly at the edges, forming a canonical image [3]. The canonical image captures a scene in every direction from a single center of projection (COP). Two such canonical images, one for each eye point, produce a conventional stereo image of the scene.

For the tracked viewer in a panoramic virtual environment, the two centers of projection of the stereo image on the screen and the physical position of his eye points inside the display system coincide. This produces a completely accurate view of the scene, with correct stereoscopic depth over the full field of view. As this image is continuously rerendered and updated, the tracked viewer can look around anywhere in the scene and use the full field of view of the display system.

Other viewers, facing in a different direction than the head-tracked viewer, necessarily perceive incorrect stereoscopic depth from a conventional stereo image. They are presented with inaccurate or even disturbing visual disparities. View directions up to 90º from the original view direction introduce unwanted vertical disparities and diminish horizontal disparity. Viewers facing backwards see a "pseudo-stereo" image with left and right eye images erroneously swapped. A similar problem occurs when the view direction of a stereo image is "frozen" and not dynamically updated with the head-tracked viewer. Now all viewers are presented with a more or less incorrect stereo image.

Please see the color plates at the end of the paper.

Conventional perspective stereo is correct everywhere in the visual field, but only in one view direction. It is therefore necessary to continuously update the stereo image according to the view direction of the viewer, just to maintain correct stereo viewing. In a typical application scenario, where a small group shares a panoramic display system, maintaining correct stereo for everyone becomes virtually impossible, since viewers face in different directions (Figure 1). In this paper, we propose the use of omnistereo images for panoramic virtual environment displays. Omnistereo, based on a multiperspective technique called Circular Projection [10,11], presents a different trade-off: it is correct only in the center of the visual field, the area of acute stereo vision, and worsens in the periphery. However, because of rotational symmetry, it is correct for every view direction over the full 360º horizontal field of view.

2. Background and contribution

Techniques to look around in a panoramic image are ubiquitous today, and commercial products make it possible to pan around car interiors and hotel rooms on the World Wide Web. These techniques are not designed for panoramic displays, since they produce a single view at a time, which can either be rendered in real-time from models, or assembled from pre-computed photo-realistic images or actual photographs [3,4].

Multi-perspective techniques [8], Layered Depth Images [5], Lightfields [6], and Lumigraphs [7] make it feasible to reconstruct new images for any viewpoint contained within a region of space. In principle, these approaches offer a promising alternative to traditional rendering, since the source data can be highly realistic images, and the computational time to produce a new image for a new view is constant, irrespective of the scene complexity. It is dependent only on image resolution. In all the cases above, however, tracking would still be necessary to follow the viewpoints as they move so a computer could create new images dynamically; stereoscopic versions of these methods would still result in single valid view directions per display screen; and dynamically changing scenery presents a problem.

Finally, without parallel processing implementations of these techniques, we may not soon achieve real-time, stereoscopic, high-resolution look-around. The stereoscopic hologram [9] is in the spirit of the current effort. It provides good stereoscopic look-around, and can produce new views in real-time, performing optical reassembly from multiple images.

An early approach by Nelson Max produced omnidirectional stereo imagery in a true panoramic display, an OMNIMAX dome [12]. The method depends on ray tracing and, as such, is not appropriate for our real-time rendering needs. It does, however, provide a simple descriptive process (see Section 3.4) to create an omnistereo projection.

This kind of projection has more recently been formalized by Peleg et al. and given the name Circular Projection [10,11]. Coming from the Computer Vision literature, applications have focused on omnistereoscopic data acquisition from real cameras for scene analysis. Strips of images are acquired from a camera for each different viewing direction around a circle and composed to provide a stereo panorama.

The work of Max and Peleg is not well known in the VR community. We introduce their ideas in the context of real-time rendering and panoramic virtual environment display systems. The main technical contribution of this paper is to present novel real-time techniques for rendering omnistereo images, the Object Warp method in particular.

3. Conceptual model

Before describing implementations in Section 4, this section provides the necessary terminology and concepts.

3.1. Binocular projections

Assume two projections having different COPs that create images on the same screen. A 3D model point will generally project to two different screen locations (different pixels in the two screen-aligned images). The difference, or image disparity, is crucial to correct recovery of 3D depth and scale when the two images are viewed stereoscopically, and depends on binocular viewing position and direction. It is therefore impossible for multiple people viewing the same images from different positions and directions to all correctly reconstruct the 3D scene; it is, however, possible for them to do so accurately (if not perfectly) given some constraints. In the following it will be suggested that accurate stereo reproduction is only effective, and therefore only necessary, around a narrow field of view in the viewing direction.
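To make this concrete, the following sketch (illustrative only, not code from the paper; the frame with the screen in the plane z = 0 and the viewer on the positive z side is an assumption) projects one model point from two eye positions and reports the resulting image disparity:

import numpy as np

def project_to_screen(eye, point):
    # Intersect the ray from the eye through the model point with the
    # screen plane z = 0 (the eye is on the positive z side).
    t = eye[2] / (eye[2] - point[2])
    return eye + t * (point - eye)

# Hypothetical viewing geometry: eyes 6.5 cm apart, 2 m from the screen,
# model point 1 m beyond the screen (all units in meters).
left_eye = np.array([-0.0325, 0.0, 2.0])
right_eye = np.array([0.0325, 0.0, 2.0])
p = np.array([0.0, 0.0, -1.0])

i_left = project_to_screen(left_eye, p)
i_right = project_to_screen(right_eye, p)
print("image disparity (m):", i_right[:2] - i_left[:2])  # horizontal offset only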

3.2. Visual system

People usually turn their heads when fixating on points outside approximately 20º of their head-centric median plane (Figure 2). While there is a much wider binocular visual field, foveal, or detailed, stereo vision is usually attained by turning the head and limiting the amount of eye rotation.

The consequence of turning the head is a translation of the eyes. Viewing direction (head direction) will be defined by the median plane fixed to the head between the eyes. Head motion will be limited to rotation in the horizontal plane about the midpoint between the eyes.

Fig. 2. Comfortable viewing range for eye motion (version)

In the following figures, a "Screen" is assumed to have two images superimposed on it: one for the left eye and one for the right. A 3D scene reconstructed from a pair of stereo images will have distortions in depth and scale if viewed from the wrong location and direction. Model distortion will be evaluated within the version limit range. The goal is to create projections for every horizontal viewing direction so that model distortion in scene reconstruction is minimal.

3.3. Perspective projection

Figure 3a shows an overhead view of two perspective projections on a single screen. The "model" consists of two simple dark lines composed of a few points. The projections of a particular 3D model point to the screen (as left and right eye image points) are shown. All model points are projected using this single viewing (head) position and orientation.

The idealized stereoscopic reproduction of the 3D scene occurs when the viewer's two eyes are located in the same fixed positions as those used in projecting the model. The intersection of the two rays from the eyes to corresponding screen image points reproduces the position of a 3D model point. In the case of Figure 3a, the model is undistorted because the points (and therefore the model lines) are exactly recreated. The eyes are exactly where they should be.

Fig. 3. Model distortion increases with head rotation for off-axis projection

If the eye point locations are changed (by rotating the head), but the images are not changed (Figure 3b), the intersection points change. Incorrectly viewed screen disparities lead to reconstruction of a distorted object, shown by the dashed curves. Note in particular that the distorted model points do not match the corresponding true model points anywhere, even along the viewing direction.
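The reconstruction step can be sketched in a few lines (again illustrative, not from the paper): given fixed screen image points and a pair of eye positions, the recovered point is the closest approach of the two viewing rays. Feeding it rotated eye positions with unchanged image points produces the distorted points of Figure 3b; the midpoint rule for skew rays anticipates the discussion in Section 4.2.

import numpy as np

def reconstruct_point(eye_l, img_l, eye_r, img_r):
    # Rays run from each eye through its fixed screen image point.
    u, v = img_l - eye_l, img_r - eye_r
    w = eye_l - eye_r
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w, v @ w
    denom = a * c - b * b                 # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    # Midpoint of the shortest segment between the two (possibly skew) rays;
    # this coincides with the true intersection when the rays meet.
    return 0.5 * ((eye_l + s * u) + (eye_r + t * v))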

3.4. Circular projection

The use of Circular Projections is illustrated in Figure 4 and is described conceptually as:

Turn your head to point in each direction in the horizontal plane (thus moving the eyes in a circle), and capture the model image for each eye in that direction.

In Figure 4, each model point is projected to the screen, and the image points saved, when the head is pointed in the same azimuthal direction as the model point.
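An illustrative sketch of this capture rule (assuming a head rotating about the origin, looking down -Z at zero azimuth, with interocular distance 2 * half_ipd; the names are ours) computes the eye positions used for a given model point:

import numpy as np

def circular_projection_eyes(p, half_ipd=0.0325):
    # Head direction D: the azimuth of the model point, elevation ignored.
    a = np.arctan2(p[0], -p[2])
    # With the head turned to azimuth a, the eyes sit on the viewing circle
    # of radius half_ipd, displaced along the head's right vector.
    right = np.array([np.cos(a), 0.0, np.sin(a)])
    return -half_ipd * right, half_ipd * right   # left eye, right eye

Each model point is then projected from the eye pair returned here; this moving center of projection is what distinguishes the Circular Projection from a fixed-COP perspective image.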

One consequence of this projection is that the entire model is never reconstructed perfectly (as it can be for the one specific viewing direction in Figure 3a), but it is always correct where you are looking (which is not true for Figure 3b).

For a panoramic display, it is preferable to have a projection that, by construction, induces a correct sense of the 3D model at every viewing direction, confining perceived model distortion to peripheral vision.

Fig. 4. Model distortion only in the visual periphery with the Circular Projection

4. Implementation

A ray-tracing program can be adapted to implement omnistereo directly using Circular Projections from the prescription in Section 3.4. This section describes alternative techniques that are more suitable for real-time rendering.

Section 4.1 describes a simple technique that renders the scene repeatedly for a small number of discrete viewing directions.

Section 4.2 provides a novel way to pre-warp the original 3D model, so that it may be rendered and projected once (per screen) with a standard off-axis projection to achieve omnistereo. This model pre-warping technique, called Object Warp, is appropriate for real-time implementation in a GPU, and is independent of the rendering system of the application.

4.1. Multiple View Method

The standard off-axis projection implements a projection from a single center of projection (COP) and view direction (Figure 5a). Circular Projection (Figure 5c) is the limiting case of using every viewing direction in the horizontal plane (with the center of projection moving on the viewing circle). One can approximate omnistereo images with a limited number of views. Figure 5b shows the (inadequate) approach where only one view per channel is used to approximate Circular Projection. In this case, the view is rotated 90º each time to face each of the three vertical screens of a 4-wall CAVE. The view frusta connecting the moving center of projection to the individual screens do not match at edges and surfaces (as they do when there is a single COP in 5a). There are regions on the viewer side of the screen not covered by the frusta of adjacent views, leaving gaps, and regions beyond the screen where the frusta overlap. Example images in the color plate (top) show discontinuities in the model surfaces. The problem has nothing to do with stereo, but rather with the use of multiple centers of projection in a single panoramic image.

Using a larger number of view directions on the viewing circle will obviously reduce the size of the discontinuities between adjacent views. But how many views are enough? Reducing the position error between adjacent slices (gap/overlap) below half a pixel in screen space will remove most visible artifacts from the image. A pixel in common virtual environment display systems has a size of 1-2mm. We can estimate that for common viewing conditions (objects typically between infinity and ½ screen distance), the approximation error is about 1mm for 20º views and is reduced to 0.5mm for 15º views. Using 15º views will lead to less than half a pixel of error for common viewing conditions in most virtual environment displays. Our implementation (Section 5) of the multiple view technique confirms this approximation.

Fig. 5. Increasing number of views
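The structure of the multiple-view method can be sketched as follows (illustrative only; render_slice is a hypothetical callback, not part of our actual implementation, standing in for code that sets up a conventional off-axis frustum for one eye and draws the scene clipped to one slice):

import numpy as np

def render_channel_multiview(render_slice, channel_center_deg,
                             channel_fov_deg=60.0, view_step_deg=15.0,
                             half_ipd=0.0325):
    # Split the channel into discrete views, e.g. four 15-degree slices
    # for a 60-degree channel.
    n_views = int(round(channel_fov_deg / view_step_deg))
    start = channel_center_deg - channel_fov_deg / 2.0
    for i in range(n_views):
        lo = start + i * view_step_deg
        hi = lo + view_step_deg
        a = np.radians((lo + hi) / 2.0)            # slice view direction
        right = np.array([np.cos(a), 0.0, np.sin(a)])
        for sign, eye_name in ((-1.0, "left"), (1.0, "right")):
            eye = sign * half_ipd * right          # COP on the viewing circle
            # Hypothetical callback: draw the scene for this eye,
            # clipped to the azimuth range [lo, hi].
            render_slice(eye_name, eye, lo, hi)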


4.2. Object Warping

In Object Warping, the projection (I) of an object (point P) is created from one viewpoint (E). When the viewpoint is changed, the 3D model is warped so it has the same projection. The application of Object Warping to the creation of omnistereo images is presented in the next section. Object pre-warping has also been proposed in the context of correcting for (deliberately) mismatched viewer/rendering eye separation [13].

The two cases in Figure 6 show the 3D point projected to the screen from the same side of the screen as the eye, or from the opposite side.

Fig. 6. Object warping

The quantities

d = Z_E - Z_I > 0
c = Z_P - Z_I

are the signed distances from eye to screen, and from 3D model point to screen, respectively. Given the coordinate system definitions, the distances are simply Z coordinate differences, and by construction, d is always positive. By similar triangles, the following relationships are indicated:

|c/d| = PI/EI = P'I/E'I = P''I/E''I

ΔP = (c/d) ΔE

Finally,

P' = P + ΔP.

The locus of point P' (or P'') as the eye moves is a scaled, possibly reversed, version of the eye locus. Any new solution for P merely has to be on the line of sight to the image point I. For binocular projections and rigid head translations (with the eyes in a plane parallel to the screen), the given solution is the unique point of intersection for the two rays from the fixed projected image points to the new eye positions. For head rotations, an intersection point does not generally exist. The midpoint of the shortest perpendicular segment between the resulting skew lines might be used in that case, but we have not tried it because of the extra computation needed. We chose instead to use the formulae above and create two warped models, one for each eye. The solution is simple, fast, and maintains object proportions, as well as being the same solution for head translations.
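In code, the warp of a single point is a direct transcription of these formulae (illustrative sketch; the screen-aligned frame with the screen at z = z_screen and the names are assumptions):

import numpy as np

def warp_point(p, eye_old, eye_new, z_screen=0.0):
    # d = Z_E - Z_I: eye-to-screen distance, positive by construction.
    d = eye_old[2] - z_screen
    # c = Z_P - Z_I: point-to-screen distance, signed (either side of the screen).
    c = p[2] - z_screen
    # dP = (c/d) * dE, then P' = P + dP. Exact for eye translations parallel
    # to the screen; applied to head rotations as well, per the text above.
    return p + (c / d) * (eye_new - eye_old)

For an eye displacement parallel to the screen, the warped point P' projects from the new eye position to exactly the image point I that P projects to from the old one.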

4.2.1. Panoramic Object Warping. The use of object warping to create an omnistereo projection is now described.

Define a single starting head orientation to be used for the final off-axis projections (for instance, as in Figure 5a). This defines the starting eye positions.

For each model point P, rotate the head toward it, neglecting elevation angle. Call the head direction D.

For the point P, compute distance c, and for the current eye location compute the distance d. As given in Figure 6, c and d are simple differences of Z coordinates.

Compute how the eye location changes when the head rotates back to the start position. This is ΔE.

Now calculate ΔP and P' and change the model. When projected from the starting position for the head, P' projects to the desired image point I (the same geometric result as projecting P with head direction D).

A similar operation for the other eye results in a second image point for P, and a new warped point. P will be recovered from the two image points whenever the head direction is D.

Since the same starting head position is used for all points, the final projection is simply the off-axis projection applied to the pre-warped model.
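Putting the steps together, an illustrative per-vertex sketch of Panoramic Object Warping (frame and names are assumptions, consistent with the earlier sketches: starting head direction -Z, head rotating about the origin, screen plane z = -screen_dist):

import numpy as np

def panoramic_object_warp(vertices, eye_sign, screen_dist, half_ipd=0.0325):
    # eye_sign: -1 for the left eye, +1 for the right. The model is warped
    # once per eye, and each warped copy is rendered with the standard
    # off-axis projection from the starting eye position.
    warped = np.array(vertices, dtype=float)
    eye_start = np.array([eye_sign * half_ipd, 0.0, 0.0])
    for i, p in enumerate(warped):
        a = np.arctan2(p[0], -p[2])              # head direction D toward P
        # Eye position after turning the head to azimuth a.
        eye_rot = np.array([eye_sign * half_ipd * np.cos(a), 0.0,
                            eye_sign * half_ipd * np.sin(a)])
        delta_e = eye_start - eye_rot            # eye motion back to the start
        d = eye_rot[2] + screen_dist             # eye-to-screen distance
        c = p[2] + screen_dist                   # point-to-screen, signed
        warped[i] = p + (c / d) * delta_e        # dP = (c/d) * dE, P' = P + dP
    return warped

Because each vertex is warped independently, this loop maps directly onto a per-vertex GPU program, which is the basis of the vertex shader implementation mentioned in Section 6.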


5. Evaluation and results

We have implemented the multiple-view method, as well as the object-warp technique, to generate omnistereo images and have evaluated them in two panoramic virtual environment display systems using active stereo projection: a four-channel 240º i-Cone™ curved screen display at Fraunhofer Institute, IMK in Sankt Augustin, Germany, and a four-sided CAVE™ display at GM R&D.

The multiple-view method for omnistereo rendering was implemented on an Onyx2 with IR2 graphics, driving the i-Cone™ system at 1440x1320 pixels per channel with a resulting pixel size of about 2mm. It uses four 15º views per 60º channel and leads to real-time capable interactive performance. Rendering a channel in four views is typically 50% slower than rendering the full channel in one pass for conventional stereo. We have shown the omnistereo implementation in the i-Cone™ display system to more than 50 visitors and conducted an informal evaluation with them, most of them expert users of immersive visualization systems. Participants, mostly in groups of 3 to 10, were presented with the same scenario (e.g. panoramic architectural scenes like in Figure 1) and were asked to compare conventional stereo, rendered with fixed orientation, versus the multiple-view implementation of omnistereo. Before evaluation, participants were not told the purpose or background of the technique.

All users noticed a "better quality stereo" and "wider field of view" with omnistereo. Some users commented that "the image looks deeper" and that they were "looking around more" in the omnistereo version compared to conventional stereo. The few negative comments received for omnistereo concerned the noticeably lower rendering performance. Individual participants were specifically asked about problems in stereo perception or visible artifacts in the image with the omnistereo implementation, but no negative reports were received.

At GM R&D, omnistereo images were rendered for an 8-foot cube CAVE™ with four walls (three vertical walls and a floor). Each screen has a resolution of 1280x1024 pixels, with a resulting pixel size of about 2mm. While IMK testing mainly addressed architectural-type scenes outside the display screen, in this case the typical scene is the interior of a car as seen from the driver's position. The model data is considerably closer, inside the display space with the viewer in most cases. The floor display is important (in contrast with the i-Cone™, which has no floor projection). The color plate (bottom) shows the result of applying Object Warp on a vehicle interior from the driver's position. Per-pixel-column multiple view renderings were also tested and produced comparable results.

Objects outside the display, and inside the display up to half the distance to the viewer, appear visually acceptable, though no user testing has yet been done. Closer objects show noticeable distortion.

Objects nearly under the observer can be difficult to view as the projected data all comes together at a pole. In our particular case this problem is eased since the viewer is seated in a physical car seat, and there is no need to look straight down.

6. Conclusion and future work

In this paper, we have discussed the use of omnistereo images for panoramic virtual environment displays. The generation of omnistereo images is based on a multiperspective technique called Circular Projection. We have presented two techniques to render omnistereo images in real-time on current graphics hardware. The multiple-view technique constructs a seamless image from a limited number of separate, conventional off-axis projection stereo images. A limited number of four to six views per channel is sufficient to reduce the approximation error between adjacent views to less than half a pixel in image space for typical object distances. With four views per channel, the rendering overhead is typically around 50% for the multiple-view technique. The object-warp technique is suitable for use as a separate pre-render operation, and can be combined with any conventional perspective projection rendering algorithm. We have developed a GPU-based Cg vertex shader [14] implementation of the object-warp algorithm with a rendering overhead of less than 20%.

Informal user evaluation confirms that omnistereo is generally capable of producing seamless stereo images with correct stereoscopic depth in every view direction without head tracking. It also indicates that omnistereo images perform substantially better than conventional stereo images with a single, fixed view direction in wide field of view virtual environment displays.

In some situations, however, for objects very close to the viewer or for objects on the CAVE™ floor, the quality of omnistereo is not sufficient: object distortion is too great, even for a relatively narrow field of view. Further, as Circular Projection is a multiperspective technique, lighting is problematic. What should the lighting be for an image created from pieces corresponding to different views, then viewed from a single perspective, especially with strong specular highlights?

Overall, the results we have obtained with omnistereo images in virtual environment display systems are very encouraging. In particular, in large-scale displays, which are typically used by groups of viewers without head tracking, omnistereo imaging performs substantially better than conventional stereo. It delivers correct stereoscopic depth over all view directions in a wide field-of-view display. We have started to use omnistereo rendering on a regular basis for group demonstrations in our i-Cone™ system. For future work, we feel the results of our informal user evaluation warrant a formal investigation of the omnistereo image in panoramic displays, comparing its perceptual quality directly to that of a head-tracked stereo image.

7. Acknowledgements

We would like to express appreciation to Dan Sandin for several thoughtful conversations on this topic with one author (RCS); and in particular, for pointing us to the work by Nelson Max.

8. References

[1] C. Cruz-Neira, D. Sandin, T. DeFanti, R. Kenyon, and J. Hart, "The CAVE: Audio Visual Experience Automatic Virtual Environment," Communications of the ACM, 35(6), June 1992, pp. 64-72.

[2] A. Simon and M. Göbel, "The i-Cone™ – A Panoramic Display System for Virtual Environments," Pacific Graphics '02, Conference Proceedings, September 2002, pp. 3-7.

[3] L. McMillan and G. Bishop, "Plenoptic Modeling: An Image-Based Rendering System," SIGGRAPH '95, Conference Proceedings, Los Angeles, California, August 1995.

[4] S. Chen, "QuickTime VR – An Image-Based Approach to Virtual Environment Navigation," SIGGRAPH '95, Conference Proceedings, Los Angeles, California, August 1995.

[5] J. Shade, S. Gortler, L. He, and R. Szeliski, "Layered Depth Images," SIGGRAPH '98, Conference Proceedings, Orlando, Florida, July 1998.

[6] M. Levoy and P. Hanrahan, "Light Field Rendering," SIGGRAPH '96, Conference Proceedings, New Orleans, Louisiana, August 1996.

[7] S. Gortler, R. Grzeszczuk, R. Szeliski, and M.F. Cohen, "The Lumigraph," SIGGRAPH '96, Conference Proceedings, New Orleans, Louisiana, August 1996.

[8] P. Rademacher and G. Bishop, "Multiple-Center-of-Projection Images," SIGGRAPH '98, Conference Proceedings, Orlando, Florida, July 1998.

[9] M.W. Halle, S.A. Benton, M.A. Klug, and J.S. Underkoffler, "The Ultragram: A Generalized Holographic Stereogram," Practical Holography V, S.A. Benton, ed., SPIE Vol. 1461, February 1991, pp. 142-155.

[10] S. Peleg and J. Herman, "Panoramic Mosaics by Manifold Projection," IEEE Conference on Computer Vision and Pattern Recognition, Conference Proceedings, June 1997.

[11] S. Peleg and M. Ben-Ezra, "Stereo Panorama with a Single Camera," IEEE Conference on Computer Vision and Pattern Recognition, Conference Proceedings, Ft. Collins, Colorado, June 1999.

[12] N. Max, "Computer Graphics Distortion for IMAX and OMNIMAX Projection," Nicograph '83, Conference Proceedings, December 1983, pp. 137-159.

[13] Z. Wartell, L.F. Hodges, and W. Ribarsky, "Balancing Fusion, Image Depth and Distortion in Stereoscopic Head-Tracked Displays," SIGGRAPH '99, Conference Proceedings, Los Angeles, California, August 1999.

[14] W.R. Mark, R.S. Glanville, K. Akeley, and M.J. Kilgard, "Cg: A System for Programming Graphics Hardware in a C-like Language," SIGGRAPH '03, Conference Proceedings, San Diego, California, July 2003.


Breaks in images due to per-screen COPs

Breaks are gone with per-pixel-column COPs

Color plate of paper OmniStereo for Panoramic Virtual Environment Display Systems