
What’s Real About Virtual Reality?


As usual with infant technologies, realizing the early dreams for virtual reality (VR) and harnessing it to real work has taken longer than the initial wild hype predicted. Now, finally, it’s happening.

In his great invited lecture in 1965, “The Ultimate Display,” Ivan Sutherland laid out a vision1 (see the sidebar), which I paraphrase:

Don’t think of that thing as a screen, think of it as a window, a window through which one looks into a virtual world. The challenge to computer graphics is to make that virtual world look real, sound real, move and respond to interaction in real time, and even feel real.

This research program has driven the field ever since.

What is VR? For better or worse, the label virtual reality stuck to this particular branch of computer graphics. I define a virtual reality experience as any in which the user is effectively immersed in a responsive virtual world. This implies user dynamic control of viewpoint.

VR almost worked in 1994. In 1994, I surveyed the field of VR in a lecture that asked, “Is There Any Real Virtue in Virtual Reality?”2 My assessment then was that VR almost worked—that our discipline stood on Mount Pisgah looking into the Promised Land, but that we were not yet there. There were lots of demos and pilot systems, but except for vehicle simulators and entertainment applications, VR was not yet in production use doing real work.

Net assessment—VR now barely works. This year I was invited to do an up-to-date assessment of VR, with funding to visit major centers in North America and Europe. Every one of the component technologies has made big strides. Moreover, I found that there now exist some VR applications routinely operated for the results they produce. As best I can determine, there were more than 10 and fewer than 100 such installations as of March 1999; this count again excludes vehicle simulators and entertainment applications. I think our technology has crossed over the pass—VR that used to almost work now barely works. VR is now really real.

Why the exclusions? In the technology comparison between 1994 and 1999, I exclude vehicle simulators and entertainment VR applications for different reasons.

Vehicle simulators were developed much earlier and independently of the VR vision. Although they today provide the best VR experiences available, that excellence did not arise from the development of VR technologies, nor does it represent the state of VR in general, because of specialized properties of the application.

Entertainment I exclude for two other reasons. First, in entertainment the VR experience itself is the result sought, rather than the insight or fruit resulting from the experience. Second, because entertainment exploits Coleridge’s “willing suspension of disbelief,”3 the fidelity demands are much lower than in other VR applications.

Technologies

Four technologies are crucial for VR:4,5

• the visual (and aural and haptic) displays that immerse the user in the virtual world and that block out contradictory sensory impressions from the real world;
• the graphics rendering system that generates, at 20 to 30 frames per second, the ever-changing images;
• the tracking system that continually reports the position and orientation of the user’s head and limbs; and
• the database construction and maintenance system for building and maintaining detailed and realistic models of the virtual world.
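The interplay of these four components can be sketched as a per-frame loop (a minimal illustration only; `read_tracker`, `render`, and `display` are hypothetical stand-ins, not any particular system’s API):

```python
import time

def run_vr_frame_loop(read_tracker, render, display, target_fps=20, frames=3):
    """Minimal sketch of a VR frame loop: poll the tracker, render the
    virtual world from the reported viewpoint, and present the image,
    trying to hold the 20-to-30-frames-per-second budget cited above."""
    budget = 1.0 / target_fps              # e.g., 50 ms per frame at 20 fps
    for _ in range(frames):
        start = time.monotonic()
        pose = read_tracker()              # head position and orientation
        image = render(pose)               # regenerate the ever-changing image
        display(image)                     # present to HMD or projection screens
        elapsed = time.monotonic() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)   # idle out the remainder of the frame

# Trivial usage with stub components:
frames_shown = []
run_vr_frame_loop(
    read_tracker=lambda: (0.0, 0.0, 0.0),
    render=lambda pose: f"frame rendered at pose {pose}",
    display=frames_shown.append,
    target_fps=100,
    frames=3)
```

In a real system the tracker, renderer, and display run concurrently and in pipeline; this serial loop only makes the division of labor among the four technologies concrete.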

0272-1716/99/$10.00 © 1999 IEEE

Special Report

16 November/December 1999

This personal assessment of the state of the VR art concludes that whereas VR almost worked in 1994, it now really works. Real users routinely use production applications.

Frederick P. Brooks, Jr.

University of North Carolina at Chapel Hill



Four auxiliary technologies are important, but not nearly so crucial:

• synthesized sound, displayed to the ears, including directional sound and simulated sound fields;
• display of synthesized forces and other haptic sensations to the kinesthetic senses;
• devices, such as tracked gloves with pushbuttons, by which the user specifies interactions with virtual objects; and
• interaction techniques that substitute for the real interactions possible with the physical world.

Table 1 summarizes the technology progress since 1994.

Displays

Display technology has advanced very rapidly, pulled along by the television, presentation-projection, and LCD-device markets, rather than just the still-small VR market. As VR was developing, much ink was spilled over the relative merits of various formats of displays: head-mounted displays (HMDs), CAVE-like (Cave Automatic Virtual Environment) surround projectors, panoramic projectors, workbench projectors, and desktop displays.

Most workers consider desktop displays not to be VR because they

• hardly block out the real world,
• do not present virtual-world objects in life size, and therefore
• do not create the illusion of immersion.

To be sure, one could say the same for workbenches, but somehow the result is not at all the same. Workbenches were originally designed for human-body models, which they do display life-size. Moreover, they subtend a large visual angle and hence achieve substantial real-world blocking. When used for battle planning, for example, the workbench in fact represents in life size the sandtable devices it displaces.

The debate about display format seems to be over. People recognize that each format proves clearly superior for some set of real applications, yet none dominates. In our laboratory at the University of North Carolina at Chapel Hill, we use them all—HMDs, surround projection, panoramic projection, and workbenches. Each has its own peculiar merits and disadvantages.

Head-mounted displays. The most noticeable advances in HMDs have occurred in resolution, although color saturation, brightness, and ergonomics have also improved considerably. In 1994, one had a choice of costly and cumbersome CRT HMDs, which had excellent resolution and color, or economical LCDs, which had coarse resolution and poor saturation. Today economical LCDs have acceptable resolution (640 × 480 tricolor pixels) and good color saturation.

Field of view still poses a major problem with HMDs, with 45-degree full-overlap about the industry median, at prices in the $5,000 range.

CAVEs and kin. Many major VR installations now use the surround-projection technology first introduced in the University of Illinois-Chicago Circle CAVE. From three to six faces of a rectangular solid are fitted with rear-projection screens, each driven by one of a set of coordinated image-generation systems.

Standard off-the-shelf projector resolution is now up to 1280 × 1024, better than SVGA (super video graphics array) and even XGA (extended graphics array). In a 10-foot cave, this gives an angular resolution of about 4 minutes of arc; human visual acuity is 1 minute of arc for 20/20 vision.
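The arithmetic behind that figure is easy to check (a sketch; the 10-foot wall and a viewer at the cave’s center, 5 feet from the screen, are the stated assumptions):

```python
import math

def angular_resolution_arcmin(pixels_across, screen_width_ft, viewing_distance_ft):
    """Arc-minutes subtended by one pixel, averaged over the field of view,
    for a viewer facing the center of a flat screen."""
    half_angle = math.atan((screen_width_ft / 2) / viewing_distance_ft)
    field_of_view_deg = 2 * math.degrees(half_angle)
    return field_of_view_deg * 60 / pixels_across

# 10-foot CAVE wall, viewer at the cave center (5 ft away), 1280 pixels across:
res = angular_resolution_arcmin(1280, 10, 5)
print(f"{res:.1f} arc-minutes per pixel")   # about 4, versus 1 for 20/20 acuity
```

Each wall subtends 90 degrees from the center, so 1280 pixels across 5400 arc-minutes gives roughly 4.2 arc-minutes per pixel, matching the figure in the text.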

The principal advantages of surround-projection displays are

• a wide, surrounding field of view, and
• the ability to give a shared experience to a small group (of whom one or none are head-tracked).

IEEE Computer Graphics and Applications 17

Table 1. Progress in VR technologies.

1994                                       1999
Swimming due to lags                       Still a major problem for HMDs and caves, not screens
Displays: narrow field of view or          HMD resolution of 640 by 480 real pixels now affordable;
  poor resolution or high cost               projector resolution 1280 by 1024
Limited model complexity                   Rendering limited mostly by cost
Poor registration in augmented reality     Augmented reality dynamic registration still hard; 1 ms = 1 mm error
Tethered ranging                           Wide-area tracking available; wireless not yet in use
Bad ergonomics                             Ergonomics getting there
Tedious model building                     Model engineering a major task

Sutherland’s 1965 Vision

• Display as a window into a virtual world
• Improve image generation until the picture looks real
• Computer maintains world model in real time
• User directly manipulates virtual objects
• Manipulated objects move realistically
• Immersion in virtual world via head-mounted display
• Virtual world also sounds real, feels real


The principal disadvantages are

• the cost of multiple image-generation systems,
• space requirements for rear projection,
• brightness limitations due to large screen size, which results in scenes of approximately full-moon brightness and hinders color perception,
• corner and edge effects that intrude on displayed scenes, and
• reduced contrast and color saturation due to light scattering, especially from opposing screens.

This latter problem, which I notice in UNC’s surround-projection VR facility and in all I have visited, seems inherent in the geometry itself. However, careful choice of screen material (Carolina Cruz-Neira of Iowa State University has an unpublished study on this) and probably polarizing glasses can ameliorate the problem. The brightness, contrast, and color saturation problems are serious enough that the team at the Fraunhofer Institute at Stuttgart reports that their client automobile stylists and industrial designers have rejected their cave in favor of Fraunhofer’s panoramic display installation. The users do insist on a life-sized display.

Panoramic displays. One or more screens are alternatively arranged in a panoramic configuration. This suits groups especially, and multidisciplinary design reviews commonly use this type of display. One person drives the viewpoint.

Workbenches. The workbench configuration lays a rear-projection screen flat and positions the projector so that the workbench’s length approximates that of a human body. One, two, or conceivably more tracked viewers each perceive a custom-generated image. Angular resolution typically is about 4 minutes of arc near the center of the display. Since the eye-to-far-screen-border plane limits the apparent height of an object, many workbenches can be tilted to resemble drafting tables.

Rendering engines

Rendering engines have benefited from significant advances in speed and reductions in cost.

Speed. Rendering engines have improved radically in the past five years (almost four of Gordon Moore’s 18-month performance-doubling periods). The complexity of the virtual worlds that could be visualized was sharply limited in 1994, when the fastest commercially available engines had actual speeds of about 600 K polygons per second, or about 30 K polygons in a 1/20-second frame. Today each pipe of an 8-pipe, 32-processor SGI Reality Monster can render scenes of up to 180 K polygons in 1/20 second. Moreover, much larger configurations are available.
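The per-frame budgets quoted above follow directly from throughput and frame time (a check of the article’s arithmetic, nothing more):

```python
def polygons_per_frame(polygons_per_second, frame_rate_hz):
    """How many polygons a renderer can draw in one frame at a given frame rate."""
    return polygons_per_second / frame_rate_hz

# 1994: ~600 K polygons/s at 20 frames/s
per_frame_1994 = polygons_per_frame(600_000, 20)   # 30 K polygons per frame

# 1999: one Reality Monster pipe draws 180 K polygons in a 1/20-second frame,
# which corresponds to a sustained rate of:
rate_1999 = 180_000 * 20                           # 3.6 M polygons/s per pipe
speedup = rate_1999 / 600_000                      # 6x per pipe over 1994
```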

In one sense, world-model complexity is more dollar-limited than technology-limited today. In practice, world models containing more than 500 K polygons still require algorithmic attacks to achieve real-time rendering.

Cost. Meanwhile, steady progress in mass-market CPUs yields multi-hundred-megahertz clock speeds. The progress of graphics accelerator cards, driven by the game market, has matched this CPU progress. Consequently, VR configurations of quite acceptable performance can now be assembled from mass-market image-generation engines. For many applications, image generation no longer dominates system cost.

Tracking

In 1994, tracking the viewer’s head motion was a major problem. Tracker ranges tethered the viewer to an effective radius of about four feet. Tracker accuracy suffered from severe field distortion caused by metal objects and magnetic fields.

Unlike display technology and image-generation technology, tracking technology has not had a substantial non-VR market to pull it along. The most important collateral market has been motion capture for entertainment applications, and that market has not pressed the technology on accuracy. So progress in tracking has not matched that of displays and image generation.

Nevertheless, progress has occurred. The UNC-Chapel Hill outward-looking optical tracker is routinely operated over an 18 × 32-foot range, with accuracy of 1 mm and 0.1 degree at 1,500 updates per second. Commercial off-the-shelf (COTS) trackers give working range radii of 8 to 10 feet. Hybrid-technology trackers seem most promising, combining inertial, optical, ultrasonic, and/or magnetic technologies.

Ergonomics

With wide-range trackers, another problem emerges—the wires to the user. The physical tether, not too much of a bother when the user was electronically tethered anyway, becomes an ergonomic nuisance once the user can walk around in a room-sized area.

In principle, substituting wireless links for wires can solve this problem. For users in surround-projection systems, COTS wireless systems serve, since only tracking and button-push data need to be transmitted.

For users with HMDs, the problem becomes much more serious—two channels of high-definition video must also be transmitted, and COTS wireless systems do not yet have the required portability. HMDs also require body-mounted power for free-ranging viewers to wear them.

System latency

Perceptually, the greatest illusion breaker in 1994 systems was the latency between user motion and its representation to the visual system. Latencies routinely ran 250 to 500 ms. Flight simulator experience had shown latencies of greater than 50 ms to be perceptible. In my opinion, end-to-end system latency is still the most serious technical shortcoming of today’s VR systems.
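The scale of the problem is simple to quantify: during the latency interval the head keeps turning, so the displayed scene lags the true view direction by angular velocity times latency (a back-of-envelope sketch using the figures in the text; the 50 degrees per second is the typical HMD head-rotation rate the article cites):

```python
def scene_lag_deg(head_velocity_deg_per_s, latency_s):
    """Angular error between where the user is looking and what is displayed,
    assuming constant head rotation over the latency interval."""
    return head_velocity_deg_per_s * latency_s

# Typical 1994-era latencies at a 50 deg/s head rotation:
lag_250ms = scene_lag_deg(50, 0.250)   # 12.5 degrees of scene lag
# Near the 50-ms perceptibility threshold from flight simulator experience:
lag_50ms = scene_lag_deg(50, 0.050)    # 2.5 degrees
```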

In HMD systems, head rotation is the most demanding motion, with typical angular velocities of 50 degrees per second. Latencies of 150 to 500 ms make the presented scene “swim” for the user, seriously damaging the illusion of presence. In projection systems, whether surround, panoramic, or workbench, the viewer’s head rotation does not change the generated image—only viewpoint translation, and any motions of the interaction device and of virtual objects. Hence, system latencies are not so noticeable in these systems—but they nevertheless damage the illusion of presence. I see latencies of 150 to 250 ms being accepted without outcry.

Many advances in image rendering speed came through graphics processor pipelining, which has hurt system latency. The designers of tracking systems have given insufficient attention to system latency. Moreover, many VR systems have been pieced together using standard networking between the tracker’s computer and the image generator, and this contributes noticeably to latency.

The latency problem becomes extremely serious in augmented reality systems in which the virtual world superimposes on the real world. The challenge lies in dynamic registration—the two worlds should stay rigidly locked together both for the illusion of reality and for task performance. Holloway has studied the viewer motions of a cranio-facial surgeon while examining a patient and developing an operating plan. For those viewer motions, Holloway found that a millisecond of latency translated into a maximum registration error of one millimeter in, for example, superimposing CT scan and/or MRI scan data on the patient’s visible face, as perceived by the surgeon through an HMD.6 Since the application required millimeter accuracy, and today’s best systems have achieved latencies of at best 40 to 50 ms, Holloway chose to pursue the cranio-facial application no further.
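Holloway’s rule of thumb makes the infeasibility concrete: at one millimeter of registration error per millisecond of latency, even the best reported systems miss the surgical requirement by more than an order of magnitude (simple arithmetic on the numbers in the text):

```python
def max_registration_error_mm(latency_ms, mm_per_ms=1.0):
    """Worst-case registration error under Holloway's 1 ms = 1 mm rule."""
    return latency_ms * mm_per_ms

best_case = max_registration_error_mm(40)   # 40 mm error at a 40-ms latency
requirement_mm = 1.0                        # millimeter accuracy needed
shortfall = best_case / requirement_mm      # 40x short of the requirement
```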

The most exciting, although not the easiest, augmented reality applications are surgical. One attack uses video camera technology to acquire the real-world image, which is then video-combined with the virtual image. This approach has in principle two advantages over optical combination—the video real-world image can be delayed to match the virtual image, and obscuration of far objects by near ones can be done symmetrically between the images.

Model engineering

Now that we can explore quite large virtual world models in real time, we find that acquiring, cleaning, updating, and versioning even static world models is itself a substantial engineering task. It resembles software engineering in magnitude and in some, but not all, other aspects. My own rule of thumb is that managing a model of n polygons is roughly equivalent to managing a software construct of n source lines.

Model acquisition. VR practitioners acquire models in one of three ways: build them, inherit them as byproducts of computer-aided design efforts, or acquire them directly by sensing existing objects.

In spite of a variety of excellent COTS tools for model building, it is tedious and costly work, especially when accuracy is important. We have found it takes several man-years to make a model of an existing kitchen that aims at quarter-inch accuracy. (We do not actually aim for that resolution where a textured image will serve as a substitute, such as the spice rack and contents, or stove knobs.)

I have found a breadth-first iterative-refinement strategy best for modeling. First, make a simple representation of each major object, then of each minor object. Then do a level of refinement on each, guided by the eye—What approximation hurts worst as one experiences the world?
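That strategy amounts to a simple loop: one refinement pass over every object before any object gets a second pass (a schematic sketch; `coarse_model` and `refine` are hypothetical stand-ins for whatever the modeling toolchain provides):

```python
def breadth_first_refine(objects, coarse_model, refine, passes=2):
    """Breadth-first iterative refinement: represent every object simply
    first, then repeatedly sweep over all of them, refining each a little,
    rather than polishing one object to completion at a time."""
    models = {name: coarse_model(name) for name in objects}  # pass 0: blocks
    for _ in range(passes):
        for name in objects:        # one refinement step per object per sweep
            models[name] = refine(models[name])
    return models

# Toy usage: "refinement" just bumps a detail counter.
models = breadth_first_refine(
    ["table", "stove", "spice rack"],
    coarse_model=lambda name: {"name": name, "detail": 0},
    refine=lambda m: {**m, "detail": m["detail"] + 1},
    passes=2)
```

The point of the sweep order is that no object ever lags far behind the rest, so the eye can judge, after each pass, which approximation hurts worst.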

Textures are extremely powerful, as SGI’s Performer Town first demonstrated. Image textures on block models yield a pretty good model rather quickly. Moving through even a rough model wonderfully boosts the modeler’s morale and enthusiasm, as well as guiding refinement.

Buying models. Several firms offer catalogs of models, at different levels of detail, of common objects including people. It is cheaper, and faster, to buy than to build these components of a custom world model.

Inheriting CAD models. By far the simplest way to get superbly detailed models of designed objects, whether existing or planned, is to get the computer-assisted design (CAD) data. But even that rarely proves simple:

• We have found it very difficult to get the original computational-solid-geometry data, which best encapsulates the designer’s thoughts about form, function, and expected method of construction. We usually receive an already tessellated polygonal representation. Quite often, it will be severely over-tessellated, needing polygonal simplification to produce alternative models at different levels of detail.
• Formats will almost always have to be translated. Do it in an information-preserving way.
• CAD models often require substantial amounts of manual cleaning. In some CAD systems, deleted, moved, or inactive objects stay in the database as polygonal ghosts. Coincident (twinkling) polygons and cracks are common.
• AutoCAD does not capture the orientation of polygons. Orienting them takes automatic and manual work.
• If any of the subsequent processing programs requires manifolds, making them from CAD data will take much work.
• CAD models, particularly of architectural structures, typically show the object as designed, rather than as built.
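The orientation problem noted above is mechanical to state: a polygon’s winding order determines its normal, so unoriented polygons can be flipped to agree with a reference direction. A minimal sketch of that flip test (illustrative only; real cleanup tools propagate orientation across shared edges and do far more):

```python
def triangle_normal(a, b, c):
    """Normal of a triangle from its winding order (right-hand rule)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def orient_outward(tri, outward):
    """Flip a triangle's winding if its normal points against a reference
    outward direction (a crude stand-in for real orientation propagation)."""
    n = triangle_normal(*tri)
    dot = sum(n[i] * outward[i] for i in range(3))
    return tri if dot >= 0 else (tri[0], tri[2], tri[1])

# A triangle in the z=0 plane wound so its normal points toward -z gets flipped:
tri = ((0, 0, 0), (0, 1, 0), (1, 0, 0))    # normal is (0, 0, -1)
fixed = orient_outward(tri, (0, 0, 1))     # winding reversed; normal now (0, 0, 1)
```

The manual part of the work arises exactly where no reliable reference direction exists, which is why the article calls for both automatic and manual effort.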

Models from images. For existing objects only, as opposed to imagined or designed objects, imaging can yield models. Imaging may be done by visible light, laser ranging, CAT and MRI scans, ultrasound, and so forth. Sometimes one must combine different imaging modalities and then register them to yield both 3D geometry and visual attributes such as color and surface textures. Recovering models from images is a whole separate technology and an active research area, which I cannot treat here.

Applications

The most important thing in VR since 1994 is not the advances in technologies, but the increasing adoption of its technologies and techniques to increase productivity, improve team communication, and reduce costs. VR is now really real.

Finding the production-stage applications

For my assessment, I broadcast e-mail messages to the mailing list for the Virtual Reality 99 Conference and to the United Kingdom VR Special Interest Group (ukvrsig), inviting people to send reports of VR applications. I defined three stages of application maturity:

• Demonstration
• Pilot production—has real users but remains in the developers’ hands, under test
• Production—has real users doing real work, with the system in the users’ hands

I especially invited reports of applications in the pilot and production stages.

In the discussion here, I report on some production-stage applications, that is, employed in dead earnest by users—not developers—for the results gained, rather than as an experiment or trial. This report does not aim to describe all such applications. On the other hand, most published reports do not sharply distinguish the stages of progress of systems. This report describes applications that I know to be in production. In most cases I visited the application site, verified the application’s status, and tried to learn what unexpected experiences and lessons have resulted from the process of making them work. In a sense, this spells out “what’s real” about some VR applications.

The most surprising result of my call for information on VR projects is how few I could find in mid-1998 that were really in routine production use. Some, of course, are not discussed openly for competitive or security reasons. A great many more were in pilot status, well past the demo stage and almost to the production stage. I believe this situation to be changing very rapidly and that another five years will find VR applications such as those I describe to be much more widespread.

For example, the use of immersive VR for visualization of seismological data has been intensively pursued both in oil companies and in universities. One company, Landmark, is even making major VR installations in the US and in Europe for that purpose. Yet, as far as I could learn by inquiries, no one was quite yet using VR in routine seismic interpretation.

Here are the kinds of verifiable production applications I found:

• Vehicle simulation—first and still much the best
• Entertainment—virtual sets, virtual rides
• Vehicle design—ergonomics, styling, engineering
• Architectural design and spatial arrangement; submarines, deep-sea oil platforms, process plants
• Training—only the National Aeronautics and Space Administration
• Medicine—psychiatric treatment
• Probe microscopy

Of these, vehicle simulation was the first application of what today we call VR. It is not only the oldest, it is also the most advanced. The results accomplished in vehicle simulation provide both an existence theorem and a challenge for other applications: It is possible to do VR exceptionally well, and it pays. Surprisingly, there has been relatively little knowledge interchange between the vehicle simulator discipline and the new VR discipline, due I think to ignorance and sloth on the part of the VR community.

A second batch of for-profit production applications of VR lies in the entertainment sphere. By and large, the most elaborate use of computer graphics in entertainment has been non-real-time graphics, used for animation and special effects, rather than VR. However, theme parks and arcades are increasingly installing true VR experiences. I shall not treat these applications at all here.

Flying a 747 simulator at British Airways

To my delight, I got to spend an hour flying a 747 simulator at British Airways’ facility at Heathrow. It was a stunningly good illusion—the best VR I have ever experienced. Rediffusion Ltd. built the simulator, which cost about $13 million.

The visuals appear on a spherical zonal screen about 12 feet from the pilot and copilot seats. The physical setup faithfully models the interior of the cockpit, and the instruments faithfully display the results of real avionics—the steering yoke seems to fight back properly. The whole pilots’ cabin, plus room for the instructor, is mounted on a motion platform that gyrates within a three-story space. The vehicle dynamics and simulated motions appear to be of very high quality. The sound is superb: engine sound, wind sound, taxiing bumps in the pavement, radios.

Within a very few minutes I was not in a simulator, I was flying the airplane: taxiing, taking off, climbing out, circling the airport, and trying to keep the plane at constant altitude. (The 747’s dynamics differ markedly from those of the light planes in which I learned to fly years ago.) So compelling was the illusion that the breaks in presence came as visceral, not intellectual, shocks. One occurred when the instructor abruptly changed the scene, from circling above London to an approach to Hong Kong. The other occurred when I taxied up to a hangar in Beijing and looked back (to about 4 o’clock) to ensure that I would not brush wingtips with a parked aircraft—and the view was just empty gray! The projected visuals did not reach that far around.

Lessons. My host, Michael Burtenshaw, explained that their successes drove the steady evolution and universal adoption of flight simulators to train pilots on new aircraft types. At Heathrow, British Airways now has 18 simulators in four buildings, each specialized to an aircraft type. These they keep busy training both their own pilots and pilots of smaller airlines, to whom they sell instruction. Simulators, though costly, are much cheaper than airplanes. Much more important, pilots can train and exercise in extreme situations and emergency procedures where real practice would imperil aircraft and lives. Many major airlines have similar sets of simulators; so do various air forces.

Increasingly, real avionics, which make up a good chunk of the cost of high-end simulators, are being replaced by individual PCs. Each PC simulates one instrument’s behavior and drives its output dials. Significant economies result.

British Airways needs to keep a simulator type as long as it keeps the corresponding aircraft type—up to 25 years. I found it memorable to walk through the facility and see old computers running whose existence I had almost forgotten and whose maintenance is a nightmare for the airline: Xerox Sigma 3, DEC PDP-11, and VAX-11/780.

Merchant ship simulation at Warsash

The Warsash Maritime Centre of Southampton Institute trains both deck and engineering officers for the merchant marine. It offers a two-year cadet course at the technical-school level. A principal undertaking, however, is a set of one-week short courses designed for both the qualification testing and skill enhancement of deck and engineering officers. Shipping companies regularly send deck or engineering teams of four officers to train together, to build team skills. The facility operates routinely as a revenue-producing element of Southampton Institute. David Gatfield served as my host.

Warsash runs a rich set of simulators: an engine-room control system, a liquid-gas cargo-handling simulator, a first-class single-bridge simulator, and a coupled simulator consisting of three bridge simulators, each capable of handling a full deck-officer team, for multiship maneuvers. The best bridge simulator represents a generic merchant vessel, although the view of the ship from the bridge can be customized (via texture mapping) to represent a particular ship type or even a particular vessel. Similarly, the look and feel of the controls can be customized to a small degree, to simulate different bridge configurations (see Figure 1).

The visual surround, approximately 180 degrees, is generated by seven behind-screen Barco projectors. The imagery is generated with 768 × 576-pixel resolution by a set of PCs with accelerator cards. Woofers mounted under the floor do a quite convincing job of simulating engine noises for several different power-plant types; other speakers provide wind, wave, buoy, and ship-whistle noises.

The ocean simulation provides a variety of sea states; the sea model includes tides and currents. Atmospherics, including a variety of visibility and fog conditions, are effective. An auxiliary monitor provides the function of binoculars—a higher-resolution, restricted field-of-view image—without the aiming naturalness of true binoculars. Radar, Loran, global positioning system (GPS), depth (fathometer), over-the-ground speed indicator, and other instruments are faithfully simulated. Norcontrol built the simulator, which cost approximately £2 million in 1995.

I experienced a ferryboat trip from a southern Norwegian port, at twilight. As wind speed and sea state rose, the most surprising effect was the realism of the vessel's pitch and roll, achieved entirely by manipulation of the imagery. My colleagues and I found ourselves rocking to and fro, and side-to-side, to compensate for the ship motion. One recent visitor, a professional in ship simulation, asked the Warsash people if they had much trouble with their hydraulics. He was surprised when told there were none.

A fascinating non-VR team-training simulator at Warsash consists of a small fleet of 1/12 to 1/25 scale-model merchant ships navigated around a 13-acre lake with some 30 ship-handling hazards or stations. Each ship seats two persons, canoe-style. The master's eyes are exactly at the height of the bridge. He gives oral commands to the pilot, who actually handles the controls. The ship scaling results in a seven-fold scaling-up of the natural breezes and winds.

Wish list. First and foremost, our hosts want more-natural binoculars—an important item often mentioned in student debriefings. Today's commercial off-the-shelf technology offers such capability—it just takes money. Second, they want better screen resolution, and third, more screen brightness. Indeed, the simulator worked well for twilight and night scenes, but could not begin to approximate either the brightness or the dynamic range of a sunlight-illuminated scene.

Figure 1: Ship bridge simulator at Warsash Maritime Centre. (Courtesy Warsash Maritime Centre, Southampton Institute)

Lessons. As with flight simulators, our hosts report several advantages of simulation over real-ship training:

- Emergency scenarios, even extreme ones, can be thoroughly exercised.
- Scenarios can readily be run, accelerated, and switched, enabling more significant experience time per hour of training.
- Cost.

A few engineering officer teams report that Warsash's large control-board simulator seems dated—their own engine rooms now have glass-cockpit-type controls, everything displayed and actuated via computer screens. Warsash is currently updating the engine-room simulator to reflect this type of control system. I wonder at the human factors effects of seeing everything at once via a visual scan versus having to act to bring up information. Older may be better.

I am convinced that much of the sense of presence and participation in vehicle simulators comes from the fact that one can reach out and touch on the simulator everything reachable on the real vehicle—the near-field haptics are exactly right.

The second take-home lesson for me from experiencing these working simulators is the extreme importance of getting sound right.

Ergonomics and design at Daimler-Chrysler

For years, VR researchers have worked toward making VR an effective tool for product design and design review. Today the automobile industry seems way ahead in adopting the technology for design applications—most major automobile manufacturers have installations or use nearby ones. Some are in routine production status.

I visited one such production system, Daimler-Chrysler's installation in the Small Car Platform Advance Vehicle Engineering area of their Auburn Hills, Michigan, Technical Center. Ken Socks, Don Misson, and Josh Davidson served as my hosts. The configuration includes a high-resolution stereoscopic Boom by FakeSpace Systems, worn on the user's head as a head-mounted display. The Boom mechanism provides high-accuracy, extremely low-latency mechanical tracking of the user's head pose. The user sits in a "buck"—a real car seat, complete with all its adjustments, combined with a real steering wheel and a mocked-up instrument control panel.

Imagery comes from an SGI Onyx Infinite Reality system. It drives not only the Boom but also a monocular projector that displays on a 4-foot by 6-foot screen at one end of a nearby conference table that seats eight. An Ascension magnetic tracker tracks the positions of the driver's hands (but not the fingers) and of auxiliary objects such as a coffee cup. A short calibration sequence establishes the length and position of the extended index finger relative to the hand tracker. Modal buttons enable the driver to cycle among the conformations of the hand avatars—reaching, holding an object, grasping the steering wheel, and so on.
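The calibration step can be sketched as a single rigid-frame solve. The one-touch method and all names below are my illustrative assumptions, not the actual Daimler-Chrysler or Ascension interface: while the extended fingertip touches a point q whose world position is known, the tracker reports the hand sensor's position p and rotation R, and the tip's fixed offset in the sensor frame is o = R^T (q - p); thereafter the tip can be placed for any pose as p + R o.

```python
import math

# Sketch of a one-touch fingertip calibration: touch a point whose world
# position is known, solve for the tip's fixed offset in the sensor frame.
# Illustrative assumption, not any vendor's actual procedure.

def rot_z(theta):
    """3x3 rotation about z; stands in for the tracker's reported rotation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def calibrate_fingertip(sensor_pos, sensor_rot, known_point):
    """Tip offset in the sensor frame from one touch: o = R^T (q - p)."""
    d = [known_point[i] - sensor_pos[i] for i in range(3)]
    return mat_vec(transpose(sensor_rot), d)

def fingertip_world(sensor_pos, sensor_rot, offset):
    """Tip position for any later sensor pose: q = p + R o."""
    ro = mat_vec(sensor_rot, offset)
    return [sensor_pos[i] + ro[i] for i in range(3)]
```

Once calibrated, the same fixed offset places the fingertip for every subsequent tracker report, which is all a hand avatar needs for pointing and touch tests.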

This installation sees routine weekly use for production design work. I talked with user engineers from groups doing ergonomic studies (driver controls, access from front passenger seat, view, and so forth), windshield wiper design (visibility as a function of body size; see Figure 2), interior trim design (color studies, among others), and painting studies (detecting accidental appearances of exterior paint in the interior, due to metal wrapping). The windshield wiper engineers, for example, report that the system obviates many arguments—a whole team sees together the visibility as perceived by drivers of different body sizes and shapes.

Lessons. A major factor in the success of the Daimler-Chrysler installation: it was desired, funded, specified, and is operated by a user group who really wanted it, the Ergonomics activity within Small Car Platform Advance Vehicle Engineering. They enlisted the (essential) help of the Technical Computer Center in realizing their system. It offers a prime example of user-pull versus technology-push. Consequently, system fixes and enhancements follow the users' priority order, and the corporate know-how for using the system most effectively develops within one of the user groups. Interestingly, use now extends to many departments other than Ergonomics, and the system is used as much for other aspects of design as for ergonomics.

Models provide a second important lesson from the Daimler-Chrysler experience. Some years ago, their laboratories operated a variety of CAD systems. The Engineering Vice President ordered that all CAD would move to one system; they selected Catia from Dassault Systemes. Since Catia is the design medium, the Catia models of automobiles under design are current, accurate, and maintained—the metal is cut from those models. This means that one of the hardest problems in VR—how to get accurate, detailed models—does not arise for the Daimler-Chrysler applications. The models are byproducts of the design process, and they are furnished by someone other than the VR systems team.

Figure 2: View of virtual windshield wiper visibility at Daimler-Chrysler's Technology Center. (Courtesy of Daimler-Chrysler)

Design review at Electric Boat

VR today remains an expensive technology, more because of the people, the models, and the software than because of the image generation, tracking, and display equipment. Therefore, adoption of the technology for design tasks happened first in applications where the value of design is greatest: mass-produced vehicles and exceedingly complex structures such as submarines and process plants.

The Electric Boat Division of General Dynamics Corporation, which designs and builds nuclear submarines, uses VR principally for multidisciplinary design review. The configuration consists of a large conference room with a panoramic monocular projection system. A single user specifies travel using a mouse, and an SGI Onyx Infinite Reality Engine generates imagery. Electric Boat makes heavy use of four such high-security VR rooms at its Groton, Connecticut, development laboratories. My hosts there were Don Slawski and Jim Boudreaux.

Submarine design is done in and controlled with Catia. Approximately two-thirds of the total design effort (for the vessel, not including its nuclear power plant) goes into the design of piping, wiring, and ductwork. The visualizable model of a submarine contains millions of polygons. This model is derived from the Catia model, transmitted to the visualization file system by "sneakernet," and displayed using Deneb's visualization software.

A typical design review session includes not only the various engineering groups involved in a particular design area, but also manufacturing, capital tooling, maintenance, and operations people. During a session the group will usually move to a particular local area and spend many minutes studying it. Viewpoint changes may happen every few minutes to aid study, but not continually. Representatives of each discipline evaluate the proposed design from their own particular points of view, discuss problems, and propose fixes. After the meeting, the responsible engineer details the fixes and enters the changes into the Catia system manually, through the normal change-control process. The normal structural, stress, acoustic, and other analyses are run, usually as batch operations, on the changed design.

In particular, the oft-imagined scenario in which the visiting admiral says "Move that bulkhead one foot aft!" and the change is immediately effected in the master model is not how things are done, of course. Such a thing would be technically very difficult for the VR system, to be sure. But such a scenario would not happen even with magical technology, because design discipline demands the human and computational analysis of all the interactions and effects of any proposed change. Experienced engineering organizations have quite formal change-control processes.

The design and design review processes as described by my hosts at Electric Boat corresponded closely to those described to me at Newport News Shipbuilding in Newport News, Virginia, who make aircraft carriers and other ships, and at Brown and Root Engineering Division of Halliburton in Leatherhead, Surrey, UK, who design off-shore oil platforms.

Lessons. All the engineering organizations perceive large advantages from VR walkthroughs as a part of design reviews.

In one review at Electric Boat, a capital tooling person from the factory pointed out that a certain massive semicylindrical tank could be better fabricated as a cylinder—for which they had extant tooling—then cut in half and roofed, saving thousands over the engineer's proposed fabrication from piece parts.

In a multidisciplinary design review at Brown and Root, the painters from the maintenance force remarked that certain fixtures on an oil platform should be made out of extra-heavy-gauge steel because "We can paint its interior once in the shop, but we'll never again be able to paint it after you've installed it like that." The designers chose to change the configuration.

In describing their two years of experience, my hosts at Brown and Root—Arthur Barker and Martin Williams—said that the most important effects of their installing and using their VR theater and its associated telecommunications were not effects on designs, but effects on the design processes, in particular the communication of ideas between the Halliburton divisions in London and in Houston. Indeed, they now plan to design a South American project in Brazil and to design-review it from Leatherhead.

These groups find that true scale is important for detailed understanding of structures. This is, I believe, the major advantage that HMDs, caves, and panoramic displays have over so-called desktop VR. I think the advantage justifies the extra cost when dealing with complex structures.

Second, system latency—a major concern in the VR laboratory—is not necessarily a showstopper for this application, because viewpoint motion is slow and infrequent, due to the intense study going on.

Design review at John Deere

The experience of the Construction Machinery Division of John Deere as it has moved toward virtual prototyping proves instructive. Pilot studies of VR at the Moline, Illinois, division began in 1994, but nothing much really happened until a key technical person joined in 1996. The configuration is rather typical: an SGI Onyx Infinite Reality, a head-mounted display, an Ascension Flock of Birds tracking system, Division's dVise software, and the Jack human model for ergonomic studies. Zones of reach and zones of comfort, provided by Engineering Animation Inc.'s Pro-Engineering software, are shown superimposed on the virtual models.

Jerry Duncan and Mac Klinger served as my informants. Four wins to date reflect the sort of savings resulting from the nontrivial investment in VR. The first study was a styling study for a new machine, normally done with a sequence of fiberboard mockups. The VR model displaced two-thirds, but not all, of the usual mockups and saved $80K in mockup cost—real savings only if you already have a CAD model of the prototyped object.

The second win was a safety technology evaluation formerly done on an iron prototype, successfully performed in VR. This evaluation studies, among other things, encumbrances and egress in emergencies. A six-person jury performs the evaluation, using a formal protocol. It normally takes two days, because each jury member climbs into the prototype, evaluates the next group of items on the checklist, and climbs back out. The VR evaluation had the whole jury viewing the video projection on a screen while one person, wearing the head-mounted display, did the various maneuvers. Obviating the serialized access to the cab shortened the whole process to three hours. The whole jury saw and discussed each aspect from a common eyepoint, focusing the discussion. The savings were both man-hours and the construction of the iron prototype.

The third win was exactly the kind hoped for with virtual prototype design review. The issue was steps and handholds, for all kinds of body sizes—a matter that seems minor but makes big differences in usability. Three entire rounds of review and change were effected in five days—with no iron-cutting.

Finally, Deere was able to conduct early customer tests of the new 450-H Crawler-dozer using the virtual prototype, well before they could have done them with an iron prototype.

Wish list. The most strongly desired tools are geometry manipulation tools, ways of easily specifying interactions with the design. This includes collision detection and collision response. The great desire is for interfaces simple enough for the occasional user to participate in model changing. Then, since they too find that CAD tools over-tessellate curved parts, they wish for a suite of tools for polyhedral simplification and for file management for large models and many versions of models.
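Polyhedral simplification is a well-studied problem, and its simplest member, vertex clustering, fits in a few lines. The sketch below is my own illustration of the general technique (after Rossignac and Borrel's clustering idea), not Deere's or any vendor's tool: snap vertices to a coarse grid, merge vertices sharing a cell, and discard triangles that collapse.

```python
# Vertex-clustering mesh simplification: an illustrative sketch of one
# common approach to reducing over-tessellated CAD output.

def simplify(vertices, triangles, cell):
    """vertices: list of (x, y, z); triangles: list of vertex-index triples.
    Returns (new_vertices, new_triangles) for grid spacing `cell`."""
    cluster_of = {}   # grid cell -> new vertex index
    remap = []        # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_vertices)
            # Use the cell center as representative (an average also works).
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cluster_of[key])
    new_triangles, seen = [], set()
    for a, b, c in triangles:
        t = (remap[a], remap[b], remap[c])
        if len(set(t)) == 3 and frozenset(t) not in seen:
            seen.add(frozenset(t))        # drop degenerates and duplicates
            new_triangles.append(t)
    return new_vertices, new_triangles
```

Edge-collapse methods driven by an error metric preserve shape far better; clustering merely shows the shape of the problem and gives a fast first cut.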

Lessons. The foresight and activity of an advocate at the corporate technical level facilitated experimentation and then adoption of VR technology at the division level. The vision at the corporate level for John Deere as a whole encompasses not only VR for rapid virtual prototyping of products, but for analysis and evaluation as well as design. And the vision encompasses not only products, but also manufacturing processes, for more Deere people are engaged in the manufacturing process than in product design.

An important insight is that the farther downstream a design gets from its conceiver, the better the visuals must be to convey an accurate, internalized perception of the design to those who must build, maintain, and use it. The creating engineer can visualize from a sketch, and the experienced toolmaker from an engineering drawing, but a factory-floor foreman or an operations person may be unable to visualize from anything short of a detailed, realistic picture.

As all the examples above indicate, each user of VR for design has found that it radically facilitates communication among the design team. This may be the most dramatic and cost-effective result.

John Deere has for some time collaborated with the VR research group at Iowa State University, where Carolina Cruz-Neira showed me around. The company sees the Iowa State work as a technical forerunner and groundbreaker for Deere's own in-house applications. The Iowa State group, for example, has a Deere tractor buck mounted in a cave configuration, with a detailed model of a particular Iowa farm, acquired by GPS surveys. John Deere currently operates a more modest configuration for its production work, with exploratory work done at the Iowa State facility. I saw this pattern emerging in several places, including automobile companies that use the Fraunhofer Institutes in Stuttgart and Darmstadt as the sites of their explorations. It makes a lot of sense to me. University research groups should be in the trailblazing business, and the degree to which others then follow those trails indicates the excellence and relevance of the research.

Astronaut training at NASA

Training applications in routine production status proved hard to find, although quite a few are far along in prototype status. The training work with the longest experience—several years as a production operation—seems to be that of NASA-Houston for training astronauts for extra-vehicular activity, where Bowen Loftin and David Homan served as my hosts.

It’s hardly surprising that NASA’s astronaut trainingshould lead the way; the training has very high value,

Special Report

24 November/December 1999

3 NASA-Hous-ton’s “Char-lotte” VirtualWeightless Masslets astronautspractice han-dling weightlessmassy objects.

Cou

rtes

y of

NA

SA

Page 10: What’s Real About Virtual Reality?

and the alternatives to VR technology are few and poor.Of course, much training can use vehicle mockups.Weightless experience can be gained in swimming poolsand 30-second-long weightless arcs in airplanes.Nonetheless, extra-vehicular activity is very difficult tosimulate.

A difficult skill, for which VR training has proven very powerful, is flying about in space using the back-mounted flight unit. It is like no earthly experience. Newton's laws apply ideally—with no drag, an object in motion or rotation continues forever unless stopped. The flight unit is designed principally as an emergency device for use if an astronaut's tether breaks; velocities are very slow. The NASA VR configuration for this is rather standard: an SGI Onyx Infinite Reality for imagery generation, head-mounted displays, and magnetic tracking. An astronaut said the resulting system was the most faithful flight simulator he had used.

Moving around on the outside of a space vehicle is another unearthly skill. The VR system lets astronauts practice the careful planting of hands and feet in rock-climbing fashion. Difficult and unprecedented team tasks, such as correcting the Hubble telescope mirror's optics, made new training demands. The additional unearthly experience for such tasks is the team-coordinated moving of massy but weightless objects. The dynamics are, of course, totally unfamiliar, and viscous damping seriously confounds underwater simulation. For this training, NASA augments visual simulation with a unique haptic simulator called "Charlotte" after the spider of the same name. A real but very light 2-foot cubical box attaches to motors on the corners of an 8-foot cubical frame, as shown in Figure 3. As pairs of astronauts move the object by its handles, the system simulates the dynamics and drives the motors appropriately. Users report very high fidelity for masses of 300 pounds and up.
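A toy admittance model illustrates both the principle of such a simulator and why water tanks confound this training. The sketch and its parameters are my illustration, not NASA's implementation: the measured handle force drives a simulated mass, the motors servo the light box to the computed motion, and adding a viscous drag term (as water does) makes the same push die out instead of persisting.

```python
# Admittance-style simulation of a weightless mass, with optional viscous
# drag to show how underwater training distorts the dynamics.
# Illustrative parameters only.

def simulate(mass, drag, force, duration, dt=0.001):
    """Integrate 1D motion of a virtual mass under a constant handle force,
    with viscous drag (drag * velocity). Returns (position, velocity) at the
    end: the state the motors would be commanded to track."""
    pos = vel = t = 0.0
    while t < duration:
        accel = (force - drag * vel) / mass   # F = ma, minus drag if any
        vel += accel * dt                     # semi-implicit Euler
        pos += vel * dt
        t += dt
    return pos, vel
```

Pushing a 136-kg (300-pound) module with 20 N for five seconds leaves it drifting at about 0.7 m/s in vacuum; with plausible water drag the same push settles toward a terminal crawl, which is exactly the wrong lesson for orbit.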

Lessons. Interaction with objects in virtual worlds still challenges VR researchers. So much of our interaction with objects depends on how they feel—purely visual simulation falls much too far short of realism. This is critical for some training tasks; we really do not know how important it is for design tasks.

Early adoption of VR, even with less-than-satisfactory technologies, enabled NASA to get the years of experience that brought their applications to their present effectiveness.

Collaboration with researchers at the University of Houston has worked much like the John Deere-Iowa State collaboration. The university can do wider exploration and trailblazing; the mission agency does the focused work that leads to production applications.

Psychiatric treatment at Georgia Tech-Emory

Research collaborators at Georgia Institute of Technology and the Emory University Medical School have explored the use of VR for psychiatric treatment. Larry Hodges, Barbara Rothbaum, and David Ready hosted my visit there.

The success of this research has led to the development and fielding of a COTS system by Virtually Better, Inc. They offer a system for about $18K, approximately $8K for the hardware and $10K for software for one application. Eight installations are already in production around the US and one in Argentina, all routinely used by practicing psychiatrists and psychologists. These practitioners do not have computer support locally; any necessary support comes by telephone from Atlanta.

The hardware configuration consists of a PC, a graphics accelerator card, a Virtual Research V-6 or V-8 head-mounted display, and a magnetically tracked bat. A good deal of attention has gone to audio quality; the visuals look rather cartoonish.

The most popular and cost-effective application treats fear-of-flying. The conventional treatment has the practitioner and patient together make multiple trips to an airport, sit on an airplane mockup, sit on an airplane, fly a short hop, and so on. Much expensive practitioner time disappears in travel. The effectiveness of the VR treatment seems just as good as the conventional one, although there are no formal studies yet. The cost is radically lower. Virtually Better also offers fear-of-heights and fear-of-public-speaking simulation packages.

The most dramatic procedure offered, in routine use at the Atlanta Veterans Administration Hospital, treats post-traumatic stress disorder for Vietnam War veterans. Physiological monitoring of the patient augments the VR configuration, to give an independent measure of his emotional stress level. The psychologist gently leads the patient into a simulated Vietnam battle scene (see Figure 4), step-by-step recreating the situation where the patient "locks up" in reliving his stress experience. By leading the patient completely through the scene and out the other side, the psychologist aims to help the patient learn how to get himself out of the damaging patterns.

Figure 4: Vietnam War simulation at the Atlanta Veterans Administration Hospital: (a) in use; (b) imagery seen by the user. (Courtesy of Georgia Institute of Technology)

So far, the treatment seems to help the patients who persevere. About half of the first 13 opted out, perhaps because of the realism of the recreated experiences. Studying tapes of typical sessions, I was struck by how completely the patients are present and involved in the simulated Vietnam situation. It hurts to watch them relive painful experiences.

A second installation of this application is under way at the VA hospital in Boston.

Lessons. The big lesson for me was the power of aural VR for reproducing an overall environment. Larry Hodges, the computer scientist on the team, thinks that audio quality is, in several of their applications and experiments, consistently more important than visual quality. His Vietnam simulation certainly supports that opinion.

Indeed, the Fraunhofer IAO (Industrial Engineering Institute) at Stuttgart has a prototype VR application, the routine quality testing of electric drills after manufacture, in which the only VR environment is aural. A sound shower displays the sound field of a well-made drill all over the test cell, facilitating aural comparison. It uses no visuals at all.

I am also impressed with the importance of full-scale immersion for the Vietnam application. It is hard to believe that desktop VR would even begin to achieve the same effects.

nanoManipulator at UNC-Chapel Hill

Probe microscopes, including atomic force microscopes, scanning tunneling microscopes, and near-field optical microscopes, form images by scanning a probe across the surface of a sample, yielding one or more scalar functions of the two dimensions. Resolution to the nanometer can be obtained; even individual atoms can be imaged under best conditions. As significantly, the tip of the probe can modify the sample, by mechanical or electrical action.

Researchers at the University of North Carolina at Chapel Hill applied VR technology to the task of making the using scientist think and act as if shrunk by a factor of 10^5 to 10^6 and present on the surface, in a VR system called the nanoManipulator. (An alternative way of thinking about it is that we expand the sample by such a factor, to laptop sizes.) The scientist controls the viewpoint and the lighting on the dynamically updated image of the sample, as the microscope continues to scan. He may, if he chooses, suspend scanning, position the probe to a particular spot, and make measurements there. Alternatively, he may take control of the probe and effect the modifications as if he were scratching the surface directly. Russell Taylor and Richard Superfine are leading this project.

Perception is effected two ways: through a stereo visual image rendered on an SGI Onyx Infinite Reality engine and displayed to a head-tracked observer on a workbench or a desktop monitor, and through a Sensable Systems Phantom haptic display (see Figure 5). The user holds this motor-controlled stylus like a pen; sensors yield 6D measurements of the position and pose of the tip. Motors provide three to six degrees of force output. The nanoManipulator currently presents three degrees of force output. Moving the Phantom manually controls tip position and action.

The haptic display and control prove essential for manipulations. Since the probe can either image or scrape, but not both at the same time, the scientist is blind while manipulating the surface with the microscope probe. The standard technique has the scientist image the sample, manipulate the probe while working blind but seeing the previous image, then image again to find out what really happened. Although the probe cannot produce images while being used for manipulations, it does produce the control signals that display as vertical or lateral forces. So the scientist can feel what he's doing even when he cannot see what's going on.

The sensation of interacting with a real macroscopic sample on a workbench is very strong. For this application, the size and small field of view of a desktop monitor do not hinder perception because the virtual object is created "full size," as if on a workbench.

Replications of the nanoManipulator system have been installed in four locations in the US and Europe. In routine daily use, they produce science published by physicists, chemists, gene therapists, and others.7

Lessons. This application illustrates and emphasizes the fruitfulness of haptics in a VR configuration. It offers an almost ideal application of the Phantom, which can display only the forces on a point probe. Since the microscope can only measure forces on a point probe, the two match well.

Figure 5: View of UNC nanoManipulator connected to an atomic force microscope. (Courtesy of Russell Taylor)

Therefore, the methods of interaction seem quite natural, both for probe motions and for head, sample, and light motions to improve viewing. Early research work showed that realistic haptic rendering requires update rates greater than 500 updates per second. The Phantom runs at 1,000 updates per second, which seems quite satisfactory.
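A toy passivity experiment (my own illustration; the numbers are assumptions, not the Phantom's specifications) shows why such rates are needed. A virtual spring evaluated continuously does zero net work on a hand pressing cyclically against it, but a sampled-and-held spring force lags the motion by about half a servo period, so the wall actively pumps energy into the hand. That injected energy, the source of buzzing or unstable contact, shrinks in proportion to the servo period:

```python
import math

# Net work done BY a sampled virtual wall ON a hand moving sinusoidally
# against it. A continuously evaluated spring is conservative (zero net work
# per cycle); the zero-order hold makes the wall active, injecting energy
# roughly in proportion to the servo period. Illustrative parameters only.

def zoh_wall_work(servo_hz, k_wall=500.0, depth=0.005, amp=0.003,
                  hand_hz=5.0, cycles=5, dt=1e-5):
    omega = 2.0 * math.pi * hand_hz
    duration = cycles / hand_hz
    servo_period = 1.0 / servo_hz
    t = next_update = work = held_force = 0.0
    while t < duration:
        x = -depth + amp * math.sin(omega * t)   # prescribed penetration < 0
        if t >= next_update:                     # servo tick: resample force
            held_force = -k_wall * x             # wall pushes outward
            next_update += servo_period
        v = amp * omega * math.cos(omega * t)    # hand velocity
        work += held_force * v * dt              # energy delivered to hand
        t += dt
    return work
```

At 100 updates per second the wall injects roughly ten times the energy it does at 1,000; the device's physical friction must dissipate that excess, which is why stiff virtual contact demands kilohertz servo loops.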

Second, each VR display mode has some application for which it provides the optimal solution. This one is a natural for a projection-display workbench, a workbench equipped with a haptic display.

Hot open challenges

Although VR has crossed the high pass from "almost works" to "barely works," many challenges remain both in the enabling technologies and in the systems engineering and human factors disciplines. The tasks that seem most crucial to me follow:

Technological:

- Getting latency down to acceptable levels.
- Rendering massive models (>1M polygons) in real time.
- Choosing which display best fits each application: HMD, cave, bench, or panorama.
- Producing satisfactory haptic augmentation for VR illusions.

Systems:

- Interacting most effectively with virtual worlds:
  - Manipulation
  - Specifying travel
  - Wayfinding
- Making model worlds efficiently:
  - Modeling the existing world—image-based techniques look promising
  - Modeling nonexisting worlds—through CAD byproducts or hard work
- Measuring the illusion of presence and its operational effectiveness

How the field addresses these challenges will dramatically affect the continuing success and speed of adoption of VR. I look forward to seeing substantially more projects move from prototype to production status in the near future.

Acknowledgments

A major part of this report was given as a keynote address at the Virtual Reality 99 conference.

My greatest thanks go to the correspondents who sent me descriptions of their VR projects, and most especially to those who welcomed me to their installations and shared with me their experiences and the lessons they drew from them. The project descriptions list the principal hosts at each installation; each had colleagues who also shared their knowledge and time. I am most grateful to each of them.

An imaginative grant from the Office of Naval Research, arranged by Larry Rosenblum, made possible the travel by which I gathered the solid first-hand knowledge of applications that are really in production status. I also thank my faculty, staff, and student colleagues at Chapel Hill, who have taught me many lessons about virtual reality and who continually amaze me with their creativity and accomplishment. Henry Fuchs, in particular, has shared the vision and inspired my efforts by his own immense imagination.

References

1. I.E. Sutherland, "The Ultimate Display," invited lecture, IFIP Congress 65. An abstract appears in Information Processing 1965: Proc. IFIP Congress 65, Vol. 2, W.A. Kalenich, ed., Spartan Books, Washington, D.C., and Macmillan, New York, pp. 506-508. A similar, but quite complete, paper is Sutherland's "Computer Displays," Scientific American, Vol. 222, No. 6, June 1970, pp. 57-81.

2. F.P. Brooks, "Is There Any Real Virtue in Virtual Reality?," public lecture cosponsored by the Royal Academy of Engineering and the British Computer Society, London, 30 Nov. 1994; http://www.cs.unc.edu/~brooks.

3. S. Coleridge, Biographia Literaria, Ch. 1, in The Collected Works of Samuel Taylor Coleridge, Vol. 7, J. Engell and W.J. Bate, eds., Routledge & Kegan Paul, London, and Princeton University Press, Princeton, 1983.

4. N.I. Durlach and A.S. Mavor, eds., Virtual Reality—Scientific and Technological Challenges, National Research Council, Washington, D.C., 1995.

5. G. Burdea and P. Coiffet, Virtual Reality Technology, John Wiley, New York, 1994.

6. R.L. Holloway, Registration Errors in Augmented Reality Systems, PhD dissertation, Department of Computer Science, University of North Carolina at Chapel Hill, 1995.

7. M.R. Falvo et al., "Bending and Buckling of Carbon Nanotubes under Large Strain," Nature, Vol. 389, Oct. 1997, pp. 582-584.

Frederick P. Brooks, Jr. is Kenan Professor of Computer Science at the University of North Carolina at Chapel Hill, where he founded the department in 1964. His current research field is interactive 3D computer graphics. He received his PhD in 1956 in what would today be called computer science, under Howard Aiken at Harvard. His AB was in physics from Duke.

While Brooks worked for IBM, he was finally project manager for the development of the System/360 family of computers and of its software, Operating System/360, for which he was awarded the National Medal of Technology and the IEEE John von Neumann Medal. His recent books are The Mythical Man-Month: Essays on Software Engineering (20th Anniversary Edition, Addison-Wesley, Reading, Mass., 1995) and, with G.A. Blaauw, Computer Architecture: Concepts and Evolution (Addison-Wesley, 1997). He is a fellow of the IEEE and a member of the National Academy of Engineering.

Readers may contact him at [email protected].
