
Rough Draft: Theoretical Foundations of Augmented Reality



A collection of AR definitions for my current schoolwork.



Augmented reality (AR) systems merge computer-generated graphics with a view of the physical world. Ideally, the graphics should be perfectly aligned, or registered, with the physical world. Perfect registration requires the computer to have accurate knowledge of the structure of the physical world and the spatial relationships between the world, the display, and the viewer. Unfortunately, in many real-world situations, the available information is not accurate enough to support perfect registration. Uncertainty may exist in world knowledge (e.g., accurate, up-to-date models of the physical world may be impossible to obtain) or in the spatial relationships between the viewer and the world (e.g., the technology used to track the viewer may have limited accuracy).

Emerging Technologies of Augmented Reality: Interfaces and Design. Michael Haller, Upper Austria University of Applied Sciences, Austria; Mark Billinghurst, Human Interface Technology Laboratory, New Zealand; Bruce H. Thomas, Wearable Computer Laboratory, University of South Australia, Australia.

2007, p. 24

In recent years, mobile phones have developed into an ideal platform for augmented reality (AR). The current generation of phones has full color displays, integrated cameras, fast processors, and even dedicated 3D graphics chips. It is important to conduct research on the types of AR applications that are ideally suited to mobile phones and on user interface guidelines for developing these applications. This is because the widespread adoption of mobile phones means that they could be one of the dominant platforms for AR applications in the near future.

Traditionally, AR content is viewed through a head mounted display (HMD). Wearing an HMD leaves the user's hands free to interact with the virtual content, either directly or by using an input device such as a mouse or digital glove. However, for handheld and mobile phone based AR, the user looks through the screen and needs at least one hand to hold the device. The user interface for these applications is very different from those for HMD based AR applications. Thus, there is a need to conduct research on interaction techniques for handheld AR displays, and to produce formal user studies to evaluate these techniques.

p. 91

Augmented reality (AR) makes it possible to concurrently visualize both the real world and overlaid virtual information. While designing a conventional user interface requires deciding what information should be presented, and how and where it should be shown, designing an AR user interface requires addressing another crucial problem: the superimposed virtual information can occlude things that we would otherwise see in the real world. Therefore, to avoid obscuring important objects, it is necessary to determine the position and size of virtual objects within the user interface relative to what is seen in the real world. View management (Bell, Feiner, & Höllerer, 2001) refers to the layout decisions that determine


spatial relationships among objects in a 3D user interface. For an application to perform view management, it must compute the visibility of objects of interest in a 3D environment, as seen from a selected 3D viewpoint, taking into account visibility constraints. It also must represent and process the visibility information once it has been projected onto the 2D screen space representing the user's view.

Figure 1 shows the order in which computations are done in our view management pipeline (Bell, Feiner, & Höllerer, 2005). To precisely maintain the specified visual constraints, this pipeline should be executed for each frame rendered. However, since some of these computations can take a considerable amount of time to compute, asynchronous execution of the pipeline, as in the decoupled simulation model (Shaw, Green, Liang, & Sun, 1993), can preserve interactive rendering frame rates at the expense of view management accuracy. (These tradeoffs might be preferable depending on the user's preference or task.)
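The core step the excerpt describes, computing the screen-space footprint of 3D objects and keeping overlays out of it, can be sketched as follows. This is a minimal illustration under an assumed pinhole camera with made-up parameters, not the actual pipeline of Bell, Feiner, and Höllerer:

```python
# Sketch: project each object's 3D bounding box onto the 2D screen,
# approximate it by its axis-aligned 2D extent, and test candidate
# label positions against those occupied extents.

def project(point, focal=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-space point (z > 0) to pixel coords."""
    x, y, z = point
    return (cx + focal * x / z, cy + focal * y / z)

def screen_rect(corners_3d):
    """2D axis-aligned bounding rectangle of projected 3D box corners."""
    pts = [project(c) for c in corners_3d]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))

def overlaps(a, b):
    """True if two (xmin, ymin, xmax, ymax) rectangles intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

# A unit cube in front of the camera, and two candidate label rectangles.
cube = [(x, y, z) for x in (-0.5, 0.5) for y in (-0.5, 0.5) for z in (4.0, 5.0)]
occupied = screen_rect(cube)
label_bad = (300, 220, 360, 260)   # would occlude the cube's footprint
label_ok = (10, 10, 80, 40)        # lands in free screen space

assert overlaps(occupied, label_bad)
assert not overlaps(occupied, label_ok)
```

Running this per frame is exactly the cost the excerpt mentions; the decoupled variant would recompute `occupied` asynchronously and let rendering reuse slightly stale rectangles.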

p. 111

While augmented reality (AR) technology is steadily maturing, application and content development for those systems still mostly takes place at source code level. Besides limiting developer productivity, this also prevents professionals from other domains such as writers, designers, or artists from taking an active role in the development of AR applications and presentations. Previous attempts to adapt authoring concepts from other domains such as virtual reality or multimedia have met with only partial success.

In order to develop authoring tools that genuinely support AR content creation, we have to look into some of the unique properties of the AR paradigm. We argue that a successful AR authoring solution must provide more than an attractive graphical user interface for an existing AR application framework. It must provide conceptual models and corresponding workflow tools, which are appropriate to the specific domain of AR.

In this chapter, we explore such models and tools and describe a working solution, the augmented presentation and interaction language (APRIL). In particular, we discuss aspects relating to real-world interfaces, hardware abstraction, and authoring workflow.

p. 139

The requirements for an appropriate AR authoring solution have been derived from our own experience in realizing AR applications, both by expert programmers and by our students, and from published work in this field that points out the need for authoring solutions (Navab, 2004; Regenbrecht, Baratoff, & Wilke, 2005). To illustrate these requirements and the proposed solutions in the context of this article, we will use a widely known example of an AR application scenario: construction assistance (Figure 1). This scenario is well suited to our analysis for a number of reasons. It is a widely known example with existing case studies and implementations (Webster, Feiner, MacIntyre, Massie, & Krueger, 1996; Zauner, Haller, & Brandl, 2003). It is a moderately complex, non-linear task involving untrained end-users, exactly fitting the type of application that we want to support with our solution. It is related to a wide range of industrial and engineering applications. The realization of this application can be decomposed into separate tasks and delegated to individuals in a team of collaborating experts.

pp. 139–140


Real-World Interfaces

One aspect that makes AR setups fundamentally different from other media is the presence of the real world in the user's perception of the application space, a feature that we have to take into account when structuring application space and interaction. Furthermore, in applications like our furniture construction example, the world is not only a passive container for the application's content: parts of the real world (e.g., pieces of furniture to assemble, real-world tools) are part of the application's user interface. In our conceptual model, we have to address the different possibilities of relating application content to the real world. For example, the parts of furniture to be assembled must be explicitly modeled as application objects despite the fact that they will not be rendered graphically.

p. 141

Hardware Abstraction

Another fundamental aspect of AR is the heterogeneity of hardware setups, devices, and interaction techniques, which usually prohibits a "write once, run anywhere" approach or the development of standardized interaction toolkits. We will present strategies for hardware abstraction and an interaction concept that can be transparently applied to a wide range of input devices. By applying these abstractions, application portability can be improved, enabling applications to be developed on desktop workstations or in other test environments instead of directly on the target AR system, which may be scarce or expensive to use. Note that this virtualization of resources is equally useful for classic virtual reality (VR) development.

An important requirement is that the framework should support the manifold combination possibilities of input and output peripherals found in the hybrid, distributed AR systems we are developing in our research. In some cases, such as when working with mobile systems or handheld devices, it is also much more convenient to develop the application on a desktop PC and then run it on the target system exclusively for fine-tuning and final deployment. Applications and their components should be reusable in different setups, and applications developed for one system should run on another setup with little or no modification. The furniture construction application can be configured to run on a webcam-based home setup, as well as on the tracked see-through augmented reality head-mounted displays in our lab.

p. 141

Contribution

The aforementioned requirements have been implemented in an authoring system called APRIL. We will present this system as a proof-of-concept implementation of the presented ideas, and discuss results and experiences. The contributions presented in this chapter are: (1) identification of key concepts and properties of AR systems that are relevant for content creation; (2) a description of the state of the art in AR authoring; (3) a consistent conceptual model for content creators covering hardware abstraction, interaction, and spatial and temporal structuring; and (4) a reference implementation of an authoring system using the aforementioned model.

p. 142


3D Modeling and Scripting

The first attempts to support rapid prototyping for 3D graphics were based on text file formats and scripting languages such as Open Inventor (Strauss & Carey, 1992), VRML (VRML Consortium, 1997), or X3D (Web3D Consortium, 2005). New types of objects and behaviors can only be added by implementing them in C++ and compiling them to native code. While scriptable frameworks represent an improvement in the workflow of programmers, who can create application prototypes without the need to compile code, they do not offer the necessary concepts and abstractions for controlling an application's temporal structure and interactive behavior, and provide no built-in support for AR/VR devices. Platforms like Avango (Tramberend, 1999) or Studierstube (Schmalstieg et al., 2002) add the necessary classes to such frameworks to support the creation of AR/VR applications. However, from the perspective of an author, the power of these frameworks further complicates matters rather than providing the required level of abstraction.

Among the tools targeted towards beginners, the Alice system (Conway, Pausch, Gossweiler, & Burnette, 1994) is particularly noteworthy. It was designed as a tool to introduce novice programmers to 3D graphics programming. Alice comes with its own scene editor and an extensive set of scripting commands, but is clearly targeted at an educational setting. For creating "real world" applications, the reusability and modularity of Alice is insufficient. Also, Alice focuses on animation and behavior control of individual objects and does not offer any high-level concepts for application control.

p. 143

The APRIL Language

APRIL, the augmented reality presentation and interaction language, covers all aspects of AR authoring defined in the requirements analysis. APRIL provides elements to describe the hardware setup, including displays and tracking devices, as well as the content of the application and its temporal organization and interactive capabilities. Rather than developing APRIL from scratch, we built the authoring and playback facilities on top of our existing Studierstube runtime system. However, it should be possible to use other runtime platforms for playing back applications created using our framework.

We decided to create an XML-based language for expressing all aspects needed to create compelling interactive AR content. This language acts as the "glue code" between those parts where we could use existing content formats. XML was chosen for three reasons: it is a widely used standard for describing structural data, it allows the incorporation of other text or XML based file formats into documents, and it offers a wide range of tools that operate on XML data, such as parsers, validators, or XSLT transformations.

Enumerating all elements and features that APRIL provides is beyond the scope of this chapter. Interested readers are referred to Ledermann (2004), where detailed information and the APRIL schema specification can be found. In this chapter, we focus on the illustration of the main concepts of the APRIL language and an analysis of the implications of our approach. Whenever references to concrete APRIL element names are made, these will be set in typewriter letters.

p. 145


Hardware Abstraction

Flexibility in AR authoring requires separation of the content of the application from all aspects that depend on the actual system that the application will run on. However, the mapping from the hardware-dependent layer to the application must be sufficiently expressive to allow the application to make full use of hardware features like tracking devices or displays.

APRIL allows all hardware aspects to be placed into a separate setup description file and supports running an application on different setups, each with its respective setup description file. Each of these files contains XML code that describes the arrangement of computers, displays, pointing and other interaction devices that the system is composed of, and the definition of stages and input devices that will be available in the application.

p. 148

Each computer that is part of the setup is represented by a corresponding host element that defines the name and IP address of that host, and the operating system and AR platform that run on that machine. For each display, a display element carries information about its size and the geometry of the virtual camera generating the image. For configuring tracking devices, we use the existing OpenTracker configuration language (Reitmayr & Schmalstieg, 2001), which is simply included in the APRIL file by using a namespace for OpenTracker elements. OpenTracker allows the definition of tracking sources and a filter graph for transforming and filtering tracking data. Rather than reinventing a similar technology, we decided to directly include the OpenTracker elements into the APRIL setup description files.

OpenTracker only defines tracking devices and their relations, but not the meaning of the tracking data for the application. In APRIL, OpenTracker elements are used inside appropriate APRIL elements to add semantics to the tracking data: headtracking or displaytracking elements inside a display element contain OpenTracker elements that define the tracking of the user's head or the display surface for the given display, pointer elements define pointing devices that are driven by the tracking data, and station elements define general-purpose tracking input that can be used by the application.

Pointing at objects and regions in space plays a central role in augmented reality applications, and several techniques have been developed to allow users to perform pointing tasks under various constraints. APRIL provides the pointer element to define a pointing device, allowing the author to choose from several pointing techniques. The simplest case would be a pointing device that operates in world space. Other applications have used a ray-picking technique, using a "virtual laser pointer" to select objects at a distance.
Some techniques work only in combination with a display, such as performing ray-picking that originates from the eye point of the user, effectively allowing her to use 2D input to select objects in space. These pointers can only be used in conjunction with a specific display and are placed inside the corresponding display element.
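The eye-point ray-picking described above can be sketched in a few lines: a 2D screen point is unprojected into a ray from the user's eye, and the ray is tested against an object's bounding sphere. This is an illustrative sketch under an assumed pinhole camera, not APRIL's actual implementation:

```python
import math

def screen_to_ray(px, py, focal=500.0, cx=320.0, cy=240.0):
    """Unit direction of the view ray through a pixel, eye at the origin."""
    d = ((px - cx) / focal, (py - cy) / focal, 1.0)
    n = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    return (d[0] / n, d[1] / n, d[2] / n)

def ray_hits_sphere(direction, center, radius):
    """Ray from the origin along `direction` vs. a bounding sphere:
    compare the center's distance from the ray to the radius."""
    t = sum(d * c for d, c in zip(direction, center))  # projection onto ray
    if t < 0:
        return False  # sphere is behind the eye
    closest = [c - t * d for d, c in zip(direction, center)]
    return sum(x * x for x in closest) <= radius * radius

# Clicking the screen center selects an object straight ahead...
ahead = screen_to_ray(320, 240)
assert ray_hits_sphere(ahead, center=(0.0, 0.0, 3.0), radius=0.25)
# ...but not one well off to the side.
assert not ray_hits_sphere(ahead, center=(1.0, 0.0, 3.0), radius=0.25)
```

Because the ray depends on the eye position and the display geometry, such a pointer is meaningful only for one specific display, which is why the text places it inside the corresponding display element.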


Stages, the top-level spatial containers for the application's content, are also defined in the setup description file. A stage can be defined inside a display element, in which case the content of the stage will only be visible on that specific display. Content placed in stages that are defined at the top level of the configuration file is publicly visible to all users. For each stage, it is possible to choose whether the content should be rendered in 3D or as a 2D texture, and whether it should be positioned relative to the global world coordinate system or located at a fixed offset from the display surface.

Figure 4 lists an example hardware configuration file for a single-host setup using a pointer and four stages.
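A setup description along these lines might look roughly like the following. The element and attribute names here are guesses based on the chapter's prose (host, display, pointer, stage), not the actual APRIL schema (see Ledermann, 2004, for that); the snippet just shows how the display-private versus shared stage distinction falls out of the XML nesting:

```python
import xml.etree.ElementTree as ET

# Hypothetical single-host setup with a pointer and four stages,
# mirroring the structure the text attributes to Figure 4.
setup_xml = """
<setup>
  <host name="ar1" ip="10.0.0.5" os="win32" platform="Studierstube"/>
  <display width="1024" height="768">
    <pointer technique="eye-ray"/>
    <stage name="hud" render="2d"/>
  </display>
  <stage name="world" render="3d"/>
  <stage name="instructions" render="3d"/>
  <stage name="parts" render="3d"/>
</setup>
"""

root = ET.fromstring(setup_xml)

# Stages nested inside a display are visible only on that display;
# top-level stages are visible to all users, as described above.
private = [s.get("name") for s in root.findall("./display/stage")]
shared = [s.get("name") for s in root.findall("./stage")]

assert private == ["hud"]
assert shared == ["world", "instructions", "parts"]
assert len(root.findall(".//stage")) == 4
```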

p. 149

Background: Reported Industrial AR Applications

There are three driving forces in any industrial context which lead to the introduction of new technologies: cost reduction, speed-up of processes, and quality improvement. If one can bring the appropriate information to the right place at the right time, all three forces can be addressed. Augmented reality seems to be an ideal candidate in almost any context. It lies in the very nature of AR to be applied within the current working context (e.g., the assembly line) and to deliver accurate, useful, and up-to-date information (e.g., the number and representation of the next product part to be assembled). This will eventually lead to shorter production times, less training effort, a reduction of errors, and finally to lower production costs.

But why is it so difficult to implement AR technology in an industrial context? We believe that the maturity of the contributing technologies (tracking, displays, content generation, wearable computing, etc.) does not yet suit the demanding conditions of industrial environments regarding robustness, reliability, quality, and practical experience. But we also believe that it is very close to being applied successfully in a broad range of fields.

It can be said that the application of augmented reality in an industrial context started with Boeing's wire bundle assembly project in the early '90s (see Mizell, 2001). The wiring for each individual airplane to be built is unique. Therefore, the wire bundles needed to be pre-configured in a workshop beforehand, in huge numbers, according to plans displayed on large boards. The display of the wiring paths for this very manual task of forming bundles at the boards seemed to be a promising area for the application of AR technology: the wire plans are augmented directly onto the board using an optical see-through head-worn display. Even if this application did not make it into the real production process, for various and mainly organizational reasons, the wire bundle assembly still stands as the first prototype example of industrial AR.

This project was followed by several smaller projects until the end of the last century. While numerous academic projects evolved in the following years, industrial augmented reality (IAR) applications are still rare. In some cases, AR technology was applied successfully in certain use cases, for instance in supporting welding processes (Echtler et al., 2003), where the welding helmet itself is used to overlay information and to ease the visibility of the welding point and seam.


To date there have been two major initiatives for AR innovation. The Mixed Reality Systems Laboratory in Japan, with its focus set on the development of mixed reality prototype applications comprising hardware and software, has demonstrated the potential for the real-world use of AR (see Tamura, Yamamoto, & Katayama, 2001). The success of this project led to the release of the mixed reality platform, a comprehensive toolkit consisting of display, tracking, and AR software technology.

The other initiative has been the German project "ARVIKA," led by Siemens, which included the majority of the manufacturing industry in the country as well as selected partners in academia and small and medium enterprises (see Friedrich, 2004). The focus here was on the application of AR in the fields of design, production, and servicing.

There is noticeable progress in the application of all kinds of augmented reality technology in a broad scope of fields. For instance, Froehlich et al. (2005) report the use of projection-based augmented reality in the context of museum exhibitions, where a permanent installation at a German exhibition clearly shows the reliable use of AR technology. Another example is the use of head-mounted display based AR technology in the education of students at TU Vienna, reported by Kaufmann (2005), with major benefits for students in understanding complex geometric properties by applying interactive techniques. The European Commission currently supports a variety of projects related to augmented reality in its framework programs (Badique, 2005), which is a strong indicator of the importance of the dissemination of this technology.

In the realm of industrial augmented reality applications, Navab (2004) identified design, commissioning, manufacturing, quality control, training, monitoring and control, and service and maintenance as the main application fields for augmented reality, and gives guidance based on his own experiences. Navab emphasizes the need for "killer applications" to further the research and development in IAR. He, for instance, overlays images, drawings, and virtual models onto the geometry of plant equipment, in particular industrial pipelines, with high accuracy.

All projects encountered serious problems regarding the instrumentation of the industrial site with tracking equipment to track the user's position and orientation, several calibration issues, and the robustness, ergonomics, and fidelity of the AR display technology, among others. All these initiatives brought forward various prototypes and demonstrated applications, and have therefore been valuable in progressing the field of AR. The lessons learned in these projects have had a strong influence on the direction of AR R&D worldwide.

pp. 284–286

Wearable Technology

If one wants to support a given workflow with new technology, the technology itself should really be of help and must not disturb or distract the attention of the user. In our case, the AR technology and interface should be as unobtrusive as possible and should not require massive instrumentation of the environment or the worker. We considered various display and computer technologies for use within our scenario, like


tracked video see-through head-mounted displays, head-mounted or environment-mounted projectors, and displays on the cart or within the storage environment. Eventually two systems remained worth considering: the first a combination of a MicroOptical display unit and a personal digital assistant (PDA), and the second a Microvision Nomad Expert Technician System consisting of a head-worn display unit connected to a Nomad wearable computer (see Figure 9). Both systems include integrated wireless LAN (WLAN) and work with lightweight head-worn displays, and their computer units can be worn on a belt and are battery-operated.

The MicroOptical display unit was attached to standard safety glasses, which workers are obliged to wear while working. The PDA was integrated into an industrial housing, with customized buttons for operating the unit mounted and interfaced to the PDA. A high-capacity battery was used instead of the standard one. The Microvision display unit was attached to a baseball cap (see Figure 9, bottom left).

The pros and cons of each system can be summarized as follows: Both systems were robust enough to be suitable for the picking task environment. The housings and components used

already do, or will soon, comply with industry requirements for the near future. Although not extensively tested yet, both the computer units and the display units were acceptably comfortable to wear, though we recognize there is room for improvement.

The main issue is the display technology. The MicroOptical system blocks part of the user's sight. Even though this is only a small portion of the entire field of view, it is disturbing and concerns the customers. The Microvision system allows for optical see-through and therefore blocks only those parts of the environment where the actual information is displayed. Unfortunately, a psychological aspect comes into play with the Microvision system: because the image is produced by applying a laser beam to the retina of the user, an acceptance barrier has to be broken before one can introduce the display.

To be worn for a full working day (a shift), a wearable AR system has to operate continuously for about eight hours. Because of the high-capacity battery used in the MicroOptical/PDA system, this can be achieved easily. The Microvision system, on the other hand, when operated together with the WLAN, has to be recharged after less than four hours. If no extra battery option is available, more than one system per worker and shift has to be provided, including a transparent, continuous, and seamless information hand-over.

The user interface has to be robust and easy to use, and must not require fine motor movements to control, like mouse cursor movements. For this reason, the use of a button-only interface is advisable. This can be implemented with both systems. The lack of colors with the Microvision system can be compensated for with a careful interface design.

pp. 298–299

Spatially Immersive Displays

Home entertainment centres enhanced with spatially immersive displays (SIDs) will be an interesting entry point for AR. Wrapping the game's visual and sound space around the user would semi-immerse the user in the game. In the near future, AR gaming could make use of a number of current technologies. An attractive option over an HMD is to use a SID: a set of displays that physically surrounds the viewer with a panorama of imagery (Lantz, 1996).

p. 372


Safety

Safety is a major concern when operating an AR system in an outdoor setting. While using the HMD with AR information displayed, a user may experience information tunnelling: while operating the AR application, the user could pay more attention to the information on the HMD than to the physical world. This may lead the user into dangerous situations, such as stepping into the way of motor traffic or tripping over an obstacle.

I envision three major ways to tackle this problem. The first method is to have the game require the user to focus on the physical world. Bringing elements of the physical world into the game would help users attend to the virtual and the physical at the same time. In the ARQuake game, the users have to pay attention to the physical buildings, as they occlude monsters and game pieces. This stops users from bumping into walls, but the users remain oblivious to hazards not associated with the game. For example, poor tracking could cause game pieces to float onto roads, and the desire to acquire a powerful new weapon would then entice users to step onto the road. The second method is to confine the user to a safe playing arena. For example, the game could require the use of a fenced-in area, such as a cricket oval. This would eliminate a number of possible dangerous elements, and the field could be examined beforehand for loose footing. The games would then have to be restricted to basically open-field arenas. The final method to solve a number of these problems is to design the games to lure people away from dangerous situations: interesting virtual objects should be placed away from motorways, unsafe footing, and the like.

pp. 376–377

Augmented Reality (AR) refers to a live view of a physical real-world environment whose elements are merged with augmented computer-generated images, creating a mixed reality. The augmentation is typically done in real time and in semantic context with environmental elements. By using the latest AR techniques and technologies, the information about the surrounding real world becomes interactive and digitally usable.

(Borko Furht, Editor, Handbook of Augmented Reality)

Preface vii

Like Virtual Reality (VR), Augmented Reality (AR) is becoming an emerging edutainment platform for museums. Many artists have started using this technology in semi-permanent exhibitions. Industrial use of augmented reality is also on the rise. Some of these efforts are, however, limited


to using off-the-shelf head-worn displays. New, application-specific alternative display approaches pave the way towards flexibility, higher efficiency, and new applications for augmented reality in many non-mobile application domains. Novel approaches have taken augmented reality beyond traditional eye-worn or hand-held displays, enabling new application areas for museums, edutainment, research, industry, and the art community. This book discusses spatial augmented reality (SAR) approaches that exploit large optical elements and video-projectors, as well as interactive rendering algorithms, calibration techniques, and display examples. It provides a comprehensive overview with detailed mathematical equations and formulas, code fragments, and implementation instructions that enable interested readers to realize spatial AR displays by themselves.

(Spatial Augmented Reality by Oliver Bimber and Ramesh Raskar, A K Peters © 2005, 384 pages, ISBN 1568812302)

Chapter 1, page 1: Overview

1.1 What is Augmented Reality?

The terms virtual reality and cyberspace have become very popular outside the research community within the last two decades. Science fiction movies, such as Star Trek, have not only brought this concept to the public, but have also influenced the research community more than they are willing to admit. Most of us associate these terms with the technological possibility to dive into a completely synthetic, computer-generated world—sometimes referred to as a virtual environment. In a virtual environment our senses, such as vision, hearing, haptics, smell, etc., are controlled by a computer while our actions influence the produced stimuli. Star Trek's Holodeck is probably one of the most popular examples. Although some bits and pieces of the Holodeck have been realized today, most of it is still science fiction.

So what is augmented reality then? As is the case for virtual reality, several formal definitions and classifications for augmented reality exist (e.g., [109, 110]). Some define AR as a special case of VR; others argue that AR is a more general concept and see VR as a special case of AR. We do not want to make a formal definition here, but rather leave it to the reader to philosophize on their own. The fact is that in contrast to traditional VR, in AR the real environment is not completely suppressed; instead it plays a dominant role. Rather than immersing a person into a completely synthetic world, AR attempts to embed synthetic supplements into the real environment (or into a live video of the real environment). This leads to a fundamental problem: a real environment is much more difficult to control than a completely synthetic one. Figure 1.1 shows some examples of augmented reality applications.

As stated previously, augmented reality means to integrate synthetic information into the real environment. With this statement in mind, would a TV screen playing a cartoon movie, or a radio playing music, then be an AR display? Most of us would say no—but why not? Obviously, there is more to it. The augmented information has to have a much stronger link to the real environment. This link is mostly a spatial relation between the augmentations and the real environment. We call this link registration. R2-D2's spatial projection of Princess Leia in Star Wars would be a popular science fiction example for augmented reality. Some technological approaches that mimic a holographic-like spatial projection, like the Holodeck, do exist today. But once again, the technical implementation as shown in Star Wars still remains a Hollywood illusion.

Some say that Ivan Sutherland established the theoretical foundations of virtual reality in 1965, describing what in his opinion would be the ultimate display [182]:

The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Hand-cuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming, such a display could literally be the Wonderland into which Alice walked.

However, technical virtual reality display solutions were proposed much earlier. In the late 1950s, for instance, a young cinematographer named Mort Heilig invented the Sensorama simulator, which was a one-person demo unit that combined 3D movies, stereo sound, mechanical vibrations, fan-blown air, and aromas. Stereoscopy even dates back to 1832 when Charles Wheatstone invented the stereoscopic viewer.

Then why did Sutherland's suggestions lay the foundation for virtual reality? In contrast to existing systems, he stressed that the user of such an ultimate display should be able to interact with the virtual environment. This led him to the development of the first functioning Head-Mounted Display (HMD) [183], which was also the birth of augmented reality. He used half-silvered mirrors as optical combiners that allowed the user to see both the computer-generated images reflected from cathode ray tubes (CRTs) and objects in the room, simultaneously. In addition, he used mechanical and ultrasonic head position sensors to measure the position of the user's head. This ensured a correct registration of the real environment and the graphical overlays.
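The registration step that head tracking makes possible can be illustrated with a toy calculation: given the tracked head position and yaw, a world-fixed point is mapped into the viewer's coordinate frame, which is where the graphical overlay must be drawn so that it stays aligned with the real object. This is a minimal 2-D sketch with invented names, not any particular tracker's API:

```python
import math

def world_to_view(point, head_pos, head_yaw_deg):
    """Map a world-fixed point (x, z) into the viewer's frame, given the
    tracked head position and yaw. Toy 2-D illustration of the registration
    step enabled by a head tracker (function name is hypothetical)."""
    dx = point[0] - head_pos[0]
    dz = point[1] - head_pos[1]
    th = math.radians(head_yaw_deg)
    # Rotate the world offset by -yaw so it is expressed in head coordinates.
    return (math.cos(th) * dx + math.sin(th) * dz,
            -math.sin(th) * dx + math.cos(th) * dz)

# A point one unit ahead of an unrotated head; after the head turns 90 degrees
# the same world point must be drawn at a different view coordinate.
print(world_to_view((1.0, 0.0), (0.0, 0.0), 0.0))
print(world_to_view((1.0, 0.0), (0.0, 0.0), 90.0))
```

As the head moves, re-evaluating this mapping every frame and drawing the overlay at the resulting view coordinate is what keeps the graphics registered to the real environment; inaccurate head-pose measurements translate directly into visible misalignment.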

The interested reader is referred to several surveys [4], [5] and Web sites [3], [193] of augmented reality projects and achievements. Section 1.2 gives a brief overview of today's technical challenges for augmented reality. It is beyond the scope of this book to discuss these challenges in great detail.

In augmented reality technology there are three defining characteristics: the combination of the real and virtual worlds, interaction that runs in real time, and objects in the form of three-dimensional (3D) models [2]. Contextual data in an augmented reality system can take the form of location data, audio, video, or 3D model data. Such model data can be created with computer-aided design applications.

Several components are required to build and develop an augmented reality application:
1. Computer
2. Head Mounted Display (HMD)
3. Marker

The computer is the device that controls all of the processes taking place in the application; the choice of computer depends on the requirements of the application. The Head Mounted Display (HMD) is the hardware used as the display or monitor that presents the 3D objects or other information delivered by the system. The HMD places a display directly in front of the user's eyes [3]; an HMD is shown in Figure 2. Using an HMD is intended to make the objects shown by an augmented-reality application appear more real, because they are seen directly by the eyes. The HMD can also be replaced with a monitor or a projector.

The working process of the HMD is shown in Figure 3. The real world is the actual scene seen by the user of the application; it is captured by an optical combiner or a camera. The scene generator is the software responsible for rendering the virtual objects that will be merged into the real world. The monitor on the HMD serves as the display that shows the rendering result produced by the scene generator.
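In a video see-through setup, the scene generator's output is combined with the camera image pixel by pixel: wherever the virtual layer is marked opaque, it replaces the real pixel. A minimal sketch (the flat frame format and function name are invented for illustration):

```python
def composite(real_frame, virtual_frame, mask):
    """Combine a camera frame with a rendered virtual frame.
    mask[i] == 1 means pixel i belongs to a virtual object and replaces
    the real pixel; mask[i] == 0 lets the real world show through."""
    return [v if m else r for r, v, m in zip(real_frame, virtual_frame, mask)]

# A 5-pixel "frame": the virtual object occupies only the middle pixel.
print(composite([10, 20, 30, 40, 50], [0, 0, 99, 0, 0], [0, 0, 1, 0, 0]))
# → [10, 20, 99, 40, 50]
```

An optical see-through HMD performs the same combination physically with a half-silvered mirror instead of in software, so no camera frame is needed.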

A marker is a black-and-white square image. The marker enables the tracking process while the application is in use: the computer recognizes the position and orientation of the marker and creates a virtual 3D object at the origin (0, 0, 0) of a coordinate frame with three axes (X, Y, Z). ARToolkit is a library used in the development of augmented reality technology; it provides several types of markers that can be used in system development. Figure 3 shows a marker used in the development of an augmented reality system.

The use of markers in an augmented reality application depends on the library used for its development. The working process of a marker with the ARToolkit library is shown in Figure 5.

The steps of marker detection are:
1. The camera captures video of the real world into the computer.
2. The software on the computer searches every video frame for the marker.
3. If the marker square is found or detected, the software computes the position of the camera relative to the marker using the equations that have been supplied.
4. Once the camera position relative to the marker is known, the computer draws the previously created model.
5. The model is displayed on top of the detected marker.
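The steps above form a per-frame loop. The sketch below mirrors that loop in Python with stand-in stubs; the function names and the toy string-based "detector" are invented for illustration and are not the ARToolkit C API:

```python
def detect_marker(frame):
    """Stub detector: 'finds' the marker when the frame contains the token.
    A real detector would threshold the image and search for the square."""
    return {"pose": (0.0, 0.0, 0.0)} if "marker" in frame else None

def ar_loop(frames):
    """One pass over the captured frames, following steps 1-5 of the text."""
    overlays = []
    for frame in frames:                  # 1. capture a video frame
        marker = detect_marker(frame)     # 2-3. search the frame; compute pose
        if marker is not None:            # 4. camera pose relative to marker known
            overlays.append(("model", marker["pose"]))  # 5. draw model on marker
    return overlays

print(ar_loop(["empty scene", "scene with marker"]))  # one overlay is produced
```

Real systems run this loop once per captured frame, so the model appears anchored to the marker as the camera moves.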

Besides the hardware described above, software is needed to develop this augmented reality application, namely libraries such as ARToolkit. ARToolkit is a library used to build augmented reality applications [3]; such applications overlay virtual imagery on the real world. ARToolkit is developed in the C and C++ programming languages, and its use is free because the library is open source. Components of ARToolkit include OpenGL, GLUT, and DirectShow. Figure 5 shows how a camera with certain coordinates is used to detect a marker and then display the predefined model.
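The camera-relative pose that such a library computes can be thought of as a rigid transformation matrix; applying it to the model's vertices, which are defined around the marker origin (0, 0, 0), expresses them in camera coordinates for rendering. A hedged sketch using a 3×4 [R|t] matrix in plain Python (illustrative only, not ARToolkit's actual data structures):

```python
def transform(mat3x4, v):
    """Apply a 3x4 [R|t] marker-to-camera matrix to a model vertex (x, y, z)."""
    x, y, z = v
    return tuple(m[0] * x + m[1] * y + m[2] * z + m[3] for m in mat3x4)

# Identity rotation; the marker origin lies 0.5 m in front of the camera along Z.
pose = [(1.0, 0.0, 0.0, 0.0),
        (0.0, 1.0, 0.0, 0.0),
        (0.0, 0.0, 1.0, 0.5)]
print(transform(pose, (0.0, 0.0, 0.0)))  # → (0.0, 0.0, 0.5)
```

Feeding the transformed vertices to the renderer (OpenGL in ARToolkit's case) is what makes the model appear to sit on the marker.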

Augmented Reality Sebagai Metafora Baru dalam Teknologi Interaksi Manusia dan Komputer (Augmented Reality as a New Metaphor in Human-Computer Interaction Technology)


Kurniawan Teguh Martono