Int J Interact Des Manuf (2011) 5:85–94
DOI 10.1007/s12008-011-0117-9

ORIGINAL PAPER

GARDE: a gesture-based augmented reality design evaluation system

L. X. Ng · S. W. Oon · S. K. Ong · A. Y. C. Nee

Received: 5 January 2010 / Accepted: 9 March 2011 / Published online: 26 March 2011
© Springer-Verlag 2011

Abstract Design evaluation and modification play a critical role in the design process to develop products that meet the user requirements. Physical prototypes are commonly used to evaluate a design and justify design changes. However, physical prototypes are costly and cannot be modified. Virtual prototypes have been used, but it is difficult to simulate virtual prototypes accurately and contextualize them. In this paper, a gesture-based augmented reality design evaluation system (GARDE) is presented. Using AR, GARDE is able to enjoy the benefits of both physical and virtual prototyping in design evaluation. In addition, the use of hand gestures as the main interaction input allows for interactive and intuitive design evaluation. Through the use of an off-the-shelf CAD software API, modifications of the 3D models in an AR environment can be achieved. As design evaluation is conducted in a real environment, in-situ design can also be performed using GARDE. Two case studies have been carried out to demonstrate the GARDE system and are described in this paper.

Keywords Gesture interaction · Design evaluation · Augmented reality · Prototyping

L. X. Ng · S. W. Oon · S. K. Ong (B) · A. Y. C. Nee
Mechanical Engineering Department, Faculty of Engineering,
National University of Singapore, 9 Engineering Drive 1,
Singapore 117576, Singapore
e-mail: [email protected]

L. X. Ng · S. K. Ong · A. Y. C. Nee
NUS Graduate School for Integrative Sciences and Engineering,
National University of Singapore, 28 Medical Drive, #05-01,
Singapore 117456, Singapore

L. X. Ng
e-mail: [email protected]

A. Y. C. Nee
e-mail: [email protected]

1 Introduction

Designing and developing a product is an iterative process of constantly modifying and evaluating the design to meet the user requirements. Design evaluation is performed by building prototypes, both physical and virtual, to test the working principles and performance of the product. Physical prototypes are mainly mock-ups of the final product. They can be made by combining existing components to evaluate specific functions of the product, or using rapid prototyping techniques to assess the form and overall functionality of the product. While they are quite similar to the final products and can provide more information about the design, they are usually expensive and time-consuming to build. Furthermore, limited modifications can be made directly on the physical prototypes [1]. When a modification is needed, a new prototype has to be built to evaluate the design changes. Therefore, physical prototypes are usually used in the later stages of the development process, where there are fewer changes.

Virtual prototyping makes use of computer simulation and virtual reality to evaluate the design of a product. From the 3D CAD model of the product, designers and engineers can assess the performance of the product using proprietary simulation software and make changes accordingly. In addition, they can simulate the manufacturing processes, which can help to reduce development costs and time. However, simulation results are heavily reliant on the designers and engineers replicating the physical characteristics of the product and the interactions with the virtual prototypes. It is generally difficult to fully contextualize the use of the product [1]. Virtual environments of real working environments have to be reproduced at the expense of high computational load, time and costs in order to do so.


Augmented prototyping is a hybrid of both physical prototyping and virtual prototyping. It involves the overlay of product information created virtually, such as 3D CAD models and simulation results, onto physical prototypes or objects [2]. Designers and engineers can interact with both the virtual and real contents in real time, and modifications made will be reflected by the virtual contents immediately. Benefits of both physical and virtual prototypes, such as ease of modification, realistic evaluation and contextualization, can be enjoyed using an augmented prototype.

In this paper, an augmented reality (AR) design evaluation system used for augmented prototyping, GARDE, is presented. In this system, virtual prototypes are overlaid on tangible AR markers, and hand gesture inputs captured from data gloves worn by the users are used to manipulate and modify the virtual prototypes. Using GARDE, the users can contextualize a virtual prototype in the real working environment without building a physical prototype, and make the required modifications to the virtual prototype in real time. This will help to reduce the time taken for the modification-evaluation iterations and allow the users to be better informed of the effects of the changes made. Moreover, intuitive bi-manual interaction is supported with the use of two data gloves and gesture commands. This makes the system easy to use and reduces the time taken for design evaluation and modification tasks.

The main contributions of the GARDE system presented in this paper are:

• Six degrees-of-freedom (DOF) manipulation of virtual prototypes overlaid on tangible markers for contextualization, and

• Interactive modification of virtual prototypes using gestures.

These will provide the users with a cost-effective, interactive and intuitive tool to evaluate designs in a real working environment and make the necessary changes in-situ. New designs can be manipulated dynamically, contextualized and modified using simple gestures, allowing design evaluation to be performed in the early design stages, such as conceptual and embodiment design. This feedback streamlines the design process and enables the users to interact with the designs as these designs are being generated, thus achieving interactive design via gesture-based interactions and concurrent design generation, visualization, evaluation and modification.

The structure of this paper is as follows. Section 2 will present the background and rationale for using gesture inputs, and the related augmented prototyping and glove-based systems. Section 3 will provide an overview and description of the features of the GARDE system, and Sect. 4 will demonstrate the implementation of the system through two case studies. Section 5 will present the conclusion and the future work to be done.

2 Background and related work

The first data glove was developed in 1987 [3]. In general, data gloves use flex sensors to determine the flexion of the fingers and a magnetic tracker attached to the back of the hand to determine the position and orientation of the hand. Using data gloves, information on the hand postures and gestures can be input to applications for interaction. Some of the applications that make use of hand postures and gestures are sign language communication, computer-aided presentation, navigation and communication tools in virtual environments, 3D modeling, multimodal interaction, human-robot manipulation and remote control [4].

Using data gloves, one can use either direct manipulation, which is similar to manipulating a real object, or gesture commands to interact with a virtual object. While direct manipulation of the virtual prototype appears to be more interactive and intuitive as compared to gesture commands, there are three main issues that need to be overcome for direct manipulation. Firstly, it is difficult to browse a 3D space within a 2D AR display. Occluded virtual objects cannot be reached due to the lack of visual feedback. Tactile feedback can be provided to overcome this, but usually at the expense of costly equipment [5]. Gesture commands can overcome this problem by 'flying through' the augmented environment and selecting the occluded virtual objects. The second issue pertains to the spatial constraints of direct manipulation. The size of the virtual object is limited by the arm span of the user or the field of view of the camera, whichever is smaller. Gesture commands are not restricted by these constraints. Thirdly, humans are inherently less capable of precise positioning and orientation tasks. This is worsened by the lack of realistic tactile feedback in direct manipulation. Gesture commands can be used to provide numerical input to place the virtual object precisely in the augmented environment. Therefore, in the context of design evaluation and modification, gesture commands using data gloves are more suitable for interacting with the virtual prototypes.

Augmented prototyping enables interaction of physical objects and environments, which are realistic and natural, with virtual models and simulations, which are digital and easily modified. This increases the interactivity of the prototypes and allows the designers to obtain better insight into the design through a fusion of physical and virtual prototypes. Research and development of augmented prototyping systems has been on the rise in recent years. ARMO [6] augments a virtual product on a reconfigurable foam, and the users can evaluate product designs that have a similar form.


Spacedesign [7] supports surface modeling of augmented prototypes and allows the users to directly modify the surfaces of both physical and virtual prototypes. Tangible augmented prototyping of digital handheld products, using physical prototypes with virtual overlay and tangible AR interaction tools, has been demonstrated by Park et al. [8]. In the work reported by Verlinden and Horvath [9], the users can evaluate the design of a product using projected video images. In the work reported by Aoyama and Kimishima [10], the designability and operability of information appliances are tested using augmented prototypes. Comprehensive reviews have been conducted on the enabling technologies, reference frameworks and application areas of augmented prototyping systems [2,11].

3 System overview

The GARDE system consists of three main modules:

1. AR tracking and registration module to track the AR markers and register the virtual prototypes in the real environment.

2. Data-glove gesture interaction module to handle the gesture inputs and convert them into commands for the system.

3. Solid modeling module based on the SolidWorks API to perform the modeling operations.

The setup includes a head-mounted device (HMD) consisting of a web camera (A4 Tech PC Camera) mounted on a see-through Liteye display, two 5DT DataGloves and AR markers. Programming of the application is performed in Visual C++ 6.0 on the WinAR platform developed in the authors' laboratory. AR tracking and registration is performed using the ARToolkit library [12]. The 5DT DataGlove API [13] is used to implement the functionalities of the data gloves, and the SolidWorks API [14] is used for modeling the 3D virtual models. Rendering is achieved using OpenGL. The system is implemented on a desktop personal computer (ACER Veriton M420G: AMD Phenom 2.20 GHz processor, 4 GB SDRAM and Nvidia GeForce G100 512 MB graphics card). Figure 1 shows the system setup.

In GARDE, tracking is achieved using a single web camera mounted on the HMD. This offers a relatively inexpensive and easily replicated method for tracking the AR markers in the design environment. The poses of the AR markers are tracked using this web camera and computed using the ARToolkit library. The pose information is used for registering the virtual model on the markers. One of the markers is used as the origin of the AR environment so that both the real and virtual objects share the same coordinate system.

Fig. 1 GARDE system setup
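For illustration, the per-frame tracking step might look as follows with the classic ARToolkit C API. This is only a minimal sketch: the binarization threshold, the marker width and the callback name are assumptions, not code from the GARDE source.

```cpp
// Sketch: per-frame marker tracking with the ARToolkit C API.
// Pattern id, threshold and patt_width values are illustrative.
#include <AR/ar.h>
#include <AR/gsub.h>

static int    patt_id;                       // loaded marker pattern
static double patt_width     = 80.0;         // marker side length (mm), assumed
static double patt_center[2] = { 0.0, 0.0 };
static double patt_trans[3][4];              // marker pose in the camera frame

void onFrame(ARUint8* image)
{
    ARMarkerInfo* marker_info;
    int           marker_num;

    // Detect all square markers in the video frame (threshold of 100 assumed).
    if (arDetectMarker(image, 100, &marker_info, &marker_num) < 0) return;

    // Pick the detection matching our pattern with the highest confidence.
    int best = -1;
    for (int i = 0; i < marker_num; ++i)
        if (marker_info[i].id == patt_id &&
            (best < 0 || marker_info[i].cf > marker_info[best].cf))
            best = i;
    if (best < 0) return;                    // marker not visible this frame

    // Recover the 3x4 transform of the marker relative to the camera;
    // this pose registers the virtual prototype on the tangible marker.
    arGetTransMat(&marker_info[best], patt_center, patt_width, patt_trans);

    double gl_para[16];
    argConvGlpara(patt_trans, gl_para);      // convert to an OpenGL matrix
    // ... load gl_para into GL_MODELVIEW and render the prototype overlay ...
}
```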

3.1 DataGlove functionalities

Using the 5DT DataGlove API, the pitch and roll of the hand, as well as the flexion of the fingers, can be tracked. Sensor data from the tilt (pitch and roll) sensors and the flexure sensors are used to define the status of the hand. This information can be further processed to obtain the posture and gesture inputs of the hand.

To use the DataGloves for GARDE, an initialization stage is required. Initialization consists of checking the DataGlove connections, calibrating the DataGloves and verifying the calibration results. As the system is able to support the use of two DataGloves, the right and left gloves have to be connected correctly to prevent processing errors. To perform this check, the API is used to check the connection, and a simple graphical user interface (GUI) is created to guide this process. Calibration of the DataGloves is required to determine the correct orientation of the hands and the flexion of the fingers. This is especially important for finger flexion, as the flexion of each finger varies, and an individual "flex-fraction", which is used to determine the flexed status of each finger, needs to be set. Hence, the calibration task requires the user to open and close each hand about ten times. After calibration, the results can be verified using the Glove Status Viewer, which indicates the pitch, roll and flexion of the hand and fingers. The user will be able to modify the parameters obtained from the calibration task if there is any error. In general, the entire initialization process takes about 2–3 min and is simple and guided throughout.
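As an illustration of the role of the flex-fraction, a minimal sketch of the thresholding it implies is given below; the struct and the default values are ours, not taken from the 5DT SDK or the GARDE source (the 5DT SDK returns scaled flexure readings that such a threshold can be applied to).

```cpp
// Sketch: deriving a binary flexed/unflexed state from a scaled flexure
// reading using the per-finger "flex-fraction" set during calibration.
#include <array>

struct GloveSample {
    std::array<double, 5> flexure;   // scaled flexure per finger: 0 = open, 1 = fist
    double pitch;                    // from the tilt sensor
    double roll;                     // from the tilt sensor
};

// Per-finger flex-fractions, determined by the open/close calibration routine.
std::array<double, 5> flexFraction = { 0.5, 0.5, 0.5, 0.5, 0.5 };  // assumed defaults

bool isFlexed(const GloveSample& s, int finger)
{
    // A finger counts as flexed once its scaled reading passes its own
    // calibrated fraction; each finger's range differs, hence per-finger values.
    return s.flexure[finger] > flexFraction[finger];
}
```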

A Gesture Manager GUI is used to pre-define interaction commands for the different gesture inputs. One DataGlove is able to produce 288 different gestures according to the permutations of the different sensor data (2^5 finger-flexion combinations × 3 roll states × 3 pitch states), so a total of 288 × 288 = 82,944 gesture combinations are possible for two DataGloves.


Table 1 Sensor states and assigned checksum values of the DataGloves

Right hand                         Left hand
Sensor state           Checksum    Sensor state           Checksum
Thumb flexed           160000      Thumb flexed           5120000
Index finger flexed     80000      Index finger flexed    2560000
Middle finger flexed    40000      Middle finger flexed   1280000
Ring finger flexed      20000      Ring finger flexed      640000
Little finger flexed    10000      Little finger flexed    320000
Roll left                1000      Roll left                   10
Roll center              2000      Roll center                 20
Roll right               4000      Roll right                  40
Pitch up                  100      Pitch up                     1
Pitch flat                200      Pitch flat                   2
Pitch down                400      Pitch down                   4

The Gesture Manager is able to support all these gestures by differentiating between them, matching the gestures to the associated interaction commands, and allowing the user to modify the interaction commands for the gestures.

Differentiation of gestures is achieved by assigning an individual checksum to each unique sensor state of a DataGlove. Table 1 shows the sensor states and their corresponding checksums for both DataGloves. Matching of a gesture is performed by comparing the checksum of the gesture input with the pre-defined checksum associated with an interaction command. In GARDE, pitch and roll are generally used for directional input, whereas finger flexions are used for specific gesture commands. The interaction commands are stored in a user-defined list, and the user can modify this list using the Gesture Manager GUI.
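Since the values in Table 1 occupy distinct decimal positions, summing the values of the active states yields a checksum that is unique for every one- or two-handed gesture. A minimal sketch of this matching scheme follows; the types and the command table are illustrative, and only the checksum constants come from Table 1.

```cpp
// Sketch: checksum-based gesture matching using the values from Table 1.
#include <cstdint>
#include <map>
#include <string>

enum class Roll  { Left, Center, Right };
enum class Pitch { Up, Flat, Down };

struct HandState {
    bool  flexed[5];   // thumb, index, middle, ring, little
    Roll  roll;
    Pitch pitch;
};

uint32_t checksum(const HandState& h, bool rightHand)
{
    // Constants from Table 1; right- and left-hand values never overlap,
    // so the sum over both hands remains unique per gesture pair.
    static const uint32_t fingerR[5] = { 160000, 80000, 40000, 20000, 10000 };
    static const uint32_t fingerL[5] = { 5120000, 2560000, 1280000, 640000, 320000 };
    static const uint32_t rollR[3]   = { 1000, 2000, 4000 }, rollL[3]  = { 10, 20, 40 };
    static const uint32_t pitchR[3]  = { 100, 200, 400 },    pitchL[3] = { 1, 2, 4 };

    uint32_t sum = 0;
    for (int i = 0; i < 5; ++i)
        if (h.flexed[i]) sum += rightHand ? fingerR[i] : fingerL[i];
    sum += rightHand ? rollR[(int)h.roll]   : rollL[(int)h.roll];
    sum += rightHand ? pitchR[(int)h.pitch] : pitchL[(int)h.pitch];
    return sum;
}

// The Gesture Manager keeps a user-editable map from checksum to command.
std::map<uint32_t, std::string> commandTable;

std::string matchGesture(const HandState& left, const HandState& right)
{
    uint32_t key = checksum(left, false) + checksum(right, true);
    auto it = commandTable.find(key);
    return it == commandTable.end() ? "" : it->second;
}
```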

3.2 Gesture interactions

Gesture input commands using the DataGloves can be categorized into four different groups of interaction, namely:

1. Mouse Emulation
2. Numerical input
3. Model Visualization
4. Model Modification

Normal computer mouse operations are emulated using the DataGloves for interacting with the system's GUIs. Numerical input is read using the American Sign Language and is generally used for numerical information, such as dimensions. Model visualization is achieved through the use of the relevant gesture commands to view the 3D model dynamically. Model modification is performed using gesture commands to perform simple modeling operations, such as extrude, cut-extrude and fillet.

3.2.1 Mouse emulation using 5DT DataGloves

Emulating a mouse in GARDE employs the 'mouse_event' function call of the Win32 API to initiate events involving the mouse cursor. There are essentially three types of mouse events involving the cursor that are emulated, namely 'left-click', 'right-click' and 'move'. The user assigns specific gestures to these three classes of events using the Gesture Manager, and gesture mapping is applied to initiate the events. Both 'left-click' and 'right-click' require two gestures each, for simulating the button-down and button-up commands of a click. For mouse movement, the system uses nine different gestures to define eight different move directions, which are given in Table 2. Each of these nine gestures has the same flexure states but different roll and pitch states. This common ground allows the user to switch the direction of movement seamlessly during mouse emulation.
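In sketch form, the three event classes reduce to the following Win32 calls; 'mouse_event' is the documented (if legacy) API named above, while the pixel step per glove sample is an assumed value.

```cpp
// Sketch: Win32 mouse emulation for the three event classes.
#include <windows.h>

// Button down and button up are driven by separate gestures, matching the
// two-gesture clicks described in the text.
void leftDown()  { mouse_event(MOUSEEVENTF_LEFTDOWN,  0, 0, 0, 0); }
void leftUp()    { mouse_event(MOUSEEVENTF_LEFTUP,    0, 0, 0, 0); }
void rightDown() { mouse_event(MOUSEEVENTF_RIGHTDOWN, 0, 0, 0, 0); }
void rightUp()   { mouse_event(MOUSEEVENTF_RIGHTUP,   0, 0, 0, 0); }

// dx/dy in {-1, 0, +1} encode the eight move directions of Table 2
// (both zero for the "do not move" gesture).
void moveCursor(int dx, int dy)
{
    const int step = 10;  // pixels per glove sample, assumed
    mouse_event(MOUSEEVENTF_MOVE, (DWORD)(dx * step), (DWORD)(dy * step), 0, 0);
}
```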

3.2.2 Numerical input using American Sign Language

The purpose of this interaction is to allow the user to input numerical values into the system. The input numbers can be used to type words for annotations in GARDE, and also for dimensioning the parameters involved in model modification. Words are typed using a T9-like input system similar to that used in mobile phones. The dimensions of the parameters, such as extrude height, cut-extrude depth and fillet radius, can be gestured by the user in the form of numbers. The American Sign Language is used, as it is considered to be the world standard.

Table 2 Move event definitions for gestures with similar flexure states

             Roll left                   Roll center                 Roll right
Pitch up     Move towards top-left       Move directly upwards       Move towards top-right
             of AR display               in AR display               of AR display
Pitch flat   Move directly left          Do not move                 Move directly right
             in AR display                                           in AR display
Pitch down   Move towards bottom-left    Move directly downwards     Move towards bottom-right
             of AR display               in AR display               of AR display


Figure 2 shows the corresponding gestures for the numeric digits 0–9.

To ensure the integrity of the gestures for the numeric digits, a particular gesture will only be registered by the system if it is consistent over a period of 1–2 s. This increases the time taken for input, but it is more important that the input is correct. As only one digit can be input at a time, special considerations have to be taken for inputting large numbers. For example, during model modification, when an extrude/cut-extrude/fillet operation is initiated, the user will be prompted to first gesture the number of digits of the particular input parameter. Thereafter, the user will be prompted by similar display messages, in the form of overlays, to sequentially gesture each individual digit. When the final digit has been registered, the parameter is calculated and passed on to the SolidWorks API for the creation of the particular extrude/cut-extrude/fillet feature.
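A sketch of this two-stage input flow follows; recognizeDigit() is a hypothetical hook standing in for the ASL gesture recognizer, and the 1.5 s dwell approximates the 1–2 s consistency window described above.

```cpp
// Sketch: assembling a multi-digit parameter from single-digit ASL gestures.
#include <chrono>

int recognizeDigit();  // hypothetical: current ASL digit gesture (0-9), or -1

// Block until one digit has been held steadily for the dwell period.
int readStableDigit()
{
    using clock = std::chrono::steady_clock;
    const auto dwell = std::chrono::milliseconds(1500);  // approximates 1-2 s
    int  candidate = -1;
    auto since     = clock::now();
    for (;;) {
        int d = recognizeDigit();
        if (d != candidate) { candidate = d; since = clock::now(); }
        else if (d >= 0 && clock::now() - since >= dwell)
            return d;                      // consistent long enough: register it
    }
}

// The user first gestures the number of digits, then each digit in turn;
// the assembled value is then passed on to the SolidWorks feature call.
double readParameter()
{
    int nDigits = readStableDigit();       // e.g. '3' for a three-digit value
    double value = 0;
    for (int i = 0; i < nDigits; ++i)
        value = value * 10 + readStableDigit();
    return value;
}
```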

3.2.3 Model visualization

In GARDE, the 3D models overlaid on the markers can be visualized dynamically in the real environment. This is achieved by allowing the user to translate, rotate and scale the model using gesture inputs. While this may be similar to viewing a 3D model in conventional CAD applications, doing so in an AR environment allows the user to contextualize the 3D model in the working environment, leading to better design evaluation.

Translation, rotation and scaling of the 3D model are direct modifications of the model-view matrix in the OpenGL transformation pipeline. The overlay on the scene is rendered based on this new model-view matrix and a projection matrix. Using the gesture commands from the 5DT DataGlove, six-DOF model manipulation can be accomplished by changing the parameters in the transformation matrix, namely translateX, translateY, translateZ, rotateX, rotateY, rotateZ, scaleX, scaleY and scaleZ. This allows translation, rotation and scaling along the three different axes during run-time. Figure 3 shows some of the corresponding gestures for the model visualization commands.
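A sketch of how such gesture commands might feed the fixed-function OpenGL pipeline each frame is given below; the pose struct and the per-sample increments are assumptions, not GARDE source code.

```cpp
// Sketch: applying accumulated gesture commands to the model-view matrix
// (fixed-function OpenGL, as used with ARToolkit).
#include <GL/gl.h>

struct ModelPose {
    float tx = 0, ty = 0, tz = 0;   // translateX/Y/Z
    float rx = 0, ry = 0, rz = 0;   // rotateX/Y/Z (degrees)
    float sx = 1, sy = 1, sz = 1;   // scaleX/Y/Z
};

void applyPose(const ModelPose& p)
{
    // Called after the marker pose has been loaded into GL_MODELVIEW,
    // so the model moves relative to its tangible marker.
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(p.tx, p.ty, p.tz);
    glRotatef(p.rx, 1, 0, 0);
    glRotatef(p.ry, 0, 1, 0);
    glRotatef(p.rz, 0, 0, 1);
    glScalef(p.sx, p.sy, p.sz);
}

// Example: a gesture mapped to translating the model towards the user.
void onTranslateTowardUser(ModelPose& p) { p.tz += 5.0f; /* step size assumed */ }
```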

In addition, the user can obtain the direct distance between two arbitrary vertices of the virtual prototype model. This is useful for the user to obtain a feel of the size of the prototype in relation to the real objects in the AR environment. The implementation of this interaction mode hinges on having a list of known vertex coordinates in the AR environment. This list is built using the SolidWorks API, whereby all the edges are scanned for their start/end points. After the list of prototype vertices in the model space is compiled, the vertices are magnified by drawing a small sphere at each vertex location, as illustrated in Fig. 4. The sphere allows the vertex to be selected easily with a single 'left-click' event inside the sphere. When the system has registered that two small spheres have been selected, the direct distance between vertex #1 (x1, y1, z1) and vertex #2 (x2, y2, z2) can be determined using the Pythagorean theorem. This dimension is displayed at the midpoint between the two vertices in the form of an overlay.
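Written out, the displayed dimension is simply the Euclidean distance between the two selected vertices:

$$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}$$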

3.2.4 Model modification

In GARDE, at the start of a design session, loading a virtual prototype into the AR environment involves opening the corresponding SolidWorks part document (*.sldprt) within SolidWorks. Thereafter, the topological information of the SolidWorks part is extracted in accordance with the hierarchical structure of the 3D models and passed on to WinAR to be rendered. Thus, the SolidWorks part model has a state that matches the virtual prototype rendered in the AR environment at the start of the design session.

The SolidWorks part model and the virtual prototype model are kept coherent throughout the run-time of a design session.

Fig. 2 Corresponding gestures for 0–9 numerical input in GARDE


Fig. 3 Gestures for some of the model visualization commands

Fig. 4 3D model with all its vertices displayed in GARDE

To achieve this, the model modification interaction commands from the 5DT DataGloves in the AR environment are first dissected into a sequential list of SolidWorks functions. Next, this list of SolidWorks functions is initiated via the interfacing methods in the SolidWorks API.

At this juncture, the interaction command has yet to effect a visible change on the virtual prototype overlay that is visible to the user in the AR environment. After the list of equivalent SolidWorks commands has been completely executed within SolidWorks and the SolidWorks part model has been modified, the modified model is 'sent' back to the GARDE system. GARDE reads and displays the overlay modifications, and the user is able to observe visible changes in the virtual prototype overlay. In this way, both the SolidWorks part model and the virtual prototype are kept coherent after the model modification commands have been processed.

In the current research, three elementary modeling operations for modifying a virtual prototype have been selected for implementation within the GARDE system, namely Extrude, Cut-Extrude and Fillet. Their selection stems from their prevalent use in model design and the simplicity of their use within SolidWorks.

(a) Extrude & cut-extrude

To perform an extrude/cut-extrude operation on a virtual prototype, a well-defined sketch needs to be available. This sketch is defined in the same manner as a design engineer would define it in SolidWorks. First, a desired reference plane in which the sketch will lie needs to be selected. To achieve this, the user gestures to move the mouse-emulated cursor to a particular plane and selects it with a left-click gesture command. The sketch plane for the extrude/cut-extrude operation is thus defined. The next step involves defining the sketch pattern. Using function calls from the SolidWorks API, the user can create a 2D sketch pattern from basic shapes, such as circles, rectangles and polygons. In addition, lines can be used to draw complex 2D sketches as well.

To ensure that the virtual prototype and the SolidWorks part model are coherent, the definition coordinates of the sketch patterns on the selected face of the virtual prototype must be at the same coordinates with respect to the part model within SolidWorks. This is addressed in GARDE by systematically and continuously selecting and placing known model coordinates on the selected face through the OpenGL transformation pipeline until a match occurs.


Using the minimum and maximum values for the X-, Y- and Z-axes obtained from the bounding box of the model, the set of possible model coordinates that exist on the selected face can be determined. Next, an arbitrary known model coordinate from this set is input to the forward OpenGL transformation pipeline to obtain the corresponding image coordinate, which is compared with the image coordinate of the user-defined point on the AR display. If there is no match, another arbitrary point from the set of possible model coordinates is selected and transformed, until there is a match within a certain tolerance. Lastly, the matched coordinates are selected for use in the sketch pattern definition.
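This forward-pipeline search amounts to projecting candidate model coordinates with the current matrices and comparing them against the picked pixel. A sketch using the standard gluProject call follows; the candidate list and the two-pixel tolerance are assumptions.

```cpp
// Sketch: finding the model coordinate on the selected face whose projection
// matches the user's picked image coordinate.
#include <GL/gl.h>
#include <GL/glu.h>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

bool matchPickedPoint(const std::vector<Vec3>& faceCandidates, // from bounding box
                      double pickedX, double pickedY,          // image coordinates
                      Vec3& matched)
{
    GLdouble model[16], proj[16];
    GLint    view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    const double tol = 2.0;   // pixels, assumed
    for (const Vec3& c : faceCandidates) {
        GLdouble wx, wy, wz;
        // Forward transform: model coordinate -> window (image) coordinate.
        gluProject(c.x, c.y, c.z, model, proj, view, &wx, &wy, &wz);
        if (std::abs(wx - pickedX) < tol && std::abs(wy - pickedY) < tol) {
            matched = c;      // use this coordinate in the sketch definition
            return true;
        }
    }
    return false;             // no candidate projected within tolerance
}
```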

After the definition of the sketch pattern has been completed, GARDE initiates a SolidWorks function call for the extrude/cut-extrude operation. In SolidWorks, an equivalent sketch is drawn at the same model coordinates defined by the user. Next, function calls are used to extrude or cut-extrude the sketch with respect to the existing virtual prototype. Finally, the modified virtual prototype with the additional extrude/cut-extrude feature is sent back to the GARDE system, and the modified information is read, updated and displayed. The user will then be able to see the modified virtual prototype in the AR environment.

(b) Fillet

The fillet modeling operation allows the user to fillet all edges on a selected face. Here, similar to the extrude/cut-extrude operation, the user first selects a face by gesturing and moving the mouse-emulated cursor to a particular face, thereafter selecting it with a left-click gesture command. The selected face is passed as a parameter to SolidWorks, and a function call is initiated to fillet all the edges on this selected face. The modified SolidWorks part model is sent back to the system, where the virtual prototype model is updated, and the user sees an additional fillet feature in the AR environment.

3.3 SolidWorks modeling

The SolidWorks API is used to perform the model modification operations, such as extrude and fillet, using the 5DT DataGloves in GARDE. It is responsible for determining the common solution to each modification operation, such that the constraints and relationships with respect to each polygon/vertex primitive are adhered to. In this way, the 3D modeling workload is outsourced to SolidWorks, making the system more computationally cost-efficient.

To harness the modeling capabilities of SolidWorks, one can access the hundreds of functions that are available in its API. There are two interfacing methods by which GARDE communicates with SolidWorks using these function calls. In the first method, the functions are exposed through standard Component Object Model (COM) objects. This allows the system to have direct access to the underlying objects or data within SolidWorks, resulting in increased performance. However, this approach does not allow SolidWorks to obtain or pass data arrays to/from GARDE. For the passing of data arrays to be possible, the Object Linking and Embedding (OLE) automation approach has to be used, where the particular function is exposed through the Dispatch interface. The data array is always encapsulated in a VARIANT data structure, a requirement stipulated by SolidWorks. The SolidWorks API Object Model represents the 3D model in GARDE, and function calls are performed on instances of this object model in the system. The object model has a hierarchical structure, and all other SolidWorks objects can be accessed directly or indirectly from it by following the hierarchical structure and referencing the parent object. For the development of GARDE, the ModelDoc2 object is of particular interest, because the PartDoc object that can be derived from the ModelDoc2 object contains the functions that are essential for editing the prototype part model.
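The OLE-automation constraint mentioned above, that arrays must travel inside a VARIANT, can be illustrated in plain Win32/OLE terms. The sketch below packs a double array for a Dispatch-exposed method; it deliberately uses no SolidWorks-specific names, and the receiving method is assumed.

```cpp
// Sketch: wrapping a C++ double array in a VARIANT-encapsulated SAFEARRAY,
// the form required by Dispatch-exposed functions that take array data.
// Standard Win32/OLE calls only.
#include <windows.h>
#include <oleauto.h>

VARIANT packDoubleArray(const double* data, long count)
{
    // Create a 1-D SAFEARRAY of doubles and copy the values in.
    SAFEARRAY* psa = SafeArrayCreateVector(VT_R8, 0, count);
    for (long i = 0; i < count; ++i)
        SafeArrayPutElement(psa, &i, (void*)&data[i]);

    // Encapsulate it in a VARIANT, as stipulated for the API's array inputs.
    VARIANT v;
    VariantInit(&v);
    V_VT(&v)    = VT_ARRAY | VT_R8;
    V_ARRAY(&v) = psa;
    return v;   // caller passes this to the Dispatch method, then VariantClear()s it
}
```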

4 System implementation and case studies

Two case studies have been performed using GARDE and are presented in this section to demonstrate the working principles of the system. These two case studies involve mainly two design tasks, namely, the visualization and evaluation of a 3D model in an AR environment, and the modification of a 3D model.

4.1 Design evaluation in an AR environment

In GARDE, the 3D models are loaded into the AR environment and manipulated dynamically using hand gestures to evaluate these designs. This allows the user to analyze the spatial information of the designs, such as the geometrical fit and the layout in the actual use environment. The main contributions of performing design evaluation in the GARDE system are the dynamic visualization using gestures and the contextualization of the design in the use environment, which are difficult to achieve using physical and virtual prototypes. This case study showcases the six-DOF model manipulation operations, the scaling operations, and the vertex-to-vertex dimension finder that the GARDE system provides. For the selection of the prototype entities, the user utilizes the mouse emulation function of the 5DT DataGloves.

The user starts off by loading a SolidWorks model in the AR environment. This is done by selecting the desired model in a dialog box. After loading the model, the user can move the model to examine it. To take a closer look at the model, the user can translate the model towards him using the relevant gesture commands. The orientation of the model can also be modified by rotating the model. To ensure that the model is a good fit, the user can scale the model to the desired size.


Next, the dimensions of the scaled model can be verified by checking the vertex-to-vertex dimensions of the model. Performing these tasks in an AR environment offers the user a much better perception of the spatial constraints in a real environment. This information can be used to check the fit and interference of the 3D model in the actual working environment. Figure 5 shows the sequence of model visualization and manipulation operations performed in this case study.

4.2 Design modification in an AR environment

In the GARDE system, changes can be made while evaluating the designs. As the modifications are carried out in the AR environment, the user is able to evaluate the design changes immediately with little disruption to the workflow. This case study demonstrates the model modification capabilities of the GARDE system. In this case study, the user draws a 2D sketch on top of a 3D model in the AR environment. Thereafter, the sketch is extruded. The extrude height is entered in the form of numerical gesture input based on the American Sign Language.

The user first loads a simple cuboid model in the AR environment. Next, he starts the model modification process by first selecting the face on which to perform the extrusion and the desired extrusion modeling operation. In this case, the extrusion is performed using a free-hand polygon sketch. Four vertices are defined on the face to create the desired polygon. When the 2D sketch pattern has been well defined, the user is prompted to enter the height of the extrusion. Next, the extrusion operation is carried out by SolidWorks, and the modified model is reflected in the AR environment. This interactive model modification process allows the user to make changes to the design in an AR environment and, at the same time, evaluate the design immediately after modification. Less time is taken for designing, and this is especially so if the design evaluation and modification are conducted in-situ. Figure 6 shows the sequence of model modification operations performed in this case study.

4.3 Discussion

These two case studies have demonstrated the viability of using gestures as commands in driving interactive design evaluation and modification in an AR environment. In this system, these commands are generally separated into two types: direct and indirect commands. A direct command is one that brings about a direct manipulation of the 3D model upon its execution. Examples include the model visualization and modification operations. Conversely, indirect commands refer to those that effect changes on intermediary devices that interact with the virtual prototype.

Fig. 5 Model visualization and manipulation commands carried out in the first case study (From top left: Loading of 3D model, Translation in the positive z-direction, Rotation of 90° about the z-axis, Negative scaling of model, Verification of dimensions)


Fig. 6 Model modification in the second case study [From top left: Loading of 3D model, Selection of face, Sketching of polygon, Inputting dimensions (no. of digits and individual numbers), Final extruded feature]

An example is the use of gesture commands that perform the mouse-emulation functions. This implies that gesture commands can be used to perform almost limitless operations on the 3D model, and are sufficient on their own, without other forms of input.

In addition, the GARDE system can serve as a platform for developing a more sophisticated interactive AR design system. Currently, only basic design evaluation tasks have been demonstrated, and future work will aim to carry out more complex and detailed analyses of the design and augment the results onto the AR environment. This will enhance the decision-making of the user during the design development and evaluation process. Concurrent design generation, visualization, evaluation and modification form the basis of the interactivity of the design activities in the GARDE system, allowing the user to create new models in the actual use environment and evaluate them in real time.

5 Conclusion

This paper has presented the research on a gesture-based AR design environment, GARDE, and demonstrated the use of gestures in carrying out design evaluation tasks. Using gestures, the user can visualize a 3D model in an AR environment, evaluate the design and make modifications. The modifications are reflected in the model and also in a conventional CAD software application, SolidWorks, in real time. This allows the user to contextualize the design in the working environment and make use of the spatial information of the real environment in modifying the design. Gestures offer an interactive and intuitive way of performing these design tasks, and they can be used as a standalone input method due to their many variations. Users can customize their own gesture sets to carry out the modeling operations as well.

GARDE can be considered a form of augmented prototyping, and it enjoys the benefits of both physical prototypes and virtual prototypes, such as realism, ease of modification and contextualization. In the current research, GARDE is used to perform two basic design tasks, namely, model visualization and model modification. More design tasks can be supported using GARDE, and this will form the basis for future work. Other future developments will include using a standardized 3D model data format that is compatible with OpenGL and most CAD software to handle the 3D models in GARDE, tracking the 3D position and orientation of the hand for more gesture inputs, and making the system unencumbered through the implementation of either bare-hand interaction or wireless data gloves.

One key development will be to implement 3D model generation from scratch. This is currently hard to achieve due to the difficulty of defining the dimensions of the 3D model precisely without a reference 3D model, and the lack of modeling data transferred from the AR system to other CAD software using current file formats, such as the STL format.


The hierarchical structure of the virtual prototypes would be defined as a collection of faces within the CAD modeler, instead of as a model with meaningful features. These issues will have to be overcome before GARDE can be used as a standalone design environment.

References

1. Zorriassatine, F., Wykes, C., Parkin, R., Gindy, N.: A survey of virtual prototyping techniques for mechanical product development. Proc. Inst. Mech. Eng. B: J. Eng. Manuf. 217(4), 513–530 (2003)

2. Verlinden, J., Horvath, I., Edelenbos, E.: Treatise of technologies for interactive augmented prototyping. In: Proc. of Tools and Methods of Competitive Engineering, pp. 523–536 (2006)

3. Sturman, D.J., Zeltzer, D.: A survey of glove-based input. IEEE Comput. Graph. Appl. 14(1), 30–39 (1994)

4. LaViola, J.: A survey of hand posture and gesture recognition techniques and technology. Technical Report CS-99-11, Brown University, Department of Computer Science, Providence, RI (1999)

5. CyberGrasp. CyberGlove System: http://www.cyberglovesystems.com/products/hardware/cybergrasp.php. Accessed 7 June 2009

6. Jin, Y.-S., Kim, Y.-W., Park, J.: ARMO: Augmented Reality based Reconfigurable MOck-up. In: Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR '07), pp. 1–2 (2007)

7. Fiorentino, M., de Amicis, R., Monno, G., Stork, A.: Spacedesign: a mixed reality workspace for aesthetic industrial design. In: Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR '02), pp. 86–94 (2002)

8. Park, H., Moon, H., Lee, J.Y.: Tangible augmented prototyping of digital handheld products. Comput. Ind. 60(2), 114–125 (2009)

9. Verlinden, J., Horvath, I.: Framework for testing and validating interactive augmented prototyping as a design means in industrial practice. In: Proceedings of Virtual Concept (2006)

10. Aoyama, H., Kimishima, Y.: Mixed reality system for evaluating designability and operability of information appliances. Int. J. Interact. Des. Manuf. 3(3), 157–164 (2009)

11. Bordegoni, M., Cugini, U., Caruso, G., Polistina, S.: Mixed prototyping for product assessment: a reference framework. Int. J. Interact. Des. Manuf. 3(3), 177–187 (2009)

12. ARToolkit: http://www.hitl.washington.edu/artoolkit (2007). Accessed 16 Feb 2009

13. 5DT DataGlove Driver Manual. Fifth Dimension Technologies (2004)

14. SolidWorks API. SolidWorks Corporation (2006)
