Touching Floating Objects in Projection-based Virtual Reality


  • Joint Virtual Reality Conference of EuroVR - EGVE - VEC (2010), T. Kuhlen, S. Coquillart, and V. Interrante (Editors)

    Touching Floating Objects in Projection-based Virtual Reality Environments

    D. Valkov1, F. Steinicke1, G. Bruder1, K. Hinrichs1, J. Schöning2, F. Daiber2, A. Krüger2

    1Visualization and Computer Graphics (VisCG) Research Group, Department of Computer Science, WWU Münster, Germany

    2German Research Centre for Artificial Intelligence (DFKI), Saarbrücken, Germany

    Abstract

    Touch-sensitive screens enable natural interaction without any instrumentation and support tangible feedback on the touch surface. In particular, multi-touch interaction has proven its usability for 2D tasks, but the challenges of exploiting these technologies in virtual reality (VR) setups have rarely been studied. In this paper we address the challenge of allowing users to interact with stereoscopically displayed virtual environments when the input is constrained to a 2D touch surface. During interaction with a large-scale touch display a user changes between three different states: (1) beyond arm-reach distance from the surface, (2) at arm-reach distance, and (3) interaction. We have analyzed the user's ability to discriminate stereoscopic display parallaxes while she moves through these states, i. e., whether objects can be imperceptibly shifted onto the interactive surface and become accessible for natural touch interaction. Our results show that the detection thresholds for such manipulations are related to both user motion and stereoscopic parallax, and that users have difficulty discriminating whether or not they have touched an object when tangible feedback is expected.

    Categories and Subject Descriptors (according to ACM CCS): Information Interfaces and Presentation [H.5.1]: Multimedia Information Systems - Artificial, augmented, and virtual realities; Information Interfaces and Presentation [H.5.2]: User Interfaces - Input devices and strategies;

    1. Introduction

    Common virtual reality (VR) techniques such as stereoscopic rendering and head tracking often make it easy to explore and better understand complex data sets, reducing the overall cognitive effort for the user. However, VR systems usually require complex and inconvenient instrumentation, such as tracked gloves, head-mounted displays, etc., which limits their acceptance by common users and even by experts. Using devices with six degrees of freedom is often perceived as complicated, and users can easily be confused by non-intuitive interaction techniques or unintended input actions. Another issue for interaction in virtual environments (VEs) is that in most setups virtual objects lack haptic feedback, reducing the naturalness of the interaction [BKLP04, Min95]. Many different devices exist that support active haptics with specialized hardware generating certain haptic stimuli [Cal05]. Although these technologies can provide compelling haptic feedback, they are usually cumbersome to use as well as limited in their application scope.

    In head-mounted display (HMD) environments passive haptic feedback may be provided to users [Ins01] by physical props registered to virtual objects. For instance, a user might touch a physical table while viewing a virtual representation of it in the VE. Until now, only little effort has been undertaken to extend passive haptic feedback to projection-based VEs.

    Theoretically, a projection screen itself might serve as a physical prop and provide passive feedback for the objects displayed on it, for instance, if a virtual object is aligned with the projection wall (as is the case in 2D touch displays). In addition, a touch-sensitive surface could provide a powerful extension of this approach. Furthermore, separating the touch-enabled surface from the projection screen, for example by using a physical transparent prop as proposed by Schmalstieg et al. [SES99], increases the possible interaction volume in which touch-based interaction may be available. Recently, the FTIR (frustrated total internal reflection) and DI (diffused illumination) technologies and their inexpensive footprint [Han05, SHB10] have made it possible to turn almost any large-scale projection display into a touch- or multi-touch-enabled surface. Multi-touch technology extends the capabilities of traditional touch-based surfaces by tracking multiple finger or palm contacts simultaneously [DL01, SHB10, ML04]. Since humans in their everyday life usually use multiple fingers and both hands to interact with their real-world surroundings, such technologies have the potential to support intuitive and natural metaphors.

    However, the use of the projection wall as a physical haptic prop as well as an input device introduces new challenges. The touch sensitivity of most multi-touch surfaces is limited to the 2D plane defined by the surface or a small area above it, whereas stereoscopic displays allow objects to be rendered floating in space with different parallaxes. While objects rendered with zero parallax are perfectly suited for touch-based interaction, especially if 2D input is intended, floating objects with positive parallax cannot be touched directly, since the screen surface limits the user's reach [GWB05]. In this case indirect selection and manipulation techniques [BKLP04, Min95, PFC97] can be used, but those techniques cannot be applied to objects in front of the screen. In fact, objects that float in front of the projection screen, i. e., objects with negative parallax, pose the major challenge in this context. When the user wants to touch such an object, she is limited to touching the area behind it, i. e., the user has to reach "through" the virtual object to the touch surface. As illustrated in Figure 1 (left), if the user reaches through a virtual object while focusing on her finger, the stereoscopic impression is disturbed due to the difference in accommodation and convergence between the virtual object and the finger; as a result, the left and right stereo images can no longer be merged and the object appears blurred. On the other hand, focusing on the virtual object leads to the opposite effect in the described situation (see Figure 1 (right)). In both cases touching an object may become unnatural and ambiguous.
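    As a point of reference (this relation is added here for illustration and is not stated in the paper), the parallax classes above follow from standard stereo geometry: with interocular distance i, viewer-to-screen distance D, and viewer-to-object distance d along the view axis, the on-screen parallax is

```latex
% Standard stereo-geometry relation (added for illustration; not from the paper).
% i : interocular distance, D : viewer-to-screen distance, d : viewer-to-object distance.
p \;=\; i \, \frac{d - D}{d}
% d = D  =>  p = 0  (zero parallax: the object lies on the screen and can be touched directly)
% d > D  =>  p > 0  (positive parallax: the object appears behind the screen)
% d < D  =>  p < 0  (negative parallax: the object floats in front of the screen)
```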

    Recent findings in the area of human perception in VEs have shown that users have problems estimating their own motions [BRP05, SBJ10], and in particular that vision usually dominates the other senses when they disagree [BRP05]. Therefore it seems reasonable that the virtual scene could be imperceptibly moved along or against the user's motion direction, such that a floating object is shifted onto the interactive surface, potentially providing passive haptic feedback. Another relevant question is to what extent a visual representation can be misaligned from its physical counterpart without the user noticing; in other words, how precisely can users discriminate between visual and haptic contact of their finger with a floating object?
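    To make the idea of such an imperceptible scene shift concrete, the following minimal sketch (our own illustration under assumed start/end distances and object offset, not the authors' implementation) translates the scene along the view axis as the tracked head approaches the screen, so that an object initially floating in front of the screen reaches zero parallax by the time the user is at arm-reach distance:

```python
# Minimal sketch of the scene-shift idea (illustrative assumptions, not the
# authors' code): while the user walks toward the screen, the scene is moved
# slightly so that a floating object lies on the screen plane on arrival.

def scene_shift(head_distance, start_distance=2.0, end_distance=0.6,
                object_offset=0.3):
    """Return the z-translation (in metres) to apply to the whole scene.

    head_distance  -- current tracked distance of the head from the screen
    start_distance -- distance at which the manipulation begins (assumed)
    end_distance   -- distance at which the object should lie on the screen (assumed)
    object_offset  -- how far the object initially floats in front of the screen (assumed)
    """
    # Fraction of the approach that has been completed, clamped to [0, 1].
    progress = (start_distance - head_distance) / (start_distance - end_distance)
    progress = max(0.0, min(1.0, progress))
    # Push the scene away from the user by up to object_offset, spread over the
    # whole approach so that the per-frame change stays small.
    return -progress * object_offset
```

    Whether such a shift actually goes unnoticed depends on the detection thresholds investigated in the experiments described later in the paper.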

    Figure 1: Illustration of a common problem for touch interaction with stereoscopic data.

    In this paper we address the challenges of allowing users to interact with stereoscopically rendered data sets when the input is constrained to a 2D plane. When interacting with large-scale touch displays a user usually changes between three different states: (1) beyond arm-reach distance from the surface, (2) at arm-reach distance (but not interacting), and (3) interaction. We have performed two experiments in order to determine if, and how much, the stereoscopic parallax can be manipulated during the user's transitions between those states, and how precisely a user can determine the exact point of contact with a virtual object when haptic feedback is expected.

    The remainder of this paper is structured as follows: Section 2 summarizes related work. Section 3 describes the setup and the options for shifting objects to the interactive surface. Sections 4 and 5 present the experiments. Section 6 discusses the results and gives an overview of future work.

    2. Related Work

    The challenges introduced by touch interaction with stereoscopically rendered VEs are described by Schöning et al. [SSV09]. In their work anaglyph-based and passive-polarized stereo visualization systems were combined with FTIR technology on a multi-touch enabled wall. Furthermore, approaches based on mobile devices for addressing the described parallax problems were discussed. The separation of the touch surface from the projection screen has been proposed by Schmalstieg et al. [SES99]. In this approach, a tracked transparent prop is used, which can be moved while associated floating objects (such as a menu) are displayed on top of it. Recently, multi-touch devices with non-planar touch surfaces, e. g., cubic [dlRKOD08] or spherical [BWB08], have been proposed, which could be used to specify 3D axes or points for indirect object manipulation.

    The option to provide passive haptic feedback in HMD setups by representing each virtual object by means of a registered physical prop has considerable potential to enhance the user experience [Ins01]. However, if each virtual object is to be represented by a physical prop, the physical interaction space becomes populated with several physical obstacles restricting the interaction space of the user. Recently, various approaches for VR have been proposed that exploit humans' limited ability to discriminate between discrepancies induced by stimuli from at least two senses. In this context, experiments have demonstrated that humans tolerate a certain amount of inconsistency between visual and proprioceptive sensation [BRP05, KBMF05]. In these approaches users can touch several different virtual objects, which are all physically represented by a single real-world object. Such scenarios are often combined with redirected walking techniques to guide users to a corresponding physical prop [KBMF05, SBK08]. In this context, many psychological and VR research groups have also considered the limitations of human perception of locomotion and reorientation [BIL00, BRP05]. Experiments have demonstrated that humans tolerate inconsistencies during locomotion [BIL00, SBJ10] or head rotation [JAH02] within certain detection thresholds. Similar to the approach described in this paper, Steinicke et al. [SBJ10] have determined detection thresholds for self-motion speed in HMD environments and have shown that humans are usually not able to determine their own locomotion speed with an accuracy better than 20%. While those results have significant impact on the development of HMD-based VR interfaces, their applicability to projection-based VEs has not yet been investigated in depth.

    3. Touching Floating Objects

    In this section we explain our setup and discuss user interaction states within a large-scale stereoscopic touch-enabled display environment. Furthermore, we describe options for shifting floating objects to the interactive surface while the user transitions through these different states.

    3.1. Setup

    Figure 2: Sketch of stereoscopic multi-touch surface setup.

    In our setup (sketched in Figure 2) we use a 300 cm × 200 cm screen with passive-stereoscopic, circularly polarized back projection for visualization. Two DLP projectors with a resolution of 1280 × 1024 pixels provide stereo images for the left and the right eye of the user. The VE is rendered on an Intel Core i7 @ 2.66 GHz processor (4 GB RAM) with an nVidia GTX295 graphics card. We tracked the user's head position with an optical IR tracking system (InnoTeam SEOS 3D Tracking). We have extended the setup with Rear-DI [SHB10] instrumentation in order to support multi-touch interaction. Using this approach, infrared (IR) light illuminates the screen from behind the touch surface. When an object, such as a finger or palm, comes in contact with the surface it reflects the IR light, which is then sensed by a camera. Therefore, we have added four IR illuminators (i. e., high-power IR LED lamps) for back-lighting the projection screen and a digital video camera (PointGrey Dragonfly2) equipped with a wide-angle lens and a matching infrared band-pass filter, which is mounted at a distance of 3 m from the screen. The camera captures an 8-bit monochrome video stream with a resolution of 1024 × 768 pixels at 30 fps (2.95 mm² precision on the surface). Since our projection screen is made from a matte, diffusing material, we do not need an additional diffusing layer for it.
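    The Rear-DI touch detection described above essentially amounts to finding bright blobs in the IR camera image. A minimal sketch of this step (our own illustration assuming OpenCV 4, a pre-captured background frame, and illustrative threshold values; not the authors' implementation) could look as follows:

```python
# Minimal Rear-DI touch detection sketch (illustrative, not the authors' code).
import cv2

def detect_touches(ir_frame, background, threshold=30, min_area=20):
    """Return (x, y) centroids of bright blobs caused by fingers on the surface.

    ir_frame   -- current 8-bit monochrome camera image
    background -- reference image captured while nothing touches the surface
    """
    # Fingers in contact with the surface reflect the IR back-lighting and
    # therefore appear brighter than the static background.
    diff = cv2.subtract(ir_frame, background)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)  # suppress sensor noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # ignore specks smaller than a fingertip
        m = cv2.moments(c)
        touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return touches
```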

    3.2. User Interaction States

    During observation of several users interacting within the setup described above, we have identified typical user behavior similar to their activities in front of large public displays [VB04], where users change between different states of interaction. In contrast to public displays, where the focus is on different "levels" of user involvement and attracting the user's attention is one major goal, in most VR setups the user already intends to interact with the VE. To illustrate the user's activities while she interacts within the described VR-based touch display environment, we adapt Norman's interaction cycle [Nor98], resulting in three different states (see Figure 3).
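    As a rough illustration of how these three states could be distinguished from the tracked head position and the touch sensor (a hypothetical sketch; the state names follow the paper, but the function and thresholds are our own, based on the distances reported in the next paragraph):

```python
# Hypothetical classification of the three interaction states from tracked data
# (illustrative thresholds based on the distances reported in the text below).
from enum import Enum

class InteractionState(Enum):
    OBSERVATION = 1    # beyond arm-reach distance, user surveys the whole scene
    SPECIFICATION = 2  # within arm-reach distance, planning the input action
    EXECUTION = 3      # finger in contact with the touch surface

def classify_state(head_distance, touch_active, arm_reach=0.6):
    """head_distance: metres from the screen; touch_active: any touch point detected."""
    if touch_active:
        return InteractionState.EXECUTION
    if head_distance <= arm_reach:
        return InteractionState.SPECIFICATION
    return InteractionState.OBSERVATION
```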

    Figure 3: Illustration of the states of user interaction with a wide, stereoscopic multi-touch display.

    In the observation state the user is at such a distance from the display that the whole scene is in view; because of the size of our display this is usually beyond her arm-reach distance. In this state the goal of the intended interaction is often formed and the global task is subdivided. Users usually switch to this state in order to keep track of the scene as a whole (i. e., to get the "big picture") and to identify new local areas or objects for further local interaction. The user is in the specification state while she is within arm-reach distance from the surface but not yet interacting. We have observed that the user spends only a short period of time in this state, in which she plans the local input action and speculates about the system's reaction. The key feature of the transition between the observation state and the specification state is that real walking is involved: in the observation state the user is approximately 1.5-2 m away from the interactive surface, whereas during the specification state she is within 50-60 cm of the screen in our setup. Finally, in the execution state the user performs the actions planned in the specification state. By touch-based interaction the user applies an input action while simultaneously observing and evaluating the result of this action and...