
Imaginary Interfaces: Touchscreen-like Interaction without the Screen

Abstract
Screenless mobile devices achieve maximum mobility, but at the expense of the visual feedback that is generally assumed to be necessary for spatial interaction. With Imaginary Interfaces we re-enable spatial interaction on screenless devices. Users point and draw in the empty space in front of them or on the palm of their hands. While they cannot see the results of their interaction, they do obtain some visual feedback by watching their hands move. Our user studies show that Imaginary Interfaces allow users to create simple drawings, to annotate them, and to operate interfaces, as long as their layout mimics a physical device they have used before. We demonstrate how this allows an imaginary interface to serve as a shortcut for a physical device, and we believe that ultimately Imaginary Interfaces will lead to the development of standalone ultra-mobile devices.

Keywords
imaginary interfaces; mobile; wearable; spatial memory; screenless; memory; non-visual; touch.

ACM Classification Keywords
H.5.2 [Information Interfaces and Presentation]: User Interfaces - Interaction styles.

Introduction
In order to achieve ultimate mobility, researchers have proposed abandoning screens [10]. Such wearable devices are operated using hand gestures, voice commands, or physical buttons. Gesture Pendant [13], for example, allows users to perform gestures that invoke commands, such as “open door”.

However, screenless mobile devices traditionally do not support spatial interaction, such as tapping on a button, because, seemingly, there is nothing to tap on. This is a substantial loss, as spatial interaction is the main interaction paradigm across all of today’s form factors, including desktops, tabletops and mobile devices.

Imaginary Interfaces (UIST 2010)
In our paper presented at UIST 2010 [3], we introduced Imaginary Interfaces, screenless devices that allow users to perform spatial interaction without visual feedback. Instead of receiving feedback from a screen, all visual “feedback” takes place in the user’s imagination.

Copyright is held by the author/owner(s). CHI’12, May 5–10, 2012, Austin, Texas, USA. ACM 978-1-4503-1016-1/12/05.

Sean Gustafson
Hasso Plattner Institute
Potsdam, Germany
[email protected]

Figure 1. When users draw using an Imaginary Interface, they retain an image of their creation in their minds. This image allows users to interact with the otherwise invisible content, such as to annotate it.


Figure 1 illustrates the interaction. The user is drawing a stock price curve, which he then annotates. While he cannot see the drawing he has created, watching his hands interact creates a mental after-image that provides enough of a spatial reference to allow him to interact with the invisible drawing.

The figure also shows our early device prototype. The user is wearing a brooch containing an infrared illuminant and an infrared camera that observes the space in front of the user. The device extracts the user’s hands using computer vision techniques.
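The paper summary above does not spell out the vision pipeline. As a rough, illustrative sketch of the kind of processing involved (not the authors' actual implementation), bright IR-lit regions can be thresholded and the largest blobs treated as hand candidates; the threshold values and the fingertip heuristic below are assumptions.

```python
# Illustrative sketch only: segment bright, IR-lit hand regions from a
# grayscale camera frame. Threshold values and the fingertip heuristic are
# assumptions for illustration, not the authors' implementation.
import cv2
import numpy as np

def extract_hands(frame_gray, thresh=200, min_area=2000):
    """Return contours of bright regions (hand candidates), largest first."""
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hands = [c for c in contours if cv2.contourArea(c) > min_area]
    return sorted(hands, key=cv2.contourArea, reverse=True)

def fingertip(contour):
    """Crude fingertip estimate: topmost contour point (image y grows downward)."""
    return tuple(contour[contour[:, :, 1].argmin()][0])
```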

In three user studies, we had participants create simple drawings, annotate existing drawings, and point at locations described in a coordinate system of finger and thumb units (“one finger up and two thumbs right”).
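The pointing task communicated target locations in these body-relative units. A minimal sketch of how such a description could be resolved into a point in the imaginary plane, assuming the L-shaped non-dominant hand supplies the origin and axis directions and that per-user finger and thumb widths are known from calibration (all names and values below are illustrative):

```python
# Illustrative only: resolve a description like "one finger up and two thumbs
# right" into a point in the imaginary plane. The axis vectors would come from
# tracking the L-shaped non-dominant hand; finger_w and thumb_w (in cm) would
# come from a per-user calibration. All values here are assumptions.
import numpy as np

def imaginary_point(origin, up_axis, right_axis, fingers_up, thumbs_right,
                    finger_w=1.7, thumb_w=2.3):
    """origin is a 2D point; up_axis/right_axis are unit vectors per cm."""
    return (np.asarray(origin, dtype=float)
            + fingers_up * finger_w * np.asarray(up_axis, dtype=float)
            + thumbs_right * thumb_w * np.asarray(right_axis, dtype=float))

# "One finger up and two thumbs right" of the L-shaped origin:
p = imaginary_point(origin=(0.0, 0.0), up_axis=(0.0, 1.0),
                    right_axis=(1.0, 0.0), fingers_up=1, thumbs_right=2)
```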

Imaginary Phone and Transfer Learning (UIST 2011)
Given that imaginary interfaces offer no visual output, one of the key challenges is how users can learn their layout. In our UIST 2011 paper [4] we proposed learning by transfer. We designed imaginary interfaces that mimic the layout of a physical device that users are already familiar with, such as an iPhone. Figure 2 shows the Imaginary Phone, a prototype that users operate by tapping on their hand; the relative position of the touch event is then transmitted wirelessly to an actual iPhone, here located in the user’s pocket.
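The passage above describes the pipeline only at a high level: a tap on the palm is located relative to the hand and forwarded to the phone. The sketch below illustrates one plausible way to normalize a tracked touch against the palm and send it on; the palm rectangle, screen size, and UDP message format are assumptions, not the published Imaginary Phone protocol.

```python
# Sketch under assumptions: normalize a tracked touch against the open palm
# and forward it to the phone. The palm rectangle, screen size, and the UDP
# message format are illustrative, not the published Imaginary Phone protocol.
import json
import socket

PHONE_SCREEN = (320, 480)  # points; iPhone-era screen size (assumed)

def palm_to_screen(touch_xy, palm_rect):
    """touch_xy: tracked (x, y); palm_rect: (x0, y0, width, height) of the
    open palm in the same tracker coordinates."""
    x0, y0, w, h = palm_rect
    u = min(max((touch_xy[0] - x0) / w, 0.0), 1.0)
    v = min(max((touch_xy[1] - y0) / h, 0.0), 1.0)
    return (u * PHONE_SCREEN[0], v * PHONE_SCREEN[1])

def send_touch(screen_xy, phone_addr=("192.168.0.42", 9000)):
    """Forward the synthesized tap to the phone (hypothetical transport)."""
    msg = json.dumps({"type": "tap", "x": screen_xy[0], "y": screen_xy[1]})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg.encode(), phone_addr)
```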

Unlike our earlier system that had users interact in empty space, we moved the interaction onto the user’s hand. The hand not only provides users with tactile feedback but also allows them to associate functions with physical landmarks, such as finger segments.

For the new prototype we replaced the infrared camera with a time-of-flight depth camera, which helps separate the hands from each other. We also found time-of-flight cameras work well in direct sunlight, allowing for truly mobile use.

Figure 3. The three assumptions of transfer learning.

The concept of transfer learning introduced in our paper is based on the three assumptions illustrated by Figure 3. We validated each of them in a user study: (1) Users build up spatial memory automatically while using a physical device; in our study, participants knew the correct location of 68% of their own iPhone home screen apps by heart, without training. (2) Spatial memory transfers from a physical to an imaginary interface; in our study, participants recalled 61% of their home screen apps when recalling app locations on the palm of their hand. (3) Palm interaction is precise enough to operate a typical mobile phone; in a second study, participants reliably acquired 0.95 cm wide iPhone targets on their palm, accurate enough to operate standard iPhone widgets.
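As the Figure 2 caption notes, the four fingers line up with the four columns of home screen icons, which suggests a simple way to turn a palm touch into an app selection. The sketch below quantizes a normalized touch position into a hypothetical home screen grid; the grid size and the layout dictionary are placeholders, not the mapping evaluated in the study.

```python
# Illustrative only: snap a normalized touch on the hand to a home-screen grid
# cell and look up the app there. The 4x4 grid and the layout dictionary are
# hypothetical placeholders, not the mapping evaluated in the study.
HOME_GRID = (4, 4)  # columns x rows (assumed)

def grid_cell(u, v, grid=HOME_GRID):
    """u, v in [0, 1): normalized touch position on the hand."""
    col = min(int(u * grid[0]), grid[0] - 1)
    row = min(int(v * grid[1]), grid[1] - 1)
    return col, row

def app_at(u, v, layout):
    """layout maps (col, row) -> app name, mirroring the user's own home
    screen so that spatial memory can transfer."""
    return layout.get(grid_cell(u, v))

# e.g. layout = {(0, 0): "Messages", (1, 0): "Calendar", (3, 3): "Clock"}
```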

Research topic and expected contribution: standalone ultra-mobile devices
These findings illustrate what is conceptually possible on screenless mobile devices, namely interactions ranging from drawing to the operation of more or less traditional GUIs. This is beneficial because GUI interaction can be more expressive than gesture interaction, where interaction is typically limited by the fact that one gesture maps to one function.


Figure 2. The Imaginary Phone allows this user to operate the phone while it remains in his pocket. Instead, he invokes an app by tapping on the corresponding location on his hand. The similarity between the four fingers and the four columns of icons on the physical device simplifies this.


Additionally, a GUI-inspired model allows us to leverage users’ experience with more traditional devices, such as desktops and mobiles.

These abilities suggest two different use cases.

On one hand, imaginary interfaces could serve as a shortcut mode for physical devices. In such a use case, users continue to carry their physical device but, to save time, execute functions on the imaginary counterpart without retrieving the physical device.

On the other hand, our findings have inspired us to think more broadly. With sufficient training based on a physical device, shouldn’t users be able to operate an imaginary device standalone? In the future, such devices should allow for extremely small form factors that leave the user’s hands free, allowing for truly ubiquitous use. At the same time, by leveraging the interaction paradigm discussed above, such devices could allow for a reasonably expressive interaction, closer to what we find on today’s phones, desktops and tabletop computers.

The goal of my work is to explore this space, i.e., to create standalone ultra-mobile devices with powerful interaction.

The work described in the preceding sections represents the first steps toward this goal. In the remaining time of my PhD, I plan to push the concept further by exploring how to author imaginary interfaces on the fly, how to browse interfaces, and how to allow multiple users to collaborate.

1. Authoring: draw-your-own interface
We have started to explore draw-your-own imaginary interfaces that allow users to draw interface elements and then later use them. As a side effect, since users create the interface themselves, they are already familiar with its layout and there is no need for explicit learning.
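A minimal sketch of the bookkeeping this requires (remember where each element was drawn, then hit-test later taps against the remembered regions), under the simplifying assumption that each drawn element is reduced to a labeled bounding box; class and method names are illustrative:

```python
# Minimal sketch, assuming each drawn element is reduced to a labeled bounding
# box; class and method names are illustrative, not a published design.
from dataclasses import dataclass

@dataclass
class DrawnElement:
    label: str
    x0: float
    y0: float
    x1: float
    y1: float  # bounds in imaginary-plane coordinates

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

class DrawYourOwnInterface:
    def __init__(self):
        self.elements = []

    def add_stroke(self, label, points):
        """Register a finished stroke as a tappable element (its bounding box)."""
        xs, ys = zip(*points)
        self.elements.append(DrawnElement(label, min(xs), min(ys), max(xs), max(ys)))

    def hit_test(self, x, y):
        """Return the most recently drawn element under a later tap, if any."""
        for element in reversed(self.elements):
            if element.contains(x, y):
                return element
        return None
```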

2. Exploring unfamiliar interfaces
Transfer learning, as described earlier, allows users to leverage their experience with a familiar physical device. To allow users to operate an unfamiliar interface, we are porting audio-based browsing techniques intended for blind users (e.g., Apple VoiceOver [1]) to imaginary interfaces.
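As a rough sketch of how a VoiceOver-style exploration pass could map onto imaginary interfaces: announce an element when the exploring finger first enters it, and activate it on a confirmation gesture. The element interface and the pluggable speak callback below are assumptions, not the system we are building.

```python
# Rough sketch of a VoiceOver-style exploration pass for an invisible layout:
# announce an element when the exploring finger first enters it, activate it on
# a confirmation gesture. The element interface and the speak callback are
# assumptions (speak could be any text-to-speech backend).
class AudioExplorer:
    def __init__(self, elements, speak):
        self.elements = elements  # objects with .label and .contains(x, y)
        self.speak = speak        # callable taking a string to speak aloud
        self.focused = None

    def on_finger_moved(self, x, y):
        """Announce the element under the finger when focus changes."""
        hit = next((e for e in self.elements if e.contains(x, y)), None)
        if hit is not self.focused:
            self.focused = hit
            if hit is not None:
                self.speak(hit.label)

    def on_confirm_gesture(self):
        """Activate the currently announced element."""
        if self.focused is not None:
            self.speak("activated " + self.focused.label)
        return self.focused
```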

3. Collaborative imaginary interfaces
The designs described so far all assume a single user, but conceptually multiple imaginary interfaces could be used in concert by multiple users. As shown in Figure 4, we imagine two users coming together, each wearing their own sensor and co-opting an arbitrary surface or space for interaction.

By watching each other interact, we expect users to form a common understanding of the interface and to be able to combine their previous experiences (e.g., “the Save button is over here”).

Possible implications for the visually impaired
While our main goal is to create and explore ultra-mobile devices, Imaginary Interfaces and interfaces designed for the visually impaired have interesting similarities and differences worth exploring. In particular, we plan to explore the value derived from the extra feedback users obtain from watching their hands interact. Exploring this and related questions will help us better understand Imaginary Interfaces and at the same time allow us to discover which aspects of our technology can inform the design of interfaces for the visually impaired.

Figure 4. With collaborative imaginary interfaces, users can come together and co-opt any surface or space for their use.


Related Work
Highly mobile devices have supplied visual feedback by utilizing projectors. For instance, Skinput [5], with vibration-based touch sensing on the hand and forearm, and Sixth Sense [9], with computer-vision-based sensing, both use mobile projectors to provide visual feedback. OmniTouch [6] provides similar functionality with a robust depth-camera-based sensing algorithm that detects interaction surfaces and touch.

The goal of screenless devices has been explored in the field of wearable computing. GesturePad [12] is one example of a technology users could always carry with them. Gesture Pendant [13] introduced the chest-worn camera to sense hand interaction; our tracking hardware is directly inspired by it. Virtual Shelves [7] lets users place objects in a finite number of containers that exist at precise locations in the hemisphere in front of them.

Our future prototypes will rely on auditory feedback and will therefore draw from audio-based mobile interfaces [11] and interfaces for the visually impaired [1].

Research Situation and Dissertation Status
At the time of this writing, I have published two UIST full papers on Imaginary Interfaces [3][4] and an exploration into one-handed form factors (PinchWatch [8]). My PhD work builds on my Master’s thesis on visualizing off-screen targets on maps (Wedge [2]).

References
[1] Apple. iPhone VoiceOver. http://www.apple.com/accessibility/iphone/vision.html

[2] Gustafson, S., Baudisch, P., Gutwin, C., and Irani, P. Wedge: clutter-free visualization of off-screen locations. In Proc. CHI, (2008), 787–796.

[3] Gustafson, S., Bierwirth, D. and Baudisch, P. Imaginary Interfaces: spatial interaction with empty hands and without visual feedback. In Proc. UIST, (2010), 3–12.

[4] Gustafson, S., Holz, C. and Baudisch, P. Imaginary Phone: learning imaginary interfaces by transferring spatial memory from a familiar device. In Proc. UIST, (2011), 283–292.

[5] Harrison, C., Tan, D., and Morris, D. Skinput: appropriating the body as an input surface. In Proc. CHI, (2010), 453–462.

[6] Harrison, C., Benko, H., and Wilson, A.D. OmniTouch: wearable multitouch interaction everywhere. In Proc. UIST, (2011), 441–450.

[7] Li, F.C.Y., Dearman, D. and Truong, K.N. Virtual Shelves: interactions with orientation aware devices. In Proc. UIST, (2009), 125–128.

[8] Loclair, C., Gustafson, S. and Baudisch, P. PinchWatch: a wearable device for one-handed microinteractions. In Proc. MobileHCI Workshop on Ensembles of On-Body Devices, (2010), 4 pages.

[9] Mistry, P., Maes, P. and Chang, L. WUW - wear Ur world: a wearable gestural interface. In CHI Ext. Abs., (2009), 4111–4116.

[10] Ni, T. and Baudisch, P. Disappearing mobile devices. In Proc. UIST, (2009), 101–110.

[11] Pirhonen, A., Brewster, S. and Holguin, C. Gestural and audio metaphors as a means of control for mobile devices. In Proc. CHI, (2002), 291–298.

[12] Rekimoto, J. GestureWrist and GesturePad: unobtrusive wearable interaction devices. In Proc. ISWC, (2001), 21–27.

[13] Starner, T., Auxier, J., Ashbrook, D. and Gandy, M. The Gesture Pendant: a self-illuminating, wearable, infrared computer vision system for home automation control and medical monitoring. In Proc. ISWC, (2000), 87–94.