
The Human Experience

Gregory D. Abowd and Elizabeth D. Mynatt, Georgia Institute of Technology
Tom Rodden, University of Nottingham

To address Weiser’s human-centered vision of ubiquitous computing, the authors focus on physical interaction, general application features, and theories of design and evaluation for this new mode of human–computer interaction.

Weiser originated the term ubiquitous computing, creating a vision of people and environments augmented with computational resources that provide information and services when and where desired. Although this vision has excited many technologists, we must realize that the main motivation behind Weiser’s vision is the impact ubicomp could have on the human experience: “Machines that fit the human environment instead of forcing humans to enter theirs will make using a computer as refreshing as a walk in the woods.”1 For the past decade, researchers have worked toward the implicit goal of assisting everyday life and not overwhelming it. Although the names applied to their research efforts vary (pervasive, wearable, augmented, invisible, disappearing, calm, and so forth), almost all share the goal that Weiser so eloquently characterized:

Inspired by the social scientists, philosophers, and anthropologists at Parc, we have been trying to take a radical look at what computing and networking ought to be like. We believe that people live through their practices and tacit knowledge so that the most powerful things are those that are effectively invisible in use. This is a challenge that affects all of computer science. Our preliminary approach: Activate the world. Provide hundreds of wireless computing devices per person per office, of all scales (from 1-inch displays to wall-sized). This has required new work in operating systems, user interfaces, networks, wireless, displays, and many other areas. We call our work “ubiquitous computing.” This is different from PDAs, dynabooks, or information at your fingertips. It is invisible, everywhere computing that does not live on a personal device of any sort, but is in the woodwork everywhere.2

To realize Weiser’s vision, we must address several clear goals. First, the everyday practices of people must be understood and supported. Second, the world must be augmented through the provisioning of heterogeneous devices offering different forms of interactive experience. Finally, the networked devices must be orchestrated to provide for a holistic user experience. Here, we overview how these goals have affected research in three areas: the definition of the appropriate physical interaction experience, the discovery of general application features, and the evolution of theories for designing and evaluating the human experience in ubicomp.

Defining the appropriate physical interaction experience

Ubiquitous computing inspires application development that is “off the desktop.” In addition to suggesting a freedom from well-defined interaction locales (such as the desktop), this vision assumes that physical interaction between humans and computation will be less like the current keyboard, mouse, and display paradigm and more like the way humans interact with the physical world. We speak, gesture, and write to communicate with other humans and alter physical artifacts. The drive for the correct ubicomp experience has resulted in a variety of important changes to the input, output, and interactions that define the human experience with computing.

We have traditionally treated input as explicit communication. However, the advance of sensing and recognition technologies has challenged us to provide more humanlike communications capabilities and to effectively incorporate implicit actions into the subset of meaningful system input. Similarly, the communication from the environment to the user (the output) has become highly distributed and available in many form factors and modalities. The challenge is to coordinate across many output locations and modalities without overwhelming our limited attention spans. Finally, the relationship between input and output is important in ubicomp, because the technology’s invisible nature can be seen as a smooth integration between the physical and virtual worlds.

Toward implicit input

Input has moved beyond the explicit nature of textual input (from keyboards) and selection (from pointing devices) to a greater variety of data types. This has resulted in not only a greater variety of input technologies but also a shift from explicit means of human input to more implicit forms of input. In other words, our natural interactions with the physical environment provide sufficient input to a variety of attendant services, without any further user intervention. For example, walking into a space is enough to announce your presence and identity in that location.
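To make this concrete, here is a minimal event-driven sketch of implicit input, written by us for illustration rather than taken from any system described here: a presence sensor publishes (identity, location) events, and attendant services react without any explicit user action. All class and service names are hypothetical.

```python
# Illustrative sketch of implicit input: walking into a room is the only
# "input"; subscribed services react with no further user intervention.
# All class and service names are hypothetical.
from typing import Callable

class PresenceSensor:
    """Publishes (identity, location) events to subscribed services."""
    def __init__(self) -> None:
        self.subscribers: list[Callable[[str, str], None]] = []

    def subscribe(self, service: Callable[[str, str], None]) -> None:
        self.subscribers.append(service)

    def person_entered(self, identity: str, location: str) -> None:
        for service in self.subscribers:
            service(identity, location)

def forward_calls(identity: str, location: str) -> None:
    print(f"Routing {identity}'s calls to the phone nearest {location}")

def update_locator_map(identity: str, location: str) -> None:
    print(f"Map: {identity} is now in {location}")

sensor = PresenceSensor()
sensor.subscribe(forward_calls)
sensor.subscribe(update_locator_map)
sensor.person_entered("alice", "room 383")  # the user merely walked in
```

The point of the pattern is that the human-initiated act (entering a room) is the only input; everything downstream is the system’s responsibility.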

Computer interfaces that support more natural human forms of communication (such as handwriting, speech, and gestures) are beginning to supplement or replace elements of the graphical user interface interaction paradigm. Long-standing research communities in computer vision and multimodal recognition technologies (mainly handwriting and speech) are driving the emerging area of perceptual interfaces. Pen-based interaction, unsuccessfully rushed to market in the early 1990s, is also experiencing a resurgence. Large-scale touch-interactive surfaces, using technologies such as capacitive coupling, have made it possible to create multiperson interactive surfaces on tables and walls (see Figure 1). Recognition of freehand writing is improving, but more significantly, mass adoption has followed the introduction of less sophisticated and more robust recognition technologies, such as Graffiti, the single-stroke handwriting scheme popularized on Palm devices. We have even seen compelling examples of voice and pen input effectively used in applications without requiring any recognition.3

These recognition technologies exemplify how computers can interpret meaning from sensed signals of human activity. There are many other ways to infer information about people and environments by sensing a variety of other physical signals (“Connecting the Physical World with Pervasive Networks” in this issue directly addresses the advances in sensing of the physical world). However, sensing and interpreting human activity provides a more implicit notion of input to an interactive system. For example, many researchers have investigated4,5 how we can incorporate simple sensors such as radio frequency identification, accelerometers, tilt sensors, capacitive coupling, and IR range finders into artifacts to increase the language a user can provide as input to control that artifact (see Figure 2).
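As a sketch of how such sensors “increase the language” of input, consider mapping raw readings from a two-axis tilt sensor and a touch sensor into handheld interaction events, loosely in the spirit of the sensing techniques cited above.5 The thresholds and event names are ours, purely for illustration.

```python
# Hypothetical mapping from raw sensor state to interaction events on a
# sensor-augmented handheld. Thresholds and event names are invented.
def interpret(tilt_x: float, tilt_y: float, held: bool) -> str:
    """Return an interaction event for one sample of sensor readings."""
    if not held:
        return "idle"                # device set down: dim the display
    if abs(tilt_x) > 60:
        return "rotate-display"      # user turned the device on its side
    if tilt_y < -45:
        return "scroll-down"         # tilting toward the user scrolls
    return "portrait-reading"        # default handheld posture

print(interpret(tilt_x=75, tilt_y=0, held=True))    # rotate-display
print(interpret(tilt_x=5, tilt_y=-50, held=True))   # scroll-down
print(interpret(tilt_x=0, tilt_y=0, held=False))    # idle
```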


Figure 1. The DiamondTouch input technology from Mitsubishi Electric Research Lab uses capacitive coupling through humans to provide a large-scale input surface for multiple simultaneous users (see www.merl.com/projects/DiamondTouch for more details). Figure courtesy of MERL.

Figure 2. Two examples of simple sensing embedded into devices: (a) The Listen Reader from Xerox Parc uses electric field sensors located in the book binding to sense the proximity of the reader’s hands and to control audio parameters.4 RFID (radio frequency identification) tags embedded in each page allow fast, robust page identification. Picture courtesy of Xerox Parc. (b) An experimental PDA platform used at Microsoft Research to investigate how a variety of simple sensors can improve the interaction between a user and various handheld applications.5 Figure courtesy of Ken Hinckley. [Figure 2b labels: a proximity range sensor (IR emitter and receiver), touch sensitivity on the screen bezel and on the sides and back of the device, and a two-axis linear accelerometer tilt sensor inside the device, in the plane of the display.]


Invisibility of computing, from the human perspective, can start when we can determine an individual’s identity, location, affect, or activity through his or her mere presence and natural interactions in an environment. The union of explicit and implicit input defines the context of interaction between the human and the environment.

Toward multiscale and distributed output

Integrating ubicomp capabilities into everyday life also requires novel output technologies and techniques. To start, the design of targeted information appliances, such as PDAs and future home technologies, requires addressing the technology’s form, including its aesthetic appeal. Output is no longer exclusively in the form of self-contained desktop or laptop visual displays that demand our attention. A variety of scales of visual displays are being distributed throughout our environments. For example, many of us carry cell phones with small screens, and large electronic whiteboards, similar to the Liveboard that Weiser’s group invented at Parc, are common in offices and classrooms today. More importantly, we are seeing multiple modalities of information sources that lie more at the periphery of our senses and provide qualitative, ambient forms of communication.

Weiser described the form factor of ubicomp technology in three scales: the inch, foot, and yard. The middle (foot) scale is similar to the standard laptop and desktop displays. We each have at least one of these devices, and we largely use them in stationary settings. A new generation of tablet-like portable pen-based computers (devices that rival the experimental Pad prototypes developed at Xerox Parc) is expected to hit the market this year. Pagers, cellular phones, and PDAs that incorporate handheld displays with relatively low resolution currently represent the small end of the scale (inch). We carry around an increasing number of these display devices at all times. High-resolution wall-sized displays now represent the large end of the scale (yard). Such displays are created by effectively stitching together multiple low-resolution projected displays, such as the Stanford Interactive Mural6 (see Figure 3a) or the Princeton Display Wall.7

As these displays continue to proliferate in number and variety, two important trends have emerged. First, as Jun Rekimoto’s “pick and drop” demonstration8 initially motivated and the Stanford Interactive Room9 further explored, we want to easily move information between separate displays and coordinate the interactions between multiple displays. Second, we want displays that are less demanding of our attention. We’ll achieve Weiserian invisibility by designing output that provides for peripheral awareness of information out of the foreground of our conscious attention.

Researchers have explored the trend toward peripheral output for a particular class of displays: ambient displays. Such displays require minimal attention and cognitive effort and are thus more easily integrated into a persistent physical space. The artist Natalie Jeremijenko at Xerox Parc invented one of the first ambient displays, the Dangling String.10 Using analog sensing of network traffic from the cabling in the ceiling, a motor drove the spin of a long string: the more traffic, the faster the rotation. During high-traffic periods, the whir of the string was faintly audible as well.

The Dangling String shares many features with subsequent efforts in ambient displays. A data source drives the abstract representation such that the user’s peripheral perception can monitor the output. The data source is generally information of medium to low priority, but it is beneficial for the user to be aware of it, perhaps for some opportunistic action. Because these displays are meant to be persistently available in the environment, they are often designed to be aesthetically appealing and novel (see, for example, the Water Lamp in Figure 3b). Other examples of ambient displays include ambientROOM,11 which projects information about colleagues as pinpoints of light on the wall; Audio Aura,12 which encodes the arrival of incoming email as auditory cues in a mobile device; and Kandinsky,13 which assembles images triggered from keywords in information bulletins into an aesthetically pleasing and intriguing collage.
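The mapping at the heart of such a display can be very small. The sketch below, with invented numbers, shows the Dangling String pattern: a medium-to-low-priority data source (network traffic) drives an abstract, peripheral physical representation (motor speed).

```python
# Illustrative only: map a peripheral data source onto an abstract,
# ambient representation. Rates and the 120-rpm ceiling are invented.
def string_rpm(packets_per_second: float, max_rate: float = 1000.0) -> float:
    """Map network traffic onto the spin rate of the dangling string."""
    level = min(packets_per_second / max_rate, 1.0)  # normalize to 0..1
    return level * 120.0                             # up to 120 rpm

for rate in (5, 200, 950):
    print(f"{rate:>4} packets/s -> {string_rpm(rate):6.1f} rpm")
```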

Figure 3. Form factors of ubiquitous computing. (a) The Stanford Interactive Mural, an example of a large-scale interactive display surface created by tiling multiple lower-resolution projectors. Figure courtesy of François Guimbretière. (b) The Water Lamp from Hiroshi Ishii’s Tangible Media Group at the MIT Media Lab, an example of an ambient display. Light shines upward through a pan of water, which is actuated by digitally controlled solenoids that can tap the water and cause ripples. External information can drive the tapping of the solenoids. Figure courtesy of Hiroshi Ishii.

Although using the visual channel dominates our experience with computing output, examples such as Audio Aura demonstrate how other modes of output, such as actuation of small devices, can effectively communicate ambient information. With the introduction of simple programming tools for dealing with motors and other actuators, such as Phidgets,14 mechanical actuation to drive distributed output devices will increase.

Seamless integration of physical and virtual worlds

An important feature of ubicomp technology is that it attempts to merge computational artifacts smoothly with the world of physical artifacts. Plenty of examples demonstrate how we can overlay electronic information on the real world, thus producing an augmented reality.15 An example of such augmented reality is NaviCam,16 a portable camera and TV that recognizes 2D glyphs placed on objects and can then superimpose relevant information over the object for display on a TV screen (see Figure 4a). This form of augmented reality only affects the output. When input and output are intermixed, as with the DigitalDesk17 (see Figure 4b), we begin to approach the seamless integration of the physical and virtual worlds. Researchers have suggested techniques for using objects in the physical world to manipulate electronic artifacts, creating so-called graspable18 or tangible19 user interfaces. Sensors attached to devices provide ways to physically manipulate those devices to be interpreted appropriately by the applications they run (see Figure 2).5,20

Figure 4. Augmented reality: (a) The NaviCam system, which recognizes 2D glyphs on objects and then superimposes additional information over that object.16 Figure courtesy of Sony Computer Science Laboratories. (b) The DigitalDesk prototype integrates physical and virtual desktop environments with the aid of projection and vision technology.17

Application themes

Applications are of course the whole point of ubiquitous computing. —Mark Weiser21

Many applications-focused researchers in human–computer interaction (HCI) seek the Holy Grail of ubicomp, the killer application that will cause significant investment in the infrastructure and then enable a wide variety of ubicomp applications to flourish. It could be argued that person-to-person communication is such a killer app for ubicomp, because it has caused a large investment in environmental and personal infrastructure, moving us closer to a completely connected existence. Regardless of whether personal communication is the killer app, the vision of ubicomp from the human perspective is much more holistic. It is not the value of any single service that will make computing a disappearing technology. Rather, it is the combination of a large range of services, all of which are available when and as needed, and all of which work as desired without extraordinary human intervention. A major challenge for applications research is discovering an evolutionary path toward this idyllic interactive experience.

The brief history of ubicomp demonstrates three emergent features that appear across many applications. First, we must be able to use implicitly sensed context from the physical and electronic environment to determine a given service’s correct behavior. Context-aware computing demonstrates promise for making our interactions with services more seamless and less distracting from our everyday activities. Applications can work well when properly informed about the context of their use. Second, we must provision automated services to easily capture and store memories of live experiences and serve them up for later use. Finally, we need continuously available services. As we move toward the infusion of ubicomp into our everyday lives, the services provided will need to become constantly available partners with the human users, able to be interrupted and easily resumed.

Context-aware computing

Two compelling early ubicomp demonstrations were the Olivetti Research Lab’s Active Badge22 and the Xerox ParcTab,23 both location-aware appliances. These devices leverage a simple piece of context (user location) and provide valuable services (automatic call forwarding for a phone system and automatically updated maps of user locations in an office). These simple location-aware appliances are perhaps the first demonstration of linking implicit human activity with computational services that serve to augment general human activity.

Locating identifiable entities (usually people) is a common piece of context used in ubicomp application development. The most widespread applications have been GPS-based car navigation systems and handheld tour guide systems that vary the content displayed (video or audio) on a handheld unit, given the user’s physical location in an exhibit area.24,25 The Sentient Computing Project (see www.uk.research.att.com/spirit) demonstrates the most complex set of location-aware applications and provides the most complex indoor location-aware infrastructure and applications development (see Figure 5).

Figure 5. Indoor location systems: (a) the AT&T Laboratories Bat device, part of a 3D ultrasonic indoor location system; (b) a map of office worker locations that helps coworkers find each other and talk by phone.

Of course, there is more to context than position (where) and identity (who). Although a complete definition of context remains an elusive research challenge, it is clear that in addition to who and where, context awareness involves when, what, and why.

With the exception of using time as an index into a captured record or summarizing how long a person has been at a particular location, most context-driven applications are unaware of the passage of time. Of particular interest are the relative changes in time as an aid for interpreting human activity. For example, brief visits at an exhibit could indicate a general lack of interest. Additionally, when we can establish a baseline of behavior, action that violates a perceived pattern would be of particular interest. For example, a context-aware home might notice when an elderly person deviates from a typically active morning routine.
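A rough sketch of that baseline idea, assuming the home counts discrete motion events each morning; the data, statistics, and threshold are purely illustrative.

```python
# Hypothetical sketch: flag a morning whose activity level deviates
# strongly from an established routine. All numbers are invented.
from statistics import mean, stdev

baseline = [42, 39, 45, 41, 44, 40, 43]  # motion events, past seven mornings

def deviates(today: int, history: list[int], k: float = 2.0) -> bool:
    """True if today's count is more than k standard deviations off baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) > k * sigma

print(deviates(12, baseline))  # True: an unusually quiet morning
print(deviates(41, baseline))  # False: within the normal routine
```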

The interaction in current systems either assumes what the user is doing or leaves the question open. Perceiving and interpreting human activity is a difficult problem, but interaction with continuously worn, context-driven devices will likely need to incorporate interpretations of human activity to provide useful information. One strategy is to incorporate information about what a user is doing in the virtual realm. What application is he using? What information is he accessing? One example that has both positive and negative uses is cookies, which describe people’s activity on the Web. Another way of interpreting the what of context is to view it as the focus of attention of one or more people during a live event. Knowing this focus of attention can inform a better capture of the event.

Even more challenging than perceiving what a person is doing is understanding why he is doing it. Sensing other forms of contextual information that could indicate a person’s affective state, such as body temperature, heart rate, and galvanic skin response, might be a useful place to start.

Related to the definition of context is the question of how to represent context. This issue is important once we consider separating an application’s context-sensing portion from its context-aware behavior. Without good representations for context, application developers are left to develop ad hoc and limited schemes for storing and manipulating this key information. The evolution of more sophisticated representations will enable a wider range of capabilities and a true separation of sensing context from the programmable reaction to that context.
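As one sketch of what such a separation might look like, the record below covers the who, where, when, and what dimensions discussed above, and a broker decouples context sources (which publish) from context-aware behavior (which subscribes). The field names and API are invented, not a published representation.

```python
# Illustrative separation of context sensing from context-aware reaction.
# The event schema and broker API are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ContextEvent:
    who: str            # identity
    where: str          # position
    when: float         # timestamp
    what: str           # inferred activity
    confidence: float   # sensing is never 100 percent reliable

class ContextBroker:
    def __init__(self) -> None:
        self.handlers: list[Callable[[ContextEvent], None]] = []

    def subscribe(self, handler: Callable[[ContextEvent], None]) -> None:
        self.handlers.append(handler)

    def publish(self, event: ContextEvent) -> None:
        for handler in self.handlers:
            handler(event)  # reactions never touch the sensors directly

broker = ContextBroker()
broker.subscribe(lambda e: print(f"{e.who} is {e.what} in {e.where}"))
broker.publish(ContextEvent("alice", "kitchen", 1041379200.0, "cooking", 0.8))
```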

An obvious challenge of context-aware computing is making it truly ubiquitous. Having certain context, in particular positioning information, is useful. However, there are few truly ubiquitous, single-source context services. Positioning is a good example. GPS does not work indoors and is even suspect in some urban regions. Several indoor positioning schemes exist, with differing characteristics in terms of cost, range, granularity, and requirements for tagging, and no single solution is likely to ever meet all requirements.

The solution for obtaining ubiquitous context is to assemble contextual information from a combination of related context services. Such context fusion is similar in intent to the related and well-researched area of sensor fusion. Context fusion must seamlessly hand off sensing responsibility between boundaries of different context services. Negotiation and resolution strategies must integrate information from competing context services when more than one service concurrently provides the same piece of context. This fusion is also required because sensing technologies are not 100 percent reliable or deterministic. Combining measures from multiple sources could increase the confidence value for a particular interpretation. In short, context fusion assists in providing reliable ubiquitous context by combining services in parallel (to offset noise in the signal) and sequentially (to provide greater coverage). For example, we could combine information from speaker identification, face recognition, and gait recognition to improve identification of an individual in a home setting. Additionally, information from different sources might be available and more appropriate at different times.
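A minimal sketch of the parallel case, assuming each recognizer reports an identity with a confidence score; the recognizers and numbers are invented for illustration.

```python
# Hypothetical parallel context fusion: accumulate per-identity confidence
# across several unreliable recognizers, then normalize.
from collections import defaultdict

def fuse_identity(estimates: list[tuple[str, float]]) -> tuple[str, float]:
    """Each estimate is (identity, confidence); scores add up per identity."""
    scores: dict[str, float] = defaultdict(float)
    for identity, confidence in estimates:
        scores[identity] += confidence
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())

estimates = [
    ("alice", 0.6),  # speaker identification
    ("alice", 0.5),  # face recognition
    ("bob",   0.3),  # gait recognition disagrees
]
print(fuse_identity(estimates))  # ('alice', ~0.79): higher combined confidence
```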

Automated capture and access

Much of our life in business and academia is spent listening to and recording (more or less accurately) the events that surround us and then trying to remember the important information from those events. There is clear value, and potential danger, in using computational resources to augment the inefficiency of human record taking, especially when there are multiple streams of related information that are virtually impossible to capture manually as a whole. Tools that support automated capture of and access to live experiences can remove the burden of doing something at which humans struggle (such as recording), so they can focus attention on activities at which they succeed (indicating relationships, summarizing, and interpreting).

We define capture and access as the task of preserving a record of some live experience that is then reviewed at some point in the future. Vannevar Bush was perhaps the first to write about the benefits of a generalized capture and access system when he introduced the concept of the memex.26 The memex was intended to store the artifacts that we come in contact with in our everyday lives and the associations that we create between them. Over the years, many researchers have worked toward this vision. As a result, many systems have been built to capture and access experiences in classrooms, meetings, and other live experiences.
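In skeletal form, the task reduces to writing timestamped streams during the live experience and indexing into them by time afterward. The session API below is a hypothetical sketch, not the interface of any system discussed here.

```python
# Illustrative capture-and-access skeleton: capture appends timestamped
# items to named streams; access retrieves by time. API is hypothetical.
import bisect
import time

class CaptureSession:
    def __init__(self) -> None:
        self.streams: dict[str, list[tuple[float, str]]] = {}

    def capture(self, stream: str, item: str, t: float | None = None) -> None:
        if t is None:
            t = time.time()
        self.streams.setdefault(stream, []).append((t, item))

    def access(self, stream: str, t: float) -> str:
        """Return the most recent item in a stream at or before time t."""
        events = self.streams[stream]
        i = bisect.bisect_right(events, (t, chr(0x10FFFF))) - 1
        return events[max(i, 0)][1]

s = CaptureSession()
s.capture("slides", "slide 1", t=0.0)
s.capture("ink", "circle around the equation", t=42.5)
s.capture("slides", "slide 2", t=60.0)
print(s.access("slides", t=45.0))  # -> slide 1 (what was showing at t=45)
```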

Xerox Parc explored the earliest work on automated support for meeting capture. Over the past decade, numerous capture applications have been explored in a variety of environments (see the “Classroom 2000” sidebar for a description of a classroom capture environment) in support of individuals or groups. A full review of automated capture appears elsewhere.27

Toward continuous interaction

Providing continuous interaction moves computing from a localized tool to a constant presence. A new thread of ubicomp research, everyday computing, promotes informal and unstructured activities typical of much of our everyday lives. Familiar examples are orchestrating daily routines, communicating with family and friends, and managing information.

The focus on activities as opposed to tasks is a crucial departure from traditional HCI design. The majority of computer applications support well-defined tasks that have a marked beginning and end with multiple subtasks in between. Take word processing, for example. Word processing features are tuned for starting with a blank document (or template), entering text, formatting, printing, and saving. These applications are not well suited to the more general activity of writing, encompassing multiple versions of documents where text is reused and content evolves over time.

The emphasis on designing for continuously available interaction requires addressing these features of informal, daily activities:

• They rarely have a clear beginning or end, so the design cannot assume a common starting point or closure and thus requires greater flexibility and simplicity.
• Interruption is expected as users switch attention between competing concerns.
• Multiple activities operate concurrently and might need to be loosely coordinated.
• Time is an important discriminator in characterizing the ongoing relationship between people and computers.
• Associative models of information are needed, because information is reused from multiple perspectives.

Of course, activities and tasks are not unrelated to each other. Often an activity will comprise several tasks, but the activity itself is more than these component parts. For example, communication activities contain well-defined tasks such as reading a message or composing a reply. The interaction falters when the task refers to the larger activity: How does this new message relate to previous messages from this person? What other issues should be included in the reply? The challenge in designing for activities is encompassing these tasks in an environment that supports continuous interaction.

Theories of design and evaluation

Because the implementation of Weiser’s ubicomp vision alters the relationship between humans and technology, we must revise our theories of HCI that inform design and evaluation. Traditional work in HCI has produced considerable human factors guidance for designing various kinds of computer interfaces (for example, graphical displays, direct manipulation interfaces, multimedia systems, and Web sites). Although widely used, most of these guidelines tend to focus on the needs and demands of designing desktop computer interfaces. During the past few years, ubicomp’s emergence has led research communities to ask whether these current approaches are appropriate to the design of interfaces where interaction extends beyond the traditional monitor, keyboard, and mouse arranged on a desk.

A particular concern for ubicomp is developing support for designing and assessing systems that are appropriate when computing functionality becomes embedded in the surrounding environment, specific physical objects, or even objects that are carried. This movement away from the desktop, with its well-understood and fixed arrangement of devices, has been a catalyst for three broad research activities:

• The development of new models of interaction that incorporate the relationship of ubicomp with the physical world
• The emergence of methods that focus on gaining richer understandings of settings
• The development of approaches to assess ubicomp’s utility

New models of interaction

The shift in focus from the desktop to the surrounding environment inherent in ubicomp mirrors previous work in HCI and computer-supported cooperative work. As the computer has increasingly spread throughout organizations, researchers have had to shift their focus from a single machine engaging with an individual to a broader set of organizational and social arrangements and the cooperative interaction inherent in these arrangements. This shift has seen the development of new models of interaction to support the design process in broader organizational settings. Many of these models apply to ubicomp, with its emphasis on integrating numerous devices in one setting.

Traditionally, the Model Human Processor theory of human cognition and behavior has informed HCI research and evaluation efforts.28 This model focuses on internal cognition driven by the cooperation of three independent units of sensory, cognitive, and motor activity, where each unit maintains its own working store of information. As the application of computers has broadened, designers have turned to models that consider the nature of the relationship between the internal cognitive processes and the outside world. Designing for a balance between “knowledge in the world” and “knowledge in the head” is now a common maxim in the design community.29,30 The ubicomp community is currently exploring three main models of cognition as guides for future design and evaluation.

Activity theory is the oldest of the three, building on Lev Vygotsky’s work.31 The closest to traditional theories, activity theory recognizes concepts such as goals (objects), actions, and operations. However, both goals and actions are fluid, based on the world’s changing physical state instead of more fixed, a priori plans. Additionally, although operations require little to no explicit attention, such as an expert driver driving home, an operation can shift to an action based on changing circumstances such as difficult traffic and weather conditions. Activity theory also emphasizes the transformational properties of artifacts that implicitly carry knowledge and traditions, such as musical instruments, cars, and other tools. The user’s behavior is shaped by the capabilities implicit in the tool itself.32 Ubicomp efforts informed by activity theory, therefore, focus on the transformational properties of artifacts and the fluid execution of actions and operations.

Situated action emphasizes the improvisational aspects of human behavior and de-emphasizes the a priori plans that a person simply executes. In this model, knowledge in the world continually shapes the ongoing interpretation and execution of a task. For example, a downhill skier constantly adjusts her behavior given the changing physical terrain, the presence of other people, and the signals from her own body. Any external plan, such as “ski to the bottom of the hill without falling,” is vague and driven by the context of the ski resort itself (based on the terrain, snow conditions, other skiers, and so forth).33

Ubicomp efforts informed by situated action also emphasize improvisational behavior and would neither require nor anticipate that the user follow a predefined script. The system would aim to add knowledge to the world that could effectively assist in shaping the user’s action, hence an emphasis on continuously updated peripheral displays. Additionally, evaluating such a system would require watching authentic human behavior and would discount post-task interviews in which people attempt to explain their actions as rationalizations of behavior that is not necessarily rational.

Distributed cognition also de-emphasizes internal human cognition, but in this case it turns to a systems perspective in which humans are just part of a larger system. This theory focuses on the collaborative process, where multiple people use multiple objects to achieve a larger system goal, such as naval crewmembers using numerous tools to navigate a ship into port.29 Of all three theories, distributed cognition pays the greatest attention to knowledge in the world, because much of the information needed to accomplish a system’s goal is encoded in the individual objects. Cognition occurs as people translate this information to achieve one part of the larger task. Ubicomp efforts informed by distributed cognition focus on designing for a larger system goal in contrast to using an individual appliance. These efforts emphasize how information is encoded in objects and how different users translate or transcribe that information.

Gaining a richer understanding of settings

Considerable debate exists in the social sciences about the nature of cognition and the observable world of everyday practices. Lucy Suchman first highlighted the need to gain a rich understanding of the everyday world to inform IT development.33 In contrast to developing abstract models, many researchers focus on gaining rich understandings of particular settings and conveying these understandings to the design process. Weiser emphasized the importance of understanding these everyday practices to inform ubicomp research: “We believe that people live through their practices and tacit knowledge so that the most powerful things are those that are effectively invisible in use [emphasis ours].”2

The challenge for ubicomp designers is to uncover the very practices through which people live and to make these invisible practices visible and available to the developers of ubicomp environments. Ethnography has emerged as a primary approach to address the need to gain rich understandings of a particular setting and the everyday practices that encompass these settings.

Ethnographic studies have their roots in anthropology and sociology and focus on uncovering everyday practices as they are understood within a particular community. Ethnography relies on an observer going into the field and learning the ropes through questioning, listening, watching, talking, and so forth, with practitioners. The fieldworker’s task is to immerse himself in the setting and its activities with a view to describing these as the skillful and socially organized accomplishments of that setting’s inhabitants.

In the context of ubicomp, the ethnographic investigation’s goal is to provide these descriptions and analyses of everyday life to the IT designers and developers so that ubicomp environments seamlessly mesh with the everyday practices that encapsulate the goals, attitudes, social relationships, knowledge, and language of the intended setting. These techniques have been applied to inform the design of social communications devices for the home34 and to enhance the social connection between senior citizens and their extended families.35

Perhaps a more intriguing method for conveying the nature of settings to developers of future technologies has emerged from an art and design tradition. Bill Gaver and colleagues have explored the use of cultural probes to collect information from settings to inspire the development of new digital devices.36 As part of a broader research project, the design group at the Royal College of Art in London is undertaking a cultural probe study of domestic environments in which volunteers are given packages of evocative materials (such as cameras, telephone pads, visitor books, listening glasses, and dream recorders) designed to elicit inspirational data. Unlike ethnography, which focuses on the everyday and routine nature of the setting, cultural probes seek to uncover the emotional, unusual, and even spiritual to inspire designers.


Sidebar: Classroom 2000

An influential case study in deploying and evaluating a ubicomp application is the Classroom 2000 system, developed at Georgia Institute of Technology (see Figure A).1 The project began in July 1995 with the intent of producing a system that would capture as much of the classroom experience as possible to facilitate later review by both students and teachers. In many lectures, students have their heads down, furiously writing down what they hear and see for future reference. Although some of this writing activity is useful as a processing cue for the student, we wanted to offer students the opportunity to lift their heads occasionally and engage in the lecture experience. The capture system aimed to relieve some of the note-taking burden.

To quickly test this hypothesis’s feasibility, we implemented an environment for capture in six months and used it to capture an entire course. We learned some valuable lessons during this first extended experience. The initial experiments included student note-taking devices that clearly distracted the students. We abandoned support for individual student note-taking, only to resume it two years later when the technology caught up.

To understand this capture system’s impact on teaching and learning, more students and teachers had to use it in a wider variety of courses. This required a significant engineering effort to create a robust and reliable capture system that, by early 1997, could support multiple classes simultaneously. During a three-year experimental period ending in mid-2000, over 100 courses were supported for 30 different instructors. In what will hopefully serve as a model for longitudinal study of ubicomp systems, Jason Brotherton reports in his thesis on the extensive quantitative analysis that reveals how such an automated capture and access system affects the educational experience once it is incorporated into the everyday experience.2 As a direct result of these deeper evaluations, we know that the system encourages 60 percent of its users to modify their in-class note-taking behavior. However, not all of this modified behavior is for the better. Taking no notes, for example, is not a good learning practice to reinforce. We need to facilitate more content-based retrieval and synchronized playback of the lecture experience. These insights have motivated further research efforts and established a long-term research project, eClass, that stands as a model for ubicomp research and automated capture and access.

Figure A. Classroom 2000: (1) in operation, with an extended electronic whiteboard; (2) the automatically generated lecture notes, which include slides presented with teacher annotations and Web pages visited during the lecture, all organized in a timeline presentation allowing random access to streaming audio or video recordings of the lecture.

SIDEBAR REFERENCES

1. G.D. Abowd, “Classroom 2000: An Experiment with the Instrumentation of a Living Educational Environment,” IBM Systems J., vol. 38, no. 4, Oct. 1999, pp. 508–530.

2. J.A. Brotherton, eClass: Building, Observing, and Understanding the Impact of Capture and Access in an Educational Domain, doctoral dissertation, College of Computing and GVU Center, Georgia Inst. of Technology, 2001; www.jasonbrotherton.com/brothert/thesis/Thesis.pdf.



Assessment of use

We have considered models and theories to understand users and some of the methods researchers have used to uncover user needs and desires. However, we must also assess the utility of ubicomp solutions. Researchers have only recently begun to address the development of assessment and evaluation techniques that meet ubicomp’s demands. One reason for this relatively slow development is the gradual evolution of ubiquitous technology and applications. To understand ubicomp’s impact on everyday life, we navigate a delicate balance between predicting how novel technologies will serve a real human need and observing authentic use and the subsequent coevolution of human activities and novel technologies.

Formative and summative evaluation of ubicomp systems is difficult and represents a real challenge for the ubicomp community. With the notable exceptions of the work at Xerox Parc on the use of the Tivoli capture system and at Georgia Tech with the Classroom 2000/eClass system (see the related sidebar), there has been surprisingly little research published from an evaluation or end-user perspective in the ubicomp community. We must address significant challenges to develop appropriate assessment methods and techniques.

The need for new measures

The shift away from the desktop inherent in the ubicomp vision also represents a shift away from the office and the managed structuring of work inherent within these environments. Much of our understanding of work has developed from Fordist and Taylorist principles on the structuring of activities and tasks (human behavior can be decomposed into structured tasks). Evaluation in HCI reflects these roots and is often predicated on notions of task and the measurement of performance and efficiency in meeting these goals and tasks.

However, it is not clear that these measures can apply universally across activities when we move away from structured and paid work to other activities. For example, it is unclear how we might assess the domestic devices suggested by the Royal College of Art37 or the broad range of devices to emerge from Philips’ Vision of the Future.38

This shift away from the world of work leaves open the question of how to apply qualitative or quantitative evaluation methods. Answering this question requires researchers to consider new representations of human activity and to consider how to broaden assessment beyond existing task-oriented approaches. Although many researchers have investigated the use of observational methods and semistructured interviews, the lack of deployed ubiquitous environments has hampered many of these activities.

The technology used to create ubicomp systems is often on the cutting edge; creating reliable and robust systems that support some activity on a continuous basis is difficult. Consequently, a good portion of reported ubicomp applications work remains at the level of demonstrational prototypes that are not designed to be robust. Deeper empirical evaluation results cannot be obtained through controlled studies in a traditional, contained usability laboratory. Rather, the requirement is for real use of a system, deployed in an authentic setting.

Many researchers are seeking to roll out ubiquitous devices into a range of settings, such as museums, outdoor city centers, and the home. These researchers are creating “living laboratories” for ubicomp research by building testbeds that support advanced research and development as well as use by a targeted user community. By pushing for the deployment of more living laboratories for ubicomp research, the science and practice of HCI evaluation will mature. It is our hope that IEEE Pervasive Computing will become a forum for the dissemination of information about other living laboratories in the coming years.

As Weiser said,

The most profound technologies are those that disappear.1

And the most prophetic visions are those that are succinct.

REFERENCES

1. M. Weiser, “The Computer for the 21st Century,” Scientific American, vol. 265, no. 3, Sept. 1991, pp. 66–75 (reprinted in this issue, see pp. 19–25).

2. M. Weiser, “The World Is Not a Desktop,” ACM Interactions, vol. 1, no. 1, Jan. 1994, pp. 7–8.

3. D. Hindus and C. Schmandt, “Ubiquitous Audio: Capturing Spontaneous Collaboration,” Proc. ACM Conf. Computer-Supported Cooperative Work (CSCW 92), ACM Press, New York, 1992, pp. 210–217.

4. M. Back et al., “Listen Reader: An Electronically Augmented Paper-Based Book,” Proc. 2001 ACM Conf. Human Factors in Computing Systems (CHI 2001), ACM Press, New York, 2001, pp. 23–29.

5. K. Hinckley et al., “Sensing Techniques for Mobile Interaction,” Proc. 13th Ann. ACM Symp. User Interface Software and Technology (UIST 2000), ACM Press, New York, 2000, pp. 91–100.

6. G. Humphreys and P. Hanrahan, “A Distributed Graphics System for Large Tiled Displays,” Proc. IEEE Visualization, IEEE Press, Piscataway, N.J., 1999.

7. Y. Chen et al., “Early Experiences and Challenges in Building and Using a Scalable Display Wall System,” IEEE Computer Graphics & Applications, vol. 20, no. 4, July/Aug. 2000, pp. 671–680.

8. J. Rekimoto, “Pick-and-Drop: A Direct Manipulation Technique for Multiple Computer Environments,” Proc. ACM Symp. User Interface Software and Technology (UIST 97), ACM Press, New York, 1997, pp. 31–39.

9. A. Fox et al., “Integrating Information Appliances into an Interactive Workspace,” IEEE Computer Graphics & Applications, vol. 20, no. 3, May/June 2000, pp. 54–65.

10. M. Weiser and J.S. Brown, “Designing Calm Technology,” PowerGrid J., vol. 1.01, July 1996; www.ubiq.com/hypertext/weiser/calmtech/calmtech.htm.

11. H. Ishii et al., “ambientROOM: Integrating Ambient Media with Architectural Space,” Companion Proc. 1998 ACM Conf. Human Factors in Computing Systems (CHI 98), ACM Press, New York, 1998, pp. 173–174.

12. E.D. Mynatt et al., “Designing Audio Aura,” Proc. 1998 ACM Conf. Human Factors in Computing Systems (CHI 98), ACM Press, New York, 1998, pp. 566–573.

13. J. Fogarty, J. Forlizzi, and S.E. Hudson, “Aesthetic Information Collages: Generating Decorative Displays that Contain Information,” Proc. 14th Ann. ACM Symp. User Interface Software and Technology (UIST 2001), ACM Press, New York, 2001, pp. 141–150.

14. S. Greenberg and C. Fitchett, “Phidgets: Easy Development of Physical Interfaces through Physical Widgets,” Proc. 14th Ann. ACM Symp. User Interface Software and Technology (UIST 2001), ACM Press, New York, 2001, pp. 209–218.

15. S. Feiner, B. MacIntyre, and D. Seligmann, “Knowledge-Based Augmented Reality,” Comm. ACM, vol. 36, no. 7, July 1993, pp. 53–62.

16. J. Rekimoto and K. Nagao, “The World through the Computer: Computer Augmented Interaction with Real World Environments,” Proc. ACM Symp. User Interface Software and Technology (UIST 95), ACM Press, New York, 1995, pp. 29–36.

17. P. Wellner, “Interacting with Paper on the DigitalDesk,” Comm. ACM, vol. 36, no. 7, July 1993, pp. 87–96.

18. G.W. Fitzmaurice, H. Ishii, and W. Buxton, “Bricks: Laying the Foundations for Graspable User Interfaces,” Proc. 1995 ACM Conf. Human Factors in Computing Systems (CHI 95), ACM Press, New York, 1995, pp. 442–449.

19. H. Ishii and B. Ullmer, “Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms,” Proc. ACM Conf. Human Factors in Computing Systems (CHI 97), ACM Press, New York, 1997, pp. 234–241.

20. B.L. Harrison et al., “Squeeze Me, Hold Me, Tilt Me! An Exploration of Manipulative User Interfaces,” Proc. 1998 ACM Conf. Human Factors in Computing Systems (CHI 98), ACM Press, New York, 1998, pp. 17–24.

21. M. Weiser, “Some Computer Science Issues in Ubiquitous Computing,” Comm. ACM, vol. 36, no. 7, July 1993, pp. 75–84.

22. R. Want et al., “The Active Badge Location System,” ACM Trans. Information Systems, vol. 10, no. 1, 1992, pp. 91–102.

23. R. Want et al., “The PARCTab Ubiquitous Computing Experiment,” tech. report CSL-95-1, Xerox Palo Alto Research Center, 1995.

24. G.D. Abowd et al., “Cyberguide: A Mobile Context-Aware Tour Guide,” ACM Wireless Networks, vol. 3, 1997, pp. 421–433.

25. K. Cheverst, K. Mitchell, and N. Davies, “Design of an Object Model for a Context-Sensitive Tourist Guide,” Proc. Interactive Applications of Mobile Computing (IMC 98), Neuer Hochschulschriftverlag, 1998; www.egd.igd.fhg.de/~imc98/proceedings.html.

26. V. Bush, “As We May Think,” Atlantic Monthly, vol. 176, no. 1, July 1945, pp. 101–108.

27. K.N. Truong, G.D. Abowd, and J.A. Brotherton, “Who, What, When, Where, How: Design Issues of Capture and Access Applications,” Ubicomp 2001: Ubiquitous Computing, Lecture Notes in Computer Science, vol. 2201, Springer-Verlag, New York, 2001, pp. 209–224.

28. S. Card, T. Moran, and A. Newell, The Psychology of Human-Computer Interaction, Lawrence Erlbaum, Mahwah, N.J., 1983.

29. E. Hutchins, Cognition in the Wild, MIT Press, Cambridge, Mass., 1995.

30. D.A. Norman, The Design of Everyday Things, Doubleday, New York, 1990.

31. L. Vygotsky, “The Instrumental Method in Psychology,” The Concept of Activity in Soviet Psychology, J. Wertsch, ed., Sharpe, Armonk, N.Y., 1981.

32. B. Nardi, ed., Context and Consciousness: Activity Theory and Human-Computer Interaction, MIT Press, Cambridge, Mass., 1996.

33. L. Suchman, Plans and Situated Actions: The Problem of Human-Machine Communication, Cambridge Univ. Press, Cambridge, UK, 1987.

34. D. Hindus et al., “Casablanca: Designing Social Communications Devices for the Home,” Proc. 2001 ACM Conf. Human Factors in Computing Systems (CHI 2001), ACM Press, New York, 2001, pp. 325–332.

35. E.D. Mynatt et al., “Digital Family Portraits: Providing Peace of Mind for Extended Family Members,” Proc. 2001 ACM Conf. Human Factors in Computing Systems (CHI 2001), ACM Press, New York, 2001, pp. 333–340.

36. W. Gaver, A. Dunne, and E. Pacenti, “Design: Cultural Probes,” ACM Interactions, vol. 6, no. 1, 1999, pp. 21–29.

37. W. Gaver and A. Dunne, “Projected Realities: Conceptual Design for Cultural Effect,” Proc. 1999 ACM Conf. Human Factors in Computing Systems (CHI 99), ACM Press, New York, 1999, pp. 600–607.

38. Philips Corporate Design, Vision of the Future, 1996; www.design.philips.com/vof.

For more information on this or any other computing topic, please visit our Digital Library at http://computer.org/publications/dlib.


THE AUTHORS

Gregory D. Abowd is an associate professor in the College of Computing and the GVU Center at the Georgia Institute of Technology. His research interests involve applications research in ubiquitous computing, concerning both HCI and software engineering issues. He founded the Future Computing Environments Group at Georgia Tech and is currently director of the Aware Home Research Initiative. He received a DPhil in computation from the University of Oxford, where he attended as a Rhodes Scholar. He is a member of the IEEE Computer Society and ACM. Contact him at [email protected].

Elizabeth D. Mynatt is an associate professor in the College of Computing and the GVU Center at the Georgia Institute of Technology. She directs the research program in Everyday Computing within the Future Computing Environments Group, examining the implications of having computation continuously present in many aspects of everyday life. She previously worked for three years at Xerox Parc with Mark Weiser, where her research explored how to augment everyday places and objects with computational capabilities. Contact her at [email protected].

Tom Rodden is a professor of computing at the University of Nottingham. He currently codirects the Equator Interdisciplinary Research Collaboration, a multidisciplinary research program exploring ubiquitous computing across a wide range of application domains (see www.equator.ac.uk). His research interests are in computer-supported cooperative work and the design of interactive systems that are sensitive to the needs of many users. Most recently, this has involved exploring how to combine physical and digital interaction to develop ubiquitous computing environments. Contact him at [email protected].