

IEEE Wireless Communications • December 2002 • 1070-9916/02/$17.00 © 2002 IEEE


SMART HOMES

JAN BORCHERS, MEREDITH RINGEL, JOSHUA TYLER, AND ARMANDO FOX
STANFORD UNIVERSITY

OVERVIEW

Most smart homes are created evolutionarily by adding more and more technologies to an existing home, rather than being developed on a single occasion by building a new home from scratch. This incremental addition of technology requires a highly flexible infrastructure to accommodate both future extensions and legacy systems without requiring extensive rewiring of hardware or extensive reconfiguration on the software level. Stanford's iStuff (Interactive Stuff) provides an example of a hardware interface abstraction technique that enables quick customization and reconfiguration of Smart Home solutions. iStuff gains its power from its combination with the Stanford Interactive Room Operating System (iROS), which creates a flexible and robust software framework that allows custom and legacy applications to communicate with each other and with user interface devices in a dynamically configurable way.

The Stanford Interactive Room (iRoom, Fig. 1), while not a residential environment, has many characteristics of a smart home: a wide array of advanced user interface technologies, abundant computation power, and infrastructure with which to coordinate the use of these resources (for more information on the iRoom or the Interactive Workspaces project, please visit http://iwork.stanford.edu). As a result, many aspects of the iRoom environment have strong implications for, and can be intuitively translated to, smart homes. In particular, the rapid and fluid development of physical user interfaces using iStuff and the iROS, which has been demonstrated in the iRoom, is an equally powerful concept for designing and living in smart homes.

Before focusing on the details of iStuff, we describe the software infrastructure on which it is based and the considerations that went into designing this infrastructure.

STANFORD INTERACTIVE WORKSPACES: A FRAMEWORK FOR PHYSICAL AND GRAPHICAL USER INTERFACE PROTOTYPING

IROS: APPLICATION COORDINATION IN UBIQUITOUS COMPUTING ENVIRONMENTS

SOFTWARE REQUIREMENTS FOR RAPID INTEGRATION AND EVOLUTION

The ability to continually integrate new technologies and handle failures in a noncatastrophic manner is essential to smart homes and related ubiquitous computing environments. Our experience working in the Stanford iRoom enables us to identify four important requirements for a software infrastructure in a ubiquitous computing environment.

Heterogeneity: The software infrastructure must accommodate a tremendous variety of devices with widely ranging capabilities. This implies that it should be lightweight and make few assumptions about client devices so that the effort to "port" any necessary software components to new devices will be small.

Robustness: The software system as a whole must be robust against transient or partial failures of particular components. Failures should not cascade, and failure or unexpected behavior of one component should not be able to infect the rest of the working system.

Evolvability: The application program interface (API) provided must be sufficiently flexible to maintain forward and backward compatibility as technology evolves. For example, it should be possible to integrate a new type of pointing device that provides higher resolution or additional features not found in older devices, without breaking compatibility with those older devices or existing applications.

Compatibility: It should be easy to leverage legacy applications and technologies as building blocks. For example, Web technologies have been used for user interface (UI) prototyping, accessing remote applications, and bringing rich content to small devices; desktop productivity



applications such as Microsoft PowerPoint™ contain many elements of a "rich content display server," and so on. Furthermore, since technology in smart spaces tends to accrete over time, today's new hardware and software will rapidly become tomorrow's legacy hardware and software, so this problem will not go away.

Our prototype meta-operating system, iROS (Interactive Room Operating System), meets the above criteria. We call it a meta-OS since it consists entirely of user-level code running on unmodified commodity operating systems, connecting the various iRoom entities into a "system of systems." We discuss the main principles of iROS here to give the reader an understanding of how it facilitates building new behaviors using iStuff.

IROS AND APPLICATION COORDINATION

We will frame our discussion in the context of the Stanford iRoom, a prototype environment we constructed that we believe is representative of an important class of ubiquitous computing installations. The iRoom is intended to be a dedicated technology-enhanced space where people come together for collaborative problem solving (meetings, design reviews, brainstorming, etc.), and the applications we prototyped and deployed were driven by such scenarios.

The basis of iROS is application coordination. In the original formulation of Gelernter and Carriero [1], coordination languages express the interaction between autonomous processes, and computation languages express how the calculations of those processes proceed. For example, procedure calls are a special case in which the caller process suspends itself pending a response from the callee. Gelernter and Carriero argue that computation and coordination are orthogonal and that there are benefits to expressing coordination in a separate general-purpose coordination language; our problem constraint of integrating existing diverse components across heterogeneous platforms leads directly to separating computation (the existing applications themselves) from coordination (how their behaviors can be linked).

In iROS, the coordination layer is called the Event Heap [2]. The name was chosen to reflect that its functionality could be viewed as analogous to the traditional event queue in single-computer operating systems. The Event Heap is an enhanced version of a tuplespace, one of the general-purpose coordination languages identified by Gelernter and Carriero. A tuple is a collection of ordered fields; a tuplespace is a "blackboard" visible to all participants in a particular scope (in our case, all software entities in the iRoom), in which any entity can post a tuple and any entity can retrieve or subscribe for notification of new tuples matching a wildcard-based matching template. We have identified important advantages of this coordination approach over using rendezvous and RMI (as Jini does) or simple client-server techniques (as has been done using HTTP, Tcl/Tk [3], and other approaches); these advantages include improved robustness due to decoupling of communicating entities, rapid integration of new platforms due to the extremely lightweight client API (we support all major programming languages, including HTML, for posting and retrieving tuples), and the ability to accommodate legacy applications (simple "hooks" written in Visual Basic or Java can be used to connect existing productivity, Web, and desktop applications to the iRoom). The only criterion for making a new device or application "iRoom-aware" is its ability to post and/or subscribe to tuples in the Event Heap; since we can create Web pages that do this, any device that enters the room running a Web browser is already minimally iRoom-aware.
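To make the tuplespace model concrete, the following sketch implements a minimal tuplespace with wildcard-based template matching. It illustrates the coordination style described above; it is not the actual Event Heap client API, and all names (`TupleSpace`, `post`, `retrieve`, `WILDCARD`) are hypothetical.

```python
# Minimal tuplespace sketch in the spirit of the Event Heap.
# Hypothetical names; not the iROS client API.

WILDCARD = object()  # matches any field value in a template

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def post(self, tup):
        """Any entity can deposit a tuple on the shared 'blackboard'."""
        self.tuples.append(tup)

    def _matches(self, template, tup):
        if len(template) != len(tup):
            return False
        return all(t is WILDCARD or t == f for t, f in zip(template, tup))

    def retrieve(self, template):
        """Return and remove the first tuple matching the template,
        or None if nothing matches."""
        for i, tup in enumerate(self.tuples):
            if self._matches(template, tup):
                return self.tuples.pop(i)
        return None

space = TupleSpace()
space.post(("ButtonEvent", "iButton-3", "pressed"))
# A consumer that cares only about the event type wildcards the rest:
event = space.retrieve(("ButtonEvent", WILDCARD, WILDCARD))
```

The key property is that poster and retriever never name each other: they rendezvous only on tuple contents, which is what decouples senders from receivers in the real system.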

ON-THE-FLY USER INTERFACE GENERATION IN IROS

The Event Heap is the core of iROS, but we have also built other iROS services that provide higher-level functionality. Most notably, the Interface Crafter (iCrafter) framework [4] can generate UIs dynamically for virtually any iRoom entity and on virtually any iRoom-aware device. Although it extends previous work in several important ways, including integration of service discovery with robustness and the ability to create UIs ranging from fully custom to fully automatic, its main role in the present scenarios is to serve as an abstraction layer between devices and UIs. Briefly, iCrafter is used as follows:

• Controllable entities beacon their presence by depositing self-expiring advertisements in the Event Heap. These advertisements contain a description of the service's controllable behaviors (i.e., methods and their parameters) expressed in SDL, a simple XML-based markup language we developed.

• A device capable of displaying a UI (Web browser, handheld, etc.) can make a request for the UI of a specific service, or can query the iCrafter's Interface Manager (which tracks all advertisements) to request a list of available services. This initial request is made via whatever request-response technology is available on the client: visiting a well-known dynamically generated Web page is one possibility.

Figure 1. The Stanford iRoom contains a wireless GyroMouse and keyboard (visible on the table), three touch-sensitive SmartBOARDs and one non-touch-sensitive tabletop display, and a custom-built OpenGL hi-res graphic mural. The room is networked using IEEE 802.11b wireless Ethernet. Except for the hi-res mural and the tabletop, all hardware is off-the-shelf, all operating systems are unmodified Windows (various flavors) or Linux, and all software we have written is user-level.

• The desired UI is created by feeding the SDL contained in a recent service advertisement to one or more interface generators. These may be local or remote (i.e., downloaded on demand over the Web), and may be specialized per service and/or per device. The Interface Manager determines the policy for selecting a generator. Part of this process includes integrating contextual information from a separate context database relevant to each workspace, making a "static" UI description portable across installations. For example, in a workspace such as ours with three large wall-mounted displays, it is preferable for a UI to refer to these as "Left, Center, Right" than to use generic names such as "screen0, screen1, screen2" (Fig. 2).
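The pattern in the steps above (self-expiring advertisements plus per-device interface generators) can be sketched as follows. This is a hypothetical illustration of the idea, not the iCrafter implementation; the class and method names, and the use of a plain dictionary in place of SDL, are all assumptions made for brevity.

```python
# Sketch of the iCrafter pattern: services beacon self-expiring
# advertisements, and a manager picks a generator suited to the
# requesting device. All names are illustrative, not iCrafter's API.
import time

class InterfaceManager:
    def __init__(self):
        self.ads = {}          # service name -> (expiry time, description)
        self.generators = {}   # device type -> generator function

    def advertise(self, service, sdl, ttl=60.0):
        """Record a self-expiring advertisement (a stand-in for the
        SDL description deposited in the Event Heap)."""
        self.ads[service] = (time.time() + ttl, sdl)

    def services(self):
        """Only unexpired advertisements are visible to clients."""
        now = time.time()
        return [s for s, (exp, _) in self.ads.items() if exp > now]

    def register_generator(self, device_type, generator):
        self.generators[device_type] = generator

    def make_ui(self, service, device_type):
        _, sdl = self.ads[service]
        return self.generators[device_type](sdl)

mgr = InterfaceManager()
mgr.advertise("projector", {"methods": ["on", "off"]})
# One service description, two very different UIs:
mgr.register_generator("html", lambda sdl: "".join(
    f"<button>{m}</button>" for m in sdl["methods"]))
mgr.register_generator("text", lambda sdl: " / ".join(sdl["methods"]))
ui = mgr.make_ui("projector", "text")
```

The point of the indirection is visible at the end: the same description yields an HTML UI or a plain-text UI depending only on which generator the manager selects.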

Note that in the last step the client device and service do not need to establish a direct connection (client-server style). This makes each robust to the failure of the other. They do not even need to be able to name each other using lower-level names such as network addresses, because the tuple-matching mechanism can be based on application-level names or attributes of the service ("retrieve advertisements for all devices of type LightSwitch"). The same service can be controlled from a variety of different devices without knowing in advance what types of devices are involved, since the same SDL description can be processed into quite different UIs by different interface generators.

The ability to insulate the service and UI from each other in these ways has been critical to the rapid prototyping of new UI behaviors. iStuff builds on this ability, using this indirection to enable rapid prototyping of physical UIs as well.

ISTUFF: PHYSICAL DEVICES FOR UBIQUITOUS COMPUTING

ISTUFF MOTIVATION AND DEFINITION

iStuff is a toolbox of wireless, platform-independent physical UI components designed to leverage the iROS infrastructure (which allows our custom-designed physical devices to send and receive arbitrary commands to and from other devices, machines, and applications in the iRoom). The capability to connect a physical actuator to a software service on the fly has appeal to users of a ubiquitous computing environment such as a smart home. Residents would have the ability to flexibly set up sensors and actuators in their home, and designers of such homes would be able to prototype and test various configurations of their technology before final installation.

There are several characteristics that are crucial for our iStuff:
• Completely autonomous packaging, wireless connection to the rest of the room, and battery-powered operation
• Seamless integration of the devices with iROS as an existing, available, cross-platform ubiquitous computing environment to let devices, machines, and services talk to each other and pass information and control around
• Easy configuration of mappings between devices and their application functionality, by customizing application source code, or even just updating function mappings using a Web interface
• Simple and affordable circuitry

Various other research projects have looked at physical devices in the past; Ishii's Tangible Bits project [6] introduced the notion of bridging the world between bits and atoms in UIs, and more recently Greenberg's Phidgets [7] represent an advanced and novel project in physical widgets. Phidgets, however, are designed for use in isolation with a single computer, are tethered, and do not work across multiple platforms.

IMPLEMENTATION DETAILS

Transmitting devices (buttons, sliders) contain a Ming TX-99 V3.0 300 MHz FM radio frequency (RF) transmitter and a Holtek HT-640 encoder to send 8 bits of data to a receiver board, which contains a Ming RE-99 V3.0 RF receiver and a Holtek HT-648L decoder. The receiver board sends its data to a PC using either the parallel or USB port, and a listener program running on the PC then posts an appropriate tuple (based on the ID received) to the iRoom's Event Heap. Receiving devices (buzzers, LEDs) work in the opposite manner: a listener program receives an event intended for the iStuff and sends the target device ID through either the parallel or USB port to an RF transmitter. This data is then received wirelessly by an RF receiver in the device, resulting in the desired behavior. The iSpeaker has a different architecture, since the RF technology we employed is not sufficient for handling streaming media. Instead, a listener program on the PC waits for speaker-targeted events, and in response streams sound files over an FM transmitter, which our iSpeaker (a small portable FM radio) then broadcasts.

Figure 2. Screen/projector control UIs customized for various devices and incorporating installation-specific information from the context database: (a) A fragment of a projector HTML interface that can be rendered by any Web browser. This UI was generated by a projector-specific HTML generator. The symbolic names such as iRoom Server are stored in the context database and appear in the SDL markup as "machine0," "machine1," and so on. (b) Room control applet for the same room, generated by a Java Swing-UI generator. The geometry information for drawing the widgets comes from the context database, so the generator itself is not installation-specific. Users can drag and drop Web pages onto the screen widgets to cause those documents to appear on the corresponding room screens. (c) The same UI using different geometry information (for a different room) from the context database. (d) A Palm UI rendered in MoDAL [8] that lacks the drag-and-drop feature.

DEVICE CLASSIFICATION AND IMPLEMENTATION

The range of potentially useful UI components is almost unlimited, and the really useful devices to go in a standard toolbox will only be identified over time. Ideas for such devices can be categorized according to whether they are input or output devices, and the amount of information they handle, as in the following examples:
• One-bit input devices, such as pushbuttons and toggle buttons, or binary sensors such as light gates
• Multiple-bit discrete input devices, such as rotary switches or digital joysticks, as well as packaged complex readers that deliver identification data as a result (e.g., barcode readers or fingerprint scanners)
• Near-continuous input devices, such as sliders, potentiometers, analog joysticks or trackballs, and various sensors (light, heat, force, location, motion, acceleration, etc.)
• Streaming input devices, such as microphones and small cameras
• One-bit output devices, such as a control or status light, beepers/buzzers, solenoids, and power switchers
• Multiple-bit discrete output devices, such as LED arrays or alphanumerical LCD displays
• Near-continuous output devices, such as servos, motors, dials, and dimmers
• Streaming output devices, such as speakers and small screens

Thus far, students in our laboratory have designed and built five types of prototype iStuff devices spanning four of the above categories: iButtons, iSliders, iBuzzers, iLEDs, and iSpeakers (Fig. 3). While our hardware designs have proven surprisingly powerful as proofs of concept, they are simple enough to be reproduced easily (see box).
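The PC-side listener described in the implementation details box is essentially a table-driven translator from received 8-bit device IDs to Event Heap tuples. A minimal sketch of that translation step follows; the ID values, event names, and function names are invented for illustration and do not reproduce the actual listener code.

```python
# Sketch of the listener's translation step: received 8-bit device IDs
# become tuples to post to the Event Heap. IDs and event names are
# hypothetical examples, not the real iStuff ID assignments.
ID_TO_EVENT = {
    0x01: ("ButtonEvent", "iButton-1", "pressed"),
    0x02: ("SliderEvent", "iSlider-1", "moved"),
}

def translate(raw_ids):
    """Turn a stream of IDs read from the receiver board into Event
    Heap tuples, silently skipping IDs with no registered mapping."""
    return [ID_TO_EVENT[b] for b in raw_ids if b in ID_TO_EVENT]

# Two known devices transmit; one unknown ID (0xFF) is ignored:
events = translate([0x01, 0xFF, 0x02])
```

Receiving devices invert the table: a listener matches events from the heap and writes the corresponding device ID out to the RF transmitter.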

We have developed several successful setups using iStuff in the Stanford iRoom:

• New users coming into our iRoom are not familiar with the environment, and need an "entry point" to learn about the room and its features. Using our iStuff configuration Web interface, we programmed one iButton to:
  – Send events that turn on all the lights in the room
  – Switch on all SMARTBoards (large touch-sensitive displays) and our interactive table display
  – Bring up a Web-based introduction to the room on one SMARTBoard
  – Show an overview of document directories for the various user groups on a second SMARTBoard
  – Open up our brainstorming and sketching application on the third SMARTBoard
It is worth noting that setting up this iRoom Starter took less than 15 minutes of configuration using the Web interface.

• SMARTBoards provide only a rather inconvenient way to issue right clicks when using the touch-sensitive board for input: users have to press a right-click "mode key" on the tray in front of the board to have their next touch at the board be interpreted as a right click. To study whether having the right-click modifier closer to the actual input location at the board would make this interaction more fluid, we built a specialized iButton that was shaped to fit inside a hollow pen prototype made from RenShape plastic by a product design student in our model shop. When the button is pressed it sends an event to the Event Heap that is then received by a listener application running on the computer associated with the SMARTBoard. The listener then tells the SMARTBoard driver to interpret the next tap as a right click. Users can now simply press the button on the pen and then tap to issue a right click.

• We found our iSlider could conveniently control the paddles for our multiscreen version of the classic video game Pong, described below.

• The iSpeaker has been extended to provide verbal feedback for user actions (e.g., "Pong game started") by means of a text-to-speech program: applications simply send a SpeakText event to the iSpeaker containing the ASCII text to be spoken.

Figure 3. The various types of iStuff created so far: buttons, potentiometers, speakers, and buzzers.



• We are experimenting with our iLEDs and iBuzzers to provide feedback about the status of devices in the room.

As discussed before, the Event Heap is a core component of the iROS that makes it possible to decouple the sender and receiver from the contents of the message itself (Fig. 4). This architecture allows great flexibility for the prototyping of interfaces; for instance, an application can be controlled by either a traditional mouse, a graphical slider widget, or an iSlider, as long as each of those devices sends the event type (perhaps an event containing a new Y-coordinate) for which the application is listening.

In our iRoom we have demonstrated the utility of the combination of Event Heap software with iStuff hardware by developing iPong, a multimachine version of the video game where players control the vertical position of virtual paddles to make contact with a virtual bouncing ball. The game was written to listen for PaddleEvents, which contain information about the new position of the target paddle. Any input method that can generate a PaddleEvent can control the paddle position. We have mapped the standard mouse, touch panel input, and an iSlider (a sliding-potentiometer iStuff widget) to drive the paddle. To the application, the physical source of the events is irrelevant. Thus, we have decoupled the link between hardware and software components in a physical UI.
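The decoupling just described can be sketched in a few lines: each input source normalizes its native reading into the one event shape the game understands. The event shapes and function names below are hypothetical illustrations, not the actual iPong types.

```python
# Sketch of iPong's decoupling: the game only checks that an event is a
# PaddleEvent carrying a new Y position; which device produced it is
# irrelevant. Event shapes are illustrative, not the real iROS types.
def mouse_to_paddle(y_pixels):
    """Mouse driver: normalize a pixel coordinate (assuming a
    hypothetical 480-pixel-tall play field) to the range 0..1."""
    return ("PaddleEvent", {"y": y_pixels / 480.0})

def islider_to_paddle(pot_value):
    """iSlider driver: normalize an 8-bit potentiometer reading."""
    return ("PaddleEvent", {"y": pot_value / 255.0})

class PongPaddle:
    def __init__(self):
        self.y = 0.5
    def handle(self, event):
        kind, fields = event
        if kind == "PaddleEvent":   # the only thing the game checks
            self.y = fields["y"]

paddle = PongPaddle()
paddle.handle(mouse_to_paddle(240))     # the mouse drives the paddle...
paddle.handle(islider_to_paddle(255))   # ...and so does the iSlider
```

Adding a new input device means writing one more normalizing function; the game itself never changes.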

Our iButtons are already reconfigurable dynamically via a Web interface that lets users enter arbitrary events to send when a specific button is pressed. We intend to provide this flexible interactive mechanism for mapping applications and events for all iStuff, using the on-the-fly service discovery tools of iROS (described earlier). The result will be a general virtual Patch Panel that allows even end users to map events to services and map conversions between related event types. Thus, iStuff makers can send and receive their own types of events (e.g., button events or slider events) without concern for the exact names of events desired by end-user applications, and application developers can send and receive their own types of events (e.g., PaddleEvent) without prior knowledge of every possible type of device the user might choose to interface with their application.

The iStuff/Event Heap combination has direct applications to the Smart Home that incrementally acquires new technologies. When residents acquire a new device or wish to reconfigure existing devices, they can simply use a utility such as our Patch Panel to map the event type sent by the new device to the event type expected by the target application.
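The essence of such a Patch Panel is a table of event-type conversions applied to events in flight. The sketch below illustrates that idea only; the class, its methods, and the DimmerEvent example are hypothetical, not the Patch Panel's actual interface.

```python
# Illustrative Patch Panel sketch: users register conversions between
# related event types, so a device's native events can be rewritten into
# the events an application expects. All names are hypothetical.
class PatchPanel:
    def __init__(self):
        self.rules = {}   # source event type -> conversion function

    def map(self, src_type, convert):
        """Register a conversion for events of the given type."""
        self.rules[src_type] = convert

    def route(self, event):
        """Rewrite a mapped event; pass unmapped events through unchanged."""
        kind, fields = event
        if kind in self.rules:
            return self.rules[kind](fields)
        return event

panel = PatchPanel()
# A newly acquired iSlider should drive an application that expects
# DimmerEvents (an invented example event type):
panel.map("SliderEvent", lambda f: ("DimmerEvent", {"level": f["value"]}))
converted = panel.route(("SliderEvent", {"value": 0.7}))
```

Because the conversion lives in the panel rather than in either endpoint, neither the device firmware nor the application needs to change when the mapping does.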

Figure 4. The overall system architecture for iStuff. In darker boxes are the actual physical devices, and in the lighter ones are a couple of examples of applications using iStuff and the iROS Event Heap. An iStuff server translates the wireless device transmissions into software events, which can be consumed by interested applications. For example, when the potentiometer of the iSlider is moved, it sends a radio signal, which is received by the server and turned into a SliderEvent. The event is posted to the Event Heap, and subsequently received by the iPong application, which is listening for SliderEvents.

SMART HOME APPLICATIONS

While our iStuff was originally designed with our iRoom (a space used for meetings and brainstorming/design sessions) in mind, our technology and infrastructure could be useful in a smart home environment. In particular, the ability to create task-oriented user interfaces (interfaces reflecting the user's task as opposed to the technical features of an appliance) makes iStuff particularly compelling for Smart Home applications.

Dynamic, task-based remote controls: Currently, when a user wants to watch a movie on a DVD, they need several remote controls: one to control the DVD player, another to control their home's surround-sound system, and a third to control the television set (and then the user has to get up to dim the lights!). Today's remotes are device-based, but because the Event Heap architecture allows for the decoupling of devices from messages, we are able to use iStuff to construct task-based remote controls. By gathering appropriate iStuff components and using the Patch Panel application to ensure that the appropriate iStuff events are converted to the events appropriate to the target devices (DVD player, speakers, TV set, lights), the user can construct a task-oriented controller: one device that controls all appliances relevant to viewing a DVD movie, regardless of their physical connectivity. iCrafter could be used in an analogous manner to dynamically create GUI controllers for household appliances, thus transforming a PDA into a task-based universal remote control.

Monitoring house state: A user is on her way out the door of her smart home, about to head off to work. The display near her door shows her the status of several devices in her home that have been instrumented with iSensors: did she leave the stove on? The lights in her bedroom? Is the thermostat too high? Is the burglar alarm on?

Setting house state: A user can create an iButton or similar device to set the house's "state" as she leaves for work every day, and mount this button by her door. She might configure it to lower her thermostat, switch off all lights, and activate her security system, for example. This type of button is analogous to our iRoom Starter button mentioned earlier.

Smart home design: Architects and interior designers could use iStuff to fine-tune the placement of controls, speakers, and other interactive elements of a Smart Home. Researchers and technology developers could use iStuff to quickly prototype and test their products before putting them on the market for addition to smart homes.

DISCUSSION AND SUMMARY

Technology advancements have made much of the original vision of ubiquitous computing feasible. A software framework, however, that integrates those heterogeneous technologies in a dynamic, robust, and legacy-aware fashion and provides a seamless user experience has been missing. We have created the Stanford iRoom, a physical prototypical space for ubiquitous computing scenarios that has been in constant use for almost two years now, to address this need. iROS, our iRoom Operating System, runs as a meta-OS to coordinate the various applications in the room. iROS is based on a tuplespace model, which leads to the aforementioned desired characteristics. Its failure robustness has been better than average for both induced and real faults. Its ability to leverage and extend existing applications has been critical for rapid prototyping in our research.

The iStuff project builds on iROS, and tackles the problem that customizing or prototyping physical user interfaces for ubiquitous computing scenarios (e.g., smart homes) is still a very arduous process. It offers a toolbox of wireless physical user interface components that can be combined to quickly create nonstandard user interfaces for experimentation. The iROS infrastructure has proven invaluable in making the software integration of these custom devices very straightforward. The flexibility of the technology we developed for Stanford's iRoom has potential benefits in a smart home scenario, for example, by enabling users to quickly create a customized task-based interface to a system in their home.

In all, we hope that our approach to building a software and hardware framework for ubiquitous computing environments, and the various building blocks we have implemented and deployed, are general and useful enough that others will find them of value. For more information on our Stanford Interactive Workspaces project, to access iStuff documentation, or to download iROS software, please visit our project homepage at http://iwork.stanford.edu/.

ACKNOWLEDGMENTS

The authors would like to thank Maureen Stone, Michael Champlin, Hans Anderson, and Jeff Raymakers for their contributions to this work, as well as the Wallenberg Foundation (http://www.wgln.org) for its financial support.


BIOGRAPHIES

JAN BORCHERS ([email protected]) is an acting assistant professor of computer science at Stanford University. He works on human-computer interaction in the Stanford Interactivity Lab, where he studies post-desktop user interfaces, HCI design patterns, and new interaction metaphors for music and other types of multimedia. He holds a Ph.D. in computer science from Darmstadt University, and has been known to turn his research into public interactive exhibits.

MEREDITH RINGEL ([email protected]) is a first-year Ph.D. student in computer science at Stanford, with a focus on human-computer interaction. She received her B.S. in computer science from Brown University.

JOSHUA TYLER ([email protected]) is a second-year Master's student in computer science at Stanford with a specialization in human-computer interaction. He received a B.S. in computer science from Washington University.

ARMANDO FOX ([email protected]) joined the Stanford faculty in January 1999. His research interests include the design of robust Internet-scale software infrastructure, particularly as it relates to the support of mobile and ubiquitous computing, and user interface issues related to mobile and ubiquitous computing. He received a B.S.E.E. from M.I.T., an M.S.E.E. from the University of Illinois, and a Ph.D. from UC Berkeley. He is a founder of ProxiNet, Inc. (now a division of PumaTech), which commercialized thin client mobile computing technology developed at UC Berkeley.
