CONTEXT-AWARE COMPUTING

COLLABORATIVE CONTEXT DETERMINATION TO SUPPORT MOBILE TERMINAL APPLICATIONS

JANI MÄNTYJÄRVI, PERTTI HUUSKONEN, AND JOHAN HIMBERG, NOKIA RESEARCH CENTER

IEEE Wireless Communications • October 2002

ABSTRACT

Mobile devices, together with their users, are constantly moving from one situation to another. To adapt applications to these changing contexts, the devices must have ways to recognize the contexts. There are various sources for context information: sensors, tags, positioning systems, to name a few. The raw signals from these sources are translated into higher-level interpretations of the situation. Unfortunately, such data is often unreliable and constantly changing. We seek to improve the reliability of context recognition through an analogy to human behavior. Where multiple devices are around, they can jointly negotiate on a suitable context and behave accordingly. This approach is becoming particularly attractive with the multitude of personal devices on the market. We present a collaborative context determination scheme, suggest examples of potential applications of such collaborative behavior, and raise issues of context recognition, context communication, and network requirements.


INTRODUCTION

Context awareness has recently become a hot topic in the areas of ubiquitous computing, human-machine interaction, and signal understanding [1]. A central problem in the field is reliably acquiring the necessary context. Meanwhile, mobile communication has become commonplace, in some areas practically ubiquitous. With the number of mobile computing/communication devices (phones, personal digital assistants) today approaching 1 billion, it is becoming very attractive to offer context-related information for users of these devices. In this article, we discuss ways to determine, share, and use context over wireless ad hoc proximity networks.

THE NEED FOR CONTEXT WITH MOBILE DEVICES

Humans routinely use context in communication with other actors (so far humans). For more natural and effective interaction with people, mobile devices should also use context information. When the interaction takes place in physical space, the context gives valuable information to enhance the interaction. For instance, people routinely refer to the surrounding space in communication (“You will find the restaurant by the river four blocks down the road”), which enables concise descriptive sentences. Without such context information, interaction with mobile devices will feel unnatural for users who are moving about in physical space.

Mobile devices should be made more aware of their surroundings than their desktop counterparts. To become successful sensors and actors that share situations with humans, the devices must develop some sense of contexts. For mobile devices this is a hard task, since the environment is constantly changing. With stationary devices, more static configuration descriptions may suffice.

SOURCES FOR CONTEXT INFORMATION

Various sources for context information have been reported: sensors in the mobile devices and in the environment, tags and beacons, positioning systems, auditory scene recognition, image understanding, and biosensors on users, to name a few. Often the obtained data is noisy, ambiguous, and hard to interpret. Static context clues, emitted by stationary tags, can become out of date when the environment changes. In some situations, more precise context data sources can be used, such as online calendars. However, interpreting this data and tying it into the users’ context will require nontrivial reasoning for most applications. It seems that, for higher-level context understanding, machine-readable models of the world are needed [2]. However, it will take some time for progress in knowledge representation to render such models practical.

We propose an easier partial solution: benefit from the social context to provide valuable context clues. People are used to adjusting their behavior depending on their social context. We lower our voice in public libraries, yell madly at soccer matches, and smile politely when photographed. Sometimes the place is a sufficient clue for proper behavior; sometimes it is the behavior of other people that seems to suggest to us how to behave.

Mobile devices could use context information from surrounding ones. Where a number of people (and their devices) are gathered, it is reasonable to assume that many of the devices should be in the same context. In a given setting, different people will want different behavior from their devices, but the basis for context determination could be the same. This collaboration helps context recognition by providing a first approximation for further refinement at each device.

The devices can form a network where nodes adapt to their neighbors. This requires some means of proximity networking, such as personal area networks over Bluetooth, or a way to communicate with immediate neighbors over a long-range link. Moreover, a collaboration method is needed for agreeing on the more reliable joint context over the proximity network.

CONTEXT RECOGNITION WITH MOBILE DEVICES

Mobile devices equipped with sensors are able to recognize some aspects of the surrounding context. However, reliable context determination in a single device requires sophisticated algorithms, sometimes involving heavy processing. The more data available from the sensors, the more welcome would be rough initial assumptions of the context.

APPROACHES TO CONTEXT RECOGNITION

To enhance usability of mobile devices, the human-computer interaction research community has experimentally integrated various kinds of sensors into the devices.

This makes possible implicit user interfaces, including gesture-based functions, switching display modes based on device orientation, adapting font sizes according to movements, and adapting sound levels and display brightness according to environment conditions [3–6]. Research efforts have also been carried out on recognizing the activity of a mobile device user [7, 8].

Several studies concern the development of context recognition and extraction systems. Context recognition for wearable computers has been studied by means of wearable cameras, environmental audio signal processing, and hidden Markov models [9]. Context recognition from multidimensional sensor signals can be carried out, for example, by combining the benefits of neural networks and Markov models [3, 10].

The information from multiple sensors can be processed into multidimensional context vectors, from which relevant context information can be extracted and compressed by using statistical methods [11]. Time series segmentation can be used to extract higher-level descriptions of contexts [12].
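As an illustration of the kind of statistical compression these references describe, the following sketch (ours, not the authors' code) projects a set of multidimensional context vectors onto a few principal components with plain NumPy; the vectors and dimensions are invented for the example.

```python
import numpy as np

# Toy set of 6-dimensional context vectors (rows = time instants).
# The values here are made up purely for illustration.
X = np.array([
    [0.0, 0.4, 0.6, 0.0, 0.3, 0.7],
    [0.0, 0.5, 0.5, 0.0, 0.4, 0.6],
    [0.0, 0.7, 0.3, 0.1, 0.9, 0.1],
    [0.0, 0.6, 0.4, 0.0, 1.0, 0.0],
])

# Centre the data and take the SVD; the right singular vectors are the
# principal directions, so keeping the first k of them compresses each
# context vector to k coefficients.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
compressed = Xc @ Vt[:k].T                      # k numbers per time instant
reconstructed = compressed @ Vt[:k] + X.mean(axis=0)

print("compressed shape:", compressed.shape)
print("max reconstruction error:", np.abs(reconstructed - X).max())
```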

A CONTEXT RECOGNITION FRAMEWORK

For our purposes, we define context to be the information available for mobile devices that characterizes the situation of the devices, their users, the immediate environment, and the mobile communication networks. The definition is adapted from Dey [13].

The first task in any context recognition process is feature extraction. In pattern recognition, a compact set of expressive features is needed. We pay special attention to abstracting the data in such a way that the features describe some common notions of the real world, and to compressing the information using various signal processing methods that produce features describing the information more compactly. For feature extraction and recognition we have applied interpretation of user gestures, frequency domain analysis, and statistical methods. We call the extracted features context atoms. Each atom describes the action of the user and/or the state of the environment.

We apply fuzzy set quantization to the features. The primary motivation for using fuzzy sets is that many events in the world are not easily represented with crisp logic. For example, shifting from the action of walking at normal speed to walking fast is fuzzy, and a two-valued expression resulting from setting crisp limits is insufficient to describe the meaning of the action. By fuzzification, the representation of the speed of walking is more expressive, since the action can be something between the two values.
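To make the fuzzification step concrete, here is a minimal sketch, not the authors' implementation, that maps a raw walking-speed feature to graded "walking" and "walking fast" atoms with triangular membership functions; the speed breakpoints are invented for illustration.

```python
def triangular(x, left, peak, right):
    """Triangular fuzzy membership function returning a degree in [0, 1]."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzify_speed(speed_mps):
    """Map a walking-speed feature to graded context atoms.

    The breakpoints are arbitrary examples, not values from the article.
    """
    return {
        "walking":      triangular(speed_mps, 0.5, 1.25, 2.0),
        "walking fast": triangular(speed_mps, 1.5, 2.25, 3.0),
    }

# A speed between the two prototypes belongs partly to both atoms,
# instead of being forced into a single crisp class.
print(fuzzify_speed(1.8))   # roughly {'walking': 0.27, 'walking fast': 0.4}
```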

We define the context atom as a continuous variable from 0 to 1. We use a data structure that simply presents the values of the context atoms in a prespecified order at each discrete time instant. The order of context atoms is supposed to be the same in all devices. An example of context atoms from a user scenario is presented in Fig. 1. In the scenario the user is walking from inside to outside. Values of context atoms in two context atom vectors, one from inside (t = 22 s) and one from outside (t = 32 s), are tabulated in Fig. 1. There are certain context atoms whose value changes notably while coming from inside to outside. Descriptions for these context atoms are highlighted using boldface.
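The data structure itself is simple: an agreed ordering of atom names plus one value in [0, 1] per atom at each time instant. A sketch in Python, using the atom order and the two vectors tabulated in Fig. 1:

```python
# Prespecified atom order, shared by all devices (from Fig. 1).
ATOM_ORDER = [
    "running", "walking fast", "walking", "loud", "modest sound", "silent",
    "total darkness", "natural light", "dim light", "normal light",
    "bright light", "at hand", "unstable", "stable", "sideways left",
    "sideways right", "antenna up", "antenna down", "display up",
    "display down",
]

# Context atom vectors from the walking scenario in Fig. 1.
c_t22 = [0.00, 0.39, 0.61, 0.00, 0.35, 0.65, 0.00, 0.00, 0.00, 1.00,
         0.00, 0.00, 1.00, 0.00, 0.00, 0.00, 0.98, 0.00, 0.00, 0.00]  # inside
c_t32 = [0.00, 0.66, 0.34, 0.00, 1.00, 0.00, 0.00, 1.00, 0.00, 0.22,
         0.78, 0.00, 1.00, 0.00, 0.00, 0.00, 0.99, 0.00, 0.00, 0.00]  # outside

# Atoms whose value changes notably when the user walks outside.
changed = [(name, a, b) for name, a, b in zip(ATOM_ORDER, c_t22, c_t32)
           if abs(a - b) > 0.2]
for name, a, b in changed:
    print(f"{name:15s} {a:.2f} -> {b:.2f}")
```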

PROBLEMS WITH RECOGNITION

Various “real world” problems plague sensor-based systems: noise, faulty connections, drift, miscalibration, wear and tear, and humidity, to name a few. Especially in mobile systems, reliable sensor data may be hard to obtain. Even if the acquired data is usable, many signal processing and recognition methods consume significant amounts of memory, energy, and/or CPU time. Still, the obtained results will often be ambiguous.

Contexts are fuzzy, overlapping, and changing with time. When the context atoms describe activities of people, they will be just approximations at best. It seems difficult to devise context representations that can accommodate all these sources of uncertainty.

The environment of mobile devices can be highly dynamic. The recognition algorithms must adapt to the changing environment. For systems with supervised context learning, the meaning of the situation must be labeled. This may require constant effort of the user, becoming a nuisance. On the other hand, unsupervised learning produces context clusters, the meaning of which can be hard to interpret. Thus, context recognition based on learning adds difficulties.

The above difficulties lead us to seek alternatives to heavy recognition methods. We hope to avoid some of the problems by exploiting redundant context data available from neighboring devices. We believe it is possible to significantly enhance context recognition by deriving initial guesses for the current context from the neighbors. This obviously requires ways to communicate the context between the devices.

CONTEXT IN PROXIMITY NETWORKS

Recent advances in short-range communications promise small dynamic wireless networks that are local to a building, room, or even a person. A central property of such proximity networks is that they can be configured ad hoc, often without maintenance staff. They are therefore suited to situations where a number of people get together in an unplanned manner and need to communicate, either explicitly or through hidden automated procedures. Ad hoc proximity networks make collaborative context determination feasible.

AD HOC PROXIMITY NETWORKS

Many technologies exist today for wireless localized communication: wireless LANs, miniature-sized cellular networks, IrDA, Bluetooth, and HomeRF, to mention a few [14]. In addition, wireless RF-ID tags are routinely used in the industry, with both unidirectional and bidirectional links. Ultra-wideband communication holds renewed interest.

Of the existing technologies, Bluetooth promises perhaps the widest deployment, due to the adoption of the standard into mobile phones and accessories. However, improvements in existing solutions are needed before ad hoc personal area networks can become commonplace with smaller devices.

Many short-range communication technologies fall short of some of the necessary requirements for ad hoc proximity operation. Some of the essential properties for proximity networks (as needed for collaborative context determination) are:

Ad hoc network creation. The network topology does not need to be defined in advance. New networks or subnets can be created on the fly, as needed.

Zero administration. The need for human intervention in network setup must be minimal.

Aggressive mobility. The networks will follow the mobile devices (which, in turn, often follow people). The handovers between networks and reconfiguration must be (almost) invisible to users.

Quick setup. A passenger in a crowded Tokyo subway passes hundreds of people within a few seconds. To exchange data during these encounters, network setup should happen on a subsecond scale. With mobile vehicles, the requirements will be even tighter.

Invisibility. For most applications, users must not be burdened with the beehive of activity that occurs in the cloud of proximity networks around them.

In this article we are not proposing a design that fulfills these requirements, but point out that, for collaborative context determination, proximity networks with the mentioned properties must exist. It seems a reasonable expectation that suitable networks would become available within a decade, well before the wide deployment of context awareness technologies.

COLLABORATIVE CONTEXT DETERMINATION

We have developed a method for collaboratively determining the context of a group of mobile devices over wireless ad hoc networks. The main idea is to recognize the need for requesting and locally processing context information distributed over ad hoc networks. The description of the context of a device can be defined as a time-dependent collection of the contexts of other devices within the ad hoc network. The contribution of various devices to collaborative context determination is highly dynamic, depending on the properties of context information, content, and reliability.

■ Figure 1. An example of context atoms from a user scenario. Black means maximum activity (one) while white means zero activity. Data is recorded from a scenario where a user is moving indoors and outdoors with a context-aware mobile terminal. The table shows two context atom vectors: ct=22 recorded at time t = 22 s and ct=32 for t = 32 s. The context atoms whose value changes notably are marked with an asterisk.

Atom description    ct=22   ct=32
Running             0.00    0.00
Walking fast *      0.39    0.66
Walking *           0.61    0.34
Loud                0.00    0.00
Modest sound *      0.35    1.00
Silent *            0.65    0.00
Total darkness      0.00    0.00
Natural light *     0.00    1.00
Dim light           0.00    0.00
Normal light *      1.00    0.22
Bright light *      0.00    0.78
At hand             0.00    0.00
Unstable            1.00    1.00
Stable              0.00    0.00
Sideways left       0.00    0.00
Sideways right      0.00    0.00
Antenna up          0.98    0.99
Antenna down        0.00    0.00
Display up          0.00    0.00
Display down        0.00    0.00

[The figure also plots the activity of each atom against time (roughly 0–50 s), with context A, “walking inside,” in the early part of the recording and context B, “walking outside,” in the later part; the tabulated vectors are taken at t = 22 s and t = 32 s.]



The architecture of collaborative context determination presented in Fig. 2 is based on dynamic examination of the current context of the node and the contexts of its surroundings communicated over the ad hoc network and, importantly, the stability of these contexts. We call the current context of a node the local context, and the context determined with the procedure the collaborative context. More specifically, each node maintains the following information onboard:

Local context consists of the context atoms a single device can deduce using its context information sources and context extraction algorithms. The history of the local context may also be stored for a certain period of time.

Local context reliability reflects the stability of the local context. It is updated repeatedly along with the local context using its history. The local context reliability is considered high when temporal changes in the context of a single node are small. The stability decreases when larger temporal changes occur in the local context. Consequently, the more stable the context atom is currently, the more reliable it is considered.

Collaborative context consists of the same context atoms as the local context. However, it reflects the impression of the device about its environment. This impression is a summary of the situation of other devices in the local range corrected by the local context. Consequently, it may be different in each device depending on the dynamics of the system. Although the collaborative context is stored locally in each device, it is updated partly in a distributed fashion.

Collaborative context reliability reflects the level of accordance that the device has to the collaborative context. In general, the reliability decreases when the context varies locally (within neighboring nodes).
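A minimal sketch of this per-node bookkeeping follows. The article does not specify the exact stability measure; here reliability is simply one minus the mean recent change of the local context atoms, which is only one possible choice.

```python
from collections import deque
import numpy as np

class ContextNode:
    """Onboard context state of a single device (illustrative only)."""

    def __init__(self, n_atoms, history_len=10):
        self.local = np.zeros(n_atoms)              # local context atoms
        self.history = deque(maxlen=history_len)    # recent local contexts
        self.local_reliability = 1.0
        self.collaborative = np.zeros(n_atoms)      # impression of surroundings
        self.collab_reliability = 1.0

    def update_local(self, atoms):
        """Store a new local context vector and refresh its reliability."""
        atoms = np.asarray(atoms, dtype=float)
        if self.history:
            # Reliability is high when recent temporal changes are small.
            changes = [np.abs(atoms - past).mean() for past in self.history]
            self.local_reliability = float(np.clip(1.0 - np.mean(changes),
                                                   0.0, 1.0))
        self.history.append(atoms)
        self.local = atoms

node = ContextNode(n_atoms=3)
node.update_local([0.0, 0.4, 0.6])
node.update_local([0.0, 0.9, 0.1])   # sudden change -> reliability drops
print(node.local_reliability)        # about 0.67 with these toy values
```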

Different computational and signaling schemes may be considered for updating the context data. We have experimented with a system where a triggering condition initiates a request in some node. The requesting node receives the local contexts and their reliabilities from the neighboring nodes. It makes a summary of the received information and sends it back to the neighboring nodes. All the nodes involved then individually determine their accordance to this summary, and update their collaborative context and collaborative context reliability.

Signaling is initiated if any of the following triggering conditions are fulfilled:
• Mismatch between some/all context atoms of local and collaborative context
• Local context reliability below a threshold
• Collaborative context reliability below a threshold
• Neighboring node request for context data for collaborative context determination

Thresholds are determined on the basis of the adaptive stability measures of local and collaborative contexts. The signaling between devices within an ad hoc network during the procedure operates as follows:
• The node sends a request for local context information to all devices in the range of the local wireless ad hoc network.
• Nodes in the range receive a request and send their local context and context reliability (or requested parts of them) to the requesting node.

■ Figure 2. The architecture of collaborative context determination in a single terminal. [Block diagram: device-internal context information sources (e.g., sensors, applications, communication networks, WWW) feed context extraction into the local context register with its stability measure and stability register; a buffer for contexts from neighboring nodes feeds the collaborative context register with its own stability measure and stability register; mismatch estimation, the stability measures, and the triggering conditions control the procedure (queries and the algorithm).]



• The requesting node receives context information multiple times and calculates a summary. The node can use both the individual context reliabilities and the inhomogeneity of the local contexts when it calculates the context summary and its reliability.
• The device sends out the summary and its reliability to all nodes in range of the local wireless ad hoc network.
• All nodes, including the requesting node, continue by estimating their values for collaborative context toward their own local context. This is done by using the local context and the received summary, as sketched below.

When all the nodes are in the same context, the collaborative context can be derived by averaging. In this simple approach, problems arise when several contexts appear within the range of the local ad hoc network. Next we present simulation results to show that the method maintains different contexts of nodes within the ad hoc network. The method emphasizes the contribution of nodes that are in similar context, and particularly nodes with reliable context information.
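To make the exchange concrete, the following simulation sketches one request/summary/update round. The reliability-weighted summary and the blending weight alpha are our own illustrative choices; the article does not give the exact formulas.

```python
import numpy as np

def collaboration_round(local_contexts, reliabilities, alpha=0.5):
    """One collaborative context determination round (illustrative only).

    local_contexts: list of context atom vectors, one per node in range
    reliabilities:  list of local context reliabilities in [0, 1]
    alpha:          how strongly each node pulls the summary toward its
                    own local context (an assumed blending parameter)
    Returns the summary, per-node collaborative contexts, and their
    reliabilities.
    """
    C = np.asarray(local_contexts, dtype=float)
    r = np.asarray(reliabilities, dtype=float)

    # Requesting node: reliability-weighted summary of the received contexts.
    summary = (r[:, None] * C).sum(axis=0) / r.sum()

    # Each node blends the broadcast summary toward its own local context.
    collaborative = alpha * C + (1.0 - alpha) * summary

    # Collaborative context reliability drops when contexts vary locally.
    spread = np.abs(C - summary).mean(axis=1)
    collab_reliability = np.clip(1.0 - spread, 0.0, 1.0)
    return summary, collaborative, collab_reliability

# Two atoms ("inside", "outside"); seven nodes inside, three outside.
contexts = [[1.0, 0.0]] * 7 + [[0.0, 1.0]] * 3
relias = [0.9] * 7 + [0.8] * 3
summary, collab, c_rel = collaboration_round(contexts, relias)
print("summary:          ", summary.round(2))
print("node 1 (inside):  ", collab[0].round(2))   # still leans inside
print("node 10 (outside):", collab[9].round(2))   # still leans outside
```

With these toy numbers the minority nodes keep a collaborative context that leans toward their own local context, which is the behavior the simulations in the next section aim to show.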

PERFORMANCE OF THE METHOD

We have evaluated the performance of the method with simulations using real context data. Data used in experiments is recorded with tailored equipment [11]. In test scenarios a number of context-aware mobile nodes are in two contexts: context A, “walking inside,” and context B, “walking outside.” The situation in the test scenario describes movements of mobile nodes in the vicinity of the front door of a building; some people are leaving, some entering. Usually physical barriers like doors distinguish higher-level contexts (e.g., café–street, lobby–lecture).

Figure 3 presents a particular situation from the experiments with 10 nodes. A node moves from inside to outside (Fig. 3a). The node triggers collaborative context determination since it considers its local context unreliable because of the sudden change. The other nodes respond by sending their local contexts (Fig. 3b). The requesting node computes a context summary, and sends it back to the other nodes (Fig. 3c). All nodes estimate their values for collaborative context toward their own local context (Fig. 3d).

In Fig. 3d, 70 percent of the nodes are in context A and 30 percent in context B. The result of the collaborative context determination process for this particular setting can be found in Fig. 4a, marked with arrows. Context information exchanged between nodes in the experiments is in the format presented in Fig. 1.

The performance of the method is measured with the similarity measure between the actual contexts cA and cB and the estimated collaborative contexts A and B. Actual contexts are defined as the median contexts of A and B calculated from a large context data set. Similarity measures from the experiments with 10 and 20 nodes are presented in Fig. 4. In both experiments, the similarity measure S(cA, A) increases rapidly, indicating that the estimated collaborative context is close to the actual context already when a small fraction of the nodes belong to context A. Correspondingly, the similarity measure S(cB, B) decreases slowly, suggesting that the method is able to maintain the estimated collaborative context among the nodes in context B even if the majority of all nodes are in context A.

■ Figure 4. Examples of the similarities of determined collaborative contexts and actual contexts as a function of the percentage of nodes in context A: a) 10 nodes; b) 20 nodes. [Both panels plot the similarity to the true context, S(cA, A) and S(cB, B), on a 0–1 scale against the percentage of nodes in context A, from 10 to 100 percent.]

The results suggest that the method is capable of providing reliable information about the current context even when only a few percent of the nodes are in the same context. Jointly determined, more reliable context information can be utilized in various applications.
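The article does not spell out the form of the similarity measure S; a common choice for vectors of this kind is cosine similarity, sketched below with made-up context vectors.

```python
import numpy as np

def similarity(actual, estimated):
    """Cosine similarity between an actual (median) context and an
    estimated collaborative context; an assumed form of S, not
    necessarily the one used in the article."""
    a = np.asarray(actual, dtype=float)
    e = np.asarray(estimated, dtype=float)
    return float(a @ e / (np.linalg.norm(a) * np.linalg.norm(e)))

# Made-up three-atom vectors: the estimate is close to context A.
c_A = [0.0, 0.4, 0.6]          # median of context A, "walking inside"
estimate = [0.1, 0.35, 0.65]   # collaborative estimate of a node in A
print(round(similarity(c_A, estimate), 3))   # close to 1.0
```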

■ Figure 3. An illustration of the collaborative context determination procedure. [Four panels: a) with nodes n1–8 in context A (IN) and n9–10 in context B (OUT), a node asks “Context?”; b) nodes n1–7 in A and n8–10 in B reply with their local contexts (“A!”, “B!”); c) the requesting node sends back the context summary; d) all nodes update their collaborative context.]

COMMUNICATING THE CONTEXT

Communicating the context requires a common context format. Considering the rather divergent use situations of mobile terminals, it is evident that numerous lower- and higher-level descriptions of user contexts are needed. Recognizing them all, or at least the most typical ones, is a challenging task. It will be difficult to develop standard models for context to be exploited in context communication.

A higher-level context comprises several low-level aspects of the current and past situations of a user. Communicating higher-level contexts and using that information in applications in different nodes requires that they all share similar background knowledge about the higher-level contexts, which is very difficult to achieve. This particular problem can be overcome to some degree by utilizing central context databases.

We are not trying to model higher-level contexts. Instead, we experiment with highly mobile and distributed context-aware nodes with a simple low-level context representation, assuming that all nodes have a standard context representation, and that context atoms are in a prespecified order. The presented method allows nodes to emphasize individual context atoms, that is, to rely on certain types of atomic context information.

This context communication method is sufficient when the number of context atoms is relatively small. When the number of context atoms increases, compressing the context information so that only compact, task-relevant context information is communicated becomes essential [11].
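Assuming all nodes share the atom order, a context atom vector can travel as a very small fixed-length message. The byte-per-atom wire format below is purely illustrative, not a format specified in the article.

```python
import struct

ATOM_COUNT = 20  # agreed in advance by all nodes

def pack_context(atoms):
    """Quantize each atom in [0, 1] to one byte and prepend the count."""
    assert len(atoms) == ATOM_COUNT
    payload = bytes(int(round(a * 255)) for a in atoms)
    return struct.pack("!B", ATOM_COUNT) + payload

def unpack_context(message):
    count = struct.unpack("!B", message[:1])[0]
    return [b / 255 for b in message[1:1 + count]]

msg = pack_context([0.0] * 18 + [0.98, 1.0])
print(len(msg), "bytes")            # 21 bytes for 20 atoms
print(unpack_context(msg)[-2:])     # approximately [0.98, 1.0]
```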

APPLICATIONS

We believe that collaborative context determination benefits many kinds of context-aware applications by making context recognition easier. To illustrate the utility of our approach, we discuss two cell phone application cases where our method may be useful. Not surprisingly, the examples we offer here deal with cases where a number of people meet.

RINGTONE SILENCING

The archetypal application for widely popular mobile phones must be automated ringtone silencing. Many people are by now familiar with the awkward situations a suddenly ringing cell phone causes in sensitive settings, such as museums, libraries, and classrooms. Various techniques are available for automated silencing, including sensor-based detection of quiet situations and jamming of the phone receiver with a suitable transmitter.

With collaborative context determination, the problem seems to be solved moderately easily. Where several people are gathered, only the first ones need to set their phones in a suitable mode (which may also be automated). For the rest, their phones can follow the majority or, upon user confirmation, switch into silent mode. Note that the technique is less useful where there are not many people present, but such situations are also less likely to be sensitive.
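A minimal sketch of the majority rule suggested here: if enough nearby phones already report a silent profile, the device proposes switching as well. The threshold and the profile labels are invented for the example.

```python
def should_suggest_silent(neighbor_profiles, majority=0.6):
    """Return True if the share of nearby devices already in silent
    mode exceeds the (assumed) majority threshold."""
    if not neighbor_profiles:
        return False
    silent = sum(1 for p in neighbor_profiles if p == "silent")
    return silent / len(neighbor_profiles) >= majority

# Seven of nine nearby phones are silent, so ask the user to confirm.
profiles = ["silent"] * 7 + ["general"] * 2
if should_suggest_silent(profiles):
    print("Switch to silent mode? (nearby devices suggest a quiet setting)")
```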

PRESENCE

The situation often influences our choice of how reachable and visible we want to be: do we want to receive calls, do we accept unsolicited advertising, do we want to pass our dietary preference profiles to a restaurant? Collaborative context may help in determining a suitable presence profile on the basis of the preferences of the people nearby. For example, if 100 people in the bar tonight have chosen to receive a free soft drink in return for agreeing to view a commercial, maybe I could safely do the same. The freedom to decide must remain with the individual, but peer pressure may be allowed to affect our choices.

CONCLUSIONS

KEY FINDINGS

The main points in our approach are:
• The method for collaborative context determination resembles the way people adapt their behavior in groups. We believe that this resemblance would make the method more understandable to humans.
• A collaboratively agreed context may be more reliable than that of a single device.
• The method can give a reasonable initial guess for context recognition in each device. Since this makes recognition easier, less CPU power and memory are consumed.


• In dynamic situations, our method may use considerable bandwidth. A trade-off must be made between communications and computation.
• In some cases there may be direct mapping from context detection to the corresponding action (e.g., cell phone silencing in meetings).
• Our method is neutral to the chosen context representation. However, all participating devices must share a common context representation.
• Our method supports highly mobile systems; it does not require any central context server.

MAJOR CHALLENGES AND FUTURE WORK

Social acceptance and privacy will be major problems for collaborative context determination. Many people would not like to reveal their context to nearby strangers, some not even to friends, relatives, or loved ones. The problem will be more apparent for higher-level contexts that directly speak of the situation of a person. We expect, however, that people will be less protective of their lower-level sensor data. We also expect that, as applications using context awareness and presence techniques become more available, an etiquette will arise, defining socially acceptable ways to share (and hide) contexts.

People must retain control of how their context information is communicated to the environment. It remains a major research challenge to maximize the level of control and minimize the user actions needed for the control. This, of course, is a general problem with communication of personal data, and thus not particular to our approach.

Proximity networks could contain anonymity servers that would allow communication of context information while hiding the identity of the participating devices. Such servers would also be useful from the viewpoint of network economy: it may be more bandwidth-efficient if a central node computes the collaborative context and communicates it to other nodes in a centralized fashion. However, since proximity networks are expected to be very dynamic, there are discovery and bookkeeping problems associated with the server approach.

Our method may have problems with stability in dynamic situations where the contexts of devices fluctuate rapidly. The context collaboration method may lead to transient erroneous contexts, or even oscillation and deadlocks.

Even in static situations, there will be problems. When the contexts of neighboring devices remain different, the joint context will converge, but the result may be an artificial context that does not correspond to any physical situation. Moreover, in any crowd, clusters of different partial contexts will form. In a concert, some people listen to the orchestra, some are conversing, some are sleeping. What is the joint context in such a case? The problem becomes less significant the smaller the proximity network cells are, though. We expect that advances in context modeling and representation will ultimately answer some of these questions.

There may be certain configurations that often lead to errors in context determination. The algorithm should be able to detect such configurations and recover gracefully. Our method should be studied from the viewpoints of parallel programming, network theory, and self-organizing adaptive systems. Alternatively, since context collaboration is a form of synchronization, work in distributed databases might offer insight into our method.

We use a simple representation of context. For practical implementations, a more flexible representation should be used, hopefully a universally agreed format based on standards such as XML.

REFERENCES
[1] G. D. Abowd and E. D. Mynatt, “Charting Past, Present, and Future Research in Ubiquitous Computing,” ACM Trans. Comp.-Human Interaction, vol. 7, no. 1, Mar. 2000, pp. 29–58.
[2] T. Berners-Lee, J. Hendler, and O. Lassila, “The Semantic Web,” Sci. Amer., vol. 284, no. 5, May 2001, pp. 34–43.
[3] A. Schmidt et al., “Advanced Interaction in Context,” Proc. Int’l. Symp. Handheld and Ubiquitous Comp., LNCS no. 1707, Springer-Verlag, Berlin, Germany, Sept. 1999, pp. 89–101.
[4] A. Schmidt, “Implicit Human Computer Interaction Through Context,” J. Pers. Tech., Springer-Verlag, vol. 4, 2000, pp. 191–99.
[5] V.-M. Mäntylä et al., “Hand Gesture Recognition of a Mobile Device User,” Proc. IEEE Int’l. Conf. Multimedia and Expo, Piscataway, NJ, July 2000, pp. 281–84.
[6] K. Hinckley et al., “Sensing Techniques for Mobile Interaction,” ACM Symp. User Interface Software and Tech., New York, NY, 2000, pp. 91–100.
[7] A. R. Golding and N. Lesh, “Indoor Navigation Using a Diverse Set of Cheap, Wearable Sensors,” Int’l. Symp. Wearable Comp. Dig. of Papers, Los Alamitos, CA, Oct. 1999, pp. 29–36.
[8] J. Mäntyjärvi, J. Himberg, and T. Seppänen, “Recognizing Human Motion with Multiple Acceleration Sensors,” Proc. IEEE Int’l. Conf. Sys., Man and Cybernetics, Piscataway, NJ, Oct. 2001, pp. 747–52.
[9] A. Pentland, “Looking at People: Sensing for Ubiquitous and Wearable Computing,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 1, Jan. 2000, pp. 107–19.
[10] K. Van Laerhoven and O. Cakmakci, “What Shall We Teach Our Pants?” 4th Int’l. Symp. Wearable Computers, Los Alamitos, CA, Nov. 2000, pp. 77–83.
[11] J. Himberg, J. Mäntyjärvi, and P. Korpipää, “Using PCA and ICA for Exploratory Data Analysis in Situation Awareness,” IEEE Conf. Multisensor Fusion and Integration for Intel. Sys., Duesseldorf, Germany, Sept. 2001, pp. 127–31.
[12] J. Himberg et al., “Time Series Segmentation for Context Recognition in Mobile Devices,” IEEE Conf. Data Mining, Los Alamitos, CA, Nov. 2001, pp. 203–10.
[13] A. K. Dey, “Understanding and Using Context,” Pers. and Ubiquitous Comp., Springer-Verlag, vol. 5, no. 1, 2001, pp. 4–7.
[14] Special issue on “Technologies for Short Range Wireless Communications,” IEEE Pers. Commun., Feb. 2000.

BIOGRAPHIES

JANI MÄNTYJÄRVI ([email protected]) received an M.Sc. in biophysics from the University of Oulu, Finland, in 1999. He currently works as a senior research scientist at Nokia Research Center, Helsinki, Finland, and is working toward a Ph.D. in the Department of Electrical Engineering and Information Processing at the University of Oulu. His research interests include pervasive and context-aware computing, handheld devices, and technologies for user interaction.

PERTTI HUUSKONEN ([email protected]) is a principal scientist at Nokia Research Center, Tampere, Finland. He creates visions and steers research programs in the areas of context awareness, ubiquitous computing, and interaction in mobile systems. His background is in artificial intelligence, with a focus on advanced user interfaces for industrial applications. He has a doctorate in computer science from the University of Oulu.

JOHAN HIMBERG ([email protected]) earned his M.Sc. (Tech.) degree from Helsinki University of Technology (HUT) in 1997. Prior to joining Nokia Research Center he worked as a researcher in the Laboratory of Computer and Information Science (CIS) at HUT. He specializes in data mining and information visualization.
