[IEEE 15th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE'06) - Manchester, UK (2006.06.26-2006.06.28)]


Enabling Seamless Contextual Collaborations for Mobile Enterprises

Siddhartha Bose, Subodh Sohi
Applications Research Centre, Motorola India Research Labs, Bangalore, India
{siddhartha.bose, subodhsohi}@motorola.com

Abstract

Enterprise workers are increasingly becoming mobile, and this is creating scenarios for anytime-anywhere collaboration, using different sets of devices, services and modalities at different times. One of the challenges is to enable seamless contextual collaborations, where people get the best possible collaboration experience at all times. This paper describes a method for generating the most appropriate view of an active collaboration context for a user, in terms of device-media-service combinations, using specified constraints. As the user moves from one set of devices to another, these combinations are seamlessly adapted.

Keywords: context-aware collaboration, mobile enterprise, seamless mobility, adaptive collaboration, optimal collaboration experience

1. Introduction

Within any enterprise, people communicate using different modalities, devices and services, such as conference calls, Microsoft™ NetMeeting sessions, etc. Often, multiple such collaboration mechanisms are used to accomplish a single task. In mobile enterprise collaboration scenarios, users may use multiple devices at a time, may use different devices at different times, and may want to change devices during a collaboration session. This requires a system that enables seamless and adaptive collaboration sessions across devices, media and services, so that a user gets an optimal collaboration experience at all times. To understand the concept of an optimal collaboration experience, consider the following use case: Tom is trying to have a video call with Alice. Since Alice is in a meeting, the audio stream (the audio component of the video) is delivered on her laptop through IM (after converting audio to text), and the image stream (the image component of the video) is not delivered. When Alice later enters her office, the audio stream is transferred to her desktop phone, and the image stream is pushed to her PC.

To enable such an optimal collaboration experience, one needs to consider two important aspects of mobile enterprise collaborations: (1) user context and (2) collaboration context. User context refers to parameters associated with a user: his current activity, the devices available to him, the media types and services supported by his devices, his preferences with respect to media types and services, etc. Collaboration context refers to parameters associated with the collaboration process, such as the media types used, the services used for delivering media, etc.

Any system that aims at providing an optimal collaboration experience should address the following two problems:

1. To provide the user with the most appropriate view of the collaboration context, i.e. to adapt the collaboration context in an optimal way given the user's current context.

2. To seamlessly adapt the view of the collaboration context as the user moves across locations, activities, devices, services, etc. Seamless adaptation means that for any change in user context, the adaptation of the optimal view of the collaboration context occurs automatically and in real time.

From the use case and the identified problems, the main research challenge is the efficient generation of the most appropriate device-media-service mappings, which map media-service streams from the collaboration context to the devices in the current user context.

There are many solutions for adaptive collaboration in a mobile computing environment. Most systems, such as the Liquid Media System [2], handle media transformation and routing based on device capabilities. This leaves the issues of service transformation and media-to-service association unaddressed. In our work, we address the complete problem, which includes media transformation and routing, service transformation and routing, and device-media-service association.

Proceedings of the 15th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE'06) 0-7695-2623-3/06 $20.00 © 2006

The rest of the paper is organized as follows: Section 2 introduces some essential concepts and gives the rationale behind algorithm design. Section 3 describes the system and the mechanism. Section 4 analyzes the algorithm complexity. Section 5 gives performance figures from our implementation. Section 6 gives an overview of related work in the area. Section 7 summarizes the paper and provides future research directions.

2. Essential Concepts

To understand the above stated problems, we need a clear understanding of collaboration media, collaboration services, the association between collaboration media and services, and the transformation of one collaboration media-service combination to another. We also need to understand the concept of user preferences.

2.1. Collaboration Media and Services

In any collaboration system, multiple media types and services are used. Collaboration media refers to the content of a collaboration session, while collaboration service refers to the delivery mechanism for the content. Here we present our interpretation of collaboration media, collaboration services and the association between the two. This is important for understanding the concept of transforming one collaboration media-service combination to another.

We classify collaboration media as follows: text, audio, image, audio stream and image stream. We differentiate between an image and an image stream, and between audio and an audio stream: image and audio streams denote the image and audio components of a video stream.

Figure 1 shows the association between collaboration services and media types. Note that figure 1 gives a representative set of collaboration services and associated media types, sufficient to illustrate the distinction between the two.
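The service-media association of figure 1 can be sketched as a simple lookup table. The pairings below are read off the media-service combinations used in figure 2; they are illustrative, not a definitive schema:

```python
# Representative association between collaboration services and the media
# types they can carry (illustrative, mirroring Figures 1 and 2).
SERVICE_MEDIA = {
    "text IM":      {"text"},
    "email":        {"text"},
    "audio call":   {"audio stream"},
    "video call":   {"audio stream", "image stream"},
    "file sharing": {"text", "audio", "image"},
    "app sharing":  {"image stream"},
    "whiteboard":   {"image stream"},
}

def services_for(media: str):
    # Reverse lookup: which services can deliver a given media type.
    return {svc for svc, media_set in SERVICE_MEDIA.items() if media in media_set}
```

For example, `services_for("text")` would return the text-capable services (text IM, email, file sharing), which is the kind of lookup needed when re-routing an audio call as text.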

2.2. Transformations and Associated Cost

Figure 2 gives a representative set of transformations between media-service combinations; it should be interpreted in conjunction with figure 1. The transformation matrix is not symmetric, for the following reasons:

- It is possible to convert textual content to an image (by transmitting a snapshot of the text), but the reverse is not possible in all cases.

- It is possible to convert streaming media (audio or image) to static media (an audio or image file) by taking snapshots (or clips) at regular intervals, but the reverse is not possible.

For any transformation of one media-service combination to another, there is an associated cost. We classify such costs into two categories: (1) media transcoding cost, and (2) service transformation cost. The relative weight of the two categories of cost depends on multiple environmental factors, such as the types of media transcoders available, the types of service transformations available, etc.
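As a minimal sketch, the two cost categories can be combined as follows (the function and parameter names are our own; the weighting scheme anticipates the three-case analysis given later in this section, with the paper's α = β = 1 as defaults):

```python
# Hypothetical sketch of the two-part transformation cost model: w1 is the
# media transcoding cost, w2 the service transformation cost; alpha and beta
# weight the two categories (the paper uses alpha = 1, beta = 1).
def transformation_cost(media_changed: bool, service_changed: bool,
                        w1: float, w2: float,
                        alpha: float = 1.0, beta: float = 1.0) -> float:
    if media_changed and service_changed:
        return alpha * w1 + beta * w2   # staged: transcode media, then transform service
    if media_changed:
        return w1                       # media changes, service held constant
    if service_changed:
        return w2                       # service changes, media held constant
    return 0.0                          # identity mapping costs nothing
```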

In our work, we use a cost model based on experimental measurements of the time taken for transforming one service to another, and for transcoding one media type to another. There can be three types of transformations:

Case 1: media m1 → m2, service s constant. Cost = w1
Case 2: media m constant, service s1 → s2. Cost = w2
Case 3: media m1 → m2 and service s1 → s2. Cost = αw1 + βw2, where α and β are weights (we use α = 1, β = 1 in our implementation)

For case 3, we use a staged transformation mechanism consisting of media transcoding followed by service transformation. We find that the above model works well for most practical purposes. We use transformation cost as one of the factors for determining optimal device-media-service mappings. It is important to have a good cost model for practical implementations, and transformation cost analysis is an area of future work for us.

[Figure 1: Collaboration Services with Associated Media. A matrix associating each collaboration service (whiteboard, app sharing, file sharing, email, video call, audio call, text IM) with the media types it supports (text, audio, image, audio stream, image stream).]

[Figure 2: Media-Service Transformations. A source-versus-destination matrix over media-service combinations (whiteboard/image stream, app sharing/image stream, file sharing/image, file sharing/audio, file sharing/text, email/text, video call/image stream, video call/audio stream, audio call/audio stream, text IM/text), indicating which transformations are possible.]

2.3. User Context – Activity and Preferences

Our work captures a user's context in two parts: current activity and preferences. A user's current activity is dynamic, while user preferences are usually static. We utilize both elements for determining optimal device-media-service mappings.

We define activity in terms of location, individual or group activity, and free or busy status. We use a combination of location information and calendar information for determining a user’s current activity.

A user provides preferences in terms of activity, source media, source service, destination media and destination service. A user can define preferences using any number of parameters. For example, a user defines a preference P1 as follows: In a meeting, any audio call should come as a text-based IM invitation.

P1: {meeting (office, group, busy), audio, audio call, text, text IM}

We store the preferences in a prioritized list. Prioritization of preferences is required so that the system can choose the most appropriate preference(s) in minimum time. A simple mechanism for preference prioritization is as follows: When a user defines a new preference, the system classifies it for a particular activity, and prioritizes the available preferences from the most specific to the most general in descending order.

To compare preferences we utilize the following scheme of relative importance between parameters of a preference – activity > source media > source service. The user resolves conflicts arising out of the preference comparison mechanism at the time of preference definition.
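For illustration, a preference tuple and the specific-to-general prioritization described above can be sketched as follows (the class and function names are our assumptions, not the paper's implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Preference:
    # None acts as a wildcard ("any") for that parameter.
    activity: Optional[str]
    src_media: Optional[str]
    src_service: Optional[str]
    dest_media: str
    dest_service: str

def specificity(p: Preference) -> int:
    # Relative importance from the paper: activity > source media > source service.
    return ((p.activity is not None) * 4
            + (p.src_media is not None) * 2
            + (p.src_service is not None))

def prioritize(prefs):
    # Order from the most specific preference to the most general.
    return sorted(prefs, key=specificity, reverse=True)

# The preference P1 from the text: in a meeting, deliver audio calls as text IM.
p1 = Preference("meeting", "audio", "audio call", "text", "text IM")
catch_all = Preference(None, None, None, "text", "text IM")
```

Here `prioritize([catch_all, p1])` would place P1 first, since it constrains more parameters than the catch-all rule.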

2.4. Algorithm Design Rationale

We have used two main drivers while designing the algorithm for producing optimal device-media-service mappings, namely:

1. Maximize the priority of the user preferences used for the final mappings.

2. Minimize the cost of transformation of media-service mappings.

Our algorithm applies the drivers in the order given: it first maximizes the priority of preferences and then minimizes the cost of transformation.

3. Enabling Seamless Collaborations

This section describes the system and method for enabling seamless collaborations.

3.1. Description of the System

Figure 3 gives an overview of the system and its working. The central component of the system is the transformation mappings generator (tag "5"). Supporting components are the collaboration system, device context store, user context manager, allowed transformations store, and media gateway.

User Context Manager. The user context manager (tag "1") stores user preferences defined in the format given in section 2.3, that is, activity (A_P), source media type (M_SRC), source service (S_SRC), destination media type (M_DEST), destination service (S_DEST). Users can define preferences from any device, as shown by tag "1-1". The store maintains a prioritized list of preferences, which the mappings generator uses. It also stores the current activity of the user. Typically, a large enterprise will have not a single context manager but a federated set of context managers.

Device Context Store. The device context store (tag "2") stores capabilities related to the media types supported (M_D) and services available (S_D) on the user's current devices (D_D). A simple mechanism for determining which devices are currently available to a user is to combine his presence information at various devices, his location information, and the location information of the various devices. Devices communicate their capabilities using the CC/PP protocol, as shown by tag "2-1".
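As an illustration, a device-context entry (D_D, M_D, S_D) for Alice's devices in figure 3 might look like the following; the dictionary shape is our assumption, not the store's actual schema:

```python
# Hypothetical device context store entry: per-device supported media types
# (M_D) and available services (S_D), keyed by device (D_D).
device_context = {
    "laptop":        {"media": {"text"},         "services": {"text IM"}},
    "desktop phone": {"media": {"audio stream"}, "services": {"audio call"}},
    "PC":            {"media": {"image stream"}, "services": {"media player"}},
}
```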

Allowed Transformations Store. The allowed transformations store (tag "3") keeps information about the valid media-service transformations supported by the infrastructure. These are stored as tuples of source media type (M_SRC), source service (S_SRC), destination media type (M_DEST), destination service (S_DEST) and cost of transformation (W). Typically, this store is static and changes only when the system administrator adds a new type of service or media type to the infrastructure.

Collaboration System. Enterprise users use the collaboration system (tag "4") to interact using multiple devices, media types and services. As shown in figure 3, Tom (tag "A") is collaborating with Alice (tags "B" and "C") using two different media-service combinations: (audio, voice call) and (text, IM). Typical collaboration systems include Microsoft™ NetMeeting, Microsoft™ Live Communications Server, etc. The collaboration system maintains information about the current collaboration context, which includes information about the current media-service streams, denoted M_C and S_C. We use the format defined in [1] to store the collaboration context.

Transformation Mappings Generator. The transformation mappings generator (tag "5") implements the mechanism outlined in section 3.2. It takes the following inputs: prioritized user preferences and current activity from the user context manager, device capabilities from the device context store, allowed transformations from the allowed transformations store, and the current collaboration context from the collaboration system. It generates device-media-service mappings that the media gateway uses to adapt the collaboration context.

Media Gateway. The media gateway (tag "6") uses the mappings generated by the transformation mappings generator to adapt input media-service streams and route them to the user. Existing media gateways, such as the Liquid Media System [2], can be utilized with appropriate modifications.

3.2. Description of the Method

The method consists of four steps as described below:

Definition of terms used:
CC = collaboration context = {M_C, S_C}
DC = device context store = {D_D, M_D, S_D}
PT = set of possible transformations
AT = allowed transformations store = {M_SRC, S_SRC, M_DEST, S_DEST, W}
VT = set of valid transformations
UP = user preferences store = {A_P, M_SRC, S_SRC, M_DEST, S_DEST}
AP = set of applicable preference rules
SR = selected rules for an input media-service stream
ST = selected transforms for a preference rule
M = set of selected mappings

Input: CC, DC, AT, UP

Output: Set of optimal device-media-service mappings (M) from input media-service streams to user’s devices

generate mappings { /* step I: compute possible transformations */

[Figure 3: The System. Tom (A) has a [Video, Video call] session with Alice (B, in a meeting cabin; C, in her office). The collaboration system (4) feeds the collaboration context to the transformation mappings generator (5), which also receives preferences and current activity from the user context manager (1; preferences defined via 1-1), device capabilities from the device context store (2; CC/PP profiles parsed via 2-1), and transformations from the allowed media-service transformations store (3; maintained through system administration). The generated mappings drive the media gateway (6), which delivers the adapted collaboration context to Alice's devices as [Text, IM, Laptop], [Audio, Voice call, phone] and [Image stream, Media player, PC].]



multiply the collaboration context (CC) with the device context store (DC) to get the set of all possible transformations (PT)

/* step II: filter valid transforms */
using the allowed transformations store (AT), filter the set of valid transformations (VT) out of the PT generated above

/* step III: compute applicable preferences */
using the current activity of the user, filter the set of applicable preferences (AP) out of the set of user preferences (UP)

/* step IV: select optimal transforms */
for each media-service stream in the collaboration context (CC):
1. Determine the set of applicable preference rules (SR) for this media-service stream, ordered by rule priority
2. For the highest-priority rule in SR, select the set of transformations (ST) whose destination media-service combination matches the rule's destination media-service combination
   a. If |ST| = 1, add ST to the final set of mappings (M); proceed to the next input media-service combination
   b. If |ST| > 1, select the transformation with minimum cost (W) and add it to M; proceed to the next input media-service combination
   c. If |ST| = 0, skip this preference rule and repeat step 2 with the next-highest-priority rule in SR
3. If no mapping is found for an input media-service stream:
   a. If there is a valid transform for which the user has not defined a preference rule, prompt him to use that transform; if he accepts, add the transform to M
   b. If no valid transform is possible, skip the media-service stream and do not present it to the user
4. Repeat steps 1, 2 and 3 for each media-service stream
}

The above algorithm generates optimal device-media-service mappings for a given collaboration context and user context, which the media gateway uses to adapt the collaboration context for the receiver. This addresses the first problem listed in section 1, i.e., generation of the most appropriate view of the collaboration context.

Any change in user context, device context or collaboration context triggers a regeneration of mappings using the changed inputs, and re-adaptation of collaboration context by the media gateway. This addresses the second problem listed in section 1, i.e., seamless adaptation of view of collaboration context.
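The four steps above can be condensed into the following sketch. The data shapes and names are our assumptions, not the paper's implementation; steps I-III are folded into the candidate filter for brevity, and step IV.3's user prompt is omitted:

```python
# Hypothetical condensed sketch of the mapping-generation method.
def generate_mappings(streams, devices, allowed, preferences):
    """streams: list of (media, service) pairs in the collaboration context (CC).
    devices: dict mapping device -> set of (media, service) pairs it supports (DC).
    allowed: dict mapping (m_src, s_src, m_dst, s_dst) -> cost W (AT).
    preferences: priority-ordered rules (m_src, s_src, m_dst, s_dst), already
    filtered for the user's current activity (the AP set of step III)."""
    mappings = []
    for m_src, s_src in streams:                               # step IV
        for pm, ps, m_dst, s_dst in preferences:
            if pm not in (None, m_src) or ps not in (None, s_src):
                continue                                       # rule not applicable
            # valid transforms (steps I + II) to devices supporting the target
            candidates = [(allowed[(m_src, s_src, m_dst, s_dst)], dev)
                          for dev, caps in devices.items()
                          if (m_dst, s_dst) in caps
                          and (m_src, s_src, m_dst, s_dst) in allowed]
            if candidates:
                cost, dev = min(candidates)                    # minimum-cost transform
                mappings.append((m_src, s_src, m_dst, s_dst, dev, cost))
                break                                          # highest-priority rule wins
        # streams with no applicable rule or valid transform are skipped
    return mappings
```

For the paper's use case, an (audio, audio call) stream with a meeting-time preference for (text, text IM) would be mapped to whichever text-IM-capable device has the cheapest allowed transform.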

4. Complexity Analysis

The problem can be broken into two steps:
I. Determine the most appropriate set of preference rules.
II. Determine the most appropriate set of media-service transformations, so that i media-service streams on x devices are transformed to o media-service streams on y devices.

Definition of terms used in the complexity analysis:
L_P = size of the user preferences table
L_T = size of the allowed transformations table
i = size of the collaboration context

Problem size:
Time complexity of step I ≅ O(L_P)
Time complexity of step II ≅ O(C(L_T, y)), since y ≈ x
Hence, the problem size ≅ O(L_P · L_T^y), since C(L_T, y) ≈ L_T^y

Time complexity of our algorithm:
Time complexity of step I ≅ O((yi)^2)
Time complexity of step II ≅ O(y^2 · i · log L_T)
Time complexity of step III ≅ O(L_P)
Time complexity of step IV ≅ O(L_P · (yi)^2 · log (yi)^2)
Hence, the time complexity of our algorithm ≅ O(L_P · (yi)^2 · log (yi)^2), since L_P, L_T > 0 and y, i > 0

Comparison of the problem size and the time complexity of our algorithm:
The solution is a good algorithm if L_T^y / ((yi)^2 · log (yi)^2) >> 1. Here L_T = m_i · s_i · m_o · s_o ≈ s^2 (where m and s denote the numbers of media types and services, with subscripts i and o for input and output), and y > s, since the number of devices is greater than the number of services. The numerator L_T^y ≈ s^(2y) therefore grows exponentially in y, while the denominator (yi)^2 · log (yi)^2 grows only polynomially, so the ratio is much greater than 1 for y, i > 0.

Hence, the time complexity of the solution is far less than the problem size, and the algorithm gives real-time performance for practical mobile collaboration scenarios.

5. Performance Trends

We have implemented the algorithm in Java to study its performance trends. Figure 4 shows the execution time of the algorithm for three different cases, based on our implementation. The graphs show that the execution time is more sensitive to changes in the number of input media-service streams and applicable preferences than to changes in the number of devices. For practical mobile collaboration scenarios (i.e. number of devices ≤ 10, number of applicable preferences ≤ 25, and number of input media-service streams ≤ 10, which corresponds to the bottom-left sector of the graphs), we observe that the execution time is close to real-time.

6. Related Work

Although there are many solutions for supporting adaptive mobile collaboration [4][5][6], none of them address the complete problem of adaptive delivery of media and services to user’s devices. In addition, unlike our solution, none of the prior art considers user activity while delivering content to devices.

This paper uses collaboration context as in [1], which discusses a mechanism for capturing mobile enterprise collaboration contexts in a portable format.

Horvitz et al [3] discuss various models for determining the best communication link between the caller and receiver based on their location, preferences and device capabilities. However, they do not cover collaboration using multiple mobile devices, device-media-service association, and seamless adaptation based on mobility.

The Liquid Media System [2] talks about adaptive delivery of media streams based on a user’s available devices. However, it does not address adaptation of services, and does not use user’s activity and activity-based preferences in the adaptation process.

Prammanee et al [7] discuss a mechanism for adaptive delivery of multimedia streams using a combination of discovery of available devices and services, and adaptation of multimedia streams. However, their work focuses on the discovery of available device capabilities and services, and does not cover the use of user activity and activity-based preferences.

Other solutions, such as [4][5][6], use a suspend-resume mechanism for handling user mobility during collaboration.

7. Conclusion and Future Work

The anytime-anywhere aspect of communication in mobile enterprises generates the need for enabling seamless contextual collaborations. In this paper, we provide a system and method for generating device-media-service mappings to provide the most appropriate view of collaboration context (media-service streams) to the user, and its seamless adaptation as he moves from one set of devices to another. Based on our sample algorithm implementation, we provide performance trends, and show that it is possible to achieve real-time performance for practical mobile collaboration scenarios, given certain constraints.

In future, we plan to work towards capturing user activities in a more precise manner. At the same time, we plan to address a larger space of user preferences in order to provide better adaptation of the collaboration context. We also plan to work on transformation cost analysis to develop better and more practical cost models. In addition, we plan to implement the complete system as shown in figure 3. Some components of the system, such as the media gateway, are already available as part of earlier projects.

8. References

[1] Bose S., Ennai A., and Sohi S., "Portable Enterprise Collaboration Contexts", Intl. Conf. on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), IEEE Computer Society, 2005, ISBN: 1-4244-0031-7.

[2] Mysore J., Vasudevan V., Almaula J., and Haneef A., “The Liquid Media System – A Multi-Device Streaming Media Orchestration Framework”, Workshop on Multi-Device Interfaces for Ubiquitous Peripheral Interaction. Fifth Intl. Conf. on Ubiquitous Computing (Ubicomp), Oct 2003.

[3] Horvitz E., Kadie C., Paek T., and Hovel D., “Models of Attention in Computing and Communication: From Principles to Applications”, Communications of the ACM, 46(3):52-59, March 2003.

[4] Litiu R., Zeitoun A., “Infrastructure support for mobile collaboration”, Thirty-Seventh Annual Hawaii Intl. Conf. on System Sciences, Jan. 2004, pp. 31–40.

[5] Bardram J.E., "Activity-based computing – support for mobility and collaboration in ubiquitous computing", Personal and Ubiquitous Computing, 9:312-322, 2005.

[6] Su X., Prabhu B.S., Chu C.-C., and Gadh R., "Middleware for Multimedia Mobile Collaborative System", Third Annual Wireless Telecommunications Symposium (WTS), May 2004.

[7] Prammanee S., Moessner K., and Tafazolli R., “Discovering Modalities for Adaptive Multimodal Interfaces”, ACM Interactions Magazine, 13(3):66-70, May-June 2006.

[Figure 4: Performance Trends. Three surface plots of execution time: (a) against the number of media-service streams and the number of applicable preferences, with the number of devices constant (= 5); (b) against the number of devices and the number of media-service streams, with the number of applicable preferences constant (= 10); (c) against the number of devices and the number of applicable preferences, with the number of media-service streams constant (= 10).]

