
[IEEE Third Latin American Web Congress (LA-WEB'2005), Buenos Aires, Argentina, 31 Oct.–2 Nov. 2005]


Devices Descriptions for Context-Based Content Adaptation

Robson Eisinger, Marcelo G. Manzato, Rudinei Goularte Mathematics and Computing Institute – University of Sao Paulo

Av. Trabalhador Sancarlense, 400 – PO Box 668 – 13560-970 – Sao Carlos, SP – Brazil Email: {eisinger,mmanzato,rudinei}@icmc.usp.br

Abstract

Nowadays, networks can be accessed by multiple devices with different characteristics. Some of these characteristics, such as low processing power and memory capacity, restrict access to multimedia content. Researchers have therefore focused on making automatic system adaptations in order to present content according to devices' capabilities. Most of the work in this area describes devices' hardware and software features using the CC/PP specification. However, the indiscriminate and direct use of CC/PP makes it difficult to generate consistent descriptions when the same device is used in different contexts. The novelty of this paper is an alternative method to describe devices' features. This method aims to facilitate context-based content adaptation decisions made before content delivery. This paper also presents a content adaptation application that uses our method.

1. Introduction

The development of new technologies has changed the old paradigm in which the Internet was accessed only by personal computers. Nowadays, different devices, such as PDAs, mobile phones, tablets and others, are also connected to the World Wide Web, able to access not only web pages but also multimedia content. The Internet thus becomes a more heterogeneous environment, complicating the development of applications that manipulate video, audio and animation. In the case of video transmission, besides the concern with bandwidth needs, there is a problem when considering different device characteristics, such as the display: a personal computer has a resolution and screen format different from those of PDAs. Consequently, some devices with restricted capabilities cannot access multimedia content in an appropriate manner.

Exploring the Ubiquitous Computing paradigm [1], in the specific field of Context-Aware Computing [2], researchers have focused on using context information to make automatic adaptations in order to provide valuable services to the user [3,4,5,6,7]. In the content adaptation field, one of the main pieces of context information is the device description. Based on this information, the system has the means to determine which multimedia data configuration (video, in our case) is best suited for a specific device.

Most of the works in the literature focusing on device description [4,7,8,9] use the CC/PP specification [10], developed by the W3C. CC/PP offers an RDF-based way to describe devices' capabilities. However, CC/PP was not designed to describe contextual information, making it inadequate for context-aware representations of devices and users [8].

As detailed in Section 2, the use of CC/PP in complex pervasive context-aware systems is difficult and unintuitive. Most of these systems, the Semantic Web for example, use contextual descriptions to try to infer semantic relationships between system components (users, applications, devices, etc.), and RDF lacks the expressivity and representativeness needed for that kind of inference [8,11].

Those inferences can be obtained in a more natural and powerful way by using ontologies to represent context. However, research on this topic still needs further investigation towards both agreement on and establishment of ontology models.

Another difficulty imposed by the CC/PP specification is that it does not describe compositions of devices. The literature [12,13] shows that describing compositions of contextual elements is important: for example, when describing a person it is natural to place identity information, like name and ID, inside an <identity> element, which, in turn, is inside a <user> element. In this example we have at least three levels of hierarchy, while CC/PP supports only a one-level tree (a root and its children).

Besides that, CC/PP does not define the range of values that an attribute may be assigned. For example, one vendor could assign the value "keypad" to the attribute keyboard, while another vendor could assign the value "TelephonePad" to the same attribute, even though both represent the same functionality [11].

While ontological context representations are not yet mature and the CC/PP approach is not adequate, an alternative solution for describing devices in a context-aware system is to use XML Schema. In this paper we present a flexible and extensible XML Schema-based method to describe devices in a context-structured way. If, on one hand, our solution does not have the power of an ontological approach, on the other hand it does not suffer from the CC/PP drawbacks, making it a well-suited mid-term solution.

Proceedings of the Third Latin American Web Congress (LA-WEB’05) 0-7695-2471-0/05 $20.00 © 2005 IEEE

This paper is organized as follows: Section 2 describes related work in context representation and device description. Section 3 describes our previous work, which is the underlying model for contextual representations. Section 4 presents our solution for device representation. Section 5 shows an application example of our models, and Section 6 presents final remarks and future work.

2. Related Work

The literature shows that many works related to context-aware computing focus directly or indirectly on the device description process, and therefore need to define a representation model.

One of the earliest works in this area is the Context Toolkit [13], which defines a basic architecture for describing context information to context-aware applications. This information is represented using attribute-value pairs, which simplifies the definition process. However, this simplicity hinders knowledge sharing and context comprehension, since there is no vocabulary to guarantee a more sophisticated representation.

Trying to standardize the representation of devices and user preferences, the W3C proposed CC/PP (Composite Capability/Preference Profiles) [10]. A work that has explored this specification is DELI (Delivery Context Library for CC/PP and UAProf) [7], which defines an open-source library that uses Java servlets to process HTTP requests. These requests may include context information sent by a device that is compatible with CC/PP or UAProf. UAProf (User Agent Profiles) [9] is a specification developed by the WAP Forum as an adaptation of CC/PP focused only on mobile devices. Another example that uses CC/PP is Lemlouma's work [4], which combines the CC/PP-based UPS (Universal Profile Schema) specification with SMIL (Synchronized Multimedia Integration Language) in order to adapt content for mobile devices.

However, CC/PP is not adequate for describing context. Indulska et al. [8] present their experiences using CC/PP in a context-aware notification system called Elvin, which handles messages containing contextual information modeled with CC/PP. Their conclusion is that although it is possible to use CC/PP to define contextual information, including relationships and dependencies, CC/PP's limitations make it unsuitable for that task.

Another limitation of CC/PP concerns multiple vocabularies and profile resolution. As pointed out by Butler [11], processing multiple vocabularies involves distinguishing between different vocabularies (namespaces) for the same profile. For example, the UAProf [9] CC/PP profile has at least two different vocabularies. Each vocabulary is associated with a different namespace, so one possible interpretation is for the CC/PP-UAProf processor to assume that attributes, even with the same name, are totally different because they are in different namespaces. Other interpretations are possible. This points to the need for resolution rules.

An alternative solution is adopted by WURFL (Wireless Universal Resource File) [14], which centralizes mobile devices' schemas in a single library. Updates to the library schemas and the addition of new schemas are done by human intervention, evaluating and correcting possible inconsistencies. Although this approach solves CC/PP's resolution problems, it is not flexible, since additions have to wait for evaluation by the specification working group before being integrated into the stable version of the vocabulary.

Another way available in the literature to represent context explores the use of ontologies. This is the case of the Context Broker Architecture (CoBrA) [15,16], which defines an agent-based architecture to support context-aware computing in intelligent spaces, and of the Context Ontology (CONON) [3], which proposes an ontology to model context in pervasive computing environments. The study of ontologies in this area is still in progress; besides, their construction and the integration of different ontologies are very complex and difficult.

Our approach to describing devices is based on our previous XML Schema representations of context [12] (see Section 3). Unlike CC/PP, that model was specifically designed to represent contextual information. In order to provide device descriptions, we have built device descriptors that can be used to characterize a device contextual entity in a hierarchical way. Also unlike CC/PP's RDF-based solution, our XML Schema-based approach allows specifying ranges for attributes and describing device compositions. For now, we have not specified resolution rules, so we have adopted the WURFL strategy in order to guarantee vocabulary consistency.

3. Context Representation

The context definition proposed by Dey [2] fills the gap left by many authors who defined context in a restricted way, based on examples [2]. Dey [2] proposed a more generic definition: any information that can be used to characterize the situation of an entity involved in the interaction between a user and an application is context.

In this way, the developer needs to decide which information can be considered context. To ease this task, it is possible to use the guidelines "who", "where", "when" and "what", which respectively represent the primary context types: identity, location, time and activity. These guidelines, when related to some entity, can be used to determine "why" a specific situation is happening [2,20].

Proceedings of the Third Latin American Web Congress (LA-WEB’05) 0-7695-2471-0/05 $20.00 © 2005 IEEE

According to recent literature, context classification is still an open question: authors focus on information related to the user and the infrastructure, leaving aside information related to the system, application, domain and environment. This problem occurs, in part, because of the difficulty of defining the boundaries among those entities.

In this way, in a previous work [12], we defined a framework based on Dey's definition [2], which offers: (a) a representation for context; (b) the means to define contextual entities, classifying them according to the context framework model; and (c) an extensible set of pre-defined contextual elements, the context library. The representation of context is organized by classifying the context types into infrastructure, system, user and application, as shown in Figure 1.

Figure 1. System's context components

According to Figure 1, it is possible to visualize the context types' organization. The rectangles symbolize entities with a self-contained representation, and the dashed rectangles symbolize entities composed of other entities. Users, applications and the infrastructure compose the computational system. Users can use devices (personal computers, PDAs, remote controls, and others) to access applications and request a specific job. Applications use devices (printers, projectors, and others) and services (print servers, file transfer services, search engines, and others) from the infrastructure to do the requested job. The communication among users, devices and applications is done through the network.

The context information representation was developed using an XML namespace that includes an extensible library of structured context elements, built in XML Schema [17].

In our framework, each context type can be defined by extending primary contexts. Primary contexts are represented by the abstract type PrimaryContextType, which is composed of the abstract types IdentityType, LocationType, TimeType and ActivityType, which are, respectively, representations of the primary contexts identity, location, time and activity.

Using these base classes it is possible to build specialized ones, classified according to the primary context (identity, location, time or activity). We use this approach to define a context library formed by a set of these specialized classes; each library item is an XML Schema type representing context information useful to characterize an entity.

The framework aims at facilitating the developer's task: when describing a context entity, a developer chooses from the library the information that best characterizes the entity. If the library does not have the right contextual information representation, developers can create their own. Since PrimaryContextType and its components are abstract, XML complex type extensions are allowed. For example, when creating a User entity through the UserType, this type must be an extension of PrimaryContextType, as in Figure 2, which implies that UserType inherits all elements defined in PrimaryContextType (the entire context library).

<schema xmlns:ctx="http://www.icmc.usp.br/~rudinei/Context/" …>
  <complexType name="UserType">
    <complexContent>
      <extension base="ctx:PrimaryContextType"/>
    </complexContent>
  </complexType>
  …
</schema>

Figure 2. Definition of UserType as an extension of PrimaryContextType

When describing a user, the developer chooses the pieces from the library that best match his interests. This is done using xsi:type, the XML Schema mechanism used to indicate which implementation of an abstract type is being used [18] (Figure 4). If the library does not have the right piece of information, for example to represent a nickname, developers extend one of the PrimaryContextType components, in this case IdentityType (Figure 3), so that NickNameType can be used (Figure 4).

<complexType name="NickNameType">
  <complexContent>
    <extension base="ctx:IdentityType">
      <sequence>
        <element name="NickName" type="string"/>
      </sequence>
    </extension>
  </complexContent>
</complexType>

Figure 3. Example of creating a new user type

<ctx:User>
  <ctx:Identity xsi:type="ctx:NickNameType">
    <ctx:NickName>Rudy</ctx:NickName>
  </ctx:Identity>
</ctx:User>

Figure 4. Example of using a new user type

Proceedings of the Third Latin American Web Congress (LA-WEB’05) 0-7695-2471-0/05 $20.00 © 2005 IEEE

Figure 5. Devices’ representation

4. Device Representation

Our previous work classified context and provided the means to describe context information in a structured way. In addition, we defined a user model whose instances are sets of contextual elements from the context library.

A device can be defined as a piece of hardware, or part of one, used to achieve a specific purpose [18,19]. Starting from this definition, we propose a representation for the device context type, as shown in Figure 5. As an extension of our previous work, we have defined a model for the devices element (Figure 1): the device model.

Figure 5 shows an XMLSpy IDE [23] representation of the extended model, with the devices element implemented as an XML Schema. It should be clear that the devices element represents the proposed device model. The Infrastructure element is composed of Network, Services and Devices, and the Devices element is composed of Device elements. The dashed rectangles indicate that an element is not mandatory. The Device element is composed of the abstract context type PrimaryContextType and/or of other Device elements.

As seen in the Introduction, it is important to describe contextual elements in a composite way, so that the description is well structured, easing the parsers' job. In particular, the use of composition in device representation can also facilitate the generation of device descriptor documents, since devices are composed of many other devices, each one with specific characteristics.

In this way, our specification can simplify the device composition task. A device can be represented using the abstract context type PrimaryContextType, including its four extended abstract types (IdentityType, LocationType, TimeType and ActivityType), or as a composition of its components, which are seen as devices too. For example, a laptop can be represented by its basic characteristics, such as model (Identity), IP address (Location), and others, or by its devices (or components), such as hard disk, display, memory, and others. The proposed model lets users choose between a simplified representation and a specific device representation.
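To make the composed option concrete, the sketch below (Python, standard library only) parses a hypothetical composed description and lists its component devices. Only the Device/Identity nesting follows our model; the LaptopType, MonitorType and HardDiskType names and the model strings are invented for illustration.

```python
import xml.etree.ElementTree as ET

XSI = "http://www.w3.org/2001/XMLSchema-instance"

# A hypothetical composed description: a Laptop device containing two
# component devices, each with its own Identity.
doc = f"""
<Device xmlns:xsi="{XSI}" xsi:type="LaptopType">
  <Identity xsi:type="DeviceIdentityType">
    <DeviceID><ModelName>ExampleBook</ModelName></DeviceID>
  </Identity>
  <Device xsi:type="MonitorType">
    <Identity xsi:type="DeviceIdentityType">
      <DeviceID><ModelName>14-inch panel</ModelName></DeviceID>
    </Identity>
  </Device>
  <Device xsi:type="HardDiskType">
    <Identity xsi:type="DeviceIdentityType">
      <DeviceID><ModelName>80GB disk</ModelName></DeviceID>
    </Identity>
  </Device>
</Device>
"""

def component_types(xml_text: str) -> list[str]:
    """Return the xsi:type of each directly nested component Device."""
    root = ET.fromstring(xml_text)
    return [child.get(f"{{{XSI}}}type") for child in root.findall("Device")]

print(component_types(doc))  # ['MonitorType', 'HardDiskType']
```

An adaptation engine can thus walk the hierarchy and reason about each component separately, which a flat attribute-value profile does not permit.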

Once the way a device is represented in the conceptual model has been defined, the next step is to define the devices' basic library. In the same way it was done for users in the previous work, devices are defined based on the four conceptual abstract types: identity, describing a device's basic characteristics, such as its model; location, which can be physical or virtual; information related to time, which may contain, for example, the time when a device is usually used; and, finally, the activities related to a device, which may describe the jobs done by a specific device.

The device’s characteristics used in our model were designed based on a survey about the most common vendors (IBM, Intel, Samsung and HP) specifications (model, product line, brand name and serial number, etc.). This survey was not a exaustive list. At this time it represents a core of common characteristics which are validated against the equipments we are using.

Starting from the users' contextual descriptors available in the library for identity, location, time and activity, the most significant improvement needed to describe devices is related to identity. Device identification is very important for the system, since an application might need to know which device a user is using to access data. Devices can be identified using the <DeviceID> element.

Proceedings of the Third Latin American Web Congress (LA-WEB’05) 0-7695-2471-0/05 $20.00 © 2005 IEEE

Although <DeviceID> can be used to identify a device, it does not give us any information about the device's features. In order to ease feature description, these characteristics were grouped by similarity. First, we created basic types, such as storage (StorageType), speed (SpeedType), connector (ConnectorType), and others; from these basic types, we defined the SpecTypes, which describe specific devices' characteristics, such as a hard disk. This organization eases the reuse of types, since we can combine two or more basic types. Figure 6 shows some possible SpecType examples.
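As a rough analogue of this grouping, the basic types and their composition into a SpecType can be pictured as Python dataclasses. The class and field names below merely mirror the MemorySpecType/SpeedType/StorageType structure of our library and are not part of the specification itself.

```python
from dataclasses import dataclass

# Basic types, reusable across many SpecTypes.
@dataclass
class Speed:
    value: str   # e.g. "333"
    scale: str   # "Hz", "KHz", "MHz", "GHz", "THz", "rpm" or "none"

@dataclass
class Storage:
    size: str    # e.g. "512"
    scale: str   # e.g. "Mbyte"

# A SpecType combines basic types into one device-specific description.
@dataclass
class MemorySpec:
    mem_type: str       # restricted set: DDR, SDRAM, CompactFlash, ...
    mem_speed: Speed    # reuses the Speed basic type
    mem_size: Storage   # reuses the Storage basic type

spec = MemorySpec("SDRAM", Speed("333", "MHz"), Storage("512", "Mbyte"))
print(spec.mem_speed.scale)  # MHz
```

The reuse is the point: a DiskSpec or ProcessorSpec could combine the same Speed and Storage building blocks instead of redefining them.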

The new device specification allows users to extend the library, adding new devices or features, preserving the extensibility of the previous work, as seen in Section 3.

Figure 6. Examples of SpecType utilization

The library extension is done using XML Schema, in order to validate the documents that describe a new or existing device. First, we must define the specific device as an extension of the abstract device type. This is shown in Figure 7.

<complexType name="MemoryType">
  <complexContent>
    <extension base="ctx:DeviceType"/>
  </complexContent>
</complexType>

Figure 7. Extending the library by creating the MemoryType device

Next, we must define the device's specification. This is done by identifying a SpecType, which can have basic characteristics, like MemType, as shown in Figure 8.

Observing Figure 8, one can see that the <MemType> value is restricted to a fixed range, so users can insert another value only by specifying a new item in the XML Schema. This feature is useful to prevent different vendors from assigning different values to the same functionality, which would generate inconsistency. It should be noticed that CC/PP does not have such a feature.
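A minimal sketch of what this enumeration buys, transplanted to Python: a validator that admits only the fixed MemType range rejects values outside it, just as an XML Schema validator would reject a document violating the restriction. The rejected vendor-specific spelling below is invented for illustration.

```python
# Allowed values mirror the <enumeration> facets of MemType (Figure 8).
ALLOWED_MEM_TYPES = {"DDR", "SDRAM", "CompactFlash", "Smart Media",
                     "Memory Stick", "MMC", "RAMBus"}

def validate_mem_type(value: str) -> str:
    """Accept only values from the fixed range; anything else is an
    inconsistency that schema validation would catch."""
    if value not in ALLOWED_MEM_TYPES:
        raise ValueError(f"'{value}' is not in the MemType enumeration")
    return value

validate_mem_type("SDRAM")         # accepted
try:
    validate_mem_type("sdram-II")  # hypothetical vendor spelling: rejected
except ValueError as e:
    print(e)
```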

Besides identifying a SpecType in the device's specification, we can also use a basic type, exemplified by the <MemSpeed> field shown in Figure 8, which uses the SpeedType defined in Figure 9.

<complexType name="MemorySpecType">
  <sequence>
    <element name="MemType">
      <simpleType>
        <restriction base="string">
          <enumeration value="DDR"/>
          <enumeration value="SDRAM"/>
          <enumeration value="CompactFlash"/>
          <enumeration value="Smart Media"/>
          <enumeration value="Memory Stick"/>
          <enumeration value="MMC"/>
          <enumeration value="RAMBus"/>
        </restriction>
      </simpleType>
    </element>
    <element name="MemSpeed" type="ctx:SpeedType"/>
    <element name="MemSize" type="ctx:StorageType"/>
  </sequence>
</complexType>

Figure 8. Basic characteristics of MemType

<complexType name="SpeedType">
  <all>
    <element name="Speed" type="string"/>
    <element name="Scale">
      <simpleType>
        <restriction base="string">
          <enumeration value="Hz"/>
          <enumeration value="KHz"/>
          <enumeration value="MHz"/>
          <enumeration value="GHz"/>
          <enumeration value="THz"/>
          <enumeration value="rpm"/>
          <enumeration value="none"/>
        </restriction>
      </simpleType>
    </element>
  </all>
</complexType>

Figure 9. SpeedType definition

Finally, we must add the new SpecType to the <DeviceIdentityType>, so that the user can use the new specification in his device description, as shown in Figure 10.

Figure 11 shows an example that describes a new device, in this case a memory device.

5. Content Personalization

As computing becomes more pervasive and network access via mobile devices becomes usual, users expect to interact with services and applications anywhere, anytime. Besides that, it is useful for users to access services in a device-independent way, allowing the same service or content to be transparently accessed by


<complexType name="DeviceIdentityType">
  <complexContent>
    <extension base="ctx:IdentityType">
      <sequence>
        <element name="DeviceID" type="ctx:DeviceIDType"/>
        <element name="MemorySpec" type="ctx:MemorySpecType" minOccurs="0"/>
      </sequence>
    </extension>
  </complexContent>
</complexType>

Figure 10. Example of DeviceIdentityType definition

<Device xsi:type="MemoryType">
  <Identity xsi:type="DeviceIdentityType">
    <DeviceID>
      <BrandName>PDP</BrandName>
      <ModelName>Patriot</ModelName>
      <ProductLine>PDC22G4200 XBLK</ProductLine>
    </DeviceID>
    <MemorySpec>
      <MemType>SDRAM</MemType>
      <MemSpeed>
        <Speed>333</Speed>
        <Scale>MHz</Scale>
      </MemSpeed>
      <MemSize>
        <Size>512</Size>
        <Scale>Mbyte</Scale>
      </MemSize>
    </MemorySpec>
  </Identity>
</Device>

Figure 11. Example of using a new SpecType

fixed and mobile devices. However, as stated before, mobile devices have restrictions when accessing content. These restrictions are related to small screen sizes, wireless network speeds, and low battery, memory and processor capacity.

In this scenario, one challenge is to provide rich interactive multimedia content access from a variety of devices (mobile and non-mobile), matching device restrictions and user interests.

In order to meet those expectations, applications need to know the system context, using contextual information to adapt the system and provide valuable personalized services to users. That information includes user and device descriptions.

In this section we show how we have used the device descriptions presented in Section 4 in our content adaptation prototype, in order to adapt video content to match device restrictions.

The prototype is based on the MPEG-4 standard [21], which allows a multimedia presentation to be composed of a set of multimedia objects (audio, video, graphics and images). MPEG-4 presentations can be composed by means of an XMT (eXtensible MPEG-4 Textual format [22]) file, which combines MPEG-4 objects into an MPEG-4 scene and allows adding interactive behavior to each object.

XMT files are used by MPEG-4 coders in order to generate the final MPEG-4 presentations, which are themselves MPEG-4 video files.

Figure 12. Schematic view of the personalization service (components: repository, original video's XMT file, user's preferences, system's context descriptions, matching analysis, coding, new video XMT file, MPEG-4 video file)

Figure 12 shows a schematic view of the prototype. The repository stores the multimedia objects in the MPEG-4 format. Based on user preferences [12] and on the video's XMT file, the prototype selects content (video objects) of interest to the user in order to build a personalized presentation. Before building the presentation, the prototype analyzes the frame size and the frame rate (fps) of each video component of the original presentation. This information is validated against the device description in order to perform a matching analysis. This analysis is, for now, based on a fixed table containing frame size / fps tuples. If the device's screen size is smaller than the frame size from a tuple, the corresponding media is re-encoded, generating a new version whose frame size matches the device's screen size.
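The matching analysis described above can be sketched as follows. The table entries, their ordering and the fallback policy are assumptions for illustration, not the prototype's actual values.

```python
# Fixed table of (frame_width, frame_height, fps) tuples, ordered from
# largest to smallest frame size. The entries here are invented examples.
FRAME_TABLE = [(640, 480, 30), (352, 288, 25), (176, 144, 15)]

def match_version(screen_w: int, screen_h: int):
    """Pick the largest table entry that fits the device screen; media
    larger than the screen would be re-encoded down to this size."""
    for w, h, fps in FRAME_TABLE:
        if w <= screen_w and h <= screen_h:
            return (w, h, fps)
    return FRAME_TABLE[-1]  # smallest version as a fallback

# A 1024x768 desktop keeps the 640x480 original; a 320x240 PDA screen
# triggers re-encoding down to 176x144.
print(match_version(1024, 768))  # (640, 480, 30)
print(match_version(320, 240))   # (176, 144, 15)
```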

After the matching analysis, the prototype generates a new XMT file that points to the new versions of the video objects. This file is sent to the coder, which, in turn, generates the MPEG-4 presentation. As illustrated in Figure 12, this presentation is a video that contains only pointers to the real media stored in the repository.

Figure 13 shows an MPEG-4 presentation with a 640 x 480 frame size being comfortably viewed on a desktop PC with a 1024 x 768 resolution display. Figure 14 shows the desktop PC descriptor document, which was created using our device representation framework, as presented in Section 4. Note that the description is organized in a composite way: devices such as the Monitor and the Video Card are part of the Desktop description. This device-structured approach helps the adaptation process, since some kinds of components are linked to each other. For instance, to describe display features it is necessary to describe the monitor capabilities as well as the video card's.
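As an illustration of why linked components matter, one possible policy is to take the component-wise minimum of the monitor's and the video card's <MaxResolution> values as the effective display capability. This combination rule is our assumption for the sketch, not necessarily what the prototype implements.

```python
def parse_res(res: str) -> tuple:
    """Parse a '1024x768'-style <MaxResolution> value into (w, h)."""
    w, h = res.split("x")
    return int(w), int(h)

def effective_resolution(monitor_max: str, card_max: str) -> tuple:
    """The display can drive at most what BOTH components support, so
    take the component-wise minimum of the two maxima."""
    mw, mh = parse_res(monitor_max)
    cw, ch = parse_res(card_max)
    return min(mw, cw), min(mh, ch)

# Values from Figure 14: the 1024x768 monitor caps the video card's
# 2048x1536 capability.
print(effective_resolution("1024x768", "2048x1536"))  # (1024, 768)
```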

The content access is made via a wired network. This presentation is based on a sports user profile (user preference descriptions), and the user has three video options (the three buttons at the bottom of Figure 15 (front)): national, regional and live. The first two options are 352 x 288 frame size videos stored as MPEG-4 in the repository. The last one is live video captured from a webcam connected to the server. The interaction options allow users to select a button in order to watch the desired video. Once the video starts playing, users can click on it and the video goes "full screen" (640 x 480), as depicted in Figure 15 (back).

Figure 13. MPEG-4 presentation at a desktop PC

<Device xsi:type="DesktopType">
  <Identity xsi:type="DeviceIdentityType">
    <DeviceID>
      <BrandName>generic</BrandName>
      <ModelName>2giga Asus</ModelName>
      <ProductLine>PentiumIV</ProductLine>
    </DeviceID>
  </Identity>
  <Device xsi:type="MonitorType">
    <Identity xsi:type="DeviceIdentityType">
      <DeviceID>
        <BrandName>Philips</BrandName>
        <ModelName>105S</ModelName>
        <ProductLine>Monitor 15"</ProductLine>
      </DeviceID>
      <DisplaySpec>
        <ScreenModel>CRT</ScreenModel>
        <ColorResolution>32</ColorResolution>
        <ScreenResolution>1024x768</ScreenResolution>
        <VideoStandard>SVGA</VideoStandard>
        <ScreenSize>15"</ScreenSize>
        <MaxResolution>1024x768</MaxResolution>
        <Connector>Mini D-15</Connector>
      </DisplaySpec>
    </Identity>
  </Device>
  <Device xsi:type="VideoCardType">
    <Identity xsi:type="DeviceIdentityType">
      <DeviceID>
        <BrandName>Nvidia</BrandName>
        <ModelName>FX5200</ModelName>
        <ProductLine>GeForce</ProductLine>
      </DeviceID>
      <DisplaySpec>
        <ColorResolution>32</ColorResolution>
        <ScreenResolution>1024x768</ScreenResolution>
        <VideoStandard>SVGA</VideoStandard>
        <MaxResolution>2048x1536</MaxResolution>
        <Connector>AGP</Connector>
      </DisplaySpec>
    </Identity>
  </Device>
</Device>

Figure 14. Desktop PC representation
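The composite structure of Figure 14 can be exploited directly by an adaptation server. A minimal sketch, assuming the effective display limit is bounded by the weakest nested component (the function name and the trimmed descriptor are illustrative; the real documents also carry `xsi:type` attributes, which require declaring the XMLSchema-instance namespace before parsing):

```python
import xml.etree.ElementTree as ET

def effective_max_resolution(descriptor_xml):
    """Return the smallest MaxResolution among nested Device elements.

    A composite device can display no more than its weakest component
    allows, e.g. the monitor's 1024x768 rather than the video card's
    2048x1536 in the descriptor of Figure 14.
    """
    root = ET.fromstring(descriptor_xml)
    resolutions = []
    for node in root.iter("MaxResolution"):
        w, h = (int(v) for v in node.text.split("x"))
        resolutions.append((w, h))
    return min(resolutions, key=lambda wh: wh[0] * wh[1])
```

Because components such as Monitor and Video Card are nested Device elements, a single traversal collects all linked display constraints, which is the ease-of-parsing benefit argued above.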

Figure 15. Example of interaction in a MPEG-4 presentation

Trying to directly access that presentation on a PDA results in a poor visualization. As shown in Figure 16, the user can see only a portion of the original image.

Figure 16. Non-compatible video size at a PDA

Using the prototype, the PDA device description is sent to the server together with the presentation request. Figure 17 shows the PDA description document.

<Device xsi:type="PDAType">
  <Identity xsi:type="DeviceIdentityType">
    <DeviceID>
      <BrandName>HP</BrandName>
      <ModelName>xxxxx</ModelName>
      <ProductLine>iPAQ</ProductLine>
    </DeviceID>
    <DisplaySpec>
      <ScreenModel>TFT</ScreenModel>
      <ColorResolution>16</ColorResolution>
      <ScreenResolution>240x320</ScreenResolution>
      <VideoStandard>none</VideoStandard>
      <ScreenSize>3.5"</ScreenSize>
      <MaxResolution>240x320</MaxResolution>
      <Connector>none</Connector>
    </DisplaySpec>
  </Identity>
</Device>

Figure 17. PDA representation
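The paper does not specify the wire protocol used to ship the descriptor with the presentation request; one plausible sketch is an HTTP POST carrying the descriptor document as the request body (the URL scheme, path, and function name below are assumptions for illustration only):

```python
from urllib.request import Request

def build_presentation_request(server_url, presentation_id, descriptor_xml):
    """Build an HTTP request carrying the device descriptor in its body."""
    return Request(
        url=f"{server_url}/presentations/{presentation_id}",
        data=descriptor_xml.encode("utf-8"),
        headers={"Content-Type": "application/xml"},
        method="POST",
    )

req = build_presentation_request("http://example.com", "sports", "<Device/>")
```

On arrival, the server parses the descriptor, runs the matching analysis against the stored presentation's frame sizes, and only then assembles the adapted MPEG-4 presentation.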

After the matching analysis, the new, adapted presentation is sent back to the PDA; Figure 18 shows an example of a 320 x 240 frame size presentation with the videos (including the live video) re-encoded at a 176 x 144 frame size.

Figure 18. Adapted presentation

Figure 19 shows an example of a presentation assembled using the “news” user preference profile.

(a) (b)

Figure 19. Example of news profile

6. Final Remarks

In this paper, we have presented an alternative approach to describing devices in context-aware systems. The approach is based on XML Schema, and our goal is to reach a middle ground between ontology-based solutions, whose constructions are complex, and CC/PP, which, although useful for device representation, is not adequate for representing contextual information.

The present work fills some gaps in the CC/PP specification, such as the lack of fixed ranges for attribute values and the inability to represent devices in a composite way; composition makes it possible to organize the device descriptor document and eases the parsers’ job.

As future work, we plan to evolve the vocabulary resolution approach adopted for the device library, which is based on WURFL [14], into a dynamic one.

We are also investigating automatic strategies to adapt content. This includes obtaining the network context and, with the help of the presentation characteristics we already have (such as frame size and frame rate), adapting the bit rate in order to achieve a more accurate video transmission when a user changes his reception device.

We also plan to improve our survey of vendor specifications, describing device characteristics exhaustively. Furthermore, dynamic adaptation rules and the application of these concepts to other media formats are also part of future work.

Acknowledgements

The authors would like to thank CAPES. This work is part of VIMOS (Video Mobility and Security) project supported by CNPq.

References

[1] G.D. Abowd, E.D. Mynatt, T. Rodden, “The Human experience”, IEEE Pervasive Computing, v. 1, n. 1, 2002, pp. 48-57.

[2] A.K. Dey, “Understanding and using context”, ACM Personal and Ubiquitous Computing Journal, v. 5, n. 1, Springer-Verlag, 2001, pp. 4-7.

[3] X.H. Wang, D.Q. Zhang, T. Gu, H.K. Pung, “Ontology Based Context Modeling and Reasoning using OWL”, Proceedings of the Workshop on Context Modeling and Reasoning (CoMoRea'04), Orlando, USA, March, 2004.

[4] T. Lemlouma, N. Layaïda, “SMIL Content Adaptation for Embedded Devices”, SMIL Europe 2003, Paris, February, 2003.

[5] J. Huang, C. Krasic, J. Walpole, “Adaptive Live Video Streaming by Priority Drop”, IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS'03), Miami, USA, July, 2003.

[6] T. Gu, H.K. Pung, D.Q. Zhang, “A middleware for building context-aware mobile devices”, IEEE Vehicular Technology Conference, Los Angeles, USA, September, 2004.

[7] DELI home-page: http://delicon.sourceforge.net/

[8] J. Indulska, R. Robinson, A. Rakotonirainy, K. Henricksen, “Experiences in Using CC/PP in Context-Aware Systems”, Proceedings of the 4th International Conference on Mobile Data Management, Springer-Verlag, Melbourne, Australia, January, 2003, pp. 247-261.

[9] WAG UAPROF: Wireless Application Group – User Agent Profile Specification, WAP Forum, November, 10, 1999.

[10] The CC/PP Working Group home-page: http://www.w3.org/Mobile/CCPP/

[11] M.H. Butler, “CC/PP and UAProf: Issues, Improvements and Future Directions”, W3C Workshop on Delivery Context, Hewlett-Packard, Sophia-Antipolis, France, March, 2002.

[12] R. Goularte, E.S. Moreira, M.G.C. Pimentel, “Structuring Interactive TV Documents”, DocEng’03, ACM Press, Grenoble, France, November, 2003, pp. 42-51.

[13] D. Salber, A.K. Dey, G.D. Abowd, “The Context Toolkit: Aiding the Development of Context-Enabled Applications”, Proceedings of CHI'99, ACM Press, Pittsburgh, PA, May, 1999.

[14] WURFL home-page: http://wurfl.sourceforge.net/

[15] H. Chen, T. Finin, “An Ontology for Context-Aware Pervasive Computing Environments”, Workshop on Ontologies and Distributed Systems IJCAI, Acapulco, Mexico, August, 2003.

[16] H. Chen, T. Finin, A. Joshi, “Semantic Web in a Pervasive Context-Aware Architecture”, Artificial Intelligence in Mobile System 2003 (AIMS), Seattle, USA, October, 2003, pp. 33-40.

[17] XML Schema home-page: http://www.w3.org/XML/Schema

[18] Wikipedia home-page: http://en.wikipedia.org/

[19] Dictionary.com home-page: http://dictionary.reference.com/

[20] G.D. Abowd, E.D. Mynatt, “Charting past, present, and future research in ubiquitous computing”, ACM Transactions on Computer-Human Interaction (TOCHI), v. 7, n. 1, 2000, p. 29-58.

[21] International Organization for Standardization, MPEG-4 Overview, 2002. Available at: http://www.chiariglione.org/mpeg/standards/mpeg-4/mpeg-4.htm.

[22] M. Kim, S. Wood, “Extensible MPEG-4 Textual Format”. In F. Pereira, T. Ebrahimi, The MPEG-4 Book, Upper Saddle River, Prentice Hall PTR, 2002, pp. 187-225.

[23] XMLSpy home-page: http://www.altova.com/products_ide.html
