
SEVENTH FRAMEWORK PROGRAMME
THEME – ICT

[Information and Communication Technologies]

Contract Number: 224395
Project Title: Integrated Platform for Autonomic Computing
Project Acronym: IPAC

Deliverable Number: D21
Deliverable Type: Contractual
Date of Delivery: 31 July 2008
Actual Date of Delivery: 05 August 2008
Title of Deliverable: State of the Art
Work-Package contributing to the Deliverable: WP2
Nature of the Deliverable: Report
Author(s): C. Anagnostopoulos, G. D'Angelo, J.-D. Decotignie, K. Gerakos, S. Hadjiefthymiades, C. Kassapoglou-Faist, K. Kolomvatsos, D. Kotsakos, C. Panayiotou, V. Papataxiarhis, T. Raptis, G. Samaras, O. Sekkas, D. Spanoudakis, A. Rongas, V. Tsetsos

Abstract: A general and extensive survey presenting the relevant enabling technologies. Such technologies include sensor equipment, communication systems, autonomic computing, frameworks and languages for building service creation environments, etc. Moreover, some state-of-the-art algorithms and research works in the area of context-aware computing and information dissemination are presented.

Keyword List: Communication technologies, GUI frameworks, middleware technologies, data modelling

Copyright by the IPAC Consortium

(Co-ordinator) Siemens A. E. Electrotechnical Projects and Products SAE Greece
Contractor National and Kapodistrian University of Athens NKUA Greece
Contractor Centre Suisse d’Electronique et de Microtechnique SA CSEM Switzerland


Contractor CENTRO RICERCHE FIAT S.C.p.A. CRF Italy
Contractor Hellenic Ministry of Defence HMOD Greece
Contractor University of Cyprus UCY Cyprus

Table of Contents

1 Introduction
2 Short range communications technology
  2.1 IEEE 802.11
  2.2 Bluetooth (IEEE 802.15.1)
  2.3 IEEE 802.15.4 and ZigBee
  2.4 DECT
  2.5 Wireless HART
  2.6 Wireless Sensor Networks
  2.7 Wisenet
  2.8 DSRC
3 Sensor management
  3.1 IEEE 1451 Overview
  3.2 SensorML overview
4 Data modelling technologies based on the XML standard
  4.1 Features of XML
  4.2 Strengths of XML
  4.3 Weaknesses of XML
  4.4 Data modelling with XML
5 Service Creation Environments
  5.1 Development platforms, GUI frameworks and technologies for SCEs
    5.1.1 Development Platforms for SCEs
    5.1.2 Emulators
    5.1.3 Service Description Languages
      5.1.3.1 Various XML-based Service Description Languages
  5.2 Relevant and Popular Service Creation Environments
6 Services/Middleware technologies
  6.1 Middleware Technologies
    6.1.1 Development Platforms
    6.1.2 Java Virtual Machines for Embedded and Mobile Systems
    6.1.3 OSGi (Open Service Gateway initiative)
    6.1.4 Technologies, Protocols and Frameworks for Agent-based Systems
  6.2 IPAC-like middleware frameworks
    6.2.1 Sensor Computing-oriented Middleware
  6.3 Autonomic and Reconfigurable Systems
  6.4 Technologies for the Embedded Knowledge Plane
    6.4.1 Knowledge Representation Languages
    6.4.2 Language Serialization Modes
    6.4.3 Policy Representation Languages
    6.4.4 Mobile Reasoning Techniques and Tools
    6.4.5 Models for Autonomic Computing Platforms
7 Information dissemination and collective context awareness algorithms
  7.1 Information Dissemination in Nomadic Environments
  7.2 Context Dissemination for Collaborative Context-awareness
8 Similar past and existing EU projects
9 Conclusion


1 Introduction

This report presents a general and extensive survey of the enabling technologies relevant to advanced context awareness. The survey will support the selection of appropriate enabling technologies for the development of the IPAC system, once the detailed analysis and specification of the IPAC platform requirements has been completed. The relevant technologies include sensor equipment, communication systems, autonomic computing, etc. Enabling technologies for each component were examined; they include, among others:

- Short range communication technologies
- Sensor management
- GUI frameworks like the Eclipse and NetBeans platforms
- Data modelling technologies based on the XML standard
- Services/Middleware technologies such as OSGi, J2ME or other Java technologies appropriate for handheld devices
- Information dissemination and collective context awareness algorithms.

The survey is completed by an assessment of suitable technologies for building the IPAC nodes and the application creation environment. Moreover, an extensive survey of past and existing European projects related to IPAC has been performed, in order to investigate the possibility of exploiting their results and using them as a guide for achieving IPAC's objectives.

2 Short range communications technology

2.1 IEEE 802.11

IEEE 802.11 is an ISO/IEC international standard (ISO/IEC 8802-11) originally defined by the IEEE. It is a wireless LAN intended for the SOHO (Small Office/Home Office) environment, although experiments have been made with transmission over distances of up to 30 km. The standard only defines the physical and medium access control layers. The former originally came in three versions: a basic version with a raw bit rate of 1 and 2 Mbit/s, an extended version (802.11b) at 11 Mbit/s and a high-speed version (802.11a) at bit rates up to 54 Mbit/s. The first two operate in the 2.4 GHz band while the latter uses the 5 GHz ISM band. More recently, extensions to 54 Mbit/s (802.11g) and 270 Mbit/s (802.11n) in the 2.4 GHz band have been released. The original version of the standard specified infrared transmission as well as frequency-hopping and direct sequence spread spectrum (DSSS) techniques on radio channels. Nowadays, only the DSSS technique is available. The physical layer of the high-speed versions uses orthogonal frequency division multiplexing (OFDM) on 52 subcarriers modulated with binary or quadrature phase shift keying (BPSK/QPSK), 16-QAM (Quadrature Amplitude Modulation) or 64-QAM.

CSMA with collision avoidance is used for medium access control. Collisions are partially avoided by sending reservation frames (RTS) that are acknowledged by the receiver with a CTS frame. Both frames include the reservation duration. When receiving one or both of these frames, all other stations refrain from transmitting any traffic. This mode of operation corresponds to the DCF (Distributed Coordination Function). In this mode, there is no temporal guarantee. When guarantees are required, the PCF (Point Coordination Function) can be used. A special node, the point coordinator, is used to create a cycle in the traffic. This cycle is split in two parts. In the first part, there is no contention and medium access is managed by the point coordinator. In the second part, all nodes can compete as in the previous mode. None of the currently available products implement PCF, which is now obsolete. 2007 saw the approval of a QoS-oriented version of the standard (802.11e) that adds new rules to the medium access control while keeping compatibility. It defines 4 classes of traffic and an equivalent of the PCF called HCF (Hybrid Coordination Function). IEEE 802.11e keeps compatibility with the previous versions.

IEEE 802.11 operates in two different modes, ad-hoc and infrastructure. In the infrastructure mode, special nodes, the access points, are connected to conventional networks (typically Ethernet) and act as coordinators for the traffic on the wireless side and as bridges between the wired and the wireless parts. In this mode, wireless nodes associate with an access point and all traffic either originates from or goes to the access point. In the ad-hoc mode, wireless nodes in direct visibility may communicate directly from node to node, all nodes being equal. The standard also defines encryption to protect the transmissions.

IEEE 802.11 is possibly the most widely spread wireless local area network technology. It suffers from two main drawbacks: its high power consumption and the absence of a good solution for mesh networks. Mesh networks may be supported if all nodes become routers. Solutions are often based on IP, with limited resulting performance. There is also an emerging IEEE 802.11s standard that plans to provide multi-hop (mesh) communications at the link layer. There is, however, no satisfactory solution for the time being.
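The contention behaviour of the DCF described above can be summarised in a few lines of code. The following Java fragment is a simplified, illustrative sketch of the exponential backoff used by CSMA/CA; the contention window bounds and the slot handling are assumptions made for illustration and do not reproduce the exact 802.11 parameters.

// Simplified sketch of CSMA/CA exponential backoff (illustrative only;
// contention window bounds are assumptions, not the exact 802.11 values).
import java.util.Random;

public class DcfBackoffSketch {
    private static final int CW_MIN = 15;   // assumed minimum contention window
    private static final int CW_MAX = 1023; // assumed maximum contention window
    private final Random rng = new Random();
    private int cw = CW_MIN;

    /** Number of idle slots to wait before (re)trying a transmission. */
    public int nextBackoffSlots() {
        return rng.nextInt(cw + 1);
    }

    /** Called when a transmission fails (no CTS/ACK): double the window. */
    public void onCollision() {
        cw = Math.min(2 * (cw + 1) - 1, CW_MAX);
    }

    /** Called after a successful exchange: reset the window. */
    public void onSuccess() {
        cw = CW_MIN;
    }
}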

2.2 Bluetooth (IEEE 802.15.1)

Bluetooth is an industry standard for short-range radio in office and home environments. The initial version (V1.1) has served as a basis for IEEE 802.15.1. Bluetooth uses the 2.4 GHz ISM licence-free band available worldwide (although with some restrictions in some countries). Bluetooth networks are composed of piconets. Interconnected piconets in the same geographical area form a scatternet. A piconet has a single master and up to 7 active slaves. Many more slaves may be synchronised with the piconet traffic but cannot participate actively (park mode). Bluetooth operates in the 10 metre range with extensions to longer distances (similar to 802.11). It uses a frequency-hopping spread spectrum technique on 79 carrier frequencies. Each hopping sequence lasts approximately 24 hours. The raw bit rate is 1 Mbit/s (3 Mbit/s in the extended mode).

Traffic is master-slave, with the piconet master polling a single slave in one hop. The slave has to answer in the next hop. The transaction is repeated in case of error. The order in which slaves are polled is not defined by the protocol but depends on the pending traffic. Besides this sporadic traffic, Bluetooth supports real-time isochronous traffic. Each isochronous connection uses one third of the available bandwidth and up to 3 isochronous connections can be established. A node can participate in more than a single piconet but cannot be the master of more than one, as the hopping sequence and its phase are defined by the master's identity and clock phase. Communication between piconets (scatternet) is provided by nodes that participate in two piconets and forward the information from one piconet to the next. This aspect is not standardised.

Bluetooth offers authentication and ciphering to protect the communication. It also includes a service discovery protocol that allows a node with a given capability to be found. Higher layers are based on a serial link emulation (RFComm) and the PPP/IP/TCP-UDP suite, as well as the object exchange protocol OBEX. Apart from the isochronous traffic, which cannot be used between more than 3 devices, Bluetooth does not offer temporal guarantees. The limitation in the number of devices can be overcome by regularly swapping nodes between the active and park modes. The fact that traffic is master-slave is certainly a problem for distributed applications. Bluetooth's strength lies in its robustness and absence of preconfiguration, as well as its relatively low cost. The major drawbacks are the long connection time and the strong limitation in terms of multihop communications, with the absence of a corresponding standard.
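The "approximately 24 hours" figure quoted above can be checked with simple arithmetic. The short Java sketch below assumes the nominal Bluetooth hop rate of 1600 hops/s and a hopping-sequence period of 2^27 hops; both values are commonly reported figures from the Bluetooth specification and are assumptions here, not numbers taken from this document.

// Rough check of the hopping-sequence duration quoted in the text.
// Assumed figures: 1600 hops/s nominal hop rate, 2^27 hops per sequence period.
public class BluetoothHopSequence {
    public static void main(String[] args) {
        double hopsPerSecond = 1600.0;           // nominal slot rate (assumed)
        double sequenceLength = Math.pow(2, 27); // hops in one sequence period (assumed)
        double seconds = sequenceLength / hopsPerSecond;
        System.out.printf("Sequence period: %.1f hours%n", seconds / 3600.0);
        // Prints roughly 23.3 hours, i.e. "approximately 24 hours" as stated above.
    }
}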

2.3 IEEE 802.15.4 and ZigBee

Like the other IEEE 802 standards, IEEE 802.15.4 defines the physical layer and the medium access control sublayer. It operates in 3 bands: 2.4 GHz (16 channels), 915 MHz (30 channels in the US) and 868 MHz (3 channels in Europe). Extensions to other bands are planned in some countries. The standard defines different modulation techniques and signalling rates, as described in Table 2-1.

Table 2-1 802.15.4 signalling rates

IEEE 802.15.4 makes the distinction between full-function devices (FFD) and reduced-function devices (RFD). FFDs may become coordinators. One of the coordinators of a PAN (Personal Area Network) functions as the PAN coordinator; this is typically the first coordinator that starts. RFDs may not communicate directly with each other; they need to use an FFD as a relay. The purpose of this distinction is to allow very simple devices with very low power consumption to participate in a more complex wireless network.

At the MAC layer, IEEE 802.15.4 draws from IEEE 802.11. In the beacon-enabled operation, time is divided into intervals of fixed duration, called beacon intervals, that start with a beacon frame followed by a period during which nodes may contend for access using a slotted version of CSMA/CA. At the end of this period, there may be an optional contention-free period during which nodes are assigned time slots by configuration. The beacon interval ends with an inactive period during which nodes may sleep to save energy. This is one of the different techniques used to keep the power consumption of non-coordinator nodes as low as possible. Indirect communication is a second such technique. A node may sleep at will and wake from time to time to look for possible incoming traffic. The beacon sent by a coordinator contains an indication of pending traffic with a list of addressed nodes. When the sleeping node wakes up, it waits for a beacon and can check whether there is pending traffic for it. If this is the case, it may request the data from the coordinator during the contention period. It may then go back to sleep. This procedure implies that the coordinator buffers the pending traffic while waiting for a poll from the destination node. Direct traffic is also possible from a node to a coordinator or vice versa. The standard does not explicitly preclude direct traffic from node to node, but as there is no provision for neighbourhood discovery at the MAC level this would conflict with the routing protocols.

When all nodes have direct visibility to the PAN coordinator, IEEE 802.15.4 offers a simple and efficient solution. For more complex networks, when multihop communication is necessary to reach all nodes, IEEE 802.15.4 supports a tree-like topology. The root is the PAN coordinator, to which a number of nodes in direct visibility associate. One or more of these nodes may serve as coordinators to extend the span of the network. Nodes that do not see the PAN coordinator but see one of the coordinators will associate with that coordinator. The same procedure may be repeated to create a tree-like network. Each coordinator will manage a beacon interval. To avoid interference, all beacon intervals in a PAN should be identical and each coordinator is given an offset from the beginning of the interval (see Figure 2-1) that defines the time at which it has to send its beacon.

Figure 2-1 Relationship between incoming and outgoing beacons

The calculation and the distribution of the offsets to the different coordinators are not defined in the standard. One may comment that this solution will not be able to support mobility in an efficient way, because all offsets would need to be recalculated when a coordinator moves; all associations would also change. Note also that forwarding from coordinator to coordinator is not defined in the standard (this is a routing function and therefore out of its scope).

The second mode of operation is the beaconless mode. In this mode, coordinators are still present but their role is limited to managing indirect traffic. In direct traffic, nodes may send packets at any time using the (unslotted) CSMA/CA protocol. When indirect traffic is used to reach a node, that node may sleep most of the time and wake up from time to time to poll a coordinator for possible traffic. Note that beacons may still be transmitted on request to discover the neighbourhood. Direct traffic is only possible from or to a coordinator. However, nothing precludes all nodes from becoming coordinators. When power consumption must be kept low, this must be discouraged, because coordinators must be in receiving mode most of the time and this is one of the most power-consuming tasks. Note that hop-by-hop forwarding is only possible using the coordinators and some routing protocol. IEEE 802.15.4 also defines different levels of packet encryption. Key management is left to higher layers.

In summary, IEEE 802.15.4 is the only standard that addresses low-power wireless communications at the time of writing. In the context of IPAC, the tree-like topology, which is the most power-efficient option, cannot be used because it does not scale and does not support mobility. The beaconless mode is power-hungry and does not support mobility well. ZigBee is an industry standard that governs routing, service description and discovery, as well as security management. While parts of ZigBee might be reused, its main component, the network (routing) part, cannot, because one of the objectives of the project is to define new routing techniques better adapted to distributed systems.
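In the beacon-enabled mode described above, the beacon interval and the active (superframe) duration are derived from two MAC parameters, the beacon order (BO) and the superframe order (SO). The Java sketch below computes these durations for the 2.4 GHz PHY; the constants used (960 symbols base superframe duration, 16 microseconds per symbol) are the commonly quoted values of the standard and should be treated as assumptions here, as should the example BO/SO values.

// Illustrative computation of 802.15.4 beacon-enabled timing (2.4 GHz PHY).
// Constants are the commonly quoted values of the standard (assumed here).
public class SuperframeTiming {
    static final int A_BASE_SUPERFRAME_DURATION = 960; // symbols
    static final double SYMBOL_US = 16.0;               // microseconds per symbol at 2.4 GHz

    public static void main(String[] args) {
        int bo = 6; // beacon order (0..14), example value
        int so = 3; // superframe order (0..BO), example value

        double beaconIntervalMs = A_BASE_SUPERFRAME_DURATION * (1 << bo) * SYMBOL_US / 1000.0;
        double superframeMs     = A_BASE_SUPERFRAME_DURATION * (1 << so) * SYMBOL_US / 1000.0;
        double dutyCycle        = superframeMs / beaconIntervalMs;

        System.out.printf("Beacon interval: %.2f ms%n", beaconIntervalMs);  // ~983 ms for BO=6
        System.out.printf("Active period:   %.2f ms%n", superframeMs);      // ~123 ms for SO=3
        System.out.printf("Duty cycle:      %.1f %%%n", dutyCycle * 100.0); // 12.5 %
    }
}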


2.4 DECT

DECT (Digital Enhanced Cordless Telecommunications) is an ETSI (European Telecommunications Standards Institute) standard that defines an indoor wireless extension to ISDN. It operates in a reserved band in more than 100 countries. DECT uses 10 frequency channels. On each channel, cycles of 10 ms duration are repeated. Each cycle contains 24 time slots. A single connection uses 2 slots, one for the uplink and one for the downlink. Up to 120 connections may be established from a single base station. The information is sent at 1 Mbit/s, which means that each slot can accommodate 320 bits for a voice channel or 256 bits for a data channel. For a data connection, up to 23 slots can be used in one direction if asymmetric traffic is necessary. Multiple cells (multiple base stations) can coexist without configuration; the 120 channels, however, need to be shared among the cells. Through the static assignment of a channel, traffic can be guaranteed. Furthermore, the use of a protected band makes DECT immune to other sources of perturbation. Despite the official support for data communication in the standard, none of the implementations we know of actually implement it. It thus cannot be used in the project.
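The figures quoted above allow a quick estimate of the data capacity of a DECT connection. The Java sketch below derives per-slot and aggregate throughput directly from the numbers in the text (10 ms frames, 256 data bits per slot, up to 23 slots in one direction); it is an illustrative back-of-the-envelope calculation, not part of the standard.

// Back-of-the-envelope DECT data throughput, using the figures quoted in the text.
public class DectThroughput {
    public static void main(String[] args) {
        double frameSeconds = 0.010; // 10 ms frame
        int bitsPerDataSlot = 256;   // data payload per slot (from the text)
        int maxSlotsOneWay  = 23;    // asymmetric data connection (from the text)

        double perSlotKbps   = bitsPerDataSlot / frameSeconds / 1000.0; // 25.6 kbit/s
        double aggregateKbps = perSlotKbps * maxSlotsOneWay;            // ~589 kbit/s

        System.out.printf("Per-slot rate: %.1f kbit/s%n", perSlotKbps);
        System.out.printf("Max one-way:   %.1f kbit/s%n", aggregateKbps);
    }
}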

2.5 Wireless HART

WirelessHART is an industry initiative to develop a wireless solution for process control automation. It complies with the IEEE 802.15.4 physical layer specification at 2.4 GHz (16 channels). Contrary to IEEE 802.15.4 and similarly to Bluetooth, it uses slow frequency hopping to mitigate multipath fading and interference. A WirelessHART network contains a Network Gateway, a Network Manager, one or more Access Points and end devices that include a router function. The Gateway is the connection point to other networks. The Manager is in charge of the network configuration. The Gateway and the Manager are usually collocated. Mesh topologies are preferred, but trees and stars are supported as special cases. For dependability reasons, routes from the end devices to the Gateway are redundant thanks to the mesh topology.

Exchanges are organised in superframes that repeat periodically. Each superframe is further divided into slots. All slots have the same duration: enough time to transmit a packet and get the acknowledgement, plus some margin for time desynchronisation. The slots of a superframe are transmitted on different channels. To accommodate traffic with multiple periods, multiple superframes may coexist, with the only restriction that the slots of different superframes must be temporally aligned. Based on the traffic needs, the Network Manager assigns (schedules) the exchanges into superframes and slots. When a message needs to be routed, the Network Manager assigns a slot for the relaying of the message to the next hop. This slot is contiguous to the slot during which the message is received. In general, a message produced by an end device is transmitted over two different routes to the Gateway, and the Network Manager assigns slots accordingly. A slot may be defined for a point-to-point transmission from a given device to another one, or for broadcast transmission. It may also be left open to contention; in such a case, CSMA/CA with a backoff technique is employed. Once ready, the schedule is loaded into all devices. Schedule changes are possible during operation. This is usually done by loading a new superframe definition.

WirelessHART is an interesting solution in its market. Process control applications operate cyclically and thus a pure fixed TDMA schedule is a good solution. As all nodes know the schedule, low-power operation is possible. By construction, WirelessHART solves some of the drawbacks of ZigBee: sensitivity to interference and multipath fading, as well as high power consumption.
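Slot scheduling and channel hopping of the kind described above are usually expressed as a function of an absolute slot number and a per-link channel offset. The Java sketch below shows this common formulation; the field names and the mapping through an active-channel list are assumptions made for illustration and are not copied from the WirelessHART specification.

// Illustrative TDMA slot-to-channel mapping of the kind used by
// frequency-hopping mesh MACs such as WirelessHART (field names are assumptions).
public class HoppingSchedule {
    private final int[] activeChannels;  // e.g. the usable 2.4 GHz channels
    private final int superframeLength;  // number of slots in the superframe

    public HoppingSchedule(int[] activeChannels, int superframeLength) {
        this.activeChannels = activeChannels;
        this.superframeLength = superframeLength;
    }

    /** Slot index within the repeating superframe for a given absolute slot number. */
    public int slotInSuperframe(long absoluteSlotNumber) {
        return (int) (absoluteSlotNumber % superframeLength);
    }

    /** Physical channel used by a link with the given channel offset in this slot. */
    public int channelFor(long absoluteSlotNumber, int channelOffset) {
        int index = (int) ((absoluteSlotNumber + channelOffset) % activeChannels.length);
        return activeChannels[index];
    }
}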

2.6 Wireless Sensor Networks

IEEE 802.15.4 together with ZigBee, and WirelessHART, are the two standard proposals for wireless sensor networks. Some IEEE groups are working on new proposals. There are, however, a number of other proposals from the industry, e.g. Zwave (Ember, Zensys), Xmesh (Crossbow), SmartMesh (Dust Networks) and Millenialnet, and from the open source community (TinyOS). Most of the industrial systems do not come with any technical description of the solution; it is thus impossible to describe them here. The developments available within or around TinyOS attract a lot of attention because the hardware is relatively inexpensive and the software freely available. However, most sources report the inefficiency of the solutions.

2.7 Wisenet

Wisenet is a wireless sensor network project developed at CSEM since 2002. The WiseNET system includes an ultra-low-power system-on-chip (SoC) hardware platform and WiseMAC, a low-power medium access control (MAC) protocol dedicated to duty-cycled radios. Both elements have been designed to meet the specific requirements of wireless sensor networks and are particularly well suited to ad-hoc and hybrid infrastructure networks. The WiseNET radio offers dual-band operation (434 MHz and 868 MHz) and runs from a single 1.5 V battery. It consumes only 2.5 mW in receive mode, with a sensitivity better than −108 dBm at a BER of 10^-3 for a 100 kb/s data rate. In addition to this low-power radio, the WiseNET SoC also includes all the functions required for the acquisition, processing and storage of the information provided by the sensor. Ultra-low power consumption with the WiseNET system is achieved thanks to the combination of the low power consumption of the transceiver and the high energy efficiency of the WiseMAC protocol. The WiseNET solution consumes more than 250 times less power than comparable solutions based on the IEEE 802.15.4 standard.

WiseMAC is a low-power MAC protocol specially developed for WSNs [4] in combination with the SoC. It is a single-channel contention protocol based on non-persistent carrier sense multiple access (CSMA). Non-persistent CSMA has been combined with preamble sampling to mitigate idle listening. The preamble sampling technique consists in regularly sampling the medium to check for activity. All nodes in a network sample the medium with the same constant period; their relative sampling schedule offsets are independent. If the medium is found busy, a node continues to listen until a data packet is received or until the medium becomes idle again. At the transmitter, a wake-up preamble of a size equal to the sampling period is transmitted in front of every data packet to ensure that the receiver will be awake at the start of the data portion of the transmission. This technique enables very low power consumption when the traffic is very low, as is usually the case in WSNs. It provides the lowest possible power consumption in the absence of traffic for a given wake-up latency using a conventional receiver. The main disadvantage of this basic preamble sampling protocol is that the low power consumption in idle mode is coupled with a high power consumption overhead in both transmission and reception, due to the wake-up preamble. In an ad-hoc network, the cost of reception is not only paid by the intended destination, but also by all other nodes overhearing the transmission. The novel scheme introduced by WiseMAC reduces the length of this costly wake-up preamble. It consists in learning and exploiting the sampling schedule of the direct neighbouring nodes.

Figure 2-2 WiseMAC preamble

To use a wake-up preamble of minimised size, as illustrated in Figure 2-2, the sampling schedule of a neighbour is learned, or refreshed, during every data exchange by piggybacking in the acknowledgement messages the remaining time until the next sampling instant. Every node keeps up to date a table of the sampling time offsets of all its usual destinations. Since a node will have only a few direct destinations, such a table is manageable even with very limited memory resources. The duration of the wake-up preamble must cover the potential clock drift between the clock at the source and at the destination. This drift is proportional to the time since the last resynchronisation (i.e. the last time an acknowledgement was received). It can be shown that the required duration of the wake-up preamble is given by TP = min(4θTC, TW), where θ is the frequency tolerance of the time-base quartz, TW is the sampling period and TC the interval between communications. A transmission is scheduled such that the middle of the wake-up preamble coincides with the expected sampling time of the destination. Systematic collision situations potentially introduced by this synchronisation are mitigated by randomising the size of the wake-up preamble.
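The preamble-length rule TP = min(4·θ·TC, TW) quoted above translates directly into code. The following Java sketch evaluates it for an example clock tolerance and sampling period; only the formula comes from the text, the numerical values are arbitrary examples.

// WiseMAC wake-up preamble duration: TP = min(4 * theta * Tc, Tw), as given in the text.
public class WiseMacPreamble {
    /**
     * @param theta frequency tolerance of the time-base quartz (e.g. 30e-6 for 30 ppm)
     * @param tc    interval since the last resynchronisation, in seconds
     * @param tw    sampling period, in seconds
     * @return required wake-up preamble duration, in seconds
     */
    public static double preambleDuration(double theta, double tc, double tw) {
        return Math.min(4.0 * theta * tc, tw);
    }

    public static void main(String[] args) {
        // Example values (assumed): 30 ppm quartz, 2 s since the last ACK, 100 ms sampling period.
        double tp = preambleDuration(30e-6, 2.0, 0.100);
        System.out.printf("Wake-up preamble: %.3f ms%n", tp * 1000.0); // 0.24 ms, far below 100 ms
    }
}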

2.8 DSRC

DSRC (Dedicated Short Range Communications) is a new, short-to-medium-range wireless communication protocol that is meant to complement cellular communications in application domains where high data rates and low latency are critical and where communication zones are relatively small. It has been specifically designed for automotive applications, namely automatic road toll collection, road safety and traffic management in the context of ITS (Intelligent Transportation Systems). DSRC supports Public Safety and Private operations by establishing communication links between moving cars (vehicle-to-vehicle) or between a car and the roadside (vehicle-to-infrastructure), as shown in Figure 2-3.

Figure 2-3 DSRC Component Architecture

DSRC involves both mobile and fixed units, establishing a highly dynamic, ad-hoc, multi-hop network. Using frequency bands at 5.8 GHz (Europe) and 5.9 GHz (USA), the range can be up to 1000 metres, with data rates from 3 to 27 Mbit/s and a latency smaller than 50 ms. It supports car speeds of up to 200 km/h and relative car speeds of up to 500 km/h. In order to specify high-speed communications between vehicles and service providers, and to harmonize communication interfaces between different automotive manufacturers, standards for DSRC are being developed in the USA by the IEEE, ASTM (American Society for Testing and Materials) and ITS (Intelligent Transportation Systems) bodies, and in Europe by CEN (Comité Européen de Normalisation), ETSI and ISO. Different standards apply to different geographic regions (USA, Europe, and Japan) but there is an effort for harmonization. The IEEE standards for DSRC are composed of:

- IEEE 802.11p: an amendment to IEEE 802.11 in order to support wireless access for high-speed nodes (under development). Current applications in the USA use IEEE 802.11a.
- The IEEE 1609 family of standards for Wireless Access in Vehicular Environments (WAVE), under development:
  o IEEE 1609-1: Resource Manager
  o IEEE 1609-2: Security
  o IEEE 1609-3: IP Network Services
  o IEEE 1609-4: Medium Access Control Extension Services

The CEN standards are the following:
- EN 12253:2004 Dedicated Short-Range Communication - Physical layer using microwave at 5.8 GHz
- EN 12795:2002 Dedicated Short-Range Communication (DSRC) - DSRC Data link layer: Medium Access and Logical Link Control
- EN 12834:2002 Dedicated Short-Range Communication - Application layer
- EN 13372:2004 Dedicated Short-Range Communication (DSRC) - DSRC profiles for RTTT applications
- EN ISO 14906:2004 Electronic Fee Collection - Application Interface


DSRC defines two types of communication entities: on-board units (OBUs) and roadside units (RSUs). An RSU is a fixed WAVE device that supports communication and data exchange with OBUs, usually in a one-hop (often line-of-sight), centralized scheme. An OBU is a mobile, portable WAVE device that supports information exchange with RSUs and other OBUs, in the latter case in a distributed-coordination, multi-hop scheme. The flow of information is bi-directional: from the infrastructure to the vehicles as well as between two vehicles.

The IEEE standards define seven channels, one for control and six for services, as shown in Figure 2-4. By default, WAVE devices operate on the control channel, which is reserved for short, high-priority application and system control messages. The RSU uses the control channel to regularly send service advertisements (the available services and their channel) to the OBUs within range, at a rate of 10 advertisements per second. In order to receive information, an OBU monitors the control channel until a service advertisement is received that specifies a particular service channel. In order to send information, an OBU chooses to utilize a particular service channel, based on the WAVE announcement frames it transmits over the control channel. In any case, WAVE devices must monitor the control channel for additional safety or private service advertisements during specific intervals, even if this requires the suspension of a transaction in progress on a service channel. This requires that WAVE devices that are not able to listen on the control channel and exchange data on a service channel at the same time be synchronized to each other and to an absolute time reference, UTC, in order to know when to cease monitoring the control channel. UTC time is commonly provided by GPS, but can also be obtained from other devices (the accuracy requirements allow this).
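Because WAVE devices must know, from UTC alone, when they are required to listen to the control channel, channel alternation is commonly described as a fixed sync interval split into a control-channel half and a service-channel half. The Java sketch below illustrates this scheme; the 100 ms sync interval and 50 ms control-channel interval are typical values attributed to IEEE 1609.4 and are assumptions here, not figures taken from this document.

// Illustrative WAVE channel-interval calculation based on UTC time.
// The 100 ms sync interval and 50 ms CCH interval are assumed typical 1609.4 values.
public class WaveChannelInterval {
    static final long SYNC_INTERVAL_MS = 100; // assumed
    static final long CCH_INTERVAL_MS  = 50;  // assumed: first half of each sync interval

    /** True if, at the given UTC time, the device must monitor the control channel. */
    public static boolean isControlChannelInterval(long utcMillis) {
        return (utcMillis % SYNC_INTERVAL_MS) < CCH_INTERVAL_MS;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis(); // UTC, typically obtained from GPS
        System.out.println(isControlChannelInterval(now)
                ? "Monitor the control channel"
                : "Service-channel interval: transactions may proceed");
    }
}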

Figure 2-4 DSRC channels

As depicted in Figure 2-5, WAVE accommodates two protocol families: standard IP and the WAVE Short Message Protocol (WSMP), designed for optimized wireless operation in the vehicular environment. WAVE short messages can be exchanged on any channel, while IP traffic is allowed only on a service channel. Applications have the choice of sending their data in the form of WSMs on the control channel or of initiating/participating in a WBSS (WAVE Basic Service Set) on a service channel and using either IP or WSMs. A WBSS supports traffic to/from specific applications and its presence is announced so that other devices with compatible applications can join. Typically, an RSU or an OBU (the service provider) initiates a WBSS by sending a service advertisement on the control channel, after which both the service provider and the service user (one or more OBUs) are free to generate data packets for transmission on the associated service channel, in a connectionless scheme (see Figure 2-6). The WBSS can be persistent or of limited duration. It remains active until it is locally terminated (local notification that there is no more traffic), but there is no protocol exchange over the air confirming its termination.


Figure 2-5 The WAVE protocol stack (IEEE 1609)

Figure 2-6 RSU Service Information Exchange Scheme

Data transmission can be unicast or broadcast. Service announcements are secured and the service user checks the provider's digital signature before joining a WBSS. The exchanged data are prioritized, according to access category (8 priority levels).

3 Sensor management

Sensors are devices that measure a physical attribute or a physical event. They output a functional reading of that measurement as an electrical, optical or digital signal. These signals are data that can be transformed by other devices into information. The information can be used either by intelligent devices or by monitoring individuals to make intelligent decisions and maintain or change a course of action.

Smart sensors are simply ones that handle their own acquisition and conversion of data into a calibrated result in the units of the physical attribute being measured. For example, a traditional thermocouple simply provided an analog voltage output. The voltmeter was responsible for taking this voltage and transforming it into a meaningful temperature measurement through a set of fairly complex algorithms as well as an analog-to-digital acquisition. A smart sensor would do all that internally and simply provide a temperature value as data. Smart sensors do not make judgments on the data collected unless the data go out of range for the sensor.

Because of the diversity of the transducer market, manufacturers are always looking for ways to build low-cost, networked smart transducers. The Institute of Electrical and Electronics Engineers (IEEE), in conjunction with the National Institute of Standards and Technology (NIST), addressed this issue by creating a family of standards to aid in the design and development of smart networked transducers. The ultimate goal of the standards is to achieve transducer-to-network interchangeability and transducer-to-network interoperability. This is done by defining a set of common communication interfaces for connecting transducers to microprocessors, instruments and field networks.


This chapter concentrates on the approved IEEE 1451 standards, in which inter-chip communications are eliminated, yielding a speed improvement over the solutions implemented to date. Finally, an overview of SensorML is given, which provides a high-level description of sensors and their usage.

3.1 IEEE 1451 Overview

The standards were first proposed in September 1993, when NIST and the IEEE's Technical Committee on Sensor Technology of the Instrumentation and Measurement Society co-sponsored a meeting to discuss smart sensor communication interfaces and the possibility of creating a standard interface. The response was to establish a common communication interface for smart transducers. Since then, a series of workshops has been held and seven technical working groups have been formed to address different aspects of the interface standard. Currently, all working groups under the IEEE 1451 umbrella provide standard interfaces for sensors on tethered networks. A wireless IEEE 1451 standard should provide seamless connectivity among sensors and users, no matter what distance separates them. And it must do this without requiring the installation of new wires and with reasonable cost and size additions at each sensor node. Figure 3-1 shows the NIST-supported IEEE 1451 standard-based system.

The general requirements of any wireless network include throughput, range, reliability, and power consumption. Some users need tens of megabits per second while others require only a few bits per day. Also, the sizes of the networks vary from a few feet to several miles. The Wireless Sensing Workshop held in Chicago in 2001 tried to determine the interest and requirements for wireless interfaces for sensor-based networks, especially in the industrial community. An informal survey showed that most industrial users needed 32 or fewer nodes per network and typically <300 bps per node, or an aggregate data rate of <10 kbps for the network. At the same time, the industrial users generally wanted network ranges of a few kilometres, not just tens of metres. While this sampling population was too small to be statistically accurate, the results showed that at least one of the physical layer options must offer long-range operation, even if lower throughput would then be acceptable. Workshop panellists also discussed reliability. Some asserted that potential users of wireless sensors must sacrifice some reliability to go wireless; others wanted a network protocol that optimized the reliability of the link. One way or another, the physical layers chosen for a wireless IEEE 1451 system must address data transmission reliability. Within this context, the concept of reliability has several possible definitions, depending on the application. These include the probability that a message will get through; the probability that a message will get through within a given amount of time; or the probability that errors in messages will be detected. Other reasons for using IEEE 1451 are:

- There are more than 3,000 global sensor manufacturers. Trying to use a variety of sensors from a number of different manufacturers together in a data acquisition system can be very complex and require expensive customization by the integration team.
- A conventional sensor system has a lot of analog wiring and extensive switching. IEEE 1451 systems will greatly simplify the development and installation of smart sensor systems.
- IEEE 1451 compliant systems will cost significantly less to install than conventional sensor systems.

Intechno Consulting forecasts the global sensor market to grow from $32.5 Billion a year in 1998 to $50.6 Billion a year in 2008. Some of the big players, such as Intel, are betting that this is “the next big thing” to drive both information technology and corporate profits. Several factors make IEEE 1451 the right standard at the right time to capitalize on this extremely high growth rate. Semiconductor manufacturing techniques can make a wide variety of sensors far easier and cheaper to manufacture in quantity. These techniques can also be applied to nano-scale development and manufacturing. Low-power and power-scavenging sensors can be deployed wirelessly, reducing the cost of installation. IEEE 1451 can enable networks of smart sensors to communicate and even to set up ad hoc networks.

The following part of this chapter goes into detail about the different standards.


Figure 3-1 IEEE 1451 Block diagram

IEEE 1451 is a planned set of standards for smart sensors that will make it easier and cheaper to deploy a wide variety of sensors.

- IEEE 1451.0 – Defines the structure of the TEDS (Transducer Electronic Data Sheets), the interface between 1451.1 and 1451.X, the message exchange protocols and the command set for the transducers.
- IEEE 1451.1 – Specifies collecting and distributing information over a conventional IP network.
- IEEE 1451.2 – Wired transducer interface (12-wire bus); a revision is under way which will put IEEE 1451 on RS-232, RS-485 and USB.
- IEEE 1451.3 – The information needed to make multi-drop IEEE 1451 sensors work within a network.
- IEEE 1451.4 – Specifies the requirements for TEDS (Transducer Electronic Data Sheets). This is software only.
- IEEE 1451.5 – Specifies the information that will enable 1451-compliant sensors and devices to communicate wirelessly, eliminating the monetary and time costs of installing cables to acquisition points. The IEEE is currently working on three different radio options: 802.11, Bluetooth and ZigBee.
- IEEE 1451.6 – The information required for the CAN (Controller Area Network) bus.

Currently:
- IEEE 1451.1 and IEEE 1451.4 have become published standards.
- IEEE 1451.3 has been approved and is awaiting publication.
- IEEE 1451.2 is awaiting revision.
- IEEE 1451.4 has commercially available products, largely because National Instruments has enthusiastically backed this standard and is encouraging its clients and alliance members to take advantage of the synergies it provides.

Different companies around the world are very interested in the development of test and measurement. After several years of research, it has been determined that one of the most promising areas of development for test and measurement is smart sensors. IEEE 1451 promises excellent communications, data reduction and intelligent data development. In short, 1451 promises to save money on deployment, save money in operations and provide actionable intelligence, so that users are not buried in data but have the opportunity to make efficient, effective and intelligent decisions based on accurate and analyzed data.
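To make the role of the TEDS concrete, the Java sketch below models the kind of identification fields usually attributed to an IEEE 1451.4 Basic TEDS (manufacturer, model, serial number, etc.). The field list and types are assumptions for illustration; the actual TEDS is a compact binary structure defined by the standard.

// Hypothetical, simplified representation of a Basic-TEDS-like record.
// Field names/types are illustrative; the real TEDS is a packed binary format.
public class BasicTedsRecord {
    private final int manufacturerId;  // identifies the transducer vendor
    private final int modelNumber;     // vendor-assigned model code
    private final char versionLetter;  // hardware revision letter
    private final int versionNumber;   // hardware revision number
    private final long serialNumber;   // unique per device

    public BasicTedsRecord(int manufacturerId, int modelNumber,
                           char versionLetter, int versionNumber, long serialNumber) {
        this.manufacturerId = manufacturerId;
        this.modelNumber = modelNumber;
        this.versionLetter = versionLetter;
        this.versionNumber = versionNumber;
        this.serialNumber = serialNumber;
    }

    @Override
    public String toString() {
        return String.format("TEDS[manufacturer=%d, model=%d, version=%c%d, serial=%d]",
                manufacturerId, modelNumber, versionLetter, versionNumber, serialNumber);
    }
}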

3.2 SensorML overview

In essence, SensorML is an XML dialect that is used to describe sensors, sensor capabilities and sensor uses. In detail, SensorML (SensorML Core Specification) provides standard models and an XML encoding for describing any process, including the process of measurement by sensors, and instructions for deriving higher-level information from observations. Processes described in SensorML are discoverable and executable. All processes define their inputs, outputs, parameters and method, as well as provide relevant metadata. SensorML models detectors and sensors as processes that convert real phenomena to data.

In 1998, under the auspices of the international Committee for Earth Observing Satellites (CEOS), Dr. Mike Botts at the Earth System Science Center at the University of Alabama in Huntsville began development of an XML-based Sensor Model Language for describing certain properties of dynamic remote sensors. In 2000, SensorML was brought under the oversight of the Open Geospatial Consortium (OGC), where it served as a catalyst for the OGC Sensor Web Enablement (SWE) initiative. The continued development of SensorML has been supported by the Interoperability Program of OGC, as well as the US Environmental Protection Agency (EPA), the US National Geospatial-Intelligence Agency (NGA), the US Joint Interoperability Test Command (JITC), the US Defense Information Systems Agency (DISA), SAIC, Crystal Data International, General Dynamics, Northrop Grumman, Oak Ridge National Labs, and NASA. SensorML was approved by OGC as an international, open Technical Specification on June 23, 2007. The Open Geospatial Consortium (OGC) is an international voluntary consensus standards organization. In the OGC, more than 330 commercial, governmental, non-profit and research organizations worldwide collaborate in an open consensus process encouraging the development and implementation of standards for geospatial content and services, GIS data processing and exchange. It was previously known as the Open GIS Consortium. In summary, SensorML is good for:

- Electronic Specification Sheet (for sensor components and systems).
- Discovery of sensors, sensor systems, and processes.
- Lineage of Observations.
- On-demand processing of Observations.
- Support for tasking, observation, and alert services.
- Plug-N-Play, auto-configuring, and autonomous sensor networks.
- Archiving of Sensor Parameters.

The essential elements of SensorML are:
- Component. A physical atomic process that transforms information from one form to another. For example, a Detector typically transforms a physical observable property or phenomenon to a digital number. Example Components include detectors, actuators, and physical filters.
- System. A composite physically-based model of a group or array of components, which can include detectors, actuators, or sub-systems. A System relates a process to the real world and therefore provides additional definitions regarding the relative positions of its components and communication interfaces.
- Process Model. An atomic non-physical processing block usually used within a more complex Process Chain. It is associated with a Process Method which defines the process interface as well as how to execute the model. It also precisely defines its own inputs, outputs and parameters.
- Process Chain. A composite non-physical processing block consisting of interconnected sub-processes, which can in turn be Process Models or Process Chains. A process chain also includes possible data sources as well as connections that explicitly link the input and output signals of sub-processes together. It also precisely defines its own inputs, outputs and parameters.
- Process Method. A definition of the behaviour and interface of a Process Model. It can be stored in a library so that it can be reused by different Process Model instances. It essentially describes the process interface and algorithm, and can point the user to existing implementations.
- Detector. An atomic component of a composite Measurement System defining the sampling and response characteristics of a simple detection device. A detector has only one input and one output, both being scalar quantities. More complex sensors, such as a frame camera, which are composed of multiple detectors can be described as a detector group or array using a System or Sensor. In SensorML a detector is a particular type of Process Model.
- Sensor. A specific type of System representing a complete sensor. This could be, for example, a complete airborne scanner which includes several Detectors (one for each band).

SensorML could be utilised in IPAC for the description, advertisement and discovery of sensors embedded in IPAC nodes. In fact, SensorML provides a powerful standard that can enable IPAC nodes to utilize sensors found on different nodes. What is needed is the storage of sensor descriptions in SensorML and a reasoning mechanism that utilizes this information. In this way, IPAC nodes can easily be aware of the sensing capabilities of neighbouring nodes and of how these capabilities could be used. Also, having an open and well-known standard for describing sensing capabilities leads to the easier incorporation of new IPAC nodes in the system: the node provider does not need to learn a new (possibly proprietary) standard in order to enable hardware for use within the IPAC middleware. Even better, if a hardware sensing node has already been described in SensorML, the sensor description step needed to incorporate this device into IPAC can be skipped.
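As a rough illustration of the discovery mechanism suggested above, the Java fragment below scans a SensorML-like document with a namespace-agnostic DOM traversal and lists the 'output' elements it finds, which could then be matched against the capabilities an IPAC node is looking for. The element and attribute names ('output', 'name') and the file name are simplifications made for this sketch; a real implementation would follow the actual SensorML schema.

// Hedged sketch: list the outputs advertised in a SensorML-like description.
// Element/attribute names are simplified; a real parser would follow the SensorML schema.
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SensorCapabilityScan {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new File("sensor-description.xml"));

        // Find every element whose local name is "output", whatever its namespace.
        NodeList outputs = doc.getElementsByTagNameNS("*", "output");
        for (int i = 0; i < outputs.getLength(); i++) {
            Element output = (Element) outputs.item(i);
            System.out.println("Advertised output: " + output.getAttribute("name"));
        }
    }
}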


4 Data modelling technologies based on the XML standard

The purpose of data modelling is to develop an accurate model, or graphical representation, of the client's information needs and business processes. In general, there are three main types of data models: the conceptual schema, which describes the semantics of a domain; the logical schema, which describes the semantics as represented by a particular data manipulation technology; and the physical schema, which describes the physical storage of the data. Moreover, there are many available technologies for data modelling, each having specific advantages for given types of applications. For example, for relational database design, Entity-Relationship models (logical) along with Relational models (physical) are the dominating technologies. This is so because those models are very good at describing data that are structured and (more or less) complete. However, when dealing with semi-structured or incomplete data (data instances do not always contain all the information), these models do not cope as well. Currently, the dominating technology for dealing with semi-structured data is XML (Extensible Markup Language). In fact, one of the driving forces behind the creation of XML was to cover this specific weakness (modelling and storing self-describing, semi-structured data) that became obvious with the ever increasing use of the web for displaying/exporting data. Additionally, the usage of a text format (instead of binary) for XML files promoted interoperability between heterogeneous systems. This is especially true for applications running on mobile and limited (regarding memory and processing) devices.

Data exchanged within and utilized by the IPAC middleware will be semi-structured and sometimes incomplete, should be self-describing, and needs to be supported by heterogeneous and mobile devices. Thus XML seems ideal for storing and processing data in IPAC. Since XML can also be used for modelling data, it is natural to adopt it for all the data-related requirements (modelling being one of them) of the IPAC middleware.

The Extensible Markup Language (XML) is a W3C (World Wide Web Consortium) recommended general-purpose markup language for creating special-purpose markup languages, capable of describing many different kinds of data. In other words, XML is a way of describing data, and an XML file can contain the data too, as in a database. It is a simplified subset of the Standard Generalized Markup Language (SGML). Its primary purpose is to facilitate the sharing of data across different systems, particularly systems connected via the Internet (Bray T. et al., 2006). There are two levels of correctness of an XML document:

Well-formed. A well-formed document conforms to all of XML's syntax rules. For example, if a start-tag appears without a corresponding end-tag, it is not well-formed. A document that is not well-formed is not considered to be XML; a conforming parser is not allowed to process it.

Valid. A valid document additionally conforms to some semantic rules. These rules are either user-defined, or included as an XML schema (W3C XML Schema) or DTD. For example, if a document contains an undefined element, then it is not valid; a validating parser is not allowed to process it.
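The two levels of correctness map directly onto the standard Java XML APIs: a non-validating parse detects well-formedness errors, while validation against an XML Schema checks the additional semantic rules. The sketch below shows both steps; the file names are placeholders used for illustration.

// Checking well-formedness (parse) and validity (schema validation) of an XML document.
// File names are placeholders for illustration.
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.SchemaFactory;

public class XmlCorrectnessCheck {
    public static void main(String[] args) throws Exception {
        File xml = new File("message.xml");

        // Well-formedness: any syntax error makes this parse throw an exception.
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        dbf.newDocumentBuilder().parse(xml);

        // Validity: the same document is additionally checked against an XML Schema.
        SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        sf.newSchema(new File("message.xsd"))
          .newValidator()
          .validate(new StreamSource(xml));

        System.out.println("Document is well-formed and valid.");
    }
}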

4.1 Features of XML

XML provides a text-based means to describe and apply a tree-based structure to information. At its base level, all information manifests as text, interspersed with markup that indicates the information's separation into a hierarchy of character data, container-like elements, and attributes of those elements. In this respect, it is similar to the LISP programming language's S-expressions, which describe tree structures wherein each node may have its own property list.

The fundamental unit in XML is the character, as defined by the Universal Character Set. Characters are combined in certain allowable combinations to form an XML document. The document consists of one or more entities, each of which is typically some portion of the document's characters, encoded as a series of bits and stored in a text file.

The ubiquity of text file authoring software (word processors) facilitates rapid XML document authoring and maintenance, whereas prior to the advent of XML there were very few data description languages that were general-purpose, Internet protocol-friendly, and very easy to learn and author. In fact, most data interchange formats were proprietary, special-purpose "binary" formats (based foremost on bit sequences rather than characters) that could not easily be shared by different software applications or across different computing platforms, much less authored and maintained in common text editors.

By leaving the names, allowable hierarchy, and meanings of the elements and attributes open and definable by a customizable schema, XML provides a syntactic foundation for the creation of custom, XML-based markup languages. The general syntax of such languages is rigid: documents must adhere to the general rules of XML, assuring that all XML-aware software can at least read (parse) and understand the relative arrangement of information within them. The schema merely supplements the syntax rules with a set of constraints. Schemas typically restrict element and attribute names and their allowable containment hierarchies, such as only allowing an element named 'birthday' to contain one element named 'month' and one element named 'day', each of which has to contain only character data. The constraints in a schema may also include data type assignments that affect how information is processed; for example, the 'month' element's character data may be defined as being a month according to a particular schema language's conventions, perhaps meaning that it must not only be formatted a certain way, but also must not be processed as if it were some other type of data.

In this way, XML contrasts with HTML, which has an inflexible, single-purpose vocabulary of elements and attributes that, in general, cannot be repurposed. With XML, it is much easier to write software that accesses the document's information, since the data structures are expressed in a formal, relatively simple way.

XML makes no prohibitions on how it is used. Although XML is fundamentally text-based, software quickly emerged to abstract it into other, richer formats, largely through the use of datatype-oriented schemas and object-oriented programming paradigms (in which the document is manipulated as an object). Such software might treat XML as serialized text only when it needs to transmit data over a network, and some software does not even do that much. Such uses have led to "binary XML", the relaxed restrictions of XML 1.1, and other proposals that run counter to XML's original spirit and thus garner a certain amount of criticism.

4.2 Strengths of XML

Some features of XML that make it well-suited for data transfer are:
- Its simultaneously human- and machine-readable format;
- Its support for Unicode, allowing almost any information in any human language to be communicated;
- The ability to represent the most general computer science data structures: records, lists and trees;
- The self-documenting format that describes structure and field names as well as specific values;
- The strict syntax and parsing requirements that allow the necessary parsing algorithms to remain simple, efficient, and consistent.

XML is also heavily used as a format for document storage and processing, both online and offline, and offers several benefits:

- Its robust, logically-verifiable format is based on international standards;
- The hierarchical structure is suitable for most (but not all) types of documents;
- It manifests as plain text files, unencumbered by licences or restrictions;
- It is platform-independent, thus relatively immune to changes in technology;
- It and its predecessor, SGML, have been in use since 1986, so there is extensive experience and software available.

These strengths can easily be incorporated and exploited in IPAC. For example, XML files could be used to store and transport information between IPAC nodes, or used by the middleware itself to describe the nodes and their needs. The heterogeneous interoperability required between IPAC nodes (and their data exchanges) can also be supported by XML, since the ability to handle Unicode text alone is sufficient for XML to be used for data exchange and storage. Furthermore, the self-describing nature of XML files is an added bonus for IPAC, because the data produced and exchanged between nodes is not expected to have a fixed form known beforehand, but rather to conform to certain rules; it therefore becomes crucial to have a way to provide this kind of functionality. Finally, an easy-to-use standard that does not demand the acquisition of extra licences makes the development of IPAC easier (in contrast to using a more restricted standard), and its adoption and usage by third parties also becomes easier, since XML is widely known.
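As a rough sketch of how such information could be exchanged, the fragment below builds a small, entirely hypothetical node description with the standard DOM API and serializes it to text; the element names are illustrative only and are not part of any IPAC specification.

import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class NodeDescriptionExample {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();

        // Hypothetical vocabulary: a node advertises its identifier and a sensor reading.
        Element node = doc.createElement("node");
        node.setAttribute("id", "node-42");
        Element sensor = doc.createElement("sensor");
        sensor.setAttribute("type", "temperature");
        sensor.setTextContent("21.5");
        node.appendChild(sensor);
        doc.appendChild(node);

        // Serialize to plain text, ready to be stored or sent to another node.
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out);
    }
}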

4.3 Weaknesses of XML

For certain applications, XML also has the following weaknesses:

- Its syntax is fairly verbose and partially redundant. This can hurt human readability and application efficiency, and yields higher storage costs. It can also make XML difficult to apply in cases where bandwidth is limited, though compression can reduce the problem in some cases. This is particularly true for multimedia applications running on cell phones and PDAs that want to use XML to describe images and video.
- Parsers must be designed to recurse arbitrarily nested data structures and must perform additional checks to detect improperly formatted or differently ordered syntax or data (this is because the markup is descriptive and partially redundant, as noted above). This causes a significant overhead even for basic uses of XML, particularly where resources are scarce, for example in embedded systems. Furthermore, additional security considerations arise when XML input is fed from untrustworthy sources, and resource exhaustion or stack overflows are possible.
- Some consider the syntax to contain a number of obscure, unnecessary features born of its legacy of SGML compatibility. However, an effort to settle on a subset called "Minimal XML" led to the discovery that there was no consensus on which features were in fact obscure or unnecessary.
- The basic parsing requirements do not support a very wide array of data types, so interpretation sometimes involves additional work in order to process the desired data from a document. For example, there is no provision in XML for mandating that "3.14159" is a floating-point number rather than a seven-character string. Some XML schema languages add this functionality.
- It uses the hierarchical model for representation, which is limited compared to the relational model, since it gives only a fixed view of the actual information: for example, either actors under movies, or movies under actors.
- Modelling overlapping (non-hierarchical) data structures requires extra effort.
- Mapping XML to the relational or object-oriented paradigms is often cumbersome.
- Some have argued that XML can be used for data storage only if the file is of low volume, but this is only true given particular assumptions about architecture, data, implementation, and other issues.
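Because resource exhaustion (e.g. deeply nested or entity-expanding documents) is a real concern when XML arrives from untrusted peers, a parser is normally configured defensively before use. The sketch below shows one possible configuration through the standard JAXP factory; the exact set of supported features depends on the underlying parser implementation.

import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

public final class DefensiveParser {
    public static DocumentBuilder create() throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();

        // Enforce the JAXP secure-processing limits (e.g. entity expansion limits).
        factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);

        // Do not expand entity references or pull in external content.
        factory.setExpandEntityReferences(false);
        factory.setXIncludeAware(false);

        return factory.newDocumentBuilder();
    }
}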

These weaknesses could impact IPAC as well. However, there are solutions that can be used to avoid these problems. For example, keeping the data exchanged between nodes at low volumes nullifies most of the XML-associated problems (deep recursion and redundancy). Even for high-volume files, there are efficient solutions, such as compression and offline processing. Furthermore, by carefully deciding when to, and when not to, use XML in the various parts of IPAC, many problematic situations can also be avoided.
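For instance, when larger XML files do have to be moved between nodes, plain text compresses well; a minimal sketch with the standard java.util.zip classes follows (the file names are hypothetical).

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.zip.GZIPOutputStream;

public class CompressXml {
    public static void main(String[] args) throws Exception {
        // Hypothetical file names; XML's verbosity typically compresses well.
        FileInputStream in = new FileInputStream("exchange.xml");
        GZIPOutputStream out =
            new GZIPOutputStream(new FileOutputStream("exchange.xml.gz"));
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        out.finish();
        out.close();
        in.close();
    }
}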

4.4 Data modelling with XML

Data models can be described through an XML Schema (XSD). XSDs are far more powerful than DTDs in describing XML languages. They use a rich datatyping system, allow for more detailed constraints on an XML document's logical structure, and must be processed in a more robust validation framework. An XSD is itself an XML document, but its purpose is to provide the grammar used in XML documents. Another difference from plain XML documents is that, in XSDs, there is a set of predetermined tags which is used to describe the grammar of an XML document. XSD and XML documents are much like database tables, which have a fixed schema provided by a model (e.g. ER models) and rows which contain the actual data. The main difference from databases is that XML Schemas are not so rigid, in the sense that they do not define the exact attributes (elements) of the stored data, but rather specify which attributes may exist in an XML document. It is possible to impose some restrictions (such as that an element may exist only within another element, or the cardinality of those elements), but in general they do not provide a strict and rigid structure. That is, a structure is provided, but it describes what data could be within an XML document instead of specifying the exact elements that should be in a document.

Having an XML schema allows the use of software tools that validate, produce, parse and even query XML documents using SQL-like languages (e.g. XQuery, XPath, etc.). Finally, even though an XSD is in essence a text document which may be produced with any text editor, there are many specialized software tools for authoring XSDs. These tools provide a graphical environment to author and represent an XML schema, thus making the development of XSDs much easier. The standard symbol notations that are used in XML Schemas are the following:

Element symbols
An XML element is the basic block of any XML document. It is a piece of text bounded by matching tags. Inside the tags there may be a combination of text and other elements (see Figure 4-8). The tags are user-defined and must be balanced. Furthermore, it is possible to abbreviate empty XML elements, e.g. <married></married> can be abbreviated to <married/>.


<UserInfo>
  <ID>ID_1</ID>
  <FirstName>Chris</FirstName>
  <LastName>Panayiotou</LastName>
  <Address AddressIdentifier="Home Address">
    <Street>Street Name 10</Street>
    <City>Nicosia</City>
    <Area>Pallouriotissa</Area>
    <Country>Cyprus</Country>
  </Address>
</UserInfo>

Figure 4-8 Example of an XML element

The cardinality of an element (0..1, exactly 1, 0..n, 1..n) is indicated by the border of the element box. Optional elements are drawn with a dashed line, required elements with a solid line. A maximum occurrence greater than one is indicated by two stacked boxes.

Content symbols
The content model of elements is symbolized on the left and right side of the element boxes. The left side indicates whether the element contains a simple type (text, numbers, dates, etc.) or a complex type (further elements). The right side of the element symbol indicates whether it contains child elements or not.

Model symbols
- A sequence of elements. The elements must appear exactly in the sequence in which they appear in the schema diagram.
- A choice of elements. Only a single element from those in the choice may appear at this position.
- The "all" model, in which the sequence of elements is not fixed.
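To illustrate the kind of query support mentioned above, the following sketch evaluates an XPath expression against the UserInfo document of Figure 4-8 with the standard javax.xml.xpath API; the file name is hypothetical.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class UserInfoQuery {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("userinfo.xml"));

        // Select the city of the address labelled "Home Address"
        // (see the UserInfo example of Figure 4-8).
        XPath xpath = XPathFactory.newInstance().newXPath();
        String city = xpath.evaluate(
            "/UserInfo/Address[@AddressIdentifier='Home Address']/City", doc);
        System.out.println("City: " + city);  // prints "Nicosia"
    }
}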

5 Service Creation Environments

Integrated Development Environments (IDEs) are software applications that provide facilities to developers for software creation. Typically, an IDE contains a code editor, a compiler, build automation tools, and a debugger. In some cases, where benchmarking is needed, an emulator may also be used. IPAC requires an environment where applications and services can be created and tested off-line before their deployment and execution on IPAC nodes (the so-called Application Creation Component).


Hence, in this section of the deliverable, we review the most important IDEs and common Service Creation Environments (SCEs) and describe their features. Through this, we will be able to recognize which of them are the most appropriate for our purposes.

5.1 Development platforms, GUI frameworks and technologies for SCEs

An advanced Service Creation Environment (SCE) requires high-level languages and tools for the design and development of the service logic, and powerful code generators for translating the design into the corresponding implementation code. It should provide the service developer with models that correspond to components of the service architecture and that may be reused in the creation of various services. It is also important to provide capabilities for deploying service code in the target environment. In the following sections we survey existing technologies and platforms for developing such environments.

5.1.1 Development Platforms for SCEs

The platforms described in this section are the most commonly used development environments for SCEs.

Eclipse Platform
The Eclipse Platform (Rivieres & Wiegand, 2004; Eclipse Platform, 2006) is an open-source IDE (Integrated Development Environment) which contains the functionality required to build customized IDEs supporting various programming languages. It is itself a composition of components; by using a subset of these components, it is possible to build arbitrary tools and applications. Although it has a lot of built-in functionality, most of that functionality is very generic. The Eclipse Platform is built on a mechanism for discovering, integrating, and running modules called plug-ins. Figure 5-9 shows a screen capture of the main workbench window with only the standard generic components.

Figure 5-9 A snapshot of the main Eclipse workbench window with only the standard generic components

There are two components in the Eclipse architecture: a small runtime kernel (the Platform Runtime) and a set of plug-ins. Plug-ins represent the smallest unit of functionality within the Eclipse Platform, and the runtime kernel is responsible for plug-in lifecycle management. Apart from this microkernel, everything else in Eclipse is a plug-in (including the GUI, the tools, etc.). The Eclipse runtime is a small OSGi microkernel; the OSGi Service Platform (OSGi Alliance, 2004; Delap, 2006) provides a standardised environment responsible only for plug-in management. The Eclipse Platform UI is built around a workbench that supplies the structures in which tools interact with the user and presents an extensible UI. It is based on editors, views, and perspectives. When active, an editor can contribute actions to the workbench menus and tool bar. Furthermore, the Graphical Editing Framework (GEF) (GEF, 2008) allows developers to create a rich graphical editor from an existing application model. GEF employs an MVC (model-view-controller) architecture, which enables simple changes to be applied to the model from the view. It is completely application-neutral and provides the groundwork to build almost any application, including but not limited to:


activity diagrams, GUI builders, class diagram editors, state machines, and even WYSIWYG text editors. The basic components of the GEF framework are:

- EditParts: an EditPart contains all information concerning an editable element of the visual interface. It is also responsible for controlling changes to the element and for notifying other elements about status changes.
- Model: the model is composed of a number of classes which are responsible for maintaining the actual status of the edited elements.
- Figures: figures are the visual representations of the elements which are described in the model.
- Policies: policies are responsible for mapping user requests to actions on the model.
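To make these roles concrete, a minimal, hypothetical EditPart is sketched below; it assumes the GEF and Draw2D libraries on the class path and a trivial model class of our own (SensorNode), which is not part of GEF.

import org.eclipse.draw2d.IFigure;
import org.eclipse.draw2d.Label;
import org.eclipse.gef.editparts.AbstractGraphicalEditPart;

// Hypothetical controller for a model element of type SensorNode.
public class SensorNodeEditPart extends AbstractGraphicalEditPart {

    // The figure is the visual representation of the model element.
    protected IFigure createFigure() {
        return new Label();
    }

    // Policies would map user requests (delete, move, ...) to commands
    // on the model; omitted here for brevity.
    protected void createEditPolicies() {
    }

    // Refresh the view when the model changes.
    protected void refreshVisuals() {
        SensorNode model = (SensorNode) getModel();
        ((Label) getFigure()).setText(model.getName());
    }
}

// Trivial, hypothetical model class used only for this sketch.
class SensorNode {
    private final String name;
    SensorNode(String name) { this.name = name; }
    String getName() { return name; }
}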

NetBeans
NetBeans (NetBeans IDE, 2008) is a free, open-source IDE for software developers. It provides all the tools needed to create professional desktop, enterprise, web, and mobile applications with the Java, C/C++, and Ruby languages. The NetBeans Platform contains APIs that simplify the handling of windows, actions, files, and many other things typical in applications. Each distinct feature in a NetBeans Platform application can be provided by a distinct NetBeans module, which is comparable to an Eclipse plug-in. Some of the basic components of the IDE are the following: a) the Editor, b) the Swing GUI Builder, c) the Component Inspector, and d) the Debugger.

Furthermore, the NetBeans IDE supports the coding, testing and deployment of applications for mobile devices that support the Java ME, CLDC/MIDP and CDC technologies. Additionally, NetBeans provides some APIs for developing or extending an editor. The NetBeans APIs are the public interfaces and classes which are available to module writers. Some of them are described in the following list:

- File Type Integration: allows the IDE, or any other application built on the NetBeans Platform, to recognize a new file type.
- Code Snippet Module: used for the creation and addition of code snippets to component palettes. Code snippets are small pieces of code that can be dragged from a component palette and dropped in the Source Editor. They serve to speed up coding.
- Editor Component Palette Module: used for the creation of a component palette that provides drag-and-drop code snippets for a new file type. Hence, the user can create a component palette for a file type that is not recognized by the IDE by default.
- Visual Library API: a visualization API, useful in the context of, for example, modelling and graphs.
- XML Editor Extension Module: used for the creation of a module that extends the functionality offered by one of the IDE's editors. The IDE has several built-in editors: XML editor, Java editor, JSP editor, and SQL editor.

IntelliJ IDEA
IntelliJ IDEA (JetBRAINS, 2008) is a commercial Java IDE developed by the company JetBrains. It provides a very user-friendly interface and enhances development productivity. Among other features, IntelliJ IDEA provides close integration with popular open-source development tools. It provides the OpenAPI, along with documentation and examples, to enable developers to create their own plug-ins.

Platform Comparison
Although NetBeans is a potential candidate for the development of services in the framework of IPAC, little support is provided for plug-in development. The same holds for the IntelliJ IDEA IDE: in practice, it is quite difficult to create plug-ins, and even harder to build specialised development tasks on top of them. Eclipse is a good IDE candidate for IPAC because it provides an open-source platform for creating, in the easiest possible way, an extensible integrated development environment, by allowing anyone to build tools that integrate seamlessly with the environment and with other tools. This seamless integration of tools is achieved through plug-ins; with the exception of a small runtime kernel, everything in Eclipse is a plug-in. This IDE provides all the necessary capabilities for the IPAC project and seems to be the most promising solution.
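Since the ease of extension hinges on plug-ins, and every Eclipse plug-in is an OSGi bundle managed by the runtime kernel, the minimal lifecycle contract is worth illustrating. The sketch below is a hypothetical activator; a real plug-in additionally needs a plug-in manifest, which is omitted here.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Hypothetical plug-in activator: the OSGi kernel calls start()/stop()
// when the plug-in is activated or shut down.
public class ExamplePluginActivator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        System.out.println("Plug-in started: "
                + context.getBundle().getSymbolicName());
    }

    public void stop(BundleContext context) throws Exception {
        System.out.println("Plug-in stopped");
    }
}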

5.1.2 Emulators

Emulators help developers to test applications prior to deploying them on the actual devices. Usually, they are distributed as part of a larger development platform. The aim of an emulator is to mimic the behaviour of a specific device. Most of the well-known mobile device vendors provide emulators for their products. An emulation phase is required in IPAC to provide the desired service reliability and efficiency: after a service has been created, the emulation phase reveals its behaviour on the specified device before it is made available to every IPAC node. Hence, developers are able to detect anomalies in a service's execution on specific devices. In this section, we survey some works and technologies relevant to emulators. Some of these are expected to be integrated with the IPAC Application Creation Component.


Unified Emulator Interface (UEI)
UEI (UEI, 2008) is a standard for interaction between IDEs and device emulators (it targets the Java language). This specification ensures that an IDE developer can efficiently use emulators provided by third parties. In practice, UEI describes the requirements that an emulator must meet in order to cooperate with a development tool (e.g., which device metadata must be provided to the IDE). The UEI specification defines a number of commands for emulator handling. An emulator can be run in one of two modes. The first, and most common, is running a MIDlet (a Java program for embedded devices) directly from classes in the file system. The second mode, which is optional, is to run the emulator according to the Over-The-Air (OTA) User Initiated Provisioning Recommended Practice document, which is part of the MIDP 2.0 specification.

Sun Wireless Toolkit (WTK)
The Sun Java Wireless Toolkit (JWDT, 2008), formerly known as the Java 2 Platform Micro Edition (J2ME) Wireless Toolkit, is a state-of-the-art toolbox for developing wireless applications based on J2ME's Connected Limited Device Configuration (CLDC) and Mobile Information Device Profile (MIDP). Such applications are designed to run on devices such as cell phones, personal digital assistants, and other small mobile devices. The toolkit contains an emulation environment (OTA emulator) and examples used by developers to easily develop wireless applications. Moreover, the Sun WTK provides a push registry emulator which can receive and act on information asynchronously. The NetBeans IDE Mobility Pack (NetBeans Mobility Pack, 2008) includes the Sun Java Wireless Toolkit and supports many other software development kits (SDKs) that one can download from vendor sites such as Nokia, Sony Ericsson, and Motorola. The NetBeans Mobility Pack supports Java Micro Edition (ME) Connected Device Configuration (CDC) and Connected Limited Device Configuration (CLDC), but does not have a bundled CDC or CLDC platform for creating applications. Thus, in order to develop Java ME CDC or CLDC applications, an emulator platform such as the Sun Java Wireless Toolkit must be registered inside the IDE (see Figure 5-10). Moreover, additional emulators can be integrated through the Unified Emulator Interface; emulators that implement UEI are automatically recognized by the IDE.

Figure 5-10 NetBeans emulator platform for application testing

Sprint Wireless Toolkit
Sprint's Software Development Kit (SDK) (Sprint ADP, 2008) includes device emulators and tools that help developers create and deploy their applications. It includes a lightweight UI toolkit provided by Sun for MIDP 2.0 development. Moreover, it provides functionality for MIDlet OTA installation, error checking and detection. Hence, developers can deploy their applications directly to the Sprint VDL, which facilitates real-time, virtual access to devices on CDMA and iDEN networks. The whole package contains a multi-tasking virtual machine, over 40 handset emulators, over 20 sample applications with source code, code signing, and support for Sprint Mobile Java extensions.

MicroEmulator
The MicroEmulator (MicroEmulator project, 2008) is a versatile and extensible CLDC/MIDP mobile device emulator. It can be used as a standalone application on any Java-enabled workstation. It uses AWT or Swing as a presentation layer, and a Java WebStart version is also planned. The emulator can run as an applet for demonstrating J2ME applications (MIDlets) over the web. It is a pure Java implementation of J2ME in J2SE. MicroEmulator is licensed under the LGPL. There is also NetBeans Mobility MicroEmulator support.

5.2 Service Description Languages

A core component of every SCE, apart from editors and emulators, is a language for the description of services. Such a language should provide means for describing service logic in terms of operational and functional semantics. Most service description languages found in the literature are based on XML, a standard adopted by the W3C that can easily be used for defining the syntax of other languages. In this section, we survey several languages that have been proposed for service description. Some of them are closer to the IPAC paradigm, as sketched in the Description of Work document, while others target different domains but are quite mature.

5.2.1 Various XML-based Service Description Languages

The Extensible Markup Language (XML) (XML, 2008) is a widely adopted markup language for structuring data on the Web. The information marked up with XML may be content (words, pictures, etc.) as well as metadata about content. Basic characteristics of XML are:

- Data are identifiable using (nested) tags (elements). Each application can define the required tags.
- XML is plain text, which means that it is easily handled and human-readable. Moreover, data storage is software- and hardware-independent.
- It allows validation through schema languages like XML Schema (XSD).

Web Services Description Language (WSDL) and BPEL4WS
WSDL is a language for describing web services and how they should be bound to specific Web addresses (WSDL, 2008). Web services are a set of end-points operating on the exchange of messages containing either document- or procedure-oriented information. The actions on these end-points are described abstractly and are then bound to a concrete protocol with a specific network address. Moreover, the message format is also specified in order to define an end-point. WSDL separates the abstract definition from the concrete specification in order to allow the high-level definitions to be reused in different deployment environments. Hence, operations and end-points are first defined abstractly and then bound to a specific protocol and assigned to a specific address, so that the concrete end-points are defined. Definitions are generally expressed in XML and include both data type definitions and message definitions. These definitions are usually based upon some agreed XML vocabulary; the agreement could be within an organization or between organizations. Operations describe actions for the messages supported by a Web service. Operations are grouped into port types; port types define a set of operations supported by the Web service. Service bindings connect port types to a port, and a port is defined by associating a network address with a port type. A collection of ports defines a service. An extension of the Web Services model is the language BPEL4WS. BPEL4WS (BPEL4WS, 2008) is a language for the formal specification of business processes and interaction protocols. A business process may be composed of smaller activities. Business logic indicates that certain constraints hold between the executions of specific activities; for example, a sequence of activities indicates that each activity should start after the termination of the preceding one. Within a complex business process there can be some activities that are not essential for the overall process outcome. These activities are called optional, in distinction to mandatory activities.

XL
XL is a high-level language for the specification of web services (Florescu et al, 2002). XL is portable and fully compliant with W3C standards. The main advantage of this language is that it allows programmers to concentrate on the logic of their application. XL provides high-level and declarative constructs for actions which are typically carried out in the implementation of a Web Service, e.g. logging, error handling, retry of actions, workload management, events, etc. XL is based solely on the W3C XML logical data model. This model describes a set of entities (nodes, values, sequences, schema components) present in an XML document and a set of relationships among them. The data model is defined in XQuery, a functional language for XML data manipulation. XQuery provides the ability to retrieve and interpret data from diverse sources (XML documents, databases). Its basic building block is the expression. A query is composed of a preamble containing function definitions, local type declarations, function declarations and XML Schema imports, and of a main expression to be evaluated. A library of functions and operations is available.

In XL, a Web Service is identified by a unique URI. Additionally, the service specification can contain local data declarations, declarative clauses and specifications of the service operations. An example of service syntax is the following:


service <URI>
    <Function Definitions>
    <Local Declarations>
    <Declarative Clauses>
    <Operations Specification>
endservice

Functions have the same meaning and are defined as in XQuery. In the local declarations, two kinds of variables can be defined: the first concerns the internal state of the service, while the second concerns the particular conversation in which the specific service is involved. The declarative clauses aim to control the service's global state, how the service operations are executed, and how the service interacts with other services; this section involves a set of individual clauses, each of which has a specific meaning. The operation section describes the operations that the service performs; each service performs either simple or multiple tasks. Every time a service receives a message, an operation is called. XL provides means for service composition and specification. It offers developers ways to construct complex services without knowledge of any other programming language. Most importantly, XL adds a level of declarative behaviour to services. Service invocation is carried out through messages: a service sends a message to another Web service, and the invocation can be synchronous or asynchronous.

Padus
Padus is an XML-based language introducing two main concepts: aspects and aspect deployment (Braem et al, 2006b). An aspect is a reusable description of a crosscutting concern (i.e., common tasks that can be found in several processes across a system), and contains one or more pointcuts and advices. A pointcut chooses interesting points in the execution of the target WS-BPEL process, and exposes target objects to the advice, which expresses the behaviour that should be inserted at that pointcut. The pointcut language of Padus is a logic language based on Prolog. The complete target process is reified as a collection of facts that can be queried by the pointcut. Advice code is defined in an XML element that specifies the type of the advice.

Service Modelling Language (SML)
SML (SML, 2008) is published by the W3 Consortium as a working draft submitted by several major IT companies. SML provides a rich set of constructs for creating models of complex IT services and systems. These models typically include information about configuration, deployment, monitoring, policy, health, capacity planning, target operating range, service level agreements, etc. Models focus on capturing all invariant aspects of a service that must be maintained for the service to be functional. They are units of communication and collaboration between designers, implementers, operators, and users, and can easily be shared. They drive modularity, re-use, and standardization. Also, when changes happen in a running service, they can be validated against the intended state described in the model; the actual service and its model together enable a self-healing service. They enable increased automation of management tasks, and the automation facilities exposed by the majority of IT services could be driven by software for reliability.

A model in SML is realized as a set of XML documents. The XML documents contain information about the parts of an IT service, as well as the constraints that each part must satisfy for the IT service to function properly. Constraints are captured in two ways: a) Schemas, which are constraints on the structure and content of the documents, and b) Rules, which are Boolean expressions that constrain the structure and content of documents in a model.
XML-Based Languages Comparison
The languages described in this section exploit the advantages of XML and inherit its limitations. They are mainly used for the markup of the basic components of each specific system. The main benefit of such languages is that developers are not restricted to specific tags and are able to define their own. Moreover, rules, e.g., for the format of each element, can be defined in order to achieve a higher degree of efficiency in information manipulation. However, all of them need an additional application that handles the meaning of the elements defined in the document. The majority of the presented languages are Web Service description efforts. WSDL, BPEL4WS and XL include elements for the definition of the basic characteristics of each service. Hence, developers must dedicate effort to defining elements for the description of the interface of Web Services and their functionalities, as well as the protocols and methods necessary for their invocation. SML is used for the description of complex IT services; however, effort must be spent on the definition of rules and restrictions on them. This language is essentially a combination of XML, XML Schema, Schematron and XPath, which requires the appropriate knowledge on the developer's side.
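As a point of reference for how such abstract descriptions relate to implementation code, the JAX-WS annotations below expose a hypothetical Java interface as a web service; a JAX-WS runtime derives the WSDL portType, messages and binding from these annotations. This is only one possible mapping, given as an illustration, and is not part of the languages surveyed here.

import javax.jws.WebMethod;
import javax.jws.WebService;

// Hypothetical service contract; each annotated method becomes
// an operation of the generated WSDL portType.
@WebService
public interface WeatherReport {

    @WebMethod
    double currentTemperature(String city);
}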


5.3 Agent Specification Languages

A special case of languages used for application description can be found in the domain of agent-based systems. Specification languages are necessary in such systems because agents act in open environments and their behaviours can change dynamically. Hence, a formal language is needed to describe the basic characteristics of agents (behaviours, plans, goals, actions, etc.) as well as the environment in which they act. In this section, we briefly describe the most important languages found in the relevant literature.

EXtensible Agent Behaviour Specification Language (XABSL)
XABSL (XABSL, 2008) is a very simple language to describe behaviour for autonomous agents, such as autonomous robots or virtual agent-like characters in computer games. The behaviour is described by a set of specific files: a) symbol files, containing the definitions of symbols, b) basic behaviour files, containing prototypes for basic behaviours and their parameters, and c) option files, each containing a single option. These have to be compiled to an intermediate code using the XABSL compiler. At the start-up of the agent execution, this intermediate code is read by the XABSL engine, which executes the behaviours during run-time. Agents following such a hierarchical state machine architecture can be completely described in the XABSL language. The XABSL language design ensures the scalability of agent behaviour solutions and the extension of their behaviours, and offers high validation and compile speed, resulting in a short change-compile-test cycle. XABSL files can be exported in XML format, providing interoperability with existing tools.

Pigeon
Pigeon (Tahara et al, 2004) is a language for the description of mobile agent applications. Pigeon is characterized by features with which developers can handle variation in the environments where agents are called to operate. The language is able to handle security, as this is one of the most important issues of mobile agent technology. It is also capable of expressing temporal logic to specify security requirements. Pigeon has two parts: the first is the behaviour specification part, which looks like a program; the second is the requirement specification part, expressed in a logic system. Developers can flexibly customize Pigeon by changing the modelling. The logic system of Pigeon is Reflective-Rewriting Logic, which has reflection functionality. This provides descriptive power to Pigeon, including the capability of modelling temporal logic.

SLABS
SLABS (Zhu, 2003) consists of a set of specifications of agents and castes. The caste in SLABS is a natural evolution of the concept of classes in object-orientation. Castes can play a significant role in the requirements analysis and specification as well as the design and implementation of multi-agent systems. A caste description contains a description of the structure of its states and actions, a description of its behaviour, and its environment. An inheritance relation can be defined on castes. The relationship between agents and castes is similar to that between objects and classes. The SLABS language enables software engineers to explicitly specify the environment of an agent as a subset of the agents in the system that may affect its behaviour. As a template of agents, a caste may have parameters. The SLABS language uses transition rules to specify an agent's behaviour.
Each rule consists of a description of a scenario of the environment, the action to be taken by the agent in that scenario, and a condition on the agent's internal state and previous behaviour. There are four basic forms of scenarios, which can be logically combined by & (and), ∨ (or) and ¬ (not) to form more complicated descriptions of the system's state.

JADL
JADL (Konnerth et al, 2006) is a language developed for supporting the creation of complex service-based applications. This language is based on three-valued predicate logic, providing open-world semantics. It comprises four main elements: plan elements, rules, ontologies, and services. From the perspective of agents, a service call is handled in the same way as internal actions are executed. Agents consist of a set of ontologies, rules, plan elements, and initial goal states, as well as a set of so-called AgentBeans (Java classes implementing certain interfaces). Moreover, agents have specific knowledge about the environment in which they act; a fact base represents this knowledge. AgentBeans contain methods which can be called directly from within JADL, allowing the agent to interact with the real world via user interfaces, database access, robot control, and more. JADL was designed for dynamic and open systems. In such systems, services and agents come and go at any time, so the validity period of local information is quite short. Therefore, JADL aims to incorporate uncertainty into the knowledge representation and thus allows the developer to actively deal with outdated, incomplete or wrong data. Through JADL, knowledge bases can be defined; every object that the language refers to needs to be defined in an ontology (i.e., knowledge base). JADL implements strong typing: variables range over categories rather than the full universe of discourse. Categories are represented in a tree-like structure, where each node represents a category with an attached set of (typed) attributes.


Cognitive Agents Specification Language (CASL)
CASL (Shapiro et al, 2002) is used for specifying multiagent systems. It has a mix of declarative and procedural components to facilitate the specification and verification of complex systems. Agents are seen as entities with mental states, such as beliefs, plans and goals, and their behaviour can be defined in terms of their mental states. Developers using CASL can describe the effects of actions on the world and on the mental states of agents. In CASL, a dynamic application domain is represented using an action theory formulated in the situation calculus. A situation is a snapshot of the domain. There is a set of initial situations corresponding to the views of the various agents. Situations can be structured into trees, where each root is an initial situation and the arcs are actions. Predicates and functions whose value may change from situation to situation are called fluents; the effects of actions on fluents are defined using successor state axioms. In this language, there are two types of connections between agents: a regular two-way telephone-like connection and a recording connection. The first case is the classical communication interaction, while in the second one entity leaves a message for another. Concerning agents, two aspects of mental states are modelled: knowledge and goals. These are represented with a possible-worlds semantics in the situation calculus, using situations as possible worlds.

Some Notes on Agent Specification Languages
Agent specification languages are mostly oriented towards open multi-agent systems and their attributes. All of them define elements used for the description of basic concepts in agent systems such as behaviours, goals, plans, actions, etc. Usually, they are provided within a specific agent development framework and mainly aim at the manipulation of the mental state of each agent and the interaction process between agents. These languages are complex, in the sense that they involve many elements which are necessary for the description of autonomous entities such as agents. Mainly, they contain a set of concepts used for the manipulation of agents' internal and external states, as well as elements for processing queries defined for the use of services provided by other entities. It should be noted that in most cases specific protocols have to be used for the communication between agents. Whether such languages will be exploited in the context of IPAC will depend on their expressiveness and applicability to the mobile nodes' execution environment.

5.4 Relevant and Popular Service Creation Environments

In the following paragraphs, some relevant and/or well-known SCEs are presented. They are either research products or industrial solutions. These SCEs can be useful during the IPAC requirements analysis phase, as they contain all the typical components of an SCE.

Oracle BPEL Process Manager
The Oracle BPEL Process Manager (BPEL-PM, 2008) is provided as a member of the Oracle Fusion Middleware family of products. It is software for designing, deploying and managing BPEL processes (typically used in SOA solutions) and provides a GUI for building such processes. Figure 5-11 depicts the basic components of the BPEL Process Manager.


Figure 5-11 Components of the BPEL Process Manager

The BPEL Designer provides a graphical and user-friendly way to build BPEL processes and it is available as a module/plug-in for JDeveloper and Eclipse. Moreover, it enables developers to view and modify the BPEL source at any time. The user designs BPEL processes by dragging and dropping elements into the process and editing their property pages. This eliminates the need to write BPEL code. The designer provides mechanisms to integrate BPEL processes with external services and wizards to integrate adapters and services such as workflows, transformations, notifications, sensors, and worklist task management with the process. JDeveloper BPEL Designer is integrated with Oracle JDeveloper 10g. Oracle JDeveloper 10g is an IDE for building applications and Web services using the Java, XML, and SQL languages. Oracle JDeveloper 10g supports the entire development life cycle with integrated features for designing, coding, debugging, testing, profiling, tuning, and deploying applications.

Figure 5-12 JDeveloper BPEL Designer with a BPEL process being designed

The basic components of the JDeveloper BPEL Designer (Figure 5-12) are listed below:

- Application Navigator: displays the project files.
- Diagram View: provides a visual view of the BPEL process.
- Source View: shows the syntax inside the BPEL process project files.
- Property Inspector: enables the user to view details about an activity.
- Structure Window: offers a structural view of the data in the project currently selected in the Diagram View.
- Log Window: displays messages about the status of the deployment.

The Eclipse BPEL Designer is quite similar to the JDeveloper BPEL Designer.

PoLoS SCE
The PoLoS SCE assists the Service Creator in creating new location-based services and deploying them on the PoLoS platform (Ioannidis et al, 2003). Each service is defined through a script written in a new service specification language, which is based on XML and is capable of supporting the description of the functionality pertaining to each LBS. This language is flexible and easy to use, so as to allow the specification and easy deployment of any type of LBS without excessive effort and cost on the service provider's side.

Eclipse and the GEF framework have been used as a basis for creating the PoLoS SCE. The service creation editor (which allows editing services and also interfaces to the repository and the deployment module) and the debugger have been implemented as plug-ins. The service control logic compiler has been developed based on the Java tools JavaCC (JavaCC, 2008) and Java Tree Builder (JTB) (JTB, 2008). Figure 5-13 depicts the overall architecture of the PoLoS SCE.

Figure 5-13 POLOS SCE architecture

The Service Editor (Figure 5-14) offers the service designer the possibility to edit the various parts of a service specification script. The latter includes a service logic part, service configuration data and, possibly, other resources. Both a textual and a graphical editor are available. They are assisted by a service management tool unit that provides easy access to the various parts of the service script. The service logic is presented to the end users as an execution flow diagram. Within this diagram it is possible to add and remove commands corresponding to XML tags and to define the flow of execution between the various commands. Created services are stored in the Service Repository.

The Service Deployment Module allows deploying a service on the PoLoS kernel once it has been created in the SCE. The deployment phase includes the transfer of the service specification file onto the PoLoS kernel, all the settings needed for the execution of the service on the PoLoS kernel, and finally the compilation of the service logic into Java code and its encapsulation into an Enterprise Java Bean (EJB). The Service Debugger allows debugging the service control logic. When a debug session is set up from the IDE for a given service, traces are generated during the service execution and returned to the debugger, which displays them and saves them into files.


Figure 5-14 POLOS SCE Graphical Editor

RapidFLEX SCE
The RapidFLEX SCE from Pactolus (RapidFLEX, 2008) is a Windows-based GUI application that allows users to quickly and easily create multimedia communication services for next-generation IMS, converged TDM/IP, or pure VoIP networks. The RapidFLEX SCE enables fine-grained control of SIP signalling and control over database, web and IP media server resources. This gives developers the power to create sophisticated enhanced services rapidly and cost-effectively. Applications built with the RapidFLEX SCE are translated automatically into XTML (eXtensible Telephony Markup Language), an XML-based service description language developed by Pactolus. These XTML-based applications run on the Pactolus RapidFLEX Application Server. For developers, the RapidFLEX SCE advantages include:

- Graphical Call Flow Representation: build applications using drag-and-drop Plug-in Action Components (PACs) that enable various built-in operations, linking them in a visual, flow-chart-style representation to construct the call flow.
- Event Handlers for Asynchronous Call Events: simplify the overall design of call flows by eliminating the embedded conditional logic that would otherwise be needed to handle these events explicitly.
- Built-in Programming and Installation Tools: include a JavaScript editor that can be invoked from any PAC and an integrated XML parser for checking SCE XML output.
- Extensible Functionality: developers can add C, C++, or Java code to incorporate their own programming logic into a call flow.

WIT-CASE SCE
The WIT-CASE SCE (Braem et al, 2006b) is a high-level, visual Service Creation Environment for Web Service-based applications that provides service composition templates, verification of compatibility and guidelines, and advanced separation of concerns through aspect-oriented software development. The main concepts of the SCE are listed below:

- Services: the basic building blocks of the SCE, which correspond to concrete web services. In addition to the usual WSDL API specification, a service is documented using a WS-BPEL process that specifies the external protocol the service adheres to.
- Composition Templates: used to compose multiple services (expressed through the WS-BPEL language). They are abstract descriptions of web service compositions, and may contain one or more placeholders for services.
- Aspects: encapsulate crosscutting concerns and can be deployed to services and composition templates. Aspects are implemented using Padus (Braem et al, 2006a), an aspect-oriented extension to WS-BPEL.

Figure 5-15 provides a screenshot of the SCE’s GUI. It is implemented as an Eclipse plug-in (using GEF).


Figure 5-15 The SCE Graphical User Interface

The editor view is used to edit compositions and consists of the drawing canvas and the palette. The palette contains selection and connection tools, and shows the available services, composition templates and aspects as they are loaded from the library. The outline view shows a tree-based overview of the state of the composition, and the properties view shows the properties of the element that is currently selected in the editor view or in the outline view. The changes made through the visual editor are taken into account for the composition at hand. The SCE also guides users in creating correct compositions by verifying whether compositions are correct while they are created: when a service is dragged onto a placeholder, the SCE checks whether the service's protocol is compatible with the composition template's protocol. The verification engine is based on the PacoSuite approach (Wydaeghe, 2001), which introduces algorithms based on automata theory to perform protocol verification. The WS-BPEL specifications of each service, aspect and composition template are translated into deterministic finite automata (DFA), and by applying the PacoSuite algorithms the SCE can decide whether the service's protocol is compatible with the composition template's protocol.
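The idea behind such automata-based checks can be sketched in a few lines: encode each protocol as a DFA over a shared alphabet of message names and test, via the product construction, whether the two automata accept a common message sequence. This is a heavily simplified, hypothetical illustration and not the PacoSuite algorithm itself.

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified DFA: states are integers; delta[state].get(message) = next state.
class Dfa {
    final int start;
    final Set<Integer> accepting;
    final Map<String, Integer>[] delta;

    Dfa(int start, Set<Integer> accepting, Map<String, Integer>[] delta) {
        this.start = start;
        this.accepting = accepting;
        this.delta = delta;
    }
}

public class ProtocolCheck {

    // Two protocols are "compatible" here if the product automaton reaches a
    // pair of accepting states, i.e. both sides agree on at least one dialogue.
    static boolean compatible(Dfa a, Dfa b) {
        return explore(a, b, a.start, b.start, new HashSet<Long>());
    }

    private static boolean explore(Dfa a, Dfa b, int sa, int sb, Set<Long> visited) {
        long key = ((long) sa << 32) | sb;
        if (!visited.add(key)) {
            return false;                       // already explored this state pair
        }
        if (a.accepting.contains(sa) && b.accepting.contains(sb)) {
            return true;                        // common accepted sequence found
        }
        for (Map.Entry<String, Integer> move : a.delta[sa].entrySet()) {
            Integer next = b.delta[sb].get(move.getKey());
            if (next != null && explore(a, b, move.getValue(), next, visited)) {
                return true;
            }
        }
        return false;
    }
}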

6 Services/Middleware technologies

6.1 Middleware Technologies

In this section, we survey existing technologies that seem appropriate for implementing middleware for embedded and mobile environments. Since the focus of IPAC is on open technologies and platform-independent solutions, we emphasize Java-related technologies and tools.

6.1.1 Development Platforms

Symbian
The Symbian platform is a commonly used platform for mobile and embedded devices (http://www.symbian.com/developer/index.html). From a development perspective, it is stable and very flexible. Moreover, applications built on Symbian have optimized memory management. However, the Symbian platform has some drawbacks; the major one is that it is not fully portable: applications built for one device cannot always be used on different devices and different underlying operating systems.

Java Micro Edition
Java Platform, Micro Edition (JME) is a technology that allows development on small devices such as cellular phones, personal digital assistants (PDAs) and other embedded devices (http://java.sun.com/javame/index.jsp). It includes implementations of the Connected Limited Device Configuration (CLDC) and the Mobile Information Device Profile (MIDP), as well as complete or partial implementations of some optional packages defined through Java Specification Requests (JSRs). Due to the significance and market share of JME, a more detailed analysis of this technology follows. JME comprises a configuration, which determines the virtual machine used, and a profile, which defines the application by adding domain-specific classes.


Currently two configurations exist: the Connected Limited Device Configuration (CLDC) and the Connected Device Configuration (CDC). CLDC includes the most basic set of libraries and Java virtual machine features required for each implementation of JME on highly constrained devices. CLDC targets devices with slow network connections, limited power (often battery operated), 128 KB or more of non-volatile memory, and 32 KB or more of volatile memory. CDC is a subset of Java Standard Edition (JSE) with all the CLDC classes added to it. Since CDC was built upon CLDC, applications developed for CLDC devices also run on CDC devices, but the opposite does not hold. This configuration provides a standardized, portable, full-featured virtual machine for embedded and/or off-the-shelf devices, such as smartphones, two-way pagers, PDAs, car navigation systems, etc. Typically such devices run on a 32-bit processor and have a minimum of 2 MB of memory. CDC is also associated with the Foundation Profile, which is discussed below. Regarding the profiles of JME, the most common is the Mobile Information Device Profile (MIDP). MIDP is used in cellular phones and pagers and is built on top of CLDC. It allows the development of applications and services for network-connectable, battery-operated mobile handheld devices and provides a standard Java Runtime Environment (JRE) that allows new applications and services to be deployed dynamically on end-user devices. MIDP includes a two-level UI architecture comprising:

a) a low-level UI API, which allows full access to a device's screen, as well as access to raw key and pointer events, but provides no user interface controls;

b) a high-level UI API, which provides simple user interface controls for devices with small displays.

In addition to the CLDC packages, MIDP includes the following three packages:

- A package that defines the classes providing control over the UI, at both the high level and the low level (javax.microedition.lcdui).
- The main MIDP class, which provides MIDP applications with access to information about the environment in which they are running (javax.microedition.midlet).
- A set of classes that provide a mechanism for MIDlets to persistently store and retrieve data (javax.microedition.rms).
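A minimal MIDlet illustrating the profile's lifecycle and the high-level UI API is sketched below; it assumes a CLDC/MIDP 2.0 tool chain such as the emulators discussed earlier, and the class name is hypothetical.

import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

// Minimal MIDP application: the application manager drives the lifecycle
// through startApp(), pauseApp() and destroyApp().
public class HelloMidlet extends MIDlet {

    protected void startApp() {
        Form form = new Form("IPAC demo");      // high-level UI API
        form.append("Hello from a MIDlet");
        Display.getDisplay(this).setCurrent(form);
    }

    protected void pauseApp() {
    }

    protected void destroyApp(boolean unconditional) {
    }
}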

Foundation Profile
The Foundation Profile is a CDC profile intended for devices requiring the complete Java virtual machine implementation and up to the complete J2SE feature set. Typical Foundation Profile implementations use some subset of that API set, depending on the additional profiles supported.

Other platforms
PhoneME is a project that targets the expansion of the usage of Java ME in the mobile market (https://phoneme.dev.java.net/). It is an open-source technology addressing the technical requirements of "feature phone" devices. Most current off-the-shelf mobile devices include a high-resolution screen, SMS, MMS, IM, e-mail, graphics, a camera, a music player and an Internet browser; the PhoneME project aims at the better utilisation of all these characteristics. All of the above features are enabled by the JME CLDC and MIDP configuration. CLDC and MIDP are the most widely adopted JME application platforms used in current mobile phones. PhoneME includes the latest milestone implementations of CLDC and MIDP as well as implementations of a number of optional JSR packages (see Table 6-2 for some of these packages).

Table 6-2 JSR Packages for JME

- JSR 75 (PDA Optional Packages for the J2ME Platform): standardizes and specifies Java access to embedded device features. It sits on top of the Connected Limited Device Configuration (CLDC) and has two main components: one that provides access to the Personal Information Management (PIM) data of the phone, and the File Connection Optional Package (FCOP), which gives access to the local file systems on devices such as PDAs.
- JSR 82 (Java APIs for Bluetooth): a specification for APIs that allow Java MIDlets to use Bluetooth on supporting devices. The API described in JSR 82 covers the following Bluetooth protocols and profiles: SDAP (Service Discovery Application Profile), RFCOMM (Serial Cable Emulation Protocol), L2CAP (Logical Link Control and Adaptation Protocol) and OBEX (Generic Object Exchange Profile, GOEP).
- JSR 120 and JSR 205 (Wireless Messaging API): optional packages for J2ME that provide platform-independent access to wireless communication resources such as the Short Message Service (SMS). WMA can be used on top of CLDC and MIDP.
- JSR 179 (Location API): a specification enabling developers to write mobile location-based applications for resource-limited devices, providing information about the device's present physical location to Java applications. This package can be used with various profiles; the minimum platform is the Connected Limited Device Configuration (CLDC) v1.1.
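As an illustration of one of these optional packages, the sketch below obtains a single position fix through the JSR 179 Location API; availability and accuracy depend on the device, and error handling is reduced to a minimum.

import javax.microedition.location.Coordinates;
import javax.microedition.location.Criteria;
import javax.microedition.location.Location;
import javax.microedition.location.LocationProvider;

public class SingleFix {
    // Returns a human-readable position, or null if no provider/fix is available.
    public static String currentPosition() {
        try {
            Criteria criteria = new Criteria();
            criteria.setHorizontalAccuracy(500);          // metres
            LocationProvider provider = LocationProvider.getInstance(criteria);
            Location location = provider.getLocation(60); // timeout in seconds
            Coordinates c = location.getQualifiedCoordinates();
            return c.getLatitude() + ", " + c.getLongitude();
        } catch (Exception e) {
            return null;
        }
    }
}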

The EmbeddedJava application environment targets embedded devices with limited functionality and very small memory capacity (https://mobileandembedded.dev.java.net). This technology provides configuration capabilities that enable the reduction of device requirements by leaving out unnecessary classes and virtual machine features. In addition, EmbeddedJava is optimized for applications that have no GUI and aim at advanced networking capabilities. PersonalJava is a Java application environment (equivalent to Java 1.1.8) used for development on mobile and/or embedded systems (http://java.sun.com/products/personaljava/). However, both EmbeddedJava and PersonalJava are older products and have been superseded by Java Micro Edition.

29 Java Virtual Machines for Embedded and Mobile Systems

IBM VisualAge Micro Edition
VisualAge Micro Edition (http://www.embedded.oti.com/download/platform.html) is a complete end-to-end development and deployment environment for connected embedded software solutions. It provides an integrated development environment, including IBM's Java language compiler, Smart Linker, Debugger and Micro Analyzer. Deployment components include several configurations of Java class libraries, the J9 virtual machine, and extended personal configuration components. These include frameworks for building AWT- or MicroView-oriented user interfaces, access to relational databases via JavaSQL parts, real-time deterministic controls, and CORBA-based remote method invocation. The runtime environments for this product on embedded systems are:

PalmOS/68K, QNX/Neutrino PowerPC/MIPS/SH4, Linux StrongArm, Windows CE, ARM/MIPS/SH3/SH4.

Intent Java Technology Edition
The Intent JTE (Java Technology Edition) (http://www.tao-group.com) includes a Java VM for use in low-end embedded devices. The Intent JTE platform achieves a very high degree of portability by defining a virtual processor layer. The Virtual Processor describes both a language and a platform isolation interface that allows a minimal set of platform-dependent features to be isolated from the core of the Intent JTE kernel. It is a comprehensive suite of class libraries, including the full set required to meet the PersonalJava specification. Intent JTE is supported on embedded Linux and several other embedded operating systems.

Jbed
Esmertec's Jbed Micro Edition CLDC (http://www.esmertec.com/) is a Java virtual machine (JVM) for PDAs (Personal Digital Assistants), mobile phones, and Internet appliances. It is compatible with the specification that was defined under the Java Community Process (JCP). Jbed Micro Edition CLDC can run either directly on the hardware or on top of an underlying operating system such as embedded Linux or Palm OS. The new release of the Jbed Virtual Machine and Libraries for embedded Linux runs on the Yopy Linux PDA powered by the StrongARM SA-1110 microprocessor. Esmertec's embedded Linux release includes the same feature-rich functionality currently available in the Jbed Micro Edition CLDC and the Jbed RTOS Package. Both products are compiling JVMs that always execute at full native speed, while using a memory footprint similar to that of a simple interpretive JVM. Jbed runs on PowerPC 8xx, Motorola 68k family processors, ARM7TDMI, StrongARM and ColdFire.

Jeode
Jeode is a virtual machine (http://www.insignia.com/4.0/products/jeode.htm) compliant with the PersonalJava 1.2 and EmbeddedJava 1.0.3 specifications. It is available for the following operating systems: Windows CE 2.12 and 3.0, Windows NT 4, VxWorks, Linux, ITRON, Nucleus, BSDi Unix and pSOS. The following microprocessor architectures are supported: ARM, MIPS, x86, Hitachi SuperH-3, Hitachi SuperH-4 and PowerPC.

Perc
The Perc JVM (http://www.newmonics.com/perc/info.shtml) is a commercial product with real-time capabilities for embedded systems. It can be tuned to prefer runtime performance, startup time or memory footprint. It is available for a variety of operating systems (NT, Linux, Solaris, VxWorks, RTX, pSOS, CE, OSE Delta, BeOS, ETS) and processors (x86, PPC, MIPS, ARM, SPARC, and 68k). Support for several IDEs and other development tools is included as well.

Squawk
A major goal of the Squawk project (https://squawk.dev.java.net) is to write as much of the virtual machine as possible in Java, for portability, ease of debugging, and maintainability. Squawk aims at a small footprint and is CLDC 1.1 and Information Module Profile (IMP) 1.1 compliant. Squawk is meant to be used in small, resource-constrained devices; its main goal, however, is to enable Java technology in the micro-embedded space, where the majority of development today is done using low-level languages and custom sets of tools and operating systems. Powering all this functionality is a core set of Java ME technologies known as CLDC (the Java VM) and IMP. IMP is a subset of MIDP (the Mobile Information Device Profile) that removes all parts of the API relating to the requirement of a physical display device. CLDC and MIDP are the most widely adopted Java ME application platforms used in mobile phones today, which gives developers access to a wealth of resources to aid their application development.

CDC HotSpot Implementation (ex CVM)
CDC HotSpot (http://java.sun.com/j2me/docs/cdc_hotspotds.pdf) is another virtual machine, currently used mainly with the CDC configuration. HotSpot is a fully functional JVM designed for consumer and embedded devices. It supports VM security, JNI, JVMDI, RMI, and weak reference features and libraries. Essentially, it has almost all the functionality of a typical desktop JVM.

30 OSGi (Open Service Gateway initiative)

The OSGi Service Platform is a computing environment for networked services which provides a complete and dynamic component model. By running a service under an OSGi context, one can manage its operation from anywhere in the network: software components can be installed, updated, or removed on-the-fly without ever having to disrupt the operation of the hosting device. The core part of the specifications is the OSGi Framework, which defines an application life cycle management model, a service registry, an execution environment and modules. On top of this framework, a large number of OSGi layers, APIs, and services have been defined. The Framework implements a complete and dynamic component model. Bundles (i.e., components that encapsulate all the service logic and are packaged in the standard Java format, .jar files) can be installed, started, stopped, updated and uninstalled without rebooting the system and thus without stopping other bundles from functioning. The service registry allows bundles to detect the addition or removal of services and adapt accordingly. The typical lifecycle of an OSGi bundle is shown in Figure 6-16.

Figure 6-16 OSGi bundle lifecycle
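As an illustration of this lifecycle, the sketch below shows a minimal bundle activator that registers a service when the bundle is started and unregisters it when the bundle is stopped. BundleActivator, BundleContext and ServiceRegistration are standard OSGi framework types; the GreetingService interface and its implementation are hypothetical and used only for illustration.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical service interface used only for this example.
interface GreetingService {
    String greet(String name);
}

public class Activator implements BundleActivator {

    private ServiceRegistration registration;

    // Called by the framework when the bundle is started (ACTIVE state).
    public void start(BundleContext context) {
        GreetingService service = new GreetingService() {
            public String greet(String name) {
                return "Hello, " + name;
            }
        };
        registration = context.registerService(
                GreetingService.class.getName(), service, null);
    }

    // Called when the bundle is stopped; the registered service is released here.
    public void stop(BundleContext context) {
        registration.unregister();
    }
}

In a deployable bundle, this class would be referenced from the Bundle-Activator header of the bundle's manifest, and the framework would invoke it as the bundle moves through the lifecycle states of Figure 6-16.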

OSGi specifications are managed by the OSGi Alliance (http://www.osgi.org), a forum of many well-known companies and organizations. Initially, its target market comprised residential gateways, telematics applications, mobile environments, etc. During the last years, several OSGi distributions have been made available for deployment in embedded devices. In the following paragraphs we briefly describe two of the most promising solutions.

ProSyst mBedded Server
ProSyst mBedded Server (http://www.prosyst.com/products/osgi_framework.html) is a high-performance, low-footprint OSGi R4 implementation that is offered in two main distributions:


The Open Source mBedded Server Equinox Edition is based on the Eclipse Equinox OSGi. It is the ideal choice for research and open source projects.

The optimized commercial mBedded Server Professional Edition is based on the ProSyst OSGi framework implementation - it contains everything that is needed to embed this platform into mass market devices. mBedded Server Professional Edition sets new standards for performance and memory efficiency through several unique technology innovations and is optimized for small platforms with limited resources.

The server implementation is suitable for use in embedded, desktop and server scenarios. The server is compatible with the J9, JBed, Skelmir, JSCP, and Perc JVMs.

Concierge
Concierge (http://concierge.sourceforge.net) is an optimized OSGi R3 framework implementation with a file footprint of about 80 kBytes, which makes it ideal for mobile or embedded devices. Typically, these devices have VMs that are more focused on compactness; for instance, purely interpreting VMs often have a negative effect on the performance of existing OSGi framework implementations. The design of Concierge has taken such platforms into account. Concierge uses resources in a very careful way and is able to provide significantly better performance in resource-constrained environments. Compatibility and interoperability with existing OSGi framework implementations were important aspects during the design and implementation of Concierge. To allow for easy testing and migration, Concierge supports both Knopflerfish-like init.xargs startup files and Oscar-style system.properties. Concierge has been tested on a large range of devices.

31 Technologies, Protocols and Frameworks for Agent-based Systems

Since mobile application development has been extensively studied in agent-based systems, we list several frameworks and technologies from this area. In general, these frameworks refer to more structured networks of nodes (e.g., P2P, client-server mobile applications). However, since they address similar application domains, they could be consulted during the analysis of the IPAC requirements.

Jadex
The Jadex system (Braubach et al, 2005) allows for the construction of rational agents which exhibit goal-directed (as opposed to task-oriented) behaviour. The construction of Jadex agents is based on well-established software engineering techniques such as XML, Java and the Object Query Language (OQL), enabling software engineers to quickly exploit the potential of the mentalistic approach. The Jadex project is also seen as a means for researchers to further investigate which mentalistic concepts are appropriate in the design and implementation of agent systems. In addition to its usage in the context of the MedPAge project in Hamburg, several other institutes have used Jadex to implement research systems.

Jini
Jini™ technology (http://www.jini.org/) is a technology for building service-oriented architectures that defines a programming model which both exploits and extends Java™ technology to enable the construction of secure, distributed systems consisting of federations of well-behaved network services and clients. Jini technology can be used to build adaptive network systems that are scalable, evolvable and flexible, as typically required in dynamic computing environments. The term Jini refers to both a set of specifications and an implementation; the latter is referred to as the Jini Starter Kit. Both the specifications and the Starter Kit (initially created by Sun Microsystems) have been released under the Apache 2.0 license and have been offered to the Apache Software Foundation's Incubator.

JNomad
JNomad (Guo & Xing, 2007) is a framework that enables the integration of Jini technology with mobile agents. The framework alleviates many of the issues and bottlenecks faced by the existing client-server model. It is essentially a client-server model, but one that taps into the inherent advantages of the Jini architecture and the mobile agent model. Using the Jini architecture, virtually any type of service (software component or hardware device) may freely interact in a network without the need for complex protocols, messaging drivers, operating systems and cabling.

JXTA
JXTA (http://www.jxta.org/) is an open source peer-to-peer protocol specification begun by Sun Microsystems in 2001; Sun remains actively involved in the development and promotion of JXTA. The JXTA protocols are defined as a set of XML messages which allow any device connected to a network to exchange messages and collaborate independently of the underlying network topology. JXTA is the most mature general-purpose P2P framework currently available and was designed to allow a wide range of devices (PCs, mainframes, cell phones, PDAs) to communicate in a decentralized manner. JXTA peers create a virtual overlay network which allows a peer to interact with other peers even when some of the peers and resources are behind firewalls and NATs or use different network transports. In addition, each resource is identified by a unique ID (a 160-bit SHA-1 URN in the Java binding), so that a peer can change its localization address while keeping a constant identification number.


32 IPAC-like middleware frameworks

In this section we review several approaches to middleware that are relevant to IPAC. Relevance was decided based on the general architecture of the middleware solutions and their target application domains.

Gaia
The Gaia middleware (Ranganathan & Campbell, 2003) is based on a predicate model of context. This model enables the development of agents that use either rules or machine learning approaches to decide their behavior in different contexts. The middleware uses ontologies to ensure that different agents in the environment have the same semantic understanding of different context information. This allows better semantic interoperability between different agents, as well as between different ubiquitous computing environments.

Middleware for Adaptive Semantic Support (MASS)
MASS (Corradi et al, 2007) is an ontology-based middleware aiming to enhance the design, development and provisioning of context-aware applications. Specifically, MASS takes advantage of RDF and OWL in order to express semantic information on mobile devices with limited capabilities. This way, it enables automated reasoning processes that facilitate re-configuration and adaptation according to the user profile and device capabilities. Furthermore, it exploits configuration policies (e.g., obligation, authorization) that prescribe the means to access semantic services. The framework also involves a matching algorithm able to identify services that are semantically compatible with user applications. The layered architecture of MASS is presented in Figure 6-17.

Figure 6-17 The MASS Framework layered architecture

Services with Context awareness and Location awareness for Data Environments (SCaLaDE)

SCaLaDE (Bellavista et al, 2006) is a flexible middleware framework designed for supporting mobile Internet data services. The approach also investigates issues stemming from nomadic pervasive computing, such as the availability of local services and resources while users change location. Moreover, policy-based service management aims to enforce certain specifications effectively and consistently. Furthermore, SCaLaDE (see Figure 6-18) takes advantage of agent technology in order to achieve efficient autonomic and asynchronous behaviour.


Figure 6-18 The SCaLaDE middleware architecture

CARISMA: Context-Aware Reflective Middleware System for Mobile Applications
CARISMA (Capra et al, 2003) is a middleware for mobile systems that emphasizes context-aware policy selection in order to deliver specific quality of service to mobile applications. The system exploits state-of-the-art software engineering concepts such as introspection and reflection so that applications can dynamically affect the way the middleware services behave. The middleware is capable of dealing with dynamic policy conflicts through a micro-economic approach and sealed-bid auctions.

CORTEX
CORTEX (Biegel & Cahill, 2004) is a context-aware middleware approach, suited to mobile applications and, more specifically, a location-aware, event-based middleware service designed for ad-hoc wireless networking environments. The system is based on the sentient object model for representing sensors and environmental information. It also exploits CLIPS rules and the CLIPS inference engine to capture domain knowledge and reason over contextual knowledge. CORTEX has three main parts: a) sensory capture, b) context hierarchy, and c) inference engine. Through the use of interfaces each object communicates with sensors and can thus produce software events. A graphical development tool is available for building objects.

Hydrogen
The Hydrogen approach involves a layered architecture (Hofer et al, 2002) for context acquisition, targeting the mobile computing domain. Hydrogen distinguishes between remote context, i.e., information another device knows, and local context, i.e., knowledge our own device is aware of. When devices are in physical proximity they are able to exchange these contexts (context sharing). The architecture consists of three layers which are all located on the same device: a) the Adaptor layer, responsible for retrieving data from the sensors, b) the Management layer, responsible for obtaining sensor data and for providing and retrieving contexts, and c) the Context server, which offers the stored information. On top of the architecture sits the Application layer, capable of reacting to specific context changes reported by the context manager.

Conclusion on Middleware Architectures
Many of the aforementioned middleware components (e.g., MASS, SCaLaDE) are based on a two-layer architecture, where the upper layer takes advantage of existing frameworks that provide underlying middleware services like naming and monitoring. IPAC should also adopt a similar layered architecture in order to cope with its distributed, nomadic nature, by exploiting underlying middleware technologies (e.g., OSGi). Specifically, the IPAC middleware could benefit from the experience gained with such approaches, enabling the exact determination of the required middleware services that will accomplish the overall system functionality.


33 Sensor Computing-oriented Middleware

Since mobile embedded systems share several features with (wireless) sensor networks (e.g., operation in resource-constrained environments, an event-based paradigm, data fusion), we survey some middleware platforms for sensor computing environments. Most of these are based on TinyOS (http://www.tinyos.net/). TinyOS is an open-source operating system designed for wireless embedded sensor networks. Major features of TinyOS are a component-based architecture, rapid innovation and implementation while minimizing code size, an event-driven execution model, and fine-grained power management. The programming language of TinyOS is nesC, a modified version of the C programming language. TinyOS has been ported to over a dozen platforms and numerous sensor boards.

Agilla
Agilla (http://mobilab.wustl.edu/projects/agilla/, see Figure 6-19) is the first mobile agent middleware for WSNs that is implemented entirely in TinyOS. Agilla is a mobile-agent-based middleware with a stack-based architecture, which reduces code size. Agilla allows agents to move from one node to another using two instructions, clone and move. Up to four agents can run on a single sensor node; since one node can run multiple agents at the same time, multiple applications can be supported on the network simultaneously. To save energy, Agilla can move an agent to bring computation closer to the data rather than transmitting the data over an unreliable wireless network. Agilla does not have any policy for authenticating or monitoring agent activities. Also, its assembly-like, stack-based programming model makes programs difficult to read and maintain.

Figure 6-19 Overall architecture of Agilla middleware

Impala
The Impala middleware (Wyckoff et al, 1998) was designed around an event-based programming model with code modularity, ease of application adaptability and update, fault-tolerance, energy efficiency, and long deployment time in focus. Application adaptability and application update are two major issues addressed by this middleware. It follows a finite-state-machine-based approach, taking various application parameters into consideration to handle adaptability. The application updater of Impala is capable of handling incomplete updates, inconsistent updates, on-the-fly updates during code execution, etc. Although Impala has data communication support for getting data back to the base station, it does not have any support for data fusion. Its abstraction model does not take the heterogeneity of the network into consideration and its application domain is rather simplistic.

Mate
Mate (Levis & Culler, 2002) is a virtual machine for sensor networks which is implemented on top of TinyOS. It hides the asynchrony and race conditions of the underlying TinyOS. Mate has a stack-based architecture with three execution contexts: clock, send, and receive. Mate breaks a program down into small self-replicating capsules consisting of 24 instructions; these capsules are self-forwarding or self-propagating. Although Mate has a small, concise, resilient, and simple programming model, its energy consumption is high for long-running programs. Mate's virtual machine architecture increases security and takes care of malicious capsules, but its programming model is not flexible enough to support a wide range of applications.

TinyDB
TinyDB (http://telegraph.cs.berkeley.edu/tinydb) is a query processing middleware system based on TinyOS. TinyDB provides power-efficient in-network query processing for collecting data from individual sensor nodes, which results in a reduced number of exchanged messages and reduced energy consumption.


It defines two different types of messages for query processing, Query Messages and Query Result Messages, and also has Command Messages for sending commands to sensor nodes. While TinyDB provides nice abstraction support and a good aggregation model, it does not provide much functionality by means of middleware services, so most services have to be provided by the applications running on top of it.

TinyCubus
TinyCubus (http://www.ipvs.uni-stuttgart.de/abteilungen/vs/forschung/projekte/tinycubus) is a flexible, adaptive cross-layer framework implemented on top of TinyOS. Flexibility and adaptation are the two major issues behind the design philosophy of TinyCubus. To achieve this, the TinyCubus architecture is divided into three parts (see Figure 6-20): i. the Tiny Cross-Layer Framework, ii. the Tiny Configuration Engine, and iii. the Tiny Data Management Framework. Although TinyCubus's flexible architecture allows it to be used in different environments, its overhead due to the cross-layer design may be prohibitive in some environments. Also, adaptation policies are static and scalability is still not good.

Figure 6-20 Overall architecture of TinyCubus middleware

TinyLime
TinyLime (http://lime.sourceforge.net/info/tinyLime.html) is implemented on top of TinyOS exploiting Crossbow's Mote platform. It is an extension of Lime (Murphy et al, 2001). TinyLime follows an abstraction model based on a shared tuple space containing the sensed data, and supports data aggregation to extract more information from the collected data. TinyLime consists of three main components: i. the Lime Integration Component, ii. the Mote Interface, and iii. the Mote-Level Subsystem. TinyLime, however, does not have any built-in security support. Its programming model is rather one-time and does not provide good support for adaptability or scalability.

Sensation
An approach that provides an abstraction layer over sensor network middleware is presented in (Hasiotis et al, 2005). The Sensation approach defines a middleware integration platform where an application can exploit several underlying sensor networks (along with their respective operating systems). Each separate sensor network is integrated into the system through a driver-like plug-in, similar to the ODBC concept of a driver. Applications can issue queries and receive responses through an abstract XML-based language.

34 Autonomic and Reconfigurable Systems

Autonomic reconfigurable systems build on self-organization mechanisms, whose principles have evolved in nature and can be found not only in technical systems but also in everyday life (Foerster, 1960). It has, however, been a challenging task for engineers to study these ideas and apply them to technical systems.


Especially in the areas of autonomic computing and ad hoc networking, the self-organization concept is important because of the spontaneous interaction of multiple heterogeneous components over wireless radio connections (Murthy & Manoj, 2004) without human interaction. Moreover, the term self-organization is used in conjunction with other so-called self-X capabilities. These capabilities include the fundamental features of the autonomic computing architecture, which are self-configuring, self-healing, self-optimizing, and self-protecting (sometimes referred to as "self-CHOP"). A typical definition of self-organization (Dressler, 2006a) is that it is a process in which a pattern at the global level of a system emerges solely from numerous interactions among the lower-level components of the system; moreover, the rules specifying interactions among the system's components are executed using only local information, without reference to the global pattern. When applying this idea to ad hoc networks, self-organization can be seen as the interactions between nodes in the network leading to globally visible effects, e.g. the transport of messages from a source node to a sink node.

Self-Organization Methodologies in Ad Hoc Networks
At the beginning, communication in wireless networks was inspired by well-known approaches from wired standards such as the Internet. Then issues such as mobility (Bai & Helmy, 2004; Bhatt et al, 2003), high dynamics in terms of new devices joining a network and other devices leaving, and rapid changes of the environment were identified. Such problems made a distributed configuration necessary, organized on the basis of local feedback loops. Hence, networks could evolve autonomously (Low et al, 2005) with specific processes known as self-organization (Dressler, 2006a; Gerhenson & Heylighen, 2003). The most prominent issues to be solved using self-organization methods in ad hoc networks are (a) scalability, (b) reliability and (c) availability. Self-organization in ad hoc networks is mostly addressed either by using local state information that is updated through feedback loops and neighborhood relationships, or by probabilistic methods that are employed for making the local decisions. Interestingly, biologically inspired approaches offer many possible solutions that might influence technical systems (Dressler, 2006b). We can group the self-organization methodologies according to specific criteria, such as (a) their use of state information and (b) their function in the protocol stack. The mechanisms based on the use of state information try to avoid global state information in order to increase the scalability of the particular approach; they include location-based mechanisms, neighbourhood information, probabilistic algorithms and bio-inspired methods (Figure 6-21). While the required state is reduced towards the probabilistic methods, the determinism or predictability of the algorithms is reduced as well. Therefore, the best solution for a particular application scenario must be chosen carefully by taking into account all application requirements.

[Figure 6-21 depicts this spectrum of mechanisms, ranging from global state, over neighbourhood information and location-based approaches, to probabilistic and bio-inspired methods, with representative examples such as link state routing, central coordination, topology control, clustering, table-driven routing, MAC, data-centric routing and task allocation.]

Figure 6-21 Methodologies based on the use of state information
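As a minimal illustration of the probabilistic category, the sketch below shows a gossip-style forwarding decision in which a node rebroadcasts a newly received message only with a fixed probability, keeping only local state and trading determinism for reduced traffic. The class, the forwarding probability and the use of message identifiers are purely illustrative and do not correspond to any of the surveyed systems.

import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Sketch of probabilistic (gossip-based) dissemination: each node keeps only
// local state (the message ids it has already seen) and forwards a new
// message with probability p instead of always flooding.
public class GossipForwarder {

    private final double forwardProbability;              // e.g. 0.7
    private final Set<String> seenMessageIds = new HashSet<String>();
    private final Random random = new Random();

    public GossipForwarder(double forwardProbability) {
        this.forwardProbability = forwardProbability;
    }

    // Returns true if the message should be rebroadcast to the neighbours.
    public boolean onReceive(String messageId) {
        if (!seenMessageIds.add(messageId)) {
            return false;                                  // duplicate: never re-forward
        }
        return random.nextDouble() < forwardProbability;
    }
}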

On the other hand, different kinds of self-organization issues arise when the mechanisms depend on a particular layer. A common control plane coordinates and controls mobility questions, and some additional cross-layer or cross-service issues have to be considered (Figure 6-22). Such cross-layer issues are, for example, energy, security, end-to-end performance, and coverage. Depending on the application scenario at hand, particular mechanisms from different layers might interact to achieve a common goal (e.g., to reduce the necessary amount of energy), but they might also interfere with one another (e.g., by defining different sleep cycles at different layers to reduce the energy consumption).


[Figure 6-22 shows the classical protocol stack (application, transport, network, MAC and physical layers) complemented by cross-layer optimization and mobility management planes.]

Figure 6-22 Methodologies based on the function in the protocol stack

Self-Healing and Self-Adaptation in Autonomous Environments
Fault detection and recovery are the main steps of self-healing. In (Ahmed et al, 2007), the challenges of self-healing have been addressed and an approach to develop a self-healing service for autonomic pervasive computing is presented. The self-healing service has been developed and integrated into the middleware named MARKS+ (Middleware Adaptability for Resource discovery, Knowledge usability, and Self-healing). Fault detection and notification, as well as faulty device isolation, are taken care of by the healing manager of the self-healing service using system monitoring. This approach does not include any fault correction, which is the next step towards a fully self-healable system. In the currently running ANA (Autonomic Network Architecture) research project (ANA, 2008), self-healing techniques that address fault correction are studied. A self-healing system should also address the issue of adaptation within an autonomic environment. An autonomous system needs to be re-configurable, in order to be able to cope with different devices, states, conditions and loads, without the need for user intervention to install plug-ins or reboot the system. One software engineering challenge in implementing such a system is managing the degree of coupling between the components that affect system adaptation (the adaptation engine) and the components that realize the system's functional requirements (the target system). Designers can either hardwire the adaptation logic into the target system or separate the concerns of adaptation and target system functionality. The latter can be achieved either through specialized middleware like IQ-Services (Eisenhauer & Schwan, 1998) and ACT (Sadjadi & McKinley, 2004) or through externalized architectures that include a reconfiguration/repair engine, as in Kinesthetics eXtreme (KX) (Kaiser et al, 2003) or Rainbow (Cheng et al, 2004). The Kheiron framework (Griffith et al, 2007) can dynamically attach/detach an engine capable of performing reconfigurations and repairs on a target system while it continues executing. Kheiron is lightweight and transparent to the application and the execution environment. The framework requires neither recompilation of the application nor specially compiled versions of the managed execution runtime. It uses the profiling facility of Microsoft's managed execution environment, the Common Language Runtime (CLR), to track the application's execution, and effects changes via bytecode rewriting and by creating/augmenting the metadata associated with modules, types, and methods. The concept of this prototype could be extended to any other modern execution environment, such as the JVM, where the main challenge is to overcome the limitations of each environment. The Rainbow framework (Cheng et al, 2004) uses externalized control mechanisms based on architectural models to dynamically monitor and adapt a running system, often at a fairly global, module level. Its architecture is illustrated in Figure 6-23.


Figure 6-23 The Rainbow framework

Autonomic Architecture Concept
The autonomic computing architecture concepts provide a mechanism for discussing, comparing and contrasting the approaches that different vendors use to deliver self-managing attributes in an autonomic computing system. The autonomic computing architecture starts from the premise that implementing self-managing attributes involves an intelligent control loop. This loop collects information from the system, makes decisions and then adjusts the system as necessary. An intelligent control loop can enable the system to perform major "autonomic" tasks: self-configuration, by installing software when it detects that software is missing; self-healing, by restarting a failed element; self-optimization, by adjusting the current workload when it observes an increase in capacity; and self-protection, by taking resources offline if it detects an intrusion attempt (IBM, 2003). Another appealing design model for autonomic computing systems, with roots in applied control theory, encodes autonomy as rules. Such rules are generally embedded in meta-software systems (agents) exhibiting the advocated autonomic properties. In such an approach, the autonomic system is expected to perform intelligently and autonomously, taking into consideration the boundaries (rules) and the nature of the environment and applications (Omar et al, 2006). This approach is also used in the "Accord" project, which defines two classes of rules, behavioural rules and interaction rules (Liu & Parashar, 2006), with a three-phase execution model ensuring efficient and consistent rule execution. The main challenge in such approaches is to deploy rules that can practically lead to autonomous systems, with all the necessary decision-making mechanisms that will lead to a stable yet flexible architecture. Additionally, an ideal autonomic system will be able to "create" rules on an on-event basis, which means that events not predicted by the existing rules can be handled without any human intervention.
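The intelligent control loop described above can be sketched as a simple monitor-analyze-plan-execute cycle. In the sketch below all interfaces (ManagedResource, Metrics, Action, Analyzer, Planner) are illustrative placeholders rather than part of any specific product or specification.

// Minimal sketch of an autonomic control loop (monitor -> analyze -> plan -> execute).
public class AutonomicManager {

    // Illustrative placeholder types for the managed element and its observed state.
    interface Metrics { }
    interface Action { }
    interface ManagedResource { Metrics readMetrics(); void apply(Action action); }
    interface Analyzer { boolean violatesPolicy(Metrics metrics); }
    interface Planner { Action planCorrection(Metrics metrics); }

    private final ManagedResource resource;
    private final Analyzer analyzer;
    private final Planner planner;

    public AutonomicManager(ManagedResource resource, Analyzer analyzer, Planner planner) {
        this.resource = resource;
        this.analyzer = analyzer;
        this.planner = planner;
    }

    // One iteration of the control loop; in a real system this would run periodically.
    public void iterate() {
        Metrics metrics = resource.readMetrics();             // monitor
        if (analyzer.violatesPolicy(metrics)) {               // analyze
            Action action = planner.planCorrection(metrics);  // plan
            resource.apply(action);                           // execute (e.g., restart a failed element)
        }
    }
}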

35 Technologies for the Embedded Knowledge Plane

Since the need for re-configuration and self-adaptation in mobile applications has only recently gained popularity within the research community, few works have investigated knowledge representation and reasoning tasks on portable devices. The restricted resources of such equipment impose severe constraints. In this section we survey several state-of-the-art technologies that are used for knowledge-based systems, whether or not they emphasize the mobile computing domain.

36 Knowledge Representation Languages

Regarding knowledge representation formalisms, the restricted resources of mobile nodes impose limitations on the languages used for knowledge representation. Common modelling languages in this area are based on ontologies, aiming to take advantage of the maturity provided by the existing Semantic Web standards and efficient inference algorithms.


Resource Description Framework (Schema), Description Logics (DL) and Prolog (logic programming) constitute such languages, which allow the formal representation of semantic descriptions about resources.

RDF
The Resource Description Framework (RDF) is a language for modelling and representing knowledge about Web resources. It is based on mature ideas from Artificial Intelligence, such as Semantic Networks and Frames. It is a data model for writing simple statements about objects (resources) and defining relations between them. RDF becomes particularly useful in cases where that kind of knowledge needs to be processed by applications. It provides simple, machine-processable semantics and its common syntax is XML-based. Specifically, RDF identifies things through Web identifiers (Uniform Resource Identifiers, URIs) and describes resources through properties and values. For example, in the case of a web document, such attributes could be the title, the author and the last modification date. Generally, these statements constitute triples of the form <subject, predicate, object>. In a more human-friendly view, the aforementioned triples can be considered to compose a directed graph where every arc (predicate) is directed from a resource (subject) to a value (object), which can be either a resource or a literal (Figure 6-24).

Figure 6-24 A simple graph representation
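As a sketch of how such triples are manipulated programmatically, the fragment below uses the Apache Jena API to build the single statement used as a running example later in Table 6-3 and to write it out in two serialization formats. The resource and property URIs follow that example; the code assumes a current Apache Jena distribution (older releases used the com.hp.hpl.jena package prefix instead).

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

// Build an in-memory RDF graph holding one triple:
//   <http://www.example.org/bob> <http://www.example.org#name> "Bob" .
public class RdfTripleExample {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        Resource bob = model.createResource("http://www.example.org/bob");
        Property name = model.createProperty("http://www.example.org#", "name");
        bob.addProperty(name, "Bob");

        model.write(System.out, "N-TRIPLES");  // plain-text serialization
        model.write(System.out, "RDF/XML");    // XML serialization
    }
}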

Description Logics
Description Logics (DLs) (Baader et al, 2003) constitute a family of knowledge representation languages that are subsets of first-order logic (FOL). DLs are equipped with formal, logic-based semantics, with an emphasis on the reasoning process. Typical reasoning tasks include consistency checking of the knowledge base, concept satisfiability, instance checking, etc. Hence, DLs are more expressive than RDF and aim to provide well-understood mechanisms for formalizing domain knowledge. A DL-based knowledge base is composed of two components: the TBox and the ABox. The TBox contains the vocabulary of the application domain, called the terminology, as well as axioms based on that vocabulary. Such a vocabulary consists of concepts and roles: concepts are generic descriptions of sets of individuals, while roles constitute binary predicates for defining properties of individuals. The ABox, on the other hand, includes assertions about individuals that may refer to either concepts or roles. For example, a statement declaring that a specific individual is an instance of a concept resides in the ABox, while a statement denoting that "every human is mortal" belongs to the TBox. Figure 6-25 shows the generic architecture of a DL knowledge representation system.

Figure 6-25 Architecture of a Description Logics knowledge-based system
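Written in standard DL notation, the classical example above amounts to one terminological axiom and one assertion, from which instance checking derives a new fact:

\[
\text{TBox: } \mathit{Human} \sqsubseteq \mathit{Mortal} \qquad
\text{ABox: } \mathit{Human}(\mathit{socrates}) \qquad
\models \mathit{Mortal}(\mathit{socrates})
\]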


Each description logic language is determined by a set of constructs enabling the use of atomic concepts and atomic roles in order to define complex ones. These constructs directly affect the expressive power of the language and, thus, the complexity of the inference tasks. As a result, the selection of the appropriate description logic language for describing a specific domain includes the examination of the imposed requirements for representational expressiveness. Finally, DLs form the basis for the definition of the Web Ontology Language (OWL) (Dean et al, 2004) for representing knowledge on the Web.

Prolog
Prolog (Deransart et al, 1996) is probably the most well-known logic programming language used in the areas of knowledge representation and artificial intelligence in general. It is a general-purpose, declarative language based on predicate logic. In Prolog, the domain knowledge is expressed in terms of rules and facts in textual format. Moreover, most Prolog systems and frameworks provide very efficient reasoning capabilities. This has led to the adoption of Prolog by mobile and time-critical applications, and several Prolog engines are available for resource-limited devices (JIProlog: http://www.ugosweb.com/jiprolog/download.aspx, WitMate: www.witmate.com, MProlog: http://mindcode.org/mobile-prolog).

37 Language Serialization Modes

As already discussed, RDF is an abstract data model and one of the most widely adopted languages for knowledge representation nowadays. Multiple ways exist to represent RDF triples. First of all, RDF/XML (http://www.w3.org/TR/rdf-syntax-grammar/) is an XML syntax for RDF and a W3C standard since 1998. In order to encode RDF graphs in XML, this syntax transforms the nodes and arcs of the graph into XML elements, attributes, element content and attribute values. Though RDF/XML is the common serialization format for representing RDF triples (and, thus, OWL expressions), other, more light-weight syntactic representation techniques have also been proposed and are supported by RDF-enabled applications. N-Triples (http://www.w3.org/2001/sw/RDFCore/ntriples/) is a plain-text serialization format for RDF graphs. It is the simplest syntactic representation of RDF, expressed in US-ASCII characters and defined by a simple grammar: triples are written on separate lines and consist of a subject, a predicate and an object, followed by a period. Turtle (http://www.w3.org/2007/11/21-turtle) constitutes a textual syntax for expressing and exchanging RDF triples. Specifically, it is a superset of the N-Triples notation, adding some syntactic means (e.g., namespaces) to simplify the representation of RDF knowledge. Notation3 (http://www.w3.org/DesignIssues/Notation3.html) is a textual serialization format for RDF, aiming to achieve maximum human readability; it also extends Turtle by providing a number of additional features like numeric literals. TriX (Carroll & Stickler, 2004) intends to transfer the simplicity of N-Triples to an XML-based serialization format. It was proposed by Hewlett-Packard and aims to overcome known RDF/XML limitations like the improper placement of the RDF layer on top of XML (McBride, 2003). RDFON (http://n2.talis.com/wiki/RDF_JSON_Specification) constitutes a resource-centric and light-weight serialization of RDF to JSON (http://www.json.org/); specifically, it represents a set of RDF triples as nested data structures. Finally, a widely known and simple syntax is provided by the Prolog and Datalog languages, where all information is provided as predicates or simple rules (composed as sequences of predicates). Table 6-3 presents an example of a single triple, stating that the "name" property of some entity is "Bob", expressed in the aforementioned serialization formats. Most of the aforementioned RDF-related formats are supported by tools able to transform existing RDF/XML documents into these syntaxes.

Table 6-3 Existing serialization formats of RDF

Formalism: RDF(S) / DLs

RDF/XML (XML serialization):
<rdf:Description rdf:about="http://www.example.org/bob">
  <eg:name>Bob</eg:name>
</rdf:Description>

N-Triples (textual serialization):
<http://www.example.org/bob> <http://www.example.org#name> "Bob" .

Turtle (textual serialization):
eg:bob eg:name "Bob" .

Notation3 (textual serialization):
hcls:kb foo:asserts { eg:bob eg:name "Bob" }

TriX (XML serialization):
<TriX xmlns="http://www.w3.org/2004/03/trix/trix-1/" xmlns:eg="http://example.org/">
  <graph>
    <uri>http://example.org/graph1</uri>
    <triple>
      <qname>eg:bob</qname>
      <qname>eg:name</qname>
      <plainLiteral>Bob</plainLiteral>
    </triple>
  </graph>
</TriX>

RDFON (textual serialization):
{ "http://www.example.org/bob" : { "http://www.example.org#name" : [ { "type" : "literal", "value" : "Bob" } ] } }

Formalism: Logic Programming

Prolog / Datalog (textual serialization):
name(bob, "Bob").

38 Policy Representation Languages

Nowadays, policy languages are gaining popularity in the area of real-world applications and autonomic computing environments in particular. Such approaches allow re-configuration of component behaviour and dynamic adjustability of applications through efficient reasoning mechanisms. Ponder (Damianou et al, 2001) is a declarative, object-oriented language for specifying policies for the management and security of distributed systems. It provides an event-triggered condition-action representation of rules in order to specify obligation and authorization policies. The language also takes advantage of the management domain feature in order to facilitate the modelling of complex object systems. Ponder2 (http://ponder2.net/) is a recent re-design and re-implementation of Ponder that copes with issues like scalability and extensibility. Rei (Kagal, 2002) is an ontology-based framework capable of representing policies and rules in pervasive computing applications. The language is based on deontic logic and represents policy specifications in terms of rights, obligations, dispensations and prohibitions. It is also associated with an inference engine able to reason over the specified policies. The language exploits a Prolog-like syntax in conjunction with RDFS expressions. KAoS (Uszok et al, 2003) is also an ontology-based policy language. Specifically, it takes advantage of the DAML/OWL ontology languages, which are based on description logics, in order to define authorizations and obligations. It also provides a graphical tool for managing policies, and a Java Theorem Prover is responsible for the necessary reasoning support over KAoS statements. The Autonomic Computing Policy Language (ACPL) is an XML-based hierarchical model able to represent policy rules in autonomic computing environments. The language is based on the Autonomic Computing Expression Language (ACEL) (Agrawal, 2005) in order to specify the conditions under which a policy should be applied. ACPL can further specify elements like decisions to be taken, results, configuration profiles, etc. It also provides rule priorities and an API which facilitates policy management. Protune (PROvisional TrUst NEgotiation) (Bonnati & Olmedilla, 2005) is a trust negotiation framework developed in the context of the REWERSE project (http://rewerse.net/). Specifically, it provides a quite expressive rule-based policy language combining declarative and dynamic aspects. The language provides flexible ways to define and manage static and dynamic policies without requiring ad-hoc programming. It also comes with both automated trust negotiation and advanced user-awareness mechanisms: the former is responsible for deducing the derivations about the involved peers, while the latter is capable of presenting policies and explaining decisions in natural language.
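To illustrate the event-triggered condition-action style of rule that most of these languages share, the following sketch models a single policy rule in plain Java. The Event, Context and Action types and the example rule itself are illustrative only and do not correspond to the syntax of any particular language above.

import java.util.function.Predicate;

// Minimal event-condition-action (ECA) policy rule sketch.
public class EcaPolicyExample {

    interface Event { String type(); }
    interface Context { int batteryLevel(); }
    interface Action { void execute(); }

    static class PolicyRule {
        private final String triggeringEventType;
        private final Predicate<Context> condition;
        private final Action action;

        PolicyRule(String eventType, Predicate<Context> condition, Action action) {
            this.triggeringEventType = eventType;
            this.condition = condition;
            this.action = action;
        }

        // Fire the action only if the event matches and the condition holds.
        void onEvent(Event event, Context context) {
            if (event.type().equals(triggeringEventType) && condition.test(context)) {
                action.execute();
            }
        }
    }

    public static void main(String[] args) {
        // "On a low-battery event, if the battery is below 20%, disable bulk transfers."
        PolicyRule rule = new PolicyRule(
                "BATTERY_LOW",
                ctx -> ctx.batteryLevel() < 20,
                () -> System.out.println("Disabling bulk data transfers"));

        rule.onEvent(() -> "BATTERY_LOW", () -> 15);
    }
}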


39 Mobile Reasoning Techniques and Tools

Pocket KRHyper (Sinner & Kleemann, 2005) is a Java-based theorem prover for first-order logic providing an API to access its services. It implements hyper-tableaux algorithms in order to support automated reasoning on portable devices. Furthermore, a transformation of DL expressions into sets of first-order formulae allows KRHyper to be used in modern Semantic Web-enabled applications. This reasoning engine was developed in the context of the IASON project. IASON (www.uni-koblenz.de/~iason) is an integrated system that aims to provide personalized location-aware information in mobile environments. It takes advantage of semantic descriptions in terms of DL and of reasoning on mobile devices in order to accomplish matchmaking between user profiles and annotated messages. The key idea of the project is that the mobile devices should play an active role in the personalization process; this way, mobile nodes are not required to be connected to the Internet in order to achieve adaptive behaviour. In the context of the IASON project, a description logic language able to express role hierarchies, transitive roles and inverse roles is adopted. This rather simple language targets the modelling of complex concepts of a mobile environment, including user and service profiles. Another approach to mobile reasoning is presented in (Müller et al, 2006). It is based on the well-known tableaux algorithm used in most modern DL reasoning systems and targets mobile devices that can support Java Micro Edition. However, this reasoner is not yet a fully-fledged engine, although the approach seems quite promising. Finally, mobile Prolog engines could be used for reasoning over knowledge bases in resource-constrained environments; such engines have already been mentioned in section 36.

40 Models for Autonomic Computing Platforms

In this section, we survey several modelling efforts made by researchers or standardization fora. The presented models are efforts to declaratively design autonomic, context-aware, personalized and/or mobile applications. Since IPAC will exploit knowledge engineering techniques for node and application re-configuration, several of these models may prove useful. Specifically, some common models that are involved in typical knowledge-based applications are: the user model (or profile), the context model, the application profile, the device model and possibly the content model.

User Profiling
A user profile usually refers to a set of preferences, characteristics, information, rules and settings that are used by an application or service to deliver customized information to users. 3GPP has defined a specification (http://www.3gpp.org/) for user profiling (the Generic User Profile, GUP) covering all user-related data, including the user, services, user equipment, and so on. The main objective is to provide means for enabling harmonized usage of user-related information originating from different entities and locations. The 3GPP GUP can be used for diverse adaptations in different applications. In the IST SPICE project (Villalonga et al, 2007) a mobile ontology is used to describe information related to all aspects of a mobile application environment. This ontology contains a number of sub-ontologies; one of them is the User Profile ontology, which provides the appropriate concepts for the description of user characteristics. The profile sub-ontology provides a means for describing situation-dependent user preferences. Hence, the user can easily specify conditions describing a specific situation and, since these conditions are machine processable, the activation of situation-dependent actions can be carried out automatically by the platform. A novel approach to user modelling is presented in (Wang et al, 2006). The model defines a user's interests in a multi-layer tree which is dynamically adjusted: the top layers are used to model user interests in fixed categories, and the bottom layers cover dynamic events. Hence, the model can detect users' behaviour with respect to both fixed categories and dynamic events, and consequently capture changes of interest.

Context Modelling
Context modelling in Ambient Intelligence environments is studied in (Preuveneers et al, 2004). In such applications, information about the context must be available in a very general manner: various types of information should be assembled to form a representation of the context of the device on which the application runs. To allow interoperability in an Ambient Intelligence environment, it is necessary that the context terminology is commonly understood by all participating devices. An extensible context ontology for creating context-aware infrastructures, ranging from small embedded devices to high-end service platforms, is presented. Another context ontology, CONON (CONtext Ontology), is presented in (Wang et al, 2004). CONON is used for modelling context in pervasive computing environments and for supporting logic-based context reasoning. CONON provides an upper context ontology capturing general concepts about basic context and offers extensibility for adding domain-specific ontologies in a hierarchical manner.


Researchers in (Henricksen et al, 2004) provide a fact-based tool for context modelling which helps developers explore and specify context requirements. Through this tool one can define the objects required for context information and the types of information (facts) that are of interest in relation to each object. Moreover, developers can define the appropriate source for each fact type and specify dependencies and constraints. A Context Modelling Language (CML) is presented, which provides a variety of extensions to the notation concerning the previously mentioned objects. Furthermore, CML provides extensions to support special constraints on temporal and alternative fact types, as well as the annotation of fact types with metadata describing quality, dependencies and so on. Several approaches to spatial modelling have been published in the literature: in (Tsetsos et al, 2006) a spatial ontology is presented and several other relevant modelling approaches are cited. Lastly, a survey on context modelling has been published by (Strang & Linnhoff Popien, 2004).

Application Modelling
Application modelling refers to the structured and formal definition of all entities used for the generic description of an application (including several aspects of its lifecycle, such as deployment, execution, discovery, etc.). In the CARISMA framework (Capra et al, 2003), application profiles are described using the abstract syntax shown in Figure 6-26. Profiles are passed to the middleware each time a service is invoked; the middleware consults the profile of the application that requests the service, queries the status of the resources of interest to the application itself and determines which policy can be applied to the current context, thus relieving the application from performing these steps. More approaches to policy description were discussed in section 38. A generic application description model is presented in (Lacour et al, 2005). This model can be expressed in several specific application descriptions. It describes an application as a list of entity hierarchies along with a list of connections between them. Three computing entities are considered: a) system entities, b) processes, and c) codes to load. Each system entity may be deployed on distributed compute nodes.

Figure 6-26 CARISMA Application Profile Abstract Syntax

Content Modelling
Content modelling refers to the structured description of the most important aspects of the information used (consumed, produced, etc.) by an application. Since in IPAC the basic content is the information itself (whether originated by sensors, applications or users), we do not foresee content modelling to be a key part of its architecture. Nevertheless, we present the Dublin Core metadata. The Dublin Core Metadata Initiative (DCMI) (http://dublincore.org) is an organization dedicated to promoting the adoption of interoperable metadata standards and developing specialized metadata vocabularies for describing resources. The Dublin Core Metadata Initiative provides simple standards to facilitate the finding, sharing and management of information. This is achieved through:

The development and maintenance of international standards for describing resources. The support of a worldwide community of users and developers. The promotion of the use of Dublin Core solutions.

The Dublin Core metadata element set is a standard for cross-domain information resource description. It provides a simple and standardised set of conventions for describing things online in ways that make them easier to find. Dublin Core is widely used to describe digital materials such as video, sound, image, text, and composite media like web pages. Implementations of Dublin Core typically make use of XML and are based on the Resource Description Framework.

Device Modelling

IPAC ICT-2008-224395 44/56

Page 45: SEVENTH FRAMEWORK PROGRAMME - IPAC …ipac.di.uoa.gr/sites/default/files/IPAC_WP2_D21_v06_1Aug... · Web viewThe original version of the standard specified infrared transmission as

July 31, 2008 Report

specification for describing device capabilities and user preferences. Latest CC/PP vocabulary is based on XML syntactic representation of RDF allowing to be extended with further semantic descriptions. It is also designed to work with a wide range of devices including PDAs, personal desktops and mobile phones.User Agent Profile (UAProf) (OMA, 2001) is an implementation of CC/PP focused on portable devices defined by Open Mobile Alliance (OMA) (http://www.openmobilealliance.org/). It describes certain features of mobile device including model, capabilities and media types supported. Such information can be exploited by external systems (e.g., content providers) in order to determine suitable formats for a specific device.Conclusion on Knowledge Representation and Serialization TechnologiesThe ad-hoc nature of the IPAC paradigm imposes severe limitations to the knowledge engineering processes employed. In the context of IPAC, the embedded devices will have restricted capabilities complicating the management of knowledge. Hence, intelligence has to be captured in compact forms in order to achieve efficiency in knowledge representation and reasoning. In this context, light-weight textual serialization formats (e.g., triple-based formalisms, Prolog) seem to be more preferable than the XML-based ones.
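To give a rough feel for how compact a triple-based representation can be on a constrained node, the following sketch (hypothetical names, plain Python, not an IPAC component) stores context facts as subject-predicate-object triples and answers simple pattern queries.

# A minimal sketch (for illustration only) of a triple-based context store,
# in the spirit of the light-weight formalisms preferred above.

class TripleStore:
    def __init__(self):
        self.triples = set()          # {(subject, predicate, object), ...}

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return all triples matching the given pattern; None acts as a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("node42", "hasLocation", "room-A")
store.add("node42", "hasTemperature", "23.5")
store.add("room-A", "partOf", "building-1")

# All facts known about node42:
print(store.query(s="node42"))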

41 Information dissemination and collective context awareness algorithms

In self-organizing and nomadic networks, similar to the ones studied in IPAC, traditional message routing algorithms are not efficient; such algorithms are mostly designed for Mobile Ad hoc Networks (MANETs). In section 8.1, we present some alternative approaches for information dissemination in such networks. In section 8.2, we extend this survey with related work on the dissemination of contextual information that aims at achieving collaborative context-awareness. Such algorithms share many features with those presented in section 8.1, but also take into account the special features of contextual information (e.g., spatial and temporal validity, freshness).

42 Information Dissemination in Nomadic Environments

SODAD
A novel method for disseminating information in self-organizing Vehicular Ad hoc Networks (VANETs) is presented in (Wischhof et al, 2005), where segment-oriented data abstraction and dissemination (SODAD) is proposed. SODAD does not require that a large portion of vehicles is equipped with an inter-vehicle communication (IVC) system, because the main objective is to distribute information over the largest possible range rather than to reach the maximum possible number of vehicles. SODAD is divided into two layers: a) Map-Based Data Abstraction and b) Data Dissemination. Regarding the data abstraction process, each vehicle has to be equipped with a digital map, which is divided into segments of known, varying size, chosen in an adaptive way. Each node generates information for all segments in its transmission range. An aggregation function is then applied, producing a tuple that completely describes the information sensed or received for a segment at a node. This node- and segment-specific tuple is then distributed. The higher the number of IVC-equipped vehicles, the more accurate the distributed data will be. As far as the data dissemination process is concerned, all messages are transmitted in one-hop broadcasts and no routing is performed. In order to send the segment-specific information, the application uses a store-and-forward technique: each node compares each received message with the currently available information and, if needed, performs a local update. Finally, the application periodically broadcasts packets with its current information on all relevant segments in the local area. For the purpose of evaluating the approach presented above, a self-organizing transportation system (SOTIS) that implements SODAD was developed. Traditional traffic-information systems (TIS) are organized in a centralised way. Because of the disadvantages that centralized services for traffic information dissemination exhibit, a decentralized self-organizing TIS was designed by combining a digital map, a positioning system and wireless ad-hoc communication among vehicles. SOTIS implements an adaptive scheme with respect to the broadcast interval. As opposed to static schemes, an adaptive scheme avoids overload conditions and favours the propagation of significant changes. Simulations conducted to evaluate this adaptive broadcast scheme showed that it indeed has advantages over a static one, as far as collisions per node and information quality are concerned.

Autonomous Gossiping
The problem of searching for resources in a MANET is well studied, more so than its dual problem: that of disseminating resources in a network. Traditional epidemic algorithms follow the approach of flooding the whole network. Autonomous Gossiping (A/G), presented in (Datta et al, 2004), is an algorithm that does not require any existing infrastructure, but does not guarantee completeness, as opposed to publish/subscribe or flooding schemes. A/G is considered a well-suited algorithm for non-critical applications, where the advantages of completeness are less important than the cost of infrastructure maintenance.
MANETs differ from infrastructure-based networks: the former are decentralized and self-organizing, whereas the latter require global coordination. Broadcast or geocast approaches have been proposed to distribute information with the aim of covering the whole network or part of it. Besides these, there are methods for sending data to a specific node, using mobile ad-hoc routing algorithms. The flooding approach is more applicable in a wireless environment than in a wired one, because of the wireless multicast advantage. Following a direction completely opposite to publish/subscribe schemes, in A/G the data items themselves try to identify the nodes through which they are best suited to spread. This decision about which nodes are suitable is based on the item's profile and on the node's advertisement (similar to a subscription). In this way, the dissemination algorithm requires only local information (the item's profile and the node's advertisement) in order to take autonomous decisions. Mobile nodes hold data items which might be of interest to each other. These data items compete for limited resources, e.g., available memory in a node or transmission priority. Nodes reward or punish incoming data items according to their suitability. In this way, emphasizing the selective nature of A/G, data items constantly struggle to survive by trying to choose truly suitable nodes to enter. A/G is an epidemic algorithm with emphasis on selectivity rather than completeness. As opposed to traditional epidemic algorithms, A/G more closely resembles actual epidemic spreading and brings together the domain of content-based communication and that of epidemic algorithms, in the context of MANETs. A/G behaves like a proactive scheme similar to pre-fetching ones, while at the same time it has on-demand characteristics, behaving like intentional multicast (Winoto et al, 1999). A/G can be used to perform broadcasting, multicasting, geocasting, or a combination of geocasting and multicasting, and it does not maintain routing tables. A/G in MANETs can be seen as mirroring the real epidemic spreading of diseases, with the difference that the goal here is to disseminate information, not to prevent it from spreading.

Warning Delivery Service in Inter-vehicular Networks
(Fracchia & Meo, 2008) focuses on inter-vehicular networks providing a warning delivery service. As soon as a potential risk is detected, the propagation of a warning message is triggered, with the aim of guaranteeing a safety area around the point where the risk was observed. Multiple broadcast cycles can be adopted so that a given lifetime of the safety area is guaranteed. The service is based on multi-hop ad hoc inter-vehicular communications with a probabilistic choice of relay nodes. The considered scenario consists of high-speed streets and highways in which vehicles exhibit one-dimensional movements along the direction of the road. Several analytical models for the study of this service are proposed. The models are used to discuss system design issues, which include the proper setting of the forwarding probability at each vehicle so that a given probability of receiving the warning can be guaranteed to all vehicles in the safety area. When a danger is detected, the propagation of a warning message is triggered and the message is delivered through multiple hops in the ad hoc network. The core of multi-hop warning delivery is the probabilistic forwarding scheme performed locally by each vehicle that receives the message.
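A minimal sketch of such a probabilistic relay decision is shown below (plain Python; the parameter names and values are illustrative assumptions, not the actual design of (Fracchia & Meo, 2008)).

import random

# Illustrative sketch of a probabilistic forwarding decision, loosely inspired by
# the warning delivery service described above; names and values are assumptions.

def should_forward(position, risk_position, safety_area_length, forwarding_probability):
    """Rebroadcast a received warning with the given probability, but only while
    the vehicle is still inside the safety area behind the detected risk."""
    inside_safety_area = 0 <= (risk_position - position) <= safety_area_length
    return inside_safety_area and random.random() < forwarding_probability

# A vehicle 300 m behind the risk, with a 1 km safety area and p = 0.6:
if should_forward(position=700.0, risk_position=1000.0,
                  safety_area_length=1000.0, forwarding_probability=0.6):
    print("relay the warning message")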
As shown, the models can be used to accurately and effectively compute a number of performance indices, including the probability of informing a vehicle, the message delivery delay, and the number of duplicate messages received by a vehicle. Multiple broadcasting cycles may be introduced to guarantee a safety area for a given time period. The broadcast period should be set according to the vehicle speed, the safety area extension, and the transmission range. The transmission range and the forwarding probability should be jointly set so as to achieve the desired trade-off between service reliability and redundancy. Contrary to other wireless systems and services whose performance deteriorates with the transmission range, in the case of warning delivery services large transmission ranges are preferable. The extension of the safety area is not critical, provided that the transmission range, the forwarding probability and the vehicle density are such that the network is well connected.

Other related work
(Kephart & White, 1991) present a model for epidemic spreading in random graphs. They extend a standard epidemiological model by placing it on a directed graph and use analysis and simulation to study its behaviour. They define the conditions under which epidemics are likely to occur and they explore the dynamics of the expected number of infected individuals as a function of time. Each of the individual systems is represented by a graph node. Directed edges from a given node to other nodes represent the set of individuals that can be infected by the initial node. A rate of infection is associated with each edge, as well as a rate at which infection can be detected and cured. The authors in (Boguna et al, 2003) study epidemic spreading in complex networks. They provide a review of recent results concerning epidemic spreading in random correlated complex networks. An analytical description is provided, and a review of the analytical treatment of the epidemic SIS (Susceptible - Infected - Susceptible) and SIR (Susceptible - Infectious - Recovered) models in complex networks at different levels of approximation is presented. The authors report the existence of an epidemic threshold separating an active or endemic phase from an inactive or healthy phase. In (Zesheng & Chuanyi, 2005) the authors analyze a Markov process-based framework that characterizes the spreading of epidemics through the SIS model and the impact of the underlying topology on propagation.
In this work, a mono-epidemical propagation model is introduced based on the spatial independence and dependence of the infectious nodes.
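For intuition, the following toy simulation (plain Python, assumed parameter names) runs SIS dynamics on a small arbitrary graph; it only illustrates the kind of behaviour analysed in the works above, not their analytical treatment.

import random

# Toy SIS (Susceptible-Infected-Susceptible) simulation on an arbitrary graph.

def sis_step(graph, infected, beta, delta):
    """One synchronous step: infected nodes infect each susceptible neighbour with
    probability beta and recover (become susceptible again) with probability delta."""
    newly_infected = {v for u in infected for v in graph[u]
                      if v not in infected and random.random() < beta}
    recovered = {u for u in infected if random.random() < delta}
    return (infected | newly_infected) - recovered

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}   # adjacency lists
infected = {0}
for _ in range(20):
    infected = sis_step(graph, infected, beta=0.3, delta=0.1)
print("infected nodes after 20 steps:", sorted(infected))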

43 Context Dissemination for Collaborative Context-awareness

Collaborative Context-Awareness (CCA) implies understanding the context of others, which in turn provides richer context for an individual. A Collaborative Context-Aware System (CCAS) comprises a group of nodes capable of sensing, fusing, inferring and intercommunicating in order to achieve a common or similar context. A CCA application relies on information dissemination algorithms for context sharing. One can find great similarities between the epidemiological model and the internals of a CCAS, mostly in the abstractions of virus spreading, virus severity, virus interdependencies and possible transmutations. The dissemination of contextual information should be handled by a simple message-passing protocol that fulfils the desired spreading requirements (i.e., time-space constraints). Contextual information should reach potential recipients within a certain time frame (e.g., in less than 2 minutes). Moreover, such information could be considered valid only within a certain geographical area (e.g., detailed location information captured by a certain node can be exploited only by nearby nodes). The CCA application in a certain node does not necessarily require the incoming contextual information as input. The ad-hoc topology of the underlying network (i.e., nodes move and establish ad-hoc dialogs among themselves stochastically) reduces the applicability of neighbourhood discovery schemes and complex routing protocols. Although a simple flooding scheme could support the key requirements reported above, it does not safeguard node energy efficiency and incurs many undesirable transmissions. Other, more elaborate schemes are less desirable as they fail to operate in highly dynamic topologies similar to the one assumed here. Another approach is epidemical spreading as the underlying mechanism for contextual information dissemination. One could claim that the stochastic nature of epidemical spreading does not guarantee the widest possible dissemination of context. However, epidemical context spreading achieves very high topology coverage even for low infection rates (i.e., the probability of passing a message to a certain neighbour).

A method for collaboratively determining and disseminating context and knowledge (inferred context) to a group of nomadic nodes through an epidemical model is proposed in (Anagnostopoulos & Hadjiefthymiades, 2008). In the considered setting, epidemical spreading results in energy-efficient mobile applications that are robust w.r.t. transient sensor failures or context disturbances (Khelil et al, 2002). Context (e.g., location, speed, proximity to other devices) is propagated across an ad-hoc network. Collaborating nodes support each other through knowledge diffusion in order to achieve common goals. The exchange of context in the presented model goes beyond a simple flooding scheme through the adoption of an epidemic model that relays information in a stochastic manner. The proposed scheme also covers the concept of epidemical transmutation. Several dependencies among pieces of context have to be taken into consideration in information dissemination (temporal dependencies or semantic relations). Nodes have reasoning capabilities which can further process and transform the incoming context; the derived context can then be relayed throughout the network. For that reason, the notions of contextual hierarchy and temporal dependencies among pieces of context are adopted.
New context, whether locally inferred or recently sensed, becomes a potential epidemic, which also propagates through the network. This multi-epidemic model relies on semantic dependencies, which are modelled through a hierarchical representation scheme. The proposed architecture is tested by configuring different spreading parameters. The mobility of the nodes and the temporal validity of the exchanged context are also taken into account (i.e., relayed information remains valid for a certain period of time and not forever). In the proposed model, the algorithms for context dissemination and reasoning are provided, and the context validity period is used for aggregating contextual information from diverse sources. Moreover, a different spreading behaviour is assumed for each epidemic, and experimental results show that such behaviour may affect the spreading pattern of the transmuted epidemic in light of node mobility. In addition, the reliability and the efficiency of the proposed model are compared with several dissemination strategies (semantics-based or not). It was shown that this model demonstrates better performance than the compared strategies, at the cost of additional computational time linear in the number of possible epidemical transmutations. Finally, the performance of the scheme under conditions of sensor failures leads to economies of scale and increases the robustness of mobile context-aware applications.

Considerable research related to epidemic-based information dissemination in ad-hoc networks has been performed in (Salkham et al, 2006). The authors consider how the components of a context-aware system can collaborate to achieve a common goal. They provide a taxonomy of such CCA systems based on three axes: goals, approaches and means. Collaboration among context-aware entities may be based not only on communicating contextual information but also on sensed and fused data, as well as possible next actions to perform. Such communication enables efficient collaboration through more precise inference, decision making and awareness.
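The following rough sketch (plain Python, with assumed names and deliberately simplified logic) captures only two of the ingredients discussed above, stochastic relaying and a temporal validity period; it is not the actual algorithm of (Anagnostopoulos & Hadjiefthymiades, 2008).

import random
import time

# Rough sketch of stochastic context relaying with a validity period,
# in the spirit of the epidemical scheme discussed above.

def relay_context(context_item, neighbours, infection_rate, validity_seconds, send):
    """Forward a context item to each neighbour with probability infection_rate,
    but only while the item is still temporally valid."""
    age = time.time() - context_item["timestamp"]
    if age > validity_seconds:
        return 0                       # stale context is silently dropped
    relayed = 0
    for node in neighbours:
        if random.random() < infection_rate:
            send(node, context_item)   # 'send' is supplied by the messaging layer
            relayed += 1
    return relayed

item = {"type": "location", "value": "junction-17", "timestamp": time.time()}
count = relay_context(item, neighbours=["n1", "n2", "n3"],
                      infection_rate=0.4, validity_seconds=120,
                      send=lambda node, msg: print("->", node, msg["type"]))
print("relayed to", count, "neighbours")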
Additionally, the architecture discussed in (Mantyjarvi et al, 2002) proposes an approach for collaborative context dissemination among groups of nodes. This architecture takes into account the reliability of contextual information in the dissemination process. There are various sources of context information: sensors, tags and positioning systems. The raw data from these sources are translated into higher-level interpretations of the situation. The discussed model improves the reliability of context recognition through an analogy to human behaviour. A collaborative context determination scheme is described, and issues of context recognition, context communication and network requirements are raised. The manipulation of context takes place over wireless ad hoc proximity networks. The model in (Kempe et al, 2004) refers to spatially constrained context dissemination. The discussed model takes into consideration the dissemination range of the contextual parameter. The researchers propose distance-based propagation bounds as a performance measure for gossip algorithms. They describe their gossip algorithms and distinguish between two conceptual layers of protocol design: a) a basic gossip algorithm and b) a gossip-based protocol built on top of a gossip algorithm, which determines the contents of the messages that are sent and the way these messages cause nodes to update their internal states.
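The two-layer separation can be illustrated with a minimal sketch (plain Python, hypothetical names): the gossip layer only picks a peer and delivers a message, while the protocol layer decides what the message contains and how a node updates its state (here, a simple maximum aggregate).

import random

# Illustrative two-layer gossip structure: a basic gossip algorithm plus a
# protocol on top that defines message content and state updates.

def gossip_round(nodes, neighbours, make_message, merge_state):
    """Basic gossip layer: every node sends one message to a random neighbour."""
    for node, state in list(nodes.items()):
        peer = random.choice(neighbours[node])
        nodes[peer] = merge_state(nodes[peer], make_message(state))

# Protocol layer for a simple "maximum value seen so far" aggregate.
make_message = lambda state: state
merge_state = lambda local, incoming: max(local, incoming)

nodes = {"a": 3, "b": 7, "c": 1, "d": 5}
neighbours = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
for _ in range(10):
    gossip_round(nodes, neighbours, make_message, merge_state)
print(nodes)   # states converge towards the maximum (7) as rounds proceed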

Some conclusions
As becomes apparent from the aforementioned techniques, there are several different approaches for achieving information dissemination and context-awareness in nomadic environments. Their applicability to IPAC will be assessed through extensive simulations performed during the first phases of the system design. In general, epidemic algorithms seem capable of delivering the desired functionality, but their fine-tuning and optimization is an issue that deserves further investigation.

44 Similar past and existing EU projects

In this section we review several approaches and projects that are relevant to IPAC. Relevance was decided based on the general architecture of the solutions and their target application domains. For example, we present projects:

for context-aware mobile systems
for pervasive computing systems
for environments with embedded devices (e.g., smart homes)
for vehicle-to-vehicle services
for autonomic and self-configured systems that exploit knowledge-based intelligence for service provisioning and adaptation

One common characteristic and inclusion criterion is the middleware that each project proposes and implements. For each of these solutions, we provide some brief comments that identify their key points of relevance or irrelevance to the IPAC platform.

SENSE – Smart Embedded Network of Sensing Entities [www.sense-ist.org]
The SENSE project is working on the creation of a distributed embedded network consisting of heterogeneous portable devices. The underlying sensor infrastructure of the network will cooperate in order to record a global view of the world and enable changes in the sensor layer (addition/removal of new sensing elements). One major goal of the project is the transformation of low-level knowledge stemming from the sensor entities into semantic information that will adapt network processes and management. However, SENSE does not aim to investigate algorithmic solutions in the area of information dissemination and rumour spreading techniques. Moreover, IPAC will research the creation of a distributed cognitive system that will both update the middleware with information coming from the sensing elements and re-configure the status of the sensors according to high-level information (e.g., application requirements).

EMMA – Embedded Middleware in Mobility Applications [www.emmaproject.eu]
The main focus of the EMMA project (FP6-2005-IST-5-034097) is the deployment of a creation environment for embedded software, as well as a middleware platform, that will facilitate the cooperation of sensing entities in the domain of transport applications. Specifically, the project emphasizes the seamless collaboration of wireless sensing elements in order to achieve the intelligent behaviour of cost-effective services. Nevertheless, EMMA does not intend to develop advanced context-aware services based on efficient information exchange between nearby collaborating nodes. In the IPAC project, an effective context-aware mechanism will be adopted for enabling the collaborative behaviour of mobile nodes.

DySCAS: Dynamically Self-Configuring Automotive Systems [www.dyscas.org]
The main objective of the DySCAS project is the development of methods and tools, as well as architectural guidelines, for self-configurable systems in the context of embedded vehicle electronic systems. The project is driven by the fact that many future applications could include the interaction of mobile devices with the built-in devices of a vehicle, using ad-hoc networking.
Hence, DySCAS considers situations such as the automatic discovery and use of new devices connected to a vehicle, so that they can be seamlessly integrated with the vehicle system. Other situations include supporting automatic software updates for the connected mobile devices when the vehicle enters a WLAN hot spot.

SOCRADES [www.socrades.eu]
SOCRADES (Service-Oriented Cross-layer infRAstructure of Distributed smart Embedded Systems) focuses on the areas of ad-hoc sensor networking, enterprise integration and system management. Its primary objective is to develop a design, execution and management platform for next-generation industrial automation systems, exploiting the Service Oriented Architecture (SOA) paradigm both at the device and at the application level. It exploits novel methodologies, technologies and tools for modelling, designing and implementing networks made up of smart embedded devices. The middleware technologies developed in this project are based on semantic web services to provide open interfaces enabling interoperability at the semantic level.

HYDRA: Middleware for networked devices [www.hydra.eu.com]
Hydra targets networked embedded systems, providing an integrated environment that will facilitate the development and deployment of mobile applications. The first objective of the Hydra project is to develop middleware based on a Service Oriented Architecture, to which the underlying communication layer is transparent. The middleware will include support for distributed as well as centralized architectures, with strict security, trustworthiness and fault tolerance requirements. Furthermore, Hydra involves the development of a Software Development Kit (SDK) in order to accommodate the design and implementation process of model-driven applications. Interoperability between heterogeneous devices and adaptability of applications constitute some of the major challenges of the project.

Safespot [www.safespot-eu.org]
The Safespot project aims at having intelligent vehicles cooperate with intelligent roads to produce a breakthrough for road safety. The goal is to prevent road accidents by developing a Safety Margin Assistant (SMA) that detects potentially dangerous situations in advance and thus extends driver awareness of the surrounding environment, both in space and time. The SMA is an intelligent cooperative system based on V2V and V2I communication. The aspects related to road safety and critical zones that are dealt with in Safespot are: (i) intersection collision avoidance, (ii) violation warning, (iii) turn conflict warning, (iv) curve warning, (v) weather/road surface data, (vi) construction zones, (vii) highway rail intersection. The Safespot project, although not featuring middleware development, contains innovative ideas and covers an application area directly within IPAC interests.

SIRENA [www.sirena-itea.org]
SIRENA (Service Infrastructure for Real-time Embedded Networked Applications) is a service-oriented framework focused on enabling the interoperability and extensibility of heterogeneous networks of embedded devices. The main goal of the project is the development of a service infrastructure that will enable the specification and implementation of distributed applications in real-time embedded computing environments. Furthermore, it aims to seamlessly connect heterogeneous devices with restricted resources from diverse domains. Specifically, SIRENA targets four distinct application areas: industrial automation, automotive electronics, telecommunication systems and home automation.
Moreover, SIRENA is based on the Devices Profile for Web Services (DPWS) [http://schemas.xmlsoap.org/ws/2006/02/devprof/] in order to accomplish secure Web Service description, discovery and eventing on resource-constrained devices.

Amigo [www.hitech-projects.com/euprojects/amigo]
Amigo aims to enable the ambient intelligence paradigm for the networked home environment. The main objectives of the project are the development of an open, standardized middleware and the support of intelligent user services, in order to accommodate home automation processes. Amigo also involves a programming and deployment framework in order to support developers and facilitate the design and implementation of context-aware services. Furthermore, one of the major challenges of Amigo is the attainment of interoperability between distributed, heterogeneous devices and services.

RUNES [www.ist-runes.org]
RUNES (Reconfigurable Ubiquitous Networked Embedded Systems) aims to deliver a standardized architecture that will facilitate the development of distributed embedded systems. The project focuses on adaptive ubiquitous computing in order to support emerging application areas like industrial control and medical monitoring. Its main goals involve the creation of a component-based middleware together with application development tools that will simplify the application creation process. Additionally, RUNES aims to provide scalable and dynamic solutions in the area of adaptive complex networks consisting of heterogeneous portable devices.

Vivian [www-nrc.nokia.com/Vivian]
Vivian aims to deliver a component-based middleware that will support the seamless interaction of application components residing on heterogeneous devices. Vivian proposes a suite of middleware services together with a detailed developer's guide. The main objective of the project involves the development of a mobile middleware platform for portable devices on top of the OS layer. VIVIAN mainly focuses on Symbian OS, taking advantage of a CORBA-based middleware technology.
The interested reader can find the main features of the aforementioned projects in the following matrices and thus make a quick comparison.

Table 8-4 Project Comparison Matrix I
Projects compared: EMMA, DySCAS, SOCRADES, Safespot, HYDRA, IPAC.
Comparison criteria: general-purpose middleware; application domain (Automotive: V2V/V2I, in-vehicle only, Vehicle Early Warning System (V2V, V2I), traffic management; Industrial: automation, control, manufacturing; Crisis Management: humanitarian relief operations; Other: healthcare, advertising, nomadic applications); technological and scientific objectives (autonomic nodes in ad hoc networks; reliable/efficient information dissemination; communication layer; service creation and development environment; location/context awareness; re-configurability; easy application deployment; knowledge engineering techniques; collaborative context awareness).

Table 8-5 Project Comparison Matrix II
Projects compared: SIRENA, SENSE, RUNES, AMIGO, VIVIAN, IPAC.
Comparison criteria: general-purpose middleware; application domain (Automotive: Vehicle Early Warning System (V2V, V2I), traffic management; Industrial: industrial control, automation, manufacturing; Crisis Management: emergency services, humanitarian relief operations; Other: tel/com systems, home automation, civil security, healthcare, distributed automation systems, nomadic applications, advertising); technological and scientific objectives (as in Table 8-4: autonomic nodes in ad hoc networks; reliable/efficient information dissemination; communication layer; service creation and development environment; location/context awareness; re-configurability; easy application deployment; knowledge engineering techniques; collaborative context awareness).

Although the aforementioned projects investigate certain fields of research and technology in the area of embedded applications, IPAC extends their scope in many ways.
The projects EMMA and DySCAS are specifically focused on automotive applications and thus do not provide general-purpose middleware. Safespot also focuses on automotive environments, but it does not involve the development of a middleware platform at all. Moreover, SENSE does not aim to provide middleware-based services either, since it targets just the improvement of the communication and cooperation level of distributed embedded devices. Furthermore, SOCRADES and HYDRA have a more generic scope, as they aim to facilitate embedded application development by providing an application-independent middleware platform. AMIGO and VIVIAN focus on specific application areas (automation systems, in particular) and cannot be further compared with IPAC, as IPAC provides an integrated framework for creating, developing, deploying and maintaining applications in diverse application domains. With regard to the remaining projects (SIRENA and RUNES), IPAC has the advantage of dealing with embedded applications' design and implementation in their entirety, both at the middleware level and at the levels below and above it. Finally, IPAC investigates advanced aspects such as knowledge engineering and collaborative context-awareness.

45 Conclusion

This report presents the existing technologies that could be used in the implementation of the IPAC platform as well as the application creation environment. Whenever possible, an assessment of the relevance of each technology for the project has been provided. However, this is not always possible before a thorough definition of the IPAC platform requirements is available. This document will serve to establish the choice of the most relevant technologies to be used in the development of the IPAC middleware and application creation environment conducted in workpackage 3. The document presents a thorough description of the available technologies; once the requirements definition and system specification are available, the development of the IPAC system can start in workpackage 3 with this extensive survey in hand. This report has also identified existing and past projects from which IPAC could benefit or with which it could seek closer collaboration.

References

Section 2
Hiertz, G.R. et al (2008), “IEEE 802.11s: WLAN mesh standardization and high performance extensions”, IEEE Network, Volume 22, Issue 3, May-June 2008, pp. 12-19
http://grouper.ieee.org/groups/802/11/Reports/tgp_update.htm
http://www.unwired.ee.ucla.edu/dsrc/dsrc_testbed_simple.htm
IEEE Std 1609.1-2006, IEEE Trial-Use Standard for Wireless Access in Vehicular Environments (WAVE) – Resource Manager
IEEE Std 1609.2-2006, IEEE Trial-Use Standard for Wireless Access in Vehicular Environments (WAVE) – Security Services for Applications and Management Messages
IEEE Std 1609.3-2006, IEEE Trial-Use Standard for Wireless Access in Vehicular Environments (WAVE) – Networking Services
IEEE Std 1609.4-2006, IEEE Trial-Use Standard for Wireless Access in Vehicular Environments (WAVE) – Multi-Channel Operation
www.leearmstrong.com/DSRC/DSRCHomeset.htm
www7.informatik.uni-erlangen.de/~dulz/fkom/06/10.pdf
www7.informatik.uni-erlangen.de/~dulz/fkom/06/8.pdf (slides 802.11p)

Section 3
Darold Wobschall, “IEEE 1451 – A universal transducer protocol standard”, Esensors Inc., Amherst NY 14226
IEEE 1451 information – http://ieee1451.nist.org
Presentation at Sensor Expo (Chicago, June 2003), ISO 103 Fifth Wireless Sensing Standardization forum
SensorML core specification, http://www.opengeospatial.org/standards/sensorml

Section 4
Bray, T. et al. (2006), “Extensible Markup Language (XML) 1.0 (Fourth Edition) - Origin and Goals”, World Wide Web Consortium
W3C XML Schema, http://www.w3.org/XML/Schema

Section 5
BEA, “WebLogic WorkShop 8.1”, http://edocs.bea.com/workshop/docs81/index.html
BPEL4WS (2008), “Business Process Execution Language for Web Services”, http://www.ibm.com/developerworks/library/specification/ws-bpel/
BPEL-PM (2008), BPEL Process Manager, http://www.oracle.com/technology/products/ias/bpel/index.html
Braem, M. et al (2006b), "Guiding Service Composition in a Visual Service Creation Environment", 4th European Conference on Web Services (ECOWS '06), pp. 13-22
Braem, M. et al (2006a), “Isolating process-level concerns using Padus”, in Proceedings of the 4th International Conference on Business Process Management, Vienna, Austria, Springer-Verlag
Delap, S. (2006), “Understanding how Eclipse plug-ins work with OSGi”, available at http://www-128.ibm.com/developerworks/opensource/library/os-ecl-osgi/?ca=dgr-lnxw07EclipseEvolution
Eclipse Platform, Technical Overview, 2006, available at http://www.eclipse.org/articles/
Florescu, D., Grunhagen, A. & Kossman, D. (2002), “XL: An XML Programming Language for Web Service Specification and Composition”, in Proc. of the International World Wide Web Conference (WWW ’02), Honolulu, Hawaii, USA, pp. 65-76
GEF (2008), Graphical Editing Framework, http://www.eclipse.org/gef/
Ioannidis, A. et al (2003), "PoLoS: Integrated Platform for Location-Based Services", proceedings of the IST Mobile & Wireless Communications Summit, Portugal
JavaCC (2008), Java Compiler Compiler, https://javacc.dev.java.net/
JetBRAINS, “IntelliJ IDEA, The most Intelligent Java IDE”, http://www.jetbrains.com/idea
JTB (2008), Java Tree Builder, http://compilers.cs.ucla.edu/jtb/
JWDT (2008), Sun Microsystems, ‘Java Wireless Development Toolkit for CLDC’, available at http://java.sun.com/products/sjwtoolkit/
Konnerth, T., Hirsch, B., Albayrak, S. (2006), “JADL: An Agent Description Language for Smart Agents”, in Proc. of Declarative Agent Languages and Technologies, Hakodate, Japan, pp. 141-155
MicroEmulator Project (2008), http://www.microemu.org/
NetBeans IDE (2008), http://www.netbeans.org/index.html
Netbeans Mobility Pack (2008), available at http://www.netbeans.org/kb/articles/mobility.html
OSGi Alliance (2004), “About the OSGi service platform. Technical Whitepaper”, available at http://www.osgi.org
RapidFLEX Service Creation Environment data sheet, available at http://www.pactolus.com/products_RF_SoftArch.htm
Rivieres, J. D. and Wiegand, J. (2004), “Eclipse: A platform for integrating development tools”, IBM Systems Journal, 43(2)
Shapiro, S., Lesperance, Y., Levesque, H. J. (2002), “The Cognitive Agents Specification Language and Verification Environment for Multiagent Systems”, in Proc. of the AAMAS ’02, Bologna, Italy
SML (2008), “Service Modelling Language”, http://www.w3.org/Submission/sml/
Sprint ADP (2008), ‘Sprint’s Wireless Toolkit’, http://developer.sprint.com/site/global/develop/technologies/java_me/sdk_tools/p_sdk_tools.jsp
Tahara, Y., Ohsuga, A., Honiden, S. (2004), “Pigeon: a Specification Language for Mobile Agent Applications”, in Proc. of the AAMAS 2004, NY, USA, pp. 1356-1357
UEI (2008), Unified Emulator Interface Specification, available at https://uei.dev.java.net/
WSDL (2008), “Web Services Description Language”, http://www.w3.org/TR/wsdl
Wydaeghe, B. (2001), “PacoSuite: Component Composition Based on Composition Patterns and Usage Scenarios”, PhD thesis, System & Software Engineering Lab, Vrije Universiteit Brussel, Brussels, Belgium
XABSL (2008), The Extensible Agent Behaviour Specification Language, http://www2.informatik.hu-berlin.de/ki/XABSL/
XML (2008), “Extensible Markup Language”, http://www.w3.org/XML/
Zhu, H. (2003), “A Formal Specification Language for Agent-Oriented Software Engineering”, in Proc. of the 2nd International Conference on Autonomous Agents and Multiagent Systems, Melbourne, Australia, pp. 1174-1175

Section 6
Agrawal, D. (2005). “Autonomic Computing Expressing Language”, Tutorial, IBM Corp.
Ahmed, S., Ahamed, S. I., Sharmin, M., and Haque, M. M. (2007). “Self-healing for Autonomic Pervasive Computing”, 2007 ACM Symposium on Applied Computing, Seoul, Korea.
ANA (2008). Autonomic Networks Architecture Research Project: http://www.ana-project.org
Baader, F., Calvanese, D., McGuiness, D., Nardi, D., & Patel-Schneider, P. (2003). The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge: Cambridge University Press.
Bai, F. and Helmy, A. (2004). “A survey of mobility modeling and analysis in wireless adhoc networks,” in Wireless Ad Hoc and Sensor Networks. Kluwer Academic Publishers.
Bellavista, P., Corradi, A., Montanari, R., and Stefanelli, C. (2006). A Mobile Computing Middleware for Location- and Context-aware Internet Data Services. ACM Transactions on Internet Technology (TOIT) 6, 4 (November), 356-380. ACM Press.
Bhatt, M., et al. (2003). “Impact of mobility on the performance of ad hoc wireless networks,” in IEEE Vehicular Tech. Conf. (VTC-Fall ‘03), vol. 5, Orlando, USA, pp. 3025–3029.
Biegel, G., Cahill, V. (2004). A Framework for Developing Mobile, Context Aware Applications, in 2nd IEEE Conference on Pervasive Computing and Communications, Orlando, FL, March 14-17.
Bonatti, P. A., and Olmedilla, D. (2005). Policy language specification. Project Deliverable D2, Working Group I2, EU NoE REWERSE.
Braubach, L., Pokahr, A., and Lamersdorf, W. (2005). “Jadex: A BDI-Agent System Combining Middleware and Reasoning”. In: R. Umland, M. Klusch, M. Calisti (Eds.): Software Agent-Based Applications, Platforms, and Development Kits. Whitestein Series in Software Agent Technology. Birkhäuser.
Carroll, J. J., and Stickler, P. (2004). Trix: RDF Triples in XML, Hewlett-Packard Technical Report, http://www.hpl.hp.com/techreports/2004/HPL-2004-56.pdf
Cheng, S-W, et al. (2004). “Rainbow: Architecture-based self-adaptation with reusable infrastructure”, IEEE Computer, 46–54.
Corradi, A., Montanari, R. and Toninelli, A. (2007). Adaptive Semantic Middleware for Mobile Environments, Journal of Networks, Academy Publisher, Vol. 2, Issue 1.
Damianou, N., Dulay, N., Lupu, E., and Sloman, M. (2001). The Ponder Policy Specification Language. In Proc. Policy 2001: Workshop on Policies for Distributed Systems and Networks, Bristol, UK, 29-31 Jan. 2001, Springer-Verlag LNCS 1995, pp. 18-39.
Dean, M., Schreiber, G., Bechhofer, S., van Harmelen, F., Hendler, J., Horrocks, I., McGuinness, D. L., Patel-Schneider, P. F., & Stein, L. A. (2004). OWL Web Ontology Language Reference. W3C Recommendation 10 February 2004. Latest version available at http://www.w3.org/TR/owl-ref/
Deransart, P., Ed-Dbali, A., and Cervoni, L. (1996). Prolog: The Standard. Springer.
Dressler, F. (2006a). “Self-organization in autonomous sensor/actuator networks,” 9th IEEE/ACM/GI/ITG International Conference on Architecture of Computing Systems - System Aspects in Organic Computing (ARCS’06).
Dressler, F. (2006b). “Benefits of bio-inspired technologies for networked embedded systems: An overview,” in Dagstuhl Seminar 06031 on Organic Computing - Controlled Emergence, Schloss Dagstuhl, Wadern, Germany.
Eisenhauer, G. and Schwan, K. (1998). “An object-based infrastructure for program monitoring and steering”. In Proceedings of the 2nd SIGMETRICS Symposium on Parallel and Distributed Tools (SPDT98), pp. 10–20.
Foerster, H. V. (1960). “On self-organizing systems and their environments,” in Self-Organizing Systems, M. C. Yovitts and S. Cameron, Eds. Pergamon Press, pp. 31–50.
Gerhenson, C., and Heylighen, F. (2003). “When can we call a system self-organizing?” in 7th European Conference on Advances in Artificial Life (ECAL 2003), Dortmund, Germany, pp. 606–614.
Griffith, R., Valetto, G. and Kaiser, G. (2007). “Effecting Runtime Reconfiguration in Managed Execution Environments”, chapter in “Autonomic Computing: Concepts, Infrastructure and Applications”, by M. Parashar, S. Hariri, CRC Press.
Guo, J. & Xing, G. (2007). Using Mobile Agent-Based Middleware to Support Distributed Coordination for Vehicle Telematics. AINA Workshops (2): 374-379.
Hasiotis, T. et al (2005). "Sensation: A Middleware Integration Platform for Pervasive Applications in Wireless Sensor Networks", European Workshop on Wireless Sensor Networks, Istanbul, Turkey.
Henricksen, K., Livingstone, S., Indulska, J. (2004). “Towards a hybrid approach to context modelling, reasoning and interoperation”, In Proc. of the UbiComp 1st International Workshop on Advanced Context Modelling, Reasoning and Management, pp. 54-61, Nottingham.
Hofer, T., Schwinger, W., Pichler, M., Leonhartsberger, G. and Altmann, J. (2002). “Context-awareness on mobile devices – the hydrogen approach”, In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, pp. 292–302.
IBM (2003). An Architectural Blueprint for Autonomic Computing.
Kagal, L. (2002). Rei: A Policy Language for the Me-Centric Project, HP Labs Technical Report, HPL-2002-270.
Kaiser, G., et al. (2003). “Kinesthetics eXtreme: An external infrastructure for monitoring distributed legacy systems”, In Proceedings of the Autonomic Computing Workshop, 5th Workshop on Active Middleware Services (AMS), 22–30.
Lacour, S., Perez, C., Priol, T. (2005). "Generic application description model: toward automatic deployment of applications on computational grids", The 6th IEEE/ACM International Workshop on Grid Computing.
Levis, P. and Culler, D. E. (2002). “Maté: a Tiny Virtual Machine For Sensor Networks,” Architectural Support for Programming Languages and Operating Systems, URL: http://www.cs.berkeley.edu/~pal/pubs/mate.pdf
Liu, H. and Parashar, M. (2006). “Accord: A Programming Framework for Autonomic Applications”, IEEE Transactions on Systems, Man and Cybernetics, Special Issue on Engineering Autonomic Systems, 36(3).
Low, K. H., Leow, W. K., and Ang, M. H. (2005). “Autonomic mobile sensor network with self-coordinated task allocation and execution,” IEEE Transactions on Systems, Man, and Cybernetics – Part C: Applications and Reviews.
McBride, B. (2003). RDF Issue Tracking, http://www.w3.org/2000/03/rdf-tracking/
Müller, F., Hanselmann, M., Liebig, T., Noppens, O. (2006). “A Tableaux-based Mobile DL Reasoner - An Experience Report”, 2006 International Workshop on Description Logics - DL '06, Lake District, United Kingdom.
Murphy, A. L., Picco, G. P. and Roman, G. C. (2001). “Lime: A Middleware for Physical and Logical Mobility,” 21st IEEE International Conference on Distributed Computing Systems (ICDCS ’01), pp. 524-533.
Murthy, C. S. R. and Manoj, B. S. (2004). Ad Hoc Wireless Networks. Upper Saddle River, NJ: Prentice Hall PTR.
OMA (2001). Open Mobile Alliance WAG UAProf, http://www.openmobilealliance.org/tech/affiliates/wap/wap-248-uaprof-20011020-a.pdf
Omar, W. M., Taleb-Bendiab, A. and Karam, Y. (2006). “Autonomic Middleware Services for Just-In-Time Grid Services Provisioning”, Journal of Computer Science 2 (6), pp. 521-527.
Preuveneers, D., Van Den Bergh, J., Wangelaar, D., Georges, A., Rigole, P., Clerckx, T., Berbers, Y., Coninx, K., Jonckers, V., De Bosschere, K. (2004). “Towards an extensible context ontology for ambient intelligence”, In Proc. of the 2nd European Symposium, EUSAI 2004, Eindhoven, The Netherlands.
Ranganathan, A. and Campbell, R. (2003). “A Middleware for Context-Aware Agents in Ubiquitous Computing Environments”. In: ACM/IFIP/USENIX International Middleware Conference, Brazil.
Sadjadi, S. M. and McKinley, P. K. (2004). “Transparent self-optimization in existing CORBA applications”, In Proceedings of the 1st IEEE International Conference on Autonomic Computing, 88–95.
Sinner, A., & Kleemann, T. (2005). “KRHyper – In Your Pocket”. In Proc. of the International Conference on Automated Deduction (CADE-20), volume 3632 of LNCS, pages 452–458.
Strang, T., Linnhoff-Popien, C. (2004). “A Context Modeling Survey”, In Proc. of the Workshop on Advanced Context Modelling, Reasoning and Management as part of UbiComp 2004 - The Sixth International Conference on Ubiquitous Computing, Nottingham, England.
Tsetsos, V., Anagnostopoulos, C., Kikiras, P., Hadjiefthymiades, S. (2006). "Semantically enriched navigation for indoor environments", International Journal of Web and Grid Services, Vol. 2, No. 4, Inderscience Publishers.
Uszok, A. et al (2003). “KAoS policy and domain services: Toward a description-logic approach to policy representation, deconfliction, and enforcement”. In Proc. of IEEE Fourth International Workshop on Policy (Policy 2003), Lake Como, Italy, pp. 93–98.
Villalonga, C., Strohbach, M., Snoeck, N., Sutterer, M., Belaunde, M., Kovacs, E., Zhdanova, A. V., Walter Coix, L., Droegehorn, O. (2007). “Mobile Ontology: Towards a Standardized Semantic Model for the Mobile Domain”, In Proc. of the 1st International Workshop on Telecom Service Oriented Architectures (TSOA-07), Vienna, Austria.
Wang, X. H., Zhang, D. Q., Gu, T., Pung, H. K. (2004). “Ontology Based Context Modeling and Reasoning using OWL”, In Proc. of the 2nd IEEE Annual Conference on Pervasive Computing and Communications, pp. 18-22.
Wyckoff, P., McLaughry, S. W., Lehman, T. J. and Ford, D. A. (1998). “T Spaces,” IBM Systems Journal, pp. 454–474. URL: http://www.research.ibm.com/journal/sj/373/wyckoff.html

Section 41
Anagnostopoulos, C., Hadjiefthymiades, S. (2008). “On the Application of the Epidemical Spreading in Collaborative Context-Aware Computing”, ACM SIGMOBILE Mobile Computing and Communications Review (MC2R).
Boguna, M., Pastor-Satorras, R., Vespignani, A. (2003). “Epidemic Spreading in Complex Networks with Degree Correlations”, Sitges Conf. on Statistical Mechanics of Complex Networks, pp. 127-147.
Datta, A., Quarteroni, S., and Aberer, K. (2004). “Autonomous Gossiping: A Self-Organizing Epidemic Algorithm for Selective Information Dissemination in Wireless Mobile Ad-Hoc Networks”. In Proceedings of ICSNW, pp. 126-143.
Fracchia, R., & Meo, M. (2008). “Analysis and Design of Warning Delivery Service in Intervehicular Networks”, IEEE Transactions on Mobile Computing, Vol. 7.
Kempe, D., Kleinberg, J., Demers, A. (2004). “Spatial Gossip and Resource Location Protocols”, Journal of the ACM, 51(6), pp. 943-967.
Kephart, J., White, S. (1991). “Directed-Graph Epidemiological Models of Computer Viruses”, IEEE Int. Symp. on Research in Security and Privacy, pp. 343-359.
Khelil, A., Becker, C., Tian, J., Rothermel, K. (2002). “An epidemic model for information diffusion in MANETs”, ACM Modeling, Analysis and Simulation of Wireless and Mobile Systems.
Mantyjarvi, J., Huuskonen, P., Himberg, J. (2002). “Collaborative Context Determination to Support Mobile Terminal Applications”, IEEE Wireless Communications Magazine, 9(5), pp. 39-45.
Salkham, A., Cunningham, R., Cahill, A. (2006). “A Taxonomy of Collaborative Context-Aware Systems”, Ubiquitous Mobile Information and Collaboration Systems Workshop, pp. 899-911.
Winoto, W. A., Schwartz, E., Balakrishnan, H., and Lilley, J. (1999). The design and implementation of an intentional naming system. In SOSP.
Wischhof, L., Ebner, A., Rohling, H. (2005). "Information dissemination in self-organizing intervehicle networks", IEEE Transactions on Intelligent Transportation Systems, vol. 6, no. 1, pp. 90-101.
Zesheng, C., Chuanyi, J. (2005). “Spatial-temporal Modeling of Malware Propagation in Networks”, IEEE Transactions on Neural Networks, 16(5), pp. 1291-1303.
