


Topics covered by this special issue on Pervasive Computing Systems and Technologies:

• Semantics of pervasiveness; Pervasive knowledge; Pervasive technologies for education and learning; Pervasive computing; Wearable computing; Mobile computing; Nomadic computing; Mobile commerce; Mobile learning
• Information retrieval and filtering; Context awareness; Control of pervasive data; Data management and processing; Data replication, migration and dissemination
• Multimedia content recognition, indexing and search; Mobile graphics, games and entertainment; Pervasive multimedia applications and systems; Virtual reality in pervasive systems
• Bluetooth, 802.11.x, 802.15.x, ZigBee, WiMax
• Pervasive networks; Network management; Network performance evaluation; Networks and technology convergence; Internet access in pervasive systems; Pervasive mesh, ad hoc and sensor networks
• Web 2.0; Semantic web; Web services; Ontology; Web services applications
• Design of devices for pervasive systems; Mobile devices; Wearable devices; Embedded systems
• Frameworks, architectures, and languages for pervasive services; Algorithms for pervasive systems; SLA/QoS in pervasive services; Ontology-based services; Location-based services; Protocols and interaction mechanisms for pervasive services; Mobile services and service convergence; Service discovery mechanisms; Tracking in pervasive environments; Measurement, control, and management of pervasive services; Design and development of pervasive services
• Ambient components; Agent technologies; Software for spontaneous interoperation; Dependability guarantees; Security; Key management and authentication; Trust; Privacy; Fault-tolerance; Multimedia information security
• Cooperative networks for pervasive systems; Cooperative applications for pervasive networks; Handheld and wearable systems for interaction in collaborative groups and communities; Ad hoc collaboration in pervasive computing environments; Awareness of collaboration and of work environment; Inherently mobile collaborative work
• Mobile user interfaces; Pervasive user-generated content (weblogs, wikis, etc.); Mobile and pervasive computing support for collaborative learning; User modeling and personalization; Context- and location-aware applications; Toolkits, testbeds, development environments; Tools and techniques for designing, implementing, and evaluating pervasive systems; Constructing, deploying and prototyping of pervasive applications; Evaluation of user models for pervasive environments; On-line analytical techniques; Human-computer interaction in pervasive computing environments; Pervasive e-Development (business, science, health, etc.); Case studies; Emerging industrial/business/scientific pervasive scenarios; Ambient intelligence; Social issues and implications of pervasive systems



International Journal of Computer Science Issues

Volume 1, August 2009
ISSN (Online): 1694-0784
ISSN (Printed): 1694-0814

© IJCSI PUBLICATION
www.IJCSI.org

Pervasive Computing Systems and Technologies


© IJCSI PUBLICATION 2009

www.IJCSI.org


EDITORIAL

There are several journals available in the areas of Computer Science, each with different policies. IJCSI is among the few that believe giving free access to scientific results helps advance computer science research and helps fellow scientists.

IJCSI takes particular care in ensuring wide dissemination of its authors' works. Apart from being indexed in other databases (Google Scholar, DOAJ, CiteSeerX, etc.), IJCSI makes articles available for free download to increase their chances of being cited. Furthermore, unlike most journals, IJCSI sends a printed copy of its issue to the concerned authors free of charge, irrespective of geographic location.

The IJCSI Editorial Board is pleased to present IJCSI Volume One (IJCSI Vol. 1, 2009). This edition is the result of a special call for papers on Pervasive Computing Systems and Technologies. The paper acceptance rate for this issue is 31.6%, set after all submitted papers were reviewed and returned with important comments and recommendations from our reviewers.

We sincerely hope you will find important ideas, concepts, techniques, or results in this special issue.

As final words: PUBLISH, GET CITED and MAKE AN IMPACT.

IJCSI Editorial Board
August 2009
www.ijcsi.org


IJCSI EDITORIAL BOARD

Dr Tristan Vanrullen
Chief Editor
LPL, Laboratoire Parole et Langage - CNRS - Aix-en-Provence, France
LABRI, Laboratoire Bordelais de Recherche en Informatique - INRIA - Bordeaux, France
LEEE, Laboratoire d'Esthétique et Expérimentations de l'Espace - Université d'Auvergne, France

Dr Mokhtar Beldjehem
Professor
Sainte-Anne University, Halifax, NS, Canada

Dr Pascal Chatonnay
Assistant Professor (Maître de Conférences)
Université de Franche-Comté (University of Franche-Comté)
Laboratoire d'Informatique de l'Université de Franche-Comté (Computer Science Laboratory of the University of Franche-Comté)

Prof N. Jaisankar
School of Computing Sciences, VIT University, Vellore, Tamilnadu, India


IJCSI REVIEWERS COMMITTEE

• Mr. Markus Schatten, University of Zagreb, Faculty of Organization and Informatics, Croatia
• Mr. Forrest Sheng Bao, Texas Tech University, USA
• Mr. Vassilis Papataxiarhis, Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Panepistimiopolis, Ilissia, GR-15784, Athens, Greece
• Dr Modestos Stavrakis, University of the Aegean, Greece
• Prof Dr. Mohamed Abdelall Ibrahim, Faculty of Engineering - Alexandria University, Egypt
• Dr Fadi KHALIL, LAAS-CNRS Laboratory, France
• Dr Dimitar Trajanov, Faculty of Electrical Engineering and Information Technologies, Ss. Cyril and Methodius University - Skopje, Macedonia
• Dr Jinping Yuan, College of Information System and Management, National Univ. of Defense Tech., China
• Dr Alexios Lazanas, Ministry of Education, Greece
• Dr Stavroula Mougiakakou, University of Bern, ARTORG Center for Biomedical Engineering Research, Switzerland
• Dr DE RUNZ, CReSTIC-SIC, IUT de Reims, University of Reims, France
• Mr. Pramod Kumar P. Gupta, Dept of Bioinformatics, Dr D Y Patil University, India
• Dr Alireza Fereidunian, School of ECE, University of Tehran, Iran
• Mr. Fred Viezens, Otto-von-Guericke-University Magdeburg, Germany
• Mr. J. Caleb Goodwin, University of Texas at Houston: Health Science Center, USA
• Dr. Richard G. Bush, Lawrence Technological University, United States
• Dr. Ola Osunkoya, Information Security Architect, USA
• Mr. Kotsokostas N. Antonios, TEI Piraeus, Hellas
• Prof Steven Totosy de Zepetnek, U of Halle-Wittenberg & Purdue U & National Sun Yat-sen U, Germany, USA, Taiwan
• Mr. M Arif Siddiqui, Najran University, Saudi Arabia
• Ms. Ilknur Icke, The Graduate Center, City University of New York, USA
• Prof Miroslav Baca, Associate Professor, Faculty of Organization and Informatics, University of Zagreb, Croatia
• Dr. Elvia Ruiz Beltrán, Instituto Tecnológico de Aguascalientes, Mexico
• Mr. Moustafa Banbouk, Engineer du Telecom, UAE
• Mr. Kevin P. Monaghan, Wayne State University, Detroit, Michigan, USA
• Ms. Moira Stephens, University of Sydney, Australia


• Ms. Maryam Feily, National Advanced IPv6 Centre of Excellence (NAV6), Universiti Sains Malaysia (USM), Malaysia
• Dr. Constantine YIALOURIS, Informatics Laboratory, Agricultural University of Athens, Greece
• Dr. Sherif Edris Ahmed, Ain Shams University, Fac. of Agriculture, Dept. of Genetics, Egypt
• Mr. Barrington Stewart, Center for Regional & Tourism Research, Denmark
• Mrs. Angeles Abella, U. de Montreal, Canada
• Dr. Patrizio Arrigo, CNR ISMAC, Italy
• Mr. Anirban Mukhopadhyay, B.P. Poddar Institute of Management & Technology, India
• Mr. Dinesh Kumar, DAV Institute of Engineering & Technology, India
• Mr. Jorge L. Hernandez-Ardieta, INDRA SISTEMAS / University Carlos III of Madrid, Spain
• Mr. Ali Reza Shahrestani, University of Malaya (UM), National Advanced IPv6 Centre of Excellence (NAv6), Malaysia
• Mr. Blagoj Ristevski, Faculty of Administration and Information Systems Management - Bitola, Republic of Macedonia
• Mr. Mauricio Egidio Cantão, Department of Computer Science, University of São Paulo, Brazil
• Mr. Thaddeus M. Carvajal, Trinity University of Asia - St Luke's College of Nursing, Philippines
• Mr. Jules Ruis, Fractal Consultancy, The Netherlands
• Mr. Mohammad Iftekhar Husain, University at Buffalo, USA
• Dr. Deepak Laxmi Narasimha, VIT University, India
• Dr. Paola Di Maio, DMEM University of Strathclyde, UK
• Dr. Bhanu Pratap Singh, Institute of Instrumentation Engineering, Kurukshetra University, Kurukshetra, India
• Mr. Sana Ullah, Inha University, South Korea
• Mr. Cornelis Pieter Pieters, Condast, The Netherlands
• Dr. Amogh Kavimandan, The MathWorks Inc., USA
• Dr. Zhinan Zhou, Samsung Telecommunications America, USA
• Mr. Alberto de Santos Sierra, Universidad Politécnica de Madrid, Spain
• Dr. Md. Atiqur Rahman Ahad, Department of Applied Physics, Electronics & Communication Engineering (APECE), University of Dhaka, Bangladesh
• Dr. Charalampos Bratsas, Lab of Medical Informatics, Medical Faculty, Aristotle University, Thessaloniki, Greece
• Ms. Alexia Dini Kounoudes, Cyprus University of Technology, Cyprus
• Mr. Anthony Gesase, University of Dar es Salaam Computing Centre, Tanzania
• Dr. Jorge A. Ruiz-Vanoye, Universidad Juárez Autónoma de Tabasco, Mexico


• Dr. Alejandro Fuentes Penna, Universidad Popular Autónoma del Estado de Puebla, México
• Dr. Ocotlán Díaz-Parra, Universidad Juárez Autónoma de Tabasco, México
• Mrs. Nantia Iakovidou, Aristotle University of Thessaloniki, Greece
• Mr. Vinay Chopra, DAV Institute of Engineering & Technology, Jalandhar
• Ms. Carmen Lastres, Universidad Politécnica de Madrid - Centre for Smart Environments, Spain
• Dr. Sanja Lazarova-Molnar, United Arab Emirates University, UAE
• Mr. Srikrishna Nudurumati, Imaging & Printing Group R&D Hub, Hewlett-Packard, India
• Dr. Olivier Nocent, CReSTIC/SIC, University of Reims, France
• Mr. Burak Cizmeci, Isik University, Turkey
• Dr. Carlos Jaime Barrios Hernandez, LIG (Laboratory of Informatics of Grenoble), France
• Mr. Md. Rabiul Islam, Rajshahi University of Engineering & Technology (RUET), Bangladesh
• Dr. LAKHOUA Mohamed Najeh, ISSAT - Laboratory of Analysis and Control of Systems, Tunisia
• Dr. Alessandro Lavacchi, Department of Chemistry - University of Firenze, Italy
• Mr. Mungwe, University of Oldenburg, Germany
• Mr. Somnath Tagore, Dr D Y Patil University, India
• Mr. Nehinbe Joshua, University of Essex, Colchester, Essex, UK
• Ms. Xueqin Wang, ATCS, USA
• Dr. Borislav D Dimitrov, Department of General Practice, Royal College of Surgeons in Ireland, Dublin, Ireland
• Dr. Fondjo Fotou Franklin, Langston University, USA
• Mr. Haytham Mohtasseb, Department of Computing - University of Lincoln, United Kingdom
• Dr. Vishal Goyal, Department of Computer Science, Punjabi University, Patiala, India
• Mr. Thomas J. Clancy, ACM, United States
• Dr. Ahmed Nabih Zaki Rashed, Electronics and Electrical Communication Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menouf 32951, Egypt
• Dr. Rushed Kanawati, LIPN, France
• Mr. Koteshwar Rao, K G Reddy College of Engg. & Tech., Chilkur, RR Dist., AP, India
• Mr. M. Nagesh Kumar, Department of Electronics and Communication, J.S.S. Research Foundation, Mysore University, Mysore-6, India
• Dr. Babu A Manjasetty, Research & Industry Incubation Center, Dayananda Sagar Institutions, India


• Mr. Saqib Saeed, University of Siegen, Germany
• Dr. Ibrahim Noha, Grenoble Informatics Laboratory, France
• Mr. Muhammad Yasir Qadri, University of Essex, UK


TABLE OF CONTENTS

1. A Survey on Service Composition Middleware in Pervasive Environments
Noha Ibrahim, Grenoble Informatics Laboratory, Grenoble, France
Frédéric Le Mouël, Université de Lyon, INRIA, INSA-Lyon, CITI, Lyon, France

2. Context Aware Adaptable Applications - A global approach
Marc Dalmau, Philippe Roose and Sophie Laplace, LIUPPA, IUT de Bayonne, 2 Allée du Parc Montaury, 64600 Anglet, France

3. Embedded Sensor System for Early Pathology Detection in Building Construction
Santiago J. Barro Torres and Carlos J. Escudero Cascón, Department of Electronics and Systems, University of A Coruña, Campus Elviña, 15071 A Coruña, Spain

4. SeeReader: An (Almost) Eyes-Free Mobile Rich Document Viewer
Scott Carter and Laurent Denoue, FX Palo Alto Laboratory, Inc., 3400 Hillview Ave., Bldg. 4, Palo Alto, CA 94304

5. Improvement of Text Dependent Speaker Identification System Using Neuro-Genetic Hybrid Algorithm in Office Environmental Conditions
Md. Rabiul Islam, Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology (RUET), Rajshahi-6204, Bangladesh
Md. Fayzur Rahman, Department of Electrical & Electronic Engineering, Rajshahi University of Engineering & Technology (RUET), Rajshahi-6204, Bangladesh

6. MESURE Tool to benchmark Java Card platforms
Samia Bouzefrane and Julien Cordry, CEDRIC Laboratory, Conservatoire National des Arts et Métiers, 292 rue Saint Martin, 75141 Paris Cédex 03, France
Pierre Paradinas, INRIA, Domaine de Voluceau - Rocquencourt - B.P. 105, 78153 Le Chesnay Cedex, France


A Survey on Service Composition Middleware in Pervasive Environments

Noha Ibrahim1, Frédéric Le Mouël2

1 Grenoble Informatics Laboratory, Grenoble, France
[email protected]

2 Université de Lyon, INRIA, INSA-Lyon, CITI, Lyon, France
[email protected]

Abstract
The development of pervasive computing has shed light on a challenging problem: how to dynamically compose services in heterogeneous and highly changing environments? We propose a survey that defines service composition as a sequence of four steps: translation, generation, evaluation, and finally execution. With this powerful and simple model we describe the major service composition middleware. Then, a classification of these service composition middleware according to pervasive requirements - interoperability, discoverability, adaptability, context awareness, QoS management, security, spontaneous management, and autonomous management - is given. The classification highlights what has been done and what remains to be done to develop service composition in pervasive environments.
Key words: middleware, service-oriented architecture, service composition, pervasive environment, classification

1. Introduction

Middleware are enabling technologies for the development, execution and interaction of applications. These software layers stand between the operating systems and the applications. They have evolved from simple beginnings - hiding network details from applications - into sophisticated systems that handle many important functionalities for distributed applications, providing support for distribution, heterogeneity and mobility. SOA middleware[2] follows a programming paradigm that uses "services" as the unit of computing work. Service-oriented computing enables the development of loosely coupled systems that are able to communicate, compose and evolve in an open, dynamic and heterogeneous environment. A service-oriented system comprises software systems that interact with each other through well-defined interfaces. If middleware were designed to help manage the complexity and heterogeneity inherent in distributed systems, one can imagine the new role middleware has to play to accompany the evolution from distributed and mobile computing to pervasive computing.

Hardly a day passes without some new evidence of the proliferation of portable computers in the marketplace, or of the growing demand for wireless communication. Support for mobility has been the focus of a number of experimental systems, research efforts and commercial products for several decades. The mission of mobile computing is to allow users to access any information, using any device, over any network, at any time. When this access extends to every piece of information, using every device, over every network, at every time, one can say that mobile computing has evolved into what we now call pervasive computing[13]. In pervasive environments where SOA has been adopted, functionalities are increasingly modeled as services and published as interfaces. The proliferation of new services encourages applications to use them in combination. In this case, we speak of a composite service. The process of developing a composite service is called service composition[7]. Composing services together is the new challenge awaiting SOA middleware[2] in pervasive environments[13]. Indeed, the variety of service providers in a pervasive environment, and the heterogeneity of the services they provide, require applications and users of this kind of environment to develop models, techniques and algorithms in order to compose services and execute them.


The service composition needs to follow a number of requirements[19][33][34] in order to resolve the challenges brought by pervasiveness. Several surveys[5][7][22][31][33] have dealt with service composition. Many of them[7][31] classified the middleware under exclusive criteria such as manual versus automated, static versus dynamic, and so on. Others[5][22][33] classified the service composition middleware under different domains such as artificial intelligence, formal methods, and so on. But none of these surveys proposed a generic reference model to describe the service composition middleware in pervasive environments. In this article, we propose:

• a generic service composition middleware model, the SCM model, a novel way to describe the service composition problem in pervasive environments,

• a description of six middleware architectures using the SCM model as a backbone and highlighting the strengths and weaknesses of each middleware,

• and finally, a classification of these middleware under pervasive requirements identified in the literature as essential for service composition in pervasive environments.

The outline is as follows. In section 2, we define the service composition middleware (SCM) model and explain its modules. In section 3, we describe six service composition middleware by mapping their architectures to the SCM model. In section 4, we classify these middleware according to the pervasive requirements we identify. Finally, section 5 concludes our work and gives research directions for the service composition problem.

2. SCM: Service Composition Middleware Model

Based on several studies[22][24] that decompose the service composition process into several fundamental problems, we define a service composition middleware as a framework providing tools and techniques for composing services. We define a service composition middleware model, the SCM model, as an abstract layer general enough to describe all existing service composition middleware. The SCM model is at a high level of abstraction, without considering a particular service technology, language, platform or algorithm used in the composition process. The aim of this definition is to give a basis for discussing the similarities and differences, advantages and disadvantages of all available service composition middleware, and to highlight the existing gaps concerning the service composition problem in pervasive environments. As depicted in Figure 1, the SCM interacts with the application layer by receiving functionality requests from users or applications[5][7]. The SCM needs to respond to the functionality requests by providing services that fulfill the demand. These services can be atomic or composite. The Service Repository represents all the distributed service repositories where services are registered. The SCM interacts with the Service Repository to choose services to compose.

The SCM is split into four components: the Translator, the Generator, the Evaluator, and the Builder. The process of service composition includes the following phases:

1. Applications specify their needed functionalities by sending requests to the middleware. These requests can be described with diverse languages or techniques. The request descriptions are translated into a language the system understands in order to be used by the middleware. Most systems distinguish between external specification languages and internal ones. The external ones are used to ease interaction with the outside world, commonly the users. Users can hence express what they need or want in a relatively easy way, usually using semantics and ontologies.

Figure 1 SCM model


Internal specifications correspond more to a formal way of expressing things and use specific languages, models, and logics - usually, for SOA, a generic service model. Some research efforts[30] provide a mechanism to translate the available service technologies and service descriptions into one model. Others, such as SELF-SERV[25], propose a wrapper to provide a uniform access interface to services[8]. These middleware usually perform transformations from one model to another or from one technology to another. The technologies are predefined in advance and usually consist of legacy ones. If new technology models appear in the environment, the Translator needs to be expanded to take them into consideration. Another family of research[6][26] does not provide the Translator module, as it uses a common model to describe all the services of the environment. It relies on common description languages such as DAML-S - recently renamed OWL-S[36] - for describing atomic services, composite services and user queries.

2. Once translated, the request specification is sent to the Generator. The Generator tries to provide the needed functionalities by composing the available service technologies, and hence composing their functionalities. It tries to generate one or several composition plans with the same or different technology services available in the environment. It is quite common to have several ways to fulfill the same requirement, as the number of available functionalities in pervasive environments keeps expanding. Composing services is technically performed by chaining interfaces using syntactic or semantic method matching. The interface chaining is usually represented as a graph or described with a specific language. Graph-based approaches[8][10] represent the semantic matching between the inputs and outputs of service operations. This is a powerful technique, as many algorithms can be applied to graphs and hence optimize the service composition. A number of languages have been proposed in the literature to describe data structures in general and functionalities offered by devices in particular. While some languages, such as XML, are widely used and generic, others are more specific to tasks such as service composition, orchestration or choreography, for example the Business Process Execution Language (BPEL4WS or BPEL[4]) and OWL-S[36].

3. The Evaluator chooses the most suitable

composition plan for a given context. This selection is made from all the plans provided by the Generator. In pervasive environments, this evaluation depends strongly on many criteria, such as the application context, the service technology model, the quality of the network, the non-functional service QoS properties, and so on. The evaluation needs to be dynamic and adaptable, as changes may occur unpredictably and at any time. Two main approaches are commonly used: rule-based planning[27][28][29] and formal methods[6][10][12][30]. Rules evaluate whether a given composition plan is appropriate or not in the current context. Although rules are commonly used as an evaluation approach, they lack the dynamism required by pervasive environments. A major problem of this evaluation approach is the lack of dynamic tools to verify the correctness - functional and non-functional aspects - of the service composition plan. This is precisely the main advantage of most formal methods. The most popular and advanced technique nowadays for evaluating a given composition plan is evaluation by formal methods (such as Petri nets and process algebras like the Pi-calculus). Petri nets are a framework to model concurrent systems. Their main attraction is the natural way of identifying basic aspects of concurrent systems, both mathematically and conceptually. Petri nets are very commonly combined with composition languages such as BPEL[4] and OWL-S[36]. On the other hand, automata, or labeled transition systems, are a well-known model underlying formal system specifications and are increasingly used in the service composition process[30].

4. The Builder executes the selected composition plan and produces an implementation corresponding to the required composite service. It can apply a range of techniques to realize the effective service composition. These techniques depend strongly on the service technology model being composed and on the context the system is evolving in. Once the composite service is available, it can be executed by the application that required its functionality. In the literature, we distinguish different kinds of builders provided by service composition middleware. Some builders are very basic and simply invoke a list of services in sequence[17]. These services need to be


available, otherwise the result of the composition is uncertain. Others[35] provide complex discovery protocols adapted to the heterogeneous nature of pervasive environments. The discovery is in charge of finding the services taking part in the composition process and of contextually choosing the most suitable ones if several are available. Finally, some systems propose solutions located not only in the middleware layer but also in the networking layer.

We argue that the SCM model is generic enough to describe the service composition process in pervasive environments. In the next section, we use the SCM model as a backbone for describing various middleware that perform service composition.
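To make the four modules and their chaining concrete, the following minimal Java sketch expresses the SCM model as a pipeline of interfaces. All type and method names (Request, CompositionPlan, Translator, Generator, Evaluator, Builder) are illustrative assumptions that mirror the description above; no surveyed middleware exposes exactly this API.

import java.util.List;

// Minimal sketch of the SCM model as a processing chain.
// All types and method names are illustrative, not taken from any surveyed middleware.
interface Request { String description(); }           // raw user/application request
interface InternalRequest { }                          // request in the middleware's own model
interface CompositionPlan { }                          // one candidate chaining of services
interface CompositeService { void execute(); }         // runnable result of the composition

interface Translator {
    // Phase 1: map an external request description to the internal model.
    InternalRequest translate(Request request);
}

interface Generator {
    // Phase 2: produce zero or more candidate composition plans.
    List<CompositionPlan> generate(InternalRequest request);
}

interface Evaluator {
    // Phase 3: pick the plan best suited to the current context (QoS, ...).
    CompositionPlan select(List<CompositionPlan> candidates);
}

interface Builder {
    // Phase 4: realize the selected plan as an executable composite service.
    CompositeService build(CompositionPlan plan);
}

// A middleware instance is then simply the composition of the four modules.
final class ServiceCompositionMiddleware {
    private final Translator translator;
    private final Generator generator;
    private final Evaluator evaluator;
    private final Builder builder;

    ServiceCompositionMiddleware(Translator t, Generator g, Evaluator e, Builder b) {
        this.translator = t; this.generator = g; this.evaluator = e; this.builder = b;
    }

    CompositeService compose(Request request) {
        InternalRequest internal = translator.translate(request);
        List<CompositionPlan> plans = generator.generate(internal);
        CompositionPlan best = evaluator.select(plans);
        return builder.build(best);
    }
}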

3. Service Composition Middleware in Pervasive Environments

In this section, we describe six middleware for service composition adapted to pervasive environments by mapping them to our SCM model. The chosen middleware are architectures, platforms or algorithms that propose solutions to the service composition problem: MySIM[17], PERSE[30], SeSCo[10], Broker[6], SeGSeC[8] and WebDG[12]. For each middleware, we describe the service composition runtime process and the prototypes developed, and we identify the four modules of our SCM model in their architectures.

3.1 MySIM: Spontaneous Service Integration for Pervasive Environment

MySIM[17] is a spontaneous middleware that integrates services in a transparent way without disturbing the users and applications of the environment. Service integration is defined as service transformation from one service technology to another (Translator), service composition and service adaptation. MySIM selects services that are composable, generates composition plans (Generator), evaluates their QoS degrees (Evaluator) and implements new composite services in the environment (Builder). These new services publish well-known interfaces but with new implementations and better QoS. MySIM also proposes to adapt application execution to the available services by redirecting application calls to services with better QoS.

The MySIM architecture is depicted under the SCM model in Figure 2. The Translator service transforms services into a generic service model. The Generator service is responsible for the syntactic and semantic matching of the service operations for composition and adaptation purposes. The QoS service evaluates the composition or substitution matching via non-functional properties, and the Decision service decides which services to compose or to substitute. Finally, the Builder service implements the composite service, and the Registry service publishes its interfaces. MySIM is implemented on the OSGi/Felix platform. It uses reflective techniques for the syntactic interface matching and an online ontology reasoner for the semantic matching. The service composition is technically done by generating new bundles (units of deployment) that compose the services together. The results show the heavy cost of semantic matching. The solution is interesting, but further work is needed to make spontaneous service integration scale to large environments.
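To illustrate the flavor of MySIM's reflection-based syntactic matching, the sketch below checks whether the output of one Java service interface can directly feed an operation of another. This is a simplified assumption for illustration only - the toy TemperatureSensor and Alarm interfaces are invented here - and it ignores MySIM's semantic matching and OSGi bundle generation.

import java.lang.reflect.Method;

// Illustrative sketch: syntactic composability test between two service
// interfaces via reflection, in the spirit of MySIM's interface matching.
public class SyntacticMatcher {

    // Toy service interfaces standing in for registered services.
    interface TemperatureSensor { double readCelsius(); }
    interface Alarm { void raiseIfTooHot(double celsius); }

    // True if some operation of 'provider' returns a value that can be passed
    // directly to some single-argument operation of 'consumer'.
    static boolean composable(Class<?> provider, Class<?> consumer) {
        for (Method out : provider.getMethods()) {
            if (out.getReturnType() == void.class) continue;
            for (Method in : consumer.getMethods()) {
                if (in.getParameterCount() == 1
                        && in.getParameterTypes()[0].isAssignableFrom(out.getReturnType())) {
                    return true;  // wrapper/primitive subtleties ignored for brevity
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(composable(TemperatureSensor.class, Alarm.class)); // true
        System.out.println(composable(Alarm.class, TemperatureSensor.class)); // false
    }
}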

3.2 PERSE: Pervasive Semantic-aware Middleware

PERSE[30] proposes a semantic middleware that deals with well-known functionalities such as service discovery, registration and composition. This middleware provides a service model to support interoperability between heterogeneous semantic and syntactic service description languages (Translator). The model further supports the formal specification of service conversations as finite state automata, which enables automated reasoning about service behavior independently of the underlying conversation specification language. Hence, pervasive service conversations described with different service conversation languages (Generator) can be integrated (Builder) toward the realization of a user task. The model also supports the specification of service non-functional properties based on existing QoS models to meet the specific requirements of each pervasive application through the QoS-aware Composition service (Evaluator).

Figure 2 MySIM mapped to SCM


The PERSE architecture is depicted under the SCM model in Figure 3. The Evaluator module is the most developed, as it verifies the correctness of the composition plan and analyzes the service QoS before composing services. A Translator is also available to translate the legacy services into a common, semantically and syntactically described model. The Generator semantically matches services. The Builder discovers the services in the environment and simply invokes them in sequence. The authors of [30] have implemented a prototype of PERSE using Java 1.5. Selected legacy plugins have been developed for SLP using jSLP, UPnP[35] using Cyberlink, and UDDI using jUDDI. The efficiency of PERSE has been evaluated in terms of the cost of semantic service matching, the time to organize the semantic service registry, the time to publish and locate a semantic service description, the scalability of the registry compared with a WSDL service registry, and finally the processing time for service composition with and without QoS support.
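The conversation-as-automaton idea used by PERSE can be pictured with the toy Java sketch below, where a service conversation is a set of labelled transitions and checking a candidate behaviour amounts to simulating the automaton. The states, operation labels and API are illustrative assumptions, not PERSE's actual data structures.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: a service conversation as a finite state automaton.
public class ConversationAutomaton {
    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private final String initial;
    private final Set<String> accepting;

    ConversationAutomaton(String initial, Set<String> accepting) {
        this.initial = initial;
        this.accepting = accepting;
    }

    void addTransition(String from, String operation, String to) {
        transitions.computeIfAbsent(from, s -> new HashMap<>()).put(operation, to);
    }

    // Simulates the automaton; true iff the operation sequence is a legal conversation.
    boolean accepts(List<String> operations) {
        String state = initial;
        for (String op : operations) {
            Map<String, String> out = transitions.getOrDefault(state, Map.of());
            if (!out.containsKey(op)) return false;   // behaviour not allowed by the service
            state = out.get(op);
        }
        return accepting.contains(state);
    }

    public static void main(String[] args) {
        // Toy conversation for a hypothetical booking service: login -> search -> book.
        ConversationAutomaton booking = new ConversationAutomaton("start", Set.of("done"));
        booking.addTransition("start", "login", "loggedIn");
        booking.addTransition("loggedIn", "search", "results");
        booking.addTransition("results", "book", "done");

        System.out.println(booking.accepts(List.of("login", "search", "book"))); // true
        System.out.println(booking.accepts(List.of("search", "book")));          // false
    }
}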

3.3 SeSCo: Seamless Service Composition

SeSCo[10] presents a service composition mechanism for pervasive computing. It employs the service-oriented middleware platform called Pervasive Information Communities Organization (PICO) to model and represent resources as services. The proposed service composition mechanism models services as directed attributed graphs, maintains a repository of service graphs, and dynamically combines multiple basic services into complex services (Builder). The proposed service composition mechanism constructs possible service compositions based on their semantic and syntactic descriptions (Generator). SeSCo[10] proposes a hierarchical service overlay mechanism based on a LATCH protocol (Evaluator). The hierarchical scheme of aggregation exploits the presence of heterogeneity

through service cooperation. Devices with higher resources assist those with restricted resources in accomplishing service-related tasks such as discovery, composition, and execution.

The SeSCo architecture is depicted under the SCM model in Figure 4. No Translator module is provided, and SeSCo uses the same language to represent the user task and the composite service. The service matching is based on semantic interface matching, and the evaluation is based on the correctness of the input/output matching. SeSCo[10] evaluated its approach by calculating the composition success ratio for different composition lengths, the composition length being essentially the number of services used to compose a single service. This evaluation shows the effect of limiting the length of the composition to a predefined number. If the service density is high, a successful composition can be achieved even with a low value of composition length. However, at lower service densities, it might be necessary to allow higher composition lengths for better composition.
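The graph-based chaining at the heart of SeSCo-style composition, together with the notion of composition length, can be sketched as follows: each service is treated as an edge from the data type it consumes to the type it produces, and a composition is a bounded-length path between the requested input and output types. The types, service names and breadth-first strategy are assumptions made for illustration, not SeSCo's actual algorithm.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch of graph-based service chaining with a bounded composition length.
public class GraphComposer {
    record Service(String name, String inputType, String outputType) {}

    // Breadth-first search for a chain of services, at most maxLength long.
    static List<Service> compose(List<Service> available, String from, String to, int maxLength) {
        Deque<List<Service>> queue = new ArrayDeque<>();
        queue.add(List.of());
        while (!queue.isEmpty()) {
            List<Service> path = queue.poll();
            String current = path.isEmpty() ? from : path.get(path.size() - 1).outputType();
            if (current.equals(to)) return path;          // composition found
            if (path.size() == maxLength) continue;       // length budget exhausted
            for (Service s : available) {
                if (s.inputType().equals(current)) {
                    List<Service> next = new ArrayList<>(path);
                    next.add(s);
                    queue.add(next);
                }
            }
        }
        return null;                                       // no composition within maxLength
    }

    public static void main(String[] args) {
        List<Service> services = List.of(
            new Service("ocr", "Image", "Text"),
            new Service("translate", "Text", "TranslatedText"),
            new Service("speak", "TranslatedText", "Audio"));
        System.out.println(compose(services, "Image", "Audio", 3)); // chain of three services
        System.out.println(compose(services, "Image", "Audio", 2)); // null: length too small
    }
}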

3.4 Broker Approach for Service Composition

Broker[6] presents a distributed architecture and associated protocols for service composition in mobile environments that take into consideration mobility, a dynamically changing service topology, and device resources. The composition protocols are based on distributed brokerage mechanisms (Evaluator) and utilize a distributed service discovery process over ad hoc network connectivity. The proposed architecture is based on a composition manager, a device that manages the discovery, integration (Generator), and execution of a composite request (Builder). Two broker selection-based protocols - a dynamic one and a distributed one - are proposed in order to distribute the composition requests

Figure 3 PERSE mapped to SCM

Figure 4 SeSCo mapped to SCM


to the composition managers available in the environment. These protocols depend on a device-specific potential value, taking into account the services available on the devices, the computation and energy resources, and the service topology of the surrounding vicinity.

Figure 5 Broker mapped to SCM

The Broker architecture is depicted under the SCM model in Figure 5. The Broker arbitration is the Evaluator module, as it evaluates the available devices and decides how to distribute the composition request, described in a special language (DSF), taking into account the device context. The evaluation is done here before the composition process. The Service Integration describes the composition sequence using a specific language (ESF) and passes it to the Service Execution (the Builder) to execute it. The Broker approach[6] was implemented as part of a distributed event-based mobile network simulator to test the two proposed broker arbitration protocols and the composition efficiency. Simulation results show that these protocols - especially the distributed approach - outperform the usual centralized broker composition in terms of composition efficiency and broker arbitration efficiency.
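A rough picture of how a "potential value" could drive broker selection is given below: each candidate device is scored from its hosted services, resources and connectivity, and the highest-scoring device is elected to coordinate the composition. The linear formula and its weights are assumptions made for illustration; the Broker protocols define their own metric and exchange it over the ad hoc network.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch of broker election by potential value; the scoring
// formula and weights are invented for this example.
public class BrokerElection {
    record Device(String id, int hostedServices, double cpuCapacity,
                  double batteryLevel, int reachableNeighbours) {}

    // Larger is better: a crude linear combination of the device's resources.
    static double potentialValue(Device d) {
        return 0.4 * d.hostedServices()
             + 0.3 * d.cpuCapacity()
             + 0.2 * d.batteryLevel()
             + 0.1 * d.reachableNeighbours();
    }

    // Elects the device that should coordinate (broker) the composition request.
    static Optional<Device> electBroker(List<Device> vicinity) {
        return vicinity.stream().max(Comparator.comparingDouble(BrokerElection::potentialValue));
    }

    public static void main(String[] args) {
        List<Device> devices = List.of(
            new Device("phone",  3, 0.5, 0.4, 4),
            new Device("laptop", 5, 0.9, 0.8, 6),
            new Device("sensor", 1, 0.1, 0.9, 2));
        electBroker(devices).ifPresent(d -> System.out.println("broker = " + d.id()));
    }
}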

3.5 SeGSeC: Semantic Graph-Based Service Composition

SeGSeC[8] proposes an architecture that obtains the semantics of the requested service in an intuitive form (e.g. using natural language) (Translator), and dynamically composes the requested service based on its semantics (Generator). To compose a service based on its semantics, the proposed architecture supports semantic representation of services - through a

component model named Component Service Model with Semantics (CoSMoS) - discovers services required for composition - through a middleware named Component Runtime Environment (CoRE) - and composes the requested service based on its semantics and the semantics of the discovered services - through a service composition mechanism named Semantic Graph-Based Service Composition (SeGSeC).

Figure 6 SeGSeC mapped to SCM

The SeGSeC architecture is depicted under the SCM model in Figure 6. The Request Analyser translates the user request into an internal system language using a graph-based approach. The Semantic Analyser and Service Composer produce the composition workflow, ready to be executed by the Service Performer. The workflow respects the semantic matching composition rules, and its correctness is guaranteed via the Evaluator module. SeGSeC[8] was evaluated according to the number of services deployed and the time needed to discover, match and compose services. Another set of evaluations considered not only the number of deployed services but also the number of operations these services implement. The results show that SeGSeC performs efficiently when only a small number of services are deployed, and that it scales with the number of services deployed if the discovery phase is done efficiently.

3.6 WebDG: Semantic Web Services Composition

WebDG[12] proposes an ontology-based framework for the automatic composition of web services. [12] presents an algorithm to generate composite services from high-level declarative descriptions. The algorithm uses composability rules to compare the syntactic and semantic features of web services and determine whether two services are composable.


The WebDG architecture is depicted under the SCM model in Figure 7. The service composition approach is organized in four phases: request specification (Translator), service description matchmaking (Generator), composition plan selection (Evaluator) and composite service generation (Builder). A prototype implementation of WebDG is provided and tested on e-government Web service applications. The WebDG evaluation aims to test the possibility of generating plans for a large number of service interfaces, the effectiveness and speed of the matchmaking algorithm, and the role of the selection phase (QoC parameters) in reducing the number of generated plans. The results show that most of the time is spent on checking message composability. Furthermore, a relatively low value of composition completeness generates more plans, each plan containing a small number of composable operations. In contrast, a high value of this ratio generates a smaller number of plans, each plan having more composable operations.
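A minimal sketch of a message-composability rule in the spirit of WebDG is shown below: two operations are considered composable here when every input parameter expected by the target operation is produced, with a matching data type, by the source operation. The e-government parameter names are invented for illustration, and WebDG's actual rules additionally compare semantic categories, purposes and quality parameters.

import java.util.Map;

// Illustrative sketch of a message-composability check between two operations.
public class MessageComposability {

    // Maps of parameter name -> XML Schema-like data type.
    static boolean messageComposable(Map<String, String> sourceOutputs,
                                     Map<String, String> targetInputs) {
        for (Map.Entry<String, String> needed : targetInputs.entrySet()) {
            String providedType = sourceOutputs.get(needed.getKey());
            if (providedType == null || !providedType.equals(needed.getValue())) {
                return false;   // missing parameter or data-type mismatch
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, String> insuranceOutputs = Map.of("citizenId", "xsd:string",
                                                      "eligibility", "xsd:boolean");
        Map<String, String> benefitsInputs   = Map.of("citizenId", "xsd:string");
        Map<String, String> taxInputs        = Map.of("citizenId", "xsd:string",
                                                      "income", "xsd:decimal");

        System.out.println(messageComposable(insuranceOutputs, benefitsInputs)); // true
        System.out.println(messageComposable(insuranceOutputs, taxInputs));      // false
    }
}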

4. Classification of the Pervasive Service Composition Middleware

As shown above, the SCM model provides generic functional modules that describe the existing service composition middleware. We choose to classify the middleware - MySIM[17], PERSE[30], SeSCo[10], Broker[6], SeGSeC[8] and WebDG[12] - according to pervasive environment requirements. We first list and explain these pervasive requirements for service composition middleware; then a classification of these middleware is given.

4.1 Pervasive Requirements

Pervasive computing brought new challenges to distributed and mobile computing. We identify the following eight fundamental requirements for service composition in pervasive environments: interoperability, discoverability, adaptability, context awareness, QoS management, security, spontaneous management and

autonomous management.

Interoperability is the ability of two or more systems or components to exchange information and to use the information that has been exchanged. Ubiquitous computing environments, following Mark Weiser's definition, consist of various kinds of computational devices, networks and collaborating software and hardware entities. Due to the large number of heterogeneous and cooperating parties, interoperability is required at all levels of ubiquitous computing. Service composition middleware need to take advantage of all the functionalities available in the surroundings, and for that they need to be interoperable.

Discoverability is a major issue for ubiquity and composition, as devices and services need to be located and accessed before being composed. One of the fundamental challenges of distributed and highly dynamic environments is how applications can discover the surrounding entities and, conversely, how applications can be discovered by the other entities in the system. In a pervasive system, the execution environment of applications can logically be considered as a single container including all applications, other components, and resources. Moreover, the idea in distributed environments is that the resources can be accessed without any knowledge of where the resources or the users are physically located.

Adaptability is the ability of a software entity to adapt to a changing environment. Changes in applications' and users' requirements, or changes within the network, may require the presence of adaptation mechanisms within the middleware. Moreover, adaptation is necessary when a significant mismatch occurs between the supply and demand of a resource. As the application's execution environment changes due to the user's mobility, the vital resources need to be substituted by corresponding resources in the new environment in order to ensure continuous operation. The requirement for adaptation is present on many different layers of a computing system.

Context awareness is the ability of pervasive middleware to be aware of devices coming and leaving, functionalities being offered and withdrawn, quality of service changing, etc. They need to be aware of all these changes in order to offer the best functionalities to applications regardless of the surrounding context. When considering context-aware systems in general, some common functionalities that are present in almost every system can be identified: context sensing and processing, context information representation, and the applications that utilize the context information.

Figure 7 WebDG mapped to SCM


In general, the context information can be divided into low-level and high-level context information. Low-level context information can be collected using sensors in the system. Low-level context information sources can be combined or processed further into higher-level context information.

QoS management is essential in dynamic environments, where connectivity is highly variable. A pervasive middleware for service composition needs to take the non-functional parameters of applications and devices into consideration in order to provide viable and flexible composition plans and composite services. QoS parameters concern not only the services but also the devices where the execution takes place. The composition execution needs to rely on these parameters in order to take place under the best conditions. Not only does the QoS of the different services need to be compatible, but the devices performing the composition also need to respect certain characteristics and constraints.

Security mechanisms, such as authentication, authorization, and accounting (AAA) functions, may be an important part of the middleware in order to intelligently control access to computer and network resources, enforce policies, audit network/user usage, etc. Another important aspect concerns privacy and trust in pervasive environments. In the presence of unknown devices, middleware need to respect the privacy of users and provide trust mechanisms adapted to the ever-changing nature of the environment.

Spontaneous management concerns the ability of a pervasive middleware to compose services independently of user and application requests. The middleware spontaneously composes services that are compatible together and introduces a new composite service into the environment. The new service is registered and can publish its interfaces in order to be discovered and executed by applications. Spontaneous service composition is an interesting feature in pervasive environments, as services meet when users encounter each other, and interesting composite services can be generated from these meetings, even if not required by users at that moment.

Autonomous management concerns the ability of a pervasive middleware to control and manage its resources, functions, security and performance, in the face of failures and changes, with little or no human intervention. The complexity of future ubiquitous computing environments will be such that it will be impossible for human administrators to perform their

traditional functions of configuration management, performability management, and security management. Instead, one must resort to automating most of these management functions, allowing humans to concentrate on the definition and supervision of high-level management policies, while the middleware itself takes care of translating these high-level policies into automated control structures. The challenge is therefore to move from classical middleware support for configuration, performability and security management to support for self-configuration, self-tuning, self-healing and self-protecting capabilities.

We classify the service composition middleware - MySIM[17], PERSE[30], SeSCo[10], Broker[6], SeGSeC[8], and WebDG[12] - under the above requirements. For each middleware, we analyze its four modules - Translator, Generator, Evaluator, and Builder - and detail whether they respect the pervasive requirements. The first section depicts the requirements that are fulfilled, to a certain extent, by the service composition middleware. The second section explains the requirements that have so far been left behind. Our classification is given in Figure 8.

4.2 Service Composition Middleware Meeting Pervasive Requirements

In this section, we are interested in the pervasive requirements that are fulfilled by service composition middleware: discoverability, adaptability, context awareness, and QoS management. While some pervasive requirements are relatively well fulfilled by current composition middleware, others are still at a preliminary stage. All middleware provide the discoverability and context awareness requirements. These requirements are intrinsic to every composition middleware intended to evolve in dynamic and ever-changing environments such as pervasive environments. These requirements are essential when constructing and evaluating composition plans, but also when discovering and invoking services. Indeed, generating and evaluating composition plans must be contextual: as services can come and go at any time, a composition plan constructed at a certain time needs to be re-evaluated before execution, in case some changes have affected it. Hence, context awareness is provided not only by the Builder but also by the Generator and Evaluator modules.


The adaptability requirement is fulfilled by four of the six classified middleware (MySIM[17], PERSE[30], SeSCo[10], and Broker[6]) for different SCM modules. The environmental changes that affect a pervasive environment, such as devices coming and leaving or services becoming unavailable, require special mechanisms from the middleware in order to re-evaluate and adapt their service composition to these changes. As we can see, some middleware propose adaptation mechanisms, but this requirement is far from being fulfilled by all service composition middleware. In current research, adaptation is considered more as a field of research[35] than as a requirement to fulfill. Adapting services can be seen as a way of integrating services into their new environments. The QoS management requirement is fulfilled by five of the six classified middleware (MySIM[17], PERSE[30], SeSCo[10], Broker[6] and WebDG[12]). The modules that usually respect the QoS properties are the Generator, the Evaluator and the Builder. The Evaluator relies on the service QoS parameters in order to choose the most suitable plan from all possible composition plans. QoS is especially relevant for stateful services. A composition plan of stateful services needs to take QoS

into account, as the resulting composition may not execute in case of severe incompatibilities in QoS between the combined services. The Builder can analyze the QoS parameters in order to choose the devices and platforms on which to execute the service composition, depending on power or memory properties, but also to choose the services to compose depending on the devices they execute on. This requirement is especially considered in the development of multimedia applications in variable environments such as pervasive environments[16]. Indeed, composing services within multimedia applications imposes rigorous respect of the QoS properties, otherwise the whole application may not execute.
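As an illustration of QoS-driven plan selection by an Evaluator, the sketch below aggregates per-service QoS values over each candidate plan (latencies add up, availabilities multiply) and keeps the best-scoring plan. The aggregation rules and the scoring weights are common textbook assumptions rather than those of any surveyed middleware.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch of QoS-driven composition plan selection.
public class QosPlanSelector {
    record ServiceQos(String name, double latencyMs, double availability) {}
    record Plan(List<ServiceQos> services) {
        double totalLatency()      { return services.stream().mapToDouble(ServiceQos::latencyMs).sum(); }
        double totalAvailability() { return services.stream().mapToDouble(ServiceQos::availability)
                                                    .reduce(1.0, (a, b) -> a * b); }
    }

    // Higher score is better: reward availability, penalize latency (weights are assumptions).
    static double score(Plan p) {
        return p.totalAvailability() - 0.001 * p.totalLatency();
    }

    static Optional<Plan> best(List<Plan> candidates) {
        return candidates.stream().max(Comparator.comparingDouble(QosPlanSelector::score));
    }

    public static void main(String[] args) {
        Plan fastButFlaky = new Plan(List.of(new ServiceQos("a", 20, 0.90),
                                             new ServiceQos("b", 30, 0.85)));
        Plan slowButSolid = new Plan(List.of(new ServiceQos("c", 120, 0.99),
                                             new ServiceQos("d", 80, 0.99)));
        best(List.of(fastButFlaky, slowButSolid))
            .ifPresent(p -> System.out.println("selected plan latency = " + p.totalLatency() + " ms"));
    }
}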

4.3 Service Composition Middleware Missing Pervasive Requirements

Today's service composition middleware show a real lack of support for interoperability, spontaneous management, security mechanisms and autonomous management in pervasive environments.

The interoperability requirement is largely left behind in today's service composition middleware. Figure 8 shows that only three middleware (MySIM[17], PERSE[30] and SeGSeC[8]) fulfill this requirement, and only for the Translator module. Interoperability is currently resolved by explicit technical translations from one service model to another. In this way, interoperability is only resolved at the technology level. On a more theoretical and formal level, the use of semantic and ontology-based languages[1] is not sufficient to make service composition fully interoperable. Very often, service providers use different ontology domains, and ontology transformations from one domain to another are badly needed.

Spontaneous management is only considered by the MySIM[17] middleware. Indeed, all of the other five middleware are goal-oriented and respond mainly to predefined functionality requests coming from the application layer. None of these middleware proposes a spontaneous service composition that delivers new services and functionalities into the environment without the intervention of users or applications. MySIM[17] proposes a service integration middleware that generates new services in the environment spontaneously and automatically. Compatible services are composed on the fly, without any intervention and upon the middleware's own decision, based on semantic and syntactic service matching.

Figure 8: Service Composition Middleware Classification


The middleware listed above do not propose solutions to address the problem of security or trust. They rely on the existing mechanisms proposed by the middleware and network layers, if any. Several other studies[14][15] address security features for service composition using contracts[15], formal verification methods[14], or a security model for enforcing access control in extensible component platforms[20]. No real autonomous composition management is provided: the middleware do not propose mechanisms to manage their resources, functions, security, and performance in the face of failures and changes, with little or no human intervention. Pervasive environments capable of composing functionalities autonomously are still at a preliminary stage. A major domain that has dealt with autonomous management of composition is multi-agent systems. Combining multi-agent systems and service-oriented architecture is a well-known research direction for adding autonomy features to services[9][18][21][23].

5. Conclusions

The development of pervasive computing has brought to the fore a well-identified problem: the service composition problem. Composing services together on various platforms and extending environments with new functionalities are the new trends pervasive computing aims to achieve. Many composition middleware have reached a certain maturity and propose complete architectures and protocols to discover and compose services in pervasive environments. Many surveys[5][7][22][31][33] list service composition middleware according to predefined criteria or properties. They very often consider middleware for the composition of a particular technology, such as Web services composition middleware. The application of service composition middleware to pervasive environments is rather new, and there is a real lack of analysis and classification of service composition middleware under a reference model. In this article, we surveyed six complete service composition architectures for pervasive environments, located in the middleware layer: MySIM[17], PERSE[30], SeSCo[10], Broker[6], SeGSeC[8] and WebDG[12]. We do not claim that our classification is exhaustive, but we believe that the major middleware for service composition in pervasive environments are covered. We introduced a novel approach to study the service composition problem. We studied these systems by reducing the composition problem to four main problems: the service translations, the composition plan generations, the plan contextual evaluations, and finally the actual composition implementation. In each of these domains, several trends appeared to be commonly used: simple translation between diverse service technologies for the Translator, a graph-based or composition-language approach for the Generator, a formal-methods approach for the Evaluator, and discovery and invocation mechanisms for the Builder. Finally, we classified these middleware under several requirements related to the ubiquity of the environments. If some requirements such as discoverability and context awareness are well covered, others such as interoperability, adaptability and QoS management are still being explored. Security, spontaneous and autonomous management open the way to many promising research directions, at the intersection of several major domains such as artificial intelligence and autonomic computing, for service composition middleware in pervasive environments.

References

[1] T. Bittner, M. Donnelly, and S. Winter, "Ontology and semantic interoperability", in D. Prosperi and S. Zlatanova (eds.), Large-scale 3D data integration: Challenges and Opportunities, CRC Press (Taylor & Francis), pp. 139-160, 2005.

[2] T. Erl, Service-Oriented Architecture (SOA): Concepts, Technology, and Design, Prentice Hall, 2005.

[3] B. Cole-Gomolski, "Messaging Middleware Initiative Takes a Hit", Computerworld, 1997.

[4] M. Juric, P. Sarang, and B. Mathew, Business Process Execution Language for Web Services (2nd edition), Packt Publishing, 2006.

[5] A. Alamri and M. Eid and A. El Saddik "Classification of the state-of-the-art dynamic web services composition techniques", Int. J. Web and Grid Services, Vol. 2, No. 2, 2006, pp. 148-166.

[6] D. Chakraborty and A. Joshi and T. Finin and Y. Yesha "Service Composition for Mobile Environments", Journal on Mobile Networking and Applications, Special Issue on Mobile Services, Vol. 10, No. 4, 2005, pp. 435-451.

[7] S. Dustdar and W. Schreiner "A survey on web services composition", Int. J. Web and Grid Services, Vol. 1, No. 1, 2005, pp. 1-30.

[8] K. Fujii and T. Suda "Semantics-based dynamic service composition", IEEE Journal on Selected Areas in Communications, Vol. 23, No. 12, 2005.

[9] F. Ishikawa and N. Yoshioka and S. Honiden "Mobile agent system for Web service integration in pervasive network", Systems and Computers in Japan, Wiley-Interscience, Vol. 36, No. 11, 2005, pp. 34-48.

[10] S. Kalasapur and M. Kumar and B. Shirazi "Dynamic Service Composition in Pervasive Computing", IEEE Transactions on Parallel and Distributed Systems, Vol. 18, No. 7, 2007, pp. 907-918.

[11] B. Medjahed and Y. Atif "Context-based matching for web service composition", Distributed and Parallel Databases, Special Issue on Context-Aware Web Services, Vol. 21, No. 1, 2006, pp. 5-37.

[12] B. Medjahed and A. Bouguettaya and A. K. Elmagarmid "Composing Web services on the Semantic Web", The VLDB Journal, Vol. 12, No. 4, 2003, pp. 333-351.

[13] M. Satyanarayanan "Pervasive Computing: Vision and Challenges", IEEE Personal Communications, 2001.

[14] G. Barthe and D. Gurov and M. Huisman "Compositional Verification of Secure Applet Interactions", in FASE '02: Proceedings of the 5th International Conference on Fundamental Approaches to Software Engineering, 2002, London, UK, pp. 15-32.

[15] N. Dragoni and F. Massacci and C. Schaefer and T. Walter and E. Vetillard "A Security-by-contracts Architecture for Pervasive Services", in Security, Privacy and Trust in Pervasive and Ubiquitous Computing Workshop, 2007.

[16] B. Girma and L. Brunie and J.-M. Pierson "Planning-Based Multimedia Adaptation Services Composition for Pervasive Computing", in 2nd International Conference on Signal-Image Technology & Internet-based Systems (SITIS'2006), 2006, LNCS series, Springer Verlag.

[17] N. Ibrahim and F. Le Mouël and S. Frénot "MySIM: a Spontaneous Service Integration Middleware for Pervasive Environments", in ACM International Conference on Pervasive Services (ICPS'2009), 2009, London, UK.

[18] Z. Maamar and S. Kouadri and H. Yahyaoui "A Web services composition approach based on software agents and context", in SAC'04: Proceedings of the 2004 ACM symposium on Applied computing, 2004, Nicosia, Cyprus.

[19] E. Niemela and J. Latvakoski "Survey of requirements and solutions for ubiquitous software", in 3rd international conference on Mobile and ubiquitous multimedia, 2004, Vol. x, pp. 71-78.

[20] P. Parrend and S. Frénot "Component-Based Access Control: Secure Software Composition through Static Analysis", in Proceedings of the 7th International Symposium, 2008 Springer, LNCS 4954, Budapest, Hungary.

[21] C. Preist and C. Bartolini and A. Byde "Agent-based service composition through simultaneous negotiation in forward and reverse auctions", in EC '03: Proceedings of the 4th ACM conference on Electronic commerce, 2003.

[22] J. Rao and X. Su "A Survey of Automated Web Service Composition Methods", in First International Workshop on Semantic Web Services and Web Process Composition, 2004, SWSWPC, San Diego, California, USA.

[23] Q. B. Vo and L. Padgham "Conversation-Based Specification and Composition of Agent Services", in Cooperative Information Agents (CIA), 2006, Edinburgh, UK, pp. 168-182.

[24] Z. Yang and R. Gay and C. Miao and J.-B. Zhang and Z. Shen and L. Zhuang and H. M. Lee "Automating integration of manufacturing systems and services: a semantic Web services approach", in Industrial Electronics Society (IECON) 31st Annual Conference of IEEE, 2005.

[25] Ion Constantinescu and Boi Faltings and Walter Binder "Large Scale, Type-Compatible Service Composition", in ICWS '04: Proceedings of the IEEE International Conference on Web Services, Washington, DC, USA, 2004.

[26] Evren Sirin and James Hendler and Bijan Parsia "Semi-automatic Composition of Web Services using Semantic Descriptions", in Web Services: Modeling, Architecture and Infrastructure Workshop, Angers, France, 2003.

[27] Fabio Casati and Ski Ilnicki and Li-jie Jin and Vasudev Krishnamoorthy and Ming-Chien Shan "Adaptive and Dynamic Service Composition in eFlow", in CAiSE '00: Proceedings of the 12th International Conference on Advanced Information Systems Engineering, London, UK, 2000.

[28] Tao Gu and Hung Keng Pung and Da Qing Zhang "A Middleware for Building Context-Aware Mobile Services", in Proceedings of IEEE Vehicular Technology Conference, Los Angeles, USA, 2004.

[29] Shankar R. Ponnekanti and Armando Fox "SWORD: A developer toolkit for web service composition", in 11th World Wide Web Conference, Honolulu, USA, 2002.

[30] S. Ben Mokhtar, "Semantic Middleware for Service-Oriented Pervasive Computing", Ph.D. thesis, University of Paris 6, Paris, France, 2007.

[31] D. Kuropka and H. Meyer "Survey on Service Composition", Technical Report, Hasso-Plattner-Institute, University of Potsdam, number 10, 2005.

[32] J. Floch (ed.) "Theory of adaptation", Deliverable D2.2, Mobility and ADaptation enAbling Middleware (MADAM), 2006.

[33] C. Mascolo and S. Hailes and L. Lymberopoulos and P. Picco and P. Costa and G. Blair and P. Okanda and T. Sivaharan and W. Fritsche and M. A. Rónai and K. Fodor and A. Boulis "Survey of Middleware for Networked Embedded Systems", Technical Report, FP6 IP Project: Reconfigurable Ubiquitous Networked Embedded Systems, 2005.

[34] T. Salminen, "Lightweight middleware architecture for mobile phones", Ph.D. thesis, Department of Electrical and Information Engineering, University of Oulu, Oulu, Finland, 2005.

[35] UPnP Forum, "Understanding UPnP: A White Paper", Technical Report, 2000.

[36] The OWL Services Coalition, "OWL-S: Semantic Markup for Web Services", White paper, 2003.

Noha Ibrahim holds an engineering diploma from Ecole Nationale Supérieure d'Informatique et de Mathématique Appliquée de Grenoble (ENSIMAG) and a PhD degree from the National Institute for Applied Sciences (INSA) Lyon, France. Her PhD addressed service integration in pervasive environments and focused on providing a spontaneous service integration middleware adapted to pervasive environments. Her main interests are middleware for pervasive and ambient computing. Noha Ibrahim is currently a postdoctoral researcher at the Grenoble Informatics Laboratory, where she works on a service-composition-based framework for optimizing queries.


Frédéric Le Mouël holds an engineering diploma in Languages and Operating Systems and a PhD degree from the University of Rennes 1, France. His dissertation focused on an adaptive environment for the distributed execution of applications in a mobile computing context. Frédéric Le Mouël is currently an associate professor at the National Institute for Applied Sciences of Lyon (INSA Lyon, France), Telecommunications Department, Center for Innovation in Telecommunication and Integration of Services (CITI Lab.). His main interests are service-oriented middleware, more specifically the fields of dynamic adaptation, composition, coordination and trust of services.


Context Aware Adaptable Applications - A global approach

Marc DALMAU, Philippe ROOSE, Sophie LAPLACE

LIUPPA, IUT de Bayonne 2, Allée du Parc Montaury

64600 Anglet FRANCE

Mail : {[email protected]}

Abstract
Today's applications (mostly component based) cannot have their requirements expressed without a ubiquitous and mobile part, for end-users as well as for M2M (Machine to Machine) applications. Such an evolution implies context management in order to evaluate the consequences of mobility, and corresponding mechanisms to adapt or to be adapted to the new environment. Applications are then qualified as context aware applications. The first part of this paper presents an overview of context and of its management through application adaptation. It starts with a definition and proposes a model for the context. It also presents various techniques to adapt applications to the context, from self-adaptation to supervised approaches. The second part is an overview of architectures for adaptable applications. It focuses on platform-based solutions and shows the information flows between application, platform and context. Finally, it makes a synthesis proposal with a platform for adaptable context-aware applications called Kalimucho. We then present the implementation tools, a software component model and a data flow model, used to implement the Kalimucho platform.
Key-words: Adaptation, Supervision, Platform, Context, Model

1. Introduction

Today's applications (mostly component based) cannot have their requirements expressed without a ubiquitous and mobile part, for end-users as well as for M2M (Machine to Machine) applications. Such an evolution implies context management in order to evaluate the consequences of mobility, and corresponding mechanisms to adapt or to be adapted to the new environment. Mobile computing and, later, ubiquitous computing focus on the study of systems able to accept dynamic changes of hosts and environment [33]. Such systems are able to adapt themselves, or to be adapted, according to their mobility in a physical environment. That implies dynamic interconnections and knowledge of the overall context. Due to the underlying constraints (mobility, heterogeneity, etc.), the management of such applications is complex; it requires considering the constraints as soon as possible and having a global vision of the application. The adaptation decision can be fully centralized (A - Figure 1) or fully distributed, with all intermediate positions (B & C - Figure 1). The consequence is the level of autonomy of the decision as well as the level of predictability. Obviously, autonomy increases with decentralized supervision. Reciprocally, complexity increases with autonomy (problems of predictability, concurrency, etc.).

Figure 1 : Means of adaptation. (A) Centralized supervision, (B) adaptation platform (decentralized supervision), (C) self adaptation; autonomy increases and predictability decreases from A to C.

Self-adaptable applications need access to context information. This access can be active, if the application itself captures the context (see A - Figure 1), or passive, if an external mechanism gives it access to the context (see B - Figure 1). Nevertheless, with mobile devices and the underlying connectivity problems, a fully centralized supervision is not possible. A pervasive supervision [29] appears to be a good solution and allows managing complexity and predictability while keeping the advantages of autonomy. In order to be context-aware, applications need to get information corresponding to three adaptation types: data, service and presentation. The first one deals with "raw data" and its adaptations to provide complete and formatted information. Service adaptation deals with the architecture of the application and with dynamic adaptation (connection/disconnection/migration of the components composing the application). It allows adapting the application in order to respect the required QoS. Presentation deals with HCI (not addressed in this paper). Here is a global schema of an adaptable application:

Figure 2 : Adaptable applications (application and adaptation manager exchange wished and provided QoS; the adaptation manager, monitored by context information, pushes/pulls adaptations and influences user requirements).

Whereas [34] [35] do not make a distinction between context-oriented and application-oriented (functional) data, we think that such a distinction makes design easier and offers a good separation of concerns [36].

2. What is context?

2.1 Definition and model

The origin of the term "context awareness" is attributed to Schilit and Theimer [42]. They explain that it is "the capacity for a mobile application and/or a user to discover and react to situation modifications". Many other definitions can be found in [43]. The application context is the situation of the application, so the context is a set of external data that may influence the application [36]. A context management system can interpret the context and formalize it in order to build a high-level representation. It creates an abstraction for the entities reacting to situation evolutions; these can be applications [35], platforms, middleware, etc. In order to build such abstractions, a three-layered taxonomy can be organized as shown in Figure 3. The first layer deals with context information capture. The first type of context is called Environmental: this is the physical context. It represents the external environment where information is captured by sensors. This information is about light, speed, temperature, etc. The second type, called User, gives a representation of the users interacting with the application (language, localization, activity, etc.). This is the user profile. The third type deals with Hardware. Probably the most "classical" one, it gives information on available resources (memory, %CPU, connections, bandwidth, throughput, etc.). It also gives information such as display resolutions and the type of the host (PDA, Smartphone, Mobile Phone, PC, etc.). The fourth type is the Temporal context. It preserves and manages the history (date, hour, synchronizations, actions performed, etc.). The last one is called Geographic and gives geographical information (GPS data, horizontal/vertical movement, speed, etc.). The second layer, called "context management" [44] [45], is based on the representations of the previous layer. It provides services specifying the software environment of the application (platform, middleware, operating system, etc.): Storage of context data so that services can retrieve it, Quality giving a measure of the service itself or of the data processed, and Reflexivity allowing the application itself to be represented. Localization manages geographic information in order to locate entities and predict their movements. The last layer proposes mechanisms to allow adaptation to the context. Several mechanisms are available to react to contextual events: the first one is software component Composition, the second one is Migration, in order to move entities, and the last one is Adaptation, to ensure the evolution of the application. This last point is non-functional and is managed by the middleware; it can depend on a user profile or on rules provided by the user. Polymorphism facilitates the migration of entities and their adaptation to various hosts (with more or less constraints).

Figure 3 : Taxonomy of context. Layer 1, type of the context: Environment, User, Hardware, Temporal, Geographic. Layer 2, context management services: Service, Storage, Quality, Reflexivity, Localization. Layer 3, context management tools: Adaptation, Migration, Composition, Polymorphism.

We propose a context model able to describe any context information. This model (called Context Object) provides the information needed by the entities managing the application. Some information defines the context (its nature) whereas other information defines its validity. The nature of the context can be [34]:
- User (people), such as user preferences,
- Hardware (things), such as the network,
- Environment (places), such as temperature, sunlight, sound, movement, air pressure, etc. It is the physical context; it represents the external environment from where information is captured by sensors, and it deals with the users' environment [36] as well as the hardware environment.
Such information is called ContextInformation, and we call InformationValidity the validity area of a ContextInformation (for example, old information, or information whose source is very far away, can be useless). InformationValidity is:
- Temporal: temporal information can be a date or a time used as a timestamp. Time is one aspect of durability, so it is important to date information as soon as it is produced. This temporal information allows making a historical report of the application and defining the validity (freshness) of a ContextInformation [40]. This freshness is the time since a sensor last read it. Ranganathan defines a temporal degradation function that degrades the confidence of the information.
- Spatial: it is the current location (the host (identity) or the geographic coordinates (GPS)) and makes it possible to distinguish local and remote context.
- Confidence information: how precise the sensor is.
- Information ownership: in some applications hosted on a SmartPhone, for example, privacy is very important; therefore, each piece of information has to be stamped with its owner (the producer).
Note that some information is strongly coupled, such as freshness and confidence, whereas other information is defined using application data, such as ownership. That is the reason why [39] identified physical sensors, virtual sensors (data sources from software applications or services) and logical sensors (combining physical and virtual sensors with additional information). Depending on the application, one type of information can be a ContextInformation or a ValidityInformation. For example, location can be a ContextInformation for a user in a forest, or a ValidityInformation for a sensor network that supervises temperature and air pressure measurements. According to this model, we organize all the characteristics of context information that define type, value, time stamp, source, confidence and ownership [37], or user, hardware, environment and temporal aspects [45] [46]. In order to structure such contextual information, we propose a meta-model structuring ContextInformation and ValidityInformation (see Figure 4).
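A minimal Java sketch of this Context Object model is given below. The ContextInformation and InformationValidity structures follow the facets just listed (nature, timestamp, location, confidence, ownership); the linear temporal degradation of confidence is a simplification chosen for illustration, not the degradation function defined by Ranganathan.

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the Context Object model: a piece of ContextInformation carries
// its nature (user, hardware, environment, ...) and an InformationValidity
// made of a timestamp, a location, a sensor confidence and an owner.
final class ContextObjectSketch {

    enum Nature { USER, HARDWARE, ENVIRONMENT, TEMPORAL, GEOGRAPHIC }

    record InformationValidity(Instant producedAt, String location, double sensorConfidence, String owner) {}

    record ContextInformation(Nature nature, String name, Object value, InformationValidity validity) {

        /** Illustrative linear temporal degradation: confidence drops to zero after maxAge. */
        double degradedConfidence(Instant now, Duration maxAge) {
            double age = Duration.between(validity.producedAt(), now).toMillis();
            double ratio = Math.max(0.0, 1.0 - age / maxAge.toMillis());
            return validity.sensorConfidence() * ratio;
        }
    }
}
```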

Figure 4 : Context class diagram (Context composed of LocalContext and RemoteContext; ContextObject specialized into User, Hardware, Environment, Temporal and Geographic, characterized by a ContextInformationObject and a Spatio-TemporalContextObject).

2.2 Context and applications

For several years, the natural evolution of applications towards distribution has shown the need for more than just processing information. Traditionally, applications are based on input/output, i.e. input data given to an application produces output data. This overly restrictive approach is now old fashioned [48]. Data are not clearly identified, and processing does not only depend on the provided data but also on data such as the hour, the localization, the preferences of the user, the history of interactions, etc.; in a word, the context of the application. A representative informal definition can be found in [49]: "The execution context of an application groups all entities and external situations that influence the quality of service/performances (qualitative & quantitative) as the user perceives them". Designers and developers have had to integrate the execution environment into their applications. This evolution allows applications to be aware of the context, then to be context-sensitive, and then to adapt their processing and eventually to dynamically reconfigure themselves in order to react as well as possible to demands. This is obvious, but to adapt itself to the context, the application needs to have a good knowledge of it and of its evolutions. From a research point of view, context needs a vertical approach. All research domains/layers manage contextual information. Many works deal with its design, management, evaluation, etc. Its impact is wide: re-engineering, HCI, grids, distributed applications, ubiquitous computing, security, etc. But context is not a new concept in computer science! Since the early 90's, the Olivetti Research Center with the ActiveBadge [Harter, 1994] and, most of all (with a lot of regret), the Xerox PARC with the PARCTab system [51] laid the foundations of modern context aware applications. In order to be aware of the context, the architecture shown in Figure 5 is "classical"; an example can be found in [46]. It can be summarized as a superposition of layers, each of them matching a contextual information acquisition process, a contextual information management process and an adaptation of the application to the context (as defined in Figure 3).

Figure 5 : Architectural layers of context aware applications (contextual information acquisition, context management, application adaptation).

According to Figure 5, context management implies having dynamic applications, in order to adapt them to variations of the context and thus to provide a quality of service corresponding to the current capabilities of the environment (application + runtime).

3. Context aware applications

Context aware applications are tightly coupled to mobile devices and ubiquitous computing, in the sense of "machines that fit the human environment, instead of forcing humans to enter theirs" [1]. These applications are able to be aware of their execution context (user, hardware and environment) and of its evolutions. They extract information from the context about geographical localization, time and hardware conditions (network, memory, battery, etc.), as well as about users. Interactions between an application and its context can then be represented by two information flows (Figure 6):

− The application captures information from its context
− The application acts on its context

Figure 6 : Context aware application (data flow #1 = consultation, data flow #2 = modification).

The means used to realize both data flows of Figure 6 depend on the type of context (Table 1). They are system and network primitives for the hardware context (resource allocation, connections, consultation of available resources, etc.). The user's context is captured through the interfaces and the information system (user profile description files). Finally, the environmental context can be captured through sensors and modified by actuators.

Flow #1 (consultation): Hardware context = system and network primitives; User context = interfaces and information system; Environment context = sensors.
Flow #2 (modification): Hardware context = resource allocation; User context = interfaces; Environment context = actuators.
Table 1 : Means of interaction between application and context

However, even if it is possible to design limited applications based on the use of contextual information, the main interest is to be able to adapt the behavior of applications to context evolutions. In particular, the increasing use of mobile and limited devices implies the deployment of adaptable applications. Such an approach allows quality of service management (functional and non-functional services, such as energy saving for example).

3.1 Adaptable context aware applications

Adding adaptation to context aware applications means adding a new interaction corresponding to the influence that the context has on the application, that is, the property of the application to adapt itself to the context (Figure 7).

Figure 7 : Adaptable Context Aware Application (data flow #1 = consultation, data flow #2 = modification, data flow #3 = adaptation).

A context aware application can be achieved:
− by self adaptation
− by supervision

Figure 8 : Supervision vs Self Adaptation: a global view (application, adaptation and context, with self adaptation and supervision paths).


3.1.1 Self adaptation

Such systems are expected to dynamically self-adapt to accommodate resource variability, changes of user needs and system faults. In [27], self-adaptive applications are described as useful for pervasive applications and sensor systems. Self-adaptive means that adaptations are managed by the application itself: it evaluates its behavior, its configuration and, for a distributed application, its deployment. The application captures the context (flow #1) and adapts its behavior accordingly (data flow #3). The activity of the application modifies the context (flow #2). This approach, represented in Figure 7, raises the essential problem of accessing distant context information. Indeed, through the interactions described in Table 1, the application can only interact with its local context. In order to get or modify distant contextual information, the designer of the application has to set up specific services on the different sites of the application. It becomes necessary to set up many non-functional mechanisms, which strongly increases the complexity of the application and is difficult to keep up to date. Moreover, self-adaptive solutions imply having a planning and evaluation part at runtime, together with a control loop. In order to make the evaluation, such an application needs a description of its components as well as of the software structure and its various alternatives, i.e. the various assembly configurations. Such solutions do not simplify the separation of concerns, and so reduce the practical viability of the application, its maintainability and its possible evolutions. Moreover, in ubiquitous and heterogeneous environments, such generic solutions are not able to exploit the potential of the hosts [28]. That is the reason why most systems tend to solve these problems using platforms.
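The control loop that a self-adaptive application must embed can be pictured as follows; the interfaces and the one-second period are illustrative assumptions, not an existing API, and the sketch only covers the local context (flows #1 and #3).

```java
// Schematic self-adaptation loop embedded in the application itself:
// capture the local context (flow #1), plan, and adapt (flow #3).
interface LocalContextSensor { ContextSnapshot capture(); }
interface ConfigurationPlanner { Configuration plan(ContextSnapshot c, Configuration current); }
interface Reconfigurator { void apply(Configuration next); }

record ContextSnapshot(double batteryLevel, double bandwidthKbps) {}
record Configuration(String name) {}

final class SelfAdaptiveLoop implements Runnable {
    private final LocalContextSensor sensor;
    private final ConfigurationPlanner planner;
    private final Reconfigurator reconfigurator;
    private Configuration current = new Configuration("default");

    SelfAdaptiveLoop(LocalContextSensor s, ConfigurationPlanner p, Reconfigurator r) {
        this.sensor = s; this.planner = p; this.reconfigurator = r;
    }

    @Override public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            ContextSnapshot snapshot = sensor.capture();                 // flow #1
            Configuration next = planner.plan(snapshot, current);        // runtime planning/evaluation
            if (!next.equals(current)) {                                 // adapt only when the plan changed
                reconfigurator.apply(next);                              // flow #3
                current = next;
            }
            try { Thread.sleep(1_000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }
}
```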

3.1.2 Supervised adaptation

In these approaches a runtime platform interfaces the application and the context. It then allows access to the distant context. The application senses the context (flow #1) only by means of the middleware of the platform. The application can modify the context and the platform itself (flow #2). Both the application and the platform adapt themselves to the context (flow #3). This kind of organization is shown in Figure 9.

Figure 9 : Adaptable Context Aware Application with platform (the platform sits between application and context; data flow #1 = consultation, data flow #2 = modification, data flow #3 = adaptation).

Recent works, such as Rainbow, use closed-loop control based on external models and mechanisms to monitor and adapt system behavior at run time in order to achieve various goals [32]; such a solution is close to the use of pervasive supervision. In order to implement such a solution, we need a distributed platform on all heterogeneous hosts. Such an architecture allows capturing the local context and proposing local adaptations. Additionally, communication between local platforms gives a global vision of the context, permitting a global measure of the context and adapted reactions. Each platform has three main tasks to accomplish (a minimal interface sketch of these tasks is given after the list):

− Capture of the context. This task implements the tools to capture the information of layer 1 (see Figure 3).

− Context Management Service. Its role is to manage and evaluate the information from layer 1 in order to determine whether adaptation is required.

− Context Management Tools. It proposes a set of mechanisms to adapt the application in response to variations of the context.
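As announced above, the three tasks can be sketched as three cooperating interfaces; the names and signatures are hypothetical and simply mirror the layers of Figure 3.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical decomposition of a platform instance into its three tasks.
interface ContextCapture {                       // task 1: capture of the context (layer 1)
    List<ContextSample> sample();
}
interface ContextManagementService {             // task 2: decide whether adaptation is required (layer 2)
    Optional<AdaptationRequest> evaluate(List<ContextSample> samples);
}
interface ContextManagementTools {               // task 3: mechanisms that perform the adaptation (layer 3)
    void adapt(AdaptationRequest request);
}

record ContextSample(String source, String key, Object value) {}
record AdaptationRequest(String reason) {}
```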

The means used to realize data flows #1 and #2 of Figure 9 depend on the type of context (Table 2). Interactions with the local context use the mechanisms described in Table 1, whereas those with the distant context use services of the platform. The middleware of the platform offers services for context capture, providing contextual information completed by time and localisation parameters as described in Figure 4.

Flow #1 (consultation), local context: Hardware = system and network primitives; User = interfaces; Environment = sensors.
Flow #1 (consultation), distant context: Hardware, User and Environment = services of the platform.
Flow #2 (modification), local context: Hardware = resource allocation; User = interfaces; Environment = actuators.
Flow #2 (modification), distant context: Hardware, User and Environment = services of the platform.
Table 2 : Interactions between Application and Context with a Platform


The role of the platform in this kind of organisation becomes central. We will now define more precisely the role and the architecture of a platform.

3.2 The platforms

Generally, we consider a platform as a set of elements of virtualization (Figure 10) allowing application designers to have a runtime environment independent of the hardware and network infrastructures, supporting distribution and offering non functional general services (persistence, security, transactions …) or services specific to a domain (business, medical …).

Figure 10 : Elements of virtualization in a platform (context, application, framework, container, middleware offering services, distribution and heterogeneity management).

The container virtualizes the application or its components to make them suitable and compatible (interface-wise) with the platform. The framework completes this task by allowing the designer to respect the corresponding model. The middleware virtualizes communications and offers services called by the application in order to access the context. Finally, heterogeneity management consists in virtualizing the hardware and the operating systems on which the application runs. Interactions between platform and application are bidirectional and represent the core aspect of the whole system (platform/application). The platform has its own state, which evolves when modifications occur in the underlying level (context) and in the application. Consequently, the platform can trigger updates of the application state. The interaction mode between application and platform can be achieved by:
- service
- container
In the first case, the changes of the state of the application known to the platform are those inserted into the application itself by service, API or middleware calls (Figure 11, left), while in the second case the containers of the business components send information about their evolution to the platform (Figure 11, right). These containers can themselves offer some services to the business components, or capture information about their changes of state by observing their behavior.

Figure 11 : Modes of interaction between Application and Platform (left: interaction by services, where state changes go through the middleware; right: interaction by containers, which report state changes of the business components to the platform).

The interaction mode between platform and application allows distinguishing two families (Figure 12):
- non intrusive platforms;
- intrusive platforms.
A non intrusive platform acts on external elements of the application, like data, or uses an event-based mechanism: it raises events when an internal state change occurs, and these events can be caught by specific components of the application (event listeners). These modifications of external elements and these events imply the changes of the application state. An intrusive platform can directly change the state of the application without the participation of the application. This can be achieved by a direct action on the functional part, either by modifying the circulating information or by directly modifying the architecture of the application itself. The use of objects and components greatly facilitates this task.

Figure 12 : Modes of interaction between platform and application (non intrusive platform: state changes caused through events caught by a listener; intrusive platform: state changes caused directly by the platform).


3.3 Architecture of context aware adaptable applications

An overall schema of the architecture of an adaptable context aware application is presented in Figure 13. Relationships between platform and application are materialized by four flows:

Figure 13: Information flows between application and platform [1] (data flow A = requirements for resources, data flow B = control of the platform, data flow C = information from the platform, data flow D = control of the application).

This overall schema can be completed by adding the flows of interactions with the context as presented in Figure 9. We then obtain the general architecture shown in Figure 14 :

Figure 14 : Interactions between application, platform and context (data flow A = access to services of the middleware, some of which give access to the context; data flow B = control of the platform by the application; data flow C = information for the non intrusive mode; data flow D = information for the intrusive mode; data flow #1 = consultation of the context; data flow #2 = modification of the context; data flow #3 = adaptation to the context).

Interactions between application and platform can be described as follows:

− Data Flow A corresponds to information sent from the application to the platform through the use of the middleware services.

− Data Flow B represents the possibility for the application to configure the behavior of the platform (event priorities, filtering of contextual information, etc.).

− Data Flow C corresponds to the non intrusive mode of interaction between platform and application. It deals with events produced by the platform for the listeners inside the application.

− Data Flow D represents the intrusive mode of interaction between platform and application. It deals with updates of the application by the platform (modification of the architecture by adding/suppressing/moving components or by changing their business part).

Now, let us have a look at the different types of context aware applications that can be built according to the data flows actually used. Firstly, it is important to notice that for context aware applications data flow A is essential. In order to be adaptable, at least flow C or flow D needs to be provided; if not, the platform is the only one able to adapt itself. The optional data flow B represents the possibility for the application to configure the interaction modes corresponding to flows A, C and D. Table 3 presents the four models of adaptation that can be realized according to the flows used:

1. Flows used: A. Type of interaction: the platform is a middleware (services for accessing the local and distant context). Consequence: only the platform is able to adapt itself to the context.

2. Flows used: A and C. Type of interaction: the platform is a middleware (services for accessing the local and distant context) and offers an adaptation service. Consequence: adaptation is decided by the application according to information sent by the platform.

3. Flows used: A and D. Type of interaction: the platform is a middleware (services for accessing the local and distant context) and supervises the adaptation. Consequence: adaptation is fully supervised.

4. Flows used: A, C and D. Type of interaction: the platform is a middleware (services for accessing the local and distant context) and offers an adaptation service. Consequence: adaptation is partially supervised and partially decided by the application.

Table 3 : Possible models of adaptation according to the flows used

Data flow B allows enriching the interaction types presented in Table 3 above:
− In the first case, the application can only configure the context access services provided by the platform.
− In the second case, the application can also choose the events which are signalled to it and their priority.
− In the third case, the application can configure the level of intrusion of the platform and possibly protect itself from it at some moments.
− The fourth case is the union of the two previous ones.
According to the taxonomy proposed in [23], middleware like Aura [6] [7] [8] [9], CARMEN [10], CORTEX [11] [12] and CARISMA [13] [14] [15] belong to the first category, while Cooltown [16] [17], GAIA [18] [19] and MiddleWhere [20] belong to the second category. SOCAM [21] and Cadecomp [24] belong to the third category, while MADAM [25] and Mobipads [22] belong to the fourth.


Figure 15: General schema of adaptation with a platform (the platform comprises services, adaptation and context capture parts; flows A, B, C and D with the application, flows #1 and #2 with the context).

We can then draw a general schema of an adaptable context aware application (Figure 15). The platform is distributed on every device hosting components of the application. It can thus access all contextual information. It offers a set of services allowing the application to access the local or distant context (data flow A). Moreover, it includes an adaptation manager sending events (data flow C) and a manager supervising the application (data flow D). The execution of this supervision manager can be configured by the application (data flow B).

3.4 Functional model of adaptation

The execution of an adaptable context aware application looks like a looped system: the context modifies the application, the execution of the application modifies the context, and so on. When a platform is introduced between the context and the application, a new loop appears, because the platform itself is modified by the context and, reciprocally, the platform modifies the context. Depending on whether an intrusive or a non intrusive platform model is used, these loops are achieved by different data flows.

Figure 16 : Non intrusive adaptation model (platform with services, adaptation and context capture parts; flows B and C with the application, flows #1 and #2 with the context).

− Case 1: Adaptation controlled by the application (non intrusive model): The context is captured by the platform (data flow #1), which signals its modifications to the application (data flow C). The application adapts itself, using or not the services of the platform (data flow B). The activity of the application and of the platform modifies the context (data flow #2).

Figure 17 : Intrusive adaptation model (platform with services, adaptation and context capture parts; flows B and D with the application, flows #1 and #2 with the context).

− Case 2: Adaptation monitored by the platform (intrusive model): The context is captured by the platform (data flow #1), which modifies the application (data flow D). This mechanism can be monitored by the application (data flow B). The activity of the application and of the platform modifies the context (data flow #2). A minimal code-level contrast of the two models is sketched below.
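Sketched in code, the contrast between the two models reduces to who holds the initiative: in the non-intrusive model the platform only raises events that application-side listeners may react to (flow C), while in the intrusive model the platform reconfigures the application directly (flow D). The interfaces below are our own illustration, not an existing API.

```java
// Non-intrusive model: the platform notifies listeners registered by the
// application (flow C); the application decides whether and how to adapt.
interface ContextEventListener {
    void onContextChanged(String property, Object newValue);
}

// Intrusive model: the platform acts on the application's architecture
// itself (flow D), without the participation of the application.
interface SupervisedApplication {
    void replaceComponent(String componentId, String newImplementation);
    void rerouteFlow(String connectorId, String newTargetComponent);
}
```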

3.5 General architecture of a platform for adaptable context aware applications

The platform is composed of three main parts:
1. The capture of the context is done by the usual mechanisms described in Table 1: system and network primitives, the information system and sensors. Moreover, the platform also receives information about the application's running context from the containers of the business components (Figure 10).
2. The services concern both the application and the platform itself (more precisely the part in charge of the adaptation). For the application they correspond to:
• services for accessing the context (hardware, user, environment) with filtering possibilities (time, localisation);
• other usual services (persistence, …).
For the adaptation they mean:
• services for accessing the context;
• services for Quality of Service measurement;
• services for reflexivity, that is to say the knowledge that the system constituted by the platform and the application has of itself.

3. The adaptation matches the general schema of adaptation proposed in [3] which distinguishes two parts:

• The evolution manager which implements the mechanisms of the adaptation;

• The adaptation manager which monitors and evaluates the application.


Figure 18 : General schema of adaptation [3] (evolution management: collect observations, maintain coherency between the architectural model and the implementation, apply new deployments; adaptation management: evaluate and monitor the observations, plan changes).

The evolution manager monitors the application and its environment. Its architectural model selects an implementation maintaining the coherency of the application. The essential role of this manager is to check that the deployment of the application is "causally connected" to the system [5]. Such a model integrates reflexivity as defined in [4], but limited to the architecture of the application and therefore protecting the encapsulation of the business components. The adaptation manager receives the observations measured by the evolution manager. It evaluates them in order to select an adaptation and to find a new deployment of the components of the application (Figure 18).
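Following the schema of [3], the two managers can be separated into two interfaces, sketched here under our own naming; the actual mechanisms behind each method obviously depend on the platform.

```java
import java.util.List;

// Split suggested by the adaptation schema of [3]: the evolution manager
// observes the running system and applies new deployments while keeping the
// architectural model coherent; the adaptation manager evaluates the
// observations and plans the changes.
interface EvolutionManager {
    List<Observation> collectObservations();
    void applyDeployment(Deployment next);   // maintains coherency with the architectural model
}
interface AdaptationManager {
    Deployment planChanges(List<Observation> observations, Deployment current);
}

record Observation(String metric, double value) {}
record Deployment(String description) {}
```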

4. Kalimucho platform and implementation tools

The architecture of the application has to be virtualized in order to be monitored by the platform. The general architecture of the Kalimucho platform is the following:

Figure 19: Kalimucho's General Architecture (Kalimucho platform instances on hosts 1, 2 and 3, each hosting Osagaia components connected by Korrontea data flow connectors; intra-platform communications carry commands and states).

It is based on a distributed service-based platform implementing non-functional services for adaptation (layer 2 - Figure 3). The functional part is implemented with software and hardware components running inside the generic Osagaia container. Communication between components uses the generic framework called Korrontea. This framework is a first-class component connector able to implement various communication policies.

4.1 Kalimucho architecture

We propose to build the architecture of adaptable context aware applications on a distributed platform called Kalimucho. The application is made of business components (BC) interconnected by information flows. To directly modify the architecture of the application, the platform must be able to add/remove/move/connect/disconnect the components. Moreover, the platform has to capture the context on every site. We created a container for information data flows named Korrontea and another for business components named Osagaia [26]. These containers collect local contextual information from business components and connectors and send it to the platform. In return, they receive supervision commands from the platform. Interactions between the platform and the application are implemented with the flows shown in Figure 20. We can notice that, because Korrontea containers have a non-functional role in the application (information transportation), they do not accept data flow C and are not event listeners. On the other hand, some BCs can react to context events sent by the platform towards the Osagaia containers.
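The reconfiguration capabilities listed above can be summarized as a supervision command interface; this is an illustration of the needed operations, not the actual Kalimucho API.

```java
// Illustrative supervision commands a Kalimucho-like platform needs in order
// to modify the architecture of a running application. Not the real API.
interface SupervisionCommands {
    void addComponent(String componentId, String hostId);
    void removeComponent(String componentId);
    void moveComponent(String componentId, String targetHostId);
    void connect(String producerId, String consumerId, String connectorId);
    void disconnect(String connectorId);
}
```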

Figure 20 : Interactions between application and platform in Kalimucho (on each site, Osagaia containers hosting BCs exchange flows A, C and D with the platform, Korrontea connectors exchange flows A and D, and the platforms communicate with each other).

Our work deals with various devices such as sensors (CLDC compliant), PDAs and SmartPhones (CDC compliant) and traditional PCs. Such a heterogeneous environment implies several variations of the services devoted to the platform. The capture of the context is done by the component containers (Osagaia) and the flow containers (Korrontea). Depending on the host running the component, it will capture user, environment, hardware, temporal or geographic information (see layer 1 - Figure 3). The second layer (context management services) is implemented by a heuristic that evaluates the current Quality of Service (QoS) and proposes adaptations if needed and if possible. The last layer (context management tools) gives solutions to perform the adaptations (add/remove/move/connect/disconnect components).


The platform is distributed on every machine on which components of the application are deployed (desktops, mobile devices and sensors). The different parts of the platform communicate through the network. Communications between BCs (local or distant) are achieved by data flows encapsulated into Korrontea containers. Various versions of the platform are implemented on the different hosts according to their physical capacities. On a desktop all the parts of the platform are implemented whereas, on a mobile device, and particularly on a wireless sensor, light versions are proposed (one for CDC and one for CLDC compliant hosts). Consequently, only the services that are indispensable for the host are deployed (for example a persistence service is useless on a sensor). In the same way, the adaptation manager implemented on a mobile device can be lightened by using internal services of one of the neighbouring platforms (for example, only local routing information is available on a limited device). If the platform of this device needs to find other routes in order to set up a new connection, it has to use services of the platforms implemented on neighbouring desktops.
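The deployment policy just described, a full platform on desktops and lighter versions on CDC and CLDC devices, can be sketched as a simple capability-to-profile mapping; the enum values and the rule below are assumptions made for illustration only.

```java
// Illustrative selection of a platform profile according to host capability:
// a full version on desktops, light versions on CDC and CLDC compliant hosts.
enum HostCapability { CLDC_SENSOR, CDC_MOBILE, DESKTOP }

enum PlatformProfile { LIGHT_CLDC, LIGHT_CDC, FULL }

final class ProfileSelector {
    /** Deploys only the services the host can support; e.g. persistence is skipped on sensors. */
    static PlatformProfile select(HostCapability host) {
        return switch (host) {
            case CLDC_SENSOR -> PlatformProfile.LIGHT_CLDC;  // no persistence, local routing only
            case CDC_MOBILE  -> PlatformProfile.LIGHT_CDC;   // lightened adaptation manager
            case DESKTOP     -> PlatformProfile.FULL;        // all platform services deployed
        };
    }
}
```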

4.2 Osagaia Software Component Model

Figure 21: Osagaia Conceptual Model (the container wraps a Business Component between an Input Unit reading the incoming flow and an Output Unit writing the outgoing flow, under a Control Unit that interacts with the platform through flows A, C and D).

Finally, we designed the software component model in order to ensure the implementation of distributed applications according to the specifications expressed by functional graphs [41]. Functional components are called business components since they implement the business functionalities of applications. These components need to be executed inside a container whose role is to provide the non-functional implementation for components. The architecture of this container, which we call Osagaia, is shown in Figure 21. Its role is to handle the interactions between business components and their environment. It is divided into two main parts: the exchange unit (composed of input and output units, see Figure 21) and the control unit. The exchange unit manages the input/output connections of data flows. The control unit manages the life cycle of the business component and the interactions with the runtime platform. Thus, the platform supervises the containers and, indirectly, the business components (a full description of the Osagaia software component model is available in [31]). Thanks to this container, business components read and write data flows managed by connectors called Korrontea (see Figure 22), whose main role is to connect the software components of the applications. The Korrontea container receives the data flows produced by components and transports them. It is made up of two parts: the control unit implements the interactions between the Korrontea container and the platform, while an exchange unit manages the input/output connections with the components. The connector is the distributed entity of our model, i.e. it can transfer data flows between different sites of distributed applications. The flow management is done according to the business part of the connector, which implements both the communication mode (client/server for example) and the communication policy (with or without synchronization, loss of data, etc.). A full description of the Korrontea component model is available in [28].
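A structural sketch of the two containers just described is given below; it mirrors the units of Figures 21 and 22 but is our own simplification, not the published Osagaia/Korrontea implementation.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Structural sketch of the Osagaia container and the Korrontea connector.
// Only the units named in the text are modelled; methods are illustrative.
interface BusinessComponent {
    void start();
    void stop();
    Object process(Object input);
}

final class OsagaiaContainer {
    private final BusinessComponent bc;
    OsagaiaContainer(BusinessComponent bc) { this.bc = bc; }

    // Exchange unit: reads the incoming flow, lets the BC process it, writes the result.
    Object exchange(Object incoming) { return bc.process(incoming); }

    // Control unit: manages the BC life cycle on behalf of the platform (flows A, C, D).
    void startComponent() { bc.start(); }
    void stopComponent()  { bc.stop(); }
}

final class KorronteaConnector {
    private final Queue<Object> flow = new ConcurrentLinkedQueue<>();

    // Exchange unit: input/output connections with the containers.
    void provideSlice(Object data) { flow.offer(data); }
    Object getSlice()              { return flow.poll(); }

    // Control unit hook: the platform can flush the flow during a reconfiguration (flow D).
    void flush() { flow.clear(); }
}
```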

Figure 22: Korrontea Conceptual Model (the connector's Input Unit gets slices of the flow, a client/server process transports them, and the Output Unit provides slices to the consumer, under a Control Unit interacting with the platform through flows A and D).

5. Conclusion

In this paper, we presented an overview of adaptable applications. Because such applications need knowledge of their environment, we defined the context and presented it according to application uses. Next, we presented adaptation management policies and their possible implementations. This was followed by a presentation of implementation tools able to provide adaptations. We finished with a description of the Kalimucho platform and of the software component and connector container models used to perform adaptations. Implementing context-aware adaptable applications with a platform helps having a global view of the application and of the context. The global view of the application permits optimal mobility and resource management. The global view of the context permits considering the whole context of the application instead of only the local one.


The system composed of the platform and the application makes up a reflexive context aware system. The problem of such an approach is its inherent complexity. Context aware platforms become more and more complex in order to manage a context that is more and more variable and evanescent. So, depending on the targeted application, it could be much more interesting to provide various lighter, specialized and reflexive platforms providing a view of their state. Moreover, such platforms can be combined with other light, specialized and reflexive ones. The influence of the environment on the system behavior leads to strongly coupling the execution platform and the application [38]. So design methods for applications and platforms must also be coupled to constitute a single design method. Instead of a single overall design step, we propose a life-cycle including both the application and the platform (which is also an application; this is recursive), ending with implementation tools (a platform-specific component model, a connector model and specific implementations). Such an approach lets us envisage large-scale development with automatic code generation.

6. Bibliography

[1] M. Weiser, "The computer for the 21st century", Scientific American, 1991, pp. 94–104.

[2] C. Efstratiou, K. Cheverst, N. Davies, and A. Friday, "An Architecture for the Effective Support of Adaptive Context-Aware Applications", in Proc. of the Second Int'l Conference on Mobile Data Management (MDM 2001).

[3] P. Oreizy, M. M. Gorlick, R. N. Taylor, D. Heimbigner, G. Johnson, N. Medvidovic, A. Quilici, D. S. Rosenblum, and A. L. Wolf, "An architecture-based approach to self-adaptive software", IEEE Intelligent Systems, vol. 14, no. 3, pp. 54-62, May/June 1999.

[4] P. Maes, "Concepts and experiments in computational reflection", in Proceedings of the conference on object-oriented systems, languages and applications (OOPSLA'87), Orlando, Florida, 1987, pp. 147-155.

[5] S. Krakowiak, "Introduction à l'intergiciel", in Intergiciel et construction d'applications réparties (ICAR), pp. 1-21, 19 Jan. 2007, Creative Commons license.

[6] D. Garlan, D. Siewiorek, A. Smailagic, and P. Steenkiste. Project Aura: Toward Distraction-Free Pervasive Computing. IEEE Pervasive computing, 1(2):22–31, April–June 2002.

[7] U. Hengartner and P. Steenkiste. Protecting access to people location information. In D. Hutter, G. Müller, W. Stephan, and M. Ullmann, editors, SPC, volume 2802 of LNCS, pages 25–38. Springer, 2003.

[8] G. Judd and P. Steenkiste. Providing contextual information to pervasive computing applications. In PERCOM ’03: Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, page 133,Washington, DC, USA, 2003. IEEE Computer Society.

[9] J. P. Sousa and D. Garlan. Aura: An architectural framework for user mobility in ubiquitous computing environments. In WICSA 3: Proceedings of the IFIP 17th World Computer Congress - TC2 Stream / 3rd IEEE/IFIP Conference on Software Architecture, pages 29–43, Deventer, The Netherlands, The Netherlands, 2002. Kluwer, B.V.

[10] P. Bellavista, A. Corradi, R. Montanari, and C. Stefanelli. Context-aware middleware for resource management in the wireless internet. IEEE Transactions on Software Engineering, 29(12):1086–1099, 2003.

[11] H. A. Duran-Limon, G. S. Blair, A. Friday, P. Grace, G. Samartzisdis, T. Sivahraran, and M. Wu. Context-aware middleware for pervasive and ad hoc environments, 2000.

[12] C.-F. Sørensen, M. Wu, T. Sivaharan, G. S. Blair, P. Okanda, A. Friday, and H. Duran-Limon. A context-aware middleware for applications in mobile ad hoc environments. In MPAC ’04: Proc. of the 2nd workshop on Middleware for pervasive and ad-hoc computing, pages 107–110, New York, NY, USA, 2004. ACM Press.

[13] L. Capra. Mobile computing middleware for context aware applications. In ICSE ’02: Proceedings of the 24th International Conference on Software Engineering, pages 723–724, New York, NY, USA, 2002. ACM Press.

[14] L. Capra, W. Emmerich, and C. Mascolo. Reflective middleware solutions for context-aware applications. Lecture Notes in Computer Science, 2192:126–133, 2001.

[15] L. Capra, W. Emmerich, and C. Mascolo. CARISMA: context-aware reflective middleware system for mobile applications. IEEE Transactions on Software Engineering, 29(10):929–945, 2003.

[16] J. Barton and T. Kindberg. The Cooltown user experience. Technical report, Hewlett Packard, February 2001.

[17] P. Debaty, P. Goddi, and A. Vorbau. Integrating the physical world with the web to enable context-enhanced services. Technical report, Hewlett-Packard, Sept. 2003.

[18] M. Roman, C. Hess, R. Cerqueira, A. Ranganathan, R. Campbell, and K. Nahrstedt. A middleware infrastructure for active spaces. IEEE Pervasive Computing, 1(4):74–83, 2002.

[19] M. Román, C. K. Hess, R. Cerqueira, A. Ranganathan, R. H. Campbell, and K. Nahrstedt. Gaia: A Middleware Infrastructure to Enable Active Spaces. IEEE Pervasive Computing, pages 74–83, Oct–Dec 2002.

[20] A. Ranganathan, J. Al-Muhtadi, S. Chetan, R. H. Campbell, and M. D. Mickunas. Middlewhere: A middleware for location awareness in ubiquitous computing applications. In H.-A. Jacobsen, editor, Middleware, volume 3231 of Lecture Notes in Computer Science, pages 397–416. Springer, 2004.

[21] T. Gu, H. K. Pung, and D. Q. Zhang. A middleware for building context-aware mobile services. In Proceedings of IEEE Vehicular Technology Conference, May 2004.

[22] A. Chan and S.-N. Chuang. MobiPADS: a reflective middleware for context-aware mobile computing. IEEE Transactions on Software Engineering, 29(12):1072–1085, 2003.

[23] Kristian Ellebæk Kjær. A survey of context-aware middleware. In Proceedings of the 25th IASTED International Multi-Conference: Software Engineering, Innsbruck, Austria, pages 148-155, 2007.

[24] Dhouha Ayed, Nabiha Belhanafi, Chantal Taconet, Guy Bernard. Deployment of Component-based Applications on Top of a Context-aware Middleware. - The IASTED International Conference on Software Engineering (SE 2005) - Innsbruck, Austria - February 15-17, 2005. http://picolibre.int-evry.fr/projects/cadecomp

[25] MADAM Consortium. MADAM middleware platform core and middleware services. Editor Alessandro Mamelli (Hewlett-Packard), deliverable D4.2, 30 March 2007. http://www.intermedia.uio.no/confluence/madam/Home

[26] C. Louberry, M. Dalmau, P. Roose – Architectures Logicielles pour des Applications Hétérogènes Distribuées et Reconfigurables – NOTERE’08 - 23-27/06/2008, Lyon.

[27] Robert Laddaga, Paul Robertson, Self Adaptive Software: A Position Paper, SELF-STAR: International Workshop on Self-* Properties in Complex Information Systems, 31 May - 2 June 2004

[28] Holger Schmidt, Franz J. Hauck: SAMProc: Middleware for Self-adaptive Mobile Processes in Heterogeneous Ubiquitous Environments. 4th Middleware Doctoral Symposium - MDS, co-located at the ACM/IFIP/USENIX 8th International Middleware Conference (Newport Beach, CA, USA, November 26, 2007).

[29] Baresi, L.; Baumgarten, M.; Mulvenna, M.; Nugent, C.; Curran, K.; Deussen, P.H. - Towards Pervasive Supervision for Autonomic Systems - Distributed Intelligent Systems: Collective Intelligence and Its Applications, 2006. DIS 2006. IEEE Workshop on Volume, Issue, 15-16 June 2006 Page(s):365 – 370.

[30] Emmanuel Bouix, Philippe Roose, Marc Dalmau - The Korrontea Data Modeling - Ambi-Sys 2008 - International Conference on Ambient Media and Systems - 11-14 February 2008, Quebec City, Canada.

[31] E. Bouix, M. Dalmau, P. Roose, F. Luthon. A Component - Model for transmission and processing of Synchronized Multimedia Data Flows. In Proceedings of the 1st IEEE International Conference on Distributed Frameworks for Multimedia Applications (France, February 6-9 2005).

[32] D. Garlan, J. Kramer, and A. Wolf, editors. Proceedings of the First ACM SIGSOFT Workshop on Self-Healing Systems (WOSS ’02). ACM Press, 2002.

[33] Roman, G.C., Picco, G.P., Murphy, A.L. – Software Engineering for Mobility: A Roadmap – ICSE 2000 – ACM Press, New York, USA, pp. 241-258, 2000.

[34] A.K. Dey G.D. Abowd – Towards a better understanding of context and context-awareness – CHI 2000 - Workshop on the What, Who, Where, When and How of Context-Awareness, The Hague, Netherlands, April 2000.

[35] Dey, A.K. and Abowd, G.D. ‘A conceptual framework and a toolkit for supporting rapid prototyping of context-aware applications’, HCI Journal, Vol. 16, Nos. 2–4, pp.7–166.

[36] T. Chaari, F. Laforest - L'adaptation dans les systèmes d'information sensibles au contexte d'utilisation : approche et modèles. Conférence Génie Electrique et Informatique (GEI), Sousse, Tunisia, March 2005, pp. 56-61.

[37] Matthias Baldauf, Schahram Dustdar, Florian Rosenberg - A survey on context-aware systems – Int'l Journal on Ad Hoc and Ubiquitous Computing, Vol. 2, No. 4, 2007.

[38] T.A. Henzinger and J. Sifakis. The Embedded Systems Design Challenge Invited Paper, FM 2006, pp. 1-15.

[39] Indulska, J. and Sutton, P. (2003) ‘Location management in pervasive systems’, CRPITS’03: Proceedings of the Australasian Information Security Workshop, pp.143–151.

[40] A. Ranganathan, J. Al-Muhtadi, S. Chetan, R. H. Campbell, and M. D. Mickunas. Middlewhere: A middleware for location awareness in ubiquitous computing applications. Vol. 3231 of LNCS, pages 397–416. Springer, 2004.

[41] Sophie Laplace, Marc Dalmau, Philippe Roose - Kalinahia: Considering quality of service to design and execute distributed multimedia applications - NOMS 2008 - IEEE/IFIP Int'l Conference on Network Management and Management Symposium - 7-11/04/2008 Salvador de Bahia, Brazil, 2008.

[42] Bill Schilit, Marvin Theimer - Disseminating Active Map Information to Mobile Hosts - IEEE Network, September, 1994

[43] Jason Pascoe, Nick Ryan, David Morse - Using while moving: HCI issues in fieldwork environments – ACM Transactions on Computer-Human Interaction (TOCHI), Vol. 7, Issue 3 (September 2000) - Special issue on human-computer interaction with mobile systems, 2000.

[44] K.E. Kjær - A Survey of Context-Aware Middleware - Software Engineering - SE 2007 - Innsbruck, Austria, 2007.

[45] Frédérique Laforest - De l'adaptation à la prise en compte du contexte – Une contribution aux systèmes d'information pervasifs – Habilitation à Diriger les Recherches, Université Claude Bernard Lyon 1, 2008.

[46] Daniel Cheung-Foo-Wo, Jean-Yves Tigli, Stéphane Lavirotte, Michel Riveill. “Contextual Adaptation for Ubiquitous Computing Systems using Components and Aspect of Assembly” in Proc. of the Applied Computing (IADIS), IADIS, Salamanca, Spain, 18-20 feb 2007

[47] Guanling Chen, David Kotz - A Survey of Context-Aware Mobile Computing Research - Dartmouth College Technical Report TR2000-381, November 2000.

[48] H. Lieberman and T. Selker - Out of context: Computer systems that adapt to, and learn from, context – IBM System Journal - Volume 39, Numbers 3 & 4, MIT Media Laboratory 2000.

[49] Pierre-Charles David, Thomas Ledoux - WildCAT: a generic framework for context-aware applications, Proceedings of the 3rd international workshop on Middleware for pervasive and ad-hoc computing, ACM International Conference Proceeding Series; Vol. 115

[50] A. Harter, A. Hopper – A distributed location system for the active office. IEEE Network, 8(1):62–70, 1994.

[51] Want, R., Schilit, B.N., Adams, N.I., Gold, R., Petersen, K., Goldberg, D., Ellis, J.R., Weiser, M. - An overview of the PARCTab ubiquitous computing environment. IEEE Personal Communications, 2(6):28–33, 1995.

Marc Dalmau is an IEEE member and Assistant Professor in the Department of Computer Science at the University of Pau, France. He is a member of the TCAP project. His research interests include wireless sensors, software architectures for distributed multimedia applications, software components, quality of service, dynamic reconfiguration, distributed software platforms, and information systems for multimedia applications.

Philippe Roose is an Assistant Professor in the Department of Computer Science at the University of Pau, France. He is responsible for the TCAP project (video flow transportation on sensor networks for on-demand supervision). His research interests include wireless sensors, software architectures for distributed multimedia applications, software components, quality of service, dynamic reconfiguration, COTS, distributed software platforms, and information systems for multimedia applications.

Sophie Laplace is a Doctor in the Department of Computer Science at the University of Pau, France. Her research interests include formal methodologies and Quality of Service design and evaluation. Her work mainly focuses on multimedia applications. She defended her PhD thesis (Software Architecture Design in order to Integrate QoS in Distributed Multimedia Applications) in 2006.


Embedded Sensor System for Early Pathology Detection in Building Construction

Santiago J. Barro Torres, Carlos J. Escudero Cascón

1 Department of Electronics and Systems, University of A Coruña A Coruña, 15071 Campus Elviña, Spain

[email protected]

2 Department of Electronics and Systems, University of A Coruña A Coruña, 15071 Campus Elviña, Spain

[email protected]

Abstract
Structural pathology detection is an important safety task in building construction, usually performed by an operator who manually looks for damage in the materials. This activity can be dangerous if the structure is hidden or difficult to reach. On the other hand, embedded devices and wireless sensor networks (WSN) are becoming popular and cheap, enabling the design of an alternative pathology detection system to monitor structures based on these technologies. This article introduces a ZigBee WSN system intended to be autonomous, easy to use and of low power consumption. Its functional parts are fully discussed with diagrams, as well as the protocol used to collect samples from sensor nodes. Finally, several tests focused on the range and power consumption of our prototype are presented, analysing whether the results obtained were as expected or not.
Key words: Wireless Sensor Network, WSN, Building Construction, ZigBee, IEEE 802.15.4, Arduino, XBee.

1. Introduction

Over the last few years, a growing interest in safety in the field of building construction has been observed. Timely knowledge of the distortions and movements undergone by structures makes it possible to assess their stress and, consequently, to improve workers' safety. The techniques traditionally used in the inspection of structures are very basic, mostly centered on having an operator watching out for damage present in the materials (fissures in the concrete, metal corrosion, etc.). Sometimes the structure is hidden or difficult to reach. Additionally, access to the structure can be dangerous, as is the case with bridges. All these problems complicate the examination process. Therefore, it is necessary to have alternative means of detecting pathologies [1, 2, 3].

Advances in microelectronics make it possible to design new systems for carrying out the technical inspection of works. Nowadays, it is possible to obtain embedded systems with a high degree of integration, processing and storage and with low consumption at an affordable price. On the other hand, sensor networks have evolved to the point where it is possible to have a series of sensors sharing information and cooperating to reach a common goal. Thus, this new generation of intelligent sensors is beginning to look like a suitable technology for pathology detection. The saving in maintenance costs in the near future would make it possible to recover the initial investment by avoiding the hiring of technical inspection services. The boom in wireless technologies has also reached the sphere of sensor networks, with technologies such as ZigBee [4, 5, 6], which let us interconnect a group of sensors in an ad hoc network without a predefined physical infrastructure or a centralized administration [7]. For that reason, this technology is very suitable for this application. In this article, the design and implementation of a pathology detection network based on embedded systems, sensors and the ZigBee technology is presented, satisfying the following requirements:

- Ease of use. The network must be able to configure itself, without human intervention, reducing maintenance costs.

- Fault tolerance. In case one of the intermediate nodes fails, the network looks for alternative routes, so as not to leave any node isolated.

- Scalability. The network should be as extensible as possible, so that sensor nodes can be placed in areas potentially far away from the building.

- Low consumption. Sometimes it will be difficult or impossible to power the nodes directly from the mains, so it is necessary to equip them with a battery that they will have to take full advantage of.

- Flexibility. The frequency with which samples are taken will change with time, which means that periodicity must be an independently configurable parameter in every single node. In addition, modification of the periodicity must be possible at any moment, even when the node is sleeping.

The system presented in this article is made up of a series of sensor nodes specialized in the detection of pathologies, distributed along the structure of a building. These nodes communicate wirelessly through a ZigBee mesh network, which makes expanding the network easier, as there is no fixed infrastructure; management also becomes easier. One of these nodes, formally called the coordinator, is in charge of gathering and storing the samples sent by the sensor nodes at a configurable periodicity, for their use in subsequent studies. Sensor nodes, in turn, are powered by a battery and can thus operate autonomously, at the cost of a power-saving design. The article is structured as follows: the second section describes the problem and the technical solution adopted; the third section presents the logical design of the system, showing its different parts and explaining how they operate; the fourth section deals with the implementation, including commented photos of the prototype built; the fifth section shows the results of the tests performed; finally, the sixth section is dedicated to the conclusions.

2. Problem Statement and Technology

The choice of sensors depends on the physical phenomenon to be measured. In this case, the most suitable sensors to perform pathology detection are strain gauges [8], potentiometric displacement sensors and temperature catheters [9, 10].

Strain gauges are used to measure the deformation level of materials such as concrete and steel. Their operation is based on the piezoresistive effect, which means that their internal resistance changes when they are deformed by an external force. As the voltage variations obtained are very small (less than 1 mV), it is necessary to add extra circuitry to condition the signal prior to reading its value: amplification, noise filtering, etc. [11]. Heating the gauge for several minutes before sampling is another important restriction. On the other hand, potentiometric displacement sensors are used to measure the movement suffered by the structure with respect to its anchors. An important difference between the two sensors mentioned is that, whilst strain gauges are not reusable, displacement sensors are, because gauges are attached to or embedded within the structure. Another kind of sensor to take into account is the temperature catheter, which allows the concrete to be monitored in its first stages after the building construction has been completed.

ZigBee is a specification for a suite of high level communication protocols using small, low-power digital radios based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs). In turn, IEEE 802.15.4 specifies the physical layer and media access control for low-rate wireless personal area networks (LR-WPANs) [12].

The purpose of ZigBee is to provide advanced mesh networking functions not covered by IEEE 802.15.4. ZigBee operates in the industrial, scientific and medical (ISM) radio bands; 868 MHz in Europe, 915 MHz in the USA and Australia, and 2.4 GHz in most jurisdictions worldwide [4]. This technology is intended to be simpler, cheaper and to have lower power consumption than other WPANs such as Bluetooth. In a ZigBee network there are three different types of nodes:

- ZigBee Coordinator: The coordinator forms the root of the network tree and might bridge to other networks.

- ZigBee End Device: Contains just enough functionality to talk to the parent node (either the coordinator or a router), and it cannot relay data from other devices. This relationship allows the node to be asleep a significant amount of the time thereby giving long battery life.

- ZigBee Router: Acts as an intermediate router, passing on data from other devices. Moreover, it might support the same functions as a ZigBee End Device.

Nowadays, ZigBee-based devices are easy to find, as many semiconductor and integrated circuit manufacturers have opted for this new technology:

- Digi International, a leader in Connectware solutions, offers development kits for its XBee® & XBee-PRO® ZB RF module [13].

- Rabbit, a company specialized in 8-bit microcontrollers, has developed the iDigi™ BL4S100 architecture, which consists of an XBee-PRO® ZB module with a Rabbit® 4000 microcontroller, capable of acting as an intelligent controller or a ZigBee-Ethernet gateway [14].

- Ember, a monitoring and wireless sensor network provider, offers the InSight Development Kit [15], which includes everything needed to create embedded applications on its EM250/EM260 radios.

- Crossbow, one of the leading wireless sensor manufacturers, features several development kits that provide complete solutions in the development of such networks, including the Professional Kit [16].

- Sun Microsystems offers their Java-based Sun SPOT [17], formed by a processor, battery and a set of sensors.

3. Design

This section shows the architecture and models describing the presented system.

3.1 System Architecture

Fig. 1 System Architecture

The system comprises a set of nodes interconnected in a ZigBee network, as shown in Figure 1. This structure provides the mobility to place end devices anywhere in the building, as well as the connectivity needed to collect samples:

- The Coordinator node works as a gateway between the sensor network and the main station computer, where the Coordinator Application Software runs and provides the user with an interface to manage the system.

- Router Nodes extend the network coverage throughout the whole building. They need to be placed where they can be powered without interruption.

- End Devices sample their sensors at regular intervals and then send the values obtained to the coordinator. When not in use, the end device enters a low power consumption mode, called 'sleep mode', helping to increase battery life.

The Coordinator and End Devices collaborate with each other, following the protocol explained below. This protocol is important to ensure that samples are collected properly.

3.2 Communications Model

Fig. 2 Sample Collecting Protocol Sequence Chart

The Coordinator and End Devices interact using a special state message-based protocol (see Figure 2). The protocol starts when one of the End Devices of the network decides to wake up, notifying the Coordinator of this situation with the message "I am awake" (1). This makes sample periodicity management easier, since each End Device knows when it is time to take the next sample. Once the protocol has been started, the Coordinator is completely free to send the End Device any remote command requests (2), for example heating the gauge, as this is one of the requirements of this type of sensor. When the gauge is ready, the Coordinator can ask the End Device for as many samples as necessary. This approach gives us great flexibility to adapt the protocol to the application needs, with minimal changes to the system design. Finally, the Coordinator asks the End Device to sleep immediately (3). This event marks the end of the protocol, which will be executed again when the End Device decides to send the next sample, according to the sampling periodicity settings.


The protocol requires processing capacity both in the Coordinator and in the End Device, as illustrated in the next subsection.
Coordinator Flow Chart (Figure 3): The Coordinator waits for the message "I am awake", which is sent by one of the End Devices, which owns the periodicity control. When this message arrives, the Coordinator asks the End Device to heat its strain gauge. After a while, the Coordinator asks the End Device to send one sample, which is stored in the Coordinator's database when it is received. Finally, the Coordinator puts the End Device to sleep.

Fig. 3 Coordinator Flow Chart

End Device Flow Chart (Figure 4): The End Device has its own internal timer to know when to wake up, according to the sampling frequency settings. When it is time to wake up, the End Device notifies the Coordinator with the message "I am awake" and enters an idle state, waiting for remote request commands coming from the Coordinator. According to the example shown, there are three possible commands: Heat Gauge, Sample Gauge and Sleep. Whenever a remote command is received, the message type is checked and the appropriate action is executed. The protocol ends when the Coordinator sends a Sleep End Device Request.

Fig. 4 End Device Flow Chart
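As a concrete illustration of this flow, a minimal Arduino-style sketch of the End Device loop is shown below. It is a simplified sketch under assumptions: the XBee is treated as if it were in transparent mode (so protocol messages appear as single bytes on the serial port), the message values are invented for illustration, and the gauge helpers are placeholders for the prototype's real conditioning-circuitry commands.

// Minimal, illustrative End Device firmware loop (Arduino-style C++). Message
// values and helper bodies are assumptions; the real firmware differs in
// framing and I/O.

const byte MSG_I_AM_AWAKE   = 0x01;  // End Device -> Coordinator (step 1)
const byte CMD_HEAT_GAUGE   = 0x10;  // Coordinator -> End Device (step 2)
const byte CMD_SAMPLE_GAUGE = 0x11;  // Coordinator -> End Device (step 2)
const byte CMD_SLEEP        = 0x12;  // Coordinator -> End Device (step 3)

void heatGauge()   { /* drive the conditioning circuitry to heat the strain gauge */ }
byte sampleGauge() { return (byte)(analogRead(A0) >> 2); }  // placeholder reading
void goToSleep()   { /* put the microcontroller into its power-down mode */ }

void setup() {
  Serial.begin(9600);                // serial link to the XBee module
}

void loop() {
  // The XBee "alarm clock" has just woken the microcontroller via the interrupt line.
  Serial.write(MSG_I_AM_AWAKE);      // step 1: notify the Coordinator

  bool sleepRequested = false;
  while (!sleepRequested) {          // idle state: wait for remote commands
    if (Serial.available() > 0) {
      byte cmd = Serial.read();
      if      (cmd == CMD_HEAT_GAUGE)   heatGauge();                 // step 2
      else if (cmd == CMD_SAMPLE_GAUGE) Serial.write(sampleGauge()); // step 2
      else if (cmd == CMD_SLEEP)        sleepRequested = true;       // step 3
    }
  }
  goToSleep();                       // protocol ends until the next wake-up
}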

3.3 Network Model

The network model is composed of three different submodels, in accordance with their responsibilities, as mentioned earlier: the Coordinator Model, the Router Model and the End Device Model.
Coordinator Model: The coordinator is responsible for collecting samples from the end devices, besides network management. There is only one in the entire network, and it consists of the following elements:

- User Interface. Some of the operations performed on the system require user interaction, hence the need for a data entry interface.

- XBee-API Communications Library. It is an object model library specially designed to talk to XBee ZB modules in API mode.

- Database. It is used to store samples persistently, so that they can be analyzed or queried later.

- Coordinator Controller. Here lies the Coordinator core, where the protocol operations are performed.


Router Model: Routers are useful to extend the network coverage, enabling communication between the Coordinator and End Devices that would otherwise not be possible, or would be very difficult to establish, because of distance or the presence of obstacles (walls, floors, ceilings, structures, etc.). Although their presence in the network is optional, routers are usually distributed among several strategic points to extend coverage effectively.
End Device Model: Figure 5 shows the functional elements composing an End Device:

- Control Unit. Here lies the End Device core, where the protocol operations are performed. Besides, it manages all other components and performs actions such as: Sampling sensors, sending and receiving ZigBee messages, setting the alarm clock, etc.

- Alarm Clock. Once configured, it is able to wake up the whole End Device circuitry. The sampling frequency is set here. It must always be powered.

- ZigBee End Device Module. Enables remote communication with the Coordinator through ZigBee technology. It must always be powered, as it may receive data at any time, even when the End Device is asleep.

- Conditioning Circuitry. It is responsible for adapting the signal obtained in the strain gauge (amplifying, filtering, ADC-converting, etc.) so the control unit can read its value.

Fig. 5 End Device Functional Model

4. Implementation

4.1 Prototype Description

The hardware platform selected was SquidBee [18], an open mote architecture formed by an Arduino Duemilanove board [19], a Digi International XBee® ZB [20] and a set of basic sensors (temperature [21], light and humidity [22] sensors), distributed by Libelium [23]. The main advantage of this platform is its great flexibility, as it allows us to build any node (Coordinator, Router or End Device) with very few hardware and firmware changes to its basic architecture [24]. The End Device's Control Unit has been implemented on the Arduino's Atmel ATmega168 microcontroller [25]. On the other hand, the XBee® ZB integrates the "alarm clock", since it is able to wake up at regular, customizable intervals (formally, cyclic sleep [20]). Therefore, remote configuration is much easier, as XBee provides simple mechanisms to change remote variables from the Coordinator. Finally, the Coordinator Application is Java-based desktop software running on a computer with the Coordinator connected to one of its USB ports. The XBee-API library [26] has been used to implement this application.

4.2 Coordinator Implementation

Here is the component list for the Coordinator, as shown in Figure 6:

- Rigid case. Protects the components and the circuitry.

- USB cable. The connection between the Coordinator and the Computer is established by a USB connector. The USB also powers this Node.

- XBee® ZB RF Module. Provides the node with its ZigBee connection. It must be set up with the Coordinator firmware.

- 2.4 GHz Antenna. An external antenna connected to the XBee, using a proprietary U.FL connector from Hirose Electronic Group [27].

- Arduino Board (without microcontroller). Since the Coordinator Application is running on the computer, the microcontroller is not needed anymore.

- XBee Shield. Enables ZigBee connection to the Arduino Board. USB connection must be selected [28].


Fig. 6 Coordinator components

4.3 Router Hardware Implementation

In this case, the component list remains almost the same. Therefore, only different and new components are highlighted:

- USB charger. Unlike the Coordinator, Routers are not connected to a computer, so a USB charger is needed to plug the node into the power supply.

- XBee® ZB RF Module. Although physically identical, it requires a different firmware: the Router firmware must be set up in the module [29].

The final assembled node is shown in Figure 7.

Fig. 7 Assembled Router

4.4 End Device Hardware Implementation

Again, only different and new components are highlighted. The components are shown in Figure 8:

- Rechargeable Lithium Battery. Since the End Device needs full autonomy, it must be powered by a battery (1100 mAh in our case).

- Arduino Board with Atmel ATmega168. The Arduino board must have its microcontroller loaded with our protocol implementation.

- XBee Shield. Its configuration is slightly different from the others. First, an extra connection is needed to wake the microcontroller from the XBee [29]. Second, the XBee connection must be selected (note that the USB connection was the one selected previously) [28].

- XBee® ZB RF Module. End Device firmware must be set up [24].

- Data Acquisition Board. It is the special circuitry shown in Figure 9, which is equipped with several signal conditioning components (Analog to Digital Converters or ADCs, among others) used to connect strain gauges, potentiometric displacement sensors, temperature catheters and so on. Communication between the End Device and the Data Acquisition Board is performed through a Serial Port Connection, using a specific set of commands.

Fig. 8 End Device components

Fig. 9 Data Acquisition Board

As we will see in the Testing Scenario, we take temperature samples from the Data Acquisition Board, which has one temperature sensor [21] attached to one of its multiple input channels. The communication between the End Device and the Data Acquisition Board is performed through an RS-232 (serial) connection. Since there are no more serial connectors available on the Arduino board (apart from the one establishing the communication with the ZigBee network), two virtual serial port connectors were created, using special circuitry (shown in Figure 10) and an Arduino library called SoftwareSerial [30]. One of those ports establishes the communication, and the other one enables the log output, which is helpful when performing debugging tasks.

Fig. 10 Fully assembled End Device
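For reference, creating such virtual ports with the SoftwareSerial library is straightforward; the sketch below is illustrative only, and the pin numbers, baud rates and the "READ CH1" command are assumptions rather than the prototype's actual wiring and command set.

// Illustrative creation of two virtual serial ports with SoftwareSerial.
// Pin numbers, baud rates and the command string are assumed values.
#include <SoftwareSerial.h>

SoftwareSerial daqSerial(2, 3);   // RX, TX: port to the Data Acquisition Board
SoftwareSerial logSerial(4, 5);   // RX, TX: port for debug log output (transmit only)

void setup() {
  daqSerial.begin(9600);
  logSerial.begin(9600);
  daqSerial.listen();             // only one software port can receive at a time
}

void loop() {
  daqSerial.println("READ CH1");  // hypothetical command to the acquisition board
  delay(100);                     // give the board time to reply
  if (daqSerial.available() > 0) {
    int value = daqSerial.parseInt();
    logSerial.print("Sample: ");  // mirror the reading on the log port
    logSerial.println(value);
  }
  delay(1000);
}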

This special circuitry converts Arduino TTL voltage levels into RS-232 levels, since an RS-232 connector is needed to talk to both the Data Acquisition Board and the log software. The electronic schematic, shown in Figure 11, is based on the ADM232L chip [31], which supports up to two serial ports.

Fig. 11 TTL-to-RS232 Circuitry Scheme

4.5 End Device Cyclic Sleep

XBee End Devices support cyclic sleep, allowing the module to wake periodically to check for RF data and to sleep when idle. Since changes in the sampling frequency must take effect immediately (or almost immediately), incoming messages are checked every 28 seconds; XBee does not impose any restriction on this value [20]. As the sampling frequency rarely changes, the XBee will receive no data most of the time. Consequently, there is no need to wake the external circuitry every time the XBee wakes, so it makes sense to distinguish between the XBee and external circuitry wake-up frequencies, $f_{XBee}$ and $f_{circuitry}$, respectively. Equation (1) shows the relation between both variables:

$f_{circuitry} = \frac{f_{XBee}}{N}, \quad N \in \mathbb{N}$    (1)

For example, consider an XBee module waking once every 28 seconds and waking an external sampling circuitry approximately once every 2 minutes through an interrupt line, which corresponds to $N = 4$ (an effective period of 112 seconds). The XBee must then be configured with the sleep parameters that produce this timing [20].

Figure 12 represents this example graphically. The external circuitry is woken every fourth time the XBee wakes, matching the specified timing, as the timelines show. This behavior is repeated cyclically, hence the name.

Fig. 12 Timeline Chart showing Wake Times
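A minimal helper for choosing $N$ from a desired sampling period could look as follows; it is a sketch under assumptions (the function name and the rounding rule are ours, not part of the prototype's firmware).

// Illustrative helper: choose the wake-up ratio N of Equation (1) from a desired
// sampling period, given the XBee check-in period used by the prototype (28 s).
const unsigned long XBEE_WAKE_PERIOD_S = 28;

unsigned long wakeRatio(unsigned long desiredPeriodS) {
  unsigned long n = (desiredPeriodS + XBEE_WAKE_PERIOD_S / 2) / XBEE_WAKE_PERIOD_S;
  return (n == 0) ? 1 : n;   // wake the circuitry at least once per XBee cycle
}

// Example: wakeRatio(120) returns 4, i.e. the external circuitry is woken every
// fourth XBee wake-up, giving an effective sampling period of 4 * 28 = 112 s.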

5. Testing

Several tests dealing with coverage and power consumption were carried out in order to evaluate the prototype's performance.

5.1 Coverage Test

This test consisted of measuring the mean RSSI (Received Signal Strength Indication) value obtained from the reception of 100 messages. Each measure is averaged over 5 repetitions of the same experiment, to counteract the signal fluctuations caused by indoor fading [32]. Both modules transmitted with a power of 3 dBm and boost mode enabled [20]. Also, the ZigBee nodes were configured to automatically select the channel having the least interference with other nearby networks [33]. The results obtained are shown in Tables 1 and 2.

Distance (free space)    Mean Attenuation (dB)
50 cm                    0.00
1 m                      8.16
2 m                      11.65
4 m                      19.91
8 m                      23.93
11 m                     29.61

Table 1: Signal Loss in Free Space

Obstacle                          Mean Attenuation (dB)
Window (Open Metallic Blinds)     1.04
Window (Closed Metallic Blinds)   3.95
Wall with Open Door               0.39
Wall with Closed Door             1.19
Brick Wall                        1.46
Between Floors                    13.08

Table 2: Signal Loss with Obstacles

Note that the total attenuation is the sum of the free-space loss and all obstacle losses [22].
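In other words, the received power can be estimated with a simple link budget; the notation below is ours, with the numerical example taken from the measurements in Tables 1 and 2:

$P_{rx}\,[\mathrm{dBm}] = P_{tx}\,[\mathrm{dBm}] - L_{free\,space}(d)\,[\mathrm{dB}] - \sum_{i} L_{obstacle,i}\,[\mathrm{dB}]$

For instance, at 11 m with two brick walls in the path, $3 - 29.61 - 2 \times 1.46 \approx -29.5\ \mathrm{dBm}$, which matches the value observed in the testing scenario of Section 5.3.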

5.2 Power Consumption Test

Prototype power consumption has been measured for each of the possible states using a multimeter:

Node State              Consumption (mA)
Sleeping                21.10
Awake (Idle)            69.80
Awake (Transmitting)    109.80

Table 3: Power Consumption Test

The consumption of a sleeping node is abnormally high, as shown in Table 3, due to a design fault in the Arduino board: this board always has the same consumption regardless of whether the microcontroller and the XBee are sleeping or not [34]. As a result, the authors are developing their own customized Arduino board in which this problem is solved. On the other hand, it is important to highlight the fact that data sending causes a consumption peak which, however high, lasts a very short time. The last measure, corresponding to the consumption when the node is awake and active, has been estimated considering the transmission and the consumption of every circuit component.

5.3 Testing Scenario

In this example, a network of three nodes, one of each type, is deployed. As shown in Figure 13, the End Device is placed in Classroom 1.1 and the Router in the Repository, whilst the Coordinator is in Laboratory 1.2.

Fig. 13 Node distribution in the proposed Scenario

A temperature sensor is attached to the End Device (see Figure 14), sending temperature measures twice per hour. This rate was set from the Coordinator.

Fig. 14 End Device with a Temperature Sensor in Classroom 1.1

Unlike the previous nodes, the Coordinator is placed in Laboratory 1.2 and is connected to a computer through a USB port, as shown in Figure 15. The Coordinator Software is running on this computer, allowing the user to set the sample collecting frequency, or even store the samples in a file.

Fig. 15 Coordinator connected to a Computer USB Port in Lab 1.2

This scenario has been tested in real time, using a simple network with three nodes: one Coordinator, one Router and one End Device. When the Router is off, the received signal power (see Path 1 in Figure 13) at the Coordinator is too low to establish a connection with the End Device. However, when the Router located between the Coordinator and the End Device is switched on, the received signal power increases (it would be around −29.53 dBm, that is, 3 dBm of transmit power, −29.61 dB of free-space loss and −2.92 dB of obstacle loss for two brick walls), enabling communication between both of them.

With this scenario it is also possible to estimate the End Device battery lifetime. Considering a 30-minute sleep cycle with 5 seconds spent in the active state, the average consumption calculated according to the values shown in Table 3 is 23.79 mA. Therefore, using a 1100 mAh battery, the total estimated lifetime is around 51 hours. Note that this consumption is too high, caused by the Arduino board design, as was said before. Therefore, the authors of this article are working on the design of a new Arduino-based board with very low consumption in the sleeping state (just a few µA). Thanks to this improved design, it is possible to extend battery lifetime to several months.
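The estimate follows from a duty-cycle weighted average of the per-state currents; as a sketch (the notation is ours, and the per-state currents come from Table 3 together with the additional circuit components mentioned above):

$\bar{I} = \frac{t_{active}\, I_{active} + t_{sleep}\, I_{sleep}}{t_{active} + t_{sleep}}, \qquad T_{lifetime} \approx \frac{C_{battery}}{\bar{I}}$

where $C_{battery}$ is the battery capacity, $t_{active} = 5$ s and $t_{sleep}$ completes the 30-minute cycle.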

6. Conclusions

This article has presented a construction pathology detection system, based on a wireless sensor network using the ZigBee technology, which enables continuous monitoring of the parameters of interest while meeting the requirements of low consumption, ease of maintenance and installation flexibility. Its functional parts were fully discussed with diagrams, including the protocol specifically designed to collect samples from sensor nodes, and several photos of the prototype built were shown. In addition, results showing the typical node coverage limits and node consumption have been obtained for several situations.

Acknowledgments

This work has been supported by: 07TIC019105PR (Xunta de Galicia, Spain), TSI-020301-2008-2 (Ministerio de Industria, Turismo y Comercio, Spain) and 08TIC014CT (Instituto Tecnológico de Galicia, Spain).

References
[1] D. Wall, Building Pathology: Principles and Practice, Wiley-Blackwell, 2007.
[2] S. Y. Harris, Building Pathology: Deterioration, Diagnostics and Intervention, John Wiley & Sons, 2001.
[3] L. Addleson, Building Failures: A Guide to Diagnosis, Remedy and Prevention, Butterworth-Heinemann Ltd, 1992.
[4] ZigBee Standards Organization, "ZigBee 2007 Specification Q4/2007": http://www.zigbee.org/Products/TechnicalDocumentsDownload/tabid/237/Default.aspx
[5] E. H. Callaway, Wireless Sensor Networks: Architectures and Protocols, Auerbach Publications, 2003.
[6] J. A. Gutierrez et al., Low-Rate Wireless Personal Area Networks: Enabling Wireless Sensors with IEEE 802.15.4, IEEE Press, 2003.
[7] F. L. Zucatto et al., "ZigBee for Building Control Wireless Sensor Networks", in Microwave and Optoelectronics Conference, 2007.
[8] W. M. Murry, W. R. Miller, The Bonded Electrical Resistance Strain Gage: An Introduction, Oxford University Press, 1992.
[9] J. S. Wilson, Sensor Technology Handbook, Elsevier Inc., 2005.
[10] J. Fraden, Handbook of Modern Sensors: Physics, Designs and Applications, Springer, 2004.
[11] R. Pallás-Areny, J. G. Webster, Sensors and Signal Conditioning, John Wiley & Sons, 2001.
[12] IEEE 802.15.4-2003, "IEEE Standard for Local and Metropolitan Area Networks: Specifications for Low-Rate Wireless Personal Area Networks", 2003.
[13] Digi XBee® & XBee-PRO® ZigBee® PRO RF Modules: http://www.digi.com/products/wireless/zigbee-mesh/xbee-zb-module.jsp
[14] Rabbit Application Development Kit for ZigBee Networks: http://www.rabbit.com/products/iDigi_bl4s100_add-on_kit/index.shtml


[15] Ember InSight Development Kit: http://www.ember.com/products_zigbee_development_tools_kits.html
[16] Crossbow WSN Professional Kit: http://www.xbow.com/Products/productdetails.aspx?sid=231
[17] Sun SPOT Development Kit: http://www.sunspotworld.com/
[18] Libelium SquidBee, Open Mote for Wireless Sensor Networks: http://ww.squidbee.org/
[19] Arduino Duemilanove Board, Open-Source Electronics Prototyping Platform: http://www.arduino.cc/en/Main/ArduinoBoardDuemilanove
[20] XBee®/XBee-PRO® ZB OEM RF Modules Manual, ver. 4/14/2008: http://ftp1.digi.com/support/documentation/90000976_a.pdf
[21] National Semiconductor LM35 (Precision Centigrade Temperature Sensor): http://www.national.com/mpf/LM/LM35.html
[22] 808H5V5 Humidity Transmitter: http://www.sensorelement.com/humidity/808H5V5.pdf
[23] Libelium: http://www.libelium.com/
[24] Digi International, "Upgrading RF Modem modules to the latest firmware using X-CTU": http://www.digi.com/support/kbase/kbaseresultdetl.jsp?id=2103
[25] Atmel ATmega168 Manual Datasheet: http://www.atmel.com/dyn/resources/prod_documents/doc2545.pdf
[26] Java API for Communicating with XBee® & XBee-PRO® RF Modules: http://code.google.com/p/xbee-api/
[27] Hirose Electronic Group, "Ultra Small Surface Mount Coaxial Connectors": http://www.hirose.co.jp/cataloge_hp/e32119372.pdf
[28] Libelium, "How to Create a Gateway Node": http://www.libelium.com/squidbee/index.php?title=How_to_create_a_gateway_node
[29] Wireless Sensor Network Research Group, "How to Save Energy in the WSN: Sleeping the motes": http://www.sensor-networks.org/index.php?page=0820520514
[30] Arduino – SoftwareSerial Library: http://arduino.cc/en/Reference/SoftwareSerial
[31] Analog Devices, 5 V Powered CMOS RS-232 Drivers/Receivers: http://www.analog.com/static/imported-files/data_sheets/ADM231L_232L_233L_234L_236L_237L_238L_239L_241L.pdf
[32] A. Goldsmith, Wireless Communications, Cambridge University Press, 2005.
[33] K. Shuaib et al., "Co-Existence of ZigBee and WLAN: A Performance Study", Wireless and Optical Communications Conference, 2007.
[34] Arduino Community Page, "Arduino Sleep Code": http://www.arduino.cc/playground/Learning/ArduinoSleepCode

Santiago J. Barro Torres. He obtained an MS in Computer Engineering in 2008 and an MS in Wireless Telecommunications in 2009, both from A Coruña University. His research interests are Digital Communications, Wireless Sensor Networks, Microcontroller Programming and RFID Systems.

Carlos J. Escudero Cascón. He obtained an MS in Telecommunications Engineering from Vigo University in 1991 and a PhD degree in Computer Engineering from A Coruña University in 1998. He obtained two grants to stay at Ohio State University as a research visitor, in 1996 and 1998. In 2000 he was appointed Associate Professor and, more recently, in 2009, Government Vice-Dean of the Computer Engineering Faculty at A Coruña University. His research interests are Signal Processing, Digital Communications, Wireless Sensor Networks and Location Systems. He has published several technical papers in journals and conferences, and has supervised one PhD thesis.


SeeReader: An (Almost) Eyes-Free Mobile Rich Document Viewer

Scott CARTER, Laurent DENOUE

FX Palo Alto Laboratory, Inc. 3400 Hillview Ave., Bldg. 4

Palo Alto, CA 94304 {carter,denoue}@fxpal.com

Abstract

Reading documents on mobile devices is challenging. Not only are screens small and difficult to read, but also navigating an environment using limited visual attention can be difficult and potentially dangerous. Reading content aloud using text-to-speech (TTS) processing can mitigate these problems, but only for content that does not include rich visual information. In this paper, we introduce a new technique, SeeReader, that combines TTS with automatic content recognition and document presentation control, allowing users to listen to documents while also being notified of important visual content. Together, these services allow users to read rich documents on mobile devices while maintaining awareness of their visual environment. Key words: Document reading, mobile, audio.

1. Introduction

Reading documents on-the-go can be difficult. As previous studies have shown, mobile users have limited stretches of attention during which they can devote their full attention to their device [8]. Furthermore, studies have shown that listening to documents can improve users' ability to navigate real world obstacles [11]. However, while solutions exist for unstructured text, these approaches do not support the figures, pictures, tables, callouts, footnotes, etc., that might appear in rich documents.

SeeReader is the first mobile document reader to support rich documents by combining the affordances of visual document reading with auditory speech playback and eyes-free navigation. Traditional eReaders have been either purely visual or purely auditory, with the auditory readers reading back unstructured text. SeeReader supports eyes-free structured document browsing and reading as well as automatic panning to and notification of crucial visual components. For example, while reading the text "as shown in Figure 2" aloud to the user, the visual display automatically frames Figure 2 in the document. While this is most useful for document figures, any textual reference can be used to change the visual display, including footnotes, references to other sections, etc.

Furthermore, SeeReader can be applied to an array of document types, including digital documents, scanned documents, and web pages. In addition to using references in the text to automatically pan and zoom to areas of a page, SeeReader can provide other services automatically, such as following links in web pages or initiating embedded macros.

Figure 1: SeeReader automatically indicates areas of visual interest while reading document text aloud. The visual display shows the current reading position (left) before it encounters a link (right). When viewing the whole page, SeeReader indicates the link (top). When viewing the text, SeeReader automatically pans to the linked region (bottom). In both views, as the link is encountered, the application signals the user by vibrating the device.


The technology can also interleave useful information into the audio stream. For example, for scanned documents the technology can indicate in the audio stream (with a short blip or explanation) when recognition errors would likely make text-to-speech (TTS) translation unusable. These indications can be presented in advance to allow users to avoid listening to garbled speech.

In the remainder of this paper, we briefly review other mobile document reading technologies, describe SeeReader including server processes and the mobile interface, and describe a study we ran to verify the usefulness of our approach.

2. Mobile Document Reading

The linear, continuous reading of single documents by people on their own is an unrealistic characterization of how people read in the course of their daily work. [1]

Work-related reading is a misnomer. Most “reading” involves an array of activities, often driven by some well-defined goal, and can include skimming, searching, cross-referencing, or annotating. For example, a lawyer might browse a collection of discovery documents in order to find where a defendant was on the night of October 3, 1999, annotate that document, cross-reference it with another document describing conflicting information from a witness, and begin a search for other related documents.

A growing number of mobile document reading platforms are being developed to support these activities, including specialized devices such as the Amazon Kindle™ (and others [10]) as well as applications for common mobile platforms such as the Adobe Reader™ for mobile devices. Past research has primarily focused on active reading tasks, in which the user is fully engaged with a document [9, 4]. In these cases, support for annotation, editing, and summarization is critical.

Our goal, on the other hand, is to support casual reading tasks for users who are engaged in another activity. A straightforward approach for this case is to use the audio channel to free visual attention for the primary task. Along these lines, the Amazon Kindle™ includes a TTS feature. However, the Kindle provides no visual feedback while reading a document aloud. Similarly, the knfbReader™ converts words in a printed document to speech (http://www.knfbreader.com/). However, as this application was designed primarily for blind users, its use of the mobile device's visual display is limited to an indication only of the text currently being read. Other mobile screen readers, such as Mobile Speak (http://www.codefactory.es/), can be configured to announce when they have reached a figure; however, as they do not link to textual references, they are therefore likely to interrupt the reading flow of the text. Similarly, with Click-Through Navigation [3] users can click on figure references in body text to open a window displaying the figure.

SeeReader improves on these techniques by making figures (and other document elements) visible on the screen automatically when references to them are read.

3. Analysis Pipeline

The SeeReader mobile interface depends upon third-party services to generate the necessary metadata. Overall, SeeReader requires the original document, information delineating regions in the document (figures, tables, and paragraph boundaries) as well as keyphrase summaries of those regions, the document text parsed to include links and notifications, and audio files. In this section, we describe this process (shown in Figure 2) in detail.

Initially, documents, whether they are scanned images or electronic (PDFs), are sent to a page layout service that produces a set of rectangular subregions based on the underlying content. Subregions might include figures and paragraph boundaries. In our approach, we use a version of the page segmentation and region classification described in [5]. Region metadata is stored in a database.

In the next step the body text is digitized. This is obviously automatic for electronic documents, while for scanned documents we use Microsoft Office™ OCR.

Figure 2: Data flow for digital documents (left) and scanned documents (right).


Next, the text is sent to a service that extracts keyphrases summarizing each region. Many text summary tools would suffice for this step; we use a version of [6] modified to work on document regions. Once processed, keyphrases are saved in a database. Simultaneously, the text is sent to an algorithm we developed to link phrases to other parts of the document. For example, our algorithm links text such as "Figure 2" to the figure proximate to a caption starting with the text "Figure 2." Our algorithm currently only creates links with low ambiguity, including figures, references, and section headings, using simple rules based on region proximity (similar to [7]). These links are also saved in a database.
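As an illustration of the kind of rule involved, the sketch below pairs "Figure N" mentions in body text with the caption region carrying the same number. It is a simplified sketch of the idea only; the structures, names and regular expressions are ours, not SeeReader's actual implementation, which also uses region proximity.

// Simplified illustration of figure-reference linking: find "Figure N" mentions
// in body text and pair them with the caption region starting with the same
// number. A sketch of the idea, not the actual SeeReader pipeline code.
#include <iostream>
#include <map>
#include <regex>
#include <string>
#include <vector>

struct Region { int id; std::string text; };   // e.g. a caption block on a page

int main() {
    std::string body = "The results, as shown in Figure 2, confirm the trend.";
    std::vector<Region> captions = {
        {7, "Figure 1: System overview."},
        {9, "Figure 2: Data flow for digital and scanned documents."}
    };

    // Map figure numbers found in captions to their region ids.
    std::map<std::string, int> captionByNumber;
    std::regex capRe(R"(^Figure\s+(\d+))");
    for (const auto& r : captions) {
        std::smatch m;
        if (std::regex_search(r.text, m, capRe))
            captionByNumber[m[1]] = r.id;
    }

    // Link each "Figure N" mention in the body text to the matching caption region.
    std::regex refRe(R"(Figure\s+(\d+))");
    for (auto it = std::sregex_iterator(body.begin(), body.end(), refRe);
         it != std::sregex_iterator(); ++it) {
        auto found = captionByNumber.find((*it)[1]);
        if (found != captionByNumber.end())
            std::cout << "Link mention \"" << it->str()
                      << "\" to region " << found->second << "\n";
    }
    return 0;
}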

Finally, the document text and region keyphrases are sent to a TTS service (AT&T's Natural Voices™). In the case of scanned documents, OCR scores are used to inject notifications to the user of potentially poorly analyzed blocks of text (e.g., this process may inject the phrase "Warning, upcoming TTS may be unintelligible"). The resulting files are processed into low-footprint Adaptive Multi-Rate (AMR) files and saved in a database.

4. Mobile Application

The SeeReader mobile interface is a J2ME application capable of running on a variety of platforms. SeeReader also supports both touchscreen and standard input. Thus far, we have tested the application on Symbian devices (Nokia N95), Windows Mobile devices (HTC Touch), and others (e.g., LG Prada). The application acquires documents and their associated metadata (as produced by the pipeline described above) via a remote server whose location is specified in a configuration file. The application can also read document data from local files. When configured to interact with a remote server, the application downloads data progressively, obtaining first for each document only XML metadata and small thumbnail representations. When the user selects a document (described below), the application retrieves compressed page images first, and AMR files as a background process, allowing users to view a document quickly.

The interface supports primarily three different views: document, page, and text. The document view presents thumbnail representations of all available documents. Users can select the document they wish to read via either the number pad or by directly pressing on the screen.

After selecting a document, the interface is in page view mode (see Figure 1, top). We support both standard and touch-based navigation in page mode. For devices without a touchscreen, users can press command keys to navigate between regions and pages. For devices with a touchscreen, a user can set the cursor position for reading by pressing and holding on the area of interest. After a short delay, the application highlights the region the user selected and then begins audio playback beginning with the first sentence of the selected region.

To support eyes-free navigation, we implemented a modified version of the touchwheel described by Zhao et al. in [12] that provides haptic and auditory feedback to users as they navigate. This allows the user to maintain their visual attention on another task while still perusing the document. As the user gestures in a circular motion (see Figure 3), the application vibrates the device to signal sentence boundaries to the user. The application also reads aloud the keyphrase summary of each region as it is entered. In addition, we inject other notifications to help the user maintain an understanding of their position in the document as they navigate the touchwheel, such as page and document boundaries.
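One way such a touchwheel can be realized is to track the angle of the finger around the screen centre and advance one sentence per fixed angular step. The sketch below illustrates that idea only; the 30-degree step, the names and the unwrapping logic are our assumptions, not the implementation of [12] or of SeeReader.

// Illustrative touchwheel logic: accumulate the angular movement of the finger
// around the screen centre and emit one step (e.g. one sentence) per 30 degrees.
#include <cmath>

class TouchWheel {
public:
    TouchWheel(double centreX, double centreY) : cx(centreX), cy(centreY) {}

    // Feed successive touch coordinates; returns +1/-1 each time the finger has
    // swept one 30-degree step (one sentence forward/backward), 0 otherwise.
    int onTouchMove(double x, double y) {
        const double kPi = 3.14159265358979323846;
        const double kStep = kPi / 6.0;              // 30 degrees per sentence
        double angle = std::atan2(y - cy, x - cx);
        if (!tracking) { tracking = true; lastAngle = angle; return 0; }
        double delta = angle - lastAngle;
        if (delta > kPi)  delta -= 2 * kPi;          // unwrap across the +/-pi boundary
        if (delta < -kPi) delta += 2 * kPi;
        lastAngle = angle;
        accumulated += delta;
        if (accumulated >= kStep)  { accumulated -= kStep; return +1; }
        if (accumulated <= -kStep) { accumulated += kStep; return -1; }
        return 0;
    }

private:
    double cx, cy;
    double lastAngle = 0.0;
    double accumulated = 0.0;
    bool tracking = false;
};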

While in page view, users can flick their finger across the screen or use a command key to navigate between pages. Users can also click a command key or double-click on the screen to zoom in to the text view (see Figure 1, bottom). The text view shows the details of the current document page — users can navigate the page using either touch or arrow keys. Double-clicking again zooms the display back to the page view.

Multiple actions launch the read-aloud feature. In page view mode, when a user releases the touchwheel, selects a region by pressing and holding, or presses the selection key, the application automatically begins reading at the selected portion of text. The user can also press a command key at any time to start or pause reading.

Figure 3: A user interacting with the touchwheel. As the user drags her finger around the center of the screen, the application vibrates the device to signal sentence boundaries and plays an audio clip of the keyphrase summarizing each region as it is entered.


While reading, SeeReader indicates the boundaries of the sentence being read. When SeeReader encounters a link, it vibrates the device and either highlights the link or automatically pans to the location of the linked content, depending on whether the device is in page view or text view mode, respectively (see Figure 1).

5. Evaluation

We ran a within subjects, dual-task study as a preliminary evaluation of the core features of the SeeReader interface. Participants completed two reading tasks while also doing a physical navigation task. Common dual-task approaches to evaluating mobile applications involve participants walking on treadmills or along taped paths while completing a task on the mobile device. These approaches are designed to simulate the bodily motion of walking [2]. However, they do not simulate the dynamism of a real-world environment. We developed a different approach for this study that focuses on collision avoidance rather than walking per se.

In this configuration, participants move laterally either to hit targets (such as doorways or stairs) or avoid obstacles (such as telephone poles or crowds). We simulated the targets and obstacles on a large display visible in the participant's periphery as they used the mobile device (see Figure 4). To sense lateral motion, a Wiimote™ mounted on the top of the display tracked an IR LED attached to a hat worn by each participant. We also included an override feature to allow a researcher to manually set the participant's location with a mouse in the event that the sensor failed. In addition to simulating a more dynamic situation, this approach has the advantage of being easily repeatable and producing straightforward measures of success for the peripheral task (task completion time and the number of barriers and targets hit).

In order to understand the benefits of eyes-free document reading, we compared the touchscreen SeeReader interface against a modified version of the touchscreen SeeReader interface with audio and vibration notifications removed (similar to a standard document reader). At the beginning of the experiment, we asked participants to open and freely navigate a test document using both interfaces. After 10 minutes of use, we then had participants begin the main tasks on a different document. We used a 2x2 design, having the participants complete two reading tasks, one on each interface, in randomized order. At the end of each task we asked participants two simple, multiple-choice comprehension and recall questions.

Finally, at the end of the study we asked participants the following questions (with responses recorded on a 7-point scale, where 1 mapped to agreement and 7 mapped to disagreement): “I found it easier to use the audio version than the non-audio version”; “I was comfortable using the touchwheel”; “It felt natural to listen to this document”; “The vibration notifications were a sufficient indication of links in the document”. We ran a total of 8 participants (6 male and 2 female) with an average age of 38 (SD 6.02). Furthermore, only 2 of the 8 participants reported having extensive experience with gaming and mobile interfaces.

6. Results

Overall, we found that participants using SeeReader were able to avoid more barriers and hit more targets while sacrificing neither completion time nor comprehension of the reading material. Using SeeReader participants hit 76% (12% RSD) of the targets and avoided all but 10% (5% RSD) of the barriers, while using the non-audio reader participants hit 63% (11% RSD) of the targets and 17% (5% RSD) of the barriers. Meanwhile, average completion time across all tasks was virtually identical — 261 seconds (70 SD) for SeeReader and 260 seconds (70 SD) for the non-audio version. Also, comprehension scores were similar. Using SeeReader, participants answered on average 1.13 (.64 SD) questions correctly, versus 1 (.87 SD) using the non-audio version. In the post-study questionnaire, participants reported that they found SeeReader easier to use (avg. 2.75, 1.58 SD), and felt it was natural to listen to the document (avg. 3.00, 1.31 SD). However, participants were mixed on the use of the touchwheel (avg. 4.38, 1.85 SD) and the vibration notifications (avg. 4.13, 1.89 SD).

Figure 4: Study participants read documents while peripherally monitoring an interface, moving their body either to avoid barriers (red) or to acquire targets (green). A WiimoteTM tracked an IR LED attached to a hat to determine their location.


While participants had some issues with SeeReader, overall they found it more satisfying than the non-audio version. More than one participant noted that they felt they did not yet have “enough training with [SeeReader’s] touchwheel.” Still, comments revealed that using the non-audio version while completing the peripheral task was “frustrating” and that some participants had no choice but to “stop to read the doc[ument].” On the other hand, SeeReader allowed participants to complete both tasks without feeling too overwhelmed. One participant noted that, “Normally I dislike being read to, but it seemed natural [with] SeeReader.”

7. Discussion

The results, while preliminary, imply that even with only a few minutes to familiarize themselves with these new interaction techniques, participants may be able to read rich documents using SeeReader while also maintaining an awareness of their environment. Furthermore, the user population we chose was only moderately familiar with these types of interfaces. These results give us hope that, with more experience with the application, users comfortable with smart devices would be able to use SeeReader not only to remain aware of their environment but also to read rich documents faster. Methodologically, we were encouraged that participants were able to rapidly understand the peripheral task, and generally performed well. Still, it was clear that participants felt somewhat overwhelmed trying to complete two tasks at once, both with interfaces they had not yet encountered. We are considering how to iterate on this method to make it more natural while still incorporating the dynamism of a realistic setting.

8. Conclusion

We presented a novel document reader that allows users to read rich documents while also maintaining an awareness of their physical environment. A dual-task study showed that users may be able to read documents with SeeReader as well as a standard mobile document reader while also being more aware of their environment. In future work, we intend to experiment with this technique in automobile dashboard systems. We can take advantage of other sensors in automobiles to adjust the timing of display of visual content (e.g., visual content could be shown only when the car is idling). We anticipate that SeeReader may be even more useful in this scenario given the high cost of distraction for drivers.

9. Acknowledgements

We thank Dr. David Hilbert for his insights into improving the mobile interface. We also thank our study participants.

References

[1] Annette Adler, Anuj Gujar, Beverly L. Harrison, Kenton O’Hara, and Abigail Sellen. A diary study of work-related reading: Design implications for digital reading devices. In CHI ’98. Pages 241–248.

[2] Leon Barnard, Ji S. Yi, Julie A. Jacko, and Andrew Sears. An empirical comparison of use-in-motion evaluation scenarios for mobile computing devices. International Journal of Human-Computer Studies, 62(4):487–520, April 2005.

[3] George Buchanan and Tom Owen. Improving navigation interaction in digital documents. In JCDL ’08. Pages 389–392.

[4] Nicholas Chen, Francois Guimbretiere, Morgan Dixon, Cassandra Lewis, and Maneesh Agrawala. Navigation techniques for dual-display e-book readers. In CHI ’08. Pages 1779–1788.

[5] Patrick Chiu, Koichi Fujii, and Qiong Liu. Content based automatic zooming: Viewing documents on small displays. In MM ’08. Pages 817–820.

[6] Julian Kupiec, Jan Pedersen, and Francine Chen. A trainable document summarizer. In SIGIR ’95. Pages 68–73.

[7] Donato Malerba, Floriana Esposito, Francesca A. Lisi, and Oronzo Altamura. Automated discovery of dependencies between logical components in document image understanding. In ICDAR ’01. Pages 174–178.

[8] Antti Oulasvirta, Sakari Tamminen, Virpi Roto, and Jaana Kuorelahti. Interaction in 4-second bursts: The fragmented nature of attentional resources in mobile HCI. In CHI ’05. Pages 919–928.

[9] Morgan N. Price, Bill N. Schilit, and Gene Golovchinsky. Xlibris: The active reading machine. In CHI ’98. Pages 22–23.

[10] Bill N. Schilit, Morgan N. Price, Gene Golovchinsky, Kei Tanaka, and Catherine C. Marshall. As we may read: The reading appliance revolution. Computer, 32(1):65–73, 1999.

[11] Kristin Vadas, Nirmal Patel, Kent Lyons, Thad Starner, and Julie Jacko. Reading on-the-go: A comparison of audio and hand-held displays. In MobileHCI ’06. Pages 219–226.

[12] Shengdong Zhao, Pierre Dragicevic, Mark Chignell, Ravin Balakrishnan, and Patrick Baudisch. Earpod: Eyes-free menu selection using touch input and reactive audio feedback. In CHI ’07. Pages 1395–1404.

Scott Carter holds a PhD from The University of California, Berkeley. He is a research scientist at FX Palo Alto Laboratory.


Laurent Denoue holds a PhD from the University of Savoie, France. He is a senior research scientist at FX Palo Alto Laboratory.


Improvement of Text Dependent Speaker Identification System Using Neuro-Genetic Hybrid Algorithm in Office Environmental Conditions

Md. Rabiul Islam1 and Md. Fayzur Rahman2

1 Department of Computer Science & Engineering

Rajshahi University of Engineering & Technology (RUET), Rajshahi-6204, Bangladesh [email protected]

2 Department of Electrical & Electronic Engineering

Rajshahi University of Engineering & Technology (RUET), Rajshahi-6204, Bangladesh [email protected]

Abstract: In this paper, an improved strategy for an automated text-dependent speaker identification system in noisy environments is proposed. The identification process incorporates a Neuro-Genetic hybrid algorithm with cepstral-based features. To remove the background noise from the source utterance, a Wiener filter has been used. Different speech pre-processing techniques, such as start-end point detection, pre-emphasis filtering, frame blocking and windowing, have been used to process the speech utterances. RCC, MFCC, ∆MFCC, ∆∆MFCC, LPC and LPCC have been used to extract the features, and the features obtained with the different techniques are compared to optimize the identification performance. After feature extraction, the Neuro-Genetic hybrid algorithm has been used for learning and identification. On the VALID speech database, the highest speaker identification rates achieved in the closed-set text-dependent task are 100.00% for the studio environment and 82.33% for office environmental conditions. Key words: Bio-informatics, Robust Speaker Identification, Speech Signal Pre-processing, Neuro-Genetic Hybrid Algorithm.

1. Introduction

Biometrics are seen by many researchers as a solution to many user identification and security problems nowadays [1]. Speaker identification is one of the most important areas in which biometric techniques can be used. There are various techniques to resolve the automatic speaker identification problem [2, 3, 4, 5, 6, 7, 8]. Most published work in the areas of speech recognition and speaker recognition focuses on speech under noiseless environments, and few published works address speech under noisy conditions [9, 10, 11, 12]. In some research, different talking styles were used to simulate the speech produced under real stressful talking conditions [13, 14, 15]. Learning systems for speaker identification that employ hybrid strategies can potentially offer significant advantages over single-strategy systems. In the proposed system, a Neuro-Genetic hybrid algorithm with cepstral-based features has been used to improve the performance of text-dependent speaker identification under noisy environments. To extract the features from the speech, different feature extraction techniques such as RCC, MFCC, ∆MFCC, ∆∆MFCC, LPC and LPCC have been used to achieve good results. Some of the tasks of this work have been simulated using Matlab-based toolboxes such as the Signal Processing Toolbox, Voicebox and the HMM Toolbox.

2. Paradigm of the Proposed Speaker Identification System

The basic building blocks of the speaker identification system are shown in Fig. 1. The first step is the acquisition of speech utterances from speakers. To remove the background noise from the original speech, a Wiener filter has been used. Then a start and end point detection algorithm has been used to detect the start and end points of each speech utterance, after which the unnecessary parts have been removed. A pre-emphasis filter has been used as a noise reduction technique to increase the amplitude of the input signal at frequencies where the signal-to-noise ratio (SNR) is low. The speech signal is then segmented into overlapping frames; the purpose of the overlapping analysis is that each speech sound of the input sequence is approximately centered in some frame. After segmentation, a windowing technique has been applied, and features were extracted from the segmented speech.


The extracted features were then fed to the Neuro-Genetic hybrid technique for learning and classification.

Fig. 1 Block diagram of the proposed automated speaker identification system.

3. Speech Signal Pre-processing for Speaker Identification

To capture the speech signal, a sampling frequency of 11025 Hz, a sampling resolution of 16 bits, a mono recording channel and the *.wav file format have been used. The speech pre-processing stage plays a vital role in the efficiency of learning. After acquisition of the speech utterances, a Wiener filter has been used to remove the background noise from the original speech utterances [16, 17, 18]. A speech end-point detection and silence-removal algorithm has been used to detect the presence of speech and to remove pauses and silences in background noise [19, 20, 21, 22, 23]. To detect word boundaries, the frame energy is computed using the short-term log energy equation [24]:

E_i = 10 log_{10} Σ_{n=t_i}^{t_i+N−1} S^2(n)    (1)
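As a small illustration of Eq. (1), the following sketch computes the per-frame log energy and flags frames above a threshold as speech; the frame length and threshold are illustrative values, not those reported by the authors.

    // Frame-level short-term log energy (Eq. 1) used as a simple speech/silence indicator.
    public class EndpointDetector {
        public static double frameLogEnergy(double[] s, int start, int frameLen) {
            double sum = 1e-12;                       // avoid log(0) on silent frames
            for (int n = start; n < start + frameLen && n < s.length; n++) {
                sum += s[n] * s[n];
            }
            return 10.0 * Math.log10(sum);            // E_i = 10 log10 sum S^2(n)
        }

        // Returns one boolean per frame: true where the frame is likely speech.
        public static boolean[] speechFrames(double[] s, int frameLen, double thresholdDb) {
            int frames = s.length / frameLen;
            boolean[] voiced = new boolean[frames];
            for (int i = 0; i < frames; i++) {
                voiced[i] = frameLogEnergy(s, i * frameLen, frameLen) > thresholdDb;
            }
            return voiced;
        }
    }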

Pre-emphasis has been used to balance the spectrum of voiced sounds that have a steep roll-off in the high frequency region [25, 26, 27]. The transfer function of the FIR filter in the z-domain is [26]

H(z) = 1 − α z^{-1},   0 ≤ α ≤ 1    (2)

where α is the pre-emphasis parameter. Frame blocking has been performed with an overlap of 25% to 75% of the frame size; typically a frame length of 10-30 milliseconds has been used. The purpose of the overlapping analysis is that each speech sound of the input sequence is approximately centered in some frame [28, 29]. Among the different windowing techniques, the Hamming window has been used for this system. The purpose of windowing is to reduce the effect of the spectral artifacts that result from the framing process [30, 31, 32]. The Hamming window can be defined as follows [33]:

w(n) = 0.54 − 0.46 cos(2πn / (N−1)),   −(N−1)/2 ≤ n ≤ (N−1)/2;   w(n) = 0 otherwise    (3)
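The following sketch ties the pre-processing steps above together: pre-emphasis (Eq. 2), overlapping frame blocking and Hamming windowing (Eq. 3). The parameter values used in main() (α = 0.97, roughly 25 ms frames with 50% overlap at 11025 Hz) are common defaults assumed for illustration, not necessarily the authors' settings.

    // A minimal sketch of the pre-processing chain: pre-emphasis, framing, Hamming window.
    public class SpeechPreprocessor {
        public static double[] preEmphasize(double[] s, double alpha) {
            double[] out = new double[s.length];
            out[0] = s[0];
            for (int n = 1; n < s.length; n++) {
                out[n] = s[n] - alpha * s[n - 1];      // y(n) = s(n) - alpha * s(n-1), Eq. (2)
            }
            return out;
        }

        // Splits the signal into overlapping frames and applies a Hamming window (Eq. 3) to each.
        public static double[][] frameAndWindow(double[] s, int frameLen, int hop) {
            int frames = Math.max(0, 1 + (s.length - frameLen) / hop);
            double[][] out = new double[frames][frameLen];
            for (int f = 0; f < frames; f++) {
                for (int n = 0; n < frameLen; n++) {
                    double w = 0.54 - 0.46 * Math.cos(2 * Math.PI * n / (frameLen - 1));
                    out[f][n] = s[f * hop + n] * w;
                }
            }
            return out;
        }

        public static void main(String[] args) {
            double[] signal = new double[11025];        // 1 s of (here, silent) 11025 Hz audio
            double[] emphasized = preEmphasize(signal, 0.97);
            double[][] frames = frameAndWindow(emphasized, 276, 138); // ~25 ms frames, 50% overlap
            System.out.println("frames: " + frames.length);
        }
    }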

4. Speech parameterization Techniques for Speaker Identification

This stage is very important in an automatic speaker identification system (ASIS) because the quality of the speaker modeling and pattern matching strongly depends on the quality of the feature extraction methods. For the proposed ASIS, different speech feature extraction methods [34, 35, 36, 37, 38, 39], namely RCC, MFCC, ∆MFCC, ∆∆MFCC, LPC and LPCC, have been applied.
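As a brief illustration of how the dynamic features listed above are usually obtained, the sketch below derives ∆ coefficients from a sequence of base cepstral vectors with the standard regression formula; the window size K = 2 is an assumption. ∆∆ coefficients are obtained by applying the same function to the ∆ coefficients themselves.

    // Delta (∆) coefficients from a sequence of cepstral vectors (e.g. MFCC frames).
    public class DeltaFeatures {
        public static double[][] delta(double[][] c, int K) {
            int T = c.length, D = c[0].length;
            double norm = 0;
            for (int k = 1; k <= K; k++) norm += 2.0 * k * k;
            double[][] d = new double[T][D];
            for (int t = 0; t < T; t++) {
                for (int j = 0; j < D; j++) {
                    double acc = 0;
                    for (int k = 1; k <= K; k++) {
                        int prev = Math.max(0, t - k), next = Math.min(T - 1, t + k);
                        acc += k * (c[next][j] - c[prev][j]);
                    }
                    d[t][j] = acc / norm;   // standard regression formula for delta coefficients
                }
            }
            return d;
        }
        // ∆∆ coefficients: apply delta() to the ∆ coefficients themselves.
    }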

5. Training and Testing Model for Speaker Identification

Fig. 2 shows the working process of the Neuro-Genetic hybrid system [40, 41, 42]. The structure of the multilayer neural network does not matter to the GA as long as the BPN's parameters are mapped correctly to the genes of the chromosome the GA is optimizing. Basically, each gene represents the value of a certain weight in the BPN, and the chromosome is a vector that contains these values such that each weight corresponds to a fixed position in the vector, as shown in Fig. 2. The fitness function is derived from the identification error of the BPN on the training set. The GA searches for parameter values that minimize the fitness function; thus the identification error of the BPN is reduced and the identification rate is maximized [43].

Fig.2 Learning and recognition model for the Neuro-Genetic hybrid system.


The algorithm for the Neuro-Genetic weight determination and the fitness function FITGEN [44] is as follows:

Algorithm for Neuro-Genetic weight determination:
{
    i ← 0;
    Generate the initial population P_i of real-coded chromosomes C_i^j, each representing a weight set for the BPN;
    Generate fitness values F_i^j for each C_i^j ∈ P_i using the algorithm FITGEN();
    While the current population P_i has not converged
    {
        Using the crossover mechanism, reproduce offspring from the parent chromosomes and perform mutation on the offspring;
        i ← i + 1;
        Call the current population P_i;
        Calculate fitness values F_i^j for each C_i^j ∈ P_i using the algorithm FITGEN();
    }
    Extract the weights from P_i to be used by the BPN;
}

Algorithm FITGEN():
{
    Let (I_i, T_i), i = 1, 2, …, N, where I_i = (I_i1, I_i2, …, I_il) and T_i = (T_i1, T_i2, …, T_in), represent the input-output pairs of the problem to be solved by a BPN with configuration l-m-n.
    For each chromosome C_i, i = 1, 2, …, P:
    {
        Extract the weights W_i from C_i;
        Keeping W_i as a fixed weight setting, run the BPN on the N input instances (patterns);
        Calculate the error E_i for each input instance using the formula

            E_i = Σ_j (T_ij − O_ij)^2    (4)

        where O_i is the output vector calculated by the BPN;
        Find the root mean square E of the errors E_i, i = 1, 2, …, N, i.e.

            E = sqrt( Σ_i E_i / N )    (5)

        Assign the fitness value F_i = E to the individual string;
    }
    Output F_i for each C_i, i = 1, 2, …, P;
}
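A compact sketch of the fitness evaluation at the heart of FITGEN() is given below: a chromosome is decoded into a weight vector, the network is run on the N training patterns with those weights held fixed, and the RMS of the per-pattern squared errors (Eqs. 4-5) is returned as the fitness to be minimised. The Network interface is a placeholder for the BPN and is an assumption of this sketch.

    // Fitness of one chromosome: RMS error of the fixed-weight network over all patterns.
    public class Fitgen {

        public interface Network {
            void setWeights(double[] w);                 // load a fixed weight setting
            double[] forward(double[] input);            // compute the output vector O_i
        }

        public static double fitness(Network bpn, double[] chromosome,
                                      double[][] inputs, double[][] targets) {
            bpn.setWeights(chromosome);                  // extract weights W_i from C_i
            int N = inputs.length;
            double sum = 0;
            for (int i = 0; i < N; i++) {
                double[] o = bpn.forward(inputs[i]);
                double ei = 0;
                for (int j = 0; j < targets[i].length; j++) {
                    double diff = targets[i][j] - o[j];  // E_i = sum_j (T_ij - O_ij)^2, Eq. (4)
                    ei += diff * diff;
                }
                sum += ei;
            }
            return Math.sqrt(sum / N);                   // E = sqrt(sum_i E_i / N), Eq. (5)
        }
    }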

6. Optimum parameter Selection for the BPN and GA

6.1 Parameter Selection on the BPN

There are some critical parameters in the Neuro-Genetic hybrid system that affect the performance of the proposed system: in the BPN, the gain term, the speed factor and the number of hidden layer nodes; in the GA, the crossover rate and the number of generations. Experiments were performed to explore the optimal values of these parameters, which were then chosen carefully to determine the final identification rate.

6.1.1 Experiment on the Gain Term, η

In the BPN training session, when the gain term was set to η1 = η2 = 0.4, the speed factor to k1 = k2 = 0.20 and the tolerable error rate fixed at 0.001%, the highest identification rate achieved was 91%, as shown in Fig. 3.

Fig. 3 Performance measurement according to gain term.

6.1.2 Experiment on the Speed Factor, k

The performance of the BPN has been measured as a function of the speed factor k, with η1 = η2 = 0.4 and the tolerable error rate fixed at 0.001%. We studied values of k ranging from 0.1 to 0.5 and found that the highest recognition rate, 93%, occurred at k1 = k2 = 0.15, as shown in Fig. 4.


Fig. 4 Performance measurement according to various speed factor.

6.1.3 Experiment on the Number of Nodes in Hidden Layer, NH

In the learning phase of the BPN, we varied the number of hidden layer nodes from 5 to 40, with η1 = η2 = 0.4, k1 = k2 = 0.15 and the tolerable error rate fixed at 0.001%. The highest recognition rate, 94%, was achieved at NH = 30, as shown in Fig. 5.

Fig. 5 Results after setting up the number of internal nodes in BPN.

6.2 Parameter Selection on the GA

To measure the optimum value, different parameters of the genetic algorithm were also changed to find the best matching parameters. The results of the experiments are shown below.

6.2.1 Experiment on the Crossover Rate

In this experiment, the crossover rate was varied over the values 1, 2, 5, 7, 8 and 10. The highest speaker identification rate, 93%, was found at crossover point 5, as shown in Fig. 6.

Fig. 6 Performance measurement according to the crossover rate.

6.2.2 Experiment on the Number of Generations

Different numbers of generations have been tested to find the optimum value. The test results are shown in Fig. 7. The maximum identification rate, 95%, was found at 15 generations.

Fig.7 Accuracy measurement according to the no. of generations.

7. Performance Measurement of the Text-Dependent Speaker Identification System

The VALID speech database [45] has been used to measure the performance of the proposed hybrid system. In the learning phase, studio-recorded speech utterances were used to build the reference models; in the testing phase, speech utterances recorded in four different office conditions were used to measure the performance of the proposed Neuro-Genetic hybrid system. Performance was measured for various cepstral-based features, namely LPC, LPCC, RCC, MFCC, ∆MFCC and ∆∆MFCC, as shown in the following table.

Table 1: Speaker identification rate (%) for the VALID speech corpus

Type of environment                        MFCC     ∆MFCC    ∆∆MFCC    RCC      LPCC
Clean speech utterances                    100.00   100.00   98.23     90.43    100.00
Office environment speech utterances       80.17    82.33    68.89     70.33    76.00

Table 1 shows the overall average speaker identification rate for the VALID speech corpus. The table makes it easy to compare the performance of the MFCC, ∆MFCC, ∆∆MFCC, RCC and LPCC methods in the Neuro-Genetic hybrid algorithm based text-dependent speaker identification system. In the clean speech environment the identification rate is 100.00% for MFCC, ∆MFCC and LPCC, and the highest identification rate for the four different office environments (82.33%) has been achieved with ∆MFCC.

8. Conclusion and Observations

The experimental results show the versatility of the Neuro-Genetic hybrid algorithm based text-dependent speaker identification system. The critical parameters, such as the gain term, the speed factor, the number of hidden layer nodes, the crossover rate and the number of generations, have a great impact on the recognition performance of the proposed system. The optimum values of these parameters have been selected to obtain the best performance: the highest recognition rates during BPN and GA tuning were 94% and 95%, respectively. On the VALID speech database, the Neuro-Genetic hybrid system achieved a 100% identification rate in the clean environment and 82.33% in office environmental conditions. The proposed system can therefore be used for various security and access control purposes. Finally, the performance of the proposed system could be further evaluated on larger speech databases.

References

[1] A. Jain, R. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Press, Boston, 1999.

[2] Rabiner, L., and Juang, B.-H., Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, New Jersey, 1993.

[3] Jacobsen, J. D., “Probabilistic Speech Detection”, Informatics and Mathematical Modeling, DTU, 2003.

[4] Jain, A., R.P.W.Duin, and J.Mao., “Statistical pattern recognition: a review”, IEEE Trans. on Pattern Analysis and Machine Intelligence 22 (2000), pp. 4–37.

[5] Davis, S., and Mermelstein, P., “Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences”, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 28, No. 4, 1980, pp. 357-366.

[6] Sadaoki Furui, “50 Years of Progress in Speech and Speaker Recognition Research”, ECTI TRANSACTIONS ON COMPUTER AND INFORMATION TECHNOLOGY, Vol.1, No.2, 2005.

[7] Lockwood, P., Boudy, J., and Blanchet, M., “Non-linear spectral subtraction (NSS) and hidden Markov models for robust speech recognition in car noise environments”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 1992, Vol. 1, pp. 265-268.

[8] Matsui, T., and Furui, S., “Comparison of text-independent speaker recognition methods using VQ-distortion and discrete/ continuous HMMs”, IEEE Transactions on Speech Audio Process, No. 2, 1994, pp. 456-459.

[9] Reynolds, D.A., “Experimental evaluation of features for robust speaker identification”, IEEE Transactions on SAP, Vol. 2, 1994, pp. 639-643.

[10] Sharma, S., Ellis, D., Kajarekar, S., Jain, P. & Hermansky, H., “Feature extraction using non-linear transformation for robust speech recognition on the Aurora database.”, in Proc. ICASSP2000, 2000.

[11] Wu, D., Morris, A.C. & Koreman, J., “MLP Internal Representation as Discriminant Features for Improved Speaker Recognition”, in Proc. NOLISP2005, Barcelona, Spain, 2005, pp. 25-33.

[12] Konig, Y., Heck, L., Weintraub, M. & Sonmez, K., “Nonlinear discriminant feature extraction for robust text-independent speaker recognition”, in Proc. RLA2C, ESCA workshop on Speaker Recognition and its Commercial and Forensic Applications, 1998, pp. 72-75.

[13] Ismail Shahin, “Improving Speaker Identification Performance Under the Shouted Talking Condition Using the Second-Order Hidden Markov Models”, EURASIP Journal on Applied Signal Processing 2005:4, pp. 482–486.

[14] S. E. Bou-Ghazale and J. H. L. Hansen, “A comparative study of traditional and newly proposed features for recognition of speech under stress”, IEEE Trans. Speech, and Audio Processing, Vol. 8, No. 4, 2000, pp. 429–442.

[15] G. Zhou, J. H. L. Hansen, and J. F. Kaiser, “Nonlinear feature based classification of speech under stress”, IEEE Trans. Speech, and Audio Processing, Vol. 9, No. 3, 2001, pp. 201–216.

[16] Simon Doclo and Marc Moonen, “On the Output SNR of the Speech-Distortion Weighted Multichannel Wiener Filter”, IEEE SIGNAL PROCESSING LETTERS, Vol. 12, No. 12, 2005.

[17] Wiener, N., Extrapolation, Interpolation and Smoothing of Stationary Time Series with Engineering Applications, Wiley, New York, 1949.

[18] Wiener, N., Paley, R. E. A. C., “Fourier Transforms in the Complex Domains”, American Mathematical Society, Providence, RI, 1934.

[19] Koji Kitayama, Masataka Goto, Katunobu Itou and Tetsunori Kobayashi, “Speech Starter: Noise-Robust Endpoint Detection by Using Filled Pauses”, Eurospeech 2003, Geneva, pp. 1237-1240.

[20] S. E. Bou-Ghazale and K. Assaleh, “A robust endpoint detection of speech for noisy environments with application to automatic speech recognition”, in Proc. ICASSP2002, 2002, Vol. 4, pp. 3808–3811.

[21] A. Martin, D. Charlet, and L. Mauuary, “Robust speech / non-speech detection using LDA applied to MFCC”, in Proc. ICASSP2001, 2001, Vol. 1, pp. 237–240.

Page 56: International Journal of Computer Science Issues (IJCSI), Volume 1, August 2009

IJCSI International Journal of Computer Science Issues, Vol. 1, 2009

47

IJCSIIJCSI

[22] Richard O. Duda, Peter E. Hart, and David G. Stork, Pattern Classification, Second Edition, Wiley-Interscience, John Wiley & Sons, Inc., 2001.

[23] Sarma, V., Venugopal, D., “Studies on pattern recognition approach to voiced-unvoiced-silence classification”, Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP'78, 1978, Vol. 3, pp. 1-4.

[24] Qi Li, Jinsong Zheng, Augustine Tsai, and Qiru Zhou, “Robust Endpoint Detection and Energy Normalization for Real-Time Speech and Speaker Recognition”, IEEE Transactions on Speech and Audio Processing, Vol. 10, No. 3, 2002.

[25] Harrington, J., and Cassidy, S., Techniques in Speech Acoustics, Kluwer Academic Publishers, Dordrecht, 1999.

[26] Makhoul, J., “Linear prediction: a tutorial review”, Proceedings of the IEEE, Vol. 63, No. 4, 1975, pp. 561–580.

[27] Picone, J., “Signal modeling techniques in speech recognition”, in Proceedings of the IEEE 81, 9 (1993), pp. 1215–1247.

[28] Claudio Becchetti and Lucio Prina Ricotti, Speech Recognition Theory and C++ Implementation, John Wiley & Sons, Ltd., 1999, pp. 124-136.

[29] L.P. Cordella, P. Foggia, C. Sansone, M. Vento., "A Real-Time Text-Independent Speaker Identification System", in Proceedings of 12th International Conference on Image Analysis and Processing, 2003, IEEE Computer Society Press, Mantova, Italy, pp. 632 - 637.

[30] J. R. Deller, J. G. Proakis, and J. H. L. Hansen, Discrete-Time Processing of Speech Signals, Macmillan, 1993.

[31] F. Owens., Signal Processing Of Speech, Macmillan New electronics. Macmillan, 1993.

[32] F. Harris, “On the use of windows for harmonic analysis with the discrete fourier transform”, in Proceedings of the IEEE 66, 1978, Vol.1, pp. 51-84.

[33] J. Proakis and D. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Second Edition, Macmillan Publishing Company, New York, 1992.

[34] D. Kewley-Port and Y. Zheng, “Auditory models of formant frequency discrimination for isolated vowels”, Journal of the Acoustical Society of America, 103(3), 1998, pp. 1654–1666.

[35] D. O’Shaughnessy, Speech Communication - Human and Machine, Addison Wesley, 1987.

[36] E. Zwicker., “Subdivision of the audible frequency band into critical bands (frequenzgruppen)”, Journal of the Acoustical Society of America, 33, 1961, pp. 248–260.

[37] S. Davis and P. Mermelstein, “Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences”, IEEE Transactions on Acoustics Speech and Signal Processing, 28, 1980, pp. 357–366.

[38] S. Furui., “Speaker independent isolated word recognition using dynamic features of the speech spectrum”, IEEE Transactions on Acoustics, Speech and Signal Processing, 34, 1986, pp. 52–59.

[39] S. Furui, “Speaker-Dependent-Feature Extraction, Recognition and Processing Techniques”, Speech Communication, Vol. 10, 1991, pp. 505-520.

[40] Siddique, M., and Tokhi, M., “Training Neural Networks: Back Propagation vs. Genetic Algorithms”, in Proceedings of the International Joint Conference on Neural Networks, Washington D.C., USA, 2001, pp. 2673-2678.

[41] Whitley, D., “Applying Genetic Algorithms to Neural Networks Learning”, in Proceedings of the Conference of the Society of Artificial Intelligence and Simulation of Behaviour, Sussex, England, Pitman Publishing, 1989, pp. 137-144.

[42] Whitley, D., Starkweather, T., and Bogart, C., “Genetic Algorithms and Neural Networks: Optimizing Connections and Connectivity”, Parallel Computing, Vol. 14, 1990, pp. 347-361.

[43] Kresimir Delac, Mislav Grgic and Marian Stewart Bartlett, Recent Advances in Face Recognition, I-Tech Education and Publishing KG, Vienna, Austria, 2008, pp. 223-246.

[44] Rajasekaran, S., and Vijayalakshmi Pai, G. A., Neural Networks, Fuzzy Logic, and Genetic Algorithms: Synthesis and Applications, Prentice-Hall of India Private Limited, New Delhi, India, 2003.

[45] N. A. Fox, B. A. O'Mullane and R. B. Reilly, “The Realistic Multi-modal VALID database and Visual Speaker Identification Comparison Experiments”, in Proc. of the 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA-2005), New York, 2005.

Md. Rabiul Islam was born in Rajshahi, Bangladesh, on December 26, 1981. He received his B.Sc. degree in Computer Science & Engineering and M.Sc. degree in Electrical & Electronic Engineering in 2004 and 2008, respectively, from the Rajshahi University of Engineering & Technology, Bangladesh. From 2005 to 2008, he was a Lecturer in the Department of Computer Science & Engineering at Rajshahi University of Engineering & Technology. Since 2008, he has been an Assistant Professor in the Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Bangladesh. His research interests include bio-informatics, human-computer interaction, and speaker identification and authentication under neutral and noisy environments.

Md. Fayzur Rahman was born in 1960 in Thakurgaon, Bangladesh. He received the B.Sc. Engineering degree in Electrical & Electronic Engineering from Rajshahi Engineering College, Bangladesh in 1984 and the M.Tech degree in Industrial Electronics from S. J. College of Engineering, Mysore, India in 1992. He received the Ph.D. degree in energy and environment electromagnetics from Yeungnam University, South Korea, in 2000. Following his graduation he rejoined his previous post at BIT Rajshahi. He is a Professor in Electrical & Electronic Engineering at Rajshahi University of Engineering & Technology (RUET). He is currently engaged in education in the areas of electronics & machine control and digital signal processing. He is a member of the Institution of Engineers (IEB), Bangladesh, the Korean Institute of Illuminating and Installation Engineers (KIIEE), and the Korean Institute of Electrical Engineers (KIEE), Korea.


MESURE Tool to benchmark Java Card platforms

Samia Bouzefrane1, Julien Cordry1 and Pierre Paradinas2

1CEDRIC Laboratory, Conservatoire National des Arts et Métiers 292 rue Saint Martin, 75141, Paris Cédex 03, FRANCE

{[email protected]}

2INRIA, Domaine de Voluceau - Rocquencourt -B.P. 105, 78153 Le Chesnay Cedex, FRANCE. { [email protected]}

Abstract: The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behavior of these platforms is becoming crucial. To meet this need, we present in this paper a novel benchmarking framework to test and evaluate the performance of Java Card platforms. The MESURE tool is the first framework whose accuracy and effectiveness are independent of the particular Java Card platform tested and of the CAD used.

Key words: Java Card platforms, software testing, benchmarking, smart cards.

1. Introduction

With more than 5 billion copies in 2008 [2], smart cards are an important device of today's information society. The development of the Java Card standard made this device even more popular, as it provides a secure, vendor-independent, ubiquitous Java platform for smart cards. It shortens the time-to-market and enables programmers to develop smart card applications for a wide variety of vendors' products. In this context, understanding the performance behavior of Java Card platforms is important to the Java Card community (users, smart card manufacturers, card software providers, card users, card integrators, etc.). Currently, there is no solution on the market which makes it possible to evaluate the performance of a smart card that implements Java Card technology. In fact, the programs which perform this type of evaluation are generally proprietary and not available to the Java Card community as a whole. Hence, the only existing and published benchmarks are used within research laboratories (e.g., the SCCB project from the CEDRIC laboratory [5] or IBM Research [12]). However, benchmarks are important in the smart card area because they help discriminate between companies' products, especially when the products are standardized. In this paper, on the one hand we propose a general benchmarking solution through different steps that are essential for measuring the performance of Java Card platforms; on the other hand, we validate the obtained measurements from statistical and precision-CAD (Card Acceptance Device) points of view.

The remainder of this paper is organised as follows. In Section 2, we describe briefly some benchmarking attempts in the smart card area. In Section 3, an overview of the benchmarking framework is given. Section 4 analyses the obtained measurements using first a statistical approach, and then a precision reader, before concluding the paper in Section 5.

2. Java-Card Benchmarking State of the Art

Currently, there is no standard benchmark suite which can be used to demonstrate the use of the Java Card Virtual Machine (JCVM) and to provide metrics for comparing Java Card platforms. In fact, even if numerous benchmarks have been developed around the Java Virtual Machine (JVM), there are few works that attempt to evaluate the performance of smart cards. The first interesting initiative was carried out by Castellà et al. in [4], where they study the performance of micro-payment for Java Card platforms, i.e., without PKI (Public Key Infrastructure). Even if they consider Java Card platforms from distinct manufacturers, their tests are not complete as they mainly involve computing some hash functions on a given input, including the I/O operations. A more recent and complete work has been undertaken by Erdmann in [6]. This work mentions different application domains, and makes the distinction between I/O, cryptographic functions, the JCRE (Java Card Runtime Environment) and energy consumption. Infineon Technologies is the only provider of the tested cards for the different application domains, and the software itself is not available. The work of Fischer in [7] compares the performance results given by a Java Card applet with those of the equivalent native application.


Another interesting work has been carried out by the IBM BlueZ secure systems group and is detailed in a Master's thesis [12]. The JCOP framework has been used to perform a series of tests covering the communication overhead, DES performance, and read and write operations in the card memory (RAM and EEPROM). Markantonakis [9] presents some performance comparisons between the two most widely used terminal APIs, namely PC/SC and OCF. Compared to these works, our benchmarking framework not only covers the different functionalities of a Java Card platform but is also provided as a set of open-source code freely accessible online.

3. General benchmarking framework

3.1 Introduction

Our research work falls under the MESURE project [10], a project funded by the French administration (Agence Nationale de la Recherche), which aims at developing a set of open-source tools to measure the performance of Java Card platforms. These benchmarking tools focus on Java Card 2.2 functionalities, even though the Java Card 3.0 specifications have been published since March 2008 [1], principally because there is as yet no Java Card 3.0 platform on the market apart from a few prototypes, such as the one demonstrated by Gemalto during the JavaOne conference in June 2008. Since Java Card 3.0 proposes two editions, a connected (web-oriented) edition and a classic edition, our measuring tools can be reused to benchmark Java Card 3.0 classic edition platforms.

3.2 Addressed issues

Only features related to the normal use phase of Java Card applications are considered here. Excluded features include installing, personalizing or deleting an application, since these are performed only once and are of lesser importance from the user's point of view.

Hence, the benchmark framework enables performance evaluation at three levels:

– The VM level: to measure the execution time of the various instructions of the virtual machine (basic instructions), as well as subjacent mechanisms of the virtual machine (e.g., reading and writing the memory).

– The API level: to evaluate the functioning of the services proposed by the libraries available in the embedded system (various methods of the API, namely those of Java Card and GlobalPlatform).

– The JCRE (Java Card Runtime Environment) level: to evaluate the non-functional services, such as transaction management, method invocation in the applets, etc.

We do not address features such as the I/Os or the power consumption, because their measurability raises some problems:

– For a given smart card, distinct card readers may provide different I/O measurements.

– Each part of an APDU is managed differently by a smart card reader. The 5-byte header is read first, and the following data can be transmitted in several ways: one acknowledgement per byte or not, a delay or not before returning the status word, etc.

– The smart card driver used by the workstation generally induces more delay on the measurement than the smart card reader itself.

3.3 The benchmarking overview

The set of tests supplied to benchmark Java Card platforms is available to anybody and supported by any card reader. The various tests thus have to return accurate results, even if they are not executed on precision readers. We reach this goal by removing the potential card reader weakness (in terms of delay, variance and predictability) and by controlling the noise generated by the measurement equipment (the card reader and the workstation). Removing the noise added to a specific measurement can be done by computing an average value extracted from multiple samples. As a consequence, it is important on the one hand to perform each test several times and to use basic statistical calculations to filter out untrustworthy results. On the other hand, it is necessary to execute the operation to be measured several times within each test, in order to fix a minimal duration for the tests (> 1 second) and to obtain precise results. We defined a set of modules as part of the benchmarking framework. The benchmarks have been developed under the Eclipse environment based on JDK 1.6, with JSR 268 [13], which extends the Java Standard Edition with a package that defines methods within Java classes to interact with a smart card. In accordance with the ISO 7816 standard, since a smart card has no internal clock, we are obliged to measure the time a Java Card platform takes to answer an APDU command, and to use that measure to deduce the execution time of some operations.
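A minimal sketch of such a measurement, using the javax.smartcardio package introduced by JSR 268, is shown below. The applet AID, the instruction byte and the P2 value are hypothetical placeholders; only the timing pattern (select the applet, send the test APDU, measure the round trip) reflects the approach described above.

    import javax.smartcardio.*;
    import java.util.List;

    public class ApduTimer {
        public static void main(String[] args) throws CardException {
            TerminalFactory factory = TerminalFactory.getDefault();
            List<CardTerminal> terminals = factory.terminals().list();
            CardTerminal terminal = terminals.get(0);
            Card card = terminal.connect("*");          // any protocol (T=0 or T=1)
            CardChannel channel = card.getBasicChannel();

            // Hypothetical applet AID and test INS; real values depend on the MESURE applets.
            byte[] aid = {(byte) 0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01};
            channel.transmit(new CommandAPDU(0x00, 0xA4, 0x04, 0x00, aid)); // SELECT

            int loopParameter = 0x20;                   // P2: loop size L = P2 * P2 on the card
            CommandAPDU runTest = new CommandAPDU(0x80, 0x10, 0x00, loopParameter);

            long start = System.nanoTime();
            ResponseAPDU response = channel.transmit(runTest);
            long elapsed = System.nanoTime() - start;

            System.out.printf("SW=%04X, elapsed=%.3f ms%n", response.getSW(), elapsed / 1e6);
            card.disconnect(false);
        }
    }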

The benchmarking development tool covers two parts, as described in Figure 1: the script part and the applet part. The script part, entirely written in Java, defines an abstract class that is used as a template to derive test cases characterized by relevant measuring parameters such as the operation type to measure, the number of loops, etc.


A run() method is executed in each script to interact with the corresponding test case within the applet. Similarly, on the card an abstract class defines three methods:
– setUp(), to perform any memory allocation needed during the lifetime of the test case;
– run(), to launch the tests corresponding to the test case of interest; and
– cleanUp(), to perform any clean-up once the test is done.
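A possible shape for the on-card abstract test case is sketched below; the package and class names are assumptions, since the actual MESURE sources define their own.

    // Assumed shape of the on-card abstract test case described above (not the MESURE source).
    package mesure;

    public abstract class BenchTestCase {
        // Allocate any objects needed for the lifetime of the test case.
        public abstract void setUp();

        // Execute the operation of interest; called L times by the applet's process() loop.
        public abstract void run();

        // Release or reset state once the test is finished.
        public abstract void cleanUp();
    }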

Fig. 1 The script part and the Applet part

3.4 Modules

In this section, we describe the general benchmark framework (see Figure 2) that has been designed to achieve the MESURE goal. The methodology consists of different steps. The objective of the first step is to find the optimal parameters used to carry out the tests correctly. The tests cover the Virtual Machine (VM) operations and the API methods. The obtained results are filtered by eliminating non-relevant measurements, and values are isolated by setting aside measurement noise. A profiler module is used to assign a mark to each benchmark type, allowing us to establish a performance index for each smart card profile used. In the following subsections, we detail every module composing the framework. The bulk of the benchmark consists in measuring execution times when we send APDU commands from the computer through the CAD to the card. Each test (through the run method) is performed within the card a certain number of times (Y) to ensure the reliability of the collected execution times, and within each run method we perform a certain number of loops (L). L is encoded in the byte P2 of the APDU commands sent to the on-card applications. The size of the loop performed on the card is L = (P2)^2, since L is too large to be represented in a single byte.

The Calibrate Module: computes the optimal parameters (such as the number of loops) needed to obtain measurements of a given precision.

Fig. 2 Overall Architecture

Benchmarking the various byte-codes and API entries takes time. At the same time, it is necessary to be precise enough when measuring those execution times. Furthermore, the end user of such a benchmark should be allowed to focus on a few key elements with a higher degree of precision. It is therefore necessary to devise a tool that lets us decide what the most appropriate parameters for the measurement are. Figure 3 depicts the evolution of the raw measurement, as well as its standard deviation, as we take 30 measurements for each available loop size of a test applet. As we can see, the measured execution time of an applet grows linearly with the number of loops being performed on the card (L). On the other hand, the perceived standard deviation of the different measurements varies randomly as the loop size increases, though with fewer and fewer peaks. Since a bigger loop size means a relatively more stable standard deviation, we use both the standard deviation and the mean measured execution time as a basis to assess the precision of the measurement. To assess the reliability of the measurements, we compare the value of the measurement with the standard deviation. The end user needs to specify this ratio between the average measurement and the standard deviation, as well as an optional minimum accepted value, which is set at one second by default. The ratio refers to the precision of the tests, while the minimal accepted value is the minimum duration of each test.


Hence, with both the ratio and the minimal accepted value specified by the end user, we can try different values for the loop size, using a binary search to approach the ideal value.
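The calibration idea can be sketched as follows: binary-search the loop parameter until the mean measured time exceeds the minimum duration and the mean-to-standard-deviation ratio reaches the requested precision. The method names, the monotonicity assumption behind the binary search, and the way measurements are abstracted behind functional interfaces are assumptions of this sketch.

    import java.util.function.IntToDoubleFunction;

    public class Calibrator {

        // measureMean: runs the test for a given P2 a fixed number of times and returns the mean
        // elapsed time in ms; measureStdDev does the same for the standard deviation.
        // Assumes the acceptance criterion is (roughly) monotone in P2, as Figure 3 suggests.
        public static int calibrate(IntToDoubleFunction measureMean,
                                    IntToDoubleFunction measureStdDev,
                                    double minDurationMs, double minRatio) {
            int low = 1, high = 255, best = high;
            while (low <= high) {
                int p2 = (low + high) / 2;
                double mean = measureMean.applyAsDouble(p2);
                double sd = measureStdDev.applyAsDouble(p2);
                boolean ok = mean >= minDurationMs && sd > 0 && mean / sd >= minRatio;
                if (ok) { best = p2; high = p2 - 1; }     // try a smaller, faster loop size
                else    { low = p2 + 1; }                 // need a longer-running loop
            }
            return best;                                  // smallest P2 meeting both criteria
        }
    }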

Fig. 3 Raw measurements and Standard deviation

The Bench Module: For a number of cycles, defined by the calibrate module, the bench module computes the execution time for:

– the VM byte-codes,
– the API methods,
– the JCRE mechanisms (such as transactions).

The Filter Module: Experimental errors lead to noise in the raw measurements. This noise leads to imprecision in the measured values, making it difficult to interpret the results. In the smart card context, the noise is due to crossing the platform, the CAD and the terminal (measurement tools, operating system, hardware). The issues become: how to interpret the varying values, and how to compare platforms when there is some noise in the results. The filter module uses a statistical design to extract meaningful information from noisy data. From multiple measurements of a given operation, the filter module uses the mean value µ of the set of measurements to estimate the actual value, and the standard deviation σ of the measurements to quantify the spread of the measurements around the mean. Moreover, since the measurements follow a normal Gaussian distribution, a confidence interval [µ − (n × σ), µ + (n × σ)], within which the confidence level is 1 − a, is used to help eliminate the measurements outside the confidence interval, where n and a are respectively the number of measurements and the temporal precision, related by traditional statistical laws.
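A minimal sketch of this filtering step is shown below: it computes µ and σ over the raw samples and keeps only the values inside [µ − nσ, µ + nσ], with the multiplier n supplied by the caller.

    import java.util.Arrays;

    public class MeasurementFilter {
        public static double[] filter(double[] samples, double n) {
            double mu = Arrays.stream(samples).average().orElse(0);
            double var = Arrays.stream(samples).map(v -> (v - mu) * (v - mu)).average().orElse(0);
            double sigma = Math.sqrt(var);
            return Arrays.stream(samples)
                         .filter(v -> Math.abs(v - mu) <= n * sigma)  // drop values outside the interval
                         .toArray();
        }
    }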

The Extractor Module: is used to isolate the execution time of the features of interest among the mass of raw measurements gathered so far. Benchmarking byte-codes and API methods within Java Card platforms requires some subtle means in order to obtain execution results that reflect as accurately as possible the actual isolated execution time of the feature of interest. This is because there is a significant and non-predictable lapse of time between the beginning of the measurement, characterized by the starting of the timer on the computer, and the actual execution of the byte-code of interest, and likewise on the way back. Indeed, when performing a request on the card, the execution call has to travel through several software and hardware layers down to the card's hardware and up to the card's VM (and vice versa upon response). This non-predictability depends mainly on the hardware characteristics of the benchmark environment (such as the CAD, the PC's hardware, etc.), operating-system-level interferences, services, and also on the PC's VM. To minimize the effect of these interferences, we need to isolate the execution time of the features of interest while ensuring that their execution time is long enough to be measurable. Maximizing the byte-code execution time requires a test applet structure with a loop having a large upper bound, which will execute the byte-codes for a substantial amount of time. On the other hand, to achieve execution time isolation, we need to compute the isolated execution time of any auxiliary byte-code upon which the byte-code of interest depends. For example, if sadd is the byte-code of interest, then the byte-codes that need to be executed prior to its execution are those in charge of loading its operands onto the stack, like two sspush.


Thereafter, we subtract the execution time of an empty loop and the execution time of the auxiliary byte-codes from that of the byte-code of interest (op_n in Table 1) to obtain the isolated execution time of the byte-code. As presented in Table 1, the actual test is performed within a run method to ensure that the stack is freed after each invocation, thus guaranteeing memory availability.

Table 1: The framework for a byte-code op_n (Java Card applet test case)

    process() {
        i = 0
        while (i <= L) {
            run()
            i = i + 1
        }
    }

    run() {
        op_0
        op_1
        ...
        op_{n-1}
        op_n
    }

In Table 1:
– L represents the chosen upper bound;
– op_n represents the byte-code of interest;
– op_i, for i ∈ [0..n-1], represents the auxiliary byte-codes necessary to perform the byte-code op_n.
To compute the mean isolated execution time of op_n, we perform the following calculation:

    M(op_n) = ( m_L(op_n) − m_L(Emptyloop) ) / L − Σ_{i=0}^{n−1} M(op_i)    (1)

where:
– M(op_i) is the mean isolated execution time of the byte-code op_i;
– m_L(op_n) is the mean global execution time of the byte-code op_n, including interferences coming from other operations performed during the measurement, both on the card and on the computer, with respect to a loop size L. These other operations represent, for example, auxiliary byte-codes needed to execute the byte-code of interest, or OS- and JVM-specific operations. The mean is computed over a significant number of tests, and is the only value that is experimentally measured;
– Emptyloop represents the execution of a case where the run method does nothing.
Formula (1) implies that prior to computing M(op_n) we need to compute M(op_i) for i ∈ [0..n-1].
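Formula (1) translates directly into code; the sketch below is an illustration with assumed parameter names, not the MESURE extractor itself.

    // Isolated mean time of op_n (Eq. 1): loop-normalised difference between the measured loop
    // and the empty loop, minus the already-isolated times of the auxiliary byte-codes.
    public class Extractor {
        public static double isolatedTime(double meanLoopOpN,          // m_L(op_n)
                                          double meanEmptyLoop,        // m_L(Emptyloop)
                                          double loopSize,             // L
                                          double[] auxiliaryIsolated) { // M(op_i), i in [0..n-1]
            double t = (meanLoopOpN - meanEmptyLoop) / loopSize;
            for (double aux : auxiliaryIsolated) {
                t -= aux;
            }
            return t;
        }
    }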

The Profiler Module: In order to define performance references, our framework provides measurements that are specifically adapted to one of the following application domains:

– banking applications,
– transport applications, and
– identity applications.
A JCVM is instrumented in order to count the different operations performed during the execution of a script for a given application. More precisely, this virtual machine is a simulated and proprietary VM executing on a workstation. This instrumentation method is rather simple to implement compared to static-analysis-based methods, and can reach a good level of precision, but it requires a detailed knowledge of the applications and of the most significant scripts. Some features related to byte-codes and API methods appeared to be necessary, and the simulator was instrumented to give useful information such as:
– for the API methods:
  • the types and values of method parameters,
  • the length of arrays passed as parameters;
– for the byte-codes:
  • the type and length of arrays for array-related byte-codes (load, astore, arraylength),
  • the transaction status when invoking the byte-code.

A simple utility tool has been developed to parse the log files generated by the instrumented JCVM; it builds a human-readable tree of method invocations and byte-code usage. Thus, with the data obtained from the instrumented VM, we attribute to each application domain a number that represents the performance of some representative applets of the domain on the tested card. Each of these numbers is then used to compute a global performance mark. We use weighted means for each domain-dependent mark. Those weights are computed by monitoring how much each Java Card feature is used within a regular use of standard applets for a given domain. For instance, if we want to test the card for use in transport applications, we will use the statistics that we gathered with a set of representative transport applets to evaluate the impact of each feature of the card. Consider the measure of a feature f on a card c for an application domain d. For a set of n_M extracted measurements M_{1,c,f}, …, M_{n_M,c,f} considered as significant for the feature f, we can determine a mean M̄_{c,f} modelling the performance of the platform for this feature. Given n_C cards for which the feature f was measured, it is necessary to determine the reference mean execution time R_f, which will then serve as a basis of comparison for all subsequent tests. Hence the "mark" N_{c,f} of a card c for a feature f is the ratio between R_f and M̄_{c,f}:

    N_{c,f} = R_f / M̄_{c,f}    (2)


However, this mark is not weighted. For each pair of a feature f and an application domain d, we associate a coefficient α_{f,d}, which models the importance of f in d. The more a feature is used within typical applications of the domain, the bigger the coefficient:

    α_{f,d} = β_{f,d} / Σ_{i=1}^{n_F} β_{i,d}    (3)

where:
– β_{f,d} is the total number of occurrences of the feature f in typical applications of the domain d;
– n_F is the total number of features involved in the test.
Therefore, the coefficient α_{f,d} represents the proportion of occurrences of the feature of interest f among all the features. Hence, given a feature f, a card c and a domain d, the "weighted mark" W_{c,f,d} is computed as follows:

    W_{c,f,d} = N_{c,f} × α_{f,d}    (4)

The "global mark" P_{c,d} for a card c and a domain d is then the sum of all weighted marks for the card. A general, domain-independent mark for a card is computed as the mean of all the domain-dependent marks. Figure 4 shows some significant byte-codes computed for a card and compared to the reference tests for the financial domain, whereas Figure 5 shows the global results obtained for a tested card. Based on the results of Figure 5, our tested card seems best suited to financial use.
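The scoring described by Eqs. (2)-(4) can be sketched as follows; the method and parameter names are assumptions, and the inputs would come from the reference card set and the instrumented JCVM logs.

    // Marks and global mark of a card for one application domain (Eqs. 2-4).
    public class Profiler {
        // Eq. (2): mark of card c for feature f.
        public static double mark(double referenceTime, double measuredMean) {
            return referenceTime / measuredMean;
        }

        // Eq. (3): weight of each feature in a domain, from per-feature occurrence counts.
        public static double[] weights(double[] occurrences) {
            double total = 0;
            for (double b : occurrences) total += b;
            double[] alpha = new double[occurrences.length];
            for (int i = 0; i < occurrences.length; i++) alpha[i] = occurrences[i] / total;
            return alpha;
        }

        // Eq. (4) summed over all features: global mark P_{c,d} of a card for a domain.
        public static double globalMark(double[] marks, double[] alpha) {
            double p = 0;
            for (int i = 0; i < marks.length; i++) p += marks[i] * alpha[i];
            return p;
        }
    }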

Fig. 4 An example of a financial-dependent mark

Fig. 5 Computing a global performance mark

4. Validation of the tests

4.1 Statistical correctness of the measurements

The expected distribution of any measurement is a normal distribution. The results being time values, if the distribution is normal then, according to Lilja [8], the arithmetic mean is an acceptable representative time value for a certain number of measurements (Lilja recommends at least 30 measurements). Nevertheless, Rehioui [12] pointed out that the results obtained via methods similar to ours were not normally distributed on IBM JCOP41 cards. Erdmann [6] cited similar problems with Infineon smart cards. When we measured both the reference test and the operation test on several smart cards from different providers, using different CADs on different OSs, none of the time measurements had a normal distribution (see Figure 6 for a sample reference test performed on a card). The results were similar from one card to another in terms of distribution, even for different time values and for different loop sizes. Changes in CAD, in the host-side JVM, or in task priority made no difference to the experimental distribution curve. Testing the cards on Linux and on Windows XP or Windows Vista, on the other hand, did show differences. Indeed, the recurring factor when measuring the performance with a terminal running Linux with PC/SC Lite and a CCID driver is the gap between peaks of the distribution. The peaks are often separated by 400 ms and 100 ms steps, which match some parts of the public code of PC/SC Lite and the CCID driver.


With other CADs, the distribution shows similar steps with respect to the CAD driver source code. The peaks in the distribution of the measurements obtained on Windows are separated by 0.2 ms steps (see Figure 7). Without access to the source code of the PC/SC implementation on Windows or to the driver source code, we can deduce that there must be some similarities between the proprietary versions and the open-source versions. In order to check the normality of the results, we isolated some of the peaks of the distributions obtained from our measurements and used the resulting data sets. The Shapiro-Wilk test is a well-established statistical test used to verify the null hypothesis that a sample of data comes from a normally distributed population. The result of such a test is a number W ∈ [0, 1], with W close to 1 when the data is normally distributed. No set of values obtained by isolating a peak within a distribution gave us a satisfying W close to 1. For instance, considering the peak in Figure 8, W = 0.8442, which is the highest value for W that we observed, with other values ranging as low as W = 0.1384. We conclude that the measurements we obtain, even if we consider a peak of the distribution, are not normally distributed.

4.2 Validation through a precision CAD

We used a Micropross MP300 TC1 reader to verify the accuracy of our measurements. This is a smart card test platform designed specifically to give accurate results, particularly for timing analysis, and its results are essentially unaffected by noise on the host machine. With this platform we can precisely monitor the polarity changes on the contacts of the smart card that mark the I/Os. We measured the time needed by a given smart card to reply to the same APDUs that we used with a regular CAD. Applying the Shapiro-Wilk test to the measured time values, we observed W ≥ 0.96, much closer to what we expected in the first place, so we can assume that the values are normally distributed for both the operation measurement and the reference measurement. We subtracted each reference measurement value from each sadd operation measurement value and divided by the loop size to obtain a set of time values representing the performance of an isolated sadd bytecode. These new time values are also normally distributed (W = 0.9522), with an arithmetic mean of 10611.57 ns and a standard deviation of 16.19524 ns. According to [6], since we are dealing with a normal distribution, this arithmetic mean is an appropriate estimate of the time needed to perform a sadd bytecode on this smart card.

Using a more conventional CAD (here a Cardmann 4040, although we tried five different CADs), we performed 1000 measurements of the sadd operation test and 1000 measurements of the corresponding reference test. By subtracting each value obtained with the reference test from each value of the sadd operation test and dividing by the loop size, we produced a new set of 1,000,000 time values, with an arithmetic mean of 10260.65 ns and a standard deviation of 52.46025 ns. The value found with a regular CAD under Linux, without any priority modification, is only 3.42% away from the more accurate value found with the precision reader. Although these measurements are not normally distributed (W = 0.2432), the arithmetic mean of our noisy experimental measurements appears to be a good approximation of the actual time this smart card takes to perform a sadd. The same test under Windows Vista gave a mean time of 11380.83 ns with a standard deviation of 100.7473 ns, which is 7.24% away from the accurate value. We conclude that, although our measurements are noisy, they retain a useful degree of accuracy and precision even in a potentially very noisy test environment.
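
The derivation of the isolated sadd times from the raw measurements can be sketched as follows: each reference measurement is subtracted from each operation measurement, the difference is divided by the loop size, and the mean and standard deviation are taken over the resulting set. Class and method names are illustrative only.

import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of the cross-subtraction described above: every reference
 * measurement is subtracted from every operation measurement and the result
 * is divided by the loop size, yielding one estimated per-bytecode time for
 * each (operation, reference) pair.
 */
public class BytecodeTiming {

    /** Derive the per-bytecode time set (e.g. 1000 x 1000 -> 1,000,000 values). */
    public static List<Double> isolatedBytecodeTimes(List<Long> operationNs,
                                                     List<Long> referenceNs,
                                                     long loopSize) {
        List<Double> times = new ArrayList<>(operationNs.size() * referenceNs.size());
        for (long op : operationNs) {
            for (long ref : referenceNs) {
                times.add((op - ref) / (double) loopSize);
            }
        }
        return times;
    }

    /** Arithmetic mean of the derived set. */
    public static double mean(List<Double> values) {
        double sum = 0.0;
        for (double v : values) {
            sum += v;
        }
        return sum / values.size();
    }

    /** Sample standard deviation of the derived set. */
    public static double standardDeviation(List<Double> values) {
        double m = mean(values);
        double sumSq = 0.0;
        for (double v : values) {
            sumSq += (v - m) * (v - m);
        }
        return Math.sqrt(sumSq / (values.size() - 1));
    }
}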

5. Conclusion

With the wide use of Java in smart card technology, there is a need to evaluate the performance and characteristics of these platforms in order to ascertain whether they fit the requirements of the different application domains. At present there is no other open source benchmarking solution for Java Card. The objective of our project [10] is to meet this need by providing a set of freely available tools which, in the long term, can serve as a benchmark standard. In this paper we have presented the overall benchmarking framework. Despite measurement noise, our framework achieves a useful degree of accuracy and precision, and it does not require a costly reader to evaluate the performance of a smart card accurately. Java Card 3.0 is a new step forward for this community; our framework should still be relevant to the classic edition of the platform, but we have yet to test it.


Fig. 6 Measurements of a reference test as the tests proceed under Linux, and the corresponding distribution curve (L = 412)

Fig. 7 Distribution of sadd operation measurements under Windows Vista, and a close-up look at the distribution (L = 902)

Fig. 8 Distribution of the measurements of a reference test: close-up look at a peak in the distribution (L = 412)

References
[1] Java Card 3.0 Specification, March 2008.
[2] Pierrick Arlot, "Le marché de la carte à puce ne connaît pas la crise", technical report, Electronique International, 2008.
[3] Zhiqun Chen, Java Card Technology for Smart Cards: Architecture and Programmer's Guide, Addison-Wesley, 2000.
[4] Jordy Castellà-Roca, Josep Domingo-Ferrer, Jordi Herrera-Joancomartí, and Jordi Planes, "A performance comparison of Java Cards for micropayment implementation", in CARDIS, pages 19–38, 2000.
[5] Jean-Michel Douin, Pierre Paradinas, and Cédric Pradel, "Open Benchmark for Java Card Technology", in e-Smart Conference, September 2004.
[6] Monika Erdmann, "Benchmarking von Java Cards", Master's thesis, Institut für Informatik der Ludwig-Maximilians-Universität München, 2004.
[7] Mario Fischer, "Vergleich von Java- und Native-Chipkarten: Toolchains, Benchmarking, Messumgebung", Master's thesis, Institut für Informatik der Ludwig-Maximilians-Universität München, 2006.
[8] David J. Lilja, Measuring Computer Performance: A Practitioner's Guide, Cambridge University Press, 2000.
[9] Constantinos Markantonakis, "Is the performance of smart card cryptographic functions the real bottleneck?", in 16th International Conference on Information Security: Trusted Information: The New Decade Challenge, volume 193, pages 77–91, Kluwer, 2001.
[10] The MESURE project website, http://mesure.gforge.inria.fr.
[11] Pierre Paradinas, Samia Bouzefrane, and Julien Cordry, "Performance evaluation of Java Card bytecodes", in Workshop in Information Security Theory and Practices (WISTP), Springer, Heraklion, Greece, 2007.
[12] Karima Rehioui, "Java Card Performance Test Framework", IBM Research internship report, Université de Nice Sophia-Antipolis, September 2005.
[13] JSR 268: http://jcp.org/en/jsr/detail?id=268


Samia Bouzefrane is an associate professor at the CNAM (Conservatoire National des Arts et Métiers) in Paris. She received her Ph.D. in Computer Science in 1998 from the University of Poitiers (France). She joined the CEDRIC Laboratory of the CNAM in September 2002 after four years at the University of Le Havre. After extensive research work on real-time systems, she is now interested in the smart card area. She is also the author of two books: a French/English/Berber dictionary (1996) and a book on operating systems (2003). She is currently a member of ACM-SIGOPS, France Chapter.

Julien Cordry is a PhD student in the SEMpIA team (embedded and mobile systems towards ambient intelligence) of the CNAM in Paris. The topic of his research is the performance evaluation of Java Card platforms. He took part in the MESURE project, a collaborative effort between the CNAM, the University of Lille and Trusted Labs. He gives lectures at the CNAM, at the ECE (Ecole Centrale d'Electronique) and at EPITA (a computer science engineering school). In September 2007 the MESURE project received the Isabelle Attali Award from INRIA, which honors the most innovative work presented at the e-Smart conference.

Pierre Paradinas is currently the Technology Development Director at INRIA, France. He is also a Professor at the CNAM (Paris), where he holds the chair of Embedded and Mobile Systems. He received a PhD in Computer Science from the University of Lille (France) in 1988 for work on smart cards and health applications. He joined Gemplus in 1989, where he was successively a researcher, internal technology auditor, and Advanced Product Manager (launching the card based on a database engine, CQL), and then Director of RD2P, a research lab run jointly with universities and the national research centre. He set up the Gemplus Software Research Lab in 1996 and was appointed Technology Partnership Director in 2001, based in California until June 2003. He was the Gemplus representative at W3C, ISO/AFNOR, OpenCard Framework and the Java Community Process, co-editor of part 7 of ISO 7816, and Director of the European-funded Cascade project, in which the first 32-bit RISC microprocessor with Java Card was produced.
