
International Journal of Services Computing (ISSN 2330-4472) Vol. 3, No. 2, 2015

http://www.hipore.com/ijsc


IJSC Editorial Board Editors-in-Chief Andrzej Goscinski, Deakin University, Australia

Jia Zhang, Carnegie Mellon University – Silicon Valley, USA

Associate Editor-in-Chief Zibin Zheng, The Chinese University of Hong Kong

Editorial Board Boualem Benatallah, University of New South Wales, Australia

Rajdeep Bhowmik, Cisco Systems Inc., USA

Irena Bojanova, University of Maryland University College, USA

Hong Cai, ZTE, USA

Rong Chang, IBM T.J. Watson Research Center, USA

Shiping Chen, CSIRO, Australia

Chi-Hung Chi, CSIRO, Australia

Ying Chen, Wisedu Information Technology, China

Ernesto Damiani, University of Milan, Italy

Schahram Dustdar, Vienna University of Technology, Austria

Ephraim Feig, Independent Consultant, USA

Aditya Ghose, University of Wollongong, Australia

Claude Godart, Univ. Nancy, Lorraine Univ., France

Nils Gruschka, University of Applied Science Kiel, Germany

Mohand-Said Hacid, Université Claude Bernard Lyon 1, France

Jun Han, Swinburne University of Technology, Australia

Akhil Kumar, Pennsylvania State University, USA

Patrick Hung, University of Ontario Institute of Technology, Canada

San-Yih Hwang, National Sun Yat-sen University, Taiwan

Geetika Lakshmanan, IBM T.J. Watson Research Center, USA

Donghui Lin, Kyoto University, Japan

Shiyong Lu, Wayne State University, USA

Michael Lyu, The Chinese University of Hong Kong, Hong Kong

Stephan Marganiec, University of Leicester, UK

Louise Moser, University of California Santa Barbara, USA

Lorenzo Mossucca, Istituto Superiore Mario Boella (ISMB), Italy

Sushil Prasad, Georgia State University, USA

Lakshmish Ramaswamy, University Of Georgia, USA

Krishna Ratakonda, IBM Watson Research Center, USA

Anna Ruokonen, Penn State University, USA

George Spanoudakis, City University London, UK

Vijay Varadharajan, Macquarie University, Australia

Wei Tan, IBM T.J. Watson Research Center, USA

Mingdong Tang, Hunan University of Science and Technology, China

Hongbing Wang, Southeast University, China

Shangguang Wang, Beijing University of Posts and Telecommunications, China

Yan Wang, Macquarie University, Australia

Jian Wu, Zhejiang University, China

Xiaofei Xu, Harbin Institute of Technology, China

Yuhong Yan, Concordia University, Canada

Bing Bing Zhou, The University of Sydney, Australia


International Journal of

Services Computing

2015, Vol. 3, No.2

Table of Contents

iii. EDITOR-IN-CHIEF PREFACE

Andrzej Goscinski, Deakin University, Australia

Jia Zhang, Carnegie Mellon University – Silicon Valley, USA

vi. Call for Articles: IJSC Special Issue on Application-Driven Services Innovations

RESEARCH ARTICLES

1 DPOC: An Optimization Strategy of EV Efficient Travelling

Lei Shi, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and

Telecommunications, Beijing

Jiayuan Li, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and

Telecommunications, Beijing

Zhihan Liu, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and

Telecommunications, Beijing

Jinglin Li, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and

Telecommunications, Beijing

10 Scalable Algorithm for the Service Selection Problem

Yanik Ngoko, Université de Paris 13, LIPN 99 Avenue Jean Baptiste Clément, 93430 Villetaneuse

Christophe Cérin, Université de Paris 13, LIPN 99 Avenue Jean Baptiste Clément, 93430 Villetaneuse

Alfredo Goldman, DCC-IME-USP, Rua do Matão 1010, Brazil

25 ICONCUBE: A Location-based Mobile Cloud System for Meeting Organizers and Participants

Guo Chi, Wuhan University, China

Jieru Zeng, Wuhan University, China

Xuan Liu, Wuhan University, China

Jingsong Cui, Wuhan University, China


Editor-in-Chief Preface:

Services Discovery and Management

Andrzej Goscinski, Deakin University, Australia

Jia Zhang, Carnegie Mellon University – Silicon Valley, USA

Welcome to the International Journal of Services Computing (IJSC). From the technology foundation perspective, Services Computing covers the science and technology needed to bridge the gap between Business Services and IT Services, and between theory and development and deployment. All topics regarding the lifecycle study and management of Web-based services align with the theme of IJSC. Specifically, we focus on:

1. Web-based services featuring Web services modeling, development, publishing, discovery,

composition, testing, adaptation, and delivery; and Web services technologies as well as standards;

2. Services innovation lifecycle that includes enterprise modeling, business consulting, solution

creation, services orchestration, optimization, management, and marketing; and business process

integration and management;

3. Cloud services featuring modeling, developing, publishing, monitoring, managing, and

delivering XaaS (everything as a service) in the context of various types of cloud environments; and

4. Mobile services featuring development, publication, discovery, orchestration, invocation, testing,

delivery; and certification of mobile applications and services.

IJSC is designed to be an important platform for disseminating high-quality research on the above topics in a timely manner and to provide an ongoing platform for continuous discussion of research published in this journal. To ensure quality, IJSC only considers expanded versions of papers presented at high-quality conferences, key survey articles that summarize the research done so far and identify important research issues, and some visionary articles. At least two IJSC Editorial Board members review each extended version. Once again, we will make every effort to publish articles in a timely manner.

This issue collects the extended versions of five papers published at IEEE International

Conference on Web Services (ICWS) and IEEE International Conference on Services Computing

(SCC) in the general area of services discovery and management.

The first article, "DPOC: An Optimization Strategy of EV Efficient Travelling" by Shi, Li, Liu and Li, proposes an optimization strategy, called dynamic planned ordered charging (DPOC for short), for EVs to travel more efficiently: the route is changed dynamically to cut down time spent on driving and on necessary charging, instead of waiting in line or overcharging.

The second article is titled "Scalable Algorithm for the Service Selection Problem" by Ngoko, Cérin and Goldman. The authors are interested in fast algorithms for the service selection problem: given an abstract service composition, the objective is to choose the best services for implementing the composition so as to minimize a given penalty function. Their work contributes to both the sequential and parallel resolution of this problem.

The third article is titled "ICONCUBE: A Location-based Mobile Cloud System for Meeting Organizers and Participants" by Guo, Zeng, Liu and Cui. The authors designed and implemented iConCube, a mobile cloud computing system for meeting organizers and participants that integrates Location-Based Services and indoor navigation.


We would like to thank the authors for their efforts in delivering these five quality articles. We

would also like to thank the reviewers, as well as the Program Committee of IEEE ICWS and SCC

for their help with the review process.

About the Publication Lead

Liang-Jie (LJ) Zhang is Senior Vice President, Chief Scientist, & Director of

Research at Kingdee International Software Group Company Limited, and

director of The Open Group. Prior to joining Kingdee, he was a Research Staff

Member and Program Manager of Application Architectures and Realization at

IBM Thomas J. Watson Research Center as well as the Chief Architect of

Industrial Standards at IBM Software Group. Dr. Zhang has published more

than 140 technical papers in journals, book chapters, and conference

proceedings. He has 40 granted patents and more than 20 pending patent

applications. Dr. Zhang received his Ph.D. in Pattern Recognition and Intelligent Control from

Tsinghua University in 1996. He chaired the IEEE Computer Society's Technical Committee on

Services Computing from 2003 to 2011. He also chaired the Services Computing Professional

Interest Community at IBM Research from 2004 to 2006. Dr. Zhang has served as the Editor-in-

Chief of the International Journal of Web Services Research since 2003 and was the founding

Editor-in-Chief of IEEE Transactions on Services Computing. He was elected as an IEEE Fellow in

2011, and in the same year won the Technical Achievement Award "for pioneering contributions to

Application Design Techniques in Services Computing" from IEEE Computer Society. Dr. Zhang

also chaired the 2013 IEEE International Congress on Big Data and the 2009 IEEE International

Conference on Cloud Computing (CLOUD 2009).

About the Editor-in-Chief

Dr. Andrzej Goscinski is a full Professor in the School of Information

Technology, Deakin University, Australia, where he directs research programs

in clouds and cloud computing, parallel processing, virtualization, security,

autonomic and service computing, and in general, distributed systems and

applications. From January 1993 to December 2001, Dr. Goscinski completed

tenure as the Head of School, and from 2004 he has led his research group to

successfully concentrate their research on autonomic grids based on SOA, the

abstraction of software and resources as a service, and cloud computing. A

major achievement in the area of autonomic grids based on SOA was the development of the

concept of a broker that led to its use in clouds. Furthermore, a major achievement in the area of the

abstraction of software and resources as a service and cloud computing was the development of the

Resource Via Web Services (RVWS) framework that contains service’s dynamic state and

characteristics, and service publishing, selection and discovery; the contribution to level of cloud

abstraction in the form of CaaS (Cluster as a Service); comparative study of High Performance

Computing clouds, and the development of H2D hybrid cloud. Currently, he concentrates his

research on exposing HPC applications as services, publishing them to a broker, and executing them

in a SaaS cloud by non-computing specialists. The results of this research have been published in

high quality journals and conference proceedings. Dr. Goscinski serves as Associate Editor of IEEE

Transactions on Service Computing; Associate Editor of Inderscience’s International Journal on


Cloud Computing; member of the Editorial Board of Springer's Future Generation Computer

Systems; and General, Program Chair, and Honorary Chair of IEEE Services and Cloud

Conferences, and Distributed and Parallel Systems and Applications.

Dr. Jia Zhang is an Associate Professor at Carnegie Mellon University -

Silicon Valley. Her recent research interests center on services computing, with

a focus on scientific workflows, net-centric collaboration, Internet of Things,

and big data management. She has co-authored one textbook titled "Services

Computing" and has published over 130 refereed journal papers, book chapters,

and conference papers. She is now an Associate Editor of IEEE Transactions on

Services Computing (TSC) and of International Journal of Web Services

Research (JWSR), and Editor-in-Chief of International Journal of Services

Computing (IJSC). She earned her Ph.D. in computer science from the

University of Illinois at Chicago.


Call for Articles

IJSC Special Issue on Application-Driven Services Innovations

Services computing is a dynamic discipline. It has become a valuable resource and mechanism for practitioners and researchers to explore the value of services in all kinds of business scenarios and scientific work. From an industry perspective, IBM, SAP, Oracle, Google, Microsoft, Yahoo, and other leading software and Internet service companies have launched their own innovation initiatives around services computing.

The International Journal of Services Computing (IJSC) covers state-of-the-art technologies and best practices of Services Computing, as well as emerging standards and research topics that will define the future of Services Computing.

IJSC is now launching a special issue focused on application-driven services innovations. Papers should generally report results from real-world development, deployment, and experience delivering SOA or web services solutions. They should also provide information such as "lessons learned" or general advice gained from the experience of services computing. Other appropriate sections are general background on the solutions, an overview of the solutions, and directions for future innovation and improvement of services computing.

Authors should submit papers (8 pages minimum, 24 pages maximum) related to the following practical topics:

1. Architecture practice of services computing

2. Services management practice

3. Emerging services algorithms

4. Security application of services computing

5. Web services application practice

6. Micro-service practice

7. Quality of service for web services

Please note that this special issue mainly considers papers from real-world practice. In addition, IJSC only considers extended versions of papers published at reputable related conferences. Sponsored by the Services Society, published IJSC papers will be made accessible for ease of citation and knowledge sharing, in addition to paper copies. All published IJSC papers will be promoted and recommended to potential authors at future editions of related reputable conferences such as IEEE BigData Congress, ICWS, SCC, CLOUD, SERVICES and MS.

If you have any questions or queries on IJSC, please send email to IJSC AT ServicesSociety.org.


DPOC: AN OPTIMIZATION STRATEGY OF EV EFFICIENT TRAVELLING

Lei Shi, Jiayuan Li, Zhihan Liu, Jinglin Li

State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and

Telecommunications, Beijing

[email protected], [email protected], [email protected], [email protected]

Abstract

The popularity of electric vehicles (EVs) greatly helps to solve increasingly serious energy and environmental problems, but EVs also have limitations such as short driving range and long charging time. This paper proposes an optimization strategy, called dynamic planned ordered charging (DPOC for short), for EVs to travel more efficiently: the route is changed dynamically to cut down time spent on driving and on necessary charging, instead of waiting in line or overcharging. An energy reachable graph is built first to protect vehicles from breaking down midway. Travel time, including driving time, charging time and waiting time, is taken as the optimization goal, and its modeling and solution are presented. The two most important contributions of this paper are the route re-planning mechanism and the proposal of charging station load together with how it can be calculated. Besides, reservation for charging is also introduced to achieve global optimization. The planning of efficient travel can then be converted into a general graph theory problem by putting time costs onto the edge weights of the graph. According to the simulation and experiments, the strategy proposed in this paper is superior to the compared strategies in many aspects.

Keywords: efficient travel strategy; charging station load; charging reservation; dynamic planned ordered charging


1. INTRODUCTION

With the growing popularity and development of EVs, problems such as short driving range, long charging time and inadequate charging infrastructure become particularly prominent. Currently, the full-charge driving range of EVs averages about 200 km (Yang, 2011), and it decreases with battery cycles. As a result, an EV on a long-distance trip needs to be recharged once or several times. However, even in fast charging mode, EVs need at least tens of minutes to be fully recharged, which is much more time-consuming than refueling a conventional vehicle. Besides, due to the inadequate charging infrastructure, unplanned charging puts high pressure on particular charging stations. The above factors have greatly hindered the promotion and development of EVs. Therefore, an efficient travel strategy is of great significance. Here efficiency involves two aspects. On one hand, each electric vehicle prefers to re-plan its path, if necessary, to cut down the time spent on driving (which is considerable in an urban traffic network) and on necessary charging, instead of waiting in line or overcharging. On the other hand, from a global view, load balancing and resource utilization of the charging network are important issues. This paper mainly proposes a strategy for EVs to achieve efficient travel.

The remainder of this paper is organized as follows. Section 2 introduces related work. The model and solution are given in Section 3, and experimental results that verify the feasibility of the strategy are provided in Section 4. Finally, Section 5 presents conclusions and future work.

2. PROBLEM STATEMENT

Electric vehicles are very promising, and efficient travel for EVs is of great significance. An efficient travel strategy not only saves time and energy, but also avoids congestion and overload. Therefore, route planning and travel instruction for EVs have attracted large numbers of researchers.

Hua Qin and Wensheng Zhang (Hua, 2011) proposed a solution that takes minimizing each vehicle's waiting time as the optimization goal. However, the applicable scenario in that paper is vehicles travelling on highways, where charging piles are distributed at expressway service areas. That is, the method tells how to dynamically choose which station to recharge at along a fixed path between the starting point and the destination; as a result, it cannot help improve travel efficiency in cities. Jing Li (Li, 2013) proposed a multi-objective path planning method for EVs in which time can be taken as the optimization goal, but the paper did not consider charging stations' load, which may lead to waiting in line and congestion at stations. The method may therefore direct many vehicles with the same starting point and destination onto the same path, and a path that is smooth at first may gradually become congested. Florian Häusler (Häusler, 2014) presented a new approach to reduce the potential for EVs waiting in line at charging stations, based on an analogy between EVs/charging stations and mobile phones/base stations. This model eliminates the central control server, but it is more suitable for EVs in conventional travel than for path planning in


advance. Ao Sun, Guibin Zhu and Tie Jang (Sun, 2012) put forward a minimum-time path algorithm based on traffic forecasting information for dynamic vehicle navigation, and introduced the realization of a dynamic path navigation strategy. This algorithm can provide real-time, highly efficient, strongly predictive route planning, but it neglects important factors, such as traffic accidents and traffic congestion, that may cause the planned path to fail. Besides, since it takes only traffic load into consideration, it is more suitable for traditional vehicles than for EVs.

This paper focuses mainly on how to work out an efficient travel plan in advance for EVs whose travelling range is random and whose energy is limited.

3. PROPOSED NEW MODEL AND METHOD

3.1 Modelling

This paper proposes an optimization strategy of efficient travel for EVs and mainly focuses on minimizing each electric vehicle's time cost. The time cost of an EV consists of driving time, waiting time and charging time.

3.1.1 Driving time

In city transport, driving time is mainly determined by path distance and traffic load. In general, a vehicle does not drive at a constant velocity: its speed is affected by road traffic states such as road condition and traffic flow, and some roads have explicit speed limits. In the time dimension, the traffic condition of each road in the urban road network changes dynamically, so the speed of a vehicle changes dynamically and constantly as well. In most cases, vehicles drive slower than expected, particularly in the face of traffic congestion.

There is a large body of work on traffic flow prediction (Kono, 2008), and even on route planning based on real-time, dynamic traffic (Faez, 2008). According to the prediction model, the traffic condition remains approximately unchanged over a period of time, generally around 5 minutes, and the average speed on every road can be regarded as a constant. Based on these studies, the time cost of each link that a vehicle will pass through in the future can be estimated. Therefore, the following equation can be obtained:

T_{driving} = \sum_{j} T_{link_j}^{t_i}

Here, T_{driving} is the driving time spent on the road, j is the index of a link in the particular path (which consists of several links), i is the index of the time slice, t_i represents a discrete time interval in the time dimension, and T_{link_j}^{t_i} is the time spent on the jth link during t_i.

Taking the variety of traffic conditions into consideration, each link of the road network has a threshold of time cost that indicates the link is congested. When the time cost of a link exceeds its threshold, it will continue to increase, even exponentially, over the next period of time. In this case, T_{driving} should be recalculated immediately to update the planned route and eventually reduce the time spent on the road. A long link allows a large threshold; therefore, the threshold of a link is defined to be positively correlated with the link's length, as shown in the equation below.

\tau_{link_j} = K \cdot L_{link_j}

Here, \tau_{link_j} is the jth link's threshold, K is a certain coefficient, and L_{link_j} represents the jth link's length.

When the road traffic condition is stable, and the traffic flow and vehicle speed of each road section float only within a small range, the predicted time slice can be replaced by the current time. The equation can therefore be simplified as follows, and it is still of strong guiding significance:

T_{driving} = \sum_{j} T_{link_j}
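The per-link summation above can be sketched in a few lines of Python. The data layout and function names here are hypothetical (the paper gives no code); this is only an illustration of summing estimated link times, with a crude fallback when no estimate exists for a time slice.

```python
# Sketch of the driving-time model: each link j has an estimated traversal
# time per time slice t_i; the driving time of a path is the sum over links.

def driving_time(path_links, link_times, start_slice=0):
    """path_links: ordered list of link ids.
    link_times: dict mapping (link_id, slice_index) -> estimated seconds."""
    total, t = 0.0, start_slice
    for j in path_links:
        # Fall back to the base estimate if the prediction horizon runs out.
        total += link_times.get((j, t), link_times[(j, 0)])
        t += 1  # coarse approximation: advance one slice per link
    return total

# Stable-traffic simplification: effectively a single estimate per link.
links = {("l1", 0): 120.0, ("l2", 0): 300.0, ("l3", 0): 90.0}
print(driving_time(["l1", "l2", "l3"], links))  # 510.0
```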

3.1.2 Waiting time

The waiting time of EVs at charging stations is closely related to the charging station load, so from a macro view waiting time can be estimated from the load: the heavier the load, the longer the time cost. Following Little's law (Denning, 1978), this paper designs a quantitative model of charging station load, where L represents the load of a charging station.

L = (T_{avg} / P_{total}) \cdot (P_{total} - P_{avl}),   if P_{avl} > 0
L = (T_{avg} / P_{total}) \cdot (q_{num} + 1)^2,         if P_{avl} = 0

Here, P_{total} is the number of charging piles, T_{avg} is the average charging time of EVs, q_{num} represents the number of queuing vehicles, and P_{avl} represents the number of idle charging piles.

When there exist idle charging piles, the waiting time should theoretically be zero; that is, vehicles can be recharged immediately without queuing. However, the number of idle charging piles describes only the real-time load, not the risk: even if two piles are idle at some time point, ten vehicles may arrive to charge one minute later. As a result, even when there are idle charging piles, risk should be considered when quantifying load. The risk depends on the total number of charging piles and on how many are occupied: a large total number and a small occupied number mean small risk, and vice versa. When there are no available charging piles, newly arriving vehicles must wait in line to be recharged, and the load is related to the number of queuing vehicles. Ideally, the load of a charging station could be represented as L_{idl}, which equals (T_{avg} / P_{total}) \cdot (q_{num} + 1). However, due to vehicles' different


remaining electricity and unequal required charging times, using L_{idl} to describe the load may easily lead to conflicts. Because of these conflicts, a charging pile can hardly achieve full utilization, and vehicles may waste a large amount of time waiting to charge, which results in time debris for a charging pile. Obviously, the larger the number of queuing vehicles, the greater the chance of conflicts. In this paper, the square of the number of queuing vehicles is therefore used to quantify the load.

The waiting time at a charging station can be computed from its load: a high load directly leads to a long time waiting in line. Since actual cost, risk cost and conflicts have already been considered in the definition of the load, the waiting time of a charging station is defined to be positively correlated with its load, as shown in the following equation, where \alpha is a certain coefficient:

T_{queue} = \alpha \cdot L
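The load model and the waiting-time relation can be sketched as follows. Note that the piecewise formula is a reconstruction of an extraction-damaged equation, and the coefficient `alpha` is left unspecified by the paper, so treat this as an illustrative sketch rather than the authors' implementation.

```python
def station_load(p_total, p_avl, q_num, t_avg):
    """Charging-station load (reconstruction of the paper's piecewise model).

    With idle piles, load reflects the occupancy risk; with none, the queue
    length is squared to penalize scheduling conflicts."""
    if p_avl > 0:
        return t_avg * (p_total - p_avl) / p_total
    return t_avg * (q_num + 1) ** 2 / p_total

def waiting_time(load, alpha=1.0):
    # T_queue is taken as positively correlated (here, linear) with the load.
    return alpha * load

# 10 piles with 2 idle: moderate load; 10 piles, none idle, 4 queuing: heavy.
print(station_load(10, 2, 0, 30.0))  # 24.0
print(station_load(10, 0, 4, 30.0))  # 75.0
```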

3.1.3 Charging time

Because charging stations are scarce and the charging time of an EV cannot be ignored, the battery of a vehicle is not fully charged at charging stations; it is charged only as much as necessary to meet the travel requirement. As a result, charging time is determined by a vehicle's remaining electricity, its required electricity, and the mode in which it is charged (fast filling or slow filling) (Yang, 2011). Assuming that the remaining electricity and required electricity of a vehicle are known, the charging time is then determined only by the charging mode; fast or slow filling leads to different charging times. Charging time is represented as follows:

T_{charging} = (E_{require} - E_{remain}) / R_{charging}

Here, E_{require} is the electricity required to complete the trip, E_{remain} is the remaining electricity, and R_{charging} represents the charging rate of the charging station's piles.

T_{total} is the time cost from the starting point all the way to the destination, and it is calculated according to the following equation:

T_{total} = T_{driving} + \sum_{k} (T_{queue}^{k} + T_{charging}^{k})

where T_{driving} is the driving time spent on the links of the particular path, k is the index of a charging station, T_{queue}^{k} is the queuing time at the kth charging station and T_{charging}^{k} is the charging time there.
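Charging time and total travel time follow directly from the two formulas above; a minimal sketch (hypothetical names and units, since the paper prescribes no implementation):

```python
def charging_time(e_require, e_remain, r_charging):
    """T_charging = (E_require - E_remain) / R_charging."""
    return max(e_require - e_remain, 0.0) / r_charging

def total_time(t_driving, stops):
    """T_total = T_driving + sum over stations k of (T_queue_k + T_charging_k).

    stops: list of (t_queue, t_charging) pairs, one per planned charging stop."""
    return t_driving + sum(tq + tc for tq, tc in stops)

# One stop: 40 kWh needed, 10 kWh remaining, 50 kW fast charging.
tc = charging_time(40.0, 10.0, 50.0)           # 0.6 hours
print(round(total_time(2.0, [(0.25, tc)]), 2))  # 2.85
```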

3.2 Solution

Jing Li put forward the definition of the energy reachable graph and converted the route planning problem into a graph theory problem. Given the initial capacity of a vehicle and the geographic information of the charging infrastructure, an energy reachable graph can be built. It consists of the starting point, the destination and multiple recharging points, and it is a sub-graph of the original network. Calculating the shortest path on the energy reachable graph protects EVs from breaking down due to energy exhaustion.

To satisfy the premise of the energy restriction, this paper first sets up a directed sub-graph as the energy reachable graph. Then time is added to the graph as edge weights. After these steps, a general path-finding algorithm (Dijkstra, 1959) can be used to optimize travelling time.
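A deliberately simplified sketch of the energy reachable sub-graph construction (the data layout and function names are hypothetical): an edge between the start, the destination or charging stations is kept only if its distance can be covered on the relevant charge, so no planned route can strand the vehicle midway.

```python
def energy_reachable_graph(edge_costs, reachable_range):
    """edge_costs: {(u, v): (time_cost, distance)} for directed edges between
    the start, the destination and charging-station nodes.
    reachable_range: distance coverable on the relevant charge level.
    Returns the energy-feasible sub-graph, keyed the same way."""
    return {uv: (t, d) for uv, (t, d) in edge_costs.items()
            if d <= reachable_range}

edges = {("O", "s1"): (10.0, 150.0), ("O", "D"): (18.0, 260.0),
         ("s1", "D"): (9.0, 120.0)}
# With a 200 km range, the direct O->D edge (260 km) is infeasible.
print(sorted(energy_reachable_graph(edges, 200.0)))  # [('O', 's1'), ('s1', 'D')]
```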

How to introduce time into a directed graph is discussed below. First, the time cost of a link can be directly expressed as an edge weight in the directed graph, while waiting time and charging time can be expressed as node weights. In general, if node weights and edge weights are independent of each other, a node's weight can be added to the links pointing to that node so as to simplify the problem (West, 2001). After this simplification, the problem becomes a typical graph theory problem with only link weights and no node weights. How to calculate driving time correctly, and whether waiting time and charging time can be converted to edge weights of a directed graph, are discussed below.

3.2.1 Driving time

The driving time of an EV on the road is a considerable part of its travel, particularly when traffic is congested. When the traffic flow floats within a small range, the estimated time cost of each link is acceptably accurate. However, as time goes by, these estimates may be at risk of failure.

An example is shown in Fig. 1 below. O is the starting point, D represents the destination, and n stands for a node (an ordinary node or a charging-station node) in the graph. The edge weight in the graph gives the time cost of the corresponding link. The planned route for an electric vehicle V is link2 - n2 - link6 - n4 - link9. V is driving on link2 when, unfortunately, a traffic accident takes place on link6, causing a sharp rise in the time cost of link6. If V travels along the previous route, it will waste a lot of time on link6 and its driving time on the road may increase greatly. Eventually, the total time cost will become intolerable.

Therefore, it is time to re-plan the route for V based on the newly estimated time cost of each link. According to the planning result, link2 - n2 - link3 - n1 - link4 - n3 - link8 is


Figure 1. Calculation of driving time


International Journal of Services Computing (ISSN 2330-4472) Vol. 2, No. 5, April - June 2015


the new planned route, whose total time cost is optimized and shortest in the directed graph at this moment.

A general method is summarized here. The time cost of each link should be re-estimated every period t, and the edge weights of the directed graph updated immediately, where t is not a constant but is determined by the state of road traffic: frequent and violent changes call for a small period, and vice versa. How to calculate t is not the emphasis of this paper, and t is treated as a small constant value, such as 5 minutes, in other prediction-model-based work (Sun, 2012).

In each estimate, if the time cost of link j meets or exceeds j's threshold, the routes of those EVs whose paths contain link j should be re-planned immediately. Therefore, the route of one electric vehicle may be re-planned several times due to traffic conditions, and in the meantime the path changes dynamically.
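A minimal sketch of this threshold-triggered re-planning step, assuming link times and thresholds are kept in dictionaries and `plan_route` is whatever planner is in use; all names are illustrative.

```python
def replan_on_update(link_times, thresholds, vehicle_paths, plan_route):
    """Re-plan every vehicle whose current path contains a link whose
    newly estimated time cost meets or exceeds that link's threshold."""
    hot = {link for link, t in link_times.items() if t >= thresholds[link]}
    result = {}
    for vid, path in vehicle_paths.items():
        # only vehicles routed through an overloaded link are re-planned
        result[vid] = plan_route(vid) if hot & set(path) else path
    return result
```

Calling this once per estimation period t reproduces the dynamic behavior described above.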

3.2.2 Waiting time

When a vehicle arrives at a charging station, whether it needs to wait, and for how long, is independent of the path it has travelled. That is, no matter which link the vehicle took to reach the charging station, the load is a fixed value, which can thus be added onto the links pointing to the station node.

According to (4), waiting time and charging-station load are positively correlated, and the load is calculated from static data (such as totalP and avgT) and dynamic data (such as umq and avlP, which change over time). This dynamic information should therefore be saved for each charging station as globally shared variables to support the calculation of the station load. However, real-time data for the current moment alone is not sufficient. Here is an example of why. According to a planning result, a vehicle V needs to be recharged at a station S, and by the calculation V will arrive at S at some time in the future. When calculating the waiting time at S, the load at that particular future time is needed. To gather enough information for load prediction, a reservation record should be stored in the corresponding charging station, again as globally shared variables, for every vehicle whose path planning has been completed. The record consists of items such as the planned arrival time, the charging start time and the charging finish time. When a newly arriving vehicle requests a plan, the real-time data and the booking records of the different charging stations should be obtained; the figure below gives an example. Fig. 2 is a logic diagram of one charging station's resource reservations: the horizontal axis is the timeline and the ordinate represents the charging piles. As the picture shows, this station holds five charging piles, and each bar on a pile's row represents a reservation record, with the yellow part denoting waiting time and the green part charging time. At this point, a newly arriving vehicle is expected to reach the station at t0. First, the load and the waiting time are calculated from the existing records. Then the A-star algorithm is used to find a shortest path. Once the route is chosen, a reservation record is added to the related station's globally shared variables, as shown in Fig. 3. Here, P1 is preferred for the new reservation, to make full use of the charging piles, while P5 stays idle.
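The reservation bookkeeping can be sketched as follows. For simplicity, the sketch keeps only the time at which each pile becomes free and books the pile that lets the vehicle start charging earliest; the paper's example additionally prefers packing reservations onto already-used piles, which is not modeled here. All names are illustrative.

```python
def book_pile(pile_free_at, arrive, charge_duration):
    """pile_free_at: {pile_id: time the pile's last reservation ends}.
    Books the pile giving the earliest charging start for a vehicle
    arriving at `arrive`, records the reservation, and returns
    (pile_id, waiting_time, start_time, finish_time)."""
    pile = min(pile_free_at, key=lambda p: max(pile_free_at[p], arrive))
    start = max(pile_free_at[pile], arrive)
    finish = start + charge_duration
    pile_free_at[pile] = finish  # store the new reservation
    return pile, start - arrive, start, finish
```

The waiting time returned here is exactly the gap between the vehicle's arrival and the chosen pile becoming free.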

3.2.3 Charging time

Charging time is determined by a vehicle's remaining electricity and required electricity. How much electricity remains depends on the path the vehicle has travelled; similarly, how much electricity is required depends on the path the vehicle will travel. If a vehicle has two paths of different lengths to choose from, the choice leads to different energy consumption and thus to a different charging-time requirement.

The following explanation is based on Fig. 4. Here, O is the starting point, D represents the destination, n stands for the ordinary nodes in the directed graph, and CS is a charging-station node. The edge weight in the graph gives the distance of the link. It is known that the original electricity at point O is not enough to reach D, but every charging station in the figure is reachable. The goal is to choose a path for the vehicle from O to D that minimizes its time cost.

Figure 2. Resource reservation information of a charging station before the newly arriving vehicle

Figure 3. Resource reservation information of a charging station after the newly arriving vehicle


When the vehicle chooses CS1 as the place where it will be recharged, how much electricity it needs to charge is related to its remaining and required electricity. The remaining electricity equals the original electricity minus the energy consumed from O to CS1; the consumption along route link1 - n1 - link3 differs from that along route link2 - n2 - link4. Similarly, the required electricity equals the energy to be consumed on the following links up to the destination or the next charging station: if the next route is link6 - n3 - link8, the required electricity is the energy consumed on link6 and link8, while for route link7 - CS2 - link9 it is the energy consumed on link7.

Since the distance of a link remains unchanged and has nothing to do with dynamic traffic conditions, the energy consumption can be simplified as proportional to driving distance: the driving energy consumption on the kth link is E_k^drive = κ·d_k, where κ is a proportionality coefficient and d_k is the distance of the kth link.

The method for converting charging time into a directed-graph edge weight is to convert the charging time into a path's energy cost. Since a vehicle's original electricity at the starting point is given and fixed, charging time mainly depends on the energy consumed while travelling: the more electricity is consumed, the less remains, the more must be recharged, and thus the longer the charging time, and vice versa. According to the above equation, the kth link requires energy κ·d_k, so the charging time attributable to the kth link can be converted to

T_k^charging = κ·d_k / R_charging

Here, R_charging is the charging rate of the vehicle. The total time consumption of the kth link is then

T_k = T_k^driving + T_k^charging

In conclusion, charging time can be transformed from node weight to edge weight by setting the time consumption T_k as the edge weight of the kth link in the directed graph. Below is an example of how to calculate the time weight.

Refer to Fig. 4 again. The chosen path of a vehicle is link1 - link3 - CS1 - link7 - CS2 - link9, and CS2 is the last charging station the vehicle will pass. When calculating the edge weights, the driving time and charging time of all links on the path should be summed up, and the time compensation for the energy already covered by the original electricity subtracted. The time consumption of one particular path can then be calculated with the following equation.

T = Σ_{j=1}^{n} (T_j^driving + T_j^charging) − (E_origin − E_remain) / R_charging
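Under the notation above, the edge-weight computation can be written out directly; the function below is an illustrative sketch under the stated assumptions, with links given as (driving time, distance) pairs.

```python
def path_total_time(links, kappa, rate, e_origin, e_remain):
    """Total time of a path per the equation above: the sum of each
    link's driving time plus its charging time kappa*d/rate, minus the
    compensation (E_origin - E_remain)/rate already covered by the
    vehicle's initial charge."""
    summed = sum(t_drive + kappa * d / rate for t_drive, d in links)
    return summed - (e_origin - e_remain) / rate
```

For two links of 10 and 5 time units and distances 2 and 1, with κ = 3, R_charging = 1 and a compensation of 3, the path weight comes out to 21.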

4. CASE STUDIES AND DATA ANALYSIS

4.1 Simulation Environment and Data Set

TransModeler applies a variety of mathematical models of driver behavior and traffic-flow theory to simulate traffic phenomena. Its models use detailed and varied input data about the transportation system, are capable of generating an extensive array of output statistics, and rely on a diverse set of parameters calibrated to match the models with real-world observations. It has a built-in map of San Antonio.

San Antonio is a city in central-southern Texas covering an area of about 1205 square kilometers. Its dataset contains 461 nodes and 618 links. Each node carries information such as its Id and coordinates, and each edge carries its Id, starting node, end node, direction and information such as the length of the link.

First, 15 nodes on the map are selected from the node set as charging stations, following the principle that charging stations should be distributed evenly with respect to geographical position. The second step is to assign a random number of charging piles, ranging from 5 to 25, to each charging station.
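This setup step is straightforward to reproduce; a sketch, with the station ids and the seed as illustrative inputs:

```python
import random

def assign_piles(station_ids, low=5, high=25, seed=0):
    """Give each selected charging-station node a random number of
    charging piles in [low, high], as in the simulation setup."""
    rng = random.Random(seed)  # fixed seed for a repeatable setup
    return {s: rng.randint(low, high) for s in station_ids}
```

Fixing the seed keeps the simulated network identical across runs of the three strategies, so the comparison is fair.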

4.2 Simulation Method and Result

The strategy proposed in this paper is called dynamic planned ordered charging (DPOC for short). To verify its effectiveness, it was compared with shortest first charging (SFC) and planned ordered charging (POC for short), while reachable random charging (RRC), whose performance is not stable and of little significance, was excluded.

The simulation starts by randomly generating data for some vehicles, including parameters such as the starting point, destination, remaining electricity and velocity. Before path planning for a generated vehicle, its original energy is checked first. If the energy is not enough for the whole trip, the situation matches the one addressed in this paper; otherwise, the traditional path-planning method applies. When the initial energy is insufficient to meet demand, the vehicle must be recharged at least once. Different strategies determine the travelling plan, namely which path to take and where to charge. Three strategies were simulated: the strategy proposed in this paper; SFC, which chooses the charging station closest to the starting point; and POC, which does not consider dynamic traffic conditions and never changes its path once planned. This


Figure 4. Calculation of charging time


section compares the three strategies' efficiency from multiple aspects.

First, the simulation system randomly generated 200 electric vehicles satisfying the condition that each must be recharged at least once. Second, the time cost of each link, estimated in real time, was obtained from TransModeler every 3 minutes. Third, a route for each vehicle was planned according to the different strategies. Fourth, the source data and the planned paths were fed into TransModeler. Note that if the time cost of a link changed substantially and eventually met or exceeded its threshold, the third and fourth steps were repeated based on the current state of the relevant vehicles and the new time-cost estimates. The real-time simulation results are presented below.

4.2.1 Total Time Cost

In Fig. 5, the X-axis contains 200 discrete points representing the 200 electric vehicles, and the Y-axis stands for total time, which equals the sum of driving time, waiting time and charging time. To make the results easy to observe, the discrete dots are sorted by total time, colored according to strategy and connected by lines; the figures below share the same pattern. According to Fig. 5, the path-planning strategy represented by the red scatter has a lower time cost than the other two strategies. In addition, a statistical analysis of the sum of all vehicles' time costs under the three strategies is shown in Fig. 6. According to the figure, the DPOC strategy is superior to the POC and SFC strategies.

4.2.2 Driving Time Cost

A specific analysis was done on the driving time. According to Fig. 6, more than half of the total time is spent on the road; this is the driving time. Shorter driving time leads to higher user satisfaction and better traffic conditions. Fig. 7 shows the sum of driving time under the three strategies. It can be seen that the DPOC strategy saves significant time compared with POC and SFC, which greatly improves travel efficiency.

4.2.3 Waiting Time Cost

The length of the waiting time is one of the key evaluation criteria for user satisfaction. According to Fig. 8, the total waiting time of DPOC is less than that of POC and SFC; hence DPOC also has a better average waiting time. Fig. 8 shows the waiting time of each vehicle: about half of the vehicles need to wait in line, and for those vehicles the red line is significantly lower than the blue and green lines.

Figure 5. Vehicle time cost statistic chart

Figure 6. Comparison between total time of three strategies

(stacked bars per strategy: waitTime, chargeTime, drivingTime; TIME in seconds)

Figure 7. Sum of driving time

(sums, in minutes: 3139.017, 3239.667, 3358.55)


Besides the length of the waiting time, the waiting-time ratio is another important criterion: the percentage of the total time spent waiting. Because the total time differs from vehicle to vehicle, comparing waiting-time ratios is more reasonable, and a smaller ratio brings better user satisfaction. Fig. 9 compares the waiting-time ratios of the three strategies; DPOC's ratio is lower than that of POC and SFC.

4.2.4 Distance Cost

The comparison of the total time cost shows the advantages of the proposed path-planning strategy. In addition to time cost, statistical analysis was also performed on each vehicle's distance cost and on the total distance cost. Fig. 10 shows the distance cost of each vehicle and Fig. 11 the sum of the distance costs under the three strategies. DPOC changes its path dynamically, based on the traffic conditions, to find links that may be longer in length but lower in time cost. It can be seen that the distances of DPOC and POC are obviously shorter than that of SFC.

4.2.5 Charging Time Cost

In the simulation, all charging stations use the same charging rate, so the amount of electricity charged is proportional to the charging time. Reducing charging time saves energy and is more environmentally friendly; on the other hand, a short charging time also brings better user satisfaction. Fig. 12 shows the charging time of each vehicle, and Fig. 13 presents the total charging time together with the average charging time under the three strategies. It can be seen from the figures that DPOC requires a little more charging time than POC, because of the longer distance cost analyzed before, but considerably less than SFC. It is reasonable for DPOC to reduce the total time at the cost of a small amount of additional energy consumption compared with POC; moreover, DPOC saves more energy and yields better user satisfaction than SFC. In this sense, DPOC has the better overall performance.

Figure 8. Vehicle waiting time statistic chart

Figure 9. Waiting time ratio comparison

(DPOCWaitRate 0.1029, POCWaitRate 0.1175, SFCWaitRate 0.1369)

Figure 10. Distance cost of vehicles

Figure 11. Sum of distance cost

(DPOCDisSum 624 miles, POCDisSum 612 miles, SFCDisSum 700 miles)


5. CONCLUSIONS

Taking the electric vehicle as its research object and considering the actual demands of vehicle users, this paper presents an optimization strategy that addresses short driving range, long charging time and inadequate charging infrastructure. Following the calculated travelling plan, a vehicle spends less time driving and on necessary charging, instead of queuing in line. Experimental verification shows that, both from a single vehicle's perspective and from a global viewpoint, the DPOC strategy is superior and keeps a good balance between time and energy consumption.

The method proposed in this paper is based on fixed charging stations, that is, stations with known positions. However, a reasonable distribution of charging stations is clearly critical to improving charging efficiency and optimizing the travel experience. Therefore, how to distribute charging stations is a meaningful topic for further study.

6. ACKNOWLEDGMENT

This work is supported by the National High-tech Research and Development Program (863) of China under Grant No. 2012AA111601, and the Fundamental Research Funds for the Central Universities.

7. REFERENCES

Yang, S. N., Cheng, W. S., Hsu, Y. C., Gan, C. H., & Lin, Y. B. (2011, December). Charge Scheduling of Electric Vehicles in Highways through Mobile Computing. In Parallel and Distributed Systems (ICPADS), 2011 IEEE 17th International Conference on (pp. 692-698). IEEE.

Qin, H., & Zhang, W. (2011, September). Charging scheduling with

minimal waiting in a network of electric vehicles and charging stations. In Proceedings of the Eighth ACM international workshop on Vehicular

inter-networking (pp. 51-60). ACM.

Li, J., Liu, Z., & Yang, F. A Multi-Objective Path Planning Method for Electric Taxis with Energy-Constrained.

Hausler, F., Crisostomi, E., Schlote, A., Radusch, I., & Shorten, R. (2014).

Stochastic park-and-charge balancing for fully electric and plug-in hybrid vehicles.

Sun, A., Zhu, G. B., & Jiang, T. (2012). A minimum-time path algorithm based on traffic forecasting information. Modern Electronics Technique, 35(7), 171-176.

Faez, K., & Khanjary, M. (2008, October). UTOSPF: a distributed dynamic

route guidance system based on wireless sensor networks and open shortest

path first protocol. In Wireless Communication Systems. 2008. ISWCS'08.

IEEE International Symposium on (pp. 558-562). IEEE.

Kono, T., Fushiki, T., Asada, K., & Nakano, K. (2008, November). Fuel

consumption analysis and prediction model for eco route search. In 15th

World Congress on Intelligent Transport Systems and ITS America’s 2008

Annual Meeting.

Denning, P. J., & Buzen, J. P. (1978). The operational analysis of queueing

network models. ACM Computing Surveys (CSUR), 10(3), 225-261.

Dijkstra, E. W. (1959). A note on two problems in connexion with graphs. Numerische Mathematik, 1(1), 269-271.

West, D. B. (2001). Introduction to graph theory (Vol. 2). Upper Saddle

River: Prentice hall.

Authors

Lei Shi is a postgraduate student at the State Key Laboratory of Networking and Switching Technology. His research interests include the Internet of Things and the Internet of vehicles.

Jiayuan Li is a postgraduate student at the State Key Laboratory of Networking and Switching Technology. Her current research interests include network intelligence and the Internet of vehicles.

Figure 12. Charging time of vehicles

Figure 13. Sum of charging time




Zhihan Liu is a lecturer at the State Key Laboratory of Networking and Switching Technology. He has served as Deputy Secretary-General of the China IoV Industry Technology Innovation Strategic Alliance. His research interests include IoT, IoV and decentralized SNS.

Jinglin Li is an associate professor at the State Key Laboratory of Networking and Switching Technology. His research interests include converged networks, the Internet of vehicles and service support environments.


Scalable algorithm for the service selection problem

Yanik Ngoko, Christophe Cérin
Université de Paris 13, LIPN
99 Avenue Jean Baptiste Clément, 93430 Villetaneuse
{yanik.ngoko, christophe.cerin}@lipn.univ-paris13.fr

Alfredo Goldman
DCC-IME-USP
Rua do Matão 1010
[email protected]

Abstract

In this paper, we are interested in fast algorithms for the service selection problem. Given an abstract services' composition, the objective in this problem is to choose the best services for implementing the composition so as to minimize a given penalty function. Our work contributes to both the sequential and the parallel resolution of this problem. For the sequential resolution, we show how to extend a prior algorithm for QoS prediction to obtain a fast sequential resolution of the service selection problem. Our proposal innovates in the optimization techniques (variable ordering, branch and bound, etc.) used to minimize the runtime. For the parallel resolution, we discuss two possible formulations of the parallelism: task and data parallelism. We show that, for our problem, the latter formulation is adequate because it leads to a more scalable resolution. Finally, we conduct various experiments showing that super-linear speedups can be reached with our new parallel algorithm.

Keywords: Service Selection; QoS Prediction; Graph Reduction; Domain Decomposition; Work Stealing; Backtracking.

__________________________________________________________________________________________________________________

1. Introduction

With the emergence of clouds and service-oriented systems, middleware has become one of the most active tools in modern distributed infrastructures. In these infrastructures, various design concepts and technologies are used to support several levels of parallelism: from low-level parallelism achieved by GPU units and cores to supercomputers and large data centers in which massive parallelism can be obtained. While middlewares are in most cases designed to help external applications benefit from this computing power, when taking decisions they do not always make efficient use of the large potential of their underlying infrastructures. We are convinced that there is a gap between the exploitation of parallelism in middlewares and the potential of the computational power on which they are deployed.

Our conviction is supported by the practical use of some middleware tools (e.g. OAR (OAR2, 2012) in Grid'5000 and Slurm (Slurm, 2011) on clusters) and by an analysis of the common approaches and philosophies adopted when targeting middleware problems. In more detail, we conducted a study of the conference papers (research and industry tracks) of the IEEE Services Computing Conference (SCC) held in Anchorage in 2014. Of the 105 papers we analyzed, at most 10% focused, slightly or deeply, on scalability or parallelism for solving the service computing problem they addressed. This would not be a concern if none of these papers dealt with a computing-intensive problem. However, we noticed that 29% of them targeted the resolution of an NP-hard, non-linear or exponential problem such as the service selection problem, virtual machine allocation or graph partitioning. Moreover, only 3% of the papers whose focus was to solve an exponential or NP-hard problem proposed to develop a parallel or scalable solution. We did a similar study on the proceedings of the 2014 IEEE Cloud conference held in Anchorage. In the research sessions, only 4% of the 25% of papers targeting an NP-hard or exponential problem (e.g. virtual machine consolidation, virtual machine placement, resource partitioning, etc.) focused on proposing a scalable or parallel solution.

At least two reasons might justify this limited interest in parallelization. The first is the use of the client-server model. In distributed computing, it seems natural to consider parallelism in a context where a set of user


requests are each treated by a different instantiation (of a web service, function or object) run on a part of the entire machine. In this view, one can parallelize the treatment of user requests by creating several instantiations of a middleware service or function, each running a sequential algorithm. The main appeal of this approach to parallelization is the ease of its implementation; for instance, today's clouds incorporate automatic load-balancing tools to support this request-based parallelism on any cloud service. The limit of this solution is that requests may address computing-intensive problems like the service selection problem; in such cases, a sequential resolution might be too inefficient to solve the problem optimally in a real-time setting. The second reason for the limited interest in parallelism is the use of heuristic optimization techniques. Indeed, many of the papers we analyzed relied on optimization techniques like greedy algorithms, genetic algorithms and approximation algorithms. While these techniques can help improve the response time, they can also deteriorate the quality of the results expected by the users. To state this more clearly, consider the service selection problem studied in this paper. Given a set of abstract services that collaborate in a business process, the objective is to choose the best concrete services for its implementation so as to optimize the total response time and the energy consumed in the collaboration. By applying a heuristic optimization technique to solve the problem quickly, we could obtain a choice of concrete services that is far from the optimal response time and energy consumption we could have expected from the service collaboration.

The conviction that supports our work is that, to improve the quality of service when serving middleware requests, one must consider for each request a balanced view in which heuristic optimization and parallelism are both employed. While optimization simplifies the problem addressed by the request, parallelism serves to quickly find optimal solutions under the optimization assumptions. Typically, given an NP-hard problem, our idea is to apply a robust optimization approach to derive a sequential algorithm that will then be parallelized. This paper underpins this balanced usage of parallelism and optimization by proposing a scalable algorithm for the service selection problem.

Parallel algorithms for service selection have been investigated in previous work (Hening & Balke, 2010; Pathak et al., 2006; Bartolos & Bielikova, 2009). We differ from them on two main points. First, we use a more classical representation of services' compositions and parallelize a sequential algorithm obtained by performing different optimizations such as variable ordering and backtracking. Second, we analyze various options for the parallelization of service selection and single out a data-parallel formulation that leads to more scalability. In more detail, our parallelization is achieved with two techniques: domain decomposition (in particular, the variable partitioning approach (Platzner et al., 1996)) and work stealing (Blumofe & Leiserson, 1999). Finally, we conduct a large set of experiments on representative benchmarks demonstrating that super-linear speedups can be achieved by our approach.

The remainder of the paper is organized as follows: in the next section, we discuss the related work. In Section 3, we give a formal description of the variant of the service selection problem studied in this paper. Section 4 presents the sequential optimizations we performed to reduce the runtime. In Section 5, we discuss the parallelization of our sequential algorithms. Section 6 is devoted to experimental results, and we conclude in Section 7.

2. Related work

As stated in the introduction, there is a large literature on the service selection problem. Unlike our work, most of these contributions implicitly target sequential execution contexts. For distributed contexts, we refer to the work done in (Li Fei et al., 2006; Xin Li et al., 2013; Alrifai et al., 2012).

Fei Li et al. showed how to limit the occurrence of bottlenecks in the exchange of the registry information required for the composition of services. The idea of a distributed composition of services across geographically distributed clouds is developed in the work of Xin Li et al. The work of Alrifai et al. proposes a distributed algorithm whose idea is to decompose the global SLA constraints into local ones; in doing so, they show that good approximations for the service selection problem can be obtained quickly. Our work differs from these contributions on two points. First, they focus on building services' compositions in a distributed (not necessarily parallel) context, while we are interested in parallelizing the composition process. Second, we are interested in finding optimal solutions, not approximate ones as Alrifai et al. do. The parallelization of the service selection problem has also been investigated. Beran et al. (2012) described two parallel algorithms for the service selection problem: the first is a master-slave parallelization of a genetic algorithm, and the second is a parallelization of an A*-like algorithm for service selection. Though interesting, their proposals lead only to near-optimal solutions. Hening and Balke (2010) proposed a parallel framework for service composition in which parallelism is achieved by partitioning the services graph. We differ from their solution on three


points. Firstly, they used a particular representation of the services' composition (the binary-tree-based web composition system) while we use a more classical one. Secondly, we consider a different sequential algorithm to parallelize. As will be discussed further, our sequential algorithm includes several optimizations that can drastically reduce the search space of the service selection problem.

Finally, we both use graph partitioning for creating parallelism, but our solution is related to the domain decomposition technique. The difference in the way we formulate parallelism is observed in practice by the fact that while our parallelization leads in some cases to a super-linear speedup (with 2, 4, 8 or 16 threads), the one proposed by Hening and Balke is only near-linear. In (Pathak et al., 2006) and (Bartolos & Bielikova, 2009), two parallel algorithms for service compositions are proposed. As criticized in (Hening & Balke, 2010), these solutions only exploit the parallelism that is inherent to the interactions among operations of the services' composition graph; this can result in poor scalability. In our solution, it is the set of services that can be associated with abstract ones that restricts the parallelism. Since the practical hardness of the service selection problem somehow depends on the size of this set, we obtain a more scalable algorithm.

The work proposed in this paper is built upon our prior work in (Ngoko et al., 2014a) on the sequential resolution of the service selection problem and on the parallelization we proposed in (Ngoko et al., 2014c). In this paper, we go further in the sequential resolution by introducing two data structures. The first data structure helps to optimize repetitive calls in the iterative process at the core of the sequential resolution. The second is used to generalize the sequential process so as to make it more amenable to runtime optimization. In comparison with our previous parallel contributions, we discuss in this paper an alternative formulation of the parallelism based on the map-reduce paradigm (Dean & Ghemawat, 2008). We show the limits of this solution, explain the interest of our alternative approach and discuss its scalability.

3. The service selection problem

The problem formulation we use was introduced in previous work (Yu & Lin, 2004; Ngoko et al., 2013). As input, we have a services' composition described as a hierarchical services graph (HSG). An HSG is obtained by composing three graphs: an operations' graph that describes a set of business process interactions between operations, a services' graph stating the services that implement each operation, and a machines' graph that defines the machines on which each service runs. A representation of such a graph is proposed in Figure 1.

Fig 1: Example of HSG

In our HSG, we assume that the operations (in the operations' graph) are defined by the set O. Given an operation u, the set of its possible service implementations is referred to as Co(u). Each implementation is characterized by a service response time (SRT) and an energy consumption (EC). The objective is to choose the best implementations of operations so as to minimize a penalty function while fulfilling a set of SLA constraints. More formally, we describe the problem as follows.

Problem inputs:

A set of operations O. For each operation u, an implementation set Co(u) = {u1, …, uv}. For each implementation ui, we have its mean response time S(ui) and its energy consumption E(ui). We assume two upper bounds issued from SLA constraints: the bound MaxS on the service response time and MaxE on the energy consumption. Finally, we have a tuning parameter λ ∈ [0, 1].

Problem objectives:

We are looking for an assignment of service implementations to O that fulfills the following constraints:

1. each operation must be associated with a unique implementation;

2. the QoS of the resulting composition must exceed neither MaxS in response time nor MaxE in energy consumption (SLA constraints);

3. if S is the service response time and E the energy consumption of the composition, the assignment must minimize the global penalty λS + (1 − λ)E.

Here, λ is provided by the user; it serves to prioritize either the SRT or the EC in the optimization. One drawback of constraint 3 is that it does not seem natural to add the service response time to the energy consumption, since they are expressed in distinct units. Alternatively, we can adopt a normalized version in which the goal is to minimize λS/MaxS + (1 − λ)E/MaxE. In this paper, we will keep the first formulation of the penalty. Let us observe that the solution we propose in this paper can easily be adapted to the normalized case.
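To make the formulation concrete, the penalty and the SLA checks can be sketched as follows. This is a minimal illustration with hypothetical data, assuming for simplicity a purely sequential composition in which SRT and EC both add up across operations; in general the aggregation depends on the structure of the graph, as discussed in Section 4.

```python
# Sketch of the selection objective, assuming additive SRT and EC
# (hypothetical data; real aggregation follows the operations' graph).

def penalty(assignment, S, E, lam):
    """Global penalty lam*SRT + (1 - lam)*EC of an assignment."""
    srt = sum(S[impl] for impl in assignment.values())
    ec = sum(E[impl] for impl in assignment.values())
    return lam * srt + (1 - lam) * ec

def is_feasible(assignment, S, E, max_s, max_e):
    """Check the SLA constraints MaxS and MaxE."""
    srt = sum(S[impl] for impl in assignment.values())
    ec = sum(E[impl] for impl in assignment.values())
    return srt <= max_s and ec <= max_e

# Two operations A and B, each with two candidate implementations.
S = {"A1": 2.0, "A2": 1.0, "B1": 3.0, "B2": 4.0}
E = {"A1": 5.0, "A2": 8.0, "B1": 2.0, "B2": 1.0}
choice = {"A": "A2", "B": "B1"}

print(is_feasible(choice, S, E, max_s=5.0, max_e=12.0))  # True
print(penalty(choice, S, E, lam=0.5))                    # 7.0
```

The parameter λ (`lam`) shifts the trade-off: λ = 1 optimizes the response time only, λ = 0 the energy consumption only.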


In the problem formulation, we assumed that the SRT and EC are real values. This choice is questionable because the operations deal with different types of inputs. Fortunately, the resolution approaches that we propose can be extended to more complex formulations, such as the probabilistic modeling studied in (Ngoko et al., 2014b).

To complete the problem definition, the structure of the operations' graph must be defined. We must also define how to infer the QoS (SRT and EC) of a services' composition once implementations are associated with operations. On these points, we will use the considerations made in our prior work (Ngoko et al., 2013). In particular, the structure of the operations' graph will be presented in the next sections.

4. Sequential algorithm for service selection

The service selection problem can be solved by an iterative process whose idea is to evaluate all possible assignments of operations to implementations. In this process, each iteration includes the following tasks:

1. definition of an assignment of possible implementations to operations;

2. estimation of the resulting SRT and EC, and check that no SLA constraint is violated;

3. if no constraint is violated, estimation of the penalty of the assignment and comparison with the best known solution.

The possible assignments of implementations are defined by the HSG in which the operations' graph is contained. For instance, according to Figure 1, A is implemented by w1 and w2, E is implemented by w2 and w3, and C is only implemented by w1.

In the iterative process we defined, the resolution of the service selection problem is based on two algorithms: (1) a QoS predictor that, given an assignment, computes the SRT and EC it leads to; (2) an iterative algorithm that explores assignments and calls the QoS predictor to check that no constraint is violated and to estimate their objective values. We will refer to this process as the exhaustive search process.

The exhaustive search is a natural option for finding optimal solutions to the service selection problem. However, it performs a non-negligible amount of useless exploration that can be avoided. In this work, we propose an alternative resolution of the problem. From the exhaustive search process, we retain two important requirements for solving the service selection problem: the need for a mechanism for QoS prediction and the need for a search process within possible assignments. Below, we present the mechanism for QoS prediction that we will use.

4.1 QoS prediction: a graph reduction approach

Given an assignment of services to the operations of an HSG, the graph reduction approach defines a method for computing the SRT and the EC. The precondition for the application of this method is that the operations' graph of the HSG must be recursively decomposable into known patterns. This means that there is a set of subgraph structures that can be used for composing this graph. In our case, we assume that our services' composition automates a business process. Therefore, we can define the subgraph structures from known patterns used in business process modeling. In this work, we restrict the set of patterns that we use to the ones of Figure 2.

Fig 2: Subgraph patterns for the operation graph

Here, each Pi is either a subgraph based on the same patterns, or an operation. When all the Pi of a pattern are operations, we say that we have an elementary subgraph. Let us now assume that the operations' graph is decomposable into these patterns. The graph reduction approach proceeds through several stages in which an elementary subgraph is reduced into a single node whose SRT and EC are the ones of the reduced subgraph. An illustration of the execution is provided in Figure 3. A key challenge in this process is the search for elementary subgraphs to reduce. In our prior work (Goldman & Ngoko, 2012), we introduced the notion of reduction order. Such an order defines the successive set of elementary subgraphs that must be considered in the reduction. More practically, a reduction order is a stack whose elements have the key (root, leaf). In this notation, root is the root node of the elementary subgraph and leaf is its node of maximal depth.

Considering for instance Figure 3, (g3, g4) refers to the elementary subgraph that comprises the operations C and D. After the reduction of (g3, g4), (B, g3) will be an


elementary sequence formed by the node B and the reduced node representing (g3, g4). After the reduction of (B, g3), (g1, g2) will also be an elementary subgraph.

The reduction order states how to recursively reduce the operations' graph to a unique node. Let us notice that this leads to the computation of the QoS if, in the reductions of the subgraphs, we aggregate the operations' QoS (SRT and EC). In (Ngoko et al., 2013), we defined the aggregation rules needed for this purpose.

Fig 3: Example of reduction with order
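The reduction order and the aggregation step can be sketched as follows. This is a simplified illustration mimicking part of Figure 3; the aggregation rules shown here (SRT and EC add up in a sequence, SRT is the maximum and EC adds up in a parallel pattern) are plausible assumptions and not the exact rules of (Ngoko et al., 2013).

```python
# Simplified graph reduction: each elementary subgraph is a pattern over
# child QoS values (srt, ec). The aggregation rules are hypothetical.

def reduce_pattern(kind, children):
    """Aggregate the (SRT, EC) pairs of the children of a pattern."""
    srts = [c[0] for c in children]
    ecs = [c[1] for c in children]
    if kind == "sequence":
        return (sum(srts), sum(ecs))
    if kind == "parallel":
        return (max(srts), sum(ecs))
    raise ValueError(kind)

# Reduction order as a stack of elementary subgraphs, innermost first:
# (g3, g4) is a sequence of C and D, then (B, g3) is a sequence of B
# and the node obtained from the previous reduction.
qos = {"B": (1.0, 2.0), "C": (2.0, 1.0), "D": (3.0, 2.0)}
order = [("g3", "sequence", ["C", "D"]),
         ("B", "sequence", ["B", "g3"])]

for root, kind, members in order:
    qos[root] = reduce_pattern(kind, [qos[m] for m in members])

print(qos["B"])  # (6.0, 5.0): QoS of the reduced subgraph
```

Each stage stores the aggregated QoS under the root of the reduced subgraph, so later stages can refer to it as a single node.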

4.2 Exploring services implementations with graph reduction

Let us assume an assignment of implementations to the graph of Figure 3. For the prediction of the resulting QoS, the exhaustive search starts by reducing the subgraph (g3, g4). At this stage, we have the SRT and EC of (g3, g4); from a simple comparison, it can happen that the assignment made on this subgraph already violates an SLA constraint. However, in the exhaustive search, this information is not exploited. The graph is fully reduced and only at the end does one check whether or not an SLA is violated. The process is repeated until all possible assignments have been explored.

The exhaustive search has an obvious drawback: it does not exploit the local information produced by the graph reduction to accelerate the detection of SLA violations. The runtime overhead that this leads to can be significant. In particular, let us assume that we have n operations in the graph and d implementations for each operation. In the case where we can detect an SLA violation from the reduction of two operations, we only explore d² assignments on two operations. With the exhaustive search, we explore dⁿ assignments on n operations. Consequently, the exhaustive search process can be particularly inefficient in the case where the service selection problem has no solution, or only few solutions.

To improve the exhaustive search process, we proposed in our prior work (Ngoko et al., 2014a) to explore solutions based on Nested Lists of operations' Ordering (NeLO). We discuss this data structure below.

4.2.1 Using Nested Lists for Ordering in the search of solutions

NeLOs are built for two purposes: the first one is to define an ordering for the generation of the assignments explored during the resolution. The second purpose is to define a set of evaluations of SLA constraints that serve to quickly detect constraint violations. For understanding the key point of this data structure, we briefly recall below what we call an assignment. Given the graph of Figure 3, let us assume that A can be associated with the implementation set Co(A) = {A1, A2}. Let us also assume that B can be associated with the implementations Co(B) = {B1, B2, B3}; then a possible assignment is [(A, A1), (B, B3)].

Depending on the number of nodes of the operations' graph, an assignment can be complete or partial. In the former case, all operations are associated with an implementation while in the latter, this is the case only for some operations. Given the graph of Figure 3, [(A, A1), (B, B3)] is a partial assignment; a complete assignment must define an implementation for A, B, C, D, E and F.

With NeLOs, the idea is to solve the service selection problem using partial assignments that are progressively completed as long as no constraint violation is found. This means for instance that in Figure 3, we start by defining a partial assignment for D and C. Once done, we can evaluate the subgraph (g3, g4) on SRT and EC. If no SLA violation is found, we continue the completion of this partial assignment by choosing an implementation for B. The process (completion of the assignment and SLA evaluation) is repeated until we find a complete assignment or a constraint violation. As a data structure, a NeLO is based on a stack that defines the ordering in which assignments of implementations will be made. Some entries of the stack can point towards a list that specifies the reductions to perform once assignments are made to the referring sub-stack. For the graph of Figure 3, an example of NeLO is given in Figure 4.

Fig 4: Example of NeLO

According to this representation, any first partial assignment we consider will be obtained by associating D with


an implementation. Next, an implementation must be defined for C. Once done, one must reduce the graph (g3, g4) and then check the resulting SRT and EC. If there is no constraint violation, we continue by defining an assignment for B; the graph (g1, g2) will then be elementary and reduced. Then, one checks the SRT and EC that this reduction leads to. One continues in the same way until the entire graph is reduced, which happens after defining an implementation for A.

There are several challenges in the design of a NeLO. For instance, how should the operations be ordered in the completion of assignments, and which constraint evaluations should be performed on sub-stacks? For more details about their design, we refer the reader to the work we did in (Ngoko et al., 2014a). The main point to retain is that polynomial-time algorithms exist for the design of NeLOs in which operations are ordered so as to maximize the number of local checks of SLA violations.

We introduced NeLOs by criticizing a worst-case situation of the exhaustive search: when the problem has no solution or only few solutions. It is easy to see that this situation is improved by using NeLOs. Indeed, in the graph of Figure 3, let us consider that an SLA violation exists on the subgraph (g3, g4). Following the NeLO, we evaluate this subgraph right after assigning implementations to D and C. We thus avoid exploring a huge set of assignments that the exhaustive search would have considered.

Regarding search methodologies, the exploration of assignments based on NeLOs can be referred to as a backtracking exploration with a variable ordering strategy. For instance, given an assignment, if a constraint violation is detected in the reduction of (B, g4), one can backtrack and change the assignment made to B so as to consider [(D, D1), (C, C2), (B, B2)].

If again some constraints are violated, one can backtrack to change the implementation associated with C. In addition to backtracking with variable ordering, we propose to add bound control to the exploration of potential solutions. The idea is to evaluate the objective value (the function λS + (1 − λ)E) of any partial assignment so as to compare it with the one of the best known solution. This means that from the assignments made to D, C and B, we propose to compute the local objective value in SRT and EC that the partial assignment leads to. If this local objective value exceeds the one of the best known solution, then there is no interest in completing this partial assignment. For more details about bound control, we refer the reader to the work we did in (Ngoko et al., 2014a).
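The combination of early SLA checks and bound control can be sketched as a generic backtracking search over a fixed operation ordering. This is only an illustrative sketch with hypothetical data: QoS is assumed additive for simplicity, whereas the actual method aggregates it through the NeLO reductions.

```python
# Backtracking sketch with the two prunings discussed above: early SLA
# checks on partial assignments, and bound control against the best
# known penalty. Additive QoS is assumed for simplicity.

def solve(order, Co, S, E, max_s, max_e, lam):
    best = {"penalty": float("inf"), "assignment": None}

    def backtrack(i, partial, srt, ec):
        if srt > max_s or ec > max_e:        # early SLA violation: prune
            return
        if lam * srt + (1 - lam) * ec >= best["penalty"]:
            return                           # bound control: prune
        if i == len(order):                  # complete, feasible, better
            best["penalty"] = lam * srt + (1 - lam) * ec
            best["assignment"] = dict(partial)
            return
        op = order[i]
        for impl in Co[op]:                  # try each implementation
            partial[op] = impl
            backtrack(i + 1, partial, srt + S[impl], ec + E[impl])
            del partial[op]

    backtrack(0, {}, 0.0, 0.0)
    return best

# Hypothetical instance with two operations D and C.
Co = {"D": ["D1", "D2"], "C": ["C1", "C2"]}
S = {"D1": 4.0, "D2": 1.0, "C1": 2.0, "C2": 3.0}
E = {"D1": 1.0, "D2": 3.0, "C1": 2.0, "C2": 3.0}
result = solve(["D", "C"], Co, S, E, max_s=5.0, max_e=6.0, lam=0.5)
print(result["assignment"], result["penalty"])
```

On this instance, every branch starting with D1 is abandoned as soon as the partial SRT exceeds MaxS, without ever being completed.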

At this point, we have shown how NeLOs can improve the search for the optimal solution of the service selection problem. In the next sections, we discuss other aspects of the search optimization based on NeLOs.

4.2.2 Making successive reductions

To find the optimal solution of the service selection problem, we must evaluate various assignments of the operations' graph. For the evaluation of partial assignments, the NeLOs suggest making reductions that destroy this graph. This implies that for the evaluation of a complete assignment, we must destroy the operations' graph. How then should we proceed, since we need this graph again for evaluating other assignments? This is the main challenge discussed in this part.

Our first answer to this challenge is to use local copies. Before starting any assignment, we make a local copy of the operations' graph. Then, one progressively assigns implementations to its operations while making the reductions suggested by the NeLO. Once we are done with this assignment, we delete the remaining nodes and create a new copy for defining the next assignment. In appearance, this approach is simple; however, things become more complex when we must backtrack. Indeed, let us assume that after making an assignment to D, C, B and E, one wants to backtrack to change the implementation of D. According to the NeLO of Figure 4, the node D will already have been deleted in the reduction of (g3, g4). This means that for changing even a part of a partial assignment, we must again create local copies of the operations' graph.

The idea of local copies is thus inappropriate for backtracking because it can result in a huge number of local copies to create. To quantify this, let us consider that the operations' graph is a sequence of n operations. Then we must pay an overhead of O(2n − 1) for the creation of each local copy (we have n − 1 edges). Assuming that each node is an operation with d possible implementations, then in the best case we have at least dⁿ assignments to evaluate (we only counted complete ones). In this case, the overhead due to copies will be in O(dⁿ(2n − 1)). Here, we did not include the cost of deleting the nodes and edges of elementary subgraphs in the reductions. In conclusion, the idea of keeping a local copy of the graph can lead to an exponential time cost: how can we improve it?

Our alternative to local copies is based on the left naming assumption defined below.

Left naming assumption: in NeLOs, once a subgraph (root, leaf) is reduced, it is further referred to only by its left name, root.

This is the case in the NeLO of Figure 4. Indeed, in referring to the reduction of (B, g3), we consider that g3 is obtained from the reduction of (g3, g4). We have the same consideration when referring to the reduction of (g1, F): g1 refers to the reduction of (g1, g2).

It is trivial to generate a NeLO in which the left naming

assumption is respected. The interest is that we do not need


to destroy the operations' graph again during the reduction. More precisely, we propose to use a QoS array that works as follows.

For each node of the operations' graph, there is an entry in the QoS array that points towards an SRT and an EC value. We interpret the reduction of an elementary subgraph as the computation of the SRT and EC it leads to, followed by the update of these results in the QoS entry of the root node of the reduced subgraph. This interpretation exploits the fact that we can later refer to a reduced graph by the root node of the subgraph it denotes. Summarizing, instead of making effective reductions, we proceed by updating the SRT and EC of the root nodes of the elementary subgraphs considered.

There are some critical points to address in the formulation of this solution. For instance, we might have a situation where the NeLO asks to reduce a graph that is not practically elementary, since we did not destroy the nodes. This will be the case in Figure 3 when we try to reduce (B, g3) or (g1, g2). This situation is easy to address: before the reduction of (B, g3), we already have the necessary updates made in the QoS entries of g3. It suffices to use them and to update the QoS entry of B accordingly. The same tricky situation occurs in the reduction of (g1, g2). Again, at this point, all the internal entries of the root nodes of the subgraphs will have been updated.

Summarizing, our solution for improving the runtime in backtracking consists of performing virtual reductions on an internal associative array of QoS and of creating a NeLO based on left naming. The entries of the associative array consist of tuples (node name, SRT, EC). Initially, the SRT and EC of these tuples are only defined for operations. For making a virtual reduction on the subgraph (x, y), we update the entry (x, srt(x), ec(x)) of the associative array. The values of this new entry are then used each time we refer to the subgraph (x, y). In this solution, we have neither the overhead induced by the copies of the operations' graph nor the overhead caused by the destruction of subgraphs. In comparison, we only need to keep the associative array updated.
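A virtual reduction can be sketched as an in-place update of the associative array, with left naming making the root node stand for the whole reduced subgraph. The example below is a toy illustration with hypothetical QoS values and a hypothetical sequence-aggregation rule; resetting the array restores the initial state without any graph copy.

```python
# Virtual reduction sketch: the QoS of a subgraph (root, leaf) is
# written back into the entry of its root node (left naming), so the
# graph itself is never destroyed nor copied.

def virtual_reduce(qos, root, members, aggregate):
    """Update the root entry in place with the aggregated QoS."""
    qos[root] = aggregate([qos[m] for m in members])

def seq(children):
    # Hypothetical rule: SRT and EC both add up in a sequence.
    return (sum(c[0] for c in children), sum(c[1] for c in children))

initial = {"B": (1.0, 1.0), "C": (2.0, 2.0), "D": (3.0, 1.0)}
qos = dict(initial)

virtual_reduce(qos, "C", ["C", "D"], seq)  # (C, D) now referred to as C
virtual_reduce(qos, "B", ["B", "C"], seq)  # (B, C) now referred to as B
print(qos["B"])  # (6.0, 4.0): QoS of the fully reduced graph

qos = dict(initial)  # O(n) reset before the next assignment
```

Between two assignments, only this O(n) reset is paid, instead of the O(2n − 1) cost of rebuilding a copy of the graph.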

We have shown how to efficiently operate several reductions using NeLOs. Until now, we considered that NeLOs are built to maximize the number of local checks for SLA violations. Below, we discuss alternative views.

4.2.3 Generalizing NeLOs for handling best cases in lazy reduction

We introduced the NeLO as a data structure that serves to optimize the constraint checks on partial assignments. NeLOs are useful in the worst cases of the exhaustive search, but what about their efficiency in the best cases?

Fig 5: in a) the queue of local objective checks; in b) the queue of constraint checks; in c) the NeLO

Our answer to this question is that a NeLO that maximizes the number of local checks for SLA violations does not necessarily improve over the exhaustive search. In some situations, it might be preferable to delay the constraint checks or the bound comparisons that follow. Let us recall that a constraint check compares the local SRT and EC with the bounds of the service selection problem, while a bound comparison compares the local objective function of a subgraph with the best known solution of the service selection problem.

A situation that is favorable to the exhaustive search can happen when all partial assignments fulfill the SLA constraints. In this situation, the intermediate constraint checks made when using NeLOs will not detect any violation. Since these local evaluations are time consuming, a challenge then consists of adjusting the definition of NeLOs for such situations.

For handling cases where sub-constraint checks are useless, we propose to make NeLOs more flexible by enriching the data structure with two queues: a binary queue that defines the points at which the constraint checks are made, and a binary queue that defines the points at which the comparisons with the objective function are made.

An example of such an enriched data structure is given in Figure 5. Associated with the NeLO, we put two queues that define when we will compare the local objective values with the global solution and when we will check whether the SRT or EC bounds are violated.

The top entry of these additional queues must always be equal to 1. This is required for checking the constraints and evaluating the local objective function on complete assignments. But there is no requirement on the other entries. With these additional data structures, constraint checks can be deferred. For instance, in Figure 5, a check is deferred after the definition of an implementation for B: the SRT and EC will not be evaluated on (B, g3) because the corresponding entry in Figure 5b) is set to 0. In the same way, the local objective will not be evaluated on (g1, g2) because the corresponding entry in Figure 5a) is set to 0.

With the enriched NeLO we obtain, we can define several search configurations based on NeLOs. For instance, we obtain the exhaustive search by having a unique entry set to


1 at the top of the queues 5a) and 5b). Various other intermediate search configurations can be defined by manipulating these queues. A challenging question that this implies is the choice of the optimal configuration. We will not address this question in this work.
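The role of the two binary queues can be sketched as flags attached to the reduction steps of a NeLO. The step list and flag values below are hypothetical; the point is only to show how a configuration selects which evaluations are actually performed.

```python
# Sketch of an enriched NeLO: each reduction step carries two binary
# flags stating whether the SLA constraint check and the bound
# comparison are performed at that point (hypothetical configuration).

steps = [
    (("g3", "g4"), 1, 0),  # check constraints, defer bound comparison
    (("B", "g3"), 0, 1),   # defer constraint check at this step
    (("g1", "g2"), 1, 1),  # final step: both evaluations are mandatory
]

# The top (final) entries must be 1 so complete assignments are validated.
assert steps[-1][1] == 1 and steps[-1][2] == 1

def checks_to_run(steps):
    """List the evaluations that this configuration actually performs."""
    ran = []
    for subgraph, check_constraints, check_bound in steps:
        if check_constraints:
            ran.append(("constraints", subgraph))
        if check_bound:
            ran.append(("bound", subgraph))
    return ran

print(checks_to_run(steps))
```

Setting every intermediate flag to 0 while keeping the final entries at 1 degenerates to the exhaustive search configuration described above.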

4.3 Summary of optimizations of the sequential algorithm

This ends the presentation of our sequential algorithm for solving the service selection problem. A summary of the optimizations we performed on the sequential resolution of the service selection problem is given in Table 1. In the next section, we consider the parallelization of this algorithm.

Problem                                                  Solution
Fast detection of SLA violations                         NeLO and backtracking exploration
Check of local objective values                          Evaluation of local objectives in NeLOs and bound comparison
Runtime optimization in successive searches with NeLOs   Left naming, QoS associative array and virtual reductions
Flexibility in constraint checks and bound comparison    Binary queues for constraint checks and bound evaluations

Table 1: Summary of optimizations in the sequential resolution.

5. Parallel algorithm for service selection

In this part, we discuss two approaches for the parallelization of the sequential resolution. In the first approach, we propose to anchor the parallelization on the tasks defined in the NeLO; in the second approach, the parallelization focuses on the structure of the search space.

5.1 Task parallelism based on NeLO: a map-reduce approach

A NeLO defines a set of tasks to perform for the evaluation of each assignment. We can group these tasks into two types: assignments of implementations to operations, and evaluations of the QoS of elementary subgraphs. For instance, in the NeLO of Figure 5, assignments of implementations must be done for D, C, B, etc., and the QoS of (g3, g4), (B, g3), etc. must be evaluated.

From this classification, we can derive a map-reduce-like parallelization for the evaluation of assignments. The parallelism here uses two types of threads: mappers that, given a NeLO and a partial assignment of an operations' graph, extend the assignment if no constraint is violated; and reducers that, given a NeLO and an assignment, evaluate the constraints and objective values.

More concretely, let us consider the NeLO of Figure 5. Starting from a null assignment, a mapper begins by extending the null assignment into an assignment for D and C. Let us suppose that the partial assignment defined at this stage is [(D, D1), (C, C1)]. The mapper will not extend this partial assignment further because, according to the NeLO definition, SLA constraint checks and the evaluation of objective values must be done on the subgraph (g3, g4). The mapper leaves this task to a reducer and looks for another partial assignment to build.

At some point, a reducer will evaluate the SLA constraints and the objective value on [(D, D1), (C, C1)]. If the assignment is valid (no constraint violation and an improved objective value), it returns it to a mapper and looks for another reduction task to perform. A mapper then continues on [(D, D1), (C, C1)] by assigning an implementation to B.

In this parallelization, two types of pools are used as local memories: pools for the assignments that mappers should extend, and pools for the assignments to be reduced. The run of a mapper starts with the choice of an assignment from the mappers' pool; it then extends it and puts it in the reducers' pool. Conversely, a reducer starts by picking an assignment to reduce from the reducers' pool and ends by putting the checked assignment in a mappers' pool.
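The interplay between the two pools can be sketched as follows. This is a deliberately single-threaded toy simulation with hypothetical data: real mappers and reducers would run as concurrent threads, and the reducer's SLA and bound checks are left as a trivial placeholder.

```python
# Toy, single-threaded simulation of the mapper/reducer pools.
from collections import deque

Co = {"D": ["D1", "D2"], "C": ["C1", "C2"]}   # hypothetical candidates
order = ["D", "C"]                            # assignment ordering

def passes_checks(partial):
    return True  # placeholder for SLA constraint checks and bound control

mapper_pool = deque([{}])   # partial assignments waiting to be extended
reducer_pool = deque()      # partial assignments waiting to be checked
complete = []

while mapper_pool or reducer_pool:
    if mapper_pool:                      # a "mapper" extends an assignment
        partial = mapper_pool.popleft()
        op = order[len(partial)]
        for impl in Co[op]:
            reducer_pool.append({**partial, op: impl})
    else:                                # a "reducer" checks an assignment
        partial = reducer_pool.popleft()
        if passes_checks(partial):
            if len(partial) == len(order):
                complete.append(partial)        # complete and valid
            else:
                mapper_pool.append(partial)     # back to the mappers' pool

print(len(complete))  # 4 complete assignments with this toy data
```

Even in this trivial setting, the two-pool structure makes the coordination issues discussed below visible: a checked partial assignment returned to the mappers' pool carries no indication of how it should be extended next.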

The main interest of this parallelization model is that, in practice, we can expect to define it using optimized map-reduce frameworks. However, there are several drawbacks.

The first drawback is that it might be hard to coordinate mappers so as not to explore the same assignment twice. Indeed, the partial assignment [(D, D1), (C, C1)] can be the prefix of several distinct complete assignments; once a reducer puts it back in the mappers' pool, how should it be extended? In our view, to address this question, one must attach additional information to partial assignments stating how to extend them. We will not focus on these aspects in this paper. Let us also notice that there is a favorable situation: when, according to the queues of objective values and constraint checks, there is only a unique check and evaluation to be done once all operations are assigned to an implementation. In this case (which corresponds to the exhaustive search), mappers will each time perform a complete assignment.

The second drawback is that one must judiciously decide on the numbers of mappers and reducers. Ideally, these numbers must ensure that: (1) no mapper or reducer is ever idle; (2) little local memory is consumed in the pools of mappers and reducers. Guaranteeing these two conditions is challenging because it depends on the number of assignments to perform, the


number of reduction tasks and the duration of these tasks.

Summarizing, while the map-reduce formulation we proposed can work well for the exhaustive search, it is hard to define it efficiently in a backtracking context. Because of this difficulty, we envisioned another option for parallelization, discussed below.

5.2 A data parallelism approach

To avoid the drawbacks we observed in the map-reduce approach, we propose to consider only one type of thread: workers that combine the functions of the mappers and reducers defined above. The formulation of parallelism according to this view is discussed in the next sections.

5.2.1 Creation of parallelism

We consider a setting where the parallel algorithm is executed with a number p of threads, defined by the user. On a multicore machine, a natural choice is to set p to the number of cores. The objective is to decide on the work done by each thread. For this, we propose to give each thread a part of the service selection search space. For understanding the idea, two points are important: (a) how we represent the search space; (b) how we partition it between threads.

5.2.2 Search space representation

We represent the search space of the service selection problem as a tree, which we refer to as the services' implementations tree. With each service selection problem, we associate a services' implementations tree by means of the NeLO generated for the problem. Each level of the tree corresponds to the possible assignments of an abstract operation, except the root node, which corresponds to an abstract operation. This means that if the NeLO states the ordering o1,o2,…,on for operations, then the nodes of level i correspond to the possible assignments of implementations to oi. In this representation, each branch captures a potential assignment of implementations to operations; the root node does not refer to an assignment but to the first abstract operation. Figure 6 gives an intuitive idea of this representation.
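To make this representation concrete, here is a minimal sketch (illustrative, with our own names; not the paper's code) in which the tree is given implicitly by the domains Co(oi), in NeLO order, and each branch is one complete assignment:

```python
from itertools import product

def branches(domains):
    """Enumerate the branches of the services' implementations tree.

    domains[i] lists the candidate implementations Co(o_i) of the
    abstract operation o_i, in NeLO order.  Each yielded tuple is one
    complete assignment of implementations to operations, i.e. one
    branch from the root to a leaf.
    """
    yield from product(*domains)

# Hypothetical domains for two operations D and E (cf. Figure 6):
doms = [["D1", "D2"], ["E1", "E2", "E3"]]
assert len(list(branches(doms))) == 6   # 2 x 3 branches
```

The tree itself never needs to be materialized: the backtracking algorithm explores it implicitly, pruning sub-branches whose partial evaluation already violates an SLA.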

5.2.3 Search space partitioning

In the parallel execution, each thread runs the sequential backtracking algorithm on a partition of the services' implementations tree. A partition is a sub-tree with the following properties: (1) it contains the root node of the general services' implementations tree; (2) it has two parts: (2.1) a linear part, a sequence of nodes of degree 1 (in the partition) that starts from the root node; (2.2) a tree part (linked to the linear one), which is a sub-services' implementations tree rooted at a node connected to the linear part.

Starting from the top graph of Figure 6, we give two examples of partitioning. In a), we have two partitions of the services' implementations tree, while we have 6 in b). In the former case, the linear parts of T1 and T2 are reduced to the node D. In the latter case, the linear part of T1 is made of D, D1 while the linear part of T6 is made of D, D2. For any partition, we refer to the first node whose degree (in the partition) is greater than 1 as the root computing node. This term reflects the fact that the linear part of a tree defines a partial assignment to complete. For instance, in the linear part D, D1, E1, we have the assignments in which D is set to D1 and E to E1.

A challenging question is how to relate the number of partitions to the number of threads. We propose two guiding principles for deciding on this number. First, the number of partitions must be large enough to keep all threads busy during the execution. Indeed, the partitions correspond to sub-services' implementations trees in which threads must find solutions; if we create only p−q partitions for p threads, then q threads will be idle during the execution, resulting in a loss of efficiency. Second, the number of partitions must be small enough that partition creation does not dominate the sequential search runtime in each partition. Indeed, the creation of partitions is itself a form of exploration; its cost must be kept low enough to benefit from the strengths of the sequential backtracking algorithm.

Fig 6: Example of search space partitioning

From these guiding principles, a good compromise consists of choosing a number of partitions equal to the number of threads. In Figure 6, if we have 2 or 6 threads, then we can adopt the decompositions that the figure describes. If instead we have 4 threads, another solution must be found. We propose the following partitioning into 4: a first partition whose root computing node is D1 but that does not explore E3; a second partition whose linear part is D, D1, with E3 as root computing node; a third partition whose root computing node is D2 but that does not explore E3; and a last partition whose linear part is D, D2 and that explores E3.

This solution creates one partition per thread and therefore matches the compromise derived from the two principles. However, it highlights a drawback of this compromise: the created partitions do not have the same sizes, and since each thread explores one partition, this suggests a priori a work imbalance. Moreover, if we allow partitions of different sizes in general, then one can find several partitionings of a same search tree: in Figure 6, there are partitionings into 2, 3, 4, etc. How should we choose among them? To avoid these difficulties, we introduce the following general principle.

Principle 1: (Equal size and sufficient partitioning)
Given p threads, let us suppose that the ordering of abstract operations is o1,…,on. We create p' = |Co(o1)|×…×|Co(om)| partitions where:

1. each partition is a sub-tree of the search tree and o1,…,om is a prefix of o1,…,on;

2. p' is the smallest such product for which p' ≥ p;

3. the size of the linear part of each partition is m−1 and the degree of the root computing node is |Co(om+1)|.

We recall that Co(u) denotes the possible implementations of u.
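Principle 1 can be sketched as follows (our own reconstruction; function and variable names are illustrative): take the shortest prefix o1,…,om whose domain-size product reaches p, then emit one partition per assignment of that prefix, the fixed prefix being the partition's linear part:

```python
from itertools import product

def make_partitions(domains, p):
    """Equal-size and sufficient partitioning (Principle 1, sketched).

    domains[i] = Co(o_i) in NeLO order.  We find the shortest prefix
    whose domain-size product p' satisfies p' >= p, then create one
    partition per prefix assignment: the assignment is the partition's
    linear part; the remaining operations form its tree part.
    """
    prod, m = 1, 0
    while prod < p and m < len(domains):
        prod *= len(domains[m])
        m += 1
    return list(product(*domains[:m]))

doms = [["D1", "D2"], ["E1", "E2", "E3"]]   # Figure 6-like domains
assert len(make_partitions(doms, 2)) == 2   # case a): 2 threads
assert len(make_partitions(doms, 4)) == 6   # 4 threads: the smallest
                                            # sufficient product is 2*3
```

This reproduces the example discussed below: with 4 threads on these domains, the smallest sufficient product is 6, so more partitions than threads are created.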

In this general principle, point 2 summarizes the two guiding principles introduced before. The idea in the last point is to provide both a solution for work balance and a simple approach for the creation of partitions. However, the simplicity pursued by the principle has a potential drawback: in practice, the number of partitions can exceed the number of threads. For example, in Figure 6, this principle proposes to create 6 partitions when we have 4 threads. In such cases, how do we define the work done by each thread?

Our answer is to use the work stealing technique (Blumofe & Leiserson, 1999). At the beginning, we create the threads and a pool of services partitions (PSP) that comprises the sub-trees of the search space; details about this data structure will be provided later. The threads then repeatedly steal sub-trees from the pool whenever they have nothing to process. In what follows, we formalize these executions through two models: the one-level and the multi-level models.

5.3 The one-level execution model

5.3.1 Description

In this model, the parallel algorithm starts by creating threads. We refer to the first created thread as the master and to the others as the workers. The first master task is the creation of the PSP; that is, it partitions the services' implementations tree. Then, the master adopts the behavior of the workers. The worker execution consists of stealing a task from the PSP and then searching the corresponding sub-tree for the best solution, using the sequential backtracking algorithm. Once a local optimum is found, threads (workers or master) update the current global optimum, which is the best result found so far for the service selection problem. This value is subsequently compared with lower bounds computed on partial evaluations in order to reduce the search space. The functioning of the workers is summarized in Figure 7-A).

In the [Stealing] state, the worker tries to retrieve a partition from the PSP. If the steal succeeds (there are unprocessed partitions), the worker enters the [Processing] state; otherwise, its execution ends. In the [Processing] state, the worker checks whether the lower bound resulting from the evaluation of the stolen linear part satisfies the SLA constraints. If there is no violation, the worker runs the sequential backtracking algorithm; otherwise, it records that it has no value to update and returns to the [Stealing] state. Once it has found a local optimum, it enters the [Updating] state, where it updates the current global optimum. The master thread has the same functional structure; the only difference is that in the [Initializing] state, the master creates the partitions.

In this execution model, there are two main global variables: the current global optimum and the PSP. In our implementation, concurrent access to these variables is controlled by a mutual exclusion variable.
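The two shared variables can be sketched as follows (a simplified illustration with our own names, assuming the objective is minimized):

```python
import threading

class SharedState:
    """Illustrative sketch of the two global variables of the
    one-level model: the PSP (pool of services partitions) and the
    current global optimum, both guarded by mutual exclusion."""

    def __init__(self, partitions):
        self._lock = threading.Lock()
        self._pending = list(partitions)   # unprocessed sub-trees
        self.best = float("inf")           # current global optimum

    def steal(self):
        """[Stealing] state: pop an unprocessed partition, or None
        when the pool is exhausted (the worker then terminates)."""
        with self._lock:
            return self._pending.pop() if self._pending else None

    def update(self, local_opt):
        """[Updating] state: merge a thread's local optimum into the
        current global optimum (minimization assumed here)."""
        with self._lock:
            self.best = min(self.best, local_opt)

state = SharedState([("D1",), ("D2",)])
assert state.steal() is not None
state.update(42.0)
assert state.best == 42.0
```

In the C++ implementation mentioned in the experiments, the same effect is obtained with pthread lock primitives around the pool and the global optimum.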

We have described the key points of the one-level model. It is important to notice that this model has many similarities with the busy-leaves algorithm proposed in (Blumofe & Leiserson, 1999) for the general scheduling of multithreaded computations; the difference is that we specify here a particular mechanism for the creation of parallelism (domain decomposition). Below, we analyze the speedup that it can achieve in extreme situations.

Fig 7: Thread states and partition pool

5.3.2 Best and worst cases of the one-level model

The run of the backtracking algorithm is an exploration of complete and partial assignments. Let us assume that the PSP depth is i ∈ {1,…,m−1}. While the complete assignments correspond to branches of the PSP, partial assignments are sub-branches that, starting from the root node, end at a depth i. Given a service composition problem, let us assume that the mean runtime required for exploring a branch until depth i is α(i). Naturally, α(i) < α(i+1): the deeper we explore a branch, the greater the exploration runtime. Let us assume that the sequential backtracking algorithm explored W1^c complete assignments and W1^p(i) partial assignments until depth i. Then, we can approximate the runtime of the sequential backtracking algorithm by the formula:

T1 ≈ W1^c·α(m) + W1^p(i)·α(i)

Let us assume that the parallel algorithm runs with p threads and that the computations are well balanced. In this setting, we can characterize its runtime by the formula:

Tp ≈ (1/p)·(Wp^c·α(m) + Wp^p(i)·α(i)) + Op

Here Op is an overhead induced by synchronization; in the best-case scenario, Op is negligible. In the general case, it is intuitive to consider that Wp^c = W1^c; in the best case, however, we have Wp^c < W1^c. Indeed, let us assume that the optimal solution is found in the first branch of the pth partition. In this case, all threads will quickly have a strong lower bound for their computations. Due to this better bound, some assignments that were completely explored in the sequential case will only be partially evaluated in the parallel case. Consequently, we might have Wp^c < W1^c. This observation can be generalized as follows. In the best case, some assignments that were completely explored until a depth i' will only be explored until a depth i'' < i'. Since α(i'') < α(i'), we will have

Wp^c·α(m) + Wp^p(i)·α(i) < T1.

Consequently, in the best case the speedup Sp = T1/Tp > p: we have a super-linear speedup (with Sp = p, we have a linear speedup).
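The best-case argument can be checked on purely illustrative numbers (hypothetical workloads, not measured data): a strong bound found early turns some complete explorations, costed at α(m), into cheaper partial ones, costed at α(i), and the speedup then exceeds p:

```python
# Illustrative numbers only (not measured data).  Here m = 5, i = 3.
alpha = {3: 2.5, 5: 6.0}       # mean cost of exploring to a given depth
W1c, W1p = 1000, 4000          # sequential: complete / partial explorations

T1 = W1c * alpha[5] + W1p * alpha[3]          # sequential runtime

p = 4
# Best case: a strong lower bound, found early, turns 600 complete
# explorations (depth 5) into partial ones (depth 3).
Wpc, Wpp = W1c - 600, W1p + 600
Tp = (Wpc * alpha[5] + Wpp * alpha[3]) / p    # overhead Op neglected

assert T1 / Tp > p             # super-linear speedup in the best case
```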

In the worst case, the computations are not well balanced. Suppose we have p partitions such that in p−1 of them, an SLA violation can be detected by exploring the first assignment until depth 1. The p−1 threads will steal such partitions and end their executions quickly; the computations will therefore be concentrated in one thread. We will have:

Tp ≈ W'1^c·α(m) + W'1^p(i)·α(i) − (p−1)·α(1) + O'p

Let us observe that here we might even have W'1^c > W1^c, because the lower bounds will not necessarily be visited in the order of the sequential algorithm. Consequently, the speedup Sp ≈ 1. With such a speedup, there is no interest in parallelization. Below, we will see how to overcome this worst-case situation.

5.4 The multi-level model

The worst-case situation of the one-level model is exacerbated by the fact that p−1 threads are free while one thread is busy. One can avoid this situation if the busy thread can share its work with the others; this is the main idea behind the multi-level model. Here, the PSP is no longer global but local to each thread. For the master, the pool is created in the [Initializing] state. For a worker, the first pool is created in the [Processing] state: once a partition is stolen, the worker first subdivides it into p'' partitions according to Principle 1, then processes each partition using the backtracking algorithm. When a thread finishes processing and updating its PSP, it requests another partition from a chosen thread.

In these executions, the worst-case situation of the one-level model is delayed, since free threads can steal work from busy ones. A question the model introduces is the choice of the thread to steal from: more precisely, when entering the [Stealing] state, to which thread must a request be sent? Our choice is to select the target randomly; any free thread can steal work from a busy one.

The multi-level model has a drawback: there might be several rounds of partition creation, which is time consuming. To limit the number of creations, we introduce a parameter l that refers to the partitioning level, or granularity. Put simply, a partition created from the entire search tree is at level 1; from a partition of level 1, we create partitions of level 2, and from them partitions of level 3. Finally, let us remark that when l = 1, we recover the one-level model. Next, we discuss the scalability of the proposed models.
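The local re-partitioning performed by a worker in the multi-level model can be sketched as follows (self-contained, with our own illustrative names): a stolen partition, identified by its fixed prefix, is split again over the next operations' domains, exactly as in Principle 1; the level parameter l simply bounds how many times this subdivision is re-applied.

```python
from itertools import product

def subdivide(prefix, remaining_domains, p):
    """Split a stolen partition (the partial assignment `prefix`)
    into sub-partitions over the shortest further prefix whose
    domain-size product reaches p (Principle 1 applied locally).
    The result fills the thread's local PSP."""
    prod, m = 1, 0
    while prod < p and m < len(remaining_domains):
        prod *= len(remaining_domains[m])
        m += 1
    return [prefix + ext for ext in product(*remaining_domains[:m])]

# A worker stole the partition fixing D = D1; with p'' = 2 it splits
# it over Co(E) = {E1, E2, E3}:
subs = subdivide(("D1",), [["E1", "E2", "E3"], ["F1", "F2"]], 2)
assert len(subs) == 3 and subs[0] == ("D1", "E1")
```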

5.5 Discussion of scalability

In a general manner, the scalability of a parallel algorithm captures its capacity to process larger inputs by using more resources while maintaining efficiency. A critical question when addressing scalability is the measure of the problem input size. In our case, the inputs are given in the definition of the service selection problem: we have an HSG and the domains of service implementations. The question of scalability can then be addressed in two manners: (a) how can we adjust our parallel algorithm to use more resources when the operations' graph is larger? (b) what are our options when the set of possible implementations is larger?

For the former question, let us notice that we proposed, through Principle 1, to partition the operations' graph so as to have enough work for each thread. More precisely, given an operations' graph whose operations are o1,…,on, the principle guarantees that we can create up to p = |Co(o1)|×…×|Co(on)| threads. Moreover, using the multi-level model, we can create even more parallel tasks. As one can notice, the creation of useful parallelism in our approach is related to the number of operations. Consequently, it is reasonable to conclude that we can create more threads with useful work on larger graphs. According to our speedup analysis, this also means that we can expect a greater speedup on larger graphs.

We can formulate a similar answer for the case where the domains of possible implementations are larger: the bigger the domains, the greater the number of threads we can create in the one-level model.

In conclusion, the creation of parallelism in our models ensures that we can increase the number of threads, and thus gain in speedup, when the inputs of the service selection problem grow. This is what we need for scalable algorithms.

6. Experimental evaluation

In the experiments, we evaluated parallel resolutions of the service selection problem. We assumed that in the sequential resolution, the NeLOs are generated so as to evaluate the objective functions and constraints as soon as possible. We conducted several experiments with two purposes: the first was to characterize the speedup that can be expected from the one-level model; the second was to compare different settings for the multi-level parallelization.

6.1 Speedup in the one-level model

For the experiments, we focused on the special case of service compositions that implement workflows. We chose two workflows from the Pegasus database (Metha, 2003), which we used to build the operations graphs of our compositions.

One can criticize the choice of these workflows on two points: (1) we chose small or medium workflow graphs; (2) the chosen graphs are regular. On the first point, let us recall that the problem is NP-hard; since we provide an exact solution, our expectation is to obtain real-time results on small or medium problem instances. On the second point, it is important to notice that the regularity of the graph does not modify the runtime of our algorithm; what matters is the set of service implementations.

From each workflow, we created a first set of 200 problem instances of service compositions. In the instances, each workflow activity corresponds to an operation for which we have d distinct implementations; d was set to 40 for the Genelife workflow and 8 for the Motif workflow. Each implementation is characterized by its service response time and energy consumption. The response time S was drawn uniformly between 1 and 1500 ms, and the energy consumption was computed from the formula E = P.S, where P is a power consumption value drawn uniformly between 100 and 150. Finally, for each problem we used the SLA configuration MaxS = 3500, MaxE = 7000.
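The instance generation just described can be sketched as follows (an illustrative reconstruction; names and structure are ours, not the paper's generator):

```python
import random

def make_instance(n_ops, d, seed=0):
    """Sketch of the experimental instance generator: each of the
    n_ops operations gets d candidate implementations, with response
    time S drawn uniformly in [1, 1500] ms, power P drawn uniformly
    in [100, 150], and energy E = P * S."""
    rng = random.Random(seed)
    instance = []
    for _ in range(n_ops):
        impls = []
        for _ in range(d):
            s = rng.uniform(1, 1500)     # response time (ms)
            p = rng.uniform(100, 150)    # power consumption value
            impls.append({"S": s, "P": p, "E": p * s})
        instance.append(impls)
    return instance

inst = make_instance(n_ops=5, d=8)       # Motif-like setting: d = 8
assert len(inst) == 5 and len(inst[0]) == 8
```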

Fig 9: Super linear occurrences in the one-level model

The value of λ in the services' composition problem (see Section 3) was set to 0.5. We subdivided the problem instances into 5 classes of 40. For each class, we ran the sequential backtracking algorithm and the parallel algorithm (one-level model) with 2, 4, 8, 16 and 32 threads. In Figure 8, we depict the mean speedup obtained with the various numbers of threads. Let us again recall that the speedup is the ratio between the runtime of the sequential algorithm and that of the parallel one.

Fig 10: Speedup in the 2-level model

Compared to the theoretical (linear) speedup that can be expected, we notice that the one-level model leads to a near-linear speedup. In detail, however, we observed super-linear cases; the statistics of these occurrences are reported in Figure 9.


Fig 11: Acceleration of the 2-level over the one-level model

We explain the occurrence of super-linear speedups with the analysis made in Section 5.3.2. One can also observe that the more threads we create, the smaller the efficiency of the parallel algorithm (the distance between practical and theoretical speedup). This, however, is a classical phenomenon in parallel computing: more threads imply more synchronization.

Fig 12: Speedup in the 3-level model.

We also ran additional experiments with the multi-level models. The objective was to see whether they lead to any improvement. Setting the granularity of parallelism to 2, we clearly observed an improvement for the Genelife workflow. This is summarized in Figures 10, 11, 12 and 13, where we computed the ratio between the speedup of the two-level model and that of the one-level model. This means that working on small partitions can in some cases be beneficial. However, with a granularity set to 3, we did not notice any particular improvement.

Fig 13: Acceleration of the 3-level over the one-level model.

Our explanation is that in these settings, the benefits obtained from load balancing by subdividing the search space were counterbalanced by the cost of parallelism creation and synchronization. The experiments were implemented in C++ with the pthread library; we used its lock primitives for synchronization.

7. Conclusions

As middleware evolves, new challenging computational problems arise for improving the servicing of user requests. In the context of large distributed systems, we believe that the resolution of these problems must adopt a balanced view based on sequential optimization and parallelism. In this paper, we developed this view on the service selection problem; our results show that we might expect super-linear speedups in the optimal resolution of this problem.

To continue this work, we intend to push further the frontiers of the intelligent techniques that can be used for solving this problem. In particular, in a real-time setting, a user SLA can be formulated as a maximal time allowed for the resolution of the service selection problem. In this context, we must modify the way we optimize and parallelize the service selection problem so as to obtain a contract-based algorithm that, depending on the maximal resolution time, proposes the best solution that can be expected.

Regarding intelligence in the resolution again, let us notice that we formulated a sequential resolution that can be tuned differently (on the evaluation of objective values and constraints). An interesting question that follows is how to decide on the best configuration to use when solving each instance of the service selection problem. We believe that selecting a single configuration is not the best option: it is more interesting to investigate the development of a cooperative search approach (Parkes & Huberman, 2001) in which various configurations are run concurrently.

8. Acknowledgment

The experiments conducted in this work were done on the SMP nodes (40 cores) of the Magi cluster at the University of Paris 13. http://www.univ-paris13.fr/calcul/wiki/

9. References

Hennig, P., Balke, W. (2010) Highly Scalable Web Service Composition Using Binary Tree-Based Parallelization, International Conference on Web Services (ICWS), pp. 123-130.

Pathak, J., Basu, S., Lutz, R., Honavar, V. (2006) Parallel Web Service Composition in MoSCoE: A Choreography-Based Approach, ECOWS, pp. 3-12.

Bartolos, P., Bielikova, M. (2009) Semantic Web Service Composition Framework Based on Parallel Processing, Seventh IEEE International Conference on E-Commerce Technology (CEC), IEEE, pp. 495-498.

Platzner, M., Rinner, B., Weiss, R. (1996) Exploiting Parallelism in Constraint Satisfaction for Qualitative Simulation, J.UCS The Journal of Universal Computer Science, Springer, pp. 811-820.

Blumofe, R., Leiserson, C. (1999) Scheduling Multithreaded Computations by Work Stealing, Journal of the ACM, vol. 46, num. 5, pp. 720-748.

Li, F., Yang, F., Su, S. (2006) On Distributed Service Selection for QoS Driven Service Composition, EC-Web, pp. 173-182.

Li, X., Wu, J., Lu, S. (2013) QoS-Aware Service Selection in Geographically Distributed Clouds, ICCCN, pp. 1-5.

Alrifai, M., Risse, T., Nejdl, W. (2012) A Hybrid Approach for Efficient Web Service Composition with End-to-end QoS Constraints, ACM Trans. Web, vol. 6, num. 2, pp. 1-31.

Beran, P., Vinek, E., Schikuta, E., Leitner, M. (2012) An Adaptive Heuristic Approach to Service Selection Problems in Dynamic Distributed Systems, ACM/IEEE 13th International Conference on Grid Computing, pp. 66-75.

Yu, T., Lin, K.-J. (2004) Service Selection Algorithms for Web Services with End-to-End QoS Constraints, CEC, pp. 129-136.

Ngoko, Y., Goldman, A., Milojicic, D. (2013) Service Selection in Web Service Compositions: Optimizing Energy Consumption and Service Response Time, Springer Journal of Internet Services and Applications, vol. 4, num. 19, pp. 1-12.

Goldman, A., Ngoko, Y. (2012) On graph reduction for QoS prediction of large web service compositions, International Conference on Services Computing (SCC), IEEE Press, Hawaii, pp. 258-265.

Ngoko, Y., Cérin, C., Goldman, A., Milojicic, D. (2014a) Backtracking algorithms for service selection, CoRR, arXiv:1402.1309, pp. 1-31.

Ngoko, Y., Cérin, C., Goldman, A. (2014b) Graph reduction for QoS prediction of cloud services' composition, Int. J. of Business Process Integration and Management, vol. 7, num. 2, Inderscience, pp. 89-102.

Ngoko, Y., Cérin, C., Goldman, A. (2014c) A multithreaded resolution of the service selection problem based on domain decomposition and work stealing, SCC, IEEE Press, Anchorage, pp. 424-431.

OAR2 (2012) from http://oar.imag.fr

Slurm: A highly scalable resource manager (2011) from http://computing.llnl.gov/linux/slurm

Parkes, D., Huberman, B. (2001) Multiagent Cooperative Search for Portfolio Selection, Games and Economic Behavior, vol. 35, num. 2, pp. 124-165.

Dean, J., Ghemawat, S. (2008) MapReduce: Simplified Data Processing on Large Clusters, Comm. ACM, vol. 51, num. 1, pp. 107-113.

Metha, J. G. (2003) The Pegasus workflow generator, from https://confluence.pegasus.isi.edu


10. Authors

Yanik Ngoko received his B.Sc. in computer science from the University of Yaoundé I (UYI), Cameroon, his M.Sc. in computer science also from UYI, and his doctorate in computer science from the Institut National Polytechnique de Grenoble, France. He is currently a researcher at the Laboratoire d'Informatique de Paris Nord (France). His research interests include parallel and distributed computing, web services and energy modeling in cloud computing.

Christophe Cérin has been a Professor of Computer Science at the University of Paris 13, France since 2005. He has served the IEEE Computer Society for many years in different positions, varying from Chair of the France Computer Society chapter to the organization of meetings and serving as Financial Chair for different CS-sponsored or co-sponsored conferences. His industrial experience is currently with the Resilience project (Nexedi, Morpho, Alixen, Vifib) related to cloud computing. His research interests are in the field of high performance computing, including grid computing. He develops middleware, algorithms, tools and methods for managing distributed systems.

Alfredo Goldman received his B.Sc. in applied mathematics from the University of São Paulo (USP), Brazil, his M.Sc. in computer science also from USP, and his doctorate in computer science from the Institut National Polytechnique de Grenoble, France. He is currently an associate professor in the Department of Computer Science at USP. His research interests include parallel and distributed computing, mobile computing and grid computing.


iConCube: A Location-based mobile cloud system for meeting organizers and participants

Guo Chi

Global Navigation Satellite

System Research Center

Wuhan University

Wuhan, China

[email protected]

Zeng Jieru

Computer School

Wuhan University

Wuhan, China

Liu Xuan

School of Remote Sensing

and Information

Engineering

Wuhan University

Wuhan, China

Cui Jingsong†

Computer School

Wuhan University

Wuhan, China

[email protected]

Abstract—Location-Based Services based on thematic information have become commonplace, and thematic indoor navigation services are increasingly pervasive with the development of indoor positioning technologies. Extending these technologies, we designed and implemented iConCube, a mobile cloud computing system for meeting organizers and participants that integrates Location-Based Services and indoor navigation. For meeting organizers, our system provides a template library to efficiently customize meeting portal websites for different topics and meeting types, such as academic conferences or art exhibitions. For meeting participants, the Android application in our system not only provides indoor navigation in a meeting space, but also enhances communication among participants through online-to-offline social network services. The system also extracts information from the Internet related to users' locations and pushes it to users in sequence based on its similarity to a location's social attributes. Our system has been used for many academic conferences, such as the China Satellite Navigation Conference (CSNC), demonstrating its utility.

Keywords—meeting LBS; indoor navigation; social service;

location-based information; cloud computing

I. INTRODUCTION

Location-based Services (LBSs), a distinctive feature of the Mobile Internet, have been moving towards the provision of thematic and specialized services. In one thematic instance, an LBS can provide navigation services using a user's location, but it can also provide personalized and intelligent information services to users [1]. LBSs integrate location information with other information to provide thematic services. Moreover, advances in the portability and capability of mobile devices have brought rich mobile application experiences to end users [2]. We extended these capacities by developing a mobile cloud computing system for meeting arrangements, called iConCube.

Our mobile cloud system for meeting arrangements was designed for two classes of users: meeting organizers and participants. The system addresses the following problems. For meeting organizers, there are few templates allowing rapid customization of a meeting-information publishing platform for different meeting topics such as academic conferences or automobile exhibitions; every time a new meeting is held, organizers have to design and build a new portal website from scratch, which is an inefficient way to publish meeting information. For meeting participants, there are three problems: 1) it is difficult for them to find the quickest route to a meeting room from meeting-space floor plans alone; 2) the interaction between participants cannot easily be extended from online (i.e., in the meeting session) to offline; 3) meeting documents in paper form, the traditional medium of communication between participants and meeting organizers, allow neither real-time updates nor an information push service based on participants' demands.

We adopt cloud computing technology and other LBS

technologies to develop a mobile cloud system for organizing

meeting arrangements. There are two contributions this paper

makes:

1. We developed a mobile cloud system for meeting

arrangements named iConCube, which uses the cloud

computing platform OpenStack as the virtual machine

management platform, and introduced the framework for

the system design. The template library inside the system

allows rapid customization of various types of publishing

platforms for meeting information that users can easily

manage through the OpenStack web-based dashboard.

2. We introduced a design for services supplied by the

mobile application in the system. Based on hybrid

positioning technology and the location semantic

awareness technique [3], the mobile application, which

runs on users' mobile devices, provides four services: an

indoor navigation service, a meeting passbook, an

online-to-offline (O2O) social service, and a location-based

information push service.

Our system has been used for many academic conferences

such as the China Satellite Navigation Conference (CSNC)

and the National Internet Information Security Forum (Xdef).

According to the International Committee on Global Navigation

Satellite Systems (ICG), CSNC is now one of the three major

international satellite navigation conferences; the fifth CSNC

drew more than 2,000 participants and nearly 150 exhibitors.

We introduce the system and report experiments conducted at CSNC.

The paper is organized as follows. Section 2 presents an

overview of the framework of the mobile cloud system for

meeting arrangements. Section 3 describes the design of the

four services provided by the mobile application and gives an

application example. Section 4 reviews related work, and

section 5 concludes the paper.

II. SYSTEM FRAMEWORK

Access to a meeting portal website varies significantly over

a short time. We count each service request as one access. The

access counts for the CSNC 2013 portal website, from two

months before the conference to three days after it (Fig. 1),

show wide fluctuation. CSNC 2013 was held for five days, from

May 13th to 17th, and its portal website went online on March 1st.

The daily access count stayed below 100 before May 1st and

rose afterwards, reaching around 20,000 per day from May 10th

to 13th. Finally, it fell to almost zero per day after May 20th.

A website requires enough physical machines to avoid breaking

down when the maximum number of users access it. Two

characteristics of cloud computing, Auto-Scaling and High

Availability (HA), can accommodate varying workloads and

facilitate recovery after system failures.

Fig. 1. Daily access counts for the CSNC 2013 portal website

Auto-Scaling: Cloud systems rely on virtualization

techniques to allocate computing resources on demand [4].

Auto-Scaling allocates resources dynamically according to the

current demand on a website.
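The scaling decision itself can be as simple as a threshold rule. The following sketch is illustrative only, not the OpenStack implementation; the per-VM capacity figure and pool bounds are assumptions:

```python
import math

def desired_instances(requests_per_day, capacity_per_vm=5000,
                      min_vms=1, max_vms=10):
    """Threshold rule: run enough VMs for the observed load,
    clamped to the allowed pool size [min_vms, max_vms]."""
    needed = math.ceil(requests_per_day / capacity_per_vm)
    return max(min_vms, min(max_vms, needed))
```

Under these assumed numbers, the Fig. 1 workload would keep the pool at one VM in March and grow it to four at the May peak before shrinking back.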

High Availability (HA): Generally, most meetings last for one

or several days, up to one week. If the server hosting a

meeting's portal website becomes unavailable, the recovery

process may take at least one day, which would greatly affect

meeting arrangements and participants. In a cloud computing

system, HA can detect failures as they occur and ensure the

system recovers quickly.

Fig. 2 shows the framework of the mobile cloud system for

meeting arrangements. Combining the client-server (C/S) and

browser-server (B/S) models, the system comprises two modules:

the server, and the client and browser. The server module

includes two parts: the bottom layer of the cloud system and

the template library. With virtualization technology, one or

more physical machines, such as physical servers, storage

devices, and network devices, can be configured and partitioned

into multiple independent virtual machines (VMs). Built upon

the open-source cloud computing platform OpenStack, the bottom

layer of the cloud system is responsible for VM management.

The web cloud management platform in the bottom layer is a

web-based user interface to the virtual machine manager.

Through it, system administrators can create, schedule,

suspend, and stop VMs and back up data for users. For example,

system administrators log in to the web cloud management

platform via a web browser to create a VM and database for a

meeting, and schedule resources elastically in accordance with

user demand. When a meeting is over, system administrators

suspend or stop the VMs and back up data with the permission

of the meeting organizers.

The system provides a meeting template library as the

foundation for this cloud computing platform. The meeting

template library was built into the VMs according to the

requirements of diverse meeting topics and types, and provides

templates for organizers to use when designing a meeting portal

website. Meeting topics are virtually endless, spanning

academic, artistic, and economic fields, and meetings likewise

come in many types, such as conferences, forums, and

conventions. For example, for academic meetings, the template

offers a web-based paper submission system: authors can upload

their papers, while reviewers can download papers and submit

and update their reviews. For an automobile exhibition, the

corresponding template shows all exhibitors' information and

highlights the latest exhibits to attract more attendees. An

art exhibition portal website built from the template library

lets users share comments about pictures on popular social

services such as Facebook or Twitter, and express which

pictures they like with a like button. Based on user comments,

organizers can compile a list of the top ten favorite pictures

and show it on the website.


The system's client and browser module includes an Android

application and the meeting portal website for prospective

users. Meeting participants can use the Android application on

mobile devices such as smartphones and tablets. Unlike a

traditional C/S client, the client in our system is an

interface for accessing cloud computing system services, such

as the indoor navigation service and the location-based news

push service; the server returns results to the client. For

meeting organizers, the management back end is a secondary

development of Horizon, the canonical implementation of the

OpenStack Dashboard. Organizers use accounts assigned by system

administrators to log in to the management back end and choose

a template for the portal website design.


Fig. 2. Framework of our system

III. MOBILE APPLICATION SERVICES

It is a common problem that meeting participants cannot

find their way to a meeting room or forget the meeting

schedule and miss an important session. Communication between

participants is limited during a meeting and often happens only

in small groups. To solve these problems, the Android

application in our mobile cloud computing system for meeting

arrangements offers four services: an indoor navigation and

meeting passbook service, an O2O-based social network service,

a meeting topic information service, and a location-based news

push service.

A. Indoor Navigation and Meeting Passbook Service

Generally, since meetings are held in places most participants

have never been, participants are unfamiliar with the venue and

urgently need indoor navigation to avoid getting lost. The

system adopts Quick Response code (QR code) location technology

to locate participants inside conference and convention centers.

In contrast to traditional indoor location technologies such as

pseudo-satellite, WiFi, and Cell-ID location technology, QR code

location technology has the following advantages for meeting

venue positioning: 1) It is low cost. Because meeting venues are

typically rented for temporary use, meeting organizers are not

allowed to deploy pseudo-satellites inside; even where permitted,

such systems are prohibitively expensive, and WiFi installation

is similarly costly. 2) It does not require advanced hardware

support. Even participants whose mobile phones lack GPS chips

can locate themselves in the meeting venue, so QR code location

technology makes location services available to the largest

number of participants. 3) It offers high positioning accuracy.

Trevisani [5] found that Cell-ID performance is strongly

influenced by unpredictable factors such as communication load,

noise, and multipath propagation. Because coordinate information

from the meeting-space floor plans is stored in the QR codes,

participants can find their location on the floor plan simply by

scanning a code.

The meeting indoor navigation service provides users with two

types of information: their current location, and the route from

that location to a destination. To determine a user's current

location, we chose landmarks in the meeting place such as

meeting room entrances, elevators, the lounge hall, and toilets.

Each landmark's x and y coordinates on the floor plan grid are

encoded into five bits, as seen in Fig. 3. Using the ZXing

library, which supports decoding and generating QR codes, the

position codes derived from the coordinates are stored as QR

codes, and the codes are displayed at the locations they

describe. The correspondence between coordinate information and

position codes is managed in the database. However, because

mobile phones have different screen resolutions, the x and y

coordinates of the same location differ between floor plan grids

at different resolutions. Thus, to support various mobile

phones, grids for floor plans at multiple resolutions are stored

in the database. Given a mobile phone screen resolution of width

w and height h, the coordinates (x, y) on the floor plan grid at

this resolution are computed as follows:

(x, y) = (x₀, y₀) + (Δx, Δy)    (1)

Since the coordinates of the upper-left corner (x₀, y₀) at

resolution w × h and (x̄₀, ȳ₀) at the standard resolution

w̄ × h̄ are both provided, the relative offset (Δx, Δy) is

given as:

(Δx, Δy) = ((w / w̄)(x̄ − x̄₀), (h / h̄)(ȳ − ȳ₀))    (2)

where w̄ and h̄ are the width and height of the standard

resolution, and (x̄, ȳ) is the coordinate information stored in

the QR code on the standard-resolution floor plan grid.
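Equations (1) and (2) amount to a linear rescaling of the QR-stored coordinate onto the device's grid. A minimal sketch in Python (function and argument names are ours, not from the system):

```python
def scale_coordinates(qr_xy, std_origin, dev_origin, std_res, dev_res):
    """Map a QR-stored coordinate from the standard-resolution floor
    plan grid onto a device's grid, per equations (1) and (2)."""
    x_bar, y_bar = qr_xy          # (x-bar, y-bar) stored in the QR code
    x0_bar, y0_bar = std_origin   # upper-left corner, standard resolution
    x0, y0 = dev_origin           # upper-left corner, device resolution
    w_bar, h_bar = std_res        # standard resolution (width, height)
    w, h = dev_res                # device resolution (width, height)
    dx = (w / w_bar) * (x_bar - x0_bar)   # equation (2), x component
    dy = (h / h_bar) * (y_bar - y0_bar)   # equation (2), y component
    return (x0 + dx, y0 + dy)             # equation (1)
```

For the example position (23, 142) in Fig. 3, doubling both dimensions simply doubles both coordinates.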

As shown in Fig. 3, the process of planning a route from a

user's current location to the destination meeting room is as

follows: the user scans the QR code with the mobile phone's

camera and chooses a desired meeting room. The application

converts the QR code to position codes and transmits the

position codes, the destination, and the phone's screen

resolution to the server. The server returns the meeting floor

plan with two place marks and a route.
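The paper does not specify the server's routing algorithm; one plausible sketch is a breadth-first search over the walkable cells of the floor plan grid, which yields a shortest route in steps:

```python
from collections import deque

def shortest_route(grid, start, goal):
    """BFS over a floor-plan grid: grid[r][c] == 0 means walkable.
    Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []          # walk predecessors back to the start
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None
```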

Fig. 3. Framework of the meeting indoor navigation service

During the meeting, the application not only navigates users to

a destination but also provides a passbook service. Users

sometimes want to move between rooms to attend different

sessions held at the same time. By following a session in the

meeting passbook, users receive a reminder when it is about to

start, so they will not miss any session they want to attend.
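The reminder check reduces to comparing each followed session's start time with the current time. A sketch of that logic (the 15-minute lead time is our assumption, not a documented system parameter):

```python
from datetime import datetime, timedelta

def due_reminders(followed, now, lead=timedelta(minutes=15)):
    """Return titles of followed sessions starting within `lead` of
    `now`; `followed` is a list of (title, start_datetime) pairs."""
    return [title for title, start in followed
            if timedelta(0) <= start - now <= lead]
```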

B. Meeting O2O-based Social Network Service

Fig. 4. Paper citation network of meeting participants

We collected a sample of participants from the China Satellite

Navigation Conference (CSNC) 2013 and built the paper citation

network seen in Fig. 4. Two participants who have written papers

together are connected by a dark blue bold line, while two who

have only cited each other's papers are connected by a light

blue thin line. Fig. 4 shows that meeting participants form a

miniature social network with strong relationships. However, the

traditional meeting arrangement system, based on the

Business-to-Customer (B2C) model, limits communication among

participants. Paper business cards are incompatible with mobile

phones: they become usable only after participants manually

enter the contact information into their phones. Moreover,

social networking has been integrated as a core enabler of

enterprise applications to facilitate effective communication

[6]. Our application therefore offers users a social network

platform.
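The two edge types in Fig. 4 can be modeled as a weighted undirected graph. The following sketch (our own encoding, not the paper's) gives co-authorship edges weight 2 and citation-only edges weight 1, so a co-authorship link overrides a citation link between the same pair:

```python
def build_network(coauthor_pairs, citation_pairs):
    """Undirected weighted edges keyed by the unordered pair of
    participants: weight 2 = co-authors (bold line in Fig. 4),
    weight 1 = citation only (thin line)."""
    edges = {}
    for a, b in citation_pairs:
        edges[frozenset((a, b))] = 1
    for a, b in coauthor_pairs:
        edges[frozenset((a, b))] = 2   # co-authorship dominates
    return edges
```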

Based on the Online-to-Offline (O2O) model, the meeting social

network service shown in Fig. 5 provides an online interactive

platform where users can share their impressions, comments, and

photos of the meeting as well as their electronic business cards

(E-Cards). E-Cards can be imported directly into the mobile

phone address book, catalyzing offline communication. It is a

helpful tool for participants seeking to build relationships

and networks.

Fig. 5. Demonstration of the meeting O2O-based social network service

C. Meeting Topic Information Service

From a meeting organizer's perspective, a meeting must explore

its topics in depth and expand its influence by creating buzz

among participants. The topic information service delivers

subject-matter information posted by relevant users on Sina

Weibo, a Chinese microblogging website.

Fig. 6 illustrates the implementation of the topic information

service. First, system administrators register a developer

account on Weibo and apply for an application programming

interface key (API key) that identifies the developer. Using the

Weibo collection component, the account is used to create a

group comprising the Weibo users it follows. Based on the API

key transmitted from the meeting application, the Weibo server

returns the members of the group and the messages they have

posted in the form of an HTML page. Application users can choose

to read all messages or only messages from selected users. The

Weibo users in the group can be changed in tandem with the

meeting topics to keep the information delivered by this service

up to date.
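The "all messages or only selected users" choice is a client-side filter over the fetched group timeline. A sketch of that step (the data shape, a list of (user, text) pairs parsed from the returned page, is our assumption):

```python
def filter_messages(messages, selected_users=None):
    """Show every fetched message, or only those posted by the
    users the application user selected."""
    if not selected_users:
        return list(messages)
    return [(user, text) for user, text in messages
            if user in selected_users]
```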

Fig. 6. Demonstration of the meeting topic information service

D. Location-based Information Push Service

Meeting participants might want to know about activities

around meeting sites. However, information found on the

Internet is rarely categorized according to location. Based on

user’s current location, the location-based information push

service orders Internet information not only from near to far in

geographical distance, but also in ascending order of social

distance.

The framework of the location-based information push service in

Fig. 7 is divided into three modules: a positioning module, a

social distance computation module, and a location-based

information aggregation module. In the positioning module, the

user's current location is detected by hybrid positioning

technology integrating GPS, Cell-ID, and WiFi location

technology. The system then searches the database for geo-fences

within a certain geographical radius of the user's location.

Geo-fences stored in the database are generated as circles of a

given radius around points of interest.
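The radius search can be sketched with the standard haversine great-circle distance (the 500 m radius and the data layout below are assumptions for illustration):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def fences_in_range(user, fences, radius_m):
    """Names of geo-fences whose centres lie within radius_m of user;
    fences is a list of (name, (lat, lon)) pairs."""
    lat, lon = user
    return [name for name, (flat, flon) in fences
            if haversine_m(lat, lon, flat, flon) <= radius_m]
```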

In the social distance computation module, adapting a location

semantic awareness technique [3], the system perceives the

social attributes of the user's location and of the geo-fences

extracted from the database, then computes the social distance

from the similarity among the locations' social attributes. For

example, suppose the user's current location is the School of

Computer Science at Wuhan University, and the geo-fences within

a circle centered on the school are extracted. The School of

Computer Science is geographically much closer to the School of

Chemistry and Molecular Science than to the School of Remote

Sensing and Information Engineering, but in social distance the

latter is much more relevant, so the two are assigned to the

same information sciences cluster. As seen in Fig. 7, the

current user location and the geo-fences are classified into

four clusters according to their social attributes: Information

Science, Faculty of Science, Engineering Science, and Literae

Humaniores. A student of the School of Computer Science is very

unlikely to be interested in information about Literae

Humaniores.
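One simple way to quantify this notion, assuming each location carries a set of social-attribute labels, is one minus the Jaccard similarity of the two sets (our formulation for illustration; the exact measure in [3] may differ):

```python
def social_distance(attrs_a, attrs_b):
    """1 - Jaccard similarity of two locations' social-attribute
    sets: 0.0 = identical attributes, 1.0 = no overlap."""
    a, b = set(attrs_a), set(attrs_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)
```

Two schools in the same information sciences cluster would share most labels and thus sit close in social distance, regardless of their geographical separation.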

The location-based information aggregation module includes a URL

filter and an Internet information filter. The URL filter uses

Nutch, a search-engine web crawler, to recognize and extract web

pages whose text, images, and multimedia information carry

related geotags or contain the names of the user location and

the geo-fences selected in the positioning module, and puts the

URLs of these pages into the database. The Internet information

filter adopts the Chinese word segmentation software

"Paodingjieniu" for Chinese text parsing. The main content is

extracted from the segmented words and transformed into

non-redundant, refined, and coherent text segments. Finally, the

Internet information associated with a user's location is

presented by topic, abstract, URL, and posting date. Users can

read the information in ascending order of social distance or

geographical distance, or in chronological order.
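The three orderings reduce to a single sort with different keys. A sketch, with field names of our choosing:

```python
def order_results(results, key="social"):
    """Sort push items by the chosen criterion: 'social' or 'geo'
    distance ascending, or 'date' with the newest item first."""
    if key == "date":
        return sorted(results, key=lambda r: r["date"], reverse=True)
    return sorted(results, key=lambda r: r[key])
```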

Fig. 7. Framework of the location-based information push service. From

bottom to top, it comprises the positioning module, the social distance

computation module, and the location-based information aggregation module.

E. Application Example

Our system has been used for many academic conferences, e.g.,

the China Satellite Navigation Conference (CSNC), one of the

three largest international academic conferences in the

navigation field. The performance of the four main services is

shown in Fig. 8.


IV. RELATED WORK

This paper introduced a meeting arrangement system based on

users' location information. Two lines of related work were

considered.

Indoor Positioning Applications: Indoor positioning has been

booming [7]. Many applications use it to guide users' activities

in large buildings such as hospitals [8], shopping malls [9],

airports, and parking garages. Chai [8] proposed a hospital

positioning system for patient positioning and emergency calls

that can accurately obtain a patient's ID and location and send

this information to a central monitoring server, which displays

real-time patient information. Indoor positioning is also used

in other circumstances such as construction resource tracking:

Woo [10] built a WiFi-based indoor positioning system and

demonstrated its utility for tracking the approximate locations

of laborers at construction sites. Indoor positioning in

thematic applications has become a hot field.

Fig. 8. The Android application in our system: (a) the meeting list on

May 16th; (b) the meeting indoor navigation service showing the user's

location and other meeting room locations on raster maps; (c) the meeting

topic information service providing timely and effective information about

meeting themes through the Sina Weibo API; (d) the location-based

information push service delivering Internet information ordered by social

distance, geographical distance, or posting date.

Meeting Service System (MSS): With the development of network

technology, many MSSs have appeared; typical instances include

Sciencemeeting Online [11] and business management systems such

as FoxMeeting [12], Medcon [13], and Suvisoft [14]. With these

systems, meeting organizers can customize a meeting information

publishing platform so that meeting participants can

conveniently access relevant information about their meeting.

However, most of these platforms are web-only service systems

and thus are not suited to the mobile network: users cannot

access the service whenever and wherever they want. Meanwhile,

social network applications are appearing more frequently in

LBS applications, and a new type of meeting system, the meeting

social network, was recently proposed to provide the mobility

and interaction that traditional platforms lack.

V. CONCLUSION

We presented the design and implementation of a mobile cloud

computing system for meeting arrangements, integrated with a

social networking service for its users, both meeting organizers

and participants. Meeting organizers use a customizable template

library to design the meeting portal website efficiently.

Through the hybrid positioning technology in the system's

Android application, an indoor navigation service and a

location-based information push service are available to

participants. The application also serves as a social network

that enhances communication among users.

The current trend in Location-based Services (LBSs) is to deploy

location awareness techniques, based on location information

revealed by users, to offer personalized services within a

thematic scenario rather than only a navigation service. By

integrating the mobile Internet with cloud computing

technologies and providing both indoor and outdoor services, our

mobile cloud computing system for meetings is a typical example

of this trend.

ACKNOWLEDGMENT

This work was supported by the National Natural Science

Foundation of China (NSFC) “Social Network Awareness and

Safety Collaboration Technology based on Location-Based

Service” (No.41104010), and the National High Technology

Research and Development Program of China (863 Program)

(No.2013AA12A204).

REFERENCES

[1] J. N. Liu, C. Guo, and R. Q. Peng. “Location-based services in the era of mobile internet.” Communications of the CCF, 2008, vol. 7(12), pp. 40-49 (in Chinese).

[2] Liu F, Shu P, Jin H. “Gearing resource-poor mobile devices with powerful clouds: architectures, challenges, and applications.” Wireless Communications, IEEE, 2013, vol.20(3), pp. 14-22.

[3] C. Guo, J. N. Liu. “iWISE: A location-based service cloud computing system with content aggregation and social awareness.” In Proceeding of the 10th Int’l Symp. on Location Based Services (LBS 2013). Shanghai, 2013.

[4] Hung, Che-Lun, Yu-Chen Hu, and Kuan-Ching Li. "Auto-Scaling Model for Cloud Computing System." International Journal of Hybrid Information Technology, 2012, vol. 5(2), pp. 181-186.

[5] E. Trevisani, A. Vitaletti. “Cell-ID location technique, limits and benefits: an experimental study.” In Proceeding of Mobile Computing Systems and Applications, 2004. WMCSA 2004. Sixth IEEE Workshop on. IEEE, 2004, pp. 51-60.

[6] Zhang, Liang-Jie LJ. "Editorial: Big services era: Global trends of cloud computing and big data." IEEE Transactions on Services Computing, 2012, vol. 5(4), pp. 467-468.

[7] Z. L. Dong, Y. P. Yu, X. Yuan, N. Wan, and L. Yang. “Situation and development tendency of indoor positioning.” China Communication. 2013, vol. 10(3), pp. 42-55.

[8] J. H. Chai. “Patient Positioning System in Hospital Based on Zigbee.” In Proceeding of International Conference on Intelligent Computation and Bio-Medical Instrumentation, 2011, pp.159-162.

[9] Point Inside: StoreMode by Point Inside I engage shoppers in-store via mobile. http://www.pointinside.com

[10] S. Woo, S. Jeong, E. Mok, L. Y. Xia, C. Choi, M. Pyeon, et al. “Application of WiFi-based indoor positioning system for labor tracking at construction sites: a case study in Guangzhou MTR.” Automation in Construction, 2011, vol. 20(1), pp. 3-13.

[11] Sciencemeeting Online of China. http://www.meeting.edu.cn/meeting/

[12] Meeting Management System. http://www.foxmeeting.com.

[13] Medcon: Sciencemeeting Management System. http://www.medcon.org.cn/medcon/cn/.

[14] Suvisoft: Software Development Services. http://www.suvisoft.com/
