
Cluster Comput
DOI 10.1007/s10586-015-0453-9

Robot and cloud-assisted multi-modal healthcare system

Yujun Ma1,2 · Yin Zhang5 · Jiafu Wan3 · Daqiang Zhang4 · Ning Pan1

Received: 11 November 2014 / Revised: 27 March 2015 / Accepted: 6 April 2015
© Springer Science+Business Media New York 2015

Abstract With the rise of robot and cloud computing technology, human-centric healthcare services have attracted wide attention as a way to meet the great challenges facing traditional healthcare, especially limited medical resources. This paper presents a healthcare system based on cloud computing and robotics, which consists of wireless body area networks, a robot, a software system and a cloud platform. The system is expected to accurately measure the user's physiological information for analysis and feedback, assisted by a robot integrated with various sensors. To improve the practicability of multimedia transmission in the healthcare system, this paper proposes a novel scheme to deliver real-time video content through an improved UDP-based

✉ Ning Pan
[email protected]

Yujun Ma
[email protected]

Yin Zhang
[email protected]

Jiafu Wan
[email protected]

Daqiang Zhang
[email protected]

1 School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China

2 Computer Network Center, Nanyang Institute of Technology, Nanyang, China

3 School of Mechanical & Automotive Engineering, South China University of Technology, Guangzhou, China

4 Mobile Cloud Computing Laboratory, Tongji University, Shanghai, China

5 School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan, China

protocol. Finally, the proposed approach is evaluated through an experimental study on a real testbed.

Keywords Healthcare system · Cloud computing · Multi-modal · LTE · Video transmission

1 Introduction

Along with continuously growing aging trends and the dramatic increase in the elderly population, the healthcare sector is under a heavy burden due to the shortage of medical facilities and healthcare workers [1,2]. In recent years, expenditure on healthcare services all over the world has increased continuously, imposing a serious financial burden on society and governments. In view of this situation, it is essential to develop economical, convenient and scalable home healthcare systems to effectively raise the level of informatization in healthcare services and balance the uneven distribution of medical resources [3]. Moreover, with the explosion of mobile terminals [4], users' physiological signals can be easily collected, which greatly encourages the development of intelligent healthcare.

In recent years, considerable research on sensors has contributed to the popularity of home healthcare systems, and various healthcare devices have been developed to accurately measure and analyze users' physiological data and provide corresponding feedback [5]. However, the measurement range of physiological data, battery lifetime and the accuracy of data acquisition remain restricted, since the size and weight of the devices cannot be too large if they are to stay comfortable and wearable [6,7]. To overcome these defects and provide users with healthcare services of high Quality of Service (QoS) and Quality of Experience (QoE), we can employ an approach based on a mobile robot to collect environmental


and physiological data [3]. Since robots are not strictly restricted in size, weight or energy consumption, more physiological data acquisition modules can be integrated, and the measurement accuracy and battery lifetime can be greatly improved. Moreover, because the robot can travel freely, it can search for the user and turn on its camera to transmit video or photos to a remote healthcare center in health emergencies.

On the other hand, the robot has a humanized appearance and the following functions: (1) capturing the user's voice, images and video; (2) playing audio and video. Using these characteristics, the robot can serve the purpose of human emotional communication [8]. For example, in an elderly healthcare system, the robot collects the old man's voice and facial expressions, and such data are uploaded to the remote healthcare center. The healthcare system can then estimate his mental condition by multi-modal emotional analysis. When the system determines that the user is in a lonely or depressed mood, the robot can proactively play mood-alleviating music or video, or even give psychological comfort through a robot dance.

The data in the healthcare system have the following characteristics: large scale, diverse types (including text, images and video), high dimensionality and rapid change [9,10]. Traditional data storage and processing technology cannot meet the healthcare system's demand for fast access and efficient handling of such data. The rise of cloud computing and big data technology offers a way to solve the storage and processing problems of the healthcare system [11]. With the help of the Hadoop distributed storage and processing platform [12], we can meet the system's offline storage and processing demands; for other health monitoring applications that require shorter response times, the Storm and Spark big data processing platforms are good solutions [13].

As noted above, the healthcare system is very complex, involving the following main technical fields: sensor technology [14,15], body area networks, wireless communication technology, multi-modal data collection and analysis [16,17], robotics, cloud computing and big data analysis. We propose a robot and cloud-assisted multi-modal healthcare system architecture, and design a micro-controller in the robot: a programmable, multi-parameter-sensing and scalable data acquisition device that supports rapid and safe communication with the robot's host controller. To increase the practicability of video transmission in the healthcare system, this paper proposes a novel scheme to deliver real-time video content through an improved UDP-based protocol over the LTE network. We also create a test platform for evaluation and experimental study.

The rest of this paper is structured as follows. We present related works in Sect. 2. In Sect. 3, we introduce the system architecture and design. Section 4 gives our experimental results and analysis. Finally, Sect. 5 concludes this paper.

2 Related work

A healthcare system is very complicated and involves a variety of related technologies. In this section, we focus on related work on multi-modal sensor information collection, robot technology and LTE-based wireless video transmission in healthcare systems.

2.1 Multi-modal sensor information collection

To make the healthcare system better serve humanity, it needs to collect all kinds of sensor data, including human physiological status, environmental information and emotional information. Therefore, it is necessary to integrate a variety of physiological sensors (e.g., ECG, blood pressure, blood glucose) and environmental sensors (e.g., carbon dioxide, PM2.5) into the healthcare system. At the same time, to monitor the user's psychological state, emotion-related information (e.g., facial expressions, gestures, tone of voice) also needs to be collected.

Human physiological data acquisition plays an important role in the healthcare system [18]. In recent years, the wireless body area network (WBAN) has developed rapidly thanks to the following advantages: low-cost acquisition devices, portability and low power consumption. Some WBANs based on wearable devices and sensors implanted in human bodies have already served in healthcare systems [19]. In [10], Miller et al. design a physiological data acquisition and management system using implantable wireless sensors; the system realizes physiological data acquisition, processing and management. Choi et al. designed a physiological data acquisition system based on Bluetooth and wireless LAN technology [20,21]. Although the system can acquire physiological parameters, it is not flexible or practical owing to the limited transmission distance of Bluetooth. In [22], Salvador et al. described a GSM-based monitoring system to track the health status of hospital patients with heart disease. This system is primarily used for data transmission, but GSM bandwidth is inferior to 3G and WiFi, and the physiological parameters collected by the system are incomplete. In [14], Bourouis et al. designed a health monitoring system based on cloud computing, WBAN and smartphones, which uses a WBAN to acquire physiological parameters such as electrocardiogram (ECG) and body temperature, and sends the data to the cloud. In [23], Kim et al. describe a framework for the healthcare system as a service model and present a calorie-tracking example service.

Psychological status is another significant index of human health. Recently, some emotion-aware applications based on


the smartphone have appeared. They recognize the user's mood by means of the relationship between mood and phone-usage behavior patterns [24]. These applications often gather small-scale data through the smartphone, or rely on the active participation of the user [25]. Florian Lingenfelser et al. proposed an event-driven method that uses the social signal interpretation (SSI) framework to fuse multi-modal signals for human enjoyment recognition [26], but it does not monitor physiological indexes, so it is not suitable for healthcare applications.

2.2 Cloud robotics and healthcare system

Robotics technology has a great influence on society, the economy and people's lives. The development of wireless networking and cloud computing paves the way for robots to move from the industrial control field to the service field [27]. Currently, robots on the market mainly focus on family education, entertainment and domestic service (such as cleaning robots). Most of them are controlled by built-in software with customized functions and work in standalone mode. They have little intelligence and are hard to upgrade and maintain. Networked robots connect a group of robots by wired or wireless networks to give them capabilities of remote operation and management, so multi-robot cooperation becomes possible. To address the resource constraints, communication disabilities and lack of learning ability that exist in networked and standalone robots, the cloud robot has been proposed [28,29].

The cloud robot architecture is divided into two tiers: the machine-to-machine (M2M) level [30] and the machine-to-cloud (M2C) level [31]. At the M2M level, a group of robots is connected via a wireless network to form an ad-hoc robot collaborative cloud infrastructure. The M2C layer provides a shared computing and storage resource pool, allowing robots to offload computing tasks to the cloud. Some research groups have applied cloud computing technology to robot application scenarios. For example, Google's research group has developed a smartphone-driven robot system that learns through the cloud.

At present, there are some applications of medical robots (for example, surgical robots and rehabilitation robots) in hospitals, but these robots are highly specialized, expensive and inconvenient to operate, so they are difficult to promote in the public healthcare system. The popularization of cloud computing, together with progress in human-computer interaction, makes cloud robots in the healthcare field an important development trend. In this paper, we focus on cloud robot applications in the healthcare system.

3 Healthcare system architecture and design

3.1 Healthcare system architecture

Integrating robot and cloud computing technology into the healthcare system can greatly improve service quality and service level. Our healthcare system architecture includes end users, robots and a healthcare cloud platform (as shown in Fig. 1). Users wear wearable devices to collect physiological data, and the collected data are forwarded to the remote cloud platform by the robot. The robot can store sensory data, interact with humans, and integrate a variety

Fig. 1 Robot and cloud-assisted healthcare system architecture


of wireless communication modules, including ZigBee, WiFi and LTE. The cloud is used to store large-scale health data, perform health analysis and prediction, and provide personalized services. The rest of this section introduces the architecture and design of the core components.

3.2 Robot-based multi-modal sensory aggregation

A traditional sensory aggregation system consists of three components: sensors, a microprocessor (used to collect sensory data) and a general-purpose computer (used to store and process sensory data). In our healthcare system, the robot-based multi-modal sensory aggregation system includes the sensors, the robot and the cloud platform (used to store and analyze sensory data).

The major data collection components of the robot are the micro-controller and the main controller. The micro-controller acts as the front end of data collection. It controls the wireless communication module to collect the user's physiological data from sensors worn on the body, and senses environmental information with the environment sensors in the robot body. The micro-controller then sends the data to the robot's main controller. Finally, the main controller sends the sensory data to the cloud platform in XML or JSON format for storage and analysis (as shown in Fig. 1).

The environment sensors (including a temperature sensor, humidity sensor, harmful smog sensor, light sensor, etc.) are used to collect environmental information. The body sensors (including a body temperature sensor, heart rate sensor, blood pressure sensor, etc.) are used to collect the user's physiological data. Because some sensors include their own microprocessor and peripheral devices, they can perform tasks independently; all we have to do is bring the device into the system. Since each sensor has a special identification, we can use the appropriate method to conveniently collect the sensory data (analog or digital signals) for each kind of sensor. We use an 8-bit microprocessor to reduce power consumption and cost, and run a micro embedded OS on the micro-controller to ease task scheduling and resource management. The robot's main controller is a 32-bit ARM processor; because it has a high clock frequency, large memory and abundant resources, we run the Android OS on it. Its functions include sending sensory data through 3G, 4G or WiFi, interacting with users, etc.

A few problems must be overcome in the system implementation, for example the collection period of sensor data, and sensory data processing and transfer. Improper handling of these problems could cause great difficulty in usage. The rest of this section describes the details.

3.2.1 Collection period of sensor data

The sensors used in our system include chip sensors and mature sensor devices. A mature sensor device can send

Fig. 2 Robot internal communication protocol: a command packet; b sensory data packet

data to the robot's main controller directly through a wired protocol or Bluetooth, while the chip sensors need the help of the micro-controller. The micro-controller sends a control signal to a chip sensor to collect its data, and the collected sensor data are then sent to the robot's main controller.

Depending on its function, each type of sensor should have its own collection period and response time. Normal physiological indexes of the user can be collected on a fixed cycle. But for emergency health signals, we want the information immediately, so that the robot can inform the user's family or call the emergency center by phone or short messaging service in time. Some body sensors can periodically send data to the robot automatically; others need the user's cooperation, so the frequency can be decided by the user, or the user is simply notified at a predefined period to do the sampling.

Harmful smog is handled much like a CPU interrupt: the micro-controller samples its value continuously, and once the value exceeds a threshold (which can be set by the robot's main controller at initialization), the micro-controller reports the condition to the main controller. In our implementation, we sample general environmental data at a fixed period and sample important information continuously. When an exception is detected, the micro-controller sends an interrupt to the main controller. We provide a graphical user interface for the user to set the collection period, or to notify the user to collect physiology-related sensory data.
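The two-path policy (fixed-period reporting versus interrupt-style threshold reporting) can be sketched as follows. This is a minimal illustration: the sensor name, threshold value and return labels are ours, not from the paper.

```python
# Illustrative threshold; in the real system the main controller
# sets it on the micro-controller at initialization.
SMOG_THRESHOLD = 300

def classify_sample(sensor, value, threshold=SMOG_THRESHOLD):
    """Decide how a sample should be delivered to the main controller:
    'interrupt' when a critical sensor exceeds its threshold (report
    immediately), otherwise 'report' via the normal fixed-period path."""
    if sensor == "smog" and value > threshold:
        return "interrupt"
    return "report"
```

For example, a smog reading above the threshold takes the interrupt path, while routine temperature samples stay on the periodic path regardless of value.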

3.2.2 Sensory data processing and transfer after sampling

The transfer of sensory data can be divided into two parts: transfer from the micro-controller to the robot's main controller, and transfer from the main controller to the cloud platform. In the transfer from micro-controller to main controller, some transfer methods, for example UART, I2C and SPI, have no verification function. On the other hand, different sensory data have different lengths and formats, so we need a convention to guarantee that data are transferred correctly. We apply a protocol composed of two kinds of packets to solve this problem. One is the command sent from the main controller to the micro-controller, with the structure shown in Fig. 2a; the other carries the sampled sensory data sent from the micro-controller to the main controller, with the structure shown in Fig. 2b.


Both packet formats share the fields Packet Length, ID and Checksum. The Packet Length field ensures compatibility with data and commands of different lengths; this field is 2 bytes long. The ID field is used to match the sequence of commands and sensory data: the ID field of a packet sent from the micro-controller to the main controller should be the same as the ID field of the corresponding packet sent from the main controller to the micro-controller. This field may be 2 or 4 bytes. The Checksum field ensures the correctness of transmission; the verification method may be CRC or another scheme, and must be agreed on by both the micro-controller and the main controller.

In Fig. 2a, the "Command No." field distinguishes different command contents, and its length is 2 bytes. The "Content of Command" field is the actual content of the command, and its length can be arbitrary. The "Command No." field can express different kinds of commands, including reading sensory data and setting the threshold of an important sensor. If the "Command No." requests sensory data, the "Content of Command" should be the sensor number. If it sets the threshold of an important sensor, the "Content of Command" should be the sensor number and its threshold.

In Fig. 2b, when the micro-controller sends sensory data to the main controller, the "Sensor Data No." field gives the sensor number. It is used to distinguish different sensors, and its length is 2 bytes, while the sensor data field carries the data of the corresponding sensor and may be of any length. The "Sensor Data No." field can also be set to a special report tag, used to report critical situations, for example when the density of harmful smog increases suddenly or the requested sensor is not supported.

After receiving sensory data from the micro-controller, the robot's main controller structures the data and sends it to the cloud platform in XML or JSON format.

In our implementation, we use a UART interface to transfer packets from the micro-controller to the main controller. The protocol format is shown in Fig. 2, with the following field lengths: ID field 2 bytes, CRC checksum field 2 bytes. The main controller uses the XML format to send sensory data to the cloud platform.
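As a concrete illustration, a Fig. 2a command packet could be assembled and verified as below. The paper fixes only the field sizes (length 2 bytes, ID 2 bytes, command/sensor number 2 bytes, CRC 2 bytes); the big-endian byte order and the CRC-16-CCITT variant are our assumptions.

```python
import struct
import binascii

def pack_command(pkt_id, cmd_no, payload):
    """Build a command packet:
    Packet Length (2B) | ID (2B) | Command No. (2B) | content | CRC-16 (2B).
    Packet Length counts the whole packet, including the CRC."""
    total_len = 2 + 2 + 2 + len(payload) + 2
    body = struct.pack(">HHH", total_len, pkt_id, cmd_no) + payload
    crc = binascii.crc_hqx(body, 0xFFFF)  # CRC-16-CCITT over all bytes before the CRC
    return body + struct.pack(">H", crc)

def unpack_packet(data):
    """Check the length and CRC, then return (pkt_id, number, content)."""
    (length,) = struct.unpack(">H", data[:2])
    assert length == len(data), "length mismatch"
    (crc,) = struct.unpack(">H", data[-2:])
    assert crc == binascii.crc_hqx(data[:-2], 0xFFFF), "bad checksum"
    pkt_id, number = struct.unpack(">HH", data[2:6])
    return pkt_id, number, data[6:-2]
```

The feedback packet of Fig. 2b has the same layout with "Sensor Data No." in place of "Command No.", so the same unpacking routine applies; matching IDs pair each reply with its request.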

3.2.3 Scalability of sensory data acquisition platform

For scalability and compatibility with diverse sensors, the sensory data collection platform should support new sensors smoothly.

Compatibility at the hardware layer requires the micro-controller to read the sensor signals, so the micro-controller should provide enough general-purpose input/output (GPIO) ports for input and output signals. If the micro-controller has sufficient GPIO ports, we can attach the new sensor directly. Otherwise, we add another micro-controller to achieve compatibility; the main controller then records the new micro-controller and sends control commands to the specified micro-controller, or otherwise sends control commands to all micro-controllers.

Compatibility at the software layer requires a new sensor to be distinguishable from the existing ones. The ID field in the transfer protocol meets this requirement (a 2-byte field supports 2^16 = 65,536 kinds of sensors). On the other hand, the main controller sends data to the cloud in XML or JSON, and the tags of these documents can distinguish diverse sensors and ensure compatibility.
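A minimal sketch of the JSON variant of the cloud upload, where a per-reading tag keeps new sensor types distinguishable without a format change. All field names here are illustrative, not from the paper.

```python
import json

def to_cloud_json(robot_id, readings):
    """Serialize sensory readings for the cloud platform. Each reading
    carries its sensor number as a tag, so adding a new sensor only
    requires a new number, not a new message format."""
    return json.dumps({
        "robot": robot_id,
        "readings": [
            {"sensor_no": no, "value": value, "timestamp": ts}
            for no, value, ts in readings
        ],
    })
```

An XML upload would follow the same pattern, with one element per reading and the sensor number as the distinguishing tag or attribute.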

3.3 Communication of healthcare system

3.3.1 Communication architecture

The robot and the sensors worn by the user communicate through the ZigBee protocol, and a wireless communication network is used to send data to a remote smartphone, where the wireless network may use WiFi, 3G, 4G, etc.

The WBAN is the communication level of the system. It consists of a series of sensor nodes, each integrating a variety of sensors [32]. A sensor node is responsible for collecting physiological parameters and transmitting them to the robot. For example, a similar system can be used for monitoring cardiac patients: the heart sensor can operate in multiple modes, reporting either a raw ECG signal, time-stamped heart beats, or the heart rate averaged over a certain period. As WBAN [33] and sensor technology are used in this system, a sensor node consists of several sensors that can be worn by the user. The node provides data storage, wireless communication and other functions: it continuously collects sensor data, stores the collected samples, and then sends the data to the robot. Based on the ZigBee protocol, a sensor node can communicate with the robot across the entire network.

We have designed a "Robot Message" protocol to ensure that sensor data can be transmitted between the micro-controller and the Android-based main controller development board securely and efficiently, as shown in Fig. 2. The Robot Message protocol is designed for good extensibility and can be adjusted to the actual error-checking conditions, giving priority to either performance or accuracy. It is divided into command messages and feedback messages: a command message is mainly used to send instructions from the main controller to the micro-controller, and a feedback message is used by the micro-controller to respond to the main controller's command.

3.3.2 LTE-based wireless video transmission

LTE communication is a very important part of the healthcare system. In emergency incidents, we can use the


Fig. 3 UDP-based video transmission workflow

LTE network for real-time video transmission. The disadvantages of a wireless communication system compared to a wired one are latency, anti-interference performance and bandwidth limitations. These features cause latency in real-time transmission and degrade the user experience [34]. The key to good wireless video transmission is good real-time performance. LTE bandwidth is neither sufficient nor steady enough for real-time video transmission, so to assure real-time behavior we need a transmission protocol that manages bandwidth, frame rate and video quality.

In an LTE-based network, some overhead is usually needed to ensure the continuity of video. In practice, real-time performance is more important than continuity, and it is not necessary to guarantee the transmission of every frame. To achieve highly efficient video transmission, we choose the UDP protocol, which has the advantages of a simple packet format and high efficiency. Our design principle for improving UDP and making it more suitable for video transmission over the LTE network is to add control packets on top of it. Figure 3 shows the workflow of the UDP-based video transmission.

Our system adds a control module and a picture compression module on top of UDP to ensure real-time video transmission. The picture compression module applies different compression ratios; the control module monitors the cache queue and sends control packets to coordinate sending and receiving. We implement both a synchronized and an asynchronized method in the control module.

(1) Synchronized mode: In this working mode, no acknowledgement mechanism is used; original UDP packets carry the data. The client requests frame data continuously, and the server sends frame data continuously. During transmission, packets may be lost. The disadvantages are heavy bandwidth occupation and high latency.

(2) Asynchronized mode: In this working mode, an acknowledgement mechanism is used; the server replies to each request from the client, and after receiving an acknowledgement from the client, the server transfers the next frame.
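The asynchronized mode is essentially a stop-and-wait scheme over UDP. A minimal sketch of the sender side, with a simplified packet layout and illustrative timeout and retry values (the paper does not specify these):

```python
import socket

ACK = b"ACK"

def send_frame(sock, frame, addr, timeout=0.5, retries=3):
    """Asynchronized mode: send one frame over UDP, wait for the
    receiver's acknowledgement, and retransmit on timeout. Returns
    True once the frame is acknowledged, so the caller may proceed
    to the next frame; False after all retries fail."""
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(frame, addr)
        try:
            reply, _ = sock.recvfrom(16)
            if reply == ACK:
                return True
        except socket.timeout:
            continue  # lost frame or lost ACK: retransmit this frame
    return False
```

The synchronized mode is the same loop without the `recvfrom` wait: the sender simply streams frames, which explains both its higher bandwidth occupation and its lack of delivery guarantees.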

The picture compression module controls the pictures' compression ratio. Because the size of a UDP packet influences transmission latency, an overly large picture harms transmission performance, while a tiny picture degrades the picture quality. So, besides comparing the synchronized and asynchronized methods, we also compared different compression ratios; in the experiments we chose 20, 60 and 90 % for comparison.

3.4 Healthcare software system

We have developed a software control system based on the Android platform to collect physiological parameters of elderly users, such as body temperature and ECG, through sensors during their activities of daily living. It can also collect environmental parameters such as temperature, humidity, smoke and so on.

The entire software system is developed on the Android operating system. It is divided into a client and a server, and involves both embedded hardware development on the underlying Android platform and application software development. The Android platform controls the underlying hardware through the hardware interface layer. It can also control the remote robot, transmit real-time video and perform other functions through WiFi, 3G and 4G. By connecting to the Android development board, the system can monitor and display a variety of sensor data on human health and the indoor environment. The software system also has an emergency response mechanism, so that the family can be notified by telephone, text message, real-time remote video and other means in the event of an emergency. At the same time, the software system integrates intelligent speech recognition and speech synthesis technology for the user's convenience.

4 Experimental results and analysis

4.1 Multi-modal sensory data aggregation

Our multi-modal sensory aggregation system can collect the following environmental information: temperature, humidity and harmful smog. The supported user physiology information includes heart rate and blood pressure. Figure 4 presents the resulting plots of the blood pressure signal (Fig. 4a), heart rate (Fig. 4b), temperature (Fig. 4c) and humidity (Fig. 4d) in the running system.

4.2 LTE-based healthcare video transmission

We have designed three experimental schemes to test the performance of video transmission over LTE.

(1) Different image compression rates. The picture format Android obtains from the camera is YuvImage, so we must convert the color space from YUV to RGB. YUV derives from the PAL system (with normalization and gamma correction); the converted image is then compressed by the discrete cosine transform and entropy coding. We employ three compression rates in the experiments.
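Per pixel, the YUV-to-RGB step can be sketched with the common full-range BT.601 coefficients; the exact coefficients Android applies internally may differ slightly, so treat these as illustrative.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range YUV sample (each component 0-255) to an
    RGB triple, using the common BT.601 full-range coefficients.
    U and V are centered on 128 before scaling."""
    d, e = u - 128, v - 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    r = clamp(y + 1.402 * e)
    g = clamp(y - 0.344136 * d - 0.714136 * e)
    b = clamp(y + 1.772 * d)
    return r, g, b
```

For example, mid-gray YUV (128, 128, 128) maps to RGB (128, 128, 128), and a high V component pushes the result toward red.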


Fig. 4 Multi-modal sensory data: a blood pressure; b heart rate; c temperature; d humidity

(2) Different working modes. We evaluate the synchronized mode and the asynchronized mode. In synchronized mode, there is no coordination mechanism between transmitting and receiving; the transmitter continuously sends data into the network. In asynchronized mode, the sender and receiver coordinate in half duplex: the sender sends the next frame only after it has received an acknowledgement, and performs a retransmission when a timeout occurs.

(3) Comparison of latency, frame rate and data transfer rate under the different combinations of synchronized/asynchronized mode and compression ratio (20, 60 and 90 %). The data transfer rate is the number of bytes transferred per second; the frame rate is the number of pictures transferred per second; the latency is the time difference between sending and receiving a frame.
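The three metrics defined above can be computed from a per-frame log. The sketch below assumes each record is a `(send_time, recv_time, nbytes)` tuple; the record format is our assumption for illustration.

```python
def video_metrics(frames):
    """Compute average latency, frame rate and data transfer rate from a
    list of (send_time_s, recv_time_s, nbytes) records."""
    latencies = [recv - send for send, recv, _ in frames]
    # Measurement window: first send to last receive.
    duration = max(r for _, r, _ in frames) - min(s for s, _, _ in frames)
    total_bytes = sum(n for _, _, n in frames)
    return {
        "avg_latency_s": sum(latencies) / len(latencies),
        "frame_rate_fps": len(frames) / duration,       # pictures per second
        "transfer_rate_bps": total_bytes / duration,    # bytes per second
    }
```

For example, two 1000-byte frames delivered over 0.2 s with 0.1 s one-way delay yield 10 fps and 10,000 Bps.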

4.2.1 Comparison of data transfer rate

(1) Synchronized mode: No acknowledgement mechanism is used. The program uses a JPEG-based compression algorithm with different compression ratios. Figure 5a shows the results under synchronized mode with compression ratios of 20, 60 and 90 %. The Y axis denotes the data transfer rate at each moment.

The results show that transmission is particularly unsteady at the 90 % compression ratio, where packet loss even occurs. Transmission is steady at the 20 % ratio, but compared with 60 % it does not fully utilize the available bandwidth. Under synchronized mode, therefore, the 60 % compression ratio is the best of the three.

(2) Asynchronized mode: This transmission mode does not lose frames, because each frame must be acknowledged. Figure 5b shows the data rates of the three compression ratios under asynchronized mode.

Compared with synchronized mode, the data transfer rate drops sharply, to about 4000–8000 Bps, whereas under synchronized mode at the 60 % compression ratio it is around 20,000–30,000 Bps. In either case, the network


Fig. 5 Video transmission performance: a, b data transfer rate; c, d latency; e, f frame rate (synchronized and asynchronized modes)

resources are not fully utilized. All the data show that a compression ratio of around 60 % makes the fullest use of network resources under asynchronized mode.
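The collapse of the asynchronized data rate follows directly from its stop-and-wait behaviour: each frame pays a full round trip before the next can leave. A rough model makes this concrete; the frame size, bandwidth and RTT below are illustrative values, not measurements from our testbed.

```python
def sync_frame_rate(frame_bytes, bandwidth_bps):
    """Synchronized (streaming) mode: frames are pushed back-to-back,
    so the achievable rate is bounded only by bandwidth."""
    return bandwidth_bps / frame_bytes

def async_frame_rate(frame_bytes, bandwidth_bps, rtt_s):
    """Asynchronized (stop-and-wait) mode: each frame costs its transmit
    time plus one round-trip time waiting for the acknowledgement."""
    transmit_s = frame_bytes / bandwidth_bps
    return 1.0 / (transmit_s + rtt_s)
```

With a 2000-byte frame, 20,000 Bps of bandwidth and a 100 ms RTT, the model gives 10 fps synchronized versus 5 fps asynchronized, which matches the order of magnitude of the gap we observe.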

4.2.2 Comparison of transmission latency

(1) Synchronized mode: Figure 5c shows the latency for the different compression ratios under synchronized mode. The Y axis denotes the time between a frame being sent and its acknowledgement being received, i.e., the client-to-server latency. The comparison shows that the latency is unsteady at the 90 % compression ratio, while the 20 % ratio gives the smallest transmission latency.

(2) Asynchronized mode: Figure 5d shows the transmission latency at compression ratios of 20, 60 and 90 %. At 90 % compression, the latency is slightly better than in synchronized mode, while 20 % gives a smaller latency than 60 %. All the data show that the dominant factor is processing cost, regardless of whether synchronized or asynchronized mode is used. Owing to the lower processing and transmission overhead, the smallest latencies are obtained at the 20 and 60 % compression ratios.

4.2.3 Comparison of frame rate

(1) Synchronized mode: Figure 5e shows the frame rate at compression ratios of 20, 60 and 90 %. The Y axis denotes the number of pictures received since the previous sampling moment. The 90 % compression ratio again performs poorly. At 20 or 60 %, the frame rate is around 10 frames per second, and it is steadier at 60 % than at 20 %.

(2) Asynchronized mode: Figure 5f shows the frame rate at compression ratios of 20, 60 and 90 %. In this mode all the frame rates are low, at around 5 frames per second, because the acknowledgement mechanism consumes considerable time. The


Table 1 The captured data of video transmission

Statistics type 20 % sync 20 % async 60 % sync 60 % async 90 % sync 90 % async

Bytes per second 19,880 4124 22,305 4250 21,234 7854

Frames per second 10 6 8 5 5 3

Transmission delay 412 342 642 357 2342 568

Fig. 6 Robot control UI: a remote control interface, b local test interface

network resources are therefore under-utilized. Based on the above results, synchronized mode can sustain a frame rate above 10 frames per second, while asynchronized mode cannot. In synchronized mode, compression ratios of 20 and 60 % have no noticeable impact on the frame rate.

Table 1 summarizes the captured video-transmission data. From the table, the frame rate is highest, at 10 frames per second, under the 20 % compression ratio in synchronized mode, and its data transfer rate shows that the channel is used fully. A rate of 10 frames per second gives users a suitable visual effect, since the remaining latency comes from the transport delay of the channel. In the other configurations the delay may be much lower, but both the bytes per second and the frames per second are smaller. In conclusion, video transmission achieves the best effect with a 20 % compression ratio in synchronized mode.
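The selection logic behind this conclusion can be expressed as a small sketch over the Table 1 data. The 10 fps threshold is our assumption, standing in for the "suitable visual effect" criterion above.

```python
# Rows from Table 1: (config, bytes_per_second, frames_per_second, delay).
TABLE1 = [
    ("20% sync",  19880, 10, 412),
    ("20% async",  4124,  6, 342),
    ("60% sync",  22305,  8, 642),
    ("60% async",  4250,  5, 357),
    ("90% sync",  21234,  5, 2342),
    ("90% async",  7854,  3, 568),
]

def best_config(rows, min_fps=10):
    """Prefer configurations that reach min_fps (fall back to all rows if
    none do), then pick the highest frame rate, breaking ties by lower
    transmission delay."""
    qualified = [r for r in rows if r[2] >= min_fps] or rows
    return max(qualified, key=lambda r: (r[2], -r[3]))[0]
```

Applied to the table, only "20% sync" meets the threshold, so it is selected, matching the conclusion drawn above.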

4.3 Healthcare testbed operation

The robot is developed on an Android development board whose processing core is a Cortex-A8, running Android 4.0. The remote client device is a Samsung Galaxy S smartphone. The application-layer software is developed in Java, and the robot's hardware control system is developed in embedded C. In addition, the robot is equipped with a variety of sensors (temperature, humidity, heart rate and infrared) to collect environmental

information and physiological signals. The robot control program consists of a client and a server (as shown in Fig. 6).

Inside the robot, the ARM development board is connected to the MCU through a serial port. After installing the client software on the smartphone and the server software on the development board, and ensuring that the wireless network is connected, we boot the client software. Clicking the USB button opens the USB connection, after which we can test the robot's walking function locally (as shown in Fig. 6b). The "ReadUSB" button tests the data interaction between the serial ports, and the "Video" button starts the video transmission. In the settings, we can choose the system language and the emergency contact name.

After configuring the server software, we boot the client software and set the robot server's IP address, which establishes the connection between the remote smartphone and the robot. Figure 6a shows the control interface, which provides a "Hold And Speak" button for speech recognition, while the "OpenVideo" and "CloseVideo" buttons open and close the video transmission.

We conducted a series of functional tests. First, we tested robot control based on speech recognition: when the smartphone receives speech input, it compares the sample against its local database. If the speech cannot be recognized locally, the control software calls a cloud service to complete the recognition. Once the smartphone obtains a valid voice command, it sends the control command to the robot's control module over the wireless network, and the robot performs the corresponding action.
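The local-then-cloud recognition flow can be sketched as follows. The database lookup and the `cloud_recognize` callable are hypothetical stand-ins for the real matching logic and cloud ASR service, which the paper does not detail.

```python
def recognize_command(sample, local_db, cloud_recognize):
    """Match a speech sample against the local database first; fall back
    to the cloud recognition service when no local match exists."""
    if sample in local_db:
        return local_db[sample]
    return cloud_recognize(sample)

def dispatch(command, send_to_robot):
    """Forward a recognized command to the robot's control module over
    the wireless link; return False if recognition failed."""
    if command is None:
        return False
    send_to_robot(command)
    return True
```

The fallback keeps the common commands responsive even without connectivity, while rarer phrases still benefit from cloud-scale recognition.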


Fig. 7 Sensory data visualization

Collected sensory data can be displayed on the smartphone and on web pages in our testbed. Figure 7 visualizes the environmental and physiological data captured by the sensors. If needed, a doctor can use the data as a reference for treatment, and critical information can be pushed to the relevant departments and personnel.

5 Conclusion

In this paper, we designed a healthcare system based on robotics and cloud computing. The system monitors the user's health status in real time and can be remotely controlled by a specialist or other users. The system software, developed on Android 4.0, stores, processes and displays the sensor data and controls the robot. Through continuous monitoring, changes in a patient's status can be identified promptly and sent to family members or medical institutions in an emergency.

In the future, we will add more sensors to measure additional physiological and environmental data, and improve the cloud platform for advanced data storage and processing. Furthermore, based on the results of data analysis and clinical diagnosis, personalized healthcare guidance and plans will be available to users.

Acknowledgments This work is partially supported by the National Natural Science Foundation of China (No. 61262013), the China Postdoctoral Science Foundation (Grant No. 2014M552045) and the Open Fund of the Guangdong Province Key Laboratory of Precision Equipment and Manufacturing Technology (PEMT1303).

References

1. Chen, M., Gonzalez, S., Leung, V., Zhang, Q., Li, M.: A 2G-RFID-based e-healthcare system. IEEE Wirel. Commun. 17(1), 37–43 (2010)

2. González-Valenzuela, S., Chen, M., Leung, V.C.: Mobility support for health monitoring at home using wearable sensors. IEEE Trans. Inf. Technol. Biomed. 15(4), 539–549 (2011)

3. Chen, M., Ma, Y., Ullah, S., Cai, W., Song, E.: Rochas: robotics and cloud-assisted healthcare system for empty nester. In: Proceedings of the 8th International Conference on Body Area Networks, pp. 217–220. Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (ICST) (2013)

4. Zhang, D., Xiong, H., Hsu, C.-H., Vasilakos, A.V.: BASA: building mobile ad-hoc social networks on top of Android. IEEE Netw. 28(1), 4–9 (2014)

5. LiKamWa, R., Liu, Y., Lane, N.D., Zhong, L.: Moodscope: building a mood sensor from smartphone usage patterns. In: Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services, pp. 389–402. ACM, New York (2013)

6. Chen, M.: NDNC-BAN: supporting rich media healthcare services via named data networking in cloud-assisted wireless body area networks. Inf. Sci. 284, 142–156 (2014)

7. Wan, J., Ullah, S., Lai, C.-F., Zhou, M., Wang, X., et al.: Cloud-enabled wireless body area networks for pervasive healthcare. IEEE Netw. 27(5), 56–61 (2013)

8. Chen, M., Zhang, Y., Li, Y., Mao, S., Leung, V.C.M.: EMC: emotion-aware mobile cloud computing. IEEE Netw. 29, 32–38 (2015)

9. Hu, H., Wen, Y.G., Chua, T.-S., Li, X.L.: Towards scalable systems for big data analytics: a technology tutorial. IEEE Access 2, 652–687 (2014)

10. Miller, D., Rutkowski, K., Kroh, J., Brogdon, S., Moore, E.: Physiological data acquisition and management system for use with an implanted wireless sensor. US Patent 8,665,086 (2014)

11. Chen, M., Mao, S., Liu, Y.: Big data: a survey. Mob. Netw. Appl. 19(2), 171–209 (2014)

12. Chen, Z., Luo, W., Wu, D., Huang, X., He, J., Zheng, Y., Wu, D.: Exploiting application-level similarity to improve SSD cache performance in Hadoop. J. Supercomput. 70(3), 1331–1344 (2014)

13. Chen, M., Jin, H., Wen, Y., Leung, V.C.: Enabling technologies for future data center networking: a primer. IEEE Netw. 27(4), 8–15 (2013)

14. Bourouis, A., Feham, M., Bouchachia, A.: A new architecture of a ubiquitous health monitoring system: a prototype of cloud mobile health monitoring system. arXiv:1205.6910 (2012)

15. Zheng, K., Zhang, X., Zheng, Q., Xiang, W., Hanzo, L.: Quality-of-experience assessment and its application to video services in LTE networks. IEEE Wirel. Commun. 22(1), 70–78 (2015)

16. Chen, M., Mao, S., Zhang, Y., Leung, V.C.: Big Data: Related Technologies, Challenges and Future Prospects. Springer, Heidelberg (2014)

17. He, J., Xue, Z., Wu, D., Wu, D.O., Wen, Y.: CBM: online strategies on cost-aware buffer management for mobile video streaming. IEEE Trans. Multimed. 16(1), 242–252 (2014)

18. Lai, C.-F., Chen, M., Pan, J.-S., Youn, C.-H., Chao, H.-C.: A collaborative computing framework of cloud network and WBSN applied


to fall detection and 3-D motion reconstruction. IEEE J. Biomed. Health Inf. 18(2), 457–466 (2014)

19. Chen, M., Gonzalez, S., Vasilakos, A., Cao, H., Leung, V.C.: Body area networks: a survey. Mob. Netw. Appl. 16(2), 171–193 (2011)

20. Choi, J., Choi, B., Seo, J., Sohn, R., Ryu, M., Yi, W., Park, K.: A system for ubiquitous health monitoring in the bedroom via a Bluetooth network and wireless LAN. In: 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEMBS'04, vol. 2, pp. 3362–3365 (2004)

21. Zheng, K., Fan, B., Liu, J., Lin, Y., Wang, W.: Interference coordination for OFDM-based multihop LTE-advanced networks. IEEE Wirel. Commun. 18(1), 54–63 (2011)

22. Salvador, C.H., Carrasco, M.P., de Mingo, M.G., Carrero, A.M., Montes, J.M., Martin, L.S., Cavero, M.A., Lozano, I.F., Monteagudo, J.L.: Airmed-cardio: a GSM and internet services-based system for out-of-hospital follow-up of cardiac patients. IEEE Trans. Inf. Technol. Biomed. 9(1), 73–85 (2005)

23. Kim, T.-W., Kim, H.-C.: A healthcare system as a service in the context of vital signs: proposing a framework for realizing a model. Comput. Math. Appl. 64(5), 1324–1332 (2012)

24. Chen, M., Zhang, Y., Li, Y.: AIWAC: affective interaction through wearable computing and cloud technology. IEEE Wirel. Commun. Mag. 22, 20–27 (2015)

25. Hernandez, J., Hoque, M.E., Drevo, W., Picard, R.W.: Moodmeter: counting smiles in the wild. In: Proceedings of the 2012 ACM Conference on Ubiquitous Computing, pp. 301–310. ACM, New York (2012)

26. Lingenfelser, F., Wagner, J., André, E., McKeown, G., Curran, W.: An event driven fusion approach for enjoyment recognition in real-time. In: Proceedings of the ACM International Conference on Multimedia, pp. 377–386. ACM, New York (2014)

27. Turnbull, L., Samanta, B.: Cloud robotics: formation control of a multi robot system utilizing cloud infrastructure. In: Proceedings of IEEE Southeastcon, pp. 1–4. IEEE, Columbia (2013)

28. Zheng, K., Hu, F., Wang, W., Xiang, W., Dohler, M.: Resource allocation in LTE-advanced cellular networks with M2M communications. IEEE Commun. Mag. 50(7), 184–192 (2012)

29. Kehoe, B., Patil, S., Abbeel, P., Goldberg, K.: A survey of research on cloud robotics and automation. IEEE Trans. Autom. Sci. Eng. 99, 1–12 (2015)

30. Chen, M.: Towards smart city: M2M communications with software agent intelligence. Multimed. Tools Appl. 67(1), 167–178 (2013)

31. Hu, G., Tay, W.P., Wen, Y.: Cloud robotics: architecture, challenges and applications. IEEE Netw. 26(3), 21–28 (2012)

32. Zheng, K., Wang, Y., Wang, W., Dohler, M., Wang, J.: Energy-efficient wireless in-home: the need for interference-controlled femtocells. IEEE Wirel. Commun. 18(6), 36–44 (2011)

33. Chen, M., Ma, Y., Wang, J., Mau, D.O., Song, E.: Enabling comfortable sports therapy for patient: a novel lightweight, durable and portable ECG monitoring system. In: 2013 IEEE 15th International Conference on e-Health Networking, Applications and Services, Healthcom 2013, Lisbon, Portugal, pp. 271–273 (2013)

34. Wen, Y.G., Zhu, X.Q., Rodrigues, J.P.C., Chen, C.W.: Mobile cloud media: reflections and outlook (invited paper). IEEE Trans. Multimed. 16(4), 885–902 (2014)

Yujun Ma is a Ph.D. candidate in the School of Computer Science and Technology, Huazhong University of Science and Technology (HUST). His research interests include cloud computing and the Internet of Things. He is a TPC member for the 5th International Conference on Cloud Computing (CloudComp 2014). He was publicity chair for CloudComp 2013, from which he received the Outstanding Service Award.

Yin Zhang is a faculty member of the School of Information and Safety Engineering, Zhongnan University of Economics and Law. He was a post-doctoral fellow at the School of Computer Science and Technology at Huazhong University of Science and Technology (HUST). He is a handling guest editor for New Review of Hypermedia and Multimedia, and serves as a reviewer for IEEE Network and Information Sciences. He is TPC Co-Chair for the 6th International Conference on Cloud Computing

(CloudComp 2015), and was the local chair for the 9th International Conference on Testbeds and Research Infrastructures for the Development of Networks & Communities (TRIDENTCOM 2014) and for CloudComp 2013.

Jiafu Wan is an associate research fellow at South China University of Technology, China. Dr. Wan has authored/co-authored one book and more than 50 scientific papers. He is managing editor for the International Journal of Arts and Technology (IJART). His research interests include cyber-physical systems (CPS), the Internet of Things, machine-to-machine (M2M) communications, mobile cloud computing, and wireless body area networks (WBANs). He is a CCF senior member, and a member of

IEEE, the IEEE Communications Society, the IEEE Control Systems Society, and ACM.


Daqiang Zhang is an associate professor in the School of Software Engineering at Tongji University, China. He is a senior member of IEEE. His research includes mobile computing, distributed computing and wireless sensor networks. He has published more than 70 papers in major journals and international conferences in these areas. He is an editor for Springer Telecommunication Systems, European Transactions on Telecommunications (Wiley), International Journal

of Big Data Intelligence (Inderscience), KSII Transactions on Internet and Information Systems (Korea Society of Internet Information) and New Review of Hypermedia and Multimedia (Taylor & Francis). He received the Best Paper Award from ACCV'2009 and UIC'2012.

Ning Pan is a Post-Doctoral Fellow in the School of Computer Science and Technology at Huazhong University of Science and Technology (HUST), China. He received his M.S. in Computer Software and Theory from Chongqing University (CQU), China, and his Ph.D. in Computer Science from HUST, in 2007 and 2013, respectively. His research interests include medical image processing, data mining and statistical learning.
