
Mobile Robot Aided Silhouette Imaging and Robust Body Pose Recognition for Elderly-Fall Detection

Tong Liu 1 and Jun Liu 2,*

1 Department of Electronic Science, Huizhou University, Guangdong 516001, China
2 College of Physics & Electronic Information Engineering, Wenzhou University, Wenzhou 325035, China
* Corresponding author. E-mail: [email protected]

Abstract This article introduces mobile infrared silhouette imaging and sparse representation-based pose recognition for building an elderly-fall detection system. The proposed imaging paradigm exploits a novel use of the pyroelectric infrared (PIR) sensor in pursuit of body silhouette imaging. A mobile robot carrying a vertical column of multiple PIR detectors is organized for silhouette acquisition. We then express the fall detection problem as silhouette image-based pose recognition. For the pose recognition, we use a robust sparse representation-based method for fall detection. The normal and fall poses are sparsely represented in the basis space spanned by the combination of a pose training template and an error template. l1 norm minimizations with linear programming (LP) and orthogonal matching pursuit (OMP) are used to find the sparsest solution, and the entry with the largest amplitude encodes the class of the testing sample. The application of the proposed sensing paradigm to fall detection is addressed in the context of three scenarios: ideal non-obstruction, simulated random pixel obstruction and simulated random block obstruction. Experimental studies are conducted to validate the effectiveness of the proposed method for nursing and home healthcare.

Keywords Elderly-fall Detection, Healthcare, Pyroelectric Infrared Sensor, Mobile Robot Aided Silhouette Imaging, Sparse Representation

1. Introduction

In recent years and for the foreseeable future, an ageing society is increasingly evident in many countries due to a better quality of life and a lower birth rate. The proportion of the worldwide population over 65 years old is growing [1], and increasing numbers of elderly people need not only advanced medical technologies for the treatment of disease, but also more healthcare services for independent living and a better quality of life. However, the decreasing number of nursing professionals and, in some countries, the decline in the working-age population will cause a serious imbalance when looking to offer enough healthcare services for elderly people. Therefore, the ageing problem has motivated much research on automated and reliable healthcare systems.

Falling is common among elderly people and a major hindrance to daily living, especially independent living. According to the research in reference [2], approximately one-third of those over 65 years old fall each year and half of them are repeat fallers. Falls often lead to dramatic physiological injuries and psychological stress. The elderly may remain on the floor for a long time, leading to life-threatening situations, while the fear of falling can result in decreased activity, isolation and further functional decline. Therefore, falls should be detected as early as possible to reduce the risk of morbidity and mortality [3].

International Journal of Advanced Robotic Systems | Regular Paper

Int J Adv Robot Syst, 2014, 11:42 | doi: 10.5772/57318

Received 20 Feb 2013; Accepted 24 Oct 2013

© 2014 The Author(s). Licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

With regard to detecting falls effectively and quickly, there has been a recent surge of interest in automated fall detection and the detection of other abnormal behaviours of elderly people. These methods can be divided into two categories according to the sensors used: wearable sensors and vision sensors. Wearable sensors, such as accelerometers [4–6], gyroscopes [7] and wireless button alarms [8], are usually mounted on the human body and able to achieve accurate fall detection. However, these solutions compromise both the physical comfort and the psychological wellbeing of elderly people. Moreover, if the elderly person forgets to wear the device, the alarm system will not work. Therefore, automatic fall detection using a non-intrusive method is an essential complementary sensing approach under certain circumstances.

Vision sensors can realize the healthcare objectives in a non-intrusive fashion. Several authors have explored commercial cameras to capture video in home environments and extracted object features for fall detection [9, 10]. Although the existing methods have obtained accurate fall detection, precise body pose extraction and robustness against changing lighting are challenging issues in the computer vision community. Body pose extraction may be corrupted by cluttered clothes and shadows. In addition, camera-based methods intrude on the privacy of the elderly. Several alternative solutions have been proposed in order to analyse human motion using thermal cameras [11–13]. The human body is considered to be a natural emitter of infrared rays. Normally, the body temperature is different from that of its surroundings. This leads to easy motion detection from the background regardless of lighting conditions and the colours of the human surfaces and surroundings. This sensing pattern can extract meaningful information about human motion directly. However, thermal cameras are expensive and the information processing is still difficult to deal with.

To overcome the above limitations of data acquisition, we propose mobile infrared silhouette sensing with a pyroelectric infrared (PIR) sensor array for elderly pose acquisition and robust fall detection via sparse representation. Thanks to the well-established studies of intelligent mobile robots for home care [14, 15] and PIR sensor-based wireless networks for multiple human tracking [16, 17], we make the assumption that the mobile service robot is able to detect where a person has been lying for a long time and move close to the incident region. The robot also has the function of patrolling the region of interest periodically. Another assumption is made based on the fact that elderly people usually do not move after a fall; therefore, a relatively static pose silhouette provides enough information for fall detection. We have designed a sensor array consisting of a single vertical column of PIR detectors for capturing human thermal radiation and use a mobile service robot for implementing silhouette imaging. The mobile robot undertakes rotary scanning and the pose of the human body can be recorded as a crude binary silhouette. Fall detection is cast as image-based object recognition.

For the data processing, we use a sparse representation-based method for fall recognition. In general, simple aspect-ratio-based shape analysis for fall pose detection is able to satisfy the requirements. However, this method is only suitable for applications in an ideal environment with no obstructions. In reality, the object recognition will encounter numerous corruptions, such as furniture obstructions in real home environments. These will render simple aspect-ratio-based shape recognition useless. For this reason, we propose sparse representation-based robust pose recognition, which is mainly motivated by recent work on sparse representation and its applications to robust face recognition and target tracking [18, 19]. The normal and fall poses are sparsely represented in the basis space spanned by the combination of a pose training template and an error template. The sparsest approximation is computed based on l1-minimization using linear programming (LP) and orthogonal matching pursuit (OMP), and the candidate with the maximum amplitude is classified as the predefined pose. We test this classification method in three scenarios: ideal non-obstruction, simulated random pixel obstruction and simulated random block obstruction.

Figure 1 presents the schematic diagram of the system. The fall detection system integrates a mobile infrared silhouette imaging sensor and a sparse representation-based pose recognition algorithm.

Figure 1. Schematic diagram of the proposed system



The experiments are conducted to demonstrate the effectiveness of the described sensing and robust pose recognition in elderly-fall detection. If a user's behaviour is detected as an abnormal event, the alarm will be activated as soon as possible. This succinct sensing pattern and robust recognition algorithm make it flexible for elderly and disabled people to continue to live a free and independent life in their own house while receiving reliable safety assistance.

The rest of this article is organized as follows. In Section 2 we give a brief review of PIR sensing and silhouette imaging, then present the mobile infrared silhouette imaging. Section 3 describes the sparse representation-based robust pose recognition. Section 4 presents the experimental details and results. The summary and conclusions of the article are given in Section 5.

2. Mobile Robot Aided Infrared Silhouette Sensing

2.1. Related Work

Increasing attention has focused on PIR-based motion capturing patterns for human presence detection. The PIR detector has several promising advantages: it is able to convert incident thermal radiation into an electrical signal, and it responds to radiation with wavelengths ranging from 8 µm to 14 µm, which corresponds to the typical thermal radiation emitted from the human body [20]; the cost of a commercially available sensor is extremely low; and its power consumption is low, suitable for wireless networks and mobile agent applications. Because of these advantages, the PIR detector has been developed for lightweight biometric detection [21, 22], human identification [23–25] and multiple human tracking [16, 17].

A recent surge of interest has focused on the silhouette imaging sensor for electronic fence applications. The silhouette sensor belongs to the class of crude imaging devices and captures a pixelated silhouette of the monitored target directly. Sartain first introduced the concept of the silhouette imaging sensor and discussed a variety of approaches to realizing one [26]. The crude silhouettes generated from a sparse array of sensors offer sufficient information for the classification task, and the classification algorithms are tractable without complex image processing. Russomanno et al. designed a sparse array of sensors with near-infrared (IR) transmitters and receivers for border, perimeter and other intelligent electronic fence applications [27, 28]. However, the proposed sensor is an active version and is difficult to deploy in a nursing home. Thus, the passive sensing pattern is a more attractive option. Jacobs et al. introduced the concept of passive PIR sensor-based silhouette imaging and gave a simulation based on thermal infrared video sequences [29]. William et al. presented a pyroelectric linear array-based silhouette sensor for distinguishing humans from animals, but they did not discuss its application to the analysis of human poses [30]. It should be noted that the field of view (FOV) of the above-mentioned solutions is fixed, and it is difficult to extend them using sensor networks or mobile agents for home healthcare situations.

Our sensing device is motivated by the above silhouette imaging. A crude silhouette of the human body preserves sufficient information for distinguishing between normal and abnormal poses. Considering the fact that elderly people usually do not move after a fall incident, a mobile silhouette imaging device is preferable for sensing a relatively static object. Thus, we embed linear multi-cell PIR detectors on a mobile robot for capturing the pose of the human body. The fall detection problem is then cast as image-based object recognition.

2.2. Sensing Model

Figure 2 shows the sensing model of a PIR detector. The human body is a natural infrared radiation source and makes exchanges with the surroundings. Thus, the PIR detector collects the incident thermal radiation. This causes changes in the temperature of the pyroelectric material, which are converted into an electrical output. The pyroelectric sensing model can be briefly represented using a form of reference structure [31]:

M(r_m, t) = H(t) ∗ Σ_{r_s} V(r_m, r_s) S(r_s, t)    (1)

where H(t) = [dP_s/dT] · [dT/dt] is the impulse response function of the PIR detector, T the temperature, t the time tag and P_s the polarization per unit volume [20]. The quantity P = dP_s/dT is known as the pyroelectric coefficient and is related to the pyroelectric material, so H(t) = P · [dT/dt] is proportional to the rate of temperature change. In particular, a stationary human body does not trigger the detector: the PIR detector only responds to human movement, without regard to the body's clothing textures. For a relatively static body, the body information can be obtained using mobile infrared scanning. S and M denote the radiation state vector and the measurement vector, respectively. V(r_m, r_s) is the visibility function, which is "1" when r_s is visible to the detector at r_m and "0" otherwise.
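As a rough numerical illustration of the model in Eq. (1), the detector output can be sketched as the impulse response (proportional to the rate of temperature change) convolved with the visibility-gated body radiation. All numbers below (dwell time in the FOV, electronics time constant, radiation amplitude) are illustrative assumptions, not calibrated values from the actual D205b hardware.

```python
import numpy as np

# 1-D sketch of Eq. (1): output = H(t) * [visibility-gated radiation].
fs = 10.0                      # 10 Hz sampling rate, as used in Section 2.3
t = np.arange(0, 20, 1 / fs)   # one 20 s rotary scan

# Visibility function V: the body sits inside the detector's FOV for 2 s.
visible = ((t >= 9.0) & (t < 11.0)).astype(float)
radiation = 1.0 * visible      # radiation state S, arbitrary units

# H(t) ∝ dT/dt: the detector responds to *changes* in incident radiation,
# modelled here as a first difference followed by an assumed smoothing
# kernel standing in for the lens/electronics response.
dT = np.diff(radiation, prepend=radiation[0])
smoothing = np.exp(-np.arange(10) / 3.0)
output = np.convolve(dT, smoothing)[: t.size]

# A static body (constant radiation) produces no output; entry and exit of
# the body through the FOV produce positive and negative deflections.
print(output[85:95].round(2))
```

Consistent with the text, the simulated detector stays silent while the radiation field is constant and deflects only when the scanned FOV crosses the body boundary.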

In this article, commercially available pyroelectric detectors (D205b [32]) are employed for sensing changes of the thermal radiation in the object space. The object space is defined as the collection of the thermal radiation fields of a human body. Fresnel lenses are used to match the FOV of each PIR detector to the motion sensing space. Because the PIR detector responds only to the thermal radiation emitted from the human body, the intrusion of visible illumination can be removed. When a sensing region moves across the object space continuously, the detector gives a large voltage output corresponding to the human body, as shown in Figure 2.

Figure 2. Mobile pyroelectric infrared sensing model

Once the voltage outputs are collected, they are transmitted wirelessly to the data processing centre. At the data processing centre, users can determine the presence of the human body via the short-time energy method. Figure 3 shows a typical response of a PIR detector when scanning across a human body and explains the short-time energy method for transforming the raw analogue signals into an "ON" or "OFF" state signal. First, the collected signal from a PIR detector is normalized by removing its direct-current component. Second, we calculate the squared absolute value of the normalized signal and enframe it into overlapping frames. Third, the energy signal is obtained by accumulating the enframed signal in each column, and a predefined threshold is used for determining the current state.
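The three steps above can be sketched as follows. The frame length, hop size and threshold are illustrative assumptions (the paper does not report its exact values), and the input trace is synthetic.

```python
import numpy as np

def short_time_energy_state(signal, frame_len=16, hop=8, threshold=0.5):
    """Turn a raw PIR voltage trace into a binary ON/OFF state signal."""
    # 1) Normalize: remove the direct-current component.
    x = signal - np.mean(signal)
    # 2) Squared absolute value, split into overlapping frames.
    sq = np.abs(x) ** 2
    n_frames = 1 + max(0, (sq.size - frame_len) // hop)
    frames = np.stack([sq[i * hop : i * hop + frame_len] for i in range(n_frames)])
    # 3) Accumulate each frame into an energy value and threshold it.
    energy = frames.sum(axis=1)
    return (energy > threshold).astype(int), energy

# Usage: a quiet trace with an oscillatory burst in the middle, mimicking
# the detector response while the FOV sweeps across a body.
rng = np.random.default_rng(0)
sig = 0.01 * rng.standard_normal(200)
sig[80:120] += 0.8 * np.sin(np.linspace(0, 6 * np.pi, 40))
state, energy = short_time_energy_state(sig)
print(state)
```

Frames covering the burst report high energy and map to "ON"; the quiet edges map to "OFF".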

Figure 3. Flowchart of the short-time energy method for transforming the analogue signals into a binary state signal

2.3. Mobile Silhouette Sensing

Figure 4 and Figure 5 show the proposed silhouette sensor consisting of a single vertical column of multi-cell PIR detectors, which is attached to an intelligent mobile robot. The sensor array organizes a vertical column of 20 PIR detectors to capture the human pose information at different heights. The lowest detector is 6 cm above the ground, and the pitch between any two adjacent detectors is 6 cm. Therefore, the sensor array is able to acquire a crude image of a region with a total height of 120 cm. The optical axis of each detector is perpendicular to the longitudinal axis of the object. For better resolution, both the horizontal and vertical FOVs of each detector are 10°.
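The array geometry above can be checked with a few lines of arithmetic. The 1.5 m object distance used here is an assumption borrowed from the experimental setup reported later in this section; the detector count, pitch and FOV come from the paragraph above.

```python
import math

n_detectors = 20
pitch_cm = 6.0          # spacing between adjacent detectors
lowest_cm = 6.0         # height of the lowest detector above the ground

# Height of the topmost detector, i.e. the vertical extent of the image.
array_top_cm = lowest_cm + (n_detectors - 1) * pitch_cm
print("covered height:", array_top_cm, "cm")          # matches the 120 cm above

# Footprint of one 10-degree FOV at an assumed 1.5 m object distance.
fov_deg = 10.0
distance_m = 1.5
footprint_m = 2 * distance_m * math.tan(math.radians(fov_deg / 2))
print(f"single-detector footprint at {distance_m} m: {footprint_m:.2f} m")
```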

The user tracking method used by the mobile robot is based on wireless distributed PIR sensors [16, 17]. The FOVs of the distributed sensors are cooperatively coded to support multiple human tracking and identification. The user's location and biometric information can be transmitted to the robot wirelessly via the data centre. The robot then chooses an interesting target and moves close to him/her for scanning imaging.

Figure 4. Presentation of the imaging sensor array consisting of a single column of PIR detectors installed on a mobile robot

Figure 5. Prototype of the mobile infrared silhouette sensor

Under the assumption of knowing the location where a human has been lying for a long time, the intelligent mobile robot moves near to the region of interest and performs a patrol task autonomously. With the help of the mobile robot, the PIR sensor array undertakes scanning using self-rotation. In our experiments, the rotary speed of the robot is approximately 9 degrees per second. The distance between the object and the sensor is 1.5 m. For the signal sampling and wireless communication, we use an ultra-low power consumption microcontroller, the CC2430 from Texas Instruments. The microcontroller samples each PIR sensor's signal at a rate of 10 Hz and then transmits the signal to the data centre wirelessly. The wireless communication in this system follows the ZigBee (802.15.4) protocol, which has a low data rate and low power consumption compared with other wireless protocols.

The time spent on a complete scan is approximately 20 seconds. This time is governed by the maximum rotation speed of the mobile robot and could be improved by using a more flexible and quicker robot. If the user disappears from the scanning region, the robot will query the data centre to confirm the location of the user. If the user still remains within the scanned region and the robot is not able to capture the body image, we infer that access to body sensing is not feasible. Scanning at another position or using other devices could compensate for this disadvantage. If the user moves away from the scanned region, the robot will restart the tracking program.

To confirm the effectiveness of the proposed sensing method, we collected some silhouettes based on typical normal poses and falls in the laboratory as experimental reference samples. According to research on sedentary living among elderly people [33, 34], they are more accustomed to less frequent and low-intensity activities. Older adults often spend much time standing or sitting to work, eat, read or socialize. The samples therefore include the three most common categories: standing, sitting and fallen on the ground. Figure 6 presents the experimental scenarios and the acquired silhouettes. All pose silhouettes are normalized to a constant dimension of 20 × 150 and the body is located at the centre in the horizontal direction. After the rotating scan, the pixelated silhouette images can be recorded. However, the raw silhouettes contain distortion, as shown in Figure 6(b). We use a median filter of size 3 × 3 for preprocessing the raw silhouettes. It should also be noted that the binary silhouettes protect the privacy of elderly people.
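The 3 × 3 median filtering step can be sketched as below. The silhouette is a synthetic 20 × 150 stand-in (not real sensor data), and the filter is a plain numpy implementation rather than any specific toolbox routine.

```python
import numpy as np

def median_filter_3x3(img):
    """Apply a 3 x 3 median filter with edge padding to a 2-D array."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.median(padded[r : r + 3, c : c + 3])
    return out

rng = np.random.default_rng(1)
clean = np.zeros((20, 150), dtype=np.uint8)
clean[2:18, 65:85] = 1                        # crude standing-body blob

# Simulate scanning distortion by flipping isolated random pixels.
noisy = clean.copy()
rows = rng.integers(0, 20, 30)
cols = rng.integers(0, 150, 30)
noisy[rows, cols] ^= 1

refined = median_filter_3x3(noisy)
print("distorted pixels before/after filtering:",
      int((noisy != clean).sum()), int((refined != clean).sum()))
```

Isolated flipped pixels disappear after filtering, which is why the refined silhouettes in Figure 6(c) look cleaner than the raw ones in Figure 6(b).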

Figure 6. Illustration of the typical experimental scenario and the acquired silhouettes. (a) Scenarios with standing pose, sitting pose and fall pose. (b) Corresponding silhouettes generated by the mobile infrared silhouette sensor. (c) Refined silhouettes via median filter.

3. Sparse Representation-based Robust Pose Recognition

Recent developments in sparse representation-based classification reveal that a test sample can be linearly represented using an overcomplete dictionary whose base elements are the combination of training templates and error-compensation templates [18, 19]. If the data processing centre captures sufficient samples for reference, it is able to represent the test image with a sparse coefficient vector spanned on the template of the same class. Following this framework, we exploit sparse representation-based classification to perform binary silhouette-based pose recognition.

In the proposed fall detection system, there are two categories of poses: the normal pose and the fall pose. We use the labelled training samples to build the template matrix. For all sample images, we first arrange each of them as a vector m ∈ R^M (M = d_v × d_h), where M is the dimension of the sample, while d_v and d_h are the vertical and horizontal dimensions of the raw image. Given n_F and n_N training samples from the fall poses and normal poses, we reshape them as columns of two template matrices T_F = [t_{F,1}, · · · , t_{F,n_F}] and T_N = [t_{N,1}, · · · , t_{N,n_N}], t ∈ R^M. The columns of T_F and T_N are assigned to the fall pose and normal pose, respectively.
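Assembling the templates amounts to flattening each silhouette into a column vector and stacking the columns. The sketch below uses random placeholder images; the dimensions (20 × 150 silhouettes, 30 fall and 60 normal training samples) follow Sections 2.3 and 4.1, and the identity error basis anticipates Eq. (6).

```python
import numpy as np

d_v, d_h = 20, 150                  # silhouette dimensions (Section 2.3)
M = d_v * d_h                       # flattened sample dimension, M = 3000
n_F, n_N = 30, 60                   # fall / normal training counts (Section 4.1)

rng = np.random.default_rng(0)
# Placeholder binary silhouettes; in the real system these are the
# median-filtered scans acquired by the mobile sensor.
fall_images = rng.integers(0, 2, size=(n_F, d_v, d_h))
normal_images = rng.integers(0, 2, size=(n_N, d_v, d_h))

# Reshape each image into a column t in R^M and stack per class.
T_F = fall_images.reshape(n_F, M).T
T_N = normal_images.reshape(n_N, M).T
T = np.hstack([T_F, T_N])           # global template, Eq. (3)

# Extended template with the identity error basis, Eq. (6).
Phi = np.hstack([T, np.eye(M)])
print(T.shape, Phi.shape)           # -> (3000, 90) (3000, 3090)
```

The printed sizes match the extended template dimension 3000 × 3090 reported in the experimental section.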

Assuming the data centre has collected enough training samples of the ith pose, T_i = [t_{i,1}, · · · , t_{i,n_i}], i ∈ {F, N}, a test sample of the same class can be approximated in linear form as:

m ≈ α_{i,1} t_{i,1} + α_{i,2} t_{i,2} + · · · + α_{i,n_i} t_{i,n_i}    (2)

where the α_{i,j} are regression coefficients. Since the class of the test sample is unknown to the data processing centre, it is necessary to build a global training template by concatenating the fall and normal pose templates as:

T = [T_F, T_N] = [t_{F,1}, · · · , t_{F,n_F}, t_{N,1}, · · · , t_{N,n_N}]    (3)

Then, the linear representation of m can be modified as:

m = Tw (4)

where w = [0, · · · , 0, α_{i,1}, · · · , α_{i,n_i}, 0, · · · , 0]^T ∈ R^{n_F+n_N} is a sparse coefficient vector whose nonzero entries are associated with the ith class. Thus, the nonzero entries of the vector encode the identity of the testing sample m. We can solve the equation m = Tw to determine the class of the pose image.

In many practical home scenarios, the silhouette image m may be partially obstructed by unpredictable factors. Typically, a random pixel obstruction arises from hardware errors or noise, while a block obstruction is caused by the user's furniture. Considering these cases, the above regression model should be modified:

m = Tw + ε (5)

where ε is the error vector, of which we assume only a small fraction of the entries are nonzero. The nonzero entries of ε denote the errors or obstructions in the image m. Since the error vector may have arbitrary magnitude and the number of nonzero entries is unknown, it cannot be ignored. To increase the robustness of the constructive regression, we add an identity matrix I ∈ R^{M×M} as the error basis for approximating the nonzero error entries. The sparse representation can be rewritten as

m = [T, I] [w^T, e^T]^T = Φw_e    (6)

where I = [i_1, i_2, · · · , i_M], i_j ∈ R^M is a vector with a single unit entry, and e ∈ R^M is the error coefficient vector. With Φ = [T, I] ∈ R^{M×(M+n_F+n_N)}, the equation m = Φw_e is underdetermined and does not, in general, have a unique solution for the extended w_e. However, we have previously assumed that the entries of the extended w_e = [w, e] are sparse. We therefore cast the problem of finding the sparsest solution as an optimization task.

The problem of finding the sparsest solution of the equation m = Φw_e can be transformed into the following optimization task:

w_e = arg min ‖w_e‖_0 subject to Φw_e = m    (7)

where ‖ · ‖_0 is the l0 norm of a vector, and ‖w_e‖_0 denotes the number of nonzero entries in w_e. In general, solving the l0 norm minimization requires an exhaustive search, and the existence of a unique sparsest solution must meet certain conditions. This problem is regarded as NP-hard and has combinatorial computational complexity. However, recent developments in compressive sensing theory reveal that the l0 optimization can be replaced with the l1 norm minimization [35, 36]. The l1 norm optimization is formally defined as follows:

w_e = arg min ‖w_e‖_1 subject to Φw_e = m    (8)

where ‖ · ‖_1 is defined as ‖w_e‖_1 = Σ_{n=1}^{M+n_F+n_N} |w_e(n)|.

This is a convex optimization problem, and there are sophisticated methods with polynomial computational complexity which can be used to solve it. There are two representative families of algorithms for sparse recovery. The first is based on convex optimization, where the problem is solved via LP [35, 36]. The second is the greedy approach, where the problem is solved by sequentially investigating the support of the recovered signal [37]. OMP is a widely used method in the greedy algorithm family due to its simplicity and good performance. In this article, we first use CVX, a package for specifying and solving convex programs, to solve the l1 norm optimization [38]. We then find the sparse support of the recovered signal using standard OMP [37].
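The greedy route can be illustrated with a minimal OMP sketch: grow the support of w_e one atom at a time by picking the column of Φ most correlated with the residual, then re-fit on the chosen support by least squares. This is generic OMP (assuming at least one iteration), not the authors' exact implementation.

```python
import numpy as np

def omp(Phi, m, n_nonzero, tol=1e-8):
    """Greedy orthogonal matching pursuit for m = Phi @ x with sparse x."""
    residual = m.astype(float).copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom best aligned with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Orthogonal projection: least-squares fit on the active support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], m, rcond=None)
        residual = m - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Usage: recover a 2-sparse vector exactly from an orthonormal dictionary.
Phi = np.eye(4)
m = np.array([3.0, 0.0, 0.0, 1.0])
print(omp(Phi, m, n_nonzero=2))     # -> [3. 0. 0. 1.]
```

For orthonormal columns the recovery is exact after two iterations; for the overcomplete Φ = [T, I] of Eq. (6), recovery holds only under the usual sparsity and incoherence conditions.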

For a newly obtained silhouette image, we first compute its sparsest solution following (8). Generally, the nonzero entries in w_e should be associated with a single pose class. However, noise, errors and obstructions may lead to small nonzero entries associated with other classes and the error templates. Therefore, we assign the silhouette image m to the pose class according to the largest entry in the recovered w_e. Algorithm 1 below gives the pose recognition framework. Figure 7 illustrates the overall recognition framework; the sparse recovery is based on LP. If the largest nonzero entry is associated with the error template, we reject making a decision, and other sensing methods may be needed for analysing the pose of the human body.
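The LP route and the largest-entry decision rule can be sketched as follows. The paper solves (8) with the CVX package; as an illustrative stand-in, the same l1 problem is posed here as a linear program via the standard split w_e = u − v with u, v ≥ 0, so that min ‖w_e‖_1 subject to Φw_e = m becomes min 1ᵀu + 1ᵀv subject to [Φ, −Φ][u; v] = m, handed to scipy.optimize.linprog. The tiny matrices are toy placeholders, not real templates.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(Phi, m):
    """Solve min ||x||_1 s.t. Phi @ x = m via the u - v LP reformulation."""
    K = Phi.shape[1]
    c = np.ones(2 * K)                  # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])       # equality constraint [Phi, -Phi][u; v] = m
    res = linprog(c, A_eq=A_eq, b_eq=m, bounds=(0, None))
    u, v = res.x[:K], res.x[K:]
    return u - v

# Decision rule: assign the test sample to the class owning the
# largest-amplitude entry of the recovered coefficient vector.
Phi = np.array([[1.0, 0.0, 0.5],
                [0.0, 1.0, 0.5]])
m = np.array([1.0, 0.0])
w = l1_min(Phi, m)
print(np.round(w, 4), "largest entry at index", int(np.argmax(np.abs(w))))
```

In this toy system the unique l1 solution concentrates all its weight on the first atom, so the decision rule returns index 0.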

4. Experimental Details and Results

4.1. Experimental Setup

The experimental acquisition of normal and fall poses is performed with the involvement of 10 volunteers: two female subjects and eight male subjects. All volunteers are of normal height, ranging from 160 cm to 180 cm. The data acquisition process is based on a relatively frontal capture. For each category of activity, the participants are required to perform a self-selected pose and strategy. For each kind of pose, we use the proposed sensor array to scan six times at the predefined rate. Thus, there are 60 samples for standing, 60 samples for sitting and 60 samples for fall poses.

The experimental data are divided into two sets: training templates and testing samples. At the initialization stage, we randomly select 30 samples from each pose for building the training templates. Therefore, there are 30 columns representing the fall pose and 60 columns representing the normal pose. Hence, the extended template matrix Φ has size 3000 × 3090. The remaining samples are used for testing the recognition method. The following average results are computed over 10 rounds of cross-validation. All the recognition experiments are run on an Intel Pentium 4 2.8 GHz computer under a Matlab implementation. The average time spent on OMP recovery is 0.5738 s with a maximum of 0.6125 s, while the average time for LP is 1.8618 s with a maximum of 2.0672 s.

4.2. Recognition without Obstruction

We first test the proposed method for the pose recognitionwithout obstruction. Figure 8 illustrates the representativeresults of algorithm1, the sparse recovery is based on LP.Figure 8.(b) gives the sparse coefficients spanned on the

Algorithm 1 Framework of the pose recognition
1: Acquiring the infrared silhouette image using the mobile PIR sensor.
2: Input: a matrix of training samples T = [T_F, T_N] = [t_{F,1}, ..., t_{F,n_F}, t_{N,1}, ..., t_{N,n_N}], and a test sample m.
3: Extending the template matrix with Φ = [T, I] ∈ R^{M×(M+n_F+n_N)}.
4: Normalizing each column of Φ to have unit l2 norm.
5: Solving the l1 norm minimization problem:

we = [w, e] = arg min ‖we‖_1 subject to Φ we = m,

6: return identity(m) = arg max(we).
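Step 5 of Algorithm 1 can also be solved greedily with OMP. The following numpy sketch is ours, not the paper's implementation: the function name, toy dimensions and data are assumptions, but it implements the greedy recovery and the arg-max decision of step 6.

```python
import numpy as np

def omp(Phi, m, n_nonzero):
    """Greedy orthogonal matching pursuit for Phi @ we = m (a sketch)."""
    residual = m.copy()
    support = []
    w = np.zeros(Phi.shape[1])
    for _ in range(n_nonzero):
        # pick the column most correlated with the current residual
        k = int(np.argmax(np.abs(Phi.T @ residual)))
        if k not in support:
            support.append(k)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], m, rcond=None)
        w[:] = 0.0
        w[support] = coef
        residual = m - Phi[:, support] @ coef
    return w

# Toy example: 6 training templates plus a 20x20 identity error template.
rng = np.random.default_rng(1)
T = rng.random((20, 6))
Phi = np.hstack([T, np.eye(20)])
Phi /= np.linalg.norm(Phi, axis=0)

m = Phi[:, 1]                      # a test sample equal to one training column
w = omp(Phi, m, n_nonzero=3)
print(int(np.argmax(np.abs(w))))   # 1 -> the largest entry identifies the class
```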



Figure 7. Illustration of the sparse representation-based pose recognition: the fall pose entries, normal pose entries and error entries of the sparse coefficients, with the largest entry encoding the class.

training template, while Figure 8(c) shows the error coefficients spanned on the error template. It can be seen that the blue entries of the sparse coefficients correspond to the true pose class and have larger amplitudes than those in the error coefficients. The largest amplitude among the estimated candidates is always assigned to the true class. The red or green circle in Figure 8(b) indicates the determined pose class. The proposed fall detection method achieves a 100% recognition rate based on both LP and OMP recovery.
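The paper performs the LP-based recovery with CVX in Matlab [38]. As an illustrative stand-in (the formulation, names and toy data are assumptions), the l1 problem of step 5 can be cast as a linear program by splitting we = u - v with u, v >= 0 and minimizing sum(u + v):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(Phi, m):
    """min ||we||_1 s.t. Phi @ we = m, posed as an LP (a sketch)."""
    M, N = Phi.shape
    c = np.ones(2 * N)                # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])     # Phi @ u - Phi @ v = m
    res = linprog(c, A_eq=A_eq, b_eq=m, bounds=(0, None), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

rng = np.random.default_rng(2)
Phi = rng.standard_normal((15, 40))
Phi /= np.linalg.norm(Phi, axis=0)

w_true = np.zeros(40)
w_true[5] = 1.0                       # a 1-sparse ground truth
m = Phi @ w_true

w = l1_min(Phi, m)
print(int(np.argmax(np.abs(w))))      # 5
```

With far fewer rows than columns, the equality-constrained l1 program still recovers the sparse coefficient vector, which is what makes the largest-entry decision rule reliable.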

4.3. Recognition with Random Pixel Obstruction

Considering the possible errors caused by the PIR sensors and the noise associated with the mobile robot or the sensing surroundings, we simulate this situation by applying random pixel obstruction to the silhouette images at various levels, from 10% to 50%. The obstruction operation is executed by a 'bit-or' operation between the raw images and a random pixel obstruction mask. Figure 9 illustrates representative results of Algorithm 1 with 20% obstruction; the sparse recovery is based on LP. Figure 9(c) shows the images with the random pixel obstruction. Figure 9(d) shows the amplitudes of the coefficients on the training template, while Figure 9(e) shows the amplitudes of the coefficients on the error template. It can be seen that the blue entries correspond to the true pose class, while a limited number of error coefficients are activated. In these representative examples, the estimated global candidates are sparse and have the largest amplitude at the associated class. The red or green circle in Figure 9(d) marks the determined pose class. In this test, the sparse representation-based pose recognition method is able to detect the fall pose and normal pose correctly under serious random noise. Table 1 exhibits the average recognition rates at various levels of random pixel obstruction. The proposed fall detection method achieves similar performance based on both LP and OMP recovery.
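The pixel-obstruction corruption can be sketched as follows. The 100 × 30 silhouette grid, the body region and the function name are assumptions, but the 'bit-or' combination matches the operation described above:

```python
import numpy as np

def corrupt_pixels(silhouette, fraction, rng):
    """'Bit-or' a binary silhouette with random pixels set at `fraction`."""
    noise = rng.random(silhouette.shape) < fraction
    return silhouette | noise

rng = np.random.default_rng(0)
img = np.zeros((100, 30), dtype=bool)  # assumed 100x30 grid (M = 3000 pixels)
img[20:80, 10:20] = True               # crude stand-in for a body region

corrupted = corrupt_pixels(img, 0.2, rng)   # 20% obstruction level
# Note: bit-or only adds hot pixels; no true silhouette pixel is erased.
```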

4.4. Recognition with Random Block Obstruction

For simulating a realistic scene containing furniture obstructions, we create random block obstructions on the silhouette images at various levels, from 10% to 50%. The obstruction operation is executed by masking the raw images with random rectangular block obstructions. Figure 10 illustrates representative results of Algorithm 1 with 20% obstruction; the sparse recovery is based on LP. Figure 10(a) shows the raw images and Figure 10(b) simulates the mask with the random block obstructions. Figure 10(c) is the simulated silhouette image produced by the 'bit-or' operation between Figure 10(a) and Figure 10(b). It can be seen that the blue entries are associated with the true pose class. In these representative examples, the estimated candidates are sparse and have the largest amplitude at the true pose class. The red or green circle with the largest amplitude in Figure 10(d) indicates the determined pose class. In this test, the sparse representation-based pose recognition method is able to detect the fall pose and normal pose correctly under serious block obstructions. Table 2 exhibits the average recognition rates at various levels of random block obstruction. The proposed fall detection method achieves similar performance based on both LP and OMP recovery.
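A block-obstruction mask can be simulated in the same spirit. The rectangle geometry, grid size and names here are assumptions; only the 'bit-or' combination of mask and raw image is taken from the text:

```python
import numpy as np

def block_mask(shape, fraction, rng):
    """One random rectangle covering roughly `fraction` of the image area."""
    h, w = shape
    bh = max(1, int(round(h * np.sqrt(fraction))))
    bw = max(1, int(round(w * np.sqrt(fraction))))
    top = rng.integers(0, h - bh + 1)
    left = rng.integers(0, w - bw + 1)
    mask = np.zeros(shape, dtype=bool)
    mask[top:top + bh, left:left + bw] = True
    return mask

rng = np.random.default_rng(0)
img = np.zeros((100, 30), dtype=bool)  # assumed silhouette grid
img[20:80, 10:20] = True               # crude body region

mask = block_mask(img.shape, 0.2, rng)  # roughly 20% obstruction
occluded = img | mask                   # same 'bit-or' as Figure 10(c)
```

Because the obstruction is spatially contiguous, it corrupts whole regions of the silhouette at once, which is why block obstruction degrades recognition differently from scattered pixel noise.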

5. Discussions and Conclusions

In this article, we integrate an effective infrared sensing method and a robust pose recognition algorithm for elderly-fall detection. For the data acquisition, we use a single column of PIR detectors to implement the infrared silhouette imaging. A mobile robot agent is employed to aid the mobile silhouette imaging. Fall detection is cast as binary silhouette-based pose recognition. The candidate pose is represented as a linear combination of training templates and error templates. A good pose can be approximated by the training template, which leads to a sparse solution. Although some error coefficients will be activated in the simulated practical scenarios, the combined coefficients are still sparse. The l1 norm minimizations using LP and OMP are used for finding the sparsest solution, and the entity with the largest amplitude indicates the class of the testing sample. From the experimental results, both algorithms have similar performance, but the OMP method takes less computational time than the LP method. In some resource-limited circumstances, OMP is the better choice.

However, there are some limitations to our system. First, the data acquisition process is based on a relatively frontal capture. For each category of activity, the participants were required to perform a self-selected pose and strategy. In the case that a frontal acquisition is not




Figure 8. Illustration of the recognition without obstruction. (a) The test silhouette images. (b) Estimated sparse coefficients w. (c) Estimated error coefficients e.

                                          Recovered by LP                    Recovered by OMP
Percent obstructed (%)             10    20     30     40     50      10    20     30     40     50
Fall recognition (%)              100   100   98.33  96.67   55      100   100   98.16  95.97   57
Fall detected as normal pose (%)    0     0     0      0      0        0     0     0      0      0
Rejection (%)                       0     0    1.67   3.33   45        0     0    1.84   4.03   43
Normal pose recognition (%)       100   100  100     98.33   57.5    100   100   99.65  98.14   56.5
Normal pose detected as fall (%)    0     0     0      0      0        0     0     0      0      0
Rejection (%)                       0     0     0     1.67   42.5      0     0    0.35   1.86   43.5

Table 1. Recognition rate with random pixel obstruction

                                          Recovered by LP                    Recovered by OMP
Percent obstructed (%)             10    20     30     40     50      10     20     30     40     50
Fall recognition (%)              100   100   98.33  95     88.33    100    99.04  96.76  93.5   86.45
Fall detected as normal pose (%)    0     0     0      0      0        0     0      0      0      0
Rejection (%)                       0     0     0      0      0        0     0.96   3.24   6.5   13.55
Normal pose recognition (%)       100    99.17 97.5  93.33  79.17    100    98.86  97.1   92.86  76.65
Normal pose detected as fall (%)    0     0    0.83   2.5    5        0     0      0.84   3.3    6.5
Rejection (%)                       0    0.83  1.67   4.17  15.83     0     1.14   2.06   3.84  16.85

Table 2. Recognition rate with random block obstructions




Figure 9. Illustration of the recognition with random pixel obstruction. (a) The raw test silhouette images. (b) The masks used for simulating random pixel obstruction. (c) The silhouette images with random pixel obstruction. (d) Estimated sparse coefficients w. (e) Estimated error coefficients e.


Figure 10. Illustration of the recognition with random block obstructions. (a) The raw test silhouette images. (b) The masks used for simulating random block obstructions. (c) The silhouette images with block obstructions. (d) Estimated sparse coefficients w. (e) Estimated error coefficients e.

possible, the posture may compromise the structure of the data acquired, which is a limitation of this fall detection method. If the robot has difficulty accessing the frontal position in real application scenarios, we can deploy a similarly-structured PIR array horizontally on the ground to acquire a tangent silhouette, or provide a help button to assist the fall detection. Second, in practical usage, if the service environment contains certain heating sources at



body temperature, they will interfere with the silhouette imaging process. Therefore, the proposed system will have better practical performance in more controlled environments such as nursing homes.

While the proposed methods do not help to prevent falls or decrease the number of falls occurring in the home, they may provide a sense of comfort and reassurance to the elderly: if an emergency occurred, immediate assistance and care would be available to them. The proposed sensing model is not only advantageous in providing a low-cost, non-invasive motion sensing method that is unaffected by lighting conditions; it may also become a ubiquitous agent for healthcare applications.

6. Acknowledgements

The authors would like to thank the anonymous reviewers for their constructive comments and suggestions. They also wish to thank all the staff of the Information Processing & Human-Robot Systems lab at Sun Yat-sen University for their aid in conducting the measurement experiments. This work is partly supported by the Natural Science Foundation of Liaoning Province (grant no. 2013020008) and the National Natural Science Foundation of China (grant no. 61074167).

7. References

[1] World Health Day 2012 - Good health adds life to years (2012). World Health Organization.

[2] Chan BK, Marshall LM, Winters KM, Faulkner KA, Schwartz AV, and Orwoll ES (2007) Incident fall risk and physical activity and physical performance among older men. American Journal of Epidemiology. 165(6): 696–703.

[3] Noury N, Fleury A, Rumeau P, Bourke AK, Laighin GO, Rialle V, and Lundy JE (2007) Fall detection - principles and methods. Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. pp. 1663–1666.

[4] Karantonis DM, Narayanan MR, Mathie M, Lovell NH, and Celler BG (2006) Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Transactions on Information Technology in Biomedicine. 10(1): 156–167.

[5] Lai CF, Huang YM, Park JH, and Chao HC (2010) Adaptive body posture analysis for elderly-falling detection with multisensors. IEEE Intelligent Systems. 25(2): 20–30.

[6] Bourke AK, O'Brien JV, and Lyons GM (2007) Evaluation of a threshold-based tri-axial accelerometer fall detection algorithm. Gait & Posture. 26(2): 194–199.

[7] Bourke AK, Lyons GM (2008) A threshold-based fall-detection algorithm using a bi-axial gyroscope sensor. Medical Engineering & Physics. 30(1): 84–90.

[8] Hori T, Nishida Y, Aizawa H, Murakami S, and Mizoguchi H (2004) Sensor network for supporting elderly care home. Proceedings of IEEE Sensors. 2: 575–578.

[9] Lee YS, Chung WY (2008) Novel video sensor based fall detection of the elderly using double-difference image and temporal template. Sensor Letters. 6(2): 352–357.

[10] Lee YS, Chung WY (2011) Vision sensor based fall incident detection of elderly persons in real-time healthcare surveillance system. Sensor Letters. 9(1): 162–169.

[11] Han J, Bhanu B (2005) Human activity recognition in thermal infrared imagery. Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops. 3, pp. 17.

[12] Ming D, Xue Z, Meng L, Wan B, Hu Y, and Luk KDK (2009) Identification of humans using infrared gait recognition. Proceedings of the 2009 IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems. pp. 319–322.

[13] Kim D, Lee S, and Paik J (2009) Active shape model-based gait recognition using infrared images. Signal Processing, Image Processing and Pattern Recognition. 61: 275–281.

[14] Harmo P, Taipalus T, Knuuttila J, Vallet J, and Halme A (2005) Needs and solutions - home automation and service robots for the elderly and disabled. Proceedings of International Conference on Intelligent Robots and Systems. pp. 3201–3206.

[15] Fong T, Nourbakhsh I, and Dautenhahn K (2003) A survey of socially interactive robots. Robotics and Autonomous Systems. 42: 143–166.

[16] Hao Q, Hu F, and Xiao Y (2009) Multiple human tracking and identification with wireless distributed pyroelectric sensor systems. IEEE Systems Journal. 3(4): 428–439.

[17] Hao Q, Brady DJ, Guenther BD, Burchett JB, Shankar M, and Feller S (2006) Human tracking with wireless distributed pyroelectric sensors. IEEE Sensors Journal. 6(6): 1683–1696.

[18] Wright J, Yang AY, Ganesh A, Sastry SS, and Ma Y (2009) Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 31(2): 210–227.

[19] Mei X, Ling H (2011) Robust visual tracking and vehicle classification via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 33(11): 2259–2272.

[20] Hossain A, Rashid MH (1991) Pyroelectric detectors and their applications. IEEE Transactions on Industry Applications. 27(5): 824–829.

[21] Burchett J, Shankar M, Hamza AB, Guenther BD, Pitsianis N, and Brady DJ (2006) Lightweight biometric detection system for human classification using pyroelectric infrared detectors. Applied Optics. 45(13): 3031–3037.

[22] Liu T and Liu J (2012) Feature-specific biometric sensing using ceiling view based pyroelectric infrared sensors. EURASIP Journal on Advances in Signal Processing. 2012: 206.

[23] Fang JS, Hao Q, Brady DJ, Guenther BD, and Hsu KY (2006) Real-time human identification using a pyroelectric infrared detector array and hidden Markov models. Optics Express. 14(15): 6643–6658.



[24] Fang JS, Hao Q, Brady DJ, Guenther BD, and Hsu KY (2007) A pyroelectric infrared biometric system for real-time walker recognition by use of a maximum likelihood principal components estimation (MLPCE) method. Optics Express. 15(6): 3271–3284.

[25] Hosokawa T, Kudo M, Nonaka H, and Toyama J (2009) Soft authentication using an infrared ceiling sensor network. Pattern Analysis and Applications. 12(3): 237–249.

[26] Sartain RB (2008) Profiling sensor for ISR applications. SPIE Proceedings. 6963: 69630Q.

[27] Russomanno DJ, Chari S, Jacobs EL, and Halford C (2010) Near-IR sparse detector sensor for intelligent electronic fence applications. IEEE Sensors Journal. 10(6): 1106–1107.

[28] Russomanno DJ, Chari S, and Halford C (2008) Sparse detector imaging sensor with two-class silhouette classification. Sensors. 8(12): 7996–8015.

[29] Jacobs EL, Chari S, Halford C, and McClellan H (2009) Pyroelectric sensors and classification algorithms for border/perimeter security. SPIE Proceedings. 7481: 7481P.

[30] White III WE, Brown JB, Chari S, and Jacobs EL (2010) Real-time assessment of a linear pyroelectric sensor array for object classification. SPIE Proceedings. 7834: 783403.

[31] Brady DJ, Pitsianis NP, and Sun X (2004) Reference structure tomography. Journal of the Optical Society of America A. 21(7): 1140–1147.

[32] http://pirsensor.bloombiz.com, 2011.

[33] Lord S, Chastin SFM, McInnes L, Little L, Briggs P, and Rochester L (2011) Exploring patterns of daily physical and sedentary behaviour in community-dwelling older adults. Age and Ageing. 40(2): 205–210.

[34] Seguin R, LaMonte M, Tinker L, Liu J, Woods N, Michael YL, Bushnell C, and LaCroix AZ (2012) Sedentary behavior and physical function decline in older women: findings from the Women's Health Initiative. Journal of Aging Research. Article ID 271589.

[35] Donoho DL (2006) Compressed sensing. IEEE Transactions on Information Theory. 52(4): 1289–1306.

[36] Candes EJ, Wakin MB (2008) An introduction to compressive sampling. IEEE Signal Processing Magazine. 25(2): 21–30.

[37] Tropp JA, Gilbert AC (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory. 53(12): 4655–4666.

[38] Grant M, Boyd S (2011) CVX: Matlab software for disciplined convex programming, version 1.21. Available: http://cvxr.com/cvx.
