2013 International Conference on Human Computer Interactions (ICHCI), Chennai, India, August 23-24, 2013

A Study On Leg Posture Recognition From Indian Classical Dance Using Kinect Sensor

Sriparna Saha1, Shreya Ghosh2, Amit Konar1
1 Electronics and Telecommunication Engineering Dept.
2 School of Bioscience and Engineering
Jadavpur University, India
{sahasriparna, shreyaghosh215}@gmail.com, [email protected]

Ramadoss Janarthanan
Computer Science & Engineering Dept.
TJS Engineering College, India
[email protected]

Abstract—This paper proposes a simple yet novel technique to recognize leg postures in Indian classical dance using a Kinect sensor. The sensor tracks the skeleton of the subject with the help of a visible camera and an IR camera coupled to an IR laser and diffraction grating. Twenty five leg postures from 'Odissi', an Indian classical dance, have been used for the evaluation of our proposed algorithm. The methodology extracts eight features, which can be categorized under three levels of symmetry, viz. the vertical symmetry, the horizontal symmetry and the angular symmetry. Finally, a similarity function is devised which forms the basis of the leg posture recognition technique. The method supports better human computer interaction and also aims at spreading the dance form for e-learning purposes. The proposed algorithm can be applied to leg posture recognition for any dance form. It gives 86.75% accuracy with five subjects.

Index Terms—angular symmetry, human computer interaction, horizontal symmetry, leg posture, similarity function, vertical symmetry.

I. INTRODUCTION

Posture recognition is an upcoming research field which has opened up various avenues not only in detecting human physical activities but also in the health care sector. However, it faces the challenge of producing error-free outputs despite the presence of motion artifacts and variations in the appearance of the subject [1]. Posture recognition aims to build intelligent systems which can interpret human locomotive activities [2].

In our paper, we present a novel method of leg posture recognition in Indian classical dance forms using the Kinect sensor. The Kinect is a Microsoft product [3] and was originally designed for playing computer games. With the passage of time, however, the device has come to be used for other constructive research purposes, for example the recognition of voice, facial gestures and hand gestures. In fact, we have successfully used it for the recognition of leg postures in the Indian classical dance 'Odissi'. The Kinect sensor consists of a set of visible and IR cameras which detect the skeleton of the subject using twenty body joint coordinates [4]. Since our experiment deals only with leg postures, only the joint coordinates of the lower portion of the human body have been considered. The leg postures from Odissi dance have been used here as inputs. Though only Odissi leg postures have been processed in our experiment, the technique can be applied to leg posture recognition for any dance form.

‘Odissi’ is an Indian classical dance form originating from the state of Orissa. A combination of body movements, feet movements and expressions, accompanied by music, constitutes the whole dance. Among these, the leg postures form a crucial segment of the dance. There are a total of twenty five basic leg postures in ‘Odissi’ dance, all of which are recognized in this paper. Some examples are ‘Kumbha’, ‘Eka Pad’, ‘Dhanu’, etc.

The proposed technique involves a three-stage system in which a total of eight features are extracted. These features fall into three broad categories of symmetry, namely the vertical symmetry, the horizontal symmetry and the angular symmetry. Since our aim is to recognize leg postures, the body joint coordinates used here belong to the lower portion of the body: the hip centre together with the hip, knee, ankle and foot joint coordinates of both legs are used in designing the algorithm described in the proposed work. While the vertical symmetry calculates differences between Euclidean distances along the Y-axis, the horizontal symmetry measures differences between distances along the X-axis. The angular symmetry normalizes five angles located in the lower portion of the body. Finally, a similarity function is formed using the eight features for each of the twenty five leg postures; this function is the core of the recognition technique.

The utility of the proposed methodology lies in the fact that it allows human computer interaction. Since human beings change their postures and locomotive activities according to the demands of the external environment, a proper human computer interface should be able to accommodate these movements as needed [5]. The leg postures are detected by the Kinect sensor via the 3D [6] skeletal information, from which a total of eight features are extracted.

The introduction of computers in learning (electronic learning) has led to a major paradigm shift. It has revolutionized learning by incorporating flexibility and affordability. Automatic leg posture recognition would enable the e-learning of Odissi dance, thereby spreading the dance form all over the world. Human computer interaction lets the leg postures be learned as and when required, by anybody dwelling in any part of the world.

The overall recognition rate of the proposed algorithm is 86.75%, where twenty five leg posture datasets are taken from five individual subjects. 80% of the datasets are used for training, while the rest are used for testing. The average computation time is 2.0013 sec for each leg posture on an Intel Pentium Dual-Core processor with non-optimized Matlab R2011b.

The paper is organized as follows: Section II presents a literature review, Section III details the three-stage procedure, Section IV explains the performance analysis, and Section V concludes.

II. LITERATURE REVIEW

The sensing device used here is the Kinect [3]. It has the appearance of a black horizontal bar, which has an IR camera, a visible RGB camera and an IR projector incorporated into it, as shown in Fig. 1 [7]. The IR camera tracks the skeleton [4] using twenty body joint coordinates [8]. The Kinect [9] contains an IR laser along with a diffraction grating. The diffraction grating produces patterns that are recognized by the IR camera, which in turn produces depth information [10] of the subject. Therefore, our proposed scheme eliminates the chance of errors being introduced by variations in the appearance of the subject or the color of clothes. Moreover, the Kinect can also detect facial expressions and voice gestures [11]. The advantages of the Kinect include its low cost [12] and its ability to operate throughout the twenty four hours of the day [1]. Besides these, the Kinect also eliminates the problems that arise in images of objects due to shadows [13] and the presence of multiple sources of illumination [14].

Fig. 1. Kinect Sensor

Posture recognition [15] has many applications in the human computer interaction domain. In our proposed work, we concentrate on leg posture recognition [5]. The authors in [16] proposed a method to identify upper-leg posture for sleep apnea monitoring under occlusion. Leg posture recognition is also essential for vehicular control systems [17]; there, a vehicle system driven by the human foot is considered. Tanaka et al. studied the impedance properties of the foot while it is held in a definite posture, and created a database containing the mechanical properties of the foot. Though this experiment successfully carried out its purpose, it did not evaluate operational feeling and performance. In [18], the authors aimed to create a human computer interaction system that detects postures of the human body while under cover. Since the use of multiple videos is not always convenient, or even possible, the paper presented a monocular video approach to the problem. Wang et al. proposed a model that incorporated feature space and model parameters integrated with a novel search framework; they also used a novel head model in combination with the previous model to obtain better accuracy and authenticity. In [19], the authors devise a technique based on the change in postures and overall performance as the tempo of the music varies; a humanoid robot HRP-2 was used so that the proposed method could be validated through simulated experiments. Xu et al. proposed a scheme in which body postures are modeled based on the Euler angles of the torso, arms and legs [20]; the algorithm was validated using a classification engine in Matlab and could detect changes in human postures with an accuracy of 97%. In [21], the authors proposed a method for detecting movements made in bed; they used load cells connected to the four corners of the bed and a Gaussian Mixture Model in the time domain to design the required classification system. The method was applied to laboratory data, and about 84.6% of the results were found to be accurate.

III. 3-STAGE SYSTEM

The procedure consists of the Kinect sensor tracking the skeleton of the subject and then processing the information obtained from the 3D representation. The extracted features are categorized into three main stages of symmetry, namely the vertical symmetry (along the Y-axis), the horizontal symmetry (along the X-axis) and the angular symmetry. All calculated distances are in meters. Fig. 2 shows the twenty body joints obtained using the Kinect sensor for the corresponding RGB image. In our proposed work, we use nine body joints from the lower body; these are highlighted in the skeleton view by magenta cubes. The leg posture shown in this figure is ‘Kunchita’.

Fig. 2. RGB image and skeleton of dancer portraying ‘Kunchita’ leg posture
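To make the joint selection concrete, the following minimal Matlab sketch (Matlab being the environment the authors report, though the paper itself contains no code) shows how the nine lower-body joints might be pulled out of one tracked skeleton frame. The 20x3 layout and the row indices follow the Microsoft Kinect SDK joint ordering as we understand it and should be treated as assumptions.

% Minimal sketch: selecting the nine lower-body joints from one Kinect
% skeleton frame stored as a 20x3 matrix of (X,Y,Z) coordinates in meters.
% The row indices are assumed from the Kinect SDK joint ordering.
skel = rand(20, 3);                 % placeholder frame; replace with real tracked data
HC = skel(1,  :);                   % Hip Centre
HL = skel(13, :); KL = skel(14, :); % Hip Left, Knee Left
AL = skel(15, :); FL = skel(16, :); % Ankle Left, Foot Left
HR = skel(17, :); KR = skel(18, :); % Hip Right, Knee Right
AR = skel(19, :); FR = skel(20, :); % Ankle Right, Foot Right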

Page 3: [IEEE 2013 International Conference on Human Computer Interactions (ICHCI) - Chennai, India (2013.8.23-2013.8.24)] 2013 International Conference on Human Computer Interactions (ICHCI)

A. Vertical Symmetry

The vertical symmetry is determined by calculating three features. The Euclidean distance (D1) is calculated between the Hip Centre (HC) joint coordinates and the Knee Left (KL) joint coordinates. A similar method is employed to calculate the distance (D2) between the Hip Centre and the Knee Right (KR) joint coordinates. The first feature (F1) is the absolute difference between these two distances. The second feature (F2) comprises the absolute difference between the Euclidean distances (D3 & D4) of the Hip Centre from the Ankle Left (AL) and the Ankle Right (AR). Similarly, the distances (D5 & D6) between the Hip Centre and the Foot Left (FL) and the Foot Right (FR) are considered, and the absolute difference between them is the third feature (F3). Fig. 3 presents ‘Swastika Paada’ with its skeleton image. Table I shows the experimental values obtained for the twenty five leg postures of subject 1.

D1 = Euclidean[HC - KL]
   = sqrt{ (HCi - KLi)^2 + (HCj - KLj)^2 + (HCk - KLk)^2 }   (1)

where the Euclidean function gives the Euclidean distance value for the two parameters passed, sqrt denotes the square root, and i, j and k stand for the components along the X, Y and Z axes.

D2 = Euclidean[HC - KR]   (2)

F1 = abs(D1 - D2)   (3)

where abs calculates the absolute value.

D3 = Euclidean[HC - AL]   (4)
D4 = Euclidean[HC - AR]   (5)
F2 = abs(D3 - D4)   (7)
D5 = Euclidean[HC - FL]   (8)
D6 = Euclidean[HC - FR]   (9)
F3 = abs(D5 - D6)   (10)
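The vertical-symmetry stage reduces to a few lines of code; the following is a minimal Matlab sketch of equations (1)-(10), assuming the joint vectors HC, KL, KR, AL, AR, FL and FR extracted in the earlier sketch, with norm supplying the Euclidean distance of (1).

% Vertical-symmetry features F1-F3, equations (1)-(10).
D1 = norm(HC - KL);   D2 = norm(HC - KR);   % hip centre to knees
D3 = norm(HC - AL);   D4 = norm(HC - AR);   % hip centre to ankles
D5 = norm(HC - FL);   D6 = norm(HC - FR);   % hip centre to feet
F1 = abs(D1 - D2);                          % (3)
F2 = abs(D3 - D4);                          % (7)
F3 = abs(D5 - D6);                          % (10)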

Fig. 3. ‘Swastika Paada’ leg posture

TABLE I. SIX DISTANCE VALUES FOR SUBJECT 1

Posture Name           D1      D2      D3      D4      D5       D6
Adi Pad                0.5086  0.4895  0.8678  0.8405  0.9377   0.8954
Sama Pad               0.5355  0.5295  0.8697  0.8663  0.8983   0.9095
Viparita Mukha         0.4891  0.5362  0.8369  0.8351  0.8691   0.9072
Kumbha                 0.4331  0.4540  0.7630  0.7716  0.8156   0.8130
Dhanu                  0.4121  0.4370  0.7035  0.7378  0.7486   0.7940
Prushta Dhanu          0.4363  0.4525  0.6932  0.6767  0.7387   0.7440
Maha Pad               0.4222  0.4401  0.7410  0.5392  0.7788   0.4828
Eka Pad                0.4707  0.4548  0.7823  0.7339  0.8238   0.2421
Meenapuccha Pad        0.4234  0.4321  0.7560  0.4592  0.7998   0.3928
Lolita Pad             0.4722  0.4601  0.7470  0.5395  0.7558   0.4734
Uttolita Mukha         0.4496  0.4330  0.6167  0.5499  0.7564   0.5607
Ullolita Pad           0.4571  0.4500  0.7943  0.6240  0.8295   0.6666
Nupura                 0.4472  0.4780  0.8142  0.6548  0.7995   0.7689
Soochi                 0.4581  0.4733  0.8454  0.7625  0.8788   0.8339
Kunchita               0.4382  0.4563  0.8557  0.7325  0.8689   0.8579
Anukunchita            0.4720  0.4712  0.8160  0.8243  0.8802   0.8487
Bilagna Parsni Paada   0.4314  0.4618  0.7069  0.7675  0.7464   0.8568
Trivanga Paada         0.4458  0.4128  0.7438  0.7675  0.7961   0.8260
Swastika Paada         0.4465  0.4705  0.7238  0.7151  0.7932   0.7612
Mandala Paada          0.4700  0.5739  0.7303  0.7828  0.7729   0.7648
Chouka Paada           0.4518  0.5482  0.6587  0.6898  0.6868   0.7355
Bandhani Paada         0.4321  0.4560  0.7319  0.5293  0.7652   0.4213
Utparshnee Paada       0.4894  0.5367  0.8042  0.8579  0.8387   0.8780
Ardha Swastika Paada   0.4422  0.4410  0.7762  0.7539  0.73105  0.8515
Rekha Paada            0.5785  0.5399  0.8367  0.8363  0.8983   0.9095

B. Horizontal Symmetry

The horizontal symmetry gives an estimate of the symmetry of the posture along the X-axis. It produces the fourth feature (F4) in our experiment. This feature is derived from three distances. The first distance (D7) is the Euclidean distance between the Knee Left (KL) and Knee Right (KR) joint coordinates, the second (D8) is between the two ankles (AL & AR), and the third (D9) is between the Foot Left (FL) and Foot Right (FR) joint coordinates. Fig. 4 shows the skeleton view for the ‘Kumbha’ leg posture. The calculated values of the three distances D7 to D9 for subject 2 are given in Table II.

D7 = Euclidean[KL - KR]   (11)
D8 = Euclidean[AL - AR]   (12)
flag1 = D7 - D8   (13)
flag2 = D7 - D9   (14)
F4 = abs(flag1 - flag2)   (15)
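As with the vertical stage, the horizontal-symmetry computation of (11)-(15) is direct; a minimal Matlab sketch under the same joint-vector assumptions as above:

% Horizontal-symmetry feature F4, equations (11)-(15).
D7 = norm(KL - KR);        % knee-to-knee distance (11)
D8 = norm(AL - AR);        % ankle-to-ankle distance (12)
D9 = norm(FL - FR);        % foot-to-foot distance
flag1 = D7 - D8;           % (13)
flag2 = D7 - D9;           % (14)
F4 = abs(flag1 - flag2);   % (15)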

Fig. 4. ‘Kumbha’ leg posture

TABLE II. THREE DISTANCE VALUES FOR SUBJECT 2

Posture Name           D7      D8       D9
Adi Pad                0.1408  0.1623   0.1560
Sama Pad               0.1166  0.1207   0.1613
Viparita Mukha         0.1531  0.1661   0.0962
Kumbha                 0.3508  0.1913   0.1910
Dhanu                  0.2472  0.1506   0.2151
Prushta Dhanu          0.2853  0.0827   0.0235
Maha Pad               0.3397  0.3666   0.3936
Eka Pad                0.0753  0.7370   0.8362
Meenapuccha Pad        0.1767  0.7766   0.7986
Lolita Pad             0.4355  0.43600  0.3721
Uttolita Mukha         0.4251  0.2740   0.2758
Ullolita Pad           0.4537  0.3335   0.3303
Nupura                 0.4723  0.3435   0.3509
Soochi                 0.3500  0.1196   0.1137
Kunchita               0.3460  0.1592   0.1439
Anukunchita            0.1534  0.1110   0.0897
Bilagna Parsni Paada   0.3159  0.3031   0.3333
Trivanga Paada         0.3299  0.3252   0.3543
Swastika Paada         0.2305  0.0776   0.1449
Mandala Paada          0.5820  0.5088   0.5192
Chouka Paada           0.6804  0.4962   0.4920
Bandhani Paada         0.3877  0.3826   0.3294
Utparshnee Paada       0.1113  0.3002   0.3003
Ardha Swastika Paada   0.3262  0.3219   0.2541
Rekha Paada            0.1326  0.1297   0.1997

C. Angular Symmetry

To determine the angular symmetry, five angles of the lower part of the body are calculated. The first angle (A1) is formed at the Hip Centre, between the two vectors formed by Knee Right (KR), Hip Right (HR) and Knee Left (KL), Hip Left (HL). The second angle (A2) is between the vectors formed by Hip Left, Knee Left and Ankle Left (AL), Knee Left. The third angle (A3) is formed by Knee Left, Ankle Left and Foot Left (FL), Ankle Left. Finally, the fourth (A4) and fifth (A5) angles are produced for the right leg analogously to the second and third angles respectively. Normalized values of these angles are obtained by dividing each angle by the first angle; these form the last four features (F5-F8). The division of A1 by itself is always one and is therefore neglected. The ‘Bilagna Parsni Paada’ leg posture is shown in Fig. 5, with the angles required for the proposed work marked in the skeletal view. The five angle values experimentally obtained for subject 2 are given in Table III.

A1 = angle[(KR, HR), (KL, HL)]   (16)
A2 = angle[(HL, KL), (AL, KL)]   (17)
A3 = angle[(KL, AL), (FL, AL)]   (18)
A4 = angle[(HR, KR), (AR, KR)]   (19)
A5 = angle[(KR, AR), (FR, AR)]   (20)

F5 = A2 / A1   (21)
F6 = A3 / A1   (22)
F7 = A4 / A1   (23)
F8 = A5 / A1   (24)
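A minimal Matlab sketch of (16)-(24) follows, in the same setting as the earlier sketches. The helper jointAngle is hypothetical (the paper does not name one); it is assumed here to return the angle, in degrees, between the vectors (p1 - p2) and (q1 - q2).

% Angular-symmetry features F5-F8, equations (16)-(24).
% jointAngle: assumed helper giving the angle between (p1-p2) and (q1-q2).
jointAngle = @(p1, p2, q1, q2) ...
    acosd(dot(p1 - p2, q1 - q2) / (norm(p1 - p2) * norm(q1 - q2)));
A1 = jointAngle(KR, HR, KL, HL);   % angle at the hip centre (16)
A2 = jointAngle(HL, KL, AL, KL);   % left knee angle (17)
A3 = jointAngle(KL, AL, FL, AL);   % left ankle angle (18)
A4 = jointAngle(HR, KR, AR, KR);   % right knee angle (19)
A5 = jointAngle(KR, AR, FR, AR);   % right ankle angle (20)
F5 = A2/A1;  F6 = A3/A1;  F7 = A4/A1;  F8 = A5/A1;   % normalization (21)-(24)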

Fig. 5. ‘Bilagna Parsni Paada’ leg posture

TABLE III. FIVE ANGLE VALUES FOR SUBJECT 2

Posture Name           A1       A2         A3        A4        A5
Adi Pad                3.5786   178.7479   152.3401  173.7086  127.1755
Sama Pad               2.5464   169.0763   94.3262   174.8428  124.3565
Viparita Mukha         4.7419   169.1854   116.2262  168.3809  117.8991
Kumbha                 36.3405  143.5378   112.4581  146.4116  112.1391
Dhanu                  18.8559  159.4638   111.6643  153.0282  156.2545
Prushta Dhanu          24.6978  149.5380   126.9804  134.3714  138.2977
Maha Pad               32.4379  141.2910   96.3162   95.6616   110.7594
Eka Pad                12.7873  165.1250   107.2621  148.3532  21.4991
Meenapuccha Pad        12.4309  130.1290   156.8192  190.9106  134.1504
Lolita Pad             33.4670  143.356    97.0163   95.4591   111.9394
Uttolita Mukha         47.8155  154.7074   105.8390  97.0669   71.1063
Ullolita Pad           52.0614  157.9590   105.8652  106.5974  111.7277
Nupura                 53.1653  153.1191   104.6672  104.773   119.073
Soochi                 35.4870  163.7690   101.9110  124.2790  146.0870
Kunchita               36.4974  170.12650  106.0140  123.7734  143.1290
Anukunchita            5.2584   157.7461   152.4337  140.0923  84.3459
Bilagna Parsni Paada   25.8873  151.0527   101.4353  164.2878  158.4217
Trivanga Paada         23.0856  150.5379   106.1543  163.1789  147.4767
Swastika Paada         12.3057  152.7616   134.1672  159.1735  132.3118
Mandala Paada          61.1299  146.1298   118.1712  135.2752  117.2919
Chouka Paada           83.7301  127.7327   136.0654  129.5277  130.0592
Bandhani Paada         32.2790  143.6208   94.3625   94.4210   113.7572
Utparshnee Paada       9.7907   153.4253   97.5851   135.0827  77.9143
Ardha Swastika Paada   23.4356  150.2752   106.3943  163.3990  143.8267
Rekha Paada            2.6464   170.0773   95.0062   176.4248  125.7845

IV. PERFORMANCE ANALYSIS

A Similarity Function (SF) is created, consisting of the above mentioned eight features for each leg posture. This function is the key behind the recognition technique in our paper. The twenty five similarity functions form a Similarity Matrix (SM), as given in (25), and the result (R) is obtained using (26). Table IV compares two leg postures, namely ‘Dhanu’ and ‘Soochi’, by their RGB and skeleton images; the eight features obtained for each leg posture are also given.

      [ SF1  ]   [ F1^1   F2^1   ...  F8^1  ]
SM =  [ SF2  ] = [ F1^2   F2^2   ...  F8^2  ]   (25)
      [  ..  ]   [  ..     ..    ...   ..   ]
      [ SF25 ]   [ F1^25  F2^25  ...  F8^25 ]

R = argmin_{all primitives} [ Σ(i=1..25) { Σ(l=1..8) ( SF_unknown - SF_known ) } ]   (26)

where SF_unknown denotes the similarity function of the unknown posture and SF_known the stored similarity functions of the twenty five trained postures.

Algorithm for the Proposed Work

Step 0  Create an initial database of skeletons for twenty five leg postures
BEGIN
Step 1  Calculate the distances D1 to D6
Step 2  Determine F1 as the absolute value of (D1 - D2)
Step 3  Determine F2 as the absolute value of (D3 - D4)
Step 4  Determine F3 as the absolute value of (D5 - D6)
Step 5  Calculate the Euclidean distances D7 to D9
Step 6  Determine flag1 as (D7 - D8)
Step 7  Determine flag2 as (D7 - D9)
Step 8  Determine F4 as the absolute value of (flag1 - flag2)
Step 9  Calculate the angles A1 to A5
Step 10 Calculate the normalized angles by dividing A2 to A5 by A1 to obtain F5 to F8
Step 11 Form SF as [F1 F2 F3 F4 F5 F6 F7 F8]
Step 12 Repeat the process for the unknown posture
Step 13 Use (26) to obtain R
END
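As a concrete illustration of Steps 11-13, the following minimal Matlab sketch assembles the similarity function of an unknown posture and matches it against a pre-computed similarity matrix. The variable names SM, SFu and R and the absolute-difference distance are assumptions for illustration; equation (26) itself only specifies an arg-min over the stored postures.

% Recognition step, assuming F1..F8 of the unknown posture were computed
% as in the earlier sketches.
SM  = rand(25, 8);                 % placeholder: trained similarity matrix, one row per posture (25)
SFu = [F1 F2 F3 F4 F5 F6 F7 F8];   % similarity function of the unknown posture
% R indexes the stored posture with the smallest summed feature distance (26);
% an absolute-difference distance is assumed here.
[~, R] = min(sum(abs(SM - repmat(SFu, 25, 1)), 2));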

The recognition rate of the algorithm is 86.75% for five subjects, with a training-to-testing ratio of 4:1. The computation time is 2.0013 sec per leg posture on an Intel Pentium Dual-Core processor with non-optimized Matlab R2011b.

TABLE IV. COMPARISON OF TWO LEG POSTURES

Features    Dhanu    Soochi
RGB Image   (image)  (image)
Skeleton    (image)  (image)
F1          0.0249   0.0152
F2          0.0343   0.0829
F3          0.0454   0.0449
F4          0.2244   0.0005
F5          8.4570   4.46149
F6          5.9220   2.8718
F7          8.1157   3.5021
F8          8.2868   4.1166

V. CONCLUSION

The advance in communication technology has opened up new possibilities for learning. Because of its low cost and high accessibility, e-learning has proven to be a useful technique nowadays. With the passage of time, people have become more career oriented and hardly have time to travel to a dance academy and train themselves there. This paper therefore offers a way out for keen learners and dance enthusiasts who cannot afford real-time dance training. Besides, group training is also possible with this technique. Distance is not a limiting factor, because human computer interaction is possible anytime and anywhere in the world.

Another possible advantage of our methodology is that introverts, who are shy of learning in a group, can learn the postures alone without anybody’s interference or help. The proposed work gives a high accuracy rate of 86.75%.

However, our present work requires the subject to remain in a “frozen state” while the Kinect tracks the skeleton (here, 2 sec). Hence, the future scope of this work lies in the use of videos for recognition purposes.

ACKNOWLEDGEMENT

We would like to thank the University Grants Commission, India, University with Potential for Excellence Programme (Phase II) in Cognitive Science, Jadavpur University.

REFERENCES

[1] S. Monir, S. Rubya, and H. S. Ferdous, “Rotation and scale invariant posture recognition using Microsoft Kinect skeletal tracking feature,” in Intelligent Systems Design and Applications (ISDA), 2012 12th International Conference on, 2012, pp. 404–409.
[2] L. Wang, W. Hu, and T. Tan, “Recent developments in human motion analysis,” Pattern Recognition, vol. 36, no. 3, pp. 585–601, 2003.
[3] T. Leyvand, C. Meekhof, Y.-C. Wei, J. Sun, and B. Guo, “Kinect identity: Technology and experience,” Computer, vol. 44, no. 4, pp. 94–96, 2011.
[4] J. Tong, J. Zhou, L. Liu, Z. Pan, and H. Yan, “Scanning 3D full human bodies using Kinects,” Visualization and Computer Graphics, IEEE Transactions on, vol. 18, no. 4, pp. 643–650, 2012.
[5] T. B. Moeslund and E. Granum, “A survey of computer vision-based human motion capture,” Computer Vision and Image Understanding, vol. 81, no. 3, pp. 231–268, 2001.
[6] J. Clark, “Object digitization for everyone,” Computer, pp. 81–83, 2011.
[7] C. Herrera and J. Kannala, “Joint depth and color camera calibration with distortion correction,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 34, no. 10, pp. 2058–2064, 2012.
[8] J. Solaro, “The Kinect digital out-of-box experience,” Computer, pp. 97–99, 2011.
[9] F. Ryden, “Tech to the Future: Making a,” Potentials, IEEE, vol. 31, no. 3, pp. 34–36, 2012.
[10] K. Yun, J. Honorio, D. Chattopadhyay, T. L. Berg, and D. Samaras, “Two-person interaction detection using body-pose features and multiple instance learning,” in Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, 2012, pp. 28–35.
[11] Z. Zhang, “Microsoft Kinect sensor and its effect,” Multimedia, IEEE, vol. 19, no. 2, pp. 4–10, 2012.
[12] M. Parajuli, D. Tran, W. Ma, and D. Sharma, “Senior health monitoring using Kinect,” in Communications and Electronics (ICCE), 2012 Fourth International Conference on, 2012, pp. 309–312.
[13] R. Tanabe, M. Cao, T. Murao, and H. Hashimoto, “Vision based object recognition of mobile robot with Kinect 3D sensor in indoor environment,” in SICE Annual Conference (SICE), 2012 Proceedings of, 2012, pp. 2203–2206.
[14] T.-L. Le, M.-Q. Nguyen, and T.-T.-M. Nguyen, “Human posture recognition using human skeleton provided by Kinect,” in Computing, Management and Telecommunications (ComManTel), 2013 International Conference on, 2013, pp. 340–345.
[15] Y. Nihei, I. Samejima, N. Hatao, H. Takemura, and S. Kagami, “Map integration of human trajectory with sitting/standing position using LRF and Kinect sensor,” in Robotics and Biomimetics (ROBIO), 2012 IEEE International Conference on, 2012, pp. 1250–1255.
[16] C.-W. Wang and A. Hunter, “A simple sequential pose recognition model for sleep apnea,” in BioInformatics and BioEngineering, 2008. BIBE 2008. 8th IEEE International Conference on, 2008, pp. 1–6.
[17] Y. Tanaka, T. Onishi, T. Tsuji, N. Yamada, Y. Takeda, and I. Masamori, “Analysis and modeling of human impedance properties for designing a human-machine control system,” in Robotics and Automation, 2007 IEEE International Conference on, 2007, pp. 3627–3632.
[18] Y. Liu, Z. Zhang, A. Li, and M. Wang, “View independent human posture identification using Kinect,” in Biomedical Engineering and Informatics (BMEI), 2012 5th International Conference on, 2012, pp. 1590–1593.
[19] T. Okamoto, T. Shiratori, S. Kudoh, and K. Ikeuchi, “Temporal scaling of leg motion for music feedback system of a dancing humanoid robot,” in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, 2010, pp. 2256–2263.
[20] M. Xu, A. Goldfain, A. R. Chowdhury, and J. DelloStritto, “Towards accelerometry based static posture identification,” in Consumer Communications and Networking Conference (CCNC), 2011 IEEE, 2011, pp. 29–33.
[21] A. M. Adami, M. Pavel, T. L. Hayes, A. G. Adami, and C. Singer, “A method for classification of movements in bed,” in Engineering in Medicine and Biology Society, EMBC, 2011 Annual International Conference of the IEEE, 2011, pp. 7881–7884.