
796 IEEE INTERNET OF THINGS JOURNAL, VOL. 3, NO. 5, OCTOBER 2016

PAWS: Passive Human Activity Recognition Based on WiFi Ambient Signals

Yu Gu, Member, IEEE, Fuji Ren, Senior Member, IEEE, and Jie Li, Senior Member, IEEE

Abstract—Indoor human activity recognition remains a hot topic and has received tremendous research effort during the last few decades. However, previous solutions either rely on special hardware or demand the cooperation of subjects. Therefore, the scalability issue remains a great challenge. To this end, we present an online activity recognition system, which explores WiFi ambient signals for received signal strength indicator (RSSI) fingerprints of different activities. It can be integrated into any existing WLAN network without additional hardware support. Also, it does not need the subjects to be cooperative during the recognition process. More specifically, we first conduct an empirical study to gain in-depth understanding of WiFi characteristics, e.g., the impact of activities on the WiFi RSSI. Then, we present an online activity recognition architecture that is flexible and can adapt to different settings/conditions/scenarios. Lastly, a prototype system is built and evaluated via extensive real-world experiments. A novel fusion algorithm is specifically designed based on the classification tree to better classify activities with similar signatures. Experimental results show that the fusion algorithm outperforms three other well-known classifiers [i.e., NaiveBayes, Bagging, and k-nearest neighbor (k-NN)] in terms of accuracy and complexity. Important insights and hands-on experience have been obtained to guide the system implementation and outline future research directions.

Index Terms—Activity recognition, ambient signals, fusion algorithm, WiFi.

I. INTRODUCTION

WITH THE rapid development of wireless techniques, today people are surrounded by entities with the capability of sensing and communication, such as laptops, smart phones, and tablets, in their daily life, leading to the approach of the Internet of Things (IoT) [1]–[3].

On one hand, more and more sensors are embedded in these entities to provide enriched environmental information for a better life experience [4]–[6]. On the other hand, the ever-increasing sensors pose severe challenges to device design, especially for the volume and energy issues [7].

Manuscript received October 12, 2015; revised November 23, 2015; accepted December 16, 2015. Date of publication December 23, 2015; date of current version September 08, 2016. This work was supported in part by the National Natural Science Foundation of China under Grant 61300034, Grant 61432004, and Grant 61472117, and in part by the JSPS KAKENHI Grant 15H01712, a start-up fund for Huangshan Mountain Scholars (Outstanding Young Talents Program, No. 407-037070) at Hefei University of Technology. (Corresponding author: Yu Gu.)

Y. Gu is with the Anhui Province Key Laboratory of Affective Computing and Advanced Intelligent Machine, School of Computer and Information, Hefei University of Technology, Hefei 230009, China (e-mail: [email protected]).

F. Ren is with the Department of Information Science and Intelligent Systems, University of Tokushima, Tokushima 770-0855, Japan (e-mail: [email protected]).

J. Li is with the Department of Computer Science, University of Tsukuba, Tsukuba Science City 305-8573, Japan (e-mail: [email protected]).

Digital Object Identifier 10.1109/JIOT.2015.2511805

Therefore, instead of using different kinds of sensors directly, there is a recent trend of exploring wireless ambient signals as an alternative information source. The RF transceiver has become an essential part of any IoT device, due to its ability to provide an efficient two-way bridge between the physical and cyber worlds [8], [9].

Previous research in this area mainly focuses on using wireless signals for location-related services [10]–[12]. For instance, Roberts and Pahlavan summarized approaches using the received signal strength indicator (RSSI) as signatures for indoor or outdoor WiFi localization and presented an empirical database of RSSI signatures measured on the Worcester Polytechnic Institute campus [13].

Most current research on the activity recognition issue relies on special hardware, such as vision-based [14] and accelerometer-sensor-based systems [15], which usually suffer from vital shortcomings such as stability (e.g., depending on the light) and scalability issues (e.g., costly hardware, line-of-sight restriction, etc.). Therefore, researchers are striving for new paradigms to revolutionize the traditional solutions.

There is a recent trend to bridge pervasive wireless signals with the motion recognition issue [16]. More specifically, a wireless signal tends to be stable over time in an indoor environment, e.g., a closed room, since there is little interference. However, human activities may disturb the signal propagation and thus cause signal fluctuations. Then, by exploring such virtual footprints, we are able to recover the corresponding physical activities.

WiSee, presented by Pu et al., is one of the first such efforts [17]. Their basic methodology is to leverage the Doppler shift, i.e., the frequency change of a wave as its source moves relative to the observer. Adib and Katabi designed Wi-Vi to reveal information through walls, such as the number of people or simple gestures, by decoding the angle change of wireless signals [18].

Both WiSee and Wi-Vi are pioneering works. However, they are built on specialized platforms, i.e., the USRP-N210 SDR (software-defined radio) system. Therefore, the availability issue remains. To this end, tremendous efforts have been devoted to commodity WiFi devices. These solutions can be classified into two categories: 1) channel state information (CSI) based [19], [20] and 2) RSSI based [21]. CSI is extracted directly from the PHY layer and is able to reflect the channel response in 802.11 a/g/n [22].

CSI is an efficient information source due to its unique ability of eliminating the multipath effects. However, currently such information can be obtained only on specific hardware such as the WiFi 5300 network interface controller (NIC). Therefore, it is very difficult to utilize in smart phones and tablets.

2327-4662 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


Moreover, the complexity remains an issue since modifications are essential at the access points' (APs') or mobile entities' (MEs') ends. To this end, RSSI seems to be a better tradeoff between efficiency and cost.

Sigg et al. designed an RSSI-based activity recognition system for mobile phones, TelepathicPhone [21]. In general, it modifies the firmware to collect WiFi RSSI at mobile phones to monitor single-handed gestures. The average recognition accuracy is 56% based on k-nearest neighbor (k-NN) with four features. Their work is enlightening. However, the low accuracy is a major concern in real-world scenarios. Compared to TelepathicPhone, our work has a different focus. Instead of hand gestures, we concentrate on human activities. A novel fusion algorithm is specifically designed to achieve better performance. Moreover, our system needs no modifications at either the APs' or MEs' ends, making it a good choice in real-world applications.

To evaluate the proposed solution, we prototype our system to recognize six different activities, namely walking (0.5 m/s), sitting, standing, sleeping, fallen (falling on the ground and struggling to get up), and running (2 m/s). Since some activities, like sitting and standing, have similar fingerprints and are difficult to distinguish, we propose a novel fusion algorithm based on the classification tree to better classify activities that could be easily confused with each other. Extensive experiments have been conducted, and the results confirm that the proposed algorithm significantly outperforms the commonly used k-NN algorithm as well as two other classifiers (Bagging and NaiveBayes) in terms of recognition accuracy and complexity. Besides, important insights and hands-on experience have been obtained to guide the system implementation and outline future directions.

In summary, our main contributions are the following.

1) We present empirical results of the impact of various activities on the signals and gain in-depth understanding of the characteristics of a WLAN system.

2) We present an online fingerprint-based activity recognition architecture, which is flexible and adaptive.

3) We prototype and evaluate our system via extensive real-world experiments. Results show the superiority of the proposed fusion algorithm over three other state-of-the-art classifiers (NaiveBayes, k-NN, and Bagging) in terms of recognition accuracy and complexity.

This paper is organized as follows. In Section II, we present some preliminary experiments on the impact of activities on RSSI. The online fingerprint-based activity recognition system is described in Section III. Section IV presents a prototype system and its performance evaluation. We introduce related work in detail in Section V. Finally, Section VI concludes this paper and outlines future research directions.

II. EMPIRICAL STUDY OF WiFi CHARACTERISTICS

In this section, we conduct experiments to study the impact of different activities on the signal and present some preliminary recognition results using k-NN. It is shown that the k-NN algorithm with a single feature works well with some activities, e.g., achieving 100% recognition accuracy for walking. However, due to the similar fingerprints between certain activities, the overall accuracy is low, i.e., 51.79%. The results inspire us to push the research further by exploring new features and classification algorithms.

Fig. 1. Snapshot of the experimental site.

TABLE I. EXPERIMENTAL PARAMETERS

A. Experimental Setup

Experiments have been conducted in a conference room with furniture (room 205) in our institute, whose size is 7.2 m × 8 m, as shown in Fig. 1. The WLAN consists of one wireless AP (TP-Link TL-WR845N) under 802.11b (2.4 GHz) and one laptop (Asus N80VM, Ubuntu 12.10).

During the experiments, we ensure that no static or dynamic obstacles are present between the two devices, whose antennas are aligned toward each other to ensure a line-of-sight transmission.

Currently, we consider six different activities, namely walking, sitting, standing, sleeping, fallen, and running. Empty is used as the baseline. Fallen means that a subject falls on the ground and struggles to get up.

To simulate the real-world scenario, the AP is located at the top-left corner while the laptop is deployed at the bottom-right corner of the conference table. At this stage, only one subject is involved in the experiments. Sitting, standing, and sleeping are static activities performed between the AP and the laptop. Walking and running are dynamic activities performed repeatedly along the top-left corner of the conference table.

The experimental parameters are presented in Table I. As shown in Table I, the distance between the AP and the laptop is set to 5.5 m. We take 500 samples of RSSI per measurement, and the sampling rate is set to 10 samples per second.

B. Impact of Activities on WiFi RSSI

To enhance the signal interference of different activities, spatial restrictions have been employed, and all actions are conducted at locations between the AP and the laptop. To ensure a stable environment during the experiments, the door is closed to exclude potential external interference, e.g., passing-by subjects.

Fig. 2. WiFi RSSI versus activities: (a) empty, (b) sleeping, (c) sitting, (d) standing, (e) walking, (f) fallen, (g) running, and (h) comparison among activities.

TABLE II. CONFUSION MATRIX OF THE k-NN ALGORITHM

Fig. 2 presents the raw RSSI data of different activities. The red line inside each subfigure represents the average RSSI of a certain activity. The most notable and straightforward observation is that the RSSI value varies in different ways for different activities, implying that it may be possible to classify them using RSSI only.

To highlight this possibility, we use Fig. 2(h) to record such patterns of different activities. For instance, fallen tends to have a higher RSSI value than sitting does, and sleeping seems to have stable RSSI values compared to running.

In summary, it is clear that activities do affect the signal. Also, different activities have different impacts on the WiFi RSSI, i.e., the fluctuation of one activity has its own pattern. This inspires us to push the research further and think about the following questions.

1) Is the pattern (fingerprint) unique for each activity?

2) How can we extract it for accurate recognition?

In the next part, we present some preliminary results of using data mining techniques for the recognition.

C. Activity Recognition

We use k-NN with a single feature for the activity recognition and record the confusion matrix in Table II.

We take empty and sitting as examples. Table II shows that k-NN can always distinguish the baseline empty from other activities, i.e., 100% recognition accuracy, while it performs much worse on the activity sitting, e.g., it misinterprets sitting as sleeping with a probability of 29.2%. The average accuracy is 51.79%.

On one hand, activities like sitting and standing have quite similar RSSI footprints and thus are difficult to distinguish. On the other hand, k-NN employs one feature only, leading to a lower recognition accuracy. To this end, we will introduce a new feature to capture the signal fluctuations, and a novel fusion algorithm, combining k-NN with the classification tree, to improve the accuracy.

The above preliminary results indicate that by exploring the RSSI fingerprints, even with a single feature and a simple classifier, some activities can be recognized with high ratios, e.g., 100% for walking. Therefore, in the next section, we will present our online fingerprint-based activity recognition system, which has a flexible architecture and can adapt to different conditions accordingly.
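To make this preliminary pipeline concrete, the following is a minimal sketch of a single-feature k-NN baseline of the kind described above: RSSI traces are cut into fixed-size windows, the per-window standard deviation is the only feature, and a stock k-NN classifier is trained on it. The window size, the choice of k, the scikit-learn classifier, and the synthetic traces are all illustrative assumptions rather than the exact setup of this paper.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def std_feature(rssi, size=20):
    # One feature per non-overlapping window: the standard deviation of RSSI.
    n = len(rssi) // size
    return np.asarray(rssi[:n * size]).reshape(n, size).std(axis=1).reshape(-1, 1)

# Hypothetical traces standing in for the measured activities (dBm).
rng = np.random.default_rng(0)
traces = {
    "empty":   -45 + 0.5 * rng.standard_normal(2000),
    "sitting": -46 + 1.0 * rng.standard_normal(2000),
    "walking": -47 + 3.0 * rng.standard_normal(2000),
}

X = np.vstack([std_feature(t) for t in traces.values()])
y = np.concatenate([[a] * (len(t) // 20) for a, t in traces.items()])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

With such well-separated synthetic variances the score is near perfect; on real traces, activities with similar variance (e.g., sitting and standing) collapse onto nearly the same feature value, which is exactly the failure mode discussed above.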

III. ONLINE FINGERPRINT-BASED ACTIVITY RECOGNITION ARCHITECTURE

A. Architecture Overview

Fig. 3 illustrates the system architecture, which consists of three layers. APs, constituting the top layer, are the signal sources. The locations of the APs are fixed. Smart devices, including PCs, laptops, and phones, form the middle layer. A platform-specific application (currently available in Android, Linux, and Windows versions) is developed and installed on these devices to collect WiFi data, including time, AP IDs, and the detailed RSSI values. The information collection is totally passive, and no connection is needed between devices and APs. The bottom layer is a Linux server, which is responsible for storing and analyzing training/testing data and recognizing activities. Recognition results are then transmitted back to the devices.

Fig. 3. System architecture.

Our system works as follows. In the training phase, while a subject is performing the activities, a device running the data collection application keeps collecting samples and sending them to the server via wireless (e.g., laptops or smart phones) or wired connections (e.g., PCs). The server receives the training data with manually labeled activities and launches the preprocessing module to process the raw data (explained later). The processed data are then stored in the SQLite database, ordered by sampling time, experimental site, and activity. After all training data have been received, the server launches the classification module to obtain the trained classifiers.

After the training phase, the system is online for use. Any device holding testing data can directly contact the server and ask for the recognition service. The received RSSI data are handled by the preprocessing module first. Then, the processed testing data are fed to the classification model for recognition. The results, i.e., activities with probabilities, are sent back to the device.

In the next parts, we present more details about the recognition system.

B. Design Details

1) Data Collection Application: The main purpose of this application is to collect training/testing data for further processing. Therefore, we first need to select the site where the measurement takes place. In the training phase, users1 can either choose a recorded site or create a new one.2 In the testing phase, users can either ignore this issue or choose a recorded site to improve the recognition accuracy (the accuracy is usually higher if matched training and testing data are used).

1Here in our prototype system, user is a general term for both system managers and service users.

2By a recorded site, we mean an experimental site that has already been used for data training and is thus ready for the recognition service.

Then, users start the sampling and remain still during the sampling period (user movements could result in mismatched training/testing data). After the sampling, users can choose to send back the data or start over again. The application can contact the server via wireless/wired connections and upload the training/testing data, including RSSI readings, sampling time, AP IDs, etc.

If the connection with the server is down at the moment, the data are stored locally until the connection is recovered.
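As a client-side illustration only (not the authors' application), the sketch below polls for nearby APs with the Linux iw tool and parses per-AP RSSI readings. The interface name, parsing regex, and loop parameters are assumptions, and a real deployment would need driver-level sampling, since a full scan can take seconds and cannot sustain the 10 samples/s rate of Table I.

import re
import subprocess
import time

# Matches either a "BSS <mac>" header or a "signal: <dBm>" line in `iw` scan output.
SIGNAL_RE = re.compile(r"^BSS ([0-9a-f:]{17})|signal: (-?\d+(?:\.\d+)?) dBm", re.M)

def scan_rssi(iface="wlan0"):
    # Return {BSSID: RSSI in dBm} from one scan (typically requires root).
    out = subprocess.run(["iw", "dev", iface, "scan"],
                         capture_output=True, text=True, check=True).stdout
    readings, bssid = {}, None
    for mac, sig in SIGNAL_RE.findall(out):
        if mac:
            bssid = mac
        elif bssid:
            readings[bssid] = float(sig)
    return readings

samples = []
for _ in range(500):                     # 500 samples per measurement (Table I)
    samples.append((time.time(), scan_rssi()))
    time.sleep(0.1)                      # nominal 10 samples/s target (Table I)
# `samples` would then be uploaded to the server, or cached locally on failure.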

2) Preprocessing Module: The aim of this module is to filter abnormal samples from both the training and testing data, so as to improve the recognition accuracy. The data filtering consists of the following three steps.

[Step 1]. Define a constant value α; samples whose RSSI < α are classified as obviously abnormal data and removed from the data set. α is an empirical value.

[Step 2]. Define M_{s,g} as the average value of the updated data set as follows:

M_{s,g} = \frac{\sum_{s=1}^{|S|} \sum_{g=1}^{|G|} R_{s,g}}{W}  (1)

where S is the set of activities, G is the set of data groups, R_{s,g} is the RSSI value of the gth group of activity s, and W is the size of the data set.

[Step 3]. The Gaussian filter is utilized here. Since R_{s,g} is a sequence of independent and identically distributed random variables with mean μ and variance σ² > 0, according to the Lindeberg–Levy theorem [23], R_{s,g} approximately follows a normal distribution, i.e., R_{s,g} ∼ N(μ_{s,g}, σ_{s,g}²).

Therefore, we should choose samples satisfying the following condition:

P(M_{s,g} − C_{s,g} ≤ R_{s,g} ≤ M_{s,g} + C_{s,g}) ≥ P_{TH}  (2)

f_{R_{s,g}} = \frac{1}{\sigma_{s,g}\sqrt{2\pi}} e^{-\frac{(R_{s,g}-\mu_{s,g})^2}{2\sigma_{s,g}^2}}  (3)

where P_{TH} is an empirical value (P_{TH} ∈ [0.6, 1]), f_{R_{s,g}} is the probability density function of the Gaussian distribution, μ_{s,g} = \frac{1}{W}\sum_{s=1}^{|S|}\sum_{g=1}^{|G|} R_{s,g}, and σ_{s,g} = \sqrt{\frac{1}{W}\sum_{s=1}^{|S|}\sum_{g=1}^{|G|}(R_{s,g} − \mu_{s,g})^2}.

Combining (2) and (3), we can obtain the deviation range C_{s,g} using the training sample set R_{s,g}. Then, we can obtain the average deviation value C_s of activity s as follows:

C_s = \frac{\sum_{s=1}^{|S|}\sum_{g=1}^{|G|} \min(C_{s,g})}{|G|}.  (4)

Then, the deviation value over all samples can be calculated as follows:

C = \frac{\sum_{s=1}^{|S|} C_s}{|S|}.  (5)


Fig. 4. μ and σ of RSSI for the indoor environment. (a) Feature μ. (b) Feature σ. (c) σ over different time periods. (d) Variation of σ over 1 month.

Lastly, with C, we use the following range to filter the abnormal data from both the training and testing data sets:

[M_{s,g} − C, M_{s,g} + C].  (6)

After data filtering, the processed training data set is stored in the SQLite database, while the testing data are sent to the classification module.

3) Classification Module: The objective of this module is to recover the activity from the testing data based on the training data. The key part is the classification algorithm. Currently, four classification algorithms are used, i.e., k-NN, NaiveBayes, Bagging, and Fusion.

Also, new algorithms can be added to deal with new activities or to achieve better performance in terms of running time complexity or accuracy. The realization of this module heavily depends on the application. Therefore, Section IV-C gives more details in the context of a prototype system.

IV. PROTOTYPE SYSTEM AND PERFORMANCE EVALUATION

Section III briefly introduced the online fingerprint-based activity recognition architecture. In this part, we describe the implementation of a prototype system to distinguish six activities, i.e., sleeping, sitting, standing, walking, fallen, and running. Details are presented, including the selection of recognition features, the recognition process, and the fusion algorithm. We also provide a demo for a better understanding [24].

A. Objective

The prototype system is realized to achieve the following goals.

1) To recognize activities. Empty is used as the baseline for the performance comparison.

2) To evaluate the online system in terms of complexity and accuracy.

3) To gain critical insights and valuable hands-on experience for the guidance of the system implementation.

In the next parts, we introduce the preprocessing and classification modules in detail, which form the core of the system.

B. Preprocessing Module

To filter the abnormal data, we apply the general method of Section III-B with the following experimental parameters.

[Step 1]. According to our statistics over one million samples taken from all experiments, we set α to −60 dBm.

[Steps 2 and 3]. We set P_{TH}, |S|, and |G| to 0.6, 7, and 24, respectively. According to the input data set, we calculate C accordingly to obtain the range of valid RSSI samples [M_{s,g} − C, M_{s,g} + C].
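A minimal sketch of this filtering under the parameters above is given below; for clarity it collapses the per-activity, per-group bookkeeping of (1)–(6) to a single trace and derives C directly from the Gaussian quantile implied by (2) and (3), so it is an approximation of the module, not a literal transcription.

import numpy as np
from scipy.stats import norm

ALPHA = -60.0   # Step 1 threshold (dBm), as set above
P_TH = 0.6      # Step 3 coverage probability, as set above

def preprocess(rssi):
    rssi = np.asarray(rssi, dtype=float)
    rssi = rssi[rssi >= ALPHA]                # Step 1: drop obviously abnormal samples
    m, s = rssi.mean(), rssi.std()            # Step 2: mean of the updated set
    c = s * norm.ppf((1.0 + P_TH) / 2.0)      # Step 3: C with P(m-C <= R <= m+C) >= P_TH
    return rssi[(rssi >= m - c) & (rssi <= m + c)]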

C. Classification Module

Though the recognition process is similar, different activities and classifiers may need different recognition features.

1) Recognition Features: To select suitable features for the classification, we conduct a case study and collect four groups of data in the empty conference room over one night. Each group contains 2000 samples. Groups 2 and 4 are recorded immediately after groups 1 and 3, respectively, while the time duration between groups 2 and 3 is set to 2 h.

The training samples first go through the preprocessing module and are then divided into subgroups with a size of 100, e.g., [RSSI_1, . . . , RSSI_100]. The mean (μ) and standard deviation (σ) of each subgroup are calculated as the candidate recognition features as follows:

μ = \frac{\sum_{i=1}^{100} RSSI_i}{100}  (7)

σ = \sqrt{\frac{\sum_{i=1}^{100} (RSSI_i − μ)^2}{100}}.  (8)
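A direct transcription of (7) and (8) is straightforward; the short sketch below computes both candidate features per subgroup (using the population standard deviation, matching the 1/100 factor in (8)).

import numpy as np

def mu_sigma(rssi, size=100):
    # Reshape the trace into subgroups of `size` samples; drop any remainder.
    groups = np.asarray(rssi[: len(rssi) // size * size]).reshape(-1, size)
    return groups.mean(axis=1), groups.std(axis=1)   # mu and sigma per subgroup, (7)-(8)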

Fig. 4 records the experimental results. The first observation is that, for each group, μ is stable over time, indicating that during the sampling period (i.e., 200 s) the environmental alteration is limited. However, μ changes significantly between groups 2 and 3, indicating that the background noise can fluctuate over a long period of time, e.g., several hours. Therefore, μ may not be the best choice. On the other hand, σ shows the potential ability to classify different activities since it is relatively stable even from a long-term point of view, as shown in Fig. 4(b). This is because σ mathematically records the variations of the sampled data. In our case, though the background noise may change over time, leading to unstable averaged RSSI measures (i.e., μ), the relative alterations of the signal caused by activities remain stable.

Another issue is whether the features are stable over time. To clarify this issue, we conduct experiments collecting data from different time intervals (afternoon and night) with different activities within one day. Furthermore, to study the long-term effect, we also compare the features collected now and one month ago at the same time interval under "empty." The result is shown in Fig. 4(d). In Fig. 4(c), for one fixed activity, σ shows stability over time. For different activities, σ still shows a high possibility of distinguishing them regardless of the specific time period. Fig. 4(d) shows that even over a long period, σ remains stable as a recognition feature.

2) Recognition Process: For each activity, with the same sampling rate, we obtain six data groups as the training data, each consisting of 2000 samples. Abnormal samples are removed before the recognition process, as in the last subsection. Then, we further divide each group into subgroups with a size of 20 samples. σ of each subgroup is calculated as the recognition feature. Therefore, for all the activities, we have seven corresponding sets of training data, namely, \vec{P} = {P_1, . . . , P_7}. For the observed RSSI data of the unrecognized activity, we apply the same processing procedure to get one data set, namely U. The distances between U and \vec{P} are calculated and recorded in \vec{L} = {L_1, . . . , L_7}. We use the k-NN algorithm as the classifier while setting k to

k = 0.8 · μ(\vec{L}) = 0.8 · \frac{\sum_{i=1}^{7} L_i}{7}.  (9)
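How k from (9) enters the final decision is not fully spelled out in the text; one literal reading is to treat it as an acceptance threshold around nearest-prototype matching, which the sketch below adopts. Both that reading and the set distance (difference of mean σ values) are our assumptions, not the paper's stated rule.

import numpy as np

def recognize(U, P):
    # U: sigma features of the unknown trace; P: {activity: sigma feature set}.
    L = {a: abs(np.mean(U) - np.mean(p)) for a, p in P.items()}   # distances L_i
    k = 0.8 * np.mean(list(L.values()))                           # Eq. (9)
    candidates = {a: d for a, d in L.items() if d <= k} or L      # threshold, else all
    return min(candidates, key=candidates.get)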

On one hand, certain activities have quite similar footprints and are thus difficult to distinguish. On the other hand, the k-NN algorithm employs one feature only, leading to a lower recognition accuracy. To deal with this issue, we first introduce a new feature to capture the variations of the feature σ, as well as a novel fusion algorithm, combining the k-NN classifier with the classification tree, so as to improve the accuracy.

3) Fusion Algorithm: Since a single feature fails to meet the recognition requirement, we introduce another one to enhance the performance: σ^{(2)}, which is the standard deviation of the collected σ, i.e.,

μ(σ) = \frac{\sum_{j=1}^{N} σ_j}{N}

σ^{(2)} = \sqrt{\frac{\sum_{j=1}^{N} (σ_j − μ(σ))^2}{N}}  (10)

where N is the number of subgroups (25 in our case). σ^{(2)} aims to record the variation of the feature σ, so as to capture tiny differences between activities that have similar footprints.

The fusion algorithm runs in the following steps.

S1: To recognize one undetermined activity, we first use the feature σ to decide which cluster it belongs to, C1 or C2 (from S2, C2 = C3 ∪ C4 = {empty, sleeping, fallen, sitting, standing}, so C1 covers the remaining activities, walking and running).

S2: If it attaches to C1, k-NN with the feature σ is utilized to decide the exact activity. Otherwise, we further break C2 into two clusters: C3 = {empty, sleeping, fallen} and C4 = {sitting, standing}. We then use the feature σ^{(2)} to determine which cluster the activity belongs to.

S3: k-NN with the feature σ^{(2)} is used to decide the exact activity.

We notice that certain pairs of activities present close features, such as walking/standing, sitting/sleeping, and standing/fallen. Therefore, in the above recognition process, it is possible to misinterpret the unknown activity. To this end, we propose the following countermeasure.

S4: To distinguish two close activities like walking/standing, we use k-NN with both features σ and σ^{(2)} as the classifier.

Fig. 5. Fusion algorithm.

Fig. 5 presents the detailed workflow of the fusion algorithm, and a schematic sketch follows.
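The sketch condenses S1–S4 into code. The nearest-neighbor tests, the assumed layout of the trained model, and the C1/C2 membership inferred above are schematic assumptions, not the exact implementation.

import numpy as np

C1 = {"walking", "running"}
C3 = {"empty", "sleeping", "fallen"}
C4 = {"sitting", "standing"}
CLOSE_PAIRS = [{"walking", "standing"}, {"sitting", "sleeping"}, {"standing", "fallen"}]

def nn_label(x, train, labels):
    # 1-NN placeholder: label of the closest training feature vector.
    return labels[int(np.argmin(np.linalg.norm(train - x, axis=1)))]

def fuse(sigma, sigma2, model):
    # `model` holds per-node training matrices and label lists (assumed layout).
    x, x2 = np.array([sigma]), np.array([sigma2])
    # S1: feature sigma decides the coarse cluster, C1 vs. C2.
    if nn_label(x, model["s1_train"], model["s1_labels"]) in C1:
        act = nn_label(x, model["c1_train"], model["c1_labels"])            # S2, C1 branch
    else:
        cluster = nn_label(x2, model["s2_train"], model["s2_labels"])       # S2: C3 vs. C4
        node = "c3" if cluster in C3 else "c4"
        act = nn_label(x2, model[node + "_train"], model[node + "_labels"]) # S3
    # S4: re-check easily confused activities with both features jointly.
    if any(act in pair for pair in CLOSE_PAIRS):
        act = nn_label(np.array([sigma, sigma2]),
                       model["pair_train"], model["pair_labels"])
    return act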

4) Complexity: The complexity of the classifier is crucial to the recognition process since it basically determines the overall running time. Therefore, we present the complexity analysis for the proposed algorithm here. Assuming the number of training data groups for each activity is m and each group has been divided into n subgroups, we have the following conclusion.

Theorem 1: The computational complexity of k-NN is Θ((4mn)^2), while the computational complexity of the fusion algorithm is Θ((4m)^2 + (2mn)^2).

Proof: For k-NN, the number of elements that need to be sorted is 4mn; therefore, its complexity is Θ((4mn)^2). For the fusion algorithm, we first sort 4m items and then sort another 2mn items. Therefore, the overall complexity is Θ((4m)^2 + (2mn)^2).

D. Performance Evaluation

For each activity, the sampling data have been divided into groups to gain the statistical information that characterizes the activity. Therefore, for a given data set, the size of a group (i.e., SG) is critical. We now explain why we select 20 as the default in our experiments.

We have performed 24 experiments for each activity as the testing data set (recall that we used another 24 experiments per activity as the training data set). We vary SG from 5 to 50 and record the recognition accuracy of both the k-NN and fusion algorithms in Fig. 6. Fig. 6(a) and (b) shows the recognition accuracy of each activity using the k-NN and fusion algorithms, respectively. For empty, SG has little impact on the recognition accuracy, i.e., the fluctuation is around 5%. However, for the fusion algorithm, activities such as sleeping rely heavily on SG. This is due to the specific settings of the fusion algorithm: when SG is too small, the variance of the data is not obvious, and the activity can easily be misinterpreted as sitting or fallen.

Table III summarizes the group sizes used at different recognition phases. At the feature selection phase, we use a larger group size, i.e., 100, to better illustrate the efficiency of the two candidates, i.e., μ and σ. After the feature has been determined, we use 20, according to Fig. 6, to achieve the best recognition accuracy.

Fig. 6. Recognition accuracy versus SG. (a) k-NN. (b) Fusion. (c) Average.

TABLE III. GROUP SIZE VERSUS PHASES

TABLE IV. PERFORMANCE COMPARISON OF NAIVEBAYES, k-NN, BAGGING, AND FUSION ALGORITHMS

To further validate the proposed fusion algorithm, we compare it with three well-known classifiers, i.e., NaiveBayes, k-NN, and Bagging. Table IV summarizes the results.

The experimental parameters and settings remain the same. We again conduct the experiments with all activities and use the above four classifiers to perform the activity recognition.

Since k-NN has better performance (65.77% accuracy on average) than NaiveBayes (58.33%) and Bagging (61.89%), we use it as a baseline to demonstrate the performance of our proposed fusion algorithm.

The major observation is the significant improvement for certain activities over k-NN. For instance, the accuracy of sleeping has been improved from 8.3% to 62.5%. This indicates that the fusion algorithm can better separate one activity from similar ones by layered classifications based on two features. Also, the average recognition accuracy has been improved from 65.77% to 72.47%.

In summary, through in-depth investigation of both features of the activities, a suitable classification tree is designed to better capture the differences between activities, leading to better overall performance. However, the size of a group is important to the performance and thus should be carefully selected.

V. RELATED WORKS

For the last few decades, activity recognition has remained a hot research topic, and its major objective is to explore footprints from historical information for recognizing physical activities [28], [29].

TABLE V. SUMMARY AND COMPARISON OF WiFi-BASED SYSTEMS IN MOTIONS, METHODOLOGY, PERFORMANCE, COMPLEXITY, AND AVAILABILITY

Though the information resources for the recognition can be diverse, the accelerometer sensor is the most commonly used device due to its high recognition rate. Bao and Intille developed efficient algorithms to detect human activities from accelerometer data [30]. However, their system requires subjects to wear separate sensors on different parts of the body. With the rapid development of pervasive computing, researchers realized that smartphones with built-in accelerometers are better alternatives. However, unlike sensors attached at fixed positions, smartphones can be carried anywhere on the body. To this end, Khan et al. presented a special recognition algorithm that is independent of the smartphone position [15].

Besides accelerometer sensors, vision information also captures human motions and thus can be explored for recognition [28]. Like the way human eyes detect movements, most research employs visible-light cameras and takes continuous images of motions, from which gradient-based features are extracted for activity recognition. Xia et al. proposed an interesting supplement and utilized the depth information captured by the Kinect device [31] for 3-D motions. A recent survey of such approaches can be found in [32].

In summary, previous research on activity recognition either depends on special hardware or requires the cooperation of the tested subjects. Therefore, the scalability issue becomes a major challenge for real-world applications. To address the issue, some recent studies used wireless background signals as the information source for human activity recognition, e.g., WiSee [17], Wi-Vi [18], WiFall [19], APsense [20], TelepathicPhone [21], etc.

Table V presents a brief summary as well as a comparison of several representative WiFi-based motion recognition systems in terms of motions, methodology, performance, complexity, and availability.

As discussed in Section I, WiSee and Wi-Vi are pioneering works that leverage WiFi signals for passive and noninvasive solutions. However, they share the same complexity and availability issue: both are based on USRP-N210 SDR systems and are not applicable to current off-the-shelf WiFi devices.

RSSI has long been recognized as an effective tool for analyzing context information. For instance, Patwari et al. presented a statistical model to approximate the locations of individuals by measuring the RSSI variance caused by human beings. Very recently, it has been utilized in distinguishing human motions.

TelepathicPhone [21] and WiGest [27] are two such RSSI-based systems recognizing hand motions. The average recognition accuracy of TelepathicPhone over five hand gestures is 56% based on k-NN with four features, while WiGest uses a segmentation and matching algorithm to recognize 10 hand motions with 87.5% accuracy.

ActPhone [25] is similar to our work. It is also an RSSI-based system, distinguishing three activities. However, it needs to modify the firmware on off-the-shelf smart phones.

CSI is an emerging technique that has been explored for fine-grained motion recognition. WiFall [19] and APsense [20] are two representative CSI-based solutions. WiFall is designed specifically for elderly or disabled people who live alone (two activities: sit and fall), while APsense aims to recognize several hand motions (four in their experiments). APsense is realized at the AP's end, while WiFall is at the MEs' end.

VI. CONCLUSION AND FUTURE WORK

This paper introduces a human activity recognition scheme based on WiFi ambient signals. It demands neither special hardware support nor the cooperation of subjects. Thus, it can be easily integrated with any existing WLAN network deployed in an indoor environment. By exploring the impact of various factors on WiFi RSSI via a systematic experimental study, we obtain an in-depth understanding of WiFi behaviors. The empirical results show the possibility of extracting WiFi fingerprints of different activities for the recognition and inspire us to push the research further. We then present an online fingerprint-based activity recognition architecture, which is flexible and adaptive. Lastly, we prototype this online system to evaluate the proposed method and gain valuable hands-on experience. The prototype system has been tested through extensive real-world experiments. The results confirm the efficiency of the proposed method, i.e., achieving 100% accuracy for certain activities and 72.47% accuracy on average. Important insights have been obtained to guide the system implementation and future research directions.

For future work, several open issues call for in-depth investigation. First, according to Bobick [33], motion is a general term, under which there are actually three different levels, i.e., movement, activity, and action. The richer the contextual and semantic information a motion has, the higher the level it falls in. Therefore, exploring the context information of activities is critical, e.g., the context between persons and motions, between locations and motions, and even between emotions and motions. Another possible extension is to explore the tradeoff between performance and availability. As stated above, both CSI-based and RSSI-based methods have their own merits and demerits in terms of performance and availability. CSI will eventually become dominant due to its much better performance over RSSI-based solutions. However, under current circumstances, the RSSI-based method is still worth using due to its simplicity. A hybrid solution combining RSSI with other information sources, such as light and accelerometer sensors, could be promising in achieving a good tradeoff between performance and availability.

ACKNOWLEDGMENT

The authors would like to thank the reviewers for their valuable comments, which improved both the content and quality of this paper.

REFERENCES

[1] V. Gungor and G. Hancke, "Industrial wireless sensor networks: Challenges, design principles, and technical approaches," IEEE Trans. Ind. Electron., vol. 56, no. 10, pp. 4258–4265, Oct. 2009.

[2] L. Atzori, A. Iera, and G. Morabito, "The Internet of Things: A survey," Comput. Netw., vol. 54, no. 15, pp. 2787–2805, 2010.

[3] M. Zorzi, A. Gluhak, S. Lange, and A. Bassi, "From today's intranet of things to a future Internet of Things: A wireless- and mobility-related view," IEEE Wireless Commun., vol. 17, no. 6, pp. 44–51, Dec. 2010.

[4] M. Darianian and M. P. Michael, "Smart home mobile RFID-based Internet-of-Things systems and services," in Proc. IEEE Int. Conf. Adv. Comput. Theory Eng. (ICACTE'08), Phuket, Thailand, Dec. 2008, pp. 116–120.

[5] X. Cao, J. Chen, Y. Xiao, and Y. Sun, "Building-environment control with wireless sensor and actuator networks: Centralized versus distributed," IEEE Trans. Ind. Electron., vol. 57, no. 11, pp. 3596–3605, Nov. 2010.

[6] Y. Yu, J. Ou, J. Zhang, C. Zhang, and L. Li, "Development of wireless MEMS inclination sensor system for swing monitoring of large-scale hook structures," IEEE Trans. Ind. Electron., vol. 56, no. 4, pp. 1072–1078, Apr. 2009.

[7] B. Spencer, M. E. Ruiz-Sandoval, and N. Kurata, "Smart sensing technology: Opportunities and challenges," Struct. Control Health Monit., vol. 11, no. 4, pp. 349–368, 2004.

[8] Y. Liu and G. Zhou, "Key technologies and applications of Internet of Things," in Proc. IEEE 5th Int. Conf. Intell. Comput. Technol. Autom. (ICICTA), Zhangjiajie, China, Jan. 2012, pp. 197–200.

[9] A. Willig, M. Kubisch, C. Hoene, and A. Wolisz, "Measurements of a wireless link in an industrial environment using an IEEE 802.11-compliant physical layer," IEEE Trans. Ind. Electron., vol. 49, no. 6, pp. 1265–1282, Dec. 2002.

[10] V. M. Olivera, J. M. C. Plaza, and O. S. Serrano, "WiFi localization methods for autonomous robots," Robotica, vol. 24, no. 4, pp. 455–461, 2006.

[11] C.-H. Lim, Y. Wan, B.-P. Ng, and C. See, "A real-time indoor WiFi localization system utilizing smart antennas," IEEE Trans. Consum. Electron., vol. 53, no. 2, pp. 618–622, May 2007.

[12] J. Biswas and M. Veloso, "WiFi localization and navigation for autonomous indoor mobile robots," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2010, pp. 4379–4384.

[13] B. Roberts and K. Pahlavan, "Site-specific RSS signature modeling for WiFi localization," in Proc. IEEE Global Telecommun. Conf. (GLOBECOM), Dec. 2009, pp. 1–6.

[14] J. Sung, C. Ponce, B. Selman, and A. Saxena, "Unstructured human activity detection from RGBD images," in Proc. IEEE Int. Conf. Robot. Autom., St. Paul, MN, USA, May 2012, pp. 842–849.

[15] A. M. Khan, Y.-K. Lee, S. Lee, and T.-S. Kim, "Human activity recognition via an accelerometer-enabled-smartphone using kernel discriminant analysis," in Proc. IEEE 5th Int. Conf. Future Inf. Technol. (FutureTech), Busan, Korea, May 2010, pp. 1–6.

[16] O. Yurur, C. Liu, Z. Sheng, V. Leung, W. Moreno, and K. Leung, "Context-awareness for mobile sensing: A survey and future directions," IEEE Commun. Surveys Tuts., to be published.

[17] Q. Pu, S. Gupta, S. Gollakota, and S. Patel, "Whole-home gesture recognition using wireless signals," in Proc. ACM MOBICOM, 2013, pp. 27–38.

[18] F. Adib and D. Katabi, "See through walls with WiFi!" in Proc. ACM SIGCOMM, 2013, pp. 75–86.

[19] C. Han, K. Wu, Y. Wang, and L. Ni, "WiFall: Device-free fall detection by wireless networks," in Proc. IEEE INFOCOM, Apr. 2014, pp. 271–279.

[20] Y. Zeng, P. H. Pathak, C. Xu, and P. Mohapatra, "Your AP knows how you move: Fine-grained device motion recognition through WiFi," in Proc. 1st ACM Workshop Hot Topics Wireless, 2014, pp. 49–54.

[21] S. Sigg, U. Blanke, and G. Troster, "The telepathic phone: Frictionless activity recognition from WiFi-RSSI," in Proc. IEEE PERCOM, 2014, pp. 148–155.

[22] Z. Yang, Z. Zhou, and Y. Liu, "From RSSI to CSI: Indoor localization via channel response," ACM Comput. Surv. (CSUR), vol. 46, no. 2, p. 25, 2013.

[23] P. Billingsley, "The Lindeberg–Levy theorem for martingales," Proc. Amer. Math. Soc., vol. 12, no. 1, pp. 788–792, 1961.

[24] Demo. [Online]. Available: http://v.youku.com/v_show/id_XNzM0NDM4MDAw.html?qq-pf-to=pcqq.c2c

[25] S. Sigg, M. Scholz, S. Shi, Y. Ji, and M. Beigl, "RF-sensing of activities from non-cooperative subjects in device-free recognition systems using ambient and local signals," IEEE Trans. Mobile Comput., vol. 13, no. 4, pp. 907–920, Apr. 2014.

[26] S. Sigg, L. Wolf, Y. Ji, and M. Beigl, "Passive, device-free recognition on your mobile phone: Tools, features and a case study," in Proc. MOBIQUITOUS, Dec. 2014, vol. 131, p. 435.

[27] H. Abdelnasser, M. Youssef, and K. A. Harras, "WiGest: A ubiquitous WiFi-based gesture recognition system," arXiv preprint arXiv:1501.04301, 2015.


[28] T. B. Moeslund and E. Granum, "A survey of computer vision-based human motion capture," Comput. Vis. Image Understand., vol. 81, no. 3, pp. 231–268, 2001.

[29] P. Turaga, R. Chellappa, V. S. Subrahmanian, and O. Udrea, "Machine recognition of human activities: A survey," IEEE Trans. Circuits Syst. Video Technol., vol. 18, no. 11, pp. 1473–1488, Nov. 2008.

[30] L. Bao and S. S. Intille, "Activity recognition from user-annotated acceleration data," in Pervasive Computing. New York, NY, USA: Springer, 2004, pp. 1–17.

[31] L. Xia, C.-C. Chen, and J. Aggarwal, "Human detection using depth information by Kinect," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recog. Workshops (CVPRW), Jun. 2011, pp. 15–22.

[32] R. Poppe, "A survey on vision-based human action recognition," Image Vis. Comput., vol. 28, no. 6, pp. 976–990, 2010.

[33] A. F. Bobick, "Movement, activity and action: The role of knowledge in the perception of motion," Philos. Trans. Roy. Soc. B, Biol. Sci., vol. 352, no. 1358, pp. 1257–1265, 1997.

Yu Gu (M'14) received the B.E. degree from the special classes for the gifted young (SCGY) and the D.E. degree in computer science from the University of Science and Technology of China, Hefei, China, in 2004 and 2010, respectively.

In 2006, he was an Intern with the Wireless Network Group, Microsoft Research Asia, Beijing, China. From 2007 to 2008, he was a Visiting Scholar with the Department of Computer Science, University of Tsukuba, Tsukuba, Japan. He has been a Full-Time Professor with the School of Computer and Information, Hefei University of Technology, Hefei, China. His research interests include wireless communications, pervasive computing, and affective computing.

Mr. Gu was a JSPS Research Fellow with the National Institute of Informatics, Tokyo, Japan, from 2010 to 2012. He was the recipient of the Excellent Paper Award at IEEE ScalCom 2009.

Fuji Ren (M'03–SM'03) received the B.E. and M.E. degrees from the Beijing University of Posts and Telecommunications, Beijing, China, in 1982 and 1985, respectively, and the Ph.D. degree from Hokkaido University, Sapporo, Japan, in 1991.

He has been a Professor with the Faculty of Engineering, University of Tokushima, Tokushima, Japan. His research interests include information science, artificial intelligence, language understanding and communication, and affective computing.

Dr. Ren is a Member of the IEICE, CAAI, IEEJ, IPSJ, JSAI, and AAMT. He is a Fellow of the Japan Federation of Engineering Societies. He is the President of the International Advanced Information Institute.

Jie Li (M'96–SM'04) received the B.E. degree in computer science from Zhejiang University, Hangzhou, China, in 1982, the M.E. degree in electronic engineering and communication systems from the China Academy of Posts and Telecommunications, Beijing, China, in 1985, and the Dr.Eng. degree from the University of Electro-Communications, Tokyo, Japan, in 1993.

From 1985 to 1989, he was a Research Engineer with the China Academy of Posts and Telecommunications. Since 1997, he has been with the Department of Computer Science, Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba, Japan, where he is currently a Full-Time Professor. His research interests include mobile distributed multimedia computing and networking, OS, network security, and modeling and performance evaluation of information systems.
