DroidZepp
Final year project report
Author: Nijat Ismayilzada
Degree programme: B.Sc. (Hons) Computer Science
Supervisor: Dr. Nick Filer
April 2016
Abstract
In today’s world, mobile devices have become a part of daily life. They make life
easier, more dynamic and more productive by providing a variety of services and solutions that
meet users’ requirements. The core concept underlying these services is the processing of data
and its presentation to the user in a desired form. There are many sources of user data, such as
a user’s texting style, Wi-Fi network names (SSIDs), the GPS location of the device and even
the user’s personal files. One of the most exciting sources for this purpose is wearable devices,
which have recently been integrated into our daily lives. The primary example of these devices
is the smartwatch, which offers many of the same features as a smartphone.
The project DroidZepp aims to collect user data from hardware sensors (e.g., the
accelerometer and gyroscope) of mobile phones and wearable devices (such as smart
watches) and to show a possible use for this data. The main idea behind the project is to build
an Android smartphone and smartwatch application that monitors user actions and provides an
intelligent alarm system. This report outlines the implementation of the project, various
design choices during the development, limitations, testing of the application, evolution and
outcome of the project and further possible future developments.
Acknowledgements
I would like to thank my supervisor Dr. Nick Filer for his continuous support during
all stages of this project. Without his advice and ideas, this project couldn’t have made it this
far.
Chapter 1
Introduction
1.1 Motivation
In today’s world, the importance of mobile phones is undeniable. As their technology has
rapidly improved, these devices have become able to provide a substantial contribution to our
daily life [1]. By always carrying them with us and being able to call 911, we can save our own
or someone else’s life in case of an emergency. This simple example shows the significance of
mobile phones.
Recently these devices have been miniaturized and even more integrated into our life
as wearables; the idea of ‘wearable devices’ has especially been a main focal point for the
industry. A number of different devices, such as smart watches, smart glasses, virtual reality
headsets and smart fitness belts, can be worn on various parts of the body. For instance, smart
watches are considered a handy gadget in fast-paced environments due to their quick control
functions for phone users, enabling the user to answer messages and calls, to see and control
notifications from the phone or to use numerous native small apps particularly developed for
smart watches [2].
The success of wearables and especially smart watches, and the excitement of
contributing to the development of the latest technology, is the first motivation for this
project. As mentioned above, numerous data sources are available from mobile devices and
the collected data can be used in a wide range of areas for diverse purposes. The variety of
data sources is increased by the introduction of smart watches. As the user wears one on
his/her wrist on a daily basis, the actions of the user’s hands can be tracked and introduced as
a new data source. This possibility is the second motivation for this project.
1.2 Project Objectives
1.2.1 Collection of User Data
The first objective of this project is the collection of user data by monitoring the hardware
sensors of both phones and smartwatches. In order to do this in a sensible way, creating the
DroidZepp application was essential. It should be noted that this application is required to run
in the background of the mobile device operating system and to collect data by monitoring the
device’s hardware sensors.
1.2.2 Classification by Using Machine Learning Techniques
The next objective of the project is identification of an appropriate analysis mechanism for
the collected data. Machine learning algorithms, techniques and tools are one of the most
rapidly growing areas in Computer Science. Applying machine learning techniques to
classify the data can provide huge amounts of information about the user. For example, by
monitoring appropriate hardware sensors of the smartwatch and classifying the collected data,
the movement of the user’s hand can be recognised and the user’s current action can be
predicted.
1.2.3 Applying the Results to Daily Life
After collecting and classifying user data from both phones and smart watches, the
DroidZepp application is implemented. For an initial step, this real world scenario was
followed:
A user performs numerous actions by moving their hands (where the smartwatch is worn)
during their daily routine, for example, ironing, running, eating, buying something, etc. Some
of these actions are distributed among scheduled times during the day, and some of these
scheduled activities may be missed by the user at some point. The user teaches one of these
important actions to the application by recording examples of the activity. The user also
enters the reminder time for the action.
At this point, the application sets an internal alarm for the entered time. When the
scheduled time comes, the application runs in the background and starts data collection
by recording the movements of the user (the user’s hand in this case). When the recording is
completed, the application classifies this recorded data based on the training dataset. As a
result of this classification, the user’s current action is predicted. Now the application can
tell whether the user is currently doing their scheduled action or not. If not, the user is
reminded by some kind of notification.
1.3 Report Structure
The report is divided into the following chapters:
1. Introduction. This chapter already explained the main motivations for the DroidZepp
project. It outlined the objectives of the project in three main categories.
2. Background. This chapter gives background information for the upcoming chapters. It
explains details about the application itself and the tools required during development. It
also provides research results from this area.
3. Design. This chapter describes the general structure of the application, the complexity of
the overall system and the relations between the different parts of the architecture.
4. Development. This chapter provides more details about the implementation of the
particular parts of the system. It shows different development decisions, libraries and
APIs used, algorithmic problem solving, etc.
5. Testing, Evaluation and Results. This chapter describes the Android testing solutions and
methodologies used in the development stage of the project. It also gives results for the
accuracy of the classifier and shows the outcome of the project.
6. Conclusion. This chapter outlines the knowledge gained during the project and presents
possible implementations for the future.
Chapter 2
Background
This chapter provides more detailed background information about the idea behind the
project DroidZepp, the required concepts to understand it, the target audience for the project
and some technical background on the tools and devices used.
2.1 DroidZepp Application
First of all, in order to achieve the first objective, the collection of convenient user data, some
kind of hardware-sensor monitoring tool had to be implemented for mobile devices. To start
building this tool, the current state of available platforms was analysed. The following
requirements were considered during the process:
1. The platform should be a mobile platform on smartphones
2. The platform should have some kind of wearable device support
3. The platform should have hardware sensors that can track its users (the type of this
tracking was not yet known)
4. The platform should provide a bug-free and extensive API
Another small point was the popularity of the platform. Supporting well-known platforms
makes tools more useful, and this has always been a desired behaviour in the history of
software engineering [3]. International Data Corporation (IDC) recently shared statistical
information about the market share of smartphone operating systems, which can be seen in
figure 2.1.
The market is mainly shared between Android and iOS devices. Due to the fact that the
market shares of BlackBerry OS and Windows Phone are decreasing year by year, as well as
their lack of wearable device support, these platforms were not considered as a target
platform.
Both Apple and Google provide leading technologies to the industry with iOS and
Android. Both operating systems have their own user audience and great development
support. In recent years they have both introduced their own wearable platform: Android
Wear and Apple watchOS. Initially, both platforms were great candidates for the
development of project DroidZepp. However, during the design stage of the application, it
was determined that implementing the DroidZepp project for Apple smartwatches would not
be possible due to software limitations. In order to monitor the user’s actions, DroidZepp
needs to run in the background of the operating system and regularly collect data from the
device hardware sensors. Unfortunately, the iOS developer library mentions that if the user
does not actively use the application (for example, the user presses the Home button to exit
the application), the operating system moves the application into the background and
terminates it after five seconds [4]. In these five seconds the developer should
provide some clean-up steps. In some cases, the application can ask the operating system for
extra time (actually, three more minutes) to stay in the background for longer, but the
background stage still lasts a finite amount of time before the application is terminated by the
system. So, it was decided to use the Android platform for its background application
flexibility.
Figure 2.1. International Data Corporation smartphone market share rankings for 2012-
2015 [5].
2.1.1 Android Platform
The Android Developer network provides a lot of useful information. Three main resources
were used from this network:
1. API guides [6]. These provide training code samples and app examples. They also show
the main Android application design and architectural styles.
2. Packages [7]. This contains all available packages inside the Android API.
3. Wear API [8]. This part of the API is used for wearable application development.
Two Android devices were used during the development:
1. Sony Xperia Z2 running on Android 5.1.1. This device is currently a mid-
performance smartphone device in terms of its hardware compared to other Android
devices on the market. For the full specifications, see Appendix A.
2. LG G Watch running on Android 5.1.1 and then Android 6.0 during later stages of
development. This device was one of the first generation of Android smartwatches
with very low hardware specifications (see Appendix B). In theory, if DroidZepp can
run on this device, it should be able to run on all other Android smartwatches. This
theory was proved correct.
2.1.2 Sensor Technologies in Android Devices
After specifying the development platform, the next issue was investigating sensor
technologies in Android. The main objective was to identify which sensors would be the best
choice for monitoring user actions and collecting sensible data for machine learning.
Most sensors are internal hardware components of a mobile device, provided by the
smartphone and smartwatch manufacturers. There are standard sensors that most mobile
devices already have, but some devices contain additional sensors, such as the heart rate
sensor in the Samsung Galaxy S6. Table 2.1 shows the list of standard sensors suggested by
Google to manufacturers.
On the software support side of this hardware, Android API level 14 and later makes these
sensors available for use in applications.
Table 2.1. Sensor types supported by Android OS and their availability by APIs [9].
¹ This sensor type was added in Android 1.5 (API Level 3), but it was not available for use until
Android 2.3 (API Level 9). ² This sensor is available, but it has been deprecated.
Lara and Labrador outline very extensive usage of triaxial accelerometer sensors in
human action recognition. They show examples of highly accurate results using this sensor,
but also outline weaknesses of accelerometer-only classifications such as confusing eating
and brushing teeth [10].
According to Shoaib et al., combining gyroscope sensor readings with accelerometer data
increases the overall accuracy of action recognition by up to 13.4% for specific actions where
positioning matters [11]. Based on this information,
and due to the fact that accelerometers and gyroscopes are the sensors most widely supported
by manufacturers, these sensors were used for the DroidZepp project.
2.1.3 Classification
As the second objective of the project, classification of the collected data by using Machine
Learning techniques was investigated. Mannini et al. describe a wide range of categories that
classifier taxonomy is built around [12]. First of all, they explain two main approaches for
classifiers, supervised and unsupervised. In supervised classifiers, new test datasets are
classified according to a pre-labelled training dataset. On the other hand, unsupervised
classifiers are not provided with any training set; the dataset is classified based on the feature
set and the number of classes. Second, Mannini et al. discuss sequential and single-frame classifier
types. Sequential classifiers take the results of past classifications into account while
labelling the dataset and create new virtual training datasets. However, single-frame
classifiers only consider the current classification, in isolation from the results of past
classifications. Finally, Mannini et al. also explain probabilistic, geometric and
template-matching approaches in machine learning. At the end of this stage of the project, it
was decided to implement a supervised, sequential, probability-based Hidden Markov Model
(HMM) classifier for DroidZepp.
Hidden Markov Models (HMMs)
HMM classification is based on the probabilistic likelihood of a given sequence of
observations. For example, some sequence of numbers can be similar to another sequence
observed in the past. This similarity is defined by the probabilistic similarity in the order of
the states that these numbers can possess. Figure 2.2 describes a simple HMM.
Figure 2.2. Single Hidden Markov Model [13].
De Souza describes the Markov property as the assumption that, in any sequence, the current
observation depends only on the most immediate previous one. For example, given a
sequence of observations x = {x_1, x_2, …, x_T} and a corresponding sequence of states
y = {y_1, y_2, …, y_T}, the probability of any sequence of observations can be written as
formula 2.1.

p(x, y) = p(y_1) p(x_1 | y_1) ∏_{t=2..T} p(y_t | y_{t-1}) p(x_t | y_t)

Formula 2.1. Probability of any HMM sequence of observations [13].

The probability p(y_t | y_{t-1}) can be described as the probability of being in the current
state y_t right after the state y_{t-1}. On the other hand, the probability p(x_t | y_t) can be
read as the probability of observing x_t while currently in the state y_t. De Souza explains
that to compute these probabilities, two matrices, A and B, can be used. Matrix A provides
the state transition probabilities p(y_t | y_{t-1}) and matrix B provides the observation
probabilities p(x_t | y_t) [13].
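To make the role of the matrices A and B concrete, the joint probability of a short state/observation sequence can be computed directly from them. The following Python sketch is purely illustrative: the matrices and sequences are invented for the example and are not DroidZepp’s trained model.

```python
# Transition matrix A: A[i][j] = p(y_t = j | y_{t-1} = i)
A = [[0.7, 0.3],
     [0.4, 0.6]]
# Emission matrix B: B[j][k] = p(x_t = k | y_t = j)
B = [[0.9, 0.1],
     [0.2, 0.8]]
# Initial state probabilities: pi[i] = p(y_1 = i)
pi = [0.5, 0.5]

def joint_probability(states, observations):
    """p(x, y) = p(y_1) p(x_1|y_1) * product over t of p(y_t|y_{t-1}) p(x_t|y_t)."""
    p = pi[states[0]] * B[states[0]][observations[0]]
    for t in range(1, len(states)):
        p *= A[states[t - 1]][states[t]] * B[states[t]][observations[t]]
    return p

# Likelihood of staying in state 0 twice, then moving to state 1.
print(joint_probability([0, 0, 1], [0, 0, 1]))
```

In an HMM classifier such as the one in Accord.NET, one model is typically trained per action, and a new recording is assigned the label of the model that gives it the highest likelihood.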
To apply HMMs to the DroidZepp project, a web service was developed using the
Accord.NET Machine Learning Framework [14]. This framework is written in C# for the
.NET platform. For this reason, ASP.NET services on Microsoft Azure [15] were used to
implement the cloud service. Azure is Microsoft’s cloud platform, which lets developers
deploy their applications to the cloud. The implementation of this web service is described in
chapter 4.2.
Chapter 3
Design
DroidZepp has three main components:
1. Android application running on a smartphone
2. Android Wear application running on a smartwatch
3. Web service running on Microsoft Azure web servers
Figure 3.1 shows the interaction between these components.
Figure 3.1. Interaction between components of DroidZepp.
The design of these three components will be discussed individually in this chapter, as each
has its own complex structure and relationships to the others. The main controller unit is the
application running on the smartphone, which contains the main logical parts and the user
interface.
3.1 Design of Smartphone Application
The Android application running on the smartphone is the main component of the whole
system. When action recording and data collection start, it interacts with the Android Wear
application on the smartwatch. When it comes to the classification of the collected data, it
communicates with the online web service.
First of all, the internal design of this component should be discussed. It contains the user
interface, the main controller, three background services and the database. Figure 3.2 shows
the overall architecture of the smartphone application.
Figure 3.2. Overall architecture of the smartphone application.
As can be seen, MainActivity is the main controller unit of the component. It starts the UI
on top of it, so the user can interact with the system. Under the main activity, three
background services run on the system:
1. Data Collection Service
2. Classification Service
3. Alarm Service
All three of these services have their own individual databases.
3.1.1 Data Collection Service on the Smartphone Application
Data collection service consists of two elements:
1. Data collection service itself
2. Sensor listeners
The data collection service is a background service started and controlled by the main
activity controller. As an Android service, it is designed to run individually in the background
of the Android system and to interact with the overall system. It can start data collection
individually or under the control of the main controller. Figure 3.3 outlines the lifetime of the
data collection service.
Figure 3.3. Lifetime of data collection service.
The start phase of the sensor handler service can be triggered by the user or the main
controller. The following actions are taken in this phase:
1. Prompting the data collection service on the smartwatch side of the system. In order to
record a user action simultaneously, the wear side of the system should start data
collection at the same time as the phone application. The start phase of the sensor
handler service prompts the wear application to start data collection.
2. Organising the database. The data collection service is designed to use a temporary
database for saving collected data. It enables the classifier service to access the values
of newly recorded actions more safely and also reduces overall database load on the
operating system. To save new values of accelerometer and gyroscope sensor
readings, the start phase clears the temporary database and opens a new connection to
the temporary database.
3. Registering sensor listeners. Details of listeners will be described in chapter 4.1.1.1.
During the actual recording phase, the sensor handler service retrieves data points from the
listeners. This process repeats itself at a fixed interval; the current system is designed to
repeat this phase every 200 milliseconds.
The end phase of the sensor handler service is also complex, like the start phase. The
following actions are taken in this phase:
1. Unregistering the sensor listeners. Details of the listeners will be described in chapter
4.1.1.1.
2. Closing the database. At the end of recording, the sensor handler service closes the
database connection.
3. Informing about the completion of recording. Details of this operation will be described
in chapter 4.1.1.5.
The smartphone application contains two sensor listeners, one each for the
accelerometer and gyroscope. The design of these listeners is based on the background
information given in chapter 2.1.2. After the registration of the listeners by the start phase of
the sensor handler service, they read accelerometer and gyroscope values. The data delay for
reading the values is set to the fastest possible, so that every small change in the data values
is read. It should be noted that reading with the fastest delay creates a very high density of
sensor data values. However, this is not an issue for the current system, because inserting
these values into the temporary database is arranged by the second phase of the data
collection service, so the interval (density) for inserting values into the database can easily
and safely be controlled by modifying that phase of the sensor handler service.
The design decision behind this architecture addressed one of the most important
requirements for the system: it makes the data collection service supply the classifier with a
much more convenient dataset of sensor readings. As discussed in the background
information on classifiers, the required dataset should contain training samples and their
labels. To give the classifier more accuracy and to identify human actions more reliably, all
these training samples should consist of the same number of sensor readings. Taking this into
consideration, figure 3.4 can be drawn.
Figure 3.4. Timeline of two data collection processes.
As seen in figure 3.4, by keeping a fixed delay between sensor value readings and by keeping
the same overall time for each action recording, a more convenient dataset can be produced.
However, Android sensor listeners are not capable of providing values at custom delays:
whenever the values of the hardware sensors change, the listeners immediately read those
values. This is why this design decision was chosen for the project. DroidZepp controls the
data readings in the second phase of recording in the data collection service. Even though the
listeners provide a high density of sensor readings, the data collection service accepts only
the appropriate data points.
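The acceptance step described above amounts to a simple downsampling filter: the listener delivers readings at an uncontrolled high rate, and the recording phase keeps only one reading per fixed interval. A Python sketch of the idea (the 200 ms interval comes from the text; the timestamps and values are invented for the example):

```python
INTERVAL_MS = 200  # fixed delay between accepted readings

def downsample(readings, interval_ms=INTERVAL_MS):
    """Keep only the first reading in each fixed interval.

    `readings` is a list of (timestamp_ms, value) pairs as delivered
    by a high-rate sensor listener.
    """
    accepted = []
    next_slot = None
    for ts, value in readings:
        if next_slot is None or ts >= next_slot:
            accepted.append((ts, value))
            next_slot = ts + interval_ms
    return accepted

# A listener firing roughly every 50 ms is reduced to one point per 200 ms.
raw = [(t, t * 0.1) for t in range(0, 1000, 50)]
print(len(downsample(raw)))  # 5 accepted points in a 1-second recording
```

Because the filter runs in the recording phase rather than in the listener, changing the density of the dataset only means changing `INTERVAL_MS`, exactly as the text describes.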
3.1.2 Classification Service on the Smartphone Application
The classification service is the logical part of the system. It is an Android service that runs
in the background of the operating system independently and indefinitely. As with other
background services, closing the application does not affect the work of the classification
service. Similar to the data collection service, the classification service can also start
classification individually (automated alarms) or under the control of the main activity (user
action training).
The design of the classification service comprises three concepts:
1. Processing collected data
2. Classifying the data
3. Announcing the prediction
Processing the collected data is designed to pass through five steps:
1. Receiving collected data from the smartwatch side. At the end of action recording on
smartwatch, collected data is sent to the smartphone side. This data is received by the
classification service as further processing is required.
2. Extracting local data. As mentioned before, the data collection service saves the
values to a temporary database. At this stage, the classification service extracts all the
recorded data from the temporary database of the data collection service.
3. Combining smartwatch and smartphone data. The data received from the wear side and
the local data extracted from the data collection service are combined into a single dataset.
4. Feature extraction. Depending on the type of classifier, feature extraction is done at
this stage. Having one dataset makes it more appropriate to extract required features
from this dataset. The current system does basic feature extraction as it uses an online
HMMs classifier.
5. Saving the final dataset. The final dataset is saved into the permanent actions database
and labelled as ‘classification required’. The idea behind this design choice is the
consistency and reliability requirements of the system. Classification of the dataset is very
expensive work for the device (especially a mobile device). However, the Android API
allows multiple threads and services to run in the background of the system, which
enables DroidZepp to save the data and classify it later, or to receive new data while
classifying previous data.
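The ‘save now, classify later’ behaviour in step 5 can be sketched as a small work queue: recordings are persisted immediately with a ‘classification required’ label, and a separate worker classifies them when the expensive classifier is free. This Python sketch is a simplification and all names in it are invented for illustration:

```python
from collections import deque

class ActionStore:
    """Persist recordings immediately; classify them later."""
    def __init__(self):
        self.pending = deque()   # datasets labelled 'classification required'
        self.completed = []      # (dataset, predicted_label) pairs

    def save(self, dataset):
        # Saving is cheap, so it never blocks on the expensive classifier.
        self.pending.append(dataset)

    def classify_next(self, classifier):
        """Classify the oldest pending dataset, if any."""
        if not self.pending:
            return None
        dataset = self.pending.popleft()
        label = classifier(dataset)  # expensive step, run by a worker
        self.completed.append((dataset, label))
        return label

store = ActionStore()
store.save([1.0, 2.0])   # new data can arrive...
store.save([3.0, 4.0])   # ...while earlier data is still queued
print(store.classify_next(lambda d: "ironing"))  # prints "ironing"
```

The point of the design is visible in the sketch: `save` and `classify_next` are independent, so new recordings are never lost while a previous classification is still running.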
As discussed in the background chapter, machine learning on mobile devices is a very
exciting and complex area of computer science. The current system is designed to use an
online classifier service rather than a local data classifier. The second unit of the
classification service is designed to provide a connection to an online classifier web service.
Details of this connection and the design of the web service will be outlined in chapter 4.2.
Finally, at the end of classification, the service announces the result. This process is done by
the internal messaging system of DroidZepp, which will be discussed in chapter 4.1.1.5.
3.1.3 Alarm Service on the Smartphone Application
The alarm service is the third main component of DroidZepp. It is also an Android service
that runs in the background of the system and provides several functionalities for users. As in
other background services, it is designed to run independently and indefinitely in the
background of the system. In contrast to other services, it is simpler, as it has only two
components:
1. Alarm setter
2. Alarm receiver (notifier)
The alarm setter is able to accept requests either from internal services or directly from
the user. It is designed to provide access to the Android operating system’s alarm services.
This lets DroidZepp schedule actions very reliably to run in the future. For example, even if
the user kills DroidZepp and all its background processes (services), the alarms will still
fire.
The alarm receiver is a special unit that activates when an alarm goes off. It is a broadcast
receiver for alarms. So, whenever an internal alarm goes off, this unit starts working. It can
be configured to perform different operations. Its current design is described in figure 3.5:
Figure 3.5. Activity diagram for triggering Alarm Receiver in two different scenarios.
As seen in the figure, the alarm receiver can be triggered under two different scenarios.
As an Android broadcast alarm receiver, it starts user activity detection to predict the user’s
current action. Basically, it prompts the data collection and classification services. This
process is done using the internal messaging system of DroidZepp, which is described in
chapter 4.1.1.5.
The second scenario occurs when the classification service announces the result of the
classifier (the prediction of the user’s current action). The alarm receiver receives this result
and checks it against the database. If the prediction is not the same as the alarm’s scheduled
action (i.e., the user has missed their scheduled activity), a notification pops up to inform the
user. Otherwise, DroidZepp just marks the action as completed and removes the alarm.
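The check performed by the alarm receiver reduces to a comparison between the predicted action and the scheduled one. A minimal Python sketch of that decision (the function and field names are hypothetical, not DroidZepp’s actual identifiers):

```python
def on_classification_result(predicted_action, alarm):
    """Decide what the alarm receiver should do with a prediction.

    `alarm` is a dict holding the scheduled action name; the return
    value names the step DroidZepp would take next.
    """
    if predicted_action != alarm["scheduled_action"]:
        # The user appears to have missed the scheduled activity.
        return "notify_user"
    # The user is already doing the scheduled action.
    return "mark_completed_and_remove_alarm"

alarm = {"scheduled_action": "taking medicine", "time": "09:00"}
print(on_classification_result("eating", alarm))           # notify_user
print(on_classification_result("taking medicine", alarm))  # mark_completed_and_remove_alarm
```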
In addition, it should be noted that the current design allows modification of the
notifications to more sophisticated full-screen alarm notifiers. The flexibility of the user
interface in this part of the application is high.
3.2 Design of the Smartwatch Application
As mentioned above, DroidZepp is designed to monitor user actions by using hardware
sensors in Android smartphones and smartwatches. Details of the design of the smartphone
application were discussed above. When it comes to the smartwatch application, it should be
mentioned that this part of the application does not contain the main controller units of the
system and for this reason cannot be used as a separate application. It is a highly integrated
component of the overall system, as it provides the main feature set of user actions by
monitoring the movements of the user’s hand (wrist).
The main reason for this design choice is related to the performance of smartwatches.
Even though these small gadgets have a reasonable amount of CPU power and memory (see
Appendix B for full specifications), they cannot process a huge amount of sensor data. This
would cause battery drain and overall inconsistency issues in the system. Details of the
performance issues will be discussed in chapter 5.
Figure 3.6 shows the general design of the smartwatch application.
Figure 3.6. Overall architecture of the smartwatch application.
As can be seen from the diagram, the wear part of the system contains two background
services:
1. Data Collection service
2. SendToClassify service
3.2.1 Data Collection Service on the Smartwatch Application
The data collection service of the smartwatch application is a modified version of the one in
the smartphone application. So, the same as on the phone side, the data collection service in
the smartwatch application also has two components:
1. Data collection service itself
2. Sensor listeners
Details of these components were already discussed in chapter 3.1.1. There is only one
change in the design of the data collection service: in the first phase, instead of prompting the
wear side, it receives a starting command from the smartphone side. The next steps follow
the same behaviour as the smartphone data collection service.
3.2.2 SendToClassify Service on the Smartwatch Application
The SendToClassify service is designed to send recently recorded human action data to the
smartphone side. As mentioned above, the classification service on the smartphone side
receives this data.
The SendToClassify service is a typical Android service that runs in the background
of the system individually and indefinitely. This lets the overall system overcome several
inconsistency and security issues, such as running at the same time as the data collection
service.
The service consists of two components:
1. Local data extractor. Similar to the classification service on the smartphone side, the
SendToClassify service also extracts recently collected data from the temporary
database of the data collection service. However, the SendToClassify service does not
have its own separate database. After extracting the dataset, it is sent to the second
component of the service.
2. Data sender. This component is designed to send the extracted local data to the
smartphone application over Bluetooth. Details of its implementation will be discussed in
chapter 4.1.1.4.
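Before sending, the extracted readings must be flattened into a byte payload for the Bluetooth channel. One possible fixed-record encoding is sketched below in Python; the record layout is an assumption for illustration, not DroidZepp’s actual wire format:

```python
import struct

# One record: int64 timestamp (ms) + 3 accelerometer + 3 gyroscope floats.
RECORD_FMT = "<qffffff"
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def encode(readings):
    """Pack (timestamp, ax, ay, az, gx, gy, gz) tuples into bytes."""
    return b"".join(struct.pack(RECORD_FMT, *r) for r in readings)

def decode(payload):
    """Unpack a payload produced by encode()."""
    return [struct.unpack_from(RECORD_FMT, payload, off)
            for off in range(0, len(payload), RECORD_SIZE)]

sample = [(1000, 0.1, 0.2, 9.8, 0.0, 0.01, 0.02)]
roundtrip = decode(encode(sample))
print(roundtrip[0][0])  # 1000
```

A fixed record size keeps the receiver simple: the classification service on the phone can split the payload by offset without any framing protocol.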
Chapter 4
Development
This chapter outlines the implementation of the DroidZepp application, its three
components, the development of the relationships between these components, the various
internal communications within each component and also the database implementation that
supports the entire system. A number of external tools and libraries were also used during the
development. The chapter will cover the implementation in a similar order to the design
chapter.
Two main development environments were used during the implementation:
1. Android Studio for Android Development on Linux
2. Visual Studio [16] for ASP.NET Web service Development on Windows
The Android applications were implemented in the official integrated development
environment for Android, Android Studio by Google. Figure 4.1 shows the platform running
on Linux.
Figure 4.1. Android Studio running on Linux.
The Android API was the main source of the required knowledge for the development.
However, various implementation obstacles were encountered in the process. These obstacles
were overcome by further research in the area.
Visual Studio by Microsoft was used as a development platform for the
implementation of the web service. The web service is deployed to the Microsoft Azure cloud
platform. The details of the implementation will be discussed in chapter 4.2.
The whole development process was tracked with the Git version control system. The strong
Git integration of both IDEs made progress quick and smooth.
4.1 Development of Android applications
Most of the development effort was spent on the Android side of the application. Following
the Model-View-Controller (MVC) architecture, the applications were implemented in three
layers, as described in figure 4.2.
Figure 4.2. Graphical representation for MVC software architectural pattern.
As seen from the figure, the three levels were implemented separately, following the
MVC architectural paradigm. The database structure forms the bottom layer, the
controllers sit above it as a middle layer, and on top of these the UI was
implemented as a flexible, independent piece.
Figure 4.3. The directory structure of Android applications (left – smartphone, right –
smartwatch).
4.1.1 Controllers (services)
As mentioned in the design chapter, all controllers were implemented as Android
background services, so that they can run in the background of the system and safely
monitor user actions or internal events. To implement an Android background service, a
class must extend the Android Service class and be registered with the system through
the Android manifest file. The Android manifest file is an XML configuration file for
the application. It contains many details about the application, such as the
application name, the identity of the main activity to run, the list of broadcast
receivers, supported Android versions, the package name, etc. These details are saved
as XML tags. For instance, the registration of the services was implemented here.
Figure 4.4 shows a snapshot of this registration.
Figure 4.4. Registration of background services in Android manifest file.
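A registration of this kind takes the following shape in the manifest (the service names below are illustrative placeholders, not the exact DroidZepp entries shown in figure 4.4):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.droidzepp">
    <application>
        <!-- Each background service must be declared here,
             or the system cannot start it. -->
        <service android:name=".DataCollectionService" />
        <service android:name=".ClassificationService" />
        <service android:name=".AlarmService" />
    </application>
</manifest>
```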
In addition to the internal lifecycle of the services, their global behaviour within
the operating system is also managed. The onDestroy and onLowMemory functions were
implemented to overcome system instability issues. The Android operating system does
not fully guarantee that a background service will always be running: some cases, such
as low memory or external user intervention, can cause termination of the service.
Implementing onDestroy and onLowMemory does not prevent these cases from happening.
However, they guarantee safe termination of the service by saving its current state,
closing the database and Bluetooth connections to the smartwatch, and stopping the
work of the internal runnables1.
1 A runnable is an executable function, implemented using the Java Runnable interface.
4.1.1.1 Data Collection Service
The data collection service plays a very important role in DroidZepp because of its
major contribution to the project objectives. This also makes the service the main
focal point of the development process. Although the requirements set below for the
service were extensive and hard to implement, all of them were considered
high-impact and were implemented successfully:
1. Multi-threaded execution of internal modules
2. Repetitive execution of processes
3. Consistent and reliable communication with the wear side
4. Safe communication to other services
Multi-Threading in Android
The classic Java Thread [17] approach can be used in Android applications to execute
multiple functions on multiple threads. However, as described by Vogel [18], this
approach has the following drawbacks:
1. If the results of a background function need to be posted to the UI, a further
synchronisation process is required.
2. The lifetime of the thread has to be managed externally. For example, there is no
default cancellation operation for a thread.
3. The Android API does not provide any default pooling options for threads. Pooling
is explicitly required when more than two tasks are created. In the ideal
pooling approach, tasks are added to a queue and executed in order by a thread
pool.
4. Configuration changes in the application, such as an orientation change, re-create
the activity and cause inconsistency issues. These cases have to be managed
externally.
These drawbacks of the classic Java thread approach forced the development process to
look for a more reliable approach.
Repetitive Execution of Processes
A very basic repetitive execution of processes, such as running a function again and
again, can be implemented using while loops and Thread.sleep calls. This approach is
easy to manage in terms of controlling the start and stop timings of the execution
with simple conditions. However, it is fragile and expensive on the CPU. Putting a
thread into a sleep state freezes its work completely and locks access to its
contents, which can easily lead to issues such as deadlock and starvation. So it was
decided to implement a better-controlled repetitive execution of processes, such as
the TimerTask class provided by the Android API [19].
Implementing TimerTask is easy and manageable: a new instance of the Android Timer
class is created and a new runnable event is scheduled on the timer to run
repetitively with fixed delays. However, the following obstacles occurred:
1. Rescheduling a TimerTask with a new configuration was not possible. To change its
settings, such as the time delays, the current task has to be terminated
completely and created again with the new configuration.
2. It was not possible to add more than one runnable event to a single TimerTask.
According to Abdul Ahmad, the memory usage of TimerTask is also very high [20]. These
were the main reasons that the TimerTask implementation was also discontinued.
Android Handlers
After careful analysis of these obstacles and further research, multi-threading and
repetitive execution of processes were implemented using Handlers [21]. The Android
API states that a Handler allows a developer to send and process Message and Runnable
objects associated with a thread's MessageQueue (here, only runnable objects will be
discussed). Handlers are therefore capable of very customised processing and
management of runnable objects on their associated thread.
The following functionalities are supported by Android Handlers:
1. Handlers run on separate threads, so they do not interfere with each other's work.
2. Handlers execute runnable objects.
3. Handlers can start executing a runnable object immediately, at some exact time in
the future, or after a delay.
4. Handlers can execute runnable objects repetitively by simply posting the same
runnable again at the end of the previous task.
5. Handlers can terminate the execution of runnable objects or reschedule them to run
in the future.
All these functionalities made Android Handlers the best option for the implementation
of the data collection service. The three phases of the sensor handler service were
implemented with three Handlers, as described in figure 4.5.
Figure 4.5. Three Handlers of data collection service.
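Outside Android, the self-re-posting behaviour used for repetitive execution (points 4 and 5 above) can be sketched in plain Java with a ScheduledExecutorService as a rough analogue. This is a sketch for illustration only — DroidZepp itself uses Handlers, and the class and method names here are invented:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RepostDemo {
    /** Runs a self-re-posting task until it has fired n times; returns the count. */
    static int runTicks(final int n) {
        final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        final AtomicInteger ticks = new AtomicInteger();
        final CountDownLatch done = new CountDownLatch(1);

        // Mimics handler.postDelayed(this, delay): the task re-posts itself
        // with a fixed delay until the target count is reached.
        Runnable tick = new Runnable() {
            @Override
            public void run() {
                if (ticks.incrementAndGet() < n) {
                    scheduler.schedule(this, 10, TimeUnit.MILLISECONDS);
                } else {
                    done.countDown();
                }
            }
        };

        scheduler.schedule(tick, 0, TimeUnit.MILLISECONDS);
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        scheduler.shutdown();
        return ticks.get();
    }

    public static void main(String[] args) {
        System.out.println("ticks = " + runTicks(5)); // prints "ticks = 5"
    }
}
```

Like a Handler, the scheduler owns a single worker thread and a queue, so repeated postings never overlap each other.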
A new runnable object can be posted to a Handler for execution in three different
ways:
1. post(Runnable) - posts a runnable to run on the Handler's thread as soon as
possible.
2. postAtTime(Runnable, long) - schedules a runnable to run at a specified time (the
long argument specifies the time).
3. postDelayed(Runnable, long) - schedules a runnable to run after a specified delay
(the long argument specifies the delay).
The data collection service uses the first and third approaches to manage the start of
collection and its termination after some time. The following simplified code shows
the basic mechanism of this process:
// Handlers for the three phases of recording.
private final Handler hndlStartRecording = new Handler();
private final Handler hndlRecording = new Handler();
private final Handler hndlEndRecording = new Handler();

private final Runnable prcsStartRecording = new Runnable() {
    @Override public void run() {
        // The recording phase is posted to execute immediately.
        hndlRecording.post(prcsRecording);
        // The end of the recording is scheduled.
        hndlEndRecording.postDelayed(prcsEndRecording, recordingLength);
        // The start of the next recording is scheduled.
        hndlStartRecording.postDelayed(prcsStartRecording, recordingInterval);
    }
};

private final Runnable prcsRecording = new Runnable() {
    @Override public void run() {
        // One sensor reading is recorded, then the runnable re-posts itself.
        hndlRecording.postDelayed(prcsRecording, sensorDelay);
    }
};

private final Runnable prcsEndRecording = new Runnable() {
    @Override public void run() {
        // Recording is finished: pending recording callbacks are removed.
        hndlRecording.removeCallbacks(prcsRecording);
    }
};

// The start phase of recording is posted to execute (called from the service).
hndlStartRecording.post(prcsStartRecording);
For the sake of simplicity, the code above only shows the mechanism of the Handlers in
the data collection service. All the designed operations for the recording phases were
developed inside these runnables.
Sensor Listeners
The sensor listeners for the data collection service were implemented with the
SensorManager and SensorEventListener interfaces from the Android API [22]. Two
functions were considered for implementation:
1. onAccuracyChanged() - called by the Android system whenever the accuracy of the
sensor values changes. In the data collection service, as mentioned in the design
chapter, the sensor delay was set to the fastest value and the density of collection
was managed by Handlers. In this design the accuracy therefore never changes, and the
method did not need to be implemented.
2. onSensorChanged() - receives sensor values as SensorEvent objects. The main
development work was done in this function.
A SensorEvent object contains three floating-point values representing the current
state of a particular sensor. The accelerometer listener provides the acceleration of
the device on a three-axis coordinate system in m/s2, so the three float values inside
the SensorEvent object represent the acceleration along the x, y and z axes. The
gyroscope takes a similar approach: it measures the rate of rotation in rad/s around
the device's x, y and z axes, and the SensorEvent object in the gyroscope listener
stores this rate as three float values respectively.
4.1.1.2 Classification Service
As described in chapter 3.1.2, the implementation of this service involves three main
steps: processing the data, classifying it and announcing the result. Data processing
is implemented inside the extractFeatures() function. It requests sensor readings from
both temporary databases and from the smartwatch data receiver, and combines all the
new raw data into one feature container object holding the following values:
string time; Time and date of recording the sensor reading on smartphone
float accMX; Smartphone accelerometer x axis value
float accMY; Smartphone accelerometer y axis value
float accMZ; Smartphone accelerometer z axis value
float gyroMX; Smartphone gyroscope x axis value
float gyroMY; Smartphone gyroscope y axis value
float gyroMZ; Smartphone gyroscope z axis value
float accWX; Smartwatch accelerometer x axis value
float accWY; Smartwatch accelerometer y axis value
float accWZ; Smartwatch accelerometer z axis value
float gyroWX; Smartwatch gyroscope x axis value
float gyroWY; Smartwatch gyroscope y axis value
float gyroWZ; Smartwatch gyroscope z axis value
long lId; Dummy label id for recorded action
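A minimal sketch of this container as a plain Java class (the class name and the toFeatureVector() helper are assumptions for illustration; the fields follow the list above):

```java
/** Holds one combined smartphone + smartwatch sensor reading. */
public class FeatureContainer {
    String time;                   // time and date of the smartphone reading
    float accMX, accMY, accMZ;     // smartphone accelerometer x, y, z
    float gyroMX, gyroMY, gyroMZ;  // smartphone gyroscope x, y, z
    float accWX, accWY, accWZ;     // smartwatch accelerometer x, y, z
    float gyroWX, gyroWY, gyroWZ;  // smartwatch gyroscope x, y, z
    long lId;                      // dummy label id for the recorded action

    /** Flattens the twelve sensor values into one feature vector. */
    double[] toFeatureVector() {
        return new double[] {
            accMX, accMY, accMZ, gyroMX, gyroMY, gyroMZ,
            accWX, accWY, accWZ, gyroWX, gyroWY, gyroWZ
        };
    }

    public static void main(String[] args) {
        FeatureContainer f = new FeatureContainer();
        System.out.println("features = " + f.toFeatureVector().length);
        // prints "features = 12"
    }
}
```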
As mentioned before, classification of the data occurs on the cloud server. The
recorded dataset is sent to this server using the kSOAP library [23]. This library
provides a simple framework for packing all the data into a SOAP object and sending it
through the HttpTransportSE interface [24].
A number of configuration properties were set on these SOAP objects, along with
customised XML marshalling1 classes. The communication protocol between the smartphone
and the web service is XML-based, and due to the complex structure of the web
service's input parameters, custom marshalling classes were essential.
The announcement mechanism for the classification result was developed using the
internal messaging system between services described in chapter 4.1.1.5.
1 Marshalling is the process of transforming the memory representation of an object
into a data format suitable for transmission. In this case, the recorded dataset is
transformed into XML objects by marshalling.
4.1.1.3 Alarm Service
The alarm setter is implemented using the AlarmManager interface of the Android API
[25]. An AlarmObject custom data type was built to store all the required information
about an alarm, such as its date, time, frequency, required action, etc.
As the alarm receiver, a custom Android broadcast receiver was implemented. A
broadcast receiver is a special interface provided by the Android API for invoking
background services. Like the background services, it must also be registered in the
Android manifest, as shown in figure 4.6.
Figure 4.6. Registration of Alarm Receiver in Android manifest.
This registration lets the Android system invoke the service attached to the receiver
even if the whole application has been terminated. Building this behaviour was an
important requirement for the reliability of the application.
4.1.1.4 SendToClassify Service
After data recording finishes on the smartwatch, the SendToClassify service processes
the collected data and transfers it to the smartphone application. In the processing
stage for the recorded sensor readings, this service is implemented similarly to the
classification service in the smartphone application. However, after the local data
has been extracted from the temporary databases, it is sent to the smartphone using
the PutDataMapRequest interface and the Wearable Data API from Google APIs for Android
[26]. These tools have full control of the Bluetooth connection between the devices.
Delays and instability in the connection are prevented by using the setUrgent()
function from the API and by implementing a custom validateConnection() function. The
validateConnection() function checks the current state of the connection and, if it is
down, tries to reconnect for 10 seconds. If the connection has still not been restored
after 10 seconds, it terminates the process.
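The retry behaviour of validateConnection() can be sketched generically in plain Java. This is a sketch only — the real function works against the wearable connection, and the class, interface and method names here are invented:

```java
public class RetryDemo {
    /** Stand-in for whatever reports the Bluetooth connection state. */
    interface Probe { boolean isUp(); }

    /**
     * Polls the connection until it is up or the timeout elapses,
     * mirroring validateConnection()'s "retry for 10 seconds, then give up".
     */
    static boolean waitForConnection(Probe probe, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!probe.isUp()) {
            if (System.currentTimeMillis() >= deadline) return false;
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A probe that is always up succeeds immediately.
        Probe alwaysUp = new Probe() {
            @Override public boolean isUp() { return true; }
        };
        System.out.println(waitForConnection(alwaysUp, 100, 10)); // prints "true"
    }
}
```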
4.1.1.5 Communication between Services
Communication between the services was implemented using the IBinder interface from
the Android API [27]. IBinder creates an internal messaging service between the
controllers. First of all, the following message types were implemented:
int MSG_REGISTER_CLIENT = 1;
int MSG_UNREGISTER_CLIENT = 2;
int MSG_START_RECORDING = 3;
int MSG_START_CLASSIFICATION = 4;
int MSG_RECORDING_DONE = 5;
int MSG_COMBINING_DONE = 6;
int MSG_CLASSIFIER_RESULT = 7;
int MSG_REGISTER_NEW_ALARM = 8;
int MSG_START_CLASSIFICATION_HIDDEN = 9;
int MSG_START_RECORDING_HIDDEN = 10;
int MSG_RECORDING_DONE_HIDDEN = 11;
int MSG_COMBINING_DONE_HIDDEN = 12;
int MSG_CLASSIFIER_RESULT_HIDDEN = 13;
For instance, to announce the result of a classification, the classification service
sends MSG_CLASSIFIER_RESULT, with the result attached, to all other services.
To build a communication line between two services, the two ends of the communication
were implemented individually:
1. Message sender. When the service is started by the system, it returns its own
IBinder object, which acts as a kind of address for the service. Using this
object, the message sender binds to the client service. During the binding
process, the sender creates a new Android Messenger object with the
configuration properties from the client's IBinder object. After that, the
MSG_REGISTER_CLIENT message is sent to the client to register the sender.
2. Message receiver (client). As the message receiver, handleMessage from the Android
Handler interface is implemented. This Handler has a similar implementation to
the one in the data collection service, but it manages Android messages rather
than runnable events. Finally, the implementation of the message receiver
contains a switch statement to process the different messages.
4.1.2 Database implementation
Android applications use the SQLite database engine [28]. To implement the databases
for the different parts of the application, the Android SQLiteOpenHelper class is
used. It provides a flexible architecture for easily configuring and updating the
main database structure by overriding the onCreate() and onUpgrade() functions.
During the implementation, four separate databases were considered for DroidZepp:
1. Accelerometer temporary database
2. Gyroscope temporary database
3. Actions database
4. Alarm database
Accelerometer and gyroscope sensor listeners of the data collection service record their
readings into the accelerometer and gyroscope temporary databases, respectively. The actions
database contains the labelled train dataset, while the alarm database stores the required
information about alarms.
In order to manage the temporary databases, three functions were implemented:
1. addXYZ(). Adds triple values (x, y, z axis) of one sensor reading to the database.
2. getAllData(). Returns all values from the database as an ArrayList of the XYZ
custom data type.
3. clearTable(). Empties the tables of the temporary database.
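These three functions can be sketched against a plain in-memory store — a stand-in for the SQLite-backed implementation, with the class layout assumed for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class TemporaryDb {
    /** Custom data type for one triple sensor reading. */
    public static class XYZ {
        public final float x, y, z;
        public XYZ(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    // In-memory stand-in for the SQLite table.
    private final List<XYZ> table = new ArrayList<XYZ>();

    /** Adds triple values (x, y, z axis) of one sensor reading. */
    public void addXYZ(float x, float y, float z) { table.add(new XYZ(x, y, z)); }

    /** Returns all values as an ArrayList of the XYZ custom data type. */
    public List<XYZ> getAllData() { return new ArrayList<XYZ>(table); }

    /** Empties the table of the temporary database. */
    public void clearTable() { table.clear(); }

    public static void main(String[] args) {
        TemporaryDb db = new TemporaryDb();
        db.addXYZ(0.1f, 0.2f, 0.3f);
        System.out.println("rows = " + db.getAllData().size()); // prints "rows = 1"
    }
}
```

The real implementation issues the equivalent INSERT, SELECT and DELETE statements through SQLiteOpenHelper.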
The actions database has a more complicated structure than the others:
1. addNewLabel(). Adds a new action label to the labels table.
2. updateLabel(). Updates an existing label in the labels table.
3. getDataSet(). Returns all feature sets of the train dataset from the actions table
as a three-dimensional array of doubles.
4. getTestData(). Returns all feature sets of the test dataset from the actions table
as a two-dimensional array of doubles.
5. getLabels(). Returns all labels of the train dataset from the labels table as an
array of integers. (A unique label ID is attached to each class/action name.)
6. getClasses(). Returns the distinct classes from the labels table.
7. deleteRecordedAction(). Removes a specified action from the labels table and the
actions table.
Figure 4.7 shows the overall design of the database structure of DroidZepp.
Figure 4.7. Database structure of DroidZepp
The main reason for splitting the database into four pieces is to create more reliable
overall data storage for DroidZepp. The four databases are stored as four 'db' files
in the file system of the application. Imagine instead an architecture where all these
data are handled by multiple tables inside a single file: all three background
services, plus the two sensor listeners, will sooner or later access this database at
the same time. The situation amounts to multiple threads reading and writing a single
resource. Jenkov explains the potential problems that can occur in this situation,
such as starvation [29]. There are many ways to manage this kind of shared-resource
usage, but, as one might expect, one of the best solutions is simply not to use a
single resource, and to distribute it across multiple files.
4.2 Implementation of the Web Service
As mentioned before, the web service contains the Accord.NET HMM classifiers of
DroidZepp. The service was implemented based on de Souza's Accord.NET HMMs tutorial
[13]. The service receives the following parameters as inputs to the classifier:
1. double[][][] trainDataSet. This three-dimensional array should contain the train
dataset for the classification. Figure 4.8 shows the representation of this
dataset.
Figure 4.8. Graphical representation of three-dimensional train dataset.
2. int[] trainLabels. This input parameter should contain all the true labels of the
train dataset.
3. double[][] testData. The test samples requiring classification should be provided
in a separate two-dimensional array.
4. string[] classes. The names of the distinct classes in the train dataset should be
provided as an extra array of strings.
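The shapes of these parameters can be illustrated with a tiny hand-built dataset. The sizes, class names and values below are invented purely to show the dimensions, written in Java to mirror the smartphone side:

```java
public class DatasetShape {
    /** Returns {sequences, time steps, features} for a train dataset array. */
    static int[] shape(double[][][] data) {
        return new int[] { data.length, data[0].length, data[0][0].length };
    }

    public static void main(String[] args) {
        // 2 recorded sequences, each 3 readings long, each reading a
        // 12-value feature vector (matching the feature container fields).
        double[][][] trainDataSet = new double[2][3][12];
        int[] trainLabels = { 0, 1 };             // one true label per sequence
        double[][] testData = new double[3][12];  // one unlabelled sequence of readings
        String[] classes = { "Walking", "Sitting" };

        int[] s = shape(trainDataSet);
        System.out.println(s[0] + " sequences, " + s[1] + " steps, "
                + s[2] + " features");
        // prints "2 sequences, 3 steps, 12 features"
    }
}
```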
Inside the function, instances of the HiddenMarkovClassifier class are created with
some extra configuration. The HiddenMarkovClassifierLearning class is used as the
inner learning model.
The web service can be accessed at www.droidzepp.azurewebsites.net. To use its
functionality, clients should send their requests over the SOAP protocol [23] with
XML. A sample SOAP 1.1 request and response is given in Appendix C.
Chapter 5
Testing, Evaluation and Results

This chapter outlines the testing and evaluation steps of the project DroidZepp. It
also describes the final results of the project.
5.1 Testing Environment
In order to run and test the application, two options are provided by Google:
1. Testing on a virtual device. Android Studio can create an Android virtual device
(AVD) on the operating system, and the developed application can be tested on
this platform [30].
2. Testing on a real Android device. By connecting a real Android device to the
computer with a USB cable, Android Studio can quickly deploy and run the
application on the device. If the developer has an actual Android device, this is
preferred over virtual-device testing, due to the slowness and high memory usage
of an AVD; it is also always better to try code on a real device for more
accurate results. The project DroidZepp was deployed and tested on real Android
devices throughout the development process.
During the design stage of the project, unit testing practices were considered a
viable option for DroidZepp. However, in order to apply unit testing, the completely
new Android test interface had to be investigated.
5.1.1 Android Unit Testing
Initially, the classic JUnit testing framework was considered for the evaluation
process. It was assumed that, since Android applications run on Java, they could
execute JUnit test cases and provide test results for the desired functionality.
However, most parts of an Android application explicitly require an Android device to
perform unit testing. The reason for this behaviour is the applications' dependency on
the Android API. For example, in order to write a simple JUnit test case that makes a
single call to the database (which is located on an Android device and depends on the
Android API), the whole application has to be deployed to the device, and only then
can the unit tests perform system and database calls. The Android test interface
therefore consists of two different test case formats:
1. Local unit tests, which run on any Java virtual machine.
2. Instrumented unit tests, which require an Android device to run.
Figures 5.1 and 5.2 show examples of local and instrumented unit tests.
Figure 5.1. Example Android local unit test sample
Figure 5.2. Example Android instrumented test sample.
Some more test examples can be found on Appendix D.
Android Studio provides a clear user interface for the results of test cases. Local
unit tests can export their results to well-designed HTML pages, as can be seen in
figure 5.3.
Figure 5.3. Result of example local unit test case.
The outcome of a simple instrumented unit test case can be seen in figure 5.4.
Figure 5.4. Result of example instrumented unit test case.
For more unit test case results see Appendix E.
5.1.2 Evaluation of Classifier
One of the main reasons for selecting the Accord.NET machine learning framework as the
classifier for DroidZepp was its very high accuracy results. The classifier was tested
with the activity recognition dataset provided by the Pervasive Systems research group
[31]. The dataset was collected with a Samsung Galaxy SII smartphone from different
parts of the human body: arm, belt, wrist and pocket. The wrist position provided a
perfect simulation of smartwatch data collection for the project DroidZepp.
The dataset was divided into two equal subsets:
1. Training data
2. Testing samples
Both training and testing samples contained the following action recordings (the
accelerometer and gyroscope recordings were picked):
Running – 28 examples
Sitting – 29 examples
Standing – 29 examples
Going upstairs – 21 examples
Walking – 30 examples
The HMMs classifier of the Accord.NET framework provided a satisfactory result, with
76% accuracy on average. The confusion matrix for the outcome is given in table 5.1.
Action           Running  Sitting  Standing  Going upstairs  Walking
Running          0.36     0.01     0.16      0.10            0.47
Sitting          0        0.86     0.08      0.02            0.04
Standing         0.03     0.14     0.65      0.01            0.17
Going upstairs   0.11     0.10     0.10      0.65            0.13
Walking          0.03     0.05     0.05      0.10            0.77

Table 5.1. Confusion matrix for HMMs classification of the Pervasive dataset.
5.2 Results
This section of the report presents the final application. Figures 5.4 to 5.10 show
screen captures of the application.
Figure 5.4. The user interface of DroidZepp; no actions have been recorded yet.
Figure 5.5. Adding new action to DroidZepp.
Figure 5.6. Recording phase of DroidZepp.
Figure 5.7. After recording, the user can either classify the recorded action or enter
the name of the action.
Figure 5.8. UI of DroidZepp with three recorded actions.
Figure 5.9. Pressing and holding an action displays a menu for setting an alarm or deleting
the recorded action.
Figure 5.10. The user gets a notification about a missed required task.
Chapter 6
Conclusion

The initial objectives of the project are revisited in this chapter to reflect on how
they have been achieved. The chapter also outlines the overall knowledge gained during
the project.
6.1 Reviewing the Objectives of the Project
As discussed in chapter 1.2, the project has three main objectives:
1. Collection of user data. This objective is achieved: the current version of the
application is capable of monitoring the accelerometer and gyroscope sensors of
smartwatches and smartphones and can easily record human actions.
2. Classification using a machine learning technique. This objective is also
achieved, but can be improved. The current classifier has some limitations in
distinguishing very similar actions in a small training space. However, the
modular design of the system makes it easy to add a new classifier that performs
better than HMMs.
3. Applying the results to daily life. This objective is also well achieved through
the functional automated alarm system. The current version of the alarm system is
very stable and reliable.
6.2 Gained Knowledge During the Project
The project helped my personal development in terms of the technical knowledge gained.
I did in-depth research on the Android platform and background services, and also
investigated sensor technologies and their applications in today's world. In addition,
the development of the classifier provided me with two very valuable experiences:
1. Experience with current machine learning techniques in industry.
2. Experience with Microsoft development products such as Visual Studio, ASP.NET web
services, SOAP, the Azure cloud platform, etc.
6.3 Summary of the Project
This report aimed to demonstrate the work and effort spent on the final year project.
It outlined the motivations and objectives of the project DroidZepp and provided a
real-world example to show the significance of such a system. The design and
development chapters gave detailed behind-the-scenes information about DroidZepp, and
the testing and evaluation chapter showed the evaluation progress of DroidZepp and
presented the outcome of the project.
References

1. James Kendrick. "Mobile technology: The amazing impact on our lives". April 30, 2013. http://www.zdnet.com/article/mobile-technology-the-amazing-impact-on-our-lives/
2. Steve Henn. "Smartwatch is next step in 'Quantified Self' Life-Logging". September 9, 2013. http://www.npr.org/sections/alltechconsidered/2013/09/10/220726721/smartwatch-is-next-step-in-quantified-self-life-logging
3. Priya Viswanathan. "Top 5 Tools for Multi-Platform Mobile App Development". December 2015. http://mobiledevices.about.com/od/mobileappbasics/tp/Top-5-Tools-Multi-Platform-Mobile-App-Development.htm
4. iOS Developer Library. "Background Execution". https://developer.apple.com/library/ios/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/BackgroundExecution/BackgroundExecution.html
5. IDC. "Smartphone OS Market Share, 2015". http://www.idc.com/prodserv/smartphone-os-market-share.jsp
6. Android Developers. "Introduction to Android". http://developer.android.com/guide/index.html
7. Android Developers. "Package Index". http://developer.android.com/reference/packages.html
8. Android Developers. "Building Apps for Wearables". http://developer.android.com/training/building-wearables.html
9. Android Developers. "Introduction to Sensors". http://developer.android.com/guide/topics/sensors/sensors_overview.html#sensors-intro
10. Oscar D. Lara and Miguel A. Labrador. "A Survey on Human Activity Recognition using Wearable Sensors". http://www.usf.edu/engineering/cse/documents/recent-research-paper.pdf
11. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4118351/
12. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3244008/
13. http://www.codeproject.com/Articles/541428/Sequence-Classifiers-in-Csharp-Part-I-Hidden-Marko
14. http://accord-framework.net/
15. https://www.dreamspark.com/Product/Product.aspx?productid=99
16. https://www.visualstudio.com/products/visual-studio-community-vs
17. https://docs.oracle.com/javase/7/docs/api/java/lang/Thread.html
18. http://www.vogella.com/tutorials/AndroidBackgroundProcessing/article.html
19. http://developer.android.com/reference/java/util/TimerTask.html
20. http://androidtrainningcenter.blogspot.co.uk/2013/12/handler-vs-timer-fixed-period-execution.html
21. http://developer.android.com/reference/android/os/Handler.html
22. http://developer.android.com/reference/android/hardware/SensorEventListener.html
23. http://kobjects.org/ksoap2/index.html
24. http://kobjects.org/ksoap2/doc/api/org/ksoap2/transport/HttpTransportSE.html
25. http://developer.android.com/reference/android/app/AlarmManager.html
26. https://developers.google.com/android/reference/com/google/android/gms/wearable/PutDataMapRequest
27. http://developer.android.com/guide/components/bound-services.html
28. https://www.sqlite.org/
29. http://tutorials.jenkov.com/java-concurrency/read-write-locks.html
30. http://developer.android.com/tools/devices/index.html
31. http://ps.cs.utwente.nl/Datasets.php
Appendices

Appendix A

Appendix B

Appendix C

Appendix D

Appendix E