
APPLICATION OF CONSUMER-OFF-THE-SHELF (COTS) DEVICES TO

HUMAN MOTION ANALYSIS

by

Mark Tomaszewski

February 2017

A thesis submitted to the

Faculty of the Graduate School of

the University at Buffalo, State University of New York

in partial fulfillment of the requirements for the degree of

Master of Science

Department of Mechanical and Aerospace Engineering


Dedicated to

My family and close friends – for their enthusiastic support and encouragement.

This thesis is a direct product of their love.


Acknowledgements

I must acknowledge a number of people who have had an impact on my

professional, academic, and social growth during my study for this thesis. First, I would like

to thank my advisor, Dr. Venkat Krovi, for offering me an education that goes far beyond the

classroom and the laboratory by incorporating endless opportunities for intellectual,

technical, and professional growth. I would also like to thank Dr. Gary Dargush and Dr. Ehsan

Esfahani for serving as members of my committee. These three professors have collectively

contributed toward the majority of inspiration I have received in my time at university.

I would also like to extend thanks to my colleagues with whom I have shared many

profound experiences as coworkers, labmates, and friends. Thank you to Matthias Schmid

for your guidance and assistance in our shared professional endeavors. Thank you to all of

the members of ARMLAB, both students and interns. In particular, thank you S.K. Jun, Xiaobo

Zhou, Suren Kumar, Ali Alamdari, Javad Sovizi, Yin Chi Chen, and Michael Anson. In some

way, we have all done this together.


Contents

Abstract
1 Introduction
2 Background
   2.1 Myo Overview
   2.2 Sphero Overview
3 Software Tools
   3.1 Myo SDK MATLAB MEX Wrapper Development
   3.2 Sphero API MATLAB SDK Development
   3.3 Application Cases
4 Mathematical Methods
   4.1 Coordinate Frames, Vectors, and Rotations
   4.2 Working with Sensor Data
   4.3 Upper Limb Kinematics
   4.4 Calibration Problem
   4.5 Experiment Definition
5 Motion Analysis
   5.1 Experimental Setup
   5.2 Data Collection
   5.3 Data Processing and Calibration
   5.4 Analysis Results
6 Discussion
Appendix A Source Code
References


Figures

Figure 2-1: Myo teardown disassembly
Figure 2-2: Myo mainboard (front and back)
Figure 2-3: Myo APIs and middleware stack
Figure 2-4: Sphero 2.0 internal electronic and mechanical components
Figure 2-5: Sphero BB-8™ mainboard
Figure 2-6: Sphero API and middleware stack
Figure 3-1: MEX function states, transitions, and actions
Figure 3-2: Myo MATLAB class wrapper behavior
Figure 3-3: Sphero API send flowchart
Figure 3-4: Sphero API receive flowchart
Figure 3-5: Myo CLI EMG logger plot
Figure 3-6: Myo GUI MyoMexGUI_Monitor
Figure 3-7: Myo GUI MyoDataGUI_Monitor
Figure 3-8: Sphero CLI gyroscope logger plot
Figure 3-9: Sphero GUI SpheroGUI_MainControlPanel
Figure 3-10: Sphero GUI SpheroGUI_Drive
Figure 3-11: Sphero GUI SpheroGUI_VisualizeInputData
Figure 3-12: Myo and Sphero upper limb motion capture
Figure 4-1: Upper limb forward kinematics model
Figure 4-2: Calibration point definitions
Figure 4-3: Calibration point calculated vectors
Figure 4-4: Choice of task space coordinate frame
Figure 4-5: Calibration objective function error vector
Figure 4-6: Experimental analysis plane error vector calculation
Figure 5-1: Calibration point fixture
Figure 5-2: Calibration fixtures assembled onto jig
Figure 5-3: Experimental setup calibration jig and subject
Figure 5-4: Data visualization provided by MyoSpheroUpperLimb
Figure 5-5: Subject performing the t-pose to set the home pose
Figure 5-6: Calibration data visualization for trial 4
Figure 5-7: The effect of poor calibration correspondence on plane error
Figure 5-8: Magnitude of plane error e_p for three reaches in trial 4
Figure 5-9: Inverse kinematics joint angle trajectories
Figure 5-10: Magnitude of error introduced by inverse kinematics


Tables

Table 1-1: Motion capture systems
Table 2-1: Myo SDK offerings
Table 2-2: Myo SDK versus Bluetooth protocol
Table 2-3: Sphero SDK offerings
Table 3-1: MyoData data properties
Table 3-2: Sphero API CMD fields
Table 3-3: Sphero API RSP fields
Table 3-4: Sphero API MSG fields
Table 3-5: Command definition for the Ping() command
Table 3-6: Response definition for the Ping() command
Table 3-7: Command definition for the Roll() command
Table 3-8: Response <DATA> definition for the ReadLocator() command
Table 3-9: Response <DATA> interpretation for the ReadLocator() command
Table 3-10: Data source MASK bits for SetDataStreaming() command and message
Table 3-11: Data source MASK for streaming accelerometer data
Table 3-12: Command parameters for SetDataStreaming command
Table 3-13: Message definition for the DataStreaming message
Table 3-14: Message <DATA> definition for the DataStreaming message
Table 3-15: Message <DATA> interpretation for the DataStreaming message
Table 3-16: Sphero data sources
Table 4-1: Inverse kinematics joint variable definitions
Table 4-2: Calibration constraints summary
Table 4-3: Calibration constraint set
Table 4-4: Experimental protocol state progression and timing
Table 5-1: Calibration optimization statistics
Table 5-2: Calibration subject geometric parameter results


Abstract

Human upper limb motion analysis with sensing by way of consumer-off-the-shelf

(COTS) devices presents a rich set of scientific, technological, and practical implementation

challenges. The need for such systems is motivated by the popular trend toward the

development of home based rehabilitative motor therapy systems in which patients perform

therapy alone while a technological solution connects the patient to a therapist by

performing data acquisition, analysis, and the reporting of evaluation results remotely. The

choice to use COTS devices mirrors the reasons why they have become universally accepted

in society in recent times. They are inexpensive, easy to use, manufactured to be deployable

at large scale, and satisfactorily performant for their intended applications. These same qualities align with the requirements that make them suitable for use as low-cost equipment in academic research.

The focus of this work is on the development of a proof of concept human upper

limb motion capture system using Myo and Sphero. The end-to-end development of the

motion capture system begins with developing the software that is required to interact with

these devices in MATLAB. Each of Myo and Sphero receive a fully-featured device interface

that’s easy to use in native MATLAB m-code. Then, a theoretical framework for upper limb

motion capture and analysis is developed in which the devices’ inertial measurement unit

data is used to determine the pose of a subject’s upper limb. The framework provides

facilities for model calibration, registration of the model with a virtual world, and analysis

methods that enable successful validation of the model’s correctness as well as evaluation of

its accuracy as shown by the concrete example in this work.


1 Introduction

Human motion analysis is relevant in the modern day context of quantitative

home-based motor rehabilitation. This motivational application domain frames a rich

landscape within which many challenges exist with respect to the core technology being

leveraged, application-specific requirements, and societal factors that contribute to the

adoption of such systems. The challenges facing the developers of home-based motor

rehabilitation systems fall into at least two broad categories. Perhaps most importantly, there

are application specific requirements that must be met for the solution to be accepted by

practicing professionals. There also exist technological challenges to be overcome, some of

which arise as a result of the previously mentioned application specific challenges.

The scenario which provides an example of the utility inherent in a home-based

motor rehabilitation scheme is one in which the limitations of traditional rehabilitation

therapy are mitigated by the introduction of a technological solution that does not inhibit the

ability of therapists to provide similar quality of patient care. The so-called traditional

rehabilitation scheme typically takes place directly between a therapist and the patient in

the therapist’s office. The fact that patients must travel to receive therapy immediately

constrains the frequency with which they can receive care in most practical situations.

Hence, a first set of challenges is identified as the lessening of the time and distance gap

between patients and the point of care. The provision of care itself is characterized by the

therapist’s human knowledge of the patient’s condition over time. Evaluation of the patient’s

condition is informed by the therapist's knowledge and experience in treating patients, which guide the scores assigned to his or her own perception of the patient's therapy task performance. This


application domain knowledge presents a secondary set of application specific challenges to

be overcome with the home-based solution.

The basic technological problem to be addressed in the home-based motor

rehabilitation solution is one in which the desired outcome is a therapist-approved reporting

of the patient’s task performance quality. The development of such metrics is a problem that

is to be considered at a stage when the technological solution provides sufficiently accurate

representation of the subject's motion. Assuming that this requirement is met, the

development of motion derived metrics can follow. In addition to providing an accurate

motion representation of the subject, it’s also desirable for the system to provide the

capability for interactivity of the subject with a known task environment containing any

combination of physical or virtual fixtures. This enables monitoring of the subject’s ability to

perform interaction tasks that may be necessary in daily life. The representation of the

subject’s motion as well as the task environment must then also be reliably accurate such as

to be proven through validation testing.

The sensory data acquisition system technology used to support human motion

analysis varies in cost from the order of hundreds of dollars to as much as hundreds of

thousands of dollars. Similar to this gross variation in technology cost, the variety of motion

capture systems also exhibit differing precision, accuracy, and repeatability characteristics.

They also show similar variation in the complexity of setup and calibration procedures which

directly affects the required user skill and the need for these systems to be installed in controlled

environments.

A representative range of product offerings that may be used for such human

motion capture systems is shown in Table 1-1 along with indication of the magnitude of cost


for each system or device. The top-end system here, made by Vicon, represents the highest

quality motion capture data, but also the highest demands on users and the installation

environment. This optical marker tracking system requires that many cameras be installed

in a rather large room, such as a designated motion analysis laboratory space, and must not

be disturbed for the duration of motion capture activities. A calibration must be performed

in which optical markers are used to “wand” the motion capture volume to perform extrinsic

calibration of the cameras, and the subject must also be instrumented with optical markers

that are affixed to the body precisely on known anatomical landmarks. This system

represents one that is infeasible as a candidate for therapy patients to operate alone at home.

The Motion Shadow motion capture suit uses inertial measurement unit (IMU) sensors to

capture the spatial orientation of the subject’s limbs. Contact based sensor systems such as

this require no environment preparation, impose very few environmental requirements, and add minimal complexity to set up the subject-worn apparatus. The only environmental requirement is

minimal presence of electromagnetic interference (EMI) that would introduce errors into

the IMUs’ onboard magnetometer sensor readings. With much less cost (although still

significant) and greater usability for nonexpert users, the tradeoff is slightly less fidelity

(lower degree of freedom motion model), precision (data resolution), and accuracy in the

motion representation. This trend is one that continues as we move down the list.


Table 1-1: Motion capture systems
Images taken from the device manufacturer websites: vicon.com, motionshadow.com, microsoftstore.com, myo.com, and sphero.com

Name             Sensing Modality                  Cost Magnitude
Vicon            Optical Markers                   $100,000
Motion Shadow    Wearable IMU (Navigation grade)   $10,000
Kinect           Vision (RGB & IR depth)           $100
Myo and Sphero   Wearable IMU (Consumer grade)     $100

Moving one step below the Motion Shadow suit, we cross a device accessibility boundary that makes the remaining devices highly desirable for applications in research.

Perhaps the gold standard in consumer motion capture products is the Microsoft Kinect

sensor. With a price on the order of hundreds of dollars and well developed community

software support for Windows computers as well as the MATLAB environment, this has been

a popular choice for vision based motion capture in academic research applications for many

years. With similar benefits, the Myo gesture control armband, made by Thalmic Labs, and


the Sphero robotic ball are runners-up to the Microsoft Kinect. In addition to the IMU sensors in

Myo and Sphero, these devices offer other features that make them desirable for use in

motion capture applications for motor rehabilitation.

The Kinect sensor requires virtually no setup and its only requirement on the

environment is direct line of sight to the entire subject during use. The device provides a

representation of human motion that is encoded by the translational positions of the joints

in a skeleton model of the subject. Two limitations on the quality of data received from the Kinect follow from these two qualities. A pitfall resulting from violation of the line of sight

requirement is that for frames in which there is even partial occlusion of the subject, the

skeleton estimate will either be lost or will fail tragically with an incorrect pose. Such

occlusions can also arise from self-occlusion of the subject, so that certain tasks may not be permissible for capture using the Kinect sensor. Also, the joint position kinematics

description fails to capture the axial rotation of skeleton segments that are parallel to the

image frame. This is no accident; it is a fundamental limitation in the utility of depth

data for motion capture. A final remark on the limitations of Kinect is that the skeleton

estimation is not subject to any sort of temporal continuity relationship. This means that

higher order motion analysis (for example, velocity and acceleration) must be performed on data that has been filtered in some way to smooth this frame-to-frame noise.
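
To make this concrete, the short sketch below smooths a noisy position signal with a moving average before differentiating it; the signal is synthetic, and the 30 Hz rate is an assumption standing in for Kinect skeleton data. Differentiating the raw samples directly yields a visibly noisier velocity estimate.

fs = 30;                              % assumed skeleton sample rate [Hz]
t  = (0:1/fs:5)';                     % five seconds of samples
x  = sin(t) + 0.01*randn(size(t));    % synthetic noisy joint position [m]
k  = 5;                               % moving-average window length
xs = conv(x, ones(k,1)/k, 'same');    % smoothed position
v  = gradient(xs, 1/fs);              % velocity estimate from smoothed data [m/s]
vRaw = gradient(x, 1/fs);             % naive estimate: amplifies the noise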

Myo and Sphero bear sensing characteristics that make these devices competitive

options compared to the Kinect due to the fact that they rely on IMU sensor data. As was the

case with Motion Shadow, these devices must only be affixed to the subject in some way. This

is attainable since Myo is designed to be worn on the subject’s arm whereas Sphero is

appropriately sized for the subject to hold it in the hand. In this way, a combination of these


devices can be used to capture the pose of a human upper limb. Also, like for Motion Shadow,

the only environmental requirement is minimal EMI. Due to the fact that the IMU sensors for

these devices are of lesser quality than those used in Motion Shadow, we also expect that the

typical gyroscope drift error will be evident in the output data from these sensors due to a

combination of factors that influence sensor calibration errors. Compared to the quality of

Kinect data, the use of both of these devices will not be affected by line of sight occlusion nor

will the kinematic representation fail to identify the orientation of skeleton segments. This

is because the sensors provide their spatial orientation as a data output in the form of a unit

quaternion. Also, the estimated quaternion is the result of an estimation algorithm that, by

its nature, filters and smooths the data. Thus, the kinematic representation from these

devices does not suffer from noise as is the case for Kinect.
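
To preview how this orientation output is used, a unit quaternion q = [w x y z] can rotate a vector directly in m-code via the standard identity v' = v + 2 r × (r × v + w v), where r = [x y z] is the vector part of q. The sketch below applies a hypothetical 90 degree rotation about the z axis rather than live device data.

q = [cosd(45) 0 0 sind(45)];                  % [w x y z]: 90 degrees about z
v = [1 0 0];                                  % vector expressed in the sensor frame
r = q(2:4);                                   % vector part of the quaternion
vRot = v + 2*cross(r, cross(r, v) + q(1)*v);  % rotated vector, expect [0 1 0]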

In addition to the previously mentioned benefits of Myo and Sphero as IMU data

acquisition devices compared to the vision based Kinect system, we also note other

functionality that is supported by Myo and Sphero because of the intended use for each

device. As a gesture control armband, a particular variation on a natural user interface (NUI)

input device, Myo contains eight surface electromyography (EMG) sensors that are used by

its onboard processor to detect gestures performed by the subject. Myo provides access to

the raw EMG data along with the higher level gesture detection state. This additional sensing

modality is very much relevant to motor rehabilitation, as it may be useful to enhance the

characterization of subject task performance. The Sphero robotic ball is not purpose-built as an input device, although this is a valid secondary use case. Its primary intended

functionality is to be teleoperated as a robotic toy for entertainment purposes. This provides

future work with the opportunity to create hybrid virtual reality environments in which


games that exist in both virtual and physical reality can be developed to exercise the

rehabilitation subject in a more immersive way.

These promising attributes of Myo and Sphero motivate the case for using them

in a novel way to build an IMU sensor based human upper limb motion capture

system for academic research. In contrast to the software support status for Microsoft Kinect,

the support communities for these devices have not yet matured. For this

reason, software tools must be developed with which to interact with the devices in a

common environment that’s accessible to a broad spectrum of target users.

Perhaps the first choice to be made in an implementation of software support for

these devices is selection of the end user development environment. In the typical case,

software support is provided for devices in the way of precompiled binary executable

libraries that are linked to the user’s application implementation through code bindings

written in some programming language. In many cases, the programming language here is

chosen to be very general and extensible, such as C++, or otherwise one that is a platform

independent interpreted language such as Python or Javascript. In cases involving lower

level device interface specifications, the provided interface may be closer to the physical

communication layer and rely upon the user to implement all supporting application

software. Although this is standard practice in hardware device application software

support, this model assumes that users are proficient with the chosen programming language

in addition to a suitable development environment and tools. For many target end users,

such as undergraduate students, graduate students and academic researchers, and

nontechnical researchers, these assumptions may be prohibitive to working with the

devices.


Possible candidates for choice as the integrated development environment (IDE)

that is suitable for the academic and research environment include solutions such as The

MathWorks’ MATLAB and LabVIEW by National Instruments. Both of these environments

provide users with an accessible interface to compiled software along with facilities to test

and debug programmatic solutions and utilities to visualize application data. Although both

of these IDEs could be used, we believe that MATLAB is a more suitable candidate due to its

slightly less structured, and more extensible, program development capabilities. Since

LabVIEW is primarily a graphical programming environment intended for use in data

capture and visualization, it may not provide the best possible environment in which to

interface with devices requiring time-evolving changes to control state.

We can also look to the past success of other comparable devices with existing

support for MATLAB to gain some insight. For example, there exist publicly available projects

for Microsoft’s Kinect V1 [1] and Kinect V2 [2]. The reach of these projects to the MATLAB

user community is evidenced by average download rates of 200-400 downloads per month

and average ratings of 4.8 out of 5 stars. The benefits of software support packages such as these are stated quite well by the developers of the Kinect V2 package. According to Terven and Córdova-Esparza, MATLAB implementations trade roughly 30% performance degradation, relative to native code written against the Kinect software development kit (SDK), for an order of magnitude reduction in code size [3].

In addition to the utility and reach of the software solution, the implementation

should also be correct as well as conformant to the implementation of the underlying

interface without obscuring the device capabilities from the user. Other interfaces to Myo

and Sphero exist for the MATLAB environment, but in various ways each of these fails to


adhere to these requirements. In [4], Boyali and Hashimoto utilize the matMYO project [5]

to acquire data from Myo for their application in gesture classification research. This

implementation is used only for the batch recording of data for future offline analysis. The

data is not made available to the user until recording has completed, thus greatly limiting the

capability of the interface to be used for interactive applications. This implementation also

assumes that the collected dataset is synchronized and complete with no missing samples

without the performing any validity checks. Before the start of this work, the Sphero

MATLAB Interface [6] was made available by Lee to provide users with the capability to send

a handful of synchronous commands to Sphero in order to move it within its environment

and read odometry data, for example. Within one week of the release of the code developed

for Sphero in this work, The MathWorks released the Sphero Connectivity Package [7]

created by Sethi which provides users with a slightly more complete set of device features.

Both of these alternate interfaces obscure the full capability of Sphero from the user through software abstraction and a lack of implemented features.

The first half of this work focuses on the development of software interfaces to

Myo and Sphero in the MATLAB environment. We set out to achieve the success shown in

the community use of the MATLAB packages for Microsoft Kinect while correctly

representing the available device data and state to the user in a way that’s consistent with

the intended functionality of the device. Although code performance may be suboptimal in

these MATLAB implementations, we realize that the intention of this exercise is to broaden

the spectrum of users who will benefit from programmatic accessibility to these devices.

More importantly, we intend to reduce code size and complexity for user applications while

simplifying the programmatic code path to device data.


Through the development of software tools, we explore and begin to understand

more fully the ways in which the device data can be connected to analysis algorithms to

obtain kinematic representation of the physical world. Development of application case

examples for the Myo and Sphero software tools leads to a combined (Myo and Sphero)

application of human upper limb motion analysis which serves as the so-called “zeroth-

order” approximation to the rest of the motion capture modeling and analysis in this work.

The remainder of this thesis is organized with the following structure. In section

2, Background, we cover the necessary prerequisite information on the Myo and Sphero

devices. In section 3, Software Tools, we develop the open source interface software that

enables academic researchers to use all device features relevant to engineering research

with minimal effort in MATLAB. Then in section 4, Mathematical Methods, we develop the

mathematical framework that is used to implement upper limb motion capture using two

Myo devices and one Sphero using the software tools developed in the previous section.

Section 5, Motion Analysis, documents the implementation of the motion analysis scheme

and presents the results that will allow us to validate the effectiveness of the complete

system. Finally, in section 6, Discussion, we discuss the results with respect to the software

tool development and the mathematical methods before closing with suggestions of rich

areas for future work.

Videos that depict the intermediate results of this work can be found at this

YouTube channel: https://www.youtube.com/channel/UCnrXD_jBuv_P14kC7isMBeQ.

Notable contributions to this video repository include demonstrations of the software tools

as well as visualizations of the virtual representations of the human upper limb generated in

the course of this work.


2 Background

In this section we present an introduction to the core technology for the devices

that we are utilizing in this work. Each of Myo and Sphero will be introduced in terms of their

hardware capabilities and application programming interface (API) software support in

preparation for the development of their middleware software interfaces in the following

section.

2.1 Myo Overview

Thalmic Labs’ Myo gesture control armband is a consumer product that is

marketed for use as a NUI input device for general purpose computers. The core technology

empowering its NUI capability is representative of the modern state of the art in sensing

technology. The main features of Myo include the ability to control applications based upon

high-level outputs in the form of the device’s spatial orientation and the detection of gestures

performed by the user. These outputs are derived on-board Myo from raw data that is

measured by way of an IMU and eight EMG sensors, respectively.

2.1.1 Myo Hardware

An in depth look at the underlying hardware of Myo is found in the documentation

of a device teardown that was performed by a popular hobby electronics company named

Adafruit Industries [8]. Although the marketing materials give some indication of the

expected hardware inside Myo, these pictures of the actual components populating the

inside of its enclosure provide proof of the technology Myo relies upon.

Figure 2-1 shows a series of photos from the teardown article that illustrate the

physical constitution of the device. Here we see the main EMG sensor hardware built into the


inside of the pods that make up the armband with one of the pods reserved to hold the device

mainboard and batteries. The mainboard contains the remainder of the device hardware that

interests us except for operational amplifiers attached to each of the EMG sensors (not

shown here).

Figure 2-1: Myo teardown disassembly

The mainboard for Myo, shown in Figure 2-2, houses its microcontroller unit

(MCU), IMU sensor, and Bluetooth Low Energy (BLE) module. Also located in the same pod

is a vibration motor attached to the battery board. The Freescale Kinetis M series MCU

contains a 32 bit ARM architecture 72MHz Cortex M4 CPU core with floating point unit

hardware. This particular series of MCU targets low power metrology applications. The

BLE module enables external communication between Myo and a client computer. The IMU

chip made by Invensense is a 9 axis model containing an onboard digital motion processor

(DMP) which performs sensor fusion on the raw sensor data. The MPU-9150 contains a 3

axis magnetometer, 3 axis gyroscope, and 3 axis accelerometer all in the same silicon die.

The DMP fuses these raw data sources using a proprietary undocumented algorithm to

produce an estimated quaternion. All data outputs, raw and calculated, are made available


through a first-in-first-out (FIFO) buffer that is read by the MCU over either a serial

peripheral interface (SPI) or inter-integrated circuit (IIC or I2C) communication bus.

Figure 2-2: Myo mainboard (front and back)

2.1.2 Myo Software

Thalmic Labs has created a rather large ecosystem for development of Myo

enabled applications. The company released officially supported SDKs for four compute

platforms, both desktop and mobile. Thalmic also fosters a larger community of developers

who have contributed projects for Myo in a variety of programming languages. Table 2-1

contains a non-exhaustive list of these offerings to show the diversity of the development

ecosystem surrounding Myo.

Table 2-1: Myo SDK offerings
A listing of Myo SDK offerings from official [9] and community [10] sources.

Operating System   Language                         Dependencies              Supported By
Windows            C++                              Myo SDK runtime library   Thalmic Labs
Mac OS X           C++                              Myo SDK framework         Thalmic Labs
iOS                Objective-C                      MyoKit framework          Thalmic Labs
Android            Java                             Java Library              Thalmic Labs
Windows            C#, .NET                         ---                       Community
Linux              C, C++, Python                   ---                       Community
Mac OS X           Objective-C                      ---                       Community
---                Unity, Python, Javascript,       ---                       Community
                   Ruby, Go, Haskell, Processing,
                   Delphi, ROS, Arduino, MATLAB

In addition to these available existing software projects, Thalmic Labs has also

completely opened developer accessibility to Myo by publicly providing the specification for

its physical communication modality in the form of a BLE Generic Attribute (GATT) Profile

specification [11]. The provision of this resource allows developers to completely bypass all

compute platform and programming language dependencies by developing strictly for the

physical BLE communication itself. This is what has enabled creation of the community

projects for Linux and Arduino indicated in Table 2-1, and it also provides future developers with a powerful option to leverage in their own projects.

The main advantage when considering developing against the BLE protocol for

Myo is the fact that every implementation detail of the solution can be specified as desired.

These details include not only those concerning the software architecture, but also the

absence of some inherently limiting choices that might be made in other higher level

software solutions such as Myo SDK. One example of this is the fact that the combination of

Myo SDK and Myo Connect limits use to only one dongle per instance of Myo Connect per

system. The effect of this limitation is that multiple Myo devices must share a single dongle


if desired to be used on the same machine. Consequently, not all EMG data can be received

due to hardware limitations in the throughput capacity of the provided BLE dongle.

Although the development freedom of leveraging the BLE specification directly

may be appealing, this option comes with a nontrivial cost. Along with the freedom to specify

every software and hardware choice related to Myo communication and control comes the

responsibility of making the best decisions and the need to compose solutions for all of them.

Some of the decisions that would need to be made involve the choice of supporting hardware

such as Bluetooth radios along with a suitably stable and deployable BLE software stack. All

of this low-level development must first be performed before then working on the layers

which may otherwise be occupied by the officially supported Myo SDK from Thalmic Labs.

The layout of the software stack being described here is presented in Figure 2-3.

We can envision the possible paths between the Myo Device (bottom right) and our intended

MATLAB Interface (top left). Development against the Myo SDK leverages the Myo Connect

desktop application and Myo SDK runtime library with C++ bindings running on the

Application Computer as well as the included BLE dongle hardware. The entry point for

developers into the Myo SDK stack is in their implementation of the Myo SDK C++ API. It’s

from this location in the stack that we draw a comparison with the corresponding level of the low-level API middleware that targets the BLE specification.


Figure 2-3: Myo APIs and middleware stack

A summary of the advantages and disadvantages that bear on our choice between developing with Myo SDK versus the BLE protocol is collected in Table 2-2. Due to the level of complexity and sheer volume of code involved in developing from the BLE GATT specification, the continued active support that Thalmic Labs provides for Myo SDK, and

acceptance of the tradeoff that we will not be able to leverage the EMG data when working

with multiple Myo devices, we choose to use the Myo SDK in this work.

Table 2-2: Myo SDK versus Bluetooth protocol

               Advantages                       Disadvantages
Myo SDK        Vendor support;                  No EMG data with multiple
               hardware included with Myo       Myo devices
BLE Protocol   Free choice for all hardware     Code volume and complexity;
               and software                     not as easily deployable


2.2 Sphero Overview

Sphero has undergone two major revisions since its first appearance in consumer

markets in the year 2011. The original Sphero received an incremental redesign and the

official name was changed to “Sphero 2.0” in 2014. Although we choose to drop the version

number when referring to Sphero in this work, Sphero 2.0 is the model we are working with

here. The device then received a facelift along with changes to its Bluetooth communication

technology with the release of a new Star Wars™ themed product named BB-8™ in 2015.

Aside from a change from Bluetooth Classic to Bluetooth Low Energy from Sphero 2.0 to BB-

8™, it appears that similar hardware is used in both devices according to a technical blogger

on the element14 Community engineering web platform [12]. In this section we

begin by looking at the hardware inside Sphero devices followed by a survey of the developer

software support.

2.2.1 Sphero Hardware

The first thing we notice when attempting to take a look inside Sphero is its solid

waterproof plastic shell. The first step of disassembling this device, shown in Figure 2-4, is

to mechanically split the robotic ball’s shell by cutting (right). Then, the view inside of the

device reveals a two wheeled inverted pendulum type of vehicle that drives around the

inside of the spherical shell in a manner similar to that in which a hamster will run inside its

wheel. The two wheels are powered through gear reduction by two direct current (DC)

motors. Contact traction between the wheels and the shell is maintained by force provided

by a spring in the top mast of Sphero’s chassis. And finally, the main feature of this internal

assembly that we’re interested in is the mainboard containing the interface electronics and

sensors.


Figure 2-4: Sphero 2.0 internal electronic and mechanical components

Image source: http://atmega32-avr.com/wp-content/uploads/2013/02/Sphero.jpg

As mentioned previously, public information about the specific components used

in Sphero 2.0 is hard to come by, so we base further inspection on a teardown of BB-8™ with

its mainboard shown in Figure 2-5. This mainboard houses many components, of which

three are particularly important. The Toshiba TB6552FNG dual DC motor driver provides

power to the motors via pulse width modulation (PWM) while offering an electromagnetic

field (EMF) intensity signal to provide feedback for closed loop control. A 6 axis Bosch

BMI055 IMU provides 3 axis gyroscope and 3 axis accelerometer data in a low power

package. In contrast to the MPU-9150 used by Myo, this IMU does not contain a

magnetometer reference, and it also does not perform any digital signal processing on board.

Rather, the ST Microelectronics MCU must read the IMU and provide sensor fusion and

quaternion estimation features. The STM32F3 contains a 32 bit ARM architecture 72MHz

Cortex M4 CPU core with a built in floating point unit just like the Freescale MCU used in

Myo.


Figure 2-5: Sphero BB-8™ mainboard

2.2.2 Sphero Software

Similar to the API support provided by Thalmic Labs for Myo, Sphero has provided

both platform specific SDKs as well as a specification for Sphero’s low-level serial binary

Bluetooth communication API. In the case of Sphero, the officially supported platform

specific SDKs primarily target mobile computing platforms such as Android and iOS as

shown in Table 2-3.

Table 2-3: Sphero SDK offerings
A listing of manufacturer and community supported interfaces to Sphero [13]

Operating System   Language      Dependencies                   Supported By
iOS                Objective-C   RobotKit SDK framework         Sphero
iOS                Swift         RobotKit SDK framework         Sphero
Android            Java          RobotLibrary SDK jar library   Sphero
---                Javascript    Source code                    Community

These options are less desirable for the intended application in this project since

we prefer to build applications on typical general purpose compute platforms such as Mac,


Linux, or Windows. In this case, we choose to take a closer look at the prospect of developing

for the low-level binary protocol as the preferred candidate API.

Figure 2-6: Sphero API and middleware stack

Sphero’s low-level binary protocol, shown in the software stack diagram of Figure

2-6, is a serial communication protocol that is transmitted over its Bluetooth Classic physical

network connection with the client computer. The particular profile employed in this

Bluetooth Classic communication channel is named the serial port profile (SPP). The matter

of implementing Bluetooth communications in a client application is typically addressed in user space through system (or third party) libraries, but in the case of Bluetooth Classic SPP in MATLAB this burden is lifted from the developer. The Instrument Control Toolbox of

MATLAB has extended support to Bluetooth devices that implement SPP specifically. What

this means is that MATLAB provides platform independent read and write functionality to

the required Bluetooth device for communication with Sphero in native m-code. Since it is

possible to write native MATLAB m-code that communicates directly to Sphero, this is the

preferred choice for Sphero development API in this work.
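
As a concrete illustration, the following minimal sketch pings Sphero from native m-code using the Instrument Control Toolbox. The remote device name and channel are placeholders, and the Ping() packet fields follow the low-level API framing detailed in section 3.2; the checksum is the bit-inverted modulo-256 sum of the packet bytes between the start-of-packet bytes and the checksum itself.

bt = Bluetooth('Sphero-RPB', 1);   % placeholder device name and channel
fopen(bt);                         % open the serial port profile channel
did = 0; cid = 1; seq = 1; dlen = 1;               % Ping() command fields
chk = bitcmp(uint8(mod(did+cid+seq+dlen, 256)));   % checksum byte
fwrite(bt, uint8([255 255 did cid seq dlen chk])); % SOP1 SOP2 DID CID SEQ DLEN CHK
rsp = fread(bt, 6);                % simple response: SOP1 SOP2 MRSP SEQ DLEN CHK
fclose(bt); delete(bt);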


3 Software Tools

Although the present work demands the creation of MATLAB interfaces for Myo

and Sphero, the broad objective is farther reaching. We also aim to provide academic

students and researchers with a simple way to interact with the devices and their data

programmatically. Furthermore, having chosen MATLAB as the development platform also

enables a larger research based workflow in a development environment that is familiar to

many target users. The following list states the guidelines that are followed throughout the development of these software tools; a brief usage sketch follows the list.

- All features should be accessed with MATLAB m-code
- Common tasks should be wrapped in utility functions
- Operations critical to functionality should be performed automatically
- The device should be brought live and functional with a single function call
- Device data should be automatically stored in the workspace
- All features should be well documented
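
The following minimal sketch previews the intended user experience against these guidelines. The MyoMex and MyoData class names are those developed in section 3.1; the logged property names shown here are illustrative, with the actual data properties listed in Table 3-1.

mm = MyoMex(1);     % a single call brings the device live (Myo Connect running)
m  = mm.myoData;    % MyoData object; streaming data accumulates automatically
pause(5);           % stream for five seconds with no further user action
q  = m.quat_log;    % logged orientation quaternions (illustrative name)
e  = m.emg_log;     % logged EMG samples, single-Myo use only (illustrative name)
mm.delete();        % shut down the MEX interface cleanly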

The satisfaction of these guidelines for creating the user-facing MATLAB code

should become evident throughout the remainder of this section. In addition to devising

strategies through which to present the user with a MATLAB m-code interface, we must also

connect the m-code to the physical hardware in some appropriate manner. The presentation

of the code required to achieve these goals will include explanation of the core principles and

logic of the solution followed by detailed explanation of the implementation code. This

section is intended to fully document the function of the software tools for Myo and Sphero

so as to serve as a reference to users of the resulting code base for both functional and

educational purposes.


In the remainder of this section we will first discuss the bottom-up development

of Myo SDK MATLAB MEX Wrapper [14] and Sphero API MATLAB SDK [15]. Each of these

subsections begins with a discussion of the basic concepts for the chosen API. Then we

develop the layers of code needed to bridge device communication and control with the user-

facing MATLAB m-code interface described above. Finally, we showcase some individual and

some combined application cases for these devices to provide perspective on the user

experience with respect to the guidelines above resulting from these software tools.

3.1 Myo SDK MATLAB MEX Wrapper Development

In this section, we progress through the design documentation for this interface.

We begin by introducing the basic concepts of the API chosen in section 2.1.2, the Myo SDK

API. Then we follow the path up the software stack through discussing the core aspects of

implementing Myo SDK, development of a MATLAB EXternal (MEX) interface to the Myo

SDK, and finally the design of two MATLAB classes to manage the state of the MEX interface

and the Myo device data for Myo SDK MATLAB MEX Wrapper [14], [16].

3.1.1 Myo SDK Concepts

Every Myo SDK project depends upon the use of the Myo Connect application. The

first step toward interacting with the device involves the end user connecting to the physical

Myo device using the Myo Connect desktop application and the included BLE dongle. Once

connected, the API provided by Myo SDK will enable third party code to interact with the

device by calling into Myo Connect through a runtime library.

The API provided by Myo SDK is in the form of C++ bindings to a dynamically

linked library that calls into Myo Connect. The API handles data communication with an


event driven paradigm. Our main objective, data acquisition, is implemented through use of

the SDK by way of user defined implementations of virtual functions of a Myo SDK

application class, myo::DeviceListener. These functions serve as callbacks for events

that provide data from Myo. Additionally, the Myo SDK provides functions that can be

thought of as service calls to change device configuration and state. The definition of these

virtual functions along with the class of which they are members is the core foundation of an

implementation of Myo SDK.

3.1.2 Myo SDK Implementation

The Myo SDK C++ bindings receive events from Myo through the virtual functions

of a user defined application class that inherits from myo::DeviceListener and is

registered to the myo::Hub as a listener. The developer uses an instance of myo::Hub to

invoke the event callbacks by calling myo::Hub::runOnce() periodically. Some Myo

services can be called through members of myo::Myo, but the data acquisition functionality

we desire depends mostly upon the implementation of our myo::DeviceListener

derived application class and its interaction with another application class named MyoData

that manages queued and synchronized streaming data from each Myo device [17]. In this

section, we’ll step through our Myo SDK implementation from its highest level of use and

systematically drill down to the underlying business logic when appropriate.

The initialization of Myo SDK should be preceded by awakening of all Myo devices

that are connected to the host computer via the Myo Connect application. Then, a session is

opened by instantiation of a myo::Hub*.

myo::Hub* pHub;
pHub = new myo::Hub("com.mark-toma.myo_mex");
if ( !pHub )
  // ERROR CONDITION

The instantiation of pHub invokes the Myo SDK C-API to communicate with the

Myo Connect process, and is identified by a globally unique application identifier string such

as “com.mark-toma.myo_mex”. If pHub is NULL then there was an unrecoverable error

in communicating with Myo Connect, and the program should terminate. Otherwise, the next

step is to validate the existence of a Myo device in the Myo Connect environment by calling

myo::Hub::waitForMyo().

myo::Myo* pMyo;
pMyo = pHub->waitForMyo(5);
if ( !pMyo )
  // ERROR CONDITION

In similar fashion, a resulting NULL value for pMyo indicates the failure of Myo

Connect to validate the existence of a connected Myo device. In this case, the program should

terminate. Otherwise, we know that at least one Myo device is connected to Myo Connect.

Now that the connectivity is established, we create our application class and register it as a

listener to pHub so that we can begin to receive events from Myo Connect.

// unsigned int countMyosRequired = <user-specified integer>
DataCollector collector;
if (countMyosRequired==1)
  collector.addEmgEnabled = true;
pHub->addListener(&collector);

The DataCollector class inherits from myo::DeviceListener so we are

now ready to receive callbacks in collector from Myo Connect. Immediately following

instantiation of collector, we configure EMG streaming if the expected number of Myo

devices is exactly one. Then, in order to allow for callbacks to be triggered for some

duration in milliseconds, we invoke myo::Hub::run(duration).


#define INIT_DELAY 1000 // milliseconds

// unsigned int countMyosRequired = <user-specified integer>

pHub->run(INIT_DELAY);

if (countMyosRequired!=collector.getCountMyos())

// ERROR CONDITION

Here, we are introduced to our first public member function of DataCollector.

The function DataCollector::getCountMyos() returns the number of unique Myo

devices that have been identified in collector as a result of having been passed from Myo

Connect through callback functions. If this number is different from the number of Myos the

user has specified to the application, then the program should terminate. Otherwise, the

program should begin to continually call myo::Hub::runOnce() in a separate thread so

that all callbacks are triggered. At this point, the collector should be configured to

initialize its data logs for Myo.

collector.syncDataSources();

collector.addDataEnabled = true;

Whenever DataCollector::addDataEnabled is toggled from false to

true, the data sources must be synchronized. Since this flag allows the data callbacks to fall

through when unset, the data logs will be in an unknown state. Thus, when toggling the flag

to true, we must synchronize the data queues by calling

DataCollector::syncDataSources(). This function pops all previously logged data

from the data logs in collector.

Finally, the last remaining function to be performed publicly on the

DataCollector is the reading of the data log queues. For this purpose, we have two

struct objects that represent the data sampled from a single instant in time. Since we

sample the data at two different rates, we use two distinct data frames. The data elements

that are always available are sampled at 50Hz and reported in FrameIMU objects.


struct FrameIMU

{

myo::Quaternion<float> quat;

myo::Vector3<float> gyro;

myo::Vector3<float> accel;

myo::Pose pose;

myo::Arm arm;

myo::XDirection xDir;

};

When EMG streaming is enabled during use of only a single Myo, we will also work

with FrameEMG objects.

struct FrameEMG

{

std::array<int8_t,8> emg;

};

The data is read from collector by popping the oldest available data frame

from the queues in collector using the functions DataCollector::getFrameXXX().

The following code example shows how this may be performed in the case that either one or

two Myos are being used. In the event that three or more Myos are being used, this example

can be adapted by reading only FrameIMU from each additional Myo device.

// Declarations and initializations

unsigned int iiIMU1=0, iiEMG1=0, iiIMU2=0, iiEMG2=0;

unsigned int szIMU1=0, szEMG1=0, szIMU2=0, szEMG2=0;

FrameIMU frameIMU1, frameIMU2;

FrameEMG frameEMG1, frameEMG2;

unsigned int countMyos = collector.getCountMyos();

if (countMyos<1) /* ERROR CONDITION */;

#define READ_BUFFER 2 // number of samples to leave in collector
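// Note: these counts are unsigned, so the subtractions below assume at
// least READ_BUFFER samples have already been logged (guaranteed here by
// the one second initialization delay at the 50Hz sample rate)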

szIMU1 = collector.getCountIMU(1)-READ_BUFFER;

if (countMyos==1) {

szEMG1 = collector.getCountEMG(1)-READ_BUFFER;

} else if (countMyos==2) {

szIMU2 = collector.getCountIMU(2)-READ_BUFFER;

} // else if (countMyos==N) // optionally extend to handle more Myos

// --- ACQUIRE LOCK ON myo::Hub::runOnce() ----------------------------

// --- BEGIN CRITICAL SECTION ----------------------------------------

while (iiIMU1<szIMU1) { // Read from Myo 1 IMU

frameIMU1 = collector.getFrameIMU(1);

// process frameIMU1

iiIMU1++;


}

while (iiEMG1<szEMG1) { // Read from Myo 1 EMG

frameEMG1 = collector.getFrameEMG(1);

// process frameEMG1

iiEMG1++;

}

while (iiIMU2<szIMU2) { // Read from Myo 2 IMU

frameIMU2 = collector.getFrameIMU(2);

// process frameIMU2

iiIMU2++;

}

while (iiEMG2<szEMG2) { // Read from Myo 2 EMG

frameEMG2 = collector.getFrameEMG(2);

// process frameEMG2

iiEMG2++;

}

// --- END CRITICAL SECTION ------------------------------------------

// --- RELEASE LOCK --------------------------------------------------

We also note that the calls into DataCollector::getFrameXXX() must be

performed while holding a lock that excludes the thread triggering callbacks via

myo::Hub::runOnce(), to avoid corrupting the synchronization of the data queues.

Up to this point, we have seen a high-level view of the Myo SDK implementation

without much mention of the underlying implementation details. In the remainder of this

section, we will use this high-level blueprint as a framework for describing the functionality

of the DataCollector application class as well as the MyoData data management class.

The DataCollector class is our lowest interface to the Myo device data as it is the

subscriber to streaming device data, and MyoData is a helper class whose objects are owned

by DataCollector and store this data in synchronized FIFO queues. The complete

implementation of the file containing DataCollector and MyoData, myo_class.hpp,

can be found in Appendix A.1 for reference.

The DataCollector class inherits from myo::DeviceListener which is

defined with several virtual methods that are used as callbacks by pHub when Myo

streaming data and state change events occur. It also owns pointers to MyoData objects that


are stored in a private member variable std::vector<MyoData*> knownMyos.

Perhaps most importantly, it defines public member functions that are used to control the

behavior of DataCollector and its MyoData instances as well as read the logged data

stored in knownMyos. Finally, for completeness, we note that DataCollector also defines

some member functions and variables for utility. The following is a representation of the

class declarations with some input parameter lists partially omitted for brevity.

class DataCollector : public myo::DeviceListener

{

std::vector<MyoData*> knownMyos;

public:

// Properties

bool addDataEnabled; // onXXXData() falls through when unset

bool addEmgEnabled; // onEmgData() falls through when unset

// Construction, deletion, and utility

DataCollector();

~DataCollector();

void syncDataSources();

const unsigned int getMyoID(myo::Myo* myo,uint64_t timestamp);

// Accessors

unsigned int getCountIMU(int id);

unsigned int getCountEMG(int id);

const FrameIMU &getFrameIMU( int id );

const FrameEMG &getFrameEMG( int id );

const unsigned int getCountMyos();

// State change callbacks

void onPair(myo::Myo* myo, uint64_t timestamp,...);

void onUnpair(myo::Myo* myo, uint64_t timestamp,...);

void onConnect(myo::Myo *myo, uint64_t timestamp,...);

void onDisconnect(myo::Myo* myo, uint64_t timestamp);

void onLock(myo::Myo* myo, uint64_t timestamp);

void onUnlock(myo::Myo* myo, uint64_t timestamp);

void onArmSync(myo::Myo* myo, uint64_t timestamp,...);

void onArmUnsync(myo::Myo* myo, uint64_t timestamp);

// Data streaming callbacks

void onOrientationData(myo::Myo* myo, uint64_t timestamp,...);

void onGyroscopeData (myo::Myo* myo, uint64_t timestamp,...);

void onAccelerometerData (myo::Myo* myo, uint64_t timestamp,...);

void onEmgData(myo::Myo* myo, uint64_t timestamp,...);

void onPose(myo::Myo* myo, uint64_t timestamp,...);

}; // DataCollector

The instantiation of DataCollector is assumed to use the default constructor,

which creates an instance with both addDataEnabled and addEmgEnabled assigned

the value false. The destructor function is also quite simple in that it iterates through


knownMyos to delete all instances. Immediately following instantiation of collector, we

first set the addEmgEnabled member before registering it as a listener to pHub. Next,

we run pHub for a brief initialization period to allow callbacks to run from all available Myo

devices so that they can be detected by collector. It is in this process that the

automatic event-based behavior inherent in DataCollector begins with the instantiation

of MyoData objects in knownMyos.

The first operation performed in each data streaming callback, as well as in the

onConnect() and onPair() callbacks, is to access the internal identifier for the Myo

producing the event, myo::Myo* myo, by calling

DataCollector::getMyoID(myo,...). This member function returns the one-based

index of myo in knownMyos, but it also implicitly pushes a new entry onto knownMyos if

myo doesn’t already belong. In this way, the corresponding MyoData instance is created the

first time a callback is triggered from a Myo device, and the device remains available from

then on.
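Although the complete definition appears in Appendix A.1, a minimal sketch of getMyoID() that is consistent with this description is shown here; the body is our paraphrase rather than the verbatim appendix code.

const unsigned int DataCollector::getMyoID(myo::Myo* myo, uint64_t timestamp)
{
  // Return the one-based index of myo if it is already known
  for (unsigned int ii = 0; ii < knownMyos.size(); ii++)
    if (knownMyos[ii]->getInstance() == myo)
      return ii + 1;
  // Otherwise register the new device so it is available from now on
  knownMyos.push_back(new MyoData(myo, timestamp, addEmgEnabled));
  return knownMyos.size();
}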

After the brief initialization delay of the previous call to pHub->run(),

we can use DataCollector::getCountMyos() to access the length of the knownMyos

vector. Since it’s assumed that all Myos available in Myo Connect will fire callbacks during

the initialization period, we can terminate the program if the number of Myos expected

doesn’t match the number of Myos returned by this function call.

At this time, we will ensure that collector and its MyoData instances are

properly initialized to a known data state by calling

DataCollector::syncDataSources() to remove any previously logged data. Then


we’re ready to set DataCollector::addDataEnabled to true so that onXXXData()

callbacks will pass the received data into the corresponding MyoData instance in

knownMyos.

The only remaining function to be performed on collector is the reading of

logged data by use of the getFrameXXX(id) member functions. These functions are

simply wrappers for similarly named data accessors in the MyoData class. Now, we’ll move

on to look at the data management functionality in MyoData to complete the description of

this Myo SDK implementation.

The primary function to be performed by the MyoData class is to manage the

streaming data that is provided and consumed by DataCollector. This class must receive

individual samples of data from various sources in any sequence via the use of the member

functions onXXXData(). Then, since some data might be lost in transmission, it must

provide automated functionality to synchronize multiple streams that are sampled on the

same time base such as the quaternion, gyroscope, and accelerometer data. This is

performed by the syncXXX() functions. Then the data is consumed, oldest synchronized

sample first, by calling the public getFrameXXX() functions.

Most of the data associated with the MyoData class is stored internally with

private member variables. Only some information such as the Myo device’s myo::Myo*

pointer, the current number of logged data frames, and the data frames themselves are

available externally. The raw streaming data is stored in

std::queue<T,std::deque<T>> containers (FIFO queue adapters backed by double-ended

queues) with type T corresponding to the datatype used in Myo SDK. All other member variables encode data

stream state that is necessary for use in the business logic of queue synchronization.


The class declaration for MyoData is shown here with some input parameter lists

and the std::queue<> type names partially omitted for presentation clarity.

class MyoData

{

// Properties

myo::Myo* pMyo;

FrameIMU frameIMU;

FrameEMG frameEMG;

bool addEmgEnabled;

// Streaming data queues

std::queue<myo::Quaternion<float>,std::deque<...>> quat;

std::queue<myo::Vector3<float>,std::deque<...>> gyro;

std::queue<myo::Vector3<float>,std::deque<...>> accel;

std::queue<myo::Pose,std::deque<myo::Pose>> pose;

std::queue<myo::Arm,std::deque<myo::Arm>> arm;

std::queue<myo::XDirection,std::deque<myo::XDirection>> xDir;

std::queue<std::array<int8_t,8>,std::deque<...>> emg;

// Streaming data state

uint64_t timestampIMU;

unsigned int countIMU;

unsigned int semEMG;

unsigned int countEMG;

uint64_t timestampEMG;

// Construction, deletion, and utility

MyoData(myo::Myo* myo, uint64_t timestamp, bool _addEmgEnabled);

~MyoData();

void syncIMU(uint64_t ts);

bool syncPose(uint64_t ts);

bool syncArm(uint64_t ts);

bool syncXDir(uint64_t ts);

void syncEMG(uint64_t ts);

void syncDataSources();

// Accessors

myo::Myo* getInstance();

unsigned int getCountIMU();

unsigned int getCountEMG();

FrameIMU &getFrameIMU();

FrameEMG &getFrameEMG();

// Add data

void addQuat(const myo::Quaternion<float>& _quat,...);

void addGyro(const myo::Vector3<float>& _gyro,...);

void addAccel(const myo::Vector3<float>& _accel,...);

void addEmg(const int8_t *_emg,...);

void addPose(myo::Pose _pose,...);

void addArm(myo::Arm _arm,...);

void addXDir(myo::XDirection _xDir,...);

}; // MyoData

The constructor for MyoData requires a pointer to myo::Myo*, a 64-bit

timestamp, and a configuration flag indicating the behavior for streaming EMG data as shown


by its signature above. When called by DataCollector, the addEmgEnabled flag is

passed from its corresponding member variable whereas the other parameters are passed

through from the generating callback function. The operations performed by the constructor

are to initialize the Myo device and then initialize the class properties by filling the data log

queues each with a dummy sample and setting streaming data state accordingly. Myo device

initialization includes calling the unlock service to force Myo into a state that allows data

streaming as well as enabling EMG data streaming from the device if applicable. The

following is an abbreviated representation of the constructor implementation.

MyoData(myo::Myo* myo, uint64_t timestamp, bool _addEmgEnabled)

: countIMU(1), countEMG(1), semEMG(0), timestampIMU(0), timestampEMG(0)

{

pMyo = myo;

pMyo->unlock(myo::Myo::unlockHold);

if (_addEmgEnabled) {

pMyo->setStreamEmg(myo::Myo::streamEmgEnabled);

// INITIALIZE EMG QUEUE AND STATE

}

addEmgEnabled = _addEmgEnabled;

// INITIALIZE IMU QUEUE AND STATE

}

Once a MyoData object has been created, data can be added by calling its

addXXX() functions (addQuat(), addGyro(), addAccel(), and so on). These functions populate the associated data queue with the

new data, check data synchronization status, and handle synchronization operations if

necessary. This is performed by collector on its MyoData vector knownMyos as

depicted by the example of adding accelerometer data which is sampled on the same time

base as quaternion and gyroscope data.

The onAccelerometerData callback of DataCollector shown here first

checks the addDataEnabled property and falls through if unset. Otherwise, the function


continues to provide the new accelerometer data to the appropriate MyoData object in

knownMyos.

void DataCollector::onAccelerometerData (myo::Myo* myo,

uint64_t timestamp, const myo::Vector3<float>& a)

{

if (!addDataEnabled) { return; }

knownMyos[getMyoID(myo,timestamp)-1]->addAccel(a,timestamp);

}

First, getMyoID() is called to return the one-based index of the provided

myo::Myo* in knownMyos. Then zero-based indexing is used to access the corresponding

MyoData object. The appropriate add data function, addAccel() in this case, is called on

this MyoData instance to pass the new data through.

The addAccel() function first checks that all data on the appropriate time base

is currently synchronized by calling syncIMU() before pushing the new accelerometer data

onto its queue as shown in the following.

void addAccel(const myo::Vector3<float>& _accel, uint64_t timestamp)

{

syncIMU(timestamp);

accel.push(_accel);

}

In the case of the IMU data, synchronization checks for data integrity by using the

timestamp and checking the lengths of the IMU data queues. If the current timestamp is

greater than the previously stored timestamp, then the new data is for a more recent sample.

In this case, the lengths of the IMU data queues should be the same. Otherwise, a

synchronization failure is detected, and this soft failure is recovered by zero-order-hold

interpolation of all the short queues.
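A minimal sketch of syncIMU() capturing this recovery logic is shown here; it paraphrases the behavior just described, and the complete implementation can be found in Appendix A.1.

void syncIMU(uint64_t ts)
{
  if (ts > timestampIMU) {
    // A new sample instant has begun, so all three queues should have
    // reached the same length; recover any missed sample by repeating
    // the most recent value (zero-order-hold)
    while (quat.size()  < countIMU) { quat.push(quat.back());   }
    while (gyro.size()  < countIMU) { gyro.push(gyro.back());   }
    while (accel.size() < countIMU) { accel.push(accel.back()); }
    timestampIMU = ts;
    countIMU = countIMU + 1; // expected queue length for the new instant
  }
}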

The other add data functions follow very similar logic except that the synchronize

functions vary for syncEMG(), syncPose(), syncArm(), and syncXDir(). Since EMG


data is provided two samples per timestamp, data corruption is identified when fewer than

two samples are logged before a new timestamp is received. And since the pose, xDir, and

arm data are only provided as events, we interpolate them on the IMU time base with zero-

order-hold when a new value is received.

Data is read from the MyoData queues by using the public member functions

getFrameXXX() with similar names to these functions of DataCollector. The

implementations of these functions are shown below.

// DataCollector

const FrameIMU &getFrameIMU( int id )

{

return knownMyos[id-1]->getFrameIMU();

}

// MyoData

FrameEMG &getFrameEMG()

{

countEMG = countEMG - 1;

frameEMG.emg = emg.front();

emg.pop();

return frameEMG;

}

The wrapper for the accessor in DataCollector uses the provided one-based

index id to index into knownMyos and call the similar accessor function on this MyoData

instance. The accessor in MyoData simply constructs the FrameXXX structure by reading

and popping the data elements off of their queues.

As mentioned earlier when first introducing data reading using

DataCollector, this implementation must be accompanied by use of mutual exclusion

locks when the hub is run in a separate thread. In this case, the callbacks will invoke the add

data functions to write data into the MyoData queues in the worker thread stack frame. If

the data is then popped off of the queues in the main thread using getFrameXXX()

without protecting these critical sections, the result is undefined.

3.1.3 MEX Interface

The Myo SDK implementation shown in the previous section provides the solution

to our implementation needs in the C++ environment. However, an additional layer is

required to enable calling of this C++ code from the MATLAB environment. The MATLAB

MEX interface provides C/C++ and Fortran bindings to the MEX API as well as MEX

compilation tools. These resources allow developers to write and compile so-called MEX

functions which are binary executable files that can be called from the MATLAB runtime in

m-code. In this section we will devise a strategy to execute the Myo SDK implementation in

such a way that it can be used transparently from m-code.

In general, MEX functions are simply external code that has been written and

compiled against the MATLAB MEX API. Since our external code language is C++, we will opt

to use the C-API. All such MEX files begin with the inclusion of the MEX header file and the

default entry point for a MEX function, void mexFunction(...). In this case, we choose

to name the MEX function source code file myo_mex.cpp so that the MATLAB built-in function

mex will compile a binary file myo_mex.mexw64 (on 64 bit Windows platforms) that can

be called in MATLAB m-code as [...] = myo_mex(...).

The minimal example of myo_mex.cpp is shown below. In addition to the MEX

header, we also include the files necessary to compile against the Myo SDK along with our

inline application classes. Since we can lock the MEX function in memory, we declare

collector, pHub, and pMyo at global scope so that they persist across calls.

#include <mex.h> // mex api

#include "myo/myo.hpp" // myo sdk library

#include "myo_class.hpp" // myo sdk implementation application classes

...

DataCollector collector;

myo::Hub* pHub;

myo::Myo* pMyo;

...

void mexFunction(int nlhs, mxArray *plhs[],

int nrhs, const mxArray *prhs[])

{

...

}

At this point, myo_mex.cpp will compile, but its functionality does not yet exist.

Here we contemplate the major function of the MEX API which is to enable the passing of

variables from m-code as input parameters and back to m-code as return parameters or

outputs. The MEX API connects C++ variables back to a MATLAB workspace through the

arrays of pointers to mxArray. Note that mxArray *plhs[] and *prhs[] are pointers

for left-hand-side and right-hand-side parameters, respectively. Using these faculties, we can

build a state machine that consists of two states, three possible state transitions, and

initialization and delete transitions into and out of the MEX function. The transition requests

can be passed as strings in the first input parameter, and then the mexFunction() will

attempt to perform the requested transition while servicing any additional inputs and

outputs that exist.

The prototype state machine is shown below in Figure 3-1. The first call into MEX

must be on the init transition to initialize the Myo SDK implementation and lock the MEX

function memory region in the MATLAB process using the mexLock() MEX API. Then, the

default idle state is entered. Successful entry into the idle state signifies that the MEX function


is initialized without unrecoverable error and thus is ready to be used for streaming data.

The only useful transition out of the idle state is start_streaming,

which launches threaded calling of myo::Hub::runOnce() to begin streaming data and

places the MEX function into the streaming state. While in the streaming state, calls to the

get_streaming_data transition will acquire a lock on the data streaming thread, read all

available data from the data queues, and then return these data samples to the MATLAB

workspace while the MEX function returns to the streaming state. When in the streaming

state, data streaming can be cancelled by sending the MEX function back to the idle state with

the stop_streaming transition. Finally, at the end of the usage cycle, the delete

transition is used to clean up all Myo SDK resources and return the MEX function memory

management back to the control of MATLAB by calling the mexUnlock() MEX API.

Figure 3-1: MEX function states, transitions, and actions

A summary of this state machine implementation contained within

myo_mex.cpp is shown below with pseudocode descriptions of the intended actions

documented in the comments. Note that most of the business logic and MEX API code has

been omitted here for clarity.


void mexFunction(int nlhs, mxArray *plhs[],

int nrhs, const mxArray *prhs[])

{

// check input

char* cmd = mxArrayToString(prhs[0]); // get command string

if ( !strcmp("init",cmd) ) {

// initialize pHub and collector

mexLock(); // lock the memory region for this MEX function

} else if ( !strcmp("start_streaming",cmd) ) {

collector.addDataEnabled = true;

// dispatch thread calling pHub->runOnce()

} else if ( !strcmp("get_streaming_data",cmd) ) {

// create MATLAB struct outputs in *plhs[]

// read data queues using collector.getFrameXXX(id)

// assign data to outputs in *plhs[]

} else if ( !strcmp("stop_streaming",cmd) ) {

// terminate thread and reset state

collector.addDataEnabled = false;

collector.syncDataSources();

} else if ( !strcmp("delete",cmd) ) {

// clean up Myo SDK and platform resources

mexUnlock();

}

return;

}

Although we will not fully describe the implementation of the MEX function in

detail, we will show specific example implementations for each of the MEX interface calls in

the following. The complete source code for myo_mex.cpp can be found in Appendix A.2 as

well as the public GitHub repository for this project [16].

Before dispatching any of the myo_mex commands, the MEX function must use

the MEX APIs to extract the command string from the input parameters in *prhs[]. The

first lines of mexFunction() perform these operations in addition to input error checking.

// check for proper number of arguments

if( nrhs<1 )

mexErrMsgTxt("myo_mex requires at least one input.");

if ( !mxIsChar(prhs[0]) )

mexErrMsgTxt("myo_mex requires a char command as the first input.");

if(nlhs>1)

mexErrMsgTxt("myo_mex cannot provide the specified number of outputs.");

char* cmd = mxArrayToString(prhs[0]);


Here we see the use of the integer nrhs to validate the minimum expected

number of inputs as well as additional MEX APIs to deal with the string data type expected

to be passed as the first input argument. At any time throughout this validation process, the

default behavior for an error condition is to throw an exception back to the MATLAB

workspace by use of the MEX API mexErrMsgTxt().

Once we have the character array cmd, a collection of if ... else if blocks

are used to take action matching each possible cmd. The bodies of the init and delete

commands are not covered here since they merely instantiate and destroy all Myo SDK

resources as described previously while locking memory space by bracketing these

operations with the MEX APIs mexLock() and mexUnlock(). The init method

additionally parses one required input argument that specifies countMyosRequired for

the session.

The start_streaming command is responsible for dispatching a worker thread to

call myo::Hub::runOnce() on pHub. In this work, we implement the thread using the

Windows API for three reasons: we are developing strictly for Windows; additional external

libraries would add unnecessary installation complexity for the target end users; and

requiring a newer compiler for the threading support in the C++ standard library would be

restrictive to users working on machines with older software.

if ( !strcmp("start_streaming",cmd) ) {

if ( !mexIsLocked() )

mexErrMsgTxt("myo_mex is not initialized.\n");

if ( runThreadFlag )

mexErrMsgTxt("myo_mex is already streaming.\n");

if ( nlhs>0 )

mexErrMsgTxt("myo_mex too many outputs specified.\n");

collector.addDataEnabled = true;

// dispatch concurrent task

runThreadFlag = true;


hThread = (HANDLE)_beginthreadex( NULL, 0, &runThreadFunc,

NULL, 0, &threadID );

if ( !hThread )

mexErrMsgTxt("Failed to create streaming thread!\n");

}

This command action toggles collector to begin responding to add data calls,

sets the runThreadFlag global variable, and then dispatches the thread to call

runThreadFunc(). This thread function will perform its routine until the

runThreadFlag is unset later in a call to stop_streaming.

unsigned __stdcall runThreadFunc( void* pArguments ) {

while ( runThreadFlag ) { // unset runThreadFlag to terminate thread

// acquire lock then write data into queue

DWORD dwWaitResult;

dwWaitResult = WaitForSingleObject(hMutex,INFINITE);

switch (dwWaitResult)

{

case WAIT_OBJECT_0: // The thread got ownership of the mutex

// --- CRITICAL SECTION - holding lock

pHub->runOnce(STREAMING_TIMEOUT); // run callbacks to collector

// END CRITICAL SECTION - release lock

if (! ReleaseMutex(hMutex)) { return FALSE; } // bad mutex

break;

case WAIT_ABANDONED:

return FALSE; // acquired bad mutex

}

} // end thread and return

_endthreadex(0); // terminate this worker thread

return 0;

}

The get_streaming_data command is then available only when

runThreadFlag is set, indicating that the MEX function is in the streaming state. This command action

begins with input and output checking before initialization of the variables used in the data

reading routine described previously. The new elements in this command action include

acquiring the mutual exclusion lock while reading data and handling the output data by

declaring MEX API types with mxArray *outDataN, initializing them with

makeOutputXXX(), assigning each frame using fillOutputXXX(), and finally assigning


the data arrays to output variables using mxCreateStructMatrix() and

assnOutputStruct().

if ( !strcmp("get_streaming_data",cmd) ) {

if ( !mexIsLocked() )

mexErrMsgTxt("myo_mex is not initialized.\n");

if ( !runThreadFlag )

mexErrMsgTxt("myo_mex is not streaming.\n");

if ( nlhs>1 )

mexErrMsgTxt("myo_mex too many outputs specified.\n");

// Verify that collector still has all of its Myos

unsigned int countMyos = collector.getCountMyos();

if ( countMyos != countMyosRequired )

mexErrMsgTxt("myo_mex countMyos is inconsistent... We lost a Myo!");

// Declare and initialize to default values the following:

// iiIMU1 iiIMU2 iiEMG1 iiEMG2 szIMU1 szIMU2 szEMG1 szEMG2

// frameIMU1 frameIMU2 frameEMG1 frameEMG2

// Output matrices hold numeric data

mxArray *outData1[NUM_FIELDS];

mxArray *outData2[NUM_FIELDS];

// Initialize output matrices

makeOutputIMU(outData1,szIMU1);

makeOutputEMG(outData1,szEMG1);

makeOutputIMU(outData2,szIMU2);

makeOutputEMG(outData2,szEMG2);

// Now get hold of the lock and iteratively drain the queue while

// filling outDataN matrices

DWORD dwWaitResult;

dwWaitResult = WaitForSingleObject(hMutex,INFINITE);

switch (dwWaitResult)

{

case WAIT_OBJECT_0: // The thread got ownership of the mutex

// --- CRITICAL SECTION - holding lock

// Handle the data frame for sensor N by using:

// fillOutputIMU(outDataN,frameIMUN,iiIMUN,szIMUN);

// fillOutputEMG(outDataN,frameEMGN,iiEMGN,szEMGN);

// END CRITICAL SECTION - release lock

if ( !ReleaseMutex(hMutex))

mexErrMsgTxt("Failed to release lock\n");

break;

case WAIT_ABANDONED:

mexErrMsgTxt("Acquired abandoned lock\n");

break;

}

// Assign outDataN matrices to MATLAB struct matrix

plhs[DATA_STRUCT_OUT_NUM] = mxCreateStructMatrix(1,countMyos,

NUM_FIELDS,output_fields);

assnOutputStruct(plhs[DATA_STRUCT_OUT_NUM], outData1, 1);


if (countMyos>1) {

assnOutputStruct(plhs[DATA_STRUCT_OUT_NUM], outData2, 2);

}

}

The way that we approach assigning the data frames to output variables in

*plhs[] when reading the values iteratively is to first initialize numerical arrays for each

individual data matrix that will be returned. There is one matrix for the data stored in each

of the fields of the FrameXXX objects. These matrices are objects of type mxArray named

outDataN. During the iterative reading of data from collector, we fill the corresponding

elements of outDataN with the data from the current frame. Then when all of the data has

been read from collector into the matrices in outDataN, we assign each of the matrices

to the fields of a MATLAB struct array in *plhs[] so that these fields correspond with the

fields of both FrameIMU and FrameEMG.

The MEX file can then be compiled by using the built-in MATLAB command, mex.

This command must be passed the locations of the Myo SDK include and lib directories,

the name of the runtime library for the local machine’s architecture (32 or 64 bit), and the

location of myo_mex.cpp. Assuming that Myo SDK has been extracted to the path “C:\sdk\”

on a 64 bit machine, the following command will be used to compile myo_mex.cpp.

mex -I"c:\sdk\include" -L"c:\sdk\lib" -lmyo64 myo_mex.cpp

In this project, a general build tool build_myo_mex() has been written with

which a user can issue the following command in the command window to perform this same

operation.

build_myo_mex c:\sdk\


Successful compilation of the MEX file results in the existence of a new file,

myo_mex.mexw64 (on a 64 bit machine), which is the executable that can be called from m-

code. All that remains now is for the user to call an accepted sequence of commands on

myo_mex while Myo Connect is running and connected to the desired number of Myo

devices. This is performed in m-code in an example that gathers five seconds of data from

the device as shown here.

countMyos = 1;

myo_mex('init',countMyos);

myo_mex('start_streaming');

pause(5);

d = myo_mex('get_streaming_data');

myo_mex('stop_streaming');

myo_mex('delete');

% Data is now accessible in the matrices stored in fields of d:

% d.quat, d.gyro, d.accel, d.pose, d.arm, d.xDir, d.emg

This approach works well for single-shot data log collection, but breaks down

when successive calls of get_streaming_data are required. This is the case when a

continuous stream of data is desired in the MATLAB workspace. The solution to this problem

is to encapsulate the calling pattern on myo_mex into a MATLAB class MyoMex that will

automatically implement the above data collecting routine to populate class properties with

the continuously streaming data from the Myo device(s).

3.1.4 Myo MATLAB Class Wrapper

The final MATLAB m-code layer of this project includes the creation of two

MATLAB classes, MyoMex and MyoData. MyoMex will be responsible for all management of

the myo_mex environment, and it will own a vector of MyoData objects to which all new

data will be passed. The MyoData object is a representation of the data from a Myo device.


When it receives new data from MyoMex, this data is pushed onto data logs. The MyoData

class also offers convenient ways to access and interpret the logged data.

By design, the usage of MyoMex has been made as simple as possible. The minimal

use case for this code is shown here.

mm = MyoMex; % MyoMex(countMyos) defaults to a single Myo

md = mm.myoData; % Use MyoData object to access current data logs

% md.quat, md.gyro, md.accel, md.emg, etc.

mm.delete(); % Clean up when done

All setup is performed in the constructor MyoMex(), and all cleanup is performed

in the overridden delete() method. Between these calls, when MyoMex is running, it is

continually calling myo_mex('get_streaming_data') and passing this data to its

MyoData objects. The MyoData object can then be used to access the data. The process

diagram for MyoMex lifecycle including operations that cross the MEX boundary is shown in

Figure 3-2.


Figure 3-2: Myo MATLAB class wrapper behavior

The myo_mex command invocation has also been wrapped in static methods of

the MyoMex class. Each command has its own implementation with the signature

[fail,emsg,plhs] = MyoMex.myo_mex_<command>(prhs) in which a try ...

catch block catches myo_mex errors and returns the failure status along with any existing

error messages. In this way MyoMex can ensure the proper functionality of myo_mex by

attempting recovery if an unexpected failure is encountered.

The constructor for MyoMex calls into myo_mex('init',countMyos) using

the static method wrapper, instantiates its myoData property, and then calls the

startStreaming() method to set up automatic polling for data.


function this = MyoMex(countMyos)

% validate preconditions for MyoMex and myo_mex

[fail,emsg] = this.myo_mex_init(countMyos);

% if fail, attempt recovery, otherwise throw error; end

this.myoData = MyoData(countMyos);

this.startStreaming();

end

The constructor for MyoData instantiates a vector of MyoData objects of length

countMyos. The startStreaming() method of MyoMex creates and starts the update

timer that schedules MyoMex calls into myo_mex('get_streaming_data').

function startStreaming(this)

% validate preconditions

this.timerStreamingData = timer(...

'busymode','drop',...

'executionmode','fixedrate',...

'name','MyoMex-timerStreamingData',...

'period',this.DEFAULT_STREAMING_FRAME_TIME,...

'startdelay',this.DEFAULT_STREAMING_FRAME_TIME,...

'timerfcn',@(src,evt)this.timerStreamingDataCallback(src,evt));

[fail,emsg] = this.myo_mex_start_streaming();

% if fail, issue warning and return; end

start(this.timerStreamingData);

end

Then the timerStreamingDataCallback() method completes the normal

behavior of MyoMex. We invoke the get_streaming_data command and then pass the

resulting data to the addData method of MyoData along with the currTime property,

which holds the time in seconds since MyoMex was instantiated.

function timerStreamingDataCallback(this,~,~)

[fail,emsg,data] = this.myo_mex_get_streaming_data();

% if fail, clean up and throw error; end

this.myoData.addData(data,this.currTime);

end

The MyoData class exposes two main interfaces, each with important bits of

business logic behind them. The addData method is exposed only to the friend class

MyoMex, and is the entry point for new streaming data into MyoData. The new data is

processed internally before being pushed onto the data log properties of MyoData, where

users can access it publicly from the MATLAB workspace.

The addData method receives the output data struct array returned by

myo_mex. This method then calls two more utility methods, addDataIMU() and

addDataEMG(), to push the new data onto the data log properties of MyoData. The very

first time the data logs are updated, a time vector is initialized for each of the IMU and EMG

data sources, and subsequent data values are interpreted as having been sampled at time

instants determined by the number of data samples and the sampling time for the data

source.

The data properties of MyoData all have two forms. The property given the base

name for the data source contains the most recent sample, and another version of this

property with “_log” appended will contain the complete time history log for the data

source. For example, the properties quat and quat_log contain the most recent 1 × 4

vector of quaternion data and the complete 𝐾 × 4 matrix of 𝐾 quaternion samples,

respectively. In the remainder of this discussion of data properties, we assume the

appropriate property, e.g. quat versus quat_log, based on the context while only referring

to the property by its short name.

There are seven data sources that are built into Myo: quat, gyro, accel, emg,

pose, arm, xDir. However, these representations of the data may not be the most

convenient for all applications. Since the MyoData object is intended for use on

dynamically changing streaming data, we also perform

online data conversion. This means that we will translate the data to other representations

in order to provide users with the most convenient data representation without the added


cost of computation or code complexity. A summary of these properties is given in Table 3-1

including information about which data sources are derived from others and a description

of the interpretation of the data source. Note that these data sources are expanded along the

first dimension to create their associated data log properties except for rot_log which is

expanded along the third dimension.

Table 3-1: MyoData data properties

Data sources in MyoData including descriptions of the data and which source(s) it is derived from if not a default data source.

Data Source | Derived From | Description
quat | --- | 1 × 4 unit quaternion, transforms inertial vectors to sensor coordinates
rot | quat | 3 × 3 rotation matrix corresponding with quat
gyro | --- | 1 × 3 angular velocity vector with components in sensor coordinates
accel | --- | 1 × 3 measured acceleration vector with components in sensor coordinates
gyro_fixed | gyro, quat | Rotated representation of gyro by quat to coordinates of the inertial fixed frame
accel_fixed | accel, quat | Rotated representation of accel by quat to coordinates of the inertial fixed frame
pose | --- | Scalar enumeration indicating current pose
pose_<spec> | pose | Scalar logical indication of pose given by spec: rest, fist, wave_in, wave_out, fingers_spread, double_tap, unknown
arm | --- | Scalar enumeration indicating which arm Myo is being worn on
arm_<spec> | arm | Scalar logical indication of which arm Myo is being worn on given by spec: right, left, unknown
xDir | --- | Scalar enumeration indicating which direction the Myo is pointing on the subject’s arm
xDir_<spec> | xDir | Scalar logical indication of x-direction given by spec: wrist, elbow, unknown
emg | --- | 1 × 8 vector of normalized EMG intensities in [−1,1]

Finally, the time vectors upon which the data is sampled are given by the

properties timeIMU_log and timeEMG_log. The EMG data is sampled on timeEMG_log

at 200Hz whereas all other data sources are sampled at 50Hz on the timeIMU_log

vector.

3.2 Sphero API MATLAB SDK Development

In this section, we build on the background information about the available

Sphero APIs covered in section 2.2.2 by implementing a MATLAB interface with the selected

API option. In this work, we have chosen to create the device interface entirely in native

MATLAB m-code by leveraging the Instrument Control Toolbox Bluetooth object to

communicate with Sphero by way of its low-level serial binary Bluetooth API. In the

remainder of this section, we step through the development process beginning with

introducing the basic concepts of the API, and then the design and development of the three

layer class hierarchy that comprises Sphero API MATLAB SDK [15], [18].

3.2.1 Sphero API Concepts

The Sphero API is a serial binary protocol documented publicly by Sphero on

their GitHub repository under the original company name, Orbotix [19]. The serial protocol


transmits all device control information and data through a single communication channel

as a stream of bits, or zeros and ones. Commands and data are interpreted from

this continuous stream of bits by imposing an expected structure upon it. The

description of these structured bits is the binary protocol that we will describe here through

some specific examples of commands and data transmissions. Once the protocol description

is complete, all that remains is to implement the protocol with properly formed procedures

to communicate with Sphero as we will encounter in the following sections.

The Sphero API defines three subdivisions of the bitstream referred to as packets.

The command packet (CMD) is used to transmit a command from the client to Sphero. The

response packet (RSP) is used to transmit a response for a CMD from Sphero back to the

client. And the message packet (MSG) contains asynchronous messages from Sphero to the

client.

These packets each have a specific structure in their representation at the byte

level. In the remainder of this section, we’ll describe collections of bytes in two ways. The

integral data types represented by a collection of bytes will be noted by [u]int<N> where N

is the number of bits and the presence of “u” indicates that it’s an unsigned integer. For

example, uint8 is an unsigned 8 bit integer, and int16 is a signed 16 bit integer. Also, the

unsigned value of a collection of bits or bytes will be given in hexadecimal notation using the

characters 0-9 and A-F followed by “h” or binary notation using the characters 0-1 followed

by a “b.” For example, the number seventy-four is 4Ah, FFh is two-hundred fifty-five, ten is

1010b, and 0111b is seven.


A Sphero API CMD is constructed by writing each of the fields listed in Table 3-2

and concatenating them in order from SOP1 to CHK.

Table 3-2: Sphero API CMD fields
A CMD is the concatenation of these fields: [SOP1|SOP2|DID|CID|SEQ|DLEN|<DATA>|CHK]

Field | Name | Description | Value
SOP1 | Start of Packet 1 | Constant value | FFh
SOP2 | Start of Packet 2 | Bits 0 and 1 contain per-CMD configuration flags | 111111XXb
DID | Device Identifier | Specifies which "virtual device" the command belongs to | ---
CID | Command Identifier | Specifies the command to perform | ---
SEQ | Sequence Number | Used to identify RSP packets with a particular CMD packet | 00h-FFh
DLEN | Data Length | Computed from combined length of <DATA> and CHK | computed
<DATA> | Data Payload | Array of byte packed data (optional) | ---
CHK | Checksum | Checksum | computed

Each command that is defined by the Sphero API has a documented DID and CID,

which together uniquely identify the command. Each command also has its own definition of

the <DATA> array with corresponding DLEN. All commands support two configuration

options that are selected by setting bits 0 and 1 of SOP2. Bit 1 instructs Sphero to reset its

command timeout counter. Bit 0 instructs Sphero to provide a RSP to the CMD. If bit 0 of

SOP2 is unset, then the host application will receive no RSP, and therefore no indication of

Sphero’s success in interpreting the CMD. The checksum is computed from all preceding

bytes except SOP1 and SOP2 to guard against Sphero taking action on corrupted or

malformed commands.

A RSP is generated from Sphero to the client in response to a CMD if bit 0 of SOP2

was set in the generating CMD. The fields of a RSP are listed in Table 3-3.

Table 3-3: Sphero API RSP fields
A RSP is the concatenation of these fields: [SOP1|SOP2|MRSP|SEQ|DLEN|<DATA>|CHK]

Field | Name | Description | Value
SOP1 | Start of Packet 1 | Constant value | FFh
SOP2 | Start of Packet 2 | Specifies RSP packet type | FFh
MRSP | Message Response | Indicates failure status of CMD interpretation | 00h for success
SEQ | Sequence Number | Used to identify RSP packets with a particular CMD packet | echoed from CMD
DLEN | Data Length | Computed from combined length of <DATA> and CHK | computed
<DATA> | Data Payload | Array of byte packed data (optional) | ---
CHK | Checksum | Checksum | computed

When a RSP is received by the client, the beginning of the packet is identified by

the bytestream FFFFh. Then, the remaining bytes are consumed according to DLEN and the

CHK is recomputed for comparison. Assuming no failures in this procedure, a valid response

is received to indicate the success of a previous command with a matching SEQ and

optionally provide data to the client application.

A MSG is sent to the client from Sphero asynchronously. Since the occurrence of

these messages depends on the state of Sphero, and some of this state can be changed with

persistence, the client application must be prepared to receive MSG given by the fields listed


in Table 3-4. It is important to note that unlike the DLEN field for CMD and RSP, the DLEN

field for MSG is 16 bits. This uint16 value is sent over the wire most significant byte first.

Table 3-4: Sphero API MSG fields
A MSG is the concatenation of these fields: [SOP1|SOP2|ID_CODE|DLEN|<DATA>|CHK]

Field | Name | Description | Value
SOP1 | Start of Packet 1 | Constant value | FFh
SOP2 | Start of Packet 2 | Specifies MSG packet type | FEh
ID_CODE | Identifier Code | Indicates the type of message | ---
DLEN | Data Length | Computed from combined length of <DATA> and CHK, 16 bit | computed
<DATA> | Data Payload | Array of byte packed data | ---
CHK | Checksum | Checksum | computed

The asynchronous message packet is sent from Sphero to the client at any time.

These packets can be identified when read by the client by checking the value of SOP2, and

they contain structured data in <DATA> that is decoded based upon the type of message

being sent as specified by the message identifier code, ID_CODE. Various CMD packets

configure Sphero to generate asynchronous messages periodically based upon the

occurrence of events or the passing of some time duration. Because of the asynchronous

nature of MSG packets, the client must always be in a state that attempts to read and parse

either RSP or MSG packets and behave accordingly to store the response data locally and

optionally take action automatically when a MSG is received without interfering with the

synchronicity of the CMD-RSP packet flow.
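To make this parsing requirement concrete, the following sketch (ours, not taken from any SDK source) shows how a client might frame the incoming bytestream into RSP and MSG packets; readByte() is a hypothetical blocking read of one byte from the Bluetooth stream.

// Hypothetical framing loop over the incoming bytestream
for (;;) {
  if (readByte() != 0xFF) continue;       // hunt for SOP1
  unsigned int sop2 = readByte();
  if (sop2 == 0xFF) {                     // RSP packet
    unsigned int mrsp = readByte();       // 00h indicates CMD success
    unsigned int seq  = readByte();
    unsigned int dlen = readByte();       // 8 bit DLEN
    // read dlen bytes (<DATA> plus the trailing CHK), verify CHK, then
    // hand the payload to the caller waiting on this seq
  } else if (sop2 == 0xFE) {              // MSG packet
    unsigned int idCode = readByte();
    unsigned int hi = readByte();         // 16 bit DLEN, MSB first
    unsigned int dlen = (hi << 8) | readByte();
    // read dlen bytes, verify CHK, then dispatch on idCode (for example,
    // the DataStreaming message described later uses ID_CODE 03h)
  }
}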

Perhaps the best way to describe the process of encoding and decoding packets is

to show by example with a few select CMD and their associated RSP. The Ping() command

is the simplest type of CMD in that it contains no <DATA>; its purpose is to verify

connectivity with Sphero, and its definition is shown in Table 3-5.

Table 3-5: Command definition for the Ping() command

* The value FDh is also acceptable for SOP2 ** This is an arbitrary SEQ for purposes of computing CHK

SOP1 SOP2 DID CID SEQ DLEN <DATA> CHK

FFh FFh* 00h 01h 37h** 01h --- C6h

The procedure for computing the checksum above is listed here in sequence.

1. Compute the sum of bytes DID through <DATA>

00h + 01h + 37h + 01h = 39h = 57

2. Compute the modulo 256 of this result

57 % 256 = 57 = 00111001b

3. Compute the bitwise complement of this result

~00111001b = 11000110b = C6h
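These three steps amount to a short computation; the following helper (ours, not part of the API) implements it over a buffer spanning the bytes DID through <DATA>.

#include <stdint.h>
#include <stddef.h>

// Checksum over bytes DID..<DATA>: modulo-256 sum, then bitwise complement
uint8_t computeChk(const uint8_t* buf, size_t len)
{
  uint8_t sum = 0;               // uint8_t arithmetic is already modulo 256
  for (size_t ii = 0; ii < len; ii++)
    sum += buf[ii];
  return (uint8_t)~sum;          // e.g. {00h, 01h, 37h, 01h} yields C6h
}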

The expected RSP for this Ping() command is shown in Table 3-6. The SEQ has

been echoed to inform the client application of its generating CMD and the MRSP indicates

success.

Table 3-6: Response definition for the Ping() command

*This value for SEQ corresponds with that sent in the CMD constructed previously

SOP1 SOP2 MRSP SEQ DLEN <DATA> CHK

FFh FFh 00h 37h* 01h --- C7h

These examples so far have shown one CMD and one RSP each with no <DATA>

payload. Two more commands that are a bit more useful than Ping() are Roll() and


ReadLocator(). These commands are used to command Sphero to move with a desired

speed and heading, and to read Sphero’s current estimated position, respectively.

The Roll() command requires a data payload that contains an 8 bit SPEED, a 16

bit HEADING, and an 8 bit STATE. The value for SPEED is a normalized fraction of maximum

speed so that FFh is full speed and 00h is no speed. We’ll construct this command for roughly

half speed, with a SPEED of 80h. The HEADING is the planar heading specified in degrees

from 0 to 360. In this example we’ll take the heading to be 270° or 010Eh. As with all

multibyte values, the HEADING will be sent over the wire most significant byte first as the

pair 01h, 0Eh. The STATE is an enumeration selecting the mode with which the Roll()

command should be carried out. The normal mode is selected by specifying STATE 01h. The

following Table 3-7 summarizes this Roll() CMD.

Table 3-7: Command definition for the Roll() command

SOP1 SOP2 DID CID SEQ DLEN SPEED HEADING STATE CHK

FFh FFh 02h 30h 37h 05h 80h 01h 0Eh 01h 01h
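As an illustration, this CMD could be assembled programmatically as sketched below; this is our example, reusing the computeChk() helper sketched earlier.

// Assemble the Roll() CMD of Table 3-7
uint8_t speed = 0x80;                     // roughly half of maximum speed
uint16_t heading = 270;                   // degrees, i.e. 010Eh
uint8_t rollCmd[11] = {
  0xFF, 0xFF,                             // SOP1, SOP2 (RSP requested)
  0x02, 0x30, 0x37, 0x05,                 // DID, CID, SEQ, DLEN
  speed,                                  // SPEED
  (uint8_t)(heading >> 8),                // HEADING, most significant byte first
  (uint8_t)(heading & 0xFF),
  0x01,                                   // STATE: normal mode
  0x00                                    // CHK, filled in below
};
rollCmd[10] = computeChk(&rollCmd[2], 8); // over DID..<DATA>, yields 01h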

Since we have specified that a RSP should be generated for this Roll() CMD, we

will expect a response that is similar in form to that received from the Ping() command

previously in Table 3-6.

The ReadLocator() CMD is specified by DID 02h and CID 15h, and it should

be constructed in a way similar to the Ping() CMD in Table 3-5, with SOP2

set so that a RSP will be generated. There is no data associated with the ReadLocator()

CMD, as its purpose is to query Sphero for <DATA> that is returned in the ReadLocator()

RSP. The RSP <DATA> contains five 16 bit integers that encode Sphero’s current position,

velocity, and speed. Table 3-8 shows an example RSP assuming a CMD success

with a SEQ of 37h.

Table 3-8: Response <DATA> definition for the ReadLocator() command

DLEN X_POS Y_POS X_VEL Y_VEL SOG CHK

0Bh 00h 09h FFh C5h FFh 8Fh FFh 67h 00h BEh 3Eh

The interpretation of this RSP <DATA> is in units of centimeters and centimeters

per second as int16 datatype. The translation of this ReadLocator() <DATA> into values

with engineering units is captured in Table 3-9.

Table 3-9: Response <DATA> interpretation for the ReadLocator() command

Data Raw Value Engineering Value Description

X_POS 0009h 9cm 𝑥 component of planar position

Y_POS FFC5h −59cm 𝑦 component of planar position

X_VEL FF8Fh −113cm/s 𝑥 component of planar velocity

Y_VEL FF67h −153cm/s 𝑦 component of planar velocity

SOG 00BEh 190cm/s Speed over ground or magnitude of planar velocity
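Each raw value above is recovered from its byte pair by assembling a signed 16 bit integer, most significant byte first, as in this illustration of ours.

// Decode one int16 field from the <DATA> bytestream (MSB first)
int16_t decodeInt16(uint8_t msb, uint8_t lsb)
{
  return (int16_t)(((uint16_t)msb << 8) | lsb);
}
// decodeInt16(0xFF, 0x8F) evaluates to -113, i.e. X_VEL is -113cm/s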

The third packet type, MSG, is used in this project chiefly for receiving

streaming data from Sphero. The SetDataStreaming() CMD sends configuration data to

Sphero to set up the streaming data messages. This configuration data includes information

about the desired sample rate, number of samples per data frame transmission, and total

number of packets to send in addition to specification of the desired data sources. The data

sources are selected by sending a 32 bit MASK that’s computed by performing a bitwise

logical OR operation on the mask bits for the desired data types shown in Table 3-10.


Table 3-10: Data source MASK bits for SetDataStreaming() command and message

Data Source | Data Source MASK Bits | Description

MASK_ACCEL_X_RAW 80h 00h 00h 00h Raw measured acceleration

MASK_ACCEL_Y_RAW 40h 00h 00h 00h

MASK_ACCEL_Z_RAW 20h 00h 00h 00h

MASK_GYRO_X_RAW 10h 00h 00h 00h Raw angular velocity

MASK_GYRO_Y_RAW 08h 00h 00h 00h

MASK_GYRO_Z_RAW 04h 00h 00h 00h

MASK_MOTOR_RT_EMF_RAW 00h 40h 00h 00h Raw right motor EMF

MASK_MOTOR_LT_EMF_RAW 00h 20h 00h 00h Raw left motor EMF

MASK_MOTOR_LT_PWM_RAW 00h 10h 00h 00h Raw left motor PWM

MASK_MOTOR_RT_PWM_RAW 00h 08h 00h 00h Raw right motor PWM

MASK_IMU_PITCH_FILT 00h 04h 00h 00h Filtered pitch angle

MASK_IMU_ROLL_FILT 00h 02h 00h 00h Filtered roll angle

MASK_IMU_YAW_FILT 00h 01h 00h 00h Filtered yaw angle

MASK_ACCEL_X_FILT 00h 00h 80h 00h Filtered measured acceleration

MASK_ACCEL_Y_FILT 00h 00h 40h 00h

MASK_ACCEL_Z_FILT 00h 00h 20h 00h

MASK_GYRO_X_FILT 00h 00h 10h 00h Filtered angular velocity

MASK_GYRO_Y_FILT 00h 00h 08h 00h

MASK_GYRO_Z_FILT 00h 00h 04h 00h

MASK_MOTOR_RT_EMF_FILT 00h 00h 00h 40h Filtered right motor EMF

MASK_MOTOR_LT_EMF_FILT 00h 00h 00h 20h Filtered left motor EMF


If we desire to configure both the raw and the filtered measured acceleration data

sources, then the MASK would be determined by OR-ing these MASK bits together as shown

in Table 3-11.

Table 3-11: Data source MASK for streaming accelerometer data

Data Source | Data Source MASK Bits

MASK_ACCEL_X_RAW 80h 00h 00h 00h

MASK_ACCEL_Y_RAW 40h 00h 00h 00h

MASK_ACCEL_Z_RAW 20h 00h 00h 00h

MASK_ACCEL_X_FILT 00h 00h 80h 00h

MASK_ACCEL_Y_FILT 00h 00h 40h 00h

MASK_ACCEL_Z_FILT 00h 00h 20h 00h

MASK | 11100000b 00000000b 11100000b 00000000b = E0h 00h E0h 00h
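In code, this amounts to a bitwise OR of the documented mask constants, as in the following sketch; the constant names follow Table 3-10, with each value being the four mask bytes read as a single 32 bit word.

const uint32_t MASK_ACCEL_X_RAW  = 0x80000000;
const uint32_t MASK_ACCEL_Y_RAW  = 0x40000000;
const uint32_t MASK_ACCEL_Z_RAW  = 0x20000000;
const uint32_t MASK_ACCEL_X_FILT = 0x00008000;
const uint32_t MASK_ACCEL_Y_FILT = 0x00004000;
const uint32_t MASK_ACCEL_Z_FILT = 0x00002000;

uint32_t mask = MASK_ACCEL_X_RAW  | MASK_ACCEL_Y_RAW  | MASK_ACCEL_Z_RAW
              | MASK_ACCEL_X_FILT | MASK_ACCEL_Y_FILT | MASK_ACCEL_Z_FILT;
// mask == 0xE000E000, sent over the wire MSB first as E0h 00h E0h 00h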

The other parameters for the SetDataStreaming() command are introduced

and described in Table 3-12 along with example values that configure streaming of the data

sources specified in MASK at a rate of 1Hz, one sample per data frame, with a single frame transmitted.

Table 3-12: Command parameters for SetDataStreaming command

The DID and CID for this CMD are 02h and 11h, respectively

Data Parameter Description Value

DLEN Data length depends upon existence of MASK2 0Ah

N Sample rate divider on 400Hz base frequency 0190h

M Samples per frame 0001h

MASK Selects data sources E000E000h


PCNT Packet count: number of data frames to be transmitted (zero for unlimited streaming) 01h
MASK2 (optional) Selects additional data sources by extending MASK ---

The complete SetDataStreaming() CMD would then be the following byte

string with a SEQ of 37h and the data parameters given in Table 3-12.

FFh FFh 02h 11h 37h 0Ah 01h 90h 00h 01h E0h 00h E0h 00h 01h 58h
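The CHK byte of this packet can be verified with a few lines of MATLAB (a sketch; the packet bytes are copied from above):

% CHK is the bitwise complement of the modulo-256 sum of DID through <DATA>
pkt  = uint8(hex2dec({'FF';'FF';'02';'11';'37';'0A';'01';'90';...
    '00';'01';'E0';'00';'E0';'00';'01';'58'}));
body = pkt(3:end-1);
chk  = bitcmp(uint8(mod(sum(double(body)),256)));
assert(chk == pkt(end))   % CHK = 58h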

Assuming that the SetDataStreaming() command has been successfully configured to stream one sample per frame, we expect to receive a MSG with ID_CODE 03h and DLEN 000Dh, since we have selected six data sources that are each two bytes wide (twelve data bytes plus the CHK byte). An example MSG for this scenario is given in Table 3-13.

Table 3-13: Message definition for the DataStreaming message

SOP1 SOP2 ID_CODE DLEN <DATA> CHK

FFh FEh 03h 00h 0Dh See Table 3-14 62h

As the MSG is received in the incoming bytestream by the client, it is first identified

by the substring FFFEh formed by SOP1 and SOP2. Then the ID_CODE is used to identify

the type of MSG, and the DLEN is used to read the remainder of the MSG. Finally, the CHK is

computed by the client to check for possible data corruption. Successful parsing of this

particular MSG should result in a <DATA> payload similar to the example shown here. The

serial bytestream of signed 16 bit integers is taken to represent the selected bits in MASK in

the same order in which they appear in MASK from most significant bit to least significant

bit. Table 3-14 contains an example <DATA> payload for this particular MSG.


Table 3-14: Message <DATA> definition for the DataStreaming message

ACCEL_X_RAW  ACCEL_Y_RAW  ACCEL_Z_RAW  ACCEL_X_FILT  ACCEL_Y_FILT  ACCEL_Z_FILT
00h 00h      00h 0Ah      00h FBh      FFh DFh       00h 76h       10h 24h

The data in Table 3-14 is translated from hardware units to engineering units

according to scaling factors listed in Sphero API documentation [19] and shown in Table

3-15.

Table 3-15: Message <DATA> interpretation for the DataStreaming message

Data Raw Value Engineering Value Description

ACCEL_X_RAW 0000h 0g Measured acceleration in units of g on an 8 bit scale

ACCEL_Y_RAW 000Ah 0.04g

ACCEL_Z_RAW 00FBh 0.98g

ACCEL_X_FILT FFDFh −0.01g Measured acceleration in units of g on a 12 bit scale

ACCEL_Y_FILT 0076h 0.03g

ACCEL_Z_FILT 1024h 1.01g
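This translation can be reproduced in MATLAB (a sketch; the divisors 256 and 4096 are assumed from the 8 bit and 12 bit scale descriptions above):

% convert the example <DATA> payload into engineering units of g
data   = uint8(hex2dec({'00';'00';'00';'0A';'00';'FB';...
    'FF';'DF';'00';'76';'10';'24'}));
raw    = double(swapbytes(typecast(data,'int16')));   % big-endian int16 values
a_raw  = raw(1:3)/256     % approximately [0 0.04 0.98] g
a_filt = raw(4:6)/4096    % approximately [-0.01 0.03 1.01] g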

The encoding of CMD packets and the decoding of RSP and MSG packets along with

knowledge of the state changes that influence the creation of these packets comprise the

necessary prerequisites for implementing a programmatic client object to manage this

communication.

3.2.2 Sphero API Implementation

The MATLAB interface for Sphero is enabled by the implementation of a MATLAB

class named SpheroCore that manages all state and communication with Sphero through

a MATLAB Instrument Control Toolbox Bluetooth object. The first step toward

implementing this solution is to understand how to communicate with Sphero via the


Bluetooth object. Through examples similar to those given in the previous section, we

incrementally build the functionality of SpheroCore in this section. The end result is a

MATLAB class with methods to implement each of the Sphero API function calls and

properties to contain information about Sphero’s state and data.

The constructor for SpheroCore requires a single input argument that is used

to uniquely identify Sphero, its remote_name. This is of the form “Sphero-XXX” where

“XXX” is a three character string of initials representing the colors that Sphero blinks when

awake and idle. For example, a device with the color sequence white-white-purple will bear

the default remote_name “Sphero-WPP”. The constructor returns a handle to the

SpheroCore object only when instantiation of its Bluetooth object, bt, completes

successfully by opening the Bluetooth device for communication. Note that most of the

validation code as well as the initialization of event listeners and local device state

representation has been omitted for clarity here.

function s = SpheroCore(remote_name)

% instantiate bluetooth property

s.bt = Bluetooth(...

remote_name,s.BT_CHANNEL,...

'BytesAvailableFcnMode','byte',...

'BytesAvailableFcnCount',1,...

'BytesAvailableFcn',@s.BytesAvailableFcn,...

'InputBufferSize',8192);

% open bt and call Ping() to validate connectivity

fopen(s.bt); pause(0.5);

assert( ~s.Ping(),'SpheroCore.Ping() failed.');

end

We notice two things here: the way in which the Bluetooth object is configured

and the way in which API functions are called on the SpheroCore object. Since we must

respond to incoming MSG packets asynchronously, it’s imperative that incoming data is read

from s.bt automatically by way of the BytesAvailableFcn callback function property.


The behavior of this callback function, the BytesAvailableFcn() method, will be

covered later during discussion of parsing RSP and MSG packets.

The flowchart for implementation of the Sphero API command methods is shown

in Figure 3-3 for reference as a guide for the following description of the code used to

implement the Ping(), Roll(), and ReadLocator() methods. Each API method sends

its DID, CID, and <DATA> along with per-command configuration flags to a stack of utility

methods. The utility methods construct the CMD packet, write it to Sphero, and optionally

wait for a response and return data if it exists.

Figure 3-3: Sphero API send flowchart


The Sphero API functions all share the same signature template. The parameter

list begins with the class handle reference followed by a list of API <DATA> parameters

required for the API function, and is completed by a variable length argument list where the

per-command configuration flags can optionally be passed to the API method.

function fail = Ping(s,varargin)

[reset_timeout_flag,~] = s.ParseVargs(varargin{:});

did = s.DID_CORE; cid = s.CMD_PING; data = [];

fail = s.WriteClientCommandPacket(did,cid,data,...

reset_timeout_flag,true);

end

First we parse the variable length argument array varargin using

ParseVargs(). The varargin array is assumed to contain only flags to set the command

timeout reset (reset_timeout_flag) and the command answer behavior

(answer_flag), in that order. Since Ping() must request a response to be useful, the

second output of ParseVargs() is ignored. The next thing to happen is the assignment of

the DID and CID from constant properties of the class containing this information and the

construction of the <DATA> array. In the case of Ping() there is no <DATA>, so we construct the empty matrix instead. Then we are ready to issue the command by sending it to Sphero via WriteClientCommandPacket().

function [fail,resp] = WriteClientCommandPacket(s,did,cid,data,...

reset_timeout_flag,answer_flag)

fail = []; resp = [];

sop1 = s.SOP1;

sop2 = s.SOP2_MASK_BASE;

if reset_timeout_flag

sop2 = bitor(sop2,s.SOP2_MASK_RESET_TIMEOUT,'uint8');

end

seq = 0; % default seq

if answer_flag

sop2 = bitor(sop2,s.SOP2_MASK_ANSWER,'uint8');

seq = s.GetNewCmdSeq();

end

% figure out dlen from data

dlen = length(data)+1;

% compute checksum beginning with did


chk = bitcmp(mod(sum(uint8([did,cid,seq,dlen,data])),256),'uint8');

packet = uint8([sop1,sop2,did,cid,seq,dlen,data,chk]);

fwrite(s.bt,packet,'uint8');

if answer_flag, [fail,resp] = s.WaitForCommandResponse(); end

end

The WriteClientCommandPacket() method constructs the packet byte

string from the provided DID, CID, <DATA>, and configuration flags by computing SOP2,

DLEN, and CHK. Then the packet is written to the Bluetooth object. Optionally, if

answer_flag is set, WaitForCommandResponse() is called to spin with a timeout

waiting to read a RSP. This method returns a failure status along with any RSP <DATA> that

has been passed back to the client.

function [fail,resp] = WaitForCommandResponse(s)

fail = true; resp = [];

tic;

while fail && (toc < s.WAIT_FOR_CMD_RSP_TIMEOUT)

if ~isempty(s.response_packet) && (s.response_packet.seq == s.seq)

% check for successful response

fail = s.CheckResponseFailure();

dlen = s.response_packet.dlen;

if ~fail && (dlen > 0)

resp = s.response_packet.data;

end

s.response_packet = [];

else

pause(s.WAIT_FOR_CMD_RSP_DELAY);

end


end

end

The method WaitForCommandResponse() spins in a while ... end loop

with a timeout waiting for the response_packet property to be filled with response data

by BytesAvailableFcn(). If a response is found, the failure status, and possibly response data, are returned up the call stack to the originating Sphero API method, where the data will be interpreted and the failure status is returned to the base calling workspace.


The implementation of the Roll() API method shows the way in which API

<DATA> parameters may be passed through to be written in the CMD packet.

function fail = Roll(s,speed,heading,state,varargin)

% input validation

[reset_timeout_flag,answer_flag] = s.ParseVargs(varargin{:});

switch state

case 'normal'

roll_state = s.ROLL_STATE_NORMAL;

case 'fast'

roll_state = s.ROLL_STATE_FAST;

case 'stop'

roll_state = s.ROLL_STATE_STOP;

end

speed = round(255*speed);

heading = floor(wrapTo360(heading));

heading_arr = s.ByteArrayFromInteger(heading,'int16');

did = s.DID_SPHERO; cid = s.CMD_ROLL;

data = [speed,heading_arr,roll_state];

fail = s.WriteClientCommandPacket(did,cid,data,...

reset_timeout_flag,answer_flag);

end

And the implementation of the ReadLocator() API method shows the way in

which RSP <DATA> is returned to, and handled by, an API method.

function [fail,locator_data] = ReadLocator(s,varargin)

[reset_timeout_flag,~] = s.ParseVargs(varargin{:});

did = s.DID_SPHERO; cid = s.CMD_READ_LOCATOR; data = [];

[fail,data] = s.WriteClientCommandPacket(did,cid,data,...

reset_timeout_flag,true);

% handle response data

locator_data = [];

if fail || isempty(data) || (10~=length(data))

fail = true;

return;

end

s.time = s.time_since_init;

% process data

[x,y,dx,dy,v] = s.LocatorDataFromData(data);

s.odo = [x,y];

s.vel = [dx,dy];

s.sog = v;

locator_data.odo = [x,y];

locator_data.vel = [dx,dy];

locator_data.sog = v;

end


The majority of the code in this method deals with handling the response data. Since the data is also associated with SpheroCore properties, it is assigned to the appropriate properties, odo, vel, and sog, and is also assigned to an output structure locator_data in order to be returned to the caller.

Whenever a valid instance of SpheroCore exists, the BytesAvailableFcn() method receives callbacks for the event that new incoming data is available on the Bluetooth object s.bt. The process flow for the reading of incoming data is shown in Figure 3-4. Since the reading of packet data can obviate the need for the callback to execute a number of times, we keep track of the number of extra bytes read within this callback function. If the num_bytes to skip is positive, we decrement this value and return. Otherwise, the SpinProtocol() method is called.

Figure 3-4: Sphero API receive flowchart

Within the SpinProtocol() method the incoming data is read in sequence to

identify the start of packet for a RSP with FFFFh or a MSG with FFFEh. Once the incoming


packet type is identified, the remainder of the packet is read using DLEN, num_bytes is

accumulated, and the appropriate action is taken. For a RSP packet, the response_packet

property is set so that WaitForCommandResponse() can receive the response data. A

MSG will trigger the invocation of the appropriate MSG handler. The implementation of

SpinProtocol() is shown in Appendix A.3 for reference.

In addition to providing the means by which to call Sphero API functions on

Sphero, SpheroCore also keeps track of all of the device data received from Sphero. This is

performed by the maintenance of class properties for each data source. Table 3-16 shows a

list of all data sources.

Table 3-16: Sphero data sources

Data Source Description

accel_raw 1 × 3 Raw measured acceleration in sensor frame

gyro_raw 1 × 3 Raw angular velocity in sensor frame

motor_emf_raw 1 × 2 Raw motor EMF (left, right)

motor_pwm_raw 1 × 2 Raw motor PWM (left, right)

imu_rpy_filt 1 × 3 Filtered roll, pitch, yaw determined from quat

accel_filt 1 × 3 Filtered measured acceleration

gyro_filt 1 × 3 Filtered angular velocity

motor_emf_filt 1 × 2 Filtered motor EMF (left, right)

quat 1 × 4 Estimated unit quaternion, orientation of the world with respect to the sensor frame

rot 3 × 3 Rotation matrix determined from quat

accel_one Scalar magnitude of measured acceleration

odo 1 × 2 Planar position of Sphero


vel 1 × 2 Planar velocity of Sphero

sog Scalar speed over ground, magnitude of planar velocity

Each of the data sources shown in Table 3-16 exists as a property of SpheroCore that can be accessed to retrieve the most recently known value for the data source. If no value has been reported up to the current time, a NaN value is used as a placeholder. Consecutive instances of values received for these data sources are also stored in log properties that are given similar names, with “_log” appended. Two more properties, time and time_log, hold the times since SpheroCore initialization at which data values were received in the MATLAB environment.

A minimal use case for SpheroCore may be considered as one in which we

connect to Sphero, use a for ... end loop to roll Sphero around in a circle, and then stop.

This example is implemented in the following code snippet.

s = SpheroCore('Sphero-WPP');
speed = 0.5; state = 'normal';
reset_timeout_flag = []; answer_flag = false;
for heading = 1:10:360 % roll in a circle
  s.Roll(speed,heading,state,...
    reset_timeout_flag,answer_flag)
  pause(0.5);
end
s.Roll(0,0,'normal'); % stop Sphero
s.delete();
clear s;

3.2.3 Sphero Interface

The implementation of SpheroCore follows as closely as possible the patterns

evident in the Sphero API specification. However, in some cases, this documented behavior

is less than ideal. For this reason, this project also provides a subclass of SpheroCore named SpheroInterface, as well as a further subclass Sphero. The


former provides convenience methods to extend and override the default behavior of

SpheroCore, whereas the latter is intended to be a template application class for users to

implement for their usage of SpheroInterface.

The primary extension that SpheroInterface offers to SpheroCore is a

redefinition of the heading used in the Roll() method and the coordinate system used

with both Roll() and ConfigureLocator() (and thus ReadLocator()). First, an

overridden Roll() method negates the value passed in for heading so that SpheroInterface works with a right-handed rotation convention instead of the default left-handed one. Once this preferred convention is imposed, we resolve the

remaining inconvenience with Sphero’s coordinate system. Rotating the reference

coordinate system used in Roll() and ReadLocator() with SpheroCore would require

external management of a consistent rotation angle to be added to the heading for Roll()

and used to configure the locator coordinates before calling ReadLocator(). We simplify

this process and improve the user experience when working with rotation calibrated

coordinates in SpheroInterface. A new property heading_offset keeps track of the

desired rotation. A new method, ConfigureLocatorWithOffset() is used to call

ConfigureLocator() with the heading_offset used to calculate a consistent

yaw_tare argument. And the RollWithOffset() method is a wrapper for Roll() that

adds the heading_offset rotation angle to the heading argument.
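As a sketch of this pattern (the actual implementation may differ in detail), the wrapper amounts to:

function fail = RollWithOffset(s,speed,heading,state,varargin)
% add the calibrated heading_offset before delegating to Roll()
fail = s.Roll(speed,heading + s.heading_offset,state,varargin{:});
end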

Additional modifications such as these may be incorporated by users in the

Sphero class that inherits from SpheroInterface. Some type of application data, or

perhaps a generic userdata property, may be useful to have contained within the Sphero


instance in a user’s program. This framework supports the implementation of such an

application specific class.

3.3 Application Cases

There are many possibilities for use with each of these software projects

individually, and even more when they are combined. In this section, we take a cross section

of the possible use cases to showcase some of the ways in which these projects can be used.

For each of Myo and Sphero we’ll look at both examples that use the command line interface

(CLI) as well as prototypical graphical user interface (GUI) applications. Then, finally, we’ll

look at an application that combines the sensor data from both Myo and Sphero that serves

as the conceptual basis for the human motion analysis application that is the focus of the

remainder of this work.

Videos that show the basic CLI and GUI capabilities packaged with these software

tools for Myo and Sphero can be found at the following locations on YouTube:

Myo: https://www.youtube.com/watch?v=pPh306IgEDo

Sphero: https://www.youtube.com/watch?v=YohxMa_z4Ww

3.3.1 Myo Command Line Interface

The Myo is primarily an input device that may be used to collect raw sensory data

from any of its IMU sources or its eight channel surface EMG sensors. A typical data collection

application that is well suited for implementation in a MATLAB m-code script is the task of

data collection for an experimental trial. In the following example, we create an instance of

MyoMex, wait DURATION seconds, and then access the desired data before cleaning up the

connection.


DURATION = 5; % seconds

mm = MyoMex;

pause(DURATION);

emg = mm.myoData.emg_log;

time = mm.myoData.timeEMG_log;

mm.delete();

figure;

plot(time,emg);

ylabel('emg [normalized activation]');

xlabel('time [s]');

title(sprintf('EMG versus time for a %d[s] trial',DURATION));

xlim([0,DURATION]);

Once we have collected the emg data and its corresponding time vectors, we can

operate on these MATLAB workspace variables in a way of our choosing. Although it’s not

shown here, one may choose to save() the variables in a “.mat” file for future analysis.

Instead, we plot the data to inspect it visually as shown in Figure 3-5.

Figure 3-5: Myo CLI EMG logger plot


Figure 3-5 shows example EMG data from Myo in the format with which it is

stored in MyoData. This is eight dimensional data representing the EMG activation on a

normalized scale in the range [−1,1] that is sampled at a rate of 200Hz. The 1000 samples of

eight dimensional data in this plot are shown without a legend or other signal disambiguation

since the data is shown exclusively for illustrative purposes.

3.3.2 Myo Graphical User Interface

The Myo SDK MATLAB MEX Wrapper project comes packaged with two GUIs. These GUIs are used to instantiate and destroy the MyoMex object and to monitor

the data from each Myo using its corresponding MyoData instance.

The MyoMexGUI_Monitor is used to control the creation and deletion of

MyoMex objects. This GUI, shown in Figure 3-6, contains prompts for users to specify the

desired number of Myos to be used in a new session before instantiating MyoMex by clicking

the “Init MyoMex” pushbutton. Following instantiation, the same pushbutton changes to

provide functionality to “Delete MyoMex.” Once MyoMex is created, users can launch either

one or two instances of the MyoDataGUI_Monitor by selecting the appropriate

checkbox(es). The possible number of MyoDataGUI_Monitor instances depends on the

number of Myos that have been connected to in the current session.


Figure 3-6: Myo GUI MyoMexGUI_Monitor

The MyoDataGUI_Monitor, shown in Figure 3-7, is designed to receive a handle

reference to a MyoData object as its first input argument. The GUI manages the creation and

deletion of its own timer object to schedule graphical updates. The timer callback function

receives the MyoData handle reference as an additional parameter to access Myo data for

use in the visualizations provided as the main features of this GUI.

Figure 3-7: Myo GUI MyoDataGUI_Monitor

The right hand side of the GUI contains strip charts showing the most recent data

from the quat, gyro, accel, and emg sources along with a representation of the signal

magnitude for the latter three sources (thick line). The top left of the GUI contains

pushbutton controls to toggle the streaming status and radio buttons to toggle the


representation of the vector data shown in the strip charts. The gyro and accel data can

be viewed in coordinates of either the sensor or fixed frame of reference.

The left side of the GUI also shows a visualization of the quat data. The cylinder

in this plot represents the subject’s lower arm, with the elbow at the origin. Slider controls

are used to allow the user to reorient the fixed frame on the screen so that correspondence

between the virtual model (cylinder) and physical model (subject lower arm) can be

obtained by manual adjustment. Also on the left side, at bottom, an indication of detected

pose is displayed when this is not either pose_rest or pose_unknown.

3.3.3 Sphero Command Line Interface

Sphero is typically used in one of two types of applications. A teleoperative

application will utilize API functions such as Roll() and possibly ReadLocator() that

enable closed loop control of Sphero’s trajectory. Since an example using Roll() has

already been provided earlier at the end of section 3.2.2, we’ll show the alternative use of

Sphero as an input device here with a data logging example. In the following example, we

create an instance of Sphero, toggle off stabilization mode, set data streaming for

DURATION seconds, collect the data, and then reset stabilization before cleaning up the

connection to Sphero.

DURATION = 5; % seconds

RATE = 50; % Hz

sensors = {'gyro_raw','gyro_filt'};

s = Sphero('Sphero-WPP');

s.SetStabilization(false);

s.SetDataStreaming(RATE,RATE,DURATION,sensors);

pause(DURATION+1);

t = s.time_log;

gr = s.gyro_raw_log;

gf = s.gyro_filt_log;

s.SetStabilization(true);

s.delete();


figure;

plot(t-t(1),gr,'-'); hold on;

plot(t-t(1),gf,':');

legend({'raw','','','filt','',''});

ylabel('gyro [deg/s]');

xlabel('time [s]');

title(sprintf('%s for a %d[s] trial',...
'Gyro (raw and filt) versus time',DURATION));

xlim([0,DURATION]);

The configuration of streaming data in this example was to stream DURATION

seconds of both raw and filtered gyroscope data at a sample rate of RATE Hz with RATE

samples per data frame, or one data frame transmitted per second. The resulting 250

samples of the six signals are shown in the plot of Figure 3-8. Although one may otherwise

choose to save the data log for future analysis, here we’re only interested in visualizing the

data in a plot.

Figure 3-8: Sphero CLI gyroscope logger plot


Figure 3-8 shows the two variants of gyroscope data available from Sphero, both

raw and filtered. The raw data is shown in solid lines and the filtered data is shown in the

dotted lines. As is evident in the plot, when this data was collected, Sphero was subjected to

a rotation about one of its sensor frame’s basis vector axes. Then after a brief pause, it was

rotated about the same axis in the opposite direction. Correctness of the unit conversion can

also be roughly checked here as the 1.5s duration rotation at −1000°/s and the 2s rotation

at about 1250°/s represent a reasonable 3 to 3.5 revolutions per second.

3.3.4 Sphero Graphical User Interfaces

The Sphero API MATLAB SDK package is distributed with a collection of GUIs that

provide users with a way to quickly get started with programmatic interactions with Sphero

as well as a starting point for further development efforts. The GUIs provide features to control the Sphero object, including setting the device color and configuring its heading_offset, along with examples of two application cases for the device. A vector drive

application allows users to drive Sphero while visualizing its odometry data, and a

visualization application allows users to view its IMU data while manipulating the device.

Users may choose to first launch the SpheroGUI_MainControlPanel, shown

in Figure 3-9, to manage the connection to Sphero and launch the other packaged GUIs. In

the event that a user doesn’t know the remote_name for the Sphero device, the “Find

Devices” pushbutton performs a search on the local machine for paired Bluetooth device

candidates. The remote_name is either selected from the list of found devices or entered

manually before clicking the “Connect” pushbutton to begin a session with Sphero. Once

Sphero is connected, this GUI is then used to launch the other GUIs, and then finally clean up

the session by clicking the “Disconnect” pushbutton.


Figure 3-9: Sphero GUI SpheroGUI_MainControlPanel

Also shown in this figure are the GUIs: SpheroGUI_ChangeColor (upper right) and SpheroGUI_ConfigHeadingOffset (lower right)

Two utility GUIs are provided to change Sphero’s color and to configure the

heading_offset as shown at right in Figure 3-9. The color of Sphero is selected by

clicking a color wheel shown in SpheroGUI_ChangeColor, which automatically calls

SetRGBLEDOutput() to update the color displayed by Sphero. The heading_offset

property of SpheroInterface is configured by changing this value using the slider in

SpheroGUI_ConfigHeadingOffset. The slider callback function first updates the

heading_offset property before calling the RollWithOffset() and

ConfigureLocatorWithOffset() methods of SpheroInterface to update Sphero

with the new rotated locator coordinates. Sphero is commanded to roll with no speed to

orient the device with the rotated coordinate system. Since the back light on Sphero is

illuminated by SetBackLEDOutput() when this GUI is open, the user is given feedback on

the current orientation of the locator coordinates.

The first of two example applications showcased in Sphero GUIs is the vector drive

application shown in Figure 3-10. This GUI uses a timer object to schedule updates that

issue streaming (without answer_flag set) calls to RollWithOffset() and refresh the


display of Sphero’s odometry data. The vector drive input is provided by the user who will

click and hold the mouse in the white circle at left in Figure 3-10. The vector drawn from the

center of the circle to the mouse location, illustrated by the red dotted arrow, determines the

speed and heading for Sphero by its magnitude and angle, respectively. When this GUI is

open, Sphero is also configured to stream its odometry data odo and vel, its position and

velocity, to enable graphical updates with this information as shown at the right of this figure.

The trace of its position is displayed along with its current bearing given by vel. The bearing is shown in the plot as a short line segment originating from the center of Sphero.

Figure 3-10: Sphero GUI SpheroGUI_Drive

The second example application for Sphero is the input data visualization shown

in Figure 3-11 named SpheroGUI_VisualizeInputData. Similar to the CLI example,

the stabilization is turned off when this GUI is active. Sphero is set to stream its quaternion,

gyroscope, and accelerometer data at a default sample rate of 40Hz and frame rate of 10Hz.

A timer object is used to schedule graphics updates on the plot elements.


Figure 3-11: Sphero GUI SpheroGUI_VisualizeInputData

Strip charts of the most recent time series data from the gyroscope and

accelerometer are shown at right. Radio buttons at top right are used to change the

coordinate system in which this vector data is represented between “Robot” (sensor) and

“World” (fixed) frame. At left, a spatial visualization of the quaternion data is displayed along

with a slider control that allows the user to orient the fixed coordinates with the screen

coordinates. This allows the user to register the orientation of the virtual model with that of

the physical model.

3.3.5 Myo and Sphero Motion Capture

The application case showcased here, which is the focus of this work, employs both Myo and Sphero simultaneously. The

MyoSphero_UpperLimb GUI shown in Figure 3-12 represents a very simple “zeroth-

order” approximation to the use of two Myo devices and a Sphero device as an upper limb

motion capture system. The subject wears one Myo on each of the upper and lower arm

segments and holds a Sphero with the hand. Then, the lengths of the subject’s upper limb


segments are assumed and the orientations of each Myo and Sphero are assumed to be

exactly the orientations of the corresponding anatomical segments. The forward kinematics

of this virtual model are updated on a schedule with a timer object to provide a near real

time virtual model of the subject’s physical upper limb.

Figure 3-12: Myo and Sphero upper limb motion capture

The controls at left of Figure 3-12 are first used to initialize the Myos and Sphero.

Then, two more configuration options exist for the upper limb virtual model. Since the

application doesn’t support an automatic way to detect which arm is being used, the “Switch

Arm” push button is used to reflect the graphical representation of the upper limb segments

in the plot. The Myo SDK does provide a data source that indicates the likely arm on which

the device is worn, but tests have shown this result to be less reliable than manual

configuration by the user. A “Toggle Myos” checkbox also allows users to change the order

in which the two Myos are taken to represent the upper and lower limb segments.

In the remainder of this work, we move on to refine this application case of human

upper limb motion capture. Through the definition of a formal calibration methodology,


experimental protocol, and analysis methods, we improve the functionality of the current

application prototype.


4 Mathematical Methods

The mathematical theory in this work ranges from the basic general

representations of spatial position level kinematics to the development of a particular

calibration framework that enables us to reconstruct a subject’s upper limb in a virtual world

with registration of physical and virtual fixtures. We begin this section by covering the basics

of the spatial kinematics theory needed in this work and the general methods needed when

working with raw IMU data. Then we develop the forward and inverse kinematics models of

the subject’s upper limb and apply these models to a calibration procedure, experimental

protocol, and experimental data analysis methods.

4.1 Coordinate Frames, Vectors, and Rotations

A coordinate frame is defined by a six degree of freedom translation and rotation

pair with respect to some other coordinate frame. The base coordinate frame in this work is

taken to be the fixed frame 𝐹. This coordinate frame is special since its basis vectors will be

the set of standard basis vectors $\mathbf{e}_x$, $\mathbf{e}_y$, and $\mathbf{e}_z$ given numerically in (4-1).

$$\mathbf{e}_x = \begin{bmatrix}1\\0\\0\end{bmatrix},\qquad \mathbf{e}_y = \begin{bmatrix}0\\1\\0\end{bmatrix},\qquad \mathbf{e}_z = \begin{bmatrix}0\\0\\1\end{bmatrix} \tag{4-1}$$

All other coordinate frames are defined with respect to another coordinate frame,

such as this frame 𝐹 by the pair of a translational displacement vector and a rotation matrix

that describes the relative orientation between the two frames. We may consider these to be

$\mathbf{d}_B^F$ and $\mathbf{R}_B^F$, respectively, for some arbitrary frame $B$. The displacement vector $\mathbf{d}_i^j$ is the vector drawn from the origin of frame $j$ to the origin of frame $i$, and the rotation matrix $\mathbf{R}_i^j$ is the transformation from frame $i$ to frame $j$.

When the context demands that vectors be componentized, it is assumed that

components are taken in the fixed frame 𝐹 unless a left superscript is given to the variable.

For example, the relationship in (4-2) shows the symbolic form of a vector $\mathbf{d}_i^j$ with components taken in frame $k$. The convention when the left superscript is absent is to assume that $k = j$, as is often the case.

$${}^{k}\mathbf{d}_i^j, \qquad \mathbf{d}_i^j \equiv {}^{j}\mathbf{d}_i^j \tag{4-2}$$

Vector operations such as the dot product and cross product are performed in the

typical way; however, the cross product is of particular interest in that we define a new operator $[\,\widetilde{(\cdot)}\,]$ for one chosen representation of this operation. The cross product of two vectors $\mathbf{u}$ and $\mathbf{v}$ is represented as the multiplication of a skew-symmetric matrix $[\tilde{\mathbf{u}}]$ with the original vector $\mathbf{v}$ as shown in (4-3).

$$\mathbf{u}\times\mathbf{v} \equiv [\tilde{\mathbf{u}}]\,\mathbf{v}, \qquad\text{with}\qquad [\tilde{\mathbf{u}}] = \begin{bmatrix} 0 & -u_z & u_y \\ u_z & 0 & -u_x \\ -u_y & u_x & 0 \end{bmatrix} \tag{4-3}$$

A unit quaternion is given by a scalar part $s$ and a vector part $\mathbf{v}$, so that the quaternion $\mathbf{q}$ is given in (4-4) along with its conjugate $\bar{\mathbf{q}}$.

$$\mathbf{q} = \begin{bmatrix} s \\ \mathbf{v} \end{bmatrix}, \qquad \bar{\mathbf{q}} = \begin{bmatrix} s \\ -\mathbf{v} \end{bmatrix} \tag{4-4}$$

We define multiplication on two quaternions $\mathbf{q}_l$ and $\mathbf{q}_r$ (the left and right operands) to produce $\mathbf{q}_p$ as shown in (4-5).

$$\mathbf{q}_p = \mathbf{q}_l\,\mathbf{q}_r = \begin{bmatrix} s_l s_r - \mathbf{v}_l^{\mathrm{T}}\mathbf{v}_r \\ s_l\mathbf{v}_r + s_r\mathbf{v}_l + [\tilde{\mathbf{v}}_l]\,\mathbf{v}_r \end{bmatrix} \tag{4-5}$$

The rotation of a vector $\mathbf{p}$ to its image $\mathbf{p}'$ by a quaternion $\mathbf{q}$ can be computed as in (4-6) by applying the conjugation function of $\mathbf{q}$ to a new quaternion with the vector part given by $\mathbf{p}$ and a zero scalar part.

$$\begin{bmatrix} 0 \\ \mathbf{p}' \end{bmatrix} = \mathbf{q} \begin{bmatrix} 0 \\ \mathbf{p} \end{bmatrix} \bar{\mathbf{q}} \tag{4-6}$$

The rotation matrix $\mathbf{R}$ that performs this same operation on $\mathbf{p}$ is calculated from $\mathbf{q}$ as shown in (4-7). We derive this formula for $\mathbf{R}$ by rotating each of the standard basis vectors by $\mathbf{q}$ as follows.

$$\begin{bmatrix} 0 \\ \mathbf{R}_{(\cdot,1)} \end{bmatrix} = \mathbf{q}\begin{bmatrix} 0 \\ \mathbf{e}_x \end{bmatrix}\bar{\mathbf{q}}, \qquad \begin{bmatrix} 0 \\ \mathbf{R}_{(\cdot,2)} \end{bmatrix} = \mathbf{q}\begin{bmatrix} 0 \\ \mathbf{e}_y \end{bmatrix}\bar{\mathbf{q}}, \qquad \begin{bmatrix} 0 \\ \mathbf{R}_{(\cdot,3)} \end{bmatrix} = \mathbf{q}\begin{bmatrix} 0 \\ \mathbf{e}_z \end{bmatrix}\bar{\mathbf{q}}$$

and,

$$\begin{aligned} \mathbf{R}(\mathbf{q}) &= \begin{bmatrix} \mathbf{R}_{(\cdot,1)} & \mathbf{R}_{(\cdot,2)} & \mathbf{R}_{(\cdot,3)} \end{bmatrix} \\ &= \begin{bmatrix} s^2+v_x^2-v_y^2-v_z^2 & 2(v_x v_y - s v_z) & 2(v_x v_z + s v_y) \\ 2(v_x v_y + s v_z) & s^2-v_x^2+v_y^2-v_z^2 & 2(v_y v_z - s v_x) \\ 2(v_x v_z - s v_y) & 2(v_y v_z + s v_x) & s^2-v_x^2-v_y^2+v_z^2 \end{bmatrix} \\ &= (s^2 - \mathbf{v}^{\mathrm{T}}\mathbf{v})\,\mathbf{I}_{3\times 3} + 2(\mathbf{v}\mathbf{v}^{\mathrm{T}} + s[\tilde{\mathbf{v}}]) \end{aligned} \tag{4-7}$$

Then substituting the unit norm constraint on $\mathbf{q}$, namely $1 = s^2 + \mathbf{v}^{\mathrm{T}}\mathbf{v}$, into the first term of the final line in (4-7) gives this conversion formula in its final form, shown here in (4-8).

$$\mathbf{R}(\mathbf{q}) = (1 - 2\mathbf{v}^{\mathrm{T}}\mathbf{v})\,\mathbf{I}_{3\times 3} + 2(\mathbf{v}\mathbf{v}^{\mathrm{T}} + s[\tilde{\mathbf{v}}]) \tag{4-8}$$
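A minimal MATLAB sketch of this conversion (the function name is chosen here for illustration) is:

function R = RotationFromQuaternion(q)
% q = [s; v] is a unit quaternion with scalar part s and vector part v
s = q(1); v = q(2:4); v = v(:);
vt = [0,-v(3),v(2); v(3),0,-v(1); -v(2),v(1),0];   % skew matrix of (4-3)
R = (1 - 2*(v'*v))*eye(3) + 2*(v*v' + s*vt);       % equation (4-8)
end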

4.2 Working with Sensor Data

The IMU data that we receive from the sensors in both Myo and Sphero is subject to a few similar characteristics. These are explained in detail with respect to use of the data for kinematics modeling and analysis. The position level data, an estimated quaternion, must be calibrated by capturing a datum for rotational offset that is to be removed in future realizations of data. The vector data, from the gyroscope and accelerometer, must also be interpreted with its components in the correct reference frame. And

finally, the measured acceleration from the accelerometer can conveniently be adjusted to

calculate the kinematic acceleration.


4.2.1 Setting Home Pose

The IMU estimated orientation data must initially be calibrated against a known

orientation before its absolute orientation can be calculated at a later time. This is due to the

fact that the estimation algorithms are initialized with an unknown inertial reference frame

at an unknown time. The initial inertial reference frame $N_s$ for a sensor frame $s$ encodes very little information about the physical world. All that is known about $N_s$ is that its vertical basis vector $\mathbf{z}_{N_s}$ is antiparallel to gravity. Otherwise, this reference frame, and thus the orientation

of the sensor, is unknown with respect to any rotation about the gravity vector. For this

reason, the following procedure is necessary in use cases when the absolute orientation of

the sensor is desired.

The general procedure used to set the home pose for a sensor is simply described

as one in which we capture the orientation of the sensor during a known physical orientation

so that it can be removed as a sort of rotational offset in future realizations of the data. We

compute the relationship between the raw sensor orientation $\mathbf{R}_s^{N_s}$ and the calibrated sensor orientation in the fixed frame $\mathbf{R}_s^F$ by capturing offsets $\mathbf{R}_F^{N_s}$ simultaneously and applying the following loop closure equation.

$$\mathbf{R}_s^{N_s} = \mathbf{R}_F^{N_s}\,\mathbf{R}_s^F \tag{4-9}$$

Each offset $\mathbf{R}_F^{N_s}$ is dependent upon a definition of the home pose for the sensor in the fixed frame, $\tilde{\mathbf{R}}_s^F$, which is chosen to be identity in this work. The offsets are computed by solving the preceding loop closure equation with $\mathbf{R}_s^F = \tilde{\mathbf{R}}_s^F$.

$$\mathbf{R}_F^{N_s} = \mathbf{R}_s^{N_s}(\tilde{\mathbf{R}}_s^F)^{\mathrm{T}} = \mathbf{R}_s^{N_s}\,\mathbf{I}_{3\times 3} = \mathbf{R}_s^{N_s} \tag{4-10}$$

Subsequent raw sensor orientation data is then interpreted with respect to the

fixed frame by solving the rotational loop closure equation for $\mathbf{R}_s^F$ and applying the offsets captured during home pose calibration.

$$\mathbf{R}_s^F = (\mathbf{R}_F^{N_s})^{\mathrm{T}}\,\mathbf{R}_s^{N_s} \tag{4-11}$$
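In code, the home pose procedure reduces to a few lines (a sketch reusing the RotationFromQuaternion() helper above; q_home and q_new stand for hypothetical raw quaternion samples):

% capture the offset while the sensor holds the home pose, per (4-10)
R_offset = RotationFromQuaternion(q_home);           % equals R_s^Ns at home
% interpret every subsequent sample in the fixed frame, per (4-11)
R_cal = R_offset' * RotationFromQuaternion(q_new);   % R_s^F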

4.2.2 Vector Data Interpretation

The IMU sensors also contain data from gyroscope and accelerometer sources that

must be interpreted properly for effective use. In addition to understanding the physical

meaning of the data, we must have an awareness of how to work with this data between the

various relevant coordinate systems.

The raw gyroscope and accelerometer data from both Myo and Sphero bears

characteristics that are typical for IMU data. Both sources are reported, after having been

measured, in the rotating body frame $s$ of the sensor. The gyroscope data is, as expected, the angular velocity vector ${}^{s}\boldsymbol{\omega}$ with components taken in the $s$ frame and units of degrees per

second. Describing the accelerometer data in terms of kinematic quantities is slightly more

involved.

The nature of measuring accelerations involves relying upon the indirect

observation of acceleration through computing the force due to both acceleration and gravity

acting on a proof mass. The effect of this is that the measured acceleration ${}^{s}\hat{\mathbf{a}}$ is a summation of the kinematic acceleration $\mathbf{a}$ and the acceleration due to gravity $\mathbf{g}$. We denote the gravity vector in inertial coordinates to be given by (4-12).

$${}^{N_s}\mathbf{g} = -g\,\mathbf{e}_z \tag{4-12}$$

The measured acceleration is then equal to the combination of kinematic

acceleration and apparent acceleration due to gravity as shown in (4-13).

$${}^{s}\hat{\mathbf{a}} = {}^{s}\mathbf{a} - {}^{s}\mathbf{g} \tag{4-13}$$

Fortunately, with both Myo and Sphero we have access to an estimated quaternion relating the $s$ and $N_s$ frames, from which we can compute the rotation matrix $\mathbf{R}_s^{N_s}$. This rotation matrix, along with the definition of the gravity vector in (4-12), can be applied to (4-13) to solve for the kinematic acceleration of the sensor with components in the inertial frame $N_s$, shown here.

$${}^{N_s}\mathbf{a} = {}^{N_s}\hat{\mathbf{a}} + {}^{N_s}\mathbf{g} = \mathbf{R}_s^{N_s}\,{}^{s}\hat{\mathbf{a}} - g\,\mathbf{e}_z \tag{4-14}$$

An alternative perspective on the kinematic acceleration that is simpler than

(4-14) while losing some information can be found in looking only at its magnitude. The

computation of kinematic acceleration magnitude is performed by taking the norm of the

measured acceleration and subtracting the magnitude of acceleration due to gravity.

$$\|\mathbf{a}\| = \|\hat{\mathbf{a}}\| - g \tag{4-15}$$


The interpretation of the sensor angular velocity is straightforward in that the

vector can be taken into the inertial frame by premultiplication of the rotation matrix.

$${}^{N_s}\boldsymbol{\omega} = \mathbf{R}_s^{N_s}\,{}^{s}\boldsymbol{\omega} \tag{4-16}$$
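A minimal sketch of (4-14) and (4-16) in MATLAB (R stands for the rotation matrix computed from the estimated quaternion; a_hat_s and w_s are hypothetical raw accelerometer and gyroscope samples in the $s$ frame):

g    = 1;                           % acceleration expressed in units of g
a_Ns = R*a_hat_s(:) - g*[0;0;1];    % kinematic acceleration in N_s, (4-14)
w_Ns = R*w_s(:);                    % angular velocity in N_s, (4-16)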

4.3 Upper Limb Kinematics

The home calibrated sensor data can be used to represent the subject’s upper limb

motion in multiple ways. The data can be used directly with a forward kinematics model of

the upper limb. This is the way that we represent the subject’s limb through the calibration

process in the following section. Alternatively, we can map the sensor data back onto a set of

scalar joint angles through an inverse kinematics calculation. Finally, a variation of the

forward kinematics representation can be computed on the result from inverse kinematics.

4.3.1 Forward Kinematics

The forward kinematics model that we adopt relies strictly upon the estimated

quaternion data from both Myo devices and Sphero. Our goal here is to determine the

position of a point of interest on the subject’s upper limb by using only data from the sensors.

These calculations will then enable us to perform calibration of the kinematic model within

its task environment in the following section.

In order to relate the sensor data to the human upper limb, we make a critical

assumption. We assume that each sensor is rigidly attached to its respective limb so that the

orientation of the sensor is exactly the orientation of the limb at the point of attachment

between the sensor and the limb. With this assumption, we can now think about the

orientation of three segments of the upper limb as directly known from the data.


The three segments of the upper limb are taken to be represented by three

coordinate frames $U$, $L$, and $H$ attached to the upper arm, lower arm, and hand, respectively.

Each of these sensor frames is attached to the limb so that the 𝒙 axis points in the lateral

direction and the 𝒛 axis points vertically upward when the subject assumes the t-pose with

the arm extended laterally and palm facing down.

The connectivity between these segments is modeled only by spherical joints at this stage, since we’re relying only upon the sensor data. In this case, the so-called sensor joint variables are represented by the rotation matrices $\mathbf{R}_U$, $\mathbf{R}_L$, and $\mathbf{R}_H$. Note

that for the remainder of this work, when a rotation matrix lacks a superscript reference

frame, it is to be taken with respect to the fixed frame 𝐹.

With the inclusion of four geometrical parameters, the three arm segment lengths $l_U$, $l_L$, $l_H$ and the radius of Sphero $r_S$, we can now fully specify the mapping of sensor joint data onto the location of Sphero’s center point in the fixed frame, $\mathbf{d}_S^F$. Of course, this formulation

involves one more assumption that the center of Sphero exists vertically downward from the

origin of frame 𝐻 when the subject is in home position. So long as Sphero does not move with

respect to the subject’s wrist point as viewed in the hand frame 𝐻 , this is a reasonable

assumption.

The mathematical expression of this forward kinematics formulation is aided by

the visual depiction of Figure 4-1 and is given by the following vector loop closure equation.

$$\mathbf{d}_S^F = \mathbf{l}_U + \mathbf{l}_L + \mathbf{l}_H + \mathbf{r}_S \tag{4-17}$$

The point of interest is the center of Sphero located by $\mathbf{d}_S^F$, and the vectors $\mathbf{l}_{(\cdot)}$ and $\mathbf{r}_S$ point along the $\mathbf{x}_{(\cdot)}$ and $\mathbf{z}_H$ basis vectors, respectively. This formula can be represented more directly in terms of the sensor data rotation matrices and the geometrical parameters by writing it in terms of the parameters’ scalar magnitudes as shown in (4-18).

Figure 4-1: Upper limb forward kinematics model

$$\mathbf{d}_S^F = \mathbf{R}_U\mathbf{e}_x l_U + \mathbf{R}_L\mathbf{e}_x l_L + \mathbf{R}_H\mathbf{e}_x l_H - \mathbf{R}_H\mathbf{e}_z r_S \tag{4-18}$$
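Evaluated numerically, (4-18) is a one-line computation (a sketch; R_U, R_L, R_H are the homed segment rotations and l_U, l_L, l_H, r_S the geometrical parameters):

ex  = [1;0;0]; ez = [0;0;1];
d_S = R_U*ex*l_U + R_L*ex*l_L + R_H*ex*l_H - R_H*ez*r_S;   % (4-18)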

Finally, we notice that the sensor data must be homed before it can be used to

compute the forward kinematics. This is performed as described above in (4-10) by first

capturing the rotational offsets during performance of the t-pose using (4-19).

$$\mathbf{R}_F^{N_U} = \mathbf{R}_U^{N_U}, \qquad \mathbf{R}_F^{N_L} = \mathbf{R}_L^{N_L}, \qquad \mathbf{R}_F^{N_H} = \mathbf{R}_H^{N_H} \qquad \text{(during home pose)} \tag{4-19}$$

After these rotational offsets are recorded, they are to be applied to all future

sensor data in order to transform it to the common fixed frame 𝐹 as described generally in

(4-11) and shown specifically here in (4-20).

$$\mathbf{R}_U = \mathbf{R}_U^F = (\mathbf{R}_F^{N_U})^{\mathrm{T}}\mathbf{R}_U^{N_U}, \qquad \mathbf{R}_L = \mathbf{R}_L^F = (\mathbf{R}_F^{N_L})^{\mathrm{T}}\mathbf{R}_L^{N_L}, \qquad \mathbf{R}_H = \mathbf{R}_H^F = (\mathbf{R}_F^{N_H})^{\mathrm{T}}\mathbf{R}_H^{N_H} \tag{4-20}$$

4.3.2 Inverse Kinematics

As a final step in representing a model of human upper limb motion, we may wish

to map the rotation data for each limb segment into a minimal parameter description of the

joint motion described by a set of scalar rotation angles. To reason about this process at a

conceptual level higher than simple geometry, we leverage two Euler angle sets to calculate

the joint angles in a systematic way. The inverse Euler angle mapping is then applied to the

three relative rotation matrices between adjacent limb segments. Then, this particular nine

parameter description is reduced to an assumed seven degrees of freedom by introducing

two constraints on the parameter set.

Euler angle parameterization is given by an ordered sequence of three rotations

performed about the principal axes. The sequence is denoted $i\text{-}j\text{-}k$ to specify the axes of rotation about which the proximal frame $N$ is rotated through the angles $\theta_1$, $\theta_2$, and $\theta_3$, respectively, to result in the distal frame $D$. Each successive rotation is performed with

respect to the coordinate frame resulting from the previous rotation. The composed rotation

matrix is given by (4-21).

$$\mathbf{R}_D^N = \mathbf{M}_i(\theta_1)\,\mathbf{M}_j(\theta_2)\,\mathbf{M}_k(\theta_3) = \mathbf{M}_{ijk} \tag{4-21}$$

Here $i$, $j$, and $k$ can be any of $x$, $y$, or $z$, and the matrices $\mathbf{M}_x(\theta)$, $\mathbf{M}_y(\theta)$, and $\mathbf{M}_z(\theta)$ are the rotation transformations about the principal axes shown in (4-22), using the shorthand $c_\theta = \cos\theta$ and $s_\theta = \sin\theta$ (and $c_i = \cos\theta_i$, $s_i = \sin\theta_i$ below).

$$\mathbf{M}_x(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_\theta & -s_\theta \\ 0 & s_\theta & c_\theta \end{bmatrix}, \quad \mathbf{M}_y(\theta) = \begin{bmatrix} c_\theta & 0 & s_\theta \\ 0 & 1 & 0 \\ -s_\theta & 0 & c_\theta \end{bmatrix}, \quad \mathbf{M}_z(\theta) = \begin{bmatrix} c_\theta & -s_\theta & 0 \\ s_\theta & c_\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{4-22}$$

In this work, we leverage two nonsymmetric parameterizations to perform the

decomposition of the joint rotation matrices into rotations about the assumed joint axes. The

𝑧 − 𝑦 − 𝑥 and 𝑥 − 𝑦 − 𝑧 parameterizations will be used and derived in the remainder of this

section. Applying (4-21) and (4-22) to these sequences allows us to calculate the

compositions 𝑴𝑥𝑦𝑧 and 𝑴𝑧𝑦𝑥 as shown in (4-23).

$$\mathbf{M}_{xyz} = \begin{bmatrix} c_2 c_3 & -c_2 s_3 & s_2 \\ c_1 s_3 + s_1 s_2 c_3 & c_1 c_3 - s_1 s_2 s_3 & -s_1 c_2 \\ s_1 s_3 - c_1 s_2 c_3 & s_1 c_3 + c_1 s_2 s_3 & c_1 c_2 \end{bmatrix}$$

$$\mathbf{M}_{zyx} = \begin{bmatrix} c_1 c_2 & -s_1 c_3 + c_1 s_2 s_3 & s_1 s_3 + c_1 s_2 c_3 \\ s_1 c_2 & c_1 c_3 + s_1 s_2 s_3 & -c_1 s_3 + s_1 s_2 c_3 \\ -s_2 & c_2 s_3 & c_2 c_3 \end{bmatrix} \tag{4-23}$$

Inversion of these composed rotation matrices will enable us to determine joint

angles between adjacent upper limb segments. We’ll denote the elements of the matrices

$\mathbf{M}_{(\cdot,\cdot,\cdot)} = [m_{ij}]$ by their subscripts as $m_{ij}$ in the following Euler angle inversion formulas, shown in (4-24) and (4-25).

$$\theta_1 = \operatorname{atan2}(-m_{23},\, m_{33}) = \operatorname{atan2}(s_1 c_2,\, c_1 c_2), \quad \theta_2 = \operatorname{asin}(m_{13}) = \operatorname{asin}(s_2), \quad \theta_3 = \operatorname{atan2}(-m_{12},\, m_{11}) = \operatorname{atan2}(c_2 s_3,\, c_2 c_3) \qquad (x\text{-}y\text{-}z) \tag{4-24}$$

$$\theta_1 = \operatorname{atan2}(m_{21},\, m_{11}) = \operatorname{atan2}(s_1 c_2,\, c_1 c_2), \quad \theta_2 = \operatorname{asin}(-m_{31}) = \operatorname{asin}(s_2), \quad \theta_3 = \operatorname{atan2}(m_{32},\, m_{33}) = \operatorname{atan2}(c_2 s_3,\, c_2 c_3) \qquad (z\text{-}y\text{-}x) \tag{4-25}$$
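As a minimal sketch, the $z$-$y$-$x$ inversion (4-25) applied to a rotation matrix M in MATLAB reads:

theta1 = atan2(M(2,1), M(1,1));   % rotation about z
theta2 = asin(-M(3,1));           % rotation about y
theta3 = atan2(M(3,2), M(3,3));   % rotation about x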

The inverse Euler angle formulas in (4-24) and (4-25) are applied to the relative

rotations between segments $\mathbf{R}_U^F$, $\mathbf{R}_L^U$, and $\mathbf{R}_H^L$, with alternate variable names assigned to this

representation of the shoulder, elbow, and wrist joint rotations given in Table 4-1. These

variable names are used to disambiguate the numbered Euler angles presented previously.

Table 4-1: Inverse kinematics joint variable definitions

Joint     i-j-k   M_ijk   θ1     θ2     θ3
Shoulder  z-y-x   R_U^F   θ_sz   θ_sy   θ_sx
Elbow     z-y-x   R_L^U   θ_ez   θ_ey   θ_ex
Wrist     x-y-z   R_H^L   θ_wx   θ_wy   θ_wz

The interpretation of these nine parameters such that they accurately represent the pose of a seven degree of freedom human arm is subject to two additional pieces of information. First, we notice that the angles $\theta_{ex}$ and $\theta_{wx}$ are rotations about the same axis. These two variables represent the contributions from the two sensor sources $\mathbf{R}_L^U$ and $\mathbf{R}_H^L$ to a single physical joint rotation of the lower arm, $\phi_{ex}$, that is calculated by the sum in (4-26).

$$\phi_{ex} = \theta_{ex} + \theta_{wx} \tag{4-26}$$


The remaining extra degree of freedom in our parameterization is superficial with

respect to the physical motion of the human upper limb. We assume that the rotation of the

elbow is a rotation purely about the 𝑧 and 𝑥 axes. Thus, any nonzero value of the variable 𝜃𝑒𝑦

represents error in the determined pose because of our assumption in (4-27).

$$\theta_{ey} = 0 \tag{4-27}$$

The joint variables used to represent the upper limb pose are then taken to be $\theta_{sz}$, $\theta_{sy}$, $\theta_{sx}$, $\theta_{ez}$, $\phi_{ex}$, $\theta_{wy}$, and $\theta_{wz}$. After mapping the joint rotations into this set of joint variables,

we must check the validity of the assumption in (4-27) to ensure that excessive error doesn’t

exist in this pose representation.

Reconstruction of the forward kinematics can then be computed using the results

from inverse kinematics. One must use the formulas in (4-23) to compute $\mathbf{R}_L^U$ with $\theta_{ey} = 0$ and then use this matrix to compute $\mathbf{R}_L^F$ before continuing on to compute the forward kinematics using joint rotation matrices as previously described. The validity of the joint angles and the inverse kinematics motion model can be checked by comparing this reconstructed forward kinematics result with the result computed from the sensor data directly, by computing the distance between the calculated $\mathbf{d}_S^F$ from each method.

4.4 Calibration Problem

After having removed rotational offsets from the sensor data and then mapping

the data through the forward kinematics calculation, we realize the need for calibration.

There are two problems that this calibration is set to solve. First, the geometric parameters

of the upper limb needed in order to compute the forward kinematics equation are not known. We must devise some method to estimate these parameters before they can be used reliably. Second, the fixed frame defined at the moment the home pose is set has little relationship to the physical world. Supposing that we desire to relate the kinematics of the subject’s arm to the location of objects in the physical world, it is critical to determine the rigid

body transformation between the fixed frame 𝐹 and some known coordinate system of

interest in the task environment.

The proposed calibration method involves the use of a calibration jig in which

calibration point fixtures are installed at known relative locations as depicted in Figure 4-2.

The calibration point fixtures are designed to mechanically interface Sphero so that its center

point remains invariant under spherical rotation. With many data realizations captured

while the subject is holding Sphero at each of multiple calibration points, we estimate the

upper limb geometric parameters and the calibration point locations in one optimization

step.

Figure 4-2: Calibration point definitions

The use of three calibration points $C_1$, $C_2$, and $C_3$ provides enough information to

reconstruct a rigid body transformation between the fixed frame 𝐹 and a new task frame 𝑇

that is embedded in the calibration jig. Figure 4-2 shows the three calibration points and

their locating vectors with respect to the fixed frame origin and with components taken in

the fixed frame. The treatment of constraints on the calibration points is the additional


information that allows us to hone in on the optimal values for the upper limb geometrical

parameters and the calibration point locations.

As we develop the constraints on the calibration points, we will need to assume

some compatible configuration of the calibration points within a calibration jig. Figure 4-3

depicts the three calibration points along with four vectors that are computed from the

locations of the calibration points. There are three relative displacement vectors and one

vector that’s normal to the plane spanned by these relative displacement vectors.

Constraints on the geometry of the calibration points will be applied directly to the

components of these vectors taken in the fixed frame 𝐹.

Figure 4-3: Calibration point calculated vectors

Some examples of the constraints we will develop from these calculated vectors

include enforcing known vector magnitudes, known vector component values, and vector orthogonality. The latter is a property that is exploited in efforts to simplify the definition of

a unique coordinate frame in the task space 𝑇 as depicted in Figure 4-4.

Figure 4-4: Choice of task space coordinate frame


We choose to define the task frame 𝑇 directly from the relative displacement

vectors, with the origin’s position and basis vectors given with respect to the fixed frame by (4-28).

$$\mathbf{d}_T^F = \mathbf{d}_{C_2}^F, \qquad \mathbf{x}_T = \frac{\mathbf{r}_{C_2/C_3}}{r_{C_2/C_3}}, \qquad \mathbf{y}_T = \frac{\mathbf{r}_{C_2/C_1}}{r_{C_2/C_1}}, \qquad \mathbf{z}_T = \frac{\mathbf{n}_T}{n_T} = \frac{\mathbf{r}_{C_2/C_3} \times \mathbf{r}_{C_2/C_1}}{\|\mathbf{r}_{C_2/C_3} \times \mathbf{r}_{C_2/C_1}\|} \tag{4-28}$$

In the remainder of this section we develop the optimization framework including

formulation of the objective function and the geometrical constraints followed by a

discussion of solving the problem computationally.

4.4.1 Objective Function

The objective of this calibration optimization is to minimize the error between the position $\boldsymbol{d}_S^F$ of the center of Sphero computed from the forward kinematics of (4-18) and the location $\boldsymbol{d}_{C_i}^F$ of each $i$-th calibration point, in order to determine the lengths of the upper limb segments and the locations of the calibration points. Figure 4-5 shows the definition of the error vector $\boldsymbol{\epsilon} = \boldsymbol{d}_{C_i}^F - \boldsymbol{d}_S^F$ that we will minimize over the design variables $\boldsymbol{x}$ shown in (4-29).

Figure 4-5: Calibration objective function error vector

$$\boldsymbol{x} = \begin{bmatrix} \boldsymbol{l}^{\mathrm{T}} & (\boldsymbol{d}_{C_1}^F)^{\mathrm{T}} & (\boldsymbol{d}_{C_2}^F)^{\mathrm{T}} & (\boldsymbol{d}_{C_3}^F)^{\mathrm{T}} \end{bmatrix}^{\mathrm{T}}, \qquad \boldsymbol{l} = \begin{bmatrix} l_U & l_L & l_H \end{bmatrix}^{\mathrm{T}} \tag{4-29}$$

We begin the formulation of the objective function by considering only the $k$-th realization for the $i$-th calibration point, substituting $\boldsymbol{\epsilon}$ into (4-18), and factoring out the variables in $\boldsymbol{x}$ while isolating $\boldsymbol{\epsilon}$.

$$\begin{aligned} \boldsymbol{d}_S^F &= \boldsymbol{R}_U \boldsymbol{e}_x l_U + \boldsymbol{R}_L \boldsymbol{e}_x l_L + \boldsymbol{R}_H \boldsymbol{e}_x l_H - \boldsymbol{R}_H \boldsymbol{e}_z r_s \\ -\boldsymbol{\epsilon} &= \boldsymbol{R}_U \boldsymbol{e}_x l_U + \boldsymbol{R}_L \boldsymbol{e}_x l_L + \boldsymbol{R}_H \boldsymbol{e}_x l_H - \boldsymbol{d}_{C_i}^F - \boldsymbol{R}_H \boldsymbol{e}_z r_s \\ -\boldsymbol{\epsilon} &= \begin{bmatrix} \boldsymbol{R}_U \boldsymbol{e}_x & \boldsymbol{R}_L \boldsymbol{e}_x & \boldsymbol{R}_H \boldsymbol{e}_x & -\boldsymbol{I}_{3\times3} \end{bmatrix} \begin{bmatrix} l_U \\ l_L \\ l_H \\ \boldsymbol{d}_{C_i}^F \end{bmatrix} - \boldsymbol{R}_H \boldsymbol{e}_z r_s \\ -\boldsymbol{\epsilon} &= \begin{bmatrix} \boldsymbol{A}_k^i & -\boldsymbol{I}_{3\times3} \end{bmatrix} \begin{bmatrix} \boldsymbol{l} \\ \boldsymbol{d}_{C_i}^F \end{bmatrix} - \boldsymbol{b}_k^i \end{aligned} \tag{4-30}$$
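As a sketch of how a single realization enters this formulation, the blocks $\boldsymbol{A}_k^i$ and $\boldsymbol{b}_k^i$ of (4-30) can be computed in MATLAB from one sample of the three sensor rotation matrices; the variable names (RU, RL, RH, rs) are illustrative assumptions:

% Sketch: blocks of (4-30) for sample k at calibration point i.
% RU, RL, RH are 3x3 rotations of the upper arm, forearm, and hand;
% rs is the radius of Sphero (scalar).
ex = [1;0;0];  ez = [0;0;1];
Aki = [RU*ex, RL*ex, RH*ex];  % 3x3 block multiplying [lU; lL; lH]
bki = RH*ez*rs;               % 3x1 constant block
% Stacking [Aki, -eye(3)] and bki over all samples k yields (4-31).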

We can write (4-30) $P_i$-many times by expanding along the $k$ dimension to obtain the result in (4-31).

$$\begin{bmatrix} \boldsymbol{A}_1^i & -\boldsymbol{I} \\ \vdots & \vdots \\ \boldsymbol{A}_k^i & -\boldsymbol{I} \\ \vdots & \vdots \\ \boldsymbol{A}_{P_i}^i & -\boldsymbol{I} \end{bmatrix} \begin{bmatrix} \boldsymbol{l} \\ \boldsymbol{d}_{C_i}^F \end{bmatrix} - \begin{bmatrix} \boldsymbol{b}_1^i \\ \vdots \\ \boldsymbol{b}_k^i \\ \vdots \\ \boldsymbol{b}_{P_i}^i \end{bmatrix} = - \begin{bmatrix} \boldsymbol{\epsilon}_1^i \\ \vdots \\ \boldsymbol{\epsilon}_k^i \\ \vdots \\ \boldsymbol{\epsilon}_{P_i}^i \end{bmatrix} \qquad\Longrightarrow\qquad \begin{bmatrix} \boldsymbol{A}_K^i & -\boldsymbol{I}_{3P_i\times3} \end{bmatrix} \begin{bmatrix} \boldsymbol{l} \\ \boldsymbol{d}_{C_i}^F \end{bmatrix} - \boldsymbol{b}_K^i = -\boldsymbol{\epsilon}_K^i \tag{4-31}$$

We can then write (4-31) $M$-many times by expanding along the $i$ dimension for each of $M = 3$ calibration points to obtain the global matrices shown in (4-32).

$$\begin{bmatrix} \boldsymbol{A}_K^1 & -\boldsymbol{I}_{3P_1\times3} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{A}_K^2 & \boldsymbol{0} & -\boldsymbol{I}_{3P_2\times3} & \boldsymbol{0} \\ \boldsymbol{A}_K^3 & \boldsymbol{0} & \boldsymbol{0} & -\boldsymbol{I}_{3P_3\times3} \end{bmatrix} \begin{bmatrix} \boldsymbol{l} \\ \boldsymbol{d}_{C_1}^F \\ \boldsymbol{d}_{C_2}^F \\ \boldsymbol{d}_{C_3}^F \end{bmatrix} - \begin{bmatrix} \boldsymbol{b}_K^1 \\ \boldsymbol{b}_K^2 \\ \boldsymbol{b}_K^3 \end{bmatrix} = - \begin{bmatrix} \boldsymbol{\epsilon}_K^1 \\ \boldsymbol{\epsilon}_K^2 \\ \boldsymbol{\epsilon}_K^3 \end{bmatrix} \qquad\Longrightarrow\qquad \boldsymbol{A}_G \boldsymbol{x} - \boldsymbol{b}_G = -\boldsymbol{\epsilon}_G \tag{4-32}$$

Finally, we complete the formulation of the objective function by calculating the squared norm of the error $\boldsymbol{\epsilon}^{\mathrm{T}}\boldsymbol{\epsilon}$ to obtain $f(\boldsymbol{x})$, and then calculate the gradient $\nabla f(\boldsymbol{x})$ in (4-33).

$$f(\boldsymbol{x}) = \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A}_G^{\mathrm{T}} \boldsymbol{A}_G \boldsymbol{x} - 2\boldsymbol{b}_G^{\mathrm{T}} \boldsymbol{A}_G \boldsymbol{x} + \boldsymbol{b}_G^{\mathrm{T}} \boldsymbol{b}_G = \boldsymbol{\epsilon}_G^{\mathrm{T}} \boldsymbol{\epsilon}_G, \qquad \nabla f(\boldsymbol{x}) = 2\boldsymbol{A}_G^{\mathrm{T}} \boldsymbol{A}_G \boldsymbol{x} - 2\boldsymbol{A}_G^{\mathrm{T}} \boldsymbol{b}_G \tag{4-33}$$
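As a minimal sketch of how (4-32) and (4-33) can be realized in MATLAB, suppose the per-point stacks $\boldsymbol{A}_K^i$ and $\boldsymbol{b}_K^i$ are held in cell arrays Akis and bkis (illustrative names); the global matrices and the objective with its analytical gradient then follow directly:

% Sketch: assemble AG and bG of (4-32), then evaluate (4-33).
% Akis{i} is (3*Pi)x3 and bkis{i} is (3*Pi)x1 for point i.
AG = []; bG = [];
for i = 1:3
  Pi = size(Akis{i},1)/3;
  Ii = zeros(3*Pi,9);
  Ii(:,3*i-2:3*i) = -repmat(eye(3),Pi,1);  % -I block in columns of dCi
  AG = [AG; Akis{i}, Ii];   % design vector order: [l; dC1; dC2; dC3]
  bG = [bG; bkis{i}];
end
% f(x) = ||AG*x - bG||^2 with gradient 2*AG'*(AG*x - bG):
objFun = @(x) deal((AG*x-bG).'*(AG*x-bG), 2*AG.'*(AG*x-bG));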

4.4.2 Design Variable Bounds

The geometrical parameters in $\boldsymbol{l}$ must necessarily be positive values since they represent the lengths of the subject's virtual upper limb as defined in the kinematic model assumed in this work. If left unconstrained, valid solutions may be obtained through various possible combinatorial permutations of the vector loop equation in (4-18). However, to avoid ambiguity in the interpretation of the calibration result, it is necessary to impose a bound constraint that the link length parameters be non-negative as shown in (4-34). Note that the calibration point locations $\boldsymbol{d}_{C_i}^F$ are free vectors, so their components are unconstrained on $\mathbb{R}^3$.

$$\boldsymbol{g}_B = -\boldsymbol{l} \preceq \boldsymbol{0} \tag{4-34}$$

4.4.3 Calibration Point Distance Constraints

The first reasonable constraints to impose on the calibration points are the

relative distances between each pair of the three calibration points. We easily compute the

relative displacement vectors as differences of elements in the design vector. The squared

norm of these vectors is a value that we know from the geometry of the calibration jig, and

it’s invariant under rigid transformations of the task space. We proceed to compute the norm

of these relative displacement vectors and then cast the expression into quadratic form.

The three relative displacement vectors are taken to be $\boldsymbol{r}_{C_2/C_1}$, $\boldsymbol{r}_{C_2/C_3}$, and $\boldsymbol{r}_{C_3/C_1}$, and they are calculated by (4-35).

$$\boldsymbol{r}_{C_2/C_1} = \boldsymbol{d}_{C_2}^F - \boldsymbol{d}_{C_1}^F, \qquad \boldsymbol{r}_{C_2/C_3} = \boldsymbol{d}_{C_2}^F - \boldsymbol{d}_{C_3}^F, \qquad \boldsymbol{r}_{C_3/C_1} = \boldsymbol{d}_{C_3}^F - \boldsymbol{d}_{C_1}^F \tag{4-35}$$

The form of these quadratic equality constraints is given by the following.

$$\begin{aligned} h_{D21} &= (\boldsymbol{r}_{C_2/C_1})^{\mathrm{T}} \boldsymbol{r}_{C_2/C_1} - (r_{C_2/C_1})^2 = 0 \\ h_{D23} &= (\boldsymbol{r}_{C_2/C_3})^{\mathrm{T}} \boldsymbol{r}_{C_2/C_3} - (r_{C_2/C_3})^2 = 0 \\ h_{D31} &= (\boldsymbol{r}_{C_3/C_1})^{\mathrm{T}} \boldsymbol{r}_{C_3/C_1} - (r_{C_3/C_1})^2 = 0 \end{aligned} \tag{4-36}$$


Next we substitute (4-35) into (4-36) and expand the product to obtain three

quadratic equations.

$$\begin{aligned} h_{D21} &= (\boldsymbol{d}_{C_2}^F)^{\mathrm{T}}\boldsymbol{d}_{C_2}^F - 2(\boldsymbol{d}_{C_2}^F)^{\mathrm{T}}\boldsymbol{d}_{C_1}^F + (\boldsymbol{d}_{C_1}^F)^{\mathrm{T}}\boldsymbol{d}_{C_1}^F - (r_{C_2/C_1})^2 \\ h_{D23} &= (\boldsymbol{d}_{C_2}^F)^{\mathrm{T}}\boldsymbol{d}_{C_2}^F - 2(\boldsymbol{d}_{C_2}^F)^{\mathrm{T}}\boldsymbol{d}_{C_3}^F + (\boldsymbol{d}_{C_3}^F)^{\mathrm{T}}\boldsymbol{d}_{C_3}^F - (r_{C_2/C_3})^2 \\ h_{D31} &= (\boldsymbol{d}_{C_3}^F)^{\mathrm{T}}\boldsymbol{d}_{C_3}^F - 2(\boldsymbol{d}_{C_3}^F)^{\mathrm{T}}\boldsymbol{d}_{C_1}^F + (\boldsymbol{d}_{C_1}^F)^{\mathrm{T}}\boldsymbol{d}_{C_1}^F - (r_{C_3/C_1})^2 \end{aligned} \tag{4-37}$$

Finally, we can represent the three constraint equations in (4-37) in quadratic matrix-vector form by introducing the matrices $\boldsymbol{Q}_{21}$, $\boldsymbol{Q}_{23}$, and $\boldsymbol{Q}_{31}$ shown in (4-38).

$$\boldsymbol{Q}_{21} = 2\begin{bmatrix} \boldsymbol{0}_{3\times3} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{I}_{3\times3} & -\boldsymbol{I}_{3\times3} & \boldsymbol{0} \\ \boldsymbol{0} & -\boldsymbol{I}_{3\times3} & \boldsymbol{I}_{3\times3} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0}_{3\times3} \end{bmatrix}_{12\times12} \quad \boldsymbol{Q}_{23} = 2\begin{bmatrix} \boldsymbol{0}_{3\times3} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0}_{3\times3} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{I}_{3\times3} & -\boldsymbol{I}_{3\times3} \\ \boldsymbol{0} & \boldsymbol{0} & -\boldsymbol{I}_{3\times3} & \boldsymbol{I}_{3\times3} \end{bmatrix}_{12\times12} \quad \boldsymbol{Q}_{31} = 2\begin{bmatrix} \boldsymbol{0}_{3\times3} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{I}_{3\times3} & \boldsymbol{0} & -\boldsymbol{I}_{3\times3} \\ \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0}_{3\times3} & \boldsymbol{0} \\ \boldsymbol{0} & -\boldsymbol{I}_{3\times3} & \boldsymbol{0} & \boldsymbol{I}_{3\times3} \end{bmatrix}_{12\times12} \tag{4-38}$$

The final form of these constraints is shown in (4-39) along with the constraints’

respective analytical gradients.


$$\begin{aligned} h_{D21} &= \tfrac{1}{2}\boldsymbol{x}^{\mathrm{T}}\boldsymbol{Q}_{21}\boldsymbol{x} - (r_{C_2/C_1})^2, & \nabla h_{D21} &= \boldsymbol{Q}_{21}\boldsymbol{x} \\ h_{D23} &= \tfrac{1}{2}\boldsymbol{x}^{\mathrm{T}}\boldsymbol{Q}_{23}\boldsymbol{x} - (r_{C_2/C_3})^2, & \nabla h_{D23} &= \boldsymbol{Q}_{23}\boldsymbol{x} \\ h_{D31} &= \tfrac{1}{2}\boldsymbol{x}^{\mathrm{T}}\boldsymbol{Q}_{31}\boldsymbol{x} - (r_{C_3/C_1})^2, & \nabla h_{D31} &= \boldsymbol{Q}_{31}\boldsymbol{x} \end{aligned} \tag{4-39}$$
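All of the quadratic constraints share the form $h = \tfrac{1}{2}\boldsymbol{x}^{\mathrm{T}}\boldsymbol{Q}\boldsymbol{x} - c$ with gradient $\boldsymbol{Q}\boldsymbol{x}$, so building the $\boldsymbol{Q}$ matrices and evaluating the constraints is mechanical. A minimal MATLAB sketch for $\boldsymbol{Q}_{21}$ follows; the known jig distance shown is an illustrative placeholder value:

% Sketch: build Q21 of (4-38) and evaluate hD21 of (4-39).
% Design vector x = [l(3); dC1(3); dC2(3); dC3(3)] is 12x1.
I3 = eye(3);  Z3 = zeros(3);
Q21 = 2*[Z3  Z3  Z3 Z3;
         Z3  I3 -I3 Z3;
         Z3 -I3  I3 Z3;
         Z3  Z3  Z3 Z3];
rC2C1 = 0.250;                   % known jig distance (illustrative) [m]
hD21 = 0.5*x.'*Q21*x - rC2C1^2;  % quadratic equality constraint value
gD21 = Q21*x;                    % analytical gradient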

4.4.4 Calibration Point Orthogonality Constraint

The calibration points will be used directly to construct the basis for the task frame $T$, so we also constrain two of the calibration points' relative position vectors to be orthogonal. We compute the dot product of two calibration point relative displacement vectors and then constrain this value to be equal to zero. In this way, we obtain a new quadratic equality constraint.

The two relative displacement vectors we work with here are taken to be $\boldsymbol{r}_{C_2/C_3}$ and $\boldsymbol{r}_{C_2/C_1}$.

$$\boldsymbol{r}_{C_2/C_3} = \boldsymbol{d}_{C_2}^F - \boldsymbol{d}_{C_3}^F, \qquad \boldsymbol{r}_{C_2/C_1} = \boldsymbol{d}_{C_2}^F - \boldsymbol{d}_{C_1}^F \tag{4-40}$$

The form of this quadratic equality constraint is given by the following dot product set equal to zero.

$$h_O = (\boldsymbol{r}_{C_2/C_3})^{\mathrm{T}} \boldsymbol{r}_{C_2/C_1} = 0 \tag{4-41}$$


Next we substitute (4-40) into (4-41) and expand the product to obtain the

following quadratic equation.

$$h_O = (\boldsymbol{d}_{C_2}^F)^{\mathrm{T}}\boldsymbol{d}_{C_2}^F - (\boldsymbol{d}_{C_2}^F)^{\mathrm{T}}\boldsymbol{d}_{C_1}^F - (\boldsymbol{d}_{C_3}^F)^{\mathrm{T}}\boldsymbol{d}_{C_2}^F + (\boldsymbol{d}_{C_3}^F)^{\mathrm{T}}\boldsymbol{d}_{C_1}^F \tag{4-42}$$

We can represent the quadratic orthogonality constraint equation in (4-42) by introducing the matrix $\boldsymbol{Q}_O$.

$$\boldsymbol{Q}_O = \begin{bmatrix} \boldsymbol{0}_{3\times3} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0}_{3\times3} & -\boldsymbol{I}_{3\times3} & \boldsymbol{I}_{3\times3} \\ \boldsymbol{0} & -\boldsymbol{I}_{3\times3} & 2\boldsymbol{I}_{3\times3} & -\boldsymbol{I}_{3\times3} \\ \boldsymbol{0} & \boldsymbol{I}_{3\times3} & -\boldsymbol{I}_{3\times3} & \boldsymbol{0}_{3\times3} \end{bmatrix} \tag{4-43}$$

The final form of this constraint equation is shown in (4-44) along with its

analytical gradient.

$$h_O = \tfrac{1}{2}\boldsymbol{x}^{\mathrm{T}}\boldsymbol{Q}_O\boldsymbol{x}, \qquad \nabla h_O = \boldsymbol{Q}_O\boldsymbol{x} \tag{4-44}$$

4.4.5 Calibration Point Planar Constraints

The calibration jig will also be designed so that the two orthogonal relative displacement vectors lie in the horizontal plane. In fact, this condition is satisfied exactly when all three calibration points lie in the horizontal plane. One great benefit of employing this constraint is that it enables the encoding of this configuration information in the form of a linear equality constraint: all that must be done is to set the vertical components of the calculated relative displacement vectors to zero.


We begin with the desired linear form for this set of linear equality constraints as

shown in (4-45).

$$\boldsymbol{h}_P = \boldsymbol{A}_P\boldsymbol{x} - \boldsymbol{b}_P = \boldsymbol{0}_{2\times1} \tag{4-45}$$

Next, we write down the coefficients in $\boldsymbol{A}_P$ so that $\boldsymbol{A}_P\boldsymbol{x}$ calculates a $2\times1$ column vector containing the vertical components of $\boldsymbol{r}_{C_2/C_3} = \boldsymbol{d}_{C_2}^F - \boldsymbol{d}_{C_3}^F$ and $\boldsymbol{r}_{C_2/C_1} = \boldsymbol{d}_{C_2}^F - \boldsymbol{d}_{C_1}^F$, and we see that $\boldsymbol{b}_P$ is the zero vector.

$$\boldsymbol{A}_P = \begin{bmatrix} \boldsymbol{0}_{1\times3} & 0\;\;0\;\;0 & 0\;\;0\;\;1 & 0\;\;0\;\;{-1} \\ \boldsymbol{0}_{1\times3} & 0\;\;0\;\;{-1} & 0\;\;0\;\;1 & 0\;\;0\;\;0 \end{bmatrix}, \qquad \boldsymbol{b}_P = \boldsymbol{0}_{2\times1} \tag{4-46}$$

The linear equality constraint that enforces that the calibration points lie in the horizontal plane is given by (4-45) and (4-46). Since these constraints are linear, the gradient is constant and is given entirely by the coefficient matrix $\boldsymbol{A}_P$.

4.4.6 Calibration Point Normal Vector Constraints

The final set of constraints we may impose on the calibration points involves properties of the vector $\boldsymbol{n}_T$ that is normal to the relative displacement vectors used previously. We compute $\boldsymbol{n}_T$ by a cross product and observe that its three components can be constrained in a few different ways. The horizontal components can be constrained to zero as an alternative to the planar linear constraints developed previously. The vertical component can be specified exactly with the expected magnitude and sign; assuming a perfect calibration, this would also ensure that $\boldsymbol{n}_T$ is vertical, and, additionally, this constraint serves the important function of orienting the task frame $T$. Since another quadratic equality constraint may be burdensome for the optimization solver, we also formulate a similar inequality constraint in which the vertical component of $\boldsymbol{n}_T$ is constrained to be non-negative. In effect, this still allows us to orient the task frame.

The normal vector is taken to be the cross product of the planar relative

displacement vectors used previously. We choose to represent the cross product with the

cross matrix and tilde notation.

$$\boldsymbol{n}_T = \left[\tilde{\boldsymbol{r}}_{C_2/C_3}\right]\boldsymbol{r}_{C_2/C_1} \tag{4-47}$$

4.4.7 Normal Vector Horizontal Equality Constraints

The equality constraints on the horizontal components of $\boldsymbol{n}_T$ are given by the form in (4-48).

$$\begin{aligned} h_{Hx} &= (\boldsymbol{e}_x)^{\mathrm{T}}\boldsymbol{n}_T = (\boldsymbol{e}_x)^{\mathrm{T}}\left[\tilde{\boldsymbol{r}}_{C_2/C_3}\right]\boldsymbol{r}_{C_2/C_1} = 0 \\ h_{Hy} &= (\boldsymbol{e}_y)^{\mathrm{T}}\boldsymbol{n}_T = (\boldsymbol{e}_y)^{\mathrm{T}}\left[\tilde{\boldsymbol{r}}_{C_2/C_3}\right]\boldsymbol{r}_{C_2/C_1} = 0 \end{aligned} \tag{4-48}$$

These constraints are compactly represented in quadratic form through the introduction of the matrices $\boldsymbol{Q}_{Hx}$ and $\boldsymbol{Q}_{Hy}$ as shown in (4-49).


$$\boldsymbol{Q}_{Hx} = \begin{bmatrix} \boldsymbol{0}_{3\times3} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0}_{3\times3} & -\boldsymbol{S}_x & \boldsymbol{S}_x \\ \boldsymbol{0} & \boldsymbol{S}_x & \boldsymbol{0}_{3\times3} & -\boldsymbol{S}_x \\ \boldsymbol{0} & -\boldsymbol{S}_x & \boldsymbol{S}_x & \boldsymbol{0}_{3\times3} \end{bmatrix} \qquad \boldsymbol{Q}_{Hy} = \begin{bmatrix} \boldsymbol{0}_{3\times3} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0}_{3\times3} & -\boldsymbol{S}_y & \boldsymbol{S}_y \\ \boldsymbol{0} & \boldsymbol{S}_y & \boldsymbol{0}_{3\times3} & -\boldsymbol{S}_y \\ \boldsymbol{0} & -\boldsymbol{S}_y & \boldsymbol{S}_y & \boldsymbol{0}_{3\times3} \end{bmatrix}$$

with,

$$\boldsymbol{S}_x = [\tilde{\boldsymbol{e}}_x] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix}, \qquad \boldsymbol{S}_y = [\tilde{\boldsymbol{e}}_y] = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix} \tag{4-49}$$

The final formulas for $h_{Hx}$ and $h_{Hy}$, along with their analytical gradients, are given in (4-50).

$$h_{Hx} = \tfrac{1}{2}\boldsymbol{x}^{\mathrm{T}}\boldsymbol{Q}_{Hx}\boldsymbol{x}, \quad \nabla h_{Hx} = \boldsymbol{Q}_{Hx}\boldsymbol{x}, \qquad h_{Hy} = \tfrac{1}{2}\boldsymbol{x}^{\mathrm{T}}\boldsymbol{Q}_{Hy}\boldsymbol{x}, \quad \nabla h_{Hy} = \boldsymbol{Q}_{Hy}\boldsymbol{x} \tag{4-50}$$

4.4.8 Normal Vector Vertical Equality Constraint

The normal vector vertical equality constraint is given by the following form in

(4-51).

$$h_V = (\boldsymbol{e}_z)^{\mathrm{T}}\boldsymbol{n}_T - r_{C_2/C_3}\,r_{C_2/C_1} = (\boldsymbol{e}_z)^{\mathrm{T}}\left[\tilde{\boldsymbol{r}}_{C_2/C_3}\right]\boldsymbol{r}_{C_2/C_1} - r_{C_2/C_3}\,r_{C_2/C_1} = 0 \tag{4-51}$$


This constraint is written in quadratic form by introducing the matrix $\boldsymbol{Q}_V$ as shown in (4-52).

$$\boldsymbol{Q}_V = \begin{bmatrix} \boldsymbol{0}_{3\times3} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0}_{3\times3} & -\boldsymbol{S}_z & \boldsymbol{S}_z \\ \boldsymbol{0} & \boldsymbol{S}_z & \boldsymbol{0}_{3\times3} & -\boldsymbol{S}_z \\ \boldsymbol{0} & -\boldsymbol{S}_z & \boldsymbol{S}_z & \boldsymbol{0}_{3\times3} \end{bmatrix} \quad \text{with} \quad \boldsymbol{S}_z = [\tilde{\boldsymbol{e}}_z] = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \tag{4-52}$$

The final form of the normal vector vertical component equality constraint is

given along with its gradient in (4-53).

$$h_V = \tfrac{1}{2}\boldsymbol{x}^{\mathrm{T}}\boldsymbol{Q}_V\boldsymbol{x} - r_{C_2/C_3}\,r_{C_2/C_1}, \qquad \nabla h_V = \boldsymbol{Q}_V\boldsymbol{x} \tag{4-53}$$

4.4.9 Normal Vector Vertical Inequality Constraint

The relaxation of the normal vector vertical equality constraint into a non-negative inequality constraint $g_V$ is easily given by inspection of $h_V$ from (4-53), along with its gradient, in equation (4-54).

$$g_V = -\tfrac{1}{2}\boldsymbol{x}^{\mathrm{T}}\boldsymbol{Q}_V\boldsymbol{x} \leq 0, \qquad \nabla g_V = -\boldsymbol{Q}_V\boldsymbol{x} \tag{4-54}$$

Note that $\boldsymbol{Q}_V$ is given by the same expression in (4-52).


4.4.10 Calibration Constraints Summary

The constraints we have formulated are collected along with some summary data in Table 4-2. The number of constraints, the linearity of the constraints, and the type of constraint are the main considerations in the discussion of selecting an appropriate set of constraints that follows in this section.

Table 4-2: Calibration constraints summary

| Function | Name | Count | Linearity | Type | Equations |
|---|---|---|---|---|---|
| $\boldsymbol{g}_B$ | Bounds | 3 | Constant | Inequality | (4-34) |
| $h_{D(\cdot\cdot)}$ | Distance | 3 | Quadratic | Equality | (4-38), (4-39) |
| $h_O$ | Orthogonality | 1 | Quadratic | Equality | (4-43), (4-44) |
| $\boldsymbol{h}_P$ | Planar | 2 | Linear | Equality | (4-45), (4-46) |
| $h_{H(\cdot)}$ | Normal Vector Horizontal | 2 | Quadratic | Equality | (4-49), (4-50) |
| $h_V$ | Normal Vector Vertical | 1 | Quadratic | Equality | (4-52), (4-53) |
| $g_V$ | Normal Vector Vertical Inequality | 1 | Quadratic | Inequality | (4-52), (4-54) |

The constraints we have developed offer some redundancy and overlap in purpose, which allows us to make an informed selection when implementing a solution to this optimization problem. However, some of the constraints have no alternative formulation or are strictly necessary. We will discuss the latter before comparing options among the former.

The bounds, distance, and orthogonality constraints have no other representation, and so they must necessarily be used. The bounds are critical in enforcing uniqueness of our kinematic model because they ensure non-negative upper limb segment lengths. The distance and orthogonality constraints enforce the basic geometry of the calibration points' configuration in the calibration jig.

The remaining constraints can then be compared as alternative options. The remaining properties we wish to enforce on our design vector are that the calibration points lie in a horizontal plane and the orientation of that horizontal plane. We first consider constraining the calibration points to the horizontal plane. The normal vector vertical equality constraint could enforce this condition with near-perfect data, but unfortunately the very errors that we are minimizing in this optimization can allow this constraint to be satisfied without meeting the desired horizontal condition. Therefore we choose a set of two constraints in its place by selecting either the planar constraints or the normal vector horizontal component constraints. Here the decision to select the planar constraints is obvious, since they are expressed with a mathematically simpler linear function rather than two quadratic expressions.

The final remaining property to enforce on the design variables is the orientation of the calibration points' horizontal plane. Had we chosen to use the normal vector vertical equality constraint to satisfy the horizontal plane condition, we would not need to further constrain the problem here, because that constraint also enforces the orientation of the horizontal plane. Since this constraint duplicates conditions imposed by previous constraints (calibration points in the horizontal plane), we choose not to use it because of the risk of over-constraining the design space. Instead, the suitable choice is the relaxed version of this constraint, the inequality variant.


We may also find that the assumptions made on the sensors' estimated quaternion data are a bit strict considering the errors evident in the data. For instance, drift of the quaternion estimate may allow the reported sensor data to violate the assumption that the sensor's vertical inertial reference is antiparallel with gravity. The propagation of this error through the upper limb kinematics can manifest as an apparent tilting of the calibration points' plane away from horizontal. For this reason, we also treat the planar constraint as optional and judge the quality of the optimization result with and without it.

The final selection of the constraint set is presented in Table 4-3 along with an indication of the necessity of each constraint.

Table 4-3: Calibration constraint set

| Constraint | Necessity |
|---|---|
| Bounds | Necessary |
| Distance | Necessary |
| Orthogonality | Necessary |
| Planar | Optional (horizontal calibration points plane) |
| Normal Vector Vertical Inequality | Necessary |

4.4.11 Solving the Calibration Problem

The general problem we set out to solve is given by the objective function and

constraints developed in the preceding content of this section. A formal statement of the

problem is given here in (4-55).


$$\begin{aligned} \underset{\boldsymbol{x}}{\text{minimize}} \quad & f(\boldsymbol{x}) \\ \text{subject to} \quad & \boldsymbol{g}_B \preceq \boldsymbol{0}, \quad g_V \leq 0, \\ & h_{D21} = 0, \quad h_{D23} = 0, \quad h_{D31} = 0, \\ & h_O = 0 \end{aligned} \tag{4-55}$$

The objective function $f(\boldsymbol{x})$ is given by (4-33), and the locations of the constraint formulas are given in Table 4-2.

The calibration optimization problem of (4-55) is quadratic with constant (bound), linear, and quadratic constraints. Although convexity is a desirable property for an optimization problem such as this, we have at least one nonconvex quadratic constraint in $g_V$ since $\boldsymbol{Q}_V$ is not positive semidefinite. This matrix has both positive and negative eigenvalues, and so the constraint function cannot be convex. The most suitable built-in solver in MATLAB is then the general nonlinear solver fmincon(). In the remainder of this section, we discuss the process of solving the optimization problem in MATLAB using this solver.
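The indefiniteness of $\boldsymbol{Q}_V$ is easy to verify numerically. A minimal sketch, building the matrix from the $\boldsymbol{S}_z$ blocks of (4-52):

% Sketch: confirm QV of (4-52) has mixed-sign eigenvalues (indefinite).
Sz = [0 -1 0; 1 0 0; 0 0 0];
Z3 = zeros(3);
QV = [Z3  Z3  Z3  Z3;
      Z3  Z3 -Sz  Sz;
      Z3  Sz  Z3 -Sz;
      Z3 -Sz  Sz  Z3];
eig(QV)  % symmetric with zero trace, so both signs appear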

We must supply fmincon() with an initial value for the design vector $\boldsymbol{x}_0$. By default, this value is computed using rough measurements of a normative subject's upper limb dimensions along with an assumed relative pose between the subject and the calibration jig. The former specifies initial values for the upper limb lengths directly, while the latter is used to compute the calibration point locations using the known geometry of the calibration jig. In a production application of this system, it should be reasonable to use default values for any subject, but fall back on prompting the subject to manually enter measured values if the calibration fails because of a poor initial condition.

Other required inputs to fmincon() include the implementation of the set of constraints. This includes direct provision of the bounds on $\boldsymbol{x}$, the matrices $\boldsymbol{A}_P$ and $\boldsymbol{b}_P$ that specify the linear constraints in $\boldsymbol{h}_P$, and functions that compute the objective function and the nonlinear constraints.

Optional solver configuration that has been implemented includes the feature that uses analytical gradients from the constraints and the objective function. Since we have implemented this feature in both the objective function and the constraint function, we can specify this option in a call to optimoptions() as shown here.

optimOpts = optimoptions('fmincon',...
  'gradobj','on',...
  'gradconstr','on');
[...] = fmincon(...,optimOpts);

Additional options that can be set for fmincon() include tolerances on the objective function, design vector, or constraints, selection among the various built-in solver algorithms, and more. Since this problem is very well behaved, in particular with a satisfactory choice of $\boldsymbol{x}_0$, the solvers for each available algorithm successfully find the same local optimum with an accuracy on the order of about two millimeters. This level of apparent accuracy is likely finer than the compounding of errors evident in the physical nature of the problem.


4.5 Experiment Definition

The first requirement for any experimental trial we implement in this system is calibration. For each trial, the subject must begin by holding the t-pose so that the home pose can be set. Next, a short time series of data must be captured from each of three calibration points located on the calibration jig to enable calibration of the task space pose and the subject's upper limb geometry. Finally, some exercise should be performed on which the forward and inverse kinematics mappings of the sensor data can be analyzed to provide information about the motion of the subject's upper limb.

In this work we choose a reaching task as the exercise for this experiment, for various reasons. Furthermore, we use the subject's motion between calibration points, during the calibration procedure, as the experimental reaching task. This gives us the benefit of having the action data captured for each task at a minimal time offset from the time at which the calibration data is collected, which benefits the validity of each trial's calibration data since we assume that the sensors drift over time. In addition to reducing the time between calibration and action performance, we also incorporate new calibration data with every action sequence without increasing the total time for collection of the experimental data. This affords us the highest possible calibration fidelity within the chosen calibration framework.

The formal statement of the experimental protocol is defined by assigning a timed progression of well-defined states for the subject to follow during each experimental trial. This protocol is defined by four states: home, idle, calib, and reach. The home state is the starting state, in which the subject should assume the t-pose and begin the remaining automated procedure. The next state, idle, is interleaved between all remaining state changes to allow a brief period in which the subject can prepare for the upcoming state change. The calib state is then met for each of the three calibration points, with the subject performing a reach between them.

The calib state is used to collect sufficient calibration data. During this state the

subject should smoothly articulate the upper limb joints as much as comfortably possible

while holding Sphero reliably in the appropriate calibration point fixture. When moving

between the calibration points, the subject will be performing the reach state. A reach should

be performed smoothly at approximately constant speed so that the reach begins and ends

at the boundaries of the allotted time duration. The reaching motion should involve the

minimal amount of vertical motion that is sufficient for Sphero to clear the calibration point

fixture, while the horizontal motion should trace a straight line segment between the

calibration point locations. Table 4-4 summarizes the state progression and timing for this

experimental protocol.

Table 4-4: Experimental protocol state progression and timing

| State | Location | Duration |
|---|---|---|
| home | t-pose | --- |
| idle | --- | 2 s |
| calib | $C_1$ | 3 s |
| idle | --- | 2 s |
| reach | $C_1 \rightarrow C_2$ | 3 s |
| idle | --- | 2 s |
| calib | $C_2$ | 3 s |
| idle | --- | 2 s |
| reach | $C_2 \rightarrow C_3$ | 3 s |
| idle | --- | 2 s |
| calib | $C_3$ | 3 s |
| idle | --- | 2 s |
| reach | $C_3 \rightarrow C_1$ | 3 s |
| idle | --- | 2 s |
| calib | $C_1$ | 3 s |

4.5.1 Motion Analysis

At the position level, we begin by computing a position error vector at every time instant for the motion between two calibration points $C_i$ and $C_j$. We assume that the ideal motion between these two points is such that point $S$ remains in the vertical plane containing these two points. We proceed to resolve the known relative position of $S$ with respect to $C_i$ into components that are parallel ($\boldsymbol{r}_{P/C_i}$) and perpendicular ($\boldsymbol{r}_{S/P}$) to this plane as shown in Figure 4-6. The latter is the plane error vector we wish to calculate.

Figure 4-6: Experimental analysis plane error vector calculation

We begin by writing the vector loop closure equation in (4-56).


$$\boldsymbol{d}_S^F = \boldsymbol{d}_{C_i}^F + \boldsymbol{r}_{S/C_i} \tag{4-56}$$

We wish to resolve $\boldsymbol{r}_{S/C_i}$ into its components that are parallel and perpendicular to the plane, meeting at the intermediate point $P$, which is the point on the plane closest to $S$. Next, we write the unit vector $\boldsymbol{n}_{ij}$ that is normal to the plane.

$$\boldsymbol{n}_{ij} = \frac{\boldsymbol{r}_{C_i/C_j} \times \boldsymbol{z}_T}{\left\| \boldsymbol{r}_{C_i/C_j} \times \boldsymbol{z}_T \right\|} \tag{4-57}$$

We can calculate our desired position error vector $\boldsymbol{e}_p = \boldsymbol{r}_{S/P}$, the component of $\boldsymbol{r}_{S/C_i}$ perpendicular to the plane, via projection onto $\boldsymbol{n}_{ij}$.

$$\boldsymbol{e}_p = \boldsymbol{r}_{S/P} = \left(\boldsymbol{n}_{ij}^{\mathrm{T}} \boldsymbol{r}_{S/C_i}\right)\boldsymbol{n}_{ij} \tag{4-58}$$

The formula for $\boldsymbol{e}_p$ in (4-58) is fully determined by the relationships in (4-56) and (4-57).
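A minimal MATLAB sketch of the plane error computation in (4-56) through (4-58), assuming dS holds a 3-by-N trajectory of Sphero center positions and the remaining quantities come from the calibration (names illustrative):

% Sketch: plane error for a reach from Ci to Cj, per (4-56)-(4-58).
% dS is 3xN; dCi, dCj are 3x1 calibration point locations; zT is 3x1.
rCiCj = dCi - dCj;
nij = cross(rCiCj, zT);
nij = nij / norm(nij);           % unit plane normal (4-57)
rSCi = bsxfun(@minus, dS, dCi);  % r_{S/Ci} at each sample (4-56)
ep = nij.' * rSCi;               % signed plane distances, 1xN
epMag = abs(ep);                 % magnitudes as plotted in Figure 5-8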


5 Motion Analysis

In this section we apply the work from all previous sections toward the experiment implementation. First, we discuss the experimental setup, including the physical hardware required and the application-specific parameters used in calibration and analysis. Then, in the data collection stage, we describe the software implementation that is used to control Myo and Sphero at a high level. Finally, we post-process and calibrate the data before performing analysis in the form of empirical validation tests used to assess the kinematic modeling and the effectiveness of the motion capture system.

5.1 Experimental Setup

The experiments will take place in an office-like environment with the subject

seated at a desk. In this section, we’ll introduce the physical realization of the theoretical

calibration fixtures, calibration jig assembly, and the geometry of the subject within the

experimental task space given in terms of the mathematical model.

The proposed calibration method involves the use of a calibration jig in which

calibration point fixtures are installed at known relative locations. The calibration point

fixture shown in Figure 5-1 is designed to cup Sphero such that it can undergo spherical

rotations while the center point of Sphero remains fixed at the calibration point.

Figure 5-1: Calibration point fixture


These fixtures are designed with dimensions that are largely arbitrary except for the radius that cups Sphero. The manufactured radius for this feature is sized roughly 0.5 mm larger than the radius of Sphero to minimize contact interference. This dimension detail is shown along with the important assembly dimensions for the calibration jig in Figure 5-2.

Figure 5-2: Calibration fixtures assembled onto jig

Dimensions given in millimeters

The calibration jig is designed with a shape and size appropriate to be laid flat on top of a desk. The calibration points are defined (top-left of Figure 5-2) as $C_1$: top-left, $C_2$: bottom-left, and $C_3$: bottom-right. This definition corresponds with the orthogonality requirement on the relative position vectors $\boldsymbol{r}_{C_2/C_3}$ and $\boldsymbol{r}_{C_2/C_1}$, and the required lengths of these vectors are also provided in this assembly drawing.

The physical realization of the calibration jig has been manufactured from a 3/4 in thick piece of medium density fiberboard for the base, with three calibration fixtures 3D-printed from polylactic acid (PLA) filament. The fixtures are attached to the base with four wood screws each, as are four rubber feet (not pictured) that provide stability on a variety of surfaces. The final product can be seen in Figure 5-3.

Figure 5-3: Experimental setup calibration jig and subject

Note that the subject is not wearing the Myo devices in this image.

The calibration jig and subject shown in Figure 5-3 allow us to envision the most proximal and most distal coordinate frames in our upper limb motion model, the task frame $T$ and the fixed frame $F$, respectively. We recall that the calibration optimization is seeded with initial design vector values for the calibration point locations $\boldsymbol{d}_{C_i}^F$. We can compute this information by approximating the location of the origin of frame $F$ with respect to the calibration point $C_3$. The two required measurements are indicated by the dashed grey line segments, assuming that the subject is consistently positioned so that the origin of $F$ is aligned with $C_3$ in the $\boldsymbol{x}_F$ direction. Assuming that the relative rotation between $F$ and $T$ is roughly identity, $\boldsymbol{d}_T^F$ is calculated directly from the approximated dimensions and our knowledge of the calibration jig geometry. In these experiments the initial value $\boldsymbol{d}_T^F = [300\;\;{-250}\;\;{-250}]^{\mathrm{T}}$ mm is used. From this information, we can then compute the initial values for $\boldsymbol{d}_{C_i}^F$.

5.2 Data Collection

The data collection procedure has the user operate a purpose-built GUI that drives a custom class implementation managing control of the Myo devices, the Sphero device, and the data. The GUI allows users to connect the devices, manually toggle the locations of the two Myos (upper and lower limb segments), set the home pose, and save the current data set, all while visualizing the streaming data in a dynamically updating figure.

The device management class MyoSpheroUpperLimb owns references to

MyoMex and Sphero objects (stored as class properties) that provide the interface to the

two Myos and Sphero that are used in this implementation. A method called

connectDevices() performs the instantiation of the device class properties while also

capturing the delay between these operations to synchronize the data sources when

accessing the data.

The joint rotation data log is accessed by removing the most recent samples from the internally managed queue with the method popRotData(), which returns a structure containing the uncalibrated sensor data for $\boldsymbol{R}_U^F$, $\boldsymbol{R}_L^F$, and $\boldsymbol{R}_H^F$. This sensor rotation data is internally accessed through dependent properties RX_source (X is U, L, or H) that return data calibrated to a home pose. These accessors check the myoOneIsUpper property to appropriately assign data from the two Myo devices to the correct data sources, and then remove the home pose orientation offset that is stored in properties named RFNX. Another utility method, setHomePose(), is implemented to automate the setting of the RFNX properties to the current value of RX_source(:,:,end).
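Although the exact convention lives inside the class implementation, home pose compensation amounts to composing each raw sensor rotation with the inverse of the rotation captured at the t-pose. A minimal sketch under that assumption (Rraw and Rhome are illustrative names):

% Sketch: remove the home pose offset from raw orientation data.
% Rraw is 3x3xN raw rotation data; Rhome is the 3x3 rotation sampled
% at the t-pose. Assumes compensated data of the form Rhome'*Rraw.
N = size(Rraw,3);
Rcal = zeros(3,3,N);
for k = 1:N
  Rcal(:,:,k) = Rhome.' * Rraw(:,:,k);  % home-pose-relative rotation
end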

Visualization of the home calibrated data is provided by internally managed

named plots of the subject’s upper limb as shown in Figure 5-4. This plot is created and

updated by calling a method drawPlotUpperLimb() with a unique name string,

calibration data struct, and action data struct containing only one sample of data to visualize.

Animation of a data sequence, or the monitoring of data in near real time, can be performed

by scheduling calls to this method with a timer object.

Figure 5-4: Data visualization provided by MyoSpheroUpperLimb

Visualization provided by drawPlotUpperLimb() with the addition of desired and actual trace of Sphero position
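As an example of the timer-driven animation mentioned above, a fixed-rate MATLAB timer can repeatedly invoke drawPlotUpperLimb() with the latest data. The following is a minimal sketch, assuming limb is a connected MyoSpheroUpperLimb instance and that calibData and actionData structs are available as described (the callback wiring shown is illustrative):

% Sketch: schedule near real time plot updates with a MATLAB timer.
updateFcn = @(src,evt) limb.drawPlotUpperLimb('live',calibData,actionData);
t = timer(...
  'ExecutionMode','fixedRate',...  % fire at a steady period
  'Period',0.05,...                % target a 20 Hz plot update
  'TimerFcn',updateFcn);
start(t);
% ... later, when monitoring is finished:
stop(t); delete(t);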

The visualization depicts the current pose of the limb segments computed based on the internally stored lengths of the limb segments $\boldsymbol{l}$, as well as the calibration jig geometry positioned at internally stored values for $\boldsymbol{d}_T^F$ and $\boldsymbol{R}_T^F$. Upon instantiation these values are set to the default parameters discussed earlier, given by the nominal anatomical dimensions assumed in the computer generated graphical model of the upper limb segments. As these values are updated, the plot dynamically updates to show the changes.

When the data collection GUI is launched, the user first uses a pushbutton to instantiate MyoSpheroUpperLimb. Once connection is established to the devices, the user then clicks a pushbutton to invoke the visualization tool, uses a pushbutton to set the home pose to the currently performed t-pose (shown in Figure 5-5), and uses a checkbox to correctly configure the myoOneIsUpper property.

Figure 5-5: Subject performing the t-pose to set the home pose

Each time the setHomePose() pushbutton is clicked, a stopwatch-style timer is displayed on the GUI and popRotData() discards the data log to flush the log queues. This is intended to help the user and the subject properly time the subject's motions according to the experimental protocol. Then, when the subject has correctly performed the experimental protocol for one trial, a pushbutton is used to save the calibration and action data sequence in a timestamped .mat file named trial-yyyymmdd-HHMMSS.mat.


5.3 Data Processing and Calibration

After the collection of data for an experimental trial, the data must be segmented according to the protocol states defined in Table 4-4 so that calibration and analysis can be performed. In the best-case implementation scenario this would be performed automatically upon successful saving of trial data in near real time. Although this is possible to implement with the current system, it has proven unreliable since the resource demands of SpheroCore are too high while streaming data. The remainder of data processing and calibration is therefore performed offline after the data collection phase is complete.

The time durations for the states (Table 4-4) along with the known sample rate of the data are used to segment the complete data log into two separate sets of three logs each. The three calib state data sets and the three reach state data sequences are each stored in a three-element struct array for further processing. The calib state data set struct, named calibPointsData, is then passed on to a collection of methods in MyoSpheroUpperLimb to perform the calibration.
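A minimal sketch of the segmentation step described above, assuming a fixed sample rate fs and the state schedule of Table 4-4 following the home state (variable names and the sample rate are illustrative assumptions):

% Sketch: segment a trial log into calib and reach samples per Table 4-4.
fs = 50;                                   % assumed sample rate [Hz]
stateDur = [2 3 2 3 2 3 2 3 2 3 2 3 2 3];  % idle/calib/idle/reach/... [s]
isCalib  = logical([0 1 0 0 0 1 0 0 0 1 0 0 0 1]);
edges = [0, cumsum(stateDur)]*fs;          % sample index boundaries
calibIdx = {};  reachIdx = {};
for s = 1:numel(stateDur)
  idx = (edges(s)+1):edges(s+1);           % samples in state s
  if isCalib(s)
    calibIdx{end+1} = idx;                 % calib state segments
  elseif stateDur(s) == 3
    reachIdx{end+1} = idx;                 % 3 s non-calib states are reaches
  end
end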

The invocation of MyoSpheroUpperLimb.calibrate(calibPointsData) sets off the procedure to perform the calibration optimization on the calibration points data set. The body of this method is shown below to illustrate the way in which the calibrate() method configures and invokes the calibration optimization in its call to the static method computeCalibration(). After the calibration has been performed, the calibration results are interpreted by computing $\boldsymbol{l}$, $\boldsymbol{d}_T^F$, and $\boldsymbol{R}_T^F$ from the optimization solution $\boldsymbol{x}^*$.

function calibrate(this,calibPointsData)
  calib = this.calib;
  data = calibPointsData;
  params = this.makeCalibParams(...
    calib,data,{'ortho','dist','normalVertIneq'});
  [xs,fval,exitFlag,output,lambda] = this.computeCalibration(params);
  [lengths,dT,RT] = this.interpretCalibResult(xs);
  this.lengthUpper = lengths(1);
  this.lengthLower = lengths(2);
  this.lengthHand = lengths(3);
  this.dT = dT;
  this.RT = RT;
end

The calibration parameters struct params contains information that is critical to

the optimization such as an initial guess of the design vector, encoding of the objective

function parameters, and an encoding of the selected constraints. The latter is specified by

the cell array of strings params.conSpec where constraints are selected by indicating the

names from Table 4-2.

function [xs,fval,exitFlag,output,lambda] = computeCalibration(params)
  % Bound constraints: segment lengths non-negative, point locations free
  LB = [0;0;0;-inf(9,1)];
  UB = inf(12,1);
  % Linear equality constraints (planar constraints, when selected)
  Aeq = []; beq = [];
  [Aeq_,beq_] = MyoSpheroUpperLimb.planarFun(params);
  Aeq = [Aeq;Aeq_]; beq = [beq;beq_];
  A = []; b = [];  % no linear inequality constraints
  % Initial design vector from default lengths and jig geometry
  x0 = [...
    params.lengthUpper;...
    params.lengthLower;...
    params.lengthHand;...
    params.dT0 - params.rc2c1;...
    params.dT0;...
    params.dT0 - params.rc2c3];
  % Enable analytical gradients for objective and constraints
  optimOpts = optimoptions('fmincon',...
    'display','iter',...
    'gradobj','on',...
    'gradconstr','on');
  objFunGrad = @(x)MyoSpheroUpperLimb.objFun(x,params);
  nonlinConFunGrad = @(x)MyoSpheroUpperLimb.nonlinConFunGrad(x,params);
  [xs,fval,exitFlag,output,lambda] = ...
    fmincon(objFunGrad,x0,...
    A,b,Aeq,beq,LB,UB,...
    nonlinConFunGrad,...
    optimOpts);
  if exitFlag<1
    warning('Calibration may not be successful!');
  end
end

Ten trials were captured in total, of which one suffered data loss as a result of user error and one suffered an incorrect calibration result due to the subject's invalid performance of the experimental protocol. The remaining eight trials were then arbitrarily reduced to five representative trials for the reporting of results here. Table 5-1 shows the calibration optimization statistics.

Table 5-1: Calibration optimization statistics
Brief statistics for the calibration optimization step using the Sequential Quadratic Programming algorithm

| Trial | Iterations | Objective Function Value | First Order Optimality | Constraint Violation |
|---|---|---|---|---|
| 1 | 19 | 624005.1446 | 0.001830164 | 3.78786E-08 |
| 2 | 23 | 539046.6033 | 0.000212945 | 9.54606E-09 |
| 3 | 20 | 533839.1261 | 0.002044092 | 4.64497E-08 |
| 4 | 24 | 554550.4986 | 0.003572365 | 5.12227E-08 |
| 5 | 20 | 567094.4731 | 0.001807924 | 7.14499E-09 |

Each optimization solved relatively quickly, with only about twenty iterations. In each case the solver stopped because the change in the objective function became less than its tolerance with no active constraints. The constraint violation is also negligible since its units are at most on the order of squared millimeters. Practically, any constraint violation up to the order of a whole millimeter is reasonably acceptable given the accuracy of the kinematic model and the rotation data in this work.

We can also look at the resulting optimal design vector to gain insight into the effectiveness of this approach. Table 5-2 shows the resulting geometric parameters of the subject's upper limb. These values should theoretically be constant between trials, and excessive variation here may indicate errors in the model and data.


Table 5-2: Calibration subject geometric parameter results

| Trial | Upper $l_U$ [mm] | Lower $l_L$ [mm] | Hand $l_H$ [mm] | Total $l_U+l_L+l_H$ [mm] |
|---|---|---|---|---|
| 1 | 300 | 239 | 87 | 627 |
| 2 | 319 | 228 | 55 | 603 |
| 3 | 347 | 206 | 70 | 623 |
| 4 | 325 | 216 | 78 | 619 |
| 5 | 317 | 218 | 79 | 615 |
| minimum | 300 | 206 | 55 | 603 |
| mean | 322 | 221 | 74 | 617 |
| maximum | 347 | 239 | 87 | 627 |
| range | 47 | 33 | 32 | 24 |

These limb segment length values are qualitatively correct. The values decrease in magnitude from upper to lower to hand, and they are near the rough values expected from approximating these dimensions with a ruler. However, the variation of the segment lengths over multiple trials is more interesting. If we normalize the range of values to their mean, we see similar results for the upper and lower limb segments at about 15%, but much greater variation in the hand at 44%. The total length of all segments varies much less, with a mean-normalized range of 4%. This may suggest that the model is not failing to capture the subject's overall geometry, but rather failing to distribute it properly among the segments. However, the absolute ranges of this length data are all roughly the same, varying from 24 mm to 47 mm, so this interpretation may be merely a consequence of the true magnitude of each value. It may be that the experimental protocol does not allow the subject to articulate the upper limb through the range of joint motion required to properly calibrate the upper limb geometry.


5.4 Analysis Results

The analysis is performed on the three reach tasks contained in each of these five trials. For each task, we consider three evaluation metrics. First, we compute the magnitude of the plane error $e_p = \|\boldsymbol{e}_p\|$ of (4-58) to provide an indication of how well the subject performed the intended motion. Then, we qualitatively and quantitatively investigate the results from the inverse kinematics mapping developed in section 4.3.2. The latter is considered to be the distance error in $\boldsymbol{d}_S^F$ before and after the inverse kinematic mapping modifies the motion data.

In this section, we show the analysis results for a representative trial, number 4. A visualization of the calibration data is shown in Figure 5-6 for qualitative inspection. We see that the forward kinematics result does not exactly correspond with the locations of the calibration points for all three calibration points (top row). Since this affects the relationship between a perfect plane motion in the data representation and the assumed perfect plane motion, it will likely affect the results for the plane error magnitude $e_p$.


Figure 5-6: Calibration data visualization for trial 4

Visualization of calibrated data (top) and experiment video (bottom) at each calibration point

The compounded effect we expect this calibration correspondence error to have on $e_p$ is illustrated in Figure 5-7. A translation of the perfect motion plane (left) in the data will result in a constant offset of the true $e_p$, and a rotation will result in a v-shaped offset of $e_p$ with the minimum representing the intersection of the two planes. These plots are purely geometrical in that the time-evolving trajectory of the point of interest is not represented; some warping of these characteristic curve shapes due to acceleration is likely to be evident in the real data.

Figure 5-7: The effect of poor calibration correspondence on plane error


Figure 5-8 shows the plane error plotted against sample number for all three reach tasks of trial 4. We see that this error is bounded within 8 cm and that the majority of the data is bounded by 5 cm. We also see the predicted characteristic shapes of Figure 5-7 evident in the data for $C_1 \rightarrow C_2$ and $C_3 \rightarrow C_1$, while the error for $C_2 \rightarrow C_3$ is much higher and in a shape that is inconsistent with the expected sources of error, other than the human error itself that we intended to capture.

Figure 5-8: Magnitude of plane error $e_p$ for three reaches in trial 4

The joint angle trajectories for the reach $C_2 \rightarrow C_3$ are shown in Figure 5-9. As intuition would suggest, the majority of the motion contribution comes from $\theta_{sz}$, $\theta_{sy}$, and $\theta_{ez}$. We also note that the assumption in (4-27) appears to be approximately satisfied, with a value very near 0°.


Figure 5-9: Inverse kinematics joint angle trajectories

The validity of the inverse kinematics data is then verified by calculating $\boldsymbol{d}_S^F$ both from the sensor data directly and from the joint angles produced by the inverse kinematics calculation. The distance between these two calculations of $\boldsymbol{d}_S^F$ is shown in Figure 5-10 for all three reaches. The maximum error here is bounded at 3.5 cm, which is significantly less than the plane error bounds that result largely from calibration errors. Possible sources of this error include sensor inaccuracies such as drift and mechanical errors such as relative motion between the sensor and the subject's arm due to skin movement.


Figure 5-10: Magnitude of error introduced by inverse kinematics


6 Discussion

This exercise in applying COTS devices to human motion analysis has been largely successful as the first iteration in this motion capture system's engineering development. The software tools developed to communicate with Myo and Sphero from the MATLAB environment served their intended feature sets, although certain limitations have affected their usability and the accuracy of results. The calibration procedure and method have been shown to function correctly on the small set of trials tested in this work; however, the accuracy of the calibration leaves much room for improvement. The method of human upper limb kinematic modeling is also shown to be correct, although it may introduce unnecessary errors.

The Myo SDK MATLAB MEX Wrapper project has not presented any problems even after substantial use in this project. Furthermore, at the time of this writing, the community on The MathWorks' File Exchange has provided a strong endorsement in the form of twenty-one 5-star ratings and an average of forty to sixty downloads per month over a duration of ten months [14]. This project has been used by everyone from software developers to undergraduate students, graduate researchers, engineering faculty, and professionals. A variety of applications have been reported, including robot controllers and medical research, and this package has even been used to win a hackathon.

Although this Myo package has brought such success to the greater academic and development communities as well as to this work, the feature limitations with EMG streaming are an impediment to future progress that should be addressed. Many applications, motor rehabilitation included, would benefit greatly if the MATLAB interface to Myo supported streaming EMG data from more than one sensor simultaneously. This limitation can be removed in future work by replacing the Myo Connect desktop application with a new software solution that provides data from multiple Myos paired to different Bluetooth dongles.

Sphero API MATLAB SDK was also a great success, although it has not attained quite the same popularity as the Myo project. Since its initial release in the summer of 2015, this project has been downloaded roughly ten to fifteen times per month, and it also maintains a five-star rating [15]. User applications built on this code have been developed by undergraduate students, professional researchers, and even nontechnical graduate researchers. The projects undertaken by these groups include computer-vision feedback controlled choreography of Sphero, oceanographic data collection (leveraging Sphero's waterproof quality), and rat evader experiments to understand the trajectory planning algorithms used by rats.

Projects that interact with a single Sphero device and otherwise incur very little computational cost perform quite well with the current MATLAB interface software for Sphero. However, when resource demands are added to the MATLAB session in which Sphero is being operated, a limiting threshold exists beyond which the functionality of the Sphero interface breaks down. This is viewed as a performance limitation that may very well have affected the results in this work. This deficiency typically manifests when streaming data from Sphero at a sufficiently high data rate, and it is identified by lag in the processing of scheduled tasks. For instance, users will notice that an animation plot update scheduled by a timer lags behind the expected update time during symptomatic scenarios. This is due to the inability of MATLAB to quickly process the callbacks generated by the built-in Bluetooth object when new incoming bytes become available. The compounding computational overhead between MATLAB m-code and the Bluetooth data across many abstraction layers leads to this performance limitation; these layers include the MATLAB compiler runtime environment, the Java virtual machine MATLAB runs within, the operating system, and the Bluetooth hardware itself. Future work can address this limitation by providing the Bluetooth implementation in a MEX file relying upon a third-party Bluetooth library instead of the m-code implementation that uses the MATLAB Bluetooth object. In a fashion similar to that with which Myo SDK MATLAB MEX Wrapper was developed, the m-code interface can then be built on top of this lower-level base.

The collective utility of the software tools for Myo and Sphero has been shown in this work by the application cases presented in section 3.3 for the individual software tools, as well as in the main application to human upper limb motion analysis that is the focus of this work. A measurable intended outcome of the software development is seen in the code size for the CLI scripts in 3.3.1 and 3.3.3 for Myo and Sphero, respectively. When we compare the MATLAB m-code size for a simple data collection exercise to the total code size for each of these projects, we see a great reduction. For Myo, we compare a 5-line script to a total of roughly 1500 source lines of code in the implementation of Myo SDK MATLAB MEX Wrapper. In a similar way, for Sphero we see a reduction to a 9-line script from a MATLAB implementation of almost 3000 source lines of code. This massive reduction in the code required to perform simple to moderately complicated tasks with Myo and Sphero in MATLAB contributes to the accessibility of these devices for a larger population of students and researchers in academia. In addition to the reduction in code size, we also see a great lessening of the specific programming language knowledge required to use these devices in MATLAB. Much of the strict syntax and logic encoded in these projects has been abstracted away from the end user's experience so that even nontechnical researchers or technical students who lack previous programming experience are enabled to use these devices.

The calibration errors discovered in section 5.4 are suspected to be caused by

many contributing sources including excessive calibration jig manufacturing tolerances, data

synchronization errors, and the violation of kinematic modeling assumptions. The latter two

are more likely than the former to have been the dominant sources for error in this work.

Data synchronization error may be related to the same cause as the Sphero interface performance limitation described previously. When the Bluetooth object event queue is not processed as quickly as it is generated, the most recent data available to the user becomes progressively older as the lag increases. The apparent quality of data synchronization based upon the lagging Sphero data will lead users to place misguided trust in the synchronization quality between the Sphero data source and the Myo data sources. With the multiple data sources out of synchronization, the recorded data for a given time instant will be invalid with respect to the physical phenomenon that is intended to have been captured. If this problem exists during calibration, it will affect the accuracy of all subsequently calibrated data elements.

The kinematic modeling assumption that is perhaps most susceptible to violation is that the origin of the shoulder joint remains fixed in space. Under this assumption, the subject directly introduces error into the data set if the shoulder is moved: a translation of the shoulder maps into a translation of every distal point on the upper limb by the same amount. It is quite possible that a substantial portion of the observed error in this work is due to unintentional motions by the subject in violation of this assumption. Future work should use complementary technologies to observe translational information about the upper limb kinematics in addition to the rotational position information currently used from the IMU sources. Incorporation of the data from a vision-based sensor such as the Microsoft Kinect may offer all of the benefits of vision-based and contact-based sensors in one integrated motion capture system.

The present work has proposed a motion capture system for the human upper limb based on the use of the COTS devices Myo and Sphero. The necessary interface software has been developed so that researchers with minimal programming experience can interact with the best quality data from these devices in near real time. An upper limb kinematic model, calibration procedure, and experimental protocol were developed as a framework within which to test the function and performance of the interface software as well as the modeling methodology itself. The calibration procedure includes the determination of the unknown geometry of the subject as well as the virtual configuration of the subject's task environment. This knowledge of the task environment enables the use of both physical and virtual fixtures in motion analysis applications such as motor rehabilitation therapy task evaluation.

The present work has provided a major contribution to researchers across the globe by providing the first fully featured and fully functional MATLAB m-code interface to Thalmic Labs' Myo armband [14]. The software tools for Sphero were also the first of their kind to have been released publicly as a device interface for MATLAB [15]. In contrast to the typical choice of the Microsoft Kinect as a COTS device for human motion analysis, the present work has taken a different approach to motion capture utilizing lower-quality, consumer-grade, contact-based IMU sensing technology. Considering the challenges associated with end-to-end development of such a motion capture system, the demonstration of reasonably correct calibration and motion analysis results proves to be a third major contribution of this work. Finally, the set of recommendations for future work in all aspects of this project sets forth a path to follow into the next iteration of human motion capture system design.


Appendix A Source Code

Here we collect the textual content of source code files that are not prohibitively long (i.e., greater than 1000 source lines of code) and that aid the discussion in the main body text. Note that all source code written for this project can be found at the author's GitHub (https://github.com/mark-toma/) in the public repositories MyoMex and SpheroMATLAB [16], [18].

Appendix A.1 myo_class.hpp

#ifndef MYO_CLASS_HPP

#define MYO_CLASS_HPP

// comment the following line to remove debug output via DB_MYO_CLASS()

//#define DEBUG_MYO_CLASS

#ifdef DEBUG_MYO_CLASS

#define DB_MYO_CLASS(fmt, ...) printf(fmt, ##__VA_ARGS__)

#else

#define DB_MYO_CLASS(fmt, ...)

#endif

#include "mex.h"

#include "myo/myo.hpp" // myo sdk cpp binding

#include <array> // myo sdk emg data

#include <vector>

#include <queue>

// --- Data Frames

// These structures are used as return types in MyoData and DataCollector

// methods to return a single sample of data from one time instance

// IMU data frame

struct FrameIMU

{

myo::Quaternion<float> quat;

myo::Vector3<float> gyro;

myo::Vector3<float> accel;

myo::Pose pose;

myo::Arm arm;

myo::XDirection xDir;

}; // FrameIMU

// EMG data frame

struct FrameEMG

{

std::array<int8_t,8> emg;

}; // FrameEMG


// END Data Frames

// --- MyoData

// This class is used to keep track of the data for one physical Myo.

// Typical use may be to instantiate a MyoData for each unique myo received

// by a DataCollector. Then, subsequent calls into add<data> and get<data>

// can be used to receive contiguous collections of IMU, EMG, and Meta data

class MyoData

{

// Data frames for returning data samples

FrameIMU frameIMU;

FrameEMG frameEMG;

// Pointer to a Myo device instance provided by hub

myo::Myo* pMyo;

bool addEmgEnabled;

// IMU data queues and state information

std::queue<myo::Quaternion<float>,std::deque<myo::Quaternion<float>>> quat;

std::queue<myo::Vector3<float>,std::deque<myo::Vector3<float>>> gyro;

std::queue<myo::Vector3<float>,std::deque<myo::Vector3<float>>> accel;

std::queue<myo::Pose,std::deque<myo::Pose>> pose;

std::queue<myo::Arm,std::deque<myo::Arm>> arm;

std::queue<myo::XDirection,std::deque<myo::XDirection>> xDir;

uint64_t timestampIMU;

unsigned int countIMU;

// EMG data queues and state information

std::queue<std::array<int8_t,8>,std::deque<std::array<int8_t,8>>> emg;

unsigned int semEMG;

unsigned int countEMG;

uint64_t timestampEMG;

// syncIMU

// Called before pushing quat, gyro, or accel

// This updates the timestampIMU member to keep track of the IMU data.

// If it is detected that a sample of quat, gyro, or accel was skipped,

// the previous value for that data source is copied to fill the gap.

// This is zero-order-hold interpolation of missing timeseries data.

// Furthermore, since the event-based meta data (pose, arm, xDir) is held
// on the IMU time base here, we fill those queues up to the current IMU
// count to maintain consistency there.

void syncIMU(uint64_t ts)

{

if ( ts > timestampIMU ) {

// fill IMU data (only if we missed samples)

while ( quat.size() < countIMU ) {

myo::Quaternion<float> q = quat.back();

quat.push(q);

}

while ( gyro.size() < countIMU ) {

myo::Vector3<float> g = gyro.back();

gyro.push(g);


}

while ( accel.size() < countIMU ) {

myo::Vector3<float> a = accel.back();

accel.push(a);

}

countIMU++;

timestampIMU = ts;

// fill pose, arm, and xDir up to the new countIMU

myo::Pose p = pose.back();

while ( pose.size()<(countIMU-1) ) { pose.push(p); }

myo::Arm a = arm.back();

while ( arm.size()<(countIMU-1) ) { arm.push(a); }

myo::XDirection x = xDir.back();

while ( xDir.size()<(countIMU-1) ) { xDir.push(x); }

}

}

// sync<Pose/Arm/XDir>

// This event-based meta data is resampled onto the IMU time base, so we
// fill these queues up to the current IMU count to maintain consistency.
// Things would break down if these events fired more frequently than the
// underlying data samples; that is highly unlikely, but the boolean return
// values allow for guarding against this case.

bool syncPose(uint64_t ts)

{

if (pose.size() == countIMU)

return false;

myo::Pose p = pose.back();

while ( pose.size()<(countIMU-1) ) { pose.push(p); }

return true;

}

bool syncArm(uint64_t ts)

{

if (arm.size() == countIMU)

return false;

myo::Arm a = arm.back();

while ( arm.size()<(countIMU-1) ) { arm.push(a); }

return true;

}

bool syncXDir(uint64_t ts)

{

if (xDir.size() == countIMU)

return false;

myo::XDirection x = xDir.back();

while ( xDir.size()<(countIMU-1) ) { xDir.push(x); }

return true;

}

// syncEMG

// Called before pushing emg data

// This updates the timestampEMG member to keep track of the EMG data.

// If it is detected that a sample of emg was skipped, the previous value

// for that data source is copied to fill the gap. This operation sounds

// trivial, but it isn't quite as simple as you'd expect. Myo SDK

// provides emg data samples in pairs for each unique timestamp. That is,

// timeN emgN


// time0 emg1

// time0 emg2

// time1 emg3

// time1 emg4

// timeK emg(2*K+1)

// timeK emg(2*K+2)

// So then, we keep track of the number of new emg samples received

// without a new timestamp in semEMG. Then pad emg with the last value if

// it's detected that a sample was missed.

// This is zero-order-hold interpolation of missing timeseries data.

void syncEMG(uint64_t ts)

{

if ( ts>timestampEMG ) { // new timestamp

if ( 0==(semEMG%2) ) {

std::array<int8_t,8> e = emg.back();

emg.push(e);

}

semEMG = 0; // reset sem

} else {

semEMG++; // increment sem

}

countEMG++;

timestampEMG = ts;

}

public:

MyoData(myo::Myo* myo, uint64_t timestamp, bool _addEmgEnabled)

: countIMU(1), countEMG(1), semEMG(0), timestampIMU(0), timestampEMG(0)

{

pMyo = myo; // pointer to myo::Myo

// perform some operations on myo to set it up before subsequent use

pMyo->unlock(myo::Myo::unlockHold);

if (_addEmgEnabled) {

pMyo->setStreamEmg(myo::Myo::streamEmgEnabled);

countEMG = 1;

std::array<int8_t,8> _emg;

emg.push(_emg);

timestampEMG = timestamp;

}

addEmgEnabled = _addEmgEnabled;

// fill up the other private members

myo::Quaternion<float> _quat; // dummy default objects

myo::Vector3<float> _gyro;

myo::Vector3<float> _accel;

quat.push(_quat); // push them back onto queues

gyro.push(_gyro);

accel.push(_accel);

pose.push(myo::Pose::unknown);

arm.push(myo::armUnknown);

xDir.push(myo::xDirectionUnknown);

timestampIMU = timestamp;

}

// Myo is owned by hub... no cleanup necessary here


~MyoData() {}

// getFrameXXX

// Read a sample of data from the IMU or EMG queues

FrameIMU &getFrameIMU()

{

countIMU = countIMU - 1;

frameIMU.quat = quat.front();

frameIMU.gyro = gyro.front();

frameIMU.accel = accel.front();

frameIMU.pose = pose.front();

frameIMU.arm = arm.front();

frameIMU.xDir = xDir.front();

quat.pop();

gyro.pop();

accel.pop();

pose.pop();

arm.pop();

xDir.pop();

return frameIMU;

}

FrameEMG &getFrameEMG()

{

countEMG = countEMG - 1;

frameEMG.emg = emg.front();

emg.pop();

return frameEMG;

}

// getInstance

// Get the pointer to this myo::Myo* object. Use this function to test

// equivalence of this MyoData's myo pointer to another.

myo::Myo* getInstance() { return pMyo; }

// getCountXXX

// Get the number of valid samples in the IMU or EMG queues

unsigned int getCountIMU() { return countIMU; }

unsigned int getCountEMG() { return countEMG; }

// syncDataSources

// Pops data off of queues until there are at most two bad samples.

// Subsequently, a third bad sample may fill the read head of the queue.

// Use this function to put the data queues into a known state. Throw
// away the first three samples of data read after this call; the rest
// should be contiguous at the maximum sample rate for the data source.

void syncDataSources()

{

FrameIMU frameIMU;

while ( getCountIMU() > 1 )

frameIMU = getFrameIMU();

FrameEMG frameEMG;

while ( getCountEMG() > 1 )

frameEMG = getFrameEMG();

}

// add<data> functions

// All of these perform two operations:


// * sync<type>

// Syncs up the data queues that are being sampled on the same time

// base.

// * <data>.push(_<data>) pushes new data onto its queue

void addQuat(const myo::Quaternion<float>& _quat, uint64_t timestamp)

{

syncIMU(timestamp);

quat.push(_quat);

}

void addGyro(const myo::Vector3<float>& _gyro, uint64_t timestamp)

{

syncIMU(timestamp);

gyro.push(_gyro);

}

void addAccel(const myo::Vector3<float>& _accel, uint64_t timestamp)

{

syncIMU(timestamp);

accel.push(_accel);

}

void addEmg(const int8_t *_emg, uint64_t timestamp)

{

if (!addEmgEnabled ) { return; }

syncEMG(timestamp);

std::array<int8_t,8> tmp;

for (int ii = 0; ii < 8; ii++) { tmp[ii] = _emg[ii]; }

emg.push(tmp);

}

void addPose(myo::Pose _pose, uint64_t timestamp)

{

if ( syncPose(timestamp) )

pose.push(_pose);

}

void addArm(myo::Arm _arm, uint64_t timestamp)

{

if ( syncArm(timestamp) )

arm.push(_arm);

}

void addXDir(myo::XDirection _xDir, uint64_t timestamp)

{

if ( syncXDir(timestamp) )

xDir.push(_xDir);

}

}; // MyoData

// END MyoData

// --- DataCollector

// This class provides the link to Myo SDK, encapsulation of the MyoData

// class that manages data queues for each Myo device, and provides access

// to that data.

// * Register this class with a myo::Hub to trigger calls back into the

// on<event> functions below.

// * Call myo::Hub::run to allow callbacks to write data into the

// encapsulated MyoData objects in knownMyos

// * Call getFrameXXX(id) at most getCountXXX(id) times to read samples of


// FrameXXX data, where id is the 1-indexed id for a Myo device with

// maximum value getCountMyos()

class DataCollector : public myo::DeviceListener

{

std::vector<MyoData*> knownMyos;

public:

bool addDataEnabled; // unset to disable callbacks (they'll fall-through)

bool addEmgEnabled;

DataCollector()

: addDataEnabled(false), addEmgEnabled(false)

{}

~DataCollector()

{

// destruct all MyoData* in knownMyos

for (size_t ii = 0; ii < knownMyos.size(); ii++)

{

delete knownMyos[ii];

}

}

// --- Wrappers for MyoData members

// These functions basically vectorize similarly named members of MyoData

// on the elements of knownMyos

unsigned int getCountIMU(int id) { return knownMyos[id-1]->getCountIMU(); }

unsigned int getCountEMG(int id) { return knownMyos[id-1]->getCountEMG(); }

const FrameIMU &getFrameIMU( int id ) { return knownMyos[id-1]->getFrameIMU(); }

const FrameEMG &getFrameEMG( int id ) { return knownMyos[id-1]->getFrameEMG(); }

void syncDataSources()

{

for (size_t ii = 0; ii < knownMyos.size(); ii++)

knownMyos[ii]->syncDataSources();

}

// getCountMyos

// Get current number of myos

const unsigned int getCountMyos() { return knownMyos.size(); }

// getMyoID

// Returns the (1-indexed) ID of input myo in knownMyos. If myo isn't in

// knownMyos yet, it's added. This function can be used to index into

// knownMyos with a myo pointer by: knownMyos[getMyoID(myo)-1].

const unsigned int getMyoID(myo::Myo* myo,uint64_t timestamp)

{

// search myos in knownMyos for myo

for (size_t ii = 0; ii < knownMyos.size(); ii++)

if (knownMyos[ii]->getInstance() == myo) { return ii+1; }

// add myo to a new MyoData* in knownMyos if it doesn't exist yet

knownMyos.push_back(new MyoData(myo,timestamp,addEmgEnabled));

return knownMyos.size();

}

// on<event> Callbacks


// * Refer to the Myo SDK documentation for information on the mechanisms

// that trigger these callback functions in myo::Hub.

// * All of these invoke getMyoID() so as to automatically add myo to

// knownMyos without explicitly expressing this logic.

// * The on<data>Data functions fall-through when !addDataEnabled

// * Some device state meta data is maintained in the state change events

void onPair(myo::Myo* myo, uint64_t timestamp, myo::FirmwareVersion

firmwareVersion)

{

unsigned int tmp = getMyoID(myo,timestamp);

}

void onConnect(myo::Myo *myo, uint64_t timestamp, myo::FirmwareVersion

firmwareVersion)

{

unsigned int tmp = getMyoID(myo,timestamp);

}

void onDisconnect(myo::Myo* myo, uint64_t timestamp)

{

unsigned int id = getMyoID(myo,timestamp);

delete knownMyos[id-1]; // free the MyoData before removing its pointer

knownMyos.erase(knownMyos.begin()+id-1);

}

void onLock(myo::Myo* myo, uint64_t timestamp)

{

// shamelessly unlock the device

myo->unlock(myo::Myo::unlockHold);

}

void onOrientationData(myo::Myo* myo, uint64_t timestamp, const

myo::Quaternion<float>& q)

{

if (!addDataEnabled) { return; }

knownMyos[getMyoID(myo,timestamp)-1]->addQuat(q,timestamp);

}

void onGyroscopeData (myo::Myo* myo, uint64_t timestamp, const

myo::Vector3<float>& g)

{

if (!addDataEnabled) { return; }

knownMyos[getMyoID(myo,timestamp)-1]->addGyro(g,timestamp);

}

void onAccelerometerData (myo::Myo* myo, uint64_t timestamp, const

myo::Vector3<float>& a)

{

if (!addDataEnabled) { return; }

knownMyos[getMyoID(myo,timestamp)-1]->addAccel(a,timestamp);

}

void onEmgData(myo::Myo* myo, uint64_t timestamp, const int8_t *e)

{

if (!addDataEnabled||!addEmgEnabled) { return; }

knownMyos[getMyoID(myo,timestamp)-1]->addEmg(e,timestamp);

}

void onPose(myo::Myo* myo, uint64_t timestamp, myo::Pose p)

{

if (!addDataEnabled) { return; }

knownMyos[getMyoID(myo,timestamp)-1]->addPose(p,timestamp);

}

//void onUnpair(myo::Myo* myo, uint64_t timestamp) {}

void onArmSync(myo::Myo* myo, uint64_t timestamp, myo::Arm arm,

myo::XDirection xDirection) {

if (!addDataEnabled) { return; }


knownMyos[getMyoID(myo,timestamp)-1]->addArm(arm,timestamp);

knownMyos[getMyoID(myo,timestamp)-1]->addXDir(xDirection,timestamp);

}

void onArmUnsync(myo::Myo* myo, uint64_t timestamp) {

if (!addDataEnabled) { return; }

// infer state changes of arm and xdir

myo::Arm newArm = myo::Arm::armUnknown;

myo::XDirection newXDir = myo::XDirection::xDirectionUnknown;

knownMyos[getMyoID(myo,timestamp)-1]->addArm(newArm,timestamp);

knownMyos[getMyoID(myo,timestamp)-1]->addXDir(newXDir,timestamp);

}

//void onUnlock(myo::Myo* myo, uint64_t timestamp) {}

}; // DataCollector

// END DataCollector

#endif // ndef MYO_CLASS_HPP
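
The gap-filling in syncIMU and syncEMG above is zero-order-hold interpolation: whenever a sample is detected as missing, the most recently received value is repeated so that every queue stays the same length. A minimal m-code sketch of the same idea on a generic timeseries (all variable names here are illustrative, not part of the released tools):

% Zero-order-hold fill of missed samples (illustrative analogue of syncIMU).
x      = [0.1 0.3 0.7];              % values actually received
missed = [false true false false];   % slot 2 arrived with no new sample
y = zeros(1,numel(missed));
j = 0;                               % index of the last received sample
for k = 1:numel(missed)
  if ~missed(k)
    j = j + 1;                       % consume the next received sample
  end
  y(k) = x(max(j,1));                % on a gap, hold the previous value
end
% y = [0.1 0.1 0.3 0.7]; the gap at slot 2 is filled with the held value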

Appendix A.2 myo_mex.cpp

// uncomment the following line to enable debug output via mexPrintf()

//#define DEBUG_MYO_MEX

#ifdef DEBUG_MYO_MEX

#define DB_MYO_MEX(fmt, ...) mexPrintf(fmt, ##__VA_ARGS__)

#else

#define DB_MYO_MEX(fmt, ...)

#endif

#include <mex.h> // mex api

#include <windows.h> // win api for threading support

#include <process.h> // process/thread support

#include <queue> // standard type for fifo queue

#include "myo/myo.hpp"

#include "myo_class.hpp"

// macros

#define MAKE_NEG_VAL_ZERO(val) (((val)<0)?(0):(val))

// indices of output args (into plhs[*])

#define DATA_STRUCT_OUT_NUM 0

// indices of data fields into data output struct

#define QUAT_FIELD_NUM 0

#define GYRO_FIELD_NUM 1

#define ACCEL_FIELD_NUM 2

#define POSE_FIELD_NUM 3

#define ARM_FIELD_NUM 4

#define XDIR_FIELD_NUM 5

#define EMG_FIELD_NUM 6

#define NUM_FIELDS 7

const char* output_fields[] =

{"quat","gyro","accel","pose","arm","xDir","emg"};

// program behavior parameters

#define STREAMING_TIMEOUT 5


#define INIT_DELAY 1000

#define RESTART_DELAY 500

#define READ_BUFFER 2

// program state

volatile bool runThreadFlag = false;

// global data

DataCollector collector;

myo::Hub* pHub = NULL;

myo::Myo* pMyo = NULL;

unsigned int countMyosRequired = 1;

// threading

unsigned int threadID;

HANDLE hThread;

HANDLE hMutex;

// thread routine

unsigned __stdcall runThreadFunc( void* pArguments ) {

while ( runThreadFlag ) { // unset runThreadFlag to terminate thread

// acquire lock then write data into queue

DWORD dwWaitResult;

dwWaitResult = WaitForSingleObject(hMutex,INFINITE);

switch (dwWaitResult)

{

case WAIT_OBJECT_0: // The thread got ownership of the mutex

// --- CRITICAL SECTION - holding lock

pHub->runOnce(STREAMING_TIMEOUT); // run callbacks to collector

// END CRITICAL SECTION - release lock

if (! ReleaseMutex(hMutex)) { return FALSE; } // failed to release mutex

break;

case WAIT_ABANDONED:

return FALSE; // acquired bad mutex

}

} // end thread and return

_endthreadex(0); //

return 0;

}

// These functions allocate and assign mxArray to return output to MATLAB

// Pseudo example usage:

// mxArray* outData[...];

// makeOutputXXX(outData,...);

// fillOutputXXX(outData,...);

// // then assign matrices in outData to a MATLAB struct

// plhs[...] = mxCreateStructMatrix(...);

// assnOutputStruct(plhs[...],outData,...);

// Note: The size of outData must be consistent with the hard-coded sizes
// in the makeOutputXXX and fillOutputXXX functions.

void makeOutputIMU(mxArray *outData[], unsigned int sz) {

outData[QUAT_FIELD_NUM] =

mxCreateNumericMatrix(sz,4,mxDOUBLE_CLASS,mxREAL);

outData[GYRO_FIELD_NUM] =

mxCreateNumericMatrix(sz,3,mxDOUBLE_CLASS,mxREAL);

outData[ACCEL_FIELD_NUM] =


mxCreateNumericMatrix(sz,3,mxDOUBLE_CLASS,mxREAL);

outData[POSE_FIELD_NUM] =

mxCreateNumericMatrix(sz,1,mxDOUBLE_CLASS,mxREAL);

outData[ARM_FIELD_NUM] =

mxCreateNumericMatrix(sz,1,mxDOUBLE_CLASS,mxREAL);

outData[XDIR_FIELD_NUM] =

mxCreateNumericMatrix(sz,1,mxDOUBLE_CLASS,mxREAL);

}

void makeOutputEMG(mxArray *outData[], unsigned int sz) {

outData[EMG_FIELD_NUM] =

mxCreateNumericMatrix(sz,8,mxDOUBLE_CLASS,mxREAL);

}

void fillOutputIMU(mxArray *outData[], FrameIMU f,

unsigned int row,unsigned int sz) {

*( mxGetPr(outData[QUAT_FIELD_NUM]) + row+sz*0 ) = f.quat.w();

*( mxGetPr(outData[QUAT_FIELD_NUM]) + row+sz*1 ) = f.quat.x();

*( mxGetPr(outData[QUAT_FIELD_NUM]) + row+sz*2 ) = f.quat.y();

*( mxGetPr(outData[QUAT_FIELD_NUM]) + row+sz*3 ) = f.quat.z();

*( mxGetPr(outData[GYRO_FIELD_NUM]) + row+sz*0 ) = f.gyro.x();

*( mxGetPr(outData[GYRO_FIELD_NUM]) + row+sz*1 ) = f.gyro.y();

*( mxGetPr(outData[GYRO_FIELD_NUM]) + row+sz*2 ) = f.gyro.z();

*( mxGetPr(outData[ACCEL_FIELD_NUM]) + row+sz*0 ) = f.accel.x();

*( mxGetPr(outData[ACCEL_FIELD_NUM]) + row+sz*1 ) = f.accel.y();

*( mxGetPr(outData[ACCEL_FIELD_NUM]) + row+sz*2 ) = f.accel.z();

*( mxGetPr(outData[POSE_FIELD_NUM]) + row ) = f.pose.type();

*( mxGetPr(outData[ARM_FIELD_NUM]) + row ) = f.arm;

*( mxGetPr(outData[XDIR_FIELD_NUM]) + row ) = f.xDir;

}

void fillOutputEMG(mxArray *outData[], FrameEMG f,

unsigned int row,unsigned int sz) {

for (int jj = 0; jj < 8; jj++)

*( mxGetPr(outData[EMG_FIELD_NUM]) + row+sz*jj ) = f.emg[jj];

}

void assnOutputStruct(mxArray *s, mxArray *d[], int id) {

for (int ii = 0; ii < NUM_FIELDS; ii++) {

DB_MYO_MEX("Setting field %d of struct element %d\n",ii+1,id);

mxSetFieldByNumber(s,id-1,ii,d[ii]);

}

}

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])

{

// check for proper number of arguments

if( nrhs<1 )

mexErrMsgTxt("myo_mex requires at least one input.");

if ( !mxIsChar(prhs[0]) )

mexErrMsgTxt("myo_mex requires a char command as the first input.");

if(nlhs>1)

mexErrMsgTxt("myo_mex cannot provide the specified number of outputs.");

char* cmd = mxArrayToString(prhs[0]);

if ( !strcmp("init",cmd) ) {


// ----------------------------------------- myo_mex init -------------

if ( mexIsLocked() )

mexErrMsgTxt("myo_mex is already initialized.\n");

if ( nrhs<2 )

mexErrMsgTxt("myo_mex init requires 2 inputs.\n");

if( !mxIsDouble(prhs[1]) || mxIsComplex(prhs[1]) ||

!(mxGetM(prhs[1])==1 && mxGetN(prhs[1])==1) )

mexErrMsgTxt("myo_mex init requires a numeric scalar countMyos as the

second input.");

// Get input countMyos and set up collector accordingly

countMyosRequired = *mxGetPr(prhs[1]);

if (countMyosRequired==1)

collector.addEmgEnabled = true;

// Instantiate a Hub and get a Myo

pHub = new myo::Hub("com.mark-toma.myo_mex");

if ( !pHub )

mexErrMsgTxt("Hub failed to init!");

pMyo = pHub->waitForMyo(5);

if ( !pMyo )

mexErrMsgTxt("Myo failed to init!");

// configure myo and hub

pHub->setLockingPolicy(myo::Hub::lockingPolicyNone); // TODO: What does

this do?

pHub->addListener(&collector);

// instantiate mutex

hMutex = CreateMutex(NULL,FALSE,NULL);

if (hMutex == NULL)

mexErrMsgTxt("Failed to set up mutex.\n");

// Let Hub run callbacks on collector so we can count the Myos connected
// to Myo Connect and then assert countMyosRequired

pHub->run(INIT_DELAY);

if (countMyosRequired!=collector.getCountMyos())

mexErrMsgTxt("myo_mex failed to initialize with countMyos.\n");

// Flush the data queues with syncDataSources

// Note: This pops the oldest samples of data off the front of all

// queues until only the most recent data remains

collector.syncDataSources();

// At this point we don't anticipate any errors, so we commit to

// locking this file's memory

// Note: The mexLock status is used to determine the initialization

// state in other calls

mexLock();

} else if ( !strcmp("start_streaming",cmd) ) {

// ----------------------------------------- myo_mex start_streaming --

if ( !mexIsLocked() )

mexErrMsgTxt("myo_mex is not initialized.\n");

if ( runThreadFlag )

mexErrMsgTxt("myo_mex is already streaming.\n");

if ( nlhs>0 )


mexErrMsgTxt("myo_mex too many outputs specified.\n");

collector.addDataEnabled = true; // lets collector handle data events

// dispatch concurrent task

runThreadFlag = true;

hThread = (HANDLE)_beginthreadex( NULL, 0, &runThreadFunc, NULL, 0,

&threadID );

if ( !hThread )

mexErrMsgTxt("Failed to create streaming thread!\n");

DB_MYO_MEX("myo_mex start_streaming:\n\tSuccess\n");

} else if ( !strcmp("get_streaming_data",cmd) ) {

// ----------------------------------------- myo_mex get_streaming_data

if ( !mexIsLocked() )

mexErrMsgTxt("myo_mex is not initialized.\n");

if ( !runThreadFlag )

mexErrMsgTxt("myo_mex is not streaming.\n");

if ( nlhs>1 )

mexErrMsgTxt("myo_mex too many outputs specified.\n");

// Verify that collector still has all of its Myos, otherwise error out

unsigned int countMyos = collector.getCountMyos();

if ( countMyos != countMyosRequired )

mexErrMsgTxt("myo_mex countMyos is inconsistent with initialization...

We lost a Myo!");

// Declarations and initializations
// Note: sizes are signed so the MAKE_NEG_VAL_ZERO clamp below can
// detect a negative sample count before it is used as a matrix size
int iiIMU1=0; // Index into output matrices when reading queue
int iiEMG1=0;
int iiIMU2=0;
int iiEMG2=0;
int szIMU1 = 0; // Size of samples to read from queue
int szEMG1 = 0;
int szIMU2 = 0;
int szEMG2 = 0;

FrameIMU frameIMU1, frameIMU2; // Data structures returned from queue read

FrameEMG frameEMG1, frameEMG2;

// Output matrices hold numeric data

mxArray *outData1[NUM_FIELDS];

mxArray *outData2[NUM_FIELDS];

// Compute size of output matrices. Note: EMG sizes remain zero unless
// EMG streaming was enabled for a single Myo in init
szIMU1 = (int)collector.getCountIMU(1)-READ_BUFFER;
if (countMyos<2) {
szEMG1 = (int)collector.getCountEMG(1)-READ_BUFFER;
} else {
szIMU2 = (int)collector.getCountIMU(2)-READ_BUFFER;

}

szIMU1 = MAKE_NEG_VAL_ZERO(szIMU1);

szEMG1 = MAKE_NEG_VAL_ZERO(szEMG1);

szIMU2 = MAKE_NEG_VAL_ZERO(szIMU2);

szEMG2 = MAKE_NEG_VAL_ZERO(szEMG2);

// Initialize output matrices


makeOutputIMU(outData1,szIMU1);

makeOutputEMG(outData1,szEMG1);

makeOutputIMU(outData2,szIMU2);

makeOutputEMG(outData2,szEMG2);

// Now acquire the lock and iteratively drain the queues while
// filling the outDataN matrices

DWORD dwWaitResult;

dwWaitResult = WaitForSingleObject(hMutex,INFINITE);

switch (dwWaitResult)

{

case WAIT_OBJECT_0: // The thread got ownership of the mutex

// --- CRITICAL SECTION - holding lock

while (iiIMU1<szIMU1) { // Read from Myo 1 IMU

frameIMU1 = collector.getFrameIMU(1);

fillOutputIMU(outData1,frameIMU1,iiIMU1,szIMU1);

iiIMU1++;

}

while (iiEMG1<szEMG1) { // Read from Myo 1 EMG

frameEMG1 = collector.getFrameEMG(1);

fillOutputEMG(outData1,frameEMG1,iiEMG1,szEMG1);

iiEMG1++;

}

while (iiIMU2<szIMU2) { // Read from Myo 2 IMU

frameIMU2 = collector.getFrameIMU(2);

fillOutputIMU(outData2,frameIMU2,iiIMU2,szIMU2);

iiIMU2++;

}

while (iiEMG2<szEMG2) { // Read from Myo 2 EMG

frameEMG2 = collector.getFrameEMG(2);

fillOutputEMG(outData2,frameEMG2,iiEMG2,szEMG2);

iiEMG2++;

}

// END CRITICAL SECTION - release lock

if ( !ReleaseMutex(hMutex))

mexErrMsgTxt("Failed to release lock\n");

break;

case WAIT_ABANDONED:

mexErrMsgTxt("Acquired abandoned lock\n");

break;

}

// Assign outDataN matrices to MATLAB struct matrix

plhs[DATA_STRUCT_OUT_NUM] =

mxCreateStructMatrix(1,countMyos,NUM_FIELDS,output_fields);

assnOutputStruct(plhs[DATA_STRUCT_OUT_NUM], outData1, 1);

if (countMyos>1) {

assnOutputStruct(plhs[DATA_STRUCT_OUT_NUM], outData2, 2);

}

} else if ( !strcmp("stop_streaming",cmd) ) {

// ----------------------------------------- myo_mex stop_streaming ---

if ( !mexIsLocked() )

mexErrMsgTxt("myo_mex is not initialized.\n");

if ( !runThreadFlag )

mexErrMsgTxt("myo_mex is not streaming.\n");

if ( nlhs>0 )


mexErrMsgTxt("myo_mex too many outputs specified.\n");

// Terminate thread and reset state

runThreadFlag = false; // thread sees this flag and exits

WaitForSingleObject( hThread, INFINITE );

CloseHandle( hThread );

hThread = NULL;

// Terminate data logging and reset state

collector.addDataEnabled = false; // stop handling data events

collector.syncDataSources(); // sync data up again (flushes queue)

} else if ( !strcmp("delete",cmd) ) {

// ----------------------------------------- myo_mex delete -----------

if ( !mexIsLocked() )

mexErrMsgTxt("myo_mex is not initialized.\n");

if ( runThreadFlag )

mexErrMsgTxt("myo_mex cannot be deleted while streaming. Call

stop_streaming first.\n");

if ( nlhs>0 )

mexErrMsgTxt("myo_mex too many outputs specified.\n");

CloseHandle (hMutex);

hMutex = NULL;

mexUnlock();

if (pHub!=NULL) {

delete pHub;

pHub = NULL; // avoid a dangling pointer if init is called again

}

} else {

mexErrMsgTxt("unknown command!\n");

}

return;

} // end mexFunction
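
A practical note on the command interface above: the MEX file remains locked between init and delete, so an error in user m-code can leave the streaming thread running. The following wrapper is a sketch only, not part of the released tools, showing one defensive way to guarantee teardown:

% Sketch: defensive teardown around a myo_mex streaming session.
myo_mex('init',1);
try
  myo_mex('start_streaming');
  pause(1);                            % ... acquire and process data here ...
  d = myo_mex('get_streaming_data');
  myo_mex('stop_streaming');
catch err
  try, myo_mex('stop_streaming'); end  %#ok<TRYNC> best effort on error
  myo_mex('delete');
  rethrow(err);
end
myo_mex('delete');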

Appendix A.3 SpheroCore.SpinProtocol

function SpinProtocol(s)

% SpinProtocol Reads local input buffer to parse RSP and MSG

% Invoked by the Bluetooth callback BytesAvailableFcn

if length(s.buffer) < 6, return; end % bail if not enough data for a packet

% grab first two bytes and bail on protocol fault

sop1 = s.buffer(1);

sop2 = s.buffer(2);

if sop1 ~= s.SOP1

s.DEBUG_PRINT('Missed SOP1 byte');

s.buffer = s.buffer(2:end);

return;

elseif ~any( sop2 == [s.SOP2_RSP,s.SOP2_ASYNC] )

s.DEBUG_PRINT('Missed SOP2 byte');

s.buffer = s.buffer(3:end);

return;

end

% sop1 and sop2 are both valid now


% proceed to read in

if sop2 == s.SOP2_RSP

% response format

% [ sop1 | sop2 | mrsp | seq | dlen | <data> | chk ]

mrsp = s.buffer(3);

seq = s.buffer(4);

dlen = s.buffer(5);

% read data into buffer

% if the whole packet isn't in the buffer yet, this part will

% attempt a blocking read on the assumed missing bytes.

num_bytes = 5+dlen - length(s.buffer);

new_bytes = [];

if num_bytes > 0

new_bytes = fread(s.bt,num_bytes,'uint8')';

end

s.buffer = [s.buffer,new_bytes];

s.num_skip = s.num_skip + num_bytes; % adjustment for BytesAvailableFcn to ignore the triggers for bytes read manually

% move packet out of buffer

packet = s.buffer(1:5+dlen);

s.buffer = s.buffer(5+dlen+1:end);

% grab data, chk, validate chk

data = packet(6:end-1);

chk = packet(end);

chk_cmp = bitcmp(mod(sum(uint8(packet(3:end-1))),256),'uint8');

if chk ~= chk_cmp, return; end

s.response_packet.sop1 = sop1;

s.response_packet.sop2 = sop2;

s.response_packet.mrsp = mrsp;

s.response_packet.seq = seq;

s.response_packet.dlen = dlen;

s.response_packet.data = data;

s.response_packet.chk = chk;

elseif sop2 == s.SOP2_ASYNC

% async message format

% [ sop1 | sop2 | id_code | dlen_msb | dlen_lsb | <data> | chk ]

id_code = s.buffer(3);

dlen_msb = s.buffer(4);

dlen_lsb = s.buffer(5);

dlen = s.IntegerFromByteArray([dlen_msb,dlen_lsb],'uint16');

% read data into buffer

% if the whole packet isn't in the buffer yet, this part will

% attempt a blocking read on the assumed missing bytes.

num_bytes = double(5+dlen - length(s.buffer));

new_bytes = [];

if num_bytes > 0

new_bytes = fread(s.bt,num_bytes,'uint8')';

end

s.buffer = [s.buffer,new_bytes];


s.num_skip = s.num_skip + num_bytes; % adjustment for BytesAvailableFcn to ignore the triggers for bytes read manually

% move packet out of buffer

packet = s.buffer(1:5+dlen);

s.buffer = s.buffer(5+dlen+1:end);

% grab data, chk, validate chk

data = packet(6:end-1);

chk = packet(end);

chk_cmp = bitcmp(mod(sum(uint8(packet(3:end-1))),256),'uint8');

if chk ~= chk_cmp, return; end

s.DEBUG_PRINT('received packet: %s',sprintf('%0.2X ',packet));

% handle the message

switch id_code

case s.ID_CODE_POWER_NOTIFICATIONS

s.HandlePowerNotification(data);

case s.ID_CODE_LEVEL_1_DIAGNOSTIC_RESPONSE

s.HandleLevel1Diagnostic(data);

case s.ID_CODE_SENSOR_DATA_STREAMING

s.HandleDataStreaming(data);

case s.ID_CODE_CONFIG_BLOCK_CONTENTS

s.HandleConfigBlockContents(data);

case s.ID_CODE_PRE_SLEEP_WARNING

s.HandlePreSleepWarning(data);

case s.ID_CODE_MACRO_MARKERS

s.HandleMacroMarkers(data);

case s.ID_CODE_COLLISION_DETECTED

s.HandleCollisionDetected(data);

case s.ID_CODE_ORB_BASIC_PRINT_MESSAGE

s.HandleOrbBasic(data,'print');

case s.ID_CODE_ORB_BASIC_ERROR_MESSAGE_ASCII

s.HandleOrbBasic(data,'error-ascii');

case s.ID_CODE_ORB_BASIC_ERROR_MESSAGE_BINARY

s.HandleOrbBasic(data,'error-binary');

case s.ID_CODE_SELF_LEVEL_RESULT

s.HandleSelfLevelResult(data);

case s.ID_CODE_GYRO_AXIS_LIMIT_EXCEEDED

s.HandleGyroAxisLimitExceeded(data);

case s.ID_CODE_SPHEROS_SOUL_DATA

s.HandleSpheroSoulData(data);

case s.ID_CODE_LEVEL_UP_NOTIFICATION

s.HandleLevelUpNotification(data);

case s.ID_CODE_SHIELD_DAMAGE_NOTIFICATION

s.HandleShieldDamageNotification(data);

case s.ID_CODE_XP_UPDATE_NOTIFICATION

s.HandleXpUpdateNotification(data);

case s.ID_CODE_BOOST_UPDATE_NOTIFICATION

s.HandleBoostUpdateNotification(data);

otherwise

s.DEBUG_PRINT('Unsupported asynchronous message!');

end

end

end
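
The checksum rule applied twice in SpinProtocol above, the one's complement of the modulo-256 sum over bytes 3 through end-1 of the packet, can be exercised in isolation. A minimal sketch with illustrative byte values (the frame contents below are made up for demonstration):

% Sketch: the Sphero packet checksum validated in SpinProtocol.
body   = [0 82 1 33];                        % illustrative mrsp/seq/dlen/data bytes
chk    = bitcmp(mod(sum(body),256),'uint8'); % one's complement of mod-256 sum
packet = [255 255 body chk];                 % illustrative SOP bytes + body + chk
% The receiver validates by recomputing over packet(3:end-1):
rx_chk  = bitcmp(mod(sum(packet(3:end-1)),256),'uint8');
isValid = (rx_chk == packet(end))            % displays logical 1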


References

[1] MathWorks Image Acquisition Toolbox Team, "Image Acquisition Toolbox Support Package for Kinect For Windows Sensor - File Exchange - MATLAB Central," The MathWorks, Inc., 7 March 2013. [Online]. Available: https://www.mathworks.com/matlabcentral/fileexchange/40445-image-acquisition-toolbox-support-package-for-kinect-for-windows-sensor. [Accessed 15 January 2017].

[2] J. R. Terven, "Kinect 2 Interface for MATLAB - File Exchange - MATLAB Central," The MathWorks, Inc., 9 October 2015. [Online]. Available: https://www.mathworks.com/matlabcentral/fileexchange/53439-kinect-2-interface-for-matlab. [Accessed 15 January 2017].

[3] J. R. Terven and D. M. Córdova-Esparza, "Kin2. A Kinect 2 toolbox for MATLAB," Science of Computer Programming, vol. 130, pp. 97-106, 2016.

[4] A. Boyali and N. Hashimoto, "Spectral Collaborative Representation based Classification for hand gestures recognition on electromyography signals," Biomedical Signal Processing and Control, vol. 24, pp. 11-18, 2016.

[5] A. Boyali, "boyali/matMYO: Matlab MYO Library for Acquiring Raw Sensor Data in Batch Mode," GitHub, Inc., 14 March 2016. [Online]. Available: https://github.com/boyali/matMYO. [Accessed 15 January 2017].

[6] Y. J. Lee, "Sphero MATLAB Interface - File Exchange - MATLAB Central," The MathWorks, Inc., 5 November 2014. [Online]. Available: https://www.mathworks.com/matlabcentral/fileexchange/48359-sphero-matlab-interface. [Accessed 15 January 2017].

[7] D. Sethi, "Sphero Connectivity Package - File Exchange - MATLAB Central," The MathWorks, Inc., 25 August 2015. [Online]. Available: https://www.mathworks.com/matlabcentral/fileexchange/52481-sphero-connectivity-package. [Accessed 15 January 2017].

[8] B. Stern, "Inside Myo | Myo Armband Teardown | Adafruit Learning System," Adafruit Industries, 3 February 2016. [Online]. Available: https://learn.adafruit.com/myo-armband-teardown/inside-myo. [Accessed 6 January 2017].

[9] developer.thalmic.com, "Thalmic Labs - Maker of Myo gesture control armband," Thalmic Labs, 2016. [Online]. Available: https://developer.thalmic.com/downloads. [Accessed 8 December 2016].

[10] developer.thalmic.com, "Thalmic Labs Developer Forum / Tools and Bindings / List of Unofficial Tools and Bindings," Thalmic Labs, 2016. [Online]. Available: https://developer.thalmic.com/forums/topic/541/. [Accessed 8 December 2016].

[11] thalmiclabs, "myo-bluetooth/myohw.h at master · thalmiclabs/myo-bluetooth," GitHub, Inc., 28 September 2015. [Online]. Available: https://github.com/thalmiclabs/myo-bluetooth/blob/master/myohw.h. [Accessed 8 December 2016].

[12] E. White, "Disassembling BB8 (Part 2) | element14 | chriswhite," Element 14: A Premier Farnell Company, 17 September 2015. [Online]. Available: https://www.element14.com/community/blogs/linker/2015/09/17/disassembling-bb8-part2. [Accessed 6 January 2017].

[13] sdk.sphero.com, "Sphero Docs | Getting Started," Sphero, 2016. [Online]. Available: http://sdk.sphero.com/sdk-documentation/getting-started/. [Accessed 8 December 2016].

[14] M. Tomaszewski, "Myo SDK MATLAB MEX Wrapper - File Exchange - MATLAB Central," The MathWorks, Inc., 7 March 2016. [Online]. Available: https://www.mathworks.com/matlabcentral/fileexchange/55817-myo-sdk-matlab-mex-wrapper. [Accessed 7 January 2017].

[15] M. Tomaszewski, "Sphero API MATLAB SDK - File Exchange - MATLAB Central," The MathWorks, Inc., 30 August 2015. [Online]. Available: https://www.mathworks.com/matlabcentral/fileexchange/52746-sphero-api-matlab-sdk. [Accessed 7 January 2017].

[16] M. Tomaszewski, "mark-toma/MyoMex: Access data from Thalmic Labs' Myo Gesture Control Armband in m-code!," GitHub, Inc., 20 November 2016. [Online]. Available: https://github.com/mark-toma/MyoMex. [Accessed 26 December 2016].

[17] developer.thalmic.com, "Myo SDK 0.9.0: Myo SDK Manual," Thalmic Labs, 2014. [Online]. Available: https://developer.thalmic.com/docs/api_reference/platform/index.html. [Accessed 8 December 2016].

[18] M. Tomaszewski, "mark-toma/SpheroMATLAB: Control Sphero from MATLAB in m-code!," GitHub, Inc., 17 August 2016. [Online]. Available: https://github.com/mark-toma/SpheroMATLAB. [Accessed 26 December 2016].

[19] Orbotix, "Sphero API 1.50," 20 August 2013. [Online]. Available: https://github.com/orbotix/DeveloperResources/blob/master/docs/Sphero_API_1.50.pdf. [Accessed 8 January 2017].