
A Visual-Inertial Approach to Human Gait Estimation

Ahmed Ahmed and Stergios Roumeliotis¹

Abstract— This paper addresses the problem of gait estimation using visual and inertial data, as well as human motion models. Specifically, a batch least-squares (BLS) algorithm is presented that fuses data from a minimal set of sensors [two inertial measurement units (IMUs), one on each foot, and a head-mounted IMU-camera pair] along with motion constraints corresponding to the different walking states, to estimate the person's head and feet poses. Subsequently, gait models are employed to solve for the lower-body's posture and generate its animation. Experimental results against the VICON motion capture system demonstrate the accuracy of the proposed minimal sensors-based system for determining a person's motion.

I. INTRODUCTION AND RELATED WORK

Human-motion modeling and estimation is of critical importance to physical therapy and rehabilitation (for assessing, diagnosing, and planning treatment [1]), the movie and gaming industries (for motion capture and character animation [2]), and robotics (e.g., for modeling bipedal walking [3] and indoor localization [4]). Existing motion-capture systems can be classified into the following categories:

(i) External camera-based systems, or outside-in systems (e.g., the VICON system [5]), estimate the body posture by tracking markers attached to the user. They provide high-accuracy measurements in real time, but their high cost, complex infrastructure, limited coverage area, and the cumbersome marker suit they require restrict their use.

(ii) Body-mounted camera systems attach the cameras to the person's body instead and observe the surrounding environment, as an inside-out system. For example, [6] used 16 cameras to capture general motion, while [7] used only 2 head-mounted cameras positioned so as to observe both the moving person and the surroundings. These systems provide high accuracy under sufficient motion and environment conditions, do not impose infrastructure or coverage-area constraints, and cost less than the previous category. Their high computational demands, however, restrict them to offline operation, and the numerous body-mounted cameras still make for an inconvenient setup.

(iii) Body-mounted IMU systems combine angular velocity and linear acceleration data from IMUs attached to different body segments. They follow a sensor-based approach and employ motion constraints (e.g., maintaining the body dimensions, and ensuring zero velocity of the feet during stance periods) to improve estimation accuracy. These systems impose few limitations on the area of operation and achieve real-time performance. They are costly, however, and require a cumbersome sensor-suit setup (e.g., 17 IMUs for the Xsens MVN motion-capture suit [8]).

¹A. M. Ahmed and S. I. Roumeliotis are with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455, USA, medhat|[email protected]. This work was supported by the University of Minnesota and the National Science Foundation (IIS-1328722).

(iv) Peripheral IMU-based systems use prior motion models to reduce the number of body-attached sensors and hence overcome the previous shortcomings. For example, [9] used 4 IMUs (attached to the hands and feet) to estimate the full-body posture, while [10] used one foot-mounted IMU to estimate gait parameters and represent the motion with a simple 2D model. Lastly, [11] also used one IMU, but only to estimate the foot trajectory. These approaches address the system's usability and cost constraints, but reducing the number of IMUs comes at the expense of lower estimation accuracy.

Our objective in this work is to combine the body-mounted camera systems' high accuracy with the peripheral IMU-based systems' low cost and usability within a minimal sensor-based framework. In particular, we employ two foot-mounted IMUs and a head-mounted camera-IMU pair (Google Glass) to estimate the person's trajectory and generate a 3D animation of the corresponding motion. This setup can be used to improve the quality of pervasive healthcare (by providing personalized monitoring and incident detection [12]) and of virtual reality (VR) applications (by enabling natural interaction and immersive experiences [13]).

To achieve our objectives, we need to address two key challenges:

• Since the camera-IMU pair is attached to the person's head while the two IMUs are on their feet, the relative transformations between these sensors are unknown and vary during motion. To fuse information from the three sensing modalities, these transformations have to be estimated.

• The lower-body posture corresponding to the person's motion has to be computed, given only the input from three body-attached sensors.

To this end, we employ a gait model, which describes the body posture's time evolution during walking, and thus allows us to relate the sensors' poses and compute the lower-body posture. In particular, the gait model consists of gait events and nominal joint-angle profiles. The gait events define transition states for a complete walking step (e.g., foot ground contact and foot swing); therefore, they impose pose-related motion constraints at the times of their occurrence [1]. The joint-angle profiles determine the nominal body posture during walking, which is specified by the 27 joint angles of an articulated human model.


Our human model is defined as a set of joints connecting body segments, with lengths computed as a percentage of the body-height input parameter [14].

At the epicenter of our approach is a batch least-squares (BLS) estimator that combines all visual and inertial data along with the motion constraints to estimate the kinematic state (pose, linear and rotational velocity, and linear acceleration) of the person's head and feet. These estimates are then combined with the gait model to determine the body's posture. To summarize, our main contributions are the following:

• To the best of our knowledge, we present the first visual-inertial-based motion-capture system, which takes advantage of the high accuracy of body-mounted camera systems and the usability, low price, and reduced processing requirements of peripheral IMU-based systems.

• We introduce a twofold approach that (i) fuses motion constraints with visual and inertial measurements to estimate the trajectories of the three time-varying head/feet coordinate frames, and (ii) generates a motion animation by using the estimated trajectories and the prior gait model, and solving a non-linear optimization problem in closed form to compute the body pose.

In the next section, we provide an overview of the proposed system. Details of each component are presented in the subsequent sections as follows: motion analysis and constraint generation in Sec. III; trajectory estimation, along with the inertial, visual, and motion-constraint cost terms, in Sec. IV; and motion synthesis in Sec. V. Lastly, we present our experimental results in Sec. VI and our concluding remarks in Sec. VII.

II. APPROACH OVERVIEW

Our objective is to estimate the trajectory of a walking person (along with the body posture) using only a few sensors (a head-mounted camera-IMU and two foot-mounted IMUs). The proposed system comprises the following components (see Fig. 1):

• Input corresponds to the visual/inertial sensor data, the human model, and the gait model.

• The Gait-Event Detector (GED) tracks the walking transition states (e.g., foot ground contact and foot swing). Specifically, it detects gait-event occurrences by applying thresholds to the accelerometer measurements, and thus generates motion constraints corresponding to these events.

• The Trajectory Estimator computes the 3D poses of the 3 body-mounted IMUs (the Google Glass IMU and the two foot-mounted IMUs), as well as their time-varying relative transformations, by fusing visual and inertial data with the proposed motion constraints in a BLS formulation.

• The Motion Synthesizer generates the body animation. In particular, it applies nominal joint angles to the human model using the timings computed by the GED. Then, it places the model along the estimated trajectories by solving a non-linear optimization problem.

Fig. 1. Proposed system's block diagram.

We first describe the gait model and how to detect its events in Sec. III. Then, we present the trajectory estimation along with the generated motion constraints in Sec. IV. Finally, we describe the motion synthesizer in Sec. V.

III. MOTION ANALYSIS AND CONSTRAINT GENERATION

The gait-event detector (GED) tracks the body posture's transition states (e.g., foot ground contact and foot swing) according to the gait-events model, and thus introduces motion constraints that relate the three head/feet trajectories. Moreover, in order to compute the joint angles necessary for generating the 3D animation, we need to determine the body posture corresponding to the person's motion at each time step. To this end, the GED provides information for relating the sensors' poses and computing the lower body's posture. The GED detects the gait events by applying simple thresholds to the magnitude of the linear-acceleration measurements of the foot-mounted IMUs. As a result, it computes gait-event timings and generates motion constraints. The next subsections present the gait-events model and the process of detecting them.

A. Gait Model

Our gait model consists of gait events and nominal joint-angle profiles. The gait events define the transition states of a complete walking cycle, i.e., the pose-related events during the time interval between two successive same-foot ground contacts. Detecting gait events allows us to impose motion constraints at the time of their occurrence. Therefore, it is important to understand these events and the associated body postures. Specifically, the gait cycle (GC) is divided into the following states based on foot-ground-contact events (see Fig. 2):

1) Loading Response (0-10% of the GC): In this transition state, both feet are in contact with the ground (double support period). It is triggered by one foot's initial floor-contact event and continues until the other foot is lifted off the ground for swing (opposite-foot toe-off event).

2) Stance (10-50% of the GC): After the opposite toe-off, the foot becomes completely static and in full contact with the ground during a single support period (i.e., single-foot contact with the ground), while the other foot is swinging. During this state, motion constraints (e.g., the static-foot constraint) are applied [see Sec. IV-C].

3) Pre-Swing (50-60% of the GC): This state mirrors the Loading Response on the other foot. It starts when the opposite swinging foot hits the ground (opposite initial-contact event), while the stance foot is lifted for the swing. The state ends with the foot's toe-off event.

4) Swing (60-100% of the GC): After toe-off, the foot progresses forward until it hits the ground at the next initial-contact event. During this state, the motion constraints (e.g., the static-foot constraint) are applied to the opposite foot [see Sec. IV-C].

Fig. 2. Gait Cycle State Diagram.

Fig. 2 illustrates the four gait-cycle states and the transition events between them.¹ The joint-angle profiles, on the other hand, specify the time evolution of the 27 joint angles (constituting our human model) during the gait cycle. Hence, they define the body posture during a walking step. Fig. 3 illustrates the body posture at the corresponding detected gait events, along with a subset of the nominal joint-angle profiles during the gait cycle.² A detailed analysis of the gait process and states is presented in [1].

¹For illustration purposes, we associate red and blue colors with left and right feet events, respectively, in all figures of the paper.

²Fig. 3 is created using the experimental data provided by the Clinical Gait Analysis (CGA) Normative Gait Database [15] and modified illustrative images acquired from various medical websites.

Fig. 3. Gait Model: Gait states and events during the gait cycle, along with the joint-angle profiles.
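To make the gait-cycle state machine concrete, the Python sketch below encodes the four states and their triggering events as a transition table; the state and event names, as well as the driver loop, are our own illustrative assumptions and not the authors' implementation.

    # Minimal sketch of the per-foot gait-cycle state machine of Fig. 2.
    # States and nominal gait-cycle (GC) percentages follow the text above;
    # event names are illustrative labels only.
    GC_RANGES = {
        "loading_response": (0, 10),   # double support after initial contact
        "stance":           (10, 50),  # single support, foot static
        "pre_swing":        (50, 60),  # double support before toe-off
        "swing":            (60, 100), # foot progresses to next contact
    }

    # (current state, detected event) -> next state
    TRANSITIONS = {
        ("swing",            "initial_contact"):          "loading_response",
        ("loading_response", "opposite_toe_off"):         "stance",
        ("stance",           "opposite_initial_contact"): "pre_swing",
        ("pre_swing",        "toe_off"):                  "swing",
    }

    def step(state, event):
        """Advance the per-foot state machine; ignore events that do not apply."""
        return TRANSITIONS.get((state, event), state)

    if __name__ == "__main__":
        state = "stance"
        for ev in ["opposite_initial_contact", "toe_off", "initial_contact"]:
            state = step(state, ev)
            print(ev, "->", state)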

B. Gait-Event Detector

The gait-event detector (GED) applies thresholds to the magnitude of the accelerometer measurements from the foot-mounted IMUs (which are expected to equal gravity during a stance period) to detect the foot-ground-contact events. Fig. 4 illustrates the gait-event detection process. Specifically, the two lower plots show the magnitude of the left/right foot accelerometer measurements during the gait cycle, along with the GC states and events. As evident, the linear acceleration's magnitude equals that of gravity during the foot-stance periods (i.e., when the foot-mounted IMU is static). The feet-elevation trajectories (computed from the VICON tracking system and shown in Fig. 4) confirm the accuracy of the detected gait-event timings, and thus the validity of this method. Lastly, we note that the proposed system maintains the state machine illustrated in Fig. 2 to track the four gait states.
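As a rough illustration of this thresholding, the following Python sketch flags stance samples whenever the accelerometer magnitude stays close to gravity for a sustained interval; the tolerance and minimum run length are assumed values chosen for illustration, not the thresholds used in our system.

    import numpy as np

    GRAVITY = 9.81  # m/s^2

    def detect_stance(accel, tol=0.4, min_len=20):
        """Flag stance samples where ||a|| stays within `tol` of gravity.

        accel:   (N, 3) accelerometer samples of one foot-mounted IMU [m/s^2]
        tol:     assumed magnitude tolerance around gravity [m/s^2]
        min_len: assumed minimum number of consecutive samples for a stance
        """
        near_g = np.abs(np.linalg.norm(accel, axis=1) - GRAVITY) < tol
        stance = np.zeros_like(near_g)
        start = None
        # Keep only runs of near-gravity samples longer than min_len (debouncing).
        for i, flag in enumerate(near_g):
            if flag and start is None:
                start = i
            if (not flag or i == len(near_g) - 1) and start is not None:
                end = i + 1 if flag else i
                if end - start >= min_len:
                    stance[start:end] = True
                start = None
        return stance

    # Example: a static (stance) segment followed by a noisy swing segment.
    rng = np.random.default_rng(0)
    static = np.tile([0.0, 0.0, GRAVITY], (100, 1)) + 0.05 * rng.standard_normal((100, 3))
    swing = 3.0 * rng.standard_normal((100, 3)) + [0.0, 0.0, GRAVITY]
    print(detect_stance(np.vstack([static, swing])).sum(), "stance samples detected")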

IV. TRAJECTORY ESTIMATION

Our BLS estimator computes the 3D trajectories of the three body-mounted IMUs (the Google Glass IMU and the two foot-mounted IMUs) by fusing visual and inertial data. As mentioned before, the main challenge in this step is that the three IMUs are not rigidly attached to each other, i.e., their relative transformations vary as the person is walking. To address this issue, we estimate their time-varying extrinsics and impose motion constraints on their trajectories. Specifically, our BLS estimates the following state vector:

x = [x_F^T \; x_G^T \; x_L^T \; x_R^T \; x_E^T]^T   (1)

where x_F contains the Euclidean coordinates of the observed visual features, and x_G, x_L, and x_R are the state vectors of the Glass, left-foot, and right-foot IMUs, respectively, along the entire trajectory, i.e., x_G = [x_{G_1}^T \; \ldots \; x_{G_k}^T \; \ldots \; x_{G_N}^T]^T. Each x_{G_i}, i = 1, \ldots, N, comprises the position, attitude, linear velocity, and biases of the corresponding IMU [see (2)].


Fig. 4. Linear acceleration and feet elevation profiles during the gait cycle.

Finally, x_E denotes the extrinsic parameters (the relative poses of the 3 IMUs and the pose of the Glass camera w.r.t. the Glass IMU). The BLS estimator computes x by iteratively minimizing a function comprising cost terms from the IMU measurements, the visual observations, and the motion constraints. Each of these is described in detail hereafter.
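To illustrate the block structure of the stacked state vector in (1)-(2), the Python sketch below lays out assumed index bookkeeping for x; the per-IMU state size and the extrinsic parameterization are our own assumptions for illustration, not the authors' data structures.

    # Sketch of the stacked BLS state vector of (1)-(2). Each IMU state x_{I_k}
    # stacks quaternion (4), gyro bias (3), velocity (3), accel bias (3),
    # position (3) = 16 scalars (assumed bookkeeping).
    IMU_STATE_DIM = 16

    def state_layout(num_features, num_poses_glass, num_poses_left,
                     num_poses_right, extrinsic_dim=3 * 7):
        """Return (name, start, size) slices of the stacked state vector x.

        extrinsic_dim is an assumption: e.g., three 7-DoF (quaternion + position)
        relative/camera-IMU poses for x_E.
        """
        layout, offset = [], 0
        for name, size in [
            ("features x_F", 3 * num_features),
            ("glass IMU x_G", IMU_STATE_DIM * num_poses_glass),
            ("left-foot IMU x_L", IMU_STATE_DIM * num_poses_left),
            ("right-foot IMU x_R", IMU_STATE_DIM * num_poses_right),
            ("extrinsics x_E", extrinsic_dim),
        ]:
            layout.append((name, offset, size))
            offset += size
        return layout, offset

    if __name__ == "__main__":
        layout, total = state_layout(num_features=200, num_poses_glass=50,
                                     num_poses_left=50, num_poses_right=50)
        for name, start, size in layout:
            print(f"{name:20s} start={start:6d} size={size}")
        print("total state dimension:", total)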

A. Inertial measurements cost term C_u(x)

Each IMU measures the sensor's rotational velocity and linear acceleration, contaminated by white Gaussian noise and time-varying biases. The IMU state at time k is defined as

x_{I_k} = [{}^{I_k}q_G^T \; b_{g_k}^T \; {}^{G}v_{I_k}^T \; b_{a_k}^T \; {}^{G}p_{I_k}^T]^T   (2)

where {}^{I_k}q_G is the quaternion representing the orientation of the global frame, {G}, in the IMU's frame of reference, {I_k}, {}^{G}p_{I_k} and {}^{G}v_{I_k} are the position and velocity of {I_k} in {G}, and b_g and b_a are the gyroscope and accelerometer biases, respectively. The sensor's state evolution is described by the propagation model

x_{I_{k+1}} = f(x_{I_k}, u_k) + w_k   (3)

where f(x_{I_k}, u_k) is a nonlinear function for integrating the inertial measurements u_k over the time interval [t_k, t_{k+1}] (see [16] for details), and w_k is the discrete-time, zero-mean, white Gaussian measurement noise with covariance Q_k, computed through IMU characterization [17]. Thus, every IMU measurement imposes a stochastic constraint between the consecutive IMU states x_{I_k} and x_{I_{k+1}}. Linearizing (3) around the state estimates \hat{x}_{I_k} and \hat{x}_{I_{k+1}} results in the following error equation

\tilde{x}_{I_{k+1}} = (f(\hat{x}_{I_k}, u_k) - \hat{x}_{I_{k+1}}) + \Phi_k \tilde{x}_{I_k} + w_k   (4)

where \Phi_k is the corresponding Jacobian w.r.t. the state x_{I_k}. The error state \tilde{x} is defined as the difference between the true state x and the state estimate \hat{x} employed for linearization (i.e., \tilde{x} \triangleq x - \hat{x}), while for the quaternion q a multiplicative error model is used, \tilde{q} \triangleq q \otimes \hat{q}^{-1} \simeq [\frac{1}{2}\delta\theta^T \; 1]^T, where \delta\theta is a minimal representation of the attitude error.

Thus, each inertial measurement u_k contributes to the BLS a linearized cost term of the form

C_{u_k}(\tilde{x}_{I_k}, \tilde{x}_{I_{k+1}}) = \left\| [\Phi_k \;\; -I] \begin{bmatrix} \tilde{x}_{I_k} \\ \tilde{x}_{I_{k+1}} \end{bmatrix} - (\hat{x}_{I_{k+1}} - f(\hat{x}_{I_k}, u_k)) \right\|^2_{Q_k}   (5)

while multiple IMU measurements across time yield a cost term C_{u_I}(x) resulting from the summation of the individual terms (5), i.e.,

C_{u_I}(x) = \sum_{k=1}^{n} C_{u_k}(\tilde{x}_{I_k}, \tilde{x}_{I_{k+1}})   (6)

In our case, we have three IMU sensors, and thus the overall cost function C_u(x) comprises the cost terms from the head-mounted IMU, C_{u_G}(x), the right-foot IMU, C_{u_R}(x), and the left-foot IMU, C_{u_L}(x), i.e.,

C_u(x) = C_{u_G}(x) + C_{u_R}(x) + C_{u_L}(x)   (7)
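For illustration, the linearized cost term (5) can be evaluated as in the Python sketch below; the propagation residual and Jacobian are assumed to be provided by the IMU integration of [16] (not shown here), and the toy dimensions are hypothetical.

    import numpy as np

    def imu_cost_term(dx_k, dx_kp1, prop_residual, Phi_k, Q_k):
        """Linearized inertial cost term of (5),
        || [Phi_k  -I] [dx_k; dx_kp1] - prop_residual ||^2_{Q_k},
        where prop_residual = x_hat_{k+1} - f(x_hat_k, u_k) comes from the
        IMU integration of [16] (computed elsewhere)."""
        r = Phi_k @ dx_k - dx_kp1 - prop_residual
        return float(r @ np.linalg.solve(Q_k, r))

    # Toy 3-dimensional example (a real IMU error state has 15 dimensions):
    # at the linearization point the error states are zero, so the cost reduces
    # to the whitened norm of the propagation residual.
    Phi = np.eye(3)
    Q = 0.01 * np.eye(3)
    print(imu_cost_term(np.zeros(3), np.zeros(3), np.array([0.02, -0.01, 0.0]), Phi, Q))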

B. Visual observations cost term C_z(x)

As the person walks, the head-mounted camera observes the surrounding environment. To provide information about the camera's motion, we extract and track static visual point features. Specifically, we extract Harris corners [18] (i.e., points of local maximum or minimum intensity) as 2D observations of the 3D features and track them through consecutive images using the Kanade-Lucas-Tomasi (KLT) feature tracker [19]. The resulting feature tracks impose visual constraints on the camera's (and hence the IMU's) motion relative to the observed scene. Note that this visual information is of critical importance for reducing the IMU drift.

In particular, each visual measurement relates the camera pose at a specific time step k with the observed feature f through the following projection model

z_{f_k} = \pi({}^{C_k}p_f) + n_{f_k},  with  \pi([x \; y \; z]^T) \triangleq [\frac{x}{z} \; \frac{y}{z}]^T   (8)

where {}^{C_k}p_f is the feature position expressed in the camera frame of reference, n_{f_k} is zero-mean, white Gaussian noise with covariance \sigma_f^2 I_2, and I_2 is the 2x2 identity matrix. Note that we express the feature measurement (8) in normalized pixel coordinates, after performing intrinsic camera calibration offline [20]. Note also that {}^{C_k}p_f is expressed in terms of the state-vector elements (the head-IMU pose {}^{G}p_{I_k} and {}^{I_k}q_G, the feature position {}^{G}p_f in the global frame, and the camera-IMU extrinsics {}^{C}p_I and {}^{C}q_I); i.e., {}^{C_k}p_f = {}^{C}p_I + R({}^{C}q_I) R({}^{I_k}q_G) ({}^{G}p_f - {}^{G}p_{I_k}), where R(q) is the rotation matrix corresponding to the quaternion q. Linearizing (8) yields the error equation

\tilde{z}_{f_k} = H_{f_k}\tilde{x} + n_{f_k}   (9)

where H_{f_k} is the corresponding Jacobian evaluated at the state estimate \hat{x}, which contributes a linearized cost term of the form

C_{z_{(f,k)}}(\tilde{x}) = \|H_{f_k}\tilde{x} - \tilde{z}_{f_k}\|^2_{\sigma_f^2 I_2}   (10)

Since the camera observes multiple features from different poses, we form the visual-observations cost function C_z(x) by accumulating the contributing cost terms (10) over all features and camera poses:

C_z(x) = \sum_k \sum_f C_{z_{(f,k)}}(\tilde{x})   (11)
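A minimal Python sketch of the projection model (8) and of the corresponding reprojection residual is given below; the quaternion-to-rotation conversion and the Jacobian H_{f_k} are omitted, rotation matrices are passed in directly, and the example values are hypothetical.

    import numpy as np

    def project(p_c):
        """Perspective projection pi() of (8): normalized pixel coordinates."""
        return p_c[:2] / p_c[2]

    def reprojection_residual(z, G_p_f, G_p_I, R_IG, C_p_I, R_CI):
        """Residual z - pi(C_p_f) with C_p_f = C_p_I + R_CI R_IG (G_p_f - G_p_I),
        mirroring the camera-frame feature expression above.
        z     : measured normalized image coordinates (2,)
        G_p_f : feature position in the global frame {G}
        G_p_I : head-IMU position in {G};  R_IG: rotation from {G} to the IMU frame
        C_p_I : camera-IMU translation;    R_CI: rotation from the IMU to the camera
        """
        C_p_f = C_p_I + R_CI @ (R_IG @ (G_p_f - G_p_I))
        return z - project(C_p_f)

    # Hypothetical example with identity orientations and a feature 2 m ahead.
    z_meas = np.array([0.05, -0.02])
    r = reprojection_residual(z_meas, G_p_f=np.array([0.1, -0.05, 2.0]),
                              G_p_I=np.zeros(3), R_IG=np.eye(3),
                              C_p_I=np.zeros(3), R_CI=np.eye(3))
    print("residual:", r)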

C. Motion constraints

As mentioned earlier, the main difference between our BLS formulation and existing visual-inertial navigation systems is that we fuse measurements from a camera and 3 IMUs whose relative transformations are unknown and time-varying. In order to estimate them, we introduce additional information in the form of motion constraints that relate the three IMUs' trajectories. Specifically, we take advantage of the fact that the sensors are mounted on a person walking on a planar surface, and seek to infer the kinematic state of each IMU during gait events. In particular, every time a foot hits the ground, we require it to be at the same height (i.e., we restrain both feet to intermittently lie on the same ground plane). Additionally, detecting each foot's stance state allows us to impose zero velocity on that foot during this period. Lastly, we constrain the head's projection on the ground plane to lie in between consecutive footsteps. These simple, yet efficient, constraints capture the person's motion and prevent the estimated IMU trajectories from diverging from each other. Note that all the motion constraints stem from stance events detected by the GED (see Sec. III). In what follows, we assume that these events are already detected and provided to the BLS estimator. Next, we present the formulation of each of these constraints within the BLS framework:

1) Zero-velocity constraint cost term C_v(x): This constraint, applied during the foot-stance states of the gait cycle (see the Gait Model in Sec. III), sets the linear velocity of the foot to zero; this implies that the foot is not moving and is in full contact with the ground. This stochastic constraint and its corresponding error equation at footstep k are expressed as:

z_{v_k} = {}^{G}v_{I_k} + n_{v_k},   \tilde{z}_{v_k} = H_{v_k}\tilde{x} + n_{v_k}   (12)

where H_{v_k} is the corresponding Jacobian, and n_{v_k} is zero-mean, white Gaussian noise with covariance \sigma_v^2 I_3. The constraint contributes to the BLS a cost term of the form

C_{v_k}(\tilde{x}) = \|H_{v_k}\tilde{x} - \tilde{z}_{v_k}\|^2_{\sigma_v^2 I_3}   (13)

For multiple footsteps n, we sum the contributing cost terms (13) corresponding to the left, C_{v_{L_k}}, and right, C_{v_{R_k}}, feet to form the cost function:

C_v(x) = \sum_{k=1}^{n} C_{v_{L_k}} + \sum_{k=1}^{n} C_{v_{R_k}}   (14)

2) Constant-height constraint cost term C_h(x): This constraint sets the foot elevation to be at the same height during the stance states of the gait cycle, i.e., it sets the z position of the IMU to zero. In other words, this constraint restrains the feet to move on a plane, following a planar walking-motion model (see Sec. III). This stochastic constraint and its corresponding error equation at footstep k can be written as:

z_{h_k} = [0 \; 0 \; 1]\,{}^{G}p_{I_k} + n_{h_k},   \tilde{z}_{h_k} = H_{h_k}\tilde{x} + n_{h_k}   (15)

where H_{h_k} is the corresponding Jacobian, and n_{h_k} is zero-mean, white Gaussian noise with variance \sigma_h^2. This constraint contributes to the BLS a cost term of the form

C_{h_k}(\tilde{x}) = \|H_{h_k}\tilde{x} - \tilde{z}_{h_k}\|^2_{\sigma_h^2}   (16)

For multiple footsteps n, we sum the contributing cost terms (16) corresponding to the left, C_{h_{L_k}}, and right, C_{h_{R_k}}, feet to form the cost function:

C_h(x) = \sum_{k=1}^{n} C_{h_{L_k}} + \sum_{k=1}^{n} C_{h_{R_k}}   (17)

3) Head/feet relative-position constraint cost term C_p(x): This constraint requires the projection of the head's position on the plane to be in the middle of consecutive footsteps. In other words, it does not allow the feet to move far away from each other on the planar surface, and maintains the left/right feet positions relative to the head. The constraint can be represented by the following equation

S\,{}^{G}p_{h_k} = \frac{1}{2} S\,({}^{G}p_{l_k} + {}^{G}p_{r_k})   (18)

where S \triangleq \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, {}^{G}p_{h_k} is the head position in the Glass global frame,³ and {}^{G}p_{l_k} and {}^{G}p_{r_k} are the positions of the left and right foot, respectively, during consecutive steps (i.e., both at stance phase) w.r.t. the Glass-IMU global frame {G}. Summarizing, this stochastic constraint and its corresponding error equation are expressed as:

z_{p_k} = S\,{}^{G}p_{h_k} + n_{p_k},   \tilde{z}_{p_k} = H_{p_k}\tilde{x} + n_{p_k}   (19)

where H_{p_k} is the corresponding Jacobian and n_{p_k} is zero-mean, white Gaussian noise with covariance \sigma_p^2 I_2. Finally, the cost term contributed to the BLS in this case is:

C_{p_k}(\tilde{x}) = \|H_{p_k}\tilde{x} - \tilde{z}_{p_k}\|^2_{\sigma_p^2 I_2}   (20)

For multiple footsteps n, we accumulate the contributing cost terms (20) to form the cost function:

C_p(x) = \sum_{k=1}^{n} C_{p_k}   (21)

³The translation between the Glass-IMU position and the head's middle point is calculated based on the human-body measurements.
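For illustration, the residuals underlying (12), (15), and (18)-(19) can be written compactly as in the Python sketch below; the Jacobians and noise weighting of the full BLS are omitted, and the function names and example values are our own.

    import numpy as np

    S = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])  # selects the x-y (ground-plane) components

    def zero_velocity_residual(G_v_foot):
        """Residual of (12): the foot velocity should be zero during stance."""
        return G_v_foot

    def constant_height_residual(G_p_foot):
        """Residual of (15): the stance-foot z coordinate should be zero."""
        return np.array([G_p_foot[2]])

    def head_midpoint_residual(G_p_head, G_p_left, G_p_right):
        """Residual of (18)-(19): the head's ground-plane projection should lie
        at the midpoint of consecutive left/right stance positions."""
        return S @ G_p_head - 0.5 * S @ (G_p_left + G_p_right)

    # Hypothetical stance configuration for one footstep.
    print(zero_velocity_residual(np.array([0.01, -0.02, 0.0])))
    print(constant_height_residual(np.array([1.0, 0.5, 0.03])))
    print(head_midpoint_residual(np.array([1.0, 0.4, 1.7]),
                                 np.array([0.9, 0.2, 0.0]),
                                 np.array([1.1, 0.5, 0.0])))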

D. Batch Least Squares

After adding all the cost terms [see (7), (11), (14), (17), and (21)] together, we iteratively minimize the BLS cost function

C(x) = C_u(x) + C_z(x) + C_v(x) + C_h(x) + C_p(x) = \|H\tilde{x} - z\|^2_R   (22)

using the Gauss-Newton method. The Jacobian H is evaluated at each iteration, and the normal equations are formed by evaluating the residual r and the Hessian matrix \mathcal{H}, i.e.,

H^T R^{-1} H \tilde{x} = H^T R^{-1} z \;\Longrightarrow\; \mathcal{H}\tilde{x} = r   (23)

The normal equations (23) are solved by employing a Cholesky factorization of the (sparse) Hessian and consecutively solving two triangular systems using the SuiteSparse numerical library [21]. The resulting estimated trajectories are then provided to the animator (see Sec. V), which generates the lower-body motion.
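A schematic of one Gauss-Newton step on (23) is sketched below in Python/SciPy; here a sparse LU solve (spsolve) stands in for the sparse Cholesky factorization performed with SuiteSparse, and the toy Jacobian, residual, and covariance are hypothetical placeholders for the stacked cost terms above.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def gauss_newton_step(H, z, R_inv):
        """Solve the normal equations (23): (H^T R^-1 H) dx = H^T R^-1 z.

        H     : sparse measurement Jacobian stacked from all cost terms
        z     : stacked residual vector
        R_inv : sparse inverse measurement covariance (block diagonal)
        A sparse LU solve stands in for the Cholesky factorization of the paper.
        """
        A = (H.T @ R_inv @ H).tocsc()   # Hessian approximation
        b = H.T @ (R_inv @ z)
        return spla.spsolve(A, b)       # state correction dx

    # Tiny hypothetical example: 4 measurements of a 3-dimensional state.
    H = sp.csr_matrix(np.array([[1.0, 0.0, 0.0],
                                [0.0, 1.0, 0.0],
                                [0.0, 0.0, 1.0],
                                [1.0, 1.0, 1.0]]))
    z = np.array([0.1, -0.2, 0.05, 0.0])
    R_inv = sp.identity(4, format="csr")
    print(gauss_newton_step(H, z, R_inv))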

V. MOTION SYNTHESIS

The motion synthesizer generates the 3D animation of the articulated human model (head, trunk, and lower limbs) during walking. In particular, it takes as input the estimated head/feet trajectories, the detected gait-event timings, and the nominal joint-angle profiles of the prior gait model (see Fig. 1), and applies the linear warping function of [2] to compute the model's joint angles across time, hence generating the corresponding human postures. Fig. 3 depicts a subset of the nominal joint-angle profiles during the gait cycle, which are used for generating the motion animation. Finally, the synthesizer computes the body's root (pelvis) pose to place it along the estimated trajectory.

These two main steps for generating the animation [(i) computing the joint angles (based on the gait-event timings and nominal joint-angle profiles), and (ii) evaluating the body pose (based on the estimated trajectories and body posture)] are described in detail in the following two sections.

A. Joint Angle Calculation

The synthesizer computes the joint angles at a given time step using the detected GC events and the nominal joint-angle profiles. In particular, a joint angle \theta at time step t is evaluated using a piecewise-linear warping function [\tau = w(t)] that computes the equivalent GC percentage \tau and applies it to the nominal joint-angle profile g(\tau), i.e.,

\theta(t) = g(\tau) = g(w(t))   (24)

where the warping function w is defined as a linear interpolation between the closest detected gait-event times (t_e and t_{e+1}) and their corresponding gait-cycle percentages (\tau_e and \tau_{e+1}). Fig. 5 illustrates the proposed motion-warping approach and the computed variables of interest. Next, forward kinematics are employed to evaluate the head/feet positions w.r.t. the pelvis using the computed joint angles.

Fig. 5. Piecewise linear motion warping approach.
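The piecewise-linear warping of (24) maps directly onto NumPy interpolation, as in the sketch below; the event times, gait-cycle percentages, and the stand-in profile g are hypothetical values used for illustration (a real profile would be tabulated from the database of [15]).

    import numpy as np

    # Hypothetical GED output for one gait cycle: detected event times (s) and
    # their nominal gait-cycle percentages (initial contact, opposite toe-off,
    # opposite initial contact, toe-off, next initial contact).
    event_times = np.array([0.0, 0.12, 0.55, 0.66, 1.10])
    event_gc_pct = np.array([0.0, 10.0, 50.0, 60.0, 100.0])

    def warp(t):
        """tau = w(t): piecewise-linear map from time to gait-cycle percentage."""
        return np.interp(t, event_times, event_gc_pct)

    def nominal_knee_angle(tau):
        """Stand-in g(tau) for a nominal joint-angle profile (degrees)."""
        return 35.0 * np.sin(np.pi * tau / 100.0) ** 2

    # theta(t) = g(w(t)), as in (24).
    t = np.linspace(0.0, 1.10, 5)
    print(np.column_stack([t, warp(t), nominal_knee_angle(warp(t))]))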

B. Body Pose Evaluation

The final step, after applying the estimated joint angles to the human model, is to place the model along the estimated head/feet trajectories. To do so, the position of the body root (pelvis) and its orientation (yaw angle, since the person is walking on a planar surface) need to be computed. Note that it is not possible to use the estimated head orientation to place the model, as the head can independently point in different directions while walking. To address this issue, we formulate an optimization problem in the body position and yaw angle that seeks to minimize the distances between the articulated model's head/feet positions and the estimated 3D trajectories, i.e., it computes the body pose that best aligns with the trajectories.

In particular, we introduce a closed-form solution for determining the body position {}^{G}p_B w.r.t. the global frame and the yaw angle {}^{G}\theta_B, given the estimated trajectory positions ({}^{G}p_i, i = 1, 2, 3, representing the three head/feet points) and their corresponding human-model positions ({}^{B}p_i, computed by applying forward kinematics). The error \epsilon_i is defined from the geometric constraint as follows

\epsilon_i = {}^{G}p_i - (R_z({}^{G}\theta_B)\,{}^{B}p_i + {}^{G}p_B)   (25)

where R_z(\theta) is the rotation matrix around the z-axis with yaw angle \theta. Therefore, we solve the minimization problem

{}^{G}p_B^*, {}^{G}\theta_B^* = \arg\min_{{}^{G}p_B,\,{}^{G}\theta_B} \Big\{ C({}^{G}p_B, {}^{G}\theta_B) \triangleq \frac{1}{2}\sum_{i=1}^{3} \|\epsilon_i\|^2 \Big\}   (26)

Setting the derivative w.r.t. the body position to zero yields:

\frac{\partial C({}^{G}p_B, {}^{G}\theta_B)}{\partial\,{}^{G}p_B} = 0 \;\Longrightarrow\; {}^{G}p_B^* = \frac{1}{3}\sum_{i=1}^{3} \big[{}^{G}p_i - R_z({}^{G}\theta_B)\,{}^{B}p_i\big]   (27)

Substituting (27) into (26) results in a cost function in the yaw angle {}^{G}\theta_B

C'({}^{G}\theta_B) = \frac{1}{2}\sum_{i=1}^{3} \|R_z({}^{G}\theta_B)\,v_i - u_i\|^2   (28)

where v_i = [v_{x_i} \; v_{y_i} \; v_{z_i}]^T \triangleq \frac{1}{3}\sum_{j=1}^{3} {}^{B}p_j - {}^{B}p_i and u_i = [u_{x_i} \; u_{y_i} \; u_{z_i}]^T \triangleq \frac{1}{3}\sum_{j=1}^{3} {}^{G}p_j - {}^{G}p_i. Since R_z({}^{G}\theta_B) is a rotation matrix around the z-axis, we reformulate the problem as a constrained optimization problem by introducing the variable x \triangleq [\cos{}^{G}\theta_B \;\; \sin{}^{G}\theta_B]^T, i.e.,

x^* = \arg\min_x \Big\{ \frac{1}{2}\sum_{i=1}^{3} \|A_i x - w_i\|^2 \Big\} \quad \text{s.t.} \quad x^T x = 1   (29)

where A_i \triangleq \begin{bmatrix} v_{x_i} & -v_{y_i} \\ v_{y_i} & v_{x_i} \end{bmatrix} and w_i \triangleq [u_{x_i} \; u_{y_i}]^T. Lastly, the body yaw angle is computed from the solution of this constrained optimization problem as x^* = \frac{\sum_{i=1}^{3} A_i^T w_i}{\|\sum_{i=1}^{3} A_i^T w_i\|}.
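The closed-form solution of (25)-(29) translates directly into the short Python sketch below; it assumes the three corresponding point pairs are already available, and the self-check values are hypothetical.

    import numpy as np

    def fit_body_pose(G_p, B_p):
        """Closed-form yaw and position from three head/feet correspondences.

        G_p, B_p: (3, 3) arrays; row i holds G_p_i (estimated trajectory point)
                  and B_p_i (model point from forward kinematics), respectively.
        Implements (27)-(29): yaw from x* = sum(A_i^T w_i)/||.||, then the
        position as the mean of G_p_i - R_z(theta) B_p_i.
        """
        v = B_p.mean(axis=0) - B_p            # v_i
        u = G_p.mean(axis=0) - G_p            # u_i
        acc = np.zeros(2)
        for (vx, vy, _), (ux, uy, _) in zip(v, u):
            A_i = np.array([[vx, -vy], [vy, vx]])
            acc += A_i.T @ np.array([ux, uy])
        c, s = acc / np.linalg.norm(acc)      # cos(theta), sin(theta)
        theta = np.arctan2(s, c)
        R_z = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        p_B = np.mean(G_p - B_p @ R_z.T, axis=0)
        return theta, p_B

    # Hypothetical check: rotate/translate three model points, then recover the pose.
    B_p = np.array([[0.0, 0.1, 1.6], [0.1, 0.2, 0.0], [-0.1, -0.2, 0.0]])
    ang = 0.4
    R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                  [np.sin(ang),  np.cos(ang), 0.0],
                  [0.0, 0.0, 1.0]])
    G_p = B_p @ R.T + np.array([2.0, 1.0, 0.0])
    print(fit_body_pose(G_p, B_p))  # should recover yaw ~0.4 and position ~[2, 1, 0]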

VI. EXPERIMENTAL RESULTS

Our system comprises a Google Glass and two 100 Hz NavChip IMUs, one attached to each foot. From the Google Glass, 100 Hz inertial data are received from the InvenSense 9150 IMU, and 15 Hz, 320×240, narrow-field-of-view (45°) images from the camera. The system estimates head poses every 25 cm and feet poses at every step. The experiments took place within an area of 3×4 meters. We used the VICON system for our ground-truth comparisons. Specifically, for the position-accuracy evaluation, we compared the head/feet trajectory estimates against the VICON ground truth. Similarly, to assess the motion synthesizer's accuracy, we compared the computed head/feet positions (evaluated by applying forward kinematics to the articulated model) against the VICON ground truth.
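For reference, the RMSE values reported in Table I correspond to the usual root-mean-square of the per-sample 3D position errors; a minimal Python sketch, assuming the estimated and VICON trajectories are already time-aligned and expressed in a common frame, is:

    import numpy as np

    def position_rmse(estimated, ground_truth):
        """Root-mean-square 3D position error between two aligned trajectories.

        estimated, ground_truth: (N, 3) arrays of positions in a common frame,
        assumed to be time-aligned (e.g., interpolated to the VICON timestamps).
        """
        err = np.linalg.norm(estimated - ground_truth, axis=1)
        return float(np.sqrt(np.mean(err ** 2)))

    # Hypothetical example.
    gt = np.cumsum(0.01 * np.ones((100, 3)), axis=0)
    est = gt + 0.05 * np.random.default_rng(1).standard_normal((100, 3))
    print(position_rmse(est, gt))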

Table I shows the computed root-mean-square error (RMSE) at each stage of the system (i.e., trajectory estimation and motion synthesis), along with the trajectory lengths. As evident, the error is less than 2% of the distance travelled. Note also that the trajectory estimator's accuracy is typically better than the motion synthesizer's. This is due to the fact that the estimated joint angles need to be refined during motion; for example, inverse kinematics could be used to refine the estimated joint angles given the head/feet position estimates. Figs. 6 and 7 illustrate the outputs of the trajectory estimator and motion synthesizer for Dataset 3, while Fig. 8 depicts the position errors (note that the z-position error is very small for both feet due to the incorporated motion constraints). Finally, a video of the corresponding animations of the walking experiments, along with the estimated trajectories, is available on the project's webpage [22].

TABLE I
RMSE TRAJECTORY ESTIMATION AND MOTION SYNTHESIS ERRORS

                                        Estimator RMSE (m)   Synthesizer RMSE (m)
Dataset 1 - Trajectory Length 29.13 m
  Feet                                        0.13                  0.12
  Head                                        0.05                  0.12
Dataset 2 - Trajectory Length 3.18 m
  Feet                                        0.07                  0.09
  Head                                        0.13                  0.09
Dataset 3 - Trajectory Length 8.31 m
  Feet                                        0.09                  0.09
  Head                                        0.06                  0.14

Fig. 6. Dataset 3: Estimated 3D trajectories vs. VICON ground truth.

Fig. 7. Dataset 3: Estimated 3D trajectories and animated model.

Fig. 8. Dataset 3: Trajectory and motion synthesizer position errors.

VII. CONCLUSIONS

In this paper, we presented, to the best of our knowledge, the first visual-inertial-based motion-capture system. The proposed approach takes advantage of the high accuracy of body-mounted camera systems and the usability, low price, and low computational requirements of peripheral IMU-based systems within a minimal sensor-based framework. To do so, it addresses two main challenges: (i) estimating the trajectories of different sensors with time-varying relative transformations, and (ii) determining the body posture and joint angles corresponding to the person's motion using only three body-attached IMUs. In particular, our system fuses visual and inertial measurements along with motion constraints in a batch least-squares formulation, and incorporates a prior human gait model to generate the motion animation. In our implementation, we use a Google Glass and two IMUs, one attached to each foot. The results of our experiments support the feasibility of the proposed method for decreasing the number of sensors required in motion-capture systems. As part of our future work, we plan to improve the estimation accuracy by incorporating the joint-angle profile models within the trajectory estimation process. Finally, we aim to refine the generated animation by considering the effect of additional gait parameters, such as the walking speed and cadence.

REFERENCES

[1] D. Levine, J. Richards, and M. W. Whittle, Whittle's Gait Analysis, 5th ed. Churchill Livingstone, 2012.


[2] A. Witkin and Z. Popovic, "Motion warping," in Proc. of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, August 6-11, 1995, pp. 105-108.

[3] J. Koenemann, F. Burget, and M. Bennewitz, "Real-time imitation of human whole-body motions by humanoids," in Proc. of the IEEE International Conference on Robotics and Automation, Hong Kong, China, May 31 - June 7, 2014, pp. 2806-2812.

[4] D. G. Kottas and S. I. Roumeliotis, "An iterative Kalman smoother for robust 3D localization on mobile and wearable devices," in Proc. of the IEEE International Conference on Robotics and Automation, Seattle, WA, May 26-30, 2015, pp. 6336-6343.

[5] "VICON: Motion capture system," online, https://www.vicon.com/.

[6] T. Shiratori, H. S. Park, L. Sigal, Y. Sheikh, and J. K. Hodgins, "Motion capture from body-mounted cameras," ACM Transactions on Graphics, vol. 30, no. 4, pp. 31:1-31:10, 2011.

[7] H. Rhodin, C. Richardt, D. Casas, E. Insafutdinov, M. Shafiei, H.-P. Seidel, B. Schiele, and C. Theobalt, "EgoCap: Egocentric marker-less motion capture with two fisheye cameras," ACM Transactions on Graphics, vol. 35, no. 8, pp. 162:1-162:11, 2016.

[8] D. Roetenberg, H. Luinge, and P. Slycke, "Xsens MVN: Full 6DOF human motion tracking using miniature inertial sensors," Xsens Technologies, Tech. Rep., 2009.

[9] J. Tautges, A. Zinke, B. Kruger, J. Baumann, A. Weber, T. Helten, M. Muller, H.-P. Seidel, and B. Eberhardt, "Motion reconstruction using sparse accelerometer data," ACM Transactions on Graphics, vol. 30, no. 3, pp. 18:1-18:12, 2011.

[10] Y. Ketema, D. Gebre-Egziabher, M. Schwarts, C. Matthews, and R. Kriker, "Use of gait-kinematics in sensor-based gait monitoring: A feasibility study," Journal of Applied Mechanics, vol. 81, no. 4, 2013.

[11] Y. S. Suh, "Inertial sensor-based smoother for gait analysis," Sensors, vol. 14, no. 12, pp. 24338-24357, 2014.

[12] U. Varshney, Pervasive Healthcare Computing: EMR/EHR, Wireless and Health Monitoring, 1st ed. Springer Publishing Company, 2009.

[13] S. Tregillus and E. Folmer, "VR-STEP: Walking-in-place using inertial sensing for hands free navigation in mobile VR environments," in Proc. of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, May 7-12, 2016, pp. 1250-1255.

[14] R. Drillis, R. Contini, and M. Bluestein, "Body segment parameters: A survey of measurement techniques," Artificial Limbs, vol. 8, no. 1, pp. 44-66, 1964.

[15] "Clinical Gait Analysis (CGA) Normative Gait Database," available at http://www.clinicalgaitanalysis.com/data/.

[16] N. Trawny and S. I. Roumeliotis, "Indirect Kalman filter for 3D attitude estimation," University of Minnesota, Tech. Rep., March 2005.

[17] R. O. Allen and D. H. Change, "Performance testing of the Systron Donner quartz gyro," JPL Engineering Memorandum, Tech. Rep., January 1993.

[18] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. of the Alvey Vision Conference, Manchester, UK, August 31 - September 2, 1988, pp. 147-151.

[19] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. of the International Joint Conference on Artificial Intelligence, Vancouver, British Columbia, August 24-28, 1981, pp. 674-679.

[20] J.-Y. Bouguet, "Camera calibration toolbox for Matlab," 2006, available at http://www.vision.caltech.edu/bouguetj/calibdoc/, version 1.6.0.

[21] T. A. Davis, "SuiteSparse: A suite of sparse matrix software," available at http://faculty.cse.tamu.edu/davis/suitesparse.html.

[22] "The project webpage," online, http://mars.cs.umn.edu/research/human_motion_project.php.
