
Adam Rachmielowski

615 Project: Real-time monocular vision-based SLAM

Overview

• SFM and SLAM

• Extended Kalman filter

• Visual SLAM details

• Results

• Next

Estimating structure and motion

• Factorization [Tomasi & Kanade ’92]
– Batch method
– Efficient
– Originally for affine camera
– Missing data?
– Finite camera [Sturm & Triggs]

W = MX
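As an aside, the rank-3 factorization behind W = MX can be sketched in a few lines of numpy. This is an illustrative sketch, not the project's code, and it assumes a complete measurement matrix (no missing data); the recovered M and X are only defined up to an affine ambiguity.

import numpy as np

def affine_factorization(W):
    """Tomasi-Kanade-style affine factorization (illustrative sketch).

    W: 2F x P matrix of tracked image points (rows: x then y per frame)."""
    # Center each row so the affine camera translation drops out
    t = W.mean(axis=1, keepdims=True)
    W0 = W - t
    # Rank-3 factorization via SVD: W0 ~ U3 S3 V3^T = M X
    U, S, Vt = np.linalg.svd(W0, full_matrices=False)
    M = U[:, :3] * np.sqrt(S[:3])            # 2F x 3 motion (affine cameras)
    X = np.sqrt(S[:3])[:, None] * Vt[:3]     # 3 x P structure
    return M, X, t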

Estimating structure and motion

• Reconstruction from N views [Hartley & Zisserman ’00]

– Multiview geometric entities and algorithms described by Faugeras, Hartley, Zisserman, and others

– Minimize global error with bundle adjustment
– Can be used sequentially
– Upgrade to Euclidean with auto-calibration

x (image points), F (fundamental matrix), P (camera matrices), X (3D points)
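For reference, the basic linear (DLT) triangulation step used in such reconstructions can be sketched as below; P1, P2, x1, x2 are illustrative names for known 3x4 camera matrices and matched image points.

import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices
    x1, x2: 2-vectors of image coordinates in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solution is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]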

SLAM

• Simultaneous Localisation And Mapping

• Estimate robot’s pose and map feature positions

• Probabilistic framework maintains

– current estimate
– estimate uncertainty (covariance)

• Update based on measurements and model

• Many systems use

– odometry and active sensors as measurement devices
– limited motion models

Vision-based SLAM

• Camera for measurements

• Trinocular
– 3D measurements by triangulation
– Offline [Ayache, Faugeras ’89]
– Real-time with SIFTs [Se, Lowe, Little ’01]

• Real-time monocular [Chiuso et al. ’00]

Kalman filter [Swerling ’58] [Welch, Bishop ’01]

• Estimates state of dynamic system

• Integrates noisy measurements to give optimal estimate

• Noise is Gaussian

x_k = F_k x_{k-1} + B_k u_k + w_k

• First order Markov process

KF: key variables

• x̂_{k|k} : estimate of state at time k

• P_{k|k} : error covariance (estimate uncertainty)

• F_k : state transition function

• z_k : measurement

• H_k : maps state to measurement

• Q_k, R_k : process and measurement noise covariances

KF: Two phase estimation

• Predict
– Predicted state: x̂_{k|k-1} = F_k x̂_{k-1|k-1} + B_k u_k
– Predicted covariance: P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k

KF: Two phase estimation

• Update
– Innovation: ỹ_k = z_k - H_k x̂_{k|k-1}
– Innovation covariance: S_k = H_k P_{k|k-1} H_k^T + R_k
– Kalman gain: K_k = P_{k|k-1} H_k^T S_k^{-1}
– State: x̂_{k|k} = x̂_{k|k-1} + K_k ỹ_k
– Covariance: P_{k|k} = (I - K_k H_k) P_{k|k-1}
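The two phases map directly onto a few lines of numpy. This is a generic linear Kalman filter sketch following the notation above, not the project's implementation.

import numpy as np

def kf_predict(x, P, F, Q, B=None, u=None):
    """Predict phase: propagate state and covariance one step."""
    x_pred = F @ x if B is None else F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update phase: fold one measurement into the prediction."""
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P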

EKF: Extended Kalman filter

• Allow non-linear functions (F, H)

• Apply functions to state

• Apply the Jacobians to covariances

• Linearize functions around the current estimate

x_k = f(x_{k-1}, u_k, w_k)        z_k = h(x_k, v_k)
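A generic EKF sketch, assuming the nonlinear functions f, h and their Jacobians are supplied by the caller (illustrative, not the project's code).

import numpy as np

def ekf_predict(x, P, f, F_jac, Q, u=None):
    """EKF predict: apply the nonlinear f to the state, its Jacobian to the covariance."""
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, h, H_jac, R):
    """EKF update: linearize h around the predicted state."""
    H = H_jac(x_pred)
    y = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ y
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P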

Visual SLAM details [Davison ’03]

• State representation x, P

• Process model F (motion)

• Measurement model H (projection)

• State update

• System initialization

• Adding and removing features

State representation

• Scene structure (feature points)
– Depth from reference image [Azarbayejani, Pentland ’95]
– x, y, z coordinates

• Camera
– Pose
– Motion

State estimate vector

• Points yi

• Camera xv

– 6-DOF pose
– Constant velocity motion model
– Acceleration modeled as noise

x = (x_v, y_1, y_2, …)^T

x_v = (x y z q_0 q_x q_y q_z v_x v_y v_z ω_x ω_y ω_z)^T
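One possible layout of this state vector in code, assuming 13 camera parameters (position, quaternion, linear and angular velocity) followed by 3 coordinates per point; the constants and helper names are illustrative.

import numpy as np

CAM_DIM = 13   # r (3) + q (4) + v (3) + ω (3)
PT_DIM = 3     # x, y, z per feature point

def camera_part(x):
    """Slice the camera parameters x_v out of the full state vector."""
    return x[:CAM_DIM]

def point_part(x, i):
    """Slice feature point y_{i+1} (0-based index i) out of the state."""
    start = CAM_DIM + i * PT_DIM
    return x[start:start + PT_DIM]

# Example: a state holding the camera and two points y_1, y_2
x = np.zeros(CAM_DIM + 2 * PT_DIM)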

Covariance matrix

• Covariance blocks
– P_xx : camera params
– P_yiyi : point i

• Off-diagonal blocks represent correlation between estimates

P = [ P_xx    P_xy1   P_xy2
      P_y1x   P_y1y1  P_y1y2
      P_y2x   P_y2y1  P_y2y2 ]
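Continuing the illustrative layout above (13 camera parameters, 3 per point), the covariance blocks can be addressed by slicing the full matrix.

import numpy as np

CAM_DIM, PT_DIM = 13, 3  # same illustrative layout as above

def cov_block(P, a, b):
    """Return block P_ab: index -1 selects the camera block x_v,
    0, 1, ... select feature points y_1, y_2, ..."""
    def sl(k):
        if k < 0:
            return slice(0, CAM_DIM)
        start = CAM_DIM + k * PT_DIM
        return slice(start, start + PT_DIM)
    return P[sl(a), sl(b)]

# e.g. cov_block(P, -1, -1) is P_xx and cov_block(P, 0, 1) is P_y1y2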

Process model

• Points don’t move: y_k = y_{k-1}

• Add velocity and acceleration to current camera parameters

• Covariance updated using the Jacobian

x_k = f(x_{k-1}, u_k, w_k)

x_v,new = f_v :
  r_new = r + (v + V)Δt
  q_new = q × q((ω + Ω)Δt)
  v_new = v + V
  ω_new = ω + Ω
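A sketch of this constant-velocity step with a simple quaternion update; V and W stand for the velocity and angular-velocity noise impulses for the timestep, and all names are illustrative.

import numpy as np

def quat_mult(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_from_omega(omega, dt):
    """Quaternion for a rotation of omega*dt (axis-angle)."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = omega / np.linalg.norm(omega)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def motion_step(r, q, v, w, V, W, dt):
    """Constant-velocity step: V, W are the noise velocity and
    angular-velocity impulses for this timestep."""
    r_new = r + (v + V) * dt
    q_new = quat_mult(q, quat_from_omega(w + W, dt))
    v_new = v + V
    w_new = w + W
    return r_new, q_new / np.linalg.norm(q_new), v_new, w_new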

Measurement model

• H models projection of the predicted points by the predicted camera

• Covariance S_i guides the feature match search

z_k = h(x_k, v_k)

S_i = (∂h_i/∂x_v) P_xx (∂h_i/∂x_v)^T
    + (∂h_i/∂x_v) P_xyi (∂h_i/∂y_i)^T
    + (∂h_i/∂y_i) P_yix (∂h_i/∂x_v)^T
    + (∂h_i/∂y_i) P_yiyi (∂h_i/∂y_i)^T
    + R
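Computing S_i from the Jacobian and covariance blocks is a direct transcription of the formula above (illustrative names; note P_yix = P_xyi^T).

import numpy as np

def innovation_cov(dh_dxv, dh_dyi, Pxx, Pxyi, Pyiyi, R):
    """Innovation covariance S_i for feature i.

    dh_dxv : 2 x 13 Jacobian of the projection w.r.t. camera state
    dh_dyi : 2 x 3  Jacobian w.r.t. the point
    Pxx, Pxyi, Pyiyi : covariance blocks
    R : 2 x 2 measurement noise"""
    S = (dh_dxv @ Pxx @ dh_dxv.T
         + dh_dxv @ Pxyi @ dh_dyi.T
         + dh_dyi @ Pxyi.T @ dh_dxv.T
         + dh_dyi @ Pyiyi @ dh_dyi.T
         + R)
    return S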

Making measurements / Update

• Project innovation covariance to search ellipse

• Warp template based on camera and point prediction

• If viewing angle is good, match to get measurement

• Compute Kalman gain and update state and covariance
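The search ellipse can be realized as a Mahalanobis (chi-square) gate on the innovation; this generic gating check is a sketch, not the project's matcher.

import numpy as np

# 2-DOF chi-square threshold at ~99% (standard table value)
CHI2_2DOF_99 = 9.21

def in_search_ellipse(z, z_pred, S, gate=CHI2_2DOF_99):
    """Accept a candidate measurement z if its Mahalanobis distance
    to the predicted projection z_pred falls inside the gate."""
    y = z - z_pred
    d2 = y @ np.linalg.solve(S, y)
    return d2 < gate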

System initialization

• Need initial estimate and covariance
– Calibration object
– SFM

• Process covariance
– Small: small searches, but can only handle small accelerations
– Large: can handle big accelerations, but need many measurements

• Measurement covariance
– Function of matching method (camera resolution)

Adding and removing features

• Add
– Select salient feature in desired region
– Search along epipolar line

• Remove
– If matching repeatedly fails

P = [ P_xx    P_xy1   P_xy2
      P_y1x   P_y1y1  P_y1y2
      P_y2x   P_y2y1  P_y2y2 ]

(Davison ’03)
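The slides do not spell out how the state grows when a feature is added; a standard EKF-SLAM augmentation step would look roughly like this, with g a hypothetical initialization function mapping camera state and measurement to a new point, and G_x, G_z its Jacobians.

import numpy as np

def augment_state(x, P, y_new, G_x, G_z, R, cam_dim=13):
    """Append a newly initialized point y_new = g(x_v, z) to the filter.

    G_x = ∂g/∂x_v (3 x cam_dim), G_z = ∂g/∂z, R = measurement noise."""
    n = len(x)
    m = len(y_new)
    x_aug = np.concatenate([x, y_new])
    P_aug = np.zeros((n + m, n + m))
    P_aug[:n, :n] = P
    # Cross-covariance of the new point with the whole existing state
    P_cross = G_x @ P[:cam_dim, :]
    P_aug[n:, :n] = P_cross
    P_aug[:n, n:] = P_cross.T
    # Covariance of the new point itself
    P_aug[n:, n:] = G_x @ P[:cam_dim, :cam_dim] @ G_x.T + G_z @ R @ G_z.T
    return x_aug, P_aug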

Preliminary results

• Simulation [implemented with Birkbeck]
– Behaves according to model
– Initial estimate of camera and 4 key points is true value + small amount of noise
– Initial estimate of other points is true value + significant noise
– Initial covariance is scaled identity

Simulation

Adding points

Simulation with visibility

Next

• Real images (video sequence)
– Feature matching
– Tracking
– SIFTs?

• Real-time issues
– Postponement [Davison ’01]

• Loop closing
– Davison’s system automatically corrects if a feature becomes visible and is correctly measured, but…
– Prevent drift by incorporating explicit loop closing [Newman, Ho ’05]
