3D Photography: Bundle Adjustment and SLAM
31 March 2014
Structure-From-Motion
• Two-view initialization:
– 5-Point algorithm (Minimal Solver)
– 8-Point linear algorithm
– 7-Point algorithm
E → (R, t): decompose the essential matrix E into relative pose.
Structure-From-Motion
• Triangulation: 3D Points
Structure-From-Motion
• Subsequent views: Perspective pose estimation
Bundle Adjustment
• Final step in Structure-from-Motion.
• Refine a visual reconstruction to produce jointly optimal 3D structures P and camera poses C.
• Minimize the total reprojection error.

Cost function:
$$X^* = \operatorname*{argmin}_{X} \sum_i \sum_j \lVert x_{ij} - \pi(C_i, P_j) \rVert^2_{W_{ij}} = \operatorname*{argmin}_{X} \sum_i \sum_j z_{ij}^T W_{ij} z_{ij}$$
where X = [P, C], z_ij = x_ij − π(C_i, P_j) is the reprojection error of point P_j projected into camera C_i and observed at x_ij, and W_ij⁻¹ is the measurement error covariance.
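As a concrete numeric illustration of one term of this cost, the sketch below evaluates a single reprojection error z_ij; the pinhole projection with identity intrinsics and all numeric values are invented for the example, not taken from the slides.

```python
import numpy as np

# One reprojection-error term z_ij = x_ij - pi(C_i, P_j) for a toy
# pinhole camera with identity intrinsics (hypothetical values).
def project(R, t, P):
    """Project world point P through camera pose (R, t)."""
    p_cam = R @ P + t
    return p_cam[:2] / p_cam[2]        # perspective division

R = np.eye(3)                          # camera rotation
t = np.array([0.0, 0.0, 1.0])          # camera translation
P = np.array([0.5, -0.25, 4.0])        # 3D structure point
x_obs = np.array([0.11, -0.04])        # observed image point x_ij

z = x_obs - project(R, t, P)           # reprojection error z_ij
cost = z @ z                           # its contribution to the cost (W_ij = I)
print(z, cost)
```

Bundle adjustment sums such terms over all cameras i and points j that observe each other.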
Bundle Adjustment
• Minimize the cost function:
$$X^* = \operatorname*{argmin}_X f(X)$$
1. Gradient Descent
2. Newton Method
3. Gauss-Newton
4. Levenberg-Marquardt
Bundle Adjustment
1. Gradient Descent

Initialization: X ← X₀
Compute the gradient:
$$g = \left.\frac{\partial f}{\partial X}\right|_{X = X_k} = J^T W z$$
where J = ∂z/∂X is the Jacobian.
Update:
$$X_{k+1} = X_k - \lambda g$$
where λ is the step size. Iterate until convergence.
Slow convergence near the minimum point!
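The loop above can be sketched on a toy linear least-squares cost f(X) = zᵀWz with z = b − AX, so J = −A (the matrices A, b, W are made-up numbers for the demo):

```python
import numpy as np

# Gradient descent on a toy cost f(X) = z^T W z, z(X) = b - A X.
# Here J = dz/dX = -A, so the gradient is g = 2 J^T W z = -2 A^T W z.
A = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
W = np.diag([1.0, 1.0, 2.0])           # information matrix (inverse covariance)

X = np.zeros(2)                        # initialization: X = X_0
lam = 0.05                             # step size
for _ in range(500):                   # iterate until convergence
    z = b - A @ X
    g = -2.0 * A.T @ W @ z             # gradient at the current X
    X = X - lam * g                    # update

X_star = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)   # closed-form optimum
print(X, X_star)
```

Even on this tiny quadratic, hundreds of plain gradient steps are needed, which is the slow convergence the slide warns about.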
Bundle Adjustment
2. Newton Method

2nd-order approximation (quadratic Taylor expansion):
$$f(X) \approx f(X_k) + g^T \Delta X + \tfrac{1}{2} \Delta X^T H \Delta X, \qquad \Delta X = X - X_k$$
where
$$H = \left.\frac{\partial^2 f}{\partial X^2}\right|_{X = X_k}$$
is the Hessian matrix. Find the ΔX that minimizes f(X)!
Bundle Adjustment
2. Newton Method

Differentiating and setting to 0 gives:
$$\Delta X = -H^{-1} g$$
Update: X_{k+1} = X_k + ΔX
Computing H is not trivial, and the iteration might get stuck at a saddle point!
Bundle Adjustment
3. Gauss-Newton

Drop the second-order term of the Hessian:
$$H = \frac{\partial^2 f}{\partial X^2} = J^T W J + \sum_i \sum_j z_{ij}^T W_{ij} \frac{\partial^2 z_{ij}}{\partial X^2} \approx J^T W J$$
Normal equation:
$$J^T W J \, \Delta X = -J^T W z$$
Update: X_{k+1} = X_k + ΔX
Might get stuck, with slow convergence, at a saddle point!
Bundle Adjustment
4. Levenberg-Marquardt

Regularized Gauss-Newton with damping factor λ:
$$(J^T W J + \lambda I)\, \Delta X = -J^T W z, \qquad H_{LM} = J^T W J + \lambda I$$
• λ → 0: Gauss-Newton (when convergence is rapid)
• λ large: gradient descent (when convergence is slow)
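A minimal LM loop on a toy curve-fitting problem; the exponential model y = a·e^{bt} and the 10×/0.1× damping schedule are illustrative assumptions, not from the slides:

```python
import numpy as np

# Levenberg-Marquardt sketch: solve (J^T W J + lam I) dX = -J^T W z per step.
t = np.linspace(0.0, 1.0, 8)
X_true = np.array([2.0, -1.0])
obs = X_true[0] * np.exp(X_true[1] * t)      # noise-free observations
W = np.eye(len(t))

def residual(X):                       # z(X) = obs - model(X)
    return obs - X[0] * np.exp(X[1] * t)

def jacobian(X):                       # J = dz/dX
    e = np.exp(X[1] * t)
    return np.column_stack([-e, -X[0] * t * e])

X = np.array([1.0, 0.0])               # initial guess
lam = 1e-3                             # damping factor
for _ in range(50):
    z, J = residual(X), jacobian(X)
    dX = np.linalg.solve(J.T @ W @ J + lam * np.eye(2), -J.T @ W @ z)
    if residual(X + dX) @ residual(X + dX) < z @ z:
        X, lam = X + dX, lam * 0.1     # step accepted: move toward Gauss-Newton
    else:
        lam *= 10.0                    # step rejected: move toward gradient descent
print(X)                               # converges to (2, -1)
```

Compared with the plain gradient loop, the damped Gauss-Newton step reaches the optimum in a handful of iterations.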
Structure of the Jacobian and Hessian Matrices
• Sparse matrices since 3D structures are locally observed.
Solving the Normal Equation
• Schur Complement

$$H_{LM}\, \Delta X = -J^T W z$$

Partition the system into a 3D-structure block S and a camera-parameter block C:
$$\begin{bmatrix} H_S & H_{SC} \\ H_{SC}^T & H_C \end{bmatrix} \begin{bmatrix} \Delta S \\ \Delta C \end{bmatrix} = \begin{bmatrix} \epsilon_S \\ \epsilon_C \end{bmatrix}$$
with the right-hand side [ε_S; ε_C] = −JᵀWz partitioned accordingly.

Multiply both sides by:
$$\begin{bmatrix} I & 0 \\ -H_{SC}^T H_S^{-1} & I \end{bmatrix}$$
which gives
$$\begin{bmatrix} H_S & H_{SC} \\ 0 & H_C - H_{SC}^T H_S^{-1} H_{SC} \end{bmatrix} \begin{bmatrix} \Delta S \\ \Delta C \end{bmatrix} = \begin{bmatrix} \epsilon_S \\ \epsilon_C - H_{SC}^T H_S^{-1} \epsilon_S \end{bmatrix}$$

First solve for ΔC from:
$$\left(H_C - H_{SC}^T H_S^{-1} H_{SC}\right) \Delta C = \epsilon_C - H_{SC}^T H_S^{-1} \epsilon_S$$
The Schur complement H_C − H_SCᵀ H_S⁻¹ H_SC is a sparse, symmetric positive-definite matrix, and H_S is block diagonal, hence easy to invert. Then solve for ΔS by backward substitution.
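A numeric sanity check of this elimination; the block sizes and the random SPD blocks are invented for the demo:

```python
import numpy as np

# Schur-complement solve of H [dS; dC] = [e_S; e_C], where the structure
# block H_S is block diagonal (3 point blocks of size 3x3) and therefore
# cheap to invert, and the camera block H_C is small (4 parameters).
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
H_S = np.kron(np.eye(3), B @ B.T) + 9 * np.eye(9)   # SPD, block diagonal
H_SC = rng.standard_normal((9, 4))
C = rng.standard_normal((4, 4))
H_C = C @ C.T + 9 * np.eye(4)                       # SPD camera block
e_S, e_C = rng.standard_normal(9), rng.standard_normal(4)

# Reduced camera system: (H_C - H_SC^T H_S^-1 H_SC) dC = e_C - H_SC^T H_S^-1 e_S
H_S_inv = np.linalg.inv(H_S)           # in practice: invert each 3x3 block
S = H_C - H_SC.T @ H_S_inv @ H_SC      # Schur complement (small)
dC = np.linalg.solve(S, e_C - H_SC.T @ H_S_inv @ e_S)
dS = H_S_inv @ (e_S - H_SC @ dC)       # back substitution for the structure part

# Verify against a direct solve of the full system.
H = np.block([[H_S, H_SC], [H_SC.T, H_C]])
full = np.linalg.solve(H, np.concatenate([e_S, e_C]))
print(np.allclose(np.concatenate([dS, dC]), full))
```

The payoff in real bundle adjustment is that the reduced camera system is far smaller than the full system, since points vastly outnumber cameras.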
Solving the Normal Equation
The reduced camera system is a linear system A x = b:
$$\left(H_C - H_{SC}^T H_S^{-1} H_{SC}\right) \Delta C = \epsilon_C - H_{SC}^T H_S^{-1} \epsilon_S \;\;\equiv\;\; A x = b$$
It can be solved without inverting A, since A is a sparse matrix!
• Sparse matrix factorization:
1. LU factorization: A = LU
2. QR factorization: A = QR
3. Cholesky factorization: A = LLᵀ
• Iterative methods:
1. Conjugate gradient
2. Gauss-Seidel
Solve for x by forward and backward substitutions.
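For the Cholesky route, SciPy's triangular solvers make the forward and backward substitutions explicit (the 3×3 SPD system is a made-up example):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Solve A x = b via A = L L^T without ever forming A^{-1}.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])        # symmetric positive definite
b = np.array([1.0, 2.0, 3.0])

L = cholesky(A, lower=True)            # factorization: A = L L^T
y = solve_triangular(L, b, lower=True)        # forward substitution:  L y = b
x = solve_triangular(L.T, y, lower=False)     # backward substitution: L^T x = y
print(x)
```

Triangular solves cost O(nnz(L)) each, which is why keeping the factor sparse (next slide) matters.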
Problem of Fill-In
• Reorder the sparse matrix to minimize fill-in.
• Finding the optimal ordering is an NP-complete problem.
• Approximate solutions:
1. Minimum degree
2. Column approximate minimum degree permutation
3. Reverse Cuthill-McKee
4. …
$$(P^T A P)(P^T x) = P^T b$$
where P is a permutation matrix that reorders A.
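SciPy ships reverse Cuthill-McKee as `scipy.sparse.csgraph.reverse_cuthill_mckee`; the sketch below applies it to a hypothetical "arrow" matrix, where eliminating the dense hub variable first fills in the whole Cholesky factor:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# "Arrow" matrix: dense first row/column, strictly diagonally dominant (SPD).
n = 8
A = np.eye(n) * n
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = float(n)

perm = reverse_cuthill_mckee(csr_matrix(A), symmetric_mode=True)
A_perm = A[np.ix_(perm, perm)]         # P^T A P

nnz = lambda M: int(np.count_nonzero(np.abs(M) > 1e-12))
L_bad = np.linalg.cholesky(A)          # hub eliminated early: dense factor
L_good = np.linalg.cholesky(A_perm)    # hub eliminated late: little fill-in
print(nnz(L_bad), nnz(L_good))         # reordering leaves far fewer nonzeros
```

Dense NumPy Cholesky is used here only to count the resulting nonzeros; a real solver would factorize in sparse form.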
Robust Cost Function
• Non-linear least squares:
$$\operatorname*{argmin}_X \sum_{ij} z_{ij}^T W_{ij} z_{ij}$$
• Maximum log-likelihood solution:
$$\operatorname*{argmin}_X -\ln p(Z \mid X) = \operatorname*{argmin}_X -\ln \prod_{ij} c\, \exp\!\left(-z_{ij}^T W_{ij} z_{ij}\right) = \operatorname*{argmin}_X \sum_{ij} z_{ij}^T W_{ij} z_{ij}$$
• This assumes that:
1. The measurement errors follow a Gaussian distribution.
2. All observations are independent.
Robust Cost Function
• The Gaussian distribution assumption does not hold in the presence of outliers!
• This causes wrong convergence.
Robust Cost Function
$$\operatorname*{argmin}_X \sum_{ij} \rho\!\left(z_{ij}\right) = \operatorname*{argmin}_X \sum_{ij} \nu_{ij}\, z_{ij}^T W_{ij} z_{ij}$$
Robust cost function: the weight W_ij is scaled with ν_ij.
• Similar to iteratively re-weighted least squares.
• The weight is iteratively rescaled with the attenuating factor ν_ij.
• The attenuating factor is computed based on the current error.
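The iteratively re-weighted idea can be sketched with a Cauchy-style attenuating factor ν = 1/(1 + (z/c)²) on a scalar toy problem; the data, scale c, and iteration count are all invented:

```python
import numpy as np

# IRLS with a Cauchy-style attenuating factor: at each iteration the
# weights are recomputed from the current errors, down-weighting outliers.
obs = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 8.0])   # last value is an outlier
c = 1.0                                # Cauchy scale parameter (assumed)

x = obs.mean()                         # plain least squares, pulled by the outlier
for _ in range(50):
    z = obs - x                        # current errors
    nu = 1.0 / (1.0 + (z / c) ** 2)    # attenuating factors
    x = np.sum(nu * obs) / np.sum(nu)  # re-weighted least-squares update
print(x)                               # close to 1.0: the outlier is attenuated
```

The same reweighting applies per residual block inside bundle adjustment, with ν_ij recomputed each iteration.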
Robust Cost Function
[Figure: robust kernel ρ(·) and its derivative ρ′(·). The Gaussian distribution keeps full influence from high errors; the Cauchy distribution gives reduced influence from high errors.]
Robust Cost Function
• Outliers are taken into account in Cauchy!
State-of-the-Art Solvers
• Google Ceres:
– https://code.google.com/p/ceres-solver/
• g2o:
– https://openslam.org/g2o.html
• GTSAM:
– https://collab.cc.gatech.edu/borg/gtsam/
Simultaneous Localization and Mapping (SLAM)
• The robot must estimate its own pose and acquire a map model of its environment.
• Chicken-and-egg problem:
– The map is needed for localization.
– The pose is needed for mapping.
Full SLAM: Problem Definition
$$\operatorname*{argmax}_{X,L} p(X, L \mid Z, U) = \operatorname*{argmax}_{X,L}\; p(x_0) \prod_{i=1}^{M} p(x_i \mid x_{i-1}, u_i) \prod_{k=1}^{K} p(z_k \mid x_{ik}, l_{jk})$$
with robot poses X, map landmarks L, observations Z, and control actions U = {u₁, u₂, u₃, …}.
Simultaneous Localization and Mapping (SLAM)
Negative log-likelihood:
$$\operatorname*{argmax}_{X,L} p(X, L \mid Z, U) = \operatorname*{argmin}_{X,L}\; -\sum_{i=1}^{M} \ln p(x_i \mid x_{i-1}, u_i) - \sum_{k=1}^{K} \ln p(z_k \mid x_{ik}, l_{jk})$$
Likelihoods:
$$p(x_i \mid x_{i-1}, u_i) \propto \exp\!\left\{-\lVert f(x_{i-1}, u_i) - x_i \rVert^2_{\Lambda_i}\right\} \quad \text{(process model)}$$
$$p(z_k \mid x_{ik}, l_{jk}) \propto \exp\!\left\{-\lVert h(x_{ik}, l_{jk}) - z_k \rVert^2_{\Sigma_k}\right\} \quad \text{(measurement model)}$$
Simultaneous Localization and Mapping (SLAM)
Putting the likelihoods into the equation:
$$\operatorname*{argmax}_{X,L} p(X, L \mid Z, U) = \operatorname*{argmin}_{X,L} \sum_{i=1}^{M} \lVert f(x_{i-1}, u_i) - x_i \rVert^2_{\Lambda_i} + \sum_{k=1}^{K} \lVert h(x_{ik}, l_{jk}) - z_k \rVert^2_{\Sigma_k}$$
The minimization can be done with Levenberg-Marquardt (similar to bundle adjustment)!
Simultaneous Localization and Mapping (SLAM)
Normal equations:
$$(J^T W J + \lambda I)\, \Delta X = -J^T W z$$
• The Jacobian is made up of ∂f/∂x, ∂f/∂u, ∂h/∂x, ∂h/∂l.
• The weight W is made up of Λ_i and Σ_k.
• Can be solved with sparse matrix factorization or iterative methods.
Online SLAM: Problem Definition
$$p(x_t, L \mid Z, U) = \int\!\!\int \cdots \!\!\int p(X, L \mid Z, U)\, dx_1\, dx_2 \ldots dx_{t-1}$$
• Estimate the current pose x_t and the full map L.
• Previous poses are marginalized out.
• Inference with:
1. Kalman filtering (EKF SLAM)
2. Particle filtering (FastSLAM)
EKF SLAM
• Assumes the pose x_t and map L are random variables that follow a Gaussian distribution.
• Hence,
$$p(x_t, L \mid Z, U) \sim \mathcal{N}(\mu, \Sigma)$$
with mean μ and error covariance Σ.
EKF SLAM
Prediction:
$$\bar{\mu}_t = f(\mu_{t-1}, u_t) \quad \text{(process model)}$$
$$\bar{\Sigma}_t = F_t \Sigma_{t-1} F_t^T + R_t \quad \text{(error propagation with process noise)}$$
Correction:
$$K_t = \bar{\Sigma}_t H_t^T \left(H_t \bar{\Sigma}_t H_t^T + Q_t\right)^{-1} \quad \text{(Kalman gain)}$$
$$y_t = z_t - h(\bar{\mu}_t) \quad \text{(measurement residual, innovation)}$$
$$\mu_t = \bar{\mu}_t + K_t y_t \quad \text{(update mean)}$$
$$\Sigma_t = (I - K_t H_t)\, \bar{\Sigma}_t \quad \text{(update covariance)}$$
with process Jacobian F_t = ∂f/∂x |_{μ_{t-1}, u_t} and measurement Jacobian H_t = ∂h/∂x |_{μ̄_t}.
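A 1-D EKF-SLAM toy with one landmark makes the prediction/correction loop concrete; the state is [x, l] with measurement z = l − x, and all models, noise values, and the simulation are invented for the sketch:

```python
import numpy as np

# Minimal 1-D EKF-SLAM: state mu = [robot position x, landmark position l].
# Both models are linear here, so F and H are constant Jacobians.
mu = np.array([0.0, 0.0])              # initial landmark guess is poor
Sigma = np.diag([1e-4, 100.0])         # landmark very uncertain
R = np.diag([0.01, 0.0])               # process noise (landmark is static)
Q = 0.05                               # measurement noise variance

F = np.eye(2)                          # process Jacobian df/d(x,l)
H = np.array([[-1.0, 1.0]])            # measurement Jacobian dh/d(x,l)

true_x, true_l, u = 0.0, 5.0, 0.1
rng = np.random.default_rng(1)
for _ in range(100):
    true_x += u
    # Prediction: mu_bar = f(mu, u), Sigma_bar = F Sigma F^T + R
    mu = mu + np.array([u, 0.0])
    Sigma = F @ Sigma @ F.T + R
    # Correction with Kalman gain K
    z = (true_l - true_x) + rng.normal(0.0, np.sqrt(Q))
    y = z - (H @ mu)[0]                # innovation
    S = (H @ Sigma @ H.T)[0, 0] + Q
    K = (Sigma @ H.T).ravel() / S
    mu = mu + K * y                    # update mean
    Sigma = (np.eye(2) - np.outer(K, H)) @ Sigma   # update covariance
print(mu)
```

Note that the measurement only observes the relative quantity l − x, so the absolute pose uncertainty keeps growing while the relative estimate stays tight, a classic SLAM property.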
Structure of Mean and Covariance
$$\mu_t = \begin{bmatrix} x & y & \theta & l_1 & l_2 & \cdots & l_N \end{bmatrix}^T$$
$$\Sigma_t = \begin{bmatrix}
\sigma_{xx} & \sigma_{xy} & \sigma_{x\theta} & \sigma_{x l_1} & \cdots & \sigma_{x l_N} \\
\sigma_{yx} & \sigma_{yy} & \sigma_{y\theta} & \sigma_{y l_1} & \cdots & \sigma_{y l_N} \\
\sigma_{\theta x} & \sigma_{\theta y} & \sigma_{\theta\theta} & \sigma_{\theta l_1} & \cdots & \sigma_{\theta l_N} \\
\sigma_{l_1 x} & \sigma_{l_1 y} & \sigma_{l_1 \theta} & \sigma_{l_1 l_1} & \cdots & \sigma_{l_1 l_N} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
\sigma_{l_N x} & \sigma_{l_N y} & \sigma_{l_N \theta} & \sigma_{l_N l_1} & \cdots & \sigma_{l_N l_N}
\end{bmatrix}$$
• The covariance is a dense matrix that grows with the number of map features!
• The true robot and map states might not follow a unimodal Gaussian distribution!
Particle Filtering: FastSLAM
• Particles represent samples from the posterior distribution p(x_t, L | Z, U).
• The posterior p(x_t, L | Z, U) can be any distribution (it need not be Gaussian).
FastSLAM
Each particle m represents the robot state and the landmark states (mean and covariance):
$$\left\{ x_t^m,\; \mu_{1,t}^m, \Sigma_{1,t}^m,\; \mu_{2,t}^m, \Sigma_{2,t}^m,\; \ldots,\; \mu_{N,t}^m, \Sigma_{N,t}^m \right\}$$
1. Sample the robot state from the process model: $x_t^m \sim p(x_t \mid x_{t-1}^m, u_t)$
2. N Kalman filter measurement updates: $p(L_n \mid x_t^m, z_t)$
3. Weight update: $w_t^m = p(z_t \mid L^m, x_t^m)$
4. Resampling
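The four steps can be sketched for a 1-D world with one landmark, where each particle carries a robot-position sample plus a per-particle Kalman mean/variance for the landmark (the noise values, particle count, and world are an invented toy):

```python
import numpy as np

# FastSLAM-style loop, 1-D, one landmark, measurement z = l - x.
rng = np.random.default_rng(2)
M = 200                                # number of particles
x = np.zeros(M)                        # robot-state samples
mu = np.zeros(M)                       # landmark mean per particle
var = np.full(M, 100.0)                # landmark variance per particle
Rn, Qn = 0.02, 0.05                    # process / measurement noise variances

true_x, true_l, u = 0.0, 5.0, 0.1
for _ in range(60):
    true_x += u
    # 1. Sample the robot state from the process model
    x = x + u + rng.normal(0.0, np.sqrt(Rn), M)
    z = (true_l - true_x) + rng.normal(0.0, np.sqrt(Qn))
    zhat = mu - x                      # predicted measurement per particle
    S = var + Qn                       # innovation variance
    # 2. Per-particle Kalman measurement update of the landmark
    K = var / S
    mu = mu + K * (z - zhat)
    var = (1.0 - K) * var
    # 3. Weight update: likelihood of z under the pre-update prediction
    w = np.exp(-0.5 * (z - zhat) ** 2 / S) / np.sqrt(2.0 * np.pi * S)
    w /= w.sum()
    # 4. Resample particles in proportion to their weights
    idx = rng.choice(M, size=M, p=w)
    x, mu, var = x[idx], mu[idx], var[idx]

print((mu - x).mean())                 # relative landmark estimate
```

With N landmarks, step 2 becomes N independent small Kalman updates per particle, which is what keeps FastSLAM tractable compared with one joint covariance.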
FastSLAM
• Many particles are needed for accurate results.
• Computationally expensive for high state dimensions.
Pose-Graph SLAM
$$\operatorname*{argmin}_X \sum_{ij} \lVert z_{ij} - h(v_i, v_j) \rVert^2_{\Lambda_{ij}}$$
• The 3D structures are removed (fixed).
• The constraints z_ij are relative pose estimates computed from the 3D structures.
• Minimizes loop-closure errors.
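A tiny linear 1-D pose graph shows the idea: three odometry edges say each step is +1.0, but a loop-closure edge says the total is 2.4, and the least-squares solve spreads the drift over the loop (the graph and numbers are invented; v₀ is fixed to gauge the problem):

```python
import numpy as np

# 1-D pose graph: poses v_0..v_3, h(v_i, v_j) = v_j - v_i.
# Each edge (i, j, z) is a relative-pose constraint z ~ v_j - v_i.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.4)]

# Build J and the residual for unknowns v_1..v_3 (v_0 = 0 is fixed).
J = np.zeros((len(edges), 3))
r = np.zeros(len(edges))
v = np.zeros(4)
for row, (i, j, z) in enumerate(edges):
    if i > 0:
        J[row, i - 1] = -1.0
    if j > 0:
        J[row, j - 1] = 1.0
    r[row] = z                         # residual at v = 0 is just z

v[1:] = np.linalg.solve(J.T @ J, J.T @ r)   # normal equations
print(v)                               # drift spread: v = [0, 0.85, 1.7, 2.55]
```

In 2-D/3-D the same structure holds with nonlinear h and repeated Gauss-Newton/LM linearizations, which is what g2o, Ceres, and GTSAM implement.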
Conclusion
• Bundle Adjustment
• Simultaneous Localization and Mapping
– Full SLAM
– Online SLAM
• EKF SLAM
• FastSLAM
• Pose-Graph SLAM