Portable Light Field Imaging: Extended Depth of Field, Aliasing and Superresolution

Paolo Favaro
Joint work with Tom Bishop. This work has been supported by EPSRC grant EP/F023073/1(P).

17 December 2011, NIPS 2011 workshop "Machine Learning Meets Computational Photography"
Imaging sensors

• Traditional cameras are based on the design of the human eye.
• Q: Is this optimal for all vision tasks?
• Other designs in nature:
  - simple eyes: pit eyes, pinholes, spherical lenses, multiple lenses, corneal refraction
  - compound eyes: apposition, neural superposition, refracting superposition, reflecting superposition, parabolic superposition
• These other designs match lower computational capabilities, different survival tasks, and different environment priors.
Computational photography paradigm

Computational photography is a holistic approach to solving imaging problems by jointly designing the camera and the signal-processing algorithms.

[Figure: pipeline from modified optics to a blurred/coded image, then through blind deconvolution to a sharp image.]
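The deconvolution stage can be illustrated with a minimal non-blind Wiener filter in Python. This is a simplification: the talk's pipeline is blind, i.e. the PSF is unknown and estimated jointly, whereas here the PSF is assumed given.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Non-blind Wiener deconvolution in the frequency domain.

    A minimal stand-in for the 'deconvolution' stage of the pipeline;
    psf is assumed origin-centered (apply np.fft.ifftshift to a
    centered kernel first), and snr is an assumed constant.
    """
    H = np.fft.fft2(psf, s=blurred.shape)  # zero-pad PSF to image size
    B = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + 1/SNR)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * B))
```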
Example: Coded aperture

[Figure: an LCD opaque mask on the lens produces a coded image; deconvolution yields the restored image.]
In this presentation

• The (portable) light field camera
• What can one do with it? Obtain 3D from a single image; extend the depth of field; image synthesis (e.g., digital refocusing); 3D image editing
• How does it work? Assembly; camera vs. microlens array; depth estimation; image deblurring
• What are its limits? Sampling; sample repetitions, microlens blur and magnification; coincidence of samples and undersampling
• Comparisons & evaluation
The light field camera: What can one do with it?
Depth estimation

[Figure: light field → 3D reconstruction]
Extended depth of field

[Figure: light field → extended depth of field]
Digital refocusing

[Figure: images refocused at different depths from a single capture.]
Challenge: Limited resolution

[Figure: the captured light field has 4000×4000 pixels, but the digitally refocused image has only 300×300 pixels: a ~178-fold loss.]
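The quoted ~178-fold loss is just the ratio of pixel counts:

$$\frac{4000 \times 4000}{300 \times 300} = \frac{16{,}000{,}000}{90{,}000} \approx 178$$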
Related work: Superresolution

• Lumsdaine and Georgiev (tech. report 2008 and ICCP 2009): magnification and averaging of microlens images
• Pros: computationally efficient and simple
• Cons: no deblurring, no depth estimation
Geometric optics

[Figure: ray diagram from a scene point p through the main lens and the microlens array to the sensor, with the conjugate point p', the optical axis, the main-lens-to-microlens distance v, the microlens-to-sensor distance v', the depth z, and a sensor sample i.]
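For reference, the conjugate distances in the diagram follow the thin-lens equation (standard geometrical optics, not specific to this talk): a point at depth z from a lens of focal length F images at

$$\frac{1}{z} + \frac{1}{v} = \frac{1}{F} \quad\Longrightarrow\quad v = \frac{zF}{z-F}.$$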
Light fields and the light field camera

• The light field is a representation of how light propagates in space.
• Consider a sphere around an object. The object scatters light: illumination comes in, reflected light goes out, and we measure it on the sphere.
• We define an intensity value for each position on the sphere and for each 3D direction.
• The light field can therefore be described by a 4D function: a 2D viewpoint coordinate (u,v) and a 2D incoming-ray coordinate (x,y).
• The light field camera projects a (portion of the) 4D light field onto a 2D sensor array.
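As a concrete illustration of the 4D parametrization, here is a minimal Python sketch of shift-and-add digital refocusing, assuming the light field is stored as an array L[u, v, x, y] of sub-aperture views. This is the classic synthetic-aperture technique, not the superresolution method of this talk.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(L, alpha):
    """Shift-and-add refocusing of a 4D light field L[u, v, x, y].

    alpha is the disparity per unit viewpoint offset (alpha = 0 keeps
    the original focal plane); the sign convention is an assumption.
    """
    U, V, X, Y = L.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the center
            # view, then average; edges are zero-padded (sketch only).
            out += nd_shift(L[u, v], (alpha * (u - uc), alpha * (v - vc)), order=1)
    return out / (U * V)
```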
Camera vs. microlens array

[Figure: the same target seen as a microlens array view and as a camera array view, each labeled with the viewpoint coordinate (u,v) and the spatial coordinate (x,y).]
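A sketch of the rearrangement behind the two views, assuming an idealized plenoptic sensor (axis-aligned microlenses, each covering exactly a Q×Q pixel block, no vignetting): taking the same pixel under every microlens yields one sub-aperture ("camera array") view.

```python
import numpy as np

def raw_to_views(raw, Q):
    """Rearrange an ideal plenoptic raw image into sub-aperture views.

    raw : (Nx*Q, Ny*Q) array; each microlens covers a QxQ pixel block.
    Returns an array of shape (Q, Q, Nx, Ny): views[u, v] is the image
    formed by pixel (u, v) under every microlens.
    """
    Nx, Ny = raw.shape[0] // Q, raw.shape[1] // Q
    blocks = raw.reshape(Nx, Q, Ny, Q)   # split into microlens cells
    return blocks.transpose(1, 3, 0, 2)  # reorder to (u, v, x, y)
```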
Light field superresolution

• Key idea: make use of redundancy in light field images.
• Formally, superresolution can be posed as a space-varying blind deconvolution problem.
• Introduce piecewise smoothness to estimate the depth map of the scene.
• Introduce texture priors to superresolve the scene texture given the depth map.
A Bayesian approach to superresolution

Image formation model, with l the light field image, H the PSF, r the sharp image, and w the noise:

$$l = Hr + w$$

Obtain the MAP estimate:

$$\hat r = \arg\max_r \; p(l \mid r, H_s)\, p(r)$$

The space-varying PSF $H_s$ is built from the main-lens (ML) and microlens (µL) PSFs, $H_s \leftarrow h^{LI} = h^{ML} h^{\mu L}$, with

$$h^{\mu L}_{k(i)}(\theta_{q(i)}, u) = \begin{cases} \dfrac{1}{\pi b^2(u)}, & \big\lVert \theta_{q(i)} - \lambda(u)\,\big(c_{k(i)} - u\big) \big\rVert_2 < b(u) \\[4pt] 0, & \text{otherwise} \end{cases}$$

$$h^{ML}_{k(i)}(\theta_{q(i)}, u) = \begin{cases} \dfrac{d^2}{4\pi\beta^2}, & \Big\lVert \theta_{q(i)} \pm \dfrac{2b(u)}{d}\,\big(c_{k(i)} - u\big) \Big\rVert_2 < \dfrac{2\beta}{d} \\[4pt] 0, & \text{otherwise} \end{cases}$$
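To make the MAP step concrete: with Gaussian noise and a quadratic (Gaussian) prior it reduces to regularized least squares, solvable by conjugate gradient. A minimal sketch with a generic forward operator H; the actual method uses the microlens PSFs above and non-Gaussian texture priors, so this is only the simplest instance.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def map_restore(l, H, Ht, n, lam=1e-2):
    """MAP restoration under l = H r + w with a quadratic (ridge) prior.

    l   : observed light field image, flattened
    H   : function r -> H r (forward model, e.g. the microlens PSFs)
    Ht  : function y -> H^T y (adjoint)
    n   : number of unknowns in r
    lam : prior weight (assumed; tuned to the noise level)

    With a Gaussian prior, the MAP problem is exactly the normal
    equation (H^T H + lam I) r = H^T l, solved here by CG.
    """
    A = LinearOperator((n, n), matvec=lambda r: Ht(H(r)) + lam * r)
    r, info = cg(A, Ht(l), maxiter=200)
    return r
```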
Depth reconstruction via stereo matching

• Energy minimization: the fidelity term matches all pairs of views via 2D warps; the regularizer is total variation.
• Minimize via the linearized Euler-Lagrange equation.

$$E_{\text{data}}(s) = \sum_{x,\,u,\,u'} \Phi\Big( V_u\big(x - s(x)\,\Delta u\big) - V_{u'}\big(x - s(x)\,\Delta u'\big) \Big)$$

where $\Phi$ is a robust norm, $V_u$ is the view from the vantage point $u$, $s$ is the depth/disparity map, and $\Delta u$ is the 2D shift in view centers.

Aliasing needs to be taken into account!
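A minimal sketch of the data term for a single pair of views, with the Charbonnier function standing in for the robust norm Φ (an assumption; the slide leaves Φ generic) and bilinear warping; the full method sums over all view pairs and adds the total-variation regularizer on s.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def charbonnier(x, eps=1e-3):
    # A common smooth robust norm; one possible choice for Phi.
    return np.sqrt(x * x + eps * eps)

def pair_data_term(Vu, Vup, s, du, dup):
    """E_data for one pair of views with view-center offsets du, dup.

    s        : disparity map, same shape as the views
    du, dup  : 2D offsets of the two view centers (Delta u, Delta u')
    Each view is warped by s times its center offset, then compared
    robustly (bilinear interpolation, zero outside the image).
    """
    ys, xs = np.mgrid[0:s.shape[0], 0:s.shape[1]].astype(float)
    w1 = map_coordinates(Vu,  [ys - s * du[0],  xs - s * du[1]],  order=1)
    w2 = map_coordinates(Vup, [ys - s * dup[0], xs - s * dup[1]], order=1)
    return charbonnier(w1 - w2).sum()
```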
Boundary texture smoothness constraint

[Figure: two high-resolution views under a wrong depth assumption; partial border gradients at microlens cell boundaries are propagated via linear interpolation, with the pixels used for the x- and y-gradients marked.]
Depth superresolution

[Figure: low-res view, hi-res view under a wrong depth, and hi-res view from this method; low-res and hi-res depth maps.]
Experiments

[Figure: camera array view, a single view, and the recovered depth map.]
Superresolution

[Figure: restorations at microlens (µlens) resolution, at full sensor resolution with Georgiev's method, and at full sensor resolution with this work. Annotation: "inverted depth of field!"]
The light field camera: What are its limits?

[Plot: blur radius (pixels) vs. depth (mm, log scale from 10² to 10⁵ mm) for the LF camera and a regular camera, with the plane in focus at 635 mm; the maximum blur disc of the LF camera is capped by the microlens diameter. Settings: F = 80 mm, f/3.2.]
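The curves can be reproduced qualitatively from thin-lens geometry; a sketch using the slide's settings (F = 80 mm, f/3.2, in-focus plane at 635 mm), with the pixel pitch and microlens diameter assumed since the slide does not state them.

```python
import numpy as np

F, N = 80.0, 3.2          # focal length (mm) and f-number, from the slide
z0 = 635.0                # in-focus plane (mm), from the slide
pix = 0.009               # pixel pitch (mm) -- assumed
mu = 0.125                # microlens diameter (mm) -- assumed

A = F / N                                 # aperture diameter
v0 = z0 * F / (z0 - F)                    # sensor distance (thin lens)
z = np.logspace(2, 5, 200)                # depths from 10^2 to 10^5 mm
v = z * F / (z - F)                       # conjugate distance of each depth
blur = A * np.abs(v - v0) / v             # geometric blur diameter on sensor
regular_px = blur / 2 / pix               # blur radius (px), regular camera
lf_px = np.minimum(regular_px, mu / 2 / pix)  # LF camera: capped by microlens
```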
Coordinate system & views

[Figure: the geometric-optics ray diagram again, now with the object planes and their conjugate planes marked (microlens scale exaggerated); the analysis considers only the conjugate domain.]
Repetitions

• Count how many microlenses fall under the main-lens blur B.

[Figure: the blur disc B at the microlens plane spans several microlenses of pitch µ, each producing a sample θq of the same scene point.]
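The counting argument in two lines, with illustrative (assumed) numbers: the repetitions per axis are roughly the blur diameter over the microlens pitch.

```python
B = 0.50    # main-lens blur diameter at the microlens plane (mm) -- assumed
mu = 0.125  # microlens pitch (mm) -- assumed
print(f"~{B/mu:.0f} microlenses per axis see the same point "
      f"(~{(B/mu)**2:.0f} in 2D)")
```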
Coincidence of samples and undersampling

[Figure: samples θ1, θ2, …, θQ collected under microlenses q = 1, 2, …, Q of pitch µ; as the conjugate plane moves, the sample positions slide across the microlenses.]

At these planes some microlenses share identical samples.
Coincidence of samples and undersampling

Image reconstruction (experimental validation), measured by the improvement in SNR:

$$\mathrm{ISNR} = 10 \log_{10} \frac{\lVert r - r_0 \rVert^2}{\lVert r - \hat r \rVert^2}$$

[Left plot: ISNR (dB) vs. depth level, comparing this work against Georgiev's method at several observation-noise levels (σw = 0, 3e−3, 6e−3, 1.2e−2, 2.5e−2, 5e−2, 1e−1); performance dips at the depth planes with coincidence of samples.]

[Right plot: average L2 error per pixel vs. depth level, comparing the light field camera, a traditional camera, a coded aperture camera (Zhou & Nayar mask), and the low-res reconstruction, with curves labeled 2×, 4×, 8×, 16×, 32× and noise levels w = 0, 1.2e−2, 1e−1.]
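A direct transcription of the ISNR measure used in the plots (squared L2 norms and base-10 logarithm assumed), with r the ground truth, r0 the initial estimate, and r_hat the restored image:

```python
import numpy as np

def isnr(r, r0, r_hat):
    """Improvement in SNR (dB): positive when r_hat is closer to r than r0."""
    return 10 * np.log10(np.sum((r - r0) ** 2) / np.sum((r - r_hat) ** 2))
```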
Comparison with other EDOF systems

[Figure (from the companion TPAMI paper): left, restoration ISNR vs. depth for our method (solid) and the method of Lumsdaine & Georgiev [6] (dashed) on the simulated LF camera, using our camera settings and the Brodatz "Bark" texture, input intensity range 0–1, at several noise levels σw; depths where λ < 1 are not restored, since parts of those planes are not sampled at all. Right, DOF-extension comparison between the LF camera (thick lines), a regular camera (thin lines), and a coded aperture camera (dotted lines); crosses mark the upsampled integral-refocusing result on the same LF data.]

[Figure: resolution tests on part of a test chart with additive noise σ = 1.2 × 10⁻², at depths 60, 72, 80, 88. Columns: simulated light field image; method of [6] (used as initialization); method of [6] deblurred (for comparison only); our method; simulated and deconvolved coded aperture images; simulated and deconvolved focal sweep images (mid-depth PSF). The plenoptic camera outperforms the coded aperture and focal sweep systems in regularity and clarity away from the main-lens plane in focus, and the full observation model recovers more detail than deblurring the initialization alone.]
Conclusions

• We have analyzed the light field camera: its sampling patterns and its limits.
• We have introduced algorithms for depth and image estimation from a single light field image, based on depth and image priors.
• Q: What is the tradeoff between depth identification and image reconstruction?