
Computational Cameras

Rahul Raguram
COMP 790-090

What is a computational camera?

Traditional camera: 3D scene → camera optics → camera sensor → final image

Computational camera: 3D scene → modified optics → camera sensor → coded image → compute → image + additional information

Computational cameras - examples

• Catadioptric cameras *
• HDR imaging with assorted pixels *
• Multiview radial cameras *
• Time-of-flight cameras #

* Source: S. K. Nayar, 2006
# Source: L. Guan and M. Pollefeys, 2008

The aperture

• Diameter of the lens opening (controlled by diaphragm)

• Expressed as a fraction of focal length (f-number)
  – f/2.0 with a 50mm lens: aperture is 25mm
  – f/2.0 with a 100mm lens: aperture is 50mm

• Typical f-numbers: f/1.4, f/2, f/2.8, f/4, f/5.6, f/8…
  – see a pattern?

Glossographia Anglicana Nova, 1707
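A quick numeric check of the f-number relation (aperture diameter = focal length / f-number) and of the pattern in the standard stops; a small sketch, not from the slides:

```python
def aperture_diameter(focal_length_mm, f_number):
    """Aperture diameter implied by an f-number: N = f / D, so D = f / N."""
    return focal_length_mm / f_number

print(aperture_diameter(50, 2.0))    # 25.0 mm, as in the slide
print(aperture_diameter(100, 2.0))   # 50.0 mm

# The standard stops grow by roughly sqrt(2), so the aperture *area* halves each stop:
stops = [1.4, 2, 2.8, 4, 5.6, 8]
print([round(b / a, 2) for a, b in zip(stops, stops[1:])])  # each ratio is ~1.41
```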

Varying the aperture

Small aperture – large depth of field

Varying the aperture

Large aperture – small depth of field

Bokeh (derived from Japanese boke ぼけ, a noun form of bokeru ぼける, "become blurred or fuzzy")

Multi-Aperture Photography

Paul Green – MIT CSAIL
Wenyang Sun – MERL
Wojciech Matusik – MERL
Frédo Durand – MIT CSAIL

Slides by Green et al.

Motivation

[Figure: portrait – large aperture, shallow depth of field; landscape – small aperture, large depth of field]

Depth of Field Control

[Figure: shallow depth of field vs. large depth of field]

Depth and Defocus Blur

[Diagram: subject, lens, plane of focus, sensor]

Rays from a point in focus converge to a single pixel; the defocus blur (circle of confusion) depends on the distance from the plane of focus.

Defocus Blur & Aperture

[Diagram: subject, lens, aperture, plane of focus, sensor]

The size of the defocus blur (circle of confusion) also depends on the aperture size.
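These two diagrams boil down to thin-lens geometry: the circle of confusion grows with both the aperture diameter and the subject's distance from the plane of focus. A minimal sketch of that relationship (function name and units are illustrative assumptions, not from the slides):

```python
def blur_circle_diameter(f, N, s_focus, s_subject):
    """Diameter of the circle of confusion on the sensor (same units as f).

    f         -- focal length
    N         -- f-number (aperture diameter A = f / N)
    s_focus   -- distance to the plane of focus
    s_subject -- distance to the subject point
    """
    A = f / N                                    # aperture diameter
    v_focus = s_focus * f / (s_focus - f)        # sensor distance for the plane of focus
    v_subject = s_subject * f / (s_subject - f)  # image distance for the subject point
    # similar triangles: the converging cone is cut at the sensor plane
    return A * abs(v_subject - v_focus) / v_subject

# e.g. a 50mm f/2.0 lens focused at 2m, subject at 4m: roughly a 0.32mm blur circle
# blur_circle_diameter(50, 2.0, 2000, 4000)
```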

Goals

Aperture size is a critical parameter for photographers.

■ Post-exposure depth of field control
■ Extrapolate shallow depth of field beyond the physical aperture

Outline

Multi-Aperture Camera
■ New camera design
■ Capture multiple aperture settings simultaneously

Applications
■ Depth of field control
■ Depth of field extrapolation
■ (Limited) refocusing

Related Work

Computational Cameras
■ Plenoptic Cameras – Adelson and Wang '92; Ng et al. '05; Georgiev et al. '06
■ Split-Aperture Camera – Aggarwal and Ahuja '04
■ Optical Splitting Trees – McGuire et al. '07
■ Coded Aperture – Levin et al. '07; Veeraraghavan et al. '07
■ Wavefront Coding – Dowski and Cathey '95

Depth from Defocus
■ Pentland '87

Plenoptic Cameras

Capture the 4D light field
■ 2D spatial (x, y)
■ 2D angular (u, v – aperture)

Trade resolution for flexibility after capture
■ Refocusing
■ Depth of field control
■ Improved noise characteristics

[Diagram: subject, lens aperture (u, v), lenslet array, sensor (x, y)]
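The refocusing flexibility comes from integrating the 4D light field over the aperture with a depth-dependent shift. A minimal shift-and-add sketch, assuming the light field is stored as a dense array L[u, v, y, x] (the layout and the `shift` parameter are illustrative, not from the slides):

```python
import numpy as np

def refocus(lightfield, shift):
    """Shift-and-add refocusing of a 4D light field L[u, v, y, x].

    Each angular sample (u, v) is translated proportionally to its offset
    from the aperture center and the views are averaged; varying `shift`
    moves the synthetic plane of focus (shift = 0 keeps the captured focus).
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * shift))
            dx = int(round((v - V // 2) * shift))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```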

1D vs. 2D Aperture Sampling

[Diagrams: aperture sampled on a 2D (u, v) grid vs. as concentric 1D "rings"]

A 2D grid over the aperture requires many samples (45 in the example), whereas 1D "ring" sampling covers the same range of aperture sizes with only 4.

Optical Design Principles

Goals
■ Post-exposure depth of field control
■ Extrapolate shallow depth of field
■ (Limited) refocusing

Design
■ 1D sampling
■ No beamsplitters
■ Single sensor
■ Removable

3D sampling
■ 2D spatial
■ 1D aperture size
■ One image for each "ring"

[Diagram: aperture, sensor]

Aperture Splitting

[Diagram: incoming light split by tilted mirrors onto focusing lenses and a single sensor]

[Diagram: photographic lens, aperture plane, relay system, aperture-splitting optics, new aperture plane]

The splitting optics would ideally sit at the aperture plane, but that is not physically possible. Solution: relay optics create a virtual aperture plane.

Optical Prototype

[Photos: main lens, relay optics, tilted mirrors (close-up), lenses, SLR camera]

Sample Data

Raw data from our camera. Ideally the images would be rings; the gaps are from occlusion.

[Figure: point spread function occlusion – inner, ring 1, ring 2, outer, combined]

Outline

Multi-Aperture Camera
■ New camera design
■ Capture multiple aperture settings simultaneously

Applications
■ Depth of field control
■ Depth of field extrapolation
■ Refocusing

DOF Navigation

[Figure: captured images I_0, I_1, I_2, I_3]

Approximate defocus blur as convolution.

DOF Extrapolation?

[Figure: captured images I_0, I_1, I_2, I_3; extrapolated image I_E?]

I_n = I_0 ∗ K(σ_n)

K(σ_n) – circular aperture blurring kernel; σ_n depends on depth and aperture size.

[Plot: blur size vs. aperture diameter, with the largest physical aperture marked]
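The convolution model I_n = I_0 ∗ K(σ_n) can be made concrete with a circular ("pillbox") kernel. A minimal sketch assuming a single depth, i.e. spatially invariant blur (real scenes need a per-region blur size):

```python
import numpy as np
from scipy.signal import fftconvolve

def disk_kernel(radius_px):
    """Normalized circular ("pillbox") blur kernel K(sigma_n) of the given pixel radius."""
    r = int(np.ceil(radius_px))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x**2 + y**2 <= radius_px**2).astype(float)
    return k / k.sum()

def simulate_aperture(I0, radius_px):
    """I_n = I_0 * K(sigma_n): blur the smallest-aperture image to approximate
    the image captured with a larger aperture (single-depth assumption)."""
    return fftconvolve(I0, disk_kernel(radius_px), mode="same")
```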

DOF Extrapolation Roadmap

capture (I_0, I_1, I_2, I_3) → estimate blur → fit model → extrapolate blur → I_E
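The "fit model" and "extrapolate" steps amount to fitting how the estimated blur grows with aperture diameter and evaluating that fit beyond the largest physical aperture. A per-pixel sketch under a linear blur-vs-aperture assumption (names are illustrative, not the authors' code):

```python
import numpy as np

def extrapolate_blur(aperture_diams, blur_sizes, virtual_diam):
    """Fit blur size as a function of aperture diameter from the captured
    ring images, then evaluate the fit at a larger, virtual aperture.

    aperture_diams -- aperture diameters of the captured images, shape (n,)
    blur_sizes     -- estimated blur size at one pixel for each capture, shape (n,)
    virtual_diam   -- virtual aperture diameter to extrapolate to
    """
    slope, intercept = np.polyfit(aperture_diams, blur_sizes, 1)  # linear model
    return slope * virtual_diam + intercept
```

The extrapolated blur can then drive a synthetic defocus (e.g. the disk-kernel convolution sketched above) to render a shallower depth of field than the physical aperture allows.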

Summary

■ Multi-aperture camera
  ■ 1D sampling of the aperture
  ■ Removable
■ Post-exposure depth of field control
■ Depth of field extrapolation

Image and Depth from a Conventional Camera with a Coded Aperture

Anat Levin, Rob Fergus, Frédo Durand, William Freeman

MIT CSAIL

Slides by Levin et al.

Single input image:

Output #1: Depth map
Output #2: All-focused image

Lens and defocus

[Diagrams: object, lens, lens' aperture, focal plane, camera sensor]

A point light source on the focal plane images to a single point: the point spread function is a point. A point light source off the focal plane images to a disk (the image of a defocused point light source), and the disk grows with the distance from the focal plane.

Depth and defocus

Depth from defocus: infer depth by analyzing the local scale of defocus blur.

[Figure: out of focus vs. in focus]

Challenges

• Hard to discriminate a smooth scene from defocus blur

• Hard to undo defocus blur

[Figures: "Out of focus?" – smooth scene or defocus blur?; input vs. ringing with a conventional deblurring algorithm]

Key contributions

• Exploit prior on natural images
  - Improve deconvolution
  - Improve depth discrimination
• Coded aperture (mask inside lens)
  - Make defocus patterns different from natural images and easier to discriminate

[Figure: natural vs. unnatural]

Defocus as local convolution

y = f_k ⊗ x

y – local sub-window of the input defocused image
f_k – calibrated blur kernel at depth k (k = 1, 2, 3, …)
x – sharp sub-window

[Figure: input defocused image; calibrated blur kernels at different depths]

Overview

Try deconvolving local input windows with filters at different scales: the correct scale, a smaller scale, and a larger scale.

Somehow: select the best scale.

Challenges

• Hard to identify the correct scale (deconvolutions at the correct, smaller, and larger scales can all look plausible)
• Hard to deconvolve even when the kernel is known

[Figure: input vs. ringing with the traditional Richardson-Lucy deconvolution algorithm]

f ⊗ x = y

Deconvolution is ill posed

f ⊗ x = y

Many different sharp images x (Solution 1, Solution 2, …) convolve with f to the same observed image y.

Idea 1: Natural image prior

What makes images special? Natural images have sparse gradients.

[Figure: image gradient histograms, natural vs. unnatural]

So: put a penalty on gradients.

Deconvolution with prior

x = argmin_x ‖f ⊗ x − y‖² + λ Σ_i ρ(∇x_i)

(convolution error + derivatives prior)

[Figure: two candidate solutions with equal convolution error; the derivatives prior is high for one and low for the other]
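A compact way to read the objective: a data term ‖f ⊗ x − y‖² plus a penalty λ Σ ρ(∇x) on image gradients. A minimal gradient-descent sketch for the Gaussian-prior case ρ(∇x) = |∇x|² (illustrative only; the sparse |∇x|^0.8 case is typically handled with a re-weighted least-squares solver):

```python
import numpy as np
from scipy.signal import fftconvolve

def deconv_gaussian_prior(y, f, lam=2e-3, iters=200, step=1.0):
    """Minimize ||f (*) x - y||^2 + lam * ||grad x||^2 by gradient descent.

    y -- observed (blurred) image, f -- known blur kernel (sums to 1).
    """
    x = y.copy()
    f_flip = f[::-1, ::-1]            # adjoint of convolution with f
    dx = np.array([[1.0, -1.0]])      # horizontal derivative filter
    dy = dx.T                         # vertical derivative filter
    for _ in range(iters):
        resid = fftconvolve(x, f, mode="same") - y
        grad = fftconvolve(resid, f_flip, mode="same")   # data-term gradient
        for d in (dx, dy):                               # prior-term gradient
            grad += lam * fftconvolve(fftconvolve(x, d, mode="same"),
                                      d[::-1, ::-1], mode="same")
        x -= step * grad
    return x
```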

Comparing deconvolution algorithms

[Figures: input; Richardson-Lucy; Gaussian prior; sparse prior]

• Richardson-Lucy
• Gaussian prior: ρ(∇x) = |∇x|² – "spreads" gradients
• Sparse prior: ρ(∇x) = |∇x|^0.8 – "localizes" gradients

(Non-blind) deconvolution code available online: http://groups.csail.mit.edu/graphics/CodedAperture/

Recall: Overview

Try deconvolving local input windows with filters at different scales (correct, smaller, larger).

Challenge: the smaller scale is not so different from the correct one.

Somehow: select the best scale.

Idea 2: Coded Aperture

• Mask (code) in the aperture plane – make defocus patterns different from natural images and easier to discriminate

[Figure: conventional aperture vs. our coded aperture]

Solution: lens with occluder

[Diagrams: object, lens with coded aperture, aperture pattern, focal plane, camera sensor]

With the occluder in the lens, the image of a defocused point light source (the point spread function) takes on the shape of the aperture pattern, scaled with the distance from the focal plane.

Why coded?

The coded aperture reduces uncertainty in scale identification.

[Figure: conventional vs. coded point spread functions at the correct, smaller, and larger scales]

Depth results

[Figure: input; local depth estimation; regularized depth]

Regularizing depth estimation

x = argmin_x ‖f ⊗ x − y‖² + λ Σ_i ρ(∇x_i)

(convolution error + derivatives prior)

Try deblurring with 10 different aperture scales; keep the minimal-error scale in each local window, plus regularization.
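A sketch of the per-window scale selection described above: deblur with each candidate kernel, measure how well re-blurring explains the observed window, and keep the best scale (the names and the choice of error measure are illustrative; the paper additionally regularizes the resulting depth map):

```python
import numpy as np
from scipy.signal import fftconvolve

def best_scale(window, kernels, deconv):
    """Pick the blur-kernel scale k that best explains a local window.

    window  -- local sub-window y of the defocused input
    kernels -- list of calibrated blur kernels f_k, one per candidate depth
    deconv  -- deconvolution routine, e.g. deconv_gaussian_prior above
    """
    errors = []
    for f_k in kernels:
        x_k = deconv(window, f_k)                            # deblur at this scale
        resid = fftconvolve(x_k, f_k, mode="same") - window  # re-blur and compare
        errors.append(np.sum(resid**2))
    return int(np.argmin(errors))                            # depth index k
```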

[Figure: input, local depth estimation, regularized depth; depth legend 200–305]

Sometimes, manual intervention is needed.

[Figure: input, local depth estimation, regularized depth, and the result after user corrections; depth legend 200–305]

All-focused results

[Figures: original image vs. all-focused (deconvolved) image, with close-ups; comparison against naïve sharpening]

Comparison – conventional aperture result: ringing due to wrong scale estimation

Comparison – coded aperture result

Application: Digital refocusing from a single image

Coded aperture: pros and cons

+ Image AND depth in a single shot
+ No loss of image resolution
+ Simple modification to the lens
- Depth is coarse (but depth is a pure bonus)
- Loses some light (but deconvolution increases depth of field)
- Unable to get depth in untextured areas; may need manual corrections

Deconvolution code available: http://groups.csail.mit.edu/graphics/CodedAperture/

50mm f/1.8: $79.95
Cardboard: $1
Tape: $1
Depth acquisition: priceless
