High-Accuracy Stereo Depth Maps Using Structured Light by: D. Scharstein & R. Szeliski

Presented by: Ali Agha, March 02, 2009

Outline
- Stereo vision overview
- Motivation & contribution
- Structured light & method overview
- Related work
- Disparity computation
- Results
- Conclusion
- Future work

STEREO VISION
When 3D information of a scene is needed

Depth from Disparity

disparity: d_RL = x_R - x_L

d_RL / f = b / z
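A small worked example of the slide's relation, as an illustrative sketch: the focal length, baseline, and pixel coordinates below are made-up values, not numbers from the paper.

```python
# Worked example of d_RL / f = b / z with illustrative (assumed) numbers.
f = 700.0                          # focal length in pixels (assumed)
b = 0.16                           # camera baseline in metres (assumed)
x_left, x_right = 380.0, 412.0     # matching pixel columns (assumed)

d_rl = x_right - x_left            # disparity as defined on the slide
z = f * b / d_rl                   # depth follows from d_RL / f = b / z
print(f"disparity = {d_rl:.1f} px  ->  depth z = {z:.3f} m")
```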

Motivation of the presented paper
"A taxonomy and evaluation of dense two-frame stereo correspondence algorithms." Intl. J. Comp. Vis., 2002.
http://www.middlebury.edu/stereo/

Existing test image pairs: Tsukuba, Venus

Need for more challenging scenes
How to obtain accurate ground-truth information?

Contributions of this work

A method for acquiring high-complexity stereo image pairs with pixel-accurate correspondence information.

Does not require the calibration of the light sources

Higher resolution compared with range sensors

Process Pipeline
This method uses structured light and consists of the following stages:
1. Acquire all desired views under all illuminations.
2. Rectify the images.
3. Decode the light patterns at each pixel to compute correspondences.
4. Compute the view and illumination disparities and combine them.

Structured light
Structured-light techniques rely on projecting one or more special light patterns onto a scene, usually in order to directly acquire a range map of the scene.

http://en.wikipedia.org/wiki/File:1-stripesx7.svg

Structured light
A pair of cameras and one or more light projectors are used.

http://en.wikipedia.org/wiki/File:1-stripesx7.svg

Related Work in Decoding light patterns

J. Batlle, E. Mouaddib, and J. Salvi. Recent progress in coded structured light as a technique to solve the correspondence problem: a survey. Pat. Recog., 31(7):963–982, 1998.

Related work-CODED STRUCTURED LIGHT TECHNIQUES

Posdamer & Altschuler 1981-82-87

Related work-CODED STRUCTURED LIGHT TECHNIQUES

Inokuchi, Sato and Matsuda 1984

8-bit temporally Gray-coded pattern projection

8-bit temporally binary-coded pattern projection

Gray Code
Using such binary images requires log2(n) patterns to distinguish among n locations.
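A minimal sketch of how a temporally Gray-coded pattern can encode column positions; the binary-to-Gray conversion is the standard one, and the column count and helper names are illustrative, not taken from the paper.

```python
import math

def gray_code(n: int) -> int:
    """Standard binary-to-Gray conversion: adjacent codes differ in one bit."""
    return n ^ (n >> 1)

n_columns = 1024                                  # projector columns to distinguish (assumed)
n_patterns = math.ceil(math.log2(n_columns))      # log2(n) stripe patterns are needed

def stripe_value(column: int, k: int) -> int:
    """Value of column `column` in the k-th projected bit-plane (MSB first)."""
    return (gray_code(column) >> (n_patterns - 1 - k)) & 1

# Example: the 10 bit-plane values projected onto column 300.
print([stripe_value(300, k) for k in range(n_patterns)])
```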

Decoding the light patterns
One option is to threshold each pixel against the average of the all-white and all-black images.
In practice, the only reliable way is to project both the code pattern and its inverse, and compare the two images at each pixel.
For surfaces with widely varying reflection properties, two different exposure times (0.5 and 0.1 sec.) are used.
If the largest such difference is still below a threshold, the pixel is labeled "unknown".
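A sketch of this per-pixel decoding rule: each bit is decided by comparing the image under a pattern with the image under its inverse, and pixels whose largest pattern/inverse difference never exceeds a threshold are marked unknown. The array layout, stacking order, and threshold value are assumptions for illustration.

```python
import numpy as np

def decode_codes(pattern_imgs, inverse_imgs, threshold=16.0):
    """pattern_imgs / inverse_imgs: lists of HxW grayscale images, one pair per
    projected bit plane. Returns (codes, unknown_mask). Illustrative sketch."""
    h, w = pattern_imgs[0].shape
    codes = np.zeros((h, w), dtype=np.int32)
    max_diff = np.zeros((h, w), dtype=np.float64)

    for img, inv in zip(pattern_imgs, inverse_imgs):
        diff = img.astype(np.float64) - inv.astype(np.float64)
        bit = (diff > 0).astype(np.int32)       # pattern brighter than its inverse -> bit 1
        codes = (codes << 1) | bit
        max_diff = np.maximum(max_diff, np.abs(diff))

    unknown = max_diff < threshold              # never a confident decision -> "unknown"
    return codes, unknown
```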

Disparity computation
View disparities
Illumination disparities

Definitions:
- Views: the images taken by the cameras
- Illuminations: the structured light patterns projected onto the scene

View disparities
Assuming rectified views leads to a simple 1D search along each scanline.
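A minimal sketch of this 1D correspondence search, using code arrays such as those produced by a decoding step like the one sketched earlier; it keeps only exact code matches and ignores the interpolation and ambiguity issues listed on the next slide. Disparities are stored as signed offsets (the match for left pixel x is assumed at x + d), matching the consistency equation quoted later.

```python
import numpy as np

def view_disparities(codes_left, codes_right, unknown_left, unknown_right, max_disp=128):
    """Brute-force 1D search along each rectified scanline (illustrative sketch)."""
    h, w = codes_left.shape
    d_lr = np.full((h, w), np.nan)
    for y in range(h):
        for x in range(w):
            if unknown_left[y, x]:
                continue
            for d in range(-max_disp, max_disp + 1):   # signed disparity, match assumed at x + d
                xr = x + d
                if xr < 0 or xr >= w or unknown_right[y, xr]:
                    continue
                if codes_left[y, x] == codes_right[y, xr]:   # exact code match only
                    d_lr[y, x] = d
                    break
    return d_lr
```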

Practical issues:
- Occlusion
- Unknown code values (due to shadows or reflections)
- A perfect matching code value may not exist (interpolation errors)
- Several perfect matching code values may exist (limited resolution)

View disparities
The first problem (partial occlusion) is unavoidable.
The number of unknown code values can be reduced by using more than one illumination source.

As a final consistency check, we establish disparities dLR and dRL independently and cross-check for consistency.

View disparities
[Figure: scene under illumination; computed view disparities]

Illumination disparities
Disparity between the cameras and the illumination sources.

The difference in our case is that we can register these illumination disparities with our rectified view disparities dLR without the need to explicitly calibrate the illumination sources (video projectors).

Illumination disparities
Relationship between the left view L and illumination source 0:

Each pixel whose view disparity has been established can be considered a (homogeneous) 3D scene point S=[x,y,d,1] with projective depth d = dLR(x, y).

The pixel's illumination disparity (u0L, v0L) satisfies P = M0L S (up to scale), where P = [u0L, v0L, 1].

Practical Issues
A small number of pixels with large disparity errors can strongly affect the least-squares fit.

Outliers are detected by iterating the above process.

Only those pixels with low residual errors are selected as input to the next iteration.
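A sketch of how the projection matrix M0L could be estimated from pixels with known view disparity and decoded illumination coordinates, following P = M0L S above: a standard DLT-style least-squares fit, repeated while dropping high-residual pixels as the slide describes. The solver, iteration count, and inlier fraction are assumptions.

```python
import numpy as np

def fit_projection_matrix(S, uv, n_iters=3, keep_frac=0.8):
    """S:  (N, 4) homogeneous points [x, y, d, 1] built from the view disparities.
    uv: (N, 2) decoded illumination coordinates (u0L, v0L) for the same pixels.
    Returns a 3x4 matrix M with uv ~ project(M @ S). Illustrative DLT sketch."""
    idx = np.arange(len(S))
    M = None
    for _ in range(n_iters):
        A = []
        for s, (u, v) in zip(S[idx], uv[idx]):
            A.append(np.concatenate([s, np.zeros(4), -u * s]))
            A.append(np.concatenate([np.zeros(4), s, -v * s]))
        _, _, vt = np.linalg.svd(np.asarray(A))
        M = vt[-1].reshape(3, 4)                    # right singular vector of the smallest singular value

        proj = (M @ S.T).T                          # re-project all points
        proj = proj[:, :2] / proj[:, 2:3]
        resid = np.linalg.norm(proj - uv, axis=1)
        idx = np.argsort(resid)[: int(keep_frac * len(S))]   # keep low-residual pixels for the next pass
    return M
```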

Illumination disparities
Given the projection matrix M0L, we can now solve the equation for dLR at all pixels.
Note that these disparities are available for all points illuminated by source 0, even those that are not visible from the right camera.

Combining the disparity estimates
The remaining task is to combine the 2N + 2 disparity maps.
Combined maps are created for each of L and R separately:
- Whenever there is a majority of values within a close range, we use their average (see the sketch below).
- Otherwise, the pixel is labeled unknown.
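A sketch of one way to realize this voting rule per pixel: values that agree within a tolerance of the per-pixel median are averaged if they form a majority, otherwise the pixel stays unknown. Unknown entries are encoded as NaN; the tolerance and vote rule are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def combine_disparity_maps(maps, tol=0.5, min_votes=None):
    """maps: list of HxW disparity maps for one view (NaN where unknown)."""
    stack = np.stack(maps)                        # (K, H, W)
    valid = ~np.isnan(stack)
    if min_votes is None:
        min_votes = stack.shape[0] // 2 + 1       # strict majority of all maps

    med = np.nanmedian(stack, axis=0)             # robust per-pixel reference value
    close = valid & (np.abs(stack - med) <= tol)  # values within close range of the reference
    votes = close.sum(axis=0)

    combined = np.nanmean(np.where(close, stack, np.nan), axis=0)
    combined[votes < min_votes] = np.nan          # no consistent majority -> unknown
    return combined
```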

The L and R maps are checked for consistency: for unoccluded pixels,
dLR(x, y) = -dRL(x + dLR(x, y), y).
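A sketch of this final left/right cross-check, applying the quoted relation directly; it assumes the same signed-disparity convention as the equation and marks inconsistent pixels as unknown. The tolerance is an assumption.

```python
import numpy as np

def cross_check(d_lr, d_rl, tol=1.0):
    """Invalidate left-view disparities violating d_LR(x,y) = -d_RL(x + d_LR(x,y), y)."""
    h, w = d_lr.shape
    out = d_lr.copy()
    for y in range(h):
        for x in range(w):
            d = d_lr[y, x]
            if np.isnan(d):
                continue
            xr = int(round(x + d))                  # corresponding pixel in the right view
            if xr < 0 or xr >= w or np.isnan(d_rl[y, xr]) or abs(d + d_rl[y, xr]) > tol:
                out[y, x] = np.nan                  # inconsistent -> unknown / occluded
    return out
```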

Combined disparity
Most stereo implementations work with much smaller image sizes, so we downsample the images and disparity maps to quarter size (460 × 384).

Note that for the downsampled images, we now have disparities with quarter-pixel accuracy.
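A sketch of such a quarter-size reduction: averaging each 4x4 block of known disparities and dividing by 4, so that integer-pixel disparities at full resolution become quarter-pixel disparities at the smaller size. The block-averaging rule is an assumption, not necessarily the paper's exact resampling.

```python
import numpy as np

def downsample_quarter(disp):
    """Reduce an HxW disparity map (NaN = unknown) to quarter size."""
    h, w = disp.shape
    h4, w4 = h // 4, w // 4
    blocks = disp[: h4 * 4, : w4 * 4].reshape(h4, 4, w4, 4)
    # Average the known values in each 4x4 block, then rescale disparities to the new resolution.
    return np.nanmean(blocks, axis=(1, 3)) / 4.0
```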

Unknown Disparities
A remaining issue is that of holes, i.e., unknown disparity values.
Small holes can be filled by interpolation (see the sketch after this slide); large holes may remain in areas where no illumination codes were available to begin with.

Two main sources of large holes:
- surfaces that show very low reflection
- areas that are shadowed under all illuminations
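A sketch of the small-hole interpolation step mentioned above: short runs of unknown (NaN) disparities along a scanline are filled by linear interpolation between their known neighbours, while longer runs are left as holes. The run-length limit is an illustrative assumption.

```python
import numpy as np

def fill_small_holes(disp, max_hole_width=5):
    """Fill short NaN runs per scanline by linear interpolation (sketch)."""
    out = disp.copy()
    h, w = out.shape
    for y in range(h):
        x = 0
        while x < w:
            if np.isnan(out[y, x]):
                start = x
                while x < w and np.isnan(out[y, x]):
                    x += 1
                end = x                               # hole covers columns [start, end)
                if 0 < start and end < w and end - start <= max_hole_width:
                    left, right = out[y, start - 1], out[y, end]
                    t = np.linspace(0.0, 1.0, end - start + 2)[1:-1]
                    out[y, start:end] = left + t * (right - left)
            else:
                x += 1
    return out
```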

Results
Two different scenes: Cones and Teddy.

Experiments
The experimental setup uses a single digital camera (Canon G1) translating on a linear stage, and one or two light projectors illuminating the scene from different directions.

Results

Verification
To verify that the stereo data sets are useful for evaluating stereo matching algorithms, several of the algorithms from the Middlebury Stereo Page have been run on the new images.

Conclusion
A new methodology to acquire highly precise and reliable ground-truth disparity measurements.

Camera-projector disparities can be used as an auxiliary source of information to increase the reliability of correspondences and to fill in missing data.

Considerations for future work
- Exploiting the method in navigation
- The field of view is limited by the range of the light projector
- Investigate the number of projected patterns, which directly affects the speed of the method
- Operation in daylight or dark places; non-visible light patterns

Thank you

Questions??


Related work-CODED STRUCTURED LIGHT TECHNIQUES

Sato, Yamamoto and Inokuchi 1986-87

Proposed to use a Liquid Crystal Device (LCD), which allows an increased number of columns to be projected with high accuracy. The system also improves the coding speed compared with a slide projector, since the LCD can be electronically controlled.

If an object has a high textural contrast or any highly reflective surface regions, then some pattern segmentation errors can be produced. Solution?

The problem of a light projector is sometimes a result of heat irradiation onto the scene.

Related work-CODED STRUCTURED LIGHT TECHNIQUES

Hattori and Sato 1995
Replace the light projector with a semiconductor laser, which gives high-power illumination with low heat irradiation. The proposed system is named Cubiscope.

The Cubiscope system

Related work – Carrihill-Hummel (see notes)

Related work – Boyer-Kak
Colour coding

Related work – Le Moigne-Waxman

Non-coded grid patterns

Related work – Morita-Yajima-Sakata

Related work – Vuylsteke-Oosterlinck
