Photometric Stereo (Yonsei, 2014. 12. 29.)

E-mail: hogijung@hanyang.ac.kr, http://web.yonsei.ac.kr/hgjung

Photometric Stereo


Photometric Stereo vs. Structure from Shading [1]

• Photometric stereo is a technique in computer vision for estimating the surface normals of an object by observing it under different lighting conditions. The technique was originally introduced by Woodham in 1980.

• The special case where the data is a single image is known as shape from shading, and was analyzed by B. K. P. Horn in 1989.

Woodham, R.J. 1980. Photometric method for determining surface orientation from multiple images. Optical Engineering 19(1), 139-144.

B. K. P. Horn, 1989. Obtaining shape from shading information. In B. K. P. Horn and M. J. Brooks, eds., Shape from Shading, pages 121–171. MIT Press.
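Since the slides give no code, here is a minimal least-squares sketch of Woodham-style photometric stereo under a Lambertian model; the function name and array shapes are illustrative assumptions, not from the slides:

```python
import numpy as np

# Lambertian model: I_k = albedo * dot(l_k, n). With three or more known
# light directions we can solve L @ g = I for g = albedo * n at each pixel.
def photometric_stereo(L, I):
    """L: (k, 3) unit light directions; I: (k, h, w) stack of images.
    Returns (normals (h, w, 3), albedo (h, w))."""
    k, h, w = I.shape
    g, *_ = np.linalg.lstsq(L, I.reshape(k, -1), rcond=None)  # (3, h*w)
    g = g.T.reshape(h, w, 3)
    albedo = np.linalg.norm(g, axis=2)          # |g| = albedo
    normals = g / np.maximum(albedo[..., None], 1e-12)  # g/|g| = n
    return normals, albedo
```

With more than three lights the same `lstsq` call gives the least-squares estimate discussed later in the slides.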


Radiometry Units [7]

http://www.mathsisfun.com/geometry/steradian.html

Radiometry Units [7]
http://www.light-measurement.com/basic-radiometric-quantities/
http://www.tpdsci.com/tpc/RdmRadERGeoFig.php

• Radiant energy
• Radiant power: radiant energy per unit time
• Radiant intensity: radiant power per unit solid angle
• Radiance: radiant intensity per unit projected area
• Radiant exitance (radiant emittance): radiant power per unit area
• Radiosity: radiant exitance + reflected light
• Irradiance: the amount of radiant power impinging upon a surface per unit area

Photometry Units [8]

Photometry is the science of the measurement of light, in terms of its perceived brightness to the human eye. It is distinct from radiometry, which is the science of measurement of radiant energy (including light) in terms of absolute power; rather, in photometry, the radiant power at each wavelength is weighted by a luminosity function (a.k.a. visual sensitivity function) that models human brightness sensitivity.


Lambert's cosine law [9]

In optics, Lambert's cosine law says that the radiant intensity observed from a "Lambertian" surface is directly proportional to the cosine of the angle θ between the observer's line of sight and the surface normal.

An important consequence of Lambert's cosine law is that when such a surface is viewed from any angle, it has the same apparent radiance. This means, for example, that to the human eye it has the same apparent brightness (or luminance). It has the same radiance because, although the emitted power from a given area element (dA) is reduced by the cosine of the emission angle, the apparent size of the observed area (dA cos θ) is decreased by a corresponding amount.
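A tiny numeric check (not from the slides; `apparent_radiance` is a hypothetical helper) makes this cancellation explicit:

```python
import numpy as np

# Lambert's cosine law: emitted power from area dA falls off as cos(theta),
# but the projected (apparent) area falls off by the same factor, so the
# observed radiance (power / projected area / solid angle) stays constant.
def apparent_radiance(I0, theta, dA=1.0, dOmega=1.0):
    power = I0 * np.cos(theta) * dA * dOmega  # power into the viewing wedge
    projected_area = dA * np.cos(theta)       # foreshortened area seen by viewer
    return power / (projected_area * dOmega)

angles = np.deg2rad([0.0, 30.0, 60.0, 80.0])
values = apparent_radiance(100.0, angles)
# every entry equals I0 = 100.0, independent of viewing angle
```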


Lambertian scatterers [9]

When an area element is radiating as a result of being illuminated by an external source, the irradiance (energy or photons/time/area) landing on that area element will be proportional to the cosine of the angle between the illuminating source and the normal.

A Lambertian scatterer will then scatter this light according to the same cosine law as a Lambertian emitter.

This means that although the radiance of the surface depends on the angle from the normal to the illuminating source, it will not depend on the angle from the normal to the observer.


Lambert's cosine law [9]

Emission rate (photons/s) in a normal and off-normal direction. The number of photons/sec directed into any wedge is proportional to the area of the wedge.


Lambert's cosine law [9]

The observer directly above the area element will be seeing the scene through an aperture of area dA0, and the area element dA will subtend a (solid) angle of dΩ0. We can assume without loss of generality that the aperture happens to subtend solid angle dΩ when "viewed" from the emitting area element. This normal observer will then be recording I dΩ dA photons per second and so will be measuring a radiance of

I dΩ dA / (dΩ0 dA0) photons/(s·cm²·sr).

The observer at angle θ to the normal will be seeing the scene through the same aperture of area dA0, and the area element dA will subtend a (solid) angle of dΩ0 cos θ. This observer will be recording I cos θ dΩ dA photons per second, and so will be measuring a radiance of

I cos θ dΩ dA / (dΩ0 cos θ dA0) = I dΩ dA / (dΩ0 dA0) photons/(s·cm²·sr),

which is the same as the normal observer.

BRDF [5]

Constraints on the BRDF [5]

Diffuse Reflection [5]

Specular Reflection [5]

Diffuse Reflection and Lambertian BRDF [2]

Specular Reflection and Mirror BRDF [2]

Combining Specular and Diffuse [2]


BRDF Models [5]

• Phenomenological
– Phong
– Ward
– Lafortune et al.
– Ashikhmin et al.

• Physical
– Cook-Torrance
– Dichromatic
– He et al.

• Here we're listing only some well-known examples

Phong Illumination Model [5]
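As a sketch of the Phong model this slide title refers to (the coefficient values are illustrative, not from the slides):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Phong illumination: ambient + diffuse + specular terms.
# n: surface normal, l: direction to the light, v: direction to the viewer.
def phong(n, l, v, ka=0.1, kd=0.6, ks=0.3, shininess=10.0):
    n, l, v = normalize(n), normalize(l), normalize(v)
    diffuse = max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l      # mirror reflection of l about n
    specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular
```

With the light behind the surface only the ambient term `ka` survives, which matches the model's intent.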


Measuring the BRDF [5]

• Gonioreflectometer
– Device for capturing the BRDF by moving a camera + light source
– Needs careful control of illumination and environment

(traditional design by Greg Ward)


Measuring the BRDF [5]

• MERL (Matusik et al.): 100 isotropic, 4 anisotropic, densely sampled

• CUReT (Columbia-Utrecht): 60 samples, more sparsely sampled, but also bidirectional texture functions (BTF)

Image Intensity and 3D Geometry [2]

Image Formation [6]

Reflectance Map [6]

Surface Normal [2]

Gradient Space [2]
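For reference, the standard gradient-space relations this slide relies on (Horn's notation, not transcribed from the slide) are:

```latex
% Surface z = Z(x,y); gradient-space coordinates
p = \frac{\partial Z}{\partial x}, \qquad q = \frac{\partial Z}{\partial y}
% Unit surface normal expressed in terms of (p,q)
\mathbf{n} = \frac{(-p,\,-q,\,1)}{\sqrt{1+p^{2}+q^{2}}}
```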

Reflectance Map [2]

Shape from a Single Image? [2]


Shape from a Single Image? [3]

• Take more images → Photometric stereo
• Add more constraints → Shape-from-shading

Photometric Stereo [2]


Solving the Equations [2]


More than Three Light Sources [2]


Color Images [2]


Light Source Direction [2]

With unit surface normal n and vector r, the mirror-reflected direction s is obtained by subtracting twice the tangential component r − (r·n)n:

s = r − 2(r − (r·n)n) = r − 2r + 2(r·n)n = 2(r·n)n − r
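As a quick check of this formula (not from the slides; `reflect` is an illustrative name), the reflected direction can be computed directly:

```python
import numpy as np

def reflect(r, n):
    """Mirror-reflect vector r about the unit normal n: s = 2(r.n)n - r."""
    n = n / np.linalg.norm(n)
    return 2.0 * np.dot(r, n) * n - r
```

Reflection about the normal preserves the normal component and flips the tangential one, so the vector's length is unchanged.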


Normal Field to Surface [6]


Results [2]

1. Estimate light source directions
2. Compute surface normals
3. Compute albedo values
4. Estimate depth from surface normals
5. Relight the object (with original texture and uniform albedo)
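Step 4 above can be sketched with a deliberately naive path-integration scheme; practical implementations usually use a least-squares (e.g. Poisson) integration instead, and `integrate_normals` is a hypothetical name:

```python
import numpy as np

def integrate_normals(normals):
    """Recover a depth map (up to an offset) from a unit-normal field by
    naive path integration: form the gradients p = -nx/nz, q = -ny/nz,
    integrate p along the top row, then q down each column."""
    nz = np.clip(normals[..., 2], 1e-6, None)
    p = -normals[..., 0] / nz           # dZ/dx
    q = -normals[..., 1] / nz           # dZ/dy
    top = np.cumsum(p[0, :])            # depth along the top row
    Z = top[None, :] + np.cumsum(q, axis=0) - q[0:1, :]
    return Z
```

Path integration is sensitive to noise because errors accumulate along the chosen path, which is why least-squares integration is preferred in practice.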


Shape from a Single Image? [3]

• Take more images → Photometric stereo
• Add more constraints → Shape-from-shading


Human Perception [4]

• Our brain often perceives shape from shading.
• To do so, it makes many assumptions.
• For example:
– Light is coming from above (the sun).
– Perception is biased by occluding contours.


Stereographic Projection [4]

Problem: in (p,q)-space (gradient space), (p,q) can become infinite when θ = 90°.

Stereographic projection maps gradient space to (f,g)-space:

f = 2p / (1 + √(1 + p² + q²)), g = 2q / (1 + √(1 + p² + q²))

Redefine the reflectance map as R(f,g).


Occlusion Boundary [4]

On the occluding boundary, the viewing direction v_e and the image normal n_e of the boundary curve are known, and the surface normal n is perpendicular to v_e and parallel to n_e, so n is determined there.

The values on the occluding boundary can be used as the boundary condition for shape-from-shading.


Image Irradiance Constraint [4]

• Image irradiance should match the reflectance map

Minimize

e_i = ∬_image (I(x,y) − R(f,g))² dx dy

(minimize errors in image irradiance over the image)


Smoothness Constraint [4]

• Used to constrain shape-from-shading
• Relates orientations (f,g) of neighboring surface points

f_x = ∂f/∂x, f_y = ∂f/∂y, g_x = ∂g/∂x, g_y = ∂g/∂y

(f,g): surface orientation under stereographic projection

Minimize

e_s = ∬_image (f_x² + f_y² + g_x² + g_y²) dx dy

(penalize rapid changes in surface orientation f and g over the image)


Shape from Shading [4]

• Find surface orientations (f,g) at all image points that minimize

e = e_s + λ e_i

(smoothness constraint e_s, image irradiance error e_i, weight λ)

Minimize

e = ∬_image (f_x² + f_y² + g_x² + g_y²) + λ (I(x,y) − R(f,g))² dx dy
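A discrete version of this objective (just the energy, not the slides' minimization scheme) can be written down directly; `sfs_energy` and the reflectance-map callable `R` are illustrative names:

```python
import numpy as np

def sfs_energy(f, g, I, R, lam=1.0):
    """Discrete shape-from-shading objective e = e_s + lam * e_i:
    smoothness (squared forward differences of f and g) plus the
    image-irradiance error (I - R(f,g))^2, summed over the image."""
    e_s = (np.sum(np.diff(f, axis=0)**2) + np.sum(np.diff(f, axis=1)**2) +
           np.sum(np.diff(g, axis=0)**2) + np.sum(np.diff(g, axis=1)**2))
    e_i = np.sum((I - R(f, g))**2)
    return e_s + lam * e_i
```

Minimizing this energy over (f,g), e.g. by gradient descent with the occluding-boundary values held fixed, is the standard variational shape-from-shading recipe.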


Result [4]


With Colored Lights [10]

• In an environment where red, green, and blue light is emitted from different directions, a Lambertian surface will reflect each of those colors simultaneously without any mixing of the frequencies.
• The quantities of red, green, and blue light reflected are a linear function of the surface normal direction.


With Colored Lights [10]

For simplicity, we first focus on the case of a single distant light source with direction l illuminating a Lambertian surface point P with surface orientation direction n. The relevant quantities are:

• the energy distribution of the light source,
• the spectral reflectance function at that point,
• the spectral sensitivity of the i-th sensor,
• the intensity measured at the i-th sensor.


With Colored Lights [10]

In matrix form (RGB normal mapping): Equation (2) establishes a 1-1 mapping between an RGB pixel measurement from a color camera and the surface orientation at the point projecting to that pixel. Our strategy is to use the inverse of this mapping to convert a video of a deformable surface into a sequence of normal maps.

By estimating and then inverting the linear mapping M linking RGB values to surface normals, we can convert a video sequence captured under colored light into a video of normal maps.

We then integrate each normal map independently to obtain a depth map in every frame by imposing that the occluding contour is always at zero depth. At the end of the integration process, we obtain a video of depth maps.
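Assuming the 3x3 mapping M has already been estimated elsewhere (e.g. by calibration), inverting it per pixel can be sketched as follows; `rgb_to_normals` is a hypothetical name, not from the paper:

```python
import numpy as np

def rgb_to_normals(rgb, M):
    """Invert the linear map M (3x3) that sends surface normals to RGB
    values: n ~ M^{-1} c per pixel, then renormalize to unit length."""
    Minv = np.linalg.inv(M)
    g = rgb @ Minv.T                    # (h, w, 3): unnormalized normals
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(norm, 1e-12)
```

Because the map is linear, a single matrix inverse handles every pixel; the renormalization absorbs the unknown per-pixel albedo scale.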


With Colored Lights [10]

Tracking the surface

Our approach is to use the first depth map of the sequence as a 3D template which will be deformed to match all subsequent depth maps. The deformations of the template are guided by the following two competing constraints:

• the deformations must be compatible with the frame-to-frame 2D optical flow of the original video sequence,
• the deformations must be locally as rigid as possible.


References

1. Wikipedia, "Photometric Stereo," available at www.wikipedia.org.
2. Yu-Wing Tai, "Photometric Stereo," KAIST Computer Vision (CS 770, Fall 2009) lecture material, available at http://yuwing.kaist.ac.kr/courses/CS770/pdf/17-photometricstereo.pdf
3. S. Narasimhan, "Photometric Stereo," CMU Computer Vision (15-385, -685, Spring 2006) lecture material, available at http://www.cs.cmu.edu/afs/cs/academic/class/15385-s06/lectures/ppts/
4. S. Narasimhan, "Structure from Shading," CMU Computer Vision (15-385, -685, Spring 2006) lecture material, available at http://www.cs.cmu.edu/afs/cs/academic/class/15385-s06/lectures/ppts/
5. S. Seitz, "Light," University of Washington Computer Vision (CSE 455, Winter 2004) lecture material, available at http://www.cs.washington.edu/education/courses/455/04wi/
6. David Kriegman, "Photometric Stereo," UCSD Computer Vision (CSE 152, Spring 2005) lecture material, available at http://cseweb.ucsd.edu/classes/sp05/cse152/
7. Wikipedia, "Radiometry," available at www.wikipedia.org.
8. Wikipedia, "Photometry," available at www.wikipedia.org.
9. Wikipedia, "Lambert's cosine law," available at www.wikipedia.org.
10. Carlos Hernandez et al., "Non-rigid Photometric Stereo with Colored Lights," IEEE 11th International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, 14-21 Oct 2007.
