
Formation et Analyse d’Images - Session 3

Daniela Hall

3 October 2005


Course Overview

• Session 1 (19/09/05)
  – Overview
  – Human vision
  – Homogeneous coordinates
  – Camera models

• Session 2 (26/09/05)
  – Tensor notation
  – Image transformations
  – Homography computation

• Session 3 (3/10/05)
  – Camera calibration
  – Reflection models
  – Color spaces

• Session 4 (10/10/05)
  – Pixel-based image analysis

• The course on 17/10/05 is replaced by Modelisation surfacique.


Course Overview

• Sessions 5 + 6 (24/10/05, 9:45 – 12:45)
  – Kalman filter
  – Tracking of regions, pixels, and lines

• Session 7 (7/11/05)
  – Gaussian filter operators

• Session 8 (14/11/05)
  – Scale space

• Session 9 (21/11/05)
  – Contrast description
  – Hough transform

• Session 10 (5/12/05)
  – Stereo vision

• Session 11 (12/12/05)
  – Epipolar geometry

• Session 12 (16/01/06): exercises and questions


Session Overview

1. Camera calibration

2. Light

3. Reflection models

4. Human color perception

5. Color spaces


Bilinear interpolation

The bilinear approach computes the weighted average of the four neighbouring pixels $P_{s0}$, $P_{s1}$ (upper row) and $P_{s3}$, $P_{s2}$ (lower row). Each pixel is weighted according to the area of the sub-rectangle associated with it:

$$ dx = x(P_s) - x(P_{s0}), \qquad dy = y(P_s) - y(P_{s0}) $$

$$ A = (1 - dx)(1 - dy), \quad B = dx\,(1 - dy), \quad C = dx\,dy, \quad D = (1 - dx)\,dy $$

$$ I(P_s) = A\,I(P_{s0}) + B\,I(P_{s1}) + C\,I(P_{s2}) + D\,I(P_{s3}) $$

[Figure: point P_s inside the square formed by P_s0, P_s1, P_s2, P_s3, with the sub-areas A, B, C, D]
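As an illustration, here is a minimal Python sketch of this interpolation, assuming the image is stored as a 2-D NumPy array indexed as image[row, column]; the function name bilinear_interpolate is only illustrative.

```python
import numpy as np

def bilinear_interpolate(image, x, y):
    """Interpolate the image intensity at the non-integer position (x, y).

    x is the column coordinate, y the row coordinate. The four
    neighbouring pixels are weighted by the areas A, B, C, D above.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, image.shape[1] - 1)
    y1 = min(y0 + 1, image.shape[0] - 1)

    dx, dy = x - x0, y - y0

    A = (1 - dx) * (1 - dy)   # weight of P_s0 (top-left)
    B = dx * (1 - dy)         # weight of P_s1 (top-right)
    C = dx * dy               # weight of P_s2 (bottom-right)
    D = (1 - dx) * dy         # weight of P_s3 (bottom-left)

    return (A * image[y0, x0] + B * image[y0, x1] +
            C * image[y1, x1] + D * image[y1, x0])
```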


Camera calibration

• Assuming that the camera performs an exact perspective projection, the image formation process can be expressed as a projective mapping from R³ to R²:

$$ P_I = M^I_S \, P_S $$

• Camera calibration is the process of estimating $M^I_S$ from a set of point correspondences $R_S \leftrightarrow P_I$.

• Advantage: the intrinsic and extrinsic camera parameters do not need to be known; they are estimated automatically.

Ref: CVonline, LOCAL_COPIES/MOHR_TRIGGS/node16.html


Calibration

1. Construct a calibration object whose 3-D positions are known.

2. Measure the image coordinates.

3. Determine the correspondences between the 3-D points $R_{S_k}$ and the image points $P_{I_k}$.

4. The projection matrix has 11 degrees of freedom (12 entries, defined only up to scale). Each correspondence gives 2 equations, so we need at least 5½ correspondences.


Calibration

• For each correspondence between a scene point $R_{S_k}$ and an image point $P_{I_k} = (i_k, j_k, 1)^T$:

$$ i_k = \frac{(M^I_S)_1 \cdot R_{S_k}}{(M^I_S)_3 \cdot R_{S_k}}, \qquad j_k = \frac{(M^I_S)_2 \cdot R_{S_k}}{(M^I_S)_3 \cdot R_{S_k}} $$

where $(M^I_S)_r$ denotes the r-th row of $M^I_S$.

• This gives the following equations for k = 1, ..., 6:

$$ (M^I_S)_1 \cdot R_{S_k} - i_k \, (M^I_S)_3 \cdot R_{S_k} = 0 $$
$$ (M^I_S)_2 \cdot R_{S_k} - j_k \, (M^I_S)_3 \cdot R_{S_k} = 0 $$

• from which $M^I_S$ can be computed.


Calibration using many points

• For k = 5½ correspondences, M has exactly one solution.
  – The solution depends on precise measurements of the 3-D and 2-D points.
  – If you use another 5½ points, you will get a different solution.

• A more stable solution is found by using a large number of points and optimisation.


Calibration using many points

• For each point correspondence we know (i, j) and $R = (x, y, z, 1)^T$.

• We want to know $M^I_S$.

• Solve the equation system with your favorite algorithm (least squares, Levenberg-Marquardt, SVD, ...); a least-squares sketch follows the matrix form below.

The constraints from the previous slide,

$$ (M^I_S)_1 \cdot R_{S_k} - i_k \, (M^I_S)_3 \cdot R_{S_k} = 0, \qquad (M^I_S)_2 \cdot R_{S_k} - j_k \, (M^I_S)_3 \cdot R_{S_k} = 0, $$

can be written as a homogeneous linear system in the entries $m_{11}, \ldots, m_{34}$ of $M^I_S$. Each correspondence (i, j) ↔ (x, y, z, 1) contributes two rows:

$$ \begin{pmatrix} x & y & z & 1 & 0 & 0 & 0 & 0 & -ix & -iy & -iz & -i \\ 0 & 0 & 0 & 0 & x & y & z & 1 & -jx & -jy & -jz & -j \end{pmatrix} \begin{pmatrix} m_{11} \\ m_{12} \\ m_{13} \\ m_{14} \\ m_{21} \\ m_{22} \\ m_{23} \\ m_{24} \\ m_{31} \\ m_{32} \\ m_{33} \\ m_{34} \end{pmatrix} = 0 $$

Stacking the rows of all correspondences gives a system A·m = 0, which is solved for the 12-vector m.
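A minimal sketch of this estimation using the SVD, assuming NumPy and correspondences given as arrays of 3-D scene points and (i, j) image points; the helper name calibrate_dlt is hypothetical and not part of the course material.

```python
import numpy as np

def calibrate_dlt(scene_pts, image_pts):
    """Estimate the 3x4 projection matrix M from point correspondences.

    scene_pts: (N, 3) array of 3-D points (x, y, z), with N >= 6.
    image_pts: (N, 2) array of image points (i, j).
    Each correspondence contributes the two rows above to A; the
    solution m is the null vector of A, taken from the SVD.
    """
    A = []
    for (x, y, z), (i, j) in zip(scene_pts, image_pts):
        A.append([x, y, z, 1, 0, 0, 0, 0, -i * x, -i * y, -i * z, -i])
        A.append([0, 0, 0, 0, x, y, z, 1, -j * x, -j * y, -j * z, -j])
    A = np.asarray(A)

    # The least-squares solution of A m = 0 with ||m|| = 1 is the right
    # singular vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    m = Vt[-1]
    return m.reshape(3, 4)
```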


Estimation of M^I_S

1. When the intrinsic parameters ($C_i$, $C_j$, $D_i$, $D_j$, $F$) and the extrinsic parameters (3-D camera position and orientation) are known, compute $M^I_S$ directly:

$$ P_I = C^I_R \, T^R_C \, M^C_S \, P_S = M^I_S \, P_S $$

2. If one parameter is not precisely known, or you wish a stable estimation of $M^I_S$, do the calibration with a large number of points.


Session Overview

1. Camera calibration

2. Light

3. Reflection models

4. Human color perception

5. Color spaces


Light

• N: surface normal
• i: angle between the incoming light and the normal
• e: angle between the normal and the camera
• g: angle between the light and the camera

[Figure: light source and camera above a surface with normal N, showing the angles i, e, and g]


Spectrum

• A light source is characterised by its spectrum.
• The spectrum consists of a particular quantity of photons per frequency.
• The frequency is described by its wavelength λ.
• The visible spectrum ranges from 380 nm to 720 nm.
• Cameras can see a larger spectrum, depending on their CCD chip.


Albedo

• Albedo is the fraction of light that is reflected by a body or surface:

$$ \text{albedo} = \frac{\text{light emitted (radiance)}}{\text{light received (irradiance)}} $$

• Reflectance function: $R(i, e, g)$

[Figure: light source and camera above a surface with normal N and angles i, e, g]


Reflectance functions

• Specular reflection
  – example: mirror

• Lambertian reflection
  – diffuse reflection; examples: paper, snow


Specular reflection

$$ R(i, e, g) = \begin{cases} 1 & \text{if } i = e \text{ and } g = i + e \\ 0 & \text{else} \end{cases} $$

[Figure: light source and camera placed symmetrically about the normal N, with g = i + e]


Lambertian reflection

$$ R(i, e, g) = \cos(i) $$
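As a small illustration, here is a Python sketch of the Lambertian model above, assuming the light direction is given as a vector pointing from the surface towards the source; the function name and the albedo parameter are illustrative additions.

```python
import numpy as np

def lambertian_reflectance(normal, light_dir, albedo=1.0):
    """Lambertian model: R = albedo * cos(i), where i is the angle between
    the surface normal and the direction towards the light source.

    normal and light_dir are 3-D vectors (not necessarily normalised);
    the reflected intensity is independent of the viewing direction.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    cos_i = max(np.dot(n, l), 0.0)   # clamp: no light from behind the surface
    return albedo * cos_i
```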


Dichromatic reflectance model

• The reflected light R is the sum of the light reflected at the surface, $R_S$, and the light reflected from the material body, $R_L$:

$$ R(i, e, g) = R_S(i, e, g) + R_L(i, e, g) $$

• $R_S$ has the same spectrum as the light source.
• The spectrum of $R_L$ is "filtered" by the material (photons are absorbed, which changes the emitted light).
• Luminance depends on the surface orientation.
• The spectrum of the chrominance is composed of the light source spectrum and the absorption of the surface material.


Session Overview

1. Camera calibration

2. Light

3. Reflection models

4. Human color perception

5. Color spaces


Color perception

• The retina is composed of rods and cones.
• Rods provide "scotopic" or low-intensity vision. They:
  – provide our night vision ability for very low illumination,
  – are a thousand times more sensitive to light than cones,
  – are much slower to respond to light than cones,
  – are distributed primarily in the periphery of the visual field.


Color perception

• Cones provide "photopic" or high-acuity vision. They:
  – provide our day vision,
  – produce high-resolution images,
  – determine the overall brightness or darkness of images,
  – provide our color vision, by means of three types of cones:
    • "L" or red, long-wavelength sensitive,
    • "M" or green, medium-wavelength sensitive,
    • "S" or blue, short-wavelength sensitive.

• Cones enable our day vision and color vision. Rods take over in low illumination. However, rods cannot detect color, which is why at night we see in shades of gray.

• Source: http://www.hf.faa.gov/Webtraining/VisualDisplays/


Color perception

• Rod sensitivity: peak at 498 nm.
• Cone sensitivity: red or "L" cones peak at 564 nm; green or "M" cones peak at 533 nm; blue or "S" cones peak at 437 nm.

[Figure: wavelength sensitivities of the different cones and the rods; note the overlap in sensitivity between the green and red cones]


Camera sensitivity

• The observed light intensity depends on:
  – the source spectrum: S(λ)
  – the reflectance of the observed point (i, j): P(i, j, λ)
  – the receptive spectrum of the camera: c(λ)
  – the gain p₀

$$ p(i, j) = p_0 \int P(i, j, \lambda) \, S(\lambda) \, c(\lambda) \, d\lambda $$

[Figure: spectral sensitivities of vidicon and CCD sensors over 400–1000 nm]


Classical RGB camera

• The filters follow a convention of the International Commission on Illumination (CIE).
• They are functions of λ: r(λ), g(λ), b(λ).
• They are close to the sensitivity of the human color vision system.


Color pixels

$$ P(i, j) = \begin{pmatrix} R(i, j) \\ G(i, j) \\ B(i, j) \end{pmatrix} $$

$$ R(i, j) = p_0 \int P(i, j, \lambda) \, S(\lambda) \, r(\lambda) \, d\lambda $$
$$ G(i, j) = p_0 \int P(i, j, \lambda) \, S(\lambda) \, g(\lambda) \, d\lambda $$
$$ B(i, j) = p_0 \int P(i, j, \lambda) \, S(\lambda) \, b(\lambda) \, d\lambda $$
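A possible discrete approximation of these integrals in Python, assuming all spectra are sampled at the same wavelengths; the function name rgb_response is illustrative only.

```python
import numpy as np

def rgb_response(wavelengths, reflectance, illuminant, r, g, b, gain=1.0):
    """Discrete approximation of the colour-pixel integrals above.

    wavelengths: sample positions in nm; reflectance P(i,j,lambda),
    illuminant S(lambda) and filter curves r, g, b are arrays sampled
    at those wavelengths. Returns the (R, G, B) value of one pixel.
    """
    common = gain * reflectance * illuminant
    R = np.trapz(common * r, wavelengths)
    G = np.trapz(common * g, wavelengths)
    B = np.trapz(common * b, wavelengths)
    return np.array([R, G, B])
```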


Color bands (channels)

• It is not possible to perceive the spectrum directly.
• Color is a projection of the spectrum onto the spectra of the sensors.
• Humans (and cameras) probe the spectrum at 3 positions:

$$ R = p_0 \int S(\lambda)\, r(\lambda)\, d\lambda, \qquad G = p_0 \int S(\lambda)\, g(\lambda)\, d\lambda, \qquad B = p_0 \int S(\lambda)\, b(\lambda)\, d\lambda $$


Session Overview

1. Camera calibration

2. Light

3. Reflection models

4. Human color perception

5. Color spaces


Color spaces

• RGB color space

• Hering (opponent) color space

• CMY color space

• YIQ color space

• HLS color space


RGB color space

• A CCD camera provides RGB images

• The luminance axis is r=g=b (diagonal)

• Each axis has 256 (8 bit) different values

• Number of RGB colors: 256³ = 16,777,216


Hering color space

• Opponent color space

• Is obtained from RGB space by transformation.

• Luminance, C1 (red-green), C2 (red+green-blue)

In general, each channel is weighted by a gain $g_r$, $g_g$, $g_b$ before being combined. With unit gains the transformation is

$$ \begin{pmatrix} Y \\ C_1 \\ C_2 \end{pmatrix} = \begin{pmatrix} 1/3 & 1/3 & 1/3 \\ 1/2 & -1/2 & 0 \\ 1/2 & 1/2 & -1 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} $$
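A small Python sketch applying the opponent transform above (unit gains assumed, as in the numeric matrix); the matrix and function names are illustrative.

```python
import numpy as np

# Opponent (Hering) transform with unit channel gains (g_r = g_g = g_b = 1).
RGB_TO_HERING = np.array([
    [1/3,  1/3,  1/3],   # Y  : luminance
    [1/2, -1/2,  0.0],   # C1 : red - green
    [1/2,  1/2, -1.0],   # C2 : red + green - blue
])

def rgb_to_hering(rgb):
    """Map an (..., 3) RGB array to (Y, C1, C2)."""
    return np.asarray(rgb) @ RGB_TO_HERING.T
```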


CMY color space

• Cyan, magenta, yellow

• CMYK: CMY + black color channel

$$ \begin{pmatrix} C \\ M \\ Y \end{pmatrix} = \begin{pmatrix} R_{max} - R \\ G_{max} - G \\ B_{max} - B \end{pmatrix} $$
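A one-line Python sketch of this complement, assuming 8-bit channels (maximum value 255).

```python
import numpy as np

def rgb_to_cmy(rgb, max_value=255):
    """CMY as the channel-wise complement of RGB (8-bit channels assumed)."""
    return max_value - np.asarray(rgb)
```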


YIQ color space

• YIQ is an approximation of:
  – Y: luminance,
  – I: red – cyan,
  – Q: magenta – green.

• Used in US TVs (NTSC coding). Black-and-white TVs display only the Y channel.

$$ \begin{pmatrix} Y \\ I \\ Q \end{pmatrix} = \begin{pmatrix} 0.30 & 0.59 & 0.11 \\ 0.60 & -0.28 & -0.32 \\ 0.21 & -0.52 & 0.31 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} $$
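A Python sketch of this conversion, using the matrix above (the negative signs are restored from the standard NTSC definition); names are illustrative.

```python
import numpy as np

# NTSC luminance/chrominance transform from the slide.
RGB_TO_YIQ = np.array([
    [0.30,  0.59,  0.11],
    [0.60, -0.28, -0.32],
    [0.21, -0.52,  0.31],
])

def rgb_to_yiq(rgb):
    """Convert an (..., 3) RGB array to YIQ."""
    return np.asarray(rgb) @ RGB_TO_YIQ.T
```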


HLS space

• Hue, luminance, saturation space.

• L = R + G + B

• S = 1 - 3·min(R, G, B) / L

$$ x = \cos^{-1}\!\left( \frac{0.5\,\big((R-G) + (R-B)\big)}{\sqrt{(R-G)^2 + (R-B)(G-B)}} \right) $$

$$ T = \begin{cases} x & \text{if } B \le G \\ 2\pi - x & \text{else} \end{cases} $$

where T is the hue angle.
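A Python sketch of these formulas for a single pixel; the small epsilon guard against division by zero is an addition, not part of the slide.

```python
import numpy as np

def rgb_to_hls(r, g, b, eps=1e-9):
    """HLS values of one pixel following the formulas above.

    L = R + G + B, S = 1 - 3*min(R, G, B)/L, and the hue T is the
    angle obtained from the arccos expression (in radians).
    """
    L = r + g + b
    S = 1.0 - 3.0 * min(r, g, b) / (L + eps)

    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    x = np.arccos(np.clip(num / den, -1.0, 1.0))
    T = x if b <= g else 2.0 * np.pi - x
    return T, L, S
```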


Influence of color spaces for image analysis

• According to the dichromatic reflectance model:
  – luminance depends on the surface orientation,
  – the spectrum of the chrominance is composed of the light source spectrum and the absorption of the surface material.

• In HLS space, luminance is separated from chrominance. For object recognition robust to changes in the light source direction, use only the chrominance plane for identification.

• In RGB space, changes in luminance influence all 3 channels. The above technique cannot be used directly (transform to Hering space first).