
Object Shape and Reflectance Modeling from Color Image Sequence

Yoichi Sato

CMU-RI-TR-97-06

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the field of Robotics

The Robotics Institute
Carnegie Mellon University

Pittsburgh, Pennsylvania 15213

January 1997

© 1997 Yoichi Sato

This work was sponsored in part by the Advanced Research Projects Agency under the Department of the Army, Army Research Office under grant number DAAH04-94-G-0006, and partially by NSF under Contract IRI-9224521. Views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of the United States Government.


Abstract

This thesis describes the automatic reconstruction of 3D object models from observation of real objects. As a result of significant advances in graphics hardware and image rendering algorithms, 3D computer graphics capability has become available even on low-end computers. However, 3D object models are still usually created manually by users. That input process is normally time-consuming and can be a bottleneck for realistic image synthesis. Therefore, techniques for obtaining object models automatically by observing real objects could have great significance in practical applications.

Generating realistic images of a 3D object requires two kinds of information: the object's shape and its reflectance properties, such as color and specularity. A number of techniques have been developed for modeling object shapes by observing real objects. However, attempts to model the reflectance properties of real objects have been rather limited. In most cases, the modeled reflectance properties are either too simple or too complicated to be used for synthesizing realistic images of the object.

One of the main reasons why modeling of reflectance properties has been less successful than modeling of object shapes is that diffusely and specularly reflected light, i.e., the diffuse and specular reflection components, are treated together, which makes estimation of reflectance properties unreliable. To eliminate this problem, the two reflection components should be separated prior to estimating reflectance properties. For this purpose, we developed a new method called goniochromatic space analysis (GSA), which separates the two fundamental reflection components from a color image sequence.

Based on GSA, we studied two approaches for generating 3D models from observation of real objects. For objects with smooth surfaces, we developed a new method which examines a sequence of color images taken under a moving light source. The diffuse and specular reflection components are first separated from the color image sequence; then, object surface shapes and reflectance parameters are simultaneously estimated from the separation results. For creating object models with more complex shapes and reflectance properties, we proposed another method which uses a sequence of range and color images. In this method, GSA is extended to handle a color image sequence taken while changing the object's posture.

To extend GSA to a wider range of applications, we also developed a method for shape and reflectance recovery from a sequence of color images taken under solar illumination. The method was designed to handle various problems particular to such images, e.g., more complex illumination and the shape ambiguity caused by the sun's coplanar motion.

This thesis presents new approaches for modeling object surface reflectance properties, as well as shapes, by observing real objects in both indoor and outdoor environments. The methods are based on a novel technique, goniochromatic space analysis, for separating the two fundamental reflection components from a color image sequence.


Acknowledgments

I would like to express my deepest gratitude to my wife, Imari Sato, and to my parents, Yoshitaka Sato and Kazuko Sato, who have always been supportive throughout my years at Carnegie Mellon University.

I would also like to express my gratitude to Katsushi Ikeuchi for being my adviser and mentor. From him, I have learned how to conduct research in the field of computer vision. I have greatly benefited from his support and enthusiasm over the past five years. I am also grateful to my thesis committee members Martial Hebert, Steve Shafer, and Shree Nayar for their careful reading of this thesis and for providing valuable feedback regarding my work.

For taking the time to proofread this thesis, I am very grateful to Marie Elm. She has always been kind enough to spare her time to correct my writing and improve my writing skills.

I was fortunate to have many great people to work with in the VASC group at CMU. In particular, I would like to thank the members of our Task Oriented Vision Lab group for their insights and ideas, which are embedded in my work: Prem Janardhan, Sing Bing Kang, George Paul, Harry Shum, Fred Solomon, and Mark Wheeler; special thanks go to Fred Solomon, who patiently taught me numerous hands-on skills necessary for conducting experiments. I have also benefited from the help of visiting scientists in our group, including Santiago Conant-Pablos, Kazunori Higuchi, Yunde Jiar, Masato Kawade, Hiroshi Kimura, Tetsuo Kiuchi, Jun Miura, Kotaro Ohba, Ken Shakunaga, Yutaka Takeuchi, and Taku Yamazaki. We all had many fun barbecue parties at Katsu's place during my stay in Pittsburgh. I will very much miss those parties and Katsu's excellent homemade wine.

Finally, I would once again like to thank my family for their love, support, and encouragement, especially my wife, Imari. Since Imari and I married, my life has been quite wonderful; she has made the hard times seem as nothing, and the good times an absolute delight.


Table of Contents

Chapter 1

Introduction and Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1 Goniochromatic Space Analysis of Reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5

1.2 Object Modeling from Color Image Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7

1.3 Object Modeling from Range and Color Image Sequences . . . . . . . . . . . . . . . . . . . . .8

1.4 Reflectance Analysis under Solar Illumination . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11

1.5 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12

Chapter 2

Goniochromatic Space Analysis of Reflection . . . . . . . . . . . . . . . . . . . . . 13

2.1 Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13

2.2 The RGB Color Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17

2.3 The I-θ (Intensity - Illuminating/Viewing Angle) Space . . . . . . . . . . . . . . . . . . . . .19

2.4 The Goniochromatic Space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20

Chapter 3


Object Modeling from Color Image Sequence . . . . . . . . . . . . . . . . . . . . 23

3.1 Reflection Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.1.1 The Lambertian Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1.2 The Torrance-Sparrow Reflection Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.1.3 Image Formation Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3.2 Decomposition of Reflection Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.3 Estimation of the Specular Reflection Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.3.1 Previously Developed Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.3.1.1 Lee’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.3.1.2 Tominaga and Wandell’s Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

3.3.1.3 Klinker, Shafer, and Kanade’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . 37

3.3.2 Our Method for Estimating an Illuminant Color . . . . . . . . . . . . . . . . . . . . . . . . 38

3.4 Estimation of the Diffuse Reflection Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3.5 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.5.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3.5.2 Estimation of Surface Normal and Reflectance Parameters . . . . . . . . . . . . . . . 43

3.5.3 Shiny Dielectric Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.5.4 Matte Dielectric Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.5.5 Metal Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

3.5.6 Shape Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

3.5.7 Reflection Component Separation with Non-uniform Reflectance. . . . . . . . . . 55

3.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

Chapter 4


Object Modeling from Range and Color Images: Object Models Without Texture . . . . . . . . . . . . . . . . . . . 61

4.1 Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .62

4.2 Image Acquisition System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .64

4.3 Shape Reconstruction from Multiple Range Images . . . . . . . . . . . . . . . . . . . . . . . . .66

4.3.1 Our Method for Merging Multiple Range Images . . . . . . . . . . . . . . . . . . . . . . .68

4.3.2 Measurement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .69

4.3.3 Shape Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .70

4.4 Mapping Color Images onto Recovered Object Shape. . . . . . . . . . . . . . . . . . . . . . . .71

4.5 Reflectance Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .75

4.5.1 Reflection Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .75

4.5.2 Reflection Component Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76

4.5.3 Reflectance Parameter Estimation for Segmented Regions . . . . . . . . . . . . . . . .78

4.6 Synthesized Images with Realistic Reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .81

4.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .83

Chapter 5

Object Modeling from Range and Color Images: Object Models With Texture . . . . . . . . . . . . . . . . . . . . 85

5.1 Dense Surface Normal Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .87

5.2 Diffuse Reflection Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .88

5.3 Specular Reflection Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89

5.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90

5.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98


Chapter 6

Reflectance Analysis under Solar Illumination . . . . . . . . . . . . . . . . . . . 101

6.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

6.2 Reflection Model Under Solar Illumination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

6.3 Removal of the Reflection Component from the Skylight . . . . . . . . . . . . . . . . . . . 106

6.4 Removal of the Specular Component from the Sunlight . . . . . . . . . . . . . . . . . . . . . 107

6.5 Obtaining Surface Normals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

6.5.1 Two Sets of Surface Normals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

6.5.2 Unique Surface Normal Solution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

6.6 Experimental Results: Laboratory Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

6.7 Experimental Result: Outdoor Scene (Water Tower) . . . . . . . . . . . . . . . . . . . . . . . 114

6.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

Chapter 7

Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

7.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

7.1.1 Object Modeling from Color Image Sequence . . . . . . . . . . . . . . . . . . . . . . . . 120

7.1.2 Object Modeling from Range and Color Images. . . . . . . . . . . . . . . . . . . . . . . 120

7.1.3 Reflectance Analysis under Solar Illumination . . . . . . . . . . . . . . . . . . . . . . . . 121

7.2 Thesis Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

7.3 Directions for Future Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

7.3.1 More Complex Reflectance Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

7.3.2 Planning of Image Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

7.3.3 Reflectance Analysis for Shape from Motion . . . . . . . . . . . . . . . . . . . . . . . . . 123


7.3.4 More Realistic Illumination Model for Outdoor Scene Analysis . . . . . . . . . . .124

Color Figures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133


List of Figures

Figure 1 Object model generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2

Figure 2 Object model for computer graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3

Figure 3 (a) a gonioreflectometer and (b) a typical measurement of BRDF . . . . . . . .4

Figure 4 Goniochromatic space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7

Figure 5 Reflection component separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8

Figure 6 Synthesized image of an object without texture . . . . . . . . . . . . . . . . . . . . .10

Figure 7 Synthesized images of an object with texture . . . . . . . . . . . . . . . . . . . . . . .10

Figure 8 Image taken under solar illumination . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11

Figure 9 A sphere and its color histogram forming a T shape in the RGB color space . . .19

Figure 10 Viewer-centered coordinate system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20

Figure 11 The I-θ space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20

Figure 12 The goniochromatic space (synthesized data) . . . . . . . . . . . . . . . . . . . . . . .22

Figure 13 Polar plot of the three reflection components . . . . . . . . . . . . . . . . . . . . . . .24

Figure 14 Reflection model used in our analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25

Figure 15 Internal scattering and surface reflection . . . . . . . . . . . . . . . . . . . . . . . . . . .26

Figure 16 Solid angles of a light source and illuminated surface . . . . . . . . . . . . . . . .26

Figure 17 Geometry for the Torrance-Sparrow reflection model [85] . . . . . . . . . . . . .30

Figure 18 Measurement at one pixel (synthesized data) . . . . . . . . . . . . . . . . . . . . . . .32

Figure 19 Diffuse and specular reflection planes (synthesized data) . . . . . . . . . . . . . .34


Figure 20 x-y chromaticity diagram showing the ideal loci of chromaticities corresponding to colors from five surfaces of different colors . . . . . . . . . . . . . . . . . . . . 36

Figure 21 Estimation of illuminant color as an intersection of color signal planes . . 37

Figure 22 T-shape color histogram and two color vectors . . . . . . . . . . . . . . . . . . . . . 38

Figure 23 Estimation of the color vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Figure 24 Geometry matrix (synthesized data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Figure 25 Geometry of the experimental setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

Figure 26 Geometry of the extended light source . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

Figure 27 Green shiny plastic cylinder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

Figure 28 Measured intensities in the goniochromatic space . . . . . . . . . . . . . . . . . . . 45

Figure 29 Decomposed two reflection components . . . . . . . . . . . . . . . . . . . . . . . . . . 46

Figure 30 Loci of two reflection components in the goniochromatic space . . . . . . . . 47

Figure 31 Diffuse and specular reflection planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

Figure 32 Result of fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

Figure 33 Green matte plastic cylinder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

Figure 34 Measured intensities in the goniochromatic space . . . . . . . . . . . . . . . . . . . 49

Figure 35 Two decomposed reflection components . . . . . . . . . . . . . . . . . . . . . . . . . . 50

Figure 36 Result of fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

Figure 37 Aluminum triangular prism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

Figure 38 Loci of the intensity in the goniochromatic space . . . . . . . . . . . . . . . . . . . 52

Figure 39 Two decomposed reflection components . . . . . . . . . . . . . . . . . . . . . . . . . . 53

Figure 40 Purple plastic cylinder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

Figure 41 Needle map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

Figure 42 Recovered object shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Figure 43 Estimation of illuminant color in the x-y chromaticity diagram . . . . . . . . 56

Figure 44 Multicolored object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

Figure 45 Diffuse reflection image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

Figure 46 Specular reflection image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

Figure 47 Image acquisition system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65


Figure 48 Input range data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .69

Figure 49 Input color images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .70

Figure 50 Recovered object shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .71

Figure 51 View mapping result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .73

Figure 52 Intensity change with strong specularity . . . . . . . . . . . . . . . . . . . . . . . . . . .74

Figure 53 Intensity change with little specularity . . . . . . . . . . . . . . . . . . . . . . . . . . . .74

Figure 54 Geometry for simplified Torrance-Sparrow model . . . . . . . . . . . . . . . . . . .75

Figure 55 Separated reflection components with strong specularity . . . . . . . . . . . . . .76

Figure 56 Separated reflection component with little specularity . . . . . . . . . . . . . . . .77

Figure 57 Diffuse image and specular image: example 1 . . . . . . . . . . . . . . . . . . . . . .77

Figure 58 Diffuse image and specular image: example 2 . . . . . . . . . . . . . . . . . . . . . .78

Figure 59 Segmentation result (gray levels represent regions) . . . . . . . . . . . . . . . . . . .80

Figure 60 Synthesized image 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82

Figure 61 Synthesized image 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82

Figure 62 Synthesized image 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .83

Figure 63 Object modeling with reflectance parameter mapping . . . . . . . . . . . . . . . .86

Figure 64 Surface normal estimation from input 3D points . . . . . . . . . . . . . . . . . . . . .88

Figure 65 Diffuse saturation shown in the RGB color space . . . . . . . . . . . . . . . . . . . .90

Figure 66 Input range data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91

Figure 67 Input color images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91

Figure 68 Recovered object shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92

Figure 69 Simplified shape model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .93

Figure 70 Estimated surface normals and polygonal normals . . . . . . . . . . . . . . . . . . .93

Figure 71 Color image mapping result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95

Figure 72 Estimated diffuse reflection parameters . . . . . . . . . . . . . . . . . . . . . . . . . . .95

Figure 73 Selected vertices for specular parameter estimation . . . . . . . . . . . . . . . . . .96

Figure 74 Interpolated specular reflection parameters . . . . . . . . . . . . . . . . . . . . . . . . . 97

Figure 75 Synthesized object images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .99

Figure 76 Comparison of input color images and synthesized images . . . . . . . . . . .100


Figure 77 Comparison of the spectra of sunlight and skylight [48] . . . . . . . . . . . . . 104

Figure 78 Change of color of sun with altitude [48] . . . . . . . . . . . . . . . . . . . . . . . . . 104

Figure 79 Three reflection components from solar illumination . . . . . . . . . . . . . . . 105

Figure 80 Sun direction, viewing direction and surface normal in 3D case . . . . . . . 108

Figure 81 Diffuse reflection component image (frame 8) . . . . . . . . . . . . . . . . . . . . . 111

Figure 82 Two sets of surface normals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

Figure 83 The boundary region obtained from two surface normal sets . . . . . . . . . 112

Figure 84 The boundary after medial axis transformation . . . . . . . . . . . . . . . . . . . . 112

Figure 85 Segmented regions (gray levels represent regions) . . . . . . . . . . . . . . . . . 113

Figure 86 Right surface normal set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

Figure 87 Recovered object shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

Figure 88 Observed color image sequence of a water tank . . . . . . . . . . . . . . . . . . . . 115

Figure 89 Extracted region of interest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

Figure 90 Water tank image without sky reflection component . . . . . . . . . . . . . . . . 116

Figure 91 Water tank image after highlight removal . . . . . . . . . . . . . . . . . . . . . . . . 117

Figure 92 Surface normals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

Figure 93 Recovered shape of the water tank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118


Chapter 1

Introduction and Overview

As a result of significant advances in graphics hardware and image rendering algorithms, 3D computer graphics capability has become available even on low-end computers. At the same time, the rapid spread of internet technology has caused a significant increase in demand for 3D computer graphics. For instance, VRML, a format for 3D computer graphics on the internet, is becoming an industry standard, and the number of applications using it is quickly increasing. It is therefore important to be able to create suitable 3D object models for synthesizing realistic computer graphics images.

An object model for computer graphics applications should contain two kinds of information: the shape of the object and its reflectance properties. Surface reflectance properties are particularly important for synthesizing realistic computer graphics images, since the appearance of an object depends greatly on how incident light is reflected at its surface. For instance, a polished metal sphere will look completely different after it is coated with diffuse white paint, even though its shape remains exactly the same.

Unfortunately, 3D object models are often created manually by users. That input process is normally time-consuming and can be a bottleneck for realistic image synthesis. Alternatively, CAD models of 3D objects may be available; even then, however, reflectance properties are usually not part of the CAD models and therefore need to be determined. Thus, techniques for obtaining object model data automatically by observing real objects could have great significance in practical applications. This is the main motivation of this thesis work. In this thesis, we describe a novel method for automatically creating 3D object models with shape and reflectance properties by observing real objects.

Previously developed techniques for modeling object shapes by observing real objects take various approaches, including range image merging, shape-from-motion, shape-from-shading, shape-from-focus, and photometric stereo. In addition, there are sensing devices, such as range finders, which measure 3D object shapes directly; many kinds of range sensors are commercially available today, including triangulation-based laser range sensors, time-of-flight laser range sensors, and light-pattern-projection range sensors. The drawback of these sensors is that they are not designed to measure object reflectance properties.

Figure 1 Object model generation


Figure 2 Object model for computer graphics

Attempts to model the reflectance properties of real objects have been rather limited. In most cases, the modeled reflectance properties are too simple to be used for synthesizing realistic images of the object. If only the observed color texture or diffuse texture of a real object's surface is used (e.g., texture mapping), shading effects such as highlights cannot be reproduced correctly in synthesized images. For instance, if highlights are observed on the object surface in the original color images, they are treated as diffuse textures and therefore remain on the object surface permanently, regardless of the illuminating and viewing conditions. This should be avoided for realistic image synthesis, because highlights on object surfaces are known to play an important role in conveying information about surface finish and material type.

There are two approaches to modeling the surface reflectance properties of real objects. The first is to measure the distribution of reflected light densely, i.e., the bidirectional reflectance distribution function (BRDF), and to record that distribution as the reflectance properties. The second is to estimate the parameters of some parametric reflection model function from relatively coarse measurements of reflected light.

A BRDF is measured using a device called a gonioreflectometer. The usual design incorporates a single photometer that moves in relation to a light source, all under the control of a computer. Because a BRDF is, in general, a function of four angles, two incident and two reflected, such a device must have four degrees of mechanical freedom to measure the complete function. This requires substantial complexity in the apparatus design, as well as long periods of time to measure a single surface. Also, real object surfaces very often have non-uniform reflectance, so a single BRDF measurement per object is not enough. More importantly, the accuracy of measured BRDFs is often questionable even when they are carefully measured [88]. For these reasons, BRDFs have rarely been used for synthesizing computer graphics images.

Figure 3 (a) A gonioreflectometer and (b) a typical measurement of a BRDF; the BRDF is redrawn from [75].
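For reference, the BRDF mentioned above has a standard definition (due to Nicodemus et al., not restated in this chapter): it is the ratio of reflected radiance to incident irradiance, and its four angular arguments are exactly why the measurement device needs four degrees of freedom:

```latex
% Standard BRDF definition: reflected radiance over incident irradiance,
% a function of two incident angles and two reflected angles.
f_r(\theta_i, \phi_i; \theta_r, \phi_r)
  = \frac{dL_r(\theta_r, \phi_r)}
         {L_i(\theta_i, \phi_i) \cos\theta_i \, d\omega_i}
```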

Alternatively, we can use a parametric reflectance model to reduce the complexity involved in using BRDFs for synthesizing images. When we have only a relatively coarse measurement of the reflected light distribution, the measurement must be interpolated somehow so that the true distribution of reflected light can be inferred. Here, the best approach is to assume some underlying reflection model, e.g., the Torrance-Sparrow reflection model, as a starting point. By estimating the parameters of the reflection model, we can interpolate the measured distribution of reflected light.

Depending on the object's material type and the sampling method for reflected light, an appropriate reflection model should be selected from those currently available, which were developed either empirically or analytically. Reflection models commonly used in computer vision and computer graphics include the Lambertian model, the Phong model [59], the Blinn-Phong model [8], the Torrance-Sparrow model [85], the Cook-Torrance model [11], the Beckmann-Spizzichino model [5], the He model [20], the Strauss model [80], and the Ward model [87].
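As a concrete example of such a parametric model, the following sketch evaluates the Lambertian model plus a simplified Torrance-Sparrow specular term, the combination adopted later in this thesis (Chapter 3); the symbol names are ours, and a real fit would estimate k_d, k_s, and sigma from the coarse measurements, e.g., by nonlinear least squares:

```python
import numpy as np

def reflected_intensity(theta_i, theta_r, alpha, k_d, k_s, sigma):
    """Lambertian + simplified Torrance-Sparrow reflection model.

    theta_i : angle between the surface normal and the light source
    theta_r : angle between the surface normal and the viewing direction
    alpha   : angle between the surface normal and the bisector of the
              source and viewing directions
    k_d, k_s: diffuse and specular reflection coefficients
    sigma   : surface roughness (std. dev. of the microfacet distribution)
    """
    diffuse = k_d * max(np.cos(theta_i), 0.0)
    specular = (k_s / np.cos(theta_r)) * np.exp(-alpha**2 / (2.0 * sigma**2))
    return diffuse + specular
```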

The estimation of the parameters of a reflection model function has been investigated by other researchers. In some cases, reflectance parameters are estimated only from multiple intensity images; in other cases, both range and intensity images are used to obtain object surface shapes and reflectance parameters. However, all of the previously proposed methods for reflectance parameter estimation are limited in one way or another. For instance, some methods can handle only objects with uniform reflectance, and in other methods, estimation of the reflectance parameters is very sensitive to image noise. To our knowledge, no method is currently being used for estimating the reflectance properties of real objects in real applications.

One of the main reasons why modeling of reflectance properties has been less successful than modeling of object shapes is that diffusely and specularly reflected light, i.e., the diffuse and specular reflection components, are examined simultaneously, which makes estimation of reflectance properties unreliable. For instance, estimation of the diffuse reflection parameters may be corrupted by specularly reflected light observed in the input images. Likewise, estimation of the specular reflection component's parameters often becomes unreliable when specularly reflected light is not observed strongly in the input images.

In this thesis, we tackle the problem of object modeling with a new approach to analyzing a sequence of color images. The new approach allows us to estimate the shape and reflectance parameters of real objects robustly, even when both the diffuse and specular reflection components are observed.

1.1 Goniochromatic Space Analysis of Reflection

We propose a new framework for analyzing object shape and surface properties from a sequence of color images. This framework plays a central role in the reflectance analysis described in this thesis. We observe how the color of an object surface varies with changes in the angular illuminating-viewing conditions, using a four-dimensional “RGB plus illuminating/viewing angle” space. We call this space the goniochromatic space, after the Standard Terminology of Appearance¹ from the American Society for Testing and Materials [78].

1. goniochromatism: change in any or all attributes of color of a specimen on change in angular illuminating-viewing conditions, but without change in light source or observer.
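Concretely, a point in the goniochromatic space pairs the RGB value observed at a pixel with the illuminating/viewing angle at which it was observed. A minimal sketch of collecting such points for one surface point, assuming registered frames and known per-frame angles (all names are ours):

```python
import numpy as np

def goniochromatic_trajectory(images, angles, row, col):
    """Collect the goniochromatic-space points traced by one pixel.

    images : sequence of H x W x 3 RGB frames, registered so that pixel
             (row, col) sees the same surface point in every frame
    angles : per-frame illuminating/viewing angle theta (radians)
    Returns an N x 4 array of (theta, R, G, B) points.
    """
    rgb = np.stack([frame[row, col, :] for frame in images]).astype(float)
    return np.column_stack([np.asarray(angles, dtype=float), rgb])
```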

The goniochromatic space is closely related to two spaces previously used for analyzing color or gray-scale images: the Red-Green-Blue (RGB) color space and the I-θ (image intensity - illuminating/viewing direction) space. Typically, the RGB color space is used for analyzing color information from a single color image. One of the epoch-making works using the RGB color space was done by Shafer [72][73]. He demonstrated that, when illuminated by a single light source, the color cluster of a uniformly colored dielectric object in the RGB color space forms a parallelogram defined by two color vectors, namely the diffuse reflection vector and the specular reflection vector [72]. Subsequently, Klinker et al. [39] demonstrated that the cluster actually forms a T shape in the color space rather than a parallelogram; they separated the diffuse and specular reflection components by geometrically clustering a scatter plot of the image in the RGB color space. However, their method requires that objects be uniformly colored and that surface shapes not be planar. For example, if an object has a multicolored or highly textured surface, the cluster in the RGB color space becomes cluttered, and separation of the two reflection components becomes impossible. If the object's surface is planar, the cluster collapses to a point in the RGB color space, and again the separation becomes impossible. As a result, the method can be applied to only a limited class of objects.

On the other hand, the I-θ space has been used for analyzing gray-scale image sequences. This space represents how a pixel's intensity changes as the illumination or viewing geometry changes. Using this space, Nayar et al. analyzed a gray-scale image sequence produced by a moving light source [49]. Their method can separate the diffuse and specular reflection components from an observed intensity change in the I-θ space, and can estimate the shapes and reflectance parameters of objects with hybrid surfaces². The main advantage of their method is that all necessary information is obtained from a single point on the object surface, and therefore the method can be applied locally. This is advantageous compared to the algorithm developed by Klinker et al. [39], which examines the color cluster formed from an entire image globally. However, the Nayar et al. method is still limited in the sense that only a small group of real objects have hybrid surfaces, and the method requires an imaging apparatus of specific dimensions, e.g., a particular light diffuser diameter and distance from the light source to the diffuser.

2. In the paper [49] by Nayar et al., a hybrid surface is defined as one which exhibits the diffuse lobe reflection component and the specular spike reflection component. (These two reflection components and the specular lobe reflection component are described in more detail in Chapter 3.)


Figure 4 Goniochromatic space

Unlike the RGB color space and the I-θ space, GSA does not require strong assumptions such as uniform reflectance, non-planar surfaces, hybrid surfaces, or an imaging apparatus of specific dimensions. Using GSA, we can separate the two reflection components locally from a color image sequence and obtain the shape and reflectance parameters of objects.

1.2 Object Modeling from Color Image Sequence

Based on GSA, we have developed a new method for estimating object shapes and reflectance parameters from a sequence of color images taken under a moving point light source. First, the diffuse and specular reflection components are separated from the color image sequence; the separation process does not assume any specific reflection model. Then, using the separated reflection components, the object shape and the parameters of a specific reflection model are estimated.
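In outline (the details appear in Chapter 3), the separation can be posed as a linear decomposition: the N RGB measurements at a pixel form an N x 3 matrix that, under a dichromatic-style model, factors into an N x 2 matrix of geometric scale factors times a 2 x 3 matrix whose rows are the diffuse and specular color vectors. Once the two color vectors are estimated, the factors follow by least squares. A sketch under that assumption (names are ours):

```python
import numpy as np

def separate_reflection(M, diffuse_color, specular_color):
    """Split per-frame RGB measurements into two reflection components.

    M              : N x 3 RGB measurements of one surface point (N frames)
    diffuse_color  : 3-vector, diffuse reflection color vector (estimated)
    specular_color : 3-vector, specular (illuminant) color vector (estimated)
    """
    K = np.vstack([diffuse_color, specular_color])  # 2 x 3 color matrix
    G = M @ np.linalg.pinv(K)                       # N x 2 geometric factors
    diffuse = np.outer(G[:, 0], diffuse_color)      # N x 3 diffuse component
    specular = np.outer(G[:, 1], specular_color)    # N x 3 specular component
    return diffuse, specular
```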

Like the I-θ space analysis, our method requires only local information. In other words, we can recover the object shape and reflectance parameters from the color change at each point on the object surface; the method does not depend on the observed color at other portions of the surface. In addition, our method is not restricted to a specific reflection model, i.e., a hybrid surface, or to a specific imaging apparatus. Thus, our method can be applied to a wide range of objects.


Figure 5 Reflection component separation

Currently, our method can handle only the case where the object's surface normals lie in a 2D plane. This is a rather severe limitation. However, the limitation arises because only coplanar motion of the light source is used; it is not an inherent limitation of the method. For instance, if only two light source locations are used for photometric stereo, two candidate surface normals are obtained at each surface point. The ambiguity can be resolved by adding one more light source location which is not coplanar with the other two. The same can be done for our method, but this has not been tested in the research conducted for this thesis.
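To make the ambiguity concrete, the following is textbook Lambertian photometric stereo, not the method of this thesis: with three non-coplanar light directions the linear system below has a unique solution, while with two lights (or any coplanar set) it is rank-deficient and two mirror-image normals fit the data equally well.

```python
import numpy as np

def photometric_stereo_normal(L, I):
    """Recover a Lambertian surface normal from K >= 3 light directions.

    L : K x 3 matrix of unit light-source directions (non-coplanar)
    I : K-vector of intensities observed at one pixel
    Solves I = L (albedo * n) in the least-squares sense; with coplanar
    lights, rank(L) < 3 and the normal is not uniquely determined.
    """
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * normal
    return g / np.linalg.norm(g)
```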

The main limitation of this method is that an object's shape cannot be recovered accurately if the object has a surface with high curvature. This is generally true of all methods which use only intensity images, e.g., shape-from-shading and photometric stereo. Another limitation is that the surface shape and reflectance parameters can be recovered only for part of the surface: obviously, we cannot see the back of an object unless we rotate the object or change our viewpoint.

1.3 Object Modeling from Range and Color Image Sequences

To overcome the limitations noted in the previous section, we investigated another method. The goal was to create object models with complete shapes, even for objects with high-curvature surfaces. To attain this goal, we developed a method for creating complete object models from sequences of range and color images taken while changing the object's posture.

One advantage of using range images for shape recovery is that object shapes can be obtained as triangular mesh models, which represent full 3D information about the object's shape. In contrast, the method described in Section 1.2 produces only surface normals, i.e., 2.5D information about the object surface. Hence, the object surface can be obtained as a triangular mesh model only after some sort of integration procedure is applied to the surface normals, and in general this integration does not work well for object surfaces with high curvature. Moreover, a single range image cannot capture an entire object shape: it measures only the partial shape seen from the range sensor. Therefore, we need to observe the object from various viewpoints to see its surface entirely, and then merge the multiple range images into a complete shape model. In this thesis, two different algorithms are used for integrating multiple range images: a surface-based method and a volume-based method.

When we apply GSA to a sequence of range and color images, we face a correspondence problem between color image frames. As mentioned above, GSA examines a sequence of observed colors as the illuminating/viewing geometry changes; therefore, we need to know where each point on the object surface appears in each input color image. This correspondence problem did not arise for a color image sequence taken with a moving light source: in that case, the object location and the viewing point were fixed, so each point on the object surface appeared at the same pixel coordinates throughout the sequence.

Fortunately, we can solve the correspondence problem by using the reconstructed triangular mesh model of the object shape. Having determined the object locations and camera parameters from calibration, we project each color image frame back onto the reconstructed object surface. By projecting all of the color image frames, we can determine the observed color change at each point on the object surface as the object is rotated. We then apply GSA to the observed color change to separate the diffuse and specular reflection components.
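The back-projection step reduces to applying the calibrated camera model to each mesh vertex in each frame. A minimal pinhole sketch, assuming a 3 x 4 projection matrix from calibration and a known rigid pose (R, t) of the object in each frame; the names are ours, and a real implementation must also reject vertices that are self-occluded in that frame:

```python
import numpy as np

def vertex_color_in_frame(vertex, R, t, P, image):
    """Look up the RGB a mesh vertex projects to in one calibrated frame.

    vertex : 3-vector in the object coordinate frame
    R, t   : object pose for this frame (3 x 3 rotation, translation)
    P      : 3 x 4 camera projection matrix from calibration
    image  : H x W x 3 color frame
    """
    Xw = R @ vertex + t                # object -> world coordinates
    u, v, w = P @ np.append(Xw, 1.0)   # homogeneous pixel coordinates
    col, row = int(round(u / w)), int(round(v / w))
    return image[row, col, :]          # observed color in this frame
```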

After the two reflection components are separated, we estimate the reflectance parameters of each component. Here, we consider two different classes of objects. The first class comprises objects which are painted in multiple colors and do not have detailed surface textures; in this case, the object surface can be segmented into multiple regions of uniform diffuse color. The second class comprises objects with highly textured surfaces; in this case, object surfaces cannot be clearly segmented.

We investigated two different approaches for these two classes of objects. For the first class, objects without detailed surface texture, we developed a method to estimate reflectance parameters based on region segmentation of the object surface. Each segmented region is assigned the same specular reflection parameters, under the assumption that a region of uniform diffuse color has more or less uniform specular reflectance. Each triangle of the triangular mesh object model is then assigned its diffuse and specular parameters.

Figure 6 Synthesized image of an object without texture

For the second class of objects, those with highly textured surfaces, region segmentation cannot be performed reliably. Therefore, we developed another method using a slightly different approach. Instead of assigning one set of reflectance parameters to each triangle of the triangular mesh object model, each triangle is assigned a texture of reflectance parameters and surface normals, as sketched after the figure below. This method is similar to the conventional texture mapping technique; unlike that technique, however, it can synthesize color images with realistic shading effects such as highlights. Finally, highly realistic object images are synthesized using the created object models with shape and reflectance properties.

Figure 7 Synthesized images of an object with texture
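One way to picture a "texture of reflectance parameters" is as per-triangle grids holding, at each texel, the quantities that an ordinary texture map would hold only for color. A hypothetical layout (the field names are ours, not the thesis's notation):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ReflectanceTexture:
    """Per-triangle maps of reflectance parameters and surface normals.

    Each array is an M x M grid over the triangle, sampled and rendered
    the same way an RGB texture would be, but carrying model parameters
    so that highlights can be re-synthesized under new lighting.
    """
    diffuse_color: np.ndarray   # M x M x 3 diffuse reflection color
    specular_coeff: np.ndarray  # M x M specular reflection coefficient
    roughness: np.ndarray       # M x M surface roughness (sigma)
    normal: np.ndarray          # M x M x 3 dense surface normals
```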


1.4 Reflectance Analysis under Solar Illumination

Most algorithms for analyzing object shape and reflectance properties, including our methods described above, have been applied to images taken in a laboratory.

Images synthesized or taken in a laboratory are well controlled and less complex than those taken outside under sunlight. For instance, in an outdoor environment, there are multiple light sources of different colors and spatial distributions, namely the sunlight and the skylight. The sunlight can be regarded as a point light source whose movement is restricted to the ecliptic, while the skylight acts as a blue extended light source. These multiple light sources create more than two reflection components from the object surface, unlike the single known light source in a laboratory setup.

Also, due to the sun’s restricted movement, the problem of surface normal recoverybecomes underconstrained under the sunlight. For instance, if the photometric stereomethod is applied to two intensity images taken outside at different times, two surface nor-mals which are symmetric with respect to the ecliptic are obtained at each surface point.Those two surface normals cannot be distinguished locally because those two surface nor-mal directions give us exactly the same brightness at the surface point.

In this thesis, we address the issues involved in analyzing real outdoor intensity images taken under solar illumination: the multiple reflection components, including highlights, and the ambiguous solution for surface normals. For these difficulties, we propose a solution and then demonstrate its feasibility using test images taken both in a laboratory setup and outdoors under the sun.

Figure 8 Image taken under solar illumination


1.5 Thesis Outline

This thesis presents new approaches for modeling object surface reflectance properties, as well as shapes, by observing real objects in both indoor and outdoor environments. The methods are based on a novel algorithm called goniochromatic space analysis for separating the diffuse and specular reflection components from a color image sequence.

This thesis is organized as follows. In Chapter 2, we introduce the goniochromatic space and explain the similarities and differences between the goniochromatic space and the two other spaces commonly used for reflectance analysis: the RGB color space and the I-θ space. In Chapter 3, we discuss our method for modeling object shapes and reflectance parameters from a color image sequence. In Chapters 4 and 5, we describe two different methods for modeling object shape and reflectance parameters from a sequence of range and color images. In Chapter 6, we describe our attempt to analyze the shape and reflectance properties of an object using a color image sequence taken under solar illumination. Finally, in Chapter 7, we summarize the work presented in this thesis and give conclusions and directions for future research.


Chapter 2

Goniochromatic Space Analysis of Reflection

2.1 Background

Color spaces, especially the RGB color space, have been widely used by the computer vision community to analyze color images. One of the first applications of color space analysis was image segmentation by partitioning a color histogram into Gaussian clusters (Haralick and Kelly [18]). A histogram is created from the color values at all image pixels; it tells, for each point in the RGB color space, how many pixels exhibit that color. Typically, the colors tend to form clusters in the histogram, one for each textureless object in the image. By manual or automatic analysis of the histogram, the shape of each cluster is determined. Each pixel in the color image is then assigned to the cluster that is closest to the pixel's color in the RGB color space. Following the work by Haralick and Kelly, a number of image segmentation techniques have been developed [1], [9], [10], [61].
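A minimal version of this histogram-based segmentation, simplified from the description above (Haralick and Kelly fit Gaussian clusters; here the cluster centers are assumed to have been found already):

```python
import numpy as np

def segment_by_clusters(image, centers):
    """Assign every pixel to the nearest RGB cluster center.

    image   : H x W x 3 color image
    centers : C x 3 cluster centers found by analyzing the histogram
    Returns an H x W array of cluster labels.
    """
    pixels = image.reshape(-1, 3).astype(float)
    dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    return dist.argmin(axis=1).reshape(image.shape[:2])
```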

Most of the early work in color computer vision used color information as a random variable for image segmentation. Later, many researchers tried using knowledge about how color is created to analyze a color image and compute important properties of the objects in it.

Shafer [73] carefully examined the physical properties of reflection when light strikes an inhomogeneous surface, i.e., a surface of a material such as plastic, paint, ceramic, or paper. An inhomogeneous surface consists of a medium with particles of colorant suspended in it. When light hits such a surface, there is a change in the index of refraction at the interface. Part of the light is reflected at the interface in the perfect specular direction, where the angle of incidence equals the angle of reflection; this forms the specular reflection component, i.e., the highlights seen on shiny materials. The light that penetrates the interface is scattered and selectively absorbed by the colorant, and is then re-emitted into the air to become the diffuse reflection component.

Based on this observation, Shafer proposed the first realistic color reflection model used in computer vision: the dichromatic reflection model. This model states that the reflectance of an object may be divided into two components: the interface, or specular, reflection component, and the body, or diffuse, reflection component. In addition, Shafer demonstrated that, when illuminated by a single light source, the color cluster of a uniformly colored dielectric object in the color space forms a parallelogram defined by two color vectors, namely the specular reflection vector and the diffuse reflection vector.
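In Shafer's notation, the dichromatic reflection model states that the reflected light is the sum of two terms, each separable into a geometric scale factor and a fixed spectral distribution:

```latex
% Dichromatic reflection model: interface (specular) plus body (diffuse)
% reflection; i, e, g are the incidence, emittance, and phase angles.
L(\lambda, i, e, g) = m_s(i, e, g)\, c_s(\lambda) + m_b(i, e, g)\, c_b(\lambda)
```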

The dichromatic reflection model proposed by Shafer has inspired a large amount of important related work. Klinker, Shafer, and Kanade [39][40] demonstrated that, instead of a parallelogram, the cluster actually forms a T shape in the color space, and they separated the diffuse and specular reflection components by geometrically clustering a scatter plot of the image in the RGB color space. They used the separated diffuse reflection component for segmentation of a color image without suffering from the disturbances of highlights. Their method is based on the assumption that the surface normals in an image are widely distributed in all directions; this assumption guarantees that both the diffuse reflection vector and the specular reflection vector will be visible. Therefore, their algorithm cannot handle cases where only a few planar surface patches exist in the image.

Bajcsy and Lee [2] proposed using the hue-saturation-intensity (HSI) color space instead of the RGB color space for analyzing a color image. They studied clusters in the HSI color space formed by scene events such as shading, highlights, shadows, and interreflection. Based on this analysis, their algorithm uses a hue histogram technique to segment individual surfaces and then follows with local thresholding to identify highlights and interreflection. This technique was the first to identify interreflection successfully from a single color image. The algorithm is shown to be effective on color images of glossy objects.

Novak and Shafer [56] presented an algorithm for analyzing color histograms. The algorithm yields estimates of surface roughness, the phase angle between the camera and the light source, and the illumination intensity. In their paper, they showed that these properties cannot be computed analytically, and they developed a method for estimating them based on interpolation between histograms that come from images of known scene properties. The method was tested using both simulated and real images, and successfully estimated those properties from a single color image.

Lee and Bajcsy [44] presented an interesting algorithm for the detection of specularities from Lambertian reflections using multiple color images taken from different viewing directions. The algorithm is based on the observation that the reflected light intensity from the diffuse reflection component at an object surface does not change with viewing direction, whereas the reflected light intensity from the specular reflection component, or from a mixture of the diffuse and specular reflection components, does change. This algorithm differs from the other algorithms described above in that multiple color images taken from different viewing directions are used to differentiate the color histograms of the color images. In this respect, this algorithm is the one most closely related to the color analysis framework proposed in this thesis. However, their method still suffers from the fact that a color histogram of a color image is analyzed: when input color images contain many objects with non-uniform reflectance, the color histograms of those images become too cluttered to be used for detecting the specular reflection component. Also, Lee and Bajcsy's method cannot compute reflectance parameters of objects, and it is not clear how the method could be extended for reflectance parameter estimation.

All of the algorithms for color image analysis described in this section examine histograms formed either in the RGB color space or in some other color space. This means that these methods depend on global information extracted from color images. In other words, those algorithms require color histograms which are not too cluttered and can be segmented clearly. If color images contain many objects with non-uniform reflectance, then the color histograms become impossible to segment; therefore, the algorithms will fail.

Another limitation of those algorithms is that there is little or no consideration of the illuminating/viewing geometry. In other words, those algorithms, with the exception of the one by Lee and Bajcsy, do not examine how the observed color changes as the illuminating/viewing geometry changes. This makes it very difficult to extract any information about object surface reflectance properties. (Strictly speaking, this is not true of the work by Novak and Shafer [56], where surface roughness is estimated from a color histogram. However, their algorithm does not work well for cluttered color images.)

On the other hand, other techniques have been developed for analyzing gray-scale images. Those techniques include shape-from-shading and photometric stereo. The shape-from-shading technique introduced by Horn [32] recovers object shapes from a single intensity image. In this method, surface orientations are calculated starting from a chosen point whose orientation is known a priori, by using the characteristic strip expansion method. Ikeuchi and Horn [33] developed a shape-from-shading technique which uses the occluding boundaries of an object to iteratively calculate surface orientation.

In general, shape-from-shading techniques require rather strong assumptions about object surface shape and reflectance properties, e.g., a smoothness constraint and uniform reflectance. The limitation comes from the fact that only one intensity image is used, and therefore shape-from-shading is a fundamentally under-constrained problem.

Photometric stereo was introduced by Woodham [95] as a technique for recovering surface orientation from multiple gray-scale images taken with different light source locations. Surface normals are determined from the combination of constraints provided by reflectance maps with respect to different incident directions of a point light source. Unlike shape-from-shading techniques, Woodham's technique does not rely on assumptions such as the surface smoothness constraint. However, the technique is still based on the assumption of a Lambertian surface. Hence, the technique can be applied only to object surfaces exhibiting the diffuse reflection component alone.

While specularities have usually been considered a source of error in surface normal estimation by photometric stereo methods, some researchers have proposed the opposite idea of using the specular reflection component as a primary source of information for shape recovery. Ikeuchi was the first to develop a photometric stereo technique that can handle purely specular reflecting surfaces [34].

Nayar, Ikeuchi and Kanade [49] developed a photometric stereo technique for recovering the shape of objects with surfaces exhibiting both the diffuse and specular reflection components, i.e., hybrid surfaces. These reflection components can vary in relative strength, from purely Lambertian to purely specular. The technique determines 2D surface orientation and the relative albedo strength of the diffuse and specular reflection components. The key is to use extended rather than point light sources so that a non-zero specular component is detected from more than just one light source. In fact, the extent of the light sources and their spacing are chosen so that, for a hybrid surface, a non-zero specular component results from two consecutive light sources, with the rest of the observed reflections coming only from the diffuse reflection component. Later, this technique was extended by Sato, Nayar, and Ikeuchi [63] to compute 3D surface orientations.

Lu and Little developed a photometric stereo technique to estimate a reflectance function from a sequence of gray-scale images taken by rotating a smooth object, and the object shape was successfully recovered using the estimated reflectance function [47]. Since the reflectance function is measured directly from the input image sequence, the method does not assume a particular reflection model such as the Lambertian model commonly used in computer vision. However, their algorithm can be applied only to object surfaces with uniform reflectance properties, and it cannot easily be extended to overcome this limitation.

These photometric stereo techniques determine surface normals and reflectance parameters by examining how the reflected light intensity at a surface point changes as the light source direction varies. This intensity change can be represented in the I-θ (image intensity versus illumination direction) space.

The main difference between the I-θ space and the RGB color space is that the former can represent the intensity change caused by a change in the illumination/viewing geometry, while the latter cannot. This ability is a significant advantage when we want to measure various properties of object surfaces such as surface normals and reflectance properties. Also, the I-θ space uses the intensity change observed at each surface point. Therefore, the necessary information can be obtained locally, while the RGB color space uses a color histogram which is formed globally from an entire color image.

However, the I-θ space fails to represent a piece of information that is important for reflectance analysis, namely color. Therefore, it is desirable to have a new framework which can represent both the observed color information and its change caused by different illumination/viewing geometry.

In this thesis, we propose a new framework for analyzing object shape and surface properties from a sequence of color images. We observe how the color of the image varies with change in the angular illuminating-viewing conditions, using a four dimensional "RGB plus illuminating/viewing angle" space. We call this space the goniochromatic space, after the Standard Terminology of Appearance of the American Society for Testing and Materials [78], which defines goniochromatism as a "change in any or all attributes of color of a specimen on change in angular illuminating-viewing conditions but without change in light source or observer."

This chapter first briefly describes the conventional RGB color space and the I-θ space in Section 2.2 and Section 2.3. Then, in Section 2.4, we introduce the goniochromatic space in comparison to those two other spaces.

2.2 The RGB Color Space

A pixel intensity I is determined by the spectral distribution h(λ) of the light incident on the camera and the camera response s(λ) to the various wavelengths λ, i.e.,

I = \int s(\lambda)\, h(\lambda)\, d\lambda    (EQ1)

A color camera has color filters attached in front of its sensor device. Each color filter has a transmittance function τ(λ) which determines the fraction of light transmitted at each wavelength λ. Then, the pixel intensities I_R, I_G, and I_B from the red, green, and blue channels of the color camera are given by the following integrations:

I_R = \int \tau_R(\lambda)\, s(\lambda)\, h(\lambda)\, d\lambda, \quad
I_G = \int \tau_G(\lambda)\, s(\lambda)\, h(\lambda)\, d\lambda, \quad
I_B = \int \tau_B(\lambda)\, s(\lambda)\, h(\lambda)\, d\lambda    (EQ2)

where τ_R(λ), τ_G(λ), and τ_B(λ) are the transmittance functions of the red, green, and blue filters, respectively. The three intensities I_R, I_G, and I_B form a 3 × 1 color vector C which represents the color of a pixel in the RGB color space.

C = \begin{bmatrix} I_R \\ I_G \\ I_B \end{bmatrix}
  = \begin{bmatrix}
      \int \tau_R(\lambda)\, s(\lambda)\, h(\lambda)\, d\lambda \\
      \int \tau_G(\lambda)\, s(\lambda)\, h(\lambda)\, d\lambda \\
      \int \tau_B(\lambda)\, s(\lambda)\, h(\lambda)\, d\lambda
    \end{bmatrix}    (EQ3)
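For concreteness, the sketch below (Python with NumPy; the discretized spectra are purely hypothetical Gaussians, not measured curves) evaluates (EQ2) and (EQ3) numerically:

    import numpy as np

    # Hypothetical spectra, sampled every 5 nm over the visible range.
    lam = np.arange(400.0, 701.0, 5.0)                 # wavelength lambda [nm]
    h = np.exp(-0.5 * ((lam - 560.0) / 80.0) ** 2)     # incident light h(lambda)
    s = np.ones_like(lam)                              # camera response s(lambda)
    tau = {                                            # filter transmittances tau_m(lambda)
        "R": np.exp(-0.5 * ((lam - 610.0) / 30.0) ** 2),
        "G": np.exp(-0.5 * ((lam - 540.0) / 30.0) ** 2),
        "B": np.exp(-0.5 * ((lam - 460.0) / 30.0) ** 2),
    }

    # (EQ2), (EQ3): each channel integrates tau_m * s * h over wavelength.
    C = np.array([np.trapz(tau[m] * s * h, lam) for m in "RGB"])
    print("color vector C =", C)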

Klinker, Shafer, and Kanade [39][40] demonstrated that the histogram of dielectric object color in the RGB color space forms a T-shape (Figure 9). They extracted the two components of the T-shape in order to separate the specular reflection component and the diffuse reflection component.


Figure 9 A sphere and its color histogram forming a T-shape in the RGB color space (synthesized data)

A significant limitation of the method is that it works only when surface normals in the image are distributed widely in all directions. Suppose that the image contains only one planar object illuminated by a light source located far away from the object. Then, because the observed color is constant over the planar surface, all pixels on the object are mapped to a single point in the RGB color space: the T-shape converges to a single point which represents the color of the object. As a result, we cannot separate the reflection components. This indicates that the method cannot be applied locally.

2.3 The I-θ (Intensity - Illuminating/Viewing Angle) Space

Nayar, Ikeuchi, and Kanade [49] analyzed an image sequence given by a moving light source in the I-θ space. They considered how the pixel intensity changes as the light source direction varies (Figure 10).

The pixel intensity I from a monochrome camera is written as a function of θ:

I(\theta) = g(\theta) \int s(\lambda)\, h(\lambda)\, d\lambda    (EQ4)

where g(θ) represents the intensity change with respect to the light source direction θ. Note that the spectral distribution h(λ) of the light incident on the camera is generally dependent on geometric relations such as the viewing direction and the illumination direction. However, as an approximation, we assume that the function h(λ) is independent of these factors.

The vector p = (θ, I(θ)) shows how the pixel intensity changes with respect to the light source direction θ in the I-θ space (Figure 11).

As opposed to analysis in the RGB color space, the I-θ space analysis is applied locally. All necessary information is extracted from the intensity change at each individual pixel. Nayar et al. [49] used the I-θ space to separate the surface reflection component and the diffuse reflection component, using a priori knowledge of the geometry of the “photometric sampler.”


Figure 10 Viewer-centered coordinate system. These three vectors are coplanar.

Figure 11 The I-θ space

2.4 The Goniochromatic Space

Without resorting to relatively strong assumptions, neither the RGB color space nor the I-θ space can be used to separate the two reflection components using local pixel information. To overcome this weakness, we propose a new four dimensional space, which we call the goniochromatic space. This four dimensional space is spanned by the R, G, B, and θ axes. The term “goniochromatic space” implies an augmentation of the RGB color space with an additional dimension that represents varying illumination/viewing geometry. This dimension represents the geometric relationship between the viewing direction, the illumination direction, and the surface normal. In the method that we describe more fully in the next chapter, we keep the viewing direction and the surface normal orientation fixed. Then, we vary the illumination direction, taking a new image at each new illumination direction. (The same information could be obtained if we kept the viewing direction and illumination direction fixed, and varied the surface normal orientation. This case will be described in Chapter 4 and Chapter 5.)

The goniochromatic space can be thought of as a union of the RGB color space and the I-θ space. By omitting the θ axis, the goniochromatic space becomes equivalent to the RGB color space; and by omitting two color axes, the goniochromatic space becomes the I-θ space. Each point p = (θ, C(θ)) in the space is represented by the light source direction θ and the color vector C(θ), which is a function of θ:

p = (\theta, C(\theta))    (EQ5)

C(\theta) = \begin{bmatrix} I_R(\theta) \\ I_G(\theta) \\ I_B(\theta) \end{bmatrix}
          = \begin{bmatrix}
              g(\theta) \int \tau_R(\lambda)\, s(\lambda)\, h(\lambda)\, d\lambda \\
              g(\theta) \int \tau_G(\lambda)\, s(\lambda)\, h(\lambda)\, d\lambda \\
              g(\theta) \int \tau_B(\lambda)\, s(\lambda)\, h(\lambda)\, d\lambda
            \end{bmatrix}    (EQ6)

The goniochromatic space represents how the observed color C(θ) of a pixel changes as the direction θ of the light source changes (Figure 12). Note that, in Figure 12, the dimension of the goniochromatic space is reduced from four to three for clarity. In this diagram, one axis of the RGB color space is ignored.
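As an illustration of how such samples can be assembled (Python/NumPy; the image stack, angles, and pixel location below are hypothetical stand-ins for real input data):

    import numpy as np

    # Hypothetical input: a stack of m registered color images taken under
    # m light source directions theta (degrees), same viewpoint throughout.
    m, height, width = 32, 120, 160
    thetas = np.linspace(-150.0, 0.0, m)
    images = np.random.rand(m, height, width, 3)   # stand-in for real data

    def goniochromatic_samples(images, thetas, row, col):
        """Return the (theta, R, G, B) samples traced by one pixel, i.e., the
        curve that pixel draws in the goniochromatic space."""
        colors = images[:, row, col, :]             # m x 3 color vectors C(theta)
        return np.column_stack([thetas, colors])    # m x 4 array of (theta, R, G, B)

    samples = goniochromatic_samples(images, thetas, row=60, col=80)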


Figure 12 The goniochromatic space (synthesized data)


Chapter 3

Object Modeling from Color Image Sequence

In the previous chapter, the goniochromatic space was introduced as a framework for analyzing a sequence of color images. In this chapter, we introduce a method for estimating object surface shape and reflectance properties from a sequence of color images taken by changing the illuminating direction. The method consists of two steps. First, by using the GSA introduced in the previous chapter, the diffuse and specular reflection components are separated from the color image sequence. Then, surface normals and reflectance parameters are estimated based on the separation results. The method was successfully applied to real images of objects made of different materials.

Objects that we consider in this chapter are made of dielectric or metal materials. Also, the method can be applied only to objects whose surface normals lie in the 2D plane defined by the light source direction and the viewing direction. Note that this is not a limitation inherent to the proposed method; rather, it is due to the limited coplanar motion of the light source, as we will see later in this chapter.

This chapter is organized as follows. First, the parametric reflectance model used in our analysis is described in Section 3.1. Then, the decomposition of the diffuse and specular reflection components from a color image sequence is explained in Section 3.2. The decomposition method requires the specular reflection color and the diffuse reflection color; methods for estimating these colors are explained in Section 3.3 and Section 3.4, respectively. The results of experiments conducted using objects of different materials are presented in Section 3.5. A summary of this chapter is given in Section 3.6.


3.1 Reflection Model

The mechanism of reflection is described in terms of three reflection components, namely the diffuse lobe, the specular lobe, and the specular spike [50]. The light energy reflected from an object surface is a combination of these three components.

The diffuse lobe component may be explained as internal scattering. When an incident light ray penetrates the object surface, it is reflected and refracted repeatedly at the boundaries between the small particles and the medium of the object. The scattered light ray eventually reaches the object surface and is refracted into the air in various directions. This phenomenon results in the diffuse lobe component. The Lambertian model is based on the assumption that those directions are evenly distributed in all directions.

On the other hand, the specular spike and lobe are explained as light reflected at the interface between the air and the surface medium. The specular lobe component spreads around the specular direction, while the specular spike component is zero in all directions except for a very narrow range around the specular direction. The relative strengths of the two components depend on the microscopic roughness of the surface.

Figure 13 Polar plot of the three reflection components (redrawn from [50])

Unlike the diffuse lobe and the specular lobe components, the specular spike component is not commonly observed in many actual applications. The component can be observed only from mirror-like smooth surfaces, where the reflected light rays of the specular spike component are concentrated in the specular direction. That makes it hard to observe the specular spike component from viewing directions at coarse sampling angles. Therefore, in many computer vision and computer graphics applications, the reflection mechanism is modeled as a linear combination of two reflection components: the diffuse lobe component and the specular lobe component.

Those two reflection components are normally called the diffuse reflection component and the specular reflection component. The reflection model was formally introduced by Shafer [73] as the dichromatic reflection model. Based on the dichromatic reflection model, the reflection model used in our analysis is represented as a linear combination of the diffuse reflection component and the specular reflection component.

Figure 14 Reflection model used in our analysis

The Torrance-Sparrow model is relatively simple and has been shown to conform with experimental data [85]. In our analysis, we use the Torrance-Sparrow model [85] for representing the diffuse reflection component and the specular reflection component. As we will see in Section 3.1.2, this model describes the reflection of incident light rays on rough surfaces, i.e., the specular lobe component, and captures important phenomena such as the off-specular effect and spectral change within highlights.


3.1.1 The Lambertian Model

The Torrance-Sparrow model uses the Lambertian model for representing the diffuse reflection component. The Lambertian model has been used extensively in many computer vision techniques such as shape-from-shading and photometric stereo, and it was the first model proposed to approximate the diffuse reflection component.

The mechanism of the diffuse reflection is explained as internal scattering. When an incident light ray penetrates the object surface, it is reflected and refracted repeatedly at the boundaries between the small particles and the medium of the object (Figure 15). The scattered light ray eventually reaches the object surface and is refracted into the air in various directions. This phenomenon results in body reflection. The Lambertian model is based on the assumption that the directions of the refracted light rays are evenly distributed in all directions.

Figure 15 Internal scattering and surface reflection

Figure 16 Solid angles of a light source and illuminated surface


For a Lambertian surface, the radiance of the surface is proportional to the irradiance onto the surface. Let dΦ_i be the incident flux onto the surface element dA_s (Figure 16). Then,

d\Phi_i = L_i\, d\omega_s\, dA_i    (EQ7)

where L_i is the source radiance [W/(m^2 \cdot sr)]. Also,

dA_i = d\omega_i\, r^2    (EQ8)

d\omega_s = \frac{dA_s \cos\theta_i}{r^2}    (EQ9)

Substituting (EQ8) and (EQ9) into (EQ7),

d\Phi_i = L_i\, d\omega_i\, dA_s \cos\theta_i    (EQ10)

Therefore, the irradiance P_s of the surface is

P_s = \frac{d\Phi_i}{dA_s} = L_i\, d\omega_i \cos\theta_i    (EQ11)

As stated above, since the radiance of a Lambertian surface is proportional to the irradiance, the radiance of the surface is

L_r = k_D(\lambda)\, L_i\, d\omega_i \cos\theta_i    (EQ12)

where k_D(λ) represents the ratio of the radiance to the irradiance.

It is known that this model tends to describe the diffuse lobe component poorly as surface roughness increases. Other models, e.g., [57], [94], describe the diffuse lobe component more accurately. However, these more sophisticated diffuse reflection models were not used in our analysis because they are considerably more complex and therefore expensive to use.

3.1.2 The Torrance-Sparrow Reflection Model

The Torrance-Sparrow model describes the single reflection of incident light rays by rough surfaces. This model is reported to be valid when the wavelength of light is much smaller than the roughness of the surface [85], a condition which is true for most objects. The surface is modeled as a collection of planar micro-facets which are perfectly smooth and reflect light rays as perfect specular reflectors. The geometry for the Torrance-Sparrow model is shown in Figure 17. The surface element dA_s is located at the center of the coordinate system. An incoming light beam lies in the X-Z plane and is incident to the surface at an angle θ_i. The radiance and the solid angle of the light source are represented as L_i and dω_i, respectively. In the Torrance-Sparrow model, the micro-facet slopes are assumed to be normally distributed. Additionally, the distribution is assumed to be symmetric around the mean surface normal n. The distribution is represented by the one-dimensional normal distribution

\rho_\alpha(\alpha) = c \exp\!\left( -\frac{\alpha^2}{2\sigma_\alpha^2} \right)    (EQ13)

where c is a constant, and the facet slope α has mean value ⟨α⟩ = 0 and standard deviation σ_α. In the geometry shown in Figure 17, only planar micro-facets having normal vectors within the solid angle dω′ can reflect incoming light flux specularly. The number of facets per unit area of the surface that are oriented within the solid angle dω′ is equal to ρ_α(α)dω′. Hence, considering the area a_f of each facet and the area dA_s of the illuminated surface, the incoming flux on the set of reflecting facets is determined as

d^2\Phi_i = L_i\, d\omega_i\, a_f\, \rho_\alpha(\alpha)\, d\omega'\, dA_s \cos\theta_i'    (EQ14)

The Torrance-Sparrow reflection model considers two terms to determine what portion of the incoming flux is reflected as outgoing flux. One term is the Fresnel reflection coefficient F(θ_i′, η′, λ), where η′ is the refractive index of the material and λ is the wavelength of the incoming light. The other term, called the geometric attenuation factor, is represented as G(θ_i, θ_r, φ_r). This factor accounts for the fact that, at large incidence angles, light incoming to a facet may be shadowed by adjacent surface irregularities, and outgoing light along the viewing direction that grazes the surface may be masked or interrupted in its passage to the viewer. Considering those two factors, the flux d^2Φ_r reflected into the solid angle dω_r is determined as

d^2\Phi_r = F(\theta_i', \eta', \lambda)\, G(\theta_i, \theta_r, \phi_r)\, d^2\Phi_i .    (EQ15)

The radiance dL_r of the reflected light is defined as

dL_r = \frac{d^2\Phi_r}{d\omega_r\, dA_s \cos\theta_r} .    (EQ16)

Substituting (EQ14) and (EQ15) into (EQ16), we obtain


dL_r = \frac{F(\theta_i', \eta')\, G(\theta_i, \theta_r, \phi_r)\, L_i\, d\omega_i\, \left( a_f\, \rho_\alpha(\alpha)\, d\omega'\, dA_s \right) \cos\theta_i'}{d\omega_r\, dA_s \cos\theta_r} .    (EQ17)

Since only facets with normals that lie within the solid angle dω′ can reflect light into the solid angle dω_r, the two solid angles are related as

d\omega' = \frac{d\omega_r}{4 \cos\theta_i'} .    (EQ18)

Substituting (EQ13) and (EQ18) into (EQ17), the surface radiance dL_r of the surface element dA_s given by the specular reflection component is represented as

dL_r = \frac{c\, a_f\, F(\theta_i', \eta')\, G(\theta_i, \theta_r, \phi_r)}{4}\, \frac{L_i\, d\omega_i}{\cos\theta_r} \exp\!\left( -\frac{\alpha^2}{2\sigma_\alpha^2} \right) .    (EQ19)

As stated above, the Fresnel coefficient F(θ_i′, η′) and the geometrical attenuation factor G(θ_i, θ_r, φ_r) depend on the illumination and viewing geometry.

To simplify the Torrance-Sparrow model used in our analysis, we have made two assumptions with respect to the Fresnel reflectance coefficient and the geometrical attenuation factor. For both metals and non-metals, the Fresnel reflectance coefficient is nearly constant until the local angle of incidence θ_i′ approaches 90°. Also, for most dielectric and metal objects, the coefficient is uniform over the visible wavelengths. Therefore, we assume that the Fresnel reflectance coefficient F is constant with respect to θ_i and θ_r. Additionally, it is observed that the geometrical attenuation factor G equals unity for angles of incidence not near the grazing angle. Based on this observation, we also assume that the geometrical attenuation factor G is equal to 1.

Finally, the surface radiance of the specular reflection component in our experiments isrepresented as:

dL_r = \frac{c\, a_f\, F}{4}\, \frac{L_i\, d\omega_i}{\cos\theta_r} \exp\!\left( -\frac{\alpha^2}{2\sigma_\alpha^2} \right) .    (EQ20)

This reflection model for the specular lobe component is combined with the Lambertian model (EQ12) to produce

dL_r = \left[ k_D(\lambda) \cos\theta_i + \frac{k_S}{\cos\theta_r} \exp\!\left( -\frac{\alpha^2}{2\sigma_\alpha^2} \right) \right] L_i\, d\omega_i    (EQ21)


where k_D(λ) represents the ratio of the radiance to the irradiance of the diffuse reflection, and k_S = c a_f F / 4. That expression is integrated in the case of a collimated light source to produce

L_r = \int_{\omega_i} dL_r = k_D(\lambda)\, s(\lambda) \cos\theta_i + \frac{k_S\, s(\lambda)}{\cos\theta_r} \exp\!\left( -\frac{\alpha^2}{2\sigma_\alpha^2} \right)    (EQ22)

where s(λ) is the surface irradiance on a plane perpendicular to the light source direction.

Figure 17 Geometry for the Torrance-Sparrow reflection model [85]

3.1.3 Image Formation Model

If the object distance is much larger than the focal length and the diameter of the entrance pupil of the imaging system, it can be shown that the image irradiance E_p is proportional to the scene radiance L_r. The image irradiance is given as

E_p = L_r\, \frac{\pi}{4} \left( \frac{d}{f} \right)^2 \cos^4\gamma    (EQ23)

where d is the diameter of the lens, f is the focal length of the lens, and γ is the angle between the optical axis and the line of sight [29]. In our experiments, changes of the three parameters d, f, and γ are assumed to be relatively small. Therefore, (EQ23) can be simply given as


E_p = g\, L_r    (EQ24)

where g = (\pi / 4)(d / f)^2 \cos^4\gamma. Combining (EQ22) and (EQ24), we have

E_p = g\, k_D(\lambda)\, s(\lambda) \cos\theta_i + \frac{g\, k_S\, s(\lambda)}{\cos\theta_r} \exp\!\left( -\frac{\alpha^2}{2\sigma_\alpha^2} \right) .    (EQ25)

Now let τ_m(λ) (m = R, G, B) be the spectral responsivities of the color camera in the red, green, and blue bands. Then, the output I_m from the color camera in each band can be expressed as

I_m = \int_\lambda \tau_m(\lambda)\, E_p(\lambda)\, d\lambda .    (EQ26)

This equation can be simplified as:

I_m = K_{D,m} \cos\theta_i + K_{S,m}\, \frac{1}{\cos\theta_r} \exp\!\left( -\frac{\alpha^2}{2\sigma_\alpha^2} \right)    (EQ27)

where

K_{D,m} = g \int_\lambda \tau_m(\lambda)\, k_D(\lambda)\, s(\lambda)\, d\lambda, \qquad K_{S,m} = g\, k_S \int_\lambda \tau_m(\lambda)\, s(\lambda)\, d\lambda .    (EQ28)

This simplified Torrance-Sparrow model is used as a reflection model in our analysis.
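For concreteness, the sketch below (Python with NumPy, not from the thesis software) evaluates (EQ27) for a single color band; the parameter values, and the coplanar-geometry relations θ_i = θ - θ_r and α = θ/2 - θ_r used to drive it, are illustrative assumptions:

    import numpy as np

    def simplified_torrance_sparrow(theta_i, theta_r, alpha, K_D, K_S, sigma_alpha):
        """Pixel intensity of (EQ27): a Lambertian term plus a Gaussian
        specular lobe. All angles in radians; no shadowing is modeled."""
        diffuse = K_D * np.cos(theta_i)
        specular = (K_S / np.cos(theta_r)) * np.exp(-alpha**2 / (2.0 * sigma_alpha**2))
        return diffuse + specular

    # Illustrative coplanar geometry: viewing direction at theta = 0,
    # assumed surface normal at theta_r, light source swept through theta.
    theta = np.radians(np.linspace(-140.0, 0.0, 64))     # light source directions
    theta_r = np.radians(-52.0)                          # assumed surface normal
    I = simplified_torrance_sparrow(theta - theta_r,     # incidence angle theta_i
                                    theta_r,
                                    theta / 2.0 - theta_r,  # facet angle alpha
                                    K_D=75.0, K_S=230.0,
                                    sigma_alpha=np.radians(5.0))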

In our analysis, only light that has bounced once from the light source is considered. Therefore, the reflection model is valid only for convex objects, and it cannot represent reflection which bounces more than once (i.e., interreflection) on concave object surfaces. We, however, empirically found that interreflection did not significantly affect our analysis.

3.2 Decomposition of Reflection Components

In this section, we introduce a new algorithm for separating the diffuse and specular reflection components. Using the red, green, and blue filters, the coefficients K_D and K_S in (EQ27) become two linearly independent vectors, K̃_D and K̃_S, unless the colors of the two reflection components are accidentally the same:


\tilde{K}_D = \begin{bmatrix} K_{D,R} \\ K_{D,G} \\ K_{D,B} \end{bmatrix}
            = \begin{bmatrix}
                g \int_\lambda \tau_R(\lambda)\, k_D(\lambda)\, s(\lambda)\, d\lambda \\
                g \int_\lambda \tau_G(\lambda)\, k_D(\lambda)\, s(\lambda)\, d\lambda \\
                g \int_\lambda \tau_B(\lambda)\, k_D(\lambda)\, s(\lambda)\, d\lambda
              \end{bmatrix}    (EQ29)

\tilde{K}_S = \begin{bmatrix} K_{S,R} \\ K_{S,G} \\ K_{S,B} \end{bmatrix}
            = \begin{bmatrix}
                g\, k_S \int_\lambda \tau_R(\lambda)\, s(\lambda)\, d\lambda \\
                g\, k_S \int_\lambda \tau_G(\lambda)\, s(\lambda)\, d\lambda \\
                g\, k_S \int_\lambda \tau_B(\lambda)\, s(\lambda)\, d\lambda
              \end{bmatrix}    (EQ30)

These two vectors represent the colors of the diffuse and specular reflection components in the dichromatic reflection model [73].

First, the pixel intensities in the R, G, and B channels are measured at one pixel under m different light source directions. It is important to note that all intensities are measured at the same pixel. A typical example of the intensity values is shown in Figure 18.

Figure 18 Measurement at one pixel (synthesized data)


The three sequences of intensity values are stored in the columns of an m × 3 matrix M, called the measurement matrix. Considering the reflection model (EQ27) with its coefficients (EQ28), and the two color vectors in (EQ29) and (EQ30), the intensity values in the R, G, and B channels can be represented as:

M = [\, \tilde{M}_R \;\; \tilde{M}_G \;\; \tilde{M}_B \,]
  = \begin{bmatrix}
      \cos\theta_{i1} & \frac{1}{\cos\theta_r} \exp\!\left( -\frac{\alpha_1^2}{2\sigma_\alpha^2} \right) \\
      \cos\theta_{i2} & \frac{1}{\cos\theta_r} \exp\!\left( -\frac{\alpha_2^2}{2\sigma_\alpha^2} \right) \\
      \vdots & \vdots \\
      \cos\theta_{im} & \frac{1}{\cos\theta_r} \exp\!\left( -\frac{\alpha_m^2}{2\sigma_\alpha^2} \right)
    \end{bmatrix}
    \begin{bmatrix}
      K_{D,R} & K_{D,G} & K_{D,B} \\
      K_{S,R} & K_{S,G} & K_{S,B}
    \end{bmatrix}
  = [\, \tilde{G}_D \;\; \tilde{G}_S \,] \begin{bmatrix} \tilde{K}_D^T \\ \tilde{K}_S^T \end{bmatrix}
  \equiv G K    (EQ31)

where the two vectors G̃_D and G̃_S represent the intensity values of the diffuse and specular reflection components with respect to the light source direction θ_i. Vector K̃_D represents the diffuse reflection color vector, and vector K̃_S represents the specular reflection color vector. We call the two matrices G and K the geometry matrix and the color matrix, respectively. The color vectors and the θ_i axis span planes in the goniochromatic space: we call the space spanned by the color vector K̃_D^T and the θ_i axis the diffuse reflection plane, and the space spanned by the color vector K̃_S^T and the θ_i axis the specular reflection plane.


Figure 19 Diffuse and specular reflection planes (synthesized data)

In the case of a conductive material, such as metal, the diffuse reflection component is zero, and (EQ31) becomes

M = [\, \tilde{M}_R \;\; \tilde{M}_G \;\; \tilde{M}_B \,]
  = \begin{bmatrix}
      \frac{1}{\cos\theta_r} \exp\!\left( -\frac{\alpha_1^2}{2\sigma_\alpha^2} \right) \\
      \vdots \\
      \frac{1}{\cos\theta_r} \exp\!\left( -\frac{\alpha_m^2}{2\sigma_\alpha^2} \right)
    \end{bmatrix}
    [\, K_{S,R} \;\; K_{S,G} \;\; K_{S,B} \,]
  = \tilde{G}_S \tilde{K}_S^T    (EQ32)


because there exists only the specular reflection component.

Suppose we have an estimate of the color matrix K. Then, the two reflection components represented by the geometry matrix G are obtained by projecting the observed reflection stored in M onto the two color vectors K̃_D and K̃_S:

G = M K^+    (EQ33)

where K^+ is the 3 × 2 pseudoinverse matrix of the color matrix K.

This derivation is based on the assumption that the color matrix K is known. In Section 3.3 and Section 3.4, we describe how to estimate the specular reflection color vector and the diffuse reflection color vector from the input color image sequence.
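A minimal sketch of this projection (Python/NumPy; the color vectors and the synthetic measurement matrix below are stand-ins, not measured data) follows; it also forms the component loci that (EQ35) and (EQ36) of Section 3.4 define:

    import numpy as np

    def separate_reflection(M, K_D, K_S):
        """Separate the two reflection components at one pixel via (EQ33),
        given the m x 3 measurement matrix M and the two color vectors.
        Returns the geometry matrix G and the component loci."""
        K = np.vstack([K_D, K_S])               # 2 x 3 color matrix
        G = M @ np.linalg.pinv(K)               # m x 2 geometry matrix (EQ33)
        M_diffuse = np.outer(G[:, 0], K_D)      # diffuse locus (cf. EQ35)
        M_specular = np.outer(G[:, 1], K_S)     # specular locus (cf. EQ36)
        return G, M_diffuse, M_specular

    # Illustrative color vectors: a green body color and a white illuminant.
    K_D = np.array([0.3, 0.8, 0.4])
    K_S = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
    t = np.linspace(-1.0, 1.0, 32)
    M = np.outer(np.cos(t), K_D) + np.outer(np.exp(-t**2 / 0.02), K_S)
    G, Md, Ms = separate_reflection(M, K_D, K_S)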

3.3 Estimation of the Specular Reflection Color

It can be seen from (EQ30) that the specular reflection color vector is the same as the light source color vector. Thus, we can estimate the illumination color and use it as the specular reflection color. Several algorithms have been developed by other researchers for estimating the illuminant color from a single color image. In the next section, we first review these estimation algorithms, and then explain our method for estimating the specular color vector from a sequence of color images, which was modified from the previously developed methods.

3.3.1 Previously Developed Methods

The following three sections describe previously developed methods for estimating the illuminant color from a single color image.

3.3.1.1 Lee’s Method

According to the dichromatic reflection model [73], the color of reflection from a dielectric object is a linear combination of the diffuse reflection component and the specular reflection component. The color of the specular reflection component is equal to the illuminant color. Based on this observation, Lee [41] proposed that the illuminant color can be estimated from shading on multiple objects with different body colors.

In the x-y chromaticity diagram, the observed color of a dielectric object lies on a segment whose endpoints represent the colors of the diffuse and specular reflection components. By representing the color of each object as a segment in the chromaticity diagram, the illuminant color can then be determined from the intersection of the multiple segments attributed to multiple objects of different body colors (Figure 20).

Unfortunately, this method does not work if each object in the color image has non-uniform reflectance. For instance, if an object has a textured surface, then the points contributed by the object surface scatter arbitrarily in the chromaticity diagram and do not form a line segment. This is a rather severe limitation, since few objects we see have uniform reflectance without surface texture.

Figure 20 x-y chromaticity diagram showing the ideal loci of chromaticities corresponding to colors from five surfaces of different colors (redrawn from [41])

3.3.1.2 Tominaga and Wandell’s Method

Tominaga and Wandell [84] indicated that the spectral power distributions of all possible observed colors of a dielectric object with a highlight lie on the plane spanned by the spectral power distributions of the diffuse reflection component and the specular reflection component. They called this plane the color signal plane. Each object color forms its own color signal plane. The spectral power distribution of the specular reflection component, which is the same as the spectral power distribution of the illuminant, can be obtained by taking the intersection of the color signal planes. The singular value decomposition technique was used to determine the intersection of the color signal planes.

Fundamentally, their method is equivalent to Lee's method. Therefore, Tominaga and Wandell's method has the same limitation that we described for Lee's method in Section 3.3.1.1. This estimation method can be applied only to a limited class of objects of uniform reflectance without surface texture.

Figure 21 Estimation of illuminant color as an intersection of color signal planes (in the case of tristimulus vectors)

3.3.1.3 Klinker, Shafer, and Kanade’s Method

As described in Chapter 2, a technique for separating the specular reflection component from the diffuse reflection component in one color image was developed by Klinker, Shafer, and Kanade [39]. The algorithm is based on the dichromatic reflection model and the prediction that the color pixels corresponding to a single dielectric material will form a T-shape cluster in the RGB color space. The directions of the two sub-clusters of the T-shape cluster are estimated geometrically in the RGB color space. Those directions correspond to the diffuse color vector and the specular color vector. Subsequently, those two color vectors are used to separate the two reflection components from the T-shape cluster.

Once again, this estimation method is subject to the same limitation as the two methods described above. In addition, because the specular color vector is estimated geometrically in the RGB color space, this method seems to perform less reliably than the other two methods.

Figure 22 T-shape color histogram and two color vectors

3.3.2 Our Method for Estimating an Illuminant Color

In our experiment, the specular color vector, i.e., the row K̃_S^T of the color matrix K, is estimated using a method similar to Lee's method for illuminant color estimation.

First, several pixels of different colors in the image are manually selected. The observed reflection color at each selected pixel is a linear combination of the diffuse reflection component and the specular reflection component. By plotting the observed reflection color of each pixel in the x-y chromaticity diagram over the image sequence, we obtain several line segments in the diagram. The illuminant color can then be determined from the intersection of those line segments.
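Assuming each selected pixel's chromaticity track has already been fitted with a line (for example by principal component analysis), the intersection can be found by linear least squares, as in the sketch below (Python/NumPy; the segment endpoints are made up for illustration):

    import numpy as np

    def intersect_chromaticity_lines(points, directions):
        """Least-squares intersection of several 2D lines, each given by a
        point p_i and a direction d_i in the x-y chromaticity diagram.
        Minimizes the summed squared normal distances to all lines."""
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for p, d in zip(points, directions):
            d = d / np.linalg.norm(d)
            n = np.array([-d[1], d[0]])      # unit normal to the line
            A += np.outer(n, n)
            b += np.outer(n, n) @ p
        return np.linalg.solve(A, b)         # estimated illuminant chromaticity

    # Illustrative: two dichromatic segments pointing at a white illuminant.
    pts = [np.array([0.45, 0.30]), np.array([0.25, 0.45])]
    dirs = [np.array([1.0 / 3.0, 1.0 / 3.0]) - p for p in pts]
    xy_illuminant = intersect_chromaticity_lines(pts, dirs)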

This technique is not limited to objects of uniform reflectance without surface texture. That is because we use the observed color change at each surface point, rather than the color change distributed over an object surface of uniform reflectance. Therefore, our estimation technique can be used for objects with non-uniform reflectance. However, our technique requires that there be multiple objects of different colors in the image. In other words, if the image contains objects of only one color, the light source color cannot be estimated. In that case, the illumination color must be obtained by measuring the color vector of the light source as a part of system calibration.

3.4 Estimation of the Diffuse Reflection Color

By using the method we describe in Section 3.3.2, we can estimate the specular reflection color. This has to be done only once because the specular reflection color is determined by the illuminant color and does not depend on the objects in the scene. However, the other row K̃_D^T of the color matrix K cannot be obtained in the same manner because it depends on the material of the object.

To estimate the diffuse reflection color, we propose another estimation method based on the following observation.

The specular reflection component represented in the reflection model (EQ27) attenuates quickly as the angle α increases, due to the exponential function. Therefore, if two vectors w̃_i = (I_{Ri}, I_{Gi}, I_{Bi})^T (i = 1, 2) are sampled for sufficiently different α, at least one of these vectors is equal to the color vector of the diffuse reflection component K̃_D^T. This vector has no specular reflection component.

It is guaranteed that both vectors exist in the row space of the color matrix K spanned by the basis K̃_D^T and K̃_S^T. Therefore, the desired color vector of the diffuse reflection component K̃_D^T is the vector w̃_i which subtends the largest angle with respect to the vector K̃_S^T (Figure 23). The angle β between the two color vectors is simply calculated as:

\beta = \cos^{-1} \frac{\tilde{K}_S^T \cdot \tilde{w}_i}{\| \tilde{K}_S^T \| \, \| \tilde{w}_i \|}    (EQ34)
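The selection rule of (EQ34) can be sketched as follows (Python/NumPy; the two sample vectors are illustrative):

    import numpy as np

    def diffuse_color_vector(samples, K_S):
        """Pick the sample color vector w_i that subtends the largest angle
        beta (EQ34) with the specular color vector K_S; under the assumptions
        above, this vector carries no specular component."""
        K_S = K_S / np.linalg.norm(K_S)
        best, best_beta = None, -1.0
        for w in samples:
            beta = np.arccos(np.clip(K_S @ w / np.linalg.norm(w), -1.0, 1.0))
            if beta > best_beta:
                best, best_beta = w, beta
        return best / np.linalg.norm(best)

    K_S = np.ones(3) / np.sqrt(3.0)
    samples = [np.array([40.0, 110.0, 55.0]),       # mostly diffuse color
               np.array([150.0, 180.0, 160.0])]     # diffuse + specular mixture
    K_D_hat = diffuse_color_vector(samples, K_S)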

Once we obtain the color matrix K, the geometry matrix

G = [\, \tilde{G}_D \;\; \tilde{G}_S \,] = \begin{bmatrix} G_{D1} & G_{S1} \\ G_{D2} & G_{S2} \\ \vdots & \vdots \\ G_{Dm} & G_{Sm} \end{bmatrix}

can be calculated from (EQ33) (Figure 24).

After the matrix G has been obtained, the loci of the diffuse reflection component and the specular reflection component in the goniochromatic space can be extracted as shown in (EQ35) and (EQ36):

M_{\mathrm{diffuse}} = \tilde{G}_D \tilde{K}_D^T    (EQ35)


M_{\mathrm{specular}} = \tilde{G}_S \tilde{K}_S^T    (EQ36)

Figure 23 Estimation of the color vector K̃_D

Figure 24 The geometry matrix G (synthesized data)

3.5 Experimental Results

In order to demonstrate the feasibility of the algorithm outlined in this chapter, the algorithm was applied to color images of several kinds of objects: a shiny dielectric object, a matte dielectric object, and a metal object. The surface normal and the reflectance parameters of the objects were obtained using the algorithm. The algorithm was applied to the metal object to demonstrate that it also works in the case where only the specular reflection component exists. The algorithm was subsequently applied to each pixel of the entire image to extract the needle map of the object in the image, and the object shape was recovered from the needle map. Finally, the method for reflectance component separation was applied to a more complex dielectric object with non-uniform reflectance properties; the proposed method for estimating the specular reflection color was also applied in this last example.

3.5.1 Experimental Setup

A SONY CCD video camera module (model XC-57), to which three color filters (#25, #58, #47) are attached, is placed at the top of a spherical light diffuser. A point light source attached to a PUMA 560 manipulator is moved around the diffuser on its equatorial plane. The whole system is controlled by a SUN SPARC workstation. The geometry of the experimental setup is shown in Figure 25.

In our experiment, a lamp shade, whose diameter is R = 20 inches, is used as the spherical light diffuser [49]. The maximum dispersion angle ε of the extended light source is determined by the fixed diameter R and the distance H from the point light source to the surface of the diffuser (Figure 26). The object is placed inside the spherical diffuser. It is important to note that the use of the light diffuser for generating an extended light source is not essential for the algorithm to separate the two reflection components. It is used only for avoiding camera saturation when the input images are taken. With the light diffuser, the highlights observed on objects become less bright and are distributed over larger areas of the objects' surfaces. The algorithm introduced in this chapter can be applied to images taken without a light diffuser when the objects are not very shiny.


Figure 25 Geometry of the experimental setup

Figure 26 Geometry of the extended light source

As shown in [49], the distribution of the extended light source (Figure 26) is given by


L(\phi) = \frac{C \left[ (R + H) \cos\phi - R \right]}{\left[ (R + H - R \cos\phi)^2 + (R \sin\phi)^2 \right]^{3/2}}    (EQ37)

This distribution is limited to the interval -ε < φ < ε, where ε = \cos^{-1}(R / (R + H)). The distribution has a rather complex formula and is somewhat difficult to use analytically. Fortunately, it has a profile very similar to a Gaussian distribution function [49]. Therefore, we approximate this distribution by a Gaussian distribution function, whose standard deviation can be computed numerically by using (EQ37). We denote the standard deviation of the extended light's distribution as σ_e.
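The standard deviation can be obtained, for example, by treating L(φ) as an unnormalized density and computing its second moment numerically; this is one plausible reading of the procedure, sketched below in Python/NumPy with illustrative diffuser dimensions R and H:

    import numpy as np

    def sigma_e(R, H, C=1.0, n=2001):
        """Numerically approximate the standard deviation of the extended
        light source distribution L(phi) of (EQ37) over -eps < phi < eps."""
        eps = np.arccos(R / (R + H))
        phi = np.linspace(-eps, eps, n)
        L = C * ((R + H) * np.cos(phi) - R) \
            / ((R + H - R * np.cos(phi))**2 + (R * np.sin(phi))**2) ** 1.5
        w = L / np.trapz(L, phi)                    # normalize to a density
        return np.sqrt(np.trapz(w * phi**2, phi))   # zero mean by symmetry

    # Illustrative geometry (not the thesis values): radius 10 in., H = 5 in.
    print(np.degrees(sigma_e(R=10.0, H=5.0)))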

Finally, the reflection model (EQ27) for this experimental setup is given as

I_m = K_{D,m} \cos(\theta - \theta_r) + K_{S,m}\, \frac{1}{\cos\theta_r} \exp\!\left( -\frac{(\theta - 2\theta_r)^2}{4 (2\sigma_\alpha + \sigma_e)^2} \right)    (EQ38)

where θ represents the angle between the viewing direction and the center of the extended light source (Figure 25). The light source direction θ is controlled by the robotic arm.

3.5.2 Estimation of Surface Normal and Reflectance Parameters

After the geometry matrix G has been recovered, the two curves which represent the diffuse and the specular reflection components in (EQ38) are fitted to the separated diffuse and specular reflection components, respectively:

A_1 \cos(\theta - A_2) + A_3    (EQ39)

B_1 \exp\!\left( -\frac{(\theta - B_2)^2}{B_3^2} \right)    (EQ40)

B_2 / 2 and A_2 give the direction θ_r of the surface normal. B_1 and A_1 are the parameters of the specular and diffuse reflection components, respectively.
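Such a fit can be carried out with any nonlinear least-squares routine; the sketch below uses SciPy's curve_fit on synthetic data (the true parameter values, noise level, and initial guesses are all illustrative):

    import numpy as np
    from scipy.optimize import curve_fit

    def diffuse_model(theta, A1, A2, A3):        # (EQ39)
        return A1 * np.cos(theta - A2) + A3

    def specular_model(theta, B1, B2, B3):       # (EQ40)
        return B1 * np.exp(-(theta - B2)**2 / B3**2)

    # G_D, G_S stand in for the two columns of the recovered geometry matrix
    # at one pixel; theta holds the light source directions in radians.
    theta = np.radians(np.linspace(-150.0, 0.0, 64))
    rng = np.random.default_rng(0)
    G_D = diffuse_model(theta, 75.0, np.radians(-52.0), 0.0) + rng.normal(0, 1.0, theta.size)
    G_S = specular_model(theta, 230.0, np.radians(-104.0), np.radians(13.0)) + rng.normal(0, 1.0, theta.size)

    (A1, A2, A3), _ = curve_fit(diffuse_model, theta, G_D,
                                p0=[G_D.max(), theta[np.argmax(G_D)], 0.0])
    (B1, B2, B3), _ = curve_fit(specular_model, theta, G_S,
                                p0=[G_S.max(), theta[np.argmax(G_S)], 0.2])
    theta_r = B2 / 2.0     # surface normal direction from the specular peak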

3.5.3 Shiny Dielectric Object

A green plastic cylinder with a relatively smooth surface was used in this experiment (Figure 27). In this example, the light source color was measured directly, because the object had uniform color and therefore our estimation method described in Section 3.3.2 could not be applied.

Figure 28 shows the measured intensities plotted in the goniochromatic space with the blue axis omitted. Note that the intensity values around the maximum intensity contain both the diffuse reflection component and the specular reflection component. On the other hand, the intensity values for θ > -60° contain only the diffuse reflection component. The curve for θ > -60° and θ < -140° lies inside the diffuse reflection plane in the goniochromatic space, whereas the curve for -140° < θ < -60° does not lie inside the diffuse reflection plane. This is because the intensity values for -140° < θ < -60° are linear combinations of the diffuse color vector K̃_D and the specular color vector K̃_S.

Figure 27 Green shiny plastic cylinder


Figure 28 Measured intensities in the goniochromatic space

The algorithm for separating the two reflection components, described in Section 3.2, was applied to the measured data. The red, green, and blue intensities are initially stored in the measurement matrix M as its columns (EQ31). Then, the measurement matrix M is decomposed into the geometry matrix G and the color matrix K. The columns of the resulting geometry matrix G are plotted in Figure 29. Figure 30 shows the result of the decomposition of the reflection in the goniochromatic space. It is evident from this figure that the measured intensity in the goniochromatic space (Figure 28) has been successfully decomposed into the diffuse and specular reflection components using our algorithm.

The diffuse reflection plane and the specular reflection plane are shown in Figure 31. This diagram is the result of viewing Figure 30 along the θ axis. Note that the slope of the specular reflection plane is 45° in the diagram. This is because the specular reflection vector K̃_S (EQ30) has been normalized to

\tilde{K}_S = \left[ \frac{1}{\sqrt{3}} \;\; \frac{1}{\sqrt{3}} \;\; \frac{1}{\sqrt{3}} \right]^T .    (EQ41)

The diffuse reflection plane is shifted toward the green axis because the color of the observed object is green in this experiment.


The result of the fitting procedure described in Section 3.5.2 is shown in Figure 32. From the result, we obtain the direction of the surface normal and the reflectance parameters as follows: the surface normal (B_2 / 2) is -52.09°, the parameter of the specular reflection component (B_1) is 230.37, and the parameter of the diffuse reflection component (A_1) is 75.00. The notations A_1, B_1, and B_2 follow (EQ39) and (EQ40).

Figure 29 Two decomposed reflection components


Figure 30 Loci of two reflection components in the goniochromatic space

Figure 31 Diffuse and specular reflection planes


Figure 32 Result of fitting

3.5.4 Matte Dielectric Object

A green plastic cylinder with a relatively rough surface was used in this experiment (Figure 33).

The measured intensities are plotted in the goniochromatic space (Figure 34) in the same manner as explained in Section 3.2. Note that the width of the specular reflection component is larger than that in the previous experiment. This is mainly attributed to the different surface roughness of the two plastic cylinders.


Figure 33 Green matte plastic cylinder

Figure 34 Measured intensities in the goniochromatic space


The intensity is decomposed into the two reflection components according to the algorithm shown in Section 3.2. The result of the decomposition is shown in Figure 35. The directions of the specular reflection plane and the diffuse reflection plane are the same as those in the case of the previous shiny green plastic cylinder.

Figure 36 depicts the result of parameter estimation from the reflection components. The surface normal and the parameters of the two reflection components obtained are: the surface normal ($B_2/2$) is $-49.61°$, the parameter of the specular reflection component ($B_1$) is $219.83$, and the parameter of the diffuse reflection component ($A_1$) is $308.1$.

Note that the parameter $B_3$, which is equal to $4(2\sigma_\alpha + \sigma_e)$, is estimated as $26.68$. This value is greater than that of the shiny plastic object ($13.56$). The difference is consistent with the fact that the matte object's surface roughness is greater than the shiny object's surface roughness.

Figure 35 Two decomposed reflection components


Figure 36 Result of fitting

3.5.5 Metal Object

The dichromatic reflection model [73] cannot be applied to non-dielectric objects such as metallic specular objects. As an example of such objects, an aluminum triangular prism was used in this experiment (Figure 37). This type of material exhibits only the specular reflection component and no diffuse reflection component.

The measured intensities shown in Figure 38 indicate that the reflection from the aluminum triangular prism possesses only the specular reflection component. This observation is justified by the result of the decomposition of the two reflection components (Figure 39). The diffuse reflection component is negligibly small compared to the specular reflection component.


Figure 37 Aluminum triangular prism

Figure 38 Loci of the intensity in the goniochromatic space


Figure 39 Two decomposed reflection components

3.5.6 Shape Recovery

In the previous sections, the decomposition algorithm was applied to real color images in order to separate the two reflection components using the intensity change at a single pixel. In other words, the reflection components were separated locally. After this separation, the surface normal ($B_2/2$ or $A_2$) and the reflectance parameters ($B_1$ and $A_1$) at each pixel were obtained by nonlinear curve fitting of the two reflection component models ((EQ39), (EQ40)) to the decomposed reflection components. We repeated the same operation over all pixels in the image to obtain surface normals over the entire image. Then, the needle map and the depth map of the object in the image were obtained from those recovered surface normals.
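To make the fitting step concrete, the sketch below fits a cosine lobe to the diffuse component and a Gaussian lobe to the specular component with scipy. The exact forms of (EQ39) and (EQ40) are not reproduced in this excerpt, so these model functions are assumptions chosen to be consistent with the parameter names $A_1$, $A_2$, $B_1$, $B_2$, $B_3$ used in the text ($A_2$ and $B_2/2$ both estimate the surface normal angle):

```python
import numpy as np
from scipy.optimize import curve_fit

def diffuse_model(theta, A1, A2):
    # Assumed form of (EQ39): cosine lobe centered on the surface normal.
    return A1 * np.cos(theta - A2)

def specular_model(theta, B1, B2, B3):
    # Assumed form of (EQ40): Gaussian lobe centered on the mirror angle.
    return B1 * np.exp(-(theta - B2) ** 2 / B3)

def fit_pixel(theta, I_diff, I_spec):
    """Fit both separated components at one pixel (theta in radians)."""
    (A1, A2), _ = curve_fit(diffuse_model, theta, I_diff,
                            p0=[I_diff.max(), theta[np.argmax(I_diff)]])
    (B1, B2, B3), _ = curve_fit(specular_model, theta, I_spec,
                                p0=[I_spec.max(), theta[np.argmax(I_spec)], 0.1])
    return A2, B2 / 2.0, A1, B1   # two normal estimates + two strengths
```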

We used a purple plastic cylinder as the observed object in this experiment. The image is shown in Figure 40. Results of the curve fitting of the diffuse reflection component were used to obtain surface normal directions. The resulting needle map is shown in Figure 41. The depth map of the purple plastic cylinder is obtained from the needle map by the relaxation method proposed by Horn and Brooks [30]. Figure 42 depicts the resulting depth map.
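The relaxation step can be sketched as a Jacobi iteration on the Poisson equation that an integrable surface must satisfy; this is a minimal variant, assuming unit grid spacing and the gradient convention $p = -n_x/n_z$, $q = -n_y/n_z$, not the exact discretization of Horn and Brooks [30]:

```python
import numpy as np

def depth_from_normals(nx, ny, nz, iterations=2000):
    """Integrate a needle map into a depth map by Jacobi relaxation,
    solving laplacian(z) = p_x + q_y for p = -nx/nz, q = -ny/nz."""
    p, q = -nx / nz, -ny / nz
    div = np.zeros_like(p)                    # divergence of (p, q)
    div[1:-1, 1:-1] = ((p[1:-1, 2:] - p[1:-1, :-2])
                       + (q[2:, 1:-1] - q[:-2, 1:-1])) / 2.0
    z = np.zeros_like(p)
    for _ in range(iterations):
        avg = (z[1:-1, 2:] + z[1:-1, :-2] + z[2:, 1:-1] + z[:-2, 1:-1]) / 4.0
        z[1:-1, 1:-1] = avg - div[1:-1, 1:-1] / 4.0
    return z
```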


Figure 40 Purple plastic cylinder

Figure 41 Needle map


Figure 42 Recovered object shape

3.5.7 Reflection Component Separation with Non-uniform Reflectance

In the previous examples, the proposed method was applied to relatively simple objects with uniform reflectance. In this example, our method was applied to more complex objects with non-uniform surface reflectance. Also, our method for estimating the illuminant color, which was described in Section 3.3.2, was applied in this example.

First, for estimating the illuminant color, three pixels of different colors were selected in one of the input color images (Figure 44). Then, the reflection colors from those three pixels through the input color image sequence were plotted in the x-y chromaticity diagram as shown in Figure 43. The locus of the observed color sequence at each of those selected image pixels forms a line segment in the diagram. Finally, the illuminant color was estimated as $(r, g, b) = (0.353, 0.334, 0.313)$ by computing the intersection of those three line segments.
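The intersection itself is a small least-squares problem: each selected pixel contributes one line in the chromaticity plane, and the illuminant chromaticity is the point that minimizes the summed squared distances to all of the lines. A sketch of that formulation (names ours):

```python
import numpy as np

def intersect_chromaticity_lines(tracks):
    """Least-squares intersection of the lines traced by several pixels.

    tracks -- list of (n_i x 2) arrays; each holds the (x, y)
              chromaticities of one pixel over the image sequence."""
    A, b = np.zeros((2, 2)), np.zeros(2)
    for pts in tracks:
        c = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - c)   # principal axis = line direction
        d = vt[0]
        P = np.eye(2) - np.outer(d, d)      # projector onto the line normal
        A += P
        b += P @ c
    return np.linalg.solve(A, b)            # estimated illuminant (x, y)
```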


Figure 43 Estimation of illuminant color in the x-y chromaticity diagram (the three pixels of different colors are manually selected in the image in Figure 44)

By using the pixel-based separation algorithm, we can easily generate images of the two reflection components. The algorithm was applied to all pixels of the input images locally, and each separated reflection component was used to generate the diffuse reflection image and the specular reflection image. Figure 44 shows one frame from the input image sequence. All pixels in the image are decomposed into two reflection components by using the separation algorithm described in Section 3.2. The resulting diffuse reflection image and specular reflection image are shown in Figure 45 and Figure 46, respectively.

Note that the input image is successfully decomposed into the images of the two reflection components, even though the input image contains a complex object with non-uniform reflectance. This is because the proposed algorithm is pixel-based and does not require global information. In this kind of situation, a traditional separation algorithm based on the RGB color histogram would easily fail because clusters in the RGB color space become crowded and obscure, making clustering in the RGB space impossible. In contrast, since our algorithm is pixel-based and applied to each pixel separately, the two reflection components can be successfully separated even when the specular reflection is inconspicuous.


Figure 44 Multicolored object (shown in color in the Color Figures chapter)
The three pixels of different colors are manually selected in the image.


Figure 45 Diffuse reflection image (shown in color in the Color Figures chapter)

Figure 46 Specular reflection image


3.6 Summary

We proposed goniochromatic space analysis as a new framework for color image analysis, in which the diffuse reflection component and the specular reflection component of the dichromatic reflection model span subspaces. We presented an algorithm to separate the two reflection components at each pixel from a sequence of color images and to obtain the surface normal and the parameters of the Torrance-Sparrow reflection model. The significance of our method lies in its use of local (i.e., pixel-based) rather than global information about intensity values in the images. This characteristic separates our algorithm from previously proposed algorithms for segmenting the diffuse reflection component and the specular reflection component in the RGB color space.

Our algorithm has been applied to objects of different materials to demonstrate its effectiveness. We have successfully separated the two reflection components in the temporal-color space. Using the separation result, we have obtained surface normals and parameters of the two reflection components for objects with 2D surface normals. In addition, we were able to reconstruct the shape of the objects. Our separation algorithm was also successfully applied to a more complex object with non-uniform reflectance.


Chapter 4

Object Modeling from Range and Color Images:

Object Models Without Texture

In Chapter 3, we introduced a method for estimating object shape and surface reflectance parameters from a color image sequence taken with a moving light source. Unfortunately, the proposed method is limited in several aspects. An object's shape cannot be recovered accurately if the object has a surface with high curvature, because the method can recover only surface normals; it cannot obtain the 3D shape of the object surface directly. Also, the method can recover the object surface shape only partially: the part of the object surface that is not seen from the viewpoint cannot be recovered.

Those limitations motivated us to further extend our method for creating a complete model of a complex object. In this chapter, we extend our object modeling method by creating a complete object model from a sequence of range and color images which are taken by changing the object's posture.

First, we review the previously developed methods related to our new method, and examine their limitations.


4.1 Background

Techniques to measure object surface shape and reflectance properties together by using both range images and gray-scale (or color) images have been studied by several researchers.

Ikeuchi and Sato originally developed a technique to measure object shapes and reflection function parameters from a range image and intensity image pair [35]. The Torrance-Sparrow reflection model is used, and the Fresnel reflectance parameter in the specular component is assumed to be constant by restricting surface orientations to be less than $60°$ from the viewing direction. The following four parameters are determined: (i) the Lambertian strength coefficient, (ii) the incident orientation of the light source, (iii) the specular strength coefficient, and (iv) the roughness parameter of the specular reflection distribution.

First, the surface shape is measured from the range image, and then surface normals of the object surface are computed from the measured shape. Then, surface points which exhibit only the diffuse reflection component are identified by using a brightness criterion. Pixel intensities of the identified surface points with only the diffuse reflection component are used to estimate the Lambertian strength and the incident direction of the point light source by least-squares fitting. Criteria are also developed to identify pixels that are in shadow, or that exhibit the specular reflection component or interreflection. A least-squares procedure is applied to fit the specular strength and surface roughness parameters from the identified pixels with the specular reflection component.

The main drawback of the technique is that it assumes uniform reflectance properties over the object surface. Additionally, only a partial object shape is recovered because only one range image is used in the technique.

Baribeau, Rioux, and Godin [4] measured three reflectance parameters of the Torrance-Sparrow reflection model that they call the diffuse reflectance of the body material, the Fresnel reflectance of the air-media interface, and the slope surface roughness of the interface. In their method, a polychromatic laser range sensor is used to produce a pair of range and color images. Unlike the technique developed by Ikeuchi and Sato, this method can capture more subtle reflectance properties of the object surface because it is capable of estimating the Fresnel reflectance parameter.

However, the Baribeau et al. method still requires uniform reflectance over each object surface, and only a partial object shape is recovered. Also, their method was intended to be used for understanding images, e.g., region segmentation. Therefore, important features for object modeling were missing from their method. In particular, their method did not guarantee that reflectance parameters are estimated at all points on the object surface.

Kay and Caelli [36] introduced another method that uses a range image and 4 or 8 intensity images taken under different illumination conditions. By increasing the number of intensity images, they estimated parameters of the Torrance-Sparrow reflection model locally for each image pixel. They classified the object surface into three groups: non-highlight regions, specular highlight regions, and rank-deficient regions. Based on this classification, a different solution method was applied to each region.

Unlike the two techniques described above, Kay and Caelli's method can handle object surfaces with varying reflectance due to the use of multiple intensity images with different light source directions. However, it is reported that the parameter estimation can be unstable, especially when the specular reflection component is not observed strongly. This prevents their method from being applied to a wide range of real objects.

In this thesis, we propose a new method to recover complete object surface shape and reflectance parameters from a sequence of range images and color images taken by changing the object's posture. Unlike previously introduced methods, our method is capable of estimating surface reflectance parameters of objects with non-uniform reflectance. Also, our method guarantees that all surface points are assigned appropriate reflectance parameters. This is especially desirable for the purpose of object modeling for computer graphics.

In this chapter, we consider objects whose surfaces are uniformly painted in multiple colors. Therefore, the surfaces of such objects can be segmented into regions of uniform color. Many real objects fall into this category. However, there are still other objects whose surfaces have detailed texture. Modeling of such objects will be discussed in the next chapter.

In our method, a sequence of range images is used to recover the entire shape of an object as a triangular mesh model. Then, a sequence of color images is mapped onto the recovered shape. As a result, we can determine an observed color change through the image sequence for all triangular patches of the object surface shape model. The use of three-dimensional shape information is important here because, without the object shape, the correspondence problem, i.e., determining where a surface point in one image appears in another image, cannot be solved easily. This problem did not arise in the method described in Chapter 3: it was solved automatically because the camera and the object were fixed, and only the light source was moved to take a color image sequence.

Subsequently, by using the algorithm introduced in Chapter 3, the observed color sequence is separated into the diffuse reflection component and the specular reflection component. Then, parameters of the Torrance-Sparrow reflection model are estimated reliably for the diffuse and specular reflection components. Unlike the diffuse reflection component, special care needs to be taken in estimating the specular parameters. The specular reflection component can be observed from only a limited range of viewing directions. Therefore, the specular reflection component can be observed on only a small subset of the object surface. As a result, we cannot estimate the specular reflection parameters where the specular reflection component is not observed.

Our approach avoids this problem by using region segmentation of the object surface. Based on the assumption that each region of uniform diffuse color has uniform specular reflectance, we estimate the specular parameters for each region, i.e., not for each surface point. Then, the estimated specular parameters are assigned to all surface points within the region.

Finally, color images of the object are synthesized using the constructed model to demonstrate the feasibility of the proposed approach.

The chapter is organized as follows. First, we explain our imaging system in Section 4.2. In Section 4.3, we describe the reconstruction of object shape from multiple range images. In Section 4.4, we explain the projection of color images onto the recovered object shape. In Section 4.5, we describe the estimation of reflectance parameters in our method. The estimated object shape and reflectance parameters are used to synthesize object images under arbitrary illumination/viewing conditions. Several examples of synthesized object images are shown in Section 4.6. Finally, we summarize this chapter in Section 4.7.

4.2 Image Acquisition System

The experimental setup for the image acquisition system used in our experiments is illustrated in Figure 47. The object whose shape and reflectance information is to be recovered is mounted on the end of a PUMA 560 manipulator. The object used in our experiment is a plastic toy dinosaur whose height is about 170 mm.

A range image is obtained using a light-stripe range finder with a liquid crystal shutter and a color CCD video camera [62]. The light-stripe range finder projects a set of stripes onto the scene. Each stripe has a distinct pattern, e.g., a binary code. The CCD video camera is used to acquire images of the scene as the pattern is projected. Each pattern corresponds to a different plane of the projected light. With knowledge of the relative positions of the camera and projector, the image location and projected light plane determine the $(X, Y, Z)$ position of the point in the scene with respect to the CCD camera. The same color camera is used for digitizing color images. Therefore, pixels of the range images and the color images directly correspond.

The range finder is calibrated to produce a $3 \times 4$ projection matrix $\Pi$ which represents the projection transformation between the world coordinate system and the image coordinate system. The location of the PUMA 560 manipulator with respect to the world coordinate system is also given by calibration. Therefore, the object location is given as a $4 \times 4$ transformation matrix $T$ for each digitized image.

A single xenon lamp, whose diameter is approximately 10 mm, is used as a point light source. The light source is located near the camera, and the light source direction is considered to be the same as the viewing direction. This light source location is chosen to avoid the problem of self-shadowing in our images. Then, the gain and offset of the outputs from the video camera are adjusted so that the light source color becomes $(R, G, B) = (1, 1, 1)$. Therefore, the specular reflection color is assumed to be known in this experiment.

The camera and light source locations are fixed in our experiment. The approximate distance between the object and the camera is 2 m.

Using the image acquisition system, a sequence of range and color images of the object is obtained as the object is rotated at a fixed angle step.

Figure 47 Image acquisition system


4.3 Shape Reconstruction from Multiple Range Images

For generating a three-dimensional object shape from multiple range images, we developed a new method to integrate multiple range images by using a volumetric representation [91]. Since the shape reconstruction from multiple range images is an important step in our technique, we now review shape reconstruction techniques previously developed by other researchers and examine their characteristics. Then, we will describe our shape reconstruction method.

The reconstruction of three-dimensional object shapes from multiple range images has been studied intensively in the past. However, all of the conventional shape reconstruction techniques we review here pay very little, if any, attention to object surface reflectance properties. Those techniques attempt to recover only object surface shapes; they do not recover surface reflectance properties, which are as important as the shapes for object modeling.

Turk and Levoy [86] developed a technique to combine multiple range images one by one, using a two-step strategy: registration and integration. Their technique uses a modified version of the iterative closest point (ICP) algorithm, which was originally developed by Besl and McKay [7]. After the registration procedure, two surface meshes composed of small triangular patches are integrated to produce one combined surface mesh. Turk and Levoy's method performs poorly if the surfaces are slightly misaligned or if there is significant noise in the data; typically, the resulting surfaces have noticeable seams along the edges where they were pieced together. Turk and Levoy's method was motivated by another method developed by Soucy and Laurendeau [77], which uses a computationally intensive strategy for aligning all surface patches together.

Higuchi, Hebert, and Ikeuchi [24] developed a method for merging multiple range views of a free-form surface obtained from arbitrary viewing directions, with no initial estimate of the relative transformation among those viewing directions. The method is based on the Spherical Attribute Image (SAI) representation of free-form surfaces, which was originally introduced by Delingette, Hebert, and Ikeuchi in [13].

Although the Higuchi et al. technique does not require relative transformations between observed surface patches, it can handle only objects which are topologically equivalent to a sphere, i.e., objects with no holes. Also, it is difficult to produce object shapes of high resolution because of the SAI representation: the computation cost of the algorithm becomes unacceptably high when a high-frequency SAI is used.

The Higuchi et al. method was further extended by Shum et al. [74] to improve robustness by applying principal component analysis with missing data for simultaneously estimating the relative transformations. However, their algorithm still suffers from the same limitations.

Hoppe, DeRose, and Duchamp [26] introduced an algorithm to construct three-dimensional surface models from a cloud of points without spatial connectivity. The algorithm differs from others in that it does not require surface meshes as input. The algorithm computes the signed distance function from the points of the range images rather than from triangulated surfaces generated from the range images. The signed distance is computed at each node of a three-dimensional array, i.e., voxel, around the target object to produce a volumetric data set. Then, an iso-surface of zero distance is extracted by using the marching cubes algorithm [46]. Although their reliance on points rather than on triangulated surface patches makes their algorithm applicable to more general cases, using points rather than surfaces suffers from some practical problems. The main problem is that a surface is necessary to measure the signed distance correctly. To compensate for this problem, their algorithm locally infers a plane at a surface point from the neighboring points in the input data. Unfortunately, due to this local estimation of planes, their algorithm is sensitive to outliers in the input 3D points; therefore, it is not suitable in cases where range data are less accurate and contain a significant amount of noise.

Curless and Levoy [12] proposed another technique similar to that of Hoppe et al. Curless and Levoy's technique differs in that triangulated surface patches from range images are used instead of 3D points. For each voxel of a volumetric data set, they take a weighted average of the signed distances from the voxel center to the range image points whose image rays intersect the voxel. This is done by following the ray from the camera center to each range image point and incrementing the sum of weighted signed distances. Then, like the method by Hoppe et al., the marching cubes algorithm is applied to the resulting volumetric data set to extract the object surface as an iso-surface of zero distance.

Unfortunately, Curless and Levoy's algorithm is still sensitive to noisy and extraneous data, although it performs significantly better than the one by Hoppe et al. In Curless and Levoy's algorithm, the weighted signed distance is averaged to produce an estimate of the true distance to the object surface. This certainly reduces some of the noise, but it still cannot overcome the effects of large errors in the data.

After having studied and used those previously developed methods for multiple range image integration, we found that, unfortunately, none of them gives satisfactory integration results, especially when the input range images contain a significant amount of noise and when the input surface patches are slightly misaligned. That motivated us to develop another method for integrating multiple range images by using a volumetric representation [91].


4.3.1 Our Method for Merging Multiple Range Images

Our method consists of the following four steps:

1. Surface acquisition from each range image

The range finder in our image acquisition system cannot measure the object surface itself; it can produce only images of 3D points on the object surface. Because of this limitation, we need to convert the measured 3D points into a triangular mesh which represents the object surface shape. This is done by connecting two neighboring range image pixels based on the assumption that those points are connected by a locally smooth surface. If the two points are closer in 3D distance than some threshold, then we consider them to be connected on the object surface.

2. Alignment of all range images

All of the range images are measured in the coordinate system fixed with respect to the range finder system, and they are not aligned to each other initially. Therefore, after we obtain the triangular surface meshes from the range images, we need to transform all of the meshes into a unique object coordinate system.

For aligning all of the range images, we use a transformation matrix $T$ which represents the object location for each range image (Section 4.2). Suppose we select one of the range images as a key range image whose coordinate system is used as the world coordinate system. We refer to the transformation matrix for the key range image as $T_{merge}$. Then, all other range images can be transformed into the key range image's coordinate system by transforming all 3D points $P = (X, Y, Z, 1)$ as $P' = T_{merge} T_f^{-1} P$, where $f = 1 \ldots n$ is the range image frame number.
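In code, the alignment is a single matrix product per range image frame; a minimal numpy sketch (names ours):

```python
import numpy as np

def align_frame(points_h, T_f, T_merge):
    """Map homogeneous points (n x 4, rows (X, Y, Z, 1)) of frame f
    into the key range image's coordinate system:
    P' = T_merge T_f^{-1} P."""
    M = T_merge @ np.linalg.inv(T_f)
    return (M @ points_h.T).T
```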

3. Merging based on a volumetric representation

After all of the range images are converted into triangular patches and aligned to a unique coordinate system, we merge them using a volumetric representation. First, we consider imaginary 3D volume grids around the aligned triangular patches. (A volume grid is usually called a voxel in the computer graphics field.) Then, in each voxel, we store the value, $f(x)$, of the signed distance from the center point of the voxel, $x$, to the closest point on the object surface. The sign indicates whether the point is outside, $f(x) > 0$, or inside, $f(x) < 0$, the object surface, while $f(x) = 0$ indicates that $x$ lies on the surface of the object.

The signed distance can be computed reliably by using the consensus surface algorithm [91]. In the algorithm, a quorum of consensus of locally coherent observations of the object surface is used to compute the signed distance, which makes the computation robust to noise and outliers in the input range data.
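The consensus computation itself is beyond this excerpt, but the voxel loop can be sketched by substituting a plain nearest-point signed distance for the consensus estimate; this is a simplification for illustration, not the algorithm of [91]:

```python
import numpy as np
from scipy.spatial import cKDTree

def signed_distance_grid(surface_points, surface_normals, voxel_centers):
    """f(x) for each voxel center x: distance to the nearest measured
    surface point, signed by which side of that point's tangent plane
    the voxel lies on (positive outside, negative inside)."""
    tree = cKDTree(surface_points)
    dist, idx = tree.query(voxel_centers)
    side = np.einsum('ij,ij->i',
                     voxel_centers - surface_points[idx],
                     surface_normals[idx])
    return np.where(side >= 0, dist, -dist)
```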


Figure 49 Input color images (frames 0, 20, 40, 60, 80, and 100 of 120 are shown; this figure appears in color in the Color Figures chapter)

4.3.3 Shape Recovery

The consensus surface method described in Section 4.3.1 was used for merging eight triangular surface meshes created from the input range images. The recovered object shape is shown in Figure 50. The object shape consists of 9943 triangular patches. In the process of merging surface meshes, the object shape was manually edited to remove noticeable defects such as holes and spikes. Manual editing could be eliminated if more range images were used.

Figure 50 Recovered object shape

4.4 Mapping Color Images onto Recovered Object Shape

We represent world coordinates and image coordinates using homogeneous coordinates. A point on the object surface with Euclidean coordinates $(X, Y, Z)$ is expressed by a column vector $P = [X, Y, Z, 1]^T$. An image pixel location $(x, y)$ is represented by $p = [x, y, 1]^T$. As described in Section 4.2, the camera projection transformation is represented by a $3 \times 4$ matrix $\Pi$, and the object location is given by a $4 \times 4$ object transformation matrix $T$. We denote the object transformation matrix for input color image frame $f$ by $T_f$ ($f = 1 \ldots n$). Thus, using the projection matrix $\Pi$ and the transformation matrix $T_{merge}$ for the key range image in Section 4.3.1, the projection of a 3D point $P$ on the object surface in color image frame $f$ is given as

$$p_f = \Pi T_f T_{merge}^{-1} P \qquad (f = 1 \ldots n) \qquad \text{(EQ42)}$$

where the last component of $p_f$ has to be normalized to give the projected image location $(x, y)$.

The observed color of the 3D point in color image frame $f$ is given as the $(R, G, B)$ color intensity at the pixel location $(x, y)$. If the 3D point is not visible in the color image (i.e., the point is facing away from the camera, or it is occluded), the observed color for the 3D point is set to $(R, G, B) = (0, 0, 0)$. For determining the visibility efficiently, we used the Z-buffer algorithm ([15], for instance) in our analysis.
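A direct transcription of (EQ42), including the normalization of the last component (helper name ours; the Z-buffer visibility test is a separate step):

```python
import numpy as np

def project_point(P, Pi, T_f, T_merge):
    """Project a homogeneous surface point P (4-vector) into color
    image frame f: p_f = Pi T_f T_merge^{-1} P  (EQ42)."""
    p = Pi @ T_f @ np.linalg.inv(T_merge) @ P
    return p[:2] / p[2]          # image location (x, y)
```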

Ideally, all triangular patches are small enough to have uniform color on the image plane. However, the projection of a triangular patch on the image plane often corresponds to multiple image pixels of different colors. Therefore, we average the color intensities of all corresponding pixels and assign that intensity to the triangular patch. This approximation is acceptable as long as the object surface does not have fine texture relative to the resolution of the triangular patches. (In the next chapter, we will discuss another approach for the case where this assumption does not hold.)

By applying the mapping procedure for all object orientations, we finally obtain a collection of triangular patches, each of which has a sequence of observed colors with respect to the object orientation. The result of the color image mapping is illustrated in Figure 51, which shows six frames as examples.


Figure 51 View mapping result (frames 0, 20, 40, 60, 80, and 100 of the 120 input color images are shown here; object surface regions which are not observed in each color image are shown as white areas)

Based on the image mapping onto the recovered object shape, a sequence of observed colors is determined at each triangular patch of the object shape. The observed color is not defined if the triangular patch is not visible from the camera; in this case, the observed color is set to zero.

Figure 52 illustrates a typical observed color sequence at a triangular patch with strong specularity. The specular reflection component is observed strongly near image frame 67. When the specular reflection component exists, the output color intensity is a linear combination of the diffuse reflection component and the specular reflection component. The two reflection components are separated by using the algorithm introduced in Chapter 3. (The separation result will be shown in the next section.) The intensities are set to zero before image frame 39 and after image frame 92 because the triangular patch is not visible from the camera due to occlusion.


Another example, with weak specularity, is shown in Figure 53. In this example, the observed specular reflection is relatively small compared with the diffuse reflection component. As a result, estimating reflectance parameters for the diffuse and specular reflection components together could be sensitive to various disturbances such as image noise. That is why the reflection component separation is introduced prior to parameter estimation in our analysis. By separating the two reflection components based on color, the reflectance parameters can be estimated separately in a robust manner.

Figure 52 Intensity change with strong specularity

Figure 53 Intensity change with little specularity


4.5 Reflectance Parameter Estimation

4.5.1 Reflection Model

As Figure 47 illustrates, the illumination and viewing directions are fixed and identical. The Torrance-Sparrow reflection model (EQ27) is modified for this particular experimental setup as

$$I_m = K_{D,m} \cos\theta + K_{S,m} \frac{1}{\cos\theta} \exp\left( -\frac{\theta^2}{2\sigma_\alpha^2} \right), \qquad m = R, G, B \qquad \text{(EQ43)}$$

where $\theta$ is the angle between the surface normal and the viewing direction (or the light source direction) in Figure 54, $K_{D,m}$ and $K_{S,m}$ are constants for each reflection component, and $\sigma_\alpha$ is the standard deviation of the facet slope $\alpha$ of the Torrance-Sparrow reflection model. The direction of the light source and the camera with respect to the surface normal is referred to as the sensor direction $\theta$.
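For reference, (EQ43) evaluates per color band as follows ($\theta$ in radians; a direct transcription of the model above):

```python
import numpy as np

def simplified_torrance_sparrow(theta, K_D, K_S, sigma_alpha):
    """Intensity of one color band under (EQ43), where the light source
    and viewing directions coincide at angle theta from the normal."""
    return (K_D * np.cos(theta)
            + K_S / np.cos(theta) * np.exp(-theta**2 / (2 * sigma_alpha**2)))
```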

As in our analysis in Chapter 3, only reflection bounced once from the light source is considered here. Therefore, the reflection model is valid only for convex objects, and it cannot represent reflection which bounces more than once (i.e., interreflection) on concave object surfaces.

Figure 54 Geometry for simplified Torrance-Sparrow model


4.5.2 Reflection Component Separation

The algorithm to separate the diffuse and specular reflection components was applied to the observed color sequence at each triangular patch. The red, green, and blue intensities of the observed color sequence are stored in the matrix $M$ as its columns (EQ31). Then, the matrix $G$ is computed from the matrix $M$ and the matrix $K$, which is estimated as described in Section 3.3 and Section 3.4. Finally, the diffuse and specular reflection components are given as shown in (EQ35) and (EQ36). This reflection component separation is repeated for all triangular patches of the object.

Some of the separation results are shown in Figure 55 and Figure 56. Figure 55 shows the separated reflection components with strong specularity. (The measured color sequence is shown in Figure 52 in the previous section.) Another example of the reflection component separation is given in Figure 56. In that case, the specular reflection component is relatively small compared to the diffuse reflection component. That example indicates that the separation algorithm can be applied robustly even if the specularity is not observed strongly. After the reflection component separation, the reflectance parameters can be estimated separately.

The separated reflection components at all triangular patches of a particular image frame can be used to generate the diffuse reflection image and the specular reflection image. The resulting diffuse and specular reflection images are shown in Figure 57 and Figure 58, which were generated from image frames 0 and 60, respectively.

Figure 55 Separated reflection components with strong specularity


Figure 56 Separated reflection components with little specularity

Figure 57 Diffuse image and specular image: example 1


Figure 58 Diffuse image and specular image: example 2

4.5.3 Reflectance Parameter Estimation for Segmented Regions

In this section, we discuss how to estimate the parameters of the reflectance model for each triangular patch by using the separated reflection components.

After the separation algorithm is applied, we obtain a sequence of diffuse reflection intensities and a sequence of specular reflection intensities for each triangular patch. This information is sufficient to estimate the reflectance parameters of the reflection model (EQ43) separately for the two reflection components.

As (EQ43) shows, the reflectance model is a function of the angle $\theta$ between the surface normal and the viewing direction. Therefore, for estimating the reflectance parameters $K_{D,m}$, $K_{S,m}$, and $\sigma_\alpha$, the angle $\theta$ has to be computed as the object posture changes. Since the projection transformation matrix is already given and the object orientation is known in the world coordinate system, it is straightforward to compute a surface normal vector and a viewing direction vector (or an illumination vector) at the center of each triangular patch. Thus, the angle $\theta$ between the surface normal and the viewing direction vector can be computed.

After the angle $\theta$ is computed, the reflectance parameters for the diffuse reflection component ($K_{D,m}$) and the specular reflection component ($K_S$ and $\sigma_\alpha$) are estimated separately by the Levenberg-Marquardt method [60]. In our experiment, the camera output is calibrated so that the specular reflection color has the same value in the three color channels. Therefore, only one color band is used to estimate $K_S$ in our experiment.


By repeating the estimation procedure for all triangular patches, we can estimate the diffuse reflection component parameters for all triangular patches, as long as those patches are illuminated in one or more frames of the image sequence.

On the other hand, the specular reflection component can be observed from only a limited range of viewing directions. Due to this fact, the specular reflection component can be observed at only a small subset of all triangular patches. We cannot estimate the specular reflection component parameters for those patches in which the specular reflection component is not observed. Even if the specular reflection component is observed, the parameter estimation can become unreliable if the specular reflection is not sufficiently strong.

To avoid this problem, we could increase the number of sampled object orientations and take more color images. However, that still cannot guarantee that all triangular patches show the specular reflection component. Moreover, taking more color images may not be practical, since more sampled images require more measurement time and data processing time.

For the above reasons, we decided to assign the specular reflection component parameters based on region segmentation. In our experiments, it is assumed that the object surface can be segmented into a finite number of regions which have uniform diffuse color, and that all triangular patches within each region have the same specular reflection component parameters. The result of the region segmentation is shown in Figure 59 (segmented regions are represented as grey levels).

By using the segmentation result, the specular reflection parameters of each region can be estimated from triangular patches with strong specularity. For estimating the specular reflection component parameters, several triangular patches (e.g., ten patches in our experiment) with the largest specular reflection component are selected for each of the segmented regions. The triangular patches with strong specularity can be easily selected after the reflection component separation.

Then, the specular reflection component parameters of the reflection model (EQ43) are estimated for each of the ten selected triangular patches. Finally, the average of the estimated parameters of the selected triangular patches is used as the specular reflection component parameters of the segmented region.
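Putting the region-based strategy together, a sketch of the selection and averaging step (the patch interface used here, max_specular and fit_specular, is hypothetical):

```python
import numpy as np

def assign_region_speculars(patches, region_ids, n_best=10):
    """Estimate (K_S, sigma_alpha) per region from its n_best patches
    with the strongest specular component, then assign the averaged
    values to every patch in the region."""
    for region in np.unique(region_ids):
        members = [p for p, r in zip(patches, region_ids) if r == region]
        members.sort(key=lambda p: p.max_specular, reverse=True)
        fits = np.array([p.fit_specular() for p in members[:n_best]])
        K_S, sigma_alpha = fits.mean(axis=0)
        for p in members:
            p.K_S, p.sigma_alpha = K_S, sigma_alpha
```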

In our experiments, the four largest segmented regions were used for specular reflection parameter estimation, and the remaining regions were not used. These unused regions were found to be located near or at the boundaries of the large regions. Hence, a surface normal of a triangular patch there does not necessarily represent the surface normal of the object surface at that location, which causes the estimation of the specular reflection parameters to be inaccurate. In addition, it is more likely that the specular reflection component is not seen in those small regions.


Figure 59 Result of region segmentation (regions 0 through 4, represented as grey levels)

4.6 Synthesized Images with Realistic Reflection

By using the recovered object shape and the reflection model parameters, images of the object under arbitrary illumination conditions can be generated. In this section, some examples of such images are shown to demonstrate the ability of the proposed method to produce realistic images. Point light sources located far from the object are used for generating the images.

For comparing synthesized images with real images of the object, the object model was rendered with illumination and viewing directions similar to those in our experimental setup. The illumination and viewing directions for input color image frame 0 were used to create the image shown in Figure 60. The input color image is shown in Figure 49. It is important to see that region 2 shows less specularity than region 0 and region 1. (See Figure 59 for the region numbers.) In addition, the specular reflection is widely distributed in region 2 because region 2 has a large reflectance parameter $\sigma_\alpha$.

Another example is shown in Figure 61. The object model is rendered under illumination and viewing conditions similar to those of input color image frame 60. Figure 62 shows the object illuminated by two light sources; the arrows in the image represent the illumination directions.

Table 1 Estimated parameters of the specular reflection component

region #     $K_S$      $\sigma_\alpha$
0            134.58     0.091
1            111.32     0.119
2             38.86     0.147
3             39.87     0.177


Figure 60 Synthesized image 1 (shown in color in the Color Figures chapter)

Figure 61 Synthesized image 2 (shown in color in the Color Figures chapter)


Figure 62 Synthesized image 3 (shown in color in the Color Figures chapter)
The arrows in the image represent the illumination directions.

4.7 Summary

We developed a new method for estimating object surface shape and reflectance parameters from a sequence of range and color images of the object.

A sequence of range and color images is taken by changing the object posture, which, in our image acquisition system, is controlled by a robotic arm. First, the object shape is recovered from the range image sequence as a collection of triangular patches. Then, the sequence of input color images is mapped onto the recovered object shape to determine an observed color sequence at each triangular patch individually. The observed color sequence is separated into the diffuse and specular reflection components. Finally, parameters of the Torrance-Sparrow reflection model are estimated separately at each triangular patch. By using the recovered object shape and the estimated reflectance parameters associated with each triangular patch, highly realistic images of the real object can be synthesized under arbitrary illumination conditions. The proposed approach has been applied to real range and color images of a plastic object, and its effectiveness has been successfully demonstrated by constructing synthesized images of the object under different illumination conditions.


Chapter 5

Object Modeling from Range and Color Images:

Object Models With Texture

In Chapter 4, we introduced a method for creating a three-dimensional object model from a sequence of range and color images of the object. The object model is created as a triangular surface mesh, each triangle of which is assigned parameters of the Torrance-Sparrow reflection model.

The method is based on the assumption that the object surface can be segmented into a finite number of regions, each of which has uniform diffuse color and the same specular reflectance. All triangles within each region are then assigned the same specular reflectance parameters. However, this assumption does not hold for objects with detailed diffuse texture or varying specular reflectance.

Therefore, in this chapter, we extend our object modeling method to objects with texture. In particular, our modeling method is extended in the following two respects.

The first is dense estimation of surface normals on the object surface. In Chapter 4, surface normals were computed as polygonal normals from a reconstructed triangular surface mesh model. Polygonal normals approximate real surface normals fairly well when object surfaces are relatively smooth and do not have high-curvature points. However, the accuracy of polygonal normals becomes poor when the object surface has high-curvature points and the resolution of the triangular surface mesh model is low, i.e., when a small number of triangles is used to represent the object shape. In this chapter, rather than using polygonal normals, we compute surface normals densely over the object surface by using the lowest-level input, i.e., the 3D points from the range images. We consider regular grid points within each triangle (Figure 63). Then, a surface normal is estimated at each of the grid points. With dense surface normal information, we can now analyze subtle highlights falling onto a single triangle of the object shape model.

The second is dense estimation of reflectance parameters within each triangle of the object shape model. In the previous chapter, we assumed uniform diffuse reflectance and specular reflectance within each triangle, based on the belief that the resolution of the object shape model is high enough. However, this strategy does not work well when the object has dense texture on its surface. One way to solve this problem is to increase the resolution of the object shape model; in other words, we could increase the number of triangles representing the object shape until each triangle approximately corresponds to a surface region of uniform reflectance. However, this is not a practical solution, since it quickly increases the required storage for the object model.

Instead, as with the dense estimation of surface normals, we estimate reflectance parameters at regular grid points within each triangle. Then, the densely estimated reflectance parameters are used together with the estimated surface normals for synthesizing object images with realistic shading, including subtle highlights on object surfaces.

Figure 63 Object modeling with reflectance parameter mapping

Each grid point stores a surface normal and reflectance parameters $(n_x, n_y, n_z, K_{D,R}, K_{D,G}, K_{D,B}, K_S, \sigma_\alpha)$.

This chapter is organized as follows. Section 5.1 describes the estimation of dense surface normals from measured 3D points. Section 5.2 describes our method for estimating the diffuse reflection parameters. Section 5.3 explains the estimation of the specular reflection parameters in our object modeling method. Section 5.4 shows experimental results. Finally, Section 5.5 presents a summary of this chapter.

5.1 Dense Surface Normal Estimation

The marching cubes algorithm used in our shape reconstruction method generally produces a large number of triangles whose sizes vary significantly. Therefore, it is desirable to simplify the reconstructed object surface shape by reducing the number of triangles. We used the mesh simplification method developed by Hoppe et al. [27] for this purpose.

One disadvantage of using the simplified object model is that a surface normal computed from the simplified model does not approximate the real surface normal accurately, even though the object shape is preserved reasonably well. As a result, small highlights observed within each triangle cannot be analyzed correctly, and therefore they cannot be reproduced in synthesized images. For this reason, we compute surface normals at regular grid points, e.g., $20 \times 20$ points, within each triangle using the 3D points measured in the input range images.

The surface normal at a grid point $P_g$ is determined from a least-squares best-fitting plane to all neighboring 3D points whose distances to the point $P_g$ are shorter than some threshold (Figure 64). The surface normal is computed as an eigenvector of the covariance matrix of the neighboring 3D points, specifically the eigenvector associated with the eigenvalue of smallest magnitude. The covariance matrix $C$ of the $n$ 3D points $[X_i, Y_i, Z_i]^T$, with centroid $[\bar{X}, \bar{Y}, \bar{Z}]^T$, is defined as:

$$C = \sum_{i=1}^{n} \begin{bmatrix} X_i - \bar{X} \\ Y_i - \bar{Y} \\ Z_i - \bar{Z} \end{bmatrix} \begin{bmatrix} X_i - \bar{X} & Y_i - \bar{Y} & Z_i - \bar{Z} \end{bmatrix} \qquad \text{(EQ44)}$$

The surface normals computed at regular grid points within each triangle are later used for mapping dense surface normals onto the triangular mesh of the object shape. The mapped surface normals are used both for reflectance parameter estimation and for rendering color images of the object.
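The eigenvector computation of (EQ44) takes only a few lines with numpy; the sketch below assumes the neighboring 3D points have already been gathered for one grid point:

```python
import numpy as np

def surface_normal(neighbors):
    """Normal at a grid point: the eigenvector of the covariance
    matrix (EQ44) of the neighboring 3D points (n x 3 array) that
    has the smallest eigenvalue."""
    centered = neighbors - neighbors.mean(axis=0)
    C = centered.T @ centered                # 3 x 3 covariance (EQ44)
    eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues ascending
    n = eigvecs[:, 0]                        # smallest-eigenvalue axis
    return n                                 # sign may need flipping outward
```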


Figure 64 Surface normal estimation from input 3D points (the principal axes of the neighboring points determine the fitted plane)

5.3 Specular Reflection Parameter Estimation

Like the diffuse reflection parameter estimation, the specular reflection parameter estimation computes the parameters ($K_{S,R}$, $K_{S,G}$, $K_{S,B}$, and $\sigma$) using the angle $\theta_r$ and the angle $\alpha$ in the reflection model (EQ27). However, there is a significant difference between estimation of the diffuse and specular reflection parameters. The diffuse reflection parameters can be estimated as long as the object surface is illuminated and viewed from the camera. On the other hand, the specular reflection component is usually observed only from a limited range of viewing directions. Therefore, the specular reflection component can be observed at only a small portion of the object surface in the input color image sequence; that is, we cannot estimate the specular reflection parameters for the rest of the object surface. Even where the specular reflection component is observed, the parameter estimation can become unreliable if the specular reflection component is not sufficiently strong, or if the separation of the two reflection components is not performed well.

For the above reasons, unlike the diffuse reflection parameter estimation, we estimate the specular parameters only at points on the object surface where the parameters can be computed reliably. Then we interpolate the estimated specular reflection parameters over the object surface to assign parameters to the rest of the object surface.

For the specular reflection parameters to be estimated reliably, the following three conditions are necessary at a point on the object surface:

1. The two reflection components are separated reliably. Because the diffuse and specular reflection components are separated using the difference of the colors of the two components (Section 3.2), those color vectors should differ as much as possible. This can be examined via the saturation of the diffuse color (Figure 65). Since the light source color is generally close to white (saturation = 0), a high saturation value of the diffuse color implies that the diffuse and specular reflection colors differ.

2. The magnitude of the specular reflection component is as large as possible.

3. The magnitude of the diffuse reflection component is as large as possible. Although this condition might seem unnecessary, we found empirically that the specular reflection parameters can be obtained more reliably when it is satisfied.



Figure 65 Diffuse saturation shown in the RGB color space

Taking these three conditions into account, we select a fixed number of vertices with the largest values of $v = (\text{diffuse saturation}) \times (\text{maximum specular intensity}) \times (\text{maximum diffuse intensity})$ as surface points suitable for estimating the specular reflection parameters. A sketch of this selection follows.
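A small sketch of this ranking, assuming per-vertex arrays for the three quantities (all names are ours, not the thesis's):

```python
import numpy as np

def select_vertices(diffuse_sat, max_spec, max_diff, num=100):
    """Rank mesh vertices by v = (diffuse saturation) x (max specular
    intensity) x (max diffuse intensity) and keep the num best ones.
    Inputs are per-vertex 1D arrays of equal length."""
    v = diffuse_sat * max_spec * max_diff
    return np.argsort(v)[::-1][:num]   # indices of the num largest scores
```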

After the specular reflection parameters $K_S$ and $\sigma$ are estimated at the selected vertices, the estimated values are linearly interpolated based on distance on the object surface, so that the specular reflection parameters are obtained at regular grid points within each triangle of the object surface mesh. The obtained specular reflection parameters are then stored as two specular reflection parameter images (a $K_S$ image and a $\sigma$ image), in the same manner as the surface normal image.

5.4 Experimental Results

We applied our object modeling method to real range and color images taken with the image acquisition system described in Section 4.2. The target object used in this experiment is a ceramic mug whose height is approximately $100\,\mathrm{mm}$. Using the image acquisition system, a sequence of range and color images of the object was obtained as the object was rotated at a fixed angle step. Twelve range images and 120 color images were used in this experiment. Figure 66 shows four frames of the input range images as triangular surface patches. Figure 67 shows the sequence of input color images of the mug; six frames out of 120 are shown as examples.

The volumetric method for merging multiple range images described in Section 4.3 was

(Figure 65 labels: R, G, B color axes; diffuse color vector; specular color vector; saturation.)


applied to the input range image sequence to recover the object shape as a triangular mesh model. Figure 68 shows the result of the object shape reconstruction. In this example, 3782 triangles were generated by the marching cube algorithm.

Figure 66 Input range data (4 out of 12 frames are shown)

Figure 67 Input color images (6 out of 120 frames are shown)

frame 0 frame 3 frame 6 frame 9

frame 0 frame 20 frame 40

frame 60 frame 80 frame 100


Subsequently, the recovered object shape was simplified by using Hoppe's mesh simplification method [27]. In our experiment, the total number of triangles was reduced from 3782 to 488 (Figure 69). In the triangular mesh model initially generated by the marching cube algorithm, triangle sizes vary significantly, which is typical of marching cube outputs. After simplification, the triangle sizes in the mesh model are more regular, which is desirable for object modeling.

By using the simplified object shape model and all 3D points measured in the input range image sequence, we computed dense surface normals over the object surface. In this example, surface normals were estimated at $20 \times 20$ grid points within each triangle. The estimated surface normals were then stored as a three-band surface normal image.

The estimated surface normals are compared with polygonal normals computed from the simplified triangular mesh model in Figure 70. In the figure, surface normals at the center of each triangle are displayed: surface normals estimated from 3D points are shown in green, and polygonal normals are shown in red. As we can see in the figure, there is a significant difference between the estimated surface normals and the polygonal normals. Thus, reflectance parameter estimation would fail if polygonal normals were used instead of surface normals estimated from 3D points.

Figure 68 Recovered object shape



Figure 69 Simplified shape model. The object shape model was simplified from 3782 to 488 triangles.

Figure 70 Estimated surface normals and polygonal normals (see footnote 4)

Estimated surface normals are shown in green.Polygonal normals are shown in red.

4. This figure is shown in color in the Color Figures chapter.


The sequence of input color images was mapped onto the simplified triangular mesh model of the object shape as described in Section 4.4. Figure 71 shows the result of the mapping; six out of 120 frames are shown in the figure.

Then, as explained in Section 4.5.2, the diffuse reflection component and the specular reflection component were separated from the observed color sequence at each point on the object surface. The separation result was used for estimating the parameters of the Torrance-Sparrow reflection model given in (EQ27).

The diffuse reflection parameters were estimated at regular grid points within each triangle, just as the surface normals were estimated. The resolution of the regular grid points was $80 \times 80$ in our experiment, while the resolution was $20 \times 20$ for the surface normal estimation. The higher resolution was necessary to capture details of the diffuse texture on the object surface. The resolution can be determined by the average number of pixels which fall onto one triangle in the color images: resolutions higher than this average do not capture any more information than is present in the input color images, while if the resolution is too low, object images synthesized using the estimated diffuse reflectance parameters become blurred because high frequency components in the input color images are lost. A sketch of this choice follows.
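A one-line heuristic realizing this rule, assuming the per-triangle pixel counts are known (illustrative only, not the thesis's exact procedure):

```python
import numpy as np

def grid_resolution(pixels_per_triangle):
    """Pick an n x n parameter-grid resolution from the average number of
    image pixels covering one triangle: n^2 close to that count neither
    discards image detail nor invents resolution that is not in the images."""
    return int(np.ceil(np.sqrt(np.mean(pixels_per_triangle))))
```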

Figure 72 shows the result of the diffuse reflection parameter estimation, where the estimated parameters are visualized as surface texture on the mug.



Figure 71 Color image mapping result. 6 out of 120 color images are shown here.

Figure 72 Estimated diffuse reflection parameters

frame 0 frame 20 frame 40

frame 60 frame 80 frame 100


For estimating the specular reflection parameters reliably, we selected suitable surface points on the object surface as described in Section 5.3. Figure 73 shows the 100 vertices, out of a total of 266, selected for specular parameter estimation.

In our experiment, we used the vertices of the triangular mesh model as candidates for parameter estimation. However, the use of the triangle vertices as initial candidates for specular parameter estimation is not essential to our method; without any changes to the method, we could also use other points on the object surface as candidates. We found, however, that in most cases using only the triangle vertices was enough to find suitable points for specular parameter estimation.

Then, the specular parameters were estimated at those selected vertices. Subsequently, the estimated values were linearly interpolated based on distance on the object surface, so that the specular reflection parameters were obtained at $20 \times 20$ grid points within each triangle. The resulting specular parameters were then stored as two specular reflection parameter images (a $K_S$ image and a $\sigma$ image), just as the estimated surface normals were stored in the surface normal image. For the specular parameter estimation, we used a lower resolution ($20 \times 20$) than for the diffuse reflection parameter estimation. This is because specular reflectance usually does not change as rapidly as the diffuse reflectance, i.e., the diffuse texture on the object surface. Therefore, the resolution of $20 \times 20$ was enough to capture the specular reflectance of the mug. A sketch of the interpolation step follows.
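The thesis interpolates linearly by distance along the surface; the sketch below substitutes plain inverse-distance weighting in 3D, a simplification that behaves similarly on smooth meshes, not the thesis's exact scheme:

```python
import numpy as np

def interpolate_specular(grid_pts, vert_pts, vert_ks, vert_sigma, eps=1e-9):
    """Spread K_S and sigma, estimated at selected vertices, over all grid
    points. grid_pts: (M, 3); vert_pts: (K, 3); vert_ks, vert_sigma: (K,)."""
    d = np.linalg.norm(grid_pts[:, None, :] - vert_pts[None, :, :], axis=2)
    w = 1.0 / (d + eps)                   # weight nearby vertices more heavily
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per grid point
    return w @ vert_ks, w @ vert_sigma    # interpolated (M,) K_S and sigma
```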

Figure 73 Selected vertices for specular parameter estimation. 100 out of 266 vertices were selected.



Figure 74 Interpolated $K_S$ and $\sigma$

Finally, using the reconstructed object shape, the surface normal image (Section 5.1), the diffuse reflection parameter image (Section 5.2), the specular reflection parameter image (Section 5.3), and the reflection model (EQ27), we synthesized color images of the object under arbitrary illumination/viewing conditions.

Figure 75 shows synthesized images of the object with two point light sources. Note that the images reproduce highlights on the object surface naturally. Unlike the object modeling method described in Chapter 4, the diffuse texture on the object surface was satisfactorily reproduced in the synthesized images, in spite of the reduced number of triangles in the object shape model.

To compare synthesized images with the input color images of the object, the object model was rendered using the same illumination and viewing directions as some of the input color images. Figure 76 shows two frames of the input color image sequence, as well as two synthesized images that were generated using the same illumination/viewing conditions as those used for the input color images. It can be seen that the synthesized images closely resemble the corresponding real images. In particular, highlights, which generally are a very important cue of surface material, appear naturally on the side and the handle of the mug in the synthesized images.

However, we can see that the synthesized images are slightly more blurred than the original color images, e.g., at the eye of the painted fish in frame 50. This comes from slight



error in the measured object transformation matrix $T$ (Section 4.2) due to imperfect calibration of the robotic arm. Because of the error in the object transformation matrix $T$, the projected input color images (Section 4.4) were not perfectly aligned on the object surface. As a result, the estimated diffuse reflection parameters were slightly blurred. This blurring effect could be avoided if, after a color image is projected onto the object surface, the color image were aligned with previously projected images by a local search on the object surface.5 However, we have not yet tested this idea in our implementation.

5.5 Summary

In this chapter, we extended our object modeling method to objects with detailed surface texture. In particular, to analyze and synthesize subtle highlights on the object surface, our object modeling method estimates surface normals densely over the object surface. The surface normals are computed directly from a cloud of 3D points measured in the input range images, rather than from the polygonal surfaces; this gives a more accurate estimate of the surface normals.

In addition, the parameters of the Torrance-Sparrow reflection model are also estimated densely over the object surface. As a result, fine diffuse texture and varying specular reflectance observed on the object surface can be captured, and therefore reproduced in synthesized images. In particular, the specular reflection parameters were successfully obtained by identifying suitable surface points for estimation and by interpolating the estimated parameters over the object surface. Finally, highly realistic object images were synthesized using the recovered shape and reflectance information to demonstrate the feasibility of our method.

5. Personal communication with Richard Szeliski at Microsoft Corp. [81].



Figure 75 Synthesized object images (see footnote 6)

6. This figure is shown in color in the Color Figures chapter.


Figure 76 Comparison of input color images and synthesized images (see footnote 7)

7. This figure is shown in color in the Color Figures chapter.

frame 50

frame 80

input synthesized

input synthesized


Chapter 6

Reflectance Analysis under Solar Illumination

6.1 Background

Most algorithms for analyzing object shape and reflectance properties have been applied to intensity images taken in a laboratory. However, reports of applications to real intensity images of outdoor scenes have been very limited. Intensity images synthesized or taken in a laboratory setup are well controlled and are less complex than those taken outside under sunlight. For instance, in an outdoor environment, there are multiple light sources of different colors and spatial distributions, namely the sunlight and the skylight. The sunlight can be regarded as a yellow point light source whose movement is restricted to the ecliptic.4

The skylight, on the other hand, is a blue extended light source which appears almost uniform over the entire hemisphere. Moreover, there may be clouds in the sky, which makes modeling the skylight significantly more difficult.

Due to the sun's restricted movement, the problem of surface normal recovery becomes underconstrained under sunlight. For instance, if the photometric stereo method is applied to two intensity images taken outdoors at different times, two surface normals which are symmetric with respect to the ecliptic are obtained at each surface point. These two surface normals cannot be distinguished locally because both surface normal directions give

4. Ecliptic: The great circle of the celestial sphere that is the apparent path of the sun among the stars or of the earth as seen from the sun: the plane of the earth’s orbit extended to meet the celestial sphere.


exactly the same brightness at the surface point.

Another factor that makes reflectance analysis under solar illumination difficult is the multiple reflection components generated at the object surface. Reflection from object surfaces may have multiple components, such as the diffuse reflection component and the specular reflection component. In the previous chapters of this thesis, we used our algorithm to separate these two reflection components from an observed color sequence. Under solar illumination, we observe more than two reflection components on the object surface because both the sunlight and the skylight act as light sources. Therefore, additional care has to be taken to analyze color images taken in an outdoor environment.

In this chapter, we address the two issues involved in analyzing real outdoor intensity images taken under solar illumination: 1. the multiple reflection components, including highlights, and 2. the unique solution for surface normals under sunlight. We analyze a sequence of color images of an object in an outdoor scene. The color images are taken at different times, e.g., every 15 minutes, on the same day. Then, for each of the two problems, we show a solution and demonstrate its feasibility by using real images.

This chapter is organized as follows. The reflectance model that we use for analyzing outdoor images under solar illumination is described in Section 6.2; the model takes into account two light sources of different spectral and spatial distributions. Separation of the multiple reflection components under solar illumination is explained in Section 6.3 and Section 6.4. A method to obtain two sets of surface normals for the object surface, and to choose the correct set, is discussed in Section 6.5. Experimental results from a laboratory setup and from the outdoor environment are shown in Section 6.6 and Section 6.7, respectively. A summary of this chapter is presented in Section 6.8.

6.2 Reflection Model Under Solar Illumination

In outdoor scenes, there are two main light sources of different spectral and spatial distributions: the sunlight and the skylight. The sunlight acts as a moving point light source of finite size, while the skylight acts as an extended light source over the entire hemisphere.

Both the sunlight and the skylight observed at the earth's surface are generated by a very complex mechanism [48]. Solar radiation striking the earth's surface from above the atmosphere is attenuated in passing through the air by two processes: absorption and scattering. Absorption removes light from the beam and converts it to heat. Absorption does


not occur uniformly across the spectrum, but only at certain discrete wavelength regions determined by the absorbing molecule's internal properties. Scattering, while not absorbing energy, redirects it out of the beam and away from its original direction. It takes place at all visible wavelengths.

The probability that a single photon of sunlight will be scattered from its original direction by an air molecule is inversely proportional to the fourth power of the wavelength: the shorter the wavelength of the light, the greater its chance of being scattered. This means that, when we look at any part of the sky except directly toward the sun, we are more likely to see a blue photon of scattered sunlight than a red one, which causes the sky to appear blue. The result of this scattering process varies depending on the angular distance from the direct sunlight, resulting in a significant change of the spectral distribution of the skylight over the sky (Figure 77).

Also, the brightness of the sky is determined by the number of molecules in the line of sight: more air molecules mean a brighter sky. Therefore, the brightness of the skylight is not uniform over the sky, and the sky brightness increases to a maximum just above the horizon.

Another well known behavior of sunlight is the color and brightness of the low sun. As the sun approaches the horizon, it becomes dimmer, and its color changes from white to bright yellow, orange, and even red. At the same time, the spectral distribution of the sunlight changes widely, depending on the sun's location in the sky (Figure 78).

To make matters even more complicated, there are usually clouds in the sky, and the skylight then becomes a highly non-uniform extended light source. Therefore, it is very difficult to model both the sunlight and the skylight. Nevertheless, in order to analyze color images taken under solar illumination, it is necessary to have a reflection model and an illumination model which can represent light reflected from the sunlight and the skylight on object surfaces.


Figure 77 Comparison of the spectra of sunlight and skylight [48]. The spectra were taken on the solar circle at angular distances from the sun of 10, 45, 90, and 135 degrees, respectively. All spectra have been scaled to have the same value at 500 nm.

Figure 78 Change of color of the sun with altitude [48]. These spectra show the sun as viewed through 1.0, 1.5, 2.0, and 4.0 air masses. With increasing air mass the sun becomes dimmer and redder.


In this thesis, as a first step, we decided to use a simpler illumination model to analyze color images taken under solar illumination. In our analysis, we model the skylight as an extended light source uniformly distributed over the sky. In addition, the sunlight is modeled as a moving point light source whose spectral distribution differs from that of the skylight. We consider that this rather simple illumination model approximates the real solar illumination reasonably well as long as the sun is not close to the horizon.

Based on this simplified illumination model, the intensity of incident light from the sunlight and the skylight is represented as

$L_i(\theta_i, \lambda) = c_{sun}(\lambda)\, L_{sun}(\theta_i) + c_{sky}(\lambda)\, L_{sky}$ (EQ45)

where the angle $\theta_i$ represents the incident direction of the sun in the surface normal centered coordinate system of Figure 17, $c(\lambda)$ is the spectral distribution of the incident light, and $L(\theta_i)$ is a geometrical term of the light incident on the object surface. The subscripts $sun$ and $sky$ refer to the sunlight and the skylight, respectively. With this illumination model, the Torrance-Sparrow reflection model (EQ27) becomes

$I_m = K^{sun}_{D,m} \cos\theta_i + K^{sun}_{S,m} \frac{1}{\cos\theta_r} \exp\!\left(-\frac{\alpha^2}{2\sigma_\alpha^2}\right) + K^{sky}_m, \quad m = R, G, B$ (EQ46)

Note that the diffuse and specular reflection components from the skylight are constant with respect to the direction of the sun and the viewing direction. The resulting reflection model is illustrated in Figure 79; a sketch of evaluating (EQ46) is given after the figure.

Figure 79 Three reflection components from solar illumination

(Figure 79 labels: the sun, camera, reflecting surface; diffuse from sunlight; specular from sunlight; diffuse + specular from skylight.)
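The following is a direct transcription of (EQ46) for one color band, assuming angles are given in radians and all parameters are known; it is a sketch for clarity, not the thesis's code:

```python
import numpy as np

def radiance_eq46(theta_i, theta_r, alpha, kd_sun, ks_sun, sigma_a, k_sky):
    """Evaluate the reflection model (EQ46) for one color band m:
    diffuse from the sun + Torrance-Sparrow specular from the sun
    + a constant term from the uniform skylight."""
    diffuse = kd_sun * np.cos(theta_i)
    specular = ks_sun * np.exp(-alpha**2 / (2.0 * sigma_a**2)) / np.cos(theta_r)
    return diffuse + specular + k_sky
```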


In our analysis, the reflection model of (EQ46) is used both to remove the specular reflection component and for the shape recovery.

Using the reflection model of (EQ46), we analyze a sequence of color images of an object in an outdoor scene. In particular, we try to recover the object shape even if the object surface exhibits specularity. The color images are taken at different times (e.g., every 15 minutes) on the same day; therefore, the sun acts as a moving light source in the color image sequence.

As shown in the reflection model, light reflected under solar illumination contains three reflection components: the diffuse reflection component from the sunlight, the specular reflection component from the sunlight, and the reflection component from the skylight. We therefore need to isolate these three reflection components so that object shapes can be recovered. First, the reflection component from the skylight is removed from the observed color images. Then, the diffuse and specular reflection components from the sunlight are separated by using the reflection component separation algorithm introduced in Chapter 3. Finally, we can recover the object shape from the resulting diffuse reflection component from the sunlight.

6.3 Removal of the Reflection Component from the Skylight

As stated in Section 6.2, the diffuse and specular reflection components from the skylight are constant with respect to the sun's direction $\theta_i$ and the viewing direction $\theta_r$. Therefore, shadow regions from the sunlight should have uniform pixel intensities, since they are illuminated only by the skylight. In other words, pixel intensities in those regions do not have the reflection components from the sunlight, but only the reflection component from the skylight, $K_{sky}$. The value of the reflection component due to the skylight can be obtained as the average pixel intensity in the shadow regions of constant pixel intensity.

$K_{sky}$ is subtracted from all pixel intensities of the image to yield

$I_m = K^{sun}_{D,m} \cos\theta_i + K^{sun}_{S,m} \frac{1}{\cos\theta_r} \exp\!\left(-\frac{\alpha^2}{2\sigma_\alpha^2}\right), \quad m = R, G, B$ (EQ47)

The pixel intensity then has only the diffuse and specular reflection components from the sunlight. A sketch of this subtraction follows.
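A sketch of the estimation and subtraction, assuming a boolean mask marking the shadow region is available (the mask and the array layout are our assumptions):

```python
import numpy as np

def remove_skylight(images, shadow_mask):
    """Estimate K_sky per color band as the mean pixel color inside a region
    shadowed from the sun (lit by the skylight only), then subtract it from
    every pixel. images: (n_frames, h, w, 3) float array; shadow_mask:
    (h, w) boolean array selecting the constant-intensity shadow region."""
    k_sky = images[0][shadow_mask].mean(axis=0)        # one frame suffices here
    return np.clip(images - k_sky, 0.0, None), k_sky   # avoid negative values
```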



6.4 Removal of the Specular Component from the Sunlight

After the reflection component from the skylight is subtracted from the observed color sequence at each image pixel, we apply our algorithm for separating the diffuse and specular reflection components, described in Section 3.2, to the resulting sequence of observed colors. That removes the specular reflection component from the observed color sequence. As a result, the pixel intensities in the image have only the diffuse reflection component from the sunlight, and they can be modeled by the equation

$I_m = K^{sun}_{D,m} \cos\theta_i, \quad m = R, G, B$ (EQ48)

Since the pixel intensity now has only the diffuse reflection component from the sunlight, the intensities in the three color bands are redundant for the purpose of shape recovery. Thus, only one of the three color bands is used in surface normal estimation:

$I = K^{sun}_{D} \cos\theta_i$ (EQ49)

6.5 Obtaining Surface Normals

6.5.1 Two Sets of Surface Normals

After the specular reflection removal, the input image sequence has only the diffuse reflection component from the sunlight. Usually, shape-from-shading or photometric stereo is used for recovering shape information from diffuse reflection images, and those techniques were initially implemented for shape recovery in our experiments.

However, we found that, unfortunately, neither of those techniques could yield correct object shapes. This problem is attributed to various sources of noise in the input images, such as incomplete removal of the specular reflection component. Shape-from-shading and photometric stereo use a very small number of images for surface normal computation, which leads to an erroneous object shape when the images contain slight errors in pixel intensities.

Therefore, we decided to use another algorithm to determine surface normals from the input image sequence. The algorithm makes use of more images in the sequence, rather than



just a few of them. We describe the algorithm in this section.

Figure 80 Sun direction, viewing direction and surface normal in 3D case

To represent the sun's motion in three-dimensional space, we consider the Gaussian sphere as shown in Figure 80. The ecliptic is represented as a great circle on the Gaussian sphere. The viewing direction $v$ is fixed, and the direction of the sun $s$ is specified as a function of $\theta_s$ in the plane of the ecliptic.

Consider the intensity of one pixel as a function $I(\theta_s)$ of the sun direction $\theta_s$. If the maximum intensity $I'$ is observed when the sun is located at the direction $\theta_s'$, the surface normal of the image pixel should be located somewhere on the great circle $P_1 P_2$ which is perpendicular to the ecliptic. For robust estimation, the maximum pixel intensity $I'$ and the corresponding sun direction $\theta_s'$ are found by fitting a second degree polynomial to the observed pixel intensity sequence. According to the reflectance model (EQ49), the angle $\varphi$ between the sun direction $s$ and the surface normal directions $n_1$ and $n_2$ on the great circle $P_1 P_2$ is given by

$\varphi = \arccos\!\left(\frac{I'}{K^{sun}_{D}}\right)$ (EQ50)

Here, the reflectance parameter $K^{sun}_D$ has to be known in order to compute $\varphi$. If we assume that at least one surface normal on the object surface coincides with the sun direction $s$, the reflectance parameter is simply obtained as the intensity of the pixel where $I = K^{sun}_D$; that pixel can be found simply as the brightest pixel in the image.



In practice, for robustness, the estimate of the reflectance parameter is computed as the average of the brightest pixel intensities over multiple images of the input sequence. A sketch of the per-pixel fit follows.
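A per-pixel sketch of the fit, assuming the intensity samples and sun angles are given as arrays; the quadratic-vertex step presumes the fitted parabola opens downward (all names are illustrative):

```python
import numpy as np

def normal_candidates(theta_s, intensities, kd_sun):
    """For one pixel, fit a second degree polynomial to I(theta_s), locate
    the sun direction theta_max that maximizes it, and return (theta_max,
    phi); phi from (EQ50) fixes the two candidate normals n1, n2 on the
    great circle through theta_max, perpendicular to the ecliptic."""
    a, b, c = np.polyfit(theta_s, intensities, 2)  # I = a t^2 + b t + c
    theta_max = -b / (2.0 * a)                     # vertex of the parabola (a < 0)
    i_max = np.polyval([a, b, c], theta_max)
    phi = np.arccos(np.clip(i_max / kd_sun, -1.0, 1.0))
    return theta_max, phi
```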

6.5.2 Unique Surface Normal Solution

Due to the sun's restricted movement on the ecliptic, we cannot obtain a unique solution for the surface normal by applying photometric stereo to outdoor images taken at different times on the same day. This fact was pointed out by Woodham [95] when he introduced the photometric stereo method, and as a result, there have been no reported attempts to recover an object shape by applying photometric stereo to outdoor images. However, Onn and Bruckstein [58] recently studied photometric stereo applied to two images and showed that surface normals can be determined uniquely even if only two images are used, with the exception of some special cases.

By using the algorithm described in the previous section, two sets of surface normals $n_1$ and $n_2$ are obtained; in gradient form,

$n_1 = (-p_1, -q_1, 1), \qquad n_2 = (-p_2, -q_2, 1)$ (EQ51)

We used the constraint which Onn and Bruckstein called the integrability constraint in order to choose the correct set of surface normals out of the two. It works as follows. First, we compute the two surface normals $n_1$ and $n_2$ for all pixels. Then, the object surface is segmented into subregions by defining a boundary where the two surface normals are similar; in practice, a pixel is included in the boundary if the angle between $n_1$ and $n_2$ is within a threshold. Then, for each subregion $R$, two integrals are computed:

$\int_{(x,y) \in R} \left( \frac{\partial p_1}{\partial y} - \frac{\partial q_1}{\partial x} \right)^2 dx\, dy$ (EQ52)

$\int_{(x,y) \in R} \left( \frac{\partial p_2}{\partial y} - \frac{\partial q_2}{\partial x} \right)^2 dx\, dy$ (EQ53)

Theoretically, the correct set of surface normals produces an integral value equal to zero; in practice, the correct set can be chosen as the one whose integral value is closer to zero. Onn and Bruckstein showed that the integrability constraint is always valid except for a few rare cases where the object surface can be represented as $H(x, y) = F(x) + G(y)$



in a suitably defined coordinate system. In our experiments, this exceptional case does not occur, so the integrability constraint can be used for obtaining a unique solution for surface normals. A sketch of the residual computation follows.
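A sketch of evaluating (EQ52)/(EQ53) on a discrete gradient field using finite differences, a simplification of the continuous integral (names are ours):

```python
import numpy as np

def integrability_residual(p, q):
    """Squared-curl integral of (EQ52)/(EQ53) for one candidate gradient
    field (p, q) over a subregion, approximated on an image grid."""
    dp_dy = np.gradient(p, axis=0)
    dq_dx = np.gradient(q, axis=1)
    return np.sum((dp_dy - dq_dx) ** 2)

# Per subregion R, keep the candidate set with the smaller residual, e.g.:
# correct = n1 if integrability_residual(p1, q1) < integrability_residual(p2, q2) else n2
```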

6.6 Experimental Results: Laboratory Setup

In the previous sections, we described the methods for the two issues that are essential for analyzing real color images taken under the sun: the separation of the reflection components from the two light sources, the sunlight and the skylight; and the unique solution for surface normals.

We tested our solution for unique surface normals (Section 6.5) by using a color image sequence taken in a laboratory setup. A SONY CCD color video camera module, model XC-711, was used to take all images. In our experimental setup, a dielectric object (a plastic dinosaur toy) was placed at the origin of the world coordinate system, and the color video camera was placed above the object. The sunlight was simulated by a small xenon lamp attached to a PUMA 560 manipulator, which moves around the object in its equatorial plane. The skylight was not simulated in our experimental setup; the effect of the skylight and the separation of the reflection components from the skylight are described in Section 6.7.

The sequence of color images was taken as the point light source was moved around the object from $\theta_s = -70°$ to $\theta_s = 70°$ in steps of $5°$. The specular reflection component was removed from the input image sequence by using the same algorithm as in Section 3.2. In this experiment, the specular reflection color was measured directly rather than estimated as described in Section 3.3. The 8th frame of the resulting diffuse reflection image sequence is shown in Figure 81.

The algorithm for obtaining two sets of surface normals described in Section 6.5 was applied to the red band of the resulting diffuse reflection image sequence. The two computed sets of surface normals $n_1$ and $n_2$ are shown in Figure 82 as needle diagrams.

Subsequently, the integrability constraint was applied to determine the correct set of surface normals uniquely. First, the object surface was segmented into subregions by defining a boundary where the two surface normals $n_1$ and $n_2$ are similar. The obtained boundary is shown in Figure 83. Theoretically, the boundary should be connected and narrow; in practice, the obtained boundary tends to be wide in order to guarantee its connectivity. Thus, a thinning operation, in our case the medial axis transformation,

H x y,( ) F x( ) G y( )+=



was applied to narrow the boundary. Figure 84 shows the resulting boundary after the medial axis transformation.

Figure 81 Diffuse reflection component image (frame 8)

Figure 82 Two sets of surface normals


6.7 Experimental Results: Outdoor Scene (Water Tower)

The extracted region of interest is shown in Figure 89.

The next step was to remove the reflection component from the skylight. According to the reflection model under solar illumination (EQ46), the reflection component due to the skylight is represented as a constant value $K_{sky}$. This constant can be estimated as the average pixel color of a uniform intensity region which is in shadow from the sunlight. In our experiment, the region of constant pixel color was selected manually, as shown in Figure 89. The measured pixel color within the region is $(r, g, b) = (14.8, 17.2, 19.5)$, with variance $(0.2, 0.3, 0.6)$. This pixel color vector was subtracted from the intensities of all pixels to eliminate the effects of the skylight.

After this operation, the color images contain only the reflection components due to the sunlight. The resulting image is shown in Figure 90. It can be seen that the image has more contrast between the illuminated region and the shadow region, compared with the image containing the reflection component due to the skylight (Figure 89). All frames of the input color images were processed in the same manner to remove the reflection component due to the skylight.

Figure 88 Observed color image sequence of the water tower. Six frames out of 23 are shown here.



Figure 89 Extracted region of interest

Figure 90 Water tower image without the sky reflection component

After the removal of the reflection component from the skylight, the sequence of color images included two reflection components: the diffuse and specular reflection components due to the sunlight, as modeled by (EQ47). The algorithm to separate the diffuse and specular reflection components was applied to the resulting color image sequence. At each pixel, the two reflection components were separated, and only the diffuse reflection component was used for further shape recovery. As an example, one frame of the resulting color image sequence is shown in Figure 91. The image includes only one reflection component, the diffuse reflection component from the sunlight, and the water tower appears to have a uniform reflectance.



Figure 91 Water tower image after highlight removal

The algorithm to determine surface normals uniquely from an image sequence was applied to the red band of the resulting color image sequence. Figure 92 shows the recovered surface normals of the water tower. Note that surface normals are not obtained in the lower right part of the water tower; in that region, the maximum intensity is not observed at each pixel over the image sequence. To recover surface normals there, we would need to take an input image sequence over a longer period of time than this experiment encompassed. Alternatively, other techniques such as photometric stereo could be used for recovering surface normals in that region. Finally, the relaxation method for calculating height from surface normals was applied. The recovered shape of this part of the water tower is shown in Figure 93.

Figure 92 Surface normals


Chapter 7

Conclusions

This thesis explored reflectance analysis of a sequence of color images. In particular, our work focused on automatic generation of three dimensional object models for synthesizing computer graphics images with realistic shadings. To achieve this goal, we developed novel methods for recovering object shapes and reflectance properties by observing real objects. Furthermore, we developed a method for analyzing color images taken under solar illumination.

7.1 Summary

This thesis is based on our new framework for analyzing a sequence of color images. A series of color images is examined in a four dimensional space whose axes are the RGB color axes and one geometrical axis. The framework is called goniochromatic space analysis (GSA). It is intended to incorporate the strengths of two distinct techniques in physics-based computer vision: RGB color space analysis and the photometric sampling technique. The significance of GSA lies especially in its ability to analyze the change of observed color on object surfaces for different geometric relationships between the viewing direction, the illumination direction, and the surface normal.


7.1.1 Object Modeling from Color Image Sequence

Based on GSA, we proposed a new algorithm for separating the two fundamental reflection components predicted by the dichromatic reflection model: the diffuse reflection component and the specular reflection component. Unlike previously developed color analysis methods, the two reflection components are separated from an observed color sequence at each point on the object surface. As a result, our separation method can be applied to a wide range of objects with complex surface shapes and reflectance properties.

In addition, for objects whose surface normals lie in a plane containing the viewing and illumination directions, our method can estimate surface shape and reflectance parameters simultaneously from a sequence of color images taken with a moving light source. Our method has been successfully applied to both dielectric and metal objects with different reflectance properties.

7.1.2 Object Modeling from Range and Color Images

In our object modeling method from a color image sequence taken with a moving light source, surface shape and reflectance parameters were estimated for only a part of the object surface. Also, the object was limited to have surface normals restricted to a 2D plane.

For creating complete object models, we developed another method which uses a sequence of range and color images. From the range images, a complete object surface shape is reconstructed as a triangular mesh model. Then, with the reconstructed object surface shape, our method estimates the reflectance parameters of the object. For objects without surface texture, our method estimates the reflectance parameters based on region segmentation over the object surface. In addition, the method has been further extended to handle objects with detailed surface textures and varying specular reflectance.

We have shown that our method can create three dimensional object models which can be used for synthesizing computer graphics images with convincing shadings. Our method is one of the first object modeling methods which can be used in practice for synthesizing computer graphics images with realistic shadings, e.g., diffuse texture and highlights on object surfaces.


7.1.3 Reflectance Analysis under Solar Illumination

We have studied a new approach for analyzing color images taken in an outdoor scene. First, we addressed the difficulties involved in analyzing real outdoor images under solar illumination: 1. multiple reflection components due to multiple light sources of different spectral and spatial distributions, namely the sunlight and the skylight, and 2. ambiguity in surface normal determination caused by the sun's restricted motion.

For each of these two problems, solutions were introduced based on the reflectance model under solar illumination. Finally, the effectiveness of the algorithms was successfully demonstrated by using real color images taken both in a laboratory setup simulating the sunlight and in an outdoor environment. Although the assumptions that we made for this analysis are not necessarily valid in many real situations, e.g., cloudy skies, we believe our analysis has demonstrated the possibility of analyzing object shape and reflectance properties by using images taken in outdoor scenes.

7.2 Thesis Contributions

In this thesis, we have studied methods for creating three dimensional object models by observing real objects. In particular, we have developed a new method for analyzing a sequence of color images to estimate both object surface shape and reflectance properties, so that the resulting object models can be used for synthesizing color object images with highly realistic shading. The object modeling methods proposed in this thesis were successfully applied to real images taken both in an indoor laboratory setup and in an outdoor scene under solar illumination.

The specific contributions of this thesis are:

1. We have proposed a new framework for analyzing a sequence of color images of an object taken by changing the angular illumination-viewing conditions.

2. By using the new framework, we have developed a novel method for separating the diffuse and specular reflection components. We have shown experimentally that our method can be successfully applied to real images of complex objects with non-uniform reflectance. The quality of the reflection component separation by our method is better than that of other conventional methods which do not utilize additional information such as the polarization of reflected light.


3. We have developed a method for creating three dimensional object models. This method can produce highly realistic computer graphics images, and it can be applied to a wide variety of real objects with very complex shapes and reflectance properties.

4. We have introduced a new approach for analyzing outdoor scenes. A sequence of color images of an object under solar illumination was analyzed to isolate the multiple reflection components from the object surface and to recover the object shape. This is the first attempt to analyze outdoor images based on a reflectance model and an illumination model for outdoor scenes.

7.3 Directions for Future Research

7.3.1 More Complex Reflectance Model

The Torrance-Sparrow reflection model was used in this thesis to represent the reflection of light on object surfaces. The model was further simplified by assuming that both the Fresnel reflection coefficient and the geometrical attenuation factor are constant. As a result, this simplified model cannot represent subtle phenomena such as the change of the spectral distribution of the specular reflection component within highlights.

A natural extension of our reflectance analysis would involve estimation of the Fresnel reflection coefficient and the geometrical attenuation factor. Alternatively, other reflection models may be used. For instance, the Ward model [87] is an empirical reflection model which is simple enough to be easily implemented, yet accurate for most materials. If a more complete reflection model is desirable, the reflection model proposed by He et al. [19] might be a good choice. That model was developed based on wave optics and was experimentally verified to model real reflection accurately, although it is considerably more expensive to compute than other models.

One important consideration in the selection of an appropriate reflection model is that there is a trade-off between the sampling effort and the completeness of the reflection model. If we use a more complex reflection model which can represent real reflection very accurately, it requires more samples, i.e., more input color images, of the observed reflection. This comes back to our discussion, in the introduction of this thesis, as to whether we should use a BRDF or a parametric reflection model. If we increase the number of sampled images sufficiently, then a BRDF of the object surface can be approximated from the sampled


images. In general, however, the use of a BRDF is too expensive for modeling a wide range of real objects.

Therefore, an appropriate reflection model should be chosen depending on the application and the cost of sampling images. The selection of an appropriate reflection model should become an interesting and important research topic as object modeling techniques are applied to real applications.

7.3.2 Planning of Image Sampling

Following our discussion of selecting an appropriate reflection model, there is another important issue to be considered: given a certain reflection model, how should images be sampled so that the real reflection on object surfaces can be approximated most accurately? Currently, we determine empirically how many images should be sampled and from which directions a light source should illuminate the object. Therefore, our sampling strategy is far from optimal.

Other researchers have explored planning problems for illumination and observation. For instance, Solomon [76] recently examined illumination planning for photometric measurement to achieve the best accuracy of recovered object surface normals from a certain number of illumination directions. In this case, a reflection model and an illumination model are assumed to be known and are used for computing the predicted accuracy of surface normals for each set of light source directions.

In our analysis, the Torrance-Sparrow reflection model is assumed, and the illumination environment is well controlled (except in the case of reflectance analysis under solar illumination). This indicates that a similar approach could be applied to the problem of object modeling from observation. For example, as a first step, we can assume that the object shape is first reconstructed from range images. With the reconstructed object shape, we can predict what kinds of reflection components we will observe for each illumination/viewing direction. Then, we can plan the optimum set of illumination/viewing directions for sampling images, so that the specular reflection component can be observed over the largest area of the object surface.

7.3.3 Reflectance Analysis for Shape from Motion

In this thesis, we analyzed two types of image sequences for estimating object surface shape and reflectance properties. One is a sequence of color images taken by moving a light


source. The other is a sequence of range and color images taken by changing object posture.

Furthermore, our reflectance analysis technique can be extended to the case of a sequence of color images taken by changing the object posture or the viewing direction. In fact, that is the input for the shape recovery techniques called shape-from-motion.

A number of shape-from-motion techniques have been proposed in the last decade. However, as far as we are aware, there is no report of recovering object surface reflectance properties in the shape-from-motion literature. Usually, shape-from-motion techniques are intended to recover object surface shapes, not to estimate surface reflectance properties. In some cases, the observed surface texture is captured in the input images and then placed onto the recovered object surface by using a texture mapping technique. However, the observed surface texture does not represent the correct reflectance properties of the object surface; rather, it is just a combination of all the factors of shading, i.e., the illumination and viewing conditions.

When our reflectance analysis is extended to a color image sequence taken by changing the object posture, several issues arise. For instance, without range images, which provide dense measurement of the object surface shape, it is difficult to solve the correspondence problem between color images. In other words, it is hard to determine where a surface point in one color image frame will appear in the next frame, because shape-from-motion techniques usually provide only a very sparse measurement of the object surface shape.

Another difficulty is that surface normals computed by shape-from-motion techniques are far less accurate than those given by range images, and noisy surface normal estimates lead to inaccurate estimation of reflectance parameters. Therefore, we have to either obtain more accurate surface normals or overcome the effects caused by noisy surface normal estimation. The use of photometric techniques such as photometric stereo in conjunction with shape-from-motion techniques might solve this problem; fusing the information obtained from the two appears promising, since photometric stereo can provide surface normal estimates, while shape-from-motion techniques can provide the locations $(x, y, z)$ of points on the object surface.

7.3.4 More Realistic Illumination Model for Outdoor Scene Analysis

In this thesis, we assumed a rather simple illumination model for analyzing color images taken under solar illumination. However, as we briefly discussed in Section 6.2, this simple illumination model barely captures the nature of the skylight and the sunlight. The spectral distribution and brightness of the skylight vary significantly over the sky, depend-



ing on the location of the sun. Also, the sun itself changes its spectral distribution and brightness over time. These factors are ignored in our simplified illumination model, and as a result, the accuracy of our analysis was apparently affected.

Another, and possibly the most dominant, factor is clouds in the sky. In Pittsburgh, for instance, it is very rare to see a completely cloudless sky at any time of year. Therefore, for reflectance analysis techniques under solar illumination to be used in actual applications, we have to solve the problem of clouds in the sky. Unfortunately, it is very difficult to predict how clouds are generated and how they scatter the sunlight or cast shadows on the earth's surface.

Perhaps the most feasible solution would be to record the clouds in the sky directly as the color images of the objects are taken. All, or most, of the sky can be recorded by a camera with a very wide field-of-view lens, e.g., a fish-eye lens, pointed toward the zenith. The recorded distribution of clouds can then be used to compensate for the various effects caused by the clouds when we analyze the color images of the object. In theory, the intensity and spectral distribution of the reflected light are determined by the product of the illumination and the BRDF, integrated over the entire hemisphere. Therefore, we may be able to predict the effects caused by the clouds if we know their distribution.
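In standard radiometric terms, this is the usual reflectance equation (stated here as a sketch): the radiance L_r reflected toward the viewing direction (\theta_r, \phi_r) is the integral, over the hemisphere of incident directions \Omega, of the BRDF f_r weighted by the incident radiance L_i,

L_r(\theta_r, \phi_r) = \int_{\Omega} f_r(\theta_i, \phi_i; \theta_r, \phi_r)\, L_i(\theta_i, \phi_i) \cos\theta_i \, d\omega_i .

A sky image recorded through a fish-eye lens, discretized into directional samples of L_i(\theta_i, \phi_i), could in principle supply the illumination term of this integral, clouds included.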


Color Figures

Figure 44 Multicolored object (pixels 1, 2, and 3 indicated)

Figure 45 Diffuse reflection image

Figure 49 Input color images (frames 0, 20, 40, 60, 80, and 100)

Figure 60 Synthesized image 1

Figure 61 Synthesized image 2

Figure 62 Synthesized image 3

Figure 70 Estimated surface normals and polygonal normals

Figure 75 Synthesized object images

Figure 76 Comparison of input color images and synthesized images (frames 50 and 80, input and synthesized side by side)
