
Color constancy at a pixel

Graham D. Finlayson, Steven D. Hordley

School of Information Systems

University of East Anglia

Norwich NR4 7TJ

United Kingdom

In computational terms, we can solve the color constancy problem if device red, green and

blue sensor responses, or RGBs, for surfaces seen under an unknown illuminant can be mapped to

corresponding RGBs under a known reference light. In recent years, almost all authors have argued

that this 3-dimensional problem is too hard. It is argued that because a bright light striking a dark

surface results in the same physical spectrum as a dim light incident on a light surface, the magnitude

of RGBs cannot be recovered. Consequently, modern color constancy algorithms attempt only to

recover image chromaticities under the reference light: they solve a 2-dimensional problem. While

significant progress has been made toward achieving chromaticity constancy, recent work has shown

that the most advanced algorithms are unable to render chromaticity stable enough so that it can

be used as a cue for object recognition1.

In this paper we take this reductionist approach a little further and look at the 1-dimensional color

constancy problem. We ask, is there a single color coordinate, a function of image chromaticities, for

which the color constancy problem can be solved? Our answer is an emphatic yes. We show there

exists a single invariant color coordinate, a function of R, G and B, that depends only on surface reflectance.

Two corollaries follow. First, given an RGB image of a scene viewed under any illuminant we can

trivially synthesise the same grey-scale image (we simply code the invariant coordinate as a grey-

scale). Second, this result implies that we can solve the 1-dimensional color constancy problem at a

pixel (even in scenes with no color diversity whatsoever).

We present experiments which show that invariant grey-scale histograms are a stable feature for

object recognition. Indexing on invariant distributions supports almost perfect recognition for a

dataset of 11 objects viewed under 5 colored lights. In contrast, object recognition based on chromaticity histograms (post chromaticity-constancy preprocessing) delivers much poorer recognition.


In Journal of the Optical Society of America, A, Copyright OSA 2001 2

1. Introduction

The light reaching our eye is a function of surface reflectance and illuminant color. Yet, the colors that we perceive depend almost exclusively on surface reflectance; the dependency due to illuminant color is removed through color constancy computation. As an example, the white page of a book looks white whether viewed under blue sky or under artificial light. However, the processes through which color constancy is attained are not well understood. Indeed, the performance of color constancy algorithms in computer vision remains quite limited1. We can solve the color constancy problem if red, green and blue sensor responses, or RGBs, for surfaces seen under

an unknown illuminant can be mapped to corresponding RGBs under a known reference light2. Despite significant effort3,4, this general 3-dimensional color constancy problem has yet to be solved. Maloney and Wandell5,6 argued that the 3-dimensional problem was in fact too difficult to solve: there is an intrinsic ambiguity between the brightness of an illuminant and the lightness of a surface, and so dark surfaces viewed under bright lights reflect the same spectral power distribution as highly reflective surfaces under dimmer light. This argument is taken on board in almost all modern color constancy algorithms: modern algorithms attempt only to recover reference chromaticities. The chromaticity constancy problem has proven to be much more tractable. In7, Finlayson made two important

observations. The first was that the gamut of possible image chromaticities depends on the illuminant color (this result follows from Forsyth's work on 3-dimensional RGB gamuts2), and the second that the illuminant color is itself quite limited. The chromaticities of real illuminants tend to be tightly clustered around the Planckian locus8,9. In Finlayson's algorithm an image chromaticity is said to be consistent with a particular light if it is within the gamut of all possible chromaticities observable under that light. Usually a single chromaticity will be consistent with many lights; but different chromaticities are consistent with different sets of lights. Intersecting all the illuminant sets results in an overall set of feasible illuminants: illuminants that are consistent with all image chromaticities simultaneously. Typically, the set of feasible illuminants is quite small and selecting the mean10 or median11

illuminant from the feasible set leads to good color constancy. Unfortunately, when color diversity is small, the feasible set can be large. In this case it is quite possible that an incorrect illuminant will be selected, and when this happens poor color constancy results. In more recent work, the ill-posed nature of the color constancy problem has been tackled using the tools of Bayesian

probability theory12,8,13,14. Given knowledge of typical scenes, it is possible to calculate the probability of observing a particular chromaticity under a particular light. This prior information can then be used to calculate the likelihood of lights given the chromaticities in an image8. While this approach delivers much better color constancy, the problem of low color diversity, though certainly diminished, still remains. For scenes containing small numbers of surfaces (1, 2, 3 or 4) many illuminants can be equally likely10.

Whether or not the problem of such low color diversity is important depends on the application. In digital photography, typical pictures are of color-rich scenes and so the probabilistic approach works well15. However, in computer vision we are often interested in analysing color in color-deficient scenes. Beginning with Swain and Ballard16,17, many authors have attempted to use the distribution of colors or chromaticities in an image as a cue to image content in general18,19 and object recognition in particular. This idea works well when lighting color is held fixed but can fail spectacularly when illumination is allowed to vary20. Swain conjectured that color constancy preprocessing would solve the varying illumination problem. Unfortunately, because the objects we would like to recognize sometimes have low color diversity (many branded products, such as the Campbell's soup used in Swain's original experiments, have just 1 or 2 colors), the color constancy problem is not easy to solve. Indeed, Funt et al.1 tested a variety of chromaticity constancy algorithms and concluded that none of them rendered chromaticity a stable enough cue for recognition.

The failure of color constancy preprocessing in object recognition has inspired the color invariant approach.

Color invariants are generally functions of several image colors designed so that terms dependent on illumination cancel. As an example, if (r_1, g_1, b_1) and (r_2, g_2, b_2) denote camera responses corresponding to two scene points viewed under one color of light, then (\alpha r_1, \beta g_1, \gamma b_1) and (\alpha r_2, \beta g_2, \gamma b_2) denote the responses induced by the same points viewed under a different color of light21 (assuming the camera sensors are sufficiently narrow-band). Clearly, it is easy to derive algebraic expressions in which \alpha, \beta and \gamma (and so illumination) cancel: (r_1/r_2, g_1/g_2, b_1/b_2).
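This cancellation is easy to verify numerically. The following sketch (Python; the responses and the diagonal scalars are made-up illustrative values, not data from the paper) checks that the channel-wise ratio of two neighbouring responses is unchanged when both are scaled by the same diagonal illuminant map:

```python
import numpy as np

# Made-up camera responses for two adjacent scene points under light 1.
p1 = np.array([0.30, 0.50, 0.20])
p2 = np.array([0.10, 0.40, 0.60])

# Under light 2 each channel is scaled by the same (alpha, beta, gamma),
# assuming sufficiently narrow-band sensors (the diagonal model).
scale = np.array([1.7, 0.9, 1.3])
q1, q2 = scale * p1, scale * p2

# The channel-wise ratio of adjacent responses cancels the illuminant scalars.
assert np.allclose(p1 / p2, q1 / q2)
print(p1 / p2)   # the illuminant-independent color-ratio invariant
```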

Indexing on color-ratios20,22 (and other invariants23-26) has been shown to deliver illuminant-independent object recognition. Yet, this approach suffers from three intrinsic problems. First, because spatial context is used, invariant computation is sensitive to occlusion. Second, invariants can only be calculated assuming there are two or more colors adjacent to one another (not always true). Third, invariants can be calculated post-color constancy computation but


the converse is not true27: color constancy adds more information if it can be computed. To understand this last case, consider an image taken under an unknown illumination. A perfect color constancy algorithm can, by definition, map image colors to corresponding colors under the reference light: we can calculate the full 3-dimensional absolute color at each point in an image. In contrast, color invariants cancel out variation due to illumination by exploiting spatial context, e.g. under fairly reasonable conditions the ratio of adjacent RGBs is illuminant independent. The result is that absolute color information is confounded: the color of a pixel can only be known relative to the neighbourhood of the calculated invariant. Clearly, one might calculate relative information given absolute values (i.e. given the output of a color constancy algorithm) but the converse is not true. It is in this sense that color constancy computation delivers more information.

In this paper, we bridge the gap between the classical color constancy computation and the invariant approach

sketched above. Our research begins with the following question: does there exist a 1-dimensional color coordinate, expressed as a function of the RGB or chromaticity, for which the color constancy problem can be solved? The idea here is that we take our RGB image, convert it in some way to a grey-scale image, and then attempt to map the image grey-values to those observed under reference lighting conditions. Not only does there exist a color coordinate where color constancy computation is easy, there exists a coordinate where no computation actually needs to be done. By construction, the grey-scale image factors out all dependencies due to light intensity and light color.

The construction builds on two insights. First, by assuming that the chromaticities of lights lie on the Planckian locus and that camera sensors sample light like Dirac delta functions (they have narrow-band sensitivities), we show that illumination change in log-chromaticity space is translational. It follows that the log-chromaticities, which are intensity independent, also translate under a change in illumination color: (\ln R/G, \ln B/G) becomes (\ln R/G, \ln B/G) + (a, b) under a second light. It is important to remark that the translational term (a, b) must be the same for all RGBs. Second, and this is the key insight, the translational term for different illuminants can always be written as (\alpha a, \alpha b), where a and b are fixed constants and \alpha depends on illumination. That is, illumination change translates log-chromaticities in the same direction. It follows that the coordinate axis orthogonal to the direction of illumination variation, y = -(a/b)x, records only illuminant-invariant information: there exist constants r_1 and r_2 such that the coordinate r_1 \ln R/G + r_2 \ln B/G is independent of illumination. Of course, real lights will rarely lie exactly on the Planckian locus, nor will camera sensitivities be exactly narrow-band. However, experiments demonstrate that the invariant computation is robust to departures from either of these conditions.

Notice that in the above argument we no longer talk about solving for color constancy. Rather, the invariant coordinates

calculated under all lights, including the reference light, do not change. That is, invariant computation at a pixel and 1-dimensional color constancy are two sides of the same coin.

To test the utility of the derived invariant coordinate, we returned to the color-based object recognition problem16.

Here we represent objects by the distributions of the invariant coordinates calculated from color images of the objects; in effect, we use grey-scale histograms (since the invariant coordinate can be coded as a grey-scale). Recognition proceeds by distribution comparison: query distributions are compared to object distributions stored in a database and the closest match identifies the query. In our experiment, database images comprised 11 objects viewed under a single illuminant. Query images were of the same objects viewed under 4 other colored illuminations. All images are part of the Simon Fraser calibrated image dataset1. We found that indexing on the invariant coordinate distribution delivered near perfect recognition. In comparison, indexing on chromaticity distributions (calculated post 2-dimensional color constancy processing) delivered markedly poorer performance1.

In section 2 of this paper we discuss color image formation, the log-chromaticity coordinate space and image

variation due to Planckian illumination. The invariant coordinate transform is derived in section 3. For cameras with Dirac delta function sensitivities, and where illumination is modeled by Planck's formula, our derivation is analytic. For the practical case of actual cameras and actual lights, a statistical technique is also presented. Experimental results are presented in section 4.

2. Background

An image taken with a linear device, such as a digital color camera, is composed of sensor responses that can be described by

p_k = \int_\omega E(\lambda) S(\lambda) R_k(\lambda) \, d\lambda \qquad (k = R, G, B) \qquad (1)


where \lambda is wavelength, p_k is the sensor response (k = R, G, B: red, green and blue sensitivity), E is the illumination, S is the surface reflectance and R_k is a camera sensitivity function. Integration is performed over the visible spectrum \omega.

Let us assume that R_k(\lambda) = \delta(\lambda - \lambda_k): it is a Dirac delta function with sensitivity only at some wavelength \lambda_k. Dirac delta functions have the well known sifting property that allows us to rewrite Eqn (1):

p_k = \int_\omega E(\lambda) S(\lambda) \delta(\lambda - \lambda_k) \, d\lambda = E(\lambda_k) S(\lambda_k) \qquad (2)

Clearly, under illuminants E^1(\lambda) and E^2(\lambda) the RGB responses for a particular surface can be related:

\begin{bmatrix} E^1(\lambda_R)S(\lambda_R) \\ E^1(\lambda_G)S(\lambda_G) \\ E^1(\lambda_B)S(\lambda_B) \end{bmatrix} = \begin{bmatrix} E^1(\lambda_R)/E^2(\lambda_R) & 0 & 0 \\ 0 & E^1(\lambda_G)/E^2(\lambda_G) & 0 \\ 0 & 0 & E^1(\lambda_B)/E^2(\lambda_B) \end{bmatrix} \begin{bmatrix} E^2(\lambda_R)S(\lambda_R) \\ E^2(\lambda_G)S(\lambda_G) \\ E^2(\lambda_B)S(\lambda_B) \end{bmatrix} \qquad (3)

Notice that the diagonal matrix relating the RGBs across illumination does not depend on surface reflectance. The same 3 scalars relate all corresponding pairs of RGBs. To ease the notation we rewrite (3) as:

\begin{bmatrix} p_{r,1} \\ p_{g,1} \\ p_{b,1} \end{bmatrix} = \begin{bmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{bmatrix} \begin{bmatrix} p_{r,2} \\ p_{g,2} \\ p_{b,2} \end{bmatrix} \qquad (4)

where the second subscript denotes dependence on a particular illumination. The substitution of Dirac delta functions for R_k(\lambda) has clearly simplified the image formation equation. Unfortunately, no camera could possibly, or usefully, be sensitive to only 3 narrow-band wavelengths of light; Equation (4) does not really account for image formation in typical cameras. Fortunately, research has shown that (4) models image formation fairly well for cameras whose response sensitivities are sufficiently narrow-band21. Even when (4) does not hold, it can often be made to hold by applying an appropriate change of sensor basis28,29. We point out that this result, and to a lesser extent that of Worthey and Brill21, is based on a statistical analysis which takes account only of 'reasonable' lights and surfaces. This is an important point to bear in mind since, for all but the special case of Dirac delta function sensitivities, it is always possible to hypothesise sets of lights and surfaces for which (4) will not hold. In practice however, (4) is, or can be made to be, a tolerable approximation for most real cameras. Henceforth we will assume (4) is a good model of image formation across illumination (and this is verified by experiment in section 4). Remarkably, even Equation (4) is an over-general model of image formation. Illumination color is not arbitrary and

so the scalars \alpha, \beta and \gamma in Equation (4) are not arbitrary either. Because all lights are positive functions (power cannot be negative), the scalars themselves must also be positive. However, some positive power spectra do not occur as illuminations, e.g. saturated purple illuminants do not occur in nature. An implication of this observation is that certain positive triples of scalars are impossible, since they serve only to model the relation between illumination pairs that do not actually occur.

Let us suppose that illumination might be modeled as a black-body radiator using Planck's famous equation30:

E(\lambda, T) = c_1 \lambda^{-5} \left( e^{\frac{c_2}{T\lambda}} - 1 \right)^{-1} \qquad (5)

Equation (5) defines the spectral concentration of radiant exitance, in Watts per square metre per wavelength interval, as a function of wavelength \lambda (in metres) and temperature T (in Kelvin). The constants c_1 and c_2 are equal to 3.74183 \times 10^{-16} Wm^2 and 1.4388 \times 10^{-2} mK respectively. In Figures 1, 2 and 3 we plot the normalized black-body illuminants for temperatures of 2500K, 5500K and 10000K.

It is evident (and very well known) that as the temperature increases, the spectrum of light moves from reddish to whitish to bluish. The range 2500K to 10000K is the most important region in terms of modeling typical illuminant colors. In Figure 4, we compare the normalized spectral power distribution of a typical daylight illuminant (CIE standard daylight source D55) with a 5500K black-body radiator. It is clear that the shapes of the two illuminant curves are broadly similar.
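To make Eqs (2)-(5) concrete, the following sketch (Python; the surface samples are made up, and the delta-sensor wavelengths 450nm, 540nm and 610nm are the anchors used in the numerical test of section 3) evaluates Planck's formula at three sensor wavelengths and confirms that responses under two black-body lights are related by a surface-independent diagonal map, exactly as in Eq (4):

```python
import numpy as np

C1 = 3.74183e-16  # c1 in W m^2 (Eq. 5)
C2 = 1.4388e-2    # c2 in m K  (Eq. 5)

def planck(lam, T):
    """Black-body spectral exitance, Eq. (5); lam in metres, T in Kelvin."""
    return C1 * lam**-5 / (np.exp(C2 / (T * lam)) - 1.0)

lams = np.array([450e-9, 540e-9, 610e-9])   # Dirac-delta sensor wavelengths

# Two made-up surface reflectance triples sampled at the three wavelengths.
S1 = np.array([0.2, 0.5, 0.3])
S2 = np.array([0.7, 0.1, 0.4])

# Eq. (2): p_k = E(lambda_k) S(lambda_k), under a 2500K and a 5500K light.
p1_a, p1_b = planck(lams, 2500.0) * S1, planck(lams, 5500.0) * S1
p2_a, p2_b = planck(lams, 2500.0) * S2, planck(lams, 5500.0) * S2

# The three scalars of the diagonal map (Eq. 4) are the same for both surfaces.
assert np.allclose(p1_a / p1_b, p2_a / p2_b)
```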


In general Planck's equation does a good job of capturing the general shape of incandescent and daylight illuminants. Of course, while the shapes may be similar, Equation (5) does not account for varying illuminant power. To model varying power we add an intensity constant I to Planck's formula:

E(\lambda, T) = I \, c_1 \lambda^{-5} \left( e^{\frac{c_2}{T\lambda}} - 1 \right)^{-1} \qquad (6)

While the shape of daylights and Planckian radiators is similar, this is not true for fluorescents (which tend to have highly localised emission spikes). But, remarkably, even here Equation (6) can be used. This can be done because we are not really interested in spectra per se but rather in how they combine with sensor and surface in forming RGBs. For almost all daylights and typical man-made lights, including fluorescents, there exists a black-body radiator, defined in (6), which, when substituted in (1), will induce very similar RGBs30 for most surface reflectances. Interestingly, if such a substitution cannot be made, the color rendering index (broadly, how good surface colors look under a particular light) is poor30. Indeed, the lighting industry strives to manufacture lights such that their chromaticities lie close to the Planckian locus and which induce RGBs, for most surface reflectances, similar to those induced by a corresponding black-body radiator.

In Figure 5 we have plotted the xy chromaticity diagram. The solid curving line maps out the chromaticity locus

of black-body radiators from 1000K to 20000K (from right to left). We have also plotted the chromaticities for 36 typical illuminants (including daylights and artificial lights) measured around the Simon Fraser University campus31. It is evident that all of these lights fall very close to the Planckian locus.

Perhaps more interesting is the question of whether a Planckian illumination substituted for the fluorescent in

Eq. (1) would render similar colors. To evaluate this we carried out the following experiment. Using the XYZ standard observer functions for the human visual system, we calculated CIE Lab coordinates for the 170 object reflectances measured by Vrhel et al.? under each of the 36 SFU illuminants (Lab coordinates are a non-linear function of XYZ tristimuli). We then calculated CIE Lab coordinates for the same surfaces when a Planckian illuminant (a spectrum of the form (6)) is substituted for the actual illuminant. The Euclidean distance, or \Delta E error, between the Lab coordinates for the true illuminant and the Planckian substitute was calculated for each surface (Euclidean distances in CIE Lab roughly correlate with perceptual differences). For each light we calculated the mean \Delta E between the actual and Planckian-substituted Lab coordinates. The average over all 36 means was found to be 2.24, with a maximum mean of 6.44. Meyer et al.? found that mean \Delta Es as large as 5 or 6 are acceptable in image reproduction, i.e. images are similarly and acceptably rendered. This said, we might reasonably conclude that, for our 36 illuminants, a Planckian substitution can be made (the perceptual rendering error is quite small).

Of course, the XYZ human observer sensitivities are not in general the same as digital camera sensitivities. That is, accurate rendering for the visual system need not imply accurate rendering for a camera. Fortunately, the responses of most cameras (and certainly all those known to the authors) are, to a tolerable approximation, transformable to corresponding XYZs32-35. Accurate rendering of XYZs, for reasonable surface reflectances, implies accurate rendering of RGBs.
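The \Delta E figures quoted above are plain Euclidean distances between Lab triples. As a minimal illustration (Python; the Lab coordinates are invented, not taken from the Vrhel dataset):

```python
import numpy as np

def delta_e(lab1, lab2):
    """Euclidean distance between two CIE Lab coordinates (the Delta E error)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Invented Lab coordinates for one surface: under the true light vs. under a
# Planckian substitute. Mean Delta Es of 5-6 are acceptable in reproduction.
err = delta_e([52.0, 10.0, -4.0], [50.5, 11.2, -3.1])
print(round(err, 2))   # prints 2.12
```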

3. Color constancy at a pixel

In this section we take our model of image formation together with Planck's equation and show that there exists one coordinate of color, a function of RGB, that is independent of light intensity and light color (where color is defined by temperature). However, to make the derivation cleaner we first make a small (and often made30) simplifying alteration to (6). In Planck's equation \lambda is measured in metres; thus, we can write wavelength \lambda = x \times 10^{-7} where x \in [1, 10] (the visible spectrum is between 400 and 700 nanometres, 1 nm = 10^{-9} m). Temperature is measured in thousands of Kelvin, T = t \times 10^{3} (where t \in [1, 10]). Substituting into the exponent of Equation (6) we see that:

\frac{c_2}{T\lambda} = \frac{1.4388 \times 10^{-2}}{x \times 10^{-7} \cdot t \times 10^{3}} = \frac{1.4388 \times 10^{2}}{x t} \qquad (7)

Because t is no larger than 10 (10000K) and there is no significant visual sensitivity (for humans or most cameras) beyond 700nm, i.e. x \leq 7, it follows that e^{1.4388 \times 10^{2} / (xt)} \gg 1 and so:


E(\lambda, T) \approx I \, c_1 \lambda^{-5} e^{-\frac{c_2}{T\lambda}} \qquad (8)
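The quality of this approximation is easy to check numerically. A short sketch (Python) compares Eq (6) (with I = 1) against the approximation of Eq (8) over the visible range for the three temperatures plotted in Figures 1 through 3; the relative error works out to e^{-c_2/(T\lambda)}, largest for hot lights at the red end of the spectrum:

```python
import numpy as np

C1, C2 = 3.74183e-16, 1.4388e-2   # Planck constants from Eq. (5)

def planck(lam, T):               # Eq. (6) with I = 1
    return C1 * lam**-5 / (np.exp(C2 / (T * lam)) - 1.0)

def wien(lam, T):                 # the approximation of Eq. (8), I = 1
    return C1 * lam**-5 * np.exp(-C2 / (T * lam))

lam = np.linspace(400e-9, 700e-9, 31)   # visible spectrum
for T in (2500.0, 5500.0, 10000.0):
    rel = np.max(np.abs(wien(lam, T) - planck(lam, T)) / planck(lam, T))
    print(f"T = {T:6.0f} K   max relative error = {rel:.1e}")
```

At 2500K the maximum relative error over the visible range is below 0.03%, which is why the two 2500K spectra overlay each other exactly; at 10000K it rises to roughly 13% at 700nm, still adequate for the present purpose.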

In Figures 1 through 3 we contrast the black-body illuminants derived using (8) with those defined by (6) for temperatures of 2500K, 5500K and 10000K. It is clear that the approximation is very good (in particular the two spectra for 2500K overlay each other exactly). Substituting (8) in (2) we see that:

p_k = \int_\omega E(\lambda) S(\lambda) \delta(\lambda - \lambda_k) \, d\lambda = I \, c_1 \lambda_k^{-5} e^{-\frac{c_2}{T\lambda_k}} S(\lambda_k) \qquad (9)

Taking natural logarithms of both sides of (9),

\ln p_k = \ln I + \ln\left( S(\lambda_k) \lambda_k^{-5} c_1 \right) - \frac{c_2}{T\lambda_k} \qquad (10)

That is, log-sensor response is an additive sum of three parts: \ln I (depends on the power of the illuminant but is independent of surface and light color), \ln(S(\lambda_k)\lambda_k^{-5}c_1) (depends on surface reflectance but not illumination) and -\frac{c_2}{T\lambda_k} (which depends on illumination color but not reflectance).

Remembering that in (10), k = R, G, B, we have 3 relations which exhibit the same structure: each of the \ln R, \ln G and \ln B sensor responses is an additive sum of intensity, surface and illumination components. By canceling common terms, we show below that we can derive two new relations which are intensity independent (but depend on illumination color) and, from these, a final relation which depends only on reflectance. We begin by introducing the following simplifying notation: let S_k = \ln(S(\lambda_k)\lambda_k^{-5}c_1) and E_k = -\frac{c_2}{\lambda_k} (k = R, G, B sensor). The following two relations, the red and green, and blue and green, log-chromaticity differences (or LCDs), are independent of light intensity:

p'_R = \ln p_R - \ln p_G = S_R - S_G + \frac{1}{T}(E_R - E_G)

p'_B = \ln p_B - \ln p_G = S_B - S_G + \frac{1}{T}(E_B - E_G) \qquad (11)

We note that (11) is basically a recapitulation of a result that is well known in the literature. There are many ways in which intensity or magnitude might be removed from a 3-d RGB vector (see36 for a review). We choose the one that makes the subsequent color temperature analysis easier.
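The translational behaviour of (11) can be demonstrated directly. In the sketch below (Python; the two surface triples are made up, while the 450/540/610nm delta-sensor anchors are those used in the numerical test later in this section), the LCD points of both surfaces shift by exactly the same vector when the illuminant changes, whatever its intensity:

```python
import numpy as np

C1, C2 = 3.74183e-16, 1.4388e-2
lam = np.array([610e-9, 540e-9, 450e-9])   # R, G, B delta-sensor wavelengths

def lcd(S, T, I=1.0):
    """Log-chromaticity differences (Eq. 11) under the Wien model of Eq. (8)."""
    p = I * C1 * lam**-5 * np.exp(-C2 / (T * lam)) * S   # Eq. (9) responses
    return np.array([np.log(p[0] / p[1]), np.log(p[2] / p[1])])

s1 = np.array([0.2, 0.5, 0.3])   # made-up surface samples at (610, 540, 450) nm
s2 = np.array([0.7, 0.1, 0.4])

# Changing the illuminant (even with a different intensity I) moves every
# surface's LCD point by the SAME translation: the surface term cancels.
shift1 = lcd(s1, 2800.0) - lcd(s1, 10000.0, I=5.0)
shift2 = lcd(s2, 2800.0) - lcd(s2, 10000.0, I=5.0)
assert np.allclose(shift1, shift2)
```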

It is useful to think of (11) in terms of vectors: \begin{bmatrix} p'_R \\ p'_B \end{bmatrix} is a sum of the vector \begin{bmatrix} S_R - S_G \\ S_B - S_G \end{bmatrix} plus the vector \frac{1}{T} \begin{bmatrix} E_R - E_G \\ E_B - E_G \end{bmatrix}, where \frac{1}{T} is a scalar multiplier. Written in this form, (11) is the equation of a line in vector notation. If we change surface reflectance, only the first term on the right-hand side of (11) changes. That is, the lines defined by different surfaces should all be simple translations apart.

Of course, this result is predicated on the approximate Planckian model of illumination (8). To test if this result

held for real Planckian black-body illuminants (Eqn. (6)), we numerically calculated, using Equation (1), sensor responses for seven surfaces under 10 Planckian lights (using narrow-band sensors anchored at 450nm, 540nm and 610nm). The 7 surfaces comprised the Macbeth color checker reflectances37 labelled green, yellow, white, blue, purple, orange and red. The 10 Planckian illuminants were uniformly spaced in temperature from 2800K to 10000K. The LCDs (11) were calculated and the resulting 2-dimensional coordinates are plotted in Figure 6. As predicted, as the illumination changes the coordinates for a given surface span a line and all the lines are related by a simple translation. Now, using the usual rules of substitution, it is also a simple matter to derive a relation that is independent of

temperature:

p'_R - \frac{E_R - E_G}{E_B - E_G} \, p'_B = S_R - S_G - \frac{E_R - E_G}{E_B - E_G}(S_B - S_G) = f(S_R, S_G, S_B) \qquad (12)


where all S_k and E_k are independent of illuminant color and intensity. Equation (12) informs us that there exists a weighted combination of LCDs that is independent of light intensity and light color. We have shown that we can solve the 1-dimensional color constancy problem at a pixel.
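Under the idealized model, the invariant of (12) can be computed and checked in a few lines (Python; delta sensors at 610/540/450nm, a made-up surface triple, and arbitrary intensities I). The same grey-value is obtained under every Planckian light:

```python
import numpy as np

C1, C2 = 3.74183e-16, 1.4388e-2
lam = np.array([610e-9, 540e-9, 450e-9])   # R, G, B delta-sensor wavelengths
E = -C2 / lam                               # E_k = -c2 / lambda_k (section 3)
w = (E[0] - E[1]) / (E[2] - E[1])           # the weight (E_R - E_G)/(E_B - E_G)

def invariant(S, T, I=1.0):
    """The 1-d illuminant invariant of Eq. (12) under the Wien model."""
    p = I * C1 * lam**-5 * np.exp(-C2 / (T * lam)) * S   # Eq. (9) responses
    pR, pB = np.log(p[0] / p[1]), np.log(p[2] / p[1])    # LCDs, Eq. (11)
    return pR - w * pB

S = np.array([0.2, 0.5, 0.3])   # made-up surface reflectance samples
vals = [invariant(S, T, I) for T, I in ((2800, 1.0), (5500, 0.3), (10000, 7.0))]
assert np.allclose(vals, vals[0])   # same value: independent of color and power
```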

It is useful to visualize the geometric meaning of (12). For a particular surface, all LCDs for different lights fall on a line y = mx + c or, in parameterised coordinate form, (x, mx + c). In Equation (12) a linear combination of the x and y coordinates is calculated: a_0 x + b_0(mx + c), where a_0 and b_0 are the constants 1 and -\frac{E_R - E_G}{E_B - E_G}. Clearly, if we scale a_0 and b_0 by some term v, giving va_0 and vb_0, the illuminant invariance of (12) is unaltered. Without loss of generality let us choose v, with a = va_0 and b = vb_0, such that the vector [a \; b]^t has unit length.

We can now calculate the illuminant invariant as the vector dot product ('\cdot'):

a x + b(mx + c) = [a \;\; b] \cdot [x \;\; (mx + c)] \qquad (13)

The meaning of (13) is geometrically well understood: we are projecting the log-difference coordinate onto the axis [a \;\; b], where this axis is chosen to be orthogonal to the direction of the variation due to illumination. The following equation rotates the log-difference coordinate axes so that the resulting x-axis records illuminant

independent information (and the y-axis captures all the variation due to illumination):

\begin{bmatrix} a & b \\ -b & a \end{bmatrix} \begin{bmatrix} x \\ mx + c \end{bmatrix} = \begin{bmatrix} x' \\ y' \end{bmatrix} \qquad (14)

The log di�erence data shown in Figure 6 is rotated according to Equation (14). The result is shown in Figure 7.

A. Approximate invariance

If a camera has Dirac delta function sensitivities then the invariant coordinate transform can be calculated analytically. When camera sensor sensitivities are not perfect Dirac delta functions (they never are) then the best illuminant-invariant quantity must be found statistically. There are two steps involved in finding an invariant. First we must make sure that camera response across illumination follows the diagonal matrix model of Equation (4). Second, we must find an equation of the form (14).

Worthey and Brill21 found that so long as a camera is equipped with fairly narrow sensitivities, e.g. with a support

of 100nm, the diagonal model will hold. When sensitivities are significantly broader, e.g. in excess of 300nm in the case of the human visual system, the simple model of (4) does not hold. However, in a series of works, Finlayson, Drew and others28,29,38,39 found that new narrower-band sensitivities could be formed from broad-band sensitivities by calculating an appropriate sharpening transform. The diagonal model, relative to the sharpened sensors, once again is quite accurate. Effective sharp transforms have been shown to exist for the broad-band sensitivities of the human cones28 and the spectrally broad-band Kodak DCS 460 camera40 (and all other broad-band sensor sets known to the authors). Henceforth we assume Equation (4).

We point out that Equation (4) is a necessary, not sufficient, condition for the analysis to proceed. In the strictest

sense the diagonal model of illumination change must occur in tandem with device sensitivities integrating spectral stimuli like Dirac delta functions. This was in fact found to be true for a sharp transform of XYZ sensitivities29. However, the statistical method set forth below can be applied without explicitly enforcing the delta function equivalence.

To discover the appropriate invariant, we must understand how LCDs for different surfaces, under (4), relate

to one another and, more specifically, how this relationship can be used as a means for finding the best invariant. Suppose we take an image of a particular surface reflectance S^i(\lambda) under a set of representative illuminants E^1(\lambda), E^2(\lambda), \ldots, E^m(\lambda). Using Eqns (1) and (11) we calculate the set of m LCD vectors Q^i_1, Q^i_2, \cdots, Q^i_m. Assuming that the camera behaves approximately like a Dirac delta camera, the LCDs should all be approximately co-linear. By subtracting the mean LCD, we can move this line so that it passes through the origin:

\mu^i = \frac{1}{m} \sum_{k=1}^{m} Q^i_k \qquad (15a)


q^i_j = Q^i_j - \mu^i \qquad (15b)

Because of the invariance properties derived and discussed in the last section, the mean-subtracted points for another surface S^j(\lambda), namely q^j_1, q^j_2, \cdots, q^j_m, must also lie on a similarly oriented line (which again passes through the origin). Now suppose we take n representative surfaces, generate n sets of mean-subtracted LCDs and place these in the columns of a 2 \times nm matrix M:

M = [q^1_1, q^1_2, \cdots, q^1_m, q^2_1, q^2_2, \cdots, q^2_m, \cdots, q^n_1, q^n_2, \cdots, q^n_m] \qquad (16)

The covariance matrix of this point-set equals:

\Sigma(M) = \frac{1}{nm} M M^t \qquad (17)

Equation (14) cast the invariant computation as a rotation problem in LCD space. We do likewise here and note that if we can find a rotation matrix $R$ such that the rotated points $Rq^i_k = (\epsilon, \beta)^t$ (where $\epsilon$ is as small as possible) then good invariance should follow, since

$$Rq^i_k = (\epsilon, \beta)^t \;\Rightarrow\; RQ^i_k = R\mu^i + (\epsilon, \beta)^t \qquad (18)$$

where $\beta$ denotes an illuminant-varying quantity which does not interest us. Under the assumption that $\epsilon$ is small, the first coordinate of the rotated LCDs will be approximately invariant to illuminant change: it is equal to the dot-product of the first row of $R$ with the mean-vector $\mu^i$.

To find the rotation satisfying (18) we wish to find the coordinate axis along which the variation (or variance) due to illumination is minimum. To do this we first note that the covariance matrix $\Sigma(M)$ can be uniquely decomposed^{41}:

$$\Sigma(M) = U^t D U \qquad (19)$$

where $U$ is a rotation matrix and $D$ is a strictly positive diagonal matrix whose diagonal terms, $D_{1,1} = \sigma_1^2$ and $D_{2,2} = \sigma_2^2$, are ordered: $\sigma_1^2 > \sigma_2^2$. Simple algebra establishes that:

$$\Sigma(UM) = D \qquad (20)$$

That is, the diagonal entries of $D$ are the variances of $M$ under rotation $U$. Furthermore, over all choices of rotation matrix $U$, it can be shown that $\sigma_1^2$ is the maximum variance that can be achieved and $\sigma_2^2$ the minimum^{42}. It follows then that $R$ can be defined in terms of $U$: the first and second rows of $R$ equal the second and first rows of $U$ ($[R_{11}\; R_{12}] = [U_{21}\; U_{22}]$ and $[R_{21}\; R_{22}] = [U_{11}\; U_{12}]$).
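The statistical procedure of Eqns (15)–(20) is short to implement. The following NumPy sketch (the function names are ours; the paper specifies no code) recovers the rotation $R$ from per-surface sets of LCD vectors:

```python
import numpy as np

def best_invariant_rotation(lcds_per_surface):
    """Find the 2x2 rotation R whose first row gives the (approximately)
    illuminant-invariant coordinate, following Eqns (15)-(20).

    lcds_per_surface: list of (m, 2) arrays; each holds the m LCD vectors
    Q^i_k of one surface under the m representative illuminants.
    """
    # Eqns (15a,b): subtract each surface's mean LCD so that every
    # surface's line of points passes through the origin.
    centred = [Q - Q.mean(axis=0) for Q in lcds_per_surface]
    # Eqn (16): stack all mean-subtracted LCDs as columns of a 2 x nm matrix.
    M = np.concatenate(centred, axis=0).T
    # Eqn (17): covariance of the point set.
    Sigma = M @ M.T / M.shape[1]
    # Eqn (19): eigh returns eigenvalues in ascending order, with the
    # matching eigenvectors as columns.
    _, evecs = np.linalg.eigh(Sigma)
    U = evecs.T[::-1]   # rows of U ordered by decreasing variance
    R = U[::-1]         # swap the rows: first row of R = min-variance axis
    return R

def invariant_coordinate(R, log_chromaticity):
    # First coordinate of the rotated LCD: the illuminant invariant.
    return R[0] @ log_chromaticity
```

For perfectly co-linear data the first row of `R` is exactly orthogonal to the illumination direction, so the invariant coordinate of a surface is the same under every light.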

B. Invariance across devices

In setting forth the above (both the analytic and statistical methods), it is clear that the derivation is camera specific. That is, the invariant information that is available changes with the imaging device. It is interesting, however, to ask whether similar invariant information might be calculable across different cameras. The answer to this question is trivially true if there exists a one-to-one mapping that takes the RGB response of an arbitrary camera to a corresponding RGB response for some fixed canonical camera. Unfortunately, such a mapping, save under very restrictive circumstances, cannot exist for all lights and surfaces; there will always exist pairs of physical stimuli that induce identical responses (they are metamers) with respect to one camera but induce quite different responses with respect to a second camera. However, in practice, the one-to-one mapping holds for most reasonable stimuli^{32}.

Drew and Funt^{33} demonstrated that if color signal (the product of light and surface reflectance) spectra are described by a 3-dimensional linear model (approximately true for many natural reflectance spectra viewed under daylight) then RGBs across two different cameras are linearly related. This work was generalized by Marimont and Wandell^{32}, who realized that we are not interested in spectra per se but in how they project down onto RGBs. They presented empirical evidence which shows that the recorded RGBs, for a large corpus of physical stimuli and a variety of color devices, are linearly related (to a good approximation). It follows that to find an invariant coordinate across devices we can:

1. Find the linear transform $T$ which best maps the RGBs for a given device to corresponding RGBs for a canonical device:

$$T p^d \approx p^c \qquad (21)$$

where the superscripts $d$ and $c$ denote the given device and the standard reference canonical device.

2. For the canonical device, calculate the best invariant using either of the methods set forth in (7) through (20). Denoting the invariant calculation as the function $f : R \times R \times R \rightarrow R$, invariant information for $p^d$ is calculated as:

$$f(T p^d) \approx f(p^c) \qquad (22)$$

If the canonical device is chosen to be a hypothetical camera that is equipped with Dirac delta function sensitivities then $T$ can be thought of as a sharpening transform^{29}.
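The two steps above can be sketched as follows (a minimal illustration, assuming corresponding RGB measurements for the two devices are available as rows of two arrays; the function names are ours):

```python
import numpy as np

def fit_device_to_canonical(P_d, P_c):
    """Least-squares 3x3 matrix T with T p^d ~ p^c, as in Eqn (21).

    P_d, P_c: (N, 3) arrays of corresponding RGBs for the given device
    and the canonical device. Solves P_d @ T.T ~ P_c.
    """
    T_transpose, *_ = np.linalg.lstsq(P_d, P_c, rcond=None)
    return T_transpose.T

def cross_device_invariant(f, T, p_d):
    """Eqn (22): invariant for device RGB p^d, via the canonical camera.
    f is the invariant function derived for the canonical device."""
    return f(T @ p_d)
```

When the cross-device relation is exactly linear (the Marimont–Wandell approximation holding perfectly), the fitted `T` reproduces it and the two sides of Eqn (22) agree.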

C. Multiple Illuminants

Often in color constancy research it is interesting to consider situations where there are multiple illuminants present in a scene. So long as the effective illuminant (which may be a combination of the multiple light sources) lies on the Planckian locus, the derivation (7) through (11) applies and invariance is assured. However, it is reasonable to assume that there may be some averaging of light sources. For example, if $E_1(\lambda)$ and $E_2(\lambda)$ are spectrally different light sources both incident at a point in a scene, then due to the superposition of light, the effective illumination $E(\lambda)$ is equal to $E_1(\lambda) + E_2(\lambda)$. Assuming that $E_1(\lambda)$ and $E_2(\lambda)$ lie on the Planckian locus and have temperatures $T_1$ and $T_2$ and intensity constants $I_1$ and $I_2$ then:

$$E(\lambda) = I_1 c_1 \lambda^{-5} e^{-\frac{c_2}{T_1 \lambda}} + I_2 c_1 \lambda^{-5} e^{-\frac{c_2}{T_2 \lambda}} \qquad (23)$$

Because the Planckian locus is convex, no Planckian illuminant can be written as a convex sum of any other two Planckians. For all choices of constants $I_3$ and $T_3$:

$$I_3 c_1 \lambda^{-5} e^{-\frac{c_2}{T_3 \lambda}} \neq I_1 c_1 \lambda^{-5} e^{-\frac{c_2}{T_1 \lambda}} + I_2 c_1 \lambda^{-5} e^{-\frac{c_2}{T_2 \lambda}} \qquad (24)$$

Fortunately, the Planckian locus, in the region that spans typical illuminants, is only very weakly convex. As such, while additive combinations of Planckian illuminants cannot lie on the Planckian locus, they will lie close to it. In Figure 8 we have plotted the xy chromaticity diagram, the Planckian locus between 2800K and 10000K (the range of typical lights) and the chromaticities of 190 illuminants (created by taking weighted convex combinations of Planckian illuminants in this range). All the lights lie on or close to the Planckian locus. We therefore expect the derived illuminant invariant to be stable even when there are additive light combinations.
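This near-Planckian behaviour is easy to check numerically. The sketch below (our own illustration; the wavelengths are the Dirac-delta sensor positions used elsewhere in the paper, and $c_1$ is dropped since it cancels) mixes a 2800K and a 10000K radiator as in Eqn (23) and fits a single Planckian to the sum, exploiting the fact that $\ln(E\lambda^5)$ is linear in $1/\lambda$ with slope $-c_2/T$:

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant (m*K)
LAMBDA = np.array([450e-9, 540e-9, 610e-9])  # assumed Dirac-delta sensor positions

def wien(I, T, lam):
    # Wien approximation to a Planckian radiator, the form used in Eqn (23);
    # the constant c1 is omitted since it cancels in chromaticity ratios.
    return I * lam ** -5.0 * np.exp(-C2 / (T * lam))

# Additive mixture of a 2800K and a 10000K light (Eqn 23); I1 is chosen so
# that both components contribute equal power at 540nm.
I1 = np.exp(C2 / (2800.0 * 540e-9) - C2 / (10000.0 * 540e-9))
E = wien(I1, 2800.0, LAMBDA) + wien(1.0, 10000.0, LAMBDA)

# Fit a single Planckian: ln(E * lam^5) is linear in 1/lam, slope -C2/T3.
y = np.log(E * LAMBDA ** 5.0)
slope, intercept = np.polyfit(1.0 / LAMBDA, y, 1)
T3 = -C2 / slope                          # effective temperature of the mixture
residual = y - (slope / LAMBDA + intercept)  # departure from the locus (log units)
```

The fitted temperature lands between the two component temperatures and the log residuals are small: the mixture sits just off the locus, as Figure 8 shows.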


4. Experiments

Figure 9 shows the normalized spectral sensitivity functions of a SONY DXC-930 three-CCD digital camera. It is evident that these sensors are far from idealized delta functions: each is sensitive to a wavelength interval over 100nm in extent. Thus, at the outset we need to test the adequacy of our illumination-invariant representation. Using Equation (1), the SONY camera responses for seven surfaces under 10 Planckian lights were generated. As in Figure 6 (the experiment using Dirac delta function sensitivities), green, yellow, white, blue, purple, orange and red reflectances drawn from the Macbeth color checker^{37} were used along with 10 Planckian black-body illuminants evenly spaced in temperature from 2800K to 10000K. The corresponding log-chromaticity differences (LCDs) were calculated and the resulting 2-dimensional coordinates are plotted in Figure 10. The rotated coordinates, calculated using the approximate invariant method outlined in the last section, are shown in Figure 11.

It is evident that for camera responses the variation in log-chromaticity due to illumination is not spread along a single direction: the calculated invariant (Equation 13) is only approximate. Yet, for particular surfaces, the LCDs do fall on a line. Moreover, while the directions of these lines do vary as a function of surface color, they do not vary much. We conclude then that for the SONY camera it is possible to calculate a single color feature that has very weak dependence on illumination.

To quantify the magnitude of the dependence we carried out the following experiment. Using the technique set forth in Section 3A, we calculated the optimal invariant for 170 object reflectances^{43} and the 10 Planckian illuminants. For the $i$th reflectance we calculated $\sigma_i^2$: the variance of the invariant coordinate over the 10 lights. The sum of all 170 individual variances gives us a measure of the error variation in the signal: $\sigma_E^2 = \sum_i \sigma_i^2$. The total variance calculated for the invariants of all 170 surfaces viewed under the 10 lights, the signal variance, is denoted $\sigma_S^2$. The signal-to-noise ratio, $\sigma_S / \sigma_E$, informs us how large the signal is relative to the error. For our data we found the SNR to be approximately 23: that is, the signal is 23 times as large as the noise. Two informal conclusions might be drawn. First, that the invariant is sufficiently stable to allow, on average, up to 23 colors to be distinguished from one another. Second, that the invariant conveys slightly more than 4 'bits' of information.

By definition we choose the invariant that minimizes $\sigma_E$. However, we could equally find the coordinate direction that maximizes the variation due to illumination, e.g. the vector $[U_{11}\; U_{12}]$ in (19), which is orthogonal to our calculated invariant. We expect this worst-case coordinate to have a much lower signal-to-noise ratio than the calculated invariant. Indeed, we found the ratio to be 1.9. That is, the signal variation due to reflectance is only twice as large as that due to illumination. Informally, it would only be possible to reliably discriminate two colors.

We repeated this experiment using the 10 Planckian lights and 180 new illuminants formed by taking additive combinations of Planckian illuminants (see Figure 8). These new lights simulate common lighting conditions such as outside daylight mixing with indoor incandescent illumination. As discussed in Section 3C, most of these lights necessarily lie off-locus, so the invariant calculation is in this case only approximate and we expect a lower SNR. We found the signal-to-noise ratio was reduced from 23 to 15. This indicates that our calculated invariant is quite robust to many typical lights and lighting conditions (we can still calculate 4 bits of information).

We were also interested in evaluating the similarity of invariant information calculated across devices. Using the XYZ color matching functions, plotted in Figure 12, we calculated the XYZ responses for the red, green, yellow, purple, white, blue and orange patches under the 10 Planckian illuminants. We found the best $3 \times 3$ matrix transform mapping XYZs to corresponding SONY RGBs. Finally, we calculated the LCDs and rotated them according to the rotation matrix derived for the SONY camera. The resulting coordinates are shown in Figure 13 and are contrasted with the actual SONY coordinates in Figure 14. It is evident that device-independent information is calculable, though some reflectances are clearly less stable than others.

To test the invariant calculation on real camera images, we took 10 SONY DXC-930 images (from the Simon Fraser dataset^1) of two colorful objects (a beach ball and a detergent package) under 5 illuminants: Macbeth fluorescent with color temperature 5000K (with and without blue filter), Sylvania Cool White fluorescent, Philips Ultralume and Sylvania Halogen. These illuminations constitute typical everyday lighting conditions: yellowish to whitish to bluish lights. The luminance grey-scale images, calculated by summing R + G + B, are shown in the first and third columns of Figure 15. It is clear that the simple luminance grey scale is not stable across illumination. In columns 2 and 4 the corresponding invariant grey-scale images are shown. It is equally clear that the grey-scale pixels in these images do not change significantly as the illumination changes. Also notice that, qualitatively, the invariant images maintain good contrast: not only have we obtained illumination invariance but the images that result are visually salient.
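The signal-to-noise figure of merit used above is direct to compute; a sketch following the text's definitions of $\sigma_E$ and $\sigma_S$ literally (the function name is ours):

```python
import numpy as np

def invariant_snr(invariants):
    """Signal-to-noise ratio sigma_S / sigma_E, as defined in the text.

    invariants: (n_surfaces, n_lights) array; entry [i, k] holds the
    invariant coordinate of surface i under light k.
    """
    # Error variance: sum over surfaces of the per-surface variance
    # across lights (sigma_E^2 = sum_i sigma_i^2).
    sigma_e2 = invariants.var(axis=1).sum()
    # Signal variance: variance over all surfaces and all lights.
    sigma_s2 = invariants.var()
    return np.sqrt(sigma_s2 / sigma_e2)
```

A perfectly stable invariant drives $\sigma_E$ to zero and the ratio to infinity; mixing in off-locus lights inflates the per-surface variances and lowers the ratio, exactly the 23-to-15 drop reported above.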


As a concrete test of the utility of our calculated illuminant invariant we carried out a set of object recognition experiments. To the beach ball and detergent package we added 9 other colorful objects. These too were imaged under all 5 lights^1. For all 55 images we calculated their respective grey-scale invariant histograms and then used these as an index for object recognition. Specifically, we took each light in turn and used the corresponding 11 object histograms as feature vectors for the object database. The remaining 44 object histograms were matched against the database, the closest database histogram being used to identify the object.

We found that a 16-bin invariant grey-scale histogram, matched using the Euclidean distance metric^{44}, delivers near perfect recognition. Almost 96% of all objects were correctly identified. Moreover, those incorrectly matched were all found to be the second best matching image. This performance is really quite remarkable. Funt et al^1 measured the illuminant using a spectroradiometer and then corrected the image colors based on this measurement (so-called perfect color constancy). They then indexed objects by matching corrected chromaticity histograms. Surprisingly, they found that they could achieve only 92.3% recognition. Moreover, at least one object was matched in 4th place (the correct matching histogram was the fourth best answer).

That 1-d invariant histograms apparently work better than 2-d color histograms is at first glance hard to understand. We considered two explanations. Given a measurement of the light it is not possible to exactly map RGBs from one lighting condition to another, so we do not expect perfect match performance; the performance we do see must depend on the properties of the color space being used. Our first explanation, then, is that if a 1-d coordinate, such as the invariant presented here, is relatively robust to illuminant change (there was negligible mapping error for that coordinate, relative to other choices of color space), then we might expect the invariant coordinate to support relatively good performance. Unfortunately, this explanation, though appealing (it would be pleasing if our derived invariant were found to have additional nice properties), was not found to be true. Indeed, we found that the absolute residual error remaining after correcting for the illuminant was actually relatively high in the direction of the derived invariant. That is, viewed from a mapping-error perspective alone, one might expect the 1-d histograms to support poorer indexing.

The second explanation rests on the color space being used. In Funt et al's original experiments the traditional rg chromaticity space, which is based on R, G and B camera responses, is used. In this paper the derived invariant is based on ln R, ln G and ln B responses. Experiments on the same data set, reported elsewhere^{45}, have found that a log chromaticity space supports more accurate indexing than conventional chromaticity space. We speculate that indexing based in log color space might work better because the logarithm function maps raw RGBs to more perceptually meaningful quantities: Euclidean color differences in log space are better correlated with perceived color differences (the logarithm function enforces Weber's law). Ultimately, the objects we attempt to discriminate in our experiment are colored in such a way that they look distinct and different to us, so matching should be perceptually relevant.

Funt et al also used a variety of color constancy algorithms, including max RGB, grey-world and a neural net method^{3,46,47}, as a preprocessing step in color-distribution-based recognition. All methods tested performed significantly worse than the perfect color constancy case. No algorithm delivered more than a 70% recognition rate.

Other color-invariant-based methods, predicated on functions of many image pixels, have also been tried on the same data set^{27}. None delivered results better than the 96% recognition rate reported here.
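The indexing scheme above can be sketched in a few lines. The paper specifies 16 bins and Euclidean matching; the histogram range, the L1 normalization and the toy data below are our own illustrative assumptions:

```python
import numpy as np

def invariant_histogram(invariant_values, bins=16, value_range=(0.0, 1.0)):
    """16-bin grey-scale histogram of an image's invariant values, L1-normalized."""
    h, _ = np.histogram(invariant_values, bins=bins, range=value_range)
    return h / h.sum()

def recognize(query_hist, database):
    """Return the database key whose histogram is nearest under Euclidean distance."""
    return min(database, key=lambda name: np.linalg.norm(database[name] - query_hist))

# Toy example: two 'objects' with distinct invariant distributions.
ball = np.concatenate([np.full(80, 0.21), np.full(20, 0.55)])
box = np.full(100, 0.72)
database = {'ball': invariant_histogram(ball), 'box': invariant_histogram(box)}
query = np.concatenate([np.full(70, 0.22), np.full(30, 0.55)])  # the ball, re-imaged
```

Because the invariant is (near) illuminant independent, a re-imaged object lands in essentially the same bins and is recovered as the nearest database entry.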

5. Conclusions

Color constancy in its most general form amounts to re-rendering a given RGB image so that it appears as if it were taken under a known reference light. This problem has turned out to be very hard to solve^2, so most modern algorithms attempt only to recover reference-light chromaticities^{47,10,14}. This simpler 2-dimensional color constancy problem, though more tractable^7, is only soluble given sufficient color content in the scene. In this paper, we considered whether a 1-dimensional color constancy problem might be easier to solve. Specifically, we asked whether there exists a single image color coordinate, a function of RGB, that might be easily mapped to the known reference conditions.

We have shown that such a coordinate exists: a particular linear combination of log RGB responses was shown to be invariant to light intensity and light color. Moreover, by construction, the invariant coordinate under the reference and all other lights remains unchanged, so no color constancy mapping is actually needed (the mapping is always the identity transform). Our result rests on two assumptions: that camera sensitivities behave like delta functions and that illumination chromaticities fall near the Planckian locus. Many cameras have sensitivities which are sufficiently narrow that they behave as if they were equipped with delta function sensitivities. When a camera has broad-band sensitivities, a basis transform usually suffices to take camera measurements to a coordinate system where the delta function assumption holds^{29,28}. The authors have yet to find a camera for which the invariant calculation cannot be made.

Experiments show that invariant-coordinate histograms (coded as grey-scales) provide a stable cue for object recognition. Indeed, they support better indexing performance than chromaticity histograms created post-color-constancy processing^{1,27}.

Acknowledgements

This work was funded under EPSRC grant GR/L60852. The authors are also grateful for the support of Lightseer Ltd.

1: B.V. Funt, K. Barnard, and L. Martin. Is machine colour constancy good enough? In The Fifth European Conference on Computer Vision (Vol. II), pages 445–459. European Vision Society, 1998.

2: D. Forsyth. A novel algorithm for color constancy. Int. J. Comput. Vision, 5:5–36, 1990.

3: E.H. Land. The retinex theory of color vision. Scientific American, pages 108–129, 1977.

4: E.H. Land and J.J. McCann. Lightness and retinex theory. J. Opt. Soc. Amer., 61:1–11, 1971.

5: L.T. Maloney and B.A. Wandell. Color constancy: a method for recovering surface spectral reflectance. J. Opt. Soc. Am. A, 3:29–33, 1986.

6: B.A. Wandell. The synthesis and analysis of color images. IEEE Trans. Patt. Anal. and Mach. Intell., PAMI-9:2–13, 1987.

7: G.D. Finlayson. Color in perspective. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1034–1038, October 1996.

8: M. D'Zmura and G. Iverson. Probabilistic color constancy. In R.D. Luce, M. D'Zmura, D. Hoffman, G. Iverson, and K. Romney, editors, Geometric Representations of Perceptual Phenomena: Papers in Honor of Tarow Indow's 70th Birthday. Lawrence Erlbaum Associates, 1994.

9: S. Tominaga, S. Ebuisi, and B. Wandell. Color temperature estimation of scene illumination. In The Seventh IS&T and SID Color Imaging Conference, pages 42–48, 1999.

10: G. Finlayson and S. Hordley. Selection for gamut mapping colour constancy. Image and Vision Computing, 17(8):597–604, June 1999.

11: G.D. Finlayson and S.D. Hordley. The theory and practice of gamut mapping color constancy. In IEEE Conference on Computer Vision and Pattern Recognition, June 1998.

12: G. Sapiro. Bilinear voting. In IEEE International Conference on Computer Vision, pages 178–183, 1998.

13: D.H. Brainard and W.T. Freeman. Bayesian color constancy. Journal of the Optical Society of America A, 14(7):1393–1411, 1997.

14: G.D. Finlayson, S.D. Hordley, and P.M. Hubel. Colour by correlation: a simple unifying theory of colour constancy. In IEEE International Conference on Computer Vision, pages 835–842, 1999.

15: P.M. Hubel, J. Holm, and G.D. Finlayson. Illuminant estimation and colour correction. In The Colour in Multimedia Conference, pages 97–105, 1998.

16: M.J. Swain and D.H. Ballard. Color indexing. International Journal of Computer Vision, 7(11):11–32, 1991.

17: M.J. Swain. Interactive indexing into image databases. In Storage and Retrieval for Image and Video Databases I, SPIE Proceedings Series, pages 95–103, February 1993.

18: M.A. Stricker and M. Orengo. Similarity of color images. In Storage and Retrieval for Image and Video Databases III, volume 2420 of SPIE Proceedings Series, pages 381–392, February 1995.

19: W. Niblack and R. Barber. The QBIC project: querying images by content using color, texture and shape. In Storage and Retrieval for Image and Video Databases I, volume 1908 of SPIE Proceedings Series, 1993.

20: B.V. Funt and G.D. Finlayson. Color constant color indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(5):522–529, May 1995.

21: J.A. Worthey and M.H. Brill. Heuristic analysis of von Kries color constancy. Journal of the Optical Society of America A, 3(10):1708–1712, 1986.

22: S.K. Nayar and R. Bolle. Computing reflectance ratios from an image. Pattern Recognition, 26:1529–1542, 1993.

23: G. Healey and L. Wang. The illumination-invariant recognition of texture in color images. Journal of the Optical Society of America A, 12(9):1877–1883, 1995.

24: G. Healey and D. Slater. Global color constancy: recognition of objects by use of illumination invariant properties of color distributions. Journal of the Optical Society of America A, 11(11):3003–3010, November 1994.

25: G.D. Finlayson, S.S. Chatterjee, and B.V. Funt. Color angular indexing. In The Fourth European Conference on Computer Vision (Vol. II), pages 16–27. European Vision Society, Springer Verlag, 1996.

26: G.D. Finlayson, B. Schiele, and J.L. Crowley. Comprehensive colour image normalization. In The Fifth European Conference on Computer Vision. European Vision Society, Springer Verlag, 1998.

27: G.D. Finlayson and G.Y. Tian. Color normalization for color object recognition. International Journal on Pattern Recognition and Artificial Intelligence, pages 1271–1285, 1999.

28: G.D. Finlayson, M.S. Drew, and B.V. Funt. Spectral sharpening: sensor transformations for improved color constancy. J. Opt. Soc. Am. A, 11(5):1553–1563, May 1994.

29: G.D. Finlayson and B.V. Funt. Coefficient channels: derivation and relationship to other theoretical studies. COLOR Research and Application, 21(2):87–96, 1996.

30: G. Wyszecki and W.S. Stiles. Color Science: Concepts and Methods, Quantitative Data and Formulas. Wiley, New York, 2nd edition, 1982.

31: K. Barnard. Computational color constancy: taking theory into practice. MSc thesis, Simon Fraser University, School of Computing Science, 1995.

32: D.H. Marimont and B.A. Wandell. Linear models of surface and illuminant spectra. J. Opt. Soc. Am. A, 9(11):1905–1913, 1992.

33: M.S. Drew and B.V. Funt. Natural metamers. CVGIP: Image Understanding, 56:139–151, 1992.

34: M.J. Vrhel and H.J. Trussel. Color correction using principal components. Color Research and Application, 17:328–338, 1992.

35: M.J. Vrhel. Mathematical methods of color correction. PhD thesis, North Carolina State University, Department of Electrical and Computer Engineering, 1993.

36: T. Gevers. Color Image Invariant Segmentation and Retrieval. University of Amsterdam, 1996. ISBN 90-74795-51-X.

37: C.S. McCamy, H. Marcus, and J.G. Davidson. A color-rendition chart. J. App. Photog. Eng., pages 95–99, 1976.

38: G.D. Finlayson, M.S. Drew, and B.V. Funt. Color constancy: generalized diagonal transforms suffice. J. Opt. Soc. Am. A, 11:3011–3020, 1994.

39: M.S. Drew and G.D. Finlayson. Spectral sharpening with positivity. J. Opt. Soc. Am. A, 17:1361–1370, 2000.

40: G.D. Finlayson and M.S. Drew. Positive Bradford curves through sharpening. In IS&T and SID's 7th Color Imaging Conference, 1999.

41: J.H. Wilkinson. The Algebraic Eigenvalue Problem. Monographs on Numerical Analysis. Oxford University Press, 1965.

42: I.T. Jolliffe. Principal Component Analysis. Springer-Verlag, 1986.

43: M.J. Vrhel, R. Gershon, and L.S. Iwan. Measurement and analysis of object reflectance spectra. COLOR Research and Application, 19(1):4–9, 1994.

44: J. Hafner, H.S. Sawhney, W. Equitz, M. Flickner, and W. Niblack. Efficient color histogram indexing for quadratic form distance functions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(7):729–735, 1995.

45: G.D. Finlayson and J. Berens. Log-opponent chromaticity coding of colour space. Technical Report Col-00-01, University of East Anglia School of Information Systems, 2000. Available from http://www.sys.uea.ac.uk/Research/colourgroup/Col-00-01.ps.Z.

46: R.W.G. Hunt. The Reproduction of Color. Fountain Press, 5th edition, 1995.

47: V. Cardei, B.V. Funt, and K. Barnard. Learning color constancy. In 4th IS&T and SID Color Imaging Conference, 1996.

Fig. 1. Normalized 2500K black-body radiator, plotted over 400–700nm: exact equation (solid line), approximation (dashed line). Note in this case the two lines are on top of each other.

Fig. 2. Normalized 5500K black-body radiator, plotted over 400–700nm: exact equation (solid line), approximation (dashed line).

Fig. 3. Normalized 10000K black-body radiator, plotted over 400–700nm: exact equation (solid line), approximation (dashed line).

Fig. 4. Normalized 5500K CIE D55 standard daylight (solid line) and Planckian 5500K illuminant (dashed line), plotted over 400–700nm.

Fig. 5. CIE xy chromaticity diagram. The solid curved line is the chromaticity locus of all typical Planckian black-body lights. From left to right, illuminants begin bluish, become whitish then yellowish, ending in reddish light. Crosses ('+') denote the chromaticities of typical natural and man-made lights.

Fig. 6. Perfect Dirac-delta camera data (sensitivities anchored at 450nm, 540nm and 610nm), plotted as ln R/G against ln B/G. Log chromaticity differences (LCDs) for 7 surfaces (green, yellow, white, blue, purple, orange and red) under 10 Planckian lights (with increasing temperature from 2800K to 10000K). Variation due to illumination is along a single direction.

Fig. 7. Perfect Dirac-delta camera data (sensitivities anchored at 450nm, 540nm and 610nm). Log chromaticity differences (LCDs) for 7 surfaces (green, yellow, white, blue, purple, orange and red) under 10 Planckian lights (with increasing temperature from 2800K to 10000K). The LCDs (from Fig. 6) have been rotated so that the x coordinate depends only on surface reflectance; the y coordinate depends strongly on illumination.

Fig. 8. CIE xy chromaticity diagram. The solid curved line is the chromaticity locus of all typical Planckian black-body lights. From left to right, illuminants begin bluish, become whitish then yellowish, ending in reddish light. Dots ('.') denote chromaticities of weighted averages of Planckian-locus lights. It is clear that though these new illuminants do not fall on the locus, they do fall close to it.

Fig. 9. SONY DXC-930 normalized camera sensitivities, plotted over 400–700nm.

Fig. 10. SONY DXC-930 camera data, plotted as ln R/G against ln B/G. Log chromaticity differences (LCDs) for 7 surfaces (green, yellow, white, blue, purple, orange and red) under 10 Planckian lights (with increasing temperature from 2800K to 10000K). Variation due to illumination is along a single direction.

Fig. 11. SONY DXC-930 camera data. Log chromaticity differences (LCDs) for 7 surfaces (green, yellow, white, blue, purple, orange and red) under 10 Planckian lights (with increasing temperature from 2800K to 10000K). The LCDs (from Fig. 10) have been rotated so that the x coordinate depends only on surface reflectance; the y coordinate depends strongly on illumination.

Fig. 12. Normalized XYZ color matching curves, plotted over 400–700nm.

Fig. 13. XYZ tristimuli are transformed to corresponding SONY camera RGBs using a linear transform. Log chromaticity differences (LCDs) for 7 surfaces (green, yellow, white, blue, purple, orange and red) under 10 Planckian lights (with increasing temperature from 2800K to 10000K) are calculated and rotated according to the SONY rotation matrix. The x coordinate depends only weakly on illumination; the y coordinate depends strongly on illumination.

Fig. 14. LCDs calculated for XYZ tristimuli that have been transformed to corresponding SONY camera RGBs using a linear transform are plotted as filled circles. Lines linking these approximate coordinates to the actual SONY LCD coordinates are shown.

Fig. 15. The 1st and 3rd columns show luminance grey-scale images of a ball and a detergent packet under 5 lights (top to bottom). The 2nd and 4th columns show the corresponding invariant images.