


1684 J. Opt. Soc. Am. A/Vol. 3, No. 10/October 1986

Formal connections between lightness algorithms

Anya Hurlbert

Center for Biological Information Processing and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, E25-201, Cambridge, Massachusetts 02139

Received March 1, 1986; accepted June 27, 1986

The computational problem underlying color vision is to recover the invariant surface-spectral-reflectance properties of an object. Lightness algorithms, which recover an approximation to surface reflectance in independent wavelength channels, have been proposed as one method to compute color. This paper clarifies and formalizes the lightness problem by proposing a new formulation of the intensity equation on which lightness algorithms are based and by identifying and discussing two basic subproblems of lightness and color computation: spatial decomposition and spectral normalization of the intensity signal. Several lightness algorithms are reviewed, and a new extension (the multiple-scales algorithm) of one of them is proposed. The main computational result is that each of the lightness algorithms may be derived from a single mathematical formula, under different conditions, which, in turn, imply limitations for the implementation of lightness algorithms by man or machine. In particular, the algorithms share certain limitations on their implementation that follow from the physical constraints imposed on the statement of the problem and the boundary conditions applied in its solution.

1. INTRODUCTION

The color of an object tends to stay the same regardless of the type of light illuminating it: a red apple is red under daylight or indoor tungsten light, although the spectrum of light that it reflects is different in each environment. This familiar phenomenon, color constancy, poses a computational problem that must be solved by any artificial or biological visual system that sees in color: how to extract the invariant spectral-reflectance properties of an object's surface from the varying light that it reflects.

One way to compute color is to compute lightness, the average of surface reflectance over all or part of the spectrum, relative to the surrounding surface reflectance. The lightness computation in each of three distinct spectral bands yields three distinct lightness values that give an invariant description of the surface-spectral-reflectance function without detailing its behavior at each wavelength.

Land's implementation of the retinex lightness algorithm[1-3] on a two-dimensional Mondrian (a surface covered with patches of random colors) demonstrates that the results of lightness computations parallel human color perception and therefore suggests that lightness algorithms may provide a model for the computation of color by the human visual system. When the intensity of any or all of three spectrally distinct lights illuminating the Mondrian is arbitrarily changed, the patches do not change color. In parallel, the triplet of lightness values computed for each patch does not change and maps into a constant point in color space.

Land's retinex algorithm is the prototype for other lightness algorithms,[4-8] which share its goal to recover a description of the relative spectral reflectance at every point along a surface in distinct spectral bands and employ similar physical constraints to achieve it. Other algorithms that compute color employ different constraints and methods to achieve different goals.[7,9,13]

The main purpose of this paper is to demonstrate that each of the lightness algorithms reviewed here may be derived from a single mathematical formula under specified conditions. The similarities between them are therefore clarified and put into compact form. The differences between them are then confined to questions of implementation.

2. THE INTENSITY EQUATION

The starting point for lightness algorithms is the intensity signal, from which the properties of the reflecting surfaces and ambient illumination must be decoded. Here we assume that the signal is captured by a two-dimensional array of light sensors (artificial or biological), and we express the intensity equation as

$$I^*(\lambda, \mathbf{r}) = \rho^*(\lambda, \mathbf{r})\,F(\mathbf{k}, \mathbf{n}, \mathbf{s})\,E^*(\lambda, \mathbf{r}), \qquad (2.1)$$

where $\lambda$ is wavelength, $\mathbf{r}$ is the spatial coordinate in the sensor array (or the two-dimensional projection of the surface coordinate), $E^*(\lambda, \mathbf{r})$ is the intensity of the ambient illumination,[14] and $\rho^*(\lambda, \mathbf{r})F(\mathbf{k}, \mathbf{n}, \mathbf{s}) = R^*(\lambda, \mathbf{r})$ is the surface-reflectivity function[15,16] (see Fig. 1). $\rho^*(\lambda, \mathbf{r})$ is the component of the reflectivity function that depends only on material properties of the surface, that is, the albedo, and $F(\mathbf{k}, \mathbf{n}, \mathbf{s})$ is the component that depends on the viewing geometry, where $\mathbf{k}$ is the viewer direction and $\mathbf{s}$ is the source direction, each relative to $\mathbf{n}$, the surface normal.[17] Here the terms albedo and surface reflectance will be used interchangeably.

The factors in Eq. (2.1) may be regrouped to separate the surface properties from the viewing geometry by defining the effective irradiance $E'^*(\lambda, \mathbf{r})$, where $E'^*(\lambda, \mathbf{r}) = F(\mathbf{k}, \mathbf{n}, \mathbf{s})E^*(\lambda, \mathbf{r})$. The effective irradiance is therefore the ambient illumination modified by the orientation, shape, and location of the reflecting surface.
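As a purely illustrative numeric reading of Eq. (2.1), the intensity at one sensor location and wavelength is simply the product of the three factors; all values in this sketch are hypothetical:

```python
import math

# Toy reading of Eq. (2.1): reflected intensity is the product of albedo,
# a viewing-geometry factor, and ambient illumination (hypothetical values).
albedo = 0.6        # rho*(lambda, r): material property of the surface
geometry = 0.8      # F(k, n, s): viewing-geometry factor
ambient = 100.0     # ambient illumination intensity at this (lambda, r)

effective_irradiance = geometry * ambient   # illumination modified by geometry
intensity = albedo * effective_irradiance   # I*(lambda, r) of Eq. (2.1)

assert math.isclose(intensity, albedo * geometry * ambient)
```

Regrouping the product as albedo times effective irradiance, as in the text, changes nothing numerically; it only separates surface properties from viewing geometry.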

0740-3232/86/101684-10$02.00 © 1986 Optical Society of America



Fig. 1. The geometry of the viewing situation. s is the vector from the surface to the light source, n is the normal vector to the surface, and k is the vector from the surface to the viewer.


Fig. 2. a) Human photoreceptor spectral sensitivities: the "blue" cone, the rod, the "green" cone, and the "red" cone (distinguished by line type in the original figure). Redrawn from Dartnall et al., in Colour Vision: Physiology and Psychophysics, J. D. Mollon and L. T. Sharpe, eds. (Academic, London, 1983). b) Possible wavelength channels formed by linear combinations of the photoreceptor channels.

The effective irradiance is the intensity that the surface would reflect if it were painted white, that is, if $\rho^*(\lambda, \mathbf{r}) = 1$. If the surface were also normal to both the source and the viewer direction, that is, if $F(\mathbf{k}, \mathbf{n}, \mathbf{s}) = 1$, then the effective irradiance would equal the ambient illumination intensity, or $E'^*(\lambda, \mathbf{r}) = E^*(\lambda, \mathbf{r})$. If, in addition, the ambient illumination were white and $F(\mathbf{k}, \mathbf{n}, \mathbf{s}) = 1$, then the reflected intensity would be directly proportional to the surface albedo everywhere by the same factor.

The signal transmitted by the sensor is a function of the light intensity integrated over the wavelength range of the sensor. In humans, this function is nonlinear and is commonly approximated by a logarithm, as shown:

$$S^\nu(\mathbf{r}) = \log \int a^\nu(\lambda)\,\rho^*(\lambda, \mathbf{r})\,E'^*(\lambda, \mathbf{r})\,d\lambda, \qquad (2.2)$$

where $\nu$ labels the type of sensor [e.g., in humans, the long-, middle-, or short-wavelength-sensitive cone; see Fig. 2a)], $S^\nu(\mathbf{r})$ is the signal sent by the sensor, and $a^\nu(\lambda)$ is its spectral-sensitivity function.

In lightness algorithms, the sensor signal is decomposed into a sum of two components representing surface reflectance and effective irradiance by assuming that the sensor has the same nonlinear property as a human photoreceptor or by converting to logarithms. For the monochromatic signal, the decomposition is straightforward,

$$S^\nu(\lambda, \mathbf{r}) = \log[a^\nu(\lambda)\rho^*(\lambda, \mathbf{r})] + \log E'^*(\lambda, \mathbf{r}),$$

but for the integrated signal it is not. It is clear that if $a^\nu(\lambda)$ were a delta function, $\delta(\lambda - \lambda_\nu)$ (that is, if the sensor range were a very narrow band), Eq. (2.2) would become

$$S^\nu(\mathbf{r}) = \log[\rho^{*\nu}(\mathbf{r})E'^{*\nu}(\mathbf{r})], \qquad (2.3)$$

where $\rho^{*\nu}(\mathbf{r})$ is the surface reflectance and $E'^{*\nu}(\mathbf{r})$ is the effective irradiance integrated over the narrow $\nu$th spectral band. For human photoreceptors and most filters and cameras used in artificial visual systems, the spectral sensitivities are not delta functions, so Eq. (2.3) does not immediately follow. Other descriptions of lightness algorithms have neglected this problem.
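The narrow-band argument can be checked numerically. The sketch below uses hypothetical linear spectra and compares a delta-like sensitivity, for which the log of the integrated signal separates exactly as in Eq. (2.3), with a broad band, for which it does not:

```python
import math

# Toy check of the step from Eq. (2.2) to Eq. (2.3); all spectra are
# hypothetical and defined on a 1-nm grid from 400 to 700 nm.
lams = list(range(400, 701))

def rho(l):      # assumed surface albedo, rising toward long wavelengths
    return 0.2 + 0.5 * (l - 400) / 300

def E(l):        # assumed slowly varying effective irradiance
    return 1.0 + 0.001 * (l - 400)

def signal(a):   # integrated sensor signal for a sensitivity function a
    return sum(a(l) * rho(l) * E(l) for l in lams)

# Narrow band: a(l) nonzero only at 550 nm, so Eq. (2.3) holds exactly.
S_narrow = math.log(signal(lambda l: 1.0 if l == 550 else 0.0))
assert math.isclose(S_narrow, math.log(rho(550)) + math.log(E(550)))

# Broad band (500-600 nm): the product no longer factors out of the
# integral, so the split into band-integrated terms leaves a residual.
def a_broad(l):
    return 1.0 if 500 <= l <= 600 else 0.0

S_broad = math.log(signal(a_broad))
split = math.log(sum(a_broad(l) * rho(l) for l in lams)) + \
        math.log(sum(a_broad(l) * E(l) for l in lams) / 101.0)
assert abs(S_broad - split) > 1e-4    # Eq. (2.3) fails to follow
```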

The desired decomposition may be obtained exactly by first writing $\rho^*$ and $E'^*$ as weighted sums of a fixed set of basis functions, chosen to describe fully most naturally occurring illuminants and surface albedos and to satisfy certain constraints. By making appropriate transformations, the sensor signals may then also be written in terms of the basis functions, which therefore define new lightness channels.[7] The intensity equation then becomes

$$S_i(\mathbf{r}) = E_i(\mathbf{r}) + \rho_i(\mathbf{r}), \qquad (2.4)$$

where the index $i$ labels the type of new lightness channel formed by the transformation and $S_i(\mathbf{r}) = \log S^*_i(\mathbf{r})$, and so forth. An approximation to Eq. (2.4), derived under less restrictive conditions, is probably sufficient as a starting point for most natural images. As discussed in Hurlbert,[7] the new channels represent linear combinations of the original sensor channels and may take any of several forms [one of which is illustrated in Fig. 2b)]. The biological opponent-color channels, for example, may be the result of a similar, less exact transformation. We will assume that although there is no a priori limit on the number of new channels, three will be sufficient to describe most natural images.

3. THE COMPUTATIONAL PROBLEMS IN LIGHTNESS ALGORITHMS

The assumption underlying the use of lightness algorithms in color computation is that Eq. (2.4) may be solved for $E_i(\mathbf{r})$


and $\rho_i(\mathbf{r})$ independently and identically for each of the three lightness channels, and the resulting triplet of lightness values specifies color. Thus, at each location in the sensor array, there are three equations and six unknowns, so the problem is underconstrained. In the absence of direct measurements of either the effective irradiance or the surface reflectance, physical constraints must be imposed in order to solve for the unknowns.

The lightness problem may be broken down in terms of two subproblems of color computation, spatial decomposition of the intensity signal and spectral normalization of the surface reflectance and the effective irradiance.

A. Spatial Decomposition

The first step in lightness algorithms is to split the intensity signal into its two components at each spatial location: spatial decomposition. This step is performed by spatial differentiation of the intensity signal under the following assumptions:

Lightness Assumption 1. The scene is a two-dimensional Mondrian, that is, a flat surface that is divided into patches of uniform reflectance.

Lightness Assumption 2. The effective irradiance varies slowly and smoothly across the entire scene and is everywhere independent of the viewer's position.

Under these constraints, the two components are disentangled by the following:

(1) Differentiating the intensity signal, $S_i(\mathbf{r})$, over space.

(2) Thresholding the derivative, $d[S_i(\mathbf{r})]$, to eliminate small values that are due to smooth changes in the effective irradiance and to retain large values that are due to abrupt changes in the surface reflectance at borders between patches.

(3) Integrating the thresholded derivative, $T\,d[S_i(\mathbf{r})] = d[\rho_i(\mathbf{r})]$, over space to recover surface reflectance.
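The three steps can be sketched in one dimension in the log domain, where the intensity signal is the sum S = E + ρ. The signal, threshold value, and single reflectance edge below are assumptions of the sketch:

```python
# 1-D sketch of differentiate / threshold / integrate in the log domain:
# a slow irradiance gradient plus one abrupt reflectance edge
# (all values and the threshold are hypothetical).
N = 100
E = [0.005 * x for x in range(N)]                      # slow, smooth irradiance
rho = [0.0 if x < N // 2 else 1.0 for x in range(N)]   # one reflectance edge
S = [e + r for e, r in zip(E, rho)]

d = [S[x + 1] - S[x] for x in range(N - 1)]            # (1) differentiate
THRESHOLD = 0.1                                        # assumed value
td = [v if abs(v) > THRESHOLD else 0.0 for v in d]     # (2) threshold

L = [0.0]                                              # (3) reintegrate
for v in td:
    L.append(L[-1] + v)

# rho is recovered up to the lost constant of integration, except for the
# small irradiance change hidden under the edge itself (0.005 here).
assert all(abs((L[x] - L[0]) - rho[x]) < 0.01 for x in range(N))
```

The residual error at the edge illustrates the inherent inaccuracy of the threshold operation noted in the next subsection.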

B. Spectral Normalization

Spatial decomposition does not recover surface reflectance exactly because the constant of integration, representing the absolute scale of $\rho$ relative to $S$, is lost in the final step and must be reset. (The threshold operation is also inherently inaccurate, unless it is made flexible enough to recognize variations in the size of irradiance and surface-reflectance spatial changes.) The result of the computation, lightness, is therefore at most only proportional to surface reflectance. We may write $[k_i]^{-1}[TS(\mathbf{r})]_i = L_i(\mathbf{r}) = c_i(\mathbf{r})\rho_i(\mathbf{r})$, where $[TS(\mathbf{r})]_i$ is the result of integration of the thresholded intensity before the constant of integration, $[k_i]^{-1}$, has been set, $L_i$ is the lightness in the $i$th wavelength band, and $c_i(\mathbf{r})$ is a multiplicative factor that may vary across space.

The value of $k_i$ depends on the spectral normalization that the algorithm performs. The retinex algorithm and, by extension, other lightness algorithms that do not explicitly address the normalization problem impose the final constraint.

Lightness Assumption 3. The average surface reflectance of each scene in each wavelength band is the same: gray, or the average of the lightest and darkest naturally occurring surface-reflectance values.

$k_i$ may then be set to the average value of the thresholded intensity of the scene, so that the computed lightness of a patch approximately equals its surface reflectance relative to the average reflectance of its surround. This computational definition agrees with the psychophysical definition of lightness. If lightness assumption 3 holds, then the lightness of a patch is an accurate and invariant measure of its surface reflectance. The triplet of lightness values in the three distinct wavelength channels then defines the color of a patch.

It is important to emphasize that the computed lightness is not an absolute measure of the surface reflectance. Because the intensity signal is a product of two components, the scale of $E$ or $\rho$ relative to $S$ cannot be recovered in the absence of independent measurements of either component. Consequently, multiplying $E$ cannot be distinguished from multiplying $\rho$; in both cases, $S$ is multiplied in the same way. But lightness assumption 3 implies that any change of scale in $S$ is always interpreted as a change of scale in $E$.

To illustrate, consider a test patch against a background of random colors. Lightness algorithms compute the effective irradiance on the scene, $E_c^i(\mathbf{r})$, by dividing $S_i(\mathbf{r})$ by the lightness $L_i(\mathbf{r})$: $E_c^i(\mathbf{r}) = E_i(\mathbf{r})\rho_i(\mathbf{r})/c_i(\mathbf{r})\rho_i(\mathbf{r}) = E_i(\mathbf{r})/c_i(\mathbf{r})$. If $E_i(\mathbf{r})$ is increased, $k_i$ and $[TS(\mathbf{r})]_i$ are each increased by the same factor, and the computed lightness of the test patch does not change. Effectively, the average surface reflectance in the $i$th-type channel is automatically reset to gray. $E_c^i(\mathbf{r})$ is increased by the same factor as $E_i(\mathbf{r})$, so the spectral skew is correctly interpreted. The closeness of $E_c$ to $E$ depends on the value of $c_i(\mathbf{r})$, which depends partly on the value of $k_i$ and which may vary across space if, for example, a spatial gradient of surface reflectance has been discarded by the threshold operator.

Alternatively, if the background colors are skewed toward one part of the spectrum, the computed lightness of the test patch changes, but the spectral skew in the intensity signal is incorrectly interpreted as a skew in $E_i(\mathbf{r})$: unless $c_i(\mathbf{r})$ is skewed in the same way as $c_j(\mathbf{r})$, $E_c^i/E_c^j \neq E^i/E^j$. That is, lightness algorithms will "see" a dull red patch against a range of green patches as lighter than when against a range of red patches under the same illumination and will interpret the skew toward green as a lack of red in the illuminant. The spectral energy distribution of the illuminant is correctly recovered only if the spatial variation and the average of the surface reflectance are the same in each channel (i.e., $k_i = k_j$ for all $i, j$).
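The behavior described in the last two paragraphs can be checked with a toy calculation in the log domain (the patch values are hypothetical): rescaling the illuminant leaves lightness unchanged, while skewing the background reflectances shifts the test patch's lightness:

```python
# Toy check of the normalization behavior (log domain, hypothetical patch
# values): lightness of a patch is its signal minus the scene average.
def lightness(rho_log, e_log):
    S = [r + e_log for r in rho_log]          # S_i = E_i + rho_i, E uniform
    avg = sum(S) / len(S)
    return [s - avg for s in S]

rho = [0.1, 0.5, 0.9, 0.3]                    # test patch first, then background
L1 = lightness(rho, e_log=0.0)
L2 = lightness(rho, e_log=2.0)                # illuminant rescaled
assert all(abs(a - b) < 1e-12 for a, b in zip(L1, L2))   # lightness unchanged

# Skewing the background (same test patch) changes its computed lightness,
# and the skew is read as a property of the illuminant:
rho_skewed = [0.1, 0.9, 0.9, 0.9]
L3 = lightness(rho_skewed, e_log=0.0)
assert abs(L3[0] - L1[0]) > 0.1               # test patch "looks" different
```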

In summary, the first two lightness assumptions ensure that the algorithm successfully disentangles spatial changes in the effective irradiance from spatial changes in the spectral-reflectance function of a surface, under the given conditions. The third lightness assumption provides a normalization scheme that correctly interprets temporal changes in the spectral energy distribution of the illuminant on a given scene but cannot distinguish such changes from a skew in the assortment of colors in the scene.

Lightness algorithms recover lightness values that have an arbitrary relationship to the physically correct surface-spectral-reflectance function and therefore recover a weak form of color constancy. This fact does not disqualify lightness algorithms as models for the computation of color by humans, because humans also display weak color constancy.[21] The limitation of lightness algorithms as models for color computation arises from the fact that they are designed for the restricted world of two-dimensional Mondrians. They


do not address a task essential to color constancy in the natural three-dimensional world: to distinguish shading and shadows resulting from object contours and the spatial distribution of the illuminant from true reflectance changes. As Land has recently demonstrated by using still-life scenes, lightness algorithms may, nevertheless, approximate human perception even in the three-dimensional world.[22]

4. REVIEW OF LIGHTNESS ALGORITHMS

In this section, individual lightness algorithms are reviewed in more detail to provide a background for the main computational results in this paper, discussed in Section 5.

A. Land's Algorithm

Land's retinex algorithm[3] makes the three lightness assumptions described in Section 3 and assumes that the intensity signal is recorded by a discrete array of sensors, indexed by a one-dimensional variable.

Spatial decomposition of the intensity signal is performed by one-dimensional first-order spatial differentiation in the following way. Sensor $x$ is connected by random paths to each of many different sensors $w$ ($w = 1, 2, \ldots, N$; $w$ labels the endpoint and the path). Following a single direction in each path, at each junction $(k_w, k_{w+1})$ in the path, the difference in activities $(S^i_{w+1} - S^i_w)$ is calculated and thresholded, for each lightness channel $i$. The differences over each path in each channel are then summed to yield a ratio $L^i(x, w)$ that under the three lightness assumptions closely approximates $\log(\rho_x^i/\rho_w^i)$, the log of the ratio of surface reflectances at the starting points and endpoints.

Spectral normalization is then achieved by averaging $L^i(x, w)$ over all endpoints $w$, which, by lightness assumption 3, fall in a random assortment of colors, to yield the lightness $L^i(x)$:[23]

$$L^i(x) = \frac{1}{N}\sum_{w=1}^{N} L^i(x, w) = \log \rho_x^i - \frac{1}{N}\sum_{w=1}^{N} \log \rho_w^i. \qquad (4.1)$$

Lightness is therefore normalized by the average surface reflectance of the scene, as described in Section 3.
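The path computation described above can be sketched in one dimension in the log domain. The Mondrian values, irradiance gradient, and threshold below are hypothetical:

```python
# 1-D sketch of the path computation (log domain). The Mondrian values,
# irradiance gradient, and threshold are hypothetical.
rho = [0.2, 0.8, 0.5, 0.9, 0.1, 0.6]          # log reflectances of patches
E = [0.001 * x for x in range(len(rho))]      # slow irradiance gradient
S = [r + e for r, e in zip(rho, E)]
THRESHOLD = 0.01

def path_ratio(x, w):
    """Sum of thresholded successive differences from sensor x to endpoint
    w; approximates log(rho_x / rho_w)."""
    step = 1 if w > x else -1
    total = 0.0
    for k in range(x, w, step):
        d = S[k + step] - S[k]
        total += d if abs(d) > THRESHOLD else 0.0
    return -total

x = 2
endpoints = [w for w in range(len(S)) if w != x]
L = sum(path_ratio(x, w) for w in endpoints) / len(endpoints)

# As in Eq. (4.1): log rho_x minus the average of log rho_w over endpoints.
expected = rho[x] - sum(rho[w] for w in endpoints) / len(endpoints)
assert abs(L - expected) < 0.05
```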

B. Horn's Algorithm

Horn's algorithm[4] makes the first two lightness assumptions listed in Section 3. It differs from the retinex algorithm in that the rotationally symmetric two-dimensional Laplacian operator $\nabla^2$ performs the first step, spatial differentiation of the intensity signal. The lightness obtained by this method is therefore a solution to the Poisson equation inside the Mondrian:

$$\nabla^2 L(x, y) = T[\nabla^2 S(x, y)], \qquad (4.2)$$

where $T$ represents the threshold operation that is performed on the output of the Laplacian.[24]

In its continuous form [that is, when $(x, y)$ are continuous spatial variables], expression (4.2) has a known solution under certain boundary conditions. If (1) the region on which $S$ is defined (the sensor array or retina) is infinite or (2) the sensor array is finite, but the image is wholly contained in it and is surrounded by a border of constant reflectance for which $T[\nabla^2 S] = 0$, the inverse Laplacian is then performed by convolution with the Green's function $g = (1/2\pi)\ln r$ (Ref. 25):

$$L(x, y) = \iint \frac{1}{2\pi}\ln(r)\, T[\nabla^2 S(\xi, \eta)]\,d\xi\,d\eta, \qquad (4.3)$$

where $r^2 = (x - \xi)^2 + (y - \eta)^2$ and $(x, y)$ are the coordinates of the point in the sensor array at which lightness is evaluated.

When either condition (1) or (2) is met, the reconstructed lightness is unique up to a constant, which is set by normalizing as prescribed by lightness assumption 3. Condition (1) is not satisfied by the photoreceptor array in the human retina. For a finite sensor array or a retina, condition (2) is crucial to Horn's solution; its importance is discussed later.

When the intensity function $S$ is not continuous but discrete, for example, sampled by a discrete sensor array, and boundary condition (2) is met, the inverse solution may be expressed as a convolution with a similar function.

An alternative method of solving Eq. (4.2) in the discrete case is by iteration. The Laplacian is approximated by a discrete differencing operator,

$$T\!\left(S_{ij} - \frac{1}{N}\sum_{kl} S_{kl}\right) = \nabla^2 L_{ij}, \qquad (4.4)$$

where $S_{ij}$ is the discretized version of $S(x, y)$, $(i, j)$ are coordinates of retinal cells, and the $S_{kl}$ are the $N$ closest neighbors of $S_{ij}$, where $N = 6$ for a closely packed array.

Because $T[\nabla^2 S_{ij}] = \nabla^2 L_{ij}$ [Eq. (4.2)], the left-hand side of Eq. (4.4) may be rewritten as

$$L_{ij} - \frac{1}{N}\sum_{kl} L_{kl} = \nabla^2 L_{ij}, \qquad (4.5)$$

where $L_{ij}$ is the lightness to be solved for. To perform the inverse Laplacian, Gauss-Seidel iteration takes as its starting point the exact solution of Eq. (4.5):

$$L_{ij} = \nabla^2 L_{ij} + \frac{1}{N}\sum_{kl} L_{kl}. \qquad (4.6)$$

In the first step of the iteration, the $L_{kl}$ are set to zero, so that the first solution is $L_{ij} = \nabla^2 L_{ij} = T[\nabla^2 S_{ij}]$. This solution is then substituted for the $L_{kl}$ in the next step (that is, $L_{kl} = \nabla^2 L_{kl}$). At each subsequent step of the iteration, the solutions $L_{ij}$ obtained in the previous step are substituted for the $L_{kl}$. The iteration converges slowly but stably to the correct solution.
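The iteration can be sketched numerically. For simplicity the sketch below assumes a square grid (N = 4 nearest neighbors) rather than the closely packed N = 6 array, and a hypothetical two-patch Mondrian under uniform irradiance (so no thresholding is needed):

```python
# Sketch of iterative solution of Eq. (4.6) on a small square grid
# (N = 4 neighbors; Mondrian, grid size, and sweep count are assumptions).
W = H = 16
rho = [[0.0 if x < W // 2 else 1.0 for x in range(W)] for y in range(H)]

def neighbors(F, x, y):
    """The 4 nearest neighbors, with edges clamped."""
    return [F[y][max(x - 1, 0)], F[y][min(x + 1, W - 1)],
            F[max(y - 1, 0)][x], F[min(y + 1, H - 1)][x]]

# Data term: discrete Laplacian of the (log) signal; S = rho here.
data = [[rho[y][x] - sum(neighbors(rho, x, y)) / 4.0
         for x in range(W)] for y in range(H)]

# Sweeps of Eq. (4.6): L_ij = (data)_ij + mean of neighboring L values.
L = [[0.0] * W for _ in range(H)]
for _ in range(2000):
    for y in range(H):
        for x in range(W):
            L[y][x] = data[y][x] + sum(neighbors(L, x, y)) / 4.0

# The iteration converges to rho up to an additive constant.
offset = L[0][0] - rho[0][0]
err = max(abs(L[y][x] - offset - rho[y][x])
          for y in range(H) for x in range(W))
assert err < 0.05
```

The additive constant left undetermined by the iteration is exactly the constant of integration that lightness assumption 3 must reset.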

C. The Multiple-Scales Algorithm

The multiple-scales algorithm proposes an alternative method of solution for the inverse Laplacian problem expressed in Eq. (4.2). This solution is based on the equation

$$-\int_0^\infty dt \iint G(x - \xi, y - \eta; t)\,\nabla^2 f(\xi, \eta)\,d\xi\,d\eta = -\int_0^\infty dt \iint \nabla^2 G(x - \xi, y - \eta; t)\,f(\xi, \eta)\,d\xi\,d\eta = f(x, y), \qquad (4.7)$$

which states that the Laplacian operator may be inverted by summing its convolution with a Gaussian ($G$) over a continuum of variances or scales ($t$). The proof of this statement follows from the heat equation

$$\nabla^2 u(x, y; t) = \frac{\partial}{\partial t}u(x, y; t), \qquad (4.8)$$

which is solved by any function $u(x, y; t)$ that can be written as

$$u(x, y; t) = \iint G(x - \xi, y - \eta; t)\,f(\xi, \eta)\,d\xi\,d\eta. \qquad (4.9)$$

The heat equation implies that

$$-\lim_{T\to\infty}\int_0^T \frac{\partial}{\partial t}u(x, y; t)\,dt = -\lim_{T\to\infty}\int_0^T dt \iint \nabla^2 G(x - \xi, y - \eta; t)\,f(\xi, \eta)\,d\xi\,d\eta = u(x, y; 0), \qquad (4.10)$$

and because $u(x, y; 0) = \iint G(x - \xi, y - \eta; 0)\,f(\xi, \eta)\,d\xi\,d\eta = \iint \delta(x - \xi, y - \eta)\,f(\xi, \eta)\,d\xi\,d\eta = f(x, y)$, Eq. (4.7) follows.[26]

The last part of Eq. (4.7) implies that performing the Laplacian and then taking its inverse on a function $f$ is equivalent to summing the convolution of $f$ with $-\nabla^2 G$ over a continuum of scales. Because the Gaussian itself satisfies the heat equation,

$$\nabla^2 G = \frac{\partial}{\partial t}G(x, y; t) \approx \frac{G(x, y; t_1) - G(x, y; t_2)}{t_1 - t_2}, \qquad (4.11)$$

an equivalent way to express the $\nabla^2 G$ filter is as the difference of two Gaussians of different scales. The sum over $\nabla^2 G$ filters therefore equals the sum over differences of Gaussians (DOGs), in which all but the smallest- and largest-scale Gaussians are canceled [as illustrated in Fig. 3c)]. Formally, the sum of $\nabla^2 G$ over all scales yields

$$\lim_{t_1\to 0} G(x, y; t_1) - \lim_{T\to\infty} G(x, y; T) = \delta(x, y) - k, \qquad (4.12)$$

where $\delta(x, y)$ is a delta function, a Gaussian of infinitesimally small scale, and $k$ is a constant that may be normalized to zero. Equation (4.7) therefore becomes

$$\iint \delta(x - \xi, y - \eta)\,f(\xi, \eta)\,d\xi\,d\eta = f(x, y). \qquad (4.13)$$

The multiple-scales algorithm therefore filters the intensity signal $S(x, y)$ through $-\nabla^2 G$ (the DOG operator), thresholds the result, and sums it over a continuum of scales to recover $L(x, y)$. In practice, a discrete sum over fewer than 10 scales of $-\nabla^2 G$ yields a good approximation to the original function $f = T[S(x, y)]$.

Fig. 3. a) The Gaussian distribution in one dimension. b) The Laplacian of the Gaussian is approximately equal to the difference of two Gaussians of different scales. c) The sum over multiple scales of DOGs, yielding the difference of the smallest-scale Gaussian and the largest-scale Gaussian. Physiologically, the sum would yield a "lightness" neuron with a narrow receptive-field center and a large surround.
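The telescoping cancellation of the DOG sum can be checked numerically in one dimension. The sketch below is a discrete stand-in: repeated [1/4, 1/2, 1/4] smoothing (a discrete heat-equation flow) plays the role of Gaussian blurring at increasing scales, and the signal and scale list are hypothetical:

```python
# 1-D sketch of the telescoping DOG sum. Repeated [1/4, 1/2, 1/4]
# smoothing stands in for Gaussian blurring at increasing scales t;
# the signal and the discrete scales are hypothetical.
def blur(f, steps):
    g = list(f)
    for _ in range(steps):
        g = [0.25 * g[max(i - 1, 0)] + 0.5 * g[i]
             + 0.25 * g[min(i + 1, len(g) - 1)] for i in range(len(g))]
    return g

f = [0.0] * 32 + [1.0] * 32                   # a step edge
scales = [0, 2, 4, 8, 16, 32, 64, 128]        # fewer than 10 discrete scales
blurs = [blur(f, t) for t in scales]

# Summing DOGs between successive scales telescopes to the smallest-scale
# blur minus the largest-scale blur, as in Eq. (4.12):
recon = [sum(blurs[k][i] - blurs[k + 1][i] for k in range(len(scales) - 1))
         for i in range(len(f))]
residue = blurs[-1]                           # large-scale blur ~ constant k
assert max(abs(recon[i] - (f[i] - residue[i]))
           for i in range(len(f))) < 1e-9
```

All intermediate-scale Gaussians cancel exactly, leaving the original signal minus a slowly varying residue that plays the role of the constant k.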

D. Crick's Edge-Operator Algorithm

Crick's solution[6] to the lightness problem is formulated for a Mondrian under uniform illumination surrounded by a border of constant reflectance. Under these special conditions, the edge-operator formula recovers the lightness function solely from information at the edges between patches of constant reflectance. The formula is the two-dimensional analogue of Gauss's integral, which, for a function $F$ that is constant within patches and zero on the boundary, makes the following statement:

$$2\pi F(x, y) = \int_e dF\,\frac{\hat{\mathbf{n}} \cdot \hat{\mathbf{r}}}{r}\,ds, \qquad (4.14)$$

where $e$ indicates that the integral is performed over all edges in the image, $dF$ is the difference of $F$ across each edge (always taken in the decreasing direction), $\mathbf{n}$ is the normal to the contour $s$ along which the integral is evaluated, and $r$ is the distance from the point $(x, y)$ to the contour.

Lightness is obtained from Eq. (4.14) by setting $F = S$. This method of solution makes particularly clear the problem of accurate normalization of the computed lightness. Under the special condition that the illumination is uniform on the scene, its contribution to $dS$ at edges cancels, and the integral over edges recovers $\rho$. Because the difference at an edge is ultimately referred to the value of $\rho$ on the boundary, this value must be added to the result of the integral in order to normalize it. Therefore, as in Horn's algorithm, the condition that $\rho$ be constant on the boundary is crucial in obtaining an accurate result. Yet because of the lightness ambiguity discussed in Section 3, it is impossible to recover the boundary value of $\rho$ exactly: only the value of $S = E + \rho$ on the boundary is known. Because $E$ is assumed constant, the value of $S$ on the boundary may be used as the normalization term, ensuring that lightness will be everywhere proportional to reflectance by the same factor.
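A one-dimensional analogue (not Crick's two-dimensional integral itself) illustrates why edge information plus the boundary value suffices: a piecewise-constant reflectance is fully determined by its jumps and its border value. All values below are hypothetical:

```python
# 1-D analogue of the edge-only reconstruction: a piecewise-constant
# "Mondrian strip" is determined by its edge jumps plus its boundary
# value (hypothetical values, exactly representable in binary floats).
rho = [0.25] * 10 + [0.75] * 10 + [0.5] * 10 + [0.25] * 10
edges = [(x, rho[x + 1] - rho[x])                     # (position, jump)
         for x in range(len(rho) - 1) if rho[x + 1] != rho[x]]

level = rho[0]          # boundary value: must be known to normalize
recon = []
for x in range(len(rho)):
    recon.append(level)
    for e, jump in edges:
        if e == x:
            level += jump

assert recon == rho     # edges + boundary value recover the whole strip
```

Without the boundary value, the reconstruction is correct only up to a constant, which is precisely the normalization ambiguity discussed above.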

When a gradient of illumination falls on the scene, if it is slow enough, it has no effect on values of $dS$ at edges, but, if it falls on the boundary region, it violates the condition that $S$ be constant there, and therefore $\rho$ will be inaccurately recovered.

The distinguishing feature of Crick's formula for lightness is the use of lightness information from the edges only. In summing the contributions only from contours across which the intensity signal changes sharply, the formula performs the differentiation and thresholding of lightness algorithms in one step rather than two. It is clear that, in some sense, Land's retinex algorithm performs the same integral, because, when the threshold is appropriately set, only the edges contribute to the lightness calculation. The similarity


between these algorithms is made formally explicit in Section 5.
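The shared computational core of the retinex and Crick formulations, differentiate, threshold away slow changes, and reintegrate, can be sketched in one dimension. The signal values, gradient, and threshold below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

# Hypothetical 1-D "Mondrian": log reflectance p is piecewise constant,
# log illumination e is a slow linear gradient, and the sensor measures s = p + e.
x = np.arange(200)
p = np.where(x < 70, np.log(0.2), np.where(x < 140, np.log(0.8), np.log(0.4)))
e = 0.002 * x                      # slow illumination gradient (log domain)
s = p + e

# Differentiate, threshold away small (illumination) changes, reintegrate.
# Only the sharp reflectance edges survive the threshold.
ds = np.diff(s)
threshold = 0.01                   # assumed; must exceed the per-sample gradient step
edges_only = np.where(np.abs(ds) > threshold, ds, 0.0)
lightness = np.concatenate([[0.0], np.cumsum(edges_only)])

# The result equals log reflectance up to an unknown additive constant (the
# normalization ambiguity in the text) and a tiny illumination residue at edges.
offset = p[0] - lightness[0]
assert np.count_nonzero(edges_only) == 2
assert np.allclose(lightness + offset, p, atol=0.005)
```

The additive offset left undetermined here is exactly the normalization problem that the boundary value of S is used to resolve.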

E. Blake's Algorithm

Blake5 proposes a modification of Horn's lightness algorithm based on the following criticism, hinted at in the previous section: the use of the Green's function convolution in solving for L in Eq. (4.3) relies crucially on the condition that p is constant on the boundary, which is not always met. Blake argues that many images (Mondrian or natural) are bordered by surfaces of varying reflectance. For these images, the solution obtained by using Eq. (4.3) yields an inaccurate reconstruction of lightness.

The inaccuracy results from the fact that the Poisson equation requires only that ∇²L = T[∇²p] = 0 inside surface patches. Thus any harmonic function, e.g., a linear or sinusoidal function, in addition to a constant function, will solve the equation. A solution in which the reflectance function varies linearly across patches, for example, would be forced on data from a scene with a nonuniform boundary, in order to match it to inappropriate boundary conditions. If, instead, the condition is imposed only that T[∇²S] be zero on the boundary, then the solution to the lightness problem will not be unique; that is, both accurate and inaccurate reconstructions may be obtained.
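Blake's nonuniqueness point can be stated in one line: adding any harmonic function to a solution of the Poisson equation yields another solution, so weak boundary conditions cannot pin the answer down.

```latex
% Any harmonic h leaves the Poisson equation satisfied:
\nabla^2 (L + h) = \nabla^2 L + \nabla^2 h = T[\nabla^2 S] + 0 = T[\nabla^2 S],
\qquad \text{e.g. } h(x, y) = a x + b y + c .
```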

A stricter constraint restricting the lightness function to be constant within patches is more appropriate but would, in general, not admit of any solution when Horn's boundary conditions do not apply to the data.

In Blake's lightness algorithm, the problem of the boundary conditions is obviated by using the gradient operator ∇ instead of the Laplacian to perform the spatial-differentiation step. The reconstructed lightness function must therefore satisfy the equation

∇L = T[∇S] (4.15)

and is solved for by the path integral

L(x, y) = ∫_P T[∇S] · ds, (4.16)

where P is a path connecting (x0, y0), at which the value of L is arbitrarily defined, and (x, y). If

curl(T[∇S]) = 0,

as for a Mondrian, then the value of the integral depends only on (x0, y0) and (x, y) and is independent of the path taken between them. This method represents a formalization of Land's algorithm for a Mondrian.

In the discrete case, an iterative solution may be performed that is similar to Horn's but differs in the location of the threshold operation. It is clear that Blake's use of the gradient operator permits retention of information about gradients in the reflectance function, which in Horn's method is discarded.

5. FORMAL CONNECTIONS BETWEEN LIGHTNESS ALGORITHMS

The algorithms described in Section 4 employ different methods to follow similar steps in the computation of lightness. In this section, the algorithms are put into a single compact mathematical form that clarifies the similarities between them. The demonstration of formal equivalences between lightness algorithms allows their further evaluation to be based entirely on questions of implementation rather than computation.

A. Green's Theorem and Lightness Algorithms

The full solution to the lightness problem, as stated by the Poisson equation [Eq. (4.2)], is given by Green's theorem, which expresses the relationship between the surface and line integrals of a scalar function. The symmetric Green's formula is

∫∫_Σ (φ ∇²g − g ∇²φ) dΣ = ∮_C (φ ∂g/∂n − g ∂φ/∂n) ds, (5.1)

where φ is a scalar function defined in a region Σ enclosed by a contour C, ds is an infinitesimal element of the contour, and g is the chosen Green's function.27

This formula may be used to solve for lightness in Eq. (4.2) by making appropriate substitutions and by specifying appropriate boundary conditions: φ = L(ξ, η) and g = (1/2π) ln r, where r² = (ξ − x)² + (η − y)². A unique solution is obtained if one of two boundary conditions is met: in the region beyond Σ, either (1) L is uniform or (2) ∂L/∂n = 0.27

We may now derive the following identities:

(a) Because ∇² ln r is zero except at the origin (x, y), where its integral equals 2π (proved by using Green's theorem),

∫∫_Σ (φ ∇²g) dΣ = (1/2π) ∫∫_Σ L(ξ, η) (∇² ln r) dΣ = L(x, y).

(b) Because, under the boundary conditions, ∂L/∂n = 0 outside Σ,

∮_C (g ∂φ/∂n) ds = (1/2π) ∮_C (ln r) ∂L(ξ, η)/∂n ds = 0.

Therefore the full solution to the lightness problem under the boundary conditions specified above is

L(x, y) = (1/2π) ∫∫_Σ (ln r) ∇²L(ξ, η) dξ dη + (1/2π) ∮_C L(ξ, η) (n · r/r²) ds

= Term 1 + Term 2, (5.2)

where C is a closed contour chosen to lie in the boundary region beyond any edges in the image so that L is defined along it, and n is the normal to C.

The following discussion shows that Term 1, which sums a local spatial derivative over the image region, yields the three different methods of computing lightness found in Land's, Horn's, and Crick's algorithms. Term 2, which depends on the value of lightness in the boundary region, normalizes the lightness computed by Term 1. If the lightness on the boundary is specified exactly, and ∇²L in the image region is provided as data, then the formula yields a unique solution to the lightness problem.
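The uniqueness claim, that the Laplacian in the interior plus exact boundary lightness determines L, can be checked numerically with a discrete Poisson solve. The grid size, test function, and Jacobi iteration below are illustrative choices, not part of the original algorithms:

```python
import numpy as np

# Build an arbitrary smooth "lightness" field and take its discrete Laplacian.
n = 32
yy, xx = np.mgrid[0:n, 0:n] / (n - 1.0)
L_true = np.sin(2 * np.pi * xx) * yy + 0.5 * xx

lap = np.zeros((n, n))
lap[1:-1, 1:-1] = (L_true[:-2, 1:-1] + L_true[2:, 1:-1] +
                   L_true[1:-1, :-2] + L_true[1:-1, 2:] - 4 * L_true[1:-1, 1:-1])

# Jacobi iteration: interior driven by the Laplacian data, boundary clamped
# to the exact boundary values (the role played by Term 2).
L = np.zeros((n, n))
L[0, :], L[-1, :] = L_true[0, :], L_true[-1, :]
L[:, 0], L[:, -1] = L_true[:, 0], L_true[:, -1]
for _ in range(20000):
    L[1:-1, 1:-1] = 0.25 * (L[:-2, 1:-1] + L[2:, 1:-1] +
                            L[1:-1, :-2] + L[1:-1, 2:] - lap[1:-1, 1:-1])

# The interior is recovered uniquely from Laplacian + boundary data.
assert float(np.max(np.abs(L - L_true))) < 1e-4
```

With wrong boundary values the same iteration converges to a different (harmonically shifted) field, which is Blake's objection in numerical form.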

B. The Normalization Term

In any application of this formula, one of the two boundary conditions listed above must be met. If L(ξ, η) is uniform in the boundary region, then Term 2 is constant and, if known, may be reset arbitrarily simply to offset lightness. If L varies along C, so that ∂L(ξ, η)/∂n = 0 on C, then Term 2 will


vary for each (x, y). In this case, if Term 2 is arbitrarily set to a constant, the reconstructed lightness function, in general, will not resemble the true reflectance function. This point is effectively the same one that Blake makes, using a different argument (see Section 4).

In Land's normalization scheme, discussed below, L(ξ, η) is allowed to vary along C, but in such a way that Term 2 is, nevertheless, approximately equal for each (x, y).

C. Horn's Method

Term 1 is exactly Horn's solution to the lightness problem under boundary condition 1. In Horn's normalization scheme, Term 2 is set arbitrarily to a constant (zero).

D. Crick's Method

Crick's solution to the lightness problem is obtained by integrating Term 1 by parts under boundary condition 1. For a general (non-Mondrian) two-dimensional image, this technique yields

Term 1 = (1/2π) ∫∫_{S−E} (ln r) ∇²L(ξ, η) dξ dη + (1/2π) ∮_E ΔL(ξ, η) (n · r/r²) ds, (5.3)

where E labels the contours that correspond to intensity edges and S − E labels the remaining surface on which the intensity function is smooth. On a Mondrian image, the integral over S − E vanishes because ∇²L is zero everywhere except at borders between patches. The integral over edges is exactly Crick's edge-operator formula. Term 2 of Eq. (5.2) again provides the essential normalization constant. As discussed in Section 4, this term effectively adds to the first term the constant value of lightness on the boundary to which the lightness values in the image have been referred.

For a single uniform patch on a uniform background, the integral over S − E again vanishes, and the lightness solution becomes

L(x, y) = (1/2π) ∮_C ΔL(ξ, η) (n · r/r²) ds + (1/2π) ∮_C L_C(ξ, η) (n · r/r²) ds, (5.4)

where the integral is evaluated around the one contour that encloses the point (x, y), ΔL(ξ, η) is the difference in L taken from the inside to the outside across the contour, and L_C(ξ, η) is the value of L on the boundary. Thus the equation reduces to

L(x, y) = (1/2π) ∮_C L_I (n · r/r²) ds, (5.5)

where L_I is the value of L anywhere inside the contour. Equation (5.5) is Gauss's integral.
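The reduction to Eq. (5.5) rests on Gauss's result that the solid-angle element (n · r/r²) ds integrates to 2π around any interior point. A quick numeric check, with a unit circle and an off-center interior point both assumed for illustration:

```python
import numpy as np

# Numeric check of Gauss's integral: around a closed contour, the element
# (n . r / r^2) ds integrates to 2*pi for any point inside the contour.
theta = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)
ds = 2 * np.pi / len(theta)              # arc-length element on the unit circle
cx, cy = 0.3, -0.2                       # interior point (x, y), off-center
px, py = np.cos(theta), np.sin(theta)    # points on the contour
nx, ny = np.cos(theta), np.sin(theta)    # outward normal of the unit circle
rx, ry = px - cx, py - cy                # vector from (x, y) to the contour
integrand = (nx * rx + ny * ry) / (rx**2 + ry**2)
total = np.sum(integrand * ds)

assert abs(total - 2 * np.pi) < 1e-3
```

With L constant at L_I inside the contour, multiplying this integrand by L_I and dividing by 2π returns L_I, which is Eq. (5.5).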

E. Land's Method

Land's retinex formula may also be derived from Term 1 of Eq. (5.2) by writing ∇² in polar coordinates and again integrating by parts. Term 1 then becomes

Term 1 = −(1/2π) ∫₀^{2π} dθ ∫₀^{R_θ} (dL/dr) dr = (1/2π) ∫₀^{2π} [L(x, y) − L_{R_θ}] dθ, (5.6)

where R_θ is the radius to the edge of the region S at angle θ, when (x, y) is taken as the origin. In deriving Eq. (5.6), boundary condition 2 is used to make the assumption that, as r → ∞, ln r approaches ∞ much more slowly than dL/dr approaches zero.

Written in polar coordinates, Term 2 becomes

L_N(x, y) ≡ (1/2π) ∫₀^{2π} L(R_θ, θ) dθ = (1/2π) ∫₀^{2π} L_{R_θ} dθ.

The sum of Terms 1 and 2 thus yields L(x, y) trivially. The first part of Eq. (5.6) is a new formal expression of

Land's method: L(x, y) is given by integrating the radial gradient of L along a radial path, starting at (x, y) and ending at the edge of the image, and then summing the integral over all paths. In the actual implementation of the method, the gradient is sampled and thresholded at discrete intervals, summed over each path, and then averaged over a finite number of paths. Each path starts at (x, y) but may end anywhere in the image. Thus the result is the lightness at (x, y) relative to the average value of lightness in the entire image.

The retinex algorithm does not normalize this result by adding to it the average value of lightness in the image but instead relies on lightness assumption 3, which states that the average value of lightness is invariant for all scenes. Lightness is therefore simply offset by a constant.

In effect, the retinex algorithm assumes that the lightness averaged over the endpoints of paths is equal to L_N, the lightness averaged over the contour enclosing the image. It is important to note that each of these averages is weighted: the contribution of L_{R_θ} to L_N is weighted by the angle θ subtended by its part of the contour at the origin (x, y), and the contribution of L(r, θ) to the retinex average is weighted by the number of endpoints falling on it. If the image contains a fully random collection of colors, and a large number of paths is chosen (>200), then for most points in the image the two normalization terms should be approximately equal. (Land's "alternative" retinex method for computing lightness, recently published,28 specifically normalizes the flux at a given point with a similar weighted average over the entire field.)
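The claim that a few hundred paths suffice can be illustrated with a toy calculation: the average lightness over randomly chosen endpoints is a Monte Carlo estimate of the scene average, with standard error shrinking as 1/√n_paths. The Mondrian values and counts below are invented for illustration:

```python
import numpy as np

# Sketch of the retinex normalization assumption: with many random paths, the
# average lightness over path endpoints approximates the scene-wide average.
rng = np.random.default_rng(0)
patch_lightness = rng.uniform(0.0, 1.0, size=1000)   # a random "Mondrian"
true_mean = patch_lightness.mean()

n_paths = 400                                        # text suggests > 200 paths
endpoints = rng.integers(0, patch_lightness.size, size=n_paths)
retinex_mean = patch_lightness[endpoints].mean()

# Standard error ~ sigma / sqrt(n_paths) ~ 0.29 / 20 ~ 0.015, so 0.1 is generous.
assert abs(retinex_mean - true_mean) < 0.1
```

The gray-world caveat in the text appears here as the requirement that the patch values be "fully random": a biased Mondrian biases both averages.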

F. Spatial Integration and Temporal Iteration

The multiple-scales solution and Horn's iterative solution to the inverse Laplacian are formally equivalent. This statement is proved by putting each into the form required for Liouville-Neumann substitution,

f = F + Hf,

which is solved by making successive substitutions of (F + Hf) for f in the term Hf, yielding the series

f^(n) = F + HF + ... + H^(n-1)F,

where F (a function) and H (a matrix or an operator) are known. In the multiple-scales algorithm, F = Δt T[∇²S], f = L, and H = G(Δt), where G(Δt) is the Gaussian kernel of scale Δt. The multiple-scales integral may then be expressed as a series, the nth term of which is

L^n(x, y) = Δt {L″(x, y) + G(Δt) * L″(ξ, η) + G^2(Δt) * L″(ξ, η) + ... + G^(n-1)(Δt) * L″(ξ, η)}, (5.7)

where L″ = T[∇²S] and * indicates the convolution.
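The Liouville-Neumann construction itself is easy to verify numerically: for a contractive H, the partial sums F + HF + ... + H^(n-1)F converge to the exact solution of f = F + Hf. The matrix H below is a generic local-averaging operator scaled to be a contraction, an illustrative stand-in for G(Δt), not Horn's or the multiple-scales operator itself:

```python
import numpy as np

# Neumann-series sketch: successive substitution solves f = F + H f
# when ||H|| < 1, converging to (I - H)^(-1) F.
rng = np.random.default_rng(1)
m = 50
avg = np.zeros((m, m))
for i in range(m):                      # local 3-point averaging operator
    for j in (i - 1, i, i + 1):
        if 0 <= j < m:
            avg[i, j] = 1.0 / 3.0
H = 0.9 * avg                           # contraction: row sums <= 0.9
F = rng.standard_normal(m)

exact = np.linalg.solve(np.eye(m) - H, F)
f_n, term = np.zeros(m), F.copy()
for _ in range(200):                    # partial sums F + HF + H^2 F + ...
    f_n += term
    term = H @ term

assert float(np.max(np.abs(f_n - exact))) < 1e-6
```

Repeated application of the local operator spreads information across the whole field, which is the point made below about long-range effects.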


Horn's iterative solution may be expressed as a series, the nth term of which is

L_ij^n = (1/2){L″_ij + G * L″_ij + G^2 * L″_ij + ... + G^(n-1) * L″_ij}, (5.8)

where G is the discrete approximation to the Gaussian in matrix form and L″_ij is T[∇²S_ij], as defined in Eq. (4.4).

The two series are therefore equal to within a multiplicative constant and can be shown to converge quickly to the same terms and solution. The operator G (or matrix G) grows with each iteration to involve the entire field fully and thereby mediates the long-range effects represented explicitly by Land's path.

6. IMPLICATIONS FOR THE PHYSIOLOGICAL COMPUTATION OF COLOR

The lightness algorithms discussed here are successful in recovering lightness as humans perceive it when implemented by machines under the restrictions described above. Whether humans recover lightness by implementing a similar algorithm is an open question. From the computational point of view it is unlikely that the human visual system uses one of the retinex-type algorithms discussed here to compute color in a three-dimensional world in which shape, shading, specularities, and shadows confound the intensity signal. Yet it is clear that the human visual system performs a similar lightness computation, and therefore it is pertinent to discuss briefly the distinct physiological implementations suggested by the formally equivalent lightness algorithms.

The first critical step is to test the three lightness assumptions: Is the gray-world assumption satisfied in most natural images, and, if not, does the human perception of lightness vary with the average reflectance of the scene? Are most naturally occurring illumination gradients slow and smooth? Can the retinal image of a three-dimensional scene be segmented into pieces that can be treated as two-dimensional Mondrians? If so, how are the boundary conditions for each Mondrian satisfied?29 Similar questions have been addressed by other researchers, and these questions are now being explored in this laboratory.

The two basic operations in lightness algorithms, spatial differentiation and integration, imply two basic types of operators that the human visual system should possess if it does implement a lightness algorithm. In the implementation of the lightness algorithm the two steps acquire crucially different characters: differentiation is a local process that mediates spatial decomposition of the intensity signal by separating illumination gradients from reflectance changes, and integration is a global process that mediates spectral normalization by averaging the reflectance over a large portion of the visual field. The local operator must therefore take the difference in light intensity between nearby parts of the image, whereas the global operator must sum light intensity from virtually all parts of the image.

A. Spatial Differentiation

The local operator may in turn be of two types: the directional gradient operator that responds best when the direction of maximum intensity change coincides with its preferred direction (Land, Crick, Blake) and a nondirectional Laplacian operator with a circularly symmetric receptive field (Horn). The formal argument in the preceding sections shows that each local operator is effectively weighted by its distance from the point at which lightness is evaluated.

Fig. 4. An intensity edge convolved with a DOG filter, yielding a strong positive result on one side of the edge and a strong negative result on the other.

As Fig. 4 illustrates, the Laplacian (convolved with a Gaussian, see later), operating at an edge, gives a strong positive result on one side and an equally strong negative result on the other. The sum of the two weighted signals is of the order of A(Δr/r0), where A is the amplitude of the signal on either side, r0 is the distance to the nearest side, and Δr is the additional distance to the far side. Thus, for large r, the signals sent by Laplacians on either side of an edge tend to cancel each other at (x, y). That is, if the sum is taken locally and then sent to (x, y), it will tend to attenuate over the long distance, and, similarly, if the two weighted signals are sent independently and then summed, fluctuations over the distance will tend to equalize them. Nearby edges, for which Δr is a significant fraction of r, will therefore contribute most to the lightness.
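The order-of-magnitude estimate A(Δr/r0) follows directly from the ln r weighting of Term 1: the paired opposite-signed responses contribute A[ln(r0 + Δr) − ln r0], which tends to A Δr/r0. A short numeric check (values arbitrary):

```python
import numpy as np

# Opposite-signed Laplacian responses at distances r0 and r0 + dr are weighted
# by ln r, so their sum is A*(ln(r0 + dr) - ln r0) ~ A*dr/r0 as r0 grows.
A, dr = 1.0, 0.5
for r0 in (2.0, 20.0, 200.0):
    summed = A * (np.log(r0 + dr) - np.log(r0))
    estimate = A * dr / r0
    print(f"r0={r0:6.1f}  paired sum={summed:.6f}  A*dr/r0={estimate:.6f}")

# At r0 = 200 the two agree to a few parts in a thousand.
assert abs((np.log(200.5) - np.log(200.0)) / (0.5 / 200.0) - 1.0) < 2e-3
```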

The gradient operator, in contrast to the Laplacian, takes the difference in intensity signals across an edge and thus sends a large, robust signal to (x, y). But this signal is weighted by 1/r, which falls off rapidly for points far away from (x, y). Beyond a critical distance r, additional contributions from the gradient operator will therefore not change the lightness at (x, y) significantly, so, again, nearby edges will contribute most.

Both local operators thus imply an upper limit on the necessary length of cell-to-cell connections. Interestingly, weighting of nearby edges may offer a partial explanation of the phenomenon of simultaneous contrast, which has elsewhere been used as a counterexample to Land's original retinex algorithm.

The recent discovery of double-opponent cells in primate V1, V2, and V4 points to the Laplacian operator as the most likely candidate in the physiological computation of color.30

The double-opponent cell, as do other center-surround cells, performs not the Laplacian of the formal statement of Horn's algorithm but the DOG operation, equivalent to smearing the intensity signal through a Gaussian filter before applying the Laplacian. In this respect, double-opponent cells perform exactly the same operation as in Horn's iterative implementation and in the multiple-scales algorithm, as Eqs. (4.8) and (5.8) make clear. In fact, the application of the Gaussian serves an essential computational purpose in eliminating noise from the intensity signal and thereby stabilizing the implementation.
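The DOG-as-smoothed-Laplacian point can be sketched in one dimension: a difference of two Gaussians applied to a noisy step localizes the edge while suppressing the noise. The scales, noise level, and step values are assumed for illustration:

```python
import numpy as np

def gaussian(x, s):
    g = np.exp(-x**2 / (2 * s**2))
    return g / g.sum()

# Noisy step edge at index 100 (signal values and noise level are invented).
rng = np.random.default_rng(2)
signal = np.where(np.arange(201) < 100, 0.2, 0.8) + 0.01 * rng.standard_normal(201)

# DOG filter: narrow center minus broader surround, as in a center-surround cell.
x = np.arange(-50, 51, dtype=float)
dog = gaussian(x, 2.0) - gaussian(x, 3.2)
response = np.convolve(signal, dog, mode="same")

# Away from the zero-padded borders, the strongest absolute response straddles
# the edge: the Gaussian smoothing keeps the noise from masking it.
interior = response[50:150]
peak = 50 + int(np.argmax(np.abs(interior)))
assert abs(peak - 100) <= 4
```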


B. Spatial Integration

The different lightness algorithms predict long-range connections of different types. Horn's iterative method for integrating the Laplacian could be implemented, for example, by an array of interconnected center-surround cells, the outputs of which feed back into and are propagated across the array until a stable solution is reached, without explicit long-range connections. The sum over the responses of gradient operators, in contrast, would be difficult to achieve without explicit long-range connecting axons or dendrites.

The multiple-scales method for integrating the Laplacian suggests a third way to implement the global process of spectral normalization. Rather than perform an iteration over time, as in Horn's method, in the multiple-scales method center-surround cells perform an integration over space, again without explicit long-range connections. The sum is taken over a collection of center-surround cells of different sizes covering the same central point in the visual field. If the multiple scales are collected in a single anatomical area, the cells with larger receptive fields would necessarily be built up by cells with smaller receptive fields from stages lower in the visual pathway. Alternatively, multiple scales may exist at different anatomical levels in the visual pathway; each successive scale could be built up by filtering the one at the previous stage through a Gaussian.
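The cascade idea, each stage filtering the previous one through a Gaussian, works because Gaussian variances add under convolution: filtering a scale-s kernel with another scale-s kernel yields a kernel of scale s√2. A small check (scales arbitrary):

```python
import numpy as np

def gaussian(x, s):
    g = np.exp(-x**2 / (2 * s**2))
    return g / g.sum()

# Convolving a Gaussian of scale 3 with itself gives a Gaussian of scale
# 3*sqrt(2): each anatomical stage could derive its receptive fields by
# filtering the previous stage once more.
x = np.arange(-60, 61, dtype=float)
g1 = gaussian(x, 3.0)
g2 = np.convolve(g1, g1, mode="same")      # one more filtering step

# Measure the scale of the result from its second moment.
var = np.sum(x**2 * g2) / np.sum(g2)
assert abs(np.sqrt(var) - 3.0 * np.sqrt(2.0)) < 0.05
```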

The final-stage cell, or lightness neuron, for each lightness algorithm would be expected to have a relatively narrow center and a large, shallow surround, a nonclassical receptive field31 (see Fig. 3). Recently, Zeki32,33 has reported neurons in V4 that appear to have the specific characteristics of lightness (or color) neurons. He describes two types: "wavelength-sensitive color-only" cells that respond to monochromatic light of a preferred wavelength, and to patches of the preferred color only within a Mondrian under a range of illuminants, and "color-only" cells that do not respond to monochromatic light but do respond to patches of a preferred color within a Mondrian under a range of illuminants. Both cells are reported to have large receptive fields, but the extent of the surround necessary to evoke (for color-only cells) or influence (for wavelength-sensitive color-only cells) the response has not been determined.

ACKNOWLEDGMENTS

I thank Shimon Ullman, Alan Yuille, and especially Francis Crick for helpful discussions. I am also grateful to Tomaso Poggio for his critical comments and reading of the manuscript. Thanks also go to Elizabeth Willey for juggling the footnotes and text. Anya Hurlbert was supported in part by a Poitras fellowship and in part by a grant from the U.S. Office of Naval Research Psychological and Engineering Division to the Center for Biological Information Processing at Whitaker College.

REFERENCES AND NOTES

1. E. H. Land, "Color vision and the natural image," Proc. Natl. Acad. Sci. USA 45, 115-129 (1959).

2. E. H. Land and J. J. McCann, "Lightness and retinex theory," J. Opt. Soc. Am. 61, 1-11 (1971).

3. E. H. Land, "Recent advances in retinex theory and some implications for cortical computations: colour vision and the natural image," Proc. Natl. Acad. Sci. USA 80, 5163-5169 (1983).

4. B. K. P. Horn, "On lightness," MIT Artificial Intelligence Lab. Memo 295 (Massachusetts Institute of Technology, Cambridge, Mass., 1974).

5. A. Blake, "On lightness computation in Mondrian world," in Central and Peripheral Mechanisms of Colour Vision, T. Ottoson and S. Zeki, eds. (Macmillan, New York, 1985), pp. 45-49.

6. F. C. Crick, The Salk Institute, La Jolla, Calif. (personal communications).

7. A. Hurlbert, "Color computation in the visual system," MIT Artificial Intelligence Lab. Memo 814 (Massachusetts Institute of Technology, Cambridge, Mass., 1986).

8. T. Poggio, "MIT progress in understanding images," in Proceedings of the Image Understanding Workshop, L. Baumann, ed. (Science Applications International Corporation, Miami Beach, Fla., 1985), pp. 25-39.

9. L. T. Maloney and B. Wandell, "Color constancy: a method for recovering surface spectral reflectance," J. Opt. Soc. Am. A 3, 29-33 (1986).

10. L. T. Maloney, "Computational approaches to color constancy," Stanford U. Tech. Rep. 1985-01 (Stanford University, Stanford, Calif., 1985).

11. J. Rubin and W. Richards, "Colour vision and image intensities: when are changes material?" Biol. Cybern. 45, 215-226 (1982).

12. J. Rubin and W. Richards, "Colour vision: representing material categories," MIT Artificial Intelligence Lab. Memo 764 (Massachusetts Institute of Technology, Cambridge, Mass., 1984).

13. A. L. Yuille, "A method for computing spectral reflectance," MIT Artificial Intelligence Lab. Memo 752 (Massachusetts Institute of Technology, Cambridge, Mass., 1984).

14. The ambient illumination, in general, includes the direct source illumination and the diffusely reflected illumination from the surfaces in the scene. Here the diffuse surface reflections will be ignored.

15. B. K. P. Horn, "Understanding image intensities," Artif. Intell. 8, 201-231 (1977).

16. B. K. P. Horn, Robot Vision (McGraw-Hill, New York, 1985).

17. Other forms of the intensity equation more complex than Eq. (2.1) have been proposed, containing, for example, additional terms that describe the contribution from specular reflections.18-20 Equation (2.1) is sufficient for its uses in this paper.

18. R. L. Cook and K. E. Torrance, "A reflectance model for computer graphics," ACM Trans. Graphics 1, 7-24 (1982).

19. H.-C. Lee, "Computing the scene illuminant color from specular highlight," Eastman Kodak Res. Lab. preprint (Eastman Kodak, Inc., Rochester, N.Y., 1986).

20. S. A. Shafer, "Using color to separate reflection components," U. Rochester Tech. Rep. 136 (University of Rochester, Rochester, N.Y., 1984).

21. Land illustrates the arbitrary relationship between lightness and reflectance with a photograph of a Mondrian under a simple illumination gradient: Although the surface reflectance of a photographed patch is a product of the Mondrian patch reflectance and the Mondrian illumination and so is nonuniform, humans and lightness algorithms interpret it as uniform. This effect follows from the lightness assumption that all slow changes in the intensity signal are due to the source illumination, and our perception of the photographed Mondrian indicates that the assumption holds true to some extent.

22. E. H. Land, "Recent advances in retinex theory," in Central and Peripheral Mechanisms of Colour Vision, T. Ottoson and S. Zeki, eds. (Macmillan, New York, 1985), pp. 5-17.

23. In the original retinex algorithm,2 spectral normalization is achieved by a different method, assigning 100% reflectance to the brightest patch in each wavelength channel.

24. The superscript i is from now on dropped in the expressions Li(x, y), Si(x, y), etc., although it is still there by implication.

25. In Horn's original formulation, the Green's function used is (1/2π) ln(1/r), which lacks the minus sign that makes it equal to the one used here.

26. A similar equation and result have been independently obtained by Zucker and Hummel,34 although not applied to lightness computation.

27. A. Kyrala, Applied Functions of a Complex Variable (Wiley-Interscience, New York, 1972).

28. E. H. Land, "An alternative technique for the computation of the designator in the retinex theory of color vision," Proc. Natl. Acad. Sci. USA 83, 3078-3080 (1986).

29. One speculation is that the fovea or a larger cone-dense area "captures" each Mondrian and the rods in the periphery take a spatial average over its boundary, sensitive to the overall level of illumination.

30. M. S. Livingstone and D. H. Hubel, "Anatomy and physiology of a color system in the primate visual cortex," J. Neurosci. 4, 309-356 (1984), and references therein.

31. J. Allman, F. Miezin, and E. McGuinness, "Direction- and velocity-specific responses from beyond the classical receptive field in the middle temporal visual area (MT)," Perception 14, 105-126 (1985).

32. S. M. Zeki, "Colour coding in the cerebral cortex: the reaction of cells in monkey visual cortex to wavelengths and colours," Neuroscience 9, 741-765 (1983).

33. S. M. Zeki, "The distribution of wavelength and orientation selective cells in different areas of monkey visual cortex," Proc. R. Soc. London Ser. B 217, 449-470 (1983).

34. S. W. Zucker and R. A. Hummel, "Receptive fields and the reconstruction of visual information," McGill U. Comput. Vis. Robotics Lab. Tech. Rep. 83-17 (McGill University, Montreal, Quebec, Canada, 1983).
