Chromatic Framework for Vision in Bad Weather

Srinivasa G. Narasimhan and Shree K. Nayar
Department of Computer Science, Columbia University
New York, New York 10027
Email: {srinivas, nayar}@cs.columbia.edu

Abstract

Conventional vision systems are designed to perform in clear weather. However, any outdoor vision system is incomplete without mechanisms that guarantee satisfactory performance under poor weather conditions. It is known that the atmosphere can significantly alter light energy reaching an observer. Therefore, atmospheric scattering models must be used to make vision systems robust in bad weather. In this paper, we develop a geometric framework for analyzing the chromatic effects of atmospheric scattering. First, we study a simple color model for atmospheric scattering and verify it for fog and haze. Then, based on the physics of scattering, we derive several geometric constraints on scene color changes caused by varying atmospheric conditions. Finally, using these constraints we develop algorithms for computing fog or haze color, depth segmentation, extracting three-dimensional structure, and recovering "true" scene colors, from two or more images taken under different but unknown weather conditions.

1 Vision and Bad Weather

Current vision algorithms assume that the radiance from a scene point reaches the observer unaltered. However, it is well known from atmospheric physics that the atmosphere scatters light energy radiating from scene points. Ultimately, vision systems must deal with realistic atmospheric conditions to be effective outdoors. Several models describing the visual manifestations of the atmosphere can be found in atmospheric optics (see [Mid52], [McC75]). These models can be exploited to not only remove bad weather effects, but also to recover valuable scene information.

Surprisingly, little work has been done in computer vision on weather-related issues. Cozman and Krotkov [CK97] computed depth cues from iso-intensity points. Nayar and Narasimhan [NN99] used well-established atmospheric scattering models, namely, attenuation and airlight, to extract complete scene structure from one or two images, irrespective of scene radiances. They also proposed a dichromatic atmospheric scattering model that describes the dependence of atmospheric scattering on wavelength. However, the algorithm they developed to recover structure using this model requires a clear day image of the scene.

(This work was supported in part by a DARPA/ONR MURI Grant (N00014-95-1-0601), an NSF National Young Investigator Award, and a David and Lucile Packard Fellowship.)

In this paper, we develop a general chromatic framework for the analysis of images taken under poor weather conditions. The wide spectrum of atmospheric particles makes a general study of vision in bad weather hard. So, we limit ourselves to weather conditions that result from fog and haze. We begin by describing the key mechanisms of scattering. Next, we analyze the dichromatic model proposed in [NN99], and experimentally verify it for fog and haze. Then, we derive several useful geometric constraints on scene color changes due to different but unknown atmospheric conditions. Finally, we develop algorithms to compute fog or haze color, to construct depth maps of arbitrary scenes, and to recover scene colors as they would appear on a clear day. All of our methods only require images of the scene taken under two or more poor weather conditions, and not a clear day image of the scene.

2 Mechanisms of Scattering

The interactions of light with the atmosphere can be broadly classified into three categories, namely, scattering, absorption and emission. Of these, scattering due to suspended atmospheric particles is most pertinent to us. For a detailed treatment of the scattering patterns and their relationship to particle shapes and sizes, we refer the reader to the works of [Mid52] and [Hul57]. Here, we focus on the two fundamental scattering phenomena, namely, airlight and attenuation, which form the basis of our framework.

2.1 Airlight

While observing an extensive landscape, we quickly notice that the scene points appear progressively lighter as our attention shifts from the foreground toward the horizon. This phenomenon, known as airlight (see [Kos24]), results from the scattering of environmental light toward the observer, by the atmospheric particles within the observer's cone of vision.


The radiance of airlight increases with pathlength d and is given by (see [McC75] and [NN99]),

L(d, \lambda) = L_\infty(\lambda) \left( 1 - e^{-\beta(\lambda) d} \right).    (1)

\beta(\lambda) is called the total scattering coefficient; it represents the ability of a volume to scatter flux of a given wavelength \lambda in all directions. \beta(\lambda) d is called the optical thickness for the pathlength d. L_\infty(\lambda) is known as the "horizon" radiance; more precisely, it is the radiance of the airlight for an infinite pathlength. As expected, the airlight at the observer (d = 0) is zero.

Assuming a camera with a linear radiometric response, the image irradiance due to airlight can be written as E(d, \lambda) = g L_\infty(\lambda) (1 - e^{-\beta(\lambda) d}), where g accounts for the camera parameters. Substituting

E_\infty(\lambda) = g \, L_\infty(\lambda),    (2)

we obtain

E(d, \lambda) = E_\infty(\lambda) \left( 1 - e^{-\beta(\lambda) d} \right).    (3)
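As a quick numerical illustration of (3) (a sketch with invented values of β and E∞, not code from the paper), the airlight term vanishes at the observer and saturates to the horizon brightness with distance:

```python
import math

def airlight(d, beta, e_inf):
    """Image irradiance due to airlight, eq. (3): E(d) = E_inf * (1 - exp(-beta*d))."""
    return e_inf * (1.0 - math.exp(-beta * d))

# Hypothetical values: beta = 0.5 per km, horizon brightness E_inf = 200.
print(airlight(0.0, 0.5, 200.0))    # 0.0 -- no airlight at the observer
print(airlight(2.0, 0.5, 200.0))    # grows with pathlength
print(airlight(50.0, 0.5, 200.0))   # ~200.0 -- approaches horizon brightness
```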

2.2 Attenuation

As a light beam travels from a scene point through the atmosphere, it gets attenuated due to scattering by atmospheric particles. The attenuated flux that reaches an observer from a scene point is termed the direct transmission [McC75]. The direct transmission for collimated light beams is given by Bouguer's exponential law [Bou30]:

E(d, \lambda) = g \, L_0(\lambda) \, e^{-\beta(\lambda) d},    (4)

where E(d, \lambda) is the attenuated irradiance at the observer, and L_0(\lambda) is the radiance of the scene point prior to attenuation. Again, g accounts for the camera parameters. Allard's law [All76] modifies the above model for divergent light beams from point sources as

E(d, \lambda) = g \, I_0(\lambda) \, \frac{e^{-\beta(\lambda) d}}{d^2},    (5)

where I_0(\lambda) is the radiant intensity of the point source. In the subsequent sections, we use the terms "attenuation model" and "direct transmission model" interchangeably.

2.3 Overcast Sky Illumination

Allard's attenuation model is in terms of the radiant intensity of a point source. This formulation does not take into account the sky illumination and its reflection by scene points. We make two simplifying assumptions regarding the illumination received by a scene point. Then, we reformulate the attenuation model in terms of sky illumination and the BRDF of scene points.

Usually, the sky is overcast under foggy conditions. So we use the overcast sky model for environmental illumination [GC66][MS42]. We also assume that the irradiance at each scene point is dominated by the radiance of the sky, and that the irradiance due to other scene points is not significant. In Appendix A, we show that the attenuated irradiance at the observer is given by,

E(d, \lambda) = g \, L_\infty(\lambda) \, \eta(\lambda) \, \frac{e^{-\beta(\lambda) d}}{d^2},    (6)

where L_\infty(\lambda) is the horizon radiance, and \eta(\lambda) represents the sky aperture (the cone of sky visible from a scene point) and the reflectance of the scene point in the direction of the viewer. From (2), we have

E(d, \lambda) = E_\infty(\lambda) \, \eta(\lambda) \, \frac{e^{-\beta(\lambda) d}}{d^2}.    (7)

The above expression for the direct transmission of a scene point includes the effects of sky illumination and the reflectance of the scene point. In the remainder of the paper, we refer to (7) as the direct transmission model.
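For contrast with the airlight sketch above, the direct transmission model (7) can be evaluated just as simply (again with invented numbers; η lumps sky aperture and reflectance as in the text):

```python
import math

def direct_transmission(d, beta, e_inf, eta):
    """Direct transmission model, eq. (7): E = E_inf * eta * exp(-beta*d) / d^2."""
    return e_inf * eta * math.exp(-beta * d) / (d * d)

# Hypothetical values: the signal from a scene point decays both with
# pathlength (exponentially and by 1/d^2) and with a denser atmosphere.
print(direct_transmission(1.0, 0.5, 200.0, 0.3))
print(direct_transmission(2.0, 0.5, 200.0, 0.3))   # smaller: farther away
print(direct_transmission(2.0, 1.5, 200.0, 0.3))   # smaller still: denser fog
```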

3 Dichromatic Model

Hitherto, we have described attenuation and airlight separately. At night, there can be no airlight (since there is no environmental illumination) and hence attenuation dominates. In contrast, under dense fog or haze during daylight, the radiance from a scene point is severely attenuated and hence airlight dominates. However, in most situations the effects of both attenuation and airlight coexist. Here, we discuss the chromatic effects of atmospheric scattering that include both attenuation and airlight.

Nayar and Narasimhan [NN99] derived a color model for atmospheric scattering called the dichromatic atmospheric scattering model. It states that the color of a scene point under bad weather is a linear combination of the direct transmission color (as seen on a clear day, when there is minimal atmospheric scattering), and airlight color (fog or haze color).

Figure 1 illustrates the dichromatic model in R-G-B color space. Let E be the observed color vector for a scene point P, on a foggy or hazy day. Let the unit vector D̂ represent the direction of direct transmission color of P, and let the unit vector Â represent the direction of airlight color. Then, we can write

\mathbf{E} = p \hat{\mathbf{D}} + q \hat{\mathbf{A}},    (8)

where p is the magnitude of direct transmission, and q is the magnitude of airlight of P.

Figure 1: Dichromatic atmospheric scattering model. The color E of a scene point on a foggy or hazy day is a linear combination of the direction D̂ of direct transmission color, and the direction Â of airlight color.

For the visible light spectrum, the relationship between the scattering coefficient \beta and the wavelength \lambda is given by Rayleigh's law:

\beta(\lambda) \propto \frac{1}{\lambda^\gamma},    (9)

where 0 \le \gamma \le 4. Fortunately, for fog and haze, \gamma \approx 0 (see [Mid52], [McC75]). In these cases, \beta does not depend on wavelength. So, we drop the parameter \lambda from the airlight model in (3) and the direct transmission model in (7). Then, we can write the coefficients p and q of the dichromatic model as,

p = \frac{E_\infty \, \eta \, e^{-\beta d}}{d^2}, \qquad q = E_\infty \left( 1 - e^{-\beta d} \right).    (10)

This implies that the dichromatic model is linear in color space. In other words, D̂, Â and E lie on the same dichromatic plane in color space. Furthermore, the unit vectors D̂ and Â do not change due to different atmospheric conditions. Therefore, the colors of a scene point P, observed under different atmospheric conditions, lie on a single dichromatic plane, as shown in figure 2.

Figure 2: The observed color vectors E_i of a scene point under different (two in this case) foggy or hazy conditions lie on a plane called the dichromatic plane.

Nayar and Narasimhan [NN99] did not extensively verify their model for real images. Since our framework is based on this model, we experimentally verified the model in R-G-B color space. Experiments were performed using two scenes (see figures 6(a) and (c)) under three different fog and haze conditions, respectively. The dichromatic plane for each pixel was computed by fitting a plane to the colors of that pixel, observed under the three atmospheric conditions. The error of the plane-fit was computed in terms of the angle between the observed color vectors and the estimated plane. The average error (in degrees) for all the pixels in each of the two scenes is shown in table 1. The small error values indicate that the dichromatic model indeed works well for fog and haze.

Table 1: Experimental verification of the dichromatic model with two scenes imaged under three different foggy and hazy conditions, respectively. The error was computed as the mean angular deviation (in degrees) of the observed scene color vectors from the estimated dichromatic planes, over all pixels in the images.
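The per-pixel verification above can be sketched as follows. This is not the authors' code: it builds one synthetic pixel from the dichromatic model with hypothetical D̂, Â, p and q values, takes the plane normal from a cross product, and measures the angular deviation of each observed color from that plane.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def angle_to_plane_deg(v, n):
    """Angle (degrees) between a color vector v and the plane with normal n."""
    dot = sum(vi * ni for vi, ni in zip(v, n))
    return math.degrees(math.asin(abs(dot) /
           (math.sqrt(sum(x*x for x in v)) * math.sqrt(sum(x*x for x in n)))))

# One pixel observed under three weather conditions: E = p*D + q*A.
D = (0.8, 0.5, 0.3)                     # direct transmission direction (made up)
A = (0.6, 0.6, 0.5)                     # airlight direction (made up)
pq = [(1.0, 0.2), (0.6, 0.9), (0.3, 1.4)]
obs = [tuple(p*d + q*a for d, a in zip(D, A)) for p, q in pq]

normal = cross(obs[0], obs[1])          # dichromatic plane of this pixel
errors = [angle_to_plane_deg(e, normal) for e in obs]
print(max(errors))                      # ~0: the three colors are coplanar
```

With real, noisy pixels the deviations are nonzero, and their mean over the image is exactly the error measure reported in table 1.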

4 Computing the Direction of Airlight Color

The direction of airlight (fog or haze) color can be simply computed by averaging a patch of the sky on a foggy or hazy day, or from scene points whose direct transmission color is black¹. These methods necessitate either (a) the inclusion of a part of the sky (which is more prone to color saturation or clipping) in the image, or (b) a clear day image of the scene with sufficient black points to yield a robust estimate of the direction of airlight color. Here, we present a method that does not require either the sky or a clear day image to compute the direction of airlight color.

Figure 3 illustrates the dichromatic planes for two scene points P_i and P_j, with different direct transmission colors D̂_i and D̂_j. The dichromatic planes Q_i and Q_j are given by their normals,

\mathbf{N}_i = \mathbf{E}_1^{(i)} \times \mathbf{E}_2^{(i)}, \qquad \mathbf{N}_j = \mathbf{E}_1^{(j)} \times \mathbf{E}_2^{(j)}.    (11)

Since the direction Â of the airlight color is the same for the entire scene, it must lie on the dichromatic planes of all scene points. Hence, Â is given by the intersection of the two planes Q_i and Q_j,

\hat{\mathbf{A}} = \frac{\mathbf{N}_i \times \mathbf{N}_j}{\left| \mathbf{N}_i \times \mathbf{N}_j \right|}.    (12)

¹Sky and black points take on the color of airlight on a bad weather day.

Figure 3: Intersection of two different dichromatic planes yields the direction Â of airlight color.

In practice, scenes have several points with different colors. Therefore, we can compute a robust intersection of several dichromatic planes by minimizing the objective function

\varepsilon = \sum_i \left( \mathbf{N}_i \cdot \hat{\mathbf{A}} \right)^2.    (13)

Thus, we are able to compute the color of fog or haze using only the observed colors of the scene points under two atmospheric conditions, without relying on a patch of the sky being visible in the image.

We verified the above method for the two scenes shown in figures 6(a) and (c). First, the direction of airlight color was computed using (13). Then, we compared it with the direction of the airlight color obtained by averaging an unsaturated patch of the sky. For both scenes, the angular deviations between the two estimates were small. These small errors in the computed directions of airlight color indicate the robustness of the method.

5 Iso-depth Scene Points

In this section, we derive a simple constraint for scene points that are at the same depth from the observer. This constraint can then be used to segment the scene based on depth, without knowing the actual reflectances of the scene points and their sky apertures. For this, we first prove the following lemma.

Lemma 1: Ratios of the direct transmission magnitudes for points under two different weather conditions are equal, if and only if the scene points are at equal depths from the observer.

Proof: Let \beta_1 and \beta_2 be two unknown weather conditions with horizon brightness values E_{\infty 1} and E_{\infty 2}. Let P_i and P_j be two scene points at depths d_i and d_j from the observer. Also, let \eta_i and \eta_j represent the sky apertures and reflectances of these points.

Figure 4: The ratio p_2 / p_1 of the direct transmissions for a scene point under two different atmospheric conditions is equal to the ratio |q'A_2| / |OA_1| of the parallel sides. Shaded triangles are similar.

From (10), the direct transmission magnitudes of P_i under \beta_1 and \beta_2 can be written as

p_1^{(i)} = \frac{E_{\infty 1} \, \eta_i \, e^{-\beta_1 d_i}}{d_i^2}, \qquad p_2^{(i)} = \frac{E_{\infty 2} \, \eta_i \, e^{-\beta_2 d_i}}{d_i^2}.    (14)

Similarly, the direct transmission magnitudes of P_j under \beta_1 and \beta_2 are

p_1^{(j)} = \frac{E_{\infty 1} \, \eta_j \, e^{-\beta_1 d_j}}{d_j^2}, \qquad p_2^{(j)} = \frac{E_{\infty 2} \, \eta_j \, e^{-\beta_2 d_j}}{d_j^2}.    (15)

Then, we immediately see that the relation

\frac{p_2^{(i)}}{p_1^{(i)}} = \frac{p_2^{(j)}}{p_1^{(j)}} = \frac{E_{\infty 2}}{E_{\infty 1}} \, e^{-(\beta_2 - \beta_1) d}    (16)

holds if and only if d_i = d_j = d. So, if we have the ratio of direct transmissions for each pixel in the image, we can group the scene points according to their depths from the observer. But how do we compute this ratio for any scene point without knowing the actual direct transmission magnitudes?

Consider the dichromatic plane geometry for a scene point P, as shown in figure 4. Here, we denote a vector by the line segment between its end points. Let p_1 and p_2 be the unknown direct transmission magnitudes of P under \beta_1 and \beta_2, respectively. Similarly, let q_1 and q_2 be the unknown airlight magnitudes for P under \beta_1 and \beta_2.

We define an airlight magnitude |Oq'| such that

E_2 q' \parallel E_1 O.    (17)

Also, since the direction of direct transmission color for a scene point does not vary due to different atmospheric conditions, E_1 A_1 \parallel E_2 A_2. Here A_1 and A_2 correspond to the end points of the airlight magnitudes of P under \beta_1 and \beta_2, as shown in figure 4. Thus, the triangle E_1 O A_1 is similar to the triangle E_2 q' A_2. This implies,

\frac{p_2}{p_1} = \frac{q_2 - |Oq'|}{q_1} = \frac{|E_2 q'|}{|E_1 O|}.    (18)

Since the right hand side of (18) can be computed using the observed color vectors of the scene point P, we can compute the ratio p_2 / p_1 of direct transmission magnitudes for P under two atmospheric conditions. Therefore, from (16), we have a simple method to find points at the same depth, without having to know their reflectances and sky apertures. A sequential-labeling-like algorithm can then be used to efficiently segment scenes into regions of equal depth.

6 Scene structure

We extend the direct transmission ratio constraint given in (16) one step further and present a method to construct the complete structure of an arbitrary scene, from two images taken under poor weather conditions.

From (16), the ratio of direct transmissions of a scene point P under two atmospheric conditions is given by

\frac{p_2}{p_1} = \frac{E_{\infty 2}}{E_{\infty 1}} \, e^{-(\beta_2 - \beta_1) d}.    (19)

Note that we have already computed the left hand side of the above equation using (18). Taking natural logarithms on both sides, we get

(\beta_2 - \beta_1) \, d = \ln \frac{E_{\infty 2}}{E_{\infty 1}} - \ln \frac{p_2}{p_1}.    (20)

So, if we know the horizon brightness values E_{\infty 1} and E_{\infty 2}, then we can compute the scaled depth (\beta_2 - \beta_1) d at P. In fact, (\beta_2 - \beta_1) d is the Difference in Optical Thicknesses (DOT) for the pathlength d under the two weather conditions. In the atmospheric optics literature, the term DOT is used as a quantitative measure of the "change" in weather conditions.

6.1 Estimation of E_{\infty 1} and E_{\infty 2}

The expression for scaled depth given in (20) includes the horizon brightness values E_{\infty 1} and E_{\infty 2}. These two terms are observables only if some part of the sky is visible in the image. However, the brightness values within the region of the image corresponding to the sky cannot be trusted, since they are prone to intensity saturation and color clipping. Here, we estimate E_{\infty 1} and E_{\infty 2} using only points in the "non-sky" region of the scene.

Let q_1 and q_2 denote the magnitudes of airlight for a scene point P under atmospheric conditions \beta_1 and \beta_2. Using (10), we have

q_1 = E_{\infty 1} \left( 1 - e^{-\beta_1 d} \right), \qquad q_2 = E_{\infty 2} \left( 1 - e^{-\beta_2 d} \right).    (21)

Therefore,

\frac{E_{\infty 2} - q_2}{E_{\infty 1} - q_1} = \frac{E_{\infty 2}}{E_{\infty 1}} \, e^{-(\beta_2 - \beta_1) d}.    (22)

Substituting (19), we can rewrite the above equation as

\frac{p_2}{p_1} = \frac{q_2 - c}{q_1},    (23)

where,

c = E_{\infty 2} - \frac{p_2}{p_1} \, E_{\infty 1}.    (24)

Comparing (23) and (18), we get c = |Oq'| (see figure 4). Hence, (24) represents a straight line equation in the unknown parameters E_{\infty 1} and E_{\infty 2}. Now consider several pairs \{ c^{(i)}, (p_2/p_1)^{(i)} \} corresponding to scene points P_i at different depths. Then, the estimation of E_{\infty 1} and E_{\infty 2} is reduced to a line fitting problem. Quite simply, we have shown that the horizon brightnesses under different weather conditions can be computed using only non-sky scene points.

Since both the terms on the right hand side of (20) can be computed for every scene point, we have a simple algorithm for computing the scaled depth at each scene point, and hence the complete scene structure, from two bad weather images.
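Equations (19), (20) and (24) can be exercised end-to-end on synthetic data. In this sketch (hypothetical E∞ and β values; not the authors' code), each scene point contributes one linear equation c = E∞2 − (p2/p1) E∞1, a least-squares line fit recovers the horizon brightnesses, and (20) then yields the scaled depths:

```python
import math

def fit_horizon_brightness(ratios, cs):
    """Fit c = E_inf2 - r * E_inf1 (eq. 24) by least squares over scene points:
    slope of c vs. r gives -E_inf1, the intercept gives E_inf2."""
    n = len(ratios)
    mr, mc = sum(ratios) / n, sum(cs) / n
    slope = (sum((r - mr) * (c - mc) for r, c in zip(ratios, cs))
             / sum((r - mr) ** 2 for r in ratios))
    return -slope, mc - slope * mr          # (E_inf1, E_inf2)

# Hypothetical ground truth: E_inf1 = 300, E_inf2 = 450, beta2 - beta1 = 0.4.
E1t, E2t, dbeta = 300.0, 450.0, 0.4
depths = [1.0, 2.0, 3.5, 5.0]
ratios = [(E2t / E1t) * math.exp(-dbeta * d) for d in depths]   # eq. (19)
cs = [E2t - r * E1t for r in ratios]                            # eq. (24)

e1, e2 = fit_horizon_brightness(ratios, cs)
scaled = [math.log(e2 / e1) - math.log(r) for r in ratios]      # eq. (20)
print(e1, e2)        # ~300.0, ~450.0
print(scaled)        # ~[0.4, 0.8, 1.4, 2.0] = (beta2 - beta1) * d
```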

6.2 Experimental Results

We now present results showing scene structure recovered from both synthetic and real images. The synthetic scene we used is shown on the left side of figure 5(a) as an image with 16 color patches. The colors in this image represent the direct transmission or "true" colors of the scene. We assigned a random depth value to each color patch. The rotated 3D structure of the scene is shown on the right side of figure 5(a). Then, two different levels of fog (\beta_1, \beta_2) were added to the synthetic scene according to the dichromatic model. To test robustness, we added noise to the foggy images. The noise was randomly selected from a uniformly distributed color cube of dimension 10. The resulting two foggy (and noisy) images are shown in figure 5(b). The structure shown in 5(c) is recovered from the two foggy images using the technique we described above.

Figure 5: (a) On the left, an image representing a synthetic scene with 16 color patches, and on the right, its rotated 3D structure. (b) Two levels of fog (\beta_1, \beta_2) are added to the synthetic image according to the dichromatic model. To test robustness, noise is added by random selection from a uniformly distributed color cube of dimension 10. (c) The recovered structure (median filtered). Refer to [Web00] for version with color images.

Simulations were repeated for the scene in figure 5(a), for two relative scattering coefficient values (\beta_1 / \beta_2) and four different noise levels. Once again, the noise was randomly selected from a uniformly distributed color cube. Table 2 reports, for each parameter set \{\beta_1/\beta_2, E_{\infty 1}, E_{\infty 2}\} and noise level, the computed values of E_{\infty 1} and E_{\infty 2}, and the percentage RMS error in the recovered scaled depths, computed over all pixels. These results show that our method for recovering structure is robust for reasonable amounts of noise.

Table 2: Simulations were repeated for the scene in figure 5(a), for two sets of parameter values (shown in (a) and (b)), and four different noise levels. Noise was randomly selected from a uniformly distributed color cube.

Figure 6: (a) A scene imaged under two different foggy conditions. (b) Computed depth map of the scene using the two images in (a). (c) Another scene imaged under two different hazy conditions. (d) Computed depth map of the scene using the two images in (c). See [Web00] for version with color images.

Experiments with two real scenes under foggy and hazy conditions are shown in figure 6. These experiments are based on images acquired using a Nikon N90s SLR camera and a Nikon LS-2000 slide scanner. All images are linearized using the radiometric response curve of the imaging system, which is computed off-line using a color chart. The first of the two scenes was imaged under two foggy conditions, and is shown in 6(a). The second scene was imaged under two hazy conditions, as shown in 6(c). Figures 6(b) and 6(d) show the corresponding recovered depth maps.

7 True Scene Color

As we stated in the beginning of the paper, most outdoor vision applications perform well only under clear weather. Any discernible amount of scattering due to fog or haze in the atmosphere hinders a clear view of the scene. In this section, we compute the direct transmission or "true" colors of the entire scene using minimal a priori scene information. For this, we show that, given additional scene information (airlight or direct transmission vector) at a single point in the scene, we can compute the true colors of the entire scene from two bad weather images.

Consider the dichromatic model given in (8). The observed color of a scene point P_i under weather condition \beta is,

\mathbf{E}^{(i)} = p^{(i)} \hat{\mathbf{D}}^{(i)} + q^{(i)} \hat{\mathbf{A}},    (25)

where p^{(i)} is the direct transmission magnitude, and q^{(i)} is the airlight magnitude of P_i. Suppose that the direction \hat{\mathbf{D}}^{(j)} of direct transmission color for a single point P_j is given. Besides, the direction \hat{\mathbf{A}} of airlight color for the entire scene can be estimated using (13). Therefore, the coefficients p^{(j)} and q^{(j)} can be computed using (25). Furthermore, the optical thickness \beta d_j of P_j can be computed from (10).

Since we have already shown how to compute the scaled depth of every scene point (see (20)), the relative depth d_i / d_j of any other scene point P_i with respect to P_j can be computed using the ratio of scaled depths. Hence, the optical thickness and airlight for the scene point P_i, under the same atmospheric condition, are given by

\beta d_i = \beta d_j \left( \frac{d_i}{d_j} \right), \qquad q^{(i)} = E_\infty \left( 1 - e^{-\beta d_i} \right).    (26)

Finally, the direct transmission color vector of P_i can be computed as p^{(i)} \hat{\mathbf{D}}^{(i)} = \mathbf{E}^{(i)} - q^{(i)} \hat{\mathbf{A}}. Thus, given a single measurement (in this case, the direction of direct transmission color of a single scene point), we have shown that the direct transmission and airlight color vectors of any other point, and hence the entire scene, can be computed. But how do we specify the true color of any scene point without actually capturing the clear day image?

For this, we assume that there exists at least one scene point whose true color D lies on the surface of the color cube, and we wish to identify such point(s) in the scene. Consider the R-G-B color cube in figure 7. If the true color of a scene point lies on the surface of the color cube, then the computed q̃ is equal to the airlight magnitude q of that point.

Figure 7: The observed color E of a scene point, its airlight direction Â and true color direction D̂ are shown in the R-G-B color cube. q̃ is the distance from E to a surface of the cube along negative Â. For scene points whose true colors do not lie on the cube surface, q̃ is greater than the true airlight magnitude q.

Figure 8: True color images recovered using the two foggy and hazy images shown in figures 6(a) and (c), respectively. The colors in the dark window interiors are dominated by airlight and thus their true colors are black. The images are brightened for display purposes. See [Web00] for the version with color images.

However, if the true color of the point lies within the color cube, then clearly q̃ > q. For each point P_i, we compute q̃^{(i)} and the corresponding optical thickness \beta d_i. Note that \beta d_i may or may not be the correct optical thickness. We normalize the optical thicknesses of the scene points by their scaled depths to get

\tilde{\beta}_i = \frac{\beta d_i}{(\beta_2 - \beta_1) d_i}.    (27)

For scene points that do not lie on the color cube surface, \tilde{\beta} is greater than what it should be. Since we have assumed that there exists at least one scene point whose true color is on the surface of the cube, it must be the point that has the minimum \tilde{\beta}. So, q̃ of that point is its true airlight. Hence, from (26), the airlights and true colors of the entire scene can be computed without using a clear day image.

Usually in urban scenes, window interiors have very little color of their own. Their intensities are solely due to airlight and not due to direct transmission. In other words, their true color is black (the origin of the color cube). We detected such points in the scene using the above technique and recovered the true colors of two foggy and hazy scenes (see figure 8).
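The color-cube test can be sketched as follows (illustrative values only; not the paper's code). Here q̃ is the distance from E to the cube's lower faces along −Â; for a black scene point such as a dark window interior, the observed color is pure airlight, so q̃ recovers q exactly and E − qÂ recovers the true (black) color:

```python
import math

def airlight_upper_bound(e, a_hat):
    """q~: distance from E to the color cube's lower faces along -A.
    Equals the true airlight magnitude q when the true color lies on the
    cube surface; otherwise it overestimates q (Section 7)."""
    return min(ei / ai for ei, ai in zip(e, a_hat) if ai > 0)

def true_color(e, a_hat, q):
    """Direct transmission ("true") color vector: p*D = E - q*A."""
    return [ei - q * ai for ei, ai in zip(e, a_hat)]

A = [1.0 / math.sqrt(3)] * 3            # hypothetical airlight direction
e_window = [0.9 * a for a in A]         # black point: pure airlight, q = 0.9
q = airlight_upper_bound(e_window, A)
print(q)                                 # ~0.9: recovered airlight magnitude
print(true_color(e_window, A, q))        # ~[0, 0, 0]: recovered true color
```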


8 Conclusion

In this paper, we presented a general chromatic framework for scene understanding under bad weather conditions. Note that conventional image enhancement techniques are not useful here, since the effects of weather must be modeled using atmospheric scattering principles that are closely tied to scene depth. We based our work on the simple yet useful dichromatic model. Several useful constraints on scene color changes due to different atmospheric conditions were derived. Using these constraints, we developed simple algorithms to recover the three-dimensional structure and true colors of scenes, from images taken under poor weather conditions. These algorithms were demonstrated for both synthetic and real scenes.

References

[All76] E. Allard. Mémoire sur l'intensité et la portée des phares. Paris, Dunod, 1876.

[Bou30] P. Bouguer. Traité d'optique sur la gradation de la lumière. 1730.

[CK97] F. Cozman and E. Krotkov. Depth from scattering. Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition, 31:801–806, 1997.

[GC66] J. Gordon and P. Church. Overcast sky luminances and directional luminous reflectances of objects and backgrounds under overcast skies. Applied Optics, 5:919, 1966.

[Hor86] B.K.P. Horn. Robot Vision. The MIT Press, 1986.

[Hul57] H.C. van de Hulst. Light Scattering by Small Particles. John Wiley and Sons, 1957.

[Kos24] H. Koschmieder. Theorie der horizontalen Sichtweite. Beitr. Phys. freien Atm., 12:33–53, 171–181, 1924.

[McC75] E.J. McCartney. Optics of the Atmosphere: Scattering by Molecules and Particles. John Wiley and Sons, 1975.

[Mid52] W.E.K. Middleton. Vision through the Atmosphere. University of Toronto Press, 1952.

[MS42] P. Moon and D.E. Spencer. Illumination from a non-uniform sky. Illum. Eng., 37:707–726, 1942.

[NN99] S.K. Nayar and S.G. Narasimhan. Vision in bad weather. Proceedings of the 7th International Conference on Computer Vision, 1999.

[Web00] http://www.cs.columbia.edu/cave/research/publications/visionweather.html, 2000.

A Direct Transmission under Overcast Skies

We present an analysis of the effect of sky illumination and its reflection by a scene point, on the direct transmission from the scene point. For this, we make two simplifying assumptions on the illumination received by scene points. Usually, the sky is overcast under foggy conditions. So we use the overcast sky model [GC66] for environmental illumination. We also assume that the irradiance of each scene point is dominated by the radiance of the sky, and that the irradiance due to other scene points is not significant.

Figure 9: The illumination geometry of a scene point P with surface normal n̂. The irradiance of P is due to the airlight radiance of its sky aperture Ω.

Consider the illumination geometry shown in figure 9. Let P be a point on a surface and n̂ be its normal. We define the sky aperture Ω of point P as the cone of sky visible from P. Consider an infinitesimal patch of the sky, of size \delta\theta in polar angle and \delta\phi in azimuth, as shown in figure 9. Let this patch subtend a solid angle \delta\omega at P. For overcast skies, Moon [MS42] and Gordon [GC66] have shown that the radiance of the infinitesimal cone \delta\omega, in the direction (\theta, \phi), is given by L(\theta, \phi) = L_\infty (1 + 2 \cos\theta) \, \delta\omega, where \delta\omega = \sin\theta \, \delta\theta \, \delta\phi. Hence, the irradiance at P due to the entire aperture Ω is given by

E(P) = \int_\Omega L_\infty \, (1 + 2 \cos\theta) \cos\theta \sin\theta \, \delta\theta \, \delta\phi,    (28)

where \cos\theta accounts for foreshortening [Hor86]. If R(\lambda) is the BRDF of P, then the radiance from P toward the observer can be written as

L_0(\lambda) = L_\infty(\lambda) \, R(\lambda) \, \sigma,    (29)

where \sigma = \int_\Omega (1 + 2 \cos\theta) \cos\theta \sin\theta \, \delta\theta \, \delta\phi. Let s be the projection of a unit patch around P, on a plane perpendicular to the viewing direction. Then, the radiant intensity of P is given by I_0(\lambda) = L_0(\lambda) \, s. Since s is a constant with respect to \theta and \phi, we can write concisely I_0(\lambda) = L_\infty(\lambda) \, \eta(\lambda), where \eta(\lambda) = R(\lambda) \, \sigma \, s. This term \eta(\lambda) represents the sky aperture and the reflectance in the direction of the viewer. Substituting for I_0(\lambda) in the direct transmission model in (5), we obtain

E(d, \lambda) = g \, L_\infty(\lambda) \, \eta(\lambda) \, \frac{e^{-\beta(\lambda) d}}{d^2}.    (30)

We have thus formulated the direct transmission model in terms of overcast sky illumination and the reflectance of the scene points.
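As a side sanity check on the aperture integral in (29) (this computation is not in the paper): for a fully unoccluded hemispherical sky aperture, σ = ∫₀^{2π} ∫₀^{π/2} (1 + 2 cos θ) cos θ sin θ dθ dφ = 2π (1/2 + 2/3) = 7π/3, which simple midpoint-rule integration reproduces:

```python
import math

def sigma_hemisphere(n=20000):
    """Midpoint-rule evaluation of the aperture integral of eq. (29)
    over the full hemisphere (theta in [0, pi/2], phi in [0, 2*pi])."""
    dt = (math.pi / 2) / n
    s = sum((1 + 2 * math.cos(t)) * math.cos(t) * math.sin(t) * dt
            for t in (dt * (i + 0.5) for i in range(n)))
    return 2 * math.pi * s

print(sigma_hemisphere(), 7 * math.pi / 3)   # both ~7.3304
```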