

Received October 11, 2018, accepted October 28, 2018, date of publication November 6, 2018, date of current version December 3, 2018.

Digital Object Identifier 10.1109/ACCESS.2018.2879893

Adaptive Haze Removal for Single Remote Sensing Image
FENGYING XIE, JIAJIE CHEN, XIAOXI PAN, AND ZHIGUO JIANG
Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China

Corresponding author: Xiaoxi Pan ([email protected])

This work was supported by the National Natural Science Foundation of China under Grant 61471016 and Grant 61871011.

ABSTRACT Haze is a common phenomenon in remote sensing images, which limits their applications. In this paper, a novel adaptive dehazing method is proposed for remote sensing images. First, a new prior, namely the dark channel-saturation prior, is developed based on the relation between the dark channel and the saturation of haze-free remote sensing images. Second, the optimal transmission is estimated through the proposed prior on the basis of the haze imaging model. Finally, using the estimated transmission, haze is removed from the images through the haze imaging model. Because no parameter needs to be set manually in the proposed method, nonuniform haze can be adaptively removed. Experiments are carried out on simulated images and real images, respectively. Compared with other state-of-the-art methods, the proposed method recovers the scene in hazy regions more clearly while better retaining the information in haze-free regions.

INDEX TERMS Adaptive dehazing, dark channel-saturation prior, haze removal, remote sensing.

I. INTRODUCTION
Remote sensing images can provide sufficient information for meteorological monitoring, resources investigation, military reconnaissance, and many other fields. However, these images are often degraded by weather conditions such as haze, fog, and semitransparent cloud, which are uncontrollable and affect the results of subsequent image interpretation. Hence, it is imperative to develop a scheme that removes haze, fog, or semitransparent cloud from remote sensing images to improve their application value. Because these phenomena (haze, fog, and semitransparent cloud) are similar and are not distinguished in the existing literature, the term haze is used uniformly in this paper to describe these weather conditions.

Existing dehazing methods for remote sensing images include radiative-transfer-based methods and image-based methods. Radiative-transfer-based methods such as LOWTRAN [1] and MODTRAN [2] are accurate in terms of dehazing; however, it is difficult to acquire all the detailed parameters of the geographic and atmospheric conditions [3]. By contrast, image-based methods require no additional information beyond the image data and have therefore been widely used in the dehazing field [4].

Traditional enhancement methods such as histogram equalization (HE) and retinex can be used to remove haze. Fu et al. [5] developed a remote sensing image enhancement method combining HE with the discrete cosine transform (DCT). Wu et al. [6] improved remote sensing image quality using multi-scale retinex (MSR). Shen et al. [7] proposed a frequency-domain method based on a homomorphic filter to remove haze from visible remote sensing images. Because enhancement methods do not consider the cause of image degradation, they can eliminate slight haze but fail for thick haze, and color distortion is easily introduced. With the first physical imaging model developed in [8], a number of dehazing methods based on different imaging models have been proposed and have gradually become predominant in the outdoor image field [9]–[11] and the remote sensing image field [12], [13]. These approaches improve dehazing results notably because the model is designed based on the mechanism of atmospheric scattering. Makarau et al. [12] and Qi et al. [14] restored haze-free remote sensing images from Landsat and AVNIR-2 by subtracting the corresponding haze thickness map (HTM), which is constructed by searching for dark objects locally in the image; their dehazing effect is more significant than that of enhancement methods. Makarau's method can dehaze multispectral bands including visible, near-infrared, and shortwave infrared channels. Since visible bands are degraded by haze more easily, some dehazing methods focus on the visible bands. Zhang et al. [13] proposed a haze optimized transformation (HOT) method based on the correlation between the blue and the red bands.


Moro and Halounova's [3], He et al.'s [15], and Hou et al.'s [16] strategies were developed to make the HOT method more suitable and robust. Singh and Vijay [17], [18] estimated haze through a trilateral filter based on fourth-order partial differential equations.

In recent years, many research tasks, such as target detection [19], [20] and image segmentation [21], [22], have been developed on remote sensing images from Google Earth, and Google Earth images have become one of the important data sources. Therefore, dehazing these images is needed in order to support subsequent image analysis and application. Ni et al. [23] removed haze from Google Earth images through a linear intensity transformation combined with local property analysis. Among model-based dehazing methods, the method based on the dark channel prior [24] is the most representative. Long et al. [25] utilized the dark channel prior and a low-pass Gaussian filter to estimate the transmission. Considering the difference in dark channel statistical features between remote sensing images and outdoor images, Pan et al. [26] deformed the haze imaging model by introducing a translation term to achieve dehazing effects.

All the methods mentioned above either improve performance to make the recovered color vivid or focus on processing speed. Few of them emphasize removing nonuniform haze adaptively, yet spatially varying haze is common in remote sensing images. Using the above methods to remove haze from remote sensing images often leads to two phenomena: 1) under-dehazing, where haze cannot be removed completely; and 2) over-dehazing, where the original information is changed in haze-free regions and color drift is caused in hazy regions. A good dehazing method should keep the information of haze-free regions unchanged as much as possible and, meanwhile, remove haze from hazy regions without color drift and texture distortion. Hence, adaptive dehazing is needed for remote sensing images.

Recently, learning-based methods have been developed to remove haze from outdoor images and remote sensing images. Tang et al. [27] used a regression model based on random forest to predict the medium transmission. Cai et al. [28] used a convolutional neural network (CNN) model to regress the transmission from outdoor images. Jiang and Lu [29] designed a multi-scale residual network to remove haze from remote sensing images. Cai and Jiang regarded the transmission map as locally constant and regressed the transmission through the network, which easily causes estimation errors. Qin et al. [30] fused the dehazing results of multiple CNN individuals with weight maps to remove haze from multispectral remote sensing images end to end. This method regresses the clear image directly and can obtain better dehazing results.

In this paper, a novel adaptive dehazing method is proposed for remote sensing images from Google Earth, which are 8-bit, three-channel RGB data. Our main contributions are as follows:

(1) We analyzed the relationship between the dark channel and saturation in remote sensing images statistically, and proposed a new prior combining the dark channel with saturation, which is more suitable for remote sensing images than the traditional dark channel prior.

(2) Based on the new prior, the transmission estimation formula is re-derived, and the nonuniform haze in the images is adaptively removed. In addition, the information of haze-free regions is retained.

The remainder of the paper is organized as follows. Section II describes the haze imaging model employed in this work. Section III proposes the novel adaptive dehazing method, and Section IV presents and analyzes the experimental results. Section V concludes this work.

II. HAZE IMAGING MODEL
The haze imaging model provides the general framework for single-image dehazing, and most existing algorithms [31], [32] are based on it. According to Koschmieder's law for transparent objects [33], it can be described as:

I(x) = L_\infty \rho(x) e^{-\beta d(x)} + L_\infty \left( 1 - e^{-\beta d(x)} \right)   (1)

where x = (x, y) represents the location of a pixel, I(x) stands for the observed image, L_\infty is the atmospheric light, \rho(x) is the reflectance of an object in the image, \beta is the atmospheric attenuation coefficient including absorption and scattering, and d(x) is the distance between an object and the observer. In (1), the first term L_\infty \rho(x) e^{-\beta d(x)} is the direct attenuation, and the second term L_\infty (1 - e^{-\beta d(x)}) is the airlight. Let J(x) = L_\infty \rho(x), t(x) = e^{-\beta d(x)}, and A = L_\infty; then (1) can be rewritten as

I(x) = J(x) t(x) + A \left( 1 - t(x) \right)   (2)

where J(x) is the scene radiance to be restored, A is the global atmospheric light, and t(x) is the medium transmission describing the portion of the light that is not scattered and reaches the camera. Equation (2) is exactly the imaging model used in dehazing.

The purpose of dehazing is to recover the scene radiance J(x) from a given hazy image I(x), where I(x) is the only known variable. To resolve this ill-posed problem, the atmospheric light A and the transmission t(x) need to be estimated. Once A and t(x) are estimated, the scene radiance J(x) can be recovered by:

J(x) = \frac{I(x) - A}{t(x)} + A   (3)
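As a concrete illustration of (3), the following NumPy sketch inverts the haze imaging model given an estimated atmospheric light and transmission map. The function name and the lower bound on t(x) are our own additions (a common safeguard against division by near-zero transmission), not part of the paper's formulation.

```python
import numpy as np

def recover_scene_radiance(I, A, t, t_min=0.1):
    """Invert the haze imaging model, Eq. (3): J = (I - A) / t + A.

    I : hazy image, float array of shape (H, W, 3), values in [0, 1]
    A : atmospheric light, scalar or length-3 array
    t : transmission map, float array of shape (H, W)
    """
    t = np.clip(t, t_min, 1.0)[..., None]   # lower-bound t and broadcast over channels
    J = (I - A) / t + A                     # Eq. (3)
    return np.clip(J, 0.0, 1.0)             # keep the recovered radiance in a valid range
```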

The atmospheric light is usually estimated from the most-opaque region of a single hazy image, and the existing estimation methods are semi-automatic [34] or automatic [24]–[26]. After the atmospheric light is obtained, the transmission can be estimated, again in two categories: methods based on priors or assumptions [24], [35], and learning-based methods [27]–[29].


III. ADAPTIVE DEHAZING
The dark channel prior proposed in [24] is based on an observation on outdoor haze-free images: in most of the non-sky patches, at least one color channel has some pixels whose intensities are very low and close to zero. It can be described as:

J_d(x) = \min_{y \in \Omega(x)} \left( \min_{c \in (r,g,b)} J_c(y) \right)   (4)

where J_d(x) is the dark channel, c is one of the color channels, and \Omega(x) represents a local patch centered at pixel x.

According to [26], the average intensity of the dark channel of remote sensing images is low, but not close to zero. This is because the imaging distance is long and, even on a clear day, suspended particles still exist; in addition, even at polar regions with scarce aerosols, a path radiance from molecules also exists and diffuses light, thus weakening the brightness of the colors in remote sensing images. Therefore, despite the success of the dark channel prior on outdoor images, applying it directly to remote sensing images leads to inaccurate dehazing results. Long's [25] and Pan's [26] methods dehaze remote sensing images based on the dark channel prior, with some improvements employed to make the prior suitable for remote sensing images. However, because some parameters need to be set, they cannot remove nonuniform haze adaptively. In this paper, a new prior applicable to remote sensing images is proposed, which can remove spatially varying haze adaptively.

A. DARK CHANNEL-SATURATION PRIOR
The saturation represents the purity of color. For a patch, we define its saturation as the maximum saturation of the pixels in it,

J_s(x) = W \cdot \max_{y \in \Omega(x)} S(y)   (5)

S(y) = 1 - \frac{3 \min_{c \in (r,g,b)} J_c(y)}{J_r(y) + J_g(y) + J_b(y)}   (6)

where J_r(y), J_g(y), and J_b(y) denote the three channels of pixel y, and W is the number of intensity levels of the image; in this paper, W is fixed to 255.

For a clear patch, the higher the saturation is, the more vivid the color becomes. Meanwhile, according to the dark channel prior [24], the dark channel value of a clear patch is small. For each patch, we calculate the difference between the saturation of the patch computed using (5) and the saturation of the pixel whose dark channel value is the smallest in this patch. Figure 1 shows the statistical results for 125,000 patches of size 11×11, taken from 5,000 haze-free remote sensing images with different land cover types manually picked from Google Earth, where (a) is the histogram of the difference values and (b) is the corresponding cumulative probability curve. As can be seen, 75.4% of the patches have a saturation difference value of zero and 97.2% have a value less than 10, which means that for most patches the same pixel decides both the dark channel value and the saturation value. Therefore, the dark channel and the saturation are a pair of complementary features for a clear image or patch.

FIGURE 1. Statistical results of saturation difference values on 125,000 patches. (a) The histogram of the difference values. (b) The corresponding cumulative probability curve.

Since a remote sensing image's dark channel does not always tend to zero, we use the saturation to adjust these nonzero values. Therefore, the dark channel-saturation prior is defined as:

J_{ds}(x) = \max \left( J_d(x) - J_s(x),\ 0 \right)   (7)

where the maximum operator is applied to avoid possible negative values.
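A minimal NumPy/SciPy sketch of how the dark channel (4), the patch saturation (5)-(6), and the dark channel-saturation map (7) could be computed; the 11×11 patch matches the statistics reported above, while the function name and the small epsilon guarding the division are our assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def dark_channel_saturation(J, patch=11, W=255):
    """Dark channel-saturation map, Eq. (7), for an 8-bit RGB image J of shape (H, W, 3)."""
    J = J.astype(np.float64)
    min_c = J.min(axis=2)                        # per-pixel minimum over r, g, b
    sum_c = J.sum(axis=2) + 1e-6                 # epsilon avoids division by zero
    Jd = minimum_filter(min_c, size=patch)       # dark channel, Eq. (4)
    S = 1.0 - 3.0 * min_c / sum_c                # pixel saturation, Eq. (6)
    Js = W * maximum_filter(S, size=patch)       # patch saturation, Eq. (5)
    return np.maximum(Jd - Js, 0.0)              # Eq. (7)
```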

FIGURE 2. Instance of dark channel and saturation for a remote sensing image. (a) Original image. (b) Dark channel. (c) The corresponding saturation. (d) Dark channel-saturation map.

Figure 2 shows an instance of the dark channel-saturation prior, where (a) is an original remote sensing image and (b) is its corresponding dark channel. It can be seen that the dark channel prior loses its effectiveness for remote sensing images, since the dark channel values in the haze-free region are still larger than zero. Fig. 2(c) is the saturation of (a). Clearly, the dark channel and the saturation do have opposite properties. Fig. 2(d) shows the dark channel-saturation map computed using (7). As can be seen, its values in the haze-free region are very low and even close to zero. Fig. 3 shows the statistics of the dark channel-saturation prior on the 5,000 haze-free remote sensing images, where (a) is the histogram of the intensity in the dark channel-saturation maps and (b) is the histogram of the average intensity of each dark channel-saturation map. It can be seen that the dark channel-saturation intensity of 91.2% of the pixels is below 25. Thus, similar to the property of the dark channel prior for outdoor images [24], we reach the conclusion that the values of the dark channel-saturation map for clear remote sensing images are very low and even close to zero:

J_{ds}(x) \rightarrow 0   (8)

FIGURE 3. Statistical results of the dark channel-saturation prior for 5000 haze-free remote sensing images. (a) Histogram of the intensity of the dark channel-saturation maps. (b) Histogram of the average intensity of each dark channel-saturation map.

B. ATMOSPHERIC LIGHT ESTIMATION
Before calculating the transmission, the atmospheric light needs to be estimated. The atmospheric light is usually estimated from the most-opaque region of a single hazy image [24]. In [26], the locations of the top 0.1% brightest pixels in the dark channel-saturation map are selected as the candidate region, and then the atmospheric light A is estimated as the average value of the brightest pixels in that region of the input image. In this paper, the same method is adopted to obtain the atmospheric light A, which is denoted as a scalar A_0 in the remainder of the paper for simplicity.
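One plausible implementation of this estimation step (the variable names and the gray-level averaging are our reading of [26], not code from the paper) is sketched below:

```python
import numpy as np

def estimate_atmospheric_light(I, Jds, ratio=0.001):
    """Scalar atmospheric light A0 from the top 0.1% brightest pixels of the
    dark channel-saturation map Jds (shape (H, W)); I is the 8-bit hazy image
    of shape (H, W, 3)."""
    n = max(1, int(Jds.size * ratio))
    candidates = np.argsort(Jds.ravel())[-n:]        # candidate region in Jds
    gray = I.astype(np.float64).mean(axis=2).ravel() # gray intensity of the input image
    return gray[candidates].mean()                   # average over the candidate pixels
```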

C. IMPROVED TRANSMISSION ESTIMATION
In (2), the transmission is a function of the extinction coefficient \beta and the distance d(x) between an object in the image and the observer. The extinction coefficient \beta is negatively correlated with the wavelength \lambda and positively correlated with the haze turbidity T [36]. It is commonly assumed to be constant across wavelengths in many methods dealing with particles whose size is large compared with the wavelength of light, such as fog, haze, aerosol, etc. [31]. For remote sensing images from Google Earth, the spectral information is unavailable, and the wavelength is also usually neglected when the transmission is estimated from the image itself [23], [25], [26].

For a hazy remote sensing image, we assume the transmission to be constant locally. According to the dark channel-saturation prior, the values of the dark channel-saturation map of a dehazed remote sensing image, denoted as J'_ds, should be very low and even close to zero. Equivalently, the difference between J'_d, the dark channel of the restored image, and J'_s, its corresponding saturation, should be close to zero; see (8). They can be computed through:

J'_d(x) = \min_{y \in \Omega(x)} \left( \min_{c \in (r,g,b)} J'_c(y) \right)   (9)

J'_s(x) = W \cdot \max_{y \in \Omega(x)} \left( 1 - \frac{3 \min_{c \in (r,g,b)} J'_c(y)}{J'_r(y) + J'_g(y) + J'_b(y)} \right) = W - W \cdot \min_{y \in \Omega(x)} \frac{3 \min_{c \in (r,g,b)} J'_c(y)}{J'_r(y) + J'_g(y) + J'_b(y)}   (10)

Therefore, J'_ds can be obtained through:

J'_{ds}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in (r,g,b)} J'_c(y) \right) + W \cdot \min_{y \in \Omega(x)} \frac{3 \min_{c \in (r,g,b)} J'_c(y)}{J'_r(y) + J'_g(y) + J'_b(y)} - W   (11)

Since a patch's dark channel and its corresponding saturation are decided by the same pixel, J'_ds can be further calculated as:

J'_{ds}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in (r,g,b)} J'_c(y) + W \frac{3 \min_{c \in (r,g,b)} J'_c(y)}{J'_r(y) + J'_g(y) + J'_b(y)} \right) - W   (12)

where \min_{c \in (r,g,b)} J'_c(y) can be expressed using (3):

\min_{c \in (r,g,b)} J'_c(y) = \frac{\min_{c \in (r,g,b)} I_c(y) - A_0}{\tilde{t}(y)} + A_0   (13)

Therefore,

J'_{ds}(x) = \min_{y \in \Omega(x)} \left( \frac{\min_{c \in (r,g,b)} I_c(y) + A_0 \tilde{t}(y) - A_0}{\tilde{t}(y)} + W \frac{3 \left( \min_{c \in (r,g,b)} I_c(y) + A_0 \tilde{t}(y) - A_0 \right)}{W_I(y) + 3 A_0 \tilde{t}(y) - 3 A_0} \right) - W   (14)

where W_I is the sum of the three channels,

W_I(y) = I_r(y) + I_g(y) + I_b(y)   (15)

According to the dark channel-saturation prior, the estimated transmission t̃ can be obtained through

\tilde{t} = \arg\min_{t} \left| J'_{ds} \right|, \quad \text{s.t.}\ 0 < t \le 1   (16)

To obtain the optimal transmission, a grid search is adopted to find the minimal |J'_ds| value by gradually increasing t with a short interval within the range (0, 1]. In this paper, the interval is set to 0.05. The t corresponding to the minimal |J'_ds| is regarded as the optimal estimate of the transmission. The transmission estimation is the key to removing haze adaptively and determines whether the dehazing effect is significant or not.
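The grid search can be read in a vectorized way: for every candidate t, restore the whole image with (13), evaluate |J'_ds| per pixel via (12), and keep, for each pixel, the candidate that drives the objective closest to zero. The sketch below follows that reading; the clamping of the restored intensities and all names are our assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_transmission(I, A0, patch=11, W=255, step=0.05):
    """Per-pixel grid search for the transmission, Eq. (16)."""
    I = I.astype(np.float64)
    best_t = np.ones(I.shape[:2])
    best_obj = np.full(I.shape[:2], np.inf)
    for t in np.arange(step, 1.0 + 1e-9, step):      # candidates 0.05, 0.10, ..., 1.0
        J = (I - A0) / t + A0                        # candidate restoration, Eq. (13)
        J = np.clip(J, 1e-3, W)                      # our safeguard: keep intensities valid
        min_c = J.min(axis=2)
        sum_c = J.sum(axis=2) + 1e-6
        inner = min_c + W * 3.0 * min_c / sum_c      # term inside the patch minimum, Eq. (12)
        obj = np.abs(minimum_filter(inner, size=patch) - W)
        better = obj < best_obj                      # keep t where |J'_ds| decreases
        best_t[better] = t
        best_obj[better] = obj[better]
    return best_t
```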

Finally, a guided filter [37] (with a window radius of 21 and ε = 0.1) is used to transfer structure from the original image to the final refined transmission and, at the same time, to avoid halos and block artifacts. With the estimated atmospheric light and the refined transmission, the scene radiance can be recovered according to (3).
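For reference, a compact grayscale guided filter [37] built from box filters might look like the sketch below; we assume the guide is the gray hazy image normalized to [0, 1], and we interpret the stated radius 21 as the half-width of the averaging window.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=21, eps=0.1):
    """Transfer structure from `guide` to `src` (the coarse transmission map)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)                  # coefficients of the local linear model
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```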


IV. EXPERIMENTAL RESULTS
The performance of the proposed method is evaluated by comparing it with six state-of-the-art algorithms: HTM [14], He et al. [24], Long et al. [25], Pan et al. [26], and two deep learning methods, Cai et al. [28] and Qin et al. [30]. For fair comparison, we retrained Cai's and Qin's networks on synthesized hazy remote sensing images, which are obtained through the haze simulation method in [38]. To verify the effectiveness of the proposed method, the experimental results are analyzed from two aspects: quantitative comparisons and visual comparisons.

A. HAZY REMOTE SENSING IMAGE SIMULATION
According to [38], we first extract transmissions from real hazy images, and then take the extracted transmissions as masks and apply them to clear images through the haze imaging model (2) to generate simulated hazy images. Fig. 4 and Fig. 5 show hazy image simulation instances. In Fig. 4, the right column shows the two transmission masks extracted from the real images in the left column. Applying the two extracted transmission masks to the clear image in Fig. 5(a) through the haze imaging model (2), two synthetic images with different haze distributions are generated, see Fig. 5(b) and (c). Since the transmission masks come from real hazy images, the distribution of the haze in the synthetic images is random and spatially varied. Twenty transmission masks extracted from real remote sensing images with nonuniform haze are applied to 10 clear images with different land cover types, generating 200 synthetic images.
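The simulation step is simply the forward haze imaging model (2) applied with an extracted transmission mask; a minimal sketch follows (the atmospheric light value here is an arbitrary assumption, not a figure from the paper):

```python
import numpy as np

def synthesize_hazy_image(J, t_mask, A=0.9):
    """Add haze to a clear image J (H, W, 3, values in [0, 1]) through Eq. (2),
    using a transmission mask extracted from a real hazy image."""
    t = t_mask[..., None]                         # broadcast transmission over channels
    return np.clip(J * t + A * (1.0 - t), 0.0, 1.0)
```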

FIGURE 4. Extracting transmission masks from real hazy images, where the first column shows real hazy images and the second column shows the transmission masks extracted from them.

FIGURE 5. Simulated hazy remote sensing images. (a) Reference image. (b) and (c) are synthetic images generated using the two transmission masks in the right column of Fig. 4.

B. COMPARISONS ON SIMULATED IMAGES
In this section, we test the performance of the seven methods, i.e., the six comparison methods and the proposed method, on the simulation dataset.

1) DEHAZING EFFECTIVENESS
Fig. 6 shows the dehazing results for a synthetic image with slight and uniform haze, where (a) is the reference image without haze, (b) is the simulated image generated with a uniform transmission, and (c) to (i) are the dehazing results of HTM, He's method, Long's method, Pan's method, Cai's method, Qin's method, and the proposed method, respectively. In Long's method, a parameter M needs to be set experimentally for each image, which is 20 in Fig. 6(g). For Pan's method, a parameter C needs to be determined, which is fixed to 27 for Google Earth images [26]. From Fig. 6, it can be seen that HTM is not successful; He's, Cai's, and Qin's methods cause obvious color drift; and Long's and Pan's methods lead to over-dehazing, where the color of the dehazed image is over-enhanced. In contrast, our method achieves a good dehazing result and the color closest to the reference image among these methods. Therefore, the proposed method outperforms the six comparison methods in color retainability.

FIGURE 6. Dehazing instances. (a) Reference image. (b) Simulated image. (c)-(i) are the dehazing results of HTM, He's method, Long's method (M=20), Pan's method, Cai's method, Qin's method, and the proposed method, respectively.

Fig. 7 demonstrates the dehazing results of a synthetic image with nonuniform haze using these seven methods, where (b) is the simulated image with spatially varied haze generated using the transmission mask in the top-right corner of Fig. 4. It can be seen that Cai's method fails for nonuniform haze and the thick haze still exists. In the results of HTM and Qin's method, some texture information is lost. As for the dark channel prior-based dehazing methods, He's and Long's results contain residual haze and their colors are changed obviously. Pan's result and our result have colors closer to the original image than the other algorithms, but Pan's method makes the color of the dehazed image dark. Therefore, our method has better haze removal and color maintenance relative to the six comparison methods.

FIGURE 7. Dehazing instances. (a) Reference image. (b) Simulated image. (c)-(i) are the dehazing results of HTM, He's method, Long's method (M=5), Pan's method, Cai's method, Qin's method, and the proposed method, respectively.

Taking the reference image as ground truth, the dehazing result can be evaluated quantitatively by calculating statistics over the three RGB bands. The metrics mean square error (MSE) and structural similarity index (SSIM) [39] are used. A small MSE denotes a good dehazing result. SSIM represents structural similarity; the closer the SSIM is to 1, the more similar the two images. According to [40] and [41], the hue component in HSI space changes little after dehazing for a good dehazing method. Therefore, we also calculate the hue difference ΔH between the dehazed image and its corresponding reference image to evaluate the color change. The smaller the ΔH, the better the color is retained. Table 1 shows the average values of MSE, SSIM, and ΔH for the 200 synthetic images. Clearly, our approach outperforms the other six methods.
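The three metrics could be computed as follows (a sketch assuming float images in [0, 1] and a recent scikit-image whose structural_similarity accepts channel_axis; the HSI hue is computed with the standard trigonometric formula):

```python
import numpy as np
from skimage.metrics import structural_similarity

def hsi_hue(rgb):
    """Hue component (degrees) of the HSI model for an RGB image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return np.where(b > g, 360.0 - theta, theta)

def dehazing_metrics(dehazed, reference):
    """MSE, SSIM, and mean hue difference between a dehazed image and its reference."""
    mse = np.mean((dehazed - reference) ** 2)
    ssim = structural_similarity(dehazed, reference, channel_axis=-1, data_range=1.0)
    delta_h = np.mean(np.abs(hsi_hue(dehazed) - hsi_hue(reference)))
    return mse, ssim, delta_h
```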

TABLE 1. Assessment on dehazing performance for simulated images.

2) TRANSMISSION ESTIMATION
For a dehazing method, the more accurate the estimated transmission, the better the dehazing result. Fig. 8 shows the estimated transmissions for synthetic images generated using the same transmission but under different scenes, where the first row shows the transmission ground truth, the second row shows the simulated images under different scenes using that transmission, and the third row to the last row show the transmissions estimated from the three simulated images using He's method, Long's method, Pan's method, Cai's method, and the proposed method, respectively. Since HTM and Qin's method cannot estimate the transmission, they are excluded here. It can be seen that the transmissions extracted by the four comparison methods are influenced by the textures of the ground objects and change under different scenes, which decreases the accuracy of the recovered information. In contrast, the transmissions extracted by our method under the three different scenes are nearly the same as each other, very close to the ground truth (see the first row in Fig. 8), and nearly uninfluenced by the textures of the ground objects. Therefore, our transmission estimation is more stable and the images recovered using our transmission are more accurate.

FIGURE 8. Estimated transmissions, where the first row shows the transmission ground truth, the second row shows simulated images under different scenes using the transmission, and the third row to the last row show the transmissions estimated from the three simulated images using He's method, Long's method, Pan's method, Cai's method, and the proposed method, respectively.


TABLE 2. Accuracy of transmission estimation.

Taking the transmission masks used in simulating the images as ground truth, we compare the transmissions extracted from the simulated hazy images with their ground truth. Similarly, the MSE and SSIM metrics are computed to evaluate the transmissions estimated by these methods. Table 2 gives the statistical results for the 200 synthetic images. It can be seen that, with the lowest MSE and the SSIM closest to 1, our method obtains the most accurate transmissions.

C. COMPARISONS ON REAL IMAGES
In this section, the performances of the seven dehazing methods are tested on real hazy remote sensing images.

1) DEHAZING FOR HAZY IMAGES
Fig. 9 shows visual comparisons of the different algorithms on a real hazy remote sensing image. Obviously, many texture details are lost in HTM's result; He's, Long's, and Pan's algorithms overestimate the density of the haze and lead to over-enhanced color. Cai's method cannot remove dense haze, and Qin's method makes hazy regions dim after dehazing. Our method recovers most of the scene details occluded by haze and maintains the original color.

FIGURE 9. Dehazing instance for a real hazy image. (a) Hazy image. (b)-(h) are the dehazing results of HTM, He's method, Long's method (M=20), Pan's method, Cai's method, Qin's method, and the proposed method, respectively.

Fig. 10 shows the transmissions of Fig. 9(a) estimated by the five dehazing methods, i.e., He's, Long's, Pan's, Cai's, and our algorithms. It can be seen that the four compared methods' results are easily influenced by the texture of the ground objects. In addition, the red rectangle region in Fig. 9(a) is haze-free; obviously, the four compared methods overestimate the thickness of the haze there. In contrast, our method is insensitive to the ground object texture and, visually, the estimated transmission has a distribution similar to that of the real haze in the image. Therefore, for real hazy images, our method can extract transmissions more correctly and restore the scene more accurately than the compared methods.

FIGURE 10. Transmission instance for a real hazy image. (a)-(e) are the estimated transmissions of He's method, Long's method (M=20), Pan's method, Cai's method, and the proposed method, respectively.

2) DEHAZING FOR HAZE-FREE IMAGES
Even though the results after dehazing and enhancing are more attractive and vivid (like Fig. 9(e) and (g)), an effective dehazing method should recover the original scene as well as possible and, meanwhile, keep the haze-free regions unchanged. Fig. 11 shows several dehazing instances for haze-free images, where the second row is the haze-free region from the red rectangle in the first row, and the third row to the last row are the dehazing results of the six comparison methods and the proposed method. It can be seen that HTM's results are close to the original images in the last two columns but cause an obvious color change in the first column. He's, Long's, Pan's, and Cai's methods make the haze-free region more or less dark, while the colors of the images dehazed by Qin's method and our method are the closest to the original images. Table 3 gives these methods' average MSE, SSIM, and ΔH metrics for the three images in Fig. 11. Clearly, the difference caused by the proposed method is smaller than that of the other methods. Therefore, our method has the best information retainability for haze-free regions or clear images among these methods.

TABLE 3. Assessment on dehazing performance for haze-free images.


FIGURE 11. "Dehazing" of haze-free regions. The first row shows hazy images, the second row shows the haze-free regions from the red rectangles in the first row, and the third row to the last row show the dehazing results of the six comparison methods and the proposed method.

3) DEHAZING FOR MORE REAL IMAGES
Fig. 12 shows more dehazing instances for real images with haze, where the first row shows the real images with different land cover types, and the second row to the last row show the dehazing results of the six comparison methods and our method. Among these real images, the one in the first column has nonuniform haze, and the last two contain snow and water, respectively. It can be seen that HTM and Cai's method cannot correctly recover the clear scenes. He's, Long's, and Qin's results have obvious color drift; in particular, with Long's and Qin's methods, the snow in the second image is falsely removed as haze. Pan's method and our method have better dehazing effects than the other five methods, but Pan's results show color changes in the first and last images. In addition, although the images dehazed by Pan's method in Fig. 12 look acceptable, they actually have lower accuracy than our dehazing results, because the transmissions estimated by Pan's method are easily influenced by the ground object texture (see Fig. 10). Therefore, with clearly recovered scenes and good color consistency, our method outperforms the six comparison methods.

FIGURE 12. Dehazing for more real images with different land cover types, where the first row shows the real images with haze, and the second row to the last row show the dehazing results of HTM, He's method, Long's method, Pan's method, Cai's method, Qin's method, and the proposed method, respectively.

Real hazy images have no references, and the dehazing results change a lot compared with the original images. Therefore, MSE and SSIM cannot be used to assess the dehazing results. Here, we calculate the value of ΔH between the dehazed image and its original image to evaluate the dehazing performance of the different methods; a better dehazed image should have a lower ΔH. Table 4 shows the average ΔH for the three images in Fig. 12. Clearly, our method has the better dehazing performance on real hazy images.

TABLE 4. Assessment on dehazing performance for real hazy images.

D. COMPUTATIONAL COMPLEXITY
Table 5 shows the average dehazing time of our method and the six compared methods on the 200 simulated images. All experiments are run on a PC with a 3.5 GHz Intel Core i5-6600K processor. The two learning-based methods are tested in Caffe with CPU; the other methods are tested in MATLAB 2018a. It can be seen that our method's processing time is very close to that of He's and Pan's methods and less than that of the other four methods. Considering that our method has the best dehazing effect, it is superior to the other methods.

TABLE 5. Processing time (s).

V. CONCLUSION
Haze is common in remote sensing images and influences the results of subsequent analysis. Although some dehazing methods have been developed, adaptive dehazing remains a difficult point. A good dehazing method should restore the information occluded by haze and, at the same time, keep the information in haze-free regions unchanged. In this paper, a novel adaptive dehazing method based on the haze imaging model is proposed for remote sensing images from Google Earth. The dark channel and saturation are a pair of complementary features for a clear image, and a powerful prior, namely the dark channel-saturation prior, is proposed for remote sensing images. Based on the new prior, the optimal transmission is estimated, and the spatially varying haze is removed. Since no parameter needs to be set, the proposed method is adaptive. Experimental results show that, compared with other state-of-the-art dehazing methods, the transmission extracted by our method is insensitive to the ground object texture, the nonuniform haze is effectively removed, and, at the same time, the information in haze-free regions is well retained.

REFERENCES
[1] F. X. Kneizys, E. P. Shettle, L. W. Abreu, J. H. Chetwynd, and G. P. Anderson, "Users guide to LOWTRAN 7," Air Force Geophysics Lab, Hanscom AFB, MA, USA, Tech. Rep. AFGL-TR-88-0177, 1988.
[2] A. Berk, L. S. Bernstein, and D. C. Robertson, "MODTRAN: A moderate resolution model for LOWTRAN," Spectral Sci. Inc., Burlington, MA, USA, Tech. Rep. SSI-TR-124, 1987.
[3] G. Dal Moro and L. Halounova, "Haze removal for high-resolution satellite data: A case study," Int. J. Remote Sens., vol. 28, no. 10, pp. 2187–2205, 2007.
[4] P. S. Chavez, Jr., "An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data," Remote Sens. Environ., vol. 24, no. 3, pp. 459–479, Apr. 1988.
[5] X. Fu, J. Wang, D. Zeng, Y. Huang, and X. Ding, "Remote sensing image enhancement using regularized-histogram equalization and DCT," IEEE Geosci. Remote Sens. Lett., vol. 12, no. 11, pp. 2301–2305, Nov. 2015.
[6] W. Henan, Y. Guang, X. Zhonglin, L. Dejun, and Y. Yuwei, "Remote sensing image enhancement method based on multi-scale Retinex," in Proc. IEEE Inf. Technol., Comput. Eng. Manage. Sci. (ICM), vol. 3, Sep. 2011, pp. 15–18.
[7] H. Shen, H. Li, Y. Qian, L. Zhang, and Q. Yuan, "An effective thin cloud removal procedure for visible remote sensing images," ISPRS J. Photogramm. Remote Sens., vol. 96, pp. 224–235, Oct. 2014.
[8] S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proc. CVPR, vol. 2, 1999, pp. 820–827.
[9] M. Ju, Z. Gu, and D. Zhang, "Single image haze removal based on the improved atmospheric scattering model," Neurocomputing, vol. 260, pp. 180–191, Oct. 2017.
[10] R. He, Z. Wang, Y. Fan, and D. D. Feng, "Multiple scattering model based single image dehazing," in Proc. Ind. Electron. Appl. (ICIEA), 2013, pp. 733–737.
[11] M. Ju, C. Ding, D. Zhang, and Y. J. Guo, "Gamma-correction-based visibility restoration for single hazy images," IEEE Signal Process. Lett., vol. 25, no. 7, pp. 1084–1088, Jul. 2018.
[12] A. Makarau, R. Richter, R. Müller, and P. Reinartz, "Haze detection and removal in remotely sensed multispectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 9, pp. 5895–5905, Sep. 2014.
[13] Y. Zhang, B. Guindon, and J. Cihlar, "An image transform to characterize and compensate for spatial variations in thin cloud contamination of Landsat images," Remote Sens. Environ., vol. 82, nos. 2–3, pp. 173–187, 2002.
[14] L. Qi, X. Gao, L. He, and W. Lu, "Haze removal for a single visible remote sensing image," Signal Process., vol. 137, pp. 33–43, Aug. 2017.
[15] X. Y. He, J. B. Hu, W. Chen, and X. Y. Li, "Haze removal based on advanced haze-optimized transformation (AHOT) for multispectral imagery," Int. J. Remote Sens., vol. 31, no. 20, pp. 5331–5348, 2010.
[16] H. Jiang, L. Ning, and L. Yao, "A high-fidelity haze removal method based on HOT for visible remote sensing images," Remote Sens., vol. 8, no. 10, p. 844, 2016.
[17] D. Singh and K. Vijay, "Dehazing of remote sensing images using fourth-order partial differential equations based trilateral filter," IET Comput. Vis., vol. 12, no. 2, pp. 208–219, 2017.
[18] D. Singh and K. Vijay, "A novel dehazing model for remote sensing images," Comput. Electr. Eng., vol. 69, pp. 14–27, Jul. 2018.
[19] Z. An and Z. Shi, "Scene learning for cloud detection on remote-sensing images," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 8, no. 8, pp. 4206–4222, Aug. 2015.
[20] Z. Shi, X. Yu, Z. Jiang, and B. Li, "Ship detection in high-resolution optical imagery based on anomaly detector and local shape feature," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 8, pp. 4511–4523, Aug. 2014.
[21] M. Dikmen and U. Halici, "A learning-based resegmentation method for extraction of buildings in satellite images," IEEE Geosci. Remote Sens. Lett., vol. 11, no. 12, pp. 2150–2153, Dec. 2014.
[22] S. Aksoy, I. Z. Yalniz, and K. Tasdemir, "Automatic detection and segmentation of orchards using very high resolution imagery," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 8, pp. 3117–3131, Aug. 2012.


[23] W. Ni, X. Gao, and Y. Wang, "Single satellite image dehazing via linear intensity transformation and local property analysis," Neurocomputing, vol. 175, pp. 25–39, Jan. 2016.
[24] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353, Dec. 2011.
[25] J. Long, Z. Shi, W. Tang, and C. Zhang, "Single remote sensing image dehazing," IEEE Geosci. Remote Sens. Lett., vol. 11, no. 1, pp. 59–63, Jan. 2014.
[26] X. Pan, F. Xie, Z. Jiang, and J. Yin, "Haze removal for a single remote sensing image based on deformed haze imaging model," IEEE Signal Process. Lett., vol. 22, no. 10, pp. 1806–1810, Oct. 2015.
[27] K. Tang, J. Yang, and J. Wang, "Investigating haze-relevant features in a learning framework for image dehazing," in Proc. CVPR, Jun. 2014, pp. 2995–3002.
[28] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "DehazeNet: An end-to-end system for single image haze removal," IEEE Trans. Image Process., vol. 25, no. 11, pp. 5187–5198, Nov. 2016.
[29] H. Jiang and N. Lu, "Multi-scale residual convolutional neural network for haze removal of remote sensing images," Remote Sens., vol. 10, no. 6, p. 945, 2018.
[30] M. Qin, F. Xie, W. Li, Z. Shi, and H. Zhang, "Dehazing for multispectral remote sensing images based on a convolutional neural network with the residual architecture," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 11, no. 5, pp. 1645–1655, May 2018.
[31] S. G. Narasimhan and S. K. Nayar, "Vision and the atmosphere," Int. J. Comput. Vis., vol. 48, no. 3, pp. 233–254, 2002.
[32] S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), vol. 1, Jun. 2000, pp. 598–605.
[33] H. Koschmieder, "Theorie der horizontalen Sichtweite," in Beiträge zur Physik der freien Atmosphäre. Munich, Germany: Keim & Nemnich, 1924, pp. 33–53.
[34] K. Nishino, L. Kratz, and S. Lombardi, "Bayesian defogging," Int. J. Comput. Vis., vol. 98, no. 3, pp. 263–278, Jul. 2012.
[35] R. Fattal, "Single image dehazing," ACM Trans. Graph., vol. 27, no. 3, pp. 1–9, 2008.
[36] A. J. Preetham, P. Shirley, and B. Smits, "A practical analytic model for daylight," in Proc. ACM SIGGRAPH, 1999, pp. 91–100.
[37] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397–1409, Jun. 2013.
[38] X. Pan, F. Xie, Z. Jiang, Z. Shi, and X. Luo, "No-reference assessment on haze for remote-sensing images," IEEE Geosci. Remote Sens. Lett., vol. 13, no. 12, pp. 1855–1859, Dec. 2016.
[39] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[40] C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, "A fast semi-inverse approach to detect and remove the haze from a single image," in Proc. ACCV, 2010, pp. 501–504.
[41] H. Liu, J. Yang, Z.-P. Wu, Q. N. Zhang, and Y. Deng, "A fast single image dehazing method based on dark channel prior and Retinex theory," (in Chinese), Acta Automatica Sinica, vol. 41, no. 7, pp. 1264–1273, Jul. 2015.

FENGYING XIE received the Ph.D. degree in pattern recognition and intelligent systems from Beihang University, Beijing, China, in 2009. She was a Visiting Scholar with the Laboratory for Image and Video Engineering, The University of Texas at Austin, from 2010 to 2011. She is currently a Professor with the School of Astronautics, Beihang University. Her research interests include biomedical image processing, remote sensing image understanding and application, image quality assessment, and object recognition.

JIAJIE CHEN received the B.S. degree from the School of Astronautics, Beihang University, Beijing, China, in 2017. His research interests include remote sensing image dehazing and object detection.

XIAOXI PAN received the B.S. degree from the School of Astronautics, Northwestern Polytechnical University, Xi'an, China, in 2013, and the M.S. degree from the School of Astronautics, Beihang University, Beijing, China, in 2016. Her research interests include remote sensing image dehazing and quality assessment.

ZHIGUO JIANG received the B.E., M.S., and Ph.D. degrees from Beihang University, Beijing, China, in 1987, 1990, and 2005, respectively. He was appointed as a Professor in image processing and pattern recognition in 2005. His research interests include remote sensing image analysis, medical imaging and analysis, and target classification, detection, and recognition. He currently serves as a standing member of the Executive Council of the China Society of Image and Graphics.
