Received: 17 March 2017 Accepted: 19 March 2017
DOI: 10.1002/cav.1776
SPECIAL ISSUE PAPER
Sky detection- and texture smoothing-based high-visibility haze removal from images and videos
Chunxiao Liu Yiyun Shen Yaqi Shao Jinwei Zhao Xun Wang
School of Computer Science and
Information Engineering, Zhejiang
Gongshang University, Hangzhou, China
Correspondence
Chunxiao Liu, School of Computer Science
and Information Engineering, Zhejiang
Gongshang University, Hangzhou, China.
Email: [email protected].
Funding information
Zhejiang Provincial Natural Science
Foundation of China, Grant/Award Number:
LY14F020004; National Natural Science
Foundation of China, Grant/Award Number:
61003188, 61379075 and U1609215; Talent
Young Foundation of Zhejiang Gongshang
University, Grant/Award Number: QZ13-9;
National Key Technology R&D Program,
Grant/Award Number: 2014BAK14B01;
Zhejiang Provincial Commonweal
Technology Applied Research Projects of
China, Grant/Award Number: 2015C33071;
State Key Lab of Virtual Reality Technology
and Systems at Beihang University,
Grant/Award Number:
BUAA-VR-13KF-2013-3; Zhejiang
Provincial Key Laboratory of Electronic
Commerce and Logistics Information
Technology, Grant/Award Number:
2011E10005; Zhejiang Provincial Research
Center of Intelligent Transportation
Engineering and Technology, Grant/Award
Number: 2015ERCITZJ-KF1
Abstract

To address the gloomy sky and the low contrast caused by residual fog in existing image dehazing methods, we propose a robust haze removal algorithm for
images and videos. First, a sky detection-based adaptive atmospheric light estimation
method is designed for brighter and cleaner restoration results for the sky regions.
Second, in order to reconstruct a transmission map in line with the depth variation,
we preprocess the input image with texture smoothing to keep the color consistency
inside the same planar object and devise a texture smoothing-based robust trans-
mission estimation method, with which the contrast and color saturation of the fog-free image are greatly improved. Finally, the restored results are post-processed with the joint bilateral filter for noise removal. Moreover, a guided
filter-based temporally coherent atmospheric light smoothing strategy and a Gaus-
sian filter-based spatial-temporally coherent transmission smoothing strategy are put
forward for video dehazing, which can ensure the spatial as well as temporal conti-
nuity of the haze-free videos. Experimental results show that the recovered haze-free
images and videos have high contrast and color saturation with cleaner sky regions,
and the haze-free videos are free of jittering and flickering phenomena.
KEYWORDS
atmospheric light, haze removal, sky detection, temporal coherence, texture smoothing, transmission
1 INTRODUCTION
Haze or fog forms when water droplets and solid particles are suspended and mixed in the air. The suspended substances scatter light and degrade images shot outdoors, which severely hinders video surveillance, traffic regulation, and image recognition tasks. As a result, haze
removal from images and videos has become a popular inter-
disciplinary research area of computer vision and computer
graphics.
Image and video dehazing has three main technical chal-
lenges. First, sky regions are prone to color distortion and
gloomy appearance. We put forward an adaptive atmospheric
light estimation method to produce a bright and clean sky. Second, non-sky regions usually suffer from halo effects and insufficient contrast enhancement. A robust transmission estimation method is designed to keep the transmission map in line with the depth variation and obtain a high-contrast non-sky restoration result. Third, the recovered haze-free videos easily exhibit jittering and flickering. We propose a temporally coherent
atmospheric light smoothing strategy and a spatial-temporally
coherent transmission smoothing strategy to avoid jittering
and flickering in the haze-free videos.
2 RELATED WORK
Haze-free images generally have higher contrast than the hazy
ones; therefore, some contrast enhancement-based image
dehazing methods1,2 have been proposed. However, they tend to cause unnatural hue shifts in the recovered results. The current
mainstream image dehazing methods are usually based on the
hazy image degradation model,3 but the lack of depth infor-
mation remains a great challenge. Therefore, some of them are built upon more than a single hazy image, such as multiple images of the same scene taken under different weather conditions4 or with different polarization angles,5 or the depth of the scene.6 These have poor practicality due to their stringent input requirements. Different types of assumptions and
priors make it possible to effectively remove haze from a sin-
gle image. Tan7 removes haze by maximizing the local contrast of the hazy images, which is prone to excessive local contrast enhancement. Zhu et al.8 estimate transmission
according to the relationship between brightness and satu-
ration of the hazy images, yet they set a uniform scattering
coefficient, which tends to cause locally insufficient dehazing effects.
Shiau et al.9 employ an extremum approximate method and an
edge preserving mean filter to estimate atmospheric light and
transmission, respectively, and design an 11-stage pipelined
hardware architecture to reduce computing time, but color
distortion still exists.
Compared with existing single image-based dehazing
methods, Fattal’s method10 and the He et al. method3 present better dehazing effects. The former10 is founded on the color-line prior; however, the color-line selection process is rather strict, leaving most local transmissions unavailable and reducing the reliability of the transmission map. The latter3 builds upon the dark channel prior that
applies to most outdoor hazy images, but there are still two
deficiencies. First, sky regions disagree with the dark channel
prior, and the restored haze-free images often present gloomy
or color distorted sky regions. In this regard, Liu et al.11
introduce an adaptive protection factor to increase the trans-
mission values for the sky regions, but it leads to foggy sky
regions with insufficient dehazing effect. Xing et al.12 take
the mean value of sky regions as the atmospheric light, but
it does not apply to hazy images without sky regions. Sec-
ond, transmission should be refined by soft matting3 or guided
filter13 to reduce halo effect, but they tend to trigger unwanted
transmission fluctuations inside the same planar object, which
is detrimental to the contrast enhancement. In this regard,
Zhao et al.14 describe a patch shift-based transmission esti-
mation strategy to reduce halo effect, but residual halo effect
is roughly processed with replacement, which will generate
visual defects around object boundaries. Lai et al.15 estimate
the transmission map by solving a large constrained matrix
to avoid unreasonable transmission fluctuations. However, it
is inefficient, and color distortion still exists in the restored
results.
Although video dehazing has broad application scenarios as
well, less research work has been done. Gibson et al.16 obtain
the transmission sequence with motion vector estimation, but
it causes flickering if there is no motion within a local patch.
Chen et al.17 suppress visual artifacts by minimizing residual
gradients, but it is detrimental to contrast enhancement. Com-
pared with existing video dehazing methods, Kim et al.18
present a better one. It optimizes the contrast enhancement
term and the information loss term simultaneously and cal-
culates transmission for each frame with overlapped image
patches, but it has serious halo effect and flickering phenom-
ena around the boundaries of the moving objects.
To address the aforementioned deficiencies among the
existing haze removal methods, we propose a sky detection-
and texture smoothing-based robust haze removal algorithm
for images and videos. It aims for brighter and cleaner sky
regions without color distortion, and higher contrast and satu-
ration for the non-sky regions. It results in coherent haze-free
videos without jittering and flickering.
3 OUR ROBUST IMAGE AND VIDEO DEHAZING ALGORITHM
The haze imaging model3 can be expressed as
I(x) = J(x)t(x) + A(1 − t(x)), (1)
where I(x) represents the color value of pixel x in the input
hazy image I, J is the expected haze-free image, t is the trans-
mission, and A is the atmospheric light. The transmission t can be expressed by the following formula3:
t(x) = e^(−βd(x)), (2)

where d(x) represents the depth value of pixel x and β is the scattering coefficient.
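For concreteness, the following is a minimal NumPy sketch (ours, not from the paper) of how a haze-free image is recovered once t and A are known, by inverting Equation 1; the function name and the lower bound t_min are illustrative choices, with the 0.1 floor following common practice in He et al.3:

```python
import numpy as np

def recover_scene_radiance(I, t, A, t_min=0.1):
    """Invert Equation 1: J = (I - A) / t + A.

    I: hazy image as a float array in [0, 1], shape (H, W, 3).
    t: transmission map, shape (H, W).
    A: atmospheric light, shape (3,).
    t_min keeps the division stable in dense-haze regions where t
    approaches zero (an illustrative floor, as in He et al.).
    """
    t = np.clip(t, t_min, 1.0)[..., None]  # broadcast over color channels
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```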
The sky regions do not conform to the dark channel prior3
and will show color distortion or graying phenomena
in the restored results. Considering that transmission adjust-
ment fails for the sky regions, atmospheric light A is used
to optimize sky regions, and a sky detection-based adaptive
atmospheric light estimation method is proposed here. For
the hazy images with sky, bright sky restoration results with
significant dehazing effects can be achieved. Then, texture
smoothing is applied to preprocess the input image, and a tex-
ture smoothing-based robust transmission estimation method
is proposed to avoid the transmission fluctuations inside the
planar object with same depth. Thus, for those regions that
FIGURE 1 Our image dehazing flowchart
conform to the dark channel prior, the contrast will be pro-
moted in the restored results. At last, joint bilateral filtering
is used to eliminate the noise influence in the recovered
haze-free image. Figure 1 shows the flowchart of our image
dehazing algorithm.
For video dehazing, a temporally coherent atmospheric
light smoothing strategy and a spatial-temporally coherent
transmission smoothing strategy are designed to guarantee the
spatial and temporal coherence of the haze-free videos and
avoid jittering and flickering phenomena.
3.1 Sky detection-based adaptive atmospheric light estimation

Equation 1 shows that when t(x) approaches zero, I(x) will
approach A; therefore, A can be approximated by the inten-
sity of most haze-opaque regions in the hazy image. For the
images with sky, the estimated atmospheric light values from
the previous methods3 will be higher than most of the sky
pixel values, which results in darker sky pixels and gloomy
sky regions in the restored results. Conversely, if the atmo-
spheric light values are lower than most of the sky pixel
values, the recovered sky regions will be brighter. Thus, we
design an adaptive atmospheric light estimation method to get
relatively small atmospheric light values for the images with
sky, which can make the restored sky regions brighter and
cleaner.
A support vector machine (SVM)-based atmospheric light
validation method14 is carried out to obtain the initial
atmospheric light A′. Then, a one-dimensional histogram segmentation-based sky recognition method11 is adopted to detect the sky regions Ωsky. If Ωsky ≠ ∅, the sky pixels are sorted
by luminance. In order to get smaller atmospheric light values
and avoid the noise influence from the sky detection result, the
sky pixels with the minimum 0.1% to 0.5% luminance values
are chosen and denoted as Ω′sky. Thus, the atmospheric light
values can be taken as
A = { mean_{x∈Ω′sky}(I(x)),  Ω′sky ≠ ∅;
      A′,                    Ω′sky = ∅,  (3)
where mean(·) is an average operator and I is the input hazy
image. If the image contains sky regions, the obtained atmospheric light values are low enough to ensure a brighter sky restoration result, which is also reasonable because the depth of the sky is infinite. If the image contains no sky regions, the atmospheric light values are taken as A′ directly.
Because the SVM-based atmospheric light validation method14 effectively avoids interference from bright objects such as car lights, the location of the atmospheric light properly reflects the most haze-opaque regions of the image.
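A NumPy sketch of Equation 3 may clarify the selection step; the channel-mean luminance proxy and the function interface are our assumptions, while the sky mask and the fallback A′ come from the detectors cited above:

```python
import numpy as np

def adaptive_atmospheric_light(I, sky_mask, A_init, lo=0.001, hi=0.005):
    """Equation 3: average the darkest 0.1%-0.5% of sky pixels when sky exists.

    I: hazy image in [0, 1], shape (H, W, 3).
    sky_mask: boolean mask from the sky detection step (method of ref. 11).
    A_init: fallback atmospheric light A' from SVM-based validation (ref. 14).
    """
    sky = I[sky_mask]                    # (N, 3) detected sky pixels
    if sky.shape[0] == 0:
        return A_init                    # no sky detected: use A'
    lum = sky.mean(axis=1)               # simple luminance proxy (assumption)
    order = np.argsort(lum)              # ascending luminance
    n_lo = int(len(order) * lo)
    n_hi = max(int(len(order) * hi), n_lo + 1)
    darkest = sky[order[n_lo:n_hi]]      # pixels in the 0.1%-0.5% band
    return darkest.mean(axis=0)          # mean color taken as A
```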
Figure 2 shows an image dehazing example with sky. The
red and blue regions in Figure 2(c) are the locations of atmo-
spheric light estimated by the He et al. method3 and our
method, respectively, which show that our atmospheric light
values are smaller. Figure 2(h) shows the haze-free image
gained by replacing our adaptive atmospheric light estima-
tion module with that of the He et al. method.3 Compared
with the input image, the sky region in Figure 2(h) is gloomy
with serious color distortion. On the contrary, our sky restora-
tion result is brighter and cleaner. For the hazy image without
sky in Figure 3, the SVM-based atmospheric light validation
method14 is directly used to estimate the atmospheric light
values. The red boxes in Figure 3(b) indicate the rejected
atmospheric light positions, and the green box represents the
finally accepted atmospheric light position.
3.2 Texture smoothing-based robust transmission estimation

The He et al. method3 generates halo effect near object bound-
aries. The existing solutions for halo effect elimination3,13 are
to refine the coarse transmission map under the guidance of
the input image, which achieves consistent edge information
between the transmission map and the input image. However,
they will result in unwanted texture fluctuations inside the
planar objects and reduce the dehazing effect in the halo adja-
cent regions, which is detrimental to the contrast enhancement
of haze-free images. Thus, a texture smoothing-based robust
transmission estimation method is proposed to solve the above
problems.
Texture smoothing is first applied to preprocess the input
image with L0 gradient minimization filter,19 that is,
arg min_{I′} { ∑_p (I′p − Ip)² + λC(I′) },  C(I′) = #{ p : |∂x I′p| + |∂y I′p| ≠ 0 },  (4)
where I′p is the color value of pixel p after texture smoothing, I is the input image, C(I′) is the number of pixels with nonzero gradient in I′, and λ is the smoothing coefficient. The L0 gradient minimization filter not only suppresses texture, making pixel values inside the same planar object as consistent as possible, but also enhances structure, preventing different planar objects from being smoothed together. Extensive experiments on hazy images show that the best filtering parameter lies in λ ∈ (0.01, 0.05). Because λ values within this narrow span produce little difference among the smoothing results, we set the default value of λ to 0.03 for the convenience of users.

FIGURE 2 Haze removal example for the “Tiananmen” image with sky. (a) input image, (b) sky detection, (c) atmospheric light estimation, (d) preliminary transmission t̃, (e) fine transmission map with (a), (f) texture smoothing of (a), (g) fine transmission map with (f), (h) haze removal with old A,3 (i) our haze removal result
Then, we obtain a preliminary transmission map t̃ from I′ and A with a patch shift-based transmission calculation strategy,14 which effectively reduces unreliable transmissions. Only a few complex-structure pixels retain halo effects in the haze-free images recovered with t̃; these are resolved with the guided filter,13 which refines t̃ into t under the guidance of I′. Because the texture details in I′ are effectively suppressed and the pixel values inside the same planar objects are similar after the texture smoothing process,19 the refinement under the guidance of I′ not only maintains the transmission consistency inside the planar objects but also keeps the transmission varying with the depth, which is usually where the structures of the input image are.
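The pipeline of this subsection can be sketched as follows, assuming opencv-contrib's ximgproc module for the L0 smoothing and the guided filtering. Note two simplifications that are ours, not the paper's: a plain dark-channel estimate stands in for the patch shift-based strategy of Zhao et al.,14 and the haze-retention factor omega follows He et al.3:

```python
import cv2
import numpy as np

def robust_transmission(I_u8, A, lam=0.03, omega=0.95, patch=15):
    """Sketch of Section 3.2 (assumes opencv-contrib, i.e. cv2.ximgproc).

    I_u8: hazy image as an 8-bit BGR array. A: atmospheric light in [0, 1].
    """
    # Texture smoothing with L0 gradient minimization (Equation 4).
    I_s = cv2.ximgproc.l0Smooth(I_u8, None, lam)
    I_s = I_s.astype(np.float32) / 255.0

    # Coarse transmission from the dark channel of the smoothed image
    # (a plain dark channel; the paper uses a patch-shift variant).
    norm = I_s / A[None, None, :].astype(np.float32)
    kernel = np.ones((patch, patch), np.uint8)
    dark = cv2.erode(norm.min(axis=2), kernel)   # min over channels and patch
    t = 1.0 - omega * dark

    # Refine under the guidance of the texture-smoothed image I', so the
    # transmission stays flat inside planar objects but follows structure.
    t = cv2.ximgproc.guidedFilter(I_s, t.astype(np.float32), 40, 1e-3)
    return np.clip(t, 0.05, 1.0)
```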
In Figure 3(d) and 3(g1), the transmission map refined under the guidance of I exhibits obvious texture fluctuations on the wall, whose depth is approximately constant, which is inconsistent with the depth variation. Figure 3(e) shows the texture smoothing result I′ of I, where textures are suppressed and pixel values on the wall are consistent. Figure 3(f) and 3(g2) display the transmission map refined under the guidance of I′, where transmissions inside the same planar objects remain consistent. Comparing Figure 3(h) and 3(i),
we can find that our result has higher contrast and saturation
than the He et al. result.3
FIGURE 3 Haze removal example for the “Mansion” image without sky. (a) input image, (b) atmospheric light validation, (c) preliminary transmission t̃, (d) refined transmission with (a), (e) texture smoothing of (a), (f) refined transmission with (e), (g1) local magnification of (d), (g2) local magnification of (f), (h) our defogging result, (i) the He et al. result3

3.3 Joint bilateral filtering-based post-processing

We substitute A obtained by the adaptive atmospheric light estimation method and t obtained by the robust transmission estimation method into Equation 1 for single image dehazing and denote the resulting haze-free image as J̃. J̃ is full of color,
with high contrast and visibility; however, noise appears in
the originally dark areas. Here, the joint bilateral filter is used to post-process J̃ as follows:

Jp = (1/Kp) ∑_{q∈Ωp} J̃q f(p − q) g(Ip − Iq),  (5)

where Jp represents the color value of pixel p in the image after post-processing, I is the input image, f(x) = exp(−||x||²/σs²) is the Gaussian weight function in the spatial domain, g(x) = exp(−||x||²/σr²) is the Gaussian weight function in the color domain, and Kp is the sum of the weights of all pixels in a local region Ωp centered on p. The joint bilateral filter refers to both the spatial information of pixels and the color information of I, which allows it to preserve image edges while removing noise.
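Assuming the jointBilateralFilter from opencv-contrib's ximgproc module is available, Equation 5 amounts to a one-line post-processing step; the filter diameter and the two sigma values below are illustrative, not values given by the paper:

```python
import cv2

def denoise_dehazed(J_tilde, I, d=9, sigma_color=25, sigma_space=9):
    """Equation 5: joint bilateral post-processing of the dehazed image.

    J_tilde: dehazed image before post-processing (8-bit, same size as I).
    I: original hazy input used as the guidance image, so edges follow I
       while noise amplified in the originally dark areas is smoothed out.
    """
    return cv2.ximgproc.jointBilateralFilter(I, J_tilde, d,
                                             sigma_color, sigma_space)
```

Guiding with the hazy input rather than with J̃ itself is what makes the filter "joint": the edge-stopping term g(Ip − Iq) is evaluated on the clean-edged original, not on the noisy dehazed result.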
Figure 4(b) and 4(c) display the haze-free images J̃ and
J before and after post-processing, respectively. From their
local magnifications, Figure 4(d1) and 4(d2), we can see that
noise in the haze-free image is effectively suppressed, while
the detail and edge information are well preserved.
3.4 Spatial-temporally coherent video dehazing

Recovering each frame of a hazy video with the single image
dehazing algorithm directly will result in hue jittering and
local flickering phenomena in the haze-free videos due
to the temporal incoherence of atmospheric lights as well
as transmissions among adjacent frames. Consequently, temporally smoothing the atmospheric lights and spatial-temporally smoothing the frame-wise texture smoothed video are both helpful.
To address abrupt changes of the atmospheric lights, a guided
filter-based temporally coherent atmospheric light smooth-
ing strategy is proposed. Let’s denote the average luminance
sequence of the video as SL(n) and the atmospheric light
sequence of the video as S′A(n), where n is the frame index. Because SL(n) is usually smooth across adjacent frames, we smooth S′A(n) under the guidance of SL(n) with a one-dimensional guided filter to get SA(n), which avoids abrupt hue and luminance variations in the haze-free videos.
Nevertheless, in order to prevent the atmospheric lights from
the influence of sudden SL(n) changes, we further slow down
the changes of the atmospheric lights as
A(n) = αA(n − 1) + (1 − α)SA(n), (6)

where A(n) is the atmospheric light of frame n ⩾ 2 and the damping coefficient α is taken as 0.95 here.
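Equation 6 is a simple recursive damping of the already guided-filter-smoothed sequence SA(n); a minimal sketch, where the list-based interface is our assumption:

```python
def damp_atmospheric_lights(S_A, alpha=0.95):
    """Equation 6: slow down frame-to-frame atmospheric light changes.

    S_A: sequence of per-frame atmospheric lights (e.g. NumPy RGB triples),
         already smoothed under the guidance of the luminance sequence.
    Frame 1 keeps its own value; each later frame blends in only
    (1 - alpha) of the new estimate, damping sudden changes in S_L(n).
    """
    A = [S_A[0]]
    for n in range(1, len(S_A)):
        A.append(alpha * A[-1] + (1 - alpha) * S_A[n])
    return A
```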
To address the temporal and spatial incoherence of the
transmissions, a Gaussian filter-based spatial-temporally
coherent transmission smoothing strategy is proposed to fur-
ther smooth the frame-wise texture smoothed videos.
FIGURE 4 Joint bilateral filtering-based post-processing result
FIGURE 5 Comparison of different image defogging methods. (a) input images, (b) the He et al. results,3 (c) Fattal’s results,10 and (d) our
results
Ĩ′p(n) = (1/K) ∑_{Δn} ∑_{q∈Ωp} f(p − q, Δn) I′q(n + Δn),  (7)

where Ĩ′p(n) is the expected color value for pixel p in frame n and I′ is the frame-wise texture smoothed result of the original frame I. Ωp is a 3 × 3 local image patch centered on pixel p, Δn ∈ [−7, 7] indicates the 7 frames before and after the current frame, f(x, y) = exp(−(||x||² + ||y||²)/σ²) is a Gaussian weight function, and K is the normalizing weight coefficient. Ĩ′(n) is used with A(n) to obtain the transmissions and the restored result of frame n.
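Equation 7 can be implemented directly as a weighted average over the 3 × 3 spatial patch and the ±7 surrounding frames; the sketch below uses wrap-around borders via np.roll for brevity and an illustrative σ, neither of which the paper specifies:

```python
import numpy as np

def spatiotemporal_smooth(frames, n, sigma=3.0):
    """Equation 7: Gaussian smoothing of the texture-smoothed frame stack.

    frames: array (T, H, W, C), the frame-wise texture smoothed video I'.
    n: index of the frame to smooth. Returns the smoothed frame.
    """
    T = frames.shape[0]
    out = np.zeros(frames.shape[1:], dtype=np.float64)
    K = 0.0
    for dn in range(-7, 8):                   # +/-7 frame temporal window
        if not 0 <= n + dn < T:
            continue
        for dy in (-1, 0, 1):                 # 3x3 spatial patch
            for dx in (-1, 0, 1):
                w = np.exp(-(dx * dx + dy * dy + dn * dn) / sigma ** 2)
                out += w * np.roll(frames[n + dn], (dy, dx), axis=(0, 1))
                K += w
    return out / K                            # normalize by total weight K
```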
Our video dehazing examples are shown in Figure 7; they achieve our goal of restoring high-visibility haze-free results while avoiding jittering and flickering phenomena.
4 EXPERIMENTAL RESULTS AND DISCUSSIONS
Figures 5 and 6 show the comparison of image dehazing
effects between our algorithm and five existing methods.
Figure 7 shows the comparison of video dehazing effects
between our algorithm and the Kim et al. method.18
Figure 5 compares the dehazing effects of the He et al. results,3 Fattal’s results,10 and our results. Example (1) shows that the dehazing intensity of both the He et al. result3 and Fattal’s result10 is insufficient. Our result is brighter, and the contrast and saturation of the wall and leaves are higher. Example (2) reveals inconsistent dehazing degrees around the middle branches in the He et al. result,3 resulting in visual disharmony, and color distortion around the left branches in Fattal’s result.10 Our result is bright with vivid color, without visual disharmony or color distortion. Example (3) indicates that our result has higher contrast and saturation than both the He et al. result,3 which shows a bad overall dehazing effect, and Fattal’s result,10 which shows low overall color fluctuations.
Figure 6 compares our algorithm with three dark channel
prior-based improved image dehazing methods. The Liu et al.
results11 have color distortion phenomena in Examples (1)
and (3) and foggy sky without sufficient dehazing intensity in
Examples (2) and (4). The Xing et al. method12 leads to serious over-exposure for images without sky, as in Example (1), and presents low contrast and saturation in all examples, as well as color distortion phenomena in Example (3). The Zhao et al. results14 have low contrast in Example (1) and serious color distortion phenomena in the sky regions of the other examples. On the contrary, our results hold high contrast and saturation without color distortion phenomena, as well as clear and bright sky regions.

FIGURE 6 Comparison with three dark channel prior-based improved image dehazing methods. (a) input images, (b) the Liu et al. results,11 (c) the Xing et al. results,12 (d) the Zhao et al. results,14 and (e) our results

FIGURE 7 Video dehazing comparison with the Kim et al. method18
Figure 7 compares video dehazing effects from our
algorithm and the Kim et al. method.18 Example (1) shows
that our results have higher contrast and keep the same warm
colors as the hazy video; however, the Kim et al. results18 look
a little cold. Examples (2) and (3) show that the Kim et al.
results18 have halo effect around object boundaries, resulting
in serious flickering phenomena. On the contrary, our results
have higher contrast and brighter sky regions, without halo
effect. For relevant videos, please refer to our multimedia
attachments.
5 CONCLUSIONS
To address the gloomy sky issue of existing image dehaz-
ing methods, a sky detection-based adaptive atmospheric
light estimation method is proposed to avoid color distor-
tion and recover a bright and clean sky. To address the
insufficient contrast enhancement issue caused by incon-
sistency between transmission map and depth information,
a texture smoothing-based robust transmission estimation
method is proposed for maintaining transmission consistency
inside the planar object. It effectively enhances the contrast
and saturation of non-sky regions in the restored images
and videos. Finally, a guided filter-based temporally coher-
ent atmospheric light smoothing strategy and a Gaussian
filter-based spatial-temporally coherent transmission smooth-
ing strategy are proposed for maintaining the temporal and spatial coherence of the haze-free videos, which are then free of jittering and flickering phenomena.

However, over-enhancement of color saturation can cause color distortion, such as on the window glass shown in Figure 5(a), and the accuracy of sky detection sometimes affects the atmospheric light estimation. These two defects are what we will try to solve in our future research work.
ACKNOWLEDGEMENTS

This work is supported by the Zhejiang Provincial Natural
Science Foundation of China under grant no. LY14F020004,
the National Natural Science Foundation of China under
grant nos. 61003188, 61379075, and U1609215, the Tal-
ent Young Foundation of Zhejiang Gongshang University
under grant no. QZ13-9, the National Key Technology
R&D Program under grant no. 2014BAK14B01, the Zhe-
jiang Provincial Commonweal Technology Applied Research
Projects of China under grant no. 2015C33071, the open
funding project of State Key Lab of Virtual Reality Tech-
nology and Systems at Beihang University under grant
no. BUAA-VR-13KF-2013-3, the Zhejiang Provincial Key
Laboratory of Electronic Commerce and Logistics Infor-
mation Technology under grant no. 2011E10005, and the
Zhejiang Provincial Research Center of Intelligent Trans-
portation Engineering and Technology under grant no.
2015ERCITZJ-KF1.
REFERENCES

1. Rahman Z, Jobson DJ, Woodell GA. Multi-scale retinex for color
image enhancement. Proceedings of the IEEE International Confer-
ence on Image Processing; 1996. p.1003–1006.
2. Reza AM. Realization of the contrast limited adaptive histogram
equalization (CLAHE) for real-time image enhancement. J. VLSI
Sig. Proc. Syst. Signal, Image Video Technol. 2004;38(1):35–44.
3. He KM, Sun J, Tang XO. Single image haze removal using
dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell.
2011;33(12):2341–2353.
4. Narasimhan SG, Nayar SK. Contrast restoration of weather
degraded images. IEEE Trans. Pattern Anal. Mach. Intell.
2003;25(6):713–724.
5. Shwartz S, Namer E, Schechner YY. Blind haze separation. Pro-
ceedings of the IEEE International Conference on Computer Vision
and Pattern Recognition; New York, USA; 2006. p.1984–1991.
6. Kopf J, Neubert B, Chen B, Cohen MF, Cohen-Or D, Deussen
O, Uyttendaele M, Lischinski D. Deep photo: Model-based
photograph enhancement and viewing. ACM Trans. Graphics.
2008;27(5):116:1–116:10.
7. Tan RT. Visibility in bad weather from a single image. Proceed-
ings of the IEEE International Conference on Computer Vision and
Pattern Recognition; San Francisco, USA; 2008. p.1956–1963.
8. Zhu Q, Mai J, Shao L. A fast single image haze removal
algorithm using color attenuation prior. IEEE Trans. Image Process.
2015;24(11):3522–3533.
9. Shiau YH, Yang HY, Chen PY, Chuang YZ. Hardware implemen-
tation of a fast and efficient haze removal method. IEEE Trans.
Circuits Syst. Video Technol. 2013;23(8):1369–1374.
10. Fattal R. Dehazing using color-lines. ACM Trans. Graphics.
2014;34(1):13:1–13:14.
11. Liu XY, Dai SK. Halo-free and color-distortion-free algorithm
for image dehazing. Chinese J. Image and Graphics.
2015;20(11):1453–1461.
12. Xing XM, Liu W. Haze removal for single traffic image. Chinese J.
Image and Graphics. 2016;21(11):1440–1447.
13. He KM, Sun J, Tang XO. Guided image filtering. IEEE Trans.
Pattern Anal. Mach. Intell. 2013;35(6):1397–1409.
14. Zhao JW, Shen YY, Liu CX, Ouyang Y. Dark channel prior-based
image dehazing with atmospheric light validation and halo elimi-
nation. Chinese J Image and Graphics. 2016;21(9):1221–1228.
15. Lai YH, Chen YL, Chiou CJ, Hsu CT. Single-image dehazing via
optimal transmission map under scene priors. IEEE Trans. Circuits
Syst. Video Technol. 2015;25(1):1–14.
16. Gibson K, Võ D, Nguyen T. An investigation in dehazing com-
pressed images and video. Proceedings of the MTS/IEEE Oceans
Conference; Seattle, USA; 2010. p.1–8.
17. Chen C, Do MN, Wang J. Robust image and video dehazing with
visual artifact suppression via gradient residual minimization. Pro-
ceedings of the Europe Conference on Computer Vision; 2016.
p.576–591.
18. Kim JH, Jang WD, Sim JY, Kim CS. Optimized contrast enhance-
ment for real-time image and video dehazing. J. Visual Commun.
Image Represent. 2013;24(3):410–425.
19. Xu L, Lu CW, Xu Y, Jia JY. Image smoothing via L0 gradient
minimization. ACM Trans. Graphics. 2011;30(6):174:1–174:11.
Chunxiao Liu is an associate
professor at the School of Com-
puter Science & Information
Engineering, Zhejiang Gong-
shang University, China. He
received his PhD degree in
Mathematics from the State Key
Lab of CAD&CG, Zhejiang
University in 2009. His research interests include
image and video-based rendering, smart video surveil-
lance, pattern analysis and intelligent systems, and
computer vision.
Yiyun Shen is now a junior
majoring in Computer Sci-
ence and Technology at the
School of Computer Science
and Information Engineering,
Zhejiang Gongshang Univer-
sity, China. His current research
interest is visual computing and
understanding.
Yaqi Shao is currently work-
ing towards the BS degree in
Computer Science and Technol-
ogy at the School of Computer
Science and Information Engi-
neering, Zhejiang Gongshang
University, China. Her research
interests focus on machine learn-
ing for computer vision.
Jinwei Zhao is currently
working towards the BS
degree in Computer Sci-
ence and Technology at the
School of Computer Science
& Information Engineering,
Zhejiang Gongshang Uni-
versity, China. His research
interests include computer vision and computer
graphics.
Xun Wang is a professor and
dean at the School of Computer
Science and Information Engi-
neering, Zhejiang Gongshang
University, China. He received
his BSc in Mechanics, MSc, and
PhD degrees in Computer Sci-
ence, all from Zhejiang Univer-
sity. His current research interests include visual media
computing, intelligent information processing, multi-
media information security, and geographic informa-
tion system.
How to cite this article: Liu C, Shen Y, Shao Y,
Zhao J, Wang X. Sky detection and texture smooth-
ing based high visibility haze removal from images and
videos. Comput Anim Virtual Worlds. 2017;28:e1776.
https://doi.org/10.1002/cav.1776