
Research Article
Adaptive Graph Cut Based Cloud Detection in Wireless Sensor Networks

Shuang Liu and Zhong Zhang

College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China

Correspondence should be addressed to Zhong Zhang; [email protected]

Received 7 November 2014; Accepted 3 December 2014

Academic Editor: Qilian Liang

Copyright © 2015 S. Liu and Z. Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We focus on the issue of cloud detection in wireless sensor networks (WSN) and propose a novel detection algorithm named adaptive graph cut (AGC) to tackle this issue. We first automatically label some pixels as “cloud” or “clear sky” with high confidence. Then, those labelled pixels serve as hard constraint seeds for the following graph cut algorithm. In addition, a novel transfer learning algorithm is proposed to transfer knowledge among sensor nodes, such that cloud images captured from different sensor nodes can adapt to different weather conditions. The experimental results show that the proposed algorithm not only achieves better results than other state-of-the-art cloud detection algorithms in WSN, but also achieves comparable results compared with the interactive segmentation algorithm.

1. Introduction

Cloud is one of the most important meteorological phenomena related to the hydrological cycle and the earth radiation balance. Most existing cloud-related research requires the technology of ground-based cloud observation, such as ground-based cloud cover evaluation. The target of cloud cover evaluation is to estimate the proportion of cloud pixels among all pixels in a ground-based cloud image. At present, cloud cover evaluation is still mainly conducted by professionally trained observers who manually estimate the coverage [1]. However, different observers may obtain discrepant observation results. Moreover, this work is complicated and time-consuming. Therefore, automatic ground-based cloud cover evaluation is highly desired in this area.

Cloud detection, which classifies each pixel as a cloud or clear sky element, is a fundamental task for cloud cover evaluation. In recent years, many ground-based imaging devices have been developed for capturing cloud images, which provides hardware support for automatic cloud detection. Sky images can be obtained by devices such as the whole sky imager (WSI) [2, 3], total sky imager (TSI) [4, 5], infrared cloud imager (ICI) [6], and all-sky imager (ASI) [7]. Traditional detection methods process cloud images captured from only one image sensor. Meanwhile, recent advances in wireless communications and electronics have enabled connecting large numbers of image sensors as wireless sensor networks (WSN), where each image sensor is a sensor node [8, 9]. Cloud detection in WSN has two advantages over traditional cloud detection. First, each image sensor requires its own computing device in traditional cloud detection, while all the image sensors in WSN share one computing device in the task manager node, which reduces costs. Second, cloud detection in WSN can obtain more complete cloud observation data than traditional cloud detection because sensor nodes are deployed at different locations. However, cloud detection in WSN is particularly challenging due to the severe illumination changes and vague boundaries between cloud and sky regions, as shown in Figure 1. Furthermore, in practical applications, we only have the ground-truth in one sensor node. If we train the model using the cloud images from one sensor node and test in other sensor nodes, the performance will degrade sharply, which is a transfer learning problem.

Figure 1: The examples of ground-based cloud images.

In this paper, we focus on the issue of cloud detection in WSN. A novel transfer learning algorithm is proposed so that the parameters of the model adapt to different sensor nodes. We also propose a novel algorithm named adaptive graph cut (AGC) to detect cloud in the task manager node. First, we automatically label some pixels as “cloud” or “clear sky” with high confidence. Then, those labelled pixels serve as hard constraint seeds for the following graph cut algorithm. The experimental results show that the proposed algorithm not only achieves better results than state-of-the-art cloud detection algorithms, but also achieves comparable results compared with the interactive segmentation algorithm.

1.1. Related Work. To the best of our knowledge, our work is the first to study cloud detection in WSN, and therefore we only introduce traditional cloud detection algorithms. Cloud observation consists of several fields, such as cloud classification [10, 11] and cloud detection [12]. Existing cloud classification techniques are generally based on the characteristics of structure [13] and texture in cloud images [14]. Recently, many methods have been proposed for ground-based cloud detection. Due to the scattering difference between cloud particles and air molecules, most cloud detection algorithms treat color as the primary feature to distinguish cloud from clear sky [15]. Long et al. [5] proposed a threshold algorithm for cloud detection according to a certain ratio of $R$ over $B$ intensity using a red-green-blue (RGB) image. Specifically, pixels with $R/B$ greater than 0.6 are identified as cloud, otherwise as clear sky. Kreuter et al. [16] reported a different threshold of 0.77 on the $R/B$ ratio for identifying cloud, which is illustrated to be a more suitable choice than the previously mentioned value of 0.6. Other algorithms classify cloud based on other features, such as saturation [17], difference value [3], and the Euclidean geometric distance (EGD) [18]. These methods can be referred to as fixed threshold algorithms. However, fixed threshold methods are unsuitable for different sky conditions. To overcome this drawback, adaptive threshold algorithms based on the Otsu algorithm have been investigated [1]. Moreover, neural network models [7] and Markov random fields (MRF) [15] have also been applied to cloud detection. Although certain improvement can be achieved, their performance is still unsatisfactory for real world applications.

Cloud is actually a special kind of object, and therefore it is natural to apply object segmentation techniques to cloud detection. Interactive object segmentation is a popular approach in this field. Among these methods, interactive graph cut has been widely applied and has achieved good performance [19, 20]. However, this model needs users to provide hard constraints for segmentation. Specifically, they manually label certain pixels as “object” or “background” seeds and then conduct the graph cut algorithm based on these seeds.

The remainder of this paper is organized as follows: Section 2 details the proposed algorithm; Section 3 presents the experimental results, which show the superior performance of our algorithm. Finally, we conclude the paper in Section 4.

2. The Proposed Algorithm

When receiving a cloud image $x^i$ from the $i$th sensor node, we first calculate the thresholds $T_h^i$ and $T_l^i$ using the transfer learning algorithm proposed in Section 2.1. Then, we detect clouds using the proposed AGC with the thresholds $T_h^i$ and $T_l^i$.

2.1. Transfer Knowledge in WSN. Let $X^j = \{x_n^j, n = 1, \ldots, N\}$ denote a set of cloud images captured from the $j$th sensor node, where the ground-truth of clouds is given and therefore the thresholds $T_h^j$ and $T_l^j$ are known. The thresholds $T_h^j$ and $T_l^j$ are used to calculate the hard constraint seeds for the cloud images. Let $X^i = \{x_m^i, m = 1, \ldots, M\}$ denote a set of cloud images captured from the $i$th sensor node, where the ground-truth of clouds is not given. Here, $N$ and $M$ are the numbers of cloud images in $X^j$ and $X^i$, respectively. We expect to transfer knowledge from the $j$th sensor node to the $i$th sensor node so that the thresholds $T_h^i$ and $T_l^i$ can be inferred. This can be formulated as

$T_h^i = w_{ij} T_h^j, \qquad T_l^i = w_{ij} T_l^j$,   (1)

where $w_{ij}$ is a weight parameter which indicates the difference of cloud image distributions captured from the $i$th and $j$th sensor nodes. Equation (1) shows that the weight $w_{ij}$ can compensate for the distribution difference and help to transfer knowledge from one sensor node to another.


When a cloud image contains more cloud, it has lower brightness. Hence, $w_{ij}$ is estimated as

$w_{ij} = \begin{cases} q_{ij}, & \frac{1}{M}\sum_{m=1}^{M} v_m^i \le \frac{1}{N}\sum_{n=1}^{N} v_n^j, \\ 2 - q_{ij}, & \frac{1}{M}\sum_{m=1}^{M} v_m^i > \frac{1}{N}\sum_{n=1}^{N} v_n^j, \end{cases}$   (2)

where $v_m^i$ and $v_n^j$ are the sums of all the pixel brightness in the cloud images from the $i$th and $j$th sensor nodes, respectively. Equation (2) represents that if the average brightness of cloud image set $X^i$ is lower than that of cloud image set $X^j$, then $w_{ij}$ is equal to $q_{ij}$; otherwise, it is $2 - q_{ij}$. Here,

$q_{ij} = p(d_{ij}) = \frac{1}{\sigma\sqrt{2\pi}} e^{-(d_{ij})^2 / 2\sigma^2}$,   (3)

where $\sigma$ is the standard deviation. Here, $d_{ij}$ is the distance between $X^i$ and $X^j$:

$d_{ij} = \frac{1}{N + M} \sum_{m=1}^{M} \sum_{n=1}^{N} d_{mn}^{ij}$,   (4)

where $d_{mn}^{ij}$ is the chi-square distance between cloud images $x_m^i$ and $x_n^j$ from the $i$th and $j$th sensor nodes, respectively. We extract local binary pattern (LBP) as the local feature and red-green-blue (RGB) color feature as the global feature for each cloud image. Thus, each cloud image is represented by a histogram $h$. The chi-square distance between two cloud images is formulated as

$d_{mn}^{ij} = \sum_{k=1}^{K} \frac{(h_m^i(k) - h_n^j(k))^2}{h_m^i(k) + h_n^j(k)}$,   (5)

where $h_m^i$ and $h_n^j$ are the histograms of cloud images $x_m^i$ and $x_n^j$, respectively, $K$ is the number of bins, and $h_m^i(k)$ and $h_n^j(k)$ are the values of $h_m^i$ and $h_n^j$ at the $k$th bin.
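To make the transfer step concrete, the sketch below implements (1)–(5) under stated assumptions: the per-image histograms (LBP plus RGB color, assumed L1-normalised) and per-image brightness sums are taken as precomputed inputs, and the value of $\sigma$ in (3) is not specified in the paper, so it is left as a parameter. Function and variable names are our own choices for illustration.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance between two histograms, cf. Eq. (5)."""
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def transfer_thresholds(hists_i, hists_j, bright_i, bright_j,
                        T_h_j, T_l_j, sigma=1.0):
    """Infer the thresholds (T_h^i, T_l^i) of an unlabelled node i from a
    labelled node j, following Eqs. (1)-(4).

    hists_i  : (M, K) array, histograms of the M images from node i
    hists_j  : (N, K) array, histograms of the N images from node j
    bright_i : (M,) per-image sums of pixel brightness (v_m^i)
    bright_j : (N,) per-image sums of pixel brightness (v_n^j)
    T_h_j, T_l_j : known thresholds of node j
    sigma    : standard deviation in Eq. (3) (value assumed here)
    """
    M, N = len(hists_i), len(hists_j)

    # Eq. (4): averaged pairwise chi-square distance between the two image sets.
    d_ij = sum(chi_square(hi, hj) for hi in hists_i for hj in hists_j) / (N + M)

    # Eq. (3): Gaussian mapping of the set distance to a weight q_ij.
    q_ij = np.exp(-d_ij ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

    # Eq. (2): choose q_ij or 2 - q_ij depending on the average brightness.
    w_ij = q_ij if np.mean(bright_i) <= np.mean(bright_j) else 2.0 - q_ij

    # Eq. (1): transfer the thresholds to node i.
    return w_ij * T_h_j, w_ij * T_l_j
```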

2.2. Cloud Detection in the Task Manager Node. In the task manager node, we deal with cloud images transmitted from sensor nodes. Our goal is to extract the cloud pixels (foreground) from the clear sky pixels (background) precisely. The interactive graph cut is an effective algorithm for segmentation. However, one drawback is that interactive graph cut needs users to manually label hard constraints by assigning several pixels (seeds) to the foreground and background. In this paper, a novel cloud detection algorithm named adaptive graph cut (AGC) is proposed. The proposed method consists of two stages: (1) according to the characteristics of ground-based cloud images, we automatically label some pixels with high confidence as “cloud” or “clear sky” elements; (2) those labelled pixels serve as hard constraint seeds for the consecutive graph cut algorithm. The details of the proposed AGC algorithm are given below.

2.2.1. Automatically Acquiring Hard Constraints. Since the ground-based cloud image is obtained in an outdoor environment, large illumination variations are inevitable. Moreover, vague boundaries between cloud and sky regions, as intrinsic characteristics of cloud images, are ubiquitous. These characteristics make cloud detection challenging. In this paper, we propose to automatically assign several pixels with high confidence as hard constraint seeds, which form part of the foreground and background. As stated in Section 1.1, color features are the primary characteristic for distinguishing cloud from clear sky. Based on this property, the hard constraint seeds are obtained as follows.

(1) A red-green-blue (RGB) cloud image is transformed into a single-channel feature image. Here, we use $R/B$ as the feature image, which can increase the difference between cloud and clear sky and alleviate the illumination variations to some extent.

(2) According to (6), we can obtain hard constraint seeds with high confidence. Concretely, pixels with $R/B$ greater than $T_h$ are identified as cloud and pixels with $R/B$ less than $T_l$ are identified as clear sky:

$i \in \text{cloud pixels}, \; \frac{R_i}{B_i} > T_h; \qquad i \in \text{clear sky pixels}, \; \frac{R_i}{B_i} < T_l$.   (6)

Here, $i$ is a pixel of the cloud image and $T_h$ and $T_l$ are the threshold parameters stated in Section 2.1.

The strategy of automatically labeling hard constraint seeds is feasible in practice. Since the threshold $T_h$ is high enough, it ensures that pixels with $R/B$ larger than $T_h$ are cloud elements. Likewise, the threshold $T_l$ is low enough, so it ensures that pixels with $R/B$ less than $T_l$ are clear sky elements. We visualize hard constraint seeds to verify the reliability of automatic labelling. Figure 2 shows examples of automatically labeled hard constraint seeds in cloud images. The pixels in green are the seeds of cloud elements and the pixels in red are the seeds of clear sky elements, from which we can see that these seeds belong to the true class (cloud or clear sky).
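A minimal sketch of this automatic seeding step is given below. The function name and the small constant added to $B$ to avoid division by zero are our own choices, not from the paper; the thresholds $T_h$ and $T_l$ are those transferred in Section 2.1.

```python
import numpy as np

def label_seeds(rgb, T_h, T_l):
    """Automatically label hard constraint seeds, cf. Eq. (6).

    rgb : (H, W, 3) array with channels R, G, B.
    Returns the R/B feature image plus boolean masks of cloud seeds and
    clear sky seeds; the remaining pixels are left to the graph cut stage.
    """
    R = rgb[..., 0].astype(np.float64)
    B = rgb[..., 2].astype(np.float64)
    feat = R / (B + 1e-6)          # R/B feature image (eps avoids division by zero)

    cloud_seeds = feat > T_h       # high-confidence cloud pixels
    sky_seeds = feat < T_l         # high-confidence clear sky pixels
    return feat, cloud_seeds, sky_seeds
```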

2.2.2. Graph Cut Algorithm. After obtaining the hard constraint seeds, we conduct the graph cut algorithm. Let $G = \{V, E\}$ be a weighted undirected graph, where $V$ are the nodes (the pixels of the cloud image) and $E$ denotes the edges connecting two nodes. Each edge in the graph is assigned a nonnegative weight $w_e$ (cost). There are also two special nodes named terminals: the foreground and background terminals. A cut is a subset of edges such that each node is assigned to either of the terminals. The cost of a cut is usually defined as the sum of the costs of its edges [19]:

$|C| = \sum_{e \in C} w_e$.   (7)

The cost function that we use as the soft constraints for segmentation should include region and boundary properties.


Figure 2: Examples of automatically labeled hard constraint seeds in the cloud images. The pixels in green are the seeds of cloud elements and the pixels in red are the seeds of clear sky elements.

Let $P$ denote all the pixels in the image and $N$ denote the set of pairs $(p, q)$, where $p$ and $q$ are neighboring pixels in $P$. The cost function $E(L)$ for each segmentation $L$ can be defined as

$E(L) = \lambda \cdot R(L) + B(L)$,   (8)

where

$R(L) = \sum_{p \in P} R_p(L_p), \qquad B(L) = \sum_{(p,q) \in N} B_{(p,q)} \cdot \delta(L_p, L_q), \qquad \delta(L_p, L_q) = \begin{cases} 1, & \text{if } L_p \neq L_q, \\ 0, & \text{otherwise}. \end{cases}$   (9)

The coefficient $\lambda \ge 0$ in (8) specifies the relative importance between the region cost $R(L)$ and the boundary cost $B(L)$. The region cost $R(L)$ measures the individual penalty for labeling the pixel $p$ as the foreground or background. Each pixel has two cost weights, $R_p(1)$ and $R_p(0)$, corresponding to the case of connecting with the foreground and background, respectively. The boundary cost $B(L)$ reflects the penalties for the discontinuity between neighboring pixels $\{p, q\}$. $B_{(p,q)}$ should be large when the values of pixels $p$ and $q$ are similar, and $B_{(p,q)}$ is close to zero when the two pixel values are very different.

In this paper, $R(L)$ and $B(L)$ in (8) are given by the following steps.

(1) Assume that we obtain $n$ foreground seeds and $m$ background seeds, denoted by $\{\text{Fore}\}_n$ and $\{\text{Back}\}_m$, respectively.

(2) For each pixel $p$, calculate its color feature distance to each seed, denoted as $D\{\text{Fore}\}_n^p$ and $D\{\text{Back}\}_m^p$, respectively.

(3) $R_p(1)$ and $R_p(0)$ can be calculated as

$R_p(1) = \min\{D\{\text{Fore}\}_k^p\}, \; k = 1, 2, \ldots, n, \qquad R_p(0) = \min\{D\{\text{Back}\}_k^p\}, \; k = 1, 2, \ldots, m$.   (10)

(4) The boundary cost $B(L)$ is a decreasing function of the distance between pixels $p$ and $q$. For a cloud image, we use the function

$B_{\{p,q\}} = \exp\left(-\frac{(f_p - f_q)^2}{2\sigma^2}\right)$,   (11)

where $f_p$ and $f_q$ denote the feature values (their values are $R/B$), and $\sigma$ is a factor which is set to 0.3 in this paper.
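As a rough illustration of how these costs could be assembled, the sketch below builds the region costs of (10), the boundary costs of (11), and the hard constraints from the automatic seeds, and then delegates the cut computation to the third-party PyMaxflow package. The function name, the value of $\lambda$, the 4-neighborhood structure, and the use of PyMaxflow are our assumptions for illustration; the paper itself relies on the min-cut/max-flow algorithm of [21], as stated below.

```python
import numpy as np
import maxflow  # third-party PyMaxflow package (an assumed choice of solver)

def agc_graph_cut(feat, cloud_seeds, sky_seeds, lam=1.0, sigma=0.3):
    """Sketch of the graph cut stage of AGC, following Eqs. (8)-(11).

    feat        : (H, W) R/B feature image
    cloud_seeds : boolean mask of automatically labelled cloud seeds
    sky_seeds   : boolean mask of automatically labelled clear sky seeds
    lam         : coefficient lambda in Eq. (8); its value is assumed here
    Returns a boolean mask, True where a pixel is labelled cloud.
    """
    H, W = feat.shape
    fg = feat[cloud_seeds]   # foreground (cloud) seed features
    bg = feat[sky_seeds]     # background (clear sky) seed features

    # Eq. (10): region costs = minimum feature distance to each seed set.
    # (For many seeds this broadcast is memory-hungry; a sketch only.)
    R1 = np.min(np.abs(feat[..., None] - fg[None, None, :]), axis=-1)
    R0 = np.min(np.abs(feat[..., None] - bg[None, None, :]), axis=-1)

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((H, W))

    # t-links: labelling a pixel "cloud" cuts its clear-sky link and pays
    # lam * R_p(1); labelling it "clear sky" pays lam * R_p(0). Seeds get a
    # huge capacity toward their own terminal, acting as hard constraints.
    INF = 1e9
    to_cloud = lam * R0
    to_sky = lam * R1
    to_cloud[cloud_seeds], to_sky[cloud_seeds] = INF, 0.0
    to_cloud[sky_seeds], to_sky[sky_seeds] = 0.0, INF

    for y in range(H):
        for x in range(W):
            g.add_tedge(nodes[y, x], to_cloud[y, x], to_sky[y, x])
            # n-links: Eq. (11), large weight between similar neighbours
            # (4-neighbourhood assumed).
            for dy, dx in ((0, 1), (1, 0)):
                yy, xx = y + dy, x + dx
                if yy < H and xx < W:
                    w = np.exp(-(feat[y, x] - feat[yy, xx]) ** 2 / (2 * sigma ** 2))
                    g.add_edge(nodes[y, x], nodes[yy, xx], w, w)

    g.maxflow()
    # Nodes on the sink side are reported True; cloud seeds were wired to the
    # source side above, so the cloud mask is the complement.
    return ~g.get_grid_segments(nodes)
```

The plain Python loops are slow for a 200 × 200 image but keep the correspondence with steps (1)–(4) easy to follow.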


To obtain the final labels, we consider how to solve this segmentation problem. The min-cut/max-flow algorithm in [21] is applied for solving (8), and [19] proved that this min-cut algorithm yields a segmentation that minimizes (8) among all segmentations satisfying the hard constraints.

3. Experimental Results

3.1. Database. One sensor node used to capture the ground-based cloud images is a digital camera equipped with a fisheye lens, which provides a field of view larger than 180°. The camera is set to capture one cloud image every 15 seconds. More information about the camera can be found in [22]. In this paper, one database of such images captured from WSN is used. The square area of an all-sky image is considered and cropped as shown in Figure 1. Then each image is resized to 200 × 200 pixels. A set of 500 images from different sensor nodes in WSN is constructed to evaluate the performance of our proposed method.

3.2. Methods in Comparison Study. In this paper, the proposed method is compared with three current cloud detection methods: (1) ratio threshold (Rad) [16]: we assign $R/B = 0.6$ as the threshold; specifically, pixels with $R/B > 0.6$ are classified as cloud, otherwise as clear sky; (2) difference threshold (Diff) [17]: we assign $R - B = 30$ as the threshold; concretely, pixels with $R - B > 30$ are classified as cloud, otherwise as clear sky; (3) adaptive threshold (Adp) [1]: the threshold is calculated by the Otsu algorithm. In order to make the results comparable to the others, we use $R/B$ as the feature image for all the experiments, and $T_h = 0.9$ and $T_l = 0.3$, which ensure hard constraint seeds with high confidence, are given for only one sensor node.
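For reference, the three baselines reduce to simple per-pixel rules on the R and B channels. A sketch is shown below, where the Otsu step uses scikit-image's threshold_otsu as an assumed stand-in for the adaptive threshold method of [1].

```python
import numpy as np
from skimage.filters import threshold_otsu  # assumed implementation of the Otsu step

def rad_detect(rgb):
    """Ratio threshold baseline (Rad): cloud where R/B > 0.6."""
    R, B = rgb[..., 0].astype(float), rgb[..., 2].astype(float)
    return (R / (B + 1e-6)) > 0.6

def diff_detect(rgb):
    """Difference threshold baseline (Diff): cloud where R - B > 30."""
    R, B = rgb[..., 0].astype(float), rgb[..., 2].astype(float)
    return (R - B) > 30

def adp_detect(rgb):
    """Adaptive threshold baseline (Adp): Otsu threshold on the R/B feature image."""
    R, B = rgb[..., 0].astype(float), rgb[..., 2].astype(float)
    feat = R / (B + 1e-6)
    return feat > threshold_otsu(feat)
```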

3.3. Experimental Results and Analysis. To evaluate the effectiveness of the proposed algorithm for cloud detection, we carry out a series of experiments on the database mentioned in Section 3.1. Furthermore, meteorological experts manually segment the 500 cloud images into binary masks, which are assigned as the ground-truth. Quantitative evaluations of all the above algorithms are performed as

$\frac{NT_{\text{cloud}} + NT_{\text{sky}}}{N}$,   (12)

where $NT_{\text{cloud}}$ and $NT_{\text{sky}}$ are the number of true cloud pixels and true clear sky pixels, respectively, and $N$ is the total number of pixels in the ground-based image.
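The per-image accuracy of (12) can be computed directly from a predicted mask and the expert ground-truth mask; a minimal sketch, assuming boolean masks with True for cloud:

```python
import numpy as np

def detection_accuracy(pred_mask, gt_mask):
    """Eq. (12): (NT_cloud + NT_sky) / N for one image."""
    nt_cloud = np.sum(pred_mask & gt_mask)    # correctly detected cloud pixels
    nt_sky = np.sum(~pred_mask & ~gt_mask)    # correctly detected clear sky pixels
    return (nt_cloud + nt_sky) / gt_mask.size
```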

Table 1: Average detection accuracy of different algorithms in WSN.

                 Rad      Diff     Adp      Ours
Precision (%)    67.23    68.43    72.67    83.26

Table 1 shows the average detection accuracy of different algorithms in WSN, from which we can see that our algorithm obtains a significant improvement in performance. This is because our algorithm considers the difference of cloud image distributions from different sensor nodes and transfers knowledge in WSN. Based on the transfer learning algorithm stated in Section 2.1, our method assigns specific thresholds $T_h$ and $T_l$ for each sensor node, while other methods use the same thresholds for all the sensor nodes. Thus, our method adapts to different weather conditions.

Table 2: Average detection accuracy of different algorithms in one sensor node.

                 Rad      Diff     Adp      Ours
Precision (%)    76.53    75.46    80.32    85.79

Table 2 lists the detection performance of the four detection algorithms in one sensor node. We can see that our method achieves the highest detection accuracy. Figure 3 shows the results of some examples detected by the four algorithms. From the results, several conclusions can be drawn. First, the “Rad” and “Diff” algorithms are not adaptable to different sky conditions. Second, the “Adp” algorithm cannot handle conditions with nonuniform illumination. Third, the proposed algorithm obtains the best detection results in all weather conditions. In addition, we also compare the proposed algorithm with interactive graph cut, which needs users to label hard constraint seeds manually. Figure 4 shows the detection results. We can see that the proposed algorithm achieves comparable detection results compared with the interactive graph cut, while our algorithm overcomes the shortcomings arising from manually labeling hard constraint seeds.

Figure 3: The experimental results of four different detection algorithms in one sensor node (columns: original images, Rad, Diff, Adp, ours).

Figure 4: (a) Original cloud image; (b) labeling hard constraint seeds automatically; (c) labeling hard constraint seeds manually; (d) the proposed AGC detection results; (e) interactive graph cut detection results.

4. Conclusions

In this paper, we propose a novel transfer learning algorithm and a detection algorithm named adaptive graph cut (AGC) for segmenting ground-based cloud images in WSN. The transfer learning algorithm compensates for the difference of cloud image distributions captured from different sensor nodes. The AGC automatically labels pixels with high confidence as “cloud” or “clear sky”, and those labelled pixels serve as hard constraint seeds for the following graph cut algorithm. The experimental results show that the proposed algorithm not only achieves better results than state-of-the-art cloud detection algorithms in WSN, but also achieves comparable results compared with the interactive segmentation algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by National Natural Science Foundation of China under Grant nos. 61401309 and 61271429 and Doctoral Fund of Tianjin Normal University under Grant no. 5RL134.

References

[1] J. Yang, W. Lv, Y. Ma, W. Yao, and Q. Li, “An automatic ground-based cloud detection method based on adaptive threshold,” Journal of Applied Meteorological Science, vol. 20, no. 6, pp. 713–721, 2009.

[2] J. E. Shields, R. W. Johnson, M. E. Karr, A. R. Burden, and J. G. Baker, “Daylight visible/NIR whole-sky imagers for cloud and radiance monitoring in support of UV research programs,” in Ultraviolet Ground- and Space-Based Measurements, Models, and Effects III, vol. 5156 of Proceedings of SPIE, pp. 155–166, International Society for Optical Engineering, San Diego, Calif, USA, August 2003.

[3] A. Heinle, A. Macke, and A. Srivastav, “Automatic cloud classification of whole sky images,” Atmospheric Measurement Techniques, vol. 3, no. 3, pp. 557–567, 2010.

[4] C. N. Long, D. W. Slater, and T. Tooman, Total Sky Imager Model 880 Status and Testing Results, Pacific Northwest National Laboratory, Richland, Wash, USA, 2001.

[5] C. N. Long, J. M. Sabburg, J. Calbo, and D. Pages, “Retrieving cloud characteristics from ground-based daytime color all-sky images,” Journal of Atmospheric and Oceanic Technology, vol. 23, no. 5, pp. 633–652, 2006.

[6] J. A. Shaw and B. Thurairajah, “Short-term Arctic cloud statistics at NSA from the infrared cloud imager,” in Proceedings of the 13th ARM Science Team Meeting, 2003.

[7] A. Cazorla, F. J. Olmo, and L. Alados-Arbotedas, “Development of a sky imager for cloud cover assessment,” Journal of the Optical Society of America A, vol. 25, no. 1, pp. 29–39, 2008.

[8] Q. Liang, X. Cheng, S. C. Huang, and D. Chen, “Opportunistic sensing in wireless sensor networks: theory and application,” IEEE Transactions on Computers, vol. 63, no. 8, pp. 2002–2010, 2014.

[9] Q. Liang, “Radar sensor wireless channel modeling in foliage environment: UWB versus narrowband,” IEEE Sensors Journal, vol. 11, no. 6, pp. 1448–1457, 2011.

[10] A. Taravat, F. F. Del, C. Cornaro, and S. Vergari, “Neural networks and support vector machine algorithms for automatic cloud classification of whole-sky ground-based images,” IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 3, pp. 666–670, 2014.

[11] S. Liu, C. Wang, B. Xiao, Z. Zhang, and X. Cao, “Tensor ensemble of ground-based cloud sequences: its modeling, classification, and synthesis,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 5, pp. 1190–1194, 2013.

[12] S. Liu, L. Zhang, Z. Zhang, C. Wang, and B. Xiao, “Automatic cloud detection for all-sky images using superpixel segmentation,” IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 2, pp. 354–358, 2015.

[13] J. Calbo and J. Sabburg, “Feature extraction from whole-sky ground-based images for cloud-type recognition,” Journal of Atmospheric and Oceanic Technology, vol. 25, no. 1, pp. 3–14, 2008.

[14] M. Singh and M. Glennen, “Automated ground-based cloud recognition,” Pattern Analysis and Applications, vol. 8, no. 3, pp. 258–271, 2005.

[15] Q. Li, W. Lu, J. Yang, and J. Z. Wang, “Thin cloud detection of all-sky images using Markov random fields,” IEEE Geoscience and Remote Sensing Letters, vol. 9, no. 3, pp. 417–421, 2012.

[16] A. Kreuter, M. Zangerl, M. Schwarzmann, and M. Blumthaler, “All-sky imaging: a simple, versatile system for atmospheric research,” Applied Optics, vol. 48, no. 6, pp. 1091–1097, 2009.

[17] M. P. Souza-Echer, E. B. Pereira, L. S. Bins, and M. A. R. Andrade, “A simple method for the assessment of the cloud cover state in high-latitude regions by a ground-based digital camera,” Journal of Atmospheric and Oceanic Technology, vol. 23, no. 3, pp. 437–447, 2006.

[18] S. L. M. Neto, A. von Wangenheim, E. B. Pereira, and E. Comunello, “The use of Euclidean geometric distance on RGB color space for the classification of sky and cloud patterns,” Journal of Atmospheric and Oceanic Technology, vol. 27, no. 9, pp. 1504–1517, 2010.

[19] Y. Y. Boykov and M. P. Jolly, “Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images,” in Proceedings of the 8th International Conference on Computer Vision, pp. 105–112, Vancouver, Canada, July 2001.

[20] C. Rother, V. Kolmogorov, and A. Blake, “GrabCut: interactive foreground extraction using iterated graph cuts,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 309–314, 2004.

[21] Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 9, pp. 1124–1137, 2004.

[22] J. Kalisch and A. Macke, “Estimation of the total cloud cover with high temporal resolution and parametrization of short-term fluctuations of sea surface insolation,” Meteorologische Zeitschrift, vol. 17, no. 5, pp. 603–611, 2008.
