Compressed Sensing Based Adaptive-Resolution Data
Recovery in Underwater Sensor Networks
Wenjing Kang, Gongliang Liu, and Bin Hu
School of Information and Electrical Engineering, Harbin Institute of Technology, Weihai 264209, P.R. China
Emails: {kwjqq, liugl}@hit.edu.cn; [email protected]
Abstract—With marine development thriving today, underwater acoustic sensor networks have become a vital means of exploring and monitoring the ocean. In this paper, a data recovery scheme with adaptive resolution, based on compressed sensing theory, is proposed to cope with the harsh conditions under water. The fundamental idea of the scheme is to achieve better quality of recovered data at the sacrifice of data resolution. A data resolution adjustment method is presented first; then a recovered data quality evaluation algorithm and an adaptive resolution selection strategy are proposed. The experimental results show that the presented schemes are able to evaluate the quality of the recovered data accurately and to select the resolution adaptively. Accordingly, the recovery resolution is adjusted successfully to achieve higher accuracy on the premise of acceptable data resolution.

Index Terms—Underwater sensor networks, compressed sensing, data recovery, adaptive resolution
I. INTRODUCTION
Oceans are vital resources for sustaining human existence and promoting social progress. Wireless sensor networks have become important facilities helping people cognize the oceans and exploit marine resources, owing to their flexible deployment, low cost and extensive coverage. Nowadays, they play an important role in applications such as scientific data gathering, environmental monitoring and protection, prospecting for natural undersea resources, and disaster monitoring. Underwater Sensor Networks (USNs) consist of sensor nodes with low energy consumption and short communication distance, deployed underwater and networked via wireless communication to collaboratively glean information through various sensors and transmit it to the sink node [1].
Different from terrestrial sensor networks [2], USNs mostly communicate via acoustic waves, which substantially extends the propagation delay. Furthermore, the power consumption of acoustic modulators and demodulators is much higher due to more severe propagation attenuation, and the difficulty of recharging batteries makes it important to reduce energy consumption to prolong the network lifetime. The complexity of the communication environment, such as non-stationary sea noise, variations of water temperature and pressure, and the effects of marine organisms, induces severe multipath interference and delay jitter, challenging the reliability of communication.

Manuscript received August 24, 2015; revised March 9, 2016. This work was supported by the National Natural Science Foundation of China (61371100, 61501139, 61401118) and the Natural Scientific Research Innovation Foundation of Harbin Institute of Technology (HIT.NSRIF.2011114, HIT.NSRIF.2013136). Corresponding author email: [email protected]. doi:10.12720/jcm.11.3.317-324
Candès, Romberg, Tao and Donoho proposed the theory of Compressed Sensing (CS) [3]-[6] in 2004, which innovatively combines the signal acquisition process with the compression process. CS theory asserts that signals can be recovered from far fewer samples than traditional methods require, provided the signal is sparse in a proper basis [7]. CS exploits the fact that many natural signals are sparse in a proper basis, making the theory feasible in many practical applications, for instance, wireless sensor networks [8]-[10]. Applying CS theory in USNs not only significantly reduces the amount of data collected, but also relieves the pressure of power consumption and bandwidth requirements in data transmission. Besides, the low encoding computational complexity of CS suits sensor nodes' limitations in energy, communication capacity and computing capability. In a word, CS theory improves network energy efficiency and prolongs network lifetime.
Against the background of the harsh underwater environment, this paper proposes an adaptive-resolution data recovery scheme based on CS theory. The essence of the scheme is to achieve better recovery quality at the sacrifice of resolution. Owing to the instability of the underwater acoustic channel, the number of received packets may vary sharply, so the received data packets may be inadequate. Without sufficient data packets, more precisely, not enough for successful recovery with high probability, the error of data recovery increases dramatically. To maintain data recovery accuracy, a conventional data collecting system needs to gather more packets, which increases the network burden and energy consumption. However, it is shown that reducing the recovery resolution also reduces the sparseness of the signal. Taking advantage of this feature, according to CS theory, the recovery error can be decreased by reducing the recovery resolution when measurements are lacking. Compared with a conventional data gathering system, the proposed scheme is more energy efficient and relies less on communication reliability, for conventional schemes all presuppose sufficient correctly received data packets, whereas the scheme proposed in this paper works without them.
However, there is a nonnegligible limitation: the decrease of resolution damages the value of the recovered data. Therefore, a data quality evaluation method and an adaptive resolution selection strategy are proposed afterwards. The proposed data quality evaluation method exploits the data smoothness in the spatial domain to estimate the reconstruction error, and the adaptive resolution selection strategy keeps the loss of resolution moderate to achieve a smaller reconstruction error. Another adaptive data gathering scheme based on CS was proposed in [8]. That scheme introduces an autoregressive (AR) model into the objective function of the convex optimization, exploiting the local correlation in sensed data to achieve locally adaptive sparsity. Furthermore, the recovered data is evaluated according to the temporal correlation between historically reconstructed data and the current reconstruction result, and the number of measurements is then tuned adaptively. The difference between [8] and this paper is that we consider the circumstance in which the gathered data is insufficient and the sparsity is fixed; only a decrease of resolution can then reduce the sparsity and achieve better recovery quality.
The remainder of this paper is organized as follows.
Section II presents the data recovery strategy with
adjustable resolution, based on CS theory. Section III
proposes a recovered data quality evaluation algorithm.
In Section IV, an adaptive resolution selection strategy is presented. Experimental results are detailed and discussed in
Section V. Finally, Section VI concludes this paper.
II. DATA RECOVERY WITH ADJUSTABLE RESOLUTION
BASED ON CS THEORY
A. Introduction of Compressed Sensing Theory
Consider a discrete real signal $x[n]$, $n = 1, 2, \ldots, N$, and an orthonormal basis $\Psi = [\psi_1, \psi_2, \ldots, \psi_N] \in R^{N \times N}$, where $\psi_i$, $1 \le i \le N$, are the columns of $\Psi$. Then $x$ can be expanded in the basis as follows:

$$x = \sum_{j=1}^{N} \alpha_j \psi_j \qquad (1)$$

where $\{\alpha_j\}_{j=1}^{N}$ is the coefficient sequence of $x$ in the basis $\Psi$. It can be written as

$$\alpha = \Psi^T x \qquad (2)$$

If at most $k$ ($k \ll N$) entries of $\alpha$ are nonzero, then $x$ is $k$-sparse; only in this case is CS theory applicable. Design a measurement matrix $\Phi \in R^{M \times N}$ to sample the signal $x$ and obtain the measurements $y$. Moreover, the sensing matrix defined as $A^{CS} = \Phi \Psi$ must obey the RIP [11]. Projecting $x$ on $\Phi$ yields the measurement vector $y$ with $M$ entries:

$$y = \Phi x = A^{CS} \alpha \qquad (3)$$
At the receiving end, there are plenty of reconstruction algorithms, such as convex relaxation algorithms [14] and greedy pursuit algorithms [12], [13], [15], to decode $y$ and obtain the original $\alpha$ and $x$. CS theory indicates that when the number of random measurements $M$ is larger than a constant $N_s$, the original signal can be reconstructed uniquely and accurately with probability at least $1 - N^{-M}$. $N_s$ is defined as

$$N_s = C k \log N \qquad (4)$$

where $C$ is a constant irrelevant to $N$ and $k$ [16]. However, under the severe channel conditions of USNs, the number of actually received measurements may be less than $N_s$. By decreasing the data reconstruction resolution, the sparseness $k$ decreases too, as illustrated in Fig. 8 and Fig. 9, and the requirement for successful reconstruction can then possibly be met.
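To make the pipeline above concrete, the following minimal Python sketch (illustrative only, not the authors' implementation) samples a signal that is k-sparse in an orthonormal DCT basis and recovers it with OMP; the sizes N, M, k and the basis choice are assumptions for demonstration.

```python
# Minimal CS sketch: y = Phi x = A^CS alpha, recovery by OMP.
# All sizes and the DCT basis choice are illustrative assumptions.
import numpy as np
from scipy.fft import idct

N, M, k = 256, 80, 8                         # length, measurements, sparsity
rng = np.random.default_rng(0)

alpha = np.zeros(N)                          # k-sparse coefficient vector
alpha[rng.choice(N, k, replace=False)] = rng.normal(size=k)
Psi = idct(np.eye(N), norm='ortho', axis=0)  # orthonormal basis, eq. (1)
x = Psi @ alpha

Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # random measurement matrix
A = Phi @ Psi                                # sensing matrix A^CS, eq. (3)
y = Phi @ x

# OMP: pick the column most correlated with the residual, then re-fit
# the chosen support by least squares.
support, residual, coef = [], y.copy(), None
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
alpha_hat = np.zeros(N)
alpha_hat[support] = coef
print('recovery error:', np.linalg.norm(Psi @ alpha_hat - x))
```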
Fig. 1. Resolution adjustment mechanism
B. Data Recovery Scheme with Adjustable Resolution
This paper basically aims at marine scientific information, a set of two-dimensional data correlated with latitude and longitude, illustrated in Fig. 1. The entire figure denotes a sea area. An asterisk represents a sensor node; solid-line grids divide the field into different regions, each holding the same data measured by the sensor node at its center. The original data $x_0$ is a two-dimensional array $x_0 \in R^{N_0 \times N_0}$ whose entries are in one-to-one correspondence with those grids. It represents the ensemble of data measured by all sensor nodes, with the amount $N = N_0 \times N_0$. All data is numbered and then reshaped into a vector $x = \{x_i\}_{i=1}^{N} \in R^{N \times 1}$. In each round, the sink node obtains $M$ measurements, composing a vector $y = \{y_j\}_{j=1}^{M} \in R^{M \times 1}$. Every measurement (every term in $y$) is a measured value uploaded by a sensor node, i.e. $y_j = x_i$, $y_j \in y$, $x_i \in x$, $1 \le i \le N$, $1 \le j \le M$. In particular, different $i$ correspond to different $j$, which means there are no repeated data packets from the same node. Hereby, the measurement matrix $\Phi$ consists of $M$ rows and $N$ columns, with one "1" and $N-1$ "0"s in every row. The index of the column holding the "1" equals the number of the node that uploaded this datum, i.e. $\Phi_{i,j} = 1$.
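As a small illustration of this binary selection structure (our reading of the description above; all names are assumptions), the matrix can be built directly from the ids of the reporting nodes:

```python
# Build Phi with one '1' per row, at the column of the reporting node.
import numpy as np

def selection_matrix(node_ids, N):
    """M x N measurement matrix; row i selects the reading of node node_ids[i]."""
    Phi = np.zeros((len(node_ids), N))
    Phi[np.arange(len(node_ids)), node_ids] = 1.0  # Phi_{i,j}=1  <=>  y_i = x_j
    return Phi

rng = np.random.default_rng(1)
N = 2500                                           # 50 x 50 grid, flattened
ids = rng.choice(N, size=500, replace=False)       # distinct reporting nodes
Phi = selection_matrix(ids, N)
x = rng.normal(size=N)
assert np.allclose(Phi @ x, x[ids])                # each y_j is one reading x_i
```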
The basic principle of resolution adjustment is, given a particular $y$, to reconstruct $x$ at a different resolution by designing a different $\Phi$. In brief, the mechanism redrafts the grids and lays out virtual sensor nodes, as also shown in Fig. 1. The circles represent virtual sensor nodes, and dash-dot lines segment the area into different ranges, each holding the same measured value. If a virtual grid includes several actual nodes, the measured value of the virtual node in that grid is the mean value of those actual nodes' data. On the contrary, if there is only one node in a virtual grid, the measured value of the corresponding virtual node is the same as that of the actual node. After data fusion, the size of $x_0$ alters from $N_0 \times N_0$ to $N_0' \times N_0'$; $x$ with $N$ elements turns into $x'$ with $N'$ elements, where $N' = N_0' \times N_0'$. Like the adjustment in grids, if several data packets come from one virtual grid, their mean value forms one new packet. After the data fusion of $y$, a new measurement vector $y'$ with $M'$ elements replaces $y$. As for the new measurement matrix $\Phi'$, it contains $M'$ rows and $N'$ columns; due to the variation in number and correspondence of sensor nodes, the index of the non-zero column in every row varies too. The reconstruction process then uses $y'$ and $\Phi'$ to get the recovered data vector $\hat{x}'$, which is reshaped to obtain the recovered data matrix $\hat{x}_0' \in R^{N_0' \times N_0'}$.
Fig. 2. Original data (50×50): temperature over longitude (°W) and latitude (°N)
Fig. 3. Data with adjusted resolution (40×40, 64%)
In this paper, data from the online database of the Jet Propulsion Laboratory, subordinate to NASA (http://ourocean.jpl.nasa.gov/), is analyzed. A set of temperature information for the Southern California coastline is shown in Fig. 2, whose resolution is 50×50. Fig. 3 to Fig. 5 show the dataset after resolution adjustment. These figures indicate that the temperature data is fairly smooth; therefore a moderate, slight change in resolution will not affect its visual appearance, but a sharp decrease in resolution results in heavy loss of the information it contains.
Fig. 4. Data with adjusted resolution (30×30, 36%)
Fig. 5. Data with adjusted resolution (20×20, 16%)
Fig. 6. Recovery error distribution
III. RECOVERED DATA QUALITY EVALUATION
Negative effects of resolution reduction make it necessary to compromise between resolution and accuracy.
In view of the fact that the real data is unavailable, calculating the conventional error is impossible. However, the target two-dimensional data can be regarded as an image without texture and boundaries, considering its smoothness. Analysis of the recovery error indicates that it follows a Gaussian distribution, as shown in Fig. 6, which accords with the error distribution hypothesis in most image noise level estimation algorithms. So it is feasible to adopt these algorithms to estimate the recovery error. The proposed recovered-data quality evaluation algorithm is based on [17], using Principal Component Analysis (PCA).
Firstly, normalize the recovered two-dimensional data $\hat{x}_0$. Secondly, consider the data as an image and decompose the image into overlapping patches. Thirdly, vectorize those image patches; let $\hat{x}_{0,i}$ be the $i$-th vectorized patch:

$$\hat{x}_{0,i} = x_{0,i} + n_i \qquad (5)$$

where $x_{0,i}$ is the original image patch (considering the original data as an image, too), and $n_i$ is i.i.d. zero-mean Gaussian noise with variance $\sigma_n^2$. In our case $n_i$ is the recovery error.

Arrange all the vectorized image patches $\hat{x}_{0,i}$ column by column to form a new matrix, whose covariance matrix is defined as

$$\Sigma_{\hat{x}_0} = \frac{1}{P} \sum_{i=1}^{P} (\hat{x}_{0,i} - \mu)(\hat{x}_{0,i} - \mu)^T \qquad (6)$$

where $P$ is the number of image patches and $\mu$ is the mean value of the dataset $\hat{x}_0 = \{\hat{x}_{0,i}\}$.

The minimum eigenvalue of the covariance matrix can be derived as

$$\lambda_{\min}(\Sigma_{\hat{x}_0}) = \lambda_{\min}(\Sigma_{x_0}) + \sigma_n^2 \qquad (7)$$

where $\lambda_{\min}(\Sigma_{\hat{x}_0})$ denotes the minimum eigenvalue of the covariance matrix formed by the recovered data image; likewise, $\lambda_{\min}(\Sigma_{x_0})$ denotes the minimum eigenvalue of the covariance matrix formed by the original data image.

Smooth patches are known to span only a low-dimensional subspace [17], in which case $\lambda_{\min}(\Sigma_{x_0})$ is approximately zero. Then the recovery error $err$ can be estimated as

$$err \approx \hat{\sigma}_n = \sqrt{\lambda_{\min}(\Sigma_{\hat{x}_0})} \qquad (8)$$

The quality of recovered data is related not only to the recovery error but also to the data resolution: the smaller the recovery error and the higher the resolution, the better the quality. We evaluate a data quality factor (dqf) to estimate the quality of the recovered data:

$$dqf = \alpha \cdot err / \sqrt{N'} / sprs(\hat{x}_0) \qquad (9)$$

where $\alpha$ is a coefficient (in this paper, $\alpha = 500$), $N'$ is the quantity of the recovered data, and $sprs(\hat{x}_0)$ is its sparseness.
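The estimator of eqs. (5)-(9) can be sketched as follows; this is a minimal reading, assuming 5×5 patches as stated in Section V, a DCT-energy definition of sparseness as in Section V-A, and $\alpha = 500$. The function names are ours, not the authors'.

```python
# Patch-based PCA error estimate (eqs. 5-8) and dqf score (eq. 9).
# Patch size and the 99.95% energy threshold follow Section V; everything
# else is an illustrative assumption.
import numpy as np
from scipy.fft import dctn

def estimate_error(img, p=5):
    """err ~ sigma_n: sqrt of the minimum eigenvalue of the patch covariance."""
    h, w = img.shape
    patches = np.array([img[i:i + p, j:j + p].ravel()
                        for i in range(h - p + 1) for j in range(w - p + 1)])
    cov = np.cov(patches, rowvar=False)          # eq. (6)
    lam_min = np.linalg.eigvalsh(cov)[0]         # smallest eigenvalue, eq. (7)
    return float(np.sqrt(max(lam_min, 0.0)))     # eq. (8)

def sparseness(img, energy=0.9995):
    """Number of DCT coefficients holding `energy` of the total energy."""
    e = np.sort(dctn(img, norm='ortho').ravel() ** 2)[::-1]
    return int(np.searchsorted(np.cumsum(e), energy * e.sum()) + 1)

def dqf(img, alpha=500.0):                       # eq. (9)
    return alpha * estimate_error(img) / np.sqrt(img.size) / sparseness(img)
```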
Fig. 7. Flowchart of the proposed algorithm (input: y; parameters: max_resl, min_resl, step_resl, thre_score, thre_sparse)
IV. ADAPTIVE RESOLUTION SELECTING STRATEGY
The preceding sections have introduced the mechanisms of resolution adjustment and recovered data quality evaluation. In this section, an adaptive resolution adjustment algorithm is proposed. Fig. 11 illustrates that, under a proper sampling probability, there is a valley point on the resolution-dqf curve, which is the best compromise between resolution and recovery error. The fundamental goal of the proposed algorithm is to find this valley point with the largest resolution and a dqf smaller than a threshold.

After the data gathering process of every round, recovered data is constructed at the maximum resolution max_resl, and then the dqf is evaluated at the sink node. If the dqf is less than a presupposed threshold thre_score, the data reconstruction succeeds, and resolution adjustment is unnecessary. Otherwise, the presupposed step_resl is subtracted from the resolution, and data reconstruction and evaluation are processed again. The iteration continues until specific conditions are satisfied. As mentioned above, the process aims at finding a valley point; therefore, in every iteration, it is decided whether the reconstruction of the last iteration was successful.
A successful reconstruction should meet four requirements:
1) Sparseness requirement: the sparseness proportion of the recovered data should be less than thre_sparse;
2) Resolution requirement: the resolution of the recovered data should be greater than min_resl;
3) Data quality requirement: the dqf of the recovered data should be less than thre_score;
4) Valley point requirement: the point on the resolution-dqf curve should be a valley point.
If the recovered data meets these requirements, the iteration stops with a successful recovery. On the contrary, if in an iteration the resolution of the recovered data is less than min_resl or the sparseness proportion is greater than thre_sparse, the iteration stops with a failure. The details of the algorithm are illustrated in Fig. 7, and a code sketch is given below.
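In code form, the loop of Fig. 7 can be sketched as follows. This is our reading of the flowchart; `reconstruct`, `dqf` and `sparse_ratio` are assumed helpers (e.g. the OMP and dqf sketches above), not functions from the paper.

```python
# Adaptive resolution selection (sketch of the Fig. 7 flowchart).
def adaptive_recover(y, ids, reconstruct, dqf, sparse_ratio,
                     max_resl, min_resl, step_resl, thre_score, thre_sparse):
    resl = max_resl
    hist = []                                  # (resl, dqf, sparse) per iteration
    x_hat = reconstruct(y, ids, resl)
    q, s = dqf(x_hat), sparse_ratio(x_hat)
    if q < thre_score:                         # good enough at full resolution
        return x_hat, resl
    while True:
        hist.append((resl, q, s))
        resl -= step_resl                      # try a coarser resolution
        x_hat = reconstruct(y, ids, resl)
        q, s = dqf(x_hat), sparse_ratio(x_hat)
        if len(hist) >= 2:                     # valley test, requirements 1)-4)
            (r1, q1, s1), (_, q2, _) = hist[-1], hist[-2]
            if (s1 < thre_sparse and r1 > min_resl and
                    q1 < thre_score and q1 < q2 and q1 <= q):
                return reconstruct(y, ids, r1), r1   # success at the valley
        if s > thre_sparse or resl < min_resl:  # stopping conditions
            return None, None                   # fail to recover
```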
V. EXPERIMENTAL RESULTS
A. Influence of Resolution Adjustment
Firstly, we examine the influence of resolution changes on sparsity. We select a 100×100 dataset and the frequency domain as the sparse domain. In the frequency domain, the number of largest coefficients whose cumulative energy reaches 99.95% of the total energy is regarded as the sparseness. As seen in Fig. 8, the sparseness decreases with decreasing resolution. Thus, according to CS theory and equation (4), the probability of accurate reconstruction increases. However, Fig. 9 indicates that the proportion of non-zero coefficients increases with decreasing resolution, so the possible resolution reduction is limited: an excessively low resolution invalidates the sparsity property, and then CS theory no longer applies.
Fig. 8. Sparseness varies with resolution (sparsity vs. data number)
Fig. 9. Sparsity ratio varies with resolution (sparsity ratio vs. data number)
Secondly, we test the influence of resolution changes on the recovery error under different sampling probabilities. The size of the dataset is 50×50; the frequency domain is the sparse domain; the reconstruction algorithm is OMP [12]; the Monte Carlo simulation is run 50 times.
Simulation results are shown in Fig. 10, where the right-most value of every curve is the error of the original-resolution reconstruction. It illustrates that with lower resolution the reconstruction error is relatively lower at moderate sampling probabilities, which demonstrates the feasibility of the proposed resolution adjusting scheme. When the sampling probability equals 0.1, the error curve is evidently higher than the other curves, which indicates that the received data is not adequate for reconstruction at any resolution. With a comparatively high sampling probability, the reconstruction error is generally constant, with a tendency to increase as the resolution decreases.
B. Quality Estimation of Recovered Data
Firstly, we test the quality estimation with different resolutions of the original data. Fig. 11 shows that the dqf of the data increases as the resolution decreases. More precisely, the dqf increases gently at first and sharply at last. This accords with the fact that low resolution results in poor data quality.
Fig. 11. The dqf of original data with different resolution
Fig. 12. Error estimation
Fig. 13. Quality estimation of recovered data
Secondly, we consider the quality estimation of recovered data under resolution variation. The image patch size is 5×5; the other simulation conditions are the same as in the simulations above. Simulation results are presented in Fig. 12 and Fig. 13. Fig. 12 shows that the error estimation curve basically fits the error curve in Fig. 10; the PCA algorithm is applicable to reconstruction error estimation. Fig. 13 illustrates the recovered data quality evaluation. It shows that there are valley points on the dqf curves at moderate sampling probabilities. On the left side of the valley points, the dqf increases due to low resolution; conversely, on the right side, the dqf increases due to large reconstruction error. Similar to Fig. 10, when the sampling probability is extremely low, e.g. 0.1, successful reconstruction cannot be achieved. When the sampling probability is comparatively high (0.35-0.4), the dqf curve is gentle due to the low reconstruction error, and there is no obvious valley point. In particular, although the dqf is relatively low there, it is not appropriate to adopt an extremely low resolution (400-800), owing to severe information loss.
C. Reconstruction with Adaptive Resolution
To evaluate the success of the proposed scheme, the recovery resolution is calculated according to the flowchart presented in Fig. 7. Meanwhile, the recovery error (Pe) is calculated in each iteration, too. If the error corresponding to the chosen resolution is the smallest among all the errors calculated over all iterations, the choice is correct. Using a Monte Carlo simulation with 100 runs, the result is shown in Fig. 14. It illustrates that the success rate of the proposed scheme increases with the sampling probability. The overall success rate is over 70%, so, given the randomness of every measured dataset, the proposed scheme is capable of choosing the best recovery resolution. Fig. 15 and Fig. 16 show the improvement of Pe and the resolution modification rate for every sampling probability. For instance, when the sampling probability is 0.21, applying the proposed scheme decreases the Pe by 7.53% at the sacrifice of 16.18% of the resolution. It is shown that when the sampling probability increases, the improvement in accuracy decreases, due to the better recovery quality at the original resolution; meanwhile, the resolution reduction drops as well.
Fig. 14. Success rate
Fig. 15. Pe improvement
Fig. 16. Resolution modification rate
VI. CONCLUSIONS
This paper has proposed an adaptive-resolution data recovery scheme based on CS theory, aiming to accommodate USN characteristics such as high energy consumption and low communication reliability. Under the circumstance of insufficient measured data, the presented scheme is able to achieve better recovery quality at the sacrifice of data resolution. The experimental results show that the proposed scheme evaluates the recovery quality accurately, achieves adaptive adjustment of the resolution, and compromises properly between resolution and recovery accuracy.
REFERENCES
[1] R. Kumar, “A survey on data aggregation and clustering
schemes in underwater sensor networks,” International
Journal of Grid and Distributed Computing, vol. 7, no. 6,
pp. 29-52, 2014.
[2] H. Zhang, H. Xing, J. Cheng, A. Nallanathan, and V.
Leung, “Secure resource allocation for OFDMA two-way
relay wireless sensor networks without and with
cooperative jamming,” IEEE Transactions on Industrial
Informatics, 2015.
[3] E. J. Candès, “Compressive sampling,” in Proc.
International Congress of Mathematicians, 2006, pp.
1433-1452.
[4] E. J. Candès and J. K. Romberg, “Quantitative robust uncertainty principles and optimally sparse decompositions,” Foundations of Computational Mathematics, vol. 6, no. 2, pp. 227-254, 2006.
[5] E. J. Candès and J. K. Romberg, “Signal recovery from
random projections,” in Proc. Electronic Imaging 2005,
International Society for Optics and Photonics, pp. 76-86,
2005.
[6] D. L. Donoho, “Compressed sensing,” IEEE Trans.
Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[7] E. J. Candès and M. B. Wakin, “An introduction to
compressive sampling,” IEEE Signal Processing
Magazine, vol. 25, no. 2, pp. 21-30, 2008.
[8] J. Wang, S. Tang, B. Yin, et al., “Data gathering in
wireless sensor networks through intelligent compressive
sensing,” in Proc. IEEE INFOCOM, 2012, pp. 603-611.
[9] M. Leinonen, M. Codreanu, and M. Juntti, “Sequential
compressed sensing with progressive signal
reconstruction in wireless sensor networks,” IEEE
Transactions on Wireless Communications, vol. 14, no. 3,
pp. 1622-1635, 2015.
[10] M. Kaneko and K. A. Agha, “Compressed sensing based
protocol for interfering data recovery in multi-hop sensor
networks,” IEEE Communications Letters, vol. 18, no. 1,
pp. 42-45, January 2014.
[11] E. J. Candès, “The restricted isometry property and its implications for compressed sensing,” Comptes Rendus Mathématique, Académie des Sciences, vol. 346, pp. 589-592, 2008.
[12] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007.
[13] D. L. Donoho, Y. Tsaig, I. Drori, et al., “Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit,” IEEE Trans. Information Theory, vol. 58, no. 2, pp. 1094-1121, 2012.
[14] K. Koh, S. J. Kim, and S. P. Boyd, “An interior-point
method for large-scale l1-regularized logistic regression,”
IEEE Journal of Selected Topics in Signal Processing, vol.
1, no. 4, pp. 606-617, 2007.
[15] D. Sundman, S. Chatterjee, and M. Skoglund,
“Distributed greedy pursuit algorithms,” Signal
Processing, vol. 105, pp. 298-315, 2014.
[16] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Information Theory, vol. 52, no. 2, pp. 489-509, 2006.
[17] X. Liu, M. Tanaka, and M. Okutomi, “Noise level estimation using weak textured patches of a single noisy image,” in Proc. IEEE International Conference on Image Processing, September 2012, pp. 665-668.
Wenjing Kang received the B.S., M.Eng.
and Ph.D. degrees in measuring &
control technology and instrumentations
in 2001, 2003 and 2007, respectively, all
from Harbin Institute of Technology,
Harbin, China. She is now with the
Harbin Institute of Technology at Weihai
as a lecturer. Her current interests
include underwater sensor networks, moving target tracking and
image processing.
Gongliang Liu received the B.S. degree
in measuring & control technology and
instrumentations, and the M.Eng. and
Ph.D. degrees in electrical engineering in
2001, 2003 and 2007, respectively, all
from Harbin Institute of Technology,
Harbin, China. He is now with the
Harbin Institute of Technology at Weihai
as a professor. His current interests include satellite
communications and underwater sensor networks.
Bin Hu received the B.S. and M.Eng.
degrees in 2013 and 2015, respectively,
both from Harbin Institute of Technology.
Her current interests include underwater
sensor networks and communication
systems.