

Total Variation with Overlapping Group Sparsity for Removing Mixed Noise

Jin-Jin Mei and Ting-Zhu Huang

1 Introduction

In imaging applications, the observed images are unavoidably corrupted by noise during acquisition, transmission, or storage. Therefore, image denoising is an essential task in image processing. Researchers have proposed many methods to remove noise; see [1–12] and the references therein. In the synthetic aperture radar (SAR) imaging system, however, the observed images are usually contaminated by multiplicative noise, caused by the coherent radiation used in image formation, as well as by additive noise due to thermal effects in the image capture device [13, 14]. Assume that Ω ⊂ R² is a connected bounded domain with a compact Lipschitz boundary. We consider a degradation model under mixed additive and multiplicative noise,

f = u + k0η + k1uη,  (1)

where f ∈ L²(Ω) is the noisy image, u is the unknown original image, η denotes Gaussian white noise with mean zero and variance one, and k0, k1 > 0 represent the noise levels.
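As a quick illustration, the degradation model (1) can be simulated as follows. This is a minimal NumPy sketch; the function name and interface are illustrative, and a single noise realization η is shared by the additive and multiplicative terms, exactly as in (1):

```python
import numpy as np

def add_mixed_noise(u, k0, k1, seed=None):
    """Simulate f = u + k0*eta + k1*u*eta, with eta ~ N(0, 1) white noise.
    Illustrative helper, not part of the paper's code."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(u.shape)  # one realization shared by both terms
    return u + k0 * eta + k1 * u * eta
```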

As far as we know, there are only a few mathematical techniques for removing mixed additive and multiplicative noise. In [14], the authors assume that a patch from the original image is a linear combination of patches from the noisy image, and they recover the true image in the total least squares (TLS) sense. In [15],

J.-J. Mei (✉) · T.-Z. Huang
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, China

School of Mathematics and Statistics, Fuyang Normal University, Fuyang, Anhui, China
e-mail: [email protected]; [email protected]

© Springer International Publishing AG, part of Springer Nature 2019
M. Jiang et al. (eds.), The Proceedings of the International Conference on Sensing and Imaging, Lecture Notes in Electrical Engineering 506, https://doi.org/10.1007/978-3-319-91659-0_16


since the TV regularization preserves image edges effectively, Chumchob et al. proposed a convex variational model (TV-EXP for short) for removing the mixed noise,

min_{u∈BV(Ω)} ∫_Ω |∇u| dx + (α1/2) ∫_Ω (u − f)² dx + α2 ∫_Ω (u + f e^{−u}) dx,  (2)

where α1 and α2 are positive regularization parameters that control the trade-off between the TV regularization term and the data-fitting terms. BV(Ω) is the space of functions u ∈ L¹(Ω) such that

∫_Ω |∇u| dx := sup{ ∫_Ω u div ϕ dx : ϕ ∈ (C_0^∞(Ω))², ‖ϕ‖_∞ ≤ 1 }

is finite. Equipped with the norm ‖u‖_{BV(Ω)} = ‖u‖_{L¹(Ω)} + ∫_Ω |∇u| dx, BV(Ω) is a Banach space; see [16, 17] and the references therein for more details. To solve (2), the authors applied a nonlinear multigrid method based on a fixed-point smoother. This method, however, is comparatively complicated and time-consuming, and some staircase artifacts remain in the restored images.

Recently, researchers have studied a new TV regularization method based on overlapping group sparsity [10, 18, 19]. Numerical experiments showed that this regularization suppresses staircase artifacts effectively. Inspired by this advantage of the TV with overlapping group sparsity, we propose two convex models for removing the mixed additive and multiplicative noise. In this paper, we develop an ADMM algorithm to solve the proposed models, and the convergence of the algorithm is guaranteed under certain conditions. Furthermore, as measured by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [20], experimental results show that our proposed methods are effective.

This paper is organized as follows. In the next section, we review the TV with overlapping group sparsity and the framework of ADMM. In Sect. 3, we propose two variational models based on the TV with overlapping group sparsity and develop the ADMM algorithm for solving them. Experiments in Sect. 4 show the superior performance of the proposed methods. Finally, we conclude the paper in Sect. 5.

2 Preliminaries

2.1 TV with Overlapping Group Sparsity

For completeness, we briefly review the TV with overlapping group sparsity. First, we assume that the original image u ∈ R^{n²} is rearranged in lexicographic order; in other words, the ((j − 1)n + i)th element of the vector u equals the (i, j)th element of the corresponding square matrix. According to [10], a K-square-point group of a two-dimensional image is defined as follows:


u_{i,j,K} =
⎛ u_{i−K1, j−K1}     u_{i−K1, j−K1+1}     · · ·  u_{i−K1, j+K2}   ⎞
⎜ u_{i−K1+1, j−K1}   u_{i−K1+1, j−K1+1}   · · ·  u_{i−K1+1, j+K2} ⎟
⎜       ⋮                   ⋮              ⋱          ⋮          ⎟
⎝ u_{i+K2, j−K1}     u_{i+K2, j−K1+1}     · · ·  u_{i+K2, j+K2}   ⎠  ∈ R^{K×K},

where K1 = [(K − 1)/2], K2 = [K/2], and [x] denotes the largest integer not exceeding x. Similarly, by stacking all the columns of u_{i,j,K}, we obtain a vector u_{i,j,K} ∈ R^{K²}. Then an overlapping group sparsity functional is defined as

φ(u) = Σ_{i,j=1}^{n} ‖u_{i,j,K}‖₂.

According to [10, 18, 19], the anisotropic TV functional with overlapping group sparsity is given by

Φ(Du) = φ(D1u) + φ(D2u),

where D : R^{n²} → R^{2n²} is the discrete gradient operator satisfying (Du)_{i,j} = ((D1u)_{i,j}, (D2u)_{i,j}). Here, D1 and D2 are the first-order finite difference operators in the horizontal and vertical directions under the periodic boundary condition.
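For concreteness, the overlapping group sparsity functional φ and the anisotropic OGS-TV functional Φ(Du) can be sketched as follows. This is a minimal NumPy sketch for an n × n image; the zero padding at the group boundary is an assumption made here for simplicity, while the difference operators use the paper's periodic boundary condition:

```python
import numpy as np

def ogs_functional(u, K=3):
    """phi(u) = sum over (i, j) of the l2 norm of the K x K group around (i, j).
    Assumes a square n x n image; groups at the border are zero-padded here."""
    n = u.shape[0]
    K1, K2 = (K - 1) // 2, K // 2
    up = np.pad(u, ((K1, K2), (K1, K2)))      # zero-pad so every group exists
    total = 0.0
    for i in range(n):
        for j in range(n):
            group = up[i:i + K, j:j + K]      # K x K patch around pixel (i, j)
            total += np.linalg.norm(group)
    return total

def ogs_tv(u, K=3):
    """Anisotropic OGS-TV: phi(D1 u) + phi(D2 u), periodic differences."""
    d1 = np.roll(u, -1, axis=1) - u           # horizontal forward difference
    d2 = np.roll(u, -1, axis=0) - u           # vertical forward difference
    return ogs_functional(d1, K) + ogs_functional(d2, K)
```

Note that a constant image has zero OGS-TV, since both difference images vanish.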

2.2 Classic ADMM

The ADMM technique is widely applied to the constrained separable optimization problem

min_{x∈X, y∈Y} f(x) + g(y)   s.t. Ax + By = b,  (3)

where f(x) and g(y) are closed, convex, lower semicontinuous functions, X ⊂ R^m and Y ⊂ R^n are closed convex sets, and A ∈ R^{l×m} and B ∈ R^{l×n} are linear operators [21–23]. By introducing a multiplier λ ∈ R^l, the corresponding augmented Lagrangian function is given by

L(x, y; λ) = f(x) + g(y) + ⟨λ, Ax + By − b⟩ + (β/2)‖Ax + By − b‖₂²,  (4)


where β is a positive penalty parameter. In the framework of ADMM, the iterate (x^{k+1}, y^{k+1}) is obtained by

x^{k+1} = arg min_x f(x) + (β/2)‖Ax + By^k − b + λ^k/β‖₂²,
y^{k+1} = arg min_y g(y) + (β/2)‖Ax^{k+1} + By − b + λ^k/β‖₂²,  (5)
λ^{k+1} = λ^k + γβ(Ax^{k+1} + By^{k+1} − b),

where γ > 0 is a relaxation parameter. Based on the work [22], if γ ∈ (0, (√5 + 1)/2), the convergence of ADMM is guaranteed.
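To make scheme (5) concrete, the following sketch applies it to a hypothetical toy problem, min_{x,y} 0.5‖x − c‖₂² + ‖y‖₁ subject to x − y = 0 (so A = I, B = −I, b = 0), whose known solution is the soft-thresholding of c. The problem and names are illustrative only, not the paper's model:

```python
import numpy as np

def admm_l1_toy(c, beta=1.0, gamma=1.0, iters=500):
    """Classic ADMM (scheme (5)) for min 0.5||x - c||^2 + ||y||_1, s.t. x = y.
    The minimizer is soft(c, 1), i.e. componentwise shrinkage by 1."""
    x = np.zeros_like(c)
    y = np.zeros_like(c)
    lam = np.zeros_like(c)
    for _ in range(iters):
        # x-step: quadratic, solved in closed form
        x = (c + beta * y - lam) / (1.0 + beta)
        # y-step: l1 proximal operator -> soft thresholding with threshold 1/beta
        t = x + lam / beta
        y = np.sign(t) * np.maximum(np.abs(t) - 1.0 / beta, 0.0)
        # multiplier update with relaxation parameter gamma
        lam = lam + gamma * beta * (x - y)
    return x
```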

3 The Proposed Model

Inspired by the works [1, 6, 10], we propose an exponential variational model (referred to as OGSTV-EXP) for removing the mixed additive and multiplicative noise:

min_{u>0} φ(D1u) + φ(D2u) + (α1/2)‖u − f‖₂² + α2⟨u + f e^{−u}, 1⟩,  (6)

where 1 denotes the vector whose components are all equal to one, and multiplication between two vectors is performed componentwise. The fourth term of the model (6) is a data-fitting term obtained by a logarithmic transformation, which is nonlinear. To overcome this disadvantage, we also present an I-divergence variational model by combining the TV with overlapping group sparsity,

min_{u>0} φ(D1u) + φ(D2u) + (α1/2)‖u − f‖₂² + α2⟨u − f log u, 1⟩.  (7)

We refer to this model as OGSTV-Idiv. According to [7], due to the invariance property of TV, the exponential model and the I-divergence model have the same solutions for removing the mixed additive and multiplicative noise. To treat the models (6) and (7) together, we rewrite them as the synthetic model

min_{u>0} φ(D1u) + φ(D2u) + (α1/2)‖u − f‖₂² + α2F(u),  (8)

where F(u) equals ⟨u + f e^{−u}, 1⟩ for (6) and ⟨u − f log u, 1⟩ for (7). In the following, we apply the ADMM technique described above to solve the minimization model (8).


By introducing three auxiliary variables v1, v2, w ∈ R^{n²}, we transform the model (8) into the equivalent constrained minimization

min_{u>0, v1, v2, w} φ(v1) + φ(v2) + (α1/2)‖u − f‖₂² + α2F(w)   s.t. v1 = D1u, v2 = D2u, w = u.  (9)

Then the corresponding augmented Lagrangian function is given by

L(u, v1, v2, w; λ1, λ2, λ3) = φ(v1) + ⟨λ1, v1 − D1u⟩ + (β1/2)‖v1 − D1u‖₂²
    + φ(v2) + ⟨λ2, v2 − D2u⟩ + (β1/2)‖v2 − D2u‖₂²
    + (α1/2)‖u − f‖₂² + α2F(w) + ⟨λ3, w − u⟩ + (β2/2)‖w − u‖₂²,

where β1, β2 > 0 are penalty parameters. Within the framework of ADMM, the whole algorithm for removing the mixed additive and multiplicative noise is given as follows.

Algorithm 1 ADMM algorithm for solving (8)

1: Initialize u^0, v1^0, v2^0, w^0, λ1^0, λ2^0, and λ3^0; set α1, α2, β1, β2, γ.
2: For k = 0, 1, 2, . . ., compute u^{k+1}, v1^{k+1}, v2^{k+1}, w^{k+1}, λ1^{k+1}, λ2^{k+1}, λ3^{k+1} by

   v_l^{k+1} = arg min_{v_l} φ(v_l) + (β1/2)‖v_l − D_l u^k + λ_l^k/β1‖₂², l = 1, 2,  (10)
   w^{k+1} = arg min_w α2F(w) + (β2/2)‖w − u^k + λ3^k/β2‖₂²,  (11)
   u^{k+1} = arg min_u L(u, v1^{k+1}, v2^{k+1}, w^{k+1}; λ1^k, λ2^k, λ3^k),  (12)
   λ_l^{k+1} = λ_l^k + γβ1(v_l^{k+1} − D_l u^{k+1}), l = 1, 2,
   λ3^{k+1} = λ3^k + γβ2(w^{k+1} − u^{k+1}).

3: If u^{k+1} satisfies the stopping criterion ‖u^{k+1} − u^k‖₂/‖u^k‖₂ ≤ 1 × 10⁻⁴, return u^{k+1} and stop.

1. To obtain v1^{k+1} and v2^{k+1}, we utilize the classical majorization-minimization (MM) method to solve (10). The MM method is an effective way to address difficult optimization problems. Specifically, let Q(t, t′) be a majorizer¹ of the function P(t). Then, instead of directly minimizing P(t), the MM method solves the easier iterative minimization problem

t^{k+1} = arg min_t Q(t, t^k).  (13)

Note that when P(t) is convex, the sequence t^{k+1} obtained by (13) converges to the minimizer of P(t). Therefore, in order to solve (10), we first need to find a majorizer of φ(v_l). Based on the fact that

(1/2)( (1/‖t‖₂)‖s‖₂² + ‖t‖₂ ) ≥ ‖s‖₂ for all t, s ∈ R^{n²} with t ≠ 0,

we have a majorizer of φ(v_l),

S(t, v_l) = (1/2) Σ_{i,j=1}^{n} ( (1/‖t_{i,j,K}‖₂)‖(v_l)_{i,j,K}‖₂² + ‖t_{i,j,K}‖₂ )
          = (1/2)‖Λ(t)v_l‖₂² + C(t), l = 1, 2,

where t_{i,j,K} ≠ 0, C(t) is independent of v_l, and Λ(t) is a diagonal matrix whose ((j − 1)n + i)th diagonal element is

√( Σ_{m1,m2=−K1}^{K2} ( Σ_{n1,n2=−K1}^{K2} |t_{i−m1+n1, j−m2+n2}|² )^{−1/2} ).

¹ A function Q(t, t′) is a majorizer of the function P(t) if Q(t, t′) ≥ P(t) for all t, t′ and Q(t, t) = P(t).
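The MM idea above can be illustrated on a hypothetical one-dimensional example: minimizing P(t) = |t| + (β/2)(t − a)², whose minimizer is the soft-thresholding soft(a, 1/β). Majorizing |t| by (1/2)(t²/|t′| + |t′|) gives a closed-form quadratic update at every step. This scalar problem is for illustration only, not the v_l-subproblem itself:

```python
def mm_scalar(a, beta=1.0, iters=100):
    """MM iteration for P(t) = |t| + (beta/2)(t - a)^2, using the majorizer
    Q(t, t') = 0.5*(t**2/|t'| + |t'|) + (beta/2)*(t - a)**2.
    Each update minimizes the quadratic Q(., t_k) in closed form."""
    t = a  # start at the (nonzero) data value so the majorizer is defined
    for _ in range(iters):
        # stationarity of Q(., t): t/|t_k| + beta*(t - a) = 0
        t = beta * a / (beta + 1.0 / abs(t))
    return t
```

For |a| > 1/β the iterates converge to sign(a)(|a| − 1/β), the soft-threshold value.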

As a result, we solve the minimization

v_l^{k+1} = arg min_{v_l} (1/2)‖Λ(v_l^k)v_l‖₂² + (β1/2)‖v_l − D_l u^k + λ_l^k/β1‖₂², l = 1, 2.

The solutions of (10) are obtained by

v_l^{k+1} = ( I + (1/β1)Λ²(v_l^k) )⁻¹ ( D_l u^k − λ_l^k/β1 ), l = 1, 2,  (14)

where I represents the identity matrix.

2. For the w-subproblem, if F(w) = ⟨w + f e^{−w}, 1⟩, we apply the Newton iterative method to solve (11). If instead F(w) = ⟨w − f log w, 1⟩, the first-order optimality condition with respect to w yields the equation

w² + ( (α2/β2)1 + λ3^k/β2 − u^k )w − (α2/β2)f = 0,

where all operations are performed componentwise. The solution is given by

w^{k+1} = ( u^k − (α2/β2)1 − λ3^k/β2 + √( (u^k − (α2/β2)1 − λ3^k/β2)² + (4α2/β2)f ) ) / 2.  (15)
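A componentwise sketch of the closed-form update (15) for the I-divergence case (variable names are illustrative):

```python
import numpy as np

def w_update_idiv(u, lam3, f, alpha2, beta2):
    """Positive root of w^2 + (alpha2/beta2 + lam3/beta2 - u)*w - (alpha2/beta2)*f = 0,
    solved componentwise: the closed-form solution (15)."""
    b = u - alpha2 / beta2 - lam3 / beta2          # minus the linear coefficient
    return 0.5 * (b + np.sqrt(b * b + 4.0 * alpha2 / beta2 * f))
```

With f > 0 the discriminant is strictly positive, so the update keeps w > 0, consistent with the constraint u > 0 in (8).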


3. With respect to the u-subproblem, we obtain the corresponding normal equation

( β1 DᵀD + (α1 + β2)I ) u = Σ_{l=1}^{2} D_lᵀ( λ_l^k + β1 v_l^{k+1} ) + α1 f + λ3^k + β2 w^{k+1}.

Under the periodic boundary condition, the Hessian matrix on the left-hand side can be diagonalized by the discrete fast Fourier transform F. Consequently, we get the solution

u^{k+1} = F⁻¹( F( Σ_{l=1}^{2} D_lᵀ(λ_l^k + β1 v_l^{k+1}) + α1 f + λ3^k + β2 w^{k+1} ) / F( β1 DᵀD + (α1 + β2)I ) ),  (16)

where F⁻¹ represents the inverse fast Fourier transform.
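The FFT solve in (16) can be sketched as follows. This is a minimal NumPy version in which `rhs_img` denotes the right-hand side of the normal equation reshaped to an image (an assumed interface); under periodic boundaries the eigenvalues of each D_lᵀD_l are 2 − 2cos(2πk/n):

```python
import numpy as np

def u_update_fft(rhs_img, alpha1, beta1, beta2):
    """Solve (beta1*D^T D + (alpha1 + beta2)*I) u = rhs by FFT diagonalization,
    assuming periodic boundary conditions on an n1 x n2 image."""
    n1, n2 = rhs_img.shape
    # eigenvalues of D1^T D1 (horizontal) and D2^T D2 (vertical)
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n2) / n2)
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n1) / n1)
    denom = beta1 * (wy[:, None] + wx[None, :]) + alpha1 + beta2
    return np.real(np.fft.ifft2(np.fft.fft2(rhs_img) / denom))
```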

Algorithm 1 is a direct application of the classic ADMM. Motivated by [22], we give a convergence analysis of Algorithm 1.

Theorem 1 For fixed β1, β2 > 0 and γ ∈ (0, (√5 + 1)/2), the ADMM algorithm for the model (8) converges.

Proof To establish the convergence of Algorithm 1, we first transform (8) into the general constrained convex problem (3). To this end, let

x = (v1, v2, w),  f(x) = φ(v1) + φ(v2) + α2F(w),
y = u,  g(y) = (α1/2)‖u − f‖₂².

The constraints in (9) can then be rewritten in the form

Ax + By = b,

where

A = ⎛ I 0 0 ⎞        ⎛ −D1 ⎞
    ⎜ 0 I 0 ⎟ ,  B = ⎜ −D2 ⎟ ,  b = 0.
    ⎝ 0 0 I ⎠        ⎝ −I  ⎠

According to [22, 24], for fixed β1, β2 > 0 and γ ∈ (0, (√5 + 1)/2), Algorithm 1 is convergent.

Note that the v1-, v2-, and u-subproblems in Algorithm 1 have closed-form solutions. Although the w-subproblem is solved approximately by the Newton iterative method, Algorithm 1 is empirically convergent. Furthermore, Fig. 1 shows that the energy functional value decreases monotonically as the iteration number increases.


Fig. 1 Plot of the energy functional values in (8) for the image "test," where the level of the mixed noise is (10, 0.3)

Fig. 2 Original images. (a) Test (256 × 256); (b) house (256 × 256); (c) peppers (512 × 512); (d) SAR (256 × 256)

4 Numerical Experiments

In this section, we demonstrate the performance of the proposed methods for removing the mixed additive and multiplicative noise. Figure 2 shows four 8-bit grayscale test images, including three natural images and one real SAR image. We compare the proposed methods with TV-EXP [15]. All numerical experiments are performed under Windows 10 and MATLAB R2015b on a Lenovo desktop with a 3.4 GHz Intel Core CPU and 4 GB RAM. We apply PSNR and SSIM to measure the quality of the restored images, defined respectively as

PSNR = 20 log10( 255n / ‖u* − u‖₂ ),  SSIM = (2μ_{u*}μ_u + c1)(2σ + c2) / ( (μ²_{u*} + μ²_u + c1)(σ²_{u*} + σ²_u + c2) ),


Table 1 The values of PSNR and SSIM for the restored images by applying different methods

Image     Noise level   TV-EXP           OGSTV-EXP        OGSTV-Idiv
          (k0, k1)      PSNR    SSIM     PSNR    SSIM     PSNR    SSIM
Test      (10, 0.1)     35.83   0.9451   36.26   0.9664   36.35   0.9680
          (20, 0.1)     33.83   0.9335   34.40   0.9623   34.33   0.9570
          (10, 0.3)     30.31   0.9160   31.15   0.9460   31.10   0.9450
          (20, 0.3)     29.68   0.9014   30.27   0.9424   30.31   0.9270
House     (10, 0.1)     30.18   0.8142   30.68   0.8231   30.72   0.8240
          (20, 0.1)     28.77   0.7917   29.26   0.8023   29.32   0.8040
          (10, 0.3)     26.47   0.7421   27.02   0.7564   27.06   0.7570
          (20, 0.3)     25.72   0.7188   26.35   0.7421   26.38   0.7350
Peppers   (10, 0.1)     30.55   0.9205   31.11   0.9265   31.13   0.9270
          (20, 0.1)     28.97   0.8890   29.66   0.9041   29.67   0.9050
          (10, 0.3)     27.09   0.8521   27.65   0.8626   27.65   0.8620
          (20, 0.3)     26.48   0.8292   26.96   0.8482   26.96   0.8470
SAR       (10, 0.1)     27.42   0.7542   27.71   0.7670   27.77   0.7701
          (20, 0.1)     26.00   0.7119   26.25   0.7164   26.30   0.7203
          (10, 0.3)     24.10   0.6297   24.32   0.6353   24.35   0.6395
          (20, 0.3)     23.56   0.6021   23.77   0.6103   23.81   0.6180

where u* is the restored image, μ_{u*} is the mean of u*, μ_u is the mean of the original image u, σ²_{u*} and σ²_u are their respective variances, σ is the covariance of u* and u, and c1, c2 > 0 are constants.

For the parameters α1, α2, β1, and β2 in Algorithm 1, we tune manually to obtain the highest PSNR values. Since the value of γ affects the convergence speed, we set γ = 1.618, which makes the ADMM algorithm converge faster than γ = 1. In addition, we set the number of Newton iterations for solving (6) to 5, and the number of MM iterations to 10 for solving both (6) and (7). For the TV regularization with overlapping group sparsity, we set the group size K = 3.
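A sketch of the PSNR measure as defined above, for an n × n image (the function name is illustrative; SSIM is omitted since it additionally requires the local statistics μ and σ):

```python
import numpy as np

def psnr(u_star, u):
    """PSNR = 20*log10(255*n / ||u* - u||_2) for an n x n image."""
    n = u.shape[0]
    return 20.0 * np.log10(255.0 * n / np.linalg.norm(u_star - u))
```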

In the experiments, the original images are corrupted by the mixed additive and multiplicative noise with the noise levels (10, 0.1), (20, 0.1), (10, 0.3), and (20, 0.3). To show the superior performance, we compare the three methods for mixed additive and multiplicative noise removal. Table 1 lists the PSNR and SSIM values of the restored images. Compared with OGSTV-EXP and OGSTV-Idiv, the TV-EXP model clearly gives the worst PSNR and SSIM values; in particular, the PSNR values of the proposed methods are about 0.6 dB higher than those of the TV-EXP model. Furthermore, the OGSTV-EXP and OGSTV-Idiv models obtain competitive results in terms of PSNR and SSIM.

Figure 3 shows the comparison of the different methods for removing the mixed noise. The restored images of the TV-EXP model obviously exhibit the staircase effect, whereas our proposed methods preserve the image details while removing the mixed additive and multiplicative noise. Owing to the TV with overlapping group sparsity, the two proposed variational methods clearly outperform TV-EXP, and the staircase artifacts are effectively eliminated. To further


Fig. 3 Comparison of different methods for removing the mixed additive and multiplicative noise with the noise level (10, 0.3). (a) Noisy images; (b) TV-EXP; (c) OGSTV-EXP; (d) OGSTV-Idiv

illustrate the performance of the proposed methods, we give zoomed versions of the original and restored images in Fig. 4. We observe that the staircase artifacts in the homogeneous regions are reduced by the proposed methods, especially in the restored images "test," "peppers," and "SAR."

5 Conclusion

In this paper, we review the TV regularization with overlapping group sparsity and the classic ADMM. Based on the exponential model [6] and the I-divergence model [7], we present two convex variational models for removing the mixed additive and


Fig. 4 Zoomed versions of the images in Fig. 3. (a) Original images; (b) TV-EXP; (c) OGSTV-EXP; (d) OGSTV-Idiv

multiplicative noise. Owing to its convergence properties, ADMM is applied to solve the proposed variational problems. Numerical experiments show that the proposed methods outperform the TV-EXP model in both qualitative and quantitative comparisons.

Acknowledgements This research is supported by NSFC (61772003, 61402082, 11401081) and the Fundamental Research Funds for the Central Universities (ZYGX2016J129).

References

1. Rudin LI, Osher S, Fatemi E (1992) Nonlinear total variation based noise removal algorithms. Phys D Nonlinear Phenom 60:259–268. https://doi.org/10.1016/0167-2789(92)90242-F
2. Chambolle A (2004) An algorithm for total variation minimization and applications. J Math Imaging Vis 20:89–97. https://doi.org/10.1023/B:JMIV.0000011325.36760.1e
3. Chan RH, Tao M, Yuan XM (2013) Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers. SIAM J Imaging Sci 6:680–697. https://doi.org/10.1137/110860185
4. Aubert G, Aujol JF (2008) A variational approach to removing multiplicative noise. SIAM J Appl Math 68:925–946. https://doi.org/10.1137/060671814
5. Beck A, Teboulle M (2009) Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans Image Process 18:2419–2434. https://doi.org/10.1109/TIP.2009.2028250
6. Bioucas-Dias JM, Figueiredo MAT (2010) Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans Image Process 19:1720–1730. https://doi.org/10.1109/TIP.2010.2045029
7. Steidl G, Teuber T (2010) Removing multiplicative noise by Douglas-Rachford splitting methods. J Math Imaging Vis 36:168–184. https://doi.org/10.1007/s10851-009-0179-5
8. Zhao XL, Wang F, Ng MK (2014) A new convex optimization model for multiplicative noise and blur removal. SIAM J Imaging Sci 7:456–475. https://doi.org/10.1137/13092472X
9. Dong YQ, Zeng TY (2013) A convex variational model for restoring blurred images with multiplicative noise. SIAM J Imaging Sci 6:1598–1625. https://doi.org/10.1137/120870621
10. Liu J, Huang TZ, Selesnick IW, Lv XG, Chen PY (2015) Image restoration using total variation with overlapping group sparsity. Inform Sci 295:232–246. https://doi.org/10.1016/j.ins.2014.10.041
11. Mei JJ, Huang TZ (2016) Primal-dual splitting method for high-order model with application to image restoration. Appl Math Model 40:2322–2332. https://doi.org/10.1016/j.apm.2015.09.068
12. Mei JJ, Dong YQ, Huang TZ, Yin WT (2017) Cauchy noise removal by nonconvex ADMM with convergence guarantees. J Sci Comput 1–24. https://doi.org/10.1007/s10915-017-0460-5
13. Lukin VV, Fevralev DV, Ponomarenko NN, Abramov SK, Pogrebnyak O, Egiazarian KO, Astola JT (2010) Discrete cosine transform-based local adaptive filtering of images corrupted by nonstationary noise. J Electron Imaging 19:023007. https://doi.org/10.1117/1.3421973
14. Hirakawa K, Parks TW (2006) Image denoising using total least squares. IEEE Trans Image Process 15:2730–2742. https://doi.org/10.1109/TIP.2006.877352
15. Chumchob N, Chen K, Brito-Loeza C (2013) A new variational model for removal of combined additive and multiplicative noise and a fast algorithm for its numerical approximation. Int J Comput Math 90:140–161. https://doi.org/10.1080/00207160.2012.709625
16. Almgren F (1987) Review: Enrico Giusti, minimal surfaces and functions of bounded variation. Bull Am Math Soc (NS) 16:167–171
17. Ambrosio L, Fusco N, Pallara D (2000) Functions of bounded variation and free discontinuity problems. Oxford mathematical monographs. The Clarendon Press/Oxford University Press, New York
18. Selesnick IW, Chen PY (2013) Total variation denoising with overlapping group sparsity. In: 2013 IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 5696–5700. https://doi.org/10.1109/ICASSP.2013.6638755
19. Liu G, Huang TZ, Liu J, Lv XG (2015) Total variation with overlapping group sparsity for image deblurring under impulse noise. PLOS ONE 10:1–23. https://doi.org/10.1371/journal.pone.0122562
20. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13:600–612. https://doi.org/10.1109/TIP.2003.819861
21. Yang JF, Zhang Y, Yin WT (2010) A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data. IEEE J Sel Top Signal Process 4:288–297. https://doi.org/10.1109/JSTSP.2010.2042333
22. He BS, Yang H (1998) Some convergence properties of a method of multipliers for linearly constrained monotone variational inequalities. Oper Res Lett 23:151–161. https://doi.org/10.1016/S0167-6377(98)00044-3
23. Chen C, Ng MK, Zhao XL (2015) Alternating direction method of multipliers for nonlinear image restoration problems. IEEE Trans Image Process 24:33–43. https://doi.org/10.1109/TIP.2014.2369953
24. Glowinski R (1984) Numerical methods for nonlinear variational problems. Springer, Berlin/Heidelberg. https://doi.org/10.1007/978-3-662-12613-4