Convex and Non-Convex Optimization in Image Recovery and Segmentation
Tieyong Zeng, Dept. of Mathematics, HKBU
29 May - 2 June 2017, NUS, Singapore

Source: ims.nus.edu.sg/events/2017/data/files/tieyong.pdf


Page 1

Convex and Non-Convex Optimization in Image Recovery and Segmentation

Tieyong Zeng
Dept. of Mathematics, HKBU

29 May - 2 June 2017, NUS, Singapore

Page 2

Outline

1. Variational Models for Rician Noise Removal

2. Two-stage Segmentation

3. Dictionary and Weighted Nuclear Norm

Page 3

Background

We assume that the noisy image f is obtained from a perfect unknown image u:

f = u + b.

• b: additive Gaussian noise.
• TV-ROF:

min_u (µ/2) ∫_Ω (u − f)² dx + ∫_Ω |Du|,

where the constant µ > 0 and Ω ⊂ Rⁿ.
• The MAP estimation leads to the data-fitting term ∫_Ω (u − f)² dx.
• Smoothness comes from the edge-preserving total variation (TV) term ∫_Ω |Du|.
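The TV-ROF energy above can be minimized in many ways; as an illustrative sketch (not the scheme used in these slides), here is a smoothed-TV gradient descent in NumPy. The function name and the parameters `mu`, `step`, `iters`, and `eps` are our own choices, not from the slides:

```python
import numpy as np

def tv_rof_denoise(f, mu=4.0, step=0.1, iters=100, eps=0.01):
    """Gradient descent on (mu/2)*||u - f||^2 + TV_eps(u), where
    TV_eps(u) = sum sqrt(|grad u|^2 + eps^2) is a smoothed total variation."""
    u = f.astype(float).copy()
    for _ in range(iters):
        # forward differences (Neumann boundary: zero difference at the edge)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # discrete divergence, the (negative) adjoint of the gradient above
        dx = px - np.roll(px, 1, axis=1); dx[:, 0] = px[:, 0]
        dy = py - np.roll(py, 1, axis=0); dy[0, :] = py[0, :]
        u -= step * (mu * (u - f) - (dx + dy))
    return u
```

With a strong enough noise level, the result is closer to the clean image than the noisy input is, which is exactly the behavior the variational model is designed for.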

Page 4

Rician noise with Blur

The degradation process reads

f = √((Au + η₁)² + η₂²).

• A describes the blur operator.
• The Rician noise is built from white Gaussian noise η₁, η₂ ∼ N(0, σ²).
• The distribution has probability density

p(f | u) = (f/σ²) exp(−((Au)² + f²)/(2σ²)) I₀(Au f/σ²).
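The degradation process above is easy to simulate; a minimal sketch (the function name and parameters are our own, not from the slides):

```python
import numpy as np

def add_rician_noise(Au, sigma, rng):
    """Return f = sqrt((Au + eta1)^2 + eta2^2), eta1, eta2 ~ N(0, sigma^2)."""
    eta1 = rng.normal(0.0, sigma, np.shape(Au))
    eta2 = rng.normal(0.0, sigma, np.shape(Au))
    return np.sqrt((Au + eta1) ** 2 + eta2 ** 2)
```

Note that Rician noise is signal-dependent: f is always nonnegative and is biased upward relative to Au, especially where Au is small.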

Page 5

TV-model for removing Rician noise with blur

The MAP estimation leads to:

inf_u (1/(2σ²)) ∫_Ω (Au)² dx − ∫_Ω log I₀(Au f/σ²) dx + γ TV(u),

• I₀ is the zero-order modified Bessel function of the first kind, a solution of x y″ + y′ − x y = 0.
• The TV prior can recover sharp edges; the constant γ > 0.
• Non-convex!

Page 6

Convex variational model for denoising and deblurring

[Getreuer-Tong-Vese, ISVC, 2011]

inf_u ∫_Ω Gσ(Au, f) dx + γ TV(u).   (1)

Let z = Au and c = 0.8246,

Gσ(z, f) = Hσ(z)                       if z ≥ cσ,
           Hσ(cσ) + Hσ′(cσ)(z − cσ)    if z < cσ,

Hσ(z) = (f² + z²)/(2σ²) − log I₀(fz/σ²),

Hσ′(z) = z/σ² − (f/σ²) B(fz/σ²),

B(t) = I₁(t)/I₀(t) ≈ (t³ + 0.950037 t² + 2.38944 t)/(t³ + 1.48937 t² + 2.57541 t + 4.65314).

• Complex, and it is difficult to derive its mathematical properties.
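The rational approximation of B(t) = I₁(t)/I₀(t) quoted above can be checked numerically; a sketch using the power series of Iₙ (the `bessel_i` helper is our own, not part of the slides):

```python
import math

def bessel_i(n, t, terms=80):
    """Modified Bessel function I_n(t) via its power series."""
    term = (t / 2.0) ** n / math.factorial(n)
    total = term
    for k in range(1, terms):
        term *= (t / 2.0) ** 2 / (k * (k + n))
        total += term
    return total

def B_series(t):
    return bessel_i(1, t) / bessel_i(0, t)

def B_rational(t):
    # coefficients as quoted on the slide (from Getreuer-Tong-Vese)
    num = t**3 + 0.950037 * t**2 + 2.38944 * t
    den = t**3 + 1.48937 * t**2 + 2.57541 * t + 4.65314
    return num / den
```

Over a moderate range of t the two agree to within roughly 1e-3, which is why the approximation is usable inside the model.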

Page 7

New elegant convex TV-model

We introduce a quadratic penalty term:

inf_u (1/(2σ²)) ∫_Ω (Au)² dx − ∫_Ω log I₀(Au f/σ²) dx + (1/σ) ∫_Ω (√(Au) − √f)² dx + γ TV(u).   (2)

Page 8

The quadratic penalty term

Why the penalty term (1/σ) ∫_Ω (√f − √(Au))² dx?

• The value of e is always bounded, where e is defined below.

Proposition 1. Suppose that the variables η₁ and η₂ independently follow the normal distribution N(0, σ²). Set f = √((u + η₁)² + η₂²), where u is fixed and u ≥ 0. Then we get the following inequality:

e := E((√f − √u)²)/σ ≤ √(2/π) (π + 2).
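Proposition 1 is easy to sanity-check by Monte Carlo; a sketch (the function name, sample size, and parameter values are arbitrary choices of ours):

```python
import numpy as np

def penalty_ratio(u, sigma, n=200000, seed=0):
    """Monte Carlo estimate of e = E((sqrt(f) - sqrt(u))^2) / sigma
    for f = sqrt((u + eta1)^2 + eta2^2), eta1, eta2 ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    eta1 = rng.normal(0.0, sigma, n)
    eta2 = rng.normal(0.0, sigma, n)
    f = np.sqrt((u + eta1) ** 2 + eta2 ** 2)
    return np.mean((np.sqrt(f) - np.sqrt(u)) ** 2) / sigma

BOUND = np.sqrt(2.0 / np.pi) * (np.pi + 2.0)  # the bound of Proposition 1, about 4.10
```

For natural-image intensities (u around 100, σ around 10-25), the estimated e is far below the bound, which matches the small values reported in the table two slides below.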

Page 9

Proof of the Proposition

Lemma 1. Assume that a, b ∈ R. Then |(u² + 2au + b²)^(1/4) − u^(1/2)| ≤ √|a| + √|b| holds whenever u ≥ 0 and |a| ≤ |b|.

• Based on the lemma, we get

E((√f − √u)²) ≤ 2E(|η₁|) + 2E((η₁² + η₂²)^(1/2)).

• Set Y := (η₁² + η₂²)/σ², where η₁, η₂ ∼ N(0, σ²). Thus Y ∼ χ²(2).
• It can be shown that

E(√Y) = E((η₁² + η₂²)^(1/2))/σ = √(2π)/2,   E(|η₁|) = √(2/π) σ.

Page 10

Approximation table

Table: The values of e with various values of σ for different original images u.

Image       σ = 5    σ = 10   σ = 15   σ = 20   σ = 25
Cameraman   0.0261   0.0418   0.0571   0.0738   0.0882
Bird        0.0131   0.0255   0.0371   0.0486   0.0590
Skull       0.0453   0.0754   0.1009   0.1234   0.1425
Leg joint   0.0356   0.0654   0.0906   0.1105   0.1263

Page 11

The quadratic penalty term

Why the penalty term (1/σ) ∫_Ω (√f − √(Au))² dx?

• By adding this term, we obtain a strictly convex model (2) for restoring blurry images with Rician noise.

Page 12

Convexity of the proposed model (2)

How to prove the convexity of the proposed model (2)?

• Let us define a function g₀ as

g₀(t) := − log I₀(t) − 2√t.

• If g₀(t) is strictly convex on [0, +∞), then the convexity of the first three terms of model (2), taken as a whole, can be easily proved by letting t = f(x)Au(x)/σ².
• Since ∫_Ω |∇u| is convex, the proposed model is strictly convex.

Page 13

Convexity of the proposed model (2)

How to prove the strict convexity of the function g₀(t)?

•   g₀″(t) = −((I₀(t) + I₂(t)) I₀(t) − 2I₁²(t))/(2I₀²(t)) + (1/2) t^(−3/2).

Here, Iₙ(t) is the modified Bessel function of the first kind of order n.

• Based on the following lemma, we can prove that g₀(t) is strictly convex.

Lemma 2. Let h(t) = t^(3/2) ((I₀(t) + I₂(t)) I₀(t) − 2I₁²(t))/I₀²(t). Then 0 ≤ h(t) < 1 on [0, +∞).
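Lemma 2 gives g₀″(t) = (1 − h(t))/(2 t^(3/2)) > 0, and h can be checked numerically; a sketch using the power series of Iₙ (the `bessel_i` helper is our own, not from the slides):

```python
import math

def bessel_i(n, t, terms=80):
    """Modified Bessel function I_n(t) via its power series."""
    term = (t / 2.0) ** n / math.factorial(n)
    total = term
    for k in range(1, terms):
        term *= (t / 2.0) ** 2 / (k * (k + n))
        total += term
    return total

def h(t):
    """The function of Lemma 2; strict convexity of g0 needs 0 <= h(t) < 1."""
    i0, i1, i2 = bessel_i(0, t), bessel_i(1, t), bessel_i(2, t)
    return t ** 1.5 * ((i0 + i2) * i0 - 2.0 * i1 ** 2) / i0 ** 2
```

Numerically, h rises to a maximum of roughly 0.93 near t ≈ 2 and then decays like t^(−1/2), staying strictly below 1.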

Page 14

Existence and uniqueness of a solution

Theorem 1. Let f be in L∞(Ω) with inf_Ω f > 0. Then model (2) has a unique solution u* in BV(Ω) satisfying

0 < (σ²/(2 sup_Ω f + σ)²) inf_Ω f ≤ u* ≤ sup_Ω f.

• Set c₁ := (σ²/(2 sup_Ω f + σ)²) inf_Ω f, c₂ := sup_Ω f, and define two functions as follows:

E₀(u) := (1/(2σ²)) ∫_Ω u² dx − ∫_Ω log I₀(fu/σ²) dx + (1/σ) ∫_Ω (√u − √f)² dx,

E₁(u) := E₀(u) + γ ∫_Ω |Du| dx,   (3)

where E₁(u) is the objective function of model (2).

Page 15

Existence and uniqueness of a solution

• According to the integral form of the modified Bessel functions of the first kind with integer order n, we have

I₀(x) = (1/π) ∫₀^π e^(x cos θ) dθ ≤ eˣ, ∀x ≥ 0,   (4)

thus, for each fixed x ∈ Ω, −log I₀(f(x)t/σ²) ≥ −f(x)t/σ² for t ≥ 0.

• E₁(u) in (3) is bounded below:

E₁(u) ≥ E₀(u) ≥ (1/(2σ²)) ∫_Ω u² dx − ∫_Ω log I₀(fu/σ²) dx
       ≥ ∫_Ω ((1/(2σ²)) u² − fu/σ²) dx
       ≥ −(1/(2σ²)) ∫_Ω f² dx.

Page 16

Existence and uniqueness of a solution

• For each fixed x ∈ Ω, let the real function g on R⁺ ∪ {0} be defined as

g(t) := (1/(2σ²)) t² − log I₀(f(x)t/σ²) + (1/σ)(√t − √f(x))²,

g′(t) = (1/σ²) t − (f(x)/σ²) I₁(f(x)t/σ²)/I₀(f(x)t/σ²) + (1/σ)(1 − √(f(x)/t)).

• We can prove that g(t) is increasing for t ∈ (f(x), +∞) and decreasing for 0 ≤ t < (σ²/(2f(x) + σ)²) f(x). This implies that g(min(t, V)) ≤ g(t) if V ≥ f(x).
• With ∫_Ω |D inf(u, c₂)| ≤ ∫_Ω |Du| [Kornprobst-Deriche-Aubert, 1999], we have E₁(inf(u, c₂)) ≤ E₁(u). Similarly, we get E₁(sup(u, c₁)) ≤ E₁(u).
• The unique solution u* of model (2) must therefore lie in [c₁, c₂].

Page 17

Minimum-maximum Principle

Lemma 3. The function I₀(x) is strictly log-convex for all x > 0.

• To prove that I₀(x) is strictly log-convex on (0, +∞), it suffices to show that the second derivative of its logarithm is positive on (0, +∞):

(log I₀(x))″ = ((1/2)(I₀(x) + I₂(x)) I₀(x) − I₁(x)²)/I₀²(x).

• Using the Cauchy-Schwarz inequality, we obtain

(1/2)(I₀(x) + I₂(x)) I₀(x) = (1/π) ∫₀^π cos²θ e^(x cos θ) dθ · (1/π) ∫₀^π e^(x cos θ) dθ
                           ≥ ((1/π) ∫₀^π cos θ e^(x cos θ) dθ)² = I₁(x)².

Since cos θ e^((x cos θ)/2) and e^((x cos θ)/2) are not linearly dependent as θ varies, the inequality above is strict.

Page 18

Minimum-maximum Principle

Lemma 4. Let g(x) be strictly convex, strictly increasing, and differentiable on (0, +∞). Assume that 0 < a < b and 0 < c < d; then g(ac) + g(bd) > g(ad) + g(bc).

Based on Theorem 1, Lemma 3, and Lemma 4, we can further establish the following comparison principle (minimum-maximum principle).

Page 19

Minimum-maximum Principle

Proposition 3. Let f₁ and f₂ be in L∞(Ω) with inf_Ω f₁ > 0 and inf_Ω f₂ > 0. Suppose u₁* (resp. u₂*) is a solution of model (2) with f = f₁ (resp. f = f₂). Assume that f₁ < f₂; then u₁* ≤ u₂* a.e. in Ω.

• Write u₁* ∧ u₂* := inf(u₁*, u₂*), u₁* ∨ u₂* := sup(u₁*, u₂*).
• E₁ⁱ(u) denotes E₁(u) defined in (3) with f = fᵢ.
• Since u₁* (resp. u₂*) is a solution of model (2) with f = f₁ (resp. f = f₂), we can easily get

E₁¹(u₁* ∧ u₂*) ≥ E₁¹(u₁*),   E₁²(u₁* ∨ u₂*) ≥ E₁²(u₂*).

Page 20

Minimum-maximum Principle

• Adding the two inequalities together, and using the fact that ∫_Ω |D(u₁* ∧ u₂*)| + ∫_Ω |D(u₁* ∨ u₂*)| ≤ ∫_Ω |Du₁*| + ∫_Ω |Du₂*|, we obtain

∫_Ω [(1/(2σ²))(u₁* ∧ u₂*)² − log I₀(f₁(u₁* ∧ u₂*)/σ²) + (1/σ)(√(u₁* ∧ u₂*) − √f₁)²] dx
+ ∫_Ω [(1/(2σ²))(u₁* ∨ u₂*)² − log I₀(f₂(u₁* ∨ u₂*)/σ²) + (1/σ)(√(u₁* ∨ u₂*) − √f₂)²] dx
≥ ∫_Ω [(1/(2σ²))(u₁*)² − log I₀(f₁u₁*/σ²) + (1/σ)(√u₁* − √f₁)²] dx
+ ∫_Ω [(1/(2σ²))(u₂*)² − log I₀(f₂u₂*/σ²) + (1/σ)(√u₂* − √f₂)²] dx.

Page 21

Minimum-maximum Principle

• As Ω can be written as Ω = {u₁* > u₂*} ∪ {u₁* ≤ u₂*}, it is clear that

∫_Ω ((u₁* ∧ u₂*)² + (u₁* ∨ u₂*)²) dx = ∫_Ω ((u₁*)² + (u₂*)²) dx.

• The inequality can be simplified as follows:

∫_{u₁* > u₂*} [log (I₀(f₁u₁*/σ²) I₀(f₂u₂*/σ²))/(I₀(f₁u₂*/σ²) I₀(f₂u₁*/σ²)) + (1/σ)(√u₁* − √u₂*)(√f₁ − √f₂)] dx ≥ 0.

• Since I₀(t) is an exponentially increasing function, log I₀ is strictly increasing.

Page 22

Minimum-maximum Principle

• Based on Lemma 3 and Lemma 4, we get that if f₁ < f₂ and u₁* > u₂*, then

log I₀(f₁u₁*/σ²) + log I₀(f₂u₂*/σ²) < log I₀(f₁u₂*/σ²) + log I₀(f₂u₁*/σ²),

which is equivalent to

log (I₀(f₁u₁*/σ²) I₀(f₂u₂*/σ²))/(I₀(f₁u₂*/σ²) I₀(f₂u₁*/σ²)) < 0.

• From the assumption f₁ < f₂, we conclude that {u₁* > u₂*} has zero Lebesgue measure, i.e., u₁* ≤ u₂* a.e. in Ω.

Page 23

Existence and uniqueness of a solution

Theorem 2. Assume that A ∈ L(L²(Ω)) is nonnegative and does not annihilate constant functions, i.e., A1 ≠ 0. Let f be in L∞(Ω) with inf_Ω f > 0. Then model (2) has a solution u*. Moreover, if A is injective, the solution is unique.

• Define E_A(u) as follows:

E_A(u) = (1/(2σ²)) ∫_Ω (Au)² dx − ∫_Ω log I₀(Au f/σ²) dx + (1/σ) ∫_Ω (√(Au) − √f)² dx + γ ∫_Ω |Du| dx.

• Similar to the proof of Theorem 1, E_A is bounded from below. Thus, we choose a minimizing sequence {uₙ} for (2) and find that ∫_Ω |Duₙ| is bounded.

Page 24

Existence and uniqueness of a solution

• Applying the Poincaré inequality, we get

‖uₙ − m_Ω(uₙ)‖₂ ≤ C ∫_Ω |D(uₙ − m_Ω(uₙ))| = C ∫_Ω |Duₙ|,   (5)

where m_Ω(uₙ) = (1/|Ω|) ∫_Ω uₙ dx, |Ω| denotes the measure of Ω, and C is a constant. As Ω is bounded, ‖uₙ − m_Ω(uₙ)‖₂ is bounded for each n.
• Since A ∈ L(L²(Ω)) is continuous, A(uₙ − m_Ω(uₙ)) must be bounded in L²(Ω) and in L¹(Ω).
• By the boundedness of E_A(uₙ), ‖√(Auₙ) − √f‖₂ is bounded, which implies that Auₙ is bounded in L¹(Ω). Meanwhile, we have

|m_Ω(uₙ)| · ‖A1‖₁ = ‖A(uₙ − m_Ω(uₙ)) − Auₙ‖₁ ≤ ‖A(uₙ − m_Ω(uₙ))‖₁ + ‖Auₙ‖₁,

so m_Ω(uₙ) is uniformly bounded, because A1 ≠ 0.

Page 25

Existence and uniqueness of a solution

• As uₙ − m_Ω(uₙ) is bounded, the boundedness of uₙ in L²(Ω), and thus in L¹(Ω), is obvious.
• Therefore, there exists a subsequence uₙₖ which converges weakly in L²(Ω) to some u* ∈ L²(Ω), and Duₙₖ converges weakly-∗ as a measure to Du*.
• Since the linear operator A is continuous, Auₙₖ converges weakly to Au* in L²(Ω) as well.
• Then, by the lower semi-continuity of the total variation and Fatou's lemma, u* is a solution of model (2).
• Furthermore, if A is injective, the minimizer is unique, since (2) is strictly convex.

Page 26

Numerical Algorithm

Primal-Dual Algorithm for solving model (2)

1: Fix τ and β. Initialize u⁰ = ū⁰ = w⁰ = w̄⁰ = f, v⁰ = ∇u⁰, p⁰ = (0, ..., 0)ᵀ ∈ R²ⁿ, and q⁰ = (0, ..., 0)ᵀ ∈ Rⁿ.
2: Compute p^(k+1), q^(k+1), u^(k+1), v^(k+1), and w^(k+1) from

p^(k+1) = argmax_p ⟨v̄^k − ∇ū^k, p⟩ − (1/(2β)) ‖p − p^k‖₂² = p^k + β(v̄^k − ∇ū^k),   (6)
q^(k+1) = argmax_q ⟨w̄^k − Aū^k, q⟩ − (1/(2β)) ‖q − q^k‖₂² = q^k + β(w̄^k − Aū^k),   (7)
u^(k+1) = argmin_{0≤u≤255} ⟨u, γ div p^(k+1) − Aᵀq^(k+1)⟩ + (1/(2τ)) ‖u − u^k‖₂² = u^k + τ(Aᵀq^(k+1) − γ div p^(k+1)),   (8)
v^(k+1) = argmin_v γ‖v‖₁ + ⟨v, p^(k+1)⟩ + (1/(2τ)) ‖v − v^k‖₂²,   (9)
w^(k+1) = argmin_{0≤w≤255} G(w) + ⟨w, q^(k+1)⟩ + (1/(2τ)) ‖w − w^k‖₂²,   (10)
ū^(k+1) = 2u^(k+1) − u^k,   (11)
v̄^(k+1) = 2v^(k+1) − v^k,   (12)
w̄^(k+1) = 2w^(k+1) − w^k.   (13)

3: Stop; or set k := k + 1 and go to step 2.
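To illustrate the structure of steps (6)-(13) (dual ascent, primal descent, then extrapolation), here is a simplified sketch for the pure denoising problem min_u γ TV(u) + (1/2)‖u − f‖², i.e., with A = Id and the Rician data term replaced by a quadratic one whose proximal step is closed-form. All names and step sizes are our own illustrative choices:

```python
import numpy as np

def grad(u):
    # forward differences with zero difference at the boundary
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return ux, uy

def div(px, py):
    # negative adjoint of grad above
    dx = px - np.roll(px, 1, axis=1); dx[:, 0] = px[:, 0]
    dy = py - np.roll(py, 1, axis=0); dy[0, :] = py[0, :]
    return dx + dy

def tv_denoise_primal_dual(f, gamma=0.1, tau=0.25, beta=0.25, iters=200):
    u = f.astype(float).copy()
    ubar = u.copy()
    px = np.zeros_like(u); py = np.zeros_like(u)
    for _ in range(iters):
        # dual ascent on p, as in (6), followed by projection onto {|p| <= gamma}
        gx, gy = grad(ubar)
        px, py = px + beta * gx, py + beta * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / gamma)
        px, py = px / scale, py / scale
        # primal descent on u, as in (8), here with a closed-form quadratic prox
        u_old = u
        u = (u + tau * (div(px, py) + f)) / (1.0 + tau)
        # extrapolation, as in (11)
        ubar = 2.0 * u - u_old
    return u
```

The full algorithm differs only in the extra splitting variables v and w, which carry the TV term and the Rician data term G(w) through their own proximal updates.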

Page 27

Numerical Experiments - Denoising

Figure: Results and PSNR values of different methods when removing Rician noise with σ = 20 from the natural image "Cameraman". Row 1: recovered images; Row 2: residual images. (a) Zoomed original "Cameraman"; (b) MAP model (γ = 0.05): 27.47; (c) Getreuer's model (λ = 20): 27.28; (d) our proposed model (γ = 0.05): 27.82; (e) noisy "Cameraman" with σ = 20: 22.09; (f)-(h) residual images of the MAP model, Getreuer's model, and our model, respectively.

Page 28

Denoising

Table: Comparison of PSNR values, SSIM values, and CPU time (seconds) of the different methods for the denoising case.

                            σ = 20                      σ = 30
Image         Method       PSNR   SSIM    Time(s)     PSNR   SSIM    Time(s)
Cameraman     Noisy        22.09  0.4069    --        18.46  0.2874    --
              MAP          27.47  0.8153  106.85      24.25  0.7342  137.20
              Getreuer's   27.28  0.7512    1.96      25.12  0.6567    1.93
              Ours         27.82  0.8221    1.79      25.84  0.7637    2.23
Lumbar Spine  Noisy        21.94  0.4113    --        18.20  0.2680    --
              MAP          27.52  0.7485   71.87      24.26  0.6259   65.06
              Getreuer's   27.57  0.6746    1.27      24.31  0.4886    1.23
              Ours         28.18  0.7743    1.26      24.87  0.6444    1.14
Brain         Noisy        21.13  0.4666    --        17.60  0.3384    --
              MAP          26.52  0.7191   72.42      22.70  0.6484   81.77
              Getreuer's   28.84  0.8893    1.50      25.84  0.8194    1.37
              Ours         29.21  0.8860    1.11      26.56  0.8067    1.30
Liver         Noisy        22.28  0.4395    --        18.82  0.3016    --
              MAP          28.62  0.7870   72.42      26.01  0.7129   87.97
              Getreuer's   28.46  0.7726    2.23      26.29  0.6800    2.19
              Ours         28.96  0.7986    2.38      26.78  0.7326    2.23
Average       Noisy        21.81  0.4311    --        18.27  0.2989    --
              MAP          27.53  0.7675   98.38      24.31  0.6804  104.03
              Getreuer's   28.04  0.7719    1.74      25.39  0.6612    1.68
              Ours         28.52  0.8203    1.64      26.01  0.7369    1.73

Page 29

Deblurring and Denoising

Figure: Results and PSNR values of different methods when removing Rician noise with σ = 15 and motion blur from the MR image "Lumbar Spine". Row 1: recovered images; Row 2: residual images. (a) Original "Lumbar Spine"; (b) MAP model (γ = 0.05): 26.73; (c) Getreuer's model (λ = 20): 25.76; (d) our proposed model (γ = 0.04): 27.30; (e) blurry "Lumbar Spine" image with Rician noise: 23.15; (f)-(h) residual images of the MAP model, Getreuer's model, and our model, respectively.

Page 30

Deblurring and Denoising

Table: Comparison of PSNR values, SSIM values, and CPU time (seconds) of the different methods for deblurring with denoising.

                            Motion Blur                 Gaussian Blur
Image         Method       PSNR   SSIM    Time(s)     PSNR   SSIM    Time(s)
Cameraman     Degraded     21.70  0.3680    --        22.26  0.3834    --
              MAP          24.87  0.7811   81.49      25.35  0.7877   81.81
              Getreuer's   24.64  0.7697    2.87      25.29  0.7816    3.12
              Ours         25.11  0.7797    2.24      25.74  0.7866    2.27
Lumbar Spine  Degraded     23.15  0.4290    --        23.66  0.4543    --
              MAP          26.73  0.7492   60.60      27.56  0.7810   60.68
              Getreuer's   25.76  0.6760    1.68      26.86  0.6953    1.40
              Ours         27.30  0.7645    1.19      28.10  0.7816    1.47
Brain         Degraded     22.72  0.4912    --        22.96  0.5039    --
              MAP          27.61  0.7572   68.09      28.84  0.7728   69.05
              Getreuer's   27.91  0.8728    2.09      28.96  0.8915    2.09
              Ours         28.49  0.8737    1.87      29.44  0.8905    1.87
Liver         Degraded     23.41  0.4452    --        23.76  0.4639    --
              MAP          28.02  0.7741  125.51      28.51  0.7826  126.78
              Getreuer's   27.87  0.7717    3.05      28.30  0.7798    3.03
              Ours         28.19  0.7820    2.22      28.79  0.7935    2.71
Average       Degraded     22.75  0.4334    --        23.16  0.4513    --
              MAP          26.81  0.7654   83.92      27.57  0.7810   84.58
              Getreuer's   26.55  0.7722    2.42      27.35  0.7871    2.41
              Ours         27.27  0.8000    1.88      28.02  0.8130    2.08

Page 31

Energy comparison

Table: Comparison of the energy values E(u_MAP), E(u_Getreuer's), and E(u_Ours). Here, E(u) := (1/(2σ²)) ∫_Ω u² dx − ∫_Ω log I₀(fu/σ²) dx + γ ∫_Ω |Du| dx, with the same γ. The three values are nearly the same.

                     σ = 20                           σ = 30
Method      MAP       Getreuer's  Ours       MAP       Getreuer's  Ours
Cameraman  -404.15   -404.04     -404.05    -247.06   -247.09     -247.17
Skull       -84.34    -83.86      -84.21     -35.65    -35.09      -35.59
Liver      -150.70   -150.67     -150.55     -82.47    -82.51      -82.37
Brain      -218.03   -220.11     -217.91    -110.30   -110.44     -110.31
Average    -214.30   -214.67     -214.18    -118.87   -118.78     -118.86

As one of our aims is to obtain a good approximation to the global solution of the original non-convex model (given in the caption of the table above):

• we plugged the solutions obtained by the MAP model, Getreuer's convex model, and our model into the original functional to get the corresponding energy values;
• since these three energy values are quite close, Getreuer's approximation approach and ours are quite reasonable;
• in the denoising case, our method is comparable to the MAP and Getreuer's methods.

Page 32

Comments on numerical results

Our convexified model is very competitive with other recently proposed methods:

• higher PSNR values and better visual results;
• less computational time;
• a unique minimizer, independent of initialization.

The effort of convexification seems justified, so we continue to another non-convex problem...

Page 33

Other non-convex models

Many other non-convex models arise in image processing.

• Generalization?
• Possible...
• Not so easy.

Page 34

Multiplicative Gamma noise with blur

Degradation model: f = (Au)η.

• η: multiplicative noise.
• We assume that η follows a Gamma distribution, i.e.,

p_η(x; θ, K) = (1/(θᴷ Γ(K))) x^(K−1) e^(−x/θ).

• The mean and variance of η are Kθ and Kθ².
• We assume the mean of η equals 1 (i.e., θ = 1/K).
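With mean 1, the shape-scale pair is (K, 1/K); a simulation sketch (the helper name is ours, not from the slides):

```python
import numpy as np

def add_gamma_noise(Au, K, rng):
    """Return f = (Au) * eta, eta ~ Gamma(shape=K, scale=1/K),
    so E[eta] = 1 and Var(eta) = 1/K."""
    eta = rng.gamma(shape=K, scale=1.0 / K, size=np.shape(Au))
    return Au * eta
```

Larger K means weaker noise, since the variance 1/K shrinks while the mean stays at 1.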

Page 35

TV-Multiplicative model

MAP analysis leads to:

min_u ∫_Ω (log(Au) + f/(Au)) dx + λ ∫_Ω |Du|.

• Known as the AA model.
• The edge-preserving TV term ∫_Ω |Du|; the constant λ > 0.
• Non-convex.

Page 36

New approach: Convexified TV-Multiplicative model

We introduce a quadratic penalty term:

min_u ∫_Ω (log(Au) + f/(Au)) dx + α ∫_Ω (√(Au/f) − 1)² dx + λ ∫_Ω |Du|.   (14)

• α > 0 is a penalty parameter.
• If α ≥ 2√6/9, model (14) is convex!

Page 37

The quadratic penalty term

More reasons to add the penalty term ∫_Ω (√(Au/f) − 1)² dx:

• Set Y = 1/√η, where η follows a Gamma distribution with mean 1.
• It can be shown that (K is the shape parameter of the Gamma distribution)

lim_{K→+∞} E((Y − 1)²) = 0.

• For large K, Y can be well approximated by a Gaussian distribution. (We will introduce the concept of the Kullback-Leibler (KL) divergence later to reveal the relationship.)
• The MAP estimation leads to ∫_Ω (v − f)² dx as the data-fitting term in additive Gaussian noise removal.

Page 38

Approximate by Gaussian distribution

Proposition. Suppose that the random variable η follows a Gamma distribution with mean 1. Set Y = 1/√η, with mean μ_K and variance σ_K². Then

lim_{K→+∞} D_KL(Y ‖ N(μ_K, σ_K²)) = 0,

where N(μ_K, σ_K²) is the Gaussian distribution.

• It can be shown that D_KL(Y ‖ N(μ_K, σ_K²)) = O(1/K).

Page 39

Approximate by Gaussian distribution (cont.)

Indeed, we can prove that:

(i) ∫₀^{+∞} p_Y(y) log p_Y(y) dy = log 2 − log(√K Γ(K)) + ((2K + 1)/2) ψ(K) − K, where ψ(K) := d log Γ(K)/dK is the digamma function;

(ii) ∫₀^{+∞} p_Y(y) log p_{N(μ_K, σ_K²)}(y) dy = −(1/2) log(2πe σ_K²), where p_{N(μ_K, σ_K²)}(y) denotes the PDF of the Gaussian distribution N(μ_K, σ_K²);

(iii) D_KL(Y ‖ N(μ_K, σ_K²)) = O(1/K).

Page 40

Approximate by Gaussian distribution (cont.)

Indeed, by calculation, we can get

σ_K² = E(Y²) − E²(Y) = K Γ(K − 1)/Γ(K) − K Γ²(K − 1/2)/Γ²(K),

and

D_KL(Y ‖ N(μ_K, σ_K²)) = log 2 − log(√K Γ(K)) + ((2K + 1)/2) ψ(K) − K + (1/2) log(2πe σ_K²)
                       = log 2 + (1/2) log(K σ_K²) + O(1/K)
                       = log 2 + (1/2) log(1/4 + O(1/K)) + O(1/K)
                       = O(1/K).

Page 41

Approximate by Gaussian distribution (cont.)

Furthermore, if we set Z := (Y − μ_K)/σ_K, then

lim_{K→∞} D_KL(Z ‖ N(0, 1)) = 0.

• A simple change of variable shows D_KL(Z ‖ N(0, 1)) = D_KL(Y ‖ N(μ_K, σ_K²)).
• Z tends to the standard Gaussian distribution N(0, 1).

Page 42

Approximation graph

Figure: Comparison of the PDFs of Y and N(μ, σ²) for different K. (a) K = 6, (b) K = 10.

Page 43

Numerical Experiments - Denoising


Figure: Results of different methods when removing multiplicative noise with K = 10. (a) Noisy "Cameraman", (b) AA method, (c) RLO method, (d) our method.

Page 44

Cauchy noise with blur

Degradation model: f = Au + v.

• v: Cauchy noise.
• A random variable V follows the Cauchy distribution if it has density

g(v) = (1/π) γ/(γ² + (v − δ)²),

where γ > 0 is the scale parameter and δ is the location parameter.
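Sampling this density is straightforward; a sketch using NumPy's standard Cauchy generator (names are ours, not from the slides):

```python
import numpy as np

def add_cauchy_noise(Au, gamma, rng, delta=0.0):
    """Return f = Au + v, v ~ Cauchy(location delta, scale gamma)."""
    v = delta + gamma * rng.standard_cauchy(np.shape(Au))
    return Au + v
```

Because a Cauchy variable has no finite mean or variance, summary statistics of f must be robust: the sample median, not the sample mean, estimates the location.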

Page 45

Cauchy noise

Figure: Alpha-stable noise in 1D; note that the y-axes have different scales (between 30 and 120 in (a) and (b), and between −100 and 400 in (c)). (a) 1D noise-free signal; (b) signal degraded by additive Gaussian noise; (c) signal degraded by additive Cauchy noise. Cauchy noise is more impulsive than Gaussian noise.

Page 46

TV-Cauchy model

MAP analysis leads to:

inf_{u ∈ BV(Ω)} TV(u) + (λ/2) ∫_Ω log(γ² + (Au − f)²) dx.

• γ > 0 is the scale parameter and A is the blurring operator.
• Edge-preserving.
• Non-convex.

Page 47

Image corrupted by Cauchy noise

(a) Original image (b) Gaussian noise (c) Cauchy noise (d) Impulse noise

(e) Zoom of (a) (f) Zoom of (b) (g) Zoom of (c) (h) Zoom of (d)

Figure: Comparison of different noisy images. (a) Original image u0; (b) u corrupted by an additive Gaussian

noise; (c) u corrupted by an additive Cauchy noise; (d) u corrupted by an impulse noise; (e)–(h) zoom of the top

left corner of the images (a)–(d), respectively. Cauchy noise and impulse noise are more impulsive than the

Gaussian noise.

Page 48

New approach: Convexified TV-Cauchy model

We introduce a quadratic penalty term:

inf_{u ∈ BV(Ω)} TV(u) + (λ/2) (∫_Ω log(γ² + (Au − f)²) dx + μ‖Au − u₀‖₂²).

• Cauchy noise has an impulsive character.
• u₀ is the image obtained by applying the median filter to the noisy image.
• The median filter is used instead of the myriad filter for simplicity and computational time.
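A minimal 3x3 median filter to produce u₀ might look as follows (an illustrative sketch; real code would typically call a library routine such as scipy.ndimage.median_filter):

```python
import numpy as np

def median_filter3(f):
    """3x3 median filter with edge replication; used here to build a rough
    estimate u0 that suppresses the impulsive part of Cauchy noise."""
    padded = np.pad(f, 1, mode="edge")
    h, w = f.shape
    windows = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows, axis=0), axis=0)
```

A single impulsive outlier in a flat region is removed completely, which is exactly why the median (rather than a linear filter) is a good source for u₀ under Cauchy noise.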

Page 49

Numerical Experiments - Denoising

(a) Wavelet: 23.13 (b) SURE-LET: 23.22 (c) BM3D: 28.74 (d) Ours: 30.94

Figure: Recovered images (with PSNR(dB)) of different approaches for removing

Cauchy noise from the noisy image ”Peppers”. (a) Wavelet shrinkage; (b) SURE-LET;

(c) BM3D; (d) our model.

Page 50

Denoising

Table: PSNR values for noisy images and recovered images given by different methods(ξ = 0.02). In the last line of the table, we compute the average of the values.

Noisy ROF MD MR L1-TV Ours

Peppers 19.15 25.03 29.64 29.85 30.34 30.94

Parrot 19.13 23.88 27.05 27.13 28.02 28.98

Cameraman 19.07 24.00 26.14 26.57 27.21 27.91

Lena 19.06 24.58 28.94 28.98 29.84 30.36

Baboon 19.17 21.16 21.38 21.64 24.24 24.96

Goldhill 18.99 24.40 26.80 27.12 28.23 28.80

Boat 19.03 24.21 27.27 27.49 28.70 29.20

Average 19.09 23.89 26.75 26.97 28.08 28.74

Page 51

Deblurring and Denoising

Original

L1-TV

Ours

Figure: The zoomed-in regions of the recovered images from blurry images with

Cauchy noise. First row: details of original images; second row: details of restored

images by L1-TV approach; third row: details of restored images by our approach.

Page 52

Deblurring and Denoising

Table: PSNR values for noisy images and recovered images given by different methods (ξ = 0.02). The last line of the table reports the average of the values.

Noisy ROF MD MR L1-TV Ours

Peppers 18.31 24.21 25.19 25.01 26.70 27.46

Parrot 18.23 24.06 24.48 24.57 25.75 26.79

Cameraman 18.29 23.98 24.43 24.39 25.49 26.27

Lena 18.64 25.74 26.70 26.72 27.26 28.14

Baboon 17.42 20.84 21.54 21.49 21.36 21.81

Goldhill 18.47 24.84 25.88 25.85 26.17 26.76

Boat 18.48 24.36 25.42 25.43 26.18 26.69

Average 18.28 24.00 24.81 24.78 25.56 26.31

Page 53

Remarks

I We introduced three convex models based on Total Variation.

I Under mild conditions, our models have unique solutions.

I Because of convexity, fast algorithms can be employed.

I Numerical experiments suggest good performance of the models.

I Generalization?

Page 54

Outline

1. Variational Models for Rician Noise Removal

2. Two-stage Segmentation

3. Dictionary and Weighted Nuclear Norm

Page 55

Mumford-Shah model (1989)

The Mumford-Shah model is an energy minimization problem for image segmentation:

min_{g,Γ} (λ/2) ∫_Ω (f − g)² dx + (µ/2) ∫_{Ω\Γ} |∇g|² dx + Length(Γ).

I f : observed image; g : a piecewise smooth approximation of f ; Γ: boundary of the segmented regions.

I λ and µ are positive.

I Non-convex, very difficult to solve!

Page 56

Simplifying Mumford-Shah Model

Chan-Vese model: when µ → +∞, it reduces to

min_{Γ,c1,c2} (λ/2) ∫_{Ω1} (f − c1)² dx + (λ/2) ∫_{Ω2} (f − c2)² dx + Length(Γ),

where Ω is separated into Ω1, Ω2 by Γ.

I Level-set method can be applied.

I Rather complex!

I Still non-convex.

I Multi-phases?

Page 57

Our approach

We propose a novel two-stage segmentation approach.

I First stage: solve a convexified variant of the Mumford-Shah model.

I Second stage: use a data clustering method to threshold the computed solution.

Page 58

Stage One

I Length(Γ) ←→ ∫_Ω |∇u|; equal when u is binary.

I Phases of g can be obtained from u by thresholding.

Observation (especially when g is nearly binary):

I jump set of g ≈ jump set of u.

I ∫_Ω |∇g| ≈ ∫_Ω |∇u| ←→ Length(Γ).

Page 60

Stage One: Unique Minimizer

Our convex variant of the Mumford-Shah model (cf. Hintermüller's H1-norm for image restoration) is

E(g) = (λ/2) ∫_Ω (f − Ag)² dx + (µ/2) ∫_Ω |∇g|² dx + ∫_Ω |∇g| dx.   (15)

Its discrete version:

(λ/2) ‖f − Ag‖₂² + (µ/2) ‖∇g‖₂² + ‖∇g‖₁.

Theorem. Let Ω be a bounded connected open subset of R² with a Lipschitz boundary. Let Ker(A) ∩ Ker(∇) = {0} and f ∈ L²(Ω), where A is a bounded linear operator from L²(Ω) to itself. Then E(g) has a unique minimizer g ∈ W^{1,2}(Ω).

Page 61

Stage Two

With a minimizer g from the first stage, we obtain a segmentation by thresholding g.

I We use the K-means method to obtain proper thresholds.

I Fast multiphase segmentation.

I The number of phases K and the thresholds ρ are determined after g is calculated. Changing K and ρ requires little computation: no need to recalculate g!

I Users can try different K and ρ.
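The two stages above can be sketched numerically as follows. This is a minimal illustration, not the authors' implementation: we take A = I (no blur), replace the TV term by a smooth surrogate solved by plain gradient descent, and use a simple 1-D K-means for the thresholds; all function names and parameter values are our own.

```python
import numpy as np

def stage_one(f, lam=10.0, mu=1.0, eps=1e-3, steps=200, tau=1e-3):
    """Gradient descent on a smoothed surrogate of the stage-one energy
    (lam/2)||f - g||^2 + (mu/2)||grad g||^2 + ||grad g||_1, with A = I
    and the TV term smoothed as sqrt(|grad g|^2 + eps^2)."""
    g = f.astype(float).copy()
    for _ in range(steps):
        gx = np.diff(g, axis=0, append=g[-1:, :])   # forward differences
        gy = np.diff(g, axis=1, append=g[:, -1:])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = (mu + 1.0 / mag) * gx, (mu + 1.0 / mag) * gy
        # (negative) divergence via backward differences; the periodic
        # wrap at the border is acceptable for a sketch
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        g -= tau * (lam * (g - f) - div)
    return g

def stage_two(g, K=2, iters=50):
    """1-D K-means on the intensities of g; returns per-pixel labels."""
    centers = np.linspace(g.min(), g.max(), K)
    for _ in range(iters):
        labels = np.argmin(np.abs(g[..., None] - centers), axis=-1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = g[labels == k].mean()
    return labels, centers
```

Note that changing K only reruns `stage_two`; the expensive `stage_one` solve is untouched, which is exactly the point made on this slide.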

Page 62

Extensions to Other Noise Models

First stage: solve

min_g λ ∫_Ω (Ag − f log Ag) dx + (µ/2) ∫_Ω |∇g|² dx + ∫_Ω |∇g| dx.   (16)

I The data-fitting term follows from a MAP analysis under Poisson noise.

I It is also suitable for Gamma noise (Steidl and Teuber (10)).

I The objective functional is convex (solved by the Chambolle-Pock algorithm).

I It admits a unique solution if Ker(A) ∩ Ker(∇) = {0}.

Second stage: threshold the solution to get the phases.

Page 63

Example: Poisson noise with motion blur

Original image Noisy & blurred Yuan et al. (10)

Dong et al. (10) Sawatzky et al. (13) Our method

Page 64

Example: Gamma noise

Original image Noisy image Yuan et al. (10)

Li et al. (10) Our method

Page 65

Three-stage Model

We propose a SLaT (Smoothing, Lifting and Thresholding) method for multiphase segmentation of color images corrupted by different types of degradations: noise, information loss, and blur.

I Stage One: Smoothing

I Stage Two: Lifting

I Stage Three: Thresholding

Page 66

Three-stage Model

Let the given degraded image be in the color space V1.

I Smoothing: The convex variational model ((15) or (16)) for grayscale images is applied in parallel to each channel of V1. This yields a restored smooth image.

I Color dimension lifting: We transform the smoothed color image to a secondary color space V2 that provides complementary information, then combine all the channels from V1 and V2 into a new vector-valued image.

I Thresholding: According to the desired number of phases K, we apply a multichannel thresholding to the combined V1-V2 image to obtain a segmented image.

Page 67

Smoothing stage

Let f = (f1, . . . , fd) be a given color image with channels fi : Ω → R, i = 1, . . . , d. Denote by Ω_0^i the set where fi is known. We consider minimizing the functional E below:

E(gi) = (λ/2) ∫_Ω ωi · Φ(fi, gi) dx + (µ/2) ∫_Ω |∇gi|² dx + ∫_Ω |∇gi| dx, i = 1, . . . , d,   (17)

where ωi(·) is the characteristic function of Ω_0^i, i.e.

ωi(x) = 1 if x ∈ Ω_0^i, and ωi(x) = 0 if x ∈ Ω \ Ω_0^i.   (18)

For Φ in (17) we consider the following options:

1. Φ(f, g) = (f − Ag)², the usual choice;

2. Φ(f, g) = Ag − f log(Ag) if data are corrupted by Poisson noise.

Page 68

Uniqueness and existence

The theorem below proves the existence and the uniqueness of the minimizer of (17), where we define the linear operator (ωiA) by

(ωiA) : u(x) ∈ L²(Ω) ↦ ωi(x)(Au)(x) ∈ L²(Ω).

Theorem. Let Ω be a bounded connected open subset of R² with a Lipschitz boundary. Let A : L²(Ω) → L²(Ω) be bounded and linear. For i ∈ {1, . . . , d}, assume that fi ∈ L²(Ω) and that Ker(ωiA) ∩ Ker(∇) = {0}, where Ker stands for null-space. Then

E(gi) = (λ/2) ∫_Ω ωi · Φ(fi, gi) dx + (µ/2) ∫_Ω |∇gi|² dx + ∫_Ω |∇gi| dx,

with either Φ(fi, gi) = (fi − Agi)² or Φ(fi, gi) = Agi − fi log(Agi), has a unique minimizer gi ∈ W^{1,2}(Ω).

Page 69

Segmentation of Real-world Color Images

(A) Noisy image (A1) FRC (A2) PCMS (A3) DPP (A4) Ours

(B) Information loss + noise (B1) FRC (B2) PCMS (B3) DPP (B4) Ours

(C) Blur + noise (C1) FRC (C2) PCMS (C3) DPP (C4) Ours

Figure: Four-phase sunflower segmentation (size: 375 × 500). (A): Given Gaussian noisy image with mean 0 and variance 0.1; (B): Given Gaussian noisy image with 60% information loss; (C): Given blurry image with Gaussian noise.

Page 70

Observations for Two/Three-stage segmentation

I A convex segmentation model with a unique solution; can be solved easily and fast.

I No need to solve the model again when the thresholds or the number of phases change.

I Easily extendable to, e.g., blurry images and non-Gaussian noise.

I Links image segmentation and image restoration.

I Efficient algorithms for color image segmentation.

Page 71

Outline

1. Variational Models for Rician Noise Removal

2. Two-stage Segmentation

3. Dictionary and Weighted Nuclear Norm

Page 72

Deblurring under impulse noise

Degradation model:

g = N_imp(Hu),

where g is the corrupted image and H is the blur kernel.

I One-phase model:

min_u J(u) + λ F(Hu − g),

where J is the regularization term, λ is a positive parameter, and F is the data-fidelity term.

I Two-phase methods:

min_u J(u) + λ Σ_s F(Λ_s (Hu − g)_s),

where Λ_s = 0 if s ∈ N, and Λ_s = 1 otherwise, with N the noise candidate set.

Page 73

Variational model

[Ma, Yu, and Zeng, SIIMS 2013] In order to restore the image from the degradation model

g = N_imp(Hu),

we consider the following model:

min_{αs,u,D} Σ_{s∈P} µs ‖αs‖₀ + Σ_{s∈P} ‖Dαs − Rsu‖₂² + η ‖∇u‖₁ + λ ‖Λ(Hu − g)‖₁,   (19)

where ‖∇u‖₁ denotes the discrete version of the isotropic total variation norm, defined as

‖∇u‖₁ = Σ_s |(∇u)_s|, with |(∇u)_s| = √(|(∇u)¹_s|² + |(∇u)²_s|²).
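The discrete isotropic TV norm defined above can be computed in a few lines; this is an illustrative sketch (forward differences with a replicated last row/column, a common convention that the slides do not specify):

```python
import numpy as np

def isotropic_tv(u):
    """Discrete isotropic total variation sum_s |(grad u)_s| with
    forward differences and replicated (Neumann-like) boundary."""
    gx = np.diff(u, axis=0, append=u[-1:, :])   # (grad u)^1
    gy = np.diff(u, axis=1, append=u[:, -1:])   # (grad u)^2
    return np.sqrt(gx**2 + gy**2).sum()
```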

Page 74

Experimental results

PSNR (dB) values for various methods for the test images corrupted by Gaussian blur and random-valued noise with noise levels r = 20%, 40%, 60%, respectively.

Image/r FTVd Dong Cai1 Cai2 Yan Ours

Bar./20% 25.29 25.18 28.37 29.34 27.33 31.30
Bar./40% 24.17 24.19 25.99 26.68 26.21 28.82
Bar./60% 22.13 21.27 24.02 24.07 25.02 25.28

Cam./20% 29.56 28.19 33.25 37.22 33.32 39.24
Cam./40% 24.92 25.43 29.44 31.74 30.64 35.21
Cam./60% 19.92 21.00 23.33 23.92 24.55 28.59

Lena/20% 35.08 32.99 37.60 40.68 36.08 42.42
Lena/40% 31.26 30.85 35.74 36.67 35.20 39.53
Lena/60% 25.77 23.47 30.33 30.59 30.25 34.41

Page 75

Numerical Experiments - Denoising

Degraded FTVd: 19.92 Dong: 21.00

Cai1: 23.33 Cai2: 23.92 Yan: 24.55 Our: 28.59

Figure: Recovered images (with PSNR (dB)) of different methods on image Cameraman corrupted by Gaussian blur and random-valued noise with noise level 60%.

Page 76

SSMS deblurring

1. Apply a Wiener filter to remove the blur effect;

2. Apply SSMS denoising to remove the colored noise.

The underlying clean image patch can be estimated from the noisy patch f via

f̂ = Σ_{l∈Λ} ⟨f, φ_{k0,l}⟩ φ_{k0,l},

where

Λ = { l : |⟨f, φ_{k0,l}⟩| > T },

and the threshold value T depends on the noise variance.

Drawback: the Wiener filter does not work well when the noise level is high.

Page 77

Deblurring via total variation based SSMS

[Ma and Zeng, Journal of Scientific Computing 2015] Model:

min_{u,D,αs} (λ/2) ‖Au − g‖₂² + (β/2) Σ_{s∈P} ‖Rsu − Dαs‖₂² + Σ_{s∈P} µs ‖αs‖₁ + ‖∇u‖₁.   (20)

For each patch Rsu, a best orthogonal basis Φ_{k0} of size N × N is selected by SSMS.

We employ the alternating minimization method to solve the model.

Page 78

Experiments

The proposed algorithm is compared to five related existing image deblurring methods:

1. TV based method

2. Fourier wavelet regularized deconvolution (ForWaRD) method

3. Structured Sparse Model Selection (SSMS) deblurring

4. Framelet based image deblurring approach

5. Non-local TV based method

Page 79

Experimental results

Comparison of the PSNR (dB) of the recovered results by different methods, with respect to the noise level σ = 2.

Kernel TV ForWaRD SSMS Framelet NLTV Ours

Boat.
box 27.97 27.69 27.75 28.90 28.86 29.39
gaussian 27.91 27.80 27.73 28.62 28.64 28.80
motion 28.24 28.34 28.03 29.53 29.57 30.33

Lena
box 28.79 29.72 29.82 30.65 29.64 31.18
gaussian 30.01 30.45 30.68 31.76 30.73 32.28
motion 29.56 30.60 30.55 31.82 30.97 32.72

Cam.
box 25.44 25.66 25.69 26.43 26.29 27.05
gaussian 24.83 25.06 25.04 25.27 24.92 25.62
motion 26.62 26.89 26.74 28.11 28.38 29.16

Page 80

Experimental results

Comparison of the PSNR (dB) of the recovered results by different methods, with respect to the noise level σ = 10.

Kernel TV ForWaRD SSMS Framelet NLTV Ours

Boat.
box 25.06 25.15 25.26 25.72 25.71 25.90
gaussian 25.97 26.36 26.30 26.68 26.76 26.92
motion 24.67 24.94 25.07 25.29 25.50 25.94

Lena
box 26.47 27.46 27.71 27.82 26.92 28.45
gaussian 27.82 28.96 29.05 29.17 28.43 29.92
motion 26.18 27.21 27.62 27.53 27.10 28.64

Cam.
box 23.09 22.35 22.34 23.50 23.38 24.02
gaussian 23.55 23.43 23.41 24.04 24.11 24.37
motion 22.89 22.26 22.27 23.43 23.52 24.28

Page 81

Low rank

Figure: Grouping blocks

BM3D: grouping by matching; collaborative filtering.

Low rank: grouping by matching; low-rank minimization.

Page 82

Low rank minimization

min_X ‖Y − X‖²_F, s.t. rank(X) ≤ r.

Drawback: non-convex.

Convex relaxation: nuclear norm minimization (NNM):

min_X (1/2) ‖Y − X‖²_F + λ ‖X‖_*   (21)

with solution X* = U S_λ(Σ) V^T. Here Y = UΣV^T is the singular value decomposition (SVD) of the matrix Y, where

Σ = ( diag(σ1, σ2, · · · , σn) ; 0 ),

and the singular value shrinkage operator is defined as

S_λ(Σ) = max(0, Σ − λ),

which is in fact the proximal operator of the nuclear norm function (Cai, Candès and Shen).
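The closed-form NNM solution above can be checked numerically; a minimal sketch with numpy (function name ours):

```python
import numpy as np

def nnm_prox(Y, lam):
    """Singular value shrinkage: the closed-form solution of
    min_X 0.5*||Y - X||_F^2 + lam*||X||_*  (Cai-Candes-Shen)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
```

The singular values of the result are exactly those of Y shrunk by λ and clipped at zero, which is easy to verify on a random matrix.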

Page 83

WNNM

NNM uses the same value to penalize every singular value. However, larger singular values are more important and should shrink less in many cases.

The weighted nuclear norm minimization (WNNM) problem:

min_X (1/2) ‖Y − X‖²_F + ‖X‖_{w,*},   (22)

where

‖X‖_{w,*} = Σ_{i=1}^n w_i σ_i(X),   (23)

with σ1(X) ≥ σ2(X) ≥ · · · ≥ σn(X) ≥ 0, the weight vector w = [w1, w2, · · · , wn], and w_i ≥ 0.

Page 84

Existing mathematical properties of WNNM

Proposition: Assume Y ∈ R^{m×n}, the SVD of Y is Y = UΣV^T, and 0 ≤ w1 ≤ w2 ≤ · · · ≤ wn. Then the global optimal solution of the WNNM problem in (22) is

X* = U D V^T,   (24)

where D = ( diag(d1, d2, · · · , dn) ; 0 ) is a diagonal non-negative matrix and d_i = max(σ_i − w_i, 0), i = 1, · · · , n.

Proposition: Assume Y ∈ R^{m×n} and the SVD of Y is Y = UΣV^T. Then the solution of the WNNM problem in (22) can be expressed as X* = U D V^T, where D = ( diag(d1, d2, · · · , dn) ; 0 ) is a diagonal non-negative matrix and (d1, d2, · · · , dn) is the solution of the following convex optimization problem:

min_{d1,··· ,dn} Σ_{i=1}^n (1/2)(d_i − σ_i)² + w_i d_i, s.t. d1 ≥ d2 ≥ · · · ≥ dn ≥ 0.   (25)
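The first proposition (non-decreasing weights) gives a weighted shrinkage in closed form; a numerical sketch (function name ours, valid only under the ordering 0 ≤ w1 ≤ · · · ≤ wn assumed there):

```python
import numpy as np

def wnnm_nondecreasing(Y, w):
    """Global WNNM solution X* = U diag(d) V^T with
    d_i = max(sigma_i - w_i, 0), valid when 0 <= w_1 <= ... <= w_n
    (the ordering assumed in the proposition)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    d = np.maximum(s - np.asarray(w, dtype=float), 0.0)
    return U @ np.diag(d) @ Vt
```

Since the σ_i are non-increasing and the w_i non-decreasing, the shrunk values d_i stay non-increasing, so they are valid singular values, and the weighted objective at X* is never worse than at Y itself.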

Page 85

Existing mathematical properties of WNNM

It is important to find an explicit solution of Problem (22) when the weights are in an arbitrary order. This is one of our main contributions.

Theorem 3: Assume that (a1, · · · , an) ∈ R^n and n ∈ N. Then the following minimization problem

min_{d1≥···≥dn≥0} Σ_{i=1}^n (d_i − a_i)²   (26)

has a unique global optimal solution in closed form. Moreover, for any k ∈ N, we assume that M_k = (d1, · · · , dk) is the solution to the k-th problem of (26) (that is, (26) with n = k); in particular, we denote by d*_k = d_k the last component of M_k, and set d*_0 = +∞.

Page 86

Existing mathematical properties of WNNM

If

s0 := sup{ s ∈ N | d*_{s−1} ≥ (Σ_{k=s}^n a_k) / (n − s + 1), s = 1, · · · , n },   (27)

then the solution M_n = (d1, · · · , dn) to (26) satisfies

(d1, · · · , d_{s0−1}) = M_{s0−1} and d_{s0} = · · · = d_n = max( (Σ_{k=s0}^n a_k) / (n − s0 + 1), 0 ).   (28)

Furthermore, the global solution of (22) is given in (24) with

D = ( diag(d1, d2, · · · , dn) ; 0 ).

Page 87

Mathematical properties

I The uniqueness is clear since the objective function in (26) is strictly convex. We employ the method of induction to prove the existence result of (26).

I First, for n = 1, (26) has a unique solution d1 = max{a1, 0}.

I Assume that for any k ≤ n − 1, the k-th problem of (26) has a unique solution M_k = (d1, · · · , dk). Then, to solve the n-th problem (26), we compare d*_{n−1} and a_n as follows:

I if d*_{n−1} ≥ a_n, then d_n = max{a_n, 0} and (26) is solvable. Moreover, the solution M_n satisfies (28) with s0 = n;

I if d*_{n−1} < a_n, we take d = d_{n−1} = d_n and we have to solve the following reduced (n − 1)-th problem:

min_{d1≥···≥d_{n−2}≥d≥0} ( Σ_{i=1}^{n−2} (d_i − a_i)² + 2(d − (a_{n−1} + a_n)/2)² ).   (29)

Page 88

Mathematical properties

I For the latter case, we have to solve (29). To do so, we compare d*_{n−2} and (a_{n−1} + a_n)/2 in a similar way as above:

I if d*_{n−2} ≥ (a_{n−1} + a_n)/2, then d_{n−1} = d_n = d = max{(a_{n−1} + a_n)/2, 0} and (26) is solvable. Moreover, the solution M_n satisfies (28) with s0 = n − 1;

I if d*_{n−2} < (a_{n−1} + a_n)/2, we take d = d_{n−2} = d_{n−1} = d_n and we have to solve the following reduced (n − 2)-th problem:

min_{d1≥···≥d_{n−3}≥d≥0} ( Σ_{i=1}^{n−3} (d_i − a_i)² + 3(d − (a_{n−2} + a_{n−1} + a_n)/3)² ).   (30)

Page 89

Mathematical properties

I We repeat the above arguments and stop at the following s0-th problem:

min_{d1≥···≥d_{s0−1}≥d≥0} ( Σ_{i=1}^{s0−1} (d_i − a_i)² + (n − s0 + 1)(d − (Σ_{k=s0}^n a_k)/(n − s0 + 1))² ),   (31)

where d = d_{s0} = · · · = d_n and s0 is defined in (27). Since d*_{s0−1} ≥ (Σ_{k=s0}^n a_k)/(n − s0 + 1), we have d_{s0} = · · · = d_n = d = max( (Σ_{k=s0}^n a_k)/(n − s0 + 1), 0 ). Then (26) is solvable and the solution M_n satisfies (28). This completes the proof of the theorem.
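The backward merging in this proof is a pool-adjacent-violators scheme: adjacent entries that violate the ordering are replaced by their average, and the final block means are clipped at zero. A sketch of that computation (function name ours):

```python
import numpy as np

def decreasing_projection(a):
    """Closed-form solution of (26):
    min sum_i (d_i - a_i)^2  s.t.  d_1 >= d_2 >= ... >= d_n >= 0.
    Adjacent blocks violating the ordering are pooled (averaged),
    as in the merging argument of the proof; clipping the pooled
    means at zero enforces d_n >= 0."""
    blocks = []  # (mean, length) of each pooled block, left to right
    for ai in a:
        blocks.append((float(ai), 1))
        # merge while the previous block mean is smaller than the next
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            (m1, n1), (m2, n2) = blocks[-2], blocks[-1]
            blocks[-2:] = [((m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2)]
    return np.concatenate([np.full(n, max(m, 0.0)) for m, n in blocks])
```

For example, a = (3, 1, 2, −1) pools the violating pair (1, 2) into 1.5 and clips the last entry at 0, matching the tail-block formula (28).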

Page 90

Deblurring based on WNNM

The proposed model is

min_u Σ_{j∈P} ‖R_j u‖_{w,*} + ‖∇u‖₁ + (λ/2) ‖Au − g‖²_F,   (32)

where P denotes the set of indices where small image patches exist. The operator R_j first collects the patches similar to the reference patch located at j, and then stacks those patches into a matrix which should be low rank.

Introduce an auxiliary variable v:

min_{u,v} Σ_{j∈P} ‖R_j v‖_{w,*} + ‖∇u‖₁ + (λ/2) ‖Au − g‖²_F, s.t. v = u.   (33)

Then, using the split Bregman method, we get

(u^{k+1}, v^{k+1}) = min_{u,v} Σ_{j∈P} ‖R_j v‖_{w,*} + ‖∇u‖₁ + (λ/2) ‖Au − g‖²_F + (α/2) ‖u − v − b^k‖²_F,

b^{k+1} = b^k + (v^{k+1} − u^{k+1}).
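The iteration above can be written as a generic split Bregman loop; this skeleton (names ours) abstracts the two subproblems into proximal operators and checks the loop on a toy problem, not on the actual TV/WNNM subproblems:

```python
import numpy as np

def split_bregman(prox_u, prox_v, shape, alpha=1.0, iters=300):
    """Generic split Bregman loop for min_u f(u) + g(v) s.t. u = v,
    matching the iteration on this slide: in the deblurring model f
    would collect the TV and data-fidelity terms and g the weighted
    nuclear norms of the patch groups.  prox_u(t, a) must minimize
    f(u) + (a/2)||u - t||^2, and prox_v likewise for g."""
    u = np.zeros(shape)
    v = np.zeros(shape)
    b = np.zeros(shape)
    for _ in range(iters):
        u = prox_u(v + b, alpha)   # u-subproblem, penalty ||u - v - b||^2
        v = prox_v(u - b, alpha)   # v-subproblem, same penalty
        b = b + (v - u)            # Bregman variable update
    return u

# Toy check: f(u) = 0.5||u - y||^2, g(v) = ||v||_1, whose joint
# minimizer is soft-thresholding of y.
y = np.array([3.0, -0.5, 1.0])
prox_u = lambda t, a: (y + a * t) / (1.0 + a)
prox_v = lambda t, a: np.sign(t) * np.maximum(np.abs(t) - 1.0 / a, 0.0)
u = split_bregman(prox_u, prox_v, y.shape)
```

On the toy problem the loop converges to soft(y, 1), confirming that the update order and the Bregman step match the formulas on the slide.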

Page 91

Minimization for each patches group

For each similar-patch group R_j v, we have to solve

min ‖R_j v‖_{w,*} + (α/2) ‖R_j v − (R_j u^{k+1} − R_j b^k)‖²_F.

Then, using the WNNMG to solve this problem, we have

R_j v^{k+1} = U ( diag(d1, d2, · · · , dn) ; 0 ) V^T,

where

(R_j u^{k+1} − R_j b^k) = U ( diag(σ_{ub,1}, σ_{ub,2}, · · · , σ_{ub,n}) ; 0 ) V^T,

and d_i = max( 0, σ_{ub,i} − w_i²/σ_{ub,i} ), with w_i = c1 √N_sp / (σ_i(R_j v^{k+1}) + ε), for i = 1, 2, · · · , n.

Page 92

Experimental results

Comparison of the PSNR (dB) of the recovered results by different methods, with respect to the noise level σ = 5.

Image Kernel ROF ForWaRD Framelet NLTV BM3DDEB Ours

Bar.
box 22.73 23.57 23.79 23.12 24.03 24.05
gaussian 23.11 23.87 24.02 23.32 24.09 24.14
motion 23.13 23.74 24.22 23.72 25.02 25.15

Cam.
box 24.00 23.83 24.67 24.73 24.92 25.84
gaussian 24.17 24.28 24.62 24.64 24.79 25.27
motion 24.28 24.15 25.15 25.32 25.67 27.02

Lena
box 27.43 28.52 28.90 28.23 29.38 29.96
gaussian 28.70 29.93 30.46 29.64 30.85 31.34
motion 27.48 28.55 29.43 28.78 30.27 31.10

Page 93

Comparison

Original Degraded:20.57 ROF:24.00 ForWaRD:23.83

Framelet:24.67 NLTV:24.73 BM3DDEB:24.92 Ours:25.84

Figure: Recovered results (with PSNR (dB)) of different methods on image Cameraman corrupted by 9 × 9 uniform blur and Gaussian noise with standard deviation σ = 5.

Page 94

Remarks

I Good recovery results (blur + Gaussian noise / impulse noise);

I Extend to non-Gaussian noise such as Poisson noise and multiplicative noise;

I Extend to other imaging applications such as image classification.

Page 95

Summary

There are many non-convex image recovery and segmentation models; we consider three ways to solve them:

1. Add an extra convex term

2. Relax the functional

3. Compute the analytic solution

Messages:

1. An extra convex term is good for overcoming non-convexity

2. Relaxation techniques are good for segmentation

3. Sparsity models lead to good image recovery results

Page 96

References

I L. Chen and T. Zeng, "A Convex Variational Model for Restoring Blurred Images with Large Rician Noise", J. Math. Imaging Vis., 2015.

I Y. Dong and T. Zeng, "A Convex Variational Model for Restoring Blurred Images with Multiplicative Noise", SIAM J. Imaging Sci., 2013.

I F. Sciacchitano, Y. Dong, and T. Zeng, "Variational Approach for Restoring Blurred Images with Cauchy Noise", SIAM J. Imaging Sci., 2015.

I L. Ma, J. Yu and T. Zeng, "Sparse Representation Prior and Total Variation-Based Image Deblurring Under Impulse Noise", SIAM J. Imaging Sci., 2013.

I L. Ma and T. Zeng, "Image Deblurring via Total Variation Based Structured Sparse Model Selection", J. Sci. Comput., 2015.

I L. Ma, L. Xu and T. Zeng, "Low rank prior and total variation regularization for image deblurring", J. Sci. Comput., 2017.

Page 97

References

I X. Cai, R. Chan, and T. Zeng, "A Two-stage Image Segmentation Method using a Convex Variant of the Mumford-Shah Model and Thresholding", SIAM J. Imaging Sci., 2013.

I R. Chan, H. Yang and T. Zeng, "A Two-Stage Image Segmentation Method for Blurry Images with Poisson or Multiplicative Gamma Noise", 2014.

I X. Cai, R. Chan, M. Nikolova and T. Zeng, "A Three-stage Approach for Segmenting Degraded Color Images: Smoothing, Lifting and Thresholding (SLaT)", J. Sci. Comput., to appear.

Page 98

Contributors

I Xiaohao Cai (University of Cambridge)

I Raymond Chan (The Chinese University of Hong Kong)

I Liyuan Chen (University of Texas at Dallas)

I Yiqiu Dong (Technical University of Denmark)

I Jian Yu (Beijing Jiaotong University)

I Lionel Moisan (Université Paris Descartes)

I Zhi Li (Michigan State University)

I Liyan Ma (Institute of Microelectronics, CAS)

I Mila Nikolova (École Normale Supérieure de Cachan)

I Federica Sciacchitano (Technical University of Denmark)

I Li Xu (Chinese Academy of Sciences)

I Hongfei Yang (The Chinese University of Hong Kong)

Page 99

THANK YOU!