
A General Framework for a Class of First Order Primal Dual Algorithms for Convex Optimization in Imaging Science

Ernie Esser (UCI)

Georgia Tech Applied and Computational Mathematics Seminar

11-8-2010

Joint work with Xiaoqun Zhang (Shanghai Jiaotong University)

and Tony Chan (HKUST)


Outline

• Motivation to Study Primal Dual Methods for Image Processing
• A Model Convex Minimization Problem
• Connections Between Algorithms
  • The Primal Dual Hybrid Gradient (PDHG) Method
  • Proximal Forward Backward Splitting (PFBS) ↔ Alternating Minimization Algorithm (AMA)
  • Alternating Direction Method of Multipliers (ADMM) ↔ Split Bregman ↔ Douglas Rachford Splitting
  • Split Inexact Uzawa (SIU) ↔ Modified PDHG (PDHGM)
• Comparison of Algorithms
• Application to TV Minimization Problems
• Conclusions


Motivation

• Split Bregman and PDHG, both first order primal-dual methods, demonstrated potential to be significantly more efficient than previous methods used to solve convex models in image processing, but convergence properties were initially unclear

• Overwhelming number of seemingly related classical methods, but with often unclear connections

• Need for methods with simple, explicit iterations capable of solving large scale, often non-differentiable convex models with separable structure


A Model Convex Minimization Problem

min_{u∈R^m} J(Au) + H(u)    (P)

J, H closed proper convex:

H : R^m → (−∞, ∞]

J : R^n → (−∞, ∞]

A ∈ R^{n×m}

Assume there exists an optimal solution u* to (P).

So that we can use Fenchel duality later, also assume there exists u ∈ ri(dom H) such that Au ∈ ri(dom J) (almost always true in practice).


Saddle Point Form via Legendre Transform

The Legendre transform of J is defined by

J*(p) = sup_w 〈w, p〉 − J(w)

Since J** = J for arbitrary closed proper convex J, we can use this to define a saddle point version of (P).

J(Au) = J**(Au) = sup_p 〈p, Au〉 − J*(p)

Primal Function    F_P(u) = J(Au) + H(u)

Saddle Function    L_PD(u, p) = 〈p, Au〉 − J*(p) + H(u)

Saddle Point Problem

min_u sup_p −J*(p) + 〈p, Au〉 + H(u)    (PD)


Dual Problem and Strong Duality

The dual problem is

max_{p∈R^n} F_D(p)    (D)

where the dual functional F_D(p) is a concave function defined by

F_D(p) = inf_{u∈R^m} L_PD(u, p) = inf_{u∈R^m} −J*(p) + 〈p, Au〉 + H(u) = −J*(p) − H*(−A^T p)

• By Fenchel duality there exists an optimal solution p* to (D)
• Strong duality holds, meaning F_P(u*) = F_D(p*)
• u* solves (P) and p* solves (D) iff (u*, p*) is a saddle point of L_PD

R. T. ROCKAFELLAR, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.


More Saddle Point Formulations

Introduce the constraint w = Au in (P) and form the Lagrangian

L_P(u, w, p) = J(w) + H(u) + 〈p, Au − w〉

The corresponding saddle point problem is

max_{p∈R^n} inf_{u∈R^m, w∈R^n} L_P(u, w, p)    (SPP)

Introduce the constraint y = −A^T p in (D) and form the Lagrangian

L_D(p, y, u) = J*(p) + H*(y) + 〈u, −A^T p − y〉

Obtain yet another saddle point problem,

max_{u∈R^m} inf_{p∈R^n, y∈R^m} L_D(p, y, u)    (SPD)


[Overview diagram: connections between the algorithms]

(P)   min_u F_P(u),   F_P(u) = J(Au) + H(u)
(D)   max_p F_D(p),   F_D(p) = −J*(p) − H*(−A^T p)
(PD)  min_u sup_p L_PD(u, p),   L_PD(u, p) = 〈p, Au〉 − J*(p) + H(u)
(SPP) max_p inf_{u,w} L_P(u, w, p),   L_P(u, w, p) = J(w) + H(u) + 〈p, Au − w〉
(SPD) max_u inf_{p,y} L_D(p, y, u),   L_D(p, y, u) = J*(p) + H*(y) + 〈u, −A^T p − y〉

PFBS on (D) ↔ AMA on (SPP), and PFBS on (P) ↔ AMA on (SPD).
Adding the proximal penalties (1/(2α))‖u − u^k‖₂² and (1/(2δ))‖p − p^k‖₂² gives primal-dual proximal point on (PD), which equals PDHG.
Adding the augmented Lagrangian penalties (δ/2)‖Au − w‖₂² and (α/2)‖A^T p + y‖₂² gives relaxed AMA on (SPP)/(SPD), and then ADMM on (SPP) ↔ Douglas Rachford on (D), ADMM on (SPD) ↔ Douglas Rachford on (P).
Adding (1/2)〈u − u^k, (1/α − δA^T A)(u − u^k)〉 or (1/2)〈p − p^k, (1/δ − αAA^T)(p − p^k)〉 gives Split Inexact Uzawa on (SPP) ↔ PDHGMp and on (SPD) ↔ PDHGMu, which are obtained from PDHG by the substitutions p^{k+1} → 2p^{k+1} − p^k and u^k → 2u^k − u^{k−1}.

Legend: (P): Primal, (D): Dual, (PD): Primal-Dual, (SPP): Split Primal, (SPD): Split Dual
AMA: Alternating Minimization Algorithm, PFBS: Proximal Forward Backward Splitting, ADMM: Alternating Direction Method of Multipliers, PDHG: Primal Dual Hybrid Gradient, PDHGM: Modified PDHG
Bold (in the original figure): well understood convergence properties


Subdifferential Calculus

Let F : R^n → (−∞, ∞] be convex.

• Subdifferential: q ∈ ∂F(x) if F(y) − F(x) − 〈q, y − x〉 ≥ 0 ∀y ∈ R^n

• q ∈ ∂F(x) ⇔ x ∈ ∂F*(q)

• A^T ∂J(Au) ⊂ ∂(J(Au))

• ∂(J(Au)) + ∂H(u) ⊂ ∂(J(Au) + H(u))

Note: the containments are actually equalities given the assumptions we've made about J, H and A.

R.T. ROCKAFELLAR, Convex Analysis, 1970.


Moreau Decomposition

Moreau's decomposition is the main tool for demonstrating connections between the algorithms that follow.

Let f ∈ R^m, J a closed proper convex function on R^n, and A ∈ R^{n×m}. Then

f = arg min_{u∈R^m} [ J(Au) + (1/(2α))‖u − f‖₂² ] + αA^T arg min_{p∈R^n} [ J*(p) + (α/2)‖A^T p − f/α‖₂² ]

Proof: Let p* be a minimizer of J*(p) + (α/2)‖A^T p − f/α‖₂².
Then 0 ∈ ∂J*(p*) + αA(A^T p* − f/α).
Let u* = f − αA^T p*, which implies Au* ∈ ∂J*(p*).
Then p* ∈ ∂J(Au*), which implies A^T p* ∈ A^T ∂J(Au*).
Thus 0 ∈ A^T ∂J(Au*) + (u* − f)/α, which means u* = arg min_u J(Au) + (1/(2α))‖u − f‖₂².

J. J. MOREAU, Proximite et dualite dans un espace hilbertien, Bull. Soc. Math. France, 93, 1965, pp.

273-299.
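The decomposition is easy to sanity check numerically in the special case A = I and J = ‖·‖₁, where both minimizations have closed forms (soft-thresholding and projection onto the l∞ ball). The following small Python sketch is illustrative only and not part of the original slides; the helper names are chosen here.

import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1, i.e. arg min_u ||u||_1 + (1/(2t))||u - v||^2
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_linf_ball(v):
    # prox of J* for J = ||.||_1, i.e. projection onto {p : ||p||_inf <= 1}
    return np.clip(v, -1.0, 1.0)

rng = np.random.default_rng(0)
f = rng.standard_normal(10)
alpha = 0.3

# Moreau: f = prox_{alpha*J}(f) + alpha * prox_{J*/alpha}(f/alpha) when A = I
rhs = soft_threshold(f, alpha) + alpha * project_linf_ball(f / alpha)
print(np.allclose(f, rhs))  # True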


The PDHG Method

Interpret PDHG as a primal-dual proximal point method for finding a saddle point of

min_{u∈R^m} sup_{p∈R^n} −J*(p) + 〈p, Au〉 + H(u)    (PD)

PDHG iterations:

p^{k+1} = arg max_{p∈R^n} −J*(p) + 〈p, Au^k〉 − (1/(2δ_k))‖p − p^k‖₂²

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^{k+1}, u〉 + (1/(2α_k))‖u − u^k‖₂²

M. ZHU, AND T. F. CHAN, An Efficient Primal-Dual Hybrid Gradient Algorithm for Total Variation Image

Restoration, UCLA CAM Report [08-34], May 2008.
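In terms of proximal operators the two steps read p^{k+1} = prox_{δ_k J*}(p^k + δ_k Au^k) and u^{k+1} = prox_{α_k H}(u^k − α_k A^T p^{k+1}). A minimal generic Python sketch, assuming the caller supplies the two proximal maps (the helper names and signatures below are assumptions, not from the slides):

import numpy as np

def pdhg(A, prox_Jstar, prox_H, u0, p0, delta, alpha, iters=100):
    # prox_Jstar(q, delta): arg min_p J*(p) + (1/(2*delta))*||p - q||^2
    # prox_H(v, alpha):     arg min_u H(u)  + (1/(2*alpha))*||u - v||^2
    u, p = u0.copy(), p0.copy()
    for _ in range(iters):
        p = prox_Jstar(p + delta * (A @ u), delta)   # dual (maximization) step
        u = prox_H(u - alpha * (A.T @ p), alpha)     # primal step
    return u, p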


PFBS on (D)

PFBS alternates a gradient descent step with a proximal step:

p^{k+1} = arg min_{p∈R^n} J*(p) + (1/(2δ_k))‖p − (p^k + δ_k Au^{k+1})‖₂²,

where u^{k+1} = ∇H*(−A^T p^k).

Since u^{k+1} = ∇H*(−A^T p^k) ⇔ −A^T p^k ∈ ∂H(u^{k+1}), which is equivalent to

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉,

PFBS on (D) can be rewritten as

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉

p^{k+1} = arg min_{p∈R^n} J*(p) + 〈p, −Au^{k+1}〉 + (1/(2δ_k))‖p − p^k‖₂²

P. COMBETTES AND W. WAJS, Signal Recovery by Proximal Forward-Backward Splitting, Multiscale

Modeling and Simulation, 2006.


AMA on Split Primal

AMA applied to (SPP) alternately minimizes first the Lagrangian L_P(u, w, p) with respect to u and then the augmented Lagrangian L_P + (δ_k/2)‖Au − w‖₂² with respect to w before updating the Lagrange multiplier p.

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉

w^{k+1} = arg min_{w∈R^n} J(w) − 〈p^k, w〉 + (δ_k/2)‖Au^{k+1} − w‖₂²

p^{k+1} = p^k + δ_k(Au^{k+1} − w^{k+1})

• Can show equivalence to PFBS on (D) by a direct application of Moreau's decomposition

P. TSENG, Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational

Inequalities, SIAM J. Control Optim., Vol. 29, No. 1, 1991, pp. 119-138.


Equivalence by Moreau Decomposition

AMA applied to (SPP):

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉

w^{k+1} = arg min_{w∈R^n} J(w) − 〈p^k, w〉 + (δ_k/2)‖Au^{k+1} − w‖₂²

p^{k+1} = p^k + δ_k(Au^{k+1} − w^{k+1})

The rewritten PFBS on (D) and AMA on (SPP) have the same first step.

Combining the last two steps of AMA yields

p^{k+1} = (p^k + δ_k Au^{k+1}) − δ_k arg min_w [ J(w) + (δ_k/2)‖w − (p^k + δ_k Au^{k+1})/δ_k‖₂² ],

which is equivalent to the second step of PFBS by direct application of Moreau's decomposition:

p^{k+1} = arg min_p J*(p) + (1/(2δ_k))‖p − (p^k + δ_k Au^{k+1})‖₂²


AMA/PFBS Connection to PDHG

PFBS on (D) plus an additional proximal penalty is PDHG:

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉 + (1/(2α_k))‖u − u^k‖₂²

p^{k+1} = arg min_{p∈R^n} J*(p) + 〈p, −Au^{k+1}〉 + (1/(2δ_k))‖p − p^k‖₂²

AMA on (SPP) with the first step relaxed by the same proximal penalty is PDHG:

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉 + (1/(2α_k))‖u − u^k‖₂²

w^{k+1} = arg min_{w∈R^n} J(w) − 〈p^k, w〉 + (δ_k/2)‖Au^{k+1} − w‖₂²

p^{k+1} = p^k + δ_k(Au^{k+1} − w^{k+1})

• PFBS on (P) and AMA on (SPD) are connected to PDHG analogously
• Can think of PDHG as a relaxed version of AMA


AMA Connection to ADMM

AMA on (SPP) with (δ/2)‖Au − w^k‖₂² added to the first step is ADMM applied to (SPP):

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉 + (δ/2)‖Au − w^k‖₂²

w^{k+1} = arg min_{w∈R^n} J(w) − 〈p^k, w〉 + (δ/2)‖Au^{k+1} − w‖₂²

p^{k+1} = p^k + δ(Au^{k+1} − w^{k+1})

• ADMM alternately minimizes the augmented Lagrangian L_P + (δ/2)‖Au − w‖₂² with respect to u and w before updating the Lagrange multiplier p
• Equivalent to Split Bregman, a method which combines Bregman iteration and operator splitting to solve constrained convex optimization problems

D. BERTSEKAS AND J. TSITSIKLIS, Parallel and Distributed Computation, 1989.

T. GOLDSTEIN AND S. OSHER, The Split Bregman Algorithm for L1 Regularized Problems, SIIMS, Vol. 2,

No. 2, 2008.

Bregman/Lagrangian Connection

Bregman iteration applied to min_{w,u} J(w) + H(u) s.t. w = Au:

(w^{k+1}, u^{k+1}) = arg min_{w,u} J(w) − J(w^k) − 〈p_w^k, w − w^k〉 + H(u) − H(u^k) − 〈p_u^k, u − u^k〉 + (δ/2)‖Au − w‖₂²

p_w^{k+1} ∈ ∂J(w^{k+1}) ⇒ p_w^{k+1} = p_w^k + δ(Au^{k+1} − w^{k+1})

p_u^{k+1} ∈ ∂H(u^{k+1}) ⇒ p_u^{k+1} = p_u^k + δA^T(Au^{k+1} − w^{k+1})

• Equivalent to the method of multipliers with p_w^k = p^k and p_u^k = A^T p^k
• Alternating minimization for w and u yields split Bregman and ADMM


Equivalence to Douglas Rachford Splitting

Can apply the Moreau decomposition twice along with an appropriate change of variables to show ADMM on (SPP) or (SPD) is equivalent to Douglas Rachford splitting on (D) and (P) respectively.

Douglas Rachford splitting on (D):

z^{k+1} = [ arg min_q H*(−A^T q) + (1/(2δ))‖q − (2p^k − z^k)‖₂² ] + z^k − p^k

p^{k+1} = arg min_p J*(p) + (1/(2δ))‖p − z^{k+1}‖₂²

Note: z^k = p^k + δw^k with p^k and w^k the same as in ADMM on (SPP)

S. SETZER, Split Bregman Algorithm, Douglas-Rachford Splitting and Frame Shrinkage, LNCS, 2008.

J. ECKSTEIN, AND D. BERTSEKAS, On the Douglas-Rachford splitting method and the proximal point

algorithm for maximal monotone operators, Math. Program. 55, 1992.

P.L. COMBETTES AND J-C. PESQUET, A Douglas-Rachford Splitting Approach to Nonsmooth Convex

Variational Signal Recovery, IEEE, 2007.


Decoupling Variables

One can add additional proximal-like penalties to ADMM iterations and obtain a more explicit algorithm that still converges.

Given an ADMM step of the form u^{k+1} = arg min_u J(u) + (δ/2)‖Ku − f^k‖₂², modify the functional by adding

(1/2)〈u − u^k, (1/α − δK^T K)(u − u^k)〉,

where α is chosen such that 0 < α < 1/(δ‖K^T K‖). The (δ/2)‖Ku‖₂² term cancels!

The modified update is given by

u^{k+1} = arg min_u J(u) + (1/(2α))‖u − u^k + αδK^T(Ku^k − f^k)‖².

This method is closely related to the surrogate functional approach and Rockafellar's proximal method of multipliers.

X. ZHANG, M. BURGER, X. BRESSON, AND S. OSHER, Bregmanized Nonlocal Regularization for

Deconvolution and Sparse Reconstruction, UCLA CAM Report [09-03] 2009.
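A quick way to see that the modified update solves the same subproblem is to check that the two quadratic parts differ only by a term independent of u. This small Python check is not from the slides; the matrix sizes and names are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((8, 6))
fk = rng.standard_normal(8)
uk = rng.standard_normal(6)
delta = 0.7
alpha = 0.9 / (delta * np.linalg.norm(K.T @ K, 2))  # 0 < alpha < 1/(delta*||K^T K||)

def original_quadratic(u):
    # (delta/2)||Ku - f^k||^2 + (1/2)<u - u^k, (1/alpha - delta*K^T K)(u - u^k)>
    d = u - uk
    return 0.5 * delta * np.sum((K @ u - fk) ** 2) + 0.5 * d @ (d / alpha - delta * K.T @ (K @ d))

def linearized_quadratic(u):
    # (1/(2*alpha))||u - u^k + alpha*delta*K^T(K u^k - f^k)||^2
    return np.sum((u - uk + alpha * delta * K.T @ (K @ uk - fk)) ** 2) / (2 * alpha)

u1, u2 = rng.standard_normal(6), rng.standard_normal(6)
# The difference is the same constant at any u, so adding J(u) back gives the same minimizer.
print(np.isclose(original_quadratic(u1) - linearized_quadratic(u1),
                 original_quadratic(u2) - linearized_quadratic(u2)))  # True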


Split Inexact Uzawa Method

Consider adding (1/2)〈u − u^k, (1/α − δA^T A)(u − u^k)〉 to the first step of the Alternating Direction Method of Multipliers (ADMM) applied to (SPP), with 0 < α < 1/(δ‖A‖²).

Split Inexact Uzawa applied to (SPP):

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉 + (1/(2α))‖u − u^k + δαA^T(Au^k − w^k)‖₂²

w^{k+1} = arg min_{w∈R^n} J(w) − 〈p^k, w〉 + (δ/2)‖Au^{k+1} − w‖₂²

p^{k+1} = p^k + δ(Au^{k+1} − w^{k+1})

• In general we could similarly modify both minimization steps in ADMM, but by only modifying the first step we obtain an interesting PDHG-like interpretation.
• The more general SIU algorithm was proposed by Zhang, Burger and Osher.

X. ZHANG, M. BURGER, AND S. OSHER, A Unified Primal-Dual Algorithm Framework Based on Bregman

Iteration, UCLA CAM Report [09-99], 2009.

Modified PDHG (PDHGMp)

Replace p^k in the first step of PDHG with 2p^k − p^{k−1} to get PDHGMp:

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T(2p^k − p^{k−1}), u〉 + (1/(2α))‖u − u^k‖₂²

p^{k+1} = arg min_{p∈R^n} J*(p) − 〈p, Au^{k+1}〉 + (1/(2δ))‖p − p^k‖₂²

• Can show equivalence to SIU on (SPP) using Moreau’s decomposition

Related Works:

• G. CHEN AND M. TEBOULLE, A Proximal-Based Decomposition Method for Convex Minimization

Problems, Mathematical Programming, Vol. 64, 1994.

• T. POCK, D. CREMERS, H. BISCHOF, AND A. CHAMBOLLE, An Algorithm for Minimizing the

Mumford-Shah Functional, ICCV, 2009.

• A. CHAMBOLLE, V. CASELLES, M. NOVAGA, D. CREMERS AND T. POCK, An introduction to Total

Variation for Image Analysis,

http://hal.archives-ouvertes.fr/docs/00/43/75/81/PDF/preprint.pdf, 2009.

• A. CHAMBOLLE AND T. POCK, A first-order primal-dual algorithm for convex problems with

applications to imaging, 2010.

Equivalence of PDHGMp and SIU on (SPP)

SIU on (SPP): (the only change from PDHG is the addition of the δαA^T(Au^k − w^k) term)

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉 + (1/(2α))‖u − u^k + δαA^T(Au^k − w^k)‖₂²

w^{k+1} = arg min_{w∈R^n} J(w) − 〈p^k, w〉 + (δ/2)‖Au^{k+1} − w‖₂²

p^{k+1} = p^k + δ(Au^{k+1} − w^{k+1})

Replace δ(Au^k − w^k) in the u^{k+1} update with p^k − p^{k−1}. Combine p^{k+1} and w^{k+1} to get

p^{k+1} = (p^k + δAu^{k+1}) − δ arg min_w [ J(w) + (δ/2)‖w − (p^k + δAu^{k+1})/δ‖² ]

and apply Moreau's decomposition.

PDHGMp: (the only change from PDHG is that p^k became 2p^k − p^{k−1})

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T(2p^k − p^{k−1}), u〉 + (1/(2α))‖u − u^k‖₂²

p^{k+1} = arg min_{p∈R^n} J*(p) − 〈p, Au^{k+1}〉 + (1/(2δ))‖p − p^k‖₂²


Modified PDHG (PDHGMu)

PDHGMu: (the only change from PDHG is that u^k became 2u^k − u^{k−1})

p^{k+1} = arg min_{p∈R^n} J*(p) − 〈p, A(2u^k − u^{k−1})〉 + (1/(2δ))‖p − p^k‖₂²

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^{k+1}, u〉 + (1/(2α))‖u − u^k‖₂²

PDHGMu is analogously equivalent to the split inexact Uzawa (SIU) method applied to (SPD)


[Overview diagram repeated: the same map of connections between (P), (D), (PD), (SPP), (SPD) and the algorithms PFBS, AMA, ADMM, Douglas Rachford, PDHG, SIU, PDHGMp and PDHGMu shown earlier.]


More About Algorithm Connections

• P.L. COMBETTES AND J-C. PESQUET, Proximal Splitting Methods in Signal Processing, 2009.

• J. ECKSTEIN, Splitting Methods for Monotone Operators with Applications to Parallel Optimization,

Ph. D. Thesis, MIT, Dept. of Civil Engineering, 1989.

• E. ESSER, Applications of Lagrangian-Based Alternating Direction Methods and Connections to Split

Bregman, UCLA CAM Report [09-31], 2009.

• E. ESSER, X. ZHANG, AND T. F. CHAN, A General Framework for a Class of First Order Primal-Dual

Algorithms for TV Minimization, UCLA CAM Report [09-67], 2009.

• R. GLOWINSKI AND P. LE TALLEC, Augmented Lagrangian and Operator-splitting Methods in

Nonlinear Mechanics, SIAM, 1989.

• C. WU AND X.C. TAI, Augmented Lagrangian Method, Dual Methods, and Split Bregman Iteration for

ROF, Vectorial TV, and High Order Models, 2009.


Recent Developments

• A. CHAMBOLLE AND T. POCK, A first-order primal-dual algorithm for convex problems with

applications to imaging, 2010.

Instead of p^{k+1} → 2p^{k+1} − p^k, let p^{k+1} → p^{k+1} + θ_k(p^{k+1} − p^k).

A well chosen θ_k accelerates the convergence rate: O(1/N) in the general case, O(1/N²) if J* or H is uniformly convex, and linear if both are uniformly convex.

• B. HE, M. TAO AND X. YUAN, A splitting method for separate convex programming with linking linear

constraints, 2010.

Extends ADMM to splitting the functional into more than two parts and achieves monotone convergence of the iterates using a predictor-corrector approach.

• R. MONTEIRO AND B. SVAITER, Iteration-complexity of block-decomposition algorithms and the

alternating minimization augmented Lagrangian method, 2010.

General convergence rate analysis including ADMM as a special case.


Comparison of Methods: PDHG

p^{k+1} = arg min_{p∈R^n} J*(p) − 〈p, Au^k〉 + (1/(2δ_k))‖p − p^k‖₂²

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^{k+1}, u〉 + (1/(2α_k))‖u − u^k‖₂²

Pros:
• Simple iterations
• Explicit. Variables not coupled by the matrix A
• Empirically can be very efficient for well chosen α_k and δ_k parameters

Cons:
• General convergence properties unknown

(Recall we are assuming throughout that J and H are closed proper convex functions.)


Comparison of Methods: PFBS/AMA

PFBS on (D):

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉

p^{k+1} = arg min_{p∈R^n} J*(p) + 〈p, −Au^{k+1}〉 + (1/(2δ_k))‖p − p^k‖₂²

Convergence theory guarantees p^k converges to a solution of (D) and u^k converges to the unique solution of (P) assuming H* is differentiable, ∇(H*(−A^T p)) is Lipschitz continuous with Lipschitz constant equal to 1/β, and 0 < inf δ_k ≤ sup δ_k < 2β.

Convergence of AMA on (SPP) analogously requires H to be strongly convex.

Pros:
• Simple, explicit iterations. Variables not coupled by the matrix A

Cons:
• Must additionally assume differentiability or strong convexity


Comparison of Methods: ADMM/DR

ADMM on (SPP):

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T p^k, u〉 + (δ/2)‖Au − w^k‖₂²

w^{k+1} = arg min_{w∈R^n} J(w) − 〈p^k, w〉 + (δ/2)‖Au^{k+1} − w‖₂²

p^{k+1} = p^k + δ(Au^{k+1} − w^{k+1})

Only needs δ > 0 to ensure convergence of p^k, but an additional assumption is needed to ensure u^k converges. It suffices to assume H(u) + ‖Au‖² is strictly convex. Convergence of p^k in Douglas Rachford splitting on (D) only requires δ > 0.

Pros:
• Converges under minimal assumptions

Cons:
• The u^{k+1} update is more implicit in the sense that variables are coupled by the matrix A


Comparison of Methods: PDHGM/SIU

PDHGMp:

u^{k+1} = arg min_{u∈R^m} H(u) + 〈A^T(2p^k − p^{k−1}), u〉 + (1/(2α))‖u − u^k‖₂²

p^{k+1} = arg min_{p∈R^n} J*(p) − 〈p, Au^{k+1}〉 + (1/(2δ))‖p − p^k‖₂²

Convergence requires 0 < α < 1/(δ‖A‖²).

Pros:
• Converges under minimal assumptions
• Simple, explicit iterations. Variables not coupled by the matrix A
• The more general form of SIU can be even more flexible, allowing for partially implicit steps

Cons:
• Dynamic step size schemes are not theoretically justified for the above algorithm, but they are for the closely related method studied by Chambolle and Pock


Application to TV Minimization Problems

Discretize ‖u‖_TV using forward differences and assuming Neumann BC:

‖u‖_TV = Σ_{r=1}^{M_r} Σ_{c=1}^{M_c} √( (D_c^+ u_{r,c})² + (D_r^+ u_{r,c})² )

Vectorize the M_r × M_c matrix by stacking columns.

Define a discrete gradient matrix D and a norm ‖·‖_E such that ‖Du‖_E = ‖u‖_TV.

To compare numerical performance on TV denoising,

min_u ‖u‖_TV + (λ/2)‖u − f‖₂²,

first let A = D, J(Au) = ‖Du‖_E and H(u) = (λ/2)‖u − f‖₂² to write the model in the form of

min_{u∈R^m} J(Au) + H(u)    (P)


TV Notation Details

Define a directed grid-shaped graph with m = M_r M_c nodes corresponding to matrix elements (r, c).

3 × 3 example: [figure of the grid graph omitted]

For each edge η with endpoint indices (i, j), i < j, define:

D_{η,k} = −1 for k = i,  1 for k = j,  0 for k ≠ i, j.

E_{η,k} = 1 if D_{η,k} = −1,  0 otherwise.

Can use E to define the norm ‖·‖_E on R^e by

‖w‖_E = ‖ √(E^T(w²)) ‖₁ = Σ_{i=1}^m √( (E^T(w²))_i ) = Σ_{i=1}^m ‖w_i‖₂

where w_i is the vector of edge values for directed edges coming out of node i.

For TV regularization, J(Au) = ‖Du‖_E = ‖u‖_TV
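For concreteness, here is a small sketch (not from the slides) of one way to build a forward-difference gradient matrix D with Neumann BC for a column-stacked M_r × M_c image; the helper names are chosen here and SciPy sparse matrices are assumed.

import numpy as np
import scipy.sparse as sp

def diff_1d(n):
    # 1D forward difference with Neumann BC: the last difference is set to zero
    B = (sp.eye(n, k=1) - sp.eye(n)).tolil()
    B[-1, :] = 0
    return B.tocsr()

def grad_matrix(Mr, Mc):
    # Gradient for an Mr x Mc image vectorized by stacking columns.
    # Rows of D correspond to edges of the grid graph (row and column differences).
    Dr = sp.kron(sp.identity(Mc), diff_1d(Mr))   # differences between vertically adjacent pixels
    Dc = sp.kron(diff_1d(Mc), sp.identity(Mr))   # differences between horizontally adjacent pixels
    return sp.vstack([Dr, Dc]).tocsr()

def tv_norm(u_vec, Mr, Mc, D):
    # ||u||_TV = sum over pixels of the magnitude of the two forward differences
    w = D @ u_vec
    m = Mr * Mc
    return np.sum(np.sqrt(w[:m] ** 2 + w[m:] ** 2))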


PDHG for TV Denoising

p^{k+1} = Π_X(p^k + δ_k Du^k)

u^{k+1} = (1/α_k + λ)^{−1} (λf − D^T p^{k+1} + u^k/α_k)

Originally proposed as

p^{k+1} = Π_X(p^k + τ_k λDu^k)

u^{k+1} = (1 − θ_k)u^k + θ_k(f − D^T p^{k+1}/λ)

with τ_k = δ_k λ and θ_k = α_k λ/(1 + λα_k);  τ_k = .2 + .008k,  θ_k = (.5 − 5/(15 + k))/τ_k

X = {p : ‖p‖_{E*} ≤ 1}

Π_X(p) = arg min_{q∈X} ‖q − p‖₂² = p / ( E max(√(E^T(p²)), 1) )    (componentwise)
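Putting the pieces together, a minimal PDHG TV denoising loop with fixed step sizes might look like the sketch below. It reuses grad_matrix from the earlier sketch and is illustrative only; the fixed-step defaults are taken from the table on the next slide.

import numpy as np

def project_X(p, Mr, Mc):
    # Pi_X: scale each pixel's (row-diff, col-diff) pair to have magnitude <= 1
    m = Mr * Mc
    mag = np.maximum(np.sqrt(p[:m] ** 2 + p[m:] ** 2), 1.0)
    return np.concatenate([p[:m] / mag, p[m:] / mag])

def pdhg_tv_denoise(f, Mr, Mc, lam, alpha=0.2, delta=0.624, iters=300):
    # min_u ||u||_TV + (lam/2)||u - f||_2^2 with fixed step sizes alpha, delta
    D = grad_matrix(Mr, Mc)          # from the earlier sketch (assumption)
    u = f.copy()
    p = np.zeros(D.shape[0])
    for _ in range(iters):
        p = project_X(p + delta * (D @ u), Mr, Mc)                 # dual step
        u = (lam * f - D.T @ p + u / alpha) / (1.0 / alpha + lam)  # primal step
    return u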


Original, Noisy and Benchmark Images

Use the 256 × 256 cameraman image.

Add white Gaussian noise having standard deviation 20.

Let λ = .053.


Iterations Required for TV Denoising

Algorithm                    tol = 10⁻²   tol = 10⁻⁴   tol = 10⁻⁶
PDHG (adaptive)                   14           70          310
PDHGMu (adaptive)                 19           92          365
PDHG α = 5, δ = .025              31          404         8209
PDHG α = 1, δ = .125              51          173         1732
PDHG α = .2, δ = .624            167          383          899
PDHGMu α = 5, δ = .025            21          394         8041
PDHGMu α = 1, δ = .125            38          123         1768
PDHGMu α = .2, δ = .624          162          355          627
PDHG α = 5, δ = .1                22          108         2121
PDHG α = 1, δ = .5                39          123          430
PDHG α = .2, δ = 2.5             164          363          742
PFBS δ = .0132                    48          750        15860
ADMM δ = .025                     17          388         7951
ADMM δ = .125                     22          100         1804
ADMM δ = .624                     97          270          569

Note: PDHGMp performs analogously to PDHGMu


PFBS as Gradient Projection

PFBS on (D) for TV denoising can be written as

p^{k+1} = Π_X( p^k − τ_k D(D^T p^k − λf) ),

which can be interpreted as projected gradient descent applied to

min_{p∈X} G(p) := (1/2)‖D^T p − λf‖²,

an equivalent form of the dual problem.

Gradient projection on the dual problem corresponds to the θ = 1 (α = ∞) case of PDHG
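A minimal sketch of this gradient projection iteration, reusing grad_matrix and project_X from the earlier sketches (the step size default mirrors the PFBS row of the table; all helper names are assumptions):

import numpy as np

def pfbs_dual_tv_denoise(f, Mr, Mc, lam, tau=0.0132, iters=500):
    # Projected gradient descent on min_{p in X} (1/2)||D^T p - lam*f||^2
    D = grad_matrix(Mr, Mc)
    p = np.zeros(D.shape[0])
    for _ in range(iters):
        p = project_X(p - tau * (D @ (D.T @ p - lam * f)), Mr, Mc)
    return f - (D.T @ p) / lam   # recover u = arg min_u H(u) + <D^T p, u>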


PDHG as Projected Averaged Gradient

Plugging all the u^k updates into the p^{k+1} update allows PDHG for TV denoising to be written as

p^{k+1} = Π_X( p^k − τ_k d^k ),    (1)

where

d^k = Σ_{i=0}^k s_i^k ∇G(p^i),    s_i^k = θ_{i−1} Π_{j=i}^{k−1} (1 − θ_j).

Note that Σ_{i=0}^k s_i^k = 1 for all k ≥ 0,

i.e. the direction d^k is a convex combination of gradients of G at all previous iterates, which suggests connections to bundle methods/heavy ball methods.

Can show convergence so far in cases where θ_k → 1, in which case PDHG asymptotically behaves like gradient projection on the dual


PDHGMp for More General Problems

Applying PDHGMp to min_u Σ_{i=1}^N J_i(A_i u) + H(u) yields:

u^{k+1} = arg min_u H(u) + (1/(2α))‖u − ( u^k − α Σ_{i=1}^N A_i^T(2p_i^k − p_i^{k−1}) )‖₂²

p_i^{k+1} = arg min_{p_i} J_i*(p_i) + (1/(2δ))‖p_i − ( p_i^k + δA_i u^{k+1} )‖₂²    for i = 1, ..., N

where J(Au) = Σ_{i=1}^N J_i(A_i u).

Letting p = [p_1; ...; p_N], the decoupling follows from J*(p) = Σ_{i=1}^N J_i*(p_i).

• The p_i subproblems are decoupled

• Need 0 < α < 1/(δ‖A‖²) for stability, with A = [A_1; ...; A_N]

• Preconditioning is possible


Preconditioning

Preconditioning by introducing matrices B_i can be effective as long as the B_i are simple enough not to complicate the minimization subproblems.

Example:

Σ_{i=1}^N J̃_i(B_i A_i u + b_i) + H(u) = Σ_{i=1}^N J_i(A_i u) + H(u) = J(Au) + H(u),

where A = [A_1; ...; A_N] and J_i(z_i) = J̃_i(B_i z_i + b_i).


Handling Convex Constraints

For PDHGMp to work well, we want simple, explicit solutions to the minimization subproblems.

A convex constraint u ∈ T can be handled by adding the convex indicator function

g_T(u) = 0 if u ∈ T,  ∞ otherwise.

This leads to a simple update when the orthogonal projection

Π_T(z) = arg min_u g_T(u) + ‖u − z‖²

is easy to compute. For example,

T = {z : ‖z − f‖₂ ≤ ε}  ⇒  Π_T(z) = f + (z − f)/max(‖z − f‖₂/ε, 1)
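That projection is a one-liner; a small Python version (names chosen here, not from the slides) reads:

import numpy as np

def project_l2_ball(z, f, eps):
    # Pi_T for T = {z : ||z - f||_2 <= eps}: leave z alone if inside, else pull it onto the ball
    d = z - f
    return f + d / max(np.linalg.norm(d) / eps, 1.0)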


Constrained TV Deblurring Example

min_{‖Ku−f‖₂ ≤ ε} ‖u‖_TV

can be rewritten as

min_u ‖Du‖_E + g_T(Ku),

where g_T is the indicator function for T = {z : ‖z − f‖₂ ≤ ε}.

In order to treat both D and K explicitly, let

H(u) = 0 and J(Au) = J_1(Du) + J_2(Ku),

where A = [D; K].

Write the dual variable as p = [p_1; p_2] and apply PDHGMp.


PDHGMp for Constrained TV Deblurring

u^{k+1} = u^k − α( D^T(2p_1^k − p_1^{k−1}) + K^T(2p_2^k − p_2^{k−1}) )

p_1^{k+1} = Π_X( p_1^k + δDu^{k+1} )

p_2^{k+1} = p_2^k + δKu^{k+1} − δΠ_T( p_2^k/δ + Ku^{k+1} ),

where Π_X and Π_T are defined as before.

Both projections are simple to compute:
• Π_X is analogous to orthogonal projection onto an l∞ ball
• Π_T is orthogonal projection onto the l2 ε-ball centered at f

As we will see, for fixed α and δ the performance is nearly identical to PDHG, but PDHGMp is proven to converge.
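A minimal sketch of this iteration, reusing grad_matrix, project_X and project_l2_ball from the earlier sketches and assuming K is available as a (sparse) matrix (all helper names and defaults are assumptions):

import numpy as np

def pdhgmp_constrained_deblur(f, K, Mr, Mc, eps, alpha=0.2, delta=0.55, iters=300):
    # min_{||Ku - f||_2 <= eps} ||u||_TV via PDHGMp with dual variables p1 (TV) and p2 (data)
    D = grad_matrix(Mr, Mc)
    u = f.copy()
    p1 = np.zeros(D.shape[0]); p1_old = p1.copy()
    p2 = np.zeros(K.shape[0]); p2_old = p2.copy()
    for _ in range(iters):
        u = u - alpha * (D.T @ (2 * p1 - p1_old) + K.T @ (2 * p2 - p2_old))
        p1_old, p2_old = p1, p2
        p1 = project_X(p1 + delta * (D @ u), Mr, Mc)
        p2 = p2 + delta * (K @ u) - delta * project_l2_ball(p2 / delta + K @ u, f, eps)
    return u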


Deblurring Numerical Result

K: convolution operator for normalized Gaussian blur with standard deviation 3

h: clean image

f = Kh + η

η: zero mean Gaussian noise with standard deviation 1

ε = 256

α = .2, δ = .55

[Plot omitted: ‖u^k − u*‖₂ versus iterations for PDHG and PDHGMp]

[Images omitted: original, blurry/noisy, and image recovered from 300 PDHGMp iterations]


Some Easy Functions to Deal With

PDHGMp and the other primal dual methods discussed are most efficient when the decoupled minimization subproblems can be easily computed. Examples of "nice" functions and their Legendre transforms include

J(z) = (1/(2α))‖z‖₂²              J*(p) = (α/2)‖p‖₂²
J(z) = ‖z‖₂                       J*(p) = g_{p : ‖p‖₂ ≤ 1}
J(z) = ‖z‖₁                       J*(p) = g_{p : ‖p‖∞ ≤ 1}
J(z) = ‖z‖_E                      J*(p) = g_{p : ‖p‖_{E*} ≤ 1}
J(z) = ‖z‖∞                       J*(p) = g_{p : ‖p‖₁ ≤ 1}
J(z) = max(z)                     J*(p) = g_{p : p ≥ 0 and ‖p‖₁ = 1}
J(z) = inf_w F(w) + H(z − w)      J*(p) = F*(p) + H*(p)

Note: all indicator functions g are for convex sets that are easy to project onto.

Although there's no simple formula for projecting a vector onto the l1 unit ball (or its positive face) in R^n, this can be computed with O(n log n) complexity.
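One standard O(n log n) way to do it is to sort, find the threshold, and shrink; the sketch below (names chosen here, not from the slides) projects onto the unit simplex and onto the l1 ball:

import numpy as np

def project_simplex(v, s=1.0):
    # Euclidean projection onto {x : x >= 0, sum(x) = s} via sorting, O(n log n)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - s))[0][-1]
    theta = (css[rho] - s) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_l1_ball(v, r=1.0):
    # Projection onto {x : ||x||_1 <= r}: identity if already inside the ball
    if np.abs(v).sum() <= r:
        return v
    return np.sign(v) * project_simplex(np.abs(v), r)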


Multiphase Segmentation Example

Many problems deal with the normalization constraint c ∈ C, where

C = {c = (c_1, ..., c_W) : c_w ∈ R^M, Σ_{w=1}^W c_w = 1, c_w ≥ 0}

Example: Convex relaxation of multiphase segmentation

Goal: Segment a given image h ∈ R^M into W regions where the intensities in the wth region are close to given intensities z_w ∈ R and the lengths of the boundaries between regions are not too long.

g_C(c) + Σ_{w=1}^W ( ‖c_w‖_TV + (λ/2)〈c_w, (h − z_w)²〉 )

This is a convex approximation of the related nonconvex functional which additionally requires the labels c to only take on the values zero and one.

E. BAE, J. YUAN, AND X. TAI, Global Minimization for Continuous Multiphase Partitioning Problems Using a

Dual Approach, UCLA CAM Report [09-75], 2009.

C. ZACH, D. GALLUP, J.-M. FRAHM, AND M. NIETHAMMER, Fast global labeling for real-time stereo using

multiple plane sweeps, VMV, 2008.

Application of PDHGMp

Let H(c) = g_C(c) + (λ/2)〈c, Σ_{w=1}^W X_w^T (h − z_w)²〉,

J(Ac) = Σ_{w=1}^W J_w(DX_w c),

where A = [DX_1; ...; DX_W], X_w c = c_w and

J_w(DX_w c) = ‖DX_w c‖_E = ‖Dc_w‖_E = ‖c_w‖_TV.

PDHGMp iterations:

c^{k+1} = Π_C( c^k − α Σ_{w=1}^W X_w^T( D^T(2p_w^k − p_w^{k−1}) + (λ/2)(h − z_w)² ) )

p_w^{k+1} = Π_X( p_w^k + δDX_w c^{k+1} )    for w = 1, ..., W.
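Π_C decouples over pixels: at each pixel the label vector (c_1, ..., c_W) is projected onto the unit simplex, so project_simplex from the earlier sketch can be reused. A small illustrative helper (shape convention chosen here):

import numpy as np

def project_C(c):
    # c has shape (W, M): one column of W label values per pixel.
    # Project each column onto the unit simplex {x >= 0, sum(x) = 1}.
    return np.apply_along_axis(project_simplex, 0, c)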


Segmentation Numerical Result

λ = .0025    z = [75 105 142 178 180]    α = δ = .995/√40

Threshold c when each ‖c_w^{k+1} − c_w^k‖∞ < .01 (150 iterations)

[Images omitted: original image, segmented image, and regions 1–5]

Segmentation of brain image into 5 regions.

Modifications: we can also add µ_w parameters to regularize the lengths of the boundaries of each region differently, and alternately update the averages z_w when they are not known beforehand.


Conclusions

• The PDHG-related framework for a class of primal-dual algorithms hopefully reduces the dauntingly large space of potential methods by showing some close connections between them.

• The methods discussed can be efficiently applied to convex models with separable structure which can be written in the form of (P).

• Simple, explicit iterations and convergence under minimal assumptions make modified PDHG practical for many large scale separable convex minimization problems that arise in image processing.

• The large amount of recent work in the imaging science literature about these and related methods demonstrates their usefulness in this area.
