
Noname manuscript No. (will be inserted by the editor)

Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform

Rick Archibald · Anne Gelb · Rodrigo B. Platte

the date of receipt and acceptance should be inserted later

Abstract Fourier samples are collected in a variety of applications including magnetic resonance imaging (MRI) and synthetic aperture radar (SAR). The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the l1 regularization terms. Due to its numerical efficiency, it has been widely adopted for a variety of applications.

A well known drawback in using TV as an l1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, and was coined the "polynomial annihilation (PA) transform." This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.

Keywords Fourier Data · l1 regularization · Split Bregman · Edge Detection · Polynomial Annihilation

1 Introduction

Data are acquired as partial Fourier samples in several applications, including magnetic resonance imaging (MRI) and synthetic aperture radar (SAR). In an idealized situation, recovering images from partial Fourier data may be done simply and efficiently by using the inverse fast Fourier transform (FFT). In practice, the data acquisition system is usually under-prescribed and noisy. Moreover, the Fourier domain is not well suited for recovering the underlying image, which is generally only piecewise smooth.

In recent years l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy data for images that have some sparsity properties, that is, some measurable features of the image have sparse representation.¹ Also, l1 regularization provides a formulation that is compatible with compressed sensing (CS) applications, specifically, when an image can be reconstructed

Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 ([email protected])

School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ, 85287 ([email protected])

School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ, 85287 ([email protected])


1 Technically, the l0 norm of an expression is a better measure of sparsity. However, the l0 norm does not meet the convexity requirements and is very slow to compute. Additional detail on using the l1 norm in place of the l0 in order to measure sparsity can be found in [7].


from a very small number of measurements [4-6]. In particular, the goal for reconstructing an image from SAR or MRI data is to solve

\min_f J(f) \quad \text{such that} \quad \|F f - \hat{f}\|_2 = 0, \qquad (1)

where \hat{f} consists of samples of the Fourier transform of the unknown image, f. Here F contains a subset of rows of a Fourier matrix, and J is an appropriate l1 regularization term, [10,20-23]. Typically for measured data the related (TV) "denoising" problem,

\min_f J(f) \quad \text{such that} \quad \|F f - \hat{f}\|_2 < \sigma, \qquad (2)

is solved. It is in general still difficult to develop efficient and robust techniques for solving (2). The Split Bregman Algorithm, [12], is a numerically efficient and stable algorithm that has successfully solved (2) for a variety of applications. In this paper we use the Split Bregman Algorithm as a launching point to develop a new technique for solving (2) based on the polynomial annihilation l1 regularization introduced in [27]. We will demonstrate that our method yields improved accuracy in regions away from discontinuities, especially in the case of under-sampled data. We adopt the standardizations and terminology from [12] to describe our algorithm.

A well known drawback in using TV as an l1 regularization term is that the reconstructed image defaults to a piecewise constant approximation. While suitable for some applications, in others it is desirable to see more details. This has been addressed in several ways. For example, total generalized variation (TGV), which generates a piecewise (typically quadratic) polynomial approximation in smooth regions, was developed in [2]. Multi-wavelets have also been used to formulate sparsifying transforms, [24]. The polynomial annihilation (PA) transform, which exploits the sparsity of the underlying image in the jump discontinuity domain, was introduced in [27]. It was demonstrated there that generating a sparsifying transform based on the sparsity of edges in the underlying image (as expressed in (11)) yields improved accuracy and convergence properties for both image reconstruction and edge identification. In particular, high order accuracy is possible in regions of smoothness, which has two important consequences, also true in multi-dimensions. First, it is possible to see more variation in the underlying image, and second, fewer data points are needed to reconstruct an image.

In [27], the MATLAB CVX package [15,14] was used to implement (2) for the PA transform l1 regularization. Although suitable for one-dimensional problems, CVX is not efficient enough for higher dimensional problems. Because of this limitation, the technique introduced in [27] was not pursued for other applications, such as compression or reducing the dimensionality of the data for efficient processing. This paper thus seeks to expand the recent results in [27] in two ways. First, we will improve the efficiency of the numerical algorithms in multi-dimensions. In this regard, we will adapt the Split Bregman Algorithm, [12], to the PA transform. Once this is accomplished, we will demonstrate how the PA transform is an effective tool for reconstructing piecewise smooth images from under-sampled Fourier data. The rest of the paper is organized as follows. In Section 2 we describe the reconstruction problem in one dimension and discuss how the polynomial annihilation edge detection method is used to construct an l1 regularization term for solving (2). In Section 3 we review the Split Bregman Algorithm for sparse Fourier data using the TV operator and demonstrate how it can be adapted for the PA transform operator without increasing computational cost. In Section 4 we provide some numerical examples and demonstrate that our method is robust to noise and undersampling. We compare our results to those using TV and TGV (combined with shearlet regularization), based on the code given in [17].²

Concluding remarks are provided in Section 5.

2 Preliminaries

We begin by describing the one-dimensional problem. Let f be a piecewise smooth, real-valued function on [-1, 1], with f(-1) = f(1). Suppose we are given its first 2N + 1 (normalized) Fourier coefficients,

\hat{f}(k) = \frac{1}{2}\int_{-1}^{1} f(x)\, e^{-ik\pi x}\, dx, \qquad \text{for } k = -N, \dots, N. \qquad (3)

2 In [27] our method compared favorably to multi-wavelet constructed regularization terms, [24]. We do not repeat those experiments here.


We wish to recover f from (3) at a finite set of grid points, x_j = -1 + j\Delta x, j = 0, \dots, 2N, with \Delta x = 1/N.³

The standard Fourier partial sum reconstruction used to approximate periodic replicas of f is defined as

S_N f(x) = \sum_{k=-N}^{N} \hat{f}_k\, e^{ik\pi x}. \qquad (4)

At the gridpoints x = xj , we can write (4) as the linear system

F f = \hat{f}, \qquad (5)

where F denotes the discrete Fourier transform matrix,

F_{k,j} = e^{-ik\pi x_j}, \qquad 0 \le j \le 2N, \ -N \le k \le N. \qquad (6)

Here \hat{f} is the 2N + 1 vector of Fourier coefficients given in (3), and f = \{f_j\}_{j=0}^{2N} is the calculated solution to f(x_j), j = 0, \dots, 2N. The FFT can be used to efficiently solve (5), [13]. Since the underlying function f is only piecewise smooth, the approximation given by (4) will yield the Gibbs phenomenon. Filtering is often used to alleviate the Gibbs phenomenon and to reduce the effects of high frequency noise. The filtered approximation is given by

S_N^\sigma f(x) = \sum_{k=-N}^{N} \sigma_k \hat{f}_k\, e^{ik\pi x}, \qquad (7)

where σk is an admissible filter, [13]. The corresponding linear system at the gridpoints x = xj is then

F^\sigma f = \hat{f}, \qquad (8)

where Fσ denotes the filtered discrete Fourier transform matrix

F^\sigma_{k,j} = \sigma_k e^{-ik\pi x_j}, \qquad 0 \le j \le 2N, \ -N \le k \le N, \qquad (9)

and once again the FFT can be employed. As will be seen in Section 4, filtering may cause too much smoothing over the jump discontinuities, especially if the Gibbs oscillations are to be completely removed. It is also evident that filtering cannot address the poor reconstruction quality due to undersampling. Thus we seek other methods of regularization. Since noise is inherent to all sampling systems, we will demonstrate that our technique is effective for noisy input data, that is, when the given coefficients are \hat{f} + \epsilon.
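To make the reconstructions (4) and (7) concrete, the following sketch (our own illustration, not code from the paper) evaluates the partial sums directly from coefficients computed as in (3); the exponential filter parameters match those used later in Figure 1.

```python
import numpy as np

def fourier_partial_sum(fhat, N, x, p=None, alpha=32):
    """Evaluate S_N f(x) of (4), or the filtered sum of (7) when an exponential
    filter order p is supplied. fhat[k + N] holds the Fourier coefficient of
    wave number k, for k = -N, ..., N."""
    k = np.arange(-N, N + 1)
    sigma = np.ones(k.size)
    if p is not None:
        sigma = np.exp(-alpha * (k / N) ** (2 * p))   # admissible exponential filter
    # sum over modes for every evaluation point; f is real valued
    return np.real((sigma * fhat) @ np.exp(1j * np.pi * np.outer(k, x)))

# Coefficients of a piecewise smooth test function via the quadrature of (3),
# followed by the unfiltered and filtered reconstructions on 2N + 1 points.
N = 64
x = -1 + np.arange(2 * N + 1) / N
xq = np.linspace(-1, 1, 4096, endpoint=False)          # quadrature grid for (3)
fq = np.where(xq < 0, -1 - xq, 1 - xq)                  # a function with a jump at x = 0
k = np.arange(-N, N + 1)
fhat = 0.5 * (np.exp(-1j * np.pi * np.outer(k, xq)) @ fq) * (xq[1] - xq[0])
S_N = fourier_partial_sum(fhat, N, x)                   # (4): exhibits Gibbs oscillations
S_N_filtered = fourier_partial_sum(fhat, N, x, p=2)     # (7): smoother, but smeared at the jump
```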

2.1 Sparse Representation in the Jump Function Domain

As mentioned in the introduction, we seek to regularize (2) by enforcing the sparsity of edges in the image domain. To do this, we must first define the jump function of a piecewise smooth function. Note that the words "jump" and "edge" are used interchangeably throughout the remainder of this paper.

Definition 1 Let f : [-1, 1] \to R. For all x \in (-1, 1), let f(x^-) and f(x^+) denote its left- and right-hand limits. The jump function of f is defined at each x as

[f](x) = f(x^+) - f(x^-). \qquad (10)

3 For simplicity we choose 2N + 1 equally spaced grid points to match the number of Fourier coefficients. The techniques described in this paper are easily extended to different gridding schemes in the image domain.


From the above definition, we see that [f] is zero everywhere that f is continuous, while it takes on the jump value at each jump discontinuity location. We make the assumption that there is at most one jump within a cell I_j = [x_j, x_{j+1}). Thus, if [f](x_j) is the value of the jump that occurs within the cell I_j, we can write

[f](x) = \sum_{j=0}^{2N-1} [f](x_j)\, \chi_j(x), \qquad (11)

where χj(x) is defined as

\chi_j(x) = \begin{cases} 1 & \text{if } x \in I_j \\ 0 & \text{for all other } x. \end{cases}

For simplicity, the numerical algorithms used in this investigation all place the jump discontinuity at the left boundary of its corresponding cell. (This also means that the solution to (2) can be written as the expansion coefficients of the standard basis.)

Since [f](x) = 0 for almost all grid point values x = x_j, it is apparent that (11) has only a few nonzero coefficients. Therefore we say that [f](x) has sparse representation, or equivalently, that the jump function domain of f is sparse. Hence we seek to regularize (5) in the form of (2) using (11). This will require an approximation to (11) that is suitable for convex optimization problems.

2.2 Convex Optimization Using Sparsity of Edges

While there are a variety of ways to approximate (11), for our purpose we will use the polynomial annihilation edge detection method, [1]. The advantage in using the polynomial annihilation method is that it is high order, meaning that in regions of smoothness the coefficients of the approximation of (11) will indeed be sparse. Moreover, it is simple to generate a transform matrix for l1 regularization. This was accomplished in [27], where the polynomial annihilation (PA) transform matrix was introduced.

The polynomial annihilation edge detection method, [1], is defined as

L_m f(x) = \frac{1}{q_m(x)} \sum_{x_j \in S_x} c_j(x)\, f(x_j), \qquad (12)

where S_x is the local set of m + 1 grid points from the set of given grid points about x, c_j(x) are the polynomial annihilation edge detection coefficients, (13), and q_m(x) is the normalization factor, (14). Each parameter of the method can be further described as:

– S_x: For any particular cell I_j = [x_j, x_{j+1}), there are m possible stencils S_x of size m + 1 that contain the interval I_j. For simplicity, we assume that the stencils are centered around the interval of interest, I_j, and are given by

S_{I_j} = \{x_{j-\frac{m}{2}}, \dots, x_{j+\frac{m}{2}}\}, \qquad S_{I_j} = \{x_{j-\frac{m+1}{2}}, \dots, x_{j+\frac{m-1}{2}}\}

for m even and odd respectively. For non-periodic solutions the stencils are adapted to be more one sided as the boundaries of the interval are approached, [1]. To avoid cumbersome notation, we write S_x as the generic stencil unless further clarification is needed.

– c_j(x): The polynomial annihilation edge detection coefficients, c_j(x), j = 1, \dots, m + 1, are constructed to annihilate polynomials up to degree m. They are obtained by solving the system

\sum_{x_j \in S_x} c_j(x)\, p_\ell(x_j) = p_\ell^{(m)}(x), \qquad \ell = 0, \dots, m, \qquad (13)

where p_\ell, \ell = 0, \dots, m, is a basis for the space of polynomials of degree \le m.

– q_m(x): The normalization factor, q_m(x), normalizes the approximation to assure the proper convergence of L_m f to the jump value at each discontinuity. It is computed as

q_m(x) = \sum_{x_j \in S_x^+} c_j(x), \qquad (14)

where S_x^+ is the set of points x_j \in S_x such that x_j \ge x.


It was shown in [1] that the polynomial annihilation edge detection method has mth order convergence away from the jump discontinuities. More precisely, this accuracy is seen in the region outside the stencil that contains the jump. Oscillations will naturally develop in the region of the jump, and these increase with m. The consequences of these oscillations on our method will be discussed more in Section 4.

If the solution vector f is on uniform points in [-1, 1], then we solve (12) on the set of points x_j = -1 + j\Delta x, j = 0, \dots, 2N, with \Delta x = 1/N. In this case there is an explicit formula for the polynomial annihilation edge detection coefficients, independent of location x, computed as ([1])

c_j = \frac{m!}{\prod_{k=1, k \ne j}^{m+1} (j - k)\, \Delta x}, \qquad j = 1, \dots, m + 1. \qquad (15)

We can now define the polynomial annihilation (PA) transform matrix, L^m, as

L^m_{j,l} = \frac{c(j, l)}{q_m(x_l)}, \qquad 0 \le l \le 2N, \ 0 \le j < 2N, \qquad (16)

where

c(j, l) = \begin{cases} c_{\,j - l - \lfloor \frac{m}{2} \rfloor} & 0 < j - l - \lfloor \frac{m}{2} \rfloor + s(j, l) \le m + 1 \\ 0 & \text{otherwise} \end{cases}

and

s(j, l) = \begin{cases} l - \lfloor \frac{m}{2} \rfloor & l \le \lfloor \frac{m}{2} \rfloor \\ l + m - \lfloor \frac{m}{2} \rfloor - 2N & l + m - \lfloor \frac{m}{2} \rfloor > 2N \\ 0 & \text{otherwise.} \end{cases}

For example, not assuming periodicity, the banded matrix Lm for m = 4 can be written as

L^m = \frac{1}{3}
\begin{pmatrix}
 3 & -12 & 18 & -12 & 3 & & & \\
 -1 & 4 & -6 & 4 & -1 & & & \\
 1 & -4 & 6 & -4 & 1 & & & \\
 & 1 & -4 & 6 & -4 & 1 & & \\
 & & & \ddots & & & & \\
 & & 1 & -4 & 6 & -4 & 1 & \\
 & & & 1 & -4 & 6 & -4 & 1 \\
 & & & -3 & 12 & -18 & 12 & -3
\end{pmatrix}.

The PA transform produces a vector with small, nonzero values in the smooth regions of f and large values at the jump locations. Therefore, by minimizing ||L^m f||_1, we encourage a solution f that has sparse representation in the jump function domain, as given in (11). As was demonstrated in [27], using the PA transform in (2) reduces the Gibbs oscillations without smearing over the discontinuities or causing a staircasing effect.
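As a concrete illustration of (15)-(16), the sketch below (ours, with hypothetical function names) assembles a one-dimensional PA transform matrix on a uniform grid, using the right-half coefficient sum as the normalization q_m of (14) and a periodic wrap of the centered stencil; the paper instead adapts the stencils one-sidedly near non-periodic boundaries.

```python
import math
import numpy as np

def pa_coefficients(m, dx):
    """Uniform-grid PA edge detection coefficients of (15):
    c_j = m! / (prod_{k != j} (j - k) * dx), j = 1, ..., m + 1."""
    js = np.arange(1, m + 2)
    c = np.empty(m + 1)
    for i, j in enumerate(js):
        c[i] = math.factorial(m) / (np.prod([j - k for k in js if k != j]) * dx)
    return c

def pa_transform_matrix(m, n, dx):
    """Banded, periodic PA transform matrix L^m acting on n uniform grid values.
    Each row applies the centered stencil of (15), normalized by q_m, the sum of
    the coefficients attached to points on or to the right of the cell, as in (14)."""
    c = pa_coefficients(m, dx)
    q = c[(m + 1) // 2:].sum()              # normalization factor q_m
    left = (m + 1) // 2                     # left extent of the centered stencil
    L = np.zeros((n, n))
    for row in range(n):
        for s, cs in enumerate(c):
            L[row, (row - left + s) % n] = cs / q
    return L

# Applying L^m to samples of a piecewise smooth function: the result is small in
# smooth regions and close to the jump value near the discontinuity.
N = 64
x = -1 + np.arange(2 * N + 1) / N
f = np.where(x < 0, -1 - x, 1 - x)          # piecewise linear, jump of size 2 at x = 0
L3 = pa_transform_matrix(3, x.size, 1.0 / N)
print(np.abs(L3 @ f).max())                  # approximately the jump value 2
```

For m = 4 this construction reproduces the interior rows (1, -4, 6, -4, 1)/3 of the banded matrix shown above.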

Remark 1 When m = 1, applying the PA transform is equivalent to TV regularization (up to a normalization constant). As is evident in Figure 2, using TV as a minimizing constraint causes a "staircasing" effect, since it encourages a solution that minimizes the differences between the solution vector components. In this regard, the PA transform can be seen as high order, meaning that the accuracy away from the jump discontinuities is O(\Delta x^m).

As mentioned in the introduction, in [27] the MATLAB CVX package was used to implement (2) with (16). While suitable for one dimensional problems, it is not efficient for multiple dimensions. Below we describe how the Split Bregman Algorithm, [12], can be adapted to incorporate the PA transform, (16), into (2), and thus improve the overall efficiency.


3 The Split Bregman Algorithm for l1 regularization of Sparse Fourier Data

Bregman iteration, first developed to find extrema of convex functionals, [3], has been used in many applications where the optimization problem is of the form (1) or (2), [18,22,28]. The Split Bregman Algorithm, developed in [12], was shown to be equivalent to Bregman iteration. Its popularity is owed to the fact that it is very fast, typically having a computational cost on the order of an FFT, and also that the nonlinear steps involve only soft thresholding, while all other aspects of the algorithm involve solving invertible linear systems.

In recent years the Split Bregman Algorithm has been used for solving a broad class of l1 regularized optimization problems. In particular, the Split Bregman Algorithm was used to solve (2) for the two dimensional Fourier transform and termed the "sparse MRI data reconstruction problem" in [18,20,25].⁴ In this case we assume that f : R² → R is a periodic piecewise smooth function on [-1, 1]² for which we are given the (normalized) Fourier coefficients,

\hat{f}(k, l) = \frac{1}{4}\int_{-1}^{1}\int_{-1}^{1} f(x, y)\, e^{-i\pi(kx + ly)}\, dy\, dx, \qquad \text{for } k, l = -N, \dots, N. \qquad (17)

We wish to recover f from (17) at a finite set of uniform grid points, (x_i, y_j), for i, j = 0, \dots, 2N. The fidelity term in (2) is directly extended with F being the analogous two-dimensional Fourier transform matrix of (6). Although the polynomial annihilation edge detection is an inherently multi-dimensional method, [1], our results indicate that using a dimension by dimension construction, J_x and J_y for the two dimensional regularization term J, is more efficient for uniformly spaced data.

We now seek f = {f(xi, yj) : 0 ≤ i, j ≤ 2N} that solves the convex optimization problem

\min_f \left(J_x f + J_y f\right) \quad \text{such that} \quad \|F f - \hat{f}\|_2 \le \sigma. \qquad (18)

In the case where only partial information is given, the corresponding convex optimization problem we solve is

\min_f \left(J_x f + J_y f\right) \quad \text{such that} \quad \|M F f - \hat{f}\|_2 \le \sigma. \qquad (19)

Here the matrix M represents a 'row selector' matrix, which comprises a subset of the rows of an identity matrix, corresponding to the known set of Fourier samples. We note that the one dimensional version posed in (2) can also be similarly adapted. Fast algorithms were developed for (19) as a means for approximating (18) in [28] and [12]. It was first demonstrated in [28] that (19) could be solved using a Bregman iteration of the sequence of two unconstrained problems of the form

f^{k+1} = \min_f \|J_x f\|_1 + \|J_y f\|_1 + \frac{\mu}{2}\|M F f - \hat{f}^k\|_2^2, \qquad (20a)

\hat{f}^{k+1} = \hat{f}^k + \hat{f} - M F f^{k+1}, \qquad (20b)

where µ > 0 is an optimization parameter. The computational challenge comes in solving (20a). Using the Split Bregman methodology, a fast explicit solution was developed in [12] for the case where TV is used for the l1 regularization terms, that is, when J_x = \nabla_x and J_y = \nabla_y. We outline the basic steps of the Split Bregman Algorithm for the TV denoising model below and then demonstrate in Section 3.2 how it can be extended to the case where J_x and J_y are the polynomial annihilation transforms, resulting in a more accurate solution without additional computational cost. Since we do not explicitly motivate the algorithm, we refer interested readers to [12] for more general details.

4 Although designed for MRI data, it is applicable whenever data are sampled in the Fourier domain, especially when dimension reduction is desirable.


3.1 The Split Bregman Algorithm for TV Denoising

Following the framework in [12], for J_x = \nabla_x and J_y = \nabla_y, we make the replacements d_x \leftarrow \nabla_x f and d_y \leftarrow \nabla_y f. Using

\|\nabla v\|_1 = \sum_{i,j} \sqrt{|\nabla_x v_{i,j}|^2 + |\nabla_y v_{i,j}|^2},

we obtain as the augmented optimization of (20a):

\min_{f, d_x, d_y} \sum_{i,j} \sqrt{|d_{x,i,j}|^2 + |d_{y,i,j}|^2} + \frac{\lambda}{2}\|d_x - \nabla_x f - b_x\|_2^2 + \frac{\lambda}{2}\|d_y - \nabla_y f - b_y\|_2^2 + \frac{\mu}{2}\|M F f - \hat{f}\|_2^2. \qquad (21)

The variables b_x and b_y arise from the derivation of the Split Bregman Algorithm, [12], and their calculation is given in Algorithm 1. Solving (21) is accomplished in steps. When f is held fixed, the exact optimization of d_x and d_y can be calculated using the shrink operator, [26], as

d_x^{\mathrm{opt}} = \max(s - 1/\lambda, 0)\, \frac{\nabla_x f + b_x}{s} \quad \text{and} \quad d_y^{\mathrm{opt}} = \max(s - 1/\lambda, 0)\, \frac{\nabla_y f + b_y}{s}, \qquad (22)

where

s = \sqrt{|\nabla_x f + b_x|^2 + |\nabla_y f + b_y|^2}. \qquad (23)
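A compact sketch of this coupled shrink step, written in the notation of (22)-(23) (the small epsilon guarding division by zero is our addition), is:

```python
import numpy as np

def shrink2(gx, gy, lam, eps=1e-12):
    """Closed-form minimizers d_x, d_y of (21) for fixed f, i.e. the shrink
    operator of (22)-(23). gx, gy stand for grad_x f + b_x and grad_y f + b_y."""
    s = np.sqrt(np.abs(gx) ** 2 + np.abs(gy) ** 2)        # eq. (23)
    scale = np.maximum(s - 1.0 / lam, 0.0) / (s + eps)     # eq. (22), guarded at s = 0
    return scale * gx, scale * gy
```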

When dx and dy are held fixed, optimizing (21) reduces to the l2 minimization problem

\min_f \frac{\lambda}{2}\|d_x - \nabla_x f - b_x\|_2^2 + \frac{\lambda}{2}\|d_y - \nabla_y f - b_y\|_2^2 + \frac{\mu}{2}\|M F f - \hat{f}\|_2^2. \qquad (24)

Since the subproblem in (24) is differentiable, the optimal solution f can be found by differentiating with respect to f and setting the result equal to zero, arriving at

\left(\mu F^T M^T M F + \lambda \nabla_x^T \nabla_x + \lambda \nabla_y^T \nabla_y\right) f = \mathrm{rhs}, \qquad (25)

where

\mathrm{rhs} = \mu F^T M^T \hat{f} + \lambda \nabla_x^T (d_x - b_x) + \lambda \nabla_y^T (d_y - b_y). \qquad (26)

Using the identities \nabla^T \nabla = -\Delta and F^T = F^{-1} produces the inverse problem

F^{-1} K F f = \mathrm{rhs}, \qquad (27)

where K = \mu M^T M - \lambda F \Delta F^{-1}. Note that K is a diagonal operator. Therefore, the optimal solution f to (24) can be calculated at the cost of two Fourier transforms as

f^{\mathrm{opt}} = F^{-1} K^{-1} F (\mathrm{rhs}). \qquad (28)
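The f update (28) is then a pointwise division in the Fourier domain. A sketch of this step is given below; it assumes periodic, undivided forward differences for \nabla_x and \nabla_y (so that \nabla^T\nabla has the familiar 2 - 2cos symbol), a 0/1 sampling pattern `mask` for M^T M stored in FFT ordering, and numpy's FFT normalization in place of the matrix F.

```python
import numpy as np

def tv_f_update(rhs, mask, mu, lam):
    """Solve (27)-(28): f = F^{-1} K^{-1} F rhs, where K is diagonal in the
    Fourier domain. `mask` (the 0/1 pattern M^T M) and the eigenvalues of the
    discrete Laplacian are combined into the diagonal symbol of K."""
    ny, nx = rhs.shape
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)  # symbol of grad_x^T grad_x
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny)  # symbol of grad_y^T grad_y
    K = mu * mask + lam * (wy[:, None] + wx[None, :])
    K[K == 0] = 1.0                                            # unsampled zero-frequency guard
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / K))
```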

Incorporating the above calculations into (19) leads to a fast algorithm for the TV denoising problem, [12]:

Algorithm 1 The Split Bregman Algorithm for TV Denoising

Define Jx = ∇x and Jy = ∇y in (19).

Initialize k = 0, f^0 = F^{-1} M^T \hat{f}, and b^0_x = b^0_y = d^0_x = d^0_y = 0.

while ||M F f^k - \hat{f}||_2 > \sigma
    f^{k+1} = F^{-1} K^{-1} F\, \mathrm{rhs}^k
    d^{k+1}_x = \max(s^k - 1/\lambda, 0)\, (\nabla_x f^k + b^k_x)/s^k
    d^{k+1}_y = \max(s^k - 1/\lambda, 0)\, (\nabla_y f^k + b^k_y)/s^k
    b^{k+1}_x = b^k_x + (\nabla_x f^{k+1} - d^{k+1}_x)
    b^{k+1}_y = b^k_y + (\nabla_y f^{k+1} - d^{k+1}_y)
    \hat{f}^{k+1} = \hat{f}^k + \hat{f} - M F f^{k+1}
    k = k + 1
end

The iteration terms s^k and \mathrm{rhs}^k are provided in (23) and (26), where all variables are superscripted with k. The first five steps of Algorithm 1 can alternatively be iterated in a short inner loop before updating \hat{f}^{k+1}. We note that Algorithm 1 works for the one dimensional optimization problem given in (2) by using the one dimensional Fourier transform with J as the TV operator and eliminating the y components, b_y and d_y.
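Putting the pieces together, the following is a minimal numpy sketch of Algorithm 1, under the same assumptions as the snippets above (periodic undivided forward differences, a zero-filled coefficient array `fhat` with 0/1 sampling pattern `mask` in FFT ordering, and numpy's FFT normalization standing in for F); it illustrates the structure of the iteration and is not the authors' implementation.

```python
import numpy as np

def split_bregman_tv(fhat, mask, mu=1.0, lam=0.015, sigma=1e-6, max_iter=200):
    """Sketch of Algorithm 1 for the TV denoising problem (19) with J = grad."""
    ny, nx = mask.shape
    grad = lambda u, ax: np.roll(u, -1, axis=ax) - u            # forward difference
    gradT = lambda u, ax: np.roll(u, 1, axis=ax) - u            # its transpose
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny)
    K = mu * mask + lam * (wy[:, None] + wx[None, :])           # diagonal operator of (27)
    K[K == 0] = 1.0
    f = np.real(np.fft.ifft2(fhat))                             # f^0 = F^{-1} M^T fhat
    bx, by, dx, dy = (np.zeros_like(f) for _ in range(4))
    fhat_k = fhat.copy()                                        # Bregman-updated data
    for _ in range(max_iter):
        rhs = mu * np.real(np.fft.ifft2(fhat_k)) \
              + lam * (gradT(dx - bx, 1) + gradT(dy - by, 0))   # eq. (26)
        f = np.real(np.fft.ifft2(np.fft.fft2(rhs) / K))         # eq. (28)
        gx, gy = grad(f, 1) + bx, grad(f, 0) + by
        s = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
        shrink = np.maximum(s - 1.0 / lam, 0.0) / s             # eqs. (22)-(23)
        dx, dy = shrink * gx, shrink * gy
        bx, by = bx + grad(f, 1) - dx, by + grad(f, 0) - dy
        fhat_k = fhat_k + fhat - mask * np.fft.fft2(f)          # Bregman update (20b)
        if np.linalg.norm(mask * np.fft.fft2(f) - fhat) <= sigma:
            break
    return f
```

In this sketch the d and b updates use the freshly computed f, as in [12]; Algorithm 1 above lists the corresponding variables with superscript k.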

3.2 The Split Bregman Algorithm for PA l1 Regularization

As noted in the introduction, the main limitation in using the TV operator as an l1 regularization term is that it will generate a piecewise constant solution for the underlying signal or image. In [27] it was demonstrated that the TV operator is equivalent to the PA transform operator, (12), with m = 1, and that better accuracy may be achieved when m > 1, especially if the underlying signal or image has significant variation between discontinuities. We now demonstrate that solving (2) with the PA transform operator as the l1 regularization term can be made efficient via an extension of the Split Bregman Algorithm for TV denoising.

We begin by defining J_x := L^m_x and J_y := L^m_y in (19), where L^m_x and L^m_y are the respective directional PA transform operators, each defined in one dimension by (16). Bregman iteration of (19) is then given by the sequence of two unconstrained optimization problems of the form

f^{k+1} = \min_f \|L^m_x f\|_1 + \|L^m_y f\|_1 + \frac{\mu}{2}\|M F f - \hat{f}^k\|_2^2, \qquad (29a)

\hat{f}^{k+1} = \hat{f}^k + \hat{f} - M F f^{k+1}, \qquad (29b)

where µ > 0 is again an optimization parameter. Following the technique described in Section 3.1, we write the augmented problem corresponding to (29a) and (29b) as

\min_{f, d_x, d_y} \sum_{i,j} \sqrt{|d_{x,i,j}|^2 + |d_{y,i,j}|^2} + \frac{\lambda}{2}\|d_x - L^m_x f - b_x\|_2^2 + \frac{\lambda}{2}\|d_y - L^m_y f - b_y\|_2^2 + \frac{\mu}{2}\|M F f - \hat{f}\|_2^2. \qquad (30)

When f is held fixed, the exact optimization of d_x and d_y can be calculated using the shrink operator, [26], similar to the derivation of (22), as

d_x^{\mathrm{opt}} = \max(s - 1/\lambda, 0)\, \frac{L^m_x f + b_x}{s} \quad \text{and} \quad d_y^{\mathrm{opt}} = \max(s - 1/\lambda, 0)\, \frac{L^m_y f + b_y}{s}, \qquad (31)

where s is given by

s = \sqrt{|L^m_x f + b_x|^2 + |L^m_y f + b_y|^2}. \qquad (32)

When dx and dy are held fixed, (30) reduces to the l2 minimization problem

\min_f \frac{\lambda}{2}\|d_x - L^m_x f - b_x\|_2^2 + \frac{\lambda}{2}\|d_y - L^m_y f - b_y\|_2^2 + \frac{\mu}{2}\|M F f - \hat{f}\|_2^2. \qquad (33)

Once again, since the subproblem in (33) is differentiable, the optimal solution f can be found by differentiating with respect to f and setting the result equal to zero, arriving at

\left(\mu F^T M^T M F + \lambda (L^m_x)^T L^m_x + \lambda (L^m_y)^T L^m_y\right) f = \mathrm{rhs}, \qquad (34)

where

\mathrm{rhs} = \mu F^T M^T \hat{f} + \lambda (L^m_x)^T (d_x - b_x) + \lambda (L^m_y)^T (d_y - b_y). \qquad (35)


Of significant importance here is that

(L^m)^T L^m = (L^m_x)^T L^m_x + (L^m_y)^T L^m_y = (-1)^m \left(\frac{\partial^{2m}}{\partial x^{2m}} + \frac{\partial^{2m}}{\partial y^{2m}}\right)

is a diagonal operator in the Fourier domain. Hence the inverse problem is solved by

\left(F^{-1} K_{LM} F\right) f = \mathrm{rhs}. \qquad (36)

Note that K_{LM} = \mu M^T M + \lambda F (L^m)^T L^m F^{-1} is a diagonal operator. Therefore, the optimal solution f to (33) can be calculated at the cost of two Fourier transforms as

f^{\mathrm{opt}} = F^{-1} K_{LM}^{-1} F\, \mathrm{rhs}. \qquad (37)

Thus, just as in the case for the TV denoising model, solving (19) with PA l1 regularization, (12), can be made numerically efficient via the Split Bregman Algorithm:

Algorithm 2 The Split Bregman Algorithm for PA l1 Regularization

Define Jx = Lmx and Jy = Lmy in (19).

Initialize k = 0, f^0 = F^{-1} M^T \hat{f}, and b^0_x = b^0_y = d^0_x = d^0_y = 0.

while ||M F f^k - \hat{f}||_2 > \sigma
    f^{k+1} = F^{-1} K_{LM}^{-1} F\, \mathrm{rhs}^k
    d^{k+1}_x = \max(s^k - 1/\lambda, 0)\, (L^m_x f^k + b^k_x)/s^k
    d^{k+1}_y = \max(s^k - 1/\lambda, 0)\, (L^m_y f^k + b^k_y)/s^k
    b^{k+1}_x = b^k_x + (L^m_x f^{k+1} - d^{k+1}_x)
    b^{k+1}_y = b^k_y + (L^m_y f^{k+1} - d^{k+1}_y)
    \hat{f}^{k+1} = \hat{f}^k + \hat{f} - M F f^{k+1}
    k = k + 1
end

The iteration terms s^k and \mathrm{rhs}^k are provided in (32) and (35), where all variables are superscripted with k. The first five steps of Algorithm 2 can alternatively be iterated in a short inner loop before updating \hat{f}^{k+1}. As before, Algorithm 2 works for the one dimensional optimization problem (2) by using the one dimensional Fourier transform with J = L^m and eliminating the y components b_y and d_y. Finally, we note that Algorithm 2 is equivalent to Algorithm 1 when m = 1.
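The only pieces that change relative to the TV sketch above are the application of L^m_x, L^m_y in place of \nabla_x, \nabla_y and the diagonal operator K_{LM}. Since the periodic uniform-grid L^m is circulant, the Fourier symbol of (L^m)^T L^m can simply be read off as the FFT of its first column, as in this sketch (our construction, using the pa_transform_matrix helper from Section 2.2):

```python
import numpy as np

def circulant_symbol(Lm):
    """Diagonal (Fourier) symbol of the circulant operator (L^m)^T L^m,
    computed as the FFT of its first column."""
    col = (Lm.T @ Lm)[:, 0]
    return np.real(np.fft.fft(col))          # symmetric PSD operator: real, nonnegative

def pa_K_operator(mask, Lmx, Lmy, mu, lam):
    """Diagonal operator K_LM of (36)-(37):
    K_LM = mu * M^T M + lam * symbol((L^m_x)^T L^m_x + (L^m_y)^T L^m_y)."""
    wx = circulant_symbol(Lmx)               # acts along x (columns)
    wy = circulant_symbol(Lmy)               # acts along y (rows)
    K = mu * mask + lam * (wy[:, None] + wx[None, :])
    K[K == 0] = 1.0                          # guard for unsampled zero modes
    return K
```

With K replaced by K_{LM} and the finite differences replaced by multiplications with L^m_x (applied along x) and L^m_y (applied along y), the Split Bregman loop of the TV sketch carries over unchanged, in agreement with the cost claims above.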

4 Numerical Results

We are now ready to demonstrate the Split Bregman Algorithm for PA l1 regularization, given by Algorithm 2, for solving the one and two dimensional optimization problems. We will assume that we are given a finite number of Fourier coefficients for the underlying signal or image. The wave numbers of the chosen coefficients were drawn from a normal distribution with standard deviation 2N/6 (rounded to the nearest integer), where 2N + 1 is the number of recovered function values.
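Under our reading of this sampling description, a one-dimensional 0/1 sampling pattern (the diagonal of M^T M) can be generated as follows; duplicates are redrawn until the requested number of distinct modes is reached, which is our interpretation.

```python
import numpy as np

def gaussian_sampling_mask(N, n_samples, seed=0):
    """0/1 mask over the wave numbers k = -N, ..., N, with wave numbers drawn
    from a normal distribution of standard deviation 2N/6, rounded to the
    nearest integer. Index the result by k + N (use np.fft.ifftshift for
    FFT-ordered sketches)."""
    rng = np.random.default_rng(seed)
    chosen = set()
    while len(chosen) < n_samples:
        k = int(np.rint(rng.normal(0.0, 2 * N / 6)))
        if -N <= k <= N:
            chosen.add(k)
    mask = np.zeros(2 * N + 1)
    mask[[k + N for k in chosen]] = 1.0
    return mask
```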

For our one dimensional experiments, we consider the two test functions defined on [−1, 1]:

Example 1

f_a(x) = \begin{cases} -1 - x & \text{if } -1 \le x < 0 \\ 1 - x & \text{otherwise}; \end{cases} \qquad f_b(x) = \begin{cases} \cos\frac{\pi x}{2} & \text{if } -1 \le x < -\frac{1}{2} \\ \cos\frac{3\pi x}{2} & \text{if } -\frac{1}{2} \le x < \frac{1}{2} \\ \cos\frac{7\pi x}{2} & \text{if } \frac{1}{2} \le x \le 1. \end{cases}
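For reference, these two test functions can be coded directly (a small convenience used by the sketches above):

```python
import numpy as np

def f_a(x):
    """Piecewise linear test function of Example 1 (jump of size 2 at x = 0)."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, -1.0 - x, 1.0 - x)

def f_b(x):
    """Piecewise trigonometric test function of Example 1."""
    x = np.asarray(x, dtype=float)
    out = np.cos(np.pi * x / 2.0)                              # -1 <= x < -1/2
    out = np.where(x >= -0.5, np.cos(3.0 * np.pi * x / 2.0), out)
    out = np.where(x >= 0.5, np.cos(7.0 * np.pi * x / 2.0), out)
    return out
```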


Fig. 1 (a) f_a and (d) f_b plotted on 2N + 1 = 129 uniform gridpoints; (b) and (e) Fourier partial sum approximation, (4), for f_a and f_b respectively; (c) and (f) filtered Fourier partial sum approximation, (7), with filter \sigma_k = \exp(-\alpha(k/N)^{2p}), \alpha = 32 and 2p = 4, for f_a and f_b respectively.

Fig. 2 Results using Algorithm 2 with 2N = 128, µ = 1 in (29a), \lambda = .015, and m = 1, 2, and 3. (a)-(c) PA with m = 1, 2, 3 for f_a; (d)-(f) PA with m = 1, 2, 3 for f_b.

Figures 1(a) and (d) show f_a and f_b plotted on a uniform grid of 2N + 1 = 129 points in [-1, 1]. The corresponding Fourier and filtered Fourier partial sum approximations, (4) and (7), are illustrated in Figure 1(b) and (e), and in Figure 1(c) and (f), respectively. In this reconstruction we sampled 50% of the original 2N + 1 = 129 Fourier coefficients. It is not surprising that having a sparse sampling of Fourier data severely limits the reconstruction quality when directly applying either the standard or filtered Fourier approximation. The convex optimization framework is clearly better suited in this case.

As illustrated in Figure 2, the approximation is readily improved by employing the PA transform, (16), for the l1 regularization term in (2). Algorithm 2 was used in all cases, with the number of iterations ranging from 30 to 40.


Fig. 3 Log of the pointwise error, (38), for m = 1, 2, and 3. (a) f_a and (b) f_b.

Figure 3 illustrates the pointwise error, given by

\mathrm{Err}(f(j)) = |f_j - f(x_j)|, \qquad j = 0, \dots, 2N, \qquad (38)

where f_j are the components of our solution and f(x_j) are the corresponding values of the underlying function. The smallest error near the discontinuities occurs when m = 2. It is furthermore evident that using m = 2 is the best choice for f_a, which is piecewise linear. As demonstrated in Figure 3, the pointwise error decreases as m increases in regions away from the discontinuity, which is consistent with earlier discussions on the effect of m on reconstruction (see also [27]). Figure 4 shows the pointwise error using Algorithm 2 for f_a and f_b with fixed parameters µ = 1 in (29a), \lambda = .015, and m = 3.

Fig. 4 Log of the pointwise error, (38), for increasing N. (a) f_a and (b) f_b.

Here 2N = 32, 64, 128, 256, and 50% of the coefficients were used. It is evident that as N increases, the resolution at the jump location becomes sharper while the error generally improves in smooth regions.

Phase transition diagrams are effective at showing when an undersampled reconstruction is likely to be accurate, [8,9,19]. Figure 5 demonstrates how using the PA transform as the l1 regularization term increases the likelihood of recovering the correct values of piecewise polynomials with multiple jump discontinuities as the undersampling rates are changed. Each target function was generated using the following random process: first, the locations of the jumps were drawn uniformly on the interval [-1, 1]; then, the polynomial pieces were generated by interpolating random values drawn from a standard normal distribution. All trials were carried out without noise and solutions were obtained by solving (1). Each point in these diagrams corresponds to the fraction of successful recoveries in 20 trials. The sparsity factor (number of jumps / number of samples) varies along the y-axis, while the undersampling factor (number of samples / number of recovered values) varies along the x-axis. Not surprisingly, TV works best when the target functions are piecewise constants. For piecewise linear and quadratic polynomials, this is no longer the case.


Fig. 5 Phase transition diagrams for the reconstruction of piecewise polynomials using TV (m = 1) and PA with m = 2 and 3. The colormap shows the fraction of successful recoveries in 20 trials. A trial is deemed successful when the relative l2 error is below 10^{-2}. Top row: piecewise constant functions. Middle row: piecewise linear functions. Bottom row: piecewise quadratic polynomials.

The bottom two rows of Figure 5 show that TV is likely to fail if the data are undersampled by a factor of 0.4 or less, regardless of the number of jumps. As expected, higher values of m are more appropriate when the polynomial degree of the piecewise target functions is increased. For piecewise linear polynomials, the algorithm works best when m = 2, which is consistent with our other findings; that is, there are fewer oscillations near the edges for m = 2, and since the underlying function is piecewise linear, m = 2 is sufficient for recovery.
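A sketch of the random target generation described above, under our reading (jump locations uniform on [-1, 1]; each polynomial piece of degree d interpolating d + 1 standard normal values at points spread over the piece), is:

```python
import numpy as np

def random_piecewise_polynomial(n_jumps, degree, seed=0):
    """Return a callable piecewise polynomial on [-1, 1] with n_jumps interior
    jumps; each piece interpolates degree + 1 standard normal values (our
    reading of the construction behind the phase transition diagrams)."""
    rng = np.random.default_rng(seed)
    breaks = np.concatenate(([-1.0], np.sort(rng.uniform(-1.0, 1.0, n_jumps)), [1.0]))
    pieces = []
    for a, b in zip(breaks[:-1], breaks[1:]):
        nodes = np.linspace(a, b, degree + 1)              # degree + 1 points per piece
        pieces.append(np.polyfit(nodes, rng.standard_normal(degree + 1), degree))
    def f(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        idx = np.clip(np.searchsorted(breaks, x, side="right") - 1, 0, len(pieces) - 1)
        return np.array([np.polyval(pieces[i], xi) for i, xi in zip(idx, x)])
    return f
```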

Although we did not formally analyze the parameter sensitivity, our experiments indicate that the results are generally robust with respect to parameter selection. In particular, Figure 6(a) and (b) show the mean l1 error for ten random trials when 50% of the coefficients are used for calculating f_a and f_b using m = 1, 2, 3. The best results occur near \lambda = .015. Figure 6(c) shows the l1 error for f_c over a range of \lambda, and Figure 6(d) shows the same result with 5 dB SNR.


Fig. 6 log ||Err||_{l1}, given by (38), for various \lambda used in Algorithm 2. Here 2N = 64. (a) Err(f_a); (b) Err(f_b); (c) Err(f_c); (d) Err(f_c) with 5 dB SNR.

We observe that while \lambda is somewhat sensitive to noise, the transitions are smooth and they do not vary greatly from the cases where no noise is present. Future work will include a more detailed investigation into optimal parameter selection.

To illustrate Algorithm 2 in two dimensions, we consider the following test function defined on [-1, 1]²:

Example 2

f_c(x, y) = \begin{cases} \sin\left(\frac{\pi}{2}\sqrt{x^2 + y^2}\right) & \text{if } 0 < x, y < \frac{3}{4} \\ g(x, y) & \text{otherwise}, \end{cases}

with

g(x, y) = \begin{cases} \cos\left(\frac{3\pi}{2}\sqrt{x^2 + y^2}\right) & \text{if } \sqrt{x^2 + y^2} \le \frac{1}{2} \\ \cos\left(\frac{\pi}{2}\sqrt{x^2 + y^2}\right) & \text{if } \sqrt{x^2 + y^2} > \frac{1}{2}. \end{cases}

The Fourier data for our numerical experiments were chosen either randomly from a Gaussian distribution, shown in Figure 7(a), or from a tomographic sampling pattern, shown in Figure 7(b). In each case we used 50% of the (2N + 1)² Fourier coefficients of the underlying image. To demonstrate the accuracy of reconstruction in smooth regions, we also calculated the l2 error after removing all values within two pixels of the internal jump discontinuities. Figure 7(c) shows the mask subtracted out of the l2 error calculation.


Fig. 7 (a) Gaussian sampling of the Fourier data; (b) tomographic sampling of the Fourier data; (c) mask of f_c(x, y) used to calculate the l2 error.

We compared our results with those generated by Algorithm 1 as well as those constructed using the total generalized variation shearlet (TGVSH) based image reconstruction algorithm, [16]. In the latter case, we used the publicly available code in [17] with its default parameters.⁵ Figure 8 displays the various methods for reconstructing f_c(x, y) in Example 2.

Figure 9 shows the cross section error comparison of the various methods.

5 We tried a variety of parameters in our experiments to ensure that our comparisons were fair. As it turned out, the default parameters yielded the best results.


Fig. 8 Reconstruction of f_c(x, y): (a) TV; (b) TGVSH; (c) PA (m = 2); (d) PA (m = 3). The parameters for Algorithm 2 were µ = 1 in (29a) and \lambda = 10^{-1}.

Fig. 9 Cross section error of f_c(x, y) at x = .155 for the Fourier, TV, TGVSH, and PA (m = 2 and m = 3) reconstructions.

Figure 10 compares the results for reconstructing f_c(x, y) using the same techniques for the case where the Fourier data are sampled using the tomographic pattern with noise level 10 dB SNR. Similarly, the reconstruction results in Figure 11 display the case when the data are chosen randomly from the Gaussian sampling pattern. Here the noise level is 5 dB SNR. We note that our results were not sensitive to the particular sampling pattern chosen.


Fig. 10 Reconstruction of f_c(x, y) given noisy Fourier data (10 dB SNR) and using tomographic sampling: (a) TV; (b) TGVSH; (c) PA (m = 2); (d) PA (m = 3). Parameters for Algorithm 2 were µ = 1 in (29a) and \lambda = 100.

Table 1 compares the l2 errors for reconstructing f_c when 50% of the 129 × 129 Fourier coefficients, selected randomly from a Gaussian distribution, are used. Observe that Algorithm 2 is particularly effective when the Fourier samples are noisy.

Table 1 l2 errors for reconstructing f_c with various methods when using 50% of the 129 × 129 Fourier coefficients selected randomly from a Gaussian distribution. l_2^1 is the standard l2 error; l_2^2 is the l2 error calculated two pixels away from the internal edges. The parameters chosen are the same as in Figures 8 and 11 respectively.

method    noise        l_2^1    l_2^2
TV        None         3.33     0.36
TGVSH     None         3.81     0.96
m = 2     None         3.35     0.07
m = 3     None         3.43     0.05
TV        10 dB SNR    5.13     3.64
TGVSH     10 dB SNR    5.39     3.78
m = 2     10 dB SNR    4.76     2.71
m = 3     10 dB SNR    4.71     1.80
TV        5 dB SNR     6.58     5.42
TGVSH     5 dB SNR     6.49     5.27
m = 2     5 dB SNR     5.45     4.03
m = 3     5 dB SNR     4.74     2.65


Fig. 11 Reconstruction of f_c(x, y) given noisy Fourier data (5 dB SNR): (a) TV; (b) TGVSH; (c) PA (m = 2); (d) PA (m = 3). Parameters for Algorithm 2 were µ = 1 in (29a) and \lambda = 100.

Finally, Figure 12 compares these same algorithms when applied to the synthetic aperture radar (SAR) image of a golf course, [11], using µ = 1 in (29a) and \lambda = 100 in Algorithm 2. In this case we computed the Fourier coefficients from the given image data. The 'Ground Truth' image is down-sampled from the original image, while the Fourier coefficients were calculated via the trapezoidal rule from the original image. As illustrated in Figure 12(d), using the PA transform for l1 regularization better captures the underlying features of the image.

5 Conclusions and Future Considerations

This paper demonstrates how the Split Bregman Algorithm can be adapted to use the PA transform as the l1 regularization term in solving the "denoising" model, (2), when the data are acquired as Fourier coefficients. The method is especially effective when the data are undersampled and noisy, as demonstrated by the examples in Section 4. In particular, the PA transform demonstrates improved accuracy away from the boundaries as compared to other methods. The phase transition diagrams illustrate that using the PA transform yields a greater likelihood of success in undersampled cases when the underlying image is not piecewise constant. Moreover, Table 1 verifies that the PA transform is particularly effective as the SNR is reduced. Our adaptation of the Split Bregman Algorithm means that the PA transform is just as efficient as using TV. The optimization parameters, although not fully tested, appear to be robust. This will be the subject of future investigations.

A downside to using the PA transform for l1 regularization is that oscillations start to form near the discontinuities as the order is increased. Preliminary investigations suggest that it is possible to combine the results of low order (TV) and higher order PA transform l1 regularization at very little additional cost.


Fig. 12 Reconstruction of a synthetic aperture radar (SAR) image of a golf course, [11]: (a) original SAR image; (b) TV, relative l2 error = 1.35e-1; (c) TGVSH, relative l2 error = 1.28e-1; (d) PA m = 2, relative l2 error = 9.67e-2; (e) PA m = 3, relative l2 error = 9.08e-2. We calculated 601 × 601 Fourier coefficients and added 5 dB SNR. Algorithm 2 was applied on 50% of the Fourier coefficients randomly selected from a Gaussian distribution. Parameter values used were µ = 1 in (29a) and \lambda = 100.

Specifically, both algorithms can be run simultaneously with a map of the internal edges calculated as a byproduct. Then, as a final step, the solution would use the TV results near the edges and the higher order PA transform in smooth regions. We note that errors made in calculating the internal edge map will only result in the downgrading of the local accuracy to the less accurate of the two approximations. Moreover, a tolerance threshold can be applied depending on the SNR, undersampling, and any other prior information. These ideas will be explored in future investigations.

Acknowledgments

This work is supported in part by grants NSF-DMS 1216559 and AFOSR FA9550-12-1-0393. The submitted manuscript is based upon work, authored in part by contractors [UT-Battelle LLC, manager of Oak Ridge National Laboratory (ORNL)], and supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes.

References

1. Archibald, R., Gelb, A., and Yoon, J. Polynomial fitting for edge detection in irregularly sampled signals and images. SIAM J. Numer. Anal. 43, 1 (2005), 259-279.
2. Bredies, K., Kunisch, K., and Pock, T. Total generalized variation. SIAM J. Imaging Sci. 3, 3 (2010), 492-526.
3. Bregman, L. The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex optimization. USSR Comput. Math. Math. Phys. 7 (1967), 200-217.
4. Candes, E. J., and Romberg, J. Signal recovery from random projections. In Proc. SPIE Comput. Imaging III (2005), vol. 5674, pp. 76-186.
5. Candes, E. J., Romberg, J., and Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52 (2006), 489-509.
6. Donoho, D. Compressed sensing. IEEE Trans. Inform. Theory 52 (2006), 1289-1306.
7. Donoho, D. For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 59, 6 (2006), 797-829.
8. Donoho, D., and Tanner, J. Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 367, 1906 (2009), 4273-4293.
9. Donoho, D. L., Maleki, A., and Montanari, A. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. 106, 45 (2009), 18914-18919.
10. Durand, S., and Froment, J. Reconstruction of wavelet coefficients using total variation minimization. SIAM J. Sci. Comput. 24, 5 (2003), 1754-1767.
11. Ellsworth, M., and Thomas, C. A fast algorithm for image deblurring with total variation regularization. Unmanned Tech Solutions 4 (2014).
12. Goldstein, T., and Osher, S. The split Bregman method for l1-regularized problems. SIAM J. Imaging Sci. 2, 2 (2009), 323-343.
13. Gottlieb, D., and Orszag, S. A. Numerical Analysis of Spectral Methods: Theory and Applications. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1977. CBMS-NSF Regional Conference Series in Applied Mathematics, No. 26.
14. Grant, M., and Boyd, S. Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control, V. Blondel, S. Boyd, and H. Kimura, Eds., Lecture Notes in Control and Information Sciences. Springer-Verlag Limited, 2008, pp. 95-110.
15. Grant, M., and Boyd, S. CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx, Mar. 2014.
16. Guo, W., Qin, J., and Yin, W. A new detail-preserving regularization scheme. SIAM J. Imaging Sci. 7, 2 (2014), 1309-1334.
17. Guo, W., Qin, J., and Yin, W. Matlab scripts for TGV shearlet based image reconstruction algorithm. http://www.math.ucla.edu/~qinjingonly/TGVSHCS/webpage.html, Feb. 2015.
18. He, L., Chang, T.-C., and Osher, S. MR image reconstruction from sparse radial samples by using iterative refinement procedures. In Proc. 13th Annu. Meet. ISMRM (2006), p. 696.
19. Krzakala, F., Mezard, M., Sausset, F., Sun, Y., and Zdeborova, L. Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices. J. Stat. Mech. Theory Exp. 2012, 08 (2012), P08009.
20. Lustig, M., Donoho, D., and Pauly, J. M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 58, 6 (2007), 1182-1195.
21. Moulin, P. A wavelet regularization method for diffuse radar-target imaging and speckle-noise reduction. J. Math. Imaging Vis. 3, 1 (1993), 123-134.
22. Osher, S., Burger, M., Goldfarb, D., Xu, J., and Yin, W. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4, 2 (2005), 460-489.
23. Rudin, L., Osher, S., and Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D 60 (1992), 259-268.
24. Schiavazzi, D., Doostan, A., and Iaccarino, G. Sparse multiresolution stochastic approximation for uncertainty quantification. Recent Adv. Sci. Comput. Appl. 586 (2013), 295.
25. Trzasko, J., Manduca, A., and Borisch, E. Sparse MRI reconstruction via multiscale l0-continuation. In Stat. Signal Process. 2007 (SSP '07), IEEE/SP 14th Workshop (Aug 2007), pp. 176-180.
26. Wang, Y., Yin, W., and Zhang, Y. A fast algorithm for image deblurring with total variation regularization. CAAM Tech. Reports (2007).
27. Wasserman, G., Archibald, R., and Gelb, A. Image reconstruction from Fourier data using sparsity of edges. J. Sci. Comput., to appear (2015).
28. Yin, W., Osher, S., Goldfarb, D., and Darbon, J. Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1 (2008), 143-168.