Optimal Quantization for Compressive Sensing under Message Passing Reconstruction

Ulugbek Kamilov†,‡, Vivek K. Goyal†, and Sundeep Rangan∗

†Research Laboratory of Electronics, Massachusetts Institute of Technology
‡École Polytechnique Fédérale de Lausanne
∗Polytechnic Institute of New York University
[email protected], [email protected], [email protected]

Abstract—We consider the optimal quantization of compressive sensing measurements along with estimation from quantized samples using generalized approximate message passing (GAMP). GAMP is an iterative reconstruction scheme inspired by the belief propagation algorithm on bipartite graphs which generalizes approximate message passing (AMP) for arbitrary measurement channels. Its asymptotic error performance can be accurately predicted and tracked through the state evolution formalism. We utilize these results to design mean-square optimal scalar quantizers for GAMP signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers.

I. INTRODUCTION

By exploiting signal sparsity and smart reconstruction schemes, compressive sensing (CS) [1], [2] can enable signal acquisition with fewer measurements than traditional sampling. In CS, an n-dimensional signal x is measured through m random linear measurements. Although the signal may be undersampled (m < n), it may be possible to recover x assuming some sparsity structure.

So far, most of the CS literature has considered signal recovery directly from linear measurements. However, in many practical applications, measurements have to be discretized to a finite number of bits. The effect of such quantized measurements on the performance of CS reconstruction has been studied in [3], [4]. In [5]–[7] the authors adapt CS reconstruction algorithms to mitigate quantization effects. In [8], high-resolution functional scalar quantization theory was used to design quantizers for lasso reconstruction [9].

The contribution of this paper to the quantized CS problem is twofold. First, for quantized measurements, we propose reconstruction algorithms based on Gaussian approximations of belief propagation (BP). BP is a graphical-model-based estimation algorithm widely used in machine learning and channel coding [10], [11] that has also received significant recent attention in compressed sensing [12]. Although exact implementation of BP for dense measurement matrices is generally computationally difficult, Gaussian approximations of BP have been effective in a range of applications [13]–[18]. We consider a recently developed Gaussian-approximated BP algorithm, called generalized approximate message passing (GAMP) [16], [19], that extends earlier methods [15], [18] to nonlinear output channels. We show that the GAMP method is computationally simple and, with quantized measurements, provides significantly improved performance over traditional CS reconstruction based on convex relaxations.

Our second contribution concerns the quantizer design. With linear reconstruction and mean-squared error distortion, the optimal quantizer simply minimizes the mean squared error (MSE) of the transform outputs. Thus, the quantizer can be optimized independently of the reconstruction method. However, when the quantizer outputs are used as an input to a nonlinear estimation algorithm, minimizing the MSE between quantizer input and output is not necessarily equivalent to minimizing the MSE of the final reconstruction [20]. To optimize the quantizer for the GAMP algorithm, we use the fact that the MSE under large random transforms can be predicted accurately from a set of simple state evolution (SE) equations [19], [21]. Then, by modeling the quantizer as a part of the measurement channel, we use the SE formalism to optimize the quantizer to asymptotically minimize the distortion after reconstruction by GAMP.

A longer document discusses both undersampling and oversampling and both regular and non-regular quantization [22]. Code for GAMP is available online [23].

This material is based upon work supported by the National Science Foundation under Grant No. 0729069 and by the DARPA InPho program through the US Army Research Office award W911-NF-10-1-0404.

II. NOTATION

We use two sets of indices, with i, j ∈ {1, . . . , n} denoting signal components and a, b ∈ {1, . . . , m} denoting measurement components. Boldfaced variables x, y represent random quantities. The (a, i) entry of the matrix A is denoted by A_ai. The Gaussian probability density function with mean µ and variance σ² is denoted by φ(· ; µ, σ²).

III. BACKGROUND

A. Compressive Sensing

In a noiseless CS setting the signal x ∈ R^n is acquired via m < n linear measurements of the type

z = Ax,  (1)

where A ∈ R^{m×n} is the measurement matrix. The objective is to recover x from (z, A). Although the system of equations formed is underdetermined, the signal is still recoverable if some favorable conditions on x and A are satisfied and exploited. Generally, in CS the common assumption is that the signal is exactly or approximately sparse in some orthonormal basis Ψ, i.e., there is a vector u = Ψ⁻¹x ∈ R^n with most of its elements equal or close to zero. Additionally, for certain guarantees on the recoverability of the signal to hold, the matrix A must satisfy the restricted isometry property (RIP) [24]. Some families of random matrices, like appropriately-dimensioned matrices with i.i.d. Gaussian elements, have been demonstrated to satisfy the RIP with high probability.

A common method for recovering the signal from the measurements is basis pursuit. Although it is possible to solve basis pursuit in polynomial time by casting it as a linear program, its complexity has motivated researchers to look for even cheaper alternatives like the numerous recently-proposed iterative methods [12], [16], [17], [25], [26]. Moreover, in real applications a CS reconstruction scheme must be able to mitigate imperfect measurements, due to noise or limited precision [3], [5], [6].

B. Scalar Quantization

A quantizer is a function that discretizes its input by performing a mapping from a continuous set to some discrete set. More specifically, consider a K-point regular scalar quantizer Q, defined by its output levels C = {c_i; i = 1, 2, . . . , K}, decision boundaries {(b_{i−1}, b_i) ⊂ R; i = 1, 2, . . . , K}, and a mapping c_i = Q(s) when s ∈ [b_{i−1}, b_i) [27]. Additionally, define the inverse image of the output level c_i under Q as the cell Q⁻¹(c_i) = [b_{i−1}, b_i). For i = 1, if b_0 = −∞, we replace the closed interval [b_0, b_1) by the open interval (b_0, b_1).

Typically, quantizers are optimized by selecting decision boundaries and output levels in order to minimize the distortion between the random vector s ∈ R^m and its quantized representation ŝ = Q(s). For example, for a given vector s and the MSE distortion metric, optimization is performed by solving

Q∗ = arg min_Q E[ ‖s − Q(s)‖² ],  (2)

where minimization is done over all K-level regular scalar quantizers. One standard way of optimizing Q is via the Lloyd algorithm, which iteratively updates the decision boundaries and output levels by applying necessary conditions for quantizer optimality [27].
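To make the Lloyd iteration concrete, the following is a minimal sketch for problem (2), assuming the scalar source is represented by Monte Carlo samples; the function `lloyd` and the sample-based setup are our own illustration, not part of the paper.

```python
# Minimal Lloyd iteration for the MSE problem (2), on source samples.
import numpy as np

def lloyd(s, K, iters=100):
    # initialize output levels at sample quantiles
    c = np.quantile(s, (np.arange(K) + 0.5) / K)
    for _ in range(iters):
        b = (c[:-1] + c[1:]) / 2           # boundaries: midpoints of levels
        idx = np.digitize(s, b)            # cell index of each sample
        # output levels: centroids (conditional means) of each cell
        c = np.array([s[idx == k].mean() if np.any(idx == k) else c[k]
                      for k in range(K)])
    return b, c

rng = np.random.default_rng(0)
boundaries, levels = lloyd(rng.standard_normal(100_000), K=8)
```

Each pass alternates the two necessary optimality conditions from [27]: nearest-neighbor decision boundaries and centroid output levels.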

However, for the CS framework, finding the quantizer that minimizes the MSE between s and ŝ is not necessarily equivalent to minimizing the MSE between the sparse vector x and its CS reconstruction x̂ from the quantized measurements [8], [20]. This is due to the nonlinear effect added by any particular CS reconstruction function. Hence, instead of solving (2), it is more interesting to solve

Q∗ = arg min_Q E[ ‖x − x̂‖² ],  (3)

where minimization is performed over all K-level regular scalar quantizers and x̂ is obtained through a CS reconstruction method like basis pursuit or GAMP. This is the approach taken in this work.

[Fig. 1: Compressive sensing setup with quantization of noisy measurements s. The signal x ∈ R^n is measured as z = Ax ∈ R^m, perturbed by noise η ∈ R^m to give s, quantized by Q(·) to give y, and reconstructed by GAMP to give x̂ ∈ R^n. The vector z denotes the noiseless random measurements.]

C. Message Passing Algorithms

Consider the problem of estimating a random vector x ∈ R^n from noisy measurements y ∈ R^m, where the noise is described by a measurement channel p_{y|z}(y_a | z_a), which acts identically on each measurement z_a of the vector z obtained via (1). Moreover, suppose that the elements of the vector x are distributed i.i.d. according to p_x(x_i). Then we can construct the following conditional probability distribution over the random vector x given the measurements y:

p_{x|y}(x | y) = (1/Z) ∏_{i=1}^n p_x(x_i) ∏_{a=1}^m p_{y|z}(y_a | z_a),  (4)

where Z is the normalization constant and z_a = (Ax)_a. By marginalizing this distribution it is possible to estimate each x_i. Although direct marginalization of p_{x|y}(x | y) is computationally intractable in practice, we approximate the marginals through BP [12], [16], [17]. BP is an iterative algorithm commonly used for decoding of LDPC codes [11]. We apply BP by constructing a bipartite factor graph G = (V, F, E) from (4) and passing the following messages along the edges E of the graph:

p^{t+1}_{i→a}(x_i) ∝ p_x(x_i) ∏_{b≠a} p^t_{b→i}(x_i),  (5)

p^t_{a→i}(x_i) ∝ ∫ p_{y|z}(y_a | z_a) ∏_{j≠i} p^t_{j→a}(x_j) dx,  (6)

where ∝ means that the distribution is to be normalized so that it has unit integral, and integration is over all the elements of x except x_i. We refer to the messages {p_{i→a}}_{(i,a)∈E} as variable updates and to the messages {p_{a→i}}_{(i,a)∈E} as measurement updates. We initialize BP by setting p^0_{i→a}(x_i) = p_x(x_i).
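Since the updates (5)–(6) are abstract, the following toy sketch evaluates them by brute-force enumeration on a discretized prior. This is only feasible for tiny n, since the integration in (6) is exponential in the signal dimension, which is exactly why the Gaussian approximations discussed below matter. All names, the AWGN channel, and the small problem size are our own illustration.

```python
# Toy evaluation of the BP updates (5)-(6) by brute-force enumeration.
import itertools
import numpy as np

n, m = 3, 2
grid = np.linspace(-2.0, 2.0, 21)               # discretized support of x_i
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n)) / np.sqrt(m)
p_x = np.exp(-0.5 * grid**2 / 0.5)              # any prior p_x on the grid
p_x /= p_x.sum()
y, sigma2 = np.array([0.3, -0.1]), 0.1

def p_y_given_z(ya, za):                        # assumed AWGN channel
    return np.exp(-0.5 * (ya - za) ** 2 / sigma2)

msg_va = np.tile(p_x, (n, m, 1))                # p_{i->a}, initialized to p_x
msg_av = np.ones((m, n, grid.size))             # p_{a->i}

for t in range(5):
    # measurement update (6): sum out all x_j, j != i, by enumeration
    for a in range(m):
        for i in range(n):
            others = [j for j in range(n) if j != i]
            acc = np.zeros(grid.size)
            for vals in itertools.product(range(grid.size), repeat=n - 1):
                w = np.prod([msg_va[j, a, v] for j, v in zip(others, vals)])
                z_rest = sum(A[a, j] * grid[v] for j, v in zip(others, vals))
                acc += w * p_y_given_z(y[a], A[a, i] * grid + z_rest)
            msg_av[a, i] = acc / acc.sum()
    # variable update (5): prior times all incoming messages except from a
    for i in range(n):
        for a in range(m):
            p = p_x * np.prod([msg_av[b, i] for b in range(m) if b != a], axis=0)
            msg_va[i, a] = p / p.sum()
```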

Earlier works on BP reconstruction have shown that it is asymptotically MSE optimal under certain verifiable conditions. These conditions involve simple one-dimensional recursive equations called state evolution (SE), which predict that BP is optimal when the corresponding SE admits a unique fixed point [15], [21]. Nonetheless, direct implementation of BP is impractical due to the dense structure of A, which implies that the algorithm must compute the marginal of a high-dimensional distribution at each measurement node. However, as mentioned in Section I, BP can be simplified through various Gaussian approximations, including the relaxed BP method [15], [16] and approximate message passing (AMP) [17], [19]. Recent theoretical work and extensive numerical experiments have demonstrated that, for certain large random measurement matrices, the error performance of both relaxed BP and AMP can be accurately predicted by SE. Hence optimal quantizers can be obtained in parallel for both methods; in this paper, however, we concentrate on the design for generalized AMP, keeping in mind that identical work can be done for relaxed BP as well.

Due to space limitations, we limit our presentation of the GAMP and SE equations to the setting in Figure 1. See [19] for a more general and detailed analysis.

IV. QUANTIZED GAMP

Consider the CS setting in Figure 1, where without loss of generality we assume that Ψ = I_n. The vector x ∈ R^n is measured through the random matrix A to yield z ∈ R^m, which is further perturbed by additive white Gaussian noise (AWGN). The resulting vector s can be written as

s = z + η = Ax + η,  (7)

where the {η_a} are i.i.d. random variables distributed as N(0, σ²). These noisy measurements are then quantized by the K-level scalar quantizer Q to give the CS measurements y ∈ R^m. The GAMP algorithm is used to estimate the signal x from the corrupted measurements y, given the matrix A, the noise variance σ² > 0, and the quantizer mapping Q. Note that under this model each quantized measurement y_a indicates that s_a ∈ Q⁻¹(y_a); hence our measurement channel can be characterized as

p_{y|z}(y_a | z_a) = ∫_{Q⁻¹(y_a)} φ(t; z_a, σ²) dt,  (8)

for a = 1, 2, . . . , m, where φ(·) is the Gaussian density.
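For numerical work, the integral (8) is simply a difference of Gaussian CDFs over the cell. A minimal sketch, assuming the cell Q⁻¹(y_a) is given by its endpoints [lo, hi); the function name is ours:

```python
# Channel likelihood (8): probability that N(z_a, sigma^2) lands in the
# quantizer cell Q^{-1}(y_a) = [lo, hi), via a Gaussian CDF difference.
import numpy as np
from scipy.stats import norm

def p_y_given_z(lo, hi, z, sigma2):
    s = np.sqrt(sigma2)
    return norm.cdf(hi, loc=z, scale=s) - norm.cdf(lo, loc=z, scale=s)

# outermost cells use infinite endpoints, e.g. (-inf, b_1)
p = p_y_given_z(-np.inf, 0.0, z=0.2, sigma2=1e-5)
```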

GAMP can be derived by approximating the updates (5) and (6) by two scalar parameters each and introducing certain first-order approximations, as discussed in [19]. Then, given the functions F_in, E_in, D_1, and D_2 described below, for each iteration t = 0, 1, 2, . . . the GAMP algorithm produces estimates x̂^t_i of the variables x_i. The estimation is performed as follows:

x̂^{t+1}_i ← F_in( x̂^t_i + (∑_{a=1}^m A_ai u^t_a) / (∑_{a=1}^m A²_ai τ^t_a), 1 / (∑_{a=1}^m A²_ai τ^t_a) ),  (9)

τ^{t+1}_i ← E_in( x̂^t_i + (∑_{a=1}^m A_ai u^t_a) / (∑_{a=1}^m A²_ai τ^t_a), 1 / (∑_{a=1}^m A²_ai τ^t_a) ),  (10)

u^t_a ← D_1( y_a, ∑_{i=1}^n A_ai x̂^t_i − u^{t−1}_a ∑_{i=1}^n A²_ai τ^t_i, ∑_{i=1}^n A²_ai τ^t_i + σ² ),  (11)

τ^t_a ← D_2( y_a, ∑_{i=1}^n A_ai x̂^t_i − u^{t−1}_a ∑_{i=1}^n A²_ai τ^t_i, ∑_{i=1}^n A²_ai τ^t_i + σ² ),  (12)

where σ² is the variance of the noise components η_a.
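In matrix-vector form, one pass of (9)–(12) can be written compactly. The following sketch is our own arrangement, with the scalar functions F_in, E_in, D_1, D_2 of (13)–(16) below passed in as vectorized callables:

```python
# One possible matrix-vector arrangement of the GAMP updates (9)-(12).
import numpy as np

def gamp(A, y, sigma2, f_in, e_in, d1, d2, x_init, tau_init, T=20):
    m, n = A.shape
    A2 = A ** 2
    x = np.full(n, x_init)        # \hat{x}^0_i = \hat{x}_init
    tau_x = np.full(n, tau_init)  # \tau^0_i = \tau_init
    u = np.zeros(m)               # u^{-1}_a = 0
    for _ in range(T):
        # measurement updates (11)-(12)
        tau_p = A2 @ tau_x                  # sum_i A_ai^2 tau_i^t
        z_hat = A @ x - u * tau_p           # sum_i A_ai x_i^t - u_a^{t-1} tau_p
        u = d1(y, z_hat, tau_p + sigma2)
        tau_u = d2(y, z_hat, tau_p + sigma2)
        # variable updates (9)-(10)
        tau_r = 1.0 / (A2.T @ tau_u)        # 1 / sum_a A_ai^2 tau_a^t
        q = x + tau_r * (A.T @ u)
        x, tau_x = f_in(q, tau_r), e_in(q, tau_r)
    return x, tau_x
```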

We refer to {x̂_i, τ_i}_{i∈V} as variable updates and to {u_a, τ_a}_{a∈F} as measurement updates. The algorithm is initialized by setting x̂^0_i = x̂_init, τ^0_i = τ_init, and u^{−1}_a = 0, where x̂_init and τ_init are the mean and variance of the prior p_x(x_i). The nonlinear functions F_in and E_in are the conditional mean and variance

F_in(q̂, τ) ≡ E[x | q = q̂],  (13)

E_in(q̂, τ) ≡ var(x | q = q̂),  (14)

where q = x + v, x ∼ p_x(x_i), and v ∼ N(0, τ). Note that these functions can easily be evaluated numerically for given values of q̂ and τ.
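For a prior without convenient closed forms, F_in and E_in can be evaluated on a grid; a sketch with our own naming, treating the prior as discretized weights p_x over x_grid:

```python
# Numerical evaluation of (13)-(14) for a prior discretized on a grid.
import numpy as np

def f_e_in(q_hat, tau, x_grid, p_x):
    # posterior weights of x given q_hat = x + v, v ~ N(0, tau)
    w = p_x * np.exp(-0.5 * (q_hat - x_grid) ** 2 / tau)
    w /= w.sum()
    mean = w @ x_grid                       # F_in(q_hat, tau)
    var = w @ (x_grid - mean) ** 2          # E_in(q_hat, tau)
    return mean, var
```

For the Gauss–Bernoulli prior used in Section VII these moments can also be derived in closed form, but the grid version is prior-agnostic.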

Similarly, the functions D_1 and D_2 can be computed via

D_1(y, ẑ, τ) ≡ (1/τ) (F_out(y, ẑ, τ) − ẑ),  (15)

D_2(y, ẑ, τ) ≡ (1/τ) (1 − E_out(y, ẑ, τ)/τ),  (16)

where the functions F_out and E_out are the conditional mean and variance

F_out(y, ẑ, τ) ≡ E[z | z ∈ Q⁻¹(y)],  (17)

E_out(y, ẑ, τ) ≡ var(z | z ∈ Q⁻¹(y)),  (18)

of the random variable z ∼ N(ẑ, τ). These functions admit closed-form expressions in terms of erf(z) = (2/√π) ∫₀ᶻ e^{−t²} dt.
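Concretely, (17)–(18) are the mean and variance of a Gaussian truncated to the cell [lo, hi); a sketch using the standard truncated-normal moment formulas (names ours):

```python
# F_out and E_out of (17)-(18): moments of N(z_hat, tau) truncated to
# the cell Q^{-1}(y) = [lo, hi), via standard truncated-normal formulas.
import numpy as np
from scipy.stats import norm

def f_e_out(lo, hi, z_hat, tau):
    s = np.sqrt(tau)
    a, b = (lo - z_hat) / s, (hi - z_hat) / s
    Z = norm.cdf(b) - norm.cdf(a)                 # cell probability
    dpdf = norm.pdf(a) - norm.pdf(b)
    mean = z_hat + s * dpdf / Z                   # F_out(y, z_hat, tau)
    ga = 0.0 if np.isinf(a) else a * norm.pdf(a)  # guard inf * 0
    gb = 0.0 if np.isinf(b) else b * norm.pdf(b)
    var = tau * (1.0 + (ga - gb) / Z - (dpdf / Z) ** 2)  # E_out
    return mean, var
```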

V. STATE EVOLUTION FOR GAMP

The equations (9)–(12) are easy to implement; however, they provide no insight into the performance of the algorithm. The goal of the SE equations is to describe the asymptotic behavior of GAMP under large measurement matrices. The SE for our setting in Figure 1 is given by the recursion

τ^{t+1} = Ē_in( 1 / D̄_2(βτ^t, σ²) ),  (19)

where t ≥ 0 is the iteration number, β = n/m is a fixed number denoting the measurement ratio, and σ² is the variance of the AWGN components, which is also fixed. We initialize the recursion by setting τ^0 = τ_init, where τ_init is the variance of x_i under the prior p_x(x_i). We define the function Ē_in as

Ē_in(τ) = E[ E_in(q, τ) ],  (20)

where the expectation is taken over the scalar random variable q = x + v, with x ∼ p_x(x_i) and v ∼ N(0, τ). Similarly, the function D̄_2 is defined as

D̄_2(τ, σ²) = E[ D_2(y, ẑ, τ + σ²) ],  (21)

where D_2 is given by (16) and the expectation is taken over p_{y|z}(y_a | z_a) and (z, ẑ) ∼ N(0, P_z(τ)), with the covariance matrix

P_z(τ) = [ βτ_init        βτ_init − τ
           βτ_init − τ    βτ_init − τ ].  (22)

One of the main results of [19] was to demonstrate the convergence of the error performance of the GAMP algorithm to the SE equations under large measurement matrices. The following theorem is a corollary of this result; see Section VI-D of [19].

Theorem 1. Consider the GAMP algorithm under the assumptions in Sections V-B and V-D of [19]. Then, for any fixed iteration number t and fixed variable index i, the error variances satisfy the limit

lim_{n→∞} E[ |x_i − x̂^t_i|² ] = τ^t,  (23)

where τ^t is the output of the SE equation (19).

Another important result regarding the SE recursion (19) is that it admits at least one fixed point. It has been shown that, as t → ∞, the recursion decreases monotonically to its largest fixed point [16].

It is important to note that the results on convergence of GAMP to SE have been demonstrated for large i.i.d. Gaussian matrices A [18], [19]. Prior analyses of approximate BP algorithms, like relaxed BP, were based on certain large sparse assumptions for the matrix A, which are rarely satisfied in the CS setting [16].
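The SE recursion itself is a one-dimensional iteration. A sketch with our own naming, where the expectations (20) and (21) are supplied as callables, e.g. evaluated by Monte Carlo over q = x + v and over the channel with covariance (22):

```python
# The scalar SE recursion (19); Ebar_in and Dbar_2 implement (20)-(21).
def state_evolution(beta, sigma2, tau_init, Ebar_in, Dbar_2, T=20):
    tau = tau_init
    for _ in range(T):
        tau = Ebar_in(1.0 / Dbar_2(beta * tau, sigma2))
    return tau  # after 10-20 iterations this approximates the fixed point
```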

VI. OPTIMAL QUANTIZATION

We now return to the problem of designing MSE-optimal quantizers under GAMP, as posed in (3). By modeling the quantizer as part of the channel and working out the resulting equations for GAMP and SE, we can make use of the convergence results to recast our optimization problem as

Q_SE = arg min_Q { τ∗ },  (24)

where τ∗ is the largest fixed point of the SE equations, given by

τ∗ = lim_{t→∞} τ^t.  (25)

Minimization is done over all K-level regular scalar quantizers. In practice, about 10 to 20 iterations are sufficient to reach the fixed point of τ^t. By applying Theorem 1, we then know that the asymptotic performance of Q∗ will be identical to that of Q_SE. It is important to note that the SE recursion behaves well under quantizer optimization. This is because SE is independent of the actual output levels, and small changes in the quantizer boundaries result in only minor changes in the recursion (see (18)). Although closed-form expressions for the derivatives of τ^t for large t are difficult to obtain, we can approximate them using finite-difference methods. Finally, the recursion itself is fast to evaluate, which makes the scheme in (24) practically realizable using standard optimization methods.
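As an illustration of (24), the SE fixed point can be fed to a standard optimizer over the interior boundaries, with gradients obtained by finite differences (SciPy's SLSQP is one SQP-type choice). The helper `se_fixed_point`, assumed here, runs recursion (19) to convergence for the quantizer with interior boundaries b (and b_0 = −∞, b_K = +∞); the parameterization is our own:

```python
# Sketch of the boundary optimization (24): minimize the SE fixed point
# tau* over interior boundaries, keeping them sorted by construction.
import numpy as np
from scipy.optimize import minimize

def optimize_quantizer(se_fixed_point, b_start):
    # parameterize sorted boundaries as first point + positive increments
    def unpack(d):
        return d[0] + np.concatenate(([0.0], np.cumsum(np.exp(d[1:]))))
    d0 = np.concatenate(([b_start[0]], np.log(np.diff(b_start))))
    # SLSQP uses finite-difference gradients of the SE fixed point
    res = minimize(lambda d: se_fixed_point(unpack(d)), d0, method="SLSQP")
    return unpack(res.x)
```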

VII. EXPERIMENTAL RESULTS

We now present experimental validation of our results. Assume that the signal x is generated with i.i.d. elements from the Gauss–Bernoulli distribution

x_i ∼ { N(0, 1/ρ),  with probability ρ;
        0,           with probability 1 − ρ,   (26)

[Fig. 2: Optimized quantizer boundaries for R_x = 1 bit/component of x, comparing the Uniform and Optimal quantizers over the range −5 to 5. The Optimal quantizer is found by optimizing the boundaries for each β and then picking the result with the smallest distortion.]

where ρ is the sparsity ratio that represents the average fraction of nonzero components of x. In the following experiments we assume ρ = 0.1. We form the measurement matrix A from i.i.d. Gaussian random variables, i.e., A_ai ∼ N(0, 1/m), and we assume that AWGN with variance σ² = 10⁻⁵ perturbs the measurements before quantization. Related results with no additive noise appear in [22].
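For reproducibility, the setup just described can be generated in a few lines; the dimensions below are our own choice, not from the paper:

```python
# Experiment setup: Gauss-Bernoulli signal (26), A_ai ~ N(0, 1/m), and
# AWGN of variance 1e-5 added before quantization as in (7).
import numpy as np

rng = np.random.default_rng(0)
n, beta, rho, sigma2 = 1000, 2.0, 0.1, 1e-5
m = int(n / beta)                                     # beta = n/m
x = rng.standard_normal(n) / np.sqrt(rho) * (rng.random(n) < rho)
A = rng.standard_normal((m, n)) / np.sqrt(m)          # A_ai ~ N(0, 1/m)
s = A @ x + np.sqrt(sigma2) * rng.standard_normal(m)  # noisy measurements
```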

We can now formulate the SE equation (19) and perform the optimization (24). We compare two CS-optimized quantizers: Uniform and Optimal. We fix the boundary points b_0 = −∞ and b_K = +∞, and compute the former quantizer through an optimization of type (2). In particular, by applying the central limit theorem we approximate the elements s_a of s as Gaussian and determine the Uniform quantizer by solving (2) with an additional constraint of equally-spaced output levels. To determine the Optimal quantizer, we perform (24) using a standard SQP algorithm for nonlinear continuous optimization.

Figure 2 presents an example of quantization boundaries. For a given bit rate R_x over the components of the input vector x, we can express the rate over the measurements s as R_s = βR_x, where β = n/m is the measurement ratio. To determine the optimal quantizer for the given rate R_x, we perform the optimization for all β and return the quantizer with the least MSE. As we can see, the quantizer obtained via SE minimization is very different from the uniform quantizer obtained by merely minimizing the distortion between the quantizer input and output; in fact, it is more concentrated around zero. This is because, by minimizing SE, we are in fact searching for quantizers that asymptotically minimize the MSE of the GAMP reconstruction, taking into consideration the nonlinear effects of the method. The trend of having more quantizer points near zero is opposite to the trend shown in [8] for quantizers optimized for lasso reconstruction.


[Fig. 3: Performance comparison of GAMP with other sparse estimation methods: MSE (dB, from 0 down to −50) versus rate (1 to 2 bits/component) for Linear, LASSO, Uniform, and Optimal quantizers.]

Figure 3 presents a comparison of reconstruction distortions for our two quantizers and confirms the advantage of using quantizers optimized via (19). To obtain the results, we vary the quantization rate from 1 to 2 bits per component of x and, for each quantization rate, optimize the quantizers using the methods discussed above. For comparison, the figure also plots the MSE performance of two other reconstruction methods: linear MMSE estimation and the widely-used lasso method [9], both assuming a bounded uniform quantizer. The lasso performance was predicted by the state evolution equations in [19], with the thresholding parameter optimized by the iterative approach in [28]. It can be seen that the proposed GAMP algorithm offers dramatically better performance, with more than 10 dB improvement. At higher rates the gap is even larger, although GAMP performance saturates due to the AWGN at the quantizer input. Similarly, we can see that the MSE of the quantizer optimized for GAMP reconstruction is much smaller than that of the standard one, with more than a 4 dB difference at many rates.

VIII. CONCLUSIONS

We present generalized approximate message passing as an efficient algorithm for compressive sensing reconstruction from quantized measurements. We integrate ideas from a recent generalization of the algorithm to arbitrary measurement channels to design a method for determining optimal quantizers under GAMP reconstruction. Experimental results show that, although computationally simpler, GAMP offers significantly improved performance over traditional reconstruction schemes under quantized measurements. Additionally, the performance of the algorithm is further improved by using the state evolution framework to optimize the quantizers.

REFERENCES

[1] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inform. Theory, vol. 52, pp. 489–509, Feb. 2006.
[2] D. L. Donoho, "Compressed sensing," IEEE Trans. Inform. Theory, vol. 52, pp. 1289–1306, Apr. 2006.
[3] E. J. Candès and J. Romberg, "Encoding the ℓp ball from limited measurements," in Proc. IEEE Data Compression Conf., (Snowbird, UT), pp. 33–42, Mar. 2006.
[4] V. K. Goyal, A. K. Fletcher, and S. Rangan, "Compressive sampling and lossy compression," IEEE Signal Process. Mag., vol. 25, pp. 48–56, Mar. 2008.
[5] W. Dai, H. V. Pham, and O. Milenkovic, "A comparative study of quantized compressive sensing schemes," in Proc. IEEE Int. Symp. Inform. Theory, (Seoul, Korea), pp. 11–15, June–July 2009.
[6] A. Zymnis, S. Boyd, and E. Candès, "Compressed sensing with quantized measurements," IEEE Signal Process. Lett., vol. 17, pp. 149–152, Feb. 2010.
[7] J. N. Laska, P. T. Boufounos, M. A. Davenport, and R. G. Baraniuk, "Democracy in action: Quantization, saturation, and compressive sensing," Appl. Comput. Harmon. Anal., vol. 31, 2011.
[8] J. Z. Sun and V. K. Goyal, "Optimal quantization of random measurements in compressed sensing," in Proc. IEEE Int. Symp. Inform. Theory, (Seoul, Korea), pp. 6–10, June–July 2009.
[9] R. Tibshirani, "Regression shrinkage and selection via the lasso," J. Royal Stat. Soc., Ser. B, vol. 58, no. 1, pp. 267–288, 1996.
[10] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann, 1988.
[11] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599–618, Feb. 2001.
[12] D. Baron, S. Sarvotham, and R. G. Baraniuk, "Bayesian compressive sensing via belief propagation," IEEE Trans. Signal Process., vol. 58, pp. 269–280, Jan. 2010.
[13] J. Boutros and G. Caire, "Iterative multiuser joint decoding: Unified framework and asymptotic analysis," IEEE Trans. Inform. Theory, vol. 48, pp. 1772–1793, July 2002.
[14] T. Tanaka and M. Okada, "Approximate belief propagation, density evolution, and neurodynamics for CDMA multiuser detection," IEEE Trans. Inform. Theory, vol. 51, pp. 700–706, Feb. 2005.
[15] D. Guo and C.-C. Wang, "Asymptotic mean-square optimality of belief propagation for sparse linear systems," in Proc. IEEE Inform. Theory Workshop, (Chengdu, China), pp. 194–198, Oct. 2006.
[16] S. Rangan, "Estimation with random linear mixing, belief propagation and compressed sensing," in Proc. Conf. on Inform. Sci. & Sys., (Princeton, NJ), pp. 1–6, Mar. 2010.
[17] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proc. Nat. Acad. Sci., vol. 106, pp. 18914–18919, Nov. 2009.
[18] M. Bayati and A. Montanari, "The dynamics of message passing on dense graphs, with applications to compressed sensing," IEEE Trans. Inform. Theory, vol. 57, pp. 764–785, Feb. 2011.
[19] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," arXiv:1010.5141v1 [cs.IT], Oct. 2010.
[20] V. Misra, V. K. Goyal, and L. R. Varshney, "Distributed scalar quantization for computing: High-resolution analysis and extensions," IEEE Trans. Inform. Theory, vol. 57, Aug. 2011. To appear.
[21] D. Guo and C.-C. Wang, "Random sparse linear systems observed via arbitrary channels: A decoupling principle," in Proc. IEEE Int. Symp. Inform. Theory, (Nice, France), pp. 946–950, June 2007.
[22] U. Kamilov, V. K. Goyal, and S. Rangan, "Message-passing estimation from quantized samples," arXiv:1105.6368v1 [cs.IT], May 2011.
[23] S. Rangan et al., "Generalized approximate message passing," SourceForge.net project gampmatlab. Available online at http://gampmatlab.sourceforge.net/.
[24] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inform. Theory, vol. 51, pp. 4203–4215, Dec. 2005.
[25] A. Maleki and D. L. Donoho, "Optimally tuned iterative reconstruction algorithms for compressed sensing," IEEE J. Sel. Topics Signal Process., vol. 4, pp. 330–341, Apr. 2010.
[26] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proc. IEEE, vol. 98, pp. 948–958, June 2010.
[27] R. M. Gray and D. L. Neuhoff, "Quantization," IEEE Trans. Inform. Theory, vol. 44, pp. 2325–2383, Oct. 1998.
[28] S. Rangan, A. K. Fletcher, and V. K. Goyal, "Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing," arXiv:0906.3234v1 [cs.IT], June 2009.
