
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 1, JANUARY 2012

Compressive MUSIC: Revisiting the Link Between Compressive Sensing and Array Signal Processing

Jong Min Kim, Ok Kyun Lee, Student Member, IEEE, and Jong Chul Ye, Member, IEEE

Abstract—The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share a common sparse support. Even though MMV problems have traditionally been addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees accurate recovery in a probabilistic manner, which often shows inferior performance in the regime where the traditional array signal processing approaches succeed. The apparent dichotomy between probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that revisits the link between CS and array signal processing first unveiled in the mid 1990s by Feng and Bresler. The new algorithm, which we call compressive MUSIC, identifies parts of the support using CS, after which the remaining support is estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we show that our compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing CS methods and that it can approach the optimal $\ell_0$-bound with a finite number of snapshots even in cases where the signals are linearly dependent.

Index Terms—Compressive sensing, multiple measurement vector problem, joint sparsity, MUSIC, S-OMP, thresholding.

I. INTRODUCTION

COMPRESSIVE SENSING (CS) theory [1]–[3] addresses the accurate recovery of unknown sparse signals from underdetermined linear measurements and has become one of the main research topics in the signal processing area, with many applications [4]–[9]. Most compressive sensing theories have been developed to address the single measurement vector (SMV) problem [1]–[3]. More specifically, let $m$ and $n$ be positive integers such that $m < n$. Then, the SMV compressive sensing problem is given by

$(P0):\quad \min_{x} \|x\|_0 \quad \text{subject to}\quad y = Ax$   (I.1)

Manuscript received June 24, 2010; revised March 28, 2011; accepted July 18, 2011. Date of current version January 06, 2012. This work was supported by the Korea Science and Engineering Foundation (KOSEF) grant funded in part by the Korean government (MEST) (No. 2009-0081089) and in part by the Industrial Strategic Technology Program of the Ministry of Knowledge Economy (KI001889). Parts of this work were presented at the SIAM Conference on Imaging Science, Chicago, IL, 2010, with the title “Multiple measurement vector problem with subspace-based algorithm”.
The authors are with the Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Guseong-dong Yuseong-gu, Daejon 305-701, Korea (e-mail: [email protected]; [email protected]; [email protected]).
Communicated by J. Romberg, Associate Editor for Signal Processing.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIT.2011.2171529

where $y \in \mathbb{R}^m$, $A \in \mathbb{R}^{m \times n}$, and $\|x\|_0$ denotes the number of nonzero elements in the vector $x$. Since (P0) requires a computationally expensive combinatorial optimization, greedy methods [10], reweighted norm algorithms [11], [12], convex relaxation using the $\ell_1$ norm [2], [13], and Bayesian approaches [14], [15] have been widely investigated as alternatives. One of the important theoretical tools within this context is the so-called restricted isometry property (RIP), which enables us to guarantee the robust recovery of certain input signals [3]. More specifically, a sensing matrix $A \in \mathbb{R}^{m \times n}$ is said to have a $k$-restricted isometry property (RIP) if there is a constant $0 \le \delta_k < 1$ such that

$(1 - \delta_k)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_k)\|x\|_2^2$

for all $x$ such that $\|x\|_0 \le k$. It has been demonstrated that $\delta_{2k} < \sqrt{2} - 1$ is sufficient for $\ell_1/\ell_0$ equivalence [2]. For many classes of random matrices, the RIP condition is satisfied with extremely high probability if the number of measurements satisfies $m \ge c\,k \log(n/k)$ for some constant $c > 0$ [3].

Another important area of compressive sensing research is

the so-called multiple measurement vector (MMV) problem [16]–[19]. The MMV problem addresses the recovery of a set of sparse signal vectors that share a common nonzero support. More specifically, let $m$, $n$, and $r$ be positive integers such that $r \le m < n$. In the MMV context, $m$ and $r$ denote the number of sensor elements and snapshots, respectively. For a given observation matrix $B \in \mathbb{R}^{m \times r}$ and a sensing matrix $A \in \mathbb{R}^{m \times n}$ such that $B = AX$ for some $X \in \mathbb{R}^{n \times r}$, the MMV problem is formulated as

$\min_{X} \|X\|_0 \quad \text{subject to}\quad B = AX$   (I.2)

where $\|X\|_0 = |\mathrm{supp}\,X|$ and $\mathrm{supp}\,X = \{1 \le i \le n : X^i \ne 0\}$, where $X^i$ is the $i$-th row of $X$. The MMV problem also has many important applications [20]–[24]. Currently, greedy algorithms such as S-OMP (simultaneous orthogonal matching pursuit) [16], [25], convex relaxation methods using mixed norms [26], [27], M-FOCUSS [17], M-SBL (multiple sparse Bayesian learning) [28], randomized algorithms such as REduce MMV and BOost (ReMBo) [18], and model-based compressive sensing using block-sparsity [29], [30] have also been applied to the MMV problem within the context of compressive sensing.

In MMV, thanks to the common sparse support, it is quite predictable that the recoverable sparsity level may increase with an increasing number of measurement vectors. More specifically, given a sensing matrix $A$, let $\mathrm{spark}(A)$ denote the smallest


number of linearly dependent columns of $A$. Then, according to Chen and Huo [16] and Feng and Bresler [31], if $X$ satisfies $B = AX$ and

$\|X\|_0 < \dfrac{\mathrm{spark}(A) + \mathrm{rank}(B) - 1}{2} \le \dfrac{\mathrm{spark}(A) + \|X\|_0 - 1}{2}$   (I.3)

then $X$ is the unique solution of (I.2). In (I.3), the last inequality comes from the observation that $\mathrm{rank}(B) \le \mathrm{rank}(X) \le \|X\|_0$. Recently, Davies and Eldar showed that (I.3) is also a necessary condition for $X$ to be a unique solution of (I.2) [32]. Compared to the SMV case $\|x\|_0 < \mathrm{spark}(A)/2$, (I.3) informs us that the recoverable sparsity level increases with the number of measurement vectors. However, the performance of the aforementioned MMV compressive sensing algorithms is not generally satisfactory, and significant performance gaps from (I.3) still exist even in the noiseless case when only a finite number of snapshots is available.

On the other hand, before the advent of compressive

sensing, the MMV problem (I.2), which was often termed the direction-of-arrival (DOA) or bearing estimation problem, had been addressed using sensor array signal processing techniques [21]. One of the most popular and successful DOA estimation algorithms is the so-called MUSIC (MUltiple SIgnal Classification) algorithm [33]. The MUSIC estimator has been proven to be a large-snapshot realization of the maximum likelihood estimator if and only if the signals are uncorrelated [34]. By adapting the MUSIC concept to the MMV problem, Feng and Bresler [31] showed that when $\mathrm{rank}(B) = \|X\|_0 = k$ and the nonzero row vectors are in general position, the maximum sparsity level that is uniquely recoverable using the MUSIC approach satisfies

$k < \mathrm{spark}(A) - 1$   (I.4)

which implies that the MUSIC algorithm achieves the $\ell_0$ bound (I.3) of the MMV when $\mathrm{rank}(B) = k$. A recent survey [54] discusses this issue in more detail. However, one of the main limitations of the MUSIC algorithm is its failure when $\mathrm{rank}(B) < k$,

which occurs, even if $r \ge k$, when the nonzero rows of $X$ are linearly dependent. This problem is often called the “coherent source” problem within the sensor array signal processing context [21].

While the link between compressed sensing and sensor array signal processing was first unveiled by Bresler and Feng in [31], [54], their proposed MUSIC-like algorithm requires $\mathrm{rank}(B) = k$ and is therefore inapplicable to the coherent source case. The main contribution of the present article is, therefore, to provide a new class of algorithms to address this problem. The new algorithm, termed compressive MUSIC (CS-MUSIC), can be regarded as a deterministic extension of compressive sensing to achieve $\ell_0$ optimality, or as a generalization of the MUSIC algorithm using a probabilistic setup to address the difficult problem of coherent source estimation. This generalization is due to our novel discovery of a generalized MUSIC criterion, which tells us that an unknown support of size $k$ can be estimated deterministically as long as a partial support of size $k - r$ can be estimated with any compressive sensing algorithm such as S-OMP or thresholding. Therefore, as $r$ approaches $k$, our compressive

MUSIC approaches the classical MUSIC estimator; whereas, as $r$ becomes 1, the algorithm approaches a classical SMV compressive sensing algorithm. Furthermore, even if the sparsity level is not known a priori, compressive MUSIC can accurately estimate the sparsity level using the generalized MUSIC criterion. This emphasizes the practical usefulness of the new algorithm. Since the fraction of the support that should be estimated probabilistically is reduced from $k$ to $k - r$, one can conjecture that the required number of sensor elements for compressive MUSIC is significantly smaller than that for conventional compressive sensing. Using the large system MMV model, we derive explicit expressions for the minimum number of sensor elements, which confirms our conjecture. Furthermore, we derive an explicit expression for the minimum SNR that guarantees the success of compressive MUSIC. Numerical experiments confirm our theoretical findings.

The remainder of the paper is organized as follows. We provide the problem formulation and mathematical preliminaries in Section II, followed by a review of existing MMV algorithms in Section III. Section IV gives a detailed presentation of the generalized MUSIC criterion, and the required number of sensor elements in CS-MUSIC is calculated in Section V. Numerical results are given in Section VI, followed by the discussion and conclusion in Sections VII and VIII, respectively.

II. PROBLEM FORMULATION AND MATHEMATICAL PRELIMINARIES

Throughout the paper, $X^i$ and $X_j$ correspond to the $i$-th row and the $j$-th column of a matrix $X$, respectively. When $S$ is an index set, $X^S$ and $A_S$ correspond to submatrices collecting the corresponding rows of $X$ and columns of $A$, respectively. The following noiseless version of the canonical MMV formulation is very useful for our analysis.

Definition 2.1 (Canonical Form Noiseless MMV): Let $m$, $n$, and $r$ be positive integers that represent the number of sensor elements, the ambient space dimension, and the number of snapshots, respectively. Suppose that we are given a sensing matrix $A \in \mathbb{R}^{m \times n}$ and an observation matrix $B \in \mathbb{R}^{m \times r}$ such that $B = AX$ for some $X \in \mathbb{R}^{n \times r}$ with $\|X\|_0 = k$. A canonical form noiseless multiple measurement vector (MMV) problem is given as the estimation problem of the $k$-sparse vectors using the following formulation:

$\min_{X} \|X\|_0 \quad \text{subject to}\quad B = AX$   (II.5)

where $X^i$ is the $i$-th row of $X$, and the observation matrix is full rank, i.e., $\mathrm{rank}(B) = r$.

Compared to (I.2), the canonical form MMV has the additional constraint that $\mathrm{rank}(B) = r$. This is not problematic though, since every MMV problem can be converted into a canonical form using the following dimension reduction.
• Suppose we are given the following linear sensor observations: $B = AX$, where $B$ collects all available snapshots and $X$ satisfies $\|X\|_0 = k$.


• Compute the (reduced) SVD as $B = U\Sigma V^H$, where $\Sigma$ is an $r \times r$ diagonal matrix with $r = \mathrm{rank}(B)$, and $U$ and $V$ consist of the corresponding left and right singular vectors, respectively.
• Reduce the dimension as $\tilde{B} = BV$ and $\tilde{X} = XV$.
• The resulting canonical form MMV becomes $\tilde{B} = A\tilde{X}$.
We can easily show that $\mathrm{rank}(\tilde{B}) = r$ and the sparsity $\|\tilde{X}\|_0 = \|X\|_0$ with $\mathrm{supp}\,\tilde{X} = \mathrm{supp}\,X$ with probability 1. Therefore, without loss of generality, the canonical form of the MMV in Definition 2.1 is assumed throughout the paper. A minimal numerical sketch of this reduction follows.

The following definitions are used throughout this paper.

Definition 2.2: [35] The rows (or columns) in $X$ are in general position if any collection of $\mathrm{rank}(X)$ rows (or columns) is linearly independent.
If $A \in \mathbb{R}^{m \times n}$, where $m < n$, the columns of $A$ are in general position if and only if $\mathrm{spark}(A) = m + 1$. Also, this is equivalent to $\mathrm{krank}(A) = m$, where krank denotes the Kruskal rank; the Kruskal rank of $A$ is the maximal number $q$ such that every collection of $q$ columns of $A$ is linearly independent [18].

Definition 2.3 (Mutual Coherence): For a sensing matrix $A = [a_1, \dots, a_n] \in \mathbb{R}^{m \times n}$, the mutual coherence $\mu(A)$ is given by

$\mu(A) = \max_{i \ne j} \dfrac{|a_i^H a_j|}{\|a_i\|_2 \|a_j\|_2}$

where the superscript $H$ denotes the Hermitian transpose.

Definition 2.4 (Restricted Isometry Property (RIP)): A sensing matrix $A \in \mathbb{R}^{m \times n}$ is said to have a $k$-restricted isometry property (RIP) if there exist left and right RIP constants $0 \le \delta_k^L, \delta_k^R < 1$ such that

$(1 - \delta_k^L)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_k^R)\|x\|_2^2$

for all $x$ such that $\|x\|_0 \le k$. A single RIP constant $\delta_k = \max\{\delta_k^L, \delta_k^R\}$ is often referred to as the RIP constant.

Note that the condition $\delta_{2k}^L < 1$ for the left RIP constant is sufficient for the uniqueness of any $k$-sparse vector $x$ satisfying $y = Ax$, but the condition $\delta_{2k} < 1$ is often too restrictive.

III. CONVENTIONAL MMV ALGORITHMS

In this section, we review the conventional algorithms for the MMV problem and analyze their limitations. This survey is useful in order to understand the necessity of developing a new class of algorithms. Except for the MUSIC and cumulant MUSIC algorithms, all other algorithms have been developed in the context of compressive sensing. We will show that all the existing methods have their own disadvantages. In particular, the maximum sparsity levels that can be resolved by these algorithms fall short of the maximum gain from joint sparse recovery.

A. Simultaneous Orthogonal Matching Pursuit (S-OMP) [16],[25]

The S-OMP algorithm is a greedy algorithm that performs the following procedure (a code sketch follows):
• At the first iteration, set $S_0 = \emptyset$ and $R_0 = B$.
• After $l$ iterations, compute the residual $R_l = (I - P_l)B$, where $P_l$ is the orthogonal projection onto $R(A_{S_l})$.
• Select $j_{l+1} = \arg\max_{j} \|R_l^H a_j\|$ and set $S_{l+1} = S_l \cup \{j_{l+1}\}$.
Worst-case analysis of S-OMP [36] shows that a sufficient condition for S-OMP to succeed is

$\max_{j \notin S} \|A_S^\dagger a_j\|_1 < 1, \quad S = \mathrm{supp}\,X$   (III.1)

where $A_S^\dagger$ denotes the pseudo-inverse of $A_S$. An explicit form of the recoverable sparsity level is then given by

$\|X\|_0 < \dfrac{1}{2}\left(1 + \dfrac{1}{\mu(A)}\right)$   (III.2)

Note that these conditions are exactly the same as Tropp's exact recovery conditions for the SMV problem [37], implying that the sufficient condition for the maximum sparsity level is not improved with an increasing number of snapshots, even in the noiseless case. In order to resolve this issue, the authors in [36] and [38] performed an average case analysis for S-OMP and showed that S-OMP can recover the input signals for the MMV problem with higher probability when the number of snapshots increases. However, the simulation results in [36] and [38] suggest that S-OMP performance saturates after some number of snapshots, even with noiseless measurements, and S-OMP never achieves the $\ell_0$ bound with a finite number of snapshots.
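For concreteness, here is a minimal S-OMP sketch along the lines of the procedure above; the ranking norm and tie handling are our implementation choices, and [16], [25] state the algorithm in slightly different forms.

```python
import numpy as np

def somp(A, B, k):
    """Simultaneous OMP sketch: greedily select k support indices by
    correlating every column of A with the current residual matrix."""
    S, R = [], B.copy()
    for _ in range(k):
        scores = np.linalg.norm(A.conj().T @ R, axis=1)  # ||a_j^H R|| per column
        scores[S] = -np.inf                              # exclude chosen indices
        S.append(int(np.argmax(scores)))
        Q, _ = np.linalg.qr(A[:, S])                     # orthonormal basis of span(A_S)
        R = B - Q @ (Q.conj().T @ B)                     # project B out of span(A_S)
    return sorted(S)
```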

B. 2-Thresholding [36]

In 2-thresholding, we select a set $S$ with $|S| = \|X\|_0$ consisting of the indices $j$ with the largest values of $\|B^H a_j\|_2$.

If we estimate the support $S$ by the above criterion, we can recover the nonzero components of $X$ by the equation $X^S = A_S^\dagger B$ (see the sketch below). In [36], the authors demonstrated that the performance of 2-thresholding is often not as good as that of S-OMP, which suggests that 2-thresholding never achieves the $\ell_0$-bound (I.3) with finite snapshots even if the measurements are noiseless.
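A minimal sketch of 2-thresholding under this reading of the criterion:

```python
import numpy as np

def two_thresholding(A, B, k):
    """2-thresholding sketch: rank columns by the l2 norm of their
    correlations with the snapshots and keep the k largest."""
    scores = np.linalg.norm(A.conj().T @ B, axis=1)  # ||a_j^H B||_2 for each j
    S = np.sort(np.argpartition(scores, -k)[-k:])    # indices of the k largest scores
    X_S = np.linalg.pinv(A[:, S]) @ B                # X^S = A_S^+ B on the estimate
    return S, X_S
```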

C. ReMBO Algorithm [18]

Reduce MMV and Boost (ReMBo) by Mishali and Eldar [18] addresses the MMV problem by reducing it to a series of SMV problems based on the following.

Theorem 3.1: [18] Suppose that $X$ satisfies $B = AX$ and $\|X\|_0 = k$ with $k < \mathrm{spark}(A)/2$. Let $a \in \mathbb{R}^r$ be a random vector with an absolutely continuous distribution and define $y = Ba$ and $x = Xa$. Then, for the random SMV system $y = Ax$, we have

(a) For every $a$, the vector $x = Xa$ is the unique $k$-sparse solution.
(b) $\mathrm{supp}\,x = \mathrm{supp}\,X$ with probability 1.

Employing the above theorem, Mishali and Eldar [18] proposed the ReMBo algorithm, which performs the following procedure (a sketch is given after the discussion):
• set the maximum number of iterations MaxIter, set flag = false, and set $i = 1$;


• while flag = false and $i \le$ MaxIter, generate a random SMV problem as in Theorem 3.1,
— if the SMV problem has a $k$-sparse solution, then let $S$ be the support of the solution vector, and let flag = true;
— otherwise, increase $i$ by 1;
• if flag = true, find the nonzero components of $X$ by the equation $X^S = A_S^\dagger B$.

In order to achieve the bound (I.4) by ReMBo without any combinatorial SMV solver, an uncountable number of random vectors $a$ would be required. With a finite number of choices of $a$, the performance of ReMBo therefore depends on the randomly chosen input and on the solvability of a randomly generated SMV problem, so that it is difficult to achieve the theoretical $\ell_0$ bound even with noiseless measurements.
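A minimal ReMBo-style sketch is shown below; it reuses the `somp` routine from the earlier sketch as a stand-in SMV solver, whereas Theorem 3.1 is stated for an exact ($\ell_0$) SMV solver, so this version inherits the greedy solver's limitations.

```python
import numpy as np

def rembo(A, B, k, max_iter=20, rng=None):
    """ReMBo sketch: reduce MMV to random SMV problems y = B a and accept
    the first candidate support that exactly explains y (noiseless setting)."""
    rng = rng or np.random.default_rng()
    for _ in range(max_iter):
        a = rng.standard_normal(B.shape[1])   # absolutely continuous random vector
        y = B @ a                             # random SMV measurement
        S = somp(A, y[:, None], k)            # candidate k-sparse support
        x = np.zeros(A.shape[1])
        x[S] = np.linalg.pinv(A[:, S]) @ y
        if np.allclose(A @ x, y, atol=1e-8):  # k-sparse solution found
            return S
    return None                               # give up after max_iter draws
```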

D. Mixed Norm Approach [27]

The mixed norm approach is an extension of the convex relaxation method in SMV [10] to the MMV problem. Rather than solving the original MMV problem (II.5), the mixed norm approaches solve the following convex optimization problem:

$\min_{X} \sum_{i=1}^{n} \|X^i\|_q \quad \text{subject to}\quad B = AX$   (III.3)

where $q \ge 1$. The optimization problem can be formulated as an SOCP (second order cone program) [26], solved by homotopy continuation [39], and so on. Worst-case bounds for the mixed norm approach were derived in [16], which show no improvement with an increasing number of measurements. Instead, Eldar et al. [38] considered the average case analysis and showed that, under a suitable bound on the sparsity level, the probability of successfully recovering the joint support increases with the number of snapshots. However, it is not clear whether this convex relaxation can achieve the $\ell_0$ bound.

E. Block Sparsity Approaches [29]

Block sparse signals have been extensively studied by Eldar et al. using the uncertainty relation for block-sparse signals and the block coherence concept. Eldar et al. [29] showed that a block sparse signal can be efficiently recovered using fewer measurements by exploiting the block sparsity pattern, as described in the following theorem:

Theorem 3.2: [29] Let the block coherence be

$\mu_B = \max_{i \ne j} \dfrac{1}{d}\,\rho\!\left(A[i]^H A[j]\right)$

where $\rho(\cdot)$ denotes the spectral radius and $A[i]$ denotes the $i$-th column block of the sensing matrix; let the sub-coherence of the sensing matrix be

$\nu = \max_{i}\ \max_{k \ne l} |a_k^H a_l|, \quad a_k, a_l \ \text{columns of}\ A[i]$

and let $d$ be the block size. Then, block OMP and the block mixed $\ell_2/\ell_1$ optimization program successfully recover a $k$-block sparse signal if

$kd < \dfrac{1}{2}\left(\dfrac{1}{\mu_B} + d - (d - 1)\dfrac{\nu}{\mu_B}\right)$   (III.4)

Note that we can transform $B = AX$ into an SMV system $\mathrm{vec}(B^T) = (A \otimes I_r)\,\mathrm{vec}(X^T)$, where $\mathrm{vec}(X^T)$ is block-sparse with block length $r$ and $\otimes$ denotes the Kronecker product of matrices (see the numerical check below). Therefore, one may think that we can use block OMP or the block $\ell_2/\ell_1$ optimization problem to solve the MMV problem. However, the following theorem shows that this is pessimistic.

Theorem 3.3: For the canonical MMV problem in Definition 2.1, the sufficient condition (III.4) for recovery using block-sparsity is equivalent to

$\|X\|_0 < \dfrac{1}{2}\left(1 + \dfrac{1}{\mu(A)}\right)$   (III.5)

where $\mu(A)$ denotes the mutual coherence of the sensing matrix $A$.

Proof: Since the $i$-th block of $D = A \otimes I_r$ is $a_i \otimes I_r$, if we let $d = r$, we have $\nu = 0$ due to the diagonality of $(a_i \otimes I_r)^H (a_i \otimes I_r)$, and $\mu_B = \mu(A)/r$ by the definition of mutual coherence. Applying (III.4) with $d = r$, $\nu = 0$, and $\mu_B = \mu(A)/r$, we obtain (III.5).

Note that (III.5) is the same as the condition for OMP in the SMV case. The main reason for the failure of the block sparse approach for the MMV problem is that the block sparsity model does not exploit the diversity of the unknown matrix $X$. For example, the block sparse model cannot differentiate a rank-one input matrix $X$ from a full-rank matrix $X$.

F. M-SBL [28]

M-SBL (multiple sparse Bayesian learning) by Wipf and Rao [40] is a Bayesian compressive sensing algorithm to address the MMV problem. M-SBL is based on ARD (automatic relevance determination) and utilizes an empirical Bayesian prior, thereby enforcing joint sparsity. Specifically, M-SBL performs the following procedure:

(a) initialize the hyperparameters $\gamma = (\gamma_1, \dots, \gamma_n)$ and set $\Gamma = \mathrm{diag}(\gamma)$.
(b) compute the posterior variance and mean as follows:

$\Sigma = \Gamma - \Gamma A^H (\lambda I + A\Gamma A^H)^{-1} A\Gamma, \qquad M = \Gamma A^H (\lambda I + A\Gamma A^H)^{-1} B$

where $\lambda$ denotes a regularization parameter.
(c) update $\gamma$ from the posterior statistics (for example, the EM update $\gamma_i \leftarrow \Sigma_{ii} + \frac{1}{r}\|M^i\|_2^2$).


(d) repeat (b) and (c) until $\gamma$ converges to some fixed point.

Wipf and Rao [40] showed that increasing the number of snapshots in SBL reduces the number of local minimizers, so that the possibility of recovering the input signals from joint sparsity increases. Furthermore, in the noiseless setting, if we have $k$ linearly independent measurements and the nonzero rows of $X$ are orthogonal, there is a unique fixed point so that we can correctly recover the $k$-sparse input vectors. To the best of our knowledge, M-SBL is the only compressive sensing algorithm that achieves the same $\ell_0$-bound as MUSIC when $r = k$. However, the orthogonality condition on the input vectors that achieves the maximal sparsity level is more restrictive than that of MUSIC. Furthermore, no explicit expression for the maximum sparsity level was provided for the range $1 < r < k$.
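A minimal M-SBL sketch with the EM-style hyperparameter update is given below; the exact fixed-point update in [40] differs slightly, so this is illustrative only.

```python
import numpy as np

def msbl(A, B, lam=1e-3, n_iter=200):
    """M-SBL sketch using the EM update gamma_i <- Sigma_ii + ||M^i||^2 / r."""
    m, n = A.shape
    r = B.shape[1]
    gamma = np.ones(n)
    for _ in range(n_iter):
        AG = A * gamma                               # A @ diag(gamma)
        Sigma_b = lam * np.eye(m) + AG @ A.conj().T  # lam*I + A Gamma A^H
        K = AG.conj().T @ np.linalg.inv(Sigma_b)     # Gamma A^H Sigma_b^{-1}
        M = K @ B                                    # posterior mean (n x r)
        sigma_diag = gamma - np.einsum('ij,ji->i', K, AG).real  # diag(Sigma)
        gamma = sigma_diag + np.sum(np.abs(M) ** 2, axis=1) / r
    return gamma, M                                  # large gamma_i marks the support
```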

G. The MUSIC Algorithm [31], [33]

The MUSIC algorithm was originally developed to estimate continuous parameters such as bearing angle or DOA. However, the MUSIC criterion can still be modified to identify the support set from the finite index set $\{1, \dots, n\}$, as shown by Feng and Bresler [31], [54], and is reproduced here:

Theorem 3.4: [31], [54] (MUSIC Criterion) Assume that we have $k$ linearly independent measurement vectors, i.e., $\mathrm{rank}(B) = k$, where $B = AX$ and $\|X\|_0 = k$. Also, we assume that the columns of the sensing matrix $A \in \mathbb{R}^{m \times n}$ are in general position; that is, any collection of $m$ columns of $A$ are linearly independent. Then, for any $j \in \{1, \dots, n\}$, $j \in \mathrm{supp}\,X$ if and only if

$a_j \in R(B)$   (III.6)

or equivalently

$Q^H a_j = 0$   (III.7)

where $Q \in \mathbb{R}^{m \times (m-k)}$ consists of orthonormal columns such that $Q^H B = 0$, so that $R(Q) = R(B)^\perp$, which is often called the “noise subspace”. Here, $R(C)$ for a matrix $C$ denotes the range space of $C$.

Proof: By the assumption, the matrix $B$ of multiple measurements can be factored as a product $B = A_S X^S$, where $S = \mathrm{supp}\,X$, $A_S$ is the $m \times k$ matrix which consists of the columns of $A$ whose indices are in $S$, and $X^S$ is the $k \times k$ matrix that consists of the rows of $X$ whose indices are in $S$. Since $A_S$ has full column rank and $X^S$ has full row rank, $R(B) = R(A_S)$. Then we can obtain a singular value decomposition $B = U\Sigma V^H$ with $U \in \mathbb{R}^{m \times k}$ and $R(U) = R(B)$, and take $Q$ with orthonormal columns spanning $R(U)^\perp$. Then $Q^H a_j = 0$ if and only if $a_j \in R(B) = R(A_S)$, so that $a_j$ can be expressed as a linear combination of the columns of $A_S$. Since the columns of $A$ are in general position, $a_j \in R(A_S)$ if and only if $j \in S$.

Note that the MUSIC criterion (III.6) holds for all $k \le m - 1$ if the columns of $A$ are in general position. Using the compressive sensing terminology, this implies that the recoverable sparsity level by MUSIC (with probability 1 for the noiseless measurement case) is given by

$\|X\|_0 \le m - 1 = \mathrm{spark}(A) - 2$   (III.8)

where the last equality comes from the definition of the spark. Therefore, the bound (I.3) can be achieved by MUSIC when $\mathrm{rank}(B) = \|X\|_0$. However, whenever $\mathrm{rank}(B) < \|X\|_0$, the MUSIC condition (III.6) does not hold. This is a major drawback of MUSIC compared to the compressive sensing algorithms, which allow perfect reconstruction with extremely high probability by increasing the number of sensor elements $m$. A small sketch of MUSIC-based support identification follows.
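This sketch assumes the full-rank condition $\mathrm{rank}(B) = k$ and noiseless data; the normalization and ranking are ad hoc choices on our part.

```python
import numpy as np

def music_support(A, B, k):
    """Classical MUSIC support identification (Feng-Bresler form): the k
    columns whose projections onto the noise subspace R(B)^perp vanish
    (numerically) are declared in-support."""
    U, _, _ = np.linalg.svd(B)
    Q = U[:, k:]                                   # noise subspace basis, Q^H B = 0
    cost = np.linalg.norm(Q.conj().T @ A, axis=0) / np.linalg.norm(A, axis=0)
    return np.sort(np.argsort(cost)[:k])           # k smallest residuals
```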

H. Cumulant MUSIC

The fourth-order cumulant or higher order MUSIC was proposed by Porat and Friedlander [41] and Cardoso [42] to improve the number of resolvable sources over the conventional second-order MUSIC. Specifically, cumulant MUSIC derives a MUSIC-type subspace criterion from the cumulant of the observation matrix. It has been shown that cumulant MUSIC can resolve more sources than conventional MUSIC for specific array geometries [43]. However, a significant increase in the variance of the estimate of a weak source in the presence of stronger sources has been reported, which was not observed for second-order MUSIC [44]. This increase often prohibits the use of fourth-order methods, even at large SNR, when the dynamic range of the sources is important [44]. Furthermore, for general array geometries, the performance of cumulant MUSIC is not clear. Therefore, we need to develop a new type of algorithm that can overcome these drawbacks.

I. Main Contributions of Compressive MUSIC

Note that the existing MMV compressive sensing approaches are based on a probabilistic guarantee, whereas array signal processing provides a deterministic guarantee. Rather than taking such extreme viewpoints to address an MMV problem, the main contribution of CS-MUSIC is to show that we should take the best of both approaches. More specifically, we show that as long as a partial support of size $k - r$ can be estimated with any compressive sensing algorithm, the remaining unknown support of size $r$ can be estimated deterministically using a novel generalized MUSIC criterion. By allowing such hybridization, our CS-MUSIC can overcome the drawbacks of all existing approaches and achieves a recovery performance that had not been achievable by any of the aforementioned MMV algorithms. Hence, the following sections discuss what conditions are required for the generalized MUSIC and partial support recovery steps to succeed, and how CS-MUSIC outperforms existing methods.

IV. GENERALIZED MUSIC CRITERION FOR COMPRESSIVE MUSIC

This section derives an important component of compressive MUSIC, which we call the generalized MUSIC criterion. This


extends the MUSIC criterion (III.6) to the case $\mathrm{rank}(B) < \|X\|_0$. Recall that when we obtain $k$ linearly independent measurement vectors, we can determine the support of the multiple signals with the condition that $j \in \mathrm{supp}\,X$ if and only if $a_j \in R(B)$. In general, if we have $r$ linearly independent measurement vectors, where $r \le k$, we have the following.

Theorem 4.1 (Generalized MUSIC Criterion): Let $k$ and $r$ be positive integers such that $r \le k$. Suppose that we are given a sensing matrix $A \in \mathbb{R}^{m \times n}$ and an observation matrix $B \in \mathbb{R}^{m \times r}$. Assume that the MMV problem is in canonical form, that is, $B = AX$ with $\mathrm{rank}(B) = r$ and $\|X\|_0 = k$, and let $S = \mathrm{supp}\,X$. Then, the following holds:
(a) $R(B) \subseteq R(A_S)$.
(b) If the nonzero rows of $X$ are in general position (i.e., any collection of $r$ nonzero rows is linearly independent) and $A$ satisfies the RIP condition with $\delta_{2k-r+1}^L < 1$, then for any index set $S' \subseteq S$ with $|S'| = k - r$,

$R([A_{S'}\ \ B]) = R(A_S).$

Proof: See Appendix A.

Note that, unlike the classical MUSIC criterion, a condition for the left RIP constant, $\delta_{2k-r+1}^L < 1$, is required in Theorem 4.1 (b). This condition has the following very interesting implication.

Lemma 4.2: For the canonical form MMV, $A$ satisfies the RIP with $\delta_{2k-r+1}^L < 1$ if and only if

$\|X\|_0 = k < \dfrac{\mathrm{spark}(A) + r - 1}{2}.$   (IV.1)

Proof: Since $A$ has the left RIP condition $\delta_{2k-r+1}^L < 1$, any collection of $2k - r + 1$ columns of $A$ is linearly independent, so that $\mathrm{spark}(A) > 2k - r + 1$. Hence

$k < \dfrac{\mathrm{spark}(A) + r - 1}{2}$

since $r = \mathrm{rank}(B)$. For the converse, assume the condition (IV.1). Then we have $\mathrm{spark}(A) > 2k - r + 1$, which implies $\delta_{2k-r+1}^L < 1$.

Hence, if $A$ satisfies the RIP with $\delta_{2k-r+1}^L < 1$ and if we have a $k$-sparse coefficient matrix $X$ that satisfies $B = AX$, then $X$ is the unique solution of the MMV. In other words, under the above RIP assumption, for the noiseless case we can achieve the $\ell_0$-uniqueness bound, which is the same as the theoretical limit (I.3). Note that when $r = k$, the RIP condition becomes $\delta_{k+1}^L < 1$, i.e., any $k + 1$ columns of $A$ are linearly independent, which is what the classical MUSIC criterion requires. By the above lemma, we can obtain a generalized MUSIC criterion for the case $r < k$ in the following theorem.

Theorem 4.3: Assume that $A$, $X$, and $B = AX$ satisfy $\mathrm{rank}(B) = r$ and the conditions in Theorem 4.1 (b). If $S' \subseteq \mathrm{supp}\,X$ with $|S'| = k - r$, and $A_{S'}$ consists of the columns of $A$ whose indices are in $S'$, then for any $j \in \{1, \dots, n\} \setminus S'$

$\mathrm{rank}\,[A_{S'}\ \ B\ \ a_j] = \mathrm{rank}\,[A_{S'}\ \ B]$   (IV.2)

if and only if $j \in \mathrm{supp}\,X$.
Proof: See Appendix B.

Fig. 1. Geometric view of the generalized MUSIC criterion: the dashed line corresponds to the conventional MUSIC criterion, where the squared norm of the projection of $a_j$ onto the noise subspace $R(Q)$ may not be zero. In contrast, for $j \in \mathrm{supp}\,X$, $P_{R(Q)} a_j$ is orthogonal to the subspace $R(Q')$, so that we can identify the indices of the support of $X$ with the generalized MUSIC criterion.

When $r = k$, $S' = \emptyset$ and (IV.2) is the same as the classical MUSIC criterion (III.6) since $[A_{S'}\ \ B] = B$. However, the generalized MUSIC criterion (IV.2) for $r < k$ is based on the rank of a matrix, which is prone to error under an incorrect estimate of the noise subspace when the measurements are corrupted by additive noise. Hence, rather than using (IV.2), the following equivalent criterion is more practical.

Corollary 4.4: Assume that $A$, $X$, $B = AX$, and $S'$ are the same as in Theorem 4.3, and let $Q$ consist of orthonormal columns such that $Q^H B = 0$. Then, for any $j \in \{1, \dots, n\} \setminus S'$,

$a_j^H \left( P_{R(Q)} - P_{R(P_{R(Q)} A_{S'})} \right) a_j = 0$   (IV.3)

if and only if $j \in \mathrm{supp}\,X$.
Proof: See Appendix B.

Note that the quantity $\|Q^H a_j\|^2$ in the MUSIC criterion (III.7) is now replaced by the quadratic form in (IV.3), where $P_{R(Q)} = QQ^H$. The following theorem shows that the operator $P_{R(Q)} - P_{R(P_{R(Q)} A_{S'})}$ has a very important geometrical meaning.

Theorem 4.5: Assume that we are given a noiseless MMV problem which is in canonical form. Also, suppose that $A$, $X$, and $B$ satisfy the conditions as in Theorem 4.1 (b). Let $S' \subseteq \mathrm{supp}\,X$ with $|S'| = k - r$, and let $Q$ and $Q'$ consist of orthonormal columns such that $Q^H B = 0$ and $Q'^H [A_{S'}\ \ B] = 0$. Then the following properties hold:
(a) $P_{R(P_{R(Q)} A_{S'})}$ is equal to the orthogonal projection onto $R(Q) \cap R([A_{S'}\ \ B])$.
(b) $P_{R(Q)} - P_{R(P_{R(Q)} A_{S'})}$ is equal to the orthogonal projection onto $R(Q')$.
(c) $R(Q')$ is equal to the orthogonal complement of $R([A_{S'}\ \ B])$ or, equivalently, of $R(A_S)$.
Proof: See Appendix C.

Fig. 1 illustrates the geometry of the corresponding subspaces. Unlike in MUSIC, the orthogonality of $P_{R(Q)} a_j$ needs to be checked with respect to $R(Q')$. Based on this geometry, we can obtain the following algorithms for support detection.


(Algorithm 1: Original form)

1) Find $k - r$ indices of $\mathrm{supp}\,X$ by any MMV compressive sensing algorithm, such as 2-thresholding or S-OMP.
2) Let $S'$ be the set of indices taken in Step 1 and let $Q$ consist of orthonormal columns such that $Q^H B = 0$.
3) For $j \in \{1, \dots, n\} \setminus S'$, calculate the quantities $\eta(j) = a_j^H \left( P_{R(Q)} - P_{R(P_{R(Q)} A_{S'})} \right) a_j$.
4) Make an ascending ordering of $\eta(j)$, choose the indices that correspond to the first $r$ elements, and put these indices into $S'$.

Alternatively, we can also use the signal subspace form to identify the support of $X$:

(Algorithm 2: Signal subspace form)

1) Find $k - r$ indices of $\mathrm{supp}\,X$ by any MMV compressive sensing algorithm, such as 2-thresholding or S-OMP.
2) Let $S'$ be the set of indices taken in Step 1 and let $P_{R([A_{S'}\ B])}$ be the orthogonal projection onto the signal subspace $R([A_{S'}\ \ B])$.
3) For $j \in \{1, \dots, n\} \setminus S'$, calculate the quantities $\zeta(j) = a_j^H P_{R([A_{S'}\ B])}\, a_j$.
4) Make a descending ordering of $\zeta(j)$, choose the indices that correspond to the first $r$ elements, and put these indices into $S'$.

In compressive MUSIC, we determine $k - r$ indices of $\mathrm{supp}\,X$ with CS-based algorithms such as 2-thresholding or S-OMP, where exact reconstruction is a probabilistic matter. After that process, we recover the remaining $r$ indices of $\mathrm{supp}\,X$ with the generalized MUSIC criterion, which is given in Theorem 4.3 or Corollary 4.4, and this reconstruction process is deterministic. This hybridization makes compressive MUSIC applicable for all ranges of $r$, outperforming all the existing methods. An end-to-end sketch combining the two steps is given below.

So far, we have discussed the recovery of the support of the multiple input vectors assuming that we know the size of the support. One of the disadvantages of the existing MUSIC-type algorithms is that, if the sparsity level is overestimated, spurious peaks are often observed. However, in CS-MUSIC, when we do not know the correct size of the support, we can still apply the following lemma to estimate the size of the support.

Lemma 4.6: Assume that $A$, $X$, and $B = AX$ satisfy the conditions in Theorem 4.1 (b), and that $k$ denotes the true sparsity level, i.e., $k = \|X\|_0$. Also, assume that we are given a candidate sparsity level $\hat{k}$ and a set $S'$ with $|S'| = \hat{k} - r$, where $S'$ is the partial support of size $\hat{k} - r$ estimated by any MMV compressive sensing algorithm. Also, we let $Q$ consist of orthonormal columns with $Q^H B = 0$. Then, $\hat{k} = k$ if and only if

$\eta(\hat{k}) := \min_{I} \sum_{j \in I} a_j^H \left( P_{R(Q)} - P_{R(P_{R(Q)} A_{S'})} \right) a_j = 0$   (IV.4)

Proof: Necessity is trivial by Corollary 4.4, so we only need to show the sufficiency of (IV.4) assuming the contrary. We divide the proof into two parts.
(i) $\hat{k} < k$: By Lemma 4.2, the quadratic form in (IV.3) is strictly positive for any index outside the support. As in the proof of Corollary 4.4, this implies that the summands in (IV.4) cannot all vanish, so that $\eta(\hat{k}) > 0$.
(ii) $\hat{k} > k$: Here, we have already chosen at least $\hat{k} - r$ indices of the support of $X$. By Corollary 4.4, (IV.3) holds only for, at most, $k - (\hat{k} - r) < r$ elements, so that $\eta(\hat{k}) > 0$.

The minimization in (IV.4) is over all index sets $I$ of size $r$ that consist of elements from $\{1, \dots, n\}$ and include no elements from $S'$. For fixed $\hat{k}$ and $S'$, this minimization can be performed by first computing the summands for all $j \notin S'$ and then selecting the $r$ of smallest magnitude. Lemma 4.6 also tells us that if we calculate $\eta(\hat{k})$ by increasing $\hat{k}$ from $r$, then the first $\hat{k}$ such that $\eta(\hat{k}) = 0$ corresponds to the unknown sparsity level. For noisy measurements, we can choose the first local minimizer of $\eta(\hat{k})$ as $\hat{k}$ increases. A sketch of this sweep follows.
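The sketch below performs the sparsity-level sweep; for simplicity it uses the data-fit residual of the CS-MUSIC estimate as a surrogate cost rather than the exact quantity in (IV.4), which is an assumption on our part.

```python
import numpy as np

def estimate_sparsity(A, B, k_max, partial_solver=somp):
    """Sweep candidate sparsity levels khat = r, ..., k_max and return the
    first level whose CS-MUSIC estimate explains the data (noiseless case);
    with noise, Lemma 4.6 suggests taking the first local minimizer instead."""
    r = B.shape[1]
    costs = []
    for khat in range(r, k_max + 1):
        S = cs_music(A, B, khat, partial_solver)
        X = np.zeros((A.shape[1], r))
        X[S] = np.linalg.pinv(A[:, S]) @ B          # least-squares fit on the support
        costs.append(np.linalg.norm(B - A @ X))
        if costs[-1] <= 1e-8 * np.linalg.norm(B):   # first (near) zero -> khat = k
            return khat, costs
    return r + int(np.argmin(costs)), costs         # fallback: global minimizer
```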

V. SUFFICIENT CONDITIONS FOR SPARSE RECOVERY USING COMPRESSIVE MUSIC

A. Large System MMV Model

Note that the recovery performance of compressive MUSIC relies entirely on the correct identification of the $k - r$ partial support indices via compressive sensing approaches and of the remaining $r$ indices using the generalized MUSIC criterion. In practice, the measurements are noisy, so the theory we derived for noiseless measurements should be modified. In this section, we derive sufficient conditions on the minimum number of sensor elements (the number of rows in each measurement vector) that guarantee correct support recovery by compressive MUSIC. Note that for the success of compressive MUSIC, both the CS step and the generalized MUSIC step should succeed. Hence, this section derives separate conditions for each step.

For SMV compressive sensing, Fletcher, Rangan, and Goyal [45] derived an explicit expression for the minimum number of sensor elements for the 2-thresholding algorithm to find the correct support set. Also, Fletcher and Rangan [46] derived a sufficient condition for S-OMP to recover the support. Even though their derivation is based on a large system model with a Gaussian sensing matrix, it has provided very useful insight into the SMV compressive sensing problem. Therefore, we employ a large system model to derive sufficient conditions for compressive MUSIC.

Definition 5.1: A large system noisy canonical MMV model is defined as an estimation problem of


$k$-sparse vectors $X$ that share a common sparsity pattern through multiple noisy snapshots using the following formulation:

$B = AX + N$   (V.1)

where $A \in \mathbb{R}^{m \times n}$ is a random matrix with i.i.d. Gaussian entries, $N$ is an additive noise matrix, and $m$, $n$, and $k$ grow to infinity proportionally, while the number of snapshots $r$ is either a fixed number or proportionally increasing with respect to $k$. Here, we assume that the corresponding limiting ratios exist and are bounded away from degenerate values.

These are technical conditions that prevent the problem parameters from reaching equivalent values in the large system limit.

B. Sufficient Condition for Generalized MUSIC

For the case of noisy measurements, $R(B)$ is corrupted and the corresponding noise subspace estimate is not exact. However, the following theorem shows that if the number of sensor elements is sufficiently large, then the generalized MUSIC estimate is consistent and achieves the correct estimation of the remaining $r$ indices for sufficiently large SNR.

Theorem 5.1: For a large system noisy canonical MMV model, if we have a correct partial support estimate of size $k - r$, then we can find the remaining $r$ indices of $\mathrm{supp}\,X$ with the generalized MUSIC criterion if

(V.2)

holds for some $\epsilon > 0$, provided that the SNR is sufficiently large, where $\kappa(X)$ denotes the condition number and $\sigma_{\min}(X)$ denotes the smallest singular value of $X$.
Proof: See Appendix D.

Note that for $r = k$, the condition reduces to requiring $m \ge (1 + \epsilon)k$ sensor elements for some $\epsilon > 0$. However, as $r/k$ decreases, the first term dominates and we need more sensor elements.

C. Sufficient Condition for Partial Support Recovery Using 2-Thresholding

Now, define the thresholding estimate of the partial support as the set of indices with the largest correlations, computed as in Section III-B. We next derive sufficient conditions for the success of 2-thresholding in detecting the $k - r$ partial support, either when $r$ is a small fixed number or when $r$ is proportionally increasing with respect to $k$.

Theorem 5.2: For a large system noisy canonical MMV model, suppose the signal parameters are deterministic sequences and that

(V.3)

(V.4)

hold, where the quantities in (V.3) and (V.4) are defined through

(V.5)

and where the reference value is the $(k-r)$-th largest when the row norms $\|X^j\|$, $j \in \mathrm{supp}\,X$, are ordered in descending order. Then, 2-thresholding asymptotically finds the $k - r$ partial sparsity pattern.

Proof: See Appendix F.

• For the noiseless single measurement vector (SMV) case, i.e., $r = 1$, the condition becomes

(V.6)

by using Lemma E.2 in Appendix E. Hence, we obtain the corresponding bound, for some $\epsilon > 0$, as the sufficient condition for 2-thresholding in SMV cases. Compared to the result in [45], our bound has a slight gain. This is because, even for the SMV problem, the one remaining index can be estimated using the generalized MUSIC criterion.

• If $r$ is a fixed number, then by using Lemma E.2 in Appendix E, our bound can be simplified as

(V.7)


Therefore, the MMV gain over SMV mainly comes from this factor.
• If $r$ is proportionally increasing with respect to $k$, then under the condition (V.6) the corresponding factor disappears, which provides more MMV gain compared to (V.7).

D. Sufficient Condition for Partial Support Recovery Using Subspace S-OMP

Next, we consider the minimum number of measurements for compressive MUSIC with S-OMP. In analyzing S-OMP, rather than analyzing the distribution of the residual correlations over the indices chosen in the first steps of S-OMP, we consider the following version of subspace S-OMP due to its superior performance [32], [47] (a code sketch follows the list).
1) Initialize $l = 0$ and $S_0 = \emptyset$.
2) Compute $P_l^\perp$, the projection operator onto the orthogonal complement of the span of $A_{S_l}$.
3) Compute an orthonormal basis $U_l$ of the projected signal subspace $R(P_l^\perp B)$ and, for all $j \notin S_l$, compute the correlations $\|U_l^H a_j\|$.
4) Take $j_{l+1} = \arg\max_j \|U_l^H a_j\|$ and set $S_{l+1} = S_l \cup \{j_{l+1}\}$. If $l + 1 < k - r$, increase $l$ and return to Step 2.
5) The final estimate of the sparsity pattern is obtained by combining $S_{k-r}$ with the generalized MUSIC step.
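A minimal sketch of Steps 1–4 is given below; following the distinction drawn in Section VII-B, the correlations are not normalized by the projected column norms, which is one of several reasonable variants.

```python
import numpy as np

def subspace_somp(A, B, n_pick):
    """Subspace S-OMP sketch: deflate the data by the projector onto
    span(A_S)^perp, take an orthonormal basis of the projected signal
    subspace, and pick the column with the largest correlation."""
    m = A.shape[0]
    S = []
    for _ in range(n_pick):
        if S:
            Q, _ = np.linalg.qr(A[:, S])
            P = np.eye(m) - Q @ Q.conj().T      # projector onto span(A_S)^perp
        else:
            P = np.eye(m)
        U, s, _ = np.linalg.svd(P @ B, full_matrices=False)
        U = U[:, s > 1e-10 * s[0]]              # basis of the projected signal subspace
        scores = np.linalg.norm(U.conj().T @ (P @ A), axis=0)
        scores[S] = -np.inf                     # never re-pick an index
        S.append(int(np.argmax(scores)))
    return sorted(S)
```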

Now, we consider two cases according to the number of multiple measurement vectors. First, we consider the case when the number of multiple measurement vectors is a finite fixed number; conventional compressive sensing (the SMV problem) is this kind of case. Second, we consider the case when $r$ is proportional to $k$; this case includes the conventional MUSIC case.

Theorem 5.3: For a large system noisy canonical MMV model, suppose the following conditions hold:
(a) $r$ is a fixed finite number.
(b) The SNR satisfies

(V.8)

If we have

(V.9)

then we can find the correct $k - r$ partial support indices of $\mathrm{supp}\,X$ by applying subspace S-OMP.

Proof: See Appendix G.

• As a simple corollary of Theorem 5.3, when $r = 1$, we can easily show that the number of sensor elements required for conventional OMP to find all $k$ support indices in the SMV problem is given by

(V.10)

for a small $\epsilon > 0$. This is equivalent to the result in [45].

• When $r > 1$, the number of sensor elements for subspace S-OMP is reduced correspondingly for some $\epsilon > 0$. Hence, the sampling ratio gain is the reciprocal of the number of multiple measurement vectors.
• Since $k \to \infty$ in our large system model, (V.8) tells us that the required SNR should increase to infinity.

Next, we consider the case where $r$ is proportionally increasing with respect to $k$. In this case, we have the following theorem.

Theorem 5.4: For a large system noisy canonical MMV model, suppose the following conditions hold.
(a) $r$ is proportionally increasing with respect to $k$, so that the corresponding limiting ratios exist.

(b) Let the SNR satisfy

(V.11)

Then if we have

(V.12)

for some $\epsilon > 0$, where the quantities in (V.12) are determined by the limiting empirical distributions of the singular values of the signal in the large system limit (see Appendix G). Then we can find the correct $k - r$ partial support indices of $\mathrm{supp}\,X$ by applying subspace S-OMP.

Proof: See Appendix G.

• As a corollary of Theorem 5.4, when $r$ is proportional to $k$, we can see that the number of sensor elements required for subspace S-OMP to find the partial support indices is, for a small $\epsilon > 0$, the same as the number of sensor elements required for MUSIC.

• We can expect that the number of sensor elements required for subspace S-OMP to find the partial support indices is at most $(1 + \epsilon)k$ in the noiseless case, where $\epsilon$ is an arbitrarily small number; hence, the extra factor is not necessary.
• Unlike the case in Theorem 5.3, the SNR condition is now lower bounded by a finite number. This implies that we do not need infinite SNR for support recovery, in contrast to SMV or Theorem 5.3. This is one of the important advantages of MMV over SMV.

VI. NUMERICAL RESULTS

In this section, we demonstrate the performance of compressive MUSIC. This new algorithm is compared to the conventional


Fig. 2. Phase transition map for compressive MUSIC with subspace S-OMP when the nonzero row norms of $X$ are constant on the support, for two snapshot settings (a) and (b). The overlaid curves are calculated based on (VI.1).

MMV algorithms, especially 2-SOMP, 2-thresholding, and the mixed norm approach [26]. We do not compare the new algorithm with the classical MUSIC algorithm since it fails when $\mathrm{rank}(B) < k$. We declared the algorithm a success if the estimated support is the same as the true $\mathrm{supp}\,X$, and the success rates were averaged over 5000 experiments. Elements of the sensing matrix were generated from a Gaussian distribution having zero mean and variance $1/m$, and the support indices were chosen randomly. The maximum number of iterations was fixed for the S-OMP algorithm.

According to (V.2), (V.9), and (V.12), for noiseless measurements, piecewise continuous boundaries exist for the phase transition of CS-MUSIC with subspace S-OMP:

(VI.1)

Note that our canonical MMV model includes many MMV problems in which the number of snapshots is larger than the sparsity level, since our canonical MMV model reduces the effective number of snapshots to $r = \mathrm{rank}(B)$. Fig. 2(a) shows a typical phase transition map of our compressive MUSIC with subspace S-OMP for noiseless measurements when the nonzero row norms are constant on the support. Even though the simulation setup is not in the large system regime, the deviation is quite small, so that we can expect the theoretical curve to be a boundary for the phase transition. Fig. 2(b) corresponds to the case with a larger number of snapshots, and we use the corresponding expression as a boundary. The results clearly indicate the tightness of our sufficient condition.

Similarly, multiple piecewise continuous boundaries exist for the phase transition map for compressive MUSIC with 2-thresholding:

(VI.2)

Since the phase transition boundary depends on the unknown joint sparse signal through its row-norm distribution, we investigate this effect. Fig. 3(a) and (b) show typical phase transition maps of our compressive MUSIC with 2-thresholding for noiseless measurements when the nonzero row norms are constant on the support; Fig. 3(c) and (d) correspond to the same case except for a nonuniform row-norm profile. We overlaid the theoretically calculated phase boundaries on the phase transition diagrams. The empirical phase transition diagrams clearly reveal the effect of the row-norm distribution. Still, the theoretically calculated boundary clearly indicates the tightness of our sufficient condition.

Fig. 4 shows the success rates of S-OMP, 2-thresholding, and

compressive MUSIC with subspace S-OMP and 2-thresholding for 40 dB noisy measurements when the nonzero row norms of $X$ are constant on the support. When $r = 1$, the performance level of the compressive MUSIC algorithm is basically the same as that of a compressive sensing algorithm such as 2-thresholding and S-OMP. When $r > 1$, the recovery rate of the compressive MUSIC algorithm is higher than in the case $r = 1$, and the compressive MUSIC algorithm outperforms the conventional compressive sensing algorithms. If we increase the number of snapshots to 16, the success of the compressive MUSIC algorithm becomes nearly deterministic and approaches the $\ell_0$ bound, whereas the conventional compressive sensing algorithms do not.

In order to compare compressive MUSIC with the other methods more clearly, the recovery rates of various algorithms are plotted in Fig. 5(a) for S-OMP, compressive MUSIC with subspace S-OMP, and the mixed norm approach; and in Fig. 5(b) for 2-thresholding and compressive MUSIC with 2-thresholding, with constant nonzero row norms and 40 dB SNR. Note that compressive MUSIC outperforms the existing methods.

To show the relationship between the recovery performance in the noisy setting and the condition number of the signal matrix $X$, we performed the simulation on the recovery results for three different types of the source model $X$. More specifically, the singular values of $X$ are set to be exponentially decaying at three different rates. In this simulation, we use noisy samples that are corrupted by additive Gaussian noise at 40 dB SNR. Fig. 6(a) shows the results when the partial support entries are known a priori by an “oracle” algorithm, whereas the partial support entries are determined by subspace S-OMP in Fig. 6(b) and by thresholding in Fig. 6(c). The results provide evidence of the significant impact of the condition number of $X$.


Fig. 3. Phase transition map for compressive MUSIC with 2-thresholding: (a), (b) with constant nonzero row norms, and (c), (d) with a nonuniform row-norm profile. The overlaid curves are calculated based on (VI.2).

Fig. 7 illustrates the cost function used to estimate the unknown sparsity level, which confirms that compressive MUSIC can accurately estimate the unknown sparsity level as described in Lemma 4.6. The correct support size is marked with a circle. Note that the cost $\eta(\hat{k})$ has the smallest value at that point for the noiseless measurement case, as shown in Fig. 7(a), confirming our theory. For the 40 dB noisy measurement case, we can still easily find the correct $\hat{k}$ since it corresponds to the first local minimizer as $\hat{k}$ increases, as shown in Fig. 7(b).

VII. DISCUSSION

A. Comparison With Subspace-Augmented MUSIC [47]

Recently, Lee and Bresler [47] independently developed a hybrid MMV algorithm called subspace-augmented MUSIC (SA-MUSIC). SA-MUSIC performs the following procedure.
1) Find $k - r$ indices of $\mathrm{supp}\,X$ by applying S-OMP to the augmented MMV problem in which the columns of the augmented observation matrix form an orthonormal basis for $R(B)$.

2) Let $S'$ be the set of indices taken in Step 1.
3) For $j \in \{1, \dots, n\} \setminus S'$, compute $\|Q'^H a_j\|$, where $Q'$ consists of orthonormal columns such that $Q'^H [A_{S'}\ \ B] = 0$.
4) Make an ascending ordering of $\|Q'^H a_j\|$, choose the indices that correspond to the first $r$ elements, and put these indices into $S'$.

By Theorem 4.5(c), we can see that subspace-augmented MUSIC is equivalent to compressive MUSIC, since the MUSIC criterion applied to the subspace-augmented measurement and the generalized MUSIC criterion in compressive MUSIC are equivalent. Therefore, we can expect that the performance of both algorithms should be similar except for the following differences. First, the subspace S-OMP in Lee and Bresler [47] applies the subspace decomposition once to the data matrix, whereas our analysis for subspace S-OMP is based on a subspace decomposition of the residual matrix at each step. Hence, our subspace S-OMP is more similar to that of [32]. However, based on our experiments, the two versions of subspace S-OMP provide similar performance when combined with the generalized MUSIC criterion. Second, the theoretical analysis of SA-MUSIC is based on an RIP condition, whereas ours is based on a large system limit model. One of the advantages of RIP-based analysis is its generality for any type of sensing matrix. However, our large system analysis can provide explicit bounds for the number of required sensor elements and the SNR requirement thanks to the Gaussian nature of the sensing matrix.

B. Comparison With Results of Davies and Eldar [32]

Another recent development in joint sparse recovery approaches is the rank-aware algorithm by Davies and Eldar [32]. The algorithm is derived in the noiseless measurement setup and is basically the same as our subspace S-OMP in Section V, except that the correlation in Step 3 is normalized after applying the projection to the original dictionary $A$. For the full rank measurement, i.e., $\mathrm{rank}(B) = k$, the performance of the rank-aware subspace S-OMP is equivalent to that of MUSIC. However, for $\mathrm{rank}(B) < k$, the lack of the generalized MUSIC criterion may make the algorithm inferior, since our generalized MUSIC criterion can identify the remaining support deterministically, whereas the rank-aware subspace S-OMP must estimate the remaining support with additional error-prone greedy steps.

C. Compressive MUSIC With a Mixed Norm Approach

Another important issue in CS-MUSIC is how to combine the generalized MUSIC criterion with nongreedy joint sparse recovery algorithms such as a mixed norm approach [26]. Toward this, the greedy step required for the analysis of CS-MUSIC should be modified. One solution to mitigate this problem is to


Fig. 4. Recovery rates for various algorithms at 40 dB SNR when the nonzero rows of $X$ have constant norm. Each row (from top to bottom) indicates the recovery rates by S-OMP, 2-thresholding, and compressive MUSIC with subspace S-OMP and 2-thresholding; the columns correspond to increasing numbers of snapshots.

choose a partial support from the nonzero support of the solution and use it as a partial support for the generalized MUSIC criterion. However, we still need a criterion to identify a correct partial support from the solution, since the generalized MUSIC criterion only holds with a correct partial support. Recently, we showed that a correct partial support out of a $k$-sparse solution can be identified using a subspace fitting criterion [48]. Accordingly, the joint sparse recovery problem can be relaxed to the problem of finding a solution that has at least $k - r$ correct support indices out of the nonzero support estimate. This is a significant relaxation of CS-MUSIC in its present form, which requires $k - r$ successful consecutive greedy steps. Accordingly, the new formulation was shown to significantly improve the performance of CS-MUSIC for joint-sparse recovery [48]. However, the new results are beyond the scope of this paper and will be reported separately.


Fig. 5. Recovery rates by various MMV algorithms for a uniform source at 40 dB SNR: (a) recovery rates for S-OMP, compressive MUSIC with subspace S-OMP, and the mixed norm approach, and (b) recovery rates for 2-thresholding and compressive MUSIC with 2-thresholding.

Fig. 6. Recovery rates by compressive MUSIC when the partial supports are estimated by (a) an “oracle” algorithm, (b) subspace S-OMP, and (c) 2-thresholding, for exponentially decaying source singular values; a faster decay gives a larger condition number of $X$. The measurements are corrupted by additive Gaussian noise at 40 dB SNR.

Fig. 7. Cost function for sparsity estimation when the measurements are (a) noiseless and (b) corrupted by additive Gaussian noise at 40 dB SNR. The circles illustrate the local minima, whose positions correspond to the true sparsity level.


Fig. 8. Rate regions for the multiple measurement vector problem and CS-MUSIC, when (a) $r$ is a fixed number, and (b) $r$ is proportional to $k$.

D. Relation With Distributed Compressive Sensing Coding Region

Our theoretical results as well as numerical experiments indicate that the number of resolvable sources can increase thanks to the exploitation of the noise subspace. This observation leads us to investigate whether CS-MUSIC achieves the rate region in distributed compressed sensing [20], which is analogous to the Slepian-Wolf coding regions in distributed source coding [49]. Recall the necessary condition for maximum likelihood SMV sparse recovery given in [45], which holds asymptotically in the large system limit. Let $m_j$ denote the number of sensor elements at the $j$-th measurement vector. If the total number of samples from the $r$ vectors is smaller than that of SMV-CS, then we cannot expect perfect recovery even from noiseless measurement vectors. Furthermore, the minimum number of sensor elements should be at least $k$ to recover the values of the $j$-th coefficient vector, even when the indices of the support are correctly identified. Hence, the converse region is defined accordingly, as shown in Fig. 8(a) and (b).

Now, for a fixed $r$, our analysis shows that the achievable rate by CS-MUSIC is as illustrated in Fig. 8(a). On the other hand, if $r$ is proportional to $k$, the achievable rate by CS-MUSIC is as shown in Fig. 8(b). Therefore, CS-MUSIC approaches the converse region at the extremes, whereas for the intermediate ranges of $r$ there exists a performance gap from the converse region. However, even in this case, if we consider separate SMV decoding without considering the correlation structure in MMV, the required sampling rate is significantly larger than that of CS-MUSIC. This analysis clearly reveals that CS-MUSIC is quite an efficient decoding method from the distributed compressed sensing perspective.

E. Discretization

The MUSIC algorithm was originally developed for spectralestimation or direction-of-arrival (DOA) estimation problem,where the unknown target locations and bearing angle are con-tinuously varying parameters. If we apply CS-MUSIC to this

type of problem to achieve a finer resolution, the search region should be discretized more finely with a large . The main problem with such discretization is that the mutual coherence of the dictionary approaches 1, which can violate the RIP condition of CS-MUSIC. Therefore, the trade-off between resolution and the RIP condition should be investigated; Duarte and Baraniuk recently studied such a trade-off in the context of spectral compressive sensing [50]. Since this problem is important not only for CS-MUSIC but also for SMV compressed sensing problems that originate from discretizing continuous problems, a systematic study is needed in the future.
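The coherence blow-up under grid refinement is easy to observe numerically. The following Python sketch (sensor count, grid sizes, and function names are ours, chosen purely for illustration) builds a unit-norm Fourier-type steering dictionary and prints its mutual coherence as the angular grid is refined:

```python
import numpy as np

def steering_dictionary(m, n_grid):
    # m sensors, n_grid candidate locations on a uniform normalized grid
    grid = np.linspace(0.0, 1.0, n_grid, endpoint=False)
    A = np.exp(2j * np.pi * np.outer(np.arange(m), grid))
    return A / np.sqrt(m)                       # unit-norm columns

def mutual_coherence(A):
    G = np.abs(A.conj().T @ A)                  # Gram matrix magnitudes
    np.fill_diagonal(G, 0.0)                    # ignore self-correlations
    return G.max()

m = 32
for n_grid in (32, 64, 256, 1024):
    mu = mutual_coherence(steering_dictionary(m, n_grid))
    print(f"n = {n_grid:5d}   coherence = {mu:.4f}")
# The printed coherence climbs from ~0 (orthogonal DFT) toward 1 as the
# grid is refined, illustrating the resolution/RIP trade-off above.
```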

VIII. CONCLUSION AND FUTURE WORK

In this paper, we developed a novel compressive MUSIC algorithm that outperforms conventional MMV algorithms. The algorithm estimates part of the support using conventional MMV algorithms, while the remaining support indices are estimated using a generalized MUSIC criterion, which was derived from the RIP of the sensing matrix. Theoretical analysis as well as numerical simulation demonstrated that our compressive MUSIC algorithm achieves the bound as approaches the nonzero support size . This is fundamentally different from existing information-theoretic analysis [51], which requires the number of snapshots to approach infinity to achieve the bound. Furthermore, as approaches 1, the recovery rate approaches that of conventional SMV compressive sensing. We also provided a method that can estimate the unknown sparsity, even under noisy measurements. Theoretical analysis based on a large system MMV model showed that the required number of sensor elements for compressive MUSIC is much smaller than that of conventional MMV compressive sensing. Furthermore, we provided a closed-form expression of the minimum SNR that guarantees the success of compressive MUSIC.
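To make the two-stage structure concrete, the following Python sketch gives one plausible reading of the second stage: an MMV algorithm supplies a partial support, and the remaining indices are ranked by a noise-subspace cost built from the signal subspace augmented with the already-selected atoms. All names are ours, the partial support is assumed correct, and this is an illustration under those assumptions rather than the authors' reference implementation:

```python
import numpy as np

def cs_music_second_stage(A, Y, K, partial_support):
    # Estimate the signal subspace from the snapshots Y (rank r when noiseless)
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))
    # Augment the signal subspace with the atoms already found by MMV CS
    B = np.hstack([U[:, :r], A[:, list(partial_support)]])
    Q, _ = np.linalg.qr(B)
    P_perp = np.eye(A.shape[0]) - Q @ Q.conj().T    # noise-subspace projector
    # Generalized-MUSIC-style cost: small values indicate support membership
    cost = np.array([np.real(a.conj() @ P_perp @ a) / np.real(a.conj() @ a)
                     for a in A.T])
    cost[list(partial_support)] = np.inf            # exclude known indices
    extra = np.argsort(cost)[:K - len(partial_support)]
    return sorted(set(partial_support) | set(int(j) for j in extra))
```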

Compressive sensing and array signal processing represent two extreme approaches to the MMV problem: one is based on a probabilistic guarantee, the other on a deterministic one. One important contribution of this paper is to abandon such extreme viewpoints and propose a method that takes the best of both worlds. Even though the resulting idea appears simple, we believe that it opens a new area of research. Since extensive research results are available from the array


signal processing community, combining these well-established results with compressive sensing may produce algorithms superior to the compressive MUSIC algorithm in its present form. Another interesting observation is that the RIP condition , which is essential for compressive MUSIC to achieve the bound, is identical to the recovery condition for the so-called modified CS [52]. In modified CS, support indices are known a priori and the remaining

are estimated using SMV compressive sensing. The duality between compressive MUSIC and modified CS does not appear incidental and should be investigated. Rather than estimating indices first using MMV compressive sensing and estimating the remaining ones using the generalized MUSIC criterion, there might be a new algorithm that estimates support indices first in a deterministic manner, while the remaining

ones are estimated using compressive sensing. This direction of research might reveal new insights about the geometry of the MMV problem.

APPENDIX A
PROOF OF THEOREM 4.1

Proof: (a) First, we show that . Since , we have for . Take a set with . Then, there exists a nonzero

such that

(A.1)

where denotes a submatrix collecting the rows corresponding to the index set . Since the columns of are linearly independent,

but . By (A.1)

so that .
(b) Suppose that there is such that

Since , there is a such that and . Hence, we have

By the RIP condition, . It follows that whenever and , we have

. Since , there is a such that . Hence, if and

, then by the RIP condition of , we have .

Finally, it suffices to show that, for any

Suppose that . Then there is a set such that and . Then, there exists a such that

This is impossible since the nonzero rows of are in generalposition.
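The general-position assumption used here is generic: for a Gaussian matrix, every set of m columns is linearly independent with probability 1, so spark(A) = m + 1. A brute-force numerical check for a tiny instance (sizes are ours, chosen for illustration):

```python
import numpy as np
from itertools import combinations

def has_generic_spark(A):
    # True iff every m-subset of columns is independent, i.e. spark(A) = m + 1
    m, n = A.shape
    return all(np.linalg.matrix_rank(A[:, list(S)]) == m
               for S in combinations(range(n), m))

print(has_generic_spark(np.random.randn(4, 8)))   # True with probability 1
```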

APPENDIX B
PROOF OF THEOREM 4.3 AND COROLLARY 4.4

Proof of Theorem 4.3: In order to show that (IV.2) implies , let be an index set with and

. Then, by Lemma 4.1 and the definition of spark(A),

By the assumption, there is an such that , so that we have

Since , there is a such that and . Hence, we have such that and

so that . By the RIP condition , it follows that

since . Hence, under the condition (IV.2), we have .
In order to show that implies (IV.2), assume the

contrary. Then we have

where with . Then for any , so that

. Set so that . Then there is a such that

Then we have , since the rows of are in general position. Note that . Since

for some , which is a contradiction.
Proof of Corollary 4.4: Here we let

and . Since we already have , (IV.2) holds if and only if

Note that


where because of . Since

(IV.2) is equivalent to

where . Hence, (IV.3) holds if and only if .

APPENDIX C
PROOF OF THEOREM 4.5 AND LEMMA 4.6

Proof of Theorem 4.5: (a) By the definitions of and , we have , so that

Since is a self-adjoint matrix, it is an orthogonal projection. Next, to show that

, we only need to show the following properties:

(i) for any ;
(ii) for any ;
(iii) for any .

For (i), this can be easily shown by using and for any . For (ii), for any , there is a such that . Then, by using the property , we can see that property (ii) also holds. Finally, we can easily see that (iii) also holds by using for any .
(b) This is a simple consequence of (a) since

and is an orthogonal complement of

.
(c) Since by Lemma 4.1,

has linearly independent columns. Hence,

we only need to find the orthogonal complement of .

Since , we have by the projection update rule, so that

is the noise subspace for or .
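The projection update rule invoked in part (c) states that the projector onto R([B C]) decomposes as the projector onto R(B) plus the projector onto R((I - P_B)C). A quick numerical verification (matrix sizes are ours):

```python
import numpy as np

def proj(M):
    # Orthogonal projector onto the column space of M
    Q, _ = np.linalg.qr(M)
    return Q @ Q.T

m = 10
B, C = np.random.randn(m, 3), np.random.randn(m, 2)
P_B = proj(B)
lhs = proj(np.hstack([B, C]))
rhs = P_B + proj((np.eye(m) - P_B) @ C)           # projection update rule
print(np.allclose(lhs, rhs))                      # True
```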

APPENDIX D
PROOF OF THEOREM 5.1

We first need to show the following lemmas.

Lemma D.1: Assume that we have noisy measurements through multiple noisy snapshots, where

where , and is additive noise. We also assume that . Then there is a

such that for any and

(D.1)

if , where is the spectral norm of and consists of orthonormal columns such that

.
Proof: First, let (or )

be the minimum (or maximum) nonzero singular value of . Then, is also of full column rank if

. For such an

by consecutive use of the triangle inequality. If we have , we get

so that

(D.2)

Page 17: Revisiting the Link Between Compressive Sensing and Array Signal

294 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 1, JANUARY 2012

where we use and . By the projection update rule, we have

(D.3)

and similarly

(D.4)

By applying (D.3) and (D.4) as done in [47], we have

(D.5)

Then, for any and , by the generalized MUSIC criterion (IV.3), we have

if we have

(D.6)

where

Lemma D.2: Suppose a minimum is given by

where

is the condition number of and

Then, for any and

Proof: Using (D.6), the generalized MUSIC criterion correctly estimates the remaining indices when

where we use the definition of the condition number of the matrix, i.e., . This implies that

This concludes the proof.

Corollary D.3: For a , if we have and a minimum satisfies

(D.7)

where , then we can find the remaining indices of with the generalized MUSIC criterion.
Proof: It is enough to show that


First, for each , is a chi-square random variable with degrees of freedom, so that by [45, Lemma 3] we have

since . On the other hand, for any , is independent of

so that is a chi-square random variable with degrees of freedom , since

is the projection operator onto the orthogonal complement of . Since

, again by [45, Lemma 3], we have

so that
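The chi-square concentration supplied by [45, Lemma 3] and used repeatedly above is easy to visualize by simulation; a Monte Carlo sketch (sample sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
for N in (10, 100, 1000):
    z = rng.chisquare(N, size=100_000) / N        # normalized chi-square
    tail = np.mean(np.abs(z - 1.0) > 0.1)
    print(f"N = {N:5d}   P(|z - 1| > 0.1) ~ {tail:.4f}")
# The empirical tail probability shrinks rapidly as the degrees of freedom
# grow, which is the concentration behavior the lemma quantifies.
```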

Proof of Theorem 5.1: First, we need to show the left RIP condition in order to apply the generalized MUSIC criterion. Using the Marčenko-Pastur theorem [53], we have

Hence, we need to make . Second, we need to calculate the condition on the number of sensor elements for the SNR condition (D.7). Since , we have (D.7) provided that

Therefore, if we have and

then we can identify the remaining indices of .

APPENDIX E

The following two lemmas are quite often used in this paper.

Lemma E.1: Suppose that is a given number, and is a set of i.i.d. chi-squared random variables with degrees of freedom . Then

in probability.
Proof: Assume that is a chi-squared random variable with degrees of freedom ; then we have

(E.1)

where denotes the upper incomplete Gamma function. Then we use the following asymptotic behavior:

For , we consider the probability . By using the union bound, we see that

as . Now, considering the probability , we see that

as , so that the claim is proved.
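Since the displayed normalization in Lemma E.1 is not reproduced here, the sketch below assumes the standard extreme-value scaling for a fixed degree of freedom, namely that the maximum of M i.i.d. chi-squared variables grows like 2 log M (with slowly decaying log log M corrections); this normalization is our assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                                     # fixed degrees of freedom
for M in (10**3, 10**5, 10**6):
    ratio = rng.chisquare(N, size=M).max() / (2.0 * np.log(M))
    print(f"M = {M:8d}   max / (2 log M) = {ratio:.3f}")
# The ratio slowly drifts toward 1 as M grows, consistent with the claimed
# in-probability limit.
```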

Lemma E.2: Let be the Gaussian sensing matrix whose components are independent random variables with distribution . Then

Proof: Because for all and is a chi-squared random variable with degrees of freedom , by [45, Lemma 3] we have

(E.2)

Since, for fixed , is an -dimensional random vector that is nonzero with probability 1 and independent of , the random variable is a Gaussian random variable with variance , by applying


[45, Lemma 2]. Since we have (E.2) and the variance of goes to 0 as ,

(E.3)

for all . Then

as .

APPENDIX F
PROOF OF THEOREM 5.2

Proof of Theorem 5.2: Let with , where is constructed from the first indices of when the values of for are ordered in decreasing order. Then, for

where and . Then

(F.1)

where

First, by [45, Lemma 3], , so that we have

(F.2)

by the definition of and the construction of . For , observe that each is a Gaussian -dimensional vector with total variance

and is a chi-squared random variable with degrees of freedom for . Hence, using [45, Lemma 3] and

(F.3)

as , so that

so that we have

for and . Finally

follows a beta distribution, as shown in [45]. Since there are terms in , [45, Lemma 6] and inequality (F.3) show that

so that we have

(F.4)

For , we have

where is the singular value decomposition of and where

. As will be shown later, the decomposition in the second line of the above equation is necessary to deal with the different asymptotic behavior of chi-square random variables with degrees of freedom 1 and . Since is statistically independent of for and

is an orthonormal set, is a chi-squared random variable with degrees of freedom , and each is a chi-squared random variable with one degree of freedom. Also, we have

(F.5)

since

by [45, Lemma 4]. When is a fixed number, then by Lemma E.1, we have

(F.6)


On the other hand, when is proportionally increasing with respect to [45], we have

(F.7)

Combining (F.6), (F.7) and (F.5), we have

(F.8)

for , when is given by (V.5). For the noisy measurement , we have, for all ,

(F.9)

Let

Then, for , combining (F.1), (F.2), (F.4), and (F.9), we have

On the other hand, using (F.8) and (F.9), for , we have

so that we need to show that

(F.10)

under conditions (V.3) and (V.4). First, note that if and only if

which is equivalent to

which holds under condition (V.3), where we used the definition . Then we can see that if we have

, then

(F.11)

Also, if we have

then

(F.12)

Hence, by applying (F.11) and (F.12), if we assume the condition

then the inequality (F.10) holds, so that we can identify by 2-thresholding.
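For reference, a minimal sketch of the 2-thresholding support estimator analyzed above, under the common reading that each atom is scored by the l2 norm of its correlations with all snapshots (function and variable names are ours):

```python
import numpy as np

def two_thresholding(A, Y, K):
    # Score each column of A by its joint correlation energy over the
    # snapshots, then keep the K largest scores as the support estimate.
    scores = np.linalg.norm(A.conj().T @ Y, axis=1)
    return np.sort(np.argsort(scores)[-K:])
```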

APPENDIX G
PROOF OF THEOREMS 5.3 AND 5.4

In this section, we assume the large system limit, such that and exist. We first need the following results.

Theorem G.1 ([53]): Suppose that each entry of is generated from an i.i.d. Gaussian random variable . Then the asymptotic probability density of the squared singular values of is given by

(G.1)

where .

Corollary G.2: Suppose that each entry of is generated from an i.i.d. Gaussian random variable . Then the asymptotic probability density of the singular values of is given by

(G.2)

Proof: This is obtained from Theorem G.1 using a simple change of variables.
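The edge behavior predicted by the Marčenko-Pastur law is easy to check by simulation: for an m x n Gaussian matrix with entries of variance 1/m (this normalization is our assumption) and aspect ratio c = n/m <= 1, the squared singular values fill the interval [(1 - sqrt(c))^2, (1 + sqrt(c))^2]:

```python
import numpy as np

m, n = 2000, 400                              # aspect ratio c = 0.2
c = n / m
X = np.random.randn(m, n) / np.sqrt(m)        # variance-1/m entries
s2 = np.linalg.svd(X, compute_uv=False) ** 2  # squared singular values
print(s2.min(), (1 - np.sqrt(c)) ** 2)        # both near 0.306
print(s2.max(), (1 + np.sqrt(c)) ** 2)        # both near 2.094
```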

Lemma G.3: Let be positive integers and . Then, for any -dimensional subspace of , we

have

where .


Proof: Let be the extended singular value decomposition of , where

and . If we let , then we have

(G.3)

and

If we let , then since is a subspace of and , we have

by (G.3). Since for , using , we have

Lemma G.4: For and , we let , which satisfies

where is the probability measure given by

Furthermore, we let be the probability measure given by

Then we have

For the proof of Lemma G.4, we need the following lemma.

Lemma G.5: Let . Suppose that and are continuous probability density functions on such that, for any ,

and satisfy

Then, for any nonnegative increasing function on and for any such that

(G.4)

we have

(G.5)

Proof: First, we define

Then both and are strictly increasing functions, so that their inverse functions exist and satisfy for any . For any which satisfies (G.4), there is some such that . Applying a change of variables, we have

since for any and is increasing on .

Proof of Lemma G.4: Noting that we have

by Lemma G.5, we only need to show that

for any . Let and be given by

Then we can see that

Since and are probability density functions with support , we can easily see that, for any ,

so that the claim holds.


Proof of Theorems 5.3 and 5.4: Note that S-OMP can find the correct indices from if we have

(G.6)

for each , since for

. Hence, it is enough to check condition (G.6) for .
First, for , since is statistically independent of

. For , the dimension of is , so that is a chi-squared random variable with degrees of freedom . On the other hand, for , we have

by using and Lemma G.3, where has singular values

. If we let

then by (G.1), we have

(G.7)

where is the value satisfying

If we let

(G.8)

then we have for any

(G.9)

then by substitution with , we have

(G.10)

where the inequality comes from Lemma G.4. Substituting (G.10) into (G.7), we have

(G.11)

where is an increasing function with respect to such that and , and . For the noisy measurement , we have the following inequality:

(G.12)

as , where . Then we consider two limiting cases according to the number of measurement vectors.
(Case 1: Theorem 5.3) For ,

are independent chi-squared random variables with degrees of freedom , so that by Lemma E.1, we have

(G.13)

Here we assume that

(G.14)

(G.15)

Then, by the Marčenko-Pastur theorem [53],


so that

(G.16)

Combining (G.12) and (G.16), for the noisy measurement , we have

if we have (G.15). Hence, when is a fixed number, if we have (G.15), then we can identify the correct indices of with subspace S-OMP in LSMMV.
(Case 2: Theorem 5.4) Similarly to the previous case, for

are independent chi-squared random variables. Since , by [45, Lemma 3], we have

(G.17)

By using (G.12), we have

(G.18)

We let

(G.19)

(G.20)

for some . Note that (G.20) is equivalent to

Again we let

Then, for a quadratic function , if , then we have

(G.21)

since by (G.19) and . Combining (G.18) and (G.21), for and , we have

for some . Hence, in the case of , we can identify the correct indices of if we have (G.20).
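For completeness, a minimal sketch of the S-OMP recursion analyzed in this appendix (the subspace variant would apply the same loop to an orthonormal basis of the signal subspace of Y instead of Y itself); names and the stopping rule are ours:

```python
import numpy as np

def somp(A, Y, K):
    # Simultaneous OMP: greedily pick the atom with the largest joint
    # correlation, then project the snapshots off the selected atoms.
    R, support = Y.copy(), []
    for _ in range(K):
        scores = np.linalg.norm(A.conj().T @ R, axis=1)
        if support:
            scores[support] = -np.inf         # do not reselect atoms
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        R = Y - As @ np.linalg.lstsq(As, Y, rcond=None)[0]
    return sorted(support)
```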

ACKNOWLEDGMENT

The authors would like to thank Dr. Dmitry Malioutov for providing the mixed norm code in [26] and the anonymous reviewers for their helpful comments, which have improved this paper significantly.

REFERENCES

[1] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.

[2] E. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.

[3] E. Candès and T. Tao, “Decoding by linear programming,” IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, Dec. 2005.

[4] H. Jung, J. C. Ye, and E. Y. Kim, “Improved k-t BLAST and k-t SENSE using FOCUSS,” Phys. Med. Biol., vol. 52, pp. 3201–3226, 2007.

[5] J. C. Ye, S. Tak, Y. Han, and H. W. Park, “Projection reconstruction MR imaging using FOCUSS,” Magn. Reson. Med., vol. 57, no. 4, pp. 764–775, 2007.

[6] H. Jung, K. Sung, K. S. Nayak, E. Y. Kim, and J. C. Ye, “k-t FOCUSS: A general compressed sensing framework for high resolution dynamic MRI,” Magn. Reson. Med., vol. 61, no. 1, pp. 103–116, 2009.

[7] G. H. Chen, J. Tang, and S. Leng, “Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets,” Med. Phys., vol. 35, p. 660, 2008.

[8] S. F. Cotter and B. D. Rao, “Sparse channel estimation via matching pursuit with application to equalization,” IEEE Trans. Commun., vol. 50, no. 3, pp. 374–377, Mar. 2002.

[9] A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt., vol. 47, no. 10, pp. 44–51, 2008.

[10] J. A. Tropp, “Just relax: Convex programming methods for identifying sparse signals in noise,” IEEE Trans. Inf. Theory, vol. 52, no. 3, pp. 1030–1051, Mar. 2006.

[11] I. F. Gorodnitsky and B. D. Rao, “Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm,” IEEE Trans. Signal Process., vol. 45, no. 3, pp. 600–616, Mar. 1997.

[12] E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted minimization,” J. Fourier Anal. Appl., vol. 14, no. 5, pp. 877–905, 2008.


[13] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM J. Sci. Comput., vol. 20, no. 1, pp. 33–61, 1999.

[14] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2346–2356, Jun. 2008.

[15] D. P. Wipf and B. D. Rao, “Sparse Bayesian learning for basis selection,” IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2153–2164, 2004.

[16] J. Chen and X. Huo, “Theoretical results on sparse representations of multiple measurement vectors,” IEEE Trans. Signal Process., vol. 54, no. 12, pp. 4634–4643, 2006.

[17] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,” IEEE Trans. Signal Process., vol. 53, no. 7, p. 2477, 2005.

[18] M. Mishali and Y. C. Eldar, “Reduce and boost: Recovering arbitrary sets of jointly sparse vectors,” IEEE Trans. Signal Process., vol. 56, no. 10, pp. 4692–4702, Oct. 2008.

[19] E. van den Berg and M. P. Friedlander, “Theoretical and empirical results for recovery from multiple measurements,” IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2516–2527, May 2010.

[20] M. F. Duarte, S. Sarvotham, D. Baron, M. B. Wakin, and R. G. Baraniuk, “Distributed compressed sensing of jointly sparse signals,” in Asilomar Conf. Signals, Syst., Comput., 2005, pp. 1537–1541.

[21] H. Krim and M. Viberg, “Two decades of array signal processing research: The parametric approach,” IEEE Signal Process. Mag., vol. 13, no. 4, pp. 67–94, 1996.

[22] K. P. Pruessmann, M. Weiger, M. B. Scheidegger, and P. Boesiger, “SENSE: Sensitivity encoding for fast MRI,” Magn. Reson. Med., vol. 42, no. 5, pp. 952–962, 1999.

[23] A. Joshi, W. Bangerth, K. Hwang, J. C. Rasmussen, and E. M. Sevick-Muraca, “Fully adaptive FEM based fluorescence optical tomography from time-dependent measurements with area illumination and detection,” Med. Phys., vol. 33, p. 1299, 2006.

[24] O. Lee, J. M. Kim, Y. Bresler, and J. C. Ye, “Compressive diffuse optical tomography: Non-iterative exact reconstruction using joint sparsity,” IEEE Trans. Med. Imag., vol. 30, no. 5, pp. 1129–1142, 2011.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, “Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit,” Signal Process., vol. 86, no. 3, pp. 572–588, 2006.

[26] D. Malioutov, M. Cetin, and A. S. Willsky, “A sparse signal reconstruction perspective for source localization with sensor arrays,” IEEE Trans. Signal Process., vol. 53, no. 8, pp. 3010–3022, 2005.

[27] J. A. Tropp, “Algorithms for simultaneous sparse approximation. Part II: Convex relaxation,” Signal Process., vol. 86, no. 3, pp. 589–602, 2006.

[28] D. P. Wipf, “Bayesian Methods for Finding Sparse Representations,” Ph.D. dissertation, Univ. of California, San Diego, 2006.

[29] Y. C. Eldar, P. Kuppinger, and H. Bolcskei, “Compressed sensing of block-sparse signals: Uncertainty relations and efficient recovery,” IEEE Trans. Signal Process., vol. 58, no. 6, pp. 3042–3054, 2010.

[30] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Model-based compressive sensing,” IEEE Trans. Inf. Theory, vol. 56, no. 4, pp. 1982–2001, Apr. 2010.

[31] P. Feng, “Universal Minimum-Rate Sampling and Spectrum-Blind Reconstruction for Multiband Signals,” Ph.D. dissertation, Univ. of Illinois, Urbana-Champaign, 1997.

[32] M. E. Davies and Y. C. Eldar, “Rank awareness for joint sparse recovery,” 2010, arXiv:1004.4529.

[33] R. Schmidt, “Multiple emitter location and signal parameter estimation,” IEEE Trans. Antennas Propag., vol. 34, no. 3, pp. 276–280, 1986.

[34] P. Stoica and A. Nehorai, “MUSIC, maximum likelihood, and Cramér-Rao bound,” IEEE Trans. Acoust., Speech, Signal Process., vol. 37, no. 5, pp. 720–741, May 1989.

[35] D. L. Donoho and J. Tanner, “Sparse nonnegative solution of underdetermined linear equations by linear programming,” Proc. Nat. Acad. Sci., vol. 102, no. 27, pp. 9446–9451, 2005.

[36] R. Gribonval, H. Rauhut, K. Schnass, and P. Vandergheynst, “Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms,” J. Fourier Anal. Appl., vol. 14, no. 5, pp. 655–687, 2008.

[37] J. A. Tropp, “Greed is good: Algorithmic results for sparse approximation,” IEEE Trans. Inf. Theory, vol. 50, no. 10, pp. 2231–2242, Oct. 2004.

[38] Y. C. Eldar and H. Rauhut, “Average case analysis of multichannel sparse recovery using convex relaxation,” IEEE Trans. Inf. Theory, vol. 56, no. 1, pp. 505–519, Jan. 2010.

[39] F. R. Bach, “Consistency of the group Lasso and multiple kernel learning,” J. Mach. Learn. Res., vol. 9, pp. 1179–1225, 2008.

[40] D. P. Wipf and B. D. Rao, “An empirical Bayesian strategy for solving the simultaneous sparse approximation problem,” IEEE Trans. Signal Process., vol. 55, no. 7, pt. 2, pp. 3704–3716, Jul. 2007.

[41] B. Porat and B. Friedlander, “Direction finding algorithms based on high-order statistics,” IEEE Trans. Signal Process., vol. 39, no. 9, pp. 2016–2024, Sep. 1991.

[42] J. F. Cardoso, “Source separation using higher order moments,” in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, 1989, pp. 2109–2112.

[43] E. Gonen, J. M. Mendel, and M. C. Dogan, “Applications of cumulants to array processing. IV. Direction finding in coherent signals case,” IEEE Trans. Signal Process., vol. 45, no. 9, pp. 2265–2276, Sep. 1997.

[44] J. F. Cardoso and E. Moulines, “Asymptotic performance analysis of direction-finding algorithms based on fourth-order cumulants,” IEEE Trans. Signal Process., vol. 43, no. 1, pp. 214–224, Jan. 1995.

[45] A. K. Fletcher, S. Rangan, and V. K. Goyal, “Necessary and sufficient conditions for sparsity pattern recovery,” IEEE Trans. Inf. Theory, vol. 55, no. 12, pp. 5758–5772, Dec. 2009.

[46] A. K. Fletcher and S. Rangan, “Orthogonal matching pursuit from noisy random measurements: A new analysis,” in Proc. Conf. Neural Information Processing Systems, 2009.

[47] K. Lee and Y. Bresler, “Subspace-augmented MUSIC for joint sparse recovery,” 2010, arXiv:1004.4371.

[48] J. M. Kim, O. Lee, and J. C. Ye, “Compressive MUSIC with optimized partial support for joint sparse recovery,” 2011, arXiv:1102.3288.

[49] D. Slepian and J. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Inf. Theory, vol. 19, no. 4, pp. 471–480, 1973.

[50] M. Duarte and R. Baraniuk, “Spectral compressive sensing,” submitted for publication, 2010.

[51] G. Tang and A. Nehorai, “Performance analysis for sparse support recovery,” IEEE Trans. Inf. Theory, vol. 56, no. 3, pp. 1383–1399, Mar. 2010.

[52] N. Vaswani and W. Lu, “Modified-CS: Modifying compressive sensing for problems with partially known support,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT), Seoul, Korea, 2009.

[53] V. A. Marčenko and L. A. Pastur, “Distribution of eigenvalues for some sets of random matrices,” Sbornik: Math., vol. 1, no. 4, pp. 457–483, 1967.

[54] Y. Bresler, “Spectrum-blind sampling and compressive sensing for continuous-index signals,” in Proc. IEEE Information Theory and Applications Workshop, 2008, pp. 547–554.

Jong Min Kim received the B.S. degree in mathematics from KAIST, Daejeon, Republic of Korea. He also received the M.S. degree in applied mathematics in 2001, as well as the Ph.D. degree in mathematical sciences in 2007, both from KAIST. He is currently a research professor at the Dept. of Bio and Brain Engineering, KAIST. His research interests include compressive sensing, sampling theory, harmonic analysis, and signal processing.

Ok Kyun Lee (S’09) received the B.S. degree in electrical, electronic and computer engineering in 2007 from Hanyang University. He received the M.S. degree in electronic and electrical engineering in 2009 from the Korea Advanced Institute of Science and Technology (KAIST), where he is currently pursuing the Ph.D. degree. His research interests include compressive sensing, neuroimaging, and diffuse optical tomography.

Jong Chul Ye (M’99) received the B.Sc. and M.Sc. degrees with honors from Seoul National University, Korea, in 1993 and 1995, respectively, and the Ph.D. degree from the School of Electrical and Computer Engineering, Purdue University, West Lafayette, in 1999. Before joining KAIST in 2004, he was a postdoctoral fellow at the University of Illinois at Urbana-Champaign, and a senior member of research staff at Philips Research, NY, and GE Global Research Center, NY. He is currently an Associate Professor at the Dept. of Bio and Brain Engineering of KAIST, and also an Adjunct Professor at the Dept. of Electrical Engineering of KAIST. His current research interests include compressed sensing and statistical signal processing for various imaging modalities such as MRI, NIRS, CT, PET, and optics.

Dr. Ye received the Korean Government Overseas Scholarship Award for his doctoral study. He has served as a technical committee member and organizing committee member of various conferences, such as the IEEE Int. Conf. on Image Processing (ICIP) and the IEEE Int. Symp. on Biomedical Imaging (ISBI).