



Applied Mathematical Modelling 37 (2013) 4407–4429

Contents lists available at SciVerse ScienceDirect

Applied Mathematical Modelling

journal homepage: www.elsevier.com/locate/apm

Inversion algorithm based on the generalized objective functional for compressed sensing

Jing Lei *, Shi Liu
School of Energy, Power and Mechanical Engineering, North China Electric Power University, Changping District, Beijing 102206, China

Article info

Article history:
Received 23 October 2011
Received in revised form 11 July 2012
Accepted 18 September 2012
Available online 28 September 2012

Keywords:
Compressed sensing
Inverse problems
Tikhonov regularization method
Homotopy method
Artificial physics optimization algorithm

0307-904X/$ - see front matter © 2012 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.apm.2012.09.049

* Corresponding author. Tel.: +86 10 61772472; fax: +86 10 61772219.
E-mail address: [email protected] (J. Lei).

Abstract

Owing to the novel insight it provides for signal and image processing, compressed sensing (CS) has attracted increasing attention. The accuracy of the reconstruction algorithms plays an important role in real applications of the CS theory. In this paper, a generalized reconstruction model that simultaneously considers the inaccuracies in the measurement matrix and the measurement data is proposed for CS reconstruction. A generalized objective functional, which integrates the advantages of the least squares (LS) estimation and the combinational M-estimation, is proposed. An iterative scheme that integrates the merits of the homotopy method and the artificial physics optimization (APO) algorithm is developed for solving the proposed objective functional. Numerical simulations are implemented to evaluate the feasibility and effectiveness of the proposed algorithm. For the cases simulated in this paper, the reconstruction accuracy is improved, which indicates that the proposed algorithm is successful in solving CS inverse problems.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

CS is a promising theory based on the fact that a relatively small number of projections of a sparse signal can capture most of its salient information. Owing to the novel insight it provides for signal and image processing, the CS theory has attracted increasing attention. Presently, the CS theory has found numerous potential applications in various fields such as signal and image processing, machine learning, astronomy, wireless sensor networks, medical imaging, spectral imaging and remote sensing [1–10].

The accuracy of reconstruction algorithms plays a crucial role in real applications. At present, various algorithms have been developed for CS reconstruction, which can be roughly divided into three categories [11]: (1) the greedy pursuit algorithms, such as the orthogonal matching pursuit (OMP) method [12], the stagewise OMP (StOMP) algorithm [13], the regularized OMP (ROMP) method [14] and the compressive sampling matching pursuit (CoSaMP) algorithm [11], which build up an approximation one step at a time by making locally optimal choices at each step; (2) the convex relaxation algorithms, such as the interior-point method [15], the gradient projection method [16], the iterative thresholding algorithm [17], the Bregman iteration technique [18,19] and the separable approximation algorithm [20]; and (3) the combinatorial algorithms [11]. Overall, these algorithms have played a prominent role in promoting the development and application of the CS theory. Owing to the characteristics and complexities of CS reconstruction problems, however, developing an efficient reconstruction algorithm remains highly desirable.

At present, CS reconstruction algorithms often consider only the inaccuracy of the measurement data. In real applications, however, the measurement matrix or model may also be inaccurate, owing to factors such as the physical implementation of the measurement matrix in a sensor and model approximation distortions [21]. A detailed discussion of the inaccuracies of the measurement matrix and the measurement data can be found in [21]. As a result, it may be more reasonable to simultaneously consider the inaccuracies of the measurement data and the measurement matrix in the process of CS reconstruction. To obtain a meaningful solution, additionally, the CS reconstruction process is often formulated as an optimization problem, and finding an efficient optimization algorithm is highly desirable for real applications. Optimization algorithms can be roughly divided into two categories: local optimization algorithms and global optimization algorithms. Presently, local optimization algorithms have found numerous applications in the field of CS reconstruction. Unfortunately, it is hard for local optimization algorithms to ensure a globally optimal solution. Consequently, developing a global optimization algorithm may be important for solving CS inverse problems. This paper presents a generalized reconstruction model that simultaneously considers the inaccuracies of the measurement matrix and the measurement data. A generalized objective functional, which integrates the merits of the LS estimation and the combinational M-estimation, is proposed. An iterative scheme that integrates the advantages of the homotopy method and the APO algorithm is developed for solving the proposed objective functional. Numerical simulations are implemented to validate the feasibility of the proposed algorithm.

The rest of this paper is organized as follows. Section 2 briefly introduces the CS model. In Section 3, a generalized reconstruction model that simultaneously considers the inaccuracies of the measurement matrix and the measurement data is proposed. In Section 4, a generalized objective functional that integrates the advantages of the LS estimation and the combinational M-estimation is described in detail. Section 5 introduces the homotopy method and the APO algorithm, and an iterative scheme that integrates the merits of both algorithms is developed for solving the proposed objective functional. Numerical results and discussions are presented in Section 6. Finally, Section 7 presents a summary and conclusions.

2. CS model

The CS model is introduced briefly in this section; more theoretical discussions on the CS technique can be found in [22–25]. If a signal x is sparse, the CS method attempts to reconstruct x from just a few linear measurements of x, which can be formulated as:

$$y = \Phi x \qquad (1)$$

where x is an n × 1 vector standing for a sparse signal or image; y is an m × 1 vector indicating the linear measurements of x; Φ is a matrix of dimension m × n, which is called the measurement matrix. Popular measurement matrices include the Gaussian measurement matrix, the binary measurement matrix and the Fourier measurement matrix; more details can be found in [23].

If a signal is not sparse but can be sparsely represented in another basis, such as a wavelet or Fourier basis, Eq. (1) can be rewritten as [26]:

$$y = \Phi \Psi \alpha \qquad (2)$$

where x = Ψα, and Ψ is a matrix of dimension n × n. In real applications, taking the measurement noise into account, Eqs. (1) and (2) can be reformulated as [26]:

$$y = \Phi x + r \qquad (3)$$

$$y = \Phi \Psi \alpha + r \qquad (4)$$

where r is an m × 1 vector standing for the measurement noise. In brief, the primary task of the CS inverse problem is to estimate x from the known Φ and y under the sparsity assumption on x. In practice, the solving of the CS model is often formulated as the following optimization problem [8]:

$$\min_x J(x) = \|\Phi x - y\|^2 + \alpha \sum_{i=1}^{n} |x_i| \qquad (5)$$

where α > 0 is the regularization parameter, ‖·‖ denotes the 2-norm and |·| denotes the absolute value operator.
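To make the measurement model of Eqs. (1) and (3) and the objective of Eq. (5) concrete, the following minimal sketch builds a Gaussian measurement matrix, takes noisy measurements of a sparse signal and evaluates J(x). The sizes, sparsity level, noise level and all variable names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal sketch of the CS measurement model (Eqs. (1), (3)) and the
# l1-regularized objective of Eq. (5); names and sizes are illustrative.
rng = np.random.default_rng(0)
n, m, k = 2048, 300, 20          # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
r = 0.01 * rng.standard_normal(m)                # measurement noise, Eq. (3)
y = Phi @ x_true + r

def objective(x, Phi, y, alpha):
    """J(x) = ||Phi x - y||^2 + alpha * sum_i |x_i|  (Eq. (5))."""
    residual = Phi @ x - y
    return residual @ residual + alpha * np.abs(x).sum()

print(objective(x_true, Phi, y, alpha=1e-3))
```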

3. Generalized reconstruction model

It can be seen from Eqs. (3) and (4) that the standard CS model only considers the inaccuracy of the measurement data; the inaccuracy of the measurement matrix or model is not considered. In practice, the measurement matrix or model may be inaccurate [21]. As a result, it is essential to simultaneously consider the inaccuracies of the measurement data and the measurement matrix in the process of CS reconstruction, which can be described by [27]:

$$(\Phi + E)x = y + r \qquad (6)$$



where E is a perturbation matrix of dimension m × n. It is crucial to consider this kind of noise since it can account for the precision errors that arise when applications call for physically implementing the measurement matrix in a sensor. In other CS scenarios, such as when Φ represents a system model, E can absorb the errors in assumptions made about the transmission channel. Furthermore, E can also model the distortions derived from discretizing the domain of analog signals and systems [21]. More discussions on the perturbation analysis for Eq. (6) can be found in [21,28,29].

Directly solving Eq. (6) is challenging. For easy computation, Eq. (6) can be reformulated as:

$$\Phi x + Ex = y + r \qquad (7)$$

Letting Ex = B, Eq. (7) can be expressed as:

$$\Phi x + B = y + r \qquad (8)$$

In fact, Eq. (8) is known as the semiparametric estimation model in the field of econometrics, and more details can be found in [30]. It is worth mentioning that B in Eq. (8) can also be regarded as the model error. Eq. (8) is equivalent to Eq. (3) when B = 0; obviously, Eq. (3) is a special case of Eq. (8). Additionally, other approaches, such as the regularized total least squares method, are available for solving Eq. (6); more discussions can be found in [27,31].
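The relationship between the perturbed model of Eq. (6) and the semiparametric form of Eq. (8) can be illustrated with a short sketch; the sizes, the perturbation level and the sparse test signal below are arbitrary choices made only for illustration.

```python
import numpy as np

# Sketch of the perturbed model of Eqs. (6)-(8): the measurement matrix is
# contaminated by E, and B = E x absorbs the resulting model error.
rng = np.random.default_rng(1)
m, n = 300, 2048
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
E = 0.001 * rng.standard_normal((m, n))      # perturbation matrix of Eq. (6)
x = np.zeros(n); x[:10] = 1.0                # an arbitrary sparse signal
r = 0.01 * rng.standard_normal(m)            # measurement noise

y = (Phi + E) @ x - r        # Eq. (6): (Phi + E) x = y + r
B = E @ x                    # semiparametric term of Eq. (8)

# Eq. (8) then holds up to floating-point error: Phi x + B = y + r
assert np.allclose(Phi @ x + B, y + r)
```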

4. Design of the objective functional

In Eq. (8), a generalized model is proposed for CS reconstruction, and finding an efficient computational method is important for real applications of Eq. (8). In this section, a generalized objective functional, which integrates the advantages of the LS estimation and the combinational M-estimation, is described in detail.

4.1. Regularized semiparametric method

Directly solving Eq. (8) is challenging because two unknown variables, x and B, need to be estimated. A popular approach is to reformulate the solving of Eq. (8) as an optimization problem. In [32], the authors proposed the following optimization problem:

$$\min_{x,B} J = \|R_1(\Phi x + B - y)\|^2 + \alpha_1\|R_2 x\|^2 + \alpha_2\|NB\|^2 \qquad (9)$$

where α_1 > 0 and α_2 > 0 are the regularization parameters, and R_1, R_2 and N are weighting matrices. Eq. (9) can be considered a generalized Tikhonov functional; in particular, Eq. (9) reduces to the standard Tikhonov regularization (STR) method when B = 0 and R_1 and R_2 are identity matrices, which can be described by:

$$\min_{x} J = \|\Phi x - y\|^2 + \alpha_1\|x\|^2 \qquad (10)$$
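As a reference point, the STR problem of Eq. (10) has the closed-form minimizer x = (Φ^T Φ + α_1 I)^(-1) Φ^T y. A minimal sketch, with an illustrative helper name, is given below.

```python
import numpy as np

# Sketch of the standard Tikhonov regularization (STR) solution of Eq. (10):
# the minimizer of ||Phi x - y||^2 + alpha1 ||x||^2 satisfies the normal
# equations (Phi^T Phi + alpha1 I) x = Phi^T y.
def str_solve(Phi, y, alpha1):
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + alpha1 * np.eye(n), Phi.T @ y)
```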

4.2. Extension of the objective functional

Designing a Tikhonov regularization functional involves two key issues: the choice of the function that measures the accuracy of a solution and the design of the stabilizing functional. In general, a Tikhonov regularization solution results from balancing the accuracy and the stability of a solution [33,34]. According to the Tikhonov regularization method [33], Eq. (9) can be generalized as:

$$\min_{x,B} J = \phi(x,B) + \alpha_1\Omega_1(x) + \alpha_2\Omega_2(B) \qquad (11)$$

where φ(x, B) measures the accuracy of a solution; Ω_1(x) and Ω_2(B) can be referred to as the stabilizing functionals or the constrained functionals.

It can be seen from the first term on the right-hand side of Eq. (9) that a sum-of-squares function is used to measure the accuracy of a solution. Studies indicate that the LS estimation is strongly influenced by even a small cluster of outliers and is therefore not directly suitable for estimation problems with outliers [35]. Consequently, seeking a robust method is crucial to improve the robustness of the LS estimation. In [36], the authors proposed a combinational estimation of the LS estimation and the least absolute deviation (LAD) estimation to improve the robustness of the LS estimation. The noise in the measurement data is complicated in CS applications; in order to improve the robustness of the LS estimation, a combinational estimation that integrates the advantages of the LS estimation and the combinational M-estimation is proposed, which can be formulated as:

$$\phi(x,B) = \delta_1\|\Phi x + B - y\|^2 + \delta_2\sum_{j=1}^{m}\rho_1(\Phi_j x + B_j - y_j) + \delta_3\sum_{j=1}^{m}\rho_2(\Phi_j x + B_j - y_j) \qquad (12)$$

where 0 ≤ δ_1 ≤ 1, 0 ≤ δ_2 ≤ 1, 0 ≤ δ_3 ≤ 1 and δ_1 + δ_2 + δ_3 = 1; ρ_1(·) and ρ_2(·) stand for the M-estimation functions; Φ_j is the jth row of matrix Φ; B_j and y_j represent the jth elements of vectors B and y. Popular M-estimation functions include the absolute value function, the Huber function, the Cauchy function, the G-M function and the Fair function; more discussions on M-estimation can be found in [35,37].

The design of the stabilizing functional, which strongly influences the reconstruction quality, is crucial for Eq. (11). Overall, stabilizing functionals can be designed according to a specific reconstruction task. In particular, in the process of CS reconstruction the design of the stabilizing functional must respect the sparsity assumption on the solution. Presently, different stabilizing functionals are available for CS reconstruction, such as the ℓ1 norm [16], the ℓp norm (0 < p ≤ 1) [38] and M-estimators (such as the Laplace function and the Geman–McClure function) [39]. From the viewpoint of the penalty function, their differences depend mainly on the different penalties imposed on the unknown variables [40]. Studies indicate that the quadratic stabilizing functional (that is, the standard ℓ2 norm) penalizes large nonzero values in the reconstructed signals or images in a quadratic fashion, which discourages the existence of such values and favors solutions with small nonzero entries. As a result, it is hard to ensure a sparse solution [41]. In contrast, in order to ensure a sparse solution, the stabilizing functional should impose a relatively small penalty on large nonzero values and a relatively large penalty on small nonzero values. Such stabilizing functionals allow for the existence of large values, since they penalize such values more lightly than the quadratic scheme, and discourage the existence of small nonzero values; finally, a sparse solution may be obtained [41]. More discussions on the design of sparsity-promoting stabilizing functionals can be found in [39,41–43]. In this paper, an alternative stabilizing functional that satisfies the above properties is proposed, which can be described by:

$$\Omega_1(x) = \sum_{i=1}^{n} \frac{1 - \exp\left(-\beta\,\dfrac{\ln(1+\ln(1+|x_i|^{p_i}))}{u+\ln(1+\ln(1+|x_i|^{p_i}))}\right)}{1 + \exp\left(-\beta\,\dfrac{\ln(1+\ln(1+|x_i|^{p_i}))}{u+\ln(1+\ln(1+|x_i|^{p_i}))}\right)} \qquad (13)$$

where u > 0, p_i > 0 and β > 0. In fact, Eq. (13) can be considered a generalized M-evaluation function [44]. For easy computation, the absolute value function is approximated by [45]:

$$|x| \approx (x^2 + \xi)^{1/2} \qquad (14)$$

where ξ > 0. Therefore, Eq. (13) can be approximated by:

$$\Omega_1(x) \approx \sum_{i=1}^{n} \frac{1 - \exp\left(-\beta\,\dfrac{\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}{u+\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}\right)}{1 + \exp\left(-\beta\,\dfrac{\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}{u+\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}\right)} \qquad (15)$$

In this work, Ω_2(B) is defined as:

$$\Omega_2(B) = \sum_{j=1}^{m} \ln(1+\ln(1+|B_j|^{q_j})) \qquad (16)$$

where q_j > 0. For ease of calculation, Eq. (16) is approximated by:

$$\Omega_2(B) \approx \sum_{j=1}^{m} \ln(1+\ln(1+(B_j^2+\xi)^{q_j/2})) \qquad (17)$$
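A direct transcription of the smoothed stabilizers of Eqs. (15) and (17) might look as follows; the default parameter values, and the use of a small value for the smoothing constant ξ, are illustrative assumptions rather than settings prescribed above.

```python
import numpy as np

# Sketch of the smoothed stabilizing functionals of Eqs. (15) and (17).
# beta, u, p, q and xi follow the notation above; the values are examples.
def omega1(x, beta=10.0, u=1.0, p=1.3, xi=1e-6):
    t = np.log1p(np.log1p((x**2 + xi)**(p / 2)))  # ln(1+ln(1+(x_i^2+xi)^{p/2}))
    s = np.exp(-beta * t / (u + t))
    return np.sum((1.0 - s) / (1.0 + s))

def omega2(B, q=1.0, xi=1e-6):
    return np.sum(np.log1p(np.log1p((B**2 + xi)**(q / 2))))
```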

In this paper, the employed M-estimation functions are the Cauchy function and the G-M function [35,37], which are presented in Eqs. (18) and (19):

$$\rho_{\mathrm{Cauchy}}(x) = \frac{\mu_1^2}{2}\ln\left(1+\left(\frac{x}{\mu_1}\right)^2\right) \qquad (18)$$

$$\rho_{\mathrm{G\text{-}M}}(x) = \frac{1}{2}\cdot\frac{x^2}{\mu_2 + x^2} \qquad (19)$$

where μ_1 > 0 and μ_2 > 0 are predetermined parameters. According to the above discussions, a generalized objective functional can be obtained for CS reconstruction:

$$\min_{x,B} J = \delta_1\|r\|^2 + \delta_2\sum_{j=1}^{m}\frac{\mu_1^2}{2}\ln\left(1+\left(\frac{r_j}{\mu_1}\right)^2\right) + \delta_3\sum_{j=1}^{m}\frac{1}{2}\cdot\frac{r_j^2}{\mu_2+r_j^2} + \alpha_1\sum_{i=1}^{n}\frac{1 - \exp\left(-\beta\,\dfrac{\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}{u+\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}\right)}{1 + \exp\left(-\beta\,\dfrac{\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}{u+\ln(1+\ln(1+(x_i^2+\xi)^{p_i/2}))}\right)} + \alpha_2\sum_{j=1}^{m}\ln(1+\ln(1+(B_j^2+\xi)^{q_j/2})) \qquad (20)$$

where r_j = Φ_j x + B_j − y_j.
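Assembling the data terms of Eqs. (18) and (19) with the stabilizers gives the objective of Eq. (20). The sketch below evaluates it, reusing the omega1 and omega2 helpers from the sketch after Eq. (17); the default parameter values are illustrative only.

```python
import numpy as np

# Sketch of the generalized objective functional of Eq. (20), built from the
# LS, Cauchy and G-M data terms plus the stabilizers omega1/omega2 above.
def rho_cauchy(r, mu1=1.0):                     # Eq. (18)
    return 0.5 * mu1**2 * np.log1p((r / mu1)**2)

def rho_gm(r, mu2=1.0):                         # Eq. (19)
    return 0.5 * r**2 / (mu2 + r**2)

def generalized_objective(x, B, Phi, y,
                          delta=(0.4, 0.3, 0.3), alpha1=1e-10, alpha2=5e-3):
    r = Phi @ x + B - y                         # r_j = Phi_j x + B_j - y_j
    d1, d2, d3 = delta
    data_term = d1 * (r @ r) + d2 * rho_cauchy(r).sum() + d3 * rho_gm(r).sum()
    return data_term + alpha1 * omega1(x) + alpha2 * omega2(B)
```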



5. Solving of the objective functional

Eq. (20) is essentially an unconstrained optimization problem, and finding an efficient computational method is important for its application. In brief, algorithms for solving unconstrained optimization problems can be divided into two groups: local optimization algorithms and global optimization algorithms. Global optimization algorithms can also be roughly classified into two categories: deterministic methods, such as the filled function method and the tunneling function method, and stochastic methods, such as the particle swarm optimization algorithm, genetic algorithms, the APO algorithm and the simulated annealing algorithm. Local optimization algorithms have a favorable ability for local search; however, their global search capability is relatively weak. Stochastic global optimization algorithms have satisfactory global search capability; however, their ability for local search is relatively weak. In real applications, an algorithm that integrates the advantages of local search and stochastic global search is highly desirable. In this section, the APO algorithm and the homotopy method are introduced, and an iterative scheme that integrates the merits of both algorithms is described in detail.

5.1. Homotopy method

Eq. (20) includes two unknown variables, x and B, and directly solving the equation is challenging. With the alternating direction iteration optimization technique [19], the variables x and B in Eq. (20) can be solved alternately. The main idea behind the method is that, in the kth step, if the value of variable x is given, the estimate of variable B can be obtained by solving Eq. (20), and then variable x can be recalculated when B is fixed. As a result, variables x and B can be solved alternately by repeating the above process. In particular, in the kth step, when the estimate of variable B is provided, minimizing Eq. (20) is equivalent to solving the following system of nonlinear equations [46]:

$$u(x) = 0 \qquad (21)$$

where u(x) is the gradient vector of Eq. (20).
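A bare-bones skeleton of this alternating strategy is given below; update_x and update_B are hypothetical placeholders for whatever inner solvers are chosen (for example, the homotopy iteration described next for the x-update), not routines defined in the paper.

```python
# Skeleton of the alternating scheme described above: B and x are updated in
# turn, each by (approximately) minimizing Eq. (20) with the other fixed.
def alternating_minimization(x0, B0, num_outer, update_x, update_B):
    x, B = x0, B0
    for _ in range(num_outer):
        B = update_B(x, B)      # minimize Eq. (20) over B with x fixed
        x = update_x(x, B)      # minimize Eq. (20) over x with B fixed
    return x, B
```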

The general homotopy paradigm involves embedding the equations to be solved, u(x) = 0, in a system of equations of one higher dimension, H(x, λ) = 0, through the introduction of one more variable λ, called the homotopy parameter or continuation parameter. Typically, λ is restricted to the range [0, 1], and the embedding is done so that the augmented system H(x, λ) = 0 is easy to solve and reduces to the original problem when λ = 1, i.e., H(x, 1) = u(x). More details on the design of the homotopy equation can be found in [47–49]. To solve the homotopy equation H(x, λ) = 0, λ can be divided into [49]:

$$0 = \lambda_0 < \lambda_1 < \cdots < \lambda_m = 1 \qquad (22)$$

At each point λ_i, solve the following discrete homotopy equation:

$$H(x, \lambda_i) = 0, \quad i = 1, 2, \ldots, m \qquad (23)$$

Set the solution x(λ_i) as the initial value for the homotopy equation H(x, λ_{i+1}) = 0, and repeat this process until λ_i = 1. For easy calculation, the fixed point homotopy is used in this paper, which can be described by [49]:

$$H(x, \lambda_i) = (1-\lambda_i)(x - x_0) + \lambda_i u(x) = 0 \qquad (24)$$

For concise notation, Eq. (24) can be rewritten as:

$$x = U(x, \lambda_i) \qquad (25)$$

where U(·) is a function of the variables x and λ. In this paper, the fixed point iterative algorithm is used to solve Eq. (25), which can be formulated as [48]:

$$x^{k+1} = U(x^k, \lambda_i) \qquad (26)$$

where k is the index of iterations. Since it can be known in advance that the solution lies in the range [H_1, H_2], the iterative scheme is slightly modified according to this prior information. As a result, a projection operator is introduced into the iterative scheme:

$$x^{k+1} = \mathrm{Project}\{U(x^k, \lambda_i)\} \qquad (27)$$

where

$$\mathrm{Project}\{x\} = \begin{cases} H_1, & \text{if } x < H_1 \\ x, & \text{if } H_1 \le x \le H_2 \\ H_2, & \text{if } x > H_2 \end{cases} \qquad (28)$$
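One possible realization of the projected fixed-point homotopy iteration of Eqs. (24)–(28) is sketched below. The damped update used to realize the fixed-point map, the λ grid, the step size and the bounds are assumptions made for illustration; grad_J stands for the gradient u(x) of Eq. (20) with B held fixed.

```python
import numpy as np

# Sketch of a projected fixed-point homotopy iteration in the spirit of
# Eqs. (24)-(28); the damping step and all numerical settings are assumptions.
def homotopy_solve(x0, grad_J, lambdas=np.linspace(0.1, 1.0, 10),
                   inner_iters=20, step=0.5, bounds=(0.0, 1.0)):
    lo, hi = bounds
    x = x0.copy()
    for lam in lambdas:
        for _ in range(inner_iters):
            # damped step on H(x, lam) = (1 - lam)(x - x0) + lam * u(x) = 0
            x_new = x - step * ((1.0 - lam) * (x - x0) + lam * grad_J(x))
            x = np.clip(x_new, lo, hi)          # projection of Eq. (28)
        # the solution at this lambda seeds the next lambda value
    return x
```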

5.2. Artificial physics optimization

The APO algorithm is an emerging stochastic global optimization method. It consists of three procedures, namely initialization, force calculation and motion, which are distinct features that differ from other stochastic global optimization algorithms. The implementation of the APO algorithm can be summarized as follows [50,51]:



Step 1. Specify the problem to be solved and the algorithmic parameters, and generate an initial population.
Step 2. Evaluate the fitness of each individual using the objective function value.
Step 3. Update the global best position x_best.
Step 4. Compute the mass and component force of each individual according to Eqs. (29) and (30).
Step 5. Compute the total force on each individual using Eq. (31).
Step 6. Update the velocity and position of each individual using Eqs. (32) and (33).
Step 7. Loop to Step 2 until a predetermined stopping criterion is satisfied.

In the initialization phase, a population of individuals is selected. The fitness of each individual is evaluated by the objective function f(x), and the individual with the best fitness is selected and stored in x_best.

In the force calculation procedure, the masses of the individuals are calculated first. The mass function can be described by m_i = g(f(x_i)) with the conditions that m_i ∈ (0, 1] and g(·) is a positive, bounded, monotonically decreasing function. According to this definition, Eq. (29) is one such function [50,51]:

$$m_i = \exp\left(\frac{f(x_{best}) - f(x_i)}{f(x_{worst}) - f(x_{best})}\right), \quad \forall i \qquad (29)$$

where f(x_best) and f(x_worst) represent the objective function values of the best and worst individuals, in which best = arg min{f(x_i)} and worst = arg max{f(x_i)}. The component forces exerted on each individual by all other individuals can be calculated by [50,51]:

$$F_{ij,k} = \begin{cases} G\,m_i m_j (x_{j,k} - x_{i,k}), & \text{if } f(x_j) < f(x_i) \\ -G\,m_i m_j (x_{j,k} - x_{i,k}), & \text{if } f(x_j) \ge f(x_i) \end{cases} \qquad (30)$$

where F_{ij,k} is the component force exerted on individual i by individual j in the kth dimension; x_{i,k} and x_{j,k} represent the positions of individuals i and j in the kth dimension. Finally, the total force F_{i,k} exerted on individual i by all other individuals in the kth dimension can be calculated by:

$$F_{i,k} = \sum_{j \ne i} F_{ij,k}, \quad \forall i \ne best \qquad (31)$$

It is worth mentioning that the best individual cannot be attracted or repelled by other individuals; that is, the component force and the total force exerted on the best individual are zero.

In the velocity and position update step, the velocity and position of individual i at time t + 1 are updated by Eqs. (32) and (33) [50,51]:

$$v_{i,k}(t+1) = w\,v_{i,k}(t) + \bar{h}\,\frac{F_{i,k}}{m_i}, \quad \forall i \ne best \qquad (32)$$

$$x_{i,k}(t+1) = x_{i,k}(t) + v_{i,k}(t+1), \quad \forall i \ne best \qquad (33)$$

where v_{i,k}(t) and x_{i,k}(t) are the velocity and position of individual i in the kth dimension at generation t; h̄ is a random variable generated from a uniform distribution on (0, 1); w is an inertia weight in the range (0, 1). The movement of each individual is restricted to the feasible domain, with x_{i,k} ∈ [x_min, x_max] and v_{i,k} ∈ [v_min, v_max]. It is worth mentioning that the current best individual does not move.

Finally, if a stopping criterion is satisfied, the computation is terminated and the optimal solution is output; otherwise, the algorithm loops to Step 2 until a predetermined stopping criterion is met.
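A compact sketch of one APO generation, following Eqs. (29)–(33), is shown below. The constant G, the inertia weight, the velocity limit, the bounds and the small guard added to the denominator of Eq. (29) are illustrative assumptions; f is the objective to be minimized.

```python
import numpy as np

# Sketch of one APO generation following Eqs. (29)-(33); settings illustrative.
def apo_step(X, V, f, G=10.0, w=0.9, x_bounds=(0.0, 1.0), v_max=0.1, rng=None):
    rng = rng or np.random.default_rng()
    fitness = np.array([f(x) for x in X])
    best, worst = np.argmin(fitness), np.argmax(fitness)
    # masses, Eq. (29); a tiny epsilon guards against a zero denominator
    m = np.exp((fitness[best] - fitness) /
               (fitness[worst] - fitness[best] + 1e-12))
    F = np.zeros_like(X)
    for i in range(len(X)):
        if i == best:
            continue                      # the best individual feels no force
        for j in range(len(X)):
            if j == i:
                continue
            sign = 1.0 if fitness[j] < fitness[i] else -1.0   # Eq. (30)
            F[i] += sign * G * m[i] * m[j] * (X[j] - X[i])
    h = rng.random(X.shape)               # uniform random numbers in (0, 1)
    V_new = w * V + h * F / m[:, None]    # Eq. (32)
    V_new[best] = 0.0                     # the best individual does not move
    V_new = np.clip(V_new, -v_max, v_max)
    X_new = np.clip(X + V_new, *x_bounds) # Eq. (33) with feasibility bounds
    return X_new, V_new
```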

5.3. Hybrid algorithm

The homotopy method has a favorable capability for local search; however, it is hard for the algorithm to ensure a globally optimal solution, since the search process may fall into the attraction domain of a local optimum. The APO algorithm has satisfactory global search ability; unfortunately, its local search capability is relatively weak. In practice, an algorithm that integrates the merits of local search and global search is highly desirable [52]. Presently, various algorithms that integrate the merits of local search algorithms and stochastic global search algorithms have been developed for solving complicated optimization problems; more details can be found in [52–54]. In this paper, a hybrid algorithm that integrates the advantages of the homotopy method and the APO algorithm is proposed, which can be summarized as follows:

Step 1. Specify the problem of interest and initialize the algorithmic parameters for the homotopy method and the APO algorithm.
Step 2. Implement the homotopy method to find a local optimal solution.
Step 3. Generate an initial population in the neighborhood of the local optimal solution found in Step 2, and run a predetermined number of iterations of the APO algorithm.



Step 4. Select the best individual from the APO population, set it as the new initial solution, and loop to Step 2. Repeat the above steps until a predetermined stopping criterion is satisfied.

Several interesting properties can be found in the above iterative scheme. The homotopy algorithm can quickly find a local optimal solution. Implementing the APO algorithm after the homotopy algorithm accelerates the convergence of the APO algorithm, since a good initial solution is provided, increases the possibility of escaping from the attraction domain of the local optimum and expands the search space. After an entire iteration, the best candidate solution found by the APO algorithm is further exploited by the homotopy algorithm, which not only accelerates the convergence of the APO algorithm but also enhances the local search capability. In particular, when a whole iteration has been implemented, a new initial solution is obtained for the homotopy method. This process resembles a multi-start strategy, which may enhance the global search capability of the homotopy method. A distinct feature of the proposed algorithm is that it integrates the advantages of the local search of the local optimization method and the global search of the stochastic global optimization algorithm.
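The hybrid scheme of Steps 1–4 can be summarized in a short driver. Here local_solve stands for the homotopy stage of Section 5.1 and apo_step is the sketch from Section 5.2 above; the neighborhood spread, the population size and the loop counts are illustrative assumptions.

```python
import numpy as np

# Skeleton of the hybrid scheme: a homotopy-based local search is followed by
# a few APO generations started near the local solution, and the best
# individual re-seeds the next homotopy run.  All names are illustrative.
def hybrid_solve(x0, f, local_solve, n_outer=5, pop_size=50,
                 spread=0.05, apo_gens=20, rng=None):
    rng = rng or np.random.default_rng()
    x = x0.copy()
    for _ in range(n_outer):
        x = local_solve(x)                                        # Step 2
        X = x + spread * rng.standard_normal((pop_size, x.size))  # Step 3
        V = np.zeros_like(X)
        for _ in range(apo_gens):
            X, V = apo_step(X, V, f, rng=rng)                     # Section 5.2
        x = min(X, key=f)                                         # Step 4
    return x
```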

6. Numerical simulations and discussions

According to the above discussions, the proposed reconstruction process involves solving Eq. (20) using the proposed iteration scheme, which is referred to below as the generalized reconstruction (GR) algorithm. In this section, numerical simulations were implemented to evaluate the feasibility and efficiency of the GR algorithm, and the reconstruction quality is compared with the standard Tikhonov regularization (STR) method [33], the SpaRSA algorithm [20], the l1_ls method [15] and the GPSR algorithm [16]. All algorithms are implemented in MATLAB 7.0 on a PC with a Pentium IV 2.4 GHz CPU and 4 GB of memory. The reconstruction precision is evaluated by the relative error, which is defined by:

$$\eta = \frac{\|x_{original} - x_{reconstructed}\|}{\|x_{original}\|} \qquad (34)$$

where η stands for the relative error; x_original and x_reconstructed represent the original signal and the reconstructed signal. In this paper, the noise on the measurement data is defined as:

$$y_{contaminated} = y_{original} + r \qquad (35)$$

where r = σ_1 · randn; σ_1 represents the standard deviation, and randn stands for a normally distributed random number with mean 0 and standard deviation 1, which can be generated by the function 'randn' in MATLAB; y_original and y_contaminated denote the original and noise-contaminated measurement data. Similarly, the inaccuracy of the measurement matrix is defined as:

$$\Phi_{contaminated} = \Phi_{original} + E \qquad (36)$$

where E = σ_2 · randn; σ_2 is the standard deviation; Φ_original and Φ_contaminated stand for the original and noise-contaminated measurement matrices.
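The error measure of Eq. (34) and the contamination models of Eqs. (35) and (36) translate directly into a few lines; the helper names below are illustrative.

```python
import numpy as np

# Sketch of the error and noise definitions of Eqs. (34)-(36); sigma1 and
# sigma2 play the roles of the standard deviations used in the cases below.
def relative_error(x_original, x_reconstructed):
    return (np.linalg.norm(x_original - x_reconstructed)
            / np.linalg.norm(x_original))

def contaminate(Phi, y, sigma1, sigma2, rng=None):
    rng = rng or np.random.default_rng()
    y_noisy = y + sigma1 * rng.standard_normal(y.shape)        # Eq. (35)
    Phi_noisy = Phi + sigma2 * rng.standard_normal(Phi.shape)  # Eq. (36)
    return Phi_noisy, y_noisy
```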

6.1. Case 1

In this section, a complicated signal, which is presented in Fig. 1, is used to evaluate the feasibility and effectiveness of the GR algorithm, and the reconstruction quality is compared with the STR method, the SpaRSA algorithm, the l1_ls method and the GPSR algorithm. In this case, the inaccuracies of the measurement matrix and the measurement data are simulated, and the standard deviations for the measurement data and the measurement matrix, σ_1 and σ_2, are 0.01 and 0.001, respectively. The value of the regularization parameter for the STR method is 1 × 10⁻¹⁰. In the GR algorithm, δ_1 = 0.4, δ_2 = 0.3, δ_3 = 0.3, μ_1 = 1, μ_2 = 1, ξ = 1, β = 10, u = 1, p_i = p and q_j = q, where the values of p and q are determined according to the specific reconstruction task; matrix Ψ is defined by the 'Symlets 8' wavelet in MATLAB, and the rest of the algorithmic parameters are listed in Table 1. In the APO algorithm, the population size is 50. The regularization parameters for the SpaRSA method, the l1_ls algorithm and the GPSR method are 0.03, 1 × 10⁻⁴ and 0.3, respectively. The Gaussian measurement matrix is employed, and the sizes of the measurement matrix for subfigures (a) and (b) in Figs. 2–6 are 100 × 2048 and 300 × 2048. Figs. 2–6 show the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm under different sampling numbers. The reconstruction errors for the compared algorithms under different sampling numbers are listed in Table 2.

Figs. 2–5 demonstrate the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method under different sampling numbers when the inaccuracies of the measurement matrix and the measurement data are considered. It can be seen from Table 2 that when the sampling number is 100, the quality of the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method is far from satisfactory, and the distortions are large. When the sampling number is 300, however, the quality of the signals reconstructed by these algorithms is improved.


Fig. 1. Original signal [amplitude vs. signal index].

Table 1. Algorithmic parameters for the GR algorithm.

Algorithmic parameter    Fig. 6a        Fig. 6b
α_1                      1 × 10⁻¹⁰      1 × 10⁻¹⁰
α_2                      0.005          0.005
q                        1              1
p                        1.3            1.3

Fig. 2. Reconstructed signals by the STR method [panels (a) and (b); amplitude vs. signal index].


Fig. 6 shows the signals reconstructed by the GR algorithm under different sampling numbers when the inaccuracies of the measurement matrix and the measurement data are considered. As expected, the GR algorithm shows satisfactory numerical performance; the accuracy of the reconstructed signals is improved compared with the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method, and the detailed information of the original signal is well reconstructed. At the same time, it can be seen from Figs. 2–6 that the quality of the signals reconstructed by the GR algorithm under different sampling numbers is higher than that of the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method, which indicates that the GR algorithm is successful in solving CS reconstruction problems.

Table 2 shows the reconstruction errors under different sampling numbers for the compared algorithms. It can be seen that, for the cases simulated in this section, the GR algorithm gives the smallest reconstruction errors, which indicates that the GR algorithm is competent for solving CS reconstruction problems. Additionally, it can be seen from Table 2 that the GR algorithm can reconstruct the original signal well from a relatively small number of samples, which will facilitate real applications.


Fig. 3. Reconstructed signals by the SpaRSA algorithm [panels (a) and (b); amplitude vs. signal index].

Fig. 4. Reconstructed signals by the l1_ls algorithm [panels (a) and (b); amplitude vs. signal index].

Fig. 5. Reconstructed signals by the GPSR algorithm [panels (a) and (b); amplitude vs. signal index].


6.2. Case 2

A complex signal, presented in Fig. 7, is used to further evaluate the feasibility of the GR algorithm. In this case, the inaccuracies of the measurement matrix and the measurement data are simulated, and the standard deviations for the measurement data and the measurement matrix, σ_1 and σ_2, are both 0.001.


Fig. 6. Reconstructed signals by the GR algorithm [panels (a) and (b); amplitude vs. signal index].

Table 2. Reconstruction errors under different sampling numbers.

Algorithm    100       300
STR          0.9787    0.9312
SpaRSA       0.3882    0.0577
l1_ls        0.2922    0.0599
GPSR         0.7713    0.0867
GR           0.0688    0.0406


Matrix Ψ is defined by the 'Daubechies 8' wavelet in MATLAB. The Gaussian measurement matrix is used in this case, and the sizes of the measurement matrix for subfigures (a) and (b) in Figs. 8–12 are 300 × 2048 and 400 × 2048. The algorithmic parameters for the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm are the same as in Section 6.1. Figs. 8–12 show the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm under different sampling numbers. Table 3 presents the reconstruction errors for the compared algorithms under different sampling numbers.

Figs. 8–11 show the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method under different sampling numbers when the inaccuracies of the measurement data and the measurement matrix are considered. The numerical simulation results indicate that the quality of the signals reconstructed by these algorithms is not satisfactory when the sampling numbers are relatively small. In particular, it can be seen from Figs. 8–11 that when the sampling number is 300, the distortions of the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method are relatively serious.

Fig. 7. Original signal [amplitude vs. signal index].


Fig. 8. Reconstructed signals by the STR method [panels (a) and (b); amplitude vs. signal index].

Fig. 9. Reconstructed signals by the SpaRSA algorithm [panels (a) and (b); amplitude vs. signal index].

Fig. 10. Reconstructed signals by the l1_ls algorithm [panels (a) and (b); amplitude vs. signal index].


The signals reconstructed by the GR algorithm under different sampling numbers are presented in Fig. 12.


Fig. 11. Reconstructed signals by the GPSR algorithm [panels (a) and (b); amplitude vs. signal index].

Fig. 12. Reconstructed signals by the GR algorithm under different sampling numbers [panels (a) and (b); amplitude vs. signal index].

Table 3. Reconstruction errors under different sampling numbers.

Algorithm    300       400
STR          0.9298    0.8764
SpaRSA       0.2342    0.1278
l1_ls        0.2325    0.1239
GPSR         0.3446    0.2536
GR           0.0887    0.0573


It can be seen that, when the inaccuracies of the measurement matrix and the measurement data are considered, the GR algorithm shows satisfactory numerical performance, and the quality of the signals reconstructed by the GR algorithm is higher than that of the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method. In particular, it can be seen from Table 3 that the GR algorithm gives the smallest reconstruction errors in all cases simulated in this section, which indicates that the GR algorithm is successful in solving CS reconstruction problems.

6.3. Case 3

In this section, an extremely complicated signal, presented in Fig. 13, is employed to further evaluate the numerical performance and effectiveness of the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm. Matrix Ψ is defined by the 'Symlets 8' wavelet in MATLAB. The algorithmic parameters for the compared algorithms are the same as in Section 6.1. The Gaussian measurement matrix is used in this section, and the sizes of the measurement matrix for subfigures (a) and (b) in Figs. 14–18 are 100 × 1024 and 200 × 1024. In this case, the inaccuracies of the measurement matrix and the measurement data are simulated, and the standard deviations for the measurement data and the measurement matrix, σ_1 and σ_2, are 0.01 and 0.001, respectively. Figs. 14–18 show the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm under different sampling numbers. Table 4 lists the reconstruction errors for the compared algorithms under different sampling numbers.

Figs. 14–18 show the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm under different sampling numbers when the inaccuracies of the measurement matrix and the measurement data are considered. It can be seen from Figs. 14–18 that the quality of the signals reconstructed by the GR algorithm is higher than that of the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method, and the detailed information of the original signal is well reconstructed. At the same time, it can be seen from Table 4 that in all cases the GR algorithm gives the smallest reconstruction errors and can reconstruct the detailed information of the original signal well from a relatively small number of samples, which indicates that the GR algorithm is successful in solving CS reconstruction problems.

6.4. Case 4

In this section, a complex signal, presented in Fig. 19, is used to further evaluate the feasibility and efficiency of the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm. Matrix Ψ is defined by the 'Daubechies 8' wavelet in MATLAB. The algorithmic parameters for the compared algorithms are the same as in Section 6.1. The Gaussian measurement matrix is used, and the sizes of the measurement matrix for subfigures (a) and (b) in Figs. 20–24 are 200 × 1024 and 450 × 1024. In this case, the inaccuracies of the measurement matrix and the measurement data are simulated, and the standard deviations for the measurement data and the measurement matrix, σ_1 and σ_2, are 0.06 and 0.002, respectively. Figs. 20–24 show the signals reconstructed by the compared algorithms under different sampling numbers. Table 5 lists the reconstruction errors for the compared algorithms under different sampling numbers.

Figs. 20–24 show the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm under different sampling numbers when the inaccuracies of the measurement matrix and the measurement data are considered. It can be seen from Figs. 20–24 that the quality of the signals reconstructed by the GR algorithm outperforms that of the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method, and the distortions of the signals reconstructed by the GR algorithm are the smallest among the compared algorithms, which indicates that the GR algorithm is successful in solving CS inverse problems. Additionally, it can be seen from Table 5 that in all cases the GR algorithm gives the smallest reconstruction errors and can reconstruct the detailed information of the original signal well from a relatively small number of samples, which is a desirable feature for real applications.

6.5. Case 5

In this section, noise-contaminated data with different standard deviations are used to further evaluate the numerical performance of the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm. The original signal is presented in Fig. 25. For the GR algorithm, p = 1, and the rest of the algorithmic parameters are the same as in Section 6.1. Matrix Ψ is defined by the 'Symlets 2' wavelet in MATLAB. The Gaussian measurement matrix is used, and the size of the measurement matrix for subfigures (a) and (b) in Figs. 26–30 is 300 × 1024. The inaccuracies of the measurement data and the measurement matrix are simulated, and the standard deviations for subfigures (a) and (b) in Figs. 26–30 are σ_1 = 0.01 and σ_2 = 0.001, and σ_1 = 0.06 and σ_2 = 0.002, respectively.

Fig. 13. Original signal [amplitude vs. signal index].


Fig. 14. Reconstructed signals by the STR method [panels (a) and (b); amplitude vs. signal index].

Fig. 15. Reconstructed signals by the SpaRSA algorithm [panels (a) and (b); amplitude vs. signal index].

Fig. 16. Reconstructed signals by the l1_ls algorithm [panels (a) and (b); amplitude vs. signal index].


Figs. 26–30 present the signals reconstructed by the compared algorithms. The reconstruction errors for the compared algorithms are shown in Table 6.


Fig. 17. Reconstructed signals by the GPSR algorithm [panels (a) and (b); amplitude vs. signal index].

Fig. 18. Reconstructed signals by the GR algorithm [panels (a) and (b); amplitude vs. signal index].

Table 4. Reconstruction errors under different sampling numbers.

Algorithm    100       200
STR          0.9485    0.9075
SpaRSA       0.1332    0.0857
l1_ls        0.1331    0.0843
GPSR         0.2751    0.1394
GR           0.0659    0.0549


When the inaccuracies of the measurement matrix and the measurement data are considered, Figs. 26–30 illustrate the signals reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm. As expected, the GR algorithm shows satisfactory robustness. It can be seen from Table 6 that under different noise conditions the GR algorithm gives the smallest reconstruction errors, which indicates that the GR algorithm is successful in treating the inaccuracies of the measurement matrix and the measurement data. Additionally, it is worth mentioning that the reconstruction quality decreases as the noise increases, which indicates that the noise in the measurement matrix and the measurement data should be treated seriously; this issue should be further investigated in the future.

6.6. Case 6

To further evaluate the feasibility and efficiency of the GR algorithm, a two-dimensional case is also illustrated. The original image is the 'checkerboard' image in MATLAB, which is presented in Fig. 31. In this case, matrix Ψ is defined by the discrete cosine transform (DCT) matrix and the Gaussian measurement matrix is employed.


Fig. 19. Original signal [amplitude vs. signal index].

Fig. 20. Reconstructed signals by the STR method [panels (a) and (b); amplitude vs. signal index].

Fig. 21. Reconstructed signals by the SpaRSA algorithm [panels (a) and (b); amplitude vs. signal index].


The size of the original image is 48 × 48, and for easy computation the image matrix is reshaped into a vector of dimension 2304 × 1. The sizes of the measurement matrix for subfigures (a) and (b) in Figs. 32–36 are 700 × 2304 and 800 × 2304. In this section, the inaccuracies of the measurement matrix and the measurement data are simulated, and the standard deviations for the measurement data and the measurement matrix, σ_1 and σ_2, are 0.01 and 0.001, respectively.


Fig. 22. Reconstructed signals by the l1_ls algorithm [panels (a) and (b); amplitude vs. signal index].

Fig. 23. Reconstructed signals by the GPSR algorithm [panels (a) and (b); amplitude vs. signal index].

Fig. 24. Reconstructed signals by the GR algorithm [panels (a) and (b); amplitude vs. signal index].


The algorithmic parameters for the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm are the same as in Section 6.1.


Table 5. Reconstruction errors under different sampling numbers.

Algorithm    200       450
STR          0.9419    0.8792
SpaRSA       0.1649    0.0679
l1_ls        0.1633    0.0766
GPSR         0.2581    0.0789
GR           0.0994    0.0517

Fig. 25. Original signal [amplitude vs. signal index].

Fig. 26. Reconstructed signals by the STR method [panels (a) and (b); amplitude vs. signal index].


Figs. 32–36 show the images reconstructed by the compared algorithms under different sampling numbers. The reconstruction errors for the compared algorithms are shown in Table 7.

Figs. 32–36 show the images reconstructed by the STR algorithm, the SpaRSA method, the l1_ls algorithm, the GPSR method and the GR algorithm under different sampling numbers when the inaccuracies of the measurement matrix and the measurement data are considered. It can be seen from Figs. 32–36 that, as expected, when the inaccuracies of the measurement matrix and the measurement data are considered, the quality of the images reconstructed by the GR algorithm is higher than that of the STR algorithm, the SpaRSA method, the l1_ls algorithm and the GPSR method. In addition, it can be seen from Table 7 that in all cases the GR algorithm gives the smallest reconstruction errors, which indicates that the GR algorithm is successful in solving CS inverse problems.


Fig. 27. Reconstructed signals by the SpaRSA algorithm [panels (a) and (b); amplitude vs. signal index].

Fig. 28. Reconstructed signals by the l1_ls algorithm [panels (a) and (b); amplitude vs. signal index].

Fig. 29. Reconstructed signals by the GPSR algorithm [panels (a) and (b); amplitude vs. signal index].



Fig. 30. Reconstructed signals by the GR algorithm [panels (a) and (b); amplitude vs. signal index].

Table 6. Reconstruction errors under different noise conditions.

Algorithm    σ_1 = 0.01, σ_2 = 0.001    σ_1 = 0.06, σ_2 = 0.002
STR          0.8595                     0.8606
SpaRSA       0.0917                     0.1532
l1_ls        0.0974                     0.1639
GPSR         0.2330                     0.2460
GR           0.0352                     0.0969

Fig. 31. Original image.


7. Conclusions

The accuracy of reconstruction algorithms plays a vital role in real applications of the CS theory. In this paper, a generalized inversion model, which simultaneously considers the inaccuracies of the measurement matrix and the measurement data, is proposed for CS reconstruction. A generalized objective functional, which integrates the advantages of the LS estimation and the combinational M-estimation, is proposed. An iterative scheme that integrates the advantages of the homotopy method and the APO algorithm is developed for solving the proposed objective functional. Numerical simulations are implemented to evaluate the feasibility and effectiveness of the proposed algorithm. For the cases simulated in this paper, the reconstruction accuracy is improved, which indicates that the proposed algorithm is successful in solving CS inverse problems. As a result, a promising algorithm is introduced for solving CS inverse problems.

Applications indicate that each algorithm has its advantages and disadvantages, and may show different numerical performances for different reconstruction tasks. In real applications, the selection of an appropriate algorithm depends mainly on the characteristics and prior information of a specific reconstruction task. Our work provides an alternative approach for CS reconstruction, which needs to be further validated on more cases in the future.


Fig. 32. Reconstructed images by the STR method.

Fig. 33. Reconstructed images by the SpaRSA algorithm.

Fig. 34. Reconstructed images by the l1_ls algorithm.




Fig. 35. Reconstructed images by the GPSR algorithm.

Fig. 36. Reconstructed images by the GR algorithm.

Table 7. Reconstruction errors under different sampling numbers.

Algorithms    Sampling number 700    Sampling number 800
STR           0.7249                 0.6667
SpaRSA        0.1618                 0.1410
l1_ls         0.1521                 0.1270
GPSR          0.2852                 0.2557
GR            0.1147                 0.0864


Acknowledgements

The authors wish to thank the National Natural Science Foundation of China (Nos. 51206048, 51006106 and 50906086), the Fundamental Research Funds for the Central Universities (No. 10MG20), the China Postdoctoral Science Foundation (Nos. 20090460263 and 201003088), and the Program for Changjiang Scholars and Innovative Research Team in University (No. IRT0952) for supporting this research.

References

[1] J.C. Ye, Compressed sensing shape estimation of star-shaped objects in Fourier imaging, IEEE Signal Process. Lett. 14 (2007) 750–753.
[2] J. Bobin, J.L. Starck, R. Ottensamer, Compressed sensing in astronomy, IEEE J. Select. Top. Signal Process. 2 (2008) 718–726.
[3] J. Ma, Single-pixel remote sensing, IEEE Geosci. Remote Sens. Lett. 6 (2009) 199–203.


[4] W. Bajwa, J. Haupt, A. Sayeed, R. Nowak, Compressive wireless sensing, in: The Fifth International Conference on Information Processing in Sensor Networks, Nashville, TN, USA, 19–21 April 2006, pp. 134–142.
[5] M. Lustig, D.L. Donoho, J.M. Santos, J.M. Pauly, Compressed sensing MRI, IEEE Signal Process. Mag. 25 (2008) 72–82.
[6] Y. Rivenson, A. Stern, Compressed imaging with a separable sensing operator, IEEE Signal Process. Lett. 16 (2009) 449–452.
[7] P. Parasoglou, D. Malioutov, A.J. Sederman, J. Rasburn, H. Powell, L.F. Gladden, A. Blake, M.L. Johns, Quantitative single point imaging with compressed sensing, J. Magn. Reson. 201 (2009) 72–80.
[8] J. Provost, F. Lesage, The application of compressed sensing for photo-acoustic tomography, IEEE Trans. Med. Imaging 28 (2009) 585–594.
[9] M.E. Gehm, R. John, D.J. Brady, R.M. Willett, T.J. Schulz, Single-shot compressive spectral imaging with a dual-disperser architecture, Opt. Express 15 (2007) 1403–1427.
[10] G. Hennenfent, F.J. Herrmann, Simply denoise: wavefield reconstruction via jittered undersampling, Geophysics 73 (2008) 19–28.
[11] D. Needell, J.A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmonic Anal. 26 (2009) 310–321.
[12] J.A. Tropp, A.C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. Inf. Theory 53 (2007) 4655–4666.
[13] D.L. Donoho, Y. Tsaig, I. Drori, J.L. Starck, Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit, IEEE Trans. Inf. Theory 58 (2012) 1094–1121.
[14] D. Needell, R. Vershynin, Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit, Found. Comput. Math. 9 (2009) 317–334.
[15] S.J. Kim, K. Koh, M. Lustig, S. Boyd, D. Gorinevsky, An interior-point method for large-scale l1-regularized least squares, IEEE J. Select. Top. Signal Process. 1 (2007) 606–617.
[16] M.A.T. Figueiredo, R.D. Nowak, S.J. Wright, Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems, IEEE J. Select. Top. Signal Process. 1 (2007) 586–597.
[17] T. Blumensath, M.E. Davies, Iterative thresholding for sparse approximations, J. Fourier Anal. Appl. 14 (2008) 629–654.
[18] W. Yin, S. Osher, D. Goldfarb, J. Darbon, Bregman iterative algorithms for L1-minimization with applications to compressed sensing, SIAM J. Imag. Sci. 1 (2008) 143–168.
[19] T. Goldstein, S. Osher, The split Bregman algorithm for L1-regularized problems, SIAM J. Imag. Sci. 2 (2009) 323–343.
[20] S.J. Wright, R.D. Nowak, M.A.T. Figueiredo, Sparse reconstruction by separable approximation, IEEE Trans. Signal Process. 57 (2009) 2479–2493.
[21] M.A. Herman, T. Strohmer, General deviants: an analysis of perturbations in compressed sensing, IEEE J. Select. Top. Signal Process. 4 (2010) 342–349.
[22] E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inf. Theory 52 (2006) 489–509.
[23] E.J. Candès, M.B. Wakin, An introduction to compressive sampling, IEEE Signal Process. Mag. 25 (2008) 21–30.
[24] E. Candès, J. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Commun. Pure Appl. Math. 59 (2005) 1207–1233.
[25] D.L. Donoho, Compressed sensing, IEEE Trans. Inf. Theory 52 (2006) 1289–1306.
[26] J. Ma, Compressed sensing by inverse scale space and curvelet thresholding, Appl. Math. Comput. 206 (2008) 980–988.
[27] A. Beck, A. Ben-Tal, On the solution of the Tikhonov regularization of the total least squares problem, SIAM J. Optim. 17 (2006) 98–118.
[28] J.G. Sun, Perturbation Analysis of Matrix, Science Press, Beijing, 1987.
[29] Q.S. Chen, Mathematical Principles of Digital Signals Processing, Petroleum Industry Press, Beijing, 1993.
[30] G.X. Chai, S.Y. Hong, Semiparametric Regression Model, Anhui Education Press, Hefei, 1995.
[31] I. Markovsky, D.M. Sima, S.V. Huffel, Total least squares methods, WIREs Comput. Stat. 2 (2010) 212–217.
[32] Y.H. Liu, L.J. Song, Ridge estimation method for ill conditioned semiparametric regression model, Hydrogr. Surv. Chart. 28 (2008) 1–3.
[33] A.N. Tikhonov, V.Y. Arsenin, Solution of Ill-Posed Problems, V.H. Winston & Sons, New York, 1977.
[34] J. Lei, S. Liu, Z.H. Li, M. Sun, X.Y. Wang, A multi-scale image reconstruction algorithm for electrical capacitance tomography, Appl. Math. Model. 35 (2011) 2585–2606.
[35] P.J. Huber, Robust Statistics, John Wiley & Sons, New York, 1981.
[36] Y. Dodge, J. Jureckova, Adaptive Regression, Springer-Verlag, New York, 2000.
[37] A. Cichocki, S. Amari, Adaptive Blind Signal and Image Processing, John Wiley & Sons, 2002.
[38] L.C. Potter, E. Ertin, J.T. Parker, M. Cetin, Sparsity and compressed sensing in radar imaging, Proc. IEEE 98 (2010) 1006–1020.
[39] J. Trzasko, A. Manduca, Highly undersampled magnetic resonance image reconstruction via homotopic L0-minimization, IEEE Trans. Med. Imaging 28 (2009) 106–121.
[40] Y.F. Wang, Computational Methods for Inverse Problems and Their Applications, Higher Education Press, Beijing, 2007.
[41] Z.M. Wang, J.B. Zhu, Resolution Improvement Approaches for SAR Images, Science Press, Beijing, 2006.
[42] B. Han, L. Li, Computational Methods and Applications of Nonlinear Ill-posed Problems, Science Press, Beijing, 2011.
[43] M.E. Zervakis, A.K. Katsaggelos, T.M. Kwon, A class of robust entropic functionals for image restoration, IEEE Trans. Image Process. 4 (1995) 752–773.
[44] F.M. Ham, I. Kostanic, Principles of Neurocomputing for Science and Engineering, McGraw-Hill, 2001.
[45] R. Acar, C.R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems, Inverse Prob. 10 (1994) 1217–1229.
[46] Y.M. Guo, S.L. Wang, X. Cai, Engineering Optimization: Principle, Algorithm and Implementation, China Machine Press, Beijing, 2008.
[47] Z.H. Ma, Handbook of Modern Applied Mathematics, Tsinghua University Press, Beijing, 2005.
[48] X.D. Huang, Z.G. Zeng, Y.N. Ma, Theory and Methods for Nonlinear Numerical Analysis, Wuhan University Press, Wuhan, 2004.
[49] Q.Y. Li, Z.H. Mo, L.Q. Qi, Numerical Methods of System of Nonlinear Equations, Science Press, Beijing, 1987.
[50] L.P. Xie, J.C. Zeng, An extended artificial physics optimization for global optimization problems, in: Fourth International Conference on Innovative Computing, Information and Control, 2009, pp. 881–884.
[51] L.P. Xie, J.C. Zeng, R.A. Formato, Convergence analysis and performance of the extended artificial physics optimization algorithm, Appl. Math. Comput. 218 (2011) 4000–4011.
[52] H.B. Duan, X.Y. Zhang, C.F. Xu, Bio-inspired Computation, Science Press, Beijing, 2011.
[53] C.Y. Liang, C.G. Wu, X.H. Shi, H.W. Ge, Theory and Applications of the Swarm Intelligence Algorithms, Science Press, Beijing, 2009.
[54] E. Zahara, Y.T. Kao, Hybrid Nelder-Mead simplex search and particle swarm optimization for constrained engineering design problems, Expert Syst. Appl. 36 (2009) 3880–3886.