Research Article
The Sparsity Adaptive Reconstruction Algorithm Based on Simulated Annealing for Compressed Sensing
Yangyang Li ,1 Jianping Zhang ,2 Guiling Sun ,1 and Dongxue Lu1
1College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
2Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208, USA
Correspondence should be addressed to Jianping Zhang; [email protected]
Received 27 January 2019; Revised 17 May 2019; Accepted 2 July 2019; Published 14 July 2019
Academic Editor: Panajotis Agathoklis
Copyright © 2019 Yangyang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper proposes a novel sparsity adaptive simulated annealing algorithm for sparse recovery. The algorithm combines the advantages of the sparsity adaptive matching pursuit (SAMP) algorithm and the global search capability of the simulated annealing method to recover sparse signals. First, we estimate the sparsity and the initial support set, which serve as the initial search point of the proposed optimization algorithm, using the idea of SAMP. Then, we design a two-loop reconstruction method that finds the support sets efficiently and accurately by updating the optimization direction. Finally, we exploit the global optimization ability of the sparsity adaptive simulated annealing algorithm to guide the sparse reconstruction. The proposed sparsity adaptive greedy pursuit model has a simple geometric structure, can reach the global optimal solution, and surpasses greedy algorithms in recovery quality. Our experimental results validate that the proposed algorithm outperforms existing state-of-the-art sparse reconstruction algorithms.
1. Introduction
In recent years, research on compressed sensing (CS) [1–3] has received increasing attention in many fields. CS is a novel signal processing theory that can efficiently reconstruct a signal by finding solutions to underdetermined linear equations [4]. Remarkably, CS theory can exactly restore sparse or compressible signals from a sampling rate that does not satisfy the Nyquist–Shannon sampling theorem, and many papers have demonstrated that CS can effectively extract the key information of a signal from a few noncorrelated measurements.
The core idea of CS is that sampling and compression are performed simultaneously. CS technology can reduce hardware requirements, further lower the sampling rate, improve signal restoration quality, and save the cost of signal processing and transmission. Currently, CS is widely used in wireless sensor networks [5, 6], information theory [7], image processing [8–10], earth science, optical/microwave imaging, pattern recognition [11], wireless communications [12, 13], atmospheric science, geology, and other fields.
CS theory mainly covers three aspects: (1) sparse representation; (2) uncorrelated sampling; (3) sparse reconstruction. Reconstructing the sparse signal is the crucial step, and proposing a reconstruction algorithm with reliable accuracy is a major challenge. Since reconstruction methods must recover the original signal from low-dimensional measurements, signal reconstruction requires solving an underdetermined equation that may admit infinitely many solutions. Mathematically, the issue can be regarded as l0-norm minimization. However, this is an NP-hard problem [14]: it requires exhaustively listing all possible solutions of the original signal, which is unrealistic for large-scale data processing.
At present, scholars have proposed many reconstruction algorithms. Mostly, the nonconvex l0-norm minimization is replaced with the convex l1-norm minimization to design the reconstruction model. There are two important classes of reconstruction algorithms: greedy pursuit algorithms and convex optimization algorithms. A convex optimization algorithm solves the much easier l1-norm minimization problem.
Hindawi, Journal of Electrical and Computer Engineering, Volume 2019, Article ID 6950819, 8 pages. https://doi.org/10.1155/2019/6950819
Convex optimization (e.g., the basis pursuit algorithm) achieves exact restoration but at high computational complexity. In contrast, a series of iterative greedy algorithms has received great attention due to their low computational complexity and simple geometric interpretation. These greedy algorithms include orthogonal matching pursuit (OMP) [15], stagewise OMP (StOMP) [4], regularized OMP (ROMP) [16], subspace pursuit (SP) [17], compressive sampling matching pursuit (CoSaMP) [18], and so on. The key step of these methods is to estimate the support set with high probability. At each iteration, one or several indexes of the sparse signal are added to the support set for subsequent processing; if those indexes are reliable, they are added to the final support set used to recover the original signal.
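To make the greedy support-estimation step concrete, the following NumPy snippet sketches a minimal OMP. It is an illustration of the idea described above, not the exact implementation from the cited papers; the dimensions and coefficient values are assumptions chosen for the example.

```python
import numpy as np

def omp(Phi, y, K):
    """Minimal orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit the selected coefficients by
    least squares (a sketch of the greedy pursuit idea)."""
    N = Phi.shape[1]
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(K):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))  # most correlated column
        if idx not in support:
            support.append(idx)
        # least-squares re-fit on the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef           # orthogonal residual
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat

# usage: recover a 3-sparse signal of length 64 from 32 Gaussian measurements
rng = np.random.default_rng(0)
Phi = rng.standard_normal((32, 64)) / np.sqrt(32)
x = np.zeros(64)
x[[3, 17, 40]] = [2.0, -3.0, 2.5]
x_hat = omp(Phi, Phi @ x, 3)
```

With this many measurements relative to the sparsity, the greedy selection typically identifies the true support exactly, and the final least-squares fit then reproduces the signal.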
In this paper, we propose a novel sparsity adaptive simulated annealing method (SASA) to reconstruct sparse signals efficiently. This new method combines the advantages of the SAMP [19] algorithm and the simulated annealing (SA) algorithm to solve the CS reconstruction problem. The simulated annealing algorithm is known for its efficient global optimization strategy and superior results on nonconvex problems. The proposed algorithm can estimate the sparsity level, and it achieves superior performance in finding the optimal solution. The experimental results show that the proposed SASA method outperforms existing state-of-the-art sparse reconstruction algorithms.
The main contributions of this paper are listed as follows:
(1) We take advantage of the sparsity adaptive simulated annealing algorithm in global searching to guide the sparse reconstruction
(2) The proposed algorithm can reconstruct a sparse signal accurately, and it does not require the sparsity as a priori information
(3) The proposed algorithm has both the simple geometric structure of the greedy algorithm and the global optimization ability of the SA algorithm
The remainder of this paper is organized as follows. Section 2 introduces the background of CS and some approximation methods applied in later sections. In Section 3, we propose the SASA algorithm and give its framework. We discuss the experimental results in Section 4. Finally, Section 5 draws the conclusion.
2. Background and Related Work
In this section, we introduce the compressive sensing modeland the sparse signal approximation method.
2.1. Compressive Sensing Model. Sparse reconstruction recovers the high-dimensional original signal x (x ∈ R^(N×1)) from the measurement vector y (y ∈ R^(M×1)). The compressive sensing model can be defined as
y = Φx, (1)
where Φ is the sensing matrix (Φ ∈ R^(M×N), M ≪ N). Mathematically, this problem can be solved by l0-norm minimization, written as follows:
min_x ‖x‖_0, s.t. y = Φx, (2)
where ‖·‖_0 is the zero norm. However, l0-norm minimization is an NP-hard problem: it needs to exhaustively check all C_N^K (K is the sparsity) possible solutions to determine the locations of the nonzero entries in x. Therefore, it is usually converted to the convex l1-norm minimization problem as follows:
min_x ‖x‖_1, s.t. y = Φx. (3)
The above model is a convex optimization problem. It can be solved by linear programming methods or the Bregman split algorithm, and linear programming methods have been shown to solve such problems with high accuracy. Theoretically, CS guarantees that the sparse signal x can be exactly reconstructed from only a few observations y when the sensing matrix Φ satisfies the restricted isometry property (RIP). This is an important property for analyzing and designing reconstruction algorithms. The RIP is given as follows:
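The linear programming route mentioned above can be sketched as follows: problem (3) is recast with the standard variable split z = [x; u], where |x_i| ≤ u_i, so that minimizing Σu_i minimizes ‖x‖_1. This is a textbook reformulation, assuming SciPy's general-purpose LP solver; the paper does not specify a particular l1 solver.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 s.t. y = Phi x as a linear program over z = [x; u]."""
    M, N = Phi.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])   # objective: sum(u) = ||x||_1
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])            # x - u <= 0 and -x - u <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((M, N))])       # equality constraint Phi x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]

# usage: exact l1 recovery of a 2-sparse signal from 30 Gaussian measurements
rng = np.random.default_rng(1)
Phi = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 20]] = [1.0, -2.0]
x_hat = basis_pursuit(Phi, Phi @ x_true)
```

This illustrates the trade-off discussed in the introduction: the LP reliably finds the sparse solution, but at a cost that grows quickly with the problem size.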
(1 − δ_s)‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ_s)‖x‖_2^2, (4)

where δ_s is a constant (0 ≤ δ_s < 1) and ‖x‖_2 denotes the l2 norm of a vector x.

Generally speaking, if δ_s is very close to one, then it is possible that y cannot preserve any information on x (i.e., ‖Φx‖_2^2 ≈ 0, x lies in the null space of the sensing matrix Φ). In this case, it is nearly impossible to reconstruct the sparse signal x by using any greedy algorithm.
There are many kinds of sensing matrices Φ, mainly divided into deterministic matrices and learning matrices. Based on signal characteristics, a learning matrix can be continuously updated by a machine learning method. The commonly used sensing matrices include the Gaussian random matrix, the random Fourier matrix, and the Toeplitz matrix.
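The RIP-type bound (4) can be illustrated empirically for a Gaussian sensing matrix: for random sparse vectors, ‖Φx‖_2^2 concentrates around ‖x‖_2^2. This is only an illustration with assumed dimensions; exactly verifying the RIP is a combinatorial problem.

```python
import numpy as np

# Empirical look at bound (4): column-scaled Gaussian Phi keeps
# ||Phi x||^2 close to ||x||^2 for random K-sparse x.
rng = np.random.default_rng(2)
M, N, K = 64, 256, 8
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # so that E||Phi x||^2 = ||x||^2

ratios = []
for _ in range(200):
    x = np.zeros(N)
    support = rng.choice(N, K, replace=False)
    x[support] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

# empirical stand-in for the isometry constant delta_s over these trials
delta_hat = max(abs(1 - min(ratios)), abs(max(ratios) - 1))
```

The observed ratios cluster near 1, consistent with a small δ_s, which is why Gaussian matrices are a common default choice for Φ.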
2.2. Sparse Signal Approximation Method. In this section, we give some notations and matrix operators to facilitate the summary of the proposed algorithm.
Definition 1 (projection and residue). Let y ∈ R^(M×1) and Φ_S ∈ R^(M×|S|). If Φ has full column rank, then (Φ′Φ)^(−1)Φ′ is the Moore–Penrose pseudoinverse of Φ, where Φ′ represents the matrix transpose. The projection of the measurements y onto span(Φ_S) is written as

y_p = Φ_S Φ_S† y, (5)

where Φ_S† is the pseudoinverse of the matrix Φ_S, defined as
Φ_S† = (Φ_S′Φ_S)^(−1)Φ_S′. (6)
It can be seen from equation (5) that we can obtain the restoration of the sparse signal x by the following formulation:

x̂ = Φ_S† y = (Φ_S′Φ_S)^(−1)Φ_S′ y. (7)
The residue of the projection of y is expressed as

y_r = (E − Φ_S Φ_S†) y, (8)

where E is the unit matrix. Suppose that Φ_S′Φ_S is invertible; the projection of y onto span(Φ_S) is then, as in equation (5),

y_p = Φ_S Φ_S† y. (9)

To illustrate the relationship between projection and residue in each iteration, we draw Figure 1 [17]; refer to it for a better understanding of the definition.

As can be seen from Figure 1, the residue is perpendicular to the hyperplane span(Φ_S), which is ensured by equation (7). This means that the residue is minimized in each iteration, which speeds up the convergence of the algorithm.
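The projection/residue pair of equations (5)–(8), and the perpendicularity just noted, can be checked in a few lines of NumPy; the support set S and the dimensions below are arbitrary assumptions for the demonstration.

```python
import numpy as np

# Projection of y onto span(Phi_S) and its residue, per equations (5)-(8):
# y_p = Phi_S Phi_S^dagger y,  y_r = (E - Phi_S Phi_S^dagger) y.
rng = np.random.default_rng(3)
Phi = rng.standard_normal((20, 50))
y = rng.standard_normal(20)
S = [4, 11, 30]                      # an arbitrary candidate support set

Phi_S = Phi[:, S]
pinv_S = np.linalg.pinv(Phi_S)       # (Phi_S' Phi_S)^{-1} Phi_S' for full column rank
y_p = Phi_S @ (pinv_S @ y)           # projection, eq. (5)
y_r = y - y_p                        # residue, eq. (8)
```

Since y_r is orthogonal to every column of Φ_S, the inner products Φ_S′ y_r vanish (up to floating point), which is exactly the geometric fact behind Figure 1.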
3. SASA: Sparsity Adaptive Simulated Annealing Matching Pursuit Algorithm
Greedy pursuit algorithms for sparse recovery have low computational complexity and good reconstruction quality, so designing a stable and efficient CS reconstruction algorithm is a meaningful research issue. However, existing greedy algorithms often reach only a locally optimal solution to the reconstruction problem. To better solve this issue, we introduce an intelligent optimization algorithm to obtain better reconstruction results.
Simulated annealing (SA) is a famous heuristic optimization algorithm, and it has been empirically shown to have superior performance in finding globally optimal solutions. Since sparse recovery is a special optimization problem, we could directly solve problem (2) with the SA algorithm; however, due to the high computational complexity, this is not practical.
As we know, greedy algorithms have low complexity and a simple geometric interpretation, but they lack global optimization capability and easily settle on a suboptimal solution. This means that the greedy strategy and SA can complement each other. We therefore combine the advantages of the SAMP algorithm and the SA algorithm to recover a sparse signal from only a few measurements y.
Inspired by this, we propose a new method to solve the sparse reconstruction issue. The proposed algorithm does not require the sparsity as a necessary prior, and it also has superior optimization performance. We call this novel sparse signal recovery algorithm the sparsity adaptive simulated annealing matching pursuit algorithm (SASA).
3.1. Brief of the Simulated Annealing Algorithm. Kirkpatrick proposed the SA algorithm in 1983, originally to solve thermodynamic problems. The SA algorithm can escape from a locally optimal solution: since it accepts a few worsening moves with a certain probability, SA is more likely to find the global optimum or a near-optimal solution. The SA algorithm still receives great attention today due to this special strategy.
The detailed procedure of the standard SA is summarized in Algorithm 1 [20]. In Algorithm 1, f(·) is the objective function and NH(·) is a function that returns a random neighbor solution from the neighborhood structure H.
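The standard SA loop can be sketched in a few lines; the following minimizes a simple multimodal one-dimensional function. The objective, the Gaussian-step neighborhood, and the cooling parameters are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def simulated_annealing(f, s0, neighbor, t0=1.0, alpha=0.95,
                        outmax=100, innermax=20, seed=0):
    """Generic SA in the spirit of Algorithm 1: accept worse neighbors
    with probability exp(-delta/t), track the best solution, cool t by alpha."""
    rng = np.random.default_rng(seed)
    s, best, t = s0, s0, t0
    for _ in range(outmax):
        for _ in range(innermax):
            s_new = neighbor(s, rng)
            delta = f(s_new) - f(s)
            if delta <= 0 or rng.random() < np.exp(-delta / t):
                s = s_new                  # accept (always if not worse)
            if f(s) < f(best):
                best = s                   # remember the best state seen
        t *= alpha                         # cooling schedule
    return best

# multimodal objective with global minimum f(0) = 0
f = lambda x: x ** 2 + 2 * np.sin(5 * x) ** 2
best = simulated_annealing(f, 4.0, lambda s, rng: s + rng.normal(0.0, 0.5))
```

Starting far from the optimum, the occasional acceptance of uphill moves lets the search hop between the local basins created by the sine term instead of stalling in the first one it reaches.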
3.2. Proposed SASA. In this part, we introduce the processing of the SASA algorithm in detail. In real signal processing, the sparsity is usually unavailable for the restoration task: we cannot know the signal sparsity in advance. If the sparsity level K could be accurately obtained, we could solve problem (2) easily.
Theorem 1. Suppose the support set I is the unique solution for the ground truth x and K < spark(Φ)/2, where spark(Φ) is the minimum number of linearly dependent columns of Φ. Then, we obtain the estimated solution x̂ = x if and only if Φ_Î Φ_Î† y = y, where Î is the estimated support set. The x̂ can then be obtained by least squares.
From Theorem 1, we can define the cost function as

f(I) = ‖Φ_I Φ_I† y − y‖_2, (10)

where I denotes the support set in each iteration. In the following, we give the pseudocode of the SASA algorithm; the main steps of signal reconstruction are summarized in Algorithm 2.
Figure 1: In each iteration, a T-dimensional hyperplane closer to y is obtained (the figure shows y, the projections y_p^(i−1), y_p^i onto span(Φ_S^(i−1)) and span(Φ_S^i), and the corresponding residues y_r^(i−1), y_r^i).
(1) Input:
(2) Initial solution S;
(3) Initial temperature t0;
(4) Cooling rate α;
(5) The outer-loop iteration count outmax;
(6) The inner-loop iteration count innermax.
(7) S* ← S, t ← t0
(8) Iteration:
(9) While i < outmax do
(10) for j < innermax do
(11) S′ ← random neighbor of a random neighborhood k, S′ ∈ NH(S)
(12) Δ ← f(S′) − f(S)
(13) if Δ ≤ 0 then
(14) S ← S′
(15) if f(S′) …

ALGORITHM 1: The standard SA algorithm [20].

(1) Input:
(2) Measurement signal y;
(3) Sensing matrix Φ;
(4) Initialize outmax, innermax, β.
(5) Iteration:
(6) Estimate the sparsity level K.
(7) Initialize the K-element support set I0.
(8) Itemp = I0;
(9) Initialize T0 = 10·f(I0);
(10) While i < outmax do
(11) for j < innermax do
(12) Iij = Itemp;
(13) Choose q elements from C − Iij and combine them with Iij as E;
(14) Calculate xij = Φ_E† y;
(15) Choose the K largest elements of xij as Itemp;
(16) Calculate δ = f(Itemp) − f(Iij);
(17) if δ < 0 then
(18) Itemp = Itemp
(19) else
(20) Calculate β = min(1, exp(−abs(δ)/T0))
(21) Take a random ra ∈ [0, 1]
(22) if β > ra then Itemp = Itemp; else Itemp = Iij
(23) Calculate error = f(Itemp)
(24) if error < β then break
(25) end for
(26) Update T0 = T0/log(i + 1)
(27) End while
(28) Output: x̂ = Φ_Itemp† y;

ALGORITHM 2: SASA algorithm.
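The iteration of Algorithm 2 can be sketched end-to-end in NumPy. This is a hedged illustration, not the authors' code: the correlation-based initial support, the choice q = 2, and the stopping tolerances are assumptions, while the cost (10), the acceptance rule, and the T0/log(i + 1) cooling follow the listing above.

```python
import numpy as np

def sasa(Phi, y, K, outmax=30, innermax=30, q=2, tol=1e-7, seed=0):
    """Sketch of the SASA loop (Algorithm 2): annealing over support sets
    scored by the cost f(I) of equation (10)."""
    rng = np.random.default_rng(seed)
    N = Phi.shape[1]

    def f(I):
        Phi_I = Phi[:, list(I)]
        return np.linalg.norm(Phi_I @ (np.linalg.pinv(Phi_I) @ y) - y)

    # SAMP-style seed: the K columns most correlated with y
    I_temp = list(np.argsort(-np.abs(Phi.T @ y))[:K])
    T0 = 10.0 * f(I_temp) + 1e-12
    for i in range(1, outmax + 1):
        for _ in range(innermax):
            I_ij = list(I_temp)
            # enlarge the candidate set with q random extra columns (step 13)
            rest = np.setdiff1d(np.arange(N), I_ij)
            E = I_ij + list(rng.choice(rest, q, replace=False))
            x_E = np.linalg.pinv(Phi[:, E]) @ y                 # step 14
            I_new = [E[k] for k in np.argsort(-np.abs(x_E))[:K]]  # step 15
            delta = f(I_new) - f(I_ij)
            # accept if better, else with the SA probability (steps 17-22)
            if delta < 0 or rng.random() < min(1.0, np.exp(-abs(delta) / T0)):
                I_temp = I_new
            else:
                I_temp = I_ij
            if f(I_temp) < tol:
                break
        if f(I_temp) < tol:
            break
        T0 = T0 / np.log(i + 1)                                 # step 26 cooling
    x_hat = np.zeros(N)
    x_hat[I_temp] = np.linalg.pinv(Phi[:, I_temp]) @ y          # step 28
    return x_hat

# usage: recover a 2-sparse signal from 32 Gaussian measurements
rng = np.random.default_rng(5)
Phi = rng.standard_normal((32, 64)) / np.sqrt(32)
x_true = np.zeros(64)
x_true[[7, 29]] = [2.0, -3.0]
x_hat = sasa(Phi, Phi @ x_true, K=2)
```

Once a candidate set E happens to contain the whole true support, the least-squares fit in step 14 concentrates all the energy on the true columns, the cost drops to zero, and the loop terminates.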
4. Simulation Results and Analysis
In this section, a series of simulations evaluates the performance of the proposed SASA algorithm. To demonstrate its efficiency, SASA is compared with existing state-of-the-art reconstruction methods, including SP, StOMP, CoSaMP, SWOMP [21], and ROMP. We performed simulation experiments on one-dimensional and two-dimensional data, respectively, and adopted the Gaussian random matrix as the sampling operator for all methods.
All simulations were performed in MATLAB (R2017b) on a Windows 10 system with a 3.20 GHz Intel Core i7 processor and 8 GB memory.
4.1. One-Dimensional Signal. In this experiment, the length of the Gaussian random sparse signal is set to N = 256, and a random sensing matrix produces the observation y. To evaluate the reconstruction performance of all algorithms, we conducted a variety of simulation experiments, and we report the average of 500 independent runs as the final result to avoid experimental randomness.
In Figure 2, we give the curve of the exact reconstruction rate under different measurement numbers M. The exact reconstruction rate is the fraction of correctly reconstructed signals over the 500 independent experiments; a signal is considered correctly reconstructed if ‖x − x̂‖ < 10^(−6), where x̂ is the reconstructed signal. The sparsity K is set to 20. We observe that the proposed SASA has a higher exact reconstruction rate for every M; even when the measurement number M is less than 55, SASA is significantly superior to the other algorithms.
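The evaluation protocol just described (random trials, success criterion ‖x − x̂‖ < 10^(−6), rate in percent) can be sketched as a small Monte-Carlo harness. OMP is used here only as a stand-in solver so the sketch is self-contained, and the trial count is reduced from the paper's 500 to keep it fast.

```python
import numpy as np

def omp(Phi, y, K):
    # minimal OMP, used here only as the solver under test
    N = Phi.shape[1]
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(K):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat

def exact_recovery_rate(solver, N, M, K, trials=50, seed=0):
    """Fraction (in %) of random trials with ||x - x_hat|| < 1e-6,
    mirroring the success criterion used in the experiments."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # fresh Gaussian matrix
        x = np.zeros(N)
        x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
        if np.linalg.norm(x - solver(Phi, Phi @ x, K)) < 1e-6:
            hits += 1
    return 100.0 * hits / trials

rate = exact_recovery_rate(omp, N=64, M=32, K=3)
```

Sweeping M (or K, or N) through such a harness produces exactly the kind of rate-versus-parameter curves shown in Figures 2–4.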
In Figure 3, we provide the curve of the exact reconstruction rate under different sparsity levels K. The measurement number is set to 60, and K varies between 5 and 35. From Figure 3, we find that SASA is far superior to the other algorithms at every sparsity level; even for K > 25, the proposed SASA is better than the other classical algorithms.
In Figure 4, we provide the curve of the exact reconstruction rate under different signal lengths N. K = 10 and M = 25 are fixed, and the signal length varies between 30 and 120. We observe that the proposed SASA performs better than the other five methods in the exact recovery rate.
Figure 2: Curve of the exact reconstruction rate (%) under different measurement numbers M (40–70). Compared methods: CoSaMP, StOMP, SP, Proposed, SWOMP, OMP.
Figure 3: Curve of the exact reconstruction rate (%) under different sparsity levels K (K = 5, 10, …, 35). Compared methods: CoSaMP, StOMP, ROMP, SP, Proposed, SWOMP.
Figure 4: Curve of the exact reconstruction rate (%) under different signal lengths (K = 10, M = 25, length 30–120). Compared methods: CoSaMP, StOMP, SP, Proposed, SWOMP, OMP.
In general, the proposed SASA achieves a higher exact reconstruction rate under different signal lengths.
In Figure 5, we test the exact reconstruction performance of the SASA algorithm under different sparsity levels, giving the exact reconstruction rate as a function of the measurement number M. K varies between 5 and 25, and M varies between 10 and 200. We find that SASA can exactly reconstruct signals across these sparsity levels.
Through the above simulation experiments, we demonstrate that the proposed SASA algorithm has superior reconstruction performance under different reconstruction conditions.
4.2. Two-Dimensional Image. To validate the performance of the proposed SASA algorithm on real data, we use standard test images with a size of 256 × 256.
Figure 5: Curve of the exact reconstruction rate (%) under different measurement numbers (M = 0–200) for sparsity levels K = 5, 10, 15, 20, 25.
Figure 6: Images reconstructed by different algorithms (original image, CoSaMP, SP, SWOMP, SASA). PSNR values (dB) for the three test images: 28.18, 28.72, 28.43, 29.55; 31.47, 31.76, 30.55, 32.47; 27.74, 28.15, 27.19, 28.80. The proposed SASA achieves the highest value for each image.
In addition, the input image is normalized, which facilitates sparse processing using the discrete wavelet transform (DWT). We adopt a Gaussian random matrix as the compression sampling operator Φ, with M = 190 and N = 256 fixed. Figure 6 shows the images reconstructed by CoSaMP, SP, SWOMP, OMP, and our SASA algorithm. To assess the performance of all algorithms, the peak signal-to-noise ratio (PSNR) is employed as the reconstruction image quality index. The PSNR is defined as follows:
PSNR = 10 log_10 (M × N × 255^2 / ‖x − x̂‖^2). (11)
All methods are evaluated on three different images: Lena, Barche, and Camer. As shown in Figure 6, the proposed SASA method achieves better image reconstruction quality than the other algorithms and obtains the best PSNR. The PSNR results also show that the SASA algorithm makes the reconstructed image smoother and recovers more details.
Conclusively, the various experiments show that the proposed SASA algorithm achieves superior performance for different types of data. Compared with the other algorithms, the proposed SASA obtains a higher exact reconstruction rate and PSNR.
5. Conclusions
In this paper, we propose a new sparsity adaptive simulated annealing algorithm that solves the sparse reconstruction issue efficiently. The proposed algorithm considers both adaptive sparsity and global optimization for sparse signal reconstruction. A series of experimental results shows that the proposed SASA algorithm achieves the best reconstruction performance and is superior to existing state-of-the-art methods in terms of reconstruction quality. For these advantages, the proposed SASA algorithm has broad application prospects and significant guiding value for sparse signal reconstruction.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The research was supported by the National High-Tech Research and Development Program (National 863 Project, No. 2017YFB0403604) and by the National Natural Science Foundation of China (No. 61771262). Yangyang would also like to thank the Chinese Scholarship Council.
References
[1] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[2] E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
[3] E. J. Candès, J. K. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
[4] D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, "Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 1094–1121, 2012.
[5] B. Sun, Y. Guo, N. Li, and D. Fang, "Multiple target counting and localization using variational Bayesian EM algorithm in wireless sensor networks," IEEE Transactions on Communications, vol. 65, no. 7, pp. 2985–2998, 2017.
[6] J.-Y. Wu, M.-H. Yang, and T.-Y. Wang, "Energy-efficient sensor censoring for compressive distributed sparse signal recovery," IEEE Transactions on Communications, vol. 66, no. 5, pp. 2137–2152, 2018.
[7] L. D. Abreu and J. L. Romero, "MSE estimates for multitaper spectral estimation and off-grid compressive sensing," IEEE Transactions on Information Theory, vol. 63, no. 12, pp. 7770–7776, 2017.
[8] V. J. Barranca, G. Kovačič, D. Zhou, and D. Cai, "Efficient image processing via compressive sensing of integrate-and-fire neuronal network dynamics," Neurocomputing, vol. 171, pp. 1313–1322, 2016.
[9] S. Li, G. Zhao, H. Sun, and M. Amin, "Compressive sensing imaging of 3-D object by a holographic algorithm," IEEE Transactions on Antennas and Propagation, vol. 66, no. 12, pp. 7295–7304, 2018.
[10] T. Tashan and M. Al-Azawi, "Multilevel magnetic resonance imaging compression using compressive sensing," IET Image Processing, vol. 12, no. 12, pp. 2186–2191, 2018.
[11] J. Yu, R. Hong, M. Wang, and J. You, "Image clustering based on sparse patch alignment framework," Pattern Recognition, vol. 47, no. 11, pp. 3512–3519, 2014.
[12] Z. Qin, J. Fan, Y. Liu, Y. Gao, and G. Y. Li, "Sparse representation for wireless communications: a compressive sensing approach," IEEE Signal Processing Magazine, vol. 35, no. 3, pp. 40–58, 2018.
[13] W. Chen and I. Wassell, "A decentralized Bayesian algorithm for distributed compressive sensing in networked sensing systems," IEEE Transactions on Wireless Communications, vol. 15, no. 2, pp. 1282–1292, 2016.
[14] G. Ditzler, N. Carla Bouaynaya, and R. Shterenberg, "AKRON: an algorithm for approximating sparse kernel reconstruction," Signal Processing, vol. 144, pp. 265–270, 2018.
[15] K. Schnass, "Average performance of orthogonal matching pursuit (OMP) for sparse approximation," IEEE Signal Processing Letters, vol. 25, no. 12, pp. 1865–1869, 2018.
[16] D. Needell and R. Vershynin, "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit," Foundations of Computational Mathematics, vol. 9, no. 3, pp. 317–334, 2009.
[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.
[18] Z. Lin, "Image adaptive reconstruction based on compressive sensing via CoSaMP," Applied Mechanics & Materials, vol. 631–632, pp. 436–440, 2014.
[19] H. Wu and S. Wang, "Adaptive sparsity matching pursuit algorithm for sparse reconstruction," IEEE Signal Processing Letters, vol. 19, no. 8, pp. 471–474, 2012.
[20] H. G. Santos, T. A. M. Toffolo, C. L. T. F. Silva, and G. Vanden Berghe, "Analysis of stochastic local search methods for the unrelated parallel machine scheduling problem," International Transactions in Operational Research, vol. 26, no. 2, pp. 707–724, 2019.
[21] T. Blumensath and M. E. Davies, "Stagewise weak gradient pursuits," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4333–4346, 2009.