
Sistem tanımı için seyrek özyineli ters uyarlanır algoritma tasarımı

Design of a sparse recursive inverse adaptive algorithm for system identification

Mohammad N. S. Jahromi, Aykut Hocanin, Osman Kukrer
Electrical and Electronic Engineering Dept.

Eastern Mediterranean University
TRNC, Mersin 10, Turkey

Email: {mohammad.sabet, aykut.hocanin, osman.kukrer}@emu.edu.tr

Mohammad Shukri Salman
Electrical and Electronic Engineering Dept.

Mevlana (Rumi) University
Konya, Turkey

Email: [email protected]

Özetçe —Sıkıştırmalı algılamadaki son yıllardaki gelişmelere dayanarak, LMS-tabanlı algoritmalar seyrek sistem tanımı için kullanılmaya başlanmıştır. Bu tür uyarlanır algoritmalarda, maliyet işlevine ℓ1-norm maliyeti eklenerek, tanımlanacak olan sistemin seyreklik özelliğinden faydalanarak, uyarlama sürecinde sistem süzgeç katsayılarının sıfıra çekilmesi sağlanmaktadır. Bu bildiride, tanımlanacak sistemin dürtü yanıtının seyrek olduğu varsayılarak yeni bir uyarlanır süzgeç önerilmekte ve önerilen süzgecin daha düşük ortalama karesel sapma ve daha hızlı yakınsama özelliklerine sahip olduğu gösterilmektedir. Özyineli ters uyarlama (RI) ve sıfıra çekim (ZA) süreçleri birleştirilerek önerilen algoritma (ZA-RI) olarak isimlendirilmiştir. Benzetim sonuçları, ZA-RI algoritmasının, geleneksel LMS-tabanlı algoritmalara göre önemli başarım kazançlarına yol açtığını göstermektedir.

Abstract—Based on the developments in the field of compressive sensing in recent years, several LMS-based algorithms have been developed for sparse system identification. These adaptive algorithms combine an ℓ1-norm penalty with the original cost function of the LMS to create a zero attractor (ZA) and hence exploit the sparsity of the filter taps during the adaptation process. In this paper, we propose a new adaptive algorithm to achieve a faster convergence rate and a lower mean-square deviation under the assumption that the impulse response is sparse. The proposed modifications employ the recursive inverse (RI) adaptive filtering scheme and the zero attractor to generate the ZA-RI algorithm. Simulation results demonstrate that the proposed modifications result in a significant performance gain in comparison to conventional LMS-based methods.

Keywords—ZA-LMS Adaptive Filtering, Compressed Sensing, Recursive Inverse Adaptive Filtering, System Identification.

I. INTRODUCTION

Adaptive algorithms are extensively used in solving many signal processing problems [1]. The least-mean-square (LMS) algorithm, due to its simplicity and low computational cost, is one of the most popular adaptive methods in applications such as noise cancelation, system identification and channel estimation [1], [2].

The unknown system to be identified in many practical situations is sparse. In other words, the impulse response contains only a few nonzero coefficients, while the majority of the remaining values are near-zero [5]. Under such scenarios, the conventional LMS algorithm, like other widely used adaptive schemes such as recursive least squares (RLS) and the Kalman filter, is not able to exploit the sparsity of the system [6]. In recent years, mainly motivated by the LASSO [3] and the developments in the field of compressive sampling [4], a new class of adaptive filters has been proposed based on an ℓp-norm constraint to address sparsity. The basic idea of such algorithms is to incorporate a convex sparsity constraint into the cost function of the gradient descent [5]. The main examples of this approach are ZA-LMS and its variants [6], [7], [8], [9]. ZA-LMS uses the ℓ1 norm to generate a zero attraction on the small adaptive taps and pull them toward the origin. This mechanism results in faster convergence and a lower steady-state error compared to the non-sparse model.

In this paper, we propose a new sparsity-aware adaptive filtering algorithm that further improves the performance in the sparse system identification problem in terms of convergence speed and steady-state error. The proposed method employs the recently developed recursive inverse (RI) adaptive filtering scheme [10], combined with a zero attractor for sparse adaptation, and is called ZA-RI. Simulation results show the superiority of the proposed method over several existing LMS-based solutions such as ZA-LMS, ZA-VSSLMS [7] and WZA-LLMS [8].

The paper is organized as follows: Section II introduces the proposed sparse recursive inverse adaptive algorithm. In Section III, simulation results that compare the performance of the proposed algorithm with those of the standard ZA-LMS, ZA-VSSLMS and WZA-LLMS algorithms are presented. Conclusions are drawn in Section IV.

II. SPARSE RECURSIVE ALGORITHM

Recall the a priori filtering error e(k) in RI [10] as:

e(k) = d(k) − x^T(k) w(k − 1),   (1)


where the cost function of the RI algorithm is defined as:

J(k) = Σ_{i=k−N+1}^{k} β^{k−i} e²(i).   (2)

N is the filter length and β is the forgetting factor. Minimizing the cost function with respect to w gives the iterative update equation of the RI algorithm:

w(k) = [I − μ(k)R(k)]w(k − 1) + μ(k)p(k), (3)

where I is the N × N identity matrix, μ(k) is the variable step-size, and R(k) and p(k) are the autocorrelation matrix and the cross-correlation vector, respectively. The correlations are recursively estimated as

R(k) = βR(k − 1) + x(k)x^T(k),   (4)

and

p(k) = βp(k − 1) + d(k)x(k). (5)
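As an aside (a one-line rearrangement of (3), not an additional result): each RI iteration can be read as a single relaxation step toward the solution of the exponentially weighted normal equations R(k)w = p(k), since (3) can be rewritten as

w(k) = w(k − 1) + μ(k)[p(k) − R(k)w(k − 1)].

The variable step-size therefore controls how far each iteration moves w(k) toward the weighted least-squares solution of R(k)w = p(k).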

The variable step-size μ(k) [10] is defined as

μ(k) = μ0 / (1 − β^k),   with μ0 < μmax,

where μmax = 2(1 − β)/λmax(Rxx). λmax is the maximum eigenvalue of Rxx and Rxx = E{x(k)x^T(k)}.

In order to take sparsity into account, we define a new cost function for the RI algorithm by adding an ℓ1-norm penalty to (2),

J(k) = Σ_{i=k−N+1}^{k} β^{k−i} e²(i) + γ‖w‖₁,   (6)

where γ is a small positive constant.

By minimizing the cost function in (6) with respect to w, and using the subgradient sgn(w) for the ℓ1 term, we obtain

w(k) = [I − μ(k)R(k)]w(k − 1) + μ(k)p(k) − ρ sgn(w(k − 1)),   (7)

where ρ = γμ(k) and sgn(·) is the component-wise sign function.

The term ρ sgn(w(k − 1)) in (7) is the zero-attraction term. It attracts the small filter coefficients toward zero: if a small coefficient is positive, the term decreases it, and if it is negative, the term increases it.
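For concreteness, a minimal NumPy sketch of the ZA-RI recursion (3)-(7) is given below. It is illustrative only: the function name za_ri_identify, the way the regressor vector is built from the scalar input samples, and the default parameter values (chosen to mirror Table I) are our own assumptions rather than code from the paper.

import numpy as np

def za_ri_identify(x, d, N, beta=0.97, mu0=0.004, rho=0.0005):
    """Sketch of the ZA-RI update, eqs. (3)-(7).
    x: input samples, d: desired signal (unknown system output plus noise), N: filter length."""
    w = np.zeros(N)              # adaptive filter coefficients w(k)
    R = np.zeros((N, N))         # recursive autocorrelation estimate, eq. (4)
    p = np.zeros(N)              # recursive cross-correlation estimate, eq. (5)
    I = np.eye(N)
    for k in range(1, len(d)):
        # regressor x(k) = [x(k), x(k-1), ..., x(k-N+1)]^T, zero-padded at the start
        xk = x[max(0, k - N + 1):k + 1][::-1]
        xk = np.concatenate([xk, np.zeros(N - len(xk))])
        R = beta * R + np.outer(xk, xk)                     # eq. (4)
        p = beta * p + d[k] * xk                            # eq. (5)
        mu = mu0 / (1.0 - beta ** k)                        # variable step-size mu(k)
        w = (I - mu * R) @ w + mu * p - rho * np.sign(w)    # eq. (7): RI step plus zero attractor
    return w

Setting rho = 0 recovers the plain RI update (3), which is a convenient way to reproduce the RI baseline used in the comparisons below.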

III. EXPERIMENTAL RESULTS

In this section, the performance of the proposed algorithm is tested on a system identification problem where the input to the system is a white process, with additive Gaussian noise such that the signal-to-noise ratio (SNR) is 20 dB. The performance measure used is the mean-square deviation (MSD), defined as MSD = E‖h − w(k)‖², where h is the impulse response of the unknown system. The results in this section are averaged over 100 independent runs.
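As an illustration of this setup, the following sketch reuses the hypothetical za_ri_identify helper (and the NumPy import) from the previous sketch; the signal generation and averaging loop are our reconstruction of the description, and it reports only the final MSD rather than the per-iteration curves plotted in Figs. 1 and 2.

def average_msd(h, num_samples=3000, num_runs=100, snr_db=20.0, **za_ri_kwargs):
    """Average final squared deviation ||h - w||^2 of the ZA-RI estimate over independent runs."""
    N = len(h)
    msd = 0.0
    for _ in range(num_runs):
        x = np.random.randn(num_samples)                      # white input process
        y = np.convolve(x, h)[:num_samples]                   # unknown system output
        noise_power = np.var(y) / (10.0 ** (snr_db / 10.0))   # scale noise for the target SNR
        d = y + np.sqrt(noise_power) * np.random.randn(num_samples)
        w = za_ri_identify(x, d, N, **za_ri_kwargs)
        msd += np.sum((h - w) ** 2)
    return msd / num_runs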

Figure 1. Comparison of convergence rate in terms of MSD for the RI, ZA-RI, ZA-LMS, ZA-VSSLMS and WZA-LLMS algorithms, driven by white input. (Plot not reproduced; axes: iteration versus MSD, log scale.)

In the first experiment, in order to exploit the sparsity of the system, we use a filter of 16 coefficients in a time-varying system. Initially, one randomly chosen tap of the unknown system is set to 1 and the others to zero, resulting in a sparsity of 1/16. After 1000 iterations, 8 randomly chosen taps are set to 1 and the rest are kept at zero, i.e., a sparsity of 8/16. Finally, after 2000 iterations, all the taps are set randomly to -1 or 1, modeling a completely non-sparse system. The algorithms were simulated with the parameters shown in Table I. Fig. 1 shows the average MSD estimates of all algorithms. As can be seen from the MSD results, when the system is highly sparse (before the 500th iteration), the proposed ZA-RI algorithm outperforms both the ZA-LMS and ZA-VSSLMS algorithms by 5 dB, and the WZA-LLMS and RI algorithms by about 3 dB. After the 500th iteration, as the number of non-zero taps increases, the performance of the ZA-RI algorithm deteriorates, since the shrinkage in the ZA-RI algorithm does not distinguish between the zero and non-zero taps. However, the ZA-RI algorithm converges at the same rate to the same MSD as the RI algorithm even when the system is non-sparse. This shows that the RI adaptation process, which uses a variable step size and the current estimate of the autocorrelation matrix in the update equation, additionally exploits the sparsity of the system and hence improves the performance.
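A possible reconstruction of the time-varying sparse system used in this experiment is sketched below (it reuses the NumPy import from the earlier sketches). The switching points and sparsity levels follow the description above, while the helper name and the random tap positions are illustrative assumptions.

def sparse_system(num_iterations=3000, N=16, rng=None):
    """Yield the unknown impulse response h(k) for experiment 1:
    1 nonzero tap, then 8 nonzero taps after 1000 iterations,
    then a fully non-sparse +/-1 response after 2000 iterations."""
    rng = np.random.default_rng() if rng is None else rng
    h1 = np.zeros(N)
    h1[rng.integers(N)] = 1.0                        # one nonzero tap: sparsity 1/16
    h8 = np.zeros(N)
    h8[rng.choice(N, size=8, replace=False)] = 1.0   # eight nonzero taps: sparsity 8/16
    h_full = rng.choice([-1.0, 1.0], size=N)         # all taps set to +/-1: non-sparse
    for k in range(num_iterations):
        yield h1 if k < 1000 else (h8 if k < 2000 else h_full)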

In the second experiment, we investigate the convergence behavior of ZA-RI in an echo cancelation problem by estimating a room impulse response, and compare it with those of the original RI, ZA-LMS, ZA-VSSLMS and WZA-LLMS algorithms. The room impulse response is assumed to be sparse, with a total of 128 coefficients (N = 128); 6 randomly chosen taps are set to 1 while the others are kept at zero. The algorithms were simulated with the parameters shown in Table II. As illustrated in Fig. 2, the ZA-RI algorithm converges to MSDs that are 5 dB and 8.5 dB lower than those of the RI and ZA-LMS algorithms, respectively. In addition, the proposed algorithm outperforms all the simulated algorithms in terms of convergence speed.


Table I. PARAMETERS OF RI, ZA-RI, ZA-LMS, ZA-VSSLMS AND WZA-LLMS ALGORITHMS FOR EXPERIMENT 1.

            β      μ0      ρ       μ       γ        ζ    α     μmin   μmax
RI          0.97   0.004   -       -       -        -    -     -      -
ZA-RI       0.97   0.004   0.0005  -       -        -    -     -      -
ZA-LMS      -      -       0.0005  0.01    -        -    -     -      -
WZA-LLMS    -      -       0.0005  0.02    0.01     10   -     -      -
ZA-VSSLMS   -      -       0.0005  -       0.00048  -    0.97  0.01   0.05

Table II. PARAMETERS OF RI, ZA-RI, ZA-LMS, ZA-VSSLMS AND WZA-LLMS ALGORITHMS FOR EXPERIMENT 2.

            β      μ0      ρ       μ       γ       ζ    α     μmin    μmax
RI          0.991  0.0015  -       -       -       -    -     -       -
ZA-RI       0.991  0.0015  0.0005  -       -       -    -     -       -
ZA-LMS      -      -       0.0005  0.01    -       -    -     -       -
WZA-LLMS    -      -       0.0005  0.0045  0.001   10   -     -       -
ZA-VSSLMS   -      -       0.0005  -       0.0048  -    0.97  0.003   0.006

Figure 2. Convergence rate in terms of MSD for the RI, ZA-RI, ZA-LMS, ZA-VSSLMS and WZA-LLMS algorithms, for an echo canceler driven by white input. (Plot not reproduced; axes: iteration versus MSD, log scale.)

IV. CONCLUSIONS

In this paper, we propose a new sparsity-aware adaptive filtering algorithm, ZA-RI, that improves the performance in the sparse system identification problem in terms of convergence speed and steady-state error. The proposed method employs recursive inverse (RI) adaptive filtering combined with a zero attractor to exploit the sparsity of the unknown system. Simulation results show the superiority of the proposed method over several existing LMS-based solutions such as ZA-LMS, ZA-VSSLMS and WZA-LLMS.

REFERENCES

[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice Hall, Englewood Cliffs, NJ, 1985.

[2] S. Haykin, Adaptive Filter Theory, Prentice Hall, Englewood Cliffs, NJ, 1986.

[3] R. Tibshirani, "Regression shrinkage and selection via the Lasso," Journal of the Royal Statistical Society, Series B, vol. 58, pp. 267-288, 1996.

[4] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, pp. 1289-1306, April 2006.

[5] G. Su, J. Jin, Y. Gu, "Performance analysis of ℓ0 norm constraint least mean square algorithm," IEEE Transactions on Signal Processing, vol. 60, pp. 2223-2235, May 2012.

[6] Y. Chen, Y. Gu, and A. O. Hero, "Sparse LMS for system identification," Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3125-3128, Taipei, Taiwan, April 2009.

[7] M. S. Salman, M. N. S. Jahromi, A. Hocanin and O. Kukrer, "A zero-attracting variable step-size LMS algorithm for sparse system identification," BIHTEL 2012 IX, Sarajevo, Bosnia and Herzegovina, October 25-27, 2012.

[8] M. S. Salman, M. N. S. Jahromi, A. Hocanin and O. Kukrer, "A weighted zero-attracting leaky-LMS algorithm," SoftCOM 2012 International Conference on Software, Telecommunications and Computer Networks, Croatia, September 11-13, 2012.

[9] M. N. S. Jahromi, M. S. Salman, A. Hocanin and O. Kukrer, "Convergence analysis of the zero-attracting variable step-size LMS algorithm for sparse system identification," Signal, Image and Video Processing, Springer, DOI: 10.1007/s11760-013-0580-9, 2014.

[10] M. S. Salman, O. Kukrer, A. Hocanin, "Recursive inverse adaptive filtering algorithm," Digital Signal Processing, Elsevier, vol. 21, no. 4, pp. 491-496, July 2011.

[11] G. Gui, A. Mehbodniya, F. Adachi, "Least mean square/fourth algorithm for adaptive sparse channel estimation," IEEE 24th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pp. 296-300, 2013.
