
MIT OpenCourseWare
http://ocw.mit.edu

2.161 Signal Processing: Continuous and Discrete
Fall 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.


MASSACHUSETTS INSTITUTE OF TECHNOLOGY
DEPARTMENT OF MECHANICAL ENGINEERING

2.161 Signal Processing - Continuous and Discrete

Introduction to Least-Squares Adaptive Filters¹

1 Introduction

In this handout we introduce the concept of adaptive FIR filters, in which the coefficients are continually adjusted on a step-by-step basis during the filtering operation. Unlike static least-squares filters, which assume stationarity of the input, adaptive filters can track slowly changing statistics in the input waveform.

The adaptive structure is shown in Fig. 1. The adaptive filter is FIR of length M with coefficients b_k, k = 0, 1, 2, ..., M-1. The input stream {f(n)} is passed through the filter to produce the sequence {y(n)}. At each time-step the filter coefficients are updated using an error e(n) = d(n) - y(n), where d(n) is the desired response (usually based on {f(n)}). The filter is not designed to handle a particular input; because it is adaptive, it can adjust to a broadly defined task.

Figure 1: The adaptive least-squares filter structure.

2 The Adaptive LMS Filter Algorithm

2.1 Simplified Derivation

In the length-M FIR adaptive filter the coefficients b_k(n), k = 0, 1, ..., M-1, at time step n are adjusted continuously to minimize a step-by-step squared-error performance index J(n):

J(n) = e²(n) = (d(n) - y(n))² = ( d(n) - Σ_{k=0}^{M-1} b_k f(n-k) )²    (1)

J(n) is described by a quadratic surface in the b_k(n), and therefore has a single minimum. At each iteration we seek to reduce J(n) using the steepest-descent optimization method; that is, we move each b_k(n) an amount proportional to -∂J(n)/∂b_k. In other words, at step n+1 we modify the filter coefficients from the previous step:

b_k(n+1) = b_k(n) - λ(n) ∂J(n)/∂b_k(n),   k = 0, 1, 2, ..., M-1    (2)

¹ D. Rowell, December 9, 2008


where λ(n) is an empirically chosen parameter that defines the step size, and hence the rate of convergence. (In many applications λ(n) = λ, a constant.) From Eq. (1),

∂J(n)/∂b_k = ∂e²(n)/∂b_k = 2 e(n) ∂e(n)/∂b_k = -2 e(n) f(n-k)

and the adaptive filter algorithm is

b_k(n+1) = b_k(n) + λ e(n) f(n-k),   k = 0, 1, 2, ..., M-1    (3)

or in matrix form

b(n+1) = b(n) + λ e(n) f(n),    (4)

where

b(n) = [b_0(n)  b_1(n)  b_2(n)  ...  b_{M-1}(n)]^T

is a column vector of the filter coefficients, and

f(n) = [f(n)  f(n-1)  f(n-2)  ...  f(n-(M-1))]^T

is a vector of the recent history of the input {f(n)}.

Equation (3), or (4), defines the fixed-gain FIR adaptive Least-Mean-Square (LMS) filter algorithm. A direct-form implementation for a filter length M = 5 is shown in Fig. 2.

Figure 2: A direct-form LMS adaptive filter with length M = 5.
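The per-sample update of Eqs. (3) and (4) is compact enough to sketch directly. The following Python fragment is a minimal illustration; the function and variable names are ours, not part of the handout:

```python
# Minimal sketch of one LMS time-step (Eq. 3); names are illustrative.
def lms_step(b, f_hist, d, lam):
    """b: coefficients b_k(n); f_hist: [f(n), f(n-1), ..., f(n-M+1)];
    d: desired response d(n); lam: gain. Returns (y, e, updated b)."""
    y = sum(bk * fk for bk, fk in zip(b, f_hist))              # FIR output y(n)
    e = d - y                                                  # error e(n) = d(n) - y(n)
    b_next = [bk + lam * e * fk for bk, fk in zip(b, f_hist)]  # Eq. (3)
    return y, e, b_next
```

The caller shifts the input history by one sample and invokes this once per incoming sample.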

2.2 Expanded Derivation

For a filter as shown in Fig. 1, the mean-squared error (MSE) is defined as

MSE = E{e²(n)} = E{(d(n) - y(n))²}
    = E{d²(n)} + E{y²(n)} - 2 E{d(n) y(n)}
    = φ_dd(0) + φ_yy(0) - 2 φ_dy(0)    (5)

where E{·} is the expected value, φ_dd(k) and φ_yy(k) are auto-correlation functions, and φ_dy(k) is the cross-correlation function between {d(n)} and {y(n)}.

The filter output is

y(n) = Σ_{k=0}^{M-1} b_k f(n-k)

and for stationary waveforms, Eq. (5) at time step n reduces to

MSE = φ_dd(0) + Σ_{m=0}^{M-1} Σ_{k=0}^{M-1} b_m(n) b_k(n) φ_ff(m-k) - 2 Σ_{k=0}^{M-1} b_k(n) φ_fd(k)    (6)


2.2.1 The Optimal FIR Coefficients

The optimal FIR filter coefficients b_k^opt, k = 0, ..., M-1, that minimize the MSE are found by setting the derivatives with respect to each of the b_k(n) equal to zero. From Eq. (6),

∂(MSE)/∂b_k(n) = 2 Σ_{m=0}^{M-1} b_m^opt φ_ff(m-k) - 2 φ_fd(k) = 0,   k = 0, 1, ..., M-1    (7)

which is a set of M linear equations in the optimal coefficients b_k^opt, which in matrix form is written

R b^opt = P,   or   b^opt = R⁻¹ P,    (8)

where

R = | φ_ff(0)      φ_ff(1)      φ_ff(2)      ...   φ_ff(M-1) |
    | φ_ff(1)      φ_ff(0)      φ_ff(1)      ...   φ_ff(M-2) |
    | φ_ff(2)      φ_ff(1)      φ_ff(0)      ...   φ_ff(M-3) |
    | ...                                                     |
    | φ_ff(M-1)    φ_ff(M-2)    φ_ff(M-3)    ...   φ_ff(0)   |

is a Toeplitz matrix, known as the correlation matrix, and

P = [φ_fd(0)  φ_fd(1)  φ_fd(2)  ...  φ_fd(M-1)]^T

is the cross-correlation vector.

With these definitions the MSE, as expressed in Eq. (6), reduces to

MSE = φ_dd(0) + b(n)^T R b(n) - 2 P^T b(n),    (9)

and the minimum MSE is

MSE_min = φ_dd(0) - P^T b^opt    (10)

2.2.2 The LMS Algorithm

Assume an update algorithm of the form

b(n+1) = b(n) + (1/2) λ(n) S(n)    (11)

where S(n) is a direction vector that will move b(n) toward b^opt, and λ(n) is an empirically chosen gain schedule that determines the step-size at each iteration. In particular, with the method of steepest descent, let S(n) = -g(n), where

g(n) = d(MSE)/db

is the gradient vector, with components

g_k(n) = ∂(MSE)/∂b_k = 2 Σ_{m=0}^{M-1} b_m(n) φ_ff(m-k) - 2 φ_fd(k),   k = 0, 1, ..., M-1

from Eq. (6). Then, as above,

g(n) = 2 [R b(n) - P],    (12)

and the LMS algorithm is

b(n+1) = b(n) - (1/2) λ(n) g(n)    (13)
       = [I - λ(n) R] b(n) + λ(n) P    (14)


where I is the identity matrix. It is interesting to note that if we define the coefficient error vector

b̃(n) = b^opt - b(n)

the LMS algorithm becomes

b̃(n+1) = b̃(n) - λ(n) R b̃(n)    (15)

and if any b_k(n) = b_k^opt, then b_k(n+1) = b_k^opt for all subsequent time steps.

In practice the LMS algorithm does not have P or R available, and we seek an estimator for

[R b(n) - P]. The error e(n) is

e(n) = d(n) - y(n) = d(n) - Σ_{k=0}^{M-1} b_k f(n-k)

and therefore

E{e(n) f(n-j)} = E{d(n) f(n-j)} - E{ Σ_{k=0}^{M-1} b_k f(n-j) f(n-k) },   for j = 0, 1, 2, ..., M-1.

The individual equations may be collected into vector form:

E{e(n) f(n)} = P - R b(n)    (16)
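At the optimum, Eq. (16) gives E{e(n) f(n)} = 0, recovering R b^opt = P of Eq. (8). A tiny numeric check for M = 2, with assumed correlation values chosen purely for illustration:

```python
# Solve R b_opt = P for M = 2 by direct 2x2 inversion; the correlation
# values below are assumed for illustration only.
phi_ff = [1.0, 0.5]   # phi_ff(0), phi_ff(1)
phi_fd = [0.9, 0.3]   # phi_fd(0), phi_fd(1)

R = [[phi_ff[0], phi_ff[1]],      # Toeplitz correlation matrix
     [phi_ff[1], phi_ff[0]]]
P = [phi_fd[0], phi_fd[1]]

det = R[0][0]*R[1][1] - R[0][1]*R[1][0]
b_opt = [(R[1][1]*P[0] - R[0][1]*P[1]) / det,
         (R[0][0]*P[1] - R[1][0]*P[0]) / det]
# b_opt satisfies R b_opt = P, so the residual P - R b_opt is zero.
```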

and using Eq. (12) the gradient vector can be written

g(n) = -2 E{e(n) f(n)}.    (17)

An unbiased estimate ĝ(n) of the gradient vector at the nth iteration is found directly from Eq. (17) as

ĝ(n) = -2 e(n) f(n),    (18)

and substituted into Eq. (11) to generate the LMS algorithm

b(n+1) = b(n) + λ(n) e(n) f(n),    (19)

which is identical to that obtained with the simplified derivation in Eq. (4).

2.3 Convergence Properties

A full discussion of the convergence properties of the LMS algorithm is beyond the scope of this handout. The value of the gain constant λ must be selected with some care. We simply state without proof that b(n) will converge to b^opt provided

0 < λ < 2 / (M E{f²(n)})    (20)
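As a quick numeric illustration of this bound for a sinusoidal input, where E{f²(n)} = 0.5 (the filter length M below is an illustrative choice):

```python
import math

# Numeric check of the stability bound 0 < lam < 2/(M*E{f^2}) for a
# period-12 sinusoid; M is an illustrative filter length.
M = 2
N = 1200                                   # 100 full periods of the sinusoid
f = [math.sin(2*math.pi*n/12) for n in range(N)]
power = sum(x*x for x in f) / N            # estimate of E{f^2(n)}, ~0.5
lam_max = 2.0 / (M * power)                # upper limit on the LMS gain
```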


2.4 Variations on the Basic LMS Algorithm

2.4.1 Fixed Gain Schedule Implementation

The LMS algorithm is commonly used with a fixed gain schedule λ(n) = λ for two reasons. First, in order that the filter can respond to varying signal statistics at any time, it is important that λ(n) not be a direct function of n: if λ(n) → 0 as n → ∞, adaptation could not occur. Second, the fixed-gain LMS algorithm is easy to implement in hardware and software.

2.4.2 The Normalized LMS Algorithm

Equation (20) demonstrates that convergence and stability depend on the signal power. To normalize this dependence, a modified form of the LMS algorithm, frequently used in practice, is

b(n+1) = b(n) + (λ / ||f(n)||²) e(n) f(n),    (21)

which is essentially a variable-gain method with

λ(n) = λ / ||f(n)||².

To avoid numerical problems when the norm of the signal vector is small, a small positive constant ε is often added:

λ(n) = λ / (ε + ||f(n)||²).
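The normalized gain is simple to compute per sample; a sketch, in which the function name and the default value of ε are illustrative choices of ours:

```python
# Regularized NLMS gain lambda(n) = lam / (eps + ||f(n)||^2); names illustrative.
def nlms_gain(f_hist, lam, eps=1e-8):
    norm_sq = sum(fk * fk for fk in f_hist)   # ||f(n)||^2
    return lam / (eps + norm_sq)
```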

These algorithms are known as normalized LMS, or NLMS, algorithms.

2.4.3 Smoothed LMS Algorithms

Because the update algorithm uses an estimator of the gradient vector, the filter coefficients will be subject to random perturbations even when nominally converged to b^opt. Several approaches have been used to smooth out these fluctuations:

(a) Use a running average over the last K estimates.

An FIR filter may be used to provide a smoothed estimate ḡ(n) of the gradient vector ĝ(n), for example a fixed-length moving average

ḡ(n) = (1/K) Σ_{k=0}^{K-1} ĝ(n-k).    (22)

The LMS algorithm (Eq. (13)) is then based on the averaged gradient vector estimate:

b(n+1) = b(n) - (1/2) λ(n) ḡ(n)

(b) Update the coefficients every N steps, using the average.

A variation on the above is to update the coefficients only every N time steps, and to compute the average of the gradient vector estimates over the intermediate time steps:

ḡ(nN) = (1/N) Σ_{k=0}^{N-1} ĝ(nN + k).    (23)

The update equation, applied every N time-steps, is

b((n+1)N) = b(nN) - (1/2) λ(nN) ḡ(nN)
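Both smoothing variants reduce to averaging a set of gradient estimates component-wise; a sketch of that averaging step (function and variable names are illustrative):

```python
# Average a list of gradient-vector estimates g_hat (Eqs. 22 and 23);
# each element of g_estimates is one length-M gradient estimate.
def averaged_gradient(g_estimates):
    n = len(g_estimates)                       # number of estimates (K or N)
    m = len(g_estimates[0])                    # filter length M
    return [sum(g[k] for g in g_estimates) / n for k in range(m)]
```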


3 MATLAB Implementation

% Author: D. Rowell 12/9/07
% ------------------------------------------------------------------------
%
function [y, bout] = LSadapt(f, d, FIR_M)
persistent f_history b lambda M
%
% The following is initialization, and is executed once
%
if (ischar(f) && strcmp(f,'initial'))
    lambda = d;
    M = FIR_M;
    f_history = zeros(1,M);
    b = zeros(1,M);
    b(1) = 1;
    y = 0;
else
    % Update the input history vector:
    for J = M:-1:2
        f_history(J) = f_history(J-1);
    end;
    f_history(1) = f;
    % Perform the convolution
    y = 0;
    for J = 1:M
        y = y + b(J)*f_history(J);
    end;
    % Compute the error and update the filter coefficients for the next iteration
    e = d - y;
    for J = 1:M
        b(J) = b(J) + lambda*e*f_history(J);
    end;
    bout = b;

end

4 Application - Suppression of Narrow-band Interference in a Wide-band Signal

In this section we consider an adaptive filter application of suppressing narrow-band interference; or, in terms of correlation functions, we assume that the desired signal has a narrow auto-correlation function compared to that of the interfering signal.

Assume that the input {f(n)} consists of a wide-band signal {s(n)} that is contaminated by a narrow-band interference signal {r(n)}, so that

f(n) = s(n) + r(n).

The filtering task is to suppress r(n) without detailed knowledge of its structure. Consider the filter shown in Fig. 3. This is similar to Fig. 1, with the addition of a delay block of Δ time steps in front of the filter, and the definition that d(n) = f(n). The overall filtering operation is a little unusual in that the error sequence {e(n)} is taken as the output. The FIR filter is used to predict the narrow-band component so that y(n) ≈ r(n), which is then subtracted from d(n) = f(n) to leave e(n) ≈ s(n).
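The structure just described (delay of Δ samples, prediction, error output) can be sketched compactly in Python; the parameter values and the signal model below are illustrative choices of ours, with Gaussian noise standing in for the wide-band signal:

```python
import math, random

# Sketch of the Delta-delay suppression filter: predict the narrow-band
# component from f(n-Delta), ..., and output the error e(n) ~ s(n).
# M, delta, lam, and the signal model are illustrative.
random.seed(1)
M, delta, lam = 8, 1, 0.01
b = [0.0] * M
hist = [0.0] * (M + delta)                  # input history, newest first
e_out = []
for n in range(2000):
    s = random.gauss(0.0, 0.1)              # wide-band signal s(n)
    r = math.sin(2*math.pi*n/12)            # narrow-band interference r(n)
    f = s + r
    hist = [f] + hist[:-1]
    delayed = hist[delta:delta + M]         # f(n-delta), ..., f(n-delta-M+1)
    y = sum(bk * fk for bk, fk in zip(b, delayed))   # predicted r(n)
    e = f - y                               # output: e(n) ~ s(n)
    b = [bk + lam * e * fk for bk, fk in zip(b, delayed)]
    e_out.append(e)
```

After convergence the mean-square output is close to the wide-band signal power rather than the much larger input power, reflecting the suppression of the sinusoid.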


Figure 3: The adaptive least-squares filter structure for narrow-band noise suppression.

The delay block is known as the decorrelation delay. Its purpose is to remove any cross-correlation between {d(n)} and the wide-band component of the input to the filter, {s(n-Δ)}, so that it will not be predicted. In other words, it assumes that

φ_ss(τ) = 0,   for |τ| > Δ.

This least-squares structure is similar to a Δ-step linear predictor. It acts to predict the current narrow-band (broad auto-correlation) component from the past values, while rejecting uncorrelated components in {d(n)} and {f(n-Δ)}.

If the LMS filter transfer function at time-step n is H_n(z), the overall suppression filter is FIR with transfer function H(z):

H(z) = E(z)/F(z) = (F(z) - z^{-Δ} H_n(z) F(z)) / F(z)
     = 1 - z^{-Δ} H_n(z)
     = z⁰ + 0z⁻¹ + ... + 0z^{-(Δ-1)} - b_0(n) z^{-Δ} - b_1(n) z^{-(Δ+1)} - ... - b_{M-1}(n) z^{-(Δ+M-1)}    (28)

that is, an FIR filter of length M + Δ with impulse response h(k), where

h(k) = { 1,             k = 0
       { 0,             1 ≤ k < Δ
       { -b_{k-Δ}(n),   Δ ≤ k ≤ Δ + M - 1


4.1 Example 1: Prediction of a Sinusoid

This example is based on the case described by Stearns and Hush for the one-step predictor of a sine wave, with lengths M = 2 and M = 3. In this first example, we note the similarity of the adaptive filter to the one-step predictor and examine the convergence of the filter to the closed-form filter coefficients. The input is a noise-free sinusoid

f(n) = sin(2πn/12).

The stability of the algorithm is governed by Eq. (20), and since for a sinusoid E{f²(n)} = 0.5, we are constrained to

0 < λ < 2/(0.5 M),   that is, λ < 2 for M = 2.
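For a noise-free sinusoid the two-tap one-step predictor has the closed-form solution b = [2cos ω, -1], from the recurrence sin ωn = 2cos ω · sin ω(n-1) - sin ω(n-2). The following sketch checks that the LMS recursion approaches it; the implementation details (history handling, run length) are ours:

```python
import math

# One-step LMS prediction of a noise-free sinusoid (M = 2, lam = 0.1).
# The taps should converge to the closed-form predictor [2*cos(w), -1].
w = 2*math.pi/12
lam = 0.1
b = [0.0, 0.0]
hist = [0.0, 0.0, 0.0]                     # f(n), f(n-1), f(n-2)
for n in range(3000):
    f = math.sin(w*n)
    hist = [f] + hist[:-1]
    y = b[0]*hist[1] + b[1]*hist[2]        # predict f(n) from past two samples
    e = f - y
    b = [b[0] + lam*e*hist[1], b[1] + lam*e*hist[2]]
```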


Figure 4: Convergence of predictor error e(n) with filter length M and gain λ. (Three panels: M = 2, λ = 0.1; M = 3, λ = 0.1; M = 2, λ = 1; error plotted against time step n over 0-1000.)

4.2 Example 2: Suppression of a Sinusoid in Noise

For the second example we look at the rejection of a sinusoid of unknown frequency in white noise. This case is extreme in that the signal {s(n)} has the auto-correlation function φ_ss(n) = δ(n), while the interference has a periodic auto-correlation.

The following MATLAB script demonstrates the efficacy of the method.

% Create the input as white noise with a strong sinusoidal component
f = randn(1,10000);
L = length(f);
y = zeros(1,length(f));
e = zeros(1,length(f));
Lambda = .001; Delta = 1; M = 15;
x = LSadapt('initial', Lambda, M);
f_delay = zeros(1,Delta+1);
for J = 1:L
    f(J) = f(J) + 3*sin(2*pi*J/12);
    for K = Delta+1:-1:2
        f_delay(K) = f_delay(K-1);
    end
    f_delay(1) = f(J);
    [y(J),b] = LSadapt(f_delay(Delta+1),f(J));
    e(J) = f(J) - y(J);
end;
w = -pi:2*pi/1000:pi;


subplot(1,2,1), plot(w, fftshift(abs(fft(f(L-1000:L)))));
xlabel('Normalized frequency')
ylabel('Magnitude')
title('Input spectrum')
subplot(1,2,2), plot(w, fftshift(abs(fft(e(L-1000:L)))));
xlabel('Normalized frequency')
ylabel('Magnitude')
title('Output spectrum')

Figure 5 shows the input and output spectra of the last 1000 samples of the data record.

Figure 5: Input and output spectra for the filter in Example 2 (magnitude plotted against normalized frequency for each).

4.3 Example 3: Frequency Domain Characteristics of an LMS Suppression Filter

This example demonstrates the filter characteristics after convergence. The interfering signal is comprised of 100 sinusoids with random phase and random frequencies between 0.3 and 0.6. The signal is white noise. The filter used has M = 31 and Δ = 1, and the gain λ was adjusted to give a reasonable convergence rate. The overall system H(z) = 1 - z^{-Δ} H_n(z) frequency response magnitude, Eq. (30), is then computed and plotted, along with the z-plane pole-zero plot.

% The frequency domain filter characteristics of an interference
% suppression filter with finite bandwidth interference
%
% Create the interference as a closely packed sum of sinusoids
% between 0.3pi and 0.6pi

  • 8/13/2019 Adaptivels Signal Spectra

    13/19

phase = 2*pi*rand(1,100);
freq = 0.3 + 0.3*rand(1,100);
f = zeros(1,100000);
for J = 1:100000
    f(J) = 0;
    for k = 1:100
        f(J) = f(J) + sin(freq(k)*J + phase(k));
    end
end
% The "signal" is white noise
signal = randn(1,100000);
f = .005*f + 0.01*signal;
% Initialize the filter with M = 31, Delta = 1
% Choose filter gain parameter Lambda = 0.5
Delta = 1; Lambda = 0.5; M = 31;
x = LSadapt('initial', Lambda, M);
% Filter the data
f_delay = zeros(1,Delta+1);
y = zeros(1,length(f));
e = zeros(1,length(f));
for J = 1:length(f)
    for K = Delta+1:-1:2
        f_delay(K) = f_delay(K-1);
    end
    f_delay(1) = f(J);
    [y(J),b] = LSadapt(f_delay(Delta+1),f(J));
    e(J) = f(J) - y(J);
end;
% Compute the overall filter coefficients
% H(z) = 1 - z^{-Delta}H_{LMS}(z)
b_overall = [1 zeros(1,Delta-1) -b];
% Find the frequency response
[H,w] = freqz(b_overall,1);
zplane(b_overall,1)

The input and output spectra are shown in Fig. 6, and the filter frequency response magnitude is shown in Fig. 7. The adaptive algorithm has clearly generated a notch filter covering the bandwidth of the interference. The pole-zero plot in Fig. 8 shows how the zeros have been placed over the spectral region (0.3 to 0.6) occupied by the interference.