
Adaptive Filtering

Mustafa Khaleel

Year 2016


Contents

1. Introduction
2. Digital Filters
   2.1. Linear and Nonlinear Filter
   2.2. Filter Design
3. Wiener Filters
   3.1. Error Measurements
   3.2. The Mean-Square Error (MSE)
   3.3. Mean Square Error Surface
4. Method of Steepest Descent
5. The Least Mean Squares (LMS) Algorithm
   5.1. Convergence in the Mean Sense
   5.2. Convergence in the Mean-Square Sense
6. Simulation and Results
Conclusion
References
Annex

Figures

Figure 1 FIR Filter
Figure 2 IIR Filter
Figure 3 Wiener Filters
Figure 4 Error surface with two weights
Figure 5 Adaptive Filter with LMS
Figure 6 Adaptive Filter (Noise Cancellation)
Figure 7 Small step-size
Figure 8 Large step-size
Figure 9 Acceptable step-size


1. Introduction

Filtering is a signal processing operation whose objective is to process a signal in order to manipulate the information it contains. In other words, a filter is a device that maps its input signal to another output signal, facilitating the extraction of the desired information contained in the input signal. A digital filter is one that processes discrete-time signals represented in digital format. For time-invariant filters the internal parameters and the structure of the filter are fixed, and if the filter is linear the output signal is a linear function of the input signal. Once prescribed specifications are given, the design of time-invariant linear filters entails three basic steps: the approximation of the specifications by a rational transfer function, the choice of an appropriate structure defining the algorithm, and the choice of the form of implementation for the algorithm.

An adaptive filter is required when either the fixed specifications are unknown or the specifications cannot be satisfied by time-invariant filters. Strictly speaking, an adaptive filter is a nonlinear filter, since its characteristics depend on the input signal and consequently the homogeneity and additivity conditions are not satisfied. However, if we freeze the filter parameters at a given instant of time, most adaptive filters considered here are linear in the sense that their output signals are linear functions of their input signals.

Adaptive filters are time-varying, since their parameters are continually changing in order to meet a performance requirement. In this sense, we can interpret an adaptive filter as a filter that performs the approximation step on-line. Usually, the definition of the performance criterion requires the existence of a reference signal, a signal that remains hidden in the approximation step of fixed-filter design.


2. Digital Filters

The term filter is commonly used to refer to any device or system that takes a mixture of particles/elements from its input and processes them according to some specific rules to generate a corresponding set of particles/elements at its output. In the context of signals and systems, particles/elements are the frequency components of the underlying signals and, traditionally, filters are used to retain all the frequency components that belong to a particular band of frequencies, while rejecting the rest of them, as much as possible. In a more general sense, the term filter may be used to refer to a system that reshapes the frequency components of the input to generate an output signal with some desirable features.

2.1. Linear and Nonlinear Filter

Filters can be classified as either linear or nonlinear. A linear filter is one whose output is some linear function of the input. In the design of linear filters it is necessary to assume stationarity (statistical time-invariance) and to know the relevant signal and noise statistics a priori. The linear filter design attempts to minimize the effects of noise on the signal by meeting a suitable statistical criterion. The classical linear Wiener filter, for example, minimizes the Mean Square Error (MSE) between the desired signal response and the actual filter response. The Wiener solution is said to be optimum in the mean square sense, and it can be said to be truly optimum for second-order stationary noise statistics (fully described by a constant finite mean and variance). A linear adaptive filter is one whose output is some linear combination of the actual input at any moment in time between adaptation operations. A nonlinear adaptive filter does not necessarily have a linear relationship between the input and output at any moment in time. Many different linear adaptive filter algorithms have been published in the literature. Some of the important features of these algorithms can be identified by the following terms:

1. Rate of convergence - how many iterations are needed to reach a near-optimum solution.
2. Misadjustment - a measure of the amount by which the final value of the MSE, averaged over an ensemble of adaptive filters, deviates from the MSE produced by the Wiener solution.
3. Tracking - the ability to follow statistical variations in a non-stationary environment.
4. Robustness - small disturbances from any source (internal or external) produce only small estimation errors.
5. Computational requirements - the computational operations per iteration, data storage and programming requirements.
6. Structure - the structure of information flow in the algorithm (e.g., serial, parallel), which determines the possible hardware implementations.
7. Numerical properties - the type and nature of quantization errors, numerical stability and numerical accuracy.

2.2. Filter Design

There are two common ways to implement a digital filter: non-recursive and recursive.

A non-recursive filter is also called a Finite Impulse Response (FIR) filter. It is implemented by convolution: each sample of the output is calculated by weighting samples of the input and adding them together.

Recursive filters (Infinite Impulse Response, or IIR, filters) are an extension of this, using previously calculated values from the output as well as points from the input. Recursive filters are defined by a set of recursion coefficients. A minimal sketch of both forms is given after Figure 2.

Figure 1 FIR Filter


Figure 2 IIR Filter
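To illustrate the difference between the two forms, the following MATLAB sketch (not part of the original report; the input signal and coefficient values are assumptions chosen only for illustration) implements a short FIR filter by convolution and a simple first-order IIR recursion that also reuses the previous output sample:

x = randn(1, 200);                    % assumed example input signal

% FIR (non-recursive): output is a weighted sum of input samples only
b_fir = [0.25 0.25 0.25 0.25];        % assumed 4-tap moving-average weights
y_fir = conv(x, b_fir, 'same');       % implemented by convolution

% IIR (recursive): output also reuses the previously computed output
b0 = 0.1; a1 = 0.9;                   % assumed recursion coefficients
y_iir = zeros(size(x));
for n = 2:length(x)
    y_iir(n) = b0*x(n) + a1*y_iir(n-1);
end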

Finally we can classify digital filters by their use and by their implementation. The use of a digital filter can be broken into three categories: time domain, frequency domain and custom. As previously described, time domain filters are used when the information is encoded in the shape of the signal's waveform. Time domain filtering is used for such actions as: smoothing, DC removal, waveform shaping, etc. In contrast, frequency domain filters are used when the information is contained in the amplitude, frequency, and phase of the component sinusoids. The goal of these filters is to separate one band of frequencies from another. Custom filters are used when a special action is required by the filter, something more elaborate than the four basic responses (high-pass, low-pass, band-pass and band-reject).

3. Wiener Filters

Wiener formulated the continuous-time, least mean square error estimation problem in his classic work on interpolation, extrapolation and smoothing of time series (Wiener 1949). The extension of the Wiener theory from continuous time to discrete time is simple, and of more practical use for implementation on digital signal processors. A Wiener filter can be an infinite-duration impulse response (IIR) filter or a finite-duration impulse response (FIR) filter. In general, the formulation of an IIR Wiener filter results in a set of non-linear equations, whereas the formulation of an FIR Wiener filter results in a set of linear equations and has a closed-form solution; FIR filters are relatively simple to compute, inherently stable and more practical. The main drawback of FIR filters compared with IIR filters is that they may need a large number of coefficients to approximate a desired response.

Figure 3 Wiener Filters

where x(n) is the input signal vector and w is the vector of filter coefficients; that is,

x(n) = [x(n), x(n−1), …, x(n−N+1)]^T   (1)

w = [w_0, w_1, …, w_(N−1)]^T   (2)

and y(n) is the output signal,

y(n) = Σ_{i=0}^{N−1} w_i x(n−i) = w_0 x(n) + w_1 x(n−1) + … + w_(N−1) x(n−N+1)

y(n) = w^T x(n)   (3)

d(n) is the training or desired signal, and e(n) is the error signal (the difference between the desired signal d(n) and the output signal y(n)):

e(n) = d(n) − y(n)   (4)


3.1. Error Measurements

Adaptation of the filter coefficients follows a minimization procedure of a

particular objective or cost function. This function is commonly defined as a norm

of the error signal e (n). The most commonly employed norms are the mean-

square error (MSE).

3.2. The Mean-Square Error (MSE).

From Figure 3, the MSE (cost function) is defined as

ξ(n) = E[e^2(n)] = E[|d(n) − y(n)|^2]   (5)

Using equation (3), equation (5) can be expanded as follows:

ξ(n) = E[|d(n) − w^T x(n)|^2]
     = E[d^2(n)] − 2w^T E[d(n)x(n)] + w^T E[x(n)x^T(n)] w

Defining

R = E[x(n)x^T(n)],
p = E[d(n)x(n)],

this becomes

ξ(n) = E[d^2(n)] − 2w^T p + w^T R w   (6)

where R and p are the input-signal autocorrelation matrix and the cross-correlation vector between the desired (reference) signal and the input signal, respectively. The gradient vector of the MSE function with respect to the adaptive filter coefficient vector is given by

∇_w ξ(n) = −2p + 2Rw   (7)

The coefficient vector that minimizes the MSE cost function is obtained by equating the gradient vector to zero. Assuming that R is non-singular, one gets

∇_w ξ(n) = 0


Figure 4 Error surface with two weights

w_o = R^(−1) p   (8)

This system of equations is known as the Wiener-Hopf equations, and the filter whose weights satisfy the Wiener-Hopf equations is called a Wiener filter.
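As a brief illustration (not part of the original report; the signals, filter length and sample count below are assumptions), the Wiener solution of Eq. (8) can be computed in MATLAB by estimating R and p from data by time averaging and then solving the resulting linear system:

N = 4;                               % assumed number of filter taps
L = 1000;                            % assumed number of samples
x = randn(1, L);                     % assumed input signal
d = filter([0.8 -0.4 0.2 0.1], 1, x) + 0.05*randn(1, L);   % assumed desired signal

R = zeros(N, N); p = zeros(N, 1);
for n = N:L
    xv = x(n:-1:n-N+1)';             % x(n) = [x(n) ... x(n-N+1)]^T, Eq. (1)
    R  = R + xv*xv';                 % accumulate estimate of E[x(n)x^T(n)]
    p  = p + d(n)*xv;                % accumulate estimate of E[d(n)x(n)]
end
R = R/(L-N+1); p = p/(L-N+1);
wo = R\p;                            % Wiener solution, Eq. (8)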

3.3. Mean Square Error Surface

From equation (6), the mean square error is a quadratic function of the filter coefficient vector w and has a single minimum point. For example, for a filter with only two coefficients (w_0, w_1), the mean square error function is a bowl-shaped surface with a single minimum point, as shown in Figure 4. At this optimal operating point the mean square error surface has zero gradient.
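The bowl shape can be visualized directly from Eq. (6). The following MATLAB sketch (not from the report; R, p and the desired-signal power are assumed example values) evaluates the MSE surface over a grid of two weights; its single minimum lies at the Wiener solution:

R = [1.0 0.5; 0.5 1.0];              % assumed input autocorrelation matrix
p = [0.6; 0.2];                      % assumed cross-correlation vector
rdd0 = 1.0;                          % assumed E[d^2(n)]

[w0, w1] = meshgrid(-3:0.1:3, -3:0.1:3);
xi = zeros(size(w0));
for i = 1:numel(w0)
    w = [w0(i); w1(i)];
    xi(i) = rdd0 - 2*w'*p + w'*R*w;  % quadratic MSE surface, Eq. (6)
end
surf(w0, w1, xi); xlabel('w_0'); ylabel('w_1'); zlabel('MSE');
wo = R\p;                            % minimum point (Wiener solution, Eq. (8))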


4. Method of Steepest Descent

To solve the Wiener-Hopf equations (Eq. 8) for the tap weights of the optimum filter, we basically need to compute the inverse of an N-by-N matrix made up of the different values of the autocorrelation function. We may avoid the need for this matrix inversion by using the method of steepest descent: starting with an initial guess for the optimum weight vector w_o, say w(0), a recursive search method that may require many iterations (steps) to converge to w_o is used.

The method of steepest descent is a general scheme that uses the following steps to search for the minimum point of any convex function of a set of parameters:

1. Start with an initial guess of the parameters whose optimum values are to be found for minimizing the function.
2. Find the gradient of the function with respect to these parameters at the present point.
3. Update the parameters by taking a step in the opposite direction of the gradient vector obtained in Step 2. This corresponds to a step in the direction of steepest descent of the cost function at the present point. Furthermore, the size of the step taken is chosen proportional to the size of the gradient vector.
4. Repeat Steps 2 and 3 until no further significant change is observed in the parameters.

To implement this procedure in the case of the transversal filter shown in Figure 3, we recall equation (7):

∇_w ξ(n) = −2p + 2Rw   (9)

where ∇ is the gradient operator, defined as the column vector

∇ = [∂/∂w_0, ∂/∂w_1, …, ∂/∂w_(N−1)]^T   (10)

According to the above procedure, if w(n) is the tap-weight vector at the nth iteration, the following recursive equation may be used to update w(n):

w(n+1) = w(n) − μ ∇_w ξ(n)   (11)

where the positive scalar μ is called the step-size, and ∇_w ξ(n) denotes the gradient vector evaluated at the point w = w(n). Substituting (Eq. 9) into (Eq. 11), we get

w(n+1) = w(n) − 2μ(R w(n) − p)   (12)


As we shall soon show, the convergence of w(n) to the optimum solution w_o, and the speed at which this convergence takes place, depend on the size of the step-size parameter μ. A large step-size may result in divergence of this recursive equation.

To see how the recursive update w(n) converges toward w_o, we rearrange Eq. (12) as

w(n+1) = (I − 2μR)w(n) + 2μp   (13)

where I is the N-by-N identity matrix. Next we subtract w_o from both sides of Eq. (13) and rearrange the result to obtain

w(n+1) − w_o = (I − 2μR)(w(n) − w_o)   (14)

Defining c(n) as

c(n) = w(n) − w_o

and using the eigen-decomposition R = QΛQ^T, where Λ is a diagonal matrix consisting of the eigenvalues λ_0, λ_1, …, λ_(N−1) of R, the columns of Q contain the corresponding orthonormal eigenvectors, and I = QQ^T, substitution into Eq. (14) gives

c(n+1) = Q(I − 2μΛ)Q^T c(n)   (15)

Pre-multiplying Eq. (15) by Q^T, we have

Q^T c(n+1) = (I − 2μΛ) Q^T c(n)   (16)

With the notation v(n) = Q^T c(n), this becomes

v(n+1) = (I − 2μΛ) v(n)   (17)

with initial condition v(0) = Q^T c(0) = Q^T [w(0) − w_o]. Since Λ is diagonal, each element of v(n) evolves independently:

v_k(n) = (1 − 2μλ_k)^n v_k(0),   k = 0, 1, …, N−1

Convergence (stability): the weight-error vector decays to zero provided that

0 < μ < 2/λ_max   (stability condition)

where λ_max = max{λ_0, λ_1, …, λ_(N−1)} is the largest eigenvalue of R. The left limit reflects the fact that the tap-weight correction must be in the opposite direction of the gradient vector. The right limit ensures that all the scalar tap-weight parameters in the recursive equation (17) decay exponentially as n increases.
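To make the procedure concrete, the following MATLAB sketch (not from the report) runs the steepest-descent recursion of Eq. (12) for a given R and p, with the step-size chosen inside the stated stability bound; the example R and p, the factor 0.5 and the iteration count are assumptions:

R = [1.0 0.5; 0.5 1.0]; p = [0.6; 0.2];   % assumed example R and p
lambda_max = max(eig(R));                 % largest eigenvalue of R
mu = 0.5/lambda_max;                      % assumed step-size, inside the bound
w  = zeros(size(p));                      % initial guess w(0)
for n = 1:200                             % assumed number of iterations
    grad = 2*(R*w - p);                   % gradient of the MSE, Eq. (7)
    w = w - mu*grad;                      % steepest-descent update, Eq. (12)
end
% w now approximates the Wiener solution wo = R\p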


5. The Least Mean Squares (LMS) Algorithm

In any event, care has to be exercised in the selection of the step-size (learning-rate) parameter μ for the method of steepest descent to work. A further practical limitation of the method of steepest descent is that it requires knowledge of the correlation matrix R and the cross-correlation vector p. When the filter operates in an unknown environment these quantities are not available, and we are forced to use estimates in their place. The least-mean-square (LMS) algorithm results from a simple and yet effective method of providing such estimates: it uses instantaneous estimates of the autocorrelation matrix R and the cross-correlation vector p, deduced directly from their defining equations as in (18) and (19):

R = E[x(n)x^T(n)]  ⟹  R′ = x(n)x^T(n)   (18)

p = E[x(n)d(n)]  ⟹  p′ = x(n)d(n)   (19)

Now recall Eq. (12): w(n+1) = w(n) − 2μ(Rw(n) − p). Replacing R and p by these instantaneous estimates gives

w(n+1) = w(n) − 2μ[x(n)x^T(n)w(n) − x(n)d(n)]
w(n+1) = w(n) − 2μ x(n)[x^T(n)w(n) − d(n)]

Defining e′(n) = x^T(n)w(n) − d(n), we obtain

w(n+1) = w(n) − 2μ x(n) e′(n)   (20)

Equation (20) describes the least-mean-square (LMS) algorithm. Note that e′(n) = y(n) − d(n) = −e(n), so the recursion can equally be written in terms of the error e(n) of Eq. (4), as in Eq. (21) below.

Figure 5 Adaptive Filter with LMS


Summary of the LMS algorithm

Input: tap-weight vector w(n), input vector x(n), and desired output d(n).
Output: filter output y(n) and updated tap-weight vector w(n+1).

1. Filtering:
   y(n) = w^T(n) x(n)

2. Error estimation:
   e(n) = d(n) − y(n)

3. Tap-weight vector adaptation:
   w(n+1) = w(n) + 2μ e(n) x(n)

where x(n) = [x(n), x(n−1), …, x(n−N+1)]^T. Writing the update of Eq. (20) in terms of e(n) = −e′(n) gives

w(n+1) = w(n) + 2μ e(n) x(n)   (21)

This is referred to as the LMS recursion. It suggests a simple procedure for recursive adaptation of the filter coefficients after the arrival of every new input sample x(n) and its corresponding desired output sample d(n). Equations (3), (4), and (21), in this order, specify the three steps required to complete each iteration of the LMS algorithm: equation (3) is the filtering step, performed to obtain the filter output; equation (4) calculates the estimation error; and equation (21) is the tap-weight adaptation recursion.
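The three steps above map directly onto a few lines of MATLAB. The following sketch (not from the report; the function name lms_filter and its interface are assumptions) runs one pass of the LMS algorithm over the data:

function [y, e, w] = lms_filter(x, d, N, mu)
    % x: input signal, d: desired signal, N: number of taps, mu: step-size
    L = length(x);
    w = zeros(N, 1);                 % initial tap-weight vector w(0)
    y = zeros(1, L); e = zeros(1, L);
    for n = N:L
        xv   = x(n:-1:n-N+1)';       % x(n) = [x(n) ... x(n-N+1)]^T
        y(n) = w'*xv;                % 1. filtering, Eq. (3)
        e(n) = d(n) - y(n);          % 2. error estimation, Eq. (4)
        w    = w + 2*mu*e(n)*xv;     % 3. tap-weight adaptation, Eq. (21)
    end
end

A complete noise-cancellation example using the same update is given in the Annex.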

5.1. Convergence in the Mean Sense

A detailed analysis of the convergence of the LMS algorithm in the mean square is much more complicated than the convergence analysis of the algorithm in the mean. That analysis is also much more demanding in the assumptions made concerning the behavior of the weight vector w(n) computed by the LMS algorithm (Haykin, 1991). In this subsection we present a simplified result of the analysis.

The LMS algorithm is convergent in the mean square if the learning-rate parameter μ satisfies the condition

0 < μ < 2/tr[R]   (22)

where tr[R] is the trace of the correlation matrix R. From matrix algebra we know that

tr[R] = Σ_k λ_k ≥ λ_max   (23)

so condition (22) is more restrictive than, and implies, the convergence condition in the mean sense,

0 < μ < 2/λ_max   (24)
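In practice both bounds can be evaluated from an estimate of the correlation matrix; a minimal MATLAB sketch (not from the report; R is an assumed example matrix) is:

R = [1.0 0.5; 0.5 1.0];              % assumed estimate of the correlation matrix
mu_max_ms   = 2/trace(R);            % mean-square bound, Eq. (22)
mu_max_mean = 2/max(eig(R));         % mean-sense bound, Eq. (24)
% Since trace(R) >= lambda_max, the bound of Eq. (22) is the tighter one.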

5.2. Convergence in the Mean-Square Sense

For an LMS algorithm that is convergent in the mean square, the final value ξ(∞) of the mean-squared error ξ(n) is a positive constant, which represents the steady-state condition of the learning curve. In fact, ξ(∞) is always in excess of the minimum mean-squared error ξ_min realized by the corresponding Wiener filter for a stationary environment. The difference between ξ(∞) and ξ_min is called the excess mean-squared error:

ξ_ex = ξ(∞) − ξ_min   (25)

The step-size must, as before, satisfy the stability condition

0 < μ < 2/λ_max   (26)

The ratio of ξ_ex to ξ_min is called the misadjustment:

M = ξ_ex / ξ_min   (27)

It is customary to express the misadjustment M as a percentage. Thus, for example, a misadjustment of 10 percent means that the LMS algorithm produces a mean-squared error (after completion of the learning process) that is 10 percent greater than the minimum mean-squared error ξ_min. Such a performance is ordinarily considered to be satisfactory. Another important characteristic of the LMS algorithm is the settling time. However, there is no unique definition for the settling time. We may, for example, approximate the learning curve by a single exponential with average time constant τ, and so use τ as a rough measure of the settling time. The smaller the value of τ is, the faster the settling time will be. To a good degree of approximation, the misadjustment M of the LMS algorithm is directly proportional to the learning-rate parameter μ, whereas the average time constant τ is inversely proportional to μ.


We therefore have conflicting requirements, in the sense that if the learning-rate parameter is reduced so as to reduce the misadjustment, then the settling time of the LMS algorithm is increased. Conversely, if the learning-rate parameter is increased so as to accelerate the learning process, then the misadjustment is increased.

6. Simulation and Results

In this simulation, a signal is sent through a channel and received together with noise. At the receiver we have the training (reference) signal, and we try to extract the desired signal using the LMS algorithm with a specific value of μ. The set-up is shown in Figure 6.

Figure 6 Adaptive Filter (Noise Cancellation)

In the first case we assign a small value, μ = 0.0002, with the result shown in Figure 7.

Figure 7 Small step-size


As we can see, the received signal is still noisy, so we next try another value of μ.

Figure 8 Large step-size

In Figure 8 we choose a large value of the step-size (μ = 0.4), and the signal cannot be recovered by the receiver.

Figure 9 Acceptable step-size

As we see in Figure 9, the signal is recovered when the value of the step-size is μ = 0.005. To recover the signal in this system we must choose the value of the step-size carefully: it should not be too small (slow convergence) nor too large (fast adaptation but instability). For this system it must lie in the range 0.0002 < μ < 0.4 to obtain a stable system.


Conclusion

Adaptive filtering involves changing filter parameters (coefficients) over time to adapt to changing signal characteristics. Over the past three decades, digital signal processors have made great advances in increasing speed and complexity and in reducing power consumption. As a result, real-time adaptive filtering algorithms are quickly becoming practical and essential for the future of communications, both wired and wireless. The LMS algorithm is by far the most widely used algorithm in adaptive filtering, for several reasons: the main features that attract the use of the LMS algorithm are its low computational complexity, proof of convergence in a stationary environment, unbiased convergence in the mean to the Wiener solution, and stable behavior when implemented with finite-precision arithmetic.

This contrasts with the method of steepest descent, which depends on updating the weights (coefficients) iteratively using the true gradient, continually seeking the bottom point of the error surface of the filter, and therefore requires knowledge of the signal statistics R and p.

References

(1) Adaptive Filtering: Algorithms and Practical Implementation, Third Edition, Springer, 2008.
(2) Principles of Adaptive Filters and Self-learning Systems, Springer-Verlag London Limited, 2005.
(3) Advanced Digital Signal Processing and Noise Reduction, Second Edition, Saeed V. Vaseghi, John Wiley & Sons, 2000.
(4) Adaptive Filtering: Fundamentals of Least Mean Squares with MATLAB, Alexander D. Poularikas, CRC Press, 2015.
(5) The Scientist and Engineer's Guide to Digital Signal Processing.


Annex

Implementation of an Adaptive Filter Using LMS (MATLAB)

t = 1:0.025:5;
desired = 5*sin(2*3.*t);              % desired low-frequency signal
noise   = 5*sin(2*50*3.*t);           % interfering noise
refer   = 5*sin(2*50*3.*t + 3/20);    % reference input (phase-shifted noise)
primary = desired + noise;            % primary input = desired + noise

subplot(4,1,1); plot(t,desired); ylabel('desired');
subplot(4,1,2); plot(t,refer);   ylabel('refer');
subplot(4,1,3); plot(t,primary); ylabel('primary');

order = 2;                            % number of adaptive taps
mu = 0.005;                           % step-size
n = length(primary);
delayed   = zeros(1,order);           % reference tap-delay line
adap      = zeros(1,order);           % adaptive weights
cancelled = zeros(1,n);               % error signal = recovered signal

for k = 1:n
    delayed(1) = refer(k);                       % newest reference sample
    y = delayed*adap';                           % filter output (noise estimate)
    cancelled(k) = primary(k) - y;               % error = primary - noise estimate
    adap = adap + 2*mu*cancelled(k).*delayed;    % LMS weight update, Eq. (21)
    delayed(2:order) = delayed(1:order-1);       % shift the delay line
end

subplot(4,1,4); plot(t,cancelled); ylabel('cancelled');