An Introduction to Adaptive Filtering & Its Applications. By Asst. Prof. Dr. Thamer M. Jamel, Department of Electrical Engineering, University of Technology, Baghdad, Iraq.


Introduction
Linear filters: the filter output is a linear function of the filter input.
Design methods:
Optimal filter design (e.g., the Wiener filter) minimizes the mean-square value of the error signal; it is based on a priori statistical information about the signal and noise.
When this a priori statistical information is not available, it is not possible to design a Wiener filter in the first place.
Moreover, the signal and/or noise characteristics are often nonstationary, and the statistical parameters vary with time.
An adaptive filter has an adaptation algorithm that is meant to monitor the environment and vary the filter transfer function accordingly. Based on the actual signals received, it attempts to find the optimum filter design.
Adaptive filter
The basic operation now involves two processes :
1. a filtering process, which produces an output signal in response to a given input signal.
2. an adaptation process, which aims to adjust the filter parameters (filter transfer function) to the (possibly time-varying) environment
Often, the (average) square value of the error signal is used as the optimization criterion
Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing, even when processing analog signals. It is common to use an FIR digital filter, since the generalization to adaptive IIR filters leads to stability problems.
Applications of Adaptive Filters: System Identification
Used to provide a linear model of an unknown plant.
Applications of Adaptive Filters: Inverse Modeling
Used to provide an inverse model of an unknown plant.
Applications of Adaptive Filters: Prediction
Used to provide a prediction of the present value of a random signal.
Applications of Adaptive Filters: Interference Cancellation
Used to cancel unknown interference from a primary signal.
Example: Acoustic Echo Cancellation
Based on the method of steepest descent: move toward the minimum of the error surface, with the gradient of the error surface estimated at every iteration.
LMS Adaptive Algorithm
Simple: no matrix calculations are involved in the adaptation.
In the family of stochastic gradient algorithms
Approximation of the steepest-descent method.
Based on the MMSE (Minimum Mean Square Error) criterion.
The adaptive process involves two input signals:
1) the filter input, from which the filtering process produces the output signal;
2) the desired signal (training sequence).
The adaptive process recursively adjusts the filter tap weights.
LMS Algorithm Steps
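In the usual notation, with tap-weight vector w(n), tap-input vector x(n), desired response d(n), output y(n), error e(n), and step size μ, the standard per-sample LMS steps can be summarized as:

    y(n) = w^T(n) x(n)                  (filtering: compute the filter output)
    e(n) = d(n) - y(n)                  (compute the estimation error)
    w(n+1) = w(n) + μ e(n) x(n)         (adaptation: update the tap weights)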
Stability of LMS
The LMS algorithm is convergent in the mean square if and only if the step-size parameter satisfies 0 < μ < 2/λmax.
Here λmax is the largest eigenvalue of the correlation matrix of the input data.
A more practical test for stability is 0 < μ < 2/(total tap-input power).
Larger values of the step size increase the adaptation rate (faster adaptation), but also increase the residual mean-squared error.
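As an illustration of this practical test, a minimal Python sketch (the helper name safe_step_size and the safety factor are hypothetical, not from the slides) of choosing a stable step size from the estimated input power:

    import numpy as np

    def safe_step_size(x, num_taps, safety=0.1):
        # Practical bound: 0 < mu < 2 / (tap-input power), where the
        # tap-input power is approximated by num_taps * E[x^2].
        # 'safety' keeps mu well inside the bound (a common rule of thumb).
        tap_input_power = num_taps * np.mean(x ** 2)
        return safety * 2.0 / tap_input_power

    x = np.random.randn(10000)            # example: unit-power white input
    mu = safe_step_size(x, num_taps=8)    # roughly 0.025 for this input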
Given the following function, we need to obtain the vector (C1, C2) that gives the absolute minimum. It is obvious from the function itself which values give the minimum.
STEEPEST DESCENT EXAMPLE
Now let's find the solution by the steepest-descent method.
We start by assuming (C1 = 5, C2 = 7).
We select the constant μ. If it is too big, we miss the minimum; if it is too small, it takes a long time to reach the minimum. Here we select μ = 0.1.
The gradient vector of the function is evaluated at each iteration.
As we can see, the vector [C1, C2] converges to the value that yields the function minimum, and the speed of this convergence depends on μ.
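Since the function itself is not reproduced here, a minimal sketch assuming a hypothetical quadratic bowl F(C1, C2) = (C1 - 1)^2 + (C2 - 2)^2 (chosen only for illustration), with the starting point (5, 7) and μ = 0.1 from the example:

    import numpy as np

    # Hypothetical quadratic used only for illustration; its minimum is at (1, 2).
    def grad(c):
        return np.array([2.0 * (c[0] - 1.0), 2.0 * (c[1] - 2.0)])

    mu = 0.1                          # step size from the example
    c = np.array([5.0, 7.0])          # initial guess (C1 = 5, C2 = 7)

    for k in range(50):
        c = c - mu * grad(c)          # steepest descent: step against the gradient

    print(c)                          # approaches the minimizer (1, 2)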
LMS – CONVERGENCE GRAPH
This graph illustrates the LMS algorithm. First we start from an initial guess of the tap weights. Then we move in the direction opposite to the gradient vector to compute the next tap weights, and so on, until we reach the MMSE, meaning the MSE is 0 or very close to it. (In practice we cannot get an error of exactly 0, because the noise is a random process; we can only decrease the error below a desired minimum.)
Example for an unknown channel of 2nd order: the adaptive filter taps converge to the desired combination of taps.
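A minimal sketch of this kind of experiment, assuming a hypothetical 2nd-order FIR channel h = [0.9, 0.4] (the actual coefficients from the slide are not given here), identified by a 2-tap LMS filter:

    import numpy as np

    np.random.seed(0)
    h = np.array([0.9, 0.4])            # hypothetical unknown 2nd-order channel
    N = 5000
    x = np.random.randn(N)              # white input signal
    d = np.convolve(x, h)[:N]           # desired signal = channel output

    mu = 0.01                           # step size
    w = np.zeros(2)                     # adaptive filter taps
    x_buf = np.zeros(2)                 # most recent input samples [x(n), x(n-1)]

    for n in range(N):
        x_buf = np.array([x[n], x_buf[0]])   # shift in the new sample
        y = w @ x_buf                        # filtering process
        e = d[n] - y                         # error against the desired signal
        w = w + mu * e * x_buf               # LMS tap-weight update

    print(w)   # w approaches the desired combination of taps, here [0.9, 0.4]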
Channel Equalisation
Introduction
Wireless communication is the most interesting field of communication these days, because it supports mobility (mobile users). However, many wireless applications now require high-speed communication (high data rates).
What is ISI?
ISI (Inter-Symbol Interference) is the overlap of a transmitted symbol with neighbouring symbols at the receiver.
Cause of ISI
ISI is imposed due to the band-limiting effect of the practical channel, and also due to multipath effects (delay spread).
Definition of the equalizer:
The equalizer is a digital filter that provides an approximate inverse of the channel frequency response.
Need for equalization:
To mitigate the effects of ISI and decrease the probability of error that would otherwise occur without suppression of ISI; however, this reduction of ISI has to be balanced against avoiding noise power enhancement.
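A minimal sketch of training such an equalizer with LMS, under assumptions not taken from the slides: a hypothetical dispersive channel [1.0, 0.5, 0.25], BPSK training symbols, an 11-tap equalizer, and a delayed copy of the known training sequence used as the desired signal:

    import numpy as np

    np.random.seed(1)
    channel = np.array([1.0, 0.5, 0.25])      # hypothetical dispersive channel
    N, taps, delay, mu = 4000, 11, 5, 0.002

    s = np.sign(np.random.randn(N))           # BPSK training sequence
    r = np.convolve(s, channel)[:N]           # received signal with ISI
    r += 0.01 * np.random.randn(N)            # small additive channel noise

    w = np.zeros(taps)                        # equalizer tap weights
    buf = np.zeros(taps)                      # equalizer input buffer
    for n in range(N):
        buf = np.concatenate(([r[n]], buf[:-1]))    # shift in the new sample
        y = w @ buf                                  # equalizer output
        d = s[n - delay] if n >= delay else 0.0      # delayed training symbol
        e = d - y                                    # error signal
        w += mu * e * buf                            # LMS update

    # After training, hard decisions sign(y) should match the delayed training symbols.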
Types of Equalization Techniques

Equalization techniques are compared in terms of the number of multiplying operations, complexity, convergence, and tracking:

Algorithm   Multiplying operations   Complexity   Convergence   Tracking
LMS         -                        Low          Slow          Poor
MMSE        -                        -            -             -
The LMS Equation
The Least Mean Squares Algorithm (LMS) updates each coefficient on a sample-by-sample basis based on the error e(n).
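In the usual notation, with w_k(n) the kth tap weight, x(n-k) the corresponding input sample, and μ the step size, the standard LMS coefficient update is (some texts include an extra factor of 2 with μ):

    w_k(n+1) = w_k(n) + μ · e(n) · x(n-k)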
This equation minimises the power in the error e(n).
The value of µ (mu) is critical.
If µ is too small, the filter reacts slowly.
If µ is too large, the filter resolution is poor.
The selected value of µ is a compromise.
Audio Noise Reduction
A popular application of acoustic noise reduction is for headsets for pilots. This uses two microphones.
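A minimal sketch of the classic two-microphone noise canceller (an assumed configuration, not taken verbatim from the slides): the primary microphone picks up speech plus noise, the reference microphone picks up a correlated version of the noise, and an LMS filter estimates the noise so that the error output is the cleaned speech:

    import numpy as np

    np.random.seed(2)
    N = 20000
    speech = np.sin(2 * np.pi * 0.01 * np.arange(N))     # stand-in for speech
    noise = np.random.randn(N)                           # noise source
    noise_path = np.array([0.6, 0.3, 0.1])               # assumed acoustic path
    d = speech + np.convolve(noise, noise_path)[:N]      # primary mic: speech + (filtered) noise
    x = noise                                            # reference mic: correlated noise

    taps, mu = 8, 0.005
    w = np.zeros(taps)
    buf = np.zeros(taps)
    e_out = np.zeros(N)
    for n in range(N):
        buf = np.concatenate(([x[n]], buf[:-1]))   # reference input vector
        y = w @ buf                                 # noise estimate
        e_out[n] = d[n] - y                         # error = cleaned speech estimate
        w += mu * e_out[n] * buf                    # LMS update

    # After convergence, e_out approximates the speech component.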
Setting the Step size (mu)
The rate of convergence of the LMS Algorithm is controlled by the “Step size (mu)”.
This is the critical variable.
In the demonstration, the “Input” is signal + noise; the “Output” starts at zero and grows; the “Error” contains the noise.
Other adaptive filtering approaches and extensions include:
Partial updating of the weights
Sub-band adaptive filtering
Adaptive Kalman filtering
The affine projection method
Time-space adaptive processing
Non-linear adaptive filtering
In the speech noise-cancellation example, the primary input is d(n) = speech + noise.