
Page 1: Channel Equalisation

CHANNEL EQUALISATION
BY- AJIT KUMAR PANDA, POONAN SAHOO, SAYANTAN DAS, SURAJ CHOUDHURY

Page 2: Channel Equalisation

THREATS IN DIGITAL COMMUNICATION

There are four main threats in the process of digital communication:

• Inter-Symbol Interference (ISI)

• Multipath Propagation

• Co-channel Interference

• Presence of noise in the channel

Page 3: Channel Equalisation

INTER-SYMBOL INTERFERENCE:

Inter-symbol interference (ISI) arises when the channel is dispersive: each received pulse is smeared so that it overlaps, and is affected by, adjacent pulses, causing interference between the transmitted symbols.

This makes it difficult to recover the original data from a single channel sample.
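As a rough illustration, the numpy sketch below passes BPSK symbols through a hypothetical dispersive FIR channel (the taps [1.0, 0.5, 0.2] are made up for illustration), so that each received sample mixes the current symbol with the two before it:

```python
# Minimal ISI sketch: a dispersive FIR channel smears each symbol over three samples.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=10)   # BPSK symbols
channel = np.array([1.0, 0.5, 0.2])          # hypothetical dispersive channel taps

received = np.convolve(symbols, channel)     # received[k] = s[k] + 0.5*s[k-1] + 0.2*s[k-2]
print(symbols)
print(received[:10])                         # adjacent symbols now interfere
```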

Page 4: Channel Equalisation

CO-CHANNEL INTERFERENCE:

Co-channel Interference (CCI) and Adjacent Channel Interference (ACI) occur in communication systems due to multiple access techniques using space, frequency or time.

CCI occurs in cellular radio and dual-polarized microwave radio, where the allocated channel frequencies are reused in different cells for efficient utilization of the spectrum.

Page 5: Channel Equalisation

MULTI-PATH PROPAGATION:

Within telecommunication channels multiple paths of propagation commonly occur. In practical terms this is equivalent to transmitting the same signal through a number of separate channels, each having a different attenuation and delay.

Consider an open-air radio transmission channel that has three propagation paths, as illustrated in Fig. These could be:

- Direct
- Earth-bound
- Sky-bound

Fig 1.2b describes how a receiver picks up the transmitted data. The direct signal is received first, whilst the earth-bound and sky-bound signals are delayed. All three signals are attenuated, with the sky path suffering the most.

Multipath interference between consecutively transmitted signals takes place if one signal is received whilst the previous signal is still being detected. In Fig 1.2 this occurs if the symbol transmission rate is greater than 1/τ, where τ represents the transmission delay. Because bandwidth efficiency leads to high data rates, multipath interference commonly occurs.
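A minimal sketch of this three-path idea, with made-up attenuations and integer-sample delays standing in for the direct, earth-bound and sky-bound paths:

```python
# Three-path multipath sketch: sum delayed, attenuated copies of the transmitted signal.
import numpy as np

def multipath(signal, paths):
    """Sum one delayed, attenuated copy of the signal per propagation path."""
    out = np.zeros(len(signal) + max(delay for _, delay in paths))
    for gain, delay in paths:
        out[delay:delay + len(signal)] += gain * signal
    return out

# (attenuation, delay) per path; the sky path is attenuated and delayed the most.
paths = [(1.0, 0), (0.6, 2), (0.3, 5)]
tx = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
print(multipath(tx, paths))   # overlapping copies interfere once the rate exceeds 1/tau
```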

Page 6: Channel Equalisation

EQUALIZER

Equalization is the process of removing ISI and noise effects from the channel.

The equalizer is located at the receiver end of the channel: it is an inverse filter placed at the front end of the receiver.

The transfer function of the equalizer is the inverse of the transfer function of the channel.

Equalization is an iterative process of reducing the mean square error, i.e. the difference between the desired response and the output of the filter used in the equalizer.
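To make the inverse-filter idea concrete, the sketch below computes the frequency response of an ideal inverse for a hypothetical channel. Where the channel response is small, the inverse gain becomes large, which is exactly the noise-enhancement problem raised later for linear equalizers:

```python
# Zero-forcing view of equalization: H_eq(f) = 1 / H_ch(f) (channel taps are illustrative).
import numpy as np

channel = np.array([1.0, 0.5, 0.2])
H_ch = np.fft.fft(channel, 64)      # channel frequency response on 64 bins
H_eq = 1.0 / H_ch                   # ideal inverse-filter response

print(np.abs(H_ch).min())           # weakest part of the channel response...
print(np.abs(H_eq).max())           # ...is where the equalizer gain (and the noise) is largest
```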

Page 7: Channel Equalisation

TYPES OF EQUALIZERS:

Equalizers are of two types: LINEAR and NON-LINEAR.

Linear equalizers aim at reducing ISI in linear channels using various algorithms such as Least Mean Square (LMS), Recursive Least Squares (RLS) and normalized LMS (NLMS).

Non-linear equalizers equalize non-linear channels. They mainly use Neural Network (NN) and Multilayer Perceptron (MLP) based algorithms for equalization.

Page 8: Channel Equalisation

Linear Adaptive Filters:

An adaptive filter is a computational device that attempts to model the relationship between two signals in real time in an iterative manner

Here the output is compared to the desired signal and the parameters of the adaptive filter are varied accordingly; it is therefore known as a self-designing filter.

Page 9: Channel Equalisation

Applications of Adaptive Filters: Identification

Parameters:
u = input of adaptive filter = input to plant
y = output of adaptive filter
d = desired response = output of plant
e = d - y = estimation error

Applications: System identification

Used to provide a linear model of an unknown plant
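A minimal sketch of this identification setup, with a hypothetical 3-tap plant and the LMS-style update that is detailed on later slides:

```python
# System identification sketch: the adaptive filter and the unknown plant share input u;
# e = d - y drives the taps toward the plant coefficients. Plant taps are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
plant = np.array([0.8, 0.3, -0.1])   # unknown plant to be identified
w = np.zeros(3)                      # adaptive filter taps
mu = 0.05
u = rng.standard_normal(2000)        # common input

for n in range(3, len(u)):
    x = u[n:n-3:-1]                  # tap-input vector [u(n), u(n-1), u(n-2)]
    d = plant @ x                    # desired response = plant output
    y = w @ x                        # adaptive filter output
    e = d - y                        # estimation error
    w += mu * e * x                  # adaptation step
print(w)                             # close to the plant taps after convergence
```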

Page 10: Channel Equalisation

Applications of Adaptive Filters: Inverse Modeling

Parameters:
u = input of adaptive filter = output of plant
y = output of adaptive filter
d = desired response = delayed system input
e = d - y = estimation error

Applications: Channel Equalization

Used to provide an inverse model of an unknown plant

Page 11: Channel Equalisation

The channel equalization model:
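The model figure is not reproduced in this copy. As a sketch of the same signal flow, with assumed channel taps, delay and noise level, the adaptive filter below is trained so that its output matches a delayed copy of the transmitted sequence:

```python
# Channel equalization (inverse modeling) sketch; channel, delay and noise are illustrative.
import numpy as np

rng = np.random.default_rng(2)
t = rng.choice([-1.0, 1.0], size=5000)           # transmitted BPSK symbols
r = np.convolve(t, [1.0, 0.5, 0.2])[:len(t)]     # dispersive channel output
r += 0.01 * rng.standard_normal(len(t))          # additive channel noise

delay, M, mu = 5, 11, 0.01
w = np.zeros(M)
for n in range(M, len(t)):
    x = r[n:n-M:-1]                 # equalizer tap-input vector
    e = t[n - delay] - w @ x        # desired response is the delayed transmitted symbol
    w += mu * e * x

eq = np.convolve(r, w)[delay:delay + len(t)]     # equalized sequence, aligned with t
print(np.mean(np.sign(eq) == t))                 # fraction of correct symbol decisions
```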

Page 12: Channel Equalisation

Stochastic Gradient Approach:

• The most commonly used family of adaptive filters.

• The cost function is defined as the mean-squared error: the difference between the filter output and the desired response.

• Based on the method of steepest descent: move towards the minimum of the error surface; this requires the gradient of the error surface to be known.

• The most popular adaptation algorithm is LMS. Derived from steepest descent, it does not require the gradient to be known: the gradient is estimated at every iteration.

Least-Mean-Square (LMS) Algorithm

In words: (updated value of tap-weight vector) = (old value of tap-weight vector) + (learning-rate parameter) × (tap-input vector) × (error signal), i.e.

w(n+1) = w(n) + µ * u(n) * e(n)

Page 13: Channel Equalisation

LMS algorithm

• Introduced by Widrow & Hoff in 1959

• Simple: no matrix calculations are involved in the adaptation

• Belongs to the family of stochastic gradient algorithms

• An approximation of the steepest-descent method

• Based on the MMSE (Minimum Mean Square Error) criterion

• The adaptive process contains two important signals:

• 1.) The filtering process, producing the output signal

• 2.) The desired signal (training sequence)

• Adaptive process: recursive adjustment of the filter tap weights

Page 14: Channel Equalisation

Least-Mean-Square (LMS) Algorithm continued...

The LMS algorithm consists of two basic processes that are followed in adaptive equalization:

Training: adapting the filter to a known training sequence.

Tracking: keeping track of the changing characteristics of the channel.

Page 15: Channel Equalisation

LMS Algorithm Steps:

Filter output:
z(n) = Σ (k=0 to M-1) w_k*(n) * u(n-k)

Estimation error:
e(n) = d(n) - z(n)

Tap-weight adaptation:
w_k(n+1) = w_k(n) + µ * u(n-k) * e*(n)
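A compact sketch collecting the three steps into one routine (real-valued signals assumed, so the conjugates drop):

```python
# LMS sketch following the three steps above; names mirror the slide's notation.
import numpy as np

def lms(u, d, M, mu):
    """Adapt M taps so that the filter output tracks the desired response d."""
    w = np.zeros(M)
    e = np.zeros(len(u))
    for n in range(M, len(u)):
        x = u[n:n-M:-1]          # tap-input vector u(n), ..., u(n-M+1)
        z = w @ x                # step 1: filter output z(n)
        e[n] = d[n] - z          # step 2: estimation error e(n)
        w += mu * x * e[n]       # step 3: tap-weight adaptation
    return w, e
```

For example, with the system-identification setup shown earlier, lms(u, d, M=3, mu=0.05) drives the taps toward the plant coefficients.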

Page 16: Channel Equalisation

Derivation of the LMS MSE expression:

Error: e(n) = x(n) - x'(n)
Squared error: E = (x(n) - x'(n))^2

Using the minimum mean square error criterion, we differentiate the expression:
dE/dw = d/dw (x(n) - x'(n))^2

Applying the chain rule and substituting x'(n) = Σ w_i * s(n-i), we get:
dE/dw = -2 (x(n) - x'(n)) * s(n-i) = -2 * e(n) * s(n-i)

From this we can derive an update equation for every new sample n using steepest descent (moving against the gradient):
w(n+1) = w(n) - µ * (dE/dw)
so w(n+1) = w(n) + 2µ * e(n) * s(n-i), for i = 0, 1, 2, 3, ...

Page 17: Channel Equalisation

Stability of LMS:

The LMS algorithm is convergent in the mean square if and only if the step-size parameter satisfies

0 < µ < 2/λ_max

where λ_max is the largest eigenvalue of the correlation matrix of the input data.

A more practical test for stability is

0 < µ < 2/(tap-input signal power)

The value of the step size has to be a trade-off between a fast convergence rate and a small steady-state misadjustment. Larger values of the step size:

• increase the adaptation rate (faster adaptation)

• increase the residual mean-squared error
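Both bounds can be checked numerically. The sketch below estimates the input correlation matrix from data (white noise, purely for illustration) and prints the two step-size limits:

```python
# Step-size bounds: 2/lambda_max (eigenvalue test) and 2/(tap-input power) (practical test).
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal(10000)
M = 8

# Sample autocorrelation and the M x M input correlation matrix R.
acf = np.array([np.dot(u[:len(u) - k], u[k:]) / len(u) for k in range(M)])
R = np.array([[acf[abs(i - j)] for j in range(M)] for i in range(M)])

lam_max = np.linalg.eigvalsh(R).max()
print("eigenvalue bound:      0 < mu <", 2 / lam_max)
print("tap-input power bound: 0 < mu <", 2 / (M * acf[0]))   # M taps x input signal power
```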

Page 18: Channel Equalisation

LMS-Pros & cons:

LMS – Advantages:

• Simplicity of implementation

• Does not neglect the noise, unlike the zero-forcing equalizer

• Stable and robust performance under different signal conditions

LMS – Disadvantages:

• Slow convergence

• Requires a training sequence as reference, thus decreasing the effective communication bandwidth

Page 19: Channel Equalisation

NLMS – Normalised LMS Algorithm

• Mainly intended to provide better performance than LMS, since LMS convergence is slow.

• Uses a normalization technique to provide a variable step size: the step size µ is divided by the instantaneous signal power, providing more stability and faster convergence.

• Is equivalent to running the LMS recursion on a normalised sample of inputs every time the recursion (the NLMS operation) is carried out.

The step-size value for the current input vector is calculated as

µ(n) = 1 / (x^T(n) x(n))

The filter tap weights are then updated in preparation for the next iteration:

w(n+1) = w(n) + µ(n) * e(n) * x(n)
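A minimal NLMS sketch matching this recursion; the small constant eps guards against division by near-zero input power and is a standard safeguard, not something stated on the slide:

```python
# NLMS sketch: LMS with the step size normalised by the instantaneous input power.
import numpy as np

def nlms(u, d, M, mu=1.0, eps=1e-8):
    w = np.zeros(M)
    for n in range(M, len(u)):
        x = u[n:n-M:-1]                       # tap-input vector
        e = d[n] - w @ x                      # estimation error
        w += (mu / (x @ x + eps)) * e * x     # power-normalised update
    return w
```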

Page 20: Channel Equalisation

Results for LMS algorithm:

Convergence is faster with an increased step size. The plot is for noise = 30 dB.

Page 21: Channel Equalisation

Results for NLMS algorithm:

Convergence is faster in the case of the NLMS algorithm.

It provides a more stable output.

Page 22: Channel Equalisation

NON-LINEAR CHANNEL EQUALISATION

Page 23: Channel Equalisation

Need For Non-Linear Equalizer:

• Linear equalizers do not perform well on channels having deep spectral nulls in the passband.

• To compensate for the distortion, a linear equalizer places too much gain in the vicinity of the spectral nulls, thereby enhancing the noise present at those frequencies.

• The BER is better with a non-linear channel equalizer.

• Linear equalizer: an inverse problem. Non-linear equalizer: a pattern-classification problem.

Page 24: Channel Equalisation

Non-Linear Channel Equalizer:

t_k denotes a sequence of T-spaced complex symbols of a BPSK constellation, where 1/T denotes the symbol rate and k denotes the discrete time index.

A widely used model for a linear dispersive channel is an FIR filter whose output at the k-th instant is given by

a_k = Σ (i=0 to Nh-1) h_i * t_{k-i}

[Fig: Schematic diagram of a non-linear wireless digital communication system with channel equalizer]

Page 25: Channel Equalisation

Continued…

where h_i denotes the FIR filter weights and Nh denotes the FIR order.

Considering the channel to be a nonlinear one, the NL block introduces the channel nonlinearity to the filter output.

The transmitted signal t_k, after passing through the nonlinear channel and the addition of noise, arrives at the receiver; the received signal at the k-th time instant is denoted by r_k.

The purpose of the equalizer attached at the receiver front end is to recover the transmitted sequence t_k, or a delayed version of it, where the delay is associated with the physical channel.
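Putting the pieces together, the sketch below generates r_k from t_k using an FIR filter, a memoryless nonlinearity and additive noise; the tap values and the tanh nonlinearity are illustrative assumptions, since the slides do not fix them:

```python
# Nonlinear channel sketch: FIR filter -> NL block -> additive noise.
import numpy as np

rng = np.random.default_rng(4)
t = rng.choice([-1.0, 1.0], size=1000)               # BPSK symbols t_k
h = np.array([0.26, 0.93, 0.26])                     # FIR weights h_i (illustrative, Nh = 3)
a = np.convolve(t, h)[:len(t)]                       # a_k = sum over i of h_i * t_{k-i}
r = np.tanh(a) + 0.05 * rng.standard_normal(len(t))  # NL block, then additive noise
print(r[:5])                                         # received samples r_k
```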

Page 26: Channel Equalisation

Neural Network:

Neural-network research started in the 1800s as an effort to describe how the human mind works.

It was applied to computational models with Turing's B-type machines and the Perceptron.

A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use.

Page 27: Channel Equalisation

Continued:

Today, in its general form, a neural network is a machine that is designed using electronic components or is simulated in software on a digital computer.

To achieve good performance, neural networks employ a massive interconnection of simple computing cells referred to as "neurons" or "processing units".

The procedure used is called a learning algorithm, the function of which is to modify the synaptic weights of the network in an orderly fashion to attain a desired design objective. McCulloch and Pitts developed neural networks for different computing machines.

Page 28: Channel Equalisation

Artificial Neural Network:

Artificial Neural Networks (ANNs) have become a powerful tool for many complex applications including function approximation, nonlinear system identification, motor control, pattern recognition, adaptive channel equalization and optimization.

An ANN is capable of performing a nonlinear mapping between the input and output spaces due to its large parallel interconnections between different layers and its nonlinear processing characteristics.

Page 29: Channel Equalisation

Continued:

An artificial neuron basically consists of a computing element that performs the weighted sum of the input signals and the connecting weights. The weighted sum is added to a bias, called the threshold, and the resultant signal is passed through a nonlinear activation function. Common types of activation functions are the sigmoid and the hyperbolic tangent.

Each neuron is associated with three adjustable parameters: the connecting weights, the bias and the slope of the nonlinear function.

From the structural point of view, a NN may be single-layer or multilayer.
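A one-neuron sketch of this description, with the three parameters (weights, bias, slope) explicit and tanh assumed as the activation:

```python
# Single artificial neuron: weighted sum + bias, passed through a nonlinear activation.
import numpy as np

def neuron(s, w, bias, slope=1.0, activation=np.tanh):
    """slope scales the activation input; weights, bias and slope are the learnable parameters."""
    return activation(slope * (w @ s + bias))

print(neuron(np.array([0.5, -1.0, 0.2]), np.array([0.4, 0.1, -0.3]), bias=0.05))
```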

Page 30: Channel Equalisation

Multi-layer Perceptron:

The perceptron, a single-level connection of McCulloch-Pitts neurons, is called a single-layer feed-forward network.

The network is capable of linearly separating the input vectors into pattern classes by a hyperplane. Similarly, many perceptrons can be connected in layers to form an MLP network.

In an MLP network, the input signal propagates through the network in a forward direction, on a layer-by-layer basis. This network has been applied successfully to solve diverse problems.

Page 31: Channel Equalisation

Continued…

Generally, the MLP is trained using the popular error back-propagation algorithm.

The scheme of an MLP using four layers is shown. s_i represents the inputs s_1, s_2, ..., s_n to the network, and y_k represents the output of the final layer of the neural network.

The connecting weights between the input and the first hidden layer, the first and second hidden layers, and the second hidden layer and the output layer are represented by W_i, W_ji and W_kj respectively.

The final output layer of the MLP may be expressed as
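The expression itself is missing from this copy of the slides. As a sketch of what a four-layer MLP forward pass looks like with the weight naming above (tanh activations and the layer sizes are assumptions for illustration):

```python
# Four-layer MLP forward pass: W_i (input -> hidden 1), W_ji (hidden 1 -> hidden 2),
# W_kj (hidden 2 -> output); sizes and tanh activations are assumptions.
import numpy as np

rng = np.random.default_rng(5)
s = rng.standard_normal(4)            # inputs s_1 ... s_n
W_i = rng.standard_normal((6, 4))     # input -> first hidden layer
W_ji = rng.standard_normal((5, 6))    # first -> second hidden layer
W_kj = rng.standard_normal((1, 5))    # second hidden -> output layer

h1 = np.tanh(W_i @ s)
h2 = np.tanh(W_ji @ h1)
y_k = np.tanh(W_kj @ h2)              # final-layer output y_k
print(y_k)
```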