ADAPTIVE ANTENNAS
TEMPORAL BF
PROF. A.M. ALLAM
1-Introduction
In traditional (conventional) array antennas the main beam is steered to the direction of the signal of interest (SOI); such arrays are called beam-steered arrays, phased arrays, or scanned arrays
In traditional array antennas with electronic beam steering, the beam is steered via phase shifters: the phase of the current is changed directly at each antenna element at RF frequencies
In modern beam-steered array antennas (smart antenna, SA, arrays), the pattern is shaped using signal processing to satisfy certain optimum criteria
Digital beamformed antenna arrays (DBF) or adaptive antenna (AA) arrays are modern beam-steered arrays in which adaptive algorithms are employed
The digital implementation of these algorithms requires that the array outputs be digitized through the use of A/D converters
This digitization can be performed at either IF or baseband frequencies
The robust design of an adaptive array system is a multidisciplinary process. It includes:
Signal processing
Transceiver design
Array design
Antenna element design
Signal propagation characteristics
2-Adaptive beamforming (ABF)
[Figure: application to angle of arrival (AOA): a base station with circular adaptive antenna arrays in mobile communication]
Temporal reference BF
-It uses temporal signal properties, such as embedded training sequences, to form a reference. This signal is known to both the TX and RX and is sent from the TX to the RX during the training period
-It uses the Wiener-Hopf solution (equation) to get the weight vector from the reference and the received signal
$W = R_{XX}^{-1}\, p_X$

$R_{XX} = \frac{1}{K}\sum_{k=1}^{K} X(k)\, X^H(k)$

$R_{XX}$ is an M x M correlation matrix of the array antenna output X(k)

$p_X$ is an M x 1 cross-correlation vector between X(k) and the reference signal d(k):

$p_X = \frac{1}{K}\sum_{k=1}^{K} X(k)\, d^*(k)$
-Then it uses the optimization (adaptation) algorithm to continuously update the weight vector W such that the BF can receive signals arriving from the desired directions and attenuate signals from other directions
Or, in terms of expectations:

$R_{XX} = E[X(k)\, X^H(k)] = A\, R_{SS}\, A^H + R_{nn}$, where $R_{SS} = E[S(k)\, S^H(k)]$

$p_X = E[X(k)\, d^*(k)]$
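As an illustration, the following is a minimal numerical sketch of the temporal-reference weight computation (Python/NumPy assumed; the array geometry, training sequence, and noise level are illustrative assumptions, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 1000                 # 8-element array, K snapshots (assumed values)
theta = np.deg2rad(30)         # assumed direction of the SOI

# Steering vector of a half-wavelength-spaced uniform linear array (assumption)
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

d = np.sign(rng.standard_normal(K)) + 0j   # known BPSK training sequence d(k)
noise = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
X = np.outer(a, d) + 0.1 * noise           # array snapshots X(k), stacked as M x K

# Sample estimates of R_XX and p_X, then the Wiener-Hopf weights
Rxx = (X @ X.conj().T) / K                 # R_XX = (1/K) sum X(k) X^H(k)
px = (X @ d.conj()) / K                    # p_X  = (1/K) sum X(k) d*(k)
W = np.linalg.solve(Rxx, px)               # W = R_XX^{-1} p_X

y = W.conj() @ X                           # beamformer output y(k) = W^H X(k)
print(np.mean(np.abs(d - y) ** 2))         # mean squared error, small after training
```

The later sketches in this section reuse M, K, X, d, Rxx, and px from this block.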
Example of reference signal:
CDMA: spreading code of the user
GSM: training sequence in each frame (a frame consists of 8 time slots, numbered 0 to 7; each slot includes a 26-bit training sequence)
Generally, the process of generating the reference signal is very specific and system dependent
The reference signal does not need to be an exact replica of the desired signal; it just has to be correlated with the desired signal and uncorrelated with the interference
In general, to generate a reference signal you need to know only simple information such as the frequency and type of modulation of the desired signal
Wiener and steepest descent methods
-In non-blind algorithms, the BF in the RX uses the information of the training signal to compute the optimal weight vector Wopt
-The optimum weight vector Wopt is determined on the basis of minimizing the mean squared error between the desired signal d(t) and the array output y(t)
-There are two methods of calculating Wopt before performing the adaptive algorithms: the Wiener-Hopf method (solution or equation) and the steepest descent method
i.e., the Wiener and steepest descent methods are based on minimizing the mean squared error and are explained in the following:
[Figure: adaptive beamformer block diagram: element signals x1(t), x2(t), …, xM(t) are multiplied by the weights and summed to form the output y(t), which is compared with the desired signal d(t)]
o Wiener solution (method, equation)
-Let y(k) and d(k) denote the sampled signals of y(t) and d(t) at time instant k, respectively. Then the error signal is given by

$e(k) = d(k) - y(k)$   (1)

-The mean squared error is defined by the cost function

$J = E[|e(k)|^2]$   (2)

-Substituting eqn (1) in eqn (2), with y(k) given by $y(k) = W^H X(k)$, one gets (y(k), d(k), e(k) are single-element, 1 x 1, quantities):

$J = E[|d(k) - y(k)|^2]$
$\;\; = E[\{d(k) - y(k)\}\{d(k) - y(k)\}^*]$
$\;\; = E[\{d(k) - W^H X(k)\}\{d(k) - W^H X(k)\}^*]$
$\;\; = E[|d(k)|^2 - d(k)\, X^H(k)\, W - W^H X(k)\, d^*(k) + W^H X(k)\, X^H(k)\, W]$
$\;\; = E[|d(k)|^2] - p_X^H W - W^H p_X + W^H R_{XX} W$
[Figure: the same beamformer diagram with the error signal e(t) formed from the desired signal d(t) and the output y(t)]
$R_{XX}$ is the M x M autocorrelation matrix of the array output vector X(k):

$R_{XX} = E[X(k)\, X^H(k)]$

$p_X$ is the M x 1 cross-correlation vector between the array output vector X(k) and the desired signal d(k):

$p_X = E[X(k)\, d^*(k)] = \begin{bmatrix} E\{x_1(k)\, d^*(k)\} \\ E\{x_2(k)\, d^*(k)\} \\ \vdots \\ E\{x_M(k)\, d^*(k)\} \end{bmatrix}$

-The cost function J attains its minimum when all the elements of its gradient vector are simultaneously zero, i.e., when the mean squared error J is minimized, the gradient vector will be equal to an M x 1 null vector
-The gradient vector of J is defined by $\nabla(J) = 2\, \partial J / \partial W^*$, where $\partial/\partial W^*$ denotes the conjugate derivative with respect to the complex vector W
-Recalling the cost function

$J = E[|d(k)|^2] - p_X^H W - W^H p_X + W^H R_{XX} W$

the gradient is

$\nabla(J) = 2\,\frac{\partial J}{\partial W^*} = 0 - 2\, p_X + 2\, R_{XX} W$   (3)

-The cost function attains its minimum at

$\nabla(J)\big|_{W = W_{opt}} = -2\, p_X + 2\, R_{XX} W_{opt} = 0$   (4)

so that $p_X = R_{XX} W_{opt}$, i.e.

$W_{opt} = R_{XX}^{-1}\, p_X$   (5)

-The minimum mean squared error is $J_{min} = J\big|_{W = W_{opt}}$:

$J_{min} = \sigma_d^2 - p_X^H W_{opt} - W_{opt}^H p_X + W_{opt}^H R_{XX} W_{opt}$

Substituting for $W_{opt}$ and $p_X$ from eqn (5),

$J_{min} = \sigma_d^2 - W_{opt}^H R_{XX} W_{opt}$   (6)

$\sigma_d^2$ is the power of the desired signal
$W_{opt}^H R_{XX} W_{opt}$ is the output power of the array at $W_{opt}$
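A quick numeric check of eqns (4) and (6), continuing the hypothetical simulation from the first sketch (Rxx, px, and d are the names defined there):

```python
# Verify that the gradient vanishes at Wopt (eqn (4)) and evaluate
# Jmin = sigma_d^2 - Wopt^H Rxx Wopt (eqn (6)).
Wopt = np.linalg.solve(Rxx, px)
grad = -2 * px + 2 * Rxx @ Wopt                    # should be ~0 at Wopt
sigma_d2 = np.mean(np.abs(d) ** 2)                 # power of the desired signal
Jmin = (sigma_d2 - Wopt.conj() @ Rxx @ Wopt).real  # eqn (6)
print(np.linalg.norm(grad), Jmin)                  # ~0 and a small value >= 0
```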
o Steepest descent method
-It overcomes the matrix inversion, which is computationally intensive and can lead to an unstable result
-Wopt is calculated in a recursive way. It begins with an initial value W(0) for the weight vector, which is chosen arbitrarily; typically, W(0) is set equal to a column vector of an identity matrix, e.g.

$W(0) = [1, 0, \dots, 0]^T$

-The gradient vector of the cost function, $\nabla(J(W(k)))$, is computed at time k, i.e., at the kth iteration
-The next guess of the weight vector is computed by making a change in the initial or present guess in a direction opposite to that of the gradient vector:

$W(k+1) = W(k) - \frac{1}{2}\,\mu\,\nabla(J(W(k)))$   (7)

where μ is a positive real-valued constant that is referred to as the step size parameter or weighting constant

[Figure: beamformer diagram with error signal e(t), alongside a sketch of the cost surface J(W) showing the descent from W(0), with the step size μ selecting W(k+1)]
-Substituting for $\nabla(J(W))$ from eqn (3) in eqn (7) we get

$W(k+1) = W(k) + \mu\,[p_X - R_{XX}\, W(k)]$   (8)
$\;\;\;\;\;\;\;\;\;\;\;\; = W(k) + \mu\,\{E[X(k)\, d^*(k)] - E[X(k)\, X^H(k)]\, W(k)\}$
$\;\;\;\;\;\;\;\;\;\;\;\; = W(k) + \mu\, E[X(k)\,\{d(k) - W^H(k)\, X(k)\}^*]$

$W(k+1) = W(k) + \mu\, E[X(k)\, e^*(k)]$   (9)

so that, comparing (8) and (9) with (3) and (7), the gradient can be written as

$\nabla(J(W(k))) = \frac{2}{\mu}\,[W(k) - W(k+1)] = -2\, E[X(k)\, e^*(k)]$   (10)

-The iteration process is repeated until the algorithm converges onto the optimal value of the weight vector Wopt, which would satisfy

$J(W_{opt}) \le J(W)$ for all $W$   (11)

-The stability of this method depends on μ and $R_{XX}$. For stability and convergence of this algorithm,

$0 < \mu < \frac{2}{\lambda_{max}}$   (12)

where $\lambda_{max}$ is the largest eigenvalue of $R_{XX}$. It would indeed converge to the optimum Wiener solution
-Assuming there is a single minimum, it is logical to consider that successive corrections to the weight vector in the direction of the negative of the gradient vector should eventually lead to the minimum mean squared error Jmin, at which the weight vector assumes its optimum value Wopt
-In reality, however, exact measurements of the gradient vector are not possible, since this would require prior knowledge of $R_{XX}$ and $p_X$; consequently, the gradient vector must be estimated from the available data, i.e., the weight vector is updated in accordance with an adaptive algorithm that adapts to the incoming data
EX: two-element array (see the sketch below)
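Since the example itself is not reproduced here, the following is a minimal, self-contained steepest descent sketch for a hypothetical two-element array (all numeric values are assumptions):

```python
import numpy as np

# Hypothetical two-element statistics (assumed, not from the slides)
Rxx2 = np.array([[2.0, 0.5], [0.5, 2.0]], dtype=complex)  # R_XX
px2 = np.array([1.0, 0.8], dtype=complex)                 # p_X

lam_max = np.max(np.linalg.eigvalsh(Rxx2))
mu = 1.0 / lam_max                # satisfies 0 < mu < 2/lambda_max, eqn (12)

W = np.array([1.0, 0.0], dtype=complex)   # W(0): a column of the identity matrix
for k in range(100):
    grad = -2 * px2 + 2 * Rxx2 @ W        # gradient, eqn (3)
    W = W - 0.5 * mu * grad               # update, eqn (7)

print(W, np.linalg.solve(Rxx2, px2))      # W converges to Wopt = R_XX^{-1} p_X
```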
Least Mean Squares Algorithm (LMS)
A significant advantage of the LMS algorithm is its simplicity. It does not require measurements of the relevant correlation functions, nor does it require matrix inversion
-It uses the steepest descent method, but substitutes the expected value in eqn (10), $\nabla(J(W)) = -2\, E[X(k)\, e^*(k)]$, by the instantaneous estimate $\nabla(J(W)) = -2\, X(k)\, e^*(k)$
-Starting from eqn (7):

$W(k+1) = W(k) - \frac{1}{2}\,\mu\,[\nabla(J(W(k)))]$
$\;\;\;\;\;\;\;\;\;\;\;\; = W(k) + \frac{1}{2}\,\mu\,[2\, X(k)\, e^*(k)]$
$\;\;\;\;\;\;\;\;\;\;\;\; = W(k) + \mu\, X(k)\,[d(k) - W^H(k)\, X(k)]^*$

$W(k+1) = W(k) + \mu\, X(k)\, e^*(k)$   (13)
-Hence the algorithm is

$y(k) = W^H(k)\, X(k)$
$e(k) = d(k) - y(k)$
$W(k+1) = W(k) + \mu\, X(k)\, e^*(k)$

[Figure: LMS block diagram: the array antenna produces the array output vector X(k); it is weighted by W(k) and summed to give the output signal y(k); the error signal e(k) = d(k) - y(k) is formed from the desired signal d(k); the update term μX(k)e*(k) is added to W(k-1) through a delay element to produce W(k)]
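A minimal LMS sketch (it reuses the simulated snapshots X and training sequence d from the first snippet; the step size is an assumption):

```python
# LMS, eqn (13): W(k+1) = W(k) + mu * X(k) * e*(k)
mu = 0.01                                  # assumed step size
W = np.zeros(M, dtype=complex)             # W(0)
for k in range(K):
    xk = X[:, k]                           # array snapshot X(k)
    y = W.conj() @ xk                      # y(k) = W^H X(k)
    e = d[k] - y                           # e(k) = d(k) - y(k)
    W = W + mu * xk * np.conj(e)           # weight update, eqn (13)
```

After the loop, W should be close to the Wiener solution computed in the first sketch.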
The response of the LMS algorithm is determined by three principal factors: μ, the number of weights, and the eigenvalues of $R_{XX}$ of the input data
-The disadvantages of the LMS algorithm are:
•It must go through many iterations before satisfactory convergence is achieved (slower convergence)
•It may not track the desired signal in a satisfactory manner if the signal characteristics change rapidly

Effect of step size parameter μ
•If μ is too small:
-Convergence is slow and we will have the overdamped case
-If the convergence is slower than the changing angles of arrival, it is possible that the adaptive array cannot acquire the signal of interest fast enough to track the changing signal
•If μ is too large:
-The LMS algorithm will overshoot the optimum weights of interest and we have the underdamped case. The weights will oscillate about the optimum weights but will not accurately track the desired solution
It is therefore imperative to choose a step size in a range that ensures convergence, as in the short check below
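A short sketch of bounding μ from the eigenvalues of the estimated correlation matrix, per eqn (12) (Rxx is from the first snippet; the safety margin is an assumption):

```python
# Keep mu inside the stability bound 0 < mu < 2/lambda_max, eqn (12)
lam_max = np.max(np.linalg.eigvalsh(Rxx))
mu_bound = 2.0 / lam_max
mu = 0.1 * mu_bound          # assumed margin well below the bound
print(mu_bound, mu)
```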
Example: array of 8 elements, θo = 30°, θinterference = -60°
[Figure slides: LMS simulation results for this example]
Direct Matrix Inversion (DMI) or Sample Matrix Inversion (SMI) Algorithm
It uses K samples X(k), k = 1, 2, …, K, of the array signals to get an estimate of $R_{XX}$ by means of a simple averaging scheme

$\hat{R}_X(k) = \frac{1}{k}\sum_{i=1}^{k} X(i)\, X^H(i)$   (14)

where $\hat{R}_X(k)$ denotes the estimate at the kth instant of time and X(k) denotes the array signal sample, also known as the array snapshot, at the kth instant of time. The estimate can be written recursively as

$\hat{R}_X(k) = \frac{(k-1)\,\hat{R}_X(k-1) + X(k)\, X^H(k)}{k}$   (15)

The weight vector is obtained from the estimates as

$W(k+1) = \hat{R}_X^{-1}(k+1)\,\hat{p}_X(k+1)$   (16)

$\hat{p}_X(k) = \frac{1}{k}\sum_{i=1}^{k} X(i)\, d^*(i)$   (17)

The process of estimating $\hat{R}_X(k)$ may be combined with updating the inverse $\hat{R}_X^{-1}(k)$ from the array signal samples using the matrix inversion lemma as follows:

$\hat{R}_X^{-1}(k) = \hat{R}_X^{-1}(k-1) - \frac{\hat{R}_X^{-1}(k-1)\, X(k)\, X^H(k)\,\hat{R}_X^{-1}(k-1)}{1 + X^H(k)\,\hat{R}_X^{-1}(k-1)\, X(k)}$

with the initialization $\hat{R}_X^{-1}(0) = \epsilon_0^{-1}\, I$ for a small positive constant $\epsilon_0$

As $k \to \infty$, $\hat{R}_X(k) \to R_{XX}$ and $W(k) \to W_{opt}$
-The advantages of the DMI (SMI) algorithm are:
•It is faster than LMS, because the time taken by the snapshots or blocks is less than the convergence time of LMS
-The disadvantages of the DMI (SMI) algorithm are:
•$R_{XX}$ may be ill-conditioned, which causes problems when it is inverted
•In the case of large arrays there is a challenge in the matrix inversion
•The sample matrix of the DMI algorithm is the average estimate of the correlation matrix of the array using K time samples (snapshots)
•Each snapshot (time sample) is called a block; k is the block number and K is the block length
•Since we use a K-length block of data, DMI is called a block adaptive approach. The adaptation of the weights is done block by block, as in the sketch below
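A minimal block-adaptive SMI sketch (it reuses X and d from the first snippet; the block length is an assumption):

```python
# SMI / DMI: estimate Rxx and px over one K-sample block, then invert once
Kb = 100                                   # assumed block length
Xb, db = X[:, :Kb], d[:Kb]                 # one block of snapshots
R_hat = (Xb @ Xb.conj().T) / Kb            # eqn (14) at k = Kb
p_hat = (Xb @ db.conj()) / Kb              # eqn (17) at k = Kb
W_smi = np.linalg.solve(R_hat, p_hat)      # eqn (16): weights with no iterations
```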
The SMI pattern is similar to the LMS pattern and was generated with no iterations. The time
taken by the total number of snapshots K is less than the time to convergence for the LMS
algorithm
Example: array of 8 elements, θo = 30°, θinterference = -60°
Recursive Least Squares algorithm (RLS)
Since the signal sources can change or slowly move with time, we might want to deemphasize the earliest data samples and emphasize the most recent ones. This can be accomplished by modifying the estimated correlation matrix R and vector p as:

$\hat{R}_X(k) = \sum_{i=1}^{k} \alpha^{k-i}\, X(i)\, X^H(i)$

$\hat{p}_X(k) = \sum_{i=1}^{k} \alpha^{k-i}\, d^*(i)\, X(i)$

This is called a weighted estimate, and α is called the forgetting factor. It is a positive number between 0 and 1; with α = 1 we restore the ordinary least squares algorithm
Let us break up each summation into two terms: the summation for values up to i = k-1 and the last term for i = k:

$\hat{R}_X(k) = \alpha \sum_{i=1}^{k-1} \alpha^{k-1-i}\, X(i)\, X^H(i) + X(k)\, X^H(k) = \alpha\,\hat{R}_X(k-1) + X(k)\, X^H(k)$   (18)

$\hat{p}_X(k) = \alpha\,\hat{p}_X(k-1) + d^*(k)\, X(k)$   (19)

Thus, future values for the R and p estimates can be found using previous values
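A short sketch of the recursive updates (18) and (19) (it reuses M, K, X, and d from the first snippet; α and the initialization are assumptions):

```python
# Recursive weighted estimates, eqns (18) and (19)
alpha = 0.99                               # assumed forgetting factor
R_hat = 1e-3 * np.eye(M, dtype=complex)    # small initial estimate (assumption)
p_hat = np.zeros(M, dtype=complex)
for k in range(K):
    xk = X[:, k]
    R_hat = alpha * R_hat + np.outer(xk, xk.conj())   # eqn (18)
    p_hat = alpha * p_hat + np.conj(d[k]) * xk        # eqn (19)
W_rls = np.linalg.solve(R_hat, p_hat)      # weights from the weighted estimates
```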
-The RLS algorithm can be described by the following equations:

$W(k) = W(k-1) + G(k)\,[d^*(k) - X^H(k)\, W(k-1)]$

$G(k) = \hat{R}_X^{-1}(k)\, X(k)$

$\hat{R}_X^{-1}(k) = \frac{1}{\alpha}\left[\hat{R}_X^{-1}(k-1) - \frac{\hat{R}_X^{-1}(k-1)\, X(k)\, X^H(k)\,\hat{R}_X^{-1}(k-1)}{\alpha + X^H(k)\,\hat{R}_X^{-1}(k-1)\, X(k)}\right]$

-Its advantage over LMS is that it is faster than the simple LMS
-Generally, it is easy to update the inverse of the correlation matrix
Recursive Least Squares algorithm (RLS) "ANOTHER APPROACH"
-It uses the method of least squares to adjust the weight vector W(k)
-We choose the weight vector W(k) so as to minimize a cost function that consists of the sum of squared errors over a time window
-In the exponentially weighted RLS algorithm, at time k, the weight vector is chosen to minimize the cost function

$J(k) = \sum_{i=1}^{k} \lambda^{k-i}\, |e(i)|^2$   (18)

where

$e(i) = d(i) - y(i) = d(i) - W^H(k)\, X(i)$

and λ is a positive constant close to, but less than, one
k is the iteration number, i.e., the observation time of a sample snapshot of the vector X(k)
-Notice that, with the arrival of new data samples X(k), the estimates are updated recursively by introducing the weighting factor $\lambda^{k-i}$ into the sum of error squares in eqn (18)
-Also, the earlier samples are weakened by this weighting factor (the forgetting factor)
-The RLS algorithm is obtained from minimizing eqn (18) by expanding the magnitude squared and applying the matrix inversion lemma
-The RLS algorithm can be described by the following equations:

$K(k) = \frac{\lambda^{-1}\, P(k-1)\, X(k)}{1 + \lambda^{-1}\, X^H(k)\, P(k-1)\, X(k)}$

$\xi(k) = d(k) - W^H(k-1)\, X(k)$

$W(k) = W(k-1) + K(k)\,\xi^*(k)$

$P(k) = \lambda^{-1}\, P(k-1) - \lambda^{-1}\, K(k)\, X^H(k)\, P(k-1)$

$P(0) = \delta^{-1}\, I$

where I is the M x M identity matrix and δ is a small positive constant called the regularization parameter, which is assigned a small value for high SNR and a large value for low SNR
-Its advantage is that it is faster than the simple LMS algorithm. This improvement in performance, however, is achieved at the expense of a large increase in computational complexity
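A minimal sketch of this exponentially weighted RLS recursion (it reuses M, K, X, and d from the first snippet; λ and δ are assumed values):

```python
# Exponentially weighted RLS, following the equations above
lam, delta = 0.99, 0.01                    # assumed lambda and delta
P = np.eye(M, dtype=complex) / delta       # P(0) = I / delta
W = np.zeros(M, dtype=complex)
for k in range(K):
    xk = X[:, k]
    Px = P @ xk
    Kk = (Px / lam) / (1 + (xk.conj() @ Px) / lam)   # gain vector K(k)
    xi = d[k] - W.conj() @ xk                        # a priori error xi(k)
    W = W + Kk * np.conj(xi)                         # weight update
    P = (P - np.outer(Kk, xk.conj() @ P)) / lam      # inverse-correlation update
```

After the loop, W should again be close to the Wiener weights from the first sketch.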