EE401: Advanced Communication Theory
Professor A. Manikas, Chair of Communications and Array Processing
Imperial College London
Principles of Decision and Detection Theory
Prof. A. Manikas (Imperial College) EE.401: Detection Theory v.16 1 / 84
Table of Contents

1 Basic Decision Theory
    Hypothesis Testing in an M-ary Comm System
    Terminology
    M-ary Decision Criteria
    Examples
        Example: Gaussian Likelihood Functions (LFs)
        Example: Signal + Gaussian Noise
        Example: Rectangular and Gaussian LFs
2 Basic Concepts on Optimum Receivers
    The Concept of "Continuous Sampling"
    Optimum M-ary Receivers
    Gaussian Multivariable Distribution
    Likelihood Functions (Continuous Sampling)
3 Optimum Receivers using Matched Filters
    Known Signals in White Noise
    Signals and Matched Filters in Frequency Domain
4 Optimum Receivers in Non-White Noise
    Known Signals in Non-White Noise
    A Special Case
    Output SNR
    A Special Case: Output SNR (Signal plus white Noise)
    Approximation To Matched Filter Solution
    Whitening Filter
5 Optimum M-ary Rx based on Signal Constellation
    Introduction - Orthogonal Signals
    The "Approximation Theorem" of an Energy Signal
    Comments on the "Approximation" Theorem
    Gram-Schmidt Orthogonalization
    M-ary Signals: Energy and Cross-Correlation
    Signal Constellation
    Distance between two M-ary signals
    Examples of Signal Constellation Diagrams
    Examples of Plots of Decision Variables (QPSK-Receiver)
    Optimum M-ary Receiver using the Signal-Vectors
    Encoding M-ary Signals
    Important Relationships Between pe and pe,cs
6 Appendices
    Appendix-1: Proof of Equation-26
    Appendix-2: Walsh-Hadamard Orthogonal Sets and Signals
    Appendix-3: IS95 - Walsh Function of order-64
Basic Decision Theory
Detection Theory is concerned with determining the existence of a signal in the presence of noise, and can be classified as follows:

[Figure: classification of detection schemes.]
N.B.:
1. Adaptive or Learning:
   - system parameters change as a function of the input;
   - difficult to analyze;
   - mathematically complex.
2. Non-Parametric:
   - fairly constant level of performance, because these are based on general assumptions about the input pdf;
   - easier to implement.
3. Consider a parametric detector which has been designed for a Gaussian-pdf input:
   - if the input is actually Gaussian: then the parametric detector's performance is significantly better;
   - if the input is not Gaussian but still symmetric: then the non-parametric detector's performance may be much better than that of the parametric detector.
Hypothesis Testing in an M-ary Comm System
A Hypothesis: a statement of a possible condition.

In an M-ary Comm. System we have M hypotheses. The received signal is r(t) = si(t) + n(t), and the decision rule must determine which signal is present (decision Di),

where si(t) = one of M signals (channel symbols).

Hypotheses:
H1 : s1(t) is present (i.e. is being sent) with probability Pr(H1)
H2 : s2(t) is present with probability Pr(H2)
...
HM : sM(t) is present with probability Pr(HM)
The statistics of the observed signal

r(t) = si(t) + n(t)    (1)

are affected by the presence of s1(t), or s2(t), ..., or sM(t).

If si(t), ∀i, are known, then their distributions are known, and the problem reduces to making a decision about one of the M distributions after having observed r(t).

This is called "Hypothesis Testing".
Terminology
A priori probabilities :
Pr(H1),Pr(H2), ...,Pr(HM )
(these are calculated BEFORE the experiment is performed)
A posteriori probabilities:

Pr(H1/r), Pr(H2/r), ..., Pr(HM/r)

That is, if r = observation variable, then we have M conditional probabilities

Pr(Hi/r), ∀i ∈ [1, ..., M]

known as A POSTERIORI PROBABILITIES (since these are calculated AFTER the experiment is performed).
Pr(Hi/r), ∀i: difficult to find. A more natural approach is to find

Pr(r/Hi), ∀i    (2)

since, in general, pdf_r/Hi, ∀i,
- are known, or
- can be found.

Likelihood Functions (LF):

pdf_r/H1(r), pdf_r/H2(r), ..., pdf_r/HM(r)    (3)

i.e. the M conditional probability density functions pdf_r/Hi(r), ∀i, are known as "Likelihood Functions".

Likelihood Ratio (LR): the ratio

pdf_r/Hi(r) / pdf_r/Hj(r),   for i ≠ j    (4)

is known as the "Likelihood Ratio".
MAP Criterion

DECISION - choose hypothesis Hi (i.e. Di):

Di : iff Pr(Hi/r) > Pr(Hj/r), ∀j : j ≠ i    (5)

D1 : iff Pr(H1/r) > Pr(Hj/r), ∀j ≠ 1
D2 : iff Pr(H2/r) > Pr(Hj/r), ∀j ≠ 2
...
DM : iff Pr(HM/r) > Pr(Hj/r), ∀j ≠ M

This detection rule is known as the maximum a posteriori probability (MAP) criterion.
Equivalent MAP expressions:

Di : iff Pr(Hi) × pdf_r/Hi(r) > Pr(Hj) × pdf_r/Hj(r); ∀j : j ≠ i

⇕

Di : Gi = max over ∀j of { Gj }, where Gj ≜ Pr(Hj) × pdf_r/Hj(r)

Note:
- The above can be easily proven using the Bayes rule:

Pr(Hi/r) = pdf_r/Hi(r)·Pr(Hi) / pdf_r(r)    (6)

- Gj is known as the "decision variable".
- In this topic the symbol G will be used to represent a "decision variable".
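As a concrete illustration of the decision variables Gj = Pr(Hj) × pdf_r/Hj(r), here is a minimal Python sketch (the notes themselves contain no code); the scalar Gaussian likelihoods, the means, the priors and σ below are illustrative assumptions, not values from the notes.

```python
import math

def gaussian_pdf(r, mean, sigma):
    """Scalar Gaussian likelihood pdf_r/Hi(r)."""
    return math.exp(-0.5 * ((r - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def map_decision(r, priors, means, sigma):
    """Index of the maximum decision variable G_j = Pr(H_j) * pdf_r/Hj(r)."""
    G = [p * gaussian_pdf(r, m, sigma) for p, m in zip(priors, means)]
    return max(range(len(G)), key=lambda j: G[j])

# Equal priors: MAP reduces to picking the nearest mean
print(map_decision(0.2, [0.5, 0.5], [0.0, 1.0], 0.5))   # -> 0
# Unequal priors shift the decision towards the a priori likely hypothesis:
print(map_decision(0.55, [0.9, 0.1], [0.0, 1.0], 0.5))  # -> 0, although 0.55 is nearer to mean 1.0
```

The second call shows the role of the a priori probabilities: an ML detector would choose hypothesis 1 (nearest mean), but the MAP decision variable is weighted by Pr(Hj).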
Correlation between two analogue energy signals r(t) and s(t) of duration Tcs:

corr ≜ ∫_0^Tcs r(t)·s(t)·dt    (7)

Correlation between the discretised versions r and s of the signals r(t) and s(t):

corr ≜ r^T·s    (8)

where

r = [r1, r2, ..., rL]^T    (9)

and

s = [s1, s2, ..., sL]^T    (10)
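A small numerical sketch of how the discrete correlation r^T·s of Equation-8 approximates the integral of Equation-7 (Python is used here for illustration; the sample count and waveforms are assumed examples):

```python
import numpy as np

# Discretised correlation (Eq. 8): r^T s, scaled by the sample spacing dt,
# approximates the integral of Eq. 7.
Tcs, L = 1.0, 1000
t = np.linspace(0.0, Tcs, L, endpoint=False)
dt = Tcs / L

r = np.cos(2 * np.pi * 3 * t)      # received waveform (example)
s = np.cos(2 * np.pi * 3 * t)      # reference waveform

corr_discrete = r @ s * dt         # r^T s scaled by dt
# the analytic value of ∫ cos²(6πt) dt over 3 full periods is Tcs/2
print(corr_discrete)               # ≈ 0.5
```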
M-ary Decision Criteria
Consider the sets of parameters P1 and P2, where:
- P1 denotes the set of a priori probabilities Pr(H1), Pr(H2), ..., Pr(HM);
- P2 represents the set of costs/weights Cij, ∀i,j, associated with the transition probabilities Pr(Di|Hj) (i.e. one weight for every element of the channel transition matrix F - see EE303, Topic on "Comm Channels").

Estimate/identify the likelihood functions

pdf_r|H1(r), pdf_r|H2(r), ..., pdf_r|HM(r)
Decision: choose the hypothesis Hi (i.e. Di) with the maximum Gi(r), where Gi(r) depends on the chosen criterion as follows:
- BAYES Criterion
- Minimum Probability of Error (min(pe)) Criterion
- MAP Criterion
- MINIMAX Criterion
- Neyman-Pearson (N-P) Criterion
- Maximum Likelihood (ML) Criterion

Criterion          P1          P2          choose Hypothesis with max(Gj(r))
Bayes              known       known       Gj(r) ≜ weight_j × Pr(Hj) × pdf_r|Hj
min(pe) or MAP     known       unknown     Gj(r) ≜ Pr(Hj) × pdf_r|Hj
Minimax            unknown     known       Gj(r) ≜ weight_j × pdf_r|Hj
N-P                unknown     unknown     by solving a constrained optimisation problem
ML                 don't care  don't care  Gj(r) ≜ pdf_r|Hj
Notes:
- Note-1: if an approximate/initial solution is required, then any information about the sets of parameters P1 and/or P2 can be ignored. In this case the Maximum Likelihood (ML) Criterion should be used.
- Note-2:

weight_j ≜ Σ_{i=1, i≠j}^{M} (Cij − Cjj)    (11)

or (since the term for i = j is equal to zero) simply,

weight_j = Σ_{i=1}^{M} (Cij − Cjj)    (12)

- Note-3: sometimes, for convenience, Gj will be used (i.e. Gj ≜ Gj(r)) - i.e. the argument will be ignored.
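The weight computation of Equation-12 can be sketched in a few lines of Python (the 2×2 cost matrix below is a hypothetical example, with zero cost for correct decisions and unequal costs for the two kinds of error):

```python
import numpy as np

def bayes_weights(C):
    """weight_j = sum_i (C[i,j] - C[j,j])  (Equation-12)."""
    M = C.shape[0]
    return np.array([sum(C[i, j] - C[j, j] for i in range(M)) for j in range(M)])

# Hypothetical cost matrix C[i, j] = cost of deciding D_i when H_j is true
C = np.array([[0.0, 2.0],
              [1.0, 0.0]])
print(bayes_weights(C))   # [1. 2.]
```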
Decision Criteria: Mathematical Architecture
Examples
Example: Gaussian LFs

[Figure: two Gaussian likelihood functions pdf_r/Hj(r) and pdf_r/Hi(r) plotted versus r (Volts); the decision regions Dj and Di are separated by the threshold r_th,ji, and the overlap tails are marked area-A and area-B.]

By solving Gj(r) = Gi(r) the decision threshold r_th,ji can be estimated.

area-A = Pr(Dj|Hi) and area-B = Pr(Di|Hj)

pe,cs = Σ_{j=1}^{M} Σ_{i=1, i≠j}^{M} Pr(Dj, Hi) = symbol error probability/rate (SER)
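A numerical sketch of this example for the equal-prior, equal-variance case (the means and σ below are assumed values for illustration): solving Gj(r) = Gi(r) then gives the midpoint threshold, and the two shaded areas are Gaussian tail probabilities.

```python
import math

# Two equally likely hypotheses with Gaussian LFs N(m_j, sigma) and N(m_i, sigma).
m_j, m_i, sigma = 0.0, 2.0, 1.0
r_th = 0.5 * (m_j + m_i)                           # threshold r_th,ji (midpoint)

Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))  # Gaussian tail probability
area_A = Q((m_i - r_th) / sigma)                   # Pr(D_j | H_i)
area_B = Q((r_th - m_j) / sigma)                   # Pr(D_i | H_j)
print(r_th, area_A, area_B)                        # 1.0 and two equal tails ≈ 0.1587
```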
Example: Signal + Gaussian Noise
Example: Rectangular and Gaussian LFs
A discrete channel employs two equally probable symbols and the likelihood functions are given by the following expressions:

pdf_r|H0 = (1/2)·rect{r/2}

pdf_r|H1 = N(0, σ = 1/2)

If the detector employed has been designed in an optimum way, find the decision rule and model the above discrete channel.
Basic Concepts on Optimum Receivers
Receiver = Detector + Decision Device
Consider an M-ary communication model in which one of M signals si(t), for i = 1, 2, ..., M, is received in the time interval (0, Tcs) in the presence of white noise, i.e.

r(t) = { s1(t), or s2(t), ..., or sM(t) } + n(t),   0 ≤ t ≤ Tcs    (13)

where r(t) denotes the received (observable) signal.
The Concept of "Continuous Sampling"

Continuous Sampling helps to get a probabilistic description of a continuous signal:

1. Assume that L amplitude samples of the received signal r(t), 0 ≤ t ≤ Tcs, are available, i.e. r1, r2, ..., rL (with sampling frequency 1/Δt),

where rk = { s1k, or s2k, ..., or sMk } + nk;   k = 1, ..., L

or, in more compact form,

r = { s1, or s2, ..., or sM } + n

where
r = [r1, r2, ..., rL]^T
n = [n1, n2, ..., nL]^T
s_i = [si1, si2, ..., siL]^T
i.e. r = s_i + n   (L samples)
2. take the limit as L→ ∞, ∆t → 0, L× ∆t → Tcs
Optimum M-ary Receivers
Objective: to design a receiver which operates on r(t) and chooses one of the following M hypotheses:

H1 : r(t) = s1(t) + n(t)          H1 : r = s1 + n
H2 : r(t) = s2(t) + n(t)    or    H2 : r = s2 + n
...                               ...
HM : r(t) = sM(t) + n(t)          HM : r = sM + n

Optimum Decision Rule: depends on the likelihood functions pdf_r|Hi(r).

Optimum Receiver - Initial Conceptual Structure:

[Figure - OPTIMUM RECEIVER: r(t) = si(t) + n(t) is sampled every Δt; the receiver computes the M likelihood functions, compares the resulting decision variables and outputs the decision Di corresponding to the maximum.]
Assumptions:
- Noise: PSDni(f) = N0/2  ⇒  PSDn(f) = (N0/2)·rect{f/(2B)}

Therefore, the received signal is sampled at intervals

Δt = 1/(2B)    (14)

- Sampling: the samples are uncorrelated and statistically independent.

Example - ML Receiver with "Continuous Sampling": [Figure omitted.]
Gaussian Multivariable Distribution

Consider the Gaussian random vector r = [r1, r2, ..., rL]^T. Then

pdf(r) = (1/√((2π)^L·det(R_rr))) · exp{ −(1/2)·(r − μ_r)^T·R_rr^(−1)·(r − μ_r) }    (15)

where

mean = μ_r = E{r} = [E{r1}, E{r2}, ..., E{rL}]^T    (16)

cov(r) = R_rr = E{ (r − μ_r)·(r − μ_r)^T }    (17)
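A short Python check of Equation-15 (the dimensions and σ below are assumed for illustration): in the white-noise case R_rr = σ²·I_L, so the joint pdf factorises into a product of L scalar Gaussians.

```python
import numpy as np

def gaussian_vector_pdf(r, mu, R):
    """Evaluate the L-variate Gaussian pdf of Eq. 15 at the vector r."""
    L = len(r)
    d = r - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** L * np.linalg.det(R))
    return norm * np.exp(-0.5 * d @ np.linalg.solve(R, d))

sigma = 0.7
mu = np.zeros(3)
R = sigma**2 * np.eye(3)                    # white-noise covariance
r = np.array([0.1, -0.2, 0.3])

joint = gaussian_vector_pdf(r, mu, R)
# product of three independent scalar Gaussian pdfs
product = np.prod(np.exp(-r**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2))
print(np.isclose(joint, product))           # True
```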
If r = s + n then the mean μ_r and the covariance R_rr are given as follows:

μ_r = s   and   R_rr = R_nn    (18)

Proof:

E{r} = μ_r ≜ E{s + n} = E{s} + E{n} = E{s} = s    (19)

cov{r} ≜ R_rr = E{ (r − E{r})·(r − E{r})^T } = E{ (r − s)·(r − s)^T } = E{ n·n^T }

⇒ cov{r} = R_nn = σn²·I_L = N0·B·I_L    (20)

Note that:

if R_nn = σn²·I_L then det(R_nn) = (σn²)^L    (21)
Likelihood Functions (Continuous Sampling)

pdf_r/Hi(r): based on the above (for AWGN) and for "Continuous Sampling" we have

pdf_r/Hi(r) = (1/√(2πσn²))^L · exp{ −(r − s_i)^T·(r − s_i) / (2σn²) }    (22)

However, { σn² = N0·B and Δt = 1/(2B) }  ⇒  σn² = N0/(2·Δt)

Therefore

pdf_r/Hi(r) = (√(Δt/(π·N0)))^L · exp{ −(r − s_i)^T·(r − s_i)·Δt / N0 }

and for "Continuous Sampling", i.e. L → ∞, Δt → 0, L·Δt → Tcs, we have

pdf_r/Hi(r(t)) = const · exp{ −(1/N0)·∫_0^Tcs (r(t) − si(t))²·dt }    (23)
ML APPROACH using "Continuous Sampling":
Rule: choose the hypothesis Hi with the maximum likelihood pdf_r/Hi(r(t)), where

pdf_r/Hi(r(t)) = const · exp{ −(1/N0)·∫_0^Tcs (r(t) − si(t))²·dt }    (24)

for i = 1, ..., M.

MAP APPROACH using "Continuous Sampling":
This is a more general approach than the Maximum Likelihood.
Rule: choose the hypothesis Hi with the maximum Pr(Hi) × pdf_r/Hi(r(t)), where

pdf_r/Hi(r(t)) = const · exp{ −(1/N0)·∫_0^Tcs (r(t) − si(t))²·dt }    (25)

for i = 1, ..., M,

i.e.

max{ Pr(Hi) × pdf_r/Hi(r(t)) } = max{ ∫_0^Tcs r(t)·s*i(t)·dt + (N0/2)·ln(Pr(Hi)) − (1/2)·Ei }    (26)
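The decision variables of Equation-26 can be evaluated numerically with sampled waveforms, as in this Python sketch (the three sinusoidal symbols, the priors, N0 and the noise model with σn² = N0/(2Δt) are illustrative assumptions):

```python
import numpy as np

# G_i = ∫ r(t) s_i(t) dt + (N0/2) ln Pr(H_i) - E_i/2 (Eq. 26), with real symbols.
Tcs, L, N0 = 1.0, 2000, 0.02
t = np.linspace(0.0, Tcs, L, endpoint=False)
dt = Tcs / L

signals = [np.sin(2 * np.pi * k * t) for k in (1, 2, 3)]   # M = 3 symbols
priors = [1 / 3, 1 / 3, 1 / 3]

rng = np.random.default_rng(0)
sigma_n = np.sqrt(N0 / (2 * dt))               # sample variance N0/(2 dt), as in Eq. 22
r = signals[1] + sigma_n * rng.standard_normal(L)          # H2 is true

def G(i):
    corr = np.sum(r * signals[i]) * dt         # ∫ r(t) s_i(t) dt
    E_i = np.sum(signals[i] ** 2) * dt         # symbol energy
    return corr + 0.5 * N0 * np.log(priors[i]) - 0.5 * E_i

decision = max(range(3), key=G)
print(decision + 1)                            # -> 2: the receiver decides H2
```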
Equation-26 (for proof see Appendix-1) suggests that an M-ary receiver will have the following form:

[Figure - OPTIMUM M-ary RECEIVER (CORRELATION RECEIVER): r(t) is applied to M branches; branch i multiplies r(t) by si(t), integrates over (0, Tcs) and adds the bias DCi to form the decision variable Gi; the decision variables are compared and, if Gi = max, the decision Di is taken.]

where DCi ≜ (N0/2)·ln(Pr(Hi)) − (1/2)·Ei, with i = 1, 2, ..., M.
Note that if the a priori probabilities are equal, i.e.

Pr(H1) = Pr(H2) = ··· = Pr(HM)    (27)

and the signals have the same energy, i.e.

E1 = E2 = ··· = EM

then Equation-26 is simplified as follows:

max{ Pr(Hi) × pdf_r/Hi(r(t)) } = max{ pdf_r/Hi(r(t)) } = max{ ∫_0^Tcs r(t)·s*i(t)·dt }    (28)

N.B. - ML receiver: similar structure to MAP, but with DCi = −(1/2)·Ei
Optimum M-ary Receivers using Matched Filters
Known Signals in White Noise

Consider the output of one branch of the correlation receiver:

[Figure - CORRELATOR: r(t) = si(t) + n(t) is multiplied by si(t) and integrated over (0, Tcs); the result is the output at point-1.]

at point-1 = output = ∫_0^Tcs r(t)·si(t)·dt
In this section we will try to replace the correlator of a correlation receiver with a linear filter (known as a Matched Filter).

[Figure - MATCHED FILTER: r(t) = si(t) + n(t) is applied to a linear filter hi(t) (output: point-1), which is then sampled at t = Tcs (output: point-2).]

at point-1 = ∫_0^t r(u)·hi(t − u)·du

at point-2 = output = ∫_0^Tcs r(u)·hi(Tcs − u)·du

N.B.: compare the correlator's o/p (point-1 of the correlator) with the matched filter's o/p (point-2).
If we choose the impulse response of the linear filter as

hi(t) = si(Tcs − t),   0 ≤ t ≤ Tcs    (29)

then that linear filter, defined by the above equation, is called a Matched Filter. In that case,

at point-2 = ∫_0^Tcs r(u)·si(u)·du  ⇒  at point-2 = ∫_0^Tcs r(t)·si(t)·dt

N.B.: Output of Correlator = Output of Matched Filter, only at time t = Tcs.
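This equivalence is easy to verify on sampled waveforms, as in the Python sketch below (the signal is a random example): with hi[k] = si[L−1−k] as in Equation-29, the convolution evaluated at the last sample equals the correlator sum.

```python
import numpy as np

L = 500
rng = np.random.default_rng(1)
s_i = rng.standard_normal(L)            # known signal samples
r = s_i + 0.5 * rng.standard_normal(L)  # received = signal + noise

h_i = s_i[::-1]                         # matched filter: time-reversed signal (Eq. 29)
mf_out = np.convolve(r, h_i)            # filter output for all time instants
point2 = mf_out[L - 1]                  # matched-filter output sampled at t = Tcs
point1 = np.sum(r * s_i)                # correlator output

print(np.isclose(point1, point2))       # True
```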
Example:

[Figure: the impulse response hi(t) is obtained from si(t) by reversal in time about Tcs.]
CORRELATION RECEIVER:

[Figure - OPTIMUM M-ary RECEIVER (CORRELATION RECEIVER): as before, M correlator branches ∫_0^Tcs r(t)·si(t)·dt plus the biases DCi form the decision variables Gi; the maximum Gi gives the decision Di.]
or, MATCHED FILTER RECEIVER:

[Figure - OPTIMUM M-ary RECEIVER (MATCHED FILTER RECEIVER): M matched filters hi(t) = si(Tcs − t), i = 1, 2, ..., M, each sampled at t = Tcs, plus the biases DCi, form the decision variables Gi; the maximum Gi gives the decision Di.]
Signals and Matched Filters in Freq. Domain

Let h(t) = { s(Tcs − t) for 0 ≤ t ≤ Tcs; 0 elsewhere }

Time Domain → (FT) → Freq. Domain:

s(t) → S(f) = ∫_0^Tcs s(t)·e^(−j2πft)·dt

h(t) → H(f) = ∫_−∞^∞ h(t)·e^(−j2πft)·dt
            = ∫_0^Tcs s(Tcs − t)·e^(−j2πft)·dt
            = ∫_0^Tcs s(u)·e^(−j2πf(Tcs−u))·du
            = e^(−j2πfTcs) · ∫_0^Tcs s(u)·e^(+j2πfu)·du
            = e^(−j2πfTcs) · S*(f)

therefore

H(f) = e^(−j2πfTcs) · S*(f)    (30)
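A discrete analogue of Equation-30 can be checked with the DFT (the random real signal below is an assumed example): the DFT of the time-reversed signal h[n] = s[N−1−n] is a linear-phase factor times the conjugate spectrum.

```python
import numpy as np

N = 64
rng = np.random.default_rng(2)
s = rng.standard_normal(N)               # real signal samples

h = s[::-1]                              # time-reversed signal (matched filter)
H = np.fft.fft(h)
k = np.arange(N)
# discrete counterpart of H(f) = exp(-j2πfTcs)·S*(f)
H_pred = np.exp(-2j * np.pi * k * (N - 1) / N) * np.conj(np.fft.fft(s))

print(np.allclose(H, H_pred))            # True
```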
Known Signals in Non-White Noise

Now let us assume that the noise n(t) is described as follows:

n(t): zero mean, with Rnn(τ) not necessarily white or Gaussian    (31)

Let

r(t) = s(t) + n(t), with s(t) completely known    (32)

Objective: design a linear filter h(t) such that

SNRout at t = Tcs = max    (33)

[Figure - MATCHED FILTER: r(t) = s(t) + n(t) is applied to a linear filter h(t) whose output is sampled at t = Tcs.]
output = ∫_0^Tcs h(τ)·r(Tcs − τ)·dτ,   where r(Tcs − τ) = s(Tcs − τ) + n(Tcs − τ)

       = ∫_0^Tcs h(τ)·s(Tcs − τ)·dτ + ∫_0^Tcs h(τ)·n(Tcs − τ)·dτ
         (the first term ≜ s(Tcs), the second term ≜ n(Tcs))

∴ SNRout = E{s(Tcs)²}/E{n(Tcs)²} = s(Tcs)²/E{n(Tcs)²}    (34)

∴ max over h(t) of SNRout = { min over h(t) of E{n(Tcs)²}, with s(Tcs) = constant }    (35)

∴ min over h(t) of ξ, where ξ = E{n(Tcs)²} − λ·s(Tcs)    (36)

i.e.

optimum impulse response ≜ hopt = arg min over h(t) of ξ    (37)

where ξ = E{n(Tcs)²} − λ·s(Tcs)
The solution of Equation-37 is the solution of the following FREDHOLM INTEGRAL EQUATION of the 1st KIND:

∫_0^Tcs hopt(z)·Rnn(τ − z)·dz = s(Tcs − τ);   0 ≤ τ ≤ Tcs    (38)

GENERAL EXPRESSION FOR MATCHED FILTERS

The proof of the above result (Equation-38) is not important. What is really important is that the proof does not involve any assumption about the pdf or PSD of the noise (i.e. it does not assume that the noise is white Gaussian).
A Special Case
In the case of white noise the above equation (Equation-38) can be solved easily as follows:

∫_0^Tcs hopt(z) · (N0/2)·δ(τ − z) · dz = s(Tcs − τ),   since Rnn(τ − z) = (N0/2)·δ(τ − z)    (39)

⇒ hopt(τ)·(N0/2) = s(Tcs − τ)  ⇒  hopt(τ) = (2/N0)·s(Tcs − τ)    (40)

Be careful - the SNR is not influenced by the factor 2/N0.
Output SNR
Consider next that you have obtained the optimum impulse response hopt(t) (by using Equation-38). This impulse response provides the maximum output-SNR, which can be estimated as follows:

[Figure - MATCHED FILTER: r(t) = s(t) + n(t) is applied to hopt(t) and the output is sampled at t = Tcs.]

Important Question: SNRout = ?
We can answer this question as follows:
o/p = ∫_0^Tcs hopt(τ)·r(Tcs − τ)·dτ,   where r(Tcs − τ) = s(Tcs − τ) + n(Tcs − τ)

    = ∫_0^Tcs hopt(τ)·s(Tcs − τ)·dτ + ∫_0^Tcs hopt(τ)·n(Tcs − τ)·dτ    (41)
      (the first term ≜ s(Tcs), the second term ≜ n(Tcs))

therefore,

SNRout,max = E{s(Tcs)²}/E{n(Tcs)²} = s(Tcs)²/E{n(Tcs)²} = ...    (42)

... (for you) ...

i.e.

SNRout,max = s(Tcs)²/E{n(Tcs)²} = ∫_0^Tcs hopt(z)·s(Tcs − z)·dz    (43)
A Special Case: Output SNR (Signal plus white Noise)

For white noise we have seen that hopt(z) = (2/N0)·s(Tcs − z).

Then Equation-43 becomes:

SNRout,max = ∫_0^Tcs (2/N0)·s(Tcs − z) · s(Tcs − z) · dz = (2/N0)·∫_0^Tcs s²(Tcs − z)·dz

⇒ SNRout,max = 2E/N0, for white noise    (44)

Provided that the filter is matched to the signal, it is obvious from Equation-44 that SNRout,max is

≠ f{signal waveform}
≠ f{signal bandwidth}
≠ f{peak power}
≠ f{time duration}
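The waveform-independence of Equation-44 can be demonstrated numerically, as in this Python sketch (the two waveforms, N0 and the discrete white-noise model with σn² = N0/(2Δt) are illustrative assumptions): a sinusoid and a bipolar pulse, scaled to the same energy E, give the same output SNR of 2E/N0.

```python
import numpy as np

Tcs, L, N0 = 1.0, 1000, 0.2
t = np.linspace(0.0, Tcs, L, endpoint=False)
dt = Tcs / L

def snr_out(s):
    """Analytic output SNR of the filter matched to s, in white noise."""
    h = s[::-1]                                  # matched filter (gain irrelevant)
    sig = (np.sum(h * s[::-1]) * dt) ** 2        # s(Tcs)^2 = (∫ h(τ)s(Tcs-τ)dτ)^2
    noise = (N0 / 2) * np.sum(h ** 2) * dt       # E{n(Tcs)^2} for white noise
    return sig / noise

E = 1.0
s1 = np.sin(2 * np.pi * 5 * t)
s1 *= np.sqrt(E / (np.sum(s1**2) * dt))          # normalise to energy E
s2 = np.where(t < 0.5, 1.0, -1.0)
s2 *= np.sqrt(E / (np.sum(s2**2) * dt))

print(snr_out(s1), snr_out(s2), 2 * E / N0)      # all ≈ 10.0
```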
Approximation To Matched Filter Solution
We have seen that the matched filter can be found by solving the following integral equation:

∫_0^Tcs hopt(z)·Rnn(τ − z)·dz = s(Tcs − τ);   0 ≤ τ ≤ Tcs

MATCHED FILTER GENERAL EQUATION (a Fredholm integral equation of the 1st kind)

However, it is very difficult to solve the above equation in the general case.

N.B.: the solution is easy when the noise is white.
In order to find an approximation to the matched filter solution, i.e.

w0(z) ≈ hopt(z)

we need to relax the condition 0 ≤ τ ≤ Tcs to −∞ ≤ τ ≤ ∞. Then

∫_−∞^∞ w0(z)·Rnn(τ − z)·dz = s(T0 − τ)    (45)

(where w0(z) maximises the SNR at some time T0)

However, the above integral is a convolution integral:

∴ W0(f)·PSDn(f) = S*(f)·e^(−j2πfT0)    (46)

⇒ W0(f) = S*(f)·e^(−j2πfT0) / PSDn(f)    (47)
Whitening Filter
r(t) = s(t) + n(t), with s(t) known and n(t) non-white, is the i/p of the whitening filter; at its o/p the noise is white.

The whitening filter:
- attenuates the regions in the frequency domain in which the noise is LARGE;
- accentuates the regions in the frequency domain in which the noise is LOW.

Combined Whitening Filter-Matched Filter transfer function:

L(f) = S*(f)·e^(−j2πfT0) / PSDn(f)
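A minimal frequency-domain sketch of the filter of Equation-47 (the signal spectrum S(f) and the noise PSD below are assumed examples, chosen so that PSDn(f) is never zero): the filter de-emphasises exactly the bands where the noise PSD is large.

```python
import numpy as np

f = np.linspace(-50.0, 50.0, 1001)            # frequency grid (Hz)
T0 = 1.0                                      # sampling instant
S = np.sinc(f)                                # assumed signal spectrum S(f)
PSD_n = 1.0 + 4.0 * np.exp(-(f / 5.0) ** 2)   # assumed noise PSD: white floor + low-f hump

# W0(f) = S*(f) exp(-j2πfT0) / PSD_n(f)  (Eq. 47)
W0 = np.conj(S) * np.exp(-2j * np.pi * f * T0) / PSD_n

# at f = 0 the noise PSD is 5, so |W0| = |S|/5 there:
print(abs(W0[len(f) // 2]))                   # 0.2
```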
Optimum M-ary Rx based on Signal Constellation
Introduction - Orthogonal Signals

Two signals ci(t) and cj(t) are called orthogonal signals (i.e. ci(t) ⊥ cj(t)) in the interval [a, b] iff:

∫_a^b ci(t)·c*j(t)·dt = { Eci for i = j; 0 for i ≠ j }    (48)

Note:
- if Eci = 1 then the signals are called orthonormal;
- ∫_a^b ci(t)·c*i(t)·dt = Eci = the energy of ci(t) in the interval [a, b].
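A numerical check of Equation-48 for a familiar orthogonal set (the harmonic signals below are an assumed example): distinct harmonics of sin(2πkt/T) on [0, T] integrate to zero, and each signal's self-correlation gives its energy T/2.

```python
import numpy as np

T, L = 1.0, 4000
t = np.linspace(0.0, T, L, endpoint=False)
dt = T / L

c = {k: np.sin(2 * np.pi * k * t / T) for k in (1, 2, 3)}   # harmonic set

def inner(ci, cj):
    """Discrete version of ∫ ci(t)·cj*(t) dt over [0, T]."""
    return np.sum(ci * np.conj(cj)) * dt

print(inner(c[1], c[2]))   # ≈ 0   (orthogonal, i ≠ j)
print(inner(c[2], c[2]))   # ≈ 0.5 (energy E_c2 = T/2)
```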
The "Approximation Theorem" of an Energy Signal

Theorem: Consider an energy signal s(t). Furthermore, consider a set of D orthogonal signals

{c1(t), c2(t), ..., cD(t)}

Then:
- The signal s(t) can be approximated by a linear combination of the orthogonal signals ci(t) as follows:

ŝ(t) = Σ_{i=1}^{D} w*i·ci(t) = w_s^H·c(t)    (49)

where
w_s = [w1, w2, ..., wD]^T
c(t) = [c1(t), c2(t), ..., cD(t)]^T

- w_s is a vector of coefficients, or weights.
Prof. A. Manikas (Imperial College) EE.401: Detection Theory v.16 52 / 84
Optimum M-ary Rx based on Signal Constellation The “Approximation Theorem” of an Energy Signal
- The best approximation is the one which uses the following coefficients:

w_s = (1 / E_c) ⊙ ∫_a^b s(t)·c*(t)·dt = optimum weights    (50)

where

E_c = [Ec1, Ec2, ..., EcD]^T    (51)

and ⊙ denotes Hadamard (element-by-element) multiplication.

- The above coefficients are optimum in the sense that they minimise the energy Eε of the error signal

ε(t) = s(t) − ŝ(t)    (52)
Comments on the "Approximation" Theorem
The above theorem states that any energy signal can be approximated by a linear combination of a set of orthogonal signals, and can be equivalently described by the following diagram:

[Figure: s(t) is correlated with each ci(t) to give the weights wi, and the weighted sum of the ci(t) reconstructs ŝ(t).]
The energy of the error signal ε(t) is minimum when the set of signals

{ε(t), c1(t), c2(t), ..., cD(t)}

is an orthogonal set, i.e. if

ε(t) ⊥ ci(t), ∀i

or

∫_a^b ( s(t) − w_s^H·c(t) )·c*(t)·dt = 0

then Eε = min    (53)

In general,

ŝ(t) ≈ s(t)    (54)

However, if

ŝ(t) = s(t)    (55)

then {c1(t), c2(t), ..., cD(t)} ≜ a complete set of orthogonal signals.
Gram-Schmidt Orthogonalization
Suppose that we have a set of energy signals

{s1(t), s2(t), ..., sM(t)} = non-orthogonal

and we wish to construct a set of orthogonal signals

{c1(t), c2(t), ..., cD(t)} = orthogonal

One popular approach to constructing this orthogonal set is GRAM-SCHMIDT orthogonalisation. [The step-by-step procedure and a worked example are given as figures in the original slides.]
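The Gram-Schmidt procedure named above can be sketched on sampled signals as follows (a minimal Python sketch under assumed example signals, not the worked example of the slides): each si has its projections onto the previously built ck removed, and whatever is left, if anything, becomes the next orthonormal ck.

```python
import numpy as np

def gram_schmidt(signals, dt, tol=1e-10):
    """Build an orthonormal set {c_k} (unit energy) from sampled signals."""
    basis = []
    for s in signals:
        residual = s.astype(float).copy()
        for c in basis:
            residual -= (np.sum(residual * c) * dt) * c   # subtract projection on c_k
        energy = np.sum(residual**2) * dt
        if energy > tol:                                  # keep only new directions
            basis.append(residual / np.sqrt(energy))      # normalise to unit energy
    return basis

L, dt = 1000, 1.0 / 1000
t = np.arange(L) * dt
s1 = np.ones(L)
s2 = np.ones(L) + np.sin(2 * np.pi * t)
s3 = 2 * s1                                # linearly dependent on s1
basis = gram_schmidt([s1, s2, s3], dt)
print(len(basis))                          # D = 2: s3 adds no new dimension
```

Note that D ≤ M: the linearly dependent waveform s3 is absorbed by the existing basis, exactly as in the theory.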
M-ary Signals: Energy and Cross-Correlation
Binary Com. Systems: use 2 possible waveforms {s0(t), s1(t)} or {s1(t), s2(t)}.
M-ary Com. Systems: use M possible waveforms {s1(t), ..., sM(t)}.
Note: these waveforms are energy signals of duration Tcs.

The M signals (or channel symbols) are characterized by their energy Ei:

Ei = ∫_0^Tcs s²i(t)·dt, ∀i    (56)

Furthermore, their similarity (or dissimilarity) is characterized by their cross-correlation:

ρij = (1/√(Ei·Ej)) · ∫_0^Tcs si(t)·s*j(t)·dt    (57)
The M signals may also be expressed, by using the Approximation Theorem described in the previous section, as a linear combination of a set of D orthogonal signals {ck(t)}:

si(t) = w_si^H·c(t); ∀i    (58)

where c(t) = [c1(t), c2(t), ..., cD(t)]^T    (59)
Let us define the following (D × D) matrix:

Rcc = ∫_0^Tcs c(t)·c(t)^H·dt

    = [ ∫_0^Tcs c1²(t)·dt        ∫_0^Tcs c1(t)·c2*(t)·dt   ...   ∫_0^Tcs c1(t)·cD*(t)·dt
        ∫_0^Tcs c2(t)·c1*(t)·dt   ∫_0^Tcs c2²(t)·dt        ...   ∫_0^Tcs c2(t)·cD*(t)·dt
        ...                       ...                      ...   ...
        ∫_0^Tcs cD(t)·c1*(t)·dt   ∫_0^Tcs cD(t)·c2*(t)·dt  ...   ∫_0^Tcs cD²(t)·dt ]

i.e., since the set {ck(t)} is orthonormal,

Rcc = ∫_0^Tcs c(t)·c(t)^H·dt = [ 1 0 ... 0
                                 0 1 ... 0
                                 ... ... ... ...
                                 0 0 ... 1 ] = I_D    (60)
Then, by using Equation-56 in conjunction with Equation-58, the energy Ei of the signal si(t), ∀i, can be expressed as follows:

Ei = ∫_0^Tcs w_si^H·c(t) · c(t)^H·w_si · dt
   = w_si^H · ( ∫_0^Tcs c(t)·c(t)^H·dt ) · w_si
   = w_si^H·Rcc·w_si
   = w_si^H·I_D·w_si
   = w_si^H·w_si
   = ‖w_si‖²    (61)
Similarly, by using Equation-57 in conjunction with Equation-58, the cross-correlation coefficient ρij, ∀i,j, becomes:

ρij = (1/√(Ei·Ej)) · ∫_0^Tcs w_si^H·c(t)·c(t)^H·w_sj·dt
    = (1/√(Ei·Ej)) · w_si^H · ( ∫_0^Tcs c(t)·c(t)^H·dt ) · w_sj
    = (1/√(Ei·Ej)) · w_si^H·Rcc·w_sj
    = (1/√(Ei·Ej)) · w_si^H·I_D·w_sj
    = (1/√(Ei·Ej)) · w_si^H·w_sj = w_si^H·w_sj / (‖w_si‖·‖w_sj‖)
i.e.

Ei = w_si^H·w_si = ‖w_si‖²    (62)

ρij = (1/√(Ei·Ej)) · w_si^H·w_sj = w_si^H·w_sj / (‖w_si‖·‖w_sj‖)    (63)
Signal Constellation

With reference to the figure below: if the error signal ε(t) = 0, then the knowledge of the vector w_s is as good as knowing the transmitted signal s(t); or, in the case of M signals (M-ary system):

{knowledge of si(t)} = {knowledge of w_si}, ∀i
Therefore, we may represent the signal si(t) by a point (w_si) in a D-dimensional Euclidean space, with

D ≤ M    (64)

The set of points (vectors) specified by the columns of the matrix

W = [w_s1, w_s2, ..., w_sM]    (65)

is known as the "signal constellation".
Distance between two M-ary signals

The distance between two signals si(t) and sj(t) is the Euclidean distance dij between their associated vectors w_si and w_sj in the D-dimensional space:

dij = ‖w_si − w_sj‖
    = √( (w_si − w_sj)^H·(w_si − w_sj) )
    = √( w_si^H·w_si + w_sj^H·w_sj − 2·Re(w_si^H·w_sj) )
    = √( Ei + Ej − 2·ρij·√(Ei·Ej) )

⇒ d²ij = Ei + Ej − 2·ρij·√(Ei·Ej)    (66)

It is clear from the above that the Euclidean distance dij of two signals indicates, like the cross-correlation coefficient, the similarity or dissimilarity of the signals.
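Equations 62, 63 and 66 can be checked together on two illustrative signal vectors (the vectors below are assumed examples, real-valued so that ρij is the plain normalised inner product):

```python
import numpy as np

w_i = np.array([3.0, 1.0])
w_j = np.array([1.0, 2.0])

E_i = w_i @ w_i                              # Eq. 62: energy = squared norm
E_j = w_j @ w_j
rho = (w_i @ w_j) / np.sqrt(E_i * E_j)       # Eq. 63 (real-valued case)

d2_direct = np.sum((w_i - w_j) ** 2)                     # squared Euclidean distance
d2_formula = E_i + E_j - 2 * rho * np.sqrt(E_i * E_j)    # Eq. 66

print(d2_direct, d2_formula)                 # both 5.0
```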
An Important Bound involving dij:
- If pe,cs(si(t)) denotes the probability of error associated with the channel symbol si(t), it can be found that (see S. Haykin, p.498):

pe,cs(si(t)) ≤ Σ_{j=1, j≠i}^{M} T{ dij / √(2N0) }    (67)

- Then, the symbol error probability pe,cs is bounded as follows:

pe,cs ≤ (1/M) · Σ_{i=1}^{M} Σ_{j=1, j≠i}^{M} T{ dij / √(2N0) }    (68)

- N.B.: the following expression was used to go from Equation-67 to Equation-68:

pe,cs = (1/M) · Σ_{i=1}^{M} pe,cs(si(t))   (see S. Haykin, p.498)    (69)

"minimum distance": dmin ≜ min over ∀i,j of {dij}
Examples of Signal Constellation Diagrams

Consider an M-ary system having the following signals:

{s1(t), s2(t), ..., sM(t)}, with 0 ≤ t ≤ Tcs

M-ary ASK
- channel symbols: si(t) = Ai·cos(2πFc·t), where Ai = given = 2i − 1 − M (say)
- dimensionality of the signal space: D = 1
- if M = 4: [Figure: the four points ±√Esi lie on a single axis through the origin, Gray-labelled 00, 01, 11, 10.]
- if M = 8: [Figure: the eight points lie on a single axis through the origin, Gray-labelled 000, 001, 011, 010, 110, 111, 101, 100.]
M-ary PSK
I channel symbols: si (t) = A. cos(2πFc t + 2π
M . (i − 1) + φ)
for i = 1, 2, ..,I dimensionality of signal-space = D = 2
M = 4 & φ = 0°;  M = 4 & φ = 45°;  M = 8 & φ = 0°:
[2-D constellation diagrams: the M points, at radius √E_si from the Origin, lie equally spaced on a circle (rotated by φ), with Gray-coded labels, e.g. 00, 01, 11, 10 for M = 4 and 000, 001, 011, 010, 110, 111, 101, 100 for M = 8]
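The PSK constellations above can be generated directly from the signal-space description s_i = √E_s · e^{j((2π/M)(i−1) + φ)}. The helper below (a hypothetical function name, written for illustration) produces the complex constellation points; the checks confirm the constant-radius and equal-angular-spacing properties.

```python
import cmath
import math

def psk_constellation(M, phi=0.0, Es=1.0):
    """Signal-space points of M-ary PSK: sqrt(Es)*exp(j(2*pi*(i-1)/M + phi))."""
    return [math.sqrt(Es) * cmath.exp(1j * (2 * math.pi * i / M + phi))
            for i in range(M)]

# M = 4 with phi = 45 degrees: the rotated QPSK constellation of the slide
pts = psk_constellation(4, phi=math.pi / 4)

# All points lie on a circle of radius sqrt(Es) around the Origin ...
assert all(abs(abs(p) - 1.0) < 1e-12 for p in pts)
# ... and adjacent points are separated by 2*pi/M = 90 degrees
assert abs(pts[1] / pts[0] - 1j) < 1e-12
```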
M-ary FSK: very difficult to represent using a constellation diagram.
Examples of Plots of Decision Variables (QPSK-Receiver)

[Two scatter plots of the complex decision variables (axes: Re, Im): one labelled "good", with samples concentrated near the constellation points, and one labelled "bad", with widely scattered samples (axis scale ×10^6)]
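The "good" and "bad" clouds above can be reproduced with a simple model (assumed for illustration): each decision variable is a transmitted QPSK constellation point plus complex Gaussian noise, so a small noise standard deviation gives tight clusters and a large one gives overlapping clouds.

```python
import cmath
import math
import random

random.seed(1)
M = 4
# QPSK constellation points (phi = 45 degrees, unit energy)
pts = [cmath.exp(1j * (2 * math.pi * i / M + math.pi / 4)) for i in range(M)]

def decision_variables(n, sigma):
    """n noisy decision variables: random symbol + complex Gaussian noise."""
    return [random.choice(pts)
            + complex(random.gauss(0, sigma), random.gauss(0, sigma))
            for _ in range(n)]

good = decision_variables(200, 0.05)  # tight clusters: reliable decisions
bad = decision_variables(200, 0.7)    # overlapping clouds: frequent errors
```

Plotting `good` and `bad` in the complex plane reproduces the qualitative difference between the two slides' scatter diagrams.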
Optimum M-ary Receiver using the Signal-Vectors
Let us consider again the general M-ary problem:

r(t) = s_i(t) + n(t)  →  [Decision rule = ? (to determine which signal is present)]  →  D_i

where s_i(t) = one of M signals (channel symbols).

Hypotheses:
H_1: s_1(t) is present, with probability Pr(H_1)
H_2: s_2(t) is present, with probability Pr(H_2)
· · ·
H_M: s_M(t) is present, with probability Pr(H_M)
We have seen that the MAP criterion, using the “CONTINUOUS SAMPLING" concept, is described by the following rule:

Rule: choose the hypothesis H_i with the maximum Pr(H_i) × pdf_{r/Hi}(r(t)), where

max{ Pr(H_i) × pdf_{r/Hi}(r(t)) } = max{ ∫_0^Tcs r(t) s_i*(t) dt + (N_0/2) ln(Pr(H_i)) − (1/2) E_i }   (70)
Remember that this rule can also be described by the following two equivalent receivers:
[Block diagram, OPTIMUM M-ary RECEIVER (CORRELATION RECEIVER): r(t) is correlated with each s_i(t) over [0, Tcs] (i.e. ∫_0^Tcs · dt), the bias DC_i is added to form the decision variable G_i, and the decision D_i is taken by comparing the decision variables and choosing the maximum (if G_i = max, then D_i)]
where
DC_i = (N_0/2) ln(Pr(H_i)) − (1/2) E_i   (71)
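A minimal discrete-time sketch of the correlation receiver above: the integral ∫_0^Tcs r(t) s_i(t) dt is approximated by a sampled sum, the bias DC_i of Equ. 71 is added, and the largest decision variable G_i selects the hypothesis. The binary antipodal signal set, priors and noise level are assumed here purely for illustration.

```python
import math
import random

# Assumed example: M = 2 antipodal rectangular symbols, N samples per symbol
N, Tcs, N0 = 100, 1.0, 0.05
dt = Tcs / N
s = [[+1.0] * N, [-1.0] * N]   # samples of s_1(t) and s_2(t)
prior = [0.5, 0.5]             # Pr(H_i)

random.seed(0)
i_true = 0
# White noise of PSD N0/2 discretizes to sample variance (N0/2)/dt
r = [s[i_true][k] + random.gauss(0, math.sqrt(N0 / (2 * dt)))
     for k in range(N)]

G = []
for i in range(2):
    corr = sum(r[k] * s[i][k] for k in range(N)) * dt  # ~ integral r(t)s_i(t)dt
    Ei = sum(x * x for x in s[i]) * dt                 # symbol energy E_i
    DC = (N0 / 2) * math.log(prior[i]) - Ei / 2        # Equ. 71 bias
    G.append(corr + DC)

decision = max(range(2), key=lambda i: G[i])  # choose the maximum G_i
```

With equal priors and equal energies the bias DC_i is the same for both branches, so the decision reduces to the sign of the correlation, as expected for antipodal signalling.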
Now let us use the Approximation Theorem of energy signals to represent the received signal r(t) as well as the M-ary signals s_i(t), ∀i, i.e.

r(t) = w_r^H c(t),   s_i(t) = w_si^H c(t)  ⇒  s_i*(t) = (w_si^H c(t))* = c(t)^H w_si, ∀i

In this case the MAP receiver (Equ-70) becomes

max{ Pr(H_i) × pdf_{r/Hi}(r(t)) }
= max{ ∫_0^Tcs r(t) s_i*(t) dt + (N_0/2) ln(Pr(H_i)) − (1/2) E_i }
= max{ ∫_0^Tcs w_r^H c(t) · c(t)^H w_si dt + (N_0/2) ln(Pr(H_i)) − (1/2) E_i }
= max{ w_r^H ( ∫_0^Tcs c(t) c(t)^H dt ) w_si + (N_0/2) ln(Pr(H_i)) − (1/2) E_i }
= max{ w_r^H w_si + (N_0/2) ln(Pr(H_i)) − (1/2) E_i }

where the last step uses the orthonormality of the basis functions, i.e. ∫_0^Tcs c(t) c(t)^H dt = I.
i.e.

max{ Pr(H_i) × pdf_{r/Hi}(r(t)) } = max{ w_r^H w_si + (N_0/2) ln(Pr(H_i)) − (1/2) E_i }   (72)
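Equation 72 reduces the receiver to pure vector operations. The sketch below assumes the orthonormal expansion has already been applied, i.e. w_r and the w_si are given directly; the QPSK-like signal vectors, priors and received vector are made-up example data.

```python
import math

# Assumed example: 4 signal vectors in a D = 2 signal space (QPSK-like)
w_s = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
prior = [0.25] * 4
N0 = 0.1
w_r = [0.9, 0.15]  # received signal vector (example data)

def decision_variable(w_r, w_si, p, N0):
    """Equ. 72: w_r^H w_si + (N0/2) ln Pr(H_i) - E_i / 2 (real-valued case)."""
    Ei = sum(x * x for x in w_si)
    return (sum(a * b for a, b in zip(w_r, w_si))
            + (N0 / 2) * math.log(p) - Ei / 2)

G = [decision_variable(w_r, w_s[i], prior[i], N0) for i in range(4)]
decision = max(range(4), key=lambda i: G[i])  # index of the chosen hypothesis
```

For equal energies and equal priors the biases cancel, and the receiver simply picks the signal vector closest (in inner-product terms) to w_r.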
The above equation may be implemented as follows:
[Two equivalent block diagrams (figures omitted): the decision variables G_i = w_r^H w_si + DC_i are formed from the signal vectors, and the maximum is chosen]
Encoding M-ary Signals

The performance of M-ary systems is evaluated by means of the average probability of symbol error p_e,cs which, for M > 2, is different from the average probability of bit error (or Bit Error Rate, BER) p_e. That is,

p_e,cs ≠ p_e for M > 2
p_e,cs = p_e for M = 2   (73)

However, because we transmit binary data, the probability of bit error p_e is a more natural parameter for performance evaluation than p_e,cs. Although these two probabilities are related, i.e. p_e = f{p_e,cs}, their relationship depends on the encoding approach employed by the digital modulator for mapping binary digits to M-ary signals (channel symbols).
Important Relationships Between BER and SER
If the encoder provides a mapping where adjacent symbols differ in one binary digit, then it can be proved (see Haykin, p. 500) that

(1/γ_cs) · p_e,cs ≤ p_e ≤ p_e,cs   (74)

e.g. M-ary PSK (for Gray code; otherwise difficult):

BER = p_e = (1/γ_cs) · p_e,cs   (75)

N.B.: a Gray encoder provides a mapping where adjacent symbols differ in one binary digit. This property is very important because the most likely errors (due to channel-noise effects) correspond to an erroneous selection (wrong decision) of an adjacent symbol.
If all M-ary signals are equally likely, then it can be proved (see Haykin, p. 500) that

p_e = 2^(γ_cs − 1) / (2^γ_cs − 1) · p_e,cs   (76)

which, for large γ_cs, is simplified to

p_e ≈ (1/2) · p_e,cs   (77)

Example:
M-ary FSK: BER = p_e = 2^(γ_cs − 1) / (2^γ_cs − 1) · p_e,cs   (78)
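The relationships above can be sketched as two small helper functions (hypothetical names, for illustration). Here γ_cs is taken to be log2(M), the number of bits carried per channel symbol, consistent with its use in these notes.

```python
import math

def ber_gray_psk(p_sym, M):
    """Equ. 75: p_e = p_e,cs / gamma_cs for Gray-coded M-ary PSK."""
    return p_sym / math.log2(M)

def ber_orthogonal(p_sym, M):
    """Equ. 76: p_e = 2^(gamma_cs - 1) / (2^gamma_cs - 1) * p_e,cs."""
    g = math.log2(M)
    return 2 ** (g - 1) / (2 ** g - 1) * p_sym

ber_psk = ber_gray_psk(1e-3, 8)     # SER of 1e-3 on 8-PSK -> BER
ber_fsk = ber_orthogonal(1e-3, 64)  # SER of 1e-3 on 64-ary orthogonal FSK
```

As Equ. 77 states, the factor in `ber_orthogonal` tends to 1/2 as γ_cs grows, since 2^(γ−1)/(2^γ − 1) → 1/2.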
Appendix-1: Proof of Equation-26

The above rule can be rewritten as follows:

max{ Pr(H_i) × pdf_{r/Hi}(r(t)) }
= max{ ln(Pr(H_i)) + ln(const.) − (1/N_0) ∫_0^Tcs (r(t) − s_i(t))² dt }
= max{ ln(Pr(H_i)) − (1/N_0) ∫_0^Tcs (r(t) − s_i(t))² dt }
= max{ ln(Pr(H_i)) − (1/N_0) ∫_0^Tcs r²(t) dt − (1/N_0) ∫_0^Tcs s_i²(t) dt + (2/N_0) ∫_0^Tcs r(t) s_i*(t) dt }
= max{ ln(Pr(H_i)) − (1/N_0) E_r − (1/N_0) E_i + (2/N_0) ∫_0^Tcs r(t) s_i*(t) dt }
= max{ (N_0/2) ln(Pr(H_i)) − (1/2) E_i + ∫_0^Tcs r(t) s_i*(t) dt }
= max{ ∫_0^Tcs r(t) s_i*(t) dt + (N_0/2) ln(Pr(H_i)) − (1/2) E_i }

where the terms ln(const.) and −E_r/N_0 are the same for every hypothesis and may be dropped, and the remaining expression has been scaled by N_0/2, which does not affect the maximization.
Appendix-2: Walsh-Hadamard Orthogonal Sets and Signals
H_1 = [1]

H_2 = [ 1  1 ; 1  −1 ]   (2 × 2 matrix)

H_64 = H_2 ⊗ H_2 ⊗ H_2 ⊗ H_2 ⊗ H_2 ⊗ H_2   (6 times)

where ⊗ denotes the Kronecker product of two matrices, e.g. for two matrices A and B:

if A = [ A_11  A_12 ; A_21  A_22 ]  then  A ⊗ B = [ A_11·B  A_12·B ; A_21·B  A_22·B ]

Properties:
- H_64^T H_64 = 64 I_64
- Let h_i denote the i-th column of H_64, i.e. H_64 = [h_1, h_2, …, h_64]
The following example illustrates an application of “Walsh-Hadamard” in mobile communications, where each user in the same cell and the same frequency channel is assigned one column of the Walsh matrix.
Appendix-3: IS95 - Walsh Function of order-64