Lecture 1 2
Information
Lecturer: Dr. Darren Ward, Room 811C
Email: [email protected], Phone: (759) 46230
Reference books:
B.P. Lathi, Modern Digital and Analog Communication Systems, Oxford University Press, 1998
S. Haykin, Communication Systems, Wiley, 2001
L.W. Couch II, Digital and Analog Communication Systems, Prentice-Hall, 2001
Course material: http://www.ee.ic.ac.uk/dward/
Lecture 1 3
Aims:
The aim of this part of the course is to give you an understanding of
how communication systems perform in the presence of noise.
Objectives:
By the end of the course you should be able to:
Compare the performance of various communication systems
Describe a suitable model for noise in communications
Determine the SNR performance of analog communication systems
Determine the probability of error for digital systems
Understand information theory and its significance in determining system performance
Lecture 1 4
Lecture 1
1. What is the course about, and how does it fit together?
2. Some definitions (signals, power, bandwidth, phasors)
See Chapter 1 of notes
Lecture 1 16
Definitions
Signal: a single-valued function of time that conveys information
Deterministic signal: completely specified function of time
Random signal: cannot be completely specified as function of time
Lecture 1 17
Definitions
Analog signal: continuous function of time with continuous amplitude
Discrete-time signal: only defined at discrete points in time, amplitude
continuous
Digital signal: discrete in both time and amplitude (e.g., PCM signals, see Chapter 4)
Lecture 1 18
Definitions
Instantaneous power:

$p(t) = \dfrac{v^2(t)}{R} = i^2(t)\,R = g^2(t)$

Average power:

$P = \lim_{T\to\infty} \dfrac{1}{T} \int_{-T/2}^{T/2} g^2(t)\,dt$

For periodic signals, with period $T_o$ (see Problem sheet 1):

$P = \dfrac{1}{T_o} \int_{-T_o/2}^{T_o/2} g^2(t)\,dt$
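As a quick numerical illustration of the average-power definition (a sketch with an assumed amplitude and frequency, not part of the notes), the time average of $g^2(t)$ for a sinusoid of amplitude $A$ tends to $A^2/2$:

```python
import numpy as np

A, f = 2.0, 50.0                     # assumed example amplitude and frequency
T = 100 / f                          # average over an integer number of periods
t = np.linspace(0, T, 1_000_000, endpoint=False)
g = A * np.cos(2 * np.pi * f * t)

P = np.mean(g**2)                    # discretized (1/T) * integral of g^2(t) dt
print(P, A**2 / 2)                   # both ~2.0
```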
Lecture 1 19
Definitions
Bandwidth: extent of the significant spectral content of a
signal for positive frequencies
Baseband signal: B/W = B. Bandpass signal: B/W = 2B.
Lecture 1 20
Bandwidth
[Figure: magnitude-squared spectrum illustrating the 3 dB bandwidth, null-to-null bandwidth, and noise equivalent bandwidth.]
Lecture 1 21
Phasors
General sinusoid:

$x(t) = A\cos(2\pi f t + \theta)$

Phasor representation:

$x(t) = \Re\left\{ A e^{j\theta} e^{j2\pi f t} \right\}$
Lecture 1 32
Phasors
Alternative representation:

$x(t) = A\cos(2\pi f t + \theta) = \dfrac{A}{2} e^{j\theta} e^{j2\pi f t} + \dfrac{A}{2} e^{-j\theta} e^{-j2\pi f t}$
Lecture 1 43
Phasors
Anti-clockwise rotation (positive frequency): $e^{j2\pi f t}$

Clockwise rotation (negative frequency): $e^{-j2\pi f t}$

$x(t) = \dfrac{A}{2} e^{j\theta} e^{j2\pi f t} + \dfrac{A}{2} e^{-j\theta} e^{-j2\pi f t}$
Lecture 1 44
Summary
1. The fundamental question: How do communications
systems perform in the presence of noise?
2. Some definitions:
Signals
Average power: $P = \lim_{T\to\infty} \dfrac{1}{T} \int_{-T/2}^{T/2} x^2(t)\,dt$
Bandwidth: significant spectral content for positive frequencies
Phasors – complex conjugate representation (negative frequency):

$x(t) = \dfrac{A}{2} e^{j\theta} e^{j2\pi f t} + \dfrac{A}{2} e^{-j\theta} e^{-j2\pi f t}$
Lecture 2 1
Lecture 2
1. Model for noise
2. Autocorrelation and Power spectral density
See Chapter 2 of notes, sections 2.1, 2.2, 2.3
Lecture 2 2
Sources of noise
Lecture 2 3
Sources of noise
1. External noise:
synthetic (e.g. other users)
atmospheric (e.g. lightning)
galactic (e.g. cosmic radiation)

2. Internal noise:
shot noise
thermal noise

Average power of thermal noise: $P = kTB$

Effective noise temperature: $T_e = \dfrac{P}{kB}$

This is the temperature of a fictitious thermal noise source at the input that would be required to produce the same noise power at the output.
Lecture 2 4
Example
Consider an amplifier with 20 dB power gain and a bandwidth of $B = 20$ MHz.
Assume the average thermal noise at its output is $P_o = 2.2\times 10^{-11}$ W.

1. What is the amplifier's effective noise temperature?
2. What is the noise output if two of these amplifiers are cascaded?
3. How many stages can be used if the noise output must be less than 20 mW?
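One possible way to set up part 1 (a sketch, assuming all the output noise is referred to the input as an effective temperature, i.e. $P_o = G\,k\,T_e\,B$):

```python
# Sketch of part 1; the model P_o = G * k * T_e * B is an assumption.
k = 1.38e-23           # Boltzmann's constant, J/K
G = 10**(20 / 10)      # 20 dB power gain -> factor of 100
B = 20e6               # bandwidth, Hz
Po = 2.2e-11           # average output noise power, W

Te = Po / (G * k * B)  # effective noise temperature at the input
print(f"Te = {Te:.0f} K")   # ~797 K
```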
Lecture 2 5
Gaussian noise
Gaussian noise: amplitude of noise signal has a Gaussian probability
density function (p.d.f.)
Central limit theorem: the sum of n independent random variables approaches a Gaussian distribution as n→∞
Lecture 2 6
Noise model
The model for the effect of noise is the additive Gaussian noise channel.

Since n(t) is a random signal, we need more information:
What happens to the noise at the receiver?
Statistical tools
Lecture 2 7
Motivation
How often is a decision error made?
[Figure: a binary data waveform, the same data plus noise, and the noise alone; errors occur when noise pushes a transmitted 0 above the decision threshold or a transmitted 1 below it.]
Lecture 2 8
Random variable
A random variable x is a rule that assigns a real number $x_i$ to the $i$th sample point in the sample space.
Lecture 2 9
Probability density function
The probability that the r.v. x is within a certain range is:

$P(a < x \le b) = \int_a^b p_x(x)\,dx$

Example, Gaussian pdf:

$p_x(x) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\dfrac{(x-m)^2}{2\sigma^2}\right)$
Lecture 2 13
Example
Consider the signal:

$x(t) = A_c\, e^{j(\omega t + \theta)}$

where θ is a random variable, uniformly distributed over 0 to 2π.

1. Calculate its average power using time averaging.
2. Calculate its average power (mean-square) using statistical averaging.
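A quick numerical cross-check of the two averaging methods (a sketch with assumed values, not from the notes); for a complex exponential $|x(t)|^2 = A_c^2$ exactly, so both averages agree:

```python
import numpy as np

rng = np.random.default_rng(0)
Ac, w = 1.5, 2 * np.pi * 10.0            # assumed amplitude and frequency

# 1. Time average of |x(t)|^2 for one realization of theta
t = np.linspace(0, 10, 100_000)
theta = rng.uniform(0, 2 * np.pi)
x = Ac * np.exp(1j * (w * t + theta))
print(np.mean(np.abs(x)**2))             # Ac**2 = 2.25

# 2. Statistical (ensemble) average over theta at a fixed time instant
thetas = rng.uniform(0, 2 * np.pi, 100_000)
x0 = Ac * np.exp(1j * (w * 1.0 + thetas))
print(np.mean(np.abs(x0)**2))            # also Ac**2
```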
Lecture 2 14
Autocorrelation
How can one represent the spectrum of a random process?

Autocorrelation:

$R_x(\tau) = E\{x(t)\,x(t+\tau)\}$

NOTE: Average power is

$P = E\{x^2(t)\} = R_x(0)$
Lecture 2 15
Frequency content
[Figure: spectrum of original speech and of speech after low-pass filters H(f) at 1 kHz and 4 kHz; the filtered speech has a smaller bandwidth.]
Lecture 2 16
Power spectral density
The PSD measures the distribution of power with frequency; units watts/Hz.

Wiener-Khinchine theorem:

$S_x(f) = \int_{-\infty}^{\infty} R_x(\tau)\, e^{-j2\pi f\tau}\, d\tau = \mathrm{FT}\{R_x(\tau)\}$

Hence,

$R_x(\tau) = \int_{-\infty}^{\infty} S_x(f)\, e^{j2\pi f\tau}\, df$

Average power:

$P = R_x(0) = \int_{-\infty}^{\infty} S_x(f)\, df$
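A small numerical sketch of the "average power = area under the PSD" relationship (assumed noise parameters, using scipy's Welch PSD estimator rather than the autocorrelation route):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 10_000
n = rng.normal(0, 2.0, 1_000_000)        # zero-mean white noise, power sigma^2 = 4

f, Sxx = welch(n, fs=fs, nperseg=4096)   # one-sided PSD estimate, W/Hz
P = np.sum(Sxx) * (f[1] - f[0])          # integrate the PSD over frequency
print(P, np.mean(n**2))                  # both ~4.0
```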
Lecture 2 17
Power spectral density
Thermal noise:

$P = \int_{-B}^{B} \dfrac{kT}{2}\, df = kTB$

White noise: the PSD is the same for all frequencies,

$S(f) = \dfrac{N_o}{2}$
Lecture 2 18
Summary
Power spectral density:

$S_x(f) = \int_{-\infty}^{\infty} R_x(\tau)\, e^{-j2\pi f\tau}\, d\tau$

Additive Gaussian noise channel.

Expectation operator:

$E[g(x)] = \int_{-\infty}^{\infty} g(x)\, p_x(x)\, dx$

where $p_x(x)$ is the pdf of the random variable x.

Autocorrelation:

$R_x(\tau) = E\{x(t)\,x(t+\tau)\}$

White noise:

$S(f) = \dfrac{N_o}{2}$
Lecture 3 1
Lecture 3
Representation of band-limited noise
Why band-limited noise?
(See Chapter 2, section 2.4)
Noise in an analog baseband system
(See Chapter 3, sections 3.1, 3.2)
Lecture 3 2
Analog communication system
Lecture 3 3
Receiver
Predetection filter:
removes out-of-band noise
has a bandwidth matched to the transmission bandwidth

[Block diagram: predetection filter followed by detector.]
Lecture 3 6
Bandlimited Noise
For any bandpass (i.e., modulated) system, the predetection noise will be bandlimited.

A bandpass noise signal can be expressed in terms of two baseband waveforms:

$n(t) = n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

where $n(t)$ is bandpass, $n_c(t)$ and $n_s(t)$ are baseband, and $\cos(\omega_c t)$, $\sin(\omega_c t)$ are carriers.

The PSD of $n(t)$ is centred about $f_c$ (and $-f_c$).
The PSDs of $n_c(t)$ and $n_s(t)$ are centred about 0 Hz.
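A sketch of this decomposition in code (assumed sample rate, carrier and bandwidth; the bandpass noise is synthesized from known $n_c(t)$, $n_s(t)$ and then recovered by mixing and low-pass filtering):

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(2)
fs, fc, W = 100_000, 10_000, 1_000     # assumed sample rate, carrier, noise B/W
t = np.arange(0, 1.0, 1 / fs)

# Build bandpass noise from two known baseband waveforms nc(t), ns(t)
b, a = butter(6, W / (fs / 2))                     # low-pass filter, cutoff W
nc = filtfilt(b, a, rng.normal(size=t.size))
ns = filtfilt(b, a, rng.normal(size=t.size))
n = nc * np.cos(2 * np.pi * fc * t) - ns * np.sin(2 * np.pi * fc * t)

# Recover the components: mix with 2cos / -2sin, then low-pass filter
nc_hat = filtfilt(b, a, 2 * n * np.cos(2 * np.pi * fc * t))
ns_hat = filtfilt(b, a, -2 * n * np.sin(2 * np.pi * fc * t))
print(np.std(nc - nc_hat) / np.std(nc))  # modest relative error (non-ideal
print(np.std(ns - ns_hat) / np.std(ns))  # filter is applied twice)
```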
Lecture 3 7
PSD of n(t)
In slice shown (for ∆ f small):
)2cos()(k k k k
t f at n θ π +=
2W
Lecture 3 8
$n_k(t) = a_k \cos(\omega_k t + \theta_k)$

Let $\omega_k = (\omega_k - \omega_c) + \omega_c$:

$n_k(t) = a_k \cos\!\left[(\omega_k - \omega_c)t + \theta_k + \omega_c t\right]$

Using $\cos(A + B) = \cos A \cos B - \sin A \sin B$:

$n_k(t) = a_k \cos\!\left[(\omega_k - \omega_c)t + \theta_k\right]\cos(\omega_c t) - a_k \sin\!\left[(\omega_k - \omega_c)t + \theta_k\right]\sin(\omega_c t)$

The first term is an $n_c(t)$ term; the second is an $n_s(t)$ term.
Lecture 3 9
Example – frequency
[Figure: spectra of n(t), nc(t) and ns(t) for W = 1000 Hz.]
Lecture 3 10
Example – time
[Figure: time waveforms of n(t), nc(t) and ns(t) for W = 1000 Hz.]
Lecture 3 11
Example – histogram
[Figure: amplitude histograms of n(t), nc(t) and ns(t) for W = 1000 Hz.]
Lecture 3 12
Probability density functions
$n(t) = \sum_k a_k \cos(\omega_k t + \theta_k)$

$n_c(t) = \sum_k a_k \cos\!\left[(\omega_k - \omega_c)t + \theta_k\right]$

$n_s(t) = \sum_k a_k \sin\!\left[(\omega_k - \omega_c)t + \theta_k\right]$

Each waveform is Gaussian distributed (central limit theorem).
The mean of each waveform is 0.
Lecture 3 13
Average power
The power in $a_k\cos(\omega t + \theta)$ is $E\{a_k^2\}/2$ (see Example 1.1, or study group sheet 2, Q1).

$n(t) = \sum_k a_k \cos(\omega_k t + \theta_k)$

Average power in $n(t)$:

$P_n = \sum_k \dfrac{E\{a_k^2\}}{2}$

$n_c(t) = \sum_k a_k \cos\!\left[(\omega_k - \omega_c)t + \theta_k\right], \qquad n_s(t) = \sum_k a_k \sin\!\left[(\omega_k - \omega_c)t + \theta_k\right]$

Average power in $n_c(t)$ and $n_s(t)$:

$P_{n_c} = \sum_k \dfrac{E\{a_k^2\}}{2}, \qquad P_{n_s} = \sum_k \dfrac{E\{a_k^2\}}{2}$

$n(t)$, $n_c(t)$ and $n_s(t)$ all have the same average power!
Lecture 3 14
Average power
From Lecture 2:

$P = R_x(0) = \int_{-\infty}^{\infty} S_x(f)\, df$

What is the average power in $n(t)$? Find it using the power spectral density (PSD):

$P = \int_{-\infty}^{\infty} S(f)\, df = 2 \int_{f_c - W}^{f_c + W} \dfrac{N_o}{2}\, df = 2 N_o W$

(the factor of 2 accounts for the two bands of width 2W: one for positive frequencies, one for negative).

Average power in $n(t)$, $n_c(t)$ and $n_s(t)$ is: $2 N_o W$
Lecture 3 15
Power spectral densities
PSD of $n(t)$: height $N_o/2$ over bands of width $2W$ centred at $\pm f_c$.

PSD of $n_c(t)$ and $n_s(t)$: height $N_o$ over $-W \le f \le W$.
Lecture 3 16
Example – power spectral densities
[Figure: measured PSDs of n(t), nc(t) and ns(t) for W = 1000 Hz.]
Lecture 3 17
Example - zoomed
[Figure: the same PSDs, zoomed, for W = 1000 Hz.]
Lecture 3 18
Phasor representation
$n(t) = n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

Let $g(t) = n_c(t) + j\,n_s(t)$. Then

$g(t)\,e^{j\omega_c t} = \left[n_c(t) + j n_s(t)\right]\left[\cos(\omega_c t) + j\sin(\omega_c t)\right] = n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t) + j\left[n_s(t)\cos(\omega_c t) + n_c(t)\sin(\omega_c t)\right]$

so

$n(t) = \Re\left\{ g(t)\, e^{j\omega_c t} \right\}$
Lecture 3 19
Analog communication system
Performance measure:

$SNR_o = \dfrac{\text{average message power at receiver output}}{\text{average noise power at receiver output}}$

Compare systems with the same transmitted power.
Lecture 3 20
Baseband system
Lecture 3 21
Baseband SNR
Transmitted (message) power is: $P_T$

Noise power is:

$P_N = \int_{-W}^{W} \dfrac{N_o}{2}\, df = N_o W$

SNR at receiver output:

$SNR_{base} = \dfrac{P_T}{N_o W}$
Lecture 3 22
Summary
Bandpass noise representation:

$n(t) = n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

All waveforms have the same:
Probability density function (zero-mean Gaussian)
Average power

$n_c(t)$ and $n_s(t)$ have the same power spectral density.

Baseband SNR:

$SNR_{base} = \dfrac{P_T}{N_o W}$
Lecture 4 1
Lecture 4
Noise (AWGN) in AM systems:
DSB-SC
AM, synchronous detection
AM, envelope detection
(See Chapter 3, section 3.3)
Lecture 4 2
Analog communication system
Performance measure:

$SNR_o = \dfrac{\text{average message power at receiver output}}{\text{average noise power at receiver output}}$

Compare systems with the same transmitted power.
Lecture 4 3
Amplitude modulation
Modulated signal:

$s_{AM}(t) = \left[A_c + m(t)\right]\cos(\omega_c t)$

$m(t)$ is the message signal.

Modulation index:

$\mu = \dfrac{m_p}{A_c}$

where $m_p$ is the peak amplitude of the message.
Lecture 4 4
Amplitude modulation
Lecture 4 5
DSB-SC
$s_{DSB\text{-}SC}(t) = A_c\, m(t)\cos(\omega_c t)$
Lecture 4 6
Synchronous detection
Signal after the multiplier (local carrier $2\cos\omega_c t$):

$y_{DSB\text{-}SC}(t) = A_c\, m(t)\left(1 + \cos 2\omega_c t\right)$

$y_{AM}(t) = \left[A_c + m(t)\right] \times 2\cos(\omega_c t)\cos(\omega_c t) = \left[A_c + m(t)\right]\left(1 + \cos 2\omega_c t\right)$

A low-pass filter then removes the $2\omega_c$ term.
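A minimal sketch of this synchronous (coherent) detector for DSB-SC (assumed message, carrier and filter parameters, not taken from the notes):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, fc = 100_000, 10_000                  # assumed sample rate and carrier
t = np.arange(0, 0.05, 1 / fs)
m = np.cos(2 * np.pi * 300 * t)           # example message (W = 300 Hz)
s = m * np.cos(2 * np.pi * fc * t)        # DSB-SC with Ac = 1

r = s * 2 * np.cos(2 * np.pi * fc * t)    # r(t) = m(t)(1 + cos 2*wc*t)
b, a = butter(6, 1_000 / (fs / 2))        # LPF keeps only the baseband term
m_hat = filtfilt(b, a, r)
print(np.max(np.abs(m - m_hat)[1000:-1000]))   # small reconstruction error
```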
Lecture 4 7
Noise in DSB-SC
Transmitted signal:

$s_{DSB\text{-}SC}(t) = A_c\, m(t)\cos(\omega_c t)$

Predetection signal (transmitted signal plus bandlimited noise):

$x(t) = A_c\, m(t)\cos(\omega_c t) + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

Receiver output (after LPF):

$y(t) = A_c\, m(t) + n_c(t)$
Lecture 4 8
SNR of DSB-SC
Output signal power ($P$ = power of the message):

$P_s = A_c^2\, \overline{m^2(t)} = A_c^2 P$

Output noise power (the PSD of $n_c(t)$ has height $N_o$ over $-W \le f \le W$):

$P_N = \int_{-\infty}^{\infty} S_{n_c}(f)\, df = \int_{-W}^{W} N_o\, df = 2 N_o W$

Output SNR:

$SNR_{DSB\text{-}SC} = \dfrac{A_c^2 P}{2 N_o W}$

[Figure: PSD of the bandpass noise n(t) and of the baseband noise nc(t).]
Lecture 4 9
SNR of DSB-SC
Transmitted power:

$P_T = \overline{\left(A_c\, m(t)\cos\omega_c t\right)^2} = \dfrac{A_c^2 P}{2}$

Output SNR:

$SNR_{DSB\text{-}SC} = \dfrac{P_T}{N_o W} = SNR_{base}$

DSB-SC has no performance advantage over baseband.
Lecture 4 10
Noise in AM (synch. detector)
Predetection signal (transmitted signal plus bandlimited noise):

$x(t) = \left[A_c + m(t)\right]\cos(\omega_c t) + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

Receiver output:

$y(t) = A_c + m(t) + n_c(t)$

Output signal power:

$P_S = \overline{m^2(t)} = P$

Output noise power:

$P_N = 2 N_o W$

Output SNR:

$SNR_{AM} = \dfrac{P}{2 N_o W}$
Lecture 4 11
Noise in AM (synch. detector)
Transmitted signal:

$s_{AM}(t) = \left[A_c + m(t)\right]\cos(\omega_c t)$

Transmitted power:

$P_T = \dfrac{A_c^2}{2} + \dfrac{P}{2}$

Output SNR:

$SNR_{AM} = \dfrac{P}{A_c^2 + P}\,\dfrac{P_T}{N_o W} = \dfrac{P}{A_c^2 + P}\, SNR_{base}$

Since $P/(A_c^2 + P) < 1$, the performance of AM is always worse than baseband.
Lecture 4 13
Noise in AM, envelope detector

Predetection signal (transmitted signal plus bandlimited noise):

$x(t) = \left[A_c + m(t)\right]\cos(\omega_c t) + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$
Lecture 4 14
Noise in AM, envelope detector

Receiver output:

$y(t) = \text{envelope of } x(t) = \sqrt{\left[A_c + m(t) + n_c(t)\right]^2 + n_s^2(t)}$
Lecture 4 15
Noise in AM, envelope detector

Small noise case:

$y(t) \approx A_c + m(t) + n_c(t)$

(the output of the synchronous detector)

Large noise case:

$y(t) \approx E_n(t) + \left[A_c + m(t)\right]\cos\theta_n(t)$

The envelope detector has a threshold effect. Not really a problem in practice.
Lecture 4 16
Example
An unmodulated carrier (of amplitude Ac and frequency f c) and
bandlimited white noise are summed and then passed through an ideal
envelope detector.
Assume the noise spectral density to be of height $N_o/2$ and bandwidth $2W$, centred about the carrier frequency.
Assume the input carrier-to-noise ratio is high.
1. Calculate the carrier-to-noise ratio at the output of the envelope detector,
and compare it with the carrier-to-noise ratio at the detector input.
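A small sketch of an ideal envelope detector in code (implemented here via the analytic signal, with illustrative parameters; the noise-free case simply recovers $A_c + m(t)$):

```python
import numpy as np
from scipy.signal import hilbert

fs, fc, Ac = 100_000, 10_000, 2.0        # assumed parameters
t = np.arange(0, 0.02, 1 / fs)
m = 0.5 * np.cos(2 * np.pi * 200 * t)    # message, mu = 0.25 (no overmodulation)
s = (Ac + m) * np.cos(2 * np.pi * fc * t)

envelope = np.abs(hilbert(s))            # magnitude of the analytic signal
print(np.max(np.abs(envelope - (Ac + m))[500:-500]))   # ~0: recovers Ac + m(t)
```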
Lecture 4 17
Summary
Envelope detector: threshold effect; for small noise, performance is the same as the synchronous detector.

Synchronous detector:

$SNR_{DSB\text{-}SC} = SNR_{base}$

$SNR_{AM} = \dfrac{P}{A_c^2 + P}\, SNR_{base}$
Lecture 5 1
Lecture 5
Noise in FM systems
Pre-emphasis and de-emphasis
(See section 3.4)
Comparison of analog systems
(See section 3.5)
Lecture 5 2
Frequency modulation
FM waveform:

$s_{FM}(t) = A_c\cos\!\left(2\pi f_c t + 2\pi k_f \int_{-\infty}^{t} m(\tau)\, d\tau\right) = A_c\cos\theta(t)$

$\theta(t)$ is the instantaneous phase.

Instantaneous frequency:

$f_i(t) = \dfrac{1}{2\pi}\dfrac{d\theta(t)}{dt} = f_c + k_f\, m(t)$

The frequency is proportional to the message signal.
π
Lecture 5 4
FM waveforms
[Figure: a message waveform and the corresponding AM and FM waveforms.]
Lecture 5 5
FM frequency deviation
Frequency deviation (maximum departure of the carrier wave from $f_c$):

$\Delta f = k_f\, m_p$

The instantaneous frequency varies between $f_c - k_f m_p$ and $f_c + k_f m_p$, where $m_p$ is the peak message amplitude.

Deviation ratio:

$\beta = \dfrac{\Delta f}{W}$

where W is the bandwidth of the message signal.

Commercial FM uses: $\Delta f = 75$ kHz and $W = 15$ kHz.
Lecture 5 6
Bandwidth considerations
Carson's rule:

$B_T \approx 2(\beta + 1)W = 2(\Delta f + W)$

For commercial FM ($\Delta f = 75$ kHz, $W = 15$ kHz) this gives $B_T \approx 180$ kHz.
Lecture 5 7
FM receiver
Discriminator: output is proportional to deviation of instantaneous
frequency away from carrier frequency
Lecture 5 8
Noise in FM versus AM
AM:
Amplitude of the modulated signal carries the message
Noise adds directly to the modulated signal
Performance no better than baseband

FM:
Frequency of the modulated signal carries the message
Zero crossings of the modulated signal are important
Effect of noise should be less than in AM
Lecture 5 9
Noise in FM
Predetection signal (transmitted signal plus bandlimited noise):

$x(t) = A_c\cos\!\left(2\pi f_c t + \phi(t)\right) + n_c(t)\cos(2\pi f_c t) - n_s(t)\sin(2\pi f_c t)$

where $\phi(t) = 2\pi k_f \int_{-\infty}^{t} m(\tau)\, d\tau$

If the carrier power is much larger than the noise power:
1. Noise does not affect the signal power at the output
2. The message does not affect the noise power at the output
Lecture 5 10
Assumptions
1. Noise does not affect signal power at the output.

Signal component at the receiver:

$x_s(t) = A_c\cos\!\left(2\pi f_c t + \phi(t)\right)$

Instantaneous frequency (after removing the carrier):

$f_i(t) = \dfrac{1}{2\pi}\dfrac{d\phi(t)}{dt} = k_f\, m(t)$

Output signal power ($P$ = power of the message signal):

$P_S = k_f^2\, P$
Lecture 5 11
Assumptions
2. The signal does not affect the noise at the output.

Message-free component at the receiver:

$x_n(t) = A_c\cos(2\pi f_c t) + n_c(t)\cos(2\pi f_c t) - n_s(t)\sin(2\pi f_c t)$

Instantaneous frequency:

$f_i(t) = \dfrac{1}{2\pi}\dfrac{d\theta(t)}{dt} = \dfrac{1}{2\pi}\dfrac{d}{dt}\tan^{-1}\!\left(\dfrac{n_s(t)}{A_c + n_c(t)}\right) \approx \dfrac{1}{2\pi A_c}\dfrac{d n_s(t)}{dt}$

We know the PSD of $n_s(t)$, but what about its derivative?
Lecture 5 12
Fourier theory property:

$x(t) \leftrightarrow X(f), \qquad \dfrac{dx(t)}{dt} \leftrightarrow j2\pi f\, X(f)$

Discriminator output:

$f_i(t) \approx \dfrac{1}{2\pi A_c}\dfrac{d n_s(t)}{dt} \quad\Rightarrow\quad F_i(f) = \dfrac{1}{2\pi A_c}\, j2\pi f\, N_s(f) = \dfrac{jf}{A_c}\, N_s(f)$
Lecture 5 13
PSD property: if $Y(f) = H(f)\,X(f)$, then

$S_Y(f) = |H(f)|^2\, S_X(f)$

PSD of the discriminator output (with $H(f) = jf/A_c$ acting on $N_s(f)$):

$S_F(f) = \dfrac{f^2}{A_c^2}\, S_{N_s}(f)$
Lecture 5 14
[Figure: PSD of $n_s(t)$ (flat); PSD of $\frac{1}{2\pi A_c}\frac{dn_s(t)}{dt}$ (parabolic, $\propto f^2$); PSD after the LPF (parabolic, restricted to $|f| \le W$).]
Lecture 5 15
PSD of the LPF noise term:

$S_F(f) = \dfrac{f^2 N_o}{A_c^2}, \qquad |f| \le W$
Predetection signal is:

$x(t) = A_c\cos\!\left(2\pi f_c t + \phi(t)\right) + n_c(t)\cos(2\pi f_c t) - n_s(t)\sin(2\pi f_c t)$

Predetection SNR is:

$SNR_{pre} = \dfrac{A_c^2}{2 N_o B_T}$

Threshold point is:

$\dfrac{A_c^2}{4 N_o W(\beta + 1)} > 10$

Cannot arbitrarily increase $SNR_{FM}$ by increasing β.
Lecture 5 19
Pre-emphasis and De-emphasis
Can improve output SNR by about 13 dB
Lecture 5 20
Example
The improvement in output SNR afforded by using pre-emphasis and de-emphasis in FM is defined by:

$I = \dfrac{SNR_o \text{ with pre-/de-emphasis}}{SNR_o \text{ without pre-/de-emphasis}} = \dfrac{\text{average output noise power without pre-/de-emphasis}}{\text{average output noise power with pre-/de-emphasis}}$

If $H_{de}(f)$ is the transfer function of the de-emphasis filter, find an expression for the improvement, $I$.
Lecture 5 21
Analog system performance
Parameters:
Single-tone message: $m(t) = \cos(2\pi f_m t)$
Message bandwidth: $W = f_m$
AM system: $\mu = 1$
FM system: $\beta = 5$

Performance:

$SNR_{DSB\text{-}SC} = SNR_{base}$
$SNR_{AM} = \tfrac{1}{3}\, SNR_{base}$
$SNR_{FM} = \tfrac{75}{2}\, SNR_{base}$

Bandwidth:

$B_{DSB\text{-}SC} = 2W$
$B_{AM} = 2W$
$B_{FM} = 12W$
Lecture 5 22
Analog system performance
Lecture 5 23
Summary
Noise in FM:
Increasing carrier power reduces noise at the receiver output
Has a threshold effect
Pre-emphasis and de-emphasis improve the output SNR

Comparison of analog modulation schemes:
AM worse than baseband
DSB/SSB same as baseband
FM better than baseband
Lecture 6 1
Lecture 6
Digital communication systems
Digital vs Analog communications
Pulse Code Modulation
(See sections 4.1, 4.2 and 4.3)
Lecture 6 2
Digital vs Analog
[Figure: an example analog message waveform and a digital message waveform.]
Lecture 6 3
Digital vs Analog
Analog:
Recreate the waveform accurately
Performance criterion is SNR at the receiver output

Digital:
Decide which symbol was sent
Performance criterion is the probability of the receiver making a decision error

Advantages of digital:
1. Digital signals are more immune to noise
2. Repeaters can re-transmit a noise-free signal
Lecture 6 4
Sampling: discrete in time
Nyquist Sampling Theorem:
A signal whose bandwidth is limited to W Hz can be reconstructed
exactly from its samples taken uniformly at a rate of R>2W Hz.
Lecture 6 5
Maximum information rate
How many bits can be transferred over a channel of bandwidth B Hz (ignoring noise)?

A signal with a bandwidth of B Hz is not distorted over this channel.
A signal with a bandwidth of B Hz requires samples taken at 2B Hz.

Can transmit: 2 bits of information per second per Hz.
Lecture 6 6
Pulse code modulation
Represent an analog waveform in digital form
Lecture 6 7
Quantization: discrete in amplitude
Round the amplitude of each sample to the nearest one of a finite number of levels.
Lecture 6 8
Encode
Assign each quantization level a code
Lecture 6 9
Sampling vs Quantization

Sampling: non-destructive if $f_s > 2W$; the analog waveform can be reconstructed exactly by using a low-pass filter.

Quantization: destructive; once the signal has been rounded off it can never be reconstructed exactly.
Lecture 6 10
[Figure: a sampled signal, the quantized signal (step size of 0.1), and the quantization error.]
Lecture 6 11
Quantization noise
Let Δ be the separation between quantization levels:

$\Delta = \dfrac{2 m_p}{L}$

where $L = 2^n$ is the number of quantization levels and $m_p$ is the peak allowed signal amplitude.

The round-off effect of the quantizer ensures that $|q| < \Delta/2$, where $q$ is a random variable representing the quantization error.

Assume $q$ is zero mean with uniform pdf, so the mean square error is:

$E\{q^2\} = \int_{-\infty}^{\infty} q^2\, p(q)\, dq = \int_{-\Delta/2}^{\Delta/2} \dfrac{q^2}{\Delta}\, dq = \dfrac{\Delta^2}{12}$
Lecture 6 12
Quantization noise
Let the message power be $P$.

Noise power is (since $q$ is zero mean):

$P_N = E\{q^2\} = \dfrac{\Delta^2}{12} = \dfrac{m_p^2}{3 L^2} = \dfrac{m_p^2}{3 \times 2^{2n}}$

Output SNR of the quantizer:

$SNR_Q = \dfrac{P}{P_N} = \dfrac{3P}{m_p^2}\, 2^{2n}$

or in dB:

$SNR_Q = 6.02\, n + 10\log_{10}\!\left(\dfrac{3P}{m_p^2}\right)$ dB
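A sketch verifying the dB formula against a simulated uniform quantizer (a full-scale sinusoidal test signal is an assumption, not from the notes):

```python
import numpy as np

n, mp = 8, 1.0
L = 2**n
delta = 2 * mp / L
t = np.linspace(0, 1, 1_000_000)
m = mp * np.sin(2 * np.pi * 5 * t)            # test message, P = mp^2 / 2

mq = delta * np.round(m / delta)              # uniform (mid-tread) quantizer
snr_meas = 10 * np.log10(np.mean(m**2) / np.mean((m - mq)**2))
snr_pred = 6.02 * n + 10 * np.log10(3 * 0.5)  # P/mp^2 = 1/2 for a sinusoid
print(snr_meas, snr_pred)                     # both ~49.9 dB
```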
Lecture 6 13
Bandwidth of PCM

Each message sample requires n bits.
If the message has bandwidth W Hz, then PCM contains 2nW bits per second.

Bandwidth required is:

$B_T = nW$

SNR can be written:

$SNR_Q = \dfrac{3P}{m_p^2}\, 2^{2 B_T / W}$

A small increase in bandwidth yields a large increase in SNR.
Lecture 6 15
Nonuniform quantization

For audio signals (e.g. speech), small signal amplitudes occur more often than large signal amplitudes.

Better to have closely spaced quantization levels at low signal amplitudes, widely spaced levels at large signal amplitudes.

The quantizer then has better resolution at low amplitudes (where the signal spends more time).

[Figure: uniform vs non-uniform quantization levels.]
Lecture 6 16
Nonuniform quantization
A uniform quantizer is easier to implement than a nonlinear one.
Compress the signal first, then use a uniform quantizer, then expand the signal (i.e., compand).
Lecture 6 17
Companding
Lecture 6 18
Summary

Analog communications system:
Receiver must recreate the transmitted waveform
Performance measure is signal-to-noise ratio

Digital communications system:
Receiver must decide which symbol was transmitted
Performance measure is probability of error
Pulse-code modulation:
Scheme to represent analog signal in digital format
Sample, quantize, encode
Companding (nonuniform quantization)
Lecture 7 1
Lecture 7

Performance of digital systems in noise:
Baseband
ASK, PSK, FSK
Compare all schemes
(See sections 4.4, 4.5)
Lecture 7 2
Digital receiver

[Block diagram: received signal s(t) plus noise passes through a filter and a demodulator.]
Lecture 7 3
Baseband digital system
[Figure: baseband digital system with transmitted signal s(t).]
Lecture 7 4
Gaussian noise, probability

$p(n) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\dfrac{n^2}{2\sigma^2}\right)$
Lecture 7 5
White noise
LPF (use for baseband):

$P = \int_{-W}^{W} \dfrac{N_o}{2}\, df = N_o W$

BPF (use for bandpass):

$P = \int_{-f_c - W}^{-f_c + W} \dfrac{N_o}{2}\, df + \int_{f_c - W}^{f_c + W} \dfrac{N_o}{2}\, df = 2 N_o W$
NOTE: For zero-mean noise,
variance ≡ average power
i.e., σ 2=P
Lecture 7 6
Baseband system – “0” transmitted

Transmitted signal: $s_0(t)$
Noise signal: $n(t)$
Received signal: $y_0(t) = s_0(t) + n(t)$

Error if $y_0(t) > A/2$:

$P_{e0} = \int_{A/2}^{\infty} \mathcal{N}(0, \sigma^2)\, dn$
Lecture 7 7
Baseband system – “1” transmitted
Transmitted signal: $s_1(t)$
Noise signal: $n(t)$
Received signal: $y_1(t) = s_1(t) + n(t)$

Error if $y_1(t) < A/2$:

$P_{e1} = \int_{-\infty}^{A/2} \mathcal{N}(A, \sigma^2)\, dn$
Lecture 7 8
Baseband system – errors
Possible errors:
1. Symbol “0” transmitted, receiver decides “1”
2. Symbol “1” transmitted, receiver decides “0”
Total probability of error:

$P_e = p_0 P_{e0} + p_1 P_{e1}$

where $p_0$ is the probability of “0” being sent and $P_{e0}$ is the probability of making an error if “0” was sent.
Lecture 7 9
How to calculate Pe?

For equally-probable symbols:

$P_e = \tfrac{1}{2} P_{e0} + \tfrac{1}{2} P_{e1}$

Can show that $P_{e0} = P_{e1}$ (the two tail areas are equal).

Hence, $P_e = \tfrac{1}{2} P_{e0} + \tfrac{1}{2} P_{e0} = P_{e0}$:

$P_e = \int_{A/2}^{\infty} \mathcal{N}(0, \sigma^2)\, dn$
Lecture 7 10
1. Complementary error function (erfc in Matlab):

$P_e = \int_{A/2}^{\infty} \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\dfrac{n^2}{2\sigma^2}\right) dn$

$\mathrm{erfc}(u) = \dfrac{2}{\sqrt{\pi}} \int_{u}^{\infty} \exp(-n^2)\, dn \quad\Rightarrow\quad P_e = \dfrac{1}{2}\,\mathrm{erfc}\!\left(\dfrac{A}{2\sqrt{2}\,\sigma}\right)$

2. Q-function (tail function):

$Q(u) = \dfrac{1}{\sqrt{2\pi}} \int_{u}^{\infty} \exp\left(-\dfrac{n^2}{2}\right) dn \quad\Rightarrow\quad P_e = Q\!\left(\dfrac{A}{2\sigma}\right)$
Lecture 7 11
Baseband error probability
Lecture 7 12
Example
Consider a digital system which uses a voltage level of 0 volts to
represent a “0”, and a level of 0.22 volts to represent a “1”. The digital
waveform has a bandwidth of 15 kHz.
If this digital waveform is to be transmitted over a baseband channel having additive noise with flat power spectral density of $N_o/2 = 3\times 10^{-8}$ W/Hz, what is the probability of error at the receiver output?
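One way to set up the numbers (a sketch, assuming the receiver noise power is $\sigma^2 = N_o W$ for the baseband bandwidth $W$):

```python
from math import erfc, sqrt

A = 0.22                     # "1" level, volts
W = 15e3                     # bandwidth, Hz
No = 2 * 3e-8                # PSD height is No/2 = 3e-8 W/Hz

sigma = sqrt(No * W)         # sigma^2 = No * W = 9e-4
Pe = 0.5 * erfc(A / (2 * sqrt(2) * sigma))
print(f"sigma = {sigma:.3f} V, Pe = {Pe:.2e}")   # Pe ~ 1.2e-4
```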
Lecture 7 13
Amplitude-shift keying

$s_0(t) = 0$
$s_1(t) = A\cos(\omega_c t)$
Lecture 7 14
Synchronous detector

Identical to the analog synchronous detector.
Lecture 7 15
ASK – “0” transmitted
Predetection signal (ASK “0” plus bandpass noise):

$x_0(t) = 0 + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

After the multiplier:

$r_0(t) = x_0(t) \times 2\cos(\omega_c t) = n_c(t)\left[1 + \cos(2\omega_c t)\right] - n_s(t)\sin(2\omega_c t)$

Receiver output:

$y_0(t) = n_c(t)$
Lecture 7 16
ASK – “1” transmitted
Predetection signal (ASK “1” plus bandpass noise):

$x_1(t) = A\cos(\omega_c t) + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

After the multiplier:

$r_1(t) = x_1(t) \times 2\cos(\omega_c t) = \left[A + n_c(t)\right]\left[1 + \cos(2\omega_c t)\right] - n_s(t)\sin(2\omega_c t)$

Receiver output:

$y_1(t) = A + n_c(t)$
Lecture 7 17
PDFs at receiver output

ASK – “0” transmitted: PDF of $y_0(t) = n_c(t)$
ASK – “1” transmitted: PDF of $y_1(t) = A + n_c(t)$

Same as baseband!

$P_{e,ASK} = \dfrac{1}{2}\,\mathrm{erfc}\!\left(\dfrac{A}{2\sqrt{2}\,\sigma}\right)$
Lecture 7 18
Phase-shift keying

$s_0(t) = -A\cos(\omega_c t)$
$s_1(t) = A\cos(\omega_c t)$
Lecture 7 19
PSK demodulator
Band-pass filter bandwidth matched to the modulated signal bandwidth
Carrier frequency is $\omega_c$
Low-pass filter leaves only baseband signals
Lecture 7 20
PSK – “0” transmitted
Predetection signal (PSK “0” plus bandpass noise):

$x_0(t) = -A\cos(\omega_c t) + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

After the multiplier:

$r_0(t) = x_0(t) \times 2\cos(\omega_c t) = \left[-A + n_c(t)\right]\left[1 + \cos(2\omega_c t)\right] - n_s(t)\sin(2\omega_c t)$

Receiver output:

$y_0(t) = -A + n_c(t)$
Lecture 7 21
PSK – “1” transmitted

Predetection signal (PSK “1” plus bandpass noise):

$x_1(t) = A\cos(\omega_c t) + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

After the multiplier:

$r_1(t) = x_1(t) \times 2\cos(\omega_c t) = \left[A + n_c(t)\right]\left[1 + \cos(2\omega_c t)\right] - n_s(t)\sin(2\omega_c t)$

Receiver output:

$y_1(t) = A + n_c(t)$
Lecture 7 22
PSK – PDFs at receiver output

PSK – “0” transmitted: PDF of $y_0(t) = -A + n_c(t)$
PSK – “1” transmitted: PDF of $y_1(t) = A + n_c(t)$

Set the threshold at 0:
if $y < 0$, decide “0”
if $y > 0$, decide “1”
Lecture 7 23
PSK – probability of error
$P_{e0} = \int_{0}^{\infty} \mathcal{N}(-A, \sigma^2)\, dn = \int_{0}^{\infty} \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\dfrac{(n+A)^2}{2\sigma^2}\right) dn$

$P_{e1} = \int_{-\infty}^{0} \mathcal{N}(A, \sigma^2)\, dn = \int_{-\infty}^{0} \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\dfrac{(n-A)^2}{2\sigma^2}\right) dn$

Probability of bit error:

$P_{e,PSK} = \dfrac{1}{2}\,\mathrm{erfc}\!\left(\dfrac{A}{\sqrt{2}\,\sigma}\right)$
Lecture 7 24
Frequency-shift keying
$s_0(t) = A\cos(\omega_0 t)$
$s_1(t) = A\cos(\omega_1 t)$

(compare PSK, where the two symbols differ in phase rather than frequency)
Lecture 7 25
FSK detector

$s_0(t) = A\cos(\omega_0 t)$
$s_1(t) = A\cos(\omega_1 t)$

[Block diagram: two synchronous detectors, one at $\omega_0$ and one at $\omega_1$, whose outputs are subtracted.]
Lecture 7 26
FSK

Receiver output (branch 1 minus branch 0):

$y_0(t) = -A + \left[n_{c1}(t) - n_{c0}(t)\right]$
$y_1(t) = A + \left[n_{c1}(t) - n_{c0}(t)\right]$

PDFs are the same as for PSK, but the variance is doubled (independent noise sources, variances add):

$P_{e,FSK} = \dfrac{1}{2}\,\mathrm{erfc}\!\left(\dfrac{A}{2\sigma}\right)$
Lecture 7 27
Digital performance comparison
Lecture 7 28
Summary
Probability of error for PSK and FSK:

$P_{e,PSK} = \dfrac{1}{2}\,\mathrm{erfc}\!\left(\dfrac{A}{\sqrt{2}\,\sigma}\right), \qquad P_{e,FSK} = \dfrac{1}{2}\,\mathrm{erfc}\!\left(\dfrac{A}{2\sigma}\right)$

For a baseband (or ASK) system:

$P_e = \int_{A/2}^{\infty} \mathcal{N}(0, \sigma^2)\, dn = \dfrac{1}{2}\,\mathrm{erfc}\!\left(\dfrac{A}{2\sqrt{2}\,\sigma}\right)$

Comparison of digital systems: PSK best, then FSK; ASK and baseband the same.
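A sketch tabulating the three formulas over an illustrative range of $A/\sigma$ (the grid of ratios is an assumption, not from the notes):

```python
import numpy as np
from scipy.special import erfc

A_over_sigma = np.linspace(0, 10, 6)
pe_psk = 0.5 * erfc(A_over_sigma / np.sqrt(2))         # PSK
pe_fsk = 0.5 * erfc(A_over_sigma / 2)                  # FSK
pe_ask = 0.5 * erfc(A_over_sigma / (2 * np.sqrt(2)))   # ASK / baseband

for r, p1, p2, p3 in zip(A_over_sigma, pe_psk, pe_fsk, pe_ask):
    print(f"A/sigma={r:4.1f}  PSK={p1:.2e}  FSK={p2:.2e}  ASK={p3:.2e}")
# For a given A/sigma, PSK has the lowest Pe, then FSK, then ASK/baseband.
```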
Lecture 8 1
Information theory – why?
Information
Entropy
Source coding (a little)
(See sections 5.1 to 5.4.2)
Lecture 8 2
What is the performance of the “best” system?
“What would be the characteristics of an ideal system, [one that] is not
limited by our engineering ingenuity and inventiveness but limited
rather only by the fundamental nature of the physical universe”
Taub & Schilling
Lecture 8 3
Information
The purpose of a communication system is to convey
information from one point to another
What is information?
Definition:
$I(s) = \log_2\!\left(\dfrac{1}{p}\right) = -\log_2(p)$ bits

where $I(s)$ is the information in symbol $s$, $p$ is the probability of occurrence of symbol $s$, and the bit is the conventional unit of information.
Lecture 8 4
Properties of I(s)
1. If $p = 1$, $I(s) = 0$
(a symbol that is certain to occur conveys no information)
2. $0 \le I(s)$ for $0 \le p \le 1$
(a symbol never conveys negative information)
Lecture 8 5
Example

Suppose we have two symbols, $s_0 = 0$ and $s_1 = 1$.

Each has probability of occurrence $p_0 = p_1 = \tfrac{1}{2}$.

Each symbol represents:

$I(s) = -\log_2\!\left(\tfrac{1}{2}\right) = 1$ bit of information

In this example, one symbol = one information bit, but it is not always so!
Lecture 8 6
Symbols:
may be binary (“0” and “1”)
can have more than 2 symbols, e.g. letters of the alphabet, etc.

The sequence of symbols is random (otherwise no information is conveyed).
Definition:
If successive symbols are statistically independent, the information
source is a zero-memory source (or discrete memoryless source)
How much information is conveyed by symbols?
Lecture 8 7
Entropy
Definition:

$H(S) = -\sum_{\text{all } k} p_k \log_2(p_k)$ bits/symbol

where $S = \{s_1, s_2, \ldots, s_K\}$ is the alphabet (the collection of all possible symbols) and $p_k$ is the probability of occurrence of symbol $s_k$.

Entropy: average information per symbol.

Note that $\sum_{\text{all } k} p_k = 1$: we're certain that the symbol comes from the known alphabet.
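A minimal sketch of the definition in code (the second test case is the three-symbol alphabet used two slides below):

```python
import math

def entropy(probs):
    """Average information per symbol: H(S) = -sum p_k log2 p_k, bits/symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))        # 1.0 bit/symbol (binary, equally likely)
print(entropy([0.7, 0.2, 0.1]))   # ~1.157 bits/symbol
```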
Lecture 8 8
Example – binary source
Alphabet: $S = \{s_0, s_1\}$

Probabilities: $p_0 = 1 - p_1$

Entropy:

$H(S) = -\sum_{\text{all } k} p_k \log_2(p_k) = -(1 - p_1)\log_2(1 - p_1) - p_1 \log_2(p_1)$

How to represent (encode) each symbol? Let $s_0 = 0$, $s_1 = 1$: this requires 1 bit/symbol to transmit.
Lecture 8 9
Example – three symbol alphabet

Alphabet: $S = \{A, B, C\}$

Probabilities: $p_A = 0.7,\ p_B = 0.2,\ p_C = 0.1$

Entropy:

$H(S) = -\sum_{\text{all } k} p_k \log_2(p_k) = 1.157$ bits/symbol

How to represent (encode) each symbol? Let $A = 00$, $B = 01$, $C = 10$: this requires 2 bits/symbol to transmit.
Lecture 8 10
[Figure: symbols generated at a rate of 1 symbol/sec are mapped to 2-bit code words, so the bitstream is generated at a rate of 2 bits/sec; the system needs to process 2 bits/sec.]
Lecture 8 11
Source coding
The amount of information we need to transmit is determined (amongst other things) by how many bits we need to transmit for each symbol.
In the binary case, only 1 bit required to transmit each symbol
In the {A,B,C} case, 2 bits required to transmit each symbol
Lecture 8 12
Examples
Telephone: speech waveform → 8000 symbols/sec → 64000 bits/sec; the system needs to process 64 kb/s.

Cell phone: speech waveform → 8000 symbols/sec → 13000 bits/sec; the system needs to process 13 kb/s.
Lecture 8 13
Source vs channel coding

Source coding: minimize the number of bits to be transmitted.
Channel coding: add extra bits to detect/correct errors.
Lecture 8 14
Source coding

All symbols do not need to be encoded with the same number of bits.

Example: $p_A = 0.7,\ p_B = 0.2,\ p_C = 0.1$

Let $A = 0$, $B = 10$, $C = 11$.

[Figure: an example sequence of 6 symbols now needs only 8 bits.]
Lecture 8 15
Average codeword length
Definition:

$L = \sum_{\text{all } k} p_k\, l_k$

where $p_k$ is the probability of occurrence of symbol $s_k$ and $l_k$ is the number of bits used to represent symbol $s_k$.

Example: $p_A = 0.7,\ p_B = 0.2,\ p_C = 0.1$ with $A = 0$, $B = 10$, $C = 11$:

$L = 0.7\times 1 + 0.2\times 2 + 0.1\times 2 = 1.3$ bits/symbol
Lecture 8 16
Source coding
Use variable-length code words:
a symbol that occurs frequently (i.e., relatively high $p_k$) should have a short code word
a symbol that occurs rarely should have a long code word
Lecture 8 17
Summary

Information content (of a particular symbol):

$I(s) = \log_2\!\left(\dfrac{1}{p}\right) = -\log_2(p)$ bits

Entropy (for a complete alphabet; the average information content per symbol):

$H(S) = -\sum_{\text{all } k} p_k \log_2(p_k)$ bits/symbol
Source coding:
How many bits do we need to represent each symbol?
Lecture 9 1
Source coding theorem
Huffman coding algorithm
(See sections 5.4.2, 5.4.3)
Lecture 9 2
Source coding

All symbols do not need to be encoded with the same number of bits.

Example:
probabilities: $p_A = 0.7,\ p_B = 0.2,\ p_C = 0.1$
code words: $A = 0$, $B = 10$, $C = 11$ (6 symbols → 8 bits)
average codeword length: $L = 0.7\times 1 + 0.2\times 2 + 0.1\times 2 = 1.3$ bits/symbol
Lecture 9 3
Source coding
How can we reduce the number of bits we need to transmit?
What is the minimum number of bits we need for a
particular symbol?
(Source coding theorem)
How can we encode symbols to achieve this minimum
number of bits?
(Huffman coding algorithm)
Lecture 9 4
Equal probability symbols
Example:

Alphabet: $S = \{A, B\}$
Probabilities: $p_A = 0.5,\ p_B = 0.5$
Code words: $A = 0$, $B = 1$

Requires 1 bit for each symbol.

In general, for n equally-likely symbols:
Probability of occurrence of each symbol is $p = 1/n$
Number of bits to represent each symbol is

$l = \log_2\!\left(\dfrac{1}{p}\right) = \log_2(n)$
Lecture 9 5
Unequal probabilities?

Alphabet: $S = \{s_1, s_2, \ldots, s_K\}$
Probabilities: $p_1, p_2, \ldots, p_K$

Any random sequence of N symbols (large N) contains:
$s_1$: $N \times p_1$ occurrences
$s_2$: $N \times p_2$ occurrences, etc.

A particular sequence of N symbols:

$S_N = \{s_1, s_2, s_1, s_3, s_3, s_2, s_1, \ldots\}$

Probability of this particular sequence occurring:

$p(S_N) = p_1\, p_2\, p_1\, p_3\, p_3\, p_2\, p_1 \cdots = p_1^{N p_1} \times p_2^{N p_2} \times \cdots$
Lecture 9 6
Unequal probabilities? (cont.)

Probability of any sequence of N symbols occurring:

$p(S_N) = p_1^{N p_1}\, p_2^{N p_2} \cdots$

Number of bits required to represent a sequence of N symbols:

$l = \log_2\!\left(\dfrac{1}{p(S_N)}\right) = -\log_2\!\left(p_1^{N p_1}\, p_2^{N p_2} \cdots\right) = -N p_1 \log_2(p_1) - N p_2 \log_2(p_2) - \cdots = -N \sum_{\text{all } k} p_k \log_2(p_k) = N\, H(S)$

Average number of bits for one symbol:

$L = \dfrac{l}{N} = H(S)$
Lecture 9 7
Minimum codeword length

Source Coding Theorem:
For a general alphabet S, the minimum average codeword length is given by the entropy, H(S).

Significance: for any practical source coding scheme, the average codeword length will always be greater than or equal to the source entropy, i.e.,

$L \ge H(S)$ bits/symbol

How can we design an efficient coding scheme?
Lecture 9 8
Huffman coding algorithm
Optimum coding scheme – yields the shortest average codeword length.
Lecture 9 9
Example

Consider a five-symbol alphabet having the probabilities indicated:

Symbols: $A, B, C, D, E$
Probabilities: $p_A = 0.05,\ p_B = 0.15,\ p_C = 0.4,\ p_D = 0.3,\ p_E = 0.1$

1. Calculate the entropy of the alphabet.
2. Using the Huffman algorithm, design a source coding scheme for this alphabet, and comment on the average codeword length achieved.
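A sketch of one standard min-heap implementation of the Huffman algorithm (the particular 0/1 code assignments may differ from the notes, but the average codeword length is the same):

```python
import heapq

def huffman(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> code word."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # two least-probable nodes
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"A": 0.05, "B": 0.15, "C": 0.4, "D": 0.3, "E": 0.1}
codes = huffman(probs)
L = sum(probs[s] * len(codes[s]) for s in probs)
print(codes, f"L = {L:.2f} bits/symbol")   # L = 2.05, close to H(S) ~ 2.01
```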
Lecture 9 10
Uniquely decodable
i.e., there is only one way to break the bit stream into valid code words

Instantaneous
i.e., we know immediately when a code word has ended
Lecture 9 11
Summary
Source coding theorem:
For a general alphabet S , the minimum average codeword
length is given by the entropy, H(S).
Huffman coding algorithm:
Practical coding scheme that yields the shortest average
codeword length
Lecture 10 1

Lecture 10

How much information can be reliably transferred over a noisy channel?
(Channel capacity)
What does information theory have to say about analog
communication systems?
(See sections 5.5, 5.6)
Lecture 10 2
Reliable transfer of information

If the channel is noisy, can information be transferred reliably?
How much information?
Lecture 10 3
Information rate
Definition:

$R = r\, H$ bits/sec

where $R$ is the average number of information bits transferred per second, $r$ is the average number of symbols per second, and $H$ is the average number of information bits per symbol.

Intuition:
$R$ can be increased arbitrarily by increasing the symbol rate $r$
For a noisy channel, errors are bound to occur
Is there a value of $R$ where the probability of error is arbitrarily small?
Lecture 10 4
Channel capacity
Definition:
Channel capacity, $C$, is the maximum rate of information transfer over a noisy channel with arbitrarily small probability of error.
Channel Capacity Theorem
If R≤ C , then there exists a coding scheme such that
symbols can be transmitted over a noisy channel with
an arbitrarily small probability of error
Lecture 10 5
Channel capacity theorem is a surprising result:
Gaussian noise has a PDF that is non-zero for all noise amplitudes.
Sometimes (however infrequently) the noise must over-ride the signal → bit error.
But the theorem says we can transfer information without error!

The basic limitation due to noise is on the speed of communication, not on reliability.
So what is the channel capacity C ??
Lecture 10 6
Hartley-Shannon Theorem
For an additive white Gaussian noise channel, the channel capacity is:

$C = B \log_2\!\left(1 + \dfrac{P_S}{P_N}\right)$

where $B$ is the bandwidth of the channel, $P_S$ is the average signal power at the receiver, and $P_N$ is the average noise power at the receiver.
Lecture 10 7
Example
Consider a baseband system.

Noise power is:

$P_N = \int_{-B}^{B} \dfrac{N_o}{2}\, df = N_o B$

Channel capacity:

$C = B \log_2\!\left(1 + \dfrac{P_S}{N_o B}\right)$
Lecture 10 8
Example
Consider a baseband channel with a bandwidth of B = 4 kHz. Assume a message signal with an average power of $P_s = 10$ W is transmitted over this channel, which has additive noise with a flat spectral density of height $N_o/2$ with $N_o = 10^{-6}$ W/Hz.
1. Calculate the channel capacity of this channel.
2. If the message signal is amplified by a factor of n before transmission,
calculate the channel capacity when (a) n=2, and (b) n=10.
3. If the bandwidth of the channel is doubled to 8 kHz, what is the
channel capacity now?
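A sketch of the computation (assuming $P_N = N_o B$ at the receiver, and that "amplified by a factor of n" scales the received signal power by n):

```python
from math import log2

No, B, Ps = 1e-6, 4e3, 10.0

def capacity(B, Ps, No):
    return B * log2(1 + Ps / (No * B))    # Hartley-Shannon, bits/sec

print(capacity(B, Ps, No))        # part 1: ~4.5e4 bits/sec
print(capacity(B, 2 * Ps, No))    # part 2a: n = 2 (power scaling assumed)
print(capacity(B, 10 * Ps, No))   # part 2b: n = 10
print(capacity(2 * B, Ps, No))    # part 3: B = 8 kHz
```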
Lecture 10 9
[Worked solution: $P_s = 10$ W, $N_o = 10^{-6}$ W/Hz, $B = 4000$ Hz substituted into $C = B\log_2\!\left(1 + P_S/(N_o B)\right)$.]
Lecture 10 10
Comments

$C = B \log_2\!\left(1 + \dfrac{P_S}{P_N}\right)$

More signal power increases capacity, but the increase is slow (logarithmic). Capacity can be increased arbitrarily through $P_S$.

More bandwidth allows more symbols per second, but also increases the noise. Can show that:

$\lim_{B \to \infty} C = 1.44\, \dfrac{P_S}{N_o}$

Capacity cannot be increased arbitrarily through $B$.
Lecture 10 11
More comments
$C = B \log_2\!\left(1 + \dfrac{P_S}{P_N}\right)$

This is the capacity of an ideal “best” system.

How can we design something that comes close? Through channel coding and modulation/demodulation schemes. But no deterministic method exists to do it!
Lecture 10 12
Information theory and analog
The optimum communication system achieves the largest SNR at the receiver output.

Optimum analog system:

Maximum rate that information can arrive at the receiver:
Lecture 10 13
$C_{in} = B \log_2\!\left(1 + SNR_{in}\right)$

Maximum rate that information can leave the receiver:

$C_{out} = W \log_2\!\left(1 + SNR_{out}\right)$

Ideally, no information is lost:

$C_{out} = C_{in}$

Equating gives:

$SNR_{out} = \left(1 + SNR_{in}\right)^{B/W} - 1$
For any increase in bandwidth, output SNR increases
exponentially
Lecture 10 14
Assume that the channel noise is AWGN, having PSD $N_o/2$.

Average noise power at the demodulator input is: $P_N = N_o B$

SNR at receiver input (with transmitted power $P_T$ and baseband SNR $P_T/N_o W$):

$SNR_{in} = \dfrac{P_T}{N_o B} = \dfrac{P_T}{N_o W}\,\dfrac{W}{B} = SNR_{base}\,\dfrac{W}{B}$

SNR at receiver output:

$SNR_{out} = \left(1 + \dfrac{W}{B}\, SNR_{base}\right)^{B/W} - 1$

where $B/W$ is the bandwidth spreading ratio (transmission bandwidth / message bandwidth).
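A quick sketch of the exponential trade-off (the 20 dB baseband SNR and the grid of spreading ratios are illustrative assumptions):

```python
import math

snr_base = 10**(20 / 10)              # assume SNR_base = 20 dB -> 100
for ratio in (1, 2, 4, 8):            # bandwidth spreading ratio B/W
    snr_out = (1 + snr_base / ratio)**ratio - 1
    print(f"B/W = {ratio}: SNR_out = {10 * math.log10(snr_out):.1f} dB")
# Output SNR of the ideal system grows rapidly with B/W: ~20, 34, 57, 90 dB.
```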
Lecture 10 15
Analog performance
[Figure: ideal performance vs actual performance of the analog modulation schemes.]
Lecture 10 16
Summary
Information rate: R = r H bits/sec
Channel Capacity Theorem:
If R≤ C , then there exists a coding scheme such that
symbols can be transmitted over a noisy channel with an
arbitrarily small probability of error
Hartley-Shannon Theorem (Gaussian noise channel):

$C = B \log_2\!\left(1 + \dfrac{P_S}{P_N}\right)$ bits/sec

Analog communication systems:
Information theory tells us the best SNR.