8/13/2019 Lecture Coding
1/92
EC 723: Satellite Communication Systems
Mohamed Khedr
http://webmail.aast.edu/~khedr
2006-02-16 Lecture 9 2
Syllabus (tentative)

Week 1: Overview
Week 2: Orbits and constellations: GEO, MEO and LEO
Week 3: Satellite space segment, propagation and satellite links, channel modelling
Week 4: Satellite communications techniques
Week 5: Satellite communications techniques II
Week 6: Satellite communications techniques III
Week 7: Satellite error correction techniques
Week 8: Satellite error correction techniques II, multiple access
Week 9: Satellites in networks I, INTELSAT systems, VSAT networks, GPS
Week 10: GEO, MEO and LEO mobile communications; INMARSAT systems, Iridium, Globalstar, Odyssey
Week 11: Presentations
Week 12: Presentations
Week 13: Presentations
Week 14: Presentations
Week 15: Presentations
Block diagram of a DCS
Transmit side: Format -> Source encode -> Channel encode -> Pulse modulate -> Bandpass modulate
(channel encoding through bandpass modulation form the digital modulation chain)

Channel

Receive side: Demodulate -> Sample/Detect -> Channel decode -> Source decode -> Format
(the digital demodulation chain)
What is channel coding?

Channel coding: transforming signals to improve communications performance by increasing robustness against channel impairments (noise, interference, fading, ...).
Waveform coding: transforming waveforms into better waveforms.
Structured sequences: transforming data sequences into better sequences having structured redundancy. "Better" in the sense of making the decision process less subject to errors.
What is channel coding?
Coding is a mapping of binary source output sequences of length k into binary channel input sequences of length n (> k). A block code is denoted by (n,k).
Binary coding produces 2^k codewords of length n. The extra bits in the codewords are used for error detection/correction.
In this course we concentrate on two coding types, realized by binary numbers:
Block codes: the mapping of information source blocks into channel inputs is done independently; the encoder output depends only on the current block of the input sequence.
Convolutional codes: each source bit influences n(L+1) channel input bits, where K = L+1 is the constraint length and L is the memory depth. These codes are denoted by (n,k,L).

(n,k) block coder: k bits in, n bits out.
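As a minimal illustration of the (n,k) block mapping above, the following sketch implements the simplest possible block code, the (3,1) repetition code (an assumed example, not one from these slides): each k = 1 data bit maps independently to an n = 3 codeword, and majority voting corrects any single error per block.

```python
# (3,1) repetition code: a minimal (n,k) block code sketch.

def encode_rep3(bits):
    """Encode each data bit as three copies (k=1 -> n=3)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode_rep3(code):
    """Majority-vote each 3-bit block back to one data bit."""
    return [1 if sum(code[i:i+3]) >= 2 else 0 for i in range(0, len(code), 3)]

msg = [1, 0, 1]
cw = encode_rep3(msg)           # [1,1,1, 0,0,0, 1,1,1]
cw[1] ^= 1                      # one channel error in the first block
assert decode_rep3(cw) == msg   # d_min = 3, so t = 1 error per block is corrected
```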
Error control techniques

Automatic Repeat reQuest (ARQ): full-duplex connection, error detection codes.
The receiver sends feedback to the transmitter indicating whether an error was detected in the received packet (Not-Acknowledgement, NACK) or not (Acknowledgement, ACK).
The transmitter retransmits the previously sent packet if it receives a NACK.
Forward Error Correction (FEC): simplex connection, error correction codes; the receiver tries to correct some errors.
Hybrid ARQ (ARQ+FEC): full-duplex connection, error detection and correction codes.
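The ARQ loop above can be sketched as a stop-and-wait simulation. This is an illustrative toy, not a protocol from the lecture: the packet format (a single even-parity bit), the bit-flip channel model, and the function names are all assumptions, and a single parity bit misses even-weight error patterns.

```python
# Stop-and-wait ARQ sketch: retransmit until the receiver's parity check passes.
import random

def add_parity(data):                 # append one even-parity bit
    return data + [sum(data) % 2]

def parity_ok(packet):                # receiver's error detection
    return sum(packet) % 2 == 0

def channel(packet, p_err, rng):      # flip each bit independently w.p. p_err
    return [b ^ (rng.random() < p_err) for b in packet]

def arq_send(data, p_err=0.2, seed=1):
    rng = random.Random(seed)
    packet = add_parity(data)
    attempts = 0
    while True:
        attempts += 1
        received = channel(packet, p_err, rng)
        if parity_ok(received):       # receiver would send ACK
            return received[:-1], attempts
        # receiver would send NACK -> loop retransmits the same packet

recovered, tries = arq_send([1, 0, 1, 1])
```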
Channel models
Discrete memoryless channels: discrete input, discrete output.
Binary symmetric channels: binary input, binary output.
Gaussian channels: discrete input, continuous output.
Linear block codes
Let us first review some basic definitions that are useful in understanding linear block codes.
Some definitions
Binary field: the set {0,1}, under modulo-2 addition and multiplication, forms a field.
The binary field is also called the Galois field, GF(2).

Addition:          Multiplication:
0 + 0 = 0          0 . 0 = 0
0 + 1 = 1          0 . 1 = 0
1 + 0 = 1          1 . 0 = 0
1 + 1 = 0          1 . 1 = 1
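The GF(2) tables above are exactly the XOR and AND operations on bits, which a short check confirms:

```python
# GF(2) arithmetic: modulo-2 addition is XOR, multiplication is AND.
def gf2_add(a, b):
    return a ^ b

def gf2_mul(a, b):
    return a & b

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert [gf2_add(a, b) for a, b in pairs] == [0, 1, 1, 0]   # addition table
assert [gf2_mul(a, b) for a, b in pairs] == [0, 0, 0, 1]   # multiplication table
```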
Some definitions
Examples of vector spaces: the set of binary n-tuples, denoted by V_n.
Example: V_4 = {(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111), (1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)}.

Vector subspace: a subset S of the vector space V_n is called a subspace if:
- The all-zero vector is in S.
- The sum of any two vectors in S is also in S.
Example: {(0000), (0101), (1010), (1111)} is a subspace of V_4.
Some definitions
Spanning set: a collection of vectors G = {v_1, v_2, ..., v_n}, the linear combinations of which include all vectors in a vector space V, is said to be a spanning set for V, or to span V.
Example: {(1000), (0110), (1100), (0011), (1001)} spans V_4.

Bases: a spanning set for V that has minimal cardinality is called a basis for V. (The cardinality of a set is the number of objects in the set.)
Example: {(1000), (0100), (0010), (0001)} is a basis for V_4.
Linear block codes

Linear block code (n,k): a set C, a subset of V_n with cardinality 2^k, is called a linear block code if, and only if, it is a subspace of the vector space V_n:
V_k -> C, C a subspace of V_n
- Members of C are called codewords.
- The all-zero codeword is a codeword.
- Any linear combination of codewords is a codeword.
Linear block codes (cont'd)

Encoding is a mapping from V_k into the subspace C of V_n, defined by the bases of C.
(a) Hamming distance d(c_i, c_j) = 2t + 1. (b) Hamming distance d(c_i, c_j) = 2t. The received vector is denoted by r.
# of bits for FEC

Want to correct t errors in an (n,k) code.
Data word d = [d_1, d_2, ..., d_k] => 2^k data words.
Code word c = [c_1, c_2, ..., c_n] => 2^n code words.
[Figure: data words d_j mapped onto code words c_j at the vertices of the binary cube (000, 001, ..., 111).]
Representing codes by vectors

Code strength is measured by the Hamming distance, which tells how different code words are:
- Codes are more powerful when their minimum Hamming distance d_min (over all pairs of code words in the code family) is large.
- The Hamming distance d(X,Y) is the number of bits that differ between code words X and Y.
(n,k) codes can be mapped onto an n-dimensional grid.
[Figure: 3-bit repetition code and 3-bit parity code, with valid code words marked on the cube.]
Error Detection

If a code can detect a t-bit error, then the received word must lie within a Hamming sphere of radius t around the code word c_j. For example, if c_j = 101 and t = 1, then 100, 111, and 001 lie in the Hamming sphere.
[Figure: Hamming sphere of radius 1 around code word 101 on the binary cube.]
Error Correction

To correct an error, the Hamming spheres around the code words must be non-overlapping: d_min = 2t + 1.
[Figure: non-overlapping Hamming spheres around code words on the binary cube.]
6-D Code Space
Block Code Error Detection and Correction

(6,3) code: 2^3 messages mapped into 2^6 words, d_min = 3.
- Can detect 2-bit errors, correct 1-bit errors. Example: 110100 sent, 110101 received.
- Erasure: suppose codeword 110011 is sent but two digits are erased (xx0011); the correct codeword has the smallest Hamming distance over the unerased positions.

Message  Codeword  d(., 110101)  d(., xx0011)
000      000000    4             2
100      110100    1             3
010      011010    3             2
110      101110    3             3
001      101001    3             2
101      011101    2             3
011      110011    2             0
111      000111    3             1
Geometric View

We want code efficiency, so the space should be packed with as many code words as possible.
Code words should be as far apart as possible, to minimize errors.
There are 2^n n-tuples in V_n, and 2^k n-tuples in the subspace of codewords.
Linear block codes (cont'd)

The information bit stream is chopped into blocks of k bits. Each block is encoded to a larger block of n bits. The coded bits are modulated and sent over the channel. The reverse procedure is done at the receiver.

Data block (k bits) -> Channel encoder -> Codeword (n bits)

Redundant bits: n - k
Code rate: R_c = k/n
Linear block codes (cont'd)

The Hamming weight of a vector U, denoted by w(U), is the number of non-zero elements in U.
The Hamming distance between two vectors U and V is the number of elements in which they differ:
d(U, V) = w(U + V)
The minimum distance of a block code is
d_min = min_{i != j} d(U_i, U_j) = min_i w(U_i)
Linear block codes (cont'd)

Error detection capability is given by
e = d_min - 1
The error-correcting capability t of a code, defined as the maximum number of guaranteed correctable errors per codeword, is
t = floor( (d_min - 1) / 2 )
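A short sketch ties together Hamming weight, Hamming distance, and the e and t formulas above, using the codewords of the (6,3) example code from these slides:

```python
# Hamming weight/distance and the detection (e) / correction (t) capabilities.
def w(u):                       # Hamming weight
    return sum(u)

def d(u, v):                    # Hamming distance = weight of the XOR
    return sum(a ^ b for a, b in zip(u, v))

codewords = [[0,0,0,0,0,0], [1,1,0,1,0,0], [0,1,1,0,1,0], [1,0,1,1,1,0],
             [1,0,1,0,0,1], [0,1,1,1,0,1], [1,1,0,0,1,1], [0,0,0,1,1,1]]

# For a linear code, d_min equals the minimum weight over nonzero codewords.
d_min = min(w(c) for c in codewords if any(c))
e = d_min - 1                   # guaranteed detectable errors
t = (d_min - 1) // 2            # guaranteed correctable errors
assert (d_min, e, t) == (3, 2, 1)
```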
Linear block codes (cont'd)

For memoryless channels, the probability that the decoder commits an erroneous decoding is bounded by
P_M <= sum_{j=t+1}^{n} C(n,j) p^j (1 - p)^(n-j)
where p is the transition probability (bit error probability) over the channel.
The decoded bit error probability is approximately
P_B ~ (1/n) sum_{j=t+1}^{n} j C(n,j) p^j (1 - p)^(n-j)
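The two expressions above can be evaluated numerically. A sketch for the (6,3) code (n = 6, t = 1); the function names and the sample value p = 0.01 are assumptions for illustration:

```python
# Numerical evaluation of the P_M bound and P_B approximation above.
from math import comb

def block_err_bound(n, t, p):
    # P_M <= sum_{j=t+1}^{n} C(n,j) p^j (1-p)^(n-j)
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

def bit_err_approx(n, t, p):
    # P_B ~ (1/n) sum_{j=t+1}^{n} j C(n,j) p^j (1-p)^(n-j)
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1)) / n

p = 0.01
pm = block_err_bound(6, 1, p)
pb = bit_err_approx(6, 1, p)
# Both are dominated by the j = 2 term, C(6,2) p^2 = 15 p^2.
assert 0 < pb < pm < 15 * p**2
```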
Linear block codes (cont'd)

Discrete, memoryless, symmetric channel model: a transmitted bit (0 or 1) is received correctly with probability 1 - p and flipped with probability p.

Note that for coded systems the coded bits are modulated and transmitted over the channel. For example, for M-PSK modulation on AWGN channels (M > 2):
p ~ (2 / log2 M) Q( sqrt(2 E_c log2 M / N_0) sin(pi / M) )
where E_c is the energy per coded bit, given by E_c = R_c E_b.
Linear block codes (cont'd)

A matrix G is constructed by taking as its rows the basis vectors {V_1, V_2, ..., V_k} of C:

    [ V_1 ]   [ v_11  v_12  ...  v_1n ]
G = [ V_2 ] = [ v_21  v_22  ...  v_2n ]
    [ ...  ]   [  ...                  ]
    [ V_k ]   [ v_k1  v_k2  ...  v_kn ]
Linear block codes (cont'd)

Encoding in an (n,k) block code: U = mG

(u_1, u_2, ..., u_n) = (m_1, m_2, ..., m_k) [V_1; V_2; ...; V_k]
                     = m_1 V_1 + m_2 V_2 + ... + m_k V_k

The rows of G are linearly independent.
Linear block codes (cont'd)

Systematic block code (n,k): for a systematic code, the first (or last) k elements in the codeword are information bits.

G = [P | I_k]
where I_k is the k x k identity matrix and P is a k x (n-k) matrix.

U = (u_1, u_2, ..., u_n) = (p_1, p_2, ..., p_{n-k}, m_1, m_2, ..., m_k)
(parity bits, then message bits)
Linear block codes (cont'd)

For any linear code we can find an (n-k) x n matrix H whose rows are orthogonal to the rows of G:
G H^T = 0
H is called the parity check matrix, and its rows are linearly independent.
For systematic linear block codes:
H = [I_{n-k} | P^T]
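The G = [P | I_k] and H = [I_{n-k} | P^T] construction can be checked directly. The sketch below uses a P submatrix chosen to reproduce the (6,3) code used in these slides (message 100 encodes to 110100); the function names are illustrative:

```python
# Systematic (6,3) code: build G = [P | I3] and H = [I3 | P^T], verify G H^T = 0.
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
I3 = [[int(i == j) for j in range(3)] for i in range(3)]

G = [P[i] + I3[i] for i in range(3)]                          # k x n generator
H = [I3[i] + [P[j][i] for j in range(3)] for i in range(3)]   # (n-k) x n parity check

def gf2_ABt(A, B):
    """Return A * B^T over GF(2)."""
    return [[sum(a * b for a, b in zip(ra, rb)) % 2 for rb in B] for ra in A]

assert gf2_ABt(G, H) == [[0, 0, 0]] * 3   # rows of H orthogonal to rows of G

def encode(m):                            # U = mG: parity bits first, message last
    return [sum(m[i] * G[i][j] for i in range(3)) % 2 for j in range(6)]

assert encode([1, 0, 0]) == [1, 1, 0, 1, 0, 0]
```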
Linear block codes (cont'd)

Syndrome testing: S is the syndrome of r, corresponding to the error pattern e.

Data source -> Format -> Channel encoding -> Modulation -> Channel -> Demodulation/Detection -> Channel decoding -> Format -> Data sink

r = U + e
r = (r_1, r_2, ..., r_n): received codeword (vector)
e = (e_1, e_2, ..., e_n): error pattern (vector)
S = r H^T = e H^T
Linear block codes (cont'd)

Standard array:
1. For row i = 2, 3, ..., 2^(n-k), find a vector in V_n of minimum weight which is not already listed in the array.
2. Call this pattern e_i and form the i-th row as the corresponding coset: e_i, e_i + U_2, ..., e_i + U_{2^k}.

U_1 (= zero codeword)   U_2           ...   U_{2^k}       <- codewords (first row)
e_2                     e_2 + U_2     ...   e_2 + U_{2^k}
...
e_{2^(n-k)}             ...                 e_{2^(n-k)} + U_{2^k}

The first column contains the coset leaders e_i.
Linear block codes (cont'd)

Standard array and syndrome table decoding:
1. Calculate S = r H^T.
2. Find the coset leader e_hat = e_i corresponding to S.
3. Calculate U_hat = r + e_hat and the corresponding m_hat.

Note that U_hat = r + e_hat = (U + e) + e_hat = U + (e + e_hat).
- If e_hat = e, the error is corrected.
- If e_hat != e, an undetectable decoding error occurs.
Linear block codes (cont'd)

Example: standard array for the (6,3) code.

Codewords (first row):
000000  110100  011010  101110  101001  011101  110011  000111
Coset leaders (first column): 000000, 000001, 000010, 000100, 001000, 010000, 100000, 010001.
Each remaining entry of row i is e_i + U_j.
Linear block codes (cont'd)

Error pattern  Syndrome
010001         111
100000         100
010000         010
001000         001
000100         110
000010         011
000001         101
000000         000

U = (101110) transmitted; r = (001110) is received.
The syndrome of r is computed: S = r H^T = (100).
The error pattern corresponding to this syndrome is e_hat = (100000).
The corrected vector is estimated: U_hat = r + e_hat = (001110) + (100000) = (101110).
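The worked example above can be reproduced end to end. This sketch hardcodes the (6,3) code's parity check matrix and builds the coset-leader table from the error patterns listed above:

```python
# Syndrome-table decoding of r = (001110): expect U_hat = (101110).
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]              # parity check matrix of the (6,3) code

def syndrome(r):
    return tuple(sum(h * x for h, x in zip(row, r)) % 2 for row in H)

# coset-leader table: syndrome -> lowest-weight error pattern
patterns = [[0] * 6] \
          + [[int(i == j) for i in range(6)] for j in range(6)] \
          + [[0, 1, 0, 0, 0, 1]]      # the one correctable weight-2 pattern
leaders = {syndrome(e): e for e in patterns}

def correct(r):
    e_hat = leaders[syndrome(r)]
    return [a ^ b for a, b in zip(r, e_hat)]

r = [0, 0, 1, 1, 1, 0]
assert syndrome(r) == (1, 0, 0)                 # S = r H^T
assert correct(r) == [1, 0, 1, 1, 1, 0]         # U_hat = (101110)
```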
Example of the block codes

[Figure: P_B versus E_b/N_0 (dB) for coded QPSK and 8PSK.]
Convolutional codes

Convolutional codes offer an approach to error control coding substantially different from that of block codes.
A convolutional encoder:
- encodes the entire data stream into a single codeword;
- does not need to segment the data stream into blocks of fixed size (convolutional codes are often forced into a block structure by periodic truncation);
- is a machine with memory.
This fundamental difference in approach imparts a different nature to the design and evaluation of the code:
- Block codes are based on algebraic/combinatorial techniques.
- Convolutional codes are based on construction techniques.
A Rate-1/2 Convolutional encoder

Encoder: m = (101) -> U = (11 10 00 10 11)

Time  Register contents  Output (branch word u_1 u_2)
t5    0 0 1              1 1
t6    0 0 0              0 0
Effective code rate

- Initialize the memory before encoding the first bit (all-zero).
- Clear out the memory after encoding the last bit (all-zero).
- Hence, a tail of zero bits is appended to the data bits.

data + tail -> Encoder -> codeword

Effective code rate (L is the number of data bits; k = 1 is assumed):
R_eff = L / ( n (L + K - 1) ) < R_c
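The effective-rate formula above is a one-liner; the numbers below use this lecture's example code (n = 2, K = 3) and show that the rate loss from the tail vanishes for long messages:

```python
# Effective rate of a terminated rate-1/n convolutional code (k = 1):
# K - 1 tail zeros are appended, so R_eff = L / (n (L + K - 1)) < 1/n.
def effective_rate(L, n, K):
    return L / (n * (L + K - 1))

# The example code in these slides: n = 2, K = 3, message of L = 3 bits.
assert effective_rate(3, 2, 3) == 3 / 10                 # 10 coded bits, 3 data bits
assert abs(effective_rate(1000, 2, 3) - 0.5) < 0.001     # loss vanishes as L grows
```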
Encoder representation

Vector representation: we define n binary vectors with K elements each (one vector for each modulo-2 adder). The i-th element in each vector is 1 if the i-th stage of the shift register is connected to the corresponding modulo-2 adder, and 0 otherwise.

Example:
g_1 = (111)
g_2 = (101)
Encoder representation (cont'd)

Impulse response representation: the response of the encoder to a single one-bit that goes through it.

Example:
Register contents  Branch word u_1 u_2
100                1 1
010                1 0
001                1 1

Input sequence:  1 0 0
Output sequence: 11 10 11

For input m = (101), superpose the shifted impulse responses:
Input 1:      11 10 11
Input 0:         00 00 00
Input 1:            11 10 11
Modulo-2 sum: 11 10 00 10 11
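The shift-register encoder described above can be sketched in a few lines. The connection vectors g_1 = (111), g_2 = (101) and the test vector m = (101) -> U = 11 10 00 10 11 come from the slides; the tail-appending convention matches the effective-code-rate slide:

```python
# Rate-1/2, K = 3 convolutional encoder with g1 = (111), g2 = (101).
def conv_encode(m, K=3):
    reg = [0] * (K - 1)                  # shift register memory, most recent first
    out = []
    for bit in m + [0] * (K - 1):        # append K - 1 tail zeros
        u1 = bit ^ reg[0] ^ reg[1]       # adder connections g1 = 1 1 1
        u2 = bit ^ reg[1]                # adder connections g2 = 1 0 1
        out += [u1, u2]
        reg = [bit] + reg[:-1]           # shift the register
    return out

# m = (101) -> U = 11 10 00 10 11, as in the slides' example
assert conv_encode([1, 0, 1]) == [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
```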
Encoder representation (cont'd)

Polynomial representation: we define n generator polynomials, one for each modulo-2 adder. Each polynomial is of degree K-1 or less and describes the connection of the shift register to the corresponding modulo-2 adder.

Example:
g_1(X) = g_0^(1) + g_1^(1) X + g_2^(1) X^2 = 1 + X + X^2
g_2(X) = g_0^(2) + g_1^(2) X + g_2^(2) X^2 = 1 + X^2

The output sequence is found as follows:
U(X) = m(X) g_1(X) interlaced with m(X) g_2(X)
Encoder representation (cont'd)
Encoder representationcontd
In more details:
1110001011
)1,1()0,1()0,0()0,1()1,1()(
.0.0.01)()(.01)()(
1)1)(1()()(
1)1)(1()()(
432
432
2
432
1
422
2
4322
1
U
U
gmgm
gm
gm
XXXXX
XXXXXXXXXXXX
XXXXX
XXXXXXXX
State diagramcontd
A state diagram is a way to represent
the encoder.
A state diagram contains all the statesand all possible transitions between
them.
Only two transitions initiating from astate
Only two transitions ending up in a state
State diagram (cont'd)

Current state  Input  Next state  Output (branch word)
S0 = 00        0      S0 = 00     00
S0 = 00        1      S2 = 10     11
S1 = 01        0      S0 = 00     11
S1 = 01        1      S2 = 10     00
S2 = 10        0      S1 = 01     10
S2 = 10        1      S3 = 11     01
S3 = 11        0      S1 = 01     01
S3 = 11        1      S3 = 11     10

[State diagram: the four states S0..S3 with branches labeled input/output, e.g. 1/11 from S0 to S2 and 0/00 from S0 to itself.]
Trellis (cont'd)

A trellis diagram for the example code:
[Trellis: sections at times t1..t6; each section repeats the branch labels 0/00, 0/11, 0/10, 0/01, 1/11, 1/01, 1/00 between the four states.]

Input bits (tail bits appended): 1 0 1 0 0
Output bits: 11 10 00 10 11
Trellis (cont'd)

[Trellis: the same diagram with only the branches reachable from the all-zero start state drawn, tracing the path for input 1 0 1 0 0.]

Input bits (tail bits appended): 1 0 1 0 0
Output bits: 11 10 00 10 11
Soft and hard decision decoding

In hard decision: the demodulator makes a firm (hard) decision whether a one or a zero was transmitted, and provides no other information to the decoder, such as how reliable the decision is.
In soft decision: the demodulator provides the decoder with side information together with the decision. The side information gives the decoder a measure of confidence in the decision.
Soft and hard decoding

Regardless of whether the channel outputs hard or soft decisions, the decoding rule remains the same: maximize the probability
ln p(y, x_m) = sum_j ln p(y_j | x_jm)
However, in soft decoding the decision-region energies must be accounted for, and hence the Euclidean metric d_E, rather than the Hamming metric, is used.
[Figure: decision regions; the transition for Pr[3|0] is indicated by the arrow.]
Decision regions

Coding can be realized by the soft-decoding or the hard-decoding principle. For soft decoding, the reliability (measured by bit energy) of each decision region must be known.
Example: decoding a BPSK signal. The matched filter output is a continuous number; in AWGN the matched filter output is Gaussian. For soft decoding, several decision-region partitions are used.
The transition probability Pr[3|0] is, e.g., the probability that a transmitted 0 falls into region no. 3.
Soft and hard decision decoding

ML soft-decision decoding rule: choose the path in the trellis with the minimum Euclidean distance from the received sequence.
ML hard-decision decoding rule: choose the path in the trellis with the minimum Hamming distance from the received sequence.
The Viterbi algorithm

The Viterbi algorithm performs maximum likelihood decoding. It finds the path through the trellis with the best metric (maximum correlation or minimum distance). At each step in the trellis, it compares the partial metrics of all paths entering each state, and keeps only the path with the best metric, called the survivor, together with its metric.
Example of hard-decision Viterbi decoding

m = (101), U = (11 10 00 10 11) transmitted; Z = (01 10 11 10 11) received.
The decoder outputs m_hat = (100): the three channel errors exceed the code's correction capability t = 2.
[Trellis: each branch labeled with its branch metric (Hamming distance to the received branch word); partial metrics Gamma(S(t_i), t_i) accumulate along the surviving paths, e.g. 0, 2, 3, 0, 1, 2 along the top row of states.]
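The add-compare-select procedure in the example above can be sketched as a small hard-decision Viterbi decoder for this rate-1/2, K = 3 code. This is an illustrative implementation, not the slides' notation; the state is the pair of most recent input bits, and the branch metric is the Hamming distance:

```python
# Hard-decision Viterbi decoder for the rate-1/2, K = 3 code (g1 = 111, g2 = 101).

def branch_output(state, bit):          # state = (s1, s2), most recent first
    u1 = bit ^ state[0] ^ state[1]
    u2 = bit ^ state[1]
    return (u1, u2), (bit, state[0])    # (branch word, next state)

def viterbi_hard(z):
    n_steps = len(z) // 2
    paths = {(0, 0): ([], 0)}           # state -> (decoded bits, partial metric)
    for i in range(n_steps):
        r = z[2*i : 2*i + 2]
        new_paths = {}
        for state, (bits, metric) in paths.items():
            for bit in (0, 1):          # extend every survivor by both inputs
                out, nxt = branch_output(state, bit)
                m = metric + (out[0] != r[0]) + (out[1] != r[1])
                if nxt not in new_paths or m < new_paths[nxt][1]:
                    new_paths[nxt] = (bits + [bit], m)   # keep the survivor
        paths = new_paths
    bits, _ = paths[(0, 0)]             # terminated trellis ends in the zero state
    return bits[:-2]                    # drop the K - 1 tail bits

U = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]     # codeword for m = (101)
assert viterbi_hard(U) == [1, 0, 1]    # error-free reception
Z = U[:]; Z[0] ^= 1; Z[4] ^= 1         # two channel errors
assert viterbi_hard(Z) == [1, 0, 1]    # corrected, since d_free = 5 gives t = 2
```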
Example of soft-decision Viterbi decoding

m = (101), U = (11 10 00 10 11). The demodulator outputs soft values Z quantized to levels such as +/-1 and +/-2/3. Branch metrics are correlations (values such as +/-1/3, +/-5/3, 4/3 in the figure), and the partial path metrics along the surviving path accumulate to 0, -5/3, -5/3, 10/3, 1/3, 14/3. The decoded message is m_hat = (101).
[Trellis figure with branch metrics and partial metrics Gamma(S(t_i), t_i).]
Free distance of convolutional codes

Distance properties: since a convolutional encoder generates codewords of various sizes (as opposed to block codes), the following approach is used to find the minimum distance between all pairs of codewords:
- Since the code is linear, the minimum distance of the code is the minimum distance between each of the codewords and the all-zero codeword.
- This is the minimum distance in the set of all arbitrarily long paths along the trellis that diverge from and remerge with the all-zero path.
- It is called the minimum free distance, or the free distance of the code, denoted by d_free or d_f.
Free distance

[Trellis: branches labeled with their Hamming weights; the all-zero path and the minimum-weight path that diverges from and remerges with it are highlighted.]

d_f = 5
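The d_f = 5 result above can be confirmed by brute force: since the code is linear, enumerate the terminated codewords of all short nonzero input sequences and take the minimum Hamming weight. This sketch inlines the same encoder used earlier:

```python
# Brute-force free distance of the rate-1/2, K = 3 code (g1 = 111, g2 = 101).
from itertools import product

def encode(m):
    reg = [0, 0]
    out = []
    for bit in list(m) + [0, 0]:                 # terminate with K - 1 zeros
        out += [bit ^ reg[0] ^ reg[1], bit ^ reg[1]]
        reg = [bit, reg[0]]
    return out

# minimum codeword weight over all nonzero inputs of length 1..7
d_free = min(sum(encode(m))
             for L in range(1, 8)
             for m in product((0, 1), repeat=L) if any(m))
assert d_free == 5
```

Input lengths up to 7 are enough here to expose the minimum-weight diverge-and-remerge path; for other codes the search depth would need to grow with the memory.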
Interleaving
Interleaving
Convolutional codes are suitable for memoryless
channels with random error events.
Some errors have bursty nature:
Statistical dependence among successive error events(time-correlation) due to the channel memory.
Like errors in multipath fading channels in wirelesscommunications, errors due to the switching noise,
Interleaving makes the channel looks like as amemoryless channel at the decoder.
Interleaving

Interleaving is done by spreading the coded symbols in time before transmission. The reverse is done at the receiver by deinterleaving the received sequence. Interleaving makes bursty errors look random; hence convolutional codes can be used.
Types of interleaving:
- Block interleaving
- Convolutional (cross) interleaving
Interleaving

Consider a code with t = 1 and 3 coded bits per codeword. A burst error of length 3 cannot be corrected. Let us use a 3x3 block interleaver:

A1 A2 A3 B1 B2 B3 C1 C2 C3    (without interleaving, a length-3 burst puts 2 errors in one codeword)
-> Interleaver ->
A1 B1 C1 A2 B2 C2 A3 B3 C3    (transmitted order)
-> Deinterleaver ->
A1 A2 A3 B1 B2 B3 C1 C2 C3    (the burst is spread out: 1 error per codeword, each correctable)
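The 3x3 example above boils down to writing row by row and reading column by column, which a few lines make concrete:

```python
# 3x3 block interleaver: write row-by-row, read column-by-column.
def interleave(symbols, rows=3, cols=3):
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows=3, cols=3):
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

seq = ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'C1', 'C2', 'C3']
tx = interleave(seq)
assert tx == ['A1', 'B1', 'C1', 'A2', 'B2', 'C2', 'A3', 'B3', 'C3']
assert deinterleave(tx) == seq
# A burst hitting tx[0:3] = A1, B1, C1 lands in three different codewords.
```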
Concatenated codes

A concatenated code uses two levels of coding, an inner code and an outer code (of higher rate).
- Popular concatenated codes: convolutional codes with Viterbi decoding as the inner code and Reed-Solomon codes as the outer code.
- The purpose is to reduce the overall complexity, yet achieve the required error performance.

Input data -> Outer encoder -> Interleaver -> Inner encoder -> Modulate -> Channel -> Demodulate -> Inner decoder -> Deinterleaver -> Outer decoder -> Output data
Optimum decoding

If the input message sequences are equally likely, the optimum decoder, which minimizes the probability of error, is the maximum likelihood decoder.
The ML decoder selects, among all possible codewords, the codeword that maximizes the likelihood function p(Z | U^(m)), where Z is the received sequence and U^(m) is one of the possible codewords.

ML decoding rule: choose U^(m) if p(Z | U^(m)) = max over all U^(m') of p(Z | U^(m'))

There are 2^L codewords to search!
ML decoding for memoryless channels

Due to the independent channel statistics for memoryless channels, the likelihood function becomes
p(Z | U^(m)) = prod_i p(Z_i | U_i^(m)) = prod_i prod_{j=1}^{n} p(z_ji | u_ji^(m))
and, equivalently, the log-likelihood function becomes
gamma_U^(m) = log p(Z | U^(m)) = sum_i log p(Z_i | U_i^(m)) = sum_i sum_{j=1}^{n} log p(z_ji | u_ji^(m))
(path metric = sum of branch metrics = sum of bit metrics)
The path metric up to time index i is called the partial path metric.

ML decoding rule: choose the path with the maximum metric among all the paths in the trellis. This path is the "closest" path to the transmitted sequence.
Binary symmetric channels (BSC)

p(1|0) = p(0|1) = p,  p(1|1) = p(0|0) = 1 - p

If d_m = d(Z, U^(m)) is the Hamming distance between Z and U^(m), then
p(Z | U^(m)) = p^(d_m) (1 - p)^(L - d_m)
log p(Z | U^(m)) = -d_m log( (1 - p) / p ) + L log(1 - p)
where L is the size of the coded sequence.

ML decoding rule: choose the path with the minimum Hamming distance from the received sequence.
AWGN channels

For BPSK modulation, the transmitted sequence corresponding to the codeword U^(m) is denoted by S^(m) = (S_1^(m), ..., S_i^(m), ...), where S_i^(m) = (s_1i^(m), ..., s_ji^(m), ..., s_ni^(m)) and s_ij^(m) = +/- sqrt(E_c).
The log-likelihood function is proportional to the inner product (correlation) between Z and S^(m):
gamma_U^(m) proportional to <Z, S^(m)> = sum_i sum_j z_ji s_ji^(m)
Maximizing the correlation is equivalent to minimizing the Euclidean distance.

ML decoding rule: choose the path with the minimum Euclidean distance to the received sequence.
Soft and hard decisions

In hard decision: the demodulator makes a firm (hard) decision whether a one or a zero was transmitted, and provides no other information to the decoder, such as how reliable the decision is. Hence, its output is only zero or one (quantized to only two levels); these are called hard bits. Decoding based on hard bits is called hard-decision decoding.
Soft and hard decisions (cont'd)

In soft decision: the demodulator provides the decoder with side information together with the decision. The side information gives the decoder a measure of confidence in the decision. The demodulator outputs, called soft bits, are quantized to more than two levels. Decoding based on soft bits is called soft-decision decoding.
On AWGN channels about 2 dB, and on fading channels about 6 dB, of gain is obtained by using soft decoding instead of hard decoding.
The Viterbi algorithm

The Viterbi algorithm performs maximum likelihood decoding. It finds the path through the trellis with the best metric (maximum correlation or minimum distance). It processes the demodulator outputs in an iterative manner:
- At each step in the trellis, it compares the metrics of all paths entering each state, and keeps only the path with the best metric, called the survivor, together with its metric.
- It proceeds in the trellis by eliminating the least likely paths.
It reduces the decoding complexity to L 2^(K-1)!
The Viterbi algorithm (cont'd)

Viterbi algorithm:
A. Do the following set-up:
- For a data block of L bits, form the trellis. The trellis has L + K - 1 sections (levels); it starts at time t_1 and ends at time t_{L+K}.
- Label all the branches in the trellis with their corresponding branch metrics.
- For each state S(t_i) in {0, 1, ..., 2^(K-1) - 1} in the trellis at time t_i, define a partial path metric Gamma(S(t_i), t_i).
B. Then, do the following:
Example of hard-decision Viterbi decoding
[Trellis for the example code, branches labeled input/output.]

m = (101), U = (11 10 00 10 11), Z = (01 10 11 10 11)
Example of hard-decision Viterbi decoding (cont'd)

Label all the branches with the branch metric (Hamming distance to the received branch word).
[Trellis with branch metrics; states S(t_i) at times t1..t6.]
Example of hard-decision Viterbi decoding (cont'd)

i = 3: [Trellis with partial metrics Gamma(S(t_i), t_i) computed up to t_3; the survivor into each state is kept.]
Example of hard-decision Viterbi decoding (cont'd)

i = 4: [Trellis with partial metrics computed up to t_4.]
Example of hard-decision Viterbi decoding (cont'd)

i = 5: [Trellis with partial metrics computed up to t_5.]
Example of hard-decision Viterbi decoding (cont'd)

i = 6: [Trellis with partial metrics computed up to t_6; final metrics 0, 2, 3, 0, 1, 2 along the surviving path.]
Example of hard-decision Viterbi decoding (cont'd)

Trace back along the final survivor and then: m_hat = (100), U_hat = (11 10 11 00 00).
[Trellis with the traced-back survivor path highlighted.]
Example of soft-decision Viterbi decoding

(Same example as before: m = (101), U = (11 10 00 10 11); the soft-decision decoder accumulates correlation branch metrics along the trellis and returns m_hat = (101).)
Trellis diagram for a K = 2, k = 2, n = 3 convolutional code.
State diagram for a K = 2, k = 2, n = 3 convolutional code.