Modern Coding Theory
Daniel J. Costello, Jr.
Coding Research Group, Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556
2009 School of Information Theory, Northwestern University, August 10, 2009
The author gratefully acknowledges the help of Ali Pusane and Christian Koller in the preparation of this presentation.
Outline
- Pre-1993 era (ancient coding theory): the coding theory playing field
- Post-1993 era (modern coding theory): waterfalls and error floors; thresholds and minimum distance
- Turbo codes and LDPC codes:
  - Waterfall performance (thresholds): EXIT charts, density evolution
  - Error floor performance (distance properties): uniform interleaver analysis, pseudo- and near-codeword analysis
Ancient Coding Theory
Channel capacity and coding theory playing field
[Figure: bandwidth efficiency (bits/2D) vs. power efficiency Eb/N0 (dB), for a target bit error rate (BER) of 10^-5. Shown are the capacity bound, the BPSK/QPSK capacity, the cutoff rate, and the operating points of uncoded BPSK/QPSK, Hamming (7,4), Golay (24,12), BCH (255,123), RS (64,32), the Pioneer QLI (2,1,31) and Voyager codes, LDPC (504,3,6), LDPC (8096,3,6), Turbo (65536,18), and an irregular LDPC code of block length 10^7.]
[Figure: BER vs. Eb/N0 (dB) for a (200000,3,6)-regular LDPC code, a (65536,18) turbo code, and an irregular LDPC code with block length 10^7. Each curve shows a steep waterfall region; the turbo code and the irregular LDPC code also exhibit an error floor.]
[Figure: the same playing field, now for a target bit error rate of 10^-10. Shown are the capacity bound, the BPSK/QPSK capacity, the cutoff rate, and the operating points of uncoded BPSK/QPSK, Hamming (7,4), Golay (24,12), BCH (255,123), RS (64,32), Pioneer QLI (2,1,31), Voyager, LDPC (8096,3,6), and Turbo (65536,18); at this BER the position of the irregular LDPC (10^7) code is marked "???", since its error floor dominates.]
Modern Coding Theory
Turbo Codes
Modern Coding Theory
Turbo Codes: Parallel concatenated codes (PCCs)
[Diagram: the information block u = (u_0, u_1, ..., u_{K-1}) is passed directly to Encoder 1 and, through an interleaver Π, to Encoder 2, producing the systematic sequence v^(0) = (v^(0)_0, v^(0)_1, ..., v^(0)_{K-1}) and the parity sequences v^(1) = (v^(1)_0, v^(1)_1, ..., v^(1)_{K-1}) and v^(2) = (v^(2)_0, v^(2)_1, ..., v^(2)_{K-1}).]
- An information block is encoded by two constituent convolutional encoders.
- A pseudo-random interleaver that approximates a random shuffle (permutation) is used to change the order of the data symbols before they are encoded by the second encoder.
Modern Coding Theory
Turbo Codes: Parallel concatenated codes (PCCs)
- The encoders have a recursive systematic structure.
- The transmitted codeword is v = (v^(0), v^(1), v^(2)).
Modern Coding Theory
Turbo Codes: Iterative decoding
[Diagram: Decoder 1 and Decoder 2 exchange extrinsic LLRs LE(u) and LA(u') through the interleaver Π and deinterleaver Π^(-1); Decoder 1 processes the received sequence r, and the final output is L(u).]
- Decoding can be done iteratively using two constituent soft-input soft-output (SISO) decoders.
- For short constraint length constituent convolutional encoders, optimum SISO decoding of the constituent codes is possible using the BCJR algorithm.

Modern Coding Theory
Turbo Codes: SISO decoder
[Diagram: a SISO decoder takes the received sequence r and the a priori LLRs LA(u_i) and produces the extrinsic LLRs LE(u_i).]
A posteriori (APP) log-likelihood ratios (LLRs):
L(u_i) = log [ Σ_{u: u_i = 0} λ(u) / Σ_{u: u_i = 1} λ(u) ],  where λ(u) ∝ p(r|u) · p(u).
A priori LLRs:
LA(u_i) = log [ Σ_{u: u_i = 0} p(u) / Σ_{u: u_i = 1} p(u) ].
Extrinsic LLRs:
LE(u_i) = L(u_i) − LA(u_i).
Overview of BCJR Algorithm
- The BCJR algorithm operates on the trellis representation of the convolutional code. The logarithmic version of the BCJR algorithm is preferred in practice, since it is simpler to implement and provides greater numerical stability than the probabilistic version.
- The logarithm of the path metric λ(u) can be written as a sum of branch metrics over the trellis states s_i at time i:
log λ(u) = Σ_{i=0}^{K−1} γ(s_i, u_i),  with  γ(s_i, u_i) = u_i·LA(u_i)/2 − (Es/N0)·||r_i − v_i||²,
where Es/N0 is the channel signal-to-noise ratio, r_i is the received signal vector, and v_i is the signal vector corresponding to symbol u_i.
Overview of BCJR Algorithm
- The likelihoods of the different states at time i are determined by a forward and a backward recursion
α(s_{i+1}) = max*_{(s_i, u_i, s_{i+1}) ∈ B(→s_{i+1})} [α(s_i) + γ(s_i, u_i)]
β(s_i) = max*_{(s_i, u_i, s_{i+1}) ∈ B(s_i→)} [γ(s_i, u_i) + β(s_{i+1})]
with the initial conditions
α(s_0) = 0 if s_0 = 0, −∞ if s_0 ≠ 0;  β(s_K) = 0 if s_K = 0, −∞ if s_K ≠ 0,
and the max* operation:
max*[x, y] = max[x, y] + log(1 + exp{−|x − y|}).
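The max* operation above is simple enough to implement directly; a minimal sketch in Python (the function name `max_star` is our own):

```python
import math

def max_star(x: float, y: float) -> float:
    """Jacobian logarithm: max*(x, y) = log(e^x + e^y),
    computed as max(x, y) plus a bounded correction term."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))
```

Dropping the `log1p` correction term gives the plain `max` used by the suboptimum max-log-MAP algorithm discussed on the next slide.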
Overview of BCJR Algorithm
- The a posteriori log-likelihood ratios are then given by
L(u_i) = max*_{(s_i, u_i, s_{i+1}) ∈ B(u_i=0)} [α(s_i) + γ(s_i, u_i) + β(s_{i+1})] − max*_{(s_i, u_i, s_{i+1}) ∈ B(u_i=1)} [α(s_i) + γ(s_i, u_i) + β(s_{i+1})].
- The max* operations can be replaced by the simpler max operation, resulting in the suboptimum max-log-MAP algorithm.
Summary of the (log-domain) BCJR Algorithm
Step 1: Compute the branch metrics γ(s_i, u_i).
Step 2: Initialize the forward and backward metrics α(s_0) and β(s_K).
Step 3: Compute the forward metrics α(s_{i+1}), i = 0, 1, …, K−1.
Step 4: Compute the backward metrics β(s_i), i = K−1, K−2, …, 0.
Step 5: Compute the APP L-values L(u_i), i = 0, 1, …, K−1.
Step 6 (optional): Compute the hard decisions û_i, i = 0, 1, …, K−1.
Example: Decoding of a 2-state recursive systematic (accumulator) convolutional code
- We use the mapping 0 → +1, 1 → −1.
- Es/N0 = 1/4.
- The received vector is normalized by √Es.
- There is no a priori information available, i.e., LA(u_i) = 0.
[Diagram: the two-state trellis of the accumulator code, with the received values shown below each trellis section.]
Example: Decoding of a 2-state convolutional code
Computation of the branch metric:
γ(s_0 = 0, u_0 = 1) = −(Es/N0)·||r_0 − v_0||² = −0.25·[(−0.8 + 1)² + (−0.1 + 1)²] = −0.2125
Example: Decoding of a 2-state convolutional code
Computation of the forward metrics:
α(s_2 = 0) = max_{(s_1, u_1, s_2) ∈ B(→s_2=0)} [α(s_1) + γ(s_1, u_1)] = max[(−0.2125 − 0.0625), (−1.1125 − 1.0625)] = −0.275
Example: Decoding of a 2-state convolutional code
Computation of the backward metrics:
β(s_2 = 1) = max_{(s_2, u_2, s_3) ∈ B(s_2=1→)} [γ(s_2, u_2) + β(s_3)] = max[(−0.1625 − 0.18), (−3.0625 − 1.78)] = −0.3425
Example: Decoding of a 2-state convolutional code
Decoding of the second bit:
L(u_1) = max[(−0.2125 − 1.5625 − 0.3425), (−1.1125 − 1.0625 − 2.1425)] − max[(−0.2125 − 0.0625 − 2.1425), (−1.1125 − 0.5625 − 0.3425)] = −2.1175 + 2.0175 = −0.1
Hard decision decoded sequence: û = (0, 1, 0, 1)
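The arithmetic of this example can be checked with a few lines of Python, using the max-log-MAP (plain max) simplification as the slides do:

```python
# Branch metric from the 2-state example: gamma = -(Es/N0) * ||r - v||^2,
# with LA(u) = 0, Es/N0 = 1/4, and the mapping 0 -> +1, 1 -> -1.
Es_over_N0 = 0.25

def branch_metric(r, v):
    return -Es_over_N0 * sum((ri - vi) ** 2 for ri, vi in zip(r, v))

g = branch_metric((-0.8, -0.1), (-1.0, -1.0))   # s0 = 0, u0 = 1 maps to (-1, -1)
print(round(g, 4))                               # -0.2125

# APP L-value of the second bit from the alpha/gamma/beta sums on the slide
L_u1 = max(-0.2125 - 1.5625 - 0.3425, -1.1125 - 1.0625 - 2.1425) \
     - max(-0.2125 - 0.0625 - 2.1425, -1.1125 - 0.5625 - 0.3425)
print(round(L_u1, 4))                            # -0.1
```

Since L(u_1) < 0, the hard decision for the second bit is û_1 = 1, matching the decoded sequence above.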
Modern Coding Theory
Waterfall Performance: EXIT Charts

Modern Coding Theory
Turbo Codes: EXIT charts for PCCs
- Extrinsic information transfer (EXIT) charts are used to analyze the iterative decoding behavior of the SISO constituent decoders.
- EXIT charts plot the extrinsic mutual information IE[u_i; LE(u_i)] at a SISO decoder output versus the a priori mutual information IA[u_i; LA(u_i)] at the decoder input, assuming the LLRs are Gaussian distributed.
[Figure: EXIT chart with the Decoder 1 and Decoder 2 transfer curves plotted on the same (IA, IE) axes.]
Modern Coding Theory
Turbo Codes: EXIT charts for PCCs
[Figure: EXIT chart showing the Decoder 1 and Decoder 2 transfer curves with an open tunnel between them.]
- Successful decoding is possible if the two curves do not cross; otherwise, decoding fails.
- The curves depend on the bit signal-to-noise ratio (SNR) Eb/N0.
- The iterative decoding threshold of a turbo decoder is the smallest SNR for which the two curves do not cross.
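The mutual information between a bit and its LLR under the Gaussian assumption is the "J-function" of EXIT chart analysis. A minimal Monte Carlo sketch (our own illustration, not code from the talk) estimates it for "consistent" Gaussian LLRs, i.e., L ~ N(σ²/2, σ²) conditioned on the bit value 0:

```python
import math, random

def mutual_information(sigma: float, n: int = 200_000) -> float:
    """Monte Carlo estimate of I(u; L) for consistent Gaussian LLRs
    L ~ N(sigma^2 / 2, sigma^2) given u = 0, using
    I = 1 - E[log2(1 + exp(-L))]."""
    rng = random.Random(1)
    mean = sigma * sigma / 2.0
    acc = 0.0
    for _ in range(n):
        L = rng.gauss(mean, sigma)
        acc += math.log2(1.0 + math.exp(-L))
    return 1.0 - acc / n
```

As σ grows the LLRs become more reliable and the mutual information climbs from 0 toward 1, which is exactly the horizontal (IA) axis being swept when an EXIT curve is measured.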
Modern Coding Theory
Turbo Codes: Serially concatenated codes (SCCs)
[Diagram: the information block u enters the Outer Encoder; its codeword v is interleaved (Π) to form u', the input to the Inner Encoder, which outputs v'. In the decoder, the Inner Decoder (fed by r) and the Outer Decoder exchange LA(v'), LE(v'), LA(u'), and LE(u') through Π and Π^(-1), producing L(u).]
- An information block is first encoded by the outer encoder, and the codeword is then interleaved to form the input to the inner encoder.
- Again, decoding can be done iteratively using SISO decoders.

Modern Coding Theory
Turbo Codes: EXIT charts for SCCs
- The iterative decoding threshold of an SCC can also be determined using EXIT charts.
- Successful decoding is possible if the curves do not cross.
- Only the EXIT curve of the inner decoder depends on the channel SNR!
[Figure: EXIT chart with the Inner Decoder and Outer Decoder transfer curves on the (IA, IE) axes.]
Modern Coding Theory
Error-Floor Performance: Uniform Interleaver Analysis
Modern Coding Theory
Turbo Codes: Uniform interleaver analysis
- The distance spectrum of a concatenated code ensemble can be analyzed using a uniform distribution of interleavers.
- A uniform interleaver is a probabilistic device that maps an input block of length N and weight w into each of its possible permutations with equal probability.
- The constituent encoders are decoupled by the uniform interleaver and can thus be considered to be independent.
- In this way, we obtain the average weight enumerating function (WEF) of the code ensemble, which represents the expected number of codewords with a given weight.
Modern Coding Theory
Turbo Codes: Uniform interleaver analysis
Example The average WEF of an SCC ensemble is given by
A^SCC_{w,d} = Σ_{d1=1}^{N1} [ A^{C_outer}_{w,d1} · A^{C_inner}_{d1,d} / (N1 choose d1) ].
The quantity A^SCC_{w,d} represents the expected number of codewords of weight d that are generated by weight-w input sequences.
[Diagram: a length-K, weight-w input block enters the Outer Encoder, producing a length-N1, weight-d1 codeword; after the interleaver Π, the Inner Encoder produces the length-N, weight-d codeword.]
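The averaging formula above is a direct sum over the intermediate weight d1; a minimal sketch (our own, with hypothetical toy enumerator numbers purely for illustration):

```python
from math import comb

def scc_average_wef(outer, inner, N1):
    """Average WEF of an SCC ensemble under the uniform interleaver:
    A[w][d] = sum_{d1} outer[w][d1] * inner[d1][d] / C(N1, d1).
    `outer` and `inner` are dicts {input_weight: {output_weight: count}}."""
    avg = {}
    for w, row in outer.items():
        for d1, a_out in row.items():
            for d, a_in in inner.get(d1, {}).items():
                avg.setdefault(w, {})
                avg[w][d] = avg[w].get(d, 0.0) + a_out * a_in / comb(N1, d1)
    return avg

# Hypothetical toy enumerators for an outer code of length N1 = 4
outer = {1: {2: 2}}          # two weight-2 outer codewords from weight-1 inputs
inner = {2: {3: 6, 4: 3}}    # inner output weights reachable from weight-2 inputs
A = scc_average_wef(outer, inner, N1=4)
print(A)   # {1: {3: 2.0, 4: 1.0}}
```

The division by (N1 choose d1) is what captures the uniform interleaver: each weight-d1 outer codeword is spread evenly over all weight-d1 inner inputs.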
Modern Coding Theory
Turbo Codes: The union bound
- The average WEF of the code ensemble can be used in the union bound to upper bound the bit error probability with maximum likelihood (ML) decoding as follows:
P_b ≤ (1/2) Σ_{d=1}^{N} Σ_{w=1}^{K} (w/K) · A_{w,d} · erfc( √(d·R·Eb/N0) ),
where R is the code rate and K is the information block length.
- P_b is dominated by the low-weight codeword terms. If their multiplicity decreases as the interleaver size increases, then the error probability also decreases and we say that the code exhibits interleaver gain.
- Iterative decoding of turbo codes behaves like ML decoding at high SNR.
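The bound is easy to evaluate numerically once a WEF is available; a short sketch (the WEF numbers below are hypothetical, for illustration only):

```python
from math import erfc, sqrt

def union_bound_ber(wef, K, R, EbN0_dB):
    """Union bound on the bit error probability under ML decoding:
    Pb <= 1/2 * sum_{w,d} (w/K) * A_{w,d} * erfc(sqrt(d * R * Eb/N0)).
    `wef` maps (w, d) -> multiplicity A_{w,d}."""
    EbN0 = 10.0 ** (EbN0_dB / 10.0)
    return 0.5 * sum((w / K) * a * erfc(sqrt(d * R * EbN0))
                     for (w, d), a in wef.items())

# Toy WEF dominated by one low-weight term (hypothetical numbers)
wef = {(1, 6): 3.0, (2, 10): 12.0}
pb = union_bound_ber(wef, K=100, R=0.5, EbN0_dB=3.0)
```

Because erfc falls off exponentially in its argument, the smallest d in the WEF dominates the bound at high SNR, which is why low-weight multiplicities control the error floor.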
Modern Coding Theory
Turbo Codes: The union bound
[Figure: P_b(E) vs. Eb/N0 (dB), comparing a PCCC simulation with interleaver size 10000 against union bounds for interleaver sizes 100, 1000, and 10000, and against a (3,1,4) convolutional code simulation.]
Modern Coding Theory
Asymptotic Distance Properties of Turbo Codes
Modern Coding Theory
Asymptotic distance properties of PCCs
- [Breiling, 2004] The minimum distance of a PCC with K = 2 constituent encoders and block length N is upper bounded by d_min(N) ≤ O(ln N).
- [Kahale, Urbanke; 1998] The minimum distance of a multiple PCC with K > 2 branches grows as d_min(N) = O(N^((K−2)/K)).
Modern Coding Theory
Asymptotic distance properties of SCCs
- [Kahale, Urbanke; 1998] and [Perotti, Benedetto; 2006] The minimum distance of an SCC grows as d_min(N) = O(N^α), where
(d^C0_min − 2)/d^C0_min < α < (d^C0_min − 1)/d^C0_min,
and d^C0_min is the minimum distance of the outer code.
[Diagram: an RMA encoder chain of length N — a rate-1/q repetition code Rep_q followed by m interleaved accumulators Acc.]
Modern Coding Theory
Asymptotic distance properties of Multiple Serially Concatenated Codes (MSCCs)
- Parallel and single serially concatenated codes are not asymptotically good, i.e., their minimum distance does not grow linearly with block length as the block length goes to infinity.
- Multiple serially concatenated codes (MSCCs) with 3 or more serially concatenated encoders can be asymptotically good.
- The most interesting MSCCs are repeat multiple accumulate (RMA) codes, due to their simple constituent encoders.
For the ensemble of RMA codes:
- It was shown in [Pfister; 2003] that the minimum distance of RMA codes grows linearly with block length for d^C0_min ≥ 2.
- It was shown in [Pfister and Siegel; 2003] that the distance growth rate for an infinite number of accumulators meets the Gilbert-Varshamov bound (GVB).
- Distance growth rates for any finite number of accumulators were presented in [Fagnani and Ravazzi; 2008] and [Kliewer, Zigangirov, Koller, and Costello; 2008].
Modern Coding Theory
Asymptotic distance growth rate coefficients
- We observe that the minimum distance growth rates are close to the GVB.
- The growth rates get closer to the GVB when the number of serially concatenated component encoders or the minimum distance of the outer code increases.
[Figure: minimum distance growth rate δ_min vs. rate R for RqAA and RqAAA codes (q = 2, 3, 4, 5, 6) and for [1/(1+D)]AA and [1/(1+D)]AAA codes, compared against the GVB.]
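The GVB curve in the figure is defined implicitly by R = 1 − h(δ), with h the binary entropy function; a short sketch solves for the GV relative distance by bisection:

```python
import math

def binary_entropy(x: float) -> float:
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def gvb_growth_rate(R: float, tol: float = 1e-12) -> float:
    """Gilbert-Varshamov relative distance delta_GV, i.e., the unique
    delta in (0, 1/2) solving R = 1 - h(delta), found by bisection."""
    lo, hi = 1e-15, 0.5 - 1e-15
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if 1.0 - binary_entropy(mid) > R:
            lo = mid          # rate slack remains, delta can still grow
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For R = 1/2 this gives δ_GV ≈ 0.11, the benchmark the RMA growth rates in the figure are approaching.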
Modern Coding Theory
LDPC Codes
Modern Coding Theory
LDPC Codes
Definition An LDPC block code is a linear block code whose parity-check matrix H has a small number of ones in each row and column.
An (N, J, K)-regular LDPC block code is a linear block code of length N and rate R ≥ 1 − J/K whose parity-check matrix H has exactly J ones in each column and K ones in each row, where J, K << N.
Modern Coding Theory
LDPC Codes: Parity-check matrix representation
Example The parity-check matrix of a rate R = 1/2, (10,3,6)-regular LDPC block code:

H =
[ 1 0 1 1 1 0 1 1 0 0 ]
[ 0 1 1 0 0 1 1 0 1 1 ]
[ 1 1 0 1 1 1 0 1 0 0 ]
[ 0 1 1 0 1 0 1 0 1 1 ]
[ 1 0 0 1 0 1 0 1 1 1 ]

H consists of 10 columns and 5 rows corresponding to 10 code symbols and 5 parity-check equations. Each code symbol is included in J = 3 parity-check equations, and each parity-check equation contains K = 6 code symbols.
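The regularity claim is easy to verify mechanically: every row of H should sum to K = 6 and every column to J = 3.

```python
# The example (10,3,6)-regular parity-check matrix from the slide
H = [
    [1,0,1,1,1,0,1,1,0,0],
    [0,1,1,0,0,1,1,0,1,1],
    [1,1,0,1,1,1,0,1,0,0],
    [0,1,1,0,1,0,1,0,1,1],
    [1,0,0,1,0,1,0,1,1,1],
]
row_weights = [sum(row) for row in H]       # ones per parity-check equation
col_weights = [sum(col) for col in zip(*H)] # ones per code symbol
print(row_weights)  # [6, 6, 6, 6, 6]
print(col_weights)  # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
```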
Modern Coding Theory
LDPC Codes: Tanner graph representation
Definition A Tanner graph is a bipartite graph such that
- Each code symbol is represented by a "variable node";
- Each parity-check equation is represented by a "check node";
- An edge connects a variable node to a check node if and only if the corresponding code symbol participates in the corresponding parity-check equation.
Each variable node has degree J and each check node has degree K for an (N, J, K)-regular code.
Example The rate R = 1/2, (10,3,6)-regular LDPC block code above: the parity-check matrix and its corresponding Tanner graph.
[Figure: Tanner graph with 10 variable nodes and 5 check nodes.]
Modern Coding Theory
LDPC Codes: Cycles in Tanner graphs
Definition A cycle of length 2L in a Tanner graph is a path consisting of 2L edges such that the start node and end node are the same.
Definition The length of the shortest cycle in a Tanner graph is called the girth.
[Figure: the Tanner graph of the (10,3,6) code above, highlighting a cycle of length 4 and a cycle of length 6.]
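Length-4 cycles can be detected directly from the parity-check matrix: a Tanner graph has a 4-cycle exactly when two columns of H share a "1" in two or more rows. A small sketch:

```python
from itertools import combinations

def has_length4_cycle(H):
    """True iff some pair of columns of H overlaps in >= 2 rows,
    i.e., the Tanner graph contains a cycle of length 4."""
    cols = list(zip(*H))
    for c1, c2 in combinations(range(len(cols)), 2):
        if sum(a & b for a, b in zip(cols[c1], cols[c2])) >= 2:
            return True
    return False
```

Applied to the (10,3,6) example matrix above, this reports a 4-cycle (for instance, columns 1 and 4 both check rows 1, 3, and 5), which is why code constructions usually try to avoid such column overlaps.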
Modern Coding Theory
LDPC Codes: Message-passing decoding
- Message passing (belief propagation) is an iterative decoding algorithm that uses the structure of the Tanner graph.
In each iteration of the algorithm:
- Each variable node sends a message ("extrinsic information") to each check node it is connected to;
- Each check node sends a message ("extrinsic information") to each variable node it is connected to;
- For each code symbol (variable node), we compute the a posteriori probability that the symbol takes on the value "1", given all the received symbols and that all the parity-check equations are satisfied.
- The most commonly used message is the log-likelihood ratio (LLR) of a symbol. It is calculated from the probabilities p0 and p1 that the symbol takes on the values 0 and 1, respectively, and is given by log(p0/p1).
At each decoding iteration:
- Each variable node computes an "extrinsic" LLR for each neighboring edge based on what was received from the "other" neighboring edges.
- The variable node computation includes the incoming message from the channel.
At each decoding iteration (continued...):
- Each check node computes an "extrinsic" LLR for each neighboring edge based on what was received from the "other" neighboring edges.
For example, the message from check node 3 to variable node 8 is
n_{3,8} = log [ (1 + tanh(m_{1,3}/2)·tanh(m_{2,3}/2)·tanh(m_{4,3}/2)·tanh(m_{5,3}/2)·tanh(m_{6,3}/2)) / (1 − tanh(m_{1,3}/2)·tanh(m_{2,3}/2)·tanh(m_{4,3}/2)·tanh(m_{5,3}/2)·tanh(m_{6,3}/2)) ],
where m_{j,3} is the incoming message from variable node j.
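The tanh rule above can be implemented compactly, since log((1+p)/(1−p)) = 2·atanh(p); a minimal sketch:

```python
import math

def check_node_message(incoming):
    """Check-to-variable message from the tanh rule:
    n = log((1 + prod tanh(m/2)) / (1 - prod tanh(m/2)))
      = 2 * atanh(prod tanh(m/2)),
    where `incoming` holds the LLRs from the *other* variable nodes."""
    p = 1.0
    for m in incoming:
        p *= math.tanh(m / 2.0)
    return 2.0 * math.atanh(p)
```

The output's sign is the product of the incoming signs (it enforces even parity), and its magnitude is no larger than that of the least reliable incoming message.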
After any number of iterations, decisions can be made on each variable node using a final update:
- Let L be the LLR computed at the variable node from the channel message and all incoming check messages.
- If L ≥ 0, then the variable node is decoded as a 0.
- Otherwise, the variable node is decoded as a 1.
- This assumes the mapping 0 → +1 and 1 → −1.
The following example illustrates a simple "bit-flipping" version of message-passing decoding for the binary symmetric channel (BSC). (On a BSC, "channel errors" occur independently with probability p < 0.5, and the transmitted bits are received correctly with probability 1 − p.) The code has four code symbols y1, y2, y3, y4 and three parity checks: y1 + y2 + y3, y1 + y3 + y4, and y2 + y3 + y4.
Received (corrupted) codeword: 1 0 0 1.
Iteration 1: evaluating the checks with y = (1, 0, 0, 1) gives
y1 + y2 + y3 = 1 + 0 + 0 = 1,  y1 + y3 + y4 = 1 + 0 + 1 = 0,  y2 + y3 + y4 = 0 + 0 + 1 = 1,
so the first and third checks are unsatisfied. Each check tells each of its variable nodes to "Flip" if it is unsatisfied and to "Stay" otherwise: y1 receives (Flip, Stay), y2 receives (Flip, Flip), y3 receives (Flip, Stay, Flip), and y4 receives (Stay, Flip). Only y2, whose votes are unanimously "Flip", is flipped.
Iteration 2: with y = (1, 1, 0, 1),
y1 + y2 + y3 = 1 + 1 + 0 = 0,  y1 + y3 + y4 = 1 + 0 + 1 = 0,  y2 + y3 + y4 = 1 + 0 + 1 = 0,
so all checks are satisfied, every node votes "Stay", and decoding stops.
Decoded codeword: 1 1 0 1.
The author acknowledges Dr. Lara Dolecek's permission to use this example in this talk.
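The walkthrough above can be sketched as a small decoder. The unanimous-flip rule below is our reading of the slides' voting pattern; other bit-flipping variants use majority votes instead.

```python
def bit_flip_decode(y, checks, max_iters=10):
    """Simple bit-flipping decoding for the BSC: each unsatisfied check
    votes 'flip' for all of its bits, and a bit is flipped only when
    every check it participates in votes 'flip' (unanimous rule)."""
    y = list(y)
    for _ in range(max_iters):
        syndromes = [sum(y[i] for i in c) % 2 for c in checks]
        if not any(syndromes):
            break                      # all checks satisfied: stop
        for bit in range(len(y)):
            votes = [syndromes[j] for j, c in enumerate(checks) if bit in c]
            if votes and all(votes):   # unanimous 'flip'
                y[bit] ^= 1
    return y

# The three checks from the example: y1+y2+y3, y1+y3+y4, y2+y3+y4
checks = [(0, 1, 2), (0, 2, 3), (1, 2, 3)]
print(bit_flip_decode([1, 0, 0, 1], checks))   # [1, 1, 0, 1]
```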
Modern Coding Theory
Waterfall Performance: Density Evolution
Modern Coding Theory
LDPC Codes: Iterative decoding threshold
Observation As the block length N of an LDPC block code increases, its performance on a BPSK-modulated AWGN channel improves, up to a limit.
- This limit is called the iterative decoding threshold, and it depends on the channel quality, specified in terms of the SNR for an AWGN channel, and the "degree profile" of the code.
- If the SNR falls below the threshold, then iterative decoding will fail, and the BER will be bounded away from zero.
Example The iterative decoding threshold for (3,6)-regular LDPC block codes is Eb/N0 = 1.11 dB (0.93 dB from capacity).
[Figure (courtesy T. E. Fuja, University of Notre Dame): comparing the performance of (3,6)-regular LDPC block codes with block lengths 2k, 10k, and 200k against the (3,6) iterative decoding threshold and the rate R = 1/2 BPSK AWGN channel capacity (at Eb/N0 = 0.184 dB).]
Modern Coding Theory
LDPC Codes: Density evolution
- Messages exchanged over the Tanner graph are random variables with probability density functions (PDFs) that evolve at each iteration.
- Density evolution is a technique used to compute the thresholds of LDPC block code ensembles.
- The channel SNR is fixed and the PDFs of the messages exchanged over the Tanner graph* are tracked.
- The lowest channel SNR at which the PDFs "converge" to the correct values is called the iterative decoding threshold.
* The Tanner graph is assumed to be cycle-free, i.e., it corresponds to a very long (infinite-length) code with very large (infinite) girth.
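On the binary erasure channel the PDFs collapse to a single erasure probability, so density evolution becomes a scalar recursion. The talk works with the AWGN channel, but this BEC special case is a compact illustration of the same threshold idea:

```python
def bec_converges(eps, J=3, K=6, iters=5000, tol=1e-9):
    """Density evolution for a (J,K)-regular ensemble on the binary
    erasure channel: the erasure probability of a variable-to-check
    message evolves as x_{l+1} = eps * (1 - (1 - x_l)^(K-1))^(J-1)."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (K - 1)) ** (J - 1)
        if x < tol:
            return True    # erasures die out: decoding succeeds
    return False

# Bisect for the (3,6) BEC threshold (known to be about 0.429)
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2.0
    if bec_converges(mid):
        lo = mid
    else:
        hi = mid
threshold = (lo + hi) / 2.0
```

The bisection locates the largest channel erasure probability for which the recursion converges, which is exactly the role the lowest converging SNR plays in the AWGN analysis above.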
Modern Coding Theory
LDPC Codes: Irregular LDPC block codes
- By varying the "degree profiles" (the distribution of variable and check node degrees), irregular LDPC code ensembles with better iterative decoding thresholds than regular ensembles can be found.
- Global optimization techniques can be used in conjunction with density evolution to find optimum degree profiles that minimize iterative decoding thresholds.
Modern Coding Theory
LDPC Codes: Irregular LDPC block codes
[Figure: BER vs. Eb/N0 (dB) for the (200000,3,6)-regular LDPC code and an irregular LDPC code with block length 240000, with the iterative decoding threshold of each ensemble marked.]
Modern Coding Theory
LDPC Codes: Protograph-based codes
- Regular or irregular ensembles of LDPC block codes can be constructed from a projected graph (protograph) using a copy-and-permute operation.
[Figure: a small protograph with variable nodes 0, 1, 2, 3 and check nodes A, B, C is copied several times, and the edges within each edge class are permuted among the copies to form a larger Tanner graph.]
- An ensemble of protograph-based LDPC block codes is obtained by placing random permutors on each edge of the protograph.
- The copy-and-permute operation preserves the degree profile of a Tanner graph.
- Since the iterative decoding threshold is a function of the degree profile only, an ensemble of protograph-based block codes has the same threshold as the underlying protograph.
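The copy-and-permute operation can be sketched as a matrix "lifting": each 1 in the protograph's base matrix becomes a random M x M permutation matrix. This sketch assumes a 0/1 base matrix (protographs with parallel edges would need permutation sums):

```python
import random

def lift_protograph(B, M, seed=0):
    """Copy-and-permute: lift a 0/1 base matrix B by a factor M,
    replacing each 1 with a random M x M permutation matrix and
    each 0 with the M x M zero matrix."""
    rng = random.Random(seed)
    rows, cols = len(B), len(B[0])
    H = [[0] * (cols * M) for _ in range(rows * M)]
    for i in range(rows):
        for j in range(cols):
            if B[i][j]:
                perm = list(range(M))
                rng.shuffle(perm)
                for k in range(M):
                    H[i * M + k][j * M + perm[k]] = 1
    return H

B = [[1, 1, 1, 0], [0, 1, 1, 1]]   # a small illustrative protograph
H = lift_protograph(B, M=4)
```

Each lifted row and column keeps the degree of its base row or column, which is the degree-profile preservation the slide relies on.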
Modern Coding Theory
LDPC Codes: Convolutional counterparts

77

A (time-varying) convolutional code is the set of sequences $v$ satisfying the equation $v H_{\mathrm{conv}}^T = 0$, where the parity-check matrix is given by

$$
H_{\mathrm{conv}} =
\begin{bmatrix}
\ddots & & \ddots & & & \\
H_{m_s}(t) & \cdots & H_0(t) & & & \\
& H_{m_s}(t+1) & \cdots & H_0(t+1) & & \\
& & \ddots & \ddots & \ddots & \\
& & H_{m_s}(t+m_s) & \cdots & H_0(t+m_s) & \\
& & & \ddots & & \ddots
\end{bmatrix}
$$

$H_{\mathrm{conv}}^T$ is the syndrome former (transposed parity-check) matrix and, for a rate $R = b/c$ code, the elements $H_i(t)$, $i = 0, 1, \ldots, m_s$, are binary $(c-b) \times c$ submatrices.

The value $m_s$, called the syndrome former memory, is determined by the maximal width (in submatrices) of the nonzero area in the matrix $H_{\mathrm{conv}}$.
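The sliding, band-diagonal structure of $H_{\mathrm{conv}}$ can be sketched in code. The submatrices below are random placeholders, not an actual code design; the point is only the placement of $H_{m_s}(t), \ldots, H_0(t)$ along the diagonal band:

```python
import numpy as np

# Sketch of a finite window of the convolutional parity-check matrix
# H_conv.  For a rate R = b/c code each submatrix H_i(t) is
# (c - b) x c; row block t contains H_ms(t) ... H_0(t), ending in
# column block t.  The window is truncated at t = 0.

rng = np.random.default_rng(1)
ms, c, b = 2, 2, 1          # syndrome former memory; R = b/c = 1/2
T = 5                       # number of time instants shown

def H_conv_window(T, ms, c, b):
    H = np.zeros(((c - b) * T, c * T), dtype=int)
    for t in range(T):
        for i in range(min(ms, t) + 1):
            # H_i(t) occupies column block t - i of row block t
            Hi = rng.integers(0, 2, size=(c - b, c))  # placeholder entries
            H[t*(c-b):(t+1)*(c-b), (t-i)*c:(t-i+1)*c] = Hi
    return H

H = H_conv_window(T, ms, c, b)
# Nonzeros in row block t are confined to one constraint length,
# nu_s = (ms + 1) * c columns.
print(H)
```

Checking the band structure numerically confirms the definition on the next slide: the nonzero area of each row block spans at most $\nu_s = (m_s+1)c$ symbols.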
Modern Coding Theory
LDPC Codes: Convolutional counterparts

78

A (time-varying) convolutional code is the set of sequences $v$ satisfying the equation $v H_{\mathrm{conv}}^T = 0$, where the parity-check matrix $H_{\mathrm{conv}}$ has the sliding structure shown on the previous slide.

The constraint length is equal to the maximal width (in symbols) of the nonzero area in the matrix $H_{\mathrm{conv}}$ and is given by $\nu_s = (m_s + 1)c$.

If the submatrices satisfy $H_i(t) = H_i(t + P)$, $i = 0, 1, \ldots, m_s$, for all $t$, the code is periodically time-varying with period $P$.

If the submatrices $H_i(t)$, $i = 0, 1, \ldots, m_s$, do not depend on $t$, the code is time-invariant.
Modern Coding Theory
LDPC Codes: Convolutional counterparts

79

The parity-check matrix $H_{\mathrm{conv}}$ of a $(\nu_s, J, K)$-regular low-density parity-check (LDPC) convolutional code has exactly $K$ ones in each row and $J$ ones in each column, where $J, K \ll \nu_s$.

An LDPC convolutional code is called irregular if its row and column weights are not constant.
Modern Coding Theory
LDPC Codes: Convolutional counterparts

80

Example: a $(\nu_s, J, K) = (10, 3, 6)$ time-varying LDPC convolutional code with $m_s = 4$, $c = 2$, $b = 1$, $R = 1/2$.

[The slide shows the binary matrix $H_{\mathrm{conv}}$: a diagonal band of width $\nu_s = (m_s + 1)c = 10$ symbols with $K = 6$ ones in each row and $J = 3$ ones in each column.]
Graph Structure of LDPC Convolutional Codes

81

Example: a time-invariant LDPC convolutional code with rate $R = 1/3$ and $m_s = 2$.

The Tanner graph has infinitely many nodes and edges.

Node connections are localized within one constraint length.

[Figure: a section of the infinite Tanner graph, showing node connections spanning $m_s + 1$ time instants.]
[Figure: pipeline decoder with processors $1, 2, \ldots, I$ operating on a sliding decoding window of size $\nu_s \cdot I$ received symbols; received symbols $r_t$ enter at one end and decoded symbols $v_{t - I(m_s+1)+1}$ exit at the other.]

Taking advantage of the localized structure of the graph, $I$ identical, independent processors can be pipelined to perform $I$ iterations in parallel.

The pipeline decoder operates on a finite window of $\nu_s \cdot I$ received symbols that slides along the received sequence.

Once an initial decoding delay (latency) of $\nu_s \cdot I$ received symbols has elapsed, the decoder produces a continuous output stream.

Pipeline Decoder Architecture for LDPC Convolutional Codes

82
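The index arithmetic of the sliding window can be sketched as follows. This shows only the scheduling, not the message-passing updates, and the layout (processor p working one constraint length behind processor p-1) is an assumed simplification for illustration:

```python
# Schematic of pipeline-decoder scheduling: I identical processors each
# perform one iteration, spaced one constraint length nu_s apart inside
# a window of nu_s * I symbols that slides with the received stream.

def processor_positions(t, I, nu_s):
    """Symbol ranges (start, end) handled by each of the I processors
    when t symbols have been received; processor p (0-indexed) works
    nu_s * p symbols behind the newest data."""
    return [(t - nu_s * (p + 1), t - nu_s * p) for p in range(I)]

nu_s, I = 6, 4              # constraint length (symbols), iterations
t = 40                      # symbols received so far
print(processor_positions(t, I, nu_s))
```

The last processor finishes a symbol $\nu_s \cdot I$ positions behind the newest input, which matches the latency stated above.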
Modern Coding Theory
LDPC Convolutional Codes: Iterative decoding threshold
83
For terminated transmission over an AWGN channel, regular LDPC convolutional codes have been shown to achieve better iterative decoding thresholds than their block code counterparts:
[Lentmaier, Sridharan, Zigangirov, Costello, ISIT 2005]
(J, K)    (Eb/N0)conv    (Eb/N0)block
(3,6)     0.46 dB        1.11 dB
(4,8)     0.26 dB        1.61 dB
(5,10)    0.21 dB        2.04 dB
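For context, all three $(J, K)$ pairs have design rate $R = 1 - J/K = 1/2$, for which the binary-input AWGN Shannon limit is about 0.19 dB. Even the simpler unconstrained-input limit is easy to compute (helper name is mine):

```python
import math

# Unconstrained-input Shannon limit on Eb/N0 for a rate-R code on the
# AWGN channel: setting capacity C = (1/2) log2(1 + 2 R Eb/N0) equal
# to R and solving gives
#     (Eb/N0)_min = (2**(2R) - 1) / (2R).
# The binary-input limit is slightly higher (~0.19 dB at R = 1/2).

def shannon_limit_db(R):
    return 10 * math.log10((2 ** (2 * R) - 1) / (2 * R))

for J, K in [(3, 6), (4, 8), (5, 10)]:
    R = 1 - J / K                 # design rate of a (J,K)-regular code
    print((J, K), round(shannon_limit_db(R), 3))
```

Against either limit, the convolutional thresholds in the table sit well inside 0.5 dB, while the block-code thresholds are 1 dB or more away.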
Modern Coding Theory
Error-Floor Performance:
Pseudo- and Near-Codeword Analysis
84
Modern Coding Theory
Iterative Decoding: Capacity-approaching performance

85

The requirements of capacity-approaching code performance can be summarized as follows: a very large block length, a sparse code representation*, and iterative decoding together yield capacity-approaching performance.

* Good sparse codes are typically designed using EXIT charts or density evolution. These codes exhibit capacity-approaching performance in the waterfall region. However, they can have much worse performance when lower BER values are required, i.e., they can exhibit error floors.
86
Error floors of (3,6)-regular LDPC codes for block lengths between 2048 and 8096.
[T. Richardson, “Error floors of LDPC codes,” Allerton 2003]
Modern Coding Theory
Iterative Decoding: Capacity-approaching performance
87
The error floor behavior observed in LDPC code performance curves is commonly attributed both to properties of the code itself and to the sparse parity-check representation used for decoding.
These include
pseudo-codewords (due to locally operating decoding algorithms)
near-codewords (due to trapping sets)
poor asymptotic distance properties (effects can be seen in the ML decoding union bound)
Modern Coding Theory
Iterative Decoding: Pseudo-codewords
88
Message-passing iterative decoders for LDPC codes are known to be subject to decoding failures due to pseudo-codewords. These failures can cause performance at high channel SNR to be worse than that predicted by the ML decoding union bound, resulting in the appearance of error floors.
The low complexity of message-passing decoding comes from the fact that the algorithm operates locally on the Tanner graph representation.
However, since it works locally, the algorithm cannot distinguish if it is acting on the graph itself, or some finite cover of the graph.
Pseudo-codewords are defined as codewords in codes corresponding to graph covers.
Modern Coding Theory
Iterative Decoding: Pseudo-codewords
89
Consider the same 3-cover of the protograph given earlier, this time with the nodes reordered so that similar nodes are grouped together.
[Figure: the 3-cover with the copies of variable nodes 0, 1, 2, 3 and check nodes A, B, C grouped by type.]
Modern Coding Theory
Iterative Decoding: Pseudo-codewords

90

Consider the following assignment of values to the variable nodes of the 3-cover.

[Figure: one copy each of variable nodes 0, 1, and 3 is assigned the value 1; all other variable-node copies are assigned 0.]

The assignment ({1,0,0},{1,0,0},{0,0,0},{1,0,0}) is a valid codeword in the cubic cover. Therefore $\omega = (\tfrac{1}{3}, \tfrac{1}{3}, 0, \tfrac{1}{3})$ is a valid pseudo-codeword of the original graph.
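The graph-cover view can be made concrete with a small brute-force experiment: lift a parity-check matrix to a random 3-cover, enumerate every codeword of the cover code, and average each codeword over the copies. The matrix below is an illustrative example, not the slides' protograph:

```python
import itertools
import numpy as np

# Pseudo-codewords as codewords of a graph cover.  We lift a small
# illustrative H to a random 3-cover, enumerate the cover code by
# brute force, and collect averaged assignments ("pseudo-codewords")
# with fractional entries: these satisfy every check locally but
# correspond to no codeword of the base code.

rng = np.random.default_rng(7)
H = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]])
N = 3                                    # cover degree

m, n = H.shape
I = np.eye(N, dtype=int)
Hc = np.zeros((m * N, n * N), dtype=int)
for i in range(m):
    for j in range(n):
        if H[i, j]:
            Hc[i*N:(i+1)*N, j*N:(j+1)*N] = I[rng.permutation(N)]

pseudo = set()
for bits in itertools.product([0, 1], repeat=n * N):
    v = np.array(bits)
    if not ((Hc @ v) % 2).any():         # valid codeword in the cover
        omega = tuple(v.reshape(n, N).sum(axis=1) / N)
        if any(0 < w < 1 for w in omega):
            pseudo.add(omega)

print(len(pseudo), sorted(pseudo)[:3])
```

Any cover codeword whose block averages are all 0 or 1 is just a repeated base codeword; since the cover code here is strictly larger than the repetitions, fractional pseudo-codewords must appear, mirroring the $\omega = (\tfrac{1}{3}, \tfrac{1}{3}, 0, \tfrac{1}{3})$ example above.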
Modern Coding Theory
Iterative Decoding: Near-codewords
With message-passing decoding of LDPC codes, error floors are usually observed when the decoding algorithm converges to a near-codeword, i.e., a sequence that satisfies most of the parity-check equations, but not all of them.
This behavior is usually attributed to so-called trapping sets in the Tanner graph representation.
[T. Richardson, “Error floors of LDPC codes,” Allerton 2003]

91
Modern Coding Theory
Iterative Decoding: Near-codewords

92

An (a,b)-generalized trapping set is a set of “a” variable nodes such that the induced subgraph includes “b” check nodes of odd degree.

Example: Consider the following (3,1)-generalized trapping set.

[Figure: three variable nodes whose induced subgraph contains several even-degree check nodes and a single odd-degree check node.]

If these three variable nodes are “erroneously” received, it is hard for the decoding algorithm to converge to their correct values, since only one of the neighboring check nodes can detect a problem.
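The definition translates directly into a brute-force search: for each subset of "a" variable nodes, count the checks with odd degree in the induced subgraph. The matrix below is a small made-up example containing a (3,1) set; real searches use far larger codes and smarter enumeration:

```python
import itertools
import numpy as np

# Brute-force search for (a,b)-generalized trapping sets: subsets of
# "a" variable nodes whose induced subgraph has exactly "b" odd-degree
# check nodes.  H is an illustrative 4x6 matrix, not a code from the
# slides; columns are variable nodes, rows are check nodes.

H = np.array([[1, 0, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1],
              [0, 0, 1, 1, 1, 1]])

def trapping_sets(H, a, b):
    m, n = H.shape
    found = []
    for S in itertools.combinations(range(n), a):
        deg = H[:, S].sum(axis=1)        # check degrees in induced subgraph
        if int((deg % 2 == 1).sum()) == b:
            found.append(S)
    return found

# Variables {0,1,2} touch check 3 once and checks 0-2 twice each,
# so they form a (3,1)-generalized trapping set in this H.
print(trapping_sets(H, 3, 1))
```

A set with b = 0 is simply a codeword support (all induced check degrees even), which is why small trapping sets with small b are almost as damaging to message-passing decoding as low-weight codewords.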
Modern Coding Theory
Asymptotic Distance Properties of
LDPC Codes
93
Modern Coding Theory
Asymptotic distance properties
94
Randomly constructed code ensembles usually have asymptotically good distance properties, i.e., their minimum distance grows linearly with block length as the block length goes to infinity.
The ensemble of random linear block codes meets the Gilbert-Varshamov bound, the best known lower bound on the asymptotic distance growth rate.
Sparse codes, on the other hand, do not have asymptotically good distance properties in general.
Regular LDPC codes usually are asymptotically good, whereas only some ensembles of irregular LDPC codes possess this property.
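The Gilbert-Varshamov growth rate mentioned above is the value $\delta_{GV}$ solving $R = 1 - h(\delta)$, with $h$ the binary entropy function. A small sketch (function names are mine):

```python
import math

# Gilbert-Varshamov distance growth rate: random linear codes of rate
# R have minimum distance d >= delta_GV * n asymptotically, where
# delta_GV solves R = 1 - h(delta) on [0, 1/2].

def h(p):
    """Binary entropy function."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def delta_gv(R, precision=1e-9):
    """Bisect for delta_GV; h is increasing on [0, 1/2]."""
    lo, hi = 0.0, 0.5
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if 1 - h(mid) > R:
            lo = mid
        else:
            hi = mid
    return lo

print(round(delta_gv(0.5), 4))   # ~0.11 at rate 1/2
```

At rate 1/2, $\delta_{GV} \approx 0.11$, well above the growth rate $\delta_{\min} = 0.023$ of the (3,6)-regular LDPC ensemble quoted later, which quantifies the distance penalty paid for sparsity.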
Modern Coding Theory
Asymptotic distance properties
Regular LDPC code ensembles are asymptotically good, i.e., their minimum distance grows linearly with block length [Gallager, 1962].
Expander codes (generalized LDPC codes) are also asymptotically good if the expanded check nodes (subcodes) have $d_{\min} \ge 3$ [Barg, Zémor, 2004].

Some irregular LDPC code ensembles can be asymptotically good [Litsyn, Shevelev, 2003], [Di, Richardson, Urbanke, 2006].

Some irregular LDPC code ensembles based on protographs can also be asymptotically good [Divsalar, Jones, Dolinar, Thorpe, 2005].
95
Asymptotic distance properties of LDPC block codes
Modern Coding Theory
Asymptotic distance properties

96

Asymptotic distance properties of LDPC codes

For an ensemble of LDPC block codes: $d_{\min} \ge \delta_{\min} N$ as $N \to \infty$ for asymptotically good codes.

For an ensemble of LDPC convolutional codes: $d_{\mathrm{free}} \ge \delta_{\mathrm{free}} \nu_s$ as $\nu_s \to \infty$ for asymptotically good codes.

For regular ensembles, LDPC convolutional codes can achieve larger distance growth rates than their block code counterparts:

(J, K)    δ_min    δ_free
(3,6)     0.023    0.083
(4,8)     0.063    0.191

[Sridharan, Truhachev, Lentmaier, Costello, Zigangirov, IT Trans. 2007]
Modern Coding Theory
Asymptotic distance properties
The random permutation representation of protograph-based LDPC block codes allows the uniform interleaver analysis to be used to compute their asymptotic distance growth rates [Divsalar, 2006].
This technique can be extended to analyze ensembles of unterminated, protograph-based, periodically time-varying LDPC convolutional codes.
The free distance growth rates of the LDPC convolutional code ensembles exceed the minimum distance growth rates of the corresponding LDPC block code ensembles [Mitchell et al., 2008].
97
Asymptotic distance properties of protograph-based LDPC convolutional codes
98

We have also calculated the growth rates for different rates, including rates greater than 1/2.

For the remaining ensembles, we calculated the lower bound on the free distance growth rate using the nonuniform cut method.

[Figure: block code minimum distance growth rate $\delta_{\min}$ (i.e., $d_{\min}/N$) compared with convolutional code free distance growth rate $\delta_{\mathrm{free}}$ (i.e., $d_{\mathrm{free}}/\nu_s$), from “Asymptotically Good LDPC Convolutional Codes Based on Protographs.”]