Page 1 of 37
Density Evolution, Capacity Limits, and the "5k" Code Result (L. Schirber 11/22/11)
• The Density Evolution (DE) algorithm calculates a "threshold Eb/N0 " - a performance bound between practical codes and the capacity limit - for classes of regular low-density parity-check (LDPC) codes [2].
– A BER simulation result for a R =1/2 code with N = 5040 is compared to the DE threshold Eb/N0 and to the “channel capacity” Eb/N0 .
• TOPICS
 – Result: Half-rate, "5k" Code: BER Curve and Bandwidth-Efficiency Plot
 – Channel Models and Channel Capacity [1]
  • Bandlimited Channels and Spectral Efficiency
 – Review of the LLR Decoder Algorithm [1]
 – Density Evolution Algorithm
  • Analytical Development [1], [3]
  • Algorithm Description and Examples: R = code rate = 1/3 and 1/2
[1] Error Correction Coding: Mathematical Methods and Algorithms, by Moon (2005)
[2] Gallager, "Low-Density Parity-Check Codes", MIT Press (1963)
[3] Barry, "Low-Density Parity-Check Codes", Georgia Institute of Technology (2001)
4-Cycle Removed vs Original LDPC Code: (N = 5040, M = 2520) Codes
• We try the 4-cycle removal algorithm (see Lecture 18) with a larger matrix.
• There is a slight improvement with 4-cycles removed, although there are only 11 word errors at BER = 1e-5.
• Note that in these cases every word error is a decoder failure (Nw = Nf ); hence there are no undetected codeword errors (Na1 = Na2 = 0).
2.7 million seconds ≈ 31 days, so the run for Eb/N0 = 1.75 dB took a CPU month on a PC!
[Figure: LDPC Code Comparison: N,K = 5040,2522 vs N,K = 5040,2520. BER and WER vs Eb/No (dB) for the 4-cycle = 19 and 4-cycle = 0 codes, with the uncoded bit error rate for reference.]
N=1080 vs 5040 LDPC Codes : Gap to Capacity at a Given BER
• The half-rate, "5k" code is within about 1.6 dB of the Shannon limit Eb/N0 (magenta dash-dotted line) at BER = 1e-5.
• The Density Evolution (DE) threshold Eb/N0 (black dashed line) is also shown.
[Figure: LDPC Code Comparison: N,K = 1080,540 vs N,K = 5040,2520. BER vs Eb/No (dB) for the N = 1080 and N = 5040 codes, with the uncoded bit error rate, the Shannon limit, the DE threshold, and the BER = 1e-5 line marked.]
5k Code Result in the Bandwidth Efficiency Plane
• BER performance is summarized by giving the required Eb/N0 to reach a certain BER level.
– We choose a 10^-5 BER level, and report a minimum Eb/N0 = 1.75 dB necessary for that BER for the 5k code.
• The "density evolution threshold" (red diamond) and the "capacity limit for a binary input AWGN channel" (green circle) are compared in the plot and table.
[Figure: Half-Rate Code Result: Bandwidth Efficiency Plane. ETA (bit/s/Hz) vs Eb/N0 (dB), showing the ETA curves for the AWGN channel and the binary-input AWGN channel, the capacity Eb/N0 (R = 1/2), the DE threshold Eb/N0 ((3,6)-regular), and the Eb/N0 for BER = 1e-5 (5k code).]
Channel Models and Channel Capacity
• We consider 3 "input X, output Y" channel models:
 – (1) the simple binary symmetric channel or BSC,
 – (2) the additive white Gaussian noise channel or AWGNC, and
 – (3) the binary-input AWGN channel or BAWGNC.
• We calculate the mutual information function I(X;Y) from the definitions for each channel. – The resulting channel capacity C (from [1]), the maximum possible mutual information over all input distributions for X , is given and plotted versus a SNR measure.
[Diagram: BSC model. Input X in {0,1}, output Y in {0,1}; each input is flipped with crossover probability p and passed through correctly with probability 1 - p.]
Calculation of Channel Capacity C: Binary Symmetric Channel Example (1 of 5)
• Exercise 1.31 (from [1]) For a BSC with crossover probability p having input X and output Y, let the probability of inputs be P(X = 0) = q and P(X = 1) = 1 - q.
– (a) Show that the mutual information is
  I(X;Y) = H(Y) + p log2 p + (1-p) log2(1-p)
– (b) By maximizing over q, show that the channel capacity per channel use is
  C = 1 - H2(p), with H2(p) = -p log2 p - (1-p) log2(1-p) the entropy of a binary source X.
solution (a): Start from the following general formula for mutual information:
  I(X;Y) = H(Y) - H(Y|X)   (1)
As X and Y are both {0,1} random variables, from the definition of conditional entropy H(Y|X):
  H(Y|X) = -Σ_i Σ_j P(y_j, x_i) log2 P(y_j|x_i) = -Σ_i Σ_j {P(y_j|x_i) P(x_i)} log2 P(y_j|x_i)
    = -{P(Y=0|X=0) P(X=0)} log2 P(Y=0|X=0)
      - {P(Y=0|X=1) P(X=1)} log2 P(Y=0|X=1)
      - {P(Y=1|X=0) P(X=0)} log2 P(Y=1|X=0)
      - {P(Y=1|X=1) P(X=1)} log2 P(Y=1|X=1)   (2)
Calculation of Channel Capacity: Binary Symmetric Channel Example (2 of 5)
solution (a) (continued):
  I(X;Y) = H(Y) - H(Y|X)   (1)
  H(Y|X) = -{P(Y=0|X=0)P(X=0)} log2 P(Y=0|X=0) - {P(Y=0|X=1)P(X=1)} log2 P(Y=0|X=1)
    - {P(Y=1|X=0)P(X=0)} log2 P(Y=1|X=0) - {P(Y=1|X=1)P(X=1)} log2 P(Y=1|X=1)   (2)
For the BSC we know the conditional probabilities, and P(X=0) = q; hence
  H(Y|X) = -{(1-p)q} log2(1-p) - {p(1-q)} log2 p - {pq} log2 p - {(1-p)(1-q)} log2(1-p)
    = -{pq + p(1-q)} log2 p - {(1-p)q + (1-p)(1-q)} log2(1-p)
    = -p log2 p - (1-p) log2(1-p)   (3)
Inserting (3) into (1) yields the result for the mutual information for the BSC:
  I(X;Y) = H(Y) + p log2 p + (1-p) log2(1-p)   (4) #
Calculation of Channel Capacity: Binary Symmetric Channel Example (3 of 5)
solution (b):
  I(X;Y) = H(Y) + p log2 p + (1-p) log2(1-p)   (1)
To find the capacity we need to maximize I(X;Y) over all input distributions, which we take to be over all values for q, where again q = P(X=0).
Lemma: Consider f(u) = -u log2 u - (1-u) log2(1-u) for 0 ≤ u ≤ 1. The maximum for f occurs at u = 1/2.
proof: Take the derivative of f:
  df/du = (1/ln 2)[-ln u - 1 + ln(1-u) + 1] = (1/ln 2) ln((1-u)/u)
As the derivative of f is positive for 0 < u < 1/2, zero for u = 1/2, and negative for 1/2 < u < 1, f(u) has an absolute maximum at u = 1/2. #
We need to express H(Y) in terms of p and q. Recall that we can express P(Y) from the joint probability of X and Y: P(Y = y_j) = Σ_i P(y_j, x_i).
Calculation of Channel Capacity: Binary Symmetric Channel Example (4 of 5)
solution (b) (continued):
  I(X;Y) = H(Y) + p log2 p + (1-p) log2(1-p)   (1)
  P(Y = y_j) = Σ_i P(y_j, x_i) = P(y_j|X=0) P(X=0) + P(y_j|X=1) P(X=1)   (2)
From (2) we can write down expressions for P(Y=0) (call it a) and P(Y=1) (call it b):
  P(Y=0) = P(Y=0|X=0) P(X=0) + P(Y=0|X=1) P(X=1) = (1-p)q + p(1-q) = q + p - 2pq   (3a)
  P(Y=1) = P(Y=1|X=0) P(X=0) + P(Y=1|X=1) P(X=1) = pq + (1-p)(1-q) = 1 - (q + p - 2pq)   (3b)
Then, using (3a) and (3b), the entropy for Y can be written as in (4):
  H(Y) = -Σ_j P(y_j) log2 P(y_j) = -u log2 u - (1-u) log2(1-u), with u = q + p - 2pq   (4)
Calculation of Channel Capacity: Binary Symmetric Channel Example (5 of 5)
solution (b) (continued):
  I(X;Y) = H(Y) + p log2 p + (1-p) log2(1-p)   (1)
  P(Y=0) = q + p - 2pq   (3a);  P(Y=1) = 1 - (q + p - 2pq)   (3b)
  H(Y) = -u log2 u - (1-u) log2(1-u), with u = q + p - 2pq   (4)
Next use the Lemma. The maximum for H(Y) occurs for u = 1/2, which also requires q = 1/2:
  u = q + p - 2pq = 1/2  ⇒  q = (1/2 - p)/(1 - 2p) = 1/2 (provided p ≠ 1/2)   (5)
We see that the maximum value for H(Y) is then 1 (bit), and we have derived the expression for the maximum mutual information for the BSC channel:
  C_BSC = max over P(X) of I(X;Y) = 1 + p log2 p + (1-p) log2(1-p) = 1 - H2(p)   (6) #
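The closed form in (6) is easy to check numerically. A minimal sketch (the function names are mine, not from the lecture):

```python
import math

def h2(p):
    """Binary entropy H2(p) in bits, with H2(0) = H2(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacity(p):
    """Eq. (6): C = 1 - H2(p), achieved by the uniform input q = 1/2."""
    return 1.0 - h2(p)
```

Note that the capacity is symmetric in p about 1/2: a BSC with crossover 0.9 is as useful as one with crossover 0.1 (invert the output).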
Calculation of Channel Capacity: Binary input AWGN Channel (BAWGNC) (1 of 3)
• Example 1.10. Suppose we have an input alphabet Ax = {-a, a} (e.g., BPSK modulation with amplitude a) with P(X = a) = P(X = -a) = 1/2. Let N ~ N(0, σ²) and Y = X + N. Find the mutual information and channel capacity.
solution: We start from the Kullback-Leibler form for mutual information:
  I(X;Y) = D( p(x,y) || P(x) p(y) ) = Σ_{x in Ax} ∫ p(x,y) log2 [ p(x,y) / (P(x) p(y)) ] dy   (1)
Use probability relations to express I(X;Y) in terms of conditional probabilities:
  I(X;Y) = Σ_{x in Ax} ∫ p(y|x) P(x) log2 [ p(y|x) / p(y) ] dy,
  where p(y) = Σ_{x' in Ax} p(y|x') P(x')   (2)
Next use that P(x) = 1/2 and that the conditional probabilities are Gaussian.
Calculation of Channel Capacity: Binary input AWGN Channel (BAWGNC) (2 of 3)
solution (continued): from (1) and (2), with P(x) = 1/2,
  I(X;Y) = (1/2) ∫ p(y|a) log2 [ p(y|a) / (0.5[p(y|a) + p(y|-a)]) ] dy
         + (1/2) ∫ p(y|-a) log2 [ p(y|-a) / (0.5[p(y|a) + p(y|-a)]) ] dy
    = (1/2) ∫ p(y|a) log2 p(y|a) dy + (1/2) ∫ p(y|-a) log2 p(y|-a) dy
      - ∫ {0.5[p(y|a) + p(y|-a)]} log2 {0.5[p(y|a) + p(y|-a)]} dy   (3)
Recall that H(Y) for a continuous random variable Y is H(Y) = -∫ p(y) log2 p(y) dy. Hence
  I(X;Y) = H(Y) - H(Y|X), with H(Y) = -∫ {0.5[p(y|a) + p(y|-a)]} log2 {0.5[p(y|a) + p(y|-a)]} dy   (4)
Calculation of Channel Capacity: Binary input AWGN Channel (BAWGNC) (3 of 3)
solution (continued):
  I(X;Y) = H(Y) - H(Y|X), with H(Y) = -∫ {0.5[p(y|a) + p(y|-a)]} log2 {0.5[p(y|a) + p(y|-a)]} dy   (4)
Recall that for a Gaussian PDF with variance σ² the differential entropy is known:
  H(Y|X) = (1/2) log2 2πeσ²   (5)
Define φ(y; a, σ²), which is the average of two Gaussians centered at +a/-a with the same σ²:
  φ(y; a, σ²) = (1/(2√(2πσ²))) [ e^{-(y-a)²/2σ²} + e^{-(y+a)²/2σ²} ]   (6)
Using (5) and (6) in (4) allows us to write
  I(X;Y) = -∫ φ(y; a, σ²) log2 φ(y; a, σ²) dy - (1/2) log2 2πeσ²   (7)
Claim: the capacity for the BAWGN channel is equal to (7), i.e.,
  C_BAWGNC = -∫ φ(y; a, σ²) log2 φ(y; a, σ²) dy - (1/2) log2 2πeσ²,
as the maximizing distribution is the uniform distribution on {-a, a}.
Aside: Probability Function φ(y; a, 1)
• The probability function φ is a function of y with two parameters: amplitude a and noise variance σ². We set σ² to 1 here for convenience.
• φ(y; a, σ²) is the average of two Gaussians - with separation 2a and common variance σ² - at a given y.
 – It has a shape resembling a single Gaussian with variance σ² for small SNR = a²/σ², and two separated Gaussians with variance σ² for large SNR.
  φ(y; a, σ²) = (1/(2√(2πσ²))) [ e^{-(y-a)²/2σ²} + e^{-(y+a)²/2σ²} ], with SNR = a²/σ²
With a = √SNR and σ² = 1:
  φ(y; √SNR, 1) = (1/(2√(2π))) [ e^{-(y-√SNR)²/2} + e^{-(y+√SNR)²/2} ]
The function φ is plotted versus y with σ² = 1 for various values of a.
[Figure: phi(y; a, 1) for unit variance, plotted vs y for a = 1/10, 0.8, 1.5, 3, and 7.]
Calculation of Channel Capacity: AWGN Channel
• Example 1.11. Let X ~ N(0, σx²) and N ~ N(0, σn²), independent of X. Let Y = X + N. Then Y ~ N(0, σx² + σn²). Find the mutual information and capacity.
solution: Start from one form for mutual information:
  I(X;Y) = H(Y) - H(Y|X)   (1)
Use properties of entropy and conditional entropy to re-express H(Y|X):
  H(Y|X) = H(X + N | X) = H(N | X) = H(N)   (2)
Lemma: X ~ N(0, σ²) ⇒ H(X) = (1/2) log2 2πeσ².
proof: H(X) = -E[log2 p(X)] = E[ (1/2) log2 2πσ² + (X²/(2σ²)) log2 e ]
  = (1/2) log2 2πσ² + (1/2) log2 e = (1/2) log2 2πeσ², using E[X²] = σ². #
Use (2) in (1) and apply the Lemma to find
  I(X;Y) = H(Y) - H(N) = (1/2) log2 2πe(σx² + σn²) - (1/2) log2 2πeσn² = (1/2) log2 (1 + σx²/σn²)   (3)
Claim: C_AWGNC = (1/2) log2 (1 + σx²/σn²), as the maximizing distribution is the Gaussian distribution.
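A quick numerical check of (3), plus the related limit quoted later in these slides: on the band-limited capacity curve with η = 2C, Eb/N0 = SNR/η tends to ln 2 (about -1.59 dB, the Shannon limit) as SNR → 0. Function names are mine:

```python
import math

def awgn_capacity(snr):
    """Eq. (3)/Claim: C = 0.5*log2(1 + SNR), bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

def ebn0_db_at_capacity(snr):
    """Eb/N0 (dB) on the band-limited AWGN capacity curve:
    eta = 2C and Eb/N0 = SNR/eta."""
    eta = 2.0 * awgn_capacity(snr)
    return 10.0 * math.log10(snr / eta)
```

Evaluating ebn0_db_at_capacity at very small SNR recovers the -1.59 dB Shannon limit numerically.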
Capacity vs SNR for 3 Channel Models (from [1])
• Capacity (bits/channel use) is determined for the
– Binary Symmetric Channel (BSC)
– AWGN Channel (AWGNC)
– Binary-input AWGN Channel (BAWGNC)
Let a = signal amplitude, σ² = noise variance, and SNR = a²/σ². Then:
  C_AWGNC = (1/2) log2(1 + SNR)   (1)
  C_BAWGNC = -∫ φ(y; a, 1) log2 φ(y; a, 1) dy - (1/2) log2 2πe,
    where φ(y; a, σ²) = (1/(2√(2πσ²))) [ e^{-(y-a)²/2σ²} + e^{-(y+a)²/2σ²} ]   (2)
  C_BSC = 1 + p log2 p + (1-p) log2(1-p), where p = crossover probability = Q(√SNR)   (3)
note: C is plotted against SNR between 0.001 and 10 (or -30 to 10 dB) for the 3 functions of SNR. The AWGN and BAWGN channel capacities are nearly the same for 0 ≤ SNR ≤ 1.
[Figure: Capacities for BSC, BAWGN, and AWGN Channels: Capacity (bits/channel use) vs SNR (0 to 10) for the AWGN channel, the binary-input AWGN channel, and the BSC (binary symmetric channel).]
Bandlimited Channel Analysis: Capacity Rate
• Assume that the channel is band-limited, i.e., the frequency content in any input, noise, or output signal is bounded above by frequency W in Hz.
– By virtue of the Nyquist-Shannon Sampling theorem, then it is sufficient to choose a sampling frequency of 2W to adequately sample X, the channel input signal.
• Recall that the channel has capacity C in units of bits or bits per channel use, which is the maximal mutual information between input X and output Y.
• We can define a "capacity rate" - denoted here by C' to differentiate it from capacity C - in bit/s as the maximum possible rate of transfer of information for the bandlimited channel:
  C' [bit/s] = C [bits/channel use] × 2W [channel uses/second], W a constant
• We define the spectral efficiency for a bandlimited channel as the ratio of the data rate (Rd) to W. The maximum spectral efficiency is equal to C'/W.
Aside: Shannon-Nyquist Sampling Theorem (from Wikipedia)
• The Nyquist-Shannon (or Shannon-Nyquist) sampling theorem states:
– Theorem: If a function s(t) contains no frequencies higher than W hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2W) seconds apart.
• Suppose a continuous time signal s(t) is sampled at a finite number (Npts) of equally-spaced time values with sampling interval Δt.
 – In other words, we are given a "starting time" t0 along with a sequence of values s[n] where s[n] = s(tn) for n = 1,2,…, Npts with tn = t0 + (n-1)Δt and where Δt = t2 - t1.
• If the signal is band-limited by W where W ≤ 1/(2Δt), then the theorem says we can reconstruct the signal exactly: i.e., given the Npts values s[n] we can infer what the (continuous) function s(t) has to be for all t.
Spectral Efficiency Curve: Band-limited AWGN Channel
• For a given Eb over N0 in dB, we find for the band-limited AWGN channel that there is a limiting value for spectral efficiency (measured in bit/second per hertz).
– In other words, we cannot expect to transmit at a bit rate (Rd) greater than that times W, with W the channel bandwidth.
• The Shannon limit is the minimum Eb / N0 for reliable communication.
[Figure: Spectral Efficiency vs Eb over N0 for the band-limited AWGN channel: ETA (bit/s/Hz) vs Eb/N0 (dB), with the "keep out" region to the left of the curve. Shannon limit: Eb/N0 = -1.59 dB for the band-limited AWGN channel.]
Spectral Efficiency for AWGN, BAWGN, and Quaternary-input AWGN Channels
• The maximum spectral efficiencies (η) for AWGN, binary input AWGN, and quaternary input AWGN channels are shown above. For large Eb/N0, η goes to 1 (bit/s)/Hz for the BAWGNC and 2 (bit/s)/Hz for the QAWGNC.
– Next we work through the details of constructing these curves.
[Figure: Spectral Efficiency vs Eb over N0, linear scale (left) and log scale (right): ETA (bit/s/Hz) vs Eb/N0 (dB) for the AWGN, binary-input AWGN, and quaternary-input AWGN channels.]
Procedure for Generating Spectral Efficiency vs SNR and Eb/N0 in dB Curves for Band-limited AWGN Channels
• 1. Choose a range of (receiver) SNR, e.g., SNR = [.001 : .01 : 10].
• 2. Find the capacity C = f(SNR) in bits/channel use for each SNR.
• 3. Determine the capacity bit rate C' = 2WC in bit/s, where the channel is used 2W times per second, with W the channel bandwidth.
• 4. Calculate the max spectral efficiency η = C'/W with units of (bit/s)/Hz.
• 5. For each SNR also determine the corresponding (received) Eb/N0 in dB:
Example 1: AWGN channel, band-limited, W = 1 Hz:
  1. SNR = [.001 : .01 : 10]
  2. C = (1/2) log2(1 + SNR) [bits/channel use]
  3. C' = 2WC = W log2(1 + SNR) [bit/s] (Shannon-Hartley Law)
  4. η = C'/W = log2(1 + SNR) [(bit/s)/Hz]
  5. Eb/N0 = SNR/η = SNR/log2(1 + SNR), which allows plots of η vs SNR or η vs Eb/N0
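Steps 1-5 can be sketched directly as a script (a hypothetical illustration; the variable names are mine):

```python
import math

# Sweep receiver SNR (step 1) and build the AWGN spectral-efficiency curve
# following steps 2-5; each entry is (Eb/N0 in dB, eta in (bit/s)/Hz).
snrs = [0.001 + 0.01 * k for k in range(1000)]      # step 1: SNR grid
curve = []
for snr in snrs:
    C = 0.5 * math.log2(1.0 + snr)                  # step 2: bits/channel use
    eta = 2.0 * C                                   # steps 3-4: C' = 2WC, eta = C'/W
    ebn0_db = 10.0 * math.log10(snr / eta)          # step 5: Eb/N0 = SNR/eta
    curve.append((ebn0_db, eta))
```

Plotting eta against ebn0_db reproduces the band-limited AWGN spectral-efficiency curve, with the left end approaching the -1.59 dB Shannon limit.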
[Figure: spectral efficiency (bit/s/Hz) vs SNR, and spectral efficiency vs Eb/N0 (dB), for the AWGN channel.]
• 6. Plot η vs SNR and η vs Eb/N0 in dB, using Eb/N0 = SNR/η.
Generating Spectral Efficiency Curves, Example 2: Band-limited, Binary-input AWGN Channel
Example 2: Binary-input AWGN channel, band-limited; W = 1 Hz, signal amplitude a (volts), noise variance σ², SNR = a²/σ²:
  1. SNR = [.001 : .01 : 10]
  2. C = f(SNR) = -∫ φ(y; √SNR, 1) log2 φ(y; √SNR, 1) dy - (1/2) log2 2πe [bits/channel use],
    where φ(y; √SNR, 1) = (1/(2√(2π))) [ e^{-(y-√SNR)²/2} + e^{-(y+√SNR)²/2} ]
  3. C' = 2W f(SNR) [bit/s]
  4. η = C'/W = 2 f(SNR) [(bit/s)/Hz]
  5. Eb/N0 = SNR/η = SNR/(2 f(SNR)), which allows plots of η(SNR) and η(Eb/N0)
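The integral f(SNR) in step 2 has no closed form, but it is well behaved and can be evaluated with simple quadrature. A sketch, assuming a trapezoid rule on [-20, 20] is accurate enough (function names and tolerances are mine):

```python
import math

def phi(y, a):
    """phi(y; a, 1): average of two unit-variance Gaussians centered at +/-a."""
    c = 1.0 / (2.0 * math.sqrt(2.0 * math.pi))
    return c * (math.exp(-0.5 * (y - a) ** 2) + math.exp(-0.5 * (y + a) ** 2))

def bawgnc_capacity(snr, lim=20.0, n=4000):
    """f(SNR) = -integral of phi*log2(phi) dy - 0.5*log2(2*pi*e),
    via the trapezoid rule on [-lim, lim]."""
    a = math.sqrt(snr)
    dy = 2.0 * lim / n
    h = 0.0
    for k in range(n + 1):
        y = -lim + k * dy
        p = phi(y, a)
        w = 0.5 if k in (0, n) else 1.0   # trapezoid end weights
        if p > 0.0:
            h -= w * p * math.log2(p) * dy
    return h - 0.5 * math.log2(2.0 * math.pi * math.e)
```

As expected, the result goes to 0 as SNR → 0 and saturates at 1 bit/channel use for large SNR.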
[Figure: left: BAWGNC spectral efficiency vs SNR, a plot of η(SNR) for 0 < SNR < 10; right: BAWGNC η vs Eb/N0 in dB, a plot of η(Eb/N0); spectral efficiency in (bit/s)/Hz.]
Message Passing Formulas in the LLR Decoder Algorithm
• The LLR decoder computes check LLRs from bit LLRs, and then bit LLRs from check LLRs.
– Assume that λ(cj | r) is approximately equal to λ(cj | r\n) for j ≠ n.
• We can visualize these computations as passing LLRs along edges of the Parity Check graph.
[Diagram: (3,6) Code: Parity Check Tree from Bit cn. The root bit cn is connected to its root checks zm, m in Mn; each root check connects to tier 1 bits cn', n' in Nm,n; tier 1 checks (m',n') connect to tier 2 bits cj, j in Nm',n'; and so on.]
definition:
  λ(z_m,n | r\n) = log [ P(z_m,n = 1 | r\n) / P(z_m,n = 0 | r\n) ],
  with z_m,n = Σ c_n' over {n' : A_m,n' = 1, n' ≠ n};
  λ(z_m,n | r\n) = 2 tanh^-1 ( Π_{j in Nm\n} tanh[0.5 λ(c_j | r)] )
definition:
  λ_n = λ(c_n | r) = log [ P(c_n = 1 | r) / P(c_n = 0 | r) ],
  or λ(c_n | r) = Lc r_n + Σ_{m' in Mn} λ(z_m',n | r\n)
Algorithm 15.2: Iterative Log Likelihood Decoding Algorithm for Binary LDPC Codes
Input: the received vector r, the max number of iterations L, and the channel reliability Lc.
Initialization: Set λ_m,n^[0] = Lc r_n for all (m,n) with A_m,n = 1. Set the loop counter l = 1.
Check node update: For each (m,n) with A_m,n = 1, compute
  ζ_m,n^[l] = 2 tanh^-1 ( Π_{j in Nm\n} tanh[ λ_m,j^[l-1] / 2 ] )   (15.33)
Bit node update: For n = 1, 2, ..., N compute
  λ_n^[l] = Lc r_n + Σ_{m in Mn} ζ_m,n^[l]  and  λ_m,n^[l] = λ_n^[l] - ζ_m,n^[l]
  (the subtraction is the adjustment to remove intrinsic information)   (15.34)
Make a tentative decision: Set ĉ_n = 1 if λ_n^[l] < 0, else set ĉ_n = 0. If ĉ A^T = 0, Stop.
Otherwise, if # iterations < L, loop to Check node update. Otherwise, declare a decoding failure and Stop.
LLR LDPC Decoder Algorithm [2]
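A compact, unoptimized sketch of Algorithm 15.2, assuming a dense 0/1 parity-check matrix A and the 0 → +a mapping used in these slides (positive LLR means bit 0); the clamp before atanh is my numerical guard, not part of the algorithm:

```python
import math

def llr_decode(A, r, Lc, max_iters=50):
    """Iterative LLR (sum-product) decoding sketch for a binary LDPC code.
    A: M x N parity-check matrix (lists of 0/1); r: received vector;
    Lc: channel reliability. Returns (codeword estimate, success flag)."""
    M, N = len(A), len(A[0])
    rows = [[n for n in range(N) if A[m][n]] for m in range(M)]   # N_m
    cols = [[m for m in range(M) if A[m][n]] for n in range(N)]   # M_n
    lam = {(m, n): Lc * r[n] for m in range(M) for n in rows[m]}  # bit-to-check LLRs
    for _ in range(max_iters):
        # check node update (15.33): tanh rule over N_m \ n
        zeta = {}
        for m in range(M):
            for n in rows[m]:
                prod = 1.0
                for j in rows[m]:
                    if j != n:
                        prod *= math.tanh(0.5 * lam[(m, j)])
                prod = max(min(prod, 0.999999), -0.999999)  # numeric guard
                zeta[(m, n)] = 2.0 * math.atanh(prod)
        # bit node update (15.34), removing intrinsic information
        lam_total = [Lc * r[n] + sum(zeta[(m, n)] for m in cols[n]) for n in range(N)]
        for (m, n) in lam:
            lam[(m, n)] = lam_total[n] - zeta[(m, n)]
        # tentative decision and syndrome check
        c_hat = [1 if lam_total[n] < 0 else 0 for n in range(N)]
        if all(sum(A[m][n] * c_hat[n] for n in range(N)) % 2 == 0 for m in range(M)):
            return c_hat, True
    return c_hat, False   # decoding failure
```

For example, a single parity check A = [[1, 1, 1]] with received vector [0.9, 1.1, -0.2] (all-zero codeword, one weak sample) is corrected in one iteration.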
Ground Rules and Assumptions for Density Evolution
• Density evolution tracks the iteration-to-iteration PDFs calculated in the log likelihood ratio (LLR) LDPC decoder algorithm (using BPSK over AWGN).
– The analysis presented here makes several simplifying assumptions:
• 1. The code is regular with wc = column weight, wr = row weight, and the code length N is very large.
• 2. The Tanner graph is a tree, or no cycles exist in the graph.
• 3. The all-zero codeword is sent, so the received vector is Gaussian.
• 4. The bit and check LLRs - the λn for 1 ≤ n ≤ N and the ζm,n for 1 ≤ m ≤ M and 1 ≤ n ≤ N - are consistent random variables, and are identically distributed over n and m.
– The means of the check LLRs - denoted by μ[l] for the mean at iteration l - satisfy a recurrence relation, which is described next.
  μ[l] = Ψ^-1( [ Ψ( 2a²/σ² + (wc - 1) μ[l-1] ) ]^(wr-1) ),
where Ψ(x) is a bounded function of a real variable x, a = signal amplitude, and σ² = noise variance (BPSK over AWGN).
Density Evolution Analysis (1 of 6)
• Suppose we map 0 to a and 1 to -a, with a denoting a signal amplitude for a (baseband BPSK) transmitted waveform, (over an assumed 1 ohm load).
  t = a(1 - 2c), a = transmitted signal amplitude (volts), c a binary codeword   (1)
  r = t + ν, with ν_i ~ N(0, σ²) for i = 1, 2, ..., N,
  where σ² = a²/(2R Eb/N0) and E[ν_i ν_j] = σ² δ_ij   (2)
• Vector r is assumed to be equal to the mapped codeword plus random noise from the channel (i.e., ideal synchronization and detection are assumed).
– Here we assume the channel is AWGN, so each component of the noise is an (uncorrelated) Gaussian random variable with zero mean and known variance σ² (found from a, the code rate R, and the ratio Eb/N0).
[Diagram: m → Encoder (R = K/N, A, G) → c → Signal Mapper (e.g., BPSK, amplitude a) → t → (+ν) → r → De-Mapper and Decoder (A, L) → ĉ, m̂; Rd = 1 bit/s, Tb = 1/Rd = 1 s = bit time.]
Density Evolution Analysis (2 of 6)
• Suppose (without loss of generality) that the all-zero codeword is transmitted, which implies for our current definitions that tn = a for n = 1,2, …, N.
  t = a(1 - 2c) with the all-zero codeword implies t_n = a for 1 ≤ n ≤ N   (3)
Here, assuming that t_n = a for all n, we have
  p(r_n) = (1/√(2πσ²)) exp( -(r_n - a)²/2σ² ), or r_n ~ N(a, σ²) for 1 ≤ n ≤ N   (4)
• The PDF for rn with this all-zero codeword assumption is Gaussian, with the same mean and variance for each n:
[Diagram: the same system as before, now with m = 0 and c = 0 transmitted.]
Density Evolution Analysis (3 of 6)
• Recall that the LLR decoder algorithm (Algorithm 15.2) initializes bit LLRs or bit "messages" - the λ(cn | r) or λn - to a constant (Lc) times rn.
  λ_n = λ(c_n | r) = log [ P(c_n = 1 | r) / P(c_n = 0 | r) ], 1 ≤ n ≤ N (the definition of the bit LLR)
  λ_n^[0] = Lc r_n = (2a/σ²) r_n, 1 ≤ n ≤ N (the zeroth-iteration, l = 0 value)
• Hence we see that the initial bit LLRs are all Gaussian, each with variances equal to twice their means. We call these random variables consistent .
  λ_n^[0] = (2a/σ²) r_n ~ N( 2a²/σ², 4a²/σ² ), for 1 ≤ n ≤ N   (5)
(mean 2a²/σ² and variance 4a²/σ², twice the mean, since r_n ~ N(a, σ²))
• Although the initial PDFs of the bit LLRs are consistent, subsequent iterations are not in general; however, we assume that all bit LLR PDFs are consistent.
• Also assume that the n are identically distributed over n: i.e., the means of the bit LLRs or m[l] are the same for each n, but do vary with iteration l.
Assume that for all n and all iterations l the bit LLRs are identically distributed and consistent:
  λ_n^[l] ~ N( m[l], 2 m[l] )   (6)
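A quick Monte Carlo check of (5)-(6) under assumed parameters a = 1, σ = 1: the initial bit LLRs should have variance close to twice their mean.

```python
import random
import statistics

# Check consistency of the initial bit LLRs (assumed a = 1, sigma = 1):
# lambda_n^[0] = (2a/sigma^2) r_n with r_n ~ N(a, sigma^2) should have
# mean 2a^2/sigma^2 = 2 and variance 4a^2/sigma^2 = 4 (twice the mean).
random.seed(1)
a, sigma = 1.0, 1.0
llrs = [(2.0 * a / sigma**2) * random.gauss(a, sigma) for _ in range(200000)]
mean = statistics.fmean(llrs)
var = statistics.pvariance(llrs)
```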
Aside: Consistent Random Variables
• Define a random variable to be consistent if:
– 1. it is Gaussian, and
– 2. its variance is equal to twice its mean in absolute value:
  f(x; a, σ²) = (1/√(2πσ²)) e^{-(x-a)²/2σ²}
becomes
  f(x; a, 2|a|) = (1/√(4π|a|)) e^{-(x-a)²/4|a|}
• For density evolution we assume that the bit and check LLRs are consistent random variables. Furthermore, their statistics depend only on iteration number (l), not on (indices) m or n.
• If the mean of the LLR increases towards infinity, the corresponding bit (or check) estimate becomes more certain.
[Figure: Consistent Random Variables: p(x) (probability function) vs x for a = 1/3, 1, 2, 5, and 15.]
Density Evolution Analysis (4 of 6)
• Furthermore, assume the check LLRs are consistent and identically distributed.
• Assume the LDPC code is (wc , wr)-regular, and the Tanner graph is cycle-free.
• The bits (cj) in check m besides n will be distinct – by the cycle-free assumption - and assuming they are also conditionally independent on r\n allows us to use the tanh rule to relate the bit LLRs and check LLRs:
check LLR for iteration l:
  ζ_m,n^[l] = λ(z_m,n | r\n)^[l] = log [ P(z_m,n = 1 | r\n) / P(z_m,n = 0 | r\n) ],
  with z_m,n = Σ c_i over {i : A_m,i = 1, i ≠ n}
Assume that for all m, n, and all iterations l, the ζ_m,n^[l] are consistent random variables and are identically distributed:
  ζ_m,n^[l] ~ N( μ[l], 2μ[l] ) for any l and for all (m,n) with A_m,n = 1   (7)
  tanh( ζ_m,n^[l] / 2 ) = Π_{j in Nm\n} tanh( λ_j^[l] / 2 )   (8)
(signs reversed from previous work (Lecture 17) with the local convention)
Density Evolution Analysis (5 of 6)
• Take the expectation of both sides of (8).
• Define a function Ψ(x) as below, plotted on the right along with tanh(x/2). Recast (9) in terms of Ψ(x) to write down (10).
  E[ tanh( ζ_m,n^[l] / 2 ) ] = E[ Π_{j in Nm\n} tanh( λ_j^[l] / 2 ) ] = Π_{j in Nm\n} E[ tanh( λ_j^[l] / 2 ) ]   (9)
Define Ψ(x) = E[ tanh(u/2) ] with u ~ N(x, 2x):
  Ψ(x) = (1/√(4πx)) ∫ tanh(u/2) exp( -(u-x)²/4x ) du, for x > 0;
  Ψ(x) = 0 for x = 0;  Ψ(x) = -Ψ(-x) for x < 0.
Define Ψ^-1(y) from Ψ(x). Since the λ_j^[l] are identically distributed with mean m[l] and the product in (9) has wr - 1 factors, (9) can be recast:
  Ψ( μ[l] ) = [ Ψ( m[l] ) ]^(wr-1)   (10)
[Figure: plot of Psi(x), the density evolution function, together with tanh(x/2), for -10 < x < 10.]
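Ψ(x) can be evaluated by quadrature over the N(x, 2x) density. A sketch that also checks the properties used above (odd symmetry, boundedness, monotonicity); the truncation parameters n and span are my choices:

```python
import math

def psi(x, n=2000, span=10.0):
    """Psi(x) = E[tanh(u/2)] with u ~ N(x, 2x), by trapezoid quadrature."""
    if x == 0.0:
        return 0.0
    if x < 0.0:
        return -psi(-x, n, span)   # odd symmetry: Psi(-x) = -Psi(x)
    sd = math.sqrt(2.0 * x)
    lo, du = x - span * sd, 2.0 * span * sd / n
    total = 0.0
    for k in range(n + 1):
        u = lo + k * du
        w = 0.5 if k in (0, n) else 1.0   # trapezoid end weights
        pdf = math.exp(-(u - x) ** 2 / (4.0 * x)) / math.sqrt(4.0 * math.pi * x)
        total += w * math.tanh(0.5 * u) * pdf * du
    return total
```

Ψ increases monotonically toward 1, which is why inverting it near y = 1 is numerically delicate, as noted in the aside below.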
Aside: Function Ψ(x) and Inverse
• We will need to evaluate the bounded function Ψ(x), where x is any real number and y = Ψ(x) ranges between -1 and 1. The inverse function Ψ^-1(y) also needs to be evaluated, and its evaluation (near y = 1 or -1) leads to numerical instabilities.
[Figure: left: y = Ψ(x) for any x (only -10 < x < 10 shown), plotted with tanh(x/2); right: the Psi inverse function x = Ψ^-1(y) for -1 < y < 1.]
Density Evolution Analysis (6 of 6)
• From the bit LLR update equation in the LLR decoding algorithm (with some re-shuffling of operations).
• Take the expected value of both sides of (11).
  λ_m,n^[l] = Lc r_n + Σ_{m' in Mn\m} ζ_m',n^[l-1]   (11)
  m[l] = E[ λ_m,n^[l] ] = 2a²/σ² + (wc - 1) μ[l-1]   (12)
• Plug (12) into (10), Ψ(μ[l]) = [Ψ(m[l])]^(wr-1), to develop (13), a recurrence relation for the sequence μ[l]. Initialize the calculations with μ[l] = 0 for l = 0.
  μ[l] = Ψ^-1( [ Ψ( 2a²/σ² + (wc - 1) μ[l-1] ) ]^(wr-1) )   (13)
Density Evolution Algorithm
• 1. Choose (wc, wr), the column and row weights. Define code rate R = 1 - wc/wr. Choose the ratio Eb/N0, a max check LLR mean T (100), and a max number of iterations Lmax (2000).
• 2. Initialize the check LLR mean to 0: μ[0] = 0.
• 3. Use 2a²/σ² = 4R Eb/N0 and (13) to find μ[l] from the previous one, for l = 1, 2, ..., Lmax:
  μ[l] = Ψ^-1( [ Ψ( 4R Eb/N0 + (wc - 1) μ[l-1] ) ]^(wr-1) )
  Exit early, however, if μ[l] ≥ T.
Define Ψ(x) = (1/√(4πx)) ∫ tanh(u/2) exp( -(u-x)²/4x ) du for x > 0, Ψ(0) = 0, and Ψ(x) = -Ψ(-x) for x < 0. Define Ψ^-1(y) from Ψ(x).
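Steps 1-3 can be sketched as follows, reusing the quadrature for Ψ and inverting Ψ by bisection (Ψ is increasing); quadrature sizes, the bisection bracket, and function names are my choices:

```python
import math

def psi(x, n=400, span=8.0):
    """Psi(x) = E[tanh(u/2)], u ~ N(x, 2x), by trapezoid quadrature (x > 0);
    Psi(0) = 0 and Psi(-x) = -Psi(x)."""
    if x == 0.0:
        return 0.0
    if x < 0.0:
        return -psi(-x, n, span)
    sd = math.sqrt(2.0 * x)
    lo, du = x - span * sd, 2.0 * span * sd / n
    total = 0.0
    for k in range(n + 1):
        u = lo + k * du
        w = 0.5 if k in (0, n) else 1.0
        pdf = math.exp(-(u - x) ** 2 / (4.0 * x)) / math.sqrt(4.0 * math.pi * x)
        total += w * math.tanh(0.5 * u) * pdf * du
    return total

def psi_inv(y, hi=500.0, iters=50):
    """Invert Psi on [0, hi] by bisection (Psi is increasing)."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if psi(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def de_run(wc, wr, ebn0_db, T=100.0, Lmax=2000):
    """Run the density-evolution recurrence (13); return (final mu, hit T?)."""
    R = 1.0 - wc / wr
    s = 4.0 * R * 10.0 ** (ebn0_db / 10.0)   # 2a^2/sigma^2 = 4R*Eb/N0
    mu = 0.0
    for _ in range(Lmax):
        mu = psi_inv(psi(s + (wc - 1) * mu) ** (wr - 1))
        if mu >= T:
            return mu, True
    return mu, False
```

Running de_run over a grid of Eb/N0 values brackets the threshold; for the (3,6)-regular code this sketch should separate convergence to a small fixed point from divergence near the ~1.2 dB threshold reported in Example 2, up to quadrature error.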
Density Evolution: Example 1 (Example 15.8 in [1])
example 1: Choose LDPC code parameters (wc, wr) = (4,6); R = 1 - wc/wr = 1/3. Choose Eb/N0 in [1.76 dB, 1.764 dB, 1.8 dB], T = 100, and Lmax = 800.
[Figure: Density evolution runs (mean of check LLR vs iterations) for R = 0.33333:
 – Eb/N0 = 1.76 dB, T = 100: μ → 0.381
 – Eb/N0 = 1.764 dB, T = 100: μ reaches the max after ~550 iterations
 – Eb/N0 = 1.8 dB, T = 100: μ reaches the max after ~55 iterations]
• The check LLR mean value approaches a constant if Eb/N0 is less than the threshold {Eb/N0}t, or approaches infinity if Eb/N0 > {Eb/N0}t. Here {Eb/N0}t ≈ 1.764 dB.
note: a check LLR mean of 30 corresponds to P(z_m,n = 0) > 1 - 10^-12, given c_n = 0 for all n.
Density Evolution: Example 2
example 2: Choose LDPC code parameters (wc, wr) = (3,6); R = 1 - wc/wr = 1/2. Pick Eb/N0 in [1.16 dB, 1.19 dB, 1.2 dB], T = 100, and Lmax = 2000.
 – Eb/N0 = 1.16 dB, T = 100: μ → 0.785
 – Eb/N0 = 1.19 dB, T = 100: μ → 0.945
 – Eb/N0 = 1.2 dB, T = 100: μ reaches the max after ~128 iterations
• Here {Eb/N0}t ≈ 1.2 dB.
[Figure: Density evolution runs (mean of check LLR vs iterations) for R = 0.5 at Eb/N0 = 1.16, 1.19, and 1.2 dB.]
Comparing Density Evolution Results (from [4]): Comparisons to [1], p. 659
• 3 Density Evolution cases were attempted; the thresholds produced are listed in red.
• Apparently, there is a slight (<.05 dB) discrepancy between Moon's results (taken from [4]) in Table 15.1 and mine.
– However, his Example 15.8 and Figure 15.11 suggest a threshold of 1.76, not 1.73 for the R = 1/3 rate case.
[Figure: Density evolution at Eb/N0 = 1.764 dB, R = 0.33333: mean of check LLR vs iterations.]
[4] Chung, Richardson, and Urbanke, "Analysis of Sum-Product Decoding of Low-Density Parity-Check Codes Using a Gaussian Approximation", IEEE Transactions on Information Theory, vol. 47, no. 2 (2001)