
SIMULATED PERFORMANCE OF LOW-DENSITY PARITY-CHECK CODES

A MATLAB IMPLEMENTATION

LAKEHEAD UNIVERSITY – FACULTY OF ENGINEERING

2006

By: Dan Dechene, Kevin Peets

Supervised by: Dr. Julian Cheng


TABLE OF CONTENTS

1.0 Introduction
    1.1 Digital Communication
2.0 Channel Coding
    2.1 Shannon Theorem for Channel Coding
    2.2 Hamming Code
    2.3 Tanner Graph Representation
3.0 LDPC
    3.1 Introduction to LDPC
        3.1.1 Parity-Check Matrix
            3.1.1.1 Classifying the Matrix
            3.1.1.2 Methods of Generation
        3.1.2 Minimum Distance of LDPC Codes
        3.1.3 Cycle Length of LDPC Codes
        3.1.4 Linear Independence
    3.2 LDPC System Overview
    3.3 Generation for Simulation
    3.4 Encoding
        3.4.1 Linear Independence Problem
    3.5 Decoding
        3.5.1 Hard Decision vs. Soft Decision Decoding
        3.5.2 SPA Algorithm
            3.5.2.1 Computing Messages
            3.5.2.2 Initialization
            3.5.2.3 Soft-Decision
            3.5.2.4 Simulation Computation
4.0 Results
5.0 Problems Encountered
6.0 Future Work
    6.1 Increase Efficiency of Simulation Algorithms
    6.2 Lower Memory Requirements of Parity-Check Matrix
    6.3 VLSI Implementation
7.0 Conclusion
References
Appendix A – Code
Appendix B – Simulink Model


TABLE OF FIGURES

Figure 1: Communication System Block Diagram
Figure 2: Graphical Representation of Hamming (7,4) Code
Figure 3: All Possible Codewords for Hamming (7,4) Code
Figure 4: Bipartite Tanner Graph
Figure 5: Length 4 Cycle
Figure 6: Length 6 Cycle
Figure 7: LDPC System Overview
Figure 8: Flowchart to Create the Parity-Check Matrix, H
Figure 9: Likelihood Functions for BPSK Modulation over an AWGN Channel
Figure 10: Representation of Nodes
Figure 11: Representation of Nodes
Figure 12: Flowchart for Decoding
Figure 13: MacKay's Results
Figure 14: Simulated Results
Figure 15: Performance of Simulations vs. Hamming with Shannon's Limit


1.0 INTRODUCTION

In the early nineties, turbo codes and their new iterative decoding technique were introduced. Employing this new coding scheme and its decoding algorithm, it was possible to achieve performance within a few tenths of a dB of the Shannon limit at a bit error rate of 10^-5 [1]. This discovery not only had a major impact on the telecommunications industry, but it also kicked off major research into the area of capacity-approaching coding schemes using iterative decoding, now that such performance was known to be achievable.

In 1962, Robert Gallager originally proposed Low-Density Parity-Check codes, or LDPC codes [2], as a class of channel coding, but implementation of these codes required a large amount of computing power due to the high complexity and memory requirements of the encoding/decoding operations, so they were largely forgotten. A few years after turbo codes made their appearance, David MacKay rediscovered LDPC codes [3] and showed that LDPC codes were also capable of approaching the Shannon limit using iterative decoding techniques.

An LDPC code is a linear block code characterised by a very sparse parity-check matrix.

This means that the parity check matrix has a very low concentration of 1’s in it, hence

the name “low-density parity-check” code. The sparseness of LDPC codes is what has

interested researchers, as it can lead to excellent performance in terms of bit error rates.

The purpose of this paper was to gain an understanding of LDPC codes and utilize that knowledge to construct and test a series of algorithms to simulate their performance.

This paper will begin with a basic background of digital communications and channel

coding theory and then carry the basic principles forward and apply them to LDPC.


1.1 DIGITAL COMMUNICATION

Digital communication is a fundamental requirement of the modern world. Many current analog transmission systems, such as cable TV, are converting to digital. The advantages include dynamic content as well as new features that were impossible over an analog system.

Figure 1: Communication System Block Diagram

Figure 1 shows a model of a communication system. A digital message originates from

the source (this could have been obtained from an analog signal via an analog-to-digital

converter). These digital signals are then passed through a source encoder. The source encoder removes the redundancy of the source, much the same way computer file compression operates. Following source encoding, the signal is passed through the channel encoder, which adds controlled redundancy to the signal; the signal is then modulated and transmitted over the channel. The reverse process occurs in the receiver.

This paper focuses on the channel encoder/decoder blocks – channel coding. The purpose

of channel coding is to add controlled redundancy into the transmitted signal to increase

the reliability of transmission and lower transmission power requirements.


2.0 CHANNEL CODING

Channel coding is a way of introducing controlled redundancy into a transmitted binary

data stream in order to increase the reliability of transmission and lower power

transmission requirements. Channel coding is carried out by introducing redundant parity

bits into the transmitted information stream. The requirement of a channel coding scheme

only exists because of the noise introduced in the channel. Simple channel coding

schemes allow the receiver to detect errors in the transmitted data signal, while more advanced channel coding schemes provide the ability to recover a finite amount of corrupted data. This results in more reliable communication and, in many cases, eliminates the need for retransmission.

Although channel coding provides many benefits, there is an increase in the number of

bits being transmitted. This is important when selecting the best channel coding scheme

to achieve the required bit error rate for a system.

2.1 SHANNON THEOREM FOR CHANNEL CODING

Communication over noisy channels can be improved by the use of a channel code C, as

demonstrated by C. E. Shannon in 1948 with his famous channel coding theorem:

“Let a discrete channel have the capacity C and a discrete source the

entropy per second H. If H ≤ C there exists a coding system such that the

output of the source can be transmitted over the channel with an arbitrarily

small frequency of errors (or an arbitrarily small equivocation). If H > C it is possible to encode the source so that the equivocation is less than H − C + ε, where ε is arbitrarily small."

[4]

This theorem states that below a maximum code rate R, which is equal to the capacity of the channel, it is possible to find error-correcting codes capable of achieving any given probability of error. While Shannon proved this theorem, he provided no insight into how to achieve this capacity. The evidence of the search for such coding schemes can be seen in the rapid development of capacity-approaching schemes.

“When Shannon announced his theory in the July and October issues of

the Bell System Technical Journal in 1948, the largest communications

cable in operation at that time was capable of carrying 1800 voice

conversations. Twenty-five years later, the highest capacity cable was

capable of carrying 230000 simultaneous conversations” [5].

Researchers are continuously looking for ways to improve capacity. Currently, the only

measure that can be used for code performance is its proximity to Shannon’s Limit.

Shannon’s limit can be expressed in a number of different ways. Shannon’s limit for a

band-limited channel is:

C = B·log2(1 + PS/PN)

For a system with no bandwidth limit, this equation becomes:

C = (1/2)·log2(1 + 2R·Eb/NO)

The SNR for a coded system also depends on the code rate, R (bits in original message / bits sent on channel), and can be found as:

R·(1 + p·log2(p) + (1 − p)·log2(1 − p)) = C


where p is the BER for a given SNR and R is the code rate. The above equations can be combined into an expression that contains only p and SNR; solving this equation requires numerical computation.

R·(1 + p·log2(p) + (1 − p)·log2(1 − p)) = (1/2)·log2(1 + 2R·Eb/NO)

From the above, Shannon's limit for a code rate of ½ can be shown to be 0.188 dB [6].
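For illustration, the rate-½ relation above can be solved numerically in Matlab. The following is a minimal sketch (not part of the project code in Appendix A; the operating point is an assumed value) using the built-in fzero root finder:

% Minimal sketch: numerically solve the rate-1/2 relation above for the
% residual BER p at an assumed Eb/NO. With R = 1/2 both sides share a
% factor of 1/2, leaving 1 + p*log2(p) + (1-p)*log2(1-p) = log2(1 + Eb/NO).
EbNo_dB = -0.5;                              % assumed point below capacity
EbNo = 10^(EbNo_dB/10);
f = @(p) 1 - (-p.*log2(p) - (1-p).*log2(1-p)) - log2(1 + EbNo);
p = fzero(f, [1e-6 0.4])                     % smallest achievable BER, ~0.009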

2.2 HAMMING CODE

Hamming (7,4) code is a relatively simple starting point for understanding channel coding. It is a block code that generates 3 parity bits for every 4 bits of data. Hamming code operates on even parity. Hamming (7,4) is a very simple code to understand, as it can be graphically represented by a Venn diagram.

Figure 2 shows how a Hamming (7,4)’s parity bits are calculated. The data bits s1, s2, s3,

and s4 are placed in the middle of the Venn diagram as shown, then the parity bits t5, t6,

and t7 are assigned in a manner such that each of the 3 circles has an even number of ones

(even parity).

Figure 2: Graphical Representation of Hamming (7,4) Code [3]


Figure 3: All Possible Codewords for Hamming (7,4) Code [3]

Figure 3 shows the codewords constructed by the scheme of Figure 2 above. Another interesting property of any channel coding scheme is its minimum distance. For Hamming (7,4), this minimum distance is 3. This means that, given an arbitrary codeword, it takes at least 3 bit flips to produce any other valid codeword. In terms of decoding, this means that it is possible to correct a finite number of errors. Hamming (7,4) code is able to detect single and double bit errors, but is only able to correct single bit errors. It is important to note that if 3 or more bit errors occur, the decoder will be unable to correct the bit errors, and in fact may be unable to detect that a bit error occurred. The following equations represent the characteristics of Hamming code in terms of minimum distance, number of detectable errors and number of correctable errors.

MD = 2n + 1,    p = n + 1

where MD is the minimum distance, n is the number of errors the code can correct, and p is the number of errors the code can detect. Again, it is important to note that if sufficient noise is present, the codeword may be corrupted in a manner that Hamming code is unable to detect and correct.

This means that minimum distance plays an important role as a characteristic of a given code. Minimum distance in general is defined as the fewest number of bits that must flip in any given codeword for it to be mistaken for another. A large minimum distance makes for a good coding scheme, as it increases the noise immunity of the system. It is often very difficult to determine the minimum distance of a given code. This is because there exist 2^k possible codewords in any given coding scheme, therefore


computing the minimum distance requires that 2^(k−1)·(2^k − 1) comparisons be performed. It is obvious that as the blocklength increases, measuring the minimum distance requires a large amount of computational power. Methods have been proposed to measure the minimum distance of these codes; however, they will not be discussed here.
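To make the counting argument concrete, the sketch below (a minimal illustration, not part of the project code; the generator matrix shown is one standard choice for Hamming (7,4)) measures minimum distance by brute force. For a linear code, the minimum pairwise distance equals the minimum weight over the 2^k − 1 nonzero codewords, which avoids the 2^(k−1)·(2^k − 1) pairwise comparisons quoted above:

% Minimal sketch: brute-force minimum distance of a small linear code.
G = [1 0 0 0 1 1 0;    % an assumed generator matrix for Hamming (7,4)
     0 1 0 0 1 0 1;
     0 0 1 0 0 1 1;
     0 0 0 1 1 1 1];
k = size(G,1);
dmin = inf;
for m = 1:2^k - 1                  % every nonzero message
    msg  = bitget(m, 1:k);         % message bits as a row vector
    cw   = mod(msg*G, 2);          % encode over GF(2)
    dmin = min(dmin, sum(cw));     % track the smallest codeword weight
end
dmin                               % prints 3 for Hamming (7,4)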

Although Hamming (7,4) code does not provide a large gain in terms of error rate

performance versus an uncoded system, it provides an excellent first step in studying

coding theory.

2.3 TANNER GRAPH REPRESENTATION

Tanner Graphs are pictorial ways of representing the parity check matrix of block codes.

We are interested in these graphs as they can represent the H matrices of LDPC codes. The

rows of a parity check matrix are represented by the check nodes, while the columns are

represented by variable nodes. If there is a 1 at a given position (j,i), where j is the row

index and i is the column index, an edge is used to show this connection in the Tanner

graph. Figure 4 illustrates a Tanner graph of an implementation of Hamming (7,4) code.

For example, consider the parity-check matrix

H = [ 1 0 0 1 1 0 1
      0 1 0 1 1 1 0
      0 0 1 0 1 1 1 ]

whose rows correspond to check nodes f0, f1, f2 and whose columns correspond to variable nodes c0 through c6.

Figure 4: Tanner Graph of a Hamming (7,4) Code


3.0 LDPC

3.1 INTRODUCTION TO LDPC

Robert G. Gallager originally discovered Low-Density Parity-Check codes (or LDPC codes) in 1962 [2]. They are a class of linear block codes that approach Shannon's channel capacity limit (see Section 2.1). LDPC codes are characterized by the sparseness of ones in the parity-check matrix. This low number of ones allows for a large minimum distance of the code, resulting in improved performance. Although proposed in the early 1960's, it was not until recently that these codes emerged as a promising area of research in achieving channel capacity. This is in part due to the large amount of processing power required to simulate the code. As with any coding scheme, larger blocklength codes provide better performance but require more computing power.

Performance of a code is measured through its bit error rate (BER) vs. signal-to-noise ratio (Eb/NO) in dB. The curve of a good code shows a dramatic drop in BER as SNR improves. The best codes have a cliff drop at an SNR slightly higher than Shannon's limit.

3.1.1 PARITY-CHECK MATRIX

3.1.1.1 Classifying the Matrix

LDPC codes are classified into two different classes: regular and irregular codes. Regular codes are the set of codes in which there is a constant number wC of 1's distributed throughout each column and a constant number wR of 1's per row. For a determined column weight wC, we can determine the row weight as wR = N·wC/(N − k), where N is the blocklength of the code and k is the message length. Irregular codes are those which do not belong to this set (they do not maintain a consistent row weight).
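For example, with the rate-½ parameters used later in Section 4 (N = 96, k = 48, wC = 3), the implied row weight can be checked with a line of Matlab (a minimal sketch):

N = 96; k = 48; wC = 3;     % rate-1/2 parameters from Section 4
wR = N*wC/(N-k)             % = 6 ones per row for a regular code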


3.1.1.2 Methods of Generation

In the 1960's, Gallager published the existence of the class of LDPC codes, but provided no insight into how to generate the parity-check matrix (also known as the 'H' matrix). Many methods of generation have been proposed by various researchers [3][6][7]. Several methods include:

• Random Generation subject to constraints

• Density Evolution

• Finite Geometry

When generating the parity-check matrix, there are several key concerns to examine, such as minimum distance, cycle length and linear independence.

3.1.2 MINIMUM DISTANCE OF LDPC CODES

As discussed in section 2.2, the minimum distance is a property of any coding scheme.

Ideally this minimum distance should be as large as possible, but there is a practical limit

on how large this minimum distance can be. LDPC codes pose a large problem when calculating this minimum distance efficiently, as an effective LDPC code requires rather large blocklengths. Using random generation it is very difficult to specify the minimum distance as a parameter; rather, minimum distance becomes a property of the generated code.

3.1.3 CYCLE LENGTH OF LDPC CODES

Using a Tanner graph it is possible to visualize the minimum cycle length of a code: it is the minimum number of edges travelled from one check node to return to the same check node. Length 4 and length 6 cycles, with the corresponding parity-check matrix configurations, are shown in Figures 5 and 6 respectively.


A length 4 cycle corresponds to two rows of H sharing 1's in the same two columns; a length 6 cycle involves three rows pairwise sharing 1's across three columns:

    [ . . . . . ]            [ . . . . . . . ]
    [ . 1 . 1 . ]            [ . . 1 . . 1 . ]
H = [ . . . . . ]        H = [ . . . . . . . ]
    [ . 1 . 1 . ]            [ . 1 1 . . . . ]
    [ . . . . . ]            [ . . . . . . . ]
                             [ . 1 . . . 1 . ]
                             [ . . . . . . . ]

Figure 5: Length 4 Cycle            Figure 6: Length 6 Cycle

It has been shown that the existence of these cycles degrades performance during the iterative decoding process [7]. Therefore, when generating the parity-check matrix, the minimum cycle length permitted must be determined. It is possible to control the minimum cycle length when generating the matrix; however, computational complexity and time increase exponentially with each increase in minimum cycle length.

3.1.4 LINEAR INDEPENDENCE

The generator matrix G is defined such that:

c = G^T·m

where:


c = [c1, c2, …, cN]^T – Codeword
m = [m1, m2, …, mk]^T – Message Word
G = k by N Generator matrix

In order to guarantee the existence of such a matrix G, the linear independence of all rows of the parity-check matrix must be assured. In practical random generation, this becomes very difficult. The method used to approach this problem will be studied in further depth in Section 3.4.1 – Linear Independence Problem.

3.2 LDPC SYSTEM OVERVIEW

Figure 7: LDPC System Overview

where:

• m Message
• c Codeword
• x Modulated signal
• n AWGN noise
• y Received signal
• ĉ Estimated codeword
• m̂ Estimated message

Note: All above signals are vectors in terms of simulation implementation



Message Source:

The message source is the end-user transmitting the data. In terms of mobile

communications, the message source would be the end-user transmitting his/her

voice information. The simulation utilized a random message generator. This generator creates a message vector with equal a priori probabilities: Pr[mi = 1] = Pr[mi = 0] = 0.5.

LDPC Encoder:

The LDPC encoder is implemented at the end-user transmitting the data. In terms

of simulation implementation, encoding is done via a generator matrix. This is

covered in further detail in section 3.4.

BPSK Modulator:

The BPSK (Binary Phase Shift Keying) modulator maps the input binary signals to an analog signal for transmission. In simulation, the BPSK signal is represented by the mapping {0, 1} → {+√Eb, −√Eb}.

Channel:

The channel is the medium over which information is transmitted from the transmitter to the receiver. In mobile communication this is a wireless channel, and for other applications it could be copper or fibre optics. The addition of noise normally occurs in the channel. In the simulations, the channel is modelled as an AWGN (Additive White Gaussian Noise) channel: the noise added to the system follows a zero-mean normal distribution with variance NO/2, where NO is the single-sided noise power spectral density.


LDPC Decoder:

The decoder is implemented at the end-user receiving the information. In terms of

simulation implementation, decoding is a process that loops, passing messages back and forth along the Tanner graph (see Section 2.3: Tanner Graph Representation), until certain conditions are satisfied or a maximum number of passes has occurred. This is discussed in much more detail in Section 3.5. It is obvious that in

mobile communications, the handset would require both the encoder and decoder

as a pair to allow for bi-directional communication.

Retrieve Message From Codeword:

This simple process retrieves the estimated message from the estimated

codeword. In the simulation this is done via a simple function call after the codeword has been estimated.

Message Destination:

The message destination is the end-user receiving the data. In a mobile

communications environment, this would be the user receiving the voice

information of the other user. In the simulations, there is no message destination;

rather the estimated message is compared to the transmitted message in order to

detect whether a transmission error occurred.

3.3 GENERATION FOR SIMULATION

The method used for generating the H matrix in this paper was random generation with constraints. The generation routine allows for 4 input parameters:

• N Block/Codeword Length

• k Message Bits


• wC Column Weight (# of 1’s per column)

• reltol Tolerance Variable used to control regularity

The row weight (wR) is computed as wC·N/(N − k). In order to guarantee that wR is a whole number, the value is rounded up if it contains a decimal value, setting the maximum allowed number of 1's per row. In order to allow for sufficiently fast computation of the H matrix, only cycles of length 4 are avoided in the algorithm. The algorithm for generating the matrix is shown in Figure 8 below.

Figure 8: Flowchart to create the Parity-Check matrix, H


3.4 ENCODING

Practical encoding of LDPC codes can be a difficult thing to implement. Due to the nature of communications systems, it normally requires real-time operation. Encoding, especially at higher blocklengths, can be quite difficult to implement in hardware; however, there are several methods of generating H such that encoding can be done via shift registers [11]. These methods will not be discussed here.

In terms of simulation, encoding can be done via matrix multiplication, as the memory allotment of most personal computers can handle these operations with rather large blocklengths. In Section 3.1.4, it was determined that we can compute the codeword c

using:

c = G^T·m

This paper will now examine how to generate this matrix G. In order to determine the

relationship of the parity bits to the H matrix, we will use the following definition of the

syndrome. The definition is similar to that of Hamming Code (Section 2.2). We define a

complete set of successful parity-checks as:

Hc = 0

where:

c = [c1, c2, …, cN]^T – Codeword
H = (N − k) by N Parity-Check Matrix

The location of the parity bits in the codeword is arbitrary; therefore we will form our codeword such that:

c = [p : m]^T


Where:

m = [m1, m2, …, mk]^T – Message Word
p = [p1, p2, …, pN−k]^T – Parity Bits

Therefore:

H·[p : m]^T = 0

H can be partitioned as:

H = [ X : Y ]

Where:

X = (N − k) by (N − k) Sub-matrix
Y = (N − k) by k Sub-matrix

From this we can find:

Xp + Ym = 0

Using modulo-2 arithmetic we can solve for p as:

p = X^−1·Y·m

Then we solve for c as:

c = [(X^−1·Y)^T : I]^T·m

Where I is the k by k identity matrix


And we define G as:

G = [(X^−1·Y)^T : I]
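The construction can be sketched in a few lines of Matlab on a toy matrix. This is a minimal illustration, assuming the sub-matrix X is already invertible; gf2inv is a hypothetical stand-in for a GF(2) inversion routine such as the one in Appendix A.2:

H = [1 0 1 1 0; 0 1 0 1 1];        % toy (N-k) = 2 by N = 5 parity-check matrix
N = size(H,2); k = N - size(H,1);
X = H(:,1:N-k);  Y = H(:,N-k+1:N); % partition H = [X : Y]
Xinv = gf2inv(X);                  % GF(2) inverse (hypothetical helper)
G = [mod(Xinv*Y,2)' eye(k)];       % G = [(X^-1*Y)^T : I]
m = [1 0 1]';                      % any message word
c = mod(G'*m, 2);                  % codeword c = G^T*m, laid out as [p : m]
mod(H*c, 2)                        % all-zero syndrome confirms the encoding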

3.4.1 LINEAR INDEPENDENCE PROBLEM

In the above expression for G it is evident that X must be invertible over GF(2). Random generation of H does not normally guarantee this linear independence of the sub-matrix X. This problem was solved by rearranging the columns of H to guarantee that the sub-matrix X is invertible. This is done by performing Gauss-Jordan elimination on a dummy copy of H.

When the current diagonal element of X is a zero element, that column is swapped with

the next column containing a non-zero element in that row. The resulting column swaps

are performed on the actual H matrix and recorded for the purpose of rearranging the codeword bits c following encoding, in order for the syndrome (Hc = 0) to be satisfied using the original H matrix. The information on the rearrangement is also required at the receiver to recover the original message. This method of rearranging columns was devised by Arun Avudainayagam, a Master's student at the University of Florida, while working on a simulation toolbox for coding [8].

An important note is that such a problem can easily be avoided by utilizing other methods

of generating the parity-check matrix which result in a linearly independent sub-matrix

X.

3.5 DECODING

3.5.1 HARD DECISION VS. SOFT DECISION DECODING

There are generally two classes of decoding techniques: Hard and Soft decision decoding.

Hard decision decoding involves making a decision on the value of a bit at the point of

reception, such as a MAP (Maximum A Posteriori Probability) decoder. Such a decoder


forms a decision based on a boundary that minimizes the probability of bit error. Figure 9

shows the likelihood functions for BPSK modulation over an AWGN (Additive White

Gaussian Noise) channel.

Figure 9: Likelihood functions for BPSK modulation over an AWGN channel

The values of the likelihood functions in the above figure are given by:

f(y | s1) = (1/√(π·NO)) · e^(−(y + √Eb)² / NO)

and

f(y | s2) = (1/√(π·NO)) · e^(−(y − √Eb)² / NO)

The optimal choice for a MAP receiver to minimize the probability of error is to choose the decision boundary α that minimizes Pr[error]. The probability of error as a function of α can be found as:


PrERROR(α) = Pr[s2] · ∫_{−∞}^{α} (1/√(π·NO)) · e^(−(y − √Eb)²/NO) dy
           + Pr[s1] · ∫_{α}^{∞} (1/√(π·NO)) · e^(−(y + √Eb)²/NO) dy

The optimal value of α is the value that minimizes the above equation. This α becomes the decision threshold (boundary) for a MAP receiver. Note: these expressions are only valid for BPSK modulation over an AWGN channel.
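The boundary can also be found numerically. The sketch below is a minimal illustration (not part of the project code) that minimizes the error probability above over α using only built-in Matlab functions, with the equal priors assumed in the simulation:

% Minimal sketch: numerically minimize Pr[error](alpha) for BPSK over AWGN.
Eb = 1; No = 1; sigma = sqrt(No/2);
p1 = 0.5; p2 = 0.5;                       % assumed priors Pr[s1], Pr[s2]
Phi  = @(z) 0.5*erfc(-z/sqrt(2));         % standard normal CDF via erfc
Perr = @(a) p2*Phi((a - sqrt(Eb))/sigma) + p1*(1 - Phi((a + sqrt(Eb))/sigma));
alpha = fminbnd(Perr, -2, 2)              % -> approximately 0 for equal priors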

A decision for a soft-decision decoder is not so clear. Soft-decoding requires processing

of the received codeword vector prior to making a decision on the value of the bits. There

is a large amount of research into various methods of soft-decision decoding. This paper

examines the performance of a Message Passing Algorithm known as the Sum-Product

Algorithm Decoder (SPA Decoder).

3.5.2 SPA ALGORITHM

As stated in the previous section, this paper examines the performance under simulation of the Sum-Product decoder. This decoding method is based on passing messages back and forth between the check and variable nodes of the Tanner graph (see Section 2.3: Tanner Graph Representation). Following each iteration, the algorithm determines a new soft-decision a posteriori probability and forms an estimate of the codeword. In order to understand the algorithm, we will define the following notation:

H – (n − k) by n parity-check matrix
ci – the i-th bit of the n-bit codeword
Pi(b) := Pr[ci = b | yi]


Rj – set of column locations where H(j,i) = 1, for the j-th row
Rj~i – set Rj excluding column i
Ci – set of row locations where H(j,i) = 1, for the i-th column
Ci~j – set Ci excluding row j
rji(b) := Pr[check node fj satisfied | ci = b, qkj(b) : k ∈ Rj~i]
qij(b) := Pr[ci = b | rki(b) : k ∈ Ci~j, yi]

In the algorithm, messages are passed from check node to variable node and from variable node to check node as messages rji(b) and qij(b) respectively. This message passing is illustrated in Figures 10 and 11 below using the Tanner graph representation.

Figure 10: Representation of nodes Figure 11: Representation of nodes

3.5.2.1 Computing the Messages

By using Bayes' theorem, qij(b) can be solved as follows:

qij(b) = Pr[ci = b | yi, check equations involving ci are satisfied]

       = Pr[check equations involving ci satisfied | ci = b, yi] · Pr[ci = b | yi] / Pr[check equations involving ci satisfied]



With the assumption of independence of the check equations being satisfied given ci = b:

qij(b) = Kij · Pi(b) · ∏_{k ∈ Ci~j} rki(b)     (1)

By using (1), we can solve for both cases, where b=0 and b=1:

qij(0) = Kij · Pi(0) · ∏_{k ∈ Ci~j} rki(0)     (2)

and

qij(1) = Kij · Pi(1) · ∏_{k ∈ Ci~j} rki(1)     (3)

where the Kij are chosen to guarantee unity total probability, qij(0) + qij(1) = 1, and Pi(0) and Pi(1) are found using equation (10).

Since all n − k check equations in H utilize even parity, rji(b) can be solved for using the following result (over mod-2 arithmetic):

fj = ∑_{i ∈ Rj} ci = 0     (4)

It can be shown that for M bits, the probability of the set containing an even number of ones is [3]:

Pr[even] = 1/2 + (1/2) · ∏_{i=1..M} (1 − 2·Pr[bit i = 1])     (5)

If there are an even number of bits that equal 1 that are attached to check node j (not

including bit i), then bit i must be 0 to satisfy even parity constraints. Therefore the


probability that check node j is satisfied given ci = 0 is given as follows, using (4) and (5):

rji(0) = 1/2 + (1/2) · ∏_{k ∈ Rj~i} (1 − 2·Pr[ck = 1])     (6)

Using (6), we can substitute qkj(1) for Pr[ck = 1], as this is the definition of qkj(1), and the product in (6) is over the set for which qkj(1) exists.

rji(0) = 1/2 + (1/2) · ∏_{k ∈ Rj~i} (1 − 2·qkj(1))     (7)

By using the required condition that rji(0) + rji(1) = 1 together with (7), rji(1) solves as:

rji(1) = 1/2 − (1/2) · ∏_{k ∈ Rj~i} (1 − 2·qkj(1))     (8)

Equations (2), (3), (7) and (8) represent the set of messages passed between check and variable nodes, as shown in Figures 10 and 11.
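As a small numeric illustration of (7) and (8) (a minimal sketch with arbitrary incoming probabilities, not part of the project code), the check-node update at one check node with three attached variable nodes is:

% Minimal sketch of the check-to-variable update (7)-(8) at one check node.
q1 = [0.2 0.7 0.4];                 % assumed incoming qkj(1) for the nodes in Rj
for i = 1:numel(q1)
    others = q1([1:i-1, i+1:end]);           % leave-one-out set Rj~i
    rj0(i) = 0.5 + 0.5*prod(1 - 2*others);   % eq. (7)
    rj1(i) = 0.5 - 0.5*prod(1 - 2*others);   % eq. (8)
end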

3.5.2.2 Initialization

It is apparent from equations (2), (3), (7) and (8) that each is a function of the others. This is obvious from the fact that the message passing algorithm passes messages back and forth between check and variable nodes. Since qij(b) = Pr[ci = b | rki(b) : k ∈ Ci~j, yi] and Pi(b) = Pr[ci = b | yi], and since initially no messages have been passed before the iterations begin, these two quantities can be equated and the message qij(b) at iteration 0 becomes:


qij(b) = Pi(b), where Pi(b) is found using equation (10) below.

The final expression required for computation is the value of Pi(b). It can be found as

follows:

Pi(b) = Pr[sm = x | yi], where x = +√Eb for b = 0 and x = −√Eb for b = 1

By Bayes’ Theorem:

Pi(b) = Pr[yi | sm = x] · Pr[sm = x] / ∑_{x ∈ {±√Eb}} Pr[yi | sm = x] · Pr[sm = x]     (9)

Making the assumption of equal a priori probabilities results in:

Pr[sm = 1] = Pr[sm = 0] = 0.5

Pr[yi | sm = x] is the value of the likelihood function evaluated at x. It is dependent on the channel parameters and modulation scheme. This simulation utilizes BPSK over an AWGN channel; therefore, the likelihood functions are given by f(y | sm = x), x ∈ {±√Eb}. By manipulating the above expression for Pi(b), with the substituted values evaluated using the likelihood functions of Figure 9, Pi(b) can be solved as:

Pi(b) = 1 / (1 + e^(−4·yi·x / NO))     (10)

where x is the mean of the corresponding likelihood function, x ∈ {±√Eb}, b is the bit mapped to x, yi is the continuous-time signal received for the i-th bit, and NO is the single-sided noise power spectral density.


NOTE: The above equation has been derived for an AWGN channel with noise variance NO/2, utilizing BPSK modulation, and is only valid for such a channel.
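A direct transcription of (10), shown below as a minimal sketch with an arbitrary received sample and the BPSK mapping of Section 3.2:

% Minimal sketch of the channel evidence (10) for BPSK over AWGN,
% using the mapping 0 -> +sqrt(Eb), 1 -> -sqrt(Eb) from Section 3.2.
Eb = 1; No = 1;
yi = -0.3;                                % assumed received sample
x  = @(b) (-1)^b * sqrt(Eb);              % bit-to-symbol mapping
Pi = @(b) 1/(1 + exp(-4*yi*x(b)/No));     % eq. (10)
P0 = Pi(0); P1 = Pi(1);                   % note P0 + P1 == 1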

3.5.2.3 Soft-Decision

Following each iteration, the series of parity checks is performed on the estimated codeword and the syndrome is computed (i.e. Syn = Hĉ), but thus far we have not established how this codeword is estimated. Pr[ci = b | all check nodes satisfied, yi] can be computed using the independence assumption and Bayes' theorem as below.

Qi(b) = Pr[ci = b | yi, all check nodes satisfied]

Qi(b) = Pr[all check nodes satisfied | ci = b] · Pr[ci = b] / Pr[all check nodes satisfied]

Qi(b) = Ki · Pi(b) · ∏_{j ∈ Ci} rji(b)     (11)

where Ki is chosen such that Qi(0) + Qi(1) = 1, and Pi(b) is found using equation (10).

ĉi = 1 if Qi(1) > 0.5, otherwise ĉi = 0

Following computation of ĉ, the syndrome is calculated as:

Syn = Hĉ


If Syn = [0](n−k)×1, or the algorithm has reached its maximum number of iterations, the algorithm exits; otherwise it proceeds to the next iteration. It is important to note that Syn = [0] does not guarantee that the estimated codeword is the correct codeword (that ĉ = c). It only indicates that the estimated codeword satisfies all parity checks (it is a legitimate codeword). Such a codeword that satisfies the syndrome (generates a zero vector) but is not the original codeword is an undetectable error. For large blocklength codes with a good minimum distance, the probability of these undetectable errors is very low [3].
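A compact illustration of the combine-and-decide step (a minimal sketch with arbitrary message values, not part of the project code):

% Minimal sketch of the a posteriori combine (11) and hard decision at one
% variable node i with two attached check nodes.
Pi1 = 0.75;                      % assumed Pi(1) from eq. (10)
r1  = [0.6 0.55];                % assumed rji(1) from the check nodes in Ci
Q1  = Pi1*prod(r1);  Q0 = (1 - Pi1)*prod(1 - r1);
Ki  = 1/(Q0 + Q1);               % chosen so that Qi(0) + Qi(1) = 1
ci_hat = (Ki*Q1 > 0.5)           % estimated bit, fed into Syn = H*c_hat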

3.5.2.4 Simulation Computation

Thus far this paper has examined the computations required to calculate the variables

used in the algorithm. The decoding algorithm itself is shown by the flowchart (Figure

12) below.


Figure 12: Flowchart for Decoding

4.0 RESULTS

This paper has examined the LDPC error control coding scheme. In order to verify the

proper simulation implementation of the coding scheme, results from David MacKay's

Information Theory, Inference, and Learning Algorithms were used as a basis for

comparison.

Parameters used by MacKay [3, p.565] were:

• Regular LDPC Codes

• Blocklengths of 96, 204, 408 and 816

• Column weight (wC) of 3

• ½ Rate Codes

• 13 Iterations for SPA Decoder

Parameters utilized for this simulation were:

• Regular LDPC Codes

• Blocklengths of 96, 204 and 408 (insufficient processing power to compute 816)

• Column weight (wC) of 3

• ½ Rate Codes

• 13 Iterations for SPA Decoder


Figure 13: MacKay's Results [3, p.565]    Figure 14: Simulated Results

Figures 13 and 14 above respectively show MacKay's results and the results from this simulation. The simulation verifies MacKay's results within a reasonable tolerance.

Section 2.2 discussed the Hamming (7,4) coding scheme. The figure below (Figure 15) compares the performance of the above simulation blocklengths versus Hamming (7,4).

Figure 15: Performance of simulations vs. Hamming (7,4) with Shannon’s limit


The above figure shows a key benefit of utilizing LDPC coding. The sharp performance curves are apparent when comparing the performance of the LDPC codes versus that of Hamming (7,4), even for relatively low blocklengths. It can be seen that as the blocklength increases, so does the performance.

In order to compare these two codes, the standard way to represent the BER vs. SNR is

Eb/NO where NO is the single-sided noise power spectral density, and Eb is the average

energy per message bit (not symbol bit). This scheme takes into account the added

requirement of transmitting additional parity bits. This relationship can be computed as:

(Eb/NO) dB = 10·log10(Es/(R·NO))   or   (Eb/NO) dB = 10·log10(Es/NO) + 10·log10(1/R)

For a code rate of ½ (such as that used in simulation)

(Eb/NO) dB = 10·log10(Es/NO) + 10·log10(2)   or   (Eb/NO) dB = (Es/NO) dB + 3.01 dB

This increase of 3.01 dB represents the increase in power required to transmit the additional parity bits while still maintaining a constant ratio of message-bit energy to noise power spectral density.
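In Matlab terms the conversion is a one-liner (a minimal sketch with an assumed Es/NO value):

R = 1/2; EsNo_dB = 1.0;                   % assumed symbol-energy SNR in dB
EbNo_dB = EsNo_dB + 10*log10(1/R)         % adds 3.01 dB for a rate-1/2 code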


Re-examining Figure 15, it can be seen that at a target bit error rate (BER) of 10^-4, the blocklength of 408 results in approximately a 2 dB reduction in the Eb/NO requirement versus a blocklength of 96. Shannon's noisy coding theorem [1, p.133] has shown that as the blocklength tends towards infinity, the performance graph will approach Shannon's limit for a given code rate (see Section 2.1: Shannon Theorem for Channel Coding), which can be seen on the graph at 0.188 dB.

5.0 PROBLEMS ENCOUNTERED

Several problems were encountered in working on this project. This paper will briefly

examine the problems encountered and solutions developed.

Computational Time:

The original implementation of the SPA decoder algorithm was an O(N²) process. This process took an exceedingly large amount of processing time, even for low blocklengths of 96 bits. Without modifications, this paper would have been unable to accurately simulate results for the larger blocklengths, and would have been unable to compare simulations with MacKay's results. The solution used was to find both the row and column supports (Rj and Ci) while generating H in the MakeH routine. Using these in the SPA decoding algorithm reduced it to an O(N) process. In addition, it also reduced the computation time of the generation routine (MakeH) from O(N³) to O(N²) while still performing the same function (generating H subject to column weight, regularity and cycle constraints).
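The same support sets can also be recovered from any existing H after the fact. The sketch below is a minimal illustration, independent of the MakeH routine (which stores the supports as padded matrices rather than cell arrays), of visiting only the 1's of H:

% Minimal sketch: precompute the supports of H once so the decoder visits
% only the 1's instead of scanning every entry of every row and column.
[j, i] = find(H);                        % row/column indices of the 1's
Rj = accumarray(j, i, [], @(v) {v'});    % Rj{r} = columns with a 1 in row r
Ci = accumarray(i, j, [], @(v) {v'});    % Ci{c} = rows with a 1 in column c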

Linear Independence:

When generating the parity-check matrix randomly, it is difficult to guarantee

linear independence of each row. The solution to the problem was one used by Arun Avudainayagam, a Master's student at the University of Florida [8]. It


involved rearranging the columns of H to make the left-most (n−k)×(n−k) sub-matrix of H invertible. Please reference Section 3.4.1 for more detailed information.

Binary Matrix Inversion:

When computing the generator matrix G, there is a requirement to invert the left-most (n−k)×(n−k) sub-matrix of H. The matrix inversion routine built into Matlab was unable to perform a binary inversion over GF(2). The solution involved creating a routine to efficiently perform this inversion over GF(2). This was done utilizing Gauss-Jordan elimination; the exclusive-or (XOR) function in Matlab proved crucial to making this an efficient algorithm.

Regularity Control:

The routine for generating H (as seen in Section 3.3) randomly places ones starting from column 1 through column N. Each row allows only wR ones to be placed. In order to relax this regularity constraint, a tolerance variable is introduced. The value of the variable must be less than or equal to 1. When the value is 1, the routine generates a regular LDPC code. When the tolerance variable is less than 1, the routine allows up to one additional 1 per row when:

i > reltol·N

where i is the current column counter, reltol is the tolerance variable and N is the blocklength of the code being generated.


6.0 FUTURE WORK

Following completion of the project, several objectives for future work were proposed.

These prospects for future study will be briefly examined.

6.1 INCREASE EFFICIENCY OF SIMULATION ALGORITHMS:

The project successfully ran simulations of lower blocklengths of code (96, 204 and 408

bit codewords). Future research would branch into more efficient methods of simulating

the performance of LDPC codes. This could mean either devising a more efficient algorithm or developing an algorithm utilizing distributed computing. The resulting product would

provide an efficient way to simulate larger blocklengths and examine the performance of

these larger blocklengths.

6.2 LOWER MEMORY REQUIREMENTS OF THE PARITY-CHECK MATRIX:

Utilize a different method to generate the parity-check matrix that leads to lower memory

requirements and a more practical implementation. Various researchers have proposed several methods. Examining such methods or developing a new one would be beneficial in proceeding to a practical implementation of the system.

6.3 VLSI IMPLEMENTATION:

Once the memory requirements of the parity-check matrix have been lowered, it would be natural to proceed to implementing LDPC in hardware. A practical system

would allow other performance measurements to be determined such as maximum

transmission rate with various blocklengths and latency of the system.


7.0 CONCLUSION

Low-Density Parity-Check Codes (LDPC) were discovered in the early 1960’s by Robert

Gallager. These codes had been largely forgotten after their invention until their

rediscovery in the mid-nineties. This was due in part to the high complexity of decoding

the messages and the lack of computing power when these codes were originally

invented.

LDPC codes are, along with turbo codes, currently the best performing channel coding schemes: they can theoretically reach Shannon's limit, and have been shown to come extremely close in simulations. LDPC codes can be thought of as a generic term for a class of error correcting codes distinguished from others by a very sparse parity-check matrix, H. LDPC performance improves as blocklength increases, so these codes can theoretically achieve Shannon's limit as blocklength goes to infinity.

LDPC allows for the reliable transmission, or storage, of data in noisy environments. Even short codes provide a substantial coding gain over uncoded, or low-complexity coded, systems. These results allow for lower transmission power and transmission over noisier channels, with the same, if not better, reliability.

This paper has provided a milestone in implementing such an error control coding scheme and has helped the project team to develop a good understanding of LDPC codes overall. The project provided an opportunity to use theoretical knowledge and write Matlab algorithms using both source code and Simulink modelling. Within an acceptable tolerance, the simulations succeeded in recreating results published in 2003 by MacKay [3] using identical parameters, meaning the implementation was successful.

Moving forward, LDPC codes will become a more viable coding scheme for practical implementation. The coding scheme has already been chosen as a standard for the DVB-S2 protocol used in digital video broadcasting [9] and is also used in some data storage applications. Overall, the decoding of LDPC codes has become a very hot topic of research over the past few years, with real-time encoding becoming the overall goal in terms of application integration. It is also worth noting that there are hardware implementations that are roughly 10^3 times faster than the current software implementations [1].


REFERENCES

[1] Guilloud, F. (2004). Generic Architecture for LDPC Codes Decoding. Paris: Telecom Paris.
[2] Gallager, R. G. (1963). Low-Density Parity-Check Codes. Cambridge, MA: MIT Press.
[3] MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms. United Kingdom: Cambridge University Press.
[4] Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, July and October 1948.
[5] Lucent Technologies. Information Theory: The Growth of System Capacity. http://www.lucent.com/minds/infotheory/what.html (viewed March 2006).
[6] Lin, S., & Costello, D. J. Jr. (2004). Error Control Coding (Second Edition). Upper Saddle River, NJ: Pearson Prentice Hall.
[7] Ryan, W. E. (2003). An Introduction to LDPC Codes. Arizona: University of Arizona.
[8] Avudainayagam, A. Arun's Page (personal university project website). http://arun-10.tripod.com (viewed February 2006).
[9] Morello, A., & Mignone, V. (2004). DVB-S2 Ready for Lift-Off. EBU Technical Review, October 2004.
[10] Ryan, W. E. (2001). An Introduction to LDPC Codes. Unpublished notes.
[11] Hu, X.-Y., Eleftheriou, E., Arnold, D. M., & Dholakia, A. (2001). Efficient Implementations of the Sum-Product Algorithm for Decoding LDPC Codes. IEEE.
[12] Huang, F. (1997). Evaluation of Soft Output Decoding for Turbo Codes. Blacksburg, VA: Virginia Polytechnic Institute.
[13] Yao, E., Nikolic, B., & Anantharam, V. (2003). Iterative Decoder Architectures. IEEE Communications Magazine, August 2003, pp. 132-140.
[14] Andrews, K., Dolinar, S., & Thorpe, J. (2005). Encoders for Block-Circulant LDPC Codes. California Institute of Technology. Submitted to ISIT, currently unpublished.


APPENDIX A: MATLAB CODE

A.1 – MakeH.m

function [ParCheck,Rj,Ci]=MakeH(blocklength,nummessagebits,wc,reltol)
%Generates a Parity-Check Matrix H with given inputs:
%blocklength    - N - length of codeword
%nummessagebits - k - length of message
%wc             - column weight - # of 1's per column
%reltol         - controls regularity -> 1 for regular LDPC generation,
%                 but may never finish
rows=blocklength-nummessagebits;
cols=blocklength;
wr=ceil(cols*wc/rows); %True all the time only if H is a regular LDPC code -> target for irregular LDPC
counter(1:rows,1:wr)=0;
rowcount(1:rows)=0;
%Generate H subject to constraints
for i=1:1:cols
    for k=1:1:wc
        common(1)=2;
        while(max(common)>=2)
            common(1:rows)=0;
            randnum=round(rand*(rows-1))+1;
            while(((rowcount(randnum)>=wr && i/cols<=reltol) || (rowcount(randnum)>wr && i/cols>reltol)) || length(find(counter(randnum,:)==i))>=1)
                randnum=round(rand*(rows-1))+1;
            end
            countertemp=counter(randnum,:);
            countertemp(rowcount(randnum)+1)=i;
            %Guaranteeing no length 4 cycles on the Tanner graph
            for j=1:1:rows
                if(j~=randnum)
                    for l=1:1:wr
                        if(length(find(counter(j,:)==countertemp(l)))>=1 && countertemp(l)~=0)
                            common(j)=common(j)+1;
                        end
                    end
                end
            end
        end
        %Valid bit location, write it
        counter(randnum,rowcount(randnum)+1)=i;
        rowcount(randnum)=rowcount(randnum)+1;
        colcounter(i,k)=randnum;
        ParCheck(randnum,i)=1;
    end
    i %Display current column
end
Rj=counter;
Ci=colcounter;

A.2 – ParseH.m

function [Gprime,newcol]=ParseH(mat1)
[rows,cols]=size(mat1);
%Column rearrangement to guarantee a non-singular matrix
tempH=mat1;
for i=1:rows
    NewColPosition(i)=0;
end
%Performs Gauss-Jordan on a dummy variable to move columns of H to make
%the submatrix A invertible
for i=1:1:rows
    if tempH(i,i)==0
        for k=(i+1):1:cols
            if (tempH(i,k)==1)
                spot=k;
                break;
            end
        end
        tempcol=tempH(:,spot);
        tempH(:,k)=tempH(:,i);
        tempH(:,i)=tempcol;
        tempcol=mat1(:,spot);
        mat1(:,k)=mat1(:,i);
        mat1(:,i)=tempcol;
        NewColPosition(i)=spot;
    end
    for j=1:1:rows
        if j~=i
            if tempH(j,i)==1
                tempH(j,:)=xor(tempH(i,:),tempH(j,:));
            end
        end
    end
end
%Reassign matrices to proper location
augmat(1:rows,1:rows)=mat1(1:rows,1:rows);
B(1:rows,1:(cols-rows))=mat1(1:rows,(rows+1):cols);
clear('mat1');
clear('tempH');
newcol=NewColPosition;
%Augment identity matrix with square matrix
for i=1:1:rows
    for j=1:1:rows
        if(i==j)
            augmat(i,j+rows)=1;
        end
        if(i~=j)
            augmat(i,j+rows)=0;
        end
    end
end
%Begin GF2 inversion
for i=1:1:rows
    if(augmat(i,i)==0 && i~=rows)
        swflag=0;
        for k=i+1:1:rows
            if(augmat(k,i)==1)
                temp=augmat(i,:);
                augmat(i,:)=augmat(k,:);
                augmat(k,:)=temp;
                swflag=1;
                break;
            end
        end
        if(swflag==0 || (i==rows && augmat(rows,rows)==0))
            disp('Matrix was not invertible -> singular')
            done=0;
            break;
        end
    end
    for j=1:1:rows
        if(augmat(j,i)==1 && j~=i)
            augmat(j,:)=xor(augmat(i,:),augmat(j,:));
        end
    end
end
%Augment with identity matrix to create a full generation matrix
Ainv(1:rows,1:rows)=augmat(1:rows,(rows+1):2*rows);
Gprime=BinaryMultiply(Ainv,B);
for i=1:1:(cols-rows)
    for j=1:1:(cols-rows)
        if(i==j)
            Gprime(rows+i,j)=1;
        end
        if(i~=j)
            Gprime(rows+i,j)=0;
        end
    end
end
clear('augmat');

A.3 – Run.m

function [SNR,BER]=Run(G,H,Rj,Ci,Iter,NewCol)
%G      - generator matrix
%H      - parity-check matrix
%Rj     - row support
%Ci     - column support
%Iter   - number of iterations for the decoder
%NewCol - column rearrangement for parity-check columns
SNR=[1 1.5 2 2.5 3]; %SNR vector -> change this when needed
No=1; %Noise power spectral density (single-sided)
Rate=(size(H,2)-size(H,1))/size(H,2);
Amp=sqrt(No*Rate*10.^(SNR/10)); %Eb/No = 10log10(Amp^2/(Rate*No))
msgsize=48; %Message size
warning off MATLAB:divideByZero
done=0;
MaxErrors=msgsize*10;
var=No/2;
for i=1:1:length(SNR)
    BitErrors=0;
    numtime=0;
    while(BitErrors<MaxErrors) %Until a certain amount of bit errors
        message=round(rand(msgsize,1)); %Random message sequence
        Codeword=LDPCencode(G,NewCol,message); %Encode
        Tx=BPSKoverAWGN(Amp(i),Codeword,No); %Modulate
        [Rx,NumIt,Succ]=SPADecode(H,Rj,Ci,Tx,var,Iter,NewCol); %Decode
        BitErrors=BitErrors+sum(xor(Rx(:),message)); %Add bit errors to total
        if(mod(numtime,100)==0) %Display status of number of runs
            numtime
        end
        if (numtime>1000000) %Exit if running too long
            done=1;
            break;
        end
        numtime=numtime+1;
    end
    BER(i)=BitErrors/(numtime*msgsize); %Calculate BER
    if(done==1)
        break;
    end
end

A.4 – LDPCencode.m

function [Codeword]=LDPCencode(G,NewColArrangement,Message)
%Encodes the given message
CodewordTemp=BinaryMultiply(G,Message); %Create codeword
rows=length(NewColArrangement);
%Perform position adjustments based on column rearrangement of H
for i=rows:-1:1
    if(NewColArrangement(i)~=0)
        TempBit=CodewordTemp(i);
        CodewordTemp(i)=CodewordTemp(NewColArrangement(i));
        CodewordTemp(NewColArrangement(i))=TempBit;
    end
end
Codeword=CodewordTemp;
clear('Temp');
clear('CodewordTemp');

A.5 – BinaryMultiply.m

function [result]=BinaryMultiply(mat1,mat2)
%Performs GF2 (binary) multiplication of 2 matrices
[row1,col1]=size(mat1);
[row2,col2]=size(mat2);
if(col1==row2)
    for i=1:1:row1
        for j=1:1:col2
            result(i,j)=mod(sum(and(mat1(i,:),transpose(mat2(:,j)))),2);
        end
    end
end
if(col1~=row2)
    disp('Error.. Matrices cannot be multiplied')
end

A.6 – BPSKoverAWGN.m

function [ReceivedWord]=BPSKoverAWGN(Amp,Codeword,No)
%Modulates the codeword with BPSK and adds AWGN
for i=1:1:length(Codeword)
    ReceivedWord(i)=Amp*(-1).^Codeword(i)+AWGN(No);
end

A.7 – AWGN.m

function [Noise]=AWGN(No)
%Returns a Gaussian noise sample with variance No/2 (Box-Muller method)
x1=rand;
x2=rand;
y1=sqrt(-2*log(x1))*cos(2*pi*x2);
Noise=sqrt(No/2)*y1;


A.8 – SPADecode.m

function [RxMessage,NumIterationsPerformed,Success]=SPADecode(H,Rj,Ci,Codeword,Variance,MaxIterations,newcol)
%This function performs the sum-product algorithm for decoding the
%received vector. Assumes H is a sparse binary matrix (H in expanded
%form). Also assumes BPSK for channel modulation, with the mapping
%0 => +Amp, 1 => -Amp (matching BPSKoverAWGN)
sizeofH=size(H);
rows=sizeofH(1); %Number of rows of parity check = number of parity bits
cols=sizeofH(2); %Number of cols of parity check = length of codeword
var=Variance;
Success=0;
factor=1;
factor1=1;
%Initialization
for i=1:1:cols
    for k=1:1:size(Ci,2)
        Pi(Ci(i,k),i)=1/(1+exp(-2*Codeword(i)/var));
        Pi(Ci(i,k),i)=1-Pi(Ci(i,k),i);
        qij0(Ci(i,k),i)=1-Pi(Ci(i,k),i);
        qij1(Ci(i,k),i)=Pi(Ci(i,k),i);
    end
end
%SPA routine
for count=1:1:MaxIterations
    %Calculate messages passed from check to variable nodes
    for j=1:1:rows
        for kp=1:1:size(Rj,2)
            if(Rj(j,kp)~=0)
                temp=0.5;
                for k=1:1:size(Rj,2)
                    if(Rj(j,k)~=0 && Rj(j,k)~=Rj(j,kp))
                        temp=temp*(1-2*qij1(j,Rj(j,k)));
                    end
                end
                rji0(j,Rj(j,kp))=0.5+temp;
                rji1(j,Rj(j,kp))=1-rji0(j,Rj(j,kp));
            end
        end
    end
    %Calculate messages passed from variable to check nodes
    for i=1:1:cols
        for kp=1:1:size(Ci,2)
            temp0=1;
            temp1=1;
            for k=1:1:size(Ci,2)
                if(Ci(i,k)~=Ci(i,kp))
                    temp0=temp0*rji0(Ci(i,k),i);
                    temp1=temp1*rji1(Ci(i,k),i);
                end
            end
            temp0=temp0*(1-Pi(Ci(i,kp),i));
            temp1=temp1*Pi(Ci(i,kp),i);
            factor1(i)=temp0+temp1;
            temp0=temp0/factor1(i);
            temp1=temp1/factor1(i);
            qij0(Ci(i,kp),i)=temp0;
            qij1(Ci(i,kp),i)=temp1;
        end
    end
    %Make soft decision -> calculate estimated codeword
    for i=1:1:cols
        temp0=1;
        temp1=1;
        for k=1:1:size(Ci,2)
            temp0=temp0*rji0(Ci(i,k),i);
            temp1=temp1*rji1(Ci(i,k),i);
        end
        temp0=temp0*(1-Pi(Ci(i,1),i));
        temp1=temp1*(Pi(Ci(i,1),i));
        factor(i)=temp0+temp1;
        temp0=temp0/factor(i);
        temp1=temp1/factor(i);
        Qi(i)=temp1;
        if Qi(i)>0.5
            CodeEst(i)=1;
        else
            CodeEst(i)=0;
        end
    end
    %Check to see if all parity checks are satisfied
    val=sum(BinaryMultiply(CodeEst,transpose(H)));
    if(val==0 && sum(CodeEst)~=0)
        NumIterationsPerformed=count;
        Success=1;
        break;
    end
end
%If not successful
if(Success==0)
    NumIterationsPerformed=MaxIterations;
end
%Get estimated message from estimated codeword
RxMessage=GetMessage(CodeEst,newcol,cols-rows);


A.9 – GetMessage.m

function [Message]=GetMessage(Codeword,NewColArrangement,MessageLength)
%Returns the message from a codeword
for i=1:1:MessageLength
    if(NewColArrangement(i)~=0)
        TempBit=Codeword(i);
        Codeword(i)=Codeword(NewColArrangement(i));
        Codeword(NewColArrangement(i))=TempBit;
    end
end
Message(1:MessageLength)=Codeword((length(Codeword)-MessageLength)+1:length(Codeword));

A.10 – MAP.m (used to generate the likelihood functions in Section 3.5.1)

Eb=1;
No=1;
x1=-5:.01:2;
x2=-2:0.01:5;
y1=1/sqrt(2*pi*No/2).*exp(-(x1-(-sqrt(Eb))).^2/No);
y2=1/sqrt(2*pi*No/2).*exp(-(x2-(sqrt(Eb))).^2/No);
plot(x1,y1,x2,y2)


APPENDIX B: SIMULINK MODEL