Turbo Coding for Satellite and Wireless Communications


    Chapter 9

    LOW DENSITY PARITY CHECK CODES

In 1962, R. G. Gallager [162] introduced a class of error correcting codes called Low-Density Parity-Check (LDPC) codes. These codes have parity check matrices that are sparse, i.e., they contain mostly 0s and only a few 1s. Although the sparseness of the parity check matrix results in low decoding complexity, that complexity was still high enough to make the implementation of LDPC codes infeasible until recently. It is interesting to note that the iterative decoding procedure proposed by Gallager [162] is practically the same as the message passing schemes used for decoding turbo and turbo-like codes today. In spite of this, apart from a few references [163], [164], [165] to Gallager's work, the subject remained unknown to the information theory community. It was only after the discovery of turbo codes in 1993 [6] that interest in Low-Density Parity-Check codes was rekindled and LDPC codes were re-discovered independently by MacKay and Neal [167] and Wiberg [166]. In the past few years, there has been a considerable amount of research work on LDPC codes [168], [171], [14], [176], [174], [179], [178] and [180].

    9.1. Gallager Codes: Regular Binary LDPC Codes

    Coding for error correction is one of the many tools available for achieving

    reliable data transmission in communication systems [162]. Shannon showed

that for any channel with a defined capacity, there exist coding schemes that, when decoded with an optimal decoder, achieve arbitrarily small error probability for all transmission rates below capacity. A fundamental problem of information theory is to make practical codes whose performance approaches the Shannon limit. The diagram of a general error-correcting communication system is depicted in Figure 9.1.


    The aim of an error-correction coding scheme is to encode the information

sequence in such a way that the distribution of the encoded symbols is very close to the probability distribution that maximizes the mutual information between the input and the output of the channel.

    By doing this, an error-correcting code minimizes the probability of residual

    errors after decoding, while introducing as little redundancy as possible during

    encoding. The codes that Shannon used in his proof were random block codes,

    which are introduced in the next section.

    9.2. Random Block Codes

Consider a channel with a given input alphabet and output alphabet. We make the following definitions:

An (n, k) block code is a mapping from length-k input messages to length-n codewords: a binary input message x of length k is mapped to a codeword of length n.

The rate of communication is R = k/n, i.e., k bits of information are sent in n channel uses.

A decoder is a mapping from length-n received vectors to length-k messages: received channel outputs are mapped to an estimate of the information sequence.

The probability of block error of a code, given a distribution over input messages and a channel model, is the probability P_B that the decoded message differs from the transmitted message x.

The optimal decoder is the decoder that minimizes P_B.

    Figure 9.2 summarizes the operation of an (n, k) block code.

According to Shannon's channel coding theorem, for any rate R < C and any ε > 0, there is some N such that for any n > N there are (n, k) codes with k/n ≥ R that ensure that the probability of block error does not exceed ε. Also, Shannon's proof of the channel coding theorem indicates that, for a large class of channels, almost all randomly selected long codes are good in the above sense. The abundance of good codes, however, does not translate itself into the ease of finding easily decodable codes. Shannon relates this problem to the difficulty of giving an explicit construction for a good approximation to a random sequence [1].


    9.2.1 Generator Matrix

A linear block code of block length n and rate k/n can be described by an n × k generator matrix G that describes the mapping t = Gs from source words s to codewords t (where s and t are column vectors). It is common to consider G in systematic form, G = [I_k ; P], so that the first k transmitted symbols are the source symbols; here [A ; B] indicates the (vertical) concatenation of matrix A with matrix B, and I_k represents the k × k identity matrix. The remaining n − k symbols are the parity-checks.
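As a concrete (hypothetical) illustration of systematic encoding, the following Python sketch builds G = [I_k ; P] for an arbitrarily chosen parity submatrix P and encodes a source word over GF(2); the function names and the particular P are inventions for the example:

```python
import numpy as np

def systematic_generator(P):
    """Build G = [I_k ; P] (vertical concatenation) so that t = G s mod 2
    places the k source symbols first, followed by the parity checks."""
    k = P.shape[1]
    return np.vstack([np.eye(k, dtype=int), P]) % 2

def encode(G, s):
    """Encode the column vector s: t = G s over GF(2)."""
    return G.dot(s) % 2

# Example: k = 3 source bits, n = 6 codeword bits (P chosen arbitrarily).
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = systematic_generator(P)
s = np.array([1, 0, 1])
t = encode(G, s)
print(t)  # the first 3 symbols are the source bits themselves
```

The systematic form makes the source symbols directly readable from the codeword, which is why it is the conventional choice.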

    9.2.2 Parity Check Matrix

A linear block code is also described by an m × n parity check matrix H, where m = n − k. If the corresponding generator matrix is written in systematic form as above, then H has the form [−P I_m]. Note that for binary codes, −P = P. Each row of the parity-check matrix describes a linear constraint satisfied by all codewords, Ht = 0, and hence the parity-check matrix can be used to detect errors in the received vector r = t + e:

z = Hr = H(t + e) = He,

where e is the error vector and z is the syndrome vector. If the syndrome vector is null, we assume that there has been no error. Otherwise, the decoding problem is to find the most likely error vector that explains the observed syndrome, given the assumed properties of the channel. The operation of the linear error-correcting codes is summarized in Figure 9.3.
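The syndrome computation can be sketched as follows; the toy (6, 3) code and its P matrix are arbitrary choices for illustration:

```python
import numpy as np

# Toy (6, 3) binary code, with P chosen arbitrarily for illustration.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.vstack([np.eye(3, dtype=int), P]) % 2   # G = [I_k ; P]
H = np.hstack([P, np.eye(3, dtype=int)]) % 2   # H = [P  I_m] (over GF(2), -P = P)

t = G.dot(np.array([1, 0, 1])) % 2    # a codeword
assert not (H.dot(t) % 2).any()       # every codeword satisfies H t = 0

e = np.array([0, 0, 0, 1, 0, 0])      # a single-bit error
r = (t + e) % 2                       # received vector
z = H.dot(r) % 2                      # syndrome: H r = H(t + e) = H e
print(z)
```

Note that the codeword contribution cancels, so the syndrome depends only on the error pattern.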

9.3. Regular Binary LDPC Codes: Original Gallager Codes

    Low-density parity-check codes are defined in terms of a sparse parity-check

    matrix H that consists almost entirely of zeroes. Gallager defined (n, p, q)


LDPC codes to have a block length n and a sparse parity-check matrix with exactly p 1s per column and q 1s per row, where p ≥ 3 and q > p. Figure 9.4 shows a code constructed by Gallager [162]. In this code, every codeword bit participates in exactly p parity-check equations and every such check equation involves exactly q codeword bits. If all the rows are linearly independent, then the rate of the code is (q − p)/q; otherwise the rate is (n − m)/n, where m is the dimension of the row space of H.

In Gallager's construction of Figure 9.4, the matrix is divided into p submatrices, each containing a single 1 in each column. The first of these submatrices contains all its 1s in descending order; i.e., the ith row contains all its 1s in columns (i − 1)q + 1 to iq. The lower two sections of the matrix are column permutations of the upper section. By considering the ensemble of all matrices formed by such column permutations, Gallager proved several important results. These include the fact that the error probability of the optimum decoder decreases exponentially for sufficiently low noise and sufficiently long block length, for fixed p. Also, the typical minimum distance increases linearly with the block length.

    9.3.1 Construction of Regular Gallager Codes

One of the attractions of LDPC codes is their simple description in terms of a random sparse parity-check matrix, which makes them easy to construct for any rate.


    Many good codes can be built by specifying a fixed weight for each row and

    each column, and constructing at random subject to those constraints. However,

the best LDPC codes use further design criteria. The basic constraints of the Gallager code construction are as follows:

    The parity-check matrix has a fixed column weight p and a fixed row

    weight q.

    The parity-check matrix is divided into p submatrices, each containing a

    single 1 in each column.

    Without loss of generality, the first submatrix is constructed in some

    predetermined manner.

    The subsequent submatrices are random column permutations of the first

    submatrix.
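The constraints above can be sketched in a few lines; the helper name and the use of numpy's permutation routine are illustrative choices, not part of the original text:

```python
import numpy as np

def gallager_parity_check(n, p, q, seed=0):
    """Random (n, p, q) Gallager matrix: p ones per column, q per row.
    The first submatrix has its 1s in descending order (row i covers
    columns (i-1)q+1 .. iq); the others are column permutations of it."""
    assert n % q == 0
    rng = np.random.default_rng(seed)
    rows_per_sub = n // q
    first = np.zeros((rows_per_sub, n), dtype=int)
    for i in range(rows_per_sub):
        first[i, i * q:(i + 1) * q] = 1
    subs = [first]
    for _ in range(p - 1):
        subs.append(first[:, rng.permutation(n)])
    return np.vstack(subs)

H = gallager_parity_check(n=20, p=3, q=4)
print(H.sum(axis=0))  # column weights: all p
print(H.sum(axis=1))  # row weights: all q
```

By construction every column has weight p and every row weight q, matching the regular (n, p, q) definition.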

Since H is not in systematic form, Gaussian elimination using row operations and reordering of columns needs to be performed to derive an equivalent parity-check matrix H' = [P' I_m]. The original H then has to be redefined to include the column reordering performed during the Gaussian elimination. The corresponding generator matrix is then G = [I_k ; P']. G is not in general sparse, so the encoding complexity is of order n² operations per block. However, with simple modifications of the structure of H, the encoding complexity can be reduced significantly [174].
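The Gaussian elimination step can be sketched as follows; note that this illustrative version reduces H to the mirror form [I_m | A] (source bits last), which is equivalent up to column ordering to the [P' I_m] form described above, and all names are invented for the example:

```python
import numpy as np

def systematic_form(H):
    """Gaussian elimination over GF(2) with column reordering, bringing H
    to the form H2 = [I_m | A]; returns H2 and the column permutation."""
    H2 = H.copy() % 2
    m, n = H2.shape
    perm = np.arange(n)
    for r in range(m):
        rows, cols = np.nonzero(H2[r:, r:])
        if rows.size == 0:
            raise ValueError("H has linearly dependent rows")
        i, c = rows[0] + r, cols[0] + r
        H2[[r, i]] = H2[[i, r]]              # row swap to bring the pivot up
        H2[:, [r, c]] = H2[:, [c, r]]        # column swap (reordering)
        perm[[r, c]] = perm[[c, r]]
        for j in range(m):                   # clear the rest of the pivot column
            if j != r and H2[j, r]:
                H2[j] = (H2[j] + H2[r]) % 2
    return H2, perm

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
H2, perm = systematic_form(H)
m, n = H2.shape
A = H2[:, m:]
G = np.vstack([A, np.eye(n - m, dtype=int)])  # codeword = [checks ; source]
print((H2.dot(G) % 2 == 0).all())             # every column of G is a codeword
```

The returned permutation records how the original H must be redefined so that the derived G and the reordered H describe the same code.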

    9.4. Decoding

There are two decoding schemes that achieve a reasonable balance between the complexity and the probability of decoding error. The first is particularly simple but is applicable only to the BSC at rates far below the channel capacity. The second scheme, which decodes directly from the a posteriori probabilities at the channel output, assumes that the codewords from an (n, p, q) code are used with equal probability on an arbitrary binary-input channel.

9.4.1 Introduction to Gallager's Decoding

In the first decoding scheme, the decoder computes all the parity checks and then changes any digit that is contained in more than some fixed number of unsatisfied parity-check equations. Using these new values, the parity checks are recomputed, and the process is repeated until the parity checks are all satisfied. If the parity-check sets are small, the decoding complexity is reasonable, since most of the parity-check sets will contain either one transmission error or no transmission errors. Thus, when most of the parity-check equations checking on a digit are unsatisfied, there is a strong indication that the digit in question is in


error. Suppose that a transmission error has occurred in the first digit of the code in Figure 9.4. Then parity checks 1, 6, and 11, i.e., all three parity-check equations checking on digit 1, would be violated. On the other hand, at most one of the three equations checking on any other digit in the block would be violated.
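The first (hard-decision, bit-flipping) scheme can be sketched as below; the threshold choice and the toy parity-check matrix are illustrative, not Gallager's exact parameters:

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20, threshold=None):
    """Sketch of the first (hard-decision) scheme: repeatedly flip any bit
    involved in more than `threshold` unsatisfied parity checks."""
    x = r.copy() % 2
    p = int(H.sum(axis=0).max())             # checks touching each bit
    thr = threshold if threshold is not None else p // 2
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x                         # all checks satisfied
        unsat = H.T.dot(syndrome)            # unsatisfied checks per bit
        flip = unsat > thr
        if not flip.any():
            break                            # stuck: no bit exceeds threshold
        x = (x + flip) % 2
    return x

# Toy example: single error in bit 0 of the all-zero codeword.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
r = np.zeros(6, dtype=int)
r[0] = 1
print(bit_flip_decode(H, r))
```

As in the text's example, the erroneous bit is the only one whose checks are mostly unsatisfied, so it alone is flipped.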

The second decoding scheme, called probabilistic decoding, is an iterative decoding of the a posteriori probabilities via the parity-check set tree. The most significant feature of this decoding scheme is that the computation per digit per iteration is independent of the block length. Furthermore, it can be shown that the average number of iterations required to decode is bounded by a quantity proportional to the log of the log of the block length. A weak bound on the probability of error was derived in Gallager's paper [162].

In Figure 9.1, the channel adds noise to the vector t, with the resulting received signal r being given by r = t + n, where n is the noise vector. The decoder's task is to infer s given the received message r and the assumed noise properties of the channel. The optimal decoder returns the message s that maximizes the a posteriori probability P(s | r).

It is often not practical to implement the optimal decoder. Indeed, the general decoding problem is known to be NP-complete [169]. For generalized Gallager constructions, the decoding procedure using bipartite graphs is introduced as follows.

9.4.2 Syndrome Decoding Based on Tanner's Graph

For syndrome decoding, the most probable vector x (according to the channel model) has to be found which explains the observed syndrome vector, i.e., Hx = z. The vector x is then our estimate of the error vector; the components of x are the noise symbols. The exact decoding problem is known to be NP-complete even when the column weight is fixed to be 3; therefore, an approximate algorithm must be used. Here we introduce the details of the decoding procedure described in [174]. The iterative probabilistic decoding algorithm is known as a sum/product [166] or belief propagation [177] algorithm. At each step, we estimate the posterior probability of the value of each noise symbol, given the received signal and the channel properties. The process is best viewed as a message passing algorithm on the bipartite graph defined by H, in which we have two sets of nodes: the nodes representing the noise symbols, and the nodes representing


the check symbols (see Figure 9.5). A noise node x_j and a check node z_i are connected if the corresponding matrix entry H_ij is non-zero. The directed edges show the causal relationships: the state of a check node is determined by the states of the noise nodes to which it is connected. We refer to the neighbors of a noise node as its children and to the neighbors of a check node as its parents.

At each step of the decoding algorithm, each noise node j sends a message Q_ij^a to each child i, which is supposed to approximate the node's belief that it is in state a (a value of 0 or 1 in the binary case), given the messages received from all its other children. Also, each check i sends a message R_ij^a to each parent j, approximating the probability of check i being satisfied if the parent is assumed to be in state a, taking into account messages received from all its other parents. After each step we examine the messages and produce a tentative decoding. The decoding algorithm consists of iteratively updating these messages until the tentative decoding satisfies the observed syndrome vector (declare a success) or a preset maximum number of iterations is reached (declare a failure). The maximum number of iterations may be set to perhaps ten times the typical number, improving the success rate while imposing little overhead on the average decoding time. Although it is in principle possible for the decoder to converge to the wrong noise vector, this is not observed in practice. That is, (empirically) all decoding errors are detected.

If the underlying graph has a tree structure, the algorithm is known to converge to the true posterior distribution after a number of iterations equal to the diameter of the tree. The problem is that there are many cycles in the graph, and occasionally the algorithm fails to converge at all. One should take care to avoid short cycles in the graph.

9.4.2.1 Initialization. The algorithm is initialized by setting each message Q_ij^a to the a priori probability f_j^a that the jth noise symbol is a. In the case of a BSC, f_j^1 would be equal to the crossover probability.


For the binary-input AWGN channel, the transmitted bits t ∈ {0, 1} map to the transmitted signals s = a(1 − 2t), and the output is y = s + v, where v is a zero-mean normally distributed random variable with variance σ². We set σ = 1 and let the signal amplitude a control the signal to noise ratio. We declare the received bit b = 0 if y > 0 and b = 1 if y < 0, so that the noise symbol is x = (b + t) mod 2. Define the likelihood ratio

P(t = 1 | y) / P(t = 0 | y) = exp(−2ay/σ²).

If y > 0, the declared bit b = 0 is in error exactly when t = 1; if y < 0, the declared bit b = 1 is in error exactly when t = 0. In either case the likelihood of this bit being in error is

f_j^1 = 1 / (1 + exp(2a|y|/σ²)),

and f_j^0 = 1 − f_j^1.
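The error prior for the AWGN channel described above can be computed as in this sketch (function name invented; bit 0/1 assumed sent as +a/−a):

```python
import math

def awgn_error_prior(y, a=1.0, sigma=1.0):
    """Probability that the hard decision on received value y is in error,
    i.e. the prior f^1 that the corresponding noise symbol is 1.
    a: signal amplitude, sigma: noise standard deviation."""
    return 1.0 / (1.0 + math.exp(2.0 * a * abs(y) / sigma ** 2))

# A received value near 0 is unreliable; a large |y| is reliable.
print(awgn_error_prior(0.05))   # close to 0.5
print(awgn_error_prior(2.5))    # close to 0
```

This is how soft channel outputs enter the decoder: reliable bits get priors near 0 or 1, unreliable ones near 1/2.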

9.4.2.2 Updating R. The message R_ij^a that check i sends to parent j should be the probability of check i being satisfied if the parent is in state a. In the sense it is used here, check i is satisfied if it agrees with the corresponding syndrome symbol z_i; in syndrome decoding, z_i is not necessarily zero. The laws of probability tell us to sum over all configurations x for which the check is satisfied and the parent is in state a, adding up the probability of each configuration (the product of the associated Q messages). For check node i we update the outgoing message to node j for each value a as follows:

R_ij^a = Σ_{x: x_j = a} P(z_i | x) Π_{k ∈ N(i)\j} Q_ik^{x_k},

where N(i) denotes the set of indices of the parents of node i and N(i)\j denotes the indices of all parents except node j. The probability P(z_i | x) of the check being satisfied is either 0 or 1 for any given configuration x.

R can be calculated efficiently by treating the partial sums of a parity check as the states in a Markov chain, with transition probabilities given by the appropriate Q values. The forward-backward algorithm is used to calculate the forward and backward probabilities according to the probabilities given by the Q messages; the calculation of R_ij^a from these forward and backward quantities is then straightforward.
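A sketch of this check-to-noise update, using the equivalent "probability difference" form of the forward-backward computation (the product of the other parents' (Q⁰ − Q¹) values gives P(even) − P(odd) of their summed parity); the function name is illustrative:

```python
def check_update(q_msgs, z_i):
    """R messages from check i to each of its parents: the probability that
    the check agrees with the syndrome bit z_i, given the Q messages
    (P(0), P(1)) of the *other* parents."""
    d = [q0 - q1 for (q0, q1) in q_msgs]       # per-parent P(0) - P(1)
    n = len(d)
    fwd = [1.0] * (n + 1)                      # fwd[j] = product of d[:j]
    bwd = [1.0] * (n + 1)                      # bwd[j] = product of d[j:]
    for j in range(n):
        fwd[j + 1] = fwd[j] * d[j]
        bwd[n - 1 - j] = bwd[n - j] * d[n - 1 - j]
    out = []
    for j in range(n):
        delta = fwd[j] * bwd[j + 1]            # product over all parents but j
        p_even = 0.5 * (1 + delta)             # other parents sum to 0 mod 2
        p_odd = 0.5 * (1 - delta)
        # check satisfied means total parity equals z_i
        r0 = p_even if z_i == 0 else p_odd     # R for a = 0
        r1 = p_odd if z_i == 0 else p_even     # R for a = 1
        out.append((r0, r1))
    return out

# Example: three other-parent Q messages, syndrome bit z_i = 0.
print(check_update([(0.9, 0.1), (0.8, 0.2), (0.6, 0.4)], 0))
```

The forward and backward partial products play the role of the forward and backward probabilities of the Markov-chain formulation.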

9.4.2.3 Updating Q. The message Q_ij^a that noise node j sends to check i should be the belief the parent has that it is in state a, based on the information from all its other children. Applying Bayes' theorem and treating the symbols of z as independent, we take the product of all the other children's votes for state a, weighted by the prior. For noise node j we update the outgoing message to node i for each value a as follows:

Q_ij^a = α_ij f_j^a Π_{k ∈ M(j)\i} R_kj^a,

where M(j) denotes the set of indices of the children of node j and f_j^a is the prior probability that x_j is in state a. The normalizing constant α_ij ensures that Q_ij^0 + Q_ij^1 = 1. The update may be implemented using a forward-backward algorithm.
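The noise-to-check update is a straightforward product; a minimal sketch for the binary case (names invented):

```python
def noise_update(prior, r_msgs):
    """Q message from noise node j to check i: prior (f0, f1) times the
    product of R messages from all *other* checks, normalized to sum to 1."""
    q0, q1 = prior
    for (r0, r1) in r_msgs:
        q0 *= r0
        q1 *= r1
    norm = q0 + q1
    return (q0 / norm, q1 / norm)

# Example: a fairly reliable bit whose other two checks both lean to state 0.
print(noise_update((0.9, 0.1), [(0.7, 0.3), (0.6, 0.4)]))
```

The normalization here is the constant α_ij of the text.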

9.4.2.4 Tentative Decoding. After updating the Q and R messages we calculate, for each index j and each possible state a, the quantity

Q_j^a = α_j f_j^a Π_{k ∈ M(j)} R_kj^a,

and set each x̂_j to the state a that maximizes Q_j^a. The vector x̂ is the tentative error vector. If this satisfies the syndrome equation Hx̂ = z, then we terminate the decoding, declaring a success. Otherwise we iterate, updating R and Q again, until either decoding is successful or we declare a failure after a fixed number of iterations (for example, 500). Figure 9.8 shows the evolution of the bit error probability as a function of the iteration number [178]. The most significant feature of this decoding scheme is that the computation per digit per iteration is independent of the block length. Furthermore, it can be shown that the average number of iterations required to decode is bounded by a quantity proportional to the log of the log of the block length.
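Putting the pieces together, a compact (and deliberately unoptimized) sum-product syndrome decoder for a toy binary code might look like this; the matrix and priors are invented for the example:

```python
import numpy as np

def sumproduct_syndrome_decode(H, z, f1, max_iter=50):
    """Sketch of a binary sum-product syndrome decoder.
    H: parity-check matrix, z: observed syndrome, f1[j]: prior P(x_j = 1).
    Returns (x_hat, success)."""
    m, n = H.shape
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
    Q = {(i, j): (1 - f1[j], f1[j]) for (i, j) in edges}   # noise -> check
    R = {}                                                  # check -> noise
    for _ in range(max_iter):
        # Check update: product of (Q0 - Q1) over the other parents.
        for i in range(m):
            parents = [j for j in range(n) if H[i, j]]
            for j in parents:
                delta = 1.0
                for k in parents:
                    if k != j:
                        delta *= Q[(i, k)][0] - Q[(i, k)][1]
                p_even, p_odd = 0.5 * (1 + delta), 0.5 * (1 - delta)
                R[(i, j)] = (p_even, p_odd) if z[i] == 0 else (p_odd, p_even)
        # Noise update and tentative decoding.
        x_hat = np.zeros(n, dtype=int)
        for j in range(n):
            checks = [i for i in range(m) if H[i, j]]
            post0, post1 = 1 - f1[j], f1[j]
            for i in checks:
                post0 *= R[(i, j)][0]
                post1 *= R[(i, j)][1]
            x_hat[j] = int(post1 > post0)
            for i in checks:
                q0, q1 = 1 - f1[j], f1[j]
                for k in checks:
                    if k != i:
                        q0 *= R[(k, j)][0]
                        q1 *= R[(k, j)][1]
                s = q0 + q1
                Q[(i, j)] = (q0 / s, q1 / s)
        if ((H.dot(x_hat) % 2) == z).all():
            return x_hat, True                              # success
    return x_hat, False                                     # failure

# Toy example: a single likely error in position 0 of a (6, 3) code.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
e = np.array([1, 0, 0, 0, 0, 0])
z = H.dot(e) % 2
x_hat, ok = sumproduct_syndrome_decode(H, z, f1=[0.1] * 6)
print(x_hat, ok)
```

For this small example the minimum-weight explanation of the syndrome is found in the first iteration.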

    9.5. New Developments

Gallager's codes attracted little attention prior to 1995, but there has been a recent surge of interest since their performance was recognized. Davey and


MacKay [13] introduced non-binary versions of Gallager's codes. In the non-binary versions, messages are encoded using symbols from a finite field with more than two elements; each parity-check becomes more complex, but decoding remains tractable. Although the non-binary codes have an alternative representation as binary codes, the non-binary decoding algorithm is not equivalent to the binary algorithm. These changes can help if the parity-check matrix is constructed carefully.

Luby, Mitzenmacher, Shokrollahi and Spielman [171] introduced parity-check matrices with highly non-uniform column-weight distributions. In 1998, Davey and MacKay [173] presented irregular non-binary codes that outperformed the best known turbo codes. Gallager had considered codes whose parity-check matrix has fixed row and column weights (a construction referred to as regular); Luby et al. relaxed this constraint and produced irregular LDPC codes that have a variety of row and column weights. High weight columns help the decoder to identify some errors quickly, making the remaining errors easier to correct.

9.5.1 MacKay's Constructions

The presence of short cycles in the bipartite graph of an LDPC code results in a loss of performance in the belief propagation decoder. The next figure shows a fragment of a bipartite graph with short cycles of length 4 indicated by the bold lines.


If the states of the three noise symbols involved in the short cycles are all changed by the same arbitrary non-zero amount, then only one of the five checks is affected. The decoder then has to deal with a wrong majority verdict of 4 to 1.

By ensuring that any two columns of the parity-check matrix have no more than one overlapping non-zero element, one can avoid cycles of length 4 [14].
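This column-overlap condition is easy to test: a length-4 cycle exists exactly when some off-diagonal entry of HᵀH exceeds 1. A sketch (matrices invented for illustration):

```python
import numpy as np

def has_length_4_cycle(H):
    """A length-4 cycle exists iff two columns share more than one
    non-zero position, i.e. some off-diagonal entry of H^T H exceeds 1."""
    overlap = H.T.dot(H)
    np.fill_diagonal(overlap, 0)   # ignore each column's self-overlap
    return bool((overlap > 1).any())

good = np.array([[1, 1, 0],
                 [1, 0, 1],
                 [0, 1, 1]])
bad = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 0, 1]])
print(has_length_4_cycle(good), has_length_4_cycle(bad))
```

In `bad`, the first two columns share two non-zero rows, which is precisely a cycle of length 4 in the bipartite graph.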

Having no cycles of length 4 does not necessarily guarantee that the minimum distance is greater than 4 [174]. The example shown in the next figure is a case where the minimum cycle length is 6 while the minimum distance is 4. Situations like this, however, are rare, since for almost all randomly generated codes the minimum distance increases linearly with the block length. MacKay

    [14] presented several methods for constructing codes with no cycles of length

    4. These methods are listed below:

Construction 1A In this technique, the column weight is fixed, e.g., t = 3, and columns of weight t are added to the matrix at random, keeping the row weight as uniform as possible while avoiding an overlap of more than one between any two columns. This is shown in Figure 9.9 (a).

Construction 2A This construction is similar to 1A. The only difference is that some (up to m/2) of the columns can have weight 2. These columns are constructed by placing one m/2 × m/2 identity matrix on top of another, as shown in Figure 9.9 (d).

Constructions 1B, 2B In these construction methods, some of the columns of the 1A or 2A matrices are deleted in such a way that the bipartite graph of the resulting matrix does not have cycles of length less than some bound l.

With binary matrices, adding more than m/2 weight-2 columns results in an increased probability of low weight codewords. With non-binary codes, however, it is possible to add more weight-2 columns [174]. The resulting matrix is called an Ultra-light matrix. Two construction techniques for Ultra-light matrices, called UL-A and UL-B, are given in [174]. These techniques can be considered a recursive extension of the 2A construction.


UL-A After constructing a matrix with weight-2 identity matrices, place two smaller identity matrices next to one of the previous identity matrices. This process is repeated until m columns of weight 2 have been constructed. This scheme is shown in Figure 9.9 (e).

UL-B This construction is similar to UL-A, except that the smaller identity blocks are placed so that each row has weight at most 2 before the higher weight columns are filled. This scheme is shown in Figure 9.9 (f).

    9.5.2 Irregular Matrices

In the original Gallager codes, all the columns (and likewise all the rows) of the parity check matrix have the same weight. These are called regular LDPC codes. One can also construct codes whose parity check matrix has columns (and rows) with different weights. A method for the construction of the parity check matrix for irregular LDPC codes was proposed by Luby et al. [171]. Here, we present a brief summary of this construction scheme as given in [174]. Readers interested in more detail may refer to [172].


Construction IR Let λ_i and ρ_i denote the fractions of columns and rows with weight i, and let n and m denote the block length and the number of parity checks, respectively. Then the total number of non-zero elements in the parity check matrix is

T = n Σ_i iλ_i = m Σ_i iρ_i.

The second equality expresses the fact that the number of edges incident to the left nodes is equal to the number of edges incident to the right nodes. Consider a bipartite graph with T left nodes and T right nodes. For each column of weight j in our matrix, label j left (message) nodes with that column's index. Similarly, label i right (check) nodes with the index of each row of weight i. Then connect the kth left node to the kth right node. Finally, the parity check matrix is obtained by permuting the labels of the right nodes while avoiding duplicate edges, i.e., making sure that the right labels belonging to a given row of weight i match left nodes of different columns.
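A simplified socket-permutation sketch of Construction IR (duplicate edges handled by naive retrying, which suffices for small examples; all names are invented):

```python
import numpy as np

def irregular_parity_check(col_weights, row_weights, seed=0):
    """Replicate each column index j col_weights[j] times (left sockets)
    and each row index i row_weights[i] times (right sockets), permute the
    right sockets, and connect matching positions. Permutations producing
    duplicate edges are simply retried."""
    rng = np.random.default_rng(seed)
    assert sum(col_weights) == sum(row_weights)   # edge counts must match
    left = [j for j, w in enumerate(col_weights) for _ in range(w)]
    right = [i for i, w in enumerate(row_weights) for _ in range(w)]
    for _ in range(1000):
        perm = rng.permutation(len(right))
        edges = {(right[perm[k]], left[k]) for k in range(len(left))}
        if len(edges) == len(left):               # no duplicate edges
            H = np.zeros((len(row_weights), len(col_weights)), dtype=int)
            for i, j in edges:
                H[i, j] = 1
            return H
    raise RuntimeError("could not avoid duplicate edges")

# Columns of weight 3 and 2, rows of weight 4 (12 edges on each side).
H = irregular_parity_check([3, 3, 2, 2, 2], [4, 4, 4])
print(H.sum(axis=0), H.sum(axis=1))
```

The resulting matrix realizes the prescribed column- and row-weight profile exactly, which is the point of the construction.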

    9.6. Performance Analysis of LDPC Codes

    The performance of error-correcting codes was bounded by Shannon in 1948

    [1]. However, until the arrival of turbo codes in 1993 [6], practical coding

    schemes for most non-trivial channels fell far short of the Shannon limit. Turbo

    codes marked the beginning of near Shannon limit performance for the additive

    white Gaussian noise channel. Two years later MacKay and Neal rediscovered

Gallager's long-neglected low-density parity-check codes and showed that, despite their simple description, they too have excellent performance.

    9.6.1 Comparison of Empirical Results

Figure 9.10 [174] presents the performance of different LDPC codes and turbo codes, showing that the LDPC codes can match and sometimes exceed the performance of turbo codes. All codes shown have rate 1/4. The aim is to achieve the lowest

    bit error rate for a given signal to noise ratio. That is, the best codes lie towards

    the bottom-left of the figure.

On the right is a good regular binary LDPC code, as reported by MacKay [14]. Such codes were introduced by Gallager in 1962, but their quality was not recognized until computing power allowed sufficiently long block length versions to be implemented. The curve labeled Galileo shows the performance of a concatenated code developed at NASA's Jet Propulsion Laboratory based on a constraint length 15, rate 1/4 convolutional code. This code was developed for deep space communication and requires an extremely computationally intensive decoder. Until it was eclipsed by turbo codes, it represented the state of the art in error correction.

Luby et al. first investigated irregular constructions of LDPC codes and reported the results labeled Luby. Their methods for choosing matrix parameters are not directly applicable to non-binary codes, so alternative construction methods were developed in [174]. The binary irregular code labeled Irreg GF(2) was constructed using these alternative methods for finding construction parameters. Although its block length is just 1/4 the length of the Luby code, its performance is considerably better.

Regular LDPC codes defined over non-binary fields can outperform the binary irregular codes, as shown by the code labeled Reg GF(16), a regular code defined over the finite field with 16 elements.

The code Irreg GF(8) was constructed by combining both modifications. It beats the best known turbo codes at all but the lowest bit error rates, making it the best error correcting code of rate 1/4 for the Gaussian channel currently known. Not only is its error-correction performance better than that of the turbo code, its block length is also less than that of the turbo code. Another key difference between LDPC codes and turbo codes is that, empirically, all errors made by the LDPC decoding algorithm are detected errors. That is, the decoder reports the fact that a block has been incorrectly decoded.


Recent results by Richardson, Shokrollahi and Urbanke [178] have shown that irregular LDPC codes of extremely long block length can perform within 0.1 dB of the Shannon limit (see Figure 9.11). Empirical results were presented for rate 1/2 codes.

    9.6.2 Analysis of LDPC Codes Performance

The analysis of a low-density code of long block length is difficult because of the immense number of codewords involved. It is simpler to analyze a whole ensemble of such codes, because the statistics of an ensemble permit one to average over quantities that are not tractable for individual codes. From the ensemble behavior, one can make statistical statements about the properties of the member codes. Furthermore, one can, with high probability, find a code with these properties by random selection from the ensemble.

For a wide variety of channels, the Noisy Channel Coding Theorem of Information Theory proves that if properly coded information is transmitted at a rate below the channel capacity, then the probability of decoding error can be made arbitrarily small by increasing the code length. The theorem does not, however, relate the code length to the computation time or the equipment cost necessary to achieve this low error probability.

The minimum distance of a code is the number of positions in which the two nearest codewords differ. Over the ensemble, the minimum distance of a member code is a random variable, and it can be shown that the distribution function of this random variable can be upper bounded by a function such as the one sketched in Figure 9.12 [162]. As the block length increases, for fixed p and q > p, this function approaches a unit step at a fixed fraction δ of the block length. Thus, for large n, practically all the codes in the ensemble have a minimum distance of at least δn. In Table 9.1 [162], this ratio of typical minimum distance to block length is compared to that for a parity-check code chosen at random, i.e., with a matrix filled in with equiprobable independent binary digits. It should be noted that for all the specific nonrandom procedures known for constructing codes, the ratio of the minimum distance to block length appears to approach 0 with increasing block length.

Although this result for the BSC shows how closely low-density codes approach the optimum, the codes are not designed primarily for use on this channel. The BSC is an approximation to physical channels only when there is a receiver that makes decisions on the incoming signal on a bit-by-bit basis. Since the decoding procedures described earlier actually use the channel a posteriori probabilities, and since a bit-by-bit decision throws away available information, we are actually interested in the probability of decoding error of a binary-input, continuous-output channel. If the noise affects the input symbols symmetrically, then this probability can again be bounded by an exponentially decreasing function of the block length, but the exponent is a rather complicated function of the channel and code. It is expected that the same type of result holds for a wide class of channels with memory, but no analytical results have yet been derived.

    9.7. Summary

In this chapter, the original LDPC code and its variants have been introduced, along with the decoding procedure. The first description of an iterative decoding algorithm was by Gallager in 1962, for his low-density parity-check codes, which have a simple description and a largely random structure. MacKay [14] proved that sequences of low-density parity-check codes exist that, when decoded with an optimal decoder, approach arbitrarily close to the Shannon limit. The iterative decoding algorithm makes decoding practical and is capable of near Shannon limit performance.

    Low-density parity-check codes and turbo codes have several features in

    common:

    Both have a strong pseudo-random element in their construction

    Both can be decoded using an iterative belief propagation algorithm

Both have been shown to achieve near Shannon limit error-correction performance

Low-density parity-check codes have also been shown to be useful for communicating over channels which make insertions and deletions as well as additive (substitution) errors. Error-correction for such channels has not been widely studied, but is of importance whenever synchronization of sender and receiver is imperfect. Davey [174] introduced concatenated codes using novel non-linear inner codes that he called watermark codes, and LDPC codes over non-binary fields as outer codes. The inner code allows resynchronization using a probabilistic decoder, providing soft outputs for the outer LDPC decoder. Error-correction performance using watermark codes is several orders of magnitude better than any comparable results in the literature.
