Information Theory, Coding and Cryptography Unit-5 by Arun Pratap Singh


UNIT V

INFORMATION THEORY, CODING & CRYPTOGRAPHY (MCSE 202)

PREPARED BY ARUN PRATAP SINGH, MTECH (CSE) 2nd SEMESTER, OIST BHOPAL (RGPV), 5/26/14

ADVANCED CODING TECHNIQUES : REED SOLOMON CODES


    SPACE TIME CODES :

A space-time code (STC) is a method employed to improve the reliability of data transmission in wireless communication systems using multiple transmit antennas. STCs rely on transmitting multiple, redundant copies of a data stream to the receiver in the hope that at least some of them survive the physical path between transmission and reception in a good enough state to allow reliable decoding.

Space-time codes may be split into two main types:


Space-time trellis codes (STTCs) distribute a trellis code over multiple antennas and multiple time-slots, and provide both coding gain and diversity gain.

Space-time block codes (STBCs) act on a block of data at once (similarly to block codes) and provide diversity gain, but do not provide coding gain.

STCs may be further subdivided according to whether the receiver knows the channel impairments. In coherent STC, the receiver knows the channel impairments through training or some other form of estimation. These codes have been studied more widely, and division algebras over number fields have now become the standard tool for constructing such codes. In noncoherent STC, the receiver does not know the channel impairments but knows the statistics of the channel. In differential space-time codes, neither the channel nor the statistics of the channel are available.
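As a concrete illustration, the classic Alamouti scheme is the simplest STBC: two symbols are spread over two transmit antennas and two time slots, and a linear combiner at the receiver recovers them with full diversity. Below is a minimal Python sketch, assuming BPSK symbols, one receive antenna, and a flat-fading channel that stays constant over both slots; the function names are ours, not from any library.

    import numpy as np

    def alamouti_encode(s1, s2):
        # Rows are time slots, columns are transmit antennas:
        # slot 1: antenna 1 sends s1, antenna 2 sends s2
        # slot 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1)
        return np.array([[s1, s2],
                         [-np.conj(s2), np.conj(s1)]])

    def alamouti_combine(r1, r2, h1, h2):
        # Linear combining at the receiver, assuming the channel gains
        # (h1, h2) are known and constant over the two slots.
        s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
        s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
        return s1_hat, s2_hat   # each scaled by |h1|^2 + |h2|^2

    rng = np.random.default_rng(0)
    h = rng.normal(size=2) + 1j * rng.normal(size=2)   # flat-fading gains
    s1, s2 = 1.0, -1.0                                 # two BPSK symbols
    r = alamouti_encode(s1, s2) @ h                    # noiseless receive, slots 1-2
    est = alamouti_combine(r[0], r[1], h[0], h[1])
    print([int(np.sign(e.real)) for e in est])         # -> [1, -1]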


    CONCATENATED CODES :

Concatenated codes are error-correcting codes that are constructed from two or more simpler codes in order to achieve good performance with reasonable complexity. Originally introduced by Forney in 1965 to address a theoretical issue, they became widely used in space communications in the 1970s. Turbo codes and other modern capacity-approaching codes may be regarded as elaborations of this approach.


The inner code is a short block code like that envisioned by Shannon, with rate r close to C, block length n, and therefore 2^(nr) codewords. The inner decoder decodes optimally, so its complexity increases exponentially with n; for large enough n it achieves a moderately low decoding error probability.

The outer code is an algebraic Reed-Solomon (RS) code (Reed and Solomon, 1960) of length 2^(nr) over the finite field with 2^(nr) elements, each element corresponding to an inner codeword. (The overall block length of the concatenated code is therefore N = n * 2^(nr), which is exponential in n, so the complexity of the inner decoder is only linear in N.) The outer decoder uses an algebraic error-correction algorithm whose complexity is only polynomial in the RS code length 2^(nr); it can drive the ultimate probability of decoding error as low as desired.
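A toy example makes the two-level structure concrete. The sketch below is only an illustration of serial concatenation, not Forney's exact construction: a (7,4) Hamming code plays the inner code, and a single overall parity symbol over 4-bit symbols (the simplest MDS code) stands in for the Reed-Solomon outer code; all names here are made up for the example.

    import numpy as np

    # Systematic generator matrix of a (7,4) Hamming code: the inner code.
    G_INNER = np.array([[1, 0, 0, 0, 1, 1, 0],
                        [0, 1, 0, 0, 1, 0, 1],
                        [0, 0, 1, 0, 0, 1, 1],
                        [0, 0, 0, 1, 1, 1, 1]])

    def inner_encode(nibble):
        # Encode one 4-bit symbol into a 7-bit inner codeword (mod 2).
        return np.array(nibble) @ G_INNER % 2

    def outer_encode(symbols):
        # Stand-in for the RS outer code: append one overall parity symbol
        # (XOR over 4-bit symbols); a real RS code appends many check symbols.
        return list(symbols) + [np.bitwise_xor.reduce(symbols)]

    def concatenated_encode(symbols):
        # Outer code over symbols first, then inner-encode each symbol to bits.
        bits = [inner_encode([(s >> i) & 1 for i in range(4)])
                for s in outer_encode(symbols)]
        return np.concatenate(bits)

    data = [0b1011, 0b0010, 0b1110]        # three 4-bit message symbols
    print(len(concatenated_encode(data)))  # (3 + 1 symbols) * 7 bits = 28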


    Figure 3: Simple repeat-accumulate code with iterative decoding.

All capacity-approaching codes are now regarded as "codes on graphs," in which a (possibly large) number of simple codes are interconnected according to some graph topology. Any such code may be regarded as a (possibly elaborate) concatenated code.


    TURBO CODING AND LDPC CODES :

In information theory, turbo codes (originally in French: Turbocodes) are a class of high-performance forward error correction (FEC) codes developed in 1993, which were the first practical codes to closely approach the channel capacity, a theoretical maximum for the code rate at which reliable communication is still possible given a specific noise level. Turbo codes are finding use in 3G mobile communications and (deep space) satellite communications, as well as other applications where designers seek to achieve reliable information transfer over bandwidth- or latency-constrained communication links in the presence of data-corrupting noise. Turbo codes are nowadays competing with LDPC codes, which provide similar performance.

The name "turbo code" arose from the feedback loop used during normal turbo code decoding, which was analogized to the exhaust feedback used for engine turbocharging. Hagenauer has argued the term turbo code is a misnomer, since there is no feedback involved in the encoding process.

An example encoder

There are many different instances of turbo codes, using different component encoders, input/output ratios, interleavers, and puncturing patterns. This example encoder implementation describes a classic turbo encoder, and demonstrates the general design of parallel turbo codes.

This encoder implementation sends three sub-blocks of bits. The first sub-block is the m-bit block of payload data. The second sub-block is n/2 parity bits for the payload data, computed using a recursive systematic convolutional code (RSC code). The third sub-block is n/2 parity bits for a known permutation of the payload data, again computed using an RSC convolutional code. Thus, two redundant but different sub-blocks of parity bits are sent with the payload. The complete block has m + n bits of data with a code rate of m/(m + n). The permutation of the payload data is carried out by a device called an interleaver.

Hardware-wise, this turbo-code encoder consists of two identical RSC coders, C1 and C2, as depicted in the figure, which are connected to each other using a concatenation scheme, called parallel concatenation:


In the figure, M is a memory register. The delay line and interleaver force input bits d_k to appear in different sequences. At the first iteration, the input sequence d_k appears at both outputs of the encoder, x_k and y_1k or y_2k, due to the encoder's systematic nature. If the encoders C1 and C2 are used respectively in n1 and n2 iterations, their rates are respectively equal to

    R1 = (n1 + n2) / (2*n1 + n2)    and    R2 = (n1 + n2) / (n1 + 2*n2).
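The parallel concatenation described above is easy to mimic in code. The sketch below is a minimal Python illustration, not a production encoder: rsc_encode is our own toy RSC using the common (7, 5) octal generator pair, and puncturing is omitted, so the overall rate is 1/3.

    import numpy as np

    def rsc_encode(bits):
        # Rate-1/2 recursive systematic convolutional encoder:
        # feedback polynomial 1 + D + D^2 (7 octal),
        # feedforward polynomial 1 + D^2 (5 octal).
        state, parity = 0, []
        for b in bits:
            s_new, s_old = state & 1, (state >> 1) & 1   # u[k-1], u[k-2]
            fb = b ^ s_new ^ s_old        # feedback into the register
            parity.append(fb ^ s_old)     # output taps 1 and D^2
            state = ((state << 1) | fb) & 0b11
        return np.array(parity)

    def turbo_encode(payload, interleaver):
        # Parallel concatenation: systematic bits plus two parity streams,
        # the second computed on an interleaved copy of the payload.
        return payload, rsc_encode(payload), rsc_encode(payload[interleaver])

    rng = np.random.default_rng(1)
    m = 8
    payload = rng.integers(0, 2, size=m)
    interleaver = rng.permutation(m)      # the "known permutation"
    x, y1, y2 = turbo_encode(payload, interleaver)
    print(x, y1, y2, sep="\n")            # rate m/(m + n) = 1/3, since n = 2m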

The decoder

The decoder is built in a similar way to the above encoder. Two elementary decoders are interconnected to each other, but in a serial way, not in parallel. The decoder DEC1 operates on lower speed (i.e., R1); thus, it is intended for the encoder C1, and DEC2 is for C2 correspondingly. DEC1 yields a soft decision, which causes L1 delay. The same delay is caused by the delay line in the encoder. DEC2's operation causes L2 delay.


An interleaver installed between the two decoders is used here to scatter error bursts coming from the DEC1 output. The DI block is a demultiplexing and insertion module. It works as a switch, redirecting input bits to DEC1 at one moment and to DEC2 at another. In the OFF state, it feeds both y_1k and y_2k inputs with padding bits (zeros).

Consider a memoryless AWGN channel, and assume that at the k-th iteration, the decoder receives a pair of random variables:

    x_k = (2*d_k - 1) + a_k
    y_k = (2*Y_k - 1) + b_k

where a_k and b_k are independent noise components having the same variance sigma^2, and Y_k is the k-th bit from the encoder parity output.

Redundant information is demultiplexed and sent through DI to DEC1 (when y_k = y_1k) and to DEC2 (when y_k = y_2k).

DEC1 yields a soft decision, i.e. the log-likelihood ratio

    Lambda(d_k) = log( p(d_k = 1) / p(d_k = 0) ),

and delivers it to DEC2. Lambda(d_k) is called the logarithm of the likelihood ratio (LLR). p(d_k = i), i in {0, 1}, is the a posteriori probability (APP) of the data bit d_k, which shows the probability of interpreting a received bit d_k as i. Taking the LLR into account, DEC2 yields a hard decision, i.e., a decoded bit.

It is known that the Viterbi algorithm is unable to calculate the APP; thus it cannot be used in DEC1. Instead, a modified BCJR algorithm is used. For DEC2, the Viterbi algorithm is an appropriate one.
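For intuition, with the BPSK mapping above the per-bit channel LLR on an AWGN channel has a simple closed form, Lambda(d_k) = 2*x_k / sigma^2. The snippet below is a sketch of just this front-end computation, not of the full BCJR recursion:

    import numpy as np

    def channel_llr(x, sigma):
        # LLR of a BPSK bit (0 -> -1, 1 -> +1) observed in AWGN:
        # log p(x | d=1) - log p(x | d=0) = 2*x / sigma^2.
        return 2.0 * x / sigma ** 2

    rng = np.random.default_rng(2)
    sigma = 0.8
    d = rng.integers(0, 2, size=5)                    # data bits
    x = (2 * d - 1) + sigma * rng.normal(size=5)      # received samples
    llr = channel_llr(x, sigma)
    print(d, np.round(llr, 2), (llr > 0).astype(int), sep="\n")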


However, the depicted structure is not an optimal one, because DEC1 uses only a proper fraction of the available redundant information. In order to improve the structure, a feedback loop is used (see the dotted line on the figure).

Soft decision approach -

The decoder front-end produces an integer for each bit in the data stream. This integer is a measure of how likely it is that the bit is a 0 or 1, and is also called a soft bit. The integer could be drawn from the range [-127, 127], where:

-127 means "certainly 0"
-100 means "very likely 0"
0 means "it could be either 0 or 1"
100 means "very likely 1"
127 means "certainly 1"

This introduces a probabilistic aspect to the data-stream from the front end, but it conveys more information about each bit than just 0 or 1.

For example, for each bit, the front end of a traditional wireless receiver has to decide if an internal analog voltage is above or below a given threshold voltage level. For a turbo-code decoder, the front end would provide an integer measure of how far the internal voltage is from the given threshold.

To decode the m + n-bit block of data, the decoder front-end creates a block of likelihood measures, with one likelihood measure for each bit in the data stream. There are two parallel decoders, one for each of the n/2-bit parity sub-blocks. Both decoders use the sub-block of m likelihoods for the payload data. The decoder working on the second parity sub-block knows the permutation that the coder used for this sub-block.
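A front end of this kind can be approximated by scaling and clipping the channel LLR into a signed-byte range; the sketch below is one possible mapping, with an arbitrary scale factor of our choosing:

    import numpy as np

    def soft_bit(llr, scale=32.0):
        # Map a real-valued LLR onto the integer soft-bit range [-127, 127].
        return int(np.clip(np.round(scale * llr), -127, 127))

    print([soft_bit(l) for l in (-9.9, -3.1, 0.0, 3.1, 9.9)])
    # -> [-127, -99, 0, 99, 127]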

LDPC CODES :

In information theory, a low-density parity-check (LDPC) code is a linear error correcting code, a method of transmitting a message over a noisy transmission channel, and is constructed using a sparse bipartite graph. LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close (or even arbitrarily close on the BEC) to the theoretical maximum (the Shannon limit) for a symmetric memoryless channel. The noise threshold defines an upper bound for the channel noise, up to which the probability of lost information can be made as small as desired. Using iterative belief propagation techniques, LDPC codes can be decoded in time linear to their block length.


LDPC codes are finding increasing use in applications requiring reliable and highly efficient information transfer over bandwidth- or return-channel-constrained links in the presence of corrupting noise. Implementation of LDPC codes has lagged behind that of other codes, notably turbo codes. The fundamental patent for turbo codes expired on August 29, 2013.

Function -

LDPC codes are defined by a sparse parity-check matrix. This sparse matrix is often randomly generated, subject to the sparsity constraints; LDPC code construction is discussed later. These codes were first designed by Robert Gallager in 1962.

Below is a graph fragment of an example LDPC code using Forney's factor graph notation. In this graph, n variable nodes at the top of the graph are connected to (n - k) constraint nodes at the bottom of the graph. This is a popular way of graphically representing an (n, k) LDPC code. The bits of a valid message, when placed on the T's at the top of the graph, satisfy the graphical constraints. Specifically, all lines connecting to a variable node (box with an '=' sign) have the same value, and all values connecting to a factor node (box with a '+' sign) must sum, modulo two, to zero (in other words, they must sum to an even number).

Ignoring any lines going out of the picture, there are eight possible six-bit strings corresponding to valid codewords: 000000, 011001, 110010, 101011, 111100, 100101, 001110, 010111. This LDPC code fragment represents a three-bit message encoded as six bits. Redundancy is used, here, to increase the chance of recovering from channel errors. This is a (6, 3) linear code, with n = 6 and k = 3.

Once again ignoring lines going out of the picture, the parity-check matrix representing this graph fragment is

    H = [ 1 1 1 1 0 0
          0 0 1 1 0 1
          1 0 0 1 1 0 ]


In this matrix, each row represents one of the three parity-check constraints, while each column represents one of the six bits in the received codeword.

In this example, the eight codewords can be obtained by putting the parity-check matrix H into the form [-P^T | I_(n-k)] through basic row operations (mod 2):

    H = [ 1 1 1 1 0 0
          0 1 1 0 1 0
          1 1 0 0 0 1 ]

From this, the generator matrix G can be obtained as [I_k | P] (noting that in the special case of this being a binary code, P = -P), or specifically:

    G = [ 1 0 0 1 0 1
          0 1 0 1 1 1
          0 0 1 1 1 0 ]

Finally, by multiplying all eight possible 3-bit strings by G, all eight valid codewords are obtained. For example, the codeword for the bit-string '101' is obtained by:

    (1 0 1) * G = (1 0 1 0 1 1)   (mod 2).
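The relations above can be checked mechanically. The short sketch below verifies G against H and enumerates all eight codewords (mod-2 arithmetic throughout):

    import numpy as np
    from itertools import product

    H = np.array([[1, 1, 1, 1, 0, 0],
                  [0, 0, 1, 1, 0, 1],
                  [1, 0, 0, 1, 1, 0]])
    G = np.array([[1, 0, 0, 1, 0, 1],
                  [0, 1, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1, 0]])

    # Every row of G must satisfy every parity check: G H^T = 0 (mod 2).
    assert not (G @ H.T % 2).any()

    # Encode every 3-bit message to list all eight valid codewords.
    for msg in product([0, 1], repeat=3):
        c = np.array(msg) @ G % 2
        print("".join(map(str, msg)), "->", "".join(map(str, c)))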

Example Encoder -

Figure 1 illustrates the functional components of most LDPC encoders.


LDPC Encoder

During the encoding of a frame, the input data bits (D) are repeated and distributed to a set of constituent encoders. The constituent encoders are typically accumulators, and each accumulator is used to generate a parity symbol. A single copy of the original data (S_0 ... S_(K-1)) is transmitted with the parity bits (P) to make up the code symbols. The S bits from each constituent encoder are discarded.

In some cases a parity bit is encoded by a second constituent code (serial concatenation), but more typically the constituent encoding for the LDPC is done in parallel.

In an example using the DVB-S2 rate 2/3 code, the encoded block size is 64800 symbols (N = 64800) with 43200 data bits (K = 43200) and 21600 parity bits (M = 21600). Each constituent code (check node) encodes 16 data bits, except for the first parity bit, which encodes 8 data bits. The first 4680 data bits are repeated 13 times (used in 13 parity codes), while the remaining data bits are used in 3 parity codes (irregular LDPC code).
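The repeat-and-distribute step followed by accumulators can be sketched generically; the connection pattern below is a random toy of our own, not the actual DVB-S2 interleaving:

    import numpy as np

    def accumulate_parity(data, connections):
        # Parity bit j is the running XOR (accumulator) of the data bits
        # wired to check node j, chained with the previous parity bit.
        parity, prev = [], 0
        for idx in connections:
            prev ^= int(np.bitwise_xor.reduce(data[idx]))
            parity.append(prev)
        return np.array(parity)

    rng = np.random.default_rng(3)
    K, M = 12, 4
    data = rng.integers(0, 2, size=K)
    # Toy "repeat and distribute": each check node sees 3 data bits.
    connections = [rng.choice(K, size=3, replace=False) for _ in range(M)]
    print(np.concatenate([data, accumulate_parity(data, connections)]))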


For comparison, classic turbo codes typically use two constituent codes configured in parallel, each of which encodes the entire input block (K) of data bits. These constituent encoders are recursive systematic convolutional (RSC) codes of moderate depth (8 or 16 states) that are separated by a code interleaver which interleaves one copy of the frame.

The LDPC code, in contrast, uses many low-depth constituent codes (accumulators) in parallel, each of which encodes only a small portion of the input frame. The many constituent codes can be viewed as many low-depth (2-state) 'convolutional codes' that are connected via the repeat and distribute operations. The repeat and distribute operations perform the function of the interleaver in the turbo code.

The ability to more precisely manage the connections of the various constituent codes and the level of redundancy for each input bit gives more flexibility in the design of LDPC codes, which can lead to better performance than turbo codes in some instances. Turbo codes still seem to perform better than LDPCs at low code rates, or at least the design of well-performing low-rate codes is easier for turbo codes.

As a practical matter, the hardware that forms the accumulators is reused during the encoding process. That is, once a first set of parity bits is generated and stored, the same accumulator hardware is used to generate the next set of parity bits.

Decoding

As with other codes, the maximum likelihood decoding of an LDPC code on the binary symmetric channel is an NP-complete problem. Performing optimal decoding for an NP-complete code of any useful size is not practical.

However, sub-optimal techniques based on iterative belief propagation decoding give excellent results and can be practically implemented. The sub-optimal decoding techniques view each parity check that makes up the LDPC as an independent single parity check (SPC) code. Each SPC code is decoded separately using soft-in-soft-out (SISO) techniques such as SOVA, BCJR, MAP, and other derivatives thereof. The soft decision information from each SISO decoding is cross-checked and updated with other redundant SPC decodings of the same information bit. Each SPC code is then decoded again using the updated soft decision information. This process is iterated until a valid codeword is achieved or decoding is exhausted. This type of decoding is often referred to as sum-product decoding.

The decoding of the SPC codes is often referred to as the "check node" processing, and the cross-checking of the variables is often referred to as the "variable node" processing.

In a practical LDPC decoder implementation, sets of SPC codes are decoded in parallel to increase throughput.
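The check-node step of sum-product decoding has a compact closed form: the extrinsic LLR sent toward one bit is the "soft XOR" (tanh rule) of the LLRs of the other bits on the same check. A sketch of that single update:

    import numpy as np

    def check_node_update(llrs):
        # Extrinsic message to each edge: 2*atanh of the product of
        # tanh(L/2) over the *other* incoming LLRs on this parity check.
        t = np.tanh(np.asarray(llrs) / 2.0)
        return np.array([2.0 * np.arctanh(np.prod(np.delete(t, i)))
                         for i in range(len(llrs))])

    print(np.round(check_node_update([2.0, -1.5, 4.0]), 3))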


In contrast, belief propagation on the binary erasure channel is particularly simple, where it consists of iterative constraint satisfaction.

For example, consider that the valid codeword, 101011, from the example above, is transmitted across a binary erasure channel and received with the first and fourth bit erased to yield ?01?11. Since the transmitted message must have satisfied the code constraints, the message can be represented by writing the received message on the top of the factor graph.

In this example, the first bit cannot yet be recovered, because all of the constraints connected to it have more than one unknown bit. In order to proceed with decoding the message, constraints connecting to only one of the erased bits must be identified. In this example, only the second constraint suffices. Examining the second constraint, the fourth bit must have been zero, since only a zero in that position would satisfy the constraint.

This procedure is then iterated. The new value for the fourth bit can now be used in conjunction with the first constraint to recover the first bit: the first bit must be a one to satisfy the leftmost constraint.

Thus, the message can be decoded iteratively. For other channel models, the messages passed between the variable nodes and check nodes are real numbers, which express probabilities and likelihoods of belief.

This result can be validated by multiplying the corrected codeword r by the parity-check matrix H:

    z = H r^T = [ 0 0 0 ]^T

Because the outcome z (the syndrome) of this operation is the 3 x 1 zero vector, the resulting codeword r is successfully validated.
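This peeling procedure is mechanical enough to script. The sketch below decodes ?01?11 on the (6, 3) example, with erasures marked as -1 and H as given earlier:

    import numpy as np

    H = np.array([[1, 1, 1, 1, 0, 0],
                  [0, 0, 1, 1, 0, 1],
                  [1, 0, 0, 1, 1, 0]])

    def peel_decode(received, H):
        # Any check touching exactly one erased bit pins that bit to the
        # XOR of its known neighbours; repeat until nothing changes.
        r = np.array(received)
        while (r == -1).any():
            progress = False
            for row in H:
                idx = np.flatnonzero(row)
                erased = [i for i in idx if r[i] == -1]
                if len(erased) == 1:
                    known = [i for i in idx if r[i] != -1]
                    r[erased[0]] = np.bitwise_xor.reduce(r[known])
                    progress = True
            if not progress:
                break              # stuck on a stopping set
        return r

    r = peel_decode([-1, 0, 1, -1, 1, 1], H)   # ?01?11, erasures as -1
    print(r)                                   # -> [1 0 1 0 1 1]
    print(H @ r % 2)                           # syndrome z: [0 0 0]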


While illustrative, this erasure example does not show the use of soft-decision decoding or soft-decision message passing, which is used in virtually all commercial LDPC decoders.

    NESTED CODES :


    BLOCK CODES :


    CONVOLUTIONAL CHANNEL CODING :




    DISTANCE PROPERTIES AND TRANSFER FUNCTION REPRESENTATION :


    DECODING CONVOLUTIONAL CODES :

Several algorithms exist for decoding convolutional codes. For relatively small values of k, the Viterbi algorithm is universally used, as it provides maximum likelihood performance and is highly parallelizable. Viterbi decoders are thus easy to implement in VLSI hardware and in software on CPUs with SIMD instruction sets.

Longer constraint length codes are more practically decoded with any of several sequential decoding algorithms, of which the Fano algorithm is the best known. Unlike Viterbi decoding, sequential decoding is not maximum likelihood, but its complexity increases only slightly with constraint length, allowing the use of strong, long-constraint-length codes. Such codes were used in the Pioneer program of the early 1970s to Jupiter and Saturn, but gave way to shorter, Viterbi-decoded codes, usually concatenated with large Reed-Solomon error correction codes that steepen the overall bit-error-rate curve and produce extremely low residual undetected error rates.

Both Viterbi and sequential decoding algorithms return hard decisions: the bits that form the most likely codeword. An approximate confidence measure can be added to each bit by use of the soft-output Viterbi algorithm (SOVA). Maximum a posteriori (MAP) soft decisions for each bit can be obtained by use of the BCJR algorithm.
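A hard-decision Viterbi decoder fits in a few dozen lines. The sketch below (plain Python, our own toy implementation) decodes the standard rate-1/2, constraint-length-3 code with generators 7 and 5 (octal), and corrects a single injected channel error:

    G = (0b111, 0b101)   # generator taps 7 and 5 (octal), K = 3
    N_STATES = 4         # 2**(K-1) encoder states

    def parity(x):
        # Mod-2 sum of the set bits of x.
        return bin(x).count("1") % 2

    def conv_encode(bits):
        # Shift-register encoder; emits two coded bits per input bit.
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state                  # current bit + last two
            out += [parity(reg & g) for g in G]
            state = reg >> 1
        return out

    def viterbi_decode(received, n_bits):
        # Track, per state, the minimum-Hamming-distance path and its inputs.
        INF = 10 ** 9
        metric = [0] + [INF] * (N_STATES - 1)       # start in the zero state
        paths = [[] for _ in range(N_STATES)]
        for t in range(n_bits):
            r = received[2 * t: 2 * t + 2]
            new_metric = [INF] * N_STATES
            new_paths = [[] for _ in range(N_STATES)]
            for s in range(N_STATES):
                for b in (0, 1):
                    reg = (b << 2) | s
                    ns = reg >> 1                   # next state
                    d = sum(x != parity(reg & g) for x, g in zip(r, G))
                    if metric[s] + d < new_metric[ns]:
                        new_metric[ns] = metric[s] + d
                        new_paths[ns] = paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[min(range(N_STATES), key=metric.__getitem__)]

    msg = [1, 0, 1, 1, 0, 0]                # trailing zeros flush the encoder
    coded = conv_encode(msg)
    coded[3] ^= 1                           # inject a single channel error
    print(viterbi_decode(coded, len(msg)))  # -> [1, 0, 1, 1, 0, 0]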


    VITERBI ALGORITHM FOR MLSE :
