

    1/18

ECED 4504: Digital Transmission Theory

    Jacek Ilow

    (based on the slides by Matthew Valenti)

    Soft Decision Decoding of Convolutional Codes

    Intro to Turbo Codes


    2/18

Performance of Hard Decision Decoding

Consider the K = 7 Odenwalder codes:

r = 1/2, generators (133, 171), dfree = 10

r = 1/3, generators (133, 145, 171), dfree = 14

Distance spectra (first six non-zero terms) are given below.

Assume BPSK modulation.

Find the performance of hard decision decoding.

r = 1/2:

d       a_d        c_d
10       11         36
12       38        211
14      193      1,404
16    1,331     11,633
18    7,275     77,433
20   40,406    502,690

r = 1/3:

d     a_d      c_d
14      1        1
16      7       20
18     11       53
20     35      184
22     90      555
24    279    1,961

a_d is the number of paths with output weight d; c_d is the sum of the input weights for all paths with output weight d.


    3/18

Bit Error Rate of Hard Decision Decoding

Bit error rate:

$$P_b \le \sum_{d=d_{free}}^{\infty} c_d\, P(d) \quad \text{(union bound)}$$

$$P_b \approx c_{d_{free}}\, P(d_{free}) \quad \text{(free distance asymptote, only valid for small } p\text{)}$$

Pairwise error probability:

$$P(d) = \begin{cases} \displaystyle\sum_{j=(d+1)/2}^{d} \binom{d}{j} p^j (1-p)^{d-j}, & d \text{ odd} \\[2ex] \displaystyle\frac{1}{2}\binom{d}{d/2} p^{d/2}(1-p)^{d/2} + \sum_{j=d/2+1}^{d} \binom{d}{j} p^j (1-p)^{d-j}, & d \text{ even} \end{cases}$$

Since BPSK modulation is used, the crossover probability is:

$$p = Q\!\left(\sqrt{\frac{2 r E_b}{N_0}}\right)$$
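The union bound is straightforward to evaluate numerically. Below is a minimal Python sketch combining the truncated r = 1/2 distance spectrum from the previous slide with the odd/even pairwise error probability above; the six-term truncation and the Eb/No value of 5 dB are illustrative choices, not from the slides.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pairwise_hard(d, p):
    """Hard-decision pairwise error probability P(d) over a BSC with
    crossover probability p (the odd/even cases from the slide)."""
    if d % 2:
        return sum(math.comb(d, j) * p**j * (1 - p)**(d - j)
                   for j in range((d + 1) // 2, d + 1))
    return (0.5 * math.comb(d, d // 2) * (p * (1 - p))**(d // 2)
            + sum(math.comb(d, j) * p**j * (1 - p)**(d - j)
                  for j in range(d // 2 + 1, d + 1)))

# Truncated distance spectrum of the K = 7, r = 1/2 code: {d: c_d}
spectrum = {10: 36, 12: 211, 14: 1404, 16: 11633, 18: 77433, 20: 502690}

def ber_hard(EbNo_dB, r=0.5):
    """Truncated union bound on the bit error rate."""
    EbNo = 10 ** (EbNo_dB / 10)
    p = Q(math.sqrt(2 * r * EbNo))     # BSC crossover for coded BPSK
    return sum(c_d * pairwise_hard(d, p) for d, c_d in spectrum.items())

print(ber_hard(5.0))                   # bound at Eb/No = 5 dB
```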


    4/18

System Model: Soft Decision Decoding

[Block diagram: L data bits x → rate r = k/n convolutional encoder → (L + m)/r code bits c → BPSK modulator → s(t) → AWGN channel, r(t) = s(t) + n(t) → correlator z = ∫₀^Tb r(t) f(t) dt → decision statistic → Viterbi decoder → estimates of the data bits.]


    5/18

Soft Decision Decoding Using the Viterbi Algorithm

The Viterbi algorithm can be used to perform soft decision decoding.

The only difference is the branch metric.

With hard decision decoding, we use the Hamming distance:

$$d_H(\mathbf{c}, \mathbf{r}) = w_H(\mathbf{c} \oplus \mathbf{r})$$

With soft decision decoding, we use the squared Euclidean distance:

$$d_E^2(\mathbf{c}, \mathbf{r}) = \sum_i (r_i - c_i)^2$$

Soft decision decoding outperforms hard decision decoding by about 2 dB. This is one of the biggest benefits of convolutional codes.
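As a concrete illustration, here is a minimal soft-decision Viterbi decoder sketch in Python for a K = 3, rate-1/2 code. The (7, 5) octal generators are an assumption for illustration (the slides do not fix a specific code here), and the noise samples in the demo are arbitrary.

```python
import itertools

# K = 3, rate-1/2 code; the (7, 5) octal generators are an assumption.
G = (0b111, 0b101)

def step(state, bit):
    """One trellis transition: (next_state, BPSK output symbols)."""
    reg = (bit << 2) | state             # new bit plus 2 memory bits
    out = [1 - 2 * (bin(reg & g).count("1") & 1) for g in G]  # 0->+1, 1->-1
    return reg >> 1, out

def viterbi_soft(received):
    """Soft-decision Viterbi decoding of a flat list of noisy BPSK
    samples, using the squared Euclidean distance branch metric."""
    INF = float("inf")
    metric = [0.0, INF, INF, INF]        # start in the all-zeros state
    paths = [[] for _ in range(4)]
    for k in range(0, len(received), 2):
        r = received[k:k + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for state, bit in itertools.product(range(4), (0, 1)):
            if metric[state] == INF:
                continue
            nxt, c = step(state, bit)
            m = metric[state] + sum((ri - ci) ** 2 for ri, ci in zip(r, c))
            if m < new_metric[nxt]:      # keep the survivor path
                new_metric[nxt] = m
                new_paths[nxt] = paths[state] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=metric.__getitem__)]

# Demo: encode 1,0,1,1 plus two flush bits, perturb, decode.
state, tx = 0, []
for b in [1, 0, 1, 1, 0, 0]:
    state, out = step(state, b)
    tx.extend(out)
noise = [0.3, -0.2, 0.1, 0.4, -0.3, 0.2, 0.1, -0.1, 0.2, 0.3, -0.2, 0.1]
rx = [s + n for s, n in zip(tx, noise)]
print(viterbi_soft(rx))                  # -> [1, 0, 1, 1, 0, 0]
```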


    6/18

Performance of Soft Decision Decoding

For soft decision decoding, a different pairwise error probability is used.

It depends on the modulation and the type of channel, and it is based on Euclidean distance.

For BPSK over an AWGN channel:

$$P(d) = Q\!\left(\sqrt{\frac{d_E^2}{2 N_0}}\right) = Q\!\left(\sqrt{\frac{2 r d E_b}{N_0}}\right)$$

where $d_E$ is the Euclidean distance between the all-zeros code word and the code word of Hamming weight $d$.

So the analysis is the same as for hard decision decoding, only now we use the above expression for the pairwise error probability:

$$P_b \le \sum_{d=d_{free}}^{\infty} c_d\, Q\!\left(\sqrt{\frac{2 r d E_b}{N_0}}\right)$$
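The soft-decision bound drops the combinatorial sum, so it is even simpler to evaluate. A sketch, again using the truncated r = 1/2 spectrum from earlier (the Eb/No grid is an illustrative choice):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

# Truncated distance spectrum of the K = 7, r = 1/2 code: {d: c_d}
spectrum = {10: 36, 12: 211, 14: 1404, 16: 11633, 18: 77433, 20: 502690}

def ber_soft(EbNo_dB, r=0.5):
    """Truncated union bound with the soft-decision pairwise error
    probability P(d) = Q(sqrt(2 r d Eb/No))."""
    EbNo = 10 ** (EbNo_dB / 10)
    return sum(c_d * Q(math.sqrt(2 * r * d * EbNo))
               for d, c_d in spectrum.items())

for dB in (3, 4, 5, 6):
    print(f"{dB} dB: {ber_soft(dB):.3e}")
```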


    7/18

Comparison: Soft-Decision vs. Hard-Decision

[Figure: BER vs. Eb/No in dB, from 10^0 down to 10^-7, for uncoded BPSK and for the r = 1/2 and r = 1/3 codes with hard-decision and soft-decision decoding.]

Soft-decision coding gain @ BER = 10^-5:

5.4 dB for r = 1/2

5.7 dB for r = 1/3


    8/18

    Principles of Good Code Design

Consider the BER of a block code:

$$P_b \approx \sum_{d=d_{min}}^{n} \frac{\tilde{f}_d\, a_d}{k}\, Q\!\left(\sqrt{\frac{2 r d E_b}{N_0}}\right) = \sum_{d=d_{min}}^{n} \frac{c_d}{k}\, Q\!\left(\sqrt{\frac{2 r d E_b}{N_0}}\right)$$

where $\tilde{f}_d = c_d / a_d$ is the average number of information bits associated with weight-$d$ code words.

How can we minimize $P_b$? We want the probability of a low weight code word to be small!

For k = 10,000, which of the two distance spectra below is better?

Spectrum A:

d     a_d      c_d
3       2        6
6       4       12
9      10       40
12    100      450
15  1,000    5,000
18  5,000   40,000

Spectrum B:

d      a_d       c_d
5       10       100
6       50       500
7      100     1,000
8      350     3,500
9    1,000    10,000
10  10,000   100,000
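A quick way to answer the question is to evaluate the bound for both spectra. In this sketch the code rate r = 1/2 is an assumed value (the slide does not give one), as is the Eb/No grid:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

# The two candidate spectra from the slide, as {d: c_d}
A = {3: 6, 6: 12, 9: 40, 12: 450, 15: 5_000, 18: 40_000}
B = {5: 100, 6: 500, 7: 1_000, 8: 3_500, 9: 10_000, 10: 100_000}

def ber(spectrum, EbNo_dB, k=10_000, r=0.5):
    """P_b ~ sum_d (c_d / k) Q(sqrt(2 r d Eb/No)).
    The rate r = 1/2 is an assumed value; the slide gives none."""
    EbNo = 10 ** (EbNo_dB / 10)
    return sum(c / k * Q(math.sqrt(2 * r * d * EbNo))
               for d, c in spectrum.items())

for dB in (4, 6, 8):
    print(f"{dB} dB: A {ber(A, dB):.3e}, B {ber(B, dB):.3e}")
```

At high Eb/No the d_min term dominates, so Spectrum B (d_min = 5) eventually wins despite its much larger multiplicities; this is the point of the slide: keep the probability of low weight code words small.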


    9/18

    Review: Convolutional Codes

A convolutional encoder encodes a stream of data. The size of the code word is unbounded.

The encoder is a Finite Impulse Response (FIR) filter:

k binary inputs

n binary outputs

K - 1 delay elements

Coefficients are either 1 or 0

All operations are over GF(2) (addition is XOR)

[Figure: constraint length K = 3 encoder: input m_i, two delay elements (D), outputs x_i^(0) and x_i^(1).]
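A direct Python transcription of such an FIR encoder, as a sketch; the (7, 5) octal generator taps are assumed for the K = 3 example, since the slide's figure does not survive extraction:

```python
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    """Feed-forward (FIR) convolutional encoder over GF(2).
    gens holds the generator taps (newest bit is the MSB)."""
    state = 0                          # K - 1 memory bits
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state   # shift register contents
        out.extend(bin(reg & g).count("1") & 1 for g in gens)
        state = reg >> 1               # shift for the next input
    return out

print(conv_encode([1, 0, 1, 1]))       # -> [1, 1, 1, 0, 0, 0, 0, 1]
```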


    10/18

Recursive Systematic Convolutional (RSC) Encoding

An RSC encoder is constructed from a standard convolutional encoder by feeding back one of the outputs.

An RSC code is systematic: the input bits appear directly in the output.

An RSC encoder is an Infinite Impulse Response (IIR) filter.

An arbitrary input will cause a good (high weight) output with high probability.

Some inputs will cause bad (low weight) outputs.

[Figure: RSC encoder: input x_i, two delay elements (D) with feedback r_i, systematic output x_i^(0) = x_i, parity output x_i^(1).]
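A sketch of the same idea in code: the feed-forward (7, 5) encoder above becomes recursive by feeding one output back, giving feedback polynomial 1 + D + D^2 and forward polynomial 1 + D^2 (again assumed generators):

```python
def rsc_encode(bits):
    """RSC encoder sketch: feedback polynomial 1 + D + D^2, forward
    polynomial 1 + D^2 (assumed); emits (systematic, parity) pairs."""
    s1 = s2 = 0                  # contents of the two delay elements
    out = []
    for b in bits:
        a = b ^ s1 ^ s2          # recursive (feedback) bit
        parity = a ^ s2          # forward taps 1 + D^2
        out.extend((b, parity))  # input bit appears directly in output
        s1, s2 = a, s1           # shift the register
    return out

print(rsc_encode([1, 0, 1, 1]))  # -> [1, 1, 0, 1, 1, 0, 1, 0]
```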


    11/18

Parallel Concatenated Codes with Nonuniform Interleaving

A stronger code can be created by encoding in parallel.

A nonuniform interleaver scrambles the ordering of bits at the input of the second encoder. It uses a pseudo-random interleaving pattern.

It is very unlikely that both encoders produce low weight code words.

A MUX increases the code rate from 1/3 to 1/2 (see the sketch after the diagram).

[Figure: turbo encoder: the input x_i feeds RSC #1 directly and RSC #2 through the nonuniform interleaver; a MUX combines the systematic output with the two parity outputs.]
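A sketch of the whole structure, reusing the RSC parity computation from the previous slide. The pseudo-random interleaver seed and the alternating puncture pattern (keep every systematic bit, alternate the two parity streams to reach rate 1/2) are illustrative assumptions:

```python
import random

def rsc_parity(bits):
    """Parity stream of the RSC sketch above (feedback 1 + D + D^2,
    forward 1 + D^2); systematic bits are handled by the MUX below."""
    s1 = s2 = 0
    out = []
    for b in bits:
        a = b ^ s1 ^ s2          # recursive (feedback) bit
        out.append(a ^ s2)       # parity bit
        s1, s2 = a, s1
    return out

def turbo_encode(bits, seed=42):
    """Parallel concatenation: RSC #1 sees the input in order, RSC #2
    sees a pseudo-random permutation of it; the MUX punctures the two
    parity streams alternately, raising the rate from 1/3 to 1/2."""
    rng = random.Random(seed)                 # fixed interleaver pattern
    perm = list(range(len(bits)))
    rng.shuffle(perm)
    p1 = rsc_parity(bits)                     # RSC #1: natural order
    p2 = rsc_parity([bits[i] for i in perm])  # RSC #2: interleaved order
    out = []
    for i, b in enumerate(bits):
        out.append(b)                               # systematic bit
        out.append(p1[i] if i % 2 == 0 else p2[i])  # punctured parity
    return out

print(turbo_encode([1, 0, 1, 1, 0, 0, 1, 0]))
```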


    12/18

Random Coding Interpretation of Turbo Codes

Random codes achieve the best performance.

Shannon showed that as n → ∞, random codes achieve channel capacity.

However, random codes are not feasible: the code must contain enough structure so that decoding can be realized with actual hardware.

Coding dilemma: "All codes are good, except those that we can think of."

With turbo codes:

The nonuniform interleaver adds apparent randomness to the code.

Yet they contain enough structure so that decoding is feasible.


    13/18

Comparison of a Turbo Code and a Convolutional Code

First consider a K = 12 convolutional code:

d_min = 18

c_dmin = 187

Now consider the original turbo code:

C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," in Proc. IEEE Int. Conf. on Commun., Geneva, Switzerland, May 1993, pp. 1064-1070.

Same complexity as the K = 12 convolutional code

Constraint length 5 RSC encoders

k = 65,536 bit interleaver

Minimum distance d_min = 6

a_dmin = 3 minimum distance code words

Minimum distance code words have an average information weight of only f̃_dmin = 2


    14/18

Comparison of Minimum-distance Asymptotes

[Figure: BER vs. Eb/No in dB comparing the convolutional code and the turbo code against their respective free distance asymptotes.]

Convolutional code ($d_{min} = 18$, $c_{d_{min}} = 187$):

$$P_b \approx 187\, Q\!\left(\sqrt{18\, \frac{E_b}{N_0}}\right)$$

Turbo code ($d_{min} = 6$, $a_{d_{min}} = 3$, $\tilde{f}_{d_{min}} = 2$, $k = 65{,}536$):

$$P_b \approx \frac{a_{d_{min}} \tilde{f}_{d_{min}}}{k}\, Q\!\left(\sqrt{2 r d_{min} \frac{E_b}{N_0}}\right) = \frac{(3)(2)}{65{,}536}\, Q\!\left(\sqrt{2(0.5)(6)\, \frac{E_b}{N_0}}\right) \approx 9.2 \times 10^{-5}\, Q\!\left(\sqrt{6\, \frac{E_b}{N_0}}\right)$$
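Both asymptotes are one-liners to evaluate. A small sketch (the Eb/No grid is an illustrative choice) shows that the turbo asymptote's tiny coefficient, due to the k = 65,536 interleaver, more than compensates for its small minimum distance at low Eb/No:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def asymptotes(EbNo_dB):
    """Free distance asymptotes from the slide (linear Eb/No)."""
    EbNo = 10 ** (EbNo_dB / 10)
    cc = 187 * Q(math.sqrt(18 * EbNo))       # K = 12 convolutional code
    tc = 9.2e-5 * Q(math.sqrt(6 * EbNo))     # original turbo code
    return cc, tc

for dB in (0.5, 1.0, 2.0, 3.0):
    cc, tc = asymptotes(dB)
    print(f"{dB} dB: CC {cc:.3e}, TC {tc:.3e}")
```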


    15/18

    The Turbo-Principle

Turbo codes get their name because the decoder uses feedback, like a turbo engine.

    We will go over the decoding algorithm next time.


    16/18

Performance as a Function of Number of Iterations

K = 5, r = 1/2, k = 65,536, Log-MAP algorithm

[Figure: BER vs. Eb/No in dB after 1, 2, 3, 6, 10, and 18 decoder iterations.]


    17/18

Summary of Performance Factors and Tradeoffs

Latency vs. performance:

Frame (interleaver) size k

Complexity vs. performance:

Decoding algorithm

Number of iterations

Encoder constraint length K

Spectral efficiency vs. performance:

Overall code rate r

Other factors:

Interleaver design

Puncture pattern

Trellis termination


    18/18

Tradeoff: BER Performance versus Frame Size (Latency)

K = 5, rate r = 1/2, 18 decoder iterations, AWGN channel

[Figure: BER vs. Eb/No in dB for frame (interleaver) sizes k = 1024, 4096, 16384, and 65536.]