Convolutional Code(II)


  • 7/30/2019 Convolutional Code(II)

    1/115

    Chapter 10 Convolutional Codes

    Dr. Chih-Peng Li


    Table of Contents

    10.1 Encoding of Convolutional Codes

    10.2 Structural Properties of Convolutional Codes

    10.3 Distance Properties of Convolutional Codes


    Convolutional Codes

    Convolutional codes differ from block codes in that the encoder
    contains memory, so the n encoder outputs at any time unit depend
    not only on the k inputs at that time but also on the m previous
    input blocks.

    An (n, k, m) convolutional code can be implemented with a k-input,
    n-output linear sequential circuit with input memory m.

    Typically, n and k are small integers with k < n, while the memory
    order m can be made large to achieve low error probabilities.


    Shortly thereafter, Wozencraft proposed sequential decoding as an
    efficient decoding scheme for convolutional codes, and
    experimental studies soon began to appear.

    In 1963, Massey proposed a less efficient but simpler-to-implement
    decoding method called threshold decoding.

    Then in 1967, Viterbi proposed a maximum likelihood decoding
    scheme that was relatively easy to implement for codes with small
    memory orders.

    This scheme, called Viterbi decoding, together with improved
    versions of sequential decoding, led to the application of
    convolutional codes to deep-space and satellite communication in
    the early 1970s.


    Convolutional Code

    A convolutional code is generated by passing the information
    sequence to be transmitted through a linear finite-state shift
    register.

    In general, the shift register consists of K (k-bit) stages and n
    linear algebraic function generators.


    Convolutional Code

    Convolutional codes:

    k = number of bits shifted into the encoder at one time
    (k = 1 is usually used)

    n = number of encoder output bits corresponding to the k
    information bits

    Rc = k/n = code rate

    K = constraint length, encoder memory

    Each encoded bit is a function of the present input bits and their
    past ones.

    Note that the definition of constraint length here is the same as
    that of Shu Lin's, while the shift register representation is
    different.


    Encoding of Convolutional Code

    Example 1:

    Consider the binary convolutional encoder with constraint

    length K=3, k=1, and n=3.

    The generators are: g1=[100], g2=[101], and g3=[111].

    The generators are more conveniently given in octal form

    as (4,5,7).


    Encoding of Convolutional Code

    Example 2:

    Consider a rate 2/3 convolutional encoder.

    The generators are: g1=[1011], g2=[1101], and g3=[1010]. In octal
    form, these generators are (13, 15, 12).
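The octal forms quoted in Examples 1 and 2 can be checked mechanically. A minimal sketch (the helper name is ours, not from the text): each generator tap list is read MSB-first as a binary number and printed in octal.

```python
def taps_to_octal(taps):
    """Interpret a tap list (MSB first) as a binary number, format in octal."""
    value = int("".join(str(b) for b in taps), 2)
    return format(value, "o")

# Example 1: K=3, rate 1/3 generators -> (4, 5, 7)
print([taps_to_octal(g) for g in ([1, 0, 0], [1, 0, 1], [1, 1, 1])])

# Example 2: rate 2/3 generators -> (13, 15, 12)
print([taps_to_octal(g) for g in ([1, 0, 1, 1], [1, 1, 0, 1], [1, 0, 1, 0])])
```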


    Representations of Convolutional Code

    There are three alternative methods that are often used

    to describe a convolutional code:

    Tree diagram

    Trellis diagram

    State diagram


    Representations of Convolutional Code

    Tree diagram

    Note that the tree diagram on the right repeats itself after the
    third stage. This is consistent with the fact that the constraint
    length K=3.

    The output sequence at each stage is determined by the input bit
    and the two previous input bits.

    In other words, we may say that the 3-bit output sequence for each
    input bit is determined by the input bit and the four possible
    states of the shift register, denoted as a=00, b=01, c=10, and
    d=11.

    Tree diagram for rate 1/3, K=3 convolutional code.


    Representations of Convolutional Code

    Trellis diagram

    Trellis diagram for rate 1/3, K=3 convolutional code.


    Representations of Convolutional Code

    State diagram

    State transitions (a=00, b=01, c=10, d=11):

      state   input 0   input 1
        a        a         c
        b        a         c
        c        b         d
        d        b         d

    State diagram for rate 1/3, K=3 convolutional code.
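The state transitions for the rate 1/3, K=3 code can be regenerated programmatically. A small sketch under the convention stated above (state = two most recent input bits, labels a=00, b=01, c=10, d=11; the label map and function name are ours):

```python
# State = (u_{l-1}, u_{l-2}); shifting in a new input bit u gives (u, u_{l-1}).
LABELS = {(0, 0): "a", (0, 1): "b", (1, 0): "c", (1, 1): "d"}

def next_state(state, u):
    # New most-recent bit is u; the old most-recent bit shifts back.
    return (u, state[0])

table = {LABELS[s]: (LABELS[next_state(s, 0)], LABELS[next_state(s, 1)])
         for s in sorted(LABELS)}
print(table)  # {'a': ('a', 'c'), 'b': ('a', 'c'), 'c': ('b', 'd'), 'd': ('b', 'd')}
```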


    Representations of Convolutional Code

    Example: K=2, k=2, n=3 convolutional code

    Tree diagram


    Representations of Convolutional Code

    Example: K=2, k=2, n=3 convolutional code

    Trellis diagram


    Representations of Convolutional Code

    Example: K=2, k=2, n=3 convolutional code

    State diagram


    Representations of Convolutional Code

    In general, we state that a rate k/n, constraint length K,
    convolutional code is characterized by 2^k branches emanating
    from each node of the tree diagram.

    The trellis and the state diagrams each have 2^{k(K-1)} possible
    states.

    There are 2^k branches entering each state and 2^k branches
    leaving each state.


    The encoder consists of an m = 3-stage shift register together
    with n = 2 modulo-2 adders and a multiplexer for serializing the
    encoder outputs.

    The mod-2 adders can be implemented as EXCLUSIVE-OR gates. Since
    mod-2 addition is a linear operation, the encoder is a linear
    feedforward shift register.

    All convolutional encoders can be implemented using a linear
    feedforward shift register of this type.

    Example: A (2, 1, 3) binary convolutional code:

    10.1 Encoding of Convolutional Codes


    10.1 Encoding of Convolutional Codes

    The information sequence u = (u0, u1, u2, …) enters the encoder
    one bit at a time.

    Since the encoder is a linear system, the two encoder output
    sequences can be obtained as the convolution of the input sequence
    u with the two encoder impulse responses.

    The impulse responses are obtained by letting u = (1 0 0 …) and
    observing the two output sequences.

    Since the encoder has an m-time-unit memory, the impulse responses
    can last at most m+1 time units and are written as:

    The encoder output sequences are

      v(1) = (v0(1), v1(1), v2(1), …) and v(2) = (v0(2), v1(2), v2(2), …)

    and the two impulse responses are

      g(1) = (g0(1), g1(1), …, gm(1)) and g(2) = (g0(2), g1(2), …, gm(2)).


    10.1 Encoding of Convolutional Codes

    The encoder of the binary (2, 1, 3) code is shown in the figure.
    Its impulse responses are

      g(1) = (1 0 1 1)
      g(2) = (1 1 1 1)

    The impulse responses g(1) and g(2) are called the generator
    sequences of the code.

    The encoding equations can now be written as

      v(1) = u * g(1)
      v(2) = u * g(2)

    where * denotes discrete convolution and all operations are mod-2.

    The convolution operation implies that for all l >= 0,

      vl(j) = Σ (i = 0 to m) u_{l-i} gi(j)
            = ul g0(j) + u_{l-1} g1(j) + … + u_{l-m} gm(j),   j = 1, 2,

    where u_{l-i} = 0 for all l < i.

    For an (n, k, m) code with k > 1, there are n generator
    polynomials for each of the k inputs.

    Each set of n generators represents the connections from one of
    the shift registers to the n outputs.
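The convolution encoding rule above can be sketched directly in code. A minimal illustration for the (2, 1, 3) code with g(1) = (1 0 1 1) and g(2) = (1 1 1 1); the function name and multiplexed-tuple output format are our choices, not from the text:

```python
def conv_encode(u, generators):
    """Encode u with each generator sequence by mod-2 discrete convolution."""
    m = len(generators[0]) - 1
    out_len = len(u) + m  # m all-zero blocks flush the encoder back to S0
    streams = []
    for g in generators:
        v = [sum(u[l - i] * g[i] for i in range(len(g)) if 0 <= l - i < len(u)) % 2
             for l in range(out_len)]
        streams.append(v)
    # Multiplex: one n-bit output block per time unit.
    return list(zip(*streams))

blocks = conv_encode([1, 1, 1, 0, 1], [(1, 0, 1, 1), (1, 1, 1, 1)])
print(blocks)  # [(1,1), (1,0), (0,1), (0,1), (1,1), (1,0), (1,1), (1,1)]
```

The output matches the codeword v = (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1) obtained from the state diagram later in the chapter for u = (1 1 1 0 1).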

    The length Ki of the ith shift register is given by

      Ki = max over 1 <= j <= n of deg gi(j)(D),   1 <= i <= k,

    where gi(j)(D) is the generator polynomial relating the ith input
    to the jth output, and the encoder memory order m is

      m = max over 1 <= i <= k of Ki
        = max over all i, j of deg gi(j)(D).


    10.1 Encoding of Convolutional Codes

    Since the encoder is a linear system, where u(i)(D) is the ith
    input sequence and v(j)(D) is the jth output sequence, the
    generator polynomial gi(j)(D) can be interpreted as the encoder
    transfer function relating input i to output j.

    As with any k-input, n-output linear system, there are a total of
    kn transfer functions.

    These can be represented by the k × n transfer function matrix

      G(D) = | g1(1)(D)  g1(2)(D)  …  g1(n)(D) |
             | g2(1)(D)  g2(2)(D)  …  g2(n)(D) |
             |    …         …      …     …     |
             | gk(1)(D)  gk(2)(D)  …  gk(n)(D) |


    10.1 Encoding of Convolutional Codes

    Using the transfer function matrix, the encoding equations for an
    (n, k, m) code can be expressed as

      V(D) = U(D) G(D)

    where U(D) = (u(1)(D), u(2)(D), …, u(k)(D)) is the k-tuple of
    input sequences and V(D) = (v(1)(D), v(2)(D), …, v(n)(D)) is
    the n-tuple of output sequences.

    After multiplexing, the code word becomes

      v(D) = v(1)(D^n) + D v(2)(D^n) + … + D^{n-1} v(n)(D^n).


    10.1 Encoding of Convolutional Codes

    Example 10.7

    For the previous (3, 2, 1) convolutional code,

      G(D) = | 1+D   D   1+D |
             |  D    1    1  |

    For the input sequences u(1)(D) = 1 + D^2 and u(2)(D) = 1 + D,
    the encoding equations give

      V(D) = (v(1)(D), v(2)(D), v(3)(D)) = (1 + D^3, 1 + D^3, D^2 + D^3)

    and the code word is

      v(D) = v(1)(D^3) + D v(2)(D^3) + D^2 v(3)(D^3)
           = 1 + D + D^8 + D^9 + D^10 + D^11,

    that is, v = (1 1 0, 0 0 0, 0 0 1, 1 1 1).
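The matrix product V(D) = U(D) G(D) of Example 10.7 can be verified with polynomials over GF(2) stored as integers (bit i is the coefficient of D^i) — a representation choice of ours, not from the text:

```python
def pmul(a, b):
    """Carryless (GF(2)) polynomial multiplication of integer bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# G(D) rows for the (3, 2, 1) code, entries as bitmasks.
G = [[0b11, 0b10, 0b11],   # 1+D,  D,  1+D
     [0b10, 0b01, 0b01]]   # D,    1,  1
U = [0b101, 0b11]          # u(1)(D) = 1 + D^2,  u(2)(D) = 1 + D

# V(D) = U(D) G(D): sum over inputs is XOR of GF(2) products.
V = [pmul(U[0], G[0][j]) ^ pmul(U[1], G[1][j]) for j in range(3)]
print([format(v, "b") for v in V])  # ['1001', '1001', '1100']
```

Reading the bitmasks back gives v(1) = 1 + D^3, v(2) = 1 + D^3, v(3) = D^2 + D^3, as stated.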


    10.1 Encoding of Convolutional Codes

    Then, we can find a means of representing the code word v(D)
    directly in terms of the input sequences.

    A little algebraic manipulation yields

      v(D) = Σ (i = 1 to k) u(i)(D^n) gi(D),

    where

      gi(D) = gi(1)(D^n) + D gi(2)(D^n) + … + D^{n-1} gi(n)(D^n),
      1 <= i <= k,

    is a composite generator polynomial relating the ith input
    sequence to v(D).


    10.1 Encoding of Convolutional Codes

    Example 10.8

    For the previous (2, 1, 3) convolutional code, the composite
    generator polynomial is

      g(D) = g(1)(D^2) + D g(2)(D^2)
           = 1 + D + D^3 + D^4 + D^5 + D^6 + D^7

    and for u(D) = 1 + D^2 + D^3 + D^4, the code word is

      v(D) = u(D^2) g(D)
           = (1 + D^4 + D^6 + D^8)(1 + D + D^3 + D^4 + D^5 + D^6 + D^7)
           = 1 + D + D^3 + D^7 + D^9 + D^11 + D^14 + D^15,

    again agreeing with previous calculations.
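Example 10.8's product can be checked with the same integer-bitmask representation of GF(2) polynomials (bit i is the coefficient of D^i; the helper names are ours):

```python
def pmul(a, b):
    """Carryless (GF(2)) polynomial multiplication of integer bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def spread(p, n):
    """Substitute D -> D^n, i.e. move bit i to bit n*i."""
    return sum(1 << (n * i) for i in range(p.bit_length()) if (p >> i) & 1)

g = 0b11111011            # g(D) = 1 + D + D^3 + D^4 + D^5 + D^6 + D^7
u = 0b11101               # u(D) = 1 + D^2 + D^3 + D^4
v = pmul(spread(u, 2), g) # v(D) = u(D^2) g(D)
exponents = [i for i in range(v.bit_length()) if (v >> i) & 1]
print(exponents)          # [0, 1, 3, 7, 9, 11, 14, 15]
```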

    10.2 Structural Properties of Convolutional Codes


    Since a convolutional encoder is a sequential circuit, its
    operation can be described by a state diagram.

    The state of the encoder is defined as its shift register
    contents.

    For an (n, k, m) code with k > 1, the ith shift register contains
    Ki previous information bits.

    Defining K = K1 + K2 + … + Kk as the total encoder memory, the
    encoder state at time unit l is the binary K-tuple of inputs

      (u_{l-1}(1) … u_{l-K1}(1); u_{l-1}(2) … u_{l-K2}(2); …; u_{l-1}(k) … u_{l-Kk}(k)),

    and there are a total of 2^K different possible states.


    For an (n, 1, m) code, K = K1 = m, and the encoder state at time
    unit l is simply (u_{l-1} u_{l-2} … u_{l-m}).

    Each new block of k inputs causes a transition to a new state.

    There are 2^k branches leaving each state, one corresponding to
    each different input block. Note that for an (n, 1, m) code, there
    are only two branches leaving each state.

    Each branch is labeled with the k inputs (u_l(1) u_l(2) … u_l(k))
    causing the transition and the n corresponding outputs
    (v_l(1) v_l(2) … v_l(n)).

    The states are labeled S0, S1, …, S_{2^K - 1}, where by convention
    Si represents the state whose binary K-tuple representation
    b0, b1, …, b_{K-1} is equivalent to the integer

      i = b0 2^0 + b1 2^1 + … + b_{K-1} 2^{K-1}.


    Assuming that the encoder is initially in state S0 (the all-zero
    state), the code word corresponding to any given information
    sequence can be obtained by following the path through the state
    diagram and noting the corresponding outputs on the branch labels.

    Following the last nonzero information block, the encoder is
    returned to state S0 by a sequence of m all-zero blocks appended
    to the information sequence.


    Encoder state diagram of a (2, 1, 3) code

    If u = (1 1 1 0 1), the code word is
    v = (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1).
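The state-diagram walk just described can be sketched as a small finite-state machine. This assumes the (2, 1, 3) encoder with g(1) = (1 0 1 1) and g(2) = (1 1 1 1); the function name is ours:

```python
def encode_by_state_diagram(u, m=3):
    """Walk the (2,1,3) encoder state diagram; state = (u_{l-1}, u_{l-2}, u_{l-3})."""
    state = (0, 0, 0)  # start in S0
    out = []
    for bit in list(u) + [0] * m:  # m all-zero bits return the encoder to S0
        v1 = (bit + state[1] + state[2]) % 2              # taps of g(1) = 1011
        v2 = (bit + state[0] + state[1] + state[2]) % 2   # taps of g(2) = 1111
        out.append((v1, v2))
        state = (bit, state[0], state[1])
    return out

print(encode_by_state_diagram([1, 1, 1, 0, 1]))
# [(1,1), (1,0), (0,1), (0,1), (1,1), (1,0), (1,1), (1,1)]
```

This reproduces the codeword v listed above for u = (1 1 1 0 1).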


    Encoder state diagram of a (3, 2, 1) code



    The state diagram can be modified to provide a complete
    description of the Hamming weights of all nonzero code words
    (i.e., a weight distribution function for the code).

    State S0 is split into an initial state and a final state, the
    self-loop around state S0 is deleted, and each branch is labeled
    with a branch gain X^i, where i is the weight of the n encoded
    bits on that branch.

    Each path connecting the initial state to the final state
    represents a nonzero code word that diverges from and remerges
    with state S0 exactly once.

    The path gain is the product of the branch gains along a path,
    and the weight of the associated code word is the power of X in
    the path gain.


    Modified encoder state diagram of a (2, 1, 3) code.

    The path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0
    has path gain X^2 · X^1 · X^1 · X^1 · X^2 · X^1 · X^2 · X^2 = X^12.


    Modified encoder state diagram of a (3, 2, 1) code.

    The path representing the state sequence S0 S1 S3 S2 S0
    has path gain X^2 · X^1 · X^0 · X^1 = X^4.


    The weight distribution function of a code can be determined by
    considering the modified state diagram as a signal flow graph and
    applying Mason's gain formula to compute its generating function

      T(X) = Σ_i Ai X^i,

    where Ai is the number of code words of weight i.

    In a signal flow graph, a path connecting the initial state to the
    final state which does not go through any state twice is called a
    forward path.

    A closed path starting at any state and returning to that state
    without going through any other state twice is called a loop.


    Let Ci be the gain of the ith loop.

    A set of loops is nontouching if no state belongs to more than one
    loop in the set.

    Let {i} be the set of all loops, {i', j'} be the set of all pairs
    of nontouching loops, {i'', j'', l''} be the set of all triples of
    nontouching loops, and so on.

    Then define

      Δ = 1 − Σ_i Ci + Σ_{i',j'} Ci' Cj' − Σ_{i'',j'',l''} Ci'' Cj'' Cl'' + …,

    where Σ_i Ci is the sum of the loop gains, Σ_{i',j'} Ci' Cj' is
    the product of the loop gains of two nontouching loops summed over
    all pairs of nontouching loops, and Σ_{i'',j'',l''} Ci'' Cj'' Cl''
    is the product of the loop gains of three nontouching loops summed
    over all triples of nontouching loops.


    And Δ_i is defined exactly like Δ, but only for that portion of
    the graph not touching the ith forward path; that is, all states
    along the ith forward path, together with all branches connected
    to these states, are removed from the graph when computing Δ_i.

    Mason's formula for computing the generating function T(X) of a
    graph can now be stated as

      T(X) = ( Σ_i Fi Δ_i ) / Δ,

    where the sum in the numerator is over all forward paths and Fi is
    the gain of the ith forward path.


    Example (2,1,3) Code:

    There are 11 loops in the modified encoder state diagram:

      Loop 1:  S1 S3 S7 S6 S5 S2 S4 S1   (C1 = X^8)
      Loop 2:  S1 S3 S7 S6 S4 S1         (C2 = X^3)
      Loop 3:  S1 S3 S6 S5 S2 S4 S1      (C3 = X^7)
      Loop 4:  S1 S3 S6 S4 S1            (C4 = X^2)
      Loop 5:  S1 S2 S5 S3 S7 S6 S4 S1   (C5 = X^4)
      Loop 6:  S1 S2 S5 S3 S6 S4 S1      (C6 = X^3)
      Loop 7:  S1 S2 S4 S1               (C7 = X^3)
      Loop 8:  S2 S5 S2                  (C8 = X)
      Loop 9:  S3 S7 S6 S5 S3            (C9 = X^5)
      Loop 10: S3 S6 S5 S3               (C10 = X^4)
      Loop 11: S7 S7                     (C11 = X)


    Example (2,1,3) Code: (cont.)

    There are 10 pairs of nontouching loops:

      Loop pair 1:  (loop 2, loop 8)     C2 C8  = X^4
      Loop pair 2:  (loop 3, loop 11)    C3 C11 = X^8
      Loop pair 3:  (loop 4, loop 8)     C4 C8  = X^3
      Loop pair 4:  (loop 4, loop 11)    C4 C11 = X^3
      Loop pair 5:  (loop 6, loop 11)    C6 C11 = X^4
      Loop pair 6:  (loop 7, loop 9)     C7 C9  = X^8
      Loop pair 7:  (loop 7, loop 10)    C7 C10 = X^7
      Loop pair 8:  (loop 7, loop 11)    C7 C11 = X^4
      Loop pair 9:  (loop 8, loop 11)    C8 C11 = X^2
      Loop pair 10: (loop 10, loop 11)   C10 C11 = X^5


    Example (2,1,3) Code: (cont.)

    There are two triples of nontouching loops:

      Loop triple 1: (loop 4, loop 8, loop 11)    C4 C8 C11  = X^4
      Loop triple 2: (loop 7, loop 10, loop 11)   C7 C10 C11 = X^8

    There are no other sets of nontouching loops. Therefore,

      Δ = 1 − (2X + X^2 + 3X^3 + 2X^4 + X^5 + X^7 + X^8)
            + (X^2 + 2X^3 + 3X^4 + X^5 + X^7 + 2X^8)
            − (X^4 + X^8)
        = 1 − 2X − X^3.
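The bookkeeping behind Δ can be checked mechanically. A sketch for the (2,1,3) example: polynomials in X are dicts mapping exponent to integer coefficient, loop gains are just their X exponents, and nontouching-set products add exponents.

```python
from collections import defaultdict

# Loop gains (exponent of X) and the nontouching pairs/triples
# enumerated for the (2,1,3) example.
loops = {1: 8, 2: 3, 3: 7, 4: 2, 5: 4, 6: 3, 7: 3, 8: 1, 9: 5, 10: 4, 11: 1}
pairs = [(2, 8), (3, 11), (4, 8), (4, 11), (6, 11),
         (7, 9), (7, 10), (7, 11), (8, 11), (10, 11)]
triples = [(4, 8, 11), (7, 10, 11)]

delta = defaultdict(int)
delta[0] += 1
for i in loops:
    delta[loops[i]] -= 1                       # - sum of loop gains
for grp in pairs:
    delta[sum(loops[i] for i in grp)] += 1     # + nontouching pair products
for grp in triples:
    delta[sum(loops[i] for i in grp)] -= 1     # - nontouching triple products

print({e: c for e, c in sorted(delta.items()) if c})  # {0: 1, 1: -2, 3: -1}
```

Every other coefficient cancels, leaving Δ = 1 − 2X − X^3.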


    Example (2,1,3) Code: (cont.)

    There are seven forward paths in this state diagram:

      Forward path 1: S0 S1 S3 S7 S6 S5 S2 S4 S0   (F1 = X^12)
      Forward path 2: S0 S1 S3 S7 S6 S4 S0         (F2 = X^7)
      Forward path 3: S0 S1 S3 S6 S5 S2 S4 S0      (F3 = X^11)
      Forward path 4: S0 S1 S3 S6 S4 S0            (F4 = X^6)
      Forward path 5: S0 S1 S2 S5 S3 S7 S6 S4 S0   (F5 = X^8)
      Forward path 6: S0 S1 S2 S5 S3 S6 S4 S0      (F6 = X^7)
      Forward path 7: S0 S1 S2 S4 S0               (F7 = X^7)


    Example (2,1,3) Code: (cont.)

    Forward paths 1 and 5 touch all states in the graph, and hence the
    subgraph not touching these paths contains no states. Therefore,

      Δ_1 = Δ_5 = 1.

    The subgraph not touching forward paths 3 and 6 contains only the
    self-loop at S7, so

      Δ_3 = Δ_6 = 1 − X.


    Example (2,1,3) Code: (cont.)

    The subgraph not touching forward path 2:

      Δ_2 = 1 − X

    The subgraph not touching forward path 4:

      Δ_4 = 1 − (X + X) + X^2
          = 1 − 2X + X^2


    Example (2,1,3) Code: (cont.)

    The subgraph not touching forward path 7:

      Δ_7 = 1 − (X + X^4 + X^5) + X^5
          = 1 − X − X^4


    Example (2,1,3) Code: (cont.)

    The generating function for this graph is then given by

      T(X) = [X^12 + X^7(1−X) + X^11(1−X) + X^6(1−2X+X^2)
              + X^8 + X^7(1−X) + X^7(1−X−X^4)] / (1 − 2X − X^3)
           = (X^6 + X^7 − X^8) / (1 − 2X − X^3)
           = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + …

    T(X) provides a complete description of the weight distribution of
    all nonzero code words that diverge from and remerge with state S0
    exactly once.

    In this case, there is one such code word of weight 6, three of
    weight 7, five of weight 8, and so on.
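The power-series expansion of T(X) can be checked by long division of the numerator by the denominator. A sketch (the helper name is ours):

```python
def series_div(num, den, order):
    """Expand num/den as a power series; lists are coefficients by exponent."""
    num = num + [0] * (order + 1 - len(num))
    coeffs = []
    for _ in range(order + 1):
        c = num[0] / den[0]
        coeffs.append(round(c))
        # Subtract c*den and shift one position (drop the leading term).
        num = [num[i] - c * (den[i] if i < len(den) else 0)
               for i in range(1, len(num))] + [0]
    return coeffs

# T(X) = (X^6 + X^7 - X^8) / (1 - 2X - X^3)
T = series_div([0] * 6 + [1, 1, -1], [1, -2, 0, -1], 10)
print(T[6:])  # [1, 3, 5, 11, 25]
```

The coefficients reproduce the stated weight distribution: one code word of weight 6, three of weight 7, five of weight 8, and so on.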


    Example (3,2,1) Code: (cont.)

    There are eight loops, six pairs of nontouching loops, and one
    triple of nontouching loops in the graph of the previous modified
    encoder state diagram of the (3, 2, 1) code, and

      Δ = 1 − (2X + 3X^2 + 2X^3 + X^4)
            + (X^2 + X^3 + 2X^4 + X^5 + X^6)
            − X^6
        = 1 − 2X − 2X^2 − X^3 + X^4 + X^5.


    Example (3,2,1) Code: (cont.)

    There are 15 forward paths in this graph, and

      Σ_i Fi Δ_i = 2X^3 + X^4 + X^5 + X^6 − X^7 − X^8.

    Hence, the generating function is

      T(X) = (2X^3 + X^4 + X^5 + X^6 − X^7 − X^8)
             / (1 − 2X − 2X^2 − X^3 + X^4 + X^5)
           = 2X^3 + 5X^4 + 15X^5 + …

    This code contains two nonzero code words of weight 3, five of
    weight 4, fifteen of weight 5, and so on.
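The same long-division check applies to the (3, 2, 1) generating function; a self-contained sketch:

```python
# T(X) = (2X^3 + X^4 + X^5 + X^6 - X^7 - X^8)
#        / (1 - 2X - 2X^2 - X^3 + X^4 + X^5), expanded by long division.
num = [0, 0, 0, 2, 1, 1, 1, -1, -1] + [0] * 3
den = [1, -2, -2, -1, 1, 1]
coeffs = []
for _ in range(6):
    c = num[0]            # den[0] = 1, so the quotient coefficient is num[0]
    coeffs.append(c)
    num = [num[i] - c * (den[i] if i < len(den) else 0)
           for i in range(1, len(num))] + [0]

print(coeffs[3:])  # [2, 5, 15]
```

This confirms two code words of weight 3, five of weight 4, and fifteen of weight 5.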


    Additional information about the structure of a code can be
    obtained using the same procedure.

    If the modified state diagram is augmented by labeling each branch
    corresponding to a nonzero information block with Y^j, where j is
    the weight of the k information bits on the branch, and labeling
    every branch with Z, the generating function is given by

      T(X, Y, Z) = Σ_{i,j,l} A_{i,j,l} X^i Y^j Z^l.

    The coefficient A_{i,j,l} denotes the number of code words with
    weight i, whose associated information sequence has weight j, and
    whose length is l branches.


    The augmented state diagram for the (2, 1, 3) code.


    Example (2,1,3) Code:

    For the graph of the augmented state diagram for the (2, 1, 3)
    code, carrying the Y and Z labels through the same loop
    computation gives

      Δ = 1 − XYZ − XYZ^2 − X^3YZ^3 − X^2Y^2Z^4 − X^4Y^2Z^3
            + X^2Y^2Z^3 + X^4Y^2Z^4.

    Setting Y = Z = 1 recovers Δ = 1 − 2X − X^3.


    Example (2,1,3) Code: (cont.)

    Hence the generating function is

      Σ_i Fi Δ_i = X^12Y^4Z^8 + X^7Y^3Z^6 (1 − XYZ^2)
                   + X^11Y^3Z^7 (1 − XYZ) + X^8Y^4Z^8
                   + X^6Y^2Z^5 (1 − XYZ − XYZ^2 + X^2Y^2Z^3)
                   + X^7Y^3Z^7 (1 − XYZ)
                   + X^7YZ^4 (1 − XYZ − X^4Y^2Z^3)
                 = X^6Y^2Z^5 + X^7YZ^4 − X^8Y^2Z^5

      T(X, Y, Z) = (X^6Y^2Z^5 + X^7YZ^4 − X^8Y^2Z^5) / Δ
                 = X^6Y^2Z^5 + X^7(YZ^4 + Y^3Z^6 + Y^3Z^7)
                   + X^8(Y^2Z^6 + Y^4Z^7 + Y^4Z^8 + 2Y^4Z^9) + …


    Example (2,1,3) Code: (cont.)

    This implies that the code word of weight 6 has length 5 branches
    and an information sequence of weight 2; one code word of weight 7
    has length 4 branches and information sequence weight 1, another
    has length 6 branches and information sequence weight 3, and the
    third has length 7 branches and information sequence weight 3; and
    so on.

    The Transfer Function of a Convolutional Code


    The state diagram can be used to obtain the distance properties of
    a convolutional code.

    Without loss of generality, we assume that the all-zero code
    sequence is the input to the encoder.


    First, we label the branches of the state diagram as either
    D^0 = 1, D^1, D^2, or D^3, where the exponent of D denotes the
    Hamming distance between the sequence of output bits corresponding
    to each branch and the sequence of output bits corresponding to
    the all-zero branch.

    The self-loop at node a can be eliminated, since it contributes
    nothing to the distance properties of a code sequence relative to
    the all-zero code sequence.

    Furthermore, node a is split into two nodes, one of which
    represents the input and the other the output of the state
    diagram.


    Using the modified state diagram, we can obtain four state
    equations:

      Xc = D^3 Xa + D Xb
      Xb = D Xc + D Xd
      Xd = D^2 Xc + D^2 Xd
      Xe = D^2 Xb

    The transfer function for the code is defined as T(D) = Xe/Xa. By
    solving the state equations, we obtain:

      T(D) = D^6 / (1 − 2D^2)
           = D^6 + 2D^8 + 4D^10 + 8D^12 + …
           = Σ (d = 6 to ∞) a_d D^d,

    where

      a_d = 2^{(d−6)/2}   (d even)
      a_d = 0             (d odd).
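The closed form a_d = 2^((d−6)/2) for even d can be checked by expanding T(D) = D^6 / (1 − 2D^2) with the same long-division sketch used earlier:

```python
# Expand D^6 / (1 - 2D^2) as a power series by long division.
num = [0] * 6 + [1] + [0] * 7
den = [1, 0, -2]
coeffs = []
for _ in range(13):
    c = num[0]            # den[0] = 1
    coeffs.append(c)
    num = [num[i] - c * (den[i] if i < len(den) else 0)
           for i in range(1, len(num))] + [0]

print(coeffs[6:])  # [1, 0, 2, 0, 4, 0, 8]
```

The even coefficients 1, 2, 4, 8 at d = 6, 8, 10, 12 match 2^((d−6)/2), and all odd coefficients are zero.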


    The transfer function can be used to provide more detailed
    information than just the distance of the various paths.

    Suppose we introduce a factor N into all branch transitions caused
    by the input bit 1.

    Furthermore, we introduce a factor of J into each branch of the
    state diagram so that the exponent of J will serve as a counting
    variable to indicate the number of branches in any given path from
    node a to node e.


    The state equations for the state diagram are:

      Xc = J N D^3 Xa + J N D Xb
      Xb = J D Xc + J D Xd
      Xd = J N D^2 Xc + J N D^2 Xd
      Xe = J D^2 Xb

    Upon solving these equations for the ratio Xe/Xa, we obtain the
    transfer function:

      T(D, N, J) = J^3 N D^6 / (1 − J N D^2 (1 + J))
                 = J^3 N D^6 + J^4 N^2 D^8 + J^5 N^2 D^8
                   + J^5 N^3 D^10 + 2 J^6 N^3 D^10 + J^7 N^3 D^10 + …


    The exponent of the factor J indicates the length of the path that
    merges with the all-zero path for the first time.

    The exponent of the factor N indicates the number of 1s in the
    information sequence for that path.

    The exponent of D indicates the distance of the sequence of
    encoded bits for that path from the all-zero sequence.

    Reference:

    John G. Proakis, Digital Communications, Fourth Edition, pp.
    477-482, McGraw-Hill, 2001.

    10.2 Structural Properties of Convolutional Codes


    An important subclass of convolutional codes is the class of
    systematic codes.

    In a systematic code, the first k output sequences are exact
    replicas of the k input sequences, i.e.,

      v(i) = u(i),   i = 1, 2, …, k,

    and the generator sequences satisfy

      gi(j) = (1 0 0 …) if j = i
      gi(j) = (0 0 0 …) if j ≠ i,   i = 1, 2, …, k.


    The generator matrix is given by

      G = | I P0   0 P1   0 P2   …   0 Pm                    |
      G = |        I P0   0 P1   …   0 P_{m-1}   0 Pm        |
      G = |               I P0   …   0 P_{m-2}   0 P_{m-1}  … |
      G = |                      …                            |

    where I is the k × k identity matrix, 0 is the k × k all-zero
    matrix, and Pl is the k × (n−k) matrix

      Pl = | g1,l(k+1)  g1,l(k+2)  …  g1,l(n) |
           | g2,l(k+1)  g2,l(k+2)  …  g2,l(n) |
           |    …          …       …    …     |
           | gk,l(k+1)  gk,l(k+2)  …  gk,l(n) |


    And the transfer function matrix becomes

      G(D) = | 1 0 … 0  g1(k+1)(D)  …  g1(n)(D) |
             | 0 1 … 0  g2(k+1)(D)  …  g2(n)(D) |
             |    …         …       …     …     |
             | 0 0 … 1  gk(k+1)(D)  …  gk(n)(D) |

    Since the first k output sequences equal the input sequences, they
    are called information sequences, and the last n − k output
    sequences are called parity sequences.

    Note that whereas in general kn generator sequences must be
    specified to define an (n, k, m) convolutional code, only k(n − k)
    sequences must be specified to define a systematic code.

    Systematic codes represent a subclass of the set of all possible
    codes.


    Example (2,1,3) Code:

    Consider the (2, 1, 3) systematic code. The generator sequences
    are g(1) = (1 0 0 0) and g(2) = (1 1 0 1). The generator matrix is

      G = | 1 1  0 1  0 0  0 1                  |
      G = |      1 1  0 1  0 0  0 1             |
      G = |           1 1  0 1  0 0  0 1        |
      G = |                     …               |

    The (2, 1, 3) systematic code.


    Example (2,1,3) Code:

    The transfer function matrix is

      G(D) = [1   1 + D + D^3].

    For an input sequence u(D) = 1 + D^2 + D^3, the information
    sequence is

      v(1)(D) = u(D) g(1)(D) = (1 + D^2 + D^3) · 1 = 1 + D^2 + D^3

    and the parity sequence is

      v(2)(D) = u(D) g(2)(D) = (1 + D^2 + D^3)(1 + D + D^3)
              = 1 + D + D^2 + D^3 + D^4 + D^5 + D^6.
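The parity computation for the systematic example can be checked with the integer-bitmask GF(2) polynomial representation used earlier (bit i is the coefficient of D^i):

```python
def pmul(a, b):
    """Carryless (GF(2)) polynomial multiplication of integer bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

u = 0b1101                  # u(D) = 1 + D^2 + D^3
g2 = 0b1011                 # g(2)(D) = 1 + D + D^3
parity = pmul(u, g2)
print(format(parity, "b"))  # '1111111' -> 1 + D + D^2 + D^3 + D^4 + D^5 + D^6
```

The information sequence needs no computation at all: in a systematic code it is the input itself.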


One advantage of systematic codes is that encoding is somewhat simpler than for nonsystematic codes because less hardware is required.

For an (n, k, m) systematic code with k > n − k, there exists a modified encoding circuit which normally requires fewer than K shift register stages.

The (2, 1, 3) systematic code requires only one modulo-2 adder with three inputs.


Example (3,2,2) Systematic Code:
Consider the (3, 2, 2) systematic code with transfer function matrix

    G(D) = [ 1  0  1 + D + D2 ]
           [ 0  1  1 + D2     ]

The straightforward realization of the encoder requires a total of K = K1 + K2 = 4 shift registers.


Example (3,2,2) Systematic Code:
The information sequences are given by v(1)(D) = u(1)(D) and v(2)(D) = u(2)(D), and the parity sequence is given by

    v(3)(D) = u(1)(D)g1(3)(D) + u(2)(D)g2(3)(D).

This realization of the (3, 2, 2) systematic encoder requires only two stages of encoder memory rather than 4.
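Assuming the parity polynomials read off the transfer function matrix of this example are g1(3)(D) = 1 + D + D2 and g2(3)(D) = 1 + D2, the parity computation can be sketched as follows (helper names my own):

```python
def gf2_mul(a, b):
    """Multiply binary polynomials (coefficient lists, index = power of D)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def gf2_add(a, b):
    """Add (XOR) binary polynomials of possibly different lengths."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0)
            for i in range(n)]

u1  = [1, 1]       # example input u(1)(D) = 1 + D (my own choice)
u2  = [1]          # example input u(2)(D) = 1
g13 = [1, 1, 1]    # g1(3)(D) = 1 + D + D^2 (assumed from the matrix above)
g23 = [1, 0, 1]    # g2(3)(D) = 1 + D^2

v3 = gf2_add(gf2_mul(u1, g13), gf2_mul(u2, g23))
print(v3)  # -> [0, 0, 1, 1], i.e. v(3)(D) = D^2 + D^3
```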


A complete discussion of the minimal encoder memory required to realize a convolutional code is given by Forney.

In most cases the straightforward realization requiring K stages of shift register memory is most efficient. In the case of an (n, k, m) systematic code with k > n − k, a simpler realization usually exists.

Another advantage of systematic codes is that no inverting circuit is needed for recovering the information sequence from the code word.


Nonsystematic codes, on the other hand, require an inverter to recover the information sequence; that is, an n × k matrix G-1(D) must exist such that

    G(D)G-1(D) = I Dl

for some l ≥ 0, where I is the k × k identity matrix.

Since V(D) = U(D)G(D), we can obtain

    V(D)G-1(D) = U(D)G(D)G-1(D) = U(D)Dl,

and the information sequence can be recovered with an l-time-unit delay from the code word by letting V(D) be the input to the n-input, k-output linear sequential circuit whose transfer function matrix is G-1(D).


For an (n, 1, m) code, a transfer function matrix G(D) has a feedforward inverse G-1(D) of delay l if and only if

    GCD[g(1)(D), g(2)(D), ..., g(n)(D)] = Dl

for some l ≥ 0, where GCD denotes the greatest common divisor.

For an (n, k, m) code with k > 1, let Δi(D), i = 1, 2, ..., (n choose k), be the determinants of the distinct k × k submatrices of the transfer function matrix G(D).

A feedforward inverse of delay l exists if and only if

    GCD[Δi(D) : i = 1, 2, ..., (n choose k)] = Dl

for some l ≥ 0.


Example (2,1,3) Code:
For the (2, 1, 3) code,

    GCD[1 + D2 + D3, 1 + D + D2 + D3] = 1 = D0,

and the transfer function matrix

    G-1(D) = [ 1 + D + D2 ]
             [ D + D2     ]

provides the required inverse of delay 0 [i.e., G(D)G-1(D) = 1].

The implementation of the inverse is shown below.
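The identity G(D)G-1(D) = 1 can be checked numerically. A small sketch (helper names my own; the generator and inverse entries are those of this example):

```python
def gf2_mul(a, b):
    """Multiply binary polynomials (coefficient lists, index = power of D)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def gf2_add(a, b):
    """Add (XOR) binary polynomials of possibly different lengths."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0)
            for i in range(n)]

g1 = [1, 0, 1, 1]   # g(1)(D) = 1 + D^2 + D^3
g2 = [1, 1, 1, 1]   # g(2)(D) = 1 + D + D^2 + D^3
a  = [1, 1, 1]      # first entry of G^-1(D): 1 + D + D^2
b  = [0, 1, 1]      # second entry of G^-1(D): D + D^2

# G(D)G^-1(D) = g(1)(D)(1 + D + D^2) + g(2)(D)(D + D^2)
s = gf2_add(gf2_mul(g1, a), gf2_mul(g2, b))
print(s)  # -> [1, 0, 0, 0, 0, 0], i.e. G(D)G^-1(D) = 1: delay l = 0
```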


Example (3,2,1) Code:
For the (3, 2, 1) code, the 2 × 2 submatrices of G(D) yield determinants 1 + D + D2, 1 + D2, and 1. Since

    GCD[1 + D + D2, 1 + D2, 1] = 1,

there exists a feedforward inverse of delay 0.

The required transfer function matrix is given by:

    G-1(D) = [ 0  0     ]
             [ 1  1 + D ]
             [ 1  D     ]
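As a sanity check, the matrix product G(D)G-1(D) = I can be verified over GF(2)[D]. A sketch, assuming G(D) = [1+D D 1+D; D 1 1] (the matrix consistent with the determinants above) and the inverse just given; helper names are my own:

```python
from functools import reduce

def pmul(a, b):
    """Multiply binary polynomials (coefficient lists, index = power of D)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def padd(a, b):
    """Add (XOR) binary polynomials of possibly different lengths."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0)
            for i in range(n)]

def strip(p):
    """Drop trailing zero coefficients."""
    while p and p[-1] == 0:
        p = p[:-1]
    return p

G = [[[1, 1], [0, 1], [1, 1]],   # row 1: 1+D, D, 1+D  (assumed G(D))
     [[0, 1], [1],    [1]]]      # row 2: D, 1, 1
Ginv = [[[0], [0]],              # candidate 3x2 inverse from above
        [[1], [1, 1]],
        [[1], [0, 1]]]

# 2x3 times 3x2 polynomial matrix product over GF(2)[D]
prod = [[strip(reduce(padd, (pmul(G[r][k], Ginv[k][c]) for k in range(3))))
         for c in range(2)] for r in range(2)]
print(prod)  # -> [[[1], []], [[], [1]]]: the 2x2 identity, so the delay is 0
```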


To understand what happens when a feedforward inverse does not exist, it is best to consider an example.

For the (2, 1, 2) code with g(1)(D) = 1 + D and g(2)(D) = 1 + D2,

    GCD[1 + D, 1 + D2] = 1 + D,

and a feedforward inverse does not exist.

If the information sequence is u(D) = 1/(1 + D) = 1 + D + D2 + ..., the output sequences are v(1)(D) = 1 and v(2)(D) = 1 + D; that is, the code word contains only three nonzero bits even though the information sequence has infinite weight.

If this code word is transmitted over a BSC, and the three nonzero bits are changed to zeros by the channel noise, the received sequence will be all zeros.
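The GCD test can be carried out with the Euclidean algorithm in GF(2)[D]. A sketch storing each polynomial as an int bit mask (bit i holds the coefficient of Di; function names my own):

```python
def deg(p):
    """Degree of a binary polynomial stored as an int bit mask (deg 0 = -1)."""
    return p.bit_length() - 1

def pmod(a, b):
    """Remainder of a divided by b in GF(2)[D] (long division via XOR)."""
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def pgcd(a, b):
    """Euclidean algorithm in GF(2)[D]."""
    while b:
        a, b = b, pmod(a, b)
    return a

bad = pgcd(0b11, 0b101)    # g(1) = 1 + D, g(2) = 1 + D^2
print(bin(bad))            # -> 0b11, i.e. 1 + D: not a power of D -> catastrophic

ok = pgcd(0b1101, 0b1111)  # 1 + D^2 + D^3 and 1 + D + D^2 + D^3 (earlier example)
print(bin(ok))             # -> 0b1, i.e. D^0: a feedforward inverse exists
```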


A MLD will then produce the all-zero code word as its estimate, since this is a valid code word and it agrees exactly with the received sequence.

The estimated information sequence will be û(D) = 0, implying an infinite number of decoding errors caused by a finite number (only three in this case) of channel errors.

Clearly, this is a very undesirable circumstance, and the code is said to be subject to catastrophic error propagation, and is called a catastrophic code.

GCD[g(1)(D), g(2)(D), ..., g(n)(D)] = Dl for some l ≥ 0 (in the case of an (n, 1, m) code) and GCD[Δi(D) : i = 1, 2, ..., (n choose k)] = Dl for some l ≥ 0 (in the case of an (n, k, m) code with k > 1) are necessary and sufficient conditions for a code to be noncatastrophic.


Any code for which a feedforward inverse exists is noncatastrophic.

Another advantage of systematic codes is that they are always noncatastrophic.

A code is catastrophic if and only if the state diagram contains a loop of zero weight other than the self-loop around the state S0.

Note that the self-loop around the state S3 has zero weight.

State diagram of a (2, 1, 2) catastrophic code.


In choosing nonsystematic codes for use in a communication system, it is important to avoid the selection of catastrophic codes.

Only a fraction 1/(2n − 1) of (n, 1, m) nonsystematic codes are catastrophic.

A similar result for (n, k, m) codes with k > 1 is still lacking.

10.3 Distance Properties of Convolutional Codes

The performance of a convolutional code depends on the decoding algorithm employed and the distance properties of the code.

The most important distance measure for convolutional codes is the minimum free distance dfree, defined as

    dfree = min{d(v', v'') : u' ≠ u''},

where v' and v'' are the code words corresponding to the information sequences u' and u'', respectively.

In the equation above, it is assumed that if u' and u'' are of different lengths, zeros are added to the shorter sequence so that their corresponding code words have equal lengths.


dfree is the minimum distance between any two code words in the code. Since a convolutional code is a linear code,

    dfree = min{w(v' + v'') : u' ≠ u''}
          = min{w(v) : u ≠ 0}
          = min{w(uG) : u ≠ 0},

where v is the code word corresponding to the information sequence u.

dfree is the weight of the minimum-weight code word of any length produced by a nonzero information sequence.


Also, dfree is the minimum weight of all paths in the state diagram that diverge from and remerge with the all-zero state S0, and it is the lowest power of X in the code-generating function T(X).

For example, dfree = 6 for the (2, 1, 3) code of Example 10.10(a), and dfree = 3 for the (3, 2, 1) code of Example 10.10(b).

Another important distance measure for convolutional codes is the column distance function (CDF). Let

    [v]i = (v0(1)v0(2)...v0(n), v1(1)v1(2)...v1(n), ..., vi(1)vi(2)...vi(n))

denote the ith truncation of the code word v, and

    [u]i = (u0(1)u0(2)...u0(k), u1(1)u1(2)...u1(k), ..., ui(1)ui(2)...ui(k))

denote the ith truncation of the information sequence u.


The column distance function of order i, di, is defined as

    di = min{d([v']i, [v'']i) : [u']0 ≠ [u'']0}
       = min{w([v]i) : [u]0 ≠ 0},

where v is the code word corresponding to the information sequence u.

di is the weight of the minimum-weight code word over the first (i + 1) time units whose initial information block is nonzero.

In terms of the generator matrix of the code,

    [v]i = [u]i [G]i.
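The column distances can be computed by brute force from this definition. A sketch for the (2, 1, 3) code with g(1) = (1 0 1 1) and g(2) = (1 1 1 1) (function names my own):

```python
from itertools import product

# Generator blocks g_l = (g_l(1), g_l(2)) for the (2,1,3) code
# with g(1) = (1 0 1 1) and g(2) = (1 1 1 1).
g = [(1, 1), (0, 1), (1, 1), (1, 1)]
m = 3  # encoder memory

def trunc_weight(u):
    """Hamming weight of the truncated code word [v]_i for info bits u."""
    w = 0
    for l in range(len(u)):
        v1 = v2 = 0
        for j in range(min(l, m) + 1):   # v_l = sum_j u_{l-j} g_j
            v1 ^= u[l - j] & g[j][0]
            v2 ^= u[l - j] & g[j][1]
        w += v1 + v2
    return w

def d(i):
    """Column distance d_i: minimum over all [u]_i with u_0 = 1."""
    return min(trunc_weight((1,) + tail) for tail in product((0, 1), repeat=i))

cdf = [d(i) for i in range(12)]
print(cdf)  # nondecreasing, levelling off at d_free = 6
```

The resulting sequence is monotonically nondecreasing and reaches dfree = 6 well within four constraint lengths, as discussed below.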


[G]i is a k(i + 1) × n(i + 1) submatrix of G with the form

    [G]i = [ G0 G1 ... Gi   ]
           [    G0 ... Gi-1 ]
           [        .   .   ]
           [           G0   ] ,  i ≤ m,

or

    [G]i = [ G0 G1 ... Gm                ]
           [    G0 ... Gm-1 Gm           ]
           [        .         .          ]
           [          G0 G1 ... Gm       ]
           [              .       .      ]
           [                     G0      ] ,  i > m.


Then

    di = min{w([u]i [G]i) : [u]0 ≠ 0}

is seen to depend only on the first n(i + 1) columns of G, and this accounts for the name column distance function.

The definition implies that di cannot decrease with increasing i (i.e., it is a monotonically nondecreasing function of i).

The complete CDF of the (2, 1, 16) code with

    G(D) = [1 + D2 + D5 + D6 + D8 + D13 + D16    1 + D + D3 + D4 + D7 + D9 + D10 + D11 + D12 + D14 + D15 + D16]

is shown in the figure on the following page.

    Column distance function of a (2,1,16) code


Two cases are of specific interest: i = m and i → ∞.


For i = m, dm is called the minimum distance of a convolutional code and will also be denoted dmin.

From the definition di = min{w([u]i [G]i) : [u]0 ≠ 0}, we see that dmin represents the minimum-weight code word over the first constraint length whose initial information block is nonzero.

For the (2, 1, 16) convolutional code, dmin = d16 = 8.

For i → ∞, lim i→∞ di is the minimum-weight code word of any length whose first information block is nonzero.

Comparing the definitions of lim i→∞ di and dfree, it can be shown that for noncatastrophic codes

    lim i→∞ di = dfree.

di eventually reaches dfree and then increases no more. This usually happens within three to four constraint lengths (i.e., when i reaches 3m or 4m).

The equation above is not necessarily true for catastrophic codes. Take as an example the (2, 1, 2) catastrophic code whose state diagram is shown in Figure 10.11.

For this code, d0 = 2 and d1 = d2 = ... = lim i→∞ di = 3, since the truncated information sequence [u]i = (1, 1, 1, ..., 1) always produces the truncated code word [v]i = (1 1, 0 1, 0 0, 0 0, ..., 0 0), even in the limit as i → ∞.

Note that all paths in the state diagram that diverge from and remerge with the all-zero state S0 have weight at least 4, and dfree = 4.

Hence, we have a situation in which lim i→∞ di = 3 < dfree = 4.

    It is characteristic of catastrophic codes that an infinite-weight

    information sequence produces a finite-weight code word.

    In some cases, as in the example above, this code word can

    have weight less than the free distance of the code. This is due

    to the zero weight loop in the state diagram.

    In other words, an information sequence that cycles around this

    zero weight loop forever will itself pick up infinite weight

    without adding to the weight of the code word.

In a noncatastrophic code, which contains no zero-weight loop other than the self-loop around the all-zero state S0, all infinite-weight information sequences must generate infinite-weight code words, and the minimum-weight code word always has finite length.

Unfortunately, the information sequence that produces the minimum-weight code word may be quite long in some cases, thereby causing the calculation of dfree to be a rather formidable task.

The best achievable dfree for a convolutional code with a given rate and encoder memory has not been determined exactly.

Upper and lower bounds on dfree for the best code have been obtained using a random coding approach.

A comparison of the bounds for nonsystematic codes with the bounds for systematic codes implies that more free distance is available with nonsystematic codes of a given rate and encoder memory than with systematic codes.

This observation is verified by the code construction results presented in succeeding chapters, and has important consequences when a code with large dfree must be selected for use with either Viterbi or sequential decoding.

Heller (1968) derived the following upper bound on the minimum free distance of a rate 1/n convolutional code:

    dfree ≤ min over l ≥ 1 of ⌊ [2^(l−1) / (2^l − 1)] (K + l − 1) n ⌋.
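Heller's bound is easy to evaluate directly. A sketch (function name my own; it assumes K here denotes the encoder constraint length m + 1):

```python
from math import floor

def heller_bound(K, n, lmax=20):
    """Evaluate Heller's upper bound on d_free for a rate-1/n code.

    K is taken to be the encoder constraint length (assumed K = m + 1);
    the minimum over l is truncated at lmax, which suffices because the
    bracketed term grows roughly linearly in l.
    """
    return min(floor((2 ** (l - 1) / (2 ** l - 1)) * (K + l - 1) * n)
               for l in range(1, lmax + 1))

print(heller_bound(4, 2))  # -> 6
```

For n = 2 and K = 4 the bound evaluates to 6, which the (2, 1, 3) code with dfree = 6 meets with equality.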

Maximum Free Distance Codes

(Tables of maximum free distance codes; only the table titles survive:)

Rate
Rate 1/3
Rate 1/4
Rate 1/5
Rate 1/6
Rate 1/7
Rate 1/8
Rate 2/3
Rate k/5
Rate 3/4 and 3/8
Rate k/7