Convolutional Codes

Victor Tomashevich, Advisor: Pavol Hanus

    I. INTRODUCTION

The difference between block codes and convolutional codes is the encoding principle. In block codes, the information bits are followed by the parity bits. In convolutional codes the information bits are spread along the sequence. That means that convolutional codes map information to code bits not block-wise, but sequentially convolve the sequence of information bits according to some rule.

The code is defined by a circuit consisting of a number of shift registers; varying this number allows building codes of different complexity. Consider the following circuit (all registers are initially zero):

Fig. 1. Encoding circuit of the rate $R = 1/2$ code.

The code bits $(x_0^{(1)}, x_1^{(1)}, \ldots)$ and $(x_0^{(2)}, x_1^{(2)}, \ldots)$ are obtained as follows:

$x_i^{(1)} = u_i$ and $x_i^{(2)} = u_i + u_{i-1}$ (1)

Then the codewords are generated:

$x = ((x_0^{(1)}, x_0^{(2)}), (x_1^{(1)}, x_1^{(2)}), \ldots) = (x_0, x_1, \ldots)$ (2)

A convolutional encoder encodes $K$ information bits to $N > K$ code bits in each time step. It is obvious that for this particular code $K = 1$ and $N = 2$, giving the code a rate $R = 1/2$. The encoding procedure is not memoryless, as the code bits depend on the information bits encoded at past time steps. This is another big difference from block codes, which are memoryless.
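The encoding rule in (1) can be sketched in a few lines of Python (a minimal sketch, not from the original paper; the function name and bit representation are my own):

```python
def conv_encode(u):
    """Rate-1/2 encoder of Fig. 1: x_i^(1) = u_i, x_i^(2) = u_i XOR u_{i-1},
    with the shift register initially zero."""
    s = 0  # register content, holds u_{i-1}
    x = []
    for ui in u:
        x.append((ui, ui ^ s))  # code bit pair (x_i^(1), x_i^(2))
        s = ui                  # shift the new bit into the register
    return x

print(conv_encode([0, 0, 1, 1, 0]))  # [(0, 0), (0, 0), (1, 1), (1, 0), (0, 1)]
```

Each input bit produces two output bits, illustrating the rate $R = 1/2$ and the one-step memory of the code.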

The fact that convolutional codes have memory allows them to operate well even when $K$ and $N$ are quite small. Block codes must have long block lengths (e.g., the Reed-Solomon code used in CD drives has $N = 2048$), because they are memoryless and their performance improves with block length.

    II. CONVOLUTIONAL CODES

    A. Properties of convolutional codes [1]

The convolutional code is linear: any linear combination of code bit sequences is again a valid code bit sequence. We can therefore specify the distance properties of the code by investigating only the weights of non-zero sequences.

The encoding mapping $u \mapsto x$ is bijective, i.e., each code bit sequence uniquely corresponds to an information bit sequence.

Code bits generated at time step $i$ are affected by information bits up to $M$ time steps ($i-1, i-2, \ldots, i-M$) back in time, i.e., $M$ is the maximal delay of information bits in the encoder.

The code memory is the minimal number of shift registers required to construct an encoding circuit for the code. The code memory is at most $KM$.

The constraint length is the overall number of information bits affecting the code bits generated at time step $i$: $K(M+1) = KM + K$.



A convolutional code is systematic if the $N$ code bits generated at time step $i$ contain the $K$ information bits.

    Let us consider some examples:

B. Descriptions of convolutional codes

There are multiple ways to describe convolutional codes; the main ones are [1]: the trellis and the matrix description.

First, we consider the description using a trellis. A trellis is a directed graph with at most $|S|$ nodes at each time step $i$, where $S$ is the set of all possible states of the shift registers. In all the examples the rate $R = 1/2$ code described in Fig. 1 is used.

As described in [1], we can define a convolutional code using a generator matrix that describes the encoding function $u \mapsto x$:

$x = u\,G$ (3)

where the generator matrix $G$ is specified in (4).


Fig. 2. This rate $R = 1/2$ code has delay $M = 1$, memory 1, constraint length 2, and is systematic.

Fig. 3. This rate $R = 2/3$ code has delay $M = 1$, memory 2, constraint length 4, and is not systematic.

We get the trellis section by considering the operation of the encoding circuit. The state is the register content, $s_i = u_{i-1}$, and the codeword is obtained by $x_i^{(1)} = u_i$ and $x_i^{(2)} = u_i + s_i$. The branches are labeled with $u_i \,|\, x_i$:

from state $s_i = 0$: input 0 leads to state 0 with label 0|00; input 1 leads to state 1 with label 1|11

from state $s_i = 1$: input 0 leads to state 0 with label 0|01; input 1 leads to state 1 with label 1|10

The trellis section has $2^{KM}$ nodes for each time step. So we input a new information bit $u_i$, it is written into the shift register, thus $s_{i+1} = u_i$, and we get the codeword $x_i$ generated with this transition. The trellis grows exponentially with $M$.
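The trellis section above can be written down as a small transition table (a sketch; the dictionary layout is my own):

```python
def trellis_section():
    """One trellis section of the Fig. 1 code: maps (state s_i, input u_i)
    to (next state s_{i+1}, output (x_i^(1), x_i^(2))), with s_{i+1} = u_i
    and output bits (u_i, u_i XOR s_i)."""
    return {(s, u): (u, (u, u ^ s)) for s in (0, 1) for u in (0, 1)}

# Branch labels u|x as in the trellis: 0|00 and 1|11 from state 0,
# 0|01 and 1|10 from state 1.
print(trellis_section())
```

A decoder can walk this table forward one entry per received symbol, which is exactly what the Viterbi algorithm described later does.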


$G = \begin{pmatrix} G_0 & G_1 & \cdots & G_M & & \\ & G_0 & G_1 & \cdots & G_M & \\ & & G_0 & G_1 & \cdots & G_M \\ & & & \ddots & & \ddots \end{pmatrix}$ (4)

The $(K \times N)$ submatrices $G_m$, $m = 0, 1, \ldots, M$, with elements from $GF(2)$, specify how an information bit block $u_{i-m}$, delayed $m$ time steps, affects the code bit block $x_i$:

$x_i = \sum_{m=0}^{M} u_{i-m}\, G_m$ (5)

For this code (in Fig. 1) we need to specify two submatrices, $G_0$ and $G_1$, as the memory of this code is $M = 1$. The size of the submatrices must be $(K \times N)$, and as we have a rate $R = 1/2$ code, both submatrices will be of size $1 \times 2$.

The matrix $G_0$ governs how $u_i$ affects $x_i = (x_i^{(1)}, x_i^{(2)})$: as $u_i$ affects the first bit of the codeword as well as the second bit, $G_0 = (1\ 1)$. The matrix $G_1$ governs how $u_{i-1}$ affects $x_i$: as $u_{i-1}$ affects only the second bit of the codeword but not the first one, $G_1 = (0\ 1)$.

For a 3 information bit long sequence $u = (u_0, u_1, u_2)$, using (3) we get

$((x_0^{(1)}, x_0^{(2)}), (x_1^{(1)}, x_1^{(2)}), (x_2^{(1)}, x_2^{(2)})) = (u_0, u_1, u_2)\, G$ (6)

As we have 3 information bits at the input and we consider 3 time steps, the generator matrix $G$ must consist of a $3 \times 3$ array of $1 \times 2$ submatrices, i.e., it is of size $3 \times 6$. The generator matrix is obtained by considering the operation of the encoding circuit:

At time $t = 0$ we input $u_0$; as there is nothing in the register, both code bits $(x_0^{(1)}, x_0^{(2)})$ are affected by $u_0$. The generator matrix at this time step is:

$G = \begin{pmatrix} 1 & 1 \end{pmatrix}$

At time $t = 1$ we input $u_1$. As $u_0$ is stored in the register, it affects only the second bit of the codeword $(x_1^{(1)}, x_1^{(2)})$, while $u_1$ affects both bits of the codeword. The generator matrix at this time step is:

$G = \begin{pmatrix} 1 & 1 & 0 & 1 \\ & & 1 & 1 \end{pmatrix}$


C. Puncturing of convolutional codes

The idea of puncturing is to delete some bits of the code bit sequence according to a fixed rule. In general, the puncturing of a rate $K/N$ code is defined using $N$ puncturing vectors. Each vector contains $p$ bits, where $p$ is the puncturing period. If a bit is 1, the corresponding code bit is not deleted; if the bit is 0, the corresponding code bit is deleted. The $N$ puncturing vectors are combined in an $N \times p$ puncturing matrix $P$.

Consider the code in Fig. 1. Without puncturing, the information bit sequence $u = (0, 0, 1, 1, 0)$ generates the (unpunctured) code bit sequence $x_{NP} = (00, 00, 11, 10, 01)$. The sequence $x_{NP}$ is punctured using the puncturing matrix:

$P_1 = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}$

The puncturing period is $p = 4$. Using $P_1$, 3 out of 4 code bits $x_i^{(1)}$ and 2 out of 4 code bits $x_i^{(2)}$ of the mother code are used; the others are deleted. The rate of the punctured code is thus $R = \frac{1}{2} \cdot \frac{4+4}{3+2} = 4/5$, and $u$ is encoded to $x = (00, 0X, 1X, X0, 01) = (00, 0, 1, 0, 01)$, where $X$ marks a deleted bit.

The performance of the punctured code is worse than the performance of the mother code. The advantage of using puncturing is that all punctured codes can be decoded by a decoder that is able to decode the mother code, so only one decoder is needed. Using different puncturing schemes one can adapt to the channel using channel state information: send more redundancy if the channel quality is bad, and send less redundancy/more information if the channel quality is better.
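Puncturing with a matrix such as $P_1$ can be sketched as follows (my own helper, assuming the bit-tuple representation of the encoder output):

```python
def puncture(x, P):
    """Delete code bits according to an N x p puncturing matrix P.
    x is a list of N-bit tuples (one per time step); code bit n at
    time step t is kept iff P[n][t % p] == 1, where p is the period."""
    p = len(P[0])
    return [tuple(b for n, b in enumerate(block) if P[n][t % p] == 1)
            for t, block in enumerate(x)]

P1 = [[1, 1, 1, 0],
      [1, 0, 0, 1]]
# Mother-code output of the Fig. 1 encoder for u = (0, 0, 1, 1, 0):
x_np = [(0, 0), (0, 0), (1, 1), (1, 0), (0, 1)]
print(puncture(x_np, P1))  # [(0, 0), (0,), (1,), (0,), (0, 1)]
```

Of the 10 mother-code bits, 6 survive in the first period plus both bits of the restarted period, consistent with the rate increase from 1/2 toward 4/5.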

D. Decoding of convolutional codes

A very popular decoding algorithm for convolutional codes, used in the GSM standard for instance, is the Viterbi algorithm. It uses the maximum likelihood decoding principle. The Viterbi algorithm consists of the following steps for each time index [1]:

1. For all $|S|$ state nodes at time step $j$, $j = 0, 1, \ldots, L/N - 1$:

Compute the metrics for each path of the trellis ending in the state node. The metric at the next state node is obtained by adding the path metric to the metric at the previous corresponding state node. The path metric is obtained by the following formula:

$\mu_j = x_{1j} y_{1j} + x_{2j} y_{2j}$ (7)

where $x$ is the sent bit and $y$ is the received bit.

If two paths merge, choose the one with the largest metric; the other one is discarded. If the metrics are equal, the survivor path is chosen by some fixed rule.

Returning to the generator matrix example of Section B: at time $t = 2$ we input $u_2$. As $u_1$ is stored in the register, it affects only the second bit of the generated codeword $(x_2^{(1)}, x_2^{(2)})$; $u_2$ affects both bits of the codeword, and $u_0$ does not affect anything as it has already been shifted out of the register. Finally we obtain the whole generator matrix:

$G = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix}$
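The banded structure of $G$ can be checked numerically: multiplying $u$ by $G$ over $GF(2)$ must reproduce the output of the sequential encoder. A small sketch under the same assumptions (pure Python, bits as 0/1, function name my own):

```python
# Generator matrix of the Fig. 1 code for 3 time steps,
# built from the submatrices G0 = (1 1) and G1 = (0 1).
G = [
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 1],
]

def encode_with_matrix(u):
    """x = u G over GF(2): dot product of u with each column, mod 2."""
    return [sum(ui * g for ui, g in zip(u, col)) % 2 for col in zip(*G)]

print(encode_with_matrix([1, 0, 1]))  # [1, 1, 0, 1, 1, 1]
```

For $u = (1, 0, 1)$ the sequential encoder produces the pairs $(11, 01, 11)$, which is exactly the flattened matrix result.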


2. If the algorithm has processed $d$ trellis sections or more, choose the path with the largest metric, go back through the trellis and output the code bit sequence and the information bit sequence.

The parameter $d$ is the decision delay of the algorithm and specifies how many received symbols have to be processed until the first block of decoded bits is available. As a rule of thumb, the decision delay is often set to $5M$.
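The steps above can be sketched for the rate-1/2 code of Fig. 1 (a sketch, not the reference implementation of [1]; I assume BPSK mapping bit 0 → +1 and bit 1 → -1, ties broken toward input bit 0, and an optional weights argument that anticipates the soft-decision variant discussed later):

```python
def viterbi(y, weights=None):
    """Viterbi decoder for the Fig. 1 code (x1 = u, x2 = u XOR s).
    y: list of received pairs (y1, y2) with values near +/-1.
    weights: optional channel state information pairs (l1, l2).
    Returns the maximum likelihood information bit sequence."""
    bpsk = lambda b: 1 - 2 * b             # bit 0 -> +1, bit 1 -> -1
    metrics = {0: 0.0, 1: float("-inf")}   # encoder starts in the all-zero state
    paths = {0: [], 1: []}
    for t, (y1, y2) in enumerate(y):
        l1, l2 = weights[t] if weights else (1.0, 1.0)
        new_metrics, new_paths = {}, {}
        for u in (0, 1):                   # next state equals the input bit
            best_m, best_p = None, None
            for s in (0, 1):               # predecessor state
                x1, x2 = bpsk(u), bpsk(u ^ s)
                m = metrics[s] + l1 * x1 * y1 + l2 * x2 * y2  # path metric (7)
                if best_m is None or m > best_m:
                    best_m, best_p = m, paths[s] + [u]
            new_metrics[u], new_paths[u] = best_m, best_p
        metrics, paths = new_metrics, new_paths
    return paths[max(metrics, key=metrics.get)]

# One flipped symbol is corrected:
y = [(-1, 1), (1, 1), (-1, -1), (-1, 1), (1, -1)]  # u = (0,0,1,1,0), first y1 flipped
print(viterbi(y))  # [0, 0, 1, 1, 0]
```

For brevity this sketch traces back only once at the end of the sequence, rather than after every $d$ processed sections as in step 2.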

Let us consider the following encoding circuit; the corresponding trellis is depicted in Fig. 4:

Fig. 4. Trellis of the convolutional code.

The encoded sequence is $x = (+1{+}1, +1{+}1, +1{+}1, +1{+}1, +1{+}1, +1{+}1, +1{+}1)$ (BPSK-modulated bits are considered). In the received sequence some errors are introduced. Fig. 5 and Fig. 6 show that the hard-decision Viterbi algorithm performs well in the case of single errors.

[Trellis diagram with branch and path metrics: the received sequence equals the all-(+1) sequence with one symbol flipped to -1; the maximum-metric path yields the decoded sequence $u_j = (+1, +1, +1, +1, +1, +1, +1)$, i.e., no error.]

Fig. 5. Performance of the Viterbi algorithm in the case of one single error.

[Trellis diagram with branch and path metrics: the received sequence contains two single errors; the decoded sequence $u_j = (+1, +1, +1, +1, +1, +1, +1)$ again contains no error.]

[Residue of the encoding circuit and trellis of Fig. 4: a memory-2 encoder with state $(s_{1j}, s_{2j})$; each trellis branch is labeled $u_j \,|\, x_{1j}\, x_{2j}$ together with the next state $(s_{1,j+1}, s_{2,j+1})$.]


Fig. 6. Performance of the Viterbi algorithm in the case of two single errors.

Fig. 7 shows that the performance of the hard-decision Viterbi algorithm degrades significantly in the case of burst errors.

[Trellis diagram with branch and path metrics: the received sequence contains a burst of errors; the hard-decision decoder outputs $u_j = (+1, -1, -1, +1, +1, +1, +1)$, i.e., 2 decoding errors.]

Fig. 7. Performance of the Viterbi algorithm in the case of error bursts.

Fig. 8 shows that applying the soft-decision Viterbi algorithm improves the situation: the errors are corrected.

In the soft-decision case the path metric is weighted by the channel state information $l_j$:

$\mu_j^{(m)} = l_{1j} x_{1j} y_{1j} + l_{2j} x_{2j} y_{2j}$
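A minimal numeric illustration of the weighted metric (values are illustrative only; $l = 2$ stands for a good channel and $l = 0.5$ for a bad one, as in Fig. 8):

```python
def soft_branch_metric(x, y, l):
    """Soft-decision branch metric: sum over code bits of l_k * x_k * y_k."""
    return sum(lk * xk * yk for xk, yk, lk in zip(x, y, l))

# A match on a good channel contributes strongly to the path metric ...
good = soft_branch_metric((+1, +1), (+1, +1), (2.0, 2.0))   # 4.0
# ... while a mismatch on a bad channel is discounted rather than trusted.
bad = soft_branch_metric((+1, +1), (-1, -1), (0.5, 0.5))    # -1.0
print(good, bad)
```

This weighting is why soft decisions survive error bursts: the corrupted symbols arrive on the bad channel and contribute little to the path metrics.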

[Trellis diagram with soft-decision branch and path metrics for the same burst-error sequence: each branch metric is weighted by the channel state information $l_j$ ($l_j = 2$ for a good channel, $l_j = 0.5$ for a bad channel), and all errors are corrected.]

Fig. 8. Performance of the soft-decision Viterbi algorithm in the case of error bursts.

    III. CONCLUSION

    Convolutional codes are very easy to implement. Convolutional codes use smaller codewords incomparison to block codes, achieving the same quality. Puncturing techniques can be easily applied toconvolutional codes. This allows generating a set of punctured codes out of one mother code. Theadvantage is that to decode them all only one decoder is needed and adaptive coding scheme can bethus implemented. One of the most important decoding algorithms is the Viterbi algorithm that uses

    the principle of maximum likelihood decoding. The Viterbi decoding uses hard decisions is thereforevery vulnerable to error bursts. UsingSoft instead of Hard decisions for Viterbi decoding improves theperformance.

REFERENCES

[1] M. Tuechler and J. Hagenauer, Channel Coding lecture script, Munich University of Technology, pp. 111-162, 2003.