Sum-Product decoding of convolutional codes
Toshiyuki SHOHON†, Yuuichi OGAWA††, Haruo OGIWARA††

† Oyama National College of Technology, Oyama, Tochigi, Japan
†† Nagaoka University of Technology, Nagaoka, Niigata, Japan

E-mail: †[email protected], †††[email protected]

Abstract This article proposes two methods to improve the sum-product soft-in/soft-out decoding performance of convolutional codes. The first method transforms the parity check equation so as to remove cycles of length four from the Tanner graph of a convolutional code, and performs the sum-product algorithm (SPA) with the transformed parity check equation. This method improves the performance of the (7, 5)8 convolutional code (CC1). However, for the (45, 73)8 convolutional code (CC2), the method is not effective. The second proposed method uses a parity check equation of higher order than the normal one for SPA decoding. This method improves the performance for both convolutional codes (CC1, CC2). The resulting performance is close to that of the BCJR algorithm, at lower decoding complexity.

1. Introduction
Soft-input and soft-output (SISO) decoding of convolutional codes is used for decoding of turbo codes [1]. It is also applied to turbo equalization [2], [3] with convolutional codes. The most widely used SISO algorithm for convolutional codes is the modified BCJR algorithm. Its drawback is that the decoding complexity is too large for high-speed decoding.

The sum-product algorithm (SPA) used for low density parity check (LDPC) codes has low decoding complexity and is suited for parallel processing, so it is attractive to apply SPA to the decoding of convolutional codes. However, a straightforward application of SPA to the decoding of convolutional codes cannot realize good performance.

This article proposes two methods to improve the performance of SPA decoding of convolutional codes. The first method converts a Tanner graph with four-cycles into one without four-cycles. A similar method has been proposed for block codes [4]. This article derives an equivalent but more direct method that makes good use of the properties of convolutional codes.

By this method, the performance of the (7, 5)8 code (CC1) is improved, but that of the (45, 73)8 code (CC2) is not. This implies that four-cycles are not always the primary cause of poor SPA decoding performance. To improve the performance of CC2, a second method is proposed: use a parity check equation of higher order than the normal one for SPA decoding. With the two methods, the performance of SPA decoding becomes close to that of BCJR decoding.

An application of SPA to the trellis diagram of a convolutional code has been proposed; it derives the BCJR algorithm as an instance of SPA [5]. The methods proposed in this article are quite different, as they operate on the Tanner graph of the parity check equations of convolutional codes.

This article is organized as follows: section 2 discusses removing of four-cycles in a Tanner graph in polynomial form, and section 3 proposes the use of a higher order parity check equation for SPA decoding. Section 4 discusses the decoding complexity comparison with the BCJR algorithm.

2. Removing of cycles of length four in Tanner graph

2.1 Parity check equation of convolutional codes

As an example, consider a convolutional code with rate 1/2 and a generating matrix G(D) given by

G(D) = [1 G1(D)/G0(D)]. (1)

Let X(D) and P(D) denote the polynomials of an information sequence and a parity sequence, respectively. For the sake of simplicity, X(D) and P(D) are also denoted by X and P, respectively. The parity check equation is given by

H(X, P) = G1(D)X(D) + G0(D)P(D). (2)

The parity check equation satisfies H(X, P) = 0 iff (X, P) is a codeword. The order of the parity check equation H(X, P) is the maximum of the orders of G0(D) and G1(D). In this article, the order of H(X, P) is denoted by ν0.
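As an aside (my own illustration, not from the paper), eq. (2) can be exercised numerically by representing each polynomial as a bit mask (bit i = coefficient of D^i) and using carry-less multiplication. Choosing X(D) = G0(D) and P(D) = G1(D) yields a finite-length codeword, since then H = G1·G0 + G0·G1 = 0 over GF(2):

```python
def gf2_mul(a: int, b: int) -> int:
    """Carry-less multiplication of GF(2)[D] polynomials stored as bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# CC1: G(D) = [1 (D^2+1)/(D^2+D+1)], i.e. (7, 5) in octal.
G0 = 0o7  # D^2 + D + 1
G1 = 0o5  # D^2 + 1

def parity_check(X: int, P: int) -> int:
    """H(X, P) = G1(D)X(D) + G0(D)P(D) over GF(2), as in eq. (2)."""
    return gf2_mul(G1, X) ^ gf2_mul(G0, P)

print(parity_check(X=G0, P=G1))      # → 0 (a codeword)
print(parity_check(X=G0, P=G1 ^ 1))  # nonzero: flipping one parity coefficient breaks the check
```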

2.2 Algorithm
A method of removing four-cycles (RmFC) in a Tanner graph has been proposed for block codes [4]. That discussion is based on a parity check matrix. For convolutional codes, the same method is formulated here using polynomials, which is better suited to convolutional codes. As an example, code CC1 is taken up. The generating matrix G(D) of the code is G(D) = [1 (D2 + 1)/(D2 + D + 1)]. The parity check equation H(X, P) of the code is given by

H(X, P) = (D2 + 1)X(D) + (D2 + D + 1)P(D). (3)

Figure 1 shows the Tanner graph of the code. There are four-cycles, as shown by bold lines. An information bit xk and a parity bit pk at the same time instance k belong to a four-cycle. To remove the cycles, an auxiliary bit node sequence A(D) given by

A(D) = X(D) + P(D) (4)

is added and an auxiliary check node sequence given by

978-1-4244-4378-9/09/$25.00 ©2009 IEEE Proceedings of IWSDA’09

Fig. 1 Tanner graph of the [1 (D2+1)/(D2+D+1)] code (variable nodes xk, pk; check nodes Ck).

Fig. 2 Tanner graph after removing of cycles of length four (variable nodes xk, ak, pk; check nodes Ck, C′k).

H1(X, P, A) = X(D) + P(D) + A(D) (5)

is added. As a result, the cycles are removed. Substituting eq. (4) into eq. (3), we have

H2(X, P, A) = DP(D) + (D2 + 1)A(D). (6)

Equations (5) and (6) are the new parity check equations, and the corresponding Tanner graph is given by Fig. 2. It shows that the cycles are removed and the girth is 8.
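That the transformed equations carry the same information as eq. (3) can also be checked numerically: for any X and P with A = X + P, eq. (5) vanishes identically, and eq. (6) reproduces the original syndrome because (D2 + 1)(X + P) + DP = (D2 + 1)X + (D2 + D + 1)P over GF(2). A minimal sketch of this check (mine, not from the paper), representing polynomials as bit masks:

```python
import random

def gf2_mul(a: int, b: int) -> int:
    """Carry-less multiplication of GF(2)[D] polynomials stored as bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

random.seed(0)
for _ in range(100):
    X = random.getrandbits(32)
    P = random.getrandbits(32)
    A = X ^ P                                # eq. (4): A(D) = X(D) + P(D)
    H  = gf2_mul(0o5, X) ^ gf2_mul(0o7, P)   # eq. (3): (D^2+1)X + (D^2+D+1)P
    H1 = X ^ P ^ A                           # eq. (5): identically zero by construction
    H2 = gf2_mul(0o2, P) ^ gf2_mul(0o5, A)   # eq. (6): D*P + (D^2+1)A
    assert H1 == 0 and H2 == H               # (5), (6) vanish exactly when (3) does
print("RmFC equations are equivalent to the original check")
```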

In a similar way, the parity check equations for CC2 after removing the cycles are given by

H1(X, P, A) = (D + 1)X(D) + D2P(D) + A(D) (7)
H2(X, P, A) = D5X(D) + P(D) + (D3 + 1)A(D) (8)

and the girth becomes 6.

2.3 Bit error rate performance
Figure 3 shows the bit error rate (BER) performance, obtained by simulation, of CC1 and CC2 after RmFC over the additive white Gaussian noise (AWGN) channel. In the simulation, the number of information bits in a block is 1024, the number of transmitted blocks is 10^4, and the maximum number of SPA decoding iterations is 100. The figure shows that the performance of CC1 with SPA is only 0.1 dB inferior to that of the BCJR algorithm, but that of CC2 with SPA is much worse than that of the BCJR algorithm. This result shows that RmFC improves the performance of some convolutional codes, but not of others.

3. Decoding based on higher order parity check equation

3.1 Higher order parity check equation
To improve the performance of CC2, a second method is proposed: use a parity check equation of higher order than the normal one for SPA decoding. The parity check equation is given by

Fig. 3 Bit error rate performance: (a) CC1, (b) CC2 (curves: Original, RmFC, BCJR).

H′(X, P) = M(D)H(X, P) (9)
= M(D)G1(D)X(D) + M(D)G0(D)P(D) (10)
= G′1(D)X(D) + G′0(D)P(D) (11)

where


Table 1 HOPCE for CC1.
ν   nfc   G′0    G′1
2    2      7      5
3    5     11     17
4    3     33     21
5    5     43     71
6    5    153    101
7    6    321    207
8    9    653    401
9    8   1503   1031
10  15   3113   2241

Table 2 HOPCE for CC2.
ν   nfc   G′0       G′1
5   11        45        73
6   31       157       111
7   23       261       327
8   18       507       625
9   16      1427      1045
10   5      3101      2007
11  11      5303      6011
12  11     14441     10047
13  12     24033     31001
14  14     61033     42001
15   9    120003    144111
16   7    304003    210111
17  17    640201    442107
18  19   1450003   1014111

G′0(D) = M(D)G0(D) (12)
G′1(D) = M(D)G1(D) (13)

and M(D) is an arbitrary non-zero polynomial in D. We call H(X, P) the original parity check equation. The derived check equation is equivalent to the original one, since we have

H′(X, P) = M(D)H(X, P) = 0 iff H(X, P) = 0, and
H′(X, P) = M(D)H(X, P) ≠ 0 iff H(X, P) ≠ 0. (14)

From eqs. (11)–(13), the order of H′(X, P) is higher than that of H(X, P). Therefore, we call H′(X, P) a higher order parity check equation (HOPCE). Also from eqs. (11)–(13), the order ν of H′(X, P) is controlled by the order of M(D). Table 1 and Table 2 show the HOPCEs for CC1 and CC2, respectively. Since a method to obtain the optimum H′(X, P) is not known, the check equations are given by the following heuristic method. First, obtain H′(X, P) for every M(D) of the predefined order. Next, select the H′(X, P) with the minimum nfc, the number of four-cycles per check node.
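As a concrete check (my own, not stated in the paper): multiplying CC1's original check polynomials by M(D) = D^2 + 1, a polynomial of order 2, reproduces the ν = 4 row of Table 1:

```python
def gf2_mul(a: int, b: int) -> int:
    """Carry-less multiplication of GF(2)[D] polynomials stored as bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# CC1 original check polynomials, as in eqs. (2)/(3)
G0, G1 = 0o7, 0o5            # D^2+D+1, D^2+1

M = 0o5                      # M(D) = D^2 + 1, raising the order from ν0 = 2 to ν = 4
G0p = gf2_mul(M, G0)         # eq. (12): G0'(D) = M(D)G0(D)
G1p = gf2_mul(M, G1)         # eq. (13): G1'(D) = M(D)G1(D)
print(oct(G0p), oct(G1p))    # → 0o33 0o21, matching the ν = 4 HOPCE row of Table 1
```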

3.2 Number of four-cycles per check node
Let Ct be the check node at time instance t. The number of four-cycles per check node concerning Ct, denoted nfc,t, is defined as the number of four-cycles constructed by Ck and Ct (t ≥ k + 1).

Since nfc,t does not count the four-cycles already counted for earlier check nodes Ck (k ≤ t − 1), the four-cycles at each time instance are counted without duplication. In addition, since nfc,t does not change with t for convolutional codes, the suffix t can be omitted, and we write nfc.

Fig. 4 Bit error rate performance for order of HOPCE: (a) CC1, (b) CC2.

Picking up CC1 as an example, we count nfc. Equation (2) becomes

H(X, P) = (D2 + 1)X(D) + (D2 + D + 1)P(D). (15)

The Tanner graph is given by Fig. 1. It shows that the four-cycles constructed by Ck and Ct (t ≥ k + 1) are

1. Ck → pk−1 → Ck+1 → pk → Ck

2. Ck → xk → Ck+2 → pk → Ck.

Therefore, we have nfc = 2.
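This count generalizes: Ck and Ck+d share a variable node whenever two taps of G1 (or of G0) lie d positions apart, and each pair of shared variable nodes closes one four-cycle. The following sketch of that counting rule is my own formulation; it reproduces nfc = 2 for CC1 and nfc = 11 for the original check equation of CC2 (the ν = 5 row of Table 2):

```python
from math import comb

def supp(poly: int) -> set:
    """Tap positions: exponents of D with nonzero coefficient."""
    return {i for i in range(poly.bit_length()) if (poly >> i) & 1}

def n_four_cycles(G1: int, G0: int) -> int:
    """Four-cycles per check node: for each offset d >= 1, count the variable
    nodes shared by C_k and C_{k+d}; every pair of shared nodes is one cycle."""
    sx, sp = supp(G1), supp(G0)
    max_d = max(max(sx, default=0), max(sp, default=0))
    total = 0
    for d in range(1, max_d + 1):
        shared = len(sx & {i + d for i in sx}) + len(sp & {i + d for i in sp})
        total += comb(shared, 2)
    return total

print(n_four_cycles(0o5, 0o7))    # CC1 (7, 5)_8 → 2, as counted above
print(n_four_cycles(0o73, 0o45))  # CC2 (45, 73)_8 original check → 11
```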

3.3 BER performance for order of HOPCE
Figure 4 shows the performance versus the order of the HOPCE. The simulations are carried out at Eb/N0 = 5.0 [dB] with 100 maximum iterations. The leftmost plot corresponds to the original parity check equation. The figure shows that performance is improved by using the HOPCE, though the BER does not decrease monotonically with the order. The best performance is obtained by ν = 4, nfc = 3, (G′0, G′1) = (33, 21)8 for CC1, and by ν = 10, nfc = 5, (G′0, G′1) = (3101, 2007)8 for CC2.


Fig. 5 Bit error rate performance: (a) CC1, (b) CC2 (curves: Original SPA, HOPCE SPA, BCJR).

3.4 BER performance
Figure 5 shows the performance of SPA decoding using the best parity check equations given in the previous subsection. The simulation conditions are the same as in the previous section. By using the HOPCEs, performance is improved by 0.9 dB and 1.3 dB compared to conventional SPA decoding, and is only 0.1 dB and 0.3 dB inferior to BCJR decoding at BER = 10^−4, for CC1 and CC2, respectively.

4. Decoding complexity
The decoding complexity of the proposed algorithm is compared to that of the BCJR algorithm. Table 3 shows the number of operations needed for 1-bit SISO decoding. In the table, k and n denote the number of input bits and the number of output bits per time instance, respectively; m and dc denote the number of delay elements in the encoder and the degree of a check node, respectively.

The average number of SPA iterations was 2.7 for CC1 and 1.4 for CC2 at Eb/N0 = 6 [dB]. Taking the SPA iterations into account, the number of SPA operations becomes 105.3 for CC1 and 74.2 for CC2. For CC1, the number of operations of SPA is comparable to that of BCJR. For CC2, the number of operations of SPA is 0.14

Table 3 Total number of operations

                          sum-product   BCJR
addition                  dc − 2        (3·2^k − 1)2^m + k·2^n + 2n − 1
multiplication            4dc − 1       (1 + 2^(k+1))2^(m+1) + (2k + n − 1)2^n + 6n + 1
special operation
(tanh, tanh^−1, log, exp) 2dc           2^n + k·2^n + 1
total: CC1                39            101
       CC2                53            521
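The totals in Table 3 can be reproduced from the per-row formulas. The sketch below is my own arithmetic check, under the assumption that dc is the check-node degree, i.e. the total number of nonzero terms in G′0 and G′1 (6 for CC1's (33, 21)8 and 8 for CC2's (3101, 2007)8), with k = 1, n = 2, and m = 2 for CC1 or m = 5 for CC2:

```python
def spa_ops(dc: int) -> int:
    """Sum-product operations per bit, per Table 3 (dc = check-node degree)."""
    add  = dc - 2
    mul  = 4 * dc - 1
    spec = 2 * dc                       # tanh / tanh^-1 evaluations
    return add + mul + spec

def bcjr_ops(k: int, n: int, m: int) -> int:
    """BCJR operations per bit, per Table 3 (k inputs, n outputs, m delay elements)."""
    add  = (3 * 2**k - 1) * 2**m + k * 2**n + 2 * n - 1
    mul  = (1 + 2**(k + 1)) * 2**(m + 1) + (2 * k + n - 1) * 2**n + 6 * n + 1
    spec = 2**n + k * 2**n + 1          # exp / log evaluations
    return add + mul + spec

print(spa_ops(dc=6), bcjr_ops(k=1, n=2, m=2))  # CC1 → 39 101
print(spa_ops(dc=8), bcjr_ops(k=1, n=2, m=5))  # CC2 → 53 521
```

The per-iteration counts times the average iteration numbers (2.7 and 1.4) give the 105.3 and 74.2 quoted in the text.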

times of that of BCJR.

5. Conclusion
Sum-product decoding of convolutional codes is discussed. The conventional method has been known to give poor performance, and the reason has been considered to be the existence of four-cycles in the Tanner graph. A method to remove the cycles is discussed in polynomial form, which is better suited to convolutional codes than the matrix form already proposed for block codes. The method is applied to two convolutional codes that are optimum component codes for turbo codes. The performance of one code (CC1) is improved by the proposed method to within 0.1 dB of BCJR decoding. The performance of the other code (CC2) remains insufficient.

To improve the performance, this paper proposes the use of a parity check equation of higher order than the normal one for SPA decoding. By this method, performance comes within 0.1 dB and 0.3 dB of BCJR decoding for CC1 and CC2, respectively.

References
[1] Berrou, C., Glavieux, A., “Near optimum error correcting coding and decoding: Turbo-codes,” IEEE Trans. Commun., vol.44, no.9, pp.1261–1271, Sep. 1996.

[2] Catherine Douillard, Michel Jezequel, Claude Berrou, Annie Picart, Pierre Didier, and Alain Glavieux, “Iterative correction of inter-symbol interference: Turbo-Equalization,” European Trans. Telecommun., vol.6, no.5, pp.507–511, 1995.

[3] Christophe Laot, Alain Glavieux, and Joel Labat, “Turbo equalization: adaptive equalization and channel decoding jointly optimized,” IEEE J. Selected Areas in Commun., vol.19, no.9, pp.1744–1752, Sep. 2001.

[4] S. Sankaranarayanan and B. Vasic, “Iterative decodingof linear block codes: A parity-check orthogonalizationapproach,” IEEE Trans. Inform. Theory, vol.51, no.9,pp.3347–3353, Sep. 2005.

[5] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Trans. Inform. Theory, vol.47, no.2, pp.498–519, 2001.
