

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 42, NO. 2, MARCH 1996 649

where e_i is not necessarily contained in the subalgebra F_qX. Another generalization is provided by Theorem 9, which indicates that it seems worthwhile to study further generalizations of repeated-root cyclic codes. For this, however, it appears that the currently available techniques from both algebraic coding theory and representation theory need further refinement and generalization.

ACKNOWLEDGMENT

The author wishes to thank W. Willems and the referees for helpful comments.


An Algebraic Procedure for Decoding Beyond eBCH

Jan E. M. Nilsson, Member, IEEE

Abstract—We present a new way to find all possible error patterns up to a given weight greater than the designed error-correcting capability (e_BCH). Possible error positions are localized by indicators. Our method can be seen as an extension of the step-by-step decoder introduced by Massey.

Index Terms—Algebraic decoding, beyond e_BCH, indicator function, high-speed decoding.

I. INTRODUCTION

The well-known Berlekamp-Massey decoding algorithm decodes up to the designed error-correcting capability (e_BCH). In recent years, algebraic decoding of cyclic codes beyond e_BCH (e_BCH follows from the BCH bound) and up to half the true minimum Hamming distance d_min of the code has been intensively studied; see, for example, [1], [2], [5]. Here, we present a new algorithm for decoding beyond e_BCH, suitable when 2e_BCH + 1 = d_min. Let t_max be an integer larger than t, where 2t + 1 = d_min. Our algorithm generates all error patterns of weight t_max and less. Clearly, not all error patterns of weight t_max and less can be corrected. Despite that fact, for simplicity, a decoder generating all error patterns of weight up to t_max will be called a t_max-error correcting decoder. The proposed decoder outputs a set (list) of error patterns of different weights (and the corresponding codewords). Every error pattern e corresponds to a codeword c = y + e, where y is the received sequence. The procedure is suitable, in particular, for high-speed applications, and then for correction of at most t + 2 errors, where t is at most four or five. Other methods for finding error patterns of weight greater than t are presented in [3, p. 231], [4, p. 268], [12].

The idea of step-by-step decoding for block codes was introduced by Prange [11]. Based on that idea, Massey suggested step-by-step decoding procedures for BCH codes [8]; see also [6], [16]. For BCH codes, the relation between the syndromes and the number of errors (indicator function) follows from the determinant given in the Peterson-Gorenstein-Zierler decoder [8]. However, the syndromes known at the receiver only allow us to calculate the determinants necessary for correcting up to e_BCH errors. In the algorithm we propose, additional relations are used to extend the error-correcting capability.

II. PRELIMINARIES

We will consider decoding of binary cyclic codes beyond e_BCH. Let t = e_BCH (2t + 1 ≤ d_min). The decoder that corrects up to t errors is described in [8]; see also [7, p. 274]. It uses an indicator to check whether the weight of the error pattern is reduced or increased when a position is inverted. Let us define

S_j := e(α^j) = e_0 + e_1 α^j + e_2 α^{2j} + ... + e_{n-1} α^{(n-1)j},   j ≥ 0

Manuscript received August 30, 1994; revised October 9, 1995. The material in this correspondence was presented in part at the Sixth Joint Swedish-Russian International Workshop on Information Theory, Mölle, Sweden, August 22-27, 1993.

The author is with FOA 72, National Defence Research Establishment, S-581 11 Linköping, Sweden.

Publisher Item Identifier S 0018-9448(96)01182-0.

0018-9448/96$05.00 © 1996 IEEE


where e denotes the error pattern. The relation between the syndromes and the number of errors (indicator function) follows from the determinant det(L_j), where

      | S_1       1         0         0         ...  0   |
      | S_3       S_2       S_1       1         ...  0   |
L_j = | ...       ...       ...       ...       ...  ... |        (1)
      | S_{2j-1}  S_{2j-2}  S_{2j-3}  S_{2j-4}  ...  S_j |

When the weight of the error pattern e is j − 1 or less, det(L_j) = 0, and when the weight of e is j or j + 1, det(L_j) ≠ 0; 1 ≤ j ≤ n − 1; see [8]. Notice that for fields GF(2^m) we have S_{2k} = S_k^2 for integers k ≥ 1. Only the determinants (L_1, L_2, ..., L_t) necessary for correcting up to t = e_BCH errors can be calculated from the syndromes known at the receiver, S_1, S_2, ..., S_{2t}. Let S_j^(i) and det^(i)(L_j) denote the syndrome and determinant, respectively, when position i in the received sequence y is inverted; i.e., S_j^(i) = S_j + α^{ij}. The weight of an error vector e is denoted wt(e).
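To make the role of det(L_j) concrete, here is a self-contained sketch. It is my own construction, not the paper's code: it builds L_j from the syndromes of a chosen error pattern and evaluates its determinant by Gaussian elimination over GF(2^4); the helper names and the toy parameters (n = 15, a weight-2 pattern) are assumptions.

```python
# Sketch (assumed toy setup): det(L_j) as an error-weight indicator over GF(2^4).

def gf_mul(a, b, poly=0b10011, m=4):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 2 ** 4 - 2)  # a^(2^m - 2) = a^(-1)

def syndrome(e, j, alpha=0b0010):
    s = 0
    for i, bit in enumerate(e):
        if bit:
            s ^= gf_pow(alpha, i * j)
    return s

def L(j, S):
    """The j x j matrix of (1); S[k] holds S_k with S[0] = 1, and S_k = 0 for k < 0."""
    entry = lambda k: 0 if k < 0 else S[k]
    return [[entry(2 * r + 1 - c) for c in range(j)] for r in range(j)]

def det_gf(M):
    """Determinant by Gaussian elimination; in characteristic 2 no signs are needed."""
    M = [row[:] for row in M]
    d = 1
    for c in range(len(M)):
        piv = next((r for r in range(c, len(M)) if M[r][c]), None)
        if piv is None:
            return 0
        M[c], M[piv] = M[piv], M[c]
        d = gf_mul(d, M[c][c])
        inv = gf_inv(M[c][c])
        for r in range(c + 1, len(M)):
            f = gf_mul(M[r][c], inv)
            for k in range(c, len(M)):
                M[r][k] ^= gf_mul(f, M[c][k])
    return d

# Weight-2 error pattern on n = 15: det(L_2) != 0 and det(L_3) = 0, as stated above.
e = [0] * 15
e[1] = e[6] = 1
S = [1] + [syndrome(e, k) for k in range(1, 6)]
assert det_gf(L(2, S)) != 0 and det_gf(L(3, S)) == 0
```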

A. Correction of t + 1 Errors

In this case we have to use the unknown syndrome S_{2t+1}. From (1), it follows that det(L_{t+1}) can be expressed in the following form:

det(L_{t+1}) = A + S_{2t+1} det(L_{t-1})        (2)

where A is obtained from the known syndromes S_j, j = 1, 3, ..., 2t − 1. Now, det(L_{t+1}) = 0 if wt(e) < t + 1, and det(L_{t+1}) ≠ 0 if t + 1 ≤ wt(e) ≤ t + 2. Hence, if t + 1 errors have occurred and one error is corrected with the step-by-step procedure, we get det(L_{t+1}) = 0.

Assume t + 1 errors have occurred and one error is on position i; in that case, when position i is inverted (t errors remain),

det^(i)(L_{t+1}) = A^(i) + S_{2t+1}^(i) det^(i)(L_{t-1}) = 0        (3)

where A^(i) is obtained from the known syndromes S_j^(i). Notice that S_{2t+1}^(i) = S_{2t+1} + α^{(2t+1)i}.

Since for t errors det^(i)(L_{t+1}) = 0 and det^(i)(L_{t-1}) ≠ 0, we can calculate S_{2t+1} from (3). Let the n positions be inverted step by step. In that case, the "correct" syndrome value will be obtained from (3) t + 1 times if t + 1 errors have occurred. Whenever a syndrome value appears exactly t + 1 times, we have identified a possible error pattern, with errors on the positions where that value occurs. The question is whether the same syndrome value can appear more than t + 1 times.
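The following numerical sketch is my own toy setup, not from the correspondence; it illustrates this step for t = 2 over GF(2^4) with n = 15. For t = 2, expanding (1) with S_2 = S_1^2 and S_4 = S_1^4 gives det(L_3) = A + S_5 det(L_1), where A = S_1^6 + S_1^3 S_3 + S_3^2 and det(L_1) = S_1. Since S_j^(i) = S_j + α^{ij}, the tentative value of S_5 obtained after inverting position i is A^(i)/S_1^(i) + α^{5i}; the field, the error positions, and the helper names are assumptions.

```python
# Sketch (assumed toy setup): recovering the unknown syndrome S_5 from (3) for t = 2.

def gf_mul(a, b, poly=0b10011, m=4):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 14)  # a^(2^4 - 2) = a^(-1) in GF(2^4)

def syndrome(e, j, alpha=2):
    s = 0
    for i, bit in enumerate(e):
        if bit:
            s ^= gf_pow(alpha, i * j)
    return s

n, t = 15, 2
errors = {0, 3, 7}                       # wt(e) = t + 1 = 3
e = [1 if i in errors else 0 for i in range(n)]
S5 = syndrome(e, 5)                      # unknown at the receiver

tentatives = []
for i in errors:                         # invert a true error position: t errors remain
    ei = e[:]
    ei[i] ^= 1
    S1, S3 = syndrome(ei, 1), syndrome(ei, 3)
    A = gf_pow(S1, 6) ^ gf_mul(gf_pow(S1, 3), S3) ^ gf_mul(S3, S3)
    # Solve (3): S_5^(i) = A^(i) / det^(i)(L_1), then undo the inversion of position i.
    tentatives.append(gf_mul(A, gf_inv(S1)) ^ gf_pow(2, 5 * i))

# The "correct" syndrome value is obtained t + 1 times.
assert tentatives == [S5] * (t + 1)
```

At a position that is not in error, the weight after inversion is t + 2, det^(i)(L_{t+1}) ≠ 0, and the solved value differs; only the true error positions agree.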

It is of interest to see whether two different valid error patterns e and e′ can result in the same sequence of syndrome values S_1, S_3, ..., S_{2t−1}, S_{2t+1}. Clearly, we must have e = e′ + c, where c is a codeword in the code C, and wt(e) + wt(e′) ≥ d_min. Let S_j = e(α^j), S′_j = e′(α^j), and C_j = c(α^j); j = 1, 3, ..., 2t + 1. Then

S_j = S′_j + C_j,   j = 1, 3, ..., 2t + 1.

The condition c ∈ C implies C_j = 0 for j = 1, 3, ..., 2t − 1. However, C_{2t+1} could have any value. Notice that C_{2t+1} = 0 does not imply that c is the zero codeword. However, C_{2t+1} = 0 implies that c is in a smaller code C′ ⊂ C with minimum distance larger than or equal to 2t + 3. Hence, if C_{2t+1} = 0, we must have d(e, e′) ≥ 2t + 3, where d(e, e′) is the Hamming distance between e and e′. On the other hand, if C_{2t+1} ≠ 0, then also S_{2t+1} ≠ S′_{2t+1}. We conclude that e and e′ can only result in the same sequence of syndrome values if d(e, e′) ≥ 2t + 3.

Now assume we want to correct an error pattern e_{t+1} of weight t + 1. Let

e_{t+1} = c + e′,   c ∈ C′

where the "wrong" error pattern e′ satisfies wt(e′) ≥ t + 2. In this situation, the same sequence of syndrome values could be obtained from e_{t+1} and e′. If we invert a "correct" position, the weight of e_{t+1} is reduced to t, det(L_{t+1}) = 0, and we can calculate S_{2t+1} from (3). If we invert a "wrong" position (a position not reducing the weight of e_{t+1}), the weight of e′ can be reduced to at most t + 1. In that case it is clear that det(L_{t+1}) ≠ 0, and (3) has to give us a different tentative syndrome value. However, an error pattern of weight at most t + 2 has to exist if a "wrong" position is inverted. Consequently, in all cases when a "wrong" position is inverted, det(L_{t+1}) ≠ 0, since the value of det(L_{t+1}) is determined by the error pattern of smallest weight.

Hence, an error pattern of weight t + 1 is not uniquely determined if the same tentative syndrome value appears more than t + 1 times, and such cases do not have to be considered. We conclude that all possible (valid) error patterns of weight t + 1 are obtained directly by comparing the values of the generated syndromes in the way described above. However, if no valid error pattern of weight t + 1 exists, it cannot be excluded that a tentative syndrome value appears t + 1 times. Therefore, we must check that the results obtained are codewords.

B. Correction of More than t + 1 Errors

In this correspondence we only briefly describe correction of t + 2, and more, errors. More details can be found in [10]; see also [9]. For correction of t + 2 errors we have to use the two unknown syndromes S_{2t+1} and S_{2t+3}. The idea is the same as for t + 1 error correction. Assume we have an error on position i. In that case, when position i is inverted (t + 1 errors remain and det^(i)(L_t) ≠ 0),

det^(i)(L_{t+2}) = B^(i) + C^(i) (S_{2t+1}^(i))^2 + D^(i) S_{2t+1}^(i) + S_{2t+3}^(i) det^(i)(L_t) = 0        (4)

where B^(i), C^(i), and D^(i) are obtained from the known syndromes S_j^(i), j = 1, 3, ..., 2t − 1. To obtain possible error patterns of weight t + 2, we proceed in the following way. First, generate B^(i), C^(i), D^(i), and det^(i)(L_t) from the known syndromes. Second, try all elements for which S_{2t+1} is defined and calculate S_{2t+3} from (4) for 0 ≤ i ≤ n − 1. Then, all possible error patterns of weight t + 2 follow directly by comparing the obtained syndromes S_{2t+3}, as in the case of "t + 1" error correction. If S_{2t+1} ∈ GF(2^m), we have to carry out 2^m comparisons. It is important that, for every i, B^(i), C^(i), D^(i), and det^(i)(L_t) have to be computed only once. It is then fairly easy to generate a large number of tentative syndromes S_{2t+3}.

For correction of t + 3 and more errors, the same idea can be extended. However, the decoding complexity, in particular in terms of comparisons in the final step, grows very fast. Whenever we have to work with k unknown syndromes over GF(2^m), we have to try, for k − 1 of them, the 2^m possible values and all 2^{m(k−1)} combinations.

III. THE DECODING ALGORITHM

In the following, for simplicity, only correction of at most t + 1 errors is treated. We let e be a binary n-vector containing tentative error positions. Whenever a tentative error vector of weight at most t exists, we obtain, based on the determinants, the complete indicators

I_j := 1, if j errors are possible in y; I_j := 0, otherwise

for all 0 ≤ j ≤ t − 1; see [8]. As illustrated in the previous section, A^(i) is obtained from the known syndromes. Notice that det^(i)(L_{t-1}) is obtained during the calculation of I_{t-1}^(i). In Step 4) of the algorithm below (t + 1 error correction), we calculate n tentative


syndrome values for S_{2t+1}, denoted by A_i, i = 0, 1, ..., n − 1, where

A_i = A^(i)/det^(i)(L_{t-1}) + α^{(2t+1)i},   if det^(i)(L_{t-1}) ≠ 0,   A_i ∈ GF(2^m).        (5)

When det^(i)(L_{t-1}) = 0, we set A_i = ∞, which simply indicates that this value should not be considered in the comparison.

Let us introduce an operation C^t comparing the contents, excluding the value "∞," of a vector A = (A_0, A_1, ..., A_{n−1}). When exactly t different positions have the same value, those positions are stored in a first vector of length t. When exactly t other positions share another value (different from the first), those positions are stored in a second vector, and so on. We let C^t(A) be the result: a set of vectors of length t containing indices. The number of vectors found is denoted by |C^t(A)|, and the jth vector by C_j^t(A).

Example 1:

C^3(A) = C^3(α^2, α^11, α^6, α^4, α^13, ∞, α^6, α^1, α^3, α^2, α^4, α^5, α^6, α^13, α^2, ∞, α^4)

|C^3(A)| = 3,   C_1^3(A) = (0, 9, 14).

Hence, C_1^3(A) indicates that the same value (here α^2) appears exactly three times, in positions 0, 9, and 14. □
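A direct implementation of the operation C^t is a small grouping pass. The sketch below is my own code, not the paper's; the encoding of α^k as the integer k and of "∞" as None is an assumption, and the test vector is one consistent with Example 1 (the value α^2 at positions 0, 9, and 14).

```python
# Sketch of the comparison operation C^t (my own implementation; assumed encoding:
# alpha^k is represented by the integer k, "infinity" entries by None).
from collections import defaultdict

def C(t, A):
    """Return one index vector for every value (excluding infinity) that appears
    at exactly t positions of A."""
    groups = defaultdict(list)
    for i, a in enumerate(A):
        if a is not None:
            groups[a].append(i)
    return [tuple(p) for p in groups.values() if len(p) == t]

# A vector consistent with Example 1 (entries partly reconstructed/assumed):
A = [2, 11, 6, 4, 13, None, 6, 1, 3, 2, 4, 5, 6, 13, 2, None, 4]
result = C(3, A)
assert len(result) == 3 and (0, 9, 14) in result
```

Each returned tuple is one tentative error pattern's position set, matching the |C^t(A)| vectors used in Step 4) of the algorithm.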

The Algorithm:

Step 1) Calculate the known syndromes S_j and S_j^(i). Thereafter, compute I_0, I_1, ..., I_{t−1}, I_0^(i), I_1^(i), ..., and A^(i), det^(i)(L_{t−1}); 0 ≤ i ≤ n − 1.

Step 2) If I_0 = 1 (no error), store the codeword and stop.

Step 3) (Correction of up to t errors) Reset e. If I_j = 1, invert position i of e if I_{j−1}^(i) = 1; j ≤ t − 1; 0 ≤ i ≤ n − 1. If j such positions exist, check whether y + e is a codeword, in which case store it. If no codeword is found, reset e, and if I_{t−1} = 0, invert position i of e if I_{t−1}^(i) = 1; 0 ≤ i ≤ n − 1. If t such positions exist, check whether y + e is a codeword, in which case store it.

Step 4) (Correction of t + 1 errors) Calculate A, where A_i is obtained from (5) above. If |C^{t+1}(A)| ≠ 0, generate the |C^{t+1}(A)| tentative error patterns found; the jth error pattern is given by C_j^{t+1}(A), 1 ≤ j ≤ |C^{t+1}(A)|. Store the valid codewords obtained from those error patterns. Stop.

The algorithm has been tested and verified by simulations. We have noted that when no error pattern e of weight at most t exists, I_0, I_1, ..., I_{t−1} can nevertheless indicate wt(e) ≤ t. In that case the generated error pattern may be wrong. It is therefore necessary to check whether the result y + e is a codeword. Only the k information positions have to be inverted for t-error correction if the syndrome polynomial is employed; see [13]. However, all n positions have to be inverted if we want to correct more than t errors. Here, therefore, it is no simplification to consider only k positions in Step 3). The indicators required in those steps are obtained as intermediate results in the calculation of the indicators required for Step 4); see [10]. By taking the structure of the code into consideration, the decoder can be simplified. For example, if the minimum distance of the code is seven and we find an error pattern of weight one or two, the algorithm can be stopped (no other tentative error patterns of weight four or less can exist). Next, we give an example of the calculations required for correcting four errors in the case of 3-error-correcting BCH codes.

Example 2: Let t = e_BCH = 3. For 3-error-correcting BCH codes, three syndromes are known: S_1, S_3, and S_5. The indicators for 3-error correction are the following:

I_0 = 1 iff S_1 = S_3 = S_5 = 0; I_0 = 0 otherwise.

I_1 = 1 iff I_0 = 0, S_3 = S_1^3, S_5 = S_1^5; I_1 = 0 otherwise.

I_2 = 1 iff I_0 = 0, I_1 = 0, S_1^6 + S_1 S_5 + S_1^3 S_3 + S_3^2 = 0; I_2 = 0 otherwise.

Here we have added additional conditions for zero and one error to increase the probability that the indicators directly detect that no error pattern of weight at most t exists. For four-error correction we use

det(L_4) = A + S_7 det(L_2)

where, expanding (1) with S_2 = S_1^2, S_4 = S_1^4, and S_6 = S_3^2, we obtain

A = S_1^{10} + S_1^7 S_3 + S_1^5 S_5 + S_1^2 S_3 S_5 + S_1 S_3^3 + S_5^2,   det(L_2) = S_1^3 + S_3.

Notice that det(L_2) is required already for the calculation of I_1. If two or three errors have occurred, det(L_2) ≠ 0. Assume one error is corrected on position i and three errors remain; then det^(i)(L_4) = 0 and

A_i = A^(i)/det^(i)(L_2) + α^{7i}.

Finally, calculate the vector A, and if the same value appears at exactly four different positions in A, those positions identify a valid error pattern of weight four. Notice that whenever det^(i)(L_2) = 0, we know directly that an additional error was introduced. Hence, those positions do not have to be considered. □
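The indicators of Example 2 can be evaluated with a handful of field multiplications. The sketch below is my own toy setup over GF(2^4); the field, the primitive polynomial, and the helper names are assumptions (in practice the syndromes live in the GF(2^m) of the code at hand).

```python
# Sketch (assumed toy setup): the indicators I_0, I_1, I_2 of Example 2 over GF(2^4).

def gf_mul(a, b, poly=0b10011, m=4):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def indicators(S1, S3, S5):
    """Return (I_0, I_1, I_2) as booleans from the three known syndromes."""
    I0 = S1 == S3 == S5 == 0
    I1 = (not I0) and S3 == gf_pow(S1, 3) and S5 == gf_pow(S1, 5)
    I2 = (not I0) and (not I1) and (
        gf_pow(S1, 6) ^ gf_mul(S1, S5) ^ gf_mul(gf_pow(S1, 3), S3) ^ gf_mul(S3, S3)
    ) == 0
    return I0, I1, I2

def syndrome(e, j, alpha=2):
    s = 0
    for i, bit in enumerate(e):
        if bit:
            s ^= gf_pow(alpha, i * j)
    return s

# A weight-1 pattern should fire I_1, a weight-2 pattern I_2.
n = 15
one_err = [1 if i == 4 else 0 for i in range(n)]
two_err = [1 if i in (2, 9) else 0 for i in range(n)]
assert indicators(*[syndrome(one_err, j) for j in (1, 3, 5)]) == (False, True, False)
assert indicators(*[syndrome(two_err, j) for j in (1, 3, 5)]) == (False, False, True)
```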

IV. DECODING COMPLEXITY

Decoding complexity is closely related to the desired application and the type of implementation technology used. Furthermore, the decoder can be simplified whenever only one valid error pattern can exist, i.e., e_BCH < t_max ≤ (d_min − 1)/2. Here we investigate the case

e_BCH = (d_min − 1)/2;   t_max > e_BCH.

Compared to other known methods [4, p. 268], [2], [12], our decoder has pros and cons similar to those of the step-by-step decoder. In particular, if t_max is small, an efficient hardware implementation allowing a very high decoding speed can be obtained [10], [16]. On the other hand, if t_max is large, it is better to extend the Berlekamp-Massey algorithm (BMA). Such an extension, for e_BCH + 1 errors, is described in [4, p. 268]; see also [3, p. 231]. Below we compare our method with an extended BMA algorithm. A high-speed hardware implementation is considered.

In [10], based on results in [14], a parallel step-by-step decoder (PSSD) and a parallel version of the BMA followed by a Chien search (PBMD) are compared. The PSSD is much faster (shorter decoding delay) than the PBMD, i.e., about 6t/⌈log_2 t⌉ times faster (t = e_BCH). The cost of such high-speed decoding has also been investigated. By cost, space complexity (chip area) is meant. The PSSD decoder has about 2, 4, and 6 times higher space complexity


than the PBMD decoder if t = 3, t = 4, and t = 5, respectively. On the other hand, the PSSD consists of n identical branches. It is, therefore, regular and simple to design, even if the space complexity (chip area) becomes large. At least for t ≤ 5, the gain in speed (decoding delay) with the PSSD compared to the PBMD is larger, much larger in many cases, than the increase in hardware. Furthermore, if short decoding delay is essential, only the PSSD may satisfy the requirements.

Next we point out what decoding beyond e_BCH involves in terms of additional computations with our method. Correction of the same number of errors, but for a code with that designed error-correcting capability, is our reference.

The cost of (t + 1)-error correction of a t-error-correcting code is not so high compared to (t + 1)-error correction of a (t + 1)-error-correcting code. In the first case, n indicators (tentative syndromes) and, in the second case, k + 1 (see [13]) indicators have to be calculated. In both cases, the most complex indicator is obtained by calculating det(L_{t+1}) and checking whether det(L_{t+1}) = 0. The main difference is that, to obtain A_i, an inversion has to be carried out in the first case; see (5). Finally, in the first case we have to compare the contents of the vector A of tentative syndromes; see Example 1. Such an operation is not required in the second case. The first case has longer decoding delay (two to three times longer, depending on n) [10]. However, very high decoding speed can be maintained. In fact, the extension of the BMA to t + 1 errors suggested in [4, p. 272] would result in similar additional computations, i.e., both an inversion and a comparison of the contents of a vector have to be carried out as a final step. Roughly, our method has about 3t/⌈log_2 t⌉ times better performance, in terms of decoding delay and speed, than an extended BMA algorithm. One reason why (t + 1)-error correction of a t-error-correcting code is relatively simple is that at most ⌊n/(t + 1)⌋ error patterns of weight t + 1 can exist. All such patterns can be obtained from only one vector A of length n.

As a remark, let us point out that the decoding rate can be very high with our method also for t + 2 error correction; see [10]. This is possible since a large number of vectors can be sorted (compared) very efficiently in a pipelined fashion, e.g., by applying a parallel Batcher sorting network [15, p. 426].

Finding error patterns far beyond the correcting capability of the code (i.e., far beyond t errors if 2t + 1 = d_min) is not an easy task. With our method it is easy to correct t + 1 errors and also fairly easy to correct t + 2 errors. However, our method is not suitable if t is large, because the determinants are complex to calculate; see [10]. How complex an indicator we can afford to calculate depends on the application.

The conclusion is that our method has better performance, in terms of decoding delay and decoding speed, than existing methods. The difference in speed can be explained by the error patterns being generated in a more direct fashion than with the extended Berlekamp-Massey algorithm.

V. CONCLUSIONS

We have presented a new procedure for finding all possible error patterns up to a given weight higher than e_BCH for binary cyclic codes. The procedure is suitable, in particular, when 2e_BCH + 1 = d_min. Possible error positions are localized by indicators, and our method can be seen as an extension of the step-by-step decoder introduced by Massey. The procedure allows low decoding delays and high decoding speeds. It is intended for correction of (in total) up to five or six errors. Unfortunately, it is complex to implement if we want to generate error patterns of great weight (say, a weight greater than six or seven). It is also complicated to generate error patterns of weight greater than e_BCH + 2. On the other hand, no procedure is known for which such generation is simple.

REFERENCES

[1] P. Bours, J. C. M. Janssen, M. van Asperdt, and H. C. A. van Tilborg, "Algebraic decoding beyond e_BCH of some binary cyclic codes, when e > e_BCH," IEEE Trans. Inform. Theory, vol. 36, pp. 214-222, Jan. 1990.

[2] I. M. Duursma and R. Kötter, "Error-locating pairs for cyclic codes," IEEE Trans. Inform. Theory, vol. 40, pp. 1108-1121, July 1994.

[3] E. R. Berlekamp, Algebraic Coding Theory. New York: McGraw-Hill, 1968.

[4] R. E. Blahut, Theory and Practice of Error Control Codes. Reading, MA: Addison-Wesley, 1983.

[5] G. L. Feng and K. K. Tzeng, "A generalization of the Berlekamp-Massey algorithm for multisequence shift-register synthesis with applications to decoding cyclic codes," IEEE Trans. Inform. Theory, vol. 37, pp. 1274-1287, Sept. 1991.

[6] T. Hwang, "Parallel decoding of binary BCH codes," Electron. Lett., vol. 27, no. 24, pp. 2223-2224, Nov. 1991.

[7] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. New York: North-Holland, 1977.

[8] J. L. Massey, "Step-by-step decoding of the Bose-Chaudhuri-Hocquenghem codes," IEEE Trans. Inform. Theory, vol. IT-11, pp. 580-585, Oct. 1965.

[9] J. E. M. Nilsson, "Decoding of BCH codes beyond the designed error correcting capability," in Proc. 6th Joint Swedish-Russian Workshop on Information Theory (Mölle, Sweden, Aug. 22-27, 1993), pp. 204-208.

[10] ——, "On hard and soft decoding of block codes," Linköping Studies in Science and Technology, dissertation no. 333, Linköping, Sweden, 1994.

[11] E. Prange, "The use of coset equivalence in the analysis and decoding of group codes," Tech. Rep. AFCRC-TR-59-164, USAF Cambridge Res. Ctr., Bedford, MA, June 1959.

[12] E. Schulz, "Algebraische Decodierung über die halbe BCH-Grenze hinaus," Ph.D. dissertation (in German), Fortschr.-Ber. VDI Reihe 10, no. 107, Düsseldorf: VDI-Verlag, 1989.

[13] Z. Szwaja, "On step-by-step decoding of the BCH binary codes," IEEE Trans. Inform. Theory, vol. IT-13, pp. 350-351, Apr. 1967.

[14] X. Youzhi, "Contributions to the decoding of Reed-Solomon and related codes," Linköping Studies in Science and Technology, dissertation no. 257, Linköping, Sweden, 1991.

[15] N. Weste and K. Eshraghian, Principles of CMOS VLSI Design. Reading, MA: Addison-Wesley, 1985.

[16] S. W. Wei and C. H. Wei, "High-speed hardware decoder for double-error-correcting binary BCH codes," Proc. Inst. Elec. Eng., vol. 136, pt. I, pp. 227-231, June 1989.
