Kite Codes over Groups
Xiao Ma∗, Shancheng Zhao∗, Kai Zhang∗, and Baoming Bai†
∗School of ECE, Sun Yat-sen University, Guangzhou 510006, China
†State Key Lab. of ISN, Xidian University, Xi'an 710071, China
Abstract—Kite codes, which were originally defined over the binary field, are generalized to arbitrary abelian groups in this paper. Kite codes are a special class of prefix rateless codes over groups, which can generate potentially infinite (or as many as required) random-like parity-check symbols. In this paper, we consider four kinds of Kite codes: binary Kite codes, Kite codes over one-dimensional lattices, Kite codes over M-PSK signal constellations, and Kite codes over multi-dimensional lattices. It is shown by simulations that the proposed codes perform well over additive white Gaussian noise channels.
Index Terms—Adaptive coded modulation, codes over groups,group codes, lattice codes, LDPC codes, RA codes, rate-compatible codes, rateless coding, Raptor codes.
I. INTRODUCTION
Binary linear rateless coding is an encoding method that
can generate potentially infinite parity-check bits for any
given fixed-length binary sequence. Fountain codes constitute
a class of rateless codes, which were first mentioned in [1].
The first practical realizations of Fountain codes were LT-
codes invented by Luby [2]. LT-codes are linear rateless
codes that transform k information bits into a potentially
infinite stream of coded bits. LT-codes have encoding complexity of order log k per
information bit on average. The complexity is further reduced
by Shokrollahi using Raptor codes [3]. For relations among
random linear Fountain codes, LT-codes and Raptor codes
and their applications over erasure channels, we refer the
reader to [4]. In addition to their success in erasure channels,
Fountain codes have also been applied to other binary mem-
oryless symmetric channels [5][6][7]. However, it has been
proved [6] that no universal Raptor codes exist for binary input
memoryless symmetric channels (BIMSCs) other than erasure
channels. Recently, a new class of binary rateless codes, called
Kite codes, has been proposed for additive white Gaussian
noise (AWGN) channels [8]. Simulation results have shown
that Kite codes perform well over a wide range of decoding
rates, from 0.1 to 0.9.
In this paper, we generalize the original Kite codes from the
binary field to arbitrary abelian groups. Kite codes are a special
class of prefix rateless forward error correction (FEC) codes
over arbitrary abelian groups, which can generate as many
as required random-like parity-check symbols. A Kite code is
specified by its dimension, a real sequence, and an abelian
group along with a subset of bijective transformations over it.
By picking tailored parameters, Kite codes can be constructed
as low-density parity-check (LDPC) codes, group codes [9]
and lattice codes [10]. This makes them more attractive for
certain applications.
II. KITE CODES OVER ABELIAN GROUPS
A. Finite Abelian Groups and Their Transformations
A finite abelian group (A, +) is defined as a finite set A together with a binary operation + : A × A → A satisfying the following conditions [11].
1) There exists an element θ ∈ A such that α + θ = α holds for any α ∈ A. It can be proved that θ is unique. We call θ the identity element of A and denote it by 0 for convenience.
2) For any α, β, γ ∈ A, (α + β) + γ = α + (β + γ).
3) For any α, β ∈ A, α + β = β + α.
4) For any α ∈ A, there exists an element β ∈ A such that α + β = 0. It can be proved that β is uniquely determined by α. We call such a β the negative element of α and denote it by −α for convenience.
A transformation f : A → A assigns to each α ∈ A exactly one element β ∈ A. The collection of all transformations defined over A is denoted by T. The collection of all bijective transformations is denoted by S. A transformation f is called an endomorphism of A if f(α + β) = f(α) + f(β) for all α, β ∈ A. An endomorphism f is called an automorphism if it is bijective. The collection of all endomorphisms is denoted by End(A). We use 0_A to denote the zero transformation that assigns to each α ∈ A the identity element, and 1_A to denote the identity transformation that maps each α ∈ A to itself.
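These definitions can be checked mechanically for a small group. The following sketch (illustrative only, not from the paper) takes A = Z_5 and verifies that the zero transformation 0_A, the identity transformation 1_A, and the doubling map are all endomorphisms:

```python
# Sketch (not from the paper): transformations on the additive group Z_5.
m = 5
elements = list(range(m))

def zero_map(a):          # 0_A: sends every element to the identity 0
    return 0

def identity_map(a):      # 1_A: sends every element to itself
    return a

def is_endomorphism(f):
    """Check f(a + b) = f(a) + f(b) for all a, b (addition modulo m)."""
    return all(f((a + b) % m) == (f(a) + f(b)) % m
               for a in elements for b in elements)

assert is_endomorphism(zero_map)
assert is_endomorphism(identity_map)
# Doubling a -> 2a mod 5 is also an endomorphism; since gcd(2, 5) = 1
# it is bijective and hence an automorphism.
assert is_endomorphism(lambda a: (2 * a) % m)
```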
We give some examples that are commonly used in the
coding theory and practice.
Examples.
1) Consider the additive group F_q of the finite field of size q. For any nonzero α ∈ F_q, we can define a bijective transformation β ↦ αβ, β ∈ F_q.
2) Consider the additive group of the ring Z_m of integers modulo m. For any α that is coprime with m, we can define a bijective transformation β ↦ αβ, β ∈ Z_m.
3) Consider the M-ary phase-shift keying (M-PSK) signal constellation X = {exp(2kπ√−1/M), 0 ≤ k < M}. For any α ∈ X, we can define a bijective transformation β ↦ αβ, β ∈ X.
4) Consider a quotient lattice Λ/Λ_0, where Λ ⊂ R^m is a lattice with Λ_0 as a sub-lattice. For any α ∈ Λ/Λ_0, we can define a bijective transformation β ↦ α + β, β ∈ Λ/Λ_0.

2011 IEEE Information Theory Workshop
978-1-4577-0437-6/11/$26.00 ©2011 IEEE

Fig. 1. A systematic encoding algorithm for the proposed Kite code. The encoder consists of a buffer of size k holding the systematic information sequence v, a pseudo-random number generator (PRNG), and an accumulator producing the parity-check sequence w of length r = n − k. At time t, a random variable θ_t ~ B(k; p_t) (binomial with k Bernoulli trials and success probability p_t) is generated; θ_t summands are randomly selected from the buffer; each selected summand is mapped by a random transformation in Ω; the resulting parity-check sum s_t updates w_t = w_{t−1} + s_t.
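Examples 2) and 4) above can be made concrete with a few lines of code. This sketch (illustrative, not from the paper) verifies that multiplication by a unit of Z_m and translation in the one-dimensional quotient lattice Z/4Z are bijective:

```python
from math import gcd

# Example 2): multiplication by alpha with gcd(alpha, m) = 1 permutes Z_m.
m = 9
alpha = 4
image = {(alpha * b) % m for b in range(m)}
assert gcd(alpha, m) == 1 and image == set(range(m))   # a permutation of Z_9

# Example 4) with the 1-D quotient lattice Z/4Z: translation by any alpha
# is bijective (it is just the group acting on itself).
alpha = 3
image = {(alpha + b) % 4 for b in range(4)}
assert image == set(range(4))
```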
B. Systematic Rateless Codes Over Groups
Let A_q be a finite abelian group of size q. Let A_q^∞ be the set of all infinite sequences over A_q. A systematic rateless code with degree of freedom k is defined as

C_q[∞, k] ≜ { c = (v, w) ∈ A_q^∞ | v ∈ A_q^k },   (1)

which is a finite subset of A_q^∞ with size q^k. Similar to linear codes, for c = (v, w) ∈ C_q[∞, k], we call c, v and w the coded sequence, information sequence and parity-check sequence, respectively. With a slight abuse of terminology, the degree of freedom k is also referred to as the dimension of the code C_q[∞, k]. For n ≥ k, we use C_q[n, k] to denote the prefix code of length n of C_q[∞, k]. For the prefix code, the coding rate is k/n. If the information sequence v is drawn from a memoryless source with entropy H(V), the information rate is kH(V)/n bits/symbol. In particular, if the source is uniformly distributed over A_q, the information rate is (k log_2 q)/n bits/symbol.
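As a worked instance of these rate formulas, take the parameters used later in Section III, k = 1890 and n = 2100, over a ternary group (q = 3):

```python
from math import log2

# Prefix code C_3[2100, 1890] over Z/3Z (parameters from Section III).
k, n, q = 1890, 2100, 3
coding_rate = k / n                  # information symbols per coded symbol
uniform_rate = k * log2(q) / n       # bits/symbol for a uniform ternary source
assert abs(coding_rate - 0.9) < 1e-12
assert abs(uniform_rate - 1.4265) < 1e-3     # about 1.43 bits/symbol

# For a non-uniform source with entropy H(V) = 1.0 bit/symbol (Table II),
# the information rate kH(V)/n is 0.9 bits/channel symbol.
assert abs(k * 1.0 / n - 0.9) < 1e-12
```

The 1.43 bits/channel-symbol figure matches the rightmost curve of Fig. 4.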
We now propose a special class of rateless codes, called Kite codes. An ensemble of Kite codes, denoted by K[∞, k; p, A_q, Ω], is specified by its dimension k, a real sequence p = (p_0, p_1, ..., p_t, ...) with 0 < p_t < 1 for t ≥ 0, and a subset Ω of bijective transformations over A_q. The real sequence p is called the p-sequence for convenience. The encoder of a Kite code consists of a buffer of size k, a pseudo-random number generator (PRNG) and an accumulator, as shown in Fig. 1.
Let v = (v_0, v_1, ..., v_{k−1}) be the information sequence to be encoded. The corresponding codeword is written as c = (v, w), where w = (w_0, w_1, ..., w_t, ...) is the parity-check sequence. The task of the encoder is to compute w_t for any t ≥ 0. The encoding algorithm is described as follows.
Algorithm 1: (Recursive Encoding Algorithm for Kite Codes)
1) Initialization: Initially, set w_{−1} = 0. Equivalently, set the initial state of the accumulator to zero.
2) Recursion: For t = 0, 1, ..., T − 1 with T as large as required, compute w_t recursively according to the following procedures.
Step 2.1: At time t ≥ 0, sample k independent and identically distributed (i.i.d.) random transformations H_t = (H_{t,0}, H_{t,1}, ..., H_{t,k−1}), where

Pr{H_{t,i} = h_{t,i}} = p_t/|Ω|, if h_{t,i} ∈ Ω ⊆ S,
Pr{H_{t,i} = h_{t,i}} = 1 − p_t, if h_{t,i} = 0_A,

for 0 ≤ i < k;
Step 2.2: Compute the t-th parity-check symbol by w_t = w_{t−1} + s_t, where s_t = Σ_{0 ≤ i < k} h_{t,i}(v_i).
Remark. If Ω consists of automorphisms of A_q, the resulting codes are group codes [9].
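Algorithm 1 can be sketched directly in code. The following is a minimal, illustrative implementation over Z_m (the function name and seeding convention are assumptions; in practice the PRNG seed would be shared with the decoder so that the random transformations are reproducible):

```python
import random

def kite_encode(v, p, m, omega, num_parity, seed=0):
    """Sketch of Algorithm 1 over Z_m.
    v: information symbols in Z_m; p: p-sequence (p[t] in (0, 1));
    omega: list of bijective maps on Z_m; returns parity symbols w."""
    rng = random.Random(seed)
    k = len(v)
    w, acc = [], 0                     # accumulator state w_{-1} = 0
    for t in range(num_parity):
        s = 0
        for i in range(k):
            # With probability p[t], apply a transformation drawn
            # uniformly from omega; otherwise apply the zero map 0_A.
            if rng.random() < p[t]:
                h = rng.choice(omega)
                s = (s + h(v[i])) % m
        acc = (acc + s) % m            # w_t = w_{t-1} + s_t
        w.append(acc)
    return w

# Usage: Z_3 with omega = multiplications by the units {1, 2}.
omega = [lambda b: b % 3, lambda b: (2 * b) % 3]
w = kite_encode([1, 2, 0, 1], p=[0.5] * 8, m=3, omega=omega, num_parity=8)
```

Since these two maps are automorphisms of Z_3, the remark above says the resulting code is a group code.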
C. Normal Graphical Realizations of Kite Codes
The prefix Kite code K[n, k; p, A_q, Ω] can be represented by a normal graph [12], which is described as follows and also shown in Fig. 2.
1) There are in total k information variable nodes, each of which corresponds to a component of v.
2) There are in total r ≜ n − k check variable nodes, each of which corresponds to a component of w.
3) Check variable nodes are connected to information variable nodes via two types of nodes in between, called transformation nodes and check nodes, respectively.
For decoding, the receiver can perform the sum-product algorithm (SPA) over the normal graph [12][13] according to the following schedule, where the messages processed at nodes and passed along edges can in principle be probability mass functions (pmfs) of the corresponding random variables.
Algorithm 2: (Decoding Algorithm for Kite Codes: A
Schedule)
1) Initialization: Initially, the messages for information
variable nodes and check variable nodes are assumed
to be computable from the channel observations. These
messages are passed to the check nodes. Note that the messages received at the check nodes from the information variable nodes need to be permuted by the transformation nodes.
2) Iterations: One iteration includes two phases as follows.
Step 2.1: The check node takes as input the messages
from the transformation nodes and check variable nodes
and computes the extrinsic messages which are delivered
(permuted if required) to information variable nodes and
check variable nodes;
Step 2.2: The information variable nodes and check vari-
able nodes take the (possibly permuted) messages from
check nodes and deliver as outputs extrinsic messages
to the check nodes.
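At a check node the constraint is a sum in the group, so the outgoing pmf toward the "sum" edge is the group convolution of the incoming pmfs (messages toward the other edges involve the negatives of the group elements). A minimal sketch over Z_3, assuming plain pmf messages:

```python
def group_convolve(p1, p2, m=3):
    """pmf of X + Y mod m for independent X ~ p1, Y ~ p2."""
    out = [0.0] * m
    for a in range(m):
        for b in range(m):
            out[(a + b) % m] += p1[a] * p2[b]
    return out

pa = [0.7, 0.2, 0.1]     # incoming pmf on one edge
pb = [0.1, 0.8, 0.1]     # incoming pmf on the other edge
pc = group_convolve(pa, pb)
assert abs(sum(pc) - 1.0) < 1e-12     # still a valid pmf
```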
Fig. 2. A normal graph of a Kite code. Legend: information variable nodes (k in total), parity-check variable nodes (r in total), check nodes (r in total), and transformation nodes.
D. Design of Kite Codes and Their Improvements
Given k, a finite abelian group A_q and a subset Ω of bijective transformations, the performance of a Kite code is mainly affected by the p-sequence. The task of optimizing a Kite code is to select the whole p-sequence such that all the prefix codes are good enough. This is a multi-objective optimization problem and could be very complicated. For simplicity, we partition the p-sequence into nine segments, each of which corresponds to a parameter q_i, 1 ≤ i ≤ 9. Specifically, for t ≥ 0, p_t is set to q_i if the coding rate k/(k + t + 1) falls into the interval [i/10, (i + 1)/10). Then designing a Kite code is equivalent to selecting the parameters q = (q_9, q_8, ..., q_1). This can be done by a greedy optimization algorithm [8] based on simulation or density evolution [14]. First, we choose q_9 such that the prefix code of rate 0.9 is as good as possible. Second, we choose q_8 with fixed q_9 such that the prefix code of rate 0.8 is as good as possible. Third, we choose q_7 with fixed (q_9, q_8) such that the prefix code of rate 0.7 is as good as possible. This process continues until q_1 is selected. A construction example of a Kite code defined over F_2 with k = 1890 has been given in [8]. For convenience, the parameters are reproduced in Table I.
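The greedy procedure can be sketched as follows; the evaluation function here is a stand-in, since in [8] each candidate is actually scored by simulation or density evolution:

```python
def design_p_sequence(candidates, evaluate):
    """Greedy design: pick q9 for the rate-0.9 prefix first, then q8 with
    q9 fixed, and so on down to q1.  evaluate(i, q_fixed) scores the
    prefix code of rate i/10 under the partial assignment q_fixed
    (higher is better)."""
    q = {}
    for i in range(9, 0, -1):                  # i = 9, 8, ..., 1
        q[i] = max(candidates, key=lambda cand: evaluate(i, {**q, i: cand}))
    return [q[i] for i in range(9, 0, -1)]     # (q9, q8, ..., q1)

# Toy usage with a dummy scorer that prefers smaller p at higher rates,
# mimicking the trend of Table I.
p_seq = design_p_sequence(
    candidates=[0.0004, 0.005, 0.025],
    evaluate=lambda i, qf: -abs(qf[i] - 0.025 * (1 - i / 10.0)))
```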
Fig. 3. Performance of the binary Kite code with k = 1890: rates (bits/BPSK) versus SNR (dB); the curves show simulation results and channel capacity.

Given the parameters q, a randomly generated Kite code may have information variable nodes with extremely low degrees (even degree zero for high-rate codes). This typically causes higher error floors for Kite codes with high coding rates. To improve the performance in the error-floor region, we can modify a randomly generated Kite code by adding edges between low-degree information variable nodes and low-degree check nodes.
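This patching step amounts to simple bookkeeping on the graph. The selection rule below is an assumption for illustration; the paper only states that edges are added between low-degree information variable nodes and low-degree check nodes:

```python
def patch_low_degree_nodes(checks, k, min_degree=2):
    """checks: list of sets of information-node indices per check node.
    Attach each under-connected information node to the check nodes
    that currently have the fewest edges (assumed rule)."""
    degree = [0] * k
    for c in checks:
        for i in c:
            degree[i] += 1
    for i in range(k):
        while degree[i] < min_degree:
            # lowest-degree check node not already involving node i
            target = min((j for j in range(len(checks)) if i not in checks[j]),
                         key=lambda j: len(checks[j]))
            checks[target].add(i)
            degree[i] += 1
    return checks

checks = [{0, 1}, {1}, set()]      # node 2 has degree 0, node 0 degree 1
patch_low_degree_nodes(checks, k=3)
```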
III. EXAMPLES OF KITE CODES
In this section, we give some specific examples of Kite codes and their performance over AWGN channels. For the numerical results, we take K[n, 1890; p, A_q, Ω] codes as examples.
A. Binary Kite Codes
Consider the Kite codes over F_2 with Ω = {1_A}. They are prefix binary LDPC codes.
Example 1: Consider K[n, 1890; p, F_2, {1_A}]. The p-sequence is shown in Table I. The signal-to-noise ratios (SNRs) required to achieve a bit-error rate (BER) around 10^−4 at different coding rates are shown in Fig. 3, under the assumptions of AWGN channels, BPSK modulation and SPA decoding. The decoding algorithm is carried out with a maximum of I_max = 50 iterations. Also shown in Fig. 3 is the channel capacity curve. It can be seen from Fig. 3 that the gaps between the coding rates and the capacities are around 0.15 bits/BPSK over the SNR range of −5.0 to 7.0 dB. The gaps can be further narrowed, as shown in [8], by increasing the data length k. To the best of our knowledge, no other simulation results have been reported in the literature showing that a single coding method produces good codes over such a wide range of rates.
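The capacity benchmark of Fig. 3 can be reproduced by Monte-Carlo integration. For equiprobable BPSK inputs X ∈ {−1, +1} and Y = X + N with N ~ N(0, σ²), the standard formula C = 1 − E[log2(1 + e^{−2Y/σ²}) | X = +1] bits/BPSK applies; the sketch below is self-contained:

```python
import math, random

def bpsk_capacity(snr_db, samples=100_000, seed=1):
    """Monte-Carlo estimate of the BPSK-AWGN capacity in bits/BPSK."""
    sigma2 = 10 ** (-snr_db / 10)      # SNR = 1 / sigma^2 for unit energy
    sigma = math.sqrt(sigma2)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        y = 1.0 + rng.gauss(0.0, sigma)          # condition on X = +1
        total += math.log2(1.0 + math.exp(-2.0 * y / sigma2))
    return 1.0 - total / samples

# Sanity checks: capacity grows with SNR and never exceeds 1 bit/BPSK.
assert bpsk_capacity(-5) < bpsk_capacity(5) <= 1.0
```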
B. Kite Codes over One-Dimensional Lattice
Consider the Kite codes over A = Z/mZ, referred to as Z-Kite codes. Generally, we choose the lattice point in a coset with the least Euclidean distance to the origin as its coset representative.
Example 2: Consider K[2100, 1890; p, Z/3Z, Ω], where Z/3Z = {−1, 0, +1} and Ω is the set of bijective transformations over Z/3Z. Different from conventional settings for channel coding, we allow the information sequence v to have a non-uniform a priori distribution.
TABLE I
THE p-SEQUENCE OF KITE CODES WITH k = 1890.

t   | [0,210) | [210,472) | [472,810) | [810,1260) | [1260,1890) | [1890,2835) | [2835,4410) | [4410,7560) | [7560,17010)
p_t | q_9     | q_8       | q_7       | q_6        | q_5         | q_4         | q_3         | q_2         | q_1
    | 0.0249  | 0.0072    | 0.0045    | 0.0034     | 0.0021      | 0.0016      | 0.0010      | 0.0006      | 0.0004
TABLE II
SOURCE ENTROPIES AND THEIR DISTRIBUTIONS.

H(V) (bits/symbol) | Pr(−1) | Pr(0)  | Pr(+1)
log2(3)            | 0.3333 | 0.3334 | 0.3333
1.3333             | 0.1886 | 0.6228 | 0.1886
1.2222             | 0.1595 | 0.6810 | 0.1595
1.0000             | 0.1135 | 0.7730 | 0.1135
0.8889             | 0.0946 | 0.8108 | 0.0946
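The rows of Table II are easy to verify numerically; for instance, the distribution (0.1135, 0.7730, 0.1135) over Z/3Z = {−1, 0, +1} indeed has entropy 1.0 bit/symbol:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

H = entropy([0.1135, 0.7730, 0.1135])
assert abs(H - 1.0) < 1e-3
# The uniform row attains the maximum log2(3) ~ 1.585 bits/symbol:
assert abs(entropy([1/3, 1/3, 1/3]) - log2(3)) < 1e-12
```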
A non-uniform a priori distribution may yield shaping gain [15][16]. As an example, if the entropy of the information symbol equals 1.0 bit/symbol, the information rate is 0.9 bits/channel symbol. The source distributions and their entropies H(V) considered in this paper are shown in Table II. The symbol error rates (SERs) under SPA decoding are shown in Fig. 4. It is worth pointing out that the a priori probabilities are incorporated into the initialization of the decoding algorithm for the information variable nodes but not for the check variable nodes. This is because, under random bijective transformations, the check variable nodes are usually uniformly distributed over Z/3Z. In addition, the a priori probabilities are also taken into account when evaluating the transmission power, given that the average (per channel symbol) transmission power is the same for the information sequence and the parity-check sequence. The decoding algorithm is carried out with a maximum of I_max = 50 iterations. It can be seen that, at information rate 0.9 bits/channel symbol and SER = 10^−4, this Z-Kite code has a gain of about 0.6 dB over the binary Kite code of Sec. III-A. Note that this is a rough comparison, given that one error rate is an SER and the other a BER.
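The power bookkeeping can be illustrated with the natural mapping of Z/3Z = {−1, 0, +1} to the amplitudes −1, 0, +1 (an illustration consistent with, but not spelled out in, the paper): the shaped source spends noticeably less power per symbol than the uniform one.

```python
# H(V) = 1.0 row of Table II, amplitudes x in {-1, 0, +1} with energy x^2.
probs = {-1: 0.1135, 0: 0.7730, +1: 0.1135}
avg_power_shaped = sum(p * (x ** 2) for x, p in probs.items())
avg_power_uniform = sum((x ** 2) for x in (-1, 0, 1)) / 3
assert abs(avg_power_shaped - 0.227) < 1e-9      # vs. 2/3 for the uniform source
assert avg_power_shaped < avg_power_uniform
```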
C. Kite Codes over M-PSK Constellation
Consider the Kite codes over A_M = {exp(2kπ√−1/M), 0 ≤ k < M}, referred to as MPSK-Kite codes here.
Example 3: Consider K[2100, 1890; p, A_8, Ω], where Ω is the set of bijective transformations over A_8. The BER under SPA decoding is shown in Fig. 5, together with the performance of a modified code of K[2100, 1890; p, A_8, Ω]. The decoding algorithm is carried out with a maximum of I_max = 50 iterations. It can be seen that the modified code exhibits a lower error floor.
D. Kite Codes over 4-Dimensional Checkerboard Lattice
Consider the Kite codes over A = Λ/Λ_0 [17], where Λ is an m-dimensional lattice in Euclidean space and Λ_0 is a sub-lattice of Λ. These are referred to as Lattice-Kite codes here. The coded sequence of a Lattice-Kite code is transformed into a signal-point sequence by mapping each component (coset) to the lattice point in it with the least energy.

Fig. 4. Performance of the [2100, 1890] Kite codes over Z/3Z with different source distributions: SER versus Eb/N0 (dB). From left to right, the curves correspond to information rates 0.80, 0.90, 1.10, 1.20 and 1.43 bits/channel symbol, respectively.

Fig. 5. Performance of the [2100, 1890] Kite code over 8-PSK: error rate versus Eb/N0 (dB). Curves: 8PSK-Kite code and modified 8PSK-Kite code, both at 2.7 bits/channel symbol.
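The least-energy mapping described above can be sketched coordinate-wise for the quotient Z²/4Z² (an illustrative implementation; D_4/4D_4 would additionally require minimizing over the cosets of the checkerboard lattice):

```python
def min_energy_rep(c, q=4):
    """Representative of the coset c + qZ closest to the origin."""
    r = c % q                   # canonical representative in {0, ..., q-1}
    return r if r <= q - r else r - q

# The four cosets of 4Z in Z map to {0, 1, 2, -1}:
assert [min_energy_rep(c) for c in range(4)] == [0, 1, 2, -1]

# A 2-D coded symbol (coset of 4Z^2 in Z^2) maps to a 16QAM-like point:
point = tuple(min_energy_rep(c) for c in (3, 2))
assert point == (-1, 2)
```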
Example 4: Consider the 4-D Lattice-Kite code K[2100, 1890; p, D_4/4D_4, Ω_1] and the 2-D Lattice-Kite code K[2100, 1890; p, Z^2/4Z^2, Ω_2], where Ω_1 and Ω_2 are the sets of bijective transformations of D_4/4D_4 and Z^2/4Z^2, respectively. Note that Z^2/4Z^2 is equivalent to the 16QAM signal constellation. The SERs under SPA decoding are shown in Fig. 6. It can be seen that the coding gain of the 4-D Lattice-Kite code over the 2-D Lattice-Kite code (16QAM) is about 1.5 dB at error rate 10^−5. We observed that the 4-D Lattice-Kite code converges very fast: its decoding algorithm is carried out with a maximum of only I_max = 5 iterations, whereas I_max = 50 for the 2-D Lattice-Kite code.

Fig. 6. Performance of the [2100, 1890] Kite codes over D_4/4D_4 and Z^2/4Z^2: SER versus Eb/N0 (dB). Curves: 2-D Lattice-Kite code (3.6 bits/2-dimension), 4-D Lattice-Kite code (3.6 bits/2-dimension), and the Shannon limit of 16QAM.
IV. CONCLUSION
In this paper, we have proposed a new class of rateless forward error correction codes over arbitrary abelian groups, called Kite codes, which can be applied to AWGN channels. Kite codes can generate potentially infinite parity-check symbols with linear complexity. Four kinds of Kite codes, namely binary Kite codes, Z-Kite codes, MPSK-Kite codes and Lattice-Kite codes, are considered in this paper. Numerical results show that the proposed codes perform well over AWGN channels.
REFERENCES
[1] J. Byers, M. Luby, M. Mitzenmacher, and A. Rege, "A digital fountain approach to reliable distribution of bulk data," in Proc. ACM SIGCOMM'98, Vancouver, BC, Canada, pp. 56–67, 1998.
[2] M. Luby, "LT-codes," in Proc. 43rd Annu. IEEE Symp. Foundations of Computer Science (FOCS), Vancouver, BC, Canada, pp. 271–280, Nov. 2002.
[3] A. Shokrollahi, "Raptor codes," IEEE Trans. Inform. Theory, vol. 52, pp. 2551–2567, Jun. 2006.
[4] D. MacKay, "Fountain codes," IEE Proc.-Commun., vol. 152, pp. 1062–1068, Dec. 2005.
[5] R. Palanki and J. Yedidia, "Rateless codes on noisy channels," in Proc. 2004 IEEE Int. Symp. Inform. Theory, Chicago, IL, p. 37, June/July 2004.
[6] O. Etesami and A. Shokrollahi, "Raptor codes on binary memoryless symmetric channels," IEEE Trans. Inform. Theory, vol. 52, pp. 2033–2051, May 2006.
[7] P. Pakzad and A. Shokrollahi, "Design principles for Raptor codes," in Proc. 2006 IEEE Inform. Theory Workshop, Punta del Este, Uruguay, pp. 165–169, Mar. 2006.
[8] X. Ma, K. Zhang, B. Bai, and X. Zhang, "Serial concatenation of RS codes with Kite codes: Performance analysis, iterative decoding and design," submitted to IEEE Trans. Inform. Theory; see also http://eprintweb.org/S/article/cs/1104.4927, April 2011.
[9] G. D. Forney Jr. and M. Trott, "The dynamics of group codes: State spaces, trellis diagrams, and canonical encoders," IEEE Trans. Inform. Theory, vol. 39, pp. 1491–1513, Sep. 1993.
[10] N. Sommer, M. Feder, and O. Shalvi, "Low-density lattice codes," IEEE Trans. Inform. Theory, vol. 54, pp. 1561–1585, Apr. 2008.
[11] Z.-X. Wan, Lectures on Finite Fields and Galois Rings. World Scientific Publishing Company, Oct. 2003.
[12] G. D. Forney Jr., "Codes on graphs: Normal realizations," IEEE Trans. Inform. Theory, vol. 47, pp. 520–548, Feb. 2001.
[13] X. Ma and B. Bai, "A unified decoding algorithm for linear codes based on partitioned parity-check matrices," in Proc. 2007 IEEE Inform. Theory Workshop, pp. 19–23, Sep. 2007.
[14] T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599–618, Feb. 2001.
[15] G. D. Forney Jr., R. Gallager, G. Lang, F. Longstaff, and S. Qureshi, "Efficient modulation for band-limited channels," IEEE J. Selected Areas Commun., vol. 2, pp. 632–647, Sep. 1984.
[16] X. Ma and L. Ping, "Coded modulation using superimposed binary codes," IEEE Trans. Inform. Theory, vol. 50, pp. 3331–3343, Dec. 2004.
[17] J. Conway and N. Sloane, "A fast encoding method for lattice codes and quantizers," IEEE Trans. Inform. Theory, vol. 29, pp. 820–824, Nov. 1983.