Principles of Communication
Information Theory (2)
LC 1-11
Lecture 25, 2008-12-16
2
Content
- Source Coding and Channel Coding
- Error Control Techniques
- Three types of FEC
  - Parity check codes
  - Linear block codes
  - Convolutional codes
- Interleaving
3
Source Coding
- To keep the original information as accurate as possible
- To represent the information using as few digits as possible
(Chap. 3)
4
Channel Coding
- Goal: to improve communications performance by increasing robustness against channel impairments (noise, interference, fading, ...)
- Results of channel coding (+: advantage, -: disadvantage)
  - Reduced PB at fixed SNR (+)
  - More redundant data bits (-)
  - Requires higher bandwidth and data rate (-)
  - Increased system complexity (-)
- Two ways of conducting channel coding
  - Waveform coding (Sec. 3.5)
  - Structured sequences
5
Error Control Techniques
- Provide ways for error detection/correction by adding structured redundancy to data packets
- Automatic Repeat reQuest (ARQ)
  - Full-duplex connection; corrects errors by retransmission
  - Requires feedback (acknowledgements) and monitoring strategies, which consume extra bandwidth/power
  - Implementation is easy
- Forward Error Correction (FEC)
  - Simplex connection; no feedback required
  - Adds a structured message to each data packet before transmission, based on which receivers detect and/or correct transmission errors
  - Implementation is complex
6
FEC Functions and Types
- FEC has two functions
  - Error detection: detect whether there is a transmission error (e.g., parity bits, linear block codes)
  - Error correction: correct the transmission errors (e.g., linear block codes)
- Three types of FEC to study
  - Parity check codes
  - Linear block codes
  - Convolutional codes
7
Parity-Check Codes
- Use linear sums of the information bits, called parity check bits, for error detection or correction
- Message word length: k bits; codeword length: n bits
- Each n-bit codeword carries (n-k) redundant parity-check bits
- Code rate: Rc = k/n
8
Single Parity Check Code
- Constructed by adding a single parity bit to a block of data bits
  - Even parity: the summation of all bits in the codeword is even
  - Odd parity: the summation of all bits in the codeword is odd
- Encoder: for information bits {c1, c2, ..., ck}, calculate the code bit c0 such that

    c0 ⊕ c1 ⊕ ... ⊕ ck = 0 (even parity) or 1 (odd parity)

- Decoder: check whether the received bits c0 ⊕ c1 ⊕ ... ⊕ ck satisfy even/odd parity
- Code rate: Rc = k/(k+1)
- Can detect only an odd number of bit errors; cannot correct bit errors
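The encode/check rules above can be sketched in a few lines of Python (an illustrative sketch, not part of the original slides; the function names are my own):

```python
def parity_encode(bits, even=True):
    """Append a single parity bit c0 so the XOR of all bits is 0 (even) or 1 (odd)."""
    c0 = 0
    for b in bits:
        c0 ^= b
    if not even:
        c0 ^= 1
    return bits + [c0]

def parity_check(word, even=True):
    """Return True if the received word satisfies the chosen parity."""
    s = 0
    for b in word:
        s ^= b
    return s == (0 if even else 1)

codeword = parity_encode([1, 0, 1, 1])   # XOR of data bits is 1, so c0 = 1
print(parity_check(codeword))            # True: no error
corrupted = codeword[:]
corrupted[2] ^= 1                        # flip one bit
print(parity_check(corrupted))           # False: odd number of errors detected
corrupted[3] ^= 1                        # flip a second bit
print(parity_check(corrupted))           # True: an even error count goes undetected
```

The last line illustrates the limitation stated above: an even number of bit errors leaves the parity unchanged, so it cannot be detected.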
9
FEC Coding Gain
- At a fixed PB, channel coding lets us reduce transmit power (i.e., operate at lower SNR)
- Coding gain: the difference in required SNR to achieve a given PB with and without channel coding:

    G [dB] = (Eb/N0)u [dB] - (Eb/N0)c [dB]

[Figure: PB versus Eb/N0 (dB) for uncoded and coded systems]

- FEC is effective only when the input PB is small enough; it is not helpful if PB is too large
10
Linear Block Codes
I. Linear block codes
- A class of parity check codes
- Main characteristic: takes k message digits and transforms them into an n-digit codeword
II. Linear block codes are described by: (n, k) codes; code rate Rc = k/n
III. Description of (n, k) linear block codes
1) Input: 2^k possible tuples of k-bit message words, m = [m1, ..., mk]
2) Output: n-bit codewords U = [u1, ..., un], drawn from the 2^n possible n-bit tuples
3) Encoder: assign a unique codeword U to each message word m (one-to-one mapping from m to U)
11
4) Decoder: from the received word, recover the message word m
- Note: only 2^k different codewords U can possibly be sent, but all 2^n possible n-bit words can be received due to transmission errors
- The decoder needs an optimal rule to map each received word into a message word
5) Design objectives
- Increase the distance between codewords so as to optimize error performance
- Easy decoding
6) "Linear" means:
- If V1 and V2 are two arbitrary codewords of an (n, k) linear block code, then V1 ⊕ V2 is also a valid codeword
- There is an all-zero codeword, because V1 ⊕ V1 = 0 must be a valid codeword
- Example: {000, 010, 101, 111} forms a linear block code with 4 codewords
12
Example
Which of the following can be a linear block code? (m = [m1, m2], U = [u1, u2, u3, u4])

    Code A:          Code B:
    00 -> 0000       00 -> 0000
    01 -> 0101       01 -> 0001
    10 -> 1010       10 -> 1010
    11 -> 1111       11 -> 1111

Code A is linear: it contains the all-zero word, and the XOR of any two codewords is again a codeword (e.g., 0101 ⊕ 1010 = 1111). Code B is not: 0001 ⊕ 1010 = 1011, which is not a codeword.
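The closure test can be automated; here is a minimal sketch in Python (not from the slides; `is_linear` is a name I introduce for illustration):

```python
from itertools import combinations

def is_linear(codebook):
    """A binary block code is linear iff it contains the all-zero word and
    the XOR of any two codewords is again a codeword (closure)."""
    def xor(a, b):
        return ''.join('1' if x != y else '0' for x, y in zip(a, b))
    n = len(next(iter(codebook)))
    if '0' * n not in codebook:
        return False
    return all(xor(a, b) in codebook for a, b in combinations(codebook, 2))

code_a = {'0000', '0101', '1010', '1111'}
code_b = {'0000', '0001', '1010', '1111'}
print(is_linear(code_a))  # True
print(is_linear(code_b))  # False: 0001 XOR 1010 = 1011 is not a codeword
```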
13
Property of Linear Block Codes
The codeword mapping satisfies:

    if mi → Ui and mj → Uj under encoding, then mi ⊕ mj → Ui ⊕ Uj
14
Metrics of Codewords
- Hamming distance: the total number of differing bit positions between two binary words:

    d(Ui, Uj) = number of bits in which Ui and Uj differ

- Minimum distance of a linear block code: the smallest Hamming distance between any pair of distinct codewords:

    dmin = min over all i ≠ j of d(Ui, Uj)
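Both metrics are straightforward to compute; a minimal sketch in Python (illustrative, not from the slides):

```python
from itertools import combinations

def hamming(u, v):
    """Number of bit positions in which two equal-length binary words differ."""
    return sum(a != b for a, b in zip(u, v))

def d_min(codewords):
    """Smallest pairwise Hamming distance over all distinct codeword pairs."""
    return min(hamming(u, v) for u, v in combinations(codewords, 2))

code = ['000', '010', '101', '111']
print(d_min(code))  # 1, e.g. d(000, 010) = 1
```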
15
Generator Matrix
- Encoding: select 2^k codewords from the 2^n possible n-bit words, and assign each selected codeword to a message word
- Practical n, k are usually large; efficient ways have to be found to construct the mapping
- In a properly designed linear block code, all the codewords can be generated by a special matrix called the generator matrix
- The codeword mapping then becomes a simple matrix multiplication:

    U = mG

  where
    m = [m1, m2, ..., mk] : message word
    U = [u1, u2, ..., un] : codeword
    G = [ v11 v12 ... v1n
          v21 v22 ... v2n
           ...
          vk1 vk2 ... vkn ] : k x n generator matrix
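The matrix multiplication U = mG over GF(2) can be sketched as follows (an illustrative sketch, not from the slides; G here is the (5,2) generator matrix used in a later example):

```python
def encode(m, G):
    """U = mG over GF(2): each codeword bit is a mod-2 sum of message bits."""
    k, n = len(G), len(G[0])
    return [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]

G = [[1, 0, 1, 0, 0],
     [0, 1, 1, 1, 1]]
print(encode([1, 1], G))  # [1, 1, 0, 1, 1]
```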
16
Systematic Linear Block Codes
- A special encoding scheme: codeword = [k message digits, n-k parity digits]
- Advantage: reduced complexity
- The generator matrix becomes

    G (k x n) = [ I (k x k) | P (k x (n-k)) ]

              = [ 1 0 ... 0 | p1,1 ... p1,n-k
                  ...
                  0 0 ... 1 | pk,1 ... pk,n-k ]
17
Parity-Check Matrix
- For a k x n generator matrix G, there is an (n-k) x n matrix H such that GH^T = 0. H is called the parity-check matrix
- Does H always exist? Think of it in terms of solutions to a system of linear equations
- For systematic codes with G (k x n) = [I_k | P], we have

    H ((n-k) x n) = [ P^T | I_(n-k) ]
18
Example
(5,2) code with generator matrix

    G = [ 1 0 1 0 0
          0 1 1 1 1 ],   P = [ 1 0 0
                               1 1 1 ]

    H = [ P^T | I_3 ] = [ 1 1 1 0 0
                          0 1 0 1 0
                          0 1 0 0 1 ]

Codeword mapping (m ↔ U):

    00 ↔ 00000
    01 ↔ 01111
    10 ↔ 10100
    11 ↔ 11011
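The construction H = [P^T | I] and the property GH^T = 0 can be checked numerically for this example (a sketch, not part of the slides; helper names are my own):

```python
from itertools import product

def mat_mul_gf2(A, B):
    """Matrix product over GF(2)."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

G = [[1, 0, 1, 0, 0],
     [0, 1, 1, 1, 1]]

# Systematic code: G = [I | P], so the parity-check matrix is H = [P^T | I].
P = [row[2:] for row in G]
I3 = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
H = [pt_row + i_row for pt_row, i_row in zip(transpose(P), I3)]

print(mat_mul_gf2(G, transpose(H)))  # [[0, 0, 0], [0, 0, 0]], i.e. G H^T = 0

# Enumerate all four codewords U = mG.
for m in product([0, 1], repeat=2):
    U = [sum(m[i] * G[i][j] for i in range(2)) % 2 for j in range(5)]
    print(m, U)
```

Running this reproduces the codeword table above: 00 → 00000, 01 → 01111, 10 → 10100, 11 → 11011.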
19
Parity Check
- U is a valid codeword generated by G if and only if UH^T = 0
- If a received word r contains errors, then typically rH^T ≠ 0 (unless the errors accidentally change one valid codeword into another)
20
Syndrome Testing
- Channel encoding and decoding block diagram:

    m → Encoder (G) → U → Channel → r → Decoder (H) → m̂

- Message word: m = [m1, ..., mk]  (1 x k)
- Codeword: U = mG  (1 x n)
- Received word: r = U + e  (1 x n)
- Error vector: e = [e1, ..., en], where the ei are binary digits
- Note: the decoder is not a simple matrix multiplication!
21
Syndrome
- The result of the parity check, used to determine whether r is a valid codeword:

    S = rH^T is the syndrome of r

- Property:

    S (1 x (n-k)) = (U + e) H^T = e H^T    (since UH^T = 0)

- If S = 0, then r is a valid codeword (note: there may still be a non-zero error e)
- If S ≠ 0, then r is not a valid codeword (there is a non-zero error e)
- Parity check: determine whether the syndrome is zero or not
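Computing S = rH^T is one mod-2 dot product per row of H; a minimal sketch using the (5,2) code from the earlier example (illustrative, not from the slides):

```python
def syndrome(r, H):
    """S = r H^T over GF(2): one parity check per row of H."""
    return [sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H]

H = [[1, 1, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [0, 1, 0, 0, 1]]
print(syndrome([1, 1, 0, 1, 1], H))  # [0, 0, 0]: a valid codeword
print(syndrome([1, 1, 1, 1, 1], H))  # [1, 0, 0]: non-zero, so an error occurred
```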
22
Properties of the Syndrome
- Linear block codes have a one-to-one mapping between correctable error patterns and syndromes
  - This lets the decoder use syndromes for error correction
- H must not have any zero columns
  - Otherwise there are undetectable error patterns
- All columns of H must be unique
  - Identical columns make the corresponding error patterns indistinguishable
23
Syndrome Look-up Table
- A table listing the correctable error patterns and their syndromes
- Error patterns: the first column of the standard array

    Error pattern e_j    Syndrome S_j = e_j H^T,   j = 1, ..., 2^(n-k)
    e_1                  S_1
    e_2                  S_2
    ...                  ...
    e_(2^(n-k))          S_(2^(n-k))
24
Example
For the (5,2) code:

    G = [ 1 0 1 0 0
          0 1 1 1 1 ],   P = [ 1 0 0
                               1 1 1 ]

    H^T = [ 1 0 0
            1 1 1
            1 0 0
            0 1 0
            0 0 1 ]

    Error pattern    Syndrome
    00000            000
    00001            001
    00010            010
    00100            100
    01000            111
    10010            110
    10001            101
    00011            011
25
Error Correction Decoding
Decoding with error correction (assuming the syndrome look-up table has been constructed):
1. Calculate the syndrome of the received word: S = rH^T
2. Locate the error pattern ej corresponding to S in the syndrome look-up table
3. Recover the codeword: Û = r + ej

Such a procedure is easy to implement in VLSI hardware; practical decoders are usually built from dedicated hardware.
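The three-step procedure, using the (5,2) code and the look-up table from the example above, can be sketched as (illustrative code, not from the slides):

```python
def syndrome(r, H):
    return tuple(sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H)

H = [[1, 1, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [0, 1, 0, 0, 1]]

# Syndrome look-up table from the example: syndrome -> correctable error pattern.
table = {
    (0, 0, 0): [0, 0, 0, 0, 0],
    (0, 0, 1): [0, 0, 0, 0, 1],
    (0, 1, 0): [0, 0, 0, 1, 0],
    (1, 0, 0): [0, 0, 1, 0, 0],
    (1, 1, 1): [0, 1, 0, 0, 0],
    (1, 1, 0): [1, 0, 0, 1, 0],
    (1, 0, 1): [1, 0, 0, 0, 1],
    (0, 1, 1): [0, 0, 0, 1, 1],
}

def decode(r):
    """1) compute the syndrome, 2) look up the error pattern, 3) correct: U_hat = r XOR e."""
    e = table[syndrome(r, H)]
    return [ri ^ ei for ri, ei in zip(r, e)]

print(decode([1, 1, 1, 1, 1]))  # [1, 1, 0, 1, 1]: single error in bit 3 corrected
```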
26
Convolutional Codes
- A type of FEC code in which
  - each k-bit tuple (symbol) is transformed into an n-bit tuple
  - the transformation is a function of the last K information tuples
- Described by three parameters (n, k, K)
  - Code rate: Rc = k/n
  - K: constraint length
- Typical implementation: modulo-2 convolution of the message binary sequence with a K-stage encoding shift register
27
Encoder
General encoder for (n, k, K) convolutional codes:
- k: input block length
- n: output block length
- K: constraint length (memory length)
- During each clock cycle, k bits are shifted into the register as a block, and n bits u1, ..., un are output as weighted (mod-2) summations of the register contents
- There are always kK bits stored in the registers
- If k = 1, the register shifts bit by bit
- Major difference from linear block codes: the outputs depend on past message blocks
28
Example
- Block diagram: rate-1/2 encoder with a 3-bit shift register (K = 3)
- Both input and output are time sequences
- Output relation:

    U(t) = (u1(t), u2(t))
    u1(t) = m(t) + m(t-2)
    u2(t) = m(t) + m(t-1) + m(t-2)    (mod 2)
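The output relation above maps directly onto a shift-register loop; a minimal sketch in Python (illustrative, not from the slides), assuming the register starts in the all-zero state:

```python
def conv_encode(message):
    """Rate-1/2, K=3 convolutional encoder from the example:
    u1(t) = m(t) + m(t-2),  u2(t) = m(t) + m(t-1) + m(t-2)  (mod 2)."""
    m1 = m2 = 0          # shift-register contents: m(t-1), m(t-2)
    out = []
    for m in message:
        u1 = m ^ m2
        u2 = m ^ m1 ^ m2
        out.extend([u1, u2])
        m1, m2 = m, m1   # shift the register by one bit
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 0, 1, 0, 0, 1, 0]
```

Note how the same input bit influences three consecutive output pairs: this is the memory (constraint length K = 3) that distinguishes convolutional codes from block codes.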
29
Interleaver
- The error correcting/detecting capability of channel codes is usually limited
  - Even with the stronger current codes, such as BCH and Reed-Solomon codes, the error correcting capability is usually only ten to twenty error bits
  - Higher error correcting/detecting capability requires codes with larger n, k, which can be very complex
- In most practical applications, errors happen in a bursty style
  - Many errors occur continuously within a short time, while at other times errors are relatively rare
- Interleaver: converts bursty errors into random-looking errors
  - Result: each code block contains only a few errors, and the errors are evenly distributed
- One interleaving method: row-wise in, column-wise out
  - Codewords U1 = [u11, ..., u1n], ..., Um = [um1, ..., umn] are written row-wise into an m x n array
  - The transmitted sequence is read out column-wise: u11, u21, ..., um1, u12, u22, ...
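The row-in, column-out method can be sketched as follows (illustrative code, not from the slides; the function names are my own):

```python
def interleave(bits, rows, cols):
    """Write the sequence row-wise into a rows x cols array, read it out column-wise.
    A burst of up to `rows` consecutive channel errors then hits each row
    (codeword) at most once."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation at the receiver: column-wise in, row-wise out."""
    return interleave(bits, cols, rows)

seq = list(range(12))                   # stand-in for 3 codewords of length 4
tx = interleave(seq, 3, 4)
print(tx)                               # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(deinterleave(tx, 3, 4) == seq)    # True
```

In the transmitted order, any three consecutive symbols come from three different codewords, so a burst of three channel errors leaves each codeword with only one error, which the code can then correct.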