Part III – Advanced Coding Techniques
Jose Vieira
SPL — Signal Processing Laboratory, Departamento de Electrónica, Telecomunicações e Informática / IEETA
Universidade de Aveiro, Portugal
2010
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 1 / 68
Ocean Store – The Infinite Disk

[Diagram: multiple clients storing and retrieving (ID, DATA) blocks across a distributed set of disks]
Error Correction Code – Unpredictable Channel Conditions

[Diagram: message m → Coder → codeword c → Erasure Channel (adds erasures e) → y → Decoder → recovered m]
Digital Fountains

Questions

Dissemination of data: how to encode data files in order to distribute them over a huge number of disks around the world?

Resilience: how can we encode data files so that any received encoded data is useful?

Ratelessness: how to generate an infinite number of codewords?

Answer: random coding!
Essential Textbooks and Papers

- David J. C. MacKay, "Information Theory, Inference and Learning Algorithms", Cambridge, 2004
- MacKay, D. J. C., "Fountain Codes", IEE Proceedings - Communications, Vol. 152, No. 6, pp. 1062-1068, December 2005
- Luby, Michael, "LT Codes", Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science (FOCS'02), pp. 271-280, IEEE, November 2002
- Maymounkov, Petar, "Online Codes", New York University, New York, November 2002
- Fragouli, Christina, Le Boudec, Jean-Yves, and Widmer, Jörg, "Network Coding: An Instant Primer", ACM SIGCOMM Computer Communication Review, Vol. 36, No. 1, pp. 63-68, January 2006
Supplementary Papers

- Shokrollahi, Amin, "Raptor Codes", IEEE Transactions on Information Theory, Vol. 52, No. 6, pp. 2551-2567, June 2006
- Shamai, Shlomo, Telatar, I. Emre, and Verdú, Sergio, "Fountain Capacity", IEEE Transactions on Information Theory, Vol. 53, No. 11, pp. 4372-4376, November 2007
- Dimakis, Alexandros G., Prabhakaran, Vinod, and Ramchandran, Kannan, "Decentralized Erasure Codes for Distributed Networked Storage", IEEE Transactions on Information Theory, Vol. 52, No. 6, pp. 2809-2816, June 2006
Outline

1. Linear Codes in any Field
   - Correcting Erasures
   - Correcting Errors
2. Coding Matrices
   - Structured Matrices
   - Random Matrices
   - Sparse Random Matrices
3. Coding with Sparse Random Matrices
   - Fountain Codes
   - Applications
Linear Codes – Coding with Real Numbers

One linear combination:

[c_1]_{1×1} = [− g_1 −]_{1×K} [m]_{K×1}
Linear Codes – Coding with Real Numbers

Two linear combinations:

[c_1; c_2]_{2×1} = [− g_1 −; − g_2 −]_{2×K} [m]_{K×1}
Linear Codes – Coding with Real Numbers

Adding redundancy to a signal (N > K):

[c_1; c_2; …; c_N]_{N×1} = [− g_1 −; − g_2 −; …; − g_N −]_{N×K} [m]_{K×1}

c = G m
Erasures

Erasures: lost samples at known positions.

How to recover the message m from an incomplete c?

[c_1; c_2; c_3; c_4; c_5]_{N×1} = [− g_1 −; − g_2 −; − g_3 −; − g_4 −; − g_5 −]_{N×K} [m]_{K×1}
Erasures

If only L samples of c are received we have:

c_1, c_3, c_5 received; c_2, c_4 erased, so the corresponding rows g_2, g_4 of G drop out of the system.

Define J = {1, 3, 5} as the set of the received samples.
Erasures

Solve the following system of equations to obtain the original signal m:

[c_1; c_3; c_5]_{L×1} = [− g_1 −; − g_3 −; − g_5 −]_{L×K} [m]_{K×1}

c(J) = G(J) m
Erasures

Depending on L (the number of received samples), there are three possible situations:

L < K – Underdetermined system of equations. In general there is not enough information to recover m uniquely; additional restrictions can be imposed in order to get a unique solution.

L = K – Determined system of equations, at most one solution:

    m = G(J)^{-1} c(J)

L > K – Overdetermined system of equations. In general there is no exact solution; over the field R we can choose the least-squares solution, the one that best approximates all the equations in the L2 sense:

    m = (G(J)^T G(J))^{-1} G(J)^T c(J) = G(J)^† c(J)
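The three cases above can be sketched numerically. The sizes, the random G, and the received index set J below are illustrative assumptions, not the slides' data:

```python
import numpy as np

# Hypothetical sizes: K = 3 message samples, N = 5 coded samples.
rng = np.random.default_rng(0)
K, N = 3, 5
G = rng.standard_normal((N, K))     # random real coding matrix
m = np.array([1.0, -2.0, 0.5])
c = G @ m                           # codeword, c = G m

J = [0, 2, 4]                       # indices of the received samples (erasures at 1 and 3)
GJ, cJ = G[J, :], c[J]

# L = K here, so G(J) is square and generically invertible; in the
# overdetermined case (L > K) lstsq computes the least-squares
# solution G(J)† c(J).
m_hat, *_ = np.linalg.lstsq(GJ, cJ, rcond=None)
print(np.allclose(m_hat, m))        # True: m recovered from 3 of the 5 samples
```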
Correcting Errors

Errors: lost samples at unknown positions.

[Diagram: m → Coder → c → channel adds error e → y → Decoder → m]

How can we find the error positions?
Correcting Errors

Consider an N×N orthogonal matrix partitioned in the following way:

F = [ G_{N×K} | H_{N×(N−K)} ]

[G^T; H^T] [G  H] = [G^T G  G^T H; H^T G  H^T H] = [I  0; 0  I]

So we have H^T G = 0.
Correcting Errors

Given the received y and given F, if there are no errors, then

y = c = G m

We can use the matrix H to test the received signal y:

s = H^T y = H^T c = (H^T G) m = 0,  since H^T G = 0
Correcting Errors

The received vector y is a corrupted version of c at unknown positions, for example:

[y_1; y_2; y_3; y_4; y_5] = [c_1; c_2; c_3; c_4; c_5] + [0; 0; e_3; 0; 0]

y = c + e

The matrix H is used to verify that the received signal y is a codeword:

s = H^T y = H^T c + H^T e = H^T e ≠ 0

The syndrome s is a linear combination of the columns of H^T, where the e_i are the coefficients.
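A quick numerical check of the syndrome test, with an orthogonal F = [G | H] built from a random matrix (all sizes and the error position are illustrative):

```python
import numpy as np

# Build an orthogonal F = [G | H] via QR, so that H^T G = 0.
rng = np.random.default_rng(1)
N, K = 5, 3
F, _ = np.linalg.qr(rng.standard_normal((N, N)))
G, H = F[:, :K], F[:, K:]

m = rng.standard_normal(K)
c = G @ m                        # codeword
e = np.zeros(N)
e[2] = 1.0                       # one error at an (unknown) position
y = c + e                        # received vector

print(np.allclose(H.T @ c, 0))           # True: codewords have zero syndrome
print(np.linalg.norm(H.T @ y) > 1e-8)    # True: the error is detected
```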
Correcting Errors – Example: Repetition Code

[Figure: the codeword line c_1 = c_2 in the (c_1, c_2) plane, with the codeword c = [2; 2]]

Repetition code:

F = [ G | H ] = [1  1; 1  −1]   (G = [1; 1], H = [1; −1])

Suppose we code m = [2]:

c = G m = [1; 1] [2] = [2; 2]

If an error occurs: y = c + e = [2; 2] + [1; 0] = [3; 2]

We can verify that the received vector y has an error:

H^T y = [1  −1] [3; 2] = 1 ≠ 0
Linear Codes – Coding with Real Numbers

[Figure: in (c_1, c_2, c_3) space, the message subspace and the codeword subspace]
Linear Codes – Coding with Real Numbers

[Figure: the column vectors g_1, g_2 of G and h of H shown in (c_1, c_2, c_3) space]
Linear Codes – Coding with Real Numbers

[Figure: the message m = [1; 1] is coded into c = [1; 1; 1] in (c_1, c_2, c_3) space]
Linear Codes – Coding with Real Numbers

Linear code:

F = [ G | H ],   G = [1  0; 0  1; 1/2  1/2],   H = [1; 1; −2]

To code the message m = [1; 1] we get c = G m = [1; 1; 1]

If an error occurs: y = c + e = [1; 1; 1] + [0; 0; −1/2] = [1; 1; 1/2]

To verify that the received vector y has an error:

s = H^T y = [1  1  −2] [1; 1; 1/2] = 1 ≠ 0
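The slide's example can be checked directly in a few lines of numpy, using the same G, H, m, and e:

```python
import numpy as np

# The example code: G maps R^2 into a 2-D subspace of R^3,
# and H spans a complement satisfying H^T G = 0.
G = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
H = np.array([[1.0], [1.0], [-2.0]])

m = np.array([1.0, 1.0])
c = G @ m                            # -> [1, 1, 1]
y = c + np.array([0.0, 0.0, -0.5])   # error in the third sample

print(H.T @ G)   # [[0. 0.]]: H is a valid parity check for G
print(H.T @ c)   # [0.]: codewords have zero syndrome
print(H.T @ y)   # [1.]: nonzero syndrome flags the error
```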
Correcting Errors – The Syndrome as a Linear Combination of Columns of H^T

[s_1; …; s_{N−K}] = H^T e = [h_1  h_2  ⋯  h_N] [e_1; e_2; e_3; …; e_N]

[s_1; …; s_{N−K}] = e_1 h_1 + e_2 h_2 + ⋯ + e_N h_N
Correcting Errors – Brute-Force Approach

Problem: find the linear combination of the vectors h_i that best approximates s.

The error vector e is sparse, so we want the sparsest solution.

Brute-force approach: test all error patterns.

This is equivalent to solving the following optimization problem:

min ‖e‖_0  s.t.  s = H^T e
Correcting Errors – Brute-Force Approach

Search for the "best" linear combination:

[s_1; …; s_{N−K}] = e_1 h_1 + e_2 h_2 + ⋯ + e_N h_N

Find the minimum of ‖s − ŝ‖_2 over all error patterns, for each number of errors n:

n = 1:  ŝ = e_i h_i                  N combinations
n = 2:  ŝ = e_i h_i + e_j h_j        C(N, 2) combinations
  ⋮
n = L:  ŝ = Σ_{i∈J} e_i h_i          Σ_{n=1}^{L} C(N, n) combinations in total

This is an NP-hard combinatorial problem.
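A brute-force decoder for tiny sizes can be written directly from the table above. The matrix sizes, seed, and the helper name sparsest_error are illustrative; the combinatorial cost is exactly why this only works for very small N:

```python
import numpy as np
from itertools import combinations

# Exhaustive search for the sparsest e with s = H^T e.
rng = np.random.default_rng(2)
N, K = 8, 4
H = np.linalg.qr(rng.standard_normal((N, N)))[0][:, K:]  # N x (N-K) parity block

e_true = np.zeros(N)
e_true[[1, 6]] = [1.0, -0.5]                             # two errors
s = H.T @ e_true

def sparsest_error(H, s, max_errors=3, tol=1e-8):
    N = H.shape[0]
    for n in range(1, max_errors + 1):        # supports of growing size
        for supp in combinations(range(N), n):
            A = H.T[:, supp]                  # the columns h_i on this support
            coef, *_ = np.linalg.lstsq(A, s, rcond=None)
            if np.linalg.norm(A @ coef - s) < tol:
                e = np.zeros(N)
                e[list(supp)] = coef
                return e
    return None

e_hat = sparsest_error(H, s)
print(np.allclose(e_hat, e_true))   # True: the true support gives an exact fit
```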
Avoiding the Combinatorial Explosion – Solutions

Solution 1: use coding matrices G and parity check matrices H with a convenient structure:
  - Hamming
  - DFT (BCH cyclic codes)
  - DCT
  - etc.

Solution 2: use random matrices and L1 minimization to obtain a sparse solution for the error vector.

Solution 3: use sparse random matrices and fast algorithms that take advantage of sparsity (LDPC and LT codes).
Solution 1 – Coding with Structured Matrices

Choose the β_i as roots of unity in any field (finite or not). Note that the roots of unity are the solutions of a^n = 1.

Construct the Vandermonde matrix

[ β_0^0       β_1^0       ⋯  β_{N−1}^0
  β_0^1       β_1^1       ⋯  β_{N−1}^1
  ⋮           ⋮           ⋱  ⋮
  β_0^{N−1}   β_1^{N−1}   ⋯  β_{N−1}^{N−1} ]     with β_i ≠ β_j for i ≠ j

These codes can correct at least (N − K)/2 errors.
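Over C, with the β_i chosen as the N-th roots of unity, this Vandermonde matrix is exactly the DFT matrix. A small sketch (using the e^{−j2πi/N} sign convention so it lines up with numpy's FFT):

```python
import numpy as np

# Vandermonde matrix on the complex N-th roots of unity beta_i = exp(-2j*pi*i/N).
# Entry (n, i) is beta_i**n, which is the (n, i) entry of the DFT matrix.
N = 8
beta = np.exp(-2j * np.pi * np.arange(N) / N)
V = np.vander(beta, N, increasing=True).T   # row n holds beta_i**n

F = np.fft.fft(np.eye(N))                   # the DFT matrix, for comparison
print(np.allclose(V, F))                    # True
```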
Solution 1 – Coding with the DFT

In a Galois field, only certain values of N have roots of unity.

In the complex field C the roots of unity of order N are β_i = e^{j2πi/N} – the DFT matrix.

These codes are known as the BCH codes.

A codeword c is generated by evaluating the IDFT of a zero-padded message vector m:

c = IDFT([m; 0; …; 0]) = G m
Solution 1 – Coding with the DFT

The syndrome s is part of the DFT of e:

s = H^T e

The complete equation is

[s′; s] = [G^T; H^T] e

If we had a way of obtaining s′ then we could compute the error e by an inverse transform.

As e is sparse, with only L values different from zero, each syndrome sample is a linear combination of only L previous samples:

s_n = Σ_{k=1}^{L} a_k s_{n−k}

If we know 2L values of the syndrome we can correct L errors.
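The recurrence can be verified numerically. The sketch below takes the full DFT of a 2-sparse error (N = 16, errors at {2, 14}, as in the slides' example), solves for (a_1, a_2) from four syndrome samples, and checks that they predict all the remaining ones:

```python
import numpy as np

# The DFT of an L-sparse vector obeys an order-L linear recurrence
# s_n = a_1 s_{n-1} + ... + a_L s_{n-L}. Illustration with L = 2 errors:
N = 16
e = np.zeros(N)
e[2], e[14] = 1.0, 1.0            # two errors, as in the example
s = np.fft.fft(e)                 # all N "syndrome" samples

# Solve for (a_1, a_2) from two consecutive recurrence equations:
#   s_3 = a_1 s_2 + a_2 s_1,   s_4 = a_1 s_3 + a_2 s_2
A = np.array([[s[2], s[1]],
              [s[3], s[2]]])
a = np.linalg.solve(A, np.array([s[3], s[4]]))

# ...and check that they predict every later syndrome sample.
pred = a[0] * s[4:-1] + a[1] * s[3:-2]
print(np.allclose(pred, s[5:]))   # True
```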
Solution 1 – Decoding Example

[Figure: the error vector over n = 0…15, with two unit spikes]

Consider the following example:
  - N = 16
  - two errors at J = {2, 14}
Solution 1 – Decoding Example

[Figure: the syndrome samples s_k for k = 1…16]

In this example, due to the error vector symmetry, the syndrome is a real vector.

We have the following difference equation:

s_n = a_1 s_{n−1} + a_2 s_{n−2}

We have two unknowns, and with the four known elements of the syndrome we can form

s_3 = a_1 s_2 + a_2 s_1
s_4 = a_1 s_3 + a_2 s_2
Solution 1 – Stability Problems

Due to the structure of the coding matrix G and the parity check matrix H, the syndrome reconstruction is very sensitive to bursts of errors.

This is a direct consequence of the structure of the matrix H: contiguous row vectors are almost collinear, leading to a badly conditioned system of equations.

To improve the reconstruction stability, we have to modify the structure of H.

Real-number codes have stability problems; over finite fields these codes behave poorly for burst errors.

Conclusion: the coding matrix structure is fundamental in both the real and finite-field cases.
Solution 1 + 1/2 – Turbo Codes

The first attempt to randomise the coding matrix structure was achieved with turbo codes.

The message is coded with two different coding matrices, usually the DFT and a column-permuted version of it.

These codes perform better than the BCH codes in both fields; they come close to the Shannon limit.

[Diagram: m feeds a DFT coder producing C0 and, through a permutation P, a second DFT coder producing C1]
Solution 2 – Coding with Random Matrices

Consider an N×K random real-number coding matrix G:

c = G m

To correct errors we need to evaluate the syndrome with a parity check matrix H, which must obey the condition

H^T G = 0

The columns of H should be orthogonal to the columns of G. Such an H can be obtained by applying the Gram-Schmidt orthogonalization algorithm to an N×N random matrix.
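One concrete way to build such an H for a given random G (numerically equivalent to completing G by Gram-Schmidt) is to take an orthonormal basis of the orthogonal complement of range(G) from the full SVD; the sizes and seed are illustrative:

```python
import numpy as np

# For a given G, take H as the left singular vectors of G beyond its rank:
# they span the orthogonal complement of range(G), so H^T G = 0.
rng = np.random.default_rng(3)
N, K = 6, 3
G = rng.standard_normal((N, K))

U, _, _ = np.linalg.svd(G, full_matrices=True)
H = U[:, K:]                                  # N x (N-K) parity check block

print(np.allclose(H.T @ G, 0))                # True: valid parity checks
print(np.allclose(H.T @ H, np.eye(N - K)))    # True: orthonormal columns
```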
Solution 2 – Coding with Random Matrices

Problem: how to solve the underdetermined system of equations

s = H^T e

As H has no structure, a general method must be found.

Additional restrictions must be applied to the vector e in order to define a unique solution, e.g.:
  - minimum energy: min ‖e‖_2 (L2 norm)
  - sparsest: min ‖e‖_0 (L0 pseudonorm)
Solution 2 – L0 to L1 Equivalence

Donoho and Elad showed in 2001, empirically and theoretically, that instead of solving the hard L0 problem to find the sparsest solution

min ‖e‖_0  s.t.  s = H^T e

one can solve the easier L1 problem and, under certain conditions, still obtain the same sparsest solution:

min ‖e‖_1  s.t.  s = H^T e

This problem can be solved by linear programming, using the simplex algorithm or interior-point methods.
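A sketch of the L1 decoder as a linear program: split e = u − v with u, v ≥ 0 and minimize sum(u) + sum(v). The sizes, seed, and sparsity level below are illustrative assumptions chosen well inside the L1 recovery regime:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||e||_1  s.t.  A e = s, recast as an LP.
rng = np.random.default_rng(4)
N, K = 20, 10
A = np.linalg.qr(rng.standard_normal((N, N)))[0][:, K:].T  # A = H^T, (N-K) x N

e_true = np.zeros(N)
e_true[[3, 12]] = [1.0, -2.0]            # sparse error vector
s = A @ e_true                            # observed syndrome

c = np.ones(2 * N)                        # objective: sum(u) + sum(v)
A_eq = np.hstack([A, -A])                 # constraint: A u - A v = s
res = linprog(c, A_eq=A_eq, b_eq=s, bounds=(0, None))
e_hat = res.x[:N] - res.x[N:]

print(np.allclose(e_hat, e_true, atol=1e-6))   # the sparse e is recovered
```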
Solution 2 – Coding with Random Matrices

We can write the equation to solve in the following form:

[s_1; …; s_{N−K}] = e_1 h_1 + e_2 h_2 + ⋯ + e_N h_N

The syndrome s is a linear combination of L vectors h_i.

We want to find the linear combination that best "explains" the syndrome using the smallest number of vectors h_i.
Solution 2 – Coding with Random Matrices

L < (1 + 1/M(H^T)) / 2 = ebp

ebp is the equivalent break point, an estimate of the maximum number of correctable errors.

M(A) is the mutual incoherence of the matrix A:

Definition:  M(H^T) = max_{i≠j} |h_i^T h_j|,  such that ‖h_k‖_2 = 1

How to choose H?
  - all sets of N − K columns of H^T should be linearly independent
  - ideally, h_i^T h_j ≈ 0 for i ≠ j
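Mutual incoherence and the resulting break-point estimate are easy to compute. The function name and the random test matrix below are illustrative:

```python
import numpy as np

# Mutual coherence M(A): largest absolute inner product between
# distinct, L2-normalized columns of A.
def mutual_coherence(A):
    cols = A / np.linalg.norm(A, axis=0)   # normalize each column
    gram = np.abs(cols.T @ cols)
    np.fill_diagonal(gram, 0.0)            # ignore the i == j terms
    return gram.max()

rng = np.random.default_rng(5)
Ht = rng.standard_normal((10, 40))         # a random 10 x 40 stand-in for H^T
M = mutual_coherence(Ht)
ebp = (1 + 1 / M) / 2                      # estimated number of correctable errors
print(0 < M < 1, ebp > 1)
```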
Solution 2 – Coding with Random Matrices

Random matrices are the solution.
Solution 3 – Coding with Random Sparse Matrices

With random sparse matrices it is possible to find efficient algorithms to code and decode.

These algorithms make use of the sparsity and avoid the slower L1 optimisation.

We will introduce the following types of codes that use sparse matrices:
  - LDPC codes – Low-Density Parity-Check codes [Gallager 1968]
  - LT codes – the first rateless erasure codes [Luby 2002]
  - Online codes – almost the first rateless erasure codes [Maymounkov 2002]
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 48 / 68
Solution 3
LDPC codes
The parity-check matrix is highly sparse: the number of nonzero elements grows linearly with N
Due to the sparsity, low-complexity algorithms exist
The parity-check matrix can be generated randomly but must obey certain rules
Usually N is very large (1000 to 10000 or more)
Usually the coding matrix is not sparse, which implies a coding complexity quadratic in N
As N becomes large, LDPC codes approach the Shannon limit [MacKay 1999]
An “optimal” LDPC code can get within ≈ 0.005 dB of channel capacity
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 49 / 68
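As a toy illustration of the sparsity claim above, the sketch below builds a random column-regular parity-check matrix. The function name `random_ldpc_parity_check`, the sizes, and the fixed column weight of 3 are all invented for this example; real LDPC constructions impose extra rules (e.g. avoiding short cycles in the Tanner graph) that are omitted here.

```python
import random

def random_ldpc_parity_check(n, m, col_weight=3, seed=0):
    """Toy sketch: random sparse parity-check matrix (m rows, n columns).

    Each column receives exactly `col_weight` ones in randomly chosen
    rows, so the total number of nonzeros is col_weight * n -- linear
    in n, as the slide states. Real LDPC designs add further rules,
    omitted in this sketch.
    """
    rng = random.Random(seed)
    H = [[0] * n for _ in range(m)]
    for col in range(n):
        for row in rng.sample(range(m), col_weight):  # distinct rows
            H[row][col] = 1
    return H

H = random_ldpc_parity_check(n=20, m=10)
print(sum(map(sum, H)))  # 3 * 20 = 60 nonzeros
```

Doubling n doubles the nonzero count, whereas a dense m × n matrix would grow as m · n.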
Solution 3
LDPC codes example
The LDPC parity-check matrix can be represented by a Tanner graph
$$H^T = \begin{bmatrix} 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{bmatrix}, \qquad s = H^T e$$
Variable nodes: $e_1, e_2, \ldots, e_8$; check nodes: $s_1, s_2, s_3, s_4$
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 50 / 68
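The syndrome $s = H^T e$ of the example can be checked numerically over GF(2); a minimal sketch using the slide's 4 × 8 matrix, with 0-based list indices in place of the $e_1 \ldots e_8$ / $s_1 \ldots s_4$ labels:

```python
# H^T from the Tanner-graph example: row j lists the variable nodes
# connected to check node s_{j+1}.
HT = [
    [1, 0, 1, 0, 0, 1, 1, 0],
    [0, 1, 1, 0, 1, 0, 1, 0],
    [1, 0, 0, 1, 1, 0, 0, 1],
    [0, 0, 0, 1, 0, 1, 1, 1],
]

def syndrome(HT, e):
    """s = H^T e over GF(2): mod-2 sum of the e-bits each check selects."""
    return [sum(h * x for h, x in zip(row, e)) % 2 for row in HT]

# A single error at e4 (index 3) fires exactly the checks connected
# to that variable node in the graph: s3 and s4.
print(syndrome(HT, [0, 0, 0, 1, 0, 0, 0, 0]))  # [0, 0, 1, 1]
```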
Outline
1 Linear Codes in any Field
  Correcting Erasures
  Correcting Errors
2 Coding Matrices
  Structured Matrices
  Random Matrices
  Sparse Random Matrices
3 Coding with Sparse Random Matrices
  Fountain Codes
  Applications
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 51 / 68
Fountain Codes
What is a Digital Fountain?
It is a new paradigm for data transmission that changes the standard approach, where a user must receive an ordered stream of data symbols, to one where the user must simply receive enough symbols to reconstruct the original information.
With a Digital Fountain it is possible to generate an infinite data stream from a K-symbol file. Once the receiver gets any K symbols from the stream, it can reconstruct the original message.
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 52 / 68
Fountain Codes
Digital Fountain?
The name Digital Fountain comes from the analogy with a water fountain filling a glass of water: the glass just has to be filled up, not with any specific drops of water.
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 53 / 68
Fountain Codes
Concept
One linear combination
$$\begin{bmatrix} c_1 \end{bmatrix}_{1 \times 1} = \begin{bmatrix} -\, g_1\, - \end{bmatrix}_{1 \times K} \begin{bmatrix} m \end{bmatrix}_{K \times 1}$$
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 54 / 68
Fountain Codes
Concept
Two linear combinations
$$\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}_{2 \times 1} = \begin{bmatrix} -\, g_1\, - \\ -\, g_2\, - \end{bmatrix}_{2 \times K} \begin{bmatrix} m \end{bmatrix}_{K \times 1}$$
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 55 / 68
Fountain Codes
Concept
Infinite number of linear combinations
$$\begin{bmatrix} c_1 \\ c_2 \\ \vdots \end{bmatrix}_{N \times 1} = \begin{bmatrix} -\, g_1\, - \\ -\, g_2\, - \\ \vdots \end{bmatrix}_{N \times K} \begin{bmatrix} m \end{bmatrix}_{K \times 1}$$
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 56 / 68
Fountain Codes
Concept
In Fountain Codes, each row $g_i$ of G is generated on the fly
Each $g_i$ has only a finite number of 1’s (its degree)
The number of 1’s is a random variable with distribution ρ
The symbols to combine (XOR) are chosen at random
We can generate as many linear combinations as needed
The receiver collects codewords until it has enough to decode the original message
The K original symbols can be recovered from K(1 + ε) coded symbols with probability 1 − δ
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 57 / 68
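The bullets above can be sketched as a generator that emits codewords on demand. The degree distribution used here is a made-up toy, not the carefully designed LT/Online ρ, and `fountain_encoder` is a name invented for this example:

```python
import random

def fountain_encoder(message, rho, seed=None):
    """Sketch of a fountain encoder over integer-valued symbols.

    `rho` is an assumed degree distribution given as (degree, probability)
    pairs. Each codeword: draw a degree d from rho, pick d distinct
    source symbols at random, and XOR them together. Yields an endless
    stream of (chosen_indices, xor_of_symbols) pairs.
    """
    rng = random.Random(seed)
    degrees = [d for d, _ in rho]
    weights = [p for _, p in rho]
    K = len(message)
    while True:
        d = rng.choices(degrees, weights=weights)[0]
        idx = rng.sample(range(K), min(d, K))   # distinct symbol indices
        c = 0
        for i in idx:
            c ^= message[i]                      # XOR the chosen symbols
        yield sorted(idx), c

msg = [5, 9, 12, 7]                              # K = 4 source symbols
enc = fountain_encoder(msg, rho=[(1, 0.3), (2, 0.5), (3, 0.2)], seed=1)
for _ in range(3):
    print(next(enc))
```

In practice the index set is not sent explicitly: the sender transmits a short seed and the receiver regenerates the same pseudorandom row $g_i$, so codewords remain useful regardless of order or losses.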
Online Code
Decoding
1 Find a codeword c with all its message symbols decoded except one
2 Recover that message symbol: $m_x = c \oplus m_1 \oplus m_2 \oplus \cdots \oplus m_{i-1}$, where $m_1, m_2, \ldots, m_{i-1}$ are the recovered symbols associated with c
3 Apply the previous steps until no message symbols are left
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 58 / 68
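The three steps above are the classic peeling decoder. A minimal sketch, with codewords represented as (index set, value) pairs; it is fed the values from the worked example (c0 = s0 = 1, c1 = s0 ⊕ s1 = 0, c2 = s0 ⊕ s2 = 1; c3 is left out since its connections are not shown):

```python
def peel_decode(codewords, K):
    """Peeling decoder sketch (steps 1-3 of the slide).

    `codewords` is a list of (set_of_message_indices, value) pairs.
    Repeatedly find a codeword covering exactly one still-unknown
    symbol, recover it by XOR-ing out the already-known ones, repeat.
    Returns the recovered symbols (None where decoding got stuck).
    """
    recovered = [None] * K
    progress = True
    while progress:
        progress = False
        for idx, value in codewords:
            unknown = [i for i in idx if recovered[i] is None]
            if len(unknown) == 1:           # step 1: degree one after peeling
                x = value
                for i in idx:
                    if recovered[i] is not None:
                        x ^= recovered[i]   # step 2: XOR known symbols out
                recovered[unknown[0]] = x
                progress = True             # step 3: repeat until done
    return recovered

cw = [({0}, 1), ({0, 1}, 0), ({0, 2}, 1)]
print(peel_decode(cw, K=3))  # [1, 1, 0]
```

This matches the result of the decoding example slides: s0 = 1, s1 = 1, s2 = 0.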
Online Code
Decoding Example 1
Received codewords: c0 = 1, c1 = 0, c2 = 1, c3 = 1
Symbols to decode: s0, s1, s2
s0 = c0 = 1
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 59 / 68
Online Code
Decoding Example 2
Received codewords: c0 = 1, c1 = 0, c2 = 1, c3 = 1
Decoded so far: s0 = 1
s1 = c1 ⊕ s0 = 1
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 60 / 68
Online Code
Decoding Example 3
Received codewords: c0 = 1, c1 = 0, c2 = 1, c3 = 1
Decoded so far: s0 = 1, s1 = 1
s2 = c2 ⊕ s0 = 0
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 61 / 68
Online Code
Decoding Example 4
Received codewords: c0 = 1, c1 = 0, c2 = 1, c3 = 1
All symbols decoded: s0 = 1, s1 = 1, s2 = 0
Done
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 62 / 68
Online Code
Distribution Example
[Figure: degree distribution ρ with δ = 0.05, ε = 0.5; probability vs. degree]
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 63 / 68
Online Code
Decoding simulation
[Figure: percentage of decoded symbols per test run, with δ = 0.05, ε = 0.5; experimental results plotted against the (1 − δ) line]
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 64 / 68
Outline
1 Linear Codes in any Field
  Correcting Erasures
  Correcting Errors
2 Coding Matrices
  Structured Matrices
  Random Matrices
  Sparse Random Matrices
3 Coding with Sparse Random Matrices
  Fountain Codes
  Applications
Jose Vieira (IEETA, Univ. Aveiro) MAPtele 2010 65 / 68