Error-Correcting Codes for Low-Delay Streaming Communications
by
Ahmed Badr
A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy
Graduate Department of Electrical and Computer Engineering (ECE), University of Toronto
© Copyright 2014 by Ahmed Badr
Abstract
Error-Correcting Codes for Low-Delay Streaming Communications
Ahmed Badr
Doctor of Philosophy
Graduate Department of Electrical and Computer Engineering (ECE)
University of Toronto
2014
This thesis develops a new class of error-correcting codes for low-delay streaming over packet-erasure
channels. Such codes must operate sequentially on the incoming source stream and must reconstruct
each source packet within a fixed delay of T packets. We show that both the fundamental limits and
the structural properties of such streaming codes differ from those of classical codes.
In our study, we consider successively finer approximations of burst-erasure channels that capture the
dominant error events associated with the streaming setup. In the basic model, we consider a channel that,
in any sliding window of length W, introduces either an erasure burst of length B or up to N erasures in
arbitrary positions. We show that there is an inherent tradeoff between the achievable values of B and N,
and we propose a class of codes, Maximum Distance and Span (MiDAS) codes, that are within one unit of
delay of the upper bound in this framework. We also show that the burst-erasure correction capability
is determined by the column span of the code, whereas the isolated-erasure correction capability is
determined by its column distance. We then consider a more sophisticated model that
introduces both a burst and an isolated erasure in the same window, and we propose another class of codes
that improves upon the performance of MiDAS codes. Through simulation results over the Gilbert-Elliott
and Fritchman channels, we show that our proposed codes outperform classical codes for a wide range of
channel parameters. We also consider a different extension of our basic model, where one source packet
arrives every M channel packets. We show that a simple adaptation of the streaming codes designed for
M = 1 is sub-optimal, and we propose a capacity-achieving construction for burst-erasure channels for any
M ≥ 1.
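The sliding-window channel C(N, B, W) described above admits a simple membership check. The following Python sketch (the function name and the binary-list representation of an erasure pattern are illustrative choices, not part of the thesis) tests whether a given pattern is admissible, i.e., whether every window of length W contains either at most N erasures or a single erasure burst of length at most B:

```python
def is_admissible(erasures, N, B, W):
    """Return True if the binary pattern (1 = erased, 0 = received)
    belongs to the sliding-window channel C(N, B, W)."""
    for start in range(len(erasures) - W + 1):
        window = erasures[start:start + W]
        positions = [i for i, e in enumerate(window) if e]
        if len(positions) <= N:
            continue  # at most N arbitrary erasures: allowed
        # Otherwise, the erasures must form one contiguous burst
        # of length at most B within this window.
        span = positions[-1] - positions[0] + 1
        if span != len(positions) or span > B:
            return False
    return True
```

For example, with (N, B, W) = (2, 3, 5) a burst of three consecutive erasures is admissible, while three isolated erasures falling inside one window of length five are not.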
In the final part of the thesis, we study Multicast Streaming Codes (Mu-SCo) that simultaneously
serve two users: one user whose channel introduces a burst erasure of length B1 and who tolerates a delay
of T1, and a second user whose channel introduces a burst erasure of length B2 and who tolerates a delay
of T2. We show that the streaming capacity intricately depends on the burst and delay parameters, and we
provide explicit constructions that attain the capacity for a wide range of parameters. A special class
of Mu-SCo, the Diversity Embedded Streaming Codes (DE-SCo), achieves the minimum possible delay for
the weaker user without sacrificing the performance of the stronger user.
A common principle in all our code constructions is a layering approach. We split each source packet
into two groups, apply a different level of error protection to each group, and combine the parity-checks
in a careful manner so as to preserve sequential recovery of the source packets from a variety of
erasure patterns.
Supervisor: Ashish Khisti
Title: Assistant Professor of Electrical and Computer Engineering (ECE), University of Toronto
Acknowledgements
First and foremost, I praise God, the Almighty, for providing me with this opportunity and granting me
the capability to proceed successfully.
I would like to express my deep appreciation and thanks to my thesis advisor, Prof. Ashish Khisti, for
his guidance, support and encouragement. He has been a great role model for me as a mentor, researcher
and instructor. I remember he used to tell my colleagues and collaborators something like “Ahmed is
our expert in streaming codes” to encourage me. Under his guidance, I successfully overcame many
difficulties and learned a lot. He contributed to my fruitful graduate school experience by encouraging
my research work, introducing me to collaborators and demanding high quality research.
I would like to thank my committee members, Prof. Frank Kschischang and Prof. Stark Draper for
the fertile discussions and valuable suggestions. Prof. Frank has done an exceptional job in encouraging
me through his unique advice on both the academic and personal levels. Prof. Stark has shown
remarkable dedication and direction through his invaluable feedback, suggestions and encouragement.
Additionally, I would like to thank Prof. Wei Yu and Prof. Roxana Smarandache for agreeing to serve on
my dissertation committee and for the priceless suggestions and feedback that have significantly improved
my final draft.
My sincere regards go to all my course instructors, especially Prof. Frank Kschischang, Prof.
Wei Yu, Prof. Ben Liang, Prof. Teng Joon Lim and Prof. Bruce Francis, for the valuable lectures and
materials, which have had a direct impact on my research. Besides research, I was given the opportunity to
TA with Prof. Ashish Khisti, Prof. Wei Yu, Prof. Kostas Plataniotis, Prof. Raymond Kwong and Prof.
Stark Draper, which has significantly improved my teaching skills.
I acknowledge my collaborators at the University of Toronto, Devin Lui, Louis Tan and Pratik Patil;
at MIT, Emin Martinian; and at HP Labs, Wai-tian (Dan) Tan and John Apostolopoulos (currently at
Cisco Systems), who have generously provided me with new ideas and suggestions. Extending the main
model to various practical models such as multicast, parallel channels and unequal source-channel rates
would have been much harder without the help and insights of Devin, Louis and Pratik. The idea Emin
suggested of combining two codes to construct a multicast code has tremendously helped in the progress
we achieved in this setup. Dan and John have been extremely generous with their time, ideas, feedback
and suggestions despite the long distance between us.
It has been a privilege to interact with several students at the University of Toronto. I feel honored
to know Sameh Sorour, Hayssam Dahrouj, Gokul Sridharan, Louis Tan, Farrokh Etezadi, Pratik Patil,
Devin Lui and Rafid Mahmood, who have shaped my graduate school life.
A special thanks goes to Judith Levene, Darlene Gorzo, Diane Silva and Jayne Leake for their
tremendous administrative support. It would have been impossible to conduct such
work without their help.
Finally, and most importantly, I would like to express my deep gratitude to my wife
Mai for her support and encouragement during the last two years of my PhD, in which I was most productive.
It would have been impossible without your sincere love and endless patience. Additionally, I would
like to thank my parents, Atef and Wafaa, for standing by me through all my ups and downs. You are
wonderful parents and wonderful friends. Aliaa and Alaa, I could not ask for better sisters and friends.
Your faith in me was crucial in keeping me motivated and encouraged to proceed successfully.
Dedication
To my wife, daughter, parents and sisters.
Bibliographic Note
• Portions of Chapters 3 and 4 are presented at the International Conference on Computer
Communications (INFOCOM) [1], the Canadian Workshop on Information Theory (CWIT) [2] and the
International Symposium on Information Theory (ISIT) [3], with co-authors Ashish Khisti, Wai-Tian
Tan and John Apostolopoulos.
• Chapter 5 is a joint work with Pratik Patil, Ashish Khisti and Wai-Tian Tan. It is presented at the
Canadian Workshop on Information Theory [4] and the Asilomar Conference on Signals, Systems
& Computers [5], where it received the third best student paper award. This work, together with
that in Chapter 3, is also submitted to the IEEE Transactions on Information Theory [6].
• Chapter 7 is presented at the Global Communications Conference (GLOBECOM) [7] with
co-authors Ashish Khisti and Emin Martinian. This work is also published in the IEEE Journal on
Selected Areas in Communications (JSAC) [8].
• Chapter 8 is presented at the Allerton Conference on Communication, Control, and Computing [9]
with co-authors Devin Lui and Ashish Khisti and is also submitted to the IEEE Transactions on
Information Theory [10].
Contents
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Streaming Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Forward Error Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Channel Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4.1 Gilbert-Elliott Channel Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4.2 Fritchman Channel Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Research Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.6 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6.1 Structural Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6.2 Tree Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6.3 Streaming Codes for Burst Erasures . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6.4 m-MDS Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6.5 LDPC Convolutional Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6.6 Rateless Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6.7 Channels with Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.7 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 Background 10
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.1 Block Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.2 Convolutional Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Channel Model and Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.1 Isolated Erasure Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.2 Burst Erasure Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.5 Code Constructions for Isolated Erasure Channel . . . . . . . . . . . . . . . . . . . . . . . 15
2.5.1 Interleaved-MDS Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.5.2 m-MDS Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.6 Code Constructions for Burst Erasure Channel . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6.1 Maximally-Short Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6.2 MS Codes using m-MDS Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.7 Converse Proofs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.7.1 Isolated Erasure Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.7.2 Burst Erasure Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3 Maximum Distance And Span (MiDAS) Codes 25
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.4 Upper bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.5 Maximum Distance And Span (MiDAS) Codes . . . . . . . . . . . . . . . . . . . . . . . . 29
3.5.1 Code Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.5.2 Example - MiDAS (N,B, T ) = (2, 3, 4) and W ≥ T + 1 = 5 . . . . . . . . . . . . . 31
3.6 MiDAS Codes using MDS Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.6.1 Example - MiDAS-MDS (N,B, T ) = (2, 3, 4) and W ≥ T + 1 = 5 . . . . . . . . . . 33
3.6.2 Example - MiDAS-MDS (N,B, T ) = (2, 3, 5) and W ≥ T + 1 = 6 . . . . . . . . . . 35
3.6.3 General Code Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.7 Non-Ideal Erasure Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.8 Numerical Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.9 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.9.1 Gilbert-Elliott Channel Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.9.2 Fritchman Channel Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.10 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4 Partial Recovery Codes (PRC) 48
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.2 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3 Partial Recovery Codes (PRC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.3.1 Code Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.3.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.3 Robust PRC Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4 PRC using MDS codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4.1 Code Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.5 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5 Unequal Source-Channel Rates 62
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.2 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.3 Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.4 Performance Analysis of Baseline Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.4.1 m-MDS Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.4.2 Maximally Short (MS) Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.5 Capacity Expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.6 Code Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.6.1 Encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.6.2 Decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.6.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.7 Converse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.8 Robust Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.9 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.10 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6 Algebraic Properties of Streaming Codes 81
6.1 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.2 Column Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.3 Column Span . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.4 Column Distance-Column Span Tradeoff . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.5 Symbol Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.5.1 Column Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.5.2 Column Span . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
7 Diversity Embedded Streaming Codes (DE-SCo) 89
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
7.2 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.3 Properties of MS Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
7.3.1 Vertical Interleaving for (αB,αT ) MS . . . . . . . . . . . . . . . . . . . . . . . . . 91
7.3.2 Memory of MS Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
7.3.3 Urgent and Non-Urgent Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
7.3.4 Off-Diagonal Interleaving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
7.3.5 Source Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
7.4 Interference Avoidance Streaming Codes (IA-SCo) . . . . . . . . . . . . . . . . . . . . . . 95
7.4.1 IA-SCo - Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
7.4.2 General Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
7.5 Diversity Embedded Streaming Codes (DE-SCo) . . . . . . . . . . . . . . . . . . . . . . . 98
7.5.1 Converse Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
7.5.2 DE-SCo - Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
7.5.3 DE-SCo Construction for Integer α . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.5.4 DE-SCo Construction for Non-Integer α . . . . . . . . . . . . . . . . . . . . . . . . 106
7.6 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
7.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
8 Multicast Streaming Codes (Mu-SCo) 111
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
8.2 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
8.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
8.3.1 Large-Delay Regime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
8.3.2 Low-Delay Regime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
8.4 Multicast Capacity in Large-Delay Regime (Theorem 8.1) . . . . . . . . . . . . . . . . . . 116
8.4.1 Achievability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
8.4.2 Converse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
8.5 Achievability Scheme in Region (e) (Theorem 8.2) . . . . . . . . . . . . . . . . . . . . . . 118
8.5.1 Decoding at Receiver 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
8.5.2 Decoding at Receiver 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
8.5.3 Construction of C3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
8.5.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
8.6 Converse Proof in Region (e) (Theorem 8.2) . . . . . . . . . . . . . . . . . . . . . . . . . . 125
8.7 Upper and Lower Bounds in Region (f) (Theorem 8.3) . . . . . . . . . . . . . . . . . . . . 128
8.7.1 Lower Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
8.7.2 Upper Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
8.8 Special Cases in Region (f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
8.8.1 Achievability Scheme in Region (f) at T1 = B1 (Proposition 8.1) . . . . . . . . . . 130
8.8.2 Converse Proof in Region (f) at T2 = B2 (Proposition 8.2) . . . . . . . . . . . . . . 131
8.8.3 Conjectured Capacity in Region (f) . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
8.9 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
9 Conclusion 137
APPENDICES 139
A Background 139
A.1 Proof of Lemma 2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
A.2 Information Theoretic Converse of Theorem 2.3 . . . . . . . . . . . . . . . . . . . . . . . . 140
B Maximum Distance And Span (MiDAS) Codes 144
B.1 Decoding Analysis of MiDAS-MDS code . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
B.1.1 Burst Erasure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
B.1.2 Isolated Erasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
C Partial Recovery Codes (PRC) 146
C.1 Decoding Analysis of Partial Recovery Codes in Theorem 4.1 . . . . . . . . . . . . . . . . 146
C.1.1 Erasure Burst followed by an Isolated Erasure . . . . . . . . . . . . . . . . . . . . . 146
C.1.2 Isolated Erasure followed by an Erasure Burst . . . . . . . . . . . . . . . . . . . . . 149
C.2 Proof of Lemma C.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
D Unequal Source-Channel Rates 152
D.1 Proof of Lemma 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
E Diversity Embedded Streaming Codes (DE-SCo) 155
E.1 Proof of Proposition 7.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
E.2 Example of DE-SCo {(B1, T1), (B2, T2)} = {(2, 3), (4, 8)} . . . . . . . . . . . . . . . . . . . 155
E.2.1 Encoder: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
E.2.2 Decoder: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
E.3 Proof of Recursion for DE-SCo with Integer α . . . . . . . . . . . . . . . . . . . . . . . . . 157
E.4 Proof of Recursion for DE-SCo with Non-integer α . . . . . . . . . . . . . . . . . . . . . . 158
F Multicast Streaming Codes (Mu-SCo) 160
F.1 Example - {(1, 2), (2, 4)} in Region (b) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
F.2 Proof of Lemma 8.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
F.3 Proof of Lemma 8.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
F.3.1 T1 > 2(B1 − k) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
F.4 Examples of Code Construction in Region (e) . . . . . . . . . . . . . . . . . . . . . . . . . 165
F.4.1 Example (1): {(4, 5), (7, 10)} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
F.4.2 Example (2): {(3, 5), (7, 9)} ⇒ k = 1,m = 1 . . . . . . . . . . . . . . . . . . . . . . 167
F.5 Proof of Lemma 8.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Bibliography 172
List of Tables
1.1 Different multimedia streaming applications and their corresponding Medium Access Control
Service Data Unit (MSDU) in bytes, bitrate in Mbps, and maximum tolerable delay in ms,
according to the IEEE 802.11 standard in [11]. All listed applications use the User Datagram
Protocol (UDP) as the transport-layer protocol. The last column indicates the delay in
packets and is computed as Delay(Pkts) = (Delay(ms) × 10^-3) × (Bitrate(Mbps) × 10^6) / (MSDU(B) × 8). . . . . . . . . . . . . . . . . . . . . 2
1.2 A systematic erasure code of rate R = 1/2 which simultaneously recovers a burst of length
B = 3 with a maximum delay of T = 5 packets. Each column denotes a channel packet
transmitted at the time index shown in the first row. Shaded columns are erased channel
packets while the rest are perfectly received at the destination. . . . . . . . . . . . . . . . 3
2.1 A (2,3) MS code construction where each source packet s[.] is divided into three symbols
s0[.], s1[.] and s2[.] and a (5, 3) LD-BEBC code is then applied across the diagonal to
generate two parity-check symbols generating a rate 3/5 code. Each column corresponds
to one channel packet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1 MiDAS code construction for (N,B) = (2, 3), a delay of T = 4 and rate R = 4/9. . . . . . 31
3.2 MiDAS-MDS code construction for (N,B) = (2, 3), a delay of T = 4 and rate R = 4/9.
This construction replaces the m-MDS codes in MiDAS codes with block MDS codes. . . . 33
3.3 MiDAS-MDS code construction for (N,B) = (2, 3), a delay of T = 5 and rate R = 10/19
with block MDS constituent codes. We note that each of the parity-check symbols p^v_j[t]
is combined with u_j[t - 5] for j ∈ {0, 1, . . . , 11}, but these are omitted for simplicity. . . . 35
3.4 Achievable (N,B) over channel C(N,B,W ≥ T + 1) for different code constructions.
Similar tradeoffs for the first three codes can be achieved for W < T + 1 by replacing T
with W − 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5 Channel and code parameters used in Figures 3.6a and 3.7a . . . . . . . . . . . . . . . . . 41
3.6 Channel and code parameters used in Figures 3.9 and 3.10 . . . . . . . . . . . . . . . . . . 44
4.1 A Partial Recovery Code with B = 3 achieving a rate of 5/9 for a delay of T = 7. . . . . . 52
4.2 Channel and Code Parameters used in Simulations. . . . . . . . . . . . . . . . . . . . . . . 60
5.1 Code construction for (M = 2, B = 3, T = 3) achieving a rate of R = 7/10. . . . . . . . . . 73
5.2 Channel and code parameters used in Figures 5.9 and 5.10 . . . . . . . . . . . . . . . . . . 77
7.1 An example illustrating a vertical interleaving of step size α = 2 used to construct a (2, 4)
MS code from a (1, 2) MS code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
7.2 A source expansion example with parameters (p, 3). Each source packet s[i] is expanded
into 3 packets s[3i], s[3i+1] and s[3i+2]. A (3B, T) code is then applied to s[·] to generate x[·].
The channel packets in the original stream are denoted by x[i] = (x[3i], x[3i+1], x[3i+2]).
Shaded cells are erased channel packets while the remaining ones are perfectly received
by the destination. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
7.3 Rate 2/3 code constructions that satisfy user 1 with (B1, T1) = (1, 2) and user 2 with
B2 = 2. A delay of T2 = αT + T = 6 is achieved using IA-SCo, while the minimum T2 = αT + B = 5
is achieved using DE-SCo, which is discussed in Section 7.5. . . . . . . . . . . . . . . . . . 96
8.1 Summary of capacity expressions, code constructions and converse proofs for all regions
in the considered multicast model with two users of parameters {(B1, T1), (B2, T2)}. The
acronym PEC stands for “Periodic Erasure Channel”. . . . . . . . . . . . . . . . . . . . . 116
8.2 Mu-SCo construction for (B1, T1) = (4, 4) and (B2, T2) = (5, 6). This point achieves the
upper bound C_f^+ given in Theorem 8.3, as stated in Proposition 8.1, since T1 = B1 = 4. . . 130
C.1 The decoding analysis of PRC codes for various erasure patterns in Channel II. The
erasures are shaded grey boxes, whereas the parity-check packets used to recover the v[·]
packets are marked using bold borders. The u[·] packets are recovered by subtracting the
combined p1[·] parity-checks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
E.1 Rate 3/5 DE-SCo construction that satisfies the region (a) point described by user 1 with
(B1, T1) = (2, 3) and user 2 with (B2, T2) = (2B1, 2T1 + B1) = (4, 8). . . . . . . . . . . . 156
F.1 Rate 3/5 Mu-SCo construction that satisfies the region (b) point described by user 1 with
(B1, T1) = (1, 2) and user 2 with (B2, T2) = (2, 4). . . . . . . . . . . . . . . . . . . . . . . 161
F.2 Rate 5/11 Mu-SCo construction for the point (B1, T1) = (4, 5) and (B2, T2) = (7, 10)
lying in region (e). This point also illustrates case (A), defined by T1 ≤ 2(B1 − k).
For the causal part of the parity-check symbols of C1 shifted back to time i − t, we write
←p_j[i] instead of ←p_j[i]|_{i−t} for simplicity. . . . . . . . . . . . . . . . . . . . . . . . . 165
F.3 Rate 5/11 Mu-SCo construction for the point (B1, T1) = (3, 5) and (B2, T2) = (7, 9)
lying in region (e). This point also illustrates case (B), defined by T1 > 2(B1 − k).
For the causal part of the parity-check symbols of C1 shifted back to time i − t, we write
←p_j[i] instead of ←p_j[i]|_{i−t} for simplicity. . . . . . . . . . . . . . . . . . . . . . . . . 168
List of Figures
1.2 Burst Distribution for a Gilbert Channel with β = 0.2. . . . . . . . . . . . . . . . . . . . . 4
1.4 Burst Distribution associated with a Fritchman Channel with N + 1 = 10 states and
β = 0.6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 A diagram illustrating the three step approach used for designing low-delay streaming
codes for practical channels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 The periodic erasure channel used in the converse proof of Theorem 2.2. The shaded
packets are erased while the remaining ones are perfectly received by the destination. . . . 22
2.2 The periodic erasure channel used in the converse proof of Theorem 2.3. The shaded
packets are erased while the remaining ones are perfectly received by the destination. . . . 23
3.1 An example of the sliding window channel in Definition 3.1 with N = 2, B = 3 and
W = 5, i.e., C(2, 3, 5). In any sliding window of length W = 5, there is either a single
erasure burst of maximum length B = 3, or no more than N = 2 isolated erasures. The
shaded packets are erased while the remaining ones are perfectly received by the destination. 26
3.2 The periodic erasure channel in the proof of Theorem 3.1. The shaded packets are erased
while the remaining ones are perfectly received by the destination. . . . . . . . . . . . . . 28
3.3 A window of Teff + 1 channel packets showing the decoding steps of a MiDAS code when an
erasure burst takes place. Shaded columns are erased channel packets while the remaining
ones are perfectly received by the destination. In the case of isolated erasures, the v[·]
and u[·] packets are recovered separately using the p^v[·] parities in the window [0, Teff − 1]
and the p^u[·] parities in the window [0, Teff ], respectively. . . . . . . . . . . . . . . . . . 29
3.4 A Non-ideal erasure pattern in Section 3.7. Grey and white squares denote erased and
unerased packets respectively. In the window of length 6 spanning the interval [i, i + 5],
the pattern is neither a burst of length B = 3 nor N = 2 isolated erasures and is thus not
included in C(2, 3, 6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.5 Achievable tradeoff between N and B for different code constructions. The rate is fixed
to R = 0.6 and the delay is fixed to T = 40 and W = T + 1. The uppermost curve
marked with ‘o’ is the upper bound in (3.1). The MiDAS codes are shown with broken
lines marked with ‘×’ and are at most one delay unit from the upper bound. The E-RLC
codes in [1] are shown with broken lines marked with ‘△’. . . . . . . . . . . . . . . . . . . 41
3.6 Simulation Experiments for Gilbert-Elliott Channel Model with (α, β) = (5× 10−4, 0.5). . 42
3.7 Simulation Experiments for Gilbert-Elliott Channel Model with (α, β) = (5× 10−5, 0.2). . 43
3.8 Simulation over a Gilbert-Elliott Channel with (α, β) = (5 × 10−4, 0.5). All codes are
evaluated using a decoding delay of T = 12 packets and a rate of R = 12/23 ≈ 0.52. . . . 44
3.9 Simulation Experiments for Fritchman Channel Model with (N , α, β) = (8, 10−5, 0.5). . . 45
3.10 Simulation Experiments for Fritchman Channel Model with (N , α, β) = (11, 2× 10−5, 0.75). 46
4.1 An example of channel II in Definition 4.1: In any sliding window of length W = 5 there
is either a single erasure burst of length up to B = 3 and possibly one isolated erasure, or
N = 2 isolated erasures. This channel is denoted by CII(2, 3, 5). The shaded packets are
erased while the remaining ones are perfectly received by the destination. . . . . . . . . . 49
4.2 An isolated erasure associated with a burst in channel CII(N,B,W = 2T +B). Grey and
white squares denote erased and unerased packets respectively. . . . . . . . . . . . . . . . 50
4.3 Achievable rate of PRC and PRC-MDS codes designed for channel CII(N = 1, B,W ≥ 2T + B). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.4 Simulation Experiments for Gilbert-Elliott Channel Model with (α, β) = (10−5, 0.1). . . . 58
4.5 Simulation Experiments for Fritchman Channel Model with (N+1, α, β) = (9, 2×10−5, 0.5). 59
5.1 Each source packet s[i] arrives just before the transmission of X[i, :] and needs to be
reconstructed at the destination after a delay of T macro-packets. . . . . . . . . . . . . . . 63
5.2 Each source packet s[i] is split into M sub-packets, i.e., s[i] = (w[i, 1],w[i, 2], . . . ,w[i,M ]).
The expanded source stream is then encoded using a Maximally-Short code. The decoder
recovers each w[i, j] once y[i+T, j] is received which ensures that s[i] is recovered by the
end of the macro-packet i+ T . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.3 Achievable rates for different code constructions for the case of unequal source-channel
rates for the C(N = 1, B,W = M(T + 1)) channel. We fix the delay to T = 5 macro-
packets and let M = 20. The line marked with squares corresponds to the capacity
in Theorem 5.1. The line marked with circles corresponds to the rate achieved by the
adapted MS code (5.7) whereas the line marked with + corresponds to the rate of the
m-MDS code (5.6). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.4 Construction of Parity-Check Packets. As in the MS code, each source packet s[t] is
divided into two groups, uvec[t] and vvec[t]. An m-MDS code is applied to the vvec[·] packets and a repetition code is applied to the uvec[·] packets. The resulting parities are
then combined to generate the parity-check packets qvec[t] = pvec[t] + uvec[t− T ]. . . . . . 67
5.5 Reshaping of Channel Packets. The three groups of packets, uvec[i], vvec[i] and qvec[i]
are reshaped into U[i, :], V[i, :] and Q[i, :] which are denoted by vertically, diagonally and
grid hatched boxes, respectively. These reshaped packets are then concatenated to form
the channel macro-packet X[i, :]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.6 Encoding of source packets into macro-packets. Each source packet is split into two
groups. A repetition code is applied to the U[t, :] group with a delay of T macro-packets
and is denoted by vertically hatched boxes as shown in the first figure. An m-MDS code
is applied to the V[t, :] group which is denoted by diagonally hatched boxes to generate
the parity-checks P[i, :] denoted by the horizontally hatched boxes as shown in the second
figure. The combination of the resulting parity-checks of the two groups is indicated in
the last figure with grid hatched boxes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.7 Decoding for the burst pattern starting from x[i, 1]. The grey area denotes an erasure
burst of length B. The horizontally hatched parity-checks in the second figure are used
to recover the erased V[i, :], . . . ,V[i+ b, :] packets. The third figure shows the recovery of
u[i] using the parity-checks in macro-packet i+ T . . . . . . . . . . . . . . . . . . . . . . . 72
5.8 The periodic erasure channel used in the converse proof of Theorem 5.1. Grey and white
squares denote erased and unerased packets respectively. . . . . . . . . . . . . . . . . . . 74
5.9 Simulation over a Gilbert Channel with α = 10−5 and β varied on the x-axis. All codes
are of rate R = 9/14 and evaluated using a decoding delay of T = 4 macro-packets. Each
macro-packet consists of M = 20 channel packets. . . . . . . . . . . . . . . . . . . . . . . . 78
5.10 Simulation Experiments for Fritchman Channel Model withN+1 = 20 states and (α, β) =
(10−5, 0.5). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.1 The source stream {s[t]} is causally mapped into an output stream {x[t]}. The channel
of user i introduces an erasure burst of length Bi, and each user tolerates a delay of Ti,
for i = 1, 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.2 A vertical interleaving approach to construct a (2B, 2T ) MS code from a (B, T ) MS code. 91
7.3 One period illustration of the Periodic Erasure Channel for T +B < T2 ≤ αT +B. White
squares denote unerased packets. Black and grey squares denote erased packets to be
recovered using C1 and C2 respectively. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
7.4 One period illustration of the Periodic Erasure Channel for T2 ≤ T + B. White squares
denote unerased packets. Black and grey squares denote erased packets to be recovered
using C1 and C2 respectively. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
7.5 A {(2, 5) − (4, 12)} DE-SCo code construction is illustrated in the above figure. The
parity-check symbols p[t] and q[t] of a (2, 5) MS code along the main diagonal is added to
another (2, 5) MS code parity-check symbols y[t] and z[t] but applied along the opposite
diagonal and shifted by T +B = 7 (i.e., the two parity-checks at time instant t are
p[t] + y[t − 7] and q[t] + z[t − 7]). Shaded columns are erased channel packets while the
remaining ones are perfectly received by the destination. . . . . . . . . . . . . . . . . . . 102
7.6 Simulation experiments over a Gilbert channel. Each user sees a different Gilbert channel,
the first with α1 = 10−4, while the second with α2 = 10−5. . . . . . . . . . . . . . . . . . 108
7.7 Simulation experiments over a Fritchman channel. Each user sees a different instance of the
Fritchman channel, the first with α1 = 10−4 and a total of N1+1 = 5 states, whereas the
second with α2 = 10−5 and a total of N2 + 1 = 12 states. . . . . . . . . . . . . . . . . . . 109
8.1 Capacity behavior in the (T1, T2) plane. We hold B1 and B2 as constants with (B2 > B1),
so the regions depend on the relation between T1 and T2 only. The dashed line shows the
contour of constant capacity in regions (a), (b), (c) and (d). . . . . . . . . . . . . . . . . . 113
8.2 A graphical illustration of the structure of the code construction. The labels on the
right show the layers spanned by each set of parity-check symbols. The labels at the
bottom show the intervals in which each set of parity-check symbols combine erased source
symbols. Note that the construction x[i] in (8.24) involves an overlap between p1[·] and p2[·] as shown. The shaded packets cannot be recovered at user 2. To recover these we
use a third layer of parity-check packets p3[·] that are embedded in the last T1− (B1− k)
rows as shown. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
8.3 Main steps of finding the upper bound for the {(4, 5), (7, 10)} point lying in region (e)
through one period illustration of the Periodic Erasure Channel. Grey and white squares
denote erased and unerased packets respectively while hatched squares denote packets
revealed to the receiver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
8.4 One period of the periodic erasure channel used to prove an upper bound on capacity in
region (e). Grey and white squares denote erased and unerased packets respectively. . . . . 126
8.5 One period of the periodic erasure channel used to prove the first upper bound in region
(f). Grey and white squares denote erased and unerased packets respectively. . . . . . . . 129
8.6 Main steps of finding the upper bound for the {(2, 3), (4, 4)} point lying in region (f)
through one period illustration of the Periodic Erasure Channel. Grey and white squares
denote erased and unerased packets respectively. Note that the packet at time 2 is
recovered by both codes C1 and C2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
8.7 One period of the periodic erasure channel used to prove an upper bound on capacity
in region (f) for the special case T2 = B2. Grey and white squares denote erased and
unerased packets respectively. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
8.8 Capacity behavior in the (B2, T2) plane. We hold B1 and T1 as constants, so the regions
depend on the relation between T2 and B2 only. The dashed line gives the contour of
constant capacity in region (e) as well as in the special case of T1 = B1 in region (f). . . . 135
8.9 An example of region (f) in the (B2, T2) plane for (B1, T1) = (10, 16). The dashed lines
give some examples of the contour of constant conjectured capacity in region (f). This
conjecture is proved for the cases B2 = T1 which is the left vertical edge of the triangle,
T2 = T1 +B1 which is the upper horizontal edge of the triangle and T2 = B2 which is the
right 45-degrees edge. It is also proved for the special case of T1 = B1 which is not shown
in this figure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
A.1 An erasure channel with B erasures in a burst starting at time c used in proving L2 in
Lemma 2.1. Grey and white squares denote erased and unerased symbols respectively. . . 140
A.2 The periodic erasure channel used in proving the upper bound of the single user scenario
in Theorem 2.3, but with indication of which packets are in the groups Vi and Wi. Grey
and white squares denote erased and unerased packets respectively. . . . . . . . . . . . . . 141
A.3 A channel introducing a single burst of length B in the interval [j, j+B−1] used in proving
Lemma A.1. Grey and white squares denote erased and unerased packets respectively. . . 142
C.1 The periodic erasure channel used in proving Lemma C.1, and indicating the first and last
windows of interest, W0 and WB−1, respectively. Grey and white squares denote erased
and unerased packets respectively. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
D.1 Different erasure patterns considered in the analysis of the decoder. The index j at the
left of each row, indicates the starting location of each burst in macro-block i. The shaded
blocks shows the packets that are erased. . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
F.1 One period illustration of the Periodic Erasure Channel in Figure A.2 to be used for
proving the multicast upper bound provided in Lemma 8.1. Grey and white squares
denote erased and unerased packets respectively. . . . . . . . . . . . . . . . . . . . . . . . 162
F.2 Diagonal Embedding of parity-checks for the construction in Section 8.5. The parity-
checks p3[·] in layer (4) are generated by applying a (B3, T3) MS code to the last B1 − k
parity-checks of p1[·] in layer (3). The parity-checks p3[·] are shifted back by T1 units as discussed
before and only the causal part of these parities are used. . . . . . . . . . . . . . . . . . . 163
List of Definitions
2.1 Definition (Linear Block Code) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Definition (Minimum Distance d) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Definition (Convolutional Code) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 Definition (Column Distance - Symbols) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5 Definition (Streaming Capacity) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.6 Definition (Isolated Erasure Channel - CI(N,W )) . . . . . . . . . . . . . . . . . . . . . . . 14
2.7 Definition (Burst Erasure Channel - CB(B,W )) . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1 Definition (Burst or Isolated Erasures Channel - C(N,B,W )) . . . . . . . . . . . . . . . . 25
4.1 Definition (Burst and Isolated Erasures Channel - CII(N,B,W )) . . . . . . . . . . . . . . 48
4.2 Definition (Associated Isolated Erasure) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.3 Definition (Partial Recovery Code (PRC)) . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.1 Definition (Streaming Capacity - Unequal Source-Channel Rates) . . . . . . . . . . . . . . 63
6.1 Definition (Column Distance - Packets) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.2 Definition (Column Span - Packets) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.3 Definition (Column Distance - Symbols) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.4 Definition (Column Span - Symbols) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.1 Definition (Diversity Embedded Streaming Codes (DE-SCo)) . . . . . . . . . . . . . . . . 90
7.2 Definition (Source Expansion) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
8.1 Definition (Multicast Streaming Capacity) . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
8.2 Definition (Causal and Non-Causal Parts of a Parity-Check) . . . . . . . . . . . . . . . . . 122
Chapter 1
Introduction
1.1 Motivation
A number of multimedia applications require communication systems that must satisfy stringent delay
constraints. For example, the end-to-end delay in live video conferencing must be less than 250 ms [12,
Table 1, p. 7]. Similarly, other applications such as interactive gaming and voice over IP also impose
stringent delay constraints. Table 1.1 provides more examples of streaming applications with their
corresponding delay constraints according to the IEEE 802.11 standard in [11].
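The last column of Table 1.1 follows from the other columns by a unit conversion. As a quick sketch (the function name is ours), the conversion can be written as:

```python
def delay_in_packets(bitrate_mbps, msdu_bytes, delay_ms):
    """Convert a delay budget from milliseconds into packets, as in the
    last column of Table 1.1: packets/sec = bitrate / (8 * MSDU size)."""
    packets_per_second = (bitrate_mbps * 1e6) / (msdu_bytes * 8)
    return int(delay_ms * 1e-3 * packets_per_second)

# SDTV at 4 Mbps, 1500-byte MSDUs, 200 ms budget
print(delay_in_packets(4, 1500, 200))   # -> 66 packets
# Video conferencing at 2 Mbps, 512-byte MSDUs, 100 ms budget
print(delay_in_packets(2, 512, 100))    # -> 48 packets
```

Even a generous delay budget in milliseconds thus translates into only a few tens of packets over which a streaming code is allowed to operate.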
As the use of mobile devices becomes increasingly common, such streaming applications must be
supported over wireless links. It is well known that wireless channels introduce a significant number
of packet losses due to fading, mobility and other impairments. Thus it becomes necessary to develop
efficient error-correction techniques for real-time streaming communication over such channels.
Error control mechanisms that are used in practice are either Forward Error Correction (FEC)
based or retransmission based. In retransmission based mechanisms, if the transmitter receives no
acknowledgment for a given packet within a certain time, the packet is retransmitted to the receiver.
The delay associated with such a mechanism depends on the round-trip delay in the underlying network
which can alone approach the delay constraints [13–15]. Furthermore many streaming applications
involve broadcast situations where the same source stream must be delivered to multiple destinations.
In such cases, FEC provides a natural alternative and will be the focus of this thesis.
1.2 Streaming Setup
In this section, we define the main streaming setup to be used throughout the thesis. At each time
instant t ≥ 0, a source packet s[t] ∈ Fkq, which is a vector of k source symbols each over Fq, arrives at
the encoder, i.e., s[t] = (s0[t], s1[t], . . . , sk−1[t]), where sj[t] ∈ Fq for j ∈ {0, . . . , k − 1} is the jth source
symbol in the source packet at time t. The encoder generates a channel packet x[t] ∈ Fnq causally, i.e.,
x[t] = ft(s[0], . . . , s[t]), t ≥ 0. (1.1)
Each channel packet is a vector of n symbols, i.e., x[t] = (x0[t], x1[t], . . . , xn−1[t]), where xj [t] ∈ Fq for
j ∈ {0, . . . , n− 1} is the jth channel symbol in the channel packet at time t. The parameter of interest
Chapter 1. Introduction 2
Table 1.1: Different multimedia streaming applications and their corresponding Media access control
Service Data Unit (MSDU) in bytes, bitrate in Mbps, and maximum tolerable delay in ms according
to the IEEE 802.11 standard in [11]. All listed applications use User Datagram Protocol (UDP)
as a transport layer protocol. The last column indicates the delay in packets and is computed as
Delay(Pkts) = (Delay(ms)× 10−3)× (Bitrate(Mbps)× 106)/(MSDU(B)× 8).

Application                                Bitrate (Mbps)   MSDU (B)   Delay (ms)   Delay (Pkts)
SDTV                                       4 - 5            1500       200          66 - 83
Video Conf.                                0.128 - 2        512        100          3 - 48
Interactive Gaming (Controller-Console)    0.5              50         16           80
Interactive Gaming (Console-Internet)      1                512        50           12
Internet Streaming (Video + Audio)         0.1 - 4          512        200          4 - 195
Internet Streaming (Audio)                 0.064 - 0.256    418        200          3 - 15
Video Phone                                0.5              512        100          12
Remote UI                                  0.5 - 1.5        700        100          9 - 26
is the rate of such a streaming code and is given by R = k/n.
The channel considered is a packet erasure channel where each transmitted packet is either erased or
perfectly received at the decoder. In particular, the channel output at time t is given by y[t] = ⋆ if the
channel introduces an erasure at time t and y[t] = x[t] if it does not.
The decoder tolerates a maximum delay of T packets, i.e.,
ŝ[t] = γt(y[0], . . . ,y[t+ T ]), (1.2)
where γt is the decoding function at time t. The source packet s[t] is declared lost if ŝ[t] ≠ s[t].
In this thesis, we will investigate the loss probabilities of different error-correcting codes over various
channel models.
1.3 Forward Error Correction
In forward error correction, an error-correcting code is used to add some redundancy a priori to protect
the source data against channel impairments. This helps the receiver to recover missing packets. There
are two main categories of error-correcting codes, block codes and convolutional codes.
The encoder of an (n, k) block code maps a vector s ∈ Fkq of k information symbols into a codeword
x ∈ Fnq which is a vector of n symbols. The rate of such a code is defined by R = k/n. Systematic
codes constitute a particular class of block codes where the first k symbols of the codeword are the
information symbols, i.e., x = (s,p) where p ∈ Fn−kq is a vector of n− k parity-check symbols. Together
with block codes, convolutional codes [16] form a commonly implemented class of error-correcting codes
with a sequential encoding nature. In particular, at each time t ≥ 0, one source packet s[t] ∈ Fkq arrives
at the encoder and a channel packet x[t] ∈ Fnq is produced, which can depend only
on the present and past source packets. The rate of such a code is given by R = k/n. Similar to block
codes, the code is said to be systematic if each channel packet x[t] contains the source packet s[t], i.e.,
x[t] = (s[t],p[t]) where p[t] ∈ Fn−kq is the parity-check packet at time t and can be represented by a
Table 1.2: A systematic erasure code of rate R = 1/2 which simultaneously recovers a burst of length
B = 3 with a maximum delay of T = 5 packets. Each column denotes a channel packet transmitted at
the time index shown in the first row. Shaded columns are erased channel packets while the rest are
perfectly received at the destination.

t           0     1     2     3     4     5     6     7     8
k = 1       s[0]  s[1]  s[2]  s[3]  s[4]  s[5]  s[6]  s[7]  s[8]
n − k = 1   p[0]  p[1]  p[2]  p[3]  p[4]  p[5]  p[6]  p[7]  p[8]
                                          ⇓
                                       Recover
                                   s[0], s[1], s[2]
vector of n−k symbols each in Fq, i.e., p[t] = (p0[t], p1[t], . . . , pn−k−1[t]). We note that a strong relation
exists between the two classes of codes. For example, a block code can be obtained by truncating,
terminating or tail-biting a convolutional code [17].
However, classical codes are not designed for sequential recovery1 of the source packets at the desti-
nation. Consider for example a rate 1/2 systematic code in Table 1.2. At each time instant, a channel
packet consisting of the source packet and a parity-check packet, i.e., x[i] = (s[i],p[i]) is transmitted.
Suppose that the channel introduces three erasures in the interval [0, 2], i.e., x[0], x[1] and x[2] are
erased. The decoder collects parity-check packets until there are enough to recover the erased
source packets. In particular, by time 5, all erased source packets can be simultaneously recovered which
incurs a maximum delay of T = 5 corresponding to the recovery of s[0]. Ideally a streaming code should
have the property that s[0] must be reconstructed before s[1], which in turn must be reconstructed before
s[2] as (1.2) suggests. We will investigate constructions with this property in this thesis.
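To make the sequential-recovery property concrete, consider a toy delayed-repetition code of the same rate, x[t] = (s[t], s[t − T ]). This is only our illustration, not one of the constructions studied in this thesis. Under the same burst of three erasures, every erased source packet is recovered in order, each exactly at its deadline:

```python
T = 5  # maximum decoding delay in packets

def simulate_delayed_repetition(num_packets, erased_times):
    """Toy systematic rate-1/2 code with x[t] = (s[t], s[t - T]).
    Returns the time at which each source packet is first recovered."""
    recovery_time = {}
    for t in range(num_packets):
        if t in erased_times:
            continue                            # x[t] is lost entirely
        recovery_time.setdefault(t, t)          # systematic part gives s[t]
        if t >= T and (t - T) in erased_times:
            recovery_time.setdefault(t - T, t)  # repeated part gives s[t - T]
    return recovery_time

# A burst erasing x[0], x[1], x[2]: each s[i] is recovered at time i + T.
rec = simulate_delayed_repetition(12, {0, 1, 2})
print([(i, rec[i]) for i in (0, 1, 2)])         # -> [(0, 5), (1, 6), (2, 7)]
```

The code of Table 1.2, in contrast, recovers s[0], s[1] and s[2] simultaneously at time 5.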
1.4 Channel Models
In practice, channels introduce both burst and isolated losses often captured by statistical models (see
e.g., [20–23]) such as the Gilbert-Elliott (GE) [24, 25] and Fritchman [26] channels. The Gilbert-Elliott
Channel model is largely used for the emulation of burst error patterns in transmission channels (e.g., [27–
30]). Also, Fritchman and related higher order Markov models are commonly used to model fade-
durations in mobile links (e.g., [31]).
We briefly introduce these channel models, as our focus will be on constructing streaming codes for
such models.
1.4.1 Gilbert-Elliott Channel Model
A Gilbert-Elliott channel is a two-state Markov model (cf. Figure 1.1). In the “good state” each channel
packet is lost with a probability of ε whereas in the “bad state” each channel packet is lost with a
1We note that sequential decoding is a commonly used term in coding theory which refers to a specific technique for decoding tree codes (cf. [18,19]). To avoid confusion, we use the term sequential recovery to refer to decoding each packet at its respective deadline.
Figure 1.1: Gilbert Channel Model.

Figure 1.2: Burst Distribution for a Gilbert Channel with β = 0.2.
Figure 1.3: Fritchman Channel Model.

Figure 1.4: Burst Distribution associated with a Fritchman Channel with N + 1 = 10 states and β = 0.6.
probability of 1. We note that the average loss rate of the Gilbert-Elliott channel is given by
Pr(E) = β/(α+ β) · ε+ α/(α+ β), (1.3)
where α and β denote the transition probability from the good state to the bad state and vice versa. As
long as the channel stays in the bad state the channel behaves as a burst erasure channel. The length of
each burst is a geometric random variable with mean 1/β. Figure 1.2 illustrates the burst distribution
associated with a Gilbert channel when β = 0.2. When the channel is in the good state it behaves as an
i.i.d. erasure channel with an erasure probability of ε. The gap between two successive bursts is also a
geometric random variable with a mean of 1/α. Finally, note that ε = 0 results in a Gilbert channel [24],
which introduces only burst losses.
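The loss-rate expression in (1.3) is easy to check numerically. The sketch below (parameter values are ours, chosen only for illustration) simulates the two-state chain and compares the empirical loss rate against the formula:

```python
import random

def ge_loss_rate(alpha, beta, eps, steps, seed=0):
    """Empirical packet-loss rate of a Gilbert-Elliott channel:
    alpha = P(good -> bad), beta = P(bad -> good), and eps is the
    loss probability in the good state (losses are certain when bad)."""
    rng = random.Random(seed)
    bad = False
    losses = 0
    for _ in range(steps):
        losses += 1 if bad else (rng.random() < eps)
        # Markov transition to the state of the next time step
        bad = (rng.random() >= beta) if bad else (rng.random() < alpha)
    return losses / steps

alpha, beta, eps = 0.1, 0.5, 0.05
predicted = beta / (alpha + beta) * eps + alpha / (alpha + beta)
empirical = ge_loss_rate(alpha, beta, eps, 200_000)
print(round(predicted, 3), round(empirical, 3))  # the two agree closely
```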
1.4.2 Fritchman Channel Model
Figure 1.3 shows an example of a Fritchman channel model [26] with a total of N + 1 states. One of the
states is the good state and the remaining N states are bad states. We again let the transition probability
from the good state to the first bad state E1 be α, whereas the transition probability between any
two bad states equals β. Let ε be the probability of a packet loss in the good state. The channel erases
each packet in any bad state with probability 1. The burst length distribution in a Fritchman model is
a hyper-geometric (cf. Figure 1.4) random variable instead of a geometric random variable.
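Since the exact transition structure of Figure 1.3 is not fully reproduced here, the sketch below assumes one common simplified variant (our assumption): from bad state Ei the chain moves to Ei+1 with probability β and otherwise returns to the good state, with EN always returning to the good state.

```python
import random

def fritchman_burst_length(N, beta, rng):
    """Sample one erasure-burst length from the simplified Fritchman
    chain described above; each bad state visited erases one packet."""
    length = 1                       # entering E_1 erases one packet
    while length < N and rng.random() < beta:
        length += 1                  # move one state deeper into the chain
    return length

rng = random.Random(1)
samples = [fritchman_burst_length(10, 0.6, rng) for _ in range(50_000)]
print(min(samples), max(samples))    # -> 1 10
```

Note that the cap at N bursts is an artifact of this simplification; in the model illustrated in Figure 1.4, bursts longer than N do occur.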
Figure 1.5: A diagram illustrating the three-step approach used for designing low-delay streaming codes
for practical channels.
1.5 Research Methodology
The ultimate goal of the thesis is to design low-delay streaming codes for practical channels, e.g.,
Gilbert-Elliott and Fritchman channels. As discussed in the previous section, these channels introduce
both burst and isolated erasures. Therefore good streaming codes must simultaneously correct both
types of patterns. We note that both the structure of codes and the associated fundamental limits are
expected to be different for streaming communication. For example, it is well known that the Shannon
capacity of an erasure channel only depends on the fraction of erasures. However, when delay constraints
are imposed, the actual pattern of packet losses also becomes relevant2. The decoding delay over channels
with burst losses can be very different from that over channels with isolated or random losses.
To tackle this problem, we consider a three-step approach (cf. Figure 1.5):
Step 1 We start by considering different statistical models such as Gilbert-Elliot and Fritchman channel
models. Depending on the parameters of such models, the dominant erasure patterns might vary,
e.g., burst erasures, i.i.d. erasures or a mixture of both. We extract the dominant erasure events in
these models and consider a sliding window channel model where the erasure patterns are locally
constrained to belong to this set.
Step 2 We construct low-delay streaming codes that recover from erasure patterns introduced by the
considered sliding window channel models. The source packets are split into two groups and
Unequal Error Protection (UEP) is applied to each group. We then discuss the connection between
the error correction properties of the proposed streaming codes and their underlying algebraic
properties, column distance and column span.
Step 3 The performance of the proposed codes is then simulated over Gilbert-Elliott and Fritchman chan-
nels and observed to outperform earlier schemes such as m-MDS codes [32–34] and MS codes [35–37]
which are designed for channels that introduce either isolated or burst erasures but not both.
2A subtle distinction exists between the setup considered by Shannon and that in this thesis. In the former, a stochastic channel model is considered under a vanishing error probability assumption. In this thesis, we consider adversarial channels where only a certain set of erasure patterns is allowed and we require zero error probability.
One weakness of these codes is that they require the knowledge of the burst length and delay a priori.
Hence, they force a conservative design in practice, i.e., we design the code for the longest burst thereby
incurring a higher overhead (or a larger delay) even when the channel is relatively good. Moreover, there
is often a flexibility in the allowable delay. Techniques such as adaptive media playback [38] have been
designed to tune the play-out rate as a function of the received buffer size to deal with a temporary
increase in delay. Hence it is not desirable to have to fix the delay during the design stage either.
We propose new code constructions — Diversity Embedded Streaming Codes (DE-SCo) and Multicast
Streaming Codes (Mu-SCo) — which can recover within different delays depending on the length of the
burst erasure. This adds flexibility to the design in practice.
1.6 Related Work
Problems involving real-time coding and compression have been studied from many different perspectives
in related literature.
1.6.1 Structural Properties
Some structural properties of real-time codes have been studied in, e.g., [39–41], where a dynamic
programming based formulation is proposed. Encoding and decoding are done in real-time. The encoding and
decoding strategies that jointly minimize an average distortion measure are expressed as solutions to a
dynamic programming problem.
1.6.2 Tree Codes
In another line of work, Schulman [42] and Sahai [43] study coding techniques based on tree codes in
a streaming setup with discrete memoryless channels. Sukhavasi and Hassibi [44] have proposed linear
time-invariant tree codes for the class of i.i.d. erasure channels, which are attractive due to low decoding
complexity. However these works consider the case where the probability of error decays exponentially
in the decoding delay which is relevant in some control applications.
1.6.3 Streaming Codes for Burst Erasures
In [35,37], a class of error-correcting codes is proposed for real-time streaming communication over
burst erasure channels. These codes attain the maximum rate for a given burst length and decoding
delay. However these codes are highly sensitive to isolated erasures. In this thesis we extend this line
of work to develop new streaming codes that correct both burst and isolated erasures. In a follow-up
work to [35,37], references [45–47] study a setup where the channel can introduce multiple erasure bursts
with a certain minimum guard interval separating them. However these works do not aim for a robust
extension which is the focus of this thesis. Furthermore the constructions presented in this thesis are
based on a layered approach and are very different from those considered in [45–47].
1.6.4 m-MDS Codes
Properties of convolutional codes over large field sizes, which are relevant to packet erasure channels, have
been studied in [32–34, 48, 49]. In particular it is shown that the column distance of any convolutional
code must satisfy a generalized Singleton bound. A class of convolutional codes that attain the maximum
column distance profile is proposed. While these constructions are not designed for delay constrained
decoding, they constitute an important building block in the streaming codes proposed in this thesis.
1.6.5 LDPC Convolutional Codes
In [50, 51], the tradeoff between the decoding performance of low density parity check (LDPC) convo-
lutional codes and latency over erasure channels with and without memory is studied. In this work,
the latency considered is the decoding latency, defined as the sum of the time taken to receive
the entire codeword and the time needed to decode, which is a function of the decoding complexity. A
windowed decoding scheme is considered, which is a slight modification of the classical belief propagation
(BP) scheme; the latter is computationally impractical for the large block-length LDPC convolutional codes
considered. The authors show that the structure of LDPC convolutional code ensembles is suitable to obtain
performance close to the theoretical limits over the memoryless erasure channel, both for the classical BP
decoder and windowed decoding. However, the same structure imposes limitations on the performance
over erasure channels with memory. In this thesis, we are interested in constructing low-delay streaming
codes for channels that introduce a mixture of correlated and uncorrelated erasures.
1.6.6 Rateless Codes
In other related works, adaptations of rateless codes for streaming are discussed in the literature. In [52,53],
the use of rateless codes over packet erasure channels with real-time requirements is studied. Digital
fountain codes with overlapped sliding windows are considered. The use of overlapped windows can
be considered as virtual extended blocks. This scheme together with a belief-propagation decoder yields
superior performance in terms of packet recovery. In [54], a generalization of rateless codes is proposed
that provides Unequal Error Protection (UEP). These codes can provide unequal recovery times, i.e.,
given a target bit error rate, different parts of information bits can be decoded after receiving different
amounts of encoded bits. This property can be appealing in streaming applications. In [55, 56], the
performance of LT codes for single-server streaming to diverse users is investigated. Different users
have different channel conditions as well as different decoding capabilities. Optimization of the degree
distribution is proposed and solved using a linear programming approach.
1.6.7 Channels with Feedback
Streaming over channels with feedback has been studied in the literature from different perspectives. Sahai [57] showed that for Discrete Memoryless Channels (DMC), feedback generally provides
dramatic gains in the error exponents when fixed end-to-end delay is considered. In [58], the authors
proposed a new scheme which combines the benefits of network coding and Automatic Repeat reQuest
(ARQ) by acknowledging degrees of freedom instead of original packets. The motivation behind this
work is that ARQ does not work well in broadcast scenarios, because retransmitting a packet that some receivers did not get is wasteful for the others that already have it. In [59–61], real-time streaming
over blockage channels with delayed feedback is studied. A multi-burst transmission protocol is pro-
posed which achieves good delay-throughput tradeoffs within this framework. This protocol is a balance
between the extremes of ARQ and Send-Until-ACK and can be further enhanced if augmented with
coding.
In other work, a comparison of block and streaming codes for low-delay systems is provided in [62].
References [63, 64] study an extension of streaming codes to parallel channels with burst erasures
whereas [65] studies streaming codes that can correct multiple bursts. We point the reader to [65–70]
for additional works on error control mechanisms for streaming.
1.7 Thesis Outline
In this thesis, we study robust low-delay streaming codes for a certain class of practical channels.
1. In Chapter 2, we review two classes of codes, m-MDS codes and Maximally-Short codes, and show that they achieve the capacity of channels that introduce only isolated or burst erasures, respectively. We also propose an alternative code construction that will be used in subsequent chapters.
2. In Chapter 3, we investigate channels that introduce both burst and isolated erasures. We then
propose a class of codes - Maximum Distance And Span (MiDAS) Codes - based on a layered
design that achieve a rate close to the capacity of such channels. These codes are simulated over
practical channel models such as Gilbert-Elliott and Fritchman channels where they provide better
performance compared to codes designed for either only bursts or isolated erasures.
3. In Chapter 4, we propose a class of codes - Partial Recovery Codes (PRC) - which correct an additional set of erasure patterns beyond those handled by MiDAS codes. Although PRCs only guarantee partial recovery, simulation results show that these codes can outperform MiDAS codes in some cases.
4. In Chapter 5, another extension of the framework in Chapter 3 is considered where the source-
arrival and channel transmission rates are mismatched. We show that a simple adaptation of the
codes designed for the equal rates case is not optimal. We then characterize the capacity for burst
erasure channels and propose a class of codes based on a layered structure that achieves the capacity
in the unequal rates case.
5. In Chapter 6, we discuss the connection between the erasure correction properties of the proposed
streaming codes in Chapters 3 and 5 and their underlying algebraic properties. We show that
the tradeoff between the burst-erasure correction and isolated-erasure correction capability of any
streaming code can be translated to a fundamental tradeoff between the column distance and
column span of any convolutional code.
6. In Chapter 7, we consider a multicast model with two users whose channels introduce burst erasures
of lengths B1 and B2 respectively. Also, the two users can tolerate different decoding delays, T1
and T2. At the single user rate of the first user, we propose a class of codes - Diversity Embedded
Streaming Codes (DE-SCo) - which attains the minimum delay at the second user. These codes
provide a flexible tradeoff between the channel quality and receiver delay, i.e., when the channel
conditions are good, the source stream is recovered with a low delay, whereas when the channel
conditions are poor the source stream is still recovered, albeit with a larger delay.
7. In Chapter 8, we consider a more general problem. For any {(B1, T1), (B2, T2)}, we show that
the streaming capacity intricately depends on the burst-delay parameters and provide explicit
constructions that attain the capacity for a wide range of parameters. In particular we identify
regimes where (a) parity-checks of the two users can be combined such that interference is avoided,
(b) the weaker user treats parity-checks of the stronger user as side information to decode part of
the source stream and (c) the capacity is achieved by a single user code.
8. Conclusions and future work are presented in Chapter 9.
Chapter 2
Background
2.1 Introduction
In this chapter, we discuss some preliminaries in Section 2.2. We then introduce the streaming setup
in Section 2.3. In Section 2.4, we focus on two basic channel models, the isolated erasure model which
serves as an approximation for the i.i.d. erasure channel and the burst erasure model which serves as an
approximation for the Gilbert channel. Many practical channels consist of a mixture of both burst and isolated erasure patterns; these will be treated in subsequent chapters. In Section 2.5, we discuss
streaming codes for the isolated erasure channel. In particular, we review a class of convolutional codes,
m-MDS codes, and investigate their distance properties. We also review an alternative construction
with a reduced field size and decoding complexity which involves a diagonal interleaving step to convert
an MDS block code into a convolutional code. Similarly, for the burst erasure channel, we discuss two code
constructions in Section 2.6. Finally, we provide a converse in Section 2.7 for both channel models which
uses a Periodic Erasure Channel (PEC) argument. This justifies that the constructions in Sections 2.5
and 2.6 are capacity achieving for the isolated erasure and burst erasure models respectively.
2.2 Preliminaries
In this section, we discuss some basic properties of block and convolutional codes that will be used
throughout the thesis.
2.2.1 Block Codes
Block codes map a vector of source packets to a vector of channel packets. Of particular interest is the
subclass of linear block codes.
Definition 2.1 (Linear Block Code). A (n, k) linear block code is a vector subspace of F_q^n of dimension k. The codewords, x ∈ F_q^n, are linear combinations of the information packets s ∈ F_q^k, i.e., x = s · G, where G is the k × n generator matrix associated with the linear block code C.
If the generator matrix G is of the form G = [I_k H], where I_k is the k × k identity matrix, the corresponding code is called a systematic code. In this case, the information packets are embedded in the coded packets, since x = s · G = s · [I_k H] = [s sH] = [s p], where p = sH ∈ F_q^{n−k} is the parity-check packet.
In the literature, block codes are traditionally designed to maximize the associated minimum distance.
For any block code C, the minimum distance is the minimum number of positions in which any two
distinct codewords differ. For the special case of linear block codes, this is equivalent to the minimum
number of non-zero positions among all non-zero codewords. The rigorous definition of the minimum
distance is as follows.
Definition 2.2 (Minimum Distance d). For any (n, k) block code C, the minimum distance d is given by

d = min_{x1, x2 ∈ C, x1 ≠ x2} wt(x1 − x2), (2.1)

where wt(x) is the Hamming weight, i.e., the number of non-zero positions in the codeword x. If C is a linear block code, this is equivalent to

d = min_{x ∈ C, x ≠ 0} wt(x), (2.2)

where 0 is the zero vector of length n.
The minimum distance of a block code determines the underlying error detection and correction
capabilities. In particular, a block code of distance d can successfully decode if no more than d − 1
erasures take place in any codeword of length n. An upper bound on the minimum distance of a block
code of block length n and dimension k is given by d ≤ n − k + 1 and is known as the Singleton
bound [71]. Block codes that achieve equality in the Singleton bound are called Maximum Distance
Separable (MDS) codes. Examples of non-trivial MDS codes include Reed-Solomon codes and their
extended versions [72, 73]. A (n, k) Reed-Solomon code is known to exist over any field of size greater
than the block length n. Classical erasure decoding of such a code has O(k²) complexity (e.g., [74]).
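The Singleton bound can be checked exhaustively for a toy code. The sketch below (an illustration under our own choice of parameters, not from the thesis) builds a small (4, 2) Reed-Solomon code over GF(5) by polynomial evaluation and computes its minimum distance by enumerating all non-zero codewords, as permitted for linear codes by (2.2).

```python
import itertools

q, n, k = 5, 4, 2            # a toy (4, 2) Reed-Solomon code over GF(5)
points = [0, 1, 2, 3]        # n distinct evaluation points in GF(5)

def encode(msg):
    # Codeword = evaluations of the degree-<k polynomial with coefficients msg.
    return tuple(sum(c * (a ** i) for i, c in enumerate(msg)) % q
                 for a in points)

def wt(x):
    # Hamming weight: number of non-zero positions.
    return sum(v != 0 for v in x)

# Minimum distance = minimum weight over all non-zero codewords (linear code).
d = min(wt(encode(m))
        for m in itertools.product(range(q), repeat=k) if any(m))
```

Here d comes out to n − k + 1 = 3, i.e., the Singleton bound holds with equality, confirming that this toy code is MDS.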
2.2.2 Convolutional Codes
Convolutional codes [16] do not encode information packets block by block, but transform a whole
sequence of information packets into a sequence of encoded packets by convolving the information packets
with a set of generator coefficients.
Definition 2.3 (Convolutional Code). A convolutional code C of rate R = k/n maps a stream of information packets s[t] ∈ F_q^k, where t ≥ 0 is the time index, to a stream of encoded packets x[t] ∈ F_q^n as follows: the encoded packet x[i] depends on the information packet s[i] and the m previous information packets, i.e., x[i] = f_i(s[i], s[i − 1], . . . , s[i − m]), where f_i is the encoding function at time i. The variable m denotes the memory of the code and m + 1 is called the constraint length. The corresponding code C is referred to as a (n, k, m) convolutional code.
Consider a (n, k, m) linear convolutional code that maps an input source stream s[i] = (s_0[i], . . . , s_{k−1}[i])† ∈ F_q^k to an output channel stream x[i] = (x_0[i], . . . , x_{n−1}[i])† ∈ F_q^n using a memory-m encoder¹.

¹We use † instead of the conventional T to denote the vector/matrix transpose operation, since T is used to denote the delay constraint. Throughout this thesis, we treat s[i] and x[i] as column vectors. For convenience, we drop the † notation when the dimensions are clear.
In particular, the output packet at time i, x[i], is a linear combination of the source packets s[i], s[i − 1], . . . , s[i − m], i.e.,

x[i] = ( Σ_{l=0}^{m} s†[i − l] · G_l )†, (2.3)

where G_0, . . . , G_m are k × n matrices with elements in F_q.
One way of representing convolutional codes is to consider the first j + 1 output packets, which can be expressed as follows:

[x†[0], x†[1], . . . , x†[j]] = [s†[0], s†[1], . . . , s†[j]] · G_j^s, (2.4)

where

G_j^s = [ G_0  G_1  . . .  G_j
           0   G_0  . . .  G_{j−1}
           .         . .     .
           0   . . .       G_0 ]  ∈ F_q^{(j+1)k × (j+1)n},  j ≥ 0, (2.5)

is the generator matrix truncated to the first (j + 1)n columns, and G_l = 0_{k×n} if l > m.
Similar to block codes, the convolutional code is systematic if we can express each generator matrix in the following form:

G_0 = [I_k  H_0],   G_l = [0_{k×k}  H_l],   l = 1, . . . , m, (2.6)

where I_k denotes the k × k identity matrix, 0_{k×k} denotes the k × k zero matrix, and H_l ∈ F_q^{k×(n−k)} for l = 0, 1, . . . , m. For a systematic convolutional code, (2.3) reduces to

x[i] = [ s[i] ; p[i] ],   p[i] = ( Σ_{l=0}^{m} s†[i − l] · H_l )†. (2.7)
The parameter of interest is the column distance [75, 76], defined as follows.

Definition 2.4 (Column Distance - Symbols). The jth column distance in terms of symbols of G_j^s in (2.5) is defined as

d_j = min_{s = [s[0], s[1], . . . , s[j]], s[0] ≠ 0} wt([x[0], . . . , x[j]]), (2.8)

where recall that each channel packet x[i] has n symbols, i.e., x[i] = (x_0[i], . . . , x_{n−1}[i]), and wt denotes the Hamming weight, i.e., wt(v) counts the number of non-zero symbols in the vector v.
It is well known that for any (n, k, m) convolutional code, d_j ≤ (n − k)(j + 1) + 1 for all j ≥ 0 [32, 33]. A special class of convolutional codes, m-MDS codes [32–34], satisfies this bound with equality for j ∈ {0, . . . , m}.
Theorem 2.1 (Gluesing-Luerssen et al. [34]). There exists a class of (n, k, m) systematic convolutional codes whose column distance is given by

d_j = (n − k)(j + 1) + 1,  for j ∈ {0, 1, . . . , m}. (2.9)
In this thesis, we call them m-MDS codes² where m is the memory of the code.
Throughout the thesis, we only consider systematic³ m-MDS codes as in (2.6) and (2.7). We refer the reader to [34] for the specific choice of H_l that achieves (2.9).
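For intuition, the jth column distance of Definition 2.4 can be computed by brute force for a toy code. The sketch below (our own example, not a construction from the thesis) uses the binary (n, k, m) = (2, 1, 1) code with G_0 = [1 1], G_1 = [0 1]; for this particular toy code, direct enumeration shows the bound d_j ≤ (n − k)(j + 1) + 1 is met for j ≤ m.

```python
import itertools

# Toy binary (n, k, m) = (2, 1, 1) code: G_0 = [1 1], G_1 = [0 1] over GF(2).
G0, G1 = (1, 1), (0, 1)

def outputs(bits):
    """Channel symbols of x[0..j] for scalar source bits s[0..j] (GF(2))."""
    x = []
    for i, s in enumerate(bits):
        prev = bits[i - 1] if i > 0 else 0      # s[i-1], zero before t=0
        x += [(s * G0[0] + prev * G1[0]) % 2,
              (s * G0[1] + prev * G1[1]) % 2]
    return x

def column_distance(j):
    # Minimize wt(x[0..j]) over all inputs with s[0] != 0, per (2.8).
    return min(sum(outputs((1,) + tail))
               for tail in itertools.product((0, 1), repeat=j))

d0, d1 = column_distance(0), column_distance(1)   # -> 2 and 3
```

Both values match the upper bound (n − k)(j + 1) + 1 = j + 2, so this toy code attains (2.9) for j ∈ {0, 1}.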
The most popular decoding algorithm for convolutional codes is the Viterbi algorithm [77]. Although widely adopted in practice, the Viterbi algorithm suffers from a decoding complexity that is exponential in the constraint length m + 1. For the special case of erasure channels, convolutional codes can be decoded using a simple technique which involves matrix inversion. In [49, Remark 3.3], it is noted that the complexity of decoding each erased packet in a (n, k, m) m-MDS code with a delay of j ≤ m packets is that of inverting a (j + 1)(n − k) × (j + 1)(n − k) matrix, which is O(j³(n − k)³). Furthermore, such a code is shown to exist over a field whose size grows exponentially in m and n [78, Theorem 6.3.3].
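The linear-system view of erasure decoding can be illustrated on a toy code. The sketch below (our own brute-force stand-in for solving the linear system, not the thesis's decoder) takes the binary (2, 1, 1) code with G_0 = [1 1], G_1 = [0 1], whose column distance d_1 = 3 over GF(2), and verifies that s[0] remains uniquely determined under any N = (n − k)(j + 1) = 2 symbol erasures in the first (j + 1)n = 4 symbols.

```python
import itertools

# Toy (n, k, m) = (2, 1, 1) code: G_0 = [1 1], G_1 = [0 1] over GF(2).
G0, G1 = (1, 1), (0, 1)

def symbols(s0, s1):
    """The 4 channel symbols transmitted in the interval [0, 2n - 1]."""
    return [s0 * G0[0] % 2, s0 * G0[1] % 2,
            (s1 * G0[0] + s0 * G1[0]) % 2, (s1 * G0[1] + s0 * G1[1]) % 2]

def recovers_s0(erased):
    """s[0] is decodable iff the unerased symbols determine it uniquely."""
    seen = {}
    for s0, s1 in itertools.product((0, 1), repeat=2):
        key = tuple(x for t, x in enumerate(symbols(s0, s1))
                    if t not in erased)
        if key in seen and seen[key] != s0:
            return False                      # two inputs collide on s[0]
        seen[key] = s0
    return True

# Any 2 erasures among the first 4 symbols still allow recovery of s[0].
ok = all(recovers_s0(set(e)) for e in itertools.combinations(range(4), 2))
```

In practice one would solve the corresponding system over F_q by Gaussian elimination rather than enumerate inputs; the exhaustive check above is only feasible because the toy code is tiny.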
2.3 System Model
At each time instant t ≥ 0, a source packet s[t] ∈ Fkq arrives at the transmitter. This stream of source
packets {s[t]}t≥0 is encoded causally into channel packets {x[t]}t≥0,
x[t] = ft(s[0], . . . , s[t]), (2.10)
where x[t] ∈ Fnq .
The channel considered is a packet erasure channel. In particular, some of the transmitted channel
packets are erased while the rest are perfectly received at the destination, i.e., the channel output at the
receiver at time t is given by y[t] = ⋆ if the channel introduces an erasure at time t, and y[t] = x[t] otherwise.
Furthermore, the receiver tolerates a delay of T packets, i.e.,

ŝ[t] = γ_t(y[0], y[1], . . . , y[t + T]), (2.11)

and Pr(ŝ[t] ≠ s[t]) = 0, ∀t ≥ 0.
The source stream is an i.i.d. sequence and we assume that each packet is uniformly sampled over the finite field F_q^k. The rate of the code is defined as the ratio of the size of each source packet to that
of each channel packet, i.e., R = k/n.
Definition 2.5 (Streaming Capacity). A rate R is achievable with a delay of T over the considered
channel, if there exists a streaming code, defined by (2.10) and (2.11), of this rate over some field of
size q such that every source packet s[i] can be decoded at the destination with a delay of T packets. The
supremum of all achievable rates is the streaming capacity.
2.4 Channel Model and Main Results
In this section, we consider a sliding window erasure channel model where in a window of a given length,
only a set of erasure patterns is allowed. We study two classes of sliding window erasure channels
²Convolutional codes which satisfy (2.9) are called MDS convolutional codes in the literature [32, 33]. However, in this thesis we call them m-MDS codes as in [34], to avoid confusion with MDS block codes.
³Thus the word systematic is dropped for convenience.
which approximate the two extremes of temporal correlation between erasures, i.i.d. erasures and burst
erasures.
2.4.1 Isolated Erasure Channel
We define the class of isolated erasure channels as follows.
Definition 2.6 (Isolated Erasure Channel - CI(N,W )). A channel is an isolated erasure channel if in
any sliding window of length W , the channel can introduce no more than N < W erasures in arbitrary
positions. We denote such a channel by CI(N,W ).
The capacity of such channels is given in the following theorem.
Theorem 2.2 (Capacity of CI(N,W )). For any isolated erasure channel CI(N,W ), the capacity is given by

C(N, W, T) = (Teff − N + 1) / (Teff + 1), (2.12)

where Teff = min(W − 1, T) is the effective delay and T is the decoding delay, which satisfies T ≥ N.
Remark 2.1. Let 0 < T < N and consider a channel which introduces N erasures in a single burst. One can easily see that for such a pattern, the recovery of the first erased packet within a delay of T < N is impossible. Hence, the capacity in this case is simply 0. This explains why only the case T ≥ N is considered in Theorem 2.2.
Remark 2.2. Note that the capacity expression in (2.12) depends on T and W only through min(W − 1, T). In particular, if W < T + 1, the effective delay is W − 1.
The proof of Theorem 2.2 is divided into two main parts. The achievability is established via two different coding schemes in Sections 2.5.1 and 2.5.2. The converse uses a periodic erasure
channel argument similar to that used in [79, Section 6.10] and [37] and is provided in Section 2.7.1.
2.4.2 Burst Erasure Channel
We start by defining the class of burst erasure channels as follows.
Definition 2.7 (Burst Erasure Channel - CB(B,W )). The channel is called a burst erasure channel
if in any sliding window of length W , the channel introduces no more than a single burst of maximum
length B < W . We denote such a channel by CB(B,W ).
Remark 2.3. The channel CB(B,W ) is equivalent to a channel in which bursts are of maximum length
B and the guard separation between successive bursts is at least W − 1.
The capacity of burst erasure channels is given in the following theorem.
Theorem 2.3 (Capacity of CB(B,W )). For any channel CB(B,W ) and delay constraint T ≥ B, the capacity is given by

C(B, W, T) = Teff / (Teff + B), (2.13)

where Teff = min(W − 1, T).
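The two capacity expressions (2.12) and (2.13) are simple enough to tabulate directly. The helper functions below (an illustration, not code from the thesis) evaluate them as exact fractions, including the truncation of the effective delay to min(W − 1, T).

```python
from fractions import Fraction

def capacity_isolated(N, W, T):
    """Streaming capacity of the isolated erasure channel C_I(N, W)
    with decoding delay T >= N, per (2.12)."""
    T_eff = min(W - 1, T)          # effective delay
    return Fraction(T_eff - N + 1, T_eff + 1)

def capacity_burst(B, W, T):
    """Streaming capacity of the burst erasure channel C_B(B, W)
    with decoding delay T >= B, per (2.13)."""
    T_eff = min(W - 1, T)
    return Fraction(T_eff, T_eff + B)
```

For example, capacity_burst(2, 10, 3) evaluates to 3/5, which is the rate of the (2, 3) MS code example given later in this chapter, while capacity_isolated(1, 4, 10) evaluates to 3/4, showing the window length W = 4 truncating the effective delay to 3 despite T = 10.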
We note that Remarks 2.1 and 2.2 also apply to the case of burst erasure channels.
Martinian and Sundberg provided a class of codes, Maximally-Short (MS) codes in [35], which
achieves the capacity in (2.13) for the case when W ≥ T + 1. We summarize the main encoding
and decoding steps of these codes in Section 2.6.1 and show that through a simple modification (cf.
Remark 2.5), these constructions achieve the capacity for any sliding window length, i.e., W > B. We
then provide an alternative construction which uses m-MDS codes as constituent codes in Section 2.6.2.
The converse is provided in Section 2.7.2.
2.5 Code Constructions for Isolated Erasure Channel
In this section, we provide two code constructions which achieve the capacity of isolated erasure channels
in Theorem 2.2.
2.5.1 Interleaved-MDS Codes
We note that in [35], the authors pointed out that an MDS block code can be converted into a convolutional code using a diagonal interleaving technique. In this section, we provide the detailed steps of such a construction. As the name suggests, we start by constructing an MDS block code (see Section 2.2.1).
Then, we use diagonal interleaving to convert it to a convolutional code while keeping the distance
properties of the constituent MDS block code.
Encoder
The encoding steps are as follows:
• Split each source packet s[i] ∈ F_q^{Teff−N+1} into Teff − N + 1 symbols, i.e., s[i] = (s_0[i], . . . , s_{Teff−N}[i]), where s_j[i] ∈ F_q for j ∈ {0, . . . , Teff − N}, i ≥ 0.

• Apply a systematic (Teff + 1, Teff − N + 1) MDS code to the source symbols diagonally, i.e., the codeword starting at time i is given by

d_i = (s_0[i], s_1[i + 1], . . . , s_{Teff−N}[i + Teff − N], p_0[i + Teff − N + 1], p_1[i + Teff − N + 2], . . . , p_{N−1}[i + Teff]). (2.14)

• Concatenate the generated parity-check symbols to the source symbols to construct the channel input, i.e.,

x[i] = (s_0[i], . . . , s_{Teff−N}[i], p_0[i], . . . , p_{N−1}[i]). (2.15)
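To make the diagonal interleaving concrete, here is a hedged sketch (our own illustration, not from the thesis) for the special case N = 1 over GF(2), where a single parity symbol already forms a (Teff + 1, Teff) MDS code; general N would require a true MDS code over a larger field.

```python
import numpy as np

T_eff, N = 3, 1           # single parity: a (T_eff + 1, T_eff) MDS code over GF(2)
k = T_eff - N + 1         # 3 source symbols per packet

rng = np.random.default_rng(0)
L = 20
S = rng.integers(0, 2, size=(L, k))     # S[i, j] = s_j[i]

# Per (2.14), the parity p_0[i] closes the diagonal codeword d_{i - T_eff}:
#   d_t = (s_0[t], s_1[t+1], s_2[t+2], p_0[t+3]), with p_0 = XOR of the diagonal.
P = np.zeros(L, dtype=int)
for i in range(T_eff, L):
    P[i] = sum(S[i - T_eff + l, l] for l in range(k)) % 2

X = np.hstack([S, P[:, None]])          # channel packet x[i] = (s[i], p_0[i])

# Erase packet t; recover each symbol from its own diagonal, delay <= T_eff.
t = 10
rec = [int((P[t - j + T_eff]
            + sum(S[t - j + l, l] for l in range(k) if l != j)) % 2)
       for j in range(k)]
```

Symbol s_j[t] sits on the diagonal codeword starting at time t − j, whose parity arrives at time t − j + Teff ≤ t + Teff, so every symbol of the erased packet is recovered within the delay budget.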
Decoder
It can be readily verified that for the isolated erasure channel in Definition 2.6, which introduces no more than N erasures in any sliding window of length W, there are at most N erasures in any codeword d_i. These codewords belong to a (Teff + 1, Teff − N + 1) systematic MDS code, which is capable of recovering N erasures with a maximum delay of Teff. Hence, a (Teff + 1, Teff − N + 1) Interleaved-MDS code of rate R can recover from up to N = (1 − R)(Teff + 1) erasures in any sliding window of length W within a delay of T packets, which upon rearranging yields (2.12).
2.5.2 m-MDS Codes
In this section, we present an alternative construction which achieves the capacity in (2.12). Instead
of constructing a block code and then diagonally interleaving it to generate a convolutional code, we
consider m-MDS convolutional codes in Theorem 2.1 and discuss their erasure correction properties in
the streaming setup.
Lemma 2.1. Consider a systematic (n, k, m) m-MDS code and suppose that the symbols in x[i],

x[i] = (x_0[i], . . . , x_{k−1}[i], x_k[i], . . . , x_{n−1}[i])
     = (s_0[i], . . . , s_{k−1}[i], p_0[i], . . . , p_{n−k−1}[i]), (2.16)

are transmitted in the time interval [i · n, (i + 1) · n − 1] over the channel⁴, i.e., the channel symbol x_j[i] is transmitted at time i · n + j. The following properties hold for each j = 0, 1, . . . , m.

L1. If no more than N = (n − k)(j + 1) transmitted symbols are erased in the interval [0, (j + 1)n − 1], then s[0] = (s_0[0], . . . , s_{k−1}[0]) can be recovered by time (j + 1)n − 1.

L2. If the channel introduces an erasure burst of maximum length B = (n − k)(j + 1) symbols in the interval [c, c + B − 1], where 0 ≤ c ≤ k − 1, then all erased source packets are recovered by time (j + 1)n − 1.

L3. If the channel introduces an erasure burst of maximum length B symbols in the interval [c, c + B − 1], where 0 ≤ c ≤ k − 1, followed by a total of no more than I isolated erasures such that B + I = (n − k)(j + 1), then all the erased packets in the burst are recovered by time (j + 1)n − 1.

Proof. See Appendix A.1.
We now discuss how the properties in Lemma 2.1 can be applied to our system model.
Corollary 2.1. Consider a systematic (n, k, m) m-MDS code of rate R = k/n where the entire channel packet x[i] = (x_0[i], . . . , x_{n−1}[i]) ∈ F_q^n is transmitted in time-slot i ≥ 0. For each j = 0, 1, . . . , m, we have the following.

P1. Suppose that in the window [0, j], the channel introduces no more than N = (1 − R)(j + 1) erasures in arbitrary locations; then s[0] is recovered by time t = j.

P2. Suppose an erasure burst of maximum length B = (1 − R)(j + 1) occurs in the interval [0, B − 1]; then all the packets s[0], . . . , s[B − 1] are simultaneously recovered by time t = j.
Proof. Since the channel erases entire packets, an erasure of N packets is equivalent to the erasure of nN = (n − k)(j + 1) symbols in Lemma 2.1. Property L1 in Lemma 2.1 guarantees that s[0] is recovered by time t = (j + 1)n − 1 when any (n − k)(j + 1) symbols are erased in the interval [0, (j + 1)n − 1]. It immediately follows that if no more than ((n − k)/n)(j + 1) packets in the interval [0, j] are erased, the packet s[0] can be recovered. Furthermore, the interval [0, (j + 1)n − 1], consisting of (j + 1)n symbols in Lemma 2.1, corresponds to the interval [0, j], consisting of j + 1 packets in Corollary 2.1, and thus s[0] is recovered by time j. Property P1 then follows upon substituting R = k/n.
Property P2 follows in an analogous fashion upon using Property L2 in Lemma 2.1 with c = 0.

⁴Note that in this statement, only one symbol over F_q is transmitted in each channel use. These properties are adapted to transmitting a packet over F_q^n in each channel use in Corollary 2.1.
From P1 in Corollary 2.1, it follows that

N = (1 − R)(Teff + 1), (2.17)

is achieved using a (n, k, Teff) m-MDS code of rate R = k/n and Teff = min(W − 1, T). In particular, if the channel introduces up to (1 − R)(Teff + 1) erasures in the window [0, Teff], it follows from Property P1 in Corollary 2.1 that s[0] is recovered at t = Teff. Once s[0] has been recovered, its effect can be subtracted from all parity-checks involving s[0]. Using the same property, s[1] is then guaranteed to be recovered at time t = Teff + 1. This argument can be repeated successively until all the erased packets are recovered. Upon rearranging (2.17), one recovers (2.12) and the achievability follows.
Remark 2.4. Although both m-MDS and Interleaved-MDS codes achieve the capacity of CI(N,W ), the
Interleaved-MDS codes have a lower decoding complexity and field-size when compared to m-MDS codes
as discussed in Section 2.2. In particular, a (Teff +1, Teff −N +1) Interleaved-MDS code exists over any
field of size greater than Teff + 1 and the associated decoding complexity is quadratic in Teff , whereas a
(n, k, Teff) m-MDS code is known to exist over a field whose size is exponential in Teff and the associated
decoding complexity is cubic in Teff .
2.6 Code Constructions for Burst Erasure Channel
In the previous section, we discussed m-MDS codes, which recover the maximum number of isolated erasures in a given sliding window. In this section, we discuss Maximally-Short (MS) codes, which can recover from the longest erasure burst in a given window.
2.6.1 Maximally-Short Codes
A class of systematic convolutional codes, Maximally-Short (MS) codes, is proposed in [35–37]; these codes correct the longest erasure burst for a given rate and delay. The construction involves designing a
low-delay burst erasure block code and then applying a diagonal interleaving approach to construct
the streaming code. Although this line of work only considers a single erasure burst during the entire
duration, it easily follows that the resulting codes can correct multiple erasure bursts provided that the
separation between them is at least T packets or equivalently W ≥ T +1, i.e., Teff = T . We first discuss
the construction for this case and then show that it can be easily extended to any W > B.
Encoder
The construction in [37] is described in three steps.
• Maximum Distance Separable (MDS) Code: We start by constructing a (T, T − B) systematic MDS code over a finite field F_q. We note that a (T, T − B) MDS code is capable of correcting B erasures in arbitrary locations (including burst erasures). The corresponding generator matrix can be expressed as

G = [ I_{T−B}  H ], (2.18)

where recall that I_k denotes the k × k identity matrix, whereas H is a (T − B) × B matrix.
• Low Delay-Burst Erasure Block Code (LD-BEBC): We construct a systematic (T + B, T) LD-BEBC from the previously constructed (T, T − B) MDS code, with the generator matrix given by

G⋆ = [ I_B           0_{B×(T−B)}   I_B
       0_{(T−B)×B}   I_{T−B}       H ], (2.19)

where 0_{a×b} is the a × b all-zeros matrix. Thus, after splitting the information packets b ∈ F_q^T into two groups,

b = (u, n),  where u ∈ F_q^B and n ∈ F_q^{T−B}, (2.20)

the resulting codeword is given by

d = b · G⋆ = (u, n, u + n† · H) = (b, r), (2.21)

where we have used (2.20) and introduced r = u + n† · H to denote the parity-check packets in d in the last step. One can show that the codeword d has the property that it is capable of correcting any burst of length B with a delay of at most T packets (cf. [35, 36]).
• Diagonal Interleaving: In this step, we convert the LD-BEBC constructed in the second step into a streaming code. We start by splitting each source packet s[t] ∈ F_q^T into T symbols, i.e., s[t] = (s_0[t], . . . , s_{T−1}[t]), where s_j[t] ∈ F_q for j ∈ {0, . . . , T − 1}. We then apply the (T + B, T) LD-BEBC diagonally. The information vector is constructed by collecting symbols diagonally as follows:

b_t = (s_0[t], s_1[t + 1], . . . , s_{T−1}[t + T − 1]). (2.22)

The corresponding diagonal codeword,

d_t = b_t G⋆ = (b_t, r_t)
    = (s_0[t], s_1[t + 1], . . . , s_{T−1}[t + T − 1], p_0[t + T], . . . , p_{B−1}[t + T + B − 1]), (2.23)

is then constructed according to (2.21), where r_t = (p_0[t + T], . . . , p_{B−1}[t + T + B − 1]) and

p_j[i] = s_j[i − T] + h_j(s_B[i − j − T + B], s_{B+1}[i − j − T + B + 1], . . . , s_{T−1}[i − j − 1]),
        j = 0, . . . , B − 1, (2.24)

where h_j(v) denotes the mapping produced by multiplying the vector v by the jth column of H in (2.18). The parities p_j[i] ∈ F_q are then appended diagonally to the source stream to produce
Table 2.1: A (2, 3) MS code construction where each source packet s[·] is divided into three symbols s_0[·], s_1[·], s_2[·], and a (5, 3) LD-BEBC is applied across the diagonals to generate two parity-check symbols, yielding a rate-3/5 code. Each column corresponds to one channel packet.
[i− 1] [i] [i+ 1] [i+ 2] [i+ 3] [i+ 4]
s0[i− 1] s0[i] s0[i+ 1] s0[i+ 2] s0[i+ 3] s0[i+ 4]
s1[i− 1] s1[i] s1[i+ 1] s1[i+ 2] s1[i+ 3] s1[i+ 4]
s2[i− 1] s2[i] s2[i+ 1] s2[i+ 2] s2[i+ 3] s2[i+ 4]
s0[i− 4] + s2[i− 2] s0[i− 3] + s2[i− 1] s0[i − 2] + s2[i] s0[i− 1] + s2[i+ 1] s0[i] + s2[i+ 2] s0[i+ 1] + s2[i+ 3]
s1[i− 4] + s2[i− 3] s1[i− 3] + s2[i− 2] s1[i− 2] + s2[i− 1] s1[i− 1] + s2[i] s1[i] + s2[i+ 1] s1[i+ 1] + s2[i+ 2]
the channel input stream. The channel packet at time t is given by

x[t] = [ s[t] ; p[t] ] = (s_0[t], . . . , s_{T−1}[t], p_0[t], . . . , p_{B−1}[t])† ∈ F_q^{T+B}. (2.25)
We refer to the above construction as a (B, T) MS-MDS code. The MS code is a time-invariant convolutional code [80]. The inputs to the convolutional code are source packets s[t] ∈ F_q^T, while the outputs are channel packets x[t] ∈ F_q^{T+B}. Hence, the rate of a (B, T) MS code is given by R = T/(T + B), which matches the capacity expression in (2.13) for the case when W ≥ T + 1. We emphasize that the actual transmitted packet is given in (2.25). Since the LD-BEBC in (2.23) is applied to symbols from different source packets (different time instants), the diagonal interleaving of such a block code introduces a non-zero memory into the construction, and in turn the overall code in (2.25) is a convolutional code.
Decoder
The structure of the diagonal codeword (2.23) is also important in decoding. Suppose that packets
x[t], . . . ,x[t + B − 1] are erased. It can be readily verified that there are no more than B erasures in
a burst (with possible wrap-around bursts) in each diagonal codeword {dt} (cf. (2.23)). Since each
codeword is a (T + B, T ) LD-BEBC, it recovers each erased packet with a delay of no more than T
packets. This in turn implies that all erased packets are recovered.
Remark 2.5. A (B, T ) MS-MDS code achieves the capacity of the CB(B,W ) channel for W ≥ T + 1
where T is the decoding delay. A (B, Teff) MS-MDS code, constructed by replacing T with Teff = min(W − 1, T), achieves the capacity of CB(B,W ) in (2.13) for all W > B.
Example: (2,3) MS-MDS Code
Suppose we wish to construct a code capable of correcting any packet burst erasure of length B = 2
with delay T = 3. An LD-BEBC (2.19) for these parameters is

c = (b_0, b_1, b_2, b_0 + b_2, b_1 + b_2). (2.26)
To construct the MS-MDS code, we divide each source packet into T = 3 symbols. The diagonal codeword (2.23) is of the form

d_t = (s_0[t], s_1[t + 1], s_2[t + 2], s_0[t] + s_2[t + 2], s_1[t + 1] + s_2[t + 2]), (2.27)

and the channel input x[t] is given by

x[t] = (s_0[t], s_1[t], s_2[t], s_0[t − 3] + s_2[t − 1], s_1[t − 3] + s_2[t − 2])†. (2.28)
The resulting channel input stream is illustrated in Table 2.1. Note that the rate of this code is 3/5, as it introduces two parity-check symbols for every three source symbols. It can be easily verified that this code corrects any burst erasure of length 2 with a worst-case delay of 3.
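The recovery schedule can be traced explicitly. The sketch below (our own check over GF(2), not code from the thesis) builds the stream of (2.28), erases the two packets at times t and t + 1, and decodes each erased symbol from the received parities and unerased source symbols; every symbol comes back within delay 3.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 12
S = rng.integers(0, 2, size=(L, 3))     # S[t, j] = s_j[t] over GF(2)

# Received parity symbols per (2.28); packets at times >= t + 2 are unerased,
# so the decoder reads these values directly off the channel.
def p0(t): return (S[t - 3, 0] + S[t - 1, 2]) % 2
def p1(t): return (S[t - 3, 1] + S[t - 2, 2]) % 2

# A length-2 burst erases packets t and t + 1.
t = 5
s2_t  = int((p1(t + 2) + S[t - 1, 1]) % 2)   # s_2[t]   from p1[t+2], delay 2
s2_t1 = int((p0(t + 2) + S[t - 1, 0]) % 2)   # s_2[t+1] from p0[t+2], delay 1
s0_t  = int((p0(t + 3) + S[t + 2, 2]) % 2)   # s_0[t]   from p0[t+3], delay 3
s1_t  = int((p1(t + 3) + s2_t1) % 2)         # s_1[t]   from p1[t+3], delay 3
s0_t1 = int((p0(t + 4) + S[t + 3, 2]) % 2)   # s_0[t+1] from p0[t+4], delay 3
s1_t1 = int((p1(t + 4) + S[t + 2, 2]) % 2)   # s_1[t+1] from p1[t+4], delay 3
```

Note that s_1[t] reuses the already-recovered s_2[t + 1], mirroring the sequential recovery along the diagonal codewords, and that no equation touches an erased, not-yet-recovered symbol.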
2.6.2 MS Codes using m-MDS Codes
The construction of MS-MDS codes, presented in [35, 36] and summarized in Section 2.6.1, involves
first constructing a specific low-delay block code and then converting it into a streaming code using a
diagonal interleaving technique. Thus the problem of constructing a streaming code is reduced to the
problem of constructing a block code with certain properties. While such a simplification is appealing,
unfortunately it does not appear to easily generalize when seeking extensions of MS codes for channels
that introduce other erasure patterns than bursts.
In this section we present an alternative construction of the MS codes discussed in Section 2.6.1. The
proposed construction eliminates the intermediate step of constructing a block code. We refer to this
version as MS-mMDS code.
Encoder
The encoding steps are as follows,
• Source Splitting: Split each source packet s[i] ∈ F_q^k into two groups, u[i] ∈ F_q^{k_u} and v[i] ∈ F_q^{k_v}, as follows:

s[i] = (u_0[i], . . . , u_{k_u−1}[i], v_0[i], . . . , v_{k_v−1}[i]) = (u[i], v[i]), (2.29)

where k_u + k_v = k, i.e., u[i] constitutes the first k_u symbols in s[i] whereas v[i] constitutes the remaining k_v symbols.
• m-MDS Parity-Checks: Apply a (ku + kv, kv, Teff) m-MDS code of rate Rv = kv/(ku + kv) on the packets v[i] and generate parity-check packets,

pv[i] = ( ∑_{j=0}^{Teff} v†[i− j] · Hv_j )†, pv[i] ∈ Fq^ku, (2.30)

where the matrices Hv_j ∈ Fq^(kv×ku) are associated with an m-MDS code (2.6).
• Repetition Code: Superimpose the u[·] packets onto pv[·] and let
q[i] = pv[i] + u[i− Teff ]. (2.31)
• Channel Packet Generation: Concatenate the generated parity-checks to the source packets
so that the channel input at time i is given by x[i] = (u[i],v[i],q[i]) ∈ Fnq , where n = 2ku + kv.
In our construction discussed above, we select k = Teff , ku = B, kv = Teff − B and n = Teff + B. Clearly the rate of the proposed code R = k/n matches the capacity expression in (2.13).
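The layered encoder above can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: the random matrices over a prime field stand in for true m-MDS parity-check matrices Hv_j of (2.6), so it illustrates only the packet layout and rate, not the erasure-correction guarantees:

```python
import random

P = 8191  # hypothetical prime field size; an actual m-MDS code may need larger

def ms_mmds_packet(u_hist, v_hist, Hv, Teff):
    """Channel packet x[i] = (u[i], v[i], q[i]) with q[i] = p_v[i] + u[i - Teff],
    where p_v[i] = sum_{j=0}^{Teff} v[i-j] . Hv_j as in (2.30).
    u_hist[j] = u[i-j] and v_hist[j] = v[i-j] (zero-padded before time 0)."""
    ku, kv = len(u_hist[0]), len(v_hist[0])
    pv = [0] * ku
    for j in range(Teff + 1):
        for r in range(kv):
            for c in range(ku):
                pv[c] = (pv[c] + v_hist[j][r] * Hv[j][r][c]) % P
    q = [(pv[c] + u_hist[Teff][c]) % P for c in range(ku)]
    return tuple(u_hist[0]) + tuple(v_hist[0]) + tuple(q)

# ku = B, kv = Teff - B, so each packet carries n = 2*ku + kv = Teff + B symbols.
B, Teff = 2, 5
ku, kv = B, Teff - B
random.seed(7)
Hv = [[[random.randrange(P) for _ in range(ku)] for _ in range(kv)]
      for _ in range(Teff + 1)]  # random stand-ins for the m-MDS matrices Hv_j
u_hist = [[random.randrange(P) for _ in range(ku)] for _ in range(Teff + 1)]
v_hist = [[random.randrange(P) for _ in range(kv)] for _ in range(Teff + 1)]
x = ms_mmds_packet(u_hist, v_hist, Hv, Teff)
assert len(x) == Teff + B  # rate k/n = Teff/(Teff + B), matching (2.13)
```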
Decoder
For decoding, we suppose that the first erasure burst of length B spans the interval [0, B − 1]. We are
then guaranteed that there are no additional erasures in the interval [B, Teff +B − 1] for the CB(B,W )
channel. We claim that each s[0], s[1], . . . , s[B− 1] is recovered by time t = Teff , Teff +1, . . . , Teff +B− 1
respectively.
The decoder proceeds in two steps.
• Simultaneously recover v[0], . . . ,v[B − 1] by time t = Teff − 1. In this step the decoder proceeds
as follows. For each j ∈ {B, . . . , Teff − 1}, the decoder recovers the parity-check packets pv[j], by
subtracting the unerased u[j − Teff ] from the associated q[j] = pv[j] + u[j − Teff ] packets. These
recovered parity-checks can then be used to recover v[0], . . . ,v[B − 1].
Note that using property P2 in Corollary 2.1 and substituting R = Rv and j = Teff − 1 we get,
(1−Rv)Teff = B, (2.32)
and hence the recovery of v[0], . . . ,v[B − 1] by time t = Teff − 1 is guaranteed.
• Sequentially recover u[0], . . . ,u[B − 1] at times Teff , . . . , Teff + B − 1, respectively. Consider the
parity-checks q[j] = u[j − Teff ] + pv[j] for j ∈ {Teff , . . . , Teff + B − 1}, which are available to the
decoder. Upon the recovery of v[0], . . . ,v[B − 1] in the previous step, the required pv[j] can be
computed, subtracted from q[j], and the underlying u[·] packets can be sequentially recovered by
their deadlines.
Upon completion of the two steps stated above5, the recovery of s[i] by time i + Teff for i ∈ {0, . . . , B − 1} follows. Any subsequent burst, starting at time t ≥ Teff + B, can be corrected in a similar fashion. Since the rate of the code is given by (2.13), the achievability proof of Theorem 2.3 is complete.
Remark 2.6. Although both MS-mMDS and MS-MDS codes achieve the capacity of CB(B,W ), the MS-MDS codes have a lower decoding complexity and a smaller field-size when compared to MS-mMDS codes due to the use of block MDS codes instead of m-MDS codes as constituent codes. In particular, a (B, Teff) MS-MDS code uses a (Teff , Teff − B) block MDS code which is known to exist over any field of size greater than Teff , and the associated decoding complexity is quadratic in Teff . In contrast, a (B, Teff) MS-mMDS code uses a (Teff , Teff − B, Teff) m-MDS code which is known to exist over a field whose size is exponential in Teff , and the associated decoding complexity is cubic in Teff (cf. Section 2.2).
5The decoding steps of the MS code might remind the reader of the interference cancellation scheme used in the Decision Feedback Equalizer (DFE), whose transmitter-side counterpart is known as Dirty Paper Coding (DPC) [81].
Figure 2.1: The periodic erasure channel used in the converse proof of Theorem 2.2. In each period, the first N packets are erased (shaded) while the remaining Teff − N + 1 packets are perfectly received by the destination.
Remark 2.7. We note that the MS codes are not feasible for channels introducing isolated erasures. To see this, consider two isolated erasures, one at t = 0 and the other at t = Teff . In this case, both u[0] and its repeated copy in q[Teff ] are erased.
2.7 Converse Proofs
2.7.1 Isolated Erasure Channel
To establish the converse of Theorem 2.2, we separately consider the cases W ≥ T + 1 and W < T + 1.
When W ≥ T + 1, consider a periodic erasure channel (PEC) with a period of τP = T + 1 packets and
suppose that in every such period the first N packets are erased (see Figure 2.1). While this PEC is not
included in CI(N,W ) which introduces no more than N erasures in a sliding window of length W , we
nonetheless show that any code for such a channel and delay T is also feasible for the PEC.
We denote the recovery period of the source packet s[i] by Wi = [i, i+ T ]. The recovery steps are as
follows,
• At i = 0, the recovery period, W0 = [0, T ], has exactly N erasures in the interval [0, N − 1] and
thus s[0] can be recovered at time T which incurs a delay of T packets. Once s[0] is recovered, its
effect can be cancelled from all future channel packets.
• For i = 1, W1 = [1, T + 1] which has a total of N erasures in the interval [1, N − 1] ∪ {T + 1}. Hence, s[1] can be similarly decoded within a delay of T packets. The effect of s[1] can then be
cancelled from all future packets.
• For any i ≥ 0, it can be easily verified that the corresponding decoding window, Wi, has no more
than N erasures. Hence, s[i] can be recovered with a delay of T packets and its effect can then be
cancelled from future packets.
Thus using the capacity of the periodic erasure channel, we have
R ≤ 1 − N/(T + 1). (2.33)
For the case when W < T + 1, we consider a periodic erasure channel with a period of τP = W where
in each period the first N packets are erased and the remaining W − N packets are not erased. Such
a channel by construction satisfies the isolated erasure channel CI(N,W ) in Definition 2.6, i.e., in any
window Wi = [i, i + W − 1] of length W there exists no more than N isolated erasures. Thus every
source packet on such a channel must be recovered with a delay of T packets, i.e., we have that
R ≤ 1 − N/W. (2.34)
Figure 2.2: The periodic erasure channel used in the converse proof of Theorem 2.3. In each period, the first B packets are erased (shaded) while the remaining Teff packets are perfectly received by the destination.
Rearranging (2.33) and (2.34) and using Teff = min(W − 1, T ), we recover (2.12). This completes
the converse proof of Theorem 2.2.
Remark 2.8. We note that in the case of W < T + 1, the capacity matches the Shannon capacity of erasure channels, i.e., the capacity expression only depends on the fraction of erasures in the channel. When W ≥ T + 1, however, the decoding delay T gives rise to a tighter capacity expression that depends on T .
2.7.2 Burst Erasure Channel
We again use a periodic erasure channel argument similar to that used in the converse proof in Section 2.7.1. For the case when W ≥ T + 1, we consider a periodic erasure channel with a period of
τP = T +B and suppose that in every such period the first B packets are erased (see Figure 2.2). The
decoding steps are as follows,
• The recovery period of s[0], W0 = [0, T ], has exactly B erasures in a burst. Hence, s[0] can be
recovered at time T , i.e., with a delay of T packets.
• We cancel the effect of s[0] from all future channel packets and consider W1 = [1, T + 1]. This window contains a single erasure burst of length at most B and thus s[1] can be recovered at time T + 1, which incurs a delay of T packets.
• Repeating the previous two steps, one can see that in any Wi = [i, i + T ] for i ≥ 0, there are no
more than B erasures in a burst. Hence, the corresponding s[i] can be recovered with a delay of
T .
The capacity of the considered periodic erasure channel in Figure 2.2 is an upper bound on the achievable
rate over the channel CB(B,W ≥ T + 1), i.e.,
R ≤ 1 − B/(T + B). (2.35)
The capacity expression for the case where W < T + 1 can be deduced using the same argument but replacing T with Teff = W − 1.
A more rigorous proof is established in Appendix A.2 which will be helpful when considering a
multicast setup in Chapter 8.
2.8 Conclusion
In this chapter, we introduce the streaming setup that will be considered throughout the thesis. At each
time instant, a source packet arrives at the encoder which causally generates a channel packet to be
transmitted over the channel. A sliding window channel model is considered where erasures are locally
constrained. The decoder is a delay constrained decoder where each source packet has to be recovered
within a delay of T packets. We consider two basic channel models. The first is the isolated erasure
channel where no more than N isolated erasures can take place in any window of length W packets.
In this case, we show that m-MDS codes, proposed in [34], achieve the capacity of such channels. We
also discuss another capacity achieving construction where block MDS codes are diagonally interleaved
to construct the streaming code which first appeared in [35]. These codes exist over a smaller field
size and require a lower decoding complexity when compared to m-MDS codes. The second channel
is a burst erasure channel where a single burst of maximum length B can be erased in any window of
length W packets. In this case, Maximally Short codes [35–37], which use the same diagonal interleaving technique, are proved to achieve the capacity. Similar to the case of the isolated erasure channel, we propose an
alternative construction that uses m-MDS codes as constituent codes instead of MDS block codes. The
converse proofs for both channels use a periodic erasure channel approach similar to that in [79, Section
6.10].
The two discussed classes of codes, m-MDS codes and MS codes, are designed for i.i.d. erasure channels and burst erasure channels, respectively, but not for channels that introduce a mixture of both patterns. However, practical channels introduce both patterns, as captured by many statistical models such as the Gilbert-Elliott channel. This motivates designing new streaming codes which can simultaneously recover from both burst and isolated erasures. Such codes will be studied in Chapters 3, 4, 5 and 6.
Moreover, for a channel that only introduces burst erasures, MS codes require the knowledge of the
burst length and delay a priori which forces a conservative design in practice. Hence, a streaming code
should be capable of recovering from bursts of different lengths under different delay constraints. Codes with such a property will be investigated in Chapters 7 and 8.
Chapter 3
Maximum Distance And Span
(MiDAS) Codes
3.1 Introduction
As discussed in the previous chapter, m-MDS codes and Maximally-Short codes are designed for channels
that either introduce isolated erasures or burst erasures, respectively. However, in practice, channels
introduce both burst and isolated losses often captured by statistical models such as the Gilbert-Elliott
(GE) channel. Therefore the underlying codes must simultaneously correct both types of patterns. The
central question we address in this chapter is how to construct streaming codes for such channels.
In this chapter, we consider a sliding window erasure channel model which introduces either burst
or isolated erasures in a given window. We show that in such models there exists an inherent tradeoff
between the burst-erasure correction and isolated-erasure correction capability of streaming codes, i.e.,
given a fixed rate and delay, a streaming code that can correct a long burst cannot recover from many
isolated erasures and vice versa. We then propose a new class of streaming codes – Maximum Distance
And Span (MiDAS) codes – based on a layered approach. We first construct a burst erasure code and
then concatenate an additional layer of parity-check packets to enable recovery from isolated erasures. We
show that MiDAS codes achieve such a tradeoff with a gap of at most one delay unit. In practice, our
proposed constructions can be easily adapted if the number of isolated erasures to be corrected varies
based on channel conditions.
Numerical simulations over Gilbert-Elliott and Fritchman channel models indicate significant gains
in the residual loss probability of the proposed codes over baseline schemes designed for either burst or
isolated erasure channels only.
3.2 System Model
We use the same streaming model in Section 2.3. We consider a class of packet erasure channels where
the erasure patterns are locally constrained. In particular, we consider a class of channels that introduce
a mixture of burst and isolated erasures. We precisely define this class of channels as follows.
Figure 3.1: An example of the sliding window channel in Definition 3.1 with N = 2, B = 3 and W = 5, i.e., C(2, 3, 5). In any sliding window of length W = 5, there is either a single erasure burst of maximum length B = 3, or no more than N = 2 isolated erasures. The shaded packets are erased while the remaining ones are perfectly received by the destination.
Definition 3.1 (Burst or Isolated Erasures Channel - C(N,B,W )). In any sliding window of length W ,
the channel can introduce one of the following patterns,
• A single erasure burst of maximum length B < W , or
• A maximum of N ≤ B erasures in arbitrary locations.
We use the notation C(N,B,W ) to denote such a channel.
Figure 3.1 provides an example of C(N,B,W ) when N = 2, B = 3 and W = 5. Note that the
condition N ≤ B follows since a burst erasure is a special type of erasure pattern. The condition B < W
guarantees that in any window of length W there is at least one non-erased packet1.
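Definition 3.1 can be made operational with a small, hypothetical helper (the function name and patterns below are illustrative) that checks whether a given erasure pattern is admissible for C(N,B,W ):

```python
def admissible(pattern, N, B, W):
    """Return True if a 0/1 erasure pattern (1 = erased) is allowed by the
    C(N, B, W) channel of Definition 3.1: every sliding window of length W
    contains either a single erasure burst of length at most B, or at most
    N erasures in arbitrary positions."""
    assert N <= B < W
    for i in range(max(1, len(pattern) - W + 1)):
        idx = [j for j, e in enumerate(pattern[i:i + W]) if e]
        if len(idx) <= N:
            continue                                  # N arbitrary erasures
        is_burst = idx[-1] - idx[0] + 1 == len(idx)   # contiguous run
        if not (is_burst and len(idx) <= B):
            return False
    return True

# C(2, 3, 5): a burst of length 3 and two isolated erasures are admissible
# as long as they never fall inside a common window of length 5.
ok  = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0]
bad = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
assert admissible(ok, N=2, B=3, W=5) and not admissible(bad, N=2, B=3, W=5)
```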
Remark 3.1. Note that in the special case when N = B, Definition 3.1 reduces to the isolated erasure
channel in Definition 2.6, i.e., C(N,N,W ) = CI(N,W ), while in the special case of N = 1, the definition reduces to the burst erasure channel in Definition 2.7, i.e., C(N = 1, B,W ) = CB(B,W ). In the latter case, the guard separation between successive bursts is at least W − 1.
In practice, we can view C(N,B,W ) as an approximation of statistical models such as the Gilbert-
Elliott (GE) channel model. As discussed in Section 1.4.1, a GE channel is in one of two states. In the
good state, it behaves as an i.i.d. erasure channel, while in the bad state, it behaves as a burst erasure channel. Thus an interval containing a burst loss corresponds to the bad state, whereas a window comprising isolated erasures corresponds to the good state. The values of N , B and W will depend
on the underlying channel parameters. Some insights into the connections between the sliding window
and statistical channel models will be presented in our simulation results.
3.3 Main Results
We establish the following upper and lower bounds on the capacity.
Theorem 3.1 (Upper bound on C(N,B,W )). Any achievable rate R for C(N,B,W ) and delay T ≥ B
must satisfy
(R/(1−R)) B + N ≤ Teff + 1, (3.1)

where Teff = min(T,W − 1).
To interpret (3.1), note that Teff +1 = min(T +1,W ), and T + 1 denotes the active duration of each
source packet, i.e., each source packet s[i] arrives at time t = i and must be decoded by time t = i+ T .
1If this condition is violated, it can be easily shown that the capacity is trivially zero.
Chapter 3. Maximum Distance And Span (MiDAS) Codes 27
When W > T + 1 the upper bound in (3.1) is governed by the decoding delay, otherwise it depends on
W . One can also interpret (3.1) as follows. When the rate R and delay T are fixed, there exists a tradeoff
between the achievable values of B and N . We cannot have a streaming code that can simultaneously
correct long erasure bursts and many isolated erasures.
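The tradeoff in (3.1) can be tabulated directly. The following sketch (illustrative; exact rational arithmetic via `fractions.Fraction`) computes the largest N permitted by the upper bound for a given rate, burst length and effective delay:

```python
import math
from fractions import Fraction

def max_isolated(R, B, Teff):
    """Largest integer N permitted by the upper bound (3.1):
    (R/(1-R)) * B + N <= Teff + 1."""
    return math.floor(Teff + 1 - Fraction(R) / (1 - Fraction(R)) * B)

# For R = 1/2 we have R/(1-R) = 1, so N <= Teff + 1 - B: each extra unit of
# burst correction costs exactly one unit of isolated-erasure correction.
print([max_isolated(Fraction(1, 2), B, 8) for B in range(1, 9)])
# → [8, 7, 6, 5, 4, 3, 2, 1]
```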
The proof of Theorem 3.1 is provided in Section 3.4. We further propose a class of streaming codes,
Maximum Distance And Span (MiDAS) codes, that achieve a tradeoff that is at most one unit of delay
from the upper-bound in (3.1).
Theorem 3.2 (Maximum Distance And Span (MiDAS) Codes). For the C(N,B,W ) channel and delay
constraint T ≥ B, there exists a code of rate R that satisfies
(R/(1−R)) B + N > Teff , (3.2)

where Teff = min(T,W − 1).
The proof of Theorem 3.2 is presented in Section 3.5. As will be apparent, our construction is
based on a layered approach. We first construct a (B, Teff) MS code which achieves the capacity of the C(N ′ = 1, B,W ) channel with a decoding delay of T . Then we append an additional layer of parity-
checks that enables us to correct N erasures in any sliding window of length W within a delay of T . By
directly comparing (3.2) and (3.1), we see that the proposed codes are within one unit of decoding delay
from the upper bound.
Remark 3.2. The upper and lower bounds in (3.1) and (3.2) can be interpreted as follows. For a given rate R, if the effective delay, Teff , is increased by one unit, the number of recoverable isolated erasures N can be increased by one unit while fixing B, or the length of the recoverable burst can be increased by (1−R)/R units while fixing N .
While the m-MDS and Interleaved-MDS codes achieve the extreme point of the upper bound (3.1)
corresponding to N = B, the Maximally Short (MS) codes achieve the other extreme point, correspond-
ing to N = 1. In particular, the capacity in (2.12) and (2.13) are the maximum achievable rates for
C(N = B,B,W ) and C(N = 1, B,W ) channels respectively. Note that the MS codes can only achieve
N = 1 and are highly sensitive to isolated losses over the channel. In [35, Section V-B], some examples
of codes with higher N were reported using a numerical search but a general approach for constructing
robust streaming codes remained elusive. We also remark that the upper bound (3.1) establishes that
some of the R = 1/2 codes found via a computer search in [35] are indeed optimal with respect to
Theorem 3.1. In Section 3.5, we present an alternative perspective that extends to achieve the tradeoff
in (3.2).
Remark 3.3. Note that Theorem 3.2 does not explicitly state the field-size q. The underlying construc-
tions are based on m-MDS codes discussed in Section 2.5.2 which are known to exist for field-sizes that
increase exponentially in Teff . However, we also provide an alternate construction in Section 3.6, that
attains (3.2), and whose field-size increases as O(Teff^3). This alternative construction replaces m-MDS
codes with diagonally interleaved MDS block codes and thus attains a reduced decoding complexity as
discussed in Section 2.2.
We start by proving the upper bound in Theorem 3.1 in Section 3.4. We provide a robust extension
of MS codes – MiDAS codes – in Section 3.5 and study their optimality with respect to Theorem 3.1.
Figure 3.2: The periodic erasure channel in the proof of Theorem 3.1. In each period of T + B − N + 1 packets, the first B packets are erased (shaded) while the remaining T − N + 1 packets are perfectly received by the destination.
In Section 3.6, we provide an alternative construction of MiDAS codes achieving the same tradeoff in
Theorem 3.2 but with a reduced field-size and decoding complexity. We then compare the performance
of the two constructions through an example in Section 3.7. A numerical comparison between the rate
achieved by MiDAS codes and that by baseline codes is included in Section 3.8. Finally, simulation
results are provided in Section 3.9 to compare the performance of proposed codes to that of baseline
codes over statistical models.
3.4 Upper bound
To establish the upper bound in Theorem 3.1, we separately consider the cases where W ≥ T + 1 and
W < T +1. When W ≥ T +1, consider a periodic erasure channel with a period of τP = T +B −N + 1
and suppose that in every such period the first B packets are erased (see Figure 3.2). While such a
channel is not included in C(N,B,W ), we nonetheless show that any code for C(N,B,W ) and delay T
is also feasible for the proposed periodic erasure channel.2
Consider the first period that spans the interval [0, τP − 1]. We note the following
• The first B − N + 1 packets, {s[i]}0≤i≤B−N , must all be recovered with delay T since the recovery window [i, i + T ] of each such packet only contains a burst of length B or smaller. Thus all these packets are recovered by time t = τP − 1.
• The recovery window of each of the N − 1 packets, {s[i]}B−N+1≤i≤B−1 is [i, i + T ] which sees
two bursts. The first burst spans [i, B − 1] and is of length B − i. The second burst spans
[T + B − N + 1, i + T ] and is of length i + N − B. Thus the total number of erased packets in each recovery period is exactly N . Hence, any feasible code over the C(N,B,W ) channel guarantees that each such packet is also recovered at time i + T .
• The recovery window of each of the remaining packets in the first period, s[B], . . . , s[τP − 1], again sees a single erasure burst of length at most B at the end of the window. Hence, each of these packets is also guaranteed to be recovered within a delay of T packets.
We have thus shown that all the packets in the first period spanning [0, τP − 1] can be recovered with
delay T . We can repeat the same argument for all the remaining periods and thus the claim follows.
Thus using the capacity of the periodic erasure channel, we have
R ≤ 1 − B/(T + B − N + 1). (3.3)
For the case when W < T + 1, we consider a periodic erasure channel with a period of τP =
W +B −N where in each period the first B packets are erased and the remaining W −N packets are
2An information theoretic argument similar to that in Appendix A.2 can be used to prove the upper bound in (3.1) but will not be presented in this thesis.
Figure 3.3: A window of Teff + 1 channel packets showing the decoding steps of a MiDAS code when an erasure burst takes place. Each channel packet stacks u[i] (ku symbols), v[i] (kv symbols), u[i − Teff ] + pv[i] (ku symbols) and pu[i] (ks symbols). Shaded columns are erased channel packets while the remaining ones are perfectly received by the destination. In the case of isolated erasures, the v[·] and u[·] packets are recovered separately using the pv[·] parities in the window [0, Teff − 1] and pu[·] parities in the window [0, Teff ], respectively.
not erased. Such a channel by construction is a C(N,B,W ) channel. In any window Wi = [i, i+W − 1]
of length W there exists either a single burst of maximum length B, or up to N isolated erasures. Thus
every erased packet on such a channel must be recovered within a delay of T packets, i.e., we have that

R ≤ 1 − B/(W + B − N). (3.4)
Rearranging (3.3) and (3.4) and using Teff = min(W − 1, T ), we easily recover (3.1). This completes
the proof of the upper bound.
3.5 Maximum Distance And Span (MiDAS) Codes
3.5.1 Code Construction
Our proposed construction is based on a layered approach. As illustrated in Figure 3.3, we first construct
a Generalized MS code for the C(N = 1, B,W ) channel and then concatenate an additional layer of parity
packets when N > 1.
Encoder
The encoding steps are as follows,
• Source Splitting: Split each source packet s[i] ∈ Fq^k into two groups u[i] ∈ Fq^ku and v[i] ∈ Fq^kv as follows,

s[i] = (u0[i], . . . , uku−1[i], v0[i], . . . , vkv−1[i]) = (u[i],v[i]), (3.5)
where ku + kv = k, i.e., u[i] constitutes the first ku symbols in s[i] whereas v[i] constitutes the
remaining kv symbols.
• m-MDS Parity-Checks of v[·]: Apply a (ku + kv, kv, Teff) m-MDS code of rate Rv = kv/(ku + kv) on the packets v[i] and generate parity-check packets

pv[i] = ( ∑_{l=0}^{Teff} v†[i− l] · Hv_l )†, pv[i] ∈ Fq^ku, (3.6)

where the matrices Hv_l ∈ Fq^(kv×ku) are associated with an m-MDS code (2.6).
• Repetition Code: Superimpose the u[·] packets onto pv[·] and let
q[i] = pv[i] + u[i− Teff ]. (3.7)
• m-MDS Parity-Checks of u[·]: Apply a (ku + ks, ku, Teff) m-MDS code of rate Ru = ku/(ku + ks) to the u[i] packets and generate additional parity-check packets,

pu[i] = ( ∑_{l=0}^{Teff} u†[i− l] · Hu_l )†, pu[i] ∈ Fq^ks, (3.8)

where Hu_l ∈ Fq^(ku×ks) are matrices associated with an m-MDS code (2.6).
• Channel Packet Generation: Concatenate the generated parity-checks q[i] and pu[i] to the
source packets so that the channel input at time i is given by x[i] = (u[i],v[i],q[i],pu[i]) ∈ Fnq ,
where n = 2ku + kv + ks.
In our construction, we select ku = B, kv = Teff − B, k = ku + kv = Teff and

ks = (N/(Teff − N + 1)) ku. (3.9)
Remark 3.4. We note that if the value of ks in (3.9) is non-integer, extra source splitting by a certain factor m is needed. In particular, we set ku = mB, kv = m(Teff − B), k = ku + kv = mTeff and ks = (N/(Teff − N + 1)) ku = (N/(Teff − N + 1)) mB. It can be seen that choosing m = Teff − N + 1 is sufficient for ks to be an integer.
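The parameter selection above, including the extra splitting factor m of Remark 3.4, can be sketched as follows (a hypothetical helper; exact arithmetic via `fractions.Fraction`):

```python
from fractions import Fraction

def midas_params(N, B, Teff):
    """Symbol counts of a MiDAS code: ku = m*B, kv = m*(Teff - B) and
    ks = N/(Teff - N + 1) * ku, with m chosen as the smallest factor making
    ks an integer (m = Teff - N + 1 always suffices, per Remark 3.4).
    Returns (ku, kv, ks, rate)."""
    for m in range(1, Teff - N + 2):
        ks = Fraction(N, Teff - N + 1) * m * B
        if ks.denominator == 1:
            ku, kv = m * B, m * (Teff - B)
            return ku, kv, int(ks), Fraction(ku + kv, 2 * ku + kv + int(ks))
    raise AssertionError("m = Teff - N + 1 should always work")

# The (N, B, T) = (2, 3, 4) example of Section 3.5.2: rate 4/9, no splitting.
assert midas_params(2, 3, 4) == (3, 1, 2, Fraction(4, 9))
```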
Decoder
In the analysis of the decoder, we consider the interval [0, Teff ] and show that the decoder can recover
s[0] by time t = Teff if there is either an erasure burst of length B or smaller, or up to N isolated erasures
in this interval. Once we show the recovery of s[0] by time t = Teff , we can cancel its effect from all
future parity-check packets if necessary. The same argument can then be used to show that s[1] can
be recovered by time Teff + 1 if there are no more than N isolated erasures or a single burst erasure of
maximum length B in the interval [1, Teff +1]. Recursively continuing this argument we are guaranteed
the recovery of each s[i] by time i+ Teff .
If there is a burst of length B in the interval [0, Teff ] our construction of q[·] already guarantees the
recovery of s[0] by time t = Teff (cf. Section 2.6.1). Thus we only need to consider the case when there
are N isolated erasures in the interval [0, Teff ]. We show that the decoder is guaranteed to recover v[0]
at time t = Teff − 1 using the parity-checks q[·] and u[0] at time t = Teff using the parity-checks pu[·].
Table 3.1: MiDAS code construction for (N,B) = (2, 3), a delay of T = 4 and rate R = 4/9.

            [i]               [i+1]              [i+2]              [i+3]              [i+4]
ku = 3      u0[i]             u0[i+1]            u0[i+2]            u0[i+3]            u0[i+4]
            u1[i]             u1[i+1]            u1[i+2]            u1[i+3]            u1[i+4]
            u2[i]             u2[i+1]            u2[i+2]            u2[i+3]            u2[i+4]
kv = 1      v0[i]             v0[i+1]            v0[i+2]            v0[i+3]            v0[i+4]
ku = 3      u0[i−4]+pv0[i]    u0[i−3]+pv0[i+1]   u0[i−2]+pv0[i+2]   u0[i−1]+pv0[i+3]   u0[i]+pv0[i+4]
            u1[i−4]+pv1[i]    u1[i−3]+pv1[i+1]   u1[i−2]+pv1[i+2]   u1[i−1]+pv1[i+3]   u1[i]+pv1[i+4]
            u2[i−4]+pv2[i]    u2[i−3]+pv2[i+1]   u2[i−2]+pv2[i+2]   u2[i−1]+pv2[i+3]   u2[i]+pv2[i+4]
ks = 2      pu0[i]            pu0[i+1]           pu0[i+2]           pu0[i+3]           pu0[i+4]
            pu1[i]            pu1[i+1]           pu1[i+2]           pu1[i+3]           pu1[i+4]
The recovery of v[0] by time Teff − 1 follows in a fashion similar to the simultaneous recovery step above (2.32) in the previous chapter. However, we use P1 in Corollary 2.1 instead. Recall from (3.7) that q[i] = pv[i] + u[i− Teff ], where pv[i] are the parity-checks of the m-MDS code (3.6). Since the interfering u[·] packets in the interval [0, Teff − 1] are not erased, they can be cancelled out by the decoder from q[·] and the corresponding parity-checks pv[·] are recovered at the decoder. Since the code (v[i],pv[i]) is an m-MDS code of rate Rv = (Teff − B)/Teff , applying property P1 in Corollary 2.1 the number of isolated erasures under which the recovery of v[0] is possible is given by Nv = (1−Rv)Teff = B. Since N ≤ B holds, the recovery of v[0] by time t = Teff − 1 is guaranteed by the code construction.
For recovering u[0] at time t = Teff , we use the pu[·] parity-checks in the interval [0, Teff ]. Note that the associated code (u[i],pu[i]) is an m-MDS code with rate Ru = ku/(ku + ks) and hence it follows from P1 in Corollary 2.1 that the number of isolated erasures under which the recovery of u[0] is possible is given by

(1−Ru)(Teff + 1) = (ks/(ks + ku))(Teff + 1) = N, (3.10)
where we substitute (3.9) in the last equality. The rate of the code satisfies

R = (ku + kv)/(2ku + kv + ks) = Teff / (Teff + B + BN/(Teff − N + 1)) (3.11)
  > Teff / (Teff + B + BN/(Teff − N)) = (Teff − N)/(Teff − N + B), (3.12)

where (3.11) follows by substituting in (3.9). Rearranging (3.12) we have that

(R/(1−R)) B + N > Teff . (3.13)
The proof of Theorem 3.2 is thus completed. □
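The gap between the achievable tradeoff (3.13) and the upper bound (3.1) can also be verified numerically. The sketch below (illustrative; exact rational arithmetic) sweeps over parameters and checks that the MiDAS rate (3.11) satisfies Teff < (R/(1−R))B + N ≤ Teff + 1, i.e., the construction is within one unit of delay of the bound:

```python
from fractions import Fraction

def midas_rate(N, B, Teff):
    """Rate (3.11) of the MiDAS construction, computed exactly."""
    return Fraction(Teff) / (Teff + B + Fraction(B * N, Teff - N + 1))

# Sweep over admissible parameters (1 <= N <= B <= Teff) and check that the
# achievable tradeoff sits between (3.2) and the upper bound (3.1).
for Teff in range(2, 12):
    for B in range(1, Teff + 1):
        for N in range(1, B + 1):
            R = midas_rate(N, B, Teff)
            lhs = R / (1 - R) * B + N
            assert Teff < lhs <= Teff + 1
```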
3.5.2 Example - MiDAS (N,B, T ) = (2, 3, 4) and W ≥ T + 1 = 5
Table 3.1 illustrates a MiDAS construction for (N,B) = (2, 3) and T = 4 and Teff = T .
Encoder
The encoding steps are as follows,
• Split each source packet s[i] into k = T = 4 symbols. The first ku = B = 3 symbols are
u[i] = (u0[i], u1[i], u2[i]) while the last kv = T −B = 1 symbol is v0[i].
• Apply a (ku + kv, kv, T ) = (4, 1, 4) m-MDS code of rate Rv = 1/4 to the v symbols generating the B = 3 parity-check symbols,

pv[i] = (pv0[i], pv1[i], pv2[i]) = ∑_{l=0}^{4} v0[i− l] · Hv_l , (3.14)

where Hv_l for l ∈ {0, . . . , 4} are 1 × 3 matrices associated with the construction of m-MDS codes (cf. (2.6)).
• Combine the u[·] packets with pv[·] packets and generate q[i] = pv[i] + u[i− T ].
• Apply a (ku + ks, ku, T ) = (5, 3, 4) m-MDS code of rate Ru = 3/5 to the u packets generating ks = (N/(T − N + 1)) ku = 2 parity-check symbols,

pu[i] = (pu0[i], pu1[i]) = ∑_{l=0}^{4} [u0[i− l] u1[i− l] u2[i− l]] · Hu_l , (3.15)

where Hu_l for l ∈ {0, . . . , 4} are 3 × 2 matrices associated with m-MDS codes in (2.6).
The channel packet at time i is given by,
x[i] = (u[i],v[i],q[i],pu[i]) , (3.16)
whose rate is R = (ku + kv)/(2ku + kv + ks) = T/(T + B + NB/(T − N + 1)) = 4/9.
Decoder
For decoding, first assume that an erasure burst spans the interval [i, i + 2]. We first recover pv0[i+3], pv1[i+3] and pv2[i+3] by subtracting u0[i−1], u1[i−1] and u2[i−1] from the parity-check symbols q0[i+3], q1[i+3] and q2[i+3], respectively. In the interval [i, i + T − 1] = [i, i + 3], the channel introduces a burst of length 3. Thus,
the (4, 1, 4) m-MDS code is capable of recovering the three erased packets v0[i], v0[i + 1] and v0[i + 2]
by time i + 3 since (1 − Rv)(T ) = 3 (cf. P2 in Corollary 2.1). Once all the erased v[t] are recovered,
we can compute the parity-check packets pv[t] for t ∈ {i + 4, i + 5, i + 6} and subtract them from the
corresponding q[t] to recover u[i],u[i + 1],u[i + 2] at time i + 4, i + 5, i + 6 respectively, i.e., within a
delay of T = 4.
In the case of isolated erasures, we consider a channel introducing N = 2 isolated erasures in the
interval [i, i + 4] of length T + 1 = 5. We first recover the unerased parity-check packets pv[·] in the
interval [i, i+3] by subtracting the corresponding u[·] packets. The (4, 1, 4) m-MDS code is then capable
of recovering v0[i] by time i+T −1 = i+3 by invoking P1 in Corollary 2.1 since (1−Rv)T = 3 > 2 = N .
Similarly, u[i] can be recovered by time i + 4 using the (5, 3, 4) m-MDS code in the interval [i, i + 4] since (1−Ru)(T + 1) = 2 = N .
Table 3.2: MiDAS-MDS code construction for (N,B) = (2, 3), a delay of T = 4 and rate R = 4/9. This construction replaces the m-MDS codes in MiDAS codes with block MDS codes.

            [i]               [i+1]              [i+2]              [i+3]              [i+4]
ku = 3      u0[i]             u0[i+1]            u0[i+2]            u0[i+3]            u0[i+4]
            u1[i]             u1[i+1]            u1[i+2]            u1[i+3]            u1[i+4]
            u2[i]             u2[i+1]            u2[i+2]            u2[i+3]            u2[i+4]
kv = 1      v0[i]             v0[i+1]            v0[i+2]            v0[i+3]            v0[i+4]
ku = 3      u0[i−4]+pv0[i]    u0[i−3]+pv0[i+1]   u0[i−2]+pv0[i+2]   u0[i−1]+pv0[i+3]   u0[i]+pv0[i+4]
            u1[i−4]+pv1[i]    u1[i−3]+pv1[i+1]   u1[i−2]+pv1[i+2]   u1[i−1]+pv1[i+3]   u1[i]+pv1[i+4]
            u2[i−4]+pv2[i]    u2[i−3]+pv2[i+1]   u2[i−2]+pv2[i+2]   u2[i−1]+pv2[i+3]   u2[i]+pv2[i+4]
ks = 2      pu0[i]            pu0[i+1]           pu0[i+2]           pu0[i+3]           pu0[i+4]
            pu1[i]            pu1[i+1]           pu1[i+2]           pu1[i+3]           pu1[i+4]
3.6 MiDAS Codes using MDS Codes
The MiDAS construction in Section 3.5 is based on m-MDS codes [33, 34]. Such codes are guaranteed
to exist when the underlying field-sizes are very large. In particular, the field-size must increase exponentially in Teff except in some special cases [34]. In this section, we suggest an alternative construction that uses block-MDS codes instead of m-MDS codes. This construction requires a field-size that only increases as O(Teff^3). We refer to this construction as MiDAS-MDS code. Since MiDAS-MDS uses MDS
block codes, the associated decoding complexity is smaller compared to that using m-MDS codes as
discussed in Sections 2.2.1 and 2.2.2.
Proposition 3.1. For the channel C(N,B,W ) and delay T , there exists a streaming code, the MiDAS-MDS
code, of rate R that satisfies (3.2) in Theorem 3.2 with a field size that increases as O(Teff³).
We start by giving two examples of the MiDAS-MDS code in Sections 3.6.1 and 3.6.2 and then discuss
the general construction in Section 3.6.3. The key step is to replace the m-MDS codes in (3.6) and (3.8)
by two block MDS codes applied diagonally to the v[·] and u[·] packets. The MiDAS-MDS code also attains
the tradeoff in Theorem 3.2, i.e., it has the same performance as the MiDAS code in Section 3.5 over the
sliding window erasure channel C(N,B,W ). However, MiDAS-MDS codes incur some performance loss
in simulations over statistical channels, as they are less robust to non-ideal erasure patterns, as discussed in
Section 3.7.
3.6.1 Example - MiDAS-MDS (N,B, T ) = (2, 3, 4) and W ≥ T + 1 = 5
Table 3.2 illustrates a MiDAS construction using MDS as constituent codes. From (3.11), the rate of this code is

R = T / (T + B + NB/(T − N + 1)) = 4/9.

Note that this code has the same parameters as in Table 3.1
in Section 3.5. The encoding steps, stated below, are also similar except that the m-MDS codes are
replaced with block MDS codes.
Encoder
• Split each source packet s[i] into k = T = 4 symbols. The first ku = B = 3 symbols are
u[i] = (u0[i], u1[i], u2[i]), while the last kv = T −B = 1 symbol is v0[i].
• Apply a (T, T − B) = (4, 1) MDS code³ to the v symbols, generating B = 3 parity-check symbols
pv[i] = (pv0[i], pv1[i], pv2[i]). Hence, at time i the generated codeword is,

cv[i] = (v0[i], pv0[i+1], pv1[i+2], pv2[i+3])   (3.17)
and is shown using the shaded boxes in Table 3.2.
• Combine the u[·] packets with pv[·] packets and generate q[t] = pv[t] + u[t− T ].
• Apply a (T + 1, T − N + 1) = (5, 3) MDS code to the u packets generating N = 2 parity-check
symbols pu[i] = (pu0 [i], pu1 [i]). The codeword starting at time i is given by,
cu[i] = (u0[i], u1[i+ 1], u2[i+ 2], pu0 [i+ 3], pu1 [i+ 4]) (3.18)
and is marked by the unshaded boxes in Table 3.2 for convenience.
The channel packet at time i is given by,

x[i] = (u[i], v[i], q[i], pu[i]),   (3.19)

whose rate is R = (3 + 1)/(3 + 1 + 3 + 2) = 4/9, which is consistent with (3.11).
Decoder
For decoding, first assume that an erasure burst spans the interval [i, i + 2]. We first recover pv0[i +
3], pv1[i + 3], pv2[i + 3] at time t = i + 3 from the parity-check packets q0[i + 3], q1[i + 3], q2[i + 3] by
subtracting the unerased u[i − 1] symbols. We can then use the underlying MDS codes to recover
v0[i], v0[i + 1], v0[i + 2] at time t = i + 3 by considering cv[i], cv[i + 1], cv[i + 2], respectively. Once all
the erased v[t] are recovered, we recover u[i] at time t = i + 4, u[i + 1] at time t = i + 5 and u[i + 2] at
time t = i + 6.
In the case of isolated erasures, we assume a channel introducing N = 2 isolated erasures in the
interval [i, i + 4] of length T + 1 = 5. Note that the codeword cv[i] in (3.17) terminates at time t = i + 3.
Thus there are no more than N = 2 erasures on it, and the recovery of v0[i] is guaranteed at
time t = i + 3. Likewise, the codewords cu[i − 2], cu[i − 1], cu[i] in (3.18), containing u2[i], u1[i], u0[i]
respectively, terminate by time t = i + 4 and there are no more than N = 2 erasures on any of them.
Thus the recovery of uj[i] for j = 0, 1, 2 is guaranteed at time t = i + 4.
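The counting argument above can be checked exhaustively. The following sketch (not part of the thesis; indices and helper names are our own) enumerates all N = 2 erasure patterns in a window of length T + 1 = 5 and counts erasures on the diagonal codewords of the (N, B, T) = (2, 3, 4) example. Following footnote 4 of the general construction, a parity pv[t] counts as erased when packet t or packet t − T (whose u symbols it is combined with in q[t]) is erased; no erasures are assumed before the window.

```python
# Sketch: erasure counting for the (N, B, T) = (2, 3, 4) MiDAS-MDS example.
from itertools import combinations

N, B, T = 2, 3, 4
i = 100  # arbitrary window start; the channel is assumed erasure-free before it

def cv(j):
    # (4, 1) MDS codeword: v0[j] followed by parities at times j+1..j+3 (cf. (3.17))
    return [("v", j)] + [("pv", j + 1 + m) for m in range(B)]

def cu(j):
    # (5, 3) MDS codeword: u-data at times j..j+2, clear parities at j+3, j+4 (cf. (3.18))
    return [("u", j + m) for m in range(3)] + [("pu", j + 3 + m) for m in range(N)]

def n_erased(cw, E):
    # a p^v symbol at time t is masked when u[t - T] is erased (footnote 4)
    return sum(t in E or (kind == "pv" and t - T in E) for kind, t in cw)

for E in map(set, combinations(range(i, i + T + 1), N)):
    assert n_erased(cv(i), E) <= B       # (4, 1) MDS corrects up to 3 erasures
    for j in (i - 2, i - 1, i):          # codewords carrying u2[i], u1[i], u0[i]
        assert n_erased(cu(j), E) <= N   # (5, 3) MDS corrects up to 2 erasures
print("every N = 2 pattern in a window of length T + 1 = 5 is correctable")
```

Each codeword terminates within the stated deadlines (i + 3 for cv[i], i + 4 for the cu codewords), so correctability here implies recovery within the delay T = 4.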
However, splitting each source packet into k = T symbols is not enough in general. In particular,
applying a (T, T − B) MDS code to the v[·] packets requires that the v[·] packets be split into a multiple
of T − B symbols. Similarly, applying a (T + 1, T − N + 1) MDS code to the u[·] packets requires
splitting them into a multiple of T − N + 1 symbols. On the other hand, achieving the tradeoff in (3.2)
requires the ratio between the sizes of u[·] and v[·] to be B/(T − B). Thus, splitting the u[·] packets into
B(T − N + 1) symbols and the v[·] packets into (T − N + 1)(T − B) symbols fulfills all the
previous constraints. The following example illustrates this case.
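The splitting rule can be stated as a short sanity check. This sketch (our own, not from the thesis) verifies the three constraints just listed and reproduces the symbol counts used in the next example.

```python
# Sketch: the source-splitting rule ku = B(T-N+1), kv = (T-N+1)(T-B).
def split(N, B, T):
    ku = B * (T - N + 1)
    kv = (T - N + 1) * (T - B)
    assert ku % (T - N + 1) == 0   # divisible by the (T+1, T-N+1) code's data size
    assert kv % (T - B) == 0       # divisible by the (T, T-B) code's data size
    assert ku * (T - B) == kv * B  # ratio ku : kv equals B : (T - B)
    return ku, kv

print(split(2, 3, 5))   # -> (12, 8), the values used in Table 3.3
```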
Table 3.3: MiDAS-MDS code construction for (N,B) = (2, 3), a delay of T = 5 and rate R = 10/19 with block MDS constituent codes. We note that each of the parity-check symbols pvj[t] is combined with uj[t − 5] for j ∈ {0, 1, . . . , 11}, but these combinations are omitted for simplicity.

           [i]       [i+1]      [i+2]      [i+3]      [i+4]      [i+5]
ku = 12:   uj[i]     uj[i+1]    uj[i+2]    uj[i+3]    uj[i+4]    uj[i+5]     (j = 0, . . . , 11)
kv = 8:    vj[i]     vj[i+1]    vj[i+2]    vj[i+3]    vj[i+4]    vj[i+5]     (j = 0, . . . , 7)
ku = 12:   pvj[i]    pvj[i+1]   pvj[i+2]   pvj[i+3]   pvj[i+4]   pvj[i+5]    (j = 0, . . . , 11)
ks = 6:    puj[i]    puj[i+1]   puj[i+2]   puj[i+3]   puj[i+4]   puj[i+5]    (j = 0, . . . , 5)
3.6.2 Example - MiDAS-MDS (N,B, T ) = (2, 3, 5) and W ≥ T + 1 = 6
Table 3.3 illustrates a MiDAS construction using MDS as constituent codes. The rate of this code is

R = T / (T + B + NB/(T − N + 1)) = 10/19.
Encoder
The encoding steps are as follows.
• Split each source packet s[i] into k = (T − N + 1)T = 20 symbols. The first ku = (T − N + 1)B = 12
symbols are (u0[i], . . . , u11[i]), while the last kv = (T − N + 1)(T − B) = 8 are (v0[i], . . . , v7[i]).

³This can be a simple repetition code, i.e., pv0[i + 1] = pv1[i + 2] = pv2[i + 3] = v0[i].
• Apply a (T, T−B) = (5, 2) MDS code to the v symbols with an interleaving factor of T−N+1 = 4.
Hence, at time i four codewords are generated as follows,
cv0[i] = (v0[i], v4[i+ 1], pv0[i+ 2], pv4[i+ 3], pv8[i+ 4]) (3.20)
cv1[i] = (v1[i], v5[i+ 1], pv1[i+ 2], pv5[i+ 3], pv9[i+ 4]) (3.21)
cv2[i] = (v2[i], v6[i+ 1], pv2[i+ 2], pv6[i+ 3], pv10[i+ 4]) (3.22)
cv3[i] = (v3[i], v7[i+ 1], pv3[i+ 2], pv7[i+ 3], pv11[i+ 4]) (3.23)
The codeword cv0 [i] is shown using the shaded boxes in Table 3.3. According to (3.20), (3.21), (3.22)
and (3.23), (T −N + 1)B = 12 parity-check symbols are generated, namely (pv0[i], . . . , pv11[i]).
• Combine the u[·] packets with pv[·] packets and generate q[t] = pv[t] +u[t−T ]. For simplicity we
do not show these in Table 3.3.
• Apply a (T +1, T −N+1) = (6, 4) MDS code to the u packets with an interleaving factor of B = 3
generating BN = 6 parity-check symbols (pu0 [i], . . . , pu5 [i]). The resulting codewords are as follows,
cu0 [i] = (u0[i], u3[i + 1], u6[i+ 2], u9[i+ 3], pu0 [i+ 4], pu3 [i+ 5]) (3.24)
cu1 [i] = (u1[i], u4[i + 1], u7[i+ 2], u10[i+ 3], pu1 [i+ 4], pu4 [i+ 5]) (3.25)
cu2 [i] = (u2[i], u5[i + 1], u8[i+ 2], u11[i+ 3], pu2 [i+ 4], pu5 [i+ 5]) (3.26)
The codeword cu0 [i] is marked by the unshaded boxes in Table 3.3 for convenience.
The channel packet at time i is given by,

x[i] = (u[i], v[i], q[i], pu[i]),   (3.27)

whose rate is R = (12 + 8)/(12 + 8 + 12 + 6) = 10/19.
Decoder
For decoding, first assume that an erasure burst spans the interval [i, i+ 2]. The decoding steps are as
follows,
• Recover pv[t] = (pv0[t], . . . , pv11[t]) for t ∈ {i + 3, i + 4} by subtracting u[t − 5] from q[t].
• Recover v[i], v[i + 1] and v[i + 2] using the underlying (5, 2) MDS codes as follows. For j ∈{0, . . . , 3},
– cvj [i − 1] = (vj [i − 1], vj+4[i], pvj [i + 1], pvj+4[i + 2], pvj+8[i + 3]) has 3 erasures at i, i + 1 and
i+ 2. Hence, the vj+4[i] symbols are recovered by time i+ 3.
– cvj [i] = (vj [i], vj+4[i+ 1], pvj [i+ 2], pvj+4[i+ 3], pvj+8[i+ 4]) has 3 erasures at i, i+ 1 and i+ 2.
Hence, the vj [i] and vj+4[i+ 1] symbols are recovered by time i+ 4.
– cvj [i+1] = (vj [i+1], vj+4[i+2], pvj [i+3], pvj+4[i+4], pvj+8[i+5]) has 3 erasures at i+1, i+2
and i+5.⁴ Hence, the vj [i+1] and vj+4[i+2] symbols are recovered by time i+4.
– cvj [i+2] = (vj [i+ 2], vj+4[i+ 3], pvj [i+4], pvj+4[i+ 5], pvj+8[i+ 6]) has 3 erasures at i+2, i+5
and i+ 6. Hence, the vj [i+ 2] symbols are recovered by time i+ 4.
In other words, all the erased v[·] packets are recovered by time i+ 4.
• Compute the parity-check packets pv[t] for t ∈ {i+5, i+6, i+7}, as they only combine v[·] packets
that are either unerased or recovered in the previous step. These parity-check packets can be
subtracted from the corresponding q[t] packets to recover the u[t − T ] packets within a delay of T = 5.
In other words, we recover u[i] at time t = i + 5, u[i + 1] at time t = i + 6 and u[i + 2] at time
t = i + 7.
In the case of isolated erasures, we assume a channel introducing N = 2 isolated erasures in the
interval [i, i + 5] of length T + 1 = 6. Note that the codewords cvj [i] in (3.20)-(3.23) terminate at time
t = i + 4. Thus there are no more than N = 2 erasures on any of them, and the recovery of vj [i]
is guaranteed at time i + 4. Likewise, the codewords cuj [i] in (3.24)-(3.26) terminate by time t = i + 5
and there are no more than N = 2 erasures on any of them. Thus the recovery of uj [i] is guaranteed at
time t = i + 5.
3.6.3 General Code Construction
Encoder
The general construction achieving Prop. 3.1 is as follows.
• Source Splitting: We assume that each source packet s[i] ∈ Fq^k and partition the k symbols into
two groups u[i] ∈ Fq^ku and v[i] ∈ Fq^kv as follows,

s[i] = (s0[i], . . . , sk−1[i]) = (u0[i], . . . , uku−1[i], v0[i], . . . , vkv−1[i]),   (3.28)

where the first ku symbols constitute u[i] and the last kv symbols constitute v[i], and where we select

ku = (Teff − N + 1)B,   kv = (Teff − N + 1)(Teff − B).   (3.29)
⁴We note that the parity-check symbols pvj+8[i + 5] for j ∈ {0, . . . , 3} are counted as erasures since they are combined
with uj+8[i] which are erased.

• MDS Parity-Checks for v[·] packets: Construct Teff − N + 1 systematic MDS codes of parameters
(Teff, Teff − B) starting at time i whose associated codewords are,

cvj[i] = (vj[i], vj+(Teff−N+1)[i+1], vj+2(Teff−N+1)[i+2], . . . , vj+(Teff−N+1)(Teff−B−1)[i+Teff−B−1], pvj[i+Teff−B], pvj+(Teff−N+1)[i+Teff−B+1], . . . , pvj+(Teff−N+1)(B−1)[i+Teff−1]),   (3.30)

for j ∈ {0, 1, . . . , Teff − N}. Notice that each codeword cvj[i] spans the interval [i, i + Teff − 1]
and adjacent symbols have an interleaving factor of Teff − N + 1. The resulting parity-check
packets at time i are expressed as pv[i] = (pv0[i], . . . , pv(Teff−N+1)B−1[i]).
• Repetition of u[·] packets: Combine the u[·] packets with the parity-check packets pv[·] after
applying a shift of Teff, i.e., q[i] = pv[i] + u[i − Teff].
• MDS Parity-Checks for u[·] packets: Construct B systematic MDS codes of parameters (Teff +
1, Teff − N + 1) at time i whose associated codewords are,

cuj[i] = (uj[i], uj+B[i+1], uj+2B[i+2], . . . , uj+B(Teff−N)[i+Teff−N], puj[i+Teff−N+1], puj+B[i+Teff−N+2], . . . , puj+B(N−1)[i+Teff]),   (3.31)

for j ∈ {0, 1, . . . , B − 1}. Notice that each codeword cuj[i] spans the interval [i, i + Teff] and consists
of symbols with an interleaving factor of B. The resulting parity-check packets at time i are
denoted by pu[i] = (pu0[i], . . . , puBN−1[i]).
• Concatenation of Parity-Checks: Concatenate the parity-check packets pu[·] and q[·], i.e., the
channel input at time i is given by,

x[i] = (u[i], v[i], q[i], pu[i]).   (3.32)
Note that the rate of the code in (3.32) equals

R = (Teff − N + 1)Teff / [(Teff − N + 1)Teff + B(Teff + 1)] = Teff / [Teff + B(Teff + 1)/(Teff − N + 1)],   (3.33)
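As a quick numerical check, the rate expression can be evaluated in exact arithmetic against the two worked examples; this sketch (ours, not from the thesis) counts the source and parity symbols per channel packet.

```python
# Sketch: the MiDAS-MDS rate (3.33), checked against Sections 3.6.1 and 3.6.2.
from fractions import Fraction

def midas_mds_rate(N, B, T_eff):
    k = (T_eff - N + 1) * T_eff       # source symbols per packet (u and v)
    parity = B * (T_eff + 1)          # (T_eff-N+1)B q-symbols plus BN p^u-symbols
    return Fraction(k, k + parity)

print(midas_mds_rate(2, 3, 4))   # -> 4/9   (Section 3.6.1)
print(midas_mds_rate(2, 3, 5))   # -> 10/19 (Section 3.6.2)
```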
[Figure 3.4 timeline: packets at times i, . . . , i+12, with u[i], u[i+1] and u[i+3] erased; the codeword cvj[i+3] of the (5, 2) MDS code is highlighted.]

Figure 3.4: A non-ideal erasure pattern in Section 3.7. Grey and white squares denote erased and
unerased packets respectively. In the window of length 6 spanning the interval [i, i + 5], the pattern is
neither a burst of length B = 3 nor N = 2 isolated erasures and is thus not included in C(2, 3, 6).
which is identical to the expression in (3.11). We denote such a code as an (N,B, Teff) MiDAS-MDS code.
Decoder
The decoding steps are similar to those discussed in the previous examples and are provided in
Appendix B.1.
Remark 3.5. We note that splitting each source packet into (Teff − N + 1)Teff symbols requires that
each source packet consist of q1 = (Teff − N + 1)Teff symbols. One group of symbols is protected using a
(Teff, Teff − B) MDS code while the other group is protected using a (Teff + 1, Teff − N + 1) MDS code. Using
the well-known fact that an (n, k) MDS code exists over any field of size greater than or equal to n, a
field of size q2 = Θ(Teff) is sufficient for constructing both MDS codes. Hence, a field of size q = q1 · q2,
which is of the order O(Teff³), is sufficient to construct an (N,B, Teff) MiDAS-MDS code. Also, the use of
block MDS codes instead of m-MDS codes in MiDAS-MDS reduces the decoding complexity from cubic
to quadratic in Teff, as discussed in Section 2.2.
3.7 Non-Ideal Erasure Patterns
Even though the MiDAS-MDS construction in Section 3.6 attains the same tradeoff over the sliding win-
dow erasure channel C(N,B,W ), its performance is more sensitive than that of the MiDAS-mMDS
construction in Section 3.5 when erasure patterns not included in the sliding window model are consid-
ered. To illustrate this we focus on the case where N = 2, B = 3, T = 5 and W ≥ 6 in our discussion.
The MiDAS-MDS construction for these parameters is illustrated in Table 3.3. The MiDAS-mMDS code
has a similar structure except that the parity-checks pvj [·] and puj [·] are generated using an m-MDS code.
We consider an erasure pattern that introduces a burst of length 2 in the interval [i, i + 1] and an
additional isolated erasure at time i + 3, as shown in Figure 3.4. Clearly such a pattern violates the
constraints of C(N = 2, B = 3,W = 6). Nonetheless, we argue that the MiDAS-mMDS codes are able to
completely recover from this erasure pattern while the MiDAS-MDS codes in Table 3.3 cannot.
In particular note that the parity packets p[i+2] and p[i+4] contribute a total of 24 symbols which
suffice to recover v[i],v[i + 1] and v[i + 3], each of which involves 8 symbols. Thus by time i + 4 all
the erased v[·] symbols are recovered and we can proceed to recover u[i],u[i + 1] and u[i + 3] at time
i+ 5, i+ 6 and i+ 8, respectively, i.e., a delay of T = 5 packets.
In the MiDAS-MDS construction, illustrated in Table 3.3, we either use the cu[·] or the cv[·] codewords to recover u[i].
Table 3.4: Achievable (N,B) over channel C(N,B,W ≥ T + 1) for different code constructions. Similar tradeoffs for the first three codes can be achieved for W < T + 1 by replacing T with W − 1.

Code                     N                                  B
m-MDS Codes              (1 − R)(T + 1)                     (1 − R)(T + 1)
Maximally Short Codes    1                                  T · min(1/R − 1, 1)
MiDAS Codes              min(B, T − (R/(1 − R)) · B)        B ∈ [1, T ]
E-RLC Codes [1]          ((1 − R)/R)(T − ∆) + 1             ((1 − R)/R)∆,  with ∆ ∈ [R(T + 1), T ] and R ≥ 1/2
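The tabulated formulas can be evaluated directly; this sketch (ours, using exact rationals to avoid floating-point flooring artifacts) reproduces the operating points that appear later in Table 3.6(b).

```python
# Sketch: (floored) achievable values from Table 3.4 at the Table 3.6 rates.
from fractions import Fraction
from math import floor

def midas_N(R, T, B):
    # MiDAS row of Table 3.4: N = min(B, T - (R/(1-R))B), floored (footnote 5)
    return floor(min(B, T - R / (1 - R) * B))

R6 = Fraction(40, 67)   # R ≈ 0.6 (Figure 3.10)
R5 = Fraction(40, 79)   # R ≈ 0.5 (Figure 3.9)
assert floor((1 - R6) * (40 + 1)) == 16            # m-MDS: N = B = 16
assert floor(40 * min(1 / R6 - 1, 1)) == 27        # MS: B = 27
assert midas_N(R6, 40, 24) == 4                    # MiDAS: (N, B) = (4, 24)
assert midas_N(R5, 40, 31) == 8                    # MiDAS: (N, B) = (8, 31)
print("Table 3.6(b) operating points match the Table 3.4 formulas")
```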
• Using cu[·] codewords: Here, a (T + 1, T − N + 1) = (6, 4) block MDS code is applied to each of
the u[·] packets. Each of the codewords cuj [i] for j ∈ {0, 1, 2} in (3.24), (3.25), and (3.26) has 3
erasures at i, i + 1 and i + 3, and hence the recovery of uj [i] is impossible.
• Using cv[·] codewords: Also, the v[·] packets are protected using a (T, T − B) = (5, 2) MDS code.
Let us consider the codewords cvj [i+3] = (vj [i+3], vj+4[i+4], pvj [i+5], pvj+4[i+6], pvj+8[i+7]) for
j ∈ {0, 1, 2, 3} in (3.20), (3.21), (3.22) and (3.23). Each of these codewords has an erasure at time
i+3, and the parity-check packets pvj [i+5] and pvj+4[i+6] are combined with uj [i] and uj+4[i+1],
which are erased by the channel. This gives a total of 3 erasures at times i+3, i+5 and i+6, which
implies that vj [i+3] can only be recovered at time i+7. Now, the decoder can compute pvj [i+5] and
pvj+4[i+6] and subtract them from qj [i+5] and qj+4[i+6] to recover uj [i] and uj+4[i+1] with
delays of 7 and 6, respectively, i.e., exceeding the delay of T = 5.
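The erasure counts behind this failure argument can be reproduced in a few lines. This sketch (ours; i = 0 for concreteness) tallies the erasures seen by the two codeword types under the pattern of Figure 3.4.

```python
# Sketch: the Figure 3.4 pattern (burst at 0, 1 plus isolated erasure at 3)
# against the diagonal codewords of Table 3.3, with i = 0 and T = 5.
E = {0, 1, 3}
T, N, B = 5, 2, 3

# (6, 4) MDS codeword c^u_j[0]: data at times 0..3, clear parities at 4, 5.
data_erasures = sum(t in E for t in range(4))
assert data_erasures == 3        # exceeds n - k = 2: c^u alone cannot recover u_j[0]

# (5, 2) MDS codeword c^v_j[3]: data at 3, 4; parities p^v at 5, 6, 7, where
# p^v[t] is masked whenever u[t - 5] is erased (it travels inside q[t]).
erased_pos = [t for t in (3, 4) if t in E] + [t for t in (5, 6, 7) if t - T in E]
assert erased_pos == [3, 5, 6]   # exactly n - k = 3 erasures: decodable only at time 7
# Hence v_j[3], and with it u_j[0] via q[5], is recovered at time 7 > i + T = 5.
print("MiDAS-MDS misses the delay deadline on this pattern")
```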
We will also see performance loss due to using MDS block codes instead of m-MDS codes in our
simulation results in Section 3.9.
3.8 Numerical Comparisons
Table 3.4 summarizes the feasible values of N and B for different codes.⁵ For a fixed rate R and delay T
we indicate the values of N and B achieved by various codes. The first row corresponds to the m-MDS
code in Section 2.5.2, while the second row corresponds to the MS code in Section 2.6.1. The third row
corresponds to the proposed construction — MiDAS code — in Theorem 3.2. In contrast to the m-MDS
and MS codes, that only attain specific values of N and B, the family of MiDAS codes can attain a
range of (N,B) for a given R and T . The last row corresponds to another family of codes – Embedded
Random Linear Codes (E-RLC) – proposed in [1]. While such constructions achieve the upper bound
in (3.1) for R = 1/2, they are far from this bound in general and will not be discussed in this chapter.
We further numerically illustrate the achievable (N,B) pairs for various codes in Figure 3.5, where we fix
the rate to R = 0.6. As stated before, the m-MDS and MS codes in Sections 2.5.2 and 2.6.1, respectively,
only achieve the extreme points of the tradeoff. The MiDAS codes achieve a tradeoff very close to the
upper bound for all rates. The E-RLC codes, illustrated with the broken lines marked with triangles,
achieve a tradeoff that is in general far from the upper bound in (3.1) except for R = 0.5, which is not
the case in this figure.
⁵We note that the floor of the values given in Table 3.4 should be taken, as the values might not be integers.
[Figure 3.5 plot: Burst Erasure Correction (B), 16-26, on the x-axis versus Isolated Erasure Correction (N), 2-16, on the y-axis, for T = 40, R = 0.6.]

Figure 3.5: Achievable tradeoff between N and B for different code constructions. The rate is fixed to
R = 0.6 and the delay is fixed to T = 40 and W = T + 1. The uppermost curve marked with 'o' is the
upper bound in (3.1). The MiDAS codes are shown with broken lines marked with '×' and are at most
one delay unit from the upper bound. The E-RLC codes in [1] are shown with broken lines marked with
'△'.
Table 3.5: Channel and code parameters used in Figures 3.6a and 3.7a

(a) Gilbert-Elliott channel parameters
                   Figure 3.6a          Figure 3.7a
Delay T            12                   50
(α, β)             (5 × 10⁻⁴, 0.5)      (5 × 10⁻⁵, 0.2)
Channel Length     10⁷                  10⁸
Rate R             12/23                50/83

(b) Achievable N and B for different codes
                   Figure 3.6a          Figure 3.7a
Code               N       B            N       B
MiDAS Code         2       9            4       30
m-MDS              6       6            20      20
MS Codes           1       11           1       33
3.9 Simulation Results
In this section we evaluate our proposed code constructions over statistical channel models.
We consider two classes of channels that introduce both burst and isolated erasures.
3.9.1 Gilbert-Elliott Channel Experiments
In Figure 3.6a and Figure 3.7a we study the performance of various streaming codes over the Gilbert-
Elliott channel of Section 1.4.1. The channel parameters and code parameters are shown in Table 3.5.
Figures 3.6b and 3.7b show the histograms of the burst lengths observed for the two channels. We
remark that the channel parameters for the T = 12 case are the same as those used in [35, Section 4-B,
Figure 5]. For this choice of α, the contribution from failures due to small guard periods
between bursts is not dominant. When the inter-burst gaps are smaller, we believe that an extension of
MiDAS codes that controls the number of losses in such events may be necessary; this is left for future
investigation.
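The experiments in this section can be approximated with a simple channel generator. The sketch below is not the simulator used in the thesis; it is a minimal two-state Gilbert-Elliott model under the assumption (consistent with the Fritchman description later in this chapter) that packets in the bad state are lost with probability 1, while good-state packets are lost with probability ε.

```python
# Sketch: a two-state Gilbert-Elliott packet-erasure channel (Section 1.4.1).
import random

def gilbert_elliott(n, alpha, beta, eps, seed=0):
    """Return a list of n booleans, True meaning the packet is erased."""
    rng = random.Random(seed)
    bad, erasures = False, []
    for _ in range(n):
        erasures.append(bad or rng.random() < eps)
        if bad:
            bad = rng.random() >= beta   # leave the bad state w.p. beta
        else:
            bad = rng.random() < alpha   # enter the bad state w.p. alpha

    return erasures

e = gilbert_elliott(10**6, alpha=5e-4, beta=0.5, eps=1e-3)
print(f"empirical loss rate: {sum(e) / len(e):.4f}")
```

With (α, β) = (5 × 10⁻⁴, 0.5) the stationary bad-state fraction is α/(α + β) ≈ 10⁻³, so the overall loss rate is roughly the sum of that and ε.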
All codes in Figure 3.6a are selected to have a rate of R = 12/23 ≈ 0.52 and the delay is T = 12.
(a) Simulation results. All codes are evaluated using a decoding delay of T = 12 packets and a rate of R = 12/23 ≈ 0.52. [Plot: loss probability versus ε for the Uncoded, MS Code (N,B) = (1,11), m-MDS Code (N,B) = (6,6) and MiDAS Code (N,B) = (2,9) schemes.]

(b) Burst histogram at β = 0.5, which approximates a geometric distribution (shown dotted) with the same success probability.
Figure 3.6: Simulation Experiments for Gilbert-Elliott Channel Model with (α, β) = (5× 10−4, 0.5).
The channel is a Gilbert-Elliott channel with parameters (α, β) = (5 × 10⁻⁴, 0.5) and ε, the erasure
probability in the good state, varies on the x-axis in the range [10⁻³, 10⁻²]. The residual loss rate
associated with each code is plotted on the y-axis. For reference the uncoded loss rate is also shown
by the uppermost dotted line marked with triangles. The horizontal line marked with '×' is the loss
rate of the m-MDS code, which achieves B = N = 6. Its performance is limited by its burst-erasure
correction capability and is thus consistent with the probability of observing bursts longer than 6, which
is ≈ 2 × 10⁻⁵. The curve marked with circles, which deteriorates rapidly as we increase ε, is
the Maximally Short (MS) code. It achieves B = 11 and N = 1; thus in general it cannot recover from
even two losses occurring in a window of length T + 1. The remaining curve marked with squares shows
the MiDAS code, which achieves B = 9 and N = 2. Its loss probability also deteriorates with ε, but at a
(a) Simulation results. All codes are evaluated using a decoding delay of T = 50 packets and a rate of R = 50/83 ≈ 0.6. [Plot: loss probability versus ε for the Uncoded, m-MDS Code (N,B) = (20,20), MiDAS Code (N,B) = (4,30) and MS Code (N,B) = (1,33) schemes.]

(b) Burst histogram at β = 0.2, which approximates a geometric distribution (shown dotted) with the same success probability.
Figure 3.7: Simulation Experiments for Gilbert-Elliott Channel Model with (α, β) = (5× 10−5, 0.2).
much lower rate. Thus a slight decrease in B, while improving N from 1 to 2, exhibits noticeable gains
over both the MS and m-MDS codes. At the leftmost point, i.e., when ε = 10⁻³, the loss probability is
dominated by burst losses, while as ε is increased, the effect of isolated losses becomes more significant.
In Figure 3.7a the rate of all codes is set to R = 50/83 ≈ 0.6 and the delay is set to T = 50. The
channel is a Gilbert-Elliott channel with parameters (α, β) = (5× 10−5, 0.2) and ε varying on the x-axis
in the range [10−3, 10−2]. Similar to Figure 3.6a, the residual loss rate associated with each code is
plotted on the y-axis. The m-MDS code (curve marked with ’×’) achieves B = N = 20 whereas the
MS code (curve marked with circles) achieves N = 1 and B = 33. Both codes suffer from the same
phenomenon discussed in the previous case. We also consider the MiDAS code (curve marked with
squares) with N = 4 and B = 30. We observe that its performance deteriorates as ε is increased and
[Plot: loss probability versus ε for the Uncoded scheme, the MS-mMDS and MS-MDS codes with (N,B) = (1,11), and the MiDAS-mMDS and MiDAS-MDS codes with (N,B) = (2,9).]

Figure 3.8: Simulation over a Gilbert-Elliott Channel with (α, β) = (5 × 10⁻⁴, 0.5). All codes are
evaluated using a decoding delay of T = 12 packets and a rate of R = 12/23 ≈ 0.52.
Table 3.6: Channel and code parameters used in Figures 3.9 and 3.10

(a) Fritchman channel parameters
                   Figure 3.9           Figure 3.10
Channel States     9                    12
Delay T            40                   40
(α, β)             (10⁻⁵, 0.5)          (2 × 10⁻⁵, 0.75)
Channel Length     10⁸                  5 × 10⁸
Rate R             40/79 ≈ 0.5          40/67 ≈ 0.6

(b) Achievable N and B for different codes
                   Figure 3.9           Figure 3.10
Code               N       B            N       B
MiDAS Codes        8       31           4       24
m-MDS              20      20           16      16
MS Codes           1       39           1       27
eventually crosses that of the m-MDS code. We believe that further improvements can be attained using codes
that correct both burst and isolated losses. These extensions will be studied in Chapter 4.
In Figure 3.8, we compare the performance of MiDAS-MDS and MS-MDS codes to the corresponding
MiDAS-mMDS and MS-mMDS codes. We consider the same GE channel as in Figure 3.6a and a delay of
T = 12. The codes involving m-MDS codes are plotted using solid lines, whereas the codes involving
block MDS codes are shown by dotted lines with the same marker. We note that in all cases there is
a noticeable increase in the loss rate when a block MDS code is used, despite the fact that these codes
achieve the same (N,B) values over sliding window channels. This loss in performance is due to their
sensitivity to non-ideal erasure patterns, as discussed in Section 3.7.
3.9.2 Fritchman Channel Experiments
In Figure 3.9 and Figure 3.10, we evaluate streaming codes over the Fritchman channel of Section 1.4.2.
The channel parameters and code parameters are shown in Table 3.6. We let the transition probability
from the good state to the first bad state E1 be α, whereas the transition probability from each of the
bad states equals β. Let ε be the probability of a packet loss in the good state; packets are lost in any bad
state with probability 1. Figures 3.9b and 3.10b show the histograms of the burst lengths observed for
the two channels.
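A matching channel generator can be sketched from this description. The chain dynamics below (each bad state Ek advances to the next with probability β, the last one returning to the good state) are our reading of the model; it is consistent with the negative-binomial burst-length histograms in Figures 3.9b and 3.10b, since the burst length is then a sum of geometric dwell times.

```python
# Sketch: a Fritchman packet-erasure channel with one good state and
# nb_states bad states E1..E_nb (cf. Section 1.4.2 as described above).
import random

def fritchman(n, nb_states, alpha, beta, eps, seed=0):
    """Return a list of n booleans, True meaning the packet is erased."""
    rng = random.Random(seed)
    state, erasures = 0, []           # 0 = good, 1..nb_states = bad chain
    for _ in range(n):
        erasures.append(state > 0 or rng.random() < eps)
        if state == 0:
            state = 1 if rng.random() < alpha else 0
        elif rng.random() < beta:     # advance E_k -> E_{k+1}; E_nb -> good
            state = (state + 1) % (nb_states + 1)
    return erasures

e = fritchman(10**6, nb_states=8, alpha=1e-5, beta=0.5, eps=1e-3)
print(f"empirical loss rate: {sum(e) / len(e):.5f}")
```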
(a) Simulation results. All codes are evaluated using a decoding delay of T = 40 packets. [Plot: loss probability versus ε for the Uncoded, m-MDS Code (N,B) = (20,20), MS Code (N,B) = (1,39) and MiDAS Code (N,B) = (8,31) schemes.]

(b) Burst histogram at β = 0.5 in an N + 1 = 9-state Fritchman channel. The distribution follows a negative binomial distribution (shown dotted) of N = 8 failures and a success probability of 0.5.
Figure 3.9: Simulation Experiments for Fritchman Channel Model with (N , α, β) = (8, 10−5, 0.5).
In Figure 3.9 and Figure 3.10, the uncoded loss rate is shown by the uppermost plot, while the
horizontal line marked with '×' is the performance of the m-MDS code. Note that the performance of this
code is essentially independent of ε in the interval of interest. As in the case of GE channels, the m-MDS
codes recover all the losses in the good state but fail against bursts longer than their burst-erasure
correction capability. Thus, their loss rates are consistent with the probabilities of observing bursts longer
than 20 and 16, which can be calculated to be ≈ 10⁻⁵ and ≈ 3 × 10⁻⁵, respectively. The performance of
the MS codes is shown by the curve marked with circles in both figures. We note that it is better than
that of the m-MDS codes for ε = 10⁻³, but deteriorates quickly as we increase ε. The performance gains from
MiDAS codes are significantly more noticeable for the Fritchman channel because its negative-binomial
burst-length distribution favors longer bursts over shorter ones. As in the case of GE channels, we
(a) Simulation results. All codes are evaluated using a decoding delay of T = 40 packets. [Plot: loss probability versus ε for the Uncoded, m-MDS Code (N,B) = (16,16), MS Code (N,B) = (1,27) and MiDAS Code (N,B) = (4,24) schemes.]

(b) Burst histogram at β = 0.75 in an N + 1 = 12-state Fritchman channel. The distribution follows a negative binomial distribution (shown dotted) of N = 11 failures and a success probability of 0.75.
Figure 3.10: Simulation Experiments for Fritchman Channel Model with (N , α, β) = (11, 2×10−5, 0.75).
expect further performance gains to be possible by considering more sophisticated erasure patterns,
such as burst plus isolated losses which will be considered in Chapter 4.
3.10 Conclusion
In this chapter, we propose a class of streaming codes, MiDAS codes, capable of recovering from either
a single erasure burst of maximum length B or up to N isolated erasures within a delay of T packets.
A layering approach is used to construct such codes. In particular, an MS code capable of recovering
a single burst of maximum length B within a delay of T is first constructed. We then add a layer of
parity-check packets generated using an m-MDS code to be able to recover from a pattern of at most N
isolated erasures within the same delay. The number of isolated erasures that can be corrected, N , is
proportional to the number of parity-check packets added in the last layer.
To determine the optimality of these codes, we consider a sliding window erasure channel model
which introduces either a burst of maximum length B or no more than N isolated erasures in any
window of length W packets. We show that there exists a fundamental tradeoff between the burst-
erasure correction and isolated-erasure correction capability of any streaming code. The MiDAS codes
are shown to achieve a tradeoff that is at most one unit of delay from the upper bound.
We then propose an alternative construction, MiDAS-MDS codes, which replaces the two m-MDS
codes used in MiDAS with two diagonally interleaved MDS block codes. Both constructions achieve the
same tradeoff between B and N. Moreover, this replacement reduces both the field size of the overall
code and the complexity associated with the decoder.
While our constructions are based on sliding window channel models, numerical simulations indicate
that the proposed constructions outperform baseline schemes over statistical models. However, for a class
of channel parameters, MiDAS codes suffer from the effect of having both burst and isolated erasures in
the same window. Further improvements can be attained by considering streaming codes that correct
both burst and isolated losses in the window of interest. This will be discussed in Chapter 4.
Chapter 4
Partial Recovery Codes (PRC)
4.1 Introduction
Although MiDAS codes, introduced in Chapter 3, can recover from both burst and isolated erasures, we observe a
deterioration in their performance over Gilbert-Elliott and Fritchman channel models when long delays
are considered, as discussed in Section 3.9. Our investigation shows that the main reason
behind this deterioration is the failure of MiDAS codes to recover from patterns in which a burst is followed
by isolated erasures, and vice versa. Such erasure patterns appear during the transitions between bad
and good states in any statistical model.
This motivates refining the sliding window channel in Definition 3.1 to capture such erasure patterns. In particular, in this chapter we consider channels that introduce both burst and isolated erasures in the window of interest. It turns out that perfect recovery over such channels requires a significant overhead. Hence, we propose a class of codes, Partial Recovery Codes (PRC), that recover most of the erased source packets over such channels. We observe in our simulations that when the delay is relatively long, such patterns are dominant and the PRC construction indeed outperforms MiDAS as well as other baseline codes.
4.2 System Model
We consider a class of packet erasure channels where the erasure patterns are locally constrained.
Definition 4.1 (Burst and Isolated Erasures Channel - CII(N,B,W )). In any sliding window of length
W , the channel CII(N,B,W ) can introduce only one of the following patterns:
• A single erasure burst of maximum length B plus a single isolated erasure where B + 1 < W or,
• A maximum of N ≤ B + 1 erasures in arbitrary locations.
The channel in Definition 4.1 can be easily extended to multiple isolated erasures. However, we will
only focus on the case of single isolated erasure as it captures the dominant erasure events associated
with GE and Fritchman channels as discussed in Section 4.5. Figure 4.1 provides an example of channel
CII(2, 3, 5).
Figure 4.1: An example of channel II in Definition 4.1: In any sliding window of length W = 5 there is either a single erasure burst of length up to B = 3 and possibly one isolated erasure, or N = 2 isolated erasures. This channel is denoted by CII(2, 3, 5). The shaded packets are erased while the remaining ones are perfectly received by the destination.
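The admissibility condition in Definition 4.1 can be checked mechanically. The following Python sketch (the helper names are ours, not from the thesis) tests whether a given erasure pattern is admissible for CII(N, B, W) by sliding a window of length W and classifying the erasures in each window:

```python
def runs(positions):
    """Lengths of maximal consecutive runs among sorted erasure positions."""
    out, start, prev = [], None, None
    for p in positions:
        if start is None:
            start = prev = p
        elif p == prev + 1:
            prev = p
        else:
            out.append(prev - start + 1)
            start = prev = p
    if start is not None:
        out.append(prev - start + 1)
    return out

def admissible(window, N, B):
    """One window of Definition 4.1: either <= N arbitrary erasures, or a
    single burst of length <= B plus at most one isolated erasure."""
    if len(window) <= N:
        return True
    r = sorted(runs(window))
    if len(r) == 1:
        return r[0] <= B
    return len(r) == 2 and r[0] == 1 and r[1] <= B

def channel_ok(erased, N, B, W, horizon):
    """Check every length-W sliding window over [0, horizon)."""
    s = set(erased)
    return all(admissible(sorted(p for p in s if t <= p < t + W), N, B)
               for t in range(horizon - W + 1))
```

For instance, a burst at positions 0-2 together with an isolated erasure at position 4 is admissible for CII(2, 3, 5), while three isolated erasures at positions 0, 2, 4 are not, since they exceed N = 2 and form no burst.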
For Channel CII, it turns out that codes that achieve perfect recovery require a significant overhead,
particularly when T is close to B. Therefore we consider partial recovery codes as discussed next.
Clearly the erasure patterns associated with channel CII(N,B,W ) include those associated with
C(N,B,W ) in Definition 3.1. In Chapter 3, we first construct a MS code for the burst erasure channel
C(N = 1, B,W ) without considering the value of N . Thereafter, we concatenate an additional layer of
parity-checks to recover from the isolated erasure patterns. Inspired by this result, for the channel CII, we construct a partial recovery code for the burst plus one isolated erasure channel CII(N = 1, B, W).
A similar layering principle can be applied to treat isolated losses.
4.3 Partial Recovery Codes (PRC)
Throughout this chapter, we let W ≥ 2T +B since we are interested in an interval of length T before or
after the erasure burst. In each such window, the channel introduces at most one burst and one isolated
erasure. We refer to this isolated erasure as the associated erasure, defined as follows.
Definition 4.2 (Associated Isolated Erasure). An isolated erasure is defined to be associated with the
erasure burst, if it occurs within the T packets before or after this burst.
Note that every erasure burst has at most one associated isolated erasure (cf. Figure 4.2). Conversely
every isolated erasure can be associated with no more than one erasure burst.
Definition 4.3 (Partial Recovery Code (PRC)). A Partial Recovery Code (PRC) for CII(1, B, W ≥ 2T + B) recovers all but one source packet with a delay of T in each pattern consisting of an erasure burst and its associated isolated erasure.
Theorem 4.1 (Partial Recovery Codes (PRC)). There exists a partial recovery code for CII(1, B, W ≥ 2T + B) of rate

R = max_{B<∆≤T} [∆(T − ∆) + (B + 1)] / [∆(T − ∆) + (B + 1)(T − ∆ + 2)],   (4.1)

that satisfies Definition 4.3.
The proof of Theorem 4.1 proceeds in two steps. First, we construct a PRC code whose rate meets (4.1); then, we illustrate the decoding steps for all erasure patterns. Such PRC codes are designed to deal with patterns of one burst and one isolated erasure. To extend these codes to be robust when the channel introduces N isolated erasures, we add an extra layer of parity-checks and derive the corresponding rate in Section 4.3.3.
Figure 4.2: An isolated erasure associated with a burst in channel CII(N, B, W = 2T + B). Grey and white squares denote erased and unerased packets respectively.
4.3.1 Code Construction
Encoder
The main steps in our proposed construction of a partial recovery code for CII(1, B, 2T + B) are as follows. Let u, v, s and ∆ be integers that will be specified in the sequel. We let s[i] ∈ F_q^{u+v}.

1. Source Splitting: As in (2.29) we split s[i] into two groups u[i] ∈ F_q^u and v[i] ∈ F_q^v.
2. Construction of C12: We apply a rate R12 = v/(v + u + s) m-MDS (v + u + s, v, T) code C12 : (v[i], p[i]) to the v[·] packets to generate parity-check packets p[·] ∈ F_q^{u+s},

p[i] = ( Σ_{l=0}^{T} v†[i − l] · Hl )†,   (4.2)

where H0, . . . , HT ∈ F_q^{v×(u+s)} are the matrices associated with the m-MDS code (2.7).
3. Parity-Check Splitting: We split each p[i] into two groups p1[i] ∈ F_q^u and p2[i] ∈ F_q^s by assigning the first u symbols in p[i] to p1[i] and the remaining s symbols of p[i] to p2[i]. We can express

pk[i] = ( Σ_{l=0}^{T} v†[i − l] · Hl^k )†,   k = 1, 2,   (4.3)

where the matrices Hl^k are defined via Hl = [Hl^1 | Hl^2]. It can be shown that both Hl^1 and Hl^2 satisfy the m-MDS property [34, Theorem 2.4] and therefore the codes C1 : (v[i], p1[i]) and C2 : (v[i], p2[i]) are both m-MDS codes. The proof of this property uses the fact that a subset of a set of linearly independent columns is linearly independent; it is rather straightforward and is thus omitted.
4. Repetition Code: We combine a shifted copy of u[·] with the p1[·] parity-checks to generate
q[i] = p1[i]+u[i−∆]. Here ∆ ∈ {B+1, . . . , T } denotes the shift applied to the u[·] stream before
embedding it onto the p1[·] stream.
5. Channel Packet: We concatenate the generated layers of parity-check packets to the source packet to construct the channel packet,

x[i] = (s†[i], q†[i], p2†[i])†.   (4.4)
The rate of the code in (4.4) is clearly R = (u + v)/(2u + v + s). We further select the codes C12 and C2 to have the following rates:

R12 = v/(v + u + s) = (∆ − B − 1)/∆,   (4.5)

R2 = v/(v + s) = (T − ∆ + 1)/(T − ∆ + 2).   (4.6)
The rate R12 is chosen such that the corresponding code C12 is capable of correcting a burst of length B + 1 within a delay of ∆ − 1 packets. Hence, (4.5) satisfies (1 − R12)∆ = B + 1. Similarly, the rate R2 is chosen such that the C2 code is capable of recovering a single erasure within a delay of T − ∆ + 1. Thus, (4.6) satisfies (1 − R2)(T − ∆ + 2) = 1. We will show that these two properties are sufficient for partial recovery on channel CII. We further use the following values of u, v and s that satisfy (4.5) and (4.6),
u = (B + 1)(T −∆+ 1)− (∆−B − 1)
v = (T −∆+ 1)(∆−B − 1) (4.7)
s = ∆−B − 1,
and the corresponding rate of the PRC code is

R = max_{B<∆≤T} [∆(T − ∆) + (B + 1)] / [∆(T − ∆) + (B + 1)(T − ∆ + 2)],   (4.8)
which meets the rate given in Theorem 4.1. This completes the details of the encoder.
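As a sanity check (ours, not part of the thesis), the parameter choices in (4.7) can be verified to satisfy the constituent-code rate constraints (4.5) and (4.6) exactly:

```python
from fractions import Fraction

def prc_params(T, B, d):
    """Parameter choices of (4.7) for a shift Delta = d."""
    u = (B + 1) * (T - d + 1) - (d - B - 1)
    v = (T - d + 1) * (d - B - 1)
    s = d - B - 1
    return u, v, s

# The choices satisfy the constituent-code rate constraints (4.5) and (4.6):
for T, B, d in [(7, 3, 6), (12, 4, 9), (20, 10, 16)]:
    u, v, s = prc_params(T, B, d)
    assert Fraction(v, v + u + s) == Fraction(d - B - 1, d)        # R12, (4.5)
    assert Fraction(v, v + s) == Fraction(T - d + 1, T - d + 2)    # R2,  (4.6)
```

The first triple (T, B, ∆) = (7, 3, 6) is the example used in Section 4.3.2.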
Remark 4.1. If we assume that the source alphabet is sufficiently large such that the integer constraints can be ignored, then the optimal shift is given by

∆⋆ = argmax_∆ R(∆) = T + 1 − √(T − B),   (4.9)

and the corresponding rate is given by

R⋆ = [(T + 2)√(T − B) − 2(T − B)] / [(T + B + 3)√(T − B) − 2(T − B)],   (4.10)

which is achieved using

u = u⋆ = (B + 2)√(T − B) − (T − B)
v = v⋆ = (T − B)(√(T − B) − 1)   (4.11)
s = s⋆ = (T − B) − √(T − B).
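The closed form in Remark 4.1 can be cross-checked by brute force over the shift ∆ in (4.1); the snippet below (our sketch) confirms it for the parameters T = 7, B = 3 of the example in Section 4.3.2, where T − B is a perfect square:

```python
from fractions import Fraction
from math import isqrt

def prc_rate(T, B, d):
    """Rate (4.1) of the PRC construction for a fixed shift Delta = d."""
    a = d * (T - d)
    return Fraction(a + B + 1, a + (B + 1) * (T - d + 2))

def best_shift(T, B):
    """Brute-force the maximizing shift over B < Delta <= T."""
    return max(range(B + 1, T + 1), key=lambda d: prc_rate(T, B, d))

# T - B = 4 is a perfect square, so the closed form (4.9) is exact here:
T, B = 7, 3
assert best_shift(T, B) == T + 1 - isqrt(T - B)       # Delta* = 6
assert prc_rate(T, B, best_shift(T, B)) == Fraction(5, 9)
```

When T − B is not a perfect square, the brute-force search over integer ∆ gives the achievable rate directly, while (4.9)-(4.10) remain an upper approximation.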
Decoder
In the decoding analysis, we will show that these parameters guarantee that the above construction is a
PRC code for CII(1, B,W ≥ 2T +B). The decoding steps are provided in Appendix C.1.
Table 4.1: A Partial Recovery Code with B = 3 achieving a rate of 5/9 for a delay of T = 7.

        [0]    [1]    [2]    [3]    [4]    [5]    [6]    [7]    [8]    [9]
u = 6   u[0]   u[1]   u[2]   u[3]   u[4]   u[5]   u[6]   u[7]   u[8]   u[9]
v = 4   v[0]   v[1]   v[2]   v[3]   v[4]   v[5]   v[6]   v[7]   v[8]   v[9]
u = 6   p1[0]  p1[1]  p1[2]  p1[3]  p1[4]  p1[5]  p1[6]  p1[7]  p1[8]  p1[9]
       +u[−6] +u[−5] +u[−4] +u[−3] +u[−2] +u[−1] +u[0]  +u[1]  +u[2]  +u[3]
s = 2   p2[0]  p2[1]  p2[2]  p2[3]  p2[4]  p2[5]  p2[6]  p2[7]  p2[8]  p2[9]
4.3.2 Example
We illustrate a PRC construction for CII(1, B = 3,W = 17) with T = 7 and rate R = 5/9. From (4.11),
the values of the code parameters are u = 6, v = 4, s = 2 and ∆ = 6.
Encoder
1. Assume that each source packet s[i] ∈ F_q^10 consists of ten symbols and split it into u[i] and v[i], consisting of u = 6 and v = 4 symbols respectively.

2. Let s = 2 and apply a (u + v + s, v) = (12, 4) m-MDS code C12 to the v[i] packets to generate parity-checks p[i] ∈ F_q^8. Note that the rate of C12 equals R12 = 1/3.
3. We split each p[i] into p1[i] and p2[i] consisting of u = 6 and s = 2 symbols as shown in Table 4.1.
Note that the codes C1 : (v[i],p1[i]) and C2 : (v[i],p2[i]) are both m-MDS codes of rates R1 = 2/5
and R2 = 2/3 respectively.
4. We generate q[i] = p1[i] + u[i − ∆] and let the transmitted packet be

x[i] = (u[i]†, v[i]†, q[i]†, p2[i]†)†,

which corresponds to one column in Table 4.1.
The overall rate equals R = 5/9.
Decoder
We start by considering the case when the burst erasure precedes the isolated erasure. Without loss of
generality suppose that the burst erasure spans the interval [0, 2] and the isolated erasure occurs at time
t > 2.
• t ∈ [3, 4]: We use code C12 in the interval [0, ∆ − 1] = [0, 5] to recover the erased packets v[0], v[1], v[2], v[t] by time τ = 5. Note that in this interval any interfering u[·] packets are not erased and can be cancelled out from q[·]. Using τ = ∆ − 1 we have

(1 − R12)(τ + 1) = (1 − 1/3) · 6 = 4.   (4.12)

Thus by using Lemma C.1 in Appendix C.1, the recovery of v[0], v[1], v[2], v[t] follows.
Thus by time τ = 5, all the erased v[·] packets have been recovered. The packets u[0], u[1], u[2] and u[t] are each recovered at times 6, 7, 8 and t + ∆ respectively by cancelling p1[·] from the
corresponding parity-checks in Table 4.1. Thus all the erased packets get recovered by the decoder
for these erasure patterns.
• t ≥ 5: We use code C12 in the interval [0, 4] to recover the erased packets v[0], v[1], v[2] at time τ = 4. This follows by applying property P2 in Corollary 2.1 since

(1 − R12)(τ + 1) = (1 − 1/3) · 5 > 3.   (4.13)
Once the erasure burst has been recovered, the erased packet v[t] can be recovered at time τ = t + T − ∆ + 1 = t + 2 using the parity-checks p2[·] associated with C2 in the interval [t, t + 2]. This again follows from Corollary 2.1 since

(1 − R2)(T − ∆ + 2) = (1 − 2/3) · 3 = 1,   (4.14)

which suffices to recover the missing v[t].
For the recovery of u[t], note that by time t + ∆ all the erased v[·] packets have been recovered and hence the associated parity-check p1[t + ∆] can be cancelled from q[t + ∆] to recover u[t]. For the recovery of u[0], u[1], u[2] we need to use q[6], q[7], q[8] respectively. If t = 5, then we recover v[5] at time τ = 7, then compute p1[6], . . . , p1[8] and in turn recover all the missing u[·] packets. If t ∈ [6, 8], then we will not be able to recover the associated u[t − 6], but the remaining two packets can be recovered. Thus we will have one non-recovered packet in this case. If t > 8 then clearly all three u[·] packets are recovered and complete recovery is achieved.
For the case when the isolated erasure occurs at time 0 and the burst follows it spanning the interval
[t, t+ 2], the decoder proceeds as follows.
• t = 1: We use code C12 in the interval [0, 5] to recover v[0], . . . , v[3] at time τ = 5. The decoder then computes the parity-checks p1[·] in the interval [6, 9] and subtracts them from q[·] to recover u[0], . . . , u[3]. All the erased packets are recovered for this erasure pattern.
• t ≥ 2: We recover v[0] at time 1 using the code C12 before the start of the erasure burst. In particular, taking τ = 1, note that

(1 − R12)(τ + 1) = (1 − 1/3) · 2 > 1,   (4.15)

which by Corollary 2.1 suffices to recover v[0] at time τ = 1.
We next show how v[t], v[t + 1], v[t + 2] can be recovered using the code C12 in the window [t, t + 5]. If t ∈ {2, 3}, then in addition to v[t], v[t + 1], v[t + 2] we should also account for the erasure at time 6 due to the repetition of u[0]. Thus there are a total of 4 erasures in the window [t, t + 5] in the form of 2 bursts. The recovery from such a pattern can be shown by invoking Lemma C.1. For t ∈ [4, 6] the erasure burst overlaps with the repetition of u[0] and thus there are only three erasures, and the recovery follows by the application of Corollary 2.1. For t > 6 the packet u[0] is recovered at time 6 and then the erasure burst in [t, t + 2] can be recovered separately.
For general values of B and T, similar decoding steps are used; they are provided in Appendix C.1.
4.3.3 Robust PRC Codes
While the PRC constructed in (4.4) is designed for the special case of CII(1, B, W ≥ 2T + B), we now extend this construction to deal with up to N isolated erasures, i.e., CII(N, B, W ≥ 2T + B). We refer to this extension as robust PRC codes. These codes ensure partial recovery (cf. Definition 4.3) when the pattern is a burst plus one isolated erasure, but perfect recovery when the pattern consists of N isolated erasures.
Similar to MiDAS codes, we start with the PRC code for burst plus one isolated erasure constructed in (4.4), of rate R = (u + v)/(2u + v + s), and choose u, v and s according to (4.7). We then add a layer of parity-checks to protect the u[·] packets. In particular, we apply a (u + r, u, T) m-MDS code to the u[·] packets, generating r parity-check symbols w[i] = (w0[i], . . . , wr−1[i])†. We then concatenate the generated parity-checks to the channel packet in (4.4), i.e., the channel packet of the robust version is given by

x[i] = (s†[i], q†[i], p2†[i], w†[i])†.   (4.16)
We start by assuming a channel that introduces N isolated erasures in the window [0, 2T + B − 1] of length 2T + B. It suffices to show that s[0] = (u[0]†, v[0]†)† is recovered by time T. We note that u[0] and v[0] are recovered separately. For v[0], we consider the interval [0, ∆ − 1]. We note that the interfering u[·] packets in this interval are not erased and can be cancelled out from q[·] to recover the underlying parity-checks p1[·]. Hence, we use property P1 in Corollary 2.1 to recover v[0] by time ∆ − 1 since

(1 − R12)∆ = B + 1 ≥ N.   (4.17)

For u[0] to be recovered using the (u[i], w[i]) m-MDS code of rate u/(u + r) in the interval [0, T], the following needs to be satisfied:

(1 − u/(u + r))(T + 1) ≥ N  ⇒  r ≥ Nu/(T − N + 1).   (4.18)
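Condition (4.18) translates directly into the minimal number of extra parity-check symbols r per packet; a small helper (our sketch, with hypothetical names) makes this explicit:

```python
from math import ceil

def robust_prc_overhead(u, N, T):
    """Smallest integer r with (1 - u/(u+r))(T+1) >= N, i.e. eq. (4.18):
    r >= N*u / (T - N + 1)."""
    assert 0 < N <= T
    return ceil(N * u / (T - N + 1))

# Example with the Section 4.3.2 parameters u = 6, T = 7 and N = 2:
assert robust_prc_overhead(6, 2, 7) == 2      # (1 - 6/8)(7+1) = 2 >= N = 2
```

As (4.18) suggests, the overhead grows roughly linearly in N for fixed u and T.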
4.4 PRC using MDS codes
As with the case of MiDAS codes, we propose a construction which replaces m-MDS codes in the PRC
construction with diagonally interleaved MDS block codes.
Theorem 4.2 (PRC using MDS codes). There exists a partial recovery code for CII(1, B, W ≥ 2T + B) of rate

R = max_{B<∆≤T} (T − ∆ + 1)∆ / [(T − ∆ + 1)(∆ + B + 1) + (∆ − B − 1)],   (4.19)

that satisfies Definition 4.3 and with a field-size that increases as O(T³).
To establish Theorem 4.2, we provide the encoding and decoding steps of a code construction which
achieves the rate in (4.19) in Section 4.4.1. We denote this construction by PRC-MDS1 code. We also
1We refer to PRC codes in Section 4.3 as PRC-mMDS codes whenever we compare it to PRC-MDS codes.
note that since PRC-MDS uses MDS block codes instead of m-MDS codes, the associated decoding
complexity is significantly reduced as discussed in Section 2.2.
4.4.1 Code Construction
Encoder
The encoding steps are as follows:
• Split each source packet s[i] into (T − ∆ + 1)∆ symbols, s[i] = (s0[i], . . . , s(T−∆+1)∆−1[i]).
• Divide them into two groups,

s[i] = (u[i], v[i]),   (4.20)

where

u[i] = (u_0[i], . . . , u_{(T−∆+1)(B+1)−1}[i]) = (s_0[i], . . . , s_{(T−∆+1)(B+1)−1}[i]),   (4.21)
v[i] = (v_0[i], . . . , v_{(T−∆+1)(∆−B−1)−1}[i]) = (s_{(T−∆+1)(B+1)}[i], . . . , s_{(T−∆+1)∆−1}[i]).   (4.22)
• Apply a systematic (∆, ∆ − B − 1) MDS code to the v[·] packets with an interleaving factor of T − ∆ + 1, generating (T − ∆ + 1)(B + 1) parity-check packets, pI[·] = (pI_0[i], . . . , pI_{(T−∆+1)(B+1)−1}[i]). The resulting codeword of such an MDS code starting at v_j[i] is given by

cI_j[i] = ( v_j[i], v_{j+(T−∆+1)}[i + 1], v_{j+2(T−∆+1)}[i + 2], . . . , v_{j+(T−∆+1)(∆−B−2)}[i + ∆ − B − 2], pI_j[i + ∆ − B − 1], pI_{j+(T−∆+1)}[i + ∆ − B], . . . , pI_{j+(T−∆+1)B}[i + ∆ − 1] )†,   (4.23)

for j ∈ {0, 1, . . . , T − ∆}.
• Combine the u[·] packets with the parity-check packets pI[·] after applying a shift of ∆ to the former, i.e., q[i] = pI[i] + u[i − ∆].
• Apply a systematic (T − ∆ + 2, T − ∆ + 1) MDS code to the v[·] packets with an interleaving factor of ∆ − B − 1, generating ∆ − B − 1 parity-check packets, pII[·] = (pII_0[i], . . . , pII_{∆−B−2}[i]), i.e., the resulting codeword would be

cII_j[i] = ( v_j[i], v_{j+(∆−B−1)}[i + 1], v_{j+2(∆−B−1)}[i + 2], . . . , v_{j+(∆−B−1)(T−∆)}[i + T − ∆], pII_j[i + T − ∆ + 1] )†,   (4.24)

for j ∈ {0, 1, . . . , ∆ − B − 2}.
• Concatenate the parity-check packets pII[·] to the previously generated parity-check packets q[·], i.e., the channel packet is given by

x[i] = (s[i], q[i], pII[i]).   (4.25)

One can see that the rate of the constructed code in (4.25) is given by

R = (T − ∆ + 1)∆ / [(T − ∆ + 1)(∆ + B + 1) + (∆ − B − 1)],   (4.26)
which matches that in Theorem 4.2.
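The diagonal index pattern of (4.23) can be made concrete with a short enumeration (our sketch; symbols are listed as (stream, symbol-index, time) triples, names are ours):

```python
def diagonal_codeword(j, i, T, B, Delta):
    """Symbols of the diagonal codeword c^I_j[i] of (4.23), listed as
    (stream, symbol-index, time) triples; 'v' entries are source symbols
    and 'p' entries are parity-check symbols."""
    L = T - Delta + 1                                  # interleaving factor
    word = [('v', j + t * L, i + t) for t in range(Delta - B - 1)]
    word += [('p', j + t * L, i + Delta - B - 1 + t) for t in range(B + 1)]
    return word

# Example with T = 7, B = 3, Delta = 6: a (6, 2) MDS codeword occupying
# exactly one symbol per time instant over Delta consecutive packets.
w = diagonal_codeword(0, 0, 7, 3, 6)
assert len(w) == 6                                     # Delta symbols
assert [t for (_, _, t) in w] == list(range(6))        # one symbol per time
```

Since each codeword touches a channel packet in at most one symbol, a burst of B + 1 consecutive packet erasures erases at most B + 1 symbols per codeword, which a (∆, ∆ − B − 1) MDS code can correct.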
Similar to the case of PRC-mMDS codes, the optimal value of ∆ (ignoring integer effects²) for maximizing the rate R can be shown to be

∆⋆ = [T(B + 1) − √(T(B + 1)(T − B))] / B.   (4.27)
Unlike the case of MiDAS-MDS codes, PRC-MDS codes have a slightly lower rate than PRC-mMDS codes for a given B and T. Figure 4.3 shows the achieved rates of both codes for B = 10, 20 and 30 and different values of T. The gap between the achieved rates of the two codes increases as the difference T − B increases.
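The rate gap between the two constructions can be reproduced numerically (our sketch, brute-forcing the shift ∆ in (4.1) and (4.19)):

```python
from fractions import Fraction

def prc_mmds_rate(T, B, d):
    """Eq. (4.1) for a fixed shift Delta = d."""
    a = d * (T - d)
    return Fraction(a + B + 1, a + (B + 1) * (T - d + 2))

def prc_mds_rate(T, B, d):
    """Eq. (4.19) for a fixed shift Delta = d."""
    a = T - d + 1
    return Fraction(a * d, a * (d + B + 1) + (d - B - 1))

def best(rate, T, B):
    """Best achievable rate over the admissible shifts B < Delta <= T."""
    return max(rate(T, B, d) for d in range(B + 1, T + 1))

# For T = 7, B = 3 the optimizing shift is Delta = 6 in both cases:
assert best(prc_mmds_rate, 7, 3) == Fraction(5, 9)
assert best(prc_mds_rate, 7, 3) == Fraction(6, 11)    # slightly lower
```

Sweeping T for fixed B with this helper reproduces the qualitative behaviour of Figure 4.3: both rates approach 1 as T grows, with PRC-mMDS staying slightly above PRC-MDS.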
Remark 4.2. We note that both PRC-mMDS and PRC-MDS codes designed for CII(N = 1, B, W ≥ 2T + B) can perfectly recover all erased source packets with a maximum delay of T if transmitted over the channel C(N = 1, B + 1, W ≥ T + 1). In particular, if a burst of maximum length B + 1 is introduced by the channel, both PRC constructions are capable of recovering all source packets with a maximum delay of T.
Decoder
We assume the channel introduces an erasure burst of length B in the interval [i, i + B − 1] together
with an isolated erasure happening at time ti. We consider the following cases:
• ti ∈ [i − T, i − (T − ∆) − 1]: In this case, v[ti] can be recovered using the parity-check packets pII[·] of the (T − ∆ + 2, T − ∆ + 1) MDS code by time ti + (T − ∆ + 2) − 1 ≤ i. This is because the codewords cII_j[r] for r ∈ {ti − T + ∆, . . . , ti} and j ∈ {0, 1, . . . , ∆ − B − 2} have no more than one
²In practice, ⌈·⌉ or ⌊·⌋ can be used to find ∆⋆ in (4.9) and (4.27).
Figure 4.3: Achievable rates of PRC-mMDS and PRC-MDS codes designed for channel CII(N = 1, B, W ≥ 2T + B), plotted against the delay T for B = 10, 20 and 30.
erasure. Now, the packets v[i], . . . , v[i + B − 1] can be recovered using the parity-check packets pI[·] of the (∆, ∆ − B − 1) MDS code by time i + ∆ − 1. Thus the packets u[i], . . . , u[i + B − 1] can be recovered at times i + ∆, . . . , i + ∆ + B − 1 sequentially by subtracting the corresponding pI[·] parity-check packets. We note that if ti + ∆ ∉ [i, i + B − 1], one can recover u[ti]; otherwise it is lost.
• ti ∈ [i − (T − ∆), i − 1]: Here, the v[·] packets of the isolated erasure together with those of the erasure burst are recovered using the (∆, ∆ − B − 1) MDS code. In particular, v[ti] is recovered by time ti + ∆ − 1 using the associated MDS code, while v[i], . . . , v[i + B − 1] are all guaranteed to be recovered by time i + ∆ − 1 as explained in the MiDAS decoder. Now, the u[·] packets can be recovered sequentially at times i + ∆, . . . , i + ∆ + B − 1.
• ti ∈ [i + B, i + ∆ − 1]: The v[·] packets of the erasure burst can be recovered using the (∆, ∆ − B − 1) MDS code at time i + ∆ − 1, since there are no more than B + 1 erasures in each of the codewords cI_j[r] for r ∈ {i − ∆ + B + 2, . . . , i + B − 1} and j ∈ {0, 1, . . . , T − ∆}. The v[ti] packet can now be recovered using the (T − ∆ + 2, T − ∆ + 1) code by time ti + T − ∆ + 1 ≤ i + ∆ − 1 + T − ∆ + 1 = i + T, since all v[·] packets are recovered except for one. Now, the decoder can go back and compute the parity-check packets pI[·] in the interval [i + ∆, i + ∆ + B − 1] and subtract them to compute u[i], . . . , u[i + B − 1]. Also, u[ti] can be recovered later at time ti + ∆.
• ti ∈ [i + ∆, i + ∆ + B − 1]: As in the previous case, the v[·] packets of the erasure burst are recovered by time i + ∆ − 1. Now the parity-check packets pI[·] in the interval [i + ∆, ti − 1] can be computed and subtracted to recover u[i], . . . , u[ti − ∆ − 1]. The v[ti] packet can be recovered using the (T − ∆ + 2, T − ∆ + 1) code by time ti + (T − ∆) + 1. At this time the decoder goes back to compute the parity-check packets pI[·] in the interval [ti + 1, i + ∆ + B − 1] and subtracts them to recover u[ti − ∆ + 1], . . . , u[i + B − 1]. Also, u[ti] can be recovered later at time ti + ∆. We note that u[ti − ∆] cannot be recovered since the isolated erasure at time ti erases its repeated version.
(a) Loss probability versus ε; all codes are evaluated using a decoding delay of T = 50 packets and a rate of R = 50/99 ≈ 0.51.

(b) Burst histogram at β = 0.1, which approximates a geometric distribution (shown dotted) with the same success probability.
Figure 4.4: Simulation Experiments for Gilbert-Elliott Channel Model with (α, β) = (10−5, 0.1).
Remark 4.3. Computing a sufficient field-size for constructing PRC-MDS codes is similar to that of MiDAS-MDS codes. The source vector must consist of q1 = (T − ∆ + 1)∆ symbols. Also, the (∆, ∆ − B − 1) and (T − ∆ + 2, T − ∆ + 1) MDS codes can be constructed if each symbol belongs to a field of size q2 = max(∆, T − ∆ + 2). Thus, for a given burst length B and delay T, a field size of q = q1 · q2, i.e., O(T³), is sufficient to construct the corresponding PRC-MDS code. Since PRC-MDS codes replace the m-MDS codes in PRC-mMDS with block MDS codes, the associated decoding complexity is significantly reduced as discussed in Section 2.2.
(a) Loss probability versus ε; all codes are evaluated using a decoding delay of T = 40 packets and a rate of R = 40/67 ≈ 0.6.

(b) Burst histogram at β = 0.5 in an (N + 1) = 9-state Fritchman channel. The distribution follows a negative binomial distribution (shown dotted) with N = 8 failures and a success probability of 0.5.
Figure 4.5: Simulation Experiments for Fritchman Channel Model with (N +1, α, β) = (9, 2×10−5, 0.5).
4.5 Simulation Results
The channel and code parameters used in both experiments are given in Table 4.2.
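The Gilbert-Elliott erasure process underlying these experiments can be sketched as follows (our simplified model: packets in the bad state are always erased, an assumption consistent with the burst histograms; α and β are the good-to-bad and bad-to-good transition probabilities, ε the good-state erasure probability):

```python
import random

def gilbert_elliott(alpha, beta, eps, length, seed=0):
    """Generate an erasure-indicator sequence from a two-state
    Gilbert-Elliott model: erase w.p. eps in the good state and w.p. 1
    in the bad state (our modelling assumption)."""
    rng = random.Random(seed)
    bad, erasures = False, []
    for _ in range(length):
        erasures.append(bad or rng.random() < eps)
        # state transition for the next packet
        bad = (rng.random() >= beta) if bad else (rng.random() < alpha)
    return erasures

# Degenerate sanity checks: with alpha = 0 the chain never leaves the
# good state, so losses are governed entirely by eps.
assert not any(gilbert_elliott(0.0, 0.5, 0.0, 1000))
assert all(gilbert_elliott(0.0, 0.5, 1.0, 1000))
```

The Fritchman channel used in Figure 4.5a generalizes this to N + 1 states with a chain of bad states, which produces the negative-binomial burst lengths shown in the histogram.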
In Figure 4.4a and Figure 4.5a, we plot the loss rate on the y-axis versus ε on the x-axis. The loss
rate of the m-MDS version of all codes is shown in solid lines while that of the MDS version of the same
codes is shown in dashed lines. The performance of various codes is as follows:
• m-MDS (Maximum N) Codes - Lines marked with '+'

As discussed earlier, both m-MDS and Interleaved-MDS codes achieve the maximum value of N for a given rate R and delay T. Due to the large value of N associated with these codes, they can recover from most erasures in the good state, which explains their insensitivity to ε in the interval
Table 4.2: Channel and Code Parameters used in Simulations.

                 Figure 4.4a          Figure 4.5a
(α, β)           (10^−5, 0.1)         (2 × 10^−5, 0.5)
Channel Length   5 × 10^8             5 × 10^8
Rate R           50/99 ≈ 0.51         40/67 ≈ 0.6
Delay T          50                   40

                 N      B             N      B
m-MDS            25     25            16     16
MDS              25     25            16     16
MS-mMDS          1      49            1      27
MS-MDS           1      49            1      27
MiDAS            6      43            4      24
MiDAS-MDS        6      43            4      24
PRC              5      39            4      20
PRC-MDS          5      38            4      18
of interest. The loss probability of these codes is determined by bursts of length longer than N. However, one can see that m-MDS codes have slightly better performance, which is due to their capability of partial recovery when the channel introduces bursts longer than N.
• Maximally Short (Maximum B) Codes - Lines marked with circles

Maximally Short (MS) codes can be considered a special case of MiDAS codes with no pu[·] parity-check packets added. One can see that for both MS-mMDS and MS-MDS codes, there is a noticeable increase in the loss rate over the interval of ε considered. The packet loss probability increases in proportion to ε² since N = 1 for these codes.
• MiDAS Codes - Lines marked with squares

By choosing the right pair (N, B), MiDAS codes have a larger B than maximum-N codes and a larger N than maximum-B codes. Thus, they achieve a better performance than both for some values of ε. However, their performance deteriorates faster than that of m-MDS codes. This is because of the poor performance of MiDAS codes against the burst plus isolated erasure patterns introduced by the Gilbert-Elliott and Fritchman channels in the transitions between bad and good states. Moreover, the performance of MiDAS-MDS deteriorates slightly faster than that of MiDAS-mMDS codes, which is due to an even worse performance when facing a subset of the burst plus isolated erasure patterns, as discussed in Section 3.7.
• PRC Codes - Lines marked with triangles

PRC codes are designed to recover from burst plus isolated erasure patterns with a maximum loss of one packet, which explains their slower deterioration in performance when compared to MiDAS codes. The PRC codes using m-MDS constituent codes also outperform PRC-MDS codes, due to the better partial recovery property discussed for MiDAS codes in Section 3.7, as well as the slightly larger value of B in Table 4.2.
4.6 Conclusion
In this chapter, we study streaming codes that can recover from patterns that involve both burst and
isolated erasures in the same decoding window. Such patterns arise in the transition from good state to
bad state or vice versa in statistical channel models such as Gilbert-Elliott or Fritchman channels.
It turns out that targeting perfect recovery of all packets over such patterns requires low code rates
particularly when the delay T is close to the burst length B. In practice, high rates are more desirable.
Hence, we consider partial recovery codes (PRC). In particular, in a given pattern that includes a burst
and one isolated erasure, partial recovery codes are capable of recovering all source packets within a
delay of T except at most one. These codes use a similar layering technique to that used in constructing
MiDAS codes in Chapter 3. However, in PRC, the extra layer of parity-check packets is used to recover
the isolated erasure associated with the burst whereas in the MiDAS codes, the extra layer is used to
recover from a pattern of N isolated erasures. For the case of N isolated erasures, a layer of parity-check
packets similar to that in MiDAS codes can be added.
To reduce field-size and decoding complexity, we propose an alternative construction of partial recovery codes, PRC-MDS, which replaces m-MDS codes with diagonally interleaved MDS block codes. However, a slight decrease in rate is observed when a PRC-MDS code is compared to the corresponding PRC-mMDS code for a given B and T. The performance of both proposed codes is demonstrated through simulation experiments over Gilbert-Elliott and Fritchman channels. We observe that designing codes for patterns with both bursts and isolated erasures is useful in terms of overall loss probability. We also observe a slightly higher loss rate for PRC-MDS compared to PRC-mMDS codes.
The optimality of PRC codes remains open. Establishing an upper bound on the rate over sliding window channels under partial recovery would be necessary. We believe that such a bound may involve joint source-channel coding arguments; this is left for future investigation.
Chapter 5
Unequal Source-Channel Rates
5.1 Introduction
The work in Chapters 3 and 4 assumes that the source and channel transmission rates are identical, i.e.,
one source packet arrives before the transmission of every channel packet. In many practical systems
there is a mismatch between the source and channel transmission rates. For example in most video
streaming systems, each source frame arrives once approximately every 40 ms, whereas the interval
between successive channel packets is typically much smaller. Thus a large number of channel packets
may need to be transmitted in between the arrival of two successive source frames.
We refer to this mismatched scenario as unequal source-channel rates. A straightforward way of
implementing the streaming codes in this scenario is to split each source frame into multiple packets
such that there is one source packet for each transmitted channel packet. In this chapter, we show that
such a naive approach is sub-optimal and propose a new class of codes for the unequal rates scenario
which achieves the capacity of burst erasure channels. We then enhance these codes to be resilient
against isolated erasures using similar techniques to that used in MiDAS codes.
Finally, we present extensive simulation results over the Gilbert-Elliott and Fritchman channels that
indicate substantial performance gains over baseline codes for a wide range of channel parameters.
5.2 System Model
We discuss a generalization of the setup in Chapter 2 where one source packet arrives every M channel uses. The alphabets of source and channel packets are as before. For convenience, the collection of M channel packets is termed a macro-packet. The index of each macro-packet is denoted using the letter i, i.e.,

X[i, :] = [x[i, 1] | . . . | x[i, M]] ∈ F_q^{n×M}   (5.1)

denotes the macro-packet i consisting of M channel packets. At the start of macro-packet i, the encoder observes the source packet s[i] ∈ S = F_q^k and generates M packets x[i, j] ∈ X = F_q^n, for j ∈ {1, . . . , M}, which can depend on all the observed source packets up to that time, i.e.,

x[i, j] = fi,j(s[0], s[1], · · · , s[i]).   (5.2)
Figure 5.1: Each source packet s[i] arrives just before the transmission of X[i, :] and needs to be reconstructed at the destination after a delay of T macro-packets.
These packets are transmitted in the M time slots corresponding to the macro-packet i. Figure 5.1
shows the system model. Note that for the case when M = 1 the setup reduces to that in Chapter 2.
The jth channel output packet in the macro-packet i is denoted by y[i, j]. When the channel input
is not erased, we have y[i, j] = x[i, j], whereas when the channel input is erased, y[i, j] = ⋆. The
channel output macro-packets are expressed as Y[i, :] = [y[i, 1] | . . . | y[i,M ]]. The decoder is required
to decode each source packet with a maximum delay of T macro-packets, i.e.,
ŝ[i] = gi(Y[0, :], Y[1, :], · · · , Y[i + T, :]).   (5.3)
We note that the rate of this code is given by R = k/(Mn). In this definition, we normalize the rate by the size of each macro-packet. The rate expression is motivated by the fact that nM channel symbols are transmitted over the channel for each k source symbols.
Definition 5.1 (Streaming Capacity - Unequal Source-Channel Rates). A rate R is achievable with a
delay of T macro-packets over C(N, B, W) if there exists a streaming code of this rate over some field of size
q such that every source packet s[i] can be decoded with a delay of T macro-packets. The supremum of
all achievable rates is the streaming capacity.
5.3 Main Result
For the above setup, the capacity has been obtained when N = 1 and W ≥ M(T + 1).

Theorem 5.1 (Capacity of Burst Erasure Channels). For the channel C(N = 1, B, W), and any M and
delay T such that W ≥ M(T + 1), the capacity C is expressed as follows,

    C = T/(T + b),                          B' ≤ (b/(T+b))·M,  T ≥ b,
      = (M(T + b + 1) − B)/(M(T + b + 1)),  B' > (b/(T+b))·M,  T > b,
      = (M − B')/M,                         B' > M/2,          T = b,
      = 0,                                                     T < b,
                                                                         (5.4)

where the constants b and B' are defined via

    B = bM + B',   B' ∈ {0, . . . , M − 1},   b ∈ N_0.    (5.5)
The proof of Theorem 5.1 is divided into two main parts. The code construction is illustrated in
Section 5.6 while the converse appears in Section 5.7.
Figure 5.2: Each source packet s[i] is split into M sub-packets, i.e., s[i] = (w[i, 1], w[i, 2], . . . , w[i, M]). The expanded source stream is then encoded using a Maximally-Short code. The decoder recovers each w[i, j] once y[i + T, j] is received, which ensures that s[i] is recovered by the end of macro-packet i + T.
The constructions in Theorem 5.1 only apply to the burst erasure channel. Based on the layered
approach in Theorem 3.2 we also propose a robust construction for the case when N > 1 in Section 5.8.
However, the capacity of C(N > 1, B,W ) is left for future work.
5.4 Performance Analysis of Baseline Schemes
5.4.1 m-MDS Codes
Similar to the case of equal source-channel rates, it can be shown that when W ≥ M(T + 1), any (N, B)
that satisfies

    N ≤ M(1 − R)(T + 1),   B ≤ M(1 − R)(T + 1)    (5.6)

is achievable using an (n, k, m) m-MDS code of rate R = k/n. We omit the details as they are very
similar to the justification of P1 in Corollary 2.1.
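Per (5.6), the largest correctable N = B for a rate-R m-MDS code is ⌊M(1 − R)(T + 1)⌋. A one-line check (the function name is ours) reproduces the values quoted in Section 5.9 and Table 5.2:

```python
from fractions import Fraction
from math import floor

def mmds_NB(M, T, R):
    """Largest integer N = B satisfying (5.6) for a rate-R m-MDS code."""
    return floor(M * (1 - R) * (T + 1))
```

With (M, T, R) = (20, 4, 9/14) this yields 35, and with (40, 2, 40/63) it yields 43, matching the simulation parameters.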
5.4.2 Maximally Short (MS) Codes
For the case of unequal source-channel rates, a straightforward adaptation of the MS codes is as follows.
We split each source packet s[i] into M groups of symbols, one for each time slot in the macro-packet, and
then apply the MS code of Section 2.6.2 to this expanded source stream with delay T' = MT (cf. Figure 5.2).
In particular, we assume that s[i] ∈ F_q^{kM} and proceed as follows.

• Split each s[i] = (w[i, 1], . . . , w[i, M]), where w[i, j] ∈ F_q^k.

• Apply a (B, MT) MS code of Section 2.6.2 for the C(N = 1, B, W) channel with delay T' = MT
(channel packets) and W ≥ M(T + 1).

• Transmit the associated channel packet x[i, j] ∈ F_q^n in slot j of macro-packet i.
From Theorem 2.3 we have that

    R = MT/(MT + B) = T/(T + b + B'/M)    (5.7)

is achievable when B ≤ MT. Note that in the second equality in (5.7), we use (5.5). Note that the delay
of T' = M·T channel packets in the expanded stream implies that w[i, j] is recovered when y[i + T, j]
is received for each j ∈ {1, 2, . . . , M}. Thus the entire source packet s[i] is guaranteed to be recovered
at the end of macro-packet i + T, thus satisfying the delay constraint. We note that the rate in (5.7) is
only positive if B ≤ MT and attains the capacity in Theorem 5.1 in the special case when B' = 0. If
B > MT the above construction is not feasible and the rate attained is zero.
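The adapted MS rate (5.7) can likewise be computed directly (the function name is ours). Note that at B = 40, where B' = 0 for M = 20 and T = 5, it coincides with the capacity T/(T + b) = 5/7, as stated above:

```python
from fractions import Fraction

def ms_rate(M, T, B):
    """Rate of the adapted MS code per (5.7); zero when B > MT."""
    if B > M * T:
        return Fraction(0)     # construction infeasible beyond B = MT
    return Fraction(M * T, M * T + B)
```

For B = 50 (so b = 2, B' = 10) the MS rate drops to 2/3, strictly below the capacity 11/16 of Theorem 5.1.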
We start by interpreting the capacity expression in Theorem 5.1 in Section 5.5. In Section 5.6, we
provide the code construction achieving such capacity. The decoding analysis is discussed in Section 5.6.2.
We illustrate both the encoding and decoding steps through a numerical example in Section 5.6.3. We
then provide the converse proof of Theorem 5.1 in Section 5.7. Finally, we present constructions that
are robust against isolated erasures in Section 5.8.
5.5 Capacity Expression
We note that C = 0 if T < b. This follows since an erasure burst of length B = bM + B' can span all
underlying channel packets in macro-packets [i, i + T], thus making the recovery of s[i] by macro-packet
i + T impossible. This trivial case will therefore not be discussed further in this chapter. When T = b,
the capacity in Theorem 5.1 reduces to the following,

    C = 1/2,          0 ≤ B' ≤ M/2,       T = b,
      = (M − B')/M,   M/2 < B' ≤ M − 1,   T = b.
                                                    (5.8)

In the special case of T = b, during the recovery of s[i] we can only use the unerased packets in Y[i, :]
and Y[i + b, :], as all the intermediate macro-packets are completely erased. It turns out that a simple
repetition code that uses min(M − B', M/2) information packets and an identical number of parity-check
packets in each macro-packet achieves the capacity when T = b.
When T > b, the capacity in Theorem 5.1 reduces to one of the following cases,

    C = T/(T + b),                          0 ≤ B' ≤ (b/(T+b))·M,
      = (M(T + b + 1) − B)/(M(T + b + 1)),  (b/(T+b))·M < B' ≤ M − 1.
                                                                        (5.9)

Note that, for each value of b, the capacity remains constant as B' is increased in the interval
[0, (b/(T+b))·M].
Figure 5.3 illustrates the capacity and rates achieved with baseline schemes for the case of unequal
source-channel rates. In this example, we consider M = 20 and a delay of T = 5 macro-packets and plot
the rate vs. correctable burst length. The capacity is shown by the curve marked with squares. Note
that it is constant in the intervals B ∈ [40, 45], [60, 67], [80, 88], [100, 110], which corresponds to the first
case in (5.4). The curve marked with circles denotes the rate achieved by a suitable modification of the
MS code (5.7). We note that the curves intersect whenever B is an integer multiple of M , indicating
that MS codes achieve the capacity for these special values: B ∈ {40, 60, 80, 100}. Furthermore for burst
Figure 5.3: Achievable rates for different code constructions for the case of unequal source-channel rates for the C(N = 1, B, W = M(T + 1)) channel. We fix the delay to T = 5 macro-packets and let M = 20. The line marked with squares corresponds to the capacity in Theorem 5.1. The line marked with circles corresponds to the rate achieved by the adapted MS code (5.7), whereas the line marked with + corresponds to the rate of the m-MDS code (5.6).
lengths B > MT = 100, the MS codes are no longer feasible and the associated rate is zero. The line
marked with + shows the performance of the m-MDS codes in (5.6). Since these codes do not perform
sequential recovery, their achievable rate is significantly lower than the capacity.
5.6 Code Construction
The proposed construction¹ is a natural generalization of the Maximally Short codes in Section 2.6.2.
This is illustrated in Figure 5.4. However, an additional step of reshaping, as illustrated in Figure 5.5,
is needed.
5.6.1 Encoder
We separately consider three cases below.
Encoding: T ≥ b and B' ≤ (b/(T+b))·M
We let

    n = T + b,   k = MT,    (5.10)

throughout this case. Note that the rate R = k/(Mn) reduces to the first case in both (5.8) and (5.9).
¹Pratik Patil suggested placing all the uvec[·] packets in the first column of a macro-packet, which leads to the capacity-achieving code in the case of large delay, i.e., T ≥ b(M − 1) + B' − 1 (cf. [4]). Also, the decoding analysis of the proposed codes was done in collaboration with Pratik.
Figure 5.4: Construction of Parity-Check Packets. As in the MS code, each source packet s[t] is divided into two groups, uvec[t] and vvec[t]. An m-MDS code is applied to the vvec[·] packets and a repetition code is applied to the uvec[·] packets. The resulting parities are then combined to generate the parity-check packets qvec[t] = pvec[t] + uvec[t − T].
• Source Splitting: We assume that each source packet s[i] ∈ F_q^k and partition the k symbols into
two groups uvec[i] ∈ F_q^{ku} and vvec[i] ∈ F_q^{kv} as follows,

    s[i] = (s0[i], . . . , s{k−1}[i]) = (u0[i], . . . , u{ku−1}[i], v0[i], . . . , v{kv−1}[i]),    (5.11)

where the first ku symbols constitute uvec[i] and the remaining kv symbols constitute vvec[i], and where
we select

    ku = Mb,   kv = M(T − b).    (5.12)
• m-MDS Parity-Checks: Apply a (kv + ku, kv, T) m-MDS code of rate kv/(kv + ku) to the sub-stream
of vvec[·] packets, generating ku parity-check symbols (p0[i], . . . , p{ku−1}[i]) = pvec[i] ∈ F_q^{ku} for each
macro-packet. In particular we have that

    pvec[i] = ( Σ_{l=0}^{T} vvec†[i − l] · H_l )†,    (5.13)

where H_l ∈ F_q^{kv×ku} are the sub-matrices associated with the m-MDS code (2.6).
• Parity-Check Generation: Combine the uvec[·] packets with the pvec[·] parity-checks after applying
a shift of T to the former, i.e.,

    qvec[i] = pvec[i] + uvec[i − T],   qvec[i] ∈ F_q^{ku}.    (5.14)
Figure 5.5: Reshaping of Channel Packets. The three groups of packets, uvec[i], vvec[i] and qvec[i], are reshaped into U[i, :], V[i, :] and Q[i, :], which are denoted by vertically, diagonally and grid hatched boxes, respectively. These reshaped packets are then concatenated to form the channel macro-packet X[i, :].
• Re-shaping: In order to construct the macro-packet, we reshape uvec[i], vvec[i] and qvec[i] into
groups of n symbols each, generating the following matrices:

    U[i, :] = [ u[i, 1] | · · · | u[i, r] | (u[i, r+1] ; 0) ] ∈ F_q^{n×(r+1)},
    V[i, :] = [ (0 ; v[i, 1]) | v[i, 2] | · · · | v[i, M−2r−1] | (v[i, M−2r] ; 0) ] ∈ F_q^{n×(M−2r)},
    Q[i, :] = [ (0 ; q[i, r+1]) | q[i, r] | · · · | q[i, 1] ] ∈ F_q^{n×(r+1)},
                                                                                    (5.15)

where (a ; b) denotes the length-n column obtained by stacking a on top of b, and where

    uvec[i] = (u[i, 1], u[i, 2], . . . , u[i, r], u[i, r+1]),
    vvec[i] = (v[i, 1], v[i, 2], . . . , v[i, M−2r−1], v[i, M−2r]),
    qvec[i] = (q[i, 1], q[i, 2], . . . , q[i, r], q[i, r+1]).
                                                                                    (5.16)
In (5.15) we define r ∈ N_0 and r' ∈ {0, 1, . . . , n − 1} via

    ku = r · n + r'.    (5.17)

Note that u[i, l] ∈ F_q^n for each l ∈ {1, . . . , r} and u[i, r+1] ∈ F_q^{r'}. The splitting of qvec[i] into q[i, j]
in (5.15) follows in an analogous manner. We can express

    q[i, j] = u[i − T, j] + p[i, j],   j = 1, 2, . . . , r + 1,    (5.18)

where p[i, j] is a sub-sequence of pvec[i] defined in a similar manner. We define P[i, :] similarly to
Q[i, :] in (5.15) after replacing q[i, j] by p[i, j] for j ∈ {1, . . . , r+1}. In the splitting of vvec[i] into
v[i, j], we note that v[i, 1], v[i, M−2r] ∈ F_q^{n−r'}, whereas v[i, j] ∈ F_q^n for 2 ≤ j ≤ M − 2r − 1. It
can be easily verified that M − 2r > 0 for our selected code parameters. When M − 2r = 1, the
structure of V[i, :] is as follows,

    V[i, :] = (0 ; v[i, 1] ; 0),    (5.19)

where v[i, 1] ∈ F_q^{n−2r'}.
• Macro-Packet Generation: Concatenate U[i, :], V[i, :] and Q[i, :], merging the zero-padded boundary
columns, to construct the channel macro-packet X[i, :] as follows,

    X[i, :] = [x[i, 1] | . . . | x[i, M]]
            = [ u[i, 1] | · · · | u[i, r] | (u[i, r+1] ; v[i, 1]) | v[i, 2] | · · · | v[i, M−2r−1]
                | (v[i, M−2r] ; q[i, r+1]) | q[i, r] | · · · | q[i, 1] ],
                                                                                    (5.20)

for M − 2r > 1, while for the special case of M − 2r = 1, the channel macro-packet is given by,

    X[i, :] = [x[i, 1] | . . . | x[i, M]]
            = [ u[i, 1] | · · · | u[i, r] | (u[i, r+1] ; v[i, 1] ; q[i, r+1]) | q[i, r] | · · · | q[i, 1] ].
                                                                                    (5.21)

Note that the channel macro-packet at time i is denoted by X[i, :] ∈ F_q^{n×M} and the j-th channel
packet in X[i, :] by x[i, j] ∈ F_q^n for j ∈ {1, . . . , M}.
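As a concrete sanity check of the encoding steps above, the following sketch mimics (5.13) and (5.14) over a prime field. The field size and the random matrices H_l are stand-ins (a true m-MDS code requires specially constructed H_l), so this only illustrates the data flow, not the erasure-correction guarantees:

```python
import numpy as np

# Illustration of (5.13)-(5.14) over a prime field; the random H_l below are
# NOT m-MDS matrices, they are placeholders for the data-flow illustration.
rng = np.random.default_rng(0)
q = 257                                           # illustrative field size
M, T, b = 4, 3, 1
ku, kv = M * b, M * (T - b)                       # (5.12)

H = [rng.integers(0, q, size=(kv, ku)) for _ in range(T + 1)]
v_hist = [rng.integers(0, q, size=kv) for _ in range(T + 1)]  # v[i-T], ..., v[i]
u_delayed = rng.integers(0, q, size=ku)                        # u[i-T]

# (5.13): p[i] = sum_{l=0}^{T} v[i-l] H_l   (row-vector convention)
p_i = sum(v_hist[-1 - l] @ H[l] for l in range(T + 1)) % q
# (5.14): q[i] = p[i] + u[i-T]
q_i = (p_i + u_delayed) % q

# The decoder's Step 1: subtracting a known u[i-T] recovers p[i]
assert np.array_equal((q_i - u_delayed) % q, p_i)
```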
Remark 5.1. We note that in the minimum-delay case, i.e., T = b, this construction degenerates into a
repetition code, as kv = M(T − b) = 0 in this case (cf. (5.12)). The corresponding rate of such a repetition
code is R = ku/(2ku) = 1/2, which meets the capacity expression in the first case in (5.8). The construction
achieving the second case, with T = b and B' > M/2, is discussed later in this section.

This completes the description of the encoding function for the first case in (5.4). Figure 5.6 illustrates
the overall encoder structure.
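The parameter bookkeeping of this first case, (5.10), (5.12) and (5.17), can be checked with a small helper (the function name is ours):

```python
def reshape_params(M, T, B):
    """Layout parameters for the first case of (5.4): n = T + b and ku = Mb
    per (5.10)-(5.12), with ku = r*n + r' as in (5.17)."""
    b, _ = divmod(B, M)               # (5.5)
    n, ku = T + b, M * b
    r, rp = divmod(ku, n)             # (5.17): full columns r, remainder r'
    # The text asserts M - 2r > 0 for valid code parameters
    assert r * n + rp == ku and M - 2 * r > 0
    return n, ku, r, rp
```

For example, M = 20, T = 5, B = 40 gives b = 2, hence n = 7, ku = 40, and (r, r') = (5, 5), with M − 2r = 10 columns left for the v[·] packets.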
Encoding: T > b and B' > (b/(T+b))·M

We begin by choosing the following values of n and k,

    n = T + b + 1,   k = M(T + b + 1) − B,    (5.22)

and note that the rate R = k/(Mn) reduces to the second case in (5.9).
• Split each source packet s[i] ∈ F_q^k into k symbols and divide them into two groups uvec[i] ∈ F_q^{ku} and
vvec[i] ∈ F_q^{kv} as in (5.11). This time we select

    ku = B = Mb + B',   kv = M(T + b + 1) − 2B.    (5.23)

• Apply a (kv + ku, kv, T) m-MDS code of rate kv/(kv + ku) to the sub-stream of vvec[·] packets, generating
ku parity-check symbols (p0[i], . . . , p{ku−1}[i]) = pvec[i] ∈ F_q^{ku} for each macro-packet as in (5.13).

• Combine the uvec[·] packets with the pvec[·] parity-checks after applying a shift of T to the former,
i.e., qvec[i] = pvec[i] + uvec[i − T].

• Reshape the uvec[i], vvec[i] and qvec[i] vectors into matrices U[i, :], V[i, :] and Q[i, :] as in (5.15). In
particular, we let r and r' be such that ku = r · n + r' as in (5.17). As in (5.16), we split uvec[i] into
{u[i, j]}_{1≤j≤r+1}, where u[i, j] ∈ F_q^n for 1 ≤ j ≤ r and u[i, r+1] ∈ F_q^{r'}. In a similar manner
we split qvec[i] into vectors {q[i, j]}_{1≤j≤r+1}, where q[i, j] ∈ F_q^n for 1 ≤ j ≤ r and q[i, r+1] ∈ F_q^{r'}.
Finally, we split vvec[i] into {v[i, j]}_{1≤j≤M−2r}, where v[i, 1], v[i, M−2r] ∈ F_q^{n−r'} and
v[i, j] ∈ F_q^n for 2 ≤ j ≤ M − 2r − 1.
• Generate the macro-packet X[i, :] by concatenating U[i, :], V[i, :] and Q[i, :] as in (5.20) and (5.21).
Encoding: T = b and B' > M/2

A simple repetition scheme is used. We split each source packet into M − B' packets, i.e., s[i] =
(s0[i], . . . , s{M−B'−1}[i]), and assign the channel packets as follows,

    x[i, j] = s{j−1}[i],          j ∈ [1, M − B'],
            = 0,                  j ∈ [M − B' + 1, B'],
            = s{j−B'−1}[i − T],   j ∈ [B' + 1, M].
                                                      (5.24)

The rate of such a code is clearly R = (M − B')/M, which matches the second case in (5.8). In this case, by
inspection we can check that the code described above is decodable within the decoding delay T = b.
Thus we will only focus on the previous two cases in our decoding analysis.
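A minimal sketch of the assignment rule (5.24) (the function name is ours; packets are represented as plain lists):

```python
def repetition_packet(s_cur, s_delayed, M, Bp, j):
    """x[i, j] under (5.24) for T = b and B' > M/2.
    s_cur = s[i] and s_delayed = s[i - T], each split into M - B' packets;
    Bp stands for B'."""
    if 1 <= j <= M - Bp:
        return s_cur[j - 1]            # fresh information packets
    if M - Bp < j <= Bp:
        return 0                       # unused slots
    return s_delayed[j - Bp - 1]       # repetition of s[i - T]
```

For instance, with M = 4 and B' = 3, slot 1 carries the single fresh packet, slots 2 and 3 are unused, and slot 4 repeats the packet from T macro-packets earlier.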
Remark 5.2. We note that the above construction achieves the capacity in Theorem 5.1 for W ≥
M(T + 1). For the case when W < M(T + 1), the same construction can be used after replacing the
delay T with the effective delay T_eff = ⌊W/M⌋ − 1.
Figure 5.6: Encoding of source packets into macro-packets. Each source packet is split into two groups. A repetition code is applied to the U[t, :] group with a delay of T macro-packets, denoted by vertically hatched boxes as shown in the first figure. An m-MDS code is applied to the V[t, :] group, denoted by diagonally hatched boxes, to generate the parity-checks P[i, :], denoted by the horizontally hatched boxes as shown in the second figure. The combination of the resulting parity-checks of the two groups is indicated in the last figure with grid hatched boxes.
5.6.2 Decoder
We show in this section that our proposed code construction can completely recover from any
burst of length B within the deadline. Consider a channel that introduces such a burst of length B =
bM + B' starting from x[i, j] for some j ∈ {1, . . . , M}. We first show how to recover s[i] by macro-packet
i + T. Note that since our code is time-invariant, it suffices to consider only the recovery of s[i]. Once s[i]
is recovered, we can compute X[i, :] and repeat the same procedure with the smaller burst that starts at
x[i + 1, 1] to recover s[i + 1], and so on.
The decoding steps are illustrated in Figure 5.7 and are as follows,
1. Step 1: In each macro-packet X[t, :], for t ∈ [i + b, i + T − 1], recover all the unerased symbols of
pvec[t] by subtracting out uvec[t− T ] from the corresponding qvec[t] as the former are not erased.
Since, u[i, 1], . . . ,u[i, j − 1] are not erased, we can subtract these symbols from the corresponding
symbols of qvec[i+ T ] to recover the respective pvec[i+ T ] symbols.
2. Step 2: Recover all erased vvec[·] symbols by the macro-packet i + T using the underlying (ku +
kv, kv, T ) m-MDS code. This step will be justified later in the sequel.
3. Step 3: Compute pvec[i+ T ] as it combines vvec[·] packets that are either not erased or recovered
in the previous step.
4. Step 4: Subtract pvec[i+ T ] from qvec[i+ T ] to recover uvec[i] within a delay of T macro-packets.
At this point both uvec[i] and vvec[i] have been recovered (and hence s[i]) with a delay of T
macro-packets as required.
Figure 5.7: Decoding for the burst pattern starting from x[i, 1]. The grey area denotes an erasure burst of length B. The horizontally hatched parity-checks in the second figure are used to recover the erased V[i, :], . . . , V[i + b, :] packets. The third figure shows the recovery of u[i] using the parity-checks in macro-packet i + T.
It only remains to show the sufficiency of the m-MDS code in Step 2. To do that we use the following
lemma.
Lemma 5.1. Consider any erasure burst of length B starting at x[i, j] for some j ∈ {1, . . . , M − r}.
After Step 1 of cancelling uvec[t] symbols, the total number of unrecovered symbols in the sequence
{(vvec[t], pvec[t])}_{i≤t≤i+T} is at most ku(T + 1).
Proof. See Appendix D.1.
We next claim that the decoder can recover all the erased vvec[t] symbols by the end of macro-packet
i + T. To prove this, we recall that (vvec[t], pvec[t]) is an m-MDS code with parameters (kv + ku, kv, T).
We consider the following cases,

• If the burst starts at j ∈ {1, . . . , r+1}, then all the symbols in {(vvec[t], pvec[t])}_{i≤t≤i+b−1} are erased,
whereas a portion of the symbols in (vvec[i + b], pvec[i + b]) are erased until the termination of
the erasure burst. Furthermore, {p[i + T, l]}_{j≤l≤r+1} are also considered to be erased, since they
are combined with the erased u[i, l] packets from macro-packet i. Note that all the erased symbols
involving vvec[t] occur in a single erasure burst. Thus, applying property L3 in Lemma 2.1 with
j = T and c = 0, and using B + I ≤ ku(T + 1) = (n − k)(j + 1), which follows from Lemma 5.1,
we are guaranteed that all the erased vvec[t] are recovered at the end of macro-packet i + T.
• If the burst starts at j ∈ {r + 2, . . . ,M − r} then none of the symbols uvec[i] are erased and can
be subtracted out from qvec[i + T ] to recover pvec[i + T ]. All the erased symbols thus occur in a
Table 5.1: Code construction for (M = 2, B = 3, T = 3) achieving a rate of R = 7/10. Each macro-packet is X[i, :] = [x[i, 1] | x[i, 2]] for i = 0, 1, 2, 3, with

    x[i, 1] = (u0[i], u1[i], u2[i], v0[i], v1[i]),
    x[i, 2] = (v2[i], v3[i], u0[i−3] + p0[i], u1[i−3] + p1[i], u2[i−3] + p2[i]).
burst. Thus using property L2 in Lemma 2.1, and using B ≤ (n − k)(T + 1) which follows from
Lemma 5.1, we are guaranteed that all the erased vvec[t] are recovered at the end of macro-packet
i+ T .
• If j ∈ {M − r+1, . . . ,M − 1} then none of the symbols in either uvec[i] or vvec[i] are erased. Thus
we can proceed to block i + 1 and apply the first step.
Finally as mentioned in Step 4 above, once all the erased symbols vvec[t] have been recovered by
macro-packet i+ T , their effect can be cancelled and uvec[t], for t ∈ {i, i+1, . . . , i+b} can be sequentially
recovered from macro-packet t+T by computing and subtracting pvec[t+T ] from qvec[t+T ]. Thus each
s[t] = (uvec[t],vvec[t]) can be recovered by the end of macro-packet t+ T . This completes the decoding
analysis.
Remark 5.3. We discuss intuition on the fact that the capacity function does not decrease with B' in
the first case in (5.9), defined by B' ≤ (b/(T+b))·M. Recall that for this case the selected parameters are
ku = Mb and n = T + b. Consider an erasure burst that starts at x[i, 1] and terminates at x[i + b, B'].
We claim that for such an erasure burst, as long as B' ≤ Mb/(T + b), only the u[·] packets are erased in
macro-packet X[i + b, :]. In particular, the number of symbols that are erased in macro-packet X[i + b, :]
is equal to nB' = (T + b)B' ≤ Mb = ku. Since the u[·] packets appear before any other packets in
each macro-packet, only these packets are erased. Thus during the recovery process, the number of parity-
checks available for recovering the v[·] packets does not decrease as B' is increased from 0 to Mb/(T + b).
Thus the same code parameters can be used. The above argument assumes that the burst starts at the
beginning of a macro-packet. In Appendix D.1, in the proof of Lemma 5.1, we show that this is indeed the
worst-case pattern. If the burst starts anywhere else, the number of available parity-checks can only increase.
This explains why, remarkably, the capacity is not a strictly decreasing function of B.
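The counting in the remark is easy to verify numerically; the helper below (ours) checks that n·B' ≤ ku holds throughout the first regime:

```python
from fractions import Fraction

def worst_case_check(M, T, b):
    """For every B' in the first regime of (5.9), i.e. B' <= Mb/(T+b),
    a burst ending B' packets into X[i+b, :] erases n*B' <= ku symbols
    there, so only u[.] packets are affected."""
    n, ku = T + b, M * b
    Bp_max = Fraction(M * b, T + b)
    for Bp in range(int(Bp_max) + 1):
        assert n * Bp <= ku            # (T + b)*B' <= Mb = ku
    return True

assert worst_case_check(20, 5, 2)
assert worst_case_check(2, 3, 1)
```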
5.6.3 Example
In this section, we show a code construction for the parameters M = 2, B = 3, T = 3. Note that b = 1 and
B' = 1 > 1/2 = (b/(T+b))·M. Thus, the capacity is given by C = (M(T+b+1) − B)/(M(T+b+1)) = 7/10,
which can be achieved using the code illustrated in Table 5.1.
Encoding
1. Split each source packet into M(T + b+ 1)−B = 7 packets, i.e., s[i] = (s0[i], · · · , s6[i]).
2. Divide these into two groups uvec[i] and vvec[i] with ku = B = 3 and kv = M(T + b + 1)− 2B =
4 symbols, respectively, as in (5.11). We let uvec[i] = (u0[i], · · · , u2[i]) = (s0[i], · · · , s2[i]) and
vvec[i] = (v0[i], · · · , v3[i]) = (s3[i], . . . , s6[i]).
Figure 5.8: The periodic erasure channel used in the converse proof of Theorem 5.1. Grey and whitesquares denote erased and unerased packets respectively.
3. We place B = 3 parity symbols qvec[i] = (q0[i], q1[i], q2[i]) into the last channel packet of each
macro-packet. These parities consist of two components, qvec[i] = pvec[i] + uvec[i − 3]. The parity
packets pvec[i] are generated using an m-MDS code.
Decoding

Since M = 2, there are two burst patterns that need to be checked.

1. Burst that erases x[0, 1], x[0, 2] and x[1, 1].

Recovery of v packets: We first subtract uvec[t − T] from qvec[t] for t ∈ {1, 2} to recover the
corresponding pvec[t]. These are a total of 6 symbols and thus can be used to recover v0[0], . . . , v3[0]
as well as v0[1], v1[1]. In other words, all erased v symbols are recovered by the end of macro-packet X[2, :].

Recovery of u packets: With all the erased v packets now recovered, we can compute the pvec[t]
packets for t ∈ {3, 4} and subtract them from qvec[t] to recover uvec[0] and uvec[1] at their respective
deadlines.

2. Burst that erases x[0, 2], x[1, 1] and x[1, 2].

Recovery of v packets: Since uvec[0] is not erased, we can subtract it from qvec[3] to recover pvec[3].
This together with pvec[2] is a total of 6 symbols. Thus, they can be used to recover the erased v
packets (v2[0], v3[0]) and (v0[1], . . . , v3[1]).

Recovery of u packets: Similar to the previous burst pattern, we compute the value of the parity-check
packets pvec[4] and subtract them from qvec[4] to recover uvec[1] by its deadline.
5.7 Converse
In order to establish the converse, we first consider the case when T > b. We show that any feasible rate
satisfies

    R ≤ R⁺ = min( (M(T + b + 1) − B)/(M(T + b + 1)), T/(T + b) ).    (5.25)
Consider a periodic erasure channel as shown in Figure 5.8. Each period consists of τ_P = T + b + 1
macro-packets. In each such period the first B channel packets are erased and the subsequent M(T + b + 1) − B
are not. Consider the first period, with the burst starting at x[0, 1]. By definition we require that
s[0] be recovered by the end of macro-packet T, s[1] by macro-packet T + 1, and likewise the last erased
source packet s[b] by macro-packet T + b. Thus all the lost source packets are recovered by macro-packet
t = T + b. Once these erased packets are recovered, we can treat these erasures as having never happened
and simply repeat the argument for the next period, and so on. Therefore our proposed streaming code
must be a feasible code for the periodic erasure channel. Since the capacity of the erasure channel is
simply the fraction of non-erased channel packets, it follows that

    R⁺ = (M(T + b + 1) − (bM + B'))/(M(T + b + 1))    (5.26)

is an upper bound on the rate of any feasible streaming code.
To establish the other inequality in (5.25), we consider a periodic erasure channel with period τ_P =
T + b macro-packets and assume that in each period the first Mb ≤ B channel packets are
erased. Thus in the proposed channel the first b macro-packets are completely erased in each period and
the remaining T macro-packets are not erased. In particular, in the first period, s[0], . . . , s[b − 1] must
be recovered at the end of macro-packets T, . . . , T + b − 1, respectively. At this point all the erased
source packets have been recovered and we can proceed to the recovery of the second burst starting
at macro-packet T + b. Thus the streaming code must also be feasible on this erasure channel, whose
capacity is clearly T/(T + b), and thus the upper bound follows.
When T = b we show that

    C ≤ min( (M − B')/M, 1/2 ).    (5.27)

When B' ≤ M/2, the second condition C ≤ 1/2 dominates. This bound immediately follows from (5.25)
by substituting T = b in the second expression in (5.25). Thus we only need to show that when B' > M/2
and T = b, the upper bound C ≤ (M − B')/M is valid.
We start by considering a channel that erases the first B = bM + B' channel packets x[i, 1], . . . , x[i + b, B'].
Since the delay constraint for s[i] is i + T = i + b, the following equation should be satisfied,

    H(s[i] | x[i + b, B' + 1], . . . , x[i + b, M]) = 0
    ⇒ H(s) ≤ (M − B')·H(x),    (5.28)

which implies that R = k/(Mn) = H(s)/(M log |X|) ≤ (M − B')/M, as required. This completes the proof of the upper bound.
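For T > b, the minimum of the two periodic-channel bounds in (5.25) coincides with the T > b cases of the capacity expression (5.9); a quick numerical cross-check (function names are ours):

```python
from fractions import Fraction

def upper_bound(M, T, B):
    """Minimum of the two periodic erasure channel bounds in (5.25)."""
    b = B // M
    return min(Fraction(M * (T + b + 1) - B, M * (T + b + 1)),
               Fraction(T, T + b))

def capacity_Tgtb(M, T, B):
    """The two cases of (5.9), valid for T > b."""
    b, Bp = divmod(B, M)
    if Bp * (T + b) <= b * M:          # B' <= (b/(T+b))*M
        return Fraction(T, T + b)
    return Fraction(M * (T + b + 1) - B, M * (T + b + 1))

for (M, T, B) in [(2, 3, 3), (20, 5, 40), (20, 5, 50), (20, 5, 67)]:
    assert upper_bound(M, T, B) == capacity_Tgtb(M, T, B)
```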
5.8 Robust Extensions
In Section 5.6, we provided capacity-achieving codes for C(1, B, W ≥ M(T + 1)). In order to extend the
codes to channels with N > 1, we apply the approach used in the MiDAS construction in Section 3.5.
In particular, we construct a burst erasure code and then append additional parity-checks for the u[·]
packets to deal with isolated losses. Specifically, we extend the macro-packet construction in (5.20) as
follows,
    X[i, :] = [x[i, 1] | . . . | x[i, M]]
            = [ (u[i, 1] ; pu[i, 1]) | · · · | (u[i, r] ; pu[i, r]) | (u[i, r+1] ; v[i, 1] ; pu[i, r+1]) | · · ·
                | (v[i, M−2r] ; q[i, r+1] ; pu[i, M−r]) | (q[i, r] ; pu[i, M−r+1]) | · · · | (q[i, 1] ; pu[i, M]) ],
                                                                                    (5.29)
where u[i, j], v[i, j] and q[i, j] are packets obtained from the code construction for the C(N = 1, B, W)
channel. We apply another (ku + M·ks, ku, T) m-MDS code to the uvec[·] packets, generating M·ks parity-check
symbols (pu_1[i], . . . , pu_{M·ks}[i]) = pu_vec[i] ∈ F_q^{M·ks}. We then split the generated parities into M
equal groups, pu_vec[i] = (pu[i, 1], . . . , pu[i, M]), and append one group to each channel packet as shown
in (5.29). The corresponding rate of such a code is clearly R = (ku + kv)/(M(n + ks)), where ku, kv and n
are based on the code for the burst-only channel in Section 5.6.
Proposition 5.1. To recover from any N ≤ ⌊(T/(T+b))·Mb⌋ isolated erasures when W ≥ M(T + 1) and
T > b, it suffices to select

    ks = ⌈ N·n / (M(T + 1) − N) ⌉,    (5.30)

where ⌈·⌉ and ⌊·⌋ denote the ceiling and floor functions, respectively.
Proof. We recall that there are two m-MDS codes underlying our construction in (5.29). A (ku + kv, kv, T)
m-MDS code is applied to the vvec[·] packets to generate parity-checks pvec[·], and qvec[t] = pvec[t] + uvec[t − T]
are transmitted. Furthermore, a (ku + M·ks, ku, T) m-MDS code is applied to the uvec[·] packets to
generate parity-checks pu_vec[·].
Let us consider the window of length T consisting of the macro-packets X[i, :], . . . , X[i + T − 1, :]
and assume that there are N erasures in arbitrary positions. Note that in qvec[t] = pvec[t] + uvec[t − T]
for t ∈ [i, i + T − 1], the uvec[·] are from time i − 1 or before, and can be cancelled to recover pvec[t].
The (ku + kv, kv, T) m-MDS code can recover vvec[i] if no more than ku·T symbols are erased among
(vvec[i], qvec[i], . . . , vvec[i + T − 1], qvec[i + T − 1]). Since these symbols are reshaped into columns, each
having no more than n symbols, the number of erasures that are guaranteed to be corrected is given by,

    Nv = ⌊ ku·T / n ⌋
       ≥ min( ⌊ (bM + B')T / (T + b + 1) ⌋ for B' ≥ (b/(T+b))·M ,  ⌊ MbT / (T + b) ⌋ for B' < (b/(T+b))·M )    (5.31)
       = ⌊ MbT / (T + b) ⌋    (5.32)
       ≥ N,    (5.33)
Table 5.2: Channel and code parameters used in Figures 5.9 and 5.10

(a) Unequal Source-Channel Rates

                          Figure 5.9                Figure 5.10
    Channel States        2                         20
    (M, T)                (20, 4)                   (40, 2)
    (α, β)                (10⁻⁵, [0.05, 0.15])      (10⁻⁵, 0.5)
    Channel Length        10⁹                       10⁹
    Rate R                9/14 ≈ 0.64               40/63 ≈ 0.63

(b) Achievable N and B for different codes

                              Figure 5.9       Figure 5.10
    Code                      N       B        N       B
    Reshaped Code             1       50       1       58
    Robust Reshaped Code      -       -        5       53
    MiDAS Code                -       -        5       42
    m-MDS Code                35      35       43      43
    MS Codes                  1       44       1       45
where we use

    ku = B   and  n = T + b + 1,   if B' ≥ (b/(T+b))·M,
    ku = Mb  and  n = T + b,       if B' < (b/(T+b))·M,
                                                          (5.34)

to get (5.31), and substitute B' ≥ (b/(T+b))·M in the first term of (5.31) to get (5.32).
Next we consider the number of erased packets that can be corrected by the (ku + M·ks, ku, T) m-MDS
code. Using Lemma 2.1, one can see that this code can recover from M·ks(T + 1) erasures in the
window of interest. Since each channel packet can have up to n + ks symbols belonging to this code, the
total number of erasures that can be corrected is given by,

    Nu = ⌊ M·ks(T + 1) / (n + ks) ⌋,    (5.35)

which upon re-arranging gives (5.30).
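The choice (5.30) can be verified against (5.35) in exact integer arithmetic. The parameters below are an assumed instance matching the regime of Figure 5.10 (M = 40, T = 2, with b = 1 assumed); they are illustrative, not taken from the thesis code:

```python
def ks_choice(M, T, n, N):
    """(5.30): ks = ceil(N*n / (M*(T+1) - N)), via integer ceiling division."""
    d = M * (T + 1) - N
    return -(-N * n // d)

M, T, b = 40, 2, 1           # assumed instance; B' < Mb/(T+b) regime of (5.34)
n = T + b                    # so n = T + b and ku = Mb
N = 5
assert N <= (M * b * T) // (T + b)        # admissible N in Proposition 5.1
ks = ks_choice(M, T, n, N)
Nu = (M * ks * (T + 1)) // (n + ks)       # (5.35)
assert Nu >= N                            # the chosen ks indeed suffices
```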
Remark 5.4. Unlike the case of MiDAS codes, we do not claim the optimality of the proposed robust
codes. Nevertheless in the simulation results we observe that in some cases these codes outperform
baseline schemes.
5.9 Simulation Results
In our simulations in Figure 5.9, we consider a Gilbert channel model, which is the same as a Gilbert-Elliott
channel with ε = 0, i.e., the loss probability is 0 in the good state. We fix α = 10⁻⁵ and vary β
on the x-axis in the interval [0.05, 0.15], which in turn changes the burst-length distribution. We further
select M = 20, i.e., 20 channel packets are generated for every source packet received at the encoder.
We fix the rate R = 9/14 and the delay T = 4 macro-packets. Under these conditions, the m-MDS code
can correct burst erasures of length up to B = 35, whereas a Maximally Short code achieves B = 44.
The code designed for M > 1 achieves B = 50. This gain in terms of correctable burst-length is reflected
in Figure 5.9 as one can see that codes designed for unequal source-channel rates, which are referred
to as reshaped codes, achieve a lower loss probability. We note that the code parameters in Figure 5.9
correspond to the second case in (5.4).
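A Gilbert channel trace of this kind can be sampled in a few lines (a sketch, not the simulator used for the figures; parameter values below are illustrative):

```python
import random

def gilbert_trace(alpha, beta, length, seed=0):
    """Loss trace of a Gilbert channel: good -> bad w.p. alpha, bad -> good
    w.p. beta; a packet is lost iff sent in the bad state (epsilon = 0)."""
    rng = random.Random(seed)
    bad, trace = False, []
    for _ in range(length):
        trace.append(1 if bad else 0)
        if bad:
            bad = rng.random() >= beta    # remain in the burst w.p. 1 - beta
        else:
            bad = rng.random() < alpha    # enter a burst w.p. alpha
    return trace

trace = gilbert_trace(1e-3, 0.1, 100000)
loss_rate = sum(trace) / len(trace)       # concentrates near alpha/(alpha+beta)
```

Bursts have geometric length with mean 1/β, which is why sweeping β changes the burst-length distribution.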
In Figure 5.10, we consider a Fritchman channel with (α, β) = (10⁻⁵, 0.5) and N + 1 = 20 states. The
corresponding burst distribution is illustrated in Figure 5.10b. In Figure 5.10a, we show the performance
of different streaming codes in the case of unequal source-arrival and channel-transmission rates on such
Chapter 5. Unequal Source-Channel Rates 78
[Figure: loss probability versus β for the Uncoded scheme, the M>1 Code (N,B) = (1,50), the m-MDS Code (N,B) = (35,35), and the MS Code (N,B) = (1,44).]

Figure 5.9: Simulation over a Gilbert channel with α = 10−5 and β varied on the x-axis. All codes are of rate R = 9/14 and evaluated using a decoding delay of T = 4 macro-packets. Each macro-packet consists of M = 20 channel packets.
channel. The rate for all codes is fixed to R = 0.64 and the delay constraint is T = 2 macro-packets
where each macro-packet has M = 40 packets. As the probability of erasure in the good state ε increases,
the performance of m-MDS code (line marked with +) does not change. The loss rate of this code is
≈ 10−4 which is dominated by the fraction of erasures introduced by bursts longer than 43. On the
other hand, both Maximally Short and reshaped codes achieve N = 1 and thus deteriorate as quickly as ε². For the leftmost point corresponding to ε = 0, the loss probability of the Maximally Short code is ≈ 10−4, which reflects the number of erasures introduced by bursts longer than 45. Similarly, the
loss probability of the reshaped code is ≈ 3× 10−6 which matches the fraction of losses introduced due
to bursts longer than 58. The performance of the robust versions of these codes, namely MiDAS and
robust reshaped codes does not deteriorate as fast. However, the robust reshaped code outperforms the
MiDAS code, as the former achieves B = 53 versus B = 42 achieved by the latter, while fixing N = 5,
[Figure: loss probability versus ε for the Uncoded scheme, the M>1 Burst Erasure Code (N,B) = (1,58), the Robust M>1 Code (N,B) = (5,53), the m-MDS Code (N,B) = (43,43), the MS Code (N,B) = (1,45), and the MiDAS Code (N,B) = (5,42).]

(a) Simulation results. All codes are evaluated using a decoding delay of T = 2 macro-packets and a rate of R ≈ 0.63. Each macro-packet consists of M = 40 channel packets.
[Figure: histogram of burst lengths versus probability of occurrence.]

(b) Burst histogram at β = 0.5 in a N + 1 = 20-states Fritchman channel. The distribution follows a negative binomial distribution (shown dotted) of N = 19 failures and a success probability of 0.5.
Figure 5.10: Simulation experiments for the Fritchman channel model with N + 1 = 20 states and (α, β) = (10−5, 0.5).
R = 0.63 and T = 2.
5.10 Conclusion
In this chapter, we consider extending the results in Chapter 3 to the case of unequal source-arrival and channel-transmission rates. In particular, we allow M > 1 channel packets to be transmitted between two successive source packets. As with the case of MiDAS codes, we first consider channels that only introduce a single erasure burst of maximum length B channel packets. The delay in this case is T
source packets.
The first observation is that using simple source expansion to adapt codes designed for M = 1 does
not achieve the capacity for the case when M > 1. Hence, we develop a new class of streaming codes
that achieve the capacity for burst erasure channels for any M ≥ 1. Such constructions have a similar
layering structure to that used in the M = 1 case but with an extra reshaping step. In particular, a
source stream s[·] is divided into two sub-streams uvec[·] and vvec[·]. A repetition code is applied to the
first, whereas an m-MDS code is applied to the second sub-stream, generating the parity-check packets
pvec[·]. The overall parity-check packets are a combination of the parity-checks of both codes after shifting the first by T packets, i.e., qvec[t] = pvec[t] + uvec[t − T ]. The three groups of packets, uvec[·], vvec[·] and qvec[·], are then carefully reshaped and concatenated to form the channel macro-packet, which consists of
the M channel packets to be transmitted between two successive source packets. We note that in such
construction, placing the uvec[·] packets in the beginning of the macro-packet is crucial. The location of
the vvec[·] and pvec[·] packets does not matter.
As a natural extension, we then consider enhancing these codes to recover from N > 1 isolated
erasures. The technique used in MiDAS codes is adopted. We first construct a burst erasure code as
discussed above and then append a layer of parity-check packets generated using an m-MDS code applied to the uvec[·] packets.

For future work, the optimality of the robust streaming codes for unequal source-channel rates when
N > 1 remains open. Different code construction techniques should be investigated. For example,
instead of coding for B and then adding a layer for N , doing the splitting and reshaping steps for both
B and N might be helpful. Also, a technique similar to that used in E-RLC codes in [1] can be applied
to achieve small values of N with no loss in rate. Moreover, the investigation of the minimum required field size for streaming codes is also highly important. Further improvements can be attained by considering
streaming codes that correct both burst and isolated losses in the window of interest as observed in
Chapter 4.
Chapter 6

Algebraic Properties of Streaming Codes
6.1 System Model
In this chapter, we study some algebraic properties of convolutional codes under delay constraints. In
our discussion we view the input packets s[i], for i ≥ 0, as a length k vector over Fq and x[i] as a
length n vector over Fq as in Definition 2.3. We restrict our attention to time-invariant linear (n, k,m)
convolutional codes specified by
x[i] = ( ∑_{l=0}^{m} s†[i − l] Gl )†,
where G0, . . . ,Gm are k × n generator matrices over Fq.
We recall from (2.4) that the first j + 1 output packets can be expressed as,
[x[0], x[1], . . . , x[j]] = [s[0], s[1], . . . , s[j]] · Gsj .   (6.1)
where
Gsj =
⎡ G0   G1   · · ·  Gj   ⎤
⎢ 0    G0   · · ·  Gj−1 ⎥
⎢ ⋮          ⋱     ⋮    ⎥
⎣ 0    · · ·       G0   ⎦      (6.2)
is the generator matrix truncated to the first (j + 1)n columns. Note that Gl = 0 if l > m.
In this chapter, we study the minimum distance and span properties of the truncated generator matrix
Gsj and show their connection to the low-delay properties of the underlying code. Such a connection
was discussed in [35] and used to perform a computer search of good low-delay codes.
6.2 Column Distance
The column distance of convolutional codes is well studied; see, e.g., [76, Chapter 3] and [75].
Definition 6.1 (Column Distance - Packets). The jth column distance of Gsj in (6.2) in terms of packets
is defined as
dj = min_{s=[s[0],s[1],...,s[j]], s[0]≠0} wt([x[0], . . . , x[j]])   (6.3)

where wt([x[0], . . . , x[j]]) counts the number of non-zero elements in the (j + 1)-length vector.
Intuitively the jth column distance of the convolutional code finds the codeword sequence of minimum
Hamming weight in the interval [0, j] that diverges from the all zero state at time t = 0. We refer the
reader to [76, Chapter 3] for some properties of dj .
Fact 6.1. A convolutional code with a column distance of dj can recover every information packet with
a delay of j provided the channel introduces no more than N = dj − 1 erasures in any sliding window of
length j + 1. Conversely there exists at least one erasure pattern with dj erasures in a window of length
j + 1 where the decoder fails to recover all source packets.
Proof. The proof of sufficiency follows by using an argument similar to that in the proof of property L1
in Lemma 2.1. We will omit it as the argument is completely analogous.
Conversely, there exists at least one output sequence whose Hamming weight equals dj and whose input packet s[0] ≠ 0. By erasing all the dj non-zero positions of this output sequence, we cannot distinguish it from the all-zero sequence.
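Definition 6.1 and the converse direction of Fact 6.1 can be checked by brute force on a toy code. The binary (n, k, m) = (2, 1, 1) code below, with generator matrices chosen arbitrarily for illustration, is a hypothetical example and not a construction from this thesis.

```python
import itertools
import numpy as np

# Toy binary (n, k, m) = (2, 1, 1) convolutional code over GF(2);
# G0 and G1 are illustrative choices.
G = [np.array([[1, 1]]), np.array([[0, 1]])]  # each k x n
k, n, m = 1, 2, len(G) - 1

def column_distance(j):
    """Brute-force d_j per Definition 6.1: the minimum number of
    non-zero output packets over all inputs with s[0] != 0."""
    best = None
    for bits in itertools.product(range(2), repeat=(j + 1) * k):
        s = [np.array(bits[i * k:(i + 1) * k]) for i in range(j + 1)]
        if not s[0].any():
            continue  # Definition 6.1 requires s[0] != 0
        weight = 0
        for i in range(j + 1):
            xi = np.zeros(n, dtype=int)
            for l in range(min(i, m) + 1):
                xi = (xi + s[i - l] @ G[l]) % 2
            weight += 1 if xi.any() else 0  # packet-level Hamming weight
        best = weight if best is None else min(best, weight)
    return best

print([column_distance(j) for j in range(4)])  # -> [1, 2, 2, 2]
```

For this toy code the column distance saturates at 2, so by Fact 6.1 it corrects only N = 1 isolated erasure per window regardless of the allowed delay.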
Proposition 6.1. For any (n, k,m) convolutional code defined in (6.1), the jth column distance satisfies,
dj ≤ (1−R)(j + 1) + 1, (6.4)
and is achieved with equality using
1. A (n, k,m) systematic m-MDS code with m ≥ j, or;
2. A (n, k) = (j + 1, ⌊R(j + 1)⌋) Interleaved-MDS code.
Proof. Consider a (n, k,m) convolutional code with column distance dj . From the sufficiency part of
Fact 6.1, such a code is feasible over the channel which introduces no more than N = dj − 1 isolated
erasures in a window of length W ≥ j + 1 where j is the delay. Thus it must satisfy the upper bound
in (2.12). Substituting N = dj − 1 immediately gives (6.4).
For achievability,
1. Consider a (n, k,m) m-MDS code of rate R = k/n defined in Section 2.5.2. From Corollary 2.1, such a code can recover from N = (1 − R)(j + 1) erasures in a window of length W ≥ j + 1 provided that j ≤ m.
2. Consider a (j + 1, ⌊R(j + 1)⌋) Interleaved-MDS code of rate R defined in Section 2.5.1. Such a code is capable of recovering from n − k = ⌊(1 − R)(j + 1)⌋ erasures in any window of length W ≥ j + 1.
From the necessity part of Fact 6.1, such codes must satisfy dj ≥ N+1 and the achievability follows.
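The bound (6.4) is easy to evaluate numerically; the helper below is a direct transcription, with sample parameters chosen only for illustration.

```python
def dj_bound(R, j):
    """Upper bound (6.4) on the packet-level column distance:
    d_j <= (1 - R)(j + 1) + 1."""
    return (1 - R) * (j + 1) + 1

# A rate-1/2 code with delay j = 3 has column distance at most 3, i.e.,
# by Fact 6.1 it corrects at most N = 2 isolated erasures per window.
print(dj_bound(0.5, 3))  # -> 3.0
```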
6.3 Column Span
The concept of span arises in the theory of trellises (see, e.g., [82–86]). To the best of our knowledge,
the column span of a convolutional code was first introduced in [35] in the context of low-delay codes
for burst erasure channels.
Definition 6.2 (Column Span - Packets). The jth column span in terms of packets of Gsj in (6.2) is defined as

cj = min_{s=[s[0],s[1],...,s[j]], s[0]≠0} span([x[0], . . . , x[j]])   (6.5)

where span([x[0], . . . , x[j]]) equals the support of the underlying vector, i.e., span([x[0], . . . , x[j]]) = t2 − t1 + 1, where x[t2] is the last non-zero element in x = [x[0], . . . , x[j]] and x[t1] is the first non-zero element.
Fact 6.2. Consider a channel that introduces no more than a single erasure burst of maximum length
B in any sliding window of length j + 1. A necessary and sufficient condition for a convolutional code
to recover every erased packet with a delay of j is that cj > B.
Proof. To justify Fact 6.2, consider the interval [0, j] and the input sequence (s[0], . . . , s[j]) where s[0] ≠ 0.
Let the corresponding output be (x[0], . . . ,x[j]).
Suppose the channel erases a burst of maximum length B packets. To recover s[0] at time j, it suffices
to show that the received sequence can be distinguished from the all zero sequence (by using linearity).
According to Definition 6.2, any output sequence that corresponds to an input sequence with s[0] ≠ 0
has at least two non-zero packets cj apart from each other. These packets cannot be both erased by a
single burst of maximum length B whenever cj > B and hence s[0] can be recovered at time j. Now,
the effect of s[0] can be cancelled from all future channel packets and a similar argument can be applied
to the interval [1, j+1] to recover s[1] at time j. This step can be repeated to recover all source packets
within a delay of j and the claim follows.
Conversely, there exists at least one output sequence whose span equals cj , i.e., the first non-zero channel packet is x[t] and the last such packet is x[t + cj − 1], and whose input packet s[0] ≠ 0. By erasing
a burst of length B = cj in the interval [t, t+ cj − 1] for this output sequence, we cannot distinguish it
from the all-zero sequence.
Proposition 6.2. For any (n, k,m) convolutional code defined in (6.1), the jth column span satisfies

cj ≤ min((1 − R)/R, 1) · j + 1,   (6.6)
and is achieved with equality using a (cj − 1, j) Maximally-Short code.
Proof. Consider a (n, k,m) convolutional code with column span cj . From the sufficiency part of Fact 6.2, such a code is feasible over the channel which introduces no more than one erasure burst of maximum length B = cj − 1 in a window of length W ≥ j + 1, where j is the delay. Hence, it must satisfy
the upper bound in (2.13). Substituting B = cj − 1 immediately gives (6.6).
For achievability, consider a (cj − 1, j) Maximally-Short code of rate R = j/(j + cj − 1) defined in Section 2.6.1. By rearranging (2.13), one can show that such a code can recover from an erasure burst of length B = min((1 − R)/R, 1) · j in a window of length W ≥ j + 1. From the necessity part of Fact 6.2, such a code must satisfy
cj ≥ B + 1 and the achievability follows.
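As with the column distance, the span bound (6.6) can be evaluated directly; the parameters below are illustrative only.

```python
def cj_bound(R, j):
    """Upper bound (6.6) on the packet-level column span:
    c_j <= min((1 - R)/R, 1) * j + 1."""
    return min((1 - R) / R, 1.0) * j + 1

# For R = 2/3 and delay j = 4 the span is at most 3, so by Fact 6.2
# bursts of length at most B = 2 are correctable with this delay.
print(cj_bound(2 / 3, 4))
```

Note that for R ≤ 1/2 the bound degenerates to cj ≤ j + 1, i.e., the span is limited only by the window length.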
6.4 Column Distance Column Span Tradeoff
It follows from Facts 6.1 and 6.2 that a necessary and sufficient condition for any convolutional code to
recover each source packet with a delay of j over a channel C(N,B,W = j+1) is that both dj > N and
cj > B. Thus, specializing Theorems 3.1 and 3.2 to W = j + 1, we have the following.
Proposition 6.3 (A Fundamental Tradeoff between Column Distance and Column Span). For any
(n, k,m) convolutional code and an integer j > 0 we have that the column distance dj and column span
cj must satisfy
(R/(1 − R)) · cj + dj ≤ j + 1 + 1/(1 − R),   (6.7)
where R = k/n denotes the rate of the code. Furthermore, for any j > 0 there exists a (n, k,m) convolutional
code with column distance dj and column span cj, over a sufficiently large field-size such that,
(R/(1 − R)) · cj + dj ≥ j + 1/(1 − R).   (6.8)
Proof. To establish (6.7), consider any convolutional code with a column distance dj and column span
cj . From the sufficiency parts of Facts 6.1 and 6.2 such a code is feasible over the channel C(N =
dj − 1, B = cj − 1,W = j + 1) with delay j. Thus it must satisfy the upper bound (3.1). Substituting
N = dj − 1 and B = cj − 1 immediately gives (6.7).
To establish (6.8), consider the code that satisfies the lower bound in (3.2) in Theorem 3.2. From the
necessity parts of Facts 6.1 and 6.2 such a code must satisfy cj ≥ B + 1 and dj ≥ N + 1. Substituting
in (3.2) immediately leads to (6.8).
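The necessary condition (6.7) gives a one-line feasibility check for a target (dj, cj) pair; the numbers below are illustrative assumptions.

```python
def tradeoff_satisfied(R, j, dj, cj):
    """Check the necessary condition (6.7):
    R/(1 - R) * c_j + d_j <= j + 1 + 1/(1 - R)."""
    return R / (1 - R) * cj + dj <= j + 1 + 1 / (1 - R)

# At rate R = 1/2 and delay j = 7 the condition reads c_j + d_j <= 10:
print(tradeoff_satisfied(0.5, 7, dj=4, cj=6))   # feasible pair
print(tradeoff_satisfied(0.5, 7, dj=5, cj=6))   # violates (6.7)
```

By the achievability part (6.8), any pair missing the bound by at most one unit of dj is attainable over a sufficiently large field.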
As a final remark, we note that Facts 6.1 and 6.2 also immediately apply to any channel with W ≥ j + 1. In particular, any erasure pattern for the C(N,B,W ) channel with W ≥ j + 1 is also feasible for C(N,B,W = j + 1) and thus the sufficiency follows. Furthermore, note that whenever W ≥ j + 1, any erasure pattern in the interval [0, j] used in the proof of the necessity part can also be used for the
channel C(N,B,W ).
6.5 Symbol Level
In this section, we assume the symbols in x[i], i.e.,
x[i] = (x0[i], x1[i], . . . , xn−1[i]) (6.9)
are transmitted sequentially in the time interval [i · n, (i + 1) · n − 1] over the channel. Note that in this section we transmit only symbols over Fq on the channel. We then reconsider the column distance and column span of convolutional codes with respect to symbols as follows.
6.5.1 Column Distance
The definition of column distance of convolutional codes with respect to symbols is provided in Defini-
tion 2.4. We restate the definition for convenience.
Definition 6.3 (Column Distance - Symbols). The jth column distance in terms of symbols of Gsj
in (6.2) is defined as
dj = min_{s=[s[0],s[1],...,s[j]], s[0]≠0} wt([x[0], . . . , x[j]])   (6.10)

where wt([x[0], . . . , x[j]]) counts the number of non-zero symbols in the (j + 1)n-length vector [x[0], . . . , x[j]].
Fact 6.3. Consider a channel that introduces no more than N isolated erasures in any sliding window
of length n(j + 1). A necessary and sufficient condition for a convolutional code to recover every erased
packet with a delay of j is that dj > N .
Proof. The proof is similar to that of Fact 6.1 and will be omitted.
Proposition 6.4. For any (n, k,m) convolutional code defined in (6.1), the jth column distance satisfies,
dj ≤ (n− k)(j + 1) + 1, (6.11)
and is achieved with equality using a (n, k,m) systematic m-MDS code with m ≥ j.
Proof. We refer the reader to [33] and [32] for the proof of the upper bound on dj . For achievability, we use a (n, k,m) m-MDS code with m ≥ j. Such a code can recover from up to N = (n − k)(j + 1) erasures in a window of length n(j + 1). From the necessity part of Fact 6.3, we can substitute dj = N + 1 and the claim follows.
Remark 6.1. We note that a (n, k) Interleaved-MDS code does not achieve maximum column distance
with respect to symbols. In particular, if more than n − k erasures take place in the same diagonal
codeword, the associated source packets cannot be recovered, i.e., dj = n− k + 1 = d1.
6.5.2 Column Span
To the best of our knowledge, the column span of a convolutional code in terms of symbols has not been studied in the literature. In this section, we derive the maximum achievable column span of any (n, k,m)
convolutional code and provide a construction that achieves this maximum value.
Definition 6.4 (Column Span - Symbols). The jth column span of Gsj in (6.2) is defined as
cj = min_{s=[s[0],s[1],...,s[j]], s[0]≠0} span([x[0], . . . , x[j]])   (6.12)

where span(·) here is measured in symbols, i.e., span([x[0], . . . , x[j]]) = (t2 − t1)n + (l2 − l1) + 1, where xl2 [t2] is the last index at which [x[0], . . . , x[j]] is non-zero and xl1 [t1] is the first such index.
Fact 6.4. Consider a channel that introduces no more than one burst erasure of maximum length B in
any sliding window of length (j + 1)n. A necessary and sufficient condition for a convolutional code to
recover every erased packet with a delay of j is that cj > B.
Proof. The proof is similar to that of Fact 6.2 and will be omitted.
Theorem 6.1 (Maximum Column Span of Convolutional Codes). For any (n, k,m) convolutional code,
the maximum achievable jth column span is given by,
cj = { (n − k)(j + ⌊((n − k)/k) · j⌋ + 1) + 1,   k ≥ n/2
     { n(j + 1) − k + 1,                         k ≤ n/2      (6.13)
(6.13)
To prove Theorem 6.1, we restate the problem as follows: for a given n, j and B, what is the maximum achievable rate of any (n, k,m) convolutional code? This is clearly equivalent to finding the maximum k.
Lemma 6.1. For any n, j and cj = B + 1, the maximum achievable k is given by,
k = { ⌊jn/(j + b)⌋,                      B′ ≤ (b/(j + b)) · n, j ≥ b,
    { ⌊(n(j + b + 1) − B)/(j + b + 1)⌋,  B′ > (b/(j + b)) · n, j > b,
    { n − B′,                            B′ > n/2, j = b,
    { 0,                                 j < b,                    (6.14)

where the constants b and B′ are defined via

B = bn + B′,   B′ ∈ {0, . . . , n − 1},   b ∈ N0.   (6.15)
Proof. Since the system involves sending n symbols between two successive source packets, one can see that the system considered is analogous to the unequal source-channel rates setting discussed in Chapter 5. In the latter, M channel packets, each in F_q^n, are transmitted between two successive source frames, each in F_q^k. Similarly, in the current model, n channel symbols, each in Fq, are transmitted between two successive source packets, each in F_q^k. Hence, the proof of Lemma 6.1 follows by substituting n = 1, M = n and R = k/n in Theorem 5.1.
Now we prove Theorem 6.1. For the converse, we note that the maximum value of B = bn+B′ for a
given n, k and j can be achieved by first maximizing b and then maximizing B′ at the maximum value
of b. This follows since B′ ≤ n− 1. The first case in (6.13), when k ≥ n/2, corresponds to the first two
cases in (6.14). According to the first case, the maximum value of b is given by,
b1 = ⌊((n − k)/k) · j⌋,   (6.16)
while the maximum value of B′ in this case is clearly

B′1 = bn/(j + b).   (6.17)
For the second case in (6.14), the maximum value of b happens at the minimum value of B′ = bn/(j + b) and is given by

b2 = ⌊((n − k)/k) · j⌋ = b1,   (6.18)
while the maximum value of B′ can be computed from the same expression by substituting b = b2 as

B′2 = (n − k)(j + 1) − ⌊((n − k)/k) · j⌋ · k.   (6.19)
Combining both cases, the maximum burst length for a given n, k ≥ n/2 and j is given by

B = max(b1n + B′1, b2n + B′2) (a)= b2n + B′2 (b)= (n − k)(j + ⌊((n − k)/k) · j⌋ + 1),   (6.20)
where (a) follows from the fact that B′2 ≥ B′1, while (b) follows from (6.18) and (6.19). By using
Fact 6.4, one can substitute (6.20) in cj = B+1 and the proof of the first case in (6.13) follows. For the
second case in (6.13), we consider the third case in (6.14) since it is the only case that corresponds to
k < n/2. The values of b and B′ are j and n− k respectively in this case and hence the corresponding
B = bn+ B′ = n(j + 1) − k. We again use Fact 6.4 to compute the jth column span as cj = B + 1 =
n(j + 1)− k + 1 which matches the second case in (6.13) and the converse of Theorem 6.1 follows.
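The closed form (6.13) is easy to evaluate; the function below is a direct transcription, with example parameters chosen only for illustration. At the boundary k = n/2 the two cases coincide.

```python
def max_column_span(n, k, j):
    """Maximum symbol-level column span c_j of (6.13)."""
    if 2 * k >= n:   # k >= n/2 (at k = n/2 both branches agree)
        return (n - k) * (j + ((n - k) * j) // k + 1) + 1
    else:            # k <= n/2
        return n * (j + 1) - k + 1

print(max_column_span(n=3, k=2, j=4))  # -> 1*(4 + 2 + 1) + 1 = 8
print(max_column_span(n=4, k=1, j=2))  # -> 4*3 - 1 + 1 = 12
```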
For achievability, we consider the following code construction.
Encoding
• Source Splitting: Split each source packet s[i] ∈ F_q^k, where k is chosen according to (6.14), into two groups u[i] ∈ F_q^{ku} and v[i] ∈ F_q^{kv} as follows:

s[i] = (u0[i], . . . , uku−1[i], v0[i], . . . , vkv−1[i]),   (6.21)

where ku + kv = k, i.e., u[i] constitutes the first ku = n − k symbols in s[i] whereas v[i] constitutes the remaining kv = 2k − n symbols.
• m-MDS Code: Apply a (ku + kv, kv, j) = (k, 2k − n, j) m-MDS code of rate Rv = kv/(ku + kv) = (2k − n)/k on the packets v[i] and generate parity-check packets

pv[i] = ( ∑_{l=0}^{j} v†[i − l] · Hv_l )†,   pv[i] ∈ F_q^{ku},   (6.22)

where the matrices Hv_l ∈ F_q^{kv×ku} are associated with an m-MDS code (2.6).
• Repetition Code: Superimpose the u[·] packets onto pv[·] and let
q[i] = pv[i] + u[i − j]. (6.23)
• Channel Packet Generation: Concatenate the generated parity-checks with the source packets so that the channel packet at time i is given by x[i] = (u[i], v[i], q[i]) ∈ F_q^n.
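The three encoding steps above can be sketched end-to-end. The field size, the code parameters, and the matrices below are all illustrative assumptions; in particular, random matrices merely stand in for the m-MDS matrices Hv_l of (6.22) and are not guaranteed to have the m-MDS property.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 257                       # illustrative prime field size
n, j = 5, 4                   # channel packet size and delay (illustrative)
k = 3                         # example with k >= n/2
ku, kv = n - k, 2 * k - n     # ku = 2, kv = 1

# Random stand-ins for the m-MDS matrices H^v_l of (6.22).
H = [rng.integers(0, q, size=(kv, ku)) for _ in range(j + 1)]

def encode(stream):
    """Map source packets s[i] in F_q^k to channel packets x[i] = (u, v, q)."""
    out = []
    for i in range(len(stream)):
        u, v = stream[i][:ku], stream[i][ku:]      # source splitting (6.21)
        pv = np.zeros(ku, dtype=np.int64)
        for l in range(j + 1):                     # m-MDS parities on v (6.22)
            if i - l >= 0:
                pv = (pv + stream[i - l][ku:] @ H[l]) % q
        shift = stream[i - j][:ku] if i - j >= 0 else np.zeros(ku, dtype=np.int64)
        qi = (pv + shift) % q                      # repetition layer (6.23)
        out.append(np.concatenate([u, v, qi]))
    return out

stream = [rng.integers(0, q, size=k) for _ in range(8)]
x = encode(stream)
print(len(x), x[0].size)  # 8 channel packets of n = 5 symbols each
```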
Decoding
The decoding analysis is similar to that in Section 5.6.2 and will be omitted.
6.6 Conclusion
In this chapter, we study some algebraic properties of convolutional codes when delay constraints are
considered. It is well-known that designing codes for isolated erasures is equivalent to maximizing their
column distance. Similarly, to recover from burst erasures, the underlying code must have a sufficiently
large column span. Using these facts, we show that the tradeoff between burst-erasure correction and
isolated-erasure correction capability discussed in Chapter 3 can be translated to a fundamental tradeoff
between the column distance and column span of any convolutional code. In the case of transmitting one
symbol over Fq in each channel use, we provide the maximum column span of any (n, k,m) convolutional
code. As a future extension, it might be interesting to study the tradeoff between column distance and column span at the symbol level.
Chapter 7

Diversity Embedded Streaming Codes (DE-SCo)
7.1 Introduction
For the burst erasure channel, a class of systematic time-invariant convolutional codes, Maximally-Short (MS) codes, is proposed in [37, Chapter 8] and reviewed in Chapter 2. These codes are shown to
correct the maximum burst length B for a given rate R and delay T . However, these codes require
the knowledge of both B and T a priori. This forces a conservative design in practice since the code is
designed for the worst case burst length B. This incurs a lower code rate and/or a larger decoding delay
even when the channel is relatively good. Moreover, in some applications there is often a flexibility in
the tolerable delay. Techniques such as adaptive media playback [38] have been designed to tune the
play-out rate as a function of the received buffer size to deal with a temporary increase in delay. Hence
it is not desirable to fix T during the design stage either.
In this chapter, we introduce a class of streaming codes that do not commit a priori to a specific
delay. Instead they realize a delay that depends on the channel conditions. At an information theoretic
level, our setup extends the point-to-point link in [37] to a multicast model with one transmitter and
two receivers. The channel for each receiver introduces an erasure burst of length Bi and each receiver
can tolerate a delay of Ti for i = 1, 2. We investigate Diversity Embedded Streaming Codes (DE-SCo).
These codes modify a single user MS code such that the resulting code can support a second user,
whose channel introduces a larger erasure burst, without sacrificing the performance of the first user.
Our construction embeds new parity-checks in an MS code in a manner such that (a) no interference is
caused to the stronger (and low delay) user and (b) the weaker user can use some of the parity-checks
of the stronger user as side information to recover part of the source packets. DE-SCo constructions
outperform baseline schemes that simply concatenate the single user MS code for the two users. A
periodic erasure channel converse establishes that DE-SCo achieves the minimum possible delay for the
weaker receiver without sacrificing the performance of the stronger user.
The chapter is organized as follows. We provide the system model in Section 7.2 and discuss some
useful properties of the single user MS codes in Section 7.3. In Section 7.4, we propose a new construction,
Interference Avoidance Streaming Codes (IA-SCo), which is appealing due to the simple encoder and
decoder. In Section 7.5.1, we provide a converse which shows that IA-SCo is sub-optimal. We then
Figure 7.1: The source stream {s[t]} is causally mapped into an output stream {x[t]}. The channel of user i introduces an erasure burst of length Bi, and each user tolerates a delay of Ti, for i = 1, 2.
provide another construction, DE-SCo, which achieves the minimum possible delay at the second user in Section 7.5. Simulation results over statistical channel models in Section 7.6 show that significant gains
over single user codes can be achieved using DE-SCo. Finally, a conclusion is provided in Section 7.7.
7.2 System Model
We extend the model in Section 2.3 to a multiuser model with one transmitter and two receivers. The
transmitter encodes a stream of source packets {s[t]}t≥0 intended to be received at two receivers as
shown in Figure 7.1. The channel packets {x[t]}t≥0 are produced causally from the source stream,
x[t] = ft(s[0], . . . , s[t]).   (7.1)
The channel of receiver i introduces an erasure burst of length Bi, i.e., the channel output at receiver i
at time t is given by
yi[t] = { ⋆,      t ∈ [ji, ji + Bi − 1]
        { x[t],   otherwise                (7.2)
where i = 1, 2, ji ≥ 0 and ⋆ denotes an erased packet. Furthermore, user i tolerates a delay of Ti, i.e.,
there should exist a sequence of decoding functions γ1t(·) and γ2t(·) such that

ŝi[t] = γit(yi[0], yi[1], . . . , yi[t + Ti]),   i = 1, 2,   (7.3)

and Pr(ŝi[t] ≠ si[t]) = 0, ∀t ≥ 0.
The source stream is an i.i.d. sequence and we assume that each packet is uniformly sampled from a distribution ps(·) over the finite field F_q^k. The channel packets x[t] belong to some alphabet F_q^n. The rate of the multicast code is defined as the ratio of the size of each source packet to that of each channel packet, i.e., R = k/n.
Definition 7.1 (Diversity Embedded Streaming Codes (DE-SCo)). Consider the multicast model in
Figure 7.1 where the channels of the two receivers introduce an erasure burst of lengths B1 and B2
Figure 7.2: A vertical interleaving approach to construct a (2B, 2T ) MS code from a (B, T ) MS code.
respectively, with B1 < B2. A DE-SCo is a rate R = T1/(T1 + B1) Mu-SCo construction that achieves a delay T1 at receiver 1 and supports receiver 2 with the minimum possible delay T2.
Remark 7.1. Note that the considered model assumes a single erasure burst on each channel. However,
the proposed constructions correct multiple erasure bursts provided that each erasure pattern corresponds
to a burst erasure of maximum length B1 followed by a guard interval (with no erasures) of length T1,
or alternately corresponds to a burst of maximum length B2 followed by a guard interval of length T2.
7.3 Properties of MS Codes
In this section, we describe some additional properties of MS codes1 that will be useful in the proposed
code constructions.
7.3.1 Vertical Interleaving for (αB, αT ) MS
Suppose α > 1 is an integer and we need to construct a MS code with parameters (αB,αT ). The scheme
described in Section 2.6.1 requires us to split each source packet into αT symbols. However, we can take
advantage of the multiplicity factor α and simply construct the (αB,αT ) MS code from the (B, T ) MS
code via vertical interleaving of step α.
Figure 7.2 illustrates this approach for constructing a (2B, 2T ) MS from a (B, T ) MS. We split the
incoming source stream into two disjoint sub-streams; one consisting of the source packets at even time
slots and the other consisting of the source packets at odd time slots. We apply a (B, T ) MS on the
first sub-stream to produce channel packets at even time slots. Likewise, we apply a (B, T ) MS code on the second sub-stream to produce channel packets at odd time slots. Since a burst of length 2B introduces B erasures on either sub-stream, each of the (B, T ) codes suffices to recover from these erasures. Further, each erased packet is recovered with a delay of T packets on its individual sub-stream, which corresponds to an overall delay of 2T packets.
More generally we split each source packet into T symbols. The information vector bt is modified
from (2.22) as
bt = (s0[t], s1[t + α], . . . , sT−1[t + (T − 1)α]).   (7.4)
The resulting codeword ct of the LD-BEBC is then mapped to a diagonal codeword by introducing a
1We note that the Maximally-Short (MS) codes used in Chapters 7 and 8 are the diagonally interleaved LD-BEBC version proposed in [37] and reviewed in Section 2.6.1.
Table 7.1: An example illustrating a vertical interleaving of step size α = 2 used to construct a (2, 4)MS code from a (1, 2) MS code.
(a) (B, T ) = (1, 2) MS construction:

[i−1]               [i]                 [i+1]              [i+2]             [i+3]               [i+4]
s0[i−1]             s0[i]               s0[i+1]            s0[i+2]           s0[i+3]             s0[i+4]
s1[i−1]             s1[i]               s1[i+1]            s1[i+2]           s1[i+3]             s1[i+4]
s0[i−3] + s1[i−2]   s0[i−2] + s1[i−1]   s0[i−1] + s1[i]    s0[i] + s1[i+1]   s0[i+1] + s1[i+2]   s0[i+2] + s1[i+3]

(b) (B, T ) = (2, 4) MS construction:

[i−1]               [i]                 [i+1]              [i+2]             [i+3]               [i+4]
s0[i−1]             s0[i]               s0[i+1]            s0[i+2]           s0[i+3]             s0[i+4]
s1[i−1]             s1[i]               s1[i+1]            s1[i+2]           s1[i+3]             s1[i+4]
s0[i−5] + s1[i−3]   s0[i−4] + s1[i−2]   s0[i−3] + s1[i−1]   s0[i−2] + s1[i]   s0[i−1] + s1[i+1]   s0[i] + s1[i+2]
step size of α in (2.23), i.e.,
dt = (s0[t], s1[t+ α], . . . , sT−1[t+ (T − 1)α], p0[t+ Tα], . . . , pB−1[t+ (T +B − 1)α]). (7.5)
As in the case of α = 2, the decoding proceeds by splitting the source stream into α sub-streams and
applying the decoder for (B, T ) MS on each of the sub-streams. This guarantees that each packet is
recovered with a delay of αT on the original stream.
Table 7.1 illustrates an example that uses a (B, T ) = (1, 2) MS code to construct a (αB,αT ) = (2, 4)
MS code using the vertical interleaving property with step size α = 2.
Due to the vertical interleaving property, it suffices to consider (B, T ) MS codes where B and T are
relatively prime. However, a (αB,αT ) MS code obtained by vertically interleaving α copies of a (B, T )
MS code performs worse than a directly generated (αB, αT ) code when the channel introduces i.i.d. erasures. This is similar to the fact that interleaving two copies of a (n, k) MDS code does not generate a (2n, 2k) MDS code. In this chapter, we focus only on burst erasure channels and hence are not concerned about such performance degradation.
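The counting step behind vertical interleaving can be sketched directly: a burst of α · B consecutive channel packets lands on each of the α sub-streams as a burst of exactly B consecutive erasures, so each per-sub-stream (B, T ) MS code sees a correctable pattern. A minimal check:

```python
def substream_erasures(burst_start, burst_len, alpha):
    """Per-sub-stream indices erased by one burst on the interleaved stream."""
    erased = [[] for _ in range(alpha)]
    for t in range(burst_start, burst_start + burst_len):
        erased[t % alpha].append(t // alpha)
    return erased

alpha, B = 2, 3
for start in range(10):
    per_sub = substream_erasures(start, alpha * B, alpha)
    # each sub-stream sees a burst of exactly B consecutive erasures
    assert all(len(e) == B for e in per_sub)
    assert all(e == list(range(e[0], e[0] + B)) for e in per_sub)
print("each sub-stream sees a length-B burst")
```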
7.3.2 Memory of MS Codes
While the definition of a streaming code allows the channel input packet x[t] to depend on an arbitrary
number of source packets, the MS code construction limits the memory of packet x[t] to the previous T source packets, i.e.,
x[t] = f(s[t], s[t− 1], . . . , s[t− T ]). (7.6)
This follows from (2.24) which is reproduced below for convenience,
pj [t] = sj [t− T ] + hj(sB[t− j − T +B], . . . , sT−1[t− j − 1]), j = 0, . . . , B − 1, (7.7)
where hj(·) denotes a linear combination arising from the LD-BEBC code (2.21) when applied along the
main diagonal.
Chapter 7. Diversity Embedded Streaming Codes (DE-SCo) 93
7.3.3 Urgent and Non-Urgent symbols
In the construction of LD-BEBC codes we split the information vector b into urgent and non-urgent
symbols (2.20). The mapping of source symbols to information vector (2.22) then implies that the
symbols s0, . . . , sB−1 are the urgent symbols in the source stream whereas the symbols sB, . . . , sT−1 are
the non-urgent symbols. We will denote these by
u[t] = (s0[t], . . . , sB−1[t]),
v[t] = (sB[t], . . . , sT−1[t]). (7.8)
The urgent and non-urgent symbols are combined into a parity-check symbol as illustrated in (7.7). The
following observation is useful in the construction of DE-SCo.
Proposition 7.1. Suppose that the sequence of channel packets x[i − B], . . . ,x[i− 1] are erased by the
burst erasure channel. Then
1. All symbols in v[i−B], . . . ,v[i− 1] are obtained from the parity-checks p[i], . . . ,p[i+ T −B − 1].
2. The symbols in u[j] for i −B ≤ j < i are recovered at time j + T from parity-check p[j + T ] and
the previously recovered non-urgent symbols.
Proof. The proof follows from (7.6) and (7.7) and is included in Appendix E.1.
7.3.4 Off-Diagonal Interleaving
The constructions in Section 2.6.1 involve interleaving along the main diagonal of the source stream
(cf. (2.23)). An analogous construction of the (B, T ) code along the off diagonal results in
bt = (s0[t], s1[t− 1], . . . , sT−1[t− (T − 1)]) (7.9)
rt = (p0[t+ 1], . . . , pB−1[t+B]) (7.10)
dt = (bt, rt) = (sT−1[t− (T − 1)], . . . , s1[t− 1], s0[t], p0[t+ 1], . . . , pB−1[t+B]) (7.11)
and the parity-checks pj are given by
pj [t] = sT−j−1[t− T ] + hj(sT−B−1[t− j − T +B], . . . , s0[t− j − 1]), j = 0, . . . , B − 1, (7.12)
when applied along the opposite diagonal. Finally off-diagonal interleaving also satisfies Proposition 7.1
provided with appropriate modifications in the definitions of urgent and non-urgent symbols as follows,
u[t] = (sT−1[t], . . . , sT−B[t]),
v[t] = (sT−B−1[t], . . . , s0[t]). (7.13)
7.3.5 Source Expansion
We start by giving a definition of source expansion as follows.
Definition 7.2 (Source Expansion). A (p, r) expansion of the source stream s[·] consists of,
Table 7.2: A source expansion example with parameters (p, r) = (p, 3). Each source packet s[i] is expanded into 3 packets s[3i], s[3i + 1] and s[3i + 2]. A (3B, T ) MS code is then applied to s[·] to generate x[·]. The channel packet in the original stream is denoted by x[i] = (x[3i], x[3i + 1], x[3i + 2]). Shaded cells are erased channel packets while the remaining ones are perfectly received by the destination.
Source Stream: s[0] s[1] s[2] s[3]
Expanded Source Stream: s[0] s[1] s[2] s[3] s[4] s[5] s[6] s[7] s[8] s[9] s[10] s[11]
Expanded Channel Stream: x[0] x[1] x[2] x[3] x[4] x[5] x[6] x[7] x[8] x[9] x[10] x[11]
Channel Stream: x[0] x[1] x[2] x[3]
Expanded Stream Recovery: ⇓ s[0] s[1] s[2]
Original Stream Recovery: ⇓ s[0]
• Splitting each source packet s[i] ∈ F_q^{rp} into rp symbols, each in Fq, i.e.,

s[i] = (s0[i], s1[i], . . . , srp−1[i]).

• Rearranging the rp symbols into r groups, each with p symbols, as follows:

s[ri] = (s0[i], . . . , sp−1[i])

s[ri+ 1] = (sp[i], . . . , s2p−1[i])

...

s[ri + r − 1] = (s(r−1)p[i], . . . , srp−1[i]), (7.14)
where s[·] denotes the expanded source stream.
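Definition 7.2 is a plain reshape of each packet's symbol vector; a minimal sketch (the function name is ours):

```python
def expand(packet, p, r):
    """(p, r) source expansion (7.14): split a packet of r*p symbols into
    r expanded packets of p symbols each."""
    assert len(packet) == r * p
    return [packet[g * p:(g + 1) * p] for g in range(r)]

s_i = list(range(6))          # rp = 6 symbols, e.g. p = 2, r = 3
print(expand(s_i, p=2, r=3))  # -> [[0, 1], [2, 3], [4, 5]]
```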
The relation between the decoding capability of a MS code on the original stream and that on the
expanded stream is discussed in the following lemma.
Lemma 7.1. Consider a (p, r) expansion of the source stream s[·] to s[·]. A (rB, T ) MS code applied to s[·] is capable of recovering a burst of length B packets within a delay of ⌈T/r⌉ on the original stream s[·].
Proof. Suppose that a (rB, T ) MS code is applied to s[·] to generate the channel packets x[·]. These
packets are multiplexed together, and the input to the channel at time i is

x[i] = (x[ri], x[ri + 1], . . . , x[ri + r − 1]). (7.15)

Suppose the channel erases a burst of length B on the original stream in the interval [i, i+B−1].
By (7.15) this corresponds to an erasure burst in the interval [ri, r(i + B) − 1] on the expanded
stream x[·]. This is a total of rB erasures in a burst and is thus recoverable using the (rB, T ) MS code
within a delay of T on the expanded stream. Each source packet s[j] for j ∈ {i, . . . , i+B − 1} is recovered
once the last corresponding expanded source packet, s[rj + r − 1], is recovered. Using the (rB, T ) MS
code, this packet is recovered by time rj + r − 1 + T . This is equivalent to time ⌊(rj + r − 1 + T )/r⌋ on the
original stream (cf. (7.15)). Hence, s[j] is recovered with a delay of ⌊(T + r − 1)/r⌋ = ⌈T/r⌉ and the lemma
follows.
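The last step of the proof uses the integer identity ⌊(T + r − 1)/r⌋ = ⌈T/r⌉, which can be spot-checked numerically (a verification aid of ours, not part of the construction):

```python
import math

# Spot-check the identity floor((T + r - 1)/r) == ceil(T/r) used in the last
# step of the proof of Lemma 7.1, for integer T, r >= 1.
for T in range(1, 50):
    for r in range(1, 10):
        assert (T + r - 1) // r == math.ceil(T / r)
print("identity holds on the tested range")
```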
Example 7.1. Table 7.2 illustrates an example of source expansion with parameters r = 3, B = 2 and
T = 7. A burst erasure of length B = 2 that erases x[0] and x[1] on the original stream corresponds
to rB = 6 erased channel packets, x[0], . . . , x[5], on the expanded stream. Hence, the (rB, T ) = (6, 7)
MS code is capable of recovering the erased source packets within a delay of T = 7. In particular, the
source packets s[0], s[1] and s[2] belonging to s[0] are recovered at times 7, 8 and 9 respectively on the
expanded stream. Hence, s[0] is recovered at time 3 on the original stream, which is equivalent to a delay
of ⌈T/r⌉ = 3.
7.4 Interference Avoidance Streaming Codes (IA-SCo)
In this section, we propose a simple code construction called Interference Avoidance Streaming Code
(IA-SCo). The proposed scheme involves starting with two single user MS codes, C1 : (B1, T1) and
C2 : (αB1, αT1), delaying the parity-checks of C2 by T1 units and then directly combining them with the
parity-checks of C1 such that they do not interfere with one another2. The corresponding achievable T2
for such scheme is given as follows.
Proposition 7.2 (Interference-Avoidance Streaming Code (IA-SCo)). An IA-SCo construction achieves
a rate of C1 = T1/(T1 +B1) when B2 = αB1 (with α > 1 an integer) and

T2 ≥ αT1 + T1. (7.16)
We first provide a simple example to illustrate the main idea behind IA-SCo and then provide the
general construction.
7.4.1 IA-SCo - Example
Consider an example with the first and second users experiencing burst erasures of length B1 = 1 and
B2 = 2 packets respectively (i.e., α = 2) and the corresponding delay for the first user is T1 = 2. From
Proposition 7.2, we have that T2 = 6. Table 7.3a illustrates the IA-SCo construction. For comparison,
the DE-SCo construction achieving the minimum T2 = 5 is provided in Table 7.3b; this construction
will be discussed in Section 7.5.
Encoder
The construction of the IA-SCo is as follows.
• Split each source packet s[i] ∈ F_q^2 into T1 = 2 symbols of equal size, i.e., s[i] = (s0[i], s1[i]) with
s0[i], s1[i] ∈ Fq.
• Apply a (B1, T1) = (1, 2) MS code to the source packets s[i] to generate B1 = 1 parity-check
symbol, pI[i] = s0[i− 2]⊕ s1[i− 1].
• Apply a (αB1, αT1) = (2, 4) MS code to the source packets s[i] with interleaving step of α = 2 to
generate the parity-check symbol pII[i] = s0[i− 4]⊕ s1[i− 2].
2Emin Martinian suggested the idea of shifting the parity-checks of one MS code and then combining it with another, which led to the IA-SCo code in Proposition 7.2.
Table 7.3: Rate 2/3 code constructions that satisfy user 1 with (B1, T1) = (1, 2) and user 2 with B2 = 2. A delay of T2 = αT + T = 6 is achieved using IA-SCo, while the minimum T2 = αT + B = 5 is achieved using DE-SCo, which will be discussed in Section 7.5.
(a) IA-SCo Construction for (B1, T1) = (1, 2) and (B2, T2) = (2, 6)
[i− 1] [i] [i+ 1] [i+ 2] [i+ 3] [i+ 4]
s0[i− 1] s0[i] s0[i+ 1] s0[i + 2] s0[i+ 3] s0[i+ 4]
s1[i− 1] s1[i] s1[i+ 1] s1[i + 2] s1[i+ 3] s1[i+ 4]
s0[i− 3]⊕ s1[i− 2] s0[i − 2]⊕ s1[i− 1] s0[i− 1]⊕ s1[i] s0[i]⊕ s1[i+ 1] s0[i+ 1]⊕ s1[i+ 2] s0[i+ 2]⊕ s1[i + 3]
⊕ ⊕ ⊕ ⊕ ⊕ ⊕
s0[i− 7]⊕ s1[i− 5]  s0[i− 6]⊕ s1[i− 4]  s0[i− 5]⊕ s1[i− 3]  s0[i− 4]⊕ s1[i− 2]  s0[i− 3]⊕ s1[i− 1]  s0[i− 2]⊕ s1[i]
(b) DE-SCo Construction for (B1, T1) = (1, 2) and (B2, T2) = (2, 5)
[i− 1] [i] [i+ 1] [i+ 2] [i+ 3] [i+ 4]
s0[i− 1] s0[i] s0[i+ 1] s0[i + 2] s0[i+ 3] s0[i+ 4]
s1[i− 1] s1[i] s1[i+ 1] s1[i + 2] s1[i+ 3] s1[i+ 4]
s0[i− 3]⊕ s1[i− 2] s0[i − 2]⊕ s1[i− 1] s0[i− 1]⊕ s1[i] s0[i]⊕ s1[i+ 1] s0[i+ 1]⊕ s1[i+ 2] s0[i+ 2]⊕ s1[i + 3]
⊕ ⊕ ⊕ ⊕ ⊕ ⊕
s1[i− 6]⊕ s0[i− 5]  s1[i− 5]⊕ s0[i− 4]  s1[i− 4]⊕ s0[i− 3]  s1[i− 3]⊕ s0[i− 2]  s1[i− 2]⊕ s0[i− 1]  s1[i− 1]⊕ s0[i]
• Combine the two streams of parity-checks after shifting pII[i] by T1 = 2 units, i.e., q[i] = pI[i] ⊕ pII[i − T1]. The parity-check stream q[·] is then concatenated with the source packets to generate
the channel packet, x[i] = (s0[i], s1[i], q[i]), as shown in Table 7.3a.
Decoder
When an erasure of one packet occurs, say at t = i − 1 for user 1, it needs to recover s[i − 1] by time
t = i + 1. Note that user 1 can cancel the second row of parity-checks pII[·], which combines unerased
symbols. For user 2, suppose that a burst erasure of length B2 = 2 packets occurs at times t = i− 2, i− 1.
User 2 simply ignores the parity-checks q[i] and q[i + 1]. Starting from t = i + 2, the parity-checks pI[·]
are functions of the packets s[i] and later and do not involve the erased packets s[i− 1] and s[i− 2]. Therefore,
we can subtract pI[t] for t ∈ [i + 2, i + 5] from q[t] and recover pII[t − 2], which suffices to recover the
missing packets.
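The decoding steps above can be traced numerically. The following sketch is our own instantiation with binary symbols and the Table 7.3a parities pI[i] = s0[i−2] ⊕ s1[i−1], pII[i] = s0[i−4] ⊕ s1[i−2] and q[i] = pI[i] ⊕ pII[i−2]; it erases a burst of B2 = 2 packets and recovers them exactly as described, within delay T2 = 6.

```python
import random

# Toy instantiation (ours): binary symbols, the Table 7.3a parities
#   pI[i]  = s0[i-2] ^ s1[i-1]      (the (1, 2) MS code)
#   pII[i] = s0[i-4] ^ s1[i-2]      (the (2, 4) MS code, step alpha = 2)
#   q[i]   = pI[i] ^ pII[i-2]       (combined after a shift of T1 = 2)
random.seed(7)
n = 30
s0 = [random.getrandbits(1) for _ in range(n)]
s1 = [random.getrandbits(1) for _ in range(n)]
pI = lambda i: s0[i - 2] ^ s1[i - 1]
pII = lambda i: s0[i - 4] ^ s1[i - 2]
q = lambda i: pI(i) ^ pII(i - 2)

i, erased = 10, {8, 9}            # user 2: burst of B2 = 2 at t = i-2, i-1
# From t = i+2 on, pI[t] involves only unerased packets, so subtracting it
# from the received q[t] exposes pII[t-2].
exposed = {t - 2: q(t) ^ pI(t) for t in range(i + 2, i + 6)}
# pII[j] = s0[j-4] ^ s1[j-2]: solve for the erased symbols.
r_s1 = {j - 2: exposed[j] ^ s0[j - 4] for j in (i, i + 1)}      # times i+2, i+3
r_s0 = {j - 4: exposed[j] ^ s1[j - 2] for j in (i + 2, i + 3)}  # times i+4, i+5

assert r_s0 == {8: s0[8], 9: s0[9]} and r_s1 == {8: s1[8], 9: s1[9]}
print("user 2 recovered s[8], s[9] within delay T2 = 6")
```

The last recovered symbol, s0[i−1], arrives at time i + 5, which matches the claimed delay T2 = αT1 + T1 = 6.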
7.4.2 General Construction
The main idea behind the general construction is to start with two single user codes for the two users,
(B1, T1) and (αB1, αT1) and delay the parity-checks of the second by T1 so that they can be combined
with the parity-checks of user 1 without causing any interference to the two users.
Throughout our discussion we let T1 = T and B1 = B and B2 = αB and T2 = αT + T .
Encoder
The encoding steps are as follows.
• Split each source packet s[i] ∈ FTq into T symbols, i.e., s[i] = (s0[i], . . . , sT−1[i]).
• Apply a C1 = (B, T ) MS code of rate C1 to the source symbols diagonally to produce B parity-check
symbols pI[i] = (pI0[i], . . . , pIB−1[i]) where,
pIj [i] = sj [i− T ] + hj(sB[i− (j + T −B)], . . . , sT−1[i− (j + 1)]), j = 0, . . . , B − 1. (7.17)
• Apply a C2 = (αB,αT ) MS code of rate C1 to the source symbols diagonally with interleaving step
of α (according to the vertical interleaving property in Section 7.3.1) to produce B parity-checks
pII[i] = (pII0 [i], . . . , pIIB−1[i]) where,
pIIj [i] = sj [i− αT ] + hj(sB[i− α(j + T −B)], . . . , sT−1[i− (j + 1)α]), j = 0, . . . , B − 1. (7.18)
• Combine the two streams of parity-check packets after applying a shift of T to the second stream
pII[·]; the resulting non-interfering streams are combined as q[i] = pI[i] + pII[i − T ].
The combined parity-check packets are then concatenated to the source stream to generate the
channel packets, i.e., x[i] = (s[i],q[i]).
Clearly the rate of the constructed code equals C1 = T1/(T1 + B1). We need to show that user 1
and user 2 can recover from erasure bursts of B and B2 = αB within delays of T and T2 = αT + T
respectively.
Decoder of User 1
Assume that the packets at times i, . . . , i+B − 1 are erased on user 1's channel. By virtue of C1, packet
s[i+k] for k = 0, 1, . . . , B− 1 can be recovered by time i+k+T using the parity-checks pI[i+B], . . . ,pI[i+
k + T ]. Thus it suffices to show that we can recover pI[i+ k] from q[i+ k] for k = B, . . . , B + T − 1.
First note that the pI[i + k] for k = B, . . . , T can be directly recovered from q[i + k] since the
interfering parity-checks pII[·] only consist of source packets before time i. Indeed the parity-check at
k = T is
q[i + T ] = pI[i+ T ] + pII[i]
and from (7.18) the symbols in pII[i] only depend on the source packets before time i. Thus upon
receiving q[i+ T ] user 1 can recover the erased packet s[i]. Furthermore, we can also compute pII[i+1]
which only consists of source packets up to time i and upon receiving q[i+T+1] can compute pI[i+T+1]
from
pI[i+ T + 1] = q[i + T + 1]− pII[i+ 1].
In turn it recovers s[i+ 1]. Continuing this process it can recover all the erased packets s[i+ k] by time
i+ T + k.
Decoder of User 2
Suppose that the packets at times i, . . . , i + B2 − 1 are erased on user 2's channel. By virtue of C2,
packet s[i+ k] (for k = 0, 1, . . . , B2 − 1) can be recovered by time i+ k+ αT using the parity-checks pII[i+
B2], . . . ,pII[i+k+αT ]. To establish (7.16), it suffices to show that the packets pII[i+B2], . . . ,pII[i+k+αT ]
can be recovered from the packets q[i+ T +B2], . . . ,q[i+ k + αT + T ]. Indeed, since

q[i+ B2 + T ] = pI[i+B2 + T ] + pII[i+B2],
it suffices to observe that user 2 can cancel pI[i + B2 + T ] upon receiving q[i + B2 + T ]. Indeed, it
follows immediately from (7.17) that pI[i+B2 + T ] involves only source packets at time i+B2 or later (the
construction limits the memory in the channel input stream to T packets, as discussed in Section 7.3.2).
The packets pI[·] after this time also depend only on s[·] at time i +B2 or later.
7.5 Diversity Embedded Streaming Codes (DE-SCo)
In this section, we characterize the minimum possible delay T ⋆2 at the second user for a given B1, T1
and B2 at a rate of C1. We show that IA-SCo is sub-optimal when compared to this value. We then
describe the DE-SCo construction, which achieves T ⋆2 . We rely on several properties of the single-user
MS code explained in Section 7.3. We note that while IA-SCo is not delay-optimal, its construction
is much simpler than DE-SCo and perhaps easier to generalize when there are more than two receivers.
Theorem 7.1 (Diversity-Embedded Streaming Codes (DE-SCo)). Let (B1, T1) = (B, T ) and suppose
B2 = αB where α > 1. The minimum possible delay for any code of rate R = T/(T +B) is

T ⋆2 = αT +B, (7.19)
and is achieved by the DE-SCo construction.
The proof of Theorem 7.1 is divided into two main parts, the converse is provided in Section 7.5.1
whereas the code construction is provided in Sections 7.5.3 and 7.5.4.
7.5.1 Converse Proof
We first establish the converse to Theorem 7.1. Consider any code that achieves {(B, T ), (B2, T2)} with
T2 < T ⋆2 . We show that the rate of such a code is strictly less than R = T/(T +B). To establish this, we
separately consider the case when T +B ≤ T2 < αT +B and the case when T2 < T +B. Let us assume
the first case.
As shown in Figure 7.3, construct a periodic burst erasure channel in which every period of TP1 =
(α−1)B+T2 packets consists of a sequence of αB erasures followed by a sequence of non-erased packets.
Consider one period of the proposed periodic erasure channel with a burst erasure of length B2 = αB
from time t = 0, 1, . . . , αB−1 followed by a period of non-erasures for t = αB, . . . , TP1−1. The decoding
steps are as follows,
• For time t = 0, . . . , TP1 − 1 the channel behaves identically to a burst erasure channel with αB
erasures. The first (α−1)B packets, s[0], . . . , s[(α−1)B−1], can be recovered using the decoder of
user 2 with a delay of T2, i.e., by time TP1−1, and hence the channel packets x[0], . . . ,x[(α−1)B−1]
can also be recovered via (7.1), as shown in Step (1) in Figure 7.3.
• Note that since the channel packets x[0], . . . ,x[(α − 1)B − 1] have been recovered, the resulting
channel between times t = 0, . . . , αB − 1 is identical to a burst erasure channel with B erasures
between time t = (α− 1)B, . . . , αB− 1. The decoder of user 1 applied to this channel recovers the
corresponding erased source packets by time αB−1+T ≤ TP1 −1, which follows since T2 ≥ T +B
as shown in Step (2) in Figure 7.3.
Figure 7.3: One period illustration of the Periodic Erasure Channel for T + B ≤ T2 < αT + B. White squares denote unerased packets. Black and grey squares denote erased packets to be recovered using C1 and C2 respectively.
Thus all the erased channel packets in the first period are recovered by time TP1 − 1. Since the channel
introduces periodic bursts, the same argument can be repeated across all periods. Since each period has
length (α− 1)B + T2 and contains αB erasures, the capacity is upper bounded by 1 − αB/((α− 1)B + T2),
which is less than R = T/(T +B) if T2 < T ⋆2 .
For the other case, with T2 < T + B, shown in Figure 7.4, the same steps follow except that the
periodic erasure channel has a period of TP2 = T + αB packets. Each period consists of a burst erasure
of length αB from time t = 0, 1, . . . , αB− 1 followed by a period of non-erasures for t = αB, . . . , TP2 − 1.
The decoder of user 2 recovers the (α − 1)B erasures at times t = 0, 1, . . . , (α − 1)B − 1 with a delay of
T2 (i.e., by time < TP2 − 1) as T2 < T +B. Furthermore, the decoder of user 1 recovers the B erasures
at times t = (α− 1)B, . . . , αB − 1 with a delay of T packets (i.e., by time αB + T − 1 = TP2 − 1). Now,
since each period has length αB + T with only T available packets, the rate is T/(T + αB), which is strictly
smaller than R as α > 1, and the converse follows.
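The two period-rate bounds in the converse can be checked with exact rational arithmetic; a sketch for the (B, T ) = (2, 5), α = 2 running example (the parameter choices and function name are ours):

```python
from fractions import Fraction as F

def period_rate(B, T2, a):
    """Unerased fraction of one period: (alpha-1)B + T2 packets, alpha*B erased."""
    return 1 - F(a * B, (a - 1) * B + T2)

B, T, a = 2, 5, 2                       # the (B, T) = (2, 5), alpha = 2 example
R = F(T, T + B)
T2_star = a * T + B                     # = 12
assert period_rate(B, T2_star, a) == R  # the bound is tight at T2 = aT + B
for T2 in range(T + B, T2_star):        # case T + B <= T2 < aT + B
    assert period_rate(B, T2, a) < R
assert F(T, T + a * B) < R              # case T2 < T + B, period T + aB
print("both period bounds fall below R =", R, "whenever T2 <", T2_star)
```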
We start with an example of the DE-SCo construction which achieves the minimum possible value of
T ⋆2 in (7.19). We then discuss the general case for integer and non-integer α in Sections 7.5.3 and 7.5.4
respectively.
Figure 7.4: One period illustration of the Periodic Erasure Channel for T2 < T + B. White squares denote unerased packets. Black and grey squares denote erased packets to be recovered using C1 and C2 respectively.
7.5.2 DE-SCo - Example
Figure 7.5 illustrates the DE-SCo {(B, T ), (B2, T2)} = {(2, 5), (4, 12)} construction, i.e., α = B2/B1 = 2.
Each column represents one time index between [−4, 9] shown in the top row of the table. A simpler
example with parameters {(2, 3), (4, 8)} is provided in Appendix E.2 which will be useful in the following
chapter.
Encoder
The encoding steps are as follows,
• Split each source packet into five symbols (a[.], b[.], c[.], d[.], e[.]), each occupying one row. The first
T −B = 3 of these, a[.], b[.] and c[.], are non-urgent symbols and the rest, d[.] and e[.], are urgent.
The next two rows denote the parity-check symbols.
• Apply a C1 = (B, T ) = (2, 5) MS code along the diagonal dIi (cf. (7.7)), to generate the parity-check
symbols p[.] and q[.] where,
p[i] = a[i− 5] + c[i− 3] + e[i− 1]
q[i] = b[i− 5] + d[i− 3] + e[i− 2]. (7.20)
• Apply a C2 = ((α− 1)B, (α− 1)T ) = (2, 5) MS code along the opposite diagonal dIIi (cf. (7.23)) to
generate the parity-checks y[.] and z[.] in the shaded two top rows where,
y[i] = e[i− 5] + c[i− 3] + a[i− 1]
z[i] = d[i − 5] + b[i− 3] + a[i− 2]. (7.21)
• Shift these parity-checks by T +B = 7 slots and combine them with the corresponding parity-checks of
C1, as shown in Figure 7.5. The channel packet at time i is given by x[i] = (a[i], b[i], c[i], d[i], e[i], p[i]+
y[i− 7], q[i] + z[i− 7]).
Decoder
The decoder for user 1 is similar to the case of IA-SCo and will be omitted. For user 2, we assume that
a burst erasure of length B2 = 4 occurs in the interval [−4,−1] and illustrate the decoding steps as
follows.
(1) Recover {pII[t−∆]}t≥T :
By construction of C1 all the parity-checks pI[t] for t ≥ 5 do not involve the erased symbols. In
particular the parity-checks marked by p[.] and q[.] at t ≥ 5 do not involve source symbols before
t = 0 (cf. (7.20)) and hence these can be cancelled to recover the parity-checks y[.] and z[.] for
t ≥ 5.
(2) Upper-left triangle:
The parity-checks in step (1) enable us to recover the non-urgent erased symbols in dII−3 =
(e[−7], d[−6], c[−5], b[−4], a[−3]) and dII−4 = (e[−8], d[−7], c[−6], b[−5], a[−4]) which are a[−4],
a[−3] and b[−4], i.e., the upper-left triangle symbols. We use the corresponding diagonal code-
words, dII−3 = (e[−7], d[−6], c[−5], b[−4], a[−3], y[−2], z[−1]) to recover a[−3] and b[−4] from the
parity-checks y[−2] and z[−1] and dII−4 = (e[−8], d[−7], c[−6], b[−5], a[−4], y[−3], z[−2]) to recover
a[−4] from the parity-check z[−2]. We note that a[−4] is recovered from z[−2] at t = 5 and not
from y[−3] which appears at t = 4 and is not recovered in step (1). More generally, as we note
later, the parity-checks at i+ T and later suffice to recover symbols in this step.
(3) Recover pI[t] for 0 ≤ t ≤ T − 1:

The symbols recovered in step (2) suffice to recover all the parity-checks pI[t] for 0 ≤ t ≤ 4. Note that
the only relevant interfering parity-check from pII[·] in this interval is y[−3] = e[−8] + c[−6] + a[−4].
Since its only erased symbol a[−4] has already been recovered in step (2), these parity-checks can be
cancelled. More generally, as we show later, our construction guarantees that the interfering
parity-checks pII[·] in the interval 0 ≤ t ≤ T − 1 only involve erased symbols from the upper-left
triangle, which are decoded in step (2).
Figure 7.5: A {(2, 5)− (4, 12)} DE-SCo code construction is illustrated in the above figure. The parity-check symbols p[t] and q[t] of a (2, 5) MS code along the main diagonal are added to the parity-check symbols y[t] and z[t] of another (2, 5) MS code applied along the opposite diagonal and shifted by T +B = 7 (i.e., the two parity-checks at time instant t are p[t] + y[t− 7] and q[t] + z[t− 7]). Shaded columns are erased channel packets while the remaining ones are perfectly received by the destination.
(4) Upper-right triangle:

Since the diagonals dI−2 = (a[−2], b[−1], c[0], d[1], e[2]) and dI−1 = (a[−1], b[0], c[1], d[2], e[3]) involve
two or fewer erasures, we can now recover these symbols using the parity-checks of code C1 recovered
in the previous step. In particular, the upper-right triangle source symbols a[−2], b[−1] and a[−1]
can be recovered from p[3], q[4] and p[4] respectively.
(5) Recover non-urgent symbols recursively:

The remaining non-urgent symbols need to be recovered in a recursive manner. Note that dI−3 =
(a[−3], b[−2], c[−1], d[0], e[1]) has three erased symbols. However, the first symbol a[−3] also belongs
to dII−3 and has already been recovered in step (2). The remaining two symbols, b[−2] and
c[−1], can be recovered from the two available parity-checks of code C1 in dI−3 = (a[−3], b[−2], c[−1], d[0], e[1], p[2], q[3]),
i.e., from p[2] and q[3]. Similarly, dII−2 = (e[−6], d[−5], c[−4], b[−3], a[−2]) also has three erasures,
but the upper-most symbol a[−2] also belongs to dI−2, which has been recovered in step (4). Hence
the remaining erased symbols in dII−2, c[−4] and b[−3], can be recovered using the parity-checks
y[−1] and z[0] in dII−2 = (e[−6], d[−5], c[−4], b[−3], a[−2], y[−1], z[0]).

At this stage it only remains to recover the two remaining non-urgent symbols c[−3] and c[−2]
by time t = 7. These are recovered in the next step of the recursion. Note that the symbols
c[−2] and d[−1] are the only remaining erased symbols on the diagonal dI−4 and are recovered
from the parity-checks p[1] and q[2]. Likewise, c[−3] and d[−4] are the only remaining erased symbols
on the diagonal dII−1 and can be recovered using the parity-checks y[0] and z[1]. Since c[−3] is a
non-urgent symbol, from Proposition 7.1 it is recovered before d[−4] using only y[0]. Thus both
c[−3] and c[−2] are recovered by t = 7.
(6) Recover urgent symbols:

After recovering all the non-urgent symbols in the previous steps, we can directly recover the urgent
ones (i.e., the bottom two rows) using the parity-checks pII[·] received at times 8 ≤ t ≤ 11.
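The six decoding steps above can be cross-checked mechanically. The following sketch is our own verification aid, assuming binary symbols so that the hj(·) combinations reduce to the plain sums in (7.20) and (7.21): for each combined parity-check received at time t ≥ 0 it records which erased symbols the check involves, then verifies by GF(2) elimination that every symbol of the erased packet s[j] is determined by the checks received up to its deadline j + T⋆2 = j + 12.

```python
# Mechanical cross-check of the {(2,5), (4,12)} example (ours): binary symbols,
# h_j(.) taken as the plain sums in (7.20)-(7.21).
ROWS = "abcde"
ERASED = range(-4, 0)                       # burst B2 = 4 at times -4..-1
idx = {(r, t): 5 * (t + 4) + k for t in ERASED for k, r in enumerate(ROWS)}

def mask(terms):
    m = 0
    for r, t in terms:
        if (r, t) in idx:                   # keep only erased symbols
            m |= 1 << idx[(r, t)]
    return m

def checks(t):
    """Unknown-masks of the two combined checks p[t] + y[t-7], q[t] + z[t-7]."""
    return [mask([("a", t - 5), ("c", t - 3), ("e", t - 1),       # p[t]
                  ("e", t - 12), ("c", t - 10), ("a", t - 8)]),   # y[t-7]
            mask([("b", t - 5), ("d", t - 3), ("e", t - 2),       # q[t]
                  ("d", t - 12), ("b", t - 10), ("a", t - 9)])]   # z[t-7]

def add_row(row, piv):                      # GF(2) Gaussian elimination
    while row:
        b = row.bit_length() - 1
        if b not in piv:
            piv[b] = row
            return
        row ^= piv[b]

def in_span(vec, piv):
    while vec:
        b = vec.bit_length() - 1
        if b not in piv:
            return False
        vec ^= piv[b]
    return True

for j in ERASED:
    piv = {}
    for t in range(0, j + 13):              # checks received by time j + 12
        for row in checks(t):
            add_row(row, piv)
    # each of the 5 symbols of s[j] lies in the span of the received checks
    assert all(in_span(1 << idx[(r, j)], piv) for r in ROWS), j
print("every erased packet s[j] is determined by its deadline j + 12")
```

Membership of a unit vector in the row space of the restricted check matrix is exactly the condition that the corresponding symbol is uniquely determined once the known symbols are moved to the right-hand side.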
7.5.3 DE-SCo Construction for Integer α
We now study the general construction for any integer α > 1.
Encoder
The encoding steps are as follows,
• Source Splitting: Split each source packet s[i] into T symbols, i.e., s[i] = (s0[i], . . . , sT−1[i]).
• Construction of C1: Apply a C1 = (B, T ) MS code by combining the source symbols along the
main diagonal and producing B parity-check symbols pI = (pI0[i], . . . , pIB−1[i]) at each time.
In other words, a (T + B, T ) LD-BEBC code is applied along the diagonal bIi = (s0[i], s1[i + 1], . . . ,
sT−1[i + T − 1]), constructing the diagonal codeword dIi = (s0[i], . . . , sT−1[i + T − 1], pI0[i +
T ], . . . , pIB−1[i+ T +B − 1]) where, from (7.7),
pIk[i] = sk[i− T ] + hk(sB[i− k − T +B], . . . , sT−1[i− k − 1]), k = 0, . . . , B − 1. (7.22)
• Construction of C2: Apply a C2 = ((α−1)B, (α−1)T ) MS code by combining the source symbols
along the opposite diagonal with an interleaving step of size ℓ = (α − 1) and constructing a total
of B parity-checks pII[i] = (pII0 [i], . . . , pIIB−1[i]).
In other words, a (T + B, T ) LD-BEBC code is applied along the diagonal bIIi = (sT−1[i −
ℓ(T − 1)], sT−2[i − ℓ(T − 2)], . . . , s0[i]) to construct a diagonal codeword dIIi = (sT−1[i − ℓ(T −
1)], . . . , s0[i], pII0 [i+ ℓ], . . . , pIIB−1[i+ ℓB]) where
pIIk [i] = sT−k−1[i− ℓT ] + hk(sT−B−1[i− ℓ(k + T −B)], . . . , s0[i− ℓ(k + 1)]), k = 0, . . . , B − 1.
(7.23)
• Combination of Parity-Checks of C1 and C2: Combine the two streams of parity-check packets
pI[·] and pII[·] after introducing a shift of ∆ = T+B in the later stream, i.e., q[i] = pI[i]+pII[i−∆].
The output packet at time i is x[i] = (s[i],q[i]).
Throughout our discussion, we refer to the non-urgent and urgent packets of code C2; these sets are
as stated in (7.13). Also note that since there are B parity-check symbols for every T source symbols,
the rate of the code is T/(T +B).
Decoder of User 1
Suppose that the packets at time i − B, . . . , i − 1 are erased by the channel of user 1. User 1 first
recovers parity-checks pI[i], . . . ,pI[i + T − 1] from q[i], . . . ,q[i + T − 1] by cancelling the parity-checks
pII[·] that combine with pI[·] in this period. Indeed at time i + T − 1 the interfering parity-check is
pII[i + T − ∆ − 1] = pII[i − B − 1], which clearly depends on the (non-erased) source symbols before
time i −B. All parity-checks pII[·] before this time are also non-interfering. The erased source packets
can be recovered from pI[i], . . . ,pI[i+ T − 1] by virtue of code C1.
Decoder of User 2
Suppose that the packets at times i − αB, . . . , i − 1 are erased for receiver 2. Let T ≜ i − αB + T ⋆2 .
We use the parity-checks at times i ≤ t ≤ T − 1 to recover the non-urgent symbols {v[τ ]}, τ = i− αB, . . . , i− 1,
in the first five steps, where v[τ ] = (s0[τ ], . . . , sT−B−1[τ ]) denotes the set of non-urgent symbols for C2.
In the last step, we use the parity-checks at times t ≥ T to recover the set of urgent symbols for C2,
u[τ ] = (sT−B[τ ], . . . , sT−1[τ ]).
(1) Recover {pII[t−∆]}t≥i+T :
For t ≥ i+T , the decoder recovers the parity-check pII[t−∆] from q[t] by cancelling the parity-check
pI[t]: via (7.7), the memory in C1 is limited to the previous T packets, so the parity-check packets
{pI[t]}t≥i+T depend only on (non-erased) source packets at time i or later and can therefore be
cancelled.
(2) Upper-left triangle:
In this step, the decoder recovers the non-urgent symbols in dIIi−αB, . . . , dIIi−B−1 using the parity-check
packets {pII[t−∆]} for i+ T ≤ t ≤ T − 1. Clearly these vectors are affected by at most (α−1)B erasures
between times i−αB, . . . , i−B−1. Furthermore, the corresponding parity-checks {pII[t−∆]}t≥i+T =
{pII[t]}t≥i−B have been recovered in step (1). By construction, C2 can recover the erased source
symbols in the stated diagonal vectors. Furthermore, by applying Proposition 7.1, the non-urgent
symbols are recovered from the first (α − 1)(T − B) parity-check columns. Taking into account
the shift of ∆ = T + B, it follows that all the non-urgent source symbols are recovered by time
i+ T + (α− 1)(T −B)− 1 = T − 1.
(3) Recover pI[t] for i ≤ t ≤ i+ T − 1:
We consider the last column of parity-checks, q[i + T − 1] = pI[i + T − 1] + pII[i − B − 1]. From
(7.23), for k = 0, 1, . . . , B − 1 we have,
pIIk [i−B − 1] = sT−k−1[i−B − 1− (α − 1)T ]
+ hk(sT−B−1[i−B − 1− (α− 1)(T −B + k)], . . . , s0[i−B − 1− (k + 1)(α− 1)]).
Thus the only urgent symbols involved in pII[i−B−1] are at time t = i−B−1−(α−1)T , which are
unerased. Moreover, the non-urgent symbols involved are those of dIIi−B−(α−1)(k+1)−1 which have
already been recovered in step (2). Thus, it follows that we can reconstruct pII[i−B−1]. A similar
argument can be used to show that we can recover all the columns pII[i−B−T ], . . . ,pII[i−B−1],
cancel their effect on q[i], . . . ,q[i + T − 1] and recover pI[i], . . . ,pI[i+ T − 1].
(4) Upper-right triangle:
In this step, the decoder recovers the non-urgent symbols in dIi−1, . . . ,d
Ii−B using the parity-
checks pI[i], . . . ,pI[i+ T − 1]. Step (4) follows in a similar way to step (2). The diagonal vectors
dIi−B, . . . ,d
Ii−1 spanning the upper-right triangle of the erased source symbols are affected by a
burst erasure of length B between times i −B, . . . , i − 1. Furthermore, the corresponding parity-
checks {pI[t]}i≤t<i+T recovered earlier are capable of recovering the erased source symbols in these
diagonal vectors by at most time i+ T − 1 < T .
(5) Recover non-urgent symbols recursively:
For each k ∈ {1, . . . , T −B − 1} recursively recover the remaining non-urgent symbols as follows:
(Ind. 1) Recover the non-urgent symbols in dIi−B−k using the non-urgent symbols in
{dIIj }j≤i+(k−1)(α−1)−B−1 and the parity-checks pI[t] in the interval t ∈ [i, i+ T ).
(Ind. 2) Recover the non-urgent symbols in dIIi−B+(k−1)(α−1), . . . ,d
IIi−B+k(α−1)−1 using
{dIj}j≥i−B−(k−1) and the parity-checks pII[t] in the interval t ∈ [i+ T, T ).
Once this recursion terminates, all the non-urgent symbols {v[τ ]}i−1τ=i−αB are recovered by time
T − 1. We establish the claim of the recursion using induction in Appendix E.3.
We then show that all the non-urgent erased source symbols are recovered at k = T − B − 1.
Because the recovery proceeds along the diagonals, it suffices to show that the lower-left-most non-urgent
symbol in the region i−B, . . . , i− 1, i.e., sT−B−1[i−B], is an element of dIi−B−k = dIi−T+1, which
is clear from the definition of dIi at i− T + 1 as,
dIi−T+1 = (s0[i− T + 1], . . . , sT−B−1[i−B], . . . , sT−1[i]).
Similarly, we need to show that dIIi−B+k(α−1)−1 = dII
i−B+(T−B−1)(α−1)−1 contains the lower right
most non-urgent symbol in the region i − αB, . . . , i − B − 1, i.e., sT−B−1[i − B − 1]. This too
immediately follows by applying the definition of dIIi at time i−B + (T −B − 1)(α− 1)− 1 as,
dIIi−B+(T−B−1)(α−1)−1 = (s0[i−B + (T −B − 1)(α− 1)− 1],
. . . , sT−B−1[i−B − 1], . . . , sT−1[i− αB − 1]).
(6) Recover urgent symbols:
Finally, the decoder recovers the urgent symbols u[τ ] = (sT−B[τ ], . . . , sT−1[τ ]) for i − αB ≤ τ < i
at time t = τ + T ⋆2 using the parity-check packets pII[·] and the previously decoded non-urgent
symbols. We establish this claim as follows. After recovering all the non-urgent source symbols
{v[τ ]}, τ = i− αB, . . . , i− 1, we can directly apply the construction of C2 to recover the urgent symbols
{u[τ ]}, τ = i− αB, . . . , i− 1, using the parity-checks pII[·] within a delay of T ⋆2 .
7.5.4 DE-SCo Construction for Non-Integer α
In this section, we show that DE-SCo codes {(B, T), (αB, αT + B)} can be constructed for any non-integer value of α such that B2 = αB is an integer. For any α = B2/B > 1, let α = a/b, where a and b are integers and a/b is in the simplest form.
Encoder
We introduce suitable modifications to the construction given in the previous section. Clearly, since a/b is in simplest form, B must be an integer multiple of b, i.e., B0 = B/b ∈ N. We first consider the case when T is also an integer multiple of b, i.e., T0 = T/b ∈ N. The case when T is not an integer multiple can be dealt with by using the source expansion discussed in Section 7.3.5, as outlined at the end of the section.
• Split each source packet s[i] into T0 symbols (s_0[i], . . . , s_{T0−1}[i]).

• Apply a C1 = (B, T) = (bB0, bT0) MS code by combining the source symbols along the main diagonal with an interleaving step of size b, producing B0 parity-check symbols p^I = (p^I_0[i], . . . , p^I_{B0−1}[i]) at each time where,

p^I_k[i] = s_{T0−k−1}[i − bT0] + h_k(s_{T0−B0−1}[i − b(k + T0 − B0)], . . . , s_0[i − b(k + 1)]),  k = 0, . . . , B0 − 1. (7.24)
• Apply a C2 = ((α − 1)B, (α − 1)T) = ((a − b)B0, (a − b)T0) MS code by combining the source symbols along the opposite diagonal with an interleaving step of size ℓ = (a − b), constructing a total of B0 parity-checks p^II = (p^II_0[i], . . . , p^II_{B0−1}[i]) where,

p^II_k[i] = s_{T0−k−1}[i − (a − b)T0] + h_k(s_{T0−B0−1}[i − (a − b)(k + T0 − B0)], . . . , s_0[i − (a − b)(k + 1)]),  k = 0, . . . , B0 − 1. (7.25)
• Introduce a shift ∆ = T + B = b(T0 + B0) in the stream p^II[·] and combine it with the parity-check stream p^I[·], i.e., q[i] = p^I[i] + p^II[i − ∆]. The output packet at time i is x[i] = (s[i], q[i]).
Decoder of User 1
The decoding steps are analogous to the case when α is an integer. We sketch the main steps. As before, the decoding is done along the diagonal vectors d^I_i = (s_0[i], . . . , s_{T0−1}[i + (T0 − 1)b]) and d^II_i = (s_0[i], . . . , s_{T0−1}[i − (T0 − 1)ℓ]).
For the first user, the same argument applies as in the previous section, i.e., a shift of ∆ = b(T0 + B0) in p^II[·] guarantees that user 1 can cancel the interfering parity-checks to recover the stream p^I[·] of interest.
Decoder of User 2
We verify that the steps in Section 7.5.3 continue to apply. A little examination shows that claims (1)–(4), as well as their proofs in the previous case, follow immediately, since they hold for an arbitrary interleaving step of C2 and do not rely on the interleaving step of C1 being 1. The induction step needs to be modified to reflect that the interleaving step size of C1 is b > 1.
For each k ∈ {1, . . . , T −B − 1} recursively recover the remaining non-urgent symbols as follows:
• Ind. 1: Recover the non-urgent symbols in d^I_{i−B−(k−1)b−1}, . . . , d^I_{i−B−kb} using the non-urgent symbols in {d^II_j}_{j≤i+(k−1)(a−b)−B−1} and the parity-checks p^I[·] between i ≤ t < i + T.
• Ind. 2: Recover the non-urgent symbols in d^II_{i−B+(k−1)(a−b)}, . . . , d^II_{i−B+k(a−b)−1} using {d^I_j}_{j≥i−B−(k−1)b} and the parity-checks p^II[·] between i + T ≤ t < T.
Once this recursion terminates, all the non-urgent symbols {v[τ]}_{τ=i−αB}^{i−1} are recovered by time T − 1.
The proof of this recursion is similar to that in the previous section and is relegated to Appendix E.4.
Finally, the assumption that T is a multiple of b (i.e., αT is an integer) can be relaxed using the source expansion approach in Section 7.3.5. We expand the source stream s[i] with parameters (p, r) = (MαT, M) such that MαT is an integer. We then apply a {(MB, MT), (MαB, M(αT + B))} DE-SCo and show that it satisfies both users on the original stream. The decoder follows by applying Lemma 7.1. In particular, a burst of length B on the original stream will be recovered within a delay of ⌈MT/M⌉ = T packets on the original stream. Similarly, a burst of length B2 on the original stream will be decoded with a delay of M(αT + B) on the expanded stream, which is equivalent to a delay of ⌈M(αT + B)/M⌉ = ⌈αT + B⌉ packets on the original stream. Hence, the burst-delay parameters for both receivers are satisfied.
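The delay bookkeeping under source expansion can be checked mechanically. A minimal sketch, with illustrative numbers (the values of B, T, α and M below are chosen only so that MαT is an integer, as the construction requires):

```python
from math import ceil

def original_delay(expanded_delay, M):
    """Delay mapping under an M-fold source expansion (cf. Lemma 7.1):
    a delay of d packets on the expanded stream corresponds to
    ceil(d / M) packets on the original stream."""
    return ceil(expanded_delay / M)

# Illustrative parameters: B = 2, T = 3, alpha = 2.5, M = 2, so that
# M*alpha*T = 15 and alpha*B = 5 are integers.
B, T, alpha, M = 2, 3, 2.5, 2
assert original_delay(M * T, M) == T          # user 1: delay T on the original stream
assert original_delay(M * (alpha * T + B), M) == ceil(alpha * T + B)  # user 2
```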
7.6 Simulation Results
To examine the fundamental performance, we compare the proposed DE-SCo codes, the single user Maximally-Short (MS) codes of Section 2.6.1 and the m-MDS codes of Section 2.5.2 through some experiments, and discuss the advantages and disadvantages of the proposed codes.

In our simulations, any packet that incurs a delay exceeding the maximum delay is declared to be lost.
In Figure 7.6, we have a multicast setup with two users. Each user experiences a different instance of a Gilbert channel. The first user sees a Gilbert channel with α1 = 10^−4 and β1 spanning the interval [0.3, 0.8], while the second user sees a Gilbert channel with α2 = 10^−5 and β2 ∈ [0.1, 0.5]. We note that the average burst length introduced by the considered Gilbert channels is given by 1/βi for i = 1, 2.
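For readers reproducing this setup, a minimal sketch of a two-state Gilbert erasure channel is given below. It is a simplification under stated assumptions: the state is updated once per packet, and a packet is erased whenever the chain is in the bad state, which yields the mean burst length 1/β quoted above.

```python
import random

def gilbert_erasures(alpha, beta, n, seed=0):
    """Simulate a two-state Gilbert channel over n packets.

    alpha = P(good -> bad), beta = P(bad -> good); every packet sent while
    the chain is in the bad state is erased, so burst lengths are geometric
    with mean 1/beta. Returns a list of booleans (True = erased)."""
    rng = random.Random(seed)
    erased, bad = [], False
    for _ in range(n):
        # transition first, then transmit in the resulting state
        bad = (rng.random() < alpha) if not bad else (rng.random() >= beta)
        erased.append(bad)
    return erased
```

With beta = 0.5 the simulated mean burst length concentrates near 1/beta = 2, and the long-run erasure fraction near alpha/(alpha + beta).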
[Figure 7.6: Simulation experiments over the Gilbert channel. Each user sees a different Gilbert channel, the first with α1 = 10^−4 and the second with α2 = 10^−5. The plots show the loss probability versus β1 for the uncoded stream, the (B, T) = (7, 8) MS code, the {(B1, T1), (B2, T2)} = {(7, 8), (14, 23)} DE-SCo and the m-MDS code of rate 8/15. (a) User 1: all codes are evaluated using a decoding delay of T1 = 8 packets and a rate of R = 8/15. (b) User 2: all codes are evaluated using a decoding delay of T2 = 23 packets and a rate of R = 8/15.]
Hence, the second user's channel introduces longer bursts than that of the first user. We plot the average loss probability for a stream of 10^8 channel packets for: (1) a DE-SCo code with burst-delay parameters {(B1, T1), (αB1, αT1 + B1)} = {(7, 8), (14, 23)}, i.e., α = 2, (2) a (B, T) = (7, 8) MS code and (3) an m-MDS code with a rate of R = T1/(T1 + B1) = 8/15. We note that the rate of all codes is the same, and the decoding delay is also fixed, to T1 = 8 for the first user in Figure 7.6a and T2 = 23 for the second user in Figure 7.6b.
We make a few remarks on the simulation results. The DE-SCo construction can recover from any
[Figure 7.7: Simulation experiments over the Fritchman channel. Each user sees a different instance of a Fritchman channel, the first with α1 = 10^−4 and a total of N1 + 1 = 5 states, the second with α2 = 10^−5 and a total of N2 + 1 = 12 states. The plots show the loss probability versus β1 for the uncoded stream, the (B, T) = (10, 11) MS code, the {(B1, T1), (B2, T2)} = {(10, 11), (20, 32)} DE-SCo and the m-MDS code of rate 11/21. (a) User 1: all codes are evaluated using a decoding delay of T1 = 11 packets and a rate of R = 11/21. (b) User 2: all codes are evaluated using a decoding delay of T2 = 32 packets and a rate of R = 11/21.]
burst of length no more than Bi at user i ∈ {1, 2}. If the burst is longer, the whole burst is declared lost. An MS code can only recover from a burst of length B = B1 or shorter. This explains why it performs similarly to DE-SCo at the first user but much worse at the second. For an m-MDS code of rate R = T1/(T1 + B1), if the burst length exceeds (1 − R)(Ti + 1) (cf. P2 in Corollary 2.1), at least the first packet will not be decoded within a delay of Ti.
Next we see that DE-SCo always outperforms the m-MDS code for user 1. This can be explained as follows. A rate-R DE-SCo can recover completely from an erasure burst of length B1 or smaller for user 1. It fails to recover the erased packets if the burst length exceeds B1. The m-MDS code only recovers completely from an erasure burst of length no more than ⌊(1 − R)(T1 + 1)⌋ = 4. It provides partial
recovery for burst erasures up to length B1 = 7 and fails to recover any source packets when the erasure length exceeds B1. Thus the performance of DE-SCo always dominates that of the m-MDS code for user 1, as illustrated in Figure 7.6a.

For user 2, the delay is given by T2 = (B2/B1)T1 + B1. DE-SCo can correct all erasures up to length B2 and fails to recover any packets if the erasure length is beyond B2. While the threshold for perfect recovery for m-MDS codes is ⌊(1 − R)(T2 + 1)⌋ = 11 ≤ B2, interestingly they allow for partial recovery for burst lengths up to B2 + B1²/T1 > B2. Hence, there is a range of erasure burst lengths where the m-MDS code can recover a partial subset of source symbols whereas DE-SCo fails to recover any source symbols. This explains why DE-SCo and m-MDS codes have comparable performance for user 2.
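The two perfect-recovery thresholds quoted in this discussion can be reproduced directly from the expression ⌊(1 − R)(T + 1)⌋ of Corollary 2.1, using exact rational arithmetic:

```python
from fractions import Fraction
from math import floor

def mmds_perfect_recovery_threshold(R, T):
    """Longest erasure burst from which an m-MDS code of rate R recovers
    every packet within delay T: floor((1 - R) * (T + 1)) (cf. P2,
    Corollary 2.1)."""
    return floor((1 - R) * (T + 1))

R = Fraction(8, 15)  # rate used in the Figure 7.6 experiment
assert mmds_perfect_recovery_threshold(R, 8) == 4    # user 1, T1 = 8
assert mmds_perfect_recovery_threshold(R, 23) == 11  # user 2, T2 = 23
```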
In Figure 7.7, we run a similar experiment but over Fritchman channels instead of Gilbert channels. The channel seen by the first user is a Fritchman channel with N1 + 1 = 5 states and α1 = 10^−4, while the second user experiences a Fritchman channel with N2 + 1 = 12 states and α2 = 10^−5. We note that the second user's channel has longer bursts than those introduced by the first, as N2 > N1. For the three codes, a similar performance is observed compared to that over the Gilbert channels, except that the DE-SCo construction outperforms the m-MDS code at the second user in Figure 7.7b. This is due to the heavier tail of the associated burst distribution in Fritchman channels compared to Gilbert channels. We also note that the MS code at the second user fails to recover from any erasure, as the channel introduces bursts of length N2 = 11 and longer, which is larger than B1 = 10 for the MS code in this case.
7.7 Conclusion
This chapter constructs a new class of streaming erasure codes that do not commit a priori to a given
delay, but rather achieve a delay based on the channel conditions. We model this setup as a multicast
problem to two receivers whose channels introduce different erasure burst lengths and require differ-
ent delays. The DE-SCo construction embeds new parity-checks into the single user code, in a way
such that we do not compromise the single user performance of the stronger user while supporting the
weaker receiver with an information theoretically optimum delay. We provide an explicit construction
of these codes as well as the associated decoding algorithm. Simulation results suggest that these codes
outperform both MS and m-MDS codes.
We also propose a different class of codes, IA-SCo, which achieves a slightly larger delay at the weaker user. However, these codes are appealing due to their simple associated encoder and decoder. While these codes can be naturally extended to more than two users, we note that adding more users to the system is accompanied by a penalty in the decoding delay, as observed in the considered two-user scenario. Hence, it might not be desirable in practice to add more users. However, we argue that by suitably choosing B1, B2 and T1, one can achieve better performance over single user codes.
Our simulation results indicate that the performance gains of the proposed code constructions are limited to burst erasure channels. We believe that enhancing these codes to be robust against isolated erasures can be done using a layering technique similar to that used in MiDAS codes in Chapter 3. A more general problem would be designing codes that achieve the capacity for any feasible pair {(B1, T1), (B2, T2)}. This will be discussed in the following chapter.
Chapter 8

Multicast Streaming Codes (Mu-SCo)
8.1 Introduction
In this chapter, we consider the same multicast setup in Chapter 7 which involves one sender and two
receivers. The two receivers are connected to the sender over a burst erasure broadcast channel and
both the receivers are interested in reconstructing the same source stream, but with different delays.
One receiver’s channel introduces a burst of length B1 and the required reconstruction delay is T1. The
second receiver’s channel introduces a burst of length B2 and the associated reconstruction delay is T2.
In Chapter 7, the rate was fixed to the single user rate of the strong user, and the minimum achievable delay at the weaker user was shown to be attained by a DE-SCo construction.
In this chapter, we consider a more general problem. We seek to characterize the multicast streaming capacity, denoted by C(B1, T1, B2, T2), for any B1, T1, B2 and T2. First, we observe that the system performance can be divided into two operating regimes. When both delays T1 and T2 are smaller than certain thresholds, the system operates in a low-delay regime. Otherwise, it operates in a large-delay regime. In the latter case, we show that the delay of either receiver 1 or receiver 2 can be reduced to a certain minimum threshold without reducing the capacity. This property is used in our code constructions. In the low-delay regime, the characterization of the capacity is more challenging. We characterize the capacity for a subset of this region by proposing a new coding scheme and a matching converse. For the remainder of this region we propose upper and lower bounds and justify their tightness in some special cases.
The chapter is organized as follows. Section 8.2 introduces the streaming setup and defines the
quantity of interest, the multicast streaming capacity. A summary of the main results is provided in
Section 8.3. In Section 8.4, the capacity in the large delay regime is established. The subsequent sections
treat the low-delay regime. For the case when T2 ≥ T1 + B1 we establish the capacity by presenting
the code construction in Section 8.5 and the corresponding converse in Section 8.6. The case when
T2 < T1 + B1 is treated in Sections 8.7 and 8.8. We establish upper and lower bounds on the capacity
in Section 8.7, establish the capacity in the special cases when T1 = B1 and T2 = B2 in Sections 8.8.1
and 8.8.2 respectively, and present a conjecture on the capacity in Section 8.8.3. We finally present the
conclusions in Section 8.9.
Chapter 8. Multicast Streaming Codes (Mu-SCo) 112
8.2 System Model
We consider a similar model to that studied in Chapter 7. We reproduce it for convenience. At each
time instant t ≥ 0, the encoder observes a source packet s[t] ∈ F_q^k, and a channel packet x[t] ∈ F_q^n is causally generated, i.e.,

x[t] = f_t(s[0], . . . , s[t]). (8.1)
The channel of user i introduces an erasure burst of maximum length Bi for i = 1, 2, i.e., the channel output at receiver i at time t is given by

y_i[t] = ⋆,    t ∈ [j_i, j_i + B_i − 1],
y_i[t] = x[t], otherwise, (8.2)
where j_i ≥ 0 and ⋆ denotes an erasure. Furthermore, the source stream {s[t]}_{t≥0} has to be reconstructed at receiver i within a maximum delay of Ti packets, i.e., there should exist a sequence of decoding functions γ_{1t}(·) and γ_{2t}(·) such that

ŝ[t] = γ_{it}(y_i[0], y_i[1], . . . , y_i[t + T_i]), i = 1, 2, (8.3)

and Pr(ŝ[t] ≠ s[t]) = 0, ∀t ≥ 0.
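The channel model (8.2) amounts to a simple mapping; a minimal sketch (with None standing in for the erasure symbol ⋆):

```python
def burst_channel_output(x, j, B):
    """Channel of user i in (8.2): packets at times [j, j + B - 1] are
    erased (returned as None); all other packets are delivered intact.
    x is the list of channel packets x[0..len(x)-1]."""
    return [None if j <= t <= j + B - 1 else x[t] for t in range(len(x))]

# A burst of length B = 4 starting at time j = 3:
y = burst_channel_output(list(range(10)), j=3, B=4)
assert y == [0, 1, 2, None, None, None, None, 7, 8, 9]
```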
We note that the source stream is an i.i.d. process; each source packet is uniformly sampled from a distribution p_s(·) over the finite alphabet F_q^k.
The rate of the multicast code is defined as the ratio of the size of each source packet to that of each
channel packet, i.e., R = k/n, and the multicast streaming capacity is defined as follows.
Definition 8.1 (Multicast Streaming Capacity). A rate R is achievable if there exists a streaming code of this rate over some field size q such that, if the channel introduces a burst of length Bi for i ∈ {1, 2}, every source packet s[t] for t ≥ 0 can be decoded with a delay of Ti ≥ Bi.¹ Such a code is called a {(B1, T1), (B2, T2)} Multicast Streaming Code (Mu-SCo). The supremum of all achievable rates is the multicast streaming capacity and is denoted by C(B1, T1, B2, T2).
We note that Remark 7.1 still applies to Mu-SCo. We also note that if the guard interval after a burst of length Bi is T̃i < Ti for i = 1, 2, one can target a smaller delay. In particular, a {(B1, T̃1), (B2, T̃2)} Mu-SCo is capable of recovering all source packets over such a channel. A similar argument is used in the single user setup to achieve the capacity of burst erasure channels (cf. Theorem 2.3 in Chapter 2). Another related reference is [65], which considers the case of multiple erasure bursts within the same decoding window.
Without loss of generality, we assume throughout the chapter that B2 > B1. We only consider the
burst erasure channel model in this chapter. More general channel models that include both burst and
isolated erasures can be potentially tackled using a layered coding approach as discussed in Chapters 3
and 4.
¹We note that the capacity is clearly zero whenever T1 < B1 or T2 < B2.
Figure 8.1: Capacity behavior in the (T1, T2) plane. We hold B1 and B2 constant (with B2 > B1), so the regions depend only on the relation between T1 and T2. The dashed line shows the contour of constant capacity in regions (a), (b), (c) and (d).
8.3 Main Results
We divide our results into two main regimes, the large-delay regime and the low-delay regime, which are
treated separately below.
8.3.1 Large-Delay Regime
The parameters of the DE-SCo construction in Theorem 7.1 fall within a larger class which we refer to as the large-delay regime. In particular, if at least one of T1 and T2 is larger than a certain threshold, i.e.,

T1 ≥ B2, (or) T2 ≥ B1 + B2, (8.4)

we have been able to determine the multicast capacity, as stated in Theorem 8.1 below. In Figure 8.1 this regime consists of all pairs (T1, T2) outside the rectangular box [B1, B2) × [B2, B1 + B2).
Theorem 8.1 (Multicast Capacity in Large-Delay Regime). When the delays T1 and T2 satisfy (8.4)
and B2 > B1, the multicast capacity is given by

C(B1, T1, B2, T2) =
  C1,                        T2 ≥ αT1 + B1,
  (T2 − B1)/(T2 − B1 + B2),  T1 + B1 ≤ T2 ≤ αT1 + B1,
  T1/(T1 + B2),              T1 ≤ T2 ≤ T1 + B1,
  C2,                        T2 ≤ T1, (8.5)

where Ci = Ti/(Ti + Bi) is the single user capacity of user i = 1, 2 and we have defined α = B2/B1.
B1.
The proof of Theorem 8.1 appears in Section 8.4. The achievability scheme involves a suitable
application of the DE-SCo construction in Theorem 7.1 and single user MS codes. In particular, we
exploit the following observation.
Remark 8.1. In each of the four cases in (8.5) the capacity only depends on either T1 or T2, but not
on both of them simultaneously. In particular, as shown in Figure 8.1, the contour of constant capacity
is a piecewise constant line. On the horizontal portions, the delay T1 can be reduced without reducing
the capacity whereas on the vertical portions the delay T2 can be reduced without reducing the capacity.
This property allows us to use code constructions at the two dominating points on each constant-capacity
contour as discussed in Section 8.4.
The converse is based on a periodic erasure channel argument and uses the decoding constraints at
both the receivers in a suitable fashion.
8.3.2 Low-Delay Regime
We next consider the case when the delay pair (T1, T2) falls in the box [B1, B2)× [B2, B1 +B2), i.e.,
B1 ≤ T1 < B2, (and) B2 ≤ T2 < B1 +B2. (8.6)
This regime is more challenging compared to the large-delay regime. We further split the low-delay
regime into two regions, (e) and (f) as illustrated in Figure 8.1. The capacity is characterized in region (e)
as stated in Theorem 8.2, whereas in region (f), upper and lower bounds are provided in Theorem 8.3.
Furthermore, the capacity in region (f) for the special cases T1 = B1 and T2 = B2 is provided in
Propositions 8.1 and 8.2 respectively.
Theorem 8.2 (Capacity in Region (e)). The multicast streaming capacity in region (e), defined by T1 + B1 ≤ T2 ≤ B2 + B1 and B1 ≤ T1 < B2, is given by,

Ce = T1/(2T1 + B1 + B2 − T2). (8.7)
The complete proof of Theorem 8.2 is divided into two main parts. The achievability scheme is
provided in Section 8.5 while the converse is given in Section 8.6.
The expression in (8.7) can be interpreted as follows. Consider the special case when T2 = B2. The rate in (8.7) can be attained using a simple concatenation of two single user codes, a (B1, T1) MS code and a (T2, T2) repetition code. When T2 ≠ B2, the parity-check streams of these codes must overlap in T2 − B2 symbols in order to attain (8.7). Our code construction in Section 8.5 is based on this observation
and involves embedding an additional set of parity-check packets to clear the interference due to such
overlap.
The converse in Section 8.6 involves a new insight of revealing some of the source packets to a virtual
decoder to obtain a tighter bound than the periodic erasure channel argument used in the converse proof
of Theorem 8.1. A rigorous information theoretic proof of the converse is also provided in this section.
The remainder of the low-delay regime is called region (f). For this region, we provide general upper and lower bounds on the capacity. The capacity remains open except in the special cases of either T1 = B1 or T2 = B2.
Theorem 8.3 (Bounds on Capacity in Region (f)). The multicast streaming capacity in region (f), defined by B2 ≤ T2 ≤ T1 + B1 and B1 ≤ T1 < B2, is upper and lower bounded as follows,

C−f ≤ Cf ≤ C+f, (8.8)

where the lower bound is given by,

C−f = T1/(2T1 + B1 + B2 − T2), (8.9)

and the upper bound is given by,

C+f = (T2 − B1)/(2(T2 − B1) + (B2 − T1)). (8.10)
We note that the rate expression in (8.9) is the same as the capacity expression in region (e) in
Theorem 8.2. The code construction is essentially the same as that in Section 8.5, but requires a
modification in the decoder of user 1 as discussed in Section 8.7.1. The proof of the upper bound is given
in Section 8.7.2 and also involves arguments similar to those used in the converse proof of Theorem 8.2.
The bounds in Theorem 8.3 do not coincide in general. We identify special cases when each is tight
as stated in the following propositions.
Proposition 8.1 (Capacity in Region (f) at (T1 = B1)). The multicast streaming capacity in region (f), defined by B2 ≤ T2 ≤ T1 + B1 and B1 ≤ T1 < B2, in the minimum delay case for user 1 (T1 = B1), is given by,

Cf(T1=B1) = C+f. (8.11)
To establish Proposition 8.1, we provide the encoding and decoding steps of the code construction
achieving the rate in (8.11) in Section 8.8.1. The code is obtained by concatenating the parity-check
packets of a (T1, T1) repetition code and a (B2 −B1, T2 −B1) MS code. The converse is already proved
in Theorem 8.3.
Proposition 8.2 (Capacity in Region (f) at (T2 = B2)). The multicast streaming capacity in region (f), defined by B2 ≤ T2 ≤ T1 + B1 and B1 ≤ T1 < B2, in the minimum delay case for user 2 (T2 = B2), is given by,

Cf(T2=B2) = C−f = T1/(2T1 + B1). (8.12)
The converse proof of Proposition 8.2 is provided in Section 8.8.2. The technique is significantly
Table 8.1: Summary of capacity expressions, code constructions and converse proofs of all regions in the considered multicast model with two users of parameters {(B1, T1), (B2, T2)}. The acronym PEC stands for "Periodic Erasure Channel".

Region | Capacity Expression | Code Construction | Converse Proof

Large-Delay Regime (T1 ≥ B2, or T2 ≥ B1 + B2):
Region (a): T2 ≥ αT1 + B1 | Ca = T1/(T1 + B1) | DE-SCo | PEC
Region (b): T1 + B1 < T2 < αT1 + B1 | Cb = (T2 − B1)/(T2 − B1 + B2) | DE-SCo + Source Expansion | PEC
Region (c): T1 < T2 ≤ T1 + B1 | Cc = T1/(T1 + B2) | (B2, T1) MS Code | PEC
Region (d): T2 ≤ T1 | Cd = T2/(T2 + B2) | (B2, T2) MS Code | PEC

Low-Delay Regime (B1 ≤ T1 < B2, and B2 ≤ T2 < B1 + B2):
Region (e): T2 ≥ T1 + B1 | Ce = T1/(2T1 + B1 + B2 − T2) | Partial Concatenation | Revealing
Region (f): T2 < T1 + B1, with C−f ≤ Cf ≤ C+f:
  | C−f = T1/(2T1 + B1 + B2 − T2) | Partial Concatenation | -
  | C+f = (T2 − B1)/(2(T2 − B1) + (B2 − T1)) | - | Revealing
  | Cf(T1=B1) = C+f | Simple Concatenation | -
  | Cf(T2=B2) = C−f | - | Double-Counting
different from earlier converses and involves carefully double-counting² the redundancy arising from the recovery of certain source packets. The achievability scheme is the same as that in Theorem 8.3, provided in Section 8.5, since substituting T2 = B2 in (8.9) gives (8.12).
A conjecture on the capacity in region (f), which is consistent with all the special cases above, is
discussed in Section 8.8.3.
A summary of the main results is provided in Table 8.1.
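The case analysis of Table 8.1 can be collected into a single function. The sketch below returns a (lower, upper) pair per the table; ties on region boundaries are resolved using the continuity of the expressions noted in Remark 8.1, and in the interior of region (f) the two values are the (generally non-matching) bounds of Theorem 8.3.

```python
from fractions import Fraction as F

def mu_sco_capacity(B1, T1, B2, T2):
    """Capacity (or bounds) of a {(B1,T1),(B2,T2)} Mu-SCo per Table 8.1,
    assuming B2 > B1, T1 >= B1, T2 >= B2. Returns (lower, upper)."""
    assert B2 > B1 and T1 >= B1 and T2 >= B2
    if T1 >= B2 or T2 >= B1 + B2:            # large-delay regime
        if T2 >= F(B2, B1) * T1 + B1:        # region (a)
            c = F(T1, T1 + B1)
        elif T2 > T1 + B1:                   # region (b)
            c = F(T2 - B1, T2 - B1 + B2)
        elif T2 > T1:                        # region (c)
            c = F(T1, T1 + B2)
        else:                                # region (d)
            c = F(T2, T2 + B2)
        return c, c
    if T2 >= T1 + B1:                        # region (e)
        c = F(T1, 2 * T1 + B1 + B2 - T2)
        return c, c
    lo = F(T1, 2 * T1 + B1 + B2 - T2)        # region (f) bounds, (8.9)-(8.10)
    hi = F(T2 - B1, 2 * (T2 - B1) + (B2 - T1))
    if T1 == B1:                             # Proposition 8.1
        return hi, hi
    if T2 == B2:                             # Proposition 8.2
        return lo, lo
    return lo, hi

# Parameters of the Figure 7.6 experiment, region (a): capacity 8/15.
assert mu_sco_capacity(7, 8, 14, 23) == (F(8, 15), F(8, 15))
```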
8.4 Multicast Capacity in Large-Delay Regime (Theorem 8.1)
We discuss in turn the achievability and converse of Theorem 8.1 in this section.
8.4.1 Achievability
The case in region (a), where the capacity equals C1 = T1/(T1 + B1), was already shown in Theorem 7.1 to be achieved using DE-SCo. Region (c), sandwiched between T1 ≤ T2 ≤ T1 + B1 and T1 ≥ B2 in Figure 8.1, satisfies

Cc = T1/(T1 + B2). (8.13)
In this region we can use a single user (B2, T1) MS code that simultaneously satisfies both the users.
Clearly this code is feasible since T1 ≥ B2. It satisfies user 1 since B2 > B1 and user 2 since T2 ≥ T1.
Similarly in region (d), defined by T2 ≤ T1 and B2 ≥ B1, it suffices to serve user 2 using a single user
MS code of parameters (B2, T2).
Thus the only remaining region of the large-delay regime in Figure 8.1 is region (b). Recall that the capacity here is Cb = (T2 − B1)/(T2 − B1 + B2) and T1 + B1 ≤ T2 ≤ αT1 + B1 holds. Since the capacity does not depend
2The ideas of revealing and double counting used in the converse proofs of Theorems 8.2 and 8.3 and Proposition 8.2are obtained in collaboration with Devin Lui.
on T1, we can keep reducing the value of T1 to T̃1 such that

T2 = αT̃1 + B1,

where α = B2/B1. This is equivalent to

T̃1 = (B1/B2)(T2 − B1). (8.14)

Provided that T̃1 ≥ B1, and furthermore T̃1 is an integer, we can use a {(B1, T̃1), (B2, T2)} DE-SCo in Section 7.5 to achieve T̃1/(T̃1 + B1) = Cb, and hence the capacity at the original point in region (b). The former condition is equivalent to T2 ≥ B2 + B1, which is satisfied by every point in region (b).
If T̃1 is not an integer, we use the source expansion defined in Section 7.3.5. We start by expanding the source stream s[i] with parameters (p, r) = (nT̃1, n), where n is the smallest integer such that p = nT̃1 is an integer. We then apply a DE-SCo with parameters {(nB1, nT̃1), (nαB1, n(αT̃1 + B1))} to the expanded source stream s[·]. Using Lemma 7.1, it can be verified that the proposed construction satisfies both the receivers. An example of using source expansion to achieve the capacity in region (b) is included in Appendix F.1.
8.4.2 Converse
For the converse, we start by establishing an upper bound on the multicast streaming capacity in the large-delay regime as follows.

Lemma 8.1. For any two receivers with burst-delay parameters (B1, T1) and (B2, T2), the multicast streaming capacity is upper bounded by C ≤ C+, where

C+ = (T2 − B1)/(T2 − B1 + B2),  T2 > T1 + B1,
C+ = T1/(T1 + B2),              T2 ≤ T1 + B1. (8.15)
Proof. The proof of Lemma 8.1 is provided in Appendix F.2. It involves a periodic erasure channel (PEC) argument similar to the single user case, but simultaneously uses the decoding constraints of both receivers.
We further tighten the upper bound in (8.15) using the fact that the multicast streaming capacity cannot exceed the single user capacity on either of the two links:

CU = min{C+, C1, C2}
   = min{(T2 − B1)/(T2 − B1 + B2), C1, C2},  T2 > T1 + B1,
   = min{T1/(T1 + B2), C1, C2},              T2 ≤ T1 + B1. (8.16)
Through straightforward calculations one can further simplify:

CU = min{(T2 − B1)/(T2 − B1 + B2), C1},  T2 > T1 + B1,
   = min{T1/(T1 + B2), C2},              T2 ≤ T1 + B1

   = C1 ≜ Ca,                            T2 ≥ αT1 + B1,
   = (T2 − B1)/(T2 − B1 + B2) ≜ Cb,      T1 + B1 < T2 < αT1 + B1,
   = T1/(T1 + B2) ≜ Cc,                  T1 < T2 ≤ T1 + B1,
   = C2 ≜ Cd,                            T2 ≤ T1, (8.17)

where recall that α = B2/B1. This completes the proof of Theorem 8.1.
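The straightforward calculations behind (8.17) can also be verified mechanically. The sketch below checks the simplification of (8.16) exhaustively over a small grid of large-delay-regime parameters (the grid sizes are arbitrary illustrative choices):

```python
from fractions import Fraction as F

def cu_raw(B1, T1, B2, T2):
    """min{C+, C1, C2} as in (8.16)."""
    c1, c2 = F(T1, T1 + B1), F(T2, T2 + B2)
    cp = F(T2 - B1, T2 - B1 + B2) if T2 > T1 + B1 else F(T1, T1 + B2)
    return min(cp, c1, c2)

def cu_simplified(B1, T1, B2, T2):
    """The four-case form (8.17), with alpha = B2/B1."""
    if T2 >= F(B2, B1) * T1 + B1:
        return F(T1, T1 + B1)            # Ca
    if T2 > T1 + B1:
        return F(T2 - B1, T2 - B1 + B2)  # Cb
    if T2 > T1:
        return F(T1, T1 + B2)            # Cc
    return F(T2, T2 + B2)                # Cd

# Exhaustive check over a small grid restricted to the large-delay regime:
for B1 in range(1, 5):
    for B2 in range(B1 + 1, 7):
        for T1 in range(B1, 15):
            for T2 in range(B2, 25):
                if T1 >= B2 or T2 >= B1 + B2:
                    assert cu_raw(B1, T1, B2, T2) == cu_simplified(B1, T1, B2, T2)
```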
8.5 Achievability Scheme in Region (e) (Theorem 8.2)
We show that for any point in region (e) in Figure 8.1 that satisfies T1 + B1 ≤ T2 ≤ B2 + B1 and B1 ≤ T1 < B2, there exists a multicast code that achieves the following rate:

Ce = T1/(2T1 + B1 + B2 − T2). (8.18)

Towards this end we parametrize T2 and B2 as

T2 = T1 + B1 + m,  B2 = T1 + k + m,  m ≥ 0, k ∈ [0, B1]. (8.19)

Substituting into (8.18), the capacity simplifies to

Ce = T1/(2T1 + k). (8.20)
In our construction we first consider two codes, one for each user as follows.
• Split each source packet s[i] into T1 symbols

s[i] = (s_0[i], . . . , s_{T1−1}[i]).

• Let C1 be a (B1, T1) MS code, as described in Section 2.6.1, applied to the source packets s[i], producing B1 parity-check symbols

p^1[i] = (p^1_0[i], . . . , p^1_{B1−1}[i]) (8.21)

at each time by combining the source symbols diagonally:

p^1_j[i] = s_j[i − T1] + h^1_j(s_{B1}[i − j − T1 + B1], . . . , s_{T1−1}[i − j − 1]),  j = 0, 1, . . . , B1 − 1, (8.22)

where recall that h^1_j(·) denotes a linear combination of the symbols s_{B1}[i − j − T1 + B1], . . . , s_{T1−1}[i − j − 1] as in (2.24).
• Let C2 be a simple repetition code applied to the source packets s[i] with a delay of T2, i.e.,

p^2[i] = (p^2_0[i], . . . , p^2_{T1−1}[i]) = (s_0[i − T2], . . . , s_{T1−1}[i − T2]) = s[i − T2]. (8.23)
• Concatenate the two streams p^1[·] and p^2[·] in (8.21) and (8.23) with a partial overlap of

T2 − B2 = B1 − k

packets, as illustrated in (8.24). The two streams of parity-checks p^1[·] and p^2[·] are concatenated with the last B1 − k rows of the first added to the uppermost B1 − k rows of the second:

x[i] = ( s_0[i], . . . , s_{T1−1}[i],
         p^1_0[i], . . . , p^1_{k−1}[i],
         p^1_k[i] + s_0[i − T2], . . . , p^1_{B1−1}[i] + s_{B1−k−1}[i − T2],
         s_{B1−k}[i − T2], . . . , s_{T1−1}[i − T2] ). (8.24)

Note that x[i] consists of a total of T1 symbols of s[i], B1 parity-check symbols of p^1[i] and T1 parity-check symbols of p^2[i], with an overlap of B1 − k symbols. Thus the rate associated with x[i] satisfies (8.20).
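The symbol count behind this rate claim is easy to verify: x[i] carries T1 + B1 + T1 − (T2 − B2) symbols, which reduces to 2T1 + k under the parametrization (8.19). A minimal check with illustrative parameters satisfying region (e):

```python
def packet_length(T1, B1, B2, T2):
    """Number of symbols in x[i] per (8.24): T1 source symbols, B1 parity
    symbols of p1 and T1 parity symbols of p2, minus an overlap of
    T2 - B2 = B1 - k symbols."""
    overlap = T2 - B2
    return T1 + B1 + T1 - overlap

# With T2 = T1 + B1 + m and B2 = T1 + k + m as in (8.19), the length is
# 2*T1 + k, giving the rate T1 / (2*T1 + k) of (8.20).
T1, B1, m, k = 5, 3, 1, 2
T2, B2 = T1 + B1 + m, T1 + k + m
assert packet_length(T1, B1, B2, T2) == 2 * T1 + k
```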
As we will see, the above construction needs to be further extended by embedding a third code onto
the non-overlapping parity-checks of C2. To motivate this we begin by considering the decoding at both
the receivers.
8.5.1 Decoding at Receiver 1
At receiver 1, suppose that the erasure burst spans the interval [i − B1, i − 1]. We need to reconstruct each s[t] for t ∈ [i − B1, i − 1] with a delay of T1. The parity-checks p^1[·] that will be required at the decoder span I1 = [i, i + T1 − 1]. We claim that the overlapping symbols s[t − T2] for t ∈ I1, as illustrated in (8.24), are not erased and hence can be cancelled out. In particular, consider t = i. The overlapping s[i − T2] is clearly not erased, since T2 ≥ B2 > B1 by definition. At the other extreme, when t = i + T1 − 1, we have:

i + T1 − T2 − 1 ≤ i − B1 − 1, (8.25)

since T2 ≥ B1 + T1 in region (e). Thus s[t − T2] again lies before the start of the erasure burst and is not erased. It follows that for each t ∈ I1 the overlapping s[t − T2] in (8.24) is not erased
and can be cancelled out to recover p^1[t] with no additional delay. At this point one can use the decoder associated with C1, which is a (B1, T1) MS code, to recover each erased source packet with a delay of T1.
8.5.2 Decoding at Receiver 2
At receiver 2, we suppose that the erasure burst spans the interval [i − B2, i − 1]. We are required to reconstruct each s[t] for t ∈ [i − B2, i − 1] with a delay of T2. Recall that the parity-check p^2[t] = s[t − T2] is merely a repetition code with a shift of T2. Thus if we can cancel the overlapping p^1[t] for t ∈ [i − B2 + T2, i + T2 − 1], we can recover each erased source packet with a delay of T2. Now note that using (8.19) we have that T2 − B2 = B1 − k and 0 ≤ k ≤ B1 holds. We can partition the interval [i, i + T2 − 1] into three sub-intervals as follows:
• J1 = [i, i + T2 − B2 − 1]. For each t ∈ J1 we have that t − T2 ≤ i − B2 − 1, and thus the overlapping s[t − T2] is not erased. Thus one can cancel the overlapping s[t − T2] and recover p^1[t] in (8.24). Note that in this interval the portion of s[t − T2] that does not overlap with p^1[·] is not used in the recovery.
• J2 = [i + T2 − B2, i + T1 − 1]. For each t ∈ J2, the packet s[t − T2] is erased, and the overlapping p1[t] involves source packets in the interval [t − T1, t − 1], which may also be erased. In this case one cannot recover the symbols s_j[t − T2] for j = 0, . . . , B1 − k − 1. However, we recover s_j[t − T2] for j = B1 − k, . . . , T1 − 1. Thus a subset of the symbols of s[t − T2] cannot be recovered.
• J3 = [i + T1, i + T2 − 1]. For each t ∈ J3, we have that p1[t] involves source packets in the interval [t − T1, t − 1], since the memory of the code C1 is T1 (see Section 7.3.2). Since these source packets appear after the erasure burst and hence are not erased, one can compute and cancel the overlapping p1[t] and thus recover p2[t] = s[t − T2]. Thus it follows that the erased source packet s[t − T2] is recovered from (8.24).
To summarize the above steps, when the erasure burst spans the interval [i − B2, i − 1] at receiver 2, the code construction in (8.24) is not able to recover the source symbols s_j[t − T2] for j ∈ {0, . . . , B1 − k − 1} and t ∈ {i + B1 − k, . . . , i + T1 − 1}. These (B1 − k) · (T1 − B1 + k) symbols are illustrated by the shaded box in Figure 8.2. Secondly, in the interval J1 = [i, i + B1 − k − 1], all the parity-check packets p2[t] correspond to source packets that are not erased. The total number of symbols that do not overlap with p1[t], illustrated by the region with hatched lines in Figure 8.2, also equals (B1 − k) · (T1 − B1 + k), the same as the number of shaded symbols. We next describe our code construction that uses these available positions to recover the missing source packets.
8.5.3 Construction of C3
We extend our construction of x[i] to incorporate a third set of parity-check packets q[i] = (q_0[i], . . . , q_{T1−(B1−k)−1}[i]) so that the transmitted packet x[i] is expressed as follows:
Figure 8.2: A graphical illustration of the structure of the code construction. The labels on the right show the layers spanned by each set of parity-check symbols. The labels at the bottom show the intervals in which each set of parity-check symbols combine erased source symbols. Note that the construction x[i] in (8.24) involves an overlap between p1[·] and p2[·] as shown. The shaded packets cannot be recovered at user 2. To recover these we use a third layer of parity-check packets p3[·] that are embedded in the last T1 − (B1 − k) rows as shown.
x[i] = ( s[i],
         p^1_0[i], . . . , p^1_{k−1}[i],
         p^1_k[i] + s_0[i − T2], . . . , p^1_{B1−1}[i] + s_{B1−k−1}[i − T2],
         s_{B1−k}[i − T2] + q_0[i], . . . , s_{T1−1}[i − T2] + q_{T1−(B1−k)−1}[i] )    (8.26)
We show that by judiciously selecting q[·], these parity-check packets in J1 can be used to recover the parity packets p1[t] for t ∈ J2 and ultimately all the source packets. As remarked before, the number of available parity symbols of q[·] in J1, denoted by the region with hatched lines, is sufficient to recover the erased symbols of p1[·] in the interval J2, which are shaded in Figure 8.2. However, the construction of q[·] needs to satisfy two additional properties in the streaming setup.
• Causality: Each q[t] must only depend on the source packets up to time t.
• Delay Constraint: Each p1[t] for t ∈ J2 must be reconstructed with zero delay, so that the underlying source packet s[t − T2] can be recovered with a delay of T2.
We discuss the construction of q[·] separately in the cases when T1 ≤ 2(B1 − k), and when T1 >
2(B1 − k) below.
Case (A): T1 ≤ 2(B1 − k)

We define T3 = B1 − k and B3 = T1 − (B1 − k), and consider a code C3 that is a (B3, T3) single-user MS code applied to the last B1 − k parity-check packets of p1[·], i.e., to (p^1_k[i], . . . , p^1_{B1−1}[i]). Such a code is feasible since B3 ≤ T3, due to T1 ≤ 2(B1 − k). Thus we let

p^3[i] = (p^3_0[i], . . . , p^3_{T1−(B1−k)−1}[i])    (8.27)

by combining the last B1 − k parity-check symbols, (p^1_k[·], . . . , p^1_{B1−1}[·]), diagonally, i.e.,

p^3_j[i] = p^1_{k+j}[i − T3] + h^3_j( p^1_{k+B3}[i − j − T3 + B3], . . . , p^1_{k+T3−1}[i − j − 1] ),    (8.28)

for j ∈ {0, 1, . . . , T1 − (B1 − k) − 1}, where h^3_j(·) denotes the linear combination associated with the MS code, as defined in (2.24).
We note that the code C3 is an MS code. It can recover a burst of length B3, spanning the interval J2 = [i + T3, i + T3 + B3 − 1], with a delay of T3, provided the following conditions are satisfied:

• The associated parity-check packets p3[t] in the interval t ∈ [i + T1, i + T1 + T3 − 1] are available.

• The packets p1[t] for t ∈ J1 = [i, i + T3 − 1], which is the period of length T3 = T2 − B2 packets preceding the burst, are available to the decoder.
As noted before, the interfering packets of p2[t] for t ∈ J1 can be cancelled out, and thus p1[t] for t ∈ J1 are available. Furthermore, in order to retrieve the required p3[t] from the hatched positions in Figure 8.2, we apply a backward shift of T1 source packets and let

q[t] = p3[t + T1].    (8.29)

With the above mapping, each required p3[t] for t ∈ [i + T1, i + T1 + T3 − 1] can be retrieved from the corresponding q[t − T1], which span the interval J1 and hence are available. However, the choice of q[t] in (8.29) does not satisfy the causality condition. This is because p3[t + T1] can potentially depend on source packets after time t, whereas q[t] must only depend on the source packets up to time t. Therefore we need to modify (8.29) to only send the causal part of p3[t + T1] at time t, as discussed next.
Definition 8.2 (Causal and Non-Causal Parts of a Parity-Check). Consider a linear parity-check symbol p_j[t1] generated over the source stream s[·]. For any t2 ≤ t1 we can express

p_j[t1] = ←p_j[t1]|_{t2} + →p_j[t1]|_{t2}    (8.30)

where ←p_j[t1]|_{t2} and →p_j[t1]|_{t2} denote the causal and non-causal parts of the parity-check p_j[t1] with respect to t2, respectively. The causal part, ←p_j[t1]|_{t2}, is obtained by replacing all source packets s[t] from time t > t2 with zeros in p_j[t1], whereas the non-causal part, →p_j[t1]|_{t2}, is obtained by replacing all source packets from time t ≤ t2 with zeros in p_j[t1].
Chapter 8. Multicast Streaming Codes (Mu-SCo) 123
Recall that in (8.30) we use the fact that p_j[t1] is a linear combination of source symbols (see (2.24)).
Example 8.1. Consider the parity-check p_0[5] = s_0[1] + s_1[2] + s_2[3] + s_3[4]. The causal and non-causal parts with respect to t2 = 2 are given by

←p_0[5]|_2 = s_0[1] + s_1[2],
→p_0[5]|_2 = s_2[3] + s_3[4],    (8.31)

respectively. In particular, ←p_0[5]|_2 is equal to p_0[5] after removing all source symbols later than time t = 2, whereas →p_0[5]|_2 is obtained by removing all source symbols up to and including time t = 2.
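The split of Definition 8.2 can be sketched in a few lines of code. Below is a minimal illustration (my own representation, not the thesis's implementation): a linear parity-check is stored as a dict mapping a (time, symbol index) pair to its coefficient, so the causal/non-causal split is simply a filter on the time key.

```python
# A minimal sketch of Definition 8.2: a linear parity-check is stored as a
# dict mapping (time, symbol_index) -> coefficient, so the causal/non-causal
# split with respect to t2 is a filter on the time component of the key.
def split_causal(parity, t2):
    causal = {key: c for key, c in parity.items() if key[0] <= t2}
    noncausal = {key: c for key, c in parity.items() if key[0] > t2}
    return causal, noncausal

# Example 8.1: p0[5] = s0[1] + s1[2] + s2[3] + s3[4], split at t2 = 2.
p0_5 = {(1, 0): 1, (2, 1): 1, (3, 2): 1, (4, 3): 1}
causal, noncausal = split_causal(p0_5, 2)
assert causal == {(1, 0): 1, (2, 1): 1}        # s0[1] + s1[2]
assert noncausal == {(3, 2): 1, (4, 3): 1}     # s2[3] + s3[4]
```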
Hence, we replace the parity-check packet p3[i + T1] with its causal part ←p3[i + T1]|_i in (8.29), as discussed in Definition 8.2, i.e.,

q[i] = ←p3[i + T1]|_i = ( ←p^3_0[i + T1]|_i, . . . , ←p^3_{T1−(B1−k)−1}[i + T1]|_i ).    (8.32)
We now revisit the decoding analysis for user 2 in Section 8.5.2. Suppose that the packets in the
interval I2 = [i − B2, i − 1] are erased by the channel of user 2. The main steps at the decoder are
summarized as follows. We refer to the four layers as illustrated in Figure 8.2.

• Step (1) (Recovery of p1[·]): The parity-checks ←p3[t + T1]|_t in the interval t ∈ J1 = [i, i + B1 − k − 1] (which correspond to the hatched region in Figure 8.2) are used to recover the last B1 − k symbols of p1[t] for t ∈ J2 = {i + B1 − k, . . . , i + T1 − 1} by time t. These correspond to the shaded packets in Figure 8.2.

• Step (2) (Removal of p1[·]): Subtract p1[t] in layer (3) in the interval t ∈ J2 ∪ J3 = [i + B1 − k, i + T2 − 1] and recover the underlying symbols of p2[t] that overlap with them.

• Step (3) (Removal of p3[·]): Compute and subtract p3[t] in layer (4) in the interval t ∈ J2 ∪ J3 = [i + B1 − k, i + T2 − 1] and recover the underlying symbols of p2[t] that overlap with them.

• Step (4) (Recovery using p2[·]): Use p2[t] for t ∈ [i + B1 − k, i + T2 − 1] to recover the erased source packets (s[i − B2], . . . , s[i − 1]).
Having summarized the four steps, we provide a justification for each of them below.
Step (1) (Recovery of p1[·])
Step (1) is the most elaborate step and is established in Appendix F.3. We summarize it in the following
lemma.
Lemma 8.2. When the erasure burst spans the interval I2 = [i − B2, i − 1], the decoder at receiver 2 can recover each of the overlapping parity symbols p^1_j[t], for t ∈ J2 = [i + B1 − k, i + T1 − 1] and j ∈ {k, . . . , B1 − 1}, by time t, using ←p^3_j[t + T1]|_t for t ∈ J1 = [i, i + B1 − k − 1] and the unerased source packets starting from time i.
Proof. See Appendix F.3.
Step (2) (Removal of p1[·])
Once the overlapping parity-check symbols of C1 in the interval J2 = [i+B1 − k, i+ T1 − 1] have been
recovered in Step (1), they can be cancelled to recover the parity-check symbols of C2. Those packets
p1[t] appearing in the interval t ∈ J3 = [i + T1, i + T2 − 1] are clearly functions of unerased source
packets (since the underlying MS code has a memory of T1); these can be computed by the decoder, and
cancelled at time t to recover the parity-check symbols of C2.
Step (3) (Removal of p3[·])
Recall that C3 is a (B3, T3) MS code with a memory of T3 = B1 − k. We show that the interfering p3[·] in the interval J2 ∪ J3 = [i + T3, i + T2 − 1] can be computed by the decoder and cancelled. Consider the parity-check at time i + T3, namely ←p3[i + B1 − k + T1]|_{i+B1−k}. Following (8.28), it involves parity-check symbols p1[t] of C1 from time

i + B1 − k − T3 + T1 = i + T1

and later. In computing the above we subtract T3 from the time index, as this is the memory of the code, and add T1 as this corresponds to the backward shift. Furthermore, as argued in Step (2) above, the parity-check symbols of C1 at time i + T1 and later only combine unerased source packets. Thus the overlapping p3[t] can be computed and cancelled out as claimed. In a similar fashion all other such packets in the interval [i + T3, i + T2 − 1] can be cancelled out.
Step (4) (Recovery using p2[·])
Using the previous two steps, each p^2_j[t] for j ∈ {0, . . . , T1 − 1} and t ∈ {i + B1 − k, . . . , i + T2 − 1} can be recovered by time t. Since p2[i + B1 − k] = s[i + B1 − k − T2] = s[i − B2], it follows that each erased source packet can be recovered with a delay of T2.
This completes the decoding analysis at receiver 2. Since the analysis of decoder 1 remains unchanged,
the achievability proof is complete when T1 ≤ 2(B1 − k) holds.
Case (B): T1 > 2(B1 − k)
To complete the proof, it remains to consider the case T1 > 2(B1 − k). In this case B1 − k < T1 − (B1 − k), and therefore the MS code with B3 = T1 − (B1 − k) and T3 = B1 − k constructed before is no longer feasible. We begin by expressing T1 − (B1 − k) as follows:

T1 − (B1 − k) = r(B1 − k) + q,   0 ≤ q < B1 − k,  r ≥ 1.    (8.33)
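The decomposition in (8.33) is a Euclidean division; the sketch below (the helper name is my own) checks it, including the fact that r ≥ 1 follows from the case assumption T1 > 2(B1 − k).

```python
# A quick check of the decomposition (8.33): for T1 > 2*(B1 - k),
# T1 - (B1 - k) = r*(B1 - k) + q with 0 <= q < B1 - k and r >= 1.
def decompose(T1, B1, k):
    base = B1 - k
    r, q = divmod(T1 - base, base)
    assert 0 <= q < base and r >= 1   # r >= 1 follows from T1 > 2*(B1 - k)
    return r, q

assert decompose(5, 3, 1) == (1, 1)   # the {(3,5),(7,9)} example, where k = 1
assert decompose(9, 5, 2) == (2, 0)
```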
We construct a total of r + 1 codes C3,n as follows. For n = 1, . . . , r we let C3,n be a (B1 − k, B1 − k) repetition code applied to the last B1 − k symbols in p1[i], shifted back by n(B1 − k), i.e., the corresponding parity-check packets are given by:

p^{3,n}[i] = (p^{3,n}_0[i], . . . , p^{3,n}_{B1−k−1}[i]) = (p^1_k[i + n(B1 − k)], . . . , p^1_{B1−1}[i + n(B1 − k)]).    (8.34)
Let C3,r+1 be a (B_{3,r+1}, T_{3,r+1}) = (q, B1 − k) MS code, again applied on the last B1 − k parity-check symbols (p^1_k[i], . . . , p^1_{B1−1}[i]), constructing q parity-checks p^{3,r+1}[i] = (p^{3,r+1}_0[i], . . . , p^{3,r+1}_{q−1}[i]) at each time by combining the last B1 − k parity-check symbols, (p^1_k[·], . . . , p^1_{B1−1}[·]), as in (8.28) after replacing B3 and T3 with B_{3,r+1} and T_{3,r+1} respectively.

We concatenate the set of streams p^{3,n}[·] for n = 1, . . . , r and p^{3,r+1}[·] after introducing a shift of ∆_{3,r+1} = −T1 in the latter and keeping its causal part. The output packet at time i is as in (8.26), where

q[i] = (q_0[i], . . . , q_{T1−(B1−k)−1}[i]) = ( ←p^{3,1}[i]|_i, . . . , ←p^{3,r}[i]|_i, ←p^{3,r+1}[i + T1]|_i )    (8.35)

is the concatenation of the causal parts of the r + 1 parity-check sub-streams for the codes C3,n for n = 1, . . . , r + 1, respectively. We let C3 be the result of concatenating {C3,1, . . . , C3,r+1} and let p3[t] be the result of concatenating the parity symbols as in (8.35).
In the decoding analysis we only need to revisit steps (1) and (3) in the case T1 ≤ 2(B1 − k) above.
We show that Lemma 8.2 continues to hold for the above construction in Appendix F.3, i.e., each p1[t]
for t ∈ [i + B1 − k, i + T1 − 1] can be decoded with zero delay using the parity symbols of p3[t] in the
interval t ∈ [i, i+B1 − k − 1].
In step (3) we need to show that each p3[t] for t ∈ [i + B1 − k, i + T2 − 1] can be cancelled to recover the associated p2[t]. For each n = 1, 2, . . . , r we have from (8.34) that

←p^{3,n}[t]|_t = ←p^1[t + n(B1 − k)]|_t.    (8.36)

We note that if t + n(B1 − k) ≥ i + T1, then since C1 has a memory of T1, it follows that p1[t + n(B1 − k)] only involves source packets s[·] from time i onwards, which are not erased. Thus these parity symbols can be computed by the decoder. If instead t + n(B1 − k) < i + T1, then each such ←p1[t + n(B1 − k)]|_t is recovered by time i + B1 − k − 1 via Lemma 8.2, and hence is available to the decoder. Finally, for n = r + 1, the decoder can compute and cancel ←p^{3,r+1}[t + T1]|_t in a manner analogous to step (3) in case (A).
8.5.4 Examples
In Appendix F.4, we provide two examples of the above code construction with parameters {(4, 5), (7, 10)} and {(3, 5), (7, 9)}, corresponding to the two cases T1 ≤ 2(B1 − k) and T1 > 2(B1 − k) discussed above.
8.6 Converse Proof in Region (e) (Theorem 8.2)
We start by considering the example {(4, 5), (7, 10)}, illustrating the steps of the converse proof. Then we provide a rigorous converse proof for any point in region (e). We again use the periodic erasure channel technique, with a period of length 12, and assume that the first 7 packets of each period are erased. With 7 erasures, the code C2 = (7, 10) can recover the first two packets s[0] and s[1] by times 10 and 11, respectively (cf. Figure 8.3b). Since the code C1 = (4, 5) is not capable of recovering the remaining 5 erasures, we reveal the erased packet at time t = 2 to the decoder. Now C1 can recover the source packets s[3], . . . , s[6] by times 8 to 11, respectively (i.e., incurring a delay of 5 packets). Finally, the unerased packets in t ∈ [7, 11] are guaranteed to be recovered using the (7, 10) code in the following period. Thus a total of 5 unerased channel packets suffice to recover 6 erased source packets. Therefore a rate of 5/11 upper bounds the capacity of this channel.
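The arithmetic of this example can be verified directly. The check below is my own (exact rational arithmetic via the standard library), confirming that the counting argument for the {(4,5),(7,10)} point reproduces the 5/11 bound.

```python
from fractions import Fraction

# A numerical check of the counting argument for {(4,5),(7,10)}: with m
# periods, m*(2*T1 + B1 + B2 - T2) source packets are recovered from
# (m+1)*T1 channel packets, so the bound converges to T1/(2*T1 + B1 + B2 - T2).
B1, T1, B2, T2 = 4, 5, 7, 10
assert B2 + T1 == 12                          # period length of the channel
bound = Fraction(T1, 2*T1 + B1 + B2 - T2)
assert bound == Fraction(5, 11)
for m in (1, 10, 1000):                       # finite-m bound shrinks to 5/11
    assert Fraction(m + 1, m) * bound >= bound
```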
Figure 8.3: Main steps of finding the upper bound for the {(4, 5), (7, 10)} point lying in region (e) through a one-period illustration of the periodic erasure channel. Grey and white squares denote erased and unerased packets respectively, while hatched squares denote packets revealed to the receiver.
Figure 8.4: One period of the periodic erasure channel used to prove an upper bound on capacity in region (e). Grey and white squares denote erased and unerased packets respectively.
For the general case, one period of the periodic erasure channel to be used is shown in Figure 8.4. Each period has B2 erasures followed by T1 non-erasures. We can assign

a = T1 + B2 − T2,  b = B2 − B1,  c = B2,  d = B2 + T1 (period length),

V_{1,i} = s[id : id + a − 1],  V_{2,i} = s[id + a : id + b − 1],  V_{3,i} = s[id + b : id + c − 1],  V_{4,i} = s[id + c : (i + 1)d − 1],
W_{1,i} = x[id : id + a − 1],  W_{2,i} = x[id + a : id + b − 1],  W_{3,i} = x[id + b : id + c − 1],  W_{4,i} = x[id + c : (i + 1)d − 1],
V_i = (V_{1,i}, V_{2,i}, V_{3,i}, V_{4,i}),  W_i = (W_{1,i}, W_{2,i}, W_{3,i}, W_{4,i}),

where s[t1 : t2] denotes the sequence (s[t1], . . . , s[t2]).
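The interval lengths above determine the per-period packet counts used in the bound. The sketch below (helper name my own) checks that the lengths of V1, V3 and V4 add up to 2T1 + B1 + B2 − T2, under the region (e) condition T2 ≥ T1 + B1.

```python
# A quick check of the per-period counting used below: receiver 2 recovers V1
# (a packets) and V4 (T1 packets), receiver 1 recovers V3 (B1 packets), for a
# total of 2*T1 + B1 + B2 - T2 source packets per period of d = B2 + T1.
def recovered_per_period(B1, T1, B2, T2):
    a, b, c, d = T1 + B2 - T2, B2 - B1, B2, B2 + T1
    assert 0 <= a <= b <= c <= d        # a <= b requires region (e): T2 >= T1 + B1
    return a + (c - b) + (d - c)        # |V1| + |V3| + |V4|

assert recovered_per_period(4, 5, 7, 10) == 2*5 + 4 + 7 - 10  # = 11
```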
We use the decoder of receiver 2 to recover s[0 : a − 1] within a delay of T2 using the channel packets x[c : d − 1]. We then reveal the channel packets x[a : b − 1]. The decoder of receiver 1 can now be used to recover the next B1 source packets, which are the packets s[b : c − 1], using x[c : d − 1] again. In general, we may not have a systematic code, so even if x[c : d − 1] is received, we may not be able to recover the corresponding source packets s[c : d − 1] directly. Instead, s[c : d − 1] can be recovered using the second decoder and the first and second sets of channel packets that are not erased, i.e., x[c : d − 1] and x[d + c : 2d − 1].
So far, we have recovered (T1 + B2 − T2) + B1 + T1 = 2T1 + B1 + B2 − T2 source packets, using 2T1 channel packets. We do not include the source packets s[a : b − 1], because they cannot be decoded from the information in the unerased channel packets. The channel has a period of B2 + T1 packets, and if we had m periods, then we would be able to recover m(2T1 + B1 + B2 − T2) source packets using (m + 1)T1 channel packets. This suggests that the upper bound on the multicast streaming capacity is given by

R ≤ ((m + 1)/m) · T1/(2T1 + B1 + B2 − T2),    (8.37)

which converges to T1/(2T1 + B1 + B2 − T2) as m → ∞.
The more formal proof is as follows. We start by defining the capability of the C1 = (B1, T1) and C2 = (B2, T2) codes in the ith period. In particular, an argument similar to that used in the proof of Lemma A.1 gives the following for m ≥ i + 1:

H(V_{1,i} | W_0^{i−1}, W_{4,0}^m) = 0,
H(V_{3,i} | W_{1,i}, W_{2,i}, W_0^{i−1}, W_{4,0}^m) = 0,
H(V_{4,i} | W_{1,i}, W_{2,i}, W_{3,i}, W_0^{i−1}, W_{4,0}^m) = 0.    (8.38)
Hence,

m(2T1 + B1 + B2 − T2) H(s)
  = H(V_{1,0}^{m−1}, V_{3,0}^{m−1}, V_{4,0}^{m−1})
  (a)= H(V_0^{m−1}) − H(V_{2,0}^{m−1})
  ≤ H(V_0^{m−1}, W_{4,0}^m) − H(V_{2,0}^{m−1})
  = H(W_{4,0}^m) + Σ_{i=0}^{m−1} [ H(V_{1,i} | V_0^{i−1}, W_{4,0}^m) + H(V_{2,i} | V_{1,i}, V_0^{i−1}, W_{4,0}^m) + H(V_{3,i} | V_{1,i}, V_{2,i}, V_0^{i−1}, W_{4,0}^m) + H(V_{4,i} | V_{1,i}, V_{2,i}, V_{3,i}, V_0^{i−1}, W_{4,0}^m) − H(V_{2,i} | V_{2,0}^{i−1}) ]
  (b)≤ H(W_{4,0}^m) + Σ_{i=0}^{m−1} [ H(V_{1,i} | V_0^{i−1}, W_{4,0}^m) + H(V_{3,i} | V_{1,i}, V_{2,i}, V_0^{i−1}, W_{4,0}^m) + H(V_{4,i} | V_{1,i}, V_{2,i}, V_{3,i}, V_0^{i−1}, W_{4,0}^m) ]
  (c)= H(W_{4,0}^m) + Σ_{i=0}^{m−1} [ H(V_{1,i} | W_0^{i−1}, W_{4,0}^m) + H(V_{3,i} | W_{1,i}, W_{2,i}, W_0^{i−1}, W_{4,0}^m) + H(V_{4,i} | W_{1,i}, W_{2,i}, W_{3,i}, W_0^{i−1}, W_{4,0}^m) ]
  (d)= H(W_{4,0}^m)
  ≤ (m + 1)T1 · log |X|,    (8.39)

where (a) follows from the fact that the source packets are i.i.d., (b) follows since conditioning reduces entropy, (c) holds since channel packets are causal functions of source packets (cf. (8.1)), and (d) follows by using (8.38).
Finally, we conclude that the rate of any {(B1, T1), (B2, T2)} code in region (e) must satisfy

R = k/n = H(s)/log |X| ≤ ((m + 1)/m) · T1/(2T1 + B1 + B2 − T2),    (8.40)

which converges to T1/(2T1 + B1 + B2 − T2) as m → ∞, matching our upper bound on the rate in (8.7).
8.7 Upper and Lower Bounds in Region (f) (Theorem 8.3)
8.7.1 Lower Bound
We note that the lower bound C_f^− in Theorem 8.3 is the same expression as the capacity Ce in Theorem 8.2. We argue that the code construction in region (e) can also be used in region (f). The only
difference between the two regions is that the condition T2 ≥ T1 + B1 is satisfied in region (e) whereas
T2 < T1 + B1 holds in region (f). Upon examining the decoding analysis for the code in region (e), we
note that the requirement T2 ≥ T1 + B1 is not used in the analysis of decoder 2. It is however used in
the analysis of decoder 1 and in particular in (8.25). Fortunately it turns out that this condition is not
necessary in the decoding analysis for user 1. As such only the condition T2 > T1 is required, which
holds in region (f) since T1 < B2 holds in this region and B2 ≤ T2 holds by definition. Thus we only
need to explain a modified decoding procedure such that T2 ≥ T1 +B1 is not required.
We consider a channel that introduces a burst of length B1 in the interval [i − B1, i − 1] and show
that the source packets s[i−B1 + j] for j ∈ {0, . . . , B1 − 1} are recovered at time i− B1 + T1 + j.
Since C1 is a (B1, T1) MS code, it suffices to show that in the interval t ∈ [i, i − B1 + T1 + j] the
parity packets p1[t] are all available by time i−B1+T1+ j by cancelling the interfering p2[t] = s[t−T2]
in this period.
We establish the above property recursively, starting with j = 0 and considering the interval t ∈ [i, i + T1 − B1]. The parity-check packets of C2, given by p2[t] = s[t − T2], are not erased in this interval. This follows since t − T2 ≤ i + T1 − B1 − T2 < i − B1, using T2 > T1. Hence the overlapping p1[t] in this interval can be recovered and used to recover s[i − B1] at time i + T1 − B1.
For each j > 0, we assume recursively that the source packets up to time i−B1+ j− 1 are recovered
and the parity-checks p1[·] in the interval [i, i − B1 + T1 + j − 1] are also available. We claim that at
time t = i − B1 + j + T1, the source packet s[i − B1 + j] and the parity-check p1[i − B1 + j + T1]
can also be recovered. Note that the interfering parity-check p2[i − B1 + j + T1] is the source packet
s[i−B1 + j + T1 − T2] which has been recovered since i−B1 + j + T1− T2 ≤ i−B1 + j − 1 as T2 > T1.
Thus this packet can be cancelled out to compute p1[i − B1 + j + T1] and in turn the source packet
s[i−B1 + j] can be recovered.
The claim now follows.
8.7.2 Upper Bound
The proof of the upper bound in region (f) uses a revealing argument similar to that used in the converse
proof in region (e) provided in Section 8.6. We shall use Figure 8.5 to illustrate one period of the periodic
erasure channel used in this proof. One period contains B2 erasures followed by T2 − B1 non-erasures,
for a total of B2 + T2 −B1 packets.
The first B2 − B1 source packets can be recovered using code C2 from x[B2 : B2 + T2 − B1 − 1], which are the T2 − B1 unerased channel packets. We can see that s[0] is recovered at time T2, while s[B2 − B1 − 1] is recovered at time B2 + T2 − B1 − 1. Code C1 recovers the next T2 − T1 source packets, which are s[B2 − B1 : B2 − B1 + T2 − T1 − 1]. We then reveal the remaining channel packets in the block of B2 erased packets, which are the packets x[B2 − B1 + T2 − T1 : B2 − 1]. Finally, code C2 is used to recover s[B2 : B2 + T2 − B1 − 1], using two sets of T2 − B1 unerased channel packets, namely x[B2 : B2 + T2 − B1 − 1] and x[2B2 + T2 − B1 : 2B2 + 2T2 − 2B1 − 1].
Figure 8.5: One period of the periodic erasure channel used to prove the first upper bound in region (f). Grey and white squares denote erased and unerased packets respectively.
In this one period of B2 + T2 − B1 packets, we have recovered s[0 : B2 − B1 − 1], s[B2 − B1 : B2 − B1 + T2 − T1 − 1] and s[B2 : B2 + T2 − B1 − 1]. This is a total of 2(T2 − B1) + (B2 − T1) source packets recovered from 2(T2 − B1) channel packets. We can extrapolate that m(2(T2 − B1) + (B2 − T1)) source packets can be recovered from (m + 1)(T2 − B1) channel packets. As in the region (e) proof, this suggests that the upper bound on the capacity is:

R = k/n = H(s)/log |X| ≤ ((m + 1)/m) · (T2 − B1)/(2(T2 − B1) + (B2 − T1)),    (8.41)

which converges to (T2 − B1)/(2(T2 − B1) + (B2 − T1)) as m → ∞.
For the formal proof, we assign the following:

a = B2 − B1,  b = B2 − B1 + T2 − T1,  c = B2,  d = B2 + T2 − B1 (period length),

V_{1,i} = s[id : id + a − 1],  V_{2,i} = s[id + a : id + b − 1],  V_{3,i} = s[id + b : id + c − 1],  V_{4,i} = s[id + c : (i + 1)d − 1],
W_{1,i} = x[id : id + a − 1],  W_{2,i} = x[id + a : id + b − 1],  W_{3,i} = x[id + b : id + c − 1],  W_{4,i} = x[id + c : (i + 1)d − 1],
V_i = (V_{1,i}, V_{2,i}, V_{3,i}, V_{4,i}),  W_i = (W_{1,i}, W_{2,i}, W_{3,i}, W_{4,i})
A similar argument to that used in the proof of Lemma A.1 can be applied to the C1 = (B1, T1) and C2 = (B2, T2) codes, i.e., for m ≥ i + 1,

H(V_{1,i} | W_0^{i−1}, W_{4,0}^m) = 0,
H(V_{2,i} | W_{1,i}, W_0^{i−1}, W_{4,0}^m) = 0,
H(V_{4,i} | W_{1,i}, W_{2,i}, W_{3,i}, W_0^{i−1}, W_{4,0}^m) = 0.    (8.42)
Hence,

m(2(T2 − B1) + (B2 − T1)) H(s)
  = H(V_{1,0}^{m−1}, V_{2,0}^{m−1}, V_{4,0}^{m−1})
  (a)= H(V_0^{m−1}) − H(V_{3,0}^{m−1})
  ≤ H(V_0^{m−1}, W_{4,0}^m) − H(V_{3,0}^{m−1})
  = H(W_{4,0}^m) + Σ_{i=0}^{m−1} [ H(V_{1,i} | V_0^{i−1}, W_{4,0}^m) + H(V_{2,i} | V_{1,i}, V_0^{i−1}, W_{4,0}^m) + H(V_{3,i} | V_{1,i}, V_{2,i}, V_0^{i−1}, W_{4,0}^m) + H(V_{4,i} | V_{1,i}, V_{2,i}, V_{3,i}, V_0^{i−1}, W_{4,0}^m) − H(V_{3,i} | V_{3,0}^{i−1}) ]
  (b)≤ H(W_{4,0}^m) + Σ_{i=0}^{m−1} [ H(V_{1,i} | V_0^{i−1}, W_{4,0}^m) + H(V_{2,i} | V_{1,i}, V_0^{i−1}, W_{4,0}^m) + H(V_{4,i} | V_{1,i}, V_{2,i}, V_{3,i}, V_0^{i−1}, W_{4,0}^m) ]
  (c)= H(W_{4,0}^m) + Σ_{i=0}^{m−1} [ H(V_{1,i} | W_0^{i−1}, W_{4,0}^m) + H(V_{2,i} | W_{1,i}, W_0^{i−1}, W_{4,0}^m) + H(V_{4,i} | W_{1,i}, W_{2,i}, W_{3,i}, W_0^{i−1}, W_{4,0}^m) ]
  (d)= H(W_{4,0}^m)
  ≤ (m + 1)(T2 − B1) · log |X|,    (8.43)

Table 8.2: Mu-SCo construction for (B1, T1) = (4, 4) and (B2, T2) = (5, 6). This point achieves the upper bound C_f^+ given in Theorem 8.3, as stated in Proposition 8.1, since T1 = B1 = 4.

[i]               | [i+1]             | [i+2]             | [i+3]             | [i+4]             | [i+5]
s_0[i]            | s_0[i+1]          | s_0[i+2]          | s_0[i+3]          | s_0[i+4]          | s_0[i+5]
s_1[i]            | s_1[i+1]          | s_1[i+2]          | s_1[i+3]          | s_1[i+4]          | s_1[i+5]
s_0[i−4]          | s_0[i−3]          | s_0[i−2]          | s_0[i−1]          | s_0[i]            | s_0[i+1]
s_1[i−4]          | s_1[i−3]          | s_1[i−2]          | s_1[i−1]          | s_1[i]            | s_1[i+1]
s_0[i−6]+s_1[i−5] | s_0[i−5]+s_1[i−4] | s_0[i−4]+s_1[i−3] | s_0[i−3]+s_1[i−2] | s_0[i−2]+s_1[i−1] | s_0[i−1]+s_1[i]
where (a) follows from the fact that the source packets are i.i.d., (b) follows since conditioning reduces entropy, (c) follows by using (8.1), and (d) follows by using (8.42).
In other words,

R = k/n = H(s)/log |X| ≤ ((m + 1)/m) · (T2 − B1)/(2(T2 − B1) + (B2 − T1)),    (8.44)

which converges to (T2 − B1)/(2(T2 − B1) + (B2 − T1)) as m → ∞. Therefore, (8.44) upper bounds the rate of any {(B1, T1), (B2, T2)} code in region (f).
8.8 Special Cases in Region (f)
In this section, we present the capacity in region (f) for the special cases T1 = B1 and T2 = B2, and then present a conjecture on the capacity in the general case.
8.8.1 Achievability Scheme in Region (f) at T1 = B1 (Proposition 8.1)
For the special case when T1 = B1, we provide a code construction that attains the upper bound C_f^+. We begin with an example of a {(4, 4), (5, 6)} Mu-SCo construction of rate 2/5, as shown in Table 8.2. We split each source packet into two symbols as shown. A (4, 4) repetition code is applied, resulting in the first two rows of parity-checks, and then a (B2 − B1, T2 − T1) = (1, 2) MS code is applied, whose parity-checks are shifted by T1 = 4, forming the last row. Note that the first user can recover from any burst erasure of length 4 within a delay of 4 packets using the first two rows of parity-check symbols. For the second user, suppose a burst erasure of length 5 takes place from time i − 5 to i − 1. User 2 recovers s_1[i − 5] and s_0[i − 5] from the last row of parity-checks at times t = i and t = i + 1, respectively, i.e., within a delay of T2 = 6. The rest of the erased source packets are recovered with a delay of T1 = 4 using the repetition code.
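This decoding can be checked numerically. Below is a toy simulation of the {(4,4),(5,6)} example (my own sketch, not code from the thesis), run over the integers rather than a finite field; cancelling a known symbol from a linear parity-check works the same way in either setting.

```python
# A toy simulation of the {(4,4),(5,6)} construction in Table 8.2: each source
# packet is a pair (s0[i], s1[i]) and x[i] = (s[i], s[i-4], s0[i-6] + s1[i-5]).
import random

random.seed(0)
T = 40
s = {i: (random.randrange(100), random.randrange(100)) for i in range(-7, T)}

def x(i):
    return (s[i], s[i - 4], s[i - 6][0] + s[i - 5][1])

erased = set(range(10, 15))                 # a burst of B2 = 5: x[10..14]
rx = {i: x(i) for i in range(T) if i not in erased}

dec = {}
i = 15                                      # first time after the burst
s1 = rx[i][2] - rx[i - 6][0][0]             # s1[10] from the parity at time 15
s0 = rx[i + 1][2] - rx[i][1][1]             # s0[10]: cancel s1[11] carried in rx[15]
dec[10] = (s0, s1)                          # recovered with delay T2 = 6
for j in range(11, 15):                     # repetition rows: delay T1 = 4
    dec[j] = rx[j + 4][1]

assert dec == {j: s[j] for j in range(10, 15)}
```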
Code Construction
Our proposed code construction, which achieves the minimum delay for user 1, i.e., T1 = B1, is as follows.

• Split each source packet s[i] into T2 − B1 = T2 − T1 symbols, s[i] = (s_0[i], . . . , s_{T2−T1−1}[i]).

• Let C1 be the single-user (B1, T1) = (T1, T1) MS code obtained by repeating the source packets to produce T2 − T1 parity-check symbols, i.e.,

p1[i] = (p^1_0[i], . . . , p^1_{T2−T1−1}[i]) = (s_0[i − T1], . . . , s_{T2−T1−1}[i − T1]) = s[i − T1].    (8.45)
• Let C2 be a (B2 − B1, T2 − T1) MS code applied to the source packets s[i], constructing B2 − B1 parity-checks p2[i] = (p^2_0[i], . . . , p^2_{B2−B1−1}[i]) at each time by combining the source symbols diagonally.
• Concatenate the two streams p1[·] and p2[·] after introducing a shift of T1 in the second stream.
The output packet at time i is x[i] = (s[i],p1[i],p2[i− T1]).
Since there are T2 − T1 and B2 − B1 parity-check symbols for every T2 − T1 source symbols, it follows that the rate of the code is (T2 − T1)/(2(T2 − T1) + (B2 − B1)) = C_f^+.
Decoding at User 1

A burst erasure of length B1 can be directly recovered using the stream of parity-checks p1[·] produced by code C1, within a delay of T1. Recall that this immediately follows since the parity-checks of the two codes are concatenated and not added.
Decoding at User 2
Suppose that the packets x[i − B2], . . . , x[i − 1] are erased by the channel of user 2. We first show how the receiver can recover s[t] for t ∈ [i − B2, i − B1 − 1] at time t + T2. To recover s[t], the code C2, which is a (B2 − B1, T2 − T1) MS code, can be used provided that the corresponding parity-checks starting at time i − B1 are available. Due to the forward shift of T1 = B1 applied in our construction, these parity-checks appear starting at time t = i and are clearly not erased. Secondly, for the recovery of s[t] we also need the source packets in the interval [i − B1, t + T2 − T1]. The C1 repetition code guarantees that these are in fact available by time t + T2. This shows that all the erased packets in the interval [i − B2, i − B1 − 1] can be recovered. The remaining packets in the interval [i − B1, i − 1] are recovered using the C1 repetition code. This completes the decoding analysis for user 2.
8.8.2 Converse Proof in Region (f) at T2 = B2 (Proposition 8.2)
In contrast to the special case when T1 = B1, where the upper bound C_f^+ in Theorem 8.3 is shown to be tight, we show that in the case when T2 = B2 the lower bound C_f^− is the true capacity. We do this by presenting a tighter upper bound, which relies on the double recovery of some source packets, once using code C1 and once using code C2.

Figure 8.6: Main steps of finding the upper bound for the {(2, 3), (4, 4)} point lying in region (f) through a one-period illustration of the periodic erasure channel. Grey and white squares denote erased and unerased packets respectively. Note that the packet at time 2 is recovered by both codes C1 and C2.

We illustrate the main idea of this converse by considering the specific point {(2, 3), (4, 4)} shown in Figure 8.6. We start by considering a periodic erasure channel with period length 7. The first 4 packets are erased while the rest are unerased. With 4 erasures, the code C2 = (4, 4) can recover the first two packets s[0] and s[1] by times 4 and 5, respectively. We note that the channel packet at time i is sufficient to recover the source packet at time i − 4 (i.e., no further channel packets are required). Step (3) in Figure 8.6 gives the main idea of this converse. Since there are two remaining erasures, the source packet s[2] can be recovered using C1 = (2, 3) within a delay of 3 (i.e., by time 5). Also, the same source packet can be decoded using C2 by time 6 (a double recovery). The remaining erasure can be recovered using C1 by time 6. Moreover, the repetition code C2 = (4, 4) can recover the source packets s[4], s[5] and s[6] from their corresponding channel packets. Therefore, the three unerased channel packets are capable of recovering a total of 8 source packets (the packet at time 2 is recovered twice), which implies that a rate of 3/8 is an upper bound.
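The double-recovery count for this example is easy to verify. The check below is my own arithmetic, using exact rationals from the standard library.

```python
from fractions import Fraction

# Checking the double-recovery count for {(2,3),(4,4)}: one period has
# B2 + T1 = 7 packets; the T1 = 3 unerased packets account for 2*T1 + B1 = 8
# recoveries (s[2] is counted twice), giving the bound T1/(2*T1 + B1) = 3/8.
B1, T1, B2, T2 = 2, 3, 4, 4
assert B2 == T2                      # the special case of Proposition 8.2
assert B2 + T1 == 7                  # period length
assert 2*T1 + B1 == 8                # total recoveries, one double-counted
assert Fraction(T1, 2*T1 + B1) == Fraction(3, 8)
```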
For the general case, one period of the corresponding periodic erasure channel to be used for proving the upper bound is given in Figure 8.7. Each period has B2 erasures followed by T1 non-erasures. In Figure 8.7, we have the first period of the erasure channel. The key is to show that the received channel packets x[B2 : B2 + T1 − 1] alone can recover all of the source packets in the period, and that there is enough information in the channel packets to recover some of the source packets twice. The fact that we have two decoders allows some of the source packets to be decoded by mutually exclusive groups of channel packets, but when we put all of the channel packets together, the redundant information in the channel packets does affect the maximum achievable rate of the code.
The source packets that can be recovered from x[B2 : B2 + T1 − 1] are s[0 : T1 − 1], s[B2 − B1 : B2 − 1] and s[B2 : B2 + T1 − 1]. As Figure 8.7 shows, the first two groups of source packets overlap. The overlap consists of the packets s[B2 − B1 : T1 − 1]. The reason why we can use a single period in the proof is that the B2 = T2 constraint allows us to decode the final group of source packets s[B2 : B2 + T1 − 1] using only the packets x[B2 : B2 + T1 − 1], without requiring any future channel packets.
Figure 8.7: One period of the periodic erasure channel used to prove an upper bound on capacity in region (f) for the special case T2 = B2. Grey and white squares denote erased and unerased packets respectively.

Assuming that what we have just described is possible, we then have T1 channel packets that recover 2T1 + B1 source packets. We should be able to write the relation:

(2T1 + B1) · H(s) ≤ T1 · log |X|,

so that

R = k/n = H(s)/log |X| ≤ T1/(2T1 + B1).    (8.46)
The formal proof shows that this is indeed possible. We can split the proof into three major parts.

1. The source packets $\mathbf{s}\big[^{B_2-B_1-1}_{0}\big]$ and $\mathbf{s}\big[^{2B_2-B_1-T_1-1}_{B_2-B_1}\big]$ can be recovered from the channel packets $\mathbf{x}\big[^{2B_2-B_1-1}_{B_2}\big]$ using the $(B_2, B_2)$ and $(B_1, T_1)$ decoders respectively, i.e.,

$$H\Big(\mathbf{s}\big[^{B_2-B_1-1}_{0}\big] \,\Big|\, \mathbf{x}\big[^{2B_2-B_1-1}_{B_2}\big]\Big) = 0, \qquad (8.47)$$

$$H\Big(\mathbf{s}\big[^{2B_2-B_1-T_1-1}_{B_2-B_1}\big] \,\Big|\, \mathbf{x}\big[^{2B_2-B_1-1}_{B_2}\big], \mathbf{s}\big[^{B_2-B_1-1}_{0}\big]\Big) = 0. \qquad (8.48)$$
Next, we can write

$$\begin{aligned}
H\Big(\mathbf{x}\big[^{2B_2-B_1-1}_{B_2}\big]\Big) &= H\Big(\mathbf{s}\big[^{B_2-B_1-1}_{0}\big], \mathbf{s}\big[^{2B_2-B_1-T_1-1}_{B_2-B_1}\big], \mathbf{x}\big[^{2B_2-B_1-1}_{B_2}\big]\Big) - H\Big(\mathbf{s}\big[^{B_2-B_1-1}_{0}\big] \,\Big|\, \mathbf{x}\big[^{2B_2-B_1-1}_{B_2}\big]\Big) \\
&\quad - H\Big(\mathbf{s}\big[^{2B_2-B_1-T_1-1}_{B_2-B_1}\big] \,\Big|\, \mathbf{s}\big[^{B_2-B_1-1}_{0}\big], \mathbf{x}\big[^{2B_2-B_1-1}_{B_2}\big]\Big) \\
&\stackrel{(a)}{=} H\Big(\mathbf{s}\big[^{2B_2-B_1-T_1-1}_{0}\big], \mathbf{x}\big[^{2B_2-B_1-1}_{B_2}\big]\Big) \\
&= H\Big(\mathbf{s}\big[^{2B_2-B_1-T_1-1}_{0}\big]\Big) + H\Big(\mathbf{x}\big[^{2B_2-B_1-1}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{2B_2-B_1-T_1-1}_{0}\big]\Big) \\
&= H\Big(\mathbf{s}\big[^{2B_2-B_1-T_1-1}_{0}\big]\Big) + H\Big(\mathbf{x}\big[^{2B_2-B_1-1}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{2B_2-B_1-T_1-1}_{0}\big], \mathbf{x}\big[^{2B_2-B_1-T_1-1}_{0}\big]\Big), \qquad (8.49)
\end{aligned}$$

where we used (8.47) and (8.48) to remove the negative terms before step (a).
2. In this step, we show that two source packets $\mathbf{s}[m-B_2]$ and $\mathbf{s}[m-T_1]$ can be recovered from each added channel packet $\mathbf{x}[m]$ for $m \in \{2B_2-B_1, \ldots, B_2+T_1-1\}$. We start by establishing the following lemma.

Lemma 8.3. The following inequality is true for all $m \ge 2B_2 - B_1 - 1$:

$$\sum_{i=B_2}^{m} H(\mathbf{x}[i]) \ge H\Big(\mathbf{s}\big[^{m-B_2}_{0}\big]\Big) + H\Big(\mathbf{s}\big[^{m-T_1}_{B_2-B_1}\big]\Big) + H\Big(\mathbf{x}\big[^{m}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{m-B_2}_{0}\big], \mathbf{s}\big[^{m-T_1}_{B_2-B_1}\big], \mathbf{x}\big[^{m-B_2}_{0}\big]\Big) \qquad (8.50)$$

Proof. See Appendix F.5.
We then substitute $m = B_2 + T_1 - 1$ into (8.50):

$$\sum_{i=B_2}^{B_2+T_1-1} H(\mathbf{x}[i]) \ge H\Big(\mathbf{s}\big[^{T_1-1}_{0}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big]\Big) + H\Big(\mathbf{x}\big[^{B_2+T_1-1}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{T_1-1}_{0}\big], \mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big], \mathbf{x}\big[^{T_1-1}_{0}\big]\Big). \qquad (8.51)$$
3. We can recover $\mathbf{s}\big[^{B_2+T_1-1}_{B_2}\big]$ from $\mathbf{x}\big[^{B_2+T_1-1}_{B_2}\big]$ given the previous channel packets $\mathbf{x}\big[^{B_2-1}_{0}\big]$ using decoder 2, so we can write

$$H\Big(\mathbf{s}\big[^{B_2+T_1-1}_{B_2}\big] \,\Big|\, \mathbf{x}\big[^{B_2+T_1-1}_{0}\big]\Big) = 0. \qquad (8.52)$$
Using (8.52), we continue with (8.51) to get

$$\begin{aligned}
\sum_{i=B_2}^{B_2+T_1-1} H(\mathbf{x}[i]) &\ge H\Big(\mathbf{s}\big[^{T_1-1}_{0}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big]\Big) + H\Big(\mathbf{x}\big[^{B_2+T_1-1}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{T_1-1}_{0}\big], \mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big], \mathbf{x}\big[^{T_1-1}_{0}\big]\Big) \\
&= H\Big(\mathbf{s}\big[^{T_1-1}_{0}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2+T_1-1}_{B_2}\big], \mathbf{x}\big[^{B_2+T_1-1}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{T_1-1}_{0}\big], \mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big], \mathbf{x}\big[^{B_2-1}_{0}\big]\Big) \\
&\quad - H\Big(\mathbf{s}\big[^{B_2+T_1-1}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{T_1-1}_{0}\big], \mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big], \mathbf{x}\big[^{B_2+T_1-1}_{0}\big]\Big) \\
&\stackrel{(b)}{=} H\Big(\mathbf{s}\big[^{T_1-1}_{0}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2+T_1-1}_{B_2}\big], \mathbf{x}\big[^{B_2+T_1-1}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{T_1-1}_{0}\big], \mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big], \mathbf{x}\big[^{B_2-1}_{0}\big]\Big) \\
&= H\Big(\mathbf{s}\big[^{T_1-1}_{0}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2+T_1-1}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{T_1-1}_{0}\big], \mathbf{s}\big[^{B_2-1}_{B_2-B_1}\big], \mathbf{x}\big[^{B_2-1}_{0}\big]\Big) \\
&\quad + H\Big(\mathbf{x}\big[^{B_2+T_1-1}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{T_1-1}_{0}\big], \mathbf{s}\big[^{B_2+T_1-1}_{B_2-B_1}\big], \mathbf{x}\big[^{B_2-1}_{0}\big]\Big) \\
&= H\Big(\mathbf{s}\big[^{T_1-1}_{0}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2+T_1-1}_{B_2-B_1}\big]\Big) + H\Big(\mathbf{x}\big[^{B_2+T_1-1}_{B_2}\big] \,\Big|\, \mathbf{s}\big[^{T_1-1}_{0}\big], \mathbf{s}\big[^{B_2+T_1-1}_{B_2-B_1}\big], \mathbf{x}\big[^{B_2-1}_{0}\big]\Big) \\
&\ge H\Big(\mathbf{s}\big[^{T_1-1}_{0}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2+T_1-1}_{B_2-B_1}\big]\Big), \qquad (8.53)
\end{aligned}$$

where step (b) makes use of (8.52).
Finally, we use the fact that all source packets have the same entropy and all channel packets have the same size to write

$$\sum_{i=B_2}^{B_2+T_1-1} H(\mathbf{x}[i]) \ge H\Big(\mathbf{s}\big[^{T_1-1}_{0}\big]\Big) + H\Big(\mathbf{s}\big[^{B_2+T_1-1}_{B_2-B_1}\big]\Big) \;\Longrightarrow\; T_1 \cdot \log|\mathcal{X}| \ge (2T_1 + B_1) \cdot H(s) \;\Longrightarrow\; R = \frac{k}{n} = \frac{H(s)}{\log|\mathcal{X}|} \le \frac{T_1}{2T_1 + B_1}, \qquad (8.54)$$

which matches the upper bound in (8.12).
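The packet count behind the final step can be verified directly: the two entropy terms on the right-hand side cover $T_1$ and $T_1 + B_1$ source packets respectively. A minimal sketch (the function name is ours):

```python
from fractions import Fraction

def rate_bound(B1, T1, B2):
    """Rate bound implied by (8.54): T1*log|X| >= (2*T1 + B1)*H(s)."""
    n1 = len(range(0, T1))               # packets in s[0 .. T1-1]
    n2 = len(range(B2 - B1, B2 + T1))    # packets in s[B2-B1 .. B2+T1-1]
    assert n1 + n2 == 2 * T1 + B1        # total entropy terms on the R.H.S.
    return Fraction(T1, n1 + n2)

# For (B1, T1) = (10, 16) this gives 8/21, independent of B2,
# matching the values on the T2 = B2 edge of Figure 8.9:
print(rate_bound(10, 16, 20))  # -> 8/21
```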
8.8.3 Conjectured Capacity in Region (f)
Conjecture 8.1. For any given point $\{(B_1, T_1), (B_2, T_2)\}$ in region (f) defined by $B_2 \le T_2 \le T_1 + B_1$ and $B_1 \le T_1 < B_2$, the capacity is given by

$$C_f = \frac{T_1}{2T_1 + \frac{B_2 - T_1}{T_2 - T_1} B_1}. \qquad (8.55)$$
Figure 8.8: Capacity behavior in the $(B_2, T_2)$ plane. We hold $B_1$ and $T_1$ as constants, so the regions depend on the relation between $T_2$ and $B_2$ only. The dashed line gives the contour of constant capacity in region (e), as well as in the special case of $T_1 = B_1$ in region (f).
One can see that the capacity in (8.55) coincides with the capacity results in Propositions 8.1 and 8.2 by simply substituting $T_1 = B_1$ and $T_2 = B_2$, respectively, in (8.55). It also maintains the continuity in capacity at the edges of region (f) with regions (c) and (e). In particular, the results of substituting $B_2 = T_1$ and $T_2 = T_1 + B_1$ in (8.55) match the second case in (8.5) and (8.7), respectively.

For a given $B_1$ and $T_1$, the conjectured capacity is constant for all points satisfying

$$\frac{B_2 - T_1}{T_2 - T_1} = k \;\Longrightarrow\; T_2 = \frac{1}{k}(B_2 - T_1) + T_1. \qquad (8.56)$$

Hence, the capacity can be shown to be constant on the straight lines passing through the point $(B_2, T_2) = (T_1, T_1)$ with slope $1/k$ in Figure 8.8. However, this point is not included since $C_f$ in (8.55) is undetermined at $(B_2, T_2) = (T_1, T_1)$. The capacity at this point is $1/2$ and can be obtained from the capacity in region (c) in (8.13).
Also note that the capacity expression in region (e), $C_e$, only depends on $B_2$ and $T_2$ via the difference $B_2 - T_2$. To identify the contour of constant capacity in region (e), it is natural to fix $B_1$ and $T_1$ and classify the various regions as shown in Figure 8.8. Observe that the streaming capacity for any point in region (e) is constant along the 45-degree line and is equal to the multicast upper bound at the lowest point on the horizontal line, $T_2 = T_1 + B_1$, separating regions (e) and (f) in Figure 8.8.
Figure 8.9: An example of region (f) in the $(B_2, T_2)$ plane for $(B_1, T_1) = (10, 16)$. The dashed lines give some examples of the contour of constant conjectured capacity in region (f). This conjecture is proved for the case $B_2 = T_1$, which is the left vertical edge of the triangle, the case $T_2 = T_1 + B_1$, which is the upper horizontal edge of the triangle, and the case $T_2 = B_2$, which is the right 45-degree edge. It is also proved for the special case of $T_1 = B_1$, which is not shown in this figure.

An example of region (f) with $(B_1, T_1) = (10, 16)$ is illustrated in Figure 8.9. Different points denote different $B_2$ and $T_2$ values, as shown on the x and y axes respectively. The fraction at each point is the conjectured capacity as per (8.55). One can see that the conjectured capacity is constant on the straight lines passing through the point $(B_2, T_2) = (T_1, T_1) = (16, 16)$, but not including it.
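The conjectured expression (8.55) and its constant-capacity contours can be evaluated directly; the sketch below (the function name is ours) reproduces two of the fractions shown in Figure 8.9 for $(B_1, T_1) = (10, 16)$:

```python
from fractions import Fraction

def conjectured_capacity_f(B1, T1, B2, T2):
    """Conjectured capacity (8.55) in region (f):
    Cf = T1 / (2*T1 + ((B2 - T1)/(T2 - T1)) * B1).
    Undetermined at (B2, T2) = (T1, T1)."""
    return Fraction(T1 * (T2 - T1), 2 * T1 * (T2 - T1) + (B2 - T1) * B1)

print(conjectured_capacity_f(10, 16, 17, 17))  # -> 8/21
print(conjectured_capacity_f(10, 16, 17, 18))  # -> 16/37
# Constant along lines through (T1, T1) = (16, 16): both points below
# lie on the contour with k = (B2 - T1)/(T2 - T1) = 1/2.
assert conjectured_capacity_f(10, 16, 17, 18) == conjectured_capacity_f(10, 16, 18, 20)
```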
8.9 Conclusion
In this chapter, we study the problem of multicast streaming over burst erasure channels. We consider
a setup with one transmitter and two receivers. Each receiver sees a burst erasure channel which erases
a single burst of maximum length Bi for i = 1, 2 and is interested in recovering each source packet
within a maximum delay of Ti packets. We show that the streaming capacity intricately depends on
the burst and delay parameters. Consequently, we divide the operation into two main regimes, large-
delay and low-delay regimes. In the large-delay regime, we characterize the capacity and observe that
the delay of one of the receivers can be reduced up to a certain critical value without reducing the
capacity. This property is then used to show that the capacity can be achieved by either a MS code in
Section 2.6.1 or a DE-SCo in Section 4.5. In the low-delay regime, a new code construction is developed
where parity-checks are generated in multiple layers and then carefully combined to meet the required
decoding constraints. We show that such code achieves the capacity for a subset of the regime. For the
rest of the low-delay regime, upper and lower bounds are provided and the capacity is only characterized
for some special cases.
Chapter 9
Conclusion
In this thesis, we studied low-delay streaming codes for packet-erasure channels. In Chapter 2, a sliding
window erasure channel model was introduced where the set of allowable erasure patterns was constrained
in each window of W consecutive packets. We observed that such local structure of erasure patterns is
critical in establishing fundamental limits when delay constraints are imposed. For example, for a given
rate and delay, the burst length and the number of isolated erasures that can be recovered are not the
same. Also, the algebraic metrics of the associated codes for each class of patterns are different. It is
worth mentioning that compound channels that introduce both burst and random errors are studied
in the literature. For example, in [87], a class of codes was proposed that combines Fire codes, which
are burst correcting codes, with BCH codes, which are random error correcting codes. In [88–91], the
burst and random error correction and detection capabilities of different codes were studied. However,
as mentioned earlier, both the fundamental limits and the structure of codes are different when delay constraints are imposed, which is the focus of this thesis.
In Chapter 3, we considered a channel which introduces either a single erasure burst of certain
maximum length, or a certain maximum number of isolated erasures in any window of length W . These
patterns were motivated by the dominant erasure patterns observed in statistical models such as Gilbert-
Elliott and Fritchman channels. We showed that, for a given rate and decoding delay, there exists a
tradeoff between the burst erasure correction and isolated erasure correction capability of a given code.
In Chapter 6, we showed that such tradeoff can be translated to a fundamental tradeoff between two
algebraic metrics of a convolutional code, column distance and column span. We then constructed a class
of codes - Maximum Distance and Span (MiDAS) codes - which achieve a tradeoff that is at most one
unit of delay far from the upper bound. These codes involve a layering technique to carefully combine
two codes, one designed for burst erasures and the other for isolated erasures.
In Chapter 4, we considered a more sophisticated channel model that introduces both burst and
isolated erasures in the same window. For such channels, aiming for perfect recovery from all associated
patterns requires a large overhead. Therefore, we proposed a class of Partial Recovery Codes (PRC)
that can recover all but one source packet in any feasible pattern. In simulations over Gilbert-Elliott
and Fritchman channel models, we showed that our proposed codes indeed outperform baseline schemes.
However, the optimality of PRC codes is still an open problem and we will leave this extension for future
work.
Both MiDAS and PRC use m-MDS codes as constituent codes which are shown to exist for field-sizes
that grow exponentially in the delay. This might not be practical in some cases. Hence, we considered
alternative constructions for both codes which involve diagonally interleaved block MDS codes instead
of m-MDS codes. The alternative constructions exist for field-sizes that grow polynomially in the delay.
Both MiDAS constructions attain the same rate, while the block MDS version of PRC codes achieves a
slightly lower rate than that using m-MDS codes.
We further considered an extension of our streaming setup in Chapter 5 where the source-arrival
and channel-transmission rates are unequal. In particular, M > 1 channel packets must be transmitted
between any two successive source packets. In this case, we showed that a simple adaptation of the
code constructions designed for M = 1 is sub-optimal and proposed new codes that achieve the capacity
for burst erasure channels in the case of unequal rates. We then discussed a robust extension of these
codes which also recovers from isolated erasures but without claiming its optimality. As a future work,
it might be interesting to investigate different approaches for constructing robust codes when M > 1.
In Chapters 7 and 8, we considered a multicast setup with two users. The channel of user i introduces a burst of length Bi, and user i must recover each source packet within delay Ti. We showed that the capacity intricately depends on the burst-delay
parameters. In Chapter 7, we developed Diversity-Embedded Streaming Codes that achieve the capacity
of the stronger user, while still correcting the longer burst of the weaker user with minimum possible
delay. Thereafter in Chapter 8, we considered all possible values of {(B1, T1), (B2, T2)} and characterized
the capacity in most cases and proposed associated code constructions. From a practical point of view,
multicast codes can also be used in a single user scenario. Instead of designing the codes for the worst-
case burst length, one can allow the decoder to recover from short bursts within a short delay and
from longer bursts within a longer delay. One possibility of future work can be extending multicast
streaming codes for channels that introduce both bursts and isolated erasures. It remains to be seen
whether a layering approach similar to that used in constructing MiDAS codes in Chapter 3 suffices for
the multicast case.
As a final note, in this thesis, we focus on constructing low-delay streaming codes for erasure channels.
We study the associated fundamental limits, decoding analysis and algebraic properties. On a practical
level, we also study the performance of the proposed codes over practical channels. In Table. 1.1 in
Chapter 1, we indicate the bitrate and delay constraints of some streaming applications. We also
compute the delay in packets and use such values in all the experiments in the thesis. For such values,
we show that a significant performance gain over classical codes can be achieved using different proposed
codes for different channel parameters. For example, Skype uses an FEC redundancy ratio which is
about 4.5 times the actual packet loss rate. Such a high ratio is most likely motivated by the delay
constraints associated with each packet [92,93]. Hence, we argue that replacing classical codes with our
proposed codes, which are designed for delay-constrained communications, in some existing streaming applications will result in better performance. This is a possible direction for future work. Also, we note
that the source packets are assumed to arrive at the encoder at a fixed rate in this thesis. Although such an assumption is fulfilled in many applications, it is not the case in others. For the variable source
arrival rate case, some modifications to the proposed codes might be needed. For example, in [94], the
authors considered minimizing the overall loss probability by selecting the code block length when the
arrival rate is bursty over a discrete memoryless channel.
Appendix A
Background
A.1 Proof of Lemma 2.1
In order to establish L1, consider the interval $[0, j]$ and two input sequences $(\mathbf{s}[0], \ldots, \mathbf{s}[j])$ and $(\mathbf{s}'[0], \ldots, \mathbf{s}'[j])$ with $\mathbf{s}[0] \neq \mathbf{s}'[0]$. Let the corresponding outputs be $(\mathbf{x}[0], \ldots, \mathbf{x}[j])$ and $(\mathbf{x}'[0], \ldots, \mathbf{x}'[j])$. Note that the output sequences differ in at least $d_j = (n-k)(j+1) + 1$ symbols, since otherwise the output sequence $(\mathbf{x}[0] - \mathbf{x}'[0], \ldots, \mathbf{x}[j] - \mathbf{x}'[j])$, which corresponds to $(\mathbf{s}[0] - \mathbf{s}'[0], \ldots, \mathbf{s}[j] - \mathbf{s}'[j])$, would have a Hamming weight less than $d_j$ while the input $\mathbf{s}[0] - \mathbf{s}'[0] \neq 0$, which is a contradiction. Thus, if $(\mathbf{s}[0], \ldots, \mathbf{s}[j])$ is the input source sequence, then for any pattern of $d_j - 1$ or fewer erased symbols there will be at least one unerased packet where $(\mathbf{x}'[0], \ldots, \mathbf{x}'[j])$ differs from $(\mathbf{x}[0], \ldots, \mathbf{x}[j])$. Thus $\mathbf{s}[0]$ is recovered uniquely at time $j$. Once $\mathbf{s}[0]$ is recovered, we can cancel its contribution from all the future packets and repeat the same argument for the interval $[1, j+1]$ to recover $\mathbf{s}[1]$, and proceed.
To establish L2, we use the notation $\mathcal{W}_i(l)$ to denote a window of length $l \cdot n$ starting at time $i \cdot n$, i.e., $\mathcal{W}_i(l) = [i \cdot n, (i+l)n - 1]$ (see Figure A.1). We show that the entire erasure burst can be recovered through the following steps.
• In the window $\mathcal{W}_0(j+1) = [0, (j+1)n - 1]$, the channel introduces $B \le (n-k)(j+1)$ erasures. Hence, we use L1 to recover $\mathbf{s}[0] = (s_0[0], \ldots, s_{k-1}[0])$ at time $(j+1)n - 1$, among which only the last $k - c$ symbols are erased. At this point we can also compute the $n - k$ symbols of $\mathbf{p}[0] = (p_0[0], \ldots, p_{n-k-1}[0])$. Thus all the symbols until time $t = n - 1$ have now been recovered by the decoder.

• The next window $\mathcal{W}_1(j) = [n, (j+1)n - 1]$ has $B - (n - c) < j(n-k)$ erasures since $c < k$. Hence, L1 can be used to recover $\mathbf{s}[1]$ at time $(j+1)n - 1$, and $\mathbf{p}[1]$ can be computed consequently.

• Similarly, $\mathcal{W}_2(j-1) = [2n, (j+1)n - 1]$ has $B - (n - c) - n < (n-k)(j-1)$ erasures, which implies the recovery of $\mathbf{s}[2]$ at time $(j+1)n - 1$.

• Repeating the previous step for $\mathcal{W}_i(j - i + 1) = [i \cdot n, (j+1)n - 1]$ and $i \cdot n \le c + B - 1$, one can recover all erased packets in the erasure burst at time $(j+1)n - 1$.
The proof of L2 is thus complete. The claim in L3 is a generalization of L2, as it permits the erasure
pattern to have both burst and isolated erasures, but only guarantees the recovery of the burst erasure.
139
Appendix A. Background 140
Figure A.1: An erasure channel with $B$ erasures in a burst starting at time $c$, used in proving L2 in Lemma 2.1. Grey and white squares denote erased and unerased symbols respectively.
To establish L3 we can proceed in a similar fashion as above and stop when the recovery of the erasure
burst is complete.
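The window-by-window erasure budget in the proof of L2 can be checked numerically; a minimal sketch under illustrative parameters (the helper name is ours):

```python
def window_erasure_counts(n, k, j, c, B):
    """For a burst of B symbol erasures starting at position c (0 <= c < k),
    count the erasures falling in each window W_i(j-i+1) = [i*n, (j+1)*n - 1]
    and the budget (n-k)*(j-i+1) of erasures that L1 can handle there."""
    out = []
    for i in range(j + 1):
        erased = max(0, min(c + B, (j + 1) * n) - max(c, i * n))
        budget = (n - k) * (j - i + 1)
        out.append((erased, budget))
    return out

# With B <= (n-k)*(j+1), every window stays within its budget, so the
# recovery argument can proceed window by window:
for erased, budget in window_erasure_counts(n=5, k=3, j=3, c=2, B=8):
    assert erased <= budget
```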
A.2 Information Theoretic Converse of Theorem 2.3
In this section we provide an information theoretic converse to Theorem 2.3. While the capacity of the
point-to-point case was established in [35–37], the converse argument was based on a periodic erasure
channel (PEC). Our information theoretic approach is not only more rigorous but also generalizes to the
multicast setting in subsequent sections.
Let us use the following notation:

$$\mathbf{s}\big[^{b}_{a}\big] = \begin{cases} \mathbf{s}[a], \mathbf{s}[a+1], \ldots, \mathbf{s}[b-1], \mathbf{s}[b], & a \le b \\ \emptyset, & \text{otherwise} \end{cases} \qquad (A.1)$$

$$\mathcal{W}^{b}_{a} = \begin{cases} \mathcal{W}_a, \mathcal{W}_{a+1}, \ldots, \mathcal{W}_{b-1}, \mathcal{W}_b, & a \le b \\ \emptyset, & \text{otherwise} \end{cases} \qquad (A.2)$$
To establish the proof of Theorem 2.3, we consider a periodic erasure channel with a period length of
B+T channel packets. In each period, the first B packets are erased while the remaining T packets are
received at the decoder. In particular, the ith period consists of the channel packets x[i(T+B)], . . . ,x[(i+
1)(T +B)− 1] among which the first B packets, x[i(T +B)], . . . ,x[i(T +B)+B− 1] are erased whereas
the T following packets x[i(T +B) +B], . . . ,x[(i + 1)(T +B)− 1] are not erased.
To aid us in our proof, let us introduce the terms

$$V_i = \mathbf{s}\big[^{(i+1)(T+B)-1}_{i(T+B)}\big], \quad V_{1,i} = \mathbf{s}\big[^{i(T+B)+B-1}_{i(T+B)}\big], \quad V_{2,i} = \mathbf{s}\big[^{(i+1)(T+B)-1}_{i(T+B)+B}\big],$$
$$\mathcal{W}_i = \mathbf{x}\big[^{(i+1)(T+B)-1}_{i(T+B)}\big], \quad \mathcal{W}_{1,i} = \mathbf{x}\big[^{i(T+B)+B-1}_{i(T+B)}\big], \quad \mathcal{W}_{2,i} = \mathbf{x}\big[^{(i+1)(T+B)-1}_{i(T+B)+B}\big], \qquad (A.3)$$
where $i \in \{0, 1, 2, \ldots\}$. Note that $V_i = [V_{1,i}, V_{2,i}]$ refers to a group of source packets, whereas $\mathcal{W}_i = [\mathcal{W}_{1,i}, \mathcal{W}_{2,i}]$ is a group of channel packets. We also note that the channel packet at any time $i$ is a causal function of the source packets, i.e.,

$$\mathbf{x}[i] = f_i(\mathbf{s}[0], \ldots, \mathbf{s}[i], M), \qquad (A.4)$$
where $M$ is common randomness available at the encoder and decoder. Figure A.2 shows the time slots that the packets come from, as well as the sizes of $V_i$ and $\mathcal{W}_i$.

Figure A.2: The periodic erasure channel used in proving the upper bound of the single-user scenario in Theorem 2.3, with an indication of which packets are in the groups $V_i$ and $\mathcal{W}_i$. Grey and white squares denote erased and unerased packets respectively.
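The index sets in (A.3) can be made concrete with a short sketch (illustrative parameters; the helper name is ours):

```python
def period_index_sets(B, T, i):
    """Time indices of W_{1,i} (the B erased slots) and W_{2,i}
    (the T unerased slots) in period i of the periodic erasure channel."""
    P = T + B
    w1 = list(range(i * P, i * P + B))        # erased: first B slots of period
    w2 = list(range(i * P + B, (i + 1) * P))  # unerased: remaining T slots
    assert len(w1) == B and len(w2) == T
    return w1, w2

w1, w2 = period_index_sets(B=2, T=3, i=1)
print(w1, w2)  # -> [5, 6] [7, 8, 9]
```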
Lemma A.1. Consider a $(B, T)$ code which is capable of recovering each source packet, over a burst erasure channel which introduces a burst erasure of maximum length $B$, with a delay of $T$ packets and a maximum error probability of $\varepsilon$, i.e.,

$$\Pr(\hat{\mathbf{s}}[t] \neq \mathbf{s}[t]) \le \varepsilon, \quad t \ge 0. \qquad (A.5)$$

Then the following is true for $m \ge i + 1$:

$$H(V_{1,i} \mid \mathcal{W}^{i-1}_0, \mathcal{W}^m_{2,0}, M) \le B(H(\varepsilon) + \varepsilon H(s)), \qquad (A.6)$$
$$H(V_{2,i} \mid \mathcal{W}_{1,i}, \mathcal{W}^{i-1}_0, \mathcal{W}^m_{2,0}, M) \le T(H(\varepsilon) + \varepsilon H(s)), \qquad (A.7)$$

where $V_{1,i}$, $V_{2,i}$, $\mathcal{W}_i$, $\mathcal{W}_{1,i}$ and $\mathcal{W}_{2,i}$ are as in (A.3).
Proof. We start by considering a simple channel which introduces one burst of length $B$ in the interval $[j, j+B-1]$, as shown in Figure A.3. A $(B, T)$ code is capable of recovering each source packet within a delay of $T$ and maximum error probability $\varepsilon$ as defined in (A.5). We are only interested in the recovery of the two sets, $\mathbf{s}[j + r_1]$ for $r_1 \in \{0, \ldots, B-1\}$ and $\mathbf{s}[j - r_2]$ for $r_2 \in \{1, \ldots, T\}$. Applying Fano's inequality [95] to each of these source packets, we have that

$$H\Big(\mathbf{s}[j+r_1] \,\Big|\, \mathbf{x}\big[^{j-1}_{0}\big], \mathbf{x}\big[^{j+r_1+T}_{j+B}\big], M\Big) \le H(\varepsilon) + \varepsilon H(s), \quad r_1 \in [0, B-1], \qquad (A.8)$$
$$H\Big(\mathbf{s}[j-r_2] \,\Big|\, \mathbf{x}\big[^{j-1}_{0}\big], \mathbf{x}\big[^{j-r_2+T}_{j+B}\big], M\Big) \le H(\varepsilon) + \varepsilon H(s), \quad r_2 \in [1, T]. \qquad (A.9)$$
We then start from the L.H.S. of (A.6) and use the chain rule as follows:

$$\begin{aligned}
H(V_{1,i} \mid \mathcal{W}^{i-1}_0, \mathcal{W}^m_{2,0}, M) &\stackrel{(a)}{\le} H(V_{1,i} \mid \mathcal{W}^{i-1}_0, \mathcal{W}_{2,i}, M) = H\Big(\mathbf{s}\big[^{i(T+B)+B-1}_{i(T+B)}\big] \,\Big|\, \mathcal{W}^{i-1}_0, \mathcal{W}_{2,i}, M\Big) \\
&= \sum_{r_1=0}^{B-1} H\Big(\mathbf{s}[i(T+B)+r_1] \,\Big|\, \mathbf{s}\big[^{i(T+B)+r_1-1}_{i(T+B)}\big], \mathcal{W}^{i-1}_0, \mathcal{W}_{2,i}, M\Big) \\
&\stackrel{(b)}{\le} \sum_{r_1=0}^{B-1} H\Big(\mathbf{s}[i(T+B)+r_1] \,\Big|\, \mathcal{W}^{i-1}_0, \mathcal{W}_{2,i}, M\Big) \\
&\stackrel{(c)}{\le} \sum_{r_1=0}^{B-1} H\Big(\mathbf{s}[i(T+B)+r_1] \,\Big|\, \mathbf{x}\big[^{i(T+B)-1}_{0}\big], \mathbf{x}\big[^{i(T+B)+r_1+T}_{i(T+B)+B}\big], M\Big) \\
&\stackrel{(d)}{\le} \sum_{r_1=0}^{B-1} (H(\varepsilon) + \varepsilon H(s)) = B(H(\varepsilon) + \varepsilon H(s)), \qquad (A.10)
\end{aligned}$$

where (a) and (b) use the fact that conditioning reduces entropy, and (a) also uses that $m \ge i+1$; (c) follows since $\mathbf{x}\big[^{i(T+B)+r_1+T}_{i(T+B)+B}\big] \subseteq \mathcal{W}_{2,i}$ for $r_1 \in [0, B-1]$, whereas (d) uses (A.8) at $j = i(T+B)$. This establishes (A.6).

Figure A.3: A channel introducing a single burst of length $B$ in the interval $[j, j+B-1]$, used in proving Lemma A.1. Grey and white squares denote erased and unerased packets respectively.
Using (A.9), a similar argument can be used to establish (A.7):

$$\begin{aligned}
H(V_{2,i} \mid \mathcal{W}_{1,i}, \mathcal{W}^{i-1}_0, \mathcal{W}^m_{2,0}, M) &\stackrel{(a)}{\le} H(V_{2,i} \mid \mathcal{W}_{1,i}, \mathcal{W}^{i-1}_0, \mathcal{W}^{i+1}_{2,i}, M) = H(V_{2,i} \mid \mathcal{W}^{i}_0, \mathcal{W}_{2,i+1}, M) \\
&= H\Big(\mathbf{s}\big[^{(i+1)(T+B)-1}_{i(T+B)+B}\big] \,\Big|\, \mathcal{W}^{i}_0, \mathcal{W}_{2,i+1}, M\Big) \\
&\stackrel{(b)}{\le} \sum_{r_2=1}^{T} H\Big(\mathbf{s}[(i+1)(T+B)-r_2] \,\Big|\, \mathcal{W}^{i}_0, \mathcal{W}_{2,i+1}, M\Big) \\
&\stackrel{(c)}{\le} \sum_{r_2=1}^{T} H\Big(\mathbf{s}[(i+1)(T+B)-r_2] \,\Big|\, \mathbf{x}\big[^{(i+1)(T+B)-1}_{0}\big], \mathbf{x}\big[^{(i+1)(T+B)+T-r_2-1}_{(i+1)(T+B)+B}\big], M\Big) \\
&\stackrel{(d)}{\le} \sum_{r_2=1}^{T} (H(\varepsilon) + \varepsilon H(s)) = T(H(\varepsilon) + \varepsilon H(s)), \qquad (A.11)
\end{aligned}$$

where (a) uses the fact that conditioning reduces entropy as $m \ge i+1$, (b) uses the chain rule together with the fact that conditioning reduces entropy, (c) follows since $\mathbf{x}\big[^{(i+1)(T+B)+T-r_2-1}_{(i+1)(T+B)+B}\big] \subseteq \mathcal{W}_{2,i+1}$ for $r_2 \in [1, T]$, and (d) uses (A.9) at $j = (i+1)(T+B)$.
An explanation of Lemma A.1 is as follows. For the $i$th period, (A.6) states that the $(B, T)$ MS code is capable of recovering the $B$ erased source packets $V_{1,i}$ using the following $T$ channel packets $\mathcal{W}_{2,i}$, provided that all the previous source packets are recovered. Similarly, when the next $T$ channel packets $\mathcal{W}_{2,i}$ are received, we cannot assume that the corresponding source packets $V_{2,i}$ can be decoded, because the code may not be systematic. To recover those source packets, we can use the next group of $T$ unerased packets in $\mathcal{W}_{2,i+1}$, as stated in (A.7). In general, we may not need all of these channel packets, but the proof is simpler if we have them all available.
Now, we can write

$$\begin{aligned}
m(T+B)H(s) = H(V^{m-1}_0) &\stackrel{(a)}{=} H(V^{m-1}_0 \mid M) \le H(V^{m-1}_0, \mathcal{W}^m_{2,0} \mid M) \\
&= H(\mathcal{W}^m_{2,0} \mid M) + \sum_{i=0}^{m-1} \Big( H(V_{1,i} \mid V^{i-1}_0, \mathcal{W}^m_{2,0}, M) + H(V_{2,i} \mid V_{1,i}, V^{i-1}_0, \mathcal{W}^m_{2,0}, M) \Big) \\
&\stackrel{(b)}{=} H(\mathcal{W}^m_{2,0} \mid M) + \sum_{i=0}^{m-1} \Big( H(V_{1,i} \mid \mathcal{W}^{i-1}_0, \mathcal{W}^m_{2,0}, M) + H(V_{2,i} \mid \mathcal{W}_{1,i}, \mathcal{W}^{i-1}_0, \mathcal{W}^m_{2,0}, M) \Big) \\
&\stackrel{(c)}{\le} H(\mathcal{W}^m_{2,0} \mid M) + m(T+B)(H(\varepsilon) + \varepsilon H(s)) \\
&\stackrel{(d)}{\le} H(\mathcal{W}^m_{2,0}) + m(T+B)(H(\varepsilon) + \varepsilon H(s)) \\
&\le (m+1)T \cdot \log|\mathcal{X}| + m(T+B)(H(\varepsilon) + \varepsilon H(s)), \qquad (A.12)
\end{aligned}$$

where (a) holds due to the independence of the source packets from the random variable $M$¹, (b) follows by using (A.4), (c) follows by using Lemma A.1, and (d) follows from the fact that conditioning reduces entropy.
Finally, we conclude from (A.12) that any $(B, T)$ streaming erasure code must satisfy

$$R = \frac{k}{n} = \frac{H(s)}{\log|\mathcal{X}|} \le \frac{m+1}{m} \cdot \frac{T}{T+B} + \frac{H(\varepsilon) + \varepsilon H(s)}{\log|\mathcal{X}|} \xrightarrow{m \to \infty,\, \varepsilon \to 0} \frac{T}{T+B}, \qquad (A.13)$$

which gives our upper bound on the rate.
¹We note that the same upper bound in (A.12) is achieved when no common randomness is assumed. Hence, we will only consider deterministic codes, and $M$ will be dropped in the subsequent converse proofs.
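The limiting behavior in (A.13) is easy to check numerically (the function name is ours; $\varepsilon$ is set to zero):

```python
from fractions import Fraction

def finite_m_rate_bound(B, T, m):
    """Rate bound from (A.13) with epsilon -> 0: (m+1)/m * T/(T+B)."""
    return Fraction(m + 1, m) * Fraction(T, T + B)

# As m grows, the bound approaches the burst-erasure capacity T/(T+B):
print(finite_m_rate_bound(B=2, T=3, m=1))             # -> 6/5 (vacuous for small m)
print(float(finite_m_rate_bound(B=2, T=3, m=10**6)))  # very close to 3/5
```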
Appendix B
Maximum Distance And Span
(MiDAS) Codes
B.1 Decoding Analysis of MiDAS-MDS code
In the decoding analysis it is sufficient to show that each source packet s[i] can be recovered at time
t = i+ Teff if there is either an erasure burst of length B or up to N isolated erasures in the interval
[i, i+ Teff ].
B.1.1 Burst Erasure
First consider the case when a burst erasure spans $[i, i+B-1]$. Following this burst, we are guaranteed that for the $\mathcal{C}(N, B, W)$ channel, there are no erasures in the interval $[i+B, i+T_{\mathrm{eff}}+B-1]$. We argue that the decoder can first recover $\mathbf{v}[i], \ldots, \mathbf{v}[i+B-1]$ simultaneously by time $t = i+T_{\mathrm{eff}}-1$ and then recover $\mathbf{u}[i]$ at time $t = i+T_{\mathrm{eff}}$ by computing $\mathbf{p}^v[i+T_{\mathrm{eff}}]$ and then $\mathbf{u}[i] = \mathbf{q}[i+T_{\mathrm{eff}}] - \mathbf{p}^v[i+T_{\mathrm{eff}}]$. To show the recovery of $\mathbf{v}[i], \ldots, \mathbf{v}[i+B-1]$, note that there are no erasures in the interval spanning $[i+B, i+T_{\mathrm{eff}}-1]$ and the interfering $\mathbf{u}[\cdot]$ packets in $\mathbf{q}[t] = \mathbf{u}[t-T_{\mathrm{eff}}] + \mathbf{p}^v[t]$ can be subtracted out to recover $\mathbf{p}^v[t]$. The diagonal codewords $\{\mathbf{c}^v_j[r]\}$ spanning $\mathbf{v}[i], \ldots, \mathbf{v}[i+B-1]$ start at $r \in \{i-(T_{\mathrm{eff}}-B)+1, \ldots, i+B-1\}$. Each such codeword belongs to a $(T_{\mathrm{eff}}, T_{\mathrm{eff}}-B)$ MDS code. Hence, if no more than $B$ erasures take place in each codeword, the erased packets can be recovered. However, we still need to take the delay into account. We first note that the $\mathbf{v}[\cdot]$ packets in the interval $[i, i+B-1]$ are erased. Also, the $\mathbf{q}[\cdot]$ packets in the interval $[i+T_{\mathrm{eff}}, i+T_{\mathrm{eff}}+B-1]$ combine $\mathbf{u}[\cdot]$ packets which are erased, and thus the corresponding $\mathbf{p}[\cdot]$ packets must also be treated as erased. We split the diagonal codewords of interest into two groups:

• $r \in \{i-(T_{\mathrm{eff}}-B)+1, \ldots, i\}$: These codewords end before time $i+T_{\mathrm{eff}}$, where there are only $B$ erased columns in the interval $[i, i+B-1]$.

• $r \in \{i+1, \ldots, i+B-1\}$: Each of these codewords has $T_{\mathrm{eff}}-B$ symbols in the interval $[i+B, i+T_{\mathrm{eff}}-1]$ which are not erased. Since the length of each codeword is $T_{\mathrm{eff}}$, the number of erasures is at most $T_{\mathrm{eff}}-(T_{\mathrm{eff}}-B) = B$.

We note that all the considered diagonal codewords $\{\mathbf{c}^v_j[r]\}$ for $r \in \{i-(T_{\mathrm{eff}}-B)+1, \ldots, i+B-1\}$ end before time $i+T_{\mathrm{eff}}+B$. Also, the $\mathbf{p}[\cdot]$ parities in the interval $[i+T_{\mathrm{eff}}, i+T_{\mathrm{eff}}+B-1]$ cannot be used, as discussed earlier. Thus it follows that the corresponding $\mathbf{v}[\cdot]$ packets are recovered by time $i+T_{\mathrm{eff}}-1$.
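The per-codeword erasure count underlying the two groups can be verified with a short sketch (the parameters are illustrative and the helper name is ours):

```python
def erasures_seen(r, i, Teff, B):
    """Erasures seen by a diagonal codeword starting at time r (length Teff),
    for a burst [i, i+B-1]; the parities in [i+Teff, i+Teff+B-1] combine
    erased u[.] packets and are also treated as erased."""
    erased = set(range(i, i + B)) | set(range(i + Teff, i + Teff + B))
    return len(erased & set(range(r, r + Teff)))

# Every codeword spanning v[i], ..., v[i+B-1] sees at most B erasures,
# so a (Teff, Teff - B) MDS code can recover its erased symbols:
i, Teff, B = 0, 6, 2
for r in range(i - (Teff - B) + 1, i + B):
    assert erasures_seen(r, i, Teff, B) <= B
```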
B.1.2 Isolated Erasures
Next we show that when there are $N$ erasures in arbitrary locations in the interval $[i, i+T_{\mathrm{eff}}]$, then $\mathbf{u}[i]$ is guaranteed to be recovered by time $t = i+T_{\mathrm{eff}}$, and $\mathbf{v}[i]$ is guaranteed to be recovered by time $t = i+T_{\mathrm{eff}}-1$. For the recovery of $\mathbf{u}[i]$, we note that the codewords $\{\mathbf{c}^u_j[r]\}$ that include $\mathbf{u}[i]$ start at $r \in \{i-(T_{\mathrm{eff}}-N), \ldots, i\}$. Since each $\mathbf{c}^u_j[r]$ is a $(T_{\mathrm{eff}}+1, T_{\mathrm{eff}}-N+1)$ MDS code, and there are no more than $N$ erasures on each such sequence, it follows that all the erased packets are guaranteed to be recovered by time $i+T_{\mathrm{eff}}$. The recovery of $\mathbf{u}[i]$ by time $t = i+T_{\mathrm{eff}}$ now follows. For recovering $\mathbf{v}[i]$, we consider the non-erased parity-check packets $\mathbf{p}^v[t]$ for $t \in [i, i+T_{\mathrm{eff}}-1]$, which can be obtained by cancelling the interfering $\mathbf{u}[t-T_{\mathrm{eff}}]$ packets from $\mathbf{q}[t]$, as discussed in the case of burst erasure above. Notice that the diagonal codewords $\{\mathbf{c}^v_j[r]\}$ spanning $\mathbf{v}[i]$ start at $r \in \{i-(T_{\mathrm{eff}}-B)+1, \ldots, i\}$ and terminate by time $i+T_{\mathrm{eff}}-1$. It follows that each such sequence has no more than $N$ erasures, and hence all the erased $\mathbf{v}[i]$ packets are recovered by time $t = i+T_{\mathrm{eff}}-1$.
Appendix C
Partial Recovery Codes (PRC)
C.1 Decoding Analysis of Partial Recovery Codes in Theorem 4.1
We first begin with the following extension of Corollary 2.1, which considers two bursts in the interval $[0, j]$.

Lemma C.1. In the interval $[0, j]$, suppose that there are two erasure bursts spanning the intervals $[0, B_1-1]$ and $[r, r+B_2-1]$. An $(n, k, m)$ m-MDS convolutional code with rate $R = \frac{k}{n}$ can recover all of the $B_1 + B_2$ erased source packets $\mathbf{s}[0], \ldots, \mathbf{s}[B_1-1], \mathbf{s}[r], \ldots, \mathbf{s}[r+B_2-1]$ by time $t = j$, provided that

$$r \le \frac{B_1}{1-R} \quad \text{and} \quad B_1 + B_2 \le (1-R)(j+1) \qquad (C.1)$$

for any $j = 0, 1, \ldots, m$.

Proof. See Appendix C.2.
We now proceed to prove Theorem 4.1: for any erasure pattern in which the channel introduces a burst of length $B$ and one isolated erasure in a sliding window of length $2T+B$, the decoder can recover all but one of the erased packets within a delay of $T$. We divide these patterns into two main categories. In the first case the erasure burst is followed by an isolated erasure, whereas in the second case an isolated erasure precedes the erasure burst.

C.1.1 Erasure Burst followed by an Isolated Erasure

Without loss of generality, assume that the channel introduces an erasure burst in the interval $[0, B-1]$ and that the isolated erasure occurs at time $t \ge B$. Since the associated isolated erasure follows the erasure burst, from Def. 4.2 it must occur in the interval $[B, T+B-1]$. This implies that the interval $[-T, -1]$ is free of any erasure, so that there is only one burst and one isolated erasure in the interval $[-T, T+B-1]$, which is of length $2T+B$. Since the memory of the code equals $T$, any erased packets at times $t < -T$ will not affect the decoder. Thus we assume that there are no erasures before $t = 0$.

We further consider two cases, as stated below.
Table C.1: The decoding analysis of PRC codes for various erasure patterns in Channel II. The erasures are shaded grey boxes, whereas the parity-check packets used to recover the $\mathbf{v}[\cdot]$ packets are marked using bold borders. The $\mathbf{u}[\cdot]$ packets are recovered by subtracting the combined $\mathbf{p}_1[\cdot]$ parity-checks.

(a) Burst followed by Isolated Erasure: Burst and Isolated Erasures Recovered Simultaneously.

(b) Burst followed by Isolated Erasure: Burst and Isolated Erasures Recovered Separately. $\tau = \left\lceil \frac{B\Delta}{B+1} \right\rceil - 1$.

(c) Isolated Erasure followed by Burst: Isolated and Burst Erasures Recovered Simultaneously.

(d) Isolated Erasure followed by Burst: Isolated and Burst Erasures Recovered Separately. $\tau = \left\lceil \frac{\Delta}{B+1} \right\rceil - 1$.
Burst and Isolated Erasures Recovered Simultaneously:
In this case, the burst and the isolated erasures are close enough such that all the v[·] packets are
recovered simultaneously. This case is illustrated in Table C.1a. The isolated erasure happens at time t
where B ≤ t < B∆/(B + 1). The recovery of the erased packets proceeds as follows:
1. Recover {v[0], . . . ,v[B − 1],v[t]} at time τ = ∆ − 1 using the (v + u + s, v) m-MDS code C12 in
the interval [0,∆− 1].
2. Recover {u[0], . . . ,u[B − 1],u[t]} at time τ = ∆, . . . ,∆+ B − 1 and τ = t + ∆ respectively from
the associated parity-checks q[·].
To justify the recovery of v[0], . . . ,v[B − 1],v[t] in the first step we consider the available parity-
checks of C12 in the interval [0,∆− 1]. We first note that the interfering packets u[·] in this interval are
not erased and can be cancelled out from q[·] to recover the parity-checks p1[·]. We apply Lemma C.1
with B1 = B, B2 = 1, r = t, R = R12 and j = ∆ − 1. Note that t < B∆/(B + 1) also satisfies t < B1/(1 − R12)
Appendix C. Partial Recovery Codes (PRC) 148
since R12 = (∆ − B − 1)/∆ from (4.5). Thus the first condition in (C.1) in Lemma C.1 is satisfied. Furthermore, note that (1 − R12)(j + 1) = B + 1, and thus the second condition in (C.1) in Lemma C.1 is also satisfied. Thus Lemma C.1 applies and the recovery of v[0], . . . , v[B − 1], v[t] at time ∆ − 1 follows.
To justify the recovery of {u[0], . . . ,u[B − 1],u[t]}, recall that q[i] = u[i −∆] + p1[i]. Since all the
v[·] packets have been recovered in step (1), the associated parity-checks p1[i] can be computed and
cancelled by the decoder to recover the u[·] packets as claimed.
As a final remark we note that all the erased packets are recovered for the above erasure pattern.
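The condition check above is pure arithmetic and can be verified numerically; the following sketch (function name ours, with R12 = (∆ − B − 1)/∆ taken from (4.5)) confirms both conditions of Lemma C.1 for every admissible isolated-erasure time t.

```python
from fractions import Fraction

def lemma_c1_conditions_hold(B, Delta):
    """Check both conditions of Lemma C.1 for the 'recovered simultaneously'
    case: B1 = B, B2 = 1, r = t, j = Delta - 1, R12 = (Delta - B - 1)/Delta,
    for every integer t with B <= t < B*Delta/(B + 1)."""
    R12 = Fraction(Delta - B - 1, Delta)
    j = Delta - 1
    t_max = -(-B * Delta // (B + 1))          # ceil(B*Delta/(B + 1))
    for t in range(B, t_max):                 # t < B*Delta/(B + 1)
        if not t < B / (1 - R12):             # first condition in (C.1)
            return False
        if not (1 - R12) * (j + 1) == B + 1:  # second condition in (C.1)
            return False
    return True
```

Exact rational arithmetic via `Fraction` avoids floating-point artifacts in the comparisons.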
Burst and Isolated Erasures Recovered Separately:
In this case, there is a sufficiently large gap between the burst and the isolated erasure, so that the v[·] packets of the erasure burst are recovered before the isolated erasure takes place. This case is illustrated in Table C.1b. The isolated erasure happens at time t ≥ B∆/(B + 1). The recovery of the erased packets
proceeds as follows:
1. Recover {v[0], . . . , v[B − 1]} by time τ = ⌈B∆/(B + 1)⌉ − 1 using the (v + u + s, v) m-MDS code C12 in the interval [0, τ].
2. Recover {u[0], . . . ,u[t −∆ − 1]} from q[∆], . . . ,q[t − 1] respectively by cancelling the interfering
p1[·] packets.
3. Recover v[t] by time τ = t + T − ∆ + 1 using the (v + s, v) m-MDS code C2 in the interval
[t, t+ T −∆+ 1].
4. Recover {u[t−∆+1], . . . ,u[B− 1],u[t]} from q[t+1], . . . ,q[B+∆− 1],q[t+∆] by cancelling the
interfering p1[·] packets.
To justify the recovery of {v[0], . . . ,v[B−1]} in the first step above, we consider the available parity-
checks of C12 in the interval [0, τ ]. Note that the interfering u[·] packets in this interval are not erased
and can be cancelled out from q[·] to recover the underlying parity-checks p1[·]. Furthermore,
(1 − R12)(τ + 1) ≥ (1 − R12) · B∆/(B + 1) = B (C.2)
where we substituted (4.5) for R12. Thus, using property P2 in Corollary 2.1, we recover v[0], . . . , v[B − 1] by time τ as stated. To justify step (2) above, note that q[i] = u[i − ∆] + p1[i] and the interfering p1[i] are only functions of v[·] packets that have either been recovered in step (1) or are not erased. To
justify (3) we consider the interval [t, t+T −∆+1] and consider the parity-checks of C2 in this interval.
Note that using (4.6) we have:
(1−R2)(T −∆+ 2) ≥ 1 (C.3)
holds, and hence using property P2 in Corollary 2.1 we recover v[t] by time t + T − ∆ + 1. To justify step (4), note that once v[t] is recovered in step (3), the parity-checks p1[t + 1], . . . , p1[B + ∆ − 1], p1[t + ∆] can be computed and cancelled from the associated q[·] packets, and the claim follows.
As a final remark, we note that when t ∈ [∆, ∆ + B − 1], the packet u[t − ∆], which is erased in
the first burst, cannot be recovered as its repeated copy at time t is also erased. This is the only packet
that cannot be recovered in the above erasure pattern.
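Inequality (C.2), which drives the first recovery step of this case, can also be checked numerically; a minimal sketch (helper name ours, R12 from (4.5)):

```python
from fractions import Fraction

def c2_holds(B, Delta):
    """Verify (C.2): with tau = ceil(B*Delta/(B + 1)) - 1 and
    R12 = (Delta - B - 1)/Delta, we get (1 - R12)(tau + 1) >= B, so
    property P2 of Corollary 2.1 recovers v[0], ..., v[B - 1] by time tau."""
    R12 = Fraction(Delta - B - 1, Delta)
    tau = -(-B * Delta // (B + 1)) - 1    # ceil(B*Delta/(B + 1)) - 1
    return (1 - R12) * (tau + 1) >= B
```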
C.1.2 Isolated Erasure followed by an Erasure Burst
We assume without loss of generality that the isolated erasure happens at time zero and that the burst
erasure happens at time t > 0. Since the isolated erasure precedes the erasure burst, it follows that the
erasure burst must begin in the interval t ∈ [1, T ] from Def. 4.2. This implies that there cannot be any
erasure in the interval [−T,−1] since the interval [−T, T + B − 1] must have only one isolated erasure
and one erasure burst. Since the memory of the code equals T, any erased packet before time t = −T will not affect the decoder. Thus, in what follows, we assume that there are no erasures before time 0.
This class of patterns is sub-divided into two cases discussed below.
Isolated and Burst Erasures Recovered Simultaneously:
In this case, the burst erasure and the isolated erasure are close enough so that all the v[·] packets are simultaneously recovered. This case is illustrated in Table C.1c. The burst erasure begins at time t < ∆/(B + 1). The recovery of the erased packets proceeds as follows:
1. Recover {v[0],v[t], . . . ,v[t+B−1]} using the (v+u+s, v) m-MDS code C12 in the interval [0,∆−1].
2. Recover {u[0],u[t], . . . ,u[t+B− 1]} at time τ = ∆, t+∆, . . . , t+B+∆− 1 respectively from the
associated parity-checks q[·].
To justify step (1) we note that the interfering u[·] packets in q[·] in the interval [0,∆ − 1] are not
erased and can be cancelled to recover p1[·]. We apply Lemma C.1 to code C12 in the interval [0,∆− 1]
using B1 = B, B2 = B and r = t. Note that by assumption on t and from (4.5) we have that
r < ∆/(B + 1) = 1/(1 − R12) (C.4)
and thus the first condition in (C.1) holds. Furthermore from (4.5) we also have that (1 − R12)∆ =
B + 1 and thus the second condition in (C.1) also holds. Thus Lemma C.1 guarantees the recovery of
{v[0],v[t], . . . ,v[t+B − 1]} by time τ = ∆− 1.
To justify step (2), note that there are no further erasures in the interval [∆, ∆ + t + B − 1]. Since all the erased v[·] packets are recovered in step (1), the decoder can compute p1[∆], p1[t + ∆], . . . , p1[t + B + ∆ − 1] and subtract them from the corresponding q[·] packets to recover u[0], u[t], . . . , u[t + B − 1], respectively, each with a delay of ∆ ≤ T.
As a final remark we note that all the erased packets are fully recovered in this erasure pattern.
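The two identities invoked around (C.4) can be confirmed in a few lines; the sketch below (function name ours) checks them exactly for given B and ∆:

```python
from fractions import Fraction

def c4_identities_hold(B, Delta):
    """Check the identities behind (C.4): with R12 = (Delta - B - 1)/Delta
    from (4.5), 1/(1 - R12) equals Delta/(B + 1) and (1 - R12)*Delta equals
    B + 1, so any burst start r = t < Delta/(B + 1) meets both conditions
    of Lemma C.1 used above."""
    R12 = Fraction(Delta - B - 1, Delta)
    return (1 / (1 - R12) == Fraction(Delta, B + 1)
            and (1 - R12) * Delta == B + 1)
```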
Isolated and Burst Erasures Recovered Separately:
In this case, the gap between the isolated erasure and the burst erasure is sufficiently large so that v[0] is recovered before the burst erasure begins. This case is illustrated in Table C.1d. In this case we have that t ≥ ∆/(B + 1). The recovery of the erased packets proceeds as follows:
1. Recover the packet v[0] by time τ = ⌈∆/(B + 1)⌉ − 1 using the (v + u + s, v) m-MDS code C12 in the interval [0, τ].
2. Recover the packets v[t], . . . ,v[t+B − 1] by time t+∆− 1 using the (v + u+ s, v) m-MDS code
C12 in the interval [t, t+∆− 1].
3. Recover u[t], . . . ,u[t + B − 1] from q[t + ∆], . . . ,q[t + B + ∆ − 1] respectively by cancelling the
associated p1[·] packets.
To justify the above steps, note that the interfering u[·] packets in q[·] for t ∈ [0, ∆ − 1] are not erased and can be cancelled out to recover p1[·]. In step (1), it suffices to use property P1 in Corollary 2.1 and show that v[0] is recovered by time τ = ⌈∆/(B + 1)⌉ − 1. Note that
(1 − R12)(τ + 1) ≥ (1 − R12) · ∆/(B + 1) = 1 (C.5)
where we substitute (4.5) for R12 above. Since by assumption on t, v[0] is the only packet erased in the
interval [0, τ ] it follows that v[0] is recovered by this time.
To justify step (2), consider the interval [t, t + ∆ − 1] and recall that the erasure burst spans [t, t + B − 1]. Furthermore, even though v[0] has been recovered in step (1) and its effect can be cancelled out, the packet u[0] appears in q[∆] and may contribute one additional erasure when t ≤ ∆. In this case, we
assume that a total of B + 1 erasures occur in the above stated interval. We use Lemma C.1 applied to
the code C12 with B1 = B and B2 = 1, in order to show the recovery of v[t], . . . ,v[t+B− 1]. Note that
the first condition in (C.1) is satisfied since
∆ − t ≤ B∆/(B + 1) = B/(1 − R12) (C.6)
is satisfied and the second condition is satisfied as well since (1 − R12)∆ = B + 1. By time t + ∆ − 1,
the decoder has recovered all the erased v[·] packets. If instead t > ∆, then u[0] can be recovered at time ∆, and there remain only B erasures in the interval [t, t + ∆ − 1], so the recovery of v[t], . . . , v[t + B − 1] again follows.
Finally, to recover the u[·] packets in the interval [t, t + ∆ − 1], we compute the parity-check packets p1[·] in the interval [t + ∆, t + B + ∆ − 1], subtract them from the corresponding q[·] packets, and recover u[t], . . . , u[t + B − 1] respectively, as stated in step (3).
As a final remark, we note that the packet u[0] may not be recovered if its repeated copy at time ∆
is erased as part of the erasure burst. Thus we may have one unrecovered packet for the above erasure
pattern.
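Both (C.5) and (C.6) can be verified jointly for any parameter pair; a small sketch (function name ours, R12 from (4.5)):

```python
from fractions import Fraction

def c5_c6_hold(B, Delta):
    """Verify (C.5) and (C.6): with R12 = (Delta - B - 1)/Delta and
    tau = ceil(Delta/(B + 1)) - 1, we have (1 - R12)(tau + 1) >= 1, and
    every integer burst start t >= Delta/(B + 1) satisfies
    Delta - t <= B*Delta/(B + 1)."""
    R12 = Fraction(Delta - B - 1, Delta)
    tau = -(-Delta // (B + 1)) - 1        # ceil(Delta/(B + 1)) - 1
    if not (1 - R12) * (tau + 1) >= 1:    # (C.5)
        return False
    t_min = -(-Delta // (B + 1))          # smallest integer t >= Delta/(B + 1)
    return all(Delta - t <= Fraction(B * Delta, B + 1)
               for t in range(t_min, Delta + 1))  # (C.6)
```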
This completes the proof of the decoder in Theorem 4.1.
C.2 Proof of Lemma C.1
Figure C.1: The periodic erasure channel used in proving Lemma C.1, with period j + 1; in each period an erasure burst of length B1 is followed by a second burst of length B2, and W0 and WB1−1 denote the first and last windows of interest. Grey and white squares denote erased and unerased packets respectively.
Consider a periodic erasure channel of period length j + 1 as shown in Figure C.1. In each period,
the channel introduces an erasure burst of length B1 packets followed by another burst of length B2
packets starting r packets from the start of the first burst and the rest of the period is not erased. In
the first period starting at time t = 0, the two bursts of length B1 and B2 span the intervals [0, B1 − 1]
and [r, r + B2 − 1] respectively, where r < B1/(1 − R).
We consider the intervals Wi = [i, i + j − 1] for i ∈ {0, 1, . . . , B1 − 1}. In each such interval, the total
number of erasures equals B1+B2 ≤ (1−R)(j+1). Thus as in the proof of property P2 in Corollary 2.1,
we can recover s[0], s[1], . . . , s[B1 − 1] by time j since x[j + 1],x[j + 2], . . . ,x[j +B1 − 1] are erased.
For the second burst we consider the interval [r, j] of length j − r + 1. At this point all the source
packets in the first burst have been recovered and only a total of B2 erasures remain. The property P2
in Corollary 2.1 can be used to recover s[r], . . . , s[r + B2 − 1] by time j since,
B2 ≤ (1 − R)(j + 1) − B1
≤ (1 − R)(j + 1) − (1 − R)r
= (1 − R)(j − r + 1). (C.7)
where the second step uses (C.1). At this point all the erased packets in the first period have been
recovered by time j and the claim follows.
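The chain of inequalities (C.7) is easy to spot-check numerically; the sketch below (function name ours, rate passed as a numerator/denominator pair) asserts the two hypotheses used in the proof and then walks the chain step by step:

```python
from fractions import Fraction

def c7_chain_holds(B1, B2, r, j, R_num, R_den):
    """Numerically check the chain (C.7): given B1 + B2 <= (1 - R)(j + 1)
    and r < B1/(1 - R), it follows that B2 <= (1 - R)(j - r + 1), so the
    second burst is also recovered by time j."""
    R = Fraction(R_num, R_den)
    assert B1 + B2 <= (1 - R) * (j + 1) and r < B1 / (1 - R)
    step1 = (1 - R) * (j + 1) - B1           # first inequality in (C.7)
    step2 = (1 - R) * (j + 1) - (1 - R) * r  # uses B1 > (1 - R) r
    step3 = (1 - R) * (j - r + 1)            # factor out (1 - R)
    return B2 <= step1 <= step2 == step3
```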
Appendix D
Unequal Source-Channel Rates
D.1 Proof of Lemma 5.1
We need to show that the total number of erased symbols among (vvec[·], pvec[·]) from macro-packet i to macro-packet i + T, i.e., in the following sequence,
{vvec[t], pvec[t]}i≤t≤i+T = (v0[i], . . . , vkv−1[i], p0[i], . . . , pku−1[i], v0[i + 1], . . . , vkv−1[i + 1], p0[i + 1], . . . , pku−1[i + 1], . . . , v0[i + T], . . . , vkv−1[i + T], p0[i + T], . . . , pku−1[i + T]), (D.1)
after the cancellation of the u[·] packets, is ku(T + 1). We start with the case where the burst begins at j = 1 and subsequently consider the other cases shown in Figure D.1. For j = 1 we distinguish two sub-cases.
• B′ > bM/(T + b): We first show that the total number of vvec[·] and pvec[·] symbols erased due to the
burst in the macro-packets i, i + 1, . . . , i + b equals kuT = B · T . Furthermore in macro-packet
i+T , the parity-checks pvec[i+T ] combine with uvec[i] which are also erased. Hence these symbols
contribute to additional ku erasures leading to a total of ku(T + 1) erased symbols.
Note that the erasure burst spans the entire macro-packets X[i, :], . . . , X[i + b − 1, :] as well as x[i + b, 1], . . . , x[i + b, B′]. The total number of symbols in vvec[t] and pvec[t] in each macro-packet
is kv + ku = M(T + b + 1) − B. In the b-th macro-packet we only have the first B′ columns
erased. Out of these the first ku symbols are from the uvec[·] symbols whereas the remaining
B′n − ku = B′(T + b + 1) − B symbols come from vvec[i + b] and pvec[i + b]. It can be easily verified
that B′(T + b+ 1)−B ≥ 0. Hence the total number of erased symbols of vvec[t] and pvec[t] is
b(ku + kv) + B′n − ku = b(M(T + b + 1) − B) + B′(T + b + 1) − ku
= B(T + b + 1) − bB − ku
= B(T + 1) − ku = Tku (D.2)
where we use the fact that ku = B in our code construction and B = bM +B′.
• B′ ≤ bM/(T + b): Again, the macro-packets X[i, :], . . . , X[i + b − 1, :] are completely erased and together contribute b(kv + ku) = b(M(T + b) − Mb) erasures. In X[i + b, :], only the symbols in
Figure D.1: Different erasure patterns considered in the analysis of the decoder, one row for each burst starting location j ∈ {1, 2, r + 1, r + 2, r + 3, M − r, M − r + 1} within the macro-packet [u[i, 1], . . . , u[i, r], (u[i, r + 1], v[i, 1]), v[i, 2], . . . , v[i, M − 2r − 1], (q[i, r + 1], v[i, M − 2r]), q[i, r], . . . , q[i, 1]]. The index j at the left of each row indicates the starting location of the burst in macro-block i, and the shaded blocks show the packets that are erased.
uvec[i + b] are erased, as it can be easily verified that B′n ≤ ku. Finally, as in the previous case, all the pvec[i + T] symbols in macro-packet i + T that combine with uvec[i] must be considered erased. Thus the total number of erased symbols is b(M(T + b) − Mb) + ku = bMT + ku = ku(T + 1).
To establish the claim for j = 2, 3, . . . , M − r, it suffices to show the following lemma.
Lemma D.1. Let Nj denote the total number of erased symbols in {vvec[t],pvec[t]} after the cancellation
of non-erased uvec[·] symbols when the erasure burst begins at x[i, j]. Then we have that Nj ≤ Nj−1 for
each j = 2, 3, . . . ,M − r.
Lemma D.1 establishes that the worst case erasure sequence is the one that begins at j = 1. Since
we have already established that the total number of erasures in {vvec[t],pvec[t]} in this case does not
exceed ku(T + 1), this will complete our claim.
To establish Lemma D.1, we note that going from the burst pattern that starts at x[i, j] to the
pattern that starts at x[i, j + 1] results in one extra erased channel packet at the end. Also, it results
in revealing the first channel packet which is x[i, j]. We assume (as a worst case) that the extra erased
channel packet at the end contributes to n additional erased symbols of either vvec[·] or pvec[·]. We
consider the effect of revealing the channel packet x[i, j] and show that it always makes exactly n symbols of either vvec[·] or pvec[·] available, compensating for the n newly erased symbols. Thus the total number of erased symbols does not increase in such a transition.
Recall that x[i, j] can be one of the following (cf. Figure D.1):

x[i, j] =
u[i, j], j ∈ {1, . . . , r},
(u[i, r + 1], v[i, 1]), j = r + 1,
v[i, j − r], j ∈ {r + 2, . . . , M − r − 1},
(q[i, r + 1], v[i, j − r]), j = M − r,
q[i, M − j + 1], j ∈ {M − r + 1, . . . , M}. (D.3)
• j ∈ {1, . . . , r}: In the case under consideration, the revealed x[i, j] is always u[i, j]. It can be subtracted from q[i + T, j] to recover p[i + T, j] ∈ pvec[·], which has n symbols. Thus, it compensates for the n extra erased symbols.
• j = r + 1: The r′ symbols of u[i, r + 1] help in recovering the r′ symbols of p[i + T, r + 1] ∈ pvec[·]. This, together with the revealed n − r′ symbols of v[i, 1] ∈ vvec[·], compensates for the n extra erasures.
• j ∈ {r + 2, . . . , M − r − 1}: In this case, the revealed channel packet is x[i, j] = v[i, j − r] ∈ vvec[·] and has n symbols, which are now available at the decoder.
• j = M − r: As shown in Figure D.1, the decoder can subtract u[i − T, r + 1] from q[i, r + 1] to recover the r′ symbols of p[i, r + 1] ∈ pvec[·]. This, together with the n − r′ symbols of v[i, j − r] ∈ vvec[·], adds up to n symbols and the claim follows.
This establishes Lemma D.1 and in turn the proof of Lemma 5.1 is complete.
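The case analysis above can be mirrored in a few lines of bookkeeping; in the sketch below, the function name and the parameter r_prime (the number of symbols of u[i, r + 1] packed into position r + 1) are ours, and only the positions j ≤ M − r covered by Lemma D.1 are handled:

```python
def revealed_symbols(j, M, r, n, r_prime):
    """Number of vvec/pvec symbols made available at the decoder when the
    channel packet x[i, j] of (D.3) is revealed, following the case
    analysis in the proof of Lemma D.1."""
    if 1 <= j <= r:
        return n                           # u[i, j] unlocks p[i+T, j]
    if j == r + 1:
        return r_prime + (n - r_prime)     # p[i+T, r+1] plus v[i, 1]
    if r + 2 <= j <= M - r - 1:
        return n                           # v[i, j-r] itself
    if j == M - r:
        return r_prime + (n - r_prime)     # p[i, r+1] plus v[i, j-r]
    raise ValueError("Lemma D.1 only considers j <= M - r")

# every transition j -> j+1 compensates the n freshly erased symbols
assert all(revealed_symbols(j, 12, 3, 7, 2) == 7 for j in range(1, 10))
```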
Appendix E
Diversity Embedded Streaming
Codes (DE-SCo)
E.1 Proof of Proposition 7.1
From (7.7), the urgent symbols combined in the set of parity-checks p[i], . . . , p[i + T − B − 1] are unerased (they come from times before i − B). The linear combinations hj corresponding to these parity-checks combine all erased non-urgent symbols v[t], for t ∈ [i − B, i − 1]. This fact can be proved by expanding the earliest vector of parity-checks p[i] = (p0[i], . . . , pT−1[i]), where from (7.7)
pj [i] = sj [i− T ] + hj(sB [i− T +B − j], . . . , sT−1[i− j − 1])
for j = 0, . . . , B − 1 and the latest vector of parity-checks p[i + T −B − 1] as follows,
pj [i+ T −B − j] = sj [i−B − 1] + hj(sB [i− 1− j], . . . , sT−1[i + T −B − 2− j]),
for j = 0, 1, . . . , B − 1. Since the hj are linear combinations arising from the MDS code, they are linearly independent. Furthermore, the number of available linear combinations equals the number of erased non-urgent symbols, namely (T − B)B, and is thus sufficient to recover these symbols.
For the set of parity-checks p[j + T ], i− B ≤ j < i, we can directly recover the set of erased urgent
symbols u[t], for t ∈ [i−B, i− 1] since all the non-urgent packets are now available (either un-erased or
revealed in the previous step).
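The counting at the heart of this argument is immediate; a one-function sketch (name ours):

```python
def prop_7_1_counts_match(B, T):
    """Counting argument of Proposition 7.1: the T - B parity-check vectors
    p[i], ..., p[i + T - B - 1], with B symbols each, supply exactly as many
    linearly independent combinations as there are erased non-urgent symbols
    sB[t], ..., s_{T-1}[t] over the B erased times t in [i - B, i - 1]."""
    available = (T - B) * B    # (T - B) parity vectors, B symbols each
    erased = B * (T - B)       # B erased packets, T - B symbols each
    return available == erased
```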
E.2 Example of DE-SCo {(B1, T1), (B2, T2)} = {(2, 3), (4, 8)}
In this section, we discuss both the encoding and decoding steps of a DE-SCo with parameters {(2, 3), (4, 8)}, shown in Table E.1. The rate achieved is Ca = T1/(T1 + B1) = 3/5.
Table E.1: Rate 3/5 DE-SCo construction that satisfies the region (a) point described by user 1 with (B1, T1) = (2, 3) and user 2 with (B2, T2) = (2B1, 2T1 + B1) = (4, 8).
[i− 1] [i] [i+ 1] [i+ 2] [i+ 3] [i+ 4]
s0[i− 1] s0[i] s0[i+ 1] s0[i + 2] s0[i+ 3] s0[i+ 4]
s1[i− 1] s1[i] s1[i+ 1] s1[i + 2] s1[i+ 3] s1[i+ 4]
s2[i− 1] s2[i] s2[i+ 1] s2[i + 2] s2[i+ 3] s2[i+ 4]
s0[i− 4]⊕ s2[i− 2] s0[i− 3]⊕ s2[i− 1] s0[i− 2]⊕ s2[i] s0[i− 1]⊕ s2[i+ 1] s0[i]⊕ s2[i+ 2] s0[i+ 1]⊕ s2[i+ 3]
⊕ ⊕ ⊕ ⊕ ⊕ ⊕s2[i− 9]⊕ s0[i− 7] s2[i− 8]⊕ s0[i− 6] s2[i− 7]⊕ s0[i− 5] s2[i− 6]⊕ s0[i− 4] s2[i− 5]⊕ s0[i − 3] s2[i− 4]⊕ s0[i− 2]
s1[i− 4]⊕ s2[i− 3] s1[i− 3]⊕ s2[i− 2] s1[i− 2]⊕ s2[i− 1] s1[i− 1]⊕ s2[i] s1[i]⊕ s2[i+ 1] s1[i+ 1]⊕ s2[i+ 2]
⊕ ⊕ ⊕ ⊕ ⊕ ⊕s1[i− 9]⊕ s0[i− 8] s1[i− 8]⊕ s0[i− 7] s1[i− 7]⊕ s0[i− 6] s1[i− 6]⊕ s0[i− 5] s1[i− 5]⊕ s0[i − 4] s1[i− 4]⊕ s0[i− 3]
[i+ 5] [i+ 6] [i+ 7] [i+ 8] [i+ 9] [i+ 10]
s0[i+ 5] s0[i+ 6] s0[i+ 7] s0[i+ 8] s0[i+ 9] s0[i+ 10]
s1[i+ 5] s1[i+ 6] s1[i+ 7] s1[i+ 8] s1[i+ 9] s1[i+ 10]
s2[i+ 5] s2[i+ 6] s2[i+ 7] s2[i+ 8] s2[i+ 9] s2[i+ 10]
s0[i+ 2]⊕ s2[i+ 4] s0[i+ 3]⊕ s2[i+ 5] s0[i+ 4]⊕ s2[i+ 6] s0[i+ 5]⊕ s2[i+ 7] s0[i+ 6]⊕ s2[i + 8] s0[i+ 7]⊕ s2[i+ 9]⊕ ⊕ ⊕ ⊕ ⊕ ⊕
s2[i− 3]⊕ s0[i− 1] s2[i− 2]⊕ s0[i] s2[i− 1]⊕ s0[i+ 1] s2[i]⊕ s0[i+ 2] s2[i+ 1]⊕ s0[i + 3] s2[i+ 2]⊕ s0[i+ 4]
s1[i+ 2]⊕ s2[i+ 3] s1[i+ 3]⊕ s2[i+ 4] s1[i+ 4]⊕ s2[i+ 5] s1[i+ 5]⊕ s2[i+ 6] s1[i+ 6]⊕ s2[i + 7] s1[i+ 7]⊕ s2[i+ 8]⊕ ⊕ ⊕ ⊕ ⊕ ⊕
s1[i− 3]⊕ s0[i− 2] s1[i− 2]⊕ s0[i− 1] s1[i− 1]⊕ s0[i] s1[i]⊕ s0[i+ 1] s1[i+ 1]⊕ s0[i + 2] s1[i+ 2]⊕ s0[i+ 3]
E.2.1 Encoder:
The encoding steps are as follows,
• Split each source packet into T1 = 3 symbols, i.e., s[i] = (s0[i], s1[i], s2[i]).
• Apply a (B1, T1) = (2, 3) MS code to the source stream generating B1 = 2 parity-check symbols
pI[i] = (pI0[i], pI1[i]) where,
pI0[i] = s0[i− 3]⊕ s2[i− 1], pI1[i] = s1[i− 3]⊕ s2[i− 2]. (E.1)
• Apply another (B1, T1) = (2, 3) MS code to the source stream across the opposite diagonal. In particular, the generated B1 = 2 parity-check symbols will be pII[i] = (pII0[i], pII1[i]), given by,

pII0[i] = s2[i− 3]⊕ s0[i− 1], pII1[i] = s1[i− 3]⊕ s0[i− 2]. (E.2)
• Combine the two parity-check streams after shifting the second stream by T1 +B1 = 5 time slots
and then concatenate the generated parities with the source packets. In other words, the overall
channel packet would be,
x[i] = (s[i],pI[i] + pII[i− 5]) (E.3)
The associated channel packets in the interval [i − 1, i + 10] are shown in Table E.1.
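The three encoding steps can be traced symbolically; the sketch below (helper names ours) represents each GF(2) symbol as a set of source-symbol tags, with XOR as symmetric difference, so the parity expressions of (E.1)-(E.3) can be compared against the entries of Table E.1:

```python
def sym(k, i):
    """The source symbol s_k[i], as a formal one-element XOR expression."""
    return frozenset({(k, i)})

def xor(*terms):
    out = frozenset()
    for t in terms:
        out = out ^ t                     # XOR over GF(2) = symmetric diff
    return out

def p_I(i):                               # (E.1): first (2, 3) MS code
    return (xor(sym(0, i - 3), sym(2, i - 1)),
            xor(sym(1, i - 3), sym(2, i - 2)))

def p_II(i):                              # (E.2): opposite-diagonal MS code
    return (xor(sym(2, i - 3), sym(0, i - 1)),
            xor(sym(1, i - 3), sym(0, i - 2)))

def parity(i):                            # (E.3): combine, shifting p_II by 5
    a, b = p_I(i)
    c, d = p_II(i - 5)
    return (xor(a, c), xor(b, d))

# first parity symbol at time i: s0[i-3] + s2[i-1] + s2[i-8] + s0[i-6],
# matching the fourth row of Table E.1
assert parity(0)[0] == {(0, -3), (2, -1), (2, -8), (0, -6)}
```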
E.2.2 Decoder:
For the first user, we assume an erasure burst of length B1 = 2 in the interval [i, i+ 1]. The decoder of
the first user can compute the parity-check packets pII[i − 3], pII[i − 2] and pII[i − 1] located at time
i + 2, i + 3 and i + 4 respectively, as they combine source packets which are not erased. Thus, the parity-checks of the first (B1, T1) MS code are available, and the burst of length B1 can be recovered with a delay of T1.
For the second user, we suppose an erasure burst of length B2 = 4 takes place in the interval [i, i + 3].
The decoder proceeds as follows,
1. Recover s0[i + 2] and s1[i + 3] at time i+ 5 and i+ 6 respectively.
2. At time i+ 7, the receiver recovers s0[i+ 1] and s0[i].
3. Now, the receiver can go back and subtract s0[i] to recover s0[i+ 3] at time i+ 6.
4. At time i + 8, s2[i] can be recovered with a delay of T2 = 8 by subtracting s0[i + 2] which is
recovered earlier. Similarly, s1[i] can be recovered at time i + 8, i.e. with a delay of T2 = 8 by
subtracting s0[i+ 1].
5. At time i+9, s2[i+1] and s1[i+1] can be recovered with a delay of T2 = 8 by subtracting s0[i+3]
and s0[i+ 2] recovered in steps 3 and 1 respectively.
6. The decoder proceeds similar to step 5 to recover s2[i+ 2], s1[i+2], s2[i+3] and s1[i+3] at time
i+ 10, i+ 10, i+ 11 and i+ 11 respectively, i.e., with a delay of T2 = 8.
By this time, the decoder has recovered all erased source packets s[t] = (s0[t], s1[t], s2[t]) for t ∈ [i, i + 3] with a delay of at most T2, and the claim follows.
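As a consistency check on the recovery times stated in the steps above, the schedule can be tabulated and the worst-case delay verified; the dictionary layout is ours (keys are (symbol index k, offset t) for s_k[i + t], and where a symbol appears in two steps its earliest stated time is used):

```python
# recovery times (relative to i) taken from decoding steps 1-6 above
schedule = {
    (0, 2): 5, (1, 3): 6,                 # step 1
    (0, 1): 7, (0, 0): 7,                 # step 2
    (0, 3): 6,                            # step 3
    (2, 0): 8, (1, 0): 8,                 # step 4
    (2, 1): 9, (1, 1): 9,                 # step 5
    (2, 2): 10, (1, 2): 10, (2, 3): 11,   # step 6
}
# every erased symbol s_k[i + t] is recovered with delay at most T2 = 8
assert max(time - t for (k, t), time in schedule.items()) == 8
```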
E.3 Proof of Recursion for DE-SCo with Integer α
We use induction to establish the recursion in Ind. 1 and Ind. 2 as follows.
We start with the base step, i.e., k = 1. According to Ind. 1 the non-urgent symbols {dIIj }j≤i−B−1
are available (from step 1). To recover dIi−B−1, note that the only erased symbol in this vector before
time i−B is s0[i−B− 1] which has already been recovered in dIIi−B−1. Hence the parity-checks of C1 at
the times i, . . . , i+ T − 1 suffice to recover the remaining symbols. According to Ind. 2 the non-urgent
symbols in {dIj}j≥i−B have been recovered in step (4). Furthermore in vectors dII
i−B , . . . ,dIIi−B+α−2 the
only erased symbols after time i − B − 1 are s0[i − B], . . . , s0[i − B + α − 2], which are available from
{dIj}j≥i−B. Thus the parity-checks pII[·] can be used to recover the remaining non-urgent symbols in
these vectors.
Next suppose the statement holds for some t = k. We establish that the statement holds for t = k+1.
In Ind. 1, the vector of interest is,
dIi−B−(k+1) = (s0[i−B − (k + 1)], ..., sk[i−B − 1], ..., sT−1[i−B − k + (T − 2)]).
The erased elements in the interval i− αB, . . . , i−B − 1 are sj [i−B − k+ j − 1] for j = 0, . . . , k. Note
that sj[i − B − k + j − 1] is precisely the j-th symbol in the diagonal vector dIIi−B−k+αj−1. Furthermore, the diagonals of interest dIIi−B−k−1, . . . , dIIi−B+(α−1)k−1 were already visited in Ind. 2 in the k-th recursion.
Hence the remaining symbols are recovered using the parity-checks of C1.
For Ind. 2, the first vector of interest at step k + 1 is
dIIi−B+k(α−1) = (s0[i −B + k(α− 1)], ..., sk[i−B], sk+1[i−B − (α− 1)], ...).
Note that the symbols s0[·], . . . , sk[·] above also belong to the vectors dIi−B+(α−1)k, . . . , dIi−B−k, and are recovered in Ind. 1 by the k-th step. Since the remaining erased packets span the interval [i − αB, i − B),
the parity-checks {pII[·]}t≥i−B recovered in step (3) can be used to recover these erased packets.
Likewise, the last vector of interest at step k + 1 is
dIIi−B+(k+1)(α−1)−1 = (s0[i − B + (k + 1)(α − 1) − 1], . . . , sk[i − B + (α − 1) − 1], sk+1[i − B − 1], . . .).
Note that the symbols s0[i − B + (k + 1)(α − 1) − 1], . . . , sk[i − B + (α − 1) − 1] above also belong to the vectors dIi−B+(k+1)(α−1)−1, . . . , dIi−B+(α−1)−k−1, which are recovered in Ind. 1 by step number k + 1 − (α − 1) < k + 1. Since the remaining erased packets span the interval [i − αB, i − B), the
parity-checks {pII[·]}t≥i−B recovered in step (3) can be used to recover these erased packets.
It only remains to show that the non-urgent packets in the diagonal dII are all recovered before time T. From Proposition 7.1, all the non-urgent symbols are recovered using the first (α − 1)(T − B) columns
of the parity-checks {pII[·]}t≥i−B. Since these parity-checks are shifted by T +B, they fall in the interval
i+ T, . . . , i + T + (α− 1)(T −B)− 1 = T − 1. Thus only the parity-checks before time T are required
to recover the non-urgent source symbols.
This completes the claim in Ind. 1 and Ind. 2.
E.4 Proof of Recursion for DE-SCo with Non-integer α
Consider the case when k = 1. According to Ind. 1 the non-urgent packets {dIIj }j≤i−B−1 are available
(from step 1). To recover dIi−B−1, note that the only erased packet in this vector before time i − B is
s0[i−B − 1] which has already been recovered in dIIi−B−1. Going b steps to the left, the same argument
applies as the only erased packet in the vector dIi−B−b before time i − B is s0[i − B − b] which has
already been recovered in dIIi−B−b. Hence the parity-checks of C1 between [i, i+ T − 1] suffice to recover
the remaining packets. According to Ind. 2 the non-urgent symbols in {dIj}j≥i−B have been recovered
(in step 3). Furthermore in vectors dIIi−B, . . . ,d
IIi−B+(a−b)−1 the only erased packets after time i−B− 1
are s0[i − B], . . . , s0[i − B + (a − b) − 1], which are available from {dIj}j≥i−B. Thus the parity-checks pII[·] can be used to recover the remaining non-urgent symbols in these vectors. We show the dependence
of any case k+1 on the previous case (k) for any value of k ∈ {1, . . . , T −B− 1} as follows. In the k+1
recursion, according to Ind. 1,
dIi−B−kb−1 =(s0[i−B − kb− 1], ..., sk[i−B − 1], ..., sT0−1[i−B − kb+ (T0 − 1)b− 1]),
with the latest element in the interval i − αB, . . . , i − B − 1 being sk[i − B − 1], which is an element of the vector dIIi−B−1+k(a−b) recovered in Ind. 2 at the k-th recursion. Likewise, the earliest vector in Ind. 1 at
the (k + 1)th recursion can be expressed as,
dIi−B−(k+1)b =(s0[i−B − (k + 1)b], ..., sk[i−B − b], ..., sT0−1[i−B − (k + 1)b+ (T0 − 1)b]),
with the latest element in the interval i − αB, . . . , i − B − 1 being sk[i − B − b], which is an element of the vector dIIi−B−b+k(a−b) recovered in Ind. 2 at an earlier recursion.
According to Ind. 2,
dIIi−B+k(a−b) = (s0[i−B + k(a− b)], ..., sk[i−B], sk+1[i−B − (a− b)], ...),
with the earliest element lying in the interval i − B, . . . , i − 1 being sk[i − B], which is an element of the vector dIi−B−kb recovered in Ind. 1 at the k-th recursion. Likewise, the earliest vector in Ind. 2 at the
(k + 1)th recursion can be expressed as,
dIIi−B+(k+1)(a−b)−1 = (s1[i −B + (k + 1)(a− b)− 1], ..., sk+1[i−B + (a− b)− 1], sk+2[i−B − 1], ...),
with the earliest element lying in the interval i − B, . . . , i − 1 being sk+1[i − B + (a − b) − 1], which is an element of the vector dIi−B−kb+(a−b)−1 recovered in Ind. 1 at an earlier recursion, and the claim in Step 2 follows.
The fact that all the non-urgent packets are recovered before time T as well as the proof of step 2,
i.e., the recovery of urgent packets from C2 is identical to the case of integer α.
Appendix F
Multicast Streaming Codes
(Mu-SCo)
F.1 Example - {(1, 2), (2, 4)} in Region (b)
Consider a Mu-SCo with parameters {(1, 2), (2, 4)} which falls in region (b). The capacity is given
by R = 3/5 according to Theorem 8.1. The construction is provided in Table F.1. Through direct
calculation note that T1 = 1.5 (cf. (8.14)). Hence we implement a source expansion technique with
(p, r) = (nT1, n) = (3, 2). According to Definition 7.2, such expansion is as follows. We split each
source packet s[i] into six symbols s0[i], . . . , s5[i] and construct an expanded source sequence s[·] such that s[2i] = (s0[i], s1[i], s2[i]) and s[2i + 1] = (s3[i], s4[i], s5[i]). We apply a {(2, 3), (4, 8)} DE-SCo (see
Table E.1 in Appendix E.2) to s[·] to produce the parity-checks p[·] and transmit p[i] = (p[2i], p[2i+1])
along with s[i] at time i. Using Lemma 7.1, it can be directly verified that the resulting code corrects a
single erasure with a delay of 2 packets and an erasure burst of length 2 with a delay of 4.
F.2 Proof of Lemma 8.1
To establish Lemma 8.1, we consider the two cases T2 > T1 + B1 and T2 ≤ T1 + B1 separately. When
T2 > T1+B1, we consider a periodic erasure channel with period length Tp = T2+B2−B1. Each period
has B2 erasures followed by T2 − B1 unerased packets as shown in Figure F.1a. We start with the first
period consisting of the channel packets x[t] for t ∈ [0, Tp − 1] and the decoder proceeds as follows,
• For time t = 0, 1, . . . , Tp − 1, the channel behaves similarly to a burst erasure channel with B2 erasures.
Hence, the first B2−B1 source packets s[0], . . . , s[B2−B1− 1] can be recovered using the (B2, T2)
code within a delay of T2 packets, i.e., by time T2, . . . , T2 + B2 − B1 − 1 = Tp − 1, respectively.
The corresponding channel packets x[0], . . . ,x[B2 −B1 − 1] can then be computed.
• With all the previous channel packets being recovered, the channel at time t = B2 −B1, . . . , Tp −1 behaves as a burst erasure channel with B1 erasures. Hence, the B1 source packets s[B2 −B1], . . . , s[B2 − 1] can be recovered using the (B1, T1) code within a delay of T1 packets, i.e., by
time T1 +B2 −B1, . . . , T1 +B2 − 1. We note that the latest recovery time is T1 +B2 − 1 < Tp − 1
since T2 > T1 +B1.
Table F.1: Rate 3/5 Mu-SCo construction that satisfies the region (b) point described by user 1 with (B1, T1) = (1, 2) and user 2 with (B2, T2) = (2, 4).
[i− 1] [i] [i+ 1]
s0[i− 1] s3[i − 1] s0[i] s3[i] s0[i+ 1] s3[i+ 1]
s1[i− 1] s4[i − 1] s1[i] s4[i] s1[i+ 1] s4[i+ 1]
s2[i− 1] s5[i − 1] s2[i] s5[i] s2[i+ 1] s5[i+ 1]
s3[i− 3]⊕ s5[i− 2] s0[i− 2]⊕ s2[i− 1] s3[i− 2]⊕ s5[i− 1] s0[i− 1]⊕ s2[i] s3[i− 1]⊕ s5[i] s0[i]⊕ s2[i+ 1]
⊕ ⊕ ⊕ ⊕ ⊕ ⊕s2[i− 5]⊕ s0[i− 4] s5[i− 5]⊕ s3[i− 4] s2[i− 4]⊕ s0[i− 3] s5[i− 4]⊕ s3[i− 3] s2[i− 3]⊕ s0[i − 2] s5[i− 3]⊕ s3[i− 2]
s4[i− 3]⊕ s2[i− 2] s1[i− 2]⊕ s5[i− 2] s4[i− 2]⊕ s2[i− 1] s1[i− 1]⊕ s5[i− 1] s4[i− 1]⊕ s2[i] s1[i]⊕ s5[i]
⊕ ⊕ ⊕ ⊕ ⊕ ⊕s1[i− 5]⊕ s3[i− 5] s4[i− 5]⊕ s0[i− 4] s1[i− 4]⊕ s3[i− 4] s4[i− 4]⊕ s0[i− 3] s1[i− 3]⊕ s3[i − 3] s4[i− 3]⊕ s0[i− 2]
[i+ 2] [i+ 3] [i+ 4]
s0[i+ 2] s3[i+ 2] s0[i+ 3] s3[i+ 3] s0[i+ 4] s3[i+ 4]
s1[i+ 2] s4[i+ 2] s1[i+ 3] s4[i+ 3] s1[i+ 4] s4[i+ 4]
s2[i+ 2] s5[i+ 2] s2[i+ 3] s5[i+ 3] s2[i+ 4] s5[i+ 4]
s3[i]⊕ s5[i + 1] s0[i+ 1]⊕ s2[i+ 2] s3[i+ 1]⊕ s5[i+ 2] s0[i+ 2]⊕ s2[i+ 3] s3[i+ 2]⊕ s5[i + 3] s0[i+ 3]⊕ s2[i+ 4]⊕ ⊕ ⊕ ⊕ ⊕ ⊕
s2[i− 2]⊕ s0[i− 1] s5[i− 2]⊕ s3[i− 1] s2[i− 1]⊕ s0[i] s5[i− 1]⊕ s3[i] s2[i]⊕ s0[i+ 1] s5[i]⊕ s3[i+ 1]
s4[i]⊕ s2[i + 1] s1[i+ 1]⊕ s5[i+ 1] s4[i+ 1]⊕ s2[i+ 2] s1[i+ 2]⊕ s5[i+ 2] s4[i+ 2]⊕ s2[i + 3] s1[i+ 3]⊕ s5[i+ 3]⊕ ⊕ ⊕ ⊕ ⊕ ⊕
s1[i− 2]⊕ s3[i− 2] s4[i− 2]⊕ s0[i− 1] s1[i− 1]⊕ s3[i− 1] s4[i− 1]⊕ s0[i] s1[i]⊕ s3[i] s4[i]⊕ s0[i+ 1]
• It remains to show that the source packets s[B2], . . . , s[Tp − 1] are also recovered. For time t =
B2, . . . , Tp + B2 − 1, the channel behaves as a burst erasure channel which introduces a burst of
length B2 spanning the interval [Tp, Tp + B2 − 1]. Hence, the source packets s[B2], . . . , s[Tp − 1]
can be recovered using the (B2, T2) code.
The above steps can be repeated across all periods. Since each period has length Tp = T2 + B2 − B1 and contains B2 erasures, any {(B1, T1), (B2, T2)} Mu-SCo with T2 > T1 + B1 must satisfy

R ≤ (T2 − B1) / (T2 − B1 + B2),    (F.1)

which gives our upper bound on the rate.
For the case with T2 ≤ T1 + B1, a similar argument applies, except that the period length is Tp = T1 + B2 with B2 erasures in each period (see Figure F.1b), and the corresponding upper bound is given by

R ≤ T1 / (T1 + B2).    (F.2)
This completes the proof.
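As a quick numerical check of these bounds (a sketch of ours; the function name is not from the text), the two cases can be evaluated with exact rational arithmetic. For the point of Table F.1, (B1, T1) = (1, 2) and (B2, T2) = (2, 4), the first case applies and gives R ≤ 3/5, matching the rate of the construction:

```python
from fractions import Fraction

def musco_rate_upper_bound(B1, T1, B2, T2):
    """Upper bound on the rate of a {(B1,T1),(B2,T2)} Mu-SCo, per (F.1)/(F.2)."""
    if T2 > T1 + B1:
        # Period length Tp = T2 + B2 - B1 with B2 erasures per period.
        return Fraction(T2 - B1, T2 - B1 + B2)
    # Period length Tp = T1 + B2 with B2 erasures per period.
    return Fraction(T1, T1 + B2)

# Point of Table F.1: T2 = 4 > T1 + B1 = 3, so (F.1) applies.
print(musco_rate_upper_bound(1, 2, 2, 4))  # -> 3/5
```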
F.3 Proof of Lemma 8.2
We restate the lemma for convenience of the reader.
Lemma. When the erasure burst spans the interval I2 = [i − B2, i − 1], the decoder at receiver 2 can recover all the overlapping parity-check symbols p^1_j[t] for t ∈ J2 = [i + B1 − k, i + T1 − 1] and j ∈ {k, . . . , B1 − 1} by time t, using ←p^3_j[t + T1]|_t for t ∈ J1 = [i, i + B1 − k − 1] and the unerased source packets starting from time i.

[Figure F.1: One period of the Periodic Erasure Channel of Figure A.2, used to prove the multicast upper bound of Lemma 8.1. Each period starts with a burst of b = B2 erasures. (a) T2 > T1 + B1: period length c = T2 + B2 − B1. (b) T2 ≤ T1 + B1: period length c = B2 + T1. Grey and white squares denote erased and unerased packets, respectively.]
First recall that the parity-check symbols that span the interval t ∈ J1 = [i, i − B2 + T2 − 1] are available to the decoder, as they combine p^2[t] = s[t − T2], which are not erased. Hence, ←p^3_{j2}[t] for j2 ∈ {0, . . . , B3 − 1} and t ∈ J1 are recovered at the decoder.
Recall that C3 is a (B3, T3) MS code applied to the last B1 − k parity-check symbols of C1, treated as source symbols.

In our proof, it will be convenient to define the parity-check packets of C1 that need to be recovered as:

w[t] = (w_0[t], . . . , w_{T3−1}[t]) = (p^1_k[t], . . . , p^1_{B1−1}[t]),  t ∈ {i + T3, . . . , i + T1 − 1}.  (F.3)
We first consider case (A), i.e., when T1 ≤ 2(B1 − k). Since C3 is an MS code, which involves diagonal interleaving of Low Delay - Burst Erasure Block Codes (LD-BEBC), the diagonals that span the symbols of interest are as follows:

d_r = (w_0[i + r], . . . , w_{T3−1}[i + r + T3 − 1], p^3_0[i + r + T3], . . . , p^3_{B3−1}[i + r + T3 + B3 − 1]),  r ∈ {1, . . . , T3 + B3 − 1}.  (F.4)

Since the parity-check symbols of C3 are shifted back by T1 = T3 + B3, keeping only their causal part, the corresponding diagonals of interest are

d_r = (w_0[i + r], . . . , w_{T3−1}[i + r + T3 − 1], ←p^3_0[i + r + T3]|_{i+r−B3}, . . . , ←p^3_{B3−1}[i + r + T3 + B3 − 1]|_{i+r−1}),  (F.5)

where ←p_j[t1]|_{t2} denotes the causal part of the parity-check p_j[t1] w.r.t. t2 (cf. Definition 8.2).
With every parity-check symbol projected to a different time instant, one can see that d_r is no longer a codeword of an LD-BEBC code.
The following conditions are sufficient to establish Lemma 8.2.
c1 The diagonals d_r in (F.5) for r ∈ {1, . . . , T3 + B3 − 1} span all the parity-check symbols that need to be recovered, i.e., p^1_j[·] for j ∈ {k, . . . , B1 − 1} in the interval J2 = [i + T2 − B2, i + T1 − 1].
Figure F.2: Diagonal embedding of parity-checks for the construction in Section 8.5. The parity-checks p^3[·] in layer (4) are generated by applying a (B3, T3) MS code to the last B1 − k parity-checks p^1[·] in layer (3). The parity-checks p^3[·] are shifted back by T1 units as discussed before, and only the causal part of these parities is used.
c2 The decoder can compute the non-causal part of each parity-check p^3_j[·] in the interval J2 = [i + T2 − B2, i + T1 − 1] and reduce (F.5) to (F.4). Furthermore, this step should not violate the zero-delay constraint for any erased packet on the diagonal, i.e., the non-causal part of the parity-check symbol p^3_{j1}[t_x] required for the recovery of a given parity-check w_{j2}[t_y] should combine source symbols s[·] that are both unerased and from times earlier than t_y.
c3 Each diagonal dr should have no more than B3 erased symbols.
For (c1), we note that the diagonal d_1 covers w_{T3−1}[i + T3] = p^1_{B1−1}[i + T3], which is the lower-left-most symbol that needs to be recovered. At r = T3 + B3 − 1, one can see that d_r combines w_0[i + T3 + B3 − 1] = p^1_k[i + T3 + B3 − 1], which is the upper-right-most symbol that needs to be recovered. Figure F.2 illustrates that the diagonals d_r for r ∈ [1, T3 + B3 − 1] cover all of the erased symbols in the interval J2.

For (c2), consider the symbols w_0[i + r], . . . , w_{T3−1}[i + r + T3 − 1] of the diagonal d_r. These involve source packets s[·] from time i + r − 1 and earlier, according to the diagonal interleaving property of C1. Thus, one can conclude that the non-causal part of any parity-check symbol p^3_j[i + r + T3 + j] with respect to i + r − B3 + j, for j ∈ {0, . . . , B3 − 1} in d_r, is just a combination of source packets in the interval [i + r − B3 + j, i + r − 1]. Thus the entire non-causal part of each parity-check is available before time i + r, and the reduction to (F.4) is possible for each d_r.
Finally, note that the zero-delay constraint also requires that the packets w_j[t] with t ≥ i + T1 in d_r be made available before time t = i + r. Since each w_j[t] for t ≥ i + T1 only consists of combinations of source packets in [i, i + r − 1], these packets can be explicitly computed by the decoder by time i + r − 1, and (c2) follows.
For (c3), we divide the values of r into three intervals.
• d_r for r ∈ {1, . . . , T1 − T3}: In this range, one can see that the following packets are available,

(w_0[i + r], . . . , w_{T3−r−1}[i + T3 − 1], ←p^3_{B3−r}[i + T3 + B3]|_i, . . . , ←p^3_{B3−1}[i + r + T3 + B3 − 1]|_{i+r−1}),

which are a total of T3 symbols at the beginning and the end of the diagonal d_r, which contains T3 + B3 symbols. In other words, each such diagonal has B3 erased symbols occurring in a burst.
• d_r for r ∈ {T1 − T3 + 1, . . . , T3}: In these diagonals, the following packets are available,

(w_0[i + r], . . . , w_{T3−r−1}[i + T3 − 1], w_{T1−r}[i + T1], . . . , w_{T3−1}[i + r + T3 − 1], ←p^3_0[i + r + T3]|_{i+r−B3}, . . . , ←p^3_{B3−1}[i + r + T3 + B3 − 1]|_{i+r−1}).

The first group is a total of T3 − r consecutive symbols, while the other two groups are a total of r consecutive symbols. This implies that each such diagonal d_r has B3 erased symbols in a burst.
• d_r for r ∈ {T3 + 1, . . . , T3 + B3 − 1}: The available symbols in these diagonals are,

(w_{T1−r}[i + T1], . . . , w_{T3−1}[i + r + T3 − 1], ←p^3_0[i + r + T3]|_{i+r−B3}, . . . , ←p^3_{T3+B3−r−1}[i + 2T3 + B3 − 1]|_{i+T3−1}),

which are again a total of T3 consecutive symbols, which implies that the considered diagonals d_r have B3 erased symbols in a burst, and (c3) follows. We note that LD-BEBC codes are capable of recovering wrap-around bursts, which may start at the end of the block and wrap around to the beginning of that block.
This completes the proof when T1 ≤ 2(B1 − k).
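The case analysis above can be sanity-checked programmatically. The sketch below (our own illustration, not part of the construction) marks a position of d_r as erased according to the availability rules used in the three bullets — w_j[t] still needs recovery iff t ∈ [i + T3, i + T1 − 1], and the causal parity ←p^3_j is available iff it is sent in J1 = [i, i + T3 − 1] — and then verifies that every diagonal has exactly B3 erasures forming a (possibly wrap-around) burst:

```python
def erased_positions(r, T3, B3):
    """Erased positions of d_r (length T3+B3) for a burst on [i-B2, i-1].
    Positions 0..T3-1 hold w_m at time i+r+m; positions T3.. hold parity j=m-T3,
    whose causal part is sent at time i+r-B3+j."""
    T1 = T3 + B3
    erased = []
    for m in range(T3 + B3):
        if m < T3:
            t = r + m                      # time offset of w_m relative to i
            if T3 <= t <= T1 - 1:          # w_m still needs recovery
                erased.append(m)
        else:
            j = m - T3
            sent = r - B3 + j              # send-time offset of the causal parity
            if not (0 <= sent <= T3 - 1):  # outside J1 = [i, i+T3-1]: unavailable
                erased.append(m)
    return erased

def is_cyclic_burst(pos, L):
    """True if the positions form a contiguous run, possibly wrapping mod L."""
    s = set(pos)
    return any(all((start + k) % L in s for k in range(len(pos)))
               for start in pos)

# Example (1) parameters: C3 = (B3, T3) = (2, 3), so T1 = T3 + B3 = 5.
T3, B3 = 3, 2
for r in range(1, T3 + B3):
    e = erased_positions(r, T3, B3)
    assert len(e) == B3 and is_cyclic_burst(e, T3 + B3)
print("every diagonal has a burst of", B3, "erasures")
```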
F.3.1 T1 > 2(B1 − k)
When T1 > 2(B1 − k), note that C3 is a concatenation of r + 1 codes, the first r of which are repetition codes with parity-check symbols given by (8.34). These parity-check symbols in the interval [i, i + (B1 − k) − 1] can be used to recover the causal part of the parity-check symbols (p^1_k[t1], . . . , p^1_{B1−1}[t1]) for t1 ∈ {i + (B1 − k), . . . , i + (r + 1)(B1 − k) − 1} = {i + T2 − B2, . . . , i + T1 − q − 1}. The non-causal part of these parity-check symbols combines source symbols in the interval [i, t1 − 1], which are not erased, and thus can be recovered.

The remaining q columns of parity-check symbols (p^1_k[t2], . . . , p^1_{B1−1}[t2]) for t2 ∈ {i + (r + 1)(B1 − k), . . . , i + (r + 1)(B1 − k) + q − 1} = {i + T1 − q, . . . , i + T1 − 1} can be recovered using the parity-check symbols of C3,r+1 = (q, B1 − k). This step is similar to that of recovering the T1 − (B1 − k) columns of parity-check symbols of C1 using C3 = (B3, T3) = (T1 − (B1 − k), B1 − k) done above, except that B3 = T1 − (B1 − k) is replaced by B3,r+1 = q.
Table F.2: Rate 5/11 Mu-SCo construction for the point (B1, T1) = (4, 5) and (B2, T2) = (7, 10) lying in region (e). This point also illustrates case (A), defined by T1 ≤ 2(B1 − k). For the causal part of parity-check symbols of C1 shifted back to time i − t, we write ←p_j[i] instead of ←p_j[i]|_{i−t} for simplicity. Each channel packet at time t (the original table lists the columns for t = i, . . . , i + 5) consists of the following rows:

(1) s0[t], s1[t], s2[t], s3[t], s4[t]
(2) p0[t]
(3) s0[t − 10] + p1[t],  s1[t − 10] + p2[t],  s2[t − 10] + p3[t]
(4) s3[t − 10] + ←p1[t + 2] + ←p3[t + 4],  s4[t − 10] + ←p2[t + 2] + ←p3[t + 3]
F.4 Examples of Code Construction in Region (e)
We give the construction for two specific points in this region, Table F.2 shows the code construction
for the point {(4, 5), (7, 10)} whereas Table F.3 shows the code construction for the point {(3, 5), (7, 9)}.In both cases k = 1 and m = 1. The former satisfies T1 < 2(B1 − k) whereas the latter satisfies
T1 > 2(B1 − k).
F.4.1 Example (1): {(4, 5), (7, 10)}

Using the relations T2 = T1 + B1 + k and B2 = T1 + k + m, we have that k = m = 1. The code construction achieving the capacity of 5/11 is illustrated in Table F.2. In this example, we walk through the steps of both the encoder and the decoder. We note that this point falls in case (A), defined by T1 ≤ 2(B1 − k), in the general code construction given in Section 8.5.
Encoder
• Each source packet is divided into T1 = 5 symbols (s0[·], . . . , s4[·]). A C1 = (4, 5) MS code is applied along the diagonal of these source symbols, producing B1 = 4 parity-check symbols (p0[·], . . . , p3[·]) defined as follows:

p0[i] = s0[i − 5] + s4[i − 1]
p1[i] = s1[i − 5] + s4[i − 2]
p2[i] = s2[i − 5] + s4[i − 3]
p3[i] = s3[i − 5] + s4[i − 4]    (F.6)
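A small sketch (our own check, with the parity rules of (F.6) encoded as (symbol index, time offset) pairs) verifying two properties the decoding argument relies on: each parity-check p_j[i] is causal (it combines source symbols from time i and earlier) and the MS code has a memory of T1 = 5 packets (no combined symbol is older than i − T1):

```python
T1 = 5
# p_j[i] combines s_a[i + d] for each (a, d) listed, per (F.6).
PARITY_TERMS = {
    0: [(0, -5), (4, -1)],
    1: [(1, -5), (4, -2)],
    2: [(2, -5), (4, -3)],
    3: [(3, -5), (4, -4)],
}

for j, terms in PARITY_TERMS.items():
    for _, d in terms:
        assert d <= 0,   "parity must be causal"
        assert d >= -T1, "MS code has memory of T1 packets"
print("parity checks of (F.6) are causal with memory", T1)
```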
Appendix F. Multicast Streaming Codes (Mu-SCo) 166
• The T1 = 5 parity-check packets of the code C2 = (10, 10), which are repetitions of the source symbols such that p^2_j[i] = s_j[i − 10] for j ∈ {0, . . . , 4}, are concatenated to the parity-checks of C1 with a partial overlap of B1 − k = 3 rows, as shown in Table F.2.

• A C3 = (T1 − (B1 − k), B1 − k) = (2, 3) MS code is applied to the last B1 − k = 3 rows of parity-check symbols of C1, (p1[·], p2[·], p3[·]), producing T1 − (B1 − k) = 2 parity-check symbols, (p^3_0[·], p^3_1[·]). The produced parity-checks are shifted back by T1 = 5 and combined with the last two rows of parity-check symbols of C2.
We note that applying a shift back of T1 = 5 to the parity-check symbols of C3 explains why p^3_0[i] = p1[i + 2] + p3[i + 4] appears at time i and not i + 5. Moreover, since p1[i + 2] + p3[i + 4] in general combines source symbols at time i + 3 and earlier, they cannot appear at time i, as this would violate the causality of the code construction. Thus, the causal part of such parity-checks shifted to any time instant t (denoted by ←p_j[·]|_t) is sent instead. For example, the first parity-check symbol of C3 at time i is p^3_0[i + 5] = p1[i + 2] + p3[i + 4] = s1[i − 3] + s4[i + 1] + s3[i − 1] + s4[i]. The causal part of this parity-check is sent instead, i.e., ←p^3_0[i + 5]|_i = ←p1[i + 2]|_i + ←p3[i + 4]|_i = s1[i − 3] + s3[i − 1] + s4[i].
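This causal/non-causal split can be sketched as follows (our own illustration; a parity-check is represented as a list of (symbol name, time index) terms, and the causal part w.r.t. t keeps exactly the terms with time ≤ t, cf. Definition 8.2):

```python
def split_causal(terms, t):
    """Split a parity-check, given as (symbol name, time index) terms,
    into its causal and non-causal parts w.r.t. time t."""
    causal = [term for term in terms if term[1] <= t]
    non_causal = [term for term in terms if term[1] > t]
    return causal, non_causal

# p3_0[i+5] = s1[i-3] + s4[i+1] + s3[i-1] + s4[i], taking i = 0.
i = 0
p = [("s1", i - 3), ("s4", i + 1), ("s3", i - 1), ("s4", i)]
causal, non_causal = split_causal(p, i)
print(causal)      # -> [('s1', -3), ('s3', -1), ('s4', 0)]
print(non_causal)  # -> [('s4', 1)]
```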
According to Figure 8.2, we divide each channel packet into four layers:

• Layer (1) contains the first five rows, which are the source symbols.
• Layer (2) contains the next row.
• Layer (3) contains the next three rows, where the overlap between the parity-checks of codes C1 and C2 takes place.
• Layer (4) contains the last two rows, where the overlap between the parity-checks of codes C2 and C3 takes place.
Decoder
With a burst erasure of length B1 = 4 taking place at times [i − 4, i− 1], the decoder at user 1 simply
uses the first four rows of parity-checks at times [i, i+ 4] after subtracting the unerased source symbols
s0[t], s1[t], s2[t] for t ∈ {i − 10, . . . , i − 6}. For user 2, we assume a burst erasure of length B2 = 7 at
times [i− 7, i− 1]. The decoding steps are as follows.
• Step (1): Recover p_j[i + 3] and p_j[i + 4] for j ∈ {1, 2, 3}.
(a) In layer (3), spanning the second, third and fourth rows of parity-checks, one can see that the
parity-check symbols of C2 in the interval [i, i + 2] are unerased source symbols. Thus, the
corresponding combined parity-check symbols of C1 can be computed in this interval.
(b) In the same layer but in the interval [i + 5,∞), the parity-check symbols of C1 are of indices
i + 5 and later. Using the fact that (B1, T1) MS code has a memory of T1 packets, it can be
easily shown that these parity-check symbols combine only source symbols of time i and later
which are not erased and thus can be computed as well (cf. (F.6)).
(c) Steps (a) and (b) show that all the parity-check symbols of C1 in layer (3) can be computed
except for the interval [i+ 3, i+ 4].
(d) The parity-check symbols of C2 in layer (4) spanning the last two rows of parity-check symbols
in the interval [i, i+ 2] are again unerased source symbols and thus can be cancelled and the
corresponding parity-check symbols of C3 can be computed in this interval.
(e) The parity-check symbols of C3 in the interval [i, i + 2],

( p^3_0[i + 5]  p^3_0[i + 6]  p^3_0[i + 7] )
( p^3_1[i + 5]  p^3_1[i + 6]  p^3_1[i + 7] ),    (F.7)

can recover the remaining two columns of parity-check symbols of C1 in the interval [i + 3, i + 4] lying in layer (3),

( p1[i + 3]  p1[i + 4] )
( p2[i + 3]  p2[i + 4] )
( p3[i + 3]  p3[i + 4] ),

since C3 is a (2, 3) MS code whose parity-check symbols are shifted back by T1 = 5.
However, only the causal part of the parity-checks of C3 is available. Thus, the non-causal part is to be computed and added to the causal part to recover the original parity-checks of the MS code. Using (F.6), it can be seen that the recovery of the non-causal part does not require the availability of source symbols after time i + 3 (a proof of this in the general case is provided in the proof of Lemma 8.2 in Appendix F.3). For example, the non-causal part of p^3_0[i + 5] is →p^3_0[i + 5]|_i = s4[i + 1], which is clearly available before time i + 3. Thus the non-causal portions of all the parity-checks are computed, and then (F.7) is applied.
• Step (2): After recovering these parity-check symbols, the decoder can cancel their effect in the second, third and fourth rows of parity-checks (layer (3)) at times i + 3 and i + 4.

• Step (3): Furthermore, one can see that the parity-check symbols of C3 interfering in the last two rows (layer (4)) starting at time i + 3 combine parity-check symbols of C1 of indices i + 5 and later, which were shown before to combine unerased source symbols (cf. (F.6)).

According to Steps (2) and (3), the parity-checks of the C2 = (10, 10) repetition code in layers (3) and (4) are now free of any interference from time i + 3 onwards. Thus, the decoder of user 2 is capable of recovering the erased source symbols in the interval [i − 7, i − 1].
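The rate of both example constructions can be read off the layer structure: 5 source rows out of 11 total rows per channel packet. A trivial sketch (layer sizes taken from the layer descriptions of the two examples; the function is ours):

```python
from fractions import Fraction

def rate_from_layers(layers):
    """Rate = source rows (layer (1)) / total rows per channel packet."""
    return Fraction(layers[0], sum(layers))

# Example (1): layers of sizes 5, 1, 3, 2; Example (2): 5, 1, 2, 3.
assert rate_from_layers([5, 1, 3, 2]) == Fraction(5, 11)
assert rate_from_layers([5, 1, 2, 3]) == Fraction(5, 11)
print(rate_from_layers([5, 1, 3, 2]))  # -> 5/11
```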
F.4.2 Example (2): {(3, 5), (7, 9)} ⇒ k = 1, m = 1

Again the capacity equals 5/11, and the code construction achieving this rate is illustrated in Table F.3. The reason we give the detailed encoding and decoding steps for one more example is to show the main differences between case (A), T1 ≤ 2(B1 − k), illustrated by the previous example {(4, 5), (7, 10)}, and case (B), T1 > 2(B1 − k), illustrated by this example, {(3, 5), (7, 9)}.
Encoder
• Each source packet is divided into T1 = 5 symbols (s0[·], . . . , s4[·]) (layer (1)). A C1 = (3, 5) MS code is applied along the diagonal of these source symbols, producing B1 = 3 parity-check symbols
Table F.3: Rate 5/11 Mu-SCo construction for the point (B1, T1) = (3, 5) and (B2, T2) = (7, 9) lying in region (e). This point also illustrates case (B), defined by T1 > 2(B1 − k). For the causal part of parity-check symbols of C1 shifted back to time i − t, we write ←p_j[i] instead of ←p_j[i]|_{i−t} for simplicity. Each channel packet at time t (the original table lists the columns for t = i, . . . , i + 5) consists of the following rows:

(1) s0[t], s1[t], s2[t], s3[t], s4[t]
(2) p0[t]
(3) s0[t − 9] + p1[t],  s1[t − 9] + p2[t]
(4) s2[t − 9] + ←p1[t + 2],  s3[t − 9] + ←p2[t + 2],  s4[t − 9] + ←p1[t + 3] + ←p2[t + 4]
(p0[·], p1[·], p2[·]), defined as follows:

p0[i] = s0[i − 5] + s3[i − 2]
p1[i] = s1[i − 5] + s4[i − 2]
p2[i] = s2[i − 5] + s3[i − 4] + s4[i − 3].    (F.8)
• Then, the T1 = 5 parity-check packets of the code C2 = (9, 9), which are repetitions of the corresponding source symbols, are concatenated to the parity-checks of C1 with a partial overlap of B1 − k = 2 rows, as shown in Table F.3.

• Since T1 = 5 > 4 = 2(B1 − k), this point falls in case (B); one can write T1 − (B1 − k) = r(B1 − k) + q as 3 = 1(2) + 1, i.e., r = 1 and q = 1. Thus, r + 1 = 2 MS codes are to be constructed. The first is a repetition code of parameters C3,1 = (B1 − k, B1 − k) = (2, 2), applied to the last B1 − k = 2 rows of parity-check symbols of C1, (p1[·], p2[·]), producing B1 − k = 2 parity-check symbols, (p^3_0[·], p^3_1[·]), which are then shifted back by 2(B1 − k) = 4 packets. The second is a C3,2 = (q, B1 − k) = (1, 2) MS code, applied again diagonally to the last two rows of parity-check symbols of C1, producing one row of parity-check symbols, p^3_2[·], which is shifted back by T1 = 5 packets. The parity-check symbols of C3,1 and C3,2 (jointly denoted by C3) are then concatenated, forming T1 − (B1 − k) = 3 rows of parity-check symbols, and combined with the last three rows of parity-check symbols of C2 (layer (4)).
The same causality argument stated in the previous example applies, and the causal parts of the corresponding parity-check symbols shifted to any time instant t, denoted by ←p_j[·]|_t, are sent instead (cf. Table F.3).
Similar to the previous example, we divide each channel packet into four layers (cf. Figure 8.2):

• Layer (1) contains the first five rows, which are the source symbols.
• Layer (2) contains the next row.
• Layer (3) contains the next two rows, where the overlap between the parity-checks of codes C1 and C2 takes place.
• Layer (4) contains the last three rows, where the overlap between the parity-checks of codes C2 and C3 takes place.
Decoder
For user 1, the decoding is similar to the previous example. We assume a burst erasure of length B1 = 3 taking place at times [i − 3, i − 1]. One can recover the parity-checks of code C1 in the first three rows of parity-checks at times [i, i + 4], after subtracting the unerased combined source symbols s0[t], s1[t], s2[t] for t ∈ {i − 9, . . . , i − 5}. For user 2, we assume a burst erasure of length B2 = 7 in the interval [i − 7, i − 1]. The decoding steps are as follows.
• Step (1): Recover p_j[i + 2], p_j[i + 3] and p_j[i + 4] for j ∈ {1, 2}.
(a) In layer (3), spanning the second and third rows of parity-checks, one can see that the parity-
check symbols of C2 in the interval [i, i+1] are unerased source symbols. Thus, the overlapping
parity-check symbols of C1 can be computed in this interval.
(b) In the same layer but in the interval [i + 5,∞), the parity-check symbols of C1 are of indices
i + 5 and later. Using the fact that (B1, T1) MS code has a memory of T1 packets, it can be
easily shown that these parity-check symbols combine only source symbols of time i and later
which are not erased and thus can be computed as well (cf. (F.8)).
(c) In steps (a) and (b), we show that all the parity-check symbols of C1 in layer (3) can be computed except for the interval [i + 2, i + 4]. Let us mark the uncomputed parity-check symbols as erased symbols occupying two rows and three columns.
(d) Moreover, the parity-check symbols of C2 in layer (4) spanning the last three rows of parity-
check symbols in the interval [i, i + 1] are again unerased source symbols and thus can be
cancelled and the corresponding parity-check symbols of C3 can be computed in this interval.
(e) C3 is a concatenation of a C3,1 = (2, 2) repetition code producing two parity-check symbols (p^3_0[·], p^3_1[·]) and a C3,2 = (1, 2) MS code producing a single parity-check symbol p^3_2[·]. At times i and i + 1, the parity-checks of C3,1 satisfy

( ←p^3_0[i]|_i )   ( ←p1[i + 2]|_i )
( ←p^3_1[i]|_i ) = ( ←p2[i + 2]|_i ).

Thus, ←p1[i + 2]|_i and ←p2[i + 2]|_i can be directly recovered, while their corresponding non-causal parts can be computed before time i + 2. Similarly, ←p1[i + 3]|_i and ←p2[i + 3]|_i can be recovered at time i + 1, and their corresponding non-causal parts can be retrieved before i + 3. The remaining column, (←p1[i + 4]|_i, ←p2[i + 4]|_i), can be recovered using the parity-checks of the C3,2 = (1, 2) MS code at times i and i + 1, p^3_2[i] and p^3_2[i + 1], in a similar way to the previous example.
Appendix F. Multicast Streaming Codes (Mu-SCo) 170
• Step (2): After recovering these parity-check symbols of C1, the decoder can cancel their effect in the second and third rows of parity-checks (layer (3)) at times i + 2, i + 3 and i + 4.

• Step (3): Remove the interference in layer (4) starting at time i + 2. The parity-check symbols of C3 interfering in layer (4) starting at time i + 2 are of indices i + 4 and later, which are either recovered in Step (1) or can be calculated, as they combine unerased source symbols (cf. (F.8)).

According to Steps (2) and (3), the parity-checks of C2 in layers (3) and (4) are now free of any interference starting at time i + 2, and thus the decoder of user 2 can use the parity-checks in layers (3) and (4) to recover the erased source packets s[i − 7], . . . , s[i − 1].
F.5 Proof of Lemma 8.3
Using the first decoder with a (B1, T1) property, we can write the following relation,

H(s[i − T1] | x_{i−T1+B1}^{i}, x_0^{i−T1−1}) = 0,    (F.9)

where x_a^b denotes the collection (x[a], . . . , x[b]), and similarly for s_a^b. This follows from (A.8) by substituting j = i − T1 and r1 = 0. Also, using the (B2, B2) decoder, we can write,

H(s[i − B2] | x[i], x_0^{i−B2−1}) = 0,    (F.10)

which again follows from (A.8) but with j = i − B2 and r1 = 0. This can be used in the following steps:

H(x[i]) ≥ H(x[i] | x_0^{i−B2−1})
= H(s[i − B2], x[i] | x_0^{i−B2−1}) − H(s[i − B2] | x[i], x_0^{i−B2−1})
= H(s[i − B2], x[i] | x_0^{i−B2−1})
= H(s[i − B2]) + H(x[i] | s[i − B2], x_0^{i−B2−1}).    (F.11)

We use mathematical induction to prove (8.50). For the base case, (8.50) at m = 2B2 − B1 − 1 is already proved by the result in (8.49).

For the inductive step, we assume that (8.50) is true for m = j, i.e.,

Σ_{i=B2}^{j} H(x[i]) ≥ H(s_0^{j−B2}) + H(s_{B2−B1}^{j−T1}) + H(x_{B2}^{j} | s_0^{j−B2}, s_{B2−B1}^{j−T1}, x_0^{j−B2}).    (F.12)

We then add H(x[j + 1]) to both sides, and use (F.9) and (F.10) to recover the source packets s[j + 1 − B2] and s[j + 1 − T1] respectively, as follows:

Σ_{i=B2}^{j+1} H(x[i])
(a)≥ H(s_0^{j−B2}) + H(s_{B2−B1}^{j−T1}) + H(x_{B2}^{j} | s_0^{j−B2}, s_{B2−B1}^{j−T1}, x_0^{j−B2}) + H(s[j + 1 − B2]) + H(x[j + 1] | s[j + 1 − B2], x_0^{j+1−B2})
≥ H(s_0^{j+1−B2}) + H(s_{B2−B1}^{j−T1}) + H(x_{B2}^{j+1} | s_0^{j+1−B2}, s_{B2−B1}^{j−T1}, x_0^{j+1−B2})
= H(s_0^{j+1−B2}) + H(s_{B2−B1}^{j−T1}) + H(s[j + 1 − T1], x_{B2}^{j+1} | s_0^{j+1−B2}, s_{B2−B1}^{j−T1}, x_0^{j+1−B2}) − H(s[j + 1 − T1] | s_0^{j+1−B2}, s_{B2−B1}^{j−T1}, x_{B2}^{j+1}, x_0^{j+1−B2})
(b)= H(s_0^{j+1−B2}) + H(s_{B2−B1}^{j−T1}) + H(s[j + 1 − T1], x_{B2}^{j+1} | s_0^{j+1−B2}, s_{B2−B1}^{j−T1}, x_0^{j+1−B2})
= H(s_0^{j+1−B2}) + H(s_{B2−B1}^{j−T1}) + H(s[j + 1 − T1] | s_0^{j+1−B2}, s_{B2−B1}^{j−T1}, x_0^{j+1−B2}) + H(x_{B2}^{j+1} | s_0^{j+1−B2}, s_{B2−B1}^{j+1−T1}, x_0^{j+1−B2})
(c)≥ H(s_0^{j+1−B2}) + H(s_{B2−B1}^{j+1−T1}) + H(x_{B2}^{j+1} | s_0^{j+1−B2}, s_{B2−B1}^{j+1−T1}, x_0^{j+1−B2}).    (F.13)

Step (a) is the addition of (F.11) and (8.50). Step (b) uses the fact that j ≥ 2B2 − B1 − 1, and thus B2 − B1 ≤ j + 1 − B2, so that

H(s[j + 1 − T1] | s_0^{j+1−B2}, s_{B2−B1}^{j−T1}, x_{B2}^{j+1}, x_0^{j+1−B2}) = H(s[j + 1 − T1] | x_{B2}^{j+1}, x_0^{j−T1}) = 0,

which follows using (F.9). Step (c) uses the fact that the source packets are independent of each other, together with the fact that s[j + 1 − T1] ∉ s_0^{j+1−B2}, since B2 > T1 is satisfied throughout region (f). The result is in the form (8.50) for m = j + 1.
Bibliography
[1] A. Badr, A. Khisti, W. Tan, and J. Apostolopoulos, “Streaming codes for channels with burst and
isolated erasures,” in Proc. International Conference on Computer Communications (INFOCOM),
2013.
[2] A. Badr, A. Khisti, W. Tan, and J. Apostolopoulos, “Robust streaming erasure codes using MDS
constituent codes,” in Proc. Canadian Workshop on Information Theory (CWIT), pp. 158–163,
2013.
[3] A. Badr, A. Khisti, W. Tan, and J. Apostolopoulos, “Robust streaming erasure codes based on
deterministic channel approximations,” in Proc. International Symposium on Information Theory
(ISIT), (Istanbul, Turkey), 2013.
[4] P. Patil, A. Badr, and A. Khisti, “Streaming erasure codes under mismatched source-channel frame
rates,” in Proc. Canadian Workshop on Information Theory (CWIT), pp. 153–157, 2013.
[5] P. Patil, A. Badr, and A. Khisti, “Delay-optimal streaming codes under source-channel rate mis-
match,” in Proc. Asilomar Conference on Signals, Systems & Computers, 2013.
[6] A. Badr, P. Patil, A. Khisti, W. Tan, and J. Apostolopoulos, “Layered constructions for low-delay
streaming codes,” CoRR, vol. abs/1308.3827, 2013.
[7] A. Badr, A. Khisti, and E. Martinian, “Diversity Embedded Streaming erasure Codes (DE-SCo):
Constructions and optimality,” in Proc. Global Communications Conference (GLOBECOM), pp. 1–
5, 2010.
[8] A. Badr, A. Khisti, and E. Martinian, “Diversity Embedded Streaming erasure Codes (DE-SCo):
Constructions and optimality,” IEEE Journal on Selected Areas in Communications (JSAC), vol. 29,
pp. 1042–1054, May 2011.
[9] A. Badr, D. Lui, and A. Khisti, “Multicast Streaming Codes (Mu-SCo) for burst erasure channels,”
in Proc. Allerton Conference on Communication, Control, and Computing, 2010.
[10] A. Badr, D. Lui, and A. Khisti, “Streaming-codes for multicast over burst erasure channels,” CoRR,
vol. abs/1303.4370, 2013.
[11] A. P. Stephens and et al., “IEEE P802.11 wireless LANs: Usage models, IEEE 802.11-03/802r23,”
May 2004.
[12] T. Stockhammer and M. Hannuksela, “H.264/avc video for wireless transmission,” IEEE Wireless
Communications, vol. 12, pp. 6 – 13, aug. 2005.
172
Bibliography 173
[13] J. Bi, Q. Wu, and Z. Li, “Packet delay and packet loss in the internet,” in Proc. International
Symposium on Computers and Communications (ISCC), pp. 3–8, 2002.
[14] D. Loguinov and H. Radha, “Effects of channel delays on underflow events of compressed video over
the internet,” in Proc. International Conference on Image Processing, vol. 2, pp. II–205 – II–208,
2002.
[15] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, “Modeling tcp throughput: A simple model
and its empirical validation,” in Proc. ACM SIGCOMM conference on Applications, technologies,
architectures, and protocols for Computer Communication, pp. 303–314, 1998.
[16] P. Elias, “Coding for noisy channels,” IRE Convention Record, vol. 4, pp. 37 – 46, March 1955.
[17] J. K. Wolf and A. J. Viterbi, “On the weight distribution of linear block codes formed from convo-
lutional codes,” IEEE Transactions on Communications, vol. 44, pp. 1049–1051, Sep 1996.
[18] S. Lin and D. J. Costello, Error Control Coding: Fundamentals and Applications. Englewood Cliffs,
NJ: Prentice-Hall, 1983.
[19] R. M. Fano, “A heuristic discussion of probabilistic decoding,” IEEE Transactions on Information
Theory, vol. 9, pp. 64 – 73, April 2003.
[20] A. D. J. A. Konrad, B. Y. Zhao and R. Ludwig, “A markov-based channel model algorithm for
wireless networks,” Wireless Networks, vol. 9, no. 3, pp. 189–199, 2003.
[21] J. K. M. Yajnik and D. Towsley, “Packet loss correlation in the mbone multicast network,” in
Proc. Global Communications Conference (GLOBECOM), pp. 94–99, Nov 1996.
[22] M. Zorzi and R. R. Rao, “On the statistics of block errors in bursty channels,” IEEE Transactions
on Communications, vol. 45, pp. 660–667, Jun 1997.
[23] G. T. Nguyen, B. Noble, R. H. Katz, and S. Mahadev, “A trace-based approach for modeling
wireless channel behavior,” in Proc. Simulation Conference, pp. 597–604, 1996.
[24] E. N. Gilbert, “Capacity of a burst-noise channel,” Bell Systems Technical Journal, vol. 39, pp. 1253–
1265, September 1960.
[25] E. O. Elliott, “Estimates of error rates for codes on burst-noise channels,” Bell Systems Technical
Journal, vol. 42, pp. 1977–1997, September 1963.
[26] B. D. Fritchman, “A binary channel characterization using partitioned markov chains,” IEEE Trans-
actions on Information Theory, vol. 13, pp. 221–227, 1967.
[27] H. Bai and H. Aerospace, “Error modeling schemes for fading channels in wireless communications:
A survey,” IEEE Communications Surveys and Tutorials, vol. 5, pp. 2 – 9, 2003.
[28] J. K. M. Yajnik, S. B. Moon and D. Towsley, “Measurement and modeling of the temporal de-
pendence in packet loss,” in Proc. International Conference on Computer Communications (INFO-
COM), March 1999.
Bibliography 174
[29] S. F.-P. J. Bolot and D. Towsley, “Adaptative FEC-base error control for interactive audio in the
Internet,” in Proc. International Conference on Computer Communications (INFOCOM), March
1999.
[30] G. C. H. Sanneck and R. Koodli, “A framework model for packet loss metrics based on loss run-
length,” in SPIE/ACM SIGMM Multimedia Computing Network Conference, 2000.
[31] M. L. J. Y. Chouinard and G. Y. Delisle, “Estimation of Gilberts and Fritchmans models parameters
using the gradient method for digital mobile radio channels,” IEEE Transactions on Vehicular
Technology, vol. 37, pp. 158 – 166, August 1998.
[32] J. Justesen and L. Hughes, “On maximum-distance-separable convolutional codes,” IEEE Transac-
tions on Information Theory, vol. 20, no. 2, p. 288, 1974.
[33] E. M. Gabidulin, “Convolutional codes over large alphabets,” in Proc. International Workshop on
Algebraic Combinatorial and Coding Theory, (Varna, Bulgaria), pp. 80–84, 1988.
[34] H. Gluesing-Luerssen, J. Rosenthal, and R. Smarandache, “Strongly-MDS convolutional codes,”
IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 584–598, 2006.
[35] E. Martinian and C. W. Sundberg, “Burst erasure correction codes with low decoding delay,” IEEE
Transactions on Information Theory, vol. 50, no. 10, pp. 2494–2502, 2004.
[36] E. Martinian and M. Trott, “Delay-optimal burst erasure code construction,” in Proc. International
Symposium on Information Theory (ISIT), (Nice, France), July 2007.
[37] E. Martinian, Dynamic Information and Constraints in Source and Channel Coding. PhD thesis,
Massachusetts Institute of Technology (MIT), 2004.
[38] M. Kalman, E. Steinbach, and B. Girod, “Adaptive media playout for low-delay video streaming over error-prone channels,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp. 841–851, June 2004.
[39] H. S. Witsenhausen, “On the structure of real-time source coders,” Bell System Technical Journal, vol. 58, pp. 1437–1451, July–August 1979.
[40] D. Teneketzis, “On the structure of optimal real-time encoders and decoders in noisy communication,” IEEE Transactions on Information Theory, vol. 52, pp. 4017–4035, September 2006.
[41] T. Javidi and A. Goldsmith, “Dynamic joint source-channel coding with feedback,” in Proc. Inter-
national Symposium on Information Theory (ISIT), 2013.
[42] L. Schulman, “Coding for interactive communication,” IEEE Transactions on Information Theory,
vol. 42, no. 6, pp. 1745–1756, 1996.
[43] A. Sahai, Anytime Information Theory. PhD thesis, Massachusetts Institute of Technology (MIT),
2001.
[44] R. Sukhavasi and B. Hassibi, “Linear error correcting codes with anytime reliability,” in Proc. In-
ternational Symposium on Information Theory (ISIT), pp. 1748–1752, 2011.
[45] O. Tekin, T. Ho, H. Yao, and S. Jaggi, “On erasure correction coding for streaming,” in Information
Theory and Applications Workshop (ITA), pp. 221–226, 2012.
[46] D. Leong and T. Ho, “Erasure coding for real-time streaming,” in Proc. International Symposium
on Information Theory (ISIT), 2012.
[47] D. Leong, A. Qureshi, and T. Ho, “On coding for real-time streaming under packet erasures,” in
Proc. International Symposium on Information Theory (ISIT), 2013.
[48] R. Smarandache, H. Gluesing-Luerssen, and J. Rosenthal, “Strongly MDS convolutional codes, a new class of codes with maximal decoding capability,” in Proc. International Symposium on Information Theory (ISIT), p. 426, 2002.
[49] V. Tomas, J. Rosenthal, and R. Smarandache, “Decoding of convolutional codes over the erasure
channel,” IEEE Transactions on Information Theory, vol. 58, no. 1, pp. 90–108, 2012.
[50] A. R. Iyengar, M. Papaleo, G. Liva, P. H. Siegel, J. K. Wolf, and G. E. Corazza, “Protograph-based
LDPC convolutional codes for correlated erasure channels,” in Proc. International Conference on
Communications (ICC), May 2010.
[51] G. E. Corazza, A. R. Iyengar, M. Papaleo, P. H. Siegel, A. Vanelli-Coralli, and J. K. Wolf, “Latency
constrained protograph-based LDPC convolutional codes,” in International Symposium on Turbo
Codes and Iterative Information Processing (ISTC), September 2010.
[52] M. C. O. Bogino, P. Cataldi, M. Grangetto, E. Magli, and G. Olmo, “Sliding-window digital fountain
codes for streaming of multimedia contents,” in IEEE International Symposium on Circuits and
Systems (ISCAS), 2007.
[53] T. Tirronen and J. Virtamo, “Finding fountain codes for real-time data by fixed point method,” in
Proc. International Symposium on Information Theory (ISIT), 2008.
[54] N. Rahnavard, B. N. Vellambi, and F. Fekri, “Rateless codes with unequal error protection prop-
erty,” IEEE Transactions on Information Theory, vol. 53, no. 4, pp. 1521–1532, 2007.
[55] Y. Li and E. Soljanin, “Rateless codes for single-server streaming to diverse users,” in Proc. Allerton
Conference on Communication, Control, and Computing, 2009.
[56] Y. Li and E. Soljanin, “Rateless codes for single-server streaming to diverse users,” CoRR,
vol. abs/0912.5055, 2009.
[57] A. Sahai, “Why do block length and delay behave differently if feedback is present?,” IEEE Transactions on Information Theory, vol. 54, pp. 1860–1886, May 2008.
[58] J. K. Sundararajan, D. Shah, and M. Médard, “ARQ for network coding,” in Proc. International Symposium on Information Theory (ISIT), 2008.
[59] H. Yao, Y. Kochman, and G. W. Wornell, “A multi-burst transmission strategy for streaming over
blockage channels with long feedback delay,” IEEE Journal on Selected Areas in Communications
(JSAC), vol. 29, no. 10, pp. 2033–2043, 2011.
[60] H. Yao, Y. Kochman, and G. W. Wornell, “On delay in real-time streaming communication systems,” in Proc. Allerton Conference on Communication, Control, and Computing, 2010.
[61] H. Yao, Y. Kochman, and G. W. Wornell, “Delay-throughput tradeoff for streaming over blockage channels with delayed feedback,” in Proc. IEEE Military Communications Conference (MILCOM), 2010.
[62] Z. Li, A. Khisti, and B. Girod, “Forward error protection for low-delay packet video,” in Interna-
tional Packet Video Workshop, December 2010.
[63] D. Lui, A. Badr, and A. Khisti, “Streaming codes for a double-link burst erasure channel,” in
Proc. Canadian Workshop on Information Theory (CWIT), 2011.
[64] D. Lui, “Coding theorems for delay sensitive communication over burst-erasure channels,” Master’s
thesis, University of Toronto, Toronto, ON, August 2011.
[65] Z. Li, A. Khisti, and B. Girod, “Correcting erasure bursts with minimum decoding delay,” in
Proc. Asilomar Conference on Signals, Systems & Computers, 2011.
[66] S. S. Vyetrenko, Network coding for error correction. PhD thesis, California Institute of Technology,
2011.
[67] H. Deng, M. Kuijper, and J. Evans, “Burst erasure correction capabilities of (n, n-1) convolutional codes,” in Proc. International Conference on Communications (ICC), 2009.
[68] D. Wu, Y. T. Hou, W. Zhu, Y.-Q. Zhang, and J. M. Peha, “Streaming video over the Internet: approaches and directions,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, pp. 282–300, March 2001.
[69] G. Joshi, Y. Kochman, and G. Wornell, “On playback delay in streaming communications,” in
Proc. International Symposium on Information Theory (ISIT), 2012.
[70] S. Kokalj-Filipovic, P. Spasojevic, and E. Soljanin, “Doped Fountain coding for minimum delay
data collection in circular networks,” CoRR, vol. abs/1001.3765, 2010.
[71] R. C. Singleton, “Maximum distance q-nary codes,” IEEE Transactions on Information Theory, vol. 10, pp. 116–118, 1964.
[72] I. S. Reed and G. Solomon, “Polynomial codes over certain finite fields,” Journal of the Society for Industrial and Applied Mathematics, vol. 8, pp. 300–304, June 1960.
[73] S. B. Wicker and V. K. Bhargava, Reed-Solomon Codes and Their Applications. New York: IEEE
Press, 1994.
[74] R. Blahut, Algebraic Codes for Data Transmission. Cambridge, UK: Cambridge University Press,
2003.
[75] D. J. Costello, “A construction technique for random-error-correcting convolutional codes,” IEEE Transactions on Information Theory, vol. 15, pp. 631–639, September 1969.
[76] R. Johannesson and K. Zigangirov, Fundamentals of Convolutional Coding. Wiley-IEEE Press,
1999.
[77] A. J. Viterbi, “Error bounds for convolutional codes and an asymptotically optimum decoding algorithm,” IEEE Transactions on Information Theory, vol. 13, pp. 260–269, April 1967.
[78] R. D. Hutchinson, Generic Properties of Convolutional Codes. PhD thesis, University of Notre
Dame, 2006.
[79] R. G. Gallager, Information Theory and Reliable Communication. John Wiley and Sons, 1968.
[80] G. D. Forney, Jr., “Convolutional codes I: Algebraic structure,” IEEE Transactions on Information Theory, vol. 16, pp. 720–738, November 1970.
[81] M. H. Costa, “Writing on dirty paper,” IEEE Transactions on Information Theory, vol. 29, pp. 439–
441, May 1983.
[82] G. D. Forney, Jr., “Coset codes - Part II: Binary lattices and related codes,” IEEE Transactions on Information Theory, vol. 34, pp. 1152–1187, September 1988.
[83] D. J. Muder, “Minimal trellises for block codes,” IEEE Transactions on Information Theory, vol. 34,
pp. 1049–1053, September 1988.
[84] G. D. Forney and M. D. Trott, “The dynamics of group codes: state spaces, trellis diagrams, and
canonical encoders,” IEEE Transactions on Information Theory, vol. 39, pp. 1491–1513, September
1993.
[85] F. R. Kschischang and V. Sorokine, “On the trellis structure of block codes,” IEEE Transactions
on Information Theory, vol. 41, pp. 1924–1937, November 1995.
[86] R. J. McEliece, “On the BCJR trellis for linear block codes,” IEEE Transactions on Information
Theory, vol. 42, pp. 1072–1092, July 1996.
[87] J. K. M. Yajnik, S. B. Moon and D. Towsley, “Burst or random error correction based on Fire and
BCH codes,” in Information Theory and Applications Workshop (ITA), February 2014.
[88] S. M. Reddy and J. P. Robinson, “Random error and burst correction by iterated codes,” IEEE Transactions on Information Theory, vol. 18, pp. 182–185, January 1972.
[89] H. O. Burton and E. Weldon, “Cyclic product codes,” IEEE Transactions on Information Theory, vol. 11, pp. 433–439, July 1965.
[90] H. T. Hsu, T. Kasami, and R. T. Chien, “Error-correcting codes for a compound channel,” IEEE Transactions on Information Theory, vol. 14, pp. 135–139, January 1968.
[91] S. B. Wicker, Error Control Systems for Digital Communication and Storage. Prentice Hall, 1995.
[92] X. Zhang, Y. Xu, H. Hu, Y. Liu, Z. Guo, and Y. Wang, “Modeling and analysis of Skype video calls: Rate control and video quality,” IEEE Transactions on Multimedia, vol. 15, pp. 1446–1457, October 2013.
[93] X. Zhang, Y. Xu, H. Hu, Y. Liu, Z. Guo, and Y. Wang, “Profiling Skype video calls: Rate control
and video quality,” in Proc. International Conference on Computer Communications (INFOCOM),
pp. 621–629, March 2012.
[94] T. Javidi and R. N. Swamy, “Optimal code length for bursty sources with deadlines,” in Proc. In-
ternational Symposium on Information Theory (ISIT), pp. 2694–2698, June 2009.
[95] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley and Sons, 1991.