
Page 1: Channel Modeling

Channel Modeling

CMPT820 Multimedia Systems
Taher Dameh

School of Computing Science
Simon Fraser University

[email protected]
Nov 2nd, 2010

Page 2: Channel Modeling

Outline

Introduction

Basic Probability Theory and Information Theory Concepts
  • Discrete Random Variables
  • Entropy
  • Mutual Information
  • Channel Capacities

Discrete Memoryless Channels (DMC)
  • Binary Erasure Channel (BEC)
  • Packet Erasure Channel (PEC)
  • Cascaded BEC
  • BEC with Feedback

Channels with Memory
  • Gilbert model
  • Evaluating a desired loss/recovery probability
  • Packet correlation
  • Cascaded channels

Page 3: Channel Modeling

Introduction

Why model channels? Channel models play a crucial role in the design and development of multimedia applications, as well as in the resilience techniques used to protect content delivered over the Internet.

End-to-end multimedia application parameters influence the particular techniques used for recovering lost packets. The application designer may choose to adopt a strategy for recovering lost packets that is based on retransmission, Forward Error Correction (FEC), or both.

We will cover fundamental analysis tools that are used to characterize the loss performance of channels and networks that carry multimedia packets. Then (in the next talk) we will describe a comprehensive Internet video study conducted to gain insight into a variety of end-to-end performance parameters that are crucial for real-time multimedia applications.

Page 4: Channel Modeling

Discrete Random Variables

Probability Mass Functions:

Joint Probability Mass Functions

Marginal Probability Mass Functions:

Independent Random Variables:

Conditional Probability Mass Functions:

Example of a pmf for a discrete random variable X

Joint pmf for two random variables X and Y
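The formulas for these definitions did not survive the transcript. As a rough illustration only, here is a small NumPy sketch of the standard pmf relations; the joint pmf values, the variable ranges, and the variable names are made up for the example:

    import numpy as np

    # Hypothetical joint pmf p(x, y) for X in {0, 1} and Y in {0, 1, 2} (rows: x, columns: y).
    p_xy = np.array([[0.10, 0.20, 0.10],
                     [0.15, 0.25, 0.20]])
    assert np.isclose(p_xy.sum(), 1.0)

    # Marginal pmfs: p_X(x) = sum_y p(x, y) and p_Y(y) = sum_x p(x, y).
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)

    # Conditional pmf: p(y | x) = p(x, y) / p_X(x).
    p_y_given_x = p_xy / p_x[:, None]

    # X and Y are independent iff p(x, y) = p_X(x) * p_Y(y) for all x, y.
    independent = np.allclose(p_xy, np.outer(p_x, p_y))
    print(p_x, p_y, independent)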

Page 5: Channel Modeling

Entropy

Joint entropy

Conditional entropy (equivocation)

Entropy (uncertainty):

A measure of the amount of uncertainty (randomness) associated with the value of the discrete random variable X (the expected value of the information content of X).

0 ≤ H(X) ≤ log L,

where X has L possible values.
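The entropy formulas themselves are missing from the slide text. A minimal sketch, assuming the usual base-2 (bits) definitions and reusing the hypothetical joint pmf from the previous sketch:

    import numpy as np

    def entropy(p):
        """Shannon entropy -sum p*log2(p) in bits; zero-probability terms contribute 0."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    p_xy = np.array([[0.10, 0.20, 0.10],
                     [0.15, 0.25, 0.20]])   # made-up joint pmf
    p_x = p_xy.sum(axis=1)

    H_X  = entropy(p_x)                # marginal entropy H(X)
    H_XY = entropy(p_xy.ravel())       # joint entropy H(X, Y)
    H_Y_given_X = H_XY - H_X           # conditional entropy (equivocation) H(Y | X)

    # 0 <= H(X) <= log2(L), where L is the number of possible values of X.
    assert 0 <= H_X <= np.log2(len(p_x)) + 1e-12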

Page 6: Channel Modeling

Mutual Information

Individual (H(X),H(Y)), joint (H(X,Y)), and conditional entropies for a pair of correlated subsystems X,Y with mutual information I(X; Y).

Page 7: Channel Modeling

Mutual Information

Mutual information measures the information that X and Y share: it measures how much knowing one of these variables reduces our uncertainty about the other. For example:

if X and Y are independent, then knowing X does not give any information about Y and vice versa, so their mutual information is zero.

if X and Y are identical then all information conveyed by X is shared with Y: knowing X determines the value of Y and vice versa. As a result, in the case of identity the mutual information is the same as the uncertainty contained in Y (or X) alone, namely the entropy of Y (or X: clearly if X and Y are identical they have equal entropy).

The mutual information of X relative to Y is given by:
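The formula itself is missing from the transcript; the standard definition is I(X;Y) = Σ p(x,y)·log[p(x,y) / (p(x)·p(y))], summed over all (x, y), which equals H(X) + H(Y) − H(X,Y). A small self-contained sketch (the pmf values are again made up):

    import numpy as np

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    p_xy = np.array([[0.10, 0.20, 0.10],
                     [0.15, 0.25, 0.20]])   # hypothetical joint pmf
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

    # I(X;Y) = H(X) + H(Y) - H(X, Y)
    I_xy = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())
    print(f"I(X;Y) = {I_xy:.4f} bits")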

Page 8: Channel Modeling

Channel Capacity

It is the maximum amount of error-free digital data (that is, information) that can be transmitted over a communication link with a specified bandwidth in the presence of noise interference (the Shannon limit or Shannon capacity, i.e., the theoretical maximum information transfer rate of the channel).

The capacity is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.

Information theory was developed by Claude E. Shannon during World War II.

Capacity is measured in bits or packets per channel use.

Page 9: Channel Modeling

Discrete Memoryless Channels DMC

The channel is described by a conditional probability p(y|x) that expresses the probability of observing the output symbol y given that we send the symbol x. The channel is said to be memoryless if the probability distribution of the output depends only on the input at that time and is conditionally independent of previous channel inputs or outputs.

Communication system

Page 10: Channel Modeling

Binary Erasure Channel (BEC) *

The input X: binary (Bernoulli) random variable (0 or 1)

The loss (erasure or deletion) probability of the channel

The output Y: ternary random variable (0, 1, or erasure)

* introduced by Peter Elias of MIT in 1954 as a toy example.

Page 11: Channel Modeling

BEC

No errors occur over a BEC, since Pr[Y=0 | X=1] = Pr[Y=1 | X=0] = 0.

This capacity, which is measured in “bits” per “channel use,” can be achieved when the channel input X is a uniform random variable with Pr[X = 0] = Pr[X =1] = 1/2.
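As a numerical sanity check of this statement, the sketch below brute-forces the capacity definition C = max over the input distribution of I(X;Y) for a BEC. The erasure probability 0.2 is a made-up value, and the identity I(X;Y) = (1 − ε)·H(X) used here is the standard one for the BEC:

    import numpy as np

    def bec_mutual_information(p0, eps):
        """I(X;Y) in bits for a BEC with erasure probability eps and input Pr[X=0] = p0."""
        def h2(p):                        # binary entropy function
            return 0.0 if p in (0.0, 1.0) else -p*np.log2(p) - (1-p)*np.log2(1-p)
        return (1 - eps) * h2(p0)         # standard BEC identity: I(X;Y) = (1 - eps) * H(X)

    eps = 0.2                             # made-up erasure probability
    best = max(bec_mutual_information(p0, eps) for p0 in np.linspace(0, 1, 1001))
    print(best, 1 - eps)                  # maximum is at Pr[X=0] = 1/2 and equals 1 - eps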

Page 12: Channel Modeling

Packet Erasure Channel (PEC)

Multimedia content is usually packetized and transmitted over an Internet link as "integrated vectors" of bits rather than as individual bits (the bits in the same packet are fully dependent on each other: either all are transmitted successfully or all are erased).

The input is a vector of random variables, where each element of the vector is a binary random variable.

The capacity in this case is measured in “packets” per “channel use.”

Page 13: Channel Modeling

Cascaded BEC Channels

Let's assume that we have two BEC channels in cascade (two Internet links over which multimedia packets are routed):

The loss probabilities of the two channels could be different:

Assuming the two channels are independent,

For L cascaded links of independent BEC channels:

The overall channel capacity:

Page 14: Channel Modeling

The end-to-end path of L BEC channels is equivalent to a single BEC channel with an effective end-to-end loss probability:

The overall capacity C of the end-to-end route is bounded by the capacity Cmin of the link with the smallest capacity among all cascaded channels in the route. In other words, C ≤ Cmin = min_i(Ci).

Cmin should not be confused with the "bottleneck" bandwidth Bmin: the end-to-end throughput is Rtotal = B·C, and Rtotal ≤ min_i(Ri) = min_i(Bi·Ci).
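A minimal Python sketch of this cascade calculation under the independence assumption; the per-link erasure probabilities below are made-up values:

    import numpy as np

    # Hypothetical per-link erasure probabilities for an L-link route.
    eps = [0.05, 0.10, 0.02]

    # A packet survives the route only if it survives every independent link,
    # so the effective end-to-end erasure probability is:
    eps_eff = 1 - np.prod([1 - e for e in eps])

    # Capacity of the equivalent end-to-end BEC, and the weakest-link bound.
    C = 1 - eps_eff                        # equals the product of (1 - eps_i)
    C_min = min(1 - e for e in eps)
    assert C <= C_min + 1e-12
    print(eps_eff, C, C_min)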

Page 15: Channel Modeling

The BEC Channel with Feedback

It is quite common for many Internet applications, including multimedia ones, to support some form of feedback from the receiver to the transmitter. This could include feedback regarding requests for retransmissions of lost packets.

What happens to the overall performance bounds of such channels with feedback?

Information theory provides an interesting and arguably surprising answer: for any discrete memoryless channel (DMC), feedback does not improve (or, for that matter, worsen) the capacity:

Discrete memoryless channel with feedback

* Proof: Thomas M. Cover and Joy A. Thomas, Elements of Information Theory, Second Edition, p. 216.


Page 16: Channel Modeling

Cascaded BEC/PEC with Feedback

For cascaded BEC/PEC channels with feedback on an end-to-end basis:

If the feedback is done on a link-by-link basis, then the overall channel capacity of a cascaded set of links is bounded by the capacity of the "bottleneck" link:

The aforementioned results for channel capacity with feedback are applicable to memoryless channels. Meanwhile, it has been well established that channels with memory could increase their capacity by employing feedback. The performance aspects of channels with memory are addressed next.

Page 17: Channel Modeling

Packet Losses over Channels with Memory

Packet losses over Internet links and routes exhibit a high level of correlation and tend to occur in bursts. This observation, which has been well established by several Internet packet-loss studies, indicates that Internet links and routes exhibit memory.

The most popular analysis and modeling tool used to capture memory is based on Markov chains.

A special case of Markov channels is the Gilbert–Elliott channel, which consists of a two-state Markov chain:

• Good state (G): the receiver receives all packets correctly
• Bad state (B): all packets are lost

Page 18: Channel Modeling

Gilbert–Elliott channel

G and B: represent the Good state and the Bad state, respectively

• pGG = 1 - pGB and pBB = 1 - pBG

• The steady-state probabilities:
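The steady-state expressions themselves are missing from the transcript; for a two-state Markov chain with transition probabilities pGB and pBG, the standard result is:

    πG = pBG / (pGB + pBG),    πB = pGB / (pGB + pBG)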

Page 19: Channel Modeling

Evaluating a Desired Loss/Recovery Probability

We want to evaluate the probability of receiving i packets among n packets transmitted over a Gilbert erasure channel.

Construct another Markov process by extending the two-state Gilbert model, using the number of correctly received packets as the index for the states of the extended Markov process.

The probability that the sender transmits n packets and the receiver correctly receives i packets is the probability that the process starts at G0 or B0 and is in state Gi or Bi after n stages.
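Before the closed-form expressions on the next slides, φ(n, i) can also be computed numerically by propagating this extended chain directly. A minimal Python sketch (the helper name gilbert_phi and the transition probabilities are mine; the sketch assumes a packet sent while the chain is in the Good state is received, and one transition per transmitted packet):

    import numpy as np

    def gilbert_phi(n, p_GB, p_BG):
        """Return phi[i] = Pr[i of n packets are received] over a Gilbert erasure channel."""
        p_GG, p_BB = 1 - p_GB, 1 - p_BG
        pi_G = p_BG / (p_GB + p_BG)            # steady-state probability of the Good state
        # dp[(s, i)] = Pr[channel is in state s and i packets have been received so far]
        dp = {('G', 0): pi_G, ('B', 0): 1 - pi_G}
        for _ in range(n):
            nxt = {}
            for (s, i), pr in dp.items():
                j = i + 1 if s == 'G' else i   # received iff sent while in the Good state
                for s2, pt in (('G', p_GG if s == 'G' else p_BG),
                               ('B', p_GB if s == 'G' else p_BB)):
                    nxt[(s2, j)] = nxt.get((s2, j), 0.0) + pr * pt
            dp = nxt
        phi = np.zeros(n + 1)
        for (_, i), pr in dp.items():
            phi[i] += pr
        return phi

    phi = gilbert_phi(30, p_GB=0.01, p_BG=0.5)  # made-up transition probabilities
    print(phi.sum())                             # sanity check: probabilities sum to 1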

Page 20: Channel Modeling

The generating function [5] ΦG0,Gi(z) of φG0,Gi(n):

Taking the inverse Z-transform, we have:

The details of the derivation of this equation are outlined in Appendix A of [3].

Page 21: Channel Modeling

Similarly, it can be shown that:

Page 22: Channel Modeling

• If we use φ(n, i) to represent the probability that the sender transmits n packets and the receiver correctly receives i packets, then:

• For FEC codes, we are often concerned with the probability that a node can receive enough packets to decode an FEC block. For an (n, k) FEC code, this probability (the decodable probability) is:
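The expression itself did not survive the transcript; assuming an (n, k) erasure code that can be decoded from any k of the n packets (e.g., a Reed–Solomon style code), the decodable probability is the tail sum:

    P_decodable(n, k) = φ(n, k) + φ(n, k+1) + ... + φ(n, n)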

Page 23: Channel Modeling

Packet Correlation over Channels with Memory

More useful insight and analysis can be gained by considering another parameter pair: the average loss rate p and the packet correlation ρ, where:

The steady-state probabilities are directly related to the average loss rate:

The packet erasure correlation ρ provides an average measure of how the states of two consecutive packets are correlated to each other:

When ρ = 0, the loss process is memoryless (BEC); as the value of ρ increases, the states of two consecutive packets become more and more correlated.
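The defining equations for p and ρ are missing from the transcript. A common Gilbert-model parametrization that matches this description (an assumption of this sketch: steady-state loss equal to p and correlation ρ between consecutive states) is:

    def gilbert_from_loss_and_correlation(p, rho):
        """Map (average loss rate p, packet correlation rho) to Gilbert transition probabilities.

        Assumed parametrization: p_GB = (1 - rho) * p and p_BG = (1 - rho) * (1 - p), which
        gives steady-state loss p_GB / (p_GB + p_BG) = p and correlation 1 - p_GB - p_BG = rho;
        rho = 0 reduces to a memoryless (BEC-like) loss process.
        """
        return (1 - rho) * p, (1 - rho) * (1 - p)

    p_GB, p_BG = gilbert_from_loss_and_correlation(p=0.1, rho=0.5)
    print(p_GB, p_BG)   # 0.05 0.45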

Page 24: Channel Modeling

Probability of receiving i packets given that the transmitter sends n = 30 packets, for loss probability p = 0.1;

Page 25: Channel Modeling

A detailed view of the probability of receiving i packets given that the transmitter sends n packets. When the correlation is strong, once the process starts in the bad or the good state, it has the inertia to stay in that state.

Page 26: Channel Modeling

The average decodable probability of a receiver when the sender sends FEC blocks through a Gilbert channel. The block size n is set to 30, the average loss rate of the channel is set to 1%, k is varied from 20 to 30, and the packet correlation ρ is varied from 0 to 0.9.

Page 27: Channel Modeling

Packet Losses over Cascaded Channels with Memory

Internet routes and paths consist of multiple links that can be modeled as cascaded channels. Further, each of these links/channels usually exhibits memory.

A representation of the reception of i packets when transmitting n packets over two-cascaded channels with memory.

Page 28: Channel Modeling

The desired probability φ(n, i) of receiving i packets at the output of these two cascaded channels, when the transmitter sends n packets into the input of the first channel, is:

In other words:
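The equation itself did not survive the transcript. Assuming the two channels are independent and writing φ1 and φ2 for their individual reception probabilities, the natural composition is that whatever the first channel delivers becomes the input of the second:

    φ(n, i) = φ1(n, i)·φ2(i, i) + φ1(n, i+1)·φ2(i+1, i) + ... + φ1(n, n)·φ2(n, i)

that is, the sum over j = i, ..., n of φ1(n, j)·φ2(j, i).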

Page 29: Channel Modeling

Hence if the cascaded channels with memory are Gilbert channels, then

Page 30: Channel Modeling

Now for N channels with memory:

A more compact representation is:

Page 31: Channel Modeling

References

[1] M. van der Schaar and P. A. Chou (editors), Multimedia over IP and Wireless Networks: Compression, Networking, and Systems, Elsevier, 2007.
[2] Thomas M. Cover and Joy A. Thomas, Elements of Information Theory, Second Edition, Wiley, 2006.
[3] M. Wu and H. Radha, "Network-Embedded FEC (NEF) Performance for Multi-Hop Wireless Channels with Memory," IEEE International Conference on Communications (ICC), May 2005.
[4] M. Wu and H. Radha, "Network Embedded FEC for Overlay and P2P Multicast over Channels with Memory," Conference on Information Sciences and Systems (CISS), Johns Hopkins University, March 2005.
[5] R. A. Howard, Dynamic Probabilistic Systems. New York: Wiley, 1971.