
The State-Dependent Broadcast Channel with Cooperation

Lior Dikstein, Ben Gurion University of the Negev, [email protected]
Haim H. Permuter, Ben Gurion University of the Negev, [email protected]
Yossef Steinberg, Technion, [email protected]

Abstract—In this paper, we investigate problems of communication over physically degraded, state-dependent broadcast channels (BC) with cooperating decoders. Three different setups are considered and their capacity regions are characterized. First, we study a setting where noncausal state information is available to the encoder and the strong decoder. Furthermore, the strong decoder can use a finite-capacity link to send the weak decoder information regarding the messages or the channel state. Second, we examine a setting where the encoder and the strong decoder both have access to noncausal state information, while the weak decoder has access to rate-limited state information. This scenario can be interpreted as a special case, where the strong decoder can only cooperate to send the weak decoder rate-limited information about the state sequence. A third case we consider is a cooperative setting where state information is available only to the encoder in a causal manner. Finally, we discuss the optimality of using rate-splitting when coding for cooperative BC. In particular, we prove that rate-splitting is not necessarily optimal when coding for cooperative BC and solve an example where a multiple-binning coding scheme outperforms rate-splitting.

Index Terms—Binning, broadcast channels, causal coding, channel capacity, cooperative broadcast, degraded broadcast channel, noncausal coding, partial side information, side information, state-dependent channels.

I. INTRODUCTION

Classical broadcast channels adequately model a variety of practical communication scenarios such as cellular systems, wireless Wi-Fi routers and digital TV broadcasting. However, the rapid growth of wireless networking motivates us to expand the study of such channels and to consider more complex settings which can more accurately describe a wider range of scenarios. Some of these important extensions include settings where the broadcast channel is state-dependent. Wireless channels with fading, jamming or interference in the transmission are just a few examples that can be modeled by the state-dependent settings. Other important extensions include cooperation between different nodes in the network. Here, the nodes are able to assist one another in decoding. These settings can, inter alia, describe sensor networks, in which a large number of nodes conduct measurements on some ongoing process and wish to help each other to successfully decode the messages. Some practical sensor network applications include surveillance, health monitoring and environmental detection. Therefore, the results presented in this work help to meet the growing need to find the fundamental limits of such important communication scenarios.

(This work was supported by the Israel Science Foundation, grant no. 684/11.)

State-dependent channels were first introduced by Shannon [1], who characterized the capacity for the case where a single-user channel, P_{Y|X,S}, is controlled by a random parameter S, and the state at time i, s_i, is known causally at the encoder. The case where the full realization of the channel states, s^n, is known noncausally at the encoder was studied by Gel'fand and Pinsker in [2], where they derived the capacity of the channel. Many studies regarding state-dependent multi-user channels have been performed in recent years. Some examples considering multiple access channels (MAC) include the works of Lapidoth and Steinberg [3], [4], Piantanida, Zaidi and Shamai [5], Somekh-Baruch, Shamai and Verdu [6] and many more. However, while state-dependent MACs have been widely studied and capacity regions, upper bounds and achievable rates have been derived for many cases, much less is known regarding broadcast channels. Steinberg studied the degraded, state-dependent BC in [7]. Inner and outer bounds were derived for the case where the state information is known noncausally at the encoder, and the capacity region was found for the case where the states are known causally at the encoder, or known noncausally at the encoder as well as at the strong decoder. Our channel setting with cooperation naturally extends this model, and the capacity results of this paper generalize the capacity regions found in [7].

Other important settings regarding state-dependent channels are cases where only rate-limited state information is available at the encoder or decoder. In many practical systems, information on the channel is not available freely. Thus, in order to provide side information on the channel state to the different users, we must allocate resources such as time slots, bandwidth and memory. Heegard and El Gamal [8] presented a model of a state-dependent channel, where the transmitter is informed of the state information at a rate limited to R_e and the receiver is informed of the state information at a rate limited to R_d. Cover and Chiang [9] extended the Gel'fand-Pinsker problem to the case where both the encoder and the decoder are provided with different, but correlated, partial side information. Steinberg in [10] derived the capacity for a channel where the state information is known fully to the encoder in a noncausal manner and is available rate-limited at the decoder. Coding for such a channel involves solving a Gel'fand-Pinsker problem as well as a Wyner-Ziv problem [11] simultaneously. An extension of this setting to a degraded, state-dependent BC is introduced in this work.

Cooperation between different users in multi-user channels was first considered for MACs in the work of Willems [12]. Further studies involving cooperation between encoders in MACs include papers such as [13], [14] and [15]. A setting where cooperation between users takes place in a state-dependent MAC, where partial state information is known to each user and full state information is known at the decoder, was treated by Permuter, Shamai and Somekh-Baruch in [16].

The notion of cooperation between receivers in a BC was proposed by Dabora and Servetto [17]. In their paper, they characterize the capacity region for the cooperative degraded BC. The direct part combines methods of broadcast coding and code constructions for the degraded relay channel. The degraded, state-dependent BC setting we present in this work generalizes Dabora and Servetto's model. Hence, our capacity result generalizes the result of [17]. However, we propose a different coding scheme, which uses triple-binning and superposition coding techniques. We show that this coding scheme can also be applied to achieve the capacity of the cooperative degraded BC setting presented in [17].

In this work, we consider several scenarios of cooperation between decoders for physically degraded, state-dependent (PDSD) BCs. First, we consider a setting where there is no limit on when the cooperation link between the decoders, C_12, should be used. For this setting we characterize the capacity region for the noncausal case, where noncausal state information is known at the encoder and the strong decoder, as well as for the causal case, where causal state information is available at the encoder. Here, it turns out that the optimal schemes use the cooperation link between the decoders, C_12, solely for sending additional information about the messages; i.e., information about the state sequence is not sent explicitly via C_12. The second setting we consider is a case where the cooperation link C_12 can be used only before the outputs are given. In such a case, the strong decoder can only use the cooperation link to convey rate-limited state information to the weaker user. This can be regarded as a broadcast extension of the results in [10]. The capacity region for this case is derived using the methods we developed when solving the first scenario, combined with Wyner-Ziv coding.

Another interesting result concerns the use of rate-splitting techniques when coding for cooperative BC. In MACs, rate-splitting is the preferred coding method and it has been used to achieve the capacity of most of the settings that have been solved. Hence, a first guess would be to use such methods when coding for cooperative BC. However, we prove that rate-splitting schemes are not necessarily optimal for this class of channels. Moreover, we show that other techniques, such as binning, strictly outperform rate-splitting. An example for which rate-splitting is demonstrated to be suboptimal is solved for the binary symmetric BC with cooperation.

The remainder of the paper is organized as follows. Section II presents the mathematical notation used in this paper. In Section III, all the main results are stated, which include the capacity regions of three PDSD BC settings. Section III-A is devoted to the noncausal PDSD BC and the discussion of special cases, Section III-B is dedicated to the causal PDSD BC and Section III-C is dedicated to the PDSD BC with rate-limited state information. In Section IV, we discuss the optimality of using rate-splitting methods when dealing with cooperative BC. Finally, in Section IV-A, we give an example of a cooperative BC where rate-splitting is suboptimal.

II. NOTATION AND PROBLEM DEFINITION

[Figure 1 shows the model: the encoder observes M_Z, M_Y and the state (switch A closed: noncausal S^n; switch B closed: causal S_i) and produces X^n; the channel P_{Y,Z|X,S} = P_{Y|X,S}P_{Z|Y} delivers Y^n to Decoder Y and Z^n to Decoder Z, and a cooperation link of capacity C_12 runs from Decoder Y to Decoder Z; the decoders output M̂_Y and M̂_Z.]

Fig. 1. The physically degraded, state-dependent broadcast channel with cooperating decoders.

In this paper, random variables are denoted by upper case letters, deterministic realizations or specific values are denoted by lower case letters and calligraphic letters denote the alphabets of the random variables. Vectors of n elements, i.e., (x_1, x_2, ..., x_n), are denoted as x^n, and x_i^j denotes the (j−i+1)-tuple (x_i, x_{i+1}, ..., x_j) when j ≥ i and an empty set otherwise. The probability distribution function of X, the joint distribution function of X and Y and the conditional distribution of X given Y will be denoted by P_X, P_{X,Y} and P_{X|Y}, respectively.

A PDSD BC, (S, P_S(s), X, Y, Z, P_{Y,Z|X,S}(y,z|x,s)), illustrated in Fig. 1, is a channel with input alphabet X, output alphabet Y × Z and a state space S. In addition, the channel probability function can be decomposed as P_{Y,Z|X,S} = P_{Y|X,S}P_{Z|Y}, i.e., Z − Y − (X,S) form a Markov chain.
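To make this degradedness structure concrete, the following minimal sketch (an illustration added to this version, not part of the original paper; the binary alphabets, the kernels and all numerical values are assumptions chosen for the example) composes a toy component channel P_{Y|X,S} with a second component P_{Z|Y} and checks that the resulting P_{Y,Z|X,S} indeed factorizes as P_{Y|X,S}P_{Z|Y}, i.e., that Z − Y − (X,S) is a Markov chain.

```python
import numpy as np

# Binary alphabets for X, S, Y, Z (assumed toy example).
# p_y_xs[x, s, y] = P(Y=y | X=x, S=s): a BSC whose crossover depends on the state S.
p_y_xs = np.zeros((2, 2, 2))
for x in range(2):
    for s in range(2):
        eps = 0.1 if s == 0 else 0.3          # assumed state-dependent noise level
        p_y_xs[x, s, :] = [1 - eps, eps] if x == 0 else [eps, 1 - eps]

# p_z_y[y, z] = P(Z=z | Y=y): a further BSC(0.2), making Z a degraded version of Y.
q = 0.2
p_z_y = np.array([[1 - q, q], [q, 1 - q]])

# Compose the physically degraded broadcast channel P(Y,Z | X,S) = P(Y|X,S) P(Z|Y).
p_yz_xs = np.einsum('xsy,yz->xsyz', p_y_xs, p_z_y)

# Sanity checks: rows sum to one, and Z depends on (X,S) only through Y.
assert np.allclose(p_yz_xs.sum(axis=(2, 3)), 1.0)
p_z_given_xsy = p_yz_xs / p_yz_xs.sum(axis=3, keepdims=True)
assert np.allclose(p_z_given_xsy, p_z_y[None, None, :, :])   # Markov chain Z - Y - (X,S)
print("P(Y,Z | X=0, S=1):\n", p_yz_xs[0, 1])
```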

Definition 1:
1) A C_12 conference between Decoder Y and Decoder Z is defined by a conference message set M_12 = {1, 2, ..., 2^{nR_12}} and a mapping function

h_12 : Y^n × S^n → M_12,   (1)

which maps the received output Y^n and the channel state, S^n, known at the decoder, to a message M_12 ∈ {1, 2, ..., 2^{nR_12}}, which is transmitted to Decoder Z.
2) A C_12-admissible conference is a conference that satisfies R_12 ≤ C_12.

Definition 2: A ((2^{nR_Z}, 2^{nR_Y}), 2^{nC_12}, n) code for the PDSD broadcast channel consists of two sets of integers, M_Z = {1, 2, ..., 2^{nR_Z}} and M_Y = {1, 2, ..., 2^{nR_Y}}, called message sets. An index is chosen uniformly and independently out of each message set. The encoder selects a channel input sequence, X^n = X^n(M_Z, M_Y, S^n). The outputs of the channel at Decoder Y and Decoder Z are denoted Y^n and Z^n, respectively. The channel is characterized by the conditional probability p(y_i, z_i|x_i, s_i) and is assumed to be memoryless, so the probabilities do not depend on the time index i, i.e.,

p(y_i, z_i, s_{i+1}|x_i, s_i) = p(y_i|x_i, s_i) p(z_i|y_i) p(s_{i+1}).   (2)

The code for the noncausal case (where switch A is closed) is defined by the encoding function

f : M_Z × M_Y × S^n → X^n,   (3)

a C_12-admissible conference

h_12 : Y^n × S^n → M_12,   (4)

and two decoding functions

g_y : Y^n × S^n → M_Y,   (5)
g_z : Z^n × M_12 → M_Z.   (6)

For the causal case (where switch A is open and switch B is closed), the encoding function f is replaced by

f : M_Z × M_Y × S^i → X^n,   (7)

and Decoder Y's decoding function is replaced by

g_y : Y^n → M_Y.   (8)

In Definition 2 there is no restriction on when the link C_12 can be used. However, if we restrict the link to be used before the sequence Y^n is given to the strong decoder, then only information on the state sequence, S^n, can be sent over it. This is the subject of the next definition.

Definition 3: A ((2^{nR_Z}, 2^{nR_Y}), 2^{nR_d}, n) code for the PDSD broadcast channel with rate-limited side information at the weak decoder, illustrated in Fig. 2, consists of three sets of integers: M_Z = {1, 2, ..., 2^{nR_Z}} and M_Y = {1, 2, ..., 2^{nR_Y}}, called message sets, and an index set T = {1, 2, ..., 2^{nR_d}}. The code is defined by two encoding functions, a channel encoding function

f : M_Z × M_Y × S^n → X^n,   (9)

and a state encoding function

f_s : S^n → T,   (10)

and two decoding functions

g_y : Y^n × S^n → M_Y,   (11)
g_z : Z^n × T → M_Z.   (12)

Definition 4: We define the average probability of error for a ((2^{nR_Z}, 2^{nR_Y}), 2^{nC_12}, n) code as follows:

P_e^{(n)} = Pr( (M̂_Y ≠ M_Y) ∪ (M̂_Z ≠ M_Z) ).   (13)

The average probability of error at each receiver is defined as

P_{e,y}^{(n)} = Pr(M̂_Y ≠ M_Y),   (14)
P_{e,z}^{(n)} = Pr(M̂_Z ≠ M_Z).   (15)

As is commonly held when discussing broadcast channels, the average probability of error P_e^{(n)} goes to zero as n approaches infinity if and only if P_{e,y}^{(n)} and P_{e,z}^{(n)} both go to zero as n approaches infinity.
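(This equivalence follows from the elementary bounds

max{ P_{e,y}^{(n)}, P_{e,z}^{(n)} } ≤ P_e^{(n)} ≤ P_{e,y}^{(n)} + P_{e,z}^{(n)},

where the left inequality holds because each individual error event is contained in the union in (13), and the right inequality is the union bound; we spell this out here only for completeness.)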

Definition 5: A rate pair (R_Z, R_Y) is achievable if there exists a sequence of ((2^{nR_Z}, 2^{nR_Y}), 2^{nC_12}, n) codes such that P_e^{(n)} → 0 as n → ∞.

Definition 6: The capacity region is the closure of the set of all achievable rates.

[Figure 2 shows the model: the encoder observes M_Z, M_Y and S^n and produces X^n; the channel P_{Y,Z|X,S} = P_{Y|X,S}P_{Z|Y} delivers Y^n to Decoder Y (which also knows S^n) and Z^n to Decoder Z; a separate State Encoder observes S^n and sends Decoder Z a description at rate ≤ R_d; the decoders output M̂_Y and M̂_Z.]

Fig. 2. The physically degraded, state-dependent broadcast channel with full state information at the encoder and one decoder, together with rate-limited state information at the other decoder. This model describes the case where the cooperation between the decoders is confined to take place prior to the decoding of the messages. Therefore, the only information that the strong decoder, Decoder Y, can send to the weaker decoder, Decoder Z, is about the state sequence. However, the state is only partially available at Decoder Z, since it is sent rate-limited due to the limited capacity of the link between the decoders.

III. MAIN RESULTS AND INSIGHTS

A. Capacity Region of the PDSD Broadcast Channel with Cooperating Decoders

We begin by stating the capacity region for the PDSD BC illustrated in Fig. 1 (switch A is closed) in the following theorem.

Theorem 1: The capacity region of the PDSD BC, (X,S) − Y − Z, with noncausal state information known at the encoder and at Decoder Y and with cooperating decoders is the closure of the set that contains all the rates (R_Z, R_Y) that satisfy

R_Z ≤ I(U;Z) − I(U;S) + C_12,   (16a)
R_Y ≤ I(X;Y|U,S),   (16b)
R_Z + R_Y ≤ I(X;Y|S),   (16c)

for some joint probability distribution of the form

P_{S,U,X,Z,Y} = P_S P_{U|S} P_{X|S,U} P_{Y,Z|X,S}.   (17)
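One way to read the region is to evaluate it numerically for a particular choice of auxiliaries. The sketch below (an illustration added to this version, not part of the paper; all alphabets, distributions and the value of C_12 are assumptions) builds the joint pmf of (S,U,X,Y,Z) according to (17) and computes the three information quantities appearing in (16).

```python
import numpy as np

def H(p):
    """Entropy in bits of a probability vector (zeros ignored)."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def I(p_ab):
    """Mutual information I(A;B) from a joint pmf given as a 2-D array."""
    return H(p_ab.sum(1)) + H(p_ab.sum(0)) - H(p_ab.ravel())

def I_cond(p_abc):
    """Conditional mutual information I(A;B|C), joint pmf as a 3-D array (A,B,C)."""
    p_c = p_abc.sum(axis=(0, 1))
    return sum(p_c[c] * I(p_abc[:, :, c] / p_c[c]) for c in range(len(p_c)) if p_c[c] > 0)

# --- Assumed toy distributions (binary alphabets throughout) ---
p_s = np.array([0.5, 0.5])                       # P_S
p_u_s = np.array([[0.8, 0.2], [0.3, 0.7]])       # P_{U|S}[s, u]
p_x_us = np.zeros((2, 2, 2))                     # P_{X|U,S}[u, s, x]
p_x_us[:, :, 0] = [[0.9, 0.6], [0.4, 0.1]]
p_x_us[:, :, 1] = 1 - p_x_us[:, :, 0]
p_y_xs = np.zeros((2, 2, 2))                     # P_{Y|X,S}[x, s, y]: state-dependent BSC
for x in range(2):
    for s in range(2):
        eps = 0.1 if s == 0 else 0.25
        p_y_xs[x, s] = [1 - eps, eps] if x == 0 else [eps, 1 - eps]
p_z_y = np.array([[0.85, 0.15], [0.15, 0.85]])   # P_{Z|Y}[y, z]
C12 = 0.05                                       # assumed cooperation-link capacity

# Joint pmf P[s,u,x,y,z] = P_S P_{U|S} P_{X|U,S} P_{Y|X,S} P_{Z|Y}, as in (17).
P = np.einsum('s,su,usx,xsy,yz->suxyz', p_s, p_u_s, p_x_us, p_y_xs, p_z_y)

# Bounds of region (16) for this choice of auxiliaries.
p_uz = P.sum(axis=(0, 2, 3))                                      # joint of (U, Z)
p_su = P.sum(axis=(2, 3, 4))                                      # joint of (S, U)
p_xy_us = P.sum(axis=4).transpose(2, 3, 1, 0).reshape(2, 2, 4)    # (X, Y, (U,S))
p_xy_s = P.sum(axis=(1, 4)).transpose(1, 2, 0)                    # (X, Y, S)

print("R_Z       <=", I(p_uz) - I(p_su) + C12)   # (16a)
print("R_Y       <=", I_cond(p_xy_us))           # (16b)
print("R_Z + R_Y <=", I_cond(p_xy_s))            # (16c)
```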

The achievability of Theorem 1 is proved using techniques that include triple-binning and superposition coding. The main idea is to identify how to best use the capacity link between the decoders. In particular, we want to maximize the potential use of the capacity link and simultaneously balance the allocation of rate resources between both messages. The allocation of resources between the message intended for Decoder Y, M_Y, and the message intended for Decoder Z, M_Z, is possible due to the fact that we use superposition coding. Using this coding method, Decoder Y, which is the strong decoder, also decodes the message intended for Decoder Z. This allows us to shift resources and thus increase the rate resources of M_Z at the expense of the message M_Y. Decoder Y can then send its additional information about M_Z to Decoder Z using the capacity link between them.

This additional information sent from Decoder Y to Decoder Z comes into play by the use of binning. We divide the messages M_Z amongst superbins in ascending order. Next, we use a Gel'fand-Pinsker code for each superbin. Now, we can redirect some of the rate resources of the message M_Y to send the superbin which contains M_Z. Decoder Y decodes both messages M_Y and M_Z and sends the index of the superbin that contains M_Z to Decoder Z through the capacity link between the decoders. By fully utilizing this capacity link, we can increase the rate to Decoder Z beyond what is achieved by the Gel'fand-Pinsker coding scheme alone.
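The bookkeeping behind this superbinning can be pictured with a short sketch (an illustration with an assumed block length and assumed rates, not the paper's construction itself): the 2^{nR_Z} messages are split among 2^{nC_12} superbins, Decoder Y forwards only the superbin index over the C_12 link, and Decoder Z then restricts its Gel'fand-Pinsker search to that single superbin.

```python
# Assumed parameters, for illustration only.
n, R_Z, C12 = 60, 0.40, 0.05
num_messages  = 2 ** round(n * R_Z)    # |M_Z| = 2^{nR_Z} (here nR_Z happens to be an integer)
num_superbins = 2 ** round(n * C12)    # 2^{nC_12} superbins: one index fits through the C_12 link
per_superbin  = -(-num_messages // num_superbins)   # ceiling division

def superbin_of(m_z: int) -> int:
    """Messages are assigned to superbins in ascending order."""
    return m_z // per_superbin

m_z = 12345                      # the message Decoder Y has already decoded
idx = superbin_of(m_z)           # Decoder Y forwards only this index (about nC_12 bits)
# Decoder Z searches for a unique jointly typical candidate inside this superbin only.
candidates = range(idx * per_superbin, min((idx + 1) * per_superbin, num_messages))
print(f"{num_messages} messages, {num_superbins} superbins, {per_superbin} per superbin; "
      f"m_Z={m_z} lies in superbin {idx} with {len(candidates)} candidates")
```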

Nevertheless, if the capacity link between the decoders is better than the channel itself, there is still a restriction on the amount of information we can send. This emerges through the bound on the rate sum, R_Z + R_Y ≤ I(X;Y|S). This bound indicates that we cannot send more information about (M_Y, M_Z) in this setting than the information we could have sent about (M_Y, M_Z) through a state-dependent point-to-point channel, where the state information is known at both the encoder and decoder. Moreover, notice that

R_Z + R_Y ≤ I(X;Y|S) = I(U,X;Y|S) = I(U;Y|S) + I(X;Y|U,S).   (18)

That is, if we have a large capacity link between the decoders, we have a tradeoff between sending information about M_Z and M_Y. If we choose to send information about M_Y at the maximal rate possible, I(X;Y|U,S), then the maximal rate at which we can send M_Z is I(U;Y|S). On the other hand, we can increase the rate of M_Z (up to the minimum of I(U;Z) − I(U;S) + C_12 and I(X;Y|S)) at the expense of reducing the rate of M_Y. This is reflected, for example, if we have an infinite capacity link between the decoders. In this case, we can consolidate the two decoders into one decoder that wants to decode both messages. This is exactly the case of a state-dependent point-to-point channel with state information available at both the decoder and encoder.

Another interesting insight is revealed when comparing the result presented in Theorem 1 to the result for the physically degraded broadcast channel with cooperating receivers presented in [17]. Let us consider the special case of our theorem where S = ∅. For this case, the region (16) reduces to

R_Z ≤ I(U;Z) + C_12,   (19a)
R_Y ≤ I(X;Y|U),   (19b)
R_Z + R_Y ≤ I(X;Y),   (19c)

for some

P_{U,X,Z,Y} = P_{X,U} P_{Y,Z|X}.

This special case was considered in [17, Theorem 1], where it was referred to as the physically degraded broadcast channel with cooperating receivers (i.e., the broadcast channel model where the channel is not state-dependent). In their paper, a different expression for the capacity region was found:

R_Z ≤ min{ I(U;Z) + C_12, I(U;Y) },   (20a)
R_Y ≤ I(X;Y|U),   (20b)

for some

P_{U,X,Z,Y} = P_{X,U} P_{Y,Z|X}.

Therefore, we can infer that the two regions, (19) and (20), are equivalent. It is easy to see that (19) contains (20), since every pair (R_Z, R_Y) from (20) also satisfies (19). The other direction, proving that (20) contains (19), is not as straightforward: we would need to show that for every pair (R_Z, R_Y) in (19) there exists a U such that (R_Z, R_Y) satisfies (20). However, since the two regions have been proven to be the capacity regions for the same setting, we can conclude that they are, indeed, equivalent.

B. Causal Side Information

Consider the case where the state is known to the encoder in a causal manner, i.e., at each time index i, the encoder has access to the sequence s^i. This setting is illustrated in Fig. 1, where we take switch A to be open and switch B to be closed. In this scenario, the encoder is the only user informed of the state sequence, in contrast to the noncausal case, where the strong decoder is also informed of the state sequence. The capacity region for this setting is characterized in the following theorem.

Theorem 2: The capacity region for the PDSD BC with cooperating decoders and causal side information known at the encoder is the closure of the set that contains all the rates (R_Z, R_Y) that satisfy

R_Z ≤ I(U;Z) + C_12,   (21a)
R_Y ≤ I(V;Y|U),   (21b)
R_Z + R_Y ≤ I(V,U;Y),   (21c)

for some joint probability distribution of the form

P_{S,U,V,X,Z,Y} = P_S P_{U,V} P_{X|S,U,V} P_{Y,Z|X,S}.   (22)

The proof of Theorem 2 follows closely the proof of the noncausal case. We generate satellite codewords v^n(m_Y, m_Z) (instead of x^n(m_Y, m_Z)) around each cloud center u^n(m_Z), where the codewords u^n are divided among 2^{nC_12} superbins. To send (m_Y, m_Z), the encoder transmits x(u_i(m_Z), v_i(m_Z, m_Y), s_i) at time i ∈ {1, 2, ..., n}. For decoding, the strong decoder, Decoder Y, decodes the satellite codeword v^n (and hence also the cloud center u^n), i.e., it looks for a pair (m̂_Y, m̂_Z) such that (u^n(m̂_Z), v^n(m̂_Z, m̂_Y), y^n) ∈ T_ε^{(n)}(U, V, Y). Decoder Y then uses the link between the decoders to send Decoder Z the number of the superbin which contains u^n. Decoder Z now looks in this specific superbin for a unique m̂_Z such that (u^n(m̂_Z), z^n) ∈ T_ε^{(n)}(U, Z).
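The causal nature of this encoding can be illustrated with a toy sketch (assumed alphabets and an assumed symbol map, not the paper's exact construction): the codewords u^n and v^n are fixed in advance independently of the state, and the state enters only through the symbol-by-symbol map x(u_i, v_i, s_i), which uses only the current state value.

```python
import random

random.seed(0)
n = 10
# Codewords fixed by the code construction, independent of the state sequence.
u = [random.randint(0, 1) for _ in range(n)]   # cloud center u^n(m_Z)
v = [random.randint(0, 1) for _ in range(n)]   # satellite v^n(m_Z, m_Y)

def x_map(u_i: int, v_i: int, s_i: int) -> int:
    """Assumed symbol-by-symbol map x(u_i, v_i, s_i); here a simple XOR rule."""
    return (u_i ^ v_i ^ s_i) & 1

# Causal encoding: at time i the encoder has observed only s_1, ..., s_i.
s = [random.randint(0, 1) for _ in range(n)]
x = [x_map(u[i], v[i], s[i]) for i in range(n)]
print("s =", s)
print("x =", x)
```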

C. Capacity Region of the PDSD BC with Rate-Limited State Information at the Weak Decoder

We state the capacity region in the following theorem.

Theorem 3: The capacity region of the PDSD BC, (X,S) − Y − Z, with rate-limited state information at the weak decoder, illustrated in Fig. 2, is the closure of the set that contains all the rates (R_Z, R_Y) that satisfy

R_Z ≤ I(U;Z,S_d) − I(U;S,S_d),   (23)
R_Y ≤ I(X;Y|U,S,S_d),   (24)

for some joint probability distribution of the form

P_{S,S_d,U,X,Z,Y} = P_S P_{S_d,U,X|S} P_{Y,Z|X,S},   (25)

such that

R_d ≥ I(S;S_d) − I(Z;S_d).   (26)

Remark 1: As was noted in [10], we can replace the rate bound on R_Z in (23) with the bound

R_Z ≤ I(U;Z|S_d) − I(U;S|S_d).   (27)

This can easily be seen by applying the chain rule to the expressions on the right-hand side of (23) as follows:

I(U;Z,S_d) − I(U;S,S_d) = I(U;S_d) + I(U;Z|S_d) − I(U;S_d) − I(U;S|S_d) = I(U;Z|S_d) − I(U;S|S_d).

Due to lack of space, a full proof of the theorem is not given. However, we briefly describe the guidelines, which combine methods similar to those in the proof of Theorem 1 and those developed in [10]. For the direct part, the coding scheme that achieves the capacity region (23)-(24) uses techniques that include Wyner-Ziv compression, superposition coding and Gel'fand-Pinsker coding. The main idea is based on a Gel'fand-Pinsker code, but with several extensions. First, since the Channel Encoder and the strong decoder, Decoder Y, know the state sequence s^n and the State Encoder's strategy, they also know s_d^n, the compressed codeword that the State Encoder sends Decoder Z. The encoder uses this knowledge to find a codeword u^n that is jointly typical not only with s^n, as in the original Gel'fand-Pinsker scheme, but also with s_d^n. Each codeword u^n is the center of 2^{nR_Y} bins, one bin for each message m_Y. Once a codeword u^n is chosen, we look in its satellite bin m_Y for a codeword x^n that is jointly typical with that u^n, the state sequence s^n and with s_d^n. Upon transmission, the State Encoder chooses a codeword s_d^n, the Channel Encoder chooses a codeword u^n and a corresponding x^n, and x^n is transmitted through the channel. Consequently, identifying x^n at the decoder leads to the identification of u^n.

At the decoder's end, the State Encoder sends Decoder Z a compressed version of s^n using the codewords s_d^n in a Wyner-Ziv scheme. The side information available at Decoder Z is the channel output z^n. However, the joint typicality of z^n and s_d^n is not due to a Markov chain S_d − S − Z, as in the Wyner-Ziv theorem, but rather is a result of the channel Markov chain Z − Y − (X,S) − S_d and the fact that the codewords x^n are generated ∼ ∏_{i=1}^{n} p(x_i|u_i, s_{d,i}). Finally, s_d^n is used as side information to identify the codeword u^n. As for the strong decoder, Decoder Y looks for codewords u^n and x^n that are jointly typical with the received y^n, the state sequence s^n and the codeword s_d^n.

For the converse, the rate R_d is bounded similarly to the Wyner-Ziv theorem, where we identify S_{d,i} as (T, S_{i+1}^n, Z^{i-1}). The upper bound on R_Z is proven in a similar way to the converse presented in [10, Theorem 1], where we identify U_i to be equal to M_Z. Finally, the bound on R_Y is proven similarly to Theorem 1, with some modifications.

IV. IS RATE-SPLITTING OPTIMAL FOR COOPERATION?

When dealing with cooperation settings, the most common approach that comes to mind is the use of rate-splitting. Many coding schemes based on rate-splitting have been known to achieve the capacity of channels involving cooperation. For example, rate-splitting is the preferred coding method when coding for cooperative MACs, and it has been shown to be optimal. However, when dealing with cooperative BC, we prove that rate-splitting schemes are not necessarily optimal. Moreover, other techniques, such as binning, strictly outperform rate-splitting.

The main idea of the rate-splitting scheme is to split the message intended for the weaker decoder, M_Z, into two messages, M_Z = (M_{Z1}, M_{Z2}). Next, we reorganize the messages: we concatenate part of the message intended for the weak decoder, M_{Z2}, to the message intended for the strong decoder, M_Y. This lets us define new message sets, M'_Z = M_{Z1} and M'_Y = (M_Y, M_{Z2}), where we choose M_{Z2} to be of rate at most C_12. Now that we have a new message set (M'_Z, M'_Y), we send the messages using a Gel'fand-Pinsker superposition coding scheme, such as the one described in [7]. Once the strong decoder decodes both messages (M'_Z, M'_Y) (which, in fact, equal (M_Z, M_Y)), it uses the capacity link between the decoders, C_12, to send the message M_{Z2} to the weak decoder. This scheme ends up with the strong decoder decoding both messages and the weak decoder decoding the original message M_Z.

Using the achievability result of the capacity region found in [7, Theorem 3] for the PDSD BC with state information known at the encoder and decoder, as well as the fact that the rate R_{Z2} cannot be negative or greater than C_12, we derive the following bounds:

R_{Z2} ≤ C_12,   (28a)
R_{Z2} ≥ 0,   (28b)
R_{Z1} ≤ I(U;Z) − I(U;S),   (28c)
R_Y + R_{Z2} ≤ I(X;Y|U,S),   (28d)
R_{Z1} + R_{Z2} + R_Y ≤ I(X;Y|S).   (28e)

Recalling that R_Z = R_{Z1} + R_{Z2}, we substitute R_{Z1} with R_Z − R_{Z2} in the bounds (28c) and (28e). Next, using Fourier-Motzkin elimination, we eliminate the bounds that contain R_{Z2}, namely (28a), (28b) and (28d). The result is the following region of bounds on R_Z and R_Y:

R_Z ≤ I(U;Z) − I(U;S) + C_12,   (29a)
R_Y ≤ I(X;Y|U,S),   (29b)
R_Z + R_Y ≤ I(U;Z) − I(U;S) + I(X;Y|U,S).   (29c)

This region, (29), is the achievable region resulting from rate-splitting.
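For completeness, the elimination step can be spelled out explicitly (a worked restatement of the step just described, added to this version). After substituting R_{Z1} = R_Z − R_{Z2}, the constraints involving R_{Z2} consist of two lower bounds and two upper bounds, and Fourier-Motzkin elimination keeps one inequality per (lower, upper) pair:

```latex
% Lower bounds:  R_{Z_2} \ge 0,       R_{Z_2} \ge R_Z - \bigl[I(U;Z)-I(U;S)\bigr]   (from (28b), (28c))
% Upper bounds:  R_{Z_2} \le C_{12},  R_{Z_2} \le I(X;Y|U,S) - R_Y                  (from (28a), (28d))
\begin{align}
0 &\le I(X;Y|U,S) - R_Y
  &&\Rightarrow\; R_Y \le I(X;Y|U,S), \tag{29b}\\
R_Z - \bigl[I(U;Z)-I(U;S)\bigr] &\le C_{12}
  &&\Rightarrow\; R_Z \le I(U;Z)-I(U;S)+C_{12}, \tag{29a}\\
R_Z - \bigl[I(U;Z)-I(U;S)\bigr] &\le I(X;Y|U,S) - R_Y
  &&\Rightarrow\; R_Z + R_Y \le I(U;Z)-I(U;S)+I(X;Y|U,S). \tag{29c}
\end{align}
% The pairing 0 \le C_{12} is trivially true, and (28e), which no longer contains R_{Z_2}
% after the substitution, survives as the bound R_Z + R_Y \le I(X;Y|S) discussed next.
```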

We note that in the process we also derive an additional bound on the rate sum, R_Z + R_Y ≤ I(X;Y|S). However, this bound is satisfied automatically whenever (29c) is satisfied, since

I(X;Y|S) = I(U,X;Y|S) = I(U;Y|S) + I(X;Y|U,S) = I(U;Y,S) − I(U;S) + I(X;Y|U,S) ≥ I(U;Z) − I(U;S) + I(X;Y|U,S),   (30)

where the last inequality is due to the degradedness properties of the channel. Moreover, we also omit the bound C_12 ≥ 0, which is trivial from the problem setting.

Examining the region (29), we notice that it has a different form from the capacity region of this channel, (16). Therefore, an interesting question arises: are rate-splitting coding schemes optimal for BCs with cooperating decoders? We answer this question in the following lemma.

Lemma 4: Using a rate-splitting method for BCs with cooperating decoders is not necessarily optimal.

Proof: We would like to show that the rate-splitting coding scheme that yields the region (29) does not achieve the capacity of the channel given by (16) in Theorem 1. In order to do so, we need to show that the region (16) is strictly larger than the region achievable by rate-splitting, (29). The region (16) is shown to be achievable using triple-binning. Hence, by showing that (16) is strictly larger than (29), we can conclude that the rate-splitting method is not optimal.

Firstly, it is easy to see that the region (16) contains (29), since the bounds on R_Z and R_Y are the same, yet the bound on the rate sum (16c) is greater than or equal to (29c), as shown in (30).

However, to show that (16) is strictly larger than (29), we need to show that for all distributions of the form (17):

∃ (R_Z, R_Y) ∈ (16) : (R_Z, R_Y) ∉ (29).   (31)

This is not an easy task. If we look at the regions in their general form, we need to find a pair (R_Z, R_Y) ∈ (16) and show that, for every random variable U we choose, (R_Z, R_Y) ∉ (29). Nevertheless, we can show that (16) is strictly larger than (29) by considering a specific channel and showing that for this channel (29) ⊂ (16).

A. Rate-Splitting Achievable Region is not Optimal for the Binary Symmetric Broadcast Channel

Consider the physically degraded binary symmetric BC [18, Section 5.3], illustrated in Fig. 3. Here, for some p_1 < p_2 < 1/2, we have Y = X ⊕ W_1 and Z = Y ⊕ W_2, where W_1 ∼ Ber(p_1) and W_2 ∼ Ber((p_2 − p_1)/(1 − 2p_1)). This channel is a special case of our setting, where the channel is not state-dependent (hence, we take the state to be a constant).

[Figure 3 shows the cascade X → Y → Z, where Y = X ⊕ W_1 with W_1 ∼ Ber(p_1) and Z = Y ⊕ W_2 with W_2 ∼ Ber((p_2 − p_1)/(1 − 2p_1)).]

Fig. 3. On the left hand side we have the binary symmetric broadcast channel. On the right hand side we have the equivalent physically degraded version.

Following closely the arguments given in [18, Section 5.4.2], we can upper bound the region (29) by the set of all rate pairs (R_Z, R_Y) such that

R_Z ≤ 1 − H(α ∗ p_2) + C_12,   (32a)
R_Y ≤ H(α ∗ p_1) − H(p_1),   (32b)
R_Z + R_Y ≤ 1 − H(α ∗ p_2) + H(α ∗ p_1) − H(p_1),   (32c)

for some α ∈ [0, 1/2], where α ∗ p denotes the binary convolution α(1 − p) + (1 − α)p and H(·) is the binary entropy function. On the other hand, by taking X = U ⊕ V, where U ∼ Ber(1/2) and V ∼ Ber(α), and calculating the corresponding expressions in (16), we can show that the region of all rate pairs (R_Z, R_Y) such that

R_Z ≤ 1 − H(α ∗ p_2) + C_12,   (33a)
R_Y ≤ H(α ∗ p_1) − H(p_1),   (33b)
R_Z + R_Y ≤ 1 − H(p_1),   (33c)

is achievable via the binning scheme.


[Figure 4 plots the two rate regions in the (R_Y, R_Z) plane for the binary example: the dashed curve labeled "Binning" (the achievable region (33)) and the solid curve labeled "Rate-splitting" (the upper bound (32)).]

Fig. 4. The upper bound on the region (29), which is calculated in (32), is plotted by the solid line and corresponds to the smaller region. The achievable region for (16), given by the expressions in (33), is plotted by the dashed line and corresponds to the bigger region. Both regions are plotted for the values p_1 = 0.2, p_2 = 0.3 and C_12 = 0.03.

Consequently, we can observe that the region (33) is strictly larger than the region (32). For example, take p_1 = 0.2, p_2 = 0.3 and C_12 = 0.03. Looking at both regions, we can see that, taking R_Y to be zero, the point (R_Z, R_Y) = (0.1487, 0) is achievable in the binning region (33) (the dashed line in Fig. 4) for α = 0. However, it is not achievable in the rate-splitting region (32) (the solid line) for any value of α ∈ [0, 1/2]; a numerical check of this point is sketched below.
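The following sketch (an illustrative check added to this version; it simply evaluates the closed-form expressions (32) and (33) numerically) computes the largest R_Z compatible with R_Y = 0 under both regions for p_1 = 0.2, p_2 = 0.3 and C_12 = 0.03, sweeping α over [0, 1/2].

```python
import numpy as np

def Hb(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def conv(a, p):
    """Binary convolution a * p = a(1 - p) + (1 - a)p."""
    return a * (1 - p) + (1 - a) * p

p1, p2, C12 = 0.2, 0.3, 0.03
alphas = np.linspace(0.0, 0.5, 5001)

# Largest R_Z with R_Y = 0 in the binning region (33):
# R_Z <= min{ 1 - H(a*p2) + C12 , 1 - H(p1) }, which is maximized at a = 0.
rz_binning = max(min(1 - Hb(conv(a, p2)) + C12, 1 - Hb(p1)) for a in alphas)

# Largest R_Z with R_Y = 0 in the rate-splitting region (32):
# R_Z <= min{ 1 - H(a*p2) + C12 , 1 - H(a*p2) + H(a*p1) - H(p1) }.
rz_split = max(min(1 - Hb(conv(a, p2)) + C12,
                   1 - Hb(conv(a, p2)) + Hb(conv(a, p1)) - Hb(p1)) for a in alphas)

print(f"binning:        R_Z <= {rz_binning:.4f}")   # about 0.1487, attained at alpha = 0
print(f"rate-splitting: R_Z <= {rz_split:.4f}")     # strictly smaller for every alpha
```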

Hence, we have shown that, for the binary symmetric BC, an achievable region derived from the binning coding scheme by a specific choice of U is strictly larger than an upper bound on the region (29). Therefore, we can conclude that (16) is strictly larger than (29) and that the rate-splitting coding scheme is not necessarily optimal when dealing with cooperative BC.

REFERENCES

[1] C. E. Shannon. Channels with side information at the transmitter. IBM J. Res. Devel., 2:289–293, 1958.
[2] S. I. Gel'fand and M. S. Pinsker. Coding for channel with random parameters. Probl. Contr. and Inf. Theory, 9(1):19–31, 1980.
[3] A. Lapidoth and Y. Steinberg. The multiple access channel with causal and strictly causal side information at the encoders. 2010.
[4] A. Lapidoth and Y. Steinberg. The multiple-access channel with causal side information: Common state. IEEE Trans. Inf. Theory, 59(1):32–50, 2013.
[5] P. Piantanida, A. Zaidi, and S. Shamai. Multiple access channel with states known noncausally at one encoder and only strictly causally at the other encoder. CoRR, abs/1105.5975, 2011.
[6] A. Somekh-Baruch, S. Shamai, and S. Verdu. Cooperative multiple-access encoding with states available at one transmitter. IEEE Trans. Inf. Theory, 54:4448–4469, Oct. 2008.
[7] Y. Steinberg. Coding for the degraded broadcast channel with random parameters, with causal and noncausal side information. IEEE Trans. Inf. Theory, 51(8):2867–2877, August 2005.
[8] C. Heegard and A. A. El Gamal. On the capacity of computer memory with defects. IEEE Trans. Inf. Theory, 29(5):731–739, 1983.
[9] T. M. Cover and M. Chiang. Duality between channel capacity and rate distortion with two-sided state information. IEEE Trans. Inf. Theory, 48(6):1629–1638, June 2002.
[10] Y. Steinberg. Coding for channels with rate-limited side information at the decoder, with applications. IEEE Trans. Inf. Theory, 54:4283–4295, 2008.
[11] A. D. Wyner and J. Ziv. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theory, 22(1):1–10, January 1976.
[12] F. M. J. Willems. The discrete memoryless multiple channel with partially cooperating encoders. IEEE Trans. Inf. Theory, 29(6):441–445, 1983.
[13] F. M. J. Willems. Information-theoretical results for the discrete memoryless multiple access channel. Ph.D. thesis, 1982.
[14] O. Simeone, D. Gunduz, H. V. Poor, A. J. Goldsmith, and S. Shamai. Compound multiple-access channels with partial cooperation. IEEE Trans. Inf. Theory, 55(6):2425–2441, 2009.
[15] E. Amir and Y. Steinberg. Joint source-channel coding for cribbing models. In Int. Symp. Information Theory (ISIT-12), pages 1952–1956, 2012.
[16] H. Permuter, S. Shamai, and A. Somekh-Baruch. Message and state cooperation in multiple access channels. IEEE Trans. Inf. Theory, 57(10):6379–6396, 2011.
[17] R. Dabora and S. Servetto. Broadcast channels with cooperating decoders. IEEE Trans. Inf. Theory, 52(12):5438–5454, December 2006.
[18] A. El Gamal and Y.-H. Kim. Network Information Theory. Cambridge University Press, 2011.
