Information-theoretic perspective on massive multiple-access
Yury Polyanskiy
Department of EECS, MIT
SkolTech Mini Course, Jul. 2018
Legal notice: Some images in this presentation are borrowed from publicly available sources. The copyright on these images belongs to their original creators. For full copyright information please contact the author.
Yury Polyanskiy MAC tutorial 1
Lecture plan
1. Lecture 1: Short packets. Classical MAC.
   - Motivation: Why work on MAC now? What is new?
   - Finite blocklength IT: a few results
   - Classical MAC IT
2. Lecture 2: Gaussian MAC. Modulation. CDMA.
   - Orthogonal modulation (TDMA, FDMA, CDMA) and non-orthogonal (NOMA).
   - Gaussian MAC. TIN. TIN+SIC. Rate-splitting.
   - Spectral efficiency and Eb/N0.
   - Randomly-spread CDMA. Effect of MUD.
3. Lecture 3: Massive MAC. Information-theoretic analysis.
   - Number of users scales with blocklength: K = µn.
   - Per-user probability of error (PUPE). Absence of strong converse.
   - Gaussian-process achievability bound.
4. Lecture 4: Random access.
   - Survey of attempts to formalize random access.
   - Our take: random access = same-codebook.
   - Achievability bound.
   - Lattice-based coding scheme.
How does your cell phone work?
[Figure: smartphone home-screen screenshots]
• Cell phone is powered on.
• Announces its presence on PRACH.
• Base station (periodically) gives permission to send.
• Summary:
  - Random access is very low duty cycle.
  - BS makes access ORTHOGONAL across users.
  - Bulk of communication is over an interference-free single-user AWGN channel.
• What's new in 5G?
Internet-of-Things
• Smart agriculture
• Advanced metering systems
• Fire alarms
• Home security and automation
• Oilfield and pipeline monitoring
• M-health
• Smart parking, intelligent traffic
• Waste and recycling
• Asset tracking and geo-location
• Animal tracking and livestock

Expected density: 100-500 devices per household/office
Soup of solutions
[Figure: landscape of wireless technologies (LTE, GPRS, HSDPA, HSPA, DECT, NFC, UWB, Zigbee, Z-Wave, Thread, Bluetooth, Bluetooth LE, Wi-Fi, Wi-Max, Wi-Gig, LoRa, Sigfox, NWave, 868/915 MHz ISM) positioned by battery life, range, cost, and capacity]
Two breeds of IoT

[Figure: LPWAN: one base station covers 10 km]
IoT is about battery life
Q: What drains the battery? Examples (@ 3.3V):
             Arduino (w/o reg.)   XBee (Zigbee)   LP-WAN sensor
Sleep        5 uA                 1 uA            1-2 uA
CPU running  50 uA                40 uA           60 uA
Radio xmit   n/a                  40 mA           20 mA

• Duty cycle of 1 sec / 20 min: radio lasts 6-10 yr on an AA battery.
• Caveat: the calculation assumes a single user.
• Key problem: energy usage will grow with the number of sensors deployed. How much?
• Sad: depends on technology? Happy: IT comes to the rescue!
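The battery-life claim above can be checked with a back-of-the-envelope calculation. A minimal sketch, assuming a 2500 mAh AA capacity and the currents from the table (both are my assumptions for illustration):

```python
def battery_life_years(sleep_uA, radio_mA, duty_on_s, duty_period_s,
                       battery_mAh=2500.0):
    """Average current = sleep current + radio current weighted by duty cycle."""
    avg_mA = sleep_uA / 1000.0 + radio_mA * (duty_on_s / duty_period_s)
    hours = battery_mAh / avg_mA
    return hours / (24 * 365)

# XBee-class radio: 2 uA sleep, 40 mA transmit, on for 1 s every 20 min
life = battery_life_years(sleep_uA=2, radio_mA=40, duty_on_s=1,
                          duty_period_s=20 * 60)
print(f"{life:.1f} years")  # ~8 years, inside the 6-10 yr range quoted above
```

The transmit current dominates: the 2 uA sleep floor contributes only ~6% of the average draw at this duty cycle.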
Outline
Envisioned solution:
• To save battery: sensors sleep all the time, except transmissions.
• ... uncoordinated transmissions.
• ... they wake up, blast the packet, go back to sleep.
• Focus on low energy (low Eb/N0).
• Focus on fundamental limits...
• ... but with low-complexity solutions (single-user-only decoding).

Issues we need to understand:
1. Packets are short: finite-blocklength (FBL) info theory
2. Multiple-access channel: classical MAC
3. Low-complexity MAC: modulation, CDMA, multi-user detection
4. Massive random access: many users, same-codebook codes (NEW)

Supporting 10 users at 1 Mbps is much easier than 1M users at 10 bps.
FBL Info Theory: short intro
Case study: 1000-bit BSC
• Consider channel BSC(n = 1000, δ = 0.11)
• How many data bits can we transmit with (block) Pe ≤ 10⁻³?
• Attempt 1 (repetition): k = 47 bits via the [21,1,21]-code
• Attempt 2 (Reed-Muller): k = 112 bits via the [64,7,32]-code
• Shannon's prediction: C = 0.5 bit, so k ≈ 500 bits
• Finite-blocklength IT: 414 ≤ k ≤ 416
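The finite-blocklength numbers above come from tight bounds, but the normal approximation k ≈ nC − √(nV) Q⁻¹(ε) already lands close. A sketch, using the standard BSC dispersion V = δ(1−δ) log₂²((1−δ)/δ) and omitting the half-log correction term, so the estimate is slightly conservative:

```python
import math
from statistics import NormalDist

def bsc_normal_approx(n, delta, eps):
    """Normal-approximation estimate of max data bits over n uses of BSC(delta)."""
    h = -delta * math.log2(delta) - (1 - delta) * math.log2(1 - delta)
    C = 1 - h                                                      # capacity, bits
    V = delta * (1 - delta) * math.log2((1 - delta) / delta) ** 2  # dispersion
    Qinv = NormalDist().inv_cdf(1 - eps)                           # Q^{-1}(eps)
    return n * C - math.sqrt(n * V) * Qinv

k = bsc_normal_approx(1000, 0.11, 1e-3)
print(round(k))  # 408: already within a few bits of the true 414-416
```

Adding the (1/2) log₂ n ≈ 5 bit correction moves the estimate to ~413, essentially on top of the exact answer.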
Abstract communication problem
Noisy channel
Goal: Decrease corruption of data caused by noise
Channel coding: principles
Noisy channel
Data bits Redundancy
Goal: Decrease corruption of data caused by noise
Solution: Code to diminish probability of error Pe.
Key metrics: Rate and Pe
Channel coding: principles

[Figure: (Rate, Pe) plane split into possible and impossible regions: the reliability-rate tradeoff]

Decreasing Pe further:
1. More redundancy. Bad: loses rate.
2. Increase blocklength! (n = 10, 100, 1000, 10⁶)
Channel coding: Shannon capacity

[Figure: reliability-rate tradeoff; the channel capacity C marks the boundary on the rate axis]

Shannon: Fix R < C. Then Pe ↘ 0 as n → ∞.

Question: For what n will Pe < 10⁻³?
Channel coding: Gaussian approximation

[Figure: reliability-rate tradeoff, annotated with channel capacity C and channel dispersion V]

Shannon: Fix R < C. Then Pe ↘ 0 as n → ∞.

Question: For what n will Pe < 10⁻³?

Answer:
    n ≳ const · V/C²
How to describe evolution of the boundary?
[Figure: reliability-rate tradeoff]

Classical results:
• Vertical asymptotics: fixed rate, reliability function
  (Elias, Dobrushin, Fano, Shannon-Gallager-Berlekamp)
• Horizontal asymptotics: fixed ε, strong converse, √n terms
  (Wolfowitz, Weiss, Dobrushin, Strassen, Kemperman)
XXI century:
• Tight non-asymptotic bounds
• Remarkable precision of the normal approximation
• Extended results on horizontal asymptotics
  (AWGN, O(log n) terms, cost constraints, feedback, etc.)
Finite blocklength fundamental limit
Definition:

    R*(n, ε) = max { (1/n) log M : ∃ (n, M, ε)-code }

(the maximal achievable rate for blocklength n and probability of error ε)

Rough summary: for ergodic channels,

    R*(n, ε) ≈ C − √(V/n) · Q⁻¹(ε).
Connection to CLT
• Let P_{Y^n|X^n} = (P_{Y|X})^n be memoryless.
• Converse bounds (roughly):

    R*(n, ε) ≲ ε-th quantile of (1/n) log dP_{Y^n|X^n}/dQ_{Y^n}

• Achievability bounds (roughly):

    R*(n, ε) ≳ ε-th quantile of (1/n) log dP_{Y^n|X^n}/dQ_{Y^n}

• The info density i(X^n; Y^n) = log dP_{Y^n|X^n}/dQ_{Y^n} is a sum of iid terms.
• The choice of Q_{Y^n} is an art. Often the c.a.o.d. works; then E[i(X^n; Y^n)] = nC.
• So by the CLT:

    R*(n, ε) ≈ ε-quantile of N(C, V/n)
FBL achievability bounds
• A random transformation A → (P_{Y|X}) → B.
• (M, ε)-codes:

    W → A → B → Ŵ,   W ~ Unif{1, ..., M},   P[W ≠ Ŵ] ≤ ε

• For every P_{XY} = P_X P_{Y|X} define the information density:

    ı(x; y) ≜ log dP_{Y|X=x}/dP_Y (y)

  - E[ı(X;Y)] = I(X;Y)
  - Var[ı(X;Y)|X] = V
  - Memoryless channels: ı(A^n; B^n) is a sum of iid terms, so in distribution

      ı(A^n; B^n) ≈ n I(A;B) + √(nV) Z,   Z ~ N(0, 1)

• Goal: prove FBL bounds. As a by-product: R*(n, ε) ≳ C − √(V/n) Q⁻¹(ε)
DT bound
Theorem (Dependence-Testing Bound)

For any P_X there exists a code with M codewords and

    ε ≤ E[ exp{ −| ı_{X;Y}(X;Y) − log((M−1)/2) |⁺ } ].

Highlights:
• Strictly stronger than Feinstein-Shannon...
• ... and no optimization over γ!
• Easier to compute than the RCU bound.
• Easier asymptotics:

    ε ≤ E[ e^{−n | (1/n) ı(X^n;Y^n) − R |⁺} ] ≈ Q( √(n/V) · (I(X;Y) − R) )

• Has the form of an f-divergence: 1 − ε ≥ D_f(P_{XY} ‖ P_X P_Y)
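The DT bound is easy to evaluate numerically. For the BSC with a uniform input, ı(x^n; y^n) = n log 2 + k log δ + (n−k) log(1−δ), where k is the number of bit flips, so the expectation reduces to a sum over a binomial distribution. A sketch (everything in nats; the evaluation strategy, not the bound itself, is mine, and log((M−1)/2) is approximated by log M − log 2 for large M):

```python
import math

def dt_bound_bsc(n, delta, logM):
    """DT bound eps <= E[exp(-|i - log((M-1)/2)|^+)] for BSC(delta),
    uniform input; logM and all info densities in nats."""
    thr = logM - math.log(2)              # log((M-1)/2) ~ log M - log 2
    total = 0.0
    for k in range(n + 1):                # k = number of bit flips
        log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1)
                   - math.lgamma(n - k + 1)
                   + k * math.log(delta) + (n - k) * math.log(1 - delta))
        i_dens = (n * math.log(2) + k * math.log(delta)
                  + (n - k) * math.log(1 - delta))
        # pmf(k) * exp(-|i - thr|^+), combined in the log domain
        total += math.exp(log_pmf - max(i_dens - thr, 0.0))
    return total

# n = 1000, delta = 0.11: rate 0.3 bit is comfortably below C = 0.5 bit,
# so the bound on the error probability is tiny
print(dt_bound_bsc(1000, 0.11, 300 * math.log(2)))
```

As the rate approaches capacity the bound degrades smoothly, matching the Q-function asymptotics quoted above.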
DT bound: Proof
• Codebook: random C_1, ..., C_M ~ P_X iid.
• Feinstein decoder:

    Ŵ = smallest j s.t. ı_{X;Y}(C_j; Y) > γ

• The j-th codeword's probability of error:

    P[error | W = j] ≤ P[ı_{X;Y}(X;Y) ≤ γ]  (a)  +  (j−1) · P[ı_{X;Y}(X̄;Y) > γ]  (b)

  where X̄ ⊥⊥ Y is an independent codeword.
  In (a): C_j is too far from Y. In (b): some C_k with k < j is too close to Y.
• Averaging over W:

    P[error] ≤ P[ı_{X;Y}(X;Y) ≤ γ] + ((M−1)/2) · P[ı_{X;Y}(X̄;Y) > γ]
DT bound: Proof
• Recap: for every γ there exists a code with

    ε ≤ P[ı_{X;Y}(X;Y) ≤ γ] + ((M−1)/2) · P[ı_{X;Y}(X̄;Y) > γ].

• Key step: closed-form optimization of γ.
• Introduce X̄ ⊥⊥ Y: then ı_{X;Y} = log dP_{XY}/dP̄_{XY}, where P̄_{XY} = P_X P_Y.
• We have

    P_{XY}[ dP_{XY}/dP̄_{XY} ≤ e^γ ] + ((M−1)/2) · P̄_{XY}[ dP_{XY}/dP̄_{XY} > e^γ ]

  This is Bayesian dependence testing! The optimal threshold is the ratio of priors
  ⇒ γ* = log((M−1)/2).
• Change-of-measure argument: for any P, Q and τ > 0,

    P[ dP/dQ ≤ τ ] + τ · Q[ dP/dQ > τ ] = E_P[ exp{ −| log dP/dQ − log τ |⁺ } ].
FBL Converse bounds
• Take a random transformation A → (P_{Y|X}) → B
  (think A = A^n, B = B^n, P_{Y|X} = P_{Y^n|X^n}).
• An input distribution P_X induces P_Y = P_{Y|X} ∘ P_X and P_{XY} = P_X P_{Y|X}.
• Fix a code:

    W → (encoder) → X → Y → (decoder) → Ŵ

  with W ~ Unif[M] and M = # of codewords.
  The input distribution P_X associated with a code:

    P_X[·] ≜ (# of codewords in (·)) / M.

• Goal: upper bounds on log M in terms of ε ≜ P[error].
  As a by-product: R*(n, ε) ≲ C − √(V/n) Q⁻¹(ε)
Fano’s inequality
Theorem (Fano)
For any code W → (encoder) → X → (P_{Y|X}) → Y → (decoder) → Ŵ with
W ~ Unif{1, ..., M}:

    log M ≤ ( sup_{P_X} I(X;Y) + h(ε) ) / (1 − ε),   ε = P[W ≠ Ŵ]

Implies the weak converse:

    R*(n, ε) ≤ C/(1 − ε) + o(1).

Proof idea: ε small ⇒ H(W|Ŵ) small ⇒ I(X;Y) ≈ H(W) = log M
A (very long) proof of Fano via channel substitution
Consider two distributions on (W, X, Y, Ŵ):

P: P_{WXYŴ} = P_W × P_{X|W} × P_{Y|X} × P_{Ŵ|Y}    DAG: W → X → Y → Ŵ
Q: Q_{WXYŴ} = P_W × P_{X|W} × Q_Y × P_{Ŵ|Y}        DAG: W → X,  Y → Ŵ

Under Q the channel is useless:

    Q[W = Ŵ] = Σ_{m=1}^{M} P_W(m) Q_Ŵ(m) = (1/M) Σ_{m=1}^{M} Q_Ŵ(m) = 1/M

Next step: data processing for the relative entropy D(·‖·)
Data-processing for D(·||·)
For any (random) transformation P_{B|A} mapping input distributions P_A, Q_A to
output distributions P_B, Q_B:

    D(P_A ‖ Q_A) ≥ D(P_B ‖ Q_B)

Apply to the transformation (W, X, Y, Ŵ) ↦ 1{W ≠ Ŵ}:

    D(P_{WXYŴ} ‖ Q_{WXYŴ}) ≥ d( P[W = Ŵ] ‖ Q[W = Ŵ] ) = d(1 − ε ‖ 1/M)

where d(x‖y) = x log(x/y) + (1 − x) log((1−x)/(1−y)).
A proof of Fano via channel substitution
So far:

    D(P_{WXYŴ} ‖ Q_{WXYŴ}) ≥ d(1 − ε ‖ 1/M)

Lower-bound the RHS:

    d(1 − ε ‖ 1/M) ≥ (1 − ε) log M − h(ε)

Analyze the LHS:

    D(P_{WXYŴ} ‖ Q_{WXYŴ}) = D(P_{XY} ‖ Q_{XY})
                            = D(P_X P_{Y|X} ‖ P_X Q_Y)
                            = D(P_{Y|X} ‖ Q_Y | P_X)

(Recall: D(P_{Y|X} ‖ Q_Y | P_X) = E_{x~P_X}[ D(P_{Y|X=x} ‖ Q_Y) ])
A proof of Fano via channel substitution: last step
Putting it all together:

    (1 − ε) log M ≤ D(P_{Y|X} ‖ Q_Y | P_X) + h(ε)   ∀ Q_Y, ∀ code

Two methods:
1. Compute sup_{P_X} inf_{Q_Y} and recall

       inf_{Q_Y} D(P_{Y|X} ‖ Q_Y | P_X) = I(X;Y)

2. Take Q_Y = P*_Y, the caod (capacity-achieving output distribution), and recall

       D(P_{Y|X} ‖ P*_Y | P_X) ≤ sup_{P_X} I(X;Y)   ∀ P_X

Conclude:

    (1 − ε) log M ≤ sup_{P_X} I(X;Y) + h(ε)

Important: the second method is particularly useful for FBL!
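As a quick numerical sanity check, the weak-converse bound (1 − ε) log M ≤ sup I(X;Y) + h(ε) can be evaluated for the BSC(0.11) of the earlier case study. A sketch (the bound is the one on the slide; the numbers are mine):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def fano_bits(n, delta, eps):
    """Weak-converse upper bound on log2 M for n uses of BSC(delta)."""
    C = 1 - h2(delta)                  # sup_{P_X} I(X;Y) per channel use
    return (n * C + h2(eps)) / (1 - eps)

print(fano_bits(1000, 0.11, 1e-3))   # ~500.6 bits
```

This shows the weakness of Fano at finite n: it barely improves on nC = 500, while the true answer is 414-416 bits, which is exactly what the β_α tightening on the next slide recovers.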
Tightening: from D(·‖·) to β_α(·,·)

Question: how about replacing D(·‖·) with other divergences?

    D(·‖·)      relative entropy (KL divergence)    weak converse
    D_λ(·‖·)    Rényi divergence                    strong converse
    β_α(·,·)    Neyman-Pearson ROC curve            FBL bounds

Note: using β_α is a.k.a. the meta-converse...
... and leads to R*(n, ε) ≤ C − √(V/n) Q⁻¹(ε)
General meta-converse principle
Steps:
• Select an auxiliary channel Q_{Y|X} (art).
  E.g.: Q_{Y|X=x} = center of the cluster of x.
• Prove a converse bound for the channel Q_{Y|X}.
  E.g.: Q[W = Ŵ] ≲ (# of clusters)/M.
• Compute the distance D(P‖Q) between the two worlds:

    P: P_{WXYŴ} = P_W × P_{X|W} × P_{Y|X} × P_{Ŵ|Y}
  vs.
    Q: Q_{WXYŴ} = P_W × P_{X|W} × Q_{Y|X} × P_{Ŵ|Y}

• Apply data processing: D(P_{W,Ŵ} ‖ Q_{W,Ŵ}) ≤ D(P_{X,Y} ‖ Q_{X,Y})
• Key observation: this inequality connects P[error], Q[error] and the distance D(P‖Q).
FBL: summary
• All in all, these methods allow us to conclude

    R*(n, ε) ≈ C − √(V/n) Q⁻¹(ε)

  for a wide range of channels.
• Typically, V = Var[ı(X;Y)|X] under the capacity-achieving distribution of X.
• Example: the AWGN channel

    Y = X + Z,   Z ~ N(0, σ²)

  Codewords x^n ∈ R^n satisfy the power constraint Σ_{j=1}^{n} |x_j|² ≤ nP:

    C(P) = (1/2) log(1 + P),   V(P) = (log² e / 2) · (1 − 1/(1 + P)²)

• Curious property of Gaussian noise: V(P) ≤ (log² e)/2.
  Below, for the Gaussian MAC, we focus on mutual information / capacity. By FBL
  theory there exist codes within O(1/√n) of it, uniformly in P.
Classical multiple-access IT
IT vs networks view on MAC
• Core problem: many users, one channel.
• Networking folks:
  - The (slotted) ALOHA protocol achieves: Σ_i R_i ≈ 0.37 C
  - Open problem: what is the maximal achievable fraction η*?
    State of the art [Tsybakov-Lihanov'87]: 0.476 ≤ η* ≤ 0.568
    (collision-resolution codes)
• IT: we want Σ_i R_i ≫ C !
• How? By exploiting the physics of collision.
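The 0.37 figure is the classical slotted-ALOHA throughput G e^{−G}, maximized at offered load G = 1, giving 1/e ≈ 0.368. A one-line check plus a small simulation (the Poisson-arrivals model is the standard textbook one, not something specific to these slides):

```python
import math, random

def aloha_throughput(G):
    """Slotted ALOHA: a slot is useful iff exactly one Poisson(G) arrival hits it."""
    return G * math.exp(-G)

print(aloha_throughput(1.0))  # = 1/e ~ 0.368

# Monte-Carlo check at G = 1: fraction of slots with exactly one transmission
random.seed(0)
n_slots, n_users, p = 100_000, 50, 1 / 50   # 50 users, each transmits w.p. 1/50
good = sum(1 for _ in range(n_slots)
           if sum(random.random() < p for _ in range(n_users)) == 1)
print(good / n_slots)  # close to 0.368 (binomial ~ Poisson here)
```

The point of the collision model is that any slot with two or more arrivals is wasted, which is exactly the "physics of collision" that the information-theoretic view refuses to accept.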
2-user MAC: IT formalism
Diagram: W₁ → X₁, W₂ → X₂, (X₁, X₂) → P_{Y|X₁,X₂} → Y → (Ŵ₁, Ŵ₂)

• 2-input channel: P_{Y|X₁,X₂} (memoryless)
• Random messages W₁ ∈ [2^{nR₁}], W₂ ∈ [2^{nR₂}]
• Encoders: X₁^n = f₁(W₁), X₂^n = f₂(W₂)
• Joint decoder: (Ŵ₁, Ŵ₂) = g(Y)
• Joint probability of error:

    P[Ŵ₁ = W₁, Ŵ₂ = W₂] ≥ 1 − ε.

• FBL fundamental limit (region):

    R*(n, ε) = { (R₁, R₂) : ∃ (2^{nR₁}, 2^{nR₂}, ε)-code }

• Asymptotics ([·] = closure):

    C_ε = [ lim inf_{n→∞} R*(n, ε) ],   C = ∩_{ε>0} C_ε
2-user MAC: capacity region
Theorem (Ahlswede-Liao (capacity) + Dueck (strong converse))

    C = C_ε = co ∪_{P_{X₁}, P_{X₂}} Penta(P_{X₁}, P_{X₂})

    Penta(P_{X₁}, P_{X₂}) ≜ { (R₁, R₂) :  R₁ + R₂ ≤ I(X₁, X₂; Y),
                                          R₁ ≤ I(X₁; Y | X₂),
                                          R₂ ≤ I(X₂; Y | X₁) }

• co{·} denotes the convex hull.
• Fun fact: without synchronization C = [∪ Penta], but without the co{·}!
• Not true with cost constraints. In that case one needs time-sharing:

    C = ∪_{X₁, X₂, U} { (R₁, R₂) :  R₁ + R₂ ≤ I(X₁, X₂; Y | U),
                                    R₁ ≤ I(X₁; Y | X₂, U),
                                    R₂ ≤ I(X₂; Y | X₁, U) }.
Capacity = Union of pentagons
    Penta(P_{X₁}, P_{X₂}) ≜ { (R₁, R₂) :  R₁ + R₂ ≤ I(X₁, X₂; Y),
                                          R₁ ≤ I(X₁; Y | X₂),
                                          R₂ ≤ I(X₂; Y | X₁) }

[Figure: pentagon in the (R₁, R₂) plane with corners at R₁ = I(X₁;Y|X₂),
R₂ = I(X₂;Y|X₁), and sum-rate face R₁ + R₂ = I(X₁,X₂;Y)]

Note: after taking ∪_{P_{X₁}, P_{X₂}} and the convex hull, the resulting region may
be curvilinear!
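The three mutual-information quantities defining Penta can be computed directly from the joint pmf. A sketch that evaluates them for the binary adder channel Y = X₁ + X₂ with uniform independent inputs, an example discussed later in the deck (the helper names are mine):

```python
import math
from collections import defaultdict

def _H(pmf):
    """Entropy in bits of a pmf given as {outcome: prob}."""
    return -sum(v * math.log2(v) for v in pmf.values() if v > 0)

def penta(px1, px2, channel):
    """(I(X1,X2;Y), I(X1;Y|X2), I(X2;Y|X1)) in bits, independent inputs.
    channel[(x1, x2)] is a dict {y: prob}."""
    joint = defaultdict(float)                       # p(x1, x2, y)
    for (a, b), py in channel.items():
        for y, w in py.items():
            joint[(a, b, y)] += px1[a] * px2[b] * w
    def marg(idx):                                   # marginal over chosen coords
        m = defaultdict(float)
        for k, v in joint.items():
            m[tuple(k[i] for i in idx)] += v
        return m
    H_y_given_ab = _H(joint) - _H(marg((0, 1)))      # H(Y|X1,X2)
    I_sum = _H(marg((2,))) - H_y_given_ab            # I(X1,X2;Y)
    I1 = (_H(marg((1, 2))) - _H(marg((1,)))) - H_y_given_ab   # I(X1;Y|X2)
    I2 = (_H(marg((0, 2))) - _H(marg((0,)))) - H_y_given_ab   # I(X2;Y|X1)
    return I_sum, I1, I2

# Binary adder channel Y = X1 + X2 with uniform inputs
bac = {(a, b): {a + b: 1.0} for a in (0, 1) for b in (0, 1)}
u = {0: 0.5, 1: 0.5}
print(penta(u, u, bac))  # (1.5, 1.0, 1.0): the BAC pentagon from the next slides
```

The same function works for any finite-alphabet two-user MAC; only the `channel` dictionary changes.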
MAC theorem: standard proof (outline)
Theorem

    C = C_ε = co ∪_{P_{X₁}, P_{X₂}} Penta(P_{X₁}, P_{X₂})

Here is a standard proof.
• Weak converse:
  - sum rate:

      R₁ + R₂ ≲ (1/n) I(X₁^n, X₂^n; Y^n) ≤ (1/n) Σ_{i=1}^{n} I(X_{1i}, X_{2i}; Y_i)

  - a genie gives X₁^n to the decoder:

      R₂ ≲ (1/n) I(X₂^n; Y^n | X₁^n) ≤ (1/n) Σ_{i=1}^{n} I(X_{2i}; Y_i | X_{1i})

  - hence (R₁, R₂) ∈ (1/n) Σ_i Penta(P_{X_{1i}}, P_{X_{2i}})
MAC theorem: standard proof (outline)
• Achievability:
  - Fix P_{X₁}, P_{X₂}.
  - Generate codewords for user i from (P_{X_i})^{⊗n} iid.
  - Decode via joint typicality.
  - There are (M₁−1)(M₂−1) possibilities with both Ŵ₁, Ŵ₂ wrong
    (each w.p. ≤ 2^{−n I(X₁,X₂;Y)}).
  - There are M_i − 1 possibilities with only Ŵ_i wrong
    (each w.p. ≤ 2^{−n I(X_i;Y|X_{3−i})}).
  - Hence, if (R₁, R₂) ∈ Penta(P_{X₁}, P_{X₂}), all three types of errors are small.
  - Let us understand this more carefully...
MAC achievability: details I
• Generate M₁ = 2^{nR₁} codewords C_i ~ iid (P_{X₁})^{⊗n}
• Generate M₂ = 2^{nR₂} codewords D_j ~ iid (P_{X₂})^{⊗n}
• True messages: W₁ = i₀, W₂ = j₀.
• The decoder sees y^n. How to decode?
• Why is this not the same as decoding a single-user code of size M₁ × M₂?
• Extra structure: (C_{i₀}, D_j) is NOT independent of (C_{i₀}, D_{j₀}).
MAC achievability: details II
• The decoder sees y^n. How to decode?
• A good test for rejecting the (M₁−1)(M₂−1) codeword pairs in class (P12):

    (T12):  ı(c_i, d_j; y^n) ≤ γ₁₂  ⇒  remove (i, j) from consideration

• Here ı(c, d; y^n) ≜ log [ P_{Y^n|X₁^n,X₂^n}(y^n | c, d) / P_{Y^n}(y^n) ].
• Standard bound: ∀ i ≠ i₀, j ≠ j₀:

    P[ı(C_i, D_j; Y^n) > γ₁₂] ≤ e^{−γ₁₂}

• Set γ₁₂ = log(M₁M₂) + τ; then test (T12) filters out all (i, j) ∈ (P12).
MAC achievability: details III
• The decoder sees y^n. How to decode?
• A good test for rejecting the (M₂−1) codewords in class (P2):

    (T2):  ı(d_j; y^n | c_i) ≤ γ₂  ⇒  remove (i, j) from consideration

• Here ı(d; y^n | c) ≜ log [ P_{Y^n|X₁^n,X₂^n}(y^n | c, d) / P_{Y^n|X₁^n}(y^n | c) ].
• Standard bound: ∀ j ≠ j₀:

    P[ı(D_j; Y^n | C_{i₀}) > γ₂] ≤ e^{−γ₂}

• Set γ₂ = log M₂ + τ; then test (T2) filters out all (i₀, j) ∈ (P2).
• The decoder sees y^n. Decoding rules:

    (T12):  ı(c_i, d_j; y^n) ≤ n(R₁ + R₂) + τ  ⇒  remove (i, j)
    (T1):   ı(c_i; y^n | d_j) ≤ nR₁ + τ        ⇒  remove (i, j)
    (T2):   ı(d_j; y^n | c_i) ≤ nR₂ + τ        ⇒  remove (i, j)

• This achieves:

    ε ≤ 3e^{−τ} + P[ {ı(X₁^n, X₂^n; Y^n) ≤ n(R₁+R₂) + τ}
                   ∪ {ı(X₁^n; Y^n | X₂^n) ≤ nR₁ + τ}
                   ∪ {ı(X₂^n; Y^n | X₁^n) ≤ nR₂ + τ} ].

• By the CLT, any (R₁, R₂) within 1/√n of the boundary of Penta is achievable.
• Typical decoding:
  - Use the (T12) rule: this is like decoding a single-user M₁ × M₂ code
    (LDPC+LDGM structure!)
  - After applying it, most often only one (the true) message pair is left (!)
  - ... unless R₁ = I(X₁; Y | X₂) + O(1/√n).
  - In that case many (i, j)'s remain, but they are all in one column!
  - Hence decode W₂ first; then, conditioned on X₂, decode the M₁-code.
Example: Binary Adder Channel (BAC)
[Figure: BAC capacity region: pentagon with R₁ ≤ 1, R₂ ≤ 1, R₁ + R₂ ≤ 3/2]

    Y = X₁ + X₂,   X_i ∈ {0, 1},   Y ∈ {0, 1, 2}

• Maximal sum rate:

    C_sum = max_{A ⊥⊥ B} I(A, B; Y) = max H(A + B) = (3/2) log 2

• Each user alone can send 1 bit/ch.use, but together 3/2 bit/ch.use. How?
Example: Binary Adder Channel (BAC)
[Figure: BAC capacity region: pentagon with R₁ ≤ 1, R₂ ≤ 1, R₁ + R₂ ≤ 3/2]

• Take R₁ = 1 (user 1 sends uncoded bits A). Then user 2's channel X₂ → Y is:

    0 → {0, 1} (each w.p. 1/2),   1 → {1, 2} (each w.p. 1/2)   = BEC(1/2)

  (Y = 0 or Y = 2 reveals X₂; Y = 1 is an erasure, occurring w.p. 1/2.)
• Successive interference cancellation (SIC):

    Y^n → [BEC(1/2) decoder] → B^n,   then A^n = Y^n − B^n
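The SIC scheme above is easy to simulate: with user 1 uncoded, user 2's effective channel is exactly BEC(1/2), and once B^n is known, A^n = Y^n − B^n is recovered noiselessly. A sketch checking the erasure probability empirically (an uncoded illustration; a real scheme would protect B^n with an erasure code of rate below 1/2):

```python
import random
random.seed(1)

n = 100_000
A = [random.randint(0, 1) for _ in range(n)]   # user 1: uncoded bits, rate 1
B = [random.randint(0, 1) for _ in range(n)]   # user 2's channel inputs
Y = [a + b for a, b in zip(A, B)]              # noiseless binary adder channel

# User 2's effective channel: Y = 0 or 2 determines B; Y = 1 is an erasure
B_hat = [0 if y == 0 else 1 if y == 2 else None for y in Y]
erasure_rate = B_hat.count(None) / n
print(f"erasure rate: {erasure_rate:.3f}")     # ~0.5, i.e. BEC(1/2)

# SIC: wherever B is known, A = Y - B is recovered exactly
errors = sum(1 for y, b, a in zip(Y, B_hat, A) if b is not None and y - b != a)
print("SIC errors on unerased positions:", errors)  # 0
```

This is the collision "physics" at work: a collision slot (Y = 1) still carries useful soft information instead of being discarded outright as in ALOHA.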
Example: Binary Adder Channel (BAC)
[Figure: BAC capacity region: pentagon with R₁ ≤ 1, R₂ ≤ 1, R₁ + R₂ ≤ 3/2]

    Y = X₁ + X₂,   X_i ∈ {0, 1},   Y ∈ {0, 1, 2}

• Analyzing FBL achievability we can show (maximal sum rate):

    R*_sum(n, ε) ≥ 3/2 − √(1/(4n)) Q⁻¹(ε) + O(log n)/n.

• Open problem: prove R*_sum(n, ε) ≤ 3/2 + √(1/n) K_ε
• ... not even asking for K_ε < 0
• ... so far the best-known result (Ahlswede): R*_sum ≤ 3/2 + c √((log n)/n)
• The state is so bad that even for ε = 0 we only know (via Fano):

    R*_sum(n, ε = 0) ≤ 3/2

• Open problem: prove lim_{n→∞} R*_sum(n, ε = 0) < 3/2.
• Conjecture [Ajjanagadde-P.'15]: for all 0 < α < 1,

    max_{A^n ⊥⊥ B^n} H_α(A^n + B^n) = n H_α(1/4, 1/2, 1/4),

  where H_α(·) is the Rényi entropy.
• If true, this implies the open problem. How?
MAC: revisit weak-converse (genie)
P:  W₁ → X₁, W₂ → X₂,  (X₁, X₂) → P_{Y|X₁X₂} → Y → (Ŵ₁, Ŵ₂)
Q:  same, but with the channel replaced by an auxiliary Q_{Y|X₂}

    P[Ŵ₁,₂ = W₁,₂] = 1 − ε        Q[Ŵ₁,₂ = W₁,₂] = 1/M₁

... apply data processing of D(·‖·) ...
  ⇓
    d(1 − ε ‖ 1/M₁) ≤ D(P_{Y|X₁X₂} ‖ Q_{Y|X₂} | P_{X₁} P_{X₂})

Optimizing Q_{Y|X₂}:

    log M₁ ≤ ( I(X₁; Y | X₂) + h(ε) ) / (1 − ε)
MAC: revisit weak-converse (genie)
P:  W₁ → X₁, W₂ → X₂,  (X₁, X₂) → P_{Y|X₁X₂} → Y → (Ŵ₁, Ŵ₂)
Q:  same, but with the channel replaced by an auxiliary Q_Y

    P[Ŵ₁,₂ = W₁,₂] = 1 − ε        Q[Ŵ₁,₂ = W₁,₂] = 1/(M₁M₂)

... apply data processing of D(·‖·) ...
  ⇓
    d(1 − ε ‖ 1/(M₁M₂)) ≤ D(P_{Y|X₁X₂} ‖ Q_Y | P_{X₁} P_{X₂})

Optimizing Q_Y:

    log M₁M₂ ≤ ( I(X₁, X₂; Y) + h(ε) ) / (1 − ε)

Together with the previous bound: the full (pentagon) weak converse.
MAC: towards strong-converse
P:  W₁ → X₁, W₂ → X₂,  (X₁, X₂) → P_{Y|X₁X₂} → Y → (Ŵ₁, Ŵ₂)
Q:  same, but with the channel replaced by an auxiliary Q_Y

    P[Ŵ₁,₂ = W₁,₂] = 1 − ε        Q[Ŵ₁,₂ = W₁,₂] = 1/(M₁M₂)

... use the Rényi divergence D_λ(·‖·) ...
  ⇓
    D_λ(P_{X₁X₂Y} ‖ P_{X₁} P_{X₂} Q_Y) ≥ d_λ(1 − ε ‖ 1/(M₁M₂))

Selecting λ = 1 + 1/√n yields (for the BAC)

    log M₁M₂ ≤ sup_{A^n ⊥⊥ B^n} H_{α_n}(A^n + B^n) + K√n

with α_n = 1 − 1/√n.
Classical MAC: summary
• Trivially generalizes to the K-user MAC:

    Penta = { (R₁, ..., R_K) :  Σ_{i∈S} R_i ≤ I(X_S; Y | X_{S^c})  ∀ S ⊂ [K] }

• Classic IT: fix K, let n → ∞.
• Use the joint probability of error:

    P[Ŵ₁ = W₁, ..., Ŵ_K = W_K] ≥ 1 − ε.

• New FBL issue: for K = 100 the achievability requires 2¹⁰⁰ tests.
• What is new today?
  - Many-user scaling [D. Guo et al.]: K = µn, n → ∞
  - New probability of error [P.'17]: (1/K) Σ_i P[Ŵ_i ≠ W_i] ≤ ε
  - Same-codebook coding [P.'17]: X_i ∈ C for all i.
Gaussian MAC. Modulation
Let’s put on our engineering boots.
Yury Polyanskiy MAC tutorial 54
The classical model: K-user multiple-access channel
[Figure: Users 1 through K transmit to a single receiver; received output = User 1 + · · · + User K + Noise]

Y(t) = X1(t) + · · · + XK(t) + Z(t)
• Users send coded waveforms Xj(t) Tech note: synchronized block coding
• Additive Gaussian noise Z(t)
• Base station’s job: estimate Xj from the knowledge of Y (t)
Yury Polyanskiy MAC tutorial 55
How to avoid inter-user interference?
These are called orthogonal schemes.
Key problem: resources divided among active and inactive (!) users
(or need costly resource ack/grant protocol)
in IoT most are inactive ⇒ huge waste of bandwidth
Yury Polyanskiy MAC tutorial 56
Orthogonal and non-orthogonal multiple access (NOMA)
This “pie-slicing” philosophy comes from:
• Given: W Hz bandwidth and duration T sec.
• By XYZ Theorem: d.o.f. n = 2WT,
  XYZ ∈ { Kotelnikov, Nyquist, Shannon, Slepian, . . . }
• TDMA, FDMA, CDMA: just different bases in R^{2WT}.
  (Fine print: CDMA = Orthogonal CDMA here.)
• Is there value in having K > n? (non-orthogonal signalling)
• Is it even possible to have K > n or even K ≫ n?
• Silly: take n = 1 and let user j send a bit via {0, 2^j}.
• . . . cheating: user K's power is 2^{2K} larger than user 1's.
• Challenge: users only allowed to send ±1; can we have K ≫ n?
Yury Polyanskiy MAC tutorial 57
Achieving capacity of K-user BAC with zero-error
Y = Σ_{j=1}^{K} Xj,   Xj ∈ {±1}

• Known: Csum(K) = H(Bin(K, 1/2)) ≈ (1/2) log K.
• IOW, for sending 1 bit (each) the frame-length n ≈ 2K / log2 K ≪ K.

How can K > n users signal in n dimensions simultaneously?

• Lindström, Cantor-Mills, Khachatrian-Martirossian: even with zero-error!
First, recall a particularly nice orthogonal basis:
[Figure: Walsh-Hadamard matrix; each user is modulating his row]
• K.-M. noticed you can add more rows!
Yury Polyanskiy MAC tutorial 58
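A quick numerical sanity check of Csum(K) = H(Bin(K, 1/2)) (a sketch, not from the slides): the exact binomial entropy is compared against the refined asymptotic (1/2) log2(πeK/2), whose leading term is the (1/2) log K above.

```python
import math

def binom_entropy_bits(K):
    # exact entropy of Bin(K, 1/2) in bits, computed from the log-pmf
    logp = [math.lgamma(K + 1) - math.lgamma(i + 1) - math.lgamma(K - i + 1)
            - K * math.log(2) for i in range(K + 1)]
    return -sum(math.exp(lp) * lp for lp in logp) / math.log(2)

K = 1024
H = binom_entropy_bits(K)
approx = 0.5 * math.log2(math.pi * math.e * K / 2)
print(H, approx)  # the two agree to several decimal places
```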
Recursive construction (Cantor-Mills,Khachatrian-Martirossian)
How can K > n users signal in n dimensions simultaneously?
• Walsh-Hadamard basis: [Figure: H_m]
• K.-M. signals: [Figure: matrix A_m — Hadamard rows plus extra ±1 rows]
• Key property: x ↦ xA_m is injective on {±1}^{K_m},   K_m = (m/2)·2^m + 1
• Number of users at dimension n: K ≈ (1/2) n log2 n (optimal!)
• Idea: (±1)^{2^m} · H_m has many “holes”; add ±1-vectors there.
Yury Polyanskiy MAC tutorial 59
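The "add more rows" idea can be illustrated by brute force for m = 2 (n = 4): start from the four Walsh-Hadamard rows and greedily append ±1 rows as long as x ↦ xA stays injective on {±1}^K. (A toy sketch, not the actual K.-M. construction; it only demonstrates that more than n = 4 users fit losslessly.)

```python
import itertools

H2 = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]

def injective(rows):
    # check that x -> x @ rows is one-to-one on {+-1}^K by enumeration
    seen = set()
    for x in itertools.product((-1, 1), repeat=len(rows)):
        y = tuple(sum(xi * r[j] for xi, r in zip(x, rows)) for j in range(4))
        if y in seen:
            return False
        seen.add(y)
    return True

rows = [r[:] for r in H2]
for cand in itertools.product((-1, 1), repeat=4):
    if injective(rows + [list(cand)]):
        rows.append(list(cand))
print(len(rows))  # strictly more than 4 users in 4 dimensions
```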
[Figure: matrix A_m]

• Want to show: v is decodable from vA_m for any v ∈ {±1}^{K_m} with v_{2K_{m−1}+1} = 0.
• Equivalently: v ∈ {0, 1}^{K_m} (just use v ↦ (1+v)/2)
• Let v = [x y z] and

  [x y z] A_m = [g h]   ⇒   g − h = [x y z] · [ 0 ; 2A_{m−1} ; 2I_{2^{m−1}} ]   (blocks stacked)

• z1 = 0, so by adding (g−h)_1 to (g−h)_ℓ we get:

  (*)   2z_ℓ = (g−h)_1 + (g−h)_ℓ − 2y·v_ℓ,   ℓ = 2, . . . , 2^{m−1}

  where v_ℓ is the sum of the 1st and ℓ-th columns of A_{m−1}
• Key: v_ℓ's entries are in {0, 2}. Take mod 4 of (*) and decode the z_ℓ's!
• Subtracting the z_ℓ's we get the system:

  [x y] ( A_{m−1}  A_{m−1} ; A_{m−1}  −A_{m−1} ) = [g′ h′]   ⇒   x A_{m−1} = (g′ + h′)/2   ⇒   induct
Yury Polyanskiy MAC tutorial 60
Reflections
• When user inputs are constrained (to ±1), can have K � n and stillrecover inputs.
• Total information grows with K: H(X1 + · · · + XK) ∼ (1/2) log K.
  (This is similar to (1/2) log(1 + KP) in GMAC.)
• Lots of smart ideas in MAC codes.
• Information theory structures it all into:

  C = ⋃_{X1,...,XK,U} {(R1, . . . , RK) : R_S ≤ I(X_S; Y | X_{S^c}, U)}

• Similar to how all the smarts (Reed-Muller, BCH, LDPC, Polar, ...) are hidden behind

  C = max_X I(X; Y)
• We understand that “pie-slicing” point of view of radio-MAC iswrong. What is right?
Yury Polyanskiy MAC tutorial 61
2-user Gaussian MAC
Y = X1 + X2 + Z,   Z iid ∼ N(0, 1),   E[X1²] ≤ P1, E[X2²] ≤ P2

• Evaluating capacity region:

  R1 + R2 ≤ I(X1, X2; Y) ≤ (1/2) log(1 + P1 + P2)
  Ri ≤ I(Xi; Y | X_{3−i}) = I(Xi; Xi + Z) ≤ (1/2) log(1 + Pi)

[Figure: pentagon capacity region — corners at (1/2)log(1+P1), (1/2)log(1+P2), sum-rate face (1/2)log(1+P1+P2)]
Yury Polyanskiy MAC tutorial 62
2-GMAC rates for TDMA

Y = X1 + X2 + Z,   Z iid ∼ N(0, 1),   E[X1²] ≤ P1, E[X2²] ≤ P2

• Here is a TDMA:
I Partition block: n = λn + (1−λ)n
I User 1 sends in λn: R1 = (λ/2) log(1 + P1)
I User 2 sends in (1−λ)n: R2 = ((1−λ)/2) log(1 + P2)

[Figure: TDMA rate curve inside the pentagon]
• Note: low-complexity decoder – two users are decoded separately.
Yury Polyanskiy MAC tutorial 63
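A numerical sketch (example powers assumed for illustration) confirming that the time-sharing rates above never exceed the sum-rate bound:

```python
import math

def tdma_rates(lam, P1, P2):
    # user 1 transmits in a fraction lam of the block, user 2 in the rest
    return lam / 2 * math.log2(1 + P1), (1 - lam) / 2 * math.log2(1 + P2)

P1, P2 = 10.0, 5.0
Csum = 0.5 * math.log2(1 + P1 + P2)
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    R1, R2 = tdma_rates(lam, P1, P2)
    assert R1 + R2 <= Csum + 1e-12  # TDMA stays inside the pentagon
```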
2-GMAC rates for FDMA

Y = X1 + X2 + Z,   Z iid ∼ N(0, 1),   E[X1²] ≤ P1, E[X2²] ≤ P2

• Here is a FDMA:
I Use Fourier transform to change n=time to n=frequency.
I Partition block: n = λn + (1−λ)n
I User 1 sends in λn: R1 = (λ/2) log(1 + P1/λ)
I User 2 sends in (1−λ)n: R2 = ((1−λ)/2) log(1 + P2/(1−λ))

[Figure: FDMA rate curve touching the sum-rate face]

λ* = P1/(P1+P2) achieves optimal sum-rate
Yury Polyanskiy MAC tutorial 64
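The claim that λ* = P1/(P1+P2) meets the sum-rate bound can be checked directly (a sketch with illustrative powers):

```python
import math

def fdma_sumrate(lam, P1, P2):
    # each user's power density scales as P / bandwidth-fraction
    return (lam / 2 * math.log2(1 + P1 / lam)
            + (1 - lam) / 2 * math.log2(1 + P2 / (1 - lam)))

P1, P2 = 10.0, 5.0
Csum = 0.5 * math.log2(1 + P1 + P2)
lam_star = P1 / (P1 + P2)
print(fdma_sumrate(lam_star, P1, P2), Csum)  # equal: FDMA hits the sum-rate face
print(fdma_sumrate(0.5, P1, P2))             # any other split falls short
```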
2-GMAC rates for TIN

Y = X1 + X2 + Z,   Z iid ∼ N(0, 1),   E[X1²] ≤ P1, E[X2²] ≤ P2

• Treat-interference-as-noise (TIN):
I Each user treats the other as noise (single-user decoders)
I Random coding ensures noise is Gaussian.
I Rates: R1 = (1/2) log(1 + P1/(1+P2)),   R2 = (1/2) log(1 + P2/(1+P1))

[Figure: TIN point inside the pentagon]

• TIN point can be inside/outside TDMA.
Yury Polyanskiy MAC tutorial 65
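A small sketch of the TIN rates (powers assumed), showing the interference-limited behavior: at low power the TIN sum-rate is close to capacity, while at high power it saturates near 1 bit since each SINR tends to 1.

```python
import math

def tin_rates(P1, P2):
    # each decoder treats the other user's signal as extra Gaussian noise
    return (0.5 * math.log2(1 + P1 / (1 + P2)),
            0.5 * math.log2(1 + P2 / (1 + P1)))

for P in (0.1, 1.0, 100.0):
    R1, R2 = tin_rates(P, P)
    Csum = 0.5 * math.log2(1 + 2 * P)
    print(P, R1 + R2, Csum)
```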
TIN + SIC

Y = X1 + X2 + Z,   Z iid ∼ N(0, 1),   E[X1²] ≤ P1, E[X2²] ≤ P2

• Consider a corner point:

  R1 = (1/2) log(1 + P1/(1+P2)),   R2 = (1/2) log(1 + P2) .

• User 1 can be decoded by TIN. But then can subtract it out!

[Figure: successive interference cancellation — Dec1 decodes X1 treating X2 as noise; X1 is subtracted from Y; Dec2 then decodes X2]

• So far: achieved three optimal points via SU-decoding. Any more?
Yury Polyanskiy MAC tutorial 66
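The corner point above telescopes algebraically: (1 + P1/(1+P2))(1+P2) = 1 + P1 + P2, so the two SIC rates sum to the sum capacity. A quick check over arbitrary powers:

```python
import math

def sic_corner(P1, P2):
    R1 = 0.5 * math.log2(1 + P1 / (1 + P2))  # user 1 decoded by TIN
    R2 = 0.5 * math.log2(1 + P2)             # user 2 decoded after subtracting X1
    return R1, R2

for P1, P2 in ((1.0, 1.0), (10.0, 3.0), (0.5, 7.0)):
    R1, R2 = sic_corner(P1, P2)
    assert abs(R1 + R2 - 0.5 * math.log2(1 + P1 + P2)) < 1e-12
```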
Rate-splitting

Y = X1 + X2 + Z,   Z iid ∼ N(0, 1),   E[X1²] ≤ P1, E[X2²] ≤ P2

[Figure: sum-rate face swept by rate-splitting]

• Split user 1 into two virtual users 1A and 1B:

  R1 = R1A + R1B,   P1 = P1A + P1B

• A funny order of decoding:
I Decode X1A via TIN: R1A = (1/2) log(1 + P1A/(1+P1B+P2))
I Subtract X1A, decode X2: R2 = (1/2) log(1 + P2/(1+P1B))
I Subtract X2, decode X1B: R1B = (1/2) log(1 + P1B)

• Simple check:

  R1A + R1B + R2 = (1/2) log(1 + P1 + P2)   sum-rate optimal

by varying P1A + P1B = P1 can achieve any point!!
Yury Polyanskiy MAC tutorial 67
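The "simple check" telescopes for every power split, which is what lets rate-splitting sweep the whole sum-rate face. A numerical sketch (example powers assumed):

```python
import math

def rate_split(P1A, P1B, P2):
    R1A = 0.5 * math.log2(1 + P1A / (1 + P1B + P2))  # decode 1A via TIN
    R2 = 0.5 * math.log2(1 + P2 / (1 + P1B))         # subtract 1A, decode user 2
    R1B = 0.5 * math.log2(1 + P1B)                   # subtract user 2, decode 1B
    return R1A, R1B, R2

P1, P2 = 8.0, 5.0
for t in (0.0, 0.3, 0.7, 1.0):
    R1A, R1B, R2 = rate_split(t * P1, (1 - t) * P1, P2)
    assert abs(R1A + R1B + R2 - 0.5 * math.log2(1 + P1 + P2)) < 1e-12
```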
K-user GMAC
[Figure: K transmitters into one receiver]

Y(t) = X1(t) + · · · + XK(t) + Z(t)

• Assume equal-power setting Pi = P. Capacity region (sum-rate):

  Σ_{i=1}^{K} Ri ≤ (1/2) log(1 + KP)
Yury Polyanskiy MAC tutorial 68
K-user GMAC
[Figure: K transmitters into one receiver]

• Single-user decoders achieve:
I FDMA optimal at symmetric point: Ri = (1/(2K)) log(1 + KP)
I TIN+SIC achieves all vertices.
I Rate-Splitting all points of optimal sum-rate.
• Is that it? Let us see...
• Is that it? Let us see...Yury Polyanskiy MAC tutorial 68
K-user GMAC: Reflections
• So total capacity:

  Csum = (1/2) log2(1 + KP) bit/rdof

growing to ∞ as K → ∞.
• But at the same time, per-user rate:

  Csym = (1/(2K)) log2(1 + KP) → 0 .

• The crucial performance metric: energy-per-bit

  Eb/N0 ≜ (total energy spent)/(2 × total # bits) = nKP/(2n Csum)

• As K → ∞:

  Eb/N0 = KP/log(1 + KP) → ∞ !!!

• Capacity ↗, but each user works harder and moves fewer bits/sec!
• Correct scaling: Ptot = KP should be fixed!
Yury Polyanskiy MAC tutorial 69
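The two scalings can be contrasted numerically (a sketch): with fixed per-user power P the energy-per-bit KP/log2(1+KP) diverges, while with fixed Ptot = KP it does not depend on K at all.

```python
import math

def eb_no(K, P):
    # energy-per-bit when the full sum capacity is used
    return K * P / math.log2(1 + K * P)

print([round(eb_no(K, 1.0), 2) for K in (1, 10, 100, 1000)])      # grows with K
print([round(eb_no(K, 1.0 / K), 2) for K in (1, 10, 100, 1000)])  # constant
```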
Spectral efficiency vs. EbN0
• Studying this tradeoff is the favorite pastime of ComSoc
• Sp.eff. ρ ≜ (total # of data bits)/(total real d.o.f.)
• We have:

  ρ = (1/2) log(1 + KP),   Eb/N0 = KP/log(1 + KP)

• regardless of K: (and any sum-rate-optimal arch)

  Eb/N0 = (2^{2ρ} − 1)/(2ρ) ≥ −1.59 dB

• Compare to TIN: ρ = (K/2) log2(1 + P/(1+(K−1)P)) → (K → ∞) (1/(2 ln 2)) · Ptot/(1+Ptot)

  ρ = (1/(2 ln 2)) · Ptot/(1+Ptot),   Eb/N0 = (1 + Ptot) ln 2

• IMPORTANT: ρ ≤ 1/(2 ln 2) = 0.72 bit/rdof
• IMPORTANT: Essentially optimal for low sp.eff.
Yury Polyanskiy MAC tutorial 70
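A sketch evaluating both tradeoffs: the optimal curve approaches the −1.59 dB Shannon limit as ρ → 0, and the TIN spectral efficiency saturates at 1/(2 ln 2) ≈ 0.72 bit/rdof as Ptot grows.

```python
import math

def ebno_opt_db(rho):
    # optimal Eb/N0 at spectral efficiency rho, in dB
    return 10 * math.log10((2 ** (2 * rho) - 1) / (2 * rho))

def tin_point(Ptot):
    rho = Ptot / (1 + Ptot) / (2 * math.log(2))
    ebno_db = 10 * math.log10((1 + Ptot) * math.log(2))
    return rho, ebno_db

print(ebno_opt_db(1e-6))  # ~ -1.59 dB
print(tin_point(1e6)[0])  # ~ 0.72, the TIN ceiling
```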
[Figure: Spectral efficiency vs Eb/N0 (classic Shannon IT) — curves: Optimal, TIN, CDMA-MF with β = 0.5, 1, 3]
Modulation
• Given that TIN is not bad for low sp.eff., let us try to achieve it.
• Problem: Per-user rate = ρ/K and is very small for large K. Aside:
I For IT Soc: Channel with C = 0.5 and channel with C = 0.001 are not fundamentally different.
I For ComSoc: First channel is OK (turbo/LDPC/polar), second is a nightmare.
I Why? First, SNR needs to be brought up to a reasonable level.
I This is the idea of modulation.
I Another issue: how do you do TIN practically? A code with ±1 entries will create a very non-Gaussian interference!
• Solution: each user modulates some N-signature si ∈ R^N

[Figure: length-n block split into N-symbol chunks]

• Think of N-blocks as new super-symbols. Effective channel:

  Y^N = s1 B1 + s2 B2 + · · · + sK BK + Z^N,   ‖si‖ = 1

I Set β = K/N
I new power-constraint: E[Bi²] ≤ NP = Ptot/β.
I new rate: ρN/K = ρ/β in bits / one B-symbol.
I with proper choice should have ρ/β ∼ 1 as ComSoc likes.
Yury Polyanskiy MAC tutorial 71
[Figure: length-n block split into N-symbol chunks]

• N-blocks are new super-symbols. Effective channel:

  Y^N = s1 B1 + s2 B2 + · · · + sK BK + Z^N,   ‖si‖ = 1

I Set β = K/N
I new power-constraint: E[Bi²] ≤ NP = Ptot/β.
• Side observation:
I If si's are chosen orthogonally and K = N, this is FDMA (hence optimal).
I But incurs FBL loss – important when K ∼ n. Ignore for now.
I So why not do so? Many reasons:
  • K may vary, but N should be constant.
  • Requires central distribution of signatures among ACTIVE users.
  • Asynchrony kills orthogonality
I Early Qualcomm: random-like si's resolve all issues, and are good enough for TIN!
• Idea 1: Decode via matched-filter + SU decoders:

  B̂i = ⟨si, Y^N⟩ = Bi + Z̃i,   Var[Z̃i] = 1 + NP Σ_{j≠i} |⟨si, sj⟩|²

• Idea 2: Select si randomly. (attractive sys. arch.)
• When si's are random and N large:

  |⟨si, sj⟩| ≈ 1/√N   w.h.p.

• So SU-decoder sees effective SNR = NP/(1 + (K−1)P) = (Ptot/(1+Ptot)) · (1/β)
Yury Polyanskiy MAC tutorial 72
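The concentration |⟨si, sj⟩| ≈ 1/√N is easy to verify by simulation (a sketch with assumed parameters N = 256, K = 512):

```python
import math, random

random.seed(1)
N, K = 256, 512  # beta = K / N = 2

def rand_signature(n):
    # Gaussian vector normalized to unit length
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

sigs = [rand_signature(N) for _ in range(K)]
# mean squared cross-correlation with signature 0 concentrates around 1/N
msq = sum(sum(a * b for a, b in zip(sigs[0], s)) ** 2 for s in sigs[1:]) / (K - 1)
print(msq, 1.0 / N)  # close, so interference power per user is ~ NP/N = P
```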
[Figure: length-n block split into N-symbol chunks]

• N-blocks are new super-symbols. Effective channel:

  Y^N = s1 B1 + s2 B2 + · · · + sK BK + Z^N,   ‖si‖ = 1

I Set β = K/N
I new power-constraint: E[Bi²] ≤ NP = Ptot/β.
I random (non-orthogonal) signatures
I matched-filter + SU-decoder
• End result:

  ρ_CDMA = (β/2) log2(1 + (Ptot/(1+Ptot)) · (1/β)),   Eb/N0 = Ptot/(2 ρ_CDMA)

I As β → ∞ we approach TIN.
I So classical CDMA folks (Viterbi...) were only trying to achieve TIN.
Yury Polyanskiy MAC tutorial 73
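The β → ∞ limit can be checked numerically (illustrative Python; Ptot = 10 is an arbitrary choice). The limit of ρ_CDMA is (log2 e/2) · Ptot/(1 + Ptot), whose supremum over Ptot is log2(e)/2 ≈ 0.72 bit/rdof, the TIN bottleneck quoted later:

```python
import numpy as np

def rho_cdma(beta, Ptot):
    """Spectral efficiency of random CDMA with matched filter + SU decoding."""
    return (beta / 2) * np.log2(1 + (Ptot / (1 + Ptot)) / beta)

Ptot = 10.0
tin_limit = (np.log2(np.e) / 2) * Ptot / (1 + Ptot)   # beta -> infinity limit
for beta in (0.5, 1, 3, 30, 300):
    print(beta, rho_cdma(beta, Ptot))
print(tin_limit)     # sup over Ptot of this limit is log2(e)/2 ~ 0.72 bit/rdof
```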
[Figure: Spectral efficiency vs Eb/N0 (classic Shannon IT); spectral efficiency (bit/rdof) vs Eb/N0 (dB); curves: Optimal, TIN, CDMA-MF with β = 0.5, 1, 3]
CDMA: going beyond TIN

• Set β = K/N
• new power-constraint: E[Bi²] ≤ NP = Ptot/β.
• random (non-orthogonal) signatures
• matched-filter + SU-decoder

    ρ_CDMA = (β/2) log2(1 + (Ptot/(1 + Ptot)) · (1/β)),   Eb/N0 = Ptot/(2 ρ_CDMA)

• So far we considered the matched-filter arch.:

    B̂1 = ⟨s1, Y^N⟩, · · · , B̂K = ⟨sK, Y^N⟩

• Can we do better? Yes! Via multi-user detection (MUD). In one of two ways:
▶ Signal-processing: Estimate B^K via MMSE or decorrelator.
  Note: does not leverage knowledge of the distribution of Bi.
▶ Coding: Use joint decoding of B^K also leveraging knowledge that (e.g.) Bi = ±1.
Yury Polyanskiy MAC tutorial 74
CDMA+MUD vs OFDM

• Set β = K/N
• new power-constraint: E[Bi²] ≤ NP = Ptot/β.
• random (non-orthogonal) signatures
• matched-filter + SU-decoder

    ρ_CDMA = (β/2) log2(1 + (Ptot/(1 + Ptot)) · (1/β)),   Eb/N0 = Ptot/(2 ρ_CDMA)

• Multi-user detectors (MUD) improve the performance of random CDMA.
• E.g. the MMSE detector yields (Tse-Hanly/Verdú-Shamai formula)

    ρ_MMSE = (β/2) log2(1 + P1 − F/4),   P1 = Ptot/β

  where F = (√(P1(1 + √β)² + 1) − √(P1(1 − √β)² + 1))²

• Allows one to beat TIN's ρ ≤ 0.72 bit/rdof bottleneck.
• Still, industry converged to OFDM: spectrum is too precious.
• IoT: centralized orthogonalization impossible! Comeback of MUD?
Yury Polyanskiy MAC tutorial 75
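The Tse-Hanly/Verdú-Shamai formula is easy to evaluate and compare against the matched-filter baseline (illustrative Python; Ptot = 8 is an arbitrary operating point):

```python
import numpy as np

def rho_mmse(beta, Ptot):
    """Tse-Hanly / Verdu-Shamai spectral efficiency: random CDMA + MMSE detector."""
    P1 = Ptot / beta
    F = (np.sqrt(P1 * (1 + np.sqrt(beta)) ** 2 + 1)
         - np.sqrt(P1 * (1 - np.sqrt(beta)) ** 2 + 1)) ** 2
    return (beta / 2) * np.log2(1 + P1 - F / 4)

def rho_mf(beta, Ptot):
    """Matched-filter (TIN) baseline from the previous slide."""
    return (beta / 2) * np.log2(1 + (Ptot / (1 + Ptot)) / beta)

for beta in (0.5, 1.0, 3.0):
    print(beta, rho_mf(beta, 8.0), rho_mmse(beta, 8.0))
```

At every load β the MMSE curve dominates the matched filter, and already at β = 1 it exceeds the 0.72 bit/rdof TIN ceiling.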
[Figure: Spectral efficiency vs Eb/N0; spectral efficiency (bit/rdof) vs Eb/N0 (dB); curves: Optimal, TIN, CDMA-MMSE with β = 0.5, 1, 3, and CDMA-MMSE with the best β]
New problems: many users with short packets
Yury Polyanskiy MAC tutorial 76
The classical model: K-user multiple-access channel

[Diagram: K transmitters (User 1 Tx, ..., User K Tx) into one receiver; received output = User 1 + · · · + User K + Noise]

    Y(t) = X1(t) + · · · + XK(t) + Z(t)

• Before: Fix K, let n → ∞. Few users. Large payloads.
• Now: Huge K. Small payload.
• Random-access: User activity – random, uncoordinated.
Yury Polyanskiy MAC tutorial 77
On number of sensors (user density)

• Key metric: µ in users/rdof

    µ = (# of active users per frame) / (size of frame)

• Ktot sensors sending with period Tper (sec) in band B (Hz):

    µ = Ktot / (2 B Tper)

• Futuristic example:
▶ City of 10^6.
▶ Each house has 10^2 devices.
▶ Each dev sends every 10 min, Tper = 600 s.
▶ sub-GHz bandwidth is scarce: ISM B = 20 MHz.
▶ µ ≈ 4 · 10^-3.
• Another point of view:
▶ Traditional comm: focus on sp.eff. ρ vs Eb/N0. Why?
▶ Is ρB/K = per-user speed the right objective?
▶ Or is it ρB/speed = number of happy users?
Yury Polyanskiy MAC tutorial 78
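The futuristic example above is a one-line computation (illustrative Python, same numbers as the slide):

```python
# Back-of-envelope user density mu = Ktot / (2 * B * Tper), in users per rdof
Ktot = 10**6 * 10**2        # 1e6 houses x 1e2 devices each
Tper = 600.0                # each device sends every 10 minutes
B = 20e6                    # 20 MHz of scarce sub-GHz ISM spectrum
mu = Ktot / (2 * B * Tper)
print(mu)                   # ~ 4e-3 users/rdof, as on the slide
```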
New twists compared to classic MAC

Problem 1: large K → ∞, fixed payload log2 M.
    Relevant asymptotics: K, n → ∞ with K/n = µ.
Problem 2: “user-centric” probability of error

    Pe ≜ (1/K) Σ_j P[Xj ≠ X̂j]

Problem 3: “random-access”
    indistinguishable users (same codebook), non-asymptotics.
Yury Polyanskiy MAC tutorial 79
Fundamental limits: for K-user MAC with K ∼ n

Recap: MAC setting and performance metrics
• Perfectly synchronized K-user Gaussian MAC with blocklength n
• Each user transmits log2 M ≈ 10^2 bits.
• Figures of merit: energy-per-bit and user density

    Eb/N0 ≜ E[‖X^n‖²] / (2 log2 M),   µ ≜ K/n

Problem 1: “massive” number of users
• Number of users K = µn scales linearly with blocklength!
• Q: Why scale linearly? A: # of devices waking up ∝ time.
• Q: Ok, but what µ should we look at? A: µ ∼ 10^-3. Here is why:
▶ City of 10^6.
▶ Each house has 10^2 devices.
▶ Each dev sends 1-10 times/hour.
▶ sub-GHz bandwidth is scarce, unlikely to ever get > 20 MHz.
▶ ⇒ K/n ≈ 10^-3 . . . 10^-2. This relation is unlikely to change soon.
Yury Polyanskiy MAC tutorial 80
Fundamental limits: for K-user MAC with K ∼ n

Recap: MAC setting and performance metrics
• Perfectly synchronized K-user Gaussian MAC with blocklength n
• Each user transmits log2 M bits.
• Figures of merit: energy-per-bit and user density

    Eb/N0 ≜ E[‖X^n‖²] / (2 log2 M),   µ ≜ K/n

Problem 1: “massive” number of users
• Number of users K = µn scales linearly with blocklength!
• [Chen-Chen-Guo’17]: Fix per-user power to P (i.e. codewords ‖c‖₂² ≤ nP); then

    log M*_user(K = µn, n, P) ≈ (1/(2µ)) log(1 + µnP)

• Note: this corresponds to Eb/N0 → ∞.
• Our work: What about finite Eb/N0?
Yury Polyanskiy MAC tutorial 81
New twists compared to classic MAC

Problem 1: large K → ∞, fixed payload log2 M.
    Relevant asymptotics: K, n → ∞ with K/n = µ.
Problem 2: “user-centric” probability of error

    Pe ≜ (1/K) Σ_j P[Xj ≠ X̂j]

Problem 3: “random-access”
    indistinguishable users (same codebook), non-asymptotics.
Yury Polyanskiy MAC tutorial 82
Fundamental limits: for K-user MAC with K ∼ n

Recap: MAC setting and performance metrics
• Perfectly synchronized K-user Gaussian MAC with blocklength n
• Each user transmits log2 M bits.
• Figures of merit: energy-per-bit and user density

    Eb/N0 ≜ E[‖X^n‖²] / (2 log2 M),   µ ≜ K/n

• Regime: K = µn, n → ∞.
Problem 2: “user-centric” prob. of error
• For finite Eb/N0 we have (Why? See next...)

    P[Ŵ1 = W1, . . . , ŴK = WK] → 0 as n → ∞

• ⇒ NEED to switch to per-user Pe, PUPE:

    Pe = (1/K) Σ_{i=1}^K P[Wi ≠ Ŵi]

Yury Polyanskiy MAC tutorial 83
Eb/N0 → ∞ for classical probability of error

Theorem
Suppose K users send one bit each with finite energy E over the GMAC (with arbitrary n): Y^n = Σ_{i=1}^K Xi + Z^n. Then we have

    P[X1 = X̂1, . . . , XK = X̂K] ≤ (E (log e)/2 + log 2) / log K.

And, thus, the classical probability of error → 1 as K → ∞.

Proof:
• WLOG can assume: Y = Σ ci Wi + Z, where ci ∈ R^n and Wi ∼ Ber(1/2).
• Genie: Reveal the vector of Wi's to within Hamming distance 1.
• New problem: See Y = cU + Z, U ∼ [K]. Goal: find U.
• Fano + capacity calculation:

    P[Û = U] log K − log 2 ≤ I(cU; Y) ≤ (n/2) log(1 + E/n) ≤ (log e/2) E

Yury Polyanskiy MAC tutorial 84
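Plugging numbers into the bound shows how slowly but surely the joint success probability dies (illustrative Python; logs taken in bits, E = 4 is an arbitrary per-user energy):

```python
import numpy as np

def jpe_upper_bound(E, K):
    """Fano/genie bound in bits: P[all K users correct] <= (E*log2(e)/2 + 1)/log2(K)."""
    return min(1.0, (E * np.log2(np.e) / 2 + 1) / np.log2(K))

for K in (10**2, 10**4, 10**8):
    print(K, jpe_upper_bound(4.0, K))   # vanishes (slowly) as K grows
```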
Eb/N0 → ∞ for classical probability of error

Theorem (AWGN)
Suppose K users send one bit each with finite energy E over the GMAC (with arbitrary n): Y^n = Σ_{i=1}^K Xi + Z^n. Then we have

    P[X1 = X̂1, . . . , XK = X̂K] ≤ (E (log e)/2 + log 2) / log K.

Same proof:

Theorem (BSC)
Let G be a K × n generating matrix with ≤ E ones per row. Then over BSC(δ) and all n:

    1 − P[block error] ≤ (d(δ̄‖δ) E + log 2) / log K

Puzzle: the Genie + Fano method fails for the BEC! (A proof by induction works.)
Yury Polyanskiy MAC tutorial 85
K-user GMAC under PUPE: surprise

• Per-user probability of error:

    Pe = (1/K) Σ_{i=1}^K P[Wi ≠ Ŵi].

• Let's forget about K = µn and consider ...
• Classical regime: K fixed, power P fixed, n → ∞. Symmetric capacity

    Csym(K) = (1/(2K)) log(1 + KP).

• But no strong converse (!)

    Csym,ε(K) ≥ Csym(K − 1)   ∀ε ≳ (1 + loge K)/K

• Lesson: When PUPE is above (log K)/K, we are far from the usual GMAC+JPE.
Yury Polyanskiy MAC tutorial 86
K-user GMAC under PUPE: no strong converse

• Let Csym,ε(K) be the max achievable symmetric rate (K fixed, n → ∞) under PUPE

    (1/K) Σ_{i=1}^K P[Wi ≠ Ŵi] ≤ ε.

Theorem (P.-Telatar’16)
We have:

    Csym,ε(K) = (1/(2K)) log(1 + KP),               ε < 1/K
    Csym,ε(K) ≥ (1/(2(K−1))) log(1 + (K−1)P),       ε ≳ (1 + loge K)/K

• Note that the sequence (1/(2K)) log(1 + KP) is monotonically decreasing.
• First part: by the union bound, PUPE ≤ ε implies JPE ≤ Kε + strong converse for the GMAC.
• Second part: Choose codebooks for the symmetric-rate point of the (K−1)-GMAC.
• Each user sends 0 w.p. ε. Then w.p. 1 − (1 − ε)^K only (K − 1) are active.
Yury Polyanskiy MAC tutorial 87
New twists compared to classic MAC

Problem 1: large K → ∞, fixed payload log2 M.
    Relevant asymptotics: K, n → ∞ with K/n = µ.
Problem 2: “user-centric” probability of error

    Pe ≜ (1/K) Σ_j P[Xj ≠ X̂j]

Problem 3: “random-access”
    indistinguishable users (same codebook), non-asymptotics.
Yury Polyanskiy MAC tutorial 88
Fundamental limits: for K-user MAC with K ∼ n

Recap: MAC setting and performance metrics
• Perfectly synchronized K-user Gaussian MAC with blocklength n
• Each user transmits log2 M bits.
• Figures of merit: energy-per-bit and user density

    Eb/N0 ≜ E[‖X^n‖²] / (2 log2 M),   µ ≜ K/n

• Regime: K = µn, n → ∞.
• PUPE definition: Pe ≜ (1/K) Σ_{j=1}^K P[Xj ≠ X̂j].

Next: new results
• Converse bound (via reduction to known problems)
• Achievability bound (via Gaussian process theory)
Yury Polyanskiy MAC tutorial 89
Theorem
Communication with (µ, M, ε) is asymptotically (n → ∞) feasible only if both of these hold:

    (1 − ε) µ log2 M ≤ (1/2) log2(1 + Ptot) + µ h(ε)
    1/M ≥ Q(√(Ptot/µ) + Q^{-1}(1 − ε)),

where Ptot = 2µ log2 M · Eb/N0 is the total received power.

• First bound: A working code recovers W ∈ [M]^K with Hamming distortion ≤ ε. Comparing the sum-capacity with the rate-distortion function we get the bound.
• Second bound: To get small Eb/N0 one necessarily needs to code over large payloads (i.e. log2 M ≫ 1) – this is [PPV’11].
• Namely, we use the genie argument. At least one of the K users should have Pe ≤ ε.
• Even if that user communicated alone over an n = ∞ AWGN channel, he'd need a large total energy-per-bit if M is small.

[Figure from [P.-Poor-Verdú’11]: Eb/N0 (dB) vs information bits k (10^0 to 10^6); curves: Achievability (non-feedback), Converse (non-feedback), Full feedback (optimal)]
Yury Polyanskiy MAC tutorial 90
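Both necessary conditions are elementary to evaluate. A sketch (illustrative Python; I read Ptot in the first bound as the total received power, consistent with the definition below the theorem, and the example parameters are arbitrary):

```python
from math import log2, sqrt
from statistics import NormalDist

Phi, Phi_inv = NormalDist().cdf, NormalDist().inv_cdf
Q = lambda x: 1 - Phi(x)
Qinv = lambda p: Phi_inv(1 - p)

def h2(e):
    """Binary entropy in bits."""
    return 0.0 if e in (0.0, 1.0) else -e * log2(e) - (1 - e) * log2(1 - e)

def converse_feasible(mu, M, eps, EbN0):
    """True iff the pair of necessary conditions does NOT rule (mu, M, eps) out."""
    Ptot = 2 * mu * log2(M) * EbN0
    c1 = (1 - eps) * mu * log2(M) <= 0.5 * log2(1 + Ptot) + mu * h2(eps)
    c2 = 1 / M >= Q(sqrt(Ptot / mu) + Qinv(1 - eps))
    return c1 and c2

# mu = 1e-3 users/rdof, 100-bit payloads, PUPE 1e-3:
print(converse_feasible(1e-3, 2**100, 1e-3, 10.0))   # enough energy: not ruled out
print(converse_feasible(1e-3, 2**100, 1e-3, 0.01))   # too little energy: ruled out
```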
Theorem (Thrampoulidis-Zadik-P.’18)
For each β > 0 there exist codes with Eb/N0 = β²/(2 log2 M) and PUPE ε provided that

    θµ log M + µ h(θ) < (1/2) log(1 + β²θµ) + (log e/2) (ψ(β, θ, µ)/(1 + β²θµ) − 1)

for all θ ∈ [ε, 1], where

    ψ(β, θ, µ) = √(1 + β²θµ) − (βµ/√(2π)) e^{−(Q^{-1}(θ))²/2}

Proof outline:
• Use random Gaussian codebooks
• Use the maximum-likelihood decoder (not optimal!): min ‖Y − Σi ci‖²
• Use the information-density thresholding trick
• Use Gaussian process theory (Gordon's lemma) to evaluate the bound
Yury Polyanskiy MAC tutorial 91
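The condition can be checked on a grid of θ. The sketch below (illustrative Python) uses my reading of the theorem as transcribed here — in particular the placement of the radical in ψ and the βµ/√(2π) factor are reconstructions, so treat the numbers as qualitative only:

```python
from math import log, sqrt, pi, exp
from statistics import NormalDist

Qinv = lambda p: NormalDist().inv_cdf(1 - p)

def h(t):
    """Binary entropy in nats."""
    return 0.0 if t in (0.0, 1.0) else -t * log(t) - (1 - t) * log(1 - t)

def achievable(beta, mu, M, eps, grid=500):
    """Check the sufficient condition for theta on a grid over [eps, 1)."""
    for i in range(grid):
        th = eps + (1 - eps) * i / grid
        x = beta**2 * th * mu
        psi = sqrt(1 + x) - beta * mu / sqrt(2 * pi) * exp(-0.5 * Qinv(th) ** 2)
        lhs = th * mu * log(M) + mu * h(th)
        rhs = 0.5 * log(1 + x) + 0.5 * (psi / (1 + x) - 1)
        if lhs >= rhs:
            return False
    return True

# 100-bit payloads, mu = 1e-3; beta^2 = 2 * log2(M) * Eb/N0
print(achievable(sqrt(800), 1e-3, 2**100, 0.05))   # Eb/N0 = 4 (~6 dB): holds
print(achievable(sqrt(100), 1e-3, 2**100, 0.05))   # Eb/N0 = 0.5: fails
```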
• Generate codewords c_m^(j) iid ∼ N(0, P In), j ∈ [K], m ∈ [M], where P = β²/n
• Use the ML decoder (suboptimal!):

    Ŵ = argmin_{w1,...,wK} ‖Y − (c_{w1}^(1) + · · · + c_{wK}^(K))‖₂².

• Define

    F(S0) = {∃(mj)_{j∈S0}: ‖Y − (c(S0^c) + Σ_{j∈S0} c_{mj}^(j))‖² ≤ ‖Y − c([K])‖², mj ≠ Wj ∀j}

• We have:

    P[dH(W, Ŵ) = t] ≤ P[∪_{S0: |S0|=t} F(S0)]

• Main goal: Show P[∪_{S0: |S0|=t} F(S0)] → 0 for all t = θK, θ ∈ [ε, 1].
• Intermediate step: Bound P[F(S0) | c_[K], Y, W_[K]]
Yury Polyanskiy MAC tutorial 92
• Define the information density (with P_t = tP)

    i_t(u; y|v) = (n/2) log(1 + P_t) + (log e/2) (‖y − v‖₂²/(1 + P_t) − ‖y − u − v‖₂²),

• Define c(T) = Σ_{j∈T} c_{Wj}^(j), and c′ = Σ_{j∈S0} c_{mj}^(j) for some mj ≠ Wj. Then:

    {‖Y − (c(S0^c) + c′)‖² ≤ ‖Y − c([K])‖²} = {i_t(c′; Y|c(S0^c)) ≥ i_t(c(S0); Y|c(S0^c))}.

• Let A1, . . . , AK iid ∼ N(0, P In) and B = Σi Ai + Z. For any S0 ∈ ([K] choose t):

    log (dP_{A_{S0}|A_{S0^c},B} / dP_{A_{S0}}) = i_t(u; y|v),

  where u = Σ_{j∈S0} Aj, v = Σ_{j∈S0^c} Aj and y = B.
• And thus we get:

    P[i_t(c′; Y|c(S0^c)) > γ | Y, c_[K], W_[K]] ≤ e^{−γ}

Yury Polyanskiy MAC tutorial 93
• We have shown (via the union bound):

    P[F(S0) | c_[K], Y, W_[K]] ≤ M^t exp{−i_t(c(S0); Y|c(S0^c))}.

• So we now use a smart union bound:

    P[∪_{S0} F(S0)] ≤ M^t (K choose t) exp{−γ} + P[I_t ≤ γ],

  where I_t = min_{S0} i_t(c(S0); Y|c(S0^c)).
• Left to study the extrema for a Gaussian matrix G ∈ R^{n×µn} with iid N(0,1) entries:

    Φ ≜ (1/n) min { ‖(β/√n) G x + Z‖² : x ∈ {0,1}^{µn}, ‖x‖₀ = θµn }

• After dualizing the norm, we get a problem:

    P[min_u max_v A_{u,v} ≤ c] ≤ P[min_u max_v B_{u,v} ≤ c]

• Gaussian comparison method: Bound the extrema of A via the extrema of a simpler process B.
Yury Polyanskiy MAC tutorial 94
Slepian’s lemma (1962) and Gordon’s lemma (1985)

Theorem (Slepian)
Let {Av}_{v∈V} and {Bv}_{v∈V} be zero-mean Gaussian processes s.t. Cov(A) ≤ Cov(B) and Var[Av] = Var[Bv] for all v. Then

    max_v Av ⪰ max_v Bv   (stoch. domination)

Theorem (Gordon)
Let {A_{u,v}} and {B_{u,v}} be zero-mean Gaussian processes s.t.
1. Var[A_{u,v}] = Var[B_{u,v}]
2. E[A_{u,v} A_{u,v′}] ≤ E[B_{u,v} B_{u,v′}] for all u, v, v′
3. E[A_{u,v} A_{u′,v′}] ≥ E[B_{u,v} B_{u′,v′}] for all u ≠ u′, v, v′.
Then:

    min_u max_v A_{u,v} ⪰ min_u max_v B_{u,v}

Remark: 2) implies A*_u = max_v A_{u,v} ⪰ B*_u = max_v B_{u,v}.
3) implies {A*_u} is “more-correlated” than {B*_u}.
Yury Polyanskiy MAC tutorial 95
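Slepian's conclusion is easy to see in a toy Monte Carlo (illustrative Python, not from the slides): take A with independent coordinates and B perfectly correlated, so Cov(A) ≤ Cov(B) entrywise with equal variances.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 50, 20000

# A: n iid N(0,1) coordinates          -> Cov(A) = I
# B: one N(0,1) repeated n times       -> Cov(B) = all-ones matrix
# Equal variances, Cov(A) <= Cov(B), so Slepian predicts
# max_v A_v stochastically dominates max_v B_v.
A = rng.standard_normal((trials, n))
B = np.repeat(rng.standard_normal((trials, 1)), n, axis=1)
print(A.max(axis=1).mean(), B.max(axis=1).mean())   # ~2.2 vs ~0
```

Intuition: more correlation shrinks the maximum, which is exactly the direction of the comparison used in the achievability proof.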
User density vs. Energy-per-bit: best bounds

[Figure: user density µ (0 to 0.02) vs Eb/N0 (−2 to 10 dB); curves: Converse, Achievability (Gaussian Process)]
Yury Polyanskiy MAC tutorial 96
User density vs. Energy-per-bit: CDMA (w/o MUD)

[Figure: same axes as previous; curves: Converse, Achievability (Gaussian Process), Achievability: TIN (aka CDMA w/o MUD)]
Yury Polyanskiy MAC tutorial 97
User density vs. Energy-per-bit: TDMA

[Figure: same axes as previous; curves: Converse, Achievability (Gaussian Process), Achievability: TIN (aka CDMA w/o MUD), Achievability: TDMA]
Yury Polyanskiy MAC tutorial 98
User density vs. Energy-per-bit: higher reliability

[Figure: same axes and curves as previous, recomputed for a higher reliability target]
Yury Polyanskiy MAC tutorial 99
Problem 3: Information theory of random-access
Yury Polyanskiy MAC tutorial 100
Prior work on MAC/random-access

It's a mess...
• Channel model: collision vs. additive
• Noise model: noiseless, stochastic or worst-case
• Coding with or without feedback (as in CSMA)
• Probability of error: zero, vanishing or fixed > 0
• Probability of error: per-user vs all-users
• User activity: always-on vs sporadic
• Finite blocklength vs n → ∞
• Various asymptotics: K = const, n → ∞ vs both K, n → ∞
Yury Polyanskiy MAC tutorial 101
Classification by user activity

[Diagram: a tree] MAC splits into:
• Identifiable users — individual codebooks
• Non-identifiable users — one (same) codebook
Each branch further splits by activity:
• All active
• Some active — with the active set either known or unknown
Yury Polyanskiy MAC tutorial 102
Sample of prior work

Identifiable users (individual codebooks), all active:
• Classical IT [Liao’72], [Ahlswede’73]
• Orthogonal schemes TDMA/FDMA
• Rate splitting [Rimoldi-Urbanke’99]
• Finite blocklength [MolavianJazi-Laneman’14-16]
• Many-user [Chen-Guo’14]
Yury Polyanskiy MAC tutorial 103
Sample of prior work

Identifiable users (individual codebooks), some active:
• Active set known:
▶ Non-orthogonal CDMA, MUD
▶ Randomly-spread CDMA [Tse-Hanly’99], [Verdú-Shamai’99]
▶ [Mathys’90]
▶ LDS, SCMA
▶ Many-access [Chen-Chen-Guo’17]
• Active set unknown:
▶ Blind detection for CDMA
▶ [BarDavid-Plotnik-Rom’93]
▶ Conflict-avoiding codes [Bassalygo-Pinsker’83], B. Tsybakov
Yury Polyanskiy MAC tutorial 104
Sample of prior work

Non-identifiable users (one same codebook):
• ALOHA [Abramson’70]
• [Massey-Mathys’85]
• Collision-resolution protocols [Capetanakis’79]
• Superimposed codes [Ericson-Gyorfi’88], [Furedi-Ruszinkó’99]
• Br-codes [Dyachkov-Rykov’81]
• Coded Slotted ALOHA [Casini et al’07], [Liva’11]
• Compressed sensing [Jin-Kim-Rao’11]
Yury Polyanskiy MAC tutorial 105
Key definition: random-access code

[Diagram: received output = User 1 + · · · + User K + Noise]

Definition (P.’17)
f: [M] → R^n is a random-access code for Ka users if ∃ a list-Ka decoder g s.t.

    P[Wj ∉ g(f(W1) + · · · + f(W_Ka) + Z)] ≤ ε   ∀j ∈ [Ka],

where Wi iid ∼ Unif[M].

For ε = 0 this was studied:
• Noiseless channels: Br-codes [Dyachkov-Rykov’81]
• Worst-case noise: superimposed codes [Ericson-Gyorfi’88, Furedi-Ruszinkó’99]
Yury Polyanskiy MAC tutorial 106
Key definition: random-access code (same setting as above)

For ε > 0 this is:
• Just compressed sensing: Y = Xβ + Z, X is Ka-out-of-M sparse.
• ⇒ studied by many, but not w.r.t. Eb/N0 and not with M = 2^{Θ(n)}.
Yury Polyanskiy MAC tutorial 106
Same-codebook codes = compressed sensing

• random-access = all users share the same codebook
• ... obviously, decoding is up to a permutation of users
• New problems: capacity / error-exponent / zero-error capacity
• Equivalent to compressed sensing [Jin-Kim-Rao’11]
• Let the same-codebook (column) vectors be c1, . . . , cM:

X = (c1 | · · · | cM)

• Let β ∈ {0, 1}^M with βj = 1 if codeword j was transmitted
• Then the problem is:

Y = Xβ + Z, Goal: E[‖β − β̂(Y)‖] → min

(linear regression with sparsity ‖β‖0 = Ka, aka compressed sensing).
• The famous n ∼ 2 Ka loge M is just TIN:

loge M ≈ (n/2) loge(1 + P/(1 + (Ka − 1)P)) ≈ n/(2 Ka)

So all the L1 (LASSO) frenzy is just to achieve TIN (hehe...)
Yury Polyanskiy MAC tutorial 107
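The claimed agreement is easy to verify numerically; a small sketch comparing the exact TIN expression with the n/(2 Ka) approximation (n and P are illustrative; the match requires Ka·P ≫ 1):

```python
import math

def tin_rate_nats(n, P, Ka):
    # exact per-user TIN log M in nats: (n/2) * ln(1 + P / (1 + (Ka-1) P))
    return (n / 2) * math.log(1 + P / (1 + (Ka - 1) * P))

n, P = 30000, 1.0                 # illustrative values
for Ka in (10, 100, 1000):
    approx = n / (2 * Ka)         # the crude rule behind n ~ 2 Ka ln M
    print(Ka, round(tin_rate_nats(n, P, Ka), 1), round(approx, 1))
```

Already at Ka = 10 the two numbers agree to within a few percent.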
Key definition: random-access code

[Diagram: User 1 through User K transmit; their codewords and the noise add up to the received output]

Definition (P.’17)
f : [M] → R^n is a random-access code for Ka users if there exists a list-Ka decoder g s.t.

P[Wj ∉ g(f(W1) + · · · + f(W_Ka) + Z)] ≤ ε, ∀j ∈ [Ka],

where the Wi are iid ∼ Unif[M].

This definition is an answer to many prayers, but . . .
Bad news: the asymptotics Ka = µn, n → ∞ is nonsense.
Yury Polyanskiy MAC tutorial 108
Prototypical random-access code: ALOHA

[Diagram: slotted ALOHA frame with users A-H; shaded slots indicate collisions]

• The n-frame is partitioned into L = n/n1 subframes of length n1.
• Each of the Ka users places its n1-codeword into a random subframe.
• Per-user error: Pe ≈ P[Bino(Ka − 1, 1/L) > 0] ≈ (Ka/L) e^(−Ka/L)
Yury Polyanskiy MAC tutorial 109
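The collision probability in the last bullet can also be computed exactly from the binomial; a quick sketch (Ka and L are illustrative):

```python
import math

def aloha_pupe(Ka, L):
    """Exact P[Bino(Ka-1, 1/L) > 0]: some other user picks my subframe."""
    return 1.0 - (1.0 - 1.0 / L) ** (Ka - 1)

Ka, L = 100, 300                       # illustrative: 100 users, 300 subframes
exact = aloha_pupe(Ka, L)
approx = (Ka / L) * math.exp(-Ka / L)  # the approximation used on the slide
print(exact, approx)
```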
Main result 2: random-coding bound

Remark: For the classical regime (Ka fixed, n → ∞ and ε → 0):

C_random-access(Ka) = (1/(2 Ka)) log(1 + Ka P).
Yury Polyanskiy MAC tutorial 110
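A quick numerical look at this formula shows how fast the per-user rate collapses as Ka grows (P is an illustrative choice):

```python
import math

def per_user_capacity(Ka, P):
    """Per-user rate (bits) in the classical regime: (1/(2 Ka)) log2(1 + Ka P)."""
    return math.log2(1 + Ka * P) / (2 * Ka)

P = 1.0
rates = {Ka: per_user_capacity(Ka, P) for Ka in (1, 10, 100, 1000)}
print(rates)
```

The rate decays roughly like (log Ka·P)/(2 Ka), i.e. almost inversely in the number of users.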
Random-coding achievability bound

• Generate M codewords: ci ∼ N(0, P)^⊗n.
• WLOG, users send c1, c2, . . . , c_Ka.
• Decoder sees

Y = c1 + · · · + c_Ka + Z

• Define sum-codewords c(S) ≜ Σ_{i∈S} ci
• ML-decoder (not optimal!):

Ŝ = argmin_S ‖c(S) − Y‖.

• Error-analysis:

Pe ≤ Σ_{t=1}^{Ka} (t/Ka) P[t misguessed]

P[t misguessed] ≤ P[ ∪_{S ⊂ [Ka], |S|=t} ∪_{S′ ⊂ [M]∖[Ka], |S′|=t} { ‖c(S) − c(S′) + Z‖ ≤ ‖Z‖ } ]

Analysis I:
• Condition on Z, c1, . . . , c_Ka
• Use Chernoff + Gallager ρ-trick for P[∪_{S′} · · · | c1^Ka, Z]
• Use another Gallager ρ-trick for P[∪_S · · · | Z]
• Finally take expectation over Z

Analysis II:
• Define the information density appropriately
• Use Feinstein’s trick to bound

P[∪_S ∪_{S′} · · ·] ≤ P[i_min(X1^Ka; Y) < γ] + (Ka choose t)(M choose t) e^(−γ)

• i_min = min_S i_t(c(S); Y | c(S^c))
• i_min ≈ max of a Gaussian process indexed by t-subsets of [Ka]

Classical IT: the term for S goes → 0 if I(X_S; Y | X_{S^c}) > Σ_{i∈S} Ri
Yury Polyanskiy MAC tutorial 111
Numerical evaluation

[Plot: energy-per-bit Eb/N0 (dB, −2 to 10) vs. number of active users (0-300). Payload k = 100 bit, frame n = 30000 rdof, Pe = 0.1. Curves: NOMA random-coding achievability; lower bound.]

For Ka ≲ 50 the dominant term is t ≤ 3.
For Ka ≳ 150 the dominant term is t = Ka.
Yury Polyanskiy MAC tutorial 112
Fundamental limits vs. ALOHA

[Plot: Eb/N0 (dB, −2 to 10) vs. number of active users (0-300). Payload k = 100 bit, frame n = 30000 rdof, Pe = 0.1. Curves: NOMA random-coding achievability; lower bound; ALOHA.]
Yury Polyanskiy MAC tutorial 113
Fundamental limits vs. TIN (aka CDMA w/o MUD)

[Plot: Eb/N0 (dB, −2 to 10) vs. number of active users (0-300). Payload k = 100 bit, frame n = 30000 rdof, Pe = 0.1. Curves: ALOHA; DT-TIN bound; NOMA random-coding achievability; lower bound.]
Yury Polyanskiy MAC tutorial 114
Fundamental limits vs. Coded Slotted ALOHA

[Plot: Eb/N0 (dB, −2 to 10) vs. number of active users (0-300). Payload k = 100 bit, frame n = 30000 rdof, Pe = 0.1. Curves: ALOHA; NOMA random-coding achievability; lower bound; Coded ALOHA (irreg., rep. rate = 3.6); Coded ALOHA (2-regular).]
Yury Polyanskiy MAC tutorial 115
. . . and randomly-spread CDMA w/ optimal MUD

[Plot: Eb/N0 (dB, −2 to 10) vs. number of active users (0-300). Payload k = 100 bit, frame n = 30000 rdof, Pe = 0.1. Curves: ALOHA; NOMA random-coding achievability; lower bound; Coded ALOHA (irreg., rep. rate = 3.6); Coded ALOHA (2-regular); Random CDMA, BPSK, optimal MUD with Ka/N = 1.]
Yury Polyanskiy MAC tutorial 116
New twists compared to classic MAC

Problem 1: large K → ∞, fixed payload log2 M.
Relevant asymptotics: K, n → ∞ with K/n = µ.

Problem 2: “user-centric” probability of error:

Pe ≜ (1/K) Σ_j P[X̂j ≠ Xj]

Problem 3: “random-access”:
indistinguishable users (same codebook), non-asymptotics.
Yury Polyanskiy MAC tutorial 117
Low-complexity random-access over GMAC
Yury Polyanskiy MAC tutorial 118
Massive Connectivity

Key challenge:
Providing multiple-access to a massive number of UNCOORDINATED and infrequently communicating devices.

Typical scenario:
• Huge # of users Ktot ≈ 10^6-10^7
• Still large # of active users Ka ≈ 1-500
• Small data payload, e.g. k = 100 bits
• Blocklength n ∼ 10^4
• k/n ≪ 1, but system spectral efficiency ρ = Ka·k/n ∼ 1

The goal is to communicate with the smallest possible energy-per-bit.
Yury Polyanskiy MAC tutorial 119
Simple scheme I: Treat interference as noise (TIN)

Theorem (DT-TIN bound)
There exists C ⊂ B(0, √(nP)) of size M such that

P[X1 ∉ {top-Ka closest codewords to Y}] ≲ E[e^(−|i(X; X+Z) − log M|+)]

where Y = X1 + · · · + X_Ka + Z, Xi uniform on C, X ∼ N(0, P)^⊗n and Z ∼ N(0, 1)^⊗n.

Remarks:
• The decoder searches for the top-Ka closest codewords.
• Achieves about log M ≈ n C_TIN(P) − √(n V_TIN(P)) Q^(−1)(ε), with

C_TIN(P) = (1/2) log(1 + P/(1 + (Ka − 1)P)), V_TIN(P) = P log² e / (1 + Ka P).

• Spectral efficiency as Ka → ∞ is bounded by (log2 e)/2 ≈ 0.72 bit.
Yury Polyanskiy MAC tutorial 120
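The normal approximation in the remarks can be evaluated directly; a sketch using the stated C_TIN and V_TIN (the Q-inverse is computed by bisection; all numeric inputs are illustrative):

```python
import math

def q_inv(eps):
    """Inverse Gaussian Q-function via bisection on Q(x) = 0.5 erfc(x / sqrt 2)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > eps:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def tin_logM_bits(n, P, Ka, eps):
    """Normal approximation log M ~ n C_TIN - sqrt(n V_TIN) Qinv(eps), in bits."""
    C = 0.5 * math.log2(1 + P / (1 + (Ka - 1) * P))
    V = P * (math.log2(math.e) ** 2) / (1 + Ka * P)   # V_TIN in bits^2
    return n * C - math.sqrt(n * V) * q_inv(eps)

k_bits = tin_logM_bits(n=30000, P=0.05, Ka=100, eps=0.1)
print(k_bits)
```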
Simple scheme I: Treat interference as noise (TIN)

[Plot: Eb/N0 (dB, −2 to 10) vs. number of active users (0-300). Payload k = 100 bit, frame n = 30000 rdof, Pe = 0.1. Curves: ALOHA; DT-TIN bound; NOMA random-coding achievability; lower bound.]
Yury Polyanskiy MAC tutorial 121
Simple scheme II: T-fold ALOHA

[Diagram: slotted ALOHA frame with users A-H; shaded slots indicate collisions]

• Each user places its n1-codeword into one of L subframes.
• Assume any T-fold collision is resolvable.
• Per-user error: Pe ≈ P[Bino(Ka − 1, 1/L) > T] ≈ (Ka/L)^T e^(−Ka/L)
Yury Polyanskiy MAC tutorial 122
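The gain from resolving T-fold collisions can be quantified with the same binomial model; a minimal sketch (Ka, L and the T values are illustrative):

```python
import math

def tfold_pupe(Ka, L, T):
    """P[Bino(Ka-1, 1/L) > T]: more than T other users land in my subframe."""
    p = 1.0 / L
    tail = 0.0
    for j in range(T + 1):
        tail += math.comb(Ka - 1, j) * p**j * (1 - p) ** (Ka - 1 - j)
    return 1.0 - tail

Ka, L = 200, 60
for T in (0, 1, 2, 5):
    print(T, tfold_pupe(Ka, L, T))
```

Even a modest T sharply reduces the per-user collision probability for the same number of subframes.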
Simple scheme II: T-fold ALOHA

[Plot: Eb/N0 (dB, −2 to 10) vs. number of active users (0-300). Payload k = 100 bit, frame n = 30000 rdof, Pe = 0.1. Curves: NOMA random-coding achievability; lower bound; ALOHA; DT-TIN bound; 5-fold ALOHA.]

Want T-MAC codes for T ∼ 3-10.
Yury Polyanskiy MAC tutorial 123
Our scheme: high-level idea

[Diagram: X1^n and X2^n plus noise Z^n produce Y^n]

• Send lattice points.
• Decode the sum of codewords via a single-user decoder [Nazer-Gastpar’11].
• Use a subset of points forming a Sidon set (all sums c1 + c2 distinct).
• Single lattice (no MMSE scaling): R ≈ (1/(2K)) log+ P
• Nested lattice (with MMSE scaling): R ≈ (1/(2K)) log+(1/K + P)
  Warning: issues with same dither.
• Lose a power factor compared to (1/(2K)) log(1 + KP).
Yury Polyanskiy MAC tutorial 124
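The three rates in the last bullets can be compared numerically; a sketch (log+ = max(log, 0); K and P are illustrative):

```python
import math

def logp(x):
    """log+ in bits: max(log2 x, 0)."""
    return max(math.log2(x), 0.0)

def rates(K, P):
    """Per-user rates (bits): single lattice, nested lattice (MMSE), ideal."""
    single = logp(P) / (2 * K)
    nested = logp(1.0 / K + P) / (2 * K)
    ideal = math.log2(1 + K * P) / (2 * K)
    return single, nested, ideal

s, m, i = rates(K=50, P=4.0)
print(s, m, i)
```

The ordering single ≤ nested ≤ ideal makes the power-factor loss of the lattice schemes visible.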
Sample performance of new scheme

[Plot: Eb/N0 (dB, 0-30) vs. Ka (50-300). Curves: Random Coding; TIN; ALOHA; 5-fold ALOHA; Our Scheme. Here n = 30,000, k = 100, Pe = 0.05.]

Yury Polyanskiy MAC tutorial 125
Related work
Many ideas appeared separately:

• Compute-and-forward [Nazer-Gastpar’11]
• Explicit codes for the modulo-2 binary adder channel [Lindström’69, Bar-David et al.’93]
• 2-user codes for the Fq-adder MAC [Dumer-Zinoviev’78, Dumer’95]
• Concatenation of codes with good minimum distance and codes for the BAC [Ericson-Levenshtein’94]
• Concatenation of CoF inner codes with syndrome decoding for compressed sensing [Lee-Hong’16]
Yury Polyanskiy MAC tutorial 126
Details of our scheme
Three phases:
• Sidon set: {0, 1}^k → Fp^n
• Compute-and-forward: Fp^n → R^n1
• T-fold ALOHA: place the n1-codeword in a random subframe
Yury Polyanskiy MAC tutorial 127
Concatenation scheme

Inner code (CoF): convert the T-user GMAC into a mod-p (noiseless) adder MAC.
Outer code (BAC): a C_BAC code for the mod-p adder T-MAC. Here: only p = 2.

[Diagram: messages m1, . . . , mT are encoded by E_BAC into w1, . . . , wT (vectors over Zp), then by E_lin into c1, . . . , cT ∈ C_lin, a linear code over Zp; the channel adds noise z to produce y; the receiver applies mod p and D_lin to obtain

y_BAC = [Σ_{i=1}^T wi] mod p,

and D_BAC recovers the message list.]
Yury Polyanskiy MAC tutorial 128
More on the CoF phase

• C_lin ⊂ {0, 1}^n is a binary linear code (shifted to ±√P).
• Receive y = Σ_{i=1}^T xi + z; shift, rescale, take mod-2, and get

y_CoF = [x + z] mod 2

where x = [Σ_i xi] mod 2 ∈ C_lin ⊂ {0, 1}^n.
• The channel from x to y_CoF is a BMS with folded Gaussian noise
⇒ designing C_lin is a standard coding task.
Normal approximation: log |C_lin| ≈ nC − √(nV) Q^(−1)(ε_code)

What is lost in the conversion y ↦ y_CoF?
The sum-capacity of y grows like log(T · P); the capacity of y_CoF only grows like log(P).

T-fold ALOHA reduces the “power loss” to 1/T instead of 1/Ka.
Yury Polyanskiy MAC tutorial 129
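For intuition, the shift/rescale/mod-2 reduction can be simulated with BPSK inputs: each of T users maps a bit to ±√P, and the folded output reveals the XOR of the bits through folded Gaussian noise. A minimal sketch (this illustrates the reduction only, not the slides' full lattice construction; T, P and the trial count are illustrative):

```python
import math, random

random.seed(1)
T, P, trials = 3, 25.0, 1000
errors = 0
for _ in range(trials):
    bits = [random.randrange(2) for _ in range(T)]
    # BPSK: bit b -> sqrt(P) * (1 - 2b); the channel adds N(0, 1) noise
    y = sum(math.sqrt(P) * (1 - 2 * b) for b in bits) + random.gauss(0, 1)
    # shift and rescale so the sum of bits appears modulo 2
    y_cof = ((T - y / math.sqrt(P)) / 2) % 2   # ~ (sum of bits mod 2) + folded noise
    xor_hat = int(round(y_cof)) % 2            # nearest point, folded back
    errors += (xor_hat != (sum(bits) % 2))
ber = errors / trials
```

At this SNR the XOR is recovered essentially error-free, which is exactly what the outer BAC code then consumes.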
More on the BAC Phase

y_BAC = [Σ_{i=1}^T wi] mod 2, w1, . . . , wT ∈ C_BAC

Need to decode a list {w1, . . . , wT}.
Symmetric capacity: C_sym = 1/T.

How to construct explicit codes?
• Let H = [h1 | · · · | hN] be the parity-check matrix of a T-error-correcting code
• ⇒ all T-sums of columns are distinct
• Set C_BAC = {h1, . . . , hN}
• BCH parity-check matrix: R_BAC = 1/T (optimal!)
• Encoding: easy (just compute α, α³, . . . , α^(2T−1))

Problem: the decoding complexity of BCH is linear in n = 2^k − 1.
Yury Polyanskiy MAC tutorial 130
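The distinct-sums property can be checked concretely for T = 2 with the [15, 7] 2-error-correcting BCH code, whose parity-check columns are h_i = (α^i, α^{3i}) over F16. A self-contained sketch (GF(16) is built here from the primitive polynomial x^4 + x + 1; an illustration, not the exact code from the slides):

```python
from itertools import combinations

def gf16_mul(a, b, mod=0b10011):
    """Carry-less multiplication in GF(16) = F_2[x]/(x^4 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= mod
    return r

# powers of the primitive element alpha = x
alpha = [1]
for _ in range(14):
    alpha.append(gf16_mul(alpha[-1], 0b0010))

# BCH parity-check columns h_i = (alpha^i, alpha^{3i}), packed into 8 bits
H = [(alpha[i] << 4) | alpha[(3 * i) % 15] for i in range(15)]

# all sums (XORs) of 1 or 2 distinct columns must be distinct
sums = set(H) | {a ^ b for a, b in combinations(H, 2)}
print(len(H), len(sums))
```

Since the code has minimum distance 5, any 4 columns of H are linearly independent, so all 15 + 105 = 120 such sums come out distinct.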
More on the BAC Phase: Decoding BCH

Decoding:
• α1, . . . , αT ∈ F_2^k are the messages.
• y_BAC = He′ is a syndrome (!) ⇒ we know the power sums Σ_i (αi)^s, s ≤ 2T.
• Error locator: Berlekamp-Massey yields the coefficients of

σ(z) = Π_{i=1}^T (1 + αi z)

• Find the roots of σ(·), e.g. via [Rabin’80].
• Invert the roots using the identity α^(−1) = α^(2^k − 2).

Total complexity: O(k T² log²(T) log log(T)) operations in F_2^k.
Yury Polyanskiy MAC tutorial 131
Spectral Efficiency > 1

The spectral efficiency ρ = Ka·k/n of our scheme is at most R_lin.
What if ρ > 1? Solution: work with p > 2.

• The CoF phase requires good linear codes over Fp.
• The BAC phase can be implemented using H = [h1 | · · · | hn] of an [n = p^s − 1, n − k = 2T] Reed-Solomon code over F_p^s with

C_BAC = {α hi : α ∈ F_p^s \ {0}, i = 1, . . . , p^s − 1}

• Can use a nested lattice to achieve the 1.53 dB shaping gain.
• Drawback: hard to analyze at finite blocklength.
Yury Polyanskiy MAC tutorial 132
Approximate performance

Asymptotic optimum: (Eb/N0)* = (2^(2ρ) − 1)/(2ρ), with ρ = Ka·k/n.

Let L = Ka/(αT) for α ∈ (0, 1] be the number of subframes.

Pe ≈ P[T-collision] = P[Binomial(Ka − 1, αT/Ka) ≥ T]

Linear code rate: R_lin = ρ/α.

Δ = (Eb/N0)_dB − (Eb/N0)*_dB ≈ 6ρ(1 − α)/α + 10 log10(α) + 10 log10(T) − 10 log10(1 − 2^(−2ρ)) + 1.53

Term by term:
• 6ρ(1 − α)/α + 10 log10(α): T-collision-avoidance loss due to a 1/α increase in spectral efficiency
• 10 log10(T): CoF loss from the reduction y ↦ y_CoF
• −10 log10(1 − 2^(−2ρ)): loss of the “+1” in the computation rate
• 1.53: shaping loss
Yury Polyanskiy MAC tutorial 133
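The gap formula makes a handy back-of-the-envelope calculator; a sketch (the ρ, α, T values are illustrative):

```python
import math

def eb_n0_opt_db(rho):
    """Asymptotic optimum (Eb/N0)* = (2^(2 rho) - 1) / (2 rho), in dB."""
    return 10 * math.log10((2 ** (2 * rho) - 1) / (2 * rho))

def gap_db(rho, alpha, T):
    """Approximate gap Delta (dB) of the scheme to the asymptotic optimum."""
    collision = 6 * rho * (1 - alpha) / alpha + 10 * math.log10(alpha)
    cof = 10 * math.log10(T)
    comp_rate = -10 * math.log10(1 - 2 ** (-2 * rho))
    shaping = 1.53
    return collision + cof + comp_rate + shaping

rho, alpha, T = 0.5, 0.8, 5
total = gap_db(rho, alpha, T)
print(eb_n0_opt_db(rho), total)
```

For these values the CoF term 10 log10(T) dominates the gap, matching the slide's breakdown.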
Low-complexity schemes: summary

[Plot: Eb/N0 (dB, 0-30) vs. number of active users (50-300). Curves: NOMA random-coding achievability; ALOHA; DT-TIN bound; 5-fold ALOHA; Our Scheme (exact); Our Scheme (estimated).]
Yury Polyanskiy MAC tutorial 134
MAC with random path-loss
Yury Polyanskiy MAC tutorial 135
K-user GMAC with random path-loss

[Diagram: K transmitters and one receiver; codewords scaled by gains H1, . . . , HK add with noise]

Y(t) = H1 X1(t) + · · · + HK XK(t) + Z(t)

• More realistic model: waveforms added with random gains.
• Standard work-around: use pilots. Impossible without coordination!
Yury Polyanskiy MAC tutorial 136
New multi-access protocol (2018): idea

• Step 1: Partition the entire frame of length n into subframes of length N.
• Step 2: Each user randomly selects a subframe for communication.
  Important: K and n/N are chosen so that > T-fold collisions are improbable.
• Step 3: Users encode their data via sparse-graph (LDPC) codes.
• Step 4: The decoder uses a joint Tanner graph (LDPC + LDGM structure) to iteratively decode the data and learn the channel gains!

[Diagram: joint Tanner graph coupling code constraints C with gain priors p(H1), p(H2)]
Yury Polyanskiy MAC tutorial 137
New multi-access protocol (2018): results

[Plot: fading MAC, Eb/N0 (dB, 5-30) vs. Ka (0-300) for n = 30000, k = 100 bits, Pe = 0.1. Curves: T-ALOHA (T = 1, 2, 3, 4) using the LDPC scheme; T-ALOHA (T = 1, 2, 3, 4) using the FBL bound; Shamai-Bettesh asymptotic bound; converse.]

ALOHA breaks down at about ∼ 20 users.
The NEW scheme breaks down at about ∼ 250 users.
Yury Polyanskiy MAC tutorial 138
Other ideas for low-complexity schemes
• Work in progress by several groups:
  - Narayanan-Chamberland
  - P.-Frolov
  - Durisi-Dalai
  - Popovski-Liva
  - ... (sorry to those I forgot)
• Methods we did not cover:
  - Coded Slotted ALOHA
  - ... including with MPR capability
  - iterative decoding of same-codebook LDPCs
  - superimposed codes
• The problem is even more interesting with fading:
  - Random channel gains Hj help distinguish users.
  - With many users, the order statistics of the Hj’s become deterministic.
Yury Polyanskiy MAC tutorial 139
Outline – revisited
Envisioned solution:
• To save battery: sensors sleep all the time, except when transmitting.
• ... uncoordinated transmissions.
• ... they wake up, blast the packet, go back to sleep.
• Focus on low energy (low Eb/N0).
• Focus on fundamental limits...
• ... but with low-complexity solutions (single-user-only decoding).

Issues we need to understand:
1 packets are short: finite-blocklength (FBL) info theory
2 multiple-access channel: classical MAC
3 low-complexity MAC: modulation, CDMA, multi-user detection
4 massive random-access: many users, same-codebook codes (NEW)

Supporting 10 users at 1 Mbps is much easier than 1M users at 10 bps.
Yury Polyanskiy MAC tutorial 140
Thank you!
Yury Polyanskiy MAC tutorial 141
Extra: More plots
Yury Polyanskiy MAC tutorial 142
ALOHA + codes repairing 5-fold collisions

[Plot: Eb/N0 (dB, −2 to 10) vs. number of active users (0-300). Payload k = 100 bit, frame n = 30000 rdof, Pe = 0.1. Curves: NOMA random-coding achievability; lower bound; ALOHA; DT-TIN bound; 5-fold ALOHA.]
Yury Polyanskiy MAC tutorial 143
Other schemes...

[Plot: Eb/N0 (dB, −2 to 10) vs. number of active users (0-300). Payload k = 100 bit, frame n = 30000 rdof, Pe = 0.1. Curves: ALOHA; ALOHA + 5MAC; NOMA: treat interference as noise (TIN); NOMA: random-coding achievability; lower bound; Coded ALOHA (irreg., rep. rate = 3.6); Coded ALOHA (2-regular); Random CDMA, BPSK, optimal MUD with Ka/N = 1.]
Yury Polyanskiy MAC tutorial 144