Fast and robust sparse recovery: New Algorithms and Applications
Mayank Bakshi (INC, CUHK), Sheng Cai, Eric Chan, Minghua Chen, Sidharth Jaggi, Mohammad Jahangoshahi, Venkatesh Saligrama
The Chinese University of Hong Kong
The Institute of Network Coding
Fast and robust sparse recovery
[Figure: an unknown k-sparse vector x of length n; an m × n measurement matrix (m < n) maps x to the measurement output; goal: reconstruct x.]
A. Compressive sensing
[Figure: y = Ax, with A an m × n matrix, x k-sparse, and k ≤ m < n.]
A. Robust compressive sensing
y = A(x + z) + e
z: approximate sparsity (a k-sparse signal x plus a small tail z)
e: measurement noise
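The measurement model above is easy to sketch numerically. This is a minimal illustration only: it uses a generic random Gaussian matrix rather than the structured measurement matrices discussed later, and all sizes and noise scales are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 200, 60, 5                      # illustrative sizes, with k <= m < n

# k-sparse signal x, plus a small tail z (approximate sparsity)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(0, 10, size=k)
z = rng.normal(0, 0.01, size=n)           # approximate-sparsity tail
e = rng.normal(0, 0.01, size=m)           # measurement noise

A = rng.normal(0, 1 / np.sqrt(m), size=(m, n))   # generic Gaussian A (assumption)
y = A @ (x + z) + e                       # the measurement model y = A(x+z) + e
```

Recovery then means estimating the support and values of x from the m-dimensional y, despite z and e.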
B. Tomography
Example: Computerized Axial Tomography (CAT scan).
y = Tx — estimate x given y and T.
B. Network Tomography
Measurements y: • end-to-end packet delays
Transform T: • network connectivity matrix (known a priori)
Infer x: • link/node congestion — hopefully “k-sparse”
Compressive sensing? Challenge: • matrix T is “fixed” • can only take “some” types of measurements
Noisy Combinatorial OMP: What’s known …[CCJS11]
[Figure: group testing with n − d non-defective and d defective items; each test outcome is flipped with probability q.] For Pr(error) < ε, lower bound: Ω(d log(n/d)) tests.
C. Robust group testing
A. Robust compressive sensing
y = A(x + z) + e
z: approximate sparsity (a k-sparse signal x plus a small tail z)
e: measurement noise
Apps: 1. Compression
Compress x + z to W(x + z); measuring the compressed signal gives BW(x + z) = A(x + z).
M.A. Davenport, M.F. Duarte, Y.C. Eldar, and G. Kutyniok, "Introduction to Compressed Sensing," in Compressed Sensing: Theory and Applications, 2012.
Apps: 2. Fast(er) Fourier Transform
H. Hassanieh, P. Indyk, D. Katabi, and E. Price, "Nearly optimal sparse Fourier transform," in Proceedings of the 44th Symposium on Theory of Computing (STOC '12).
Apps: 3. One-pixel camera
http://dsp.rice.edu/sites/dsp.rice.edu/files/cs/cscam.gif
y = A(x + z) + e
(Information-theoretically) order-optimal
• Support Recovery
SHO-FA: SHO(rt)-FA(st)
O(k) measurements, O(k) time
1. Graph-Matrix
[Figure: bipartite graph with n left nodes (coordinates of x) and ck right nodes (measurements); each left node has degree d = 3. A is the corresponding measurement matrix.]
2. (Most) x-expansion
Every small support set S expands: |N(S)| ≥ 2|S|.
3. “Many” leafs
Let L be the number of leaf neighbors of S (right nodes with exactly one edge from S) and L′ the remaining neighbors. Expansion gives L + L′ ≥ 2|S|; counting the 3|S| edges out of S gives L + 2L′ ≤ 3|S|. Subtracting, L′ ≤ |S|, hence L ≥ |S| and L/(L + L′) ≥ 1/2: at least half of S’s neighbors are leafs.
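The counting step above can be checked on a random degree-3 bipartite graph. The sizes below are illustrative assumptions; the assertion is the edge-counting identity L + 2L′ ≤ d|S|, which holds for any graph since each non-leaf neighbor absorbs at least two of the d|S| edges out of S.

```python
import random

random.seed(1)

n, ck, d = 1000, 400, 3       # left nodes, right nodes, left degree (illustrative)

# Each left node (signal coordinate) picks d distinct right neighbors at random.
nbrs = [random.sample(range(ck), d) for _ in range(n)]

k = 20
S = random.sample(range(n), k)            # a candidate support set

# Count how many edges from S arrive at each right node.
deg_from_S = [0] * ck
for i in S:
    for j in nbrs[i]:
        deg_from_S[j] += 1

L = sum(1 for c in deg_from_S if c == 1)      # leafs: exactly one edge from S
Lp = sum(1 for c in deg_from_S if c >= 2)     # non-leaf neighbors

# Edge-counting identity from the slide: L + 2L' <= d|S|.
assert L + 2 * Lp <= d * k
```

For sparse random graphs the observed leaf fraction L/(L + L′) is typically far above the 1/2 guaranteed by the argument.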
4. Matrix
Encoding – Recap.
[Figure: example encoding of a sparse binary vector x through the graph/matrix A.]
Decoding – Initialization
Decoding – Leaf Check (2-Failed-ID)
Decoding – Leaf Check (4-Failed-VER)
Decoding – Leaf Check (1-Passed)
Decoding – Step 4 (4-Passed/STOP)
Decoding – Recap.
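The leaf-check/peel loop in the decoding steps above can be sketched for the noiseless, exactly sparse case. This is a simplified toy of my own construction, not the actual SHO-FA decoder: each measurement node keeps a plain sum, an index-weighted “ID” sum, and a random-weighted “VER” sum, loosely mirroring the Failed-ID / Failed-VER / Passed cases; sizes, tolerances, and variable names are all assumptions.

```python
import random

random.seed(2)
n, ck, d, k = 100, 60, 3, 5               # illustrative sizes

nbrs = [random.sample(range(ck), d) for _ in range(n)]
r = [random.random() for _ in range(n)]   # random verification coefficients

x = [0.0] * n                             # exactly k-sparse signal
for i in random.sample(range(n), k):
    x[i] = random.uniform(1.0, 10.0)

# Three sums per measurement node: plain, index-weighted (ID), random-weighted (VER)
s = [0.0] * ck; t = [0.0] * ck; u = [0.0] * ck
for i in range(n):
    if x[i]:
        for j in nbrs[i]:
            s[j] += x[i]
            t[j] += i * x[i]
            u[j] += r[i] * x[i]

xhat = [0.0] * n
for _ in range(4 * k):                    # a few peeling passes suffice
    for j in range(ck):
        if abs(s[j]) < 1e-9:
            continue
        i = round(t[j] / s[j])            # candidate index from the ID sum
        if not (0 <= i < n and j in nbrs[i]):
            continue                      # Failed-ID: j is not a leaf
        if abs(u[j] - r[i] * s[j]) > 1e-6:
            continue                      # Failed-VER: sums are inconsistent
        xhat[i] = s[j]                    # Passed: recover x_i from the leaf
        for jj in nbrs[i]:                # peel x_i off all its neighbors
            s[jj] -= xhat[i]
            t[jj] -= i * xhat[i]
            u[jj] -= r[i] * xhat[i]
```

Each recovered coordinate turns some of its neighbors into fresh leafs, which is what makes the overall decoding time O(k).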
Noise/approx. sparsity
Meas./phase error
Correlated phase meas.
Network Tomography
• Goal: infer network characteristics (edge or node delay)
• Difficulties:
– edge-by-edge (or node-by-node) monitoring is too slow
– inaccessible nodes
• Network tomography: with very few end-to-end measurements, quickly, for arbitrary network topology
B. Network Tomography
Measurements y: • end-to-end packet delays
Transform T: • network connectivity matrix (known a priori)
Infer x: • link/node congestion — hopefully “k-sparse”
Compressive sensing? Challenge: • matrix T is “fixed” • can only take “some” types of measurements
Idea: • “mimic” a random matrix
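The y = Tx model for delay tomography can be made concrete on a toy network. The topology, link delays, and path set below are invented for illustration; the point is only that each end-to-end measurement sums the delays of the links its path traverses, so T is fixed by the network rather than freely designable.

```python
import numpy as np

# Toy 4-node line network A-B-C-D with links 0:(A,B), 1:(B,C), 2:(C,D).
x = np.array([0.0, 7.5, 0.0])   # per-link delay: only link 1 is congested (1-sparse)

# T: rows are end-to-end paths, 1s mark the links each path traverses.
T = np.array([
    [1, 0, 0],   # path A->B
    [1, 1, 0],   # path A->C
    [1, 1, 1],   # path A->D
    [0, 1, 1],   # path B->D
])

y = T @ x        # end-to-end delays: the congested link shows up in every path crossing it
```

Here the sparse congestion vector x is recoverable from y because the paths distinguish the links; the challenge in the slides is engineering such measurements when only certain paths are available.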
Our algorithm: FRANTIC
• Fast Reference-based Algorithm for Network Tomography vIa Compressive sensing
Builds on SHO-FA [Figure: bipartite graph with n left nodes and ck right nodes, degree d = 3, matrix A; T mimics A]:
1. Integer-valued CS [BJCC12] “SHO-FA-INT”
2. Better mimicking of the desired T
Node delay estimation
[Figure: measurement paths through nodes v1, v2, v3, v4.]
Edge delay estimation
[Figure: measurement paths through edges e1, …, e6.]
Idea 1: Cancellation
Idea 2: “Loopy” measurements
• Fewer measurements
• Arbitrary packet injection/reception
• Not just 0/1 matrices (SHO-FA)
C. GROTESQUE: Noisy GROup TESting (QUick and Efficient)
Noisy Combinatorial OMP: What’s known …[CCJS11]
[Figure: group testing with n − d non-defective and d defective items; each test outcome is flipped with probability q.] For Pr(error) < ε, lower bound: Ω(d log(n/d)) tests.
[Table: # tests vs. decoding complexity — rows: lower bound, adaptive, non-adaptive, 2-stage adaptive; columns compare prior work, including [NPR12], against this work. Figures shown include O(poly(D) log(N)), O(D² log(N)), O(D N), and O(D log(N)).]
Hammer: GROTESQUE testing
• Multiplicity: how many defective items does a group contain?
• Localization: which items are they?
Works in both the noiseless and noisy settings.
Nail: “Good” Partitioning
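The localization half of the hammer can be illustrated in the simplest noiseless case: a group known to contain exactly one defective. Each test includes each item independently with probability 1/2, so the vector of test outcomes equals the defective item's inclusion pattern, which identifies it. This is a toy sketch with assumed parameters, not GROTESQUE's actual test design (which also handles multiplicity estimation and noisy outcomes via repetition).

```python
import random

random.seed(5)
group = list(range(32))
defective = 13                    # the single defective in this group (assumed known unique)
T = 16                            # number of noiseless tests (illustrative)

# Each test includes each item independently with probability 1/2.
membership = {i: [random.random() < 0.5 for _ in range(T)] for i in group}

# A test is positive iff it happens to include the defective.
outcomes = membership[defective]

# Localization: only items whose inclusion pattern matches all outcomes survive.
candidates = [i for i in group if membership[i] == outcomes]
```

With T = 16 tests on 32 items, a non-defective item matches all outcomes with probability 2⁻¹⁶, so the candidate list is almost surely just the defective.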
Adaptive Group Testing
n items, d defectives. Partition into groups of size O(n/d) and run GROTESQUE on each: O(d log(n)) time and tests, with a constant fraction of the defectives recovered.
• Each stage recovers a constant fraction
• # tests and time decay geometrically
• T = O(log D) stages
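To see why an O(d log n) adaptive test budget is plausible, here is a much simpler textbook adaptive baseline, not GROTESQUE itself: partition into O(d) groups of size O(n/d), then binary-search inside each positive group. All parameters are illustrative assumptions.

```python
import random

random.seed(3)
n, d = 1024, 8
defectives = set(random.sample(range(n), d))

tests = 0
def pool_test(group):
    """A standard group test: positive iff the pool contains a defective."""
    global tests
    tests += 1
    return any(i in defectives for i in group)

found = set()
size = n // (2 * d)                              # group size O(n/d)
groups = [list(range(j, j + size)) for j in range(0, n, size)]
for g in groups:
    if not pool_test(g):
        continue
    stack = [g]                                  # binary-search each positive group
    while stack:
        h = stack.pop()
        if len(h) == 1:
            found.add(h[0])
            continue
        mid = len(h) // 2
        for half in (h[:mid], h[mid:]):
            if pool_test(half):
                stack.append(half)

assert found == defectives
```

This uses roughly d log(n/d) tests but its decoding is inherently sequential over O(log n) rounds; the point of GROTESQUE's staged scheme is to get comparable test counts with only O(log D) stages and fast decoding.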
Non-Adaptive Group Testing
• Constant fraction of groups “good”
• O(D log(D))
• Iterative decoding
2-Stage Adaptive Group Testing
• # of groups = D²
D. Threshold Group Testing
n items, d defectives.
Each test: [Figure: probability that the output is positive vs. the number of defective items in the group — 0 up to l defectives, rising to 1 at u defectives.]
Goal: find all d defectives.
Our result: tests suffice; previous best algorithms:
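The threshold test model in the figure can be sketched as follows. The parameters and the linear ramp between l and u are assumptions for illustration; the model only guarantees a negative outcome at ≤ l defectives and a positive one at ≥ u, with arbitrary behavior in the gap.

```python
import random

random.seed(4)
n, d, l, u = 500, 10, 2, 4        # illustrative sizes and thresholds
defectives = set(random.sample(range(n), d))

def threshold_test(group):
    """Negative for <= l defectives, positive for >= u; assumed linear ramp between."""
    c = len(defectives & set(group))
    if c <= l:
        return False
    if c >= u:
        return True
    return random.random() < (c - l) / (u - l)

clean = sorted(set(range(n)) - defectives)
print(threshold_test(clean[:20]), threshold_test(sorted(defectives)))  # False True
```

Unlike classical group testing, a single defective no longer flips a test, which is what makes recovery with few tests harder here.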
Summary
• Fast and Robust Sparse Recovery algorithms
• Compressive sensing: order-optimal complexity and # of measurements
• Network tomography: nearly optimal complexity and # of measurements
• Group testing: optimal complexity, nearly optimal # of tests
• Threshold group testing: nearly optimal # of tests
THANK YOU 謝謝