Compressed Sensing using Generative Models

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis
UT Austin

Slides: ashishbora.github.io/assets/csgm/slides_google_ashish.pdf (2017-11-07)


Compressed Sensing

Want to recover a signal (e.g. an image) from noisy measurements.

[Figures: medical imaging, astronomy, single-pixel camera, oil exploration]

Linear measurements: choose A ∈ R^(m×n), observe y = Ax.

How many measurements m do we need to learn the signal?


Compressed Sensing

Given linear measurements y = Ax, for A ∈ R^(m×n): how many measurements m do we need to learn the signal x?

- Naively m ≥ n; otherwise the system is underdetermined and multiple x are possible.
- But most x aren't plausible.

[Figure: two images, 5 MB vs. 36 MB]

- This is why compression is possible.

Ideal answer: (information in image) / (new information per measurement)

- Image "compressible" ⟹ information in the image is small.
- Measurements "incoherent" ⟹ most information is new.
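To make the setup concrete, here is a toy numpy sketch (the sizes and the least-norm baseline are my own illustration, not from the slides): with m ≪ n, the system Ax = y is underdetermined, so merely fitting the measurements does not recover x.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 1000, 100                    # signal dimension n, measurements m << n (toy sizes)

    x = rng.standard_normal(n)          # the unknown signal
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
    y = A @ x                           # observe y = Ax

    # Underdetermined: many signals fit y exactly, e.g. the least-norm solution,
    # which matches the measurements but is far from x.
    x_ln = np.linalg.pinv(A) @ y
    print(np.allclose(A @ x_ln, y), np.linalg.norm(x_ln - x) / np.linalg.norm(x))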


Compressed Sensing

Want to estimate x ∈ R^n from m ≪ n linear measurements.

Suggestion: find the "most compressible" image that fits the measurements.

Three questions:

- How should we formalize that an image is "compressible"?
- What algorithm should we use for recovery?
- How should we choose the measurement matrix?


Compressed Sensing

Standard compressed sensing: sparsity in some basis.

- Sparsity + other constraints ("structured sparsity").

This talk: a new method.


Standard Compressed Sensing Formalism: "compressible" = "sparse"

Want to estimate x from y = Ax + η, for A ∈ R^(m×n).

- For this talk: ignore η, so y = Ax.

Algorithm for recovery: LASSO

    min_x ‖Ax − y‖₂² + λ‖x‖₁

Goal: find x̂ with

    ‖x̂ − x‖₂ ≤ O(1) · min_{k-sparse x′} ‖x − x′‖₂    (1)

- Reconstruction accuracy proportional to model accuracy.

Theorem [Candès-Romberg-Tao 2006]

- A satisfies the REC if ‖Ax‖₂ ≥ γ‖x‖₂ for all k-sparse x.
- If A satisfies the REC, then LASSO recovers an x̂ satisfying (1).
- m = O(k log(n/k)) suffices for A to satisfy the REC.

Theorem [Do Ba-Indyk-Price-Woodruff 2010]

- m = Θ(k log(n/k)) is optimal.
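As a sketch of what LASSO recovery looks like in practice, here is a minimal ISTA (proximal gradient) implementation in numpy. The solver, step size, λ, and the constant in m ≈ k log(n/k) are illustrative choices of mine, not prescribed by the slides.

    import numpy as np

    def soft_threshold(v, t):
        # prox of t*||.||_1: shrink each coordinate toward zero by t
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def lasso_ista(A, y, lam, iters=2000):
        """Minimize ||Ax - y||_2^2 + lam*||x||_1 by proximal gradient descent (ISTA)."""
        x = np.zeros(A.shape[1])
        L = 2 * np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth part's gradient
        for _ in range(iters):
            grad = 2 * A.T @ (A @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
        return x

    # Toy demo: m ~ k log(n/k) Gaussian measurements of a k-sparse signal.
    rng = np.random.default_rng(0)
    n, k = 1000, 10
    m = int(5 * k * np.log(n / k))              # constant 5 is a guess, not from the slides
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x_true
    x_hat = lasso_ista(A, y, lam=1e-3)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small for these sizes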


Alternatives to sparsity?

Sparsity has drawbacks:

- The basis needs to be handcrafted.
- It does not capture the set of plausible vectors tightly; it is too simplistic.
- It ignores a lot of domain-dependent structure.

Billions of natural images and millions of MRIs are collected each year: an opportunity to improve our structural understanding, and better structural understanding should give fewer measurements.

Best way to model images in 2017?

- Deep convolutional neural networks.
- In particular: generative models.


Generative Models

Want to model a distribution D of images.

A function G : R^k → R^n. When z ∼ N(0, I_k), ideally G(z) ∼ D.

An active area of machine learning research in the last few years:

- Generative Adversarial Networks (GANs) [Goodfellow et al. 2014]:
  - Competition between a generator and a discriminator.
  - W-GAN, BEGAN, InfoGAN, DCGAN, ...
- Variational Auto-Encoders (VAEs) [Kingma & Welling 2013]:
  - Blurrier, but maybe better coverage of the space.

[Figure: BEGAN samples]

Suggestion for compressed sensing: replace "x is k-sparse" by "x is in the range of G : R^k → R^n".
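To fix the shapes, here is a minimal generator sketch in PyTorch: a small untrained ReLU MLP of my own choosing, not the DCGAN/BEGAN/VAE architectures cited above. It only illustrates G : R^k → R^n applied to z ∼ N(0, I_k).

    import torch
    import torch.nn as nn

    k, n = 20, 784        # latent dim k, signal dim n (e.g. a flattened 28x28 image); toy sizes

    # G : R^k -> R^n, here a small ReLU MLP
    G = nn.Sequential(
        nn.Linear(k, 200), nn.ReLU(),
        nn.Linear(200, 200), nn.ReLU(),
        nn.Linear(200, n),
    )

    z = torch.randn(8, k)  # a batch of latent codes z ~ N(0, I_k)
    samples = G(z)         # after training, ideally G(z) ~ D; this G is untrained
    print(samples.shape)   # torch.Size([8, 784])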


Our Results: "compressible" = "near range(G)"

Want to estimate x from y = Ax, for A ∈ R^(m×n). We are given the generative model G : R^k → R^n.

Algorithm for recovery:

- min_z ‖AG(z) − y‖₂² + λ‖z‖₂²
- Backprop to get gradients with respect to z.
- Optimize with gradient descent.
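A minimal sketch of this recovery loop in PyTorch, assuming G is a differentiable generator like the one above and A, y are tensors. Adam and random restarts are implementation choices on my part (the slide only says gradient descent); λ, the step count, and the learning rate are toy values.

    import torch

    def recover(G, A, y, k, lam=0.01, steps=1000, lr=0.01, restarts=5):
        """Approximately solve min_z ||A G(z) - y||_2^2 + lam*||z||_2^2; return G(z_hat)."""
        best_z, best_loss = None, float("inf")
        for _ in range(restarts):              # random restarts help the non-convex landscape
            z = torch.randn(k, requires_grad=True)
            opt = torch.optim.Adam([z], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                loss = ((A @ G(z) - y) ** 2).sum() + lam * (z ** 2).sum()
                loss.backward()                # backprop to get gradients w.r.t. z
                opt.step()
            if loss.item() < best_loss:        # keep the restart with the lowest final loss
                best_loss, best_z = loss.item(), z.detach()
        return G(best_z).detach()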



Our Results“Compressible” = “near range(G)”

Goal: x with

‖x − x‖2 ≤ O(1) · minx ′=G(z ′),‖z ′‖2≤r

‖x − x ′‖2+δ (2)

I Reconstruction accuracy proportional to model accuracy.

Main Theorem I: m = O(kd log n) suffices for (2).I G is a d-layer ReLU-based neural network.I When A is random Gaussian matrix.

Main Theorem II:I For any Lipschitz G , m = O(k log rL

δ ) suffices.I Morally the same O(kd log n) bound: L, r , δ−1 ∼ nO(d)

Convergence:I Just like training, no proof that gradient descent converges

I Approximate solution approximately gives (2)I Can check that ‖x − x‖2 is small.I In practice, optimization error is negligible.

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 10 / 20

Page 80: Compressed Sensing using Generative Modelsashishbora.github.io/assets/csgm/slides_google_ashish.pdf · 2017-11-07 · Compressed Sensing Want to recover a signal (e.g. an image) from

Our Results“Compressible” = “near range(G)”

Goal: x with

‖x − x‖2 ≤ O(1) · minx ′=G(z ′),‖z ′‖2≤r

‖x − x ′‖2+δ (2)

I Reconstruction accuracy proportional to model accuracy.

Main Theorem I: m = O(kd log n) suffices for (2).I G is a d-layer ReLU-based neural network.I When A is random Gaussian matrix.

Main Theorem II:I For any Lipschitz G , m = O(k log rL

δ ) suffices.I Morally the same O(kd log n) bound: L, r , δ−1 ∼ nO(d)

Convergence:I Just like training, no proof that gradient descent convergesI Approximate solution approximately gives (2)

I Can check that ‖x − x‖2 is small.I In practice, optimization error is negligible.

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 10 / 20

Page 81: Compressed Sensing using Generative Modelsashishbora.github.io/assets/csgm/slides_google_ashish.pdf · 2017-11-07 · Compressed Sensing Want to recover a signal (e.g. an image) from

Our Results“Compressible” = “near range(G)”

Goal: x with

‖x − x‖2 ≤ O(1) · minx ′=G(z ′),‖z ′‖2≤r

‖x − x ′‖2+δ (2)

I Reconstruction accuracy proportional to model accuracy.

Main Theorem I: m = O(kd log n) suffices for (2).I G is a d-layer ReLU-based neural network.I When A is random Gaussian matrix.

Main Theorem II:I For any Lipschitz G , m = O(k log rL

δ ) suffices.I Morally the same O(kd log n) bound: L, r , δ−1 ∼ nO(d)

Convergence:I Just like training, no proof that gradient descent convergesI Approximate solution approximately gives (2)I Can check that ‖x − x‖2 is small.

I In practice, optimization error is negligible.

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 10 / 20

Page 82: Compressed Sensing using Generative Modelsashishbora.github.io/assets/csgm/slides_google_ashish.pdf · 2017-11-07 · Compressed Sensing Want to recover a signal (e.g. an image) from

Our Results“Compressible” = “near range(G)”

Goal: x with

‖x − x‖2 ≤ O(1) · minx ′=G(z ′),‖z ′‖2≤r

‖x − x ′‖2+δ (2)

I Reconstruction accuracy proportional to model accuracy.

Main Theorem I: m = O(kd log n) suffices for (2).I G is a d-layer ReLU-based neural network.I When A is random Gaussian matrix.

Main Theorem II:I For any Lipschitz G , m = O(k log rL

δ ) suffices.I Morally the same O(kd log n) bound: L, r , δ−1 ∼ nO(d)

Convergence:I Just like training, no proof that gradient descent convergesI Approximate solution approximately gives (2)I Can check that ‖x − x‖2 is small.I In practice, optimization error is negligible.

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 10 / 20
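To make the recovery procedure concrete: it is gradient descent on the latent code z, minimizing ‖AG(z) − y‖₂², with random restarts. Below is a minimal self-contained sketch, assuming a toy one-layer ReLU generator with hand-written gradients; the dimensions, step size, and restart count are illustrative placeholders, not the settings from the talk's experiments.

    import numpy as np

    rng = np.random.default_rng(0)
    k, n, m = 5, 100, 25               # latent dim, signal dim, # measurements

    # Toy one-layer generator G(z) = ReLU(W z); a stand-in for a trained network.
    W = rng.standard_normal((n, k)) / np.sqrt(n)
    def G(z):
        return np.maximum(W @ z, 0.0)

    x = G(rng.standard_normal(k))      # true signal, exactly in range(G)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x                          # noiseless linear measurements

    def loss_and_grad(z):
        # f(z) = ||A G(z) - y||^2; gradient by hand through the ReLU mask.
        h = W @ z
        r = A @ np.maximum(h, 0.0) - y
        grad = W.T @ ((h > 0) * (2.0 * (A.T @ r)))
        return r @ r, grad

    # Gradient descent with random restarts; the objective is nonconvex,
    # so there is no guarantee, matching the caveat on the slide.
    best_f, best_z = np.inf, None
    for _ in range(10):
        z = rng.standard_normal(k)
        for _ in range(3000):
            f, g = loss_and_grad(z)
            z -= 0.02 * g
        if f < best_f:
            best_f, best_z = f, z.copy()

    x_hat = G(best_z)
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

On instances this small, the best restart typically drives the relative error near zero, but nothing guarantees this in general.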

Experimental Results

Faces: n = 64 × 64 × 3 = 12288, m = 500.

[Figure: face reconstructions; rows labeled Original, Lasso (DCT), Lasso (Wavelet), and DCGAN.]

MNIST: n = 28 × 28 = 784, m = 100.

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 11 / 20

Experimental Results

[Figure: reconstruction error (per pixel) vs. number of measurements. MNIST panel (m = 10 to 750): Lasso, VAE, VAE+Reg. Faces panel (m = 20 to 10000): Lasso (DCT), Lasso (Wavelet), DCGAN, DCGAN+Reg.]

For fixed G, k is fixed, so the error stops improving after some point.

Larger m should use a higher-capacity G, so that min_z ‖x − G(z)‖ is smaller.

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 12 / 20

Proof Outline (ReLU-based networks)

Show that range(G) lies within a union of n^{dk} k-dimensional hyperplanes.
- Then the argument is analogous to the proof for sparsity: (n choose k) ≤ 2^{k log(n/k)} hyperplanes.
- So O(dk log n) Gaussian measurements suffice.

ReLU-based network:
- Each layer is z → ReLU(A_i z).
- ReLU(y)_i = y_i if y_i ≥ 0, and 0 otherwise.

Input to layer 1: a single k-dimensional hyperplane.

Lemma: Layer 1's output lies within a union of n^k k-dimensional hyperplanes (one per sign pattern; see the sketch below).

Induction: the final output lies within a union of n^{dk} k-dimensional hyperplanes.

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 13 / 20
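One way to see the lemma's count in action is to sample latent vectors and record the distinct sign patterns of A₁z; each pattern is one region on which the layer acts linearly. A small sampling sketch, with arbitrary dimensions and sample count:

    import numpy as np
    from math import comb

    rng = np.random.default_rng(1)
    k, n = 3, 20                        # latent dimension, layer width
    A1 = rng.standard_normal((n, k))

    # Each distinct sign pattern of A1 z is one region on which
    # z -> ReLU(A1 z) is linear, so the layer's output lies in one
    # k-dimensional subspace per pattern.
    Z = rng.standard_normal((200000, k))
    patterns = {tuple(row) for row in (Z @ A1.T > 0)}

    # Exact region count for n hyperplanes through the origin in R^k
    # (Cover's counting theorem); sampling may miss very thin regions.
    exact = 2 * sum(comb(n - 1, i) for i in range(k))
    print(f"sampled {len(patterns)} patterns, exact {exact}, bound n^k = {n ** k}")

For hyperplanes through the origin, the exact general-position count is Cover's 2 ∑_{i<k} C(n−1, i), comfortably below the n^k used on the slide.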

Proof of Lemma

Lemma: Layer 1's output lies within a union of n^k k-dimensional hyperplanes.

- A₁z is a k-dimensional hyperplane in R^n.
- ReLU(A₁z) is linear within any region where sign(A₁z) is constant.
- How many different patterns can sign(A₁z) take?

k = 2 version: how many regions can n lines partition the plane into?
- 1 + (1 + 2 + ⋯ + n) = (n² + n + 2)/2.
- In general, n half-spaces divide R^k into fewer than n^k regions (checked below).

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 14 / 20
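The general-position count behind both bullets has the closed form ∑_{i=0}^{k} C(n, i); for k = 2 it collapses to (n² + n + 2)/2, and since it is a degree-k polynomial it drops below n^k once n is moderately larger than k. A few lines verify both claims; this is a sanity check, not part of the talk:

    import math

    def regions(n, k):
        # Regions cut out of R^k by n affine hyperplanes in general position.
        return sum(math.comb(n, i) for i in range(k + 1))

    # k = 2 closed form from the slide: 1 + (1 + 2 + ... + n) = (n^2 + n + 2)/2.
    for n in range(1, 10):
        assert regions(n, 2) == (n * n + n + 2) // 2

    # Degree-k growth: below n^k once n is moderately larger than k.
    for k in (2, 3, 5):
        assert all(regions(n, k) <= n ** k for n in range(2 * k, 60))
    print("region-count checks pass")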

Proof outline (Lipschitz networks)

Need: if x₁, x₂ ∈ range(G) are very different, then ‖Ax₁ − Ax₂‖ is large.
- Hence the two can be distinguished even in the presence of noise.

This holds for any fixed pair x₁, x₂ with probability 1 − e^{−Ω(m)}.

Apply it to a δ-cover of range(G).
- The cover comes from a (δ/L)-cover of domain(G).
- Its size is (rL/δ)^k, so a union bound works for m = O(k log(rL/δ)).

Since x₁, x₂ only lie close to the cover, we pay an additive δ‖A‖ loss:

    ‖Ax₁ − Ax₂‖ ≥ 0.9 ‖x₁ − x₂‖ − O(δn)

Hence

    ‖x̂ − x‖₂ ≤ C · min_{x′ ∈ range(G)} ‖x′ − x‖₂ + δn

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 15 / 20
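The per-pair statement is standard Gaussian concentration: for A with i.i.d. N(0, 1/m) entries, ‖Av‖/‖v‖ concentrates around 1 for any fixed direction v, with failure probability e^{−Ω(m)}. A quick simulation with arbitrary sizes shows the spread shrinking roughly like 1/√m:

    import numpy as np

    rng = np.random.default_rng(2)
    n, trials = 500, 200
    v = rng.standard_normal(n)          # stands in for a fixed difference x1 - x2

    # Rows of A are N(0, 1/m), so E||Av||^2 = ||v||^2 and the ratio below
    # concentrates around 1 as m grows.
    for m in (10, 50, 200, 1000):
        ratios = np.array([
            np.linalg.norm((rng.standard_normal((m, n)) / np.sqrt(m)) @ v)
            for _ in range(trials)
        ]) / np.linalg.norm(v)
        print(f"m={m:5d}  mean={ratios.mean():.3f}  std={ratios.std():.3f}")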

Non-Gaussian measurements

- A = a subset of the Fourier matrix: MRI.
- A = a Gaussian blur: superresolution.
- A = a diagonal matrix with zeros: inpainting.

The same algorithm can be applied to all of these, unifying the problems (see the sketch below).
- The guarantee only holds if G and A are "incoherent".

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 16 / 20
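All three cases plug into the same objective ‖AG(z) − y‖²; only A changes. Here is a sketch of the three operators for a toy length-64 1-D signal; the mask rate, blur width, and frequency subset are arbitrary illustrative choices:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 64                               # toy 1-D signal length

    # Inpainting: a diagonal 0/1 mask (here keeping ~70% of coordinates).
    mask = (rng.random(n) < 0.7).astype(float)
    A_inpaint = np.diag(mask)

    # Superresolution (blur part only): circulant Gaussian blur matrix;
    # a real pipeline would also subsample the blurred signal.
    taps = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
    taps /= taps.sum()
    A_blur = np.zeros((n, n))
    for i in range(n):
        for off, w in zip(range(-3, 4), taps):
            A_blur[i, (i + off) % n] += w

    # MRI-style: a random subset of rows of the unitary DFT matrix.
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)
    A_fourier = F[rng.choice(n, size=n // 4, replace=False)]   # complex-valued

    # The recovery objective ||A G(z) - y||^2 is the same in every case.
    x = rng.standard_normal(n)
    for name, A in [("inpaint", A_inpaint), ("blur", A_blur), ("fourier", A_fourier)]:
        print(name, (A @ x).shape)

For the complex Fourier rows, one would in practice stack real and imaginary parts to keep the objective real-valued.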

Inpainting

[Figure: inpainting examples.]

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 17 / 20

Super-resolution

[Figure: super-resolution examples.]

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 18 / 20

Summary

Number of measurements required to estimate an image
= (information content of image) / (new information per measurement).

Generative models can bound the information content by O(kd log n).
- Open: do real networks require the linear dependence on d?

Generative models are differentiable ⟹ we can optimize in practice.

Gaussian measurements ensure each measurement carries independent information.

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 19 / 20

Future work

More uses of differentiable compression?

More applications of the distance to range(G)?
- A metric for how good a generative model is.
- An objective for training new generative models.
  - Bojanowski et al. 2017: impressive results with this.

A computable proxy for the Lipschitz parameter?
- Real networks may be much better than the worst-case n^d.

Competitive guarantees for non-Gaussian A?
- Even nonlinear A?

Thank You

Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing using Generative Models 20 / 20
