Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions


Nonlinear Approximation Based Image Recovery Using Adaptive Sparse Reconstructions

Onur G. Guleryuz
oguleryuz@erd.epson.com
Epson Palo Alto Laboratory, Palo Alto, CA

(Full screen mode recommended. Please see the movies.zip file for some movies, or email me. Audio of the presentation will be uploaded soon.)

Overview

•Problem definition.

•Notation and main idea.

•Difficulties with nonstationary statistics.

•Algorithm.

•Properties.

•Five examples and movies to discuss transform properties.

•Conclusion.

•Many more (~20) simulation examples, movies, etc. Please stay after questions.

Working paper: http://eeweb.poly.edu/~onur/online_pub.html (google: onur guleryuz)

Problem Statement

[Figure: an image with a lost block.]

Use surrounding spatial information to recover the lost block via adaptive sparse reconstructions.

Generalizations: irregularly shaped blocks, partial information, ...

Applications: error concealment, damaged images, ... Any image prediction scenario. Any signal prediction scenario.

Notation: Transforms

x = \sum_{i=1}^{N} c_i h_i

x : N x 1 signal
h_i : N x 1 transform basis vectors
c_i : transform coefficient (scalar)

Assume orthonormal transforms: c_i = h_i^T x
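To make the notation concrete, here is a small numerical sketch. The 8-point DCT-II is my choice of orthonormal basis for illustration; the slides only assume some orthonormal transform.

```python
import numpy as np

# Columns of H are the basis vectors h_i, so x = sum_i c_i h_i = H c,
# and orthonormality gives the analysis formula c_i = h_i^T x.
N = 8
n, k = np.arange(N), np.arange(N)
H = np.cos(np.pi * (n[:, None] + 0.5) * k[None, :] / N)  # DCT-II basis
H[:, 0] *= np.sqrt(1.0 / N)
H[:, 1:] *= np.sqrt(2.0 / N)
assert np.allclose(H.T @ H, np.eye(N))  # orthonormal transform

x = np.random.default_rng(0).standard_normal(N)
c = H.T @ x                   # analysis: c_i = h_i^T x
assert np.allclose(H @ c, x)  # synthesis recovers x exactly
```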

Notation: Approximation

•Keep K < N coefficients.

Linear approximation (a priori ordering):

\hat{x}_{linear}(K) = \sum_{i=1}^{K} c_i h_i

Nonlinear approximation (signal dependent ordering):

\hat{x}_{nonlinear}(K) = \sum_{j \in S(x)} c_j h_j

S(x) : the index set of significant coefficients (K largest), card(S(x)) = K
V(x) : the index set of insignificant coefficients (N-K smallest), card(V(x)) = N-K, V(x) = S(x)^C
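A quick numeric contrast of the two orderings (the decaying-coefficient toy signal and the identity basis are my choices for illustration):

```python
import numpy as np

# Work directly in the coefficient domain (identity basis, so c = x).
rng = np.random.default_rng(1)
N, K = 16, 4
c = rng.standard_normal(N) * np.logspace(0, -3, N)  # decaying magnitudes
rng.shuffle(c)  # large coefficients land in arbitrary positions

# Linear approximation: keep the FIRST K coefficients (a priori ordering).
c_lin = np.zeros(N)
c_lin[:K] = c[:K]

# Nonlinear approximation: keep the K LARGEST coefficients, the set S(x);
# V(x) holds the N-K smallest.
S = np.argsort(np.abs(c))[-K:]
c_nl = np.zeros(N)
c_nl[S] = c[S]

# The K-term nonlinear error is never worse than the linear one.
assert np.sum((c - c_nl) ** 2) <= np.sum((c - c_lin) ** 2)
```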

Notation: Sparse

Linear app: x ~ \hat{x}_{linear}(K)  <=>  c_l ~ 0, l = K+1, ..., N

Sparse classes for linear approximation: |c_l| <= T, l = K+1, ..., N

Nonlinear app: x ~ \hat{x}_{nonlinear}(K), K << N  <=>  |c_l| <= T, l \in V(x)

Sparse classes for nonlinear approximation.

Main Idea

x = [x_0; x_1] : original image
x_0 : available pixels (n_0 x 1)
x_1 : lost pixels (n_1 x 1), n_0 + n_1 = N (assume zero mean)
y = [x_0; 0] : observed signal (lost block set to zero)

1. Fix the transform basis.

2. Given T > 0 and the observed signal y, determine \hat{V}(x).

3. Recover the lost block so that y ~ \hat{x}_{nonlinear}, i.e., the prediction [x_0; \hat{x}_1] is sparse over \hat{V}(x).

Sparse Classes

[Figure: pixel coordinates for a "two pixel" image vs. transform coordinates (c_1, c_2).]

Linear app: |c_1| <= T or |c_2| <= T -> convex set (a band of width 2T).

Nonlinear app: class(K,T) -> non-convex, star-shaped set.

Onur G. Guleryuz, E. Lutwak, D. Yang, and G. Zhang, "Information-Theoretic Inequalities for Contoured Probability Distributions," IEEE Transactions on Information Theory, vol. 48, no. 8, pp. 2377-2383, August 2002.

Rolf Schneider, "Convex Bodies: The Brunn-Minkowski Theory," Cambridge University Press, March 2003.

Examples

+9.37 dB   +8.02 dB   +11.10 dB   +3.65 dB

1. Interested in edges, textures, ..., and combinations (not handled well in the literature).

2. MSE improving.

3. Image prediction has a tough audience!

Difficulties with Nonstationary Data

•Estimation is a well studied topic: infer statistics, then build estimators.

•With nonstationary data, inferring statistics is very difficult. Small amount of data; make a mistake, learn mixed statistics.

Regions with different statistics. Perhaps do edge detection?

Need accurate segmentation (very difficult) just to learn!

Higher order method, better edge detection?

Statistics are not even uniform, and the order must be very high.

Statistics change rapidly without apriori structure.

Important Properties

•Applicable for general nonstationary signals.

No non-robust edge detection, segmentation, training, learning, etc., required.

•Very robust technique.

Use it on speech, audio, seismic data, …

•This technique does not know anything about images.

•Just pick a transform that provides sparse decompositions using nonlinear approximation, the rest is automated.

(DCTs, wavelets, complex wavelets, etc.)

Main Algorithm

G : (N x N) orthonormal linear transformation.
c = G y : linear transform of y (y = [x_0; \hat{x}_1]).

•Start with an initial value.

•Get c = G y.

•Threshold coefficients to determine V(\hat{x}, T) -> sparsity constraint.

•Recover \hat{x}_1 by minimizing the energy of the insignificant coefficients, \sum_{i \in V} c_i^2 (equations or iterations).

•Reduce the threshold (the found solution becomes the initial value).
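A minimal 1-D sketch of this loop, under assumptions of mine that the slides leave open: G is an orthonormal DCT, the threshold follows a simple geometric schedule, and the minimization is done by iterating "zero the insignificant coefficients, re-impose the available pixels". The function names and constants are illustrative, not from the talk.

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II; rows are analysis vectors, so c = G x.
    n, k = np.arange(N), np.arange(N)
    G = np.cos(np.pi * (n[None, :] + 0.5) * k[:, None] / N)
    G[0] *= np.sqrt(1.0 / N)
    G[1:] *= np.sqrt(2.0 / N)
    return G

def recover(y, known, G, T0=1.0, Tmin=0.05, rho=0.8, inner=20):
    """Hard-threshold coefficients to pick the sparsity constraint,
    zero the insignificant coefficients (driving their energy to zero),
    re-impose the available pixels, then reduce T and repeat."""
    x, T = y.copy(), T0
    while T >= Tmin:
        for _ in range(inner):
            c = G @ x                 # c = G x
            c[np.abs(c) < T] = 0.0    # sparsity constraint at threshold T
            x = G.T @ c               # insignificant coefficient energy -> 0
            x[known] = y[known]       # available pixels stay fixed
        T *= rho                      # lower threshold; current solution
                                      # becomes the next initial value
    return x

# Toy usage: a signal that is 1-sparse in the DCT, with 4 lost samples.
N = 32
G = dct_matrix(N)
x_true = 4.0 * G[6]                   # a single DCT harmonic
known = np.ones(N, dtype=bool)
known[10:14] = False                  # the "lost block"
y = np.where(known, x_true, 0.0)
x_hat = recover(y, known, G)
```

For this toy signal the iteration recovers the missing samples essentially exactly; on real images the talk's point is that the adaptive constraint set does the work, not the particular schedule.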

Progression of Solutions

[Figure: pixel coordinates for a "two pixel" image; one available pixel, one missing pixel; the available pixel constraint is a line.]

Nonlinear app: search over class(K,T), a non-convex, star-shaped set.

Search over class(K_1, T_1).
T decreases -> search over class(K_2 >= K_1, T_2 <= T_1).
            -> search over class(K_3 >= K_2, T_3 <= T_2) ...

Class size increases.

Estimation Theory

Sparsity Constraint = Linear Estimation

Proposition 1: Solving the minimization subject to the sparsity constraint results in the linear estimate

\hat{x}_1 = A x_0

Proposition 2: Conversely, suppose that we start with a linear estimate for x_1 via \hat{x}_1 = A x_0, with A restricted to an n_0 dimensional subspace. This corresponds to a set of sparsity constraints.
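Proposition 1 is easy to check numerically. In the sketch below (a random orthonormal transform and arbitrary fixed index sets, all chosen by me for illustration), minimizing the energy of a fixed insignificant set V given the known pixels yields an estimate that is a fixed linear map of x_0:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
G, _ = np.linalg.qr(rng.standard_normal((N, N)))  # random orthonormal transform
known = np.array([0, 1, 2, 3, 4])                 # indices of x_0
miss = np.array([5, 6, 7])                        # indices of x_1
V = np.array([2, 4, 6])                           # fixed insignificant coefficient set

B = G[V, :]                        # rows of G selecting the coefficients in V
B0, B1 = B[:, known], B[:, miss]   # split by known / missing pixels
# Minimizing ||B0 x0 + B1 x1||^2 over x1 gives x1_hat = A x0 with a fixed A:
A = -np.linalg.lstsq(B1, B0, rcond=None)[0]

x0 = rng.standard_normal(known.size)
x1_hat = A @ x0

# The per-x0 minimizer coincides with the fixed linear map A applied to x0.
x1_direct = np.linalg.lstsq(B1, -B0 @ x0, rcond=None)[0]
assert np.allclose(x1_direct, A @ x0)

# And it is indeed a minimizer: any perturbation increases the energy.
delta = rng.standard_normal(miss.size)
r_opt = np.sum((B0 @ x0 + B1 @ x1_hat) ** 2)
r_pert = np.sum((B0 @ x0 + B1 @ (x1_hat + delta)) ** 2)
assert r_opt <= r_pert + 1e-9
```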

Required Statistics?

None. The statistics required in the estimation are implicitly determined by the utilized transform and V(x).

(V(x) is the index set of insignificant coefficients)

I will fix G and adaptively determine V(x).

(By hard-thresholding transform coefficients)

Apriori v.s. Adaptive

Method 1: \hat{V}(x) = V (fixed apriori):

min_A E[ ||x_1 - \hat{x}_1||^2 ] = min_A E[ ||x_1 - A x_0||^2 ]

Can at best be ensemble optimal for second order statistics. Does not capture nonstationary signals with edges.

Method 2: \hat{V}(x) = \hat{V}(x_0) (adaptive):

min_{A(x_0)} E[ ||x_1 - \hat{x}_1||^2 | x_0 ] = min_{A(x_0)} E[ ||x_1 - A(x_0) x_0||^2 | x_0 ],   A(x_0) x_0 = E[ x_1 | x_0 ]

Can at best be THE optimal! (optimality?)

J.P. D'Ales and A. Cohen, "Non-linear Approximation of Random Functions," SIAM J. of Applied Math, 57-2, 518-540, 1997.

Albert Cohen, Ingrid Daubechies, Onur G. Guleryuz, and Michael T. Orchard, "On the importance of combining wavelet-based nonlinear approximation with coding strategies," IEEE Transactions on Information Theory, July 2002.

Conclusion

•Simple, robust technique.

•Very good and promising performance.

•Estimation of statistics not required (have to pick G though).

•Applicable to other domains.

•Q: Classes of signals over which optimal? A: Nonlinear approximation classes of the transform. (intuitive)

•Signal dependent basis to expand the classes over which optimal.

•Help design better signal representations.

“Periodic” Example

DCT 9x9

+11.10 dB

Lower thresholds, larger classes.

[Plot: PSNR vs. iteration.]

Properties of Desired Transforms

•Periodic, approximately periodic regions: the transform should "see" the period.

Example: minimum period 8 -> at least 8x8 DCT (~ 3 level wavelet packets).

s(n) = \sum_k g(n - kM)

S(w) = G(w) \sum_l \delta(w - 2 \pi l / M)

[Figure: s(n), periodic with period M; |S(w)|, a line spectrum with zeroes.]

•Localized.

Want lots of small coefficients wherever they may be ...
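A small numeric sketch of "seeing the period" (the particular period-8 harmonic is chosen by me so that it lines up with a DCT frequency; real image patches will be messier):

```python
import numpy as np

def dct_rows(L):
    # Orthonormal DCT-II analysis matrix (rows are basis vectors).
    n, k = np.arange(L), np.arange(L)
    G = np.cos(np.pi * (n[None, :] + 0.5) * k[:, None] / L)
    G[0] *= np.sqrt(1.0 / L)
    G[1:] *= np.sqrt(2.0 / L)
    return G

def top1_energy_fraction(x):
    c = dct_rows(len(x)) @ x
    return np.max(c ** 2) / np.sum(c ** 2)

# A period-8 signal aligned with a DCT frequency.
s = lambda n: np.cos(np.pi * (n + 0.5) / 4)

good = top1_energy_fraction(s(np.arange(16)))  # block length >= period: sparse
bad = top1_energy_fraction(s(np.arange(6)))    # block shorter than the period: smeared
```

With a 16-point block the energy concentrates in a single coefficient; with a 6-point block, which cannot "see" the period, it smears across several.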

Periodic Example

(period=8)

DCT 8x8

Perfect reconstruction.

(Easy base signal, ~fast decaying envelope).

“Periodic” Example

DCT 24x24

+5.91 dB

(Harder base signal.)

Edge Example

DCT 8x8

+25.51 dB

(~ Separable, small DCT coefficients except for first row.)

Edge Example

DCT 24x24

+9.18 dB

(similar to vertical edge, but tilted)

Properties of Desired Transforms

•Periodic, approximately periodic regions: Frequency selectivity

•Edge regions:

Transform should have the frequency selectivity to “see” the slope of the edge.

•Localized

Overcomplete Transforms

[Figure: smooth | edge | smooth. A DCT block over an edge (not very sparse); a DCT block over a smooth region (sparse).]

min_{\hat{x}_1} [ \sum (c^1_{insignificant})^2 + \sum (c^2_{insignificant})^2 + \sum (c^3_{insignificant})^2 ]

(DCT2 = DCT1 shifted, DCT3 = ...)

Only the insignificant coefficients contribute. Can be generalized to denoising:

Onur G. Guleryuz, "Weighted Overcomplete Denoising," Proc. Asilomar Conference on Signals and Systems, Pacific Grove, CA, Nov. 2003.
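A sketch of this objective with two shifted block DCTs (the 1-D setting, the circular half-block shift, and the sizes are my simplifications):

```python
import numpy as np

def dct_rows(L):
    # Orthonormal DCT-II analysis matrix (rows are basis vectors).
    n, k = np.arange(L), np.arange(L)
    G = np.cos(np.pi * (n[None, :] + 0.5) * k[:, None] / L)
    G[0] *= np.sqrt(1.0 / L)
    G[1:] *= np.sqrt(2.0 / L)
    return G

# Two overcomplete views: a block-DCT tiling and the same tiling shifted
# by half a block (circularly, for simplicity).
L, B = 8, 4
N = L * B
D1 = np.kron(np.eye(B), dct_rows(L))        # DCT1: block tiling 1
shift = np.roll(np.eye(N), L // 2, axis=1)  # half-block circular shift
transforms = [D1, D1 @ shift]               # DCT1, DCT2 = DCT1 shifted

def insignificant_energy(x, transforms, T):
    # Sum, over the shifted transforms, of the energy of each
    # transform's below-threshold (insignificant) coefficients.
    total = 0.0
    for G in transforms:
        c = G @ x
        total += np.sum(c[np.abs(c) < T] ** 2)
    return total

x = np.random.default_rng(3).standard_normal(N)
e_all = insignificant_energy(x, transforms, np.inf)  # = 2 ||x||^2 (orthonormality)
```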

Properties of Desired Transforms

•Frequency selectivity for “periodic” + “edge” regions.

•Localized

Nonlinear approximation does not work for non-localized Fourier transforms!

J.P. D'Ales and A. Cohen, "Non-linear Approximation of Random Functions," SIAM J. of Applied Math, 57-2, 518-540, 1997.

(Overcomplete DCTs have more mileage since for a given freq. selectivity, have the ~smallest spatial support.)

“Periodic” Example

DCT 16x16

+3.65 dB

“Periodic” Example

DCT 16x16

+7.2 dB

“Periodic” Example

DCT 24x24

+10.97 dB

Edge Example

DCT 16x16

+12.22 dB

“Edge” Example

DCT 24x24

+4.04 dB

Combination Example

DCT 24x24

+9.26 dB

Combination Example

DCT 16x16

+8.01 dB

Combination Example

DCT 24x24

+6.73 dB

(not enough to “see” the period)

Unsuccessful Recovery Example

DCT 16x16

-1.00 dB

Partially Successful Recovery Example

DCT 16x16

+4.11 dB

Combination Example

DCT 24x24

+3.77 dB

“Periodic” Example

DCT 32x32

+3.22 dB

Edge Example

DCT 16x16

+14.14 dB

Edge Example

DCT 24x24

+0.77 dB

Robustness

G remains the same but \hat{V}(x_0) changes.

\hat{V}(x_0) Determination

•Start by layering the lost block (the lost block is potentially large). Estimate one layer at a time.

•Recover layer P by using information from layers 0, ..., P-1.

[Figure: image with lost block; DCT (LxL) tiling 1; outer border of layer 1; the k-th DCT block with overlaps o_w(k) and o_u(k).]

•Fix T. Look at DCTs that have limited spatial overlap with the missing data: hard threshold block k coefficients if o_w(k) < L/2 OR o_u(k) < L/2.

•Establish sparsity constraints by thresholding these DCT coefficients with T. (If |c(i)| < T, add to the sparsity constraints.)
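A hypothetical geometry helper mirroring the slide's rule; the coordinate conventions, bounding-box representation, and names here are my guesses at what the figure depicts:

```python
# Decide whether the k-th L x L DCT block qualifies for thresholding:
# per the slide, threshold block k if its overlap with the lost area is
# below L/2 horizontally (o_w) or vertically (o_u).
def limited_overlap(block_corner, lost_box, L):
    bx, by = block_corner          # top-left corner of the L x L DCT block
    lx0, ly0, lx1, ly1 = lost_box  # lost-block bounding box
    o_w = max(0, min(bx + L, lx1) - max(bx, lx0))  # horizontal overlap
    o_u = max(0, min(by + L, ly1) - max(by, ly0))  # vertical overlap
    return o_w < L / 2 or o_u < L / 2

# A block far from the lost area qualifies; one fully inside does not.
assert limited_overlap((0, 0), (100, 100, 116, 116), 8)
assert not limited_overlap((100, 100), (100, 100, 116, 116), 8)
```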

\hat{V}(x_0) Determination II
