Lecturer: Ron DeVore, Texas A&M University, Texas, USA
Webpage: http://www.math.tamu.edu/~rdevore/
Title: HIGH DIMENSIONAL APPROXIMATION: THEORY AND ALGORITHMS
Date and time: Mon, April 22, 2013, 10:00 to 12:00; Tue, April 23 to Fri, April 26, 2013, 9:00 to 11:00

Abstract: This course will study scientific problems that are challenged by the fact that they live naturally in a Euclidean space R^N with N very large. We mention three settings which will be considered in this short course.

Broad-banded signals: It is well known that a bandlimited signal (a function on R) can be recovered exactly from its equispaced time samples taken at the Nyquist rate. However, when the bandwidth of the signal is large, the required sampling rate cannot be implemented accurately in circuitry. This leads us to consider other models for signals which are still realistic but allow the signal to be captured with far fewer measurements. We shall study one such setting, called Compressed Sensing (CS), which models signals as having a sparse or compressible representation in a suitably chosen basis of waveforms. We shall develop the mathematical foundations of compressed sensing, culminating in a derivation of the optimal rate/distortion that can be achieved with CS and a discussion of the encoders/decoders which achieve this optimality.
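As a concrete companion to this setting, here is a minimal numerical sketch (my own illustration, not part of the course materials): a sparse vector in R^N is recovered from m << N random Gaussian measurements, a classical matrix family satisfying the restricted isometry property (reference 3 below), using orthogonal matching pursuit as one simple decoder. All sizes and the choice of decoder are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Problem sizes: signal dimension N, number of measurements m << N,
# sparsity level s (all chosen for illustration only).
N, m, s = 256, 64, 5

# An s-sparse signal x in R^N.
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)

# Gaussian measurement matrix, scaled so that columns have roughly unit
# norm; such matrices satisfy the restricted isometry property with
# high probability.
A = rng.standard_normal((m, N)) / np.sqrt(m)
y = A @ x

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily select the column most
    correlated with the residual, then least-squares refit on the
    selected support."""
    residual, support = y.copy(), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, s)
print("recovery error:", np.linalg.norm(x - x_hat))

With these sizes the recovery error is typically near machine precision even though the 64 linear measurements are far fewer than the 256 unknowns; quantifying when and how well such recovery is possible is the rate/distortion question the course addresses.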

Mathematical learning: The mathematical theory of data fitting in a stochastic setting is known as learning. Many interesting learning problems are set in high dimension and must engage the problem of approximating functions of a large number of variables. Classical approaches to approximation in several variables suffer from the curse of dimensionality: the approximation rate degrades severely as the dimension increases. We shall consider possible ways to avoid the curse of dimensionality through variable reduction and sparsity.
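To make the variable-reduction idea concrete, the following toy sketch (again my own illustration, in the spirit of reference 9 below) treats a function of 100 variables that secretly depends on only two coordinates: finite differences at random probe points detect the active coordinates, after which any approximation scheme operates in dimension 2 rather than 100.

import numpy as np

rng = np.random.default_rng(1)
N = 100  # ambient dimension

# Hidden structure, unknown to the algorithm: f depends only on
# coordinates 3 and 17 of its 100 arguments.
def f(x):
    return np.sin(2.0 * x[3]) + x[17] ** 2

def detect_active(func, N, n_probes=30, h=1e-5, tol=1e-6):
    """Flag coordinate j as active if perturbing it at some random
    probe point changes the function value noticeably."""
    scores = np.zeros(N)
    for _ in range(n_probes):
        x = rng.uniform(-1.0, 1.0, size=N)
        fx = func(x)
        for j in range(N):
            xp = x.copy()
            xp[j] += h
            scores[j] = max(scores[j], abs(func(xp) - fx) / h)
    return np.where(scores > tol)[0]

active = detect_active(f, N)
print("detected active coordinates:", active)  # expect [ 3 17]

# Any subsequent approximation (interpolation, least squares, ...) now
# takes place in dimension len(active) = 2 instead of 100, avoiding the
# curse of dimensionality for functions with this hidden structure.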

Stochastic PDEs: The classical approach to solving stochastic PDEs is Monte Carlo methods. While these methods are robust, they have severe limits in terms of their rate/distortion bounds. We shall study alternatives to Monte Carlo methods which utilize Wiener chaos expansions to convert the stochastic PDE into a deterministic parametric PDE. The number of parameters in this approach is infinite, and therefore its success depends on capturing a function of an infinite number of variables in a numerically realistic way. We shall derive properties of the solutions to such parametric problems in the elliptic case which show that these solutions have highly accurate sparse approximations. We shall then derive potential numerical approaches to capturing these sparse approximants.
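The following toy computation (an illustrative sketch under stated assumptions, not the course's algorithm) mimics the parametric elliptic case with a scalar model: the rational map u(y) = 1/(a0 + sum_j c_j y_j) stands in for the solution map of a parametric diffusion problem with affine coefficient a(y) = a0 + sum_j c_j y_j. Its Taylor coefficients at y = 0 are known in closed form and decay anisotropically, so keeping only the largest ones yields an accurate sparse (best n-term) approximation, the phenomenon analyzed in references 5 and 7 below.

import numpy as np
from itertools import product
from math import factorial, prod

# Toy stand-in for a parametric elliptic solution map (all numbers are
# illustrative assumptions): u(y) = 1 / (a0 + sum_j c_j y_j) for
# y in [-1, 1]^d, with anisotropically decaying weights c_j.
d, a0 = 6, 2.0
c = np.array([0.5 / (j + 1) ** 2 for j in range(d)])

def u(y):
    return 1.0 / (a0 + c @ y)

def taylor_coef(nu):
    """Exact Taylor coefficient of u at y = 0 for multi-index nu:
    t_nu = (|nu|!/nu!) * (-1)^|nu| * c^nu / a0^(|nu|+1)."""
    n = sum(nu)
    mult = factorial(n) / prod(factorial(k) for k in nu)
    return mult * (-1.0) ** n * prod(ci ** k for ci, k in zip(c, nu)) / a0 ** (n + 1)

# Enumerate multi-indices of total degree <= p, then keep only the
# n_terms largest coefficients: a best n-term Taylor approximation.
p, n_terms = 4, 25
indices = [nu for nu in product(range(p + 1), repeat=d) if sum(nu) <= p]
coefs = {nu: taylor_coef(nu) for nu in indices}
best = sorted(coefs, key=lambda nu: abs(coefs[nu]), reverse=True)[:n_terms]

def u_sparse(y):
    return sum(coefs[nu] * prod(yi ** k for yi, k in zip(y, nu)) for nu in best)

# Compare the sparse approximant to the true map at random parameters.
rng = np.random.default_rng(2)
Y = rng.uniform(-1.0, 1.0, size=(1000, d))
err = max(abs(u(y) - u_sparse(y)) for y in Y)
print(f"kept {n_terms} of {len(indices)} coefficients; max error {err:.2e}")

The printed error shows how few of the 210 candidate coefficients are needed over the whole parameter box; the decay of the weights c_j is what makes this sparsity possible, and the analogous decay for genuine parametric elliptic PDEs is exactly what the course establishes.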

Theory and algorithms for high dimensional problems are in their infancy, and so this course will expose many interesting open questions.

Bibliography:

1. P. Binev, A. Cohen, W. Dahmen, R. DeVore, G. Petrova, and P. Wojtaszczyk, Convergence rates for greedy algorithms in reduced basis methods, SIAM J. Math. Anal., 43 (2011), 1457–1472.

2. A. Buffa, Y. Maday, A.T. Patera, C. Prud'homme, and G. Turinici, A priori convergence of the greedy algorithm for the parameterized reduced basis, M2AN Math. Model. Numer. Anal., 46 (2012), 595–603.

3. R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, A simple proof of the restricted isometry property for random matrices, Constructive Approximation, 28 (2008), 253–263.

4. E. Candès, J. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Comm. Pure Appl. Math., 59 (2006), 1207–1223.

5. A. Chkifa, A. Cohen, R. DeVore, and C. Schwab, Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs, submitted.

6. A. Cohen, W. Dahmen, and R. DeVore, Compressed sensing and best k-term approximation, J. Amer. Math. Soc., 22 (2009), 211–231.

7. A. Cohen, R. DeVore, and C. Schwab, Convergence rates of best n-term Galerkin approximations for a class of elliptic SPDEs, Found. Comput. Math., 10 (2010), 615–646.

8. R. DeVore, Nonlinear approximation, Acta Numerica, 7 (1998), 51–150.

9. R. DeVore, G. Petrova, and P. Wojtaszczyk, Approximating functions of few variables in high dimensions, Constructive Approximation, 33 (2011), 125–143.

10. J. Haupt, R. Castro, and R. Nowak, Distilled sensing: adaptive sampling for sparse detection and estimation, preprint, 2010.

11. Y. Maday, A.T. Patera, and G. Turinici, A priori convergence theory for reduced-basis approximations of single-parametric elliptic partial differential equations, J. Sci. Comput., 17 (2002), 437–446.

12. Y. Maday, A.T. Patera, and G. Turinici, Global a priori convergence theory for reduced-basis approximations of single-parameter symmetric coercive elliptic partial differential equations, C. R. Acad. Sci. Paris, Ser. I, Math., 335 (2002), 289–294.