
INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer.

The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book.

Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order.

UMI
A Bell & Howell Information Company
300 North Zeeb Road, Ann Arbor MI 48106-1346 USA
313/761-4700 800/521-0600


Minimax Optimal Density Estimation

A Dissertation

Presented to the Faculty of the Graduate School

of

Yale University

in Candidacy for the Degree of

Doctor of Philosophy

by

Yuhong Yang

Dissertation Director: Professor Andrew R. Barron

May 1996


UMI Number: 9635410

Copyright 1996 by Yang, Yuhong. All rights reserved.

UMI Microform 9635410 Copyright 1996, by UMI Company. All rights reserved.

This microform edition is protected against unauthorized copying under Title 17, United States Code.

300 North Zeeb Road Ann Arbor, MI 48103


©1996 by Yuhong Yang

All rights reserved.


ABSTRACT

Minimax Optimal Density Estimation

Yuhong Yang

Yale University

May 1996

Information-theoretic tools are used to derive minimax risk bounds for density estimation. A metric entropy condition alone determines the minimax rate of convergence in each class of density functions. To achieve the minimax rates simultaneously for multiple function classes, we consider lists of finite-dimensional approximating models and use model selection criteria related to AIC and MDL to select adaptively a good model based on data. The use of many candidate models, as in the case of subset selection, provides more flexibility for adaptation, yet significant selection bias can occur with criteria such as AIC. We incorporate a model complexity term in the model selection criteria to handle this selection bias. It is shown that the risk of the estimated density is bounded by an index of resolvability, which characterizes the best tradeoff among approximation error, estimation error, and model complexity. As an application, we show that the optimal rate of convergence is simultaneously achieved for densities in the Sobolev spaces $W_2^s(U)$ without knowing the smoothness parameter $s$ and norm parameter $U$ in advance. Applications to neural network models and sparse density estimation are also provided.


To Dr. Yan Xin


Acknowledgements

I am deeply indebted to my advisor, Professor Andrew Barron. I feel extremely lucky to have been a student of his, learning from him and working with him throughout these past five years. I have benefited so much from his deep insight into the fields of statistics and information theory. I appreciate the immeasurable amount of time he has spent talking with me. Professor Barron has been tremendously generous, caring, and patient. I have learned so much from him in every aspect of my life.

My sincere thanks also go to other members of the faculty: Professor David Pollard for his fabulous teaching and encouragement, which gave me much inspiration for statistical research; Professor Joseph Chang for his enthusiasm and his wonderful courses; Professor John Hartigan for teaching me applied statistics and for his valuable suggestions; and Professor Nicolas Hengartner for his time and effort in reading my dissertation.

I appreciate all the help I had from the department during a difficult time in my life. I will cherish forever the help, support and friendship from Professor Andrew Barron, Barbara Amato-Kuslan, Lucy Kennedy and many others.

Finally, I want to thank my family: my parents, my sisters and my wife Zhaohui, for their boundless love and support. My parents have given me everything, asking nothing in return. This work would never have been completed without Zhaohui's love, emotional support, and encouragement.


Contents

1 Introduction

2 Minimax rates of convergence
  2.1 Background
  2.2 Main results
    2.2.1 Minimax under global entropy condition
    2.2.2 Minimax rates under $L_2$ loss
    2.2.3 Minimax rates under K-L loss
    2.2.4 Lower bounding minimax risk through upper bounds
  2.3 Application in data compression
  2.4 Application in nonparametric regression
  2.5 Examples

3 Adaptation of Density Estimation
  3.1 Adaptation under entropy conditions
  3.2 Adaptation based on existing good estimators

4 Model Selection for Density Estimation
  4.1 Introduction
  4.2 A key lemma
  4.3 Main results
  4.4 Applications
    4.4.1 Sequences of exponential families
    4.4.2 Neural network models
    4.4.3 Estimating a not strictly positive density
    4.4.4 Complete models versus sparse subset models
  4.5 Proofs of the main lemmas
  4.6 Some simple inequalities used for main results


Chapter 1

Introduction

We are interested in estimating a density function based on an independent and identically distributed sample. Density estimation provides a good way to understand the distribution that governs the data, and is also used in some other statistical procedures such as nonparametric discriminant analysis and clustering analysis.

The simplest density estimation approaches use parametric models, and then the estimation problems reduce to parameter estimation. When data become more irregular than the usual parametric models can handle, nonparametric methods have been proposed and widely used in statistical applications to provide enough flexibility to capture the complexities of the density curves. Roughly speaking, there are two kinds of nonparametric procedures for density estimation in practice. Some are fully nonparametric in the sense that no operating finite-dimensional parametric models are involved in the statistical procedures, and thus no usual parameter estimation is required. For instance, in kernel density estimation, only choices of a kernel function and a bandwidth are involved. Some smoothing spline density estimation procedures also belong to this category. Other nonparametric procedures do not abandon the parametric structure completely. Rather, they use parametric models to perform estimation, but allow the freedom to use more and more complicated parametric models as the data suggest the necessity of doing so. The main difference between these procedures and the traditional parametric approaches is that the underlying curve is not assumed to be in any of the finite-dimensional models, and the parametric models are used to approximate the true function.

Of course, any statistical procedure should be evaluated for us to understand how well and when the procedure works. Many evaluation criteria have been used for density estimation, depending on the nature of the problems, particular interests, or convenience of handling. For instance, one could consider the statistical risk of an estimator at a specific sample point, or consider global performance of the estimator under some loss function. In this dissertation, we focus on density estimation under global risks using the Hellinger loss, the $L_2$ loss, or the Kullback-Leibler loss.

Suppose the true density function is in a certain function class (a target class). For a given estimator, the worst case risk (or supremum risk if the maximum is not achieved) is the maximum risk of the estimator over the target class. The minimax risk is then the smallest (or infimum) possible worst case risk over all density estimators based on the sample. This quantity characterizes how well we can estimate a density function using a sample in a uniform sense. A minimax procedure (which produces the minimax risk) gives the best protection for the worst case risk. Intuitively, a minimax procedure might be too conservative, and indeed it is in some cases. But with certain choices of intrinsically invariant loss functions (e.g., K-L loss), the minimax risk may characterize the typical risk for functions in the target class (see, e.g., Barron and Hengartner (1995)).

Among well-studied nonparametric density classes are classical smooth function classes such as Sobolev spaces and Besov classes. For such nonparametric classes, many asymptotic results have been obtained about the minimax risks and about various nonparametric procedures (see Donoho, Johnstone, et al. (1995) for references). Roughly speaking, the minimax risk of a class of densities having more derivatives converges to zero faster than that of a class of densities assumed to have fewer derivatives as the sample size increases. The smoothness conditions are often used in the construction of minimax-rate estimators whose worst case risks converge within a constant factor of the minimax risk. For instance, for some smooth classes, a kernel density estimator with bandwidth suitably chosen according to the smoothness condition of the target class converges to the true density at the minimax rate (see, e.g., Devroye (1987)). Similar results have also been obtained for some sieve estimators with the parametric model sizes suitably chosen, again according to smoothness conditions (see, e.g., Stone (1990, 1994), Birge and Massart (1993), Barron and Sheu (1991), and Wong and Shen (1995)).

The above mentioned procedures, constructed depending on smoothness conditions of the target classes, entail a great difficulty in application, because it is impossible to know in advance how smooth the true density function is in practical situations. A procedure with a predetermined bandwidth in kernel estimation or a predetermined parametric model size in sieve estimation is unlikely to always produce a very good estimator. This suggests that in practice, different bandwidths or model sizes should be considered and a suitable one needs to be chosen based on the data (instead of subjective assumptions).

The above consideration calls for adaptive estimation procedures in density estimation. In this work, we are interested in minimax-rate adaptiveness over multiple density classes. The true underlying density function is assumed to be in one of a countable collection of classes. For each given class, a good estimator (e.g., one minimax-rate optimal for that class) could be obtained. Without knowing which class contains the true density, can we have a single estimation procedure that works optimally in the minimax-rate sense simultaneously for all the classes being considered? Such an estimator is minimax-rate adaptive because it automatically adjusts to the nature of the true density based only on the data, so that it converges at the right minimax rate for all the classes.

Many adaptive procedures have been proposed for different statistical problems. For nonparametric regression, methods have been introduced to adaptively select the bandwidth for kernel estimators (e.g., Hardle, Hall and Marron (1985)), smoothness parameters for smoothing splines (e.g., Craven and Wahba (1979)), model size for linear estimators using parametric models (Shibata (1981), Li (1987)), or basis selection for wavelet estimators (Donoho, Johnstone, et al. (1995)). For density estimation, Efroimovich (1985) considered linear procedures using projection estimators for the trigonometric coefficients and proposed a final estimator which was shown to be adaptive among some ellipsoidal classes with different smoothness conditions. Donoho, Johnstone, et al. (1993) considered adaptive wavelet estimators for density estimation. Recently, Birge and Massart (1995), and Barron, Birge, and Massart (1995) have obtained general model selection results using various contrast functions.

In this dissertation, I will address several issues in density estimation: minimax rates of convergence for a given density class; adaptation among a general collection of density classes; and model selection for adaptive density estimation. The following gives some description of the problems we will deal with and summarizes the main results we have obtained in these directions.

1. Minimax rates of convergence.


Due to the work of Le Cam (1973), Birge (1983, 1986) and other researchers, metric entropy of a target class is believed by many to determine the minimax rates of convergence. Indeed, Birge (1986) gives good minimax upper bounds based only on local Hellinger metric entropy, but in deriving lower bounds, he takes a rough bound on Shannon's mutual information for the use of Fano's inequality and uses an additional condition other than entropy on the target class in his work. This extra condition is not always easy to check and is not necessary once the metric entropy structure is known. We use better bounds on Shannon's mutual information, derived with information-theoretic tools, to obtain minimax lower bounds based only on global metric entropy. Some upper bounds on the minimax risk are also provided based on the Kullback-Leibler distance. As a result, it is shown that metric entropy is indeed essentially the only quantity needed to determine the minimax rate of convergence for a general density class. The key fact used in the derivation of the minimax results is the connection between density estimation and data compression in an information theory context.

2. Adaptation over a general collection of density classes.

Under some mild conditions, we construct minimax-rate adaptive estimators using metric entropies of a general collection of nonparametric density classes. The construction of the adaptive estimators involves mixing densities shown to be minimax-rate optimal for each class. The result is that one estimator is simultaneously minimax-rate optimal for all the classes.

3. Model selection for adaptive density estimation.

A practical way to get adaptive estimators is through the use of model selection criteria to come up with models that produce good estimators for the given sample size. Consider approximating the true density function by some finite-dimensional parametric models. Given the approximating models, we use a model selection criterion related to AIC and MDL to compare the models and show that the risk of the density estimator based on the selected model is upper bounded by an index of resolvability, which characterizes the best trade-off between the approximation error (bias) and estimation error among all the models being considered. Thus upper bounds on the worst case risk of the estimator based on model selection for a density class can be obtained by examining the index of resolvability for this class. With the approximating models suitably chosen for the target classes, this approach can produce adaptive estimators, and the adaptation property can be shown by evaluating the index of resolvability. As an example, we show that the minimax rates of convergence are simultaneously achieved by a density estimator based on model selection for Sobolev spaces without knowing the smoothness parameter and norm parameter in advance.

When exponentially many models are considered (as in subset selection problems), significant selection bias may occur with empirically based model selection criteria. To handle the selection bias, we incorporate a model complexity penalty term in the model selection criteria and show that the risk of the density estimator based on these criteria is upper bounded by the best trade-off among approximation error, estimation error and model complexity. Applications in subset selection for density estimation and in some neural network models will be provided.

The dissertation is accordingly divided into 4 chapters.


Chapter 2

Minimax rates of convergence

2.1 Background

Let $X_1, X_2, \ldots, X_n$ be an independent, identically distributed sample from some distribution on a measurable space $\mathcal{X}$. We assume the probability distribution is dominated by a $\sigma$-finite measure $\mu$ on $\mathcal{X}$, with a density function which is assumed to be in a density class $\{p_\theta : \theta \in \Theta\}$ with respect to $\mu$. The parameter space $\Theta$ could be a finite-dimensional space or a nonparametric space (e.g., the class of all densities, or square root densities). We want to estimate the true density $p_\theta$ or $\theta$ based on the sample.

Density estimation is important because it may extract useful information about the distribution that governs the data. For instance, with a good density estimator, we can gain some visual understanding of whether the distribution is skewed or has multiple modes. Some other statistical procedures also require estimation of certain densities. For example, density estimation is required in some nonparametric discriminant analysis methods and clustering analysis methods. For more details about the applications of density estimation, see Silverman (1986).

Nonparametric density estimation has been studied intensively in the past few decades. Rosenblatt (1956) proposed a moving window estimator, which was later generalized to kernel estimators (Parzen (1962), Cacoullos (1966)). Minimum distance estimators for density estimation were studied by Le Cam (1966), Pfanzagl (1968), Beran (1977), Pollard (1980), Millar (1981, 1983), and Yatracos (1985). Sieve density estimators using suitably chosen approximating models are studied by Cencov (1982), Portnoy (1988), Stone (1990, 1994), Barron and Sheu (1991), Birge and Massart (1993, 1995), Shen and Wong (1994), Wong and Shen (1995), and others. Adaptive density estimation has been studied by Efroimovich (1985), Donoho, Johnstone, et al. (1993), Birge and Massart (1995), and Barron, Birge and Massart (1995). More discussion and some results about adaptive density estimation will be given in Chapter 3.

In this chapter, we study some essential questions about density estimation. Several global loss functions will be considered for the evaluation of density estimators. They include the Kullback-Leibler (K-L) loss (also called relative entropy), the Hellinger loss and the $L_2$ loss. We are interested in minimax risks of estimators of the density (or of other parameters). For a given density class, we study how fast the minimax risk goes to zero and what essential property of the target class determines the minimax rate of convergence.

In statistical decision theory, minimax methods are of interest. Minimax risk characterizes how well we can estimate a parameter or the whole density in a target class in a uniform sense. Thus the minimax risk provides us with insight into the limitation we cannot overcome in the worst case. Furthermore, in many situations, the minimax rate captures not only the worst case but also the typical rate of convergence. Indeed, an intrinsic homogeneity of some of the loss functions we consider here leads to the existence of (minimax-rate optimal) estimators that have nearly equivalent risk throughout the parameter space. We determine minimax risk bounds for subclasses $\{p_\theta : \theta \in S\}$, $S \subset \Theta$, which may be parametric or nonparametric (e.g., the class of densities with a certain derivative satisfying a Lipschitz condition).

Let $\bar S$ be an action space for the parameter estimates with $S \subset \bar S \subset \Theta$. An estimator of $\theta$ is then a measurable mapping from the sample space of $X_1, X_2, \ldots, X_n$ to $\bar S$. Let $\mathcal{A}_n$ be the collection of all such estimators. For nonparametric density estimation, $\bar S = \Theta$ is often chosen to be the set of all densities or some transform of the densities (e.g., square roots of densities). We consider a general loss function $d$, which is a mapping from $\bar S \times \bar S$ to $\mathbb{R}^+$ with $d(\theta,\theta') > 0$ for $\theta \neq \theta'$. We call $d$ a distance whether or not it satisfies the properties of a metric. The minimax risk of estimating $\theta \in S$ with action space $\bar S$ is defined as
$$R_n = \min_{\hat\theta \in \mathcal{A}_n} \max_{\theta \in S} E_\theta\, d^2(\theta, \hat\theta).$$
Here "min" and "max" are understood to be "inf" and "sup" respectively if the minimizer or maximizer does not exist.

Related problems are point estimation, such as estimating the density or a regression value at some point $x_0 \in \mathcal{X}$, or estimating some functional of the density function. We here study a global measure of loss. For characterizations of minimax rates of point estimation, see Donoho and Liu (1991), Bickel and Ritov (1988), and Birge and Massart (1992).

The minimax rates of convergence are often determined by deriving a minimax upper bound using a specific estimator and obtaining a minimax lower bound in some way. If the maximum risk of the estimator is within a constant factor of the derived lower bound, then the minimax rate of convergence is obtained. For global minimax risk as we are considering here, two methods are often used to derive the minimax lower bounds: Fano's inequality and Assouad's lemma. The first is used in Hasminskii (1978), Ibragimov and Hasminskii (1980, 1982), Efroimovich and Pinsker (1982), Nemirovskii (1986), and Hasminskii and Ibragimov (1990). The second is utilized in Bretagnolle and Huber (1979), Birge (1986), and Devroye (1987). Birge (1986) claims that Fano's inequality is more general and could replace Assouad's lemma in almost all practical situations. Yu (1995) gives a lower bound similar to Assouad's in terms of the Kullback-Leibler distance using Fano's inequality.

Both Assouad's lemma and Fano's inequality, as previously used, involve first a restriction to a local subset of the function space with special properties of packing sets in such a subset.

The purpose of our work here is to demonstrate situations in which the convergence rate is determined by the global metric entropy over the whole function class (or over large subsets of it). The advantage of this approach is that the metric entropies are available in approximation theory for many function classes. In such cases, it is not necessary to uncover local packing properties.

We prove the following result characterizing the minimax convergence rate in terms of metric entropy. Let $d(f,g)$ be a distance, let $N(\epsilon;\mathcal{F})$ be the size of the largest packing set of density functions in the class separated by $\epsilon$, and let $\epsilon_n$ satisfy $\epsilon_n^2 = M(\epsilon_n;\mathcal{F})/n$, where $M(\epsilon;\mathcal{F}) = \log N(\epsilon;\mathcal{F})$ is the metric entropy and $n$ is the sample size. Assume the target class is rich enough to satisfy $\liminf_{\epsilon \to 0} M(\epsilon/2;\mathcal{F})/M(\epsilon;\mathcal{F}) > 1$ (which is true if $M(\epsilon;\mathcal{F}) = (1/\epsilon)^r w(\epsilon)$ with $r > 0$ and $w(\epsilon/2)/w(\epsilon) \to 1$ as $\epsilon \to 0$). This condition is satisfied by the usual smooth nonparametric classes. For convenience, we will use the symbols $\succeq$ and $\asymp$: $a_n \succeq b_n$ means $b_n = O(a_n)$, and $a_n \asymp b_n$ means both $a_n \succeq b_n$ and $b_n \succeq a_n$.


Proposition: In the following cases, the minimax convergence rate is characterized by metric entropy in terms of the critical radius $\epsilon_n$ as follows:
$$\min_{\hat f}\max_{f\in\mathcal{F}} E_f\, d^2(f,\hat f) \asymp \epsilon_n^2.$$

1. $\mathcal{F}$ is any class of density functions bounded above and below, $0 < \underline{C} \le f \le \bar{C}$ for $f \in \mathcal{F}$. Here $d^2(f,g)$ is either the integrated squared distance $\int (f(x)-g(x))^2\, d\mu$, the squared Hellinger distance, or the Kullback-Leibler divergence.

2. $\mathcal{F}$ is a convex class of densities with $f \le \bar{C}$ for $f \in \mathcal{F}$ and there exists at least one density in $\mathcal{F}$ bounded away from zero, and $d$ is the $L_2$ distance.

3. $\mathcal{F}$ is any class of functions $f$ with $f \le \bar{C}$ for $f \in \mathcal{F}$ for the regression model $Y = f(X) + \epsilon$, where $X$ and $\epsilon$ are independent, $X \sim P_X$ and $\epsilon \sim \mathrm{Normal}(0,\sigma^2)$ with $\sigma > 0$, and $d$ is the $L_2(P_X)$ norm.

Now let us outline roughly the method of lower bounding the minimax risk using Fano's inequality. The first step is to restrict attention to a subset $S_0$ of the parameter space where minimax estimation is nearly as difficult as for the whole space and moreover, where the loss function of interest is related locally to the Kullback-Leibler divergence that arises in Fano's inequality. (For example, the subset can in some cases be the set of densities with a bound on their logarithms.) As we shall reveal, the lower bound on the minimax rate is determined by the metric entropy of the subset.

The proof technique involving Fano's inequality first lower bounds the minimax risk by restricting to a finite set of parameter values $\{\theta_1,\ldots,\theta_m\}$ separated from each other by an amount $\epsilon_n$ in the distance of interest. The critical separation $\epsilon_n$ is the largest separation such that the hypotheses $\theta_1,\ldots,\theta_m$ are nearly indistinguishable on the average by tests. Fano's inequality reveals this indistinguishability in terms of the Kullback-Leibler divergence between the joint densities $p_{\theta_j}(x_1,\ldots,x_n) = \prod_{i=1}^n p_{\theta_j}(x_i)$ and the centroid of such densities $q(x_1,\ldots,x_n) = \frac{1}{m}\sum_{j=1}^m p_{\theta_j}(x_1,\ldots,x_n)$. Here the key question is to determine the separation such that the average of this K-L divergence is small compared to the distance $\log m$ that would correspond to maximally distinguishable densities (for which $\theta$ is determined by $X^n$). It is critical here that the K-L divergence does not have a triangle inequality between the joint densities. We show that the K-L divergence from every $p_{\theta_j}(x_1,\ldots,x_n)$ to the centroid is bounded by the right order $2n\epsilon_n^2$, even though the distance between two such $p_{\theta_j}(x_1,\ldots,x_n)$ is as large as $n\beta$, where $\beta$ is the K-L diameter of the whole set $\{p_{\theta_1},\ldots,p_{\theta_m}\}$. The proper convergence rate is thus identified provided the cardinality $m$ of the subset is chosen such that $n\epsilon_n^2/\log m$ is bounded by a suitable constant less than 1. The metric entropy (the logarithm of the largest cardinality of an $\epsilon_n$-packing set) determines when this can be done.

Previous uses of Fano's inequality used the coarse bound $n\beta$ (or a similar rough bound) on the K-L diameter of the set $\{p_{\theta_1},\ldots,p_{\theta_m}\}$. In that theory, to obtain a suitable bound, a statistician needs to find a suitable subset $\{\theta_1,\ldots,\theta_m\}$ with diameter $\beta$ of the order of $\epsilon_n^2$ and $\log m$ of the order of the metric entropy. Typical tools involve perturbations of densities parametrized by vertices of a hypercube. While interesting, such involved calculations are not needed to obtain the correct order bounds. It suffices to know or bound the metric entropy of the chosen set $S_0$.

It is not our purpose to criticize the use of hypercube-type arguments in general to determine the minimax rates of convergence. In fact, besides the success of such methods in deriving minimax rates as demonstrated in Birge (1983, 1986), they are also useful in other applications such as determining the minimax rates of estimating functionals of densities (see, e.g., Bickel and Ritov (1988), Birge and Massart (1992), and Pollard (1993)). Our point here is that for function classes with metric entropy of known order, there is no need to identify a special subset to get the right order lower bound.

The density estimation problem we consider is closely related to a data compression problem in information theory (see Section 2.3). The relationship allows us to obtain both upper and lower bounds on the minimax risk from upper bounding a maximum redundancy, which can be easily related to the global metric entropy. Combining the new lower bounds with upper bounds, the minimax rates of convergence are determined from the metric entropy properties alone.

In previous analyses, the techniques used for upper bounds seem to be unrelated to those for lower bounds. One of our findings is that, given a certain metric entropy of a density class, an upper bound on the minimax K-L risk immediately results in a lower bound on the minimax risk.

This chapter is divided into 5 sections. In Section 2, the main results are presented. Applications in data compression and regression are given in Sections 3 and 4, respectively. In Section 5, we demonstrate the determination of minimax rates of convergence for several classes of densities.


2.2 Main results

We first give definitions of “metric” entropies.

Definition 2.1: A finite set $N_\epsilon \subset S$ is said to be an $\epsilon$-packing set ($\epsilon > 0$) in $S$ if for any $\theta, \theta' \in N_\epsilon$, $\theta \neq \theta'$, we have $d(\theta,\theta') > \epsilon$, and for any $\theta \in S$, there exists a $\theta_0 \in N_\epsilon$ such that $d(\theta,\theta_0) \le \epsilon$. We call $\epsilon$ the packing radius.

Definition 2.2: A set $G_\epsilon \subset \bar S$ is said to be an $\epsilon$-net for $S$ if for any $\theta \in S$, there exists a $\theta_0 \in G_\epsilon$ such that $d(\theta,\theta_0) \le \epsilon$.

Definition 2.3: Let $M_d(\epsilon)$ be the logarithm of the maximum cardinality of any $\epsilon$-packing set in $S$. We call $M_d(\epsilon)$ the packing $\epsilon$-entropy of $S$.

Definition 2.4: Let $V_d(\epsilon)$ be the logarithm of the minimum cardinality of any $\epsilon$-net for the set $S$. We call $V_d(\epsilon)$ the covering $\epsilon$-entropy of $S$.

From the definitions, it is clear that $M_d(\epsilon)$ and $V_d(\epsilon)$ are nonincreasing in $\epsilon$. Kolmogorov and Tihomirov (1959) showed that $M_d(\epsilon)$ and $V_d(\epsilon)$ are right continuous when $d$ is a metric. The same proof works to show that $M_d(\epsilon)$ is also right continuous for any distance $d$.

The above definitions are slight generalizations of the metric entropy notions introduced by Kolmogorov and Tihomirov (1959). We do not require the distance $d$ to be a metric. In fact, one choice of $d$ will be the square root of the relative entropy or Kullback-Leibler (K-L) distance. Let $d_K^2(\theta,\theta') = D(p_\theta \,\|\, p_{\theta'}) = \int p_\theta \log(p_\theta/p_{\theta'})\, d\mu$. Clearly $d_K(\theta,\theta')$ is asymmetric in its two arguments, so it cannot be a metric. The major shortcoming of this distance is that there is no triangle-like inequality in general; that is, there might not exist a constant $c > 0$ such that $d_K(\theta,\theta') + d_K(\theta',\theta'') \ge c\, d_K(\theta,\theta'')$ for any $\theta, \theta'$ and $\theta''$ in $S$. Such an inequality would usually be used to rule out the possibility that one density estimator is too close in $d_K$ to too many densities which are far away from each other in the same distance. This might happen with the K-L distance. It seems necessary to have some additional conditions enabling a triangle-like inequality in order to obtain a reasonable lower bound in terms of the $d_K$ distance. The following example demonstrates this point.


Example 2.1: Consider densities on $[0,1]$ with respect to the Lebesgue measure. Let $S = \{\theta : 0 \le \theta \le \frac12\}$ and $p_\theta(x) = 2\, I_{\{\theta \le x \le \theta + \frac12\}}$. Then for any $\epsilon > 0$, an $\epsilon$-packing set under $d_K$ must be $S$ itself (for $\theta \neq \theta'$ the supports of $p_\theta$ and $p_{\theta'}$ differ, so $d_K(\theta,\theta') = \infty$). Let $p = I_{\{0 \le x \le 1\}}$. Then $D(p_\theta \,\|\, p) = \log 2$ for all $0 \le \theta \le \frac12$. Clearly there cannot be any triangle-like inequality, and the packing number alone cannot determine the minimax rate of convergence.

Another distance we consider is the Hellinger distance $d_H(\theta,\theta') = \left(\int (\sqrt{p_\theta} - \sqrt{p_{\theta'}})^2\, d\mu\right)^{1/2}$. The Hellinger distance is a metric. We will also consider the $L_q$ distance $d_q(\theta,\theta') = \left(\int |p_\theta - p_{\theta'}|^q\, d\mu\right)^{1/q}$ for $q \ge 1$.

We assume the distance $d$ satisfies the following condition, which is used in the derivation of minimax lower bounds.

Assumption 2.0: There exists a positive constant $A \le 1$ such that for any $\theta, \theta' \in S$ and $\tilde\theta \in \bar S$,
$$d(\theta,\tilde\theta) + d(\theta',\tilde\theta) \ge A\, d(\theta,\theta').$$

Remarks:

1. Here we have assumed this triangle-like inequality to hold for all indicated parameter values. The assumption can be relaxed to require that it hold only for $\tilde\theta$ close to $\theta$ and $\theta'$; specifically, that there exist positive constants $A \le 1$ and $\epsilon_0 > 0$ such that for any $\theta, \theta' \in S$, $\tilde\theta \in \bar S$, if $\max\{d(\theta,\tilde\theta), d(\theta',\tilde\theta)\} \le \epsilon_0$, then $d(\theta,\tilde\theta) + d(\theta',\tilde\theta) \ge A\, d(\theta,\theta')$. If one uses local entropy conditions, then a local version of this triangle-like inequality can be used to get similar results.

2. For a general distance $d$, satisfaction of the above inequality may depend on the choice of $\bar S$. However, if $d$ is a metric on $\Theta$, then Assumption 2.0 is always satisfied with $A = 1$ for any $\bar S \subset \Theta$.

When Assumption 2.0 is satisfied, the packing entropy and covering entropy have the following relationship.


Lemma 2.0: Suppose Assumption 2.0 is satisfied for distance $d$. Then
$$M_d(2\epsilon/A) \le V_d(\epsilon) \le M_d(\epsilon).$$
The proof of the lemma is similar to that given for $d$ being a metric by Kolmogorov and Tihomirov (1959).
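As a quick computational illustration of these definitions and of Lemma 2.0 (our sketch; the grid, the distance, and the function name are hypothetical choices): a greedily built maximal $\epsilon$-packing set is automatically an $\epsilon$-net.

```python
import math

def greedy_packing(points, dist, eps):
    """Build a maximal eps-packing set: keep a point only if it is more than
    eps away from every point kept so far.  By maximality, every remaining
    point is within eps of the set, so the result is also an eps-net."""
    packing = []
    for p in points:
        if all(dist(p, q) > eps for q in packing):
            packing.append(p)
    return packing

points = [i / 1000 for i in range(1001)]      # fine grid on [0, 1]
dist = lambda x, y: abs(x - y)
for eps in (0.5, 0.1, 0.01):
    M = math.log(len(greedy_packing(points, dist, eps)))
    print(f"eps = {eps}: packing entropy ~ {M:.2f}")   # grows like log(1/eps) here
```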

We will obtain minimax results for such general $d$, and then special results will be given for several choices of $d$: the square root K-L distance, the Hellinger distance and the $L_q$ distance. We assume $M_d(\epsilon) < \infty$ for all $\epsilon > 0$ and $M_d(\epsilon) \to \infty$ as $\epsilon \to 0$ (the latter requirement is used to avoid the triviality of $S$ being a finite set). The square root K-L, Hellinger and $L_q$ packing entropies are denoted $M_K(\epsilon)$, $M_H(\epsilon)$ and $M_q(\epsilon)$, respectively.

In subsection 1, we give minimax bounds under global entropy conditions. In subsections 2 and 3, more results are given for the $L_2$ risk and the K-L risk respectively. In subsection 4, we present interesting results connecting minimax upper bounds with minimax lower bounds.

2.2.1 Minimax under global entropy condition

Suppose a good upper bound on the covering $\epsilon$-entropy under the square root K-L distance is available. That is, assume $V_K(\epsilon) \le V(\epsilon)$ with $V$ being a nonincreasing and right continuous function. Ideally $V_K(\epsilon)$ and $V(\epsilon)$ are of the same order. Let $\epsilon_n = \inf\{\epsilon > 0 : V(\epsilon) \le n\epsilon^2\}$ denote what we call the critical covering radius. Then, because $V(\epsilon)$ is right continuous, the radius $\epsilon_n$ satisfies
$$\epsilon_n^2 = \frac{V(\epsilon_n)}{n}.$$

The squared radius is the same as the covering entropy divided by the sample size. The trade-off here between $V(\epsilon)/n$ and $\epsilon^2$ is analogous to that between the squared bias and variance of an estimator. As will be shown later, $2\epsilon_n^2$ is an upper bound on the minimax K-L risk.

Let $\epsilon_{n,d}$ be a radius $\epsilon$ such that
$$M_d(\epsilon_{n,d}) = 4n\epsilon_n^2 + 2\log 2.$$

The existence of $\epsilon_{n,d}$ follows from the right continuity of $M_d(\epsilon)$ and the assumption $M_d(\epsilon) \to \infty$ as $\epsilon \to 0$. Roughly, $\epsilon_{n,d}$ is the packing radius at which the packing entropy under the $d$ distance divided by the sample size $n$ is approximately four times the square of the critical covering radius. We call $\epsilon_{n,d}$ the packing radius commensurate with the critical covering radius $\epsilon_n$. The speed at which $\epsilon_{n,d}$ converges to 0 determines a lower bound on the minimax risk.
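As a worked example (a standard calculation added here for concreteness; it is consistent with the smoothness classes treated in Section 2.5), suppose the class has polynomial entropy $V(\epsilon) = M_d(\epsilon) = (1/\epsilon)^{1/s}$ for a smoothness index $s > 0$. Then the two radii are of the same order:

```latex
n\epsilon_n^2 = (1/\epsilon_n)^{1/s}
  \;\Longrightarrow\; \epsilon_n \asymp n^{-s/(2s+1)},
  \qquad \epsilon_n^2 \asymp n^{-2s/(2s+1)};
\qquad
M_d(\epsilon_{n,d}) = 4n\epsilon_n^2 + 2\log 2 \asymp n^{1/(2s+1)}
  \;\Longrightarrow\; \epsilon_{n,d} \asymp n^{-s/(2s+1)} \asymp \epsilon_n.
```

So the lower bound $A^2\epsilon_{n,d}^2/8$ of Theorem 2.1 below and the upper bound $2\epsilon_n^2$ converge at the same rate $n^{-2s/(2s+1)}$.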

Minimax lower bound

Theorem 2.1: Suppose Assumption 2.0 is satisfied for the distance $d$. Then the minimax risk for estimating $\theta \in S$ satisfies
$$\min_{\hat\theta\in\mathcal{A}_n}\max_{\theta\in S} E_\theta\, d^2(\theta,\hat\theta) \ge \frac{A^2\epsilon_{n,d}^2}{8},$$
where the minimum is over all estimators mapping from $\mathcal{X}^n$ to $\bar S$.

Proof: Let $N_{\epsilon_{n,d}}$ be an $\epsilon_{n,d}$-packing set with the maximum cardinality in $S$ under the given distance $d$, and let $G_{\epsilon_n}$ be an $\epsilon_n$-net for $S$ under $d_K$. For any estimator $\hat\theta \in \mathcal{A}_n$, define $\tilde\theta = \arg\min_{\theta'\in N_{\epsilon_{n,d}}} d(\theta',\hat\theta)$ (if there is more than one minimizer, choose any one), where the minimum is over $\theta'$ in the finite packing set $N_{\epsilon_{n,d}}$. Then for $\theta$ in the packing set we have
$$d(\theta,\hat\theta) \ge d(\tilde\theta,\hat\theta) \ge A\,d(\theta,\tilde\theta) - d(\theta,\hat\theta).$$
Thus if $\tilde\theta \neq \theta$, then $2d(\theta,\hat\theta) \ge A\,d(\theta,\tilde\theta) \ge A\epsilon_{n,d}$, i.e., $d^2(\theta,\hat\theta) \ge \frac{A^2\epsilon_{n,d}^2}{4}$. Then

$$\min_{\hat\theta\in\mathcal{A}_n}\max_{\theta\in S} E_\theta d^2(\theta,\hat\theta)
\ \ge\ \min_{\hat\theta}\max_{\theta\in N_{\epsilon_{n,d}}} E_\theta d^2(\theta,\hat\theta)
\ \ge\ \frac{A^2\epsilon_{n,d}^2}{4}\,\min_{\hat\theta}\max_{\theta\in N_{\epsilon_{n,d}}} P_\theta\{\tilde\theta \neq \theta\}
\ \ge\ \frac{A^2\epsilon_{n,d}^2}{4}\,\min_{\hat\theta}\sum_{\theta\in N_{\epsilon_{n,d}}} w(\theta)\, P_\theta\{\tilde\theta \neq \theta\}
\ =\ \frac{A^2\epsilon_{n,d}^2}{4}\,\min_{\hat\theta} P_w\{\tilde\theta \neq \theta\},$$

where in the last line $\theta$ is randomly drawn according to a prior probability $w$ restricted to $N_{\epsilon_{n,d}}$, and $P_w$ denotes the Bayes average probability with respect to the prior $w$. By Fano's inequality (see, e.g., Cover and Thomas (1991), Chapter 8), with $w_0$ being the uniform prior on $\Theta_0 = N_{\epsilon_{n,d}}$, we have
$$P_{w_0}\{\tilde\theta \neq \theta\} \ \ge\ 1 - \frac{I(\Theta_0;X^n) + \log 2}{\log|N_{\epsilon_{n,d}}|}, \qquad (2.1)$$
where $I(\Theta_0;X^n)$ is Shannon's mutual information between the random parameter and the observation $X^n$. This mutual information is upper bounded by the maximum K-L distance


between the product measure $p(x^n|\theta)$ and any density $q(x^n)$ on the sample space. Indeed,
$$I(\Theta_0;X^n) = \sum_\theta w_0(\theta)\int p(x^n|\theta)\log\frac{p(x^n|\theta)}{p_{w_0}(x^n)}\,\mu(dx^n)
\le \sum_\theta w_0(\theta)\int p(x^n|\theta)\log\frac{p(x^n|\theta)}{q(x^n)}\,\mu(dx^n)
\le \max_{\theta\in N_{\epsilon_{n,d}}} D\big(P_{X^n|\theta}\,\big\|\,Q_{X^n}\big),$$
where $Q_{X^n}$ has density $q(x^n)$ and $p_{w_0}(x^n) = \sum_\theta w_0(\theta)p(x^n|\theta)$. The first inequality above follows from the fact that the Bayes mixture density $p_{w_0}(x^n)$ minimizes the average relative entropy $\sum_\theta w_0(\theta)\int p(x^n|\theta)\log\frac{p(x^n|\theta)}{q(x^n)}\,\mu(dx^n)$ over all densities $q(x^n)$ (any other choice yields a larger value by the amount $\int p_{w_0}(x^n)\log\frac{p_{w_0}(x^n)}{q(x^n)}\,\mu(dx^n) \ge 0$). Choose $w_1$ to be the uniform prior on $G_{\epsilon_n}$ and let $q(x^n) = p_{w_1}(x^n) = \sum_\theta w_1(\theta)p(x^n|\theta)$ and $Q_{X^n} = P_{w_1,X^n}$ be the corresponding Bayes mixture density and distribution, respectively. Because $G_{\epsilon_n}$ is an $\epsilon_n$-net in $S$ under $d_K$, for each $\theta \in S$ there exists $\tilde\theta \in G_{\epsilon_n}$ such that $D(p_\theta\,\|\,p_{\tilde\theta}) = d_K^2(\theta,\tilde\theta) \le \epsilon_n^2$. Also by definition, $\log|G_{\epsilon_n}| \le V(\epsilon_n)$. It follows that
$$D\big(P_{X^n|\theta}\,\big\|\,P_{w_1,X^n}\big) = E_\theta\log\frac{p(X^n|\theta)}{p_{w_1}(X^n)} \le \log|G_{\epsilon_n}| + D\big(P_{X^n|\theta}\,\big\|\,P_{X^n|\tilde\theta}\big) \le V(\epsilon_n) + n\epsilon_n^2. \qquad (2.2)$$

Thus, by our choice of $\epsilon_n$ and $\epsilon_{n,d}$,
$$\frac{I(\Theta_0;X^n)+\log 2}{\log|N_{\epsilon_{n,d}}|} \ \le\ \frac{V(\epsilon_n)+n\epsilon_n^2+\log 2}{4n\epsilon_n^2+2\log 2} \ =\ \frac12.$$
Therefore
$$\min_{\hat\theta\in\mathcal{A}_n}\max_{\theta\in S} E_\theta\, d^2(\theta,\hat\theta) \ \ge\ \frac{A^2\epsilon_{n,d}^2}{8}.$$

Remark: Up to the point (2.1), the development here is standard. Previous use of Fano's inequality for minimax lower bounds takes one of the following weak bounds on the mutual information: $I(\Theta;X^n) \le nI(\Theta;X_1)$ or $I(\Theta;X^n) \le n\max_{\theta,\theta'} D(P_{X_1|\theta}\,\|\,P_{X_1|\theta'})$. Our use of the improved bound is borrowed from ideas in universal data compression, for which $I(\Theta;X^n)$ represents the Bayes average redundancy and $\max_{\theta\in S} D(P_{X^n|\theta}\,\|\,P_{X^n}) \le V(\epsilon_n)+n\epsilon_n^2$ represents an upper bound on the minimax redundancy $\min_{Q_{X^n}}\max_{\theta\in S} D(P_{X^n|\theta}\,\|\,Q_{X^n})$. The data compression interpretations of these quantities originate with Davisson (1973); see Clarke and Barron (1994) and Haussler and Opper (1995) for some recent work in that area. The bound $D(P_{X^n|\theta}\,\|\,P_{X^n}) \le V(\epsilon_n)+n\epsilon_n^2$ has its roots in Barron (1987, pp. 89), where it is given in a more general form for arbitrary priors ($D(P_{X^n|\theta}\,\|\,P_{w,X^n}) \le \log\frac{1}{w(N_{\theta,\epsilon})} + n\epsilon^2$, where $N_{\theta,\epsilon} = \{\theta' : D(p_\theta\,\|\,p_{\theta'}) \le \epsilon^2\}$). The redundancy bound $V(\epsilon_n)+n\epsilon_n^2$ can also be obtained from the use of a two-stage code of length $\log|G_{\epsilon_n}| + \min_{\theta'\in G_{\epsilon_n}}\log\frac{1}{p(x^n|\theta')}$; see Barron and Cover (1991, Section V).
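The two-stage code is easy to demonstrate in a toy setting (our sketch; the Bernoulli family, the grid net, and all names are illustrative, not from the text): spend $\log|G|$ nats to identify the best net point, then encode $x^n$ with the code corresponding to it; the excess over the codelength with $\theta$ known is the redundancy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta_true = 200, 0.3
x = rng.binomial(1, theta_true, size=n)
k = x.sum()

def codelength(theta):                       # -log p(x^n | theta), in nats
    return -(k * np.log(theta) + (n - k) * np.log(1 - theta))

net = np.linspace(0.05, 0.95, 19)            # a finite net G over the parameter space
two_stage = np.log(len(net)) + min(codelength(t) for t in net)
redundancy = two_stage - codelength(theta_true)
print(f"index cost log|G| = {np.log(len(net)):.2f} nats, "
      f"two-stage redundancy = {redundancy:.2f} nats")
```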

When the K-L distance is lower bounded by a multiple of the chosen distance $d$ on $S$, a minimax lower bound on the K-L risk is obtained. That is, if there exists a constant $A_0$ such that $A_0 d^2(\theta,\theta') \le d_K^2(\theta,\theta')$ for any $\theta,\theta' \in S$, then under the previous condition,
$$\min_{\hat\theta\in\mathcal{A}_n}\max_{\theta\in S} E_\theta\, d_K^2(\theta,\hat\theta) \ \ge\ \frac{A_0 A^2\epsilon_{n,d}^2}{8}.$$
A natural choice for $d$ is the Hellinger distance (since the Hellinger distance does satisfy the triangle inequality between densities, and locally the square root K-L distance behaves like the Hellinger distance for bounded log-density ratios). Let $\epsilon_{n,K}$ and $\epsilon_{n,H}$ be the packing radii commensurate with the critical covering radius $\epsilon_n$ under $d_K$ and $d_H$, determined by $M_K(\epsilon_{n,K}) = 4n\epsilon_n^2 + 2\log 2$ and $M_H(\epsilon_{n,H}) = 4n\epsilon_n^2 + 2\log 2$, respectively. We have the following two corollaries.

Corollary 2.1: Assume there exists a constant $A$ such that for any $\theta,\theta' \in S$ and $\tilde\theta \in \bar S$, $D(p_\theta\,\|\,p_{\tilde\theta}) + D(p_{\theta'}\,\|\,p_{\tilde\theta}) \ge A\, D(p_\theta\,\|\,p_{\theta'})$. Then
$$\min_{\hat\theta\in\mathcal{A}_n}\max_{\theta\in S} E_\theta\, d_K^2(\theta,\hat\theta) \ \ge\ \frac{A\,\epsilon_{n,K}^2}{8}.$$

Corollary 2.2: For the squared Hellinger risk, we have
$$\min_{\hat\theta\in\mathcal{A}_n}\max_{\theta\in S} E_\theta\, d_H^2(\theta,\hat\theta) \ \ge\ \frac{\epsilon_{n,H}^2}{8}.$$

Note that in Corollary 2.1, $\epsilon_{n,K}^2$ may be determined by the packing entropy $M_K(\epsilon)$ only (with the choice $V(\epsilon) = M_K(\epsilon)$). However, for a general distance $d$ (specifically the Hellinger distance in Corollary 2.2), $\epsilon_{n,d}^2$ is determined by two quantities, $M_d(\epsilon)$ and $V_K(\epsilon)$, without any assumption on the relationship between the distances.

When the distance $d_K$ is locally upper bounded by a multiple of the distance $d$ on $S$, the minimax lower bound under the $d^2$ risk can be expressed in terms of the packing entropy under the $d$ distance.


Corollary 2.3: Assume Assumption 2.0 is satisfied for distance $d$ and there exists a constant $A$ such that $D(p_\theta\,\|\,p_{\theta'}) \le A\,d^2(\theta,\theta')$ for any $\theta,\theta' \in S$ with $d(\theta,\theta') \le \tau_{n,d}/\sqrt A$, where $\tau_{n,d}$ is determined from $M_d(\tau_{n,d}/\sqrt A) = n\tau_{n,d}^2$. Then
$$\min_{\hat\theta\in\mathcal{A}_n}\max_{\theta\in S} E_\theta\, d^2(\theta,\hat\theta) \ \ge\ \frac{A^2\bar\tau_{n,d}^2}{8},$$
where $\bar\tau_{n,d}$ is chosen such that $M_d(\bar\tau_{n,d}) = 4n\tau_{n,d}^2 + 2\log 2$.

Proof: Under the assumed relationship between the distances $d$ and $d_K$, an $(\epsilon/\sqrt A)$-packing set in $S$ under $d$ also serves as an $\epsilon$-covering set for $S$ under $d_K$. Thus when $\epsilon \le \tau_{n,d}$, $V_K(\epsilon) \le M_d(\epsilon/\sqrt A)$. The result follows from Theorem 2.1.

The advantage of this bound is that it is determined by the radius $\tau_{n,d}$ using exclusively the chosen distance $d$.

For applications, the lower bounds above may be applied to a subclass of densities $\{p_\theta : \theta \in S_0\}$ ($S_0 \subset S$) which is rich enough to characterize the difficulty of estimating the densities in the whole class, yet for which the conditions are easy to check. For instance, if the densities $\{p_\theta : \theta \in S_0\}$ have support on a compact space and $\|\log p_\theta\|_\infty \le T$ for all $\theta \in S_0$, then the square root K-L distance, the Hellinger distance and the $L_2$ distance are all equivalent in the sense that each of them is both upper bounded and lower bounded by multiples of any other distance.

Upper bound

To provide an upper bound on the minimax rate of convergence, we construct an estimator as follows. Consider the $\epsilon_n$-net $G_{\epsilon_n}$ for $S$ under $d_K$ and the uniform prior $w_1$ on $G_{\epsilon_n}$. For $n = 1, 2, \ldots$, let
$$p(x^n) = \sum_{\theta\in G_{\epsilon_n}} w_1(\theta)\,p(x^n|\theta) = \frac{1}{|G_{\epsilon_n}|}\sum_{\theta\in G_{\epsilon_n}} p(x^n|\theta)$$
be the corresponding mixture density. Let
$$\hat p(x) = \frac{1}{n}\sum_{i=0}^{n-1}\hat p_i(x)$$


be the density estimator constructed as a Cesàro average of the Bayes predictive density estimators $\hat p_i(x) = p(X_{i+1} = x \mid X^i)$ evaluated at $X_{i+1} = x$, which equal $p(x^{i+1})/p(x^i)$ for $i \ge 1$ and $\hat p_0(x) = p(x) = \frac{1}{|G_{\epsilon_n}|}\sum_{\theta\in G_{\epsilon_n}} p(x|\theta)$ for $i = 0$. Then by convexity and the chain rule (as in Barron (1987)),
$$E_\theta D(p_\theta\,\|\,\hat p) \ \le\ \frac1n E_\theta\sum_{i=0}^{n-1} D\big(P_{X_{i+1}|\theta}\,\big\|\,P_{\hat p_i}\big) \ =\ \frac1n E_\theta\sum_{i=0}^{n-1}\log\frac{p(X_{i+1}|\theta)}{\hat p_i(X_{i+1})} \ =\ \frac1n D\big(P_{X^n|\theta}\,\big\|\,P_{w_1,X^n}\big) \ \le\ \frac1n\big(V(\epsilon_n)+n\epsilon_n^2\big) \ =\ 2\epsilon_n^2,$$
where the last line is derived as in equation (2.2). Thus
$$\min_{\hat p}\max_{\theta\in S} E_\theta D(p_\theta\,\|\,\hat p) \ \le\ 2\epsilon_n^2,$$
where the minimization is over all density estimators.
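A minimal computational sketch of this estimator (ours; the discrete sample space, the candidate family, and the names are illustrative stand-ins for the net $G_{\epsilon_n}$): maintain posterior weights over a finite net of candidate densities, record each Bayes predictive density $\hat p_i(\cdot) = \sum_\theta w(\theta\,|\,x^i)\,p(\cdot\,|\,\theta)$, and average them.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 10                                      # discrete sample space {0, ..., k-1}
cells = np.arange(k)

def density(theta):                         # a simple one-parameter family of shapes
    w = theta ** cells
    return w / w.sum()

net = np.array([density(t) for t in np.linspace(0.1, 0.9, 9)])  # finite net of densities
true_p, n = density(0.4), 50
sample = rng.choice(k, size=n, p=true_p)

log_w = np.zeros(len(net))                  # log posterior weights, uniform prior
cesaro = np.zeros(k)
for i in range(n):
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    cesaro += w @ net                       # predictive density given x^i (prior mixture at i = 0)
    log_w += np.log(net[:, sample[i]])      # Bayes update with the next observation
phat = cesaro / n                           # Cesaro average: the estimator described above

print(f"K-L loss D(p_theta || phat) = {np.sum(true_p * np.log(true_p / phat)):.4f}")
```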

From the above lower and upper bounds, we have the following theorem on the minimax risk.

Theorem 2.2: Assume Assumption 2.0 is satisfied for distance $d$ and $A_0 d^2(\theta,\theta') \le d_K^2(\theta,\theta')$ for any $\theta,\theta' \in S$. Assume also that $\mathcal{A}_n$ (the set of all allowed estimators) contains the estimator corresponding to $\hat p$ constructed above. Then
$$\frac{A_0 A^2\epsilon_{n,d}^2}{8} \ \le\ A_0\min_{\hat\theta\in\mathcal{A}_n}\max_{\theta\in S} E_\theta\, d^2(\theta,\hat\theta) \ \le\ \min_{\hat\theta\in\mathcal{A}_n}\max_{\theta\in S} E_\theta\, d_K^2(\theta,\hat\theta) \ \le\ 2\epsilon_n^2.$$

The condition that $\mathcal{A}_n$ contains $\hat p$ in Theorem 2.2 is satisfied if $\{p_\theta : \theta \in \bar S\}$ is convex. Specifically, if the action space $\bar S$ is the set of all densities on $\mathcal{X}$, $d$ is a pseudo-metric on densities, and we allow all estimators for competition, then the only remaining condition needed for the above inequalities is $A_0 d^2(\theta,\theta') \le d_K^2(\theta,\theta')$ for all $\theta,\theta'$. This is satisfied by the Hellinger distance and the $L_1$ distance (with $A_0 = 1$ and $A_0 = 1/4$ respectively).

Remark: In obtaining both the upper bound and the lower bound on the minimax risk, $D(P_{X^n|\theta}\,\|\,P_{X^n})$ plays an important role. For the lower bound, the quantity is used to bound $I(\Theta_0;X^n)$, and for the upper bound, it bounds the risk of a specific estimator. It is interesting that $D(P_{X^n|\theta}\,\|\,P_{X^n})$ is upper and lower bounded in the very same way but with different radius choices for the two cases. As we shall see, asymptotically these two radii typically have the same rate.

If $\frac{A^2\epsilon_{n,d}^2}{8}$ and $2\epsilon_n^2$ converge to 0 at the same rate, then the minimax rate of convergence is identified. For $\epsilon_{n,d}^2$ and $\epsilon_n^2$ to be of the same order, it is sufficient that the following two conditions hold:

(1) There exist two positive constants $a$ and $b$ such that when $\epsilon$ is small enough,
$$M_d(b\epsilon) \le V_K(\epsilon) \le M_d(a\epsilon); \qquad (2.3)$$

(2)
$$\liminf_{\epsilon\to 0}\frac{M_d(\epsilon/2)}{M_d(\epsilon)} > 1. \qquad (2.4)$$

The condition (2.3) is the equivalence of the entropy structure under the square root K-L distance and that under the $d$ distance when $\epsilon$ is small, which is satisfied, for instance, when all the densities in the target class are uniformly bounded above and away from 0 and $d$ is taken to be either the Hellinger distance or the $L_2$ distance. It is also satisfied by the nonparametric regression example in Section 4. The second condition requires the density class to be large enough, namely, $M_d(\epsilon)$ approaches $\infty$ at least polynomially fast in $1/\epsilon$ as $\epsilon \to 0$; i.e., there exists a constant $\delta > 0$ such that $M_d(\epsilon) \succeq (1/\epsilon)^\delta$. The second condition is satisfied if $M_d(\epsilon)$ can be expressed as
$$M_d(\epsilon) = (1/\epsilon)^r\, w(\epsilon),$$
where $r > 0$ and $w(\epsilon/2)/w(\epsilon) \to 1$ as $\epsilon \to 0$.

Corollary 2.4: Assume Assumption 2.0 is satisfied for distance $d$ and $\{p_\theta : \theta \in \bar S\}$ is convex. Under conditions (2.3) and (2.4), we have
$$\min_{\hat\theta\in\mathcal{A}_n}\max_{\theta\in S} E_\theta\, d^2(\theta,\hat\theta) \ \asymp\ \bar\epsilon_n^2,$$
where $\bar\epsilon_n$ is determined by the equation $M_d(\bar\epsilon_n) = n\bar\epsilon_n^2$.


Corollary 2.4 is applicable to many smooth nonparametric classes. However, for less rich classes of densities (for example, finite-dimensional families or analytic densities), the lower bound and the upper bound derived in the above way do not converge at the same rate. For instance, for a finite-dimensional class, both $M_K(\epsilon)$ and $M_H(\epsilon)$ might be of order $m\log(1/\epsilon)$ for some constant $m \ge 1$. Then $\epsilon_n$ and $\epsilon_{n,H}$ are not of the same order, with $\epsilon_n \asymp \sqrt{\log n/n}$ and $\epsilon_{n,H} = o(1/\sqrt n)$. Thus both the upper bound and the lower bound provided by the theorem are near but not at the optimal rate. For smooth finite-dimensional models, the minimax risk can be solved using some traditional statistical methods such as Bayes procedures, the Cramér-Rao inequality, Van Trees' inequality, etc. But these techniques require more than the entropy condition. If local entropy conditions are used instead of global entropy conditions, results can be obtained that are suitable for both parametric and nonparametric families of densities.
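The gap can be seen in a short calculation (ours, following the orders just quoted): with $M_K(\epsilon) = M_H(\epsilon) = m\log(1/\epsilon)$,

```latex
n\epsilon_n^2 = m\log(1/\epsilon_n)
  \;\Longrightarrow\; \epsilon_n \asymp \sqrt{m\log n / n},
\qquad
m\log(1/\epsilon_{n,H}) = 4n\epsilon_n^2 + 2\log 2 \approx 2m\log n
  \;\Longrightarrow\; \epsilon_{n,H} \approx n^{-2} = o(1/\sqrt{n}).
```

The upper bound $2\epsilon_n^2 \asymp m\log n/n$ misses the parametric rate $m/n$ by a logarithmic factor, while the lower bound $\epsilon_{n,H}^2/8 \asymp n^{-4}$ falls far below it.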

2.2.2 Minimax rates under $L_2$ loss

For general classes of densities, the assumption of Corollary 2.3 that the square root K-L distance is upper bounded by a multiple of the $d$ distance over the whole density class may not hold. Theorem 2.1 is applicable, but the resulting minimax lower bounds involve metric entropies under both $d_K$ and $d$. In this subsection, we derive minimax bounds for the $L_2$ risk without appealing to K-L covering entropy.

Let $\mathcal{F}$ be a class of density functions $f$ with respect to a finite measure $\mu$ on a compact set $\mathcal{X}$ such as $[0,1]$. More generally, we may assume that $\mu$ is a finite dominating measure. We normalize $\mu$ to be a probability measure. Let $M_q(\epsilon)$ be the packing entropy of $\mathcal{F}$ under the $L_q$ metric.

To derive minimax upper bounds, we need a lemma. We change the estimation of $f$ to another estimation problem and show that the minimax risk of the original problem is upper bounded by a multiple of the minimax risk of the new class. From any estimator in the new class (e.g., a minimax estimator), a randomized estimator in the original problem is determined for which the risk is not greater than a multiple of the risk in the new class.

In addition to the observed i.i.d. sample $X_1, X_2, \ldots, X_n$ from $f$, let $Y_1, Y_2, \ldots, Y_n$ be a generated i.i.d. sample from the uniform distribution on $\mathcal{X}$ with respect to $\mu$ (independent of $X_1,\ldots,X_n$). Let $Z_i$ be $X_i$ or $Y_i$ with probability $(\frac12, \frac12)$, using $V_i \sim \mathrm{Bernoulli}(\frac12)$ independently for $i = 1,\ldots,n$. Then $Z_i$ has density $g(x) = \frac12(f+1)$. Clearly the new density $g$ is bounded below (away from 0), whereas the original densities need not be. Let $\bar{\mathcal{F}} = \{g : g = \frac{f+1}{2},\ f \in \mathcal{F}\}$ be the new density class.

Lemma 2.1: The minimax $L_2$ risks of the two classes $\mathcal{F}$ and $\bar{\mathcal{F}}$ have the following relationship:
$$\min_{\hat f}\max_{f\in\mathcal{F}} E_{X^n}\|f-\hat f\|_2^2 \ \le\ 16\,\min_{\hat g}\max_{g\in\bar{\mathcal{F}}} E_{Z^n}\|g-\hat g\|_2^2,$$
where the minimization on the left-hand side is over all estimators based on $X_1,\ldots,X_n$ and the minimization on the right-hand side is over all estimators based on $n$ independent observations from $g$. Generally, for $q \ge 1$, we have
$$\min_{\hat f}\max_{f\in\mathcal{F}} E_{X^n}\|f-\hat f\|_q^q \ \le\ 4^q\,\min_{\hat g}\max_{g\in\bar{\mathcal{F}}} E_{Z^n}\|g-\hat g\|_q^q.$$

Proof: We only prove the assertion for $L_2$; the proof for general $L_q$ is similar. Let $\hat g$ be any density estimator of $g$ based on $Z_i$, $i = 1,\ldots,n$. Let $\tilde g$ be the density that minimizes $\|h-\hat g\|_2$ over $h \in \{k : k(x) \ge \frac12,\ \int k(x)\,d\mu = 1\}$. Then by the triangle inequality, and because $g \in \{k : k(x) \ge \frac12,\ \int k(x)\,d\mu = 1\}$, $\|\tilde g - g\|_2^2 \le 2\|\tilde g - \hat g\|_2^2 + 2\|\hat g - g\|_2^2 \le 4\|\hat g - g\|_2^2$.

Now we construct a density estimator for $f$. Noting that $f(x) = 2g(x) - 1$, let
$$\hat f_{rand}(x) = 2\tilde g(x) - 1.$$
Then $\hat f_{rand}(x)$ is a nonnegative and normalized probability density estimate, and it depends on $X_1,\ldots,X_n$, $Y_1,\ldots,Y_n$ and the outcomes of the coin flips $V_1,\ldots,V_n$; so it is a randomized estimator. The squared $L_2$ loss of $\hat f_{rand}$ is bounded as follows:
$$\int\big(f(x)-\hat f_{rand}(x)\big)^2\,d\mu = \int\big(2g(x)-2\tilde g(x)\big)^2\,d\mu = 4\int(g-\tilde g)^2\,d\mu \ \le\ 16\,\|\hat g-g\|_2^2.$$

To avoid randomization, we may replace $\hat f_{rand}(x)$ with its expected value over $Y_1,\ldots,Y_n$ and the coin flips $V_1,\ldots,V_n$ to get $\hat f(x)$ with
$$E_{X^n}\|f-\hat f\|_2^2 = E_{X^n}\|f - E_{Y^n,V^n}\hat f_{rand}\|_2^2 \ \le\ E_{X^n}E_{Y^n,V^n}\|f-\hat f_{rand}\|_2^2 \ =\ E_{Z^n}\|f-\hat f_{rand}\|_2^2 \ \le\ 16\,E_{Z^n}\|\hat g-g\|_2^2,$$
where the first inequality is by convexity and the second identity is because $\hat f_{rand}$ depends on $X^n, Y^n, V^n$ only through $Z^n$. Thus
$$\max_{f\in\mathcal{F}} E_{X^n}\|f-\hat f\|_2^2 \ \le\ 16\,\max_{g\in\bar{\mathcal{F}}} E_{Z^n}\|\hat g-g\|_2^2.$$
Taking the minimum over estimators $\hat g$, we prove the lemma.

Now, since || — £±1 H2 = 5 || /1 — Si II2, for the new class E. the e-packing entropy

under Lo is M 2(e) = M 2{2e).

Now we give upper and lower bounds on the minimax L2 risk. Let us first, get an upper

bound.

For the new class, the square root Iv-L distance is upper bounded by a multiples of L >

distance. Indeed, for densities g \ , g2 € E,

D{g\ || e/2 ) < j dg <2 j (r/i - e/2 ) 2 d//.,

where the first inequality is the familiar relationship between K-L distance and chi-square

distance, and the second inequality follows because g\ is lower bounded by 1. Let V/c(c)

denote the dx covering entropy of E. Then Vj{(e) < M 2{-^=) — 2e). Let en be chosen

such that

M2( V 2e„) = ne~.

From Theorem 2.2, there exists a density estimator [jq such that

max Ez" D(g || g0) < 2e;. oer

It follows that

max Ez» d// {g, go) < 2c;,g£p

and

m axEzn || g - g0 ||?< Se~. oer

By Lemma 2 .1,

min max i? \'n II / — f 111 £ 8 \ / 8en./To get a good estimator in terms of L2 risk, we assume sup /e - || / ||oo< L < 0 0 . Let g be

the density in E that is closest to go in Hellinger distance. Then by triangle inequality,

max. Ez»-rfH(g,g) < 2 max Ez«-tfiH(g, go) +2m&xEz»(lrII{g,ga)<j£'-F g£F g t f

22

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 33: Yuhong Yang Thesis - Yale University

< 4 maxEz»dji{<j,fjo)yZF

< 8 e l

Now because both || g ||oo and || g are bounded by p p ,

j (3 ~ 3 ? dg = j (s/g - \ffj) i^/g + \/j]) dg < 2(L + l)d2n (g, g).

Thus max ^ E z » || 3 — 3 | | |^ 16(L + l)e^. Using Lemma 2.1 again, we have an upper

bound on minimax squared Lo risk.

P ro p o s itio n 2.1: Let M2(e) be the L2 metric entropy of a density class T with respect to

a probability measure. Let en satisfy M2(\/2er,) — ne~. Then

min max E,\” || / — f ||i < 8 \ / 8e„.I

If in addition, sup/e r || / ||oo< L < 00 , then

minmaxjBy || f — f ||-j < 256(L + l)ejr f feF

The above result upper bounds the minimax L\ risk and L 1 risk (under sup/-eyr || / ||00<

00 for Zr risk) using only the L2 metric entropy. For a related result under local entropy

assumptions, see Birge (1986, Theorem 3.1).

Using the relationship between Lfj norms, namely, || / - ./' ||(/ <|| / — / ||2 for 1 < g < 2,

under s u p | | / ||oo< 00 , we have

min max Ex» |( f - f ||ji ^ , for 1 < q < 2 ./ f e?

To get a minimax lower bound, we use the following assumption, which is satisfied by

the classical classes such as Sobolev, Lipschitz, the class of monotone densities, and more.

A ssu m p tio n 2.1: There exists at least one density f* € F with mm,..e ,v/*(;i:) = C > 0

and a positive constant cv G (0,1) such that JF0 = {a/* + (1 - o)g : g € iF) C T.

For a convex class of densities, Assumption 2.1 is satisfied if there is at least one density

bounded away from zero.

23

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 34: Yuhong Yang Thesis - Yale University

L em m a 2.2: Under Assumption 2.1, the subclass has L2 metric entropy =

Proof: Because 1J j ((a f * + (1 - a)gi) - (a /* + (1 - a)</2))2 dfi, = ( l - a - ) / f ( < / i - <j2)~ dfi,

an e-packing set in T corresponds to an (1 — cv)e-packing set in and vise versa.

Under Assumption 2.1, for two densities f \ and f 2 in .Fo,

D ( j \ || f 2) < j i h ~foh ) ~dn < ± [ (/, - /,)- dfi..

Thus applying Theorem 2.1 on F q , we have the following conclusion.

P roposition 2.2: Let M 2(e) be the L 2 metric entropy of a density class T with respect

to a probability measure. Let e„, satisfy = m f t and f„ be chosen such that

M 2{jz^e„) = 4ne~ + 21og2. Then

2m inm axEA-« || / — / Ha ^f o

If Assumption 2.1 is not satisfied, similar lower bound can be obtained under the fol­

lowing condition.

A ssum ption 2.2: Suppose there exists a subclass Fo c T and ,/b € IFq such that

m axjsjr0 || log ||oo< 00 and the L 2 metric entropy M®(e) of Fb satisfies M.2(e) > M 2{Ce)

for some constant C (independent of e).

For some classes, a choice of Fo in Assumption 2.2 might be F " 1'"- = { / e F : v>i <

f ( x ) < v2} for some constants v2 > v\ > 0 .

Combining the lower bounds with upper bounds, the minimax L2 rate is determined

under some conditions.

24

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 35: Yuhong Yang Thesis - Yale University

Theorem 2.3: Suppose sup /6yr || / ||0O< oo and Assumption 2.1 (or Assumption 2 .2 ) is

satisfied. If l im ^ n^ Q - > 1, then

min max EX’< || / - / \\'i ~ f'f,,/ fzJ7

where en is determined by M2(en) = ne'“ .

Using the relationship between Lo and Lq (1 < q < 2) distances and applying Theorem

2 .1, we have the following corollary.

Corollary 2.5: Suppose the conditions in Theorem 2.3 are satisfied and lirm^n^ - p - > 1

for some q € [1,2). Let en// satisfy M q{enq) = ne'“ . Then

^ nun max E X" || / - / f'i,J

If the packing entropies under and Lq are eciuivalent, then the above upper and lower

bounds converge at the same rate. Generally for a uniformly upper bounded density class

T on a compact set, because / ( / — f j )2 (lfj. < (|| / + g j|oo)./ 1./ _ <y|r///., we know M\{a) <

M ‘z{e) < M \ ( ) • Then the corresponding lower bound for L\ risk may vary from

fi„ to e ( depending on how different the two entropies arc.

2.2.3 M inim ax rates under K-L loss

For the square root K-L distance, Assumption 2.0 is not necessarily satisfied for general

classes of densities. We next discuss Assumption 2.0 for (Ik and present some more, results

concerning the K-L risk.

Lem m a 2.3: Assume D(po,p0') < Ad?(8,d') for all 6,8' G S and D{po,p0i) > Au<P(6,6')

for all 6,8' 6 0 , where d is a metric on 0 . Then Assumption 2.0 is satisfied for di< with

A = / J for any choice of 5 C 0 .

Remark: It suffices to assume D(po,p0') < Ad?{8,6') for 6,8' € S when dr(6,6') is small.

Further more, if the condition is satisfied only locally, then local entropy can lie used to

derive minimax lower bounds.

25

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 36: Yuhong Yang Thesis - Yale University

The conditions in the lemma are satisfied by the normal location family and the regres­

sion family considered in Section 4.

P ro o f o f L em m a 2.3: Prom the assumptions,

d tf(M ') < n/ M M ' )

< y/X(d(0,0) + d{O,i?))

< ^ ( d A ' ( M ) + r//v- ( M ) ) .

L em m a 2.4: For the square root K-L distance, each of the following two equivalent con­

ditions is sufficient for the satisfaction of Assumption 2.0 with 0 < A < 1.

1. D (po || rj^ ) + D (?V II > AD(Po || p0' ) for all 0,0' G 5.

2. D || po) < i ( i - 1 )D (po || for all 0,0' G 5.

R em ark : Because D (p0 || + D (pg> || ) < 21og2, the above condition 1

necessarily enforce the family to be totally bounded in K-L distance. If {po : 0 G .5’} is a

convex family (so that — p~ for some 0 G .5’), then the conditions are necessary as

well as sufficient. It is enough to assume condition 1 is satisfied when D (po || ''"+,?V ) +

D (pai || is small or satisfied only locally (for O' close to a fixed point 0 in terms of

K-L distance) if local entropy condition is used.

P ro o f of L em m a 2.4: The sufficiency of condition 1 comes from the fact that lll+y-v-

minimizes \D{po || p) + ^D{p0> || p) over all densities p. For the second condition, because

D (po || + D (?V || - D(P0 || / v )

= / Pfllog + 1 Po' - I Pfllog;^-

= f + I Po' l°Sw^rr

2G

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 37: Yuhong Yang Thesis - Yale University

(This equality is a special case of a parallelogram identity, see Csiszar and Ivorner (1981,

pp. 59)). Thus,

D{po || Po')

< D (,„, II H ! ± 2 £ ) + D ( p , II ! i ± r t ) + (i - 1)D II )

< i ( D ( » l l ?H £2l) + D ( i v l l ^ ) ) .

When {po : 9 € 5} is convex, then from the above lemma, a sufficient condition for

the satisfaction of Assumption 2.0 is that there exists a constant 0 < A < 1 such that

D(po || p0<) < ~ 1 )D(pai || po) for any 9,9' € S.

Corollary 2.6: Suppose the conditions in Lemma 2.3 or Lemma 2.4 are satisfied. Then

< min max E O(PK (9,0) < 2e ',o OeA„

where e„. and e„ are determined by M/V'(e„) = rie^ and M/c(£») = 4neif, + 21og2 .

As discussed before, if limc_ > o ^ ^ y > 0, then e„ and e„ are of the same order, which

determines the minimax rate of convergence.

As mention before, the conditions in both lemmas are satisfied if the log-density in the

class are uniformly bounded. When these conditions are not satisfied (for instance, if the

densities in the class have different supports, then the conditions in Lemma 2.3 can not be

satisfied), the following result provides minimax lower bound involving only the Hellinger

metric entropy.

We now consider estimating a density defined on X with respect to a measure //. with

fi{X) = 1.

Lem m a 2.5: Assume for a density / , that ||/||oo < T . Then if for a density <j, d i [ { f , fj) < f

for some 0 < e < \ /2 , there exists a density g on X depending only on <y, T and e (but not

on / ) such that

D{,f || g) < 2 ( 2 + log ( g ) ) (9 + 8 (8T - l ) 2) e2.

27

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 38: Yuhong Yang Thesis - Yale University

Bounds analogous to Lemma 2.5 are in Barron, Birge and Massart (1995), Wong and

Slien (1995).

P roof o f Lem m a 2.5: The proof is by a truncation of g from above and below. Let G =

: g(x) < AT). Let g = qIq + ATIg<=- Then because /!//(/, <7) < e, we have jG<: ( \ f ] — ^/g)2

dp, < e2. Since tf (x) < T < \g{x) for x G Gc, it follows that ~ \ f ^ ) <

Jgc (n/7 _ \ /y)2 (b'‘ — (2- Thus JGc gdp < 4e2, which implies 1 — 4e2 < jgdp, < 1 and

J’ (\/ff ~ V a f d/i < JGc gdp < 4e2. Let g = 7 - Clearly g is a probability density

function with respect to //,. For 0 < 2 < 4T, by simple calculation using 1 — 4e2 < Jgd/i. < 1,

we have | \ fz — ^j — 2(8T - l)e. Thus / [s/fj - \ fg)2 d/i. < 4(8T - l ) 2f2. Therefore,

by triangle inequality,

I ( \ / 7 “ \ f l l f dp < 2 j (v /J - s/g)2 dp, + 4 j { / g - s/jj)2 dp, + 4 j (s/?i - s/?]}" dp.

< 2e2 + 16e2 + 16(8T - l ) 2e2.

That is d!f[(f,g) < 2 (9 + 8 (8T — l)2) e2. Because £ < — d-— < by Lemma 4.5 inf V<ll'+4' -

Section 6 of Chapter 4,

D ( f || g) < 2 ( 2 + log ( g ) ) (9 + 8 (8T - l )2) e2,

which completes the proof.

For classes whose metric entropy structure is known under the Hellinger distance but

hard to know under K-L distance, the lemma is useful to give a bound 011 the covering

entropy under K-L distance.

For a density class T for which ||/||oo < T for each / G T. let M//(e) be the packing

entropy under cl//. By the lemma, an e-net, under cl// can always result in an //-net, under

cZ//, where rj = ^ 2 ( 2 + log (fjnr)) (9 + 8 (8T - l ) 2)e < Tie log ( j j for 0 < e < \/2 with T\

being a constant depending only on T. Thus for e > V// (^ -e log j < Mn(e)

or equivalently,

VK{e) < M h I I - 12.51

28

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 39: Yuhong Yang Thesis - Yale University

Let e„ satisfy M u ^ |o^ " 4„ = ne„ (then e„ > under the assumption M,i(e) -»• oo

as e ->■ 0, hence (2.5) is satisfied with e = e„) and let e„ be chosen such that M u (£.„) =

inef, ■+■ 21og2. From Theorem 2.2, we have the following result.

T heorem 2.4: Assume the packing entropy M//(e) < oo and Mu(e) -> oo as f -> 0 for

the density class T with ||/||oo < T for each / € T . Then with e„ and c„ as defined above,

i oY < m in^m ax/e^E V i;^/,/) < m inim axf e r E D { f || ./') < 2qr

Remark: Due to the presence of log/i term in the determination of e„, f 2 and e’~ arc'

typically of order and r^logn respectively for nonparametric smooth families, where

Tn is chosen such that M |/(r„) = n r 2. See Barron, Birge and Massart (1995) for related

conclusions. We suspect the extra logn might be necessary for the upper bound without

any regularity condition relating K-L distance to Hellinger distance.

2.2.4 Lower bounding minimax risk through upper bounds

From the proof of Theorem 2.1, we see that an upper bound on maxwg.y \o || qx u)

with any choice of qx" together with the packing entropy determines an minimax lower

bound as stated in the following corollary.

C orollary 2.7: Assume Assumption 2.0 is satisfied for distance d and there exists a density

function qn such that

™ x D { p x „ ]0 II 9A’“ ) < " C

Let n be chosen such that

= 2(n<5" + l062)'

Then the minimax risk satisfies

A 2 i fmin max Eod1 (6,9) > — ihAn ^

29

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 40: Yuhong Yang Thesis - Yale University

If a good upper bound on m a x o e s D { P x » \ o II (1X") is available with a .suitable choice of

q (not necessarily constructed as in the proof of Theorem 2.1), then a lower bound on the

minimax risk is given by the corollary.

The term maxfl6,s D (pxnjo II Q x n ) is the maximum redundancy of Shannon codes based

on densities q for an i.i.d. sequence of data from density po,8 G S. This redundancy is

closely related to the K-L risk. In fact, based on a good sequence of estimators, a good

data compression strategy could be constructed and vise versa. More precisely, we have the

following lemma.

L em m a 2.6: Suppose there exists a sequence of estimators 0^ based on A"|, ...Ah for k > 1

such that

Let 6c = maxflgs D(pn || p°) for any given p°. Then there exists a density q„ such that

Conversely, for any density qn on the sample space of Ah,..., A'„, there exists an estimator

p such that

Proof: Given the estimator sequence 6 , define q(xk+i\xk) - p ~ (xk+1) for k > 1 and

q(xi\x°) = p°. Let qn(xi , . . . , xn ) = Tl'i!.Zocl(x k+i\vk)- Then qn is a probability density

function. Following an argument similar to the one for proving the upper bound part of

Theorem 2 .2 , we have D{px \n || Q x n ) < Y^'=o bk-

For the second assertion, for any qn G Qn, we can rewrite it as qn(xi, ...x„) = hi(xi)ho(x 2 \

• ■ h„.(xn]xn~ l ), where h-i(x) = /).,:(.t|.'c'_1) is the conditional density of Ah given Ar,_1 = .r' -1

according to the joint density qn. Then

lo l s E D (po II poJ - ^ k = ~ L

™axD tPx*\o II 7X-) < J 2 bl-

E 0D(po II p) < ~D(Po II (In) for all 0 G S.

D ( P o || (In) = I P o ( x i ) ■ ■ ■ P o ( x n ) l o g hi{xi )h2{x2\xi) ■ ■ ■ hn,(xn \xn- l )dt'1po{xi) • • •po{xn)

n

Y , E 0D(po II hi).

30

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 41: Yuhong Yang Thesis - Yale University

Let p = i S " = i hi(x). Then p is a density estimator of po. From the proof of the upper

bound in Theorem 2.2,

E 0D(po || p) < ± 0 (1 # || q„).

This completes the proof of the lemma.

From Lemma 2.6 and Corollary 2.7, we have the following corollary.

C orollary 2.8: For a sequence of estimators 0*. based on A'|,...AT, 1 < k < n, let

maxflgs ED(po || p$ ) — 6| , then for the minimax risk, we have

A~(T2 , min imix E()d~ (0,0) > —0GAn oes - S ■

where a2 d is chosen such that

( n - 1

M d ( ( L n , d ) = 2 b i + loS 2V/=o

Remark: For smooth nonparametric classes, b'f is often of the same order of

for a sequence of estimators converging at the optimal rate. Then anA gives the right rate.

It may seem mysterious that an upper bound also forecasts a lower bound. Our expla­

nation is as follows. The smaller the upper bound is, the closer the densities in the class

to a fixed q(aq, in terms of the Kullback divergence, which suggests the densities are

harder to distinguish as revealed by Fano’s inequality.

2.3 A pplication in data com pression

The obtained theorems can be used to get bounds on tire minimax redundancy for data

compression. Let X i , . . . , X n be an i.i.d. sample of discrete random variable from po, 0 £ S.

Let qn (x i, ...xn) be a density (probability mass) function. The redundancy of the Shannon

code using density qn is the difference of its expected codelength and the expected codelength

of the Shannon code using the true density po, that is, D(p% || </„). Formally, we examine the

minimax properties of the game with loss D(p,} || qn) for continuous random variables also.

31

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 42: Yuhong Yang Thesis - Yale University

In that case, D{p\) || qn) corresponds to the redundancy in the limit of fine quantization of

the random variable (see, e.g., Clarke and Barron (1990, pp. 459-400)).

The minimax redundancy lower bounds have been previously considered by Rissaneu

(1986), Clarke and Barron (1990), Rissanen, Speed and Yu (1992), Yu (1996) and others.

These results were derived for smooth parametric families or a specific smooth nonpara­

metric class. We here give general redundancy lower bounds for nonparametric classes.

Let Qn be the collection of all density functions on the sample space A"' of (Ari ,..., X n).

From Lemma 2.6, we have the following result connecting the minimax redundancy with

the minimax risk.

C orollary 2.9:

it—ln m m fl&Vnm ax0 esE 0D (p 0 || p) < niinf/ll6Q„max0esD{p'o II 7») < minf)iePimc\xoesEoD(po

;=o

where for i = 0 , pi is any fixed density.

Rem ark: For smooth nonparametric density classes, n-min;je-piimaxfle,s'£JoD(po || p) of­

ten gives the right order of the minimax redundancy. However, for parametric classes or

other less “rich” families, this lower bound may be suboptimal. For instance, for smooth

parametric families, it is known (see, e.g., Clarke and Barron (1994)) that the minimax

redundancy is of order ^ log n, where m is the number of parameters in the family. But

nmainjj£-pnmax0£sJFfl.D(p0 || p) is bounded by a constant.

Now we take © = V to be the set of all probability densities on X and S C V to be a

subclass. The action space is assumed to be the set of all densities S = 'P (so as to include

estimates such as p constructed in the proof of Theorem 2.2). Let d(p, p ) be a metric on

V. Let Mrf(e) be the packing entropy of S under d and let V(e) be an upper bound on the

covering entropy Y/V'(e) of S under d^ . Choose en such that and choose en d be

a radius e such that M (i{en d) = 4ne\ + 2 log2.

32

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 43: Yuhong Yang Thesis - Yale University

Theorem 2.5: Assume that D(p || p ) > A 0dr{p ,p ) for all p, p € V. Then we have

nAoAe' i ,,— - — - < min inaxD{pn || qn) < 2ue~,

o In p£b

where the minimization is over all densities 011 X".

Two special choices that satisfy the requirements are Hellinger distance and L\. distance.

P roof o f Theorem 2.5: The lower bound follows from Theorem 2.1 and Lemma 2.G. For

the upper bound, consider the code based on q(x11) = j^j-j J2peac Pix ")i the mixture with

respect to the uniform prior on an en-net Gc„ of S. Then the redundancy is D(p" || q„) <

V (en) + neft < 2ne^ as in equation (2 .2 ).

The redundancy for data compression is connected with the cumulative risk of density

estimation under K-L loss (see, e.g., Clark and Barron (1990)). This risk is natural for

consideration when we estimate the density sequentially based 011 observations obtained so

far, predict the next observation, and then adjust the estimator once a new observation is

obtained.

Let 5 be an estimation procedure. That is, for each sample size n, it produces an

estimator f n based on X i , ..., X n. Let /o = /o be an initial guess density without any (•observations. Then the cumulative risk under the K-L distance up to n — 1 observations

Rcumif, <5, n) is defined as

n - l

Rcum(f,6,n) = Y , E f D ( f \ \ f i ) .1 = 0

This is the cumulative redundancy of predictive codes (using Shannon code based on

to encode the next observation A,;+i) in a information theory context. This cumulative

redundancy is exactly the redundancy of data compression for A'j, ..., X„. As seen in

Lemma 2.6, any estimation procedure 5 can result in a density on sample space of A”i,

..., X n, which can be used to construct a data compression scheme. Let . . . ,xn) —

/o(®i) ' .fi(x 2 \ x [ ) f n- i { x n \xu - , x n- i ) . Then As shown before,

D ( f n \ \ $ )) = n.cum(L5,n) .

33

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 44: Yuhong Yang Thesis - Yale University

Thus

m m sm a x f er R cam{ f ,S ,n ) = minr;(„)niaxfeJrD{f" || r/">),

where (fn)j is over all densities on sample space of X i , X„.

Suppose the minimizer of m ax;e^ D ( /" || r /n)) exists, say qi"'1. In some sense, qi"'1 is a

center of the product density class { /" , / G T ) and maXfG D ( f ” || q["')) is the “radius” of

the class { / ” , / G F}. Theorem 2.5 provides useful bounds on this radius.

2.4 A pplication in nonparam etric regression

Consider the regression model

Vi = u(xi) + Si, i - 1, ...n.

Suppose the errors s,, 1 < i < n are i.i.d. with JV(0,1) distribution. The explanatory

variables Xj, 1 < i < n are i.i.d. with density function h(x). The regression function u is

assumed to be in a function class U. For this case, the square root K-L distance between

the joint densities of (A', Y) in the family is a metric. Let || u — v ||/,.,(/,)= ’(«• - v)~hd(i be

the L 2 distance with respect to the measure induced by X . Let iV/2(e) lie the maximum of

the logarithm of the cardinality of any e-packing set under norm. Assume < oo

for every e > 0 and M2(e) —> oo as e —> 0 . Choose e„ such that

M 2(\/2e „) = n q v

Let e?l satisfy

M2( V 2en) = 4 ne'f, + 21og2.

T h e o rem 2.6: The minimax L 2{h) risk for the regression function estimation is lower

bounded by a rate determined by the L 2 packing entropy of U as follows:

e2min max II u — u ||7 .,m.\ > -r- u U&u 11 - 4

34

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 45: Yuhong Yang Thesis - Yale University

P ro o f: Denote the joint density -^=e 2(ff /;,(:/;) of (A', Y) with regression function u

by pu. Then

D(pu || pv) = E u± ( ( Y - u ( X) 2 - ( Y - v ( X ) 2)

= E u l-{a{ X ) - v ( X ))2

= \ j (u (x ) ~ v{x))2h{x)dx.

Let S = {u : u G U } and S = {u :|| u oo}. Let A„ lie the collection of the

regression estimators which maps from the sample space to S. Let d2{ti,,v) = D(p„ || p„).

From Theorem 2.1, we have

The conclusion follows.

To get good upper bounds, a little more work is needed. The upper hound in Theorem

2.2 is not directly applicable, because it is less clear whether the minimax K-L risk of

of estimating the regression function. Under some conditions, we show that it is indeed

the case. To that end, Hellinger risk of estimating the density of { X . Y) is used as an

intermediate quantity.

We assume sup„S£/ II u ||oo< L < oo, which will be needed in our analysis. From Theo­

rem 2 .2 , there exists a density estimator pn of the joint density such that max,, eu ED{p„ ||

Pn) < 2e2. It follows that maxu€H E(PH(pu,pn) < 2e‘ . More precisely, let ui{x), U'2 ( x ) , ....

un{x) be a covering set in U under L ‘>{h) norm with covering radius e„, then the estimator

constructed in the proof of Theorem 2.2 has the form pn = £ pi, where

min max ED{pn || p tl) >u£An o

estimating the joint density of (vY, Y) is lower bounded by a multiple of the minimax risk

= h{x)fji ^ j \ x , { X h Ylyl=l)

for i > 1 and po{x,y) — h(x) ■ Thus the marginal density

of Ar in the joint density pn is h(x) and the conditional density of Y given X = x is

cjn (y | x, (Xi, Yt)?=l) = i EfcTo19i {‘U I x, (A'i,Yi)ll=l). For given x and ( A T ;)}'=1, let un be

35

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 46: Yuhong Yang Thesis - Yale University

the minimizer of dn (gn (y), s)' ) over c with |,:| < L. Then u„(x) is an estimator

of u(x) based on (Xi,Y[)f= l. By triangle inequality, given x and (Ar/,

dH ( - i = e - & v- < x»2,\v27r \/27r /

< dH ^ e - ^ - ^ \ g n(y)^ + dH f,„ (y)

< 2 dH ( - j = e - & - « * » 2,gn( y ) y

Because for two joint densities h(x)gi{y \ x) and li(x)<j2 (y \ x) with the same marginal

density h(x) of X , the Hellinger distance between the joint densities equals

J h(x)d2,., {(ji{y | x), <72(37 | x)) dp,,

so from the above inequality, given (Xp Y/ jjjL.j,

4l(Pu,Pu„) < 4dj,(pu,p„ ).

It follows that

max Edjr(pu,p~ ) < 4 max Ed2„ {p„, p„) < 8c;,. u&U " ii&A

Now

E d2„(pn,p~n) = 2£ ( i - I ^Pujhd ly x dp

= 2E j h(x) ( l - dp.

Because ±(u(x) - un{x))2 < \ t f , ( l - e-£(«(*>-«"(*)>2) > l e~ ^ ■ (u(x) - n„(x))2. It

follows that

' I {u{x) - u „ { x ) f d p < 1 6 e ^ ‘ maxE( f i , { pu, p ll) < 128e*L'e*. ,/ neu

Thus we have the following result.

max E u&A

T h e o re m 2.7: Assume sup„ey || u ||oo< L. Then

min max || u - u || |.)(,0 < 12S e^V ;;.u u&A ~x '

If further limc > T then

min max || u - u \\l„{h)^ e*.a «6U ' '

36

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 47: Yuhong Yang Thesis - Yale University

R em ark : Using a similar argument, it can be shown that the above conclusion is still true

if the error distribution is assumed to be double exponential in stead of normal.

Previously, minimax rates of convergence for nonparametric regression are identified for

specific smooth classes such as Lipschitz classes, Sobolev classes and Besov classes by Stone

(1982), Nemirovskii (1986), Nemirovskii, Polyak, and Tsybakov (1985), Donoho, Johnstone,

et al (1993), Pinsker (1980), Ibragimov and Hasminskii (1982) and others. Here we have

shown that under normal (or double exponential) assumption on the error distribution, the

minimax L 2 rates are determined by metric entropy.

2.5 Exam ples

In this section, we demonstrate the applications of the theorems developed in the previous

sections. As will be seen from the following examples, once we know the order of (or bounds

on) metric entropy of a target class, the minimax rates (or bounds) can lie determined right

away for many smooth nonparametric classes without additional work.

We will consider several function classes on [0, l]f/ for some <l> 1 in the examples.

1. Take d = 1. Let u>f(h) = max^i^o^;,..^! |A /,/(x)|, where A i.f{x) = f ( x + 2t.) —

2f ( x + t) + f {x) is the second difference of / at x with increment t (maximum is

taken over those t for which is defined). Let <j>(h) be a concave increasing function

and let A^(C) = { / : / is continuous, | / | < C, and u /(/i) < </>{!>■)}■ Then from a

result of Clements (1963), the sup-norm metric entropy of this class Moo(e) satisfies

Mco(e) b .ft-!(<;)• An example of such a (p is <p(h) = ha for 0 < a < 1.

2. Let A (? ct( C o , Ci,.. . , CT, C ) be the class of functions / which have all partial derivatives

|D (fc)( /) | < Ck for k = 0 , 1, ..,r , and \ D ^ { f ) { x ) - D ^ { f ) { x + h)\ < Cha {0 < a < 1).

Prom results of Kolmogorov and Tihomirov (1959) and Clements (1963), for 1 < q <,!

00 , the L f, packing entropy of A^a is of order M q(e) ^ j r+" .

3. Let A(f„(Co, C i , ..., C,., C) be the class analogous to A((„(Co,C'i,...,Cr ,C ) defined in

terms of Lo norm. That is, A ^ (C o , C \ ,..., Cr , C) consists of functions / which have all

partial derivatives || D ^ ( f ) ||2< Q, for k = 0 , l , . . , r , and || D ^ { f ) ( x ) — D ^ ( f ) ( x +

37

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 48: Yuhong Yang Thesis - Yale University

It) lk< Ch° (0 < cv < 1). Then from Lorentz (19GG), the Lo metric entropy of this

class is also of order ^ r+n.

4. Let Pa(C) be the class of real functions f ( x ) on [0,27t], periodic with period 27r,

with mean value zero and having a derivative of order a in L ’ (in the sense of Weyl)

uniformly bounded in mean by C. It was shown by Kolmogorov and Tihomirov (1959)

that the Li metric entropy of this class is of order ( f ) " •

D e n s ity e s tim a tio n .M f - )

Assume lim,_>.n > 1 f°r A^(C). Consider the density class of / with log/ € A,/,(C).

For this class, the log-densities are uniformly bounded and the sitp-norm metric entropy is

of the same order as Moo(e). Note also that K-L distance is equivalent to Lo distance and

hence upper bounded by a multiple of the squared sup-norm distance. Applying Theorem

2.1, we have min^ maxj0g^gA E || / - / Hoo ; e;n where en satisfies neft - Moo(e«). But

Moo(e) y , we obtain

min max E || / - / ||oo>= zl, f log/eA^C)

where e.„ satisfies nei — t tA —t.-ii - n 0 i(£n)

Assume log/ G Af a(Co, C \ , ..., Cr ,C). Then for this density class, the L q (q > 1) metric

entropy is of order ^ j r+" . From Corollary 2.4, we have

- O 2 ( r + q )

min max E || f - f | | ^ n / log/eA;(Q

Because the log-densities are uniformly bounded, the minimax K-L risk or squared Hellinger

risk converges at the same rate. Also because the metric entropies under L q, q > 1 are of. j 2 ( r + t > J

the same order, using Theorem 2.1, we have m inj maxj0g fsA,, E || / — / ||“> n *('•+«)+''.

Together with the upper bound rate on L 2 risk, we have for 1 < q < 2,

min max E || / - / ||“~ n / log/eA/„

Now we consider classes of densities which may not be bounded above or below from

zero. Let A^„(Co, C \ , ..., Cr , C) and A ^ (C o ,C i,...,C ,.,C ) be the density functions in A

;f0 (Co, C i , ..., Cr , C) and A(!^(Cq, C \ ,..., Cr , C) respectively. When the constants C, C o,..., Cr

are large enough (which is assumed here), the orders of Lq metric entropies of A',(fV(Co, C \ ,..., C,.

38

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 49: Yuhong Yang Thesis - Yale University

are still ({ j +“ - For these classes, Assumption 2.1 is satisfied. Therefore, by Theorem 2.3

and Corollary 2.5, for the class we have

- 2 ( r + n )

min max E || / — / | | ^ n for 1 < <y < 2 ./ / e a;!,«

The densities in Afia are 110 necessarily bounded. But from Proposition 2.1, for the squared

L\ risk,2( ■•+.')

min max E || / — / n f f e a ™

Because A f a(Co, Cl, ..., C,., C) C A ;f^(Co,C i,...,C ,.,C ), the lower bound obtained above

on the squared Ly risk is also a lower bound for the larger class. Combining the upper and

lower bounds, we have- .-) 2( !•+..)

min max E || f — f ||f>; n 2(<-+")+>'./ /e

Birge (1986) and Devroye (1987) obtained similar results for A)(n with additional con­

structions of special subsets.

R egression function estim ation.

Consider the regression problem in Section 4. Let h be the density of the explanatory

variable X . If |log/i| is bounded, then for Af a (Co,Ci ,. . . ,Cr,C) and Pn (C), the metric

entropies under L-2.{h) norm are of the same orders as given before. By Theorems 2.6 and

2.7, we have the following conclusion.

C o ro lla ry 2.10: Assume |log/i| is bounded. Then the minimax L2 rate or bound of

estimating a function in A^a (Co, C i , ..., Cr, C) and Pa(C) are given below.

2 ( r + t . )

min max E || u — u x n '•!(r+">+''. « ue\i'0

min max E II u — u ||.j >- n~'ln+t . n u6P„(C) “

D ata com pression.

Because K-L distance is lower bounded by half the squared L \ distance, according to the

relationship between density estimation and data compression, we know that the minimax

39

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 50: Yuhong Yang Thesis - Yale University

redundancy for compressing an i.i.d. data string governed by a density in Ajf , is lower

bounded in rate as follows (as a consequence of Theorem 2.5):

1 2 ( r + . Q

min max — D( f " || q„) > n •■*(>■+«)+>'.'InSQn U

This rate is also obtained by Yu (1996) through a hypercube argument, to lower bound the

mutual information between the parameter and the observations.

If we assume log/ € then the minimax redundancy rate is identified from Theorem

2.5:1

min max —D(f""\ \ f j„)mi . a(r+<.)+./.r /n € Q n l o g / e A ', ! ^ n

40

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 51: Yuhong Yang Thesis - Yale University

Chapter 3

A daptation of D ensity Estim ation

In this chapter, we estimate a density function assumed to be in a countable collection

of nonparametric classes. We study the possibility of adaptation and provide adaptive

estimators over the classes.

The minimax results in Chapter 2 deal with one general class of densities. In some

statistical applications, target classes are chosen to be smooth nonparametric classes such

as Sobolev spaces, Lipschitz spaces, etc. The smoothness condition of a function indicates

how much the function value may change according to change of the independent variables.

Smoothness is often measured by some kind of norm defined in terms of the derivatives

of the function. As seen in the examples in Chapter 2, smoothness conditions of a target

class affect how fast the minimax risk converges to zero and roughly speaking, a smoother

class has a smaller order metric entropy, thus has a better minimax rate of convergence. For

various smooth nonparametric classes, a lot of estimation procedures such as kernel methods

with predetermined band widths (e.g., see Devroye (1987)) and some sieve methods (e.g.,

Stone (1990, 1994), Barron and Sheu (1991), Birge and Massart, (1993), Wong and Shen

(1995)) have been proposed utilizing the smoothness information. But in practice, we only

observe a sample and the smoothness condition of the density is not known to us. A

statistical procedure specifically designed for one smoothness condition generally does not

work optimally for other classes with different smoothness conditions. This consideration

suggests the necessity of adaptation capability of estimators. Of course, one may first

obtain an estimator under a smoothness condition somehow chosen based on some rough

idea, and then readjust the assumption based on the estimator. Or some times it might

41

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 52: Yuhong Yang Thesis - Yale University

be possible to compare estimators under different smoothness assumptions and choose the

best one according to some visual or ad hoc justifications. But these kinds of procedures

might depend too much on the user’s personal opinion. We here instead are interested in

good data-driven strategies.

Though smooth nonparametric classes are the ones most often used in practice, concep­

tually we do not restrict ourselves to those classes. More generally, we assume that the true

density can be from any of a countable collection of density classes. An interesting ques­

tion is: Can we have one estimator that suits simultaneously for all classes in some sense?

We wish to have one estimation procedure which can automatically adjust to the curvature

(smoothness) of the true density based only on data. Such an estimation procedure is called

an adaptive procedure.

Adaptation is desirable for an estimator, because such an estimator is more flexible

and can work well without strong assumptions on the true density. The idea of adapta­

tion dates back over ten years ago. For instance, a variable bandwidth was considered to

make an adaptive kernel estimator (e.g., Hardle, Hall and Marron (19S5)), and smoothness

parameters were adaptively adjusted based on data in smoothing spline estimation (e.g.,

Craven and Wahba (1979)). Efroimovich (1985) made a great contribution in this direction

for density estimation. He considered estimating a density having a Fourier representation

satisfying a certain smoothness assumption with smoothness parameter not known. He con­

sidered projection estimators of the Fourier coefficients and proposed an adaptive strategy

to achieve the minimax rates of convergence without knowing the smoothness parameter in

advance. In later years, Donoho, Johnstone and some other researchers (see, e.g., [32] and

[33]) advocated the use of wavelet shrinkage estimators in both nonparametric regression

and density estimation. They showed that the wavelet shrinkage estimators converge near

optimally simultaneously over the Besov spaces without knowing the hyper-parameters in

advance. Birge and Massart (1995), Barron, Birge and Massart (1995) have had great

success in providing model selection theory using general contrast functions.

In this chapter, we are interested in adaptation in terms of the minimax rates of con­

vergence. From now on, we will concentrate on the estimation of density itself in stead of

other parametrization (such as root density or log-density). We will only consider K-L loss

in this chapter.

Let J-j, j > 1 be a collection of target classes. Assume / £ U/>|JF;-. If the conditions in

42

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 53: Yuhong Yang Thesis - Yale University

Corollary 2.4 are satisfied for each class, then for a given class, the estimator constructed

in the proof of Theorem 2.2 converges at the optimal rate for that class. Now, without

knowing which class contains / , can we have one estimator (not depending on j ) such that

it converges at optimal rate of the class containing / ? If such an estimator exists, we call

it an adaptive estimator over the classes Fj, j > 1.

3.1 A daptation under entropy conditions

Let Mj(e), j > 1 be packing entropies under r/// for the density classes Fj, j > 1. For

simplicity, we assume that for each class, there exist constants C ’j and 7 j such that D ( f ||

a) < Cjdjr ( f ,g) for / , <7 € Fj with (J~H(/, (]) < 7j. This condition necessarily requires K-L

distance and squared Hellinger distance behave similarly when the densities are close to

each other in Hellinger distance. Suppose also that the classes are rich enough such that

liminfc-K) j^.^) > f°r j > 1- Applying Corollary 2.3 and Theorem 2 .2 , for a fixed j ,

the minimax rate of convergence under K-L distance is where e„tJ- is determined by

en,i = — n • Using the relationship between density estimation and data compression

as discussed in Chapter 2, we obtain the following result.

T h e o re m 3.1: Under the above conditions, there is a minimax-rate adaptive estimator,

that is, an estimator that is simultaneously minimax-rate optimal for {.F/, j > 1}. Specifi­

cally, the estimator /„ given in (3.1) based on X i , . . . , X n. has risk bound

max E f D ( f || /„) < const j • e*j

for all j > 1.

P roof: We construct an adaptive estimator using some Bayesian mixing idea. As before,

for each class j , consider an enj-net GCn j in T3 under (Ik and the uniform prior 011 Glu }

and let Qj{xn) = jg}—f X)/gGe f nixn) be the mixture density over the covering set. Let

7r(j) be positive prior probabilities on the classes satisfying ^ (.'/') — U Then let us mix

these mixtures over the classes according to the prior n(j) on the classes. Let

?<n V ) = I > U ) Q j ( x ")-j> 1

43

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 54: Yuhong Yang Thesis - Yale University

Then this density is close to all densities in the classes in K-L distance sense. In fact, for

any / 6 T j - ,

D i f | | r / n)) = f r ( x n )log f ')xp r//,,

<-

= l o g ^ + ^ l r i u j v ’).

From previous analysis, we know that D ( / " || < 2nef, j . . Thus

D ( r II «<">)< +

As before, let

/ ( » = ^ P = X'IA"') C3-1)r'- ;=o

be the estimator constructed as a Cesaro average of the Bayes predictive density estimators,

where

E J < i )V (* 1 * 0 =

is Bayes predictive density based on X 1 using two layers of priors. Then we obtain

Thus for every j* > 1,

E D ( P0 || /„ ) < X- D ( /" || </">)

^ FloS ^ ( F ) + 2e»,i*-

max E f D ( f || f n) < ^ l o g ^ y + 26“^..

Note that ^ lo g ^ ry does not affect the rate of convergence in - lo g ^ ^ rj -t- 2 j . and the

estimator does not require the knowledge of which class contains the true density. We

conclude that / is an adaptive estimator in terms of the minimax rates of convergence.

From the above analysis, due to not knowing which class contains the true density, we

pay a price of an extra ^ lo g ^ ry in the risk bound. Because log^jry oo as 7r(j*) -» 0,

the obtained upper bound becomes useful for a class with small prior probability, only when

the sample size is large compared to log^jry.

44

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 55: Yuhong Yang Thesis - Yale University

3.2 A daptation based on existing good estim ators

In the above subsection, an adaptive estimator is constructed by considering suitable cov­

ering sets in the classes and mixing the corresponding densities on the product space. The

construction is theoretically easy to handle, but hard to implement for practical applica­

tions. For specific classes, many nonparametric procedures have been proposed and have

been shown to be minimax optimal (in terms of rate of convergence). Can we obtain adap­

tive estimators based on these specially constructed estimators for specific function classes?

We are going to answer this question in the positive direction. Again, we take advantage

of the connection between estimating a density and data compression to construct a good

density on the product space of (A’i, . . . ,Xn) which is close to all densities in the considered

classes in K-L sense and then use the relationship between the two problems in the reverse

direction to get back one density estimator that is optimal for all the density classes under

some conditions.

Suppose for each class / € !Fj, we have an estimation strategy i)j producing density

estimators f j ti{x |ATi), f j ^ { x lA ^A b), ..., ,/’j,n- i (.i'|.Yi, •••, Ar„ _ i ) , ... based on observation(s)

{A’i}, {A'i, A 2}, ..., {A’i, JY2, ...,A „_i}, and so on. Here we allow estimation strategies to

be different for different classes. For instance, we may use kernel estimators for some classes

and wavelet estimators for some others. A single estimator will be constructed lay mixing

these estimators somehow and shown to be good for all the classes in the asymptotic sense.

The following is the recipe to get a good estimator.

1. Construct a good density on the product space (re 1, ...,.t„) for each class.

Let ,fjfi(x) be a fixed density function on the sample space. We may take //,n(:r) to

be the “centroid” density /* (or a more easily obtained good one) of the density class

Tj that minimizes m ax/ejr. D ( f || /*) over all densities (that is, fj,o{x) is close to

the minimax density estimator based on no data). However, the choice of will

have no effect on the asymptotic results. Let

'7j") = /i,o(*l) • f j , i(m2 |m i) f j >n- i ( x n\x\, ...,.i;„_i).

Then q ^ is a density function on the product space of A | , , X n.

2. Average over j to get a mixture density.

45

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 56: Yuhong Yang Thesis - Yale University

Let 7Tj, j > 1 be prior probabilities of the density classes satisfying itj > 0 for all

j > 1. Let

</'“> = E v 4 ”’-

It is a mixture density of constructed densities on the product space.

3. Get conditional densities based on r/n).

Let us rewrite the density r /n) as product of conditional densities:

7(n) ; 7o(al) • 7 i(^2k i) ■ • • 7».-l(;,:il|;,;l ; •••, )•

4. Final estimator.

Let f i(x) = ...Xi), i — 0, ...,n — 1 be final predictive density estimators based

on current observations X \ , ..., Ar,\ Call this estimation strategy 5*. Let

H . - l

n r= o

We use / ,, as our final estimator of the unknown density based on A’i ,..., A’„_i •

Let R ( f ,8 j , n ) = D ( f || /y,n) and WR.(Sj,n) = ma.x/ej . D ( f || Ire the risk at / and

worst case risk respectively of estimator f j <n (produced using strategy Sj) based on sample

A’i, ..., X n. We next bound the risk D ( f || /„ ) in terms of /?.(/, Sj . ■/), 1 < ■/ < n — 1.

As before, for any / ,

n —iE f D u ii \\.h)

7 = 0

< - £ > ( . r i i 7 (n)) n1 f f 1l(xn)

< - / / " log J W dl'n ■' n { j* )qP (x n)1 1 I f f"(x")

= - lo g — - + - / / ' ' ( m " ) l o g ^ — n n j q P { x " )

= - l o g - ^ 7 + - D ( / " II ryj.'.0 ) .77. 7T(j*) 77. r j J

The term D ( / " || cfp'j can be bounded in terms of risks of original estimators. Indeed,

D ( . r i i 4 ‘’) =

46

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 57: Yuhong Yang Thesis - Yale University

Thus we have obtained the following inequality:

E j D ( f II /,,) < + 1 E W . v . o -

Let B.cum(f ,S ,n) = /?(/, <5,0- It is the cumulative K-L risk of the estimation

strategy 5 up to n — 1 observations. With the given prior 7r on the strategies 5j , let

R n (tt, { 5 j J > 1}) = infj>| ( l° 6' ^ y + B.cmnU\Sj ,v,)Sj .

Then R n (ir, {<5j , j > 1}) is the best trade-off between the cumulative risk and the logarithm

of the inverse prior probability over the estimation strategies.

T heorem 3.2: For any given countable collection of estimation strategies {8j , j > 1},

we can construct one estimation strategy 5* such that for any underlying density / , the

cumulative risk of J* up to n — 1 observations and the usual risk of / „ based on n — 1

observations are upper bounded by R n (ir, {5j ,j > 1}) and ^ R n (tt, {<')/,.; > 1}) respectively.

That is,

E f D ( f \ \ } n) <

and

B cum(f,6*,n) < B.n (7r, {Sj , j > 1}).

From the above theorem, by mixing the existing strategies designed for different classes

using a prior, we have one single estimation strategy that shares the advantages of all the

proposed strategies automatically in the asymptotic sense. More precisely, the cumulative

risk of the mixed strategy is bounded by the best trade-off between minus logarithm of the

47

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 58: Yuhong Yang Thesis - Yale University

prior probability (weight) of a strategy and the cumulative risk of that strategy. It follows

that the cumulative risk of the mixed strategy is bounded by minus logarithm of the weight

put on the best strategy for the underlying density / (the strategy minimizes R c„ m U \ S j , n )

over Sj ) and the cumulative risk of the best strategy for / . Thus without knowing which

strategy works best for the unknown density, the price we pay in terms of the cumulative

risk is at most a constant (minus logarithm of the weight). In another word, one strategy

can do the job of a countable collection of strategies designed for different target densities

in terms of the rates of convergence of the cumulative risks.

From the theorem, if Sj is minimax-rate optimal for class T j in terms of the cumulative

risk, then the mixed strategy J* is minimax-rate adaptive over the classes Tj, j > 1.

It is more complicated to obtain minimax-rate adaptive estimators over a general collec­

tion of density classes for the usual risk instead of cumulative risk. Our approach of analysis

on risk of f n is through the results on the cumulative risk. As will Ire seen next, this ap­

proach does not always guarantee the minimax-rate adaptivity for the usual risk as opposed

to that for the cumulative risk, yet the result is satisfactory for many nonparametric classes

of densities.

Suppose Sj , j > 1 are minimax-rate procedures for the corresponding classes under the

usual risks, that is, there exist constants (3j (j > I) such that

Rmm{j i n)

is upper bounded by a constant Cj for every class j.

Typically for smooth nonparametric classes, the minimax risks converge around some

polynomial rates in the sample size. For example, if R mm{j,n) ~ n~r> for some 0 < rj < 1,

m a x R ( f , S j , n ) < f j j min m a x D ( f || /') f z f j ' /(A ’i ...... A

for all n. Denote the minimax risk m in ^ V] max/gjr. D{J' || / ) by n) for the

classes. Then, from Theorem 3.2, for each j > 1,_ I

II /„.) < “ log

1 1 1 1 < - lo g — r + Pj - R-mrnU, >■)

n tt(j) n f ^

H— maxm ax D (/ || /,,.) < -logJ C*/ j i t

Thus, the estimator / „ is adaptive if

then EILq' Rnimlj, i) ~ n is indeed bounded.

48

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 59: Yuhong Yang Thesis - Yale University

We next give some sufficient conditions for the average ^ E /U )1 -ftmmCyVO to converge at

the same order of R mm{j, n).

Let an be a decreasing sequence. Suppose an — n~^l~a^K(n), where 0 < cv < 1 and «(n)

satisfies one of the following conditions.

C ase 1. n(n) is increasing to oo. An example is «(n) = (logn)v for some ?/ > 0. Then

1 A 1 ,ln “ n .i=i i=i

n / \ —(l-o) 1= Z ( i ) - W - i ■ » -< ->

But E ”=1 ( ^ ) • n. -> Jo x (l ° )dx = So » Ei=i “» - Together with the

monotonicity of an, we know for this case,

1 n

n i=ian.

C ase 2. «(n) stay bounded above and away from 0. For this case, using a similar

argument, it is not hard to see that ^ EE=i ai x a» -

C ase 3. «(ra) \. 0 and assume that there exists r > 0 such that liminf- --"j-■ - > 0 . An

example for this case is k,(u) = for some q > 0. Then

i t - = i t ‘-“-MO7 . = 1 i = l

1 n *+r •. ii.1 ' . _ C l . 1

= i E C ' - ' - f i ) + i E4-1 ;=nT+v

n r-r n r .?.=i i=i

In the last expression, the first term is of order n ^ l+T ■* = o (n ^ and the second

term is of order under the assumption. Thus ^ E?=i a> x an-

49

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 60: Yuhong Yang Thesis - Yale University

From the above simple calculations, if for each class the minimax risk behaves as

one of the above situations, then our mixed strategy S* does provide an adaptive estimator

over the considered classes.

5 0

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 61: Yuhong Yang Thesis - Yale University

Chapter 4

M odel Selection for D ensity

Estim ation

4.1 Introduction

In Chapter 3, we constructed adaptive estimators based either on metric entropies of the

density classes or on existing good estimators for each of the target classes. The adaptive

estimator based on metric entropies require construction of suitable packing sets in the

density classes, which is very hard to do in practice. The adaptive estimator based on good

estimators for each of the classes is also hard to compute, because integration is involved

in one step. Practically feasible adaptive density estimators are desired.

Two types of nonparametric density estimation procedures are often used in practice.

One type is fully nonparametric, where no parametric models are assumed to do statistical

inference. Another type is a compromise between full nonparametric and parametric pro­

cedures, where parametric models are still used to perform statistical estimation, but the

parametric models are allowed to become more and more complicated as the sample size

increases. In this chapter, we consider the use of the second approach to obtain adaptive

estimators based on model selection.

To estimate the unknown density function a sequence of finite-dimensional density

families f k ( x , 0 ^ ) , 0 ^ £ &k are suggested to approximate the true density /(:»;). For

example, one might approximate the logarithm of the density function by a basis function

expansion using polynomials, trigonometric, or spline series (for a detailed review on this

51

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 62: Yuhong Yang Thesis - Yale University

topic, see Barron and Sheu (1991)). For a given model k, we consider the maximum

likelihood estimator of Barron and Sheu (1991) (later referred as B & S) show

that n~ 2»+‘ is the optimal rate of convergence of to f ( x ) in the sense of relative

entropy (Kullback-Leibler distance) f / ( x) log *or whose logarithms

have s square-integrable derivatives and that this rate is achieved by suitably choosing the

model size according to the smoothness parameter s. Stone (1990) obtains similar results

for one dimensional log-spline models and later (1994) develops convergence rates for multi­

dimensional function estimation (including density estimation) using tensor products of

splines with a given order of interaction. The convergence rates are also obtained with the

knowledge of the smoothness property of the target function. These results are theoretically

very useful but are not applicable when the smoothness condition of the logarithm of the true

density is not known in advance. In practice, with the smoothness parameters unknown,

the size of the model to be used should Ire chosen automatically from data. The completely

data-driven estimation requires a model selection criterion to compare the different models

and select a suitable size one.

A I C (Akaike (1973)) is a widely used model selection criterion in many statistical ap­

plications. This criterion is suggested by Akaike from considering the asymptotic behavior

of the relative entropy between the true density and the estimated one from a model. From

his analysis, a bias correction term should be added to -loglikelihood as a penalty term

to provide an asymptotically unbiased estimate of a certain essential part of the relative

entropy loss. The familiar A I C takes the form

AIC(k) = — loglikelihood + m/,:,

where mk is the number of parameters in model k, and the likelihood is maximized over

each family.

In addition to AIC, some other criteria have received a lot of attention. Schwartz (1978)

proposed B I C based on some Bayesian analysis; Rissanen (1984) suggested the minimum

description length (M D L ) criterion from an information-theoretic point of view. Usually

the M D L criterion takes the form

MDL{k) = — log likelihood + log n.

The term ^ log n is the description length of the parameters with precision of order for

each parameter, and the likelihood is maximized over the parameters represented with this

52

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 63: Yuhong Yang Thesis - Yale University

precision (addition terras that appear in refinements of A I C and M D L are in Bernardo

(1979), Clark and Barron (1990, 1994)).

The asymptotic properties of these criteria have been studied. It is shown that if the

true density f ( x ) is in one of the finite-dimensional models, then D IC chooses the correct

model with probability tending to 1 (see, e.g., Haughton (1989) and Speed and Yu (1993)).

For A I C , however, under the same setting, the probability of selecting a wrong model does

not vanish as the sample size approaches oo.

In a related nonparametric regression setting, an asymptotic optimality property is

shown for A I C with fixed design (Shibata (1981) and Li (1987)). Li shows that if the

true regression function is not in any of the finite-dimensional models, then the average

squared error of the selected model is asymptotically the same as that could be achieved

with the knowledge of the size of the best model to be used in advance. For the above M D L

criterion, however, the average squared error of the selected model converges a t a slower

rate due to the presence of the logn factor in the penalty term. In a density estimation

setting, Barron and Cover (1991) show that the Hellinger distance between the true density

and the estimated one from M D L converges at a rate within a logarithmic factor of the

optimal rate.

The M D L principle requires that the criterion retains the Kraft’s inequality requirement

of a uniquely decodable code. This requirement puts a restriction on the choices of candidate

parameter values. For some cases, with suitable restrictions on the parameters, the M D L

principle can yield a minimax optimal criterion of the form — log likelihood + constant • m*,,

whose penalty term is of the same order as that in A I C (see Barron, Yang and Yu (1994)).

In this work, we consider comparing models using criteria related to AIC and MDL in the density estimation setting. We demonstrate that the criteria have an asymptotic optimality property for certain nonparametric classes of densities; that is, the optimal rate of convergence for density functions in various nonparametric classes is achieved simultaneously with the automatically selected model, without knowing the smoothness parameters in advance. As opposed to AIC, we allow the bias-correction penalty term to be a multiple of the number of parameters in the model, with a coefficient that depends on a dimensionality constant of the finite-dimensional model related to the metric entropy. This dependence is needed when the dimensionality constants for all the models are not uniformly bounded.

In this paper, the coefficients are specified so that the asymptotic results hold. With this consideration, the criteria take the form
$$-\sum_{i=1}^{n} \log f_k(X_i, \hat{\theta}^{(k)}) + \lambda_k m_k, \qquad (4.1)$$
where $\hat{\theta}^{(k)}$ is the maximum likelihood estimator in model $k$ and $\lambda_k$ is a positive constant.

Let $\hat{k}$ be the selected model, which minimizes the above criterion value.

In contrast to the minimum description length criterion, we do not discretize the parameter spaces, and the criteria used here do not necessarily have a total description length interpretation. In addition, the results here apply to more classes of densities than those considered by Barron, Yang and Yu (1994). We should also note that our criteria are not necessarily Bayesian.

We evaluate the criteria by comparing the squared Hellinger distance $d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) = \int (\sqrt{f} - \sqrt{f_{\hat{k},\hat{\theta}^{(\hat{k})}}})^2\, d\mu$ with an index of resolvability. The concept of resolvability was introduced by Barron and Cover (1991). It naturally captures the capability of estimating a function by a sequence of models. The index of resolvability can be defined as
$$R_n(f) = \inf_{k}\Big\{ \inf_{\theta^{(k)} \in \Theta_k} D(f\,\|\,f_{k,\theta^{(k)}}) + \frac{\lambda_k m_k}{n} \Big\}.$$

The first term $\inf_{\theta^{(k)} \in \Theta_k} D(f\,\|\,f_{k,\theta^{(k)}})$ reflects the approximation capability of model $k$ for the true density function in the sense of relative entropy distance, and the second term $\lambda_k m_k / n$ reflects the variation of the estimator in the model due to the estimation of the best parameters in the model. The index of resolvability quantifies the best trade-off between the approximation error and the estimation error. It is shown in this work that, with the use of the criterion, when the $\lambda_k$'s are chosen large enough, the statistical risk $E\, d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}})$ is bounded by a multiple of $R_n(f)$. Clearly, the resolvability attains the best rate with the smallest allowable $\lambda_k$'s, say $\lambda_k^*$, $k \in \Gamma$. The cases studied in this paper take one of two forms: 1. the $\lambda_k^*$'s are constants independent of $k$ (then the criterion is like AIC but with a constant possibly different from 1); 2. $\lambda_k^* \leq \text{constant} \cdot \log m_k$ (then it is like BIC but with a constant possibly different from $\frac{1}{2}$).

To apply the above results, we can evaluate $R_n(f)$ for $f$ in various nonparametric classes of functions; an upper bound on the convergence rate is then easily obtained. Examples will be given to show that these bounds correspond to optimal or near-optimal rates of convergence for density functions in various nonparametric classes.

In statistical applications, models are needed to perform sensible analyses and draw useful conclusions. Usually, the models are suggested based on previous experience or intuition about what class of functions the unknown function might be in. Due to the lack of

such knowledge of the true function, it is often more flexible to consider more than one class of functions in the statistical analysis. For example, if we use a series expansion method to estimate the logarithm of the density, we might consider polynomial models, trigonometric models, and spline models at the same time and wish to choose whichever is best in terms of the statistical risk. For spline models, we might consider different orders and different numbers of knots. Even if we consider only one type of series expansion, it might be advantageous to consider sparse subset models if the true function is sparse in the sense that only a small fraction of the basis functions are useful for good approximation. Such an advantage will be demonstrated in Section 4. In high-dimensional function estimation, the complete models, which use all the basis functions up to certain orders, often fail due to the "curse of dimensionality." On the other hand, sparse subset models such as additive models and low-order interaction models might yield reasonable estimates. Considering

many different classes of functions or sparse models may result in exponentially many or even more models. When exponentially many models are considered, significant selection bias might occur with bias-correction-based criteria like AIC and the criteria we just proposed. The reason is that the criterion value cannot estimate the targeted quantity (e.g., the relative entropy loss of the density estimator in each model) uniformly well over exponentially many models. For such cases, the previously obtained results for selection among polynomially many models can no longer be applied. For example, for nonparametric regression function estimation with fixed design, a condition for Li's results is no longer satisfied. To handle the selection bias in that regression setting, a model complexity based on an information-theoretic consideration is incorporated into AIC and a new criterion named ABC is suggested (Yang (1993)). There it is shown that ABC provides the best trade-off among the approximation error, the estimation error and the model complexity.

For the density estimation problem, we also take the model complexity into consideration to handle the possible selection bias when exponentially many or more models are entertained for greater flexibility. For each model, a complexity $C_k$ is assigned with $L_k = (\log_2 e) C_k$ satisfying Kraft's inequality $\sum_k 2^{-L_k} \leq 1$; that is,
$$\sum_k e^{-C_k} \leq 1.$$


The complexity $L_k = C_k \log_2 e$ can be interpreted as the codelength of a uniquely decodable code describing the models. Another interpretation is that $e^{-C_k}$ is a prior probability of model $k$. Then the criteria we propose are
$$-\sum_{i=1}^{n} \log f_k(X_i, \hat{\theta}^{(k)}) + \lambda_k m_k + \nu C_k, \qquad (4.2)$$
where $\nu$ is a nonnegative constant.
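Operationally, (4.2) is a one-line computation per model. Below is a minimal sketch, assuming each model has already been fitted by maximum likelihood and that the user supplies $m_k$, $C_k$ and $\lambda_k$ (all names here are illustrative, not from the thesis):

    import numpy as np

    def criterion(loglik, m, C, lam, nu=9.49):
        # Criterion (4.2): -log-likelihood + lam_k * m_k + nu * C_k.
        return -loglik + lam * m + nu * C

    def select(models, nu=9.49):
        # models: list of dicts with keys "loglik" (maximized log-likelihood),
        # "m" (dimension m_k), "C" (complexity C_k, with sum_k exp(-C_k) <= 1)
        # and "lam" (penalty constant lambda_k).
        values = [criterion(M["loglik"], M["m"], M["C"], M["lam"], nu)
                  for M in models]
        return int(np.argmin(values))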

For the above more general criteria, we redefine the index of resolvability by adding the complexity term:
$$R_n(f) = \inf_{k}\Big\{ \inf_{\theta^{(k)} \in \Theta_k} D(f\,\|\,f_{k,\theta^{(k)}}) + \frac{\lambda_k m_k}{n} + \frac{\nu C_k}{n} \Big\}. \qquad (4.3)$$
It provides the best trade-off among the approximation error, the estimation error, and the model complexity relative to the sample size. We show
$$E\, d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) = O(R_n(f)).$$

As an example, we will consider estimating a density function on $[0,1]$. We assume that the logarithm of the density is in the union of the Sobolev classes $W_2^s(U)$, $s \in \mathbb{N}$, $U > 0$. We approximate the logarithm of the density by spline functions. If we knew $U$ and $s$, then by using splines of a suitably pre-determined order, the optimal rate of convergence $n^{-2s/(2s+1)}$ is achieved. However, this rate of convergence is saturated for smoother densities. Without knowing $U$ and $s$, we might consider all the spline models with different smoothness orders and let the criterion choose a suitable one automatically from the data. Indeed, from our theorem, the optimal rate of convergence is obtained simultaneously for density functions with logarithms in the classes $W_2^s(U)$, $s \in \mathbb{N}$, $U > 0$. In other words, the density estimator based on the model selection adapts to every class $W_2^s(U)$, $s \in \mathbb{N}$, $U > 0$.
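The rate just cited follows from the usual bias-variance balancing in the resolvability. Under the standard spline approximation bound (relative entropy error of order $m^{-2s}$ for an $m$-dimensional model, as verified later in this chapter),
$$\inf_{m \geq 1}\Big\{ C\, m^{-2s} + \frac{\lambda m}{n} \Big\} \asymp n^{-2s/(2s+1)}, \qquad \text{attained at } m \asymp n^{1/(2s+1)},$$
so a criterion that automatically selects a model of dimension near $n^{1/(2s+1)}$ achieves the optimal rate without knowledge of $s$.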

The above examples suggest that good model selection criteria can provide minimax optimal function estimation strategies simultaneously for many different classes. As further applications of our results, neural network models and sparse density function estimation will be considered.

This chapter is organized as follows: in Section 2, we present a key lemma for the main result; in Section 3, we state and prove the main theorem; in Section 4, we provide some applications of the main results; in Section 5, we give the proofs of the key lemma and some other lemmas; and finally, in Section 6, we prove several useful inequalities.


4.2 A key lemma

Let $f$ be the true density function, and let $f(x, \theta)$, $\theta \in \Theta$, be a parametric family of densities. For $r > 0$, let $B_\Theta(f, r)$ be a Hellinger "ball" in $\Theta$ around $f$ ($f$ may not be in the parametric family) with radius $r$, defined by $B_\Theta(f, r) = \{\theta : \theta \in \Theta,\ d_H^2(f, f_\theta) \leq r^2\}$.

Let $P^*$ denote the outer measure of the probability measure $P$ on the measurable space $(\Omega, \mathcal{G})$ on which $X_1, \ldots, X_n$ are defined. The outer measure is used later for possibly non-measurable sets of interest.

Our asymptotic results rely on an exponential inequality to control the probability of selecting a bad model. The inequality requires a dimensionality assumption on the parametric family. Assumptions of this type were previously used by Le Cam (1973), Birgé (1983) and others.

In our analysis, we consider the sup-norm distance between the logarithms of densities. In this chapter, unless stated otherwise, by a $\delta$-net we mean a $\delta$-net in the sup-norm sense for the logarithms of the densities. That is, for a class of densities $B$, we say a finite collection of densities $F_\delta$ is a $\delta$-net if for any density $f \in B$ there exists $\tilde{f} \in F_\delta$ such that $\|\log f - \log \tilde{f}\|_\infty \leq \delta$. For convenience, the index set of $F_\delta$ might also be called a $\delta$-net.

Assumption 4.0: For a fixed density $f$, there exist constants $A > 0$, $m \geq 1$ and $\rho > 0$ with $\rho < A$ ($A$, $m$, $\rho$ are allowed to depend on $f$) such that for any $r > 0$ and $\delta \leq \rho r$, there exists a $\delta$-net $F_\delta$ for $B_\Theta(f, r)$ satisfying the requirement
$$\mathrm{card}(F_\delta) \leq \Big(\frac{A r}{\delta}\Big)^{m}.$$

Remark: This dimensionality assumption necessarily requires that the densities in the parametric family share the same support. If the support of the true density is not known to us, we might consider families of densities with different supports and let the model selection criterion decide which one has a suitable support for the best estimation of the unknown density.

Lemma 4.0: Assume Assumption 4.0 is satisfied for some $0 < \gamma < \frac{1}{4}$ with $\rho$ no smaller than a corresponding value determined by $\gamma$ (for the choice $\gamma = 0.039$ used in Section 3, the corresponding value is $\rho = 0.0056$). If $t$ exceeds a multiple, depending only on $\gamma$, of $\frac{m}{n}\log\frac{A}{\rho}$, then
$$P^*\Big\{\text{for some } \theta \in \Theta:\ \frac{1}{n}\sum_{i=1}^{n} \log \frac{f_\theta(X_i)}{f(X_i)} \geq -\gamma\, d_H^2(f, f_\theta) + t\Big\} \leq 15.1\, \exp(-c_\gamma\, n t),$$
where $c_\gamma$ is a positive constant depending only on $\gamma$.

Remark: From the proof of Lemma 4.0, it is seen that the requirement in Assumption 4.0 needs only to be checked for $r^2 \geq \frac{4 (\log 2)\, m}{(1-4\gamma)\, n}$.

The proof of Lemma 4.0 is given in Section 5.

4.3 Main results

We consider a list of parametric families of densities $f_k(x, \theta^{(k)})$, $\theta^{(k)} \in \Theta_k$, $k \in \Gamma$, where $\Gamma$ is the collection of the indices of the models. The model list is assumed to be fixed and independent of the sample size unless otherwise stated (e.g., in Subsection 4.4.2). Lemma 4.0 will be used to derive the main theorem with the choice $\gamma = 0.039$, which gives small penalty constants $\lambda_k$ (see Theorem 4.1). The corresponding value of $\rho$ is 0.0056. We use this value in the following assumption.

Assumption 4.1: For a fixed density $f$, for each $k \in \Gamma$, Assumption 4.0 is satisfied with some constants $A_k$, $m_k$ and $\rho \geq 0.0056$.

Assumption 4.1 may look hard to check because of the presence of the unknown function $f$ as the center of the balls $B_{\Theta_k}(f, r)$, but this condition can be replaced by one involving only the operating families in $\Gamma$.

For $\theta_0^{(k)} \in \Theta_k$, consider Hellinger balls centered at the density $f_{k,\theta_0^{(k)}}$ (instead of the true density) in family $k$, defined by
$$B_k(\theta_0^{(k)}, r) = \{\theta^{(k)} : \theta^{(k)} \in \Theta_k,\ d_H^2(f_{k,\theta_0^{(k)}}, f_{k,\theta^{(k)}}) \leq r^2\}.$$

Assumption 4.1′: For each $k \in \Gamma$ and $\theta_0^{(k)} \in \Theta_k$, Assumption 4.0 is satisfied for the density $f_{k,\theta_0^{(k)}}$ with $B_k(\theta_0^{(k)}, r)$ in place of $B_{\Theta_k}(f, r)$, with $\rho \geq 0.0056$ and constants $A_k$, $m_k$ not depending on $\theta_0^{(k)}$.


Lemma 4.1: If Assumption 4.1′ is satisfied with $A_k$, $m_k$ and $\rho$, then Assumption 4.1 is satisfied for any density $f$ with the constants $3A_k$, $m_k$ and $\rho$.

Proof: Fix any $\epsilon > 0$. Let $\theta_0^{(k)} \in \Theta_k$ satisfy $d_H(f, f_{k,\theta_0^{(k)}}) \leq \inf_{\theta^{(k)} \in \Theta_k} d_H(f, f_{k,\theta^{(k)}}) + \frac{\epsilon r}{2}$. Then for any $\theta^{(k)}$ with $d_H^2(f, f_{k,\theta^{(k)}}) \leq r^2$, the triangle inequality gives
$$d_H(f_{k,\theta_0^{(k)}}, f_{k,\theta^{(k)}}) \leq d_H(f, f_{k,\theta^{(k)}}) + d_H(f, f_{k,\theta_0^{(k)}}) \leq (2+\epsilon)\, r,$$
so we have
$$B_{\Theta_k}(f, r) \subset \{\theta^{(k)} \in \Theta_k : d_H(f_{k,\theta_0^{(k)}}, f_{k,\theta^{(k)}}) \leq (2+\epsilon) r\} = B_k(\theta_0^{(k)}, (2+\epsilon) r).$$
Thus if Assumption 4.1′ is satisfied with $A_k$, $m_k$ and $\rho$, then Assumption 4.1 is satisfied with $(2+\epsilon) A_k$, $m_k$ and $\rho$ for any $\epsilon > 0$. For the statement of the lemma, $\epsilon$ is taken to be 1. We note that if $\inf_{\theta^{(k)} \in \Theta_k} d_H(f, f_{k,\theta^{(k)}})$ is achievable for all $k \in \Gamma$, then we may set $\epsilon = 0$ and Assumption 4.1 is satisfied with $2A_k$.

Let
$$V(k, \theta^{(k)}) = -\frac{1}{n}\sum_{i=1}^{n} \log f_k(X_i, \theta^{(k)}) + \frac{\lambda_k m_k}{n} + \frac{\nu C_k}{n},$$
where $\lambda_k$, $\nu$ are nonnegative numbers. Then the model selection criterion we consider is to choose $\hat{k}$ to minimize
$$\mathrm{crit}(k) = V(k, \hat{\theta}^{(k)}), \qquad (4.4)$$
where $\hat{\theta}^{(k)}$ is the maximum likelihood estimator in model $k$. The final density estimator $\hat{f}$ is $\hat{f} = f_{\hat{k},\hat{\theta}^{(\hat{k})}}$, i.e., the maximum likelihood density estimator in the selected model.

Let
$$R_n(k, \theta^{(k)}) = D(f\,\|\,f_{k,\theta^{(k)}}) + \frac{\lambda_k m_k}{n} + \frac{\nu C_k}{n}.$$
Then the index of resolvability is
$$R_n(f) = \inf_{k \in \Gamma,\, \theta^{(k)} \in \Theta_k} R_n(k, \theta^{(k)}).$$
As mentioned before, in a rough sense, $R_n(f)$ characterizes the best trade-off among three sources of discrepancy: the approximation error, the estimation error, and the model complexity.


The asymptotic result we present requires suitable choices of the penalty constants $\lambda_k$ (according to the cardinality constants $A_k$) and $\nu$. Let
$$\lambda(A) = 4.75 \log A + 27.93, \qquad (4.5)$$
$$\nu^* = 9.49.$$

Theorem 4.1: Assume Assumption 4.1 is satisfied. Take $\lambda_k \geq \lambda_k^* = \lambda(A_k)$ and $\nu \geq \nu^*$ in the model selection criterion given in (4.2). Then for the density estimator $f_{\hat{k},\hat{\theta}^{(\hat{k})}}$ we have
$$E\, d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) \leq 2657\, R_n(f),$$
where the resolvability $R_n(f)$ is defined as
$$R_n(f) = \inf_{k \in \Gamma}\Big\{\inf_{\theta^{(k)} \in \Theta_k} D(f\,\|\,f_{k,\theta^{(k)}}) + \frac{\lambda_k m_k}{n} + \frac{\nu C_k}{n}\Big\}. \qquad (4.6)$$
In general, if Assumption 4.1 holds with $\rho$ and $0 < \gamma < \frac{1}{4}$ as in Lemma 4.0, then for $\lambda_k$ exceeding a corresponding multiple of $\log A_k$ and for $\nu$ large enough, the same conclusion holds with 2657 replaced by a constant depending only on $\gamma$. The choice $\gamma = 0.039$ minimizes the constant obtained in our proof.

Corollary 4.1: Under the above conditions,
$$E\, \|f - f_{\hat{k},\hat{\theta}^{(\hat{k})}}\|_{L_1} \leq 104 \sqrt{R_n(f)}.$$
Corollary 4.1 follows from Theorem 4.1 using the familiar relationship between the Hellinger distance and the $L_1$ distance, namely $\|f - g\|_{L_1} \leq 2\, d_H(f, g)$ for densities $f$ and $g$.
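For completeness, this relationship is a one-line consequence of the Cauchy-Schwarz inequality:
$$\|f - g\|_{L_1} = \int |\sqrt{f} - \sqrt{g}|\,(\sqrt{f} + \sqrt{g})\, d\mu \leq d_H(f, g)\Big(\int (\sqrt{f} + \sqrt{g})^2\, d\mu\Big)^{1/2} \leq 2\, d_H(f, g),$$
since $\int (\sqrt{f} + \sqrt{g})^2\, d\mu \leq 2 \int (f + g)\, d\mu = 4$.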

Corollary 4.2: Under the same conditions, $d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}})$ converges in probability at rate $R_n(f)$; that is,
$$d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) = O_P(R_n(f)).$$


Remarks:

1. The resolvability bound in the theorem is valid for any sample size, so the model list $\Gamma$ is allowed to change with the sample size.

2. In $R_n(f)$, the estimation error term is allowed to depend on the dimensionality constant $A_k$, which may not be uniformly bounded over $k \in \Gamma$. For an unknown density function in a class, if the sequence of models $k_n$ minimizing $\inf_{\theta^{(k)} \in \Theta_k} D(f\,\|\,f_{k,\theta^{(k)}}) + \frac{m_k}{n}$ has $A_{k_n}$ bounded, then $R_n(f)$ is asymptotically comparable to $\inf_{k \in \Gamma}\{\inf_{\theta^{(k)} \in \Theta_k} D(f\,\|\,f_{k,\theta^{(k)}}) + \frac{m_k}{n} + \frac{C_k}{n}\}$. If, furthermore, $C_{k_n} = O(m_{k_n})$, then
$$E\, d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) = O\Big(\inf_{k \in \Gamma}\Big\{\inf_{\theta^{(k)} \in \Theta_k} D(f\,\|\,f_{k,\theta^{(k)}}) + \frac{m_k}{n}\Big\}\Big),$$
which often gives the minimax optimal rate of convergence for density functions in many smooth nonparametric density classes. The conditions that $A_{k_n}$ is bounded and $C_{k_n} = O(m_{k_n})$ will be verified in a spline estimation setting in Section 4.

3. For the case when the $m_k$'s are integers for $k \in \Gamma$, one way to assign the complexities is by considering only the number of models for each dimension. Let $N(m) = \mathrm{card}\{k \in \Gamma : m_k = m\}$ be the number of models with dimension $m$. If $N(m) < \infty$, then we may assign the complexity $C_k = \log N(m) + 2\log(m+1)$ for the models with dimension $m$, which corresponds to the strategy of describing $m$ first and then specifying the model among all the models with the same dimension $m$. Then we have
$$E\, d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) \leq 2657 \inf_{k \in \Gamma}\Big\{\inf_{\theta^{(k)} \in \Theta_k} D(f\,\|\,f_{k,\theta^{(k)}}) + \frac{\lambda_k m_k}{n} + \frac{\nu\,(\log N(m_k) + 2\log(m_k+1))}{n}\Big\}.$$
If $N(m)$ grows more slowly than exponentially in $m$, then $\frac{\log N(m) + 2\log(m+1)}{m} \to 0$ as $m \to \infty$; i.e., the complexity is essentially negligible compared to the model dimension, and the complexity part of the penalty term can be ignored in the model selection criterion. However, if there are exponentially many or more models in $\Gamma$, then the complexity term is not negligible compared to $\frac{\lambda_k m_k}{n}$ (for related discussions, see Yang (1993) and Birgé and Massart (1995)).
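A quick numerical check of this dimension-based assignment (a sketch; the counts $N(m)$ below are hypothetical):

    import numpy as np

    def complexity(m, N_m):
        # C_k = log N(m) + 2 log(m + 1) for a model of dimension m,
        # where N(m) is the number of candidate models of that dimension.
        return np.log(N_m) + 2.0 * np.log(m + 1.0)

    # Kraft-type check: sum_k e^{-C_k} = sum_m N(m) / (N(m) (m+1)^2)
    #                                  = sum_m (m+1)^{-2} <= pi^2/6 - 1 < 1.
    dims = {1: 3, 2: 10, 3: 40}        # hypothetical counts N(m)
    total = sum(N * np.exp(-complexity(m, N)) for m, N in dims.items())
    assert total < 1.0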


Proof of Theorem 4.1: Clearly the working criterion is theoretically equivalent to selecting $k$ and $\theta^{(k)}$ to minimize
$$V(k, \theta^{(k)}) = \frac{1}{n}\sum_{i=1}^{n} \log \frac{f(X_i)}{f_k(X_i, \theta^{(k)})} + \frac{\lambda_k m_k}{n} + \frac{\nu C_k}{n},$$
obtained by adding $\frac{1}{n}\sum_{i=1}^{n} \log f(X_i)$ (which does not depend on $k$) to each criterion value. In our analysis, we concentrate on this theoretically equivalent criterion and relate it to the resolvability. Indeed, for each fixed family $k$, we show that $V(k, \theta^{(k)}) \geq \gamma\, d_H^2(f, f_{k,\theta^{(k)}})$ for all $\theta^{(k)}$ except on a set of small probability. The probability bound is then summed over $k$ to obtain a corresponding bound uniformly over all the models.

For a fixed $k$, let
$$L_n(k, \theta^{(k)}) = \frac{1}{n}\sum_{i=1}^{n} \log \frac{f_k(X_i, \theta^{(k)})}{f(X_i)}.$$
Then
$$P^*\{\text{for some } \theta^{(k)} \in \Theta_k,\ V(k, \theta^{(k)}) < \gamma\, d_H^2(f, f_{k,\theta^{(k)}}) - t\}$$
$$= P^*\Big\{\text{for some } \theta^{(k)} \in \Theta_k,\ L_n(k, \theta^{(k)}) > \frac{\lambda_k m_k + \nu C_k}{n} + t - \gamma\, d_H^2(f, f_{k,\theta^{(k)}})\Big\}.$$
Since $\lambda_k \geq \lambda_k^*$, Lemma 4.0 (applied with $t$ replaced by $\frac{\lambda_k m_k + \nu C_k}{n} + t$) bounds this probability by $15.1 \exp(-c_\gamma(\lambda_k m_k + \nu C_k + n t))$.

Now sum over $k \in \Gamma$:
$$q_n(t) := P^*\{\text{for some } k \in \Gamma \text{ and } \theta^{(k)} \in \Theta_k,\ V(k, \theta^{(k)}) < \gamma\, d_H^2(f, f_{k,\theta^{(k)}}) - t\} \leq 15.1 \sum_{k \in \Gamma} \exp\big(-c_\gamma(\lambda_k m_k + \nu C_k + n t)\big).$$
Using the conditions on $\lambda_k$ and $\nu$ together with $\sum_{k \in \Gamma} e^{-C_k} \leq 1$, the sum is bounded by $10.7\, e^{-c_\gamma n t}$. For the expectation bound, it is helpful to bound the integral of the tail probability $q_n(t)$: from the above,
$$\int_0^\infty q_n(t)\, dt \leq \frac{\text{const}}{n}.$$

The above bound shows it is unlikely that the criterion values are much smaller than $\gamma\, d_H^2(f, f_{k,\theta^{(k)}})$ for any $k$, $\theta^{(k)}$, and that the integral of the tail probability is bounded. To obtain the conclusion of the theorem, we next show that the criterion values for a sequence of nearly best choices of $k$, $\theta^{(k)}$ are not much greater than $R_n(f)$.

Assume $R_n(f) < \infty$ (otherwise the conclusion of the theorem is trivially true). Let $(k_n, \theta^{(k_n)})$ be a choice such that $R_n(k_n, \theta^{(k_n)}) \leq (1+\epsilon) R_n(f)$ for some positive constant $\epsilon$. (If a minimizer of $R_n(k, \theta^{(k)})$ exists, we may take $k_n$, $\theta^{(k_n)}$ to achieve the minimum.) For simplicity, write $L_n$ for $\frac{1}{n}\sum_{i=1}^{n}\log\frac{f(X_i)}{f_{k_n}(X_i, \theta^{(k_n)})}$, whose mean is $D(f\,\|\,f_{k_n,\theta^{(k_n)}})$. Then for $t$ greater than a suitable constant $t_0$ we have
$$p_n(t) := P\{V(k_n, \theta^{(k_n)}) > t\, R_n(k_n, \theta^{(k_n)})\} \leq P\{L_n > \tfrac{t}{2}\, R_n(k_n, \theta^{(k_n)})\},$$
where we use the fact that $\lambda_k m_k \geq \lambda_k^* m_k$ is bounded away from 0 for all $k \in \Gamma$, so that $n R_n(k_n, \theta^{(k_n)})$ is bounded below by a positive constant. Let $\tilde{L}_n = \frac{2 L_n}{R_n(k_n, \theta^{(k_n)})}$ and $S_t = \{\tilde{L}_n > t\}$.

Then for $t > t_0$, $\{L_n > \tfrac{t}{2} R_n(k_n, \theta^{(k_n)})\} = S_t$, and
$$\int_{t_0}^{\infty} p_n(t)\, dt \leq \int_{t_0}^{\infty} E\, 1_{S_t}\, dt = E \int_{t_0}^{\infty} 1_{\{\tilde{L}_n > t\}}\, dt \leq E\,(\tilde{L}_n - t_0)^{+}.$$
To bound this tail integral of an expected log-likelihood ratio, we apply Lemma 4.4 in Section 6 with $\alpha^* = 1.07$ and obtain
$$E\,(\tilde{L}_n - t_0)^{+} \leq \frac{2\alpha^* D(f\,\|\,f_{k_n,\theta^{(k_n)}})}{R_n(k_n, \theta^{(k_n)})} - t_0 \leq 2\alpha^* - t_0,$$
since $D(f\,\|\,f_{k_n,\theta^{(k_n)}}) \leq R_n(k_n, \theta^{(k_n)})$.

Now, from the analysis above,
$$\gamma\, d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) \leq V(\hat{k}, \hat{\theta}^{(\hat{k})}) \leq V(k_n, \theta^{(k_n)}) \leq t\, R_n(k_n, \theta^{(k_n)}) \leq (1+\epsilon)\, t\, R_n(f),$$
with exception probability no bigger than $q_n(t) + p_n(t)$. That is, for
$$Z = \frac{d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}})}{\gamma\,(1+\epsilon)\, R_n(f)},$$
we have $P\{Z > t\} \leq q_n(t) + p_n(t)$, and hence
$$E Z = \int_0^{\infty} P\{Z > t\}\, dt \leq \int_0^{\infty} q_n(t)\, dt + \int_0^{t_0} p_n(t)\, dt + \int_{t_0}^{\infty} p_n(t)\, dt \leq \int_0^{\infty} q_n(t)\, dt + t_0 + (2\alpha^* - t_0).$$
Because $\epsilon > 0$ is arbitrary, letting $\epsilon \to 0$ and collecting the constants (which for $\gamma = 0.039$ amount to 2657), we conclude that
$$E\, d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) \leq 2657\, R_n(f).$$
This completes the proof of Theorem 4.1.

Remark: In the proof of the theorem, for the (nearly) best models we use only the fact that $D(f\,\|\,f_{k_n,\theta^{(k_n)}})$ is finite. In many cases $\|\log \frac{f}{f_{k_n,\theta^{(k_n)}}}\|_\infty$ is bounded; we can then use Hoeffding's inequality to obtain an exponential bound on the tail probability for these models, and it can be shown that $d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}})$ is bounded by $R_n(f)$ in all moments, i.e.,
$$E\, d_H^{2j}(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) = O\big(R_n^{j}(f)\big)$$
for all $j > 0$.

The criteria in (4.2) yield a criterion very similar to the familiar MDL criterion when applied to a sequence of candidate densities. Suppose we have a countable collection of densities $q \in \Gamma_n$. The description lengths of the indices are $L(q)$, satisfying Kraft's inequality $\sum_{q \in \Gamma_n} e^{-L(q)} \leq 1$. Treating each density in $\Gamma_n$ as a model, Assumption 4.0 is satisfied with $A_k = 1$, $m_k = 1$, and $\rho = 1$. Thus $\lambda_k^* m_k$ is a constant independent of $k$. Therefore, when taking $\nu = \nu^*$, the criterion in (4.2) is equivalent to minimizing
$$-\sum_{i=1}^{n} \log q(X_i) + \nu^* L(q)$$
over $q \in \Gamma_n$. This criterion differs from the MDL criterion only in that $\nu^* \neq 1$. The corresponding resolvability given in our expression (4.6) is essentially the same as the resolvability $\inf_{q \in \Gamma_n}\{D(f\,\|\,q) + \frac{L(q)}{n}\}$ considered by Barron and Cover (1991).


4.4 Applications

4.4.1 Sequences of exponential families

As an application of the theorem developed in Section 3, we consider estimating an unknown density by sequences of exponential models. The log-density is modeled by sequences of finite-dimensional linear spaces of functions.

Localized basis

Let $S_j$, $j \in J$ ($J$ is an index set), be linear function spaces on $[0,1]^d$. Assume for each $j \in J$ there is a basis $\varphi_{j,1}(x), \varphi_{j,2}(x), \ldots, \varphi_{j,m_j}(x)$ for $S_j$ satisfying the following two conditions with constants $T_1$ and $T_2$ not depending on $j$:
$$\Big\|\sum_{i=1}^{m_j} \theta_i \varphi_{j,i}(x)\Big\|_\infty \leq T_1 \max_{1\leq i\leq m_j} |\theta_i|, \qquad (4.7)$$
$$\Big\|\sum_{i=1}^{m_j} \theta_i \varphi_{j,i}(x)\Big\|_2^2 \geq \frac{T_2}{m_j} \sum_{i=1}^{m_j} \theta_i^2. \qquad (4.8)$$
Here $\|\cdot\|_\infty$ and $\|\cdot\|_2$ denote the sup-norm and the $L_2$-norm, respectively. The first condition is satisfied by localized bases. The second is part of the requirement that $\varphi_{j,1}(x), \varphi_{j,2}(x), \ldots, \varphi_{j,m_j}(x)$ forms a frame (see, e.g., Chui (1991), Chapter 3); the other half of the frame property can be used to bound the approximation error. It is assumed that $1 \in S_j$.

For each $S_j$, consider the following family of densities with respect to the Lebesgue measure $\mu$:
$$f_j(x, \theta) = \exp\Big(\sum_{i=1}^{m_j} \theta_i \varphi_{j,i}(x) - \psi_j(\theta)\Big),$$
where $\psi_j(\theta) = \log \int \exp(\sum_{i=1}^{m_j} \theta_i \varphi_{j,i}(x))\, d\mu$ is the normalizing constant. If there is no restriction on the parameters $(\theta_1, \ldots, \theta_{m_j})$, the above parametrization is not identifiable. Since the interest is in the risk of density estimation rather than parameter estimation, identifiability is not an issue here. The model selection criterion will be used to choose an appropriate model.

To apply the results in Section 3, the models need to satisfy the cardinality assumption. For that purpose, we cannot directly use the natural parameter space $\mathbb{R}^{m_j}$. Instead, we consider a sequence of compact parameter spaces
$$\Theta_{j,L} = \{\theta \in \mathbb{R}^{m_j} : \|\log f_j(\cdot, \theta)\|_\infty \leq L\},$$
where $L$ takes positive integer values. We treat each choice of $\Theta_{j,L}$ as a model. The following lemma gives upper bounds on the cardinality constants $A_{j,L}$.

Lemma 4.2: There exists a constant $A(L, T_1, T_2) = 19.28\, \frac{T_1}{\sqrt{T_2}}\,(L+1)e^{L} + 0.06$ such that Assumption 4.1′ is satisfied with $A_{j,L} = A(L, T_1, T_2)$ and $m_{j,L} = m_j$.

Note that in Lemma 4.2, $A(L, T_1, T_2)$ does not depend on the number of parameters $m_j$ in the models, so the $A_{j,L}$ remain bounded for any fixed $L$. The proof of Lemma 4.2 is provided in Section 5.

In practice, we might consider many different "types" of localized bases, each satisfying (4.7) and (4.8). For example, splines of different orders are useful when the smoothness of the true function is unknown. For such cases, the constants $T_{q,1}$ and $T_{q,2}$ may not be bounded over all the types $q$ considered, which leads to unboundedness of the $A_{q,j,L}$'s. It is hoped that, through the use of the model selection criterion, good values of $q$, $j$, and $L$ will be chosen with the corresponding penalty constants $\lambda_k$ bounded, so that the optimal rate of convergence can be achieved.

Assume for each $q$ in an index set $Q$ we have a collection of models $J_q$ satisfying conditions (4.7) and (4.8) with constants $T_{q,1}$ and $T_{q,2}$. Let $k = (j, q, L)$ index the models $f_j(x, \theta)$, $\theta \in \Theta_{j,L}$, $j \in J_q$, $q \in Q$, and let $\Gamma$ be the collection of the indices $k$. Let $C_k$, $k \in \Gamma$, be complexities assigned to the models in $\Gamma$ satisfying $\sum_{k \in \Gamma} e^{-C_k} \leq 1$.

Let $\lambda^*(q, L) = \lambda(2A(L, T_{q,1}, T_{q,2}))$. Let $\hat{k}$ be the model minimizing
$$-\sum_{i=1}^{n} \log f_k(X_i, \hat{\theta}^{(k)}) + \lambda^*(q, L)\, m_j + 9.49\, C_k. \qquad (4.9)$$
Then from Theorem 4.1 we have the following conclusion.

Corollary 4.3: For localized-basis models for the log-density satisfying conditions (4.7) and (4.8), for any underlying density $f$,
$$E\, d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) \leq 2657\, R_n(f),$$
where
$$R_n(f) = \inf_{L \geq 1} \inf_{q \in Q} \inf_{j \in J_q}\Big\{\inf_{\theta \in \Theta_{j,L}} D(f\,\|\,f_{j,\theta}) + \frac{\lambda^*(q,L)\, m_j}{n} + \frac{9.49\, C_k}{n}\Big\}.$$


Corollary 4.3 can yield minimax optimal rates of convergence simultaneously for many nonparametric classes of densities when the sup-norms of the log-densities in each class are uniformly bounded (with the bound possibly unknown) and the log-densities in each class can be well approximated by the models $j \in J_{q^*}$ for some fixed $q^*$. For such a class of densities, when $L$ is sufficiently large, a sequence of densities in $\Theta_{j_n,L}$ for some $j_n$ and a fixed $q^*$ achieves the resolvability. With these $L$ and $q^*$, the penalty constants $\lambda_k$ are bounded for the particular sequence of densities. A suitable assignment of the complexities might give $C_k = O(m_k)$; then $R_n(f) = O\big(\inf_{j \in J_{q^*}}\{\inf_{\theta \in \Theta_{j,L}} D(f\,\|\,f_{j,\theta}) + \frac{m_j}{n}\}\big)$, which usually gives the minimax optimal rate of convergence for densities in the class.

Example 4.1: Univariate log-spline models.

Let $S_{m,q}$ ($m \geq q$) be the linear space of splines of order $q$ (piecewise polynomials of degree less than $q$) with $m - q + 2$ equally spaced knots. Let $\varphi_{m,q,1}(x), \varphi_{m,q,2}(x), \ldots, \varphi_{m,q,m}(x)$ be the B-spline basis. Let
$$f_{m,q}(x, \theta) = \exp\Big(\sum_{i=1}^{m} \theta_i \varphi_{m,q,i}(x) - \psi_{m,q}(\theta)\Big),$$
where $\psi_{m,q}(\theta) = \log \int \exp(\sum_{i=1}^{m} \theta_i \varphi_{m,q,i}(x))\, d\mu$. To make the family identifiable, we assume $\sum_{i=1}^{m} \theta_i = 0$. The model selection criterion will be used to choose an appropriate number of knots and spline order $q$.

Consider
$$\Theta_{m,q,L} = \{\theta \in \mathbb{R}^{m} : \|\log f_{m,q}(\cdot, \theta)\|_\infty \leq L\},$$
where $L \geq 1$, $q \geq 1$, $m \geq q$ are integers. Each parameter space $\Theta_{m,q,L}$ corresponds to a model.

The B-spline basis is known to satisfy the two conditions (4.7) and (4.8). In fact, the sup-norm of a spline expressed in B-splines is bounded by the sup-norm of the coefficients (see de Boor (1978, p. 155)); that is,
$$\Big\|\sum_{i=1}^{m} \theta_i \varphi_{m,q,i}\Big\|_\infty \leq \max_{1\leq i\leq m} |\theta_i|.$$
The second requirement follows from the frame property of the B-splines. From (12) of Stone (1986),
$$\int \Big(\sum_{i=1}^{m} \beta_i \varphi_{m,q,i}(x)\Big)^2 dx \geq \frac{\gamma_q}{m} \sum_{i=1}^{m} \beta_i^2$$
for some constant $\gamma_q$ depending only on $q$. Thus the two requirements are satisfied with $T_{q,1} = 1$ and $T_{q,2} = \gamma_q$. Therefore, Corollary 4.3 is applicable to the log-spline models.

Let us index our models by $k = (m, q, L)$. We specify the model complexity in a natural way to describe the index as follows:

1. describe $L$ using $\log_2^* L$ bits;

2. describe $q$ using $\log_2^* q$ bits;

3. describe $m$ using $\log_2^* m$ bits,

where the function $\log^*$ is defined by $\log^* i = \log(i+1) + 2 \log \log(i+1)$ for $i > 0$ (and $\log_2^*$ is the same expression with base-2 logarithms). Then the total number of bits needed to describe $k$ is $\log_2^* L + \log_2^* q + \log_2^* m$. Thus a natural choice of $C_k$ is $C_k = \log^* L + \log^* q + \log^* m$.
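In code, this choice of $C_k$ is immediate (a sketch; `log_star` implements the definition of $\log^*$ above with natural logarithms, and the names are illustrative):

    import numpy as np

    def log_star(i):
        # log* i = log(i + 1) + 2 log log(i + 1), as defined in the text.
        return np.log(i + 1.0) + 2.0 * np.log(np.log(i + 1.0))

    def spline_complexity(m, q, L):
        # C_k for the log-spline model indexed by k = (m, q, L).
        return log_star(L) + log_star(q) + log_star(m)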

Assume the logarithm of the target density belongs to $W_2^{s^*}(U^*)$ for some $s^* \geq 1$ and $U^* > 0$, where $W_2^s(U)$ is the Sobolev space of functions $g$ on $[0,1]$ for which $g^{(s-1)}$ is absolutely continuous and $\int (g^{(s)}(x))^2 dx \leq U$. The parameters $s^*$ and $U^*$ are not known.

Corollary 4.4: Let $\hat{f} = f_{\hat{k},\hat{\theta}^{(\hat{k})}}$ be the density estimator with $\hat{k}$ selected by the criterion in (4.9) with $\lambda^*(q, L) = 42.0 + 4.75 \log\big(2\gamma_q^{-1/2}(L+1)e^{L}\big)$. Then for any $f$ with $\log f \in W_2^{s^*}(U^*)$,
$$E\, d_H^2(f, \hat{f}) = O\big(n^{-2s^*/(2s^*+1)}\big).$$
This corollary guarantees the optimal rate of convergence for densities with logarithms in Sobolev balls without knowing $U$ and $s$ in advance. It shows that with a good model selection criterion we can perform asymptotically as well as if we knew the smoothness parameters. It demonstrates the success of a completely data-driven strategy for nonparametric density estimation.

Proof of Corollary 4.4: We examine the resolvability bounds for the classes of density functions considered. To do so, we need to upper-bound the approximation error for a good sequence of models. By Theorem 5.2 and Theorem 2.1 of de Boor and Fix (1973), for $\log f \in W_2^{s^*}(U^*)$ and for each $m \geq s^*$, there exists $g(x) = \sum_{i=1}^{m} \beta_i \varphi_{m,s^*,i}(x)$ such that
$$\|\log f - g\|_\infty \leq K (m - s^* + 2)^{-(s^*-\frac12)} \|(\log f)^{(s^*)}\|_2, \qquad \|\log f - g\|_2 \leq \bar{K} (m - s^* + 2)^{-s^*} \|(\log f)^{(s^*)}\|_2,$$
where $K$ and $\bar{K}$ are absolute constants. By Lemma 4.6 in Section 6,
$$\Big|\log \int e^{g}\, d\mu\Big| = \Big|\log \int f\, d\mu - \log \int e^{g}\, d\mu\Big| \leq \|\log f - g\|_\infty.$$
Let $\bar{g} = g - \log \int e^{g}\, d\mu$ be the normalized log-density obtained from $g$. Then
$$\|\log f - \bar{g}\|_\infty \leq \|\log f - g\|_\infty + \Big|\log \int e^{g}\, d\mu\Big| \leq 2\, \|\log f - g\|_\infty.$$
Therefore
$$\|\bar{g}\|_\infty \leq \|\log f\|_\infty + 2\|\log f - g\|_\infty \leq \|\log f\|_\infty + 2K (m - s^* + 2)^{-(s^*-\frac12)} \|(\log f)^{(s^*)}\|_2.$$
For the relative entropy approximation error, from Lemma 1 in B & S,
$$D(f\,\|\,e^{\bar{g}}) \leq \tfrac{1}{2} e^{\|\log f - \bar{g}\|_\infty}\, \|f\|_\infty\, \|\log f - \bar{g}\|_2^2 \leq \text{const} \cdot (m - s^* + 2)^{-2s^*},$$
where the constants depend only on $s^*$, $\|\log f\|_\infty$ and $\|(\log f)^{(s^*)}\|_2$. Take $L_m = \lceil \|\log f\|_\infty + 2K (m - s^* + 2)^{-(s^*-\frac12)} \|(\log f)^{(s^*)}\|_2 \rceil$ (bounded for $\log f \in W_2^{s^*}(U^*)$); then the $\lambda^*(s^*, L_m)$ are bounded. Note also that $C_k$ is asymptotically negligible compared to $m$. Thus
$$R_n(f) \leq \text{const} \cdot \Big((m - s^* + 2)^{-2s^*} + \frac{m}{n}\Big).$$
Optimizing over $m$, we obtain the conclusion with the choice of $m$ of order $n^{1/(2s^*+1)}$.

General linear spaces

Unlike localized bases satisfying (4.7) and (4.8), general bases are not as well handled by the present theory. Here we show that a logarithmic factor arises in both the penalty term and the bound on the convergence rate for polynomial and trigonometric bases.

Let $S_j$, $j \in J$, be general linear function spaces on $[0,1]^d$ spanned by a bounded and linearly independent (under the $L_2$ norm) basis $1, \varphi_{j,1}(x), \ldots, \varphi_{j,m_j}(x)$. The finite-dimensional families we consider are
$$f_j(x, \theta) = \exp\Big(\sum_{i=1}^{m_j} \theta_i \varphi_{j,i}(x) - \psi_j(\theta)\Big), \quad j \in J,$$


where $\psi_j(\theta) = \log \int \exp(\sum_{i=1}^{m_j} \theta_i \varphi_{j,i}(x))\, d\mu$ is the normalizing constant.

In B & S, the supremum of the ratio of the sup-norm to the $L_2$-norm for functions in $S_j$ plays an important role in the analysis. For general linear spaces, we also consider this ratio. The linear spaces $S_j$, $j \in J$, that we consider have the property that for each $j$ there exists a positive constant $K_j$ such that
$$\|h\|_\infty \leq K_j\, \|h\|_2 \qquad (4.10)$$
for all $h \in S_j$. This property follows from the boundedness and linear independence (under the $L_2$-norm) of the basis.

For the same reason as in the previous subsection, we break the natural parameter space into an increasing sequence of compact spaces
$$\Theta_{j,L} = \{\theta \in \mathbb{R}^{m_j} : \|\log f_j(\cdot, \theta)\|_\infty \leq L\}, \quad L \geq 1,$$
and treat each of them as a model. Then for each $j$ we have a sequence of models $f_j(x, \theta)$, $\theta \in \Theta_{j,L}$, $L \geq 1$. We index the new models by $k = (j, L)$ and let $\Gamma$ be the collection of the $k$'s.

Lemma 4.3: For each model $k = (j, L)$, Assumption 4.1′ is satisfied with $A_{j,L} = 19.28\, K_j (1 + L) e^{L} + 0.06$ and $m_{j,L} = m_j + 1$.

The proof of this lemma is in Section 5.

If an upper bound on $\|\log f\|_\infty$ is known in advance, then for each $j$ we can consider only $L = \lceil \|\log f\|_\infty \rceil$; then, from the remark following Theorem 4.1, the model complexity can be ignored. However, when $\|\log f\|_\infty$ is unknown, we would like to consider all integer values of $L$, so for each model size we have countably many models. To control the selection bias, we consider the model complexity.

Let $C_k$, $k \in \Gamma$, be any model complexities satisfying $\sum_k e^{-C_k} \leq 1$. Let $\lambda_{j,L}^* = \lambda(2A_{j,L}) = 42.0 + 4.75 \log\big(K_j(1+L)e^{L}\big)$. Let $\hat{k}$ be the model minimizing
$$-\sum_{i=1}^{n} \log f_k(X_i, \hat{\theta}^{(k)}) + \lambda^*_{j,L}\, m_k + 9.49\, C_k.$$
Since the conditions of Theorem 4.1 are satisfied, we have the following result on model selection for a sequence of exponential families with a general linear basis.


Corollary 4.5: For the log-density models with basis satisfying (4.10), for any underlying density $f$,
$$E\, d_H^2(f, f_{\hat{k},\hat{\theta}^{(\hat{k})}}) \leq 2657\, R_n(f),$$
where
$$R_n(f) = \inf_{L \geq 1} \inf_{j \in J}\Big\{\inf_{\theta \in \Theta_{j,L}} D(f\,\|\,f_{k,\theta}) + \frac{\lambda^*_{j,L}(m_j+1)}{n} + \frac{9.49\, C_k}{n}\Big\}.$$
To apply the corollary to a density class, the approximation error $\inf_{\theta \in \Theta_{j,L}} D(f\,\|\,f_{k,\theta})$ should be examined; the resolvability is then determined.

Example 4.2: Polynomial case.

Let $S_j = \mathrm{span}\{1, x, x^2, \ldots, x^j\}$, $j \geq 1$. Then $m_j = j$. From Lemma 6 in B & S, $K_j = j + 1$. It follows from Lemma 4.3 that $\lambda^*_{j,L} = 42.0 + 4.75 \log\big((j+1)(L+1)e^{L}\big)$. Take $C_k = \log^* L + \log^* j$. For densities with logarithms in each of the Sobolev spaces $W_2^s(U)$, $s \geq 1$ and $U > 0$, when $L$ is large enough, say $L \geq L^*$ (depending on $U$ and $s$), the relative entropy approximation error of model $(j, L)$ is bounded by $\text{const} \cdot j^{-2s}$ (the examination of the relative entropy approximation error is very similar to that in Example 4.1 in the previous subsection; for details on $L_2$ and $L_\infty$ error bounds for polynomial approximation, see Section 7 of B & S). Thus
$$\inf_{\theta \in \Theta_{j,L}} D(f\,\|\,f_{k,\theta}) + \frac{\lambda^*_{j,L}\, m_j}{n} \leq \text{const} \cdot \Big(j^{-2s} + \frac{j \log j}{n}\Big).$$
Optimizing over $j$, we obtain
$$R_n(f) \leq \text{const} \cdot \Big(\frac{\log n}{n}\Big)^{2s/(2s+1)}$$
(since the infimum produces a value at least this small at $j = (n/\log n)^{1/(2s+1)}$ and $L = L^*$). Therefore, the statistical risks of the density estimators based on the polynomial basis (without knowing the parameters $s$ and $U$ in advance) are within a logarithmic factor $\log n$ of the minimax risks.

Example 4.3: Trigonometric case.

Let $S_j = \mathrm{span}\{1, \sqrt{2}\cos(2\pi x), \sqrt{2}\sin(2\pi x), \ldots, \sqrt{2}\cos(2\pi j x), \sqrt{2}\sin(2\pi j x)\}$, $j \geq 1$. Then $m_j = 2j$. From (7.6) in B & S, $K_j = \sqrt{2j + 1}$. Again by examining the resolvability (for $L_2$ and $L_\infty$ error bounds for trigonometric approximation, see Section 7 of B & S), the same convergence rates as those using polynomial bases can be shown for densities with logarithms in the Sobolev spaces and satisfying certain boundary conditions.


The risk bounds derived here using the non-localized polynomial or trigonometric bases have an extra $\log n$ factor compared with the minimax risk. The extra factor arises because the penalty coefficient $\lambda^*_{j,L}$ in the criterion is of order $\log j$ in both cases. Recently, Birgé and Massart (1995) used a theorem of Talagrand (1994) to show that if $K_j \leq \text{const}\sqrt{m_j}$, then their penalized projection estimator (PPE) with the bias-correction penalty term $\text{const} \cdot \frac{m_j}{n}$ converges at the optimal rate. This result is applicable to the trigonometric basis, but not to the polynomial basis. Their argument can also be used for log-density estimation by maximum likelihood with the trigonometric basis to derive a criterion giving the optimal convergence rate.

4.4.2 Neural network models

Let $f(x)$ be an unknown density function on $[-\frac{1}{2}, \frac{1}{2}]^d$ with respect to Lebesgue measure. Traditional methods of density estimation often fail when $d$ is moderately large, due to the "curse of dimensionality." Neural network models have been shown to be promising in some statistical applications. Here we consider the estimation of the logarithm of the density, $\log f$, by neural nets.

We approximate $g(x) = \log f(x)$ using feedforward neural network models with one layer of sigmoidal nonlinearities, which have the following form:
$$g_k(x, \theta) = \sum_{j=1}^{k} \eta_j \phi(a_j^{T} x + b_j) + \eta_0.$$
The function is parametrized by $\theta$, consisting of $a_j \in \mathbb{R}^d$, $b_j, \eta_j \in \mathbb{R}$, for $j = 1, 2, \ldots, k$. The normalizing constant $\eta_0$ is $\eta_0 = -\log \int \exp\{\sum_{j=1}^{k} \eta_j \phi(a_j^{T} x + b_j)\}\, dx$. The integer $k \geq 1$ is the number of nodes (or hidden units). Here $\phi$ is a given sigmoidal function with $\|\phi\|_\infty \leq 1$, $\lim_{z\to\infty} \phi(z) = 1$ and $\lim_{z\to-\infty} \phi(z) = 0$. Assume also that $\phi$ satisfies a Lipschitz condition $|\phi(z_1) - \phi(z_2)| \leq v_1 |z_1 - z_2|$, $z_1, z_2 \in \mathbb{R}$, for some constant $v_1 > 0$. Let $v = \max(v_1, 1)$. Let
$$f_k(x, \theta) = \exp\{g_k(x, \theta)\} = \exp\Big\{\sum_{j=1}^{k} \eta_j \phi(a_j^{T} x + b_j) + \eta_0\Big\}$$
be the approximating families. The parameter $\theta$ will be estimated, and the number of nodes will be automatically selected based on the sample.
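A minimal sketch of this approximating family for $d = 1$ (the logistic sigmoid and the grid-based quadrature are illustrative choices, not prescribed by the text; `a`, `b`, `eta` are arrays of length $k$):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def log_density_nn(x, a, b, eta, grid=np.linspace(-0.5, 0.5, 2001)):
        # g_k(x, theta) = sum_j eta_j * phi(a_j x + b_j) + eta_0, with eta_0
        # fixed by normalization over [-1/2, 1/2] (d = 1 for simplicity).
        def g_unnormalized(t):
            # rows: nodes j; columns: evaluation points t
            return sigmoid(np.outer(a, t) + b[:, None]).T @ eta
        eta0 = -np.log(np.trapz(np.exp(g_unnormalized(grid)), grid))
        return g_unnormalized(np.atleast_1d(x)) + eta0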


The target class we are interested in here was previously studied by Barron (1993, 1994) and Modha and Masry (1994). The log-density $g(x)$ is assumed to have a Fourier representation of the form $g(x) = \int_{\mathbb{R}^d} e^{i\omega^{T} x}\, \tilde{g}(\omega)\, d\omega$. Let $a_g = \int |\omega|_1 |\tilde{g}(\omega)|\, d\omega$, where $|\omega|_1 = \sum_{i=1}^{d} |\omega_i|$ is the $\ell_1$ norm of $\omega$ in $\mathbb{R}^d$. For the target density, we assume $a_g \leq \sigma$. Recent work of Barron (1994) gives nice approximation bounds using the network models for the class of functions with $a_g$ bounded, and the bounds are applied to obtain good convergence rates for nonparametric regression. Modha and Masry prove similar convergence results for density estimation. In these works, the parameter spaces are discretized. We here intend to obtain similar conclusions without discretization.

Consider the parameter space
$$\Theta_{k,\tau_k,\sigma} = \Big\{\theta : \max_j |a_j|_1 \leq \tau_k,\ \max_j |b_j| \leq \tau_k,\ \sum_{j=1}^{k} |\eta_j| \leq 2\sigma\Big\}.$$
The constant $\tau_k$ is chosen such that
$$\mathrm{dis}(\phi_{\tau_k}, \mathrm{sgn}) := \inf_{0 < \varepsilon \leq 2}\Big(2\varepsilon + \sup_{|z| \geq \varepsilon} |\phi(\tau_k z) - \mathrm{sgn}(z)|\Big) \leq \frac{1}{\sqrt{k}}.$$
The compact parameter spaces are used so that the cardinality assumption is satisfied. From Theorem 3 in Barron (1993), for a log-density $g$ with $a_g \leq \sigma$, there exists a $\theta \in \Theta_{k,\tau_k,\sigma}$ such that
$$\|g - g_{k,\theta}\|_{[-\frac12,\frac12]^d} \leq \frac{4\sigma}{\sqrt{k}}, \qquad (4.11)$$
where $\|\cdot\|_{[-\frac12,\frac12]^d}$ denotes the $L_2$-norm for functions defined on $[-\frac{1}{2}, \frac{1}{2}]^d$.

For simplicity, for the target density class, the upper bound $\sigma$ on $a_g$ is assumed to be known (otherwise an increasing sequence of values can be considered and the model selection criterion can choose a suitable one).

Now we want to show that Assumption 4.1′ is satisfied for these models. For any $\varepsilon > 0$, from the proof of Lemma 6 in Barron (1994), there exists a finite set $\tilde{\Theta}_{k,\varepsilon,\tau_k,\sigma}$ such that for any $\theta \in \Theta_{k,\tau_k,\sigma}$ there is $\tilde{\theta} \in \tilde{\Theta}_{k,\varepsilon,\tau_k,\sigma}$ satisfying $\|g_k(x, \theta) - g_k(x, \tilde{\theta})\|_\infty \leq 8 v \sigma \tau_k \varepsilon$. Take $\varepsilon = \frac{\delta}{8 v \sigma \tau_k}$. Because $B_k(\theta^*, r) = \{\theta \in \Theta_{k,\tau_k,\sigma} : d_H^2(f_{k,\theta^*}, f_{k,\theta}) \leq r^2\} \subset \Theta_{k,\tau_k,\sigma}$, for $\delta \leq \rho r$ we have a $\delta$-net in $B_k(\theta^*, r)$ with cardinality bounded by
$$\Big(\frac{2e(8 v \sigma \tau_k + \rho r)}{\delta}\Big)^{k(d+1)} \Big(\frac{2(8 v \sigma + \rho r)}{\delta}\Big)^{k}.$$


Notice that Assumption 4.0 needs only to be checked for $r^2 \geq \frac{4(\log 2)\, m_k}{(1-4\gamma)\, n}$, where $m_k = kd + 2k + 1$ is the number of parameters. For such $r$, the above quantity is bounded by $(A_{k,n}\, r / \delta)^{m_k}$ with
$$A_{k,n} \leq \text{const} \times \tau_k \sqrt{\frac{n}{m_k}}.$$
Thus Assumption 4.1′ is satisfied with $A_{k,n}$ and $m_k = kd + 2k + 1$.

As shown in Barron (1993), if $\phi(z)$ approaches its limits at least polynomially fast, then there exist constants $\beta_1$ and $\beta_2$ such that $\tau_k \leq \beta_1 k^{\beta_2}$. As a consequence, $A_{k,n} \leq \text{const} \times k^{\beta_2 - \frac{1}{2}} \sqrt{n}$. By Theorem 4.1, when we choose the penalty constants $\lambda_k = \lambda_k^* = \lambda(2A_{k,n})$ and $\nu = 9.49$ in the model selection criterion given in (4.2), the density estimator $\hat{f} = f_{\hat{k},\hat{\theta}^{(\hat{k})}}$ satisfies
$$E\, d_H^2(f, \hat{f}) \leq 2657\, R_n(f),$$
where $R_n(f) = \inf_{k \geq 1}\{\inf_{\theta \in \Theta_{k,\tau_k,\sigma}} D(f\,\|\,f_{k,\theta}) + \frac{\lambda_k^* m_k}{n} + \frac{9.49 \log^* k}{n}\}$.

For the target densities, under the assumption $a_g \leq \sigma$, the log-density is uniformly bounded (see Lemma 5.3 in Modha and Masry (1994)). Indeed, because
$$\|g(x) - g(0)\|_\infty = \Big\|\int_{\mathbb{R}^d} (e^{i\omega^{T} x} - 1)\, \tilde{g}(\omega)\, d\omega\Big\|_\infty \leq \int_{\mathbb{R}^d} |\omega^{T} x|\, |\tilde{g}(\omega)|\, d\omega \leq \frac{1}{2}\int_{\mathbb{R}^d} |\omega|_1 |\tilde{g}(\omega)|\, d\omega \leq \frac{a_g}{2},$$
and $|g(0)| = |-\log \int_{[-\frac12,\frac12]^d} e^{g(x) - g(0)}\, dx| \leq \|g(x) - g(0)\|_\infty \leq \frac{a_g}{2}$, it follows that $\|g(x)\|_\infty \leq \|g(x) - g(0)\|_\infty + |g(0)| \leq a_g$. Thus by Lemma 1 of B & S, for the target densities, $D(f\,\|\,f_{k,\theta}) \leq \text{const}_\sigma \|g - g_{k,\theta}\|_2^2$ for $\theta \in \Theta_{k,\tau_k,\sigma}$. So from (4.11), $\inf_{\theta \in \Theta_{k,\tau_k,\sigma}} D(f\,\|\,f_{k,\theta}) \leq \text{const}_\sigma\, \frac{1}{k}$. Note that $\lambda_k^*$ is of order $\log A_{k,n} = O(\log(\sqrt{n}\, k^{\beta_2 - \frac12})) = O(\log n)$. Therefore
$$E\, d_H^2(f, \hat{f}) = O\Big(\inf_{k \geq 1}\Big\{\frac{1}{k} + \frac{k\, d \log n}{n}\Big\}\Big) = O\Big(\sqrt{\frac{d \log n}{n}}\Big).$$
Note that for the class of functions considered, the exponent in the rate of convergence is independent of the function dimension $d$, as in Barron (1993) and Modha and Masry (1994).

4.4.3 Estimating a not strictly positive density

An unpleasant property of the exponential families, log neural network models, and some other log-density estimation methods is that each density is bounded away from 0 on the whole space $[0,1]^d$. If the support of the true density is only a subset of $[0,1]^d$, the resolvability bounds derived in the sections above are still valid. However, for such densities, the

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 85: Yuhong Yang Thesis - Yale University

approximation capability of the exponential families may be very poor. Here we present a way around this difficulty: we obtain the optimal rates in $L_1$ with localized bases while still using the resolvability for upper bounds.

We use the same idea as in Section 2.3 of Chapter 2 to change the original estimation problem into one that can be handled more easily. In addition to the observed i.i.d. sample $X_1, X_2, \ldots, X_n$ from $f$ with respect to $\mu$ on a compact space $\mathcal{X}$ with $\mu(\mathcal{X}) = 1$, let $Y_1, Y_2, \ldots, Y_n$ be a generated i.i.d. sample (independent of the $X_i$'s) from the uniform distribution on $\mathcal{X}$ (with respect to $\mu$). Let $Z_i$ be $X_i$ or $Y_i$ with probability $(\frac12, \frac12)$, using $V_i \sim \mathrm{Bernoulli}(\frac12)$ independently for $i = 1, \ldots, n$. Then $Z_i$ has density $g(x) = \frac{1}{2}(f + 1)$, which is bounded away from 0. We will first use the exponential models $f_k(x, \theta)$, $\theta \in \Theta_k$, to estimate $g$ and then construct a suitable estimator for $f$.

Let $\hat{g}$ be the density estimator of $g$ based on $Z_1, \ldots, Z_n$ using the criterion in (4.2) from the models in $\Gamma$, which satisfy Assumption 4.1′. When $\lambda_k$ and $\nu$ are chosen large enough, by Corollary 4.1,
$$E\, \|g - \hat{g}\|_{L_1} \leq 104 \sqrt{R_n(g)}.$$
Let $\bar{g}(x) = \hat{g}(x) 1_{\{\hat{g}(x) \geq \frac12\}} + \frac12 1_{\{\hat{g}(x) < \frac12\}}$. Then, because pointwise in $x$, $|g - \bar{g}| \leq |g - \hat{g}|$,
$$E \int |g - \bar{g}|\, d\mu \leq E \int |g - \hat{g}|\, d\mu \leq 104 \sqrt{R_n(g)}.$$
In particular, $E \int \bar{g}\, d\mu - 1 \leq 104 \sqrt{R_n(g)}$. Let
$$\hat{f}_{\mathrm{rand}}(x) = \frac{2\bar{g}(x) - 1}{2\int \bar{g}(x)\, d\mu - 1}.$$
Then $\hat{f}_{\mathrm{rand}}(x)$ is a nonnegative, normalized probability density estimate that depends on $X_1, \ldots, X_n$ and the auxiliary variables $Y_1, \ldots, Y_n$, $V_1, \ldots, V_n$; it is thus a randomized estimator. Now
$$E \int |f(x) - \hat{f}_{\mathrm{rand}}(x)|\, d\mu \leq E \int |f(x) - (2\bar{g}(x) - 1)|\, d\mu + E \int |\hat{f}_{\mathrm{rand}}(x) - (2\bar{g}(x) - 1)|\, d\mu = 2E \int |g - \bar{g}|\, d\mu + 2E\Big(\int \bar{g}(x)\, d\mu - 1\Big) \leq 416 \sqrt{R_n(g)}.$$
Thus, we have the following result.

Theorem 4.2: Let $\hat{f}_{\mathrm{rand}}$ be constructed in the above way with a choice of the penalty constants satisfying $\lambda_k \geq \lambda_k^*$, $\nu \geq 9.49$, $k \in \Gamma$. Then
$$E \int |f(x) - \hat{f}_{\mathrm{rand}}(x)|\, d\mu \leq 416 \sqrt{R_n(g)}.$$


By convexity, a nonrandomized estimator with no larger $L_1$ risk can be obtained.
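A sketch of the randomization device and the back-transformation just described (the estimate of $g$ is assumed given on a grid; names are illustrative, not from the thesis):

    import numpy as np

    rng = np.random.default_rng(0)

    def augmented_sample(x):
        # Z_i = X_i or Y_i ~ Uniform[0,1], each with probability 1/2,
        # so that Z_i has density g = (f + 1) / 2, bounded away from 0.
        y = rng.uniform(size=len(x))
        coin = rng.integers(0, 2, size=len(x)).astype(bool)
        return np.where(coin, x, y)

    def f_from_g(g_hat, grid):
        # Clip the estimate of g below 1/2, map back via f = 2g - 1,
        # and renormalize; g_hat holds values of the estimate on `grid`.
        g_bar = np.maximum(g_hat, 0.5)
        f = 2.0 * g_bar - 1.0
        return f / np.trapz(f, grid)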

Because $g$ is bounded away from 0, it can be better approximated by the exponential families, so $\sqrt{R_n(g)}$ can yield a much faster rate of convergence than $\sqrt{R_n(f)}$. We next give an example showing that for some classes of densities, with these modifications, the modified estimator achieves the optimal rate of convergence.

Example 4.1 (continued): We now assume that $\int (f^{(s^*)}(x))^2 dx < \infty$ for some unknown integer $s^*$. Note that the densities considered here are not necessarily strictly positive on $[0,1]$.

Let $\hat{f}$ be the estimator constructed according to the above procedure. Then we have
$$E \int |f(x) - \hat{f}(x)|\, dx \leq 416 \sqrt{R_n(g)}.$$
From $\int (f^{(s^*)}(x))^2 dx < \infty$, it can be shown that $\int ((\log g)^{(s^*)})^2 dx < \infty$. Then, from the previous result, $R_n(g) = O(n^{-2s^*/(2s^*+1)})$. Thus
$$E \int |f(x) - \hat{f}(x)|\, dx \leq \zeta\, n^{-s^*/(2s^*+1)},$$
where the constant $\zeta$ depends only on $s^*$ and $\int (f^{(s^*)}(x))^2 dx$. Therefore, the density estimator converges in $L_1$-norm to the true density at the optimal rate simultaneously for the classes of densities $G(s, U)$, $s \geq 1$, $U > 0$, where $G(s, U)$ is defined to be the collection of densities with the square-integral of the $s$-th derivative bounded by $U$.

4.4.4 Complete models versus sparse subset models

As in Subsection 4.4.1, we consider the estimation of the log-density $\log f(x)$ on $[0,1]^d$ using a sequence of linear spaces. Traditionally, the linear spaces are chosen by spanning the basis functions in a series expansion (polynomial, trigonometric, spline, etc.) up to certain orders, and a model selection criterion is then used to select the order for good statistical estimation. When the true function is sparse, in the sense that only a small fraction of the basis functions in the linear spaces are needed to provide nearly as good an approximation as that using all the basis functions, a subset model might dramatically outperform the complete models, because excluding many (nearly) unnecessary terms significantly reduces the variability of the function estimate.

For simplicity, assume the linear spaces are nested, i.e., $S_i \subset S_j$ for $i < j$. Let $S_j$ be spanned by a bounded and linearly independent (under the $L_2$ norm) basis $1, \varphi_{j,1}(x), \varphi_{j,2}(x), \ldots, \varphi_{j,L_j}(x)$. Let
$$f_j(x, \theta) = \exp\Big(\sum_{i=1}^{L_j} \theta_i \varphi_{j,i}(x) - \psi_j(\theta)\Big), \quad \theta = (\theta_1, \ldots, \theta_{L_j}) \in \Theta_j,$$
where $\psi_j(\theta) = \log \int \exp(\sum_{i=1}^{L_j} \theta_i \varphi_{j,i}(x))\, d\mu$ is the normalizing constant. Including all of the $L_j$ terms, we have dimension $m_j = L_j$. We call such a model a complete one (with respect to the given linear spaces) because it uses all the $L_j$ basis functions in $S_j$. On the other hand, we can also consider the subset models
$$f_{I_j}(x, \theta) = \exp\Big(\sum_{i \in I_j} \theta_i \varphi_{j,i}(x) - \psi_{I_j}(\theta)\Big), \quad \theta \in \Theta_{I_j},$$
where $\psi_{I_j}(\theta) = \log \int \exp(\sum_{i \in I_j} \theta_i \varphi_{j,i}(x))\, d\mu$ and $I_j \subset \{1, 2, \ldots, L_j\}$ is a subset. We next show the possible advantage of considering these subset models by comparing the resolvability of the complete models with that of the subset models for some classes of densities.

Suppose that Assumption 4.1 is satisfied with dimensionality constant $A_j$ and dimension $L_j$ for the complete models, and with $A_{I_j}$ and $m_{I_j}$ for the subset models, where $m_{I_j} = |I_j|$ is the number of parameters in model $I_j$. We also assume that there exist two positive constants $\beta_1$ and $\beta_2$ such that $A_{I_j} \leq \beta_1 L_j^{\beta_2}$ for all the subset models. To satisfy this requirement, we may need to restrict the parameters to compact spaces $\Theta_{I_j,L} = \{\theta \in \mathbb{R}^{m_{I_j}} : \|\log f_{I_j}(\cdot, \theta)\|_\infty \leq L\}$ for a fixed value $L$. Then, from Lemma 4.3, this condition is satisfied if $K_j$ in (4.10) is bounded by a polynomial in $L_j$, which holds for polynomial, spline, and trigonometric bases. (When $\|\log f\|_\infty < \infty$ but no upper bound on $\|\log f\|_\infty$ is known, increasing sequences of compact parameter spaces could be considered, and the condition could be replaced by $A_{I_j,L} \leq \beta_{1,L} L_j^{\beta_2}$, where $\beta_{1,L}$ is allowed to grow with $L$. Then similar asymptotic results hold.)

For a sequence of positive integers $N_n \uparrow \infty$, let $\Gamma_n = \{j : L_j \leq N_n\}$ and $\tilde{\Gamma}_n = \{(j, I_j) : L_j \leq N_n \text{ and } I_j \subset \{1, 2, \ldots, L_j\}\}$. For each sample size $n$, the list of models we consider is either $\Gamma_n$ (complete models) or $\tilde{\Gamma}_n$ (subset models). In our analysis, we need the condition that $N_n$ grows no faster than polynomially in $n$ to have good control of the model complexities for the subset models. This restriction is quite reasonable, because a model with more parameters than observations usually cannot be estimated well.

For the complete models, the model complexity $C_j$ can be taken as $C_j = \log^* j$. Let $\lambda_j^* = \lambda(A_j)$. Let $\hat{j}$ be the model minimizing the criterion value
$$-\sum_{i=1}^{n} \log f_j(X_i, \hat{\theta}^{(j)}) + \lambda_j^* L_j + 9.49\, C_j$$
over $j \in \Gamma_n$. Then, from Theorem 4.1, the statistical risk of the density estimator $f_{\hat{j},\hat{\theta}^{(\hat{j})}}$ from the selected model $\hat{j}$ under the squared Hellinger loss is bounded by a multiple of the index of resolvability
$$R_n(f) = \inf_{j \in \Gamma_n}\Big\{\inf_{\theta^{(j)} \in \Theta_j} D(f\,\|\,f_{j,\theta^{(j)}}) + \frac{\lambda_j^* L_j}{n} + \frac{9.49 \log^* j}{n}\Big\}.$$
Let $j_n$ be the optimal model, which minimizes $R_n(f)$.

Now consider the subset models. We have exponentially many ($2^{L_j}$, to be exact) subset models from the complete model $j$. To apply the model selection results, we choose an appropriate model complexity. A natural way to describe a subset model is to first describe $j$, then the number of terms $m_{I_j}$ in the model, and finally which of the $\binom{L_j}{m_{I_j}}$ possibilities the model is. This strategy suggests the following choice of complexity:
$$C_{I_j} = \log^* j + \log L_j + \log \binom{L_j}{m_{I_j}}.$$
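A sketch of this subset complexity (the binomial coefficient is evaluated through log-gamma to avoid overflow; names are illustrative):

    import numpy as np
    from math import lgamma

    def log_binom(n, m):
        # log of (n choose m), via log-gamma for numerical stability
        return lgamma(n + 1) - lgamma(m + 1) - lgamma(n - m + 1)

    def log_star(i):
        return np.log(i + 1.0) + 2.0 * np.log(np.log(i + 1.0))

    def subset_complexity(j, L_j, m):
        # C_{I_j} = log* j + log L_j + log binom(L_j, m): describe j, then
        # the subset size m, then which of the (L_j choose m) subsets is used.
        return log_star(j) + np.log(L_j) + log_binom(L_j, m)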

Take $\lambda_{I_j}^* = \lambda(A_{I_j})$. Let $\hat{j}$ and $\hat{I} = I_{\hat{j}}$ be the minimizer of the criterion value
$$-\sum_{i=1}^{n} \log f_{I_j}(X_i, \hat{\theta}^{(I_j)}) + \lambda_{I_j}^* m_{I_j} + 9.49\, C_{I_j}$$
over $(j, I_j) \in \tilde{\Gamma}_n$. Again from Theorem 4.1, the risk of the density estimator resulting from model selection among the subset models is bounded by a multiple of the index of resolvability for the subset models
$$\tilde{R}_n(f) = \inf_{j \in \Gamma_n} \inf_{I_j}\Big\{\inf_{\theta^{(I_j)} \in \Theta_{I_j}} D(f\,\|\,f_{I_j,\theta^{(I_j)}}) + \frac{\lambda_{I_j}^* m_{I_j}}{n} + \frac{9.49\, C_{I_j}}{n}\Big\}.$$

For the subset models, another quantity similar to the above resolvability is of interest. Let
$$r_n(f) = \inf_{j \in \Gamma_n} \inf_{I_j} \max\Big(\inf_{\theta^{(I_j)} \in \Theta_{I_j}} D(f\,\|\,f_{I_j,\theta^{(I_j)}}),\ \frac{m_{I_j}}{n}\Big).$$
Then $r_n(f)$ is roughly the ideal best trade-off between the approximation error and the estimation error among all the subset models. Let $j_n^*$, $I^* = I_{j_n^*}$, and $\theta^* = \theta^{(I^*)}$ be the minimizer of $r_n(f)$. Ideally, we would wish the density estimator to converge at the rate $r_n(f)$. But this may not be possible, because so many models are present that it is too much to hope that the likelihood processes behave well uniformly over all the models. In the next proposition, we compare $R_n(f)$, $\tilde{R}_n(f)$ and $r_n(f)$.

Proposition 4.1:

1. The resolvability for the subset models is asymptotically at least as good as that for the complete models. That is,
$$\limsup_{n \to \infty} \frac{\tilde{R}_n(f)}{R_n(f)} \leq 1. \qquad (4.12)$$

2. Let $N_n \leq n^{\kappa}$ for some positive constant $\kappa$. Then the resolvability for the subset models is within a $\log n$ factor of the ideal convergence rate $r_n(f)$. That is,
$$\tilde{R}_n(f) = O(r_n(f) \log n). \qquad (4.13)$$

3. With the above choice of $N_n$, the improvement of the subset models over the complete models in terms of resolvability is characterized by how small the optimal subset model size is compared with the optimal complete model size, as suggested by the inequality
$$\frac{\tilde{R}_n(f)}{R_n(f)} = O\Big(\frac{m_{I^*}}{L_{j_n}} \log n\Big). \qquad (4.14)$$

The results in the proposition are easily proved. Inequality (4.12) follows because the complete model is included as one of the subset models: indeed, $\tilde{R}_n(f) \leq \inf_{j \in \Gamma_n}\{\inf_{\theta^{(j)} \in \Theta_j} D(f\,\|\,f_{j,\theta^{(j)}}) + \frac{\lambda_j^* L_j}{n} + \frac{9.49(\log^* j + \log L_j)}{n}\}$, and since the logarithmic terms in this case are of smaller order than the $\frac{\lambda_j^* L_j}{n}$ term, it follows that $\limsup_{n\to\infty} \tilde{R}_n(f)/R_n(f) \leq 1$. When the true density is sparse, we have a good chance of obtaining a much more accurate estimate. For (4.13): because $\log \binom{L_j}{m} \leq m \log L_j$ and $L_j \leq n^{\kappa}$, we have $C_{I_j} = O(m_{I_j} \log n)$; since $A_{I_j} \leq \beta_1 L_j^{\beta_2}$, $\lambda_{I_j}^* = O(\log n)$. It follows that $\tilde{R}_n(f) = O(r_n(f) \log n)$. Finally, from (4.13),
$$\frac{\tilde{R}_n(f)}{R_n(f)} = O\Big(\frac{\max\big(\inf_{\theta} D(f\,\|\,f_{I^*,\theta}),\ \frac{m_{I^*}}{n}\big)}{R_n(f)}\, \log n\Big).$$
For the best trade-off, $D(f\,\|\,f_{I^*,\theta^*})$ and $\frac{m_{I^*}}{n}$ are of the same order, so $\frac{\tilde{R}_n(f)}{R_n(f)} = O\big(\frac{m_{I^*}}{L_{j_n}} \log n\big)$. This bound is useful when $\frac{m_{I^*}}{L_{j_n}}$ is of smaller order than $\frac{1}{\log n}$.

The ratio $\alpha_n = \frac{m_{I^*}}{L_{j_n}}$ describes how small the (ideally) optimal subset model size (in the sense of the resolvability) is compared with the optimal size of the complete models. We call it a sparsity index for sample size $n$. The inequality $\frac{\tilde{R}_n(f)}{R_n(f)} = O(\alpha_n \log n)$ shows that, ignoring the logarithmic factor $\log n$, the sparsity index characterizes the improvement of the index-of-resolvability bound using the subset models over the complete models.

Example 4.4: Sparse trigonometric series.

Consider the trigonometric expansion on $[0,1]$. Let
$$E_1 = \Big\{f : \log f \in W_2^{1},\ \log f = \theta_0 + \sum_{j=1}^{\infty} \theta_j \sin(2\pi j^2 x) \text{ for some } \theta\Big\}.$$
Note that some functions in $E_1$ have no more than one derivative, but the functions in $E_1$ are sparse. It can be shown that the resolvability resulting from the complete models is of order $n^{-2/3}$, while the subset models give a resolvability of order $(\frac{\log n}{n})^{4/5}$. The sparsity index is of order $n^{-2/15}$.

The above example is somewhat artificial. If we knew beforehand that only the frequencies at the square numbers are useful, then the good models $\{\sin(2\pi x), \sin(8\pi x), \ldots, \sin(2\pi m^2 x)\}$, $m \geq 1$, could be described much more simply than most subsets of size $m$ out of $m^2$ terms. However, in realistic situations, knowledge of the best subset models is not available, so we have to search over the subset models.

Even for one-dimensional function estimation, sparse subset models turn out to be advantageous in several related settings, such as estimating a function of bounded variation using histograms, and estimating a function in a Besov space using wavelets. For high-dimensional function estimation, there are even more advantages in considering sparse subset models. When the input dimension is large, sparse models such as additive models and low-order interaction models might give good estimates if the true function can be well approximated by them. The complete models, on the other hand, often fail with moderate sample sizes due to the curse of dimensionality. The following example demonstrates the advantage of sparse subset models for high-dimensional density estimation.

E x am p le 4.5: Sparse tensor product series.

Let $\{\varphi_0(x), \varphi_1(x), \varphi_2(x), \ldots\}$ be a bounded orthonormal basis for $L_2[0,1]$. Then the tensor products
\[ \Big\{ \phi_i(x) = \prod_{l=1}^{d} \varphi_{i_l}(x_l) : i = (i_1, \ldots, i_d) \in \{0,1,2,\ldots\}^d \Big\} \]
provide an orthonormal basis for $L_2[0,1]^d$. Let $|i| = \max_l i_l$. The complete models are
\[ f_j(x,\theta) = \exp\Big( \sum_{|i| \le j} \theta_i \phi_i(x) - \psi_j(\theta) \Big), \]
where $\psi_j(\theta) = \log \int \exp\big(\sum_{|i| \le j} \theta_i \phi_i(x)\big)\, d\mu$ and the model dimension is $L_j = j^d$. These models often encounter a great difficulty when the function dimension $d$ is large, because exponentially many coefficients need to be estimated even if $j$ is small. However, when the true function is sparse, good estimates are possible by considering the sparse subset models.

models. The subset models are

/ / . ( x,6) = e x p ( £ 9i<pi{x) - '0 /j-(<?)),ieij

where ^ . ( 0 ) = lo g /e x p ( £ i6/ Qgpi(x))dn and lj C {£ : |£| < j} . Assume Assumption

4.1 is satisfied with Aj < 0\ aud dimension raj = j d for the com[)lete models and

with Afj < ( j dSj 2 and dimension m,ij = \Ij\ for the subset models for some positive

constants 0i and 0 2 (as stated before, satisfaction of this condition may require suitable

compactification of nature parameter spaces).
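To see the curse of dimensionality in the complete tensor-product models, the short sketch below (the helper name is ours) counts the nonconstant multi-indices with $|i| \le j$, which is $(j+1)^d - 1$, of order $j^d$ as stated above.

```python
from itertools import product

def complete_model_size(j, d):
    # Count multi-indices i in {0,...,j}^d with at least one nonzero entry,
    # i.e. the (j+1)^d - 1 nonconstant tensor-product basis functions.
    return sum(1 for i in product(range(j + 1), repeat=d) if any(i))

for d in (2, 5, 10):
    print(d, complete_model_size(3, d))
# d=10 and j=3 already require 4**10 - 1 = 1048575 coefficients,
# while a sparse subset model may keep only a handful of them.
```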

Assume $\|\log f\|_\infty \le M_1$, $\log f(x) = \sum_i \theta_i^* \phi_i(x)$, and the coefficients satisfy the following two conditions for some positive constants $M_2$, $M_3$, and $s$:
\[ \sum_i |\theta_i^*| \le M_2, \qquad (4.15) \]
\[ \sum_i (i_1^2 + \cdots + i_d^2)^s\, |\theta_i^*|^2 \le M_3. \qquad (4.16) \]
Let $F(M_1, M_2, M_3, s)$ be the collection of the densities satisfying the above conditions. The hyper-parameters $M_1$, $M_2$, $M_3$, and $s$ are not necessarily known. In the following evaluations of the resolvabilities, these parameters are fixed.


Let $g_j(x) = \sum_{|i| \le j} \theta_i^* \phi_i(x)$ be the best approximator of $\log f$ in the model $j$ in the $L_2$ sense. Then the complete model $j$ has an approximation error
\[ \| g_j - \log f \|_2^2 = \sum_{|i| > j} |\theta_i^*|^2 \le \frac{1}{(j+1)^{2s}} \sum_{|i| > j} (i_1^2 + \cdots + i_d^2)^s |\theta_i^*|^2 \le \frac{M_3}{(j+1)^{2s}}. \]
Then, using the same technique used in Subsection 4.4.1 (Lemma 1 in B & S is still applicable because $\|g_j\|_\infty$ is bounded, which follows from boundedness of the basis functions and $\sum_i |\theta_i^*| \le M_2$), it can be shown that the resolvability for the complete models is of order $(1/n)^{2s/(2s+d)}$.
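The order $(1/n)^{2s/(2s+d)}$ comes from balancing the approximation error against the dimension-over-$n$ term; schematically:

```latex
\frac{M_3}{(j+1)^{2s}} \asymp \frac{j^{d}}{n}
\;\Longrightarrow\; j \asymp n^{1/(2s+d)}
\;\Longrightarrow\; \inf_j \Big\{ \frac{M_3}{(j+1)^{2s}} + \frac{j^{d}}{n} \Big\}
  \asymp \Big( \frac{1}{n} \Big)^{2s/(2s+d)}.
```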

Now consider the approximation error for the subset models from the complete model $j$. Let $g_{m,j}(x)$ be the sum using the $m$ largest $|\theta_i^*|$ among the $j^d$ terms. Let $|\theta_{(1)}| \ge |\theta_{(2)}| \ge \cdots \ge |\theta_{(j^d)}|$ be the ordered coefficients of the first $j^d$ terms. Then the approximation error of $g_{m,j}$ is
\[ \| \log f - g_{m,j} \|_2^2 = \| \log f - g_j \|_2^2 + \| g_{m,j} - g_j \|_2^2 \le \frac{M_3}{(j+1)^{2s}} + \| g_j - g_{m,j} \|_2^2. \]
But
\[ \| g_j - g_{m,j} \|_2^2 = \sum_{k \ge m+1} |\theta_{(k)}|^2 \le |\theta_{(m+1)}| \sum_{k \ge m+1} |\theta_{(k)}| \le \frac{M_2^2}{m}. \]
Thus $\| \log f - g_{m,j} \|_2^2 \le \frac{M_3}{(j+1)^{2s}} + \frac{M_2^2}{m}$. For $s \ge 1/2$, to achieve the approximation error of rate $1/m$, $j$ can be taken as $m$. The corresponding complexity is $\log \binom{j^d}{m} + \log j + \log m = O(md \log m)$. Again, with the technique used in Subsection 4.4.1, the resolvability for the sparse subset models is seen to be within a multiple (depending only on $M_1$, $M_2$, $M_3$, $s$, $\beta_1$ and $\beta_2$) of $\sqrt{d \log n / n}$. The resulting rate of convergence is independent of the function dimension $d$ and is better than that from the complete models for $F(M_1, M_2, M_3, s)$ with $2s < d$. For $s < 1/2$, in order to achieve the approximation error of rate $1/m$, $j$ needs to be at least of order $m^{1/(2s)}$. Then the model complexity is of order $(d/(2s))\, m \log m$, and the resolvability for the sparse subset models is again within a multiple (depending only on $M_1$, $M_2$, $M_3$, $s$, $\beta_1$ and $\beta_2$) of $\sqrt{d \log n / n}$. For these cases, the subset models give a much better resolvability than the complete models.
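The key step above is the rearrangement bound $\sum_{k \ge m+1} |\theta_{(k)}|^2 \le M_2^2/m$; the following quick numerical check (the coefficient sequence is ours, chosen only to be absolutely summable) confirms it.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal(500) / np.arange(1, 501)  # absolutely summable
M2 = np.abs(theta).sum()
ordered = np.sort(np.abs(theta))[::-1]                # |theta_(1)| >= |theta_(2)| >= ...
for m in (5, 20, 100):
    tail = np.sum(ordered[m:] ** 2)                   # sum over k >= m+1
    print(m, tail <= M2**2 / m, round(tail, 5), round(M2**2 / m, 5))
```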

To achieve the rate $O\big(\sqrt{d \log n / n}\big)$ suggested by the resolvability of the sparse subset models, we use the following criterion to select a suitable subset. Choose the model $(\hat{\jmath}, \hat{I}_{\hat{\jmath}})$ minimizing
\[ -\sum_{i=1}^{n} \log f_{I_j}(X_i, \hat{\theta}_{I_j}) + \lambda \hat{A}_{I_j} + 0.49 \Big( \log j + \log \binom{j^d}{m_{I_j}} + \log m_{I_j} \Big), \]
where $\hat{\theta}_{I_j}$ is the maximum likelihood estimator and $\hat{A}_{I_j} = A(\hat{\theta}_{I_j})$. Denote $\hat{I}_{\hat{\jmath}}$ by $\hat{I}$ and $\hat{\theta}_{\hat{I}}$ by $\hat{\theta}$ for short. The density estimator is then $\hat{f} = f_{\hat{I},\hat{\theta}}$. By Theorem 4.1, we have the following conclusion.


Theorem 4.3: For the density estimator $\hat{f} = f_{\hat{I},\hat{\theta}}$ and any $M_1$, $M_2$, $M_3$, $s$, the density estimator $\hat{f}$ converges in squared Hellinger distance at a rate bounded above by $\sqrt{d \log n / n}$, uniformly over $f \in F(M_1, M_2, M_3, s)$. That is,
\[ \sup_{f \in F(M_1, M_2, M_3, s)} E\, d_H^2(f, \hat{f}) \le C(M_1, M_2, M_3, s) \cdot \sqrt{\frac{d \log n}{n}}, \]
where the constant $C(M_1, M_2, M_3, s)$ depends only on $M_1$, $M_2$, $M_3$, $s$, $\beta_1$ and $\beta_2$.

Note that the model selection criterion does not depend on $M_1$, $M_2$, $M_3$, $s$. Therefore, the procedure is adaptive for the families $F(M_1, M_2, M_3, s)$, $M_1 > 0$, $M_2 > 0$, $M_3 > 0$, $s > 0$.
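As a schematic illustration of this kind of criterion (not the exact criterion above: we replace the log-likelihood by an orthogonal-series residual term and keep only the complexity charge $\log\binom{L}{m} + \log m$), one can compare nested subsets formed by the largest empirical coefficients:

```python
import math
import numpy as np

def select_subset_size(coef_hat, n, L):
    # Schematic penalized selection: residual energy of the discarded
    # coefficients plus a description-length charge log C(L, m) + log m,
    # the latter taken per observation.
    order = np.argsort(-np.abs(coef_hat))
    best_m, best_score = 1, float("inf")
    for m in range(1, L + 1):
        fit = float(np.sum(coef_hat[order[m:]] ** 2))
        log_binom = (math.lgamma(L + 1) - math.lgamma(m + 1)
                     - math.lgamma(L - m + 1))
        score = fit + (log_binom + math.log(m)) / n
        if score < best_score:
            best_m, best_score = m, score
    return best_m

rng = np.random.default_rng(1)
true = np.zeros(64)
true[[0, 3, 8, 15]] = [1.0, 0.8, 0.6, 0.5]            # a sparse truth
n = 400
coef_hat = true + rng.standard_normal(64) / math.sqrt(n)
print(select_subset_size(coef_hat, n, 64))            # typically around 4
```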

Remarks:

1. If we use the usual trigonometric basis, the condition (4.16) corresponds to the familiar smoothness condition on $\log f$ when $s$ is an integer, namely,
\[ \sum_{i_1 + i_2 + \cdots + i_d = s} \Big\| \frac{\partial^s \log f}{\partial x_1^{i_1} \cdots \partial x_d^{i_d}} \Big\|_{L_2(\mu)}^2 \le M_3, \]
where $\mu$ is the Lebesgue measure on $[0,1]^d$. For the class of densities satisfying this condition, it is known that the optimal rate of convergence under the squared Hellinger distance is $n^{-2s/(2s+d)}$. The condition (4.15) also controls the high-frequency components of the Fourier representation of $\log f$, but it is not directly connected with the derivative condition on $\log f$.

2. The above analysis does not depend on any special properties of the tensor-product basis. Therefore, the result applies to any bounded multi-indexed orthonormal basis for which the two conditions (4.15) and (4.16) hold.

The subset models considered here naturally correspond to choices of the basis functions in the linear spaces to include in the models. The problem of estimating nonlinear parameters can also be changed into a problem of subset selection. In Subsection 4.4.2, we estimate linear and nonlinear parameters in the neural network models by the maximum likelihood principle. A different treatment is as follows. First suitably discretize the parameter spaces for the nonlinear parameters $a$ and $b$. Treat $\phi(a^T x + b)$ as a basis function for each of the discretized values of $a$ and $b$. Then selecting the number of hidden units and estimating the discretized values of the nonlinear parameters is equivalent to selecting the basis functions among exponentially many possibilities.
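A minimal sketch of the discretization idea (grid sizes and names are ours): tabulating $\phi(a^T x + b)$ over a grid of $(a, b)$ values turns the nonlinear fitting problem into selection among finitely many candidate basis functions, whose number grows exponentially in the input dimension.

```python
import numpy as np
from itertools import product

def sigmoidal_dictionary(d, a_grid, b_grid):
    # Enumerate candidate basis functions x -> phi(a.x + b) over a grid of
    # discretized nonlinear parameters; fitting then becomes subset choice.
    phi = lambda t: 1.0 / (1.0 + np.exp(-t))
    return [(np.array(a), b, phi)
            for a in product(a_grid, repeat=d) for b in b_grid]

dictionary = sigmoidal_dictionary(d=3, a_grid=(-1.0, 0.0, 1.0),
                                  b_grid=(-1.0, 0.0, 1.0))
print(len(dictionary))  # 3**3 * 3 = 81 candidates; grows exponentially in d
```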


4.5 Proofs of the main Lemmas

Proof of Lemma 4.1: We use a "chaining" argument similar to that used in Birge and Massart (1993, 1995).

We consider dividing the parameter space into rings as follows:
\[ \Theta_0 = \Big\{\theta \in \Theta : d_H^2(f, f_\theta) \le \frac{\xi}{n}\Big\}, \]
\[ \Theta_i = \Big\{\theta \in \Theta : 2^{i-1}\frac{\xi}{n} < d_H^2(f, f_\theta) \le 2^i \frac{\xi}{n}\Big\}, \quad i = 1, 2, \ldots. \]
Then $\Theta_i$ is a Hellinger ring with inner radius $r_{i-1}$ and outer radius $r_i$, where $r_i = 2^{i/2} r_0$ for $i \ge 0$, $r_{-1} = 0$, and $r_0 = \sqrt{\xi/n}$. We first concentrate on $\Theta_i$.

Let a sequence $\delta_j \downarrow 0$ be given with $\delta_0 \le \rho r_0$; then, by the assumption, there is a sequence $F_0, F_1, F_2, \ldots$ of nets in $\Theta_i$ satisfying the cardinality bounds. For each $\theta \in \Theta_i$, let $T_j(\theta) = \arg\min_{\theta' \in F_j} \| \log(f(\cdot,\theta)/f(\cdot,\theta')) \|_\infty$ be the nearest representor of $\theta$ in the net $F_j$. Denote
\[ e_0(\theta) = \frac{1}{n} \sum_{i=1}^{n} \log \frac{f(X_i, T_0(\theta))}{f(X_i)}, \qquad \ell_j(\theta) = \frac{1}{n} \sum_{i=1}^{n} \log \frac{f(X_i, T_j(\theta))}{f(X_i, T_{j-1}(\theta))}. \]
Then, because $\lim_{j\to\infty} f(x, T_j(\theta)) = f(x, \theta)$,
\[ \frac{1}{n} \sum_{i=1}^{n} \log \frac{f(X_i, \theta)}{f(X_i)} = e_0(\theta) + \sum_{j=1}^{\infty} \ell_j(\theta). \]

Let $q_i = P^*\{ \text{for some } \theta \in \Theta_i,\ \frac{1}{n} \sum_{i=1}^n \log \frac{f(X_i,\theta)}{f(X_i)} \ge -\gamma\, d_H^2(f, f_\theta) + \frac{\xi}{n} \}$. Then, because $\sum_{j=1}^\infty E \ell_j(\theta) = E \log \frac{f(X,\theta)}{f(X, T_0(\theta))}$, we have
\[ q_i = P^*\Big\{ \text{for some } \theta \in \Theta_i,\ e_0(\theta) + \sum_{j=1}^\infty \big(\ell_j(\theta) - E \ell_j(\theta)\big) \ge E \log \frac{f(X, T_0(\theta))}{f(X, \theta)} - \gamma\, d_H^2(f, f_\theta) + \frac{\xi}{n} \Big\}. \]

For $\theta_0 \in F_0$, consider $B_{\theta_0} := \{\theta : \theta \in \Theta_i,\ T_0(\theta) = \theta_0\}$. For an arbitrary $\epsilon > 0$, choose $\tilde{\theta}_0 \in B_{\theta_0}$ satisfying
\[ E \log \frac{f(X, \theta_0)}{f(X, \tilde{\theta}_0)} \le \inf_{\theta \in B_{\theta_0}} E \log \frac{f(X, \theta_0)}{f(X, \theta)} + \epsilon. \]
Then let $\tilde{F}_0 = \{\tilde{\theta}_0 : \theta_0 \in F_0\}$. By the triangle inequality, $\tilde{F}_0$ is a $2\delta_0$-net in $\Theta_i$. Now replace $F_0$ by $\tilde{F}_0$ and accordingly replace $T_0$ by $\tilde{T}_0$. For convenience, we will not distinguish $\tilde{F}_0$ from $F_0$ and $\tilde{T}_0$ from $T_0$. Now notice that for $\theta \in B_{\theta_0}$,
\[ E \log \frac{f(X, \tilde{\theta}_0)}{f(X, \theta)} = E \log \frac{f(X, \tilde{\theta}_0)}{f(X, \theta_0)} + E \log \frac{f(X, \theta_0)}{f(X, \theta)} \ge -\inf_{\theta' \in B_{\theta_0}} E \log \frac{f(X, \theta_0)}{f(X, \theta')} - \epsilon + E \log \frac{f(X, \theta_0)}{f(X, \theta)} \ge -\epsilon, \]


so we have
\begin{align*}
q_i &\le P^*\Big\{\text{for some } \theta \in \Theta_i,\ e_0(\theta) + \textstyle\sum_{j=1}^\infty (\ell_j(\theta) - E\ell_j(\theta)) \ge -\gamma\, d_H^2(f, f_\theta) + \tfrac{\xi}{n} - \epsilon\Big\} \\
&\le P^*\Big\{\text{for some } \theta \in \Theta_i,\ e_0(\theta) \ge -2\gamma r_i^2 + \tfrac{\xi}{n} - \epsilon\Big\} + \textstyle\sum_{j=1}^\infty P^*\Big\{\text{for some } \theta \in \Theta_i,\ \ell_j(\theta) - E\ell_j(\theta) \ge \eta_j\Big\} \\
&= q_i^{(1)} + \textstyle\sum_{j=1}^\infty q_{ij}^{(2)},
\end{align*}
where $\eta_j$, $j \ge 1$, are positive numbers satisfying
\[ \sum_{j=1}^{\infty} \eta_j \le \gamma\, r_i^2. \qquad (4.17) \]

To bound $q_i^{(1)}$, we use a familiar exponential inequality as follows (see, e.g., Barron and Cover (1991), Chernoff (1952)).

Fact: Let $g_1$ and $g_2$ be two probability density functions with respect to some $\sigma$-finite measure. If $X_1, X_2, \ldots, X_n$ is an i.i.d. sample from $g_2$, then for every $t \in \mathbb{R}$,
\[ P\Big\{ \frac{1}{n} \sum_{i=1}^{n} \log \frac{g_1(X_i)}{g_2(X_i)} \ge t \Big\} \le e^{-nt/2}\, E \prod_{i=1}^{n} \sqrt{\frac{g_1(X_i)}{g_2(X_i)}} \le \exp\Big( -\frac{n}{2} \big( t + d_H^2(g_1, g_2) \big) \Big). \]
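For completeness, a sketch of the standard steps behind the second bound in the Fact (the product has expectation $\int\sqrt{g_1 g_2}\,d\mu$ per factor, and $1 - x \le e^{-x}$):

```latex
E \prod_{i=1}^{n} \sqrt{\frac{g_1(X_i)}{g_2(X_i)}}
  = \Big( \int \sqrt{g_1 g_2}\, d\mu \Big)^{n}
  = \Big( 1 - \tfrac{1}{2}\, d_H^2(g_1, g_2) \Big)^{n}
  \le \exp\Big( -\tfrac{n}{2}\, d_H^2(g_1, g_2) \Big).
```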

From the above fact, we have that for each $\tilde{\theta}_0 \in \tilde{F}_0$,
\[ P\Big\{ \frac{1}{n} \sum_{i=1}^n \log \frac{f(X_i, \tilde{\theta}_0)}{f(X_i)} \ge -2\gamma r_i^2 + \frac{\xi}{n} - \epsilon \Big\} \le \exp\Big( -\frac{n}{2} \big( d_H^2(f, f_{\tilde{\theta}_0}) - 2\gamma r_i^2 + \tfrac{\xi}{n} - \epsilon \big) \Big) \le \exp\Big( -\frac{n}{2} \big( r_{i-1}^2 - 2\gamma r_i^2 + \tfrac{\xi}{n} - \epsilon \big) \Big). \]

Note that for every $\theta_0 \in F_0$, $e_0(\theta)$ is the same for all $\theta \in B_{\theta_0}$. Thus, by the union bound,
\[ q_i^{(1)} \le P\Big( \bigcup_{\theta_0 \in F_0} \Big\{ e_0(\theta_0) \ge -2\gamma r_i^2 + \tfrac{\xi}{n} - \epsilon \Big\} \Big) \le \mathrm{card}(F_0) \exp\Big( -\frac{n}{2} \big( r_{i-1}^2 - 2\gamma r_i^2 + \tfrac{\xi}{n} - \epsilon \big) \Big). \]
Because $\epsilon > 0$ is arbitrary, we know
\[ q_i \le \mathrm{card}(F_0) \exp\Big( -\frac{n}{2} \big( r_{i-1}^2 - 2\gamma r_i^2 + \tfrac{\xi}{n} \big) \Big) + \sum_{j=1}^{\infty} q_{ij}^{(2)}. \]

Note that for $i \ge 1$,
\[ r_{i-1}^2 - 2\gamma r_i^2 + \frac{\xi}{n} = \big( (1-4\gamma)\, 2^{i-1} + 1 \big) \frac{\xi}{n} \ge (i+1)(1-4\gamma) \frac{\xi}{4n}, \]
and for $i = 0$,
\[ r_{-1}^2 - 2\gamma r_0^2 + \frac{\xi}{n} = (1-2\gamma) \frac{\xi}{n} \ge (1-4\gamma) \frac{\xi}{n}, \]
so
\[ q_i \le \mathrm{card}(F_0) \exp\Big( -\frac{(i+1)(1-4\gamma)\xi}{8} \Big) + \sum_{j=1}^{\infty} q_{ij}^{(2)}. \]


Now, because
\[ \Big\| \log \frac{f(\cdot, T_j(\theta))}{f(\cdot, T_{j-1}(\theta))} \Big\|_\infty \le \Big\| \log \frac{f(\cdot, \theta)}{f(\cdot, T_{j-1}(\theta))} \Big\|_\infty + \Big\| \log \frac{f(\cdot, \theta)}{f(\cdot, T_j(\theta))} \Big\|_\infty \le \delta_{j-1} + \delta_j \le 2\delta_{j-1}, \]
and $\ell_j(\theta)$ is the same for all $\theta$ such that $(T_{j-1}(\theta), T_j(\theta)) = (\theta_{j-1}, \theta_j)$ for some pair $(\theta_{j-1}, \theta_j) \in F_{j-1} \times F_j$, Hoeffding's inequality (see, e.g., Pollard (1984, pp. 191-192)) gives
\[ \sum_{j=1}^{\infty} q_{ij}^{(2)} \le \sum_{j=1}^{\infty} \mathrm{card}(F_{j-1})\, \mathrm{card}(F_j)\, \exp\Big( -\frac{n \eta_j^2}{8 \delta_{j-1}^2} \Big). \]
Given $\xi$, $\gamma$, $A$, $m$, and $n$, we choose the sequences $\delta_j$ and $\eta_j$ as follows. First, $\delta_0$ is chosen such that

\[ m \log \frac{2 A r_0}{\delta_0} = \frac{(1-4\gamma)\xi}{8}. \]
Similarly, each $\delta_j$, $j \ge 1$, is chosen such that
\[ m \log \frac{A r_0}{\delta_j} = \frac{(j+1)(1-4\gamma)\xi}{8}, \]
and $\eta_j$, $j \ge 1$, is defined such that
\[ \frac{n \eta_j^2}{8 \delta_{j-1}^2} = (\log 2)\, m\, i + \frac{(2j+1)(1-4\gamma)\xi}{8} + \frac{(i+1)\, j\, (1-4\gamma)\xi}{4}. \]

With these choices, the bound on $q_i$ becomes
\begin{align*}
q_i &\le \exp\Big( m \log \frac{2A r_i}{\delta_0} - \frac{n}{2}\big(r_{i-1}^2 - 2\gamma r_i^2 + \tfrac{\xi}{n}\big) \Big) + \sum_{j=1}^\infty \exp\Big( m \log \frac{A r_i}{\delta_{j-1}} + m \log \frac{A r_i}{\delta_j} - \frac{n\eta_j^2}{8\delta_{j-1}^2} \Big) \\
&\le \exp\Big( -\frac{(i+1)(1-4\gamma)\xi}{8} \Big) + \sum_{j=1}^\infty \exp\Big( -\frac{(i+1)\, j\, (1-4\gamma)\xi}{4} \Big) \\
&\le \Big( 1 + \frac{1}{1 - \exp\big(-\frac{(1-4\gamma)\xi}{8}\big)} \Big) \exp\Big( -\frac{(i+1)(1-4\gamma)\xi}{8} \Big).
\end{align*}
For the second inequality, we need $m \log 2 \le (1-4\gamma)\xi/8$, which is satisfied if $\xi/m \ge \frac{8}{1-4\gamma} \log \frac{2A}{\rho}$ with $\rho < A$. The last inequality follows from $(i+1) j \ge \frac{i+1}{2} + \frac{j}{2}$.


From our choices of $\delta_0$ and $\delta_j$, it follows that
\[ \delta_0 = 2 A r_0 \exp\Big( -\frac{(1-4\gamma)\xi}{8m} \Big), \qquad (4.18) \]
\[ \delta_j = A r_0 \exp\Big( -\frac{(j+1)(1-4\gamma)\xi}{8m} \Big) \quad \text{for } j \ge 1, \]
and, solving the defining equation for $\eta_j$ (and using $(\log 2)\, m \le (1-4\gamma)\xi/8$),
\[ \eta_1 \le 2\sqrt{6}\, A \sqrt{1-4\gamma}\, \sqrt{i+5}\; \frac{\xi}{n}\, \exp\Big( -\frac{(1-4\gamma)\xi}{8m} \Big), \]
and, for $j \ge 2$,
\[ \eta_j = \delta_{j-1} \sqrt{ \frac{8}{n} \Big( (\log 2)\, m\, i + \frac{(2j+1)(1-4\gamma)\xi}{8} + \frac{(i+1)\, j\, (1-4\gamma)\xi}{4} \Big) } \le \sqrt{2}\, A \sqrt{1-4\gamma}\, \sqrt{i+5}\, \sqrt{j+2}\; \frac{\xi}{n}\, \exp\Big( -\frac{j(1-4\gamma)\xi}{8m} \Big). \]

It remains to check that $\delta_0 \le \rho r_0$ and that $\sum_{j=1}^{\infty} \eta_j \le \gamma r_i^2$, as required in (4.17). Indeed, summing the bounds above (using $\sqrt{j+2} \le \sqrt{3}\cdot 2^{(j-1)/2}$ and $\exp(-\frac{(1-4\gamma)\xi}{8m}) \le \frac{1}{2}$), it can be checked that
\[ \sum_{j=1}^{\infty} \eta_j \le 6.88\, A \sqrt{1-4\gamma}\, \sqrt{i+5}\; \frac{\xi}{n}\, \exp\Big( -\frac{(1-4\gamma)\xi}{8m} \Big). \]
Thus, for $\sum_{j=1}^{\infty} \eta_j \le \gamma r_i^2$ to hold, it suffices to have
\[ 6.88\, A \sqrt{1-4\gamma}\, \sqrt{i+5}\; \frac{\xi}{n}\, \exp\Big( -\frac{(1-4\gamma)\xi}{8m} \Big) \le \gamma\, r_i^2 = \gamma\, 2^i\, \frac{\xi}{n}. \]
Using $2^i/\sqrt{i+5} \ge 1/\sqrt{5}$ for $i \ge 0$, it is enough to require
\[ \frac{\xi}{m} \ge \frac{8}{1-4\gamma} \log \frac{15.4\, A \sqrt{1-4\gamma}}{\gamma}, \qquad (4.19) \]
where $\rho = \frac{\gamma}{7.7\, \sqrt{1-4\gamma}}$. Finally, we sum over the rings indexed by $i$:

\begin{align*}
&P^*\Big\{ \text{for some } \theta \in \Theta,\ \frac{1}{n} \sum_{i=1}^n \log \frac{f(X_i,\theta)}{f(X_i)} \ge -\gamma\, d_H^2(f, f_\theta) + \frac{\xi}{n} \Big\} \\
&\quad \le \sum_{i=0}^{\infty} P^*\Big\{ \text{for some } \theta \in \Theta_i,\ \frac{1}{n} \sum_{i=1}^n \log \frac{f(X_i,\theta)}{f(X_i)} \ge -\gamma\, d_H^2(f, f_\theta) + \frac{\xi}{n} \Big\} \\
&\quad \le \sum_{i=0}^{\infty} \Big( 1 + \frac{1}{1 - \exp\big(-\frac{(1-4\gamma)\xi}{8}\big)} \Big) \exp\Big( -\frac{(i+1)(1-4\gamma)\xi}{8} \Big) \\
&\quad \le 15.1\, \exp\Big( -\frac{(1-4\gamma)\xi}{8} \Big).
\end{align*}


From (4.18) and (4.19), $\frac{\delta_0}{r_0} = 2A \exp\big( -\frac{(1-4\gamma)\xi}{8m} \big) \le 2A \cdot \frac{\gamma}{15.4\, A \sqrt{1-4\gamma}} = \rho$, as required. This completes the proof of the lemma.

Proof of Lemma 4.2: We first show that the Hellinger ball is contained in some square-norm ball. Then, for the square-norm ball, we provide a suitable $\delta$-net satisfying the cardinality bound.

Because $1 \in S_j$, we have $1 = \sum_{i=1}^{m_j} \eta_i \varphi_{j,i}(x)$ for some $\eta$. Then the log-density may be written as
\[ \log f_j(x, \theta) = \sum_{i=1}^{m_j} \beta_i \varphi_{j,i}(x), \]
where $\beta_i = \theta_i - \psi_j(\theta)\, \eta_i$. Because $\|\log f_j(x,\theta)\|_\infty \le L$ for $\theta \in \Theta_{j,L}$, it follows that for any $\theta, \theta^* \in \Theta_{j,L}$, $f_{\theta}/f_{\theta^*} \le e^{2L}$. Let $M_L = e^{-L}\, k_1(e^{2L})/k_2(e^{2L})$; then, from Lemma 4.5 in Section 4.6,

\[ d_H^2(f_{\theta^*}, f_\theta) \ge \frac{k_1(e^{2L})}{k_2(e^{2L})} \int f_{\theta^*} \big(\log f_\theta - \log f_{\theta^*}\big)^2 d\mu \ge M_L \int \big(\log f_\theta - \log f_{\theta^*}\big)^2 dx \ge M_L\, \| \beta - \beta^* \|^2. \]
For the last inequality, we use the frame assumption in (4.8). Therefore, for any $\theta^* \in \Theta_{j,L}$,
\[ B_j(\theta^*, r) \subset \Big\{ \beta : \| \beta - \beta^* \|^2 \le \frac{r^2}{M_L} \Big\}. \]
The inclusion above refers to the functions represented by the parameters $\theta$ and $\beta$. Now we want to find a suitable $\delta$-net on $B(\beta^*, r/\sqrt{M_L})$. We consider a rectangular grid spaced at width $\varepsilon > 0$ for each coordinate. If $\beta$ belongs to a cube with at least one element $\tilde{\beta}$ corresponding to some $\tilde{\theta} \in B_j(\theta^*, r)$, then
\[ \| \beta - \beta^* \|^2 \le 2\, \| \tilde{\beta} - \beta^* \|^2 + 2\, \| \beta - \tilde{\beta} \|^2 \le \frac{2 r^2}{M_L} + 2 m_j \varepsilon^2. \]

Thus, all the cubes with at least one element in $B_j(\theta^*, r)$ are included in $B(\beta^*, \bar{r})$, where $\bar{r}^2 = \frac{2 r^2}{M_L} + 2 m_j \varepsilon^2$. Therefore, the number of these cubes is bounded by
\[ \frac{\mathrm{Vol}(B(\beta^*, \bar{r}))}{\varepsilon^{m_j}} = \frac{(\sqrt{\pi})^{m_j}\, \bar{r}^{m_j}}{\Gamma(\frac{m_j}{2}+1)\, \varepsilon^{m_j}} \le \frac{1}{\sqrt{m_j \pi}} \Big( \frac{\sqrt{2\pi e}\; \bar{r}}{\sqrt{m_j}\; \varepsilon} \Big)^{m_j}. \]
From (4.7), for any $\beta$ and $\tilde{\beta}$ corresponding to $\theta$ and $\tilde{\theta}$, respectively, in the same cube, we have
\[ \| \log f_\theta - \log f_{\tilde{\theta}} \|_\infty \le \tau_1 \sqrt{m_j}\, \max_i |\beta_i - \tilde{\beta}_i| \le \tau_1 \sqrt{m_j}\, \varepsilon. \]


Take $\varepsilon = \delta/(\tau_1 \sqrt{m_j})$; then $\| \log f_\theta - \log f_{\tilde{\theta}} \|_\infty \le \delta$. For $\delta \le \rho r$, $\bar{r} \le r \sqrt{\frac{2}{M_L} + 2\rho^2}$. Now, for each cube that intersects $B_j(\theta^*, r)$, choose a parameter $\beta$ that corresponds to a probability density function, and let $F_\delta$ be the collection of the corresponding densities. Then
\[ \mathrm{card}(F_\delta) \le \frac{1}{\sqrt{m_j \pi}} \Big( \frac{\sqrt{2\pi e}\; \tau_1\, \bar{r}}{\delta} \Big)^{m_j}. \]
Clearly, $F_\delta$ is a $\delta$-net for the densities in $B_j(\theta^*, r)$. Thus Assumption 4.1$'$ is satisfied with $A_\kappa = \sqrt{2\pi e^2 \big( \tau_1^2/M_L + 2\rho^2 \big)}$. From Lemma 4.5, $M_L \ge e^{-L}/(8(1+L)^2)$, so
\[ A_\kappa \le \sqrt{2\pi e^2 \cdot \tau_1^2/M_L} + \sqrt{2\pi e^2 \cdot 2\rho^2} \le 19.28\, \tau_1 (1+L)\, e^{L/2} + 0.06. \]

Proof of Lemma 4.3: We consider an orthonormal basis $1, \varphi_{j,1}(x), \varphi_{j,2}(x), \ldots, \varphi_{j,m_j}(x)$ of $S_j$. Let $\bar{\theta} = (\theta_1, \theta_2, \ldots, \theta_{m_j}, \psi_j(\theta))$. From the proof of Lemma 4.2, we know that for any $\theta, \theta^* \in \Theta_{j,L}$,
\[ d_H^2(f_\theta, f_{\theta^*}) \ge M_L \int \big(\log f_\theta - \log f_{\theta^*}\big)^2 dx = M_L\, \| \bar{\theta} - \bar{\theta}^* \|^2. \]
Therefore,
\[ B_j(\theta^*, r) \subset \Big\{ \bar{\theta} : \| \bar{\theta} - \bar{\theta}^* \| \le \frac{r}{\sqrt{M_L}} \Big\}. \]
The inclusion above is meant for the functions that the parameters represent. Similarly to the counting argument in the proof of Lemma 4.2, a rectangular grid spaced at width $\frac{\delta}{\kappa \sqrt{m_j + 1}}$ for each coordinate provides the desired $\delta$-net. The cardinality constant is $A_\kappa = \sqrt{2\pi e^2 (\kappa^2/M_L + 2\rho^2)} \le 19.28\, \kappa (1+L)\, e^{L/2} + 0.06$ for $\rho = 0.0056$. This completes the proof of Lemma 4.3.

4.6 Some simple inequalities used for main results

Lemma 4.4: Assume $f$ and $g$ are two probability density functions with respect to some $\sigma$-finite measure $\mu$. Let $s > 1$ be any constant. Then
\[ \int_{\{f/g \ge s\}} f \log \frac{f}{g}\, d\mu \le \alpha(s)\, D(f \,\|\, g), \]
where $\alpha(s) = \frac{\log s}{\log s + \frac{1}{s} - 1}$. Also, $\alpha(s)$ is decreasing in $s$ for $s > 1$.


Remark: The best available bound with $s = 1$ is
\[ \int_{\{f/g \ge 1\}} f \log \frac{f}{g}\, d\mu \le D(f\|g) + \sqrt{2\, D(f\|g)}. \]
Here we avoid the square root by taking $s > 1$. Note that $\alpha(s) \to 1$ as $s \to \infty$. Improved bounds of the form $D(f\|g) + O(\sqrt{D(f\|g)})$ are possible under the condition $\mathrm{var}(\log \frac{f}{g}) \le c\, D(f\|g)$. Here we have chosen to avoid higher-order moment conditions on the logarithm of the density ratio. Hence no uniform tail rate of convergence to zero exists.

Proof of Lemma 4.4: We consider a familiar expression of the relative entropy:
\begin{align*}
D(f\|g) &= \int f \log \frac{f}{g}\, d\mu = \int f \Big( \log \frac{f}{g} + \frac{g}{f} - 1 \Big)\, d\mu \\
&= \int_{\{f/g \ge s\}} f \Big( \log \frac{f}{g} + \frac{g}{f} - 1 \Big)\, d\mu + \int_{\{f/g < s\}} f \Big( \log \frac{f}{g} + \frac{g}{f} - 1 \Big)\, d\mu.
\end{align*}
Because $\log \frac{f}{g} + \frac{g}{f} - 1 \ge 0$, to prove the lemma it suffices to show
\[ \log \frac{f}{g} \le \alpha(s) \Big( \log \frac{f}{g} + \frac{g}{f} - 1 \Big) \quad \text{for } \frac{f}{g} \ge s. \]
This follows from the monotonicity of $\alpha(s)$, which can be shown by simple calculation. This completes the proof of the lemma.
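A quick numerical sanity check of Lemma 4.4 on a pair of simple discrete distributions (our choice) for a few values of $s$:

```python
import numpy as np

f = np.array([0.5, 0.3, 0.15, 0.05])
g = np.array([0.25, 0.25, 0.25, 0.25])
ratio = f / g
D = np.sum(f * np.log(ratio))                         # D(f || g)
for s in (1.5, 2.0, 4.0):
    alpha = np.log(s) / (np.log(s) + 1.0 / s - 1.0)   # alpha(s) from the lemma
    lhs = np.sum(np.where(ratio >= s, f * np.log(ratio), 0.0))
    print(s, lhs <= alpha * D, round(lhs, 4), round(alpha * D, 4))
```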

Lemma 4.5: Let $p$ and $q$ be two probability density functions with respect to some $\sigma$-finite measure $\mu$. If $p(x)/q(x) \le V$ for all $x$, then
\[ k_1(V) \int p \Big( \log \frac{p}{q} \Big)^2 d\mu \le D(p \,\|\, q) \le k_2(V)\, d_H^2(p, q), \]
where $k_1(V) = \frac{1}{2 + \log V}$ and $k_2(V) = 2 + \log V$.

The above upper bound on the relative entropy is given in Birge & Massart (1994, Lemma 4).

Proof of Lemma 4.5: We note $D(p\|q) = \int p \big( \log \frac{p}{q} + \frac{q}{p} - 1 \big)\, d\mu$. It can be shown by calculus that $\varphi_1(x) = \frac{\log x + \frac{1}{x} - 1}{\log^2 x}$ is decreasing on $(0, \infty)$, which implies, for $p/q \le V$,
\[ \frac{\log V + \frac{1}{V} - 1}{\log^2 V} \int p \Big( \log \frac{p}{q} \Big)^2 d\mu \le D(p \,\|\, q). \]
Since $\frac{\log V + \frac{1}{V} - 1}{\log^2 V} \ge \frac{1}{2 + \log V}$, the first inequality follows.

To prove the other inequality, we consider the following parts of $D(p\|q)$ and $d_H^2(p,q)$:
\begin{align*}
D(p\|q) &= \int_{\{p > q\}} p \Big( \log \frac{p}{q} + \frac{q}{p} - 1 \Big)\, d\mu + \int_{\{p < q\}} p \Big( \log \frac{p}{q} + \frac{q}{p} - 1 \Big)\, d\mu, \\
d_H^2(p,q) &= \int_{\{p > q\}} p \Big( \sqrt{\frac{q}{p}} - 1 \Big)^2 d\mu + \int_{\{p < q\}} p \Big( \sqrt{\frac{q}{p}} - 1 \Big)^2 d\mu.
\end{align*}
For $p \le q$, $\log \frac{p}{q} + \frac{q}{p} - 1 \le 2 \big( \sqrt{\frac{q}{p}} - 1 \big)^2$, so
\[ \int_{\{p < q\}} p \Big( \log \frac{p}{q} + \frac{q}{p} - 1 \Big)\, d\mu \le 2 \int_{\{p < q\}} p \Big( \sqrt{\frac{q}{p}} - 1 \Big)^2 d\mu. \]
For $p > q$, $\varphi_2(x) = \frac{x \log x + 1 - x}{(\sqrt{x} - 1)^2}$ is increasing in $x$. It follows that
\[ \int_{\{p > q\}} p \Big( \log \frac{p}{q} + \frac{q}{p} - 1 \Big)\, d\mu \le \frac{V \log V + 1 - V}{(\sqrt{V} - 1)^2} \int_{\{p > q\}} p \Big( \sqrt{\frac{q}{p}} - 1 \Big)^2 d\mu. \]
Combining the integrals together, and noting that $2 \le \frac{V \log V + 1 - V}{(\sqrt{V} - 1)^2} \le 2 + \log V$, we conclude
\[ D(p \,\|\, q) \le \frac{V \log V + 1 - V}{(\sqrt{V} - 1)^2}\, d_H^2(p, q) \le (2 + \log V)\, d_H^2(p, q), \]
which completes the proof of Lemma 4.5.
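The constants in Lemma 4.5 can likewise be checked numerically; the sketch below (the distributions are ours) also verifies the sharper intermediate bound from the proof:

```python
import numpy as np

p = np.array([0.4, 0.35, 0.2, 0.05])
q = np.array([0.25, 0.25, 0.25, 0.25])
V = float(np.max(p / q))                              # here p/q <= V everywhere
D = float(np.sum(p * np.log(p / q)))
hell2 = float(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) # d_H^2(p, q)
k1 = 1.0 / (2.0 + np.log(V))
psi = (V * np.log(V) + 1.0 - V) / (np.sqrt(V) - 1.0) ** 2
lhs = k1 * float(np.sum(p * np.log(p / q) ** 2))
print(lhs <= D, D <= psi * hell2, psi <= 2.0 + np.log(V))
```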

Lemma 4.6: Suppose $h_1$ and $h_2$ are two functions on $[0,1]$ satisfying $\int e^{h_1} d\mu < \infty$ and $\int e^{h_2} d\mu < \infty$, where $\mu$ is the Lebesgue measure. Then
\[ \Big| \log \int e^{h_1} d\mu - \log \int e^{h_2} d\mu \Big| \le \| h_1 - h_2 \|_\infty. \]

Proof: By Jensen's inequality,
\[ \log \int e^{h_1} d\mu - \log \int e^{h_2} d\mu = \log \frac{\int e^{(h_1 - h_2) + h_2}\, d\mu}{\int e^{h_2}\, d\mu} \ge \frac{\int (h_1 - h_2)\, e^{h_2}\, d\mu}{\int e^{h_2}\, d\mu} \ge - \| h_1 - h_2 \|_\infty. \]
Similarly,
\[ \log \int e^{h_1} d\mu - \log \int e^{h_2} d\mu \le \| h_1 - h_2 \|_\infty, \]
which completes the proof.
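And a short check of Lemma 4.6 with two bounded functions of our choosing, approximating the integrals by averages over a fine grid:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
h1 = np.sin(2 * np.pi * x)
h2 = h1 + 0.3 * np.cos(np.pi * x)                # ||h1 - h2||_inf = 0.3
lhs = abs(np.log(np.mean(np.exp(h1))) - np.log(np.mean(np.exp(h2))))
print(lhs <= np.abs(h1 - h2).max(), round(lhs, 4))
```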


Bibliography

[1] H. Akaike, "Information theory and an extension of the maximum likelihood principle," in Proc. 2nd Int. Symp. Info. Theory, pp. 267-281, eds. B.N. Petrov and F. Csaki, Akademiai Kiado, Budapest, 1973.

[2] A.R. Barron, "Are Bayes rules consistent in information?" Open Problems in Communication and Computation, pp. 85-91, T.M. Cover and B. Gopinath, editors, Springer-Verlag, 1987.

[3] A.R. Barron and T.M. Cover, "Minimum complexity density estimation," IEEE Trans. on Information Theory, vol. 37, no. 4, pp. 1034-1054, 1991.

[4] A.R. Barron and C.-H. Sheu, "Approximation of density functions by sequences of exponential families," Ann. Statistics, vol. 19, pp. 1347-1369, 1991.

[5] A.R. Barron, "Universal approximation bounds for superpositions of a sigmoidal function," IEEE Trans. on Information Theory, vol. 39, no. 3, pp. 930-945, 1993.

[6] A.R. Barron, “Approximation and estimation bounds for artificial neural networks,”

Machine Learning, vol. 14, pp. 115-133, 1994.

[7] A.R. Barron, Y. Yang and B. Yu, "Asymptotically optimal function estimation by minimum complexity criteria," in Proc. 1994 Int. Symp. Info. Theory, p. 38, Trondheim, Norway, 1994.

[8] A.R. Barron and N. Hengartner, "Information theory and superefficiency," preprint, 1995.

[9] J.M. Bernardo, "Reference prior distributions for Bayesian inference," J. Roy. Statist. Soc. Ser. B, vol. 41, pp. 113-147, 1979.


[10] P.J. Bickel and Y. Ritov, “Estimating integrated squared density derivatives: sharp

best order of convergence estimates,” Sankhya: Indian J. Statist. Ser. A, vol. 50, pp.

381-393, 1988.

[11] L. Birge, "Approximation dans les espaces metriques et theorie de l'estimation," Z. Wahrscheinlichkeitstheor. Verw. Geb., vol. 65, pp. 181-237, 1983.

[12] L. Birge, "On estimating a density using Hellinger distance and some other strange facts," Probab. Th. Rel. Fields, vol. 71, pp. 271-291, 1986.

[13] L. Birge and P. Massart, “Estimation of integral functionals of a density,” Technical

Report 024-92, Mathematical Sciences Research Institute, Berkeley, 1992.

[14] L. Birge and P. Massart, “Rates of convergence for minimum contrast estimators,”

Probability Theory and Related Fields, vol. 97, pp. 113-150, 1993.

[15] L. Birge and P. Massart, “Minimum contrast estimators on sieves,” Technical report,

Universite Paris-Sud, 1994.

[16] L. Birge and P. Massart, "From model selection to adaptive estimation," Research Papers in Probability and Statistics: Festschrift for Lucien Le Cam, 1995.

[17] J. Bretagnolle and C. Huber, "Estimation des densites: risque minimax," Z. Wahrscheinlichkeitstheor. Verw. Geb., vol. 47, pp. 119-137, 1979.

[18] N.N. Cencov, Statistical Decision Rules and Optimal Inference, Amer. Math. Soc. Transl., vol. 53, Providence, RI, 1982.

[19] C.K. Chui, An Introduction to Wavelets, Academic Press, Inc., 1991.

[20] B. Clarke and A.R. Barron, "Jeffreys' prior is asymptotically least favorable under entropy risk," J. Statist. Planning and Inference, vol. 41, pp. 37-40, 1994.

[21] B. Clarke and A.R. Barron, “Information-theoretic asymptotics of Bayes methods,”

IEEE Trans. Info. Theory, vol. 36, pp. 453-471, 1990.

[22] G.F. Clements, "Entropy of several sets of real valued functions," Pacific J. Math., vol. 13, pp. 1085-1095, 1963.


[23] T.M. Cover and J.A. Thomas, Elements of Information Theory, John Wiley and Sons,

1991.

[24] H. Chernoff, "A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations," Ann. Math. Statist., vol. 23, pp. 493-507, 1952.

[25] P. Craven and G. Wahba, “Smoothing noisy data with spline functions: estimating the

correct degree of smoothing by the method of generalized cross-validation,” Numer.

Math., vol. 31, pp. 377-403, 1979.

[26] I. Csiszar and J. Korner, Information Theory: Coding Theorems for Discrete Memoryless Systems, Academic Press, 1981.

[27] L. Davisson, "Universal noiseless coding," IEEE Trans. Info. Theory, vol. 19, pp. 783-795, 1973.

[28] C. de Boor, A Practical Guide to Splines, Springer-Verlag, New York, 1978.

[29] C. de Boor and G.J. Fix, "Spline approximation by quasiinterpolants," J. Approx. Theory, vol. 8, pp. 19-45, 1973.

[30] L. Devroye, A Course in Density Estimation, Birkhäuser, Boston, 1987.

[31] D.L. Donoho and R.C. Liu, “Geometrizing rates of convergence, II,” Ann. Statistics,

vol. 19, pp. 633-667, 1991.

[32] D.L. Donoho, I.M. Johnstone, G. Kerkyacharian, and D. Picard, “Density estimation

by wavelet thresholding,” preprint, 1993.

[33] D.L. Donoho, I.M. Johnstone, G. Kerkyacharian, and D. Picard, "Wavelet shrinkage: Asymptopia?," J. R. Statist. Soc. B, vol. 57, pp. 301-369, 1995.

[34] S.Yu. Efroimovich and M.S. Pinsker, "Estimation of square-integrable probability density of a random variable," Problemy Peredachi Informatsii, vol. 18, pp. 19-38, 1982.

[35] S.Yu. Efroimovich, "Nonparametric estimation of a density of unknown smoothness," Theory Probab. Appl., vol. 30, pp. 557-568, 1985.


[36] W. Hardle, P. Hall, and S. Marron, "How far are automatically chosen regression smoothing parameters from their optimum," Mimeo Series #1589, Dept. Stat., Univ. of North Carolina, Chapel Hill, 1985.

[37] R.Z. Hasminskii, "A lower bound on the risks of nonparametric estimates of densities in the uniform metric," Theory Probab. Appl., vol. 23, pp. 794-796, 1978.

[38] R.Z. Hasminskii and I.A. Ibragimov, "On density estimation in the view of Kolmogorov's ideas in approximation theory," Ann. Statist., vol. 18, pp. 999-1010, 1990.

[39] D. Haughton, "Size of the error in the choice of a model to fit data from an exponential family," Sankhya: Indian J. Statist. Ser. A, vol. 51, pp. 45-58, 1989.

[40] D. Haussler and M. Opper, “General bounds on the mutual information between a

parameter and n conditionally independent observations,” preprint, 1995.

[41] I.A. Ibragimov and R.Z. Hasminskii, "Estimation of distribution density," Zap. Nauchn. Semin. LOMI, vol. 98, pp. 61-85, 1980.

[42] I.A. Ibragimov and R.Z. Hasminskii, "Bounds for the risks of non-parametric regression estimates," Theory Probab. Appl., vol. 27, pp. 84-99, 1982.

[43] A.N. Kolmogorov and V.M. Tihomirov, "ε-entropy and ε-capacity of sets in function spaces," Uspehi Mat. Nauk, vol. 14, pp. 3-86, 1959; English transl., Amer. Math. Soc. Transl., vol. 17, pp. 277-364, 1961.

[44] L.M. Le Cam, "Convergence of estimates under dimensionality restrictions," Ann. Statistics, vol. 1, pp. 38-53, 1973.

[45] K.C. Li, "Asymptotic optimality for C_p, C_L, cross-validation and generalized cross-validation: discrete index set," Ann. Statistics, vol. 15, no. 3, pp. 958-975, 1987.

[46] G.G. Lorentz, "Metric entropy and approximation," Bull. Amer. Math. Soc., vol. 72, pp. 903-937, 1966.

[47] D.S. Modha and E. Masry, “Rates of convergence in density estimation using neural

networks,” manuscript, 1994.


[48] A. Nemirovskii, "Nonparametric estimation of smooth regression functions," J. Comput. Syst. Sci., vol. 23, no. 6, pp. 1-11, 1986.

[49] D. Pollard, Convergence of Stochastic Processes, Springer-Verlag, New York, 1984.

[50] D. Pollard, “Hypercubes and minimax rates of convergence,” preprint, 1993.

[51] S. Portnoy, "Asymptotic behavior of likelihood methods for exponential families when the number of parameters tends to infinity," Ann. Statist., vol. 16, pp. 356-366, 1988.

[52] J. Rissanen, "Universal coding, information, prediction, and estimation," IEEE Trans. on Information Theory, vol. 30, pp. 629-636, 1984.

[53] J. Rissanen, "Stochastic complexity and modeling," Ann. Statist., vol. 14, pp. 1080-1100, 1986.

[54] J. Rissanen, T. Speed, and B. Yu, "Density estimation by stochastic complexity," IEEE Trans. Info. Theory, vol. 38, pp. 315-323, 1992.

[55] X. Shen and W.H. Wong, “Convergence rates of sieve estimates,” Ann. Statistics, vol.

22, No. 2, pp. 580-615, 1994.

[56] R. Shibata, "An optimal selection of regression variables," Biometrika, vol. 68, pp. 45-54, 1981.

[57] G. Schwarz, "Estimating the dimension of a model," Ann. Statistics, vol. 6, pp. 461-464, 1978.

[58] B.W. Silverman, Density Estimation for Statistics and Data Analysis, Chapman and

Hall, London, 1986.

[59] T.P. Speed and B. Yu, "Model selection and prediction: Normal regression," Ann. Inst. Stat. Math., vol. 45, pp. 35-54, 1993.

[60] C.J. Stone, “Optimal global rates of convergence for nonparametric regression,” Ann.

Statistics, vol. 10, No. 4, pp. 1040-1053, 1982.

[61] C.J. Stone, “The dimensionality reduction principle for generalized additive models,”

Ann. Statistics, vol. 14, No. 2, pp. 590-606, 1986.


[62] C.J. Stone, "Large-sample inference for log-spline models," Ann. Statistics, vol. 18, pp. 717-741, 1990.

[63] C.J. Stone, “The use of polynomial splines and their tensor products in multivariate

function estimation,” Ann. Statistics, vol. 22, No. 1, pp. 118-184, 1994.

[64] M. Talagrand, "Sharper bounds for Gaussian and empirical processes," Ann. Probability, vol. 22, pp. 28-76, 1994.

[65] S. van de Geer, “Estimating a regression function,” Ann. Statistics, vol. 18, No. 2, pp.

907-924, 1990.

[66] W.H. Wong and X. Shen, “Probability inequalities for likelihood ratios and convergence

rates of sieve MLEs,” Ann. Statistics, vol. 23, No. 2, pp. 339-362, 1995.

[67] Y. Yang, "Complexity-based model selection," prospectus submitted to Department of Statistics, Yale University, 1993.

[68] Y. Yang, "An asymptotic property of model selection criteria," in Proc. 1994 IEEE-IMS Workshop on Info. Theory and Stat., p. 103, Alexandria, Virginia, 1994.

[69] B. Yu, "Assouad, Fano, and Le Cam," to appear in Festschrift in honor of L. Le Cam on his 70th birthday, 1995.

[70] B. Yu, "Lower bounds on expected redundancy for non-parametric classes," IEEE Trans. Info. Theory, vol. 42, 1996.

[71] B. Yu and T. Speed, "Data compression and histograms," Probability Theory and Related Fields, vol. 92, pp. 195-229, 1992.
