
Distribution of Mass in Convex Bodies

Thesis submitted for the degree “Doctor of Philosophy” by

Ronen Eldan

Prepared under the supervision of Prof. Bo’az Klartag and Prof. Vitali Milman

Submitted to the senate of Tel-Aviv University, July 2012


Abstract

The main topic of this thesis is the distribution of mass of convex bodies in finite dimensional Euclidean space. We are interested in phenomena which occur in high dimensions, and mainly investigate the dependence of several geometric quantitative properties on the dimension. We consider the uniform measure on a convex body and establish connections between quantities such as covariance, entropy and spectral gap related to the measure.

The results of the first chapter revolve around the Central Limit Theorem (CLT) for convex sets, the Hyperplane conjecture, the Thin-Shell conjecture and the conjecture by Kannan-Lovasz-Simonovits (KLS) for convex bodies, all related to volumetric properties of convex bodies. We prove a pointwise version of the CLT for convex bodies, and we establish some quantitative connections between the quantities related to the three conjectures. In particular, we show that a positive answer to the Thin-Shell conjecture would imply a positive answer to the hyperplane conjecture, and would also imply a positive answer to the KLS conjecture, up to a logarithmic factor. Our proofs rely on the construction of a Riemannian manifold related to a convex body and of a certain stochastic localization scheme for convex bodies.

In the second chapter we prove estimates related to the stability of the Brunn-Minkowski inequality. Namely, for two convex bodies K, T ⊂ R^n of unit volume, we prove that the distance between the covariance matrices of the two bodies and the Wasserstein distance between their respective uniform distributions can be bounded by some function of the dimension and of the deficit, Vol((K + T)/2) − 1.

Unlike existing results, we prove estimates that become better as the dimension grows, and which already make sense when the deficit is rather large. In particular, our results already make sense in the case that Vol((K + T)/2) ≤ 10 Vol(K) Vol(T). Furthermore, we establish connections between such stability estimates and some of the quantities related to the conjectures that appear in the first chapter.

The topic of the third chapter is information-theoretic bounds for the number of samples required in order to estimate certain properties of a convex body, when sampling independent random points according to its uniform distribution. We show that in order to estimate the volume of a convex body, a super-polynomial number of points is required. We also show that in order to estimate a single entry of the inverse covariance matrix of a high-dimensional distribution, a sample size proportional to the dimension is needed.

In the fourth chapter we consider the convex hull of a high-dimensional random walk. We establish asymptotics, with respect to the dimension, for the probability of some events related to it. In particular, we study the dependence on the dimension of the number of steps needed in order for the starting point to be contained in the interior of the convex hull.

In the last chapter, we prove that in the hyperbolic setting, if A is a 1-extension of some set, then its volume is equivalent, up to some universal multiplicative constant, to the volume of its convex hull, thus demonstrating how very basic principles from the theory of high dimensional convex bodies in Euclidean space fail to hold when hyperbolic geometry is considered.


Acknowledgements

I cannot overstate my gratitude to my PhD advisors, Vitali Milman and Bo’az Klartag, who did a fabulous job guiding me through my studies. Vitali Milman introduced me to the fascinating world of asymptotic geometric analysis when I was near the end of my undergraduate studies. He believed in me and guided me onwards into my PhD, giving me confidence to explore unknown territories and teaching me how to do so. Through countless inspiring conversations, I learnt from Vitali how to approach a specific problem as well as how to consider a problem from a wider point of view, seeking connections to other related problems and fields in mathematics, in order to gain a deeper understanding of more general phenomena. I am grateful for Vitali’s constant dedication and fatherly concern, and am also thankful for Vitali’s persistence in teaching me how to write and how to lecture, and for his dedicated guidance in the writing of this thesis and of the other papers I have written.

Bo’az Klartag has all the qualities one could hope for in an advisor: his remarkable sharpness, deep understanding and wide mathematical perspective are inspiring. I was overwhelmed by Bo’az’s attentiveness, patience and constant concern, care and interest in what I was doing, and by his great willingness to support, patiently explain, share ideas, answer questions and thoroughly go over and comment on everything I write. Through countless discussions with Bo’az, I have not only gained the vast part of what I know about many topics related to this work; he was also always able to give me a perspective on the general scope of the problem I was working on, and provided me with the necessary tools to help me realize where to start looking for a solution. I am grateful for the chance to work with Bo’az. Among the numerous things I have learnt from working with him, there are two that I especially hope I will be able to preserve: first, that the sky is the limit when it comes to learning new methods and fields, rather than narrowing oneself down to familiar techniques; second, that when viewed correctly, almost any mathematical proof, no matter how involved, can become relatively simple to grasp.

I would also like to express my thanks to some of the faculty members whom I could always approach with questions and problems, and who gave me a vast amount of knowledge and a great diversity of perspectives, specifically Boris Tsirelson, Efim Gluskin, Shiri Artstein and Semyon Alesker. I am grateful to my friend and mentor, Itai Benjamini, for many enlightening professional discussions, for his inspiration and for his company. I am also immensely thankful to Dr. Esther Levenson for giving an extremely useful course on academic writing, in which I participated.

Finally, I owe special thanks to Apostolos Giannopoulos and his family for their wonderful hospitality during my three-month position in Athens, close to the beginning of my PhD studies. I could never have expected such intense and dedicated efforts to teach me and guide me through research as the ones I experienced in the short period when I was working with Apostolos, who gave me a place in his own office and literally guided me step by step, explaining, answering and teaching me how to approach a problem. But maybe even more than that, I am grateful for Apostolos’ and his family’s efforts, care and concern during the time when I was hospitalized in Athens.


Contents

0.1 Introduction

1 Distribution of mass in Convex bodies
  1.1 Introduction
    1.1.1 The slicing problem
    1.1.2 The thin-shell conjecture
    1.1.3 The central limit theorem for convex bodies
    1.1.4 The isoperimetric inequality for isotropic convex bodies: the KLS conjecture
  1.2 A pointwise CLT for convex bodies
    1.2.1 Convolved marginals are Gaussian
    1.2.2 Deconvolving the Gaussian
    1.2.3 Proof of main theorem
  1.3 The thin-shell conjecture and the hyperplane conjecture
    1.3.1 A Riemannian metric associated with a convex body
    1.3.2 Inequalities
  1.4 Thin shell and KLS
    1.4.1 A stochastic localization scheme
    1.4.2 Analysis of the matrix At
    1.4.3 Thin shell implies spectral gap
    1.4.4 Loose ends

2 Stability of the Brunn-Minkowski inequality
  2.1 Introduction
  2.2 Stability of the covariance matrix: the unconditional case
    2.2.1 Background on log-concave densities on the line
    2.2.2 Transportation in one dimension
    2.2.3 Unconditional Convex Bodies
    2.2.4 Obtaining a thin-shell estimate
  2.3 Stability estimates for the general case
    2.3.1 Stability via CLT for convex sets
    2.3.2 Stability via a transportation argument
    2.3.3 Obtaining a stability estimate using a stochastic localization scheme

3 Complexity results
  3.1 Nonexistence of a volume estimation algorithm
    3.1.1 The Deletion Process
    3.1.2 Building the two profiles
    3.1.3 Tying up Loose Ends
  3.2 Estimation of the inverse covariance matrix
    3.2.1 Proof of the theorem

4 High dimensional random walks
  4.1 Results
  4.2 The Lower Bound
  4.3 The Upper Bound
  4.4 The Discrete Setting
  4.5 Spherical covering times
  4.6 Remarks and Further Questions
    4.6.1 Probability for intermediate points in the walk to be extremal
    4.6.2 Covering times and comparison to independent origin-symmetric random points
    4.6.3 A random walk that does not start from the origin
    4.6.4 Possible Further Research

5 Convex hulls in the Hyperbolic space
  5.1 Results
  5.2 The volume of the convex hull of N points is sublinear
  5.3 The convex hull of a set whose boundary has bounded curvature


0.1 Introduction

High dimensional problems of geometric nature appear in various branches of mathematics, mathematical physics and theoretical computer science, and have been extensively studied in the last few decades. A better understanding of such objects may lead to important applications, as demonstrated by numerous results that appeared in the last years. Some of these results have already been applied in a variety of subjects such as statistical mechanics, signal processing, computer vision, tomography and machine learning. The general subject of this thesis is the theory behind some of these high dimensional objects.

A well-known phenomenon in high dimensional systems is the exponential growth of information and complexity with respect to the dimension. For example, the number of starting moves in a 10-dimensional Tic-Tac-Toe game is around 60,000, and the number of states of a spin-system with 100 particles is 2^100 (a number with over 30 digits in decimal representation). This phenomenon is sometimes referred to as “the curse of dimensionality”, since, in many cases, analyzing a high dimensional system can be a very complicated, at times impossible, task. However, as recent research suggests, in many cases the contrary is actually true. There is a rapidly growing theory demonstrating that a high dimension can in fact be a blessing rather than a curse. When viewed correctly, some high dimensional objects appear to have more order and simplicity than low-dimensional ones. Two examples which illustrate this phenomenon are Dvoretzky’s theorem and the Central Limit Theorem for Convex Bodies. Both theorems show that, in some sense, typical projections of a high dimensional convex body have a “normal” or “common” behavior: the first one states that any high-dimensional convex body has nearly-Euclidean sections of a high dimension, while according to the second, any high-dimensional convex body has approximately Gaussian marginals. There are a number of principles, such as the Levy-Milman concentration of measure, which seem to compensate for the diversity. Convexity is one of the ways to take advantage of those principles in order to prove easy-to-formulate, nontrivial theorems. The focus of this thesis is on the role of convexity in the high dimensional setting.
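The counts in the example above are straightforward to verify (a trivial check, not part of the thesis itself):

```python
# Opening moves in 10-dimensional Tic-Tac-Toe: one per cell of a 3^10 board.
print(3 ** 10)             # 59049, i.e. around 60,000
# States of a spin system with 100 particles: 2^100.
print(len(str(2 ** 100)))  # 31 decimal digits, i.e. over 30 digits
```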

Our starting point is Dvoretzky’s theorem, which states the following: there exists a function k(n, ε) which tends to infinity as n → ∞, such that for any convex body K ⊂ R^n and ε > 0, there exists a subspace E ⊆ R^n of dimension at least k(n, ε) such that the section K ∩ E is (1 + ε)-isometric to a Euclidean ball. Now let us consider something a bit different: again, let E ⊆ R^n be a subspace, and denote by X a random vector uniformly distributed over K. Suppose that K is isotropic, that is, Vol(K) = 1, E(X) = 0 and E⟨X, θ⟩² is constant in θ ∈ S^{n−1}. We consider the marginal of K on the subspace E, that is, the


distribution of Proj_E X. The Central Limit Theorem for Convex Bodies states that if the dimension of E is small enough, then in some sense this random vector approximates a Gaussian random vector, with high probability over the choice of E. This result was first proved, in its full form, by Klartag [K1, K2]. Volume marginals of convex bodies were already considered by Gromov (1987); see also Anttila, Ball and Perissinaki [ABP] and Brehm and Voigt [BV], where the central limit theorem was proved in some special cases.
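The qualitative content of this theorem is easy to observe numerically. The sketch below (with the cube standing in for a generic convex body, and the dimension and sample size chosen arbitrarily) draws points uniformly from the isotropic cube [−√3, √3]^n and compares a one-dimensional marginal in a random direction with the standard gaussian distribution:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n, m = 400, 50_000
# Uniform samples from the isotropic cube [-sqrt(3), sqrt(3)]^n
# (barycenter 0, identity covariance).
X = rng.uniform(-sqrt(3), sqrt(3), size=(m, n))
theta = rng.standard_normal(n)
theta /= np.linalg.norm(theta)        # a "typical" direction on the sphere
marginal = X @ theta                  # samples of <X, theta>

# Empirical CDF of the marginal vs. the standard gaussian CDF.
for t in (-1.0, 0.0, 1.0):
    empirical = (marginal <= t).mean()
    gaussian = 0.5 * (1 + erf(t / sqrt(2)))
    print(f"t={t:+.0f}  empirical={empirical:.3f}  gaussian={gaussian:.3f}")
```

The two columns agree to within a few thousandths, which is the combined effect of the gaussian approximation error and the Monte Carlo sampling error.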

Below, we attain a certain pointwise version of the central limit theorem for convex sets. Namely, we show that there exist universal constants c1, c2, c3, such that for every isotropic convex body K, for a random choice of a subspace E of dimension n^{c1} uniformly chosen from the corresponding Grassmannian, with probability close to 1,

|Proj_E(K)(x) / γ(x) − 1| ≤ C / n^{c2}    (1)

for all x ∈ E with |x| ≤ n^{c3}, where Proj_E(K) is the density of the marginal of the uniform random vector on K and γ is the standard gaussian density. For a more precise formulation see Theorem 1.1.5 below.

The idea of considering marginals of convex bodies and volume-related properties leads us to several other questions, all related to the way in which the volume of a convex body is distributed. One remarkable example of a volume-related property of convex bodies is concentration of mass. A specific type of concentration of mass which is highly related to the central limit theorem for convex bodies is the fact that for an isotropic convex body, almost all of the mass is concentrated at more or less the same distance from the origin. Namely, there exists a sequence σ_n such that for every isotropic random vector X uniformly distributed in a convex body K ⊂ R^n, one has

P( | |X| − √n | > σ_n ) < 1/2    (2)

with σ_n/√n → 0 as n → ∞. The first non-trivial bound for σ_n was given by Klartag in [K1], and several improvements have been introduced since (see section 2.1.2 for a more detailed overview). The question about the maximal thickness of the thin shell remains open:

Question 0.1.1 Can σ_n be taken to be smaller than some constant C > 0 independent of the dimension?

The above question is known as the thin-shell conjecture.
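The concentration in (2) is already visible in a simple simulation (a sketch with the isotropic cube as the convex body; the dimension and sample size are arbitrary):

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(0)
n, m = 1000, 2000
# Uniform on [-sqrt(3), sqrt(3)]^n: an isotropic convex body, so E|X|^2 = n.
X = rng.uniform(-sqrt(3), sqrt(3), size=(m, n))
radii = np.linalg.norm(X, axis=1)

print(f"sqrt(n)        = {sqrt(n):.2f}")
print(f"mean of |X|    = {radii.mean():.2f}")
print(f"std dev of |X| = {radii.std():.2f}")   # a thin shell: std << sqrt(n)
```

For this body the standard deviation of |X| stays bounded as n grows, in accordance with the thin-shell conjecture (which is known for the cube, a body with symmetries to coordinate reflections).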

Two more well-known conjectures related to the way mass is distributed over a convex body are the hyperplane conjecture and the conjecture by Kannan, Lovasz and Simonovits (or the KLS conjecture). Given an isotropic random vector uniformly distributed in a convex body K ⊂ R^n, the hyperplane conjecture suggests that its density at the origin is smaller than C^n for some universal constant C > 0, and the KLS conjecture suggests that the second eigenvalue of the corresponding Neumann Laplacian is larger


than some universal constant. In the first chapter of this thesis, we establish quantitative connections between the thin-shell conjecture and these two conjectures. In particular, we show that a positive answer to the thin-shell conjecture would imply a positive answer to the hyperplane conjecture, as well as a positive answer, up to a logarithmic factor, to the KLS conjecture. Moreover, we show that any quantitative bound for the constant σ_n implies a respective bound for the quantities relating to these two conjectures, and in particular every non-trivial bound for σ_n implies a non-trivial bound in the isoperimetric inequality for convex bodies.

In chapter 2, we introduce some results related to the stability of the Brunn-Minkowski inequality. In one of its forms, this inequality states that for two convex bodies K, T of volume 1, one has

Vol((K + T)/2) ≥ 1,

and equality is attained if and only if T is a translation of K. A stability result for this inequality deals with the case in which there is almost an equality in the above equation. In this case, it is reasonable to expect that K and T are approximately similar in some sense, or in other words, close to each other with respect to a certain metric. Some examples of possible metrics are the Hausdorff distance, the Wasserstein distance and the volume of the symmetric difference between the bodies.

Unlike previous results, we approach the topic from a high-dimensional point of view, and try to attain estimates that have a correct dependence on the dimension. Our results demonstrate how, in some cases, the estimates may actually become better as the dimension grows. The techniques and ideas we use come mostly from the theory of high-dimensional convex bodies, and many are related to concentration of mass results.

The third chapter is dedicated to attaining certain information-theoretical bounds related to the uniform measure on convex bodies. Considering sequences of points sampled from a random vector distributed according to the uniform measure on a certain convex body, which is assumed to be unknown to us, one may ask what is the minimal number of samples needed in order to estimate certain quantities related to that body. Two examples of such quantities are the volume of the body and its covariance matrix. We attain lower bounds for the number of samples related to these two quantities: in the first section we show that in order to estimate the volume of a convex body, one needs a number of samples which is super-polynomial in the dimension. In the second section we show that in order to reconstruct a single entry of the inverse covariance matrix, one needs a number of samples proportional to the dimension. These two bounds imply the non-existence of certain algorithms, the question about whose existence has been raised by statisticians and computer scientists.
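A toy computation hints at why a dimension-order sample size is the natural threshold for the inverse covariance matrix (a sketch, not the argument of the chapter): with fewer samples than dimensions, the empirical covariance matrix is automatically singular and cannot be inverted at all.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 50, 30                      # dimension n, sample size m < n
X = rng.standard_normal((m, n))    # m independent samples of an isotropic vector
Y = X - X.mean(axis=0)             # center at the empirical barycenter
emp_cov = Y.T @ Y / m
# The centered samples span at most an (m-1)-dimensional subspace,
# so the empirical covariance has rank m - 1 < n and is not invertible.
print(np.linalg.matrix_rank(emp_cov))   # 29
```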


Both results of the third chapter also serve to demonstrate a certain qualitative phenomenon of the uniform distribution on high dimensional convex bodies: when normalized correctly, if a random rotation is applied to the convex body before generating the random point, it is hard to distinguish between the distribution and a corresponding spherically-symmetric distribution. In other words, the distribution of a sequence of random points from a randomly-rotated convex body is in some sense close to a certain rotationally invariant distribution. This qualitative phenomenon is, in fact, the main idea behind the two results of the chapter.

The subject of the fourth chapter is the high-dimensional random walk. Its main point is to derive asymptotics for the probability that the origin is an extremal point of a random walk as the dimension goes to infinity. We show that in order for this probability to be roughly 1/2, the number of steps of the random walk should be between e^{cn/log n} and e^{Cn log n}. As a result, we attain a bound for the π/2-covering time of a Brownian motion on the sphere. The proofs in this chapter can be regarded as examples of how methods related to the distribution of mass in convex bodies can be applied in order to derive estimates related to high dimensional random walks.
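For comparison, the independent origin-symmetric case (touched upon in section 4.6.2) can be checked by hand in the plane. By Wendel's theorem, the convex hull of N = 4 independent rotation-invariant points in R² contains the origin with probability exactly 1/2; the sketch below (with trial counts chosen for speed) verifies this via a simple angular-gap test:

```python
import numpy as np

rng = np.random.default_rng(3)

def origin_in_hull_2d(points):
    """In the plane, 0 lies in the interior of the convex hull of the given
    points iff their angular directions leave no gap of size >= pi."""
    angles = np.sort(np.arctan2(points[:, 1], points[:, 0]))
    gaps = np.diff(angles, append=angles[0] + 2 * np.pi)
    return gaps.max() < np.pi

# Wendel's theorem: P(0 in conv hull of 4 symmetric points in R^2)
#                 = 1 - 2^{-3} * (C(3,0) + C(3,1)) = 1/2.
trials = 20_000
hits = sum(origin_in_hull_2d(rng.standard_normal((4, 2))) for _ in range(trials))
print(hits / trials)   # close to 0.5
```

The steps of a random walk are of course far from independent, which is precisely why the thresholds in this chapter are exponential in the dimension rather than linear.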

The last chapter of this thesis demonstrates how certain elementary principles of high dimensional Euclidean convex geometry fail to hold true when a different geometry is considered. Looking at the hyperbolic space, we show that in every dimension, the volume of every polytope is sub-linear with respect to its number of vertices. As a consequence, we show that given a set which is the 1-extension of some other set, the growth of volume upon taking the convex hull is at most by a constant factor. These two facts may be counter-intuitive for convex geometers who are used to the Euclidean setting: in Euclidean space, even if the vertices of a polytope are restricted to a prescribed ball, the growth of volume with respect to the number of vertices is far from linear, and the volume of the convex hull of a set may be much larger than the volume of the original set even if the boundary is highly regular.


Chapter 1

Distribution of mass in Convex bodies

1.1 Introduction

The opening point of this chapter is two well-known conjectures in convex geometry: the hyperplane conjecture and the thin-shell conjecture. The former may be formulated as follows:

1.1.1 The slicing problem

Question 1.1.1 Is there a universal constant c > 0 such that for any dimension n and any convex body K ⊂ R^n with Vol_n(K) = 1, there exists a hyperplane H ⊂ R^n for which Vol_{n−1}(K ∩ H) > c?

Here, Vol_k stands for k-dimensional volume. A convex body is a compact convex set. The conjecture stating that the answer to this question is positive was first proposed by Bourgain [Bou1]. In order to formulate the latter conjecture, and to better understand both conjectures, we begin with some notation.

A probability density ρ : R^n → [0, ∞) is called log-concave if it takes the form ρ = exp(−H) for a convex function H : R^n → R ∪ {∞}. A probability measure is log-concave if it has a log-concave density. The uniform probability measure on a convex body is an example of a log-concave probability measure, as is, say, the gaussian measure in R^n. A log-concave probability density decays exponentially at infinity (e.g., [K9, Lemma 2.1]), and thus has moments of all orders. For a probability measure µ on R^n with finite second moments, we consider its barycenter b(µ) ∈ R^n and covariance matrix Cov(µ) defined by

b(µ) = ∫_{R^n} x dµ(x),    Cov(µ) = ∫_{R^n} (x − b(µ)) ⊗ (x − b(µ)) dµ(x)

where for x ∈ R^n we write x ⊗ x for the n × n matrix (x_i x_j)_{i,j=1,...,n}. A log-concave probability measure µ on R^n is isotropic if its barycenter lies at the origin and its covariance matrix is the identity matrix. For an isotropic, log-concave probability measure µ on R^n we denote

L_µ = L_f = f(0)^{1/n}

where f is the log-concave density of µ. It is well-known (see, e.g., [K9, Lemma 3.1]) that L_f > c, for some universal constant c > 0. Define

L_n = sup_µ L_µ


where the supremum runs over all isotropic, log-concave probability measures µ on R^n. As follows from the works of Ball [Ba1], Bourgain [Bou1], Fradelizi [Fra], Hensley [Hen] and Milman and Pajor [MP], Question 1.1.1 is directly equivalent to the following:

Question 1.1.2 Is it true that L_n ≤ C, for a universal constant C > 0?

See also Milman and Pajor [MP] and [K8] for a survey of results revolving around this question. For a convex body K ⊂ R^n we write µ_K for the uniform probability measure on K. A convex body K ⊂ R^n is centrally-symmetric if K = −K. It is well-known that

L_n ≤ C sup_{K⊂R^n} L_{µ_K}    (1.1)

where the supremum runs over all centrally-symmetric convex bodies K ⊂ R^n for which µ_K is isotropic. Indeed, the reduction from log-concave distributions to convex bodies was proven by Ball [Ba1] (see [K8] for the easy generalization to the non-symmetric case), and the reduction from general convex bodies to centrally-symmetric ones was outlined, e.g., in the last paragraph of [K7]. The best estimate known to date is L_n < Cn^{1/4} for a universal constant C > 0 [K8], which slightly sharpens a bound of Bourgain [Bou3].
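The isotropic normalization used throughout is also easy to realize numerically. The following sketch (empirical and sample-based; the function name is ours) maps a cloud of sample points to isotropic position by centering and whitening with the empirical covariance matrix:

```python
import numpy as np

def isotropic_position(X):
    """Affinely map the samples X (shape m x n) so that the empirical
    barycenter is 0 and the empirical covariance is the identity matrix."""
    Y = X - X.mean(axis=0)
    cov = Y.T @ Y / len(Y)
    w, V = np.linalg.eigh(cov)           # cov = V diag(w) V^T
    return Y @ (V / np.sqrt(w)) @ V.T    # multiply by cov^{-1/2}

rng = np.random.default_rng(2)
# Samples from a skewed distribution: a linear image of the uniform cube.
X = rng.uniform(-1.0, 1.0, size=(20_000, 5)) @ rng.standard_normal((5, 5))
Y = isotropic_position(X)
print(np.allclose(Y.mean(axis=0), 0.0))          # True
print(np.allclose(Y.T @ Y / len(Y), np.eye(5)))  # True
```

Since any invertible affine map of a convex body is again a convex body, this normalization loses no generality in volumetric questions such as Question 1.1.2.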

1.1.2 The thin-shell conjecture

Next, we establish some more notation in order to formulate the thin-shell conjecture. We write | · | for the standard Euclidean norm in R^n, and x · y is the standard scalar product of x, y ∈ R^n. We say that a random vector X in R^n is isotropic and log-concave if it is distributed according to an isotropic, log-concave probability measure. Let σ_n ≥ 0 satisfy

σ_n² = sup_X E(|X| − √n)²    (1.2)

where the supremum runs over all isotropic, log-concave random vectors X in R^n. Thus, the parameter σ_n measures the width of the “thin spherical shell” of radius √n in which most of the mass of X is located. See (1.85) below for another definition of σ_n, equivalent up to a universal constant, which is perhaps more common in the literature. It is known that σ_n ≤ Cn^{1/3} (see [Gu-M]), where C > 0 is a universal constant.

The following question is known as the thin-shell conjecture.

Question 1.1.3 Is it true that

σ_n ≤ C    (1.3)

for a universal constant C > 0?

This question was initially asked by Anttila-Ball-Perissinaki [ABP] and Brehm-Voigt [BV], where positive answers were provided for some specific families of convex bodies. The first nontrivial bound for σ_n which holds in the general case was given by Klartag in [K1], who showed that σ_n ≤ Cn^{1/2}/log n. Several improvements have been introduced around the same method, see e.g. [K4] and [Fl1]. The best


known bound for σ_n at the time this thesis was written is due to Guedon and E. Milman in [Gu-M], extending previous works of Klartag, Fleury and Paouris, who show that σ_n ≤ Cn^{1/3}. The thin-shell conjecture was shown to be true for several specific classes of convex bodies, such as bodies with a symmetry with respect to coordinate reflections (Klartag, [K3]) and certain random bodies (Fleury, [Fl2]). In chapter 2 of this thesis we give an alternative proof of the conjecture for the case of bodies with symmetries with respect to coordinate reflections, up to logarithmic factors.

In section 1.3 we establish links between the constants corresponding to Questions 1.1.2 and 1.1.3. Namely, we show:

Theorem 1.1.4 There exists a universal constant C > 0 such that

L_n < C σ_n    for all n.

Theorem 1.1.4 states, in particular, that an affirmative answer to the slicing problem follows from the thin-shell conjecture (1.3). This sharpens a result announced by Ball [Ba2], according to which a positive answer to the slicing problem is implied by the much stronger conjecture suggested by Kannan, Lovasz and Simonovits [KLS], described below.

1.1.3 The central limit theorem for convex bodies

The parameter σ_n plays an important role in the so-called central limit theorem for convex bodies [K1]. This theorem asserts that most of the one-dimensional marginals of an isotropic, log-concave random vector are approximately gaussian. According to a theorem of Sudakov, the Kolmogorov distance to the standard gaussian distribution of a typical marginal has roughly the order of magnitude of σ_n/√n (see [K1] for details). Therefore the conjectured bound (1.3) actually concerns the quality of the gaussian approximation to the marginals of high-dimensional log-concave measures.

In the next section we will establish a pointwise version of the central limit theorem for convex bodies.

The Grassmann manifold G_{n,ℓ} of all ℓ-dimensional subspaces of R^n carries a unique rotationally-invariant probability measure µ_{n,ℓ}. Whenever we say that E is a random ℓ-dimensional subspace in R^n, we relate to the above probability measure µ_{n,ℓ}. Our estimate reads as follows:

Theorem 1.1.5 Let X be an isotropic random vector in R^n with a log-concave density. Let 1 ≤ ℓ ≤ n^{c1} be an integer. Then there exists a subset E ⊆ G_{n,ℓ} with µ_{n,ℓ}(E) ≥ 1 − C exp(−n^{c2}) such that for any E ∈ E, the following holds: Denote by f_E the density of the random vector Proj_E(X). Then,

|f_E(x) / γ(x) − 1| ≤ C / n^{c3}    (1.4)

for all x ∈ E with |x| ≤ n^{c4}. Here, γ(x) = (2π)^{−ℓ/2} exp(−|x|²/2) is the standard gaussian density in E, and C, c1, c2, c3, c4 > 0 are universal constants.


This result is a follow-up to the work of Klartag [K2] and provides stronger estimates. Its proof is found in section 1.2.

1.1.4 The isoperimetric inequality for isotropic convex bodies: the KLS conjecture

The next topic of this chapter relates to a conjecture by Kannan, Lovasz, and Simonovits (in short, the KLS conjecture) about the isoperimetric inequality for convex bodies in R^n. Roughly speaking, the KLS conjecture asserts that, up to a constant, the best way to cut a convex body into two parts is with a hyperplane. To be more precise, given a convex body K whose barycenter is at the origin, and a subset T ⊂ K with Vol_n(T) = R · Vol_n(K), the KLS conjecture states that

Vol_{n−1}(∂T ∩ Int(K)) ≥ (R/C) inf_{θ∈S^{n−1}} Vol_{n−1}(K ∩ θ⊥)    (1.5)

for some universal constant C > 0, whenever R ≤ 1/2.

We will show that, up to a logarithmic correction, this conjecture may be reduced to the case where T is an ellipsoid.

In order to give a more precise formulation of the KLS conjecture, we introduce some more notation. Given a measure µ, the Minkowski boundary measure of a Borel set A ⊂ R^n is defined by

µ+(A) = lim inf_{ε→0} (µ(A_ε) − µ(A)) / ε

where

A_ε := {x ∈ R^n ; ∃y ∈ A, |x − y| ≤ ε}

is the ε-extension of A. Define

G_n^{−1} := inf_µ inf_{A⊂R^n} µ+(A)/µ(A)    (1.6)

where µ runs over all isotropic log-concave measures in R^n and A ⊂ R^n runs over all Borel sets with µ(A) ≤ 1/2.

The constant $G_n$ is known as the optimal inverse Cheeger constant. $G_n^{-2}$ is also equivalent, up to a universal constant, to the optimal spectral gap constant of isotropic log-concave measures in $\mathbb{R}^n$. For an extensive review of this constant and equivalent formulations, see [Mil1]. One property of $G_n$ of particular importance for us is
$$\frac{1}{C} G_n^2 \le \sup_{\mu} \sup_{\varphi} \frac{\int \varphi^2 \, d\mu}{\int |\nabla \varphi|^2 \, d\mu} \le C G_n^2 \tag{1.7}$$
where $\mu$ runs over all isotropic log-concave measures, $\varphi$ runs over all smooth enough functions with $\int \varphi \, d\mu = 0$, and $C > 0$ is some universal constant.
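For a concrete one-dimensional instance of the Rayleigh-type ratio in (1.7): the uniform measure on $[-\sqrt3, \sqrt3]$ is isotropic and log-concave, and for it the optimal ratio $\int \varphi^2 d\mu / \int |\varphi'|^2 d\mu$ over mean-zero $\varphi$ equals $12/\pi^2$, attained by $\varphi(x) = \sin(\pi x / (2\sqrt3))$, the first nontrivial Neumann eigenfunction. A minimal quadrature check, as a hedged sketch:

```python
import math

a = math.sqrt(3.0)       # uniform on [-a, a] has variance a^2 / 3 = 1 (isotropic)
k = math.pi / (2.0 * a)  # phi(x) = sin(k x) has phi'(+-a) = 0 (Neumann condition)

# Midpoint-rule quadrature on [-a, a].
M = 100000
dx = 2.0 * a / M
xs = [-a + (i + 0.5) * dx for i in range(M)]
phi = [math.sin(k * x) for x in xs]
dphi = [k * math.cos(k * x) for x in xs]

mean = sum(phi) * dx / (2.0 * a)                    # mean-zero test function
ratio = sum(p * p for p in phi) / sum(d * d for d in dphi)

print(mean, ratio, 12.0 / math.pi ** 2)
```

The computed ratio matches $12/\pi^2 \approx 1.216$; by (1.7), the content of the KLS conjecture is that such ratios stay bounded by a universal constant over all isotropic log-concave measures in every dimension.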

In [KLS], it is conjectured that,


Conjecture 1.1.6 There exists a universal constant C such that Gn < C for all n ∈ N.

Recall the thin-shell conjecture, Question 1.1.3. An application of (1.7) to the function $\varphi(x) = |x|^2$ shows that the thin-shell conjecture is implied by the KLS conjecture. We show that, to some extent, the inverse implication is also true. Before we can give a precise formulation of this fact, we will need a few more definitions. Write
$$\kappa = \liminf_{n \to \infty} \frac{\log \sigma_n}{\log n}, \qquad \tau_n = \max_{1 \le j \le n} \frac{\sigma_j}{j^\kappa}, \tag{1.8}$$
so that $\sigma_n \le \tau_n n^\kappa$. Note that the thin-shell conjecture implies $\kappa = 0$ and $\tau_n < C$.

We will prove the following:

Theorem 1.1.7 There exists a constant $C > 0$ such that
$$G_n \le C \tau_n \sqrt{\log n}\, \max\left(n^\kappa, \sqrt{\log n}\right).$$
Under the thin-shell conjecture, the theorem gives $G_n < C \log n$.

Remark 1.1.8 Plugging the above result into the best known bound for $\sigma_n$ (proven in [Gu-M]), it follows that
$$G_n \le C n^{1/3} \sqrt{\log n}.$$
This slightly improves the previous bound, $G_n \le C n^{5/12}$, which is a corollary of [Gu-M] and [Bob4].

Remark 1.1.9 Compare this result with the result in [Bob4]. Bobkov's theorem states that for any log-concave random vector $X$ and any smooth function $\varphi$, one has
$$\frac{Var[\varphi(X)]}{\mathbb{E}\left[|\nabla \varphi(X)|^2\right]} \le C\, \mathbb{E}[|X|] \sqrt{Var[|X|]}.$$
Under the thin-shell hypothesis, Bobkov's theorem gives $G_n \le C n^{1/4}$.

The bound in Theorem 1.1.7 will rely on the following intermediate constant, which corresponds to a slightly stronger thin-shell bound. Define
$$K_n^2 := \sup_X \sup_{\theta \in S^{n-1}} \sum_{i,j=1}^n \mathbb{E}\left[X_i X_j \langle X, \theta \rangle\right]^2,$$
where the supremum runs over all isotropic log-concave random vectors $X$ in $\mathbb{R}^n$. Obviously, an equivalent definition of $K_n$ is
$$K_n := \sup_{\mu} \left\| \int_{\mathbb{R}^n} x_1\, (x \otimes x)\, d\mu(x) \right\|_{HS}$$
where the supremum runs over all isotropic log-concave measures in $\mathbb{R}^n$. Here, $\| \cdot \|_{HS}$ stands for the Hilbert-Schmidt norm of a matrix.

There is a simple relation between Kn and σn, namely,


Lemma 1.1.10 There exists a constant $C > 0$ such that
$$K_n \le C \tau_n \max\left(n^\kappa, \sqrt{\log n}\right).$$

Theorem 1.1.7 will be a consequence of the above lemma along with:

Proposition 1.1.11 There exists a constant $C > 0$ such that
$$G_n \le C K_n \sqrt{\log n}.$$

Remark 1.1.12 It can be shown that the constant $K_n$ satisfies the following bound:
$$K_n^{-1} \ge \inf_{\mu} \inf_{E \subset \mathbb{R}^n} \frac{\mu^+(E)}{\mu(E)}$$
where $\mu$ runs over all isotropic log-concave measures in $\mathbb{R}^n$ and $E$ runs over all ellipsoids with $\mu(E) = \frac12$. This shows that, up to the extra factor $\sqrt{\log n}$, in order to control the minimal possible surface area among all possible subsets of measure $\frac12$ on the class of isotropic log-concave measures, it is enough to control the surface area of ellipsoids. See section 2.5.5 for details.

The proofs of lemma 1.1.10 and proposition 1.1.11 are found in section 1.4.

1.2 A pointwise version of the central limit theorem for convex bodies

The goal of this section is to prove Theorem 1.1.5.

The basic idea of the proof of the theorem is the following: It is shown in [K2], using concentration techniques, that the density of $Proj_E(X + Y)$ is pointwise approximately radial, where $Y$ is an independent small gaussian random vector. It is furthermore proved that this density is concentrated in a thin spherical shell. We combine these facts to deduce, in Section 1.2.1, that the density of $Proj_E(X + Y)$ is not only approximately radial, but in fact very close to the gaussian density in $E$. Then, in Section 1.2.2, we show that the addition of the gaussian random vector $Y$ is not required. That is, we prove that when a log-concave density convolved with a small gaussian is almost gaussian, then the original density is also approximately gaussian. This completes the sketch of the proof.

This section is based on a joint work with Bo’az Klartag.

1.2.1 Convolved marginals are Gaussian

For a dimension $n$ and $v > 0$ we write
$$\gamma_n[v](x) = \frac{1}{(2\pi v)^{n/2}} \exp\left( -\frac{|x|^2}{2v} \right) \qquad (x \in \mathbb{R}^n). \tag{1.9}$$


That is, $\gamma_n[v]$ is the density of a gaussian random vector in $\mathbb{R}^n$ with mean zero and covariance matrix $v\, Id$. Let $X$ be an isotropic random vector with a log-concave density in $\mathbb{R}^n$, and let $Y$ be an independent gaussian random vector in $\mathbb{R}^n$ whose density is $\gamma_n[n^{-\alpha}]$, for a parameter $\alpha$ to be specified later on. Denote by $f_{X+Y}$ the density of the random vector $X + Y$. Our first step is to show that the density of the projection of $X + Y$ onto a typical subspace is pointwise approximately gaussian.

We follow the notation of [K2]. For an integrable function $f : \mathbb{R}^n \to [0, \infty)$, a subspace $E \subseteq \mathbb{R}^n$ and a point $x \in E$ we write
$$\pi_E(f)(x) = \int_{x + E^\perp} f(y)\, dy, \tag{1.10}$$
where $x + E^\perp$ is the affine subspace orthogonal to $E$ that passes through the point $x$. In other words, $\pi_E(f) : E \to [0, \infty)$ is the marginal of $f$ onto $E$. The group of all orthogonal transformations of determinant one in $\mathbb{R}^n$ is denoted by $SO(n)$. Fix a dimension $\ell$ and a subspace $E_0 \subset \mathbb{R}^n$ with $\dim(E_0) = \ell$. For $x_0 \in E_0$ and a rotation $U \in SO(n)$, set
$$M_{f,E_0,x_0}(U) = \log \pi_{E_0}(f \circ U)(x_0). \tag{1.11}$$

Define
$$M(|x_0|) = \int_{SO(n)} M_{f_{X+Y},E_0,x_0}(U)\, d\mu_n(U), \tag{1.12}$$
where $\mu_n$ stands for the unique rotationally-invariant Haar probability measure on $SO(n)$. Note that $M(|x_0|)$ is independent of the direction of $x_0$, so it is well defined. We learned in [K2] that the function $U \mapsto M_{f_{X+Y},E_0,x_0}(U)$ is highly concentrated with respect to $U$ in the special orthogonal group $SO(n)$, around its mean value $M(|x_0|)$. This implies that the function $\pi_E(f_{X+Y})$ is almost spherically symmetric, for a typical subspace $E$. This information is contained in our next lemma, which is equivalent to [K2, Lemma 3.3].

Lemma 1.2.1 Let $1 \le \ell \le n$ be integers, let $0 < \alpha < 10^5$ and denote $\lambda = \frac{1}{5\alpha + 20}$. Assume that $\ell \le n^\lambda$. Suppose that $X$ is an isotropic random vector with a log-concave density and that $Y$ is an independent random vector with density $\gamma_n[n^{-\alpha\lambda}]$. Denote the density of $X + Y$ by $f_{X+Y}$.

Let $E \in G_{n,\ell}$ be a random subspace. Then, with probability greater than $1 - Ce^{-cn^{1/10}}$ of selecting $E$, we have
$$\left| \log \pi_E(f_{X+Y})(x) - M(|x|) \right| \le C n^{-\lambda}, \tag{1.13}$$
for all $x \in E$ with $|x| \le 5 n^{\lambda/2}$. Here $c, C > 0$ are universal constants.

Sketch of Proof: We have to follow the proof of Lemma 3.3 in [K2], choosing, for instance, $u = \frac{9}{10}$, $\lambda = \frac{1}{5\alpha+20}$, $k = n^\lambda$ and $\eta = 1$. Throughout the argument in [K2], it was assumed that the dimension of the subspace is exactly $k = n^\lambda$, while in the present version of the statement, note that it could possibly be smaller, i.e., $\ell \le k$ (note also that here, $k$ need not be an integer). We re-run the proofs of Lemmas 2.7, 2.8, 3.1 and 3.3 from [K2], allowing the dimension of the subspace we are working with to be smaller than $k$, noting that the reduction of the dimension always acts in our benefit.

We refer the reader to the original argument in the proof of Lemma 3.3 in [K2] for more details.


Our main goal in this subsection is to show that $M(|x|)$ behaves approximately like $\log \gamma_n[1 + n^{-\alpha\lambda}](x)$. Once we prove this, it would follow from the above lemma that the density of $X + Y$ is pointwise approximately gaussian. Next we explain why no serious harm is done if we take the logarithm outside the integral in the definition of $M(|x|)$. Denote, for $x \in E_0$,
$$\tilde M(|x|) = \int_{SO(n)} \pi_{E_0}(f_{X+Y} \circ U)(x)\, d\mu_n(U). \tag{1.14}$$

Lemma 1.2.2 Under the notation and assumptions of Lemma 1.2.1, for $|x| \le 5 n^{\lambda/2}$ we have
$$0 \le \log \tilde M(|x|) - M(|x|) \le \frac{C}{n^{1/5}}, \tag{1.15}$$
where $C > 0$ is a universal constant.

Proof: Recall that $E_0 \subset \mathbb{R}^n$ is some fixed $\ell$-dimensional subspace. Fix $x_0 \in E_0$ with $|x_0| \le 5 n^{\lambda/2}$. Lemma 3.1 of [K2] states that for any $U_1, U_2 \in SO(n)$,
$$\left| M_{f_{X+Y},E_0,x_0}(U_1) - M_{f_{X+Y},E_0,x_0}(U_2) \right| \le C n^{\lambda(2\alpha+2)} \cdot d(U_1, U_2), \tag{1.16}$$
where $d(U_1, U_2)$ stands for the geodesic distance between $U_1$ and $U_2$ in $SO(n)$. As we mentioned before, Lemma 3.1 is proved in [K2] under the assumption that the dimension of the subspace $E_0$ is exactly $n^\lambda$. In our case, the dimension $\ell$ might be smaller than $n^\lambda$, but a direct inspection of the proofs in [K2] reveals that the reduction of the dimension can only improve the estimates. Hence (1.16) holds true.

We apply the Gromov-Milman concentration inequality on $SO(n)$, quoted as Proposition 3.2 in [K2], and conclude from (1.16) that for any $\varepsilon > 0$,
$$\mu_n\left\{ U \in SO(n);\ \left| M_{f_{X+Y},E_0,x_0}(U) - M(|x_0|) \right| \ge \varepsilon \right\} \le C \exp\left( -c n \varepsilon^2 / L^2 \right), \tag{1.17}$$

with $L = C n^{\lambda(2\alpha+2)}$. That is, the distribution of
$$F(U) = \frac{\sqrt{n}}{L} \left( M_{f_{X+Y},E_0,x_0}(U) - M(|x_0|) \right) \qquad (U \in SO(n))$$
on $SO(n)$ has a subgaussian tail. Note also that $\int_{SO(n)} F(U)\, d\mu_n(U) = 0$. A standard computation shows for any $p \ge 1$,
$$\int_{SO(n)} F^p(U)\, d\mu_n(U) \le \left( C' \sqrt{p} \right)^p, \tag{1.18}$$

where $C'$ is a universal constant. Hence, for any $0 < t < 1$,
$$\int_{SO(n)} \exp\left( tF(U) \right) d\mu_n(U) \le 1 + t \int_{SO(n)} F(U)\, d\mu_n(U) + \sum_{i=2}^{\infty} \frac{\left( C'\sqrt{i} \right)^i t^i}{i!} \tag{1.19}$$
$$\le 1 + \sum_{i=2}^{\infty} \frac{(Ct^2)^{i/2}}{\lfloor i/2 \rfloor !} \le 1 + (\sqrt{C} + 1) \sum_{j=1}^{\infty} \frac{(Ct^2)^j}{j!} \le \sum_{j=0}^{\infty} \frac{(Ct^2)^j}{j!} = \exp(Ct^2).$$
The left-hand side of (1.15) follows by Jensen's inequality. We may clearly assume that $n \ge C'$ when proving the right-hand side inequality of (1.15) (otherwise, $1 - Cn^{-1/5}$ can be made negative, for an appropriate choice of a universal constant $C$). We use (1.19) for the value
$$t = \frac{L}{\sqrt{n}} = C n^{\frac{2\alpha+2}{5\alpha+20} - \frac12} \le C n^{-1/10} < 1,$$


to conclude that
$$\frac{\tilde M(|x_0|)}{\exp(M(|x_0|))} = \frac{\int_{SO(n)} \exp\left( M_{f_{X+Y},E_0,x_0}(U) \right) d\mu_n(U)}{\exp(M(|x_0|))} = \int_{SO(n)} \exp\left( M_{f_{X+Y},E_0,x_0}(U) - M(|x_0|) \right) d\mu_n(U) \le \exp(Cn^{-1/5}).$$
Taking logarithms of both sides completes the proof.

Let $X, Y, \alpha, \lambda, \ell$ be as in Lemma 1.2.1. We choose a slightly different normalization. Define
$$Z = \frac{X + Y}{\sqrt{1 + n^{-\lambda\alpha}}}, \tag{1.20}$$
and denote by $f_Z$ the corresponding density. Clearly $f_Z$ is isotropic and log-concave. Next we define, for $x \in E_0$,
$$M_1(|x|) := \int_{SO(n)} \pi_{E_0}(f_Z \circ U)(x)\, d\mu_n(U). \tag{1.21}$$
Our goal is to show that the following estimate holds:
$$\left| \frac{M_1(|x|)}{\gamma_\ell[1](x)} - 1 \right| < C_1 n^{-c_1} \tag{1.22}$$
for all $x \in \mathbb{R}^\ell$ with $|x| < c_2 n^{c_2}$, for some universal constants $C_1, c_1, c_2 > 0$.

We write $S^{n-1} = \{x \in \mathbb{R}^n;\ |x| = 1\}$, the unit sphere in $\mathbb{R}^n$. Define:
$$\bar f_Z(x) = \int_{S^{n-1}} f_Z(|x|\theta)\, d\sigma_n(\theta) = \int_{SO(n)} f_Z(Ux)\, d\mu_n(U), \qquad (x \in \mathbb{R}^n) \tag{1.23}$$
where $\sigma_n$ is the unique rotationally-invariant probability measure on $S^{n-1}$. Since $\bar f_Z$ is spherically symmetric, we shall also use the notation $\bar f_Z(|x|) = \bar f_Z(x)$. Clearly, for any $x \in E_0$,
$$M_1(|x|) = \int_{SO(n)} \pi_{E_0}(f_Z \circ U)(x)\, d\mu_n(U) = \int_{SO(n)} \pi_{E_0}(\bar f_Z \circ U)(x)\, d\mu_n(U) = \pi_{E_0}(\bar f_Z)(x). \tag{1.24}$$

We will use the following thin-shell estimate, proved in [K2, Theorem 1.3]:

Proposition 1.2.3 Let $n \ge 1$ be an integer and let $X$ be an isotropic random vector in $\mathbb{R}^n$ with a log-concave density. Then,
$$\mathbb{P}\left( \left| \frac{|X|}{\sqrt{n}} - 1 \right| \ge \frac{1}{n^{1/15}} \right) < C \exp\left( -c n^{1/15} \right) \tag{1.25}$$
where $C, c > 0$ are universal constants.
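The thin-shell phenomenon of Proposition 1.2.3 is easy to observe numerically for one concrete isotropic log-concave measure, the uniform measure on a cube (which here merely stands in for a general convex body). A hedged Monte Carlo sketch with illustrative parameters:

```python
import math
import random

random.seed(1)

n, N = 400, 200
a = math.sqrt(3.0)  # uniform on [-a, a]^n is isotropic and log-concave

# Record |X| / sqrt(n) for N independent samples.
ratios = []
for _ in range(N):
    s = sum(random.uniform(-a, a) ** 2 for _ in range(n))  # |X|^2
    ratios.append(math.sqrt(s / n))

mean_ratio = sum(ratios) / N
max_dev = max(abs(r - 1.0) for r in ratios)

print(mean_ratio, max_dev)  # nearly all mass sits in a thin shell around radius sqrt(n)
```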

Applying the above for $f_Z$, denoting $\varepsilon = n^{-1/15}$, and defining
$$A = \left\{ x \in \mathbb{R}^n;\ \sqrt{n}(1 - \varepsilon) \le |x| \le \sqrt{n}(1 + \varepsilon) \right\},$$
we get
$$\int_A f_Z(x)\, dx > 1 - Ce^{-cn^{1/15}}. \tag{1.26}$$
From the definition of $\bar f_Z$, it is clear that the above inequality also holds when we replace $f_Z$ with $\bar f_Z$. In other words, if we define
$$g(t) = t^{n-1} \omega_n \bar f_Z(t) \qquad (t \ge 0) \tag{1.27}$$


where $\omega_n$ is the surface area of the unit sphere $S^{n-1}$ in $\mathbb{R}^n$, and use integration in polar coordinates, we get
$$1 \ge \int_{\sqrt{n}(1-\varepsilon)}^{\sqrt{n}(1+\varepsilon)} g(t)\, dt > 1 - Ce^{-cn^{1/15}}. \tag{1.28}$$

Our next step is to apply the methods of Sodin's paper [Sod] in order to prove a generalization of [Sod, Theorem 2], for a multi-dimensional marginal rather than a one-dimensional marginal. Our estimate will be rather crude, but suitable for our needs.

Denote by $\sigma_{n,r}$ the unique rotationally-invariant probability measure on the Euclidean sphere of radius $r$ around the origin in $\mathbb{R}^n$. A standard calculation shows that the density of an $\ell$-dimensional marginal of $\sigma_{n,r}$ is given by the following formula:
$$\psi_{n,\ell,r}(x) = \psi_{n,\ell,r}(|x|) := \Gamma_{n,\ell}\, \frac{1}{r^\ell} \left( 1 - \frac{|x|^2}{r^2} \right)^{\frac{n-\ell-2}{2}} 1_{[-r,r]}(|x|) \tag{1.29}$$
where
$$\Gamma_{n,\ell} = \left( \frac{1}{\sqrt{\pi}} \right)^\ell \frac{\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n-\ell}{2}\right)} \tag{1.30}$$
and where $1_{[-r,r]}$ is the characteristic function of the interval $[-r, r]$ (see for example [DF, Remark 2.10]). When $\ell \ll \sqrt{n}$ we have $\Gamma_{n,\ell} \left( \frac{2\pi}{n} \right)^{\ell/2} \approx 1$. By the definition (1.27) of $g$, and since $\bar f_Z$ is spherically symmetric, we may write

$$\pi_{E_0}(\bar f_Z)(x) = \int_0^\infty \psi_{n,\ell,r}(|x|)\, g(r)\, dr \qquad (x \in E_0). \tag{1.31}$$
Indeed, the measure whose density is $\bar f_Z$ equals $\int_0^\infty g(r)\, \sigma_{n,r}\, dr$, hence its marginal onto $E_0$ has density $x \mapsto \int_0^\infty \psi_{n,\ell,r}(x)\, g(r)\, dr$. We will show that the above density is approximately gaussian for $x \in E_0$ when $|x|$ is not too large. But first we need the following technical lemma:

when |x| is not too large. But first we need the following technical lemma:

Lemma 1.2.4 Let $g$ be the density defined in (1.27), and suppose that $n \ge C'$ and $\ell \le n^{1/20}$. For $\varepsilon = n^{-1/15}$ denote $U = \{t > 0;\ t < (1-\varepsilon)\sqrt{n} \text{ or } t > (1+\varepsilon)\sqrt{n}\}$. Then,
$$\int_U t^{-\ell} g(t)\, dt < C' \exp\left( -c' n^{1/15} \right). \tag{1.32}$$
Here, $c', C' > 0$ are universal constants.

Proof: Define for convenience
$$h(t) = t^{-\ell} g(t). \tag{1.33}$$
Denote
$$A = \left[ 0, \frac{1}{n^2} \right], \qquad B = \left[ \frac{1}{n^2}, \sqrt{n}(1-\varepsilon) \right] \cup \left[ \sqrt{n}(1+\varepsilon), \infty \right),$$
and write
$$\int_U h(t)\, dt = \int_A h(t)\, dt + \int_B h(t)\, dt. \tag{1.34}$$
We estimate the two terms separately. For $t > \frac{1}{n^2}$ we have
$$h(t)/g(t) = t^{-\ell} < \left(n^2\right)^\ell = e^{2\ell \log n}. \tag{1.35}$$


Thus we can estimate the second term as follows:
$$\int_B h(t)\, dt < e^{2\ell \log n} \int_B g(t)\, dt < e^{2\ell \log n}\, C e^{-cn^{1/15}} < C e^{-\frac12 c n^{1/15}}, \tag{1.36}$$
where for the second inequality we apply the reformulation (1.28) of Proposition 1.2.3 (recall that $\varepsilon = n^{-1/15}$ and that $\ell < n^{1/20}$).

To estimate the first term in the right-hand side of (1.34), we use the fact that $f_Z$ is isotropic and log-concave, so we can use a crude bound for the isotropic constant (see e.g. [?, Corollary 4.3] or [LV, Theorem 5.14(e)]) which gives $\sup_{\mathbb{R}^n} f_Z < e^{n \log n}$, thus also $\sup_{\mathbb{R}^n} \bar f_Z < e^{n \log n}$. Hence we can estimate
$$\int_A h(t)\, dt = \int_0^{\frac{1}{n^2}} t^{-\ell} g(t)\, dt = \int_0^{\frac{1}{n^2}} t^{n-\ell-1} \omega_n \bar f_Z(t)\, dt < n^{-2(n-\ell)} \omega_n \sup \bar f_Z < e^{-1.5 n \log n + n \log n} < e^{-n}, \tag{1.37}$$
as $\omega_n < C$. The combination of (1.36) and (1.37) completes the proof.

We are now ready to show that the marginals of $\bar f_Z$ are approximately gaussian. Our desired bound (1.22) is contained in the following lemma.

Lemma 1.2.5 Let $1 \le \ell \le n$ be integers, with $n \ge C$ and $\ell \le n^{1/20}$. Let $g : \mathbb{R}^+ \to \mathbb{R}^+$ be a function that satisfies (1.28) and (1.32). Then we have
$$\left| \frac{M_1(|x|)}{\gamma_\ell[1](x)} - 1 \right| = \left| \frac{\int_0^\infty \psi_{n,\ell,r}(|x|)\, g(r)\, dr}{\gamma_\ell[1](x)} - 1 \right| < C n^{-1/60} \tag{1.38}$$
for all $x \in \mathbb{R}^\ell$ with $|x| < 2n^{\frac{1}{40}}$, where $C > 0$ is a universal constant.

Proof: The left-hand side equality in (1.38) follows at once from (1.24) and (1.31). We move to the proof of the right-hand side inequality. We begin by using a well-known fact, which follows from a straightforward computation using asymptotics of $\Gamma$-functions: for $|x| < n^{1/8}$,
$$\left| \frac{\psi_{n,\ell,\sqrt{n}}(|x|)}{\gamma_\ell[1](x)} - 1 \right| = \left| \left( \frac{2\pi}{n} \right)^{\ell/2} \Gamma_{n,\ell}\, \frac{\left( 1 - \frac{|x|^2}{n} \right)^{(n-\ell-2)/2}}{e^{-|x|^2/2}} - 1 \right| \le \frac{C}{\sqrt{n}} \tag{1.39}$$
(we omit the details of the simple computation; an almost identical computation is done, for example, in [Sod, Lemma 1]. Note that in addition to the computation there, we have to use, e.g., Stirling's formula to estimate the constants $\varepsilon_n$). Using the above fact (1.39), we see that it suffices to prove the following

inequality:
$$\left| \frac{\int_0^\infty \psi_{n,\ell,r}(|x|)\, g(r)\, dr}{\psi_{n,\ell,\sqrt{n}}(|x|)} - 1 \right| < C n^{-\frac{1}{60}} \tag{1.40}$$
for all $x \in \mathbb{R}^\ell$ with $|x| < 2n^{\frac{1}{40}}$. To that end, fix $x_0 \in \mathbb{R}^\ell$ with $|x_0| < 2n^{\frac{1}{40}}$, define
$$A = \left[ \sqrt{n}\left(1 - n^{-\frac{1}{15}}\right),\ \sqrt{n}\left(1 + n^{-\frac{1}{15}}\right) \right], \qquad B = [0, \infty) \setminus A,$$


and write
$$\int_0^\infty \psi_{n,\ell,r}(|x_0|)\, g(r)\, dr = \int_A \psi_{n,\ell,r}(|x_0|)\, g(r)\, dr + \int_B \psi_{n,\ell,r}(|x_0|)\, g(r)\, dr. \tag{1.41}$$

We estimate the two terms separately. For the second term, we have
$$\int_B \psi_{n,\ell,r}(|x_0|)\, g(r)\, dr = \Gamma_{n,\ell} \int_B \frac{1}{r^\ell} \left( 1 - \frac{|x_0|^2}{r^2} \right)^{\frac{n-\ell-2}{2}} 1_{[-r,r]}(|x_0|)\, g(r)\, dr < \Gamma_{n,\ell} \int_B \frac{1}{r^\ell}\, g(r)\, dr < \Gamma_{n,\ell}\, C e^{-cn^{1/15}}, \tag{1.42}$$
where the last inequality follows from (1.32). Therefore,
$$\frac{\int_B \psi_{n,\ell,r}(|x_0|)\, g(r)\, dr}{\psi_{n,\ell,\sqrt{n}}(|x_0|)} < \frac{C e^{-cn^{1/15}}}{\left( \frac{1}{\sqrt{n}} \right)^\ell \left( 1 - \frac{|x_0|^2}{n} \right)^{\frac{n-\ell-2}{2}}} < C e^{-cn^{1/15} + |x_0|^2 + \frac12 \ell \log n} < C e^{-n^{1/20}}. \tag{1.43}$$

To estimate the first term on the right-hand side of (1.41), we will show that the following inequality holds:
$$\left| \frac{\int_A \psi_{n,\ell,r}(|x_0|)\, g(r)\, dr}{\psi_{n,\ell,\sqrt{n}}(|x_0|)} - 1 \right| < C n^{-1/60} \tag{1.44}$$
for some constant $C > 0$. For $r > 0$ such that $\frac{|x_0|^2}{r^2} < \frac12$, we have
$$\left| \frac{d}{dr} \log \psi_{n,\ell,r}(|x_0|) \right| = \left| -\frac{\ell}{r} + (n-\ell-2) \frac{|x_0|^2}{r^3} \cdot \frac{1}{1 - \frac{|x_0|^2}{r^2}} \right| < \frac{\ell}{r} + 2n \frac{|x_0|^2}{r^3}. \tag{1.45}$$

Recalling that $|x_0| < 2n^{\frac{1}{40}}$ and $\ell \le n^{1/20}$, the above estimate gives that for all $r \in \left[ \frac12\sqrt{n}, \frac32\sqrt{n} \right]$,
$$\left| \frac{d}{dr} \log \psi_{n,\ell,r}(|x_0|) \right| < 2n^{\frac{1}{20} - \frac12} + 16 n^{1 + \frac{1}{20} - \frac32} < C n^{-\frac{9}{20}} \tag{1.46}$$
which gives, for $r \in \left[ \frac12\sqrt{n}, \frac32\sqrt{n} \right]$,
$$\left| \frac{\psi_{n,\ell,r}(|x_0|)}{\psi_{n,\ell,\sqrt{n}}(|x_0|)} - 1 \right| < C n^{-\frac{9}{20}} |r - \sqrt{n}|. \tag{1.47}$$

Recall that for $r \in A$ we have $|r - \sqrt{n}| \le n^{\frac{13}{30}}$. Hence the last estimate yields
$$\left| \frac{\int_A \psi_{n,\ell,r}(|x_0|)\, g(r)\, dr}{\psi_{n,\ell,\sqrt{n}}(|x_0|) \int_A g(r)\, dr} - 1 \right| < C n^{-\frac{9}{20}}\, n^{\frac{13}{30}} = C n^{-\frac{1}{60}}. \tag{1.48}$$

Combining the last inequality with (1.28), we get
$$\left| \frac{\int_A \psi_{n,\ell,r}(|x_0|)\, g(r)\, dr}{\psi_{n,\ell,\sqrt{n}}(|x_0|)} - 1 \right| < C e^{-cn^{\frac{1}{15}}} + C n^{-\frac{1}{60}} < C' n^{-\frac{1}{60}}. \tag{1.49}$$
From (1.43) and (1.49) we deduce (1.40), and the lemma is proved.
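The key asymptotic (1.39) used in this proof, namely that $\psi_{n,\ell,\sqrt n}$ is pointwise close to the standard gaussian density in dimension $\ell$, can be checked numerically via log-Gamma functions. A minimal sketch with illustrative parameters:

```python
from math import exp, lgamma, log, pi

def log_psi(n, l, r2, rad):
    """log of psi_{n, l, r}(x) from (1.29)-(1.30), for |x| = rad and r = sqrt(r2)."""
    log_Gamma_nl = -(l / 2.0) * log(pi) + lgamma(n / 2.0) - lgamma((n - l) / 2.0)
    return (log_Gamma_nl - (l / 2.0) * log(r2)
            + ((n - l - 2) / 2.0) * log(1.0 - rad * rad / r2))

def log_gauss(l, rad):
    """log of the standard gaussian density gamma_l[1] at |x| = rad."""
    return -(l / 2.0) * log(2.0 * pi) - rad * rad / 2.0

n, l = 10000, 2
ratios = [exp(log_psi(n, l, float(n), rad) - log_gauss(l, rad))
          for rad in (0.5, 1.0, 2.0)]
print(ratios)  # all close to 1, with error of order 1/sqrt(n)
```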

Recall the definitions (1.14) and (1.21) of $\tilde M(|x|)$ and $M_1(|x|)$; the only difference is the normalization of $X + Y$. By an easy scaling argument, we deduce from Lemma 1.2.5 that when $n \ge C$,
$$\left| \frac{\tilde M(|x|)}{\gamma_\ell[1 + n^{-\lambda\alpha}](x)} - 1 \right| < C_1 n^{-\frac{1}{60}} \tag{1.50}$$


for all $x \in \mathbb{R}^\ell$ with $|x| < n^{\frac{1}{40}}$, for $C_1 > 0$ a universal constant. By plugging (1.15) and (1.50) into Lemma 1.2.1, we conclude the following:

Proposition 1.2.6 Let $1 \le \ell \le n$ be integers. Let $0 < \alpha < 10^5$ and denote $\lambda = \frac{1}{5\alpha+20}$. Assume that $\ell \le n^\lambda$. Suppose that $f : \mathbb{R}^n \to [0, \infty)$ is a log-concave function that is the density of an isotropic random vector. Define $g = f * \gamma_n[n^{-\lambda\alpha}]$, the convolution of $f$ and $\gamma_n[n^{-\lambda\alpha}]$. Let $E \in G_{n,\ell}$ be a random subspace. Then, with probability greater than $1 - Ce^{-cn^{1/10}}$ of selecting $E$, we have
$$\left| \frac{\pi_E(g)(x)}{\gamma_\ell[1 + n^{-\lambda\alpha}](x)} - 1 \right| \le C n^{-\lambda} \tag{1.51}$$
for all $x \in E$ with $|x| < n^{\lambda/2}$, where $C > 0$ is a universal constant.

We did not have to explicitly assume that $n \ge C$ in Proposition 1.2.6, since otherwise the proposition is vacuously true. In the next subsection we will show that the above estimate still holds without taking the convolution, perhaps with slightly worse constants.

1.2.2 Deconvolving the Gaussian

Our goal in this subsection is to establish the following principle: Suppose that $X$ is a random vector with a log-concave density, and that $Y$ is an independent gaussian random vector whose covariance matrix is small enough with respect to that of $X$. Then, in the case where $X + Y$ is approximately gaussian, the density of $X$ is also approximately gaussian, in a rather large domain. We begin with a lower bound for the density of $X$.

(Note that the notation $n$ in this subsection corresponds to the dimension of the subspace, which was denoted by $\ell$ in the previous subsection.)

Lemma 1.2.7 Let $n \ge 1$ be a dimension, and let $\alpha, \beta, \varepsilon, R > 0$. Suppose that $X$ is an isotropic random vector in $\mathbb{R}^n$ with a log-concave density, and that $Y$ is an independent gaussian random vector in $\mathbb{R}^n$ with mean zero and covariance matrix $\alpha\, Id$. Denote by $f_X$ and $f_{X+Y}$ the respective densities. Suppose that
$$f_{X+Y}(x) \ge (1 - \varepsilon)\gamma_n[1 + \alpha](x) \tag{1.52}$$
for all $|x| \le R$. Assume that $\alpha \le c_0 n^{-8}$ and that
$$100\, (2n)^{\max\{3\beta, 3/2\}}\, \alpha^{1/4} < \varepsilon < \frac{1}{100}. \tag{1.53}$$
Then,
$$f_X(x) \ge (1 - 6\varepsilon)\gamma_n[1](x) \tag{1.54}$$
for all $x \in \mathbb{R}^n$ with $|x| \le \min\{R - 1, (2n)^\beta\}$. Here, $0 < c_0 < 1$ is a universal constant.

Proof: Suppose first that $f_X$ is positive everywhere in $\mathbb{R}^n$. Fix $x_0 \in \mathbb{R}^n$ with $|x_0| \le \min\{R - 1, (2n)^\beta\}$. Assume that $\varepsilon_0 > 0$ is such that
$$f_X(x_0) < (1 - \varepsilon_0)\gamma_n[1](x_0). \tag{1.55}$$


To prove the lemma (for the case where $f_X$ is positive everywhere) it suffices to show that
$$\varepsilon_0 \le 6\varepsilon. \tag{1.56}$$
Consider the level set $L = \{x \in \mathbb{R}^n;\ f_X(x) \ge f_X(x_0)\}$. Then $L$ is convex and bounded, as $f_X$ is log-concave and integrable (here we used the fact that $f_X(x_0) > 0$). Let $H$ be an affine hyperplane that supports $L$ at its boundary point $x_0$, and denote by $D$ the open ball of radius $\alpha^{1/4}$ tangent to $H$ at $x_0$, that is disjoint from the level set $L$. By definition, $f_X(x) < f_X(x_0)$ for $x \in D$. Denote the center of $D$ by $x_1$. Then, $|x_1 - x_0| \le \alpha^{1/4}$ with $|x_0| \le (2n)^\beta$, and a straightforward computation yields
$$\left| |x_1|^2 - |x_0|^2 \right| \le \left( 2(2n)^\beta + \alpha^{1/4} \right) \alpha^{1/4} \le \frac{\varepsilon}{2}, \tag{1.57}$$
where we used (1.53). Note that $|x_1| \le |x_0| + \alpha^{1/4} \le R$. Apply the last inequality and (1.52) to obtain
$$f_{X+Y}(x_1) \ge (1 - \varepsilon)\gamma_n[1 + \alpha](x_0)\, e^{\frac{|x_0|^2 - |x_1|^2}{2(1+\alpha)}} > (1 - 2\varepsilon)\gamma_n[1 + \alpha](x_0). \tag{1.58}$$

By definition,
$$f_{X+Y}(x_1) = \int_{\mathbb{R}^n} f_X(x)\gamma_n[\alpha](x_1 - x)\, dx = \int_{x \in D} f_X(x)\gamma_n[\alpha](x_1 - x)\, dx + \int_{x \notin D} f_X(x)\gamma_n[\alpha](x_1 - x)\, dx. \tag{1.59}$$
We will estimate both integrals. First, recall that $f_X(x) < f_X(x_0)$ for $x \in D$ and use (1.55) to deduce
$$\int_{x \in D} f_X(x)\gamma_n[\alpha](x_1 - x)\, dx < f_X(x_0) < (1 - \varepsilon_0)\gamma_n[1](x_0). \tag{1.60}$$

For the integral outside $D$, a rather rough estimate would suffice. We may write
$$\int_{x \notin D} f_X(x)\gamma_n[\alpha](x_1 - x)\, dx < \mathbb{P}\left( |G_n| \ge \frac{1}{\alpha^{1/4}} \right) \sup_{\mathbb{R}^n} f_X \tag{1.61}$$
where $G_n \sim \gamma_n[1]$ is a standard gaussian random vector. To bound the right-hand side term, we shall use a standard tail bound for the norm of a gaussian random vector,
$$\mathbb{P}\left( |G_n| > t\sqrt{n} \right) < C e^{-ct^2}, \tag{1.62}$$
and the following crude bound for the isotropic constant of $f_X$ (see e.g. [LV, Theorem 5.14(e)]),
$$\sup_{\mathbb{R}^n} f_X < e^{\frac12 n \log n + 6n} < e^{Cn \log n}. \tag{1.63}$$
Consequently,
$$\int_{x \notin D} f_X(x)\gamma_n[\alpha](x_1 - x)\, dx < C e^{-cn^{-1}\alpha^{-1/2}}\, e^{Cn \log n} < e^{-\alpha^{-1/3}}, \tag{1.64}$$

for an appropriate choice of a sufficiently small universal constant $c_0 > 0$ (so that all other constants are absorbed). Combining (1.59), (1.60) and (1.64) gives
$$f_{X+Y}(x_1) < \left( 1 - \varepsilon_0 + \frac{e^{-\alpha^{-1/3}}}{\gamma_n[1](x_0)} \right) \gamma_n[1](x_0). \tag{1.65}$$


Using the fact that $n + (2n)^{2\beta} < \frac{\alpha^{-1/3}}{2}$, which follows easily from our assumptions, we have
$$\frac{e^{-\alpha^{-1/3}}}{\gamma_n[1](x_0)} = e^{\frac{|x_0|^2}{2} + \frac{n}{2}\log(2\pi) - \alpha^{-1/3}} < e^{-\frac12 \alpha^{-1/3}} \le 2\alpha^{1/3} < \frac{\varepsilon}{2} < \frac{\varepsilon_0}{2} \tag{1.66}$$
(for the last inequality, note that if $\varepsilon_0 < 6\varepsilon$ then we have nothing to prove, so we can assume that $\varepsilon_0 > \varepsilon$).

From (1.65) and (1.66) we obtain the bound
$$f_{X+Y}(x_1) < \left( 1 - \frac{\varepsilon_0}{2} \right) \gamma_n[1](x_0). \tag{1.67}$$
Combining (1.58) and (1.67) we get
$$(1 - 2\varepsilon)\gamma_n[1 + \alpha](x_0) < \left( 1 - \frac{\varepsilon_0}{2} \right)\gamma_n[1](x_0). \tag{1.68}$$

A calculation yields
$$\frac{\gamma_n[1](x_0)}{\gamma_n[1 + \alpha](x_0)} \le \frac{\gamma_n[1](0)}{\gamma_n[1 + \alpha](0)} = (1 + \alpha)^{\frac{n}{2}} < 1 + \varepsilon. \tag{1.69}$$

From the above two inequalities, we finally deduce
$$\frac{1 - \varepsilon_0/2}{1 - 2\varepsilon} > \frac{1}{1 + \varepsilon} > 1 - \varepsilon \quad \Longrightarrow \quad \varepsilon_0 < 6\varepsilon, \tag{1.70}$$
which proves (1.56). The lemma is proved, under the additional assumption that $f_X$ never vanishes. The general case follows by a standard approximation argument.

After proving a lower bound, we move to the upper bound. We will show that if we add to the requirements of the previous lemma an assumption that the density $f_{X+Y}$ is bounded from above, then we can provide an upper bound for $f_X$.

Lemma 1.2.8 Let $n, X, Y, \alpha, \beta, \varepsilon, R, c_0$ be defined as in Lemma 1.2.7, and suppose that all the conditions of Lemma 1.2.7 are satisfied. Suppose that in addition, we have the following upper bound for $f_{X+Y}$:
$$f_{X+Y}(x) < (1 + \varepsilon)\gamma_n[1 + \alpha](x) \tag{1.71}$$
for all $|x| < R$. Then we have:
$$f_X(x) < (1 + 8\varepsilon)\gamma_n[1](x) \tag{1.72}$$
for all $x$ with $|x| < \min\{(2n)^\beta, R\} - 3$.

Proof: Denote $F(x) = -\log f_X(x)$. Again we use the upper bound (1.63) for the supremum of the density:
$$F(x) > -6n - \frac12 n \log n > -n \log n, \qquad \forall x \in \mathbb{R}^n. \tag{1.73}$$
Use the conclusion of Lemma 1.2.7 to deduce that for $|x| < \min\{(2n)^\beta, R\} - 1$ the following holds:
$$F(x) < -\log\left( \frac12 \gamma_n[1](x) \right) < \log 2 + \frac{n}{2}\log(2\pi) + (2n)^{2\beta} < 3(2n)^{\max\{2\beta, \frac32\}}. \tag{1.74}$$
Next we will show that for $x, y \in A = \left\{ x \in \mathbb{R}^n;\ |x| < \min\{(2n)^\beta, R\} - 2 \right\}$, the following Lipschitz condition holds:
$$|F(x) - F(y)| \le 5(2n)^{\max\{2\beta, \frac32\}} |x - y|. \tag{1.75}$$


To that end, denote $a = 5(2n)^{\max\{2\beta, \frac32\}}$ and suppose by contradiction that $x, y \in A$ are such that
$$F(y) - F(x) > a|y - x|. \tag{1.76}$$
Since $F(y) - F(x) < a$ (as implied by (1.73) and (1.74)), we have $|y - x| < 1$, and for the point
$$y_1 := x + \frac{y - x}{|y - x|},$$
we have, using the convexity of $F$,
$$F(y_1) - F(x) \ge \frac{F(y) - F(x)}{|y - x|} > a.$$
Note that $|y_1| \le |x| + 1 < \min\{(2n)^\beta, R\} - 1$, so we get a contradiction to (1.73) and (1.74). This proves (1.75).

Therefore, given two points $x, x_0 \in A$ such that $|x_0 - x| < \alpha^{1/4}$, (1.75) implies
$$|F(x_0) - F(x)| < 5\alpha^{1/4}(2n)^{\max\{2\beta, 3/2\}} < \varepsilon/20. \tag{1.77}$$
Recall that $F = -\log f_X$, hence the above translates to
$$|f_X(x_0) - f_X(x)| < 2\left( e^{\varepsilon/20} - 1 \right) f_X(x_0) < \frac{\varepsilon}{4} f_X(x_0). \tag{1.78}$$

Now, suppose $x_0 \in \mathbb{R}^n$ and $0 < \varepsilon_0 < 1$ are such that
$$f_X(x_0) > (1 + \varepsilon_0)\gamma_n[1](x_0), \tag{1.79}$$
with $|x_0| < \min\{R, (2n)^\beta\} - 3$. Again, to prove the lemma it suffices to show that in fact $\varepsilon_0 < 8\varepsilon$. Let $D$ be a ball of radius $\alpha^{1/4}$ around $x_0$. Since we can assume that $\varepsilon_0 > \varepsilon$ (otherwise, there is nothing to prove), we deduce from (1.78) and (1.79) that for all $x \in D$,
$$f_X(x) > \left( 1 - \frac{\varepsilon_0}{4} \right)(1 + \varepsilon_0)\, \gamma_n[1](x_0) > \left( 1 + \frac{\varepsilon_0}{2} \right)\gamma_n[1](x_0). \tag{1.80}$$

Thus,
$$f_{X+Y}(x_0) = \int_{\mathbb{R}^n} f_X(x)\gamma_n[\alpha](x_0 - x)\, dx > \int_{x \in D} f_X(x)\gamma_n[\alpha](x_0 - x)\, dx \tag{1.81}$$
$$> \left( 1 + \frac{\varepsilon_0}{2} \right)\gamma_n[1](x_0) \cdot \left( 1 - \mathbb{P}\left( |G_n| > \frac{1}{\alpha^{1/4}} \right) \right) > \left( 1 + \frac{\varepsilon_0}{3} \right)\gamma_n[1](x_0),$$

where in the last inequality we used the estimate (1.62) and the assumption $\varepsilon_0 > \varepsilon$. Now, a computation yields
$$\frac{\gamma_n[1 + \alpha](x_0)}{\gamma_n[1](x_0)} < e^{\frac12\left( |x_0|^2 - \frac{|x_0|^2}{1+\alpha} \right)} = e^{\frac12 |x_0|^2 \frac{\alpha}{1+\alpha}} < e^{(2n)^{2\beta}\alpha} < 1 + \varepsilon. \tag{1.82}$$
We thus obtain, combining (1.71) and (1.81) and using (1.82), that
$$\frac{1 + \varepsilon_0/3}{1 + \varepsilon} < \frac{\gamma_n[1 + \alpha](x_0)}{\gamma_n[1](x_0)} < 1 + \varepsilon,$$
so $\varepsilon_0 < 8\varepsilon$, and the proof of the lemma is complete.

The combination of the two above lemmas gives us the desired estimate for the density of $X$, as advertised at the beginning of this section.


1.2.3 Proof of main theorem

Proof of Theorem 1.1.5: We may clearly assume that $n$ exceeds some positive universal constant (otherwise, take $\mathcal{E} = \emptyset$). Let $1 \le \ell \le n^{1/100}$ be an integer, and let $\delta \ge 0$ be such that $\ell = n^\delta$. Set $\alpha = 10$ and $\lambda = \frac{1}{5\alpha+20} = \frac{1}{70}$. Let $Y$ be a gaussian random vector in $\mathbb{R}^n$ with mean zero and covariance matrix $n^{-\alpha\lambda}\, Id$, independent of $X$. We first apply Proposition 1.2.6 for the random vector $X + Y$ with parameters $\ell$ and $\alpha$ (noting that $\ell \le n^{1/100} \le n^\lambda$). According to the conclusion of that proposition, if $E$ is a random subspace of dimension $\ell$, then
$$\left| \frac{\pi_E(f_{X+Y})(x)}{\gamma_\ell[1 + n^{-\alpha\lambda}](x)} - 1 \right| \le C n^{-1/100}, \tag{1.83}$$
for all $x \in E$ with $|x| < n^{\frac{1}{200}}$, with probability greater than $1 - Ce^{-cn^{1/10}}$ of choosing $E$.

Next, we apply Lemma 1.2.7 and Lemma 1.2.8 in the $\ell$-dimensional subspace $E$, with the parameters $\alpha = n^{-10\lambda} \le n^{-1/20}\ell^{-8}$, $\beta = \frac{1}{600(\delta + 1/\log_2 n)}$, $R = n^{1/200}$, $\varepsilon = Cn^{-1/100}$, where $C$ is the constant from (1.83). It is straightforward to verify that the requirements of these two lemmas hold, since $n$ may be assumed to exceed a given universal constant. According to the conclusions of Lemma 1.2.7 and Lemma 1.2.8, for any $x \in E$ with $|x| < n^{\frac{1}{700}}$,
$$\left| \frac{\pi_E(f_X)(x)}{\gamma_\ell[1](x)} - 1 \right| \le C' n^{-1/100}.$$
This completes the proof.

Remark. The numerical values of the exponents $c_1, c_2, c_3, c_4$ provided by our proof of Theorem 1.1.5 are far from optimal. The theorem is tight only in the sense that the polynomial dependencies on $n$ cannot be improved to, say, exponential dependence. The only constant among $c_1, c_2, c_3, c_4$ for which the best value is essentially known to us is $c_2$. It is clear from the proof that $c_2$ can be made arbitrarily close to 1 at the expense of decreasing the other constants. Note also that necessarily $c_4 \le 1/4$, as is shown by the example where $X$ is distributed uniformly in a Euclidean ball (see [Sod, Section 4.1]).

1.3 The relation between the slicing problem and the thin-shell conjecture

The goal of this section is to prove Theorem 1.1.4, which states that
$$L_n < C\sigma_n \tag{1.84}$$
for some universal constant $C > 0$.

In fact, inequality (1.84) may be refined as follows. We write $S^{n-1} = \{x \in \mathbb{R}^n;\ |x| = 1\}$ for the unit sphere, and denote
$$\tilde\sigma_n = \frac{1}{\sqrt{n}} \sup_X \left| \mathbb{E}\, X|X|^2 \right| = \frac{1}{\sqrt{n}} \sup_X \sup_{\theta \in S^{n-1}} \mathbb{E}\,(X \cdot \theta)|X|^2,$$
where the supremum runs over all isotropic, log-concave random vectors $X$ in $\mathbb{R}^n$. The simple proof of the following lemma is given in section 1.3.2 below.
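In dimension one, the quantity inside this supremum can be evaluated exactly for a concrete isotropic log-concave example: for $X = E - 1$ with $E$ a standard exponential random variable, $\mathbb{E}\,X|X|^2 = \mathbb{E}\,X^3 = 2$, so the one-dimensional supremum is at least 2. A hedged Monte Carlo sketch of this computation:

```python
import random

random.seed(2)

# X = E - 1 with E ~ Exp(1): mean 0, variance 1, log-concave density e^{-(x+1)} on [-1, inf).
N = 200000
m3 = sum((random.expovariate(1.0) - 1.0) ** 3 for _ in range(N)) / N

print(m3)  # the exact third moment is 2
```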


Lemma 1.3.1 For any $n \ge 1$,
$$\sigma_n^2 \le \frac{1}{n} \sup_X \mathbb{E}\left( |X|^2 - n \right)^2 \le C\sigma_n^2, \tag{1.85}$$
where the supremum runs over all isotropic, log-concave random vectors $X$ in $\mathbb{R}^n$. Furthermore,
$$1 \le \tilde\sigma_n \le C\sigma_n \le \tilde C n^{0.41}.$$
Here, $C, \tilde C > 0$ are universal constants.

Inequality (1.84) may be sharpened, in view of Lemma 1.3.1, to the bound $L_n \le C\tilde\sigma_n$, for a universal constant $C > 0$, as explained in the proof. Our argument involves a certain Riemannian structure, which is presented in the next section.

Throughout this section, we use the letters $c, \tilde c, c', C, \tilde C, C'$ to denote positive universal constants, whose value is not necessarily the same in different appearances. Further notation relevant for this section: The support of a Borel measure $\mu$ on $\mathbb{R}^n$ is the minimal closed set of full measure. When $\mu$ is log-concave, its support is a convex set. For a Borel measure $\mu$ on $\mathbb{R}^n$ and a Borel map $T : \mathbb{R}^n \to \mathbb{R}^k$ we define the push-forward of $\mu$ under $T$ as the measure $\nu = T_*(\mu)$ on $\mathbb{R}^k$ with
$$\nu(A) = \mu\left(T^{-1}(A)\right) \qquad \text{for any Borel set } A \subset \mathbb{R}^k.$$
Note that for any log-concave measure $\mu$ on $\mathbb{R}^n$, there is an invertible affine map $T$ such that $T_*(\mu)$ is isotropic. When $T$ is a linear function and $k < n$, we say that $T_*(\mu)$ is a marginal of $\mu$. The Euclidean unit ball is denoted by $B_2^n = \{x \in \mathbb{R}^n;\ |x| \le 1\}$, and its volume satisfies
$$\frac{c}{\sqrt{n}} \le Vol_n(B_2^n)^{1/n} \le \frac{C}{\sqrt{n}}.$$
We write $\nabla\varphi$ for the gradient of the function $\varphi$, and $\nabla^2\varphi$ for the Hessian matrix. For $\theta \in S^{n-1}$ we write $\partial_\theta$ for differentiation in direction $\theta$, and $\partial_{\theta\theta}(f) = \partial_\theta(\partial_\theta f)$.

This section is based on a joint work with Bo’az Klartag.

1.3.1 A Riemannian metric associated with a convex body

The main idea of the proof is a certain Riemannian metric associated with any convex body $K \subset \mathbb{R}^n$. Our construction is affinely invariant: We actually associate a Riemannian metric with any affine equivalence class of convex bodies (recall that two convex bodies in $\mathbb{R}^n$ are affinely equivalent if there exists an invertible affine map that maps one to the other).

Begin by recalling the technique from [K8]. Suppose that $\mu$ is a compactly-supported Borel probability measure on $\mathbb{R}^n$ whose support is not contained in a hyperplane. Denote by $K \subset \mathbb{R}^n$ the interior of the convex hull of $Supp(\mu)$, so $K$ is a convex body. The logarithmic Laplace transform of $\mu$ is
$$\Lambda(\xi) = \Lambda_\mu(\xi) = \log \int_{\mathbb{R}^n} \exp(\xi \cdot x)\, d\mu(x) \qquad (\xi \in \mathbb{R}^n). \tag{1.86}$$


The function $\Lambda$ is strictly convex and $C^\infty$-smooth on $\mathbb{R}^n$. For $\xi \in \mathbb{R}^n$ let $\mu_\xi$ be the probability measure on $\mathbb{R}^n$ for which the density $d\mu_\xi/d\mu$ is proportional to $x \mapsto \exp(\xi \cdot x)$. Differentiating under the integral sign, we see that
$$\nabla\Lambda(\xi) = b(\mu_\xi), \qquad \nabla^2\Lambda(\xi) = Cov(\mu_\xi) \qquad (\xi \in \mathbb{R}^n),$$
where $b(\mu_\xi)$ is the barycenter of the probability measure $\mu_\xi$ and $Cov(\mu_\xi)$ is the covariance matrix. We learned the following lemma from Gromov's work [Gr]. A proof is provided for the reader's convenience.

Lemma 1.3.2 In the above notation,∫Rn

det∇2Λ(ξ)dξ = V oln(K).

Proof: It is well-known that the open set ∇Λ(Rn) = ∇Λ(ξ); ξ ∈ Rn is convex, and that the map

ξ 7→ ∇Λ(ξ) is one-to-one (see, e.g., Rockafellar [Ro, Theorem 26.5]). Furthermore,

∇Λ(Rn) ⊆ K (1.87)

since for any ξ ∈ Rn, the point ∇Λ(ξ) is the barycenter of a certain probability measure supported on

the convex set K. Next we show that the closure of ∇Λ(Rn), denoted by cl(∇Λ(Rn)), contains all of the

exposed points of Supp(µ). Let x0 ∈ Supp(µ) be an exposed point, i.e., there exists ξ ∈ Rn such that

$$\xi\cdot x_0 > \xi\cdot x \qquad \text{for all } x\in\mathrm{Supp}(\mu),\ x\neq x_0. \tag{1.88}$$

We claim that

$$\lim_{r\to\infty}\nabla\Lambda(r\xi) = x_0. \tag{1.89}$$

Indeed, (1.89) follows from (1.88) and from the fact that x0 belongs to the support of µ: The measure

µrξ converges weakly to the delta measure δx0 as r →∞, hence the barycenter of µrξ tends to x0. Any

exposed point of K is an exposed point of Supp(µ), and we conclude that all of the exposed points of K

are contained in cl(∇Λ(Rn)). From Straszewicz's theorem (see, e.g., Schneider [Sch, Theorem 1.4.7]) we deduce that the closure of K equals cl(∇Λ(Rn)).

Since ∇Λ(Rn) is a convex, open set, necessarily ∇Λ(Rn) = K. Since Λ is strictly convex, its Hessian is positive-definite everywhere, and from the change of variables formula,
$$Vol_n(K) = Vol_n\big(\nabla\Lambda(\mathbb{R}^n)\big) = \int_{\mathbb{R}^n}\det\nabla^2\Lambda(\xi)\,d\xi.$$

Recall that µ is any compactly-supported probability measure on Rn whose support is not contained

in a hyperplane. For each ξ ∈ Rn the Hessian matrix ∇²Λ(ξ) = Cov(µξ) is positive definite. For ξ ∈ Rn we set
$$g(\xi)(u,v) = g_\mu(\xi)(u,v) = \mathrm{Cov}(\mu_\xi)u\cdot v = \nabla^2\Lambda(\xi)u\cdot v \qquad (\xi\in\mathbb{R}^n,\ u,v\in\mathbb{R}^n).$$


Then gµ(ξ) is a positive-definite bilinear form, for any ξ ∈ Rn, and thus gµ is a Riemannian metric on

Rn. We also set

$$\Psi_\mu(\xi) = \log\frac{\det\nabla^2\Lambda(\xi)}{\det\nabla^2\Lambda(0)} \qquad (\xi\in\mathbb{R}^n). \tag{1.90}$$

We say that Xµ = (Rn, gµ,Ψµ, 0) is the “Riemannian package” associated with the measure µ.

Definition 1.3.3 A “Riemannian package” of dimension n is a quadruple X = (U, g,Ψ, x0) where

U ⊂ Rn is an open set, g is a Riemannian metric on U , x0 ∈ U and Ψ : U → R is a function with

Ψ(x0) = 0.

Suppose X = (U, g,Ψ, x0) and Y = (V, h,Φ, y0) are Riemannian packages. A map ϕ : U → V is

an isomorphism of X and Y if the following conditions hold:

1. ϕ is a Riemannian isometry between the Riemannian manifolds (U, g) and (V, h).

2. ϕ(x0) = y0.

3. Φ(ϕ(x)) = Ψ(x) for any x ∈ U .

In this case we say that X and Y are isomorphic, and we write X ∼= Y .

Let us describe an additional construction of the same Riemannian package associated with µ, a construction dual to the one mentioned above. Consider the Legendre transform

$$\Lambda^*(x) = \sup_{\xi\in\mathbb{R}^n}\,[\xi\cdot x - \Lambda(\xi)] \qquad (x\in K).$$

Then Λ∗ : K → R is a strictly-convex C∞-function, and ∇Λ∗ : K → Rn is the inverse map to

∇Λ : Rn → K (see Rockafellar [Ro, Chapter V]). Define

$$\Phi_\mu(x) = \log\frac{\det\nabla^2\Lambda^*(b(\mu))}{\det\nabla^2\Lambda^*(x)} \qquad (x\in K),$$

and for x ∈ K set

$$h(x)(u,v) = h_\mu(x)(u,v) = [\nabla^2\Lambda^*](x)\,u\cdot v \qquad \forall u,v\in\mathbb{R}^n.$$

Thus hµ is a Riemannian metric on K. Note the identity
$$\big[\nabla^2\Lambda(\xi)\big]^{-1} = \big[\nabla^2\Lambda^*\big](\nabla\Lambda(\xi)) \qquad (\xi\in\mathbb{R}^n).$$

Using this identity, it is a simple exercise to verify that the Riemannian package X̃µ = (K, hµ, Φµ, b(µ)) is isomorphic to the Riemannian package Xµ = (Rn, gµ, Ψµ, 0) constructed earlier, with x = ∇Λ(ξ) being the isomorphism. We call Xµ (or X̃µ) the "Riemannian package associated with µ".
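For a concrete one-dimensional instance of the identity [∇²Λ(ξ)]⁻¹ = [∇²Λ*](∇Λ(ξ)), one can take µ = ½(δ₋₁ + δ₁) — our own toy example, not from the text — for which Λ(ξ) = log cosh ξ and Λ*(x) = ((1+x)/2)log(1+x) + ((1−x)/2)log(1−x). The sketch below checks the identity with finite differences.

```python
import math

def Lambda(xi):            # log-Laplace transform of mu = (delta_{-1} + delta_{1})/2
    return math.log(math.cosh(xi))

def Lambda_star(x):        # its Legendre transform, defined for |x| < 1
    return ((1 + x) / 2) * math.log(1 + x) + ((1 - x) / 2) * math.log(1 - x)

def second_diff(f, t, h=1e-4):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2

# The identity says Lambda''(xi) * Lambda*''(x) = 1 at x = Lambda'(xi) = tanh(xi).
for xi in [-1.5, -0.2, 0.4, 1.0]:
    x = math.tanh(xi)
    assert abs(second_diff(Lambda, xi) * second_diff(Lambda_star, x) - 1) < 1e-4
```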

The constructions Xµ and X̃µ are equivalent, and each has advantages over the other. It seems that Xµ is preferable when carrying out computations, as the notation is usually less heavy in this case. On the other hand, the definition X̃µ is perhaps easier to visualize: Suppose that µ is the uniform probability measure on K. In this case X̃µ equips the convex body K itself with a Riemannian structure. One is

thus tempted to imagine, for instance, how geodesics look on K, and what is a Brownian motion with

respect to this metric in the body K. The following lemma shows that this Riemannian structure on K is

invariant under linear transformations.


Lemma 1.3.4 Suppose µ and ν are compactly-supported probability measures on Rn whose support is

not contained in a hyperplane. Assume that there exists a linear map T : Rn → Rn such that

ν = T∗(µ).

Then Xµ∼= Xν .

Proof: It is straightforward to check that the linear map T t (the transposed matrix) is the required

isometry between the Riemannian manifolds (Rn, gν) and (Rn, gµ). However, perhaps a better way

to understand this isomorphism, is to note that the construction of Xµ may be carried out in a more

abstract fashion: Suppose that V is an n-dimensional linear space, denote by V ∗ the dual space, and let

µ be a compactly-supported probability measure on V whose support is not contained in a proper affine

subspace of V . The logarithmic Laplace transform Λ : V ∗ → R is well-defined, as well as the family of

probability measures µξ (ξ ∈ V ∗). For a point ξ ∈ V ∗ and two tangent vectors η, ζ ∈ TξV ∗ ≡ V ∗, set

$$g_\xi(\eta,\zeta) = \int_V \eta(x)\zeta(x)\,d\mu_\xi(x) - \left(\int_V \eta(x)\,d\mu_\xi(x)\right)\left(\int_V \zeta(x)\,d\mu_\xi(x)\right). \tag{1.91}$$

A moment of reflection reveals that the definition (1.91) of the positive-definite bilinear form gξ is equiv-

alent to the definition given here. Furthermore, there exists a linear operator Aξ : V ∗ → V ∗, which is

symmetric and positive-definite with respect to the bilinear form g0, that satisfies

gξ(η, ζ) = g0(Aξη, ζ) for all η, ζ ∈ V ∗.

We may then define Ψ(ξ) = log detAξ, which coincides with the definition (1.90) of Ψµ above. There-

fore, Xµ = (V ∗, g,Ψ, 0) is the Riemannian package associated with µ. Back to the lemma, we see that

Xµ is constructed from exactly the same data as Xν , hence they are isomorphic.

Corollary 1.3.5 Suppose µ and ν are compactly-supported probability measures on Rn whose support

is not contained in a hyperplane. Assume that there exists an affine map T : Rn → Rn such that

ν = T∗(µ).

Then Xµ∼= Xν .

Proof: The only difference from Lemma 1.3.4 is that the map T is assumed to be affine, and not

linear. It is enough to deal with the case where T is a translation, i.e.,

T (x) = x+ x0 (x ∈ Rn)

for a certain vector x0 ∈ Rn. From the definition (1.86) we see that

Λν(ξ) = ξ · x0 + Λµ(ξ) (ξ ∈ Rn).

Adding a linear functional does not influence second derivatives, hence gµ = gν and also Ψµ = Ψν .

Therefore Xµ = (Rn, gµ,Ψµ, 0) is trivially isomorphic to Xν = (Rn, gν ,Ψν , 0).


An n-dimensional Riemannian package is of “log-concave type” if it is isomorphic to the Riemannian

packageXµ associated with a compactly supported, log-concave probability measure µ on Rn. Note that

according to our not-entirely-standard terminology, a log-concave probability measure has a density with

respect to Lebesgue measure on Rn, hence its support is never contained in a hyperplane.

Lemma 1.3.6 Suppose X = (U, g, Ψ, ξ0) is an n-dimensional Riemannian package of log-concave type. Let ξ1 ∈ U. Denote
$$\tilde\Psi(\xi) = \Psi(\xi) - \Psi(\xi_1) \qquad (\xi\in U). \tag{1.92}$$
Then Y = (U, g, Ψ̃, ξ1) is also an n-dimensional Riemannian package of log-concave type.

Proof: Let µ be a compactly supported log-concave probability measure on Rn whose associated

Riemannian package Xµ = (Rn, gµ,Ψµ, 0) is isomorphic to X . Thanks to the isomorphism, we may

identify ξ1 with a certain point in Rn, which will still be denoted by ξ1 (with a slight abuse of notation).

We now interpret the definition (1.92) as
$$\tilde\Psi(\xi) = \Psi(\xi) - \Psi(\xi_1) \qquad (\xi\in\mathbb{R}^n).$$
In order to prove the lemma, we need to demonstrate that
$$Y = (\mathbb{R}^n, g_\mu, \tilde\Psi, \xi_1) \tag{1.93}$$

is of log-concave type. Recall that µξ1 is the compactly-supported probability measure on Rn whose

density with respect to µ is proportional to x 7→ exp(ξ1 · x). A crucial observation is that µξ1 is log-

concave. Set ν = µξ1 , and note the relation

Λν(ξ) = Λµ(ξ + ξ1)− Λµ(ξ1) (ξ ∈ Rn) (1.94)

which follows directly from the definition (1.86) above. It suffices to show that the Riemannian package

Y in (1.93) is isomorphic to Xν = (Rn, gν ,Ψν , 0). An isomorphism ϕ between Xν and Y is simply the

translation

ϕ(ξ) = ξ + ξ1 (ξ ∈ Rn).

In order to see that ϕ is indeed an isomorphism, note that (1.94) yields

∇2Λν(ξ) = ∇2Λµ(ξ + ξ1) (ξ ∈ Rn), (1.95)

hence ϕ is a Riemannian isometry between (Rn, gν) and (Rn, gµ), with ϕ(0) = ξ1. The relation (1.95) implies that Ψ̃(ϕ(ξ)) = Ψν(ξ) for all ξ ∈ Rn. Hence ϕ is an isomorphism between Riemannian packages, and the lemma is proven.

1.3.2 Inequalities

Proof of Lemma 1.3.1: First, note that for any random vector X in Rn with finite fourth moments,
$$\mathbb{E}(|X|-\sqrt n)^2 \le \frac{1}{n}\,\mathbb{E}(|X|-\sqrt n)^2(|X|+\sqrt n)^2 = \frac{1}{n}\,\mathbb{E}(|X|^2-n)^2.$$


This proves the left-hand side inequality in (1.85). Regarding the right-hand side inequality, we use the

bound

$$\mathbb{E}\,|X|^4\,\mathbf{1}_{\{|X|>C\sqrt n\}} \le C\exp(-\sqrt n), \tag{1.96}$$

which follows from Paouris' theorem [Pa]. Here 1_{{|X|>C√n}} is the random variable that equals one when |X| > C√n and vanishes otherwise. Apply again the identity |X|² − n = (|X| − √n)(|X| + √n) to

conclude that

$$\mathbb{E}(|X|^2-n)^2 = \mathbb{E}(|X|^2-n)^2\mathbf{1}_{\{|X|\le C\sqrt n\}} + \mathbb{E}(|X|^2-n)^2\mathbf{1}_{\{|X|>C\sqrt n\}} \le (C+1)^2 n\,\mathbb{E}(|X|-\sqrt n)^2 + \mathbb{E}\,|X|^4\,\mathbf{1}_{\{|X|>C\sqrt n\}}, \tag{1.97}$$

where C > 0 is the universal constant from (1.96). Clearly σn ≥ c, as is witnessed by the case where

X is a standard gaussian random vector in Rn. Thus (1.85) follows from (1.96) and (1.97). Our proof

of (1.85) uses the deep Paouris theorem. Another possibility could be to use [K4, Theorem 4.4] or the

deviation inequalities for polynomials proved first by Bourgain [Bou3].

In order to prove the second assertion in the lemma, observe that since EX = 0,

$$\mathbb{E}(X\cdot\theta)|X|^2 = \mathbb{E}(X\cdot\theta)(|X|^2-n) \le \sqrt{\mathbb{E}(X\cdot\theta)^2\,\mathbb{E}(|X|^2-n)^2} \le C\sqrt n\,\sigma_n,$$

where we used the Cauchy–Schwarz inequality, the fact that E(X · θ)² = 1, and (1.85). It remains to

prove that σn ≥ 2. To this end, consider the case where Y1, . . . , Yn are independent random variables,

all distributed according to the density t 7→ e−I(t+1) on the real line, where I(a) = a for a ≥ 0 and

I(a) = ∞ for a < 0. Then Y = (Y1, . . . , Yn) is a random vector distributed according to an isotropic,

log-concave probability measure on Rn, and

$$\mathbb{E}\,\frac{\sum_{j=1}^n Y_j}{\sqrt n}\,|Y|^2 = 2\sqrt n.$$

This completes the proof.
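The example closing the proof can be probed by simulation. In the sketch below (our own illustration, not part of the thesis), Yⱼ = Eⱼ − 1 with Eⱼ ~ Exp(1), so each Yⱼ has density e^{−(t+1)} on [−1, ∞), mean 0, variance 1, and E Yⱼ³ = 2; the quantity E[(∑ⱼ Yⱼ/√n)|Y|²] then equals 2√n.

```python
import numpy as np

# Monte Carlo check that E[(sum_j Y_j / sqrt(n)) |Y|^2] = 2 sqrt(n)
# for Y_j = E_j - 1 with E_j ~ Exp(1) (isotropic, log-concave coordinates).
rng = np.random.default_rng(1)
n, N = 5, 400_000
Y = rng.exponential(size=(N, n)) - 1.0
lhs = np.mean((Y.sum(axis=1) / np.sqrt(n)) * (Y ** 2).sum(axis=1))
assert abs(lhs - 2 * np.sqrt(n)) < 0.3
```

The closed-form value comes from E Yⱼ|Y|² = E Yⱼ³ = 2 by independence and centering, summed over j.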

When ϕ is a smooth function on a Riemannian manifold (M, g), we write ∇gϕ(x0) ∈ Tx0(M) for

its gradient at the point x0 ∈ M . Here Tx0(M) stands for the tangent space to M at the point x0. The

subscript g in ∇gϕ(x0) means that the gradient is computed with respect to the Riemannian metric g.

The usual gradient of a function ϕ : Rn → R at a point x0 ∈ Rn is denoted by ∇ϕ(x0) ∈ Rn, without

any subscript. For v ∈ Tx0(M) we write |v|g = √(gx0(v, v)) for its length.

Lemma 1.3.7 Suppose X = (U, g, Ψ, ξ0) is an n-dimensional Riemannian package of log-concave type. Then, for any ξ ∈ U,
$$|\nabla_g\Psi(\xi)|_g \le \sqrt n\,\sigma_n.$$

Proof: Suppose first that ξ = ξ0. We thus need to establish the bound
$$|\nabla_g\Psi(\xi_0)|_g \le \sqrt n\,\sigma_n \tag{1.98}$$


for any log-concave package X = (U, g,Ψ, ξ0) of dimension n. Any such package X is isomorphic to

Xµ = (Rn, gµ,Ψµ, 0) for a certain log-concave probability measure µ on Rn. Furthermore, according to

Corollary 1.3.5, we may apply an appropriate affine map and assume that µ is isotropic. Thus our goal

is to prove that

$$|\nabla_{g_\mu}\Psi_\mu(0)|_{g_\mu} \le \sqrt n\,\sigma_n. \tag{1.99}$$

Since µ is isotropic, ∇²Λµ(0) = Cov(µ) = Id, where Id is the identity matrix. Consequently, the desired bound (1.99) is equivalent to
$$|\nabla\Psi_\mu(0)| \le \sqrt n\,\sigma_n.$$

Equivalently, we need to show that
$$\partial_\theta\log\frac{\det\nabla^2\Lambda_\mu(\xi)}{\det\nabla^2\Lambda_\mu(0)}\bigg|_{\xi=0} \le \sqrt n\,\sigma_n \qquad \text{for all } \theta\in S^{n-1}.$$

A straightforward computation shows that ∂θ log det ∇²Λµ(ξ) equals the trace of the matrix (∇²Λµ(ξ))⁻¹ ∇²∂θΛµ(ξ). Since µ is isotropic,
$$\partial_\theta\log\frac{\det\nabla^2\Lambda_\mu(\xi)}{\det\nabla^2\Lambda_\mu(0)}\bigg|_{\xi=0} = \Delta\partial_\theta\Lambda_\mu(0) = \int_{\mathbb{R}^n}(x\cdot\theta)|x|^2\,d\mu(x) \le \sqrt n\,\sigma_n,$$
according to the definition of σn, where Δ stands for the Laplacian. This completes the proof of (1.98). The lemma is thus proven in the case where ξ = ξ0.

The general case follows from Lemma 1.3.6: when ξ ≠ ξ0, we may consider the log-concave package Y = (U, g, Ψ̃, ξ), where Ψ̃ differs from Ψ by an additive constant. Applying (1.98) for the log-concave package Y, we see that
$$|\nabla_g\Psi(\xi)|_g = |\nabla_g\tilde\Psi(\xi)|_g \le \sqrt n\,\sigma_n.$$

The next lemma is a crude upper bound for the Riemannian distance, valid for any Hessian metric

(that is, a Riemannian metric on U ⊂ Rn which is induced by the Hessian of a convex function).

Lemma 1.3.8 Let µ be a compactly-supported probability measure on Rn whose support is not con-

tained in a hyperplane. Denote by Λ its logarithmic Laplace transform, and let Xµ = (Rn, gµ,Ψµ, 0)

be the associated Riemannian package. Then for any ξ, η ∈ Rn,

$$d(\xi,\eta) \le \sqrt{\Lambda(2\xi-\eta) - \Lambda(\eta) - 2\nabla\Lambda(\eta)\cdot(\xi-\eta)}, \tag{1.100}$$

where d(ξ, η) is the Riemannian distance between ξ and η, with respect to the Riemannian metric gµ. In

particular, when the barycenter of µ lies at the origin,

$$d(\xi, 0) \le \sqrt{\Lambda(2\xi)}. \tag{1.101}$$

Page 35: Distribution of Mass in Convex Bodies

1.3. THE THIN-SHELL CONJ. AND THE HYPERPLANE CONJ. 35

Proof: The conclusion of the lemma is obvious when ξ = η. When ξ ≠ η, we need to exhibit a path from η to ξ whose Riemannian length is at most the right-hand side of (1.100). Set θ = (ξ − η)/|ξ − η| and R = |ξ − η|. Consider the interval
$$\gamma(t) = \eta + t\theta \qquad (0\le t\le R).$$

This path connects η and ξ, and its Riemannian length is
$$\int_0^R \sqrt{g_\mu(\gamma(t))(\theta,\theta)}\,dt = \int_0^R \sqrt{[\partial_{\theta\theta}\Lambda](\eta+t\theta)}\,dt = \int_0^R \sqrt{\frac{d^2\Lambda(\eta+t\theta)}{dt^2}}\,dt \le \sqrt{\int_0^{2R}(2R-t)\frac{d^2\Lambda(\eta+t\theta)}{dt^2}\,dt}\cdot\sqrt{\int_0^R\frac{dt}{2R-t}},$$

according to the Cauchy–Schwarz inequality. Clearly, ∫₀^R dt/(2R − t) = log 2 ≤ 1. Regarding the other integral, recall Taylor's formula with integral remainder:
$$\int_0^{2R}(2R-t)\frac{d^2\Lambda(\eta+t\theta)}{dt^2}\,dt = \Lambda(\eta+2R\theta) - \big[\Lambda(\eta) + 2R\theta\cdot\nabla\Lambda(\eta)\big].$$

The inequality (1.100) is thus proven. Furthermore, Λ(0) = 0, and when the barycenter of µ lies at the

origin, also ∇Λ(0) = 0. Thus (1.101) follows from (1.100).
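Taylor's formula with integral remainder, ∫₀^L (L − t) f''(a + t) dt = f(a + L) − f(a) − L f'(a), is easy to sanity-check numerically; the following Python sketch (our own check, with f = exp) does so.

```python
import numpy as np

# Check int_0^L (L - t) f''(a + t) dt = f(a + L) - f(a) - L f'(a)
# for f = exp, a = 0, L = 2; both sides equal e^2 - 3.
f = fp = fpp = np.exp
a, L = 0.0, 2.0
t = np.linspace(0.0, L, 200_001)
g = (L - t) * fpp(a + t)
lhs = float(np.sum((g[:-1] + g[1:]) * np.diff(t)) / 2)   # trapezoid rule
rhs = f(a + L) - f(a) - L * fp(a)
assert abs(lhs - rhs) < 1e-6
```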

For a convex body K ⊂ Rn we write
$$\mathrm{v.rad.}(K) = \big(Vol_n(K)/Vol_n(B_2^n)\big)^{1/n}$$
for the radius of the Euclidean ball that has exactly the same volume as K. When E ⊆ Rn is a k-dimensional affine subspace and K ⊂ E is a convex body, we interpret v.rad.(K) as (Vol_k(K)/Vol_k(B₂ᵏ))^{1/k}. For a subspace E ⊂ Rn, we write Proj_E for the orthogonal projection operator onto E in Rn.

A Borel measure µ on Rn is even if µ(A) = µ(−A) for any measurable A ⊂ Rn.

Corollary 1.3.9 Let µ be an even, isotropic, log-concave probability measure on Rn. Let 1 ≤ t ≤ √n, and denote by Bt ⊂ Rn the collection of all ξ ∈ Rn with d(0, ξ) ≤ t, where d(0, ξ) is as in Lemma 1.3.8. Then,
$$Vol_n(B_t)^{1/n} \ge \frac{c\,t}{\sqrt n}. \tag{1.102}$$

Here, as everywhere, c > 0 is a universal constant and V oln stands for the Lebesgue measure in Rn

(and not the Riemannian volume).

Proof: It suffices to prove the corollary under the assumption that t is an integer. According to

Lemma 1.3.8,

$$K_t := \{\xi\in\mathbb{R}^n\ ;\ \Lambda(2\xi)\le t^2\} \subseteq B_t.$$
Let E ⊂ Rn be any t²-dimensional subspace, and denote by fE : Rn → [0,∞) the density of the probability measure (Proj_E)∗µ. Then fE is a log-concave function, according to the Prékopa–Leindler


inequality, and it is also an even function. Since (ProjE)∗µ is an isotropic measure on the subspace E,

according to our definition,

$$f_E(0)^{1/t^2} = L_{f_E} \ge c,$$
where this standard inequality is proven, e.g., in [K9, Lemma 3.1]. Note that the restriction of Λ to the subspace E is the logarithmic Laplace transform of (Proj_E)∗µ. According to [K9, Lemma 2.8],
$$\mathrm{v.rad.}(K_t\cap E) \ge c\,t\,f_E(0)^{1/t^2} \ge c't. \tag{1.103}$$

The bound (1.103) holds for any subspace E ⊂ Rn of dimension t². From [K6, Corollary 3.1], we deduce that
$$\mathrm{v.rad.}(K_t) \ge ct.$$

Since Kt ⊆ Bt, the bound (1.102) follows.

Lemma 1.3.10 Let µ be an even, isotropic, log-concave probability measure on Rn. Denote K = Supp(µ), a convex set in Rn with a non-empty interior. Then,
$$Vol_n(K)^{1/n} \ge c/\sigma_n,$$
where c > 0 is a universal constant.

Proof: Set t = max{√n/σn, 1}. Then 1 ≤ t ≤ √n, according to Lemma 1.3.1. Recall the definition

of the set Bt ⊂ Rn from Corollary 1.3.9. Consider the Riemannian package Xµ = (Rn, gµ,Ψµ, 0) that

is associated with the measure µ. According to Lemma 1.3.7, for any ξ ∈ Bt,

$$\Psi_\mu(\xi) - \Psi_\mu(0) \le \sqrt n\,\sigma_n\,d(0,\xi) \le t\sqrt n\,\sigma_n \le Cn.$$
Since Ψµ(ξ) = log det ∇²Λµ(ξ) for any ξ ∈ Rn, and Ψµ(0) = 0, then
$$\det\nabla^2\Lambda_\mu(\xi) \ge e^{-Cn} \qquad \text{for any } \xi\in B_t.$$

From Lemma 1.3.2,

$$Vol_n(K) = \int_{\mathbb{R}^n}\det\nabla^2\Lambda_\mu(\xi)\,d\xi \ge \int_{B_t}\det\nabla^2\Lambda_\mu(\xi)\,d\xi \ge e^{-Cn}\,Vol_n(B_t),$$

as Λµ is convex and hence det∇2Λµ(ξ) ≥ 0 for all ξ. Corollary 1.3.9 yields that

$$Vol_n(K)^{1/n} \ge e^{-C}\cdot\frac{ct}{\sqrt n} \ge \frac{c'}{\sigma_n}.$$

The lemma is proven.

Proof of Inequality (1.84): Let K ⊂ Rn be a centrally-symmetric convex body, such that the uniform probability measure µK is isotropic. Then,
$$L_{\mu_K} = \frac{1}{Vol_n(K)^{1/n}} \le C\sigma_n,$$
where the inequality follows from Lemma 1.3.10. In view of (1.1), the bound Ln ≤ Cσn is proven.

The following proposition is never applied in this article; it is nevertheless included, as it might help understand the nature of the elusive quantity |E X|X|²| for an isotropic, log-concave random vector X in Rn.


Proposition 1.3.11 Suppose X is an isotropic random vector in Rn with finite third moments. Then,
$$\big|\mathbb{E}\,X|X|^2\big|^2 \le Cn^3\int_{S^{n-1}}\big(\mathbb{E}(X\cdot\theta)^3\big)^2\,d\sigma_{n-1}(\theta),$$
where σn−1 is the uniform Lebesgue probability measure on the sphere Sⁿ⁻¹, and C > 0 is a universal constant.

Proof: Denote F(θ) = E(X·θ)³ for θ ∈ Rn. Then F(θ) is a homogeneous polynomial of degree three, and its Laplacian is
$$\Delta F(\theta) = 6\,\mathbb{E}(X\cdot\theta)|X|^2.$$
Denote v = E X|X|² ∈ Rn. The function
$$\theta\mapsto F(\theta) - \frac{6}{2n+4}\,|\theta|^2(\theta\cdot v) \qquad (\theta\in\mathbb{R}^n)$$

is a homogeneous, harmonic polynomial of degree three on Rn. In other words, the restriction F|_{Sⁿ⁻¹} decomposes into spherical harmonics as
$$F(\theta) = \frac{6}{2n+4}(\theta\cdot v) + \left(F(\theta) - \frac{6}{2n+4}(\theta\cdot v)\right) \qquad (\theta\in S^{n-1}).$$

Since spherical harmonics of different degrees are orthogonal to each other,
$$\int_{S^{n-1}}F^2(\theta)\,d\sigma_{n-1}(\theta) \ge \frac{36}{(2n+4)^2}\int_{S^{n-1}}(\theta\cdot v)^2\,d\sigma_{n-1}(\theta) = \frac{36}{n(2n+4)^2}\,|v|^2.$$

According to Proposition 1.3.11 and the above discussion, if we could show that E(X · θ)3 ≤ C/n

for most unit vectors θ ∈ Sn−1, we would obtain a positive answer to Question 1.1.1. It is interesting to

note that the quite similar function

F (θ) = E|X · θ| (θ ∈ Sn−1)

admits corresponding tight concentration bounds. For instance,
$$\int_{S^{n-1}}\big(F(\theta) - E\big)^2\,d\sigma_{n-1}(\theta) \le C/n,$$
where E = ∫_{Sⁿ⁻¹} F(θ) dσn−1(θ), whenever X is distributed according to a suitably normalized log-

concave probability measure in Rn. The normalization we currently prefer here is slightly different from

the isotropic normalization. The details will be explained elsewhere, as well as some relations to the

problem of stability in the Brunn-Minkowski inequality.

We conclude this section with a comment concerning curvature. We were not able to extract meaningful information from the local structure of the Riemannian manifold introduced in Section 1.3.1.

Nevertheless, when µ is an isotropic probability measure, the bilinear form

$$Q(\xi,\eta) = \frac{1}{4}\int_{\mathbb{R}^n}x|x|^2\,d\mu(x)\cdot\int_{\mathbb{R}^n}x(\xi\cdot x)(\eta\cdot x)\,d\mu(x) - \frac{1}{4}\int_{\mathbb{R}^n}(x\otimes x)(\xi\cdot x)\,d\mu(x)\cdot\int_{\mathbb{R}^n}(x\otimes x)(\eta\cdot x)\,d\mu(x) \qquad (\xi,\eta\in\mathbb{R}^n)$$

is the Ricci curvature of (Rn, gµ) at the origin. For two matrices A and B, by A ·B we mean the trace of

the product of A with the transpose of B. In particular, when µ is a product measure, one sees that the

manifold (Rn, gµ) is Ricci flat.


1.4 The relation between the KLS conjecture and the thin-shell conjecture

The goal of this section is to prove Theorem 1.1.7, which connects the constants σn and Gn. The main tool of the proof is a stochastic localization scheme, whose construction is described in the next subsection. In Section 1.4.2, we establish a bound for the covariance matrix of the measure throughout the localization process, which will be essential for its applications. In Section 1.4.3, we prove Theorem 1.1.7, and in Section 1.4.4 we tie up some loose ends and prove Lemma 1.1.10.

1.4.1 A stochastic localization scheme

In this section we construct the localization scheme which will be the principal component in our proofs.

The construction will use elementary properties of semimartingales and stochastic integration. For definitions, see [Dur].

We begin with some isotropic random vector X ∈ Rn with density f(x). Well-known concentration

bounds for log-concave measures (see, e.g., section 2 of [K2]) will allow us to assume throughout the

paper that

supp(f) ⊆ nBn, (1.104)

where Bn is the Euclidean ball of radius 1.

We construct a one-parameter family of functions Γt(f) in the following way. Let Wt be a standard Wiener process in Rn. Define the process Ft(x) by the equations
$$F_0(x) = 1, \qquad dF_t(x) = \langle dW_t,\ A_t^{-1/2}(x-a_t)\rangle F_t(x), \tag{1.105}$$
where
$$a_t = \frac{\int_{\mathbb{R}^n}x f(x)F_t(x)\,dx}{\int_{\mathbb{R}^n}f(x)F_t(x)\,dx}$$
is the barycenter of f Ft, and
$$A_t = \int_{\mathbb{R}^n}(x-a_t)\otimes(x-a_t)\,f(x)F_t(x)\,dx$$
is its covariance matrix. Finally, we write Γt(f)(x) = f(x)Ft(x). In the following, we also agree that ft := Γt(f).
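A crude Euler–Maruyama discretization of (1.105) for a finitely supported measure (our own toy discretization, not part of the thesis) illustrates the scheme; note that the total mass ∑ᵢ wᵢ is conserved exactly at each step, since the weighted mean of x − aₜ vanishes.

```python
import numpy as np

# Euler-Maruyama sketch of dF = <dW, A^{-1/2}(x - a)> F for a discrete measure.
rng = np.random.default_rng(3)
pts = rng.standard_normal((60, 2))       # support points in R^2
w = np.full(60, 1 / 60)                  # initial weights: F_0 = 1
dt = 1e-3

for _ in range(500):
    a = w @ pts                          # barycenter a_t
    X = pts - a
    A = (w[:, None] * X).T @ X           # covariance matrix A_t
    vals, vecs = np.linalg.eigh(A)
    Ainvhalf = vecs @ np.diag(vals ** -0.5) @ vecs.T
    dW = rng.standard_normal(2) * np.sqrt(dt)
    w = w * (1 + (X @ Ainvhalf) @ dW)    # one Euler step of (1.105)

assert abs(w.sum() - 1) < 1e-9           # total mass is preserved
```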

Remark 1.4.1 In some sense, the above is just the continuous version of the following iterative process: at every time step, normalize the measure to be isotropic, and multiply it by a linear function, equal to 1 at the origin, whose gradient has a random direction. This construction may also be viewed as a variant of the Brownian motion on the Riemannian manifold constructed in [EK1].


The remaining part of this section is dedicated to analyzing some basic properties of Γt(f). We begin

with:

Lemma 1.4.2 The process Γt(f) satisfies the following properties:

(i) The function Γt(f) is almost surely well defined and finite for all t > 0.

(ii) For all t > 0, ∫_{Rn} ft(x)dx = 1.

(iii) The process has a semi-group property, namely,
$$\Gamma_{s+t}(f) \sim \frac{1}{\sqrt{\det A_s}}\,\Gamma_t\big(\sqrt{\det A_s}\,\Gamma_s(f)\circ L^{-1}\big)\circ L,$$
where
$$L(x) = A_s^{-1/2}(x-a_s).$$

(iv) For every x ∈ Rn, the process ft(x) is a martingale.

Proof:

To prove (i), we have to make sure that A_t^{−1/2} does not blow up. To this end, define t0 = inf{t | det At = 0}. By continuity, t0 > 0. We have to show that, in fact, t0 = ∞.

We start by showing that both (ii) and (iii) hold for any t < t0.

We first calculate,
$$d\int_{\mathbb{R}^n}f(x)F_t(x)\,dx = \int_{\mathbb{R}^n}f(x)\,dF_t(x)\,dx = \int_{\mathbb{R}^n}f(x)F_t(x)\langle A_t^{-1/2}dW_t,\ x-a_t\rangle\,dx = 0, \tag{1.106}$$

with probability 1. The last equality follows from the definition of at as the barycenter of the measure

f(x)Ft(x)dx. We conclude (ii).

We continue with proving (iii). To do this, fix some 0 < s < t0 − t and write
$$L(x) = A_s^{-1/2}(x-a_s). \tag{1.107}$$
We normalize fs by defining
$$g(x) = \sqrt{\det A_s}\,f_s(L^{-1}(x)),$$

which is clearly an isotropic probability density. Let us inspect Γt(g)(x). We have,
$$d\Gamma_t(g)(x)\big|_{t=0} = g(x)\langle x, dW_t\rangle = \sqrt{\det A_s}\,f_s(L^{-1}(x))\,\langle L(L^{-1}(x)),\,dW_t\rangle = \sqrt{\det A_s}\,f_s(L^{-1}(x))\,\langle L^{-1}(x)-a_s,\ A_s^{-1/2}dW_t\rangle.$$
On the other hand,
$$df_s(L^{-1}(x)) = f_s(L^{-1}(x))\,\langle L^{-1}(x)-a_s,\ A_s^{-1/2}dW_s\rangle;$$
in other words,
$$d\Gamma_t\big(\sqrt{\det A_s}\,\Gamma_s(f)\circ L^{-1}\big)\big|_{t=0} \sim \sqrt{\det A_s}\,d\Gamma_t(f)\circ L^{-1}\big|_{t=s},$$


which proves (iii).

We are left with showing that t0 = ∞. To see this, write
$$s = \min\big\{t\ ;\ \|A_t^{-1/2}\|_{OP} = 2\big\},$$
which is, by continuity, well-defined and almost-surely positive. When time s comes, we may define L as in (1.107), and continue running the process on the function f ∘ L⁻¹ as above. We repeat this every time ‖A_t^{−1/2}‖_OP hits the value 2. This equivalent description of the process shows us that t0 is the infinite sum of almost-surely positive, independent and identically distributed random times, and so t0 = ∞.

Part (iv) follows immediately from the definition of Γt, and the lemma is proven.

The next lemma is a simple but important observation:

Lemma 1.4.3 If f is log-concave, then Γt(f) is log-concave for every t > 0.

Proof:

By Itô's formula,
$$d\log F_t(x) = \frac{dF_t(x)}{F_t(x)} - \frac{\big|A_t^{-1/2}(x-a_t)\big|^2 F_t(x)^2}{2F_t(x)^2}\,dt = \langle dW_t,\ A_t^{-1/2}(x-a_t)\rangle - \frac{1}{2}\big|A_t^{-1/2}(x-a_t)\big|^2\,dt. \tag{1.108}$$
This shows that Ft is actually of the form
$$F_t(x) = C_t\exp\Big(\langle x, c_t\rangle - \frac{1}{2}|B_tx|^2\Big) \tag{1.109}$$
for some Ct > 0, ct ∈ Rn, where B_t² := ∫₀^t A_s⁻¹ ds.

Remark 1.4.4 Equation (1.109) shows that the process Γt(f) may be defined as the solution to a finite system of stochastic differential equations, rather than an infinite one as (1.105) suggests. The existence and uniqueness of its solution follow from a standard existence theorem which may be found in [Ok], Section 5.2.

Our next task is to analyze the path of the barycenter a_t = ∫_{Rn} x ft(x)dx. We have,
$$da_t = d\int_{\mathbb{R}^n}x f(x)F_t(x)\,dx = \int_{\mathbb{R}^n}x f(x)F_t(x)\langle x-a_t,\ A_t^{-1/2}dW_t\rangle\,dx = \left(\int_{\mathbb{R}^n}(x-a_t)\otimes(x-a_t)\,f_t(x)\,dx\right)\big(A_t^{-1/2}dW_t\big) = A_t^{1/2}dW_t, \tag{1.110}$$


where the third equality follows from the definition of at, which implies
$$\int_{\mathbb{R}^n}a_t f(x)F_t(x)\langle x-a_t,\ A_t^{-1/2}dW_t\rangle\,dx = 0.$$

One of the crucial points, when using this localization scheme, will be to show that the barycenter of the

measure does not move too much throughout the process. For this, we would like to attain upper bounds

on the eigenvalues of the matrix At. We start with a simple observation:

Equation (1.109) shows that the measure ft is log-concave with respect to the Gaussian density e^{−|Btx|²/2}. A well-known theorem by Brascamp and Lieb [BL] shows that measures which possess this property satisfy certain concentration inequalities. The next theorem is well known to experts:

Theorem 1.4.5 (Brascamp-Lieb)

Let φ : Rn → R be a convex function and let K > 0. Suppose that
$$d\mu(x) = Z e^{-\phi(x) - \frac{1}{2K^2}|x|^2}\,dx$$
is a probability measure whose barycenter lies at the origin. Then:

(i) There exists a universal constant ∆ > 0 such that for all Borel sets A ⊂ Rn with 0.1 ≤ µ(A) ≤ 0.9, one has
$$\mu(A_{K\Delta}) \ge 0.95,$$
where A_{K∆} is the K∆-extension of A, defined in the previous section.

(ii) For all θ ∈ Sⁿ⁻¹,
$$\int\langle x,\theta\rangle^2\,d\mu(x) \le K^2.$$

(iii) Furthermore, if φ : Rn → R is convex, A is a positive-definite matrix and
$$d\nu(x) = Z e^{-\phi(x) - \frac{1}{2}|A^{-1}x|^2}\,dx$$
is a probability measure whose barycenter lies at the origin, then for all θ ∈ Sⁿ⁻¹, one has
$$\int\langle x,\theta\rangle^2\,d\nu(x) \le |A\theta|^2.$$

Plugging (1.109) into part (ii) of this theorem gives
$$A_t \le \|B_t^{-2}\|_{OP}\,\mathrm{Id} \le \left(\int_0^t\frac{ds}{\|A_s\|_{OP}}\right)^{-1}\mathrm{Id}, \qquad \forall t>0. \tag{1.111}$$

By our assumption (1.104) we deduce that At is bounded by n²Id, which immediately gives
$$A_t < \frac{n^2}{t}\,\mathrm{Id}. \tag{1.112}$$

The bound (1.112) will be far from sufficient for our needs, and the next section is dedicated to attaining a better upper bound. However, it is good enough to show that the barycenter at converges in distribution to a random vector with density f(x).


Indeed, (1.112) implies that
$$\lim_{t\to\infty}W_2(f_t, \delta_{a_t}) = 0, \tag{1.113}$$
where δ_{aₜ} is the probability measure supported at the point at. In other words, the probability density ft(x) converges to a delta measure. By the martingale property, part (iv) of Lemma 1.4.2, we know that E[ft(x)] = f(x); thus, Xt := at converges, in the Wasserstein metric, to the original random vector X as t → ∞.

Remark 1.4.6 It is interesting to compare this construction with the construction by Lehec in [Leh]. In

both cases, a certain Ito process converges to a given log-concave measure. In the result of Lehec, the

convergence is ensured by applying a certain adapted drift, while here, it is ensured by adjusting the

covariance matrix of the process.

We end this section with a simple calculation in which we analyze the process Γt(f) in the case that f is the standard Gaussian density. While the calculation will not be necessary for our proofs, it may provide the reader with a better understanding of the process. Define
$$f(x) = (2\pi)^{-n/2}e^{-|x|^2/2}.$$

According to formula (1.109), the function ft takes the form
$$f_t(x) = C_t\exp\Big(\langle x, c_t\rangle - \frac{1}{2}\big\langle(B_t^2+\mathrm{Id})x,\ x\big\rangle\Big),$$
where Ct ∈ R, ct ∈ Rn are certain Itô processes. It follows that the covariance matrix At satisfies
$$A_t^{-1} = B_t^2 + \mathrm{Id}.$$

Recall that B_t² = ∫₀^t A_s⁻¹ ds. It follows that
$$\frac{d}{dt}B_t^2 = B_t^2 + \mathrm{Id}, \qquad B_0 = 0.$$
So
$$B_t^2 = (e^t-1)\,\mathrm{Id},$$
which gives
$$A_t = e^{-t}\,\mathrm{Id}.$$
Next, we use (1.110) to derive that
$$da_t = e^{-t/2}\,dW_t,$$
which implies
$$a_t \sim W_{1-\exp(-t)}.$$
We finally get
$$f_t = e^{nt/2}(2\pi)^{-n/2}\exp\Big(-\frac{1}{2}e^t\big|x - W_{1-\exp(-t)}\big|^2\Big).$$


1.4.2 Analysis of the matrix At

In the previous section we saw that the covariance matrix At of ft satisfies (1.112). The goal of this section is to give a better bound, which holds also for small t. Namely, we want to prove:

Proposition 1.4.7 There exists a universal constant C > 0 such that the following holds: Let f : Rn → R₊ be an isotropic, log-concave probability density. Let At be the covariance matrix of Γt(f). Then,

(i) Define the event F by
$$F := \big\{\|A_t\|_{OP} < CK_n^2(\log n)e^{-t}\ \ \forall t>0\big\}. \tag{1.114}$$
One has,
$$\mathbb{P}(F) \ge 1 - n^{-10}. \tag{1.115}$$

(ii) For all t > 0, E[Tr(At)] ≤ n.

(iii) Whenever the event F holds, the following also holds: for all t > 1/(K_n² log n) there exists a convex function φt(x) such that the function ft is of the form
$$f_t(x) = \exp\left(-\left|\frac{x}{CK_n\sqrt{\log n}}\right|^2 - \phi_t(x)\right). \tag{1.116}$$

Before we move on to the proof, we have to establish some simple properties of the matrix At. For a symmetric matrix A, denote by λi(A) the i-th eigenvalue of A, in decreasing order. For convenience, we also denote λi = λi(At). Suppose that
$$\lambda_1(A_t) > \lambda_2(A_t) > \dots > \lambda_n(A_t). \tag{1.117}$$
Denote by αi,j the (i, j)-th entry of At. Whenever the above formula holds, the λi(At) are analytic functions of the variables αi,j in a small neighborhood of At, and thus we can find how they vary by means of Itô calculus.

Fix a time t > 0. By choosing suitable coordinates, we can assume that At is diagonal, with diagonal entries appearing in decreasing order, hence αi,i = λi(At). The next lemma is a simple calculation.

Lemma 1.4.8 Let A be a diagonal matrix whose eigenvalues are distinct. For i ≥ j, denote the (i, j)-th and (j, i)-th entries of A by αi,j. One has:

(i)
$$\frac{\partial\lambda_i(A)}{\partial\alpha_{j,k}} = \delta_{i,j}\delta_{i,k}, \tag{1.118}$$

(ii) whenever i ≠ j,
$$\frac{\partial^2\lambda_i(A)}{\partial\alpha_{i,j}^2} = \frac{2}{\lambda_i-\lambda_j}, \tag{1.119}$$

(iii) whenever (j, k) ≠ (l, m) or i ∉ {j, k},
$$\frac{\partial^2\lambda_i(A)}{\partial\alpha_{j,k}\,\partial\alpha_{l,m}} = 0.$$
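These perturbation formulas can be checked by finite differences; the Python sketch below (our own check, for the 2×2 matrix diag(3, 1)) verifies (i) and (ii), where perturbing αᵢ,ⱼ changes both the (i, j) and (j, i) entries, as in the lemma.

```python
import numpy as np

# For A = diag(3, 1): d lambda_1 / d alpha_{1,1} = 1, and
# d^2 lambda_1 / d alpha_{1,2}^2 = 2 / (lambda_1 - lambda_2) = 1.
def top_eig(a11, a12):
    return np.linalg.eigvalsh(np.array([[a11, a12], [a12, 1.0]]))[-1]

h = 1e-5
first = (top_eig(3 + h, 0.0) - top_eig(3 - h, 0.0)) / (2 * h)
second = (top_eig(3.0, h) - 2 * top_eig(3.0, 0.0) + top_eig(3.0, -h)) / h ** 2
assert abs(first - 1.0) < 1e-6
assert abs(second - 1.0) < 1e-3
```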


We postpone the proof of the lemma to subsection 1.4.4.

Next, we calculate dαi,j. We have,
$$dA_t = d\int_{\mathbb{R}^n}(x-a_t)\otimes(x-a_t)\,f_t(x)\,dx = \int_{\mathbb{R}^n}(x-a_t)\otimes(x-a_t)\,df_t(x)\,dx - 2\int_{\mathbb{R}^n}da_t\otimes(x-a_t)\,df_t(x)\,dx + da_t\otimes da_t$$
$$= \int_{\mathbb{R}^n}(x-a_t)\otimes(x-a_t)\langle x-a_t,\ A_t^{-1/2}dW_t\rangle f_t(x)\,dx - 2\int_{\mathbb{R}^n}A_t^{1/2}dW_t\otimes(x-a_t)\langle x-a_t,\ A_t^{-1/2}dW_t\rangle f_t(x)\,dx + da_t\otimes da_t.$$

Let us try to understand the second term. Our aim is to show that
$$\int_{\mathbb{R}^n}A_t^{1/2}dW_t\otimes(x-a_t)\langle x-a_t,\ A_t^{-1/2}dW_t\rangle f_t(x)\,dx = da_t\otimes da_t, \tag{1.120}$$
which would imply
$$dA_t = \int_{\mathbb{R}^n}(x-a_t)\otimes(x-a_t)\langle x-a_t,\ A_t^{-1/2}dW_t\rangle f_t(x)\,dx - da_t\otimes da_t.$$

To that end, we write f̃t(y) = √(det At) ft(A_t^{1/2}y + at), and note that f̃t is an isotropic probability density, so we can calculate,
$$\int_{\mathbb{R}^n}A_t^{1/2}dW_t\otimes(x-a_t)\langle x-a_t,\ A_t^{-1/2}dW_t\rangle f_t(x)\,dx \tag{1.121}$$
(substituting y = A_t^{−1/2}(x − at))
$$= \int_{\mathbb{R}^n}A_t^{1/2}dW_t\otimes A_t^{1/2}y\,\langle y, dW_t\rangle\,\tilde f_t(y)\,dy = A_t^{1/2}dW_t\otimes A_t^{1/2}\int_{\mathbb{R}^n}y\langle y, dW_t\rangle\,\tilde f_t(y)\,dy.$$

We may clearly assume that |dWt| ≠ 0. Write
$$y = \frac{dW_t}{|dW_t|}\left\langle y,\ \frac{dW_t}{|dW_t|}\right\rangle + u(y).$$
Note that, by the isotropicity of f̃t,
$$\int_{\mathbb{R}^n}u(y)\langle y, dW_t\rangle\,\tilde f_t(y)\,dy = 0,$$
which implies
$$\int_{\mathbb{R}^n}y\langle y, dW_t\rangle\,\tilde f_t(y)\,dy = \int_{\mathbb{R}^n}dW_t\left\langle y,\ \frac{dW_t}{|dW_t|}\right\rangle^2\tilde f_t(y)\,dy = dW_t. \tag{1.122}$$

Join (1.121) and (1.122) to get (1.120). Thus, we have established that
$$dA_t = \int_{\mathbb{R}^n}(x-a_t)\otimes(x-a_t)\langle x-a_t,\ A_t^{-1/2}dW_t\rangle f_t(x)\,dx - da_t\otimes da_t.$$

Note that the term dat ⊗ dat is a positive semi-definite matrix; hence, subtracting it can only decrease all of the eigenvalues of At (as a matter of fact, this term induces a rather strong drift of all the eigenvalues towards 0, which we will not even use). To that end, define a matrix Ãt by the equation
$$d\tilde A_t = \int_{\mathbb{R}^n}(x-a_t)\otimes(x-a_t)\,df_t(x)\,dx = \int_{\mathbb{R}^n}(x-a_t)\otimes(x-a_t)\langle x-a_t,\ A_t^{-1/2}dW_t\rangle f_t(x)\,dx \tag{1.123}$$
and Ã0 = A0 = Id. Clearly, At ≤ Ãt for all t > 0. In order to control ‖At‖_OP, it is thus enough to bound λ1(Ãt). From now on, we assume αi,j are the entries of the matrix Ãt (with respect to some diagonal basis), and λi are its eigenvalues.

Equation (1.123) implies,
$$d\alpha_{i,j} = \int_{\mathbb{R}^n}x_ix_j\,\langle A_t^{-1/2}x, dW_t\rangle\,g_t(x)\,dx,$$
where, for convenience, we denote $g_t(x) = f_t(x+a_t)$.

Denote,
$$\xi_{i,j} = \frac{1}{\sqrt{\lambda_i\lambda_j}}\int_{\mathbb{R}^n}\langle x, v_i\rangle\langle x, v_j\rangle\,A_t^{-1/2}x\,g_t(x)\,dx, \tag{1.124}$$
where $v_i, v_j$ are the unit norm eigenvectors of $\tilde A_t$ corresponding to the eigenvalues $\lambda_i, \lambda_j$. So,
$$d\alpha_{i,j} = \sqrt{\lambda_i\lambda_j}\,\langle\xi_{i,j}, dW_t\rangle, \tag{1.125}$$
and
$$\frac{d}{dt}[\alpha_{i,j}]_t = \lambda_i\lambda_j\,|\xi_{i,j}|^2,$$
where $[\alpha_{i,j}]_t$ denotes the quadratic variation of $\alpha_{i,j}$. Due to (1.118) and (1.119), and by Itô's formula, we can conclude the following:

Lemma 1.4.9 One has,
$$d\lambda_i = \langle\lambda_i\xi_{i,i}, dW_t\rangle + \sum_{j=1,\,j\neq i}^n\lambda_i\lambda_j\frac{|\xi_{i,j}|^2}{\lambda_i-\lambda_j}\,dt,$$
where the $\xi_{i,j}$ are defined in (1.124).

We are now ready to prove the main proposition of the section.

Proof of proposition 2.2.6:

Define again, as above, $\tilde f_t(x) = \sqrt{\det A_t}\,f_t(A_t^{1/2}x+a_t)$. By the definition of $K_n$, we have,
$$\sum_{j=1}^n|\xi_{i,j}|^2 = \Big\|\int_{\mathbb{R}^n}x\otimes x\;x_i\,\tilde f_t(x)\,dx\Big\|_{HS}^2 \le K_n^2, \qquad\forall 1\le i\le n. \tag{1.126}$$

We fix some $\alpha\ge 2$ whose value will be chosen later, and define,
$$S_t = \sum_{i=1}^n\lambda_i^\alpha.$$
We have (as usual, by Itô's formula),
$$dS_t = \sum_{i=1}^n\Big(\alpha\lambda_i^{\alpha-1}\,d\lambda_i + \frac{1}{2}\alpha(\alpha-1)\lambda_i^{\alpha-2}\,d[\lambda_i]_t\Big),$$


where [λi]t is the quadratic variation of λi. So, by lemma 1.4.9,

$$dS_t = \sum_{i=1}^n\left(\alpha\lambda_i^{\alpha-1}\Big(\langle\lambda_i\xi_{i,i}, dW_t\rangle + \sum_{1\le j\le n,\,j\neq i}\frac{\lambda_i\lambda_j|\xi_{i,j}|^2}{\lambda_i-\lambda_j}\,dt\Big) + \frac{1}{2}\alpha(\alpha-1)\lambda_i^\alpha|\xi_{i,i}|^2\,dt\right). \tag{1.127}$$

Turning to deal with points in time at which (1.117) does not hold, we note that $S_t$ is smooth with respect to the entries of the matrix $\tilde A_t$. This implies that $S_t$ is in fact an Itô process which satisfies the above equation for all $t\ge 0$. A well-known property of Itô processes is the existence and uniqueness of the decomposition $S_t = M_t + E_t$, where $M_t$ is a local martingale and $E_t$ is an adapted process of locally bounded variation. Recall that for $j > i$, one has $\lambda_i\ge\lambda_j$. We calculate,

$$dE_t \le \sum_{i=1}^n|\xi_{i,i}|^2\lambda_i^2\,\alpha(\alpha-1)\lambda_i^{\alpha-2}\,dt + 2\sum_{1\le i<j\le n}\lambda_i^2|\xi_{i,j}|^2\,\frac{\alpha\lambda_i^{\alpha-1}-\alpha\lambda_j^{\alpha-1}}{\lambda_i-\lambda_j}\,dt \le \sum_{i=1}^n\alpha(\alpha-1)\lambda_i^\alpha\Big(|\xi_{i,i}|^2 + \sum_{j=1,\,j\neq i}^n|\xi_{i,j}|^2\Big)dt,$$
where in the last inequality we used Lagrange's mean value theorem, and the fact that the second derivative of $\lambda\mapsto\lambda^\alpha$ is increasing.

Using (1.126), we derive,
$$dE_t \le K_n^2\alpha^2 S_t\,dt, \tag{1.128}$$
whenever (1.117) holds. Now, (1.127) implies,
$$\frac{d[S]_t}{dt} = \Big|\sum_{i=1}^n\alpha\lambda_i^\alpha\xi_{i,i}\Big|^2,$$
where $[S]_t$ represents the quadratic variation of $S_t$. By the compactness of the space of 3-dimensional isotropic log-concave measures, we easily deduce that $|\xi_{i,i}| < C$. So,
$$\frac{d[S]_t}{dt} \le C\alpha^2 S_t^2. \tag{1.129}$$

By equations (1.128) and (1.129) we learn that $\log S_t$ is a semimartingale whose drift and variance are dominated by those of the process $Z_t$ which satisfies the following equation:
$$dZ_t = C\alpha\,dW_t + K_n^2\alpha^2\,dt, \qquad Z_0 = \log n.$$
By elementary properties of Itô processes, we learn that there exists a universal constant $C>0$ such that,
$$\mathbb{P}\left(\max_{t\in[0,\frac{1}{K_n^2\alpha}]}\log S_t - \log n > C\alpha\right) < e^{-10\alpha}.$$
We choose $\alpha = \log n$ to get,
$$\mathbb{P}\left(\max_{t\in[0,\frac{1}{K_n^2\log n}]}S_t^{1/\alpha} > C'\right) < \frac{1}{n^{10}},$$

for some universal constant $C'>0$. Define the event $F$ as the complement of the event in the equation above,
$$F := \left\{\max_{t\in[0,\frac{1}{K_n^2\log n}]}S_t^{1/\alpha} \le C'\right\}.$$
Clearly, whenever the event $F$ holds, we have,
$$\|A_t\|_{OP} \le \|\tilde A_t\|_{OP} \le C', \qquad\forall t\in\Big[0,\frac{1}{K_n^2\log n}\Big]. \tag{1.130}$$

Recall the bound (1.111). Recalling that $B_t^2 = \int_0^t A_s^{-1}\,ds$, and applying (1.111), gives,
$$\frac{d}{dt}B_t^2 = A_t^{-1} \ge \frac{Id}{\|B_t^{-2}\|_{OP}}.$$
So,
$$\frac{d}{dt}\frac{1}{\|B_t^{-2}\|_{OP}} \ge \frac{1}{\|B_t^{-2}\|_{OP}}. \tag{1.131}$$
By the definition of $B_t$ and by (1.130), it follows that whenever $F$ holds one has,
$$\frac{1}{\|B_{\delta^2}^{-2}\|_{OP}} \ge C\delta^2 \tag{1.132}$$
where $\delta^2 = \frac{1}{K_n^2\log n}$. Equations (1.131) and (1.132) imply,
$$B_t^2 \ge c\,\delta^2 e^{t-\delta^2}\,Id, \qquad\forall t>\delta^2,$$
which gives, using (1.111),
$$A_t \le C\delta^{-2}e^{\delta^2-t}\,Id.$$

Part (i) of the proposition is established. In order to prove the bound for $\mathbb{E}[Tr(A_t)]$, write $S_t = \sum_{i=1}^n\lambda_i$. Setting $\alpha=1$ in (1.127) gives $\frac{d}{dt}\mathbb{E}[S_t] = 0$, which implies (ii). Part (iii) of the proposition follows directly from equations (1.132) and (1.109). The proof of the proposition is complete.

Theorem 1.4.5 gives an immediate corollary to part (iii) of proposition 2.2.6:

Corollary 1.4.10 There exist universal constants $c,\Delta>0$ such that whenever the event $F$ defined above holds, the following also holds: Define $\delta = \frac{1}{K_n\sqrt{\log n}}$. Let $t>\delta^2$ and let $E\subset\mathbb{R}^n$ be a measurable set which satisfies,
$$0.1 \le \int_E f_t(x)\,dx \le 0.9. \tag{1.133}$$
One has,
$$\int_{E^{\Delta/\delta}\setminus E}f_t(x)\,dx \ge c, \tag{1.134}$$
where $E^{\Delta/\delta}$ is the $\frac{\Delta}{\delta}$-extension of $E$, defined in the introduction.


1.4.3 Thin shell implies spectral gap

In this section we use the localization scheme constructed in the previous sections in order to prove theorem 1.1.7.

Let $f(x)$ be an isotropic log-concave probability density in $\mathbb{R}^n$ and let $E\subset\mathbb{R}^n$ be a measurable set. Suppose that,
$$\int_E f(x)\,dx = \frac{1}{2}. \tag{1.135}$$
Our goal in this section is to show that,
$$\int_{E^{\Delta/\delta}\setminus E}f(x)\,dx \ge c \tag{1.136}$$
for some universal constants $c,\Delta>0$, where $\delta = \frac{1}{K_n\sqrt{\log n}}$ and $E^{\Delta/\delta}$ is the $\frac{\Delta}{\delta}$-extension of $E$.

The idea is quite simple. Define $f_t := \Gamma_t(f)$, the localization of $f$ constructed in section 2, and fix $t>0$. By the martingale property of the localization, we have,
$$\int_{E^{\Delta/\delta}\setminus E}f(x)\,dx = \mathbb{E}\left[\int_{E^{\Delta/\delta}\setminus E}f_t(x)\,dx\right]. \tag{1.137}$$
Corollary 1.4.10 suggests that if $t$ is large enough, the right-hand side can be bounded from below if we only manage to bound the integral $\int_E f_t(x)\,dx$ away from 0 and from 1.

Define,
$$g(t) = \int_E f_t(x)\,dx.$$
In view of the above, we would like to prove:

Lemma 1.4.11 There exists a constant $C>0$ such that,
$$\mathbb{P}\left(0.1\le g(t)\le 0.9\right) > C, \qquad\forall t\in[0,1].$$

Before we prove this lemma, we need the following elementary fact:

Lemma 1.4.12 Let $\mu$ be an isotropic log-concave measure on $\mathbb{R}^n$ and let $A\subset\mathbb{R}^n$ be a Borel set. One has,
$$\left|\frac{\int_A x\,d\mu}{\mu(A)}\right| \le C\log\frac{1}{\mu(A)}$$
for some universal constant $C>0$.

Proof: Define,
$$v = \frac{\int_A x\,d\mu}{\mu(A)}.$$

By considering the marginal of $\mu$ onto $\mathrm{span}\{v\}$, we realize that this claim is actually one-dimensional. Write $\theta = \frac{v}{|v|}$, $L = |v|$. Also denote,
$$\varphi(s) = \mu\left(\{x \mid \langle x,\theta\rangle\le s\}\right)$$
and,
$$\alpha = \varphi^{-1}(\mu(A)).$$
A moment of reflection reveals that,
$$L \le \frac{\int_\alpha^\infty x\,\varphi'(x)\,dx}{\int_\alpha^\infty\varphi'(x)\,dx}. \tag{1.138}$$
It is a well-known property of isotropic log-concave measures that $\varphi$ is differentiable and there exist constants $C_1, c_1$ such that,
$$\varphi'(x) \le C_1e^{-c_1|x|}. \tag{1.139}$$
Equations (1.139) and (1.138) give $L < \max(\alpha, 0) + C$. Using (1.139) once again gives,
$$\mu(A) < Ce^{-c\alpha},$$
and the lemma follows.
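To make the logarithmic bound of Lemma 1.4.12 concrete, here is a small numerical illustration (added for exposition; the density and the tail sets are our choices, not the thesis's): for the variance-one two-sided exponential density on the line and right tails $A = [a,\infty)$, the ratio of the barycenter of $A$ to $\log(1/\mu(A))$ stays bounded by a modest constant.

```python
import numpy as np

# Illustration of Lemma 1.4.12 in one dimension: for the isotropic
# (variance-1) two-sided exponential density rho(x) = exp(-sqrt(2)|x|)/sqrt(2)
# and tail sets A = [a, infinity), the barycenter of A grows only
# logarithmically in 1/mu(A).
x = np.linspace(-40.0, 40.0, 800_001)
dx = x[1] - x[0]
rho = np.exp(-np.sqrt(2.0) * np.abs(x)) / np.sqrt(2.0)

ratios = []
for a in [0.0, 1.0, 2.0, 4.0, 8.0]:
    tail = x >= a
    mass = rho[tail].sum() * dx                     # mu(A)
    bary = (x[tail] * rho[tail]).sum() * dx / mass  # |barycenter of A|
    ratios.append(bary / np.log(1.0 / mass))

print(ratios)   # all of order 1 (about 0.7 to 1.1 here)
```

For this density the extremal sets of a given measure are exactly such tails, which is why they make a natural test case.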

Proof of lemma 1.4.11:
We calculate,
$$dg(t) = \int_E f_t(x)\langle x-a_t,\,A_t^{-1/2}dW_t\rangle\,dx \tag{1.140}$$
(substitute $y = A_t^{-1/2}(x-a_t)$)
$$= \sqrt{\det A_t}\int_{A_t^{-1/2}(E-a_t)}f_t(A_t^{1/2}y+a_t)\langle y, dW_t\rangle\,dy = \left\langle\sqrt{\det A_t}\int_{A_t^{-1/2}(E-a_t)}f_t(A_t^{1/2}y+a_t)\,y\,dy,\;dW_t\right\rangle$$
$$= g(t)\left\langle\frac{\int_{A_t^{-1/2}(E-a_t)}f_t(A_t^{1/2}y+a_t)\,y\,dy}{\int_{A_t^{-1/2}(E-a_t)}f_t(A_t^{1/2}y+a_t)\,dy},\;dW_t\right\rangle.$$

Define,
$$\tilde f_t(y) = \sqrt{\det A_t}\,f_t(A_t^{1/2}y+a_t), \qquad E_t = A_t^{-1/2}(E-a_t)$$
and,
$$\xi_t = \frac{\int_{E_t}y\,\tilde f_t(y)\,dy}{\int_{E_t}\tilde f_t(y)\,dy}.$$
The above equation becomes,
$$dg(t) = g(t)\langle\xi_t, dW_t\rangle. \tag{1.141}$$


Recalling that $A_t$ is the covariance matrix of $f_t$ and $a_t$ is its barycenter, it is clear that $\tilde f_t$ is isotropic. Inspect the definition of $\xi_t$: it is the barycenter of a set of measure $g(t)$ with respect to the isotropic log-concave measure whose density is $\tilde f_t$. By lemma 1.4.12, we get
$$|\xi_t|^2 \le C\log^2(g(t)). \tag{1.142}$$
Clearly, by (1.141), $g(t)$ is a martingale, and along with (1.142), we get
$$\frac{d}{dt}[g]_t < C',$$
where $[g]_t$ is the quadratic variation of $g(t)$. By elementary properties of Itô processes, $\mathbb{P}(\exists t\in[0,1],\,|g(t)-0.5| = 0.4)$ is monotone with respect to the process $\frac{d}{dt}[g]_t$. This implies that there exists a constant $C''>0$ such that,
$$\mathbb{P}(0.1\le g(t)\le 0.9) > C'', \qquad\forall t\in[0,1],$$
and the lemma is proven.
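The mechanism in this proof can be illustrated with a toy simulation (ours, not the thesis's): a martingale started at $1/2$ whose diffusion coefficient respects a bound of the type (1.142) has bounded quadratic variation and therefore remains inside $[0.1, 0.9]$ at time 1 with probability bounded away from zero. Below we take $\sigma(g) = g\log(1/g)$ as a stand-in for the true (unknown) coefficient $g(t)|\xi_t|$.

```python
import numpy as np

# Toy Monte Carlo illustration of lemma 1.4.11 (NOT the actual process):
# simulate dg = g*log(1/g) dW with g(0) = 1/2 -- a martingale whose
# diffusion coefficient is of the size allowed by (1.142) -- and estimate
# P(0.1 <= g(1) <= 0.9).
rng = np.random.default_rng(0)
paths, steps = 4000, 1000
dt = 1.0 / steps
g = np.full(paths, 0.5)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=paths)
    g = g + g * np.log(1.0 / g) * dW
    g = np.clip(g, 1e-9, 1.0 - 1e-9)   # keep the toy process inside (0, 1)
prob = np.mean((g >= 0.1) & (g <= 0.9))
print(prob)
```

The estimated probability is well above $1/2$ in this run; the point is only that it is bounded below by a constant, which is all the lemma asserts.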

Proof of proposition 1.1.11:
Pick $T=1$ and denote,
$$G = \{0.1\le g(T)\le 0.9\}\cap F,$$
where $F$ is the event defined above. According to lemma 1.4.11 and to (1.115), one has $\mathbb{P}(G) > c$ for some universal constant $c>0$.

By (1.137) and by corollary 1.4.10, there exist universal constants $c,\Delta>0$ such that
$$\int_{E^{\Delta/\delta}\setminus E}f(x)\,dx = \mathbb{E}\left[\int_{E^{\Delta/\delta}\setminus E}f_T(x)\,dx\right] \ge \mathbb{P}(G)\,\mathbb{E}\left[\int_{E^{\Delta/\delta}\setminus E}f_T(x)\,dx\;\Big|\;G\right] \ge c. \tag{1.143}$$
The result now follows directly from an application of theorem 2.1 in [Mil2].

Remark 1.4.13 In the above proof, we used E. Milman's result in order to reduce the theorem to the case where $\int_E f(x)\,dx$ is exactly $\frac12$, as well as to attain an isoperimetric inequality from a certain concentration inequality for distance functions. Alternatively, we could have replaced theorem 1.4.5 with an essentially stronger result due to Bakry-Emery, proven in [BE] (see also Gross, [Gros1]). Their result, which relies on the hypercontractivity principle, asserts that a density of the form (1.109) actually possesses a corresponding Cheeger constant. Using this fact, we could have directly bounded from below the surface area of any set with respect to the measure whose density is $f_t$.

The proof of lemma 1.1.10 is given below. Along with this lemma, we have established theorem 1.1.7.


1.4.4 Loose ends

Proof of lemma 1.1.10: Let $X$ be an isotropic, log-concave random vector in $\mathbb{R}^n$, and fix $\theta\in S^{n-1}$. Denote $A = \mathbb{E}[X\otimes X\langle X,\theta\rangle]$. Our goal is to show,
$$\|A\|_{HS}^2 \le C\tau_n^2\max(n^{2\kappa}, \log n).$$
Let $k\le n$ and let $E_k$ be a subspace of dimension $k$. Denote $P(X) = Proj_{E_k}(X)$ and $Y = |P(X)| - \sqrt{k}$. By definition of $\sigma_k$,
$$Var[Y] \le \sigma_k^2.$$
Note that, by the isotropicity of $X$, $\mathbb{E}[|P(X)|^2] = k$. It easily follows that,
$$Var[|P(X)|^2] \le Ck\,Var[Y] \le Ck\sigma_k^2.$$
Using the last inequality and applying the Cauchy-Schwartz inequality gives,
$$\mathbb{E}[\langle X,\theta\rangle|P(X)|^2] \le \sqrt{Var[\langle X,\theta\rangle]\,Var[|P(X)|^2]} \le C\sqrt{k}\sigma_k.$$
In other words,
$$Tr[Proj_{E_k}\,A\,Proj_{E_k}] \le C\sqrt{k}\sigma_k.$$
Let $\lambda_1,\ldots,\lambda_n$ be the eigenvalues of $A$ in decreasing order. The last inequality implies that the matrix $Proj_{E_k}\,A\,Proj_{E_k}$ has at least one eigenvalue smaller than $C\frac{\sigma_k}{\sqrt{k}}$, and therefore,
$$\lambda_k^2 < C'\tau_n^2k^{2\kappa-1}.$$
We can thus calculate,
$$\|A\|_{HS}^2 = \sum_{\ell=1}^n\lambda_\ell^2 \le \lambda_1^2 + C'\tau_n^2\int_1^nt^{2\kappa-1}\,dt \le C''\tau_n^2\max(n^{2\kappa}, \log n).$$
The proof is complete.

Next, in order to provide the reader with a better understanding of the constant $K_n$, we introduce two new constants. First, define
$$Q_n^2 = \sup_{X,Q}\frac{Var[Q(X)]}{\mathbb{E}\left[|\nabla Q(X)|^2\right]}$$
where the supremum runs over all isotropic log-concave random vectors $X$ and all quadratic forms $Q(x)$. Next, define
$$R_n^{-1} = \inf_{\mu,E}\frac{\mu^+(E)}{\mu(E)}$$
where $\mu$ runs over all isotropic log-concave measures and $E$ runs over all ellipsoids with $\mu(E)\le 1/2$.
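As a concrete data point for $Q_n$ (an added illustration): taking $X$ to be a standard Gaussian vector — one admissible isotropic log-concave vector — and $Q(x) = \sum_i a_ix_i^2$, one has $Var[Q(X)] = 2\sum_i a_i^2$ while $\mathbb{E}|\nabla Q(X)|^2 = 4\sum_i a_i^2$, so this particular choice contributes the ratio $1/2$ to the supremum defining $Q_n^2$. A Monte Carlo check:

```python
import numpy as np

# Monte Carlo check (illustration only): for a standard Gaussian vector X
# and Q(x) = sum_i a_i x_i^2, the ratio Var[Q(X)] / E|grad Q(X)|^2
# equals 1/2 exactly, independently of the dimension and of the a_i.
rng = np.random.default_rng(1)
a = np.array([3.0, -1.0, 2.0, 0.5, -2.0])   # an arbitrary quadratic form
X = rng.normal(size=(200_000, a.size))

Q = (a * X**2).sum(axis=1)                   # Q(X) for each sample
grad_sq = (4.0 * a**2 * X**2).sum(axis=1)    # |grad Q|^2 = sum (2 a_i x_i)^2

ratio = Q.var() / grad_sq.mean()
print(ratio)    # close to 0.5
```

The Gaussian only gives a lower bound on the supremum, of course; the content of the constant $Q_n$ is how much worse other isotropic log-concave vectors can do.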


Fact 1.4.14 There exist universal constants $C_1, C_2$ such that
$$K_n \le C_1Q_n \le C_2R_n.$$

The proof of the right inequality is standard and uses the coarea formula and the Cauchy-Schwartz inequality. We will prove the left inequality. To that end, fix an isotropic log-concave random vector $X$, and denote $A = \mathbb{E}[X\otimes X\,X_1]$. We have,
$$\|A\|_{HS} = \sup_B\frac{Tr(BA)}{\|B\|_{HS}}$$
where $B$ runs over all symmetric matrices. Let $B$ be a symmetric matrix. Fix coordinates under which $B$ is diagonal, and write $X = (X_1,\ldots,X_n)$ and $B = \mathrm{diag}\{a_1,\ldots,a_n\}$. Define $Q(x) = \langle Bx, x\rangle$. We have,
$$Tr(BA) = \mathbb{E}\left[X_1\sum_{i=1}^na_iX_i^2\right] \le \sqrt{\mathbb{E}[X_1^2]}\sqrt{Var\left[\sum_{i=1}^na_iX_i^2\right]} = \sqrt{Var[Q(X)]} \le \sqrt{2Q_n^2\sum_{i=1}^na_i^2\,\mathbb{E}[X_i^2]} = \sqrt{2}\,Q_n\|B\|_{HS}.$$
So,
$$\|A\|_{HS} \le \sqrt{2}\,Q_n.$$
This shows that $K_n\le CQ_n$.

Remark 1.4.15 We suspect that there exists a universal constant $C>0$ such that $K_n\le C\sigma_n$, but we are unable to prove that assertion.

We move on to the proof of lemma 1.4.8, which is a straightforward calculation of the first two derivatives of the eigenvalues of a diagonal matrix with respect to its entries.

Proof of lemma 1.4.8:
Equation (1.118) is obvious. We proceed to the second derivatives.

Let $j, k\neq i$ and assume all of the off-diagonal entries of $A$ are zero except for $\alpha_{i,j} = \alpha_{j,i} =: t$ and $\alpha_{i,k} = \alpha_{k,i} =: w$. Let $\lambda\in\mathbb{R}$, and denote $\gamma = \prod_{\ell\neq i,j,k}(\alpha_{\ell,\ell}-\lambda)$. A moment of reflection reveals that the calculation of the desired derivatives narrows down to calculating the derivatives of the following function,
$$f(\lambda, t, w) = \det\begin{pmatrix}\alpha_{i,i}-\lambda & t & w\\ t & \alpha_{j,j}-\lambda & 0\\ w & 0 & \alpha_{k,k}-\lambda\end{pmatrix}. \tag{1.144}$$
For $w, t$ small enough, one has,
$$\det(A-\lambda Id) = \left[(\alpha_{i,i}-\lambda)(\alpha_{j,j}-\lambda)(\alpha_{k,k}-\lambda) - t^2(\alpha_{k,k}-\lambda) - w^2(\alpha_{j,j}-\lambda)\right]\gamma.$$
So,
$$\frac{\partial^2\det(A-\lambda Id)}{\partial t\,\partial w}\bigg|_{t=w=0} = 0,$$
and
$$\frac{\partial^2\det(A-\lambda Id)}{\partial t^2}\bigg|_{t=w=0} = -2(\alpha_{k,k}-\lambda)\gamma.$$
Also,
$$\frac{\partial\det(A-\lambda Id)}{\partial\lambda}\bigg|_{\lambda=\alpha_{i,i},\,t=w=0} = -(\alpha_{j,j}-\alpha_{i,i})(\alpha_{k,k}-\alpha_{i,i})\gamma.$$
The last three equations and the implicit function theorem imply (1.119).


Chapter 2

Stability of the Brunn-Minkowski inequality

2.1 Introduction

The Brunn-Minkowski inequality states, in one of its normalizations, that
$$Vol_n\left(\frac{K+T}{2}\right) \ge \sqrt{Vol_n(K)\,Vol_n(T)} \tag{2.1}$$
for any compact sets $K,T\subset\mathbb{R}^n$, where $\frac{K+T}{2} = \left\{\frac{x+y}{2}\,;\,x\in K,\,y\in T\right\}$ is half of the Minkowski sum of $K$ and $T$, and where $Vol_n$ stands for the Lebesgue measure in $\mathbb{R}^n$. For convex bodies, equality in (2.1) holds if and only if $K$ equals $T$ up to translation and dilation. A stability result for the Brunn-Minkowski inequality deals with cases in which the left and right terms of (2.1) are close to equality. In that case, we would like to say that, in some sense, the bodies $K$ and $T$ are not far from dilations of each other.

All of the stability results we found in the literature share a common feature: their estimates deteriorate quickly as the dimension increases. For instance, suppose that $K,T\subset\mathbb{R}^n$ are convex sets with
$$Vol_n(K) = Vol_n(T) = 1 \qquad\text{and}\qquad Vol_n\left(\frac{K+T}{2}\right)\le 5. \tag{2.2}$$
The present stability estimates do not seem to imply much about the proximity of $K$ to a translate of $T$ under the assumption (2.2). Only when the constant "5" from (2.2) is replaced by something like $1+1/n$ do the results of Figalli, Maggi and Pratelli [FMP2] yield meaningful information.

Here, we try to raise the possibility that the stability of the Brunn-Minkowski inequality actually improves as the dimension increases. In particular, we would like to deduce from (2.2) that
$$\left|\frac{\int_K p(x-b_K)\,dx}{\int_T p(x-b_T)\,dx} - 1\right| \ll 1 \tag{2.3}$$


for a family of non-negative functions $p$, when the dimension $n$ is high. Here, $b_K$ and $b_T$ denote the barycenters of $K$ and $T$ respectively. Furthermore, in some non-trivial cases we may conclude (2.3) even when the constant "5" in (2.2) is replaced by an expression that grows with the dimension, such as $\log n$ or $n^\alpha$ for a small universal constant $\alpha>0$.

In fact, we are interested mainly in the quadratic form
$$q_K(x) = \frac{1}{Vol_n(K)}\int_K\langle x,y\rangle^2dy - \left(\frac{1}{Vol_n(K)}\int_K\langle x,y\rangle dy\right)^2 \qquad(x\in\mathbb{R}^n) \tag{2.4}$$
where $\langle\cdot,\cdot\rangle$ is the standard scalar product in $\mathbb{R}^n$. Observe that when the barycenter of $K$ lies at the origin, the second term in (2.4) vanishes. When $q_K(x) = |x|^2 = \langle x,x\rangle$, we say that $K$ is isotropic. The unit ball of the norm $\sqrt{q_K(x)}$ is known as the Binet ellipsoid of $K$. See [MP] for equivalent definitions.

The inertia form of the convex body $K\subset\mathbb{R}^n$ is defined as
$$p_K(x) = \sup\left\{\langle x,y\rangle^2\,;\,q_K(y)\le 1\right\}. \tag{2.5}$$
Note that $p_K$ is a quadratic polynomial in $\mathbb{R}^n$. The most important case is when $K\subset\mathbb{R}^n$ is isotropic, as $p_K$ is proportional to $|x|^2$ in this case. The quadratic polynomial $p_K$ depends on $K$ in a linearly-equivariant way: that is, if $K\subset V$ is a convex body where $V$ is a finite-dimensional vector space, then the definition of the quadratic polynomial $p_K : V\to\mathbb{R}$ makes sense. The unit ball of the norm $\sqrt{p_K(x)}$ is known as the Legendre ellipsoid of $K$.

The Hilbert-Schmidt distance between two positive-definite quadratic forms $p_1, p_2 : \mathbb{R}^n\to\mathbb{R}$ is defined as follows: Write $p_1(\cdot,\cdot)$ for the inner product induced by $p_1$ on $\mathbb{R}^n$. There exists a unique linear operator $A : \mathbb{R}^n\to\mathbb{R}^n$, symmetric and positive-definite with respect to $p_1(\cdot,\cdot)$, such that
$$p_2(x) = p_1(Ax, x) \qquad\text{for } x\in\mathbb{R}^n.$$
We then set
$$d_{HS}(p_1, p_2) = \sqrt{\sum_{i=1}^n(\lambda_i-1)^2} \tag{2.6}$$
where $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $A$, repeated according to their multiplicity. Observe that $d_{HS}(p_1,p_2) = 0$ if and only if $p_1\equiv p_2$. Note also that $d_{HS}(p_1,p_2)$ is not necessarily symmetric in $p_1$ and $p_2$; this is of no importance here.
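In matrix terms (an added computational note, not from the thesis): if $P_1, P_2$ are the symmetric positive-definite matrices of the two forms, the operator $A$ above is $P_1^{-1}P_2$, so $d_{HS}$ is the $\ell^2$ distance from 1 of the generalized eigenvalues of the pencil $(P_2, P_1)$. A minimal sketch:

```python
import numpy as np

# Compute d_HS(p1, p2) for quadratic forms p_k(x) = x^T P_k x:
# the operator A with p2(x) = p1(Ax, x) is A = P1^{-1} P2, and d_HS
# is the l2 distance of its eigenvalues from 1.
def d_hs(P1, P2):
    lams = np.linalg.eigvals(np.linalg.solve(P1, P2))
    # eigenvalues of P1^{-1} P2 are real and positive for SPD P1, P2
    return np.sqrt(np.sum((np.real(lams) - 1.0) ** 2))

P1 = np.eye(2)
P2 = np.diag([4.0, 1.0])
print(d_hs(P1, P2))   # eigenvalues 4 and 1, so d_HS = 3
print(d_hs(P2, P1))   # eigenvalues 1/4 and 1, so d_HS = 3/4
```

The two printed values differ, which is exactly the non-symmetry of $d_{HS}$ noted above.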

Our first stability result involves unconditional convex bodies. A convex body $K\subset\mathbb{R}^n$ is said to be unconditional if
$$(x_1,\ldots,x_n)\in K \;\Longleftrightarrow\; (\pm x_1,\ldots,\pm x_n)\in K$$
for all $(x_1,\ldots,x_n)\in\mathbb{R}^n$ and all choices of signs. In other words, $K$ is invariant under coordinate reflections. The theorem reads,


Theorem 2.1.1 Let $K,T\subset\mathbb{R}^n$ be unconditional convex bodies, and $R\ge 1$. Assume that
$$Vol_n\left(\frac{K+T}{2}\right) \le R\sqrt{Vol_n(K)\,Vol_n(T)}.$$
Let $p_K(x)$ and $p_T(x)$ be the inertia forms of $K$ and $T$, respectively, defined in (2.4) and (2.5). Then
$$d_{HS}(p_K, p_T) \le C(\log n)(R-1)^5 + CR^4n^{-5}. \tag{2.7}$$
In particular, abbreviating $p(x) = p_K(x)$,
$$\left|\frac{\int_Kp(x)\,d\mu_K(x)}{\int_Tp(x)\,d\mu_T(x)} - 1\right| \le C\frac{(R-1)^5\log n}{\sqrt{n}} + CR^4n^{-4}. \tag{2.8}$$
Here, $C>0$ is a universal constant.

The above estimate may be used to provide a positive answer to question 1.1.3 in the case of unconditional bodies, up to a logarithmic factor. Namely,

Theorem 2.1.2 There exists a universal constant $C>0$ such that,
$$\sup_X\sqrt{\mathbb{E}\left(|X|-\sqrt{n}\right)^2} < C\log n \tag{2.9}$$
where the supremum runs over all isotropic random vectors $X$ in $\mathbb{R}^n$, uniformly distributed over an unconditional convex body.

Remark 2.1.3 In [K3], using different techniques, Klartag gave a proof of a slightly stronger inequality, namely, with a universal constant in the right-hand side.

Theorem 2.1.1 is connected to theorem 2.1.2 through the following proposition:

Proposition 2.1.4 Let $\varepsilon>0$ and let $K\subset\mathbb{R}^n$ be an isotropic convex body. For $s>0$ denote $K_s = K\cap(sB_2^n)$. Assume that
$$\left|\frac{\int_{K_s}|x|^2d\mu_{K_s}(x)}{\int_K|x|^2d\mu_K(x)} - 1\right| \le \varepsilon \tag{2.10}$$
for any $s>0$ with $Vol_n(K_s)/Vol_n(K)\in[1/8, 7/8]$. Then,
$$\int_K\left(\frac{|x|^2}{n} - 1\right)^2d\mu_K(x) \le C\varepsilon^2 \tag{2.11}$$
where $C>0$ is a universal constant.

Note that (2.10) will follow directly from theorem 2.1.1 and that (2.11) is exactly the thin-shell estimate we want.

In the second section of this chapter we derive stability estimates for general convex bodies, dropping the assumption that the bodies are unconditional. Our first estimate uses the pointwise estimate for the central limit theorem for convex bodies that we derived in the previous chapter, theorem 1.1.5, in order to derive an estimate in the same spirit as theorem 2.1.1 that applies to the general setting. We show,


Theorem 2.1.5 Let $K,T\subset\mathbb{R}^n$ be convex bodies and $R\ge 1$. Assume that
$$Vol_n\left(\frac{K+T}{2}\right) \le R\sqrt{Vol_n(K)\,Vol_n(T)}.$$
Let $p_K(x)$ and $p_T(x)$ be the inertia forms of $K$ and $T$, respectively, defined in (2.4) and (2.5). Then,
$$\left|\frac{\int_Tp_K(x-b_T)\,d\mu_T(x)}{\int_Kp_K(x-b_K)\,d\mu_K(x)} - 1\right| \le C\frac{R^{\alpha_2}}{n^{\alpha_1}}. \tag{2.12}$$
Furthermore,
$$\frac{1}{n}d_{HS}(p_K, p_T) \le CR^{\alpha_2}/n^{\alpha_1}. \tag{2.13}$$
Here $C, \alpha_1, \alpha_2 > 0$ are universal constants and $b_K = \int_Kx\,dx/Vol_n(K)$ is the barycenter of $K$, and similarly for $b_T$.

The next theorem we present demonstrates an alternative approach for proving stability estimates in the general case. The theorem consists of two estimates: the first estimate resembles the bound (2.13), and in fact has a far better dependence on the dimension, at the cost of a worse dependence on the constant $R$. The second estimate is of a slightly different kind: it concerns the Wasserstein distance between the uniform measures on the two bodies. It shows that the Wasserstein distance is dominated by some function of the dimension and of the constant $R$.

Recall that for two densities $f, g$ on $\mathbb{R}^n$, the Wasserstein distance $W_2(f,g)$ is defined by
$$W_2(f,g)^2 = \inf_\xi\int_{\mathbb{R}^n\times\mathbb{R}^n}|x-y|^2d\xi(x,y)$$
where the infimum is taken over all measures $\xi$ on $\mathbb{R}^{2n}$ whose marginals onto the first and last $n$ coordinates are the measures whose densities are $f$ and $g$ respectively (see, e.g., [Vil] for more information). By slight abuse of notation, for two convex bodies $K, T$, by $W_2(K,T)$ we refer to the Wasserstein distance between the uniform measures on the bodies.

We will also need to recall the following definitions (appearing in the previous chapter):
$$\kappa = \liminf_{n\to\infty}\frac{\log\sigma_n}{\log n}, \qquad \tau_n = \max\Big(1,\; \max_{1\le j\le n}\frac{\sigma_j}{j^\kappa}\Big), \tag{2.14}$$
so that $\sigma_n\le\tau_nn^\kappa$. Note that the thin-shell conjecture implies $\kappa = 0$ and $\tau_n < C$.

We are now ready to formulate,

Theorem 2.1.6 Let $K,T\subset\mathbb{R}^n$ be convex bodies and $R\ge 1$. Assume that
$$Vol_n\left(\frac{K+T}{2}\right) \le R\sqrt{Vol_n(K)\,Vol_n(T)} < n^{10}.$$
Let $p_K(x)$ and $p_T(x)$ be the inertia forms of $K$ and $T$, respectively, defined in (2.4) and (2.5). Then,
$$d_{HS}(p_K, p_T) \le C'\left(R^5 + \tau_nR\max(\sqrt{\log n}, n^\kappa)\right). \tag{2.15}$$


Furthermore, if the barycenters of both $K$ and $T$ lie at the origin and $p_K(x) = |x|^2$, then,
$$W_2(K, T) \le n^{1/4}\sqrt{\sigma_n}\,R^{5/2}. \tag{2.16}$$

The last result of this chapter continues the same line. It is in fact weaker than theorem 2.1.6 under the currently best known bound for $\sigma_n$. However, if better bounds for the thin-shell constant are proved, the result below may become stronger, and it is in fact tight under the thin-shell hypothesis. The result reads,

Theorem 2.1.7 For every $\varepsilon>0$ there exists a constant $C(\varepsilon)$ such that the following holds: Let $K, T$ be convex bodies whose volume is 1 and whose barycenters lie at the origin. Suppose that the covariance matrix of the uniform measure on $K$ is equal to $L_KId$ for a constant $L_K>0$. Denote,
$$V = Vol_n\left(\frac{K+T}{2}\right), \tag{2.17}$$
and define
$$\delta = C(\varepsilon)L_KV^5\tau_nn^{2(\kappa-\kappa^2)+\varepsilon}.$$
Then,
$$Vol_n(K^\delta\cap T) \ge 1-\varepsilon$$
(where $K^\delta$ is the $\delta$-extension of $K$, defined as $K^\delta = \{x\in\mathbb{R}^n\,;\,\exists y\in K \text{ s.t. } |x-y|\le\delta\}$).

Remark 2.1.8 Using the bound in [Gu-M], theorem 2.1.7 gives,
$$\delta = C(\varepsilon)n^{\frac{4}{9}+\varepsilon}V^5L_K.$$
Note that if no assumption is made on the value of the constant $V$, even if the covariance matrices of $K$ and $T$ are assumed to be equal, the best corresponding bound would be $\delta = C\sqrt{n}L_K$, as demonstrated, for example, by a cube and a ball.

Remark 2.1.9 The bounds of theorems 2.1.6 and 2.1.7 complement, in some sense, the result proven in [Seg], based on the result in [FMP1], which reads,
$$Vol_n((K+x_0)\Delta T)^2 \le n^4\left(Vol_n((K+T)/2) - 1\right)$$
for some choice of $x_0$, where $\Delta$ denotes the symmetric difference between the sets. Unlike our results, the results in [Seg] and [FMP1] give much more information as the expression $Vol_n((K+T)/2)-1$ approaches zero. On the other hand, the results presented here already give some information when $Vol_n((K+T)/2) = 10$.

As for the structure of this chapter, theorem 2.1.1 and proposition 2.1.4 are proven in section 2.2. Theorem 2.1.5 is proven in section 2.3.1, theorem 2.1.6 is proven in section 2.3.2, and the proof of theorem 2.1.7 is in section 2.3.3.

This entire chapter, except for subsection 2.3.3, is based on a joint work with B. Klartag.


2.2 Stability of the covariance matrix: the unconditional case

The goal of this section is to prove theorems 2.1.2 and 2.1.1. Its structure is as follows: in the first two subsections we establish some well-known facts about one-dimensional log-concave measures. In the third one we prove theorem 2.1.1, and in the fourth we prove theorem 2.1.2.

2.2.1 Background on log-concave densities on the line

In this section we recall some facts, all of which are well known to experts, about log-concave densities. A function $\rho : \mathbb{R}\to[0,\infty)$ is log-concave if for any $x,y\in\mathbb{R}$,
$$\rho(\lambda x + (1-\lambda)y) \ge \rho(x)^\lambda\rho(y)^{1-\lambda} \qquad\text{for all } 0<\lambda<1.$$
A probability measure on $\mathbb{R}$ is called log-concave if it has a log-concave density. Let $\mu$ be a log-concave probability measure on $\mathbb{R}$, whose log-concave density is denoted by $\rho : \mathbb{R}\to[0,\infty)$. Write
$$\Phi(t) = \mu((-\infty, t]) = \int_{-\infty}^t\rho(s)\,ds \qquad(t\in\mathbb{R}).$$
A nice characterization of log-concavity we learned from Bobkov [Bob2] is that $\mu$ is log-concave if and only if the function
$$t\mapsto\rho(\Phi^{-1}(t)), \qquad t\in[0,1],$$
is a concave function. This characterization lies at the heart of the proof of the following Poincare-type inequality, which appears as Corollary 4.3 in Bobkov [Bob1]:

Lemma 2.2.1 Let $\mu$ be a log-concave probability measure on the real line, and set
$$Var(\mu) = \int x^2d\mu(x) - \left(\int x\,d\mu(x)\right)^2$$
for the variance of $\mu$. Then for any smooth function $f$ with $\int f\,d\mu = 0$,
$$\int_{\mathbb{R}}f^2(t)\,d\mu(t) \le 12\,Var(\mu)\int_{\mathbb{R}}|f'(t)|^2d\mu(t).$$
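For a quick numerical illustration of Lemma 2.2.1 (added here; the measure and test function are our choices): take $d\mu = \frac12e^{-|t|}dt$, which is log-concave with $Var(\mu) = 2$, and $f(t) = \sin t$, which satisfies $\int f\,d\mu = 0$ by symmetry. Then $\int f^2d\mu = 0.4$ while $12\,Var(\mu)\int|f'|^2d\mu = 14.4$, so the inequality holds with a large margin:

```python
import numpy as np

# Numerical check (illustration) of the Poincare-type inequality of
# Lemma 2.2.1 for mu the two-sided exponential, d mu = (1/2) e^{-|t|} dt
# (variance 2), with the centered test function f(t) = sin t.
t = np.linspace(-40.0, 40.0, 800_001)
dt = t[1] - t[0]
mu = 0.5 * np.exp(-np.abs(t))

lhs = np.sum(np.sin(t) ** 2 * mu) * dt                 # int f^2 dmu   (= 0.4)
var = np.sum(t ** 2 * mu) * dt                         # Var(mu)       (= 2)
rhs = 12.0 * var * np.sum(np.cos(t) ** 2 * mu) * dt    # 12 Var int f'^2 dmu

print(lhs, rhs)
```

The constant 12 is universal and not sharp for any particular measure, as the slack here shows.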

Further information about log-concave densities on the line is provided by the following standard lemma.

Lemma 2.2.2 Let $f : \mathbb{R}\to[0,\infty)$ be a log-concave probability density. Denote by $b = \int xf(x)\,dx$ the barycenter of the density $f$, and let $\sigma^2$ be the variance of the probability measure whose density is $f$. Then, for any $t\in\mathbb{R}$,

(a) $f(t) \le \frac{C}{\sigma}\exp(-c|t-b|/\sigma)$; and

(b) if $|t-b|\le c\sigma$, then $f(t)\ge\frac{c}{\sigma}$.

Here, $c, C > 0$ are universal constants.


Proof: Part (a) is the content of Lemma 3.2 in Bobkov [Bob3]. In order to prove (b), we show that for some $t_0\ge b + c_0\sigma$,
$$f(t_0) \ge 1/(10C_1\sigma) \tag{2.18}$$
with $c_0 = 1/(10C)$ and $C_1 = c^{-1}\log(10C/c)$, where here $c, C$ are the constants from part (a). Indeed, if there is no such $t_0$, then by (a),
$$\int_b^\infty f(t)\,dt \le \int_b^{b+c_0\sigma}\frac{C}{\sigma}\,dt + \int_{b+c_0\sigma}^{b+C_1\sigma}\frac{dt}{10C_1\sigma} + \int_{b+C_1\sigma}^\infty\frac{C}{\sigma}\exp(-c|t-b|/\sigma)\,dt \le \frac{3}{10} < \frac{1}{e},$$
in contradiction to Lemma 3.3 in Bobkov [Bob3]. By symmetry of the argument, there exists some $t_1\le b - c_0\sigma$ with
$$f(t_1) \ge 1/(10C_1\sigma).$$
From log-concavity, $f(t)\ge 1/(10C_1\sigma)$ for $t\in[t_1, t_0]$, and (b) is proven.

The following lemma is essentially a one-dimensional version of the theorems proven in this chapter. It is concerned with the supremum-convolution, which is a functional version of the Minkowski sum. The lemma states, roughly, that if the supremum-convolution of two log-concave probability densities has integral close to 1, then their respective variances cannot be too far from each other.

Lemma 2.2.3 Let $X, Y$ be random variables with corresponding densities $f_X, f_Y$ and variances $\sigma_X^2, \sigma_Y^2$. Assume that $f_X$ and $f_Y$ are log-concave. Define
$$h(t) = \sup_{s\in\mathbb{R}}\sqrt{f_X(t+s)f_Y(t-s)}, \tag{2.19}$$
a supremum-convolution of $f_X$ and $f_Y$. Then,
$$\int_{\mathbb{R}}h(t)\,dt \ge c\sqrt{\max\left(\frac{\sigma_X}{\sigma_Y},\,\frac{\sigma_Y}{\sigma_X}\right)}$$
where $c>0$ is a universal constant.

Proof: It follows from Lemma 2.2.2(b) that there exist intervals $I_X, I_Y$ such that,
$$\mathrm{Length}(I_X)\ge c\sigma_X, \qquad \mathrm{Length}(I_Y)\ge c\sigma_Y,$$
and,
$$f_X(t)\ge\frac{c}{\sigma_X}\;\;\forall t\in I_X; \qquad f_Y(s)\ge\frac{c}{\sigma_Y}\;\;\forall s\in I_Y.$$
Combining this with (2.19), we learn that there exists an interval $I_Z$ with $\mathrm{Length}(I_Z)\ge c(\sigma_X+\sigma_Y)/2$ such that,
$$h(t)\ge\frac{c}{\sqrt{\sigma_X\sigma_Y}}, \qquad\forall t\in I_Z.$$
This implies,
$$\int_{\mathbb{R}}h(t)\,dt \ge \int_{I_Z}h(t)\,dt \ge \frac{c^2}{2}\cdot\frac{\sigma_X+\sigma_Y}{\sqrt{\sigma_X\sigma_Y}} \ge \frac{c^2}{2}\sqrt{\max\left(\frac{\sigma_X}{\sigma_Y},\,\frac{\sigma_Y}{\sigma_X}\right)},$$
which finishes the proof.
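For centered Gaussian densities, the quantities in Lemma 2.2.3 can be computed in closed form (an added illustration, not part of the proof): maximizing over $s$ gives $\int h = \sqrt{(\sigma_X^2+\sigma_Y^2)/(2\sigma_X\sigma_Y)}$, which indeed dominates $c\sqrt{\max(\sigma_X/\sigma_Y,\,\sigma_Y/\sigma_X)}$. The grid computation below reproduces the closed form for $\sigma_X = 1$, $\sigma_Y = 4$:

```python
import numpy as np

# Grid-based evaluation (illustration) of the sup-convolution (2.19) for
# two centered Gaussians with standard deviations sX and sY. Closed form:
#   int h = sqrt((sX^2 + sY^2) / (2 sX sY)).
sX, sY = 1.0, 4.0
t = np.linspace(-30.0, 30.0, 1201)
s = np.linspace(-30.0, 30.0, 1201)
T, S = np.meshgrid(t, s, indexing="ij")

fX = lambda u: np.exp(-u**2 / (2 * sX**2)) / (sX * np.sqrt(2 * np.pi))
fY = lambda u: np.exp(-u**2 / (2 * sY**2)) / (sY * np.sqrt(2 * np.pi))

h = np.sqrt(fX(T + S) * fY(T - S)).max(axis=1)   # sup over s, for each t
integral = h.sum() * (t[1] - t[0])

closed = np.sqrt((sX**2 + sY**2) / (2 * sX * sY))
print(integral, closed)   # both about 1.458
```

Note that $\int h = 1$ exactly when $\sigma_X = \sigma_Y$, consistent with the Prekopa-Leindler inequality being tight for dilates.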

Recall the definition (2.4) of the quadratic form $q_K(x)$ associated with a convex body $K\subset\mathbb{R}^n$. As a corollary of Lemma 2.2.3, we have,


Corollary 2.2.4 Let $R>1$ and let $K,T\subset\mathbb{R}^n$ be convex bodies such that
$$Vol_n\left(\frac{K+T}{2}\right) < R\sqrt{Vol_n(K)\,Vol_n(T)}.$$
Then,
$$\frac{1}{CR^4}q_K(x) \le q_T(x) \le CR^4q_K(x) \qquad\text{for all } x\in\mathbb{R}^n, \tag{2.20}$$
where $C>0$ is a universal constant.

Proof: Fix a unit vector $\theta\in\mathbb{R}^n$. Let $\bar X, \bar Y$ be random vectors uniformly distributed on $K, T$ respectively, and define $X = \langle\bar X,\theta\rangle$ and $Y = \langle\bar Y,\theta\rangle$. Observe that
$$q_K(\theta) = Var(X), \qquad q_T(\theta) = Var(Y).$$
In order to prove (2.20), it suffices to show that
$$\max\left(\frac{Var(X)}{Var(Y)},\,\frac{Var(Y)}{Var(X)}\right) \le CR^4. \tag{2.21}$$
Denote the respective densities of $X, Y$ by $f_X, f_Y$. The Prekopa-Leindler theorem (see, e.g., the first pages of Pisier [Pis]) implies that $f_X$ and $f_Y$ are log-concave. Furthermore, using the Prekopa-Leindler theorem again we derive,
$$Vol_n\left(\frac{K+T}{2}\right) \ge \int_{\mathbb{R}}\sup_{s\in\mathbb{R}}\sqrt{f_X(t-s)Vol_n(K)\,f_Y(t+s)Vol_n(T)}\,dt. \tag{2.22}$$
Hence,
$$\int_{\mathbb{R}}\sup_{s\in\mathbb{R}}\sqrt{f_X(t-s)f_Y(t+s)}\,dt \le R.$$
Plugging this into Lemma 2.2.3, we deduce (2.21).

For a measure $\mu$ and a measurable set $A\subset\mathbb{R}$ with $0<\mu(A)<\infty$, define the measure $\mu|_A$, the conditioning of the measure $\mu$ to $A$, as follows:
$$\mu|_A(B) = \frac{\mu(A\cap B)}{\mu(A)}.$$
Clearly, for a log-concave measure $\mu$ and an interval $I$, the measure $\mu|_I$ remains log-concave. The following lemma is well known to experts.

Lemma 2.2.5 Let $\mu$ be a log-concave probability measure on $\mathbb{R}$. Then for any two intervals $J_1\subseteq J_2\subset\mathbb{R}$,
$$Var(\mu|_{J_1}) \le Var(\mu|_{J_2})$$
(the "intervals" may also include rays, or the entire line: any convex set in $\mathbb{R}$).

Proof: It is enough to prove the lemma for $J_1, J_2$ being rays. Denote by $I$ the interior of the support of $\mu$, and by $\rho$ the density of $\mu$. Abbreviate $\Phi(t) = \mu((-\infty, t])$, $\mu_t = \mu|_{(-\infty,t]}$ and set
$$e(t) = \int_{\mathbb{R}}x\,d\mu_t(x), \qquad v(t) = Var(\mu_t) = \int_{\mathbb{R}}x^2d\mu_t(x) - e^2(t), \qquad t\in I.$$
Then for any $t\in I$,
$$e'(t) = \frac{\rho(t)}{\Phi(t)}\left(t-e(t)\right), \qquad v'(t) = \frac{\rho(t)}{\Phi(t)}\left((t-e(t))^2 - v(t)\right).$$
To prove the lemma, it suffices to show that $v'(t)\ge 0$ for any $t$, or equivalently, that
$$v(t) - (t-e(t))^2 \le 0 \qquad\text{for all } t\in I.$$
This is equivalent to showing that for any log-concave random variable $X$ such that $X\ge 0$ almost surely and $\mathbb{E}[X] = 1$, one has $Var[X]\le 1$. This follows immediately from Borell [Bor, Lemma 4.1]; see also Lovasz and Vempala [LV, Lemma 5.3(c)].

Remark. When $\mu$ is an absolutely continuous measure on $\mathbb{R}$, whose support is a connected set, and whose smooth density does not vanish on the support, Lemma 2.2.5 is in fact a characterization of log-concavity.
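A quick numerical illustration of Lemma 2.2.5 (added here; the Gaussian is chosen for concreteness): conditioning the standard Gaussian to nested intervals produces nested variances.

```python
import numpy as np

# Illustration of Lemma 2.2.5: for a log-concave measure (the standard
# Gaussian), conditioning to a smaller interval gives a smaller variance.
x = np.linspace(-12.0, 12.0, 600_001)
dx = x[1] - x[0]
rho = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def cond_var(a, b):
    # variance of the Gaussian conditioned to the interval [a, b]
    m = (x >= a) & (x <= b)
    mass = rho[m].sum() * dx
    mean = (x[m] * rho[m]).sum() * dx / mass
    return ((x[m] - mean) ** 2 * rho[m]).sum() * dx / mass

v1 = cond_var(-0.5, 1.0)   # smaller interval J1
v2 = cond_var(-2.0, 3.0)   # larger interval J2 containing J1
print(v1, v2)              # v1 < v2 < 1 = Var of the full Gaussian
```

Monotonicity fails for general (non-log-concave) densities, which is exactly the content of the remark above.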

2.2.2 Transportation in one dimension

In this section we recall some basic definitions concerning transportation of one-dimensional measures. We also discuss the transportation in the case where both the source measure and the target measure are log-concave. For a measure $\mu_1$ and a map $F$ we denote by $F_*(\mu_1)$ the push-forward of the measure $\mu_1$ by the map $F$, that is,
$$F_*(\mu_1)(A) = \mu_1(F^{-1}(A))$$
for any measurable set $A$. Suppose $\mu_1$ and $\mu_2$ are Borel probability measures on the real line, with continuous densities $\rho_1$ and $\rho_2$ respectively. We further assume that the support of $\mu_2$ is connected. For $t\in\mathbb{R}$ set
$$\Phi_j(t) = \mu_j((-\infty, t]), \qquad j = 1, 2.$$
For $j=1,2$, the map $\Phi_j^{-1}$ pushes forward the uniform measure on $[0,1]$ to $\mu_j$. The monotone transportation map between $\mu_1$ and $\mu_2$ is the continuous, non-decreasing function
$$F(t) = \Phi_2^{-1}(\Phi_1(t)),$$
defined for $t\in\mathrm{Supp}(\mu_1)$, where $\mathrm{Supp}(\mu_1)$ is the support of the measure $\mu_1$. Observe that
$$F_*(\mu_1) = \mu_2$$
and
$$\rho_1(t) = F'(t)\,\rho_2(F(t)) \qquad\text{for } t\in\mathrm{Supp}(\mu_1). \tag{2.23}$$

We define a distance-function between $\mu_1$ and $\mu_2$ by setting
$$d(\mu_1,\mu_2) = \sqrt{\int_{\mathbb{R}}\min\left\{(F'(t)-1)^2,\,1\right\}d\mu_1(t)}.$$

The purpose of this definition will become clear only in the next section. A more standard metric between probability measures is the $L^2$-Wasserstein metric; see Villani's book [Vil] for more information. In our case, the $L^2$-Wasserstein metric has the simple formula
$$W_2(\mu_1,\mu_2) = \sqrt{\int_{\mathbb{R}}|x-F(x)|^2d\mu_1(x)}. \tag{2.24}$$
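As an added numerical illustration of $F$ and of formula (2.24): for two centered Gaussians with standard deviations $s_1, s_2$ the monotone map is linear, $F(x) = (s_2/s_1)x$, and $W_2(\mu_1,\mu_2) = |s_1 - s_2|$; the grid computation below recovers this.

```python
import numpy as np

# Illustration of the monotone transport map F = Phi_2^{-1} o Phi_1 and of
# formula (2.24), computed on a grid for two centered Gaussians. For
# standard deviations s1, s2 the map is F(x) = (s2/s1) x and
# W2(mu1, mu2) = |s1 - s2|.
s1, s2 = 1.0, 2.0
x = np.linspace(-15.0, 15.0, 300_001)
dx = x[1] - x[0]
rho1 = np.exp(-x**2 / (2 * s1**2)) / (s1 * np.sqrt(2 * np.pi))
rho2 = np.exp(-x**2 / (2 * s2**2)) / (s2 * np.sqrt(2 * np.pi))

Phi1 = np.cumsum(rho1) * dx          # CDF of mu1 on the grid
Phi2 = np.cumsum(rho2) * dx          # CDF of mu2 on the grid
F = np.interp(Phi1, Phi2, x)         # F(x) = Phi_2^{-1}(Phi_1(x))

w2 = np.sqrt(np.sum((x - F) ** 2 * rho1) * dx)
print(w2)    # close to |s1 - s2| = 1
```

For one-dimensional measures this quantile coupling is exactly the optimal one, which is why (2.24) has such a simple form.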

One difference between our distance-function $d$ and the Wasserstein metric is that with respect to $d$, the distance between a measure and its translation is zero. The goal of the rest of the section is to prove the following stability result with respect to the distance-function $d$. A probability measure $\mu$ on $\mathbb{R}$ is said to be even if $\mu(A) = \mu(-A)$ for any measurable $A\subset\mathbb{R}$, where $-A = \{-x\,;\,x\in A\}$.

Proposition 2.2.6 Suppose that $\mu_1$ and $\mu_2$ are even log-concave probability measures on $\mathbb{R}$. Denote $\sigma = \sqrt{Var(\mu_1) + Var(\mu_2)}$. Then,
$$|Var(\mu_2) - Var(\mu_1)| \le C\sigma^2d(\mu_1,\mu_2)$$
where $C>0$ is a universal constant.

We begin the proof of Proposition 2.2.6 with the following crude lemma.

Lemma 2.2.7 Let µ1 and µ2 be probability measures on the real line.

(i) If µ1 and µ2 are even, then,

W2(µ1, µ2)² ≤ 2(Var(µ1) + Var(µ2)).

(ii) If µ1, µ2 are supported on [A,∞) and [B,∞) respectively, and have non-increasing densities,

then one has

W2(µ1, µ2) ≤ |B − A| + 10√(Var(µ1) + Var(µ2)).

Proof: Denote by δ0 the Dirac measure at the origin. Assume that µ1 and µ2 are even. By the triangle

inequality for the Wasserstein metric,

W2(µ1, µ2) ≤ W2(µ1, δ0) + W2(δ0, µ2) = √Var(µ1) + √Var(µ2),

and (i) follows. We move to the proof of (ii). Denote e = E[µ1]. It follows from the fact that the density

of µ1 is non-increasing that the expectation of µ1 is larger than its median. Hence

µ1([A, e]) ≥ 1/2, and µ1([A, A + (e − A)/2]) ≥ 1/4.

Therefore,

Var(µ1) ≥ ∫_A^{A+(e−A)/2} (e − x)² dµ1(x) ≥ (e − A)²/16.

Let δA, δB, δe be the Dirac measures supported on A,B, e respectively. Then by the triangle inequality,

W2(µ1, δA) ≤ W2(µ1, δe) + W2(δe, δA) = W2(µ1, δe) + (e − A) ≤ 5√Var(µ1).


In the same manner,

W2(µ2, δB) ≤ 5√Var(µ2).

Therefore, by using W2(µ1, µ2) ≤ W2(µ1, δA) + W2(δA, δB) + W2(δB, µ2),

W2(µ1, µ2) ≤ 10√(Var(µ1) + Var(µ2)) + |B − A|.

Observe that when µ1 and µ2 are even, log-concave probability measures with Var(µ1) + Var(µ2) ≤ σ², then by the Cauchy-Schwarz inequality,

Var(µ2) − Var(µ1) = ∫_R (|F(x)|² − |x|²) dµ1(x) (2.25)

≤ ( ∫_R |F(x) − x|² dµ1(x) · ∫_R (2|F(x)|² + 2|x|²) dµ1(x) )^{1/2} ≤ 2σW2(µ1, µ2).

With this inequality, the proof of Proposition 2.2.6 is reduced to the following proposition:

Proposition 2.2.8 Suppose that µ1 and µ2 are even log-concave probability measures on R. Denote

σ = √(Var(µ1) + Var(µ2)). Then,

W2(µ1, µ2) ≤ Cσd(µ1, µ2) (2.26)

where C > 0 is a universal constant.

Proof: Use (2.23), the definition of F, and the fact that Φ1⁻¹ pushes forward the uniform measure on [0, 1] to µ1, in order to obtain

∫_R min{(F′(t) − 1)², 1} dµ1(t) = ∫_0^1 min{ (ρ1(Φ1⁻¹(t)) / ρ2(Φ2⁻¹(t)) − 1)², 1 } dt.

Recall that when µj is a log-concave measure, the function ρj(Φj⁻¹(t)) is concave on [0, 1]. Denote Ij(t) = ρj(Φj⁻¹(t)) for j = 1, 2; these are concave non-negative functions on [0, 1], with the property that Ij(t) = Ij(1 − t) for any t ∈ [0, 1]. These functions are therefore continuous on (0, 1), increasing on [0, 1/2], and decreasing on [1/2, 1]. Let ε > 0 be such that

ε² = d²(µ1, µ2) = ∫_0^1 min{ (I1(t)/I2(t) − 1)², 1 } dt. (2.27)
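As a sanity check of the properties of Ij used here, the following sketch (an illustration, not part of the proof) computes I(t) = ρ(Φ⁻¹(t)) for the standard Gaussian — an illustrative log-concave choice — and verifies its concavity and the symmetry I(t) = I(1 − t) numerically.

```python
import math
from statistics import NormalDist

# For the standard Gaussian, I(t) = ρ(Φ⁻¹(t)) is concave on (0,1) and
# symmetric about t = 1/2, the two properties used in the proof above.
nd = NormalDist()
rho = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
I = lambda t: rho(nd.inv_cdf(t))

# symmetry I(t) = I(1 − t)
assert all(abs(I(t) - I(1.0 - t)) < 1e-9 for t in (0.1, 0.25, 0.4))

# discrete concavity: each value lies above the chord of its neighbours
ts = [i / 1000.0 for i in range(1, 1000)]
vals = [I(t) for t in ts]
assert all(vals[i] >= (vals[i - 1] + vals[i + 1]) / 2.0 - 1e-9
           for i in range(1, len(vals) - 1))
```

For the Gaussian one can even compute I″(t) = −1/ρ(Φ⁻¹(t)) < 0 in closed form, so the discrete check is guaranteed to pass up to floating-point error.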

Suppose first that ε > 1/10. In this case, by part (i) of Lemma 2.2.7,

W2(µ1, µ2)² ≤ 2(Var(µ1) + Var(µ2)) = 2σ²,

so whenever ε > 1/10, the inequality (2.26) holds trivially for a sufficiently large universal constant C > 0.

From now on, we restrict attention to the case where ε ≤ 1/10. We divide the rest of the proof into

several steps.


Step 1: Let us prove that there exists a universal constant C > 0 such that

∫_{2ε²}^{1−2ε²} (I1(t)/I2(t) − 1)² dt ≤ Cε². (2.28)

To that end, we will show that

I1(t) ≤ 4I2(t) for all t ∈ [2ε², 1 − 2ε²]. (2.29)

Once we prove (2.29), the advertised bound (2.28) follows from (2.27). We thus focus on the proof of

(2.29). Suppose that t1 ∈ (0, 1/2] satisfies I1(t1) > 4I2(t1). We will show that in this case

t1 ≤ 2ε². (2.30)

If I1(t) > 2I2(t) for all t ∈ (0, t1), then t1 ≤ ε² according to (2.27). Thus (2.30) holds true in this case.

Otherwise, there exists 0 < t < t1 with I1(t) ≤ 2I2(t). Let t0 be the supremum over all such t. Since I1 and I2 are continuous and non-decreasing on (0, t1],

I1(t0) = 2I2(t0) ≤ 2I2(t1) < I1(t1)/2.

Since I1 is concave, non-decreasing and non-negative on [0, t1], necessarily t0 < t1/2. We conclude that I1(t) > 2I2(t) for any t ∈ [t1/2, t1]. From (2.27) it follows that t1 ≤ 2ε². Therefore (2.30) is proven

in all cases. By symmetry, we conclude (2.29), and the proof of (2.28) is complete.

Step 2: For any 0 ≤ T ≤ Φ1⁻¹(1 − 2ε²) we have

∫_{−T}^{T} (F′(t) − 1)² dµ1(t) ≤ ∫_{2ε²}^{1−2ε²} (I1(t)/I2(t) − 1)² dt ≤ Cε²,

where the last inequality is the content of Step 1. Denote ν = µ1|[−T,T], an even log-concave probability measure. According to Lemma 2.2.5, we have Var(ν) ≤ Var(µ1) ≤ σ². Note that the function F(t) − t is odd, hence its ν-average is zero. Using the Poincaré-type inequality of Lemma 2.2.1, we see that for any 0 ≤ T ≤ Φ1⁻¹(1 − 2ε²),

∫_{−T}^{T} (F(t) − t)² dµ1(t) ≤ 12 Var(ν) ∫_{−T}^{T} (F′(t) − 1)² dµ1(t) ≤ Cσ²ε². (2.31)

Step 3: Let T1 = Φ1⁻¹(1 − 3ε²) and T2 = Φ1⁻¹(1 − 2ε²). We use (2.31) and conclude that there exists T1 ≤ T ≤ T2 with

|F(T) − T|² ≤ Cσ²ε² / µ1([T1, T2]) = Cσ². (2.32)

Denote ν1 = µ1|[T,∞) and ν2 = µ2|[F(T),∞), log-concave probability measures with Var(ν1) + Var(ν2) ≤ σ². Note that we have, thanks to (2.31),

W2(µ1, µ2)² = ∫_{−T}^{T} (F(t) − t)² dµ1(t) + 2∫_T^∞ (F(t) − t)² dµ1(t) ≤ Cσ²ε² + 2µ1([T,∞)) W2(ν1, ν2)².

In order to prove the proposition it remains to show that W2(ν1, ν2)² ≤ Cσ². But thanks to (2.32), the latter is a direct consequence of part (ii) of Lemma 2.2.7: since T, F(T) > 0, the log-concave densities of ν1 and ν2 are non-increasing. This finishes the proof.


2.2.3 Unconditional Convex Bodies

In this section we prove Theorem 2.1.1. The main tool in the proof is the Knothe map from [Kn],

which we define next. Let µ1 and µ2 be Borel probability measures on Rn, with densities ρ1 and ρ2

respectively. We further assume that the support of µ2 is a convex set, and that ρ2 does not vanish in the

interior of Supp(µ2). The Knothe map between µ1 and µ2 is the continuous function F = (F1, . . . , Fn) :

Supp(µ1)→ Supp(µ2) for which

1. F∗(µ1) = µ2.

2. For any j, the function Fj(x1, . . . , xn) actually depends only on the variables x1, . . . , xj . We may

thus speak of Fj(x1, . . . , xj).

3. For any j, and for any fixed x1, . . . , xj−1, the function Fj(x1, . . . , xj) is increasing in xj .

It may be proven by induction on n (see [Kn]) that the Knothe map between µ1 and µ2 always exists, and

in fact, the three requirements above determine the function F completely. Furthermore, assume that µ1

and µ2 have densities ρ1 and ρ2, respectively, and that ρi is continuous in the interior of Supp(µi) for

i = 1, 2. Denoting λj(x) = ∂Fj(x)/ ∂xj , we have

∏_{j=1}^n λj(x) = JF(x) = ρ1(x)/ρ2(F(x))

for any x in the interior of Supp(µ1), where JF (x) is the Jacobian of the map F .
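A tiny worked example (my own illustration, not from the text): the Knothe map from the unit square to a triangle, where the product of the partial derivatives λj can be checked against the density ratio in closed form.

```python
import math

# Knothe map from µ1 = Uniform([0,1]²) to µ2 = Uniform(T),
# T = {(u, v) : 0 <= v <= u <= 1} (area 1/2, so ρ2 ≡ 2 on T).
# First coordinate: the marginal of µ2 has CDF u², so F1(x1) = sqrt(x1).
# Given u = F1(x1), the conditional law is uniform on [0, u], so F2 = sqrt(x1)·x2.
def F(x1, x2):
    return math.sqrt(x1), math.sqrt(x1) * x2

def knothe_jacobian(x1, x2):
    lam1 = 0.5 / math.sqrt(x1)   # λ1 = ∂F1/∂x1
    lam2 = math.sqrt(x1)         # λ2 = ∂F2/∂x2
    return lam1 * lam2

# ∏ λj = J_F = ρ1/ρ2(F): here the ratio is 1/2 everywhere.
for (a, b) in [(0.2, 0.7), (0.5, 0.5), (0.9, 0.1)]:
    assert abs(knothe_jacobian(a, b) - 0.5) < 1e-12
```

Note how F2 depends on both x1 and x2 while F1 depends on x1 alone, exactly as in properties 1–3 above.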

We say that a function ρ : Rn → [0,∞) is unconditional if it is invariant under coordinate reflections,

i.e., if

ρ(x1, ..., xn) = ρ(±x1, ...,±xn)

for all (x1, ..., xn) ∈ Rn and for any choice of signs. We say that a probability measure on Rn is

unconditional if it has an unconditional density. For j = 1, . . . , n and x ∈ Rn we denote

πj(x) = xj and Sj(x) = (x1, . . . , xj−1,−xj , xj+1, . . . , xn).

In what follows, we abbreviate πj(µ) = (πj)∗(µ).

Lemma 2.2.9 Let K1 and K2 be convex bodies in Rn, let µi = µKi (i = 1, 2) be the uniform probability

measure on Ki, and let F = (F1, . . . , Fn) be the Knothe map between µ1 and µ2. Fix j = 1, . . . , n and

assume that

K1 = Sj(K1) and K2 = Sj(K2). (2.33)

That is, K1 and K2 are invariant under reflection with respect to the jth coordinate. Then,

(Var(πj(µ1)) − Var(πj(µ2)))² ≤ C(log n)²σj⁴ ( ∫_{K1} min{(λj(x) − 1)², 1} dµ1(x) + 1/n²⁰ )

where σj = √(Var(πj(µ1)) + Var(πj(µ2))) and where, as above, λj(x) = ∂Fj(x)/∂xj.


Proof: Fix some α > 0 whose value will be chosen later. For a measurable A ⊂ Rn and for 1 ≤ j ≤ n, define the normalized restrictions

µ̄1(A) = µ1(A ∩ {|xj| ≤ ασj log n}) / µ1({|xj| ≤ ασj log n})

and

µ̄2(A) = µ2(A ∩ {|xj| ≤ ασj log n}) / µ2({|xj| ≤ ασj log n}).

By part (i) of lemma 2.2.2, we deduce that there exists a universal constant C > 0 such that whenever α > C, one has

|Var(πj(µ1)) − Var(πj(µ̄1))| + |Var(πj(µ2)) − Var(πj(µ̄2))| ≤ σj²/n¹⁰. (2.34)

In view of the last inequality, we may restrict our attention to the measures µ̄1, µ̄2. Denote P(x1, . . . , xn) =

(x1, . . . , xj). Consider the log-concave probability measures ν1 = P∗(µ̄1) and ν2 = P∗(µ̄2) on Rj. Observe that the map T = (F1, . . . , Fj) : Rj → Rj is the Knothe map between ν1 and ν2. Furthermore, fix x̄ = (x1, . . . , xj−1) ∈ Rj−1 and consider the line segment ℓ = ℓ(x̄) = {(x1, ..., xj) ; xj ∈ R} ∩ P(K1). Then T(ℓ) is again a line segment in Rj, parallel to ℓ.

Since ν1 has a continuous density, one may speak of ν1|ℓ, which is the log-concave probability measure on the line-segment ℓ whose density is proportional to that of ν1. We may similarly consider the log-concave probability measure ν2|T(ℓ). Observe that

xj ↦ Fj(x1, . . . , xj)

is the monotone transportation map between πj(ν1|ℓ) and πj(ν2|T(ℓ)). Thanks to (2.33), we may apply Proposition 2.2.8 to the even, log-concave measures πj(ν1|ℓ) and πj(ν2|T(ℓ)). We get

W2(πj(ν1|ℓ), πj(ν2|T(ℓ))) ≤ C √(Var(πj(ν1|ℓ)) + Var(πj(ν2|T(ℓ)))) · √( ∫_ℓ min{(λj(x) − 1)², 1} d(ν1|ℓ)(x) ). (2.35)

Denote by ν̄1 the push-forward of ν1 under the map (x1, . . . , xj) ↦ (x1, . . . , xj−1), so

ν1 = ∫_{Rj−1} ν1|ℓ(x̄) dν̄1(x̄) and ν2 = ∫_{Rj−1} ν2|T(ℓ(x̄)) dν̄1(x̄).

By definition of the restricted measures µ̄1 and µ̄2, we have √(Var(πj(ν1|ℓ(x̄))) + Var(πj(ν2|T(ℓ(x̄))))) ≤ Cσj log n.

Using the last equation together with (2.35), we obtain

W2(πj(µ̄1), πj(µ̄2)) = W2(πj(ν1), πj(ν2)) ≤ ∫_{Rj−1} W2(πj(ν1|ℓ(x̄)), πj(ν2|T(ℓ(x̄)))) dν̄1(x̄)

≤ C ∫_{Rj−1} √(Var(πj(ν1|ℓ(x̄))) + Var(πj(ν2|T(ℓ(x̄))))) · √( ∫_{ℓ(x̄)} min{(λj(t) − 1)², 1} d(ν1|ℓ)(t) ) dν̄1(x̄)

≤ C′σj log n √( ∫_{K1} min{(λj(x) − 1)², 1} dµ1(x) ),


where we also used Hölder’s inequality. Plugging in (2.25) and (2.34) finishes the proof.

We shall need the following calculus lemma:

Lemma 2.2.10 Let α, λ1, . . . , λn > 0 be such that ∏_j λj = α. Then,

√α · exp( c Σ_{j=1}^n min{(λj − 1)², 1} ) ≤ ∏_{j=1}^n (1 + λj)/2,

where c > 0 is a universal constant.

Proof: We begin by showing that for any x ∈ R,

log((1 + e^x)/2) ≥ x/2 + c min{x², 1} (2.36)

where c > 0 is a universal constant. To that end, consider the function Ψ(x) = log(1/2 + e^x/2). Then Ψ′(0) = 1/2 and

Ψ″(x) = e^x/(1 + e^x)² > 0.

Therefore Ψ is convex, with Ψ″(x) ≥ 1/20 for x ∈ [−1, 1]. From Taylor’s theorem,

Ψ(x) = Ψ(0) + Ψ′(0)x + ∫_0^x Ψ″(t)(x − t)dt ≥ x/2 + (1/40) min{1, x²},

and (2.36) is proven. Denote θi = log(λi). Note that Σ_i θi = log α, so,

Σ_i log((1 + e^{θi})/2) ≥ Σ_i ( θi/2 + c min{θi², 1} ) = (log α)/2 + c Σ_i min{θi², 1}.

Noting that |log x| ≥ c min{|1 − x|, 1} for some universal constant c > 0, we get,

Σ_i log((1 + e^{θi})/2) ≥ (log α)/2 + c Σ_i min{(1 − λi)², 1},

for some universal constant c > 0. Exponentiating both sides completes the proof.
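The lemma only asserts that some universal c > 0 works; the following sketch (my own sanity check, not part of the proof) verifies the inequality numerically with the illustrative value c = 0.01, which is consistent with the constants 1/40 and the |log x| bound appearing above.

```python
import math

# Numeric check of Lemma 2.2.10 with the illustrative constant c = 0.01.
def lhs(lams, c=0.01):
    alpha = math.prod(lams)
    s = sum(min((l - 1.0) ** 2, 1.0) for l in lams)
    return math.sqrt(alpha) * math.exp(c * s)

def rhs(lams):
    return math.prod((1.0 + l) / 2.0 for l in lams)

for lams in [[1.0] * 5, [0.5, 2.0, 1.0, 3.0], [0.1, 10.0], [0.9, 1.1, 1.2, 0.8, 1.0]]:
    assert lhs(lams) <= rhs(lams) + 1e-12
```

When all λj = 1 both sides equal 1, so the inequality is sharp there; the tolerance only guards against floating-point rounding.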

Proof of theorem 2.1.1: Define α = Voln(T)/Voln(K). Let F be the Knothe map between µK and µT, and as above denote λj(x) = ∂Fj/∂xj. The map G(x) = (F(x) + x)/2 is increasing in each of the coordinates and consequently G is one-to-one. Furthermore, G(K) ⊂ (K + T)/2 and the Jacobian of G is

JG(x) = ∏_{j=1}^n (1 + λj(x))/2.

By the change-of-variables formula,

∫_K ∏_{j=1}^n ((1 + λj(x))/2) dx ≤ Voln((K + T)/2) ≤ R√(Voln(K)Voln(T))

Page 70: Distribution of Mass in Convex Bodies

70 CHAPTER 2. STABILITY OF THE BRUNN-MINKOWSKI INEQUALITY

with ∏_j λj(x) = α for all x. From Lemma 2.2.10,

(1/Voln(K)) ∫_K exp( c Σ_{j=1}^n min{(λj(x) − 1)², 1} ) dx ≤ R.

Using Jensen’s inequality,

c ∫_K Σ_{j=1}^n min{(λj(x) − 1)², 1} dµK(x) ≤ log R.

We now use Lemma 2.2.9 and deduce that

c Σ_{j=1}^n σj⁻⁴ (Var(πj(µK)) − Var(πj(µT)))² ≤ (log n)²(log R + n⁻¹⁰),

i.e.,

Σ_{j=1}^n ( (1 − Var(πj(µT))/Var(πj(µK))) / (1 + Var(πj(µT))/Var(πj(µK))) )² ≤ C(log n)²(log R + n⁻¹⁰). (2.37)

Corollary 2.2.4 implies that Var(πj(µT)) ≤ CR⁴Var(πj(µK)). So,

Σ_{j=1}^n (1 − Var(πj(µT))/Var(πj(µK)))² ≤ CR⁸(log n)²(log R + n⁻¹⁰) ≤ C(log n)²(R − 1)⁹ + R⁸n⁻¹⁰. (2.38)

Since µK and µT are unconditional, observe that the inertia forms are

pK(x) = Σ_{j=1}^n xj²/Var(πj(µK)), pT(x) = Σ_{j=1}^n xj²/Var(πj(µT)).

Therefore, the left-hand side of (2.38) is precisely d²HS(pK, pT), as may be verified directly from the definition. This completes the proof of (2.7). To prove (2.8), observe that ∫_K pK(x)dµK(x) = n, while

| ∫_T pK(x)dµT(x) − n | = | Σ_{j=1}^n (Var(πj(µT))/Var(πj(µK)) − 1) | ≤ C√n( (log n)(R − 1)^{9/2} + R⁴n⁻⁵ )

according to (2.38). This implies (2.8).

2.2.4 Obtaining a thin-shell estimate

Here we explain why Theorem 2.1.1 provides yet another proof of the thin-shell estimate from [K3], up to a logarithmic factor. We write Bn2 = {x ∈ Rn ; |x| ≤ 1} for the Euclidean unit ball, centered at the origin in Rn. Observe that when K ⊂ Rn is a convex body and T ⊂ K, then

Voln((T + K)/2) ≤ Voln(K) = R√(Voln(K)Voln(T))

for R = √(Voln(K)/Voln(T)).


Proof of proposition 2.1.4: Standard bounds on the distribution of polynomials on high-dimensional

convex sets (see Bourgain [Bou3] or Nazarov, Sodin and Volberg [NSV]) reduce the desired inequality

(2.11) to the estimate

µK({ x ∈ K ; | |x|²/n − 1 | ≥ 20ε }) ≤ 1/2. (2.39)

In order to prove (2.39), select a > 0 such that Voln(Ka) = Voln(K)/4. From (2.10),

max_{x∈Ka} |x|²/n ≥ ∫_{Ka} (|x|²/n) dµKa(x) ≥ 1 − ε,

or equivalently,

µK({ x ∈ K ; |x|²/n ≤ 1 − ε }) ≤ 1/4. (2.40)

For the upper bound, let s < t be such that Voln(Ks) = 3Voln(K)/4 and Voln(Kt) = 7Voln(K)/8. Then, from (2.10),

1 + ε ≥ ∫_{Kt} (|x|²/n) dµKt(x) ≥ (6/7) ∫_{Ks} (|x|²/n) dµKs(x) + (1/7) max_{x∈Ks} |x|²/n ≥ (6/7)(1 − ε) + (1/7) max_{x∈Ks} |x|²/n.

Hence max_{x∈Ks} |x|²/n ≤ 1 + 13ε, or equivalently,

µK({ x ∈ K ; |x|²/n ≥ 1 + 13ε }) ≤ 1/4. (2.41)

Clearly (2.39) follows from (2.40) and (2.41).

2.3 Stability estimates for the general case

2.3.1 Deriving a stability estimate from the CLT for convex sets

In this section we prove theorem 2.1.5.

Our main ingredient is theorem 1.1.5 above, which shows that there exists a subspace of dimension n^c, where c > 0 is some universal constant, on which the marginals of both K and T are approximately Gaussian density-wise. The Prekopa-Leindler inequality then implies that the marginal of (K + T)/2 on the same subspace is pointwise greater than the supremum-convolution of the respective marginals of K and T; hence, it must be greater than the supremum convolution of two densities which are both approximately Gaussian, but typically have different variances.

A second ingredient is a calculation which shows that the integral of the supremum-convolution of two Gaussian densities whose covariance matrices are multiples of the identity becomes very large when their respective variances are not close to each other. This will imply that when Voln((K + T)/2) is not large, the covariance matrices of both marginals are roughly the same multiple of the identity. Therefore


the inertia forms of K and T must have had roughly the same trace (the trace of the matrix will determine

the multiple of the identity).

For the convenience of the reader, we reformulate theorem 1.1.5, slightly changing its formulation

in order to suit our needs.

Recall that we write Gn,ℓ for the Grassmannian of all ℓ-dimensional subspaces in Rn, and σn,ℓ stands for the Haar probability measure on Gn,ℓ. A random vector X in Rn is centered if EX = 0 and is isotropic if its covariance matrix is the identity matrix. For a subspace E ⊂ Rn we write πE for the orthogonal projection operator onto E in Rn. Furthermore, define γk,α(x) = (2πα²)^{−k/2} exp(−|x|²/(2α²)), the centered Gaussian density in Rk with variance α², and abbreviate γk(x) = γk,1(x). An alternative formulation of the theorem would be:

Theorem 2.3.1 Let X be a centered, isotropic random vector in Rn with a log-concave density. Let

1 ≤ ℓ ≤ n^{c1} be an integer. Then there exists a subset E ⊆ Gn,ℓ with σn,ℓ(E) ≥ 1 − C exp(−n^{c2}) such that for any E ∈ E, the following holds: Denote by fE the log-concave density of the random vector πE(X). Then,

| fE(x)/γℓ(x) − 1 | ≤ C/n^{c3} (2.42)

for all x ∈ E with |x| ≤ n^{c4}. Here, C, c1, c2, c3, c4 > 0 are universal constants.

It can be quite easily seen from the proof that the constants in the theorem can be picked to be c1 = c2 = c3 = 1/30, c4 = 1/60, C = 500. Different constants would imply different universal constants in Theorem 2.1.5.

The second ingredient of the proof of theorem 2.1.5 is the following technical lemma, whose point

is that the integral of the supremum-convolution of two spherically-symmetric Gaussian densities must

be quite large when the variances are not close to each other.

Lemma 2.3.2 Let k ∈ N and A, B, α > 0. Let f, g, h : Rk → R satisfy

h(x) ≥ sup_{y∈Rk} √(f(x − y)g(x + y)), ∀x ∈ Rk,

and suppose that f(x) ≥ A·γk,1(x) whenever |x| ≤ 10√k, and g(x) ≥ B·γk,α(x) whenever |x| ≤ 10α√k. Then,

∫_{Rk} h(x)dx ≥ (1/2)√(AB) (1 + (α − 1)²/4)^{k/4}. (2.43)


Proof: By homogeneity, we may assume that A = B = 1. Denote a = 1/α². Fix a unit vector θ ∈ Rk and t > 0. Then for any s ∈ R with |s + t| ≤ 10√k and |s − t| ≤ 10α√k,

h(tθ) ≥ √(f((t + s)θ)g((t − s)θ)) ≥ (√a/(2π))^{k/2} exp( −(1/4)((t + s)² + a(t − s)²) ). (2.44)

We would like to find s which maximizes the right-hand side in (2.44). We select s = t(a − 1)/(a + 1) and verify that when |t| < 5√((1 + a)k/a) we have |s + t| ≤ 10√k and |s − t| ≤ 10α√k. We conclude that for any |t| < 5√((1 + a)k/a),

h(tθ) ≥ (√a/(2π))^{k/2} exp( −t²a/(1 + a) ).

Consequently,

∫_{Rk} h(x)dx ≥ (√a/(2π))^{k/2} ∫_{5√((1+a)k/a)·Bk2} exp( −a|x|²/(1 + a) ) dx = ((1 + a)/(4π√a))^{k/2} ∫_{√(50k)·Bk2} exp(−|x|²/2) dx ≥ (1/2)((1 + a)/(2√a))^{k/2},

where Bk2 = {x ∈ Rk ; |x| ≤ 1}, and where we used the fact that

P(|Z|² ≥ 50k) ≤ E|Z|²/(50k) = 1/50 < 1/2

when Z is a standard Gaussian in Rk. All that remains is to note that for any α > 0,

(1 + a)/(2√a) = (α + 1/α)/2 ≥ √(1 + (α − 1)²/4).

(The proof of the last inequality boils down to the arithmetic/geometric means inequality α⁻²/3 + 2α/3 ≥ 1 via elementary algebraic manipulations.)
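For two actual Gaussians the supremum-convolution integral can be computed in closed form, which makes the bound (2.43) easy to illustrate numerically. The sketch below (my own illustration; the grid sizes and the choice α = 2 are assumptions of the example) evaluates the integral for k = 1, A = B = 1 and compares it with the right-hand side of (2.43).

```python
import math

# Lemma 2.3.2 in dimension k = 1 with A = B = 1: f = γ_{1,1}, g = γ_{1,α},
# and h the supremum convolution sup_y sqrt(f(x−y) g(x+y)).
alpha = 2.0
f = lambda u: math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)
g = lambda v: math.exp(-v * v / (2.0 * alpha ** 2)) / math.sqrt(2.0 * math.pi * alpha ** 2)

xs = [i * 0.05 for i in range(-240, 241)]
ys = [j * 0.02 for j in range(-500, 501)]

integral = 0.0
for x in xs:
    h = max(math.sqrt(f(x - y) * g(x + y)) for y in ys)   # sup over a y-grid
    integral += h * 0.05                                   # Riemann sum in x

bound = 0.5 * (1.0 + (alpha - 1.0) ** 2 / 4.0) ** 0.25    # RHS of (2.43), k = 1
exact = math.sqrt((1.0 + alpha ** 2) / (2.0 * alpha))     # closed form for Gaussians
assert integral >= bound
assert abs(integral - exact) < 1e-2
```

The closed form √((1 + α²)/(2α)) = ((1 + a)/(2√a))^{1/2} shows that, for Gaussians, the chain of inequalities in the proof loses only the factor 1/2.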

The following lemma combines theorem 1.1.5 with the estimate we have just proved.

Lemma 2.3.3 Let f, g be log-concave probability densities on Rn, such that f is isotropic and the barycenter of g is at the origin. Let {λi}_{i=1}^n be the eigenvalues of Cov(g). Denote,

R = ∫_{Rn} sup_{y∈Rn} √(f(x + y)g(x − y)) dx.

Then, for 0 < δ < 1,

#{i ; |λi − 1| ≥ δ} ≤ C(log R / δ)^{C1}

for some universal constants C, C1 > 1.

Proof:

We may clearly assume that the sequence λi is increasing. Let X and Y be random vectors that are

distributed according to the laws f, g, respectively. Fix 0 < δ < 1. Consider the subspace E spanned by


{ei ; λi − 1 ≥ δ}, where {ei} is an orthonormal basis whose vectors satisfy ⟨Cov(g)ei, ei⟩ = λi. Denote d = dim E. Since the λi’s are in increasing order, E has the form

E = span{ei ; i ≥ i0}

for some 1 ≤ i0 ≤ n. Write j0 = ⌊(n − i0)/2⌋ and V² = λ_{i0+j0}. Now, fix 1 ≤ j ≤ j0. Define,

vj(θ) = θ e_{i0+j0+j} + √(1 − θ²) e_{i0+j0−j}.

Inspect the function f(θ) = ⟨Cov(g)vj(θ), vj(θ)⟩. We have f(0) ≤ V² and f(1) ≥ V². By continuity, there exists a certain 0 ≤ θj ≤ 1 for which

⟨Cov(g)vj(θj), vj(θj)⟩ = V². (2.45)

Denote,

F = span{ vj(θj) | 1 ≤ j ≤ j0 }.

Equation (2.45) and the fact that {vj(θj)}_{j=1}^{j0} is an orthonormal system imply that for every unit vector v ∈ F one has ⟨Cov(g)v, v⟩ = V². Moreover, dim F = j0 ≥ d/2 − 1. We now apply theorem 2.3.1, which claims

that if d ≥ C, then there exists a subspace G ⊂ F with dim G ≥ d^{1/30} such that

f̃(x) ≥ (1/2)γk,1(x), g̃(y) ≥ (1/2)γk,V(y)

for all x with |x| ≤ d^{1/60} and for all |y| ≤ 10V d^{1/60}, where f̃ and g̃ are the densities of πG(X), πG(Y) respectively, and k = dim G. Next, we use lemma 2.3.2 to obtain

∫_G sup_{s∈G} √(f̃(t − s)g̃(t + s)) dt ≥ (1/10)(1 + (V − 1)²/4)^{dim G/4}.

On the other hand, we may use the Prekopa-Leindler inequality as in (2.22) above, and conclude that

∫_G sup_{s∈G} √(f̃(t − s)g̃(t + s)) dt ≤ R.

Consequently, under the assumption that d^{1/60} ≥ C,

(V − 1)² ≤ C log R / dim(G). (2.46)

Since V ≥ √(1 + δ) ≥ 1 + δ/3, we conclude,

#{i ; λi − 1 ≥ δ} ≤ C(log R / δ)^{C1}.

By repeating the argument, with the subspace spanned by {ei ; λi − 1 ≤ −δ} replacing the subspace E, we conclude the proof.

We are ready to prove the main theorem of this section.


Proof of theorem 2.1.5: By applying affine transformations to both K and T, we can assume that both bodies have the origin as their barycenter, and that pK(x) = |x|² while pT(x) = Σ_i xi²/λi. By lemma 2.3.3,

#{i ; |λi − 1| ≥ δ} ≤ C(log R / δ)^{C1}, (2.47)

for any 0 < δ < 1. Since λi ≤ CR⁴ for all i, as follows from Corollary 2.2.4, then

(1/n) Σ_{i=1}^n (λi − 1)² ≤ (C/n) ∫_0^1 min{ n, (log R/δ)^{C1} } dδ + (C/n) R^{C2} ≤ C R^{α2}/n^{α1},

where C, α1, α2 > 0 are universal constants. This proves (2.12). To obtain (2.13), note that

| ∫_T pK(x − bT)dµT(x) / ∫_K pK(x − bK)dµK(x) − 1 | = (1/n) | Σ_{i=1}^n (λi − 1) | ≤ √( (1/n) Σ_{i=1}^n (λi − 1)² ).
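The last step is just the Cauchy-Schwarz inequality between the ℓ¹- and ℓ²-averages of the numbers λi − 1; a toy numerical instance (with an arbitrary choice of λi, my own example) looks like:

```python
import math

# Cauchy-Schwarz: (1/n)|Σ(λi − 1)| ≤ sqrt((1/n) Σ(λi − 1)²)
lam = [0.8, 1.3, 1.0, 0.7, 1.6]
n = len(lam)
l1 = abs(sum(l - 1.0 for l in lam)) / n
l2 = math.sqrt(sum((l - 1.0) ** 2 for l in lam) / n)
assert l1 <= l2 + 1e-12
```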

2.3.2 The general case: obtaining stability estimates using a transportation argument

The goal of this section is to prove theorem 2.1.6.

We begin with several core definitions which will be used in the proof. For two functions f, g : Rn → R+, denote by Hλ(f, g) the supremum convolution of the two functions, hence,

Hλ(f, g)(x) := sup_{y∈Rn} f^{1−λ}(x + λy) g^λ(x − (1 − λ)y).

We will consider this function as a log-concave density in Rn+1. We define,

Kλ(f, g) = ∫_{Rn} Hλ(f, g)(x) dx,

the volume of a section, and

K(f, g) = ∫_0^1 Kλ(f, g) dλ,

the entire volume. Next, we write,

d(f, g) = (1/K(f, g)) ∫_{Rn} ∫_0^1 x Hλ(f, g)(x) dλ dx

and,

D(f, g) = (1/K(f, g)) ∫_{Rn} ∫_0^1 (x ⊗ x) Hλ(f, g)(x + d(f, g)) dλ dx, (2.48)

the barycenter and covariance matrix of a marginal of H(f, g). Finally, we normalize this density by defining

L(f, g)(λ, x) = (1/K(f, g)) √(det D(f, g)) Hλ(f, g)(D^{1/2}x + d(f, g))


and,

l(f, g)(x) = ∫_0^1 L(f, g)(λ, x) dλ,

the marginal of L(f, g) with respect to the axis λ. Note that, by the Prekopa-Leindler inequality, l(f, g) is an isotropic log-concave probability density.

The results of this section rely on the so-called Brenier map between two given log-concave measures. Given two log-concave probability densities f, g on Rn, one may consider the Monge-Ampere equation,

det(Hess φ) = f / (g ∘ ∇φ).

A theorem of Brenier asserts that a convex solution φ to the above equation on the domain supp(f) exists. For a precise definition and properties, see [Vil]. The map F = ∇φ pushes forward the measure f dx to the measure g dx, and is referred to as the Brenier map between the two measures.

Remark 2.3.4 The Knothe map, used in section 3, is in some sense a limiting case of the Brenier map.

See [CGF].

The next lemma contains the central idea of this section.

Lemma 2.3.5 Let f, g be log-concave probability densities in Rn. Denote K = K(f, g). Let x ↦ F(x) be the Brenier map taking f to g. Suppose that X is a random vector distributed according to the law l(f, g). Then,

Var[|X|²] ≥ (1/K(f, g)) ∫_{Rn} f(x) Var[ |D^{−1/2}((1 − Λ)x + ΛF(x) − d(f, g))|² ] dx, (2.49)

where D = D(f, g) and Λ ∼ U([0, 1]).

Proof:

Denote D = D(f, g) and L(λ, x) = L(f, g)(λ, x). Furthermore, define,

f̄(x) = √(det D) f(D^{1/2}x + d(f, g)), ḡ(x) = √(det D) g(D^{1/2}x + d(f, g)),

so that f̄(x) = K(f, g)L(0, x) and ḡ(x) = K(f, g)L(1, x). Define,

F̄(x) = D^{−1/2}(F(D^{1/2}x + d(f, g)) − d(f, g)).

Next, define,

M(λ, x) = (M1(λ, x), M2(λ, x)) = (λ, (1 − λ)x + λF̄(x)).

By elementary properties of the Brenier map, M is a bijective map from [0, 1] × supp(f̄) onto supp(L). Define a density,

q(λ, x) = f̄(x)^{1−λ} ḡ(F̄(x))^λ / K(f, g).


By the fact that L is log-concave, we get,

q(λ, x) ≤ L(M(λ, x)), ∀λ ∈ [0, 1], x ∈ supp(f̄). (2.50)

An easy calculation shows that the Jacobian of M(λ, x) is

J(λ, x) = det((1 − λ)Id + λ∇F̄(x)).

Recall that ∇F̄(x) is a positive definite matrix and that det(∇F̄(x)) = f̄(x)/ḡ(F̄(x)). By Alexandrov’s inequality,

J(λ, x) ≥ (f̄(x)/ḡ(F̄(x)))^λ,

and therefore,

J(λ, x)q(λ, x) ≥ f̄(x)/K(f, g), ∀λ ∈ [0, 1], x ∈ Rn. (2.51)

By changing variables with M⁻¹ and applying (2.50) and (2.51), we calculate,

Var[|X|²] = ∫_{Rn} ∫_{[0,1]} ( |x|² − ∫_{Rn}∫_{[0,1]} |y|²L(η, y)dηdy )² L(λ, x) dλ dx

≥ ∫_{Rn} ∫_{[0,1]} ( |M2(λ, x)|² − ∫_{Rn}∫_{[0,1]} |y|²L(η, y)dηdy )² J(λ, x)q(λ, x) dλ dx

≥ ∫_{Rn} (f̄(x)/K(f, g)) ∫_{[0,1]} ( |M2(λ, x)|² − ∫_{Rn}∫_{[0,1]} |y|²L(η, y)dηdy )² dλ dx

≥ ∫_{Rn} (f̄(x)/K(f, g)) ∫_{[0,1]} ( |M2(λ, x)|² − ∫_{[0,1]} |M2(η, x)|²dη )² dλ dx

= ∫_{Rn} (f̄(x)/K(f, g)) Var[ |(1 − Λ)x + ΛF̄(x)|² ] dx.

Applying the change of variables x → D^{−1/2}(x − d(f, g)) finishes the proof.

By the definition of the thin-shell constant σn, for any isotropic random vector X, one has,

Var[|X|²] ≤ nσn². (2.52)

Combining this with the above lemma gives,

∫_{Rn} f(x) Var[ |D(f, g)^{−1/2}((1 − Λ)x + ΛF(x) − d(f, g))|² ] dx ≤ K(f, g) n σn². (2.53)

For x, y ∈ Rn, define,

v(x, y) = Var[ |Λx + (1 − Λ)y|² ].

In view of (2.53), we would like to find a lower bound for v(x, y) in terms of |x|² − |y|² and in terms of |x − y|. These bounds are summarized in the following lemma.


Lemma 2.3.6 There exist constants C1, C2 > 0, such that for all x, y ∈ Rn,

v(x, y) ≥ C1(|x|2 − |y|2)2 + C2|x− y|4. (2.54)

Proof:

Define

f(λ) = |λx + (1 − λ)y|², g(λ) = λ|x|² + (1 − λ)|y|², and h(λ) = f(λ) − g(λ).

By the symmetry of h around 1/2, it follows that Cov(g(Λ), h(Λ)) = 0, so

Var[f(Λ)] = Var[h(Λ)] + Var[g(Λ)]. (2.55)

It is easy to check that,

Var[g(Λ)] ≥ C1(|x|² − |y|²)². (2.56)

Next, by the parallelogram law,

f(1/2) = (f(0) + f(1))/2 − (1/4)|x − y|².

Consequently,

Var[h(Λ)] ≥ C2|x − y|⁴. (2.57)

Combining (2.55), (2.56) and (2.57) finishes the proof.
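Tracking the constants in the proof actually gives explicit values: Var[g(Λ)] = (|x|² − |y|²)²/12 and, since h(λ) = −λ(1 − λ)|x − y|² with Var[Λ(1 − Λ)] = 1/180, Var[h(Λ)] = |x − y|⁴/180. The sketch below (my own sanity check) verifies (2.54) with the candidate constants C1 = 1/12, C2 = 1/180 by quadrature in R².

```python
# v(x, y) = Var |Λx + (1−Λ)y|² for Λ ~ U([0,1]), computed by midpoint quadrature;
# the decomposition in the proof gives v = (|x|²−|y|²)²/12 + |x−y|⁴/180 exactly.
def v(x, y, steps=20000):
    vals = []
    for i in range(steps):
        lam = (i + 0.5) / steps
        p = [lam * x[k] + (1.0 - lam) * y[k] for k in range(len(x))]
        vals.append(sum(c * c for c in p))
    m = sum(vals) / steps
    return sum((w - m) ** 2 for w in vals) / steps

for x, y in [((1.0, 0.0), (0.0, 2.0)), ((1.0, 1.0), (-1.0, 1.0)), ((3.0, 0.0), (1.0, 0.0))]:
    nx2 = sum(c * c for c in x)
    ny2 = sum(c * c for c in y)
    d4 = sum((x[k] - y[k]) ** 2 for k in range(2)) ** 2
    assert abs(v(x, y) - ((nx2 - ny2) ** 2 / 12.0 + d4 / 180.0)) < 1e-6
```

Since Cov(g(Λ), h(Λ)) = 0, the two contributions add exactly, which is why the quadrature matches the closed form to machine precision.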

The next lemma applies the estimate of the last one with equation (2.53), towards the proof of theorem 2.1.6.

Lemma 2.3.7 Let f, g be log-concave probability measures whose barycenters are at the origin. Suppose that f is isotropic. Then,

W2²(f, g) ≤ C√n K_{1/2}(f, g)⁵ σn. (2.58)

Moreover, there exists a universal constant C1 > 0 such that whenever K_{1/2}(f, g) < exp(n^{C1}), there exist two unit vectors θ1, θ2 ∈ S^{n−1} such that

⟨Cov(g)θ1, θ1⟩ ≤ 1 + Cσn√(K(f, g)/n), (2.59)

and

⟨Cov(g)θ2, θ2⟩ ≥ 1 − Cσn√(K(f, g)/n), (2.60)

where D = D(f, g) and C is some universal constant.

Proof: Write d = d(f, g). Plugging the result of lemma 2.3.6 into equation (2.53) gives,

∫_{Rn} f(x) ( (|D^{−1/2}(x − d)|² − |D^{−1/2}(F(x) − d)|²)² + |D^{−1/2}(x − F(x))|⁴ ) dx (2.61)

≤ CK(f, g) n σn².

Let X, Y be random vectors whose densities are f, g respectively. By definition of the Wasserstein distance,

W2(D^{−1/2}X, D^{−1/2}Y)² ≤ ∫_{Rn} f(x)|D^{−1/2}(x − F(x))|² dx. (2.62)

The fact that f(x) and g(x) have barycenters at the origin implies,

E[⟨D^{−1/2}X, D^{−1/2}d⟩] = E[⟨D^{−1/2}Y, D^{−1/2}d⟩] = 0,

and consequently,

∫_{Rn} f(x) ( |D^{−1/2}(x − d)|² − |D^{−1/2}(F(x) − d)|² ) dx = Tr( Cov(D^{−1/2}X) − Cov(D^{−1/2}Y) ). (2.63)

The Hölder inequality and equations (2.61), (2.62) and (2.63) yield,

W2(X̃, Ỹ)⁴ + ( Tr(Cov(X̃) − Cov(Ỹ)) )² ≤ CnK(f, g)σn², (2.64)

where X̃ = D^{−1/2}X and Ỹ = D^{−1/2}Y. Consequently,

W2(X, Y)² ≤ C√(nK(f, g)) σn ||D||OP.

An application of corollary 2.2.4 gives,

||D||OP ≤ K_{1/2}(f, g)⁴.

Since K(f, g) ≤ K_{1/2}(f, g)², (2.58) is proven. To establish (2.59), we fix α > 0, and assume by contradiction that,

⟨Cov(Y)θ, θ⟩ > 1 + ασn√(K(f, g)/n), ∀θ ∈ S^{n−1}.

In that case, noting that Cov(X̃) = D⁻¹,

⟨Cov(X̃)^{−1/2}Cov(Ỹ)Cov(X̃)^{−1/2}θ, θ⟩ − 1 > ασn√(K(f, g)/n), ∀θ ∈ S^{n−1}.

The last equation implies,

| Tr(Cov(Ỹ))/Tr(Cov(X̃)) − 1 | > ασn√(K(f, g)/n).

This shows that in order to establish equation (2.59), it is enough to show that, for some universal constant C > 0,

| Tr(Cov(Ỹ)) − Tr(Cov(X̃)) | < C Tr(Cov(X̃)) σn√(K(f, g)/n).

In view of (2.64), the last equation will be concluded if we only manage to show,

Tr(Cov(X̃)) = Tr(D⁻¹) ≥ n/2. (2.65)


The above fact follows from an application of lemma 2.3.3 with δ = 1/2 and from the assumption that K_{1/2}(f, g) ≤ exp(n^{C1}). Equation (2.59) is established. The proof of equation (2.60) is analogous. The proof of the lemma is complete.

We move on to,

Lemma 2.3.8 Let f, g be log-concave probability measures whose barycenters are at the origin. Suppose that f is isotropic. Define K = K_{1/2}(f, g) and denote by {λi} the eigenvalues of Cov(g). Assume that the sequence |λi − 1| is decreasing. Then one has,

|λi − 1| ≤ CK⁴, ∀1 ≤ i ≤ n, (2.66)

and,

|λi − 1| ≤ CKτn i^{κ−1/2}, ∀(log K)^{C1} ≤ i ≤ n, (2.67)

where C, C1 > 0 are some universal constants.

Proof: Equation (2.66) follows directly from corollary 2.2.4. To establish equation (2.67), denote by ei the unit eigenvector corresponding to the eigenvalue λi and define,

E1 = span{ej ; 1 ≤ j ≤ i, λj ≥ 1}, E2 = span{ej ; 1 ≤ j ≤ i, λj ≤ 1}.

Let E be the subspace with the larger dimension among these two subspaces. Then k = dim E ≥ i/2. By the assumption (log K)^{C1} ≤ i, we may apply lemma 2.3.7. Using equation (2.59) on the marginals of f and g on the subspace E gives,

|λi − 1| ≤ |λk − 1| ≤ Cσk√(K²/k) ≤ C′Kτn i^{κ−1/2}, (2.68)

where in the first inequality we use the fact that K(f, g) ≤ K_{1/2}(f, g)².

We are ready to prove the main theorem of the section.

Proof of theorem 2.1.6: We may clearly assume that the barycenters of K, T are at the origin and that pK(x) = |x|², while pT(x) = Σ_i xi²/λi, where the sequence |λi − 1| is decreasing. Using lemma 2.3.8, we may calculate,

Σ_{i=1}^n |λi − 1|² ≤ CR⁸(log R)^{C1} + CR²τn² Σ_{i=1}^n i^{2κ−1} ≤ CR⁸(log R)^{C1} + CR²τn²( 1 + ∫_1^n s^{2κ−1} ds ) ≤ C′( R⁹ + τn²R² max(log n, n^{2κ}) ).

Equation (2.15) follows. Equation (2.16) follows immediately from equation (2.58). The proof is complete.


2.3.3 Obtaining a stability estimate using a stochastic localization scheme

The main goal of this section is to prove theorem 2.1.7.

The idea of the proof is as follows: Given two log-concave densities, f and g, we run the localization process we constructed in section 1.4.1 on both functions, so that their corresponding localization processes are coupled together in the sense that we take the same Wiener process Wt for both functions. Recall formula (1.113), whose point is that the barycenters of the localized functions ft and gt converge, in the Wasserstein metric, to the measures whose densities are f and g, respectively. In view of this, it is enough to consider the paths of the barycenters and show that they remain close to each other along the process. Recall that if at is the barycenter of ft, we have dat = At^{1/2} dWt. This formula tells us that as long as we manage to keep the covariance matrices of ft and gt approximately similar to each other, the barycenters will not move too far apart. In order to do this, we use the ideas of the previous section: when the integral of the supremum convolution of two given densities is rather small, these densities can essentially be regarded as parallel sections of an isotropic convex body, which means, by thin-shell concentration, that the corresponding covariance matrices cannot be very different from each other.

We begin with some notation (in order to simplify the notation in our proof, we use a notation which is slightly different from the one appearing in the previous section). For two functions $f, g : \mathbb{R}^n \to \mathbb{R}_+$, denote by $H(f, g)$ the supremum convolution of the two functions, hence,
\[ H(f, g)(x) := \sup_{y \in \mathbb{R}^n} \sqrt{f(x + y)\, g(x - y)}. \]
Next, define,
\[ K(f, g) = \int_{\mathbb{R}^n} H(f, g)(x)\, dx. \]
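As a quick illustration of these definitions (a numerical sketch under stated assumptions, not from the thesis): $H(f,g)$ and $K(f,g)$ can be approximated on a grid, taking the supremum over grid values of $y$ only. By the Prékopa–Leindler inequality, $K(f,g) \ge \sqrt{\int f \int g}$, with equality when $g$ is a translate of the log-concave density $f$; so for two translated Gaussians the computed $K$ should be close to 1.

```python
import numpy as np

# Grid sketch of the supremum convolution H(f,g) and the functional K(f,g).
# Purely illustrative: the supremum over y is taken over grid points only,
# so the result is a lower bound for the true H.
def sup_convolution(f_vals, g_vals):
    n = len(f_vals)
    H = np.zeros(n)
    for i in range(n):                                       # i = index of x
        j = np.arange(max(0, 2 * i - n + 1), min(n, 2 * i + 1))  # index of x+y
        k = 2 * i - j                                        # index of x-y
        H[i] = np.sqrt(f_vals[j] * g_vals[k]).max()
    return H

xs = np.linspace(-10.0, 10.0, 2001)
dx = xs[1] - xs[0]
gauss = lambda x, mu: np.exp(-(x - mu) ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

# Two translates of the same log-concave density: the equality case of
# Prekopa-Leindler, so K(f,g) should be close to 1.
f, g = gauss(xs, 0.0), gauss(xs, 3.0)
K = sup_convolution(f, g).sum() * dx
```

Here $H(f,g)$ comes out as the Gaussian centered halfway between the two means, and $K \approx 1$ up to discretization error.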

Our main ideas in this section are contained in the following lemma:

Lemma 2.3.9 Let $\varepsilon > 0$ and let $f, g$ be log-concave probability densities in $\mathbb{R}^n$ such that $f$ is isotropic and the barycenter of $g$ lies at the origin. In that case, there exist two densities, $\tilde f, \tilde g$, which satisfy,
\[ \tilde f(x) \le f(x), \quad \tilde g(x) \le g(x), \quad \forall x \in \mathbb{R}^n, \qquad \int_{\mathbb{R}^n} \tilde f(x)\, dx = \int_{\mathbb{R}^n} \tilde g(x)\, dx \ge 1 - \varepsilon, \]
and,
\[ W_2(\tilde f, \tilde g) \le \frac{C}{\varepsilon^6}\, \tau_n K(f, g)^5 n^{2(\kappa - \kappa^2) + \varepsilon}. \tag{2.69} \]

Proof: As explained in the beginning of the section, we will couple the measures $f$ and $g$ by means of a coupling between the processes $\Gamma_t(f)$ and $\Gamma_t(g)$. To that end, we define, as in (1.105),
\[ F_0(x) = 1, \qquad dF_t(x) = \langle A_t^{-1/2}\, dW_t,\, x - a_t \rangle F_t(x), \tag{2.70} \]


where,
\[ a_t = \frac{\int_{\mathbb{R}^n} x f(x) F_t(x)\, dx}{\int_{\mathbb{R}^n} f(x) F_t(x)\, dx} \]
is the barycenter of $f F_t$, and,
\[ A_t = \int_{\mathbb{R}^n} (x - a_t) \otimes (x - a_t)\, f(x) F_t(x)\, dx \]
is the covariance matrix of $f F_t$. As usual, denote $f_t = F_t f$.
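As an illustrative aside (not part of the proof): on a grid, the barycenter and covariance of the tilted density $f F_t$ are just weighted first and second moments. A one-dimensional sketch with a hypothetical linear tilt $F(x) = e^{cx}$, which turns a standard Gaussian into a Gaussian with mean $c$ and variance 1 (here normalizing by the total mass, consistent with $\int f_t = 1$ in the text):

```python
import numpy as np

xs = np.linspace(-12.0, 12.0, 4001)
dx = xs[1] - xs[0]
f = np.exp(-xs ** 2 / 2.0) / np.sqrt(2.0 * np.pi)   # standard Gaussian density

c = 0.7
F = np.exp(c * xs)                                   # a linear tilt, as F_t is

w = f * F
mass = w.sum() * dx
a = (xs * w).sum() * dx / mass                       # barycenter of f F
A = ((xs - a) ** 2 * w).sum() * dx / mass            # covariance (variance in 1-D)
```

The linear tilt shifts the barycenter but leaves the covariance of a Gaussian unchanged, which is the mechanism behind the Gaussian simplification used in remark 2.3.10.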

Next, we define,
\[ G_0(x) = 1, \qquad dG_t(x) = \langle A_t^{-1/2}\, dW_t,\, x - b_t \rangle G_t(x) \]
where,
\[ b_t = \frac{\int_{\mathbb{R}^n} x g(x) G_t(x)\, dx}{\int_{\mathbb{R}^n} g(x) G_t(x)\, dx}, \]
and denote $g_t(x) = g(x) G_t(x)$.

Finally, we "interpolate" between the two processes by defining,
\[ H_0(x) = 1, \qquad dH_t(x) = \left\langle A_t^{-1/2}\, dW_t,\, x - \frac{a_t + b_t}{2} \right\rangle H_t(x), \]
and,
\[ h_t(x) = H_t(x)\, H(f, g)(x). \]

By a similar calculation to the one carried out in lemma 1.4.2, we learn that for all $t \ge 0$, $\int f_t(x)\, dx = \int g_t(x)\, dx = 1$. Fix $x, y \in \mathbb{R}^n$. Formula (1.108) implies,

\[ d \log f_t(x + y) = \langle x + y - a_t,\, A_t^{-1/2}\, dW_t \rangle - \frac{1}{2} \left| A_t^{-1/2} (x + y - a_t) \right|^2 dt, \]
\[ d \log g_t(x - y) = \langle x - y - b_t,\, A_t^{-1/2}\, dW_t \rangle - \frac{1}{2} \left| A_t^{-1/2} (x - y - b_t) \right|^2 dt, \]
and
\[ d \log h_t(x) = \left\langle x - \frac{a_t + b_t}{2},\, A_t^{-1/2}\, dW_t \right\rangle - \frac{1}{2} \left| A_t^{-1/2} \left( x - \frac{a_t + b_t}{2} \right) \right|^2 dt. \]

Consequently,
\[ 2\, d \log h_t(x) \ge d \log f_t(x + y) + d \log g_t(x - y), \]
since the martingale terms add up exactly, while the drift terms compare by the convexity of $u \mapsto |A_t^{-1/2} u|^2$. As $h_0(x) = H(f, g)(x) \ge \sqrt{f(x+y)\, g(x-y)}$ for every $y$, it follows, taking the supremum over $y$, that
\[ h_t(x) \ge H(f_t, g_t)(x). \]

Define $S_t = \int_{\mathbb{R}^n} h_t(x)\, dx$. The definition of $H_t$ suggests that $S_t$ is a martingale. By the Dambis / Dubins–Schwarz theorem, there exists a non-decreasing function $A(t)$ such that,
\[ S_t = K(f, g) + \widetilde{W}_{A(t)} \]
where $\widetilde{W}_t$ is distributed as a standard Wiener process. Since $S_t \ge 1$ almost surely, it follows from Doob's optional sampling theorem that,
\[ P(\mathcal{G}_t) \ge 1 - \varepsilon / 2, \quad \forall t > 0, \tag{2.71} \]


where,
\[ \mathcal{G}_t = \left\{ \max_{s \in [0, t]} S_s \le \frac{2 K(f, g)}{\varepsilon} \right\}. \tag{2.72} \]

Next, define,
\[ \mathcal{F}_t := \left\{ \|A_s\|_{OP} < C K^2 n (\log n)\, e^{-t}, \ \forall\, 0 \le s \le t \right\}, \]
where $C$ is the same constant as in (3.1). Finally, denote $E_t = \mathcal{G}_t \cap \mathcal{F}_t$. By proposition 3.1 and equation (2.71), $P(E_t) > 1 - \varepsilon$ for all $t > 0$. Define a stopping time by the equation,
\[ \rho = \sup \{ t \,;\ E_t \text{ holds} \}. \]

Our next objective is to define the densities $\tilde f, \tilde g$ by, in some sense, neglecting the cases where $E_t$ does not hold. We begin by defining the density $\tilde f_t$ by the following equation,
\[ \int_B \tilde f_t(x)\, dx = E\left[ \mathbf{1}_{E_t} \int_B f_t(x)\, dx \right], \]
for all measurable $B \subset \mathbb{R}^n$. Likewise, we define
\[ \int_B \tilde g_t(x)\, dx = E\left[ \mathbf{1}_{E_t} \int_B g_t(x)\, dx \right]. \]
Recall that $f(x) = E[f_t(x)]$ for all $x \in \mathbb{R}^n$ and $t > 0$. It follows that,
\[ \int_{\mathbb{R}^n} \tilde f_t(x)\, dx = \int_{\mathbb{R}^n} \tilde g_t(x)\, dx = P(E_t) \ge 1 - \varepsilon, \]
and that
\[ \tilde f_t(x) \le f(x), \quad \tilde g_t(x) \le g(x), \quad \forall x \in \mathbb{R}^n. \]

We construct a coupling between $\tilde f_t$ and $\tilde g_t$ by defining a measure $\mu_t$ on $\mathbb{R}^n \times \mathbb{R}^n$ using the formula
\[ \mu_t(A \times B) = E\left[ \mathbf{1}_{E_t} \int_{A \times B} f_t(x)\, g_t(y)\, dx\, dy \right], \]
for any measurable sets $A, B \subset \mathbb{R}^n$. It is easy to check that $\tilde f_t$ and $\tilde g_t$ are the densities of the marginals of $\mu_t$ onto its first and last $n$ coordinates, respectively. Thus, by the definition of the Wasserstein distance,

\[
W_2(\tilde f_t, \tilde g_t) \le \left( \int_{\mathbb{R}^n \times \mathbb{R}^n} |x - y|^2\, d\mu_t(x, y) \right)^{1/2} = \left( E\left[ \mathbf{1}_{E_t} \int_{\mathbb{R}^n \times \mathbb{R}^n} |x - y|^2 f_t(x)\, g_t(y)\, dx\, dy \right] \right)^{1/2} \le \left( E\left[ \mathbf{1}_{E_t} \left( W_2(f_t, \delta_{a_t}) + W_2(g_t, \delta_{b_t}) + |a_t - b_t| \right)^2 \right] \right)^{1/2}.
\]

Now, thanks to formula (1.113), we can take $T$ large enough (and deterministic) such that,
\[ W_2(\tilde f_T, \tilde g_T) \le 2 \left( E\left[ \mathbf{1}_{E_T} |a_T - b_T|^2 \right] \right)^{1/2} + 1 \le 2 \left( E\left[ |a_{T \wedge \rho} - b_{T \wedge \rho}|^2 \right] \right)^{1/2} + 1. \tag{2.73} \]
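An aside on computing $W_2$ (a hypothetical example, not part of the proof): in one dimension the monotone coupling, obtained by sorting, is $W_2$-optimal, so the distance between two empirical samples is the $L^2$ distance of their order statistics. In particular, for a translated sample the distance is exactly the translation:

```python
import numpy as np

def w2_empirical_1d(x, y):
    # In 1-D the monotone (sorted) coupling is W2-optimal, so the empirical
    # W2 distance is the L2 distance between the order statistics.
    x, y = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((x - y) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(10000)
shift = 0.8
d = w2_empirical_1d(x, x + shift)   # should equal the shift
```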


We will define $\tilde f := \tilde f_T$ and $\tilde g := \tilde g_T$. In view of the last equation, our main goal will be to attain a bound for the process $|a_t - b_t|$. A similar calculation to the one carried out in (1.110) gives,
\[ da_t = A_t^{1/2}\, dW_t, \qquad db_t = B_t A_t^{-1/2}\, dW_t. \tag{2.74} \]

Therefore,
\[ d |a_t - b_t|^2 = 2 \langle a_t - b_t,\, da_t \rangle - 2 \langle a_t - b_t,\, db_t \rangle + \langle da_t, da_t \rangle + \langle db_t, db_t \rangle - 2 \langle da_t, db_t \rangle. \]
The first two terms are martingales. We use the unique decomposition
\[ |a_t - b_t|^2 = M_t + N_t \]
where $M_t$ is a local martingale and $N_t$ is an adapted process of locally bounded variation. We get,
\[ \frac{d}{dt} N_t = \langle da_t - db_t,\, da_t - db_t \rangle = \langle (A_t - B_t) A_t^{-1/2}\, dW_t,\, (A_t - B_t) A_t^{-1/2}\, dW_t \rangle = \left\| A_t^{1/2} \left( I - A_t^{-1/2} B_t A_t^{-1/2} \right) \right\|_{HS}^2. \]

By the Optional Stopping Theorem,
\[ E\left[ |a_{t \wedge \rho} - b_{t \wedge \rho}|^2 \right] = E[N_{t \wedge \rho}] = E\left[ \int_0^{t \wedge \rho} \|D_s\|_{HS}^2\, ds \right] \tag{2.75} \]
where $D_t = A_t^{1/2} (I - A_t^{-1/2} B_t A_t^{-1/2})$. Our next task is to use the results of section 2.3.2 in order to bound $\|D_t\|_{HS}$ under the assumption $t < \tau$.
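As a hedged numerical aside (not from the thesis): in the special case where $f$ and $g$ are Gaussian, the linear tilts leave the covariance matrices constant, so with $f$ isotropic one has $A_t \equiv I$, $B_t \equiv B$ and $D_t \equiv I - B$. Identity (2.75) then predicts $E |a_t - b_t|^2 = t\, \|I - B\|_{HS}^2$, which a small Monte Carlo simulation of (2.74) can confirm. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t_end, n_steps, n_paths = 3, 1.0, 50, 3000
dt = t_end / n_steps
B = np.diag([0.5, 1.0, 2.0])   # covariance of g (A = I since f is isotropic)
D = np.eye(n) - B              # D_t = A^{1/2}(I - A^{-1/2} B A^{-1/2}) = I - B here
hs2 = np.sum(D ** 2)           # ||D||_HS^2 = 1.25 for these parameters

sq = np.zeros(n_paths)
for p in range(n_paths):
    a = np.zeros(n)
    b = np.zeros(n)
    for _ in range(n_steps):
        dW = rng.standard_normal(n) * np.sqrt(dt)
        a += dW          # da_t = A_t^{1/2} dW_t with A_t = I
        b += B @ dW      # db_t = B_t A_t^{-1/2} dW_t with A_t = I
    sq[p] = np.sum((a - b) ** 2)

empirical = sq.mean()          # should be close to t_end * hs2
```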

We start by denoting the eigenvalues of the matrix $I - A_t^{-1/2} B_t A_t^{-1/2}$ by $\delta_i$, in decreasing order, and the eigenvalues of the matrix $A_t$ by $\lambda_i$, also in decreasing order. By theorem 1 in [T],
\[ \|D_t\|_{HS}^2 \le \sum_{j=1}^{n} \lambda_j \delta_j^2. \tag{2.76} \]
By lemma 2.3.7, we learn that
\[ \delta_j \le \frac{C K(f_t, g_t)^5 \tau_n j^{\kappa}}{\sqrt{j}}. \tag{2.77} \]
Plugging this into (2.76) yields,
\[ \|D_t\|_{HS}^2 \le C K(f_t, g_t)^{10} \tau_n^2 \sum_{j=1}^{n} \lambda_j j^{2\kappa - 1}. \]

Fix some constant $(1 - 2\kappa) < \alpha < 1$, whose value will be chosen later. For now, we assume that $\kappa > 0$. Using Hölder's inequality, we calculate,

\[
\|D_t\|_{HS}^2 \le C K(f_t, g_t)^{10} \tau_n^2 \left( \sum_{j=1}^{n} \lambda_j^{1/(1-\alpha)} \right)^{1-\alpha} \left( \sum_{j=1}^{n} j^{(2\kappa-1)/\alpha} \right)^{\alpha} \le \tag{2.78}
\]
\[
C K(f_t, g_t)^{10} \tau_n^2 \left( \lambda_1^{1/(1-\alpha) - 1} \sum_{j=1}^{n} \lambda_j \right)^{1-\alpha} \left( 1 + \int_1^n t^{(2\kappa-1)/\alpha}\, dt \right)^{\alpha} \le
\]
\[
C K(f_t, g_t)^{10} \tau_n^2\, \lambda_1^{\alpha} (\beta n)^{1-\alpha} \left( n^{(2\kappa-1)/\alpha + 1} + 2 \right)^{\alpha} \left( \frac{1}{(2\kappa - 1)/\alpha + 1} \right)^{\alpha}
\]
where $\beta = \frac{1}{n} \sum_{j=1}^{n} \lambda_j$. Recall that $\alpha > (1 - 2\kappa)$, which gives,
\[
\left( n^{(2\kappa-1)/\alpha + 1} + 2 \right)^{\alpha} \le 3 n^{\alpha} n^{2\kappa - 1}. \tag{2.79}
\]

Take $\alpha$ such that $\varepsilon = \alpha - (1 - 2\kappa)$. Equations (2.78) and (2.79) give,
\[
\|D_t\|_{HS}^2 \le \frac{C'}{\varepsilon} K(f_t, g_t)^{10} \tau_n^2\, \beta^{1-\alpha} \lambda_1^{\alpha} n^{2\kappa} \le \frac{C''}{\varepsilon} K(f_t, g_t)^{10} \tau_n^2 \max(\beta, 1)\, \lambda_1^{1 - 2\kappa + \varepsilon} n^{2\kappa}.
\]

Recall that we assume that $t < \tau$. By the definition of $\tau$, we get $\lambda_1 \le C \tau_n^2 n^{2\kappa} \log n$ and $K(f_t, g_t) \le 2 K(f, g) / \varepsilon$. Part (ii) of proposition 2.2.6 implies $E[\beta] \le 1$. Plugging these facts into the last equation gives,
\[
E\left[ \|D_t\|_{HS}^2 \right] \le \frac{C}{\varepsilon^{11}} K(f, g)^{10} \tau_n^2 \left( \tau_n^2 n^{2\kappa} \log n \right)^{1 - 2\kappa + \varepsilon} n^{2\kappa} e^{-t} \le \frac{C'}{\varepsilon^{11}} K(f, g)^{10} \tau_n^2\, n^{4\kappa - 4\kappa^2 + \varepsilon} e^{-t}.
\]

Finally, using equations (2.73) and (2.75), we conclude,
\[
W_2(\tilde f_T, \tilde g_T)^2 \le E\left[ \int_0^{T \wedge \rho} \|D_s\|_{HS}^2\, ds \right] \le \frac{C}{\varepsilon^{11}} K(f, g)^{10} \tau_n^2\, n^{4\kappa - 4\kappa^2 + \varepsilon}. \tag{2.80}
\]

The proof is complete.

Remark 2.3.10 In the above lemma, if we replace the assumption that $f$ is isotropic by the assumption that $f, g$ are log-concave with respect to the Gaussian measure, then, following the same lines of proof while using theorem 1.4.5, one may improve the bound (2.69) and get,
\[ W_2(\tilde f, \tilde g) \le C(\varepsilon)\, K(f, g) \sqrt{\log n}. \]

We move on to the proof of theorem 2.1.7.

Proof of theorem 2.1.7: Let $K, T$ be convex bodies of volume 1 such that the covariance matrix of $K$ is $L_K^2\, \mathrm{Id}$. Fix $\varepsilon > 0$. Define,
\[ f(x) = \mathbf{1}_{K / L_K}(x)\, L_K^n, \qquad g(x) = \mathbf{1}_{T / L_K}(x)\, L_K^n, \]


so both $f$ and $g$ are probability densities and $f$ is isotropic. We have,
\[ K(f, g) = \mathrm{Vol}_n\left( \frac{K + T}{2} \right) = V. \]

We use lemma 2.3.9, which asserts that there exist two measures $\tilde f, \tilde g$, such that,
\[ \tilde f(x) \le f(x), \quad \tilde g(x) \le g(x), \quad \forall x \in \mathbb{R}^n, \tag{2.81} \]
\[ \int \tilde f(x)\, dx = \int \tilde g(x)\, dx \ge 1 - \varepsilon \tag{2.82} \]
and such that,
\[ W_2(\tilde f, \tilde g) \le \Delta, \]
where $\Delta = C(\varepsilon) V^5 \tau_n n^{2(\kappa - \kappa^2) + \varepsilon}$. Since $\tilde g$ is supported on $T / L_K$, it follows that,
\[ \int_{K / L_K} d^2(x, T / L_K)\, \tilde f(x)\, dx \le \Delta^2 \]
where $d(x, T / L_K) = \inf_{y \in T / L_K} |x - y|$. Denote,
\[ K_\alpha = \{ x \in K / L_K \,;\ d(x, T / L_K) \ge \alpha \Delta \}. \]
It follows from Markov's inequality and from (2.81) and (2.82) that,
\[ \mathrm{Vol}_n(K_\alpha) \le L_K^{-n} \left( \varepsilon + \frac{1}{\alpha^2} \right). \]

Finally, taking $\delta = L_K \Delta / \sqrt{\varepsilon}$ gives
\[ \mathrm{Vol}_n(K \setminus T_\delta) \le 2 \varepsilon. \tag{2.83} \]
This completes the proof.


Chapter 3

Complexity results using probabilistic constructions

This chapter is divided into two sections, each one containing a proof of an information-complexity result about estimating a certain parameter of a convex body. In the first section, we show that the volume of a convex body cannot be estimated using a polynomial number of random points generated from the uniform measure on the body. In the second section, we show that in order to reconstruct a single entry in the inverse-covariance matrix of some random vector in $\mathbb{R}^n$, one needs at least $cn$ samples.

3.1 Nonexistence of a volume estimation algorithm

Volume-related properties of high-dimensional convex bodies are one of the main topics of current research in convex geometry. Naturally, calculating or approximating the volume of a convex body is an important problem. Starting from the 1980's, several works have been devoted to finding a fast algorithm for computing the volume of a convex body (see for example [B], [BF], [LS], [DFK], [LV] and references therein).

These algorithms usually assume that the convex body $K \subset \mathbb{R}^n$ is given by a certain oracle. An oracle is a "black box" which provides the algorithm with some information about the body. One example of an oracle is the membership oracle, which, given a point $x \in \mathbb{R}^n$, answers either "$x \in K$" or "$x \notin K$". Another example is the random point oracle, which generates random points uniformly distributed over $K$.

All volume-computing algorithms known to the author which appear in the literature use the membership oracle. This note deals with a question asked by L. Lovász about the random point oracle. It has been an open problem for a while whether or not it is possible to find a fast algorithm which computes the volume of $K$ provided access to the random point oracle ([GR], [Lo]). We answer this question negatively. In order to formulate our main result, we begin with some definitions.


An algorithm which uses the random point oracle is a (possibly randomized) function whose input is a finite sequence of random points generated according to the uniform measure on $K$ and whose output is a number, which is presumed to be an approximation of the volume of $K$. The complexity of the algorithm is defined as the length of the sequence of random points. We are interested in the existence of algorithms whose complexity depends polynomially on the dimension $n$. We say that an algorithm is correct up to $C$ with probability $p$ if, for any $K \subset \mathbb{R}^n$, given the sequence of random points from $K$, the output of the algorithm is between $\mathrm{Vol}(K)/C$ and $C\, \mathrm{Vol}(K)$ with probability at least $p$.

We prove the following theorem:

Theorem 3.1.1 There do not exist constants $C, p, \kappa > 0$ such that for any dimension $n$, there exists an algorithm with complexity $O(n^\kappa)$ which is correct in estimating the volume of convex bodies in $\mathbb{R}^n$ up to $C$ with probability $p$.

It is important to emphasize that this result is not a result in complexity theory. Here we show that a polynomial number of points actually does not contain enough information to estimate the volume, regardless of the number of calculations; hence, the result is of an information-theoretic nature.

For convex geometers, the main point in this study may be the additional information it provides on volume distribution in convex bodies. We suggest that the reader view this result in light of the recent results concerning the distribution of mass in convex bodies. In particular, results regarding thin-shell concentration and the Central Limit Theorem for convex bodies, proved in the general case by B. Klartag, show that essentially all of the mass of an isotropic convex body $K$ is contained in a very thin shell around the origin, and that almost all of the marginals are approximately Gaussian. This may suggest that, in some way, all convex bodies, when neglecting a small portion of the mass, behave more or less like a Euclidean ball in many senses. Philosophically, one can also interpret these results as follows: provided a small number of points from a logarithmically-concave measure, one cannot distinguish it from a spherically symmetric measure. For definitions and results see [K4]. One of the main stages of our proof is to show that one cannot distinguish between the uniform distribution over certain convex bodies, which are geometrically far from a Euclidean ball, and some spherically symmetric distribution, when the number of sample points is at most polynomially large.

Here is a more quantitative formulation of what we prove:

Theorem 3.1.2 There exist a constant $\varepsilon > 0$ and a number $N \in \mathbb{N}$ such that for all $n > N$, there does not exist an algorithm whose input is a sequence of length $e^{n^\varepsilon}$ of points generated randomly according to the uniform measure in a convex body $K \subset \mathbb{R}^n$, which determines $\mathrm{Vol}(K)$ up to $e^{n^\varepsilon}$ with probability more than $e^{-n^\varepsilon}$ of being correct.

Remark. After showing that the volume of a convex body cannot be approximated, one may further ask: what about an algorithm that estimates the volume radius of a convex body, defined by $\mathrm{VolRad}(K) = \mathrm{Vol}(K)^{1/n}$? A proof showing that this is also impossible would have to be far more delicate than our proof. For example, under the hyperplane conjecture, it is easy to estimate the volume radius of a convex body up to some $C > 0$.

One may also compare this result to the two following related results: in a recent result, N. Goyal and L. Rademacher ([GR]) show that in order to learn a convex body, one needs at least $2^{c\sqrt{n/\varepsilon}}$ random points. Learning a convex body roughly means finding a set having at most $\varepsilon$ relative symmetric difference with the actual body (see [GR]). Klivans, O'Donnell and Servedio ([KOS]) show that any convex body can be agnostically learned with respect to the Gaussian distribution using $2^{O(\sqrt{n})}$ labelled Gaussian samples.

The general idea of the proof is as follows. Let $\{K_\alpha\}_{\alpha \in I_1}$ and $\{K_\alpha\}_{\alpha \in I_2}$ be two families of convex bodies. For $i = 1, 2$, a probability measure $\mu_i$ on the set of indices $I_i$ induces a random construction of convex bodies, which in turn induces a probability measure $P_i$ on the set of sequences of points in $\mathbb{R}^n$ in the following simple way: first generate an index $\alpha$ according to $\mu_i$, and then generate a sequence of $N$ uniformly distributed random samples from $K_\alpha$.

In the proof we will define two distinct random constructions of convex bodies, $\mathcal{K}_i = (\{K_\alpha\}_{\alpha \in I_i}, \mu_i)$, $i = 1, 2$, such that:
1. For every $\alpha_1 \in I_1$ and $\alpha_2 \in I_2$, the ratio between $\mathrm{Vol}(K_{\alpha_1})$ and $\mathrm{Vol}(K_{\alpha_2})$ is large.
2. If $N$ is not too large, both distributions $P_1, P_2$ are close in total variation distance to some distributions of samples in which the samples are independent and have a spherically symmetric law.
3. The radial profiles (hence the distributions of the Euclidean norm of a random sample) of typical random bodies $K_1, K_2$ are very close to each other.
In other words, we will define two constructions of random convex bodies for which: 1. the typical volumes of the bodies they produce will be far from equal; 2. they will both be indistinguishable from spherically symmetric constructions for a polynomial number of samples; 3. the radial profiles they produce are indistinguishable from each other for a polynomial number of samples.

To go on with the proof, a simple application of Yao's lemma will help us assume that the algorithm is deterministic. A deterministic algorithm is actually a function $F : \mathbb{R}^{n^{\kappa+1}} \to \mathbb{R}$ which takes a sequence of points and returns an estimate of the volume of the body. If the total variation distance between the probabilities $P_1$ and $P_2$ defined above is small, then there exists a set $A \subset \mathbb{R}^{n^{\kappa+1}}$ which has high probability with respect to both $P_1$ and $P_2$. Obviously, for all $x \in A$, $F(x)$ is wrong in approximating the volume of at least one of the families.

In section 2, we will describe how we build these families of bodies, $K_\alpha$, using a random construction which starts from a Euclidean ball, to which deletions which cut out parts of it, generated by some Poisson process, are applied. Then, using elementary properties of the Poisson process and some concentration of measure properties of the ball, we will see that the correlation between different points in a polynomially long sequence of random points generated uniformly from the body will be very weak (with respect to the generation of the body itself). Using this fact, we will only have to inspect the distribution of a single random point. The construction will have a spherically-symmetric nature, so the density of a single random point will only depend on its distance from the origin, and therefore we will only have to care about the distribution of the distance of a point from the origin in the generated bodies. The role of the following section, which is more technical but fairly delicate, will be to calibrate this construction so that these families have different volumes, yet approximately the same distribution of distance from the origin.

Before we proceed to the proof, let us introduce some notation. In this chapter, the number $n$ will always denote a dimension. For an expression $f(n)$ which depends on $n$, by $f(n) = SE(n)$ we mean: there exist some $n_0 \in \mathbb{N}$ and $\varepsilon > 0$ such that for all $n > n_0$, $|f(n)| < e^{-n^\varepsilon}$. We also write $f(n) = g(n)(1 + SE(n))$ for $\left| \frac{f(n)}{g(n)} - 1 \right| = SE(n)$, and $f(n) = g(n) + SE(n)$ for $|f(n) - g(n)| = SE(n)$. The notation $f(n) \lesssim g(n)$ and $f(n) \gtrsim g(n)$ will be interpreted as $f(n) < g(n)$ and $f(n) > g(n)$ for $n$ large enough. Moreover, $N = N(n)$ denotes the length of the sequence of random points. All throughout this section we assume that there exists a universal constant $\varepsilon > 0$ such that $N(n) < e^{n^\varepsilon}$.

3.1.1 The Deletion Process

In this section we will describe the construction of the random bodies which will later be used as counterexamples. Our goal, after describing the actual construction, will be to prove, using some simple properties of the Poisson distribution, a weak-correlation property between different points generated from the body.

Denote by Dn the n dimensional Euclidean ball of unit radius, centered at the origin, and by ωn its

Lebesgue measure.

Recall that for two probability measures $P_1, P_2$ on a set $\Omega$, the total variation distance between the two measures is defined by
\[ d_{TV}(P_1, P_2) = \sup_{A \subseteq \Omega} |P_1(A) - P_2(A)|. \]
One can easily check that if these measures are absolutely continuous with respect to some third measure $Q$, then it is also equal to half the $L^1(Q)$ distance between the two densities.
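The two descriptions of $d_{TV}$ — the supremum over events and half the $L^1$ distance of the densities — can be checked directly on a small discrete space; a purely illustrative sketch:

```python
from itertools import combinations

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
omega = range(len(p))

# supremum over all events A of |P1(A) - P2(A)|
tv_sup = 0.0
for r in range(len(p) + 1):
    for A in combinations(omega, r):
        tv_sup = max(tv_sup, abs(sum(p[i] for i in A) - sum(q[i] for i in A)))

# half the L1 distance between the two densities
tv_l1 = 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))
```

Both computations give the same value (here 0.3), attained by the event that collects the points where one density exceeds the other.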

Define $r_0 = n^{-1/3}$, and
\[ T_0(\theta) = D_n \cap \{ x \,;\ \langle x, \theta \rangle \le r_0 \}. \]
Let $T$ be a function from the unit sphere to the set of convex bodies, such that for every $\theta \in S^{n-1}$, $T(\theta)$ satisfies $T_0(\theta) \subseteq T(\theta) \subseteq D_n$. (Recall that most of the mass of the Euclidean ball is contained in $\{ x_1 \in [-1, C n^{-1/2}] \}$, so $T(\theta)$ contains almost all the mass of the Euclidean ball.) Moreover, let $m > 0$.

We will now describe our construction of a random convex body, $K_{T,m}$. First, suppose that $m \in \mathbb{N}$. Let $\Theta = (\theta_1, \theta_2, \ldots, \theta_m)$ be $m$ independent random directions, distributed according to the uniform measure on $S^{n-1}$. We define $K_{T,m}$ as,
\[ K_{T,m} = D_n \cap \bigcap_i T(\theta_i). \]
Finally, instead of taking a fixed $m \in \mathbb{N}$, we take $\zeta$ to be a Poisson random variable with expectation $m$, independent of the above. We can now define $K_{T,\zeta}$ in the same manner.
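The construction above can be sketched in code in the simplest admissible case $T(\theta) = T_0(\theta)$, where each deletion removes the cap $\{x ;\ \langle x, \theta \rangle > r_0\}$. The parameters here are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                       # dimension
r0 = n ** (-1.0 / 3.0)
m = 20                       # expected number of deletions

num_deletions = rng.poisson(m)                  # zeta ~ Poisson(m)
thetas = rng.standard_normal((num_deletions, n))
thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # uniform on S^{n-1}

def in_body(x):
    # x lies in K_{T,zeta} iff |x| <= 1 and every deletion T_0(theta_i) keeps x
    if np.linalg.norm(x) > 1.0:
        return False
    return bool(np.all(thetas @ x <= r0))
```

The origin always survives every deletion, while a point far out in a deleted direction (e.g. $\theta_i$ itself) is cut away.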

Let us denote by $\mu$ the probability measure on the set of convex bodies induced by the process described above. After generating the body $K_{T,m}$, which from now on will be denoted just by $K$ wherever no confusion is caused, we consider the following probability space: let $\Omega = (D_n)^N$ be the set of sequences of length $N$ of points from $D_n$. Denote by $\lambda$ the uniform probability measure on $\Omega$, and for a convex body $K$ denote by $\lambda_K$ the uniform probability measure on $K^N = \prod_{1 \le i \le N} K \subseteq \Omega$. Finally, define a probability measure $P = P_{T,m}$ on $\Omega$ as follows: for $A \subseteq \Omega$,
\[ P(A) = \int \lambda_K(A)\, d\mu(K) = \int \frac{\mathrm{Vol}(K^N \cap A)}{\mathrm{Vol}(K^N)}\, d\mu(K) \]
(the measure $P$ describes the following process: first generate the random set $K$ according to the construction described above, and then generate $N$ i.i.d. random points, independent of the above, according to the uniform measure on $K$). Moreover, for $p = (x_1, \ldots, x_N) \in \Omega$, define $\pi_i(p) = x_i$, the projections onto the $i$-th copy of the Euclidean ball.

It is easy to check that $P$ is absolutely continuous with respect to $\lambda$. We define the following function on $\Omega$:
\[ f_{T,m}(p) = P(p \in K_{T,m}^N) = P(\forall 1 \le i \le N, \ \pi_i(p) \in K_{T,m}). \tag{3.1} \]
As we will see later, the function $f$ is related in a simple way to $\frac{dP}{d\lambda}$. Namely, we will have,
\[ \frac{dP}{d\lambda}(p) = (1 + SE(n))\, \frac{f(p)}{\int_\Omega f} \]
for all $p$ in some subset of $\Omega$ with measure close to 1. For convenience, from now on $f_{T,m}$ will be denoted by $f$.

We start with some simple geometric observations regarding $\Omega$. Denote by $\sigma$ the rotation invariant probability measure on $S^{n-1}$. Define, for $p \in \Omega$, $1 \le i \le N$,
\[ A_i(p) = \{ \theta \in S^{n-1} \,;\ \pi_i(p) \notin T(\theta) \}. \tag{3.2} \]
For $1 \le i, j \le N$, let $F_{i,j} \subset \Omega$ be the event defined by
\[ F_{i,j} = \left\{ p \,;\ \frac{\sigma(A_i(p) \cap A_j(p))}{\sigma(A_i(p))} < e^{-n^{0.1}} \right\} \tag{3.3} \]
and let,
\[ F = \bigcap_{1 \le i \ne j \le N} F_{i,j} \tag{3.4} \]
(which should be understood as "no two points are too close to each other" and, as we will see, will imply that the points are weakly correlated). We start with the following simple lemma.


Lemma 3.1.3 Under the above notations:
(i) $\lambda(F) = 1 + SE(n)$.
(ii) There exists some $\varepsilon_0 > 0$ such that if the following condition holds,
\[ P_\mu\left( \mathrm{Vol}(K) < \omega_n e^{-n^{\varepsilon_0}} \right) < e^{-n^{\varepsilon_0}} \tag{3.5} \]
(hence, the volume of $K$ is typically not much smaller than the volume of $D_n$), then we have $P(F) = 1 + SE(n)$.

Proof:
(i) Let $p$ be uniformly distributed in $\Omega$. Denote $x_i = \pi_i(p)$, so $x_1, x_2$ are independent points uniformly distributed in $D_n$. Let us calculate $\lambda(F_{1,2})$. First, for a fixed $\theta \in S^{n-1}$, one has
\[ P(x_1 \notin T(\theta)) \le P(x_1 \notin T_0(\theta)) = P(\langle x_1, \theta \rangle > r_0). \]
Recalling that $r_0 = n^{-1/3} \gg n^{-1/2}$, by elementary calculations regarding marginals of the Euclidean ball, one gets
\[ P(x_1 \notin T(\theta)) \lesssim e^{-n^{0.2}}. \]

Now, fix $x_2' \in D_n$. Define $A_i := A_i(p)$. One has,
\[ E\left( \sigma(A_1 \cap A_2) \,\middle|\, x_2 = x_2' \right) = \int_{A_2} P(\theta \in A_1)\, d\sigma(\theta) = \int_{A_2} P(x_1 \notin T(\theta))\, d\sigma(\theta) \lesssim \sigma(A_2)\, e^{-n^{0.2}}. \]
And so,
\[ E\left( \frac{\sigma(A_1 \cap A_2)}{\sigma(A_2)} \,\middle|\, x_2 = x_2' \right) \lesssim e^{-n^{0.2}}. \tag{3.6} \]
Now, this is true for every choice of $x_2'$, so integrating over $x_2'$ gives
\[ E\, \frac{\sigma(A_1 \cap A_2)}{\sigma(A_2)} \lesssim e^{-n^{0.2}}. \]

Now we use Markov's inequality to get
\[ \lambda(F_{1,2}^C) = \lambda\left( \frac{\sigma(A_1 \cap A_2)}{\sigma(A_2)} > e^{-n^{0.1}} \right) = SE(n). \tag{3.7} \]
A union bound completes the proof of (i).

Proof of (ii): First, we can condition on the event $\mathrm{Vol}(K) > \omega_n e^{-n^{\varepsilon_0}}$ (with $\varepsilon_0$ to be chosen later); (3.5) ensures us that it happens with probability $1 - SE(n)$. Observe that for any event $E \subset \Omega$ which is measurable with respect to the $\sigma$-field generated by $\pi_1, \pi_2$, we have
\[ \lambda_K(E) = \frac{\omega_n^2\, \lambda\left( (K \times K \times D_n \times \cdots \times D_n) \cap E \right)}{\mathrm{Vol}(K)^2} \le \frac{\omega_n^2\, \lambda(E)}{\mathrm{Vol}(K)^2}. \tag{3.8} \]
Now, taking $E = F_{1,2}^C$, choosing $\varepsilon_0$ to be small enough and using (3.7) and (3.8) along with (3.5), one gets
\[ P(F_{1,2}) = 1 + SE(n). \]


Applying a union bound finishes the proof.

We can now turn to the lemma which contains the main ideas of this section:

Lemma 3.1.4 There exist $\varepsilon_0, \varepsilon_1 > 0$ and $n_0$ such that for every $n > n_0$, the following holds: whenever $m$ is small enough that the following condition is satisfied:
\[ P(\theta \in K) > e^{-n^{\varepsilon_0}}, \quad \forall \theta \in S^{n-1} \tag{3.9} \]
(hence, we are not removing too much volume, in expectation, even from the outer shell), then:
(i) We have,
\[ P\left( \left| \mathrm{Vol}(K) - E(\mathrm{Vol}(K)) \right| > e^{-n^{\varepsilon_1}} E(\mathrm{Vol}(K)) \right) = SE(n) \tag{3.10} \]
and also (3.5) holds.
(ii) For all $p \in F$, we have
\[ f(p) = (1 + SE(n)) \prod_{j=1}^{N} P(\pi_j(p) \in K). \]
In other words, if we define $\bar f : D_n \to \mathbb{R}$ as,
\[ \bar f(x) = P(x \in K), \tag{3.11} \]
then
\[ f(p) = (1 + SE(n)) \prod_i \bar f(\pi_i(p)), \quad \forall p \in F. \tag{3.12} \]
(iii)
\[ \frac{E\left( \mathrm{Vol}(K^N \cap F) \right)}{(E\, \mathrm{Vol}(K))^N} - 1 = SE(n). \]

Proof: We begin by proving (ii). Fix $p \in F$. Define $x_i = \pi_i(p)$, and $A_i = A_i(p) \subset S^{n-1}$ as in (3.2). Also define $G_j = \bigcap_{i \le j} \{ x_i \in K \}$. Fix $2 \le j \le N$. Let us try to estimate $P(G_j \mid G_{j-1})$.

When conditioning on the event $G_{j-1}$, we can consider our Poisson process as a superposition of three "disjoint" Poisson processes: the first one, with intensity $\lambda_s$, only generates deletions that cut $x_j$ but leave all the $x_i$'s for $i < j$ intact; the second one, with intensity $\lambda_u$, deletes $x_j$ along with one of the other $x_i$'s; and the third one is the complement (hence, deletions that do not affect $x_j$). Recalling that the expectation of the number of deletions is $m$, we have
\[ \lambda_s(S^{n-1}) + \lambda_u(S^{n-1}) = m\, \sigma(A_j). \tag{3.13} \]
Moreover,
\[ \lambda_u(S^{n-1}) \le m \sum_{i < j} \sigma(A_i \cap A_j) \tag{3.14} \]


(in the above formula we are including, multiple times, deletions that cut more than two points, hence

the inequality rather than equality).

Now, using the definition of $F$ one gets
\[ \frac{\lambda_u(S^{n-1})}{\lambda_s(S^{n-1}) + \lambda_u(S^{n-1})} = SE(n). \tag{3.15} \]

Note that (3.9) implies
\[ e^{-(\lambda_s(S^{n-1}) + \lambda_u(S^{n-1}))} \ge e^{-m\, \sigma(\{ \theta ;\ x_j / |x_j| \notin T(\theta) \})} \ge e^{-n^{\varepsilon_0}} \tag{3.16} \]
(the first inequality follows from the fact that the $T(\theta)$ are star-shaped). The last two inequalities give,
\[ \lambda_u(S^{n-1}) = SE(n). \tag{3.17} \]

It follows that,
\[ \left| \frac{P(G_j \mid G_{j-1})}{P(x_j \in K)} - 1 \right| = \frac{e^{-\lambda_s(S^{n-1})}}{e^{-(\lambda_s(S^{n-1}) + \lambda_u(S^{n-1}))}} - 1 = SE(n). \tag{3.18} \]

Moreover, one has
\[ P(G_N) = \prod_j P(G_j \mid G_{j-1}) = \prod_j \left( \frac{P(G_j \mid G_{j-1})}{P(x_j \in K)}\, P(x_j \in K) \right). \tag{3.19} \]
Using (3.18) and (3.19) we get
\[ f(p) = P(G_N) = (1 + SE(n)) \prod_j P(x_j \in K). \tag{3.20} \]
This proves (ii).

Proof of (i): Showing that (3.5) holds is just a matter of noticing that $P(x \in K)$ is monotone decreasing with respect to $|x|$ and taking $\varepsilon_0$ to be small enough. We turn to estimate $E(\mathrm{Vol}(K)^2)$. We have
\[ E(\mathrm{Vol}(K)^2) = \int_{D_n \times D_n} P(\{x_1 \in K\} \cap \{x_2 \in K\})\, dx_1 dx_2 = \tag{3.21} \]
\[ \int_{(D_n \times D_n) \cap F_{1,2}} P(\{x_1 \in K\} \cap \{x_2 \in K\})\, dx_1 dx_2 + \int_{(D_n \times D_n) \cap F_{1,2}^C} P(\{x_1 \in K\} \cap \{x_2 \in K\})\, dx_1 dx_2 \tag{3.22} \]
(we will later see that the second summand is negligible). Now, (3.20) gives
\[ \int_{(D_n \times D_n) \cap F_{1,2}} P(\{x_1 \in K\} \cap \{x_2 \in K\})\, dx_1 dx_2 = (1 + SE(n)) \int_{(D_n \times D_n) \cap F_{1,2}} P(x_1 \in K)\, P(x_2 \in K)\, dx_1 dx_2, \tag{3.23} \]
which also implies that
\[ \int_{(D_n \times D_n) \cap F_{1,2}} P(\{x_1 \in K\} \cap \{x_2 \in K\})\, dx_1 dx_2 > \frac{1}{2}\, e^{-2 n^{\varepsilon_0}}. \]


Recall that $\lambda(F_{1,2}^C) = SE(n)$ (as a result of the previous lemma). Taking $\varepsilon_0$ to be small enough, we will get
\[ E(\mathrm{Vol}(K)^2) = (1 + SE(n)) \int_{(D_n \times D_n) \cap F_{1,2}} P(\{x_1 \in K\} \cap \{x_2 \in K\})\, dx_1 dx_2 = (1 + SE(n)) \int_{(D_n \times D_n) \cap F_{1,2}} P(x_1 \in K)\, P(x_2 \in K)\, dx_1 dx_2. \]

On the other hand,
\[ E(\mathrm{Vol}(K))^2 = \int_{D_n \times D_n} P(x_1 \in K)\, P(x_2 \in K)\, dx_1 dx_2. \tag{3.24} \]
Using the same considerations as above, the part of the integral over $F_{1,2}^C$ can be ignored, hence,
\[ E(\mathrm{Vol}(K))^2 = (1 + SE(n)) \int_{(D_n \times D_n) \cap F_{1,2}} P(x_1 \in K)\, P(x_2 \in K)\, dx_1 dx_2. \tag{3.25} \]
So we finally get
\[ E(\mathrm{Vol}(K)^2) = (1 + SE(n))\, E(\mathrm{Vol}(K))^2. \tag{3.26} \]

Recalling that we assume (3.9) and using Chebyshev's inequality, this easily implies (i).

For the proof of (iii),
\[ E\left( \mathrm{Vol}(K^N \cap F) \right) = \int_F P(p \in K^N)\, dp = (1 + SE(n)) \int_F \prod_i \bar f(\pi_i(p))\, dp \le (1 + SE(n))\, (E\, \mathrm{Vol}(K))^N. \]

Consider the density $\frac{dP}{d\lambda}$. Our next goal is to find a connection between this density and the function $f$. Let $A \subseteq F \subset \Omega$. Using the concentration properties of $\mathrm{Vol}(K)$, we will prove the following,
\[ P(A) = \frac{\int_A f(p)\, dp}{\left( \int_{D_n} \bar f(x)\, dx \right)^N} + SE(n), \tag{3.27} \]
where $f, \bar f$ are defined in equations (3.1) and (3.11).

We have,
\[ P(A) = E_\mu\left( \frac{\mathrm{Vol}(K^N \cap A)}{\mathrm{Vol}(K^N)} \right) = E_\mu\left( \frac{\mathrm{Vol}(K^N \cap A)}{\mathrm{Vol}(K)^N} \right). \tag{3.28} \]
By Fubini,
\[ E_\mu\, \mathrm{Vol}(K^N \cap A) = \int_A f(p)\, dp. \tag{3.29} \]

Consider the event
\[ G := \left\{ \left| \frac{\mathrm{Vol}(K)^N}{E(\mathrm{Vol}(K))^N} - 1 \right| < e^{-n^{\varepsilon_1}/2} \right\} \]
(where $\varepsilon_1$ is the constant from lemma 3.1.4). By the definition of $G$, we have
\[ \int_G \frac{\mathrm{Vol}(K^N \cap A)}{\mathrm{Vol}(K)^N}\, d\mu(K) = \frac{\int_G \mathrm{Vol}(K^N \cap A)\, d\mu(K)}{E(\mathrm{Vol}(K))^N} + SE(n). \tag{3.30} \]
It follows from part (i) of lemma 3.1.4 that,
\[ \mu(G) = P\left( \left| \left( \frac{\mathrm{Vol}(K)}{E(\mathrm{Vol}(K))} \right)^N - 1 \right| \le e^{-n^{\varepsilon_1}/2} \right) \ge P\left( \left| \frac{\mathrm{Vol}(K)}{E(\mathrm{Vol}(K))} - 1 \right| \le \frac{e^{-n^{\varepsilon_1}/2}}{2N} \right) \ge P\left( \left| \frac{\mathrm{Vol}(K)}{E(\mathrm{Vol}(K))} - 1 \right| \le e^{-n^{\varepsilon_1}} \right) = 1 + SE(n). \]

So $\mu(G) = 1 + SE(n)$, which gives,
\[ \int_{G^C} \frac{\mathrm{Vol}(K^N \cap A)}{\mathrm{Vol}(K)^N}\, d\mu(K) \le \mu(G^C) = SE(n). \tag{3.31} \]
We will also need:
\[ \frac{\int_{G^C} \mathrm{Vol}(K^N \cap A)\, d\mu(K)}{(E\, \mathrm{Vol}(K))^N} = SE(n). \tag{3.32} \]

To prove this, first recall that $A \subseteq F$. This gives,
\[ \frac{\int_{G^C} \mathrm{Vol}(K^N \cap A)\, d\mu(K)}{(E\, \mathrm{Vol}(K))^N} \le \frac{\int_{G^C} \mathrm{Vol}(K^N \cap F)\, d\mu(K)}{(E\, \mathrm{Vol}(K))^N} = \frac{E_\mu\, \mathrm{Vol}(K^N \cap F)}{(E\, \mathrm{Vol}(K))^N} - \frac{\int_G \mathrm{Vol}(K^N \cap F)\, d\mu(K)}{(E\, \mathrm{Vol}(K))^N}. \tag{3.33} \]
Now,
\[ \int_G \frac{\mathrm{Vol}(K^N \cap F)}{\mathrm{Vol}(K^N)}\, d\mu(K) = E_\mu\, \frac{\mathrm{Vol}(K^N \cap F)}{\mathrm{Vol}(K^N)} + SE(n) = P(F) + SE(n) = 1 + SE(n), \]
so,
\[ \frac{\int_G \mathrm{Vol}(K^N \cap F)\, d\mu(K)}{(E\, \mathrm{Vol}(K))^N} = 1 + SE(n). \tag{3.34} \]
Using part (iii) of lemma 3.1.4 along with (3.33) and (3.34) proves (3.32).

Plugging together (3.28), (3.30), (3.31) and (3.32) implies
\[ P(A) = E_\mu\, \frac{\mathrm{Vol}(K^N \cap A)}{\mathrm{Vol}(K)^N} = \int_G \frac{\mathrm{Vol}(K^N \cap A)}{\mathrm{Vol}(K)^N}\, d\mu(K) + SE(n) = \frac{\int_G \mathrm{Vol}(K^N \cap A)\, d\mu(K)}{E(\mathrm{Vol}(K))^N} + SE(n) = \frac{E_\mu\, \mathrm{Vol}(K^N \cap A)}{E(\mathrm{Vol}(K))^N} + SE(n). \tag{3.35} \]

Recall that, as a result of Fubini's theorem,
\[ E_\mu(\mathrm{Vol}(K)) = \int_{D_n} \bar f(x)\, dx. \tag{3.36} \]

Plugging (3.35), (3.36) and (3.29) together proves (3.27). We would now like to use the result of lemma 3.1.4 to replace $f$ with $\bar f$. Let $A' \subseteq \Omega$. Define $A = A' \cap F$, so that
\[ P(A') = P(A) + P(A' \cap F^C). \]
Part (ii) of lemma 3.1.3 together with (3.27) gives
\[ P(A') = P(A) + SE(n) = \frac{\int_A f(p)\, dp}{\left( \int_{D_n} \bar f(x)\, dx \right)^N} + SE(n). \]

We can now plug in (3.12) to get
\[ P(A') = \frac{\int_A \prod_i \bar f(\pi_i(p))\, dp}{\left( \int_{D_n} \bar f(x)\, dx \right)^N} + SE(n). \]
So, finally, defining
\[ \frac{d\tilde P}{dp} = \frac{\mathbf{1}_{p \in F} \prod_i \bar f(\pi_i(p))}{\left( \int_{D_n} \bar f(x)\, dx \right)^N} = \mathbf{1}_{p \in F} \prod_i \frac{\bar f(\pi_i(p))}{\int_{D_n} \bar f(x)\, dx}, \]
we have proved the following lemma:


Lemma 3.1.5 Suppose that the condition (3.9) from lemma 3.1.4 holds. Then one has
\[ d_{TV}(P, \tilde P) = SE(n). \]
Note that the measure $\tilde P$ is not, in general, a probability measure. The lemma, however, ensures us that $\tilde P(\Omega)$ is very close to 1.

Recall that our plan is to find two families of convex bodies, which are achieved by two pairs $(T_1, m_1)$ and $(T_2, m_2)$, such that $d_{TV}(P_1, P_2)$ is small even though their volumes differ. The above lemma motivates us to try to find such pairs with $\frac{\bar f_1}{\int \bar f_1} = \frac{\bar f_2}{\int \bar f_2} + SE(n)$. We formulate this accurately in the following lemma.

Lemma 3.1.6 Suppose there exist two pairs $(T_i, m_i)$ for $i = 1, 2$ such that (3.9) is satisfied and, in addition, defining $\bar f_1$ and $\bar f_2$ as in (3.11),
\[ \left\| \frac{\bar f_1}{\int_{D_n} \bar f_1} - \frac{\bar f_2}{\int_{D_n} \bar f_2} \right\|_{L^1(D_n)} = SE(n). \tag{3.37} \]
Then $d_{TV}(P_1, P_2) = SE(n)$.

Proof:
Using the previous lemma, it is enough to show that $d_{TV}(\tilde P_1, \tilde P_2) = SE(n)$. Define $g_i = \frac{\bar f_i}{\int_{D_n} \bar f_i}$. We have
\[ d_{TV}(\tilde P_1, \tilde P_2) \le \int_\Omega \left| \prod_{1 \le i \le N} g_1(\pi_i(p)) - \prod_{1 \le i \le N} g_2(\pi_i(p)) \right| dp \le \]
\[ \sum_{1 \le j \le N} \int_\Omega \left| \prod_{1 \le i \le j} g_1(\pi_i(p)) \prod_{j+1 \le i \le N} g_2(\pi_i(p)) - \prod_{1 \le i \le j+1} g_1(\pi_i(p)) \prod_{j+2 \le i \le N} g_2(\pi_i(p)) \right| dp = \]
\[ N \int_{D_n} |g_1(x) - g_2(x)|\, dx = SE(n). \]

In the next section we deal with how to calibrate Ti and mi so that (3.37) holds.

3.1.2 Building the two profiles

Our goal in this section is to build convex bodies with a prescribed radial profile.

For a measurable body $L \subset \mathbb{R}^n$, define
\[ g_L(r) = 1 - \sigma\left( \tfrac{1}{r} L \cap S^{n-1} \right). \tag{3.38} \]
This function should be understood as the "profile" of mass of the complement of $L$, which will eventually be the ratio of mass which a single deletion removes, in expectation, as a function of the distance from the origin. Define $g_i(r) = g_{T_i}(r)$.


Let us try to understand exactly what kind of construction we require. Fix $x \in D_n$. Keeping in mind that the function $T_i(\theta)$ commutes with orthogonal transformations, we learn that the probability that $x$ is removed in a single deletion of $T_i$ is exactly $g_i(|x|)$. By elementary properties of the Poisson process, this gives,
\[ P(x \in K_i) = \exp\left[ -m_i\, g_i(|x|) \right]. \tag{3.39} \]
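Relation (3.39) can be checked by simulation in a low dimension, again in the simplest case $T(\theta) = T_0(\theta)$: for $\theta$ uniform on $S^2$ and $|x| = r > r_0$, a single deletion removes $x$ with probability $g(r) = (1 - r_0/r)/2$ (the normalized height of the corresponding spherical cap), so $P(x \in K)$ should be $e^{-m\, g(r)}$. All parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
r0, m, r = 0.3, 5.0, 0.8
x = np.array([r, 0.0, 0.0])

trials = 20000
survived = 0
for _ in range(trials):
    zeta = rng.poisson(m)                       # number of deletions
    thetas = rng.standard_normal((zeta, 3))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    if np.all(thetas @ x <= r0):                # x survives every deletion
        survived += 1

empirical = survived / trials
g_r = (1.0 - r0 / r) / 2.0                      # cap measure in dimension 3
predicted = np.exp(-m * g_r)                    # equation (3.39)
```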

In view of (3.37), we would like the ratio $\frac{P(x \in K_1)}{P(x \in K_2)}$ to be (approximately) constant. Using (3.39), we see that the latter follows from
\[ m_1 g_1(|x|) - m_2 g_2(|x|) = C. \]
If we choose to pick $m_2 = 2 m_1$, this equality will be implied by the following requirements on $T_1, T_2$:
\[ g_1(1) = g_2(1) \ne 0, \quad \text{and} \quad g_1'(r) = 2 g_2'(r), \ r \in [0, 1]. \tag{3.40} \]

Assuming (3.5) holds and making use of the concentration of the radial profile of $D_n$, we will actually only be required to make sure the derivatives are proportional for $r \in [1 - n^{-0.99}, 1]$. Note that when (3.40) is attained, by picking different values of $m_1$, the ratio between the expected volumes of $K_1$ and $K_2$ can be made arbitrarily large while the expected radial profiles remain about as close. Lemma 3.1.6 will then ensure us that this is enough for the distributions to be indistinguishable.

The above is established in the main lemma of this section:

Lemma 3.1.7 For every dimension n, there exist two convex bodies T_1, T_2 ⊂ R^n satisfying the following:

(i) D^n ⊇ T_i ⊇ D^n ∩ {x ; 〈x, e_1〉 ≤ n^{−1/3}},  i = 1, 2.     (3.41)

(ii) The radial profiles satisfy

    g_1(1) = g_2(1) ≠ 0,  and  g'_1(r) = 2 g'_2(r)  ∀r ∈ [1 − n^{−0.99}, 1].     (3.42)

To achieve this, we begin by describing the following construction: define δ_0 = n^{−1/4}, δ_1 = n^{−0.99}. For every two constants a, b such that a ∈ [2, 200] and b ∈ [−1000, 1000], let f = f_{a,b} be the linear function with negative slope which satisfies

    f(δ_0(1 + δ_1 b)) = √(1 − (δ_0(1 + δ_1 b))^2)     (3.43)

and

    min_{x ∈ R} √(x^2 + f^2(x)) = a δ_0     (3.44)

(hence, its graph is a line of distance a δ_0 from the origin which meets the unit circle at x = δ_0(1 + b δ_1); note that such a linear function with negative slope exists since a δ_0 > δ_0(1 + b δ_1)). We define a convex body T_{a,b} by

    T_{a,b} = D^n ∩ { (x, y) ∈ R × R^{n−1} = R^n ; |y| ≤ f(x) }     (3.45)


(an intersection of the ball with a cone defined by a linear equation whose coefficients depend on a, b).

Recall that we require that a > 2 and b > −1000. First of all, it follows directly from requirement (3.43) and from the fact that the slope of f is negative that T_{a,b} satisfies (3.41) (since δ_0 ≫ n^{−1/3}).

Define g_{a,b}(r) = g_{T_{a,b}}(r) as in (3.38). Let us find an expression for g_{a,b}(r). First, a simple calculation shows that (3.44) implies that the function f_{a,b} intersects the x axis at some x < 2a δ_0. This shows that T_{a,b} ∩ r S^{n−1} has only one connected component for all r > 1/2 (hence, the vertex of the cone is inside the sphere).

Consider the intersection (1/r) T_{a,b} ∩ S^{n−1}. If r > 1/2, it must be a set of the form S^{n−1} ∩ {x_1 < x(a, b, r)} for some function x(a, b, r). Let us try to find an expression for this function. Equation (3.44) shows that T_{a,b} is an intersection of D^n with halfspaces at distance a δ_0 from the origin. This implies that x(a, b, r) must satisfy

    x(a, b, r) = sin( arcsin(a δ_0 / r) + c )

for some constant c (draw a picture). To find the value of c, we use (3.43) to get x(a, b, 1) = δ_0(1 + b δ_1), and so

    x(a, b, r) = sin( arcsin(a δ_0 / r) − arcsin(a δ_0) + arcsin(δ_0(1 + b δ_1)) ).     (3.46)

Next, define

    Ψ(x) = (1/ω_n) ∫_{min(x,1)}^{1} (1 − t^2)^{(n−3)/2} dt,

the surface area measure of a cap whose base has distance x from the origin. We finally have

    g_{a,b}(r) = σ( S^{n−1} ∩ {x_1 ≥ x(a, b, r)} ) = Ψ(x(a, b, r)).     (3.47)
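Ψ is the normalized surface measure of a spherical cap; a small numerical sketch, assuming ω_n is the full normalizing integral so that σ is a probability measure:

```python
import numpy as np

def cap_profile(x, n, grid=200_001):
    """Psi(x): normalized surface measure of the cap {x1 >= x} on S^{n-1}."""
    t = np.linspace(-1.0, 1.0, grid)
    dens = (1.0 - t**2) ** ((n - 3) / 2.0)   # density of the first coordinate
    dt = t[1] - t[0]
    omega_n = dens.sum() * dt                # normalizing constant (full integral)
    return dens[t >= min(x, 1.0)].sum() * dt / omega_n

n = 50
assert abs(cap_profile(-1.0, n) - 1.0) < 1e-9    # whole sphere
assert abs(cap_profile(0.0, n) - 0.5) < 1e-3     # hemisphere, by symmetry
assert cap_profile(0.2, n) < cap_profile(0.1, n) # Psi is decreasing
```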

Given a subset I' ⊆ R × R, we define

    K_{I'} = ⋂_{(a,b) ∈ I'} T_{a,b}.     (3.48)

Clearly,

    g_{I'}(r) := g_{K_{I'}}(r) = sup_{(a,b) ∈ I'} g_{a,b}(r).

Our goal is to choose such a subset so that (3.42) is fulfilled. We will use the following elementary result:

Lemma 3.1.8 Let c > 0, and let {f_α}_{α ∈ I} be a family of twice-differentiable functions defined on [x_1, x_2] such that for every triplet (x, y, y') ∈ [x_1, x_2] × [y_1, y_2] × [y'_1, y'_2] there exists α ∈ I such that

    f_α(x) = y,  f'_α(x) = y',  f''_α(t) ≤ c  ∀t ∈ [x_1, x_2].     (3.49)

Then for every twice-differentiable function g : [x_1, x_2] → [y_1, y_2] with

    g'(x) ∈ [y'_1, y'_2],  g''(x) > c,     (3.50)

there exists a subset I' ⊂ I such that

    g(x) = sup_{α ∈ I'} f_α(x).     (3.51)
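Lemma 3.1.8 is a standard upper-envelope argument: a function whose curvature exceeds c is the supremum of a sub-family of curvature-c functions touching it from below. A minimal numerical sketch with the hypothetical choices g(x) = x², c = 1, and f_a the curvature-1 parabola tangent to g at a:

```python
import numpy as np

def g(x):            # target function, g'' = 2 > c = 1
    return x ** 2

def f(a, x, c=1.0):  # family member: touches g at x = a, second derivative c
    return g(a) + 2 * a * (x - a) + 0.5 * c * (x - a) ** 2

xs = np.linspace(-1.0, 1.0, 201)
alphas = np.linspace(-1.0, 1.0, 2001)
envelope = np.max(f(alphas[:, None], xs[None, :]), axis=0)

# sup_a f_a recovers g, up to the discretization of the parameter grid
assert np.max(np.abs(envelope - g(xs))) < 1e-5
```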


In view of the above lemma, we would like to show that by choosing appropriate values of a, b, one can attain functions g_{a,b} which, for a fixed r_0, have prescribed values g_{a,b}(r_0), g'_{a,b}(r_0), and a small enough second derivative.

Define r(u) = 1 − δ_1 u. Note that, after the substitution r → u, almost all of the mass of the Euclidean ball is contained in u ∈ [0, 1] (the thin shell of the Euclidean ball). We now turn to prove the following lemma:

Lemma 3.1.9 Suppose that (u, g_0, g'_0) satisfy 0 ≤ u ≤ 1,

    Ψ(δ_0) − 100 δ_0 δ_1 Ψ'(δ_0) ≤ g_0 ≤ Ψ(δ_0) + 100 δ_0 δ_1 Ψ'(δ_0),

    10 δ_0 δ_1 Ψ'(δ_0) ≤ g'_0 ≤ 100 δ_0 δ_1 Ψ'(δ_0).

Then there exist constants a ∈ [2, 200], b ∈ [−1000, 1000] such that g_{a,b}(r(u)) = g_0, (g_{a,b}(r(u)))' = g'_0 and

    (g_{a,b}(r(t)))'' ≤ δ_0 δ_1 Ψ'(δ_0)  ∀ 0 ≤ t ≤ 1.

Proof: Throughout this proof we always assume u ∈ [0, 1], a ∈ [2, 200] and b ∈ [−1000, 1000].

Let us inspect the function x(a, b, r) defined in (3.46). Differentiating it twice, while recalling that a δ_0 < 1/2, gives us the following fact: there exists C > 0 independent of n such that |∂²x(a, b, r)/∂r²| < C. Consider x(a, b, u) := x(a, b, r(u)). One has

    x_{uu}(a, b, u) = O(δ_1^2)     (3.52)

(here and afterwards, by "O" we mean that the term is smaller than some universal constant times the expression inside the brackets, which is valid as long as u, a, b attain values in the intervals defined above). This implies that for all u ∈ [0, 1],

    x_u(a, b, u) = x_u(a, b, 0) + O(δ_1^2) = a δ_0 δ_1 sin'( arcsin(δ_0(1 + b δ_1)) )(1 + O(δ_0)) + O(δ_1^2) = a δ_0 δ_1 (1 + O(δ_0)),

and so,

    x(a, b, u) = x(a, b, 0) + a δ_0 δ_1 u (1 + O(δ_0)) = δ_0 + δ_0 δ_1 (au + b)(1 + O(δ_0)).

Let us now define w(a, b, u) = (1/δ_1)( x(a, b, r(u))/δ_0 − 1 ). So,

    w(a, b, u) = (au + b)(1 + O(δ_0))     (3.53)

and

    w_u(a, b, u) = a(1 + O(δ_0)),   w_{uu}(a, b, u) = O(δ_1/δ_0).     (3.54)

Next, we consider g_{a,b}(r(u)) = Ψ(x(a, b, u)) = Ψ(δ_0(1 + δ_1 w(a, b, u))). We have

    Ψ(x(a, b, u)) = Ψ(δ_0) + δ_0 δ_1 Ψ'(δ_0) w(a, b, u) + (δ_0^2 δ_1^2 / 2) Ψ''(t) w(a, b, u)^2     (3.55)


for some t ∈ [δ_0, x(a, b, u)]. But note that the following holds:

    (log Ψ'(v))' = Ψ''(v)/Ψ'(v) = − ( (n−3) v (1 − v^2)^{(n−5)/2} ) / (1 − v^2)^{(n−3)/2} = − v(n−3)/(1 − v^2),     (3.56)

and for all v ∈ [δ_0/2, 2δ_0],

    (log Ψ'(v))' = O(n δ_0).

Integration of this estimate yields that for t such that t − δ_0 = O(δ_0 δ_1), one has

    log Ψ'(t) − log Ψ'(δ_0) = O(n δ_0^2 δ_1),

or,

    Ψ'(t) = Ψ'(δ_0)(1 + O(n δ_0^2 δ_1)).     (3.57)

Combining (3.56) and (3.57) gives

    δ_0^2 δ_1^2 Ψ''(t) = O( Ψ'(δ_0) δ_1^2 n δ_0^3 ) = o( Ψ'(δ_0) δ_0 δ_1 ).     (3.58)

This finally gives

    g_{a,b}(r(u)) = Ψ(x(a, b, u)) = Ψ(δ_0) + δ_0 δ_1 Ψ'(δ_0) w(a, b, u)(1 + o(1))     (3.59)

    = Ψ(δ_0) + δ_0 δ_1 Ψ'(δ_0)(au + b)(1 + o(1)).

Next we estimate the derivative of Ψ(x(a, b, u)). We have

    (∂/∂u) g_{a,b}(r(u)) = (∂/∂u) Ψ(x(a, b, u)) = Ψ'(x(a, b, u)) x_u(a, b, u) = Ψ'(x(a, b, u)) δ_0 δ_1 w_u(a, b, u).     (3.60)

And using (3.57),

    (∂/∂u) Ψ(x(a, b, u)) = Ψ'(δ_0)(1 + o(1)) δ_0 δ_1 w_u(a, b, u) = (a δ_0 δ_1 Ψ'(δ_0))(1 + o(1)).     (3.61)

Using the continuity of Ψ and x(a, b, u), we can now conclude the following: for any fixed b ∈ [−1000, 1000] and u ∈ [0, 1], an inspection of equation (3.61) teaches us that when a varies in [2, 200], (∂/∂u)Ψ(x(a, b, u)) can attain all values in the range [3 δ_0 δ_1 Ψ'(δ_0), 100 δ_0 δ_1 Ψ'(δ_0)]. An inspection of equation (3.59) shows that afterwards, by letting b vary in [−1000, 1000], g_{a,b}(r(u)) will attain all values in [Ψ(δ_0) − 100 δ_0 δ_1 Ψ'(δ_0), Ψ(δ_0) + 100 δ_0 δ_1 Ψ'(δ_0)]. To estimate the second derivative, g''_{a,b}, we write

    (∂²/∂u²) Ψ(x(a, b, u)) = Ψ''(x(a, b, u)) δ_0^2 δ_1^2 w_u^2(a, b, u) + δ_0 δ_1 Ψ'(x(a, b, u)) w_{uu}(a, b, u)

(using (3.54) and (3.58))

    = o(δ_0 δ_1 Ψ'(δ_0)) + O(δ_1^2 Ψ'(δ_0)) = o(δ_0 δ_1 Ψ'(δ_0)).

This completes the proof of the lemma.


We are now ready to prove the main lemma of the section.

Proof of lemma 3.1.7:

Define

    f_i(r(u)) = Ψ(δ_0) + C_i δ_0 δ_1 Ψ'(δ_0)(u + 1)^2

with C_1 = 20, C_2 = 40. Using lemmas 3.1.9 and 3.1.8, there exist two subsets I_1, I_2 of [2, 200] × [−1000, 1000] such that the bodies T_i = K_{I_i} constructed in (3.48) satisfy (3.42). Also, (3.41) is satisfied, since it is satisfied for T_{a,b} for all (a, b) ∈ [2, 200] × [−1000, 1000], as we have seen.

3.1.3 Tying up Loose Ends

Proof of theorem 3.1.2:

Use lemma 3.1.7 to build the two bodies T_i. Let U_θ be an orthogonal transformation which sends e_1 to θ. Define T_i(θ) = U_θ(T_i) (the choice of orthogonal transformation does not matter because the T_i are bodies of revolution around e_1). Define the functions g_i = g_{T_i} as in (3.38). Let m_1 = n^ε / g_1(1), with ε > 0 to be chosen later, and define m_2 = 2m_1. Then (3.42) implies that

    m_2 g_2(r) = m_1 g_1(r) + m_1 g_1(1),  ∀r ∈ [1 − n^{−0.99}, 1].     (3.62)

Now, let K_i = K_{T_i, m_i} be the random bodies we constructed in section 2. For a fixed x ∈ D^n, as in (3.39), we have

    f_i(x) = P(x ∈ K_i) = e^{−m_i σ({θ ; x ∉ T_i(θ)})} = e^{−m_i g_i(|x|)}.     (3.63)

Now, (3.62) and (3.63) give

    f_1(x)/f_2(x) = e^{m_1 g_1(1)} = e^{n^ε}     (3.64)

for all x with |x| ∈ [1 − n^{−0.99}, 1].

Let us choose ε small enough so that

    m_2 g_2(1) < n^{ε_0},

where ε_0 is the constant from (3.9). Clearly, this ensures that (3.9) holds for both random bodies K_i. Now, ε can be made yet smaller, so that concentration properties of the Euclidean ball give us

    ∫_{D^n} f_i = (1 + SE(n)) ∫_{D^n \ (1−n^{−0.99})D^n} f_i     (3.65)

for i = 1, 2. Clearly, the above can still be satisfied for some universal constant ε > 0 as long as n is large enough. Next, (3.64) and (3.65) imply that

    ∫_{D^n} f_1 / ∫_{D^n} f_2 = (1 + SE(n)) e^{n^ε},


and so (again, taking ε to be small enough) one gets

    ∫_{D^n} | f_1/∫_{D^n} f_1 − f_2/∫_{D^n} f_2 | dx = ∫_{(1−n^{−0.99})D^n} | f_1/∫_{D^n} f_1 − f_2/∫_{D^n} f_2 | dx + SE(n) = SE(n).

Now use Lemma 3.1.6 to get that

    d_{TV}(P_1, P_2) = SE(n).     (3.66)

Denote R = (1/2) e^{n^ε}. Then,

    E[Vol(K_1)] = (1 + SE(n)) 2R E[Vol(K_2)].

Suppose, by way of contradiction, that there exists a classification function F : Ω → R that determines the volume of a body K up to a constant e^{n^{ε/2}} with probability 0.52. Denote L = [E[Vol(K_1)]/R, R E[Vol(K_1)]]. Note that, using (3.10), the "correctness" of the function implies that

    P_1(F(p) ∈ L) ≥ 0.51.

Define A ⊂ Ω by A = {p ∈ Ω ; F(p) ∈ L}. Then P_1(A) > 0.51, and (3.66) implies that also P_2(A) > 0.51. But this means that

    P_2(F(p) ∈ L) > 0.5.

But clearly, again, (3.10) implies that with probability 1 + SE(n), the volume of K_2 is not in L. This contradicts the existence of such a function F.

We still have to generalize the above in two aspects: to an even smaller probability of estimating the volume, and to the possibility that the algorithm is non-deterministic. Upon inspection of the proof above, we notice that it can easily be extended in the following way: instead of taking just two families of random bodies, K_1 and K_2, one may take d different families, d > 2, which are all indistinguishable by the algorithm and have different volumes. The proof can be stretched as far as d = e^{n^{ε/2}}. To deal with non-deterministic algorithms, we will use Yao's lemma (see [RV], Lemma 11). Let us generate an index i uniformly distributed in {1, ..., d}, then a body K from the family K_i, and then a sequence of uniformly distributed random points in K. Following the lines of the above proof, we see that every deterministic algorithm, given this sequence, will be incorrect in estimating the volume of K with probability (at least) 1 − 1/d + SE(n). It follows from Yao's lemma that every non-deterministic algorithm will be incorrect with the same probability for at least one of the families K_i. This finishes the proof of the theorem.

3.2 Estimation of the inverse covariance matrix

The problem of estimating the population covariance matrix given a sample of n i.i.d. observations

X1, ..., Xn in Rd has been extensively studied. Estimation of covariance matrices plays a key role in

many data analysis techniques (e.g. in principal component analysis, discriminant analysis, graphical


models).

It has been shown in [ALPT] that the empirical covariance matrix gives a good approximation when n = Ω(d). In the case n < d, it is clear that the empirical covariance matrix cannot give a good approximation for the population covariance matrix, since it is not of full rank. However, a priori, we could hope that other approximation schemes may still work. Later in this section, we will see that this is not the case.

An easier goal than approximating the entire covariance matrix A would be to approximate a single entry of A^{−1}. The latter has a rather natural interpretation: given a multivariate gaussian random vector Y = (Y_1, ..., Y_d) and two indices i, j, define

    α_{i,j} = lim_{ε→0} E[ Y_i Y_j | |Y_k| < ε, ∀k ∉ {i, j} ].     (3.67)

One may interpret the quantity α_{i,j} as the effective correlation between Y_i and Y_j, in the sense that it neutralizes the effect of correlation with a third variable Y_k, k ∉ {i, j}. Now, it is easily seen that there is a simple relation between the numbers α_{i,j} and the matrix A^{−1}, namely,

    ( α_{i,i}  α_{i,j} ; α_{j,i}  α_{j,j} )^{−1} = ( (A^{−1})_{i,i}  (A^{−1})_{i,j} ; (A^{−1})_{j,i}  (A^{−1})_{j,j} ).
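The displayed identity is the standard Gaussian fact that the conditional covariance of (Y_i, Y_j), given that the remaining coordinates vanish, is a Schur complement of A, which in turn equals the inverse of the corresponding 2×2 block of A^{−1}. A quick numerical check on a random positive-definite matrix (the dimension below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)           # a positive-definite covariance matrix

E = [0, 1]                             # the two distinguished coordinates
R = list(range(2, d))                  # coordinates conditioned to be zero

# Conditional covariance of (Y_0, Y_1) given Y_k = 0, k in R: Schur complement
cond_cov = (A[np.ix_(E, E)]
            - A[np.ix_(E, R)] @ np.linalg.inv(A[np.ix_(R, R)]) @ A[np.ix_(R, E)])

# ...whose inverse is the corresponding 2x2 block of A^{-1}
block = np.linalg.inv(A)[np.ix_(E, E)]
assert np.allclose(np.linalg.inv(cond_cov), block)
```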

As an example, if the indices represent a set of genes and the quantity Y_i represents presence or absence of the i-th gene, biologists are often interested in knowing whether or not a certain correlation between the presence of two different genes is due to the fact that both genes depend on a third gene. The number α_{i,j} estimates whether these two genes are directly correlated, rather than both being correlated with a third gene.

The goal of this short note is to introduce an information-theoretic lower bound for the above ques-

tion, and show that the number of samples needed in order to estimate the numbers αi,j is essentially

the same as the minimum number of samples needed to estimate the entire population covariance matrix

using the empirical covariance matrix.

Before we formulate the result, let us introduce some notation. Fix a dimension d, and consider the Euclidean space R^d and its standard basis e_1, ..., e_d. Denote by B the unit Euclidean ball and define E = span{e_1, e_2}. For a symmetric matrix A ∈ GL(d), define C_E(A) to be the covariance matrix of the uniform distribution on the 2-dimensional ellipse AB ∩ E (the entries of the matrix C_E(A) will be the numbers α_{1,1}, α_{1,2}, α_{2,2} defined in (3.67)).

Let X_1, ..., X_n be independent samples of the standard gaussian distribution in R^d. We prove the following theorem:

Theorem 3.2.1 Suppose n < d/2. There does not exist a function F satisfying

    P( F(AX_1, ..., AX_n) = rank(C_E(A)) ) > 0.9     (3.68)


for all A ∈ GL(d).

In other words, given d/2 samples or fewer, not only can we not approximate the constants α_{i,j}, we cannot even determine the rank of the matrix C_E(A) with reasonable probability.

The idea of the proof is the following: we construct two multivariate gaussian random vectors X, Y with covariance matrices A_X, A_Y such that rank C_E(A_X) ≠ rank C_E(A_Y), while the total variation distance between two sequences of n samples from X and from Y is rather small. A small total variation distance implies that for every function F, the total variation distance between the random variables F(X_1, ..., X_n) and F(Y_1, ..., Y_n) will also be rather small, which means that F cannot distinguish between the two.

It is interesting to inspect the result of this section in view of some positive results concerning the

estimation of the covariance matrix which appeared recently. The results provide methods to approxi-

mate the covariance matrix and its inverse when some extra assumptions about the distribution of X can

be made. For example, when the covariance matrix is assumed to be rather sparse, some methods can

be used in order to estimate the inverse matrix given a rather small number of samples. See for example

[BLRZ], [Ver], [LV] and references therein.

Acknowledgements I would like to thank Roman Vershynin for introducing me to the question and for fruitful discussions.

3.2.1 Proof of the theorem

To prove the theorem, we assume by contradiction that there exists a function F : (R^d)^n → {0, 1, 2} satisfying (3.68).

We begin with the construction of two gaussian vectors in R^d: let X_1, ..., X_n, Y_1, ..., Y_n be independent samples of a standard gaussian vector, and let θ be a random variable uniformly distributed on the unit sphere. Define (with a slight abuse of notation) Y_i = Proj_{θ⊥} Y_i. Clearly, Y_1, ..., Y_n are independent samples of some (random) distribution. Moreover, since 〈Y_1, θ〉 = 0, it is clear that the corresponding matrix C_E(A) is of rank 1 whenever θ ∉ E^⊥.

Our first step is to show, under the assumption of the existence of F, that there also exists a function G satisfying (3.68) which is invariant under the action of SO(d). To this end, let T be a random orthogonal matrix distributed uniformly according to the Haar measure on SO(d). The rotation invariance of the sequences and (3.68) imply

    P( F(T(X_1), ..., T(X_n)) = 2 ) > 0.9,   P( F(T(Y_1), ..., T(Y_n)) = 1 ) > 0.9.     (3.69)


Therefore, denoting

    G(Z_1, ..., Z_n) =  2  if E_T[F(T(Z_1), ..., T(Z_n))] ≥ 3/2,
                        1  if 1/2 ≤ E_T[F(T(Z_1), ..., T(Z_n))] < 3/2,
                        0  otherwise,

it is easily checked that G will satisfy

    P( G(T(X_1), ..., T(X_n)) = 2 ) > 0.8,   P( G(T(Y_1), ..., T(Y_n)) = 1 ) > 0.8.     (3.70)

The total variation distance between two random variables X, Y with values in W is defined as

    d_{TV}(X, Y) = sup_{A ⊂ W} |P(X ∈ A) − P(Y ∈ A)|.

Equation (3.70) implies that

    d_{TV}( G(X_1, ..., X_n), G(Y_1, ..., Y_n) ) > 0.6.     (3.71)

Since G is invariant under rotations, and since one can always choose an orthogonal transformation T such that

    T(X_i) ∈ span{e_1, ..., e_i},  ∀1 ≤ i ≤ n,

it is clear that the function G must only depend on the Gram matrix of the samples. So,

    d_{TV}( G(X_1, ..., X_n), G(Y_1, ..., Y_n) ) ≤ d_{TV}( Gr(X_1, ..., X_n), Gr(Y_1, ..., Y_n) ),

where Gr(·) denotes the Gram matrix. Clearly,

    Gr(X_1, ..., X_n) ∼ W_n(Id, d),   Gr(Y_1, ..., Y_n) ∼ W_n(Id, d − 1),

where W_n(C, p) is the Wishart distribution of dimension n with p degrees of freedom and covariance matrix C. Our task is therefore to estimate

    d_{TV}( W_n(Id, d − 1), W_n(Id, d) )

(where the above random matrices are independent). It is well known that a random matrix A ∼ W_n(Id, p) has the following density with respect to the Lebesgue measure on R^{n^2}:

    f_{n,p}(A) := det(A)^{(p−n−1)/2} exp(−(1/2) Trace(A)) / ( 2^{pn/2} π^{n(n−1)/4} ∏_{i=1}^n Γ((p + 1 − i)/2) ).

Denote the measure expressing the law of A by µ_{n,p}. We would like to estimate the total variation distance between µ_{n,d} and µ_{n,d−1}. To this end, we write

    d_{TV}( W_n(Id, d − 1), W_n(Id, d) ) = (1/2) ∫ | f_{n,d−1}(A) − f_{n,d}(A) | dλ(A)


(where λ is the Lebesgue measure on R^{n^2})

    = (1/2) ∫ | 1 − det(A)^{1/2} / ∫ det(A)^{1/2} dµ_{n,d−1}(A) | dµ_{n,d−1}(A)

    ≤ (1/2) √( ∫ ( 1 − det(A)^{1/2} / ∫ det(A)^{1/2} dµ_{n,d−1} )^2 dµ_{n,d−1} ).

Let X be a random variable such that E[|X|^4] exists. It follows from Lyapunov's inequality that

    E[|X|^4] / E[|X|] ≥ ( E[|X|^2] / E[|X|] )^3.

So,

    E[|X|^4] − E[|X|^2]^2 ≥ E[|X|^2]^3 / E[|X|]^2 − E[|X|^2]^2,

and so,

    Var[|X|^2] / E[|X|^2]^2 ≥ Var[|X|] / E[|X|]^2.

It follows that

    d_{TV}( W_n(Id, d − 1), W_n(Id, d) ) ≤ (1/2) √( Var[det(W_n(Id, d − 1))] ) / E[det(W_n(Id, d − 1))].     (3.72)

As shown in [DMO], Theorem 4.4, one has

    Var[det(W_n(Id, d − 1))] = ( (d−1)! / (d−1−n)! ) ( (d+1)! / (d+1−n)! − (d−1)! / (d−1−n)! )

and

    E[det(W_n(Id, d − 1))] = (d−1)! / (d−1−n)!.

So,

    d_{TV}( W_n(Id, d − 1), W_n(Id, d) ) ≤ (1/2) √( (d+1)!(d−1−n)! / ((d+1−n)!(d−1)!) − 1 ) = (1/2) √( d(d+1) / ((d−n)(d−n+1)) − 1 ).

The above expression is smaller than 0.6 whenever n < d/2. This contradicts (3.71), and the proof is finished.
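For concreteness, the final closed-form bound can be evaluated numerically (the values of d and n below are illustrative):

```python
import math

def tv_bound(d, n):
    """The final upper bound 0.5 * sqrt(d(d+1) / ((d-n)(d-n+1)) - 1)."""
    return 0.5 * math.sqrt(d * (d + 1) / ((d - n) * (d - n + 1)) - 1)

# The bound shrinks as the number of samples n decreases relative to d...
assert tv_bound(100, 10) < tv_bound(100, 30) < tv_bound(100, 49)
# ...and is already small when n is a small fraction of d.
assert tv_bound(1000, 100) < 0.25
```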

Remark 3.2.2 It is easy to see that when n ≪ d, the function F cannot do much better than being correct with probability 1/3; hence, it cannot do better than guessing the rank of C_E(A).

Remark 3.2.3 Following the same lines of proof, one can also show that the correlation between two coordinates cannot be approximated when conditioning on all but k coordinates being zero, whenever k is small enough.


Chapter 4

Extremal points and the convex hull of a random walk and Brownian motion

In this chapter, using some techniques related to concentration of mass for high-dimensional convex bodies, we derive asymptotics for the probability that the origin is an extremal point of a random walk in R^n. Let us formulate our results.

4.1 Results

Fix a dimension n ∈ N. For a set K ⊂ R^n, we denote by ∂K its boundary, by Int(K) its interior, and by conv(K) its convex hull. Let t_1 ≤ ... ≤ t_N be a Poisson point process on [0, 1] with intensity α, and let B(t) be an n-dimensional standard brownian motion. Define X_0 = 0, X_i = B(t_i). We call X_1, ..., X_N a random walk in R^n. We say that the origin is an extremal point of this random walk if 0 ∈ ∂K, where K := conv({X_0, X_1, ..., X_N}).

Denote by p(n, α) the probability that the origin is an extremal point of the random walk X_0, X_1, ..., X_N. For n ∈ N, note that p(n, α) is a decreasing function of α, and denote by α(n) the smallest number α such that p(n, α) ≤ 1/2. Our aim in this note is to prove the following asymptotic bound:

Theorem 4.1.1 With α(n) defined as above, one has

    e^{cn/log n} < α(n) < e^{Cn log n}

for some universal constants c, C > 0.
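Although the theorem concerns high dimensions, the monotonicity of p(n, α) in α is easy to observe numerically in the plane, where the origin is extremal precisely when all walk points fit in an open halfplane through 0 (equivalently, their angles leave a gap larger than π). A Monte Carlo sketch with hypothetical parameters, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def origin_extremal_prob(alpha, n_dim=2, trials=2000):
    """Monte Carlo estimate of p(2, alpha): the origin is extremal for the
    walk X_i = B(t_i), with t_i a Poisson process of intensity alpha on [0, 1]."""
    hits = 0
    for _ in range(trials):
        N = rng.poisson(alpha)
        if N == 0:
            hits += 1
            continue
        t = np.sort(rng.uniform(0.0, 1.0, N))
        dt = np.diff(np.concatenate(([0.0], t)))
        pts = np.cumsum(rng.standard_normal((N, n_dim)) * np.sqrt(dt)[:, None], axis=0)
        ang = np.sort(np.arctan2(pts[:, 1], pts[:, 0]))
        gaps = np.diff(np.concatenate((ang, [ang[0] + 2 * np.pi])))
        # 0 is extremal iff all points fit in an open halfplane through 0
        hits += gaps.max() > np.pi
    return hits / trials

assert origin_extremal_prob(50.0) < origin_extremal_prob(2.0)
```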

Following rather similar lines, one can also prove that the same asymptotics are correct for the

standard random walk on Zn. Namely, one can prove the following result:

Theorem 4.1.2 Let S_1, ..., S_N be the standard random walk on Z^n. Define

    N(n) = min{ N ∈ N | P( 0 is an extremal point of conv{S_1, ..., S_N} ) ≤ 1/2 }.



Then,

    e^{cn/log n} < N(n) < e^{Cn log n}

for some universal constants c, C > 0.

The latter theorem may, in fact, be more interesting for probabilists than the former. Nevertheless, we choose to omit some of the details of its proof, since it is more involved than the proof of theorem 4.1.1 and the two proofs share the same ideas. We will provide an outline of the proof along with some remarks about the further technical work needed to complete it.

Remark 4.1.3 By means of the so-called reflection principle, it may be shown that for a 1-dimensional simple random walk, the probability to remain non-negative after N steps is of the order 1/√N. The expectation of the first time it becomes negative is therefore infinite. It follows that the expectation of the first time that the convex hull of a random walk in any dimension contains the origin in its interior is also infinite.
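The 1/√N rate quoted in the remark can be checked exactly in small cases: by the reflection principle, P(S_1 ≥ 0, ..., S_{2n} ≥ 0) = C(2n, n)/4^n ≈ 1/√(πn). A small dynamic-programming sketch:

```python
from math import comb, pi, sqrt

def stay_nonneg_prob(N):
    """P(S_1 >= 0, ..., S_N >= 0) for a simple +/-1 random walk, by DP."""
    dist = {0: 1.0}
    for _ in range(N):
        nxt = {}
        for pos, p in dist.items():
            for step in (-1, 1):
                if pos + step >= 0:          # discard paths that go negative
                    nxt[pos + step] = nxt.get(pos + step, 0.0) + 0.5 * p
        dist = nxt
    return sum(dist.values())

n = 30
p = stay_nonneg_prob(2 * n)
assert abs(p - comb(2 * n, n) / 4 ** n) < 1e-12     # reflection-principle formula
assert 0.5 / sqrt(pi * n) < p < 2 / sqrt(pi * n)    # ~ c / sqrt(N) decay
```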

A corollary of the above result concerns covering times of the spherical brownian motion. We define S^{n−1} = {x ∈ R^n, |x| = 1}, | · | being the standard Euclidean norm. Given a standard brownian motion B(t) in R^n, n > 2, the function θ(t) = B(t)/|B(t)| is almost surely defined for all t > 0. By the Dambis / Dubins-Schwarz theorem, there exists a non-decreasing (random) function T(·) such that θ(T(·)) is a strong Markov process whose quadratic variation at time t is equal to (n − 1)t. We refer to the process θ(T(t)) as a spherical brownian motion (or a brownian motion on S^{n−1}). Furthermore, we denote by d(·, ·) the geodesic distance on S^{n−1}, equipped with the standard metric. The ε-neighbourhood of a point x ∈ S^{n−1} is defined as ν_x(ε) = {y ∈ S^{n−1}, d(x, y) < ε}. We say that a set A ⊂ S^{n−1} is an ε-covering of the sphere if ⋃_{x∈A} ν_x(ε) = S^{n−1}.

Let us now consider the following question: given a brownian motion on S^{n−1}, how long does it typically take until the path is not contained in an open hemisphere? Equivalently, how long does it take for a brownian motion to be a π/2-covering of the sphere? Covering times of random walks and brownian motions in different settings are a subject that has been widely studied in the past decades (see e.g., [Ad], [DPRZ], [M] and references therein). Matthews [M] studied the ε-cover time for brownian motion on an n-dimensional sphere. In his work, he considers the asymptotics as ε tends to zero while the dimension is fixed.

One motivation for the study of covering times on the sphere is a technique for viewing multidimensional data developed by Asimov [As], known as the Grand Tour. In this technique, a high dimensional object (usually, a measure on R^n) is analyzed through visual inspection of its projections onto subspaces of small dimension. When considering one-dimensional marginals, the set of directions may be taken from the range of a spherical brownian motion. In this case, one may be interested in estimating how long it should take for the brownian motion to visit a certain neighbourhood of all possible directions


on the sphere, thus indicating that the set of inspected marginals is rather dense.

Let E(n) be the expected time until the spherical brownian motion is a π/2-covering of the sphere; in other words,

    E(n) = E[ inf{ t > 0 ; 0 is in the interior of conv(SB_n(s); 0 < s ≤ t) } ],

where SB_n(s) is a brownian motion on S^{n−1}. A corollary of our bounds for α(n) is a corresponding bound for the asymptotics of E(n), as n goes to infinity. Namely,

Corollary 4.1.4 There exists a universal constant C > 0 such that

    1/(C log n) < E(n) < C log n,  ∀n ≥ 1.

The above corollary and the work of Matthews complete each other in a certain sense: the asymptotics derived by Matthews for the ε-covering time, as ε → 0, is roughly √n ε^{−(n−3)} log(ε^{−1}). In other words, for small ε, the time is exponential in the dimension. Our result therefore suggests a rather significant phase shift as ε approaches π/2.

Another possible application of the last corollary is related to the following illumination problem: a high dimensional convex object (say, a planet) is rotating randomly. A single light source is located very far from the object. How long will it take until every point on the surface of the object has been illuminated at least once?

The organization of the rest of this chapter is as follows: the lower bound of theorem 4.1.1 will be proven in section 4.2 and the upper bound in section 4.3. Section 4.4 is devoted to filling in some of the missing details of the proof of theorem 4.1.2. In section 4.5, we prove corollary 4.1.4. Finally, in section 4.6, we list some further facts that can be derived using the same methods of proof and raise some questions for possible further research.

Throughout this chapter, the symbols C, C', C'', c, c', c'' denote positive universal constants whose values may change between different formulas. We write f(n) = O(g(n)) if there is a positive constant M > 0 such that f(n) < M g(n) for all n, and we write f(n) = o(g(n)) if f(n)/g(n) → 0 as n → ∞. Given a subset A ⊂ R^n, by conv(A) we denote the convex hull of A. Given two random variables X and Y, the notation X ∼ Y is to say that the two variables have the same distribution. For a random vector X ∈ R^n, we denote its barycenter by b(X) := E[X] and its covariance matrix by Cov(X) := E[(X − b(X)) ⊗ (X − b(X))].

I would like to express my thanks to Itai Benjamini for introducing me to this question.

4.2 The Lower Bound

The aim of this section is to prove the following bound:


Theorem 4.2.1 There exists a universal constant c > 0 such that the following holds: Suppose α < e^{cn/log n}. Let B(t) be a standard brownian motion in R^n. Then,

    P( 0 is in the interior of conv(B(t) | α^{−1} ≤ t ≤ 1) ) < 0.1.     (4.1)

In particular, if t_1 ≤ ... ≤ t_N are points generated according to a poisson process on [0, 1] with intensity cα, independently of B(t), then

    P( 0 is an extremal point of the set {B(0), B(t_1), ..., B(t_N)} ) > 1/2.     (4.2)

Before we begin the proof, we will need the following ingredient: recall Bernstein's inequality, the proof of which can be found in [U, Chapter 9]:

Theorem 4.2.2 (Bernstein's inequality) Let X_1, ..., X_n be independent random variables. Suppose that for some L > 1 and every integer k > 0,

    E[ |X_i − E[X_i]|^k ] < (E[X_i^2]/2) L^{k−2} k!.     (4.3)

Then,

    P( | ∑_{i=1}^n (X_i − E[X_i]) | > 2t √( ∑_{i=1}^n Var[X_i] ) ) < e^{−t^2}

for every 0 < t < √(∑_{i=1}^n Var[X_i]) / (2L).
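As a sanity check, the tail bound can be compared with simulation for bounded variables, which satisfy (4.3) for a suitable L; the sample sizes and the value of t below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials, t = 1000, 5000, 2.0

# X_i uniform on [-1, 1]: mean 0, variance 1/3; bounded, so (4.3) holds for some L
X = rng.uniform(-1.0, 1.0, size=(trials, n))
sums = X.sum(axis=1)
threshold = 2 * t * np.sqrt(n / 3.0)     # 2t * sqrt(sum of the variances)

empirical = np.mean(np.abs(sums) > threshold)
assert empirical < np.exp(-t ** 2)       # the Bernstein tail e^{-t^2}
```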

Proof of theorem 4.2.1:

First of all, we note that equation (4.2) follows easily from equation (4.1). Indeed, by a small enough choice of the constant c, we can make sure that with probability at least 3/4, none of the points t_1, ..., t_N fall inside the interval [0, α^{−1}]. We turn to prove equation (4.1).

By choosing a suitable (small enough) value for the constant c, we may always assume that the dimension n is larger than some universal constant. Define m = ⌊cn/log n⌋, where the value of the constant c > 0 will be chosen later on. Since the probability in equation (4.1) is increasing in α, we may assume that α = 2^{m−1}. Moreover, in order to simplify the formulas below, we note that by a scaling argument we may take the time interval to be [0, 2^{m−1}] (rather than [0, 1]), and show that

    P( 0 is in the interior of conv(B(t) | 1 ≤ t ≤ 2^{m−1}) ) < 1/4.

We will show that with high probability there exists a vector v which demonstrates that the origin is not in the interior, i.e., that 〈B(t), v〉 > 0 for all 1 ≤ t ≤ 2^{m−1}.

The construction of the vector v is as follows. Define

    v_i = B(2^i) − B(2^{i−1})


for i = 0, ..., m − 1, and

    v = (1/√m) ∑_{i=0}^{m−1} v_i / √(E[|v_i|^2]) = (1/√m) ∑_{i=0}^{m−1} v_i / ( √n (√2)^{i−1} ).

Note that the vectors v_i / √(E[|v_i|^2]) are independent, identically distributed gaussian random vectors with expectation 0 and covariance matrix (1/n) Id. It follows that the vector v is also a gaussian random vector whose expectation is 0 and whose covariance matrix is equal to (1/n) Id. A calculation then gives

    P( 1/2 < |v| < 2 ) > 1 − e^{−c'n}     (4.4)

for some universal constant c' > 0.

Fix 0 ≤ k ≤ m − 1. Let us inspect the scalar product p = 〈B(2^k), v〉. For all 0 ≤ i ≤ m − 1, we denote v_i = (v_{i,1}, ..., v_{i,n}). Note that both B(2^k) and v are linear combinations of the v_i's with deterministic coefficients, hence p admits the form

    p = ∑_{j=1}^n ∑_{i=0}^{m−1} ∑_{l=0}^{m−1} α_i β_l v_{i,j} v_{l,j}

for some constants {α_i}_{i=0}^{m−1}, {β_l}_{l=0}^{m−1}. Define

    w_j = ∑_{i=0}^{m−1} ∑_{l=0}^{m−1} α_i β_l v_{i,j} v_{l,j},  for j = 1, ..., n.

Clearly, the w_j's are independent and identically distributed, so there exist numbers a, b such that

    w_j ∼ X(aX + bY)     (4.5)

where X, Y are independent standard gaussian random variables.

Our next goal is to calculate the expectation and the variance of w_j. To that end, we may write, for all j = 1, ..., n,

    w_j = ( ∑_{i=0}^k v_{i,j} ) ( (1/√(nm)) ∑_{l=0}^{m−1} v_{l,j} / (√2)^{l−1} ) = (1/√(nm)) ∑_{i=0}^k ∑_{l=0}^{m−1} v_{i,j} v_{l,j} / (√2)^{l−1}.     (4.6)

So,

    E[w_j] ≥ (1/√(nm)) E[v_{k,j}^2] / (√2)^{k−1} = (√2)^{k−1} / √(nm),

which means that

    E[p] ≥ (√2)^{k−1} √n / √m.     (4.7)

Next, in order to estimate Var[w_j], we use (4.6) again to obtain

    E[w_j^2] = (1/(nm)) E[ ( ∑_{i=0}^k ∑_{l=0}^{m−1} v_{i,j} v_{l,j} / (√2)^{l−1} )^2 ]


    = (1/(nm)) [ ∑_{i≠l, 0≤i≤k, 0≤l≤m−1} (1/2^{l−1}) E[v_{l,j}^2] E[v_{i,j}^2] + ∑_{i≠l, 0≤i,l≤k} (1/(√2)^{i+l−2}) E[v_{l,j}^2] E[v_{i,j}^2] + ∑_{i=0}^k (1/2^{i−1}) E[v_{i,j}^4] ]

    ≤ (1/(nm)) [ m ∑_{i=0}^k 2^i + 2 ∑_{0≤i≤l≤k} (1/2^{i−1}) E[v_{l,j}^2] E[v_{i,j}^2] + 3 ∑_{i=0}^k 2^i ] < 2^{k+2}/n.

So,

    Var[p] < 2^{k+2}.     (4.8)

Note that E[p] > √(n/(8m)) √(Var[p]) > √(0.1 c^{−1} log n) √(Var[p]).

It follows from representation (4.5), from the fact that a standard gaussian random variable X satisfies E[|X|^p] ≤ p^{p/2} for all p > 1, and from the Cauchy-Schwartz inequality that

    E[ |w_j − E[w_j]|^p ] < (10 Var[w_j])^{p/2} p!,  ∀p ∈ N.     (4.9)

We may therefore invoke theorem 4.2.2 on the random variables wj . Setting t =√

n10m , L = 10

√2k+2

√n

and plugging into (4.3) leads to:

P(|p− E[p]| >√

m

10n

√V ar[p]) < e−

n10m .

Plugging in (4.7) and (4.8) and using the assumption that c can be smaller than any universal constant

gives,

P(p <1

2E[p]) < e−

n10m < n−5.

Define A to be the following event:

    A = { 〈v, B(2^k)〉 > (1/2) √(n/m) (√2)^{k−1},  ∀0 ≤ k ≤ m − 1 }.

Applying a union bound over k = 0, ..., m − 1, we learn that

    P(A) > 1 − 1/n^2.     (4.10)

Recall that the distribution of the maximal value of a Brownian bridge (see e.g., [Ro]) starting at $y = a$ at time $0$ and ending at $y = b$ at time $T$ is,
$$f_{M_{a\to b}(T)}(y) = \mathbf{1}_{y\notin[a,b]}\,\frac{4\left(y - \frac{a+b}{2}\right)}{T}\,e^{-\frac{2}{T}(y-a)(y-b)}. \tag{4.11}$$
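As a consistency check on (4.11): integrating the density from $y\ge\max(a,b)$ recovers the classical bridge-crossing tail $\mathbb{P}(M_{a\to b}(T)\ge y)=e^{-\frac{2}{T}(y-a)(y-b)}$, and integrating from $\max(a,b)$ itself gives total mass $1$. A minimal numerical sketch (the parameter values below are arbitrary):

```python
import math

def bridge_max_density(u, a, b, T):
    # f_{M_{a->b}(T)}(u) from (4.11); the maximum is supported on u >= max(a, b)
    return 4 * (u - (a + b) / 2) / T * math.exp(-2 * (u - a) * (u - b) / T)

def tail_prob(y, a, b, T, upper=12.0, num=200_000):
    # midpoint-rule integral of the density over [y, upper]; the integrand
    # decays like exp(-2 u^2 / T), so a moderate cutoff suffices
    h = (upper - y) / num
    return sum(bridge_max_density(y + (i + 0.5) * h, a, b, T) * h
               for i in range(num))

a, b, T = 0.3, -0.2, 1.0
y = 1.5
print(tail_prob(y, a, b, T), math.exp(-2 * (y - a) * (y - b) / T))
# integrating (4.11) from y numerically matches exp(-2(y-a)(y-b)/T)
```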

Define the events,
$$C_k := \left\{\langle B(t), v\rangle > 0, \ \forall\, 2^k\le t\le 2^{k+1}\right\}.$$
Our next goal is to show that when conditioning on $A$, the probability of $C_k$ is close to one, using the following idea: instead of generating the Brownian motion, one can alternatively generate the points $B(2^k)$ and then "fill in" the missing gaps by independent Brownian bridges. When the event $A$ holds, the endpoints of the bridges $\langle B(t), v\rangle$, $2^k\le t\le 2^{k+1}$, are quite large with respect to the standard deviation of their midpoint, and we may use (4.11).

More formally, let $\tilde B(t)$ be a Brownian bridge such that $\tilde B(0) = \tilde B(1) = 0$, independent of $B(t)$. Define,
$$B_k(t) = B(2^k) + \left(B(2^{k+1}) - B(2^k)\right)t + \sqrt{2^k}\,\tilde B(t).$$
By a representation theorem for the Brownian bridge, the functions $B_k(t)$ and $B(2^k + 2^k t)$ share the same distribution. Moreover, if an event $A$ is measurable with respect to the $\sigma$-algebra generated by the points $B(2^j)$, $0\le j\le m-1$, then the distribution of these two functions is the same, even when conditioned on the event $A$. Therefore, one has,
$$\mathbb{P}(C_k\,|\,A) = \mathbb{P}\left(\langle B_k(t), v\rangle > 0, \ \forall\, 0\le t\le 1\ |\ A\right).$$
Since the maximum of a Brownian bridge is monotone with respect to its endpoints, it follows that
$$\mathbb{P}\left(\langle B_k(t), v\rangle > 0,\ \forall\, 0\le t\le 1\,|\,A\right) > \mathbb{P}\left(\langle \tilde B(t), v\rangle < \sqrt{\frac{n}{8m}},\ \forall\, 0\le t\le 1\right). \tag{4.12}$$
Using (4.11) then yields,
$$\mathbb{P}(C_k\,|\,A) > 1 - \exp\left(-\log n/(8c|v|^2)\right). \tag{4.13}$$
Using the above with (4.4) and choosing $c$ small enough, we get
$$\mathbb{P}(C_k\,|\,A) > 1 - \frac{1}{n^3}.$$
Finally, combining with (4.10) and using a union bound yields,
$$\mathbb{P}\left(\langle B(t), v\rangle > 0,\ \forall\, 1\le t\le 2^{m-1}\right) > \mathbb{P}(A)\left(1 - \sum_{k=1}^m\left(1 - \mathbb{P}(C_k\,|\,A)\right)\right) > 1 - \frac{1}{n}.$$
The proof is complete.

4.3 The Upper Bound

The goal of this section is the proof of the following estimate:

Theorem 4.3.1 There exists a universal constant $C > 0$ such that the following holds: Let $\alpha = e^{Cn\log n}$. Let $t_1\le\ldots\le t_N$ be points generated according to a Poisson process on $[0,1]$ with intensity $\alpha$, and let $B(t)$ be a standard Brownian motion, independent of the point process. Consider the random walk $B(0), B(t_1),\ldots,B(t_N)$. The probability that the origin is an extremal point of this random walk is smaller than $n^{-n}$.

We open the section with some well-known facts concerning the probabilities that random walks and discrete Brownian bridges stay positive. Again let $0\le t_1\le\ldots\le t_N\le 1$ be a Poisson point process on $[0,1]$ with intensity $\alpha$, and let $W(t)$ be a standard one-dimensional Brownian motion. Consider the random walk $W(0), W(t_1),\ldots,W(t_N)$. By slight abuse of notation, for $1\le j\le N$, denote $W(j) = W(t_j)$. Let


us calculate the probability that $W(j)\ge 0$ for all $1\le j\le N$.

Recall the second arcsine law of P. Lévy (see for example [Ro], page 241). Define a random variable,
$$X = \int_0^1 \mathbf{1}_{W(t)<0}\,dt.$$
According to the second arcsine law, $X$ has the same distribution as $(1+C^2)^{-1}$ where $C$ is a Cauchy random variable with parameter $1$. Using the definition of the Poisson distribution, this means that,
$$\mathbb{P}(W(t_i) > 0,\ \forall\, 1\le i\le N) = \mathbb{E}\left[e^{-\alpha(1+C^2)^{-1}}\right] = \frac{1}{\pi}\int_{-\infty}^\infty e^{-\frac{\alpha}{1+x^2}}\frac{1}{1+x^2}\,dx = \frac{2}{\pi}\int_0^{\pi/2} e^{-\alpha\cos^2 t}\,dt =$$
$$\frac{1}{\pi}\int_0^1 e^{-\alpha w}\frac{1}{\sqrt{w(1-w)}}\,dw = \frac{1}{\pi\sqrt\alpha}\int_0^\alpha e^{-s}\frac{1}{\sqrt{s\left(1-\frac{s}{\alpha}\right)}}\,ds.$$
It is easy to check that the latter integral has a limit as $\alpha\to\infty$. Consequently,
$$\mathbb{P}(W(t_i) > 0,\ \forall\, 1\le i\le N) = \frac{1}{\sqrt\alpha}\left(\frac{1}{\pi}\int_0^\infty\frac{e^{-s}}{\sqrt s}\,ds\right)\left(1 + o\left(\frac{1}{\alpha}\right)\right) = \frac{1}{\sqrt{\pi\alpha}}\left(1 + o\left(\frac{1}{\alpha}\right)\right). \tag{4.14}$$

Now suppose that $W(t)$ is a Brownian bridge such that $W(0) = W(1) = 0$ and consider the discrete Brownian bridge $W(0), W(t_1),\ldots,W(t_N), W(1)$.

The cyclic shifting principle (see e.g., [B]) is the following observation: for every $0\le s\le 1$, define $\Gamma_s(t) = t + s$, where the sum is to be understood as a sum on the torus $[0,1]$. Then the function $W\circ\Gamma_s(t) - W(s)$ has the same distribution as the function $W(t)$. Now, since there is a.s. exactly one choice of $i$ between $1$ and $N$ such that $W(t_j) - W(t_i)$ is non-negative for every $1\le j\le N$, it follows that for only one choice of $1\le i\le N$, the function
$$W\circ\Gamma_{t_i}(\cdot) - W(t_i)$$
will be positive at all the points $t_j - t_i$, $1\le j\le N$ (where the subtraction is again understood on the torus $[0,1]$). Since the points $t_1,\ldots,t_N$ are independent of the function $W(t)$, it follows that
$$\mathbb{P}(W(t_i)\ge 0,\ \forall\, 1\le i\le N) = \mathbb{E}\left[\frac{1}{N}\right] = \frac{1}{\alpha} + O\left(\frac{1}{\alpha^{3/2}}\right) \tag{4.15}$$
(recall that $N$ is a Poisson random variable with expectation $\alpha$).
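The heart of the cyclic shifting principle, namely that exactly one cyclic re-rooting of a discrete bridge with continuous increments stays non-negative, is easy to demonstrate directly. A minimal sketch (the bridge here is a mean-adjusted Gaussian walk, an arbitrary choice of exchangeable increments summing to zero):

```python
import random

random.seed(7)

def count_nonnegative_shifts(n_steps=60):
    # exchangeable increments with a continuous law, forced to sum to zero
    x = [random.gauss(0, 1) for _ in range(n_steps)]
    mean = sum(x) / n_steps
    x = [inc - mean for inc in x]
    b = [0.0]
    for inc in x:
        b.append(b[-1] + inc)          # bridge values, b[0] = b[n_steps] = 0
    count = 0
    for i in range(n_steps):
        # re-root the path at position i, reading it cyclically on the torus
        if all(b[(i + j) % n_steps] - b[i] >= 0 for j in range(1, n_steps)):
            count += 1
    return count

print([count_nonnegative_shifts() for _ in range(5)])  # -> [1, 1, 1, 1, 1]
```

The re-rooted path is non-negative precisely when position $i$ attains the minimum of the bridge, which happens at exactly one index almost surely.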

We now have the necessary ingredients for proving the upper bound.

Proof of Theorem 4.3.1: For $0\le s_1 < \ldots < s_n\le 1$, $s = (s_1,\ldots,s_n)$, define $F_s$ to be the convex hull of $B(s_1),\ldots,B(s_n)$. This is a.s. an $(n-1)$-dimensional simplex. Let $E_s$ be the measure zero event that $F_s$ is a facet of the boundary of the convex hull of the random walk. Our aim is to show that with high probability, none of the events $E_s$ with $s_1 = 0$ holds, which means that the origin is not a vertex of any facet of the convex hull.

For a point $s$ defined as above, we define $r(s) = (r_1,\ldots,r_n)$ by $r_1 = s_1$, $r_i = s_i - s_{i-1}$ for $2\le i\le n$. The point $r(s)$ lives in the $n$-dimensional simplex, which we denote by $\Delta_n$. Analogously, for a point $r\in\Delta_n$ define by $s(r)$ the corresponding point $s = (s_1,\ldots,s_n)$. By slight abuse of notation we will also write $E_r$ and $F_r$, allowing ourselves to interchange freely between $s$ and $r$.

Denote by $W_r$ the measure zero event that the point $r\in\Delta_n$ is also in the Poisson process (hence the event that all the points $r_1, r_1+r_2, \ldots, r_1+\ldots+r_n$ are in the set $\{0, t_1,\ldots,t_N\}$).

For a Borel subset $A\subset\Delta_n$, define
$$\mu(A) = \mathbb{E}\left[\sum_{r\in A}\mathbf{1}_{E_r}\right],$$
the expected number of facets $F_r$ with $r\in A$, and
$$\nu(A) = \mathbb{E}\left[\sum_{r\in A}\mathbf{1}_{W_r}\right].$$
Clearly $\mu$ and $\nu$ are $\sigma$-additive, and $\mu\ll\nu$. Denote
$$p_n(r) = \frac{d\mu}{d\nu}(r), \quad \forall r\in\Delta_n.$$
So $p_n(r)$ can be understood as $\mathbb{P}(E_r\,|\,W_r)$.

Define $\tilde\Delta_n = \Delta_n\cap\{r_1 = 0\}$ and,
$$D = \{r = (r_1,\ldots,r_n)\in\Delta_n\ |\ r_i > 0,\ \forall\, 2\le i\le n\}.$$
Let $s = (s_1,\ldots,s_n)$ and $\varepsilon > 0$ be such that $s_i - s_{i-1} > \varepsilon$ for all $2\le i\le n$. Define
$$Q = r\left(\{(x_1,\ldots,x_n);\ x_i\in[s_i, s_i+\varepsilon],\ \text{for } i = 1,\ldots,n\}\right).$$
Then, by the independence of the number of Poisson points on disjoint intervals,
$$\nu(Q) = \mathbb{E}\left[\prod_{i=1}^n \#\{j;\ t_j\in[s_i, s_i+\varepsilon]\}\right] = (\varepsilon\alpha)^n.$$
By the $\sigma$-additivity of $\nu$, it follows that for a measurable $A\subset\Delta_n\setminus\tilde\Delta_n$,
$$\nu(A\cap D) = \alpha^n \mathrm{Vol}_n(s(A)) = \alpha^n \mathrm{Vol}_n(A),$$
where in the last equality we use the fact that the Jacobian of the map $r\to s(r)$ is identically one. Using analogous considerations on $\tilde\Delta_n$, we get,
$$\nu(A\cap D) = \alpha^n \mathrm{Vol}_n(A) + \alpha^{n-1}\mathrm{Vol}_{n-1}(A\cap\tilde\Delta_n)$$


for all measurable $A\subset\Delta_n$. By the definition of $p_n(r)$,
$$\mu(A) = \alpha^n\int_A p_n(r)\,d\lambda_n(r) + \alpha^{n-1}\int_{A\cap\tilde\Delta_n} p_n(r)\,d\lambda_{n-1}(r),$$
for all measurable $A\subset\Delta_n$, where $\lambda_n, \lambda_{n-1}$ are the respective Lebesgue measures.

We would like to obtain an upper bound for $\mu(\tilde\Delta_n)$. Using the above formula, this is reduced to obtaining an upper bound for $p_n(r)$. To that end, we use the following idea: the representation theorem for the Brownian bridge suggests that we may equivalently construct $B(t)$ by first generating the differences $B(s_j) - B(s_{j-1})$ as independent Gaussian random vectors, and then "filling in" the gaps between them by generating a Brownian motion up to $B(s_1)$, a Brownian bridge for each $1 < j\le n$, and a "final" Brownian motion between $B(s_n)$ and $B(1)$, all of the above independent of each other. To make this formal, fix $r\in\Delta_n$ and define $s = s(r)$. For all $i$, $2\le i\le n$, we write,
$$D_i = B(s_i) - B(s_{i-1})$$
and define $C_i: [s_{i-1}, s_i]\to\mathbb{R}^n$ by,
$$C_i(t) = B(t) - B(s_{i-1}) - \frac{t - s_{i-1}}{s_i - s_{i-1}}\left(B(s_i) - B(s_{i-1})\right),$$
the bridges that correspond to the intervals $[s_{i-1}, s_i]$. Finally, we define two functions $B_0: [0, s_1]\to\mathbb{R}^n$ and $B_f: [s_n, 1]\to\mathbb{R}^n$ by $B_0(t) = B(s_1 - t) - B(s_1)$ and $B_f(t) = B(t) - B(s_n)$. By the independence of the increments of a Brownian motion on disjoint intervals and by the representation theorem for the Brownian bridge, it follows that the variables $\{D_i\}_{i=2}^n, \{C_i\}_{i=2}^n, B_0, B_f$ are all independent, each $C_i$ being a Brownian bridge and $B_0$ and $B_f$ being Brownian motions.

Define $\theta_s$ to be a unit normal to $F_s$. Denote,
$$\bar C_i = \langle C_i, \theta_s\rangle, \quad \forall\, 2\le i\le n,$$
and also $\bar B_0 = \langle B_0, \theta_s\rangle$ and $\bar B_f = \langle B_f, \theta_s\rangle$. Since $\theta_s$ is fully determined by $\{D_i\}_{i=2}^n$, it follows that $\{\bar C_i\}_{i=2}^n$, $\bar B_0$ and $\bar B_f$ are independent. Observe that for all $2\le i\le n$, $\bar C_i$ is a one-dimensional Brownian bridge fixed to be zero at its endpoints, and $\bar B_0$ and $\bar B_f$ are one-dimensional Brownian motions starting from the origin.

A moment of reflection reveals that the event $E_s$ is reduced to the intersection of the following conditions for one of the two possible choices of $\theta_s$:

(i) $W_s$ holds.

(ii) For all $2\le i\le n$, the function $\bar C_i$ is non-negative at all points $t_j$ such that $s_{i-1}\le t_j\le s_i$.

(iii) The function $\bar B_0$ is non-negative at all points $t_j$ such that $t_j < s_1$.

(iv) The function $\bar B_f$ is non-negative at all points $t_j$ such that $s_n < t_j\le 1$.


As explained above, $\{\bar C_i\}_{i=2}^n$, $\bar B_0$ and $\bar B_f$ are independent, thus we can estimate $p_n(r)$ using equations (4.14) and (4.15). We get,
$$p_n(r) = \prod_{j=2}^n\frac{1}{\alpha r_j}\cdot\frac{1}{\pi}\cdot\frac{1}{\sqrt{\alpha r_1}\sqrt{\alpha r_{n+1}}}\prod_{j=1}^{n+1}\left(1 + O\left(\frac{1}{\alpha r_j}\right)\right). \tag{4.16}$$
Using the fact that each probability in the product can be bounded by $1$, we see that there exists a constant $c > 0$ such that,
$$p_n(r) < c^n\prod_{j=2}^n\min\left\{\frac{1}{\alpha r_j}, 1\right\}\min\left\{\frac{1}{\sqrt{\alpha r_1}}, 1\right\}\min\left\{\frac{1}{\sqrt{\alpha r_{n+1}}}, 1\right\} =$$
$$\frac{c^n}{\alpha^n}\prod_{j=2}^n\min\left\{\frac{1}{r_j}, \alpha\right\}\min\left\{\frac{1}{\sqrt{r_1}}, \sqrt\alpha\right\}\min\left\{\frac{1}{\sqrt{r_{n+1}}}, \sqrt\alpha\right\}.$$

Now,
$$\mu(\tilde\Delta_n) = \alpha^{n-1}\int_{\tilde\Delta_n} p_n(r)\,d\lambda_{n-1}(r) < \alpha^{n-1}\int_{K_{n-1}} p_n(r)\,d\lambda_{n-1}(r),$$
where $K_{n-1} = \{0\}\times[0,1]^{n-1}$ is the $(n-1)$-dimensional cube. So,
$$\mu(\tilde\Delta_n) < \alpha^{n-1}\frac{c^n}{\alpha^{n-\frac12}}\left(\int_0^1\min\left\{\frac{1}{r},\alpha\right\}dr\right)^{n-1}\int_0^1\min\left\{\frac{1}{\sqrt r},\sqrt\alpha\right\}dr < \frac{c^n}{\sqrt\alpha}\left(\int_0^1\min\left\{\frac{1}{r},\alpha\right\}dr\right)^{n-1}\int_0^1\frac{1}{\sqrt r}\,dr < \frac{(c'\log\alpha)^n}{\sqrt\alpha}.$$

Suppose $\alpha = n^{2Ln}$ with $L > 3$; then
$$\frac{(c'\log\alpha)^n}{\sqrt\alpha} = \frac{(2nLc'\log n)^n}{n^{Ln}} = \left(\frac{2nLc'\log n}{n^L}\right)^n < \left(\frac{2Lc''}{n^{L-2}}\right)^n.$$
We may clearly assume that $n\ge 2$. It follows that there exists a universal constant $C > 0$ such that whenever $L\ge C/2$, we have $\mu(\tilde\Delta_n) < n^{-n}$. Note that the assumption $L\ge C/2$ may be written $\alpha\ge e^{Cn\log n}$. Finally, an application of Markov's inequality then teaches us that in this case, the probability that the origin is a vertex of some facet is smaller than $n^{-n}$, which finishes the proof.

We have now established Theorem 4.1.1.

4.4 The Discrete Setting

The aim of this section is to sketch the proof of Theorem 4.1.2. Fix a dimension $n\in\mathbb{N}$. Let $S_1,\ldots,S_N$ be a standard random walk on $\mathbb{Z}^n$. The following lemma is the discrete analogue of formulas (4.14) and (4.15) derived in the previous section:


Lemma 4.4.1 Suppose $N > 2$. Let $\theta\in S^{n-1}$. Define,
$$\bar S_j := \langle\theta, S_j\rangle, \quad \forall\, 1\le j\le N.$$
The following estimates hold:
$$\mathbb{P}\left(\bar S_j\ge 0,\ \forall\, 1\le j\le N\right) < \frac{10n}{\sqrt N} \tag{4.17}$$
and,
$$\mathbb{P}\left(\bar S_j\ge 0,\ \forall\, 1\le j\le N\ \Big|\ \bar S_N = 0\right) < \frac{2\log^2 N}{N}. \tag{4.18}$$

Proof: The proof of (4.18) follows again from the cyclic shifting principle, explained in the last section. However, it is a bit more involved than in the continuous case, since a discrete random walk can attain its global minimum more than once. Denote by $Z_i$ the event that $\bar S_k = 0$ for exactly $i$ distinct values of $k$, and define,
$$p_i = \mathbb{P}\left(\left\{\bar S_j\ge 0,\ \forall\, 1\le j\le N\right\}\cap Z_i\ \Big|\ \bar S_N = 0\right)$$
and,
$$p = \mathbb{P}\left(\bar S_j\ge 0,\ \forall\, 1\le j\le N\ \Big|\ \bar S_N = 0\right) = \sum_{i=1}^\infty p_i.$$
We now use the following observation: consider a random walk conditioned on attaining a certain value $T\in\mathbb{R}$ exactly $\ell$ times. The probability that $T$ is the global minimum of this random walk is smaller than $2^{-\ell}$, since each of the segments between two such points can be reflected around the value $T$. It follows that,
$$\sum_{i=\lceil\log_2 N\rceil + 2}^\infty p_i\le\sum_{i=\lceil\log_2 N\rceil+2}^\infty 2^{-i+1}\le\frac{1}{N}.$$
By the cyclic shifting principle, described in the previous section, we have $p_i\le i/N$. So,
$$p = \sum_{i=1}^\infty p_i\le\frac{1}{N} + \sum_{i=1}^{\lceil\log_2 N\rceil+2}\frac{i}{N}.$$
Equation (4.18) follows.

We turn to prove (4.17). Denote $\theta = (\theta_1,\ldots,\theta_n)$. Without loss of generality, we can assume that the $\theta_i$'s are all non-negative and decreasing. Define the event,
$$A := \{\bar S_1 = \theta_1\}.$$
Clearly,
$$\mathbb{P}(\bar S_j\ge 0,\ \forall\, 1\le j\le N)\le\mathbb{P}(\bar S_j\ge 0,\ \forall\, 1\le j\le N\,|\,A).$$
Define $M_N = \max_{1\le j\le N}\bar S_j$. From the symmetry of the random walk,
$$\mathbb{P}(\bar S_j\ge 0,\ \forall\, 1\le j\le N\,|\,A) = \mathbb{P}(M_{N-1}\le\theta_1).$$
Observe that once the random walk has gone past $\theta_1$ for the first time, its value is still at most $2\theta_1$. Thus, using the reflection principle, conditioning on the event $M_{N-1} > \theta_1$, we have,
$$\mathbb{P}(\bar S_{N-1} > 2\theta_1\,|\,M_{N-1} > \theta_1)\le\frac12.$$
Therefore,
$$\mathbb{P}(M_{N-1} > \theta_1)\ge 2\,\mathbb{P}(\bar S_{N-1} > 2\theta_1),$$
and so,
$$\mathbb{P}(M_{N-1}\le\theta_1)\le 1 - 2\,\mathbb{P}(\bar S_{N-1} > 2\theta_1) = \mathbb{P}(|\bar S_{N-1}|\le 2\theta_1).$$
Define,
$$\varphi = (\theta_1, 0,\ldots,0)\in\mathbb{R}^n$$
and define a new random walk, $W_j = \langle\varphi, S_j\rangle$. Next we show that for all $a\in\mathbb{R}$,
$$\mathbb{P}(|\bar S_{N-1}| < a)\le\mathbb{P}(|W_{N-1}| < a). \tag{4.19}$$
Indeed, for all $\lambda\in\mathbb{R}$,
$$\mathbb{E}[\exp(\lambda\bar S_{N-1})] = \prod_{j=1}^{N-1}\mathbb{E}[\exp(\lambda(\bar S_j - \bar S_{j-1}))]\ge\prod_{j=1}^{N-1}\mathbb{E}[\exp(\lambda(W_j - W_{j-1}))] = \mathbb{E}[\exp(\lambda W_{N-1})],$$
where the two equalities follow from the independence of the increments. Using the symmetry of these increments gives, for all $\lambda\in\mathbb{R}$,
$$\mathbb{E}[\exp(\lambda\bar S_{N-1}) + \exp(-\lambda\bar S_{N-1})]\ge\mathbb{E}[\exp(\lambda W_{N-1}) + \exp(-\lambda W_{N-1})],$$
which implies (4.19). We are left with estimating $\mathbb{P}(|W_{N-1}|\le 2\theta_1)$. We have,
$$\mathbb{P}(|W_{N-1}| < a) = \sum_{k=0}^{N-1}\binom{N-1}{k}\left(\frac{1}{n}\right)^k\left(\frac{n-1}{n}\right)^{N-1-k}\frac{1}{2^k}\sum_{j=-2}^{2}\binom{k}{\lfloor k/2\rfloor + j} < \frac{10n}{\sqrt N}.$$
This finishes the proof.
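The final binomial sum can be evaluated directly; the sketch below does this for one arbitrary choice of $n$ and $N$ and checks it against the bound $10n/\sqrt N$ (the $2^{-k}$ factor normalises the inner simple-random-walk count into a probability):

```python
import math

def w_small_prob(n, N):
    # P(|W_{N-1}| < 2*theta_1): k of the N-1 steps move along the first
    # coordinate (probability 1/n each), and the resulting k-step +-1 walk
    # must end within two lattice sites of the origin
    total = 0.0
    for k in range(N):
        inner = sum(math.comb(k, k // 2 + j) for j in range(-2, 3)
                    if 0 <= k // 2 + j <= k)
        total += (math.comb(N - 1, k) * (1 / n) ** k
                  * ((n - 1) / n) ** (N - 1 - k) * inner / 2 ** k)
    return total

n, N = 5, 400
print(w_small_prob(n, N), "<", 10 * n / math.sqrt(N))  # the bound in (4.17) holds
```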

Sketch of the proof of Theorem 4.1.2: We begin with the upper bound. We follow the same lines as in the proof of Theorem 4.3.1. The only extra tool needed for the proof of the upper bound is Lemma 4.4.1.

Fix $N\in\mathbb{N}$. For $1\le j\le N$ and $t = \frac jN$, define $B(t) := S_j$. Let $r = (r_1,\ldots,r_n)\in\Delta_n\cap\frac1N\mathbb{Z}^n$, and $t_k = \sum_{j=1}^k r_j$. Define the event $E_r$ in the same manner:
$$E_r := \left\{\mathrm{conv}(B(t_1),\ldots,B(t_n))\ \text{is contained in the boundary of } K, \text{ the convex hull of the walk}\right\}.$$
For $A\subset\Delta_n\cap\frac1N\mathbb{Z}^n$, define
$$F(A) = \mathbb{E}\left[\sum_{r\in A}\mathbf{1}_{E_r}\right].$$
Next, for any $r\in\Delta_n\cap\frac1N\mathbb{Z}^n$, equations (4.17) and (4.18) are used to obtain,
$$\mathbb{P}(E_r) < 100(\log N)^{2n} n^2\prod_{j=2}^n\min\left\{\frac{1}{Nr_j}, 1\right\}\min\left\{\frac{1}{\sqrt{Nr_1}}, 1\right\}\min\left\{\frac{1}{\sqrt{Nr_{n+1}}}, 1\right\}.$$
Define $\Delta_0 = \Delta_n\cap\frac1N\mathbb{Z}^n\cap\{r_1 = 0\}$. We are left with estimating,
$$F(\Delta_0) = \sum_{r\in\Delta_0}\mathbb{P}(E_r).$$
This can be done by showing that these are Riemann sums converging to an integral which can be estimated in the same manner as in Theorem 4.3.1. An analogous calculation gives,
$$F(\Delta_0)\le\frac{(Cn^2\log^3 N)^n}{\sqrt N}$$
for some universal constant $C > 0$, which implies the upper bound.

Next, we prove the lower bound. Again we follow the same lines as in the proof of Theorem 4.2.1. Assume that $N = 2^{m-1}$ where $m = \left\lfloor\frac{cn}{\log n}\right\rfloor$; the value of the constant $c$ will be chosen later. We construct a vector $v$ in an analogous manner to the construction in Theorem 4.2.1. Define $v_0 = S_1$ and,
$$v_i = S_{2^i} - S_{2^{i-1}}$$
for $i = 1,\ldots,m-1$. Define,
$$v = \frac{1}{\sqrt m}\sum_{i=0}^{m-1}\frac{v_i}{\sqrt{\mathbb{E}[|v_i|^2]}} = \frac{1}{\sqrt m}\sum_{i=0}^{m-1}\frac{v_i}{(\sqrt2)^{i-1}}.$$
Fix $1\le k\le m$, and define
$$p = \langle S_{2^k}, v\rangle.$$
The expectation and variance of $p$ can be computed directly, as in the proof of Theorem 4.2.1. Defining the $w_j$'s analogously, Chernoff's inequality can be used to prove the bound (4.9). Theorem 4.2.2 is used to show that for a small enough value of $c$,
$$\mathbb{P}\left(p < \tfrac12\mathbb{E}[p]\right) < n^{-5}.$$
By applying a union bound, we can make sure that with high probability $\langle S_{2^k}, v\rangle\ge\frac12\mathbb{E}\left[\langle S_{2^k}, v\rangle\right]$ for all $1\le k\le m$. Next, a formula analogous to (4.11) should be applied in order to control the conditional random walks found between consecutive points of the form $2^k$. To this end, we observe that for the random walk $\bar S_j := \langle v, S_j\rangle$ one has,
$$\mathbb{P}\left(\max_{1\le j\le k}\bar S_j < u\right)\le\mathbb{P}\left(\max_{1\le j\le k}\bar S_j < u\ \Big|\ \bar S_k = 0\right), \quad \forall k\in\mathbb{N},\ u > 0.$$
Hence, instead of bounding a conditional random walk, we may bound the usual random walk. We then use Bernstein's inequality, Theorem 4.2.2, to derive a bound analogous to (4.13). Using a union bound gives,
$$\mathbb{P}(\langle S_j, v\rangle > 0,\ \forall\, 1\le j\le N) > 1 - \frac{1}{n}.$$
This finishes the sketch of the proof.

4.5 Spherical covering times

The goal of this section is to prove Corollary 4.1.4.

Let $B(t)$ be a standard Brownian motion in $\mathbb{R}^n$, $n > 2$. Denote $\theta(t) = \frac{B(t)}{|B(t)|}$ and observe that $\theta(t)$ is almost surely well-defined for all $t > 0$. Let $T(t)$ be the solution of the equation
$$T'(t) = |B(T(t))|^2, \quad T(0) = 1.$$
We denote by $[S]_t$ the quadratic variation of an Itô process $S_t$ between time $0$ and time $t$. Since $\frac{d}{dt}[\theta]_t = \frac{n-1}{|B(t)|^2}$, we have,
$$\frac{d}{dt}[\theta\circ T]_t = T'(t)\left(\frac{d}{dt}[\theta]_t\right)\bigg|_{t=T(t)} = \frac{T'(t)\,(n-1)}{|B(T(t))|^2} = n-1,$$
which implies that $\theta(T(t))$ is a strong Markov process, and is therefore a spherical Brownian motion.

Proof of Corollary 4.1.4: First, observe that for every $\tau > 0$, the origin lies in the interior of $\mathrm{conv}\{B(t);\ 1\le t\le\tau\}$ if and only if it lies in the interior of $\mathrm{conv}\{\theta(t);\ 1\le t\le\tau\}$, thus we have $E(n) = \mathbb{E}[\tau_1]$ where
$$\tau_1 = \inf\{\tau > 0;\ F_\tau\ \text{holds}\},$$
and
$$F_\tau = \left\{0\in\mathrm{Int}\left(\mathrm{conv}\{B(T(s));\ 0\le s\le\tau\}\right)\right\}.$$
We aim to use the bounds from Theorems 4.2.1 and 4.3.1. For that, we will need to establish certain bounds on the distribution of $T^{-1}(s)$ for a given $s > 0$.


Since $\mathbb{E}(|B(T)|^2) = nT$, it follows that $\mathbb{E}(T(t)) = e^{nt} + 1$. Using Markov's inequality gives
$$\mathbb{P}\left(T(t) > 10e^{nt} + 10\right)\le 0.1. \tag{4.20}$$
By Theorem 4.2.1, there exists a constant $c > 0$ such that for
$$\tau_2 = \inf\{\tau > 0;\ T(\tau)\ge e^{cn/\log n}\},$$
one has
$$\mathbb{P}(F_{\tau_2}) < 0.1. \tag{4.21}$$
According to equation (4.20),
$$\mathbb{P}(\tau_2 < c_1/\log n) < 0.1, \tag{4.22}$$
for some universal constant $c_1 > 0$. Using a union bound with (4.21) and (4.22) gives,
$$\mathbb{P}(\tau_1 < c_1/\log n) < 0.2,$$
which implies
$$\mathbb{E}[\tau_1]\ge 0.8\,c_1/\log n.$$
The lower bound is established.

We continue with the upper bound. Observe that $T(t)$ is a bijective map from $[0,\infty)$ to $[1,\infty)$. We may define $f(s) = T^{-1}(s)$ for all $s\ge 1$. One has,
$$f'(s) = \frac{1}{T'(f(s))} = \frac{1}{|B(s)|^2}.$$
Consequently, by Fubini's theorem,
$$\mathbb{E}[f(s)] = \int_1^s\mathbb{E}\left[\frac{1}{|B(t)|^2}\right]dt = \int_1^s\frac{1}{t}\,\mathbb{E}\left[\frac{1}{|\Gamma|^2}\right]dt,$$
where $\Gamma$ is a standard Gaussian random vector in $\mathbb{R}^n$. A calculation gives $\mathbb{E}\left[\frac{1}{|\Gamma|^2}\right] < \frac{C_1}{n}$ for some universal constant $C_1 > 0$. It follows that $\mathbb{E}[f(s)]\le\frac{C_1\log s}{n}$. By Markov's inequality,
$$\mathbb{P}\left(f(s) > \frac{10C_1\log s}{n}\right) < 0.1. \tag{4.23}$$
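The Gaussian moment used above is in fact explicit: $|\Gamma|^2$ is a $\chi^2_n$ variable, and $\mathbb{E}[1/|\Gamma|^2] = \frac{1}{n-2}$ for $n > 2$, so for instance any $C_1\ge 2$ works once $n\ge 4$. A quick Monte Carlo sketch (sample sizes arbitrary):

```python
import random

random.seed(3)

def inverse_sq_norm_mean(n, samples=200_000):
    # Monte Carlo estimate of E[1/|Gamma|^2], Gamma a standard Gaussian in R^n
    total = 0.0
    for _ in range(samples):
        total += 1 / sum(random.gauss(0, 1) ** 2 for _ in range(n))
    return total / samples

n = 10
print(inverse_sq_norm_mean(n), 1 / (n - 2))  # estimate ≈ exact value 0.125
```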

According to Theorem 4.3.1, there exists a universal constant $C > 0$ such that for
$$\tau_3 = \inf\{\tau > 0;\ T(\tau)\ge e^{Cn\log n}\},$$
one has,
$$\mathbb{P}(F_{\tau_3}) > 0.9. \tag{4.24}$$
Now, an application of equation (4.23) with $s = e^{Cn\log n}$ gives,
$$\mathbb{P}(\tau_3 > C_2\log n) < 0.1, \tag{4.25}$$


for some universal constant $C_2 > 0$. Using a union bound with equations (4.24) and (4.25) gives,
$$\mathbb{P}(\tau_1 > C_2\log n) < 0.2.$$
In other words,
$$\mathbb{P}\left(0\in\mathrm{Int}\left(\mathrm{conv}\{\theta(T(t));\ 0\le t\le C_2\log n\}\right)\right) > 0.8.$$
Now, by the strong Markov property and time-homogeneity of $\theta\circ T$, we also have
$$\mathbb{P}\left(0\in\mathrm{Int}\left(\mathrm{conv}\{\theta(T(t));\ kC_2\log n\le t\le (k+1)C_2\log n\}\right)\right) > 0.8,$$
for all $k\in\mathbb{N}$. Finally, since the above event is invariant under rotations,
$$\mathbb{P}\left(0\in\mathrm{Int}\left(\mathrm{conv}\{\theta(T(t));\ 0\le t\le kC_2\log n\}\right)\right) > 1 - 0.2^k.$$
In other words,
$$\mathbb{P}(\tau_1 > C_2 k\log n) < 0.2^k,$$
which easily implies that $E(n)\le C_3\log n$, for some universal constant $C_3 > 0$. The proof is complete.

4.6 Remarks and Further Questions

In this section we state a few results that can easily be obtained using the same ideas as above, and suggest possible related directions of research.

4.6.1 Probability for intermediate points in the walk to be extremal

The methods used above can easily be adapted in order to estimate the probability that an intermediate point of a random walk is an extremal point. To see this, observe that this probability equals the probability that the origin is an extremal point of two independent random walks of lengths $\lambda N$ and $(1-\lambda)N$ respectively. Thus, Theorem 4.3.1 can still be used for an upper bound since either $\lambda\ge\frac12$ or $1-\lambda\ge\frac12$. For the lower bound we should do a little extra work: we follow the lines of the proof of Theorem 4.2.1, only defining the vector $v$ as,
$$v = \lambda v_1 + (1-\lambda)v_2$$
where $v_1$ and $v_2$ are constructed in the same manner that the vector $v$ is constructed in Theorem 4.2.1. The exact same calculations can be carried out to show that with high probability $v$ separates the origin from the points of both of the random walks. This yields,

Proposition 4.6.1 There exist universal constants $C, c > 0$ such that the following holds: Let $S_1, S_2,\ldots$ be the standard random walk on $\mathbb{Z}^n$ and let $j, N\in\mathbb{N}$, $j < N$. Then:

(i) If $N > e^{Cn\log n}$ then $\mathbb{P}(S_j\in\mathrm{Int}(\mathrm{conv}\{S_1,\ldots,S_N\})) > \frac12$.

(ii) If $N < e^{cn/\log n}$ then $\mathbb{P}(S_j\in\partial\,\mathrm{conv}\{S_1,\ldots,S_N\}) > \frac12$.


4.6.2 Covering times and comparison to independent origin-symmetric random points

The result of Corollary 4.1.4 can also be viewed as an upper bound on a certain mixing time of the spherical Brownian motion. Let $\mu$ be an origin-symmetric distribution on $\mathbb{R}^n$ which is absolutely continuous with respect to the Lebesgue measure. There is a beautiful proof by Wendel, [?], that if $X_1,\ldots,X_N$ are independent random vectors with law $\mu$, one has
$$\mathbb{P}(0\notin\mathrm{conv}\{X_1,\ldots,X_N\}) = \frac{1}{2^{N-1}}\sum_{k=0}^{n-1}\binom{N-1}{k}. \tag{4.26}$$
Hence, the probability does not depend on $\mu$ as long as it is centrally symmetric and absolutely continuous. Note that in order for this probability to be $\frac12$ one should take $N(n)\approx n\log n$.
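Wendel's formula (4.26) is easy to test by simulation: in the plane, $0\notin\mathrm{conv}\{X_1,\ldots,X_N\}$ exactly when all the points fit in a closed half-plane, i.e. when some angular gap between consecutive directions exceeds $\pi$. A sketch with Gaussian samples (any symmetric, absolutely continuous law would give the same answer):

```python
import math
import random

random.seed(1)

def origin_outside_hull_2d(pts):
    # 0 lies outside conv(pts) iff some gap between consecutive angular
    # directions of the points exceeds pi
    angles = sorted(math.atan2(y, x) for x, y in pts)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))
    return max(gaps) > math.pi

n, N, trials = 2, 5, 100_000
hits = sum(
    origin_outside_hull_2d(
        [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
    )
    for _ in range(trials)
)
wendel = sum(math.comb(N - 1, k) for k in range(n)) / 2 ** (N - 1)
print(hits / trials, wendel)   # Monte Carlo ≈ exact value 5/16 = 0.3125
```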

This suggests that the correct mixing time in the sense of the $\frac\pi2$-covering should be $\frac1n$. An easy computation shows that after a time of order $\frac1n$, a Brownian motion that started at an arbitrary point on the sphere will be approximately uniformly distributed on the sphere, in the sense that its density will be bounded between two universal constants, independent of the dimension. If we assume that the correct mixing time for this purpose is therefore $\frac1n$, this suggests that our upper bound of $e^{n\log n}$ should be a natural conjecture for the correct asymptotics in Theorem 4.1.1.

4.6.3 A random walk that does not start from the origin

Our techniques may also be used to find the asymptotics of the time it takes for the origin to be encompassed by a random walk whose starting point is different from the origin. By the scaling property of Brownian motion,
$$\mathbb{P}(0\in\mathrm{Int}(\mathrm{conv}\{B(t);\ 1\le t\le M\})) = \mathbb{P}(0\in\mathrm{Int}(\mathrm{conv}\{B(t);\ L\le t\le LM\}))$$
for all $M > 1$, $L > 0$. Using the concentration of $|B(t)|$ around its expectation, it is not hard to derive,

Proposition 4.6.2 There exist universal constants $C, c > 0$ such that the following holds: Let $B(t)$ be a Brownian motion started at a point $x_0$ whose distance from the origin is $L$. Then:

(i) If $M > L^2 e^{Cn\log n}$ then $\mathbb{P}(0\in\mathrm{Int}(\mathrm{conv}\{B(t);\ 0\le t\le M\})) > \frac12$.

(ii) If $M < L^2 e^{cn/\log n}$ then $\mathbb{P}(0\in\mathrm{Int}(\mathrm{conv}\{B(t);\ 0\le t\le M\})) < \frac12$.

4.6.4 Possible Further Research

In this chapter we tried to find the correct asymptotics, with respect to the dimension $n$, of the value $N$ such that $p(n,N)\approx\frac12$. One related question is:

Question 4.6.3 For a fixed value of $n$, how does $p(n,N)$ behave asymptotically as $N\to\infty$?


In view of (4.26) and the discussion following it, one might expect that this probability could approximately follow, for a certain range of values of $N$, the law
$$p\approx\frac{(\log N)^n}{N^c}$$
where $p$ is the probability in question, $n$ is the dimension, $N$ is the length of the random walk, and $c > 0$ is some constant.

Two other possible questions are:

Question 4.6.4 Given two numbers $j, k < N$, what is the joint distribution of the events that $S_j$ and $S_k$ are extremal points of the random walk $S_1,\ldots,S_N$? Is there repulsion or attraction between extremal points of a random walk?

Question 4.6.5 How does the result of Theorem 4.1.1 change if one replaces the Brownian motion by a $p$-stable process?


Chapter 5

Convex hulls in the Hyperbolic space

The aim of this chapter is to prove two fairly basic facts about the volume of convex sets in the hyperbolic space. It is a well known fact that, unlike in the Euclidean case, the volume of a simplex in the hyperbolic space is bounded from above. The first goal of this chapter is to show that the volume of any polytope is sublinear with respect to its number of vertices. In dimension 2 the latter fact follows immediately from the former, since any polytope can be triangulated such that the number of triangles is smaller than the number of vertices. This argument obviously does not work in higher dimensions, since the number of simplices in a triangulation is not linear with respect to the number of vertices. The authors could not find a simple, clean argument which shows this.

5.1 Results

We denote by $\mathbb{H}^n$ the $n$-dimensional hyperbolic space, or in other words, the unique maximally symmetric, simply connected, $n$-dimensional Riemannian manifold with constant sectional curvature $-1$. Our first result may be formulated as follows:

Theorem 5.1.1 The volume of the convex hull of any $N$ points in $\mathbb{H}^n$ is smaller than $\frac{2(2\sqrt\pi)^n}{\Gamma\left(\frac n2\right)}N$.

Remark 5.1.2 An easy corollary of Theorem 5.1.1 is the fact that the volume of the fundamental domain of any reflection group in $\mathbb{H}^n$ is bounded by a constant which only depends on the dimension and on the number of generators of the group.

Our second result is an application of the first one for estimating the growth of the volume of a set when taking the convex hull. It is easily seen that the volume of a general set can grow in an arbitrary fashion when taking its convex hull. However, it turns out that for a set of bounded curvature this is not the case. For a set $A\subset\mathbb{H}^n$, the $\varepsilon$-extension of $A$ is defined as,
$$A_\varepsilon := \bigcup_{x\in A} B_H(x,\varepsilon)$$
where $B_H(x,\varepsilon)$ is the ball of radius $\varepsilon$ around $x$ with respect to the hyperbolic metric. Our second result reads,

Theorem 5.1.3 For each dimension $n$ and every $\varepsilon > 0$, there exists a constant $C(n,\varepsilon)$ such that the following holds: For any measurable $A\subset\mathbb{H}^n$, one has
$$\mathrm{Vol}(\mathrm{Conv}(A_\varepsilon)) < C(n,\varepsilon)\,\mathrm{Vol}(A_\varepsilon).$$

It is clear that in Euclidean space, even if one adds the assumption that the set is connected, the above is far from true.

Remark 5.1.4 In dimension 2, the above theorem has a fairly simple proof in case the set is connected: Let $A\subset\mathbb{H}^2$ be a connected set. It follows from the isoperimetric inequality that the surface area of $A_1$ is comparable to its volume. Moreover, when taking the convex hull of a connected set in a 2-dimensional Riemannian manifold, the perimeter becomes smaller. It follows that
$$\mathrm{Vol}_2(\mathrm{conv}(A_1)) < C_1\mathrm{Vol}_1(\partial\,\mathrm{conv}(A_1)) < C_1\mathrm{Vol}_1(\partial A_1) < C_2\mathrm{Vol}(A_1)$$
where the first and last inequalities follow from the isoperimetric inequality in $\mathbb{H}^2$.

A section will be devoted to the proof of each of the theorems. The main idea of the proof of the first theorem is to compare the volume of a convex polytope with that of a union of cone-like objects centered around the vertices of the polytope. One of its main ingredients is a calculation which roughly shows the following: every cone centered at a point at infinity, such that every line in its boundary meets the geodesic coming from the origin at a right angle at its endpoint, has a bounded volume. This calculation might be of benefit in understanding the distribution of mass in hyperbolic convex sets. The latter theorem follows from the former rather easily.

Before we move on to the proofs, let us introduce some notation. We consider the Klein model for the hyperbolic space $\mathbb{H}^n$. For a detailed construction refer to [CFKP], [Milnor]. The Klein model is the Euclidean unit ball in $\mathbb{R}^n$, denoted by $B_2^n$, equipped with the following metric:
$$ds_K^2 = \frac{dx_1^2 + \ldots + dx_n^2}{1 - x_1^2 - \ldots - x_n^2} + \frac{(x_1\,dx_1 + \ldots + x_n\,dx_n)^2}{(1 - x_1^2 - \ldots - x_n^2)^2} = \frac{dx^2}{1 - r^2} + \frac{r^2\,dr^2}{(1 - r^2)^2}.$$
The volume form on the Klein model has the expression,
$$v_n(r) = \frac{1}{(1-r^2)^{\frac{n-1}{2}}}\sqrt{\frac{1}{1-r^2} + \frac{r^2}{(1-r^2)^2}}\,dx = \frac{1}{(1-r^2)^{\frac{n+1}{2}}}\,dx.$$
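As a sanity check on this volume form: in the Klein model the hyperbolic distance from the origin to a point at Euclidean radius $R$ is $\operatorname{artanh} R$, so for $n = 2$, integrating $v_2$ over the Euclidean disk of radius $R = \tanh d$ should recover the classical hyperbolic disk area $2\pi(\cosh d - 1)$. A minimal sketch:

```python
import math

def klein_disk_area(R, num=200_000):
    # integrate the n = 2 volume form (1 - r^2)^(-3/2) over the Euclidean
    # disk of radius R < 1, in polar coordinates (midpoint rule in r)
    dr = R / num
    return sum(
        2 * math.pi * r / (1 - r * r) ** 1.5 * dr
        for r in ((i + 0.5) * dr for i in range(num))
    )

d = 2.0                   # hyperbolic radius
R = math.tanh(d)          # corresponding Euclidean radius in the Klein model
print(klein_disk_area(R), 2 * math.pi * (math.cosh(d) - 1))  # ≈ 17.357 both
```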

The main advantage of the Klein model over other models, for our purposes, is the fact that geodesics with respect to the hyperbolic metric are also geodesics with respect to the Euclidean metric, which means that the hyperbolic convex hull of a set is the same as the Euclidean one. A related work of Rivin, [Riv], has several applications of this fact.

For two points $x, y\in B_2^n$ we denote by $x + y$ the standard sum of $x$ and $y$ with respect to the Euclidean linear structure, by $|x - y|$ we denote the Euclidean distance between $x$ and $y$, and finally by $d_H(x,y)$ we denote the hyperbolic distance between $x$ and $y$.


This chapter is based on a joint work with Itai Benjamini.

5.2 The volume of the convex hull of N points is sublinear

Let $x_1,\ldots,x_N\in\mathbb{H}^n$ and define $A = \mathrm{conv}(x_1,\ldots,x_N)$. Our goal in this section is to give an upper bound for $\mathrm{Vol}(A)$ which depends linearly on $N$.

Define $S$ to be the sphere at infinity, $S = \mathbb{H}^n(\infty) = \partial B_2^n$. Clearly, we can assume WLOG that $x_1,\ldots,x_N\in S$, and that $x_0 = 0$. Furthermore, by applying a slight perturbation and using the continuity of the volume, we can assume that the $(n-1)$-dimensional facets of the polytope $A$ are all simplices.

Let us introduce some notation. For each $x\in S$ let $T_x$ be the unit sphere of the tangent space of $S$ at the point $x$, which can be identified with $S\cap x^\perp$. For each $\theta\in T_x$, let $A_x(\theta) = A\cap\mathrm{span}^+\{x,\theta\}$ (where $\mathrm{span}^+$ denotes the positive span) and let $L_x(\theta)$ be the (unique) line which lies on the relative boundary of $A_x(\theta)$ and passes through $x$ and not through $0$. Let $y_x(\theta)$ be the endpoint of $L_x(\theta)\cap B_2$ which is not $x$, and let $z_x(\theta)$ be the endpoint of $L_x(\theta)\cap A$ which is not $x$.

We make the following definitions,
$$C_x(\theta) := \mathrm{conv}\left(x, \frac{x + y_x(\theta)}{2}, 0\right), \qquad \tilde C_x(\theta) := \mathrm{conv}\left(x, \frac{x + z_x(\theta)}{2}, 0\right)$$
(the addition is taken with respect to the Euclidean structure) and,
$$C_x := \bigcup_{\theta\in T_x} C_x(\theta), \qquad \tilde C_x := \bigcup_{\theta\in T_x}\tilde C_x(\theta).$$

Our proof will consist of two main steps. The first one will be to show that,
$$\mathrm{Vol}(A)\le 2^n\sum_{i=1}^N\mathrm{Vol}(\tilde C_{x_i})\le 2^n\sum_{i=1}^N\mathrm{Vol}(C_{x_i}) \tag{5.1}$$
(the second inequality is obvious from the fact that $\tilde C_{x_i}\subseteq C_{x_i}$). The second step will be to show that $\mathrm{Vol}(C_{x_i})\le\frac{\pi^{n/2}}{\Gamma(n/2)}$.

We start with proving (5.1). To this end, let $F$ be an $(n-1)$-dimensional facet of $A$. Assume WLOG that,
$$F = \mathrm{conv}(x_1,\ldots,x_n).$$
Denote,
$$D = \mathrm{conv}(0, F), \qquad D_i = D\cap\tilde C_{x_i}.$$
It is clear that (5.1) will follow immediately from the next lemma:


Lemma 5.2.1 In the above notation,
$$\mathrm{Vol}(D)\le 2^n\sum_{i=1}^n\mathrm{Vol}(D_i).$$
Proof: We can assume $D$ has a nonempty interior, in which case each $y\in D$ can be uniquely expressed as $y = \sum_{j=1}^n\alpha_j x_j$, $0\le\alpha_j\le 1$. For each $1\le i\le n$, define a function $T_i$:
$$T_i\left(\sum_{j=1}^n\alpha_j x_j\right) = \frac12\sum_{j=1}^n\alpha_j x_j + \frac12\left(\sum_{j=1}^n\alpha_j\right)x_i.$$
Evidently, $T_i(D) = D_i$.

Next we note that for each $y\in D$, there exists $1\le i\le n$ such that $|T_i(y)|\ge|y|$. Indeed, this follows from the fact that the vectors $T_1(y) - y,\ldots,T_n(y) - y$ positively span an $(n-1)$-dimensional subspace, and from the convexity of $|\cdot|$. Define,
$$\ell(y) = \arg\max_{1\le i\le n}|T_i(y)|$$
and,
$$T(y) = T_{\ell(y)}(y).$$
It is easy to see that $T(y)$ is well defined and differentiable outside a set of measure $0$ in $D$. We can now calculate,
$$\mathrm{Vol}(D) = \int_D v_n(|x|)\,dx\le\int_D v_n(|T(x)|)\,dx\le\sum_{i=1}^n\int_D v_n(|T_i(x)|)\,dx = \sum_{i=1}^n\frac{1}{\det(T_i)}\int_{T_i(D)} v_n(|x|)\,dx\le 2^n\sum_{i=1}^n\mathrm{Vol}(D_i).$$

We move on to the second step, namely, proving the following:

Lemma 5.2.2 In the above notation, $\mathrm{Vol}(C_{x_i}) < \frac{\pi^{n/2}}{\Gamma(n/2)}$.

Proof: Note that $C_x(\theta)$ is a right triangle. Denote its angle at the origin by $\varphi$. One can calculate the volume of $C_x$ by means of polar integration (to be exact, by means of revolution around the axis $[0,x]$):
$$\mathrm{Vol}(C_x) = \omega_{n-2}\int_{T_x}\int_{C_x(\theta)} d(y,[0,x])^{n-2}\, v_n(y)\,dy\,d\sigma(\theta) \tag{5.2}$$
where $\omega_{n-2}$ is the Lebesgue measure of the unit $(n-2)$-dimensional Euclidean sphere, $\sigma$ is the Haar measure on $T_x$, and,
$$d(y,[0,x]) = \min_{0\le t\le 1}|tx - y|.$$
Let us pick a coordinate system for $\mathrm{span}\{x,\theta\}$ in the following way: define the origin to be $x$, let $e_1$ be the unit vector in the direction $-x$, and let $e_2$ be the unit vector in the direction of $\theta$. For $u,v\in\mathbb{R}^+$ denote $(u,v) = (1-u)x + v\theta$. The volume form $v_n$ becomes,
$$v_n(u,v) = \frac{1}{\left(1 - (1-u)^2 - v^2\right)^{\frac{n+1}{2}}}.$$

5.2. THE VOLUME OF THE CONVEX HULL OF N POINTS IS SUBLINEAR

And we get

∫_{T_x} ∫_{C̄_x(θ)} d(y, [x, 0])^{n−2} v_n(y) dy dσ(θ) = ∫_0^1 ∫_0^{L(u)} v^{n−2} (1 − (1 − u)² − v²)^{−(n+1)/2} dv du,

where

L(u) = u cot ϕ for 0 ≤ u ≤ sin² ϕ,   L(u) = (1 − u) tan ϕ for sin² ϕ ≤ u ≤ 1.

We estimate

∫_0^1 ∫_0^{L(u)} v^{n−2} (1 − (1 − u)² − v²)^{−(n+1)/2} dv du ≤ ∫_0^1 ∫_0^{L(u)} v^{n−2} (2u − u² − L(u)²)^{−(n+1)/2} dv du = ∫_0^1 ∫_0^{L(u)} v^{n−2} (u + t(u))^{−(n+1)/2} dv du,

where

t(u) = u − u² − L(u)².

Now, t(0) = t(1) = 0 and t(sin² ϕ) = sin² ϕ − sin⁴ ϕ − sin⁴ ϕ cot² ϕ = 0; since t is concave on each of the intervals [0, sin² ϕ] and [sin² ϕ, 1] and vanishes at their endpoints, t(u) ≥ 0 for 0 ≤ u ≤ 1. So we have

∫_0^1 ∫_0^{L(u)} v^{n−2} (1 − (1 − u)² − v²)^{−(n+1)/2} dv du ≤ ∫_0^1 ∫_0^{L(u)} v^{n−2} u^{−(n+1)/2} dv du = (1/(n−1)) ∫_0^1 L(u)^{n−1} u^{−(n+1)/2} du = (1/(n−1)) ∫_0^{sin² ϕ} (cot ϕ)^{n−1} u^{(n−3)/2} du + (1/(n−1)) ∫_{sin² ϕ}^1 (tan ϕ)^{n−1} (1 − u)^{(n−3)/2} du = (2/(n−1)²) ((cot ϕ)^{n−1}(sin ϕ)^{n−1} + (tan ϕ)^{n−1}(cos ϕ)^{n−1}) ≤ 2/(n−1)².

Plugging these estimates into (5.2) yields

Vol(C̄_x) ≤ 2ω_{n−2}/(n−1)².

Joining the results of these two lemmata yields

Vol(A) ≤ 2^n N · 2ω_{n−2}/(n−1)²,

which proves Theorem 5.1.1.
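Two quantitative claims in the proof of Lemma 5.2.2 — the nonnegativity of t(u) on [0, 1] and the final bound 2/(n−1)² on the double integral — can be checked by direct computation. The following is a crude midpoint-rule sanity check (not part of the proof); the angles in the integral test are taken ≥ π/4, where the estimate leaves a comfortable margin:

```python
import math

def L(u, phi):
    # Height of the right triangle over abscissa u, as in the proof.
    return u / math.tan(phi) if u <= math.sin(phi) ** 2 else (1 - u) * math.tan(phi)

def t(u, phi):
    return u - u * u - L(u, phi) ** 2

# t vanishes at u = 0, sin^2(phi) and 1, and is nonnegative in between.
for phi in (0.2, 0.7, 1.2):
    assert abs(t(math.sin(phi) ** 2, phi)) < 1e-12
    assert all(t(k / 1000, phi) > -1e-12 for k in range(1001))

def integral(n, phi, steps=300):
    # Midpoint rule for the double integral
    # \int_0^1 \int_0^{L(u)} v^{n-2} (1-(1-u)^2-v^2)^{-(n+1)/2} dv du.
    total, h = 0.0, 1.0 / steps
    for a in range(steps):
        u = (a + 0.5) * h
        lu = L(u, phi)
        m = max(1, int(steps * lu))
        hv = lu / m
        for b in range(m):
            v = (b + 0.5) * hv
            total += v ** (n - 2) * (1 - (1 - u) ** 2 - v * v) ** (-(n + 1) / 2) * h * hv
    return total

for n in (3, 5, 8):
    for phi in (0.8, 1.0, 1.3):
        assert integral(n, phi) < 2 / (n - 1) ** 2
```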

Remark 5.2.3 It can easily be seen that if for each N we denote by V_n(N) the maximal volume of a polytope with N vertices in H^n, then

V_n(N) > c_n N,   ∀N ≥ n + 1,

for some constant c_n depending only on the dimension. This is easily achieved by partitioning the N points into subsets of size n + 1 and constructing disjoint simplices of volume bounded from below. This fact shows that, up to these dimension-dependent constants, our result is in some sense sharp.


Remark 5.2.4 A possible extension, suggested to us by Igor Rivin and related to his paper [Riv], is that there should be an upper bound on the volume of a convex set in terms of the Minkowski dimension of its "limit set" and the Minkowski measure in that dimension. The result here would be the sub-case of a zero-dimensional limit set.

Another possible generalization, suggested to us by M. Gromov, is to prove similar asymptotics for all Riemannian manifolds whose sectional curvatures are smaller than −1.

Remark 5.2.5 In Euclidean space, almost all of the volume of any simplex is very far from its vertices. The calculation carried out in this section suggests that in hyperbolic space this may not be the case. It may be interesting to find out whether the following assertion is true: do there exist constants c_n → 0 and r_n → ∞ (as n → ∞) such that for any n-dimensional simplex in which the distance between every two vertices is at least r_n, at least 0.9 of the volume is at distance < c_n r_n from one of the vertices? If the latter is true, the calculation carried out in this section would imply that, in some sense, unlike the Euclidean case, most of the volume of any polytope is rather close to its vertices.

5.3 The convex hull of a set whose boundary has bounded curvature

It is a well known fact that simplices in H^n have volume bounded by some universal constant. This easily implies the following fact:

Lemma 5.3.1 For each n ∈ N, there exists a constant C_n > 0 such that the convex hull of the union of any n + 1 metric balls of radius 1 in H^n has volume smaller than C_n.

Proof: This immediately follows from the facts that a ball of radius 1 is contained in the convex hull of

finitely many points, and that the volume of any simplex is bounded by a constant.

We are now ready to prove our second result.

Proof of Theorem 5.1.3: Let A ⊂ H^n. In view of Theorem 5.1.1 and Lemma 5.3.1, it is clearly enough to show that there exists a constant C(n, ε), depending only on n and ε, such that the covering number of A_ε by ε-balls is smaller than C(n, ε) Vol(A_ε).

Let N be the maximal packing number of A, that is, the maximal number of points x_1, ..., x_N ∈ A such that

d_H(x_i, x_j) > ε,   ∀1 ≤ i < j ≤ N.   (5.3)

By the maximality of the packing, we have

A ⊂ ⋃_{i=1}^N B(x_i, ε),

which implies that

⋃_{i=1}^N B(x_i, ε/2) ⊂ A_ε ⊂ ⋃_{i=1}^N B(x_i, 2ε),

so N also bounds the covering number of A_ε. Moreover, the last relation and (5.3) imply that the balls B(x_i, ε/2) are pairwise disjoint, hence

Vol(A_ε) ≥ N Vol(B(0, ε/2)),   (5.4)

which is exactly what we need.
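The packing argument above can be illustrated by a small Euclidean toy computation (a sketch only; R² stands in for H^n and a finite point set stands in for A): a maximal ε-separated subset of A gives ε-balls that cover A, while the ε/2-balls around its points are pairwise disjoint, which is the content of (5.4).

```python
import math
import random

random.seed(1)
eps = 0.2
# A finite "body" A: random points in the unit square.
A = [(random.random(), random.random()) for _ in range(500)]

# Greedy maximal eps-separated subset x_1, ..., x_N of A.
centers = []
for p in A:
    if all(math.dist(p, c) > eps for c in centers):
        centers.append(p)

# Maximality: the eps-balls around the centers cover A.
assert all(any(math.dist(p, c) <= eps for c in centers) for p in A)

# Separation: the eps/2-balls are pairwise disjoint, so
# Vol(A_eps) >= N * Vol(B(0, eps/2)), as in (5.4).
for i in range(len(centers)):
    for j in range(i + 1, len(centers)):
        assert math.dist(centers[i], centers[j]) > eps
```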

Remark 5.3.2 It may be interesting to find the correct asymptotics of the optimal constants C(n, ε). In view of Lemma 5.3.1, it is quite clear that the constant provided by our proof is far from optimal. It is also interesting to ask whether the constants C(n, 1) are bounded by some universal constant; in other words, whether the maximal ratio between the volume of the convex hull of a set and the volume of the set is universally bounded.


Bibliography

[ALPT] R. Adamczak, A. Litvak, A. Pajor and N. Tomczak-Jaegermann, Quantitative estimates of the

convergence of the empirical covariance matrix in log-concave ensembles. J. Amer. Math. Soc.

23 535-561, 2010.

[ABBP] D. Alonso-Gutierrez, J. Bastero, J. Bernues and G. Paouris, High-dimensional random sections

of isotropic convex bodies. Journal of Mathematical Analysis and Applications Volume 361,

Issue 2, 15 January 2010, Pages 431–439.

[ABP] M. Anttila, K. Ball and I. Perissinaki, The central limit problem for convex bodies. Trans. Amer.

Math. Soc., 355, no. 12, (2003), 4723-4735.

[Ad] D. J. Aldous An introduction to covering problems for random walks on graphs. J. Theor. Prob.

2 (1989), 87-89.

[As] D. Asimov, The grand tour: a tool for viewing multidimensional data. SIAM J. Sci. Statist.

Comput. 6 (1985), 128–143.

[Ba1] K. Ball, Logarithmically concave functions and sections of convex sets in Rn. Studia Math., 88,

no. 1, 1988, 69-84.

[Ba2] K. Ball, lecture at the Institut Henri Poincare, Paris, June 2006.

[BF] I. Barany and Z. Furedi, Computing the volume is difficult. Discrete & Computational Geometry

2 (1987) 319-326

[BB] J. Bastero and J. Bernues, Asymptotic behavior of averages of k-dimensional marginals of mea-

sures on Rn. Studia Math. 190 (2009), 1-31

[BE] D. Bakry and M. Emery, Diffusions hypercontractives, in Seminaire de probabilites, XIX,

1983/84, vol. 1123 of Lecture Notes in Math., Springer, Berlin, 1985, pp. 177–206.

[BL] H. J. Brascamp and E. H. Lieb, On extensions of the Brunn-Minkowski and Prekopa Leindler

theorems, including inequalities for log concave functions, and with an application to the diffu-

sion equation. J. Functional Analysis, 22(4):366-389, 1976

[BLRZ] P.J. Bickel, E. Levina, A.J. Rothman and J. Zhu, Sparse Permutation Invariant Covariance

Estimation. Electronic Journal of Statistics 2:494-515. (2008)


[Bob1] S.G. Bobkov, Isoperimetric and analytic inequalities for log-concave probability measures.

Ann. Probab., Vol. 27, No. 4, (1999), 1903–1921.

[Bob2] S.G. Bobkov, Extremal properties of half-spaces for log-concave distributions. Ann. Probab.,

vol. 24, no. 1, (1996), 35–48.

[Bob3] S.G. Bobkov, On concentration of distributions of random weighted sums. Ann. Probab., vol. 31,

No. 1, (2003), 195-215.

[Bob4] S.G. Bobkov, On isoperimetric constants for log-concave probability distributions, in Geomet-

ric Aspects of Functional Analysis Israel Seminar 2004-2005, Springer Lecture Notes in Math.

1910 (2007), 81–88.

[BK] S.G. Bobkov and A. Koldobsky, On the central limit property of convex bodies. Geometric

aspects of functional analysis, Lecture Notes in Math., 1807, Springer, Berlin, (2003), 44–52.

[B] B.Bollobas, Volume Estimates and Rapid Mixing Flavors of Geometry, MSRI Publications, Vol

31 (1997).

[Bor] C. Borell, Complements of Lyapunov’s inequality. Math. Ann., vol. 205 (1973), 323–331.

[Bou1] J. Bourgain, On high-dimensional maximal functions associated to convex bodies. Amer. J.

Math., 108, no. 6, (1986), 1467–1476.

[Bou2] J. Bourgain, Geometry of Banach spaces and harmonic analysis. Proceedings of the Interna-

tional Congress of Mathematicians, (Berkeley, Calif., 1986), Amer. Math. Soc., Providence, RI,

(1987), 871–878.

[Bou3] J. Bourgain, On the distribution of polynomials on high-dimensional convex sets. Geometric as-

pects of functional analysis, Israel seminar (1989–90), Lecture Notes in Math., 1469, Springer,

Berlin, (1991), 127–137.

[BV] U. Brehm and J. Voigt, Asymptotics of cross sections for convex bodies. Beitrage Algebra

Geom., 41, no. 2, (2000), 437–454.

[CFKP] J. W. Cannon, W.J Floyd, R. Kenyon and W.R Parry, Hyperbolic Geometry. Flavors of Geome-

try, MSRI Publications Volume 31, 1997.

[CGF] G. Carlier, A. Galichon and F. Santambrogio, From Knothe’s Transport to Brenier’s Map and a

Continuation Method for Optimal Transport. SIAM J. Math. Anal. 41, pp. 2554-2576.

[DF] P. Diaconis and D. Freedman, A dozen de Finetti-style results in search of a theory. Ann. Inst.

H. Poincare Probab. Statist., 23, no. 2, (1987), 397–423.

[Dis] V.I. Diskant, Stability of the Solution of the Minkowski Equation (in Russian). Sibirsk. Mat. 14

(1973), 669–673, 696. English translation in Siberian Math. J. 14 (1973), 466-469.


[DMO] M. Drton, H. Massam and I.Olkin, Moments of Minors of Wishart Matrices ,

arXiv:math/0604488v2.

[DPRZ] A. Dembo, Y. Peres, J. Rosen, and O. Zeitouni, Cover times for Brownian motion and random

walks in two dimensions. Annals of Mathematics, 160 (2004), 433–464.

[Dur] R. Durrett, Stochastic Calculus: A Practical Introduction. Cambridge University Press, 2003.

[DFK] M. E. Dyer, A. M. Frieze and R. Kannan, A random polynomial time algorithm for approximat-

ing the volume of convex bodies. J. ACM, 38,1 (1991) 1-17

[EK1] R. Eldan and B. Klartag, Approximately gaussian marginals and the hyperplane conjecture.

Proc. of a workshop on “Concentration, Functional Inequalities and Isoperimetry”, Contempo-

rary Math., vol. 545, Amer. Math. Soc., (2011), 55–68.

[EK2] R. Eldan and B Klartag, Dimensionality and the stability of the Brunn-Minkowski inequality.

Annali SNS, 2011.

[Fl1] B. Fleury, Concentration in a thin euclidean shell for log-concave measures , J. Func. Anal. 259

(2010), 832–841.

[Fl2] B. Fleury, Poincare inequality in mean value for Gaussian polytopes, Probability theory and

related fields, Volume 152, Numbers 1-2, 141-178.

[FMP1] A. Figalli, F. Maggi and A. Pratelli, A refined Brunn-Minkowski inequality for convex sets. Ann.

Inst. H. Poincare Anal. Non Lineaire, vol. 26, no. 6, (2009), 2511-2519.

[FMP2] A. Figalli, F. Maggi and A. Pratelli, A mass transportation approach to quantitative isoperimet-

ric inequalities. Invent. Math., vol. 182, no. 1, (2010), 167- 211.

[GR] N. Goyal and L.Rademacher. Learning convex bodies is hard (Submitted manuscript, 2008).

[Gor] Y. Gordon, Gaussian processes and almost spherical sections of convex bodies. Ann. Probab.,

vol. 16, no. 1, (1988) 180-188.

[Gu-M] O. Guedon and E. Milman, Interpolating thin-shell and sharp large-deviation estimates for

isotropic log-concave measures, 2010

[Fra] M. Fradelizi, Sectional bodies associated with a convex body. Proc. Amer. Math. Soc., 128, no.

9, (2000), 2735–2744.

[Gr] M. Gromov, Convex sets and Kahler manifolds. Advances in differential geometry and topol-

ogy, World Sci. Publ., Teaneck, NJ, (1990), 1–38.

[Gr-M] M. Gromov and V. D. Milman, A topological application of the isoperimetric inequality. Amer.

J. Math., 105(4):843–854, 1983.


[Groe] H. Groemer, On the Brunn-Minkowski theorem. Geom. Dedicata, vol. 27, no. 3, (1988), 357-

371.

[Gros1] L. Gross, Logarithmic Sobolev inequalities Amer. J. Math. 97 (1975), no. 4, 1061-1083

[Gros2] L. Gross, Logarithmic Sobolev inequalities and contractivity properties of semigroups, Dirich-

let forms Varenna, 1992, 54-88, Lecture Notes in Math., 1563, Springer, Berlin, 1993.

[Hen] D. Hensley, Slicing convex bodies—bounds for slice area in terms of the body’s covariance.

Proc. Amer. Math. Soc., 79, no. 4, (1980), 619–625.

[Leh] J. Lehec, A representation formula for the entropy and functional inequalities. arXiv:

1006.3028, 2010.

[LeVr] E. Levina and R. Vershynin, Partial estimation of covariance matrices. Probability theory and

related fields DOI: 10.1007/s00440-011-0349-4

[LV] L. Lovasz and S. Vempala, The geometry of logconcave functions and sampling algorithms.

Random Structures & Algorithms, Vol. 30, no. 3, (2007), 307–358.

[K1] B. Klartag, A central limit theorem for convex sets. Invent. Math., 168, (2007), 91–131.

[K2] B. Klartag, Power-law estimates for the central limit theorem for convex sets. J. Funct. Anal.,

Vol. 245, (2007), 284–310.

[K3] B. Klartag, A Berry-Esseen type inequality for convex bodies with an unconditional basis.

Probab. Theory Related Fields, vol. 145, no. 1-2, (2009), 1-33.

[K4] B. Klartag, Power-law estimates for the central limit theorem for convex sets. J. Funct. Anal., Vol. 245, (2007), 284–310.

[K5] B. Klartag, On nearly radial marginals of high-dimensional probability measures. J. Eur. Math.

Soc., vol. 12, no. 3, (2010), 723-754.

[K6] B. Klartag, A geometric inequality and a low M-estimate. Proc. Amer. Math. Soc., Vol. 132, No.

9, (2004), 2919–2928.

[K7] B. Klartag, An isomorphic version of the slicing problem. J. Funct. Anal. 218, no. 2, (2005),

372–394.

[K8] B. Klartag, On convex perturbations with a bounded isotropic constant. Geom. and Funct. Anal.

(GAFA), Vol. 16, Issue 6, (2006), 1274–1290.

[K9] B. Klartag, Uniform almost sub-gaussian estimates for linear functionals on convex sets. Alge-

bra i Analiz (St. Petersburg Math. Journal), Vol. 19, no. 1 (2007), 109–148.

[KLS] R. Kannan, L. Lovasz and M. Simonovits. Isoperimetric problems for convex bodies and a

localization lemma. Discrete Comput. Geom., 13(3-4):541–559, 1995.


[KOS] A.R. Klivans, R. O'Donnell and R. A. Servedio, Learning Geometric Concepts via Gaussian

Surface Area. Proc. of the 49th IEEE Symposium on Foundations of Computer Science (2008).

[Kn] H. Knothe, Contributions to the theory of convex bodies. Michigan Math. J., vol. 4, (1957),

39-52.

[Lo] L. Lovasz. private communication.

[LS] L. Lovasz and M.Simonovits. Random Walks in a Convex Body and an Improved Volume Algo-

rithm Random Structures Algorithms 4 (1993), no. 4, 359–412.

[LV1] L. Lovasz and S. Vempala, The geometry of logconcave functions and sampling algorithms.

Random Structures Algorithms, vol. 30, no. 3, (2007), 307-358.

[LV2] L. Lovasz and S. Vempala. Simulated Annealing in Convex Bodies and an O∗(n^4) Volume

Algorithm. FOCS 2003: 650-672.

[Mar] T.H. Marshall, Asymptotic volume formulae and hyperbolic ball packing. Annales Academiae

Scientiarum Fennicae Mathematica, Volumen 24, 31–43, 1999.

[M] P. Matthews, Covering problems for brownian motion on spheres. Ann. of probability, Vol 16,

no.1 189-199, 1988

[Mil1] E. Milman, On the role of Convexity in Isoperimetry, Spectral-Gap and Concentration, Invent.

Math. 177 (1), 1-43, 2009.

[Mil2] E. Milman, Isoperimetric Bounds on Convex Manifolds, Contemporary Math., proceedings of

the Workshop on ”Concentration,Functional Inequalities and Isoperimetry” in Florida, Novem-

ber 2009.

[Milnor] J.W. Milnor, Hyperbolic geometry: The first 150 years. Bull. Amer. Math. Soc. (N.S.) Volume

6, Number 1, pp. 9–24, 1982.

[MP] V.D. Milman and A. Pajor, Isotropic position and inertia ellipsoids and zonoids of the unit ball

of a normed n-dimensional space. Geometric aspects of functional analysis (1987–88), Lecture

Notes in Math., Vol. 1376, Springer, Berlin, (1989), 64–104.

[MS] V.D. Milman and G. Schechtman, Asymptotic theory of finite-dimensional normed spaces. Lec-

ture Notes in Mathematics, vol. 1200, Springer-Verlag, Berlin, 1986.

[NSV] F. Nazarov, M. Sodin and A. Volberg, The geometric Kannan-Lovász-Simonovits lemma,

dimension-free estimates for the distribution of the values of polynomials, and the distribu-

tion of the zeros of random analytic functions., St. Petersburg Math. J., vol. 14, no. 2, (2003),

351-366.

[Ok] B. Oksendal Stochastic Differential Equations: An Introduction with Applications. Berlin:

Springer. ISBN 3-540-04758-1, (2003).


[Oss] R. Osserman, Bonnesen-style isoperimetric inequalities. Amer. Math. Monthly, 86, no. 1,

(1979), 1-29.

[Pa] G. Paouris, Concentration of mass in convex bodies. Geom. Funct. Anal., 16, no. 5, (2006),

1021-1049.

[Pis] G. Pisier, The volume of convex bodies and Banach space geometry. Cambridge Tracts in Math-

ematics, 94. Cambridge University Press, Cambridge, 1989.

[RV] L.Rademacher and S. Vempala. Dispersion of Mass and the Complexity of Randomized Geo-

metric Algorithms Proc. of the 47th IEEE Symposium on Foundations of Computer Science

(2006)

[Riv] I. Rivin, Asymptotics of convex sets in Euclidean and hyperbolic spaces. Advances in Mathe-

matics 220, 1297–1315, 2009.

[Ro] R.T. Rockafellar, Convex analysis. Princeton Mathematical Series, No. 28, Princeton University

Press, Princeton, N.J., 1970.

[Sch] R. Schneider, Convex bodies: the Brunn-Minkowski theory. Encyclopedia of Mathematics and

its Applications, 44, Cambridge University Press, Cambridge, 1993.

[Seg] A. Segal. Remark on Stability of Brunn-Minkowski and Isoperimetric Inequalities for Convex

Bodies. To appear in Gafa Seminar notes.

[Sod] S. Sodin, Tail-sensitive gaussian asymptotics for marginals of concentrated measures in high

dimension. Geometric aspects of functional analysis, Israel seminar, Lecture notes in Math.,

Vol. 1910, Springer, (2007), 271-295.

[T] T. Tam, On Lei-Miranda-Thompson’s result on singular values and diagonal elements. Linear

Algebra and Its Applications, 272 (1998), 91-101.

[U] J.V. Uspensky, ”Introduction to Mathematical Probability”, McGraw-Hill Book Company, 1937

[Ver] R. Vershynin, Introduction to the non-asymptotic analysis of random matrices Compressed

Sensing, Theory and Applications. Chapter 5, Cambridge University Press (2012).

[Vil] C. Villani, Topics in optimal transportation. Graduate Studies in Mathematics, 58. American

Mathematical Society, Providence, RI, 2003.