Convex Geometric Analysis

Seminar Notes

Department of MathematicsUniversity of Crete

Heraklion, 2002


Contents

1 Brunn-Minkowski inequality
  1.1 Introduction
  1.2 Classical proofs
    1.2a Brunn's concavity principle
    1.2b Elementary sets
    1.2c Induction on the dimension
    1.2d Refinements involving hyperplane projections and sections
  1.3 Volume preserving transformations
    1.3a Knothe map
    1.3b Brenier map
  1.4 Functional forms
    1.4a Prekopa-Leindler inequality
    1.4b The logarithmic Sobolev inequality
    1.4c Concentration of measure in Gauss space
  1.5 Applications
    1.5a An inequality of Rogers and Shephard
    1.5b Hyperplane sections of a convex body
    1.5c Borell's lemma
    1.5d The isoperimetric inequality for convex bodies
    1.5e The spherical isoperimetric inequality
    1.5f Blaschke-Santalo inequality
    1.5g Urysohn's inequality

2 Rearrangement inequalities
  2.1 Introduction
  2.2 A general rearrangement inequality
    2.2a Symmetric decreasing rearrangement
    2.2b Brascamp-Lieb-Luttinger inequality
    2.2c Generalization to functions of several variables
  2.3 Volume of restricted Minkowski sums
    2.3a An inequality for restricted Minkowski sums
    2.3b Shannon's inequality
  2.4 Brascamp-Lieb inequality
  2.5 Reverse Brascamp-Lieb inequality
  2.6 Multidimensional versions


Chapter 1

Brunn-Minkowski inequality

1.1 Introduction

The Brunn-Minkowski inequality gives a fundamental relation between Minkowski addition and volume in Rn.

Brunn-Minkowski inequality: Let K and T be two non-empty compact subsets of Rn. Then,

(1.1) |K + T|^{1/n} ≥ |K|^{1/n} + |T|^{1/n}.

If we make the additional hypothesis that K and T are convex bodies, then we can have equality in (1.1) only if K and T are homothetical.

The inequality expresses, in a sense, the fact that volume is a concave function with respect to Minkowski addition. For this reason, it is often written in the following form: If K, T are non-empty compact subsets of Rn and λ ∈ (0, 1), then

(1.2) |λK + (1−λ)T|^{1/n} ≥ λ|K|^{1/n} + (1−λ)|T|^{1/n}.

Using (1.2) and the arithmetic-geometric means inequality we can also write

(1.3) |λK + (1−λ)T| ≥ |K|^λ |T|^{1−λ}.

This weaker form of the Brunn-Minkowski inequality has the advantage of being dimension free. It is actually equivalent to (1.1) in the following sense: if we know that (1.3) holds for all K, T and λ, we can recover (1.1) as follows.

Consider non-empty compact sets K and T (we may assume that |K| > 0 and |T| > 0; otherwise there is nothing to prove), and define

K1 = |K|^{−1/n}K,  T1 = |T|^{−1/n}T,  λ = |K|^{1/n} / (|K|^{1/n} + |T|^{1/n}).


Then, K1 and T1 have volume 1, and hence (1.3) gives

(1.4) |λK1 + (1− λ)T1| ≥ 1.

Since

λK1 + (1−λ)T1 = (K + T) / (|K|^{1/n} + |T|^{1/n}),

we immediately get (1.1).

There are many interesting proofs of the Brunn-Minkowski inequality, all of them related to important ideas. In the next three sections we discuss a variety of arguments which imply (1.1).

In Section 1.2 we present some classical proofs of the Brunn-Minkowski inequality. The first one is the original proof of Minkowski, which is based on the Brunn concavity principle. We give two proofs of Brunn's theorem: one goes by Schwarz symmetrization, while the other one uses a bisection argument of Gromov and Milman. The second proof works for (not necessarily convex) compact sets: it is based on elementary sets and is due to Hadwiger and Ohmann. The third proof clarifies the equality case for convex bodies: it is due to Kneser and Süss and goes by induction on the dimension. Finally, we prove Bonnesen's refinements of the Brunn-Minkowski inequality: a linear inequality for convex bodies whose projections onto some hyperplane have the same (n−1)-dimensional volume, and another version involving the volumes of hyperplane sections.

In Section 1.3 we introduce two volume preserving, one to one and onto transformations between two open convex bodies: the Knothe map and the Brenier map. Both maps can be used to give proofs of the Brunn-Minkowski inequality, but they will also be quite useful tools for several results in subsequent chapters.

In Section 1.4 we prove the Prekopa-Leindler inequality, which may be viewed as a functional form of the Brunn-Minkowski inequality. As an application, we prove the logarithmic Sobolev inequality and discuss some of its consequences in Gauss space.

Finally, in Section 1.5 we give a first list of important geometric applications of (1.1) and its relatives. This list is far from complete, but it shows the fundamental character and strength of the Brunn-Minkowski inequality.

1.2 Classical proofs

1.2a Brunn’s concavity principle

Historically, the first proof of the Brunn-Minkowski inequality was based on Brunn's concavity principle:

1.2.1. Let K be a convex body in Rn and let F be a k-dimensional subspace. Then, the function f : F⊥ → R defined by f(x) = |K ∩ (F + x)|^{1/k} is concave on its support.


Sketch of the proof: The proof goes by symmetrization. The Steiner symmetrization of K in the direction of θ ∈ Sn−1 is the set Sθ(K) consisting of all points of the form x + λθ, where x is in the projection Pθ⊥(K) of K onto θ⊥ and |λ| ≤ (1/2)·length((x + Rθ) ∩ K). In other words, we obtain Sθ(K) by sliding the chords of K so that their midpoints lie on θ⊥ and taking the union of all resulting chords.

Some basic properties of the Steiner symmetrization are listed below. We leave the details to the reader.

(i) Steiner symmetrization preserves convexity: if K is a convex body then Sθ(K) is also a convex body. In fact, this is the Brunn concavity principle for k = 1.

(ii) Sθ(K) can be described as follows:

Sθ(K) = {x + ((t1 − t2)/2)θ : x ∈ Pθ⊥(K), x + t1θ ∈ K, x + t2θ ∈ K}.

(iii) Steiner symmetrization preserves volume: |Sθ(K)| = |K|.

(iv) Let K1 and K2 be two convex bodies. For every λ ∈ (0, 1),

Sθ(λK1 + (1−λ)K2) ⊇ λSθ(K1) + (1−λ)Sθ(K2).

Let F be a k-dimensional subspace of Rn (1 ≤ k ≤ n). A well known fact which goes back to Steiner and Schwarz is that for every convex body K one can find a sequence of successive Steiner symmetrizations in directions θ ∈ F so that the limiting convex body K̃ has the following property:

For every x ∈ F⊥, K̃ ∩ (F + x) is a ball with center at x and radius r(x) such that |K̃ ∩ (F + x)| = |K ∩ (F + x)|.

Now, the proof of the theorem is immediate. Convexity of K̃ implies that r is concave on its support, and this shows that f is also concave. □
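As a small numerical aside (not part of the original notes), the following Python sketch illustrates Steiner symmetrization for a planar convex body given by a concave upper and a convex lower boundary; the particular boundaries and the grid are arbitrary choices. It checks the two properties used above: the area is unchanged, and the half-chord length is concave, so the symmetral is again convex.

import numpy as np

def lower(x):            # convex lower boundary (arbitrary choice)
    return 0.5 * (x - 0.3) ** 2

def upper(x):            # concave upper boundary (arbitrary choice)
    return 1.0 - 0.8 * (x - 0.6) ** 2

xs = np.linspace(0.0, 1.0, 2001)
chord = upper(xs) - lower(xs)            # vertical chord length of K at x

# Steiner symmetrization in the vertical direction keeps every chord length and
# recentres the chord on the x-axis, so both areas equal the same integral.
area = np.sum((chord[1:] + chord[:-1]) / 2 * np.diff(xs))
print("area of K and of S_theta(K):", area)

# Property (i): the symmetral is convex iff the half-chord x -> chord(x)/2 is concave.
half = chord / 2
print("half-chord concave:", bool(np.all(half[1:-1] >= (half[:-2] + half[2:]) / 2 - 1e-12)))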

Proof of the Brunn-Minkowski inequality: Brunn's concavity principle implies the Brunn-Minkowski inequality as follows. If K, T are convex bodies in Rn, we define

K1 = K × {0} and T1 = T × {1}

in Rn+1 and consider their convex hull L. If

L(t) = {x ∈ Rn : (x, t) ∈ L},  t ∈ [0, 1],

we easily check that L(0) = K, L(1) = T and L(1/2) = (K + T)/2. Then, Brunn's concavity principle for F = Rn shows that

|(K + T)/2|^{1/n} ≥ (1/2)|K|^{1/n} + (1/2)|T|^{1/n}. □


Let us give an alternative proof of Theorem 1.2.1, which is due to Gromov and Milman. Their argument proves a more general statement.

Definition. Let K be a convex set in Rn and let f : K → R+. We say that f is α-concave for some α > 0 if f^{1/α} is concave on K. Equivalently, if

f^{1/α}(λx1 + µx2) ≥ λf^{1/α}(x1) + µf^{1/α}(x2)

for all x1, x2 ∈ K and λ, µ > 0 with λ + µ = 1.

1.2.2. Let f, g : K → R+. If f is α-concave and g is β-concave, then f · g is (α + β)-concave.

Proof: Applying Hölder's inequality with p = (α + β)/α and q = (α + β)/β, we get

λ(f(x1)g(x1))^{1/(α+β)} + µ(f(x2)g(x2))^{1/(α+β)}
  ≤ (λf(x1)^{1/α} + µf(x2)^{1/α})^{α/(α+β)} · (λg(x1)^{1/β} + µg(x2)^{1/β})^{β/(α+β)}
  ≤ (f(λx1 + µx2))^{1/(α+β)} (g(λx1 + µx2))^{1/(α+β)}

for all x1, x2 ∈ K and λ, µ > 0 with λ + µ = 1, by the α-concavity of f and the β-concavity of g. □

Let now K be a convex body in Rn and θ ∈ Sn−1. For every y ∈ Pθ⊥(K) we write Iy for the set {t ∈ R : y + tθ ∈ K}. From the convexity of K it follows that Iy is an interval for every y ∈ Pθ⊥(K). Let f : K → R+ be a continuous function. We define the projection Pθf of f with respect to θ by

(Pθf)(y) := ∫_{Iy} f(y + tθ) dt,   y ∈ Pθ⊥(K).

1.2.3. If f is α-concave, then Pθf is (1 + α)-concave.

Proof: Concavity of a function is a "two-dimensional" notion, so we may clearly assume that K ⊆ R2, in which case Pθ⊥(K) is an interval. Let y1, y2 ∈ Pθ⊥(K) and write Iyi = [ai, bi], i = 1, 2. For every λ, µ > 0 with λ + µ = 1, we set yλ = λy1 + µy2. Then,

Iyλ ⊇ [λa1 + µa2, λb1 + µb2].

We define ci ∈ Iyi by the equations

∫_{ai}^{bi} f(yi + tθ) dt = 2 ∫_{ai}^{ci} f(yi + tθ) dt = 2 ∫_{ci}^{bi} f(yi + tθ) dt.

If K′ is the convex hull of the intervals [yi + ciθ, yi + biθ], i = 1, 2, and K′′ is the convex hull of the intervals [yi + aiθ, yi + ciθ], i = 1, 2, we define f′ = f|K′ and f′′ = f|K′′. By the definition of ci it is not hard to check that if Pθf′ and Pθf′′ are (1 + α)-concave then Pθf satisfies the (1 + α)-concavity condition at y1 and y2. Thus, we only need to prove that Pθf′ and Pθf′′ are (1 + α)-concave.


For every n ≥ 2 we define partitions ai = t0,i < t1,i < . . . < tn−1,i < tn,i = bi of [ai, bi] such that

∫_{ai}^{bi} f(yi + tθ) dt = n ∫_{t_{k−1,i}}^{t_{k,i}} f(yi + tθ) dt

for all k ∈ {1, . . . , n}. The same observation as above shows that we only need to check that Pθf_k is (1 + α)-concave for every k, where f_k = f|K(k) and K(k) is the convex hull of the intervals [yi + t_{k−1,i}θ, yi + t_{k,i}θ]. Passing to the limit we see that we have to check the following:

Claim: Let ti ∈ Iyi and di > 0, i = 1, 2. Given λ ∈ (0, 1), set yλ = λy1 + (1−λ)y2, t(λ) = λt1 + (1−λ)t2 and d(λ) = λd1 + (1−λ)d2. Then, the function λ ↦ f(yλ + t(λ)θ) · d(λ) is (1 + α)-concave.

This claim follows from Lemma 1.2.2, since λ ↦ f(yλ + t(λ)θ) is α-concave and the linear function λ ↦ d(λ) is 1-concave. □

We now finish the proof of Brunn's theorem as follows. The characteristic function of K is constant on K, and hence it is α-concave for every α > 0. We choose an orthonormal basis θ1, . . . , θk of F and perform successive projections in the directions of the θi. Lemma 1.2.3 shows that the function x ↦ |K ∩ (F + x)| is (α + k)-concave on PF⊥(K), for every α > 0. It follows that f(x) = |K ∩ (F + x)|^{1/k} is concave. □

1.2b Elementary sets

Next, we give a proof of the Brunn-Minkowski inequality for non-empty compact sets in Rn, which is due to Hadwiger and Ohmann. The argument is based on elementary sets. An elementary set is a finite union of non-overlapping boxes whose edges are parallel to the coordinate axes.

1.2.4. Let A and B be elementary sets in Rn. Then,

|A + B|^{1/n} ≥ |A|^{1/n} + |B|^{1/n}.

Proof: We first examine the case where both A and B are boxes. Assume that a1, . . . , an > 0 are the lengths of the edges of A, and b1, . . . , bn > 0 are the lengths of the edges of B. Then, A + B is also a box, whose edges have lengths a1 + b1, . . . , an + bn. Thus, we have to show that

((a1 + b1) · · · (an + bn))^{1/n} ≥ (a1 · · · an)^{1/n} + (b1 · · · bn)^{1/n}.

This is equivalent to the inequality

( a1/(a1 + b1) · · · an/(an + bn) )^{1/n} + ( b1/(a1 + b1) · · · bn/(an + bn) )^{1/n} ≤ 1.

But, the arithmetic-geometric means inequality shows that the left hand side is less than or equal to

(1/n)( a1/(a1 + b1) + · · · + an/(an + bn) ) + (1/n)( b1/(a1 + b1) + · · · + bn/(an + bn) ) = 1.
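The box inequality above is easy to test numerically. The following Python snippet (an illustration added here, not from the notes; the dimension and edge lengths are arbitrary choices) checks it for random boxes.

import numpy as np

rng = np.random.default_rng(0)
n = 5
for _ in range(1000):
    a = rng.uniform(0.1, 2.0, n)                 # edge lengths of the box A
    b = rng.uniform(0.1, 2.0, n)                 # edge lengths of the box B
    lhs = np.prod(a + b) ** (1 / n)              # |A + B|^{1/n}
    rhs = np.prod(a) ** (1 / n) + np.prod(b) ** (1 / n)
    assert lhs >= rhs - 1e-12
print("Brunn-Minkowski holds for 1000 random pairs of boxes")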


For every pair of elementary sets A and B, we define the complexity of (A, B) to be the total number of boxes in A and B. We will prove the theorem by induction on the complexity m of (A, B). The case m = 2 was our first step.

Assume then that m ≥ 3 and that the statement holds true for all pairs of elementary sets with complexity less than or equal to m − 1. Since m ≥ 3, we may assume that A consists of at least two boxes. Let I1 and I2 be two of them. Since they are non-overlapping, we can separate them by a hyperplane which, without loss of generality, may be described by the equation xn = ρ for some ρ ∈ R. We define

A+ = A ∩ {x ∈ Rn : xn ≥ ρ},  A− = A ∩ {x ∈ Rn : xn ≤ ρ}.

Then, A+ and A− are non-overlapping elementary sets, and each one of them consists of a smaller number of boxes than A. We now pass to B, and find a hyperplane xn = s such that if B+ = B ∩ {x ∈ Rn : xn ≥ s} and B− = B ∩ {x ∈ Rn : xn ≤ s}, then

(2.1)  |A+|/|A| = |B+|/|B| =: λ.

Then, B+ and B− are elementary sets, we clearly have 0 < λ < 1, and the complexity of each pair (A±, B±) is less than m. It is clear that

A + B = (A+ + B+) ∪ (A+ + B−) ∪ (A− + B+) ∪ (A− + B−).

On the other hand, since A+ + B+ ⊆ {x : xn ≥ ρ + s} and A− + B− ⊆ {x : xn ≤ ρ + s}, the sets A+ + B+ and A− + B− have disjoint interiors. Therefore,

|A + B| ≥ |A+ + B+| + |A− + B−|.

Our inductive hypothesis can be applied to the right hand side, giving

|A+ + B+|^{1/n} ≥ |A+|^{1/n} + |B+|^{1/n} and |A− + B−|^{1/n} ≥ |A−|^{1/n} + |B−|^{1/n}.

Taking (2.1) into account we get

|A + B| ≥ ( (λ|A|)^{1/n} + (λ|B|)^{1/n} )^n + ( ((1−λ)|A|)^{1/n} + ((1−λ)|B|)^{1/n} )^n
        = λ( |A|^{1/n} + |B|^{1/n} )^n + (1−λ)( |A|^{1/n} + |B|^{1/n} )^n
        = ( |A|^{1/n} + |B|^{1/n} )^n,

which implies |A + B|^{1/n} ≥ |A|^{1/n} + |B|^{1/n}. □

Since every compact set in Rn can be approximated by a decreasing sequence of elementary sets in the Hausdorff metric, Theorem 1.2.4 is easily extended to non-empty compact sets A and B in Rn.

1.2.5. Let A and B be non-empty compact sets in Rn. Then,

|A + B|^{1/n} ≥ |A|^{1/n} + |B|^{1/n}. □


1.2c Induction on the dimension

Our third proof of the Brunn-Minkowski inequality is by induction on the dimension and is due to Kneser and Süss. The argument works only for convex bodies. One of the advantages of the proof is that it clarifies the cases of equality in the convex case. Another one is that a suitable modification leads to "stability results".

For the proof we need to introduce the support function hK of a convex body K. For every u ∈ Sn−1 we define

hK(u) = max_{x∈K} 〈x, u〉.

Observe that the sum hK(u) + hK(−u) is the width of K in the direction of u. A separation argument shows that two convex bodies K0 and K1 coincide if and only if hK0(u) = hK1(u) for every u ∈ Sn−1.

1.2.6. Let K0 and K1 be convex bodies in Rn. For every λ ∈ [0, 1],

|(1− λ)K0 + λK1|1/n ≥ (1− λ)|K0|1/n + λ|K1|1/n

with equality only if K0 and K1 are homothetical.

Proof: We prove the theorem by induction on n. The statement is clearly true when n = 1, therefore we may assume that n ≥ 2 and that the theorem has been proved in dimension n − 1. We may also assume that |K0| = |K1| = 1 and that the origin is the centroid of both K0 and K1. Given θ ∈ Sn−1 we define (for i = 0, 1)

fi(t) = |Ki ∩ {y + tθ : y ∈ θ⊥}|_{n−1},

gi(t) = |Ki ∩ {y + sθ : y ∈ θ⊥, s ≤ t}|

on (−hKi(−θ), hKi(θ)). Then,

gi(t) = ∫_{−hKi(−θ)}^{t} fi(s) ds.

Since fi is continuous and strictly positive, we have

g′i(t) = fi(t),  −hKi(−θ) < t < hKi(θ),

and gi is strictly increasing. Now, if hi : (0, 1) → R is the inverse function of gi, we have

h′i(u) = 1/fi(hi(u)),  0 < u < 1.

Let Kλ = (1−λ)K0 + λK1 and write Kλ(u) = Kλ ∩ {y + hλ(u)θ : y ∈ θ⊥} for every λ ∈ [0, 1] and 0 < u < 1, where hλ = (1−λ)h0 + λh1. We can easily check that

Kλ(u) ⊇ (1−λ)K0(u) + λK1(u).


Therefore, if we make the change of variables t = hλ(u) we get

|Kλ| = ∫_{−hKλ(−θ)}^{hKλ(θ)} |Kλ ∩ (θ⊥ + tθ)|_{n−1} dt
     ≥ ∫_0^1 |Kλ(u)|_{n−1} h′λ(u) du
     ≥ ∫_0^1 |(1−λ)K0(u) + λK1(u)|_{n−1} ( (1−λ)/f0(h0(u)) + λ/f1(h1(u)) ) du.

The inductive hypothesis shows that this is greater than or equal to

∫_0^1 ( (1−λ)f0(h0(u))^{1/(n−1)} + λf1(h1(u))^{1/(n−1)} )^{n−1} ( (1−λ)/f0(h0(u)) + λ/f1(h1(u)) ) du.

By the arithmetic-geometric means inequality, the integrand is greater than or equal to 1 on (0, 1), with equality only if f0(h0(u)) = f1(h1(u)). This shows that |Kλ| ≥ 1 and completes the inductive step.

Suppose that for some λ ∈ (0, 1) we have |(1−λ)K0 + λK1| = 1. Then, f0(h0(u)) = f1(h1(u)) for every u ∈ (0, 1), which implies h′0 = h′1. Thus, h1 − h0 is a constant function. Since 0 is the centroid of Ki,

0 = ∫_{Ki} 〈x, θ〉 dx = ∫ t fi(t) dt = ∫_0^1 hi(u) fi(hi(u)) h′i(u) du = ∫_0^1 hi(u) du,

which shows that h0 ≡ h1. Then, hK0(θ) = hK1(θ). Since this should hold for every θ ∈ Sn−1, we must have K0 = K1. □

1.2d Refinements involving hyperplane projections and sections

First, we give a linear refinement of the Brunn-Minkowski inequality for two convex bodies K and T which have projections of the same volume onto some hyperplane.

1.2.7. Let K0 and K1 be two convex bodies in Rn and let θ ∈ Sn−1 be such that Pθ⊥(K0) = Pθ⊥(K1) = P. Then,

|(1−λ)K0 + λK1| ≥ (1−λ)|K0| + λ|K1|

for every λ ∈ (0, 1).

Proof: Let Kλ = (1−λ)K0 + λK1. Note that Pθ⊥(Kλ) = P for all λ ∈ [0, 1]. We may write

Kλ = {y + tθ : y ∈ P, fλ(y) ≤ t ≤ gλ(y)},


where fλ is convex on P, gλ is concave on P, and fλ ≤ gλ. We easily check that gλ ≥ (1−λ)g0 + λg1 and fλ ≤ (1−λ)f0 + λf1 on P, hence

|Kλ| = ∫_P ( gλ(y) − fλ(y) ) dy
     ≥ ∫_P ( (1−λ)g0(y) + λg1(y) − (1−λ)f0(y) − λf1(y) ) dy
     = (1−λ) ∫_P ( g0(y) − f0(y) ) dy + λ ∫_P ( g1(y) − f1(y) ) dy
     = (1−λ)|K0| + λ|K1|. □

Theorem 1.2.7 and a symmetrization argument give a stronger result.

1.2.8. Let K0 and K1 be two convex bodies in Rn and let θ ∈ Sn−1 be such that |Pθ⊥(K0)| = |Pθ⊥(K1)|. Then,

|(1−λ)K0 + λK1| ≥ (1−λ)|K0| + λ|K1|

for every λ ∈ (0, 1).

Proof: We first symmetrize K0 and K1 with respect to θ. If K′0 = Sθ(K0) and K′1 = Sθ(K1), then |K0| = |K′0|, |K1| = |K′1| and

(1−λ)K′0 + λK′1 ⊆ Sθ( (1−λ)K0 + λK1 ).

Since Steiner symmetrization preserves volume, it suffices to prove that

(2.2)  |(1−λ)K′0 + λK′1| ≥ (1−λ)|K′0| + λ|K′1|.

By a sequence of Steiner symmetrizations in directions ξ ∈ θ⊥, we obtain two rotation bodies K′′0 and K′′1 which have the same projection onto θ⊥: a centered ball of volume V = |Pθ⊥(K0)| = |Pθ⊥(K1)|. Theorem 1.2.7 shows that

|(1−λ)K′′0 + λK′′1| ≥ (1−λ)|K′′0| + λ|K′′1|,

and (2.2) follows because

|(1−λ)K′0 + λK′1| ≥ |(1−λ)K′′0 + λK′′1|

by the same reasoning as in the beginning of the proof. □

For the second result we need to introduce some notation: Let f and g be two bounded non-negative Borel measurable functions on R which vanish outside a finite interval. We define

h(r, s) = f(r) + g(s) if f(r)g(s) > 0

and h(r, s) = 0 otherwise. Then, we set

m(t) = sup{h(r, s) : r + s = t}.

We also write γ = sup_r f(r) and δ = sup_s g(s).


1.2.9. With this notation,

∫_R m^p(t) dt ≥ (γ + δ)^p [ (1/γ^p) ∫_R f^p(t) dt + (1/δ^p) ∫_R g^p(t) dt ]

for every p > 0.

Proof: If u : R → R+ is a measurable function, we set

Au(α) = {s : u(s) ≥ α}.

By the definition of m we easily check that

Am((γ + δ)t) ⊇ Af(γt) + Ag(δt)

for every t ∈ (0, 1). The one dimensional Brunn-Minkowski inequality implies that

|Am((γ + δ)t)| ≥ |Af(γt)| + |Ag(δt)|.

It follows that

∫_R m^p = ∫_0^{γ+δ} |Am(α)| p α^{p−1} dα
       = (γ + δ)^p ∫_0^1 |Am((γ + δ)t)| p t^{p−1} dt
       ≥ (γ + δ)^p ( ∫_0^1 |Af(γt)| p t^{p−1} dt + ∫_0^1 |Ag(δt)| p t^{p−1} dt )
       = (γ + δ)^p [ (1/γ^p) ∫_R f^p(t) dt + (1/δ^p) ∫_R g^p(t) dt ]. □

1.2.10. Let K and T be two convex bodies in Rn. Fix θ ∈ Sn−1 and set

γ = sup_r |K ∩ (θ⊥ + rθ)|^{1/(n−1)},  δ = sup_s |T ∩ (θ⊥ + sθ)|^{1/(n−1)}.

Then,

|K + T| ≥ (γ + δ)^{n−1} ( |K|/γ^{n−1} + |T|/δ^{n−1} ).

Proof: Consider the functions

f(r) = |K ∩ (θ⊥ + rθ)|^{1/(n−1)},  g(s) = |T ∩ (θ⊥ + sθ)|^{1/(n−1)}

and

u(t) = |(K + T) ∩ (θ⊥ + tθ)|^{1/(n−1)}.

An application of the Brunn-Minkowski inequality in dimension n − 1 shows that u ≥ m, where m is defined as in Lemma 1.2.9. Observe that

∫_R f^{n−1} = |K|,  ∫_R g^{n−1} = |T| and ∫_R u^{n−1} = |K + T|.

Applying Lemma 1.2.9 with p = n − 1 we conclude the proof. □

This inequality is formally stronger than the Brunn-Minkowski inequality. Also, if the bodies K and T have equal maximal hyperplane sections in some direction, we obtain a linear refinement of the Brunn-Minkowski inequality.


1.2.11. Let K and T be two convex bodies in Rn. If

sup_r |K ∩ (θ⊥ + rθ)| = sup_s |T ∩ (θ⊥ + sθ)|

for some θ ∈ Sn−1, then

|(K + T)/2| ≥ (|K| + |T|)/2. □

1.3 Volume preserving transformations

Let K and T be two open convex bodies in Rn. By a volume preserving transformation we mean a map φ : K → T which is one to one, onto and has a Jacobian with constant determinant equal to |T|/|K|. In this section we describe two such maps, the Knothe map and the Brenier map. Applying each one of them we may obtain alternative proofs of the Brunn-Minkowski inequality.

1.3a Knothe map

We fix an orthonormal basis e1, . . . , en of Rn, and consider two open convex bodies K and T. The properties of the Knothe map from K to T with respect to the given coordinate system are described in the following theorem.

1.3.1. There exists a map φ : K → T with the following properties:

(a) φ is triangular: the i-th coordinate function of φ depends only on x1, . . . , xi. That is,

φ(x1, . . . , xn) = (φ1(x1), φ2(x1, x2), . . . , φn(x1, . . . , xn)).

(b) The partial derivatives ∂φi/∂xi are positive on K, and the determinant of the Jacobian of φ is constant. More precisely, for every x ∈ K

|J(φ)(x)| = ∏_{i=1}^{n} (∂φi/∂xi)(x) = |T|/|K|.

Proof: For each i = 1, . . . , n and s = (s1, . . . , si) ∈ Ri we consider the section

Ks = {y ∈ Rn−i : (s, y) ∈ K}

of K (similarly for T). We shall define a one to one and onto map φ : K → T as follows. Let x = (x1, . . . , xn) ∈ K. Then, Kx1 ≠ ∅ and we can define φ1(x) = φ1(x1) by

(1/|K|) ∫_{−∞}^{x1} |Ks1|_{n−1} ds1 = (1/|T|) ∫_{−∞}^{φ1(x1)} |Tt1|_{n−1} dt1.

That is, we move in the direction of e1 until we "catch" a percentage of T which is equal to the percentage of K occupied by K ∩ {s = (s1, . . . , sn) : s1 ≤ x1}.


Note that φ1 is defined on K but φ1(x) depends only on the first coordinate of x ∈ K. Also,

(∂φ1/∂x1)(x) = (|T|/|K|) · |Kx1|_{n−1} / |Tφ1(x1)|_{n−1}.

We continue by induction. Assume that we have defined φ1(x) = φ1(x1), φ2(x) = φ2(x1, x2), . . . , φj−1(x) = φj−1(x1, . . . , xj−1) for some j ≥ 2. If x = (x1, . . . , xn) ∈ K then K(x1,...,xj−1) ≠ ∅, and we define φj(x) = φj(x1, . . . , xj) by

(1/|K(x1,...,xj−1)|) ∫_{−∞}^{xj} |K(x1,...,xj−1,sj)|_{n−j} dsj
  = (1/|T(φ1(x1),...,φj−1(x1,...,xj−1))|) ∫_{−∞}^{φj(x1,...,xj)} |T(φ1(x1),...,φj−1(x1,...,xj−1),tj)|_{n−j} dtj.

It is clear that

(∂φj/∂xj)(x) = ( |T(φ1(x),...,φj−1(x))|_{n−j+1} / |K(x1,...,xj−1)|_{n−j+1} ) · ( |K(x1,...,xj)|_{n−j} / |T(φ1(x),...,φj(x))|_{n−j} ).

Continuing in this way, we obtain a map φ = (φ1, . . . , φn) : K → T. It is easy to check that φ is one to one and onto. Note that

(∂φn/∂xn)(x) = |T(φ1(x),...,φn−1(x))|_1 / |K(x1,...,xn−1)|_1.

By construction, φ has properties (a) and (b). □

Remark. Observe that each choice of coordinate system in Rn produces a different Knothe map from K onto T.
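To make the construction concrete, here is a small numerical sketch (added for illustration, not part of the notes) of the Knothe map in the plane, between a disk K and a square T; the bodies, the grid size and the test point are arbitrary choices. The triangular Jacobian ∂φ1/∂x1 · ∂φ2/∂x2 is estimated by finite differences and compared with |T|/|K|.

import numpy as np

def K_section(x):                     # K = unit disk: vertical section over x
    h = np.sqrt(max(1.0 - x * x, 0.0))
    return -h, h

def T_section(u):                     # T = square [-1, 1]^2
    return -1.0, 1.0

xs = np.linspace(-1.0, 1.0, 4001)
wK = np.array([K_section(x)[1] - K_section(x)[0] for x in xs])   # section lengths of K
wT = np.array([T_section(u)[1] - T_section(u)[0] for u in xs])   # section lengths of T
FK = np.concatenate(([0.0], np.cumsum((wK[1:] + wK[:-1]) / 2 * np.diff(xs))))
FT = np.concatenate(([0.0], np.cumsum((wT[1:] + wT[:-1]) / 2 * np.diff(xs))))
areaK, areaT = FK[-1], FT[-1]

def phi1(x):                          # match the fraction of volume to the left of x
    frac = np.interp(x, xs, FK) / areaK
    return np.interp(frac, FT / areaT, xs)

def phi2(x, y):                       # map the section of K onto the section of T affinely
    a, b = K_section(x)
    c, d = T_section(phi1(x))
    return c + (y - a) * (d - c) / (b - a)

x, y, eps = 0.3, 0.2, 1e-5
J = ((phi1(x + eps) - phi1(x - eps)) / (2 * eps)) * \
    ((phi2(x, y + eps) - phi2(x, y - eps)) / (2 * eps))
print("numerical Jacobian:", J, "  |T|/|K|:", areaT / areaK)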

Using the Knothe map we can give one more proof of the Brunn-Minkowski inequality for convex bodies. We may clearly assume that K and T are open. Consider the Knothe map φ : K → T. It is clear that

(I + φ)(K) ⊆ K + φ(K) = K + T,

and hence,

|K + T| ≥ ∫_{(I+φ)(K)} dx = ∫_K |J(I + φ)(x)| dx
        = ∫_K ∏_{j=1}^{n} ( 1 + (∂φj/∂xj)(x) ) dx
        ≥ ∫_K ( 1 + [ ∏_{j=1}^{n} (∂φj/∂xj)(x) ]^{1/n} )^n dx
        = |K| ( 1 + (|T|/|K|)^{1/n} )^n
        = ( |K|^{1/n} + |T|^{1/n} )^n. □


As an example of application, we give a direct proof of Theorem 1.2.8 which avoids the use of symmetrization.

1.3.2. Let K and T be two convex bodies in Rn. Assume that |PF(K)| = |PF(T)| for some (n−1)-dimensional subspace F of Rn. Then,

|λK + (1−λ)T| ≥ λ|K| + (1−λ)|T|

for every λ ∈ (0, 1).

Proof: We may assume that F = 〈e1, . . . , en−1〉, where {e1, . . . , en} is the standard orthonormal basis of Rn. We may also replace K and T by their interiors. We consider the Knothe map φ : PF(K) → PF(T). Since |PF(K)| = |PF(T)|, we have

J(φ)(x) = 1,  x ∈ PF(K).

For every x ∈ PF(K), we define

K(x) = {y ∈ R : (x, y) ∈ K}.

T(w) is defined in the same way for every w ∈ PF(T). Given x ∈ PF(K), we consider the "ratio of lengths" preserving linear map ψx : K(x) → T(φ(x)). Then,

ψ′x(y) = |T(φ(x))| / |K(x)|

for all x ∈ PF(K) and y ∈ K(x). Finally, we define f : K → T as follows: every z ∈ K can be written in the form z = (x, y), where x = PF(z) ∈ PF(K) and y = z − PF(z) ∈ K(x). We then set

f(x, y) = (φ(x), ψx(y)).

It is not hard to check that f is one-to-one from K onto T , and hence,

λK + (1− λ)T ⊇ (λI + (1− λ)f)(K).

It follows that

|λK + (1−λ)T| ≥ |(λI + (1−λ)f)(K)| = ∫_{(λI+(1−λ)f)(K)} dw = ∫_K J(λI + (1−λ)f)(z) dz.

By the definition of f, we see that

J(λI + (1−λ)f)(z) = J(λI + (1−λ)φ)(x) · ( λ + (1−λ)ψ′x(y) )

for all z = (x, y) ∈ K, which implies

|λK + (1−λ)T| ≥ ∫_{PF(K)} J(λI + (1−λ)φ)(x) ( λ|K(x)| + (1−λ)|T(φ(x))| ) dx.


By the arithmetic-geometric means inequality we see that J(λI + (1−λ)φ)(x) ≥ 1 on PF(K). It follows that

|λK + (1−λ)T| ≥ ∫_{PF(K)} ( λ|K(x)| + (1−λ)|T(φ(x))| ) dx
             = λ|K| + (1−λ) ∫_{φ^{−1}(PF(T))} |T(φ(x))| dx
             = λ|K| + (1−λ) ∫_{PF(T)} |T(u)| du
             = λ|K| + (1−λ)|T|.

This completes the proof. □

1.3b Brenier map

Consider the space P(Rn) of Borel probability measures on Rn as a subset of the unit ball of C∞(Rn)∗ (the dual of the space of infinitely differentiable functions which vanish uniformly at infinity). Let µ, ν ∈ P(Rn). We say that a probability measure γ ∈ P(Rn × Rn) has marginals µ and ν if for every bounded Borel measurable f, g : Rn → R we have

∫_{Rn} f(x) dµ(x) = ∫_{Rn×Rn} f(x) dγ(x, y)

and

∫_{Rn} g(y) dν(y) = ∫_{Rn×Rn} g(y) dγ(x, y).

Definition. Let G ⊂ Rn × Rn. We say that G is cyclically monotone if for every m ≥ 2 and (xi, yi) ∈ G, i ≤ m, we have

〈y1, x2 − x1〉 + 〈y2, x3 − x2〉 + . . . + 〈ym, x1 − xm〉 ≤ 0.

1.3.3. Let µ and ν be Borel probability measures on Rn. There exists a joint probability measure γ on Rn × Rn which has cyclically monotone support and marginals µ, ν.

The proof of the Proposition follows from the next discrete lemma: one then has to show that if γn has cyclically monotone support and marginals µn, νn which converge in the weak-∗ topology to µ and ν, there exists a weak-∗ subsequential limit γ which has cyclically monotone support and µ, ν as its marginals. □

1.3.4. Let xi, yi ∈ Rn, i = 1, . . . , m, and consider the measures

µ = (1/m) ∑_{i=1}^{m} δ_{xi} and ν = (1/m) ∑_{i=1}^{m} δ_{yi}.

There exists a probability measure γ on Rn × Rn which has cyclically monotone support and marginals µ, ν.


Proof: For every permutation σ of {1, . . . , m} we consider the measure

γσ = (1/m) ∑_{i=1}^{m} δ_{(xσ(i), yi)}.

It is clear that γσ has marginals µ and ν for every σ. Let

F(σ) = ∑_{i=1}^{m} 〈xσ(i), yi〉.

We will show that if F(σ) is maximal, then the support {(xσ(i), yi)} of γσ is cyclically monotone.

Without loss of generality we may assume that F(I) is maximal, where I denotes the identity permutation. We want to show that G = {(xi, yi) : i ≤ m} is cyclically monotone. Let k ≤ m, let i1, . . . , ik be distinct indices, and consider the points (xis, yis) ∈ G. If σ is the permutation defined by σ(is) = is+1 for s < k, σ(ik) = i1 and σ(i) = i otherwise, we have

0 ≥ F(σ) − F(I) = ∑_{s=1}^{k} ( 〈xσ(is), yis〉 − 〈xis, yis〉 )
  = 〈xi2 − xi1, yi1〉 + 〈xi3 − xi2, yi2〉 + . . . + 〈xi1 − xik, yik〉.

This proves the lemma. □
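The lemma is easy to test numerically: the Python sketch below (added for illustration, not from the notes; the points and the dimension are arbitrary choices) maximizes F(σ) by brute force over all permutations and then checks the cyclic monotonicity inequality over all cycles of the resulting support.

import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
m, d = 5, 2
X = rng.normal(size=(m, d))
Y = rng.normal(size=(m, d))

# permutation maximizing F(sigma) = sum_i <x_sigma(i), y_i>
best = max(permutations(range(m)),
           key=lambda s: sum(np.dot(X[s[i]], Y[i]) for i in range(m)))
G = [(X[best[i]], Y[i]) for i in range(m)]

def cyclically_monotone(G):
    # check <y_{i1}, x_{i2}-x_{i1}> + ... + <y_{ik}, x_{i1}-x_{ik}> <= 0 for all cycles
    idx = range(len(G))
    for k in range(2, len(G) + 1):
        for cyc in permutations(idx, k):
            s = sum(np.dot(G[cyc[j]][1], G[cyc[(j + 1) % k]][0] - G[cyc[j]][0])
                    for j in range(k))
            if s > 1e-10:
                return False
    return True

print("maximizing support is cyclically monotone:", cyclically_monotone(G))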

Let f : Rn → R be a proper convex function. This means that f takes values in R ∪ {+∞}, it is not identically +∞, and its restriction to any line in Rn is a convex function. The domain of f is the convex set dom(f) = {x : f(x) < +∞}. If A is the interior of dom(f), then f is continuous on A and ∇f exists almost everywhere in A. Moreover, we can modify the values of f on the boundary of A so that f becomes lower semicontinuous (then, we say that f is closed).

The subdifferential of f is the set

∂(f) = {(x, y) ∈ Rn × Rn : f(z) ≥ f(x) + 〈y, z − x〉 for all z ∈ Rn}.

In other words, the subdifferential parametrizes the supporting hyperplanes of f. For each x ∈ A, the set ∂(f)(x) = {y : (x, y) ∈ ∂(f)} is a closed and bounded convex set. It is easy to see that ∇f(x) exists if and only if ∂(f)(x) = {∇f(x)}.

1.3.5. Let G ⊂ Rn × Rn. Then, G is contained in the subdifferential of a proper convex function f : Rn → R if and only if G is cyclically monotone.

Proof: It is easy to check that the subdifferential of a proper convex function is cyclically monotone. Let (xi, yi) ∈ ∂(f), i = 1, . . . , m. Then,

〈y1, x2 − x1〉 ≤ f(x2) − f(x1),
〈y2, x3 − x2〉 ≤ f(x3) − f(x2),
. . .
〈ym, x1 − xm〉 ≤ f(x1) − f(xm),


by the definition of the subdifferential. Adding the inequalities we get

〈y1, x2 − x1〉+ 〈y2, x3 − x2〉+ . . . + 〈ym, x1 − xm〉 ≤ 0.

It follows that every G ⊆ ∂(f) is cyclically monotone.

For the other direction, assume that G is non-empty and cyclically monotone, and fix (x0, y0) ∈ G. We define f : Rn → R by

f(x) = sup{ 〈ym, x − xm〉 + 〈ym−1, xm − xm−1〉 + . . . + 〈y0, x1 − x0〉 },

where the supremum is taken over all m ≥ 1 and (xi, yi) ∈ G, i ≤ m. The function f is convex since it is the supremum of a family of affine functions. Using the cyclic monotonicity of G we easily check that f(x0) = 0. This shows that f is proper. Finally, G is contained in the subdifferential of f: Let (x, y) ∈ G. We will show that

t + 〈z − x, y〉 < f(z)

for every t < f(x) and every z ∈ Rn. This implies that (x, y) ∈ ∂(f). Since t < f(x), there exist (x1, y1), . . . , (xm, ym) ∈ G such that

t < 〈ym, x − xm〉 + · · · + 〈y0, x1 − x0〉.

By the definition of f again,

f(z) ≥ 〈y, z − x〉 + 〈ym, x − xm〉 + · · · + 〈y0, x1 − x0〉 > 〈y, z − x〉 + t.

This completes the proof. 2

Let µ, ν ∈ P(Rn). Let T : Rn → Rn be a measurable function which is defined µ-almost everywhere and satisfies

ν(B) = µ(T−1(B))

for every Borel subset B of Rn. We then say that T pushes forward µ to ν and write Tµ = ν. It is easy to see that Tµ = ν if and only if for every bounded Borel measurable g : Rn → R we have

∫_{Rn} g(y) dν(y) = ∫_{Rn} g(T(x)) dµ(x).

1.3.6. Let µ, ν ∈ P(Rn) and assume that µ is absolutely continuous with respect to Lebesgue measure. Then, there exists a convex function f : Rn → R such that ∇f : Rn → Rn is defined µ-almost everywhere, and (∇f)µ = ν.

Proof: Proposition 1.3.3 shows that there exists a probability measure γ on Rn × Rn which has cyclically monotone support and marginals µ, ν. By Proposition 1.3.5, the support of γ is contained in the subdifferential of a proper convex function f : Rn → R.


Since f is convex and µ is absolutely continuous with respect to Lebesgue measure, f is differentiable µ-almost everywhere. Since supp(γ) ⊂ ∂(f), by the definition of the subdifferential we have y = ∇f(x) for almost all pairs (x, y) with respect to γ. Then, for every bounded Borel measurable g : Rn → R we see that

∫ g(y) dν(y) = ∫ g(y) dγ(x, y) = ∫ g(∇f(x)) dγ(x, y) = ∫ g(∇f(x)) dµ(x),

which shows that (∇f)µ = ν. □

A remarkable property of the Brenier map ∇f is that it is uniquely determined. Assume that µ is the normalized Lebesgue measure on some convex body K and ν is the normalized Lebesgue measure on some other convex body T. Regularity results of Caffarelli show that in this case f may be assumed twice continuously differentiable.

1.3.7. Let K and T be open convex bodies in Rn. There is a convex function f ∈ C2(K) such that ψ = ∇f : K → T is one to one, onto and volume preserving. □

We can now prove the Brunn-Minkowski inequality using ψ. It is clear that the Jacobian J(ψ) = Hess f is a symmetric positive definite matrix for every x ∈ K. Since (I + ψ)(K) ⊆ K + T,

|K + T| ≥ ∫_K |J(I + ψ)(x)| dx = ∫_K det(I + Hess f)(x) dx = ∫_K ∏_{i=1}^{n} (1 + λi(x)) dx,

where λi(x) are the non-negative eigenvalues of Hess f. Moreover, by the ratio of volumes preserving property of ψ, we have ∏_{i=1}^{n} λi(x) = |T|/|K| for every x ∈ K. Therefore, the arithmetic-geometric means inequality gives

|K + T| ≥ ∫_K ( 1 + [ ∏_{i=1}^{n} λi(x) ]^{1/n} )^n dx = ( |K|^{1/n} + |T|^{1/n} )^n.

This proof has the advantage of providing a description of the equality cases: either K or T must be a point, or K must be homothetical to T. □

1.4 Functional forms

A functional form of the Brunn-Minkowski inequality is an integral inequality which reduces to the Brunn-Minkowski inequality by an appropriate choice of the functions involved. Such functional inequalities can be applied in different contexts: as an example, in this section we will see how the Prekopa-Leindler inequality may be applied to yield the logarithmic Sobolev inequality and several important concentration results in Gauss space.


1.4a Prekopa-Leindler inequality

The inequality of Prekopa and Leindler is the following statement.

1.4.1. Let f, g, h : Rn → R+ be measurable functions, and let λ ∈ (0, 1). We assume that f and g are integrable, and that for every x, y ∈ Rn

(4.1)  h(λx + (1−λ)y) ≥ f(x)^λ g(y)^{1−λ}.

Then,

∫_{Rn} h ≥ ( ∫_{Rn} f )^λ ( ∫_{Rn} g )^{1−λ}.

Proof: The proof goes by induction on the dimension n.

(a) n = 1: We may assume that f and g are continuous and strictly positive. We define x, y : (0, 1) → R by the equations

∫_{−∞}^{x(t)} f = t ∫ f and ∫_{−∞}^{y(t)} g = t ∫ g.

In view of our assumptions, x and y are differentiable, and for every t ∈ (0, 1) we have

x′(t) f(x(t)) = ∫ f and y′(t) g(y(t)) = ∫ g.

We now define z : (0, 1) → R by

z(t) = λx(t) + (1−λ)y(t).

Since x and y are strictly increasing, z is also strictly increasing, and the arithmetic-geometric means inequality shows that

z′(t) = λx′(t) + (1−λ)y′(t) ≥ (x′(t))^λ (y′(t))^{1−λ}.

Hence, we can estimate the integral of h by making the change of variables s = z(t), as follows:

∫ h = ∫_0^1 h(z(t)) z′(t) dt
    ≥ ∫_0^1 h(λx(t) + (1−λ)y(t)) (x′(t))^λ (y′(t))^{1−λ} dt
    ≥ ∫_0^1 f^λ(x(t)) g^{1−λ}(y(t)) ( ∫f / f(x(t)) )^λ ( ∫g / g(y(t)) )^{1−λ} dt
    = ( ∫ f )^λ ( ∫ g )^{1−λ}.

(b) Inductive step: We assume that n ≥ 2 and that the Theorem has been proved in all dimensions k ∈ {1, . . . , n − 1}. Let f, g and h be as in the Theorem. For every s ∈ R we define hs : Rn−1 → R+ by setting hs(w) = h(w, s), and fs, gs : Rn−1 → R+ in an analogous way. From (4.1) it follows that if x, y ∈ Rn−1 and s0, s1 ∈ R, then

h_{λs1+(1−λ)s0}(λx + (1−λ)y) ≥ f_{s1}(x)^λ g_{s0}(y)^{1−λ},

and our inductive hypothesis gives

H(λs1 + (1−λ)s0) := ∫_{Rn−1} h_{λs1+(1−λ)s0} ≥ ( ∫_{Rn−1} f_{s1} )^λ ( ∫_{Rn−1} g_{s0} )^{1−λ} =: F(s1)^λ G(s0)^{1−λ}.

Applying the inductive hypothesis once again, this time with n = 1, to the functions F, G and H, we get

∫ h = ∫_R H ≥ ( ∫_R F )^λ ( ∫_R G )^{1−λ} = ( ∫ f )^λ ( ∫ g )^{1−λ}. □

Proof of the Brunn-Minkowski inequality: Let K and T be non-empty compact subsets of Rn, and λ ∈ (0, 1). We define f = χK, g = χT, and h = χ_{λK+(1−λ)T}. It is easily checked that the assumptions of Theorem 1.4.1 are satisfied, therefore

|λK + (1−λ)T| = ∫ h ≥ ( ∫ f )^λ ( ∫ g )^{1−λ} = |K|^λ |T|^{1−λ}. □
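As a numerical illustration of the statement (added here, not from the notes), the sketch below verifies the one-dimensional Prekopa-Leindler inequality on a grid, for an arbitrary choice of f, g and λ, with h taken to be the smallest function allowed by (4.1).

import numpy as np

lam = 0.3
xs = np.linspace(-6.0, 6.0, 1201)
dx = xs[1] - xs[0]
f = np.exp(-xs ** 2)                      # arbitrary choices of f and g
g = np.exp(-np.abs(xs))

# h(z) = sup over z = lam*x + (1-lam)*y of f(x)^lam * g(y)^(1-lam), on the grid
h = np.zeros_like(xs)
for i, x in enumerate(xs):
    y = (xs - lam * x) / (1 - lam)        # y for which lam*x + (1-lam)*y runs over the grid
    h = np.maximum(h, f[i] ** lam * np.interp(y, xs, g, left=0.0, right=0.0) ** (1 - lam))

lhs = h.sum() * dx
rhs = (f.sum() * dx) ** lam * (g.sum() * dx) ** (1 - lam)
print("int h =", lhs, ">=", rhs, "= (int f)^lam (int g)^(1-lam)")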

In order to state our next result, we introduce some notation: If p > 0 and λ ∈ (0, 1), for all x, y > 0 we set

M_p^λ(x, y) = ( λx^p + (1−λ)y^p )^{1/p}.

If x, y ≥ 0 and xy = 0 we set M_p^λ(x, y) = 0. Observe that

lim_{p→0+} M_p^λ(x, y) = x^λ y^{1−λ}.

By Hölder's inequality, if x, y, z, w ≥ 0, a, b, γ > 0 and 1/a + 1/b = 1/γ, then

(4.2)  M_a^λ(x, y) · M_b^λ(z, w) ≥ M_γ^λ(xz, yw).

1.4.2. Let f, g, h : Rn → R+ be measurable functions, and let p > 0, λ ∈ (0, 1). We assume that f and g are integrable, and that for every x, y ∈ Rn

(4.3)  h(λx + (1−λ)y) ≥ M_p^λ(f(x), g(y)).

Then,

∫_{Rn} h ≥ M_{p/(np+1)}^λ ( ∫_{Rn} f, ∫_{Rn} g ).


Proof: We will consider only the case n = 1. As in the proof of the Prekopa-Leindler inequality, we define x, y : (0, 1) → R by the equations

∫_{−∞}^{x(t)} f = t ∫ f and ∫_{−∞}^{y(t)} g = t ∫ g.

Then,

x′(t) f(x(t)) = ∫ f and y′(t) g(y(t)) = ∫ g.

We define z : (0, 1) → R by

z(t) = λx(t) + (1−λ)y(t).

Then, z is strictly increasing, and

z′(t) = λx′(t) + (1−λ)y′(t) = M_1^λ(x′(t), y′(t)).

Hence, using (4.2) and (4.3) we can estimate the integral of h by making the change of variables s = z(t), as follows:

∫ h = ∫_0^1 h(z(t)) z′(t) dt
    ≥ ∫_0^1 h(λx(t) + (1−λ)y(t)) M_1^λ(x′(t), y′(t)) dt
    ≥ ∫_0^1 M_p^λ(f(x(t)), g(y(t))) M_1^λ(x′(t), y′(t)) dt
    ≥ ∫_0^1 M_{p/(p+1)}^λ( f(x(t))x′(t), g(y(t))y′(t) ) dt
    = M_{p/(p+1)}^λ ( ∫ f, ∫ g ).

The inductive step is exactly as in Theorem 1.4.1. 2

Remark: Using Proposition 1.4.2 we may give an alternative proof of Lemma 1.2.3. The claim was the following: K is a two-dimensional convex body and f : K → R+ is an α-concave function. If

(Pf)(y) := ∫_R χK(y, t) f(y, t) dt,

then Pf is (1 + α)-concave.

Proof: Let Fy(t) = χK(y, t) f(y, t), y ∈ PK. Then, for all y, z ∈ PK we have

F_{λy+(1−λ)z}(λt + (1−λ)s) ≥ M_{1/α}^λ(Fy(t), Fz(s))

by the α-concavity of f and the convexity of K. The claim follows from Proposition 1.4.2. □


1.4b The logarithmic Sobolev inequality

Let γn denote the standard Gaussian measure on Rn, with density

γn(x) = (2π)^{−n/2} exp(−|x|²/2).

Maurey observed that the Prekopa-Leindler inequality may be used to give a simple proof of the approximate isoperimetric inequality in the Gauss space (Rn, γn).

1.4.3. Let A be a non-empty Borel subset of Rn. Then,

∫_{Rn} e^{d(x,A)²/4} dγn(x) ≤ 1/γn(A),

where d(x, A) = inf{|x − y| : y ∈ A}. Therefore, if γn(A) = 1/2 we have

1 − γn(At) ≤ 2 exp(−t²/4)

for every t > 0, where At = {y : d(y, A) ≤ t}.

Proof: Consider the functions

f(x) = e^{d(x,A)²/4} γn(x),  g(x) = χA(x) γn(x),  m(x) = γn(x).

For every x ∈ Rn and y ∈ A we see that

(2π)^n f(x) g(y) = e^{d(x,A)²/4} e^{−|x|²/2} e^{−|y|²/2} ≤ exp( |x−y|²/4 − |x|²/2 − |y|²/2 )
                = exp( −|x+y|²/4 )
                = ( exp( −(1/2)|(x+y)/2|² ) )²
                = (2π)^n ( m((x+y)/2) )²,

by the parallelogram law and the fact that d(x, A) ≤ |x − y|. Since g(y) = 0 whenever y ∉ A, this implies that f, g, m satisfy the assumptions of the Prekopa-Leindler inequality with λ = 1/2. Therefore,

( ∫ e^{d(x,A)²/4} γn(dx) ) γn(A) = ( ∫ f ) ( ∫ g ) ≤ ( ∫ m )² = 1.

This proves the first assertion of the theorem. For the second one, observe that if γn(A) = 1/2 then

e^{t²/4} γn({x : d(x, A) ≥ t}) ≤ ∫ e^{d(x,A)²/4} γn(dx) ≤ 1/γn(A) = 2.

This shows that γn((At)^c) ≤ 2 exp(−t²/4). □
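A quick Monte Carlo illustration of the second assertion (added here, not from the notes): for the half-space A = {x : x1 ≤ 0}, which has γn(A) = 1/2 and for which d(x, A) = max(x1, 0), the mass outside At is γn({x1 > t}) and should stay below 2 exp(−t²/4). The dimension and sample size are arbitrary choices.

import numpy as np

rng = np.random.default_rng(2)
n, N = 20, 200_000
X = rng.normal(size=(N, n))
for t in (1.0, 2.0, 3.0):
    outside = np.mean(X[:, 0] > t)                     # gamma_n(x : d(x, A) > t)
    print(f"t = {t}: measured {outside:.5f}   bound 2*exp(-t^2/4) = {2*np.exp(-t*t/4):.5f}")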

Bobkov and Ledoux exploited this idea to deduce the logarithmic Sobolev inequality in a more general setting: Let E = (Rn, ‖ · ‖) be a finite dimensional normed space and let E∗ = (Rn, ‖ · ‖∗) be its dual. Consider a probability measure µ on E with density ρe^{−V(x)}, where V is a convex function defined on some open convex subset Ω of E. Assume further that there is a constant c > 0 with the following property: for any s, t > 0 with s + t = 1 and any x, y ∈ Ω,

tV(x) + sV(y) − V(tx + sy) ≥ (cts/2) ‖x − y‖².

Let f ∈ C∞(Ω). The entropy of f² with respect to µ is defined by

Entµ(f²) = ∫ f² log f² dµ − ∫ f² dµ · log ∫ f² dµ.

The logarithmic Sobolev inequality is the following statement.

1.4.4. For all f ∈ C∞(Ω),

Entµ(f²) ≤ (2/c) ∫ ‖∇f‖²∗ dµ.

Proof: We may set f² = e^g where g ∈ C∞b(Ω), that is, g has compact support in Ω and bounded partial derivatives. Let t, s > 0 with t + s = 1. Consider the functions u(x) = e^{g(x)/t − V(x)}, v(y) = e^{−V(y)} and w(z) = e^{g_t(z) − V(z)}, where

g_t(z) = sup{ g(x) − [tV(x) + sV(y) − V(tx + sy)] : z = tx + sy, x, y ∈ Ω }.

Then, u, v, w : Rn → R+ are measurable and if z = tx + sy we have w(z) ≥ u(x)^t v(y)^s. Indeed,

u^t(x) v^s(y) = exp( g(x) − tV(x) ) exp( −sV(y) )
             = exp( g(x) − tV(x) − sV(y) )
             = exp( g(x) − [tV(x) + sV(y) − V(tx + sy)] − V(tx + sy) )
             ≤ exp( sup_{z=tx+sy} { g(x) − [tV(x) + sV(y) − V(tx + sy)] } − V(tx + sy) )
             = w(z).

An application of the Prekopa-Leindler inequality gives

∫ e^{g_t} dµ = ρ ∫ e^{g_t(z) − V(z)} dz ≥ ( ρ ∫ e^{g(x)/t − V(x)} dx )^t ( ρ ∫ e^{−V(x)} dx )^s = ( ∫ e^{g/t} dµ )^t.

If we develop the right hand side around t = 1 we get

( ∫ e^{g/t} dµ )^t = ∫ e^g dµ + s·Entµ(e^g) + O(s²).

To see this, let h(t) = ( ∫ e^{g/t} dµ )^t = e^{t log ∫ e^{g/t} dµ}. Then,

h(t) = h(1) + h′(1)(t − 1) + O((t − 1)²) = h(1) − h′(1)s + O(s²).


On the other hand,

h′(t) = ( ∫ e^{g/t} dµ )^t ( log ∫ e^{g/t} dµ − ( ∫ g e^{g/t} dµ ) / ( t ∫ e^{g/t} dµ ) ),

and hence h′(1) = −Entµ(e^g).

We now pass to the left hand side. By assumption,

g(x) − [tV(x) + sV(y) − V(tx + sy)] ≤ g(x) − (cts/2)‖x − y‖²

for all x, y ∈ Ω. The definition of g_t shows that

g_t(z) ≤ sup{ g(x) − (cts/2)‖x − y‖² : z = tx + sy, x, y ∈ Ω }.

Now, z = tx + sy can be rewritten in the form x = z + s(z − y)/t. If we set h = z − y and η = s/t, then

g_t(z) ≤ sup_{h∈E} { g(z + ηh) − (cη/2)‖h‖² }.

Observe that

g(z + ηh) = g(z) + η〈∇g(z), h〉 + η²O(‖h‖²),

independently of z. Hence, we can write

g_t(z) ≤ sup_h { g(z) + η〈∇g(z), h〉 + η²O(‖h‖²) − (cη/2)‖h‖² }
       = g(z) + η sup_h { 〈∇g(z), h〉 − (c/2 − ηC)‖h‖² }.

Every h ∈ Rn can be written in the form h = λe where λ ≥ 0 and ‖e‖ = 1. If we set θ = c − 2ηC, we have

g_t(z) ≤ g(z) + η sup_{λ≥0} sup_{‖e‖=1} { λ〈∇g(z), e〉 − ((c/2) − ηC)λ² }
       = g(z) + η sup{ λ‖∇g(z)‖∗ − θλ²/2 : λ ≥ 0 }
       = g(z) + η ‖∇g(z)‖²∗ / (2θ).

Now,

η/(2θ) = η/(2c) + O(η²),  |η| < c/(2C),

and since ‖∇g(z)‖∗ is bounded, we get

(η/(2θ)) ‖∇g(z)‖²∗ = (η/(2c)) ‖∇g(z)‖²∗ + O(η²).

Therefore,

g_t(z) = g(z) + (η/(2c)) ‖∇g(z)‖²∗ + O(η²).


Applying Taylor's theorem to x ↦ e^x we have

e^y = e^x + e^x(y − x) + O((y − x)²).

This shows that

e^{g_t(z)} = e^{g(z) + (η/(2c))‖∇g(z)‖²∗ + O(η²)} = e^{g(z)} + e^{g(z)} (η/(2c)) ‖∇g(z)‖²∗ + O(η²),

and hence,

∫ e^{g_t(z)} dµ = ∫ e^{g(z)} dµ + (η/(2c)) ∫ ‖∇g(z)‖²∗ e^{g(z)} dµ + O(η²).

Going back to the inequality ( ∫ e^{g/t} dµ )^t ≤ ∫ e^{g_t} dµ we get

s·Entµ(e^g) ≤ (η/(2c)) ∫ ‖∇g‖²∗ e^g dµ + O(s²),

or

Entµ(e^g) ≤ ( 1/(2c(1 − s)) ) ∫ ‖∇g‖²∗ e^g dµ + O(s).

Letting s → 0 we arrive at

(4.4)  Entµ(e^g) ≤ (1/(2c)) ∫ ‖∇g‖²∗ e^g dµ.

From f = e^{g/2} we see that 2∇f = e^{g/2}∇g, which implies 4‖∇f‖²∗ = e^g ‖∇g‖²∗. Then, (4.4) gives

Entµ(f²) = Entµ(e^g) ≤ (1/(2c)) ∫ 4‖∇f‖²∗ dµ = (2/c) ∫ ‖∇f‖²∗ dµ. □

Remark: If µ = γn, then V(x) = |x|²/2 and the condition preceding Theorem 1.4.4 is satisfied as an equality with c = 1. Since ‖ · ‖∗ = | · |, the logarithmic Sobolev inequality takes the form

Ent(f²) ≤ 2 ∫ |∇f|² dγn

in Gauss space.

1.4c Concentration of measure in Gauss space

In this subsection we give two applications of the logarithmic Sobolev inequality. The first one is an inequality of Poincaré.

1.4.5. Let γn be the standard Gaussian measure on Rn. If f and |∇f| are integrable, then

∫_{Rn} f² dγn − ( ∫_{Rn} f dγn )² ≤ ∫_{Rn} |∇f|² dγn.


Proof: By homogeneity we may assume that

∫_{Rn} f dγn = 0 and ∫_{Rn} f² dγn = 1.

We apply the logarithmic Sobolev inequality to 1 + εf, where ε > 0 is small: we have

∫_{Rn} (1 + εf)² log(1 + εf) dγn − (1/2) ∫_{Rn} (1 + εf)² log( ∫_{Rn} (1 + εf)² dγn ) dγn ≤ ε² ∫_{Rn} |∇f|² dγn,

which implies

∫_{Rn} ( εf − ε²f²/2 + 2ε²f² + O(ε³) ) dγn − (1/2) ∫_{Rn} (1 + 2εf + ε²f²) log( 1 + 2ε ∫_{Rn} f + ε² ∫_{Rn} f² ) dγn ≤ ε² ∫_{Rn} |∇f|² dγn.

Using the assumptions on f, we arrive at

1 ≤ ∫_{Rn} |∇f|² dγn + O(ε).

Letting ε → 0 we see that

∫_{Rn} f² dγn = 1 ≤ ∫_{Rn} |∇f|² dγn.

This completes the proof. 2
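A Monte Carlo sanity check of the inequality (added here, not from the notes) for one arbitrary smooth function of three Gaussian variables:

import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500_000, 3))
f = np.sin(X[:, 0]) + 0.5 * X[:, 1]               # arbitrary test function
grad_sq = np.cos(X[:, 0]) ** 2 + 0.25             # |grad f|^2, computed by hand
print("Var(f) =", f.var(), " <=  E|grad f|^2 =", grad_sq.mean())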

Our second application is the concentration inequality for Lipschitz functions (this is equivalent to Theorem 1.4.3).

1.4.6. Let f : Rn → R be a Lipschitz function with ‖f‖Lip ≤ 1. Then,

γ( {x : |f(x) − ∫_{Rn} f dγ| > r} ) ≤ 2e^{−r²/2}.

Theorem 1.4.6 is a consequence of the following proposition.

1.4.7. If F : Rn → R is a Lipschitz function with ‖F‖Lip ≤ 1 and ∫_{Rn} F dγn = 0, then, for every λ > 0,

∫_{Rn} e^{λF} dγ ≤ e^{λ²/2}.

Proof: Assume that ∫_{Rn} F = 0 and ‖F‖Lip ≤ 1. We set f² = e^{λF}. Then,

∇f = λe^{λF}∇F / (2f) = λe^{λF}∇F / (2e^{λF/2}) = (λ/2) e^{λF/2} ∇F.

Applying the logarithmic Sobolev inequality to f we have

∫_{Rn} e^{λF} (λF/2) dγ − (1/2) ∫_{Rn} e^{λF} dγ · log( ∫_{Rn} e^{λF} dγ ) ≤ ∫_{Rn} (λ²/4) e^{λF} |∇F|² dγ.

Since |∇F| ≤ 1, we have

∫_{Rn} e^{λF} |∇F|² dγ ≤ ∫_{Rn} e^{λF} dγ.

Therefore, if we define

H(λ) = ∫_{Rn} e^{λF} dγ,

we have

(λ/2) H′(λ) − (1/2) H(λ) log H(λ) ≤ (λ²/4) H(λ).

In other words, for the function k(λ) = log H(λ)/λ we have

k′(λ) = ( λH′(λ) − H(λ) log H(λ) ) / ( λ² H(λ) ) ≤ 1/2.

Note that

lim_{λ→0} log H(λ)/λ = lim_{λ→0} H′(λ)/H(λ) = ∫_{Rn} F dγ = 0.

It follows that

k(λ) ≤ k(0) + (1/2)λ = λ/2,

which implies

∫_{Rn} e^{λF} dγ ≤ e^{λ²/2}. □

Proof of Theorem 1.4.6: Let f be a Lipschitz function with ‖f‖Lip ≤ 1. If we set F = f − ∫ f dγ, then ∫ F dγ = 0 and

|F(x) − F(y)| = |f(x) − f(y)| ≤ ‖f‖Lip |x − y| ≤ |x − y|.

That is, ‖F‖Lip ≤ 1. Proposition 1.4.7 shows that

∫_{Rn} e^{λF} dγ ≤ e^{λ²/2}.

Therefore,

e^{λr} γ({x : F(x) ≥ r}) ≤ ∫_{F≥r} e^{λF} dγ ≤ ∫_{Rn} e^{λF} dγ ≤ e^{λ²/2}.

Thus, for every λ > 0,

γ({x : f(x) − ∫_{Rn} f dγ ≥ r}) ≤ e^{λ²/2 − λr}.

Choosing λ = r we get

γ({x : f(x) − ∫ f dγ ≥ r}) ≤ e^{−r²/2}.

Applying the same reasoning to −f we conclude that

γ({x : |f(x) − ∫ f dγ| ≥ r}) ≤ 2e^{−r²/2}. □


1.5 Applications of the Brunn-Minkowski inequality

The Brunn-Minkowski inequality is of fundamental importance. In this section we present a few of its applications, the ones which are most closely related to the topics we discuss in these notes.

1.5a An inequality of Rogers and Shephard

The difference body of a convex body K is the symmetric convex body

K − K = {x − y : x, y ∈ K}.

From the Brunn-Minkowski inequality it is clear that |K − K| ≥ 2^n |K|, with equality if and only if K has a centre of symmetry. Rogers and Shephard gave a sharp upper bound for the volume of the difference body.

1.5.1. Let K be a convex body in Rn. Then,

|K − K| ≤ \binom{2n}{n} |K|.

Proof: The Brunn-Minkowski inequality enters the proof through the observation that f(x) = |K ∩ (x + K)|^{1/n} is a concave function supported on K − K.

We define a second function g : K − K → R+ as follows: each x ∈ K − K can be written in the form x = rθ, where θ ∈ Sn−1 and 0 ≤ r ≤ ρK−K(θ). [By ρW we denote the radial function of W:

ρW(θ) = max{t > 0 : tθ ∈ W},  θ ∈ Sn−1.]

We then set g(x) = f(0)(1 − r/ρK−K(θ)). By definition, g is linear on the interval [0, ρK−K(θ)θ], it vanishes on the boundary of K − K, and g(0) = f(0). Since f is concave, we see that f ≥ g on K − K. Therefore,

∫_{K−K} |K ∩ (x + K)| dx = ∫_{K−K} f^n(x) dx ≥ ∫_{K−K} g^n(x) dx
  = [f(0)]^n n ωn ∫_{Sn−1} ∫_0^{ρK−K(θ)} r^{n−1} (1 − r/ρ)^n dr σ(dθ)
  = n ωn |K| ∫_{Sn−1} ρ^n_{K−K}(θ) σ(dθ) ∫_0^1 t^{n−1}(1 − t)^n dt
  = |K| |K − K| n Γ(n)Γ(n + 1)/Γ(2n + 1)
  = \binom{2n}{n}^{−1} |K| |K − K|.


On the other hand, Fubini's theorem gives

∫_{K−K} |K ∩ (x + K)| dx = ∫_{Rn} |K ∩ (x + K)| dx
  = ∫_{Rn} ∫_{Rn} χK(y) χ_{x+K}(y) dy dx
  = ∫_{Rn} χK(y) ( ∫_{Rn} χ_{y−K}(x) dx ) dy
  = ∫_K |y − K| dy = |K|².

Combining the above, we conclude the proof. 2

Remark: If we take a closer look at the argument and take into account the equality case in the Brunn-Minkowski inequality, we see that we can have equality in Theorem 1.5.1 if and only if K has the following property: if (rK + x) ∩ (sK + y) ≠ ∅ for some r, s > 0 and x, y ∈ Rn, then

(rK + x) ∩ (sK + y) = tK + w

for some t ≥ 0 and w ∈ Rn. That is, the non-empty intersection of homothetical copies of K is again homothetical to K or a point. Rogers and Shephard proved that this property characterizes the simplex.

The usefulness of Theorem 1.5.1 rests upon the fact that the volume of K − K is not much larger than the volume of K. We have

|K − K|^{1/n} ≤ 4|K|^{1/n},

which means that every convex body (which contains the origin) is contained in a symmetric convex body of more or less the same volume radius. This observation will find many applications in the sequel.
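As a numerical aside (added here, not from the notes), the planar case of the equality statement is easy to check: for a triangle K, the difference body K − K is a hexagon and |K − K| = 6|K| = \binom{4}{2}|K|. The vertices below are an arbitrary choice.

import numpy as np

def cross2(a, b):
    return a[0] * b[1] - a[1] * b[0]

V = [np.array(p) for p in [(0.0, 0.0), (2.0, 0.3), (0.7, 1.5)]]      # triangle K
area_K = 0.5 * abs(cross2(V[1] - V[0], V[2] - V[0]))

# K - K is the convex hull of the pairwise vertex differences (a hexagon around 0)
D = [V[i] - V[j] for i in range(3) for j in range(3) if i != j]
D.sort(key=lambda p: np.arctan2(p[1], p[0]))                          # order around the origin
area_diff = 0.5 * abs(sum(cross2(D[i], D[(i + 1) % 6]) for i in range(6)))

print("|K - K| / |K| =", area_diff / area_K)                          # should print 6.0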

1.5b Hyperplane sections of a convex body

Consider a convex body K in Rn, and fix a direction θ ∈ Sn−1. We define a function f = fK,θ : R → R+ by

f(t) = |K ∩ (θ⊥ + tθ)|.

Here | · | denotes (n−1)-dimensional volume, so f(t) gives the area of the section of K with the hyperplane which is perpendicular to θ, at distance t from θ⊥.

1.5.2. Let K be a convex body in Rn, θ ∈ Sn−1, and

f(t) = |K ∩ (θ⊥ + tθ)|.

Then, f^{1/(n−1)} is concave on its support.


Proof: We may assume that θ = en, and identify θ⊥ with Rn−1. For every t we set

K(t) = {x ∈ Rn−1 : (x, t) ∈ K}.

Let I = {t : K(t) ≠ ∅}. Then, K(t) is convex for every t ∈ I, and if t, s ∈ I and λ ∈ (0, 1), then

λK(t) + (1−λ)K(s) ⊆ K(λt + (1−λ)s).

Applying the Brunn-Minkowski inequality in Rn−1, we get

|K(λt + (1−λ)s)|^{1/(n−1)} ≥ λ|K(t)|^{1/(n−1)} + (1−λ)|K(s)|^{1/(n−1)}.

Since f(t) = |K ∩ (θ⊥ + tθ)| = |K(t)|, the proof is complete. □

Remark: As we saw in Section 1.2, one can prove Theorem 1.5.2 by Schwarz symmetrization and then deduce the Brunn-Minkowski inequality from it.

1.5.3. With the notation of Theorem 1.5.2, f is a log-concave function on its support. □

1.5.4. If K is a symmetric convex body, then ‖f‖∞ = f(0).

Proof: Since K is symmetric, we have K(−t) = −K(t) for every t ∈ I. Thus, f is an even, log-concave function on I. It follows that

f(0) = f( (t + (−t))/2 ) ≥ √f(t) √f(−t) = f(t)

for every t ∈ I. □

If we don't assume symmetry of K, we still have that the hyperplane section passing through the centroid of K is comparable to the maximal section in this direction.

1.5.5. Let K be a convex body with volume 1 and centroid at 0. If θ ∈ Sn−1 and f(t) = |K ∩ (θ⊥ + tθ)|, then

‖f‖∞ ≤ e f(0) = e |K ∩ θ⊥|.

Proof: Let [−a, b] be the support of f, where a, b > 0, and assume that ‖f‖∞ = f(t0). We will show that

f(0) ≥ (n/(n + 1))^{n−1} f(t0) ≥ (1/e) f(t0).

Replacing θ by −θ if needed, we may assume that 0 < t0 ≤ b. We can also assume that f(0) < f(t0) (otherwise, there is nothing to prove). Since K has its centroid at 0, we have

∫_{−a}^{b} t f(t) dt = ∫_K 〈x, θ〉 dx = 0.


Therefore,

∫_{−a}^{0} (−t) f(t) dt = ∫_0^b t f(t) dt ≥ ∫_0^{t0} t f(t) dt.

We set h = f^{1/(n−1)} and consider the linear function g defined by the equations g(0) = h(0) and g(t0) = h(t0). By the Brunn-Minkowski inequality, h is concave, which implies that g ≥ h on [−a, 0] and g ≤ h on [0, t0]. Since g(0) < g(t0), we see that g is strictly increasing and g(−γ) = 0 for some −γ < −a. In other words, g(t) = c(t + γ) for some c > 0. Then,

∫_{−γ}^{0} (−t)[g(t)]^{n−1} dt ≥ ∫_{−a}^{0} (−t)[g(t)]^{n−1} dt ≥ ∫_{−a}^{0} (−t)[h(t)]^{n−1} dt
  ≥ ∫_0^{t0} t[h(t)]^{n−1} dt ≥ ∫_0^{t0} t[g(t)]^{n−1} dt.

This means that

0 ≥ ∫_{−γ}^{t0} t[g(t)]^{n−1} dt = c^{n−1} ∫_{−γ}^{t0} t(t + γ)^{n−1} dt
  = c^{n−1} ∫_0^{γ+t0} t^{n−1}(t − γ) dt = c^{n−1} ( (γ + t0)^{n+1}/(n + 1) − γ(γ + t0)^n/n ),

which implies t0 ≤ γ/n. Therefore,

f(0)/f(t0) = ( g(0)/g(t0) )^{n−1} = ( γ/(γ + t0) )^{n−1} ≥ ( n/(n + 1) )^{n−1}. □

1.5c Borell’s lemma

1.5.6. Let K be a convex body in Rn with volume |K| = 1, and let A be a closed, convex and symmetric set such that |K ∩ A| = δ > 1/2. Then, for every t > 1 we have

|K ∩ (tA)^c| ≤ δ ( (1 − δ)/δ )^{(t+1)/2}.

Proof: We first show that

A^c ⊇ (2/(t + 1)) (tA)^c + ((t − 1)/(t + 1)) A.

If this is not so, there exists a ∈ A which can be written in the form

a = (2/(t + 1)) y + ((t − 1)/(t + 1)) a1

for some a1 ∈ A and y ∉ tA. Then,

(1/t) y = ((t + 1)/(2t)) a + ((t − 1)/(2t)) (−a1) ∈ A,

because of the convexity and symmetry of A. This means that y ∈ tA, which is a contradiction.

Since K is convex, we have

A^c ∩ K ⊇ (2/(t + 1)) [(tA)^c ∩ K] + ((t − 1)/(t + 1)) [A ∩ K].

An application of the Brunn-Minkowski inequality (in its multiplicative form (1.3)) gives

1 − δ = |A^c ∩ K| ≥ |(tA)^c ∩ K|^{2/(t+1)} |A ∩ K|^{(t−1)/(t+1)} = |(tA)^c ∩ K|^{2/(t+1)} δ^{(t−1)/(t+1)}.

This proves the theorem. 2

Borell’s lemma implies concentration of volume in Rn: If A∩K capturesmore than half of the volume of K, then the percentage of K which stays outsidetA, t > 1 decreases exponentially with respect to t as t → ∞, at a rate whichis independent from the body K and the dimension n. We shall give a firstapplication of this fact, showing that linear functionals on convex bodies satisfyinverse Holder inequalities.

1.5.7. There exists an absolute constant c > 0 with the following property. IfK is a convex body in Rn with volume |K| = 1, and θ ∈ Rn, then for everyp > 1, (∫

K

|〈x, θ〉|pdx

) 1p

≤ cp

K

|〈x, θ〉|dx.

Proof: We define I1 =∫

K|〈x, θ〉|dx and set

A = x ∈ Rn : |〈x, θ〉| ≤ 3I1.

Now, A is obviously convex and symmetric, and

I1 =∫

K

|〈x, θ〉|dx ≥∫

K∩Ac

|〈x, θ〉|dx ≥ 3I1|K ∩Ac|,

which shows that|A ∩K| ≥ 2

3.

We write ∫

K

|〈x, θ〉|pdx =∫ ∞

0

ptp−1|x ∈ K : |〈x, θ〉| > t|dt,

and observe that x ∈ K : |〈x, θ〉| > t = K ∩ (Rn\(t/3I1)A). So, Borell’slemma applies for t > 3I1. We write

K

|〈x, θ〉|pdx =∫ 3I1

0

ptp−1|K ∩ (Rn\(t/3I1)A)|dt

+∫ ∞

3I1

ptp−1|K ∩ (Rn\(t/3I1)A)|dt.

Page 36: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

32 · Brunn-Minkowski inequality

The first integral is simply bounded by

∫ 3I1

0

ptp−1dt = (3I1)p,

while, after the substitution t = 3I1s, the second one becomes

(3I1)p

∫ ∞

1

psp−1|K ∩ (Rn\sA)|ds ≤ (3I1)p

∫ ∞

1

psp−12−s/2ds.

A simple computation with the Gamma-function gives∫

K

|〈x, θ〉|pdx ≤ (3I1)p(c1p)p

for some absolute constant c1 > 0, which gives

(∫

K

|〈x, θ〉|pdx

) 1p

≤ cpI1 = cp

K

|〈x, θ〉|dx. 2

1.5d The isoperimetric inequality for convex bodies

The surface area (or Minkowski content) A(K) of a convex body K is definedby

A(K) = limt→0+

|K + tDn| − |K|t

.

It is a well-known fact that among all convex bodies of a given volume theball has minimal surface area. This is an immediate consequence of the Brunn-Minkowski inequality: If K is a convex body in Rn with |K| = |rDn|, then forevery t > 0

|K + tDn|1/n ≥ |K|1/n + t|Dn|1/n = (r + t)|Dn|1/n.

It follows that the surface area A(K) of K satisfies

A(K) = limt→0+

|K + tDn| − |K|t

≥ limt→0+

(r + t)n − rn

t|Dn| = n|Dn| 1n |K|

n−1n

with equality if K = rDn. The question of uniqueness in the equality case ismore delicate.

Note that a stronger isoperimetric statement holds true: If, say, |K| = |Dn|then

|K + tDn| ≥ |Dn + tDn|for every t > 0. In other words, for fixed volume and for every t > 0, thet-extension

Kt = y | d(y, K) ≤ thas minimal volume if K is a ball.

Page 37: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

1.5 Applications · 33

1.5e The spherical isoperimetric inequality

Consider the unit sphere Sn−1 with the geodesic distance ρ and the rotationallyinvariant probability measure σ. For every Borel subset A of Sn−1 and for everyt > 0, we define the t-extension of A:

At = x ∈ Sn−1 : ρ(x, A) ≤ t.

Then, the isoperimetric inequality for the sphere is the following statement:

“Among all Borel subsets A of Sn−1 with given measure α ∈ (0, 1),a spherical cap B(x, r) of radius r > 0 such that σ(B(x, r)) = α hasminimal t-extension for every t > 0.”

This means that if A ⊆ Sn−1 and σ(A) = σ(B(x0, r)) for some x0 ∈ Sn−1

and r > 0, thenσ(At) ≥ σ(B(x0, r + t))

for every t > 0. Since the σ-measure of a cap is easily computable, one can givea lower bound for the measure of the t-extension of any subset of the sphere. Weare mainly interested in the case σ(A) = 1

2 , and a straightforward computationshows the following:

1.5.8. If A is a Borel subset of Sn+1 and σ(A) = 1/2, then

σ(At) ≥ 1−√

π/8 exp(−t2n/2)

for every t > 0. 2

[The constant√

π/8 may be replaced by a sequence of constants an tending to12 as n →∞.]

The spherical isoperimetric inequality can be proved by spherical symmetriza-tion techniques. However, it was recently observed that one can give a verysimple proof of an analogous exponential estimate using the Brunn-Minkowskiinequality. The key point is the following lemma:

1.5.9. Consider the probability measure µ(A) = |A|/|Dn| on the Euclideanunit ball Dn. If A,B are subsets of Dn with µ(A) ≥ α, µ(B) ≥ α, and ifρ(A,B) = inf|a− b| : a ∈ A, b ∈ B = ρ > 0, then

α ≤ exp(−ρ2n/8).

[In other words, if two disjoint subsets of Dn have positive distance ρ, then atleast one of them must have small volume (depending on ρ) when the dimensionn is high.]

Proof: We may assume that A and B are closed. By the Brunn-Minkowskiinequality, µ(A+B

2 ) ≥ α. On the other hand, the parallelogram law shows thatif a ∈ A, b ∈ B then

|a + b|2 = 2|a|2 + 2|b|2 − |a− b|2 ≤ 4− ρ2.

Page 38: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

34 · Brunn-Minkowski inequality

It follows that A+B2 ⊆

√1− ρ2

4 Dn, hence

µ

(A + B

2

)≤

(1− ρ2

4

)n/2

≤ exp(−ρ2n/8). 2

Proof of Theorem 1.5.5 (with weaker constants). Assume that A ⊆ Sn−1

with σ(A) = 1/2. Let t > 0 and define B = (At)c ⊆ Sn−1. We fix λ ∈ (0, 1)and consider the subsets

A =⋃tA : λ ≤ t ≤ 1 and B =

⋃tB : λ ≤ t ≤ 1

of Dn. These are disjoint with distance ' λt. Lemma 1.5.1 shows that µ(B) ≤exp(−cλ2t2n/8), and since µ(B) = (1− λn)σ(B) we obtain

σ(At) ≥ 1− 11− λn

exp(−cλ2t2n/8).

We conclude the proof by choosing λ = 1/2. 2

1.5f Blaschke-Santalo inequality

Let K be a symmetric convex body in Rn. The polar body of K is thesymmetric convex body

K = y ∈ Rn | ∀x ∈ K |〈x, y〉| ≤ 1.

The volume product s(K) of K is the product |K| · |K| of the volumes ofK and its polar. It is easily seen that the volume product is an invariant of theclass of K: If T ∈ GL(n), then

s(K) = |K| · |K| = |TK| · |(TK)| = s(TK).

The Blaschke-Santalo inequality asserts that s(K) is maximized when K is anellipsoid.

1.5.10. Let K be a symmetric convex body in Rn. Then, |K| · |K| ≤ ω2n.

We will give a proof of this fact which is based on Steiner symmetrization.

1.5.11. Let K be a symmetric convex body in Rn and let θ ∈ Sn−1. If K1 =Sθ(K) is the Steiner symmetrization of K in the direction of θ, then

|K||K| ≤ |K1||(K1)|.

Proof: Steiner symmetrization preserves volume. Therefore, it suffices to provethat |K| ≤ |(K1)|.

For simplicity we assume that θ⊥ = Rn−1. It is not hard to see that

Sθ(K) = K1 =

(x,t1 − t2

2) : x ∈ Pθ⊥K, (x, t1) ∈ K, (x, t2) ∈ K

.

Page 39: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

1.5 Applications · 35

As usual, for every A ⊆ Rn we write

A(t) = x ∈ Rn−1 : (x, t) ∈ A.

Claim: For every s ∈ R,

K(s) + K(−s)2

⊆ (K1)(s).

Proof of the claim: Let y1 ∈ K(s) and y2 ∈ K(−s). Then, (y1, s) ∈ K and(y2,−s) ∈ K. We want to show that (y1 + y2)/2 ∈ (K1)(s), or equivalently,(y1+y2

2 , s) ∈ (K1).Let (x, t1−t2

2 ) ∈ K1, where (x, t1) ∈ K and (x, t2) ∈ K. We have to checkthat

〈(x,t1 − t2

2), (

y1 + y2

2, s)〉 ≤ 1.

But this last quantity is equal to

〈x,y1 + y2

2〉+

st1 − st22

=〈(x, t1), (y1, s)〉+ 〈(x, t2), (y2,−s)〉

2≤ 1. 2

By the Brunn-Minkowski inequality,

|(K1)(s)| ≥ |K(s)| 12 |K(−s)| 12 ,

and since |K(s)| = |K(−s)| by the symmetry of K, we get

|(K1)(s)| ≥ |K(s)|for every s ∈ R. Integration shows that

|(K1)| =∫ +∞

−∞|(K1)(s)|ds ≥

∫ +∞

−∞|K(s)|ds = |K|. 2

The Blaschke-Santalo inequality now follows by applying a suitable sequenceof Steiner symmetrizations to K.

1.5g Urysohn’s inequality

Recall the definition of the support function hK and of the width function of aconvex body K. For every u ∈ Sn−1 we define

hK(u) = maxx∈K

〈x, u〉

andwK(u) = hK(u) + hK(−u).

The mean width w(K) of K is the average

w(K) =∫

Sn−1wK(u)σ(du) = 2

Sn−1hK(u)σ(du).

A classical inequality of Urysohn states that for fixed volume, Euclidean ballhas minimal mean width.

Page 40: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

36 · Brunn-Minkowski inequality

1.5.12. Let K be a convex body in Rn. Then,

w(K)2

≥( |K||Dn|

)1/n

.

We will give a simple proof of this fact using Steiner symmetrization. By con-tinuity of the mean width with respect to the Hausdorff metric, it suffices toprove the following.

1.5.13. Let K be a convex body in Rn and let θ ∈ Sn−1. Then,

w(SθK) ≤ w(K).

Proof: We may assume that θ = en and write

Sθ(K) = K1 =

(x,t1 − t2

2) : x ∈ Pθ⊥K, (x, t1) ∈ K, (x, t2) ∈ K

.

Let u = (u1, . . . , un) ∈ Sn−1. We set u′ = (u1, . . . , un−1,−un). Then,

hSθK(u) = max⟨(

x,t1 − t2

2), u

⟩ | (x, ti) ∈ K

=12

max〈(x, t1), u〉 | (x, t1) ∈ K

+12

max〈(x,−t2), u〉 | (x, t2) ∈ K

=12

max〈(x, t1), u〉 | (x, t1) ∈ K

+12

max〈(x, t2), u′〉 | (x, t2) ∈ K

=12hK(u) +

12hK(u′).

Since u and u′ have the same distribution in Sn−1, we see that

w(SθK) = 2∫

Sn−1hSθK(u)σ(du)

≤∫

Sn−1hK(u)σ(du) +

Sn−1hK(u′)σ(du)

= w(K). 2

Page 41: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

Chapter 2

Rearrangement inequalities

2.1 Introduction

The purpose of this Chapter is to present the Brascamp-Lieb inequality and itsreverse form. The Brascamp-Lieb inequality concerns the multilinear operatorΦ : Lp1(R)× · · · × Lpm(R) → R defined by

(1.1) Φ(f1, . . . , fm) =∫

Rn

m∏

j=1

fj(〈uj , x〉) dx,

where m ≥ n, p1, . . . , pm > 1 with 1p1

+ · · ·+ 1pm

= n, and u1, . . . , um ∈ Rn.

Brascamp and Lieb proved that the norm of Φ is the supremum of

(1.2)Φ(g1, . . . , gm)∏m

j=1 ‖gj‖pj

over all centered Gaussian functions g1, . . . , gm, i.e. over all functions of theform

(1.3) gj(t) = e−λjt2 , λj > 0.

Many sharp geometric inequalities that we are going to see later on make heavyuse of this fact.

The original proof of the Brascamp-Lieb inequality was based on a generalrearrangement inequality of Brascamp, Lieb and Luttinger, which is the subjectof Section 2.2. We introduce the symmetric decreasing rearrangement f∗ of aBorel measurable function f vanishing at infinity and show that

(1.4) Φ(f1, . . . , fm) ≤ Φ(f∗1 , . . . , f∗m).

This reduces the problem to the class of radially symmetric and non-increasingfunctions vanishing at infinity. A generalization to functions of several variables

Page 42: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

38 · Rearrangement inequalities

is also obtained by Steiner symmetrization: this was actually needed for theproof of the Brascamp-Lieb inequality.

In Section 2.3 we go back to the Brunn-Minkowski inequality and discuss aninequality of Szarek and Voiculescu about the volume of restricted Minkowskisums. A restricted sum of two subsets A and B of Rn is a set of the form

(1.5) A +Θ B = x + y | (x, y) ∈ Θ,where Θ is a subset of A × B. Szarek and Voiculescu proved the following. Ifρ ∈ (0, 1), A,B ⊂ Rn with

(1.6) |B|1/n = ρ|A|1/n

and Θ is a subset of A×B with

(1.7) |Θ| ≥ (1− c minρ√n, 1)|A| · |B|,then,

(1.8) |A +Θ B|2/n ≥ |A|2/n + |B|2/n.

The proof of this inequality requires the multivariable rearrangement inequal-ity of Brascamp, Lieb and Luttinger. As an application, we give a proof ofShannon’s entropy power inequality through restricted Minkowski sums.

Section 2.4 is devoted to the argument of Brascamp and Lieb. In Section 2.5we discuss the reverse (or dual) form of the Brascamp-Lieb inequality which wasconjectured by Ball in connection with problems of Convex Geometric Analysisand was proved by Barthe. Gaussian functions are the extremal functions inthis inequality as well. Barthe’s argument exploits the transportation of measureidea which was already used in the proof of the Prekopa-Leindler inequality. Itgives a new proof of the Brascamp-Lieb inequality at the same time.

Finally, in Section 2.6 we discuss the multidimensional version of the Brascamp-Lieb inequality and its reverse form. It concerns the multilinear operator Ψ :Lp1(Rn1)× · · · × Lpm(Rnm) → R defined by

(1.9) Ψ(f1, . . . , fm) =∫

Rn

m∏

j=1

fj(Bjx)dx,

where Bj is a linear mapping from Rn into Rnj , pj > 1 and certain naturalconditions on nj , pj and Bj are satisfied. Lieb proved that Gaussian functionsare again the maximizers. We present Barthe’s argument which proves a dualform of this inequality as well. Here, the main technical tool is the Brenier mapwhich was discussed in Section 1.3.

2.2 A general rearrangement inequality

2.2a Symmetric decreasing rearrangement

In this Section we consider Borel measurable functions f : Rn → R+ whichsatisfy the following: for every t > 0, the set x ∈ Rn : f(x) > t has finiteLebesgue measure. We say that such an f vanishes at infinity.

Page 43: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.2 A general rearrangement inequality · 39

Let A be a Borel subset of Rn with finite Lebesgue measure. The symmetricrearrangement A∗ of A is the open ball with centre at the origin, whosevolume is equal to the measure of A. Since we choose A∗ to be open, χA∗ islower semicontinuous.

The symmetric decreasing rearrangement of χA is defined by

(2.1) χ∗A = χA∗ .

If f : Rn → R+ is a Borel measurable function which vanishes at infinity, we set

(2.2) f∗(x) =∫ ∞

0

χ∗f>t(x)dt =∫ ∞

0

χf>t∗(x)dt.

Several basic properties of the symmetric decreasing rearrangement follow.

(i) f∗ is a non-negative function.

(ii) f∗ is radially symmetric and non-increasing:

(2.3) f∗(x) = f∗(y) if |x| = |y|

and

(2.4) f∗(x) ≥ f∗(y) if |x| ≤ |y|.

(iii) f∗ is lower semicontinuous: for every t > 0 we have x : f∗(x) > t = x :f(x) > t∗, which is an open set. In particular, f∗ is measurable.

(iv) The level sets of f∗ are the symmetric rearrangements of the correspondinglevel sets of f : for every t > 0,

(2.5) x : f∗(x) > t = x : f(x) > t∗.

(v) If f and g are non-negative Borel measurable functions on Rn which vanishat infinity, and if f(x) ≤ g(x) for all x ∈ Rn, then f∗(x) ≤ g∗(x) for allx ∈ Rn. This follows from the fact that the level sets of g contain thecorresponding level sets of f .

(vi) We easily check that

(2.6) ‖f‖p = ‖f∗‖p

for all 1 ≤ p ≤ ∞. For example, if 1 ≤ p < ∞, using (2.5) we have

‖f‖pp = p

∫ ∞

0

tp−1|x : f(x) > t|dt

= p

∫ ∞

0

tp−1|x : f(x) > t∗|dt

= p

∫ ∞

0

tp−1|x : f∗(x) > t|dt

= ‖f∗‖pp.

Page 44: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

40 · Rearrangement inequalities

There are many well-known inequalities about rearrangements. The simplestone is the following.

2.2.1. Let f, g : Rn → R+ be two non-negative Borel measurable functionsvanishing at infinity. Then,

(2.7)∫

Rn

f(x)g(x)dx ≤∫

Rn

f∗(x)g∗(x)dx.

Proof: We write∫

Rn

f(x)g(x)dx =∫ ∞

0

∫ ∞

0

Rn

χf>t(x)χg>s(x)dx ds dt

and∫

Rn

f∗(x)g∗(x)dx =∫ ∞

0

∫ ∞

0

Rn

χf∗>t(x)χg∗>s(x)dx ds dt.

Thus, it suffices to show that

(2.7) |f > t ∩ g > s| ≤ |f∗ > t ∩ g∗ > s|

for all t, s > 0. If we set Ft = f > t and Gs = g > s, then (2.7) reduces tothe inequality

(2.8) |Ft ∩Gs| ≤ |F ∗t ∩G∗s|.

This is clearly true: F ∗t and G∗s are centered balls, and hence one of them iscontained in the other. It follows that

|F ∗t ∩G∗s| = min|F ∗t |, |G∗s| = min|Ft|, |Gs| ≥ |Ft ∩Gs|. 2

2.2b Brascamp-Lieb-Luttinger inequality

A rearrangement inequality of Riesz states that if f, g, h ∈ L1(R), then

(2.9)∫ ∫

R2f(x)g(y)h(x− y) dy dx ≤

∫ ∫

R2f∗(x)g∗(y)h∗(x− y) dy dx.

Brascamp, Lieb and Luttinger generalized this inequality as follows.

2.2.2. Assume that f1, . . . , fm : R→ R+ are measurable functions and u1, . . . , um ∈Rn. Then,

(2.10)∫

Rn

m∏

j=1

fj(〈x, uj〉)dx ≤∫

Rn

m∏

j=1

f∗j (〈x, uj〉)dx.

Remark: It is clear that (2.9) is a special case of (2.10): we just set f1 = f ,f2 = g, f3 = h and u1 = (1, 0), u2 = (0, 1), u3 = (1,−1) ∈ R2.

The proof of Theorem 2.2.2 is based on Brunn’s concavity principle.

Page 45: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.2 A general rearrangement inequality · 41

2.2.3. The theorem holds true if each fj is the characteristic function of aclosed bounded interval.

Proof: Assume that Ij = [bj − cj , bj + cj ] and fj = χIj , j = 1, . . . , m. Wedefine fj(·|t) by

fj(x|t) = fj(x + bjt) = χIj−bjt(x),

where Ij − bjt = [(1− t)bj − cj , (1− t)bj + cj ]. We also define G : [0, 1] → R by

(2.11) G(t) =∫

Rn

m∏

j=1

fj(〈uj , x〉|t)dx.

Since fj(x|1) = χ[−cj ,cj ](x) = f∗j (x), the proposition will follow if we show thatG(0) ≤ G(1).

We will prove something stronger: G is increasing. To this end, we considerthe symmetric convex set

(2.12) K = x = (x, xn+1) ∈ Rn+1 : −cj ≤ 〈uj , x〉 − bjxn+1 ≤ cj.

Then,

(2.13) |K ∩ (e⊥n+1 + ten+1)| = |x ∈ Rn : −cj ≤ 〈uj , x〉− tbj ≤ cj| = G(1− t).

By Brunn’s concavity principle, G is a decreasing function of 1− t. 2

2.2.4. The theorem holds true if each fj is the characteristic function of afinite union of closed bounded intervals.

Proof: Assume that fj is the characteristic function of the union of nj disjointclosed intervals: fj = χAj , where Aj =

⋃nj

k=1[bkj−ckj , bkj +ckj ] and bkj +ckj <

bk+1,j − ck+1,j , k = 1, . . . , nj − 1.We will work by induction on N = n1, . . . , nm. We say that M ≺ N

if mj ≤ nj for every j and there exists i for which mi < ni. Proposition2.2.1 covers the case N = 1, 1, . . . , 1 and we assume that N is given and theproposition has been proved for every M ≺ N .

For every t ∈ [0, 1] we define

fkj(x|t) = χ[bkj(1−t)−ckj ,bkj(1−t)+ckj ](x), j = 1, . . . , m, k = 1, . . . , nj .

The new distances between “successive intervals” are

bk+1,j(1− t)− ck+1,j − bkj(1− t)− ckj

and hence, they are decreasing functions of t. We define

fj(x, t) =nj∑

k=1

fkj(x|t), j = 1, . . . , m

and set

τ = mink,j

(1− ckj + ck+1,j

bk+1,j − bkj

).

Page 46: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

42 · Rearrangement inequalities

Thus, τ is the smallest number t in (0, 1] for which two successive intervalsof some fj(·|t) will join. This means that if M(τ) = n1(τ), . . . nm(τ) is them-tuple which corresponds to f1(·|τ), . . . , fm(·|τ), we have M(τ) ≺ N .

We write

Rn

m∏

j=1

fj(〈uj , x〉)dx =∫

Rn

m∏

j=1

nj∑

k=1

fkj(〈uj , x〉)dx

=n1∑

k1=1

. . .

nm∑

km=1

Rn

m∏

j=1

fkjj(〈uj , x〉)dx

where fkj = χ[bkj−ckj ,bkj+ckj ]. The fact that the function G in the proof ofProposition 2.2.1 is increasing shows that

(2.14)∫

Rn

m∏

j=1

fkjj(〈uj , x〉)dx ≤∫

Rn

m∏

j=1

fkjj(〈uj , x〉|τ)dx

If we add and change the order of integration and summation, we obtain

Rn

m∏

j=1

fj(〈uj , x〉)dx ≤∫

Rn

m∏

j=1

(nj∑

k=1

fkj(〈uj , x〉|τ)

)dx

=∫

Rn

m∏

j=1

fj(〈uj , x〉|τ)dx.

Now, we apply the inductive hypothesis for fj(·|τ). We have

Rn

m∏

j=1

fj(〈uj , x〉|τ)dx ≤∫

Rn

m∏

j=1

f∗j (〈uj , x〉|τ)dx,

where f∗j (·|τ) is the symmetric decreasing rearrangement of fj(·|τ). On observ-ing that fj and fj(·|τ) have the same symmetric decreasing rearrangement, weget

Rn

m∏

j=1

fj(〈x, uj〉)dx ≤∫

Rn

m∏

j=1

f∗j (〈x, uj〉)dx. 2

Standard approximation arguments give the following.

2.2.5. The theorem holds true if each fj is the characteristic function of ameasurable set Aj with |Aj | < ∞, j = 1, . . . , m. 2

We can now give the

Proof of Theorem 2.2.2: Assuming that f1, . . . , fm : R→ R+ are measurable

Page 47: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.2 A general rearrangement inequality · 43

functions and using Proposition 2.2.3, we write

Rn

m∏

j=1

fj(〈x, uj〉)dx =∫

Rn

m∏

j=1

∫ ∞

0

χfj>tj(〈uj , x〉)dtjdx

=∫ ∞

0

. . .

∫ ∞

0

Rn

m∏

j=1

χfj>tj(〈uj , x〉)dx

dtm . . . dt1

≤∫ ∞

0

. . .

∫ ∞

0

Rn

m∏

j=1

χ∗fj>tj(〈uj , x〉)dx

dtm . . . dt1

=∫ ∞

0

. . .

∫ ∞

0

Rn

m∏

j=1

χf∗j >tj(〈uj , x〉)dx

dtm . . . dt1

= . . .

=∫

Rn

m∏

j=1

f∗j (〈x, uj〉)dx. 2

2.2c Generalization to functions of several variables

Let f : Rk → R+ be a measurable function vanishing at infinity. We considera (k − 1)-dimensional subspace V of Rk and fix a coordinate system so that e1

is the normal vector to V . The Steiner symmetrization f∗(·|V ) of f withrespect to V is defined as follows: If x2, . . . , xk ∈ R, we consider the functionh(t) = f(t, x2, . . . , xk) and define

f∗(t, x2, . . . , xk|V ) = h∗(t).

It is easily checked that x : f∗(x|V ) > t is the Steiner symmetrization ofx : f(x) > t with respect to V . In particular, p-norms are preserved byf 7→ f∗(·|V ).

2.2.6. Let f1, . . . , fm : Rk → R+ be measurable functions vanishing at infinity,and let A be an m × n matrix. If V is a (k − 1)-dimensional subspace of Rk,then ∫

Rk

. . .

Rk

m∏

j=1

fj

(n∑

i=1

ajixi

)dxn . . . dx1

≤∫

Rk

. . .

Rk

m∏

j=1

f∗j

(n∑

i=1

ajixi|V)

dxn . . . dx1.

Proof: We set xi = (ti, yi), i = 1, . . . , n. Changing the order of integration, weget

Rk

. . .

Rk

m∏

j=1

fj

(n∑

i=1

ajixi

)dxn . . . dx1

Page 48: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

44 · Rearrangement inequalities

=∫

Rk−1. . .

Rk−1

R. . .

R

m∏

j=1

fj

(n∑

i=1

aji(ti, yi)

)dtn . . . dt1

dyn . . . dy1

=∫

Rk−1. . .

Rk−1

Rn

m∏

j=1

gj(〈uj , t〉)dt

dyn . . . dy1,

where uj = (uj1, . . . , ujn), t = (t1, . . . , tn) and, for fixed yi,

gj(〈uj , t〉) = fj

(n∑

i=1

ajixi

).

Observe that

g∗j (〈uj , t〉) = f∗j

(n∑

i=1

ajixi|V)

.

Then, Theorem 2.2.2 completes the proof. 2

Successive symmetrizations with respect to (k− 1)-dimensional subspaces yieldthe symmetric decreasing rearrangement f∗j of each fj , j = 1, . . . , m. Thus, wehave:

2.2.7. Let f1, . . . , fm : Rk → R+ be measurable functions vanishing at infinity,and let A be an m× n matrix. Then,∫

Rk

. . .

Rk

m∏

j=1

fj(n∑

i=1

ajixi)dxn . . . dx1 ≤∫

Rk

. . .

Rk

m∏

j=1

f∗j (n∑

i=1

ajixi)dxn . . . dx1.2

If we set zl = (x1l, . . . , xnl), l = 1, . . . , k and uj = (aj1, . . . , ajn) ∈ Rn, j =1, . . . , m, changing the order of integration we obtain the following reformulationof Theorem 2.2.3.

2.2.8. Let f1, . . . , fm : Rk → R+ be measurable functions vanishing at infinity,and let u1, . . . , um ∈ Rn. Then,

Rn

. . .

Rn

m∏

j=1

fj(〈uj , z1〉, . . . , 〈uj , zk〉)dzk . . . dz1

≤∫

Rn

. . .

Rn

m∏

j=1

f∗j (〈uj , z1〉, . . . , 〈uj , zk〉)dzk . . . dz1. 2

Theorem 2.2.4 plays an important role in the original proof of the Brascamp-Lieb inequality (see Section 2.4).

2.3 Volume of restricted Minkowski sums

In this Section we present an application of the Brascamp-Lieb-Luttinger in-equality: it is a Brunn-Minkowski inequality for “restricted Minkowski sums”.This is then applied to yield a proof of Shannon’s entropy power inequality.

Page 49: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.3 Volume of restricted Minkowski sums · 45

2.3a An inequality for restricted Minkowski sums

Let A and B be non-empty subsets of Rn with finite Lebesgue measure. Supposethat Θ is a non-empty subset of A×B. The restricted sum (with respect toΘ) of A and B is the set

(3.1) A +Θ B = x + y | (x, y) ∈ Θ.Szarek and Voiculescu proved the following.

2.3.1. Let ρ ∈ (0, 1) and let A,B ⊂ Rn with

(3.2) |B|1/n = ρ|A|1/n.

Assume that Θ ⊆ A×B ⊂ R2n satisfies

(3.3) |Θ| ≥ (1− c minρ√n, 1)|A| · |B|.Then,

(3.4) |A +Θ B|2/n ≥ |A|2/n + |B|2/n.

Remarks: (a) In the statement above, c > 0 is an absolute constant (indepen-dent from ρ, n, A, B and Θ).

(b) (3.4) is weaker than the Brunn-Minkowski inequality: the point is that Θis a “large” subset of A×B, but A +Θ B is “smaller” than the Minkowski sumA + B.

(c) For the proof of the theorem we may clearly assume that Θ is “maximalwith respect to A +Θ B”. That is,

(3.5) Θ = (x, y) ∈ A×B | x + y ∈ A +Θ B.(d) We may choose any normalization of volumes so that |B|1/n = ρ|A|1/n. Wewill assume that |A| = |Dn| and |B| = |ρDn|.Example: Let A = Dn, B = ρDn and

(3.6) Θ = (x, y) ∈ Dn × (ρDn) | 〈x, y〉 ≤ 0.One can easily check that

|Θ| = |A| · |B|2

andA +Θ B =

√1 + ρ2Dn.

On the other hand,

(3.7) |A +Θ B|2/n = (1 + ρ2)|Dn| = |A|2/n + |B|2/n.

This shows that the exponent 2/n is “best possible”.

The proof of Theorem 2.3.1 is based on the Brascamp-Lieb-Luttinger rear-rangement inequality (Theorem 2.2.3) and a more careful examination of theabove example, which is done in the following lemma.

Page 50: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

46 · Rearrangement inequalities

2.3.2. Let ρ ∈ (0, 1) and consider the set

(3.8) Θ =(x, y) ∈ Dn × (ρDn) | |x + y| ≤

√1 + ρ2

.

Then,

(3.9) |Θ| ≤ (1− c minρ√n, 1)|Dn| · |ρDn|,

where c > 0 is an absolute constant.

Proof: We set τ = minρ√n, 1/2. We will show that if 1 − τ/n ≤ |x| ≤ 1,then

(3.10)∣∣y ∈ ρDn | |x + y| >

√1 + ρ2

∣∣ ≥ c1|ρDn|,

where c1 > 0 is an absolute constant. This implies (3.9), because

|(Dn × (ρDn))\Θ| ≥

x∈Dn:1−τ/n≤|x|≤1|y ∈ ρDn : |x + y| >

√1 + ρ2| dx

≥ c1|ρDn| · |Dn| (1− (1− τ/n)n)

≥ c2τ |ρDn| · |Dn|.

For the proof of (3.10) we may assume that x = (r, 0, . . . , 0), where 1 − τ/n ≤r ≤ 1. Drawing a picture, the reader will be convinced that the set M := y ∈ρDn | |x + y| ≤

√1 + ρ2 consists of two parts: the first one is

(3.11) M1 := y = (y1, . . . , yn) ∈ ρDn | y1 ≤ s, where s =1− r2

2r.

The volume of M1 can be estimated by

(3.12) |M1| = ωn−1

∫ s

−ρ

(ρ2 − u2)(n−1)/2du ≤ c|ρDn|,

where 0 < c < 1 is an absolute constant. This follows by direct computation,since

(3.13) s = (1− r) ·(

1 +1− r

2r

)≤ τ

n·(1 +

τ

2r

)=

ρ

2√

n(1 + O(1/n)).

The second part of M is the set

(3.14) M2 := y = (y1, . . . , yn) ∈√

1 + ρ2Dn | r + s < y1 ≤√

1 + ρ2.

The volume of M2 is equal to

(3.15) |M2| = ωn−1

∫ t

s

(1 + ρ2 − (r + u)2)(n−1)/2du,

where t =√

1 + ρ2 − r. One can easily check that

(3.16) |M2| ≤ (ρ/√

n)|ρDn|.

Page 51: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.3 Volume of restricted Minkowski sums · 47

It follows that |M | ≤ c1|ρDn| for some absolute constant c1 > 0. 2

Proof of Theorem 2.3.1: We may assume that |A| = |Dn|, |B| = |ρDn| and

Θ = (x, y) ∈ A×B | x + y ∈ A +Θ B.

Let R > 0 be defined by the equation

(3.17) |A +Θ B| = |RDn|.

Applying Theorem 2.2.3 we have

|Θ| = |(x, y) ∈ A×B | x + y ∈ A +Θ B|=

∫ ∫χA(x)χB(y)χA+ΘB(x + y)dxdy

≤∫ ∫

χ∗A(x)χ∗B(y)χ∗A+ΘB(x + y)dxdy

=∫ ∫

χDn(x)χρDn(y)χRDn(x + y)dxdy

= |(x, y) ∈ Dn × (ρDn) | x + y ∈ RDn|.

Let Θ1 = (x, y) ∈ Dn × (ρDn) | |x + y| ≤√

1 + ρ2. Since

|Θ1| ≤ (1− c minρ√n, 1)|Dn| · |ρDn|≤ |Θ|≤ |(x, y) ∈ Dn × (ρDn) | x + y ∈ RDn|,

we get

(3.18) R ≥√

1 + ρ2.

It follows that

|A +Θ B|2/n = R2|Dn| ≥ (1 + ρ2)|Dn| = |A|2/n + |B|2/n,

and the proof is complete. 2

Theorem 2.3.1 may be also stated in the following form.

2.3.3. There exist c, C > 0 with the following property: if 0 < δ < c and A,B

are subsets of Rn with finite Lebesgue measure, then, for any Θ ⊂ A×B with

(3.19) |Θ| ≥ (1− δ)|A| · |B|

one has

(3.20) |A +Θ B|2/n ≥(

1− Cδ

n

) (|A|2/n + |B|2/n

).

Page 52: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

48 · Rearrangement inequalities

Proof: We may normalize volumes so that

|A| = 1 ≥ |B| = ρn

Let c > 0 be the constant in Theorem 2.3.1 (one may clearly assume thatc < 1/2). We distinguish two cases.

(a) If ρ√

n ≥ δ/c, then 1−δ ≥ 1−c minρ√n, 1. Hence, Theorem 2.3.1 appliesand

(3.21) |A +Θ B|2/n ≥ |A|2/n + |B|2/n,

so we have the assertion of the Theorem in a stronger form.

(b) Suppose that ρ√

n < δ/c. We observe that

(1− δ)|A| · |B| ≤ |Θ|=

B

|x ∈ A | x + y ∈ A +Θ B| dx

≤ |B| · |A +Θ B|.

This implies

(3.22) |A +Θ B| ≥ (1− δ) ≥(

1− Cδ

n

)(1 + ρ2)

if C > 0 is chosen large enough (independent from n). This is exactly theinequality

(3.23) |A +Θ B|2/n ≥(

1− Cδ

n

) (|A|2/n + |B|2/n

)

and the proof is complete. 2

An alternative way of stating the result is the following.

2.3.4. If 0 < δ < δ0 and Θ ⊂ A×B satisfies

(3.24) |Θ| ≥ (1− δ)n|A| · |B|,

then,

(3.25) |A +Θ B|2/n ≥ (1− ε)(|A|2/n + |B|2/n

)

for some 0 < ε ≤ c√

δ.

2.3b Shannon’s inequality

Let X be an Rn valued random variable whose distribution is absolutely con-tinuous with respect to the Lebesgue measure λn, and let f be its density. Theentropy of X is the quantity

(3.26) h(X) = −∫

Rn

f log f dλn.

Page 53: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.3 Volume of restricted Minkowski sums · 49

Shannon’s entropy power inequality states that if X,Y are two such randomvariables and if X, Y are independent, then

(3.27) exp(2h(X)/n) + exp(2h(Y )/n) ≤ exp(2h(X + Y )/n).

Shannon’s inequality is one of the most important results of Information Theory.In this subsection we give a proof of this inequality, which is due to Szarekand Voiculescu. The proof is an application of the restricted Minkowski sumsestimate in the form of Corollary 2.3.1.

We consider only the real valued case. Let X1, X2, . . . , XN , . . . be a sequenceof independent copies of X on some probability space (Ω, P ). For every N ∈ N,the joint density FN of X1, X2, . . . , XN with respect to Lebesgue measure is thefunction

(3.28) F (x1, x2, . . . , xN ) = f(x1)f(x2) . . . f(xN ).

2.3.5. There exist αk, βk > 0 with αk → 0 and βk → 0, such that

(3.29) P ((X1, X2, . . . , XN ) ∈ VN (X)) > 1− βN ,

where(3.30)VN (X) = x ∈ RN | exp(N(−h(X)− αN )) ≤ FN (x) ≤ exp(N(−h(X) + αN ).

Proof: Consider the function

(3.31)log FN

N(x) :=

1N

N∑

j=1

log f(xj)

as a random variable on R∞ equipped with the product measure P := ⊗∞j=1(fdλ1),Then,

log FN

N=

1N

N∑

j=1

log f(Xj).

By the law of large numbers, log FN/N converges in probability to

(3.32) E(log f(X)) =∫

f log f dλ1 = −h(X).

Therefore, there exists a sequence (αN ) of positive reals with limN αN = 0, suchthat

P

−h(X)− αN ≤ 1

N

N∑

j=1

log f(Xj) ≤ −h(X) + αN

→ 1

as N →∞. This is equivalent to the statement of the Proposition. 2

2.3.6. In the notation of Proposition 2.3.1,

(3.33) (1− βN )eN(h(X)−αN ) ≤ |VN (X)| ≤ eN(h(X)+αN ).

Page 54: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

50 · Rearrangement inequalities

Proof: We just observe that

(3.34) P ((X1, X2, . . . , XN ) ∈ VN (X)) =∫

VN (X)

FN (x) dx > 1− βN ,

and use the definition of VN (X). 2

Letting N →∞ we get

(3.35) limN→∞

log |VN (X)|N

= h(X)

and

(3.36) limN→∞

|VN (X)|2/N = exp(2h(X)).

2.3.7. Let X, Y be independent random variables with densities f, g with respectto λ1. Then,

exp(2h(X)) + exp(2h(Y )) ≤ exp(2h(X + Y )).

Proof: Let (Xk) and (Yk) be sequences of jointly independent copies of X andY respectively. We define AN = VN (X), BN = VN (Y ) and

(3.37) ΘN = (x, y) ∈ AN ×BN | x + y ∈ VN (X + Y ).

We may assume that Proposition 2.3.1 holds true for AN , BN and CN :=VN (X + Y ) simultaneously (i.e. with the same sequences (αk), (βk)).

If XN = (X1, . . . , XN ) and YN = (Y1, . . . , YN ), then

1− βN ≤ P (XN + YN ∈ CN )

≤ P (XN ∈ AN , YN ∈ BN ,XN + YN ∈ CN )

+P ((XN ,YN ) /∈ AN ×BN )

= P ((XN ,YN ) ∈ ΘN ) + 1− P (XN ∈ AN ) · P (YN ∈ BN )

≤ P ((XN ,YN ) ∈ ΘN ) + 1− (1− βN )2.

Therefore, if FN , GN are the densities of XN ,YN with respect to λN , we have

(3.38)∫

ΘN

FNGNdλN = P ((XN ,YN ) ∈ ΘN ) ≥ 1− 3βN + β2N .

By the definition of AN , BN we have

(3.39)∫

ΘN

FNGNdλN ≤ |ΘN | · eN(−h(X)+αN )eN(−h(Y )+αN ),

while Corollary 2.3.2 shows that

(3.40) |AN | · |BN | ≤ eN(h(X)+αN )eN(h(Y )+αN ).

From (3.38), (3.39) and (3.40) we see that

(3.41)|ΘN |

|AN | · |BN | ≥ (1− 3βN + β2N )e−4NαN ,

Page 55: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.4 Brascamp-Lieb inequality · 51

which shows that

(3.42) limN→∞

( |ΘN ||AN | · |BN |

)1/N

= 1.

Now, Corollary 2.3.1 shows that

(3.43) |CN |2/N = |AN +ΘN BN |2/N ≥ (1− εN )(|AN |2/N + |BN |2/N

)

where εN → 0+ as N →∞. Taking into account (3.36) we conclude the proof.2

Remark: The same proof gives the inequality

exp(2h(X)/n) + exp(2h(Y )/n) ≤ exp(2h(X + Y )/n

if X and Y are Rn valued. Then, VN (X) and VN (Y ) are subsets of RNn, wehave |VN (X)|2/(Nn) → exp(2h(X)/n), etc.

2.4 Brascamp-Lieb inequality

Let m ≥ n, p1, . . . , pm ≥ 1 with 1p1

+ · · · + 1pm

= n, and u1, . . . , um ∈ Rn.Consider the multilinear operator Φ : Lp1(R)× · · · × Lpm(R) → R defined by

(4.1) Φ(f1, . . . , fm) =∫

Rn

m∏

j=1

fj(〈uj , x〉) dx.

Brascamp and Lieb proved that the norm of Φ is the supremum of

(4.2)Φ(g1, . . . , gm)∏m

j=1 ‖gj‖pj

over all centered Gaussian functions g1, . . . , gm. The proof of this fact is basedon the Brascamp-Lieb-Luttinger rearrangement inequality (Theorem 2.2.4).

2.4.1. Let m ≥ n, and p1, . . . , pm ≥ 1 with 1p1

+ · · · + 1pm

= n. Givenu1, . . . , um ∈ Rn, consider the operator Φ (depending on the uj’s). Then,

(4.3)Φ(f1, . . . , fm)∏m

j=1 ‖fj‖pj

≤ D := sup

Φ(g1, . . . , gm)∏mj=1 ‖gj‖pj

: gj(t) = e−λjt2 , λj > 0

for all fj ∈ Lpj (R).

Remarks: (a) We may clearly assume that the fj ’s are non-negative.

(b) We saw that the symmetric decreasing rearrangement preserves p-norms andincreases the left hand side in Theorem 2.4.1. Therefore, we may assume thateach fj is even and decreasing on R+. Such a function can be approximated byfunctions of the form

k∑

l=1

alχ[−γl,γl]

where al > 0 and γl are decreasing. Hence, it suffices to prove the theorem forfunctions of this type.

Page 56: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

52 · Rearrangement inequalities

2.4.2. Let ψ1, . . . , ψs be non-negative functions in Lp(Rk), p ≥ 1. Then,

(4.4)∥∥∥∥

s∑

j=1

ψj

∥∥∥∥p

≥ s−1/qs∑

j=1

‖ψj‖p,

where 1p + 1

q = 1.

Proof: Since each ψj is non-negative and p > 1, for every x ∈ Rk we have

s∑

j=1

ψj(x)p ≤

s∑

j=1

ψj(x)

p

.

It follows that

s∑

j=1

‖ψj‖p

p

≤ sp/qs∑

j=1

‖ψj‖pp = sp/q

Rk

s∑

j=1

ψpj (x)dx

≤ sp/q

Rk

s∑

j=1

ψj(x)

p

dx

= sp/q

∥∥∥∥s∑

j=1

ψj

∥∥∥∥p

p

. 2

2.4.3. Let p ≥ 1, and let ηa be the characteristic function of the ball x ∈ Rk :|x| ≤ a. We define

φa(x) = exp((

1− |x|2a2

)k

2p

).

Then, ηa(x) ≤ φa(x) for every x ∈ Rk and

(4.5) ‖φa‖p ≤ ‖ηa‖p(3√

k)1/p.

Proof: The inequality ηa(x) ≤ φa(x) is obvious since φa(x) ≥ 1 when |x| ≤ a.We have ‖ηa‖p

p = ωkak and

‖φa‖pp = kωk

∫ ∞

0

tk−1e(1−t2/a2) k2 dt

= kωkek/2

∫ ∞

0

tk−1e−t2k2a2 dt

= ωkek/2ak

(2k

)k/2

Γ(

k

2+ 1

)

= ‖ηa‖ppe

k/2

(2k

)k/2

Γ(

k

2+ 1

).

Using the precise form of Stirling’s formula we obtain(‖φa‖p

‖ηa‖p

)p

≤ 3√

k. 2

Page 57: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.4 Brascamp-Lieb inequality · 53

Remark: Observe that φa is a product of centered Gaussian functions:

φa(x1, . . . , xk) = ek/(2p)k∏

i=1

exp(− kx2

i

2a2p

).

Proof of the Brascamp-Lieb inequality: We may assume that each fj is afunction of the form

fj(t) =A∑

l=1

aljχ[−γlj ,γlj ](t)

where alj ≥ 0, γlj ≥ γl+1,j and A is a positive integer (the same for all fj).

For every j ≤ m we consider the function Gj : Rk → R+ defined by

Gj(t1, . . . , tk) =k∏

i=1

fj(ti),

and the symmetric decreasing rearrangement Fj of Gj . Theorem 2.2.4 andFubini’s theorem show that

Rn

m∏

j=1

fj(〈uj , x〉)dx

k

=∫

Rnk

m∏

j=1

Gj(〈uj , x1〉, . . . , 〈uj , xk〉)dxk . . . dx1

≤∫

Rnk

m∏

j=1

Fj(〈uj , x1〉, . . . , 〈uj , xk〉)dxk . . . dx1.

Since fj takes only A values, Gj , and so Fj as well, takes at most (k+1)A values(if λ1, . . . , λA are the values of fj , then the values of Gj are λr1

1 . . . λrA

A where0 ≤ rl ≤ k and

∑rl = k: so, we estimate by the number of distinct A-tuples of

elements of 0, 1, . . . , k). It follows that each Fj is a function of the form

(4.6) Fj =(k+1)A∑

l=1

Hljηalj,

where Hlj ≥ 0 and alj is decreasing. From Lemma 2.4.1,

(4.7) ‖fj‖kpj

= ‖Fj‖pj =∥∥∥∥

(k+1)A∑

l=1

Hljηalj

∥∥∥∥pj

≥ (k + 1)−A/qj

(k+1)A∑

l=1

Hlj‖ηalj‖pj ,

where qj is the conjugate exponent of pj . We write

(Φ(f1, . . . , fm)∏m

j=1 ‖fj‖pj

)k

=

∫Rnk

∏mj=1 Gj(〈uj , x1〉, . . . , 〈uj , xk〉)dxk . . . dx1∏m

j=1 ‖Gj‖pj

≤∫Rnk

∏mj=1 Fj(〈uj , x1〉, . . . , 〈uj , xk〉)dxk . . . dx1∏m

j=1 ‖Fj‖pj

.

Page 58: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

54 · Rearrangement inequalities

From (4.6) and (4.7) this is bounded by

J :=

∑l1

. . .∑

lmHl1,1 . . . Hlm,m

∫Rnk

∏mj=1 ηalj ,j

(〈uj , x1〉, . . . , 〈uj , xk〉)dxk . . . dx1

(k + 1)−A∑ 1

qj∑

l1. . .

∑lm

Hl1,1 . . . Hlm,m

∏mj=1 ‖ηalj ,j

‖pj

.

Let us consider an m-tuple (ηb1 , . . . , ηbm) and the functions (φb1 , . . . , φbm) as inLemma 2.4.2. We have ηbj ≤ φbj , and hence

(4.8)∫

Rnk

m∏

j=1

ηbj (〈uj , x1〉, . . . , 〈uj , xk〉) ≤∫

Rnk

m∏

j=1

φbj (〈uj , x1〉, . . . , 〈uj , xk〉).

Also,

(4.9)m∏

j=1

‖φbj‖pj

≤ (3√

k)∑ 1

pj

m∏

j=1

‖ηbj‖pj

= (3√

k)nm∏

j=1

‖ηbj‖pj

.

Each φbjis a product of k identical one dimensional centered Gaussians φ

(l)bj

evaluated at the points 〈uj , xl〉 respectively. Therefore, the right hand side of(4.8) is the k-th power of the integrals on Rn of the product of m centeredGaussians evaluated at 〈uj , x〉, j = 1, . . . ,m. Since

(4.10) ‖φbj‖pj = ‖k∏

l=1

φ(l)bj‖pj =

k∏

l=1

‖φ(l)bj‖pj ,

it follows that

(4.11)

∫Rnk

∏mj=1 φbj (〈uj , x1〉, . . . , 〈uj , xk〉)∏m

j=1 ‖φbj‖pj

≤ Dk.

Combining the above, we obtain(

Φ(f1, . . . , fm)∏mj=1 ‖fj‖pj

)k

≤ J ≤ (k + 1)A∑m

j=11

qj (3√

k)nDk

= (k + 1)(m−n)A(3√

k)nDk.

Then,

(4.12)Φ(f1, . . . , fm)∏m

j=1 ‖fj‖pj

≤ (k + 1)(m−n)A

k (3√

k)nk D,

and letting k →∞, we conclude the proof. 2

2.5 Reverse Brascamp-Lieb inequality

Recall the statement of the Brascamp-Lieb inequality for non-negative functions:If m ≥ n, u1, . . . , um ∈ Rn and p1, . . . , pm ≥ 1 with 1

p1+ · · ·+ 1

pm= n, then

(5.1)∫

Rn

m∏

j=1

fj(〈x, uj〉)dx ≤ D ·m∏

j=1

‖fj‖pj

Page 59: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.5 Reverse Brascamp-Lieb inequality · 55

for all non-negative fj ∈ Lpj (R), where

(5.2) D = sup∫

Rn

∏mj=1 gj(〈x, uj〉)dx∏m

j=1 ‖gj‖pj

: gj(t) = e−λjt2 , λj > 0

.

If we set cj = 1/pj and replace fj by fcj

j we obtain the following:

2.5.1. If m ≥ n, u1, . . . , um ∈ Rn and c1, . . . , cm > 0 with c1 + · · · + cm = n,then

(5.3)∫

Rn

m∏

j=1

fcj

j (〈x, uj〉)dx ≤ D ·m∏

j=1

(∫

Rfj

)cj

for all integrable functions fj : R→ R+. 2

The value of the constant D is described by the following proposition.

2.5.2. Let m ≥ n, c1, . . . , cm > 0 with c1 + · · ·+ cm = n, and u1, . . . , um ∈ Rn.Then, D = 1/

√F where

(5.4) F = infdet

(∑mj=1 cjλjuj ⊗ uj

)∏m

j=1 λcj

j

| λj > 0

.

Proof: Recall that (u⊗ u)(x) = 〈x, u〉u. Let gj(t) = exp(−λjt2), j = 1, . . . , m,

where λj are positive reals. Then,

Rn

m∏

j=1

gcj

j (〈x, uj〉)dx =∫

Rn

exp

m∑

j=1

cjλj〈x, uj〉2 dx

=∫

Rn

exp

−⟨( m∑

j=1

cjλjuj ⊗ uj

)(x), x

⟩ dx

=πn/2

√det

(∑mj=1 cjλjuj ⊗ uj

) .

On the other hand,m∏

j=1

(∫

Rgj

)cj

=m∏

j=1

(∫

Rexp(−λjt

2)dt

)cj

=m∏

j=1

( √π√λj

)cj

=πn/2

√∏mj=1 λ

cj

j

since c1 + · · ·+ cm = n. It follows that

1D2

= inf ( ∏m

j=1

(∫R gj

)cj

∫Rn

∏mj=1 g

cj

j (〈x, uj〉)dx

)2

| gj(t) = e−λjt2 , λj > 0

= infdet

(∑mj=1 cjλjuj ⊗ uj

)∏m

j=1 λcj

j

| λj > 0

. 2

Page 60: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

56 · Rearrangement inequalities

In the sequel we do not assume the Brascamp-Lieb inequality. The computationin Proposition 2.5.1 shows the following.

2.5.3. Let m ≥ n, c1, . . . , cm > 0 with c1 + · · ·+ cm = n, and u1, . . . , um ∈ Rn.If f1, . . . , fm : R→ R+ are measurable functions, let

(5.5) I(f1, . . . , fm) =∫

Rn

m∏

j=1

fcj

j (〈x, uj〉)dx.

Then,

(5.6) sup

I(f1, . . . , fm) |∫

Rfj = 1 , j = 1, . . . ,m

≥ 1√

F.

Proof: Obvious, by homogenuity. The Brascamp-Lieb inequality in the form ofTheorem 2.5.1 asserts that there is equality in (5.6). Here, we just use centeredGaussians in order to check the easy part. 2

Barthe proved the following reverse form of Theorem 2.5.1 (this was inspiredby applications in Convex Geometric Analysis and was conjectured by Ball; wewill discuss the applications in detail in subsequent chapters of these notes).

2.5.4. Let m ≥ n, c1, . . . , cm > 0 with c1 + · · ·+ cm = n, and u1, . . . , um ∈ Rn.If h1, . . . , hm : R→ R+ are measurable functions, we set

(5.7) K(h1, . . . , hm) =∫

Rn

sup m∏

j=1

hcj

j (θj) | θj ∈ R , x =m∑

j=1

θjcjuj

dx.

Then,

(5.8) inf

K(h1, . . . , hm) |∫

Rhj = 1 , j = 1, . . . , m

=√

F.

Testing centered Gaussian functions as in Proposition 2.5.1, we get the easypart of this reverse Brascamp-Lieb inequality.

2.5.5. With the notation of Theorem 2.5.2 we have

(5.9) inf

K(h1, . . . , hm) |∫

Rhj = 1 , j = 1, . . . , m

≤√

F.

Proof: Let λj > 0, j = 1, . . . , m and consider the functions hj(t) = exp(−t2/λj).Then, the function

m(x) := sup m∏

j=1

hcj

j (θj) | x =m∑

j=1

θjcjuj

is given by

m(x) = exp

− inf

m∑

j=1

cj

λjθ2

j | x =m∑

j=1

θjcjuj

.

Page 61: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.5 Reverse Brascamp-Lieb inequality · 57

Define

(5.10) ‖x‖2 =m∑

j=1

cjλj〈x, uj〉2 = 〈Ax, x〉

where A is the symmetric positive-definite operator A :=∑m

j=1 cjλjuj ⊗ uj . Itis not hard to check that the dual norm is exactly

(5.11) ‖y‖2∗ = inf m∑

j=1

cj

λjθ2

j | y =m∑

j=1

θjcjuj

.

Therefore,

(5.12) ‖y‖2∗ = 〈By, y〉,

where B = A−1. It follows that

(5.13)∫

Rn

m(x)dx =πn/2

√detB

= πn/2√

detA.

On the other hand,

(5.14)m∏

j=1

(∫

Rexp(−t2/λj)dt

)cj

= πn/2m∏

j=1

λcj/2j .

This shows that

inf

K2(h1, . . . , hm) |∫

Rhj = 1

≤ inf

det( ∑m

j=1 cjλjuj ⊗ uj

)∏m

j=1 λcj

j

| λj > 0

= F

and the proof is complete. 2

The main step in Barthe’s argument is the following proposition.

2.5.6. Let f1, . . . , fm : R → R+ and h1, . . . , hm : R → R+ be integrablefunctions with

Rfj(t)dt =

Rhj(t)dt = 1, j = 1, . . . ,m.

Then,

(5.15) F · I(f1, . . . , fm) ≤ K(h1, . . . , hm).

Proof: We may assume that fj , hj are continuous and strictly positive. Wemay also assume that 0 < F < +∞ (F is not degenerated). We use thetransportation of measure idea that was used for the proof of the Prekopa-Leindler inequality: For every j = 1, . . . , m we define Tj : R → R by theequation

(5.16)∫ Tj(t)

−∞hj(s)ds =

∫ t

−∞fj(s)ds.

Page 62: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

58 · Rearrangement inequalities

Then, each Tj is strictly increasing, 1-1 and onto, and

(5.17) T ′j(t)hj(Tj(t)) = fj(t), t ∈ R.

We now define W : Rn → Rn by

(5.18) W (y) =m∑

j=1

cjTj(〈y, uj〉)uj .

A simple computation shows that

(5.19) J(W )(y) =m∑

j=1

cjT′j(〈y, uj〉)uj ⊗ uj .

This shows that〈[J(W )(y)](v), v〉 > 0 if v 6= 0

and hence, W is injective. Consider the function

m(x) = sup m∏

j=1

hcj

j (θj) | x =m∑

j=1

θjcjuj

.

Then, (5.18) shows that

(5.20) m(W (y)) ≥m∏

j=1

hcj

j (Tj(〈y, uj〉))

for every y ∈ Rn. It follows that∫

Rn

m(x)dx ≥∫

W (Rn)

m(x)dx

=∫

Rn

m(W (y)) · |J(W )(y)| dy

≥∫

Rn

m∏

j=1

hcj

j (Tj(〈y, uj〉)) det

m∑

j=1

cjT′j(〈y, uj〉)uj ⊗ uj

dy.

By the definition of F we have

(5.21) det

m∑

j=1

cjT′j(〈y, uj〉)uj ⊗ uj

≥ F ·

m∏

j=1

(T ′j(〈y, uj〉)

)cj.

Therefore, taking (5.17) into account we have

Rn

m(x)dx ≥ F ·∫

Rn

m∏

j=1

hcj

j (Tj(〈y, uj〉)) ·m∏

j=1

(T ′j(〈y, uj〉)

)cjdy

= F ·∫

Rn

m∏

j=1

fcj

j (〈y, uj〉)dy

= F · I(f1, . . . , fm).

Page 63: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.5 Reverse Brascamp-Lieb inequality · 59

In other words, F · I(f1, . . . , fm) ≤ K(h1, . . . , hm). 2

We can now prove simultaneously the Brascamp-Lieb inequality (Theorem2.5.1) and the reverse Brascamp-Lieb inequality (Theorem 2.5.2).

2.5.7. Let m ≥ n, c1, . . . , cm > 0 with c1 + · · ·+ cm = n, and u1, . . . , um ∈ Rn.If f1, . . . , fm : R→ R+ and h1, . . . , hm : R→ R+ are integrable functions with

Rfj(t)dt =

Rhj(t)dt = 1, j = 1, . . . ,m,

then,

(5.22) I(f1, . . . , fm) ≤ 1√F

and

(5.23) K(h1, . . . , hm) ≥ F.

Proof: Combining Lemma 2.5.1, Proposition 2.5.2 and Theorem 2.5.3, we have

1√F

≤ sup

I(f1, . . . , fm) |∫

Rfj = 1

≤ 1F· inf

K(h1, . . . , hm) |

Rhj = 1

≤ 1√F

.

We must have equality everywhere, and this proves the theorem. 2

We close this Section with a definition and a remark which will be veryimportant for the applications to Convex Geometry.

Definition: Let m ≥ n. We say that the vectors u1, . . . , um ∈ Sn−1 arein isotropic position with weights c1, . . . , cm, if the identity operator I isdecomposed in the form

(5.24) I =m∑

j=1

cjuj ⊗ uj .

Taking traces in both sides we see that

(5.25) c1 + · · ·+ cm = n

is a necessary condition for (5.24). It is easy to check that the isotropic condition(5.24) is equivalent to the following: For every x ∈ Rn,

(5.26)m∑

j=1

cj〈x, uj〉2 = |x|2.

Page 64: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

60 · Rearrangement inequalities

2.5.8. Assume that the vectors u1, . . . , um ∈ Sn−1 are in isotropic positionwith weights c1, . . . , cm > 0. Then,

(5.27) F = F (uj, cj) = 1,

where F is the constant in the Brascamp-Lieb inequality and its reverse for thisset of uj’s and cj’s.

Proof: Let λj > 0, j = 1, . . . , m. For every I ⊆ 1, . . . ,m with cardinality|I| = n we define

(5.28) λI =∏

i∈I

λj and UI =(det

(√cjuj : j ∈ I

) )2.

By the Cauchy-Binet formula we have(5.29)

det

m∑

j=1

cjλjuj ⊗ uj

= det

m∑

j=1

λj(√

cjuj)⊗ (√

cjuj)

=

|I|=n

λIUI .

Setting λj = 1 in (5.29) and using the isotropic condition, we see that

(5.30)∑

|I|=n

UI = 1.

By the arithmetic-geometric means inequality,

(5.31)∑

|I|=n

λIUI ≥∏

|I|=n

λUI

I =m∏

j=1

λ∑I:j∈I UI

j .

Applying the Cauchy-Binet formula again, we have

I:j∈IUI =

|I|=n

UI −∑

I:j /∈IUI

= 1− det(I − (

√cjuj)⊗ (

√cjuj)

)

= 1− (1− cj |uj |2)= cj .

Going back to (5.29) and (5.30) we see that

(5.32) det

m∑

j=1

cjλjuj ⊗ uj

m∏

j=1

λcj

j .

Since the λj ’s were arbitrary, we conclude that F ≥ 1. The choice λj = 1 givesequality in (5.32). Therefore, F = 1. 2

Page 65: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.6 Multidimensional versions · 61

2.6 Multidimensional versions

Let S+(Rk) be the set of k × k symmetric, positive definite matrices. If A ∈S+(Rk), we write GA for the centered Gaussian function GA : Rk → R definedby

(6.1) GA(x) = exp(−〈Ax, x〉).

Finally, we write L+1 (Rk) for the class of integrable non-negative functions f :

Rk → R. Our purpose in this section is to prove the multidimensional versionof the Brascamp-Lieb inequality and its reverse. The setting is the following:

Let m ≥ n. Suppose we are given real numbers c1, . . . , cm > 0 and integersn1, . . . , nm less than or equal to n, such that

(6.2)m∑

j=1

cjnj = n.

For each j = 1, . . . , m, we are also given a linear map Bj : Rn → Rnj which isonto. We also assume that

(6.3)m⋂

j=1

Ker(Bj) = 0.

We define two operators I,K : L+1 (Rn1)× · · · × L+

1 (Rnm) → R by

(6.4) I(f1, . . . , fm) =∫

Rn

m∏

j=1

fcj

j (Bjx)dx

and

(6.5) K(h1, . . . , hm) =∫ ∗

Rm

m(x)dx,

where

(6.6) m(x) = sup m∏

j=1

hcj

j (yj) | yj ∈ Rnj andm∑

j=1

cjB∗j yj = x

.

Let E be the largest constant for which

(6.7) K(h1, . . . , hm) ≥ E ·m∏

j=1

(∫

Rnj

hj

)cj

holds true for all hj ∈ L+1 (Rnj ), and let F be the smallest constant for which

(6.8) I(f1, . . . , fm) ≤ F ·m∏

j=1

(∫

Rnj

fj

)cj

holds true for all fj ∈ L+1 (Rnj ). Then, we have the following theorem.

Page 66: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

62 · Rearrangement inequalities

2.6.1. The constants E and F can be computed using centered Gaussian func-tions. That is,

(6.9) E = inf

K(g1, . . . , gm)∏mj=1

(∫Rnj gj

)cj| gj is a centered Gaussian , j = 1, . . . , m

and(6.10)

F = sup

I(g1, . . . , gm)∏mj=1

(∫Rnj gj

)cj| gj is a centered Gaussian , j = 1, . . . , m

.

Moreover, if D is the largest real number for which

(6.11) det

m∑

j=1

cjB∗j AjBj

≥ D ·

m∏

j=1

(detAj)cj ,

for all Aj ∈ S+(Rnj ), we have

(6.12) E =√

D and F =1√D

.

The proof follows the steps of the one-dimensional case. We consider the con-stants(6.13)

Eg = inf

K(g1, . . . , gm)∏mj=1

(∫Rnj gj

)cj| gj is a centered Gaussian , j = 1, . . . ,m

and(6.14)

Fg = sup

I(g1, . . . , gm)∏mj=1

(∫Rnj gj

)cj| gj is a centered Gaussian , j = 1, . . . , m

.

What we want to prove is that

(6.15) E = Eg =√

D and F = Fg =1√D

.

2.6.2. In the notation of Theorem 2.6.1, we have Fg = 1/√

D.

Proof: Let gj = GAj , j = 1, . . . , m, where Aj ∈ S+(Rnj ). Then,

Rn

m∏

j=1

gcj

j (Bjx)dx =∫

Rn

exp

m∑

j=1

cj〈AjBjx,Bjx〉 dx

=∫

Rn

exp

−⟨( m∑

j=1

cjB∗j AjBj

)(x), x

⟩ dx

=πn/2

√det

(∑mj=1 cjB∗

j AjBj

) .

Page 67: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

2.6 Multidimensional versions · 63

On the other hand,m∏

j=1

(∫

RGAj

)cj

=m∏

j=1

(∫

Rexp(−〈Ajx, x〉)

)cj

=m∏

j=1

(πnj/2

√det Aj

)cj

=πn/2

√∏mj=1(det Aj)cj

since c1n1 + · · ·+ cmnm = n. It follows that

1F 2

g

= inf ( ∏m

j=1

(∫Rnj gj

)cj

∫Rn

∏mj=1 g

cj

j (Bjx)dx

)| gj is a centered Gaussian , j = 1, . . . , m

= infdet

(∑mj=1 cjB

∗j AjBj

)∏m

j=1(det Aj)cj| Aj ∈ S+(Rnj )

= D. 2

2.6.3. We have Eg · Fg = 1 and Eg = 0 if and only if Fg = +∞. Therefore,Fg =

√D.

Proof: Let Aj ∈ S+(Rnj ) and consider the quadratic form Q defined by

(6.16) Q(y) =⟨ m∑

j=1

cjB∗j AjBjy, y

⟩.

We consider the function

(6.17) R(x) = inf m∑

j=1

cj〈A−1j xj , xj〉 | xj ∈ Rnj and x =

m∑

j=1

cjB∗j xj

.

We will show that

(6.18) R(x) = Q∗(x) = sup〈x, y〉2 | Q(y) ≤ 1.Note that if x =

∑mj=1 cjB

∗j xj where xj ∈ Rnj , then

〈x, y〉2 =⟨ m∑

j=1

cjB∗j xj , y

⟩2 =

m∑

j=1

〈√cjA−1/2j xj ,

√cjA

1/2j Bjy〉

2

,

and hence, Cauchy-Schwarz inequality shows that

〈x, y〉2 ≤

m∑

j=1

|√cjA−1/2j xj |2

·

m∑

j=1

|√cjA1/2j Bjy|2

=

m∑

j=1

cj〈xj , A−1j xj〉

·

⟨ m∑

j=1

cjB∗j AjBjy, y

≤ R(x)Q(y).

Page 68: Convex Geometric Analysisusers.uoa.gr/~apgiannop/notes.pdf · Convex Geometric Analysis Seminar Notes Department of Mathematics University of Crete Heraklion, 2002. Contents 1 Brunn-Minkowski

64 · Rearrangement inequalities

On the other hand, we have equality in the previous computation if we choose

y =

m∑

j=1

cjB∗j AjBj

−1

(x)

and xj = AjBjy, j = 1, . . . ,m. It follows that R = Q∗.Direct computation shows that if Aj ∈ S+(Rnj ), then

(6.19)I(GA1 , . . . , GAm

)∏mj=1

(∫GAj

)cj=

(∏mj=1(detAj)cj

detQ

)1/2

and

(6.20)K(GA−1

1, . . . , GA−1

m)

∏mj=1

(∫GA−1

j

)cj=

(∏mj=1(det Aj)−cj

det R

)1/2

.

Since R = Q∗, we have (det Q) · (det R) = 1. Therefore,

(6.21)I(GA1 , . . . , GAm)∏m

j=1

(∫GAj

)cj·K(GA−1

1, . . . , GA−1

m)

∏mj=1

(∫GA−1

j

)cj= 1.

By the definition of Eg and Fg we get the conclusion of the lemma. 2