
DIFFUSIVE PROCESSES AND STOCHASTIC DIFFERENTIAL EQUATIONS

Wrocław Lectures

Draft, not for publication

Wojbor A. WOYCZYŃSKI
Case Western Reserve University

Copyright: WAW, May 2013


Contents

1 Random walk and its parabolic rescaling limit
   1.1 Stochastic processes and their finite-dimensional distributions
   1.2 Finite-dimensional distributions
   1.3 Symmetric random walk; parabolic rescaling
   1.4 Brownian motion

2 Brownian motion as a measure on the space of continuous functions
   2.1 Basic properties of Brownian motion
   2.2 Almost sure continuity of sample paths
   2.3 Nowhere differentiability of Brownian motion

3 Poisson processes and their mixtures, Levy processes
   3.1 Why Poisson process?
   3.2 Finite-dimensional distributions and covariance structure
   3.3 Waiting times and interjump times

4 Levy-Khinchine formula and infinitesimal generators of Levy processes
   4.1 From Poisson processes to Levy processes
   4.2 Infinitesimal generators of Levy processes

5 Selfsimilar Levy processes and singular integrals
   5.1 Selfsimilarity of Levy processes
   5.2 Properties of α-stable motions
   5.3 Infinitesimal generators of α-stable processes

6 Stochastic integrals for Brownian motion and general Levy processes
   6.1 Wiener random integral
   6.2 Ito stochastic integral for Brownian motion
   6.3 Ito stochastic integral for α-stable motion

7 Ito stochastic differential equations
   7.1 Ito's formula
   7.2 Stochastic differential equations

8 Asymmetric exclusion processes and their scaling limits
   8.1 Asymmetric exclusion principles
   8.2 Scaling limit
   8.3 Other queuing regimes related to non-nearest neighbor systems

9 Nonlinear diffusion equations
   9.1 Hyperbolic equations
   9.2 Nonlinear diffusion approximations

10 Interacting diffusions approximations for nonlinear diffusion equations
   10.1 Nonlinear processes
   10.2 Interacting diffusions and Monte-Carlo methods


Chapter 1

Random walk and its parabolic rescaling limit

1.1 Stochastic processes and their finite-dimensional distributions

Let T be a subset of the real line and (Ω, F, P) a probability space. A mapping

$$T \times \Omega \ni (t, \omega) \longmapsto X_t(\omega) \in \mathbf{R}, \qquad (1.1)$$

is called a stochastic process. So, on the one hand, a stochastic process is a function

$$T \ni t \longmapsto X_t \in L^0(\Omega, \mathcal{F}, P; \mathbf{R}). \qquad (1.2)$$

The values of this function are random variables with the 1-D probability distributions

$$\mu_t(B) = P(X_t \in B), \qquad B \in \mathcal{B}. \qquad (1.3)$$

On the other hand, the mapping

$$\Omega \ni \omega \longmapsto X_\cdot(\omega) \in \mathbf{R}^T \qquad (1.4)$$

has as its values real-valued functions on T which are called the sample paths (trajectories) of the process X. This duality will be explored in some detail.

1.2 Finite-dimensional distributions

More complete information about the process is given by its finite-dimensional distributions

$$\mu_{t_1,\ldots,t_n}(B) = P((X_{t_1}, \ldots, X_{t_n}) \in B), \qquad B \in \mathcal{B}^n, \qquad (1.5)$$


n = 1, 2, . . . ; t_1, . . . , t_n ∈ T, and Kolmogorov's Consistency Theorem assures us that if we know all finite-dimensional distributions then we can determine (at least theoretically) the full infinite-dimensional probability distribution of the process on the space $\mathbf{R}^T$ of real functions on T (equipped with the natural sigma-field generated by the cylindrical sets).

1.3 Symmetric random walk; parabolic rescaling

Consider first a symmetric random walk on the one-dimensional lattice. Starting from the origin, the particle moves one step to the right, or left, with equal probability 1/2.

The consecutive steps are independent and are taken at the times

tk = k∆t, k = 1, 2, . . . ,

with the step size (lattice distance) ∆x, so that the set of possible positions of the particle is

x_k = k∆x, k = 0, ±1, ±2, . . . .

If X(t, x) indicates whether the site x is occupied (X = 1), or unoccupied (X = 0) at time t, and we denote by

$$u(t_k, x_k) = P\{X(t_k, x_k) = 1\} \qquad (1.6)$$

the probability that at time tk the particle is at site xk then, clearly,

$$u(t_{k+1}, x_k) = \frac{1}{2}\,u(t_k, x_{k-1}) + \frac{1}{2}\,u(t_k, x_{k+1}). \qquad (1.7)$$

This equation can be rewritten as a difference equation for function u

$$u(t_{k+1}, x_k) - u(t_k, x_k) = \frac{1}{2}\left(u(t_k, x_{k-1}) + u(t_k, x_{k+1}) - 2u(t_k, x_k)\right), \qquad (1.8)$$

k = 1, 2, . . . , with the initial condition u(0, x) = δ(x). Instead of directly solving the system (1.8) it is easier to notice that with the parabolic scaling

$$\Delta t = (\Delta x)^2$$

we can rewrite (1.8) in the form

$$\frac{u(t_{k+1}, x_k) - u(t_k, x_k)}{\Delta t} = \frac{1}{2}\,\frac{u(t_k, x_{k-1}) + u(t_k, x_{k+1}) - 2u(t_k, x_k)}{(\Delta x)^2}, \qquad (1.9)$$


k = 1, 2, . . . , which, in the hydrodynamic limit ∆t = (∆x)^2 → 0, becomes the usual linear diffusion equation

$$\frac{\partial u}{\partial t} = \frac{1}{2}\,\frac{\partial^2 u}{\partial x^2}. \qquad (1.10)$$

Solving via Fourier transform?
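For a quick numerical illustration of this passage to the limit, one can iterate the scheme (1.7) under the parabolic scaling and compare the result with the heat kernel of (1.10); the short Python sketch below does this (the grid, the number of steps, and the parity bookkeeping are illustrative choices made here, not part of the text).

```python
import numpy as np

dx = 0.05
dt = dx**2                              # parabolic scaling dt = dx^2
K = 2000                                # number of time steps, so t = K*dt = 5
x = np.arange(-200, 201) * dx
u = np.zeros_like(x)
u[200] = 1.0                            # delta initial condition at x = 0

for _ in range(K):
    # u(t_{k+1}, x) = (u(t_k, x - dx) + u(t_k, x + dx)) / 2, cf. (1.7)
    u = 0.5 * (np.roll(u, 1) + np.roll(u, -1))

t = K * dt
idx = np.arange(x.size)
mask = (idx % 2) == (200 + K) % 2       # only sites of the right parity carry mass
emp_density = u[mask] / (2 * dx)        # probability mass per unit length
exact = np.exp(-x[mask]**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
print("max |empirical - heat kernel| =", np.max(np.abs(emp_density - exact)))
```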

1.4 Brownian motion

The above passage from the difference equation to the differential equation has its analogue in terms of the random walk itself approaching the continuous-time Brownian motion.

Consider a sequence ξ_i, i = 1, 2, . . . , of independent random variables with P{ξ_i = ±1} = 1/2, so that Eξ_i = 0, Var ξ_i = 1. Then the position of the particle at time n is described by

X(n) = ξ1 + · · ·+ ξn, (1.11)

with EX(n) = 0, Var X(n) = n → ∞ (see Fig. 2.1.3). But

Bn = X(n)/√n

has variance 1 for all n = 1, 2, . . . , and its Fourier transform

$$\varphi_{B_n}(\lambda) = E e^{i\lambda B_n} = \left(E \exp\left[i\lambda \xi_1/\sqrt{n}\right]\right)^n = \cos^n(\lambda/\sqrt{n}).$$

As n → ∞, applying l'Hospital's rule twice with respect to the variable n,

$$\lim_{n\to\infty} \log \varphi_{B_n}(\lambda) = \lim_{n\to\infty} n \log\cos\frac{\lambda}{\sqrt{n}} = \lim_{n\to\infty} \frac{\log\cos(\lambda/\sqrt{n})}{1/n}$$
$$= \lim_{n\to\infty} \frac{-\sin(\lambda/\sqrt{n})\,(-\lambda/2n^{3/2})}{-(1/n^2)\cos(\lambda/\sqrt{n})} = \lim_{n\to\infty} \frac{-\cos(\lambda/\sqrt{n})\,(\lambda^2/2n^{3/2})}{1/n^{3/2}} = -\frac{\lambda^2}{2},$$

which means that lim_{n→∞} φ_{B_n}(λ) = exp[−λ^2/2], so that, in law, the random variables B_n converge to a standard Gaussian random variable, say B_∞, i.e., for all x ∈ R,

$$P\{B_\infty \le x\} = \int_{-\infty}^{x} \frac{\exp[-y^2/2]}{\sqrt{2\pi}}\, dy. \qquad (1.12)$$


This is, of course, the elementary version of the Central Limit Theorem. Interpolating linearly the parabolically rescaled (in time and space) random walk, we get

$$B_n(t) = \frac{1}{\sqrt{n}} \sum_{i=1}^{\lfloor nt \rfloor} \xi_i = \frac{X(\lfloor nt \rfloor)}{\sqrt{n}}, \qquad (1.13)$$

where ⌊x⌋ is the greatest integer ≤ x, and the finite-dimensional distributions of the processes B_n(t), t ∈ R, converge to the finite-dimensional distributions of the Brownian motion B(t), i.e., the Gaussian process with independent and stationary increments, mean EB(t) = 0, and variance Var B(t) = t, so that

$$\mathrm{Cov}(B(t), B(s)) = t \wedge s \equiv \min\{t, s\}. \qquad (1.14)$$

This is the celebrated Invariance Principle (see, e.g., Billingsley (1986), Theorem 37.8).
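As an illustration of the Invariance Principle, the following small simulation (a sketch only, with arbitrarily chosen sample sizes) checks that the rescaled walk (1.13) reproduces the Brownian covariance (1.14).

```python
import numpy as np

rng = np.random.default_rng(0)
n, paths = 2000, 2000
xi = rng.choice([-1.0, 1.0], size=(paths, n))        # i.i.d. +/-1 steps
X = np.cumsum(xi, axis=1)                            # X(1), ..., X(n)

def B_n(t):
    # B_n(t) = X(floor(n t)) / sqrt(n), as in (1.13)
    return X[:, int(np.floor(n * t)) - 1] / np.sqrt(n)

s, t = 0.3, 0.7
cov = np.mean(B_n(s) * B_n(t))                       # means are ~ 0
print("empirical Cov =", round(cov, 3), " theoretical min(s,t) =", min(s, t))
```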



Chapter 2

Brownian motion as a measure on the space of continuous functions

2.1 Basic properties of Brownian motion

Several basic properties of the Brownian motion follow directly from the definition, which only asserts that it is a mean-zero Gaussian process with the covariance function of the form t ∧ s. Indeed, we immediately get that,

(a) B0 = 0,

(b) EB_t^2 = t, and, consequently,

$$P(B_t \le z) = \frac{1}{\sqrt{2\pi t}} \int_{-\infty}^{z} e^{-x^2/(2t)}\, dx. \qquad (2.1)$$

(c) B_t has orthogonal (uncorrelated) increments over non-overlapping time intervals.

(d) Its finite-dimensional distributions are explicitly calculated:

$$P(B_{t_1} \le z_1, \ldots, B_{t_n} \le z_n) = \int_{-\infty}^{z_1} \cdots \int_{-\infty}^{z_n} f_{t_1,\ldots,t_n}(x_1, \ldots, x_n)\, dx_1 \cdots dx_n, \qquad (2.2)$$

with the pdf, for 0 < t_1 < · · · < t_n,

$$f_{t_1,\ldots,t_n}(x_1,\ldots,x_n) = \frac{e^{-x_1^2/(2t_1)}}{\sqrt{2\pi t_1}} \cdot \frac{e^{-(x_2-x_1)^2/(2(t_2-t_1))}}{\sqrt{2\pi(t_2-t_1)}} \cdots \frac{e^{-(x_n-x_{n-1})^2/(2(t_n-t_{n-1}))}}{\sqrt{2\pi(t_n-t_{n-1})}}, \qquad (2.3)$$


which can be obtained from (b) and (c) and the n-dimensional change-of-variables formula in the appropriate integral.

(e) For any t_0 > 0, the process X_t = B_{t_0+t} − B_{t_0}, t > 0, is also a Brownian motion.

(f) Brownian motion has stationary increments.

(g) Brownian motion has parabolic scaling, that is, for any c > 0, the process X(t) = cB(t/c^2), t > 0, is also a Brownian motion.

(h) Inversion in time: The process X(t) = tB(1/t), t > 0, is also a Brownian motion. Thus, the behavior of B(t) at infinity determines its behavior at zero, and vice versa.

(i) Reflection in the time axis: The process X(t) = −B(t), t > 0, is also a Brownian motion.
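For illustration, properties such as (g) can be probed by a quick simulation; the sketch below (arbitrary discretization and parameters, not part of the text) compares the empirical covariance of X(t) = cB(t/c^2) with min{t, s}.

```python
import numpy as np

rng = np.random.default_rng(1)
paths, N, T = 5000, 500, 1.0
dt = T / N
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, N)), axis=1)  # B(dt), ..., B(T)

def B_at(t):
    return B[:, int(round(t / dt)) - 1]              # value of B at a grid time t

c, s, t = 2.0, 0.2, 0.8
X_s, X_t = c * B_at(s / c**2), c * B_at(t / c**2)    # X(t) = c B(t / c^2)
print("empirical Cov(X_s, X_t) =", round(np.mean(X_s * X_t), 3),
      " vs  min(s, t) =", min(s, t))
```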

2.2 Almost sure continuity of sample paths

Brownian motion (has a version that) has continuous trajectories with probability 1. This result was proved by Norbert Wiener in 1926. In the 1960s, Zbigniew Ciesielski found a very elegant proof of this fact using the orthonormal system of Haar wavelets on the unit interval, defined by the formula:

$$h_{k2^{-n}}(t) = \begin{cases} 2^{(n-1)/2}, & \text{for } (k-1)2^{-n} < t \le k2^{-n}; \\ -2^{(n-1)/2}, & \text{for } k2^{-n} < t \le (k+1)2^{-n}; \\ 0, & \text{elsewhere}, \end{cases} \qquad (2.4)$$

for n = 1, 2, . . . , and odd k < 2^n, with h_0 ≡ 1.

Theorem: The process defined by the infinite random series

$$X_t = \gamma_0 \int_0^t h_0 + \sum_{n\ge 1}\ \sum_{\text{odd } k<2^n} \gamma_{k2^{-n}} \int_0^t h_{k2^{-n}}, \qquad t > 0, \qquad (2.5)$$

where the γ_{k2^{-n}} are independent, zero-mean Gaussian random variables with variance 1, is a Brownian motion. The series converges uniformly with probability 1, so that X_t is continuous with probability 1.

A proof of this major result is illuminating and we will explore it below.
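Before turning to the proof, here is a short numerical sketch of the construction (2.5) (the indexing helper and truncation level are choices made here for illustration only): a truncated random Haar/Schauder series already behaves like a Brownian path.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1025)
N_levels = 10                                  # truncation level of the series (2.5)

def schauder(t, n, k):
    """Integral from 0 to t of the Haar function h_{k 2^{-n}} in (2.4)."""
    left, mid, right = (k - 1) * 2.0**-n, k * 2.0**-n, (k + 1) * 2.0**-n
    up = np.clip(t, left, mid) - left          # rising part of the tent
    down = np.clip(t, mid, right) - mid        # falling part of the tent
    return 2.0**((n - 1) / 2) * (up - down)

X = rng.normal() * t                           # gamma_0 * int_0^t h_0 = gamma_0 * t
for n in range(1, N_levels + 1):
    for k in range(1, 2**n, 2):                # odd k < 2^n
        X = X + rng.normal() * schauder(t, n, k)

# the empirical quadratic variation of the partial sum over [0, 1] should be near 1
print(np.sum(np.diff(X)**2))
```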


2.3 Nowhere differentiability of Brownian motion

The good news about the trajectories of the Brownian motion ends at their continuity (actually a little more can be proved). Indeed, the form of the covariance function immediately gives us a hint that B(t) cannot be differentiable in the L^2 sense. But Dvoretzky, Erdős, and Kakutani proved in 1961 that a much stronger result holds true.

Theorem: Brownian motion is nowhere differentiable with probability 1. More precisely,

$$P(B'(t) \ \text{exists for some } t) = 0. \qquad (2.6)$$

We shall explore a subtle proof of this result.

Although W(t) = B′(t), which is called the white noise, does not exist in the classical sense, it is an object that is commonly used in physics and engineering. It can be rigorously introduced in the framework of the theory of distributions (generalized functions) and we will briefly describe below how this can be done.


Chapter 3

Poisson processes and their mixtures, Levy processes

3.1 Why Poisson process?

Let us now move in the opposite direction from the continuous sample path Brownian motion model considered in the previous lecture, and consider a stochastic process N_t, t ≥ 0, enjoying the following properties:

(a) N(t) takes values in the set of nonnegative integers, and N(0) = 0.

(b) N(t) has stationary increments independent over non-overlapping intervals.

(c) N(t) is nontrivial in the sense that, for each t > 0, we have 0 < P(N(t) > 0) < 1.

(d) N(t) has only jumps of size 1.

The above set of properties seems like only a qualitative description of a process, but it turns out that these conditions are restrictive enough to characterize the process completely in the quantitative sense.

Theorem: If a process N(t), t > 0, satisfies the above conditions (a)–(d), then it is a Poisson process, that is, there exists a constant µ > 0 such that

$$P(N(t) = k) = e^{-\mu t}\,\frac{(\mu t)^k}{k!}, \qquad k = 0, 1, 2, \ldots. \qquad (3.1)$$

We will provide a proof of this result.


3.2 Finite-dimensional distributions and covariance structure

Also, all the finite-dimensional distributions can be explicitly calculated. Similarly, various conditional probabilities can be explicitly evaluated. In particular, order statistics of the uniform distribution play an important role here.

3.3 Waiting times and interjump times

The n-th waiting time is the random time when the process reaches the level n for the first time, i.e.,

$$W_n = \min\{t : N(t) = n\}. \qquad (3.2)$$

Theorem: The n-th waiting time W_n of the Poisson process with parameter µ has the Gamma pdf

$$f_{W_n}(t) = e^{-\mu t}\,\frac{\mu^n t^{n-1}}{(n-1)!}, \qquad t \ge 0, \qquad (3.3)$$

for n = 1, 2, . . . .

The proof will be provided below.

On the other hand, the random interjump times

Tn = Wn −Wn−1, n = 1, 2, . . . (3.4)

have a simpler structure

Theorem: The random variables T_n, n = 1, 2, . . . , are independent, identically distributed with the exponential pdf

$$f_{T_n}(t) = \mu e^{-\mu t}, \qquad t \ge 0. \qquad (3.5)$$

The proof will be provided below.
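The structure described above is easy to probe by simulation; the following sketch (illustrative parameters only) builds the process from exponential interjump times and checks the Gamma and Poisson laws (3.3) and (3.1) empirically.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, reps = 2.0, 100000

T = rng.exponential(1.0 / mu, size=(reps, 30))       # interjump times, pdf (3.5)
W = np.cumsum(T, axis=1)                             # waiting times W_n = T_1 + ... + T_n

# W_5 should be Gamma: mean n/mu, variance n/mu^2
print("E W_5   =", W[:, 4].mean(), " vs ", 5 / mu)
print("Var W_5 =", W[:, 4].var(), " vs ", 5 / mu**2)

t = 1.5
N_t = (W <= t).sum(axis=1)                           # N(t); P(N(t) > 30) is negligible here
print("E N(t) =", N_t.mean(), " Var N(t) =", N_t.var(), " vs  mu*t =", mu * t)
```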


Chapter 4

Levy-Khinchine formula and infinitesimal generators of Levy processes

4.1 From Poisson processes to Levy processes

In this section we provide a brief, ab ovo, review of Levy's self-similar nonlocal hopping surface transport model which, in other contexts, is called anomalous diffusion. The role of the fractional Laplacian is explained and the model is compared with the traditional random walk/Brownian motion model.

Let us begin with an elementary one-dimensional example of the Poisson process where a particle is located on a 1-D lattice with unit spacing, starts at the origin, waits a random exponential time and then moves one unit to the right. Then the step is independently repeated. If we denote by X(t) the position of the particle at time t > 0, then it is well known that this random quantity has the standard Poisson distribution, i.e., Pr{X(t) = k} = e^{−t} · t^k/k!, k = 0, 1, 2, . . . . This distribution can be described in terms of its Fourier transform Φ(u) (characteristic function) as follows

$$\Phi(u) = E e^{iuX(t)} = e^{-t} \sum_{k=0}^{\infty} \frac{(e^{iu})^{k} \cdot t^k}{k!} = \exp[t(e^{iu} - 1)]. \qquad (4.1)$$

If the jumps are of size a, i.e., Pr{X(t) = ak} = e^{−t} · t^k/k!, k = 0, 1, 2, . . . , then the analogous calculation gives the Fourier transform

$$\Phi(u) = E e^{iuX(t)} = e^{-t} \sum_{k=0}^{\infty} \frac{(e^{iu})^{ak} \cdot t^k}{k!} = \exp[t(e^{iua} - 1)]. \qquad (4.2)$$


Now consider a more complex model which is a composition of n independent simple Poisson processes, each with jump sizes a_1, . . . , a_n, respectively. For the resulting stochastic process X(t), the Fourier transform

$$\Phi(u) = E e^{iuX(t)} = \prod_{j=1}^{n} e^{-t} \sum_{k=0}^{\infty} \frac{(e^{iu})^{a_j k} \cdot t^k}{k!} = \exp\left(t \sum_{j=1}^{n} (e^{iua_j} - 1)\right). \qquad (4.3)$$

If jump sizes are continuously distributed, say with intensity L(da), then the natural infinitesimal limiting procedure for the corresponding Levy process leads to the following representation of its Fourier transform

$$\Phi(u) = E e^{iuX(t)} = \exp\left(t \int_{-\infty}^{\infty} (e^{iua} - 1)\, L(da)\right), \qquad (4.4)$$

assuming that the increments are stationary and independent over disjoint time intervals. Define Ψ so that Φ(u) = exp(−tΨ(u)), with Ψ(u) = −∫_{−∞}^{∞}(e^{iua} − 1) L(da). The intensity measure L(da) is called the Levy measure of the process X(t) (see Bertoin (1996)).

When the Levy measure has power scaling, i.e., L(da) = da/|a|^{α+1}, 0 < α < 2, a simple calculation leads to the α-stable Fourier transform

$$\Phi(u) = e^{-ct|u|^{\alpha}} \quad \text{and} \quad \Psi(u) = c|u|^{\alpha}, \qquad c > 0, \qquad (4.5)$$

and the corresponding α-stable Levy process.

Remark 4.1.1. The special case of α = 2 gives the Brownian motion with the Fourier transform of the form exp(−ctu^2), which can be inverted to yield the familiar Gaussian density of the form exp(−x^2/(2ct)). For α = 1, we obtain another familiar density, namely the Cauchy (Lorentz) density of the form (π(1 + x^2))^{−1}. In general, for other values of 0 < α < 2, the Fourier transform exp[−ct|u|^α] cannot be inverted explicitly (see, e.g., Feller (1966), Bertoin (1996)).
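As a numerical complement to Remark 4.1.1, the sketch below (with the truncation and step of the quadrature chosen by hand) inverts exp(−ct|u|^α) by direct integration and, for α = 1 and ct = 1, recovers the Cauchy density.

```python
import numpy as np

alpha, c, t = 1.0, 1.0, 1.0
u = np.linspace(0.0, 200.0, 400001)                  # truncation and step chosen by hand
du = u[1] - u[0]
phi = np.exp(-c * t * u**alpha)

def density(x):
    # f(x) = (1/pi) * integral_0^infinity cos(u x) exp(-c t u^alpha) du
    return np.sum(np.cos(u * x) * phi) * du / np.pi

for x in (0.0, 0.5, 2.0):
    print(x, density(x), 1.0 / (np.pi * (1.0 + x**2)))   # Cauchy density for c*t = 1
```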

Remark 4.1.2. In view of (4.5), increments of the Levy α-stable process have distributions with the following self-similar scaling property: for any t, c > 0, X(ct) ∼ c^{1/α}X(t), where ∼ stands for the equality of distributions of random quantities.

4.2 Infinitesimal generators of Levy processes

Levy processes are Markov processes with the associated Markov semigroup (i.e., P_{t+s} = P_t P_s) of convolution operators P_t acting on a bounded function


f(x) via the formula

$$P_t f(x) = E_x(f(X(t))) = \int_{\mathbf{R}} f(x+y)\, P(X(t) \in dy). \qquad (4.6)$$

The infinitesimal generator A of such a semigroup is defined by the formula

$$A = \lim_{h\to 0} \frac{P_h - P_0}{h}, \qquad (4.7)$$

and the family of functions (densities) v(t, x) = P_t f(x) clearly satisfies the (generalized) Fokker-Planck evolution equation

vt = Av, (4.8)

because

$$\lim_{h\to 0} \frac{P_{t+h} - P_t}{h} = \lim_{h\to 0} \frac{P_h - P_0}{h}\, P_t = A P_t. \qquad (4.9)$$

In the case of the usual Brownian motion the infinitesimal operator A is just the classical Laplacian ∆. In the case of general Levy processes, we have the identity

F(Af)(u) = −Ψ(−u)Ff(u) (4.10)

where F stands for the Fourier transform, because

$$\mathcal{F}(P_t f)(u) = E\left(\int_{\mathbf{R}} e^{iux} f(X(t)+x)\, dx\right) = E\left(\int_{\mathbf{R}} e^{iu(y-X(t))} f(y)\, dy\right)$$
$$= E e^{-iuX(t)} \int_{\mathbf{R}} e^{iuy} f(y)\, dy = \exp(-t\Psi(-u))\,\mathcal{F}f(u),$$

which implies that

$$\mathcal{F}(U^q f)(u) = (q + \Psi(-u))^{-1}\, \mathcal{F}f(u), \qquad q > 0,$$

where

$$U^q f(x) = \int_0^{\infty} e^{-qt} P_t f(x)\, dt = E_x\left(\int_0^{\infty} e^{-qt} f(X(t))\, dt\right)$$

is the family of resolvent operators which satisfy the relation U^q(qI − A) = I, for any q > 0.


Inverting the Fourier transform in (4.10) one gets the following representation for the infinitesimal operator of the Levy process

$$Af(x) = \int_{\mathbf{R}} (f(x+y) - f(x))\, L(dy). \qquad (4.11)$$

In the special case of the α-stable Levy process, i.e., when Ψ(u) = c|u|^α, the infinitesimal operator

$$Af(x) = \int_{\mathbf{R}} (f(x+y) - f(x))\, \frac{dy}{|y|^{\alpha+1}} \qquad (4.12)$$

can be identified with the fractional power of the (negative) Laplacian A = −c(−∆)^{α/2}, since, in view of (B.7),

$$\mathcal{F}(Af)(u) = -c\,(|u|^2)^{\alpha/2}\, \mathcal{F}f(u). \qquad (4.13)$$
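Numerically, the Fourier-multiplier formula (4.13) is straightforward to implement; the following sketch (a periodic grid and FFT discretization chosen here purely for illustration) applies −c(−∆)^{α/2} to a Gaussian and checks the case α = 2 against the ordinary second derivative.

```python
import numpy as np

def frac_laplacian(f, L, alpha, c=1.0):
    """Apply -c(-Delta)^{alpha/2} to samples f on a periodic interval of length L."""
    n = f.size
    u = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular Fourier frequencies
    return np.real(np.fft.ifft(-c * np.abs(u)**alpha * np.fft.fft(f)))

L = 40.0
x = np.linspace(-L / 2, L / 2, 4096, endpoint=False)
f = np.exp(-x**2)

# sanity check: for alpha = 2, A f = c f'' = c (4x^2 - 2) exp(-x^2)
err = np.max(np.abs(frac_laplacian(f, L, 2.0) - (4 * x**2 - 2) * np.exp(-x**2)))
print("alpha = 2 check, max error:", err)
```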

The above exposition only gives a sketch of the fractional Laplacian machinery. There are some mathematical details that have been omitted not to cloud the basic formal structure. Those details can be found in Feller (1966), Bertoin (1996), and Saichev and Woyczynski (1997).


Chapter 5

Selfsimilar Levy processes and singular integrals

5.1 Selfsimilarity of Levy processes

Brownian motion enjoys the self-similarity property

$$B_{ct} \stackrel{d}{=} c^{1/2} B_t.$$

Question: Can we find Levy processes which are self-similar, perhaps with a parameter α ≠ 2? This turns out to be true if the Levy measure in the Levy-Khinchine formula has the self-similarity property itself. Indeed,

$$E \exp(iuX_t) = \exp\left(2t \int_0^{\infty} (\cos ux - 1)\, \frac{dx}{|x|^{\alpha+1}}\right) = e^{-ct|u|^{\alpha}}, \qquad (5.1)$$

so that

$$X_{ct} \stackrel{d}{=} c^{1/\alpha} X_t. \qquad (5.2)$$

Such processes are called α-stable motions (processes).
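For illustration, symmetric α-stable increments can be simulated with the Chambers–Mallows–Stuck formula; the sketch below (my normalization: characteristic function exp(−|u|^α), and arbitrary α and step sizes) draws a path of an α-stable motion and exhibits its large jumps.

```python
import numpy as np

rng = np.random.default_rng(4)

def sas_sample(alpha, size):
    """Standard symmetric alpha-stable variates (0 < alpha <= 2, alpha != 1)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U)**(1 / alpha)
            * (np.cos((1 - alpha) * U) / W)**((1 - alpha) / alpha))

alpha, N, T = 1.5, 10000, 1.0
dt = T / N
increments = dt**(1 / alpha) * sas_sample(alpha, N)   # self-similar scaling (5.2)
X = np.cumsum(increments)                             # X(dt), X(2 dt), ..., X(T)
print("largest single jump over [0, 1]:", np.max(np.abs(increments)))
```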

5.2 Properties of α-stable motions

1) Moments

2) Tail properties of α-stable distributions

3) Trajectories of α-stable motions. Fractal properties.


5.3 Infinitesimal generators of α-stable processes

Fractional Laplacians and their properties as Fourier multiplier operators. Potential estimates for fractional Laplacians.


Chapter 6

Stochastic integrals for Brownian motion and general Levy processes

6.1 Wiener random integral

Random measure (orthogonally scattered) on ([0, 1],B): A mapping

$$\mathcal{B} \ni A \longmapsto M(A) \in L^2(\Omega, \mathcal{F}, P) \qquad (6.1)$$

such that

EM(A) = 0,

M(A ∪ B) = M(A) + M(B), if A ∩ B = ∅,

and

EM(A) · M(B) = 0, if A ∩ B = ∅.

Brownian motion generates a random measure by the extension of the formula

$$M((a, b]) = B_b - B_a.$$

So does the symmetrization of the Poisson process.

Construction of the Wiener random integral. Start with simple functions

$$f(t) = \sum_i a_i I(A_i),$$

where the A_i's are disjoint, and define

$$\int f(t)\, M(dt) = \sum_i a_i M(A_i). \qquad (6.2)$$


Isometry property,

$$E\left|\int f(t)\, M(dt)\right|^2 = \int |f(t)|^2\, m(dt), \qquad (6.3)$$

where m(A) = E|M(A)|^2 is the control measure. The isometry permits extension of the random integral to all functions f ∈ L^2([0, 1], B, m).

$$X_n = \int e^{int}\, M(dt)$$

gives a representation of second-order weakly stationary processes.

6.2 Ito stochastic integral for Brownian motion

Try to extend the definition of the Wiener integral to the case of stochastic integrands:

$$\int f(t, \omega)\, dB_t(\omega). \qquad (6.4)$$

Again start with simple integrands, this time taking random values on disjoint intervals. The immediate problem in trying to get the isometry property is the structure of statistical dependence between f_t and B_t. For nonanticipating integrands we also have the isometry

$$E\left|\int f_t\, dB_t\right|^2 = E \int |f_t|^2\, dt. \qquad (6.5)$$

Lebesgue measure is the control measure for Brownian motion.

Martingale structure of Ito integrals:

$$E\left(\int_0^s f_t\, dB_t \,\Big|\, \mathcal{F}_u\right) = \int_0^u f_t\, dB_t, \qquad u < s. \qquad (6.6)$$

Maximal inequalities. Example of a direct calculation:

$$\int_0^s B_t\, dB_t = \frac{1}{2}\left(B_s^2 - s\right).$$

Is there a more general formula behind this?
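A direct numerical check of the identity above (a sketch with an arbitrary step size): approximating the Ito integral by left-endpoint, nonanticipating Riemann sums reproduces (B_s^2 − s)/2 up to discretization error.

```python
import numpy as np

rng = np.random.default_rng(5)
N, s = 100000, 1.0
dt = s / N
dB = rng.normal(0.0, np.sqrt(dt), N)
B = np.concatenate(([0.0], np.cumsum(dB)))      # B_0, B_dt, ..., B_s

ito_sum = np.sum(B[:-1] * dB)                   # left endpoints -> Ito integral
print(ito_sum, 0.5 * (B[-1]**2 - s))
```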


6.3 Ito stochastic integral for α-stable motion

What to do if the second moments are infinite? Define

$$\int f_s(\omega)\, dX_s(\omega)$$

relying on the knowledge of the characteristic function of the α-stable motion X_t.


Chapter 7

Ito stochastic differential equations

7.1 Ito’s formula

Because the integrator in Ito's Brownian integral has infinite variation, the standard rules of calculus do not apply. In particular, the change-of-variables formula (chain rule) takes a different form.

Theorem: Let

$$X_t = X_0 + \int_0^t u(s, \omega)\, ds + \int_0^t v(s, \omega)\, dB_s \qquad (7.1)$$

be a stochastic integral process with nonanticipating processes u and v such that

$$P\left(\int_0^t v^2(s)\, ds < \infty,\ \forall t > 0\right) = 1,$$

and

$$P\left(\int_0^t |u(s)|\, ds < \infty,\ \forall t > 0\right) = 1.$$

If g(t, x), t ≥ 0, x ∈ R, is twice continuously differentiable, then the process

$$Y_t = g(t, X_t)$$

is also a stochastic integral and

$$Y_s = Y_0 + \int_0^s \frac{\partial g}{\partial t}(t, X_t)\, dt + \int_0^s \frac{\partial g}{\partial x}(t, X_t)\, dX_t + \frac{1}{2}\int_0^s \frac{\partial^2 g}{\partial x^2}(t, X_t)\,(dX_t)^2, \qquad (7.2)$$


where (dX_t)^2 is calculated according to the following rules:

$$dt \cdot dt = 0, \qquad dt \cdot dB_t = 0, \qquad dB_t \cdot dB_t = dt.$$

Proof:

Examples

7.2 Stochastic differential equations

Differential equations with noise

$$\frac{dX_t}{dt} = b(t, X_t) + \sigma(t, X_t)\,\dot{B}_t,$$

where Ḃ_t is the so-called white noise, i.e., the derivative of the Brownian motion. However, since Brownian motion is not differentiable, the rigorous approach calls for interpretation of the above stochastic equation as Ito's stochastic integral equation

$$X_s = X_0 + \int_0^s b(t, X_t)\, dt + \int_0^s \sigma(t, X_t)\, dB_t, \qquad (7.3)$$

which is traditionally written in the form of the stochastic differential equation

dXt = b(t,Xt)dt+ σ(t,Xt)dBt. (7.4)

1) Existence and uniqueness.
2) Explicit solutions?
3) Properties of solutions.

Example 1: Population growth model. Ornstein-Uhlenbeck process.
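As a concrete illustration of Example 1, the following Euler–Maruyama sketch simulates a mean-reverting Ornstein–Uhlenbeck equation dX_t = −θX_t dt + σ dB_t (the parameters θ, σ and the discretization are arbitrary choices made here).

```python
import numpy as np

rng = np.random.default_rng(6)
theta, sigma, X0 = 1.0, 0.5, 2.0        # illustrative parameters
T, N, paths = 10.0, 2000, 5000
dt = T / N

X = np.full(paths, X0)
for _ in range(N):
    # Euler-Maruyama step for dX = -theta*X dt + sigma dB
    X = X - theta * X * dt + sigma * np.sqrt(dt) * rng.normal(size=paths)

# for large T the law of X(T) is close to the stationary N(0, sigma^2/(2*theta))
print("Var X(T) =", X.var(), " vs  sigma^2/(2*theta) =", sigma**2 / (2 * theta))
```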

Example 2. Bessel process

Example 3. Kalman-Bucy filter


Chapter 8

Asymmetric exclusion processes and their scaling limits

8.1 Asymmetric exclusion principles

Particles occupy integer lattice sites on the real line. Description: X(k, t) = 1, if site k is occupied at time t, and 0 otherwise. They obey the exclusion principle: two particles cannot occupy the same site at the same time.

This is the example considered by Kipnis (1986), Benassi, Fouque (1987), and Srinivasan (1991, 1993). Another starting point is the observation that the queuing system consisting of an infinite series of queues can be interpreted in the language of the one-dimensional nearest neighbor simple exclusion process (see, e.g., Liggett (1985)). Indeed, if the lattice location of the i-th particle is denoted by x_i then, in view of the exclusion dynamics and nearest neighbor jumps, at time t

$$\cdots < x_{-1}(t) < x_0(t) < x_1(t) < \cdots \qquad (8.1)$$

Assume that the rate of this process is 1. If we denote by η_i(t) the random variable equal to the number of empty sites between x_i(t) and x_{i+1}(t), then η_i(t) can be considered as the length of the i-th queue for an infinite queuing system with single servers in series, each with an exponential service time with intensity 1. Indeed, when the i-th particle jumps to the right by one unit, then η_i(t) changes into η_i(t) − 1, which means that the service for one customer was completed at the i-th server, and η_{i+1} is changed to η_{i+1}(t) + 1, which means that a new customer was added to the queue at the (i + 1)st server. In other words, the customer in the i-th queue is served in exponential time with rate 1 and then joins the (i − 1)st queue with probability p and the (i + 1)st queue with probability 1 − p.


Another way to code the asymmetric exclusion interacting particle system is by listing its states

$$X(t) = \{X(k, t) : k \in \mathbf{Z}\} \in \{0, 1\}^{\mathbf{Z}}, \qquad t \ge 0. \qquad (8.2)$$

The set {k : X(k, t) = 1} ⊂ Z is the set of occupied sites at time t. In the totally asymmetric case p = 1, the infinitesimal generator for the Markov process X(t) (which does exist, see, e.g., Liggett (1985)) is

$$Lf(X) = \sum_{k\in\mathbf{Z}} X(k)\left(1 - X(k+1)\right)\left[f(X^{k,k+1}) - f(X)\right], \qquad (8.3)$$

where the state X^{k,k+1} is obtained from the state X by setting X(k) = 0, X(k + 1) = 1 and keeping the other values fixed.

The above system’s dynamics can also be encoded in the infinite systemof ordinary stochastic differential equations

$$\begin{aligned}
dX(t, k) ={}& X(t^-, k-1)\left[1 - X(t^-, k)\right] dP(t, k-1) \\
&- X(t^-, k)\left[1 - X(t^-, k+1)\right] dP(t, k) \\
&+ X(t^-, k+1)\left[1 - X(t^-, k)\right] dQ(t, k+1) \\
&- X(t^-, k)\left[1 - X(t^-, k-1)\right] dQ(t, k), \qquad (8.4)
\end{aligned}$$

where P(t, k) and Q(t, k), k ∈ Z, are independent Poisson processes with intensities p and (1 − p), representing jumps to the right and jumps to the left, respectively.
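For illustration, the totally asymmetric dynamics (p = 1) are easy to simulate; the sketch below (a finite periodic lattice with size, density and time horizon chosen arbitrarily) runs the exclusion rule and compares the empirical particle current with ρ(1 − ρ).

```python
import numpy as np

rng = np.random.default_rng(7)
L_sites, density, T = 200, 0.3, 50.0
X = (rng.random(L_sites) < density).astype(int)   # X(k) = 1 iff site k is occupied

t, jumps = 0.0, 0
while t < T:
    t += rng.exponential(1.0 / L_sites)           # each site attempts a jump at rate 1
    k = rng.integers(L_sites)                     # pick a site uniformly
    if X[k] == 1 and X[(k + 1) % L_sites] == 0:   # exclusion rule: jump right if empty
        X[k], X[(k + 1) % L_sites] = 0, 1
        jumps += 1

# average current per site; for density rho it should be close to rho*(1 - rho)
print("empirical current:", jumps / (L_sites * T), " vs ", density * (1 - density))
```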

8.2 Scaling limit

Define the hyperbolic rescalings

$$X^h(t, x) = \sum_{k\in\mathbf{Z}} X\!\left(\frac{t}{h}, k\right) \mathbf{1}_{[hk,\, h(k+1))}(x), \qquad (8.5)$$

$$P^h(t, x) = \sum_{k\in\mathbf{Z}} P\!\left(\frac{t}{h}, k\right) \mathbf{1}_{[hk,\, h(k+1))}(x), \qquad (8.6)$$

$$Q^h(t, x) = \sum_{k\in\mathbf{Z}} Q\!\left(\frac{t}{h}, k\right) \mathbf{1}_{[hk,\, h(k+1))}(x), \qquad (8.7)$$

and introduce the notation

$$F_{\pm h}u(x) = u(x)\left(1 - u(x \pm h)\right), \qquad (8.8)$$


$$D_{\pm h}u(x) = \pm\frac{u(x \pm h) - u(x)}{h}. \qquad (8.9)$$

A direct verification shows that the system (8.4) can now be written in the form

$$dX^h(t, x) = -D_{-h}\!\left[F_h\!\left(X^h(t^-, x)\right) d\big(hP^h(t, x)\big)\right] + D_h\!\left[F_{-h}\!\left(X^h(t^-, x)\right) d\big(hQ^h(t, x)\big)\right]. \qquad (8.10)$$

Theorem (Benassi and Fouque (1987)) Let p ≠ 1/2. As h → 0, the solution X^h(t, x) dx of (8.10) converges weakly to u(t, x) dx, where u(t, x) is a decreasing, right-continuous in the x-variable, weak solution of the nonlinear Cauchy problem

$$\frac{\partial u}{\partial t} + (2p-1)\frac{\partial B(u)}{\partial x} = 0, \qquad (8.11)$$

$$u(0, x) = u_0(x) = b\,\mathbf{1}_{(-\infty,0]}(x) + a\,\mathbf{1}_{(0,\infty)}(x), \qquad (8.12)$$

with some 0 ≤ a < b <∞ and

B(u) = u(1− u). (8.13)

Moreover, for all t, x, we have a ≤ u(t, x) ≤ b.

Recall that the weak solution is understood in the following sense: For every smooth function φ : R_+ × R → R with compact support,

$$\int_{\mathbf{R}_+}\!\int_{\mathbf{R}} \left[u\varphi_t + (2p-1)F(u)\varphi_x\right] dx\, dt = -\int_{\mathbf{R}} u_0(x)\varphi(0, x)\, dx. \qquad (8.14)$$

Heuristically, the result is plausible, since, as h → 0 in (8.10), hP^h(t, x) → pt, hQ^h(t, x) → (1 − p)t, D_{±h} → ∂/∂x, and F_{±h} → F.

8.3 Other queuing regimes related to non-nearest neighbor systems

Other queuing regimes related to non-nearest neighbor systems lead to scaling limits

$$\frac{\partial u}{\partial t} + \frac{\partial H(u)}{\partial x} = 0, \qquad (8.15)$$

with

$$H(u) = (1 - u) - (1 - u)^{N+1},$$

or

$$H(u) = u(1 - u)^m,$$

where N and m are some integers.


Chapter 9

Nonlinear diffusion equations

9.1 Hyperbolic equations

The nonlinear hyperbolic equations describing the density profiles for the queuing networks in Chapter 8 are special cases of general conservation laws (see, e.g., Smoller (1994)) of the form

$$\frac{\partial u}{\partial t} + \frac{\partial H(u)}{\partial x} = 0, \qquad (9.1)$$

and in the case of initial conditions of the form

$$u(0, x) = u_0(x) = u_l\,\mathbf{1}_{(-\infty,0]}(x) + u_r\,\mathbf{1}_{(0,\infty)}(x), \qquad (9.2)$$

where u_l and u_r are constants (the so-called Riemann problem), they can be solved explicitly under some extra conditions on the function H.

Let us recall (see, e.g., Smoller (1994)) that a bounded and measurable function u(t, x) is called a (weak) solution of the initial-value problem

$$\frac{\partial u}{\partial t} + \frac{\partial H(u)}{\partial x} = 0, \qquad u(0, x) = u_0(x), \qquad (9.3)$$

with bounded and measurable initial data u_0 if

$$\int_{t\ge 0}\!\int_{\mathbf{R}} \left(u\varphi_t + H(u)\varphi_x\right) dx\, dt + \int_{t=0} u_0 \varphi\, dx = 0. \qquad (9.4)$$

In general, solutions are not unique unless additional assumptions, such as the entropy condition mentioned below, are satisfied.


The solutions of the Riemann problem (9.1)–(9.2) are obviously invariant under hyperbolic rescaling, that is, for every constant λ > 0,

uλ(t, x) = u(λt, λx)

is a solution whenever u is. Thus one looks for the solutions of the form

u(t, x) = v(x/t) (9.5)

This gives rise to three types of local behavior of the solution u:

• u(t, x) is constant;

• u(t, x) is a shock wave of the form

$$u(t, x) = u_0\,\mathbf{1}_{(-\infty, Vt)}(x) + u_1\,\mathbf{1}_{[Vt, \infty)}(x), \qquad (9.6)$$

traveling with the velocity

$$V = \frac{H(u_0) - H(u_1)}{u_0 - u_1}.$$

For the sake of uniqueness one adds here the entropy condition H′(u_0) > V > H′(u_1) (see the numerical sketch following this list);

• u(t, x) is a continuous rarefaction wave of the form (9.5), where v satisfies the ordinary differential equation

$$v'(\xi)\left(H'(v(\xi)) - \xi\right) = 0. \qquad (9.7)$$
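The classification above can be turned into a small computation; the sketch below (written for the flux B(u) = u(1 − u) of Chapter 8; the helper names and parameter values are mine) returns the self-similar solution of the Riemann problem as a shock (9.6) or a rarefaction fan (9.7).

```python
import numpy as np

def H(u):  return u * (1.0 - u)
def dH(u): return 1.0 - 2.0 * u

def riemann_solution(u_left, u_right):
    """Self-similar solution v(xi), xi = x/t, of u_t + H(u)_x = 0 with step data."""
    if u_left == u_right:
        return lambda xi: np.full_like(np.asarray(xi, float), u_left)
    V = (H(u_left) - H(u_right)) / (u_left - u_right)   # Rankine-Hugoniot speed
    if dH(u_left) > V > dH(u_right):                    # entropy condition: shock
        return lambda xi: np.where(np.asarray(xi, float) < V, u_left, u_right)
    # otherwise a rarefaction fan: H'(v(xi)) = xi on the fan, i.e. v = (1 - xi)/2 here
    def v(xi):
        xi = np.asarray(xi, float)
        fan = (1.0 - xi) / 2.0
        return np.clip(np.where(xi < dH(u_left), u_left,
                       np.where(xi > dH(u_right), u_right, fan)), 0.0, 1.0)
    return v

v = riemann_solution(0.8, 0.2)           # u_l > u_r gives a rarefaction for this flux
print(v(np.array([-1.0, 0.0, 0.3, 1.0])))
```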

9.2 Nonlinear diffusion approximations

For initial conditions not of Riemann type, in particular those with integrable data, or for more general random initial conditions, obtaining solutions of the conservation law is not a simple matter, even in an approximate fashion. The usual approach then is to consider a parabolic regularization (the viscosity method) by considering the nonlinear diffusion equations

$$\frac{\partial u}{\partial t} + \frac{\partial H(u)}{\partial x} = \varepsilon L u, \qquad u(0, x) = u_0(x), \qquad (9.8)$$

where L is a dissipative operator of elliptic type, like, e.g., the Laplacian. Then, of course, with the exception of the quadratic case giving rise to the


Burgers equation, one cannot count on finding explicit solutions, but two types of asymptotic results can be used as approximations.

The first kind provides the large time asymptotics of the regularized conservation laws and the second kind gives a Monte Carlo method of solving them via the interacting diffusions scheme (so-called propagation of chaos). We will briefly describe the two approaches.
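Before discussing the asymptotics, here is a sketch of the viscosity method itself in action: an explicit finite-difference solver for u_t + (2p − 1)(u(1 − u))_x = εu_xx with Riemann-type data (the grid, time step and parameter values are illustrative choices, not taken from the text).

```python
import numpy as np

p, eps = 0.8, 0.05
L_dom, nx = 20.0, 400
dx = L_dom / nx
x = np.linspace(-L_dom / 2, L_dom / 2, nx)
u = np.where(x <= 0.0, 0.9, 0.1)                          # u_l = 0.9, u_r = 0.1

dt = 0.2 * min(dx**2 / eps, dx)                           # crude stability restriction
T, t = 5.0, 0.0
while t < T:
    F = u * (1.0 - u)
    dFdx = (np.roll(F, -1) - np.roll(F, 1)) / (2 * dx)    # centered convection term
    d2u = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # diffusion term
    u = u + dt * (-(2 * p - 1) * dFdx + eps * d2u)
    u[0], u[-1] = 0.9, 0.1                                # keep far-field states fixed
    t += dt

print("u stays between", u.min(), "and", u.max())
```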

Asymptotics for nonlinear diffusion equations. Not surprisingly, given the decay of their solutions in time, the large time asymptotic behavior of parabolically regularized conservation laws is dictated by the asymptotic behavior of the nonlinearity H(u) at points where the function is small. So, we have the following asymptotic results for regularized versions of the hyperbolic equations:

Theorem 1. Let ε > 0, m ≥ 1, and u(t, x) be a positive weak solution of the Cauchy problem

$$\frac{\partial u}{\partial t} + (2p-1)\frac{\partial F(u)}{\partial x} = \varepsilon\,\frac{\partial^2 u}{\partial x^2}, \qquad 1 \ge u(0, x) = u_0(x) \ge 0, \qquad (9.9)$$

with F(u) = u(1 − u)^m. Then

(i) If u_0 ∈ L^1(R) then u has the same large time asymptotics as the

solution of the linear diffusion equation

$$\frac{\partial u}{\partial t} + (2p-1)\frac{\partial u}{\partial x} = \varepsilon\,\frac{\partial^2 u}{\partial x^2}, \qquad 1 \ge u(0, x) = u_0(x) \ge 0, \qquad (9.10)$$

or more precisely

‖u(t, x)− U(t, x)‖1 → 0 as t→∞, (9.11)

where U(t, x) = (g ∗ u_0)(t, x − (2p − 1)t) and g(t, x) = (4πt)^{−1/2} exp(−|x|^2/(4t)) is the standard Gaussian kernel.

(ii) If 1 − u_0 ∈ L^1(R) then:

In the case m = 1, u has the same large time asymptotics as the solution of the linear diffusion equation.

In the case m = 2, u has the same large time asymptotics as the selfsimilar source solution of the Burgers equation, or more precisely, for each p > 1,

$$t^{(1-1/p)/2}\,\|u(t, x) - U_M(t, x)\|_p \to 0 \quad \text{as } t \to \infty, \qquad (9.12)$$

where

$$U_M(t, x) = t^{-1/2} \exp(-x^2/(4t)) \left(K(M) + \frac{1}{2}\int_0^{x/(2\sqrt{t})} \exp(-\xi^2/4)\, d\xi\right)^{-1}, \qquad (9.13)$$


and U_M(t, x) → Mδ(x) as t → 0, with M = ‖u_0‖_1.

In the case m ≥ 3, u has the same large time asymptotics as the solution of the heat equation, or more precisely, for each p > 1 there exists a constant C such that

‖u(t, x)− U(t, x)‖p ≤ Ct−(1−1/p)/2, (9.14)

where U(t, x) = (g ∗ u0)(t, x).

Sketch of the Proof. By the results of Escobedo and Zuazua (1991), Escobedo, Velazquez and Zuazua (1993) (see also Biler, Karch and Woyczynski (1999) for other regularizations of conservation laws), the asymptotic behavior of the solutions of the conservation laws depends on the asymptotic behavior of the nonlinearity H at its small values. So, for H(u) = (2p − 1)F(u) = (2p − 1)u(1 − u)^m,

$$\lim_{u\to 0} \frac{(2p-1)F(u)}{u} = 2p-1, \qquad (9.15)$$

and

$$\lim_{u\to 1} \frac{(2p-1)F(u)}{(1-u)^m} = 2p-1. \qquad (9.16)$$

The first condition, together with the standard step removing the drift term in the linear diffusion equation, gives (i), and the case m = 1 in the second condition gives the first part of (ii).

The critical case m = 2 yields the Burgers equation type asymptotics claimed in the second part of (ii), and in the supercritical case m ≥ 3 the effect of the nonlinear convection term disappears in the limit.

Theorem 2. Let ε > 0, N ≥ 1, and u(t, x) be a positive weak solution of the Cauchy problem

$$\frac{\partial u}{\partial t} + (2p-1)\frac{\partial G(u)}{\partial x} = \varepsilon\,\frac{\partial^2 u}{\partial x^2}, \qquad 1 \ge u(0, x) = u_0(x) \ge 0, \qquad (9.17)$$

with G(u) = (1 − u) − (1 − u)^{N+1}. Then if either u_0 ∈ L^1(R) or 1 − u_0 ∈ L^1(R), then u has the same large time asymptotics as the solution of the linear diffusion equation

$$\frac{\partial u}{\partial t} + (2p-1)\frac{\partial u}{\partial x} = \varepsilon\,\frac{\partial^2 u}{\partial x^2}, \qquad 1 \ge u(0, x) = u_0(x) \ge 0, \qquad (9.18)$$

or more precisely

‖u(t, x)− U(t, x)‖1 → 0 as t→∞, (9.19)


where U(t, x) = (g ∗ u_0)(t, x − (2p − 1)t) and g(t, x) = (4πt)^{−1/2} exp(−|x|^2/(4t)) is the standard Gaussian kernel.

Sketch of the Proof: The proof of this result relies on the same asymptotic results that were employed in the proof of Theorem 1. But in this case H(u) = (2p − 1)[(1 − u) − (1 − u)^{N+1}], which has linear asymptotics at both u = 0 and u = 1. So, the result follows by the usual reduction to the heat equation.


Chapter 10

Interacting diffusions approximations for nonlinear diffusion equations

10.1 Nonlinear processes

Interacting diffusions approximations for nonlinear diffusion equations. This section discusses a possibility of a Monte Carlo type approximation for solutions of nonlinear diffusion equations of the type that arise as parabolic regularizations of conservation laws of the type encountered before. The idea is to use the scheme known as the propagation of chaos result, which depends on the construction of the so-called nonlinear McKean process for our equations.

The basic observation is that if the regularizing operator L is the infinitesimal generator of a Levy process, then the parabolic equation of the previous chapter (say, ε = 1) can be formally interpreted as a “Fokker–Planck–Kolmogorov equation” for a “nonlinear” diffusion process in the McKean's sense. Indeed, consider a Markov process X(t), t ≥ 0, which is a solution of the stochastic differential equation

$$dX(t) = dS(t) - u^{-1}H(u(X(t), t))\, dt, \qquad (10.1)$$

$$X(0) \sim u_0(x)\, dx \quad \text{in law},$$

where S(t) is the Levy process with generator −L. Assuming that X(t) is a unique solution of (10.1), we see that the measure-valued function v(dx, t) =


P (X(t) ∈ dx) satisfies the weak forward equation

$$\frac{d}{dt}\langle v(t), \eta\rangle = \langle v(t), L_{u(t)}\eta\rangle, \qquad \eta \in \mathcal{S}(\mathbf{R}^n), \qquad (10.2)$$

$$v(0) = u(x, 0)\, dx,$$

with L_u = −L + u^{−1}H(u) · ∇. On the other hand, u(dx, t) = u(x, t) dx also solves (10.2), since

$$\frac{d}{dt}\langle u(t), \eta\rangle = \langle -Lu - \nabla\cdot H(u), \eta\rangle = \langle u, (-L + u^{-1}H(u)\cdot\nabla)\eta\rangle,$$

so that v(dx, t) = u(dx, t) and, by uniqueness, u is the density of the solution of (10.1).

10.2 Interacting diffusions and Monte-Carlo methods

The above construction makes possible approximation of solutions of the parabolic equations via finite systems of interacting diffusions. To illustrate our point we will formulate this Monte Carlo algorithm in the special and well-known Burgers equation case, where L = ∆ is the usual Laplacian and the nonlinearity H(x) = x^2 is quadratic. The more general results needed for the analysis of GS and multiserver queuing networks are under development (see Calderoni and Pulvirenti (1983), Sznitman (1991), Zhang (1995), Funaki and Woyczynski (1998), Woyczynski (1998), Biler, Funaki and Woyczynski (2000), Margolius, Subramanian and Woyczynski (2000), for more details on the subject).

For each n ∈ N, let us introduce independent, symmetric, real-valued standard Brownian motion processes S^i(t), i = 1, 2, . . . , n, and let δ_ε(x) := (2πε)^{−1/2} exp[−x^2/2ε], ε > 0, be a regularizing kernel. Consider a system of n interacting particles with positions {X^i(t)}_{i=1,...,n} ≡ {X^{i,n,ε}(t)}_{i=1,...,n}, and the corresponding measure-valued process (empirical distribution)

$$\mathbf{X}_{n}(t) \equiv \mathbf{X}_{n,\varepsilon}(t) := \frac{1}{n}\sum_{i=1}^n \delta\big(X^{i,n,\varepsilon}(t)\big),$$

with the dynamics provided by the system of regularized singular stochastic differential equations

$$dX^i(t) = dS^i(t) + \frac{1}{n}\sum_{j\ne i} \delta_\varepsilon\big(X^i(t) - X^j(t)\big)\, dt, \qquad i = 1, \ldots, n, \qquad (10.3)$$

and the initial conditions X^i(0) ∼ u_0(x) (in distribution; thus, u_0 ∈ L^1 here). Then, for each ε > 0, the empirical process X_{n,ε}(t) ⟹ u_ε(x, t) dx, in probability, as n → ∞, where ⟹ denotes the weak convergence of measures, and the limit density u_ε ≡ u_ε(x, t), t > 0, x ∈ R, satisfies the regularized Burgers equation

$$\frac{\partial u_\varepsilon}{\partial t} + \Big(\tfrac{1}{2}(\delta_\varepsilon * u_\varepsilon)\cdot u_\varepsilon\Big)_x = \Delta u_\varepsilon,$$

with the initial condition u(0, x) = u_0(x). The speed of convergence is controlled (see Bossy and Talay (1996)). Moreover, under some additional technical conditions, for a class of test functions φ,

$$E\big|\big\langle \mathbf{X}_{n,\varepsilon(n)}(t) - u(t), \varphi\big\rangle\big| \longrightarrow 0, \qquad \text{as } n \to \infty,\ \varepsilon(n) \to 0,$$

where u(t) = u(x, t) is a solution of the nonregularized Burgers equation u_t + (u^2)_x = ∆u with the initial condition u(0, x) = u_0(x).
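A minimal implementation sketch of the particle scheme (10.3) (with n, ε, the time step and the initial density u_0 chosen arbitrarily here, and the Brownian noise normalized to the generator L = ∆ used above):

```python
import numpy as np

rng = np.random.default_rng(8)
n, eps, dt, T = 1000, 0.1, 0.005, 0.5

def delta_eps(x):
    # (2*pi*eps)^(-1/2) * exp(-x^2/(2*eps)), the regularizing kernel above
    return np.exp(-x**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

X = rng.uniform(-1.0, 1.0, n)                        # X^i(0) ~ u_0 = Uniform(-1, 1)
for _ in range(int(T / dt)):
    interaction = delta_eps(X[:, None] - X[None, :]) # pairwise kernel values
    np.fill_diagonal(interaction, 0.0)               # exclude j = i
    drift = interaction.sum(axis=1) / n              # (1/n) sum_{j != i} delta_eps(X^i - X^j)
    X = X + drift * dt + np.sqrt(2 * dt) * rng.normal(size=n)  # S^i has generator Delta

# the empirical measure of the particles approximates u_eps(., t) dx
hist, edges = np.histogram(X, bins=20, density=True)
print(np.round(hist, 3))
```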