The Annals of Probability, 2013, Vol. 41, No. 3B, 2225–2260
DOI: 10.1214/11-AOP717
© Institute of Mathematical Statistics, 2013
ON THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION, BEFORE THE ONSET OF INTERMITTENCY
BY DANIEL CONUS^1, MATHEW JOSEPH^2 AND DAVAR KHOSHNEVISAN^3
Lehigh University, University of Utah and University of Utah
We consider a nonlinear stochastic heat equation $\partial_t u = \frac12\partial_{xx}u + \sigma(u)\,\partial_{xt}W$, where $\partial_{xt}W$ denotes space–time white noise and $\sigma:\mathbf{R}\to\mathbf{R}$ is Lipschitz continuous. We establish that, at every fixed time $t>0$, the global behavior of the solution depends in a critical manner on the structure of the initial function $u_0$: under suitable conditions on $u_0$ and $\sigma$, $\sup_{x\in\mathbf{R}}u_t(x)$ is a.s. finite when $u_0$ has compact support, whereas with probability one, $\limsup_{|x|\to\infty}u_t(x)/(\log|x|)^{1/6}>0$ when $u_0$ is bounded uniformly away from zero. This sensitivity to the initial data of the stochastic heat equation is a way to state that the solution to the stochastic heat equation is chaotic at fixed times, well before the onset of intermittency.
1. Introduction and main results. Let $W := \{W(t,x)\}_{t\ge0,x\in\mathbf{R}}$ denote a real-valued Brownian sheet indexed by two parameters $(t,x)\in\mathbf{R}_+\times\mathbf{R}$. That is, $W$ is a centered Gaussian process with covariance
$$\mathrm{Cov}\big(W(t,x),W(s,y)\big) = \min(t,s)\times\min(|x|,|y|)\times\mathbf{1}_{(0,\infty)}(xy). \tag{1.1}$$
And let us consider the nonlinear stochastic heat equation
$$\frac{\partial}{\partial t}u_t(x) = \frac{\kappa}{2}\,\frac{\partial^2}{\partial x^2}u_t(x) + \sigma(u_t(x))\,\frac{\partial^2}{\partial t\,\partial x}W(t,x), \tag{1.2}$$
where $x\in\mathbf{R}$, $t>0$, $\sigma:\mathbf{R}\to\mathbf{R}$ is a nonrandom and Lipschitz continuous function, $\kappa>0$ is a fixed viscosity parameter and the initial function $u_0:\mathbf{R}\to\mathbf{R}$ is bounded, nonrandom and measurable. The mixed partial derivative $\partial^2W(t,x)/(\partial t\,\partial x)$ is the so-called "space–time white noise," and is defined as a generalized Gaussian random field; see Chapter 2 of Gelfand and Vilenkin [17], Section 2.7, for example.
It is well known that the stochastic heat equation (1.2) has a (weak) solution $\{u_t(x)\}_{t>0,x\in\mathbf{R}}$ that is jointly continuous; it is also unique up to evanescence; see, for example, Chapter 3 of Walsh [23], (3.5), page 312. And the solution can be
Received March 2011; revised November 2011.
^1 Supported in part by the Swiss National Science Foundation Fellowship PBELP2-122879.
^2 Supported in part by the NSF Grant DMS-07-47758.
^3 Supported in part by the NSF Grants DMS-07-06728 and DMS-10-06903.
MSC2010 subject classifications. Primary 60H15; secondary 35R60.
Key words and phrases. Stochastic heat equation, chaos, intermittency.
written in mild form as the (a.s.) solution to the following stochastic integral equation:
$$u_t(x) = (p_t * u_0)(x) + \int_{(0,t)\times\mathbf{R}} p_{t-s}(y-x)\,\sigma(u_s(y))\,W(\mathrm{d}s\,\mathrm{d}y), \tag{1.3}$$
where
$$p_t(z) := \frac{1}{(2\pi\kappa t)^{1/2}}\exp\left(-\frac{z^2}{2\kappa t}\right) \qquad (t>0,\ z\in\mathbf{R}) \tag{1.4}$$
denotes the free-space heat kernel, and the final integral in (1.3) is a stochastic integral in the sense of Walsh [23], Chapter 2. Chapter 1 of the minicourse by Dalang et al. [10] contains a quick introduction to the topic of stochastic PDEs of the type considered here.
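The mild formulation above also suggests a direct numerical scheme. The following is a minimal sketch, not part of the paper: an explicit finite-difference Euler scheme for (1.2) on a periodic spatial grid, with space–time white noise approximated cell-by-cell by independent Gaussian increments of variance $\Delta t\,\Delta x$. The helper name `simulate_she` and every parameter choice are illustrative assumptions, not the authors' method.

```python
import numpy as np

def simulate_she(sigma, kappa=1.0, L=10.0, nx=200, T=0.1, nt=2000, seed=0):
    """Explicit Euler scheme for du = (kappa/2) u_xx dt + sigma(u) dW on a
    periodic grid over [-L, L], starting from u0 = 1.  Space-time white
    noise is approximated cell-by-cell by N(0, dt*dx) increments."""
    rng = np.random.default_rng(seed)
    dx = 2.0 * L / nx
    dt = T / nt
    # stability condition for the explicit heat step:
    assert kappa * dt / dx**2 <= 0.5
    u = np.ones(nx)
    for _ in range(nt):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
        dW = rng.standard_normal(nx) * np.sqrt(dt * dx) / dx  # W(cell)/dx
        u = u + 0.5 * kappa * dt * lap + sigma(u) * dW
    return np.linspace(-L, L, nx, endpoint=False), u
```

With $\sigma\equiv0$ the scheme preserves the flat profile $u\equiv1$ exactly, which is a convenient sanity check; with $\sigma$ bounded away from zero, the empirical spatial maximum can only heuristically be compared against the logarithmic growth rates discussed below, given the modest resolution.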
We are interested solely in the physically interesting case that $u_0(x)\ge0$ for all $x\in\mathbf{R}$. In that case, a minor variation of Mueller's comparison principle [21] implies that if in addition $\sigma(0)=0$, then with probability one $u_t(x)\ge0$ for all $t>0$ and $x\in\mathbf{R}$; see also Theorem 5.1 of Dalang et al. [10], page 130, as well as Theorem 2.1 below.
We follow Foondun and Khoshnevisan [15], and say that the solution $u := \{u_t(x)\}_{t>0,x\in\mathbf{R}}$ to (1.2) is (weakly) intermittent if
$$0 < \limsup_{t\to\infty}\frac{1}{t}\log\mathrm{E}\big(|u_t(x)|^\nu\big) < \infty \qquad \text{for all } \nu\ge2, \tag{1.5}$$
where "log" denotes the natural logarithm, to be concrete. Here, we refer to property (1.5), if and when it holds, as mathematical intermittency [to be distinguished from physical intermittency, which is a phenomenological property of an object that (1.2) is modeling].
If $\sigma(u) = \mathrm{const}\cdot u$ and $u_0$ is bounded from above and below uniformly, then the work of Bertini and Cancrini [2] and Mueller's comparison principle [21] together imply (1.5). In the fully nonlinear case, Foondun and Khoshnevisan [15] discuss a connection to nonlinear renewal theory, and use that connection to establish (1.5) under various conditions; for instance, they have shown that (1.5) holds provided that $\liminf_{|x|\to\infty}|\sigma(x)/x|>0$ and $\inf_{x\in\mathbf{R}}u_0(x)$ is sufficiently large.
If the lim sup in (1.5) is a bona fide limit, then we arrive at the usual description of intermittency in the literature of mathematics and theoretical physics; see, for instance, Molchanov [20] and Zeldovich et al. [24–26].
Mathematical intermittency is motivated strongly by a vast physics literature on (physical) intermittency and localization, and many of the references can be found in the combined bibliographies of [15, 20, 24, 25]. Let us say a few more words about "localization" in the present context.
It is generally accepted that if (1.5) holds, then $u := \{u_t(x)\}_{t>0,x\in\mathbf{R}}$ ought to undergo a separation of scales (or "pattern/period breaking"). In fact, one can argue that property (1.5) implies that, as $t\to\infty$, the random function $x\mapsto u_t(x)$ starts to develop very tall peaks, distributed over small $x$-intervals (see Section 2.4 of Bertini and Cancrini [2] and the Introduction of the monograph by Carmona and Molchanov [7]). This "peaking property" is called localization, and is experienced with very high probability, provided that: (i) the intermittency property (1.5) holds; and (ii) $t\gg1$.
Physical intermittency is expected to hold both in space and time, and not only when $t\gg1$. And it is also expected that there are physically-intermittent processes, not unlike those studied in the present paper, which, however, do not satisfy the (mathematical) intermittency condition (1.5) on Liapounov exponents; see, for example, the paper by Cherktov et al. [8].
Our wish is to better understand "physical intermittency" in the setting of the stochastic heat equation (1.2). We are motivated strongly by the literature on smooth finite-dimensional dynamical systems ([22], Section 1.3), which ascribes intermittency in part to "chaos," or slightly more precisely, sensitive dependence on the initial state of the system.
In order to describe the contributions of this paper, we first recall a consequence of a more general theorem of Foondun and Khoshnevisan [16]: if $\sigma(0)=0$, and if $u_0$ is Hölder continuous of index $>1/2$ and has compact support, then for every fixed $t>0$,
$$\limsup_{z\to\infty}\frac{1}{\log z}\,\log\mathrm{P}\Big\{\sup_{x\in\mathbf{R}}u_t(x)>z\Big\} = -\infty. \tag{1.6}$$
It follows in particular that the global maximum of the solution (at a fixed time) is a finite (nonnegative) random variable.
By contrast, one expects that if
$$\inf_{x\in\mathbf{R}}u_0(x) > 0, \tag{1.7}$$
then the solution $u_t$ is unbounded for all $t>0$. Here we prove that fact and a good deal more; namely, we demonstrate here that there in fact exists a minimum rate of "blowup" that applies regardless of the parameters of the problem.
A careful statement requires a technical condition that turns out to be necessary as well as sufficient. In order to discover that condition, let us consider the case that $u_0(x)\equiv\rho>0$ is a constant for all $x\in\mathbf{R}$. Then, (1.7) clearly holds; but there can be no blowup if $\sigma(\rho)=0$. Indeed, in that case the unique solution to the stochastic heat equation is $u_t(x)\equiv\rho$, which is bounded. Thus, in order to have an unbounded solution, we need at the very least to consider the case that $\sigma(x)\neq0$ for all $x>0$. [Note that $\sigma(0)=0$ is permitted.] Instead, we will assume the following seemingly stronger, but in fact more or less equivalent, condition from now on:
$$\sigma(x)>0 \qquad \text{for all } x\in\mathbf{R}\setminus\{0\}. \tag{1.8}$$
We are ready to present the first theorem of this paper. Here and throughout, we write "$f(R)\gtrsim g(R)$ as $R\to\infty$" in place of the more cumbersome "there exists a nonrandom $C>0$ such that $\liminf_{R\to\infty}f(R)/g(R)\ge C$." The largest such $C$ is called "the constant in $\gtrsim$." We might sometimes also write "$g(R)\lesssim f(R)$" in place of "$f(R)\gtrsim g(R)$." And there is a corresponding "constant in $\lesssim$."
THEOREM 1.1. Let $\{u_t(x)\}_{t>0,x\in\mathbf{R}}$ be a solution to
$$\frac{\partial}{\partial t}u_t(x) = \frac{\kappa}{2}\frac{\partial^2}{\partial x^2}u_t(x) + \sigma(u_t(x))\frac{\partial^2}{\partial t\,\partial x}W(t,x) \qquad (t>0,\ x\in\mathbf{R}) \tag{1.9}$$
written in mild form (1.3), where the initial function $u_0:\mathbf{R}\to\mathbf{R}$ is bounded, nonrandom and satisfies
$$\inf_{x\in\mathbf{R}}u_0(x)>0. \tag{1.10}$$
Then, the following hold:

(1) If $\inf_{x\in\mathbf{R}}\sigma(x)\ge\epsilon_0>0$ and $t>0$, then a.s.,
$$\sup_{x\in[-R,R]}u_t(x) \gtrsim \frac{(\log R)^{1/6}}{\kappa^{1/12}} \qquad \text{as } R\to\infty; \tag{1.11}$$
and the constant in $\gtrsim$ does not depend on $\kappa$.

(2) If $\sigma(x)>0$ for all $x\in\mathbf{R}$ and there exists $\gamma\in(0,1/6)$ such that
$$\lim_{|x|\to\infty}\sigma(x)\,\big(\log|x|\big)^{(1/6)-\gamma} = \infty, \tag{1.12}$$
then for all $t>0$ the following holds almost surely:
$$\sup_{x\in[-R,R]}|u_t(x)| \gtrsim \frac{(\log R)^{\gamma}}{\kappa^{1/12}} \qquad \text{as } R\to\infty; \tag{1.13}$$
and the constant in $\gtrsim$ does not depend on $\kappa$.
Note in particular that if $\sigma$ is uniformly bounded below then a.s.,
$$\limsup_{|x|\to\infty}\frac{u_t(x)}{(\log|x|)^{1/6}} \ge \frac{\mathrm{const}}{\kappa^{1/12}}. \tag{1.14}$$
We believe that it is a somewhat significant fact that a rate $(\log R)^{1/6}$ of blowup exists that is valid for all $u_0$ and $\sigma$ in the first part of Theorem 1.1. However, the actual numerical estimate, that is, the $(1/6)$th power of the logarithm, appears to be less significant, as the behavior in $\kappa$ might suggest (see Remark 1.5). In fact, we believe that the actual blowup rate might depend critically on the fine properties of the function $\sigma$. Next, we highlight this assertion in one particularly interesting case. Here and throughout, we write "$f(R)\asymp g(R)$ as $R\to\infty$" as shorthand for "$f(R)\gtrsim g(R)$ and $g(R)\gtrsim f(R)$ as $R\to\infty$." The two constants in the preceding two $\gtrsim$'s are called the "constants in $\asymp$."
THEOREM 1.2. If $\sigma$ is uniformly bounded away from $0$ and $\infty$ and $t>0$, then
$$\sup_{x\in[-R,R]}u_t(x) \asymp \frac{(\log R)^{1/2}}{\kappa^{1/4}} \qquad \text{a.s. as } R\to\infty. \tag{1.15}$$
Moreover, for every fixed $\kappa_0>0$, the preceding constants in $\asymp$ do not depend on $\kappa\ge\kappa_0$.

In particular, we find that if $\sigma$ is bounded uniformly away from $0$ and $\infty$, then there exist constants $c_*, c^*\in(0,\infty)$ such that
$$\frac{c_*}{\kappa^{1/4}} \le \limsup_{|x|\to\infty}\frac{u_t(x)}{(\log|x|)^{1/2}} \le \frac{c^*}{\kappa^{1/4}} \qquad \text{a.s.}, \tag{1.16}$$
uniformly for all $\kappa\ge\kappa_0$.

The preceding discusses the behavior in the case that $\sigma$ is bounded uniformly away from $0$; that is, a uniformly-noisy stochastic heat equation (1.2). In general, we can say little about the remaining case that $\sigma(0)=0$. Nevertheless, in the well-known parabolic Anderson model, namely (1.2) with $\sigma(x)=cx$ for some constant $c>0$, we are able to obtain some results (Theorem 1.3) that parallel Theorems 1.1 and 1.2.
THEOREM 1.3. If $\sigma(x)=cx$ for some $c>0$, then a.s.,
$$\log\sup_{x\in[-R,R]}u_t(x) \asymp \frac{(\log R)^{2/3}}{\kappa^{1/3}} \qquad \text{as } R\to\infty, \tag{1.17}$$
and the constants in $\asymp$ do not depend on $\kappa>0$.

Hence, when $\sigma(x)=cx$ we can find constants $C_*, C^*\in(0,\infty)$ such that
$$0 < \limsup_{|x|\to\infty}\frac{u_t(x)}{\exp\{C_*(\log|x|)^{2/3}/\kappa^{1/3}\}} \le \limsup_{|x|\to\infty}\frac{u_t(x)}{\exp\{C^*(\log|x|)^{2/3}/\kappa^{1/3}\}} < \infty,$$
almost surely.
REMARK 1.4. Thanks to (1.3), and since Walsh stochastic integrals have zero mean, it follows that $\mathrm{E}\,u_t(x) = (p_t*u_0)(x)$. In particular, $\mathrm{E}\,u_t(x) \le \sup_{x\in\mathbf{R}}u_0(x)$ is uniformly bounded. Since $u_t(x)$ is nonnegative, it follows from Fatou's lemma that $\liminf_{|x|\to\infty}u_t(x)<\infty$ a.s. Thus, the behavior described by Theorem 1.1 is one about the highly-oscillatory nature of $x\mapsto u_t(x)$, valid for every fixed time $t>0$. We will say a little more about this topic in Appendix B below.
REMARK 1.5. We pay some attention to the powers of the viscosity parameter $\kappa$ in Theorems 1.2 and 1.3. Those powers suggest that at least two distinct universality classes can be associated to (1.2): (i) when $\sigma$ is bounded uniformly away from zero and infinity, the solution behaves as a random walk in a weakly-interacting random environment; and (ii) when $\sigma(x)=cx$ for some $c>0$, then the solution behaves as objects that arise in some random matrix models.
REMARK 1.6. In [18], (2), Kardar, Parisi and Zhang consider the solution $u$ to (1.2) and apply formally the Hopf–Cole transformation $u_t(x) := \exp(\lambda h_t(x))$ to deduce that $h := \{h_t(x)\}_{t\ge0,x\in\mathbf{R}}$ satisfies the following "SPDE": for $t>0$ and $x\in\mathbf{R}$,
$$\frac{\partial}{\partial t}h_t(x) = \frac{\kappa}{2}\frac{\partial^2}{\partial x^2}h_t(x) + \frac{\kappa\lambda}{2}\left(\frac{\partial}{\partial x}h_t(x)\right)^2 + \frac{\partial^2}{\partial t\,\partial x}W(t,x). \tag{1.18}$$
This is the celebrated "KPZ equation," named after the authors of [18], and the random field $h$ is believed to be a universal object (e.g., it is expected to arise as a continuum limit of a large number of interacting particle systems). Theorem 1.3 implies that there exist positive and finite constants $a_t$ and $A_t$, depending only on $t$, such that
$$\frac{a_t}{\kappa^{1/3}} < \limsup_{|x|\to\infty}\frac{h_t(x)}{(\log|x|)^{2/3}} < \frac{A_t}{\kappa^{1/3}} \qquad \text{a.s. for all } t>0. \tag{1.19}$$
This is purely formal, but only because the construction of $h$ via $u$ is not rigorous. More significantly, our proofs suggest strongly a kind of asymptotic space–time scaling "$|\log x| \approx t^{\pm1/2}$." If so, then the preceding verifies that the fluctuation exponent $1/z$ of $h$ is $2/3$ under quite general conditions on $h_0$. The latter has been predicted by Kardar et al. [18], page 890, and proved by Balázs, Quastel and Seppäläinen [1] for a special choice of $u_0$ (hence $h_0$) and $t\to\infty$.
The proofs of our three theorems involve a fairly long series of technical computations. Therefore, we conclude the Introduction with a few remarks on the methods of proof for the preceding three theorems, in order to highlight the "pictures behind the proofs."

Theorem 1.1 relies on two well-established techniques from interacting particle systems [14, 19]: namely, comparison and coupling. Comparison reduces our problem to the case that $u_0$ is a constant; at a technical level this uses Mueller's comparison principle [21]. And we use coupling on a few occasions: first, we describe a two-step coupling of $\{u_t(x)\}_{t>0,x\in\mathbf{R}}$ to the solution $\{v_t(x)\}_{t>0,x\in\mathbf{R}}$ of (1.2), using the same space–time white noise $\partial^2W/(\partial t\,\partial x)$, in the case that $\sigma$ is bounded below uniformly on $\mathbf{R}$. The latter quantity [i.e., $\{v_t(x)\}_{t>0,x\in\mathbf{R}}$] turns out to be more amenable to moment analysis than $\{u_t(x)\}_{t>0,x\in\mathbf{R}}$, and in this way we obtain the following a priori estimate, valid for every fixed $t>0$:
$$\log\inf_{x\in\mathbf{R}}\mathrm{P}\{u_t(x)\ge\lambda\} \gtrsim -\sqrt{\kappa}\,\lambda^6 \qquad \text{as } \lambda\to\infty. \tag{1.20}$$
Theorem 1.1 follows immediately from this and the Borel–Cantelli lemma, provided that we prove that if $x$ and $x'$ are "$O(1)$ distance apart," then $u_t(x)$ and $u_t(x')$ are "approximately independent." A quantitative version of this statement follows from coupling $\{u_t(x)\}_{t>0,x\in\mathbf{R}}$ to the solution $\{w_t(x)\}_{t>0,x\in\mathbf{R}}$ of a random evolution equation that can be thought of as the "localization" of the original stochastic heat equation (1.2). The localized approximation $\{w_t(x)\}_{t>0,x\in\mathbf{R}}$ has the property that $w_t(x)$ and $w_t(x')$ are (exactly) independent for "most" values of $x$ and $x'$ that are $O(1)$ distance apart. And this turns out to be adequate for our needs.
Theorem 1.2 requires establishing separately a lower and an upper bound on $\sup_{x\in[-R,R]}u_t(x)$. Both bounds rely heavily on the following quantitative improvement of (1.20): if $\sigma$ is bounded, then
$$\log\inf_{x\in\mathbf{R}}\mathrm{P}\{u_t(x)\ge\lambda\} \gtrsim -\sqrt{\kappa}\,\lambda^2 \qquad \text{as } \lambda\to\infty. \tag{1.21}$$
And, as it turns out, the preceding lower bound will perforce imply a corresponding upper estimate,
$$\log\inf_{x\in\mathbf{R}}\mathrm{P}\{u_t(x)\ge\lambda\} \lesssim -\sqrt{\kappa}\,\lambda^2 \qquad \text{as } \lambda\to\infty. \tag{1.22}$$
The derivation of the lower bound on $\sup_{x\in[-R,R]}u_t(x)$ follows closely the proof of Theorem 1.1, after (1.21) and (1.22) are established. Therefore, the remaining details will be omitted.

The upper bound on $\sup_{x\in[-R,R]}u_t(x)$ requires only (1.22) and a well-known quantitative version of the Kolmogorov continuity theorem.
Our proof of Theorem 1.3 has a similar flavor to that of Theorem 1.1, for the lower bound, and Theorem 1.2, for the upper bound. We make strong use of the moment formulas of Bertini and Cancrini [2], Theorem 2.6. [This is why we are only able to study the linear equation in the case that $\sigma(0)=0$.]
Throughout this paper, we use the following abbreviation:
$$u_t^*(R) := \sup_{x\in[-R,R]}u_t(x) \qquad (R>0). \tag{1.23}$$
We will also need the following elementary facts about the heat kernel:
$$\|p_s\|_{L^2(\mathbf{R})}^2 = (4\pi\kappa s)^{-1/2} \qquad \text{for every } s>0; \tag{1.24}$$
and therefore,
$$\int_0^t \|p_s\|_{L^2(\mathbf{R})}^2\,\mathrm{d}s = \sqrt{t/(\pi\kappa)} \qquad \text{for all } t\ge0. \tag{1.25}$$
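As a quick numerical sanity check (an illustration, not part of the paper), the identities (1.24) and (1.25) can be verified by elementary quadrature; all parameter values below are arbitrary.

```python
import numpy as np

# Verify (1.24): ||p_s||_{L^2(R)}^2 = (4*pi*kappa*s)^(-1/2), via the
# trapezoid rule on a wide grid (the Gaussian tails beyond |z| = 25 are
# negligible for these parameter values).
kappa, s = 0.7, 1.3
z = np.linspace(-25.0, 25.0, 120001)
p = np.exp(-z**2 / (2 * kappa * s)) / np.sqrt(2 * np.pi * kappa * s)
f = p**2
lhs = float(np.sum((f[:-1] + f[1:]) * np.diff(z) / 2))  # manual trapezoid rule
rhs = (4 * np.pi * kappa * s) ** -0.5
assert abs(lhs - rhs) < 1e-6

# Verify (1.25): the s-antiderivative of (4*pi*kappa*s)^(-1/2) is
# 2*sqrt(s)/sqrt(4*pi*kappa) = sqrt(s/(pi*kappa)), so the integral over
# [0, t] equals sqrt(t/(pi*kappa)).
t = 2.0
assert abs(2 * np.sqrt(t) / np.sqrt(4 * np.pi * kappa)
           - np.sqrt(t / (np.pi * kappa))) < 1e-12
```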
We will tacitly write $\mathrm{Lip}_\sigma$ for the optimal Lipschitz constant of $\sigma$; that is,
$$\mathrm{Lip}_\sigma := \sup_{-\infty<x\neq x'<\infty}\left|\frac{\sigma(x)-\sigma(x')}{x-x'}\right|. \tag{1.26}$$
Of course, $\mathrm{Lip}_\sigma$ is finite because $\sigma$ is Lipschitz continuous. Finally, we use the following notation for the $L^\nu(\mathrm{P})$ norm of a random variable $Z\in L^\nu(\mathrm{P})$:
$$\|Z\|_\nu := \{\mathrm{E}(|Z|^\nu)\}^{1/\nu}. \tag{1.27}$$
2. Mueller's comparison principle and a reduction. Mueller's comparison principle [21] is one of the cornerstones of the theory of stochastic PDEs. In its original form, Mueller's comparison principle is stated for an equation that is similar to (1.2), but for two differences: (i) $\sigma(z) := \kappa z$ for some $\kappa>0$; and (ii) the variable $x$ takes values in a compact interval such as $[0,1]$. In his Utah Minicourse ([10], Theorem 5.1, page 130), Mueller outlines how one can also include the more general functions $\sigma$ of the type studied here. And in both cases, the proofs assume that the initial function $u_0$ has compact support. Below we state and prove a small variation of the preceding comparison principles that shows that Mueller's theory continues to work when: (i) the variable $x$ takes values in $\mathbf{R}$; and (ii) the initial function $u_0$ is not necessarily compactly supported.
THEOREM 2.1 (Mueller's comparison principle). Let $u_0^{(1)}$ and $u_0^{(2)}$ denote two nonnegative bounded continuous functions on $\mathbf{R}$ such that $u_0^{(1)}(x)\ge u_0^{(2)}(x)$ for all $x\in\mathbf{R}$. Let $u_t^{(1)}(x)$, $u_t^{(2)}(x)$ be solutions to (1.2) with respective initial functions $u_0^{(1)}$ and $u_0^{(2)}$. Then,
$$\mathrm{P}\big\{u_t^{(1)}(x)\ge u_t^{(2)}(x) \text{ for all } t>0 \text{ and } x\in\mathbf{R}\big\} = 1. \tag{2.1}$$
PROOF. Because the solution to (1.2) is continuous in $(t,x)$, it suffices to prove that
$$\mathrm{P}\big\{u_t^{(1)}(x)\ge u_t^{(2)}(x)\big\} = 1 \qquad \text{for all } t>0 \text{ and } x\in\mathbf{R}. \tag{2.2}$$
In the case that $u_0^{(1)}$ and $u_0^{(2)}$ both have bounded support, the preceding is proved almost exactly as in Theorem 3.1 of Mueller [21]. For general $u_0^{(1)}$ and $u_0^{(2)}$, we proceed as follows.

Let $v_0:\mathbf{R}\to\mathbf{R}_+$ be a bounded and measurable initial function, and define a new initial function $v_0^{[N]}:\mathbf{R}\to\mathbf{R}_+$ as
$$v_0^{[N]}(x) := \begin{cases}
v_0(x), & \text{if } |x|\le N,\\
v_0(N)\,(-x+N+1), & \text{if } N<x<N+1,\\
v_0(-N)\,(x+N+1), & \text{if } -(N+1)<x<-N,\\
0, & \text{if } |x|\ge N+1.
\end{cases} \tag{2.3}$$
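For concreteness, the interpolation (2.3) is straightforward to implement. The sketch below, with the hypothetical helper name `truncate_v0` (not from the paper), reproduces the four cases and makes it easy to check continuity at $\pm N$ and vanishing beyond $\pm(N+1)$.

```python
import numpy as np

def truncate_v0(v0, N):
    """Return the truncated initial profile v0^[N] of (2.3): equal to v0 on
    [-N, N], linearly interpolated down to 0 on N < |x| < N + 1, and 0
    outside [-(N+1), N+1]."""
    def v0N(x):
        x = np.asarray(x, dtype=float)
        out = np.zeros_like(x)
        core = np.abs(x) <= N
        out[core] = v0(x[core])
        right = (x > N) & (x < N + 1)
        out[right] = v0(N) * (-x[right] + N + 1)   # v0(N) at x=N, 0 at x=N+1
        left = (x > -(N + 1)) & (x < -N)
        out[left] = v0(-N) * (x[left] + N + 1)     # v0(-N) at x=-N, 0 at x=-(N+1)
        return out
    return v0N
```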
Then, let $v_t^{[N]}(x)$ be the solution to (1.2) with initial condition $v_0^{[N]}$. We claim that
$$\delta_t^{[N]}(x) := v_t(x) - v_t^{[N]}(x) \to 0 \qquad \text{in probability as } N\to\infty. \tag{2.4}$$
Let $u_t^{(1),[N]}$ and $u_t^{(2),[N]}$ denote the solutions to (1.2) with initial conditions $u_0^{(1),[N]}$ and $u_0^{(2),[N]}$, respectively, where the latter are defined similarly as $v_0^{[N]}$ above. Now, (2.4) has the desired result because it shows that $u_t^{(1),[N]}(x)\to u_t^{(1)}(x)$ and $u_t^{(2),[N]}(x)\to u_t^{(2)}(x)$ in probability as $N\to\infty$. Since $u_t^{(1),[N]}(x)\ge u_t^{(2),[N]}(x)$ a.s. for all $t>0$ and $x\in\mathbf{R}$, (2.2) follows from taking limits.
In order to conclude we establish (2.4); in fact, we will prove that
$$\sup_{x\in\mathbf{R}}\sup_{t\in(0,T)}\mathrm{E}\big(\big|\delta_t^{[N]}(x)\big|^2\big) = O(1/N) \qquad \text{as } N\to\infty \tag{2.5}$$
for all fixed $T>0$. Recall that
$$\delta_t^{[N]}(x) = \big(p_t*\delta_0^{[N]}\big)(x) + \int_{(0,t)\times\mathbf{R}} p_{t-s}(y-x)\big\{\sigma(v_s(y))-\sigma\big(v_s^{[N]}(y)\big)\big\}\,W(\mathrm{d}s\,\mathrm{d}y). \tag{2.6}$$
Because $(p_t*\delta_0^{[N]})(x) \le 2(\sup_{z\in\mathbf{R}}v_0(z))\int_{|y|>N}p_t(y)\,\mathrm{d}y$, a direct estimate of the latter stochastic integral yields
$$\begin{aligned}
\mathrm{E}\big(\big|\delta_t^{[N]}(x)\big|^2\big) &\le \mathrm{const}\cdot t^{-1/2}e^{-N^2/(2t)} + \mathrm{const}\cdot\mathrm{Lip}_\sigma^2\int_0^t\mathrm{d}s\int_{-\infty}^{\infty}\mathrm{d}y\ p_{t-s}^2(y-x)\,\mathrm{E}\big(\big|\delta_s^{[N]}(y)\big|^2\big)\\
&\le \mathrm{const}\cdot t^{-1/2}e^{-N^2/(2t)} + \mathrm{const}\cdot\mathrm{Lip}_\sigma^2\,e^{\beta t}M_t^{[N]}(\beta)\int_0^{\infty}e^{-\beta r}\|p_r\|_{L^2(\mathbf{R})}^2\,\mathrm{d}r,
\end{aligned} \tag{2.7}$$
where $\beta>0$ is, for the moment, arbitrary and
$$M_t^{[N]}(\beta) := \sup_{s\in(0,t),\,y\in\mathbf{R}}\big[e^{-\beta s}\,\mathrm{E}\big(\big|v_s(y)-v_s^{[N]}(y)\big|^2\big)\big]. \tag{2.8}$$
We multiply both sides of (2.7) by $\exp(-\beta t)$ and take the supremum over all $t\in(0,T)$, where $T>0$ is fixed. An application of (1.24) yields
$$M_T^{[N]}(\beta) \le \mathrm{const}\cdot\Big[\sup_{t\in(0,T)}\big\{t^{-1/2}e^{-N^2/(2t)}\big\} + \beta^{-1/2}M_T^{[N]}(\beta)\Big]. \tag{2.9}$$
The quantity in $\sup_{t\in(0,T)}\{\cdots\}$ is proportional to $1/N$ (with the constant of proportionality depending on $T$), and the implied constant does not depend on $\beta$. Therefore, if $\beta$ is selected sufficiently large, then $M_T^{[N]}(\beta) = O(1/N)$ as $N\to\infty$ for that choice of $\beta$. This implies (2.4). □
Next we apply Mueller's comparison principle to make a helpful simplification to our problem.
Because $\underline{B} := \inf_{x\in\mathbf{R}}u_0(x) > 0$ and $\overline{B} := \sup_{x\in\mathbf{R}}u_0(x) < \infty$, it follows from Theorem 2.1 that almost surely,
$$\underline{u}_t(x) \le u_t(x) \le \overline{u}_t(x) \qquad \text{for all } t>0 \text{ and } x\in\mathbf{R}, \tag{2.10}$$
where $\underline{u}$ solves the stochastic heat equation (1.2) starting from the initial function $\underline{u}_0(x) := \underline{B}$, and $\overline{u}$ solves (1.2) starting from $\overline{u}_0(x) := \overline{B}$. This shows that it suffices to prove Theorems 1.1, 1.2 and 1.3 with $u_t(x)$ everywhere replaced by $\underline{u}_t(x)$ and $\overline{u}_t(x)$. In other words, we can assume without loss of generality that $u_0$ is identically a constant. In order to simplify the notation, we will assume from now on that the mentioned constant is one. A quick inspection of the ensuing proofs reveals that this assumption is harmless. Thus, from now on, we consider in place of (1.2) the following parabolic stochastic PDE:
$$\begin{cases}
\dfrac{\partial}{\partial t}u_t(x) = \dfrac{\kappa}{2}\dfrac{\partial^2}{\partial x^2}u_t(x) + \sigma(u_t(x))\dfrac{\partial^2}{\partial t\,\partial x}W(t,x) & (t>0,\ x\in\mathbf{R}),\\[2mm]
u_0(x) = 1.
\end{cases} \tag{2.11}$$
We can write its solution in mild form as follows:
$$u_t(x) = 1 + \int_{[0,t]\times\mathbf{R}} p_{t-s}(y-x)\,\sigma(u_s(y))\,W(\mathrm{d}s\,\mathrm{d}y). \tag{2.12}$$
3. Tail probability estimates. In this section we derive the following corollary, which estimates the tails of the distribution of $u_t(x)$, where $u_t(x)$ solves (2.11) and (2.12). In fact, Corollary 3.5 and Propositions 3.7 and 3.8 below readily imply the following:
COROLLARY 3.1. If $\inf_{z\in\mathbf{R}}\sigma(z) > 0$, then for all $t>0$,
$$-\sqrt{\kappa}\,\lambda^6 \lesssim \log\mathrm{P}\{|u_t(x)|\ge\lambda\} \lesssim -\sqrt{\kappa}\,(\log\lambda)^{3/2}, \tag{3.1}$$
uniformly for $x\in\mathbf{R}$ and $\lambda>e$. And the constants in $\lesssim$ do not depend on $\kappa$.

If (1.12) holds for some $\gamma\in(0,1/6)$, then for all $t>0$,
$$-\kappa^{1/(12\gamma)}\lambda^{1/\gamma} \lesssim \log\mathrm{P}\{|u_t(x)|\ge\lambda\} \lesssim -\sqrt{\kappa}\,(\log\lambda)^{3/2}, \tag{3.2}$$
uniformly for all $x\in\mathbf{R}$ and $\lambda>e$. And the constants in $\lesssim$ do not depend on $\kappa$.
3.1. An upper-tail estimate. We begin by working toward the upper bound in Corollary 3.1.
LEMMA 3.2. Fix $T>0$, and define $a := T(\mathrm{Lip}_\sigma\vee1)^4/(2\kappa)$. Then, for all real numbers $k\ge1$,
$$\sup_{x\in\mathbf{R}}\sup_{t\in[0,T]}\mathrm{E}\big(|u_t(x)|^k\big) \le C^ke^{ak^3} \qquad \text{where } C := 8\left(1 + \frac{|\sigma(0)|}{2^{1/4}(\mathrm{Lip}_\sigma\vee1)}\right).$$
PROOF. We follow closely the proof of Theorem 2.1 of [15], but matters simplify considerably in the present, more specialized, setting.

First of all, we note that because $u_0\equiv1$ is spatially homogeneous, the distribution of $u_t(x)$ does not depend on $x$; this property was observed earlier by Dalang [11], for example.
Therefore, an application of Burkholder's inequality, using the Carlen–Kree bound [6] on Davis's optimal constant [12] in the Burkholder–Davis–Gundy inequality [3–5], and Minkowski's inequality imply the following: for all $t\ge0$, $\beta>0$ and $x\in\mathbf{R}$,
$$\begin{aligned}
\|u_t(x)\|_k &\le 1 + \left\|\int_{[0,t]\times\mathbf{R}} p_{t-s}(y-x)\,\sigma(u_s(y))\,W(\mathrm{d}s\,\mathrm{d}y)\right\|_k\\
&\le 1 + e^{\beta t}\,2\sqrt{k}\,\Big(|\sigma(0)| + \mathrm{Lip}_\sigma\sup_{r\ge0}\big[e^{-2\beta r}\|u_r(x)\|_k\big]\Big)\times\left(\int_0^\infty e^{-2\beta s}\|p_s\|_2^2\,\mathrm{d}s\right)^{1/2}\\
&= 1 + \frac{\sqrt{k}\,e^{\beta t}}{(8\kappa\beta)^{1/4}}\Big(|\sigma(0)| + \mathrm{Lip}_\sigma\sup_{r\ge0}\big[e^{-2\beta r}\|u_r(x)\|_k\big]\Big).
\end{aligned} \tag{3.3}$$
See Foondun and Khoshnevisan [15], Lemma 3.3, for the details of the derivation of such an estimate. (Although Lemma 3.3 of [15] is stated for even integers $k\ge2$, a simple variation on the proof of that lemma implies the result for general $k\ge1$; see Conus and Khoshnevisan [9].) It follows that
$$\phi(\beta,k) := \sup_{t\ge0}\big[e^{-\beta t}\|u_t(x)\|_k\big] \tag{3.4}$$
satisfies
$$\phi(\beta,k) \le 1 + \frac{\sqrt{k}}{(4\kappa\beta)^{1/4}}\big(|\sigma(0)| + \mathrm{Lip}_\sigma\,\phi(\beta,k)\big). \tag{3.5}$$
If $\mathrm{Lip}_\sigma=0$, then clearly $\phi(\beta,k)<\infty$. If $\mathrm{Lip}_\sigma>0$, then $\phi(\beta,k)<\infty$ for all $\beta>k^2\,\mathrm{Lip}_\sigma^4/(4\kappa)$; therefore, the preceding proves that if $\beta>k^2\,\mathrm{Lip}_\sigma^4/(4\kappa)$, then
$$\phi(\beta,k) \le \frac{1}{1 - \big(\sqrt{k}\,\mathrm{Lip}_\sigma/(4\kappa\beta)^{1/4}\big)}\cdot\left(1 + \frac{\sqrt{k}\,|\sigma(0)|}{(4\kappa\beta)^{1/4}}\right). \tag{3.6}$$
We apply this with $\beta := k^2(\mathrm{Lip}_\sigma\vee1)^4/(2\kappa)$ to obtain the lemma. □
REMARK 3.3. In the preceding results, the term $\mathrm{Lip}_\sigma\vee1$ appears in place of the more natural quantity $\mathrm{Lip}_\sigma$ only because it can happen that $\mathrm{Lip}_\sigma=0$. In the latter case, $\sigma$ is a constant function, and the machinery of Lemma 3.2 is not needed, since $u_t(x)$ is a centered Gaussian process with a variance that can be estimated readily. (We remind the reader that the case where $\sigma$ is a constant is covered by Theorem 1.2; see Section 6.)
Next we describe a real-variable lemma that shows how to transform the moment estimate of Lemma 3.2 into subexponential moment estimates.
LEMMA 3.4. Suppose $X$ is a nonnegative random variable that satisfies the following: there exist finite numbers $a, C>0$ and $b>1$ such that
$$\mathrm{E}(X^k) \le C^ke^{ak^b} \qquad \text{for all real numbers } k\ge1. \tag{3.7}$$
Then, $\mathrm{E}\exp\{\alpha(\log_+X)^{b/(b-1)}\} < \infty$, for $\log_+u := \log(u\vee e)$, provided that
$$0 < \alpha < \frac{1-b^{-1}}{(ab)^{1/(b-1)}}. \tag{3.8}$$
Lemmas 3.2 and 3.4 and Chebyshev's inequality together imply the following result.
COROLLARY 3.5. Choose and fix $T>0$, and define $c_0 := \sqrt{2/3} \approx 0.8165$. Then for all $\alpha < c_0\sqrt{\kappa}/(\sqrt{T}(\mathrm{Lip}_\sigma\vee1))$,
$$\sup_{x\in\mathbf{R}}\sup_{t\in[0,T]}\mathrm{E}\big(e^{\alpha(\log_+u_t(x))^{3/2}}\big) < \infty. \tag{3.9}$$
Consequently,
$$\limsup_{\lambda\to\infty}\frac{1}{(\log\lambda)^{3/2}}\sup_{x\in\mathbf{R}}\sup_{t\in[0,T]}\log\mathrm{P}\{u_t(x)>\lambda\} \le -\frac{c_0\sqrt{\kappa}}{\sqrt{T}(\mathrm{Lip}_\sigma\vee1)}. \tag{3.10}$$
We skip the derivation of Corollary 3.5 from Lemma 3.4, as it is immediate. The result holds uniformly in $t\in[0,T]$ and $x\in\mathbf{R}$ because the constants $a$ and $C$ in Lemma 3.2 are independent of $t$ and $x$. Instead, we verify Lemma 3.4.
PROOF OF LEMMA 3.4. Because
$$\Big[\log_+\Big(\frac{X}{C}\Big)\Big]^{b/(b-1)} \le 2^{b/(b-1)}\cdot\big\{(\log_+X)^{b/(b-1)} + (\log_+C)^{b/(b-1)}\big\}, \tag{3.11}$$
we can assume without loss of generality that $C=1$; for otherwise we may consider $X/C$ in place of $X$ from here on.

For all $z>e$, Chebyshev's inequality implies that
$$\mathrm{P}\big\{e^{\alpha(\log_+X)^{b/(b-1)}} > z\big\} \le e^{-\max_kg(k)}, \tag{3.12}$$
where
$$g(k) := k\left(\frac{\log z}{\alpha}\right)^{(b-1)/b} - ak^b. \tag{3.13}$$
One can check directly that $\max_kg(k) = c\log z$, where
$$c := \frac{1-b^{-1}}{\alpha\cdot(ab)^{1/(b-1)}}. \tag{3.14}$$
Thus, it follows that $\mathrm{P}\{\exp[\alpha(\log_+X)^{b/(b-1)}] > z\} = O(z^{-c})$ as $z\to\infty$. Consequently, $\mathrm{E}\exp\{\alpha(\log_+X)^{b/(b-1)}\} < \infty$ as long as $c>1$; this is equivalent to the statement of the lemma. □
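The maximization behind (3.14) is elementary calculus: writing $A := ((\log z)/\alpha)^{(b-1)/b}$, the critical point is $k_* = (A/(ab))^{1/(b-1)}$ and $g(k_*) = (1-b^{-1})Ak_*$, which simplifies to $c\log z$. The numeric check below (an illustration, not part of the paper, with arbitrary sample parameters) confirms the identity.

```python
import math

def g(k, z, alpha, a, b):
    """g(k) = k*((log z)/alpha)^((b-1)/b) - a*k^b, as in (3.13)."""
    return k * (math.log(z) / alpha) ** ((b - 1) / b) - a * k ** b

z, alpha, a, b = 50.0, 0.3, 0.7, 3.0
A = (math.log(z) / alpha) ** ((b - 1) / b)
k_star = (A / (a * b)) ** (1.0 / (b - 1))      # critical point of g
c = (1.0 - 1.0 / b) / (alpha * (a * b) ** (1.0 / (b - 1)))
# max_k g(k) = c * log z, as claimed in (3.14):
assert abs(g(k_star, z, alpha, a, b) - c * math.log(z)) < 1e-10
# and k_star is a genuine maximum: nearby values of k give smaller g
assert g(1.01 * k_star, z, alpha, a, b) < g(k_star, z, alpha, a, b)
assert g(0.99 * k_star, z, alpha, a, b) < g(k_star, z, alpha, a, b)
```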
3.2. Lower-tail estimates. In this section we proceed to estimate the tail of the distribution of $u_t(x)$ from below. We first consider the simplest case, in which $\sigma$ is bounded uniformly from below, away from zero.
PROPOSITION 3.6. If $\epsilon_0 := \inf_{z\in\mathbf{R}}\sigma(z) > 0$, then for all $t>0$,
$$\inf_{x\in\mathbf{R}}\mathrm{E}\big(|u_t(x)|^{2k}\big) \ge \big(\sqrt{2}+o(1)\big)(\mu_tk)^k \qquad (\text{as } k\to\infty), \tag{3.15}$$
where the "$o(1)$" term depends only on $k$, and
$$\mu_t := \frac{2}{e}\cdot\epsilon_0^2\sqrt{\frac{t}{\pi\kappa}}. \tag{3.16}$$
PROOF. Because the initial function in (2.11) is $u_0(x)\equiv1$, it follows that the distribution of $u_t(x)$ does not depend on $x$; see Dalang [11]. Therefore, the "inf" in the statement of the proposition is superfluous.

Throughout, let us fix $x\in\mathbf{R}$ and $t>0$. Now we may consider a mean-one martingale $\{M_\tau\}_{0\le\tau\le t}$ defined as follows:
$$M_\tau := 1 + \int_{(0,\tau)\times\mathbf{R}} p_{t-s}(y-x)\,\sigma(u_s(y))\,W(\mathrm{d}s\,\mathrm{d}y) \qquad (0\le\tau\le t). \tag{3.17}$$
The quadratic variation of this martingale is
$$\langle M\rangle_\tau = \int_0^\tau\mathrm{d}s\int_{-\infty}^{\infty}\mathrm{d}y\ p_{t-s}^2(y-x)\,\sigma^2(u_s(y)) \qquad (0\le\tau\le t). \tag{3.18}$$
Therefore, by Itô's formula, for all positive integers $k$, and for every $\tau\in[0,t]$,
$$\begin{aligned}
M_\tau^{2k} &= 1 + 2k\int_0^\tau M_s^{2k-1}\,\mathrm{d}M_s + \binom{2k}{2}\int_0^\tau M_s^{2(k-1)}\,\mathrm{d}\langle M\rangle_s\\
&= 1 + 2k\int_0^\tau M_s^{2k-1}\,\mathrm{d}M_s + \binom{2k}{2}\int_0^\tau M_s^{2(k-1)}\,\mathrm{d}s\int_{-\infty}^{\infty}\mathrm{d}y\ p_{t-s}^2(y-x)\,\sigma^2(u_s(y)).
\end{aligned} \tag{3.19}$$
By the assumption of the lemma, $\sigma(u_s(y))\ge\epsilon_0$ a.s. Therefore,
$$\begin{aligned}
M_\tau^{2k} &\ge 1 + 2k\int_0^\tau M_s^{2k-1}\,\mathrm{d}M_s + \binom{2k}{2}\epsilon_0^2\int_0^\tau M_s^{2(k-1)}\cdot\|p_{t-s}\|_{L^2(\mathbf{R})}^2\,\mathrm{d}s\\
&= 1 + 2k\int_0^\tau M_s^{2k-1}\,\mathrm{d}M_s + \binom{2k}{2}\epsilon_0^2\int_0^\tau \frac{M_s^{2(k-1)}}{(4\pi\kappa(t-s))^{1/2}}\,\mathrm{d}s.
\end{aligned} \tag{3.20}$$
We set $\tau := t$ and then take expectations to find that
$$\begin{aligned}
\mathrm{E}(M_t^{2k}) &\ge 1 + \binom{2k}{2}\epsilon_0^2\int_0^t \mathrm{E}\big(M_s^{2(k-1)}\big)\,\frac{\mathrm{d}s}{(4\pi\kappa(t-s))^{1/2}}\\
&= 1 + \binom{2k}{2}\epsilon_0^2\int_0^t \mathrm{E}\big(M_s^{2(k-1)}\big)\,\nu(t,\mathrm{d}s),
\end{aligned} \tag{3.21}$$
where the measures $\{\nu(t,\cdot)\}_{t>0}$ are defined as
$$\nu(t,\mathrm{d}s) := \frac{\mathbf{1}_{(0,t)}(s)}{(4\pi\kappa(t-s))^{1/2}}\,\mathrm{d}s. \tag{3.22}$$
We may iterate the preceding in order to obtain
$$\mathrm{E}(M_t^{2k}) \ge 1 + \sum_{l=0}^{k-1}a_{l,k}\,\epsilon_0^{2(l+1)}\cdot\int_0^t\nu(t,\mathrm{d}s_1)\int_0^{s_1}\nu(s_1,\mathrm{d}s_2)\cdots\int_0^{s_l}\nu(s_l,\mathrm{d}s_{l+1}), \tag{3.23}$$
where
$$a_{l,k} := \prod_{j=0}^{l}\binom{2k-2j}{2} \qquad \text{for } 0\le l<k \tag{3.24}$$
and $s_0 := t$. The right-hand side of (3.23) is exactly equal to $\mathrm{E}(M_t^{2k})$ in the case where $\sigma(z)\equiv\epsilon_0$ for all $z\in\mathbf{R}$. Indeed, the same computation as above works with identities all the way through. In other words,
$$\mathrm{E}\big(|u_t(x)|^{2k}\big) = \mathrm{E}(M_t^{2k}) \ge \mathrm{E}\big(\eta_t(x)^{2k}\big), \tag{3.25}$$
where
$$\eta_t(x) := 1 + \epsilon_0\cdot\int_{(0,t)\times\mathbf{R}} p_{t-s}(y-x)\,W(\mathrm{d}s\,\mathrm{d}y). \tag{3.26}$$
We define
$$\zeta_t(x) := \epsilon_0\cdot\int_{(0,t)\times\mathbf{R}} p_{t-s}(y-x)\,W(\mathrm{d}s\,\mathrm{d}y), \tag{3.27}$$
so that $\eta_t(x) = 1 + \zeta_t(x)$. Clearly,
$$\mathrm{E}\big(\eta_t(x)^{2k}\big) \ge \mathrm{E}\big(\zeta_t(x)^{2k}\big). \tag{3.28}$$
Since $\zeta$ is a centered Gaussian process,
$$\mathrm{E}\big(\zeta_t(x)^{2k}\big) = \big[\mathrm{E}\big(\zeta_t(x)^2\big)\big]^k\cdot\frac{(2k)!}{k!\cdot2^k} \tag{3.29}$$
and
$$\mathrm{E}\big(\zeta_t(x)^2\big) = \epsilon_0^2\cdot\int_0^t\mathrm{d}s\int_{-\infty}^{\infty}\mathrm{d}y\ p_{t-s}^2(y-x) = \epsilon_0^2\cdot\sqrt{\frac{t}{\pi\kappa}}; \tag{3.30}$$
see (1.25). The proposition follows from these observations and Stirling's formula. □
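The moment identity (3.29) is the classical formula $\mathrm{E}(Z^{2k}) = (\mathrm{E}Z^2)^k\,(2k)!/(k!\,2^k)$ for a centered Gaussian $Z$, where $(2k)!/(k!\,2^k)$ equals the double factorial $(2k-1)!!$. A direct numerical check (an illustration, not part of the paper):

```python
import math

def gaussian_even_moment(sigma2, k):
    """E[Z^(2k)] for a centered Gaussian Z with variance sigma2, per (3.29)."""
    return sigma2 ** k * math.factorial(2 * k) / (math.factorial(k) * 2 ** k)

# Check E[Z^6] = 15 * sigma2^3 against midpoint-rule integration of the
# Gaussian density (the integrand is negligible beyond |z| = 30 here).
sigma2, k = 1.7, 3
n, half = 300_000, 30.0
h = 2.0 * half / n
total = 0.0
for i in range(n):
    z = -half + (i + 0.5) * h
    total += z ** (2 * k) * math.exp(-z * z / (2.0 * sigma2))
total *= h / math.sqrt(2.0 * math.pi * sigma2)
assert abs(total / gaussian_even_moment(sigma2, k) - 1.0) < 1e-5
```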
We can now use Proposition 3.6 to obtain a lower estimate on the tail of the distribution of $u_t(x)$.
PROPOSITION 3.7. If there exists $\epsilon_0>0$ such that $\sigma(x)\ge\epsilon_0$ for all $x\in\mathbf{R}$, then there exists a universal constant $C\in(0,\infty)$ such that for all $t>0$,
$$\liminf_{\lambda\to\infty}\frac{1}{\lambda^6}\inf_{x\in\mathbf{R}}\log\mathrm{P}\{|u_t(x)|\ge\lambda\} \ge -C\,\frac{(\mathrm{Lip}_\sigma\vee1)^4\sqrt{\kappa}}{\epsilon_0^6\sqrt{t}}. \tag{3.31}$$
PROOF. Choose and fix $t>0$ and $x\in\mathbf{R}$. We apply the celebrated Paley–Zygmund inequality in the following form: for every integer $k\ge1$,
$$\begin{aligned}
\mathrm{E}\big(|u_t(x)|^{2k}\big) &\le \mathrm{E}\big(|u_t(x)|^{2k};\ |u_t(x)|\ge\tfrac12\|u_t(x)\|_{2k}\big) + \tfrac12\mathrm{E}\big(|u_t(x)|^{2k}\big)\\
&\le \sqrt{\mathrm{E}\big(|u_t(x)|^{4k}\big)\,\mathrm{P}\big\{|u_t(x)|\ge\tfrac12\|u_t(x)\|_{2k}\big\}} + \tfrac12\mathrm{E}\big(|u_t(x)|^{2k}\big).
\end{aligned} \tag{3.32}$$
This yields the following bound:
$$\begin{aligned}
\mathrm{P}\big\{|u_t(x)|\ge\tfrac12\|u_t(x)\|_{2k}\big\} &\ge \frac{[\mathrm{E}(|u_t(x)|^{2k})]^2}{4\,\mathrm{E}(|u_t(x)|^{4k})}\\
&\ge \exp\Big(-\frac{64t(\mathrm{Lip}_\sigma\vee1)^4}{\kappa}k^3\big(1+o(1)\big)\Big)
\end{aligned} \tag{3.33}$$
as $k\to\infty$; see Lemma 3.2 and Proposition 3.6. Another application of Proposition 3.6 shows that $\|u_t(x)\|_{2k}\ge(1+o(1))(\mu_tk)^{1/2}$ as $k\to\infty$, where $\mu_t$ is defined in (3.16). This implies, as $k\to\infty$,
$$\mathrm{P}\big\{|u_t(x)|\ge\tfrac12(\mu_tk)^{1/2}\big\} \ge \exp\Big[-\frac{64t(\mathrm{Lip}_\sigma\vee1)^4}{\kappa}k^3\big(1+o(1)\big)\Big]. \tag{3.34}$$
The proposition follows from this by setting $k$ to be the smallest possible integer that satisfies $(\mu_tk)^{1/2}\ge\lambda$. □
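The second-moment argument in (3.32)–(3.33) can be sanity-checked on any distribution with explicit moments. For example (an illustration, not part of the proof), for $X\sim\mathrm{Exponential}(1)$ one has $\mathrm{E}(X^m)=m!$ and $\mathrm{P}\{X\ge c\}=e^{-c}$, so the resulting bound $\mathrm{P}\{X\ge\frac12\|X\|_{2k}\}\ge[\mathrm{E}(X^{2k})]^2/(4\,\mathrm{E}(X^{4k}))$ is checkable in closed form:

```python
import math

# For X ~ Exponential(1): E[X^m] = m! and P{X >= c} = exp(-c), so the
# Paley-Zygmund-type bound P{X >= 0.5*||X||_{2k}} >= (E X^{2k})^2 / (4 E X^{4k})
# can be verified exactly for a range of k.
for k in range(1, 8):
    norm_2k = math.factorial(2 * k) ** (1.0 / (2 * k))   # ||X||_{2k}
    lhs = math.exp(-0.5 * norm_2k)                       # P{X >= 0.5*||X||_{2k}}
    rhs = math.factorial(2 * k) ** 2 / (4 * math.factorial(4 * k))
    assert lhs >= rhs
```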
Now, we study the tails of the distribution of $u_t(x)$ under the conditions of part (2) of Theorem 1.1.
PROPOSITION 3.8. Suppose $\sigma(x)>0$ for all $x\in\mathbf{R}$ and (1.12) holds for some $\gamma\in(0,1/6)$. Then
$$\liminf_{\lambda\to\infty}\inf_{x\in\mathbf{R}}\frac{\log\mathrm{P}\{|u_t(x)|>\lambda\}}{\lambda^{1/\gamma}} \ge -C\left(\frac{(\mathrm{Lip}_\sigma\vee1)^{2/3}\,\kappa^{1/12}}{t^{1/12}}\right)^{1/\gamma}, \tag{3.35}$$
where $C\in(0,\infty)$ is a constant that depends only on $\gamma$.
PROOF. For every integer $N\ge1$, define
$$\sigma^{(N)}(x) := \begin{cases}
\sigma(x), & \text{if } |x|\le N,\\
\sigma(-N), & \text{if } x<-N,\\
\sigma(N), & \text{if } x>N.
\end{cases} \tag{3.36}$$
It can be checked directly that $\sigma^{(N)}$ is a Lipschitz function, and that in fact: (i) $\mathrm{Lip}_{\sigma^{(N)}}\le\mathrm{Lip}_\sigma$; and (ii) $\inf_{z\in\mathbf{R}}\sigma^{(N)}(z)>0$.
Let $u_t^{(N)}(x)$ denote the solution to (2.11) when $\sigma$ is replaced by $\sigma^{(N)}$. We first establish the bound
$$\mathrm{E}\big(\big|u_t^{(N)}(x)-u_t(x)\big|^2\big) = O(N^{-2}) \qquad \text{as } N\to\infty. \tag{3.37}$$
Let us observe, using the mild representation of the solution to (2.11), that
$$\mathrm{E}\big(\big|u_t^{(N)}(x)-u_t(x)\big|^2\big) \le 2(T_1+T_2), \tag{3.38}$$
where
$$\begin{aligned}
T_1 &:= \mathrm{E}\left(\left|\int_{(0,t)\times\mathbf{R}} p_{t-s}(y-x)\big[\sigma^{(N)}\big(u_s^{(N)}(y)\big)-\sigma\big(u_s^{(N)}(y)\big)\big]\,W(\mathrm{d}s\,\mathrm{d}y)\right|^2\right)\\
&= \int_0^t\mathrm{d}s\int_{-\infty}^{\infty}\mathrm{d}y\ p_{t-s}^2(y-x)\,\mathrm{E}\big(\big|\sigma^{(N)}\big(u_s^{(N)}(y)\big)-\sigma\big(u_s^{(N)}(y)\big)\big|^2\big)
\end{aligned} \tag{3.39}$$
and
$$\begin{aligned}
T_2 &:= \mathrm{E}\left(\left|\int_{(0,t)\times\mathbf{R}} p_{t-s}(y-x)\big[\sigma\big(u_s^{(N)}(y)\big)-\sigma(u_s(y))\big]\,W(\mathrm{d}s\,\mathrm{d}y)\right|^2\right)\\
&= \int_0^t\mathrm{d}s\int_{-\infty}^{\infty}\mathrm{d}y\ p_{t-s}^2(y-x)\,\mathrm{E}\big(\big|\sigma\big(u_s^{(N)}(y)\big)-\sigma(u_s(y))\big|^2\big).
\end{aligned}$$
We can estimate the integrand of $T_1$ by the following:
$$\begin{aligned}
\mathrm{E}\big(\big|\sigma^{(N)}\big(u_s^{(N)}(y)\big)-\sigma\big(u_s^{(N)}(y)\big)\big|^2\big) &\le \mathrm{Lip}_\sigma^2\cdot\mathrm{E}\big(\big|N-u_s^{(N)}(y)\big|^2;\ u_s^{(N)}(y)>N\big)\\
&\quad + \mathrm{Lip}_\sigma^2\cdot\mathrm{E}\big(\big|{-N}-u_s^{(N)}(y)\big|^2;\ u_s^{(N)}(y)<-N\big)\\
&\le 4\,\mathrm{Lip}_\sigma^2\cdot\mathrm{E}\big(\big|u_s^{(N)}(y)\big|^2;\ \big|u_s^{(N)}(y)\big|>N\big).
\end{aligned} \tag{3.40}$$
We first apply the Cauchy–Schwarz inequality and then Chebyshev's inequality (in this order) to conclude that
$$\mathrm{E}\big(\big|\sigma^{(N)}\big(u_s^{(N)}(y)\big)-\sigma\big(u_s^{(N)}(y)\big)\big|^2\big) \le \frac{4\,\mathrm{Lip}_\sigma^2}{N^2}\cdot\mathrm{E}\big(\big|u_s^{(N)}(y)\big|^4\big) = O(N^{-2}) \qquad \text{as } N\to\infty, \tag{3.41}$$
uniformly for all $y\in\mathbf{R}$ and $s\in(0,t)$. Indeed, Lemma 3.2 ensures that $\mathrm{E}[|u_s^{(N)}(y)|^4]$ is bounded in $N$, because $\lim_{N\to\infty}\mathrm{Lip}_{\sigma^{(N)}} = \mathrm{Lip}_\sigma$. This implies readily that $T_1 = O(N^{-2})$ as $N\to\infty$.
Next we turn to $T_2$; thanks to (1.25), the quantity $T_2$ can be estimated as follows:
$$T_2 \le \mathrm{Lip}_\sigma^2\cdot\int_0^t\mathrm{d}s\int_{-\infty}^{\infty}\mathrm{d}y\ p_{t-s}^2(y-x)\,\mathrm{E}\big(\big|u_s^{(N)}(y)-u_s(y)\big|^2\big) \le \mathrm{const}\cdot\int_0^t\frac{M(s)}{\sqrt{t-s}}\,\mathrm{d}s, \tag{3.42}$$
where
$$M(s) := \sup_{y\in\mathbf{R}}\mathrm{E}\big(\big|u_s^{(N)}(y)-u_s(y)\big|^2\big) \qquad (0\le s\le t). \tag{3.43}$$
Notice that the implied constant in (3.42) does not depend on $t$. We now combine our estimates for $T_1$ and $T_2$ to conclude that
$$M(s) \le \frac{\mathrm{const}}{N^2} + \mathrm{const}\cdot\int_0^s\frac{M(r)}{\sqrt{s-r}}\,\mathrm{d}r \le \mathrm{const}\cdot\left\{\left(\int_0^s[M(r)]^{3/2}\,\mathrm{d}r\right)^{2/3} + \frac{1}{N^2}\right\} \qquad (0\le s\le t), \tag{3.44}$$
thanks to Hölder's inequality. We emphasize that the implied constant depends only on the Lipschitz constant of $\sigma$, the variable $t$ and the diffusion constant $\kappa$. Therefore,
\[
[M(s)]^{3/2} \le \mathrm{const}\cdot\biggl\{\int_0^s [M(r)]^{3/2}\,dr + \frac{1}{N^3}\biggr\},
\tag{3.45}
\]
uniformly for $s\in(0,t)$. Gronwall's inequality then implies the bound $M(t) = O(N^{-2})$, valid as $N\to\infty$.
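The mechanism here can be illustrated numerically: any nonnegative function obeying $m(s) \le \varepsilon + c\int_0^s m(r)(s-r)^{-1/2}\,dr$ is trapped below a constant multiple of $\varepsilon$, uniformly on $[0,t]$. A minimal sketch; the discretization, the regularization of the singular kernel, and the constants are illustrative assumptions:

```python
import numpy as np

def gronwall_envelope(eps, c, t=1.0, n=400):
    """Iterate m <- eps + c * int_0^s m(r) (s - r)^{-1/2} dr on a uniform grid.
    The iteration converges (the Volterra operator is quasi-nilpotent), and its
    fixed point dominates every solution of the Gronwall-type inequality."""
    ds = t / n
    s = np.linspace(ds, t, n)
    m = np.full(n, float(eps))
    for _ in range(200):
        new = np.empty(n)
        for i in range(n):
            kern = 1.0 / np.sqrt(s[i] - s[: i + 1] + ds)  # regularized kernel
            new[i] = eps + c * np.sum(m[: i + 1] * kern) * ds
        if np.max(np.abs(new - m)) < 1e-12:
            m = new
            break
        m = new
    return float(m.max())

# the envelope is linear in eps, so the resulting bound is O(eps) on [0, t]
a = gronwall_envelope(1e-2, 0.5)
b = gronwall_envelope(5e-3, 0.5)
assert a < 10 * 1e-2            # bounded by a fixed multiple of eps
assert abs(a / b - 2.0) < 1e-6  # exactly proportional to eps
```

With $\varepsilon \propto N^{-2}$, as in (3.44), this is precisely how the coupling error $M(t) = O(N^{-2})$ is extracted.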
Now we proceed with the proof of Proposition 3.8. For all $N\ge1$, the function $\sigma^{(N)}$ is bounded below. Let $\varepsilon(N)$ be such that $\sigma^{(N)}(x)\ge\varepsilon(N)$ for all $x\in\mathbf{R}$. Let $D := D_t := (4t/(e^2\pi\kappa))^{1/4}$. According to the proof of Proposition 3.7, specifically (3.34) applied to $u^{(N)}$, we have
\[
\mathrm{P}\biggl\{|u_t(x)| \ge \frac{D\varepsilon(N)k^{1/2}}{4}\biggr\}
\ge \exp\biggl[-\frac{64t(\mathrm{Lip}_\sigma\vee1)^4}{\kappa}\,k^3\bigl(1+o(1)\bigr)\biggr]
- \mathrm{P}\biggl\{\bigl|u_t(x)-u^{(N)}_t(x)\bigr| \ge \frac{D\varepsilon(N)k^{1/2}}{4}\biggr\}.
\tag{3.46}
\]
Thanks to (1.12), we can write
\[
\varepsilon(N) \gtrsim (\log N)^{-(1/6-\gamma)} \qquad \text{as } N\to\infty,
\tag{3.47}
\]
using standard notation. Therefore, if we choose
\[
N := \bigl\lfloor \exp\bigl\{64t(\mathrm{Lip}_\sigma\vee1)^4 k^3/\kappa\bigr\} \bigr\rfloor,
\tag{3.48}
\]
then we are led to the bound
\[
\varepsilon(N) \gtrsim \biggl(\frac{64t(\mathrm{Lip}_\sigma\vee1)^4}{\kappa}\biggr)^{-(1/6-\gamma)} k^{3\gamma-(1/2)}.
\tag{3.49}
\]
We can use Chebyshev's inequality in order to estimate the second term on the right-hand side of (3.46). In this way we obtain the following:
\[
\mathrm{P}\biggl\{|u_t(x)| \ge \frac{\bar{D}k^{3\gamma}}{4}\biggr\}
\ge \exp\biggl\{-\frac{64t(\mathrm{Lip}_\sigma\vee1)^4}{\kappa}\,k^3\bigl(1+o(1)\bigr)\biggr\}
- \frac{1}{C_1 N^2 k^{6\gamma}},
\tag{3.50}
\]
where
\[
\bar{D} := D\biggl\{\frac{\kappa}{64t(\mathrm{Lip}_\sigma\vee1)^4}\biggr\}^{(1/6)-\gamma},
\tag{3.51}
\]
and $C_1$ is a constant that depends only on $t$, $\mathrm{Lip}_\sigma$ and $\kappa$. For all sufficiently large integers $N$,
\[
\frac{1}{C_1 N^2 k^{6\gamma}} \le \exp\biggl[-\frac{128t(\mathrm{Lip}_\sigma\vee1)^4}{\kappa}\,k^3\bigl(1+o(1)\bigr)\biggr],
\tag{3.52}
\]
and the proposition follows upon setting $\lambda := \bar{D}k^{3\gamma}/4$. □
4. Localization. The next step in the proof of Theorem 1.1 requires us to show that if $x$ and $x'$ are $O(1)$ apart, then $u_t(x)$ and $u_t(x')$ are approximately independent. We show this by coupling $u_t(x)$ first to the solution of a localized version, see (4.1) below, of the stochastic heat equation (2.11), and then by a second coupling to a suitably chosen Picard-iteration approximation of the localized version.
Consider the following parametric family of random evolution equations (indexed by the parameter $\beta>0$):
\[
U^{(\beta)}_t(x) = 1 + \int_{(0,t)\times[x-\sqrt{\beta t},\,x+\sqrt{\beta t}]} p_{t-s}(y-x)\,\sigma\bigl(U^{(\beta)}_s(y)\bigr)\,W(ds\,dy)
\tag{4.1}
\]
for all $x\in\mathbf{R}$ and $t\ge0$.
LEMMA 4.1. Choose and fix $\beta>0$. Then, (4.1) has an almost surely unique solution $U^{(\beta)}$ such that for all $T>0$ and $k\ge1$,
\[
\sup_{\beta>0}\,\sup_{t\in[0,T]}\,\sup_{x\in\mathbf{R}} \mathrm{E}\bigl(\bigl|U^{(\beta)}_t(x)\bigr|^k\bigr) \le C^k e^{ak^3},
\tag{4.2}
\]
where $a$ and $C$ are defined in Lemma 3.2.
PROOF. A fixed-point argument shows that there exists a unique, up to modification, solution to (4.1) subject to the condition that for all $T>0$,
\[
\sup_{t\in[0,T]}\,\sup_{x\in\mathbf{R}} \mathrm{E}\bigl(\bigl|U^{(\beta)}_t(x)\bigr|^k\bigr) < \infty \qquad \text{for all } k\ge1.
\tag{4.3}
\]
See Foondun and Khoshnevisan [15] for more details on the ideas of the proof; the moment estimate follows as in the proof of Lemma 3.2. We omit the numerous remaining details. □
LEMMA 4.2. For every $T>0$ there exists a finite and positive constant $C := C(\kappa)$ such that for sufficiently large $\beta>0$ and $k\ge1$,
\[
\sup_{t\in[0,T]}\,\sup_{x\in\mathbf{R}} \mathrm{E}\bigl(\bigl|u_t(x)-U^{(\beta)}_t(x)\bigr|^k\bigr) \le C^k k^{k/2} e^{Fk(k^2-\beta)},
\tag{4.4}
\]
where $F\in(0,\infty)$ depends on $(T,\kappa)$ but not on $(k,\beta)$.
PROOF. For all $x\in\mathbf{R}$ and $t>0$, define
\[
V_t(x) := 1 + \int_{(0,t)\times\mathbf{R}} p_{t-s}(y-x)\,\sigma\bigl(U^{(\beta)}_s(y)\bigr)\,W(ds\,dy).
\tag{4.5}
\]
Then,
\[
\begin{aligned}
\bigl\|V_t(x)-U^{(\beta)}_t(x)\bigr\|_k
&= \biggl\|\int_{(0,t)\times\{y\in\mathbf{R}:\,|y-x|>\sqrt{\beta t}\}} p_{t-s}(y-x)\,\sigma\bigl(U^{(\beta)}_s(y)\bigr)\,W(ds\,dy)\biggr\|_k\\
&\le 2\sqrt{k}\,\biggl\|\int_0^t ds\int_{|y-x|\ge\sqrt{\beta t}} dy\; p^2_{t-s}(y-x)\,\sigma^2\bigl(U^{(\beta)}_s(y)\bigr)\biggr\|_{k/2}^{1/2}.
\end{aligned}
\tag{4.6}
\]
The preceding hinges on an application of Burkholder's inequality, using the Carlen–Kree bound [6] on Davis's optimal constant [12] in the Burkholder–Davis–Gundy inequality [3–5]; see Foondun and Khoshnevisan [15] for the details of the derivation of such an estimate. Minkowski's inequality then tells us that the preceding quantity is at most
\[
\begin{aligned}
2\sqrt{k}\,\biggl[\int_0^t ds &\int_{|y-x|\ge\sqrt{\beta t}} dy\; p^2_{t-s}(y-x)\,\bigl\|\sigma^2\bigl(U^{(\beta)}_s(y)\bigr)\bigr\|_{k/2}\biggr]^{1/2}\\
&\le \mathrm{const}\cdot\biggl[\sqrt{k}\int_0^t ds\int_{|y-x|\ge\sqrt{\beta t}} dy\; p^2_{t-s}(y-x)\bigl(1+\bigl\|U^{(\beta)}_s(y)\bigr\|_k^2\bigr)\biggr]^{1/2}.
\end{aligned}
\tag{4.7}
\]
Equation (4.7) holds because the Lipschitz continuity of the function $\sigma$ ensures that it has at-most-linear growth: $|\sigma(x)| \le \mathrm{const}\cdot(1+|x|)$ for all $x\in\mathbf{R}$. The inequality in Lemma 4.1 implies that, uniformly over all $t\in[0,T]$ and $x\in\mathbf{R}$,
\[
\begin{aligned}
\bigl\|V_t(x)-U^{(\beta)}_t(x)\bigr\|_k
&\le \mathrm{const}\cdot\biggl[\sqrt{k}\,C^2 e^{2ak^2}\int_0^t dr\int_{|z|\ge\sqrt{\beta t}} dz\; p_r^2(z)\biggr]^{1/2}\\
&\le \mathrm{const}\cdot\frac{k^{1/2}e^{ak^2}}{\sqrt{\kappa}}\,\biggl[\int_0^1 \frac{ds}{\sqrt{s}}\int_{|w|\ge\sqrt{2\beta}} dw\; p_s(w)\biggr]^{1/2},
\end{aligned}
\tag{4.8}
\]
where we have used (1.4). Now a standard Gaussian tail estimate yields
\[
\int_{|w|\ge\sqrt{2\beta}} p_s(w)\,dw \le 2e^{-\beta/(s\kappa)},
\tag{4.9}
\]
and the latter quantity is at most $2\exp(-\beta/\kappa)$ whenever $s\in(0,1]$. Therefore, on one hand,
\[
\sup_{x\in\mathbf{R}}\bigl\|V_t(x)-U^{(\beta)}_t(x)\bigr\|_k \le \mathrm{const}\cdot\frac{k^{1/2}e^{ak^2}}{\sqrt{\kappa}}\,e^{-\beta/(2\kappa)}.
\tag{4.10}
\]
On the other hand,
\[
u_t(x)-V_t(x) = \int_{(0,t)\times\mathbf{R}} p_{t-s}(y-x)\bigl[\sigma\bigl(u_s(y)\bigr)-\sigma\bigl(U^{(\beta)}_s(y)\bigr)\bigr]\,W(ds\,dy),
\]
whence
\[
\begin{aligned}
\bigl\|u_t(x)-V_t(x)\bigr\|_k
&\le 2\sqrt{k}\,\biggl\|\int_0^t ds\int_{-\infty}^{\infty} dy\; p^2_{t-s}(y-x)\bigl[\sigma\bigl(u_s(y)\bigr)-\sigma\bigl(U^{(\beta)}_s(y)\bigr)\bigr]^2\biggr\|_{k/2}^{1/2}\\
&\le 2\sqrt{k}\,\mathrm{Lip}_\sigma\,\biggl\|\int_0^t ds\int_{-\infty}^{\infty} dy\; p^2_{t-s}(y-x)\bigl[u_s(y)-U^{(\beta)}_s(y)\bigr]^2\biggr\|_{k/2}^{1/2}\\
&\le 2\sqrt{k}\,\mathrm{Lip}_\sigma\cdot\sqrt{\int_0^t ds\int_{-\infty}^{\infty} dy\; p^2_{t-s}(y-x)\,\bigl\|u_s(y)-U^{(\beta)}_s(y)\bigr\|_k^2}.
\end{aligned}
\tag{4.11}
\]
Consequently, (4.10) implies that
\[
\begin{aligned}
\bigl\|u_t(x)-U^{(\beta)}_t(x)\bigr\|_k
&\le 2\sqrt{k}\,\mathrm{Lip}_\sigma\cdot\sqrt{\int_0^t ds\int_{-\infty}^{\infty} dy\; p^2_{t-s}(y-x)\,\bigl\|u_s(y)-U^{(\beta)}_s(y)\bigr\|_k^2}\\
&\quad+ \mathrm{const}\cdot\frac{k^{1/2}e^{ak^2}}{\sqrt{\kappa}}\,e^{-\beta/(2\kappa)}.
\end{aligned}
\tag{4.12}
\]
Let us introduce a parameter $\delta>0$ and define the seminorms
\[
\mathcal{N}_{k,\delta}(Z) := \sup_{s\ge0}\,\sup_{y\in\mathbf{R}}\bigl[e^{-\delta s}\|Z_s(y)\|_k\bigr]
\tag{4.13}
\]
for every space–time random field $Z := \{Z_s(y)\}_{s>0,y\in\mathbf{R}}$. Then, we have
\[
\begin{aligned}
\mathcal{N}_{k,\delta}\bigl(u-U^{(\beta)}\bigr)
&\le 2\sqrt{k}\,\mathrm{Lip}_\sigma\,\mathcal{N}_{k,\delta}\bigl(u-U^{(\beta)}\bigr)\cdot\sqrt{\int_0^\infty e^{-2\delta r}\|p_r\|^2_{L^2(\mathbf{R})}\,dr}\\
&\quad+ \mathrm{const}\cdot\frac{k^{1/2}e^{ak^2-\beta/(2\kappa)}}{\sqrt{\kappa}}.
\end{aligned}
\tag{4.14}
\]
Thanks to (1.24), if $\delta := Dk^2$ for some sufficiently large constant $D$, then the square root is at most $[4\sqrt{k}(\mathrm{Lip}_\sigma\vee1)]^{-1}$, whence it follows that (for that fixed choice of $\delta$)
\[
\mathcal{N}_{k,\delta}\bigl(u-U^{(\beta)}\bigr) \le \mathrm{const}\cdot\frac{k^{1/2}e^{ak^2-\mathrm{const}\cdot(\beta/\kappa)}}{\sqrt{\kappa}}.
\tag{4.15}
\]
The lemma follows from this. □
Now let us define $U^{(\beta,n)}_t(x)$ to be the $n$th Picard-iteration approximation to $U^{(\beta)}_t(x)$. That is, $U^{(\beta,0)}_t(x) := 1$, and for all $\ell\ge0$,
\[
U^{(\beta,\ell+1)}_t(x) := 1 + \int_{(0,t)\times[x-\sqrt{\beta t},\,x+\sqrt{\beta t}]} p_{t-s}(y-x)\,\sigma\bigl(U^{(\beta,\ell)}_s(y)\bigr)\,W(ds\,dy).
\tag{4.16}
\]
LEMMA 4.3. There exist positive and finite constants $C_*$ and $G$, depending on $(t,\kappa)$, such that uniformly for all $k\in[2,\infty)$ and $\beta>e$,
\[
\sup_{x\in\mathbf{R}} \mathrm{E}\bigl(\bigl|u_t(x)-U^{(\beta,[\log\beta]+1)}_t(x)\bigr|^k\bigr) \le C_*^k k^{k/2} e^{Gk^3}\beta^{-k}.
\tag{4.17}
\]
PROOF. The method of Foondun and Khoshnevisan [15] shows that if $\delta := D'k^2$ for a sufficiently large $D'$, then
\[
\mathcal{N}_{k,\delta}\bigl(U^{(\beta)}-U^{(\beta,n)}\bigr) \le \mathrm{const}\cdot e^{-n} \qquad \text{for all } n\ge0 \text{ and } k\in[2,\infty).
\tag{4.18}
\]
To elaborate, we follow the arguments of the proof of Theorem 2.1 in [15], leading up to equation (4.6) there, but with $v_n$ there replaced by $U^{(\beta,n)}$ here. We then obtain
\[
\bigl\|U^{(\beta,n+1)}-U^{(\beta,n)}\bigr\|_{k,\theta} \le \mathrm{const}\cdot\sqrt{k\,\Upsilon\biggl(\frac{2\theta}{k}\biggr)}\,\bigl\|U^{(\beta,n)}-U^{(\beta,n-1)}\bigr\|_{k,\theta},
\tag{4.19}
\]
where
\[
\|f\|_{k,\theta} := \biggl\{\sup_{t\ge0}\,\sup_{x\in\mathbf{R}} e^{-\theta t}\,\mathrm{E}\bigl(|f(t,x)|^k\bigr)\biggr\}^{1/k}
\tag{4.20}
\]
and
\[
\Upsilon(\theta) := \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{d\xi}{\theta+\xi^2}.
\tag{4.21}
\]
A quick computation reveals that by choosing $\theta := D''k^3$, for a large enough constant $D''>0$, we obtain
\[
\bigl\|U^{(\beta,n+1)}-U^{(\beta,n)}\bigr\|_{k,\theta} \le e^{-1}\bigl\|U^{(\beta,n)}-U^{(\beta,n-1)}\bigr\|_{k,\theta}.
\tag{4.22}
\]
We get (4.18) from this. Next we set $n := [\log\beta]+1$ and apply the preceding together with Lemma 4.2 to finish the proof. □
LEMMA 4.4. Choose and fix $\beta, t>0$ and $n\ge0$. Also fix $x_1, x_2, \ldots \in \mathbf{R}$ such that $|x_i-x_j| \ge 2n\sqrt{\beta t}$ whenever $i\ne j$. Then $\{U^{(\beta,n)}_t(x_j)\}_{j\in\mathbf{Z}}$ is a collection of i.i.d. random variables.
PROOF. The proof uses induction on the variable $n$, and proceeds by establishing a little more. We will use the classes $\mathcal{P}(A)$ as defined in Appendix A (see Proposition A.1), where $A\subset\mathbf{R}$ ranges over all Lebesgue-measurable sets of finite Lebesgue measure.
Since $U^{(\beta,0)}_t(x)\equiv1$, the statement of the lemma holds tautologically for $n=0$. In order to better understand the following argument, let us concentrate on the case $n=1$. In that case,
\[
U^{(\beta,1)}_t(x) = 1 + \sigma(1)\cdot\int_{(0,t)\times[x-\sqrt{\beta t},\,x+\sqrt{\beta t}]} p_{t-s}(y-x)\,W(ds\,dy).
\tag{4.23}
\]
In particular, if we define the process $\{\mathcal{U}^{(\beta,1)}_s(t,x)\}_{0\le s\le t}$ by
\[
\mathcal{U}^{(\beta,1)}_s(t,x) = 1 + \sigma(1)\cdot\int_{(0,s)\times[x-\sqrt{\beta t},\,x+\sqrt{\beta t}]} p_{t-r}(y-x)\,W(dr\,dy),
\tag{4.24}
\]
it follows from Proposition A.1 that $\mathcal{U}^{(\beta,1)}_\bullet(t,x) \in \mathcal{P}([x-\sqrt{\beta t},\,x+\sqrt{\beta t}])$. Hence, Corollary A.2 shows that $\{\mathcal{U}^{(\beta,1)}_\bullet(t,x_j)\}_{j\in\mathbf{Z}}$ are independent processes. Taking $s=t$ shows the independence part of the lemma for $n=1$. It is not hard to see that the law of $U^{(\beta,1)}_t(x)$ is independent of $x$; namely, it is a Gaussian distribution with mean and variance parameters that do not depend on $x$. This concludes the proof for $n=1$.
Now, let us define processes $\{\mathcal{U}^{(\beta,n)}_s(t,x)\}_{0\le s\le t}$ by
\[
\mathcal{U}^{(\beta,n)}_s(t,x) = 1 + \int_{(0,s)\times[x-\sqrt{\beta t},\,x+\sqrt{\beta t}]} p_{t-r}(y-x)\,\sigma\bigl(\mathcal{U}^{(\beta,n-1)}_r(r,y)\bigr)\,W(dr\,dy).
\tag{4.25}
\]
If we prove that $\mathcal{U}^{(\beta,n)}_\bullet(t,x) \in \mathcal{P}([x-n\sqrt{\beta t},\,x+n\sqrt{\beta t}])$ for all $x\in\mathbf{R}$ and $t>0$, then it follows from (4.16) and Proposition A.1 that $\mathcal{U}^{(\beta,n+1)}_\bullet(t,x) \in \mathcal{P}([x-(n+1)\sqrt{\beta t},\,x+(n+1)\sqrt{\beta t}])$ for all $x\in\mathbf{R}$ and $t>0$. Since this fact is true for $n=1$, we have proved that $\mathcal{U}^{(\beta,n)}_\bullet(t,x) \in \mathcal{P}([x-n\sqrt{\beta t},\,x+n\sqrt{\beta t}])$ for all $x\in\mathbf{R}$, $t>0$ and $n\in\mathbf{N}$. We then use this result, together with Corollary A.2, to deduce that $\{\mathcal{U}^{(\beta,n)}_\bullet(t,x_j)\}_{j\in\mathbf{Z}}$ are independent processes. The independence part of the lemma follows upon taking $s := t$ in this discussion.
Since the law of the noise $W$ is invariant under translation, it is not hard to prove by induction that the law of $U^{(\beta,n)}_t(x)$ is indeed independent of $x$. (Notice that this is always true when the initial condition is constant; see [11], Lemma 18.) This concludes the (inductive) proof of the lemma. □
5. Proof of Theorem 1.1. We are ready to combine our efforts thus far inorder to verify Theorem 1.1.
PROOF OF THEOREM 1.1. Parts (1) and (2) of Theorem 1.1 are proved similarly; therefore, we present the details of the second part. For the proof of the first part, we can take $\gamma := 1/6$ in the following argument. We recall that the processes $U^{(\beta)}$ and $U^{(\beta,n)}$ are defined, respectively, in (4.1) and (4.16).
For all $x_1,\ldots,x_N\in\mathbf{R}$,
\[
\begin{aligned}
\mathrm{P}\Bigl\{\max_{1\le j\le N}|u_t(x_j)| < \lambda\Bigr\}
&\le \mathrm{P}\Bigl\{\max_{1\le j\le N}\bigl|U^{(\beta,\log\beta)}_t(x_j)\bigr| < 2\lambda\Bigr\}\\
&\quad+ \mathrm{P}\Bigl\{\max_{1\le j\le N}\bigl|U^{(\beta,\log\beta)}_t(x_j)-u_t(x_j)\bigr| > \lambda\Bigr\}.
\end{aligned}
\tag{5.1}
\]
(To be very precise, we need to write $(\beta,[\log\beta]+1)$ in place of $(\beta,\log\beta)$.) The whole program of Section 3 that led to Proposition 3.8 can be carried out for $U^{(\beta)}$ instead of $u$; only minor changes (typically in Proposition 3.6) are needed. Then, (4.18) shows that the same moment estimates are valid for $U^{(\beta,n)}$ as for $U^{(\beta)}$.
We can follow along the proof of Proposition 3.8 and prove similarly the existence of constants $c_1, c_2>0$, independent of $\beta$ for all sufficiently large values of $\beta$, so that for all $x\in\mathbf{R}$ and $\lambda\ge1$,
\[
\mathrm{P}\bigl\{\bigl|U^{(\beta,\log\beta)}_t(x)\bigr| \ge \lambda\bigr\} \ge c_1 e^{-c_2\lambda^{1/\gamma}}.
\tag{5.2}
\]
Suppose, in addition, that $|x_i-x_j| \ge 2\sqrt{\beta t}\log\beta$ whenever $i\ne j$. Then, Lemmas 4.3 and 4.4 together imply the following:
\[
\mathrm{P}\Bigl\{\max_{1\le j\le N}|u_t(x_j)| < \lambda\Bigr\}
\le \bigl(1 - c_1 e^{-c_2\cdot(2\lambda)^{1/\gamma}}\bigr)^N + N C_*^k k^{k/2} e^{Gk^3}\beta^{-k}\lambda^{-k}.
\tag{5.3}
\]
The constants $C_*$ and $G$ may differ from the ones in Lemma 4.3. We now select the various parameters judiciously: choose $\lambda := k$, $N := \lfloor k\exp(c_2\cdot(2k)^{1/\gamma})\rfloor$ and $\beta := \exp(\rho k^{(1-\gamma)/\gamma})$ for a large enough positive constant $\rho > 2\cdot3^{1/\gamma}c_2$. In this way, (5.3) simplifies: for all sufficiently large integers $k$,
\[
\begin{aligned}
\mathrm{P}\Bigl\{\max_{1\le j\le N}|u_t(x_j)| < k\Bigr\}
&\le e^{-c_1 k} + \exp\biggl[c_2\cdot(2k)^{1/\gamma} + \log k + k\log C_* - \frac{k\log k}{2} + Gk^3 - \frac{\rho k^{1/\gamma}}{2}\biggr]\\
&\le 2e^{-c_1 k}.
\end{aligned}
\tag{5.4}
\]
Now we choose the $x_i$'s as follows: set $x_0 := 0$, and define iteratively
\[
x_{i+1} := x_i + 2\sqrt{\beta t}\bigl([\log\beta]+1\bigr) = 2(i+1)\sqrt{\beta t}\bigl([\log\beta]+1\bigr) \qquad \text{for all } i\ge0.
\tag{5.5}
\]
The preceding implies that for all sufficiently large $k$,
\[
\mathrm{P}\Bigl\{\sup_{x\in[0,\,2(N+1)\sqrt{\beta t}([\log\beta]+1)]}|u_t(x)| < k\Bigr\} \le 2e^{-c_1 k},
\tag{5.6}
\]
whence
\[
\mathrm{P}\Bigl\{\sup_{|x|\le 2(N+1)\sqrt{\beta t}([\log\beta]+1)}|u_t(x)| < k\Bigr\} \le 2e^{-c_1 k}
\tag{5.7}
\]
by symmetry. It is not hard to verify that, as $k\to\infty$,
\[
2(N+1)\sqrt{\beta t}\bigl([\log\beta]+1\bigr) = O\bigl(e^{\rho k^{1/\gamma}}\bigr).
\tag{5.8}
\]
Consequently, the Borel–Cantelli lemma, used in conjunction with a standard monotonicity argument, implies that $u^*_t(R) \ge \mathrm{const}\cdot(\log(R)/c_2)^\gamma$ a.s. for all sufficiently large values of $R$, where $u^*_t(R)$ is defined in (1.23). By Proposition 3.8, $c_2 = \mathrm{const}\cdot\kappa^{1/(12\gamma)}$. Therefore, the theorem follows. □
6. Proof of Theorem 1.2. Next we prove Theorem 1.2. In order to obtain the upper bound, the proof requires an estimate of the spatial continuity of $x\mapsto u_t(x)$. However, matters are somewhat complicated by the fact that we need a modulus-of-continuity estimate that holds simultaneously for every $x\in[-R,R]$, uniformly for all large values of $R$. This will be overcome in a few steps. The first is a standard moment bound for the increments of the solution; however, we need to pay close attention to the constants in the estimate.
LEMMA 6.1. Choose and fix some $t>0$, and suppose $\sigma$ is uniformly bounded. Then there exists a finite and positive constant $A$ such that for all real numbers $k\ge2$,
\[
\sup_{-\infty<x\ne x'<\infty} \frac{\mathrm{E}(|u_t(x)-u_t(x')|^{2k})}{|x-x'|^k} \le \biggl(\frac{Ak}{\kappa}\biggr)^k.
\tag{6.1}
\]
PROOF. Throughout, let $S_0 := \sup_{x\in\mathbf{R}}|\sigma(x)|$. If $x, x'\in[-R,R]$ and $t>0$ are held fixed, then we can write $u_t(x)-u_t(x') = N_t$, where $\{N_\tau\}_{\tau\in(0,t)}$ is the continuous mean-zero martingale described by
\[
N_\tau := \int_{(0,\tau)\times\mathbf{R}} \bigl[p_{t-s}(y-x)-p_{t-s}(y-x')\bigr]\,\sigma\bigl(u_s(y)\bigr)\,W(ds\,dy)
\tag{6.2}
\]
for $\tau\in(0,t)$. The quadratic variation of $\{N_\tau\}_{\tau\in(0,t)}$ is estimated as follows:
\[
\begin{aligned}
\langle N\rangle_\tau
&\le S_0^2\cdot\int_0^\tau ds\int_{-\infty}^{\infty} dy\,\bigl[p_s(y-x)-p_s(y-x')\bigr]^2\\
&\le e^\tau S_0^2\cdot\int_0^\infty e^{-s}\,ds\int_{-\infty}^{\infty} dy\,\bigl[p_s(y-x)-p_s(y-x')\bigr]^2.
\end{aligned}
\tag{6.3}
\]
ON THE CHAOTIC CHARACTER OF THE STOCHASTIC HEAT EQUATION 2249
For every fixed $s>0$, we can compute the $dy$-integral using Plancherel's theorem, and obtain $\pi^{-1}\int_{-\infty}^{\infty}(1-\cos(\xi|x-x'|))\exp(-\kappa s\xi^2)\,d\xi$. Therefore, there exists a finite and positive constant $a$ such that
\[
\langle N\rangle_\tau \le \frac{e^\tau S_0^2}{\pi}\cdot\int_{-\infty}^{\infty} \frac{1-\cos(\xi|x-x'|)}{1+\kappa\xi^2}\,d\xi \le \frac{a}{\kappa}|x-x'|,
\tag{6.4}
\]
uniformly for all $\tau\in(0,t)$; we emphasize that $a$ depends only on $S_0$ and $t$. The Carlen–Kree estimate [6] for the Davis [12] optimal constant in the Burkholder–Davis–Gundy inequality [3–5] implies the lemma. □
The second estimate turns the preceding moment bounds into a maximal exponential estimate. We use a standard chaining argument to do this. However, once again we have to pay close attention to the parameter dependencies in the implied constants.
LEMMA 6.2. Choose and fix $t>0$, and suppose $\sigma$ is uniformly bounded. Then there exists a constant $C\in(0,\infty)$ such that
\[
\mathrm{E}\biggl[\sup_{\substack{x,x'\in I:\\ |x-x'|\le\delta}} \exp\biggl(\frac{\kappa|u_t(x)-u_t(x')|^2}{C\delta}\biggr)\biggr] \le \frac{2}{\delta},
\tag{6.5}
\]
uniformly for every $\delta\in(0,1]$ and every interval $I\subset[0,\infty)$ of length at most one.
PROOF. Recall [10], (39), page 11, the Kolmogorov continuity theorem in the following quantitative form: suppose there exist $\nu>\gamma>1$ for which a stochastic process $\{\xi(x)\}_{x\in\mathbf{R}}$ satisfies
\[
\mathrm{E}\bigl(|\xi(x)-\xi(x')|^\nu\bigr) \le C|x-x'|^\gamma;
\tag{6.6}
\]
we assume that the preceding holds for all $x,x'\in\mathbf{R}$, and $C\in(0,\infty)$ is independent of $x$ and $x'$. Then, for every integer $m\ge0$,
\[
\mathrm{E}\biggl(\sup_{\substack{x,x'\in I:\\ |x-x'|\le 2^{-m}}} |\xi(x)-\xi(x')|^\nu\biggr)
\le \biggl(\frac{2^{(2-\gamma+\nu)/\nu}C^{1/\nu}}{1-2^{-(\gamma-1)/\nu}}\biggr)^{\nu}\cdot 2^{-m(\gamma-1)}.
\tag{6.7}
\]
(Reference [10], (39), page 11, claims this with $2^{-m(\gamma-1)}$ replaced by $2^{-m\gamma}$ on the right-hand side. But this is a typographical error; compare with [10], (38), page 11.)
If $\delta\in(0,1]$, then we can find an integer $m\ge0$ such that $2^{-m-1}\le\delta\le 2^{-m}$, whence it follows that
\[
\mathrm{E}\biggl(\sup_{\substack{x,x'\in I:\\ |x-x'|\le\delta}} |\xi(x)-\xi(x')|^\nu\biggr)
\le \biggl(\frac{2^{(2-\gamma+\nu)/\nu}C^{1/\nu}}{1-2^{-(\gamma-1)/\nu}}\biggr)^{\nu}\cdot 2^{-m(\gamma-1)}
\le \biggl(\frac{2^{(2-\gamma+\nu)/\nu}C^{1/\nu}}{1-2^{-(\gamma-1)/\nu}}\biggr)^{\nu}\cdot (2\delta)^{\gamma-1}.
\tag{6.8}
\]
2250 D. CONUS, M. JOSEPH AND D. KHOSHNEVISAN
We apply the preceding with $\xi(x) := u_t(x)$, $\gamma := \nu/2 := k$ and $C := (Ak/\kappa)^k$, where $A$ is the constant of Lemma 6.1. It follows that there exists a positive and finite constant $\bar{A}$ such that for all intervals $I$ of length at most one, all integers $k\ge2$, and every $\delta\in(0,1]$,
\[
\mathrm{E}\biggl(\sup_{\substack{x,x'\in I:\\ |x-x'|\le\delta}} |u_t(x)-u_t(x')|^{2k}\biggr) \le \biggl(\frac{\bar{A}k}{\kappa}\biggr)^k \delta^{k-1}.
\tag{6.9}
\]
Stirling's formula tells us that there exists a finite constant $\bar{B}>1$ such that $(\bar{A}k)^k \le \bar{B}^k k!$ for all integers $k\ge1$. Therefore, for all $\alpha,\delta>0$,
\[
\mathrm{E}\biggl[\sup_{\substack{x,x'\in I:\\ |x-x'|\le\delta}} \exp\biggl(\frac{\alpha|u_t(x)-u_t(x')|^2}{\delta}\biggr)\biggr] \le \frac{1}{\delta}\sum_{k=0}^{\infty}\biggl(\frac{\alpha\bar{B}}{\kappa}\biggr)^k.
\tag{6.10}
\]
And this is at most two if $\alpha := \kappa/(2\bar{B})$. The result follows. □
Next we obtain another moment bound, this time for the solution rather than its increments.
LEMMA 6.3. Choose and fix $t>0$, and suppose $\sigma$ is uniformly bounded. Then for all integers $k\ge1$,
\[
\sup_{x\in\mathbf{R}} \mathrm{E}\bigl(|u_t(x)|^{2k}\bigr) \le \bigl(2\sqrt{2}+o(1)\bigr)(\mu_t k)^k \qquad (\text{as } k\to\infty),
\tag{6.11}
\]
where the "$o(1)$" term depends only on $k$, and
\[
\mu_t := \frac{8}{e}\cdot S_0^2\sqrt{\frac{t}{\pi\kappa}}.
\tag{6.12}
\]
PROOF. Let us choose and fix a $t>0$. Define $S_0 := \sup_{x\in\mathbf{R}}|\sigma(x)|$, and recall the martingale $\{M_\tau\}_{\tau\in(0,t)}$ from (3.17). Itô's formula (3.19) tells us that a.s., for all $\tau\in(0,t)$,
\[
M_\tau^{2k} \le 1 + 2k\int_0^\tau M_s^{2k-1}\,dM_s + \binom{2k}{2} S_0^2 \int_0^\tau \frac{M_s^{2(k-1)}}{(4\pi\kappa(t-s))^{1/2}}\,ds.
\tag{6.13}
\]
[Compare with (3.20).] We can take expectations, iterate the preceding and argue as we did in the proof of Proposition 3.6. To summarize the end result, let us define
\[
\eta_\tau(x) := 1 + S_0\cdot\int_{(0,\tau)\times\mathbf{R}} p_{t-s}(y-x)\,W(ds\,dy) \qquad (0\le\tau\le t)
\tag{6.14}
\]
and
\[
\zeta_\tau(x) := S_0\cdot\int_{(0,\tau)\times\mathbf{R}} p_{t-s}(y-x)\,W(ds\,dy) \qquad (0\le\tau\le t).
\tag{6.15}
\]
Then we have $\mathrm{E}[M_t^{2k}] \le \mathrm{E}[\eta_t(x)^{2k}] \le 2^{2k}(1+\mathrm{E}[\zeta_t(x)^{2k}])$, and computations similar to those in the proof of Proposition 3.6 yield the lemma. □
Next we turn the preceding moment bound into a sharp Gaussian tail-probabilityestimate.
LEMMA 6.4. Choose and fix a $t>0$, and suppose that $\sigma$ is uniformly bounded. Then there exist finite constants $C>c>0$ such that simultaneously for all $\lambda>1$ and $x\in\mathbf{R}$,
\[
c\exp\bigl(-C\sqrt{\kappa}\,\lambda^2\bigr) \le \mathrm{P}\{|u_t(x)|>\lambda\} \le C\exp\bigl(-c\sqrt{\kappa}\,\lambda^2\bigr).
\tag{6.16}
\]
PROOF. The lower bound is proved by an appeal to the Paley–Zygmund inequality, in the very same manner that Proposition 3.7 was established; however, we apply the improved inequality of Lemma 6.3 (in place of the result of Lemma 3.2). As regards the upper bound, note that Lemma 6.3 implies that there exists a positive and finite constant $A$ such that for all integers $m\ge0$, $\sup_{x\in\mathbf{R}} \mathrm{E}(|u_t(x)|^{2m}) \le (A/\sqrt{\kappa})^m m!$, thanks to the Stirling formula. Thus,
\[
\sup_{x\in\mathbf{R}} \mathrm{E}\exp\bigl(\alpha|u_t(x)|^2\bigr) \le \sum_{m=0}^{\infty}\biggl(\frac{\alpha A}{\sqrt{\kappa}}\biggr)^m = \frac{1}{1-\alpha A\kappa^{-1/2}} < \infty,
\tag{6.17}
\]
provided that $\alpha\in(0,\sqrt{\kappa}/A)$. Notice that this has a different behavior than (6.10) in terms of $\kappa$. If we fix such an $\alpha$, then we obtain from Chebyshev's inequality the bound $\mathrm{P}\{u_t(x)>\lambda\} \le (1-\alpha A\kappa^{-1/2})^{-1}\cdot\exp(-\alpha\lambda^2)$, valid simultaneously for all $x\in\mathbf{R}$ and $\lambda>0$. We write $\alpha := c\sqrt{\kappa}$ to finish. □
We are finally ready to assemble the preceding estimates in order to establishTheorem 1.2.
PROOF OF THEOREM 1.2. Consider the proof of Theorem 1.1: if we replace the role of (3.2) by the bounds in Lemma 6.4 and choose $\lambda := k$, $N := \lfloor k\exp(c\sqrt{\kappa}\,k^2)\rfloor$ and $\beta := \exp((\mathrm{Lip}_\sigma\vee1)^4 k^2/\kappa)$ in the equivalent of (5.3) with the appropriate estimates, then we obtain the almost-sure bound $\liminf_{R\to\infty} u^*_t(R)/(\log R)^{1/2} > \mathrm{const}\cdot\kappa^{-1/4}$, where "const" is independent of $\kappa$. It remains to derive a corresponding upper bound for the lim sup.
Suppose $R\ge1$ is an integer. We partition the interval $[-R,R]$ using a length-one mesh with endpoints $\{x_j\}_{j=0}^{2R}$ via
\[
x_j := -R + j \qquad \text{for } 0\le j\le 2R.
\tag{6.18}
\]
Then we write
\[
\mathrm{P}\bigl\{u^*_t(R) > 2\alpha(\log R)^{1/2}\bigr\} \le T_1 + T_2,
\tag{6.19}
\]
where
\[
\begin{aligned}
T_1 &:= \mathrm{P}\Bigl\{\max_{1\le j\le 2R} u_t(x_j) > \alpha(\log R)^{1/2}\Bigr\},\\
T_2 &:= \mathrm{P}\Bigl\{\max_{1\le j\le 2R}\,\sup_{x\in(x_j,x_{j+1})} |u_t(x)-u_t(x_j)| > \alpha(\log R)^{1/2}\Bigr\}.
\end{aligned}
\tag{6.20}
\]
By Lemma 6.4,
\[
T_1 \le 2R\,\sup_{x\in\mathbf{R}} \mathrm{P}\bigl\{u_t(x) > \alpha(\log R)^{1/2}\bigr\} \le \frac{\mathrm{const}}{R^{-1+c\sqrt{\kappa}\,\alpha^2}}.
\tag{6.21}
\]
Similarly,
\[
T_2 \le 2R\,\sup_I \mathrm{P}\Bigl\{\sup_{x,x'\in I} |u_t(x)-u_t(x')| > \alpha(\log R)^{1/2}\Bigr\},
\tag{6.22}
\]
where "$\sup_I$" designates a supremum over all intervals $I$ of length one. Chebyshev's inequality and Lemma 6.2 together imply that
\[
T_2 \le 2R^{-(\kappa/C)\alpha^2+1}\,\sup_I \mathrm{E}\biggl[\sup_{x,x'\in I} \exp\biggl(\frac{\kappa}{C}|u_t(x)-u_t(x')|^2\biggr)\biggr] \le \frac{\mathrm{const}}{R^{-1+(\kappa\alpha^2)/C}}.
\tag{6.23}
\]
Let $q := \min(\kappa/C,\,c\sqrt{\kappa})$ to find that
\[
\sum_{R=1}^{\infty} \mathrm{P}\bigl\{u^*_t(R) > 2\alpha(\log R)^{1/2}\bigr\} \le \mathrm{const}\cdot\sum_{R=1}^{\infty} R^{-q\alpha^2+1},
\tag{6.24}
\]
and this is finite provided that $\alpha > (2/q)^{1/2}$. By the Borel–Cantelli lemma,
\[
\limsup_{R\to\infty:\,R\in\mathbf{Z}} \frac{u^*_t(R)}{(\log R)^{1/2}} \le \biggl(\frac{8}{q}\biggr)^{1/2} < \infty \qquad \text{a.s.}
\tag{6.25}
\]
Clearly, $(8/q)^{1/2} \le \mathrm{const}\cdot\kappa^{-1/4}$ for all $\kappa\ge\kappa_0$, for a constant that depends only on $\kappa_0$. And we can remove the restriction "$R\in\mathbf{Z}$" in the lim sup by a standard monotonicity argument; namely, by considering $R-1\le X\le R$ in the following, we find that
\[
\limsup_{X\to\infty} \frac{u^*_t(X)}{(\log X)^{1/2}} \le \limsup_{R\to\infty:\,R\in\mathbf{Z}} \frac{u^*_t(R)}{(\log(R-1))^{1/2}} \le \biggl(\frac{8}{q}\biggr)^{1/2} \qquad \text{a.s.}
\tag{6.26}
\]
This proves the theorem. □
7. Proof of Theorem 1.3. This section is mainly concerned with the proof of Theorem 1.3. For that purpose, we start with tail estimates.
LEMMA 7.1. Consider (2.11) with $\sigma(x) := cx$, where $c>0$ is fixed. Then,
\[
\log \mathrm{P}\{|u_t(x)| \ge \lambda\} \asymp -\sqrt{\kappa}\,(\log\lambda)^{3/2} \qquad \text{as } \lambda\to\infty.
\tag{7.1}
\]
PROOF. Corollary 3.5 implies the upper bound (the boundedness of $\sigma$ is not required in the results of Section 3.1).
As for the lower bound, we know from [2], Theorem 2.6, that
\[
e^{k(k^2-1)t/(24\kappa)} \le \mathrm{E}\bigl(|u_t(x)|^k\bigr) \le 2e^{k(k^2-1)t/(24\kappa)},
\tag{7.2}
\]
uniformly for all integers $k\ge2$ and $x\in\mathbf{R}$. Now we follow the same method as in the proof of Proposition 3.7, and use the Paley–Zygmund inequality to obtain
\[
\mathrm{P}\Bigl\{|u_t(x)| \ge \tfrac{1}{2}\|u_t(x)\|_{2k}\Bigr\} \ge \frac{[\mathrm{E}(|u_t(x)|^{2k})]^2}{4\,\mathrm{E}(|u_t(x)|^{4k})} \ge C_1 e^{-D_1 k^3/\kappa}
\tag{7.3}
\]
for some nontrivial constants $C_1$ and $D_1$ that do not depend on $x$ or $k$. We then obtain the following: uniformly for all $x\in\mathbf{R}$ and sufficiently large integers $k$,
\[
\mathrm{P}\Bigl\{|u_t(x)| \ge \frac{C}{2}e^{4Dk^2/\kappa}\Bigr\} \ge C_1 e^{-D_1 k^3/\kappa}.
\tag{7.4}
\]
Let $\lambda := (C/2)\exp\{4Dk^2/\kappa\}$, and apply a direct computation to deduce the lower bound. □
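The Paley–Zygmund step (7.3) — a lower tail bound built purely from a ratio of moments — can be sanity-checked in any model where both sides are explicit. A minimal sketch with a unit-rate exponential variable ($\mathrm{E}X^m = m!$ and $\mathrm{P}\{X\ge a\} = e^{-a}$); the choice of distribution is an illustrative assumption:

```python
import math

# X ~ Exponential(1): E X^m = m!, P{X >= a} = exp(-a). Check, for each k, that
#   P{X >= ||X||_{2k} / 2}  >=  (E X^{2k})^2 / (4 E X^{4k}),
# which is the Paley-Zygmund inequality in the form used in (7.3).
for k in range(1, 8):
    norm_2k = math.factorial(2 * k) ** (1.0 / (2 * k))     # ||X||_{2k}
    lhs = math.exp(-norm_2k / 2.0)                         # exact tail at ||X||_{2k}/2
    rhs = math.factorial(2 * k) ** 2 / (4.0 * math.factorial(4 * k))
    assert lhs >= rhs
```

As in the proof above, the moment ratio on the right is tiny but strictly positive, and that is all the argument needs: it converts two-sided moment asymptotics such as (7.2) into a quantitative lower tail bound.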
We are now ready to prove Theorem 1.3. Our proof is based on ideas roughly similar to those used in the course of the proof of Theorem 1.1; at a technical level, however, they are slightly different. Let us point out some of the essential differences: unlike what we did in the proof of Theorem 1.1, we now do not choose the values of $N$, $\beta$ and $\lambda$ as functions of $k$, but rather as functions of $R$; the order of the moments $k$ will be fixed; and we will not sum on $k$, but rather on a discrete sequence of values of the parameter $R$. The details follow.
PROOF OF THEOREM 1.3. First we derive the lower bound by following the same method that was used in the proof of Theorem 1.1; see Section 5. But we now use Lemma 7.1 rather than Corollary 3.1.
The results of Section 4 can be modified to apply to the parabolic Anderson model, provided that we again apply Lemma 7.1 in place of Corollary 3.1. In this way we obtain the following, where the $x_i$'s are defined by (5.5) (see footnote 4): consider the
4. To be very precise, we once again need to write $(\beta,[\log\beta]+1)$ in place of $(\beta,\log\beta)$.
event
\[
\mathcal{E} := \Bigl\{\max_{1\le j\le N}|u_t(x_j)| < \ell\Bigr\} \qquad \text{where } \ell := \exp\biggl(C_1\frac{(\log R)^{2/3}}{\kappa^{1/3}}\biggr).
\tag{7.5}
\]
Then,
\[
\begin{aligned}
\mathrm{P}(\mathcal{E})
&\le \mathrm{P}\Bigl\{\max_{1\le j\le N}\bigl|U^{(\beta,\log\beta)}_t(x_j)\bigr| < 2\ell\Bigr\}
+ \mathrm{P}\bigl\{\bigl|u_t(x_j)-U^{(\beta,\log\beta)}_t(x_j)\bigr| > \ell \text{ for some } 1\le j\le N\bigr\}\\
&\le \bigl(1-\mathrm{P}\bigl\{\bigl|U^{(\beta,\log\beta)}_t(x_j)\bigr| \ge 2\ell\bigr\}\bigr)^N + N\beta^{-k}C_*^k k^{k/2}e^{Gk^3}\ell^{-k}.
\end{aligned}
\tag{7.6}
\]
Note that we do not yet have a lower bound on $\mathrm{P}\{|U^{(\beta,\log\beta)}_t(x)|\ge\lambda\}$. However, we have
\[
\begin{aligned}
\mathrm{P}\bigl\{\bigl|U^{(\beta,\log\beta)}_t(x_j)\bigr| \ge 2\ell\bigr\}
&\ge \mathrm{P}\bigl\{|u_t(x_j)| \ge 3\ell\bigr\} - \mathrm{P}\bigl\{\bigl|u_t(x_j)-U^{(\beta,\log\beta)}_t(x_j)\bigr| \ge \ell\bigr\}\\
&\ge \alpha_1 R^{-\alpha_2 C_1^{3/2}} - N\beta^{-k}C_*^k k^{k/2}e^{Gk^3}\ell^{-k},
\end{aligned}
\tag{7.7}
\]
valid for some positive constants $\alpha_1$ and $\alpha_2$.
valid for some positive constants α1 and α2. Now let us choose N := ïżœRaïżœ andÎČ := R1âa for a fixed a â (0,1). With these values of N and ÎČ and the lowerbound in (7.7), the upper bound in (7.6) becomes
P(ïżœ) â€(
1 â α1Râα2C
3/21 + Ckâkk/2eGk3
Rk(1âa)âaïżœ
)N
+ Ckâkk/2eGk3
Rk(1âa)âaïżœ.(7.8)
Let us consider $k$ large enough so that $k(1-a)-a>2$. Notice that $k$ will not depend on $R$; this is in contrast with what happened in the proof of Theorem 1.1. We can choose the constant $C_1$ small enough to satisfy $\alpha_2 C_1^{3/2} < a/2$. Using these choices, we obtain
\[
\mathrm{P}\Bigl\{\sup_{x\in[0,R]}|u_t(x)| < e^{C_1(\log R)^{2/3}/\kappa^{1/3}}\Bigr\} \le \exp\bigl(-\alpha_1 R^{a/2}\bigr) + \frac{\mathrm{const}}{R^2}.
\tag{7.9}
\]
The Borel–Cantelli lemma yields the lower bound of the theorem.
We can now prove the upper bound. Our derivation is modeled after the proof of Theorem 1.2. First, we need a continuity estimate for the solution of (2.11) in the case that $\sigma(x) := cx$. In accord with (7.2),
\[
\begin{aligned}
\mathrm{E}\bigl(|u_t(x)-u_t(y)|^{2k}\bigr)
&\le (2\sqrt{2k})^{2k}\biggl[\int_0^t dr\,\|u(r,0)\|_{2k}^2 \int_{\mathbf{R}} dz\,|p_{t-r}(x-z)-p_{t-r}(y-z)|^2\biggr]^k\\
&\le (2\sqrt{2k})^{2k}\biggl[\int_0^t dr\,2^{1/k}e^{8Dk^2/\kappa}\int_{\mathbf{R}} dz\,|p_{t-r}(x-z)-p_{t-r}(y-z)|^2\biggr]^k
\end{aligned}
\tag{7.10}
\]
for some constant $D$ which depends on $t$. Consequently [see the derivation of (6.4)],
\[
\mathrm{E}\bigl(|u_t(x)-u_t(y)|^{2k}\bigr) \le C^{k^2}\biggl(\frac{|y-x|}{\kappa}\biggr)^k \exp\biggl(\frac{Bk^3}{\kappa}\biggr)
\tag{7.11}
\]
for constants $B,C\in(0,\infty)$ that do not depend on $k$. We apply an argument similar to the one used in the proof of Lemma 6.2 in order to deduce that, simultaneously for all intervals $I$ of length 1,
\[
\mathrm{E}\biggl(\sup_{x,x'\in I:\,|x-x'|\le1} |u_t(x)-u_t(x')|^{2k}\biggr) \le \frac{C_1^{k^2} e^{C_2 k^3/\kappa}}{\kappa^k}
\tag{7.12}
\]
for constants $C_1, C_2\in(0,\infty)$ that do not depend on $k$ or $\kappa$. Now we follow the proof of Theorem 1.2 and partition $[-R,R]$ into intervals of length 1. Let $b>0$, and deduce the following:
\[
\mathrm{P}\bigl\{u^*_t(R) > 2e^{b(\log R)^{2/3}/\kappa^{1/3}}\bigr\} \le T_1 + T_2,
\tag{7.13}
\]
where
\[
T_1 := \mathrm{P}\Bigl\{\max_{1\le j\le 2R} u_t(x_j) > e^{b(\log R)^{2/3}/\kappa^{1/3}}\Bigr\}
\tag{7.14}
\]
and
\[
T_2 := \mathrm{P}\Bigl\{\max_{1\le j\le 2R}\,\sup_{x\in(x_j,x_{j+1})} |u_t(x)-u_t(x_j)| > e^{b(\log R)^{2/3}/\kappa^{1/3}}\Bigr\}.
\tag{7.15}
\]
[Compare with (6.19).] On one hand, Lemma 7.1 implies that
\[
T_1 \le 2R\cdot\mathrm{P}\bigl\{u_t(x_j) > e^{b(\log R)^{2/3}/\kappa^{1/3}}\bigr\} \le \frac{2c_3 R}{R^{c_4 b^{3/2}}}
\tag{7.16}
\]
for some constants $c_3, c_4>0$. On the other hand, (7.12) and Chebyshev's inequality imply that
\[
T_2 \le 2R\,\mathrm{P}\Bigl\{\sup_{x,x'\in I:\,|x-x'|\le1} |u_t(x)-u_t(x')| \ge e^{b(\log R)^{2/3}/\kappa^{1/3}}\Bigr\}
\le \frac{2R\,C_1^{k^2} e^{C_2 k^3/\kappa}}{\kappa^k e^{2kb(\log R)^{2/3}/\kappa^{1/3}}}.
\tag{7.17}
\]
Now we choose $k := \lfloor\kappa^{1/3}(\log R)^{1/3}\rfloor$ in order to obtain $T_2 \le \mathrm{const}\cdot R^{1+C_2-2b}$, where the constant depends on $\kappa$. With these choices of parameters we deduce from (7.16) and (7.17) that if $b$ is sufficiently large, then
\[
\sum_{R=1}^{\infty} \mathrm{P}\bigl\{u^*_t(R) > 2e^{b(\log R)^{2/3}/\kappa^{1/3}}\bigr\} < \infty.
\tag{7.18}
\]
The Borel–Cantelli lemma and a monotonicity argument together complete the proof. □
APPENDIX A: WALSH STOCHASTIC INTEGRALS
Throughout this Appendix, $(\Omega,\mathcal{F},\mathrm{P})$ denotes (as is usual) the underlying probability space. We state and prove some elementary properties of Walsh stochastic integrals [23].
Let $\mathcal{L}_d$ denote the collection of all Borel-measurable sets in $\mathbf{R}^d$ that have finite $d$-dimensional Lebesgue measure. (We could work with Lebesgue-measurable sets as well.)
Let us follow Walsh [23] and define, for every $t>0$ and $A\in\mathcal{L}_d$, the random field
\[
W_t(A) := \int_{[0,t]\times A} W(ds\,dy).
\tag{A.1}
\]
The preceding stochastic integral is defined in the sense of Wiener. Let $\mathcal{F}_t(A)$ denote the sigma-algebra generated by all random variables of the form
\[
\{W_s(B) : s\in(0,t],\ B\in\mathcal{L}_d,\ B\subseteq A\}.
\tag{A.2}
\]
We may assume without loss of generality that, for all $A\in\mathcal{L}_d$, $\{\mathcal{F}_t(A)\}_{t>0}$ is a right-continuous $\mathrm{P}$-complete filtration (i.e., satisfies the "usual hypotheses" of Dellacherie and Meyer [13]); otherwise, we augment $\{\mathcal{F}_t(A)\}_{t>0}$ in the usual way. Let
\[
\mathcal{F}_t := \bigvee_{A\in\mathcal{L}_d} \mathcal{F}_t(A) \qquad (t>0).
\tag{A.3}
\]
Let $\mathcal{P}$ denote the collection of all processes that are predictable with respect to $\{\mathcal{F}_t\}_{t>0}$. The elements of $\mathcal{P}$ are precisely the "predictable random fields" of Walsh [23].
For us, the elements of $\mathcal{P}$ are of interest because if $Z\in\mathcal{P}$ and
\[
\|Z\|^2_{L^2(\mathbf{R}_+\times\mathbf{R}^d\times\Omega)} := \mathrm{E}\int_0^\infty dt\int_{\mathbf{R}^d} dx\,[Z_t(x)]^2 < \infty,
\tag{A.4}
\]
then the Walsh stochastic integral $I_t := \int_{[0,t]\times\mathbf{R}} Z_s(y)\,W(ds\,dy)$ is properly defined and has good mathematical properties. Chief among those good properties are the following: $\{I_t\}_{t>0}$ is a continuous mean-zero $L^2$-martingale with quadratic variation $\langle I\rangle_t := \int_0^t ds\int_{-\infty}^{\infty} dy\,[Z_s(y)]^2$.
Let us define $\mathcal{P}(A)$ to be the collection of all processes that are predictable with respect to $\{\mathcal{F}_t(A)\}_{t>0}$. Clearly, $\mathcal{P}(A)\subseteq\mathcal{P}$ for all $A\in\mathcal{L}_d$.
PROPOSITION A.1. If $Z\in\mathcal{P}(A)$ for some $A\in\mathcal{L}_d$ and $\|Z\|_{L^2(\mathbf{R}_+\times\mathbf{R}^d\times\Omega)} < \infty$, then the martingale defined by $J_t := \int_{[0,t]\times A} Z_s(y)\,W(ds\,dy)$ is in $\mathcal{P}(A)$.
PROOF. It suffices to prove this for a random field $Z$ of the form
\[
Z_s(y)(\omega) = \mathbf{1}_{[a,b]}(s)\,X(\omega)\,\mathbf{1}_A(y) \qquad (s>0,\ y\in\mathbf{R},\ \omega\in\Omega),
\tag{A.5}
\]
where $0\le a<b$ and $X$ is a bounded $\mathcal{F}_a(A)$-measurable random variable. But in that case, $J_t(\omega) = X(\omega)\cdot\int_{([0,t]\cap[a,b])\times A} W(ds\,dy)$, whence the result follows easily from the easy-to-check fact that the stochastic process defined by $I_t := \int_{([0,t]\cap[a,b])\times A} W(ds\,dy)$ is continuous (up to a modification). The latter assertion follows from the Kolmogorov continuity theorem; namely, we check first that $\mathrm{E}(|I_t-I_r|^2) = |A|\cdot|t-r|$, where $|A|$ denotes the Lebesgue measure of $A$. Then use the fact, valid for all Gaussian random variables including $I_t-I_r$, that $\mathrm{E}(|I_t-I_r|^k) = \mathrm{const}\cdot\{\mathrm{E}(|I_t-I_r|^2)\}^{k/2}$ for all $k\ge2$. □
Proposition A.1 is a small variation on Walsh's original construction of his stochastic integrals. We need this minor variation for the following reason:
COROLLARY A.2. Let $A^{(1)},\ldots,A^{(N)}$ be fixed and nonrandom disjoint elements of $\mathcal{L}_d$. If $Z^{(1)},\ldots,Z^{(N)}$ are, respectively, in $\mathcal{P}(A^{(1)}),\ldots,\mathcal{P}(A^{(N)})$ and $\|Z^{(j)}\|_{L^2(\mathbf{R}_+\times\mathbf{R}^d\times\Omega)} < \infty$ for all $j=1,\ldots,N$, then $J^{(1)},\ldots,J^{(N)}$ are independent processes, where
\[
J^{(j)}_t := \int_{[0,t]\times A^{(j)}} Z^{(j)}_s(y)\,W(ds\,dy) \qquad (j=1,\ldots,N,\ t>0).
\tag{A.6}
\]
PROOF. Owing to Proposition A.1, it suffices to prove that if random fields $X^{(1)},\ldots,X^{(N)}$ satisfy $X^{(j)}\in\mathcal{P}(A^{(j)})$ ($j=1,\ldots,N$), then $X^{(1)},\ldots,X^{(N)}$ are independent. It suffices to prove this in the case that the $X^{(j)}$'s are simple predictable processes; that is, in the case that
\[
X^{(j)}_s(y)(\omega) = \mathbf{1}_{[a_j,b_j]}(s)\,Y_j(\omega)\,\mathbf{1}_{A^{(j)}}(y),
\tag{A.7}
\]
where $0<a_j<b_j$ and $Y_j$ is a bounded $\mathcal{F}_{a_j}(A^{(j)})$-measurable random variable. In turn, we may restrict attention to $Y_j$'s that have the form
\[
Y_j(\omega) := \varphi_j\biggl(\int_{[\alpha_j,\beta_j]\times A^{(j)}} W(ds\,dy)\biggr),
\tag{A.8}
\]
where $0<\alpha_j<\beta_j\le a_j$ and $\varphi_j : \mathbf{R}\to\mathbf{R}$ is bounded and Borel measurable. But the assertion is now clear, since $Y_1,\ldots,Y_N$ are manifestly independent. In order to see this we need only verify that the covariance between $\int_{[\alpha_j,\beta_j]\times A^{(j)}} W(ds\,dy)$ and $\int_{[\alpha_k,\beta_k]\times A^{(k)}} W(ds\,dy)$ is zero when $j\ne k$; and this is a ready consequence of the fact that $A^{(j)}\cap A^{(k)} = \varnothing$ when $j\ne k$. □
APPENDIX B: SOME FINAL REMARKS
Recall that $\|U\|_k$ denotes the usual $L^k(\mathrm{P})$-norm of a random variable $U$ for all $k\in(0,\infty)$. According to Lemma 3.2,
\[
\limsup_{t\to\infty} \frac{1}{t}\log\sup_{x\in\mathbf{R}} \mathrm{E}\bigl(|u_t(x)|^k\bigr) \le aCk^3 \qquad \text{if } k\ge2.
\tag{B.1}
\]
This and Jensen's inequality together imply that
\[
\gamma(\nu) := \limsup_{t\to\infty} \frac{1}{t}\log\sup_{x\in\mathbf{R}} \mathrm{E}\bigl(|u_t(x)|^\nu\bigr) < \infty \qquad \text{for all } \nu>0.
\tag{B.2}
\]
And Chebyshev's inequality implies that for all $\nu>0$,
\[
\begin{aligned}
\sup_{x\in\mathbf{R}} \mathrm{P}\bigl\{u_t(x) \ge e^{-qt}\bigr\}
&\le \exp\biggl(\nu t\biggl[q + \frac{1}{\nu t}\log\sup_{x\in\mathbf{R}}\mathrm{E}\bigl(|u_t(x)|^\nu\bigr)\biggr]\biggr)\\
&= \exp\biggl(\nu t\biggl[q + \frac{\gamma(\nu)}{\nu} + o(1)\biggr]\biggr) \qquad (t\to\infty).
\end{aligned}
\tag{B.3}
\]
Because $u_t(x)\ge0$, it follows that $\nu\mapsto\gamma(\nu)/\nu$ is nondecreasing on $(0,\infty)$, whence
\[
\varrho := \lim_{\nu\to0}\frac{\gamma(\nu)}{\nu} = \inf_{\nu>0}\frac{\gamma(\nu)}{\nu} \qquad \text{exists and is finite.}
\tag{B.4}
\]
Therefore, in particular,
\[
\limsup_{t\to\infty} \frac{1}{t}\log\sup_{x\in\mathbf{R}} \mathrm{P}\bigl\{u_t(x)\ge e^{-qt}\bigr\} < 0 \qquad \text{for all } q\in(-\infty,-\varrho).
\tag{B.5}
\]
Now consider the case that $\sigma(0)=0$, and recall that in that case $u_0(x)\ge0$ for all $x\in\mathbf{R}$. Mueller's comparison principle tells us that $u_t(x)\ge0$ a.s. for all $t\ge0$ and $x\in\mathbf{R}$, whence it follows that $\|u_t(x)\|_1 = \mathrm{E}[u_t(x)] = (p_t*u_0)(x)$ is bounded in $t$. This shows that $\gamma(1)=0$, and hence $\varrho\le0$. We have proved the following:
PROPOSITION B.1. If $\sigma(0)=0$, then there exists $q\ge0$ such that
\[
\frac{1}{t}\log u_t(x) \le -q + o_{\mathrm{P}}(1) \qquad \text{as } t\to\infty, \text{ for every } x\in\mathbf{R},
\tag{B.6}
\]
where $o_{\mathrm{P}}(1)$ is a term that converges to zero in probability as $t\to\infty$.
Bertini and Giacomin [5] have studied the case that $\sigma(x)=cx$ and have shown that, in that case, there exists a special choice of $u_0$ such that for all compactly supported probability densities $\varphi\in C^\infty(\mathbf{R})$,
\[
\frac{1}{t}\log\int_{-\infty}^{\infty} u_t(x)\varphi(x)\,dx = -\frac{c^2}{24\kappa} + o_{\mathrm{P}}(1) \qquad \text{as } t\to\infty.
\tag{B.7}
\]
Equation (B.7), and more generally Proposition B.1, shows that the typical behavior of the sample function of the solution to (1.2) is subexponential in time, as one might expect from the unforced linear heat equation. And yet it is frequently the case that $u_t(x)$ grows exponentially rapidly in time in $L^k(\mathrm{P})$ for $k\ge2$ [2, 7, 15]. This phenomenon is further evidence of physical intermittency in the sort of systems that are modeled by (1.2).
Standard predictions suggest that the typical behavior of $u_t(x)$ (in this and related models) is that it decays exponentially rapidly with time. [Equation (B.7) is proof of this fact in one special case.] In other words, one might expect that typically $q>0$. We are not able to resolve this matter here, and therefore ask the following questions:
Open problem 1. Is q > 0 in Proposition B.1? Equivalently, is the limit in (B.4) strictly negative?
Open problem 2. Can the o_P(1) in (B.6) be replaced by a term that converges almost surely to zero as t → ∞?
REFERENCES
[1] BALÁZS, M., QUASTEL, J. and SEPPÄLÄINEN, T. (2011). Fluctuation exponent of the KPZ/stochastic Burgers equation. J. Amer. Math. Soc. 24 683–708. MR2784327
[2] BERTINI, L. and CANCRINI, N. (1995). The stochastic heat equation: Feynman–Kac formula and intermittence. J. Stat. Phys. 78 1377–1401. MR1316109
[3] BURKHOLDER, D. L. (1966). Martingale transforms. Ann. Math. Statist. 37 1494–1504. MR0208647
[4] BURKHOLDER, D. L., DAVIS, B. J. and GUNDY, R. F. (1972). Integral inequalities for convex functions of operators on martingales. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. II: Probability Theory 223–240. Univ. California Press, Berkeley, CA. MR0400380
[5] BURKHOLDER, D. L. and GUNDY, R. F. (1970). Extrapolation and interpolation of quasi-linear operators on martingales. Acta Math. 124 249–304. MR0440695
[6] CARLEN, E. and KRÉE, P. (1991). Lp estimates on iterated stochastic integrals. Ann. Probab. 19 354–368. MR1085341
[7] CARMONA, R. A. and MOLCHANOV, S. A. (1994). Parabolic Anderson problem and intermittency. Mem. Amer. Math. Soc. 108 viii+125. MR1185878
[8] CHERTKOV, M., FALKOVICH, G., KOLOKOLOV, I. and LEBEDEV, V. (1995). Statistics of a passive scalar advected by a large-scale two-dimensional velocity field: Analytic solution. Phys. Rev. E (3) 51 5609–5627. MR1383089
[9] CONUS, D. and KHOSHNEVISAN, D. (2010). Weak nonmild solutions to some SPDEs. Preprint. Available at http://arxiv.org/abs/1004.2744.
[10] DALANG, R., KHOSHNEVISAN, D., MUELLER, C., NUALART, D. and XIAO, Y. (2009). A Minicourse on Stochastic Partial Differential Equations. Lecture Notes in Math. 1962. Springer, Berlin. MR1500166
[11] DALANG, R. C. (1999). Extending the martingale measure stochastic integral with applications to spatially homogeneous s.p.d.e.'s. Electron. J. Probab. 4 29 pp. (electronic). MR1684157
[12] DAVIS, B. (1976). On the Lp norms of stochastic integrals and other martingales. Duke Math. J. 43 697–704. MR0418219
[13] DELLACHERIE, C. and MEYER, P.-A. (1982). Probabilities and Potential. B. Theory of Martingales. North-Holland Mathematics Studies 72. North-Holland, Amsterdam. Translated from the French by J. P. Wilson. MR0745449
[14] DURRETT, R. (1988). Lecture Notes on Particle Systems and Percolation. Wadsworth & Brooks/Cole, Pacific Grove, CA.
[15] FOONDUN, M. and KHOSHNEVISAN, D. (2009). Intermittence and nonlinear parabolic stochastic partial differential equations. Electron. J. Probab. 14 548–568. MR2480553
[16] FOONDUN, M. and KHOSHNEVISAN, D. (2010). On the global maximum of the solution to a stochastic heat equation with compact-support initial data. Ann. Inst. Henri Poincaré Probab. Stat. 46 895–907. MR2744876
[17] GEL'FAND, I. M. and VILENKIN, N. Y. (1964). Generalized Functions. Vol. 4: Applications of Harmonic Analysis. Academic Press, New York. MR0173945
[18] KARDAR, M., PARISI, G. and ZHANG, Y. C. (1986). Dynamic scaling of growing interfaces. Phys. Rev. Lett. 56 889–892.
[19] LIGGETT, T. M. (1985). Interacting Particle Systems. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 276. Springer, New York. MR0776231
[20] MOLCHANOV, S. A. (1991). Ideas in the theory of random media. Acta Appl. Math. 22 139–282. MR1111743
[21] MUELLER, C. (1991). On the support of solutions to the heat equation with noise. Stochastics Stochastics Rep. 37 225–245. MR1149348
[22] RUELLE, D. (1985). The onset of turbulence: A mathematical introduction. In Turbulence, Geophysical Flows, Predictability and Climate Dynamics ("Turbolenza e Predicibilità nella Fluidodinamica Geofisica e la Dinamica del Clima," Rendiconti della Scuola Internazionale di Fisica "Enrico Fermi"). North-Holland, Amsterdam.
[23] WALSH, J. B. (1986). An introduction to stochastic partial differential equations. In École d'Été de Probabilités de Saint-Flour, XIV–1984. Lecture Notes in Math. 1180 265–439. Springer, Berlin. MR0876085
[24] ZEL'DOVICH, Y. B., MOLCHANOV, S. A., RUZMAIKIN, A. A. and SOKOLOFF, D. D. (1988). Intermittency, diffusion, and generation in a nonstationary random medium. Sov. Sci. Rev. C Math. Phys. 7 1–110.
[25] ZELDOVICH, Y. B., MOLCHANOV, S. A., RUZMAIKIN, A. A. and SOKOLOV, D. D. (1985). Intermittency of passive fields in random media. J. Exp. Theor. Phys. 89 2061–2072. [Actual journal title: Zhurnal eksperimental'noi i teoreticheskoi fiziki.] (In Russian.)
[26] ZEL'DOVICH, Y. B., RUZMAIKIN, A. A. and SOKOLOFF, D. D. (1990). The Almighty Chance. World Scientific Lecture Notes in Physics 20. World Scientific, River Edge, NJ. MR1141627
D. CONUS
DEPARTMENT OF MATHEMATICS
LEHIGH UNIVERSITY
CHRISTMAS–SAUCON HALL
14 EAST PACKER AVENUE
BETHLEHEM, PENNSYLVANIA 18015
USA
E-MAIL: [email protected]
M. JOSEPH
D. KHOSHNEVISAN
DEPARTMENT OF MATHEMATICS
UNIVERSITY OF UTAH
155 SOUTH 1400 EAST
SALT LAKE CITY, UTAH 84112-0090
USA
E-MAIL: [email protected]