
Statistics and Probability Letters 82 (2012) 2091–2102


Almost sure asymptotic for Ornstein–Uhlenbeck processes of Poisson potential

Fei Xing ∗

Department of Mathematics, University of Tennessee, 227 Ayres Hall 1403 Circle Drive, Knoxville, TN 37996-1320, United States

Article info

Article history:
Received 16 April 2012
Received in revised form 31 May 2012
Accepted 21 July 2012
Available online 4 August 2012

Keywords:
Ornstein–Uhlenbeck process
Poisson potential
Feynman–Kac formula
Principal eigenvalue

Abstract

The objective of this paper is to study the large time asymptotic of the following exponential moment: $E_x \exp\{\pm\int_0^t V(X(s))\,ds\}$, where $\{X(s)\}$ is a $d$-dimensional Ornstein–Uhlenbeck process and $\{V(x)\}_{x\in\mathbb{R}^d}$ is a homogeneous ergodic random Poisson potential. It turns out that the positive/negative exponential moment has $e^{ct}$ growth/decay rate, which is different from the Brownian motion model studied by Carmona and Molchanov (1995) for the positive exponential moment and Sznitman (1993) for the negative exponential moment.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Consider a particle $\{X(t)\}_{t\ge0}$ moving randomly in $\mathbb{R}^d$. Independently, there is a family of obstacles randomly placed in the space $\mathbb{R}^d$ according to a Poisson field $\omega(dx)$ (also known as a Poisson random measure in some literature). Throughout the paper, we use "$P_x$" and "$E_x$" to represent the probability law and expectation, respectively, related to a Markovian stochastic process $X(t)$ starting at $x$, and we use "$P$" and "$E$" for the probability law and expectation, respectively, related to the Poisson field $\omega(dx)$. Given a shape function $K(x)$ (known as the "point-mass potential" by physicists) on $\mathbb{R}^d$, the Poisson potential associated with the random mass distribution $\omega(\cdot)$ is given by
$$V(x) = \int_{\mathbb{R}^d} K(x-y)\,\omega(dy). \tag{1.1}$$

The quantity $\frac{1}{t}\int_0^t V(X(s))\,ds$ represents the average Poisson potential along a random motion trajectory up to time $t$. The following exponential moment is of great interest:
$$E_x \exp\left\{\pm\int_0^t V(X(s))\,ds\right\}. \tag{1.2}$$

Here we list two applications of this exponential moment. First, notice that (1.2) is the normalizing constant $Z_{t,\omega}$ of the following random Gibbs measure $\mu_{t,\omega}$:
$$\frac{d\mu_{t,\omega}}{dP_x} = \frac{1}{Z_{t,\omega}}\exp\left\{\pm\int_0^t V(X(s))\,ds\right\}.$$

∗ Tel.: +1 865 293 9694. E-mail address: [email protected].

0167-7152/$ – see front matter © 2012 Elsevier B.V. All rights reserved. doi:10.1016/j.spl.2012.07.012


Second, for a large class of stochastic processes, we know by the Feynman–Kac formula that (1.2) satisfies the following partial differential equation:
$$\frac{\partial u}{\partial t} = Lu + V\cdot u,$$
where $L$ is the infinitesimal operator of the Markovian semigroup. Therefore, exploring the asymptotic behavior of (1.2) as $t\to\infty$ is fundamental to understanding the solution of this PDE (see Corollary 2 for a special case).

In the literature of this field, there is a long list of papers considering the behavior of Brownian motion under stationary random potentials, which is related to the understanding of the parabolic Anderson model. To list just a few: for the negative exponent case in (1.2), Sznitman (1993, 1998) systematically studied the quenched asymptotic in (1.2) with compactly supported and bounded shape function $K$ and obtained a sub-linear exponential decay large deviation result:
$$\lim_{t\to\infty}\frac{(\log t)^{2/d}}{t}\log E_0\exp\left\{-\int_0^t V(B(s))\,ds\right\} = -\lambda_d\left(\frac{\omega_d}{d}\right)^{2/d}\quad \text{a.s.-}P, \tag{1.3}$$
where $\lambda_d>0$ is the principal eigenvalue of the Laplacian operator $(1/2)\Delta$ on the $d$-dimensional unit ball with zero boundary condition, and $\omega_d$ is the volume of the $d$-dimensional unit ball.

As to the positive exponent regime, Carmona and Molchanov (1995) studied the first order quenched asymptotic for a class of bounded shape functions $K(x)$ and obtained a super-linear exponential growth large deviation result:
$$\lim_{t\to\infty}\frac{\log\log t}{t\log t}\log E_0\exp\left\{\int_0^t V(B(s))\,ds\right\} = d\sup_{x\in\mathbb{R}^d}K(x)\quad \text{a.s.-}P. \tag{1.4}$$

Later on, Gärtner et al. (2000) obtained the second order quenched asymptotic for the positive exponential moment. For an excellent survey of the developments in this field before 2005, readers may refer to Gärtner and König (2005). Most recently, Chen (2011) has studied the case where shape functions are neither compactly supported nor bounded and obtained the first order quenched asymptotic for both positive and negative exponentials.

In this paper, we let $\{X(t)\}_{t\ge0}$ be the Ornstein–Uhlenbeck (O–U) process rather than Brownian motion, and explore the long time asymptotic of (1.2) under this setting. The motivation for studying the O–U process in a Poisson potential is twofold. On one hand, in practice a large class of stochastic dynamics has stationary, Markovian properties: for instance, noisy relaxation processes in the physical sciences, interest rate derivatives in finance, and peptide bond angles of water molecules in biochemistry. Therefore, using the O–U process to model the random motion and studying its long time behavior under random media is of practical interest. On the other hand, from a mathematical point of view, since the O–U process behaves quite differently from Brownian motion due to stationarity, we need a new strategy to obtain the long time asymptotic of the exponential moment in the O–U case. For the Brownian motion case, a successful strategy is to force the Brownian particle to stay in one of roughly $t^d$ prearranged disjoint balls in $[-R_t,R_t]^d$ for most of the time up to time $t$, and to prove that Brownian motion can contribute a large enough exponential moment by following this strategy (see, e.g., Chen, 2011 for more details). However, a similar strategy does not work for the O–U process, since it has a very strong pull-back force toward the equilibrium position in the long run; hence it is a very rare event to force an O–U process to stay in a relatively distant region. Therefore, we need to explore a new method to deal with this situation. In this paper, we first obtain a large deviation result for the exponential moment whose rate is given in a functional form. Next, we estimate this functional form directly by choosing a function weighted heavily near the equilibrium position, and eventually get the following result, which is quite different from (1.3) and (1.4):

$$\lim_{t\to\infty}\frac{1}{t}\log E_x\exp\left\{\pm\int_0^t V(X(s))\,ds\right\} = \pm c,$$
where $c$ is a (random, w.r.t. $P$) positive value for each case.

2. The model and results

Let $X(t) = (X_1(t),\ldots,X_d(t))$ be an O–U process starting at $x=(x_1,\ldots,x_d)$, which satisfies the following SDE: for $i=1,\ldots,d$,
$$dX_i(t) = -X_i(t)\,dt + dB_i(t),\qquad X_i(0)=x_i. \tag{2.1}$$

It is well known that $X(t)$ is asymptotically stationary, Markovian and Gaussian (see Revuz and Yor, 1994). Assume that each random obstacle has mass 1 and that the Poisson field $\omega(dx)$ has Lebesgue intensity measure $dx$. Let the shape function $K$ be a positive, continuous, compactly supported function attaining its global maximum at 0. The random potential $V(\cdot)$ is modeled the same as (1.1) in the introduction.
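The model is easy to simulate numerically: the O–U coordinates admit an exact Gaussian one-step transition, and the potential is a sum of shape-function contributions over the Poisson points. The sketch below is an editorial illustration only (not part of the paper); the particular shape function `K`, the truncation of the Poisson field to a box, and all numerical parameters are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, dt, T = 2, 0.01, 5.0
n_steps = int(T / dt)

# Exact one-step transition of dX = -X dt + dB:
# X(t+dt) = e^{-dt} X(t) + N(0, (1 - e^{-2 dt})/2 * I_d)
decay = np.exp(-dt)
step_sd = np.sqrt((1.0 - np.exp(-2.0 * dt)) / 2.0)

# Poisson field with Lebesgue intensity, truncated (for illustration) to a
# box large enough to cover the trajectory with overwhelming probability
box = 6.0
obstacles = rng.uniform(-box, box, size=(rng.poisson((2 * box) ** d), d))

def V(x):
    """Poisson potential (1.1) with unit masses and a hypothetical
    compactly supported shape K(z) = (1 - |z|^2)^2 for |z| < 1."""
    r2 = np.sum((obstacles - x) ** 2, axis=1)
    return np.where(r2 < 1.0, (1.0 - r2) ** 2, 0.0).sum()

x = np.ones(d)          # starting point of the O-U process
integral = 0.0
for _ in range(n_steps):
    integral += V(x) * dt          # Riemann sum for int_0^T V(X(s)) ds
    x = decay * x + step_sd * rng.standard_normal(d)
```

Averaging `np.exp(integral)` (or `np.exp(-integral)`) over many independent trajectories gives a Monte Carlo estimate of the exponential moment (1.2) for one fixed realization of the obstacles.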

Define $u_+(t,x)$ and $u_-(t,x)\colon [0,\infty)\times\mathbb{R}^d\to\mathbb{R}$ by
$$u_\pm(t,x) \stackrel{\mathrm{def}}{=} E_x\exp\left\{\pm\int_0^t V(X(s))\,ds\right\}.$$


From the Feynman–Kac formula, we know that $u_+(t,x)$ and $u_-(t,x)$ solve the following PDEs:
$$\frac{\partial u_\pm}{\partial t} = \frac{1}{2}\Delta u_\pm - x\cdot\nabla u_\pm \pm V(x)\,u_\pm,\qquad (t,x)\in[0,\infty)\times\mathbb{R}^d, \tag{2.2}$$
$$u_\pm(0,x) = 1,\qquad x\in\mathbb{R}^d.$$

Now we state the main theorem of this paper.

Theorem 1. $P$-almost surely,
$$\lim_{t\to\infty}\frac{1}{t}\log E_x\exp\left\{\int_0^t V(X(s))\,ds\right\} = \lambda_1, \tag{2.3}$$
and
$$\lim_{t\to\infty}\frac{1}{t}\log E_x\exp\left\{-\int_0^t V(X(s))\,ds\right\} = -\lambda_2, \tag{2.4}$$
where $\lambda_1,\lambda_2\in(0,\infty)$ are non-degenerate random variables.

Corollary 2. The solutions $u_\pm(t,x)$ of (2.2) satisfy
$$\lim_{t\to\infty}\frac{1}{t}\log u_+(t,x) = \lambda_1\qquad\text{and}\qquad \lim_{t\to\infty}\frac{1}{t}\log u_-(t,x) = -\lambda_2.$$

In the sequel, we use $Q_R \stackrel{\mathrm{def}}{=} [-R,R]^d$ for the $R$-box in $\mathbb{R}^d$ and $B(x,r)$ for the $r$-ball centered at $x$. Furthermore, $c,c_1,c_2$, etc. denote positive constants whose values can change from line to line.

Organization of the paper. We prove a large deviation result for the exponential moments in Section 3, where the rate is represented in a variational form. Further study of this variational form is given in Section 4, from which we obtain the result of Theorem 1.

3. A large deviation result of the exponential moment

From (2.1), we know that
$$X_i(t) \stackrel{d}{=} x_ie^{-t} + \frac{1}{\sqrt{2}}\,e^{-t}\,W\!\left(e^{2t}-1\right)\qquad \text{for } 1\le i\le d, \tag{3.1}$$
where $\{W(t)\}_{t\ge0}$ is a one-dimensional standard Brownian motion. The invariant measure for $\{X(t)\}_{t\ge0}$ is $\mu(dx) = \varphi(x)\,dx$, where $\varphi(x) = \pi^{-d/2}e^{-|x|^2}$ is the probability density of $N(0,I_d/2)$.
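Representation (3.1) says each coordinate of $X(t)$ is Gaussian with mean $x_ie^{-t}$ and variance $(1-e^{-2t})/2$, which converges to the $N(0,1/2)$ marginal of the invariant law. A quick Monte Carlo sanity check of this (an editorial illustration, not part of the paper; all parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
t, x0, n = 1.5, 2.0, 200_000

# Sample X(t) via (3.1): X(t) = x e^{-t} + 2^{-1/2} e^{-t} W(e^{2t} - 1),
# where W(e^{2t} - 1) ~ N(0, e^{2t} - 1)
w = np.sqrt(np.exp(2 * t) - 1) * rng.standard_normal(n)
samples = x0 * np.exp(-t) + np.exp(-t) / np.sqrt(2) * w

mean_exact = x0 * np.exp(-t)            # -> 0 as t grows
var_exact = (1 - np.exp(-2 * t)) / 2    # -> 1/2, the invariant variance
```

The empirical mean and variance of `samples` agree with `mean_exact` and `var_exact` up to Monte Carlo error.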

In the following, we need some analytic results for functionals of $\{X(t)\}_{t\ge0}$. First we list the notations for some function spaces used extensively in this section:

• $L^2(\mathbb{R}^d,\mu)$: the $L^2$ space on $\mathbb{R}^d$ with reference measure $\mu$;
• $L^2(B(0,R),\mu)$: the $L^2$ space on $B(0,R)$ with reference measure $\mu$;
• $\mathrm{Poly}(\mathbb{R}^d)$: the space of all polynomials on $\mathbb{R}^d$;
• $C_0^\infty(\mathbb{R}^d)$: smooth functions on $\mathbb{R}^d$ with compact support;
• $W^{1,2}(\mathbb{R}^d,\mu) = \left\{g\in L^2(\mathbb{R}^d,\mu) : |\nabla g|\in L^2(\mathbb{R}^d,\mu)\right\}$, where $\nabla g$ is defined in the weak derivative sense;
• $\mathcal{F}_\infty = \left\{g\in C_0^\infty(\mathbb{R}^d) : \|g\|_\mu = 1\right\}$;
• $\mathcal{F}_R = \left\{g\in C_0^\infty(\mathbb{R}^d) : \mathrm{supp}(g)\subset B(0,R),\ \|g\|_\mu = 1\right\}$;
• $\mathcal{P} = \left\{g\in \mathrm{Poly}(\mathbb{R}^d) : \|g\|_\mu = 1\right\}$.

Remark 1.

• $W^{1,2}(\mathbb{R}^d,\mu)$ is a Hilbert space under the Sobolev norm $\left(\|g\|_\mu^2 + \|\nabla g\|_\mu^2\right)^{1/2}$ (Evans, 1998).
• Both $C_0^\infty(\mathbb{R}^d)$ and $\mathrm{Poly}(\mathbb{R}^d)$ are dense in $W^{1,2}(\mathbb{R}^d,\mu)$ under the Sobolev norm. Hence, functions in $\mathrm{Poly}(\mathbb{R}^d)$ and $C_0^\infty(\mathbb{R}^d)$ can approximate each other in the Sobolev norm sense.


Let $f(x)$ be a bounded continuous function on $\mathbb{R}^d$. Define, for each $g\in L^2(\mathbb{R}^d,\mu)$,
$$T_tg(x) = E_x\left[\exp\left\{\int_0^t f(X(s))\,ds\right\}g(X(t))\right]. \tag{3.2}$$

Similarly, for $g\in L^2(B(0,R),\mu)$, define
$$T_t^Rg(x) = E_x\left[\exp\left\{\int_0^t f(X(s))\,ds\right\}g(X(t))\,\mathbf{1}_{\{\tau_R>t\}}\right], \tag{3.3}$$
where $\tau_R \stackrel{\mathrm{def}}{=} \inf\{t\ge0 : X(t)\notin B(0,R)\}$.

$\{T_t\}_{t\ge0}$ and $\{T_t^R\}_{t\ge0}$ are semigroups in which each operator is bounded and self-adjoint. Let $L$ and $L_R$ be the infinitesimal operators of $\{T_t\}_{t\ge0}$ and $\{T_t^R\}_{t\ge0}$, respectively. The following Feynman–Kac formula for $\{T_t\}_{t\ge0}$ on $C_0^\infty(\mathbb{R}^d)$ holds (see, e.g., Chapters VII and VIII, Revuz and Yor, 1994):

Proposition 3. For all $g(x)\in C_0^\infty(\mathbb{R}^d)$,
$$\lim_{t\to0^+}\frac{T_tg(x)-g(x)}{t} = -x\cdot\nabla g(x) + \frac{1}{2}\Delta g(x) + f(x)g(x). \tag{3.4}$$

By Proposition 3, we have the following quadratic form for $L$ on $C_0^\infty(\mathbb{R}^d)$, which shows that $L$ is a symmetric operator on $C_0^\infty(\mathbb{R}^d)$ with respect to $\mu$, i.e. $\langle Lg,h\rangle_\mu = \langle g,Lh\rangle_\mu$.

Proposition 4. For $g,h\in C_0^\infty(\mathbb{R}^d)$,
$$\langle Lg,h\rangle_\mu = \int_{\mathbb{R}^d}f(x)g(x)h(x)\varphi(x)\,dx - \frac{1}{2}\int_{\mathbb{R}^d}(\nabla g\cdot\nabla h)\,\varphi(x)\,dx.$$

Proof. Recall $\varphi(x) = \pi^{-d/2}\exp\{-|x|^2\}$. By the divergence theorem,
$$\int_{\mathbb{R}^d}\frac{1}{2}\left(\Delta g(x)\right)h(x)e^{-|x|^2}\,dx = -\int_{\mathbb{R}^d}\frac{1}{2}\nabla g\cdot\nabla\left(he^{-|x|^2}\right)dx = -\int_{\mathbb{R}^d}\frac{1}{2}(\nabla g\cdot\nabla h)\,e^{-|x|^2}\,dx + \int_{\mathbb{R}^d}(x\cdot\nabla g)\,h(x)e^{-|x|^2}\,dx.$$
Hence,
$$\langle Lg,h\rangle_\mu = \int_{\mathbb{R}^d}f(x)g(x)h(x)\varphi(x)\,dx - \frac{1}{2}\int_{\mathbb{R}^d}(\nabla g\cdot\nabla h)\,\varphi(x)\,dx. \qquad\square \tag{3.5}$$
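The integration-by-parts identity behind Proposition 4 can be spot-checked symbolically in one dimension. This is an editorial illustration, not part of the paper: the test functions `g`, `h` and the multiplier `f` below are arbitrary choices (in the paper $f$ is bounded, which `x**2` is not, but the identity itself is purely algebraic).

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.exp(-x**2) / sp.sqrt(sp.pi)   # invariant density in d = 1

f, g, h = x**2, x**3, x**3 + x         # illustrative choices

# Lg = (1/2) g'' - x g' + f g, the generator applied to g
Lg = sp.Rational(1, 2) * sp.diff(g, x, 2) - x * sp.diff(g, x) + f * g

lhs = sp.integrate(Lg * h * phi, (x, -sp.oo, sp.oo))
rhs = (sp.integrate(f * g * h * phi, (x, -sp.oo, sp.oo))
       - sp.Rational(1, 2)
       * sp.integrate(sp.diff(g, x) * sp.diff(h, x) * phi, (x, -sp.oo, sp.oo)))

assert sp.simplify(lhs - rhs) == 0     # the two sides of (3.5) agree
```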

Since $C_0^\infty(\mathbb{R}^d)$ and $\mathrm{Poly}(\mathbb{R}^d)$ are dense in $W^{1,2}(\mathbb{R}^d,\mu)$ and the quadratic form on the right hand side of (3.5) is continuous (both in $g$ and $h$) under the Sobolev norm, we have

Corollary 5.
$$\langle Lg,h\rangle_\mu = \langle g,Lh\rangle_\mu,\qquad g,h\in\mathrm{Poly}(\mathbb{R}^d).$$

For $g\in\mathrm{Poly}(\mathbb{R}^d)$, $\langle Lg,g\rangle_\mu \le \sup_{x\in\mathbb{R}^d}|f(x)|\cdot\|g\|_\mu^2$, which means that $L$ is upper semi-bounded. According to Friedrichs' extension theorem (see, e.g., Theorem 2, Section 7, Chapter XI, Yosida, 1966), $L$ admits a self-adjoint extension. We still use the same notation for the Friedrichs extension of $L$ and still call it the infinitesimal generator of the semigroup $T_t$. Denote by $D(L)$ the domain of the self-adjoint operator $L$; that is, for any $g\in D(L)$, $Lg\in L^2(\mathbb{R}^d,\mu)$. By Proposition 4 and Corollary 5, it is clear that $C_0^\infty(\mathbb{R}^d)\subset D(L)\subset L^2(\mathbb{R}^d,\mu)$ and $\mathrm{Poly}(\mathbb{R}^d)\subset D(L)\subset L^2(\mathbb{R}^d,\mu)$.

For $n=(n_1,n_2,\ldots,n_d)\in\mathbb{N}^d$ and $x=(x_1,\ldots,x_d)\in\mathbb{R}^d$, let $H_n(x) \stackrel{\mathrm{def}}{=} \prod_{i=1}^d H_{n_i}(x_i)$, where $\{H_n\}_{n\in\mathbb{N}}$ is the family of one dimensional Hermite polynomials, that is, $H_n(x) = (-1)^n e^{x^2}\frac{d^n}{dx^n}e^{-x^2}$. We know that $H_n$ is an eigenfunction of $L_0$ (the generator with $f\equiv0$) with eigenvalue $-|n|$, where $|n| \stackrel{\mathrm{def}}{=} \sum_{i=1}^d n_i$. Furthermore, let $e_n = H_n/\|H_n\|_\mu$, $n\in\mathbb{N}^d$. Then $\{e_n\}_{n\in\mathbb{N}^d}$ is an orthonormal basis of $L^2(\mathbb{R}^d,\mu)$. See Section 2.3.4, Dunkl and Xu (2001) for details.

Using a standard approximation approach, we have the following isometry result.


Proposition 6. Given $g\in L^2(\mathbb{R}^d,\mu)$, then $g\in W^{1,2}(\mathbb{R}^d,\mu)$ if and only if $\sum_{n\in\mathbb{N}^d}(2|n|+1)\langle g,e_n\rangle_\mu^2 < \infty$. Furthermore, for any $g,h\in W^{1,2}(\mathbb{R}^d,\mu)$, we have
$$\int_{\mathbb{R}^d}g(x)h(x)\varphi(x)\,dx + \int_{\mathbb{R}^d}(\nabla g\cdot\nabla h)\,\varphi\,dx = \sum_{n\in\mathbb{N}^d}(2|n|+1)\langle g,e_n\rangle_\mu\langle h,e_n\rangle_\mu.$$
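The spectral input behind Proposition 6 is the eigenvalue relation $L_0H_n = -nH_n$ for the one-dimensional Hermite polynomials (so that the product eigenfunctions have eigenvalue $-|n|$). A symbolic spot-check, illustrative only and not part of the paper:

```python
import sympy as sp

x = sp.symbols('x')

def L0(g):
    """One-dimensional O-U generator with f = 0: L0 g = (1/2) g'' - x g'."""
    return sp.Rational(1, 2) * sp.diff(g, x, 2) - x * sp.diff(g, x)

# Physicists' Hermite polynomials H_n(x) = (-1)^n e^{x^2} (d/dx)^n e^{-x^2}
for n in range(8):
    Hn = sp.hermite(n, x)
    assert sp.expand(L0(Hn) + n * Hn) == 0   # L0 H_n = -n H_n
```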

With the above isometry identity, the quadratic form $\langle g,Lg\rangle_\mu$ has the following representation.

Lemma 7. We have $D(L)\subset W^{1,2}(\mathbb{R}^d,\mu)$. Furthermore,
$$\langle g,Lg\rangle_\mu = \int_{\mathbb{R}^d}f(x)g^2(x)\varphi(x)\,dx - \frac{1}{2}\int_{\mathbb{R}^d}|\nabla g|^2\varphi(x)\,dx\qquad \text{for } g\in D(L).$$

Proof. Let $g\in D(L)$. For any $n\in\mathbb{N}$, write $g_n(x) = \sum_{|k|\le n}\langle g,e_k\rangle_\mu e_k(x)\in\mathrm{Poly}(\mathbb{R}^d)$. Then
$$\langle Lg_n,g\rangle_\mu = \int_{\mathbb{R}^d}f(x)g_n(x)g(x)\varphi(x)\,dx - \sum_{|k|\le n}|k|\langle g,e_k\rangle_\mu^2. \tag{3.6}$$
Since $\{e_k\}_{k\in\mathbb{N}^d}$ is an orthonormal basis of $L^2(\mathbb{R}^d,\mu)$, we know $g_n\to g$ in $L^2(\mathbb{R}^d,\mu)$ as $n\to\infty$. Consequently,
$$\lim_{n\to\infty}\langle Lg_n,g\rangle_\mu = \lim_{n\to\infty}\langle g_n,Lg\rangle_\mu = \langle g,Lg\rangle_\mu. \tag{3.7}$$
And due to the boundedness of $f(x)$,
$$\lim_{n\to\infty}\int_{\mathbb{R}^d}f(x)g_n(x)g(x)\varphi(x)\,dx = \int_{\mathbb{R}^d}f(x)g^2(x)\varphi(x)\,dx < \infty. \tag{3.8}$$
Let $n$ tend to infinity in (3.6). By (3.7) and (3.8), we have $\sum_{k\in\mathbb{N}^d}|k|\langle g,e_k\rangle_\mu^2 < \infty$. This implies that $g\in W^{1,2}(\mathbb{R}^d,\mu)$ by Proposition 6. Furthermore, from (3.6),
$$\langle g,Lg\rangle_\mu = \int_{\mathbb{R}^d}f(x)g^2(x)\varphi(x)\,dx - \sum_{k\in\mathbb{N}^d}|k|\langle g,e_k\rangle_\mu^2 = \int_{\mathbb{R}^d}f(x)g^2(x)\varphi(x)\,dx - \frac{1}{2}\int_{\mathbb{R}^d}|\nabla g(x)|^2\varphi(x)\,dx. \qquad\square$$

In general, $L^2(B(0,R),\mu)$ can be embedded in $L^2(\mathbb{R}^d,\mu)$ by the mapping $U\colon L^2(B(0,R),\mu)\to L^2(\mathbb{R}^d,\mu)$, where
$$(Ug)(x) = \begin{cases} g(x) & \text{if } x\in B(0,R),\\ 0 & \text{if } x\notin B(0,R).\end{cases}$$
Thus $L^2(B(0,R),\mu)$ can be regarded as a closed subspace of $L^2(\mathbb{R}^d,\mu)$.

An operator $Q$ in $L^2(\mathbb{R}^d,\mu)$ is called a local operator if for any $h\in D(Q)$ and any open set $G$ with Lebesgue measure 0 on the boundary, one has $hI_G\in D(Q)$ and $I_GQh = Q(I_Gh)$ as elements of $L^2(\mathbb{R}^d,\mu)$. We know from Lemma 7 that $L$ is a local operator. Therefore, $D(L_R) = D(L)\cap L^2(B(0,R),\mu)$. Furthermore, $Lg(x) = L_Rg(x)$ for all $g\in D(L_R)$. (See Theorems 4.2 and 4.3, Getoor, 1959.)

By convention, we treat $W^{1,2}(B(0,R),\mu)$ as a subspace of $W^{1,2}(\mathbb{R}^d,\mu)$. Then, by the connection between $L$ and $L_R$ and an approach similar to that carried out for $L$, we know that $D(L_R)$ is a dense subset of $W^{1,2}(B(0,R),\mu)$, and for any $g\in D(L_R)$,
$$\langle g,L_Rg\rangle_\mu = \int_{B(0,R)}f(x)g^2(x)\varphi(x)\,dx - \frac{1}{2}\int_{B(0,R)}|\nabla g(x)|^2\varphi(x)\,dx. \tag{3.9}$$

Furthermore, by the continuity (in $g$) of the quadratic form on the right-hand side of (3.9) under the Sobolev norm, one has
$$\sup_{\substack{g\in D(L_R)\\ \|g\|_\mu=1}}\langle g,L_Rg\rangle_\mu = \sup_{g\in\mathcal{F}_R}\int_{B(0,R)}\left(f(x)g^2(x) - \frac{1}{2}|\nabla g|^2\right)\varphi(x)\,dx. \tag{3.10}$$

Due to the fact that $\|T_t^R\|\le e^{Ct}$ for some $C>0$, we know $T_t^R = e^{tL_R}$ on $L^2(B(0,R),\mu)$ for all $t\ge0$ (see Chen, 2009, pp. 96–99). Therefore, $T_t^R$ has the following spectral representation:
$$T_t^R = \int_{-\infty}^{\infty}e^{t\lambda}\,E_R(d\lambda), \tag{3.11}$$


where $\{E_R(\lambda);\ -\infty<\lambda<\infty\}$ is the corresponding resolution of the identity for the self-adjoint operator $L_R$. In addition, for any $g\in L^2(B(0,R),\mu)$,
$$\langle g,T_t^Rg\rangle_\mu = \int_{-\infty}^{\infty}e^{t\lambda}\,m_g^R(d\lambda), \tag{3.12}$$
where $m_g^R$ is known as the spectral measure on $\mathbb{R}$ induced by the distribution function $F_R(\lambda)\equiv\langle g,E_R(\lambda)g\rangle_\mu$, with
$$m_g^R(\mathbb{R}) = \|g\|_\mu^2. \tag{3.13}$$
Furthermore, the support of $m_g^R$ is bounded above by
$$\lambda_R \equiv \sup_{g\in D(L_R),\ \|g\|_\mu=1}\langle g,L_Rg\rangle_\mu. \tag{3.14}$$

Now we consider the case of the random potential $V(\cdot)$. We know that $V(\cdot)$ is continuous $P$-a.s., and we have the following (see Carmona and Molchanov, 1995 for the proof).

Proposition 8. With probability one,
$$\lim_{R\to\infty}\frac{\log\log R}{\log R}\max_{x\in Q_R}V(x) = dK(0).$$

With the spectral representation of $T_t^R$ and Proposition 8, we can prove the following large deviation result for $E_x\exp\{\pm\int_0^t V(X(s))\,ds\}$, whose rates are given in variational forms.

Lemma 9. The following large deviation result holds $P$-a.s.:
$$\lim_{t\to\infty}\frac{1}{t}\log E_x\exp\left\{\pm\int_0^t V(X(s))\,ds\right\} = -\inf_{f\in\mathcal{P}}\int_{\mathbb{R}^d}\left(\frac{1}{2}|\nabla f|^2 \mp V(x)f^2(x)\right)\varphi(x)\,dx. \tag{3.15}$$

Proof. We only give the proof for the positive exponential moment here; the proof for the negative case is similar. Let $g\in C_0^\infty(\mathbb{R}^d)$ be compactly supported (say $\mathrm{supp}(g)\subset B(0,R)$) with $\|g\|_\mu=1$.

By the definition of $V$ and Proposition 8, we know that, $P$-almost surely, $V(x)$ is bounded and continuous on $B(0,R)$. Therefore, we can apply the above spectral representation of $T^R$ (with deterministic potential) to each trajectory of $V(\cdot,\omega)$ outside a null set.

Since $V(\cdot)$ is non-negative, we have
$$E_xe^{\int_0^t V(X(s))\,ds} \ge E_x\left[e^{\int_1^t V(X(s))\,ds}\,\mathbf{1}_{\left\{\sup_{1\le s\le t}|X(s)|\le R\right\}}\right] \tag{3.16}$$

and
$$E_x\left[e^{\int_1^t V(X(s))\,ds}\,\mathbf{1}_{\left\{\sup_{1\le s\le t}|X(s)|\le R\right\}}\right] \ge \|g\|_\infty^{-2}\,E_x\left[g(X(1))\,e^{\int_1^t V(X(s))\,ds}\,g(X(t))\,\mathbf{1}_{\left\{\sup_{1\le s\le t}|X(s)|\le R\right\}}\right]$$
$$= \|g\|_\infty^{-2}\,E_x\left[g(X(1))\,E_{X(1)}\left[e^{\int_0^{t-1} V(X(s))\,ds}\,g(X(t-1))\,\mathbf{1}_{\{\tau_R>t-1\}}\right]\right] = \|g\|_\infty^{-2}\,E_x\left[g(X(1))\,T_{t-1}^Rg(X(1))\right] = \|g\|_\infty^{-2}\int_{\mathbb{R}^d}p_1(x,y)\,g(y)\,T_{t-1}^Rg(y)\,dy, \tag{3.17}$$

where $p_1(x,y)$ is the density of $X(1)$.

From (3.1), we know $X(1)\sim N\!\left(xe^{-1},\tfrac{1}{2}(1-e^{-2})I_d\right)$. Using the reversibility of the O–U process,
$$p_1(x,y)\varphi(y)^{-1} = p_1(y,x)\varphi(x)^{-1} = c\exp\left\{|x|^2 - \frac{|x-ye^{-1}|^2}{1-e^{-2}}\right\}. \tag{3.18}$$

Hence, $p_1(x,y)\varphi(y)^{-1}$, as a function of $y$, is bounded below by a positive number $\delta$ on $B(0,R)$. Together with (3.16) and (3.17) we establish that
$$E_x\exp\left\{\int_0^t V(X(s))\,ds\right\} \ge \delta\int_{\mathbb{R}^d}g(y)\,T_{t-1}^Rg(y)\,\varphi(y)\,dy = \delta\,\langle g,T_{t-1}^Rg\rangle_\mu. \tag{3.19}$$


By (3.12) and Jensen's inequality (note that $m_g^R$ is a probability measure since $\|g\|_\mu=1$), we get
$$\langle g,T_{t-1}^Rg\rangle_\mu = \int_{-\infty}^{\infty}e^{(t-1)\lambda}\,m_g^R(d\lambda) \ge e^{(t-1)\int_{-\infty}^{\infty}\lambda\,m_g^R(d\lambda)} = e^{(t-1)\langle g,L_Rg\rangle_\mu} = \exp\left\{-\frac{t-1}{2}\int_{\mathbb{R}^d}\left(|\nabla g|^2 - 2V(x)g^2(x)\right)\varphi(x)\,dx\right\}. \tag{3.20}$$

From (3.19) and (3.20), we get
$$\liminf_{t\to\infty}\frac{1}{t}\log E_x\exp\left\{\int_0^t V(X(s))\,ds\right\} \ge -\frac{1}{2}\int_{\mathbb{R}^d}\left(|\nabla g|^2 - 2V(x)g^2(x)\right)\varphi(x)\,dx.$$
Taking the supremum over all $g\in C_0^\infty(\mathbb{R}^d)$, we obtain the lower bound.

To prove the upper bound, we need the following lemma, whose proof is given in the Appendix.

Lemma 10. Put $\gamma(t) = \alpha t^{1/2}\log t$ for some constant $\alpha>0$. Then $P$-a.s.,
$$\lim_{t\to\infty}\frac{E_x\left[\exp\left\{\int_0^t V(X(s))\,ds\right\}\mathbf{1}_{\left\{\tau_{Q_{\gamma(t)}}>t\right\}}\right]}{E_x\exp\left\{\int_0^t V(X(s))\,ds\right\}} = 1,$$
where $\tau_{Q_R} = \inf\{s>0 : X(s)\notin Q_R\}$.

By Lemma 10 and Proposition 8, $P$-a.s. there exist $C_1(\omega),C_2(\omega)>0$ such that for all large $t$ we have
$$E_xe^{\int_0^t V(X(s))\,ds} \le C_1(\omega)\,E_x\left[e^{\int_0^t V(X(s))\,ds}\,\mathbf{1}_{\left\{\tau_{Q_{\gamma(t)}}>t\right\}}\right] \tag{3.21}$$
and
$$\sup_{x\in Q_{\gamma(t)}}V(x) \le C_2(\omega)\log\gamma(t). \tag{3.22}$$

Therefore, for all $t$ large enough, we have
$$E_x\exp\left\{\int_0^t V(X(s))\,ds\right\} \le C_1E_x\left[\exp\left\{\int_0^t V(X(s))\,ds\right\}\mathbf{1}_{\left\{\tau_{Q_{\gamma(t)}}>t\right\}}\right] \le C_3t^{C_4}E_x\left[\exp\left\{\int_1^t V(X(s))\,ds\right\}\mathbf{1}_{\left\{\sup_{1\le s\le t}|X(s)|\le\gamma(t)\right\}}\right]$$
$$= C_3t^{C_4}E_x\left[\mathbf{1}_{\{|X(1)|\le\gamma(t)\}}\exp\left\{\int_1^t V(X(s))\,ds\right\}\mathbf{1}_{\left\{\sup_{1\le s\le t}|X(s)|\le\gamma(t)\right\}}\mathbf{1}_{\{|X(t)|\le\gamma(t)\}}\right]. \tag{3.23}$$

Let $|g_t|\le1$ be a smooth function such that $g_t(y)\equiv1$ on $B(0,\gamma(t))$ and $g_t(y)\equiv0$ outside $B(0,2\gamma(t))$. Denote $h_t(y) = c_t^{-1}g_t(y)$, where $c_t = \left(\int_{B(0,2\gamma(t))}g_t^2(y)\varphi(y)\,dy\right)^{1/2}$. Then $\langle h_t,h_t\rangle_\mu = 1$ and $c_t\le1$. Therefore,

$$E_x\left[\mathbf{1}_{\{|X(1)|\le\gamma(t)\}}\,e^{\int_1^t V(X(s))\,ds}\,\mathbf{1}_{\left\{\sup_{1\le s\le t}|X(s)|\le\gamma(t)\right\}}\mathbf{1}_{\{|X(t)|\le\gamma(t)\}}\right] \le E_x\left[g_t(X(1))\,e^{\int_1^t V(X(s))\,ds}\,\mathbf{1}_{\left\{\sup_{1\le s\le t}|X(s)|\le2\gamma(t)\right\}}\,g_t(X(t))\right]$$
$$\le E_x\left[h_t(X(1))\,e^{\int_1^t V(X(s))\,ds}\,\mathbf{1}_{\left\{\sup_{1\le s\le t}|X(s)|\le2\gamma(t)\right\}}\,h_t(X(t))\right] = \int_{\mathbb{R}^d}p_1(x,y)\varphi(y)^{-1}\,h_t(y)\,T_{t-1}^{2\gamma(t)}h_t(y)\,\varphi(y)\,dy \le c\,e^{|x|^2}\,\langle h_t,T_{t-1}^{2\gamma(t)}h_t\rangle_\mu, \tag{3.24}$$
where the last inequality holds since $p_1(x,y)\varphi(y)^{-1}\le c\,e^{|x|^2}$ by (3.18).


Applying (3.12), we get
$$\langle h_t,T_{t-1}^{2\gamma(t)}h_t\rangle_\mu = \int_{-\infty}^{\infty}e^{(t-1)\lambda}\,m_{h_t}^{2\gamma(t)}(d\lambda).$$

By (3.13) and the fact that $\|h_t\|_\mu=1$, $m_{h_t}^{2\gamma(t)}$ is a probability measure. In view of (3.10) and (3.14), the smallest supporting set of $m_{h_t}^{2\gamma(t)}$ is bounded above by
$$\sup_{\substack{h\in D(L_{2\gamma(t)})\\ \|h\|_\mu=1}}\langle h,L_{2\gamma(t)}h\rangle_\mu = \sup_{g\in\mathcal{F}_{2\gamma(t)}}\left\{-\frac{1}{2}\int_{\mathbb{R}^d}\left(|\nabla g(x)|^2 - 2V(x)g(x)^2\right)\varphi(x)\,dx\right\}.$$

Hence,
$$\langle h_t,T_{t-1}^{2\gamma(t)}h_t\rangle_\mu \le \exp\left\{(t-1)\sup_{g\in\mathcal{F}_{2\gamma(t)}}\left\{-\frac{1}{2}\int_{\mathbb{R}^d}\left(|\nabla g|^2 - 2V(x)g(x)^2\right)\varphi(x)\,dx\right\}\right\} \le \exp\left\{(t-1)\sup_{g\in\mathcal{F}_\infty}\left\{-\frac{1}{2}\int_{\mathbb{R}^d}\left(|\nabla g|^2 - 2V(x)g(x)^2\right)\varphi(x)\,dx\right\}\right\}. \tag{3.25}$$

Combining (3.23)–(3.25), we have
$$\limsup_{t\to\infty}\frac{1}{t}\log E_x\exp\left\{\int_0^t V(X(s))\,ds\right\} \le \sup_{g\in\mathcal{F}_\infty}\left\{-\frac{1}{2}\int_{\mathbb{R}^d}\left(|\nabla g(x)|^2 - 2V(x)g(x)^2\right)\varphi(x)\,dx\right\}.$$

Therefore,
$$\lim_{t\to\infty}\frac{1}{t}\log E_x\exp\left\{\pm\int_0^t V(X(s))\,ds\right\} = -\inf_{f\in\mathcal{F}_\infty}\int_{\mathbb{R}^d}\left(\frac{1}{2}|\nabla f|^2 \mp V(x)f^2(x)\right)\varphi(x)\,dx = -\inf_{f\in\mathcal{P}}\int_{\mathbb{R}^d}\left(\frac{1}{2}|\nabla f|^2 \mp V(x)f^2(x)\right)\varphi(x)\,dx, \tag{3.26}$$
where the last equality in (3.26) holds by the standard approximation procedure. $\square$

For the convenience of the estimates in Section 4, we rewrite (3.26) with respect to the Lebesgue measure. Let $\mathcal{E} = \left\{\tilde f(x) \stackrel{\mathrm{def}}{=} f(x)e^{-|x|^2/2} : f\in\mathcal{P}\right\}$; then $\|\tilde f\|_2^2 = \pi^{d/2}$, where $\|\cdot\|_2$ is the classic $L^2$-norm. Hence,
$$\int_{\mathbb{R}^d}|\nabla f|^2\varphi(x)\,dx = \pi^{-d/2}\int_{\mathbb{R}^d}\left|\nabla\tilde f + x\tilde f(x)\right|^2dx = \pi^{-d/2}\int_{\mathbb{R}^d}\left(|\nabla\tilde f|^2 + |x|^2\tilde f^2\right)dx + 2\pi^{-d/2}\int_{\mathbb{R}^d}\left(x\cdot\nabla\tilde f\right)\tilde f\,dx. \tag{3.27}$$

Applying the divergence theorem to the second integral in (3.27),
$$\int_{\mathbb{R}^d}\left(x\cdot\nabla\tilde f\right)\tilde f\,dx = \frac{1}{2}\int_{\mathbb{R}^d}x\cdot\nabla\tilde f^2(x)\,dx = -\frac{d}{2}\int_{\mathbb{R}^d}\tilde f^2(x)\,dx = -\frac{d}{2}\pi^{d/2}. \tag{3.28}$$

Hence, by (3.27) and (3.28), (3.15) becomes
$$\lim_{t\to\infty}\frac{1}{t}\log E_x\exp\left\{\pm\int_0^t V(X(s))\,ds\right\} = -\frac{1}{2}\pi^{-d/2}\inf_{g\in\mathcal{E}}\int_{\mathbb{R}^d}\left(|\nabla g|^2 + \left(|x|^2 \mp 2V(x)\right)g^2\right)dx + \frac{d}{2}. \tag{3.29}$$

4. The principal eigenvalue estimates

In this section, we analyze (3.29) and prove Theorem 1.

Lemma 11.
$$\inf_{g\in\mathcal{E}}\int_{\mathbb{R}^d}\left(|\nabla g|^2 + |x|^2g^2\right)dx = d\pi^{d/2}, \tag{4.1}$$
where the unique minimizer is $g_0(x) = e^{-|x|^2/2}$.


Proof. By (3.28),
$$\frac{1}{2}d\pi^{d/2} = -\int_{\mathbb{R}^d}(x\cdot\nabla g)g\,dx \le \left(\int_{\mathbb{R}^d}|x|^2g^2\,dx\right)^{1/2}\left(\int_{\mathbb{R}^d}|\nabla g|^2\,dx\right)^{1/2} \le \frac{1}{2}\left(\int_{\mathbb{R}^d}|x|^2g^2(x)\,dx + \int_{\mathbb{R}^d}|\nabla g(x)|^2\,dx\right).$$
To make both inequalities equalities, we need $xg_0(x) = -\nabla g_0(x)$. Under the normalization $\|g_0\|_2^2 = \pi^{d/2}$, we have $g_0(x) = e^{-|x|^2/2}$. Clearly, $g_0\in\mathcal{E}$. $\square$
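Lemma 11 can be sanity-checked symbolically in $d=1$: the claimed minimizer attains $d\pi^{d/2}=\sqrt{\pi}$, while a rescaled competitor (an arbitrary illustrative choice, normalized to the same $L^2$-norm) does strictly worse. This check is an editorial illustration, not part of the paper.

```python
import sympy as sp

x = sp.symbols('x', real=True)
d = 1

def energy(g):
    """The functional of Lemma 11 in one dimension."""
    return sp.integrate(sp.diff(g, x)**2 + x**2 * g**2, (x, -sp.oo, sp.oo))

g0 = sp.exp(-x**2 / 2)                  # claimed minimizer, ||g0||_2^2 = pi^{1/2}
assert sp.simplify(energy(g0) - d * sp.sqrt(sp.pi)) == 0

# A competitor rescaled so that ||g||_2^2 = pi^{1/2} as well
g = sp.exp(-x**2)
g = g * sp.sqrt(sp.sqrt(sp.pi) / sp.integrate(g**2, (x, -sp.oo, sp.oo)))
assert sp.simplify(energy(g) - d * sp.sqrt(sp.pi)) > 0   # strictly worse
```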

Applying Lemma 11, we obtain the following estimates for the principal eigenvalues in Theorem 1. Throughout the proof we use the same notation as in Lemma 11: $g_0(x) = e^{-|x|^2/2}$.

Proposition 12. Let
$$\lambda_1 = -\frac{1}{2}\pi^{-d/2}\inf_{g\in\mathcal{E}}\int_{\mathbb{R}^d}\left(|\nabla g|^2 + \left(|x|^2 - 2V(x)\right)g^2\right)dx + \frac{d}{2},$$
$$\lambda_2 = \frac{1}{2}\pi^{-d/2}\inf_{g\in\mathcal{E}}\int_{\mathbb{R}^d}\left(|\nabla g|^2 + \left(|x|^2 + 2V(x)\right)g^2\right)dx - \frac{d}{2}.$$
Then $P$-a.s., $\lambda_1,\lambda_2\in(0,\infty)$ are non-degenerate random variables.

Proof. First, consider $\lambda_1$. By Proposition 8, $|x|^2 - 2V(x)$ has a (random) lower bound $C(\omega)$ on $\mathbb{R}^d$. Then we have
$$\inf_{g\in\mathcal{E}}\int_{\mathbb{R}^d}\left(|\nabla g|^2 + \left(|x|^2 - 2V(x)\right)g^2\right)dx \ge \inf_{g\in\mathcal{E}}\int_{\mathbb{R}^d}\left(|x|^2 - 2V(x)\right)g^2\,dx \ge \inf_{g\in\mathcal{E}}C(\omega)\int_{\mathbb{R}^d}g(x)^2\,dx = C(\omega)\pi^{d/2}.$$
Therefore, $P$-a.s., $\lambda_1 \le \frac{1}{2}(d - C(\omega)) < \infty$. On the other hand,

$$\inf_{g\in\mathcal{E}}\int_{\mathbb{R}^d}\left(|\nabla g|^2 + \left(|x|^2 - 2V(x)\right)g^2(x)\right)dx \le \int_{\mathbb{R}^d}\left(|\nabla g_0|^2 + \left(|x|^2 - 2V(x)\right)g_0^2(x)\right)dx = d\pi^{d/2} - 2\int_{\mathbb{R}^d}V(x)e^{-|x|^2}\,dx < d\pi^{d/2}\quad P\text{-a.s.}$$
The last inequality holds since $P\left(V\equiv0\text{ on }\mathbb{R}^d\right) = 0$. Therefore, $P$-a.s.
$$\lambda_1 = -\frac{1}{2}\pi^{-d/2}\inf_{g\in\mathcal{E}}\int_{\mathbb{R}^d}\left(|\nabla g|^2 + \left(|x|^2 - 2V(x)\right)g^2\right)dx + \frac{d}{2} > 0.$$

To prove the non-degeneracy of $\lambda_1$, it suffices to show $P(\lambda_1>\alpha)>0$ for any $\alpha>0$. Since $K$ is continuous, there exists $r>0$ such that $K(x)>K(0)/2$ for all $x\in B(0,r)$. Then for any $x\in B(0,r/2)$,
$$V(x) = \int_{\mathbb{R}^d}K(x-y)\,\omega(dy) \ge \int_{B(0,r/2)}K(x-y)\,\omega(dy) \ge \int_{B(0,r/2)}\frac{K(0)}{2}\,\omega(dy) = \frac{K(0)}{2}\,\omega(B(0,r/2)).$$

Therefore,
$$\int_{\mathbb{R}^d}V(x)g_0^2(x)\,dx \ge \int_{B(0,r/2)}V(x)g_0^2(x)\,dx \ge \frac{K(0)}{2}\,\omega(B(0,r/2))\int_{B(0,r/2)}e^{-|x|^2}\,dx \ge \frac{K(0)}{2}\,e^{-r^2/4}\,|B(0,r/2)|\,\omega(B(0,r/2)), \tag{4.2}$$

where $|\cdot|$ stands for the $\mathbb{R}^d$-volume in the last inequality. From (4.2) and (3.29), we get
$$\lambda_1 \ge -\frac{1}{2}\pi^{-d/2}\int_{\mathbb{R}^d}\left(\left|\nabla e^{-|x|^2/2}\right|^2 + |x|^2e^{-|x|^2} - 2V(x)e^{-|x|^2}\right)dx + \frac{d}{2} \ge c\,\omega(B(0,r/2)), \tag{4.3}$$
where $c = \frac{1}{4}\pi^{-d/2}K(0)e^{-r^2/4}|B(0,r/2)| > 0$. (4.3) implies that, for each $n\in\mathbb{N}$,
$$P(\lambda_1\ge cn) \ge P\left(\omega(B(0,r/2)) = n\right) > 0.$$

As to $\lambda_2$, the upper bound holds since
$$\lambda_2 \le \frac{1}{2}\pi^{-d/2}\int_{\mathbb{R}^d}\left(|\nabla g_0|^2 + \left(|x|^2 + 2V(x)\right)g_0^2\right)dx - \frac{d}{2} = \pi^{-d/2}\int_{\mathbb{R}^d}V(x)e^{-|x|^2}\,dx < \infty,$$

where the last inequality holds by Proposition 8.

For the lower bound, denote $F(g) := \int_{\mathbb{R}^d}\left(|\nabla g|^2 + \left(|x|^2 + 2V(x)\right)g^2(x)\right)dx$. Notice that for $g = g_0$,
$$F(g_0) = \int_{\mathbb{R}^d}\left(|\nabla g_0|^2 + \left(|x|^2 + 2V(x)\right)g_0^2(x)\right)dx = d\pi^{d/2} + 2\int_{\mathbb{R}^d}V(x)e^{-|x|^2}\,dx > d\pi^{d/2}\quad P\text{-a.s.} \tag{4.4}$$

For $g\ne g_0$, by Lemma 11,
$$F(g) = \int_{\mathbb{R}^d}\left(|\nabla g|^2 + \left(|x|^2 + 2V(x)\right)g^2(x)\right)dx \ge \int_{\mathbb{R}^d}\left(|\nabla g|^2 + |x|^2g^2(x)\right)dx > d\pi^{d/2}\quad P\text{-a.s.} \tag{4.5}$$

Therefore, from (4.4), (4.5) and the continuity of $F$ on $\mathcal{E}$ under the Sobolev norm,
$$\lambda_2 = \frac{1}{2}\pi^{-d/2}\inf_{g\in\mathcal{E}}\int_{\mathbb{R}^d}\left(|\nabla g|^2 + \left(|x|^2 + 2V(x)\right)g^2\right)dx - \frac{d}{2} > 0.$$

As to the non-degeneracy of $\lambda_2$: by the continuity of $K$ and the construction of $V$, with positive probability $V$ exceeds any given large value on a fixed compact set. Therefore, for any given $c$, the event
$$\lambda_2 \ge \frac{1}{2}\pi^{-d/2}\inf_{g\in\mathcal{E}}\int_{\mathbb{R}^d}\left(|x|^2 + 2V(x)\right)g^2(x)\,dx - \frac{d}{2} \ge \frac{1}{2}\inf_{x\in\mathbb{R}^d}\left(|x|^2 + 2V(x)\right) - \frac{d}{2} := c$$
happens with a positive probability. $\square$

From Lemma 9, Proposition 12 and (3.29), we establish Theorem 1.

Appendix

Lemma 13. Let $\{X(t)\}_{t\ge0}$ be the O–U process defined in (2.1). There exist $c>0$ and $a_x>0$ such that for all $a>a_x$, the following inequality holds for all $t>0$:
$$P_x\left(\sup_{0\le s\le t}|X(s)|>a\right) \le 2t\exp\{-ca^2\}.$$

Proof. First, let $x=0$. Since $X(t)$ is an asymptotically stationary Gaussian process, by the classical Gaussian tail estimate (for reference, see Li and Shao, 2001) there exist $c_1>0$ and $a_0>0$ such that
$$\max\left\{P_0\left(|X(k)|>a\right),\ P_0\left(\sup_{0\le s\le1}|X(s)|>a\right)\right\} \le \exp\{-c_1a^2\},$$
for $a>a_0$ and all $k\in\mathbb{N}$.

for a > a0 and all k ∈ N.For a > a0, by the Markov property and (3.1), we have

P0

sup

k≤s≤k+1|X(s)| > a

=

−∞

Ex

1sup

0≤s≤1|X(s)|>a

pk(0, dx)

|x|≤ a

2

Px

sup0≤s≤1

|X(s)| > apk(0, dx) + P0

|X(k)| >

a2

F. Xing / Statistics and Probability Letters 82 (2012) 2091–2102 2101

=

|x|≤ a

2

P0

sup0<s≤1

|X(s) + xe−s| > a

p(k, 0, dx) + P0

|X(k)| >

a2

≤ P0

sup0≤s≤1

|X(s)| >a2

+ P0

|X(k)| >

a2

≤ 2 exp

c14a2

.

Hence,
$$P_0\left(\sup_{0\le s\le k+1}|X(s)|>a\right) \le P_0\left(\sup_{0\le s\le k}|X(s)|>a\right) + P_0\left(\sup_{k\le s\le k+1}|X(s)|>a\right) \le P_0\left(\sup_{0\le s\le k}|X(s)|>a\right) + 2\exp\left\{-\frac{c_1}{4}a^2\right\}.$$

Repeating this procedure and letting $c := c_1/4$, we have $P_0\left(\sup_{0\le s\le t}|X(s)|>a\right) \le 2t\exp\{-ca^2\}$.

For general $x$, notice that
$$P_x\left(\sup_{0\le s\le t}|X(s)|>a\right) = P_0\left(\sup_{0\le s\le t}|xe^{-s}+X(s)|>a\right) \le P_0\left(\sup_{0\le s\le t}|X(s)|>a-|x|\right).$$

Let $a_x := a_0 + |x|$; then for all $a>a_x$ we have
$$P_x\left(\sup_{0\le s\le t}|X(s)|>a\right) \le 2t\exp\{-ca^2\}. \qquad\square$$

Using Lemma 13, we can prove Lemma 10.

Proof of Lemma 10. Notice that
$$0 \le E_x\exp\left\{\int_0^t V(X(s))\,ds\right\} - E_x\left[\exp\left\{\int_0^t V(X(s))\,ds\right\}\mathbf{1}_{\left\{\tau_{Q_{\gamma(t)}}>t\right\}}\right]$$
$$= \sum_{n=1}^{\infty}E_x\left[\exp\left\{\int_0^t V(X(s))\,ds\right\}\mathbf{1}_{\left\{\tau_{Q_{n\gamma(t)}}\le t<\tau_{Q_{(n+1)\gamma(t)}}\right\}}\right] \le \sum_{n=1}^{\infty}\exp\left\{t\max_{x\in Q_{(n+1)\gamma(t)}}V(x)\right\}P_x\left(\tau_{Q_{n\gamma(t)}}\le t\right).$$

We know from Proposition 8 that there exists a constant $c_1>0$ such that, with probability one,
$$\max_{x\in Q_R}V(x) \le c_1\log R$$
for all sufficiently large $R$. Moreover, from Lemma 13 we have
$$P_x\left(\tau_{Q_a}\le t\right) = P_x\left(\sup_{0\le s\le t}|X(s)|>a\right) \le 2t\exp\{-ca^2\}$$
for $a>a_x$.

for a > ax. Therefore, with probability one, for sufficiently large t and all n, we have

expt maxx∈Q(n+1)γ (t)

V (x)

Px

τQnγ (t) ≤ t

≤ 2t exp

c1t

12log t + log log t + log(α(n + 1))

− cα2n2t(log t)2

≤ 2t exp

c2α2nt (log t)2

= 2t · t−

c22 α2nt log t .

Notice that the second inequality holds for $t$ sufficiently large, uniformly in $n$. Therefore,
$$\sum_{n=1}^{\infty}\exp\left\{t\max_{x\in Q_{(n+1)\gamma(t)}}V(x)\right\}P_x\left(\tau_{Q_{n\gamma(t)}}\le t\right) \le \frac{2\,t^{-c_2\alpha^2t\log t+1}}{1-t^{-c_2\alpha^2t\log t}}\qquad \text{for large } t. \tag{A.1}$$
Letting $t\to\infty$, the left hand side of (A.1) goes to 0, which completes the proof. $\square$


References

Carmona, R.A., Molchanov, S.A., 1995. Stationary parabolic Anderson model and intermittency. Probab. Theory Related Fields 102, 433–453.
Chen, X., 2009. Random walk intersections: large deviations and related topics. In: Mathematical Surveys and Monographs, vol. 157. American Mathematical Society, Providence.
Chen, X., 2011. Quenched asymptotics for Brownian motion of renormalized Poisson potential and for the related parabolic Anderson models. Ann. Probab.
Dunkl, C., Xu, Y., 2001. Orthogonal Polynomials of Several Variables. Cambridge Univ. Press, Cambridge.
Evans, L.C., 1998. Partial Differential Equations. In: Graduate Studies in Mathematics, vol. 19. Amer. Math. Soc., Providence.
Gärtner, J., König, W., 2005. The parabolic Anderson model. In: Deuschel, J.-D., Greven, A. (Eds.), Interacting Stochastic Systems. Springer, Berlin, pp. 153–179.
Gärtner, J., König, W., Molchanov, S.A., 2000. Almost sure asymptotics for the continuous parabolic Anderson model. Probab. Theory Related Fields 118 (4), 547–573.
Getoor, R.K., 1959. Markov operators and their associated semi-groups. Pacific J. Math. 9, 449–472.
Li, W.V., Shao, Q., 2001. Gaussian processes: inequalities, small ball probabilities and applications. In: Rao, C.R., Shanbhag, D. (Eds.), Stochastic Processes: Theory and Methods. Handbook of Statistics, vol. 19. Elsevier, New York, pp. 533–598.
Revuz, D., Yor, M., 1994. Continuous Martingales and Brownian Motion, second ed. Springer, Berlin.
Sznitman, A.-S., 1993. Brownian motion in a Poissonian potential. Probab. Theory Related Fields 97, 447–477.
Sznitman, A.-S., 1998. Brownian Motion, Obstacles and Random Media. Springer-Verlag, Berlin.
Yosida, K., 1966. Functional Analysis. Springer, Berlin.