
SPDEs with α-stable Lévy noise: a random field approach

    Raluca M. Balan

    November 25, 2013

    Abstract

This article is dedicated to the study of an SPDE of the form

Lu(t, x) = σ(u(t, x)) Ż(t, x),   t > 0, x ∈ O,

with zero initial conditions and Dirichlet boundary conditions, where σ is a Lipschitz function, L is a second-order pseudo-differential operator, O is a bounded domain in R^d, and Ż is an α-stable Lévy noise with α ∈ (0, 2), α ≠ 1 and possibly non-symmetric tails. To give a meaning to the concept of solution, we develop a theory of stochastic integration with respect to Z, by generalizing the method of [11] to higher dimensions and non-symmetric tails. The idea is to first solve the equation with truncated noise Z_K (obtained by removing from Z the jumps which exceed a fixed value K), yielding a solution u_K, and then to show that the solutions u_L, L > K, coincide on the event {t ≤ τ_K}, for some stopping times τ_K → ∞ a.s. A similar idea was used in [22] in the setting of Hilbert-space-valued processes. A major step is to show that the stochastic integral with respect to Z_K satisfies a p-th moment inequality, for p ∈ (α, 1) if α < 1 and p ∈ (α, 2) if α > 1. This inequality plays the same role as the Burkholder-Davis-Gundy inequality in the theory of integration with respect to continuous martingales.

MSC 2000 subject classification: Primary 60H15; secondary 60H05, 60G60

Department of Mathematics and Statistics, University of Ottawa, 585 King Edward Avenue, Ottawa, ON, K1N 6N5, Canada. E-mail address: [email protected]. Research supported by a grant from the Natural Sciences and Engineering Research Council of Canada.


    1 Introduction

Modeling phenomena which evolve in time or space-time and are subject to random perturbations is a fundamental problem in stochastic analysis. When these perturbations are known to exhibit an extreme behavior, as seen frequently in finance or environmental studies, a model relying on the Gaussian distribution is not appropriate. A suitable alternative could be a model based on a heavy-tailed distribution, like the stable distribution. In such a model, these perturbations are allowed to take extreme values with a probability which is significantly higher than in a Gaussian-based model.

In the present article, we introduce precisely such a model, given rigorously by a stochastic partial differential equation (SPDE) driven by a noise term which has a stable distribution over any space-time region, and has independent values over disjoint space-time regions (i.e. it is a Lévy noise). More precisely, we consider the SPDE:

Lu(t, x) = σ(u(t, x)) Ż(t, x),   t > 0, x ∈ O   (1)

with zero initial conditions and Dirichlet boundary conditions, where σ is a Lipschitz function, L is a second-order pseudo-differential operator on a bounded domain O ⊂ R^d, and Ż(t, x) = ∂^{d+1}Z/∂t∂x_1...∂x_d is the formal derivative of an α-stable Lévy noise with α ∈ (0, 2), α ≠ 1. The goal is to find sufficient conditions on the fundamental solution G(t, x, y) of the equation Lu = 0 on R_+ × O which will ensure the existence of a mild solution of equation (1). We say that a predictable process u = {u(t, x); t ≥ 0, x ∈ O} is a mild solution of (1) if for any t > 0, x ∈ O,

u(t, x) = ∫_0^t ∫_O G(t − s, x, y) σ(u(s, y)) Z(ds, dy)   a.s.   (2)

We assume that G(t, x, y) is a function in t, which excludes from our analysis the case of the wave equation with d ≥ 3.

To explain the connections with other works, we describe briefly the construction of the noise (the details are given in Section 2 below). This construction is similar to that of a classical α-stable Lévy process, and is based on a Poisson random measure (PRM) N on R_+ × R^d × (R\{0}) of intensity dt dx ν(dz), where

ν(dz) = α [ p z^{−α−1} 1_{(0,∞)}(z) + q (−z)^{−α−1} 1_{(−∞,0)}(z) ] dz   (3)


for some p, q ≥ 0 with p + q = 1. More precisely, for any set B ∈ B_b(R_+ × R^d),

Z(B) = ∫_{B×{|z|≤1}} z Ñ(ds, dx, dz) + ∫_{B×{|z|>1}} z N(ds, dx, dz) − μ|B|,   (4)

where Ñ(B × ·) = N(B × ·) − |B| ν(·) is the compensated measure and μ is a constant (specified by Lemma 2.3 below). Here, B_b(R_+ × R^d) is the class of bounded Borel sets in R_+ × R^d and |B| is the Lebesgue measure of B.
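For later use (e.g. in the proofs of Lemma 5.1 and Lemma 6.6), a one-line computation from (3) gives the tail mass of ν: since p + q = 1,

\[
\nu(\{z : |z| > a\}) \;=\; \int_a^\infty \alpha p\, z^{-\alpha-1}\,dz \;+\; \int_{-\infty}^{-a} \alpha q\, (-z)^{-\alpha-1}\,dz \;=\; p\,a^{-\alpha} + q\,a^{-\alpha} \;=\; a^{-\alpha}, \qquad a > 0.
\]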

As the term on the right-hand side of (2) is a stochastic integral with respect to Z, such an integral should be constructed first. Our construction of the integral is an extension to random fields of the construction provided by Giné and Marcus in [11] in the case of an α-stable Lévy process {Z(t)}_{t∈[0,1]}. Unlike these authors, we do not assume that the measure ν is symmetric.

Since any Lévy noise is related to a PRM, in a broad sense one could say that this problem originates in Itô's papers [12] and [13] regarding the stochastic integral with respect to a Poisson noise. SPDEs driven by a compensated PRM were considered for the first time in [14], using the approach based on Hilbert-space-valued solutions. This study was motivated by an application to neurophysiology leading to the cable equation. In the case of the heat equation, a similar problem was considered in [1], [26] and [3] using the approach based on random-field solutions. One of the results of [26] shows that the heat equation:

∂u/∂t (t, x) = (1/2) Δu(t, x) + ∫_U f(t, x, u(t, x); z) Ñ(t, x, dz) + g(t, x, u(t, x))

has a unique solution in the space of predictable processes u satisfying sup_{(t,x)∈[0,T]×R^d} E|u(t, x)|^p < ∞. The heat equation with α-stable Lévy noise was studied in [16] and [18] in the case α < 1, assuming that the noise has only positive jumps (i.e. q = 0). The methods used by


these authors are different from those presented here, since they investigate the more difficult case of a non-Lipschitz function σ(u) = u^β with β > 0. In [16], Mueller removes the atoms of Z of mass smaller than 2^{−n} and solves the equation driven by the noise obtained in this way; here we remove the atoms of Z of mass larger than K and solve the resulting equation. In [18], Mytnik uses a martingale problem approach and gives the existence of a pair (u, Z) which satisfies the equation (the so-called weak solution), whereas in the present article we obtain the existence of a solution u for a given noise Z (the so-called strong solution). In particular, when α > 1 and β = 1/α, the existence of a weak solution of the heat equation with α-stable Lévy noise is obtained in [18] under the condition


In Section 5, we introduce the process Z_K obtained by removing from Z the jumps exceeding a fixed value K, and we develop a theory of integration with respect to this process. For this, we need to treat separately the cases α < 1 and α > 1. In both cases, we obtain a p-th moment inequality for the integral process, for p ∈ (α, 1) if α < 1 and p ∈ (α, 2) if α > 1. This inequality plays the same role as the Burkholder-Davis-Gundy inequality in the theory of integration with respect to continuous martingales.

In Section 6 we prove the main result about the existence of the mild solution of equation (1). For this, we first solve the equation with truncated noise Z_K using a Picard iteration scheme, yielding a solution u_K. We then introduce a sequence (τ_K)_{K≥1} of stopping times with τ_K → ∞ a.s. and we show that the solutions u_L, L > K, coincide on the event {t ≤ τ_K}. For the definition of the stopping times τ_K, we need again to consider separately the cases α < 1 and α > 1.

Appendix A contains some results about the tail of a non-symmetric stable random variable, and the tail of an infinite sum of random variables. Appendix B gives an estimate for the Green function associated to the fractional power of the Laplacian. Appendix C gives a local property of the stochastic integral with respect to Z (or Z_K).

    2 Definition of the noise

In this section we review the construction of the α-stable Lévy noise on R_+ × R^d and investigate some of its properties.

Let N = Σ_{i≥1} δ_{(T_i, X_i, Z_i)} be a Poisson random measure on R_+ × R^d × (R\{0}), defined on a probability space (Ω, F, P), with intensity measure dt dx ν(dz), where ν is given by (3). Let (ε_j)_{j≥0} be a sequence of positive real numbers such that ε_j ↓ 0 as j → ∞ and 1 = ε_0 > ε_1 > ε_2 > . . .. Let

Γ_j = {z ∈ R; ε_j < |z| ≤ ε_{j−1}} for j ≥ 1,  and  Γ_0 = {z ∈ R; |z| > 1}.

For any set B ∈ B_b(R_+ × R^d), we define

L_j(B) = ∫_{B×Γ_j} z N(dt, dx, dz) = Σ_{(T_i,X_i)∈B} Z_i 1_{{Z_i∈Γ_j}},   j ≥ 0.
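This construction is easy to simulate. The sketch below is purely illustrative (it is not taken from the paper): it samples Σ_j L_j(B) restricted to jumps of modulus larger than a cutoff ε, for the hypothetical box B = [0,T] × [0, side]^d, using only the Poisson description above and the tail formula ν({|z| > a}) = a^{−α}. For α < 1 this sum converges to Z(B) as ε → 0 (see the representation (11) below); for α > 1 the compensation appearing in (12) would have to be added. The box, cutoff and parameter values are assumptions of the sketch.

```python
import numpy as np

def large_jump_sum(T, side, d, alpha, p, eps, rng):
    """Simulate the sum of L_j(B) over jumps with |z| > eps, for
    B = [0,T] x [0,side]^d, using intensity dt dx nu(dz) with
    nu({|z| > a}) = a^{-alpha} and P(jump > 0) = p, P(jump < 0) = q = 1-p."""
    vol_B = T * side ** d
    n = rng.poisson(vol_B * eps ** (-alpha))        # number of points of N in B x {|z| > eps}
    signs = np.where(rng.random(n) < p, 1.0, -1.0)  # jump signs
    u = 1.0 - rng.random(n)                         # uniform in (0, 1]
    mags = eps * u ** (-1.0 / alpha)                # Pareto tail: P(|Z| > t | |Z| > eps) = (t/eps)^{-alpha}
    return float(np.sum(signs * mags))

rng = np.random.default_rng(0)
samples = [large_jump_sum(1.0, 1.0, 1, 0.7, 0.5, 1e-3, rng) for _ in range(5000)]
print(np.quantile(samples, [0.50, 0.90, 0.99]))   # heavy right tail is clearly visible
```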


Remark 2.1 The variable L_0(B) is finite since the sum above contains finitely many terms. To see this, we note that E[N(B × Γ_0)] = |B| ν(Γ_0) < ∞, and hence N(B × Γ_0) < ∞ a.s.


For any disjoint sets B_1, . . . , B_n and for any u_1, . . . , u_n ∈ R, we have:

E[exp(i Σ_{k=1}^n u_k Y(B_k))] = E[exp(i Y(Σ_{k=1}^n u_k 1_{B_k}))]
= exp{ ∫_{R_+×R^d} ∫_R (e^{iz Σ_{k=1}^n u_k 1_{B_k}(t,x)} − 1 − iz 1_{{|z|≤1}} Σ_{k=1}^n u_k 1_{B_k}(t, x)) dt dx ν(dz) }
= exp{ Σ_{k=1}^n |B_k| ∫_R (e^{iu_k z} − 1 − iu_k z 1_{{|z|≤1}}) ν(dz) }   (10)
= Π_{k=1}^n E[exp(iu_k Y(B_k))],

using (9) with φ = Σ_{k=1}^n u_k 1_{B_k} for the second equality, and (6) for the last equality. This proves that Y(B_1), . . . , Y(B_n) are independent.

(b) Let S_n = Σ_{k=1}^n Y(B_k) and S = Y(B), where B = ∪_{n≥1} B_n. By Lévy's equivalence theorem, (S_n)_{n≥1} converges a.s. if and only if it converges in distribution. By (10), with u_i = u for all i = 1, . . . , k, we have:

E(e^{iuS_n}) = exp{ |∪_{k=1}^n B_k| ∫_R (e^{iuz} − 1 − iuz 1_{{|z|≤1}}) ν(dz) }.

This clearly converges to E(e^{iuS}) = exp{ |B| ∫_R (e^{iuz} − 1 − iuz 1_{{|z|≤1}}) ν(dz) }, and hence (S_n)_{n≥1} converges in distribution to S. □

Recall that a random variable X has an α-stable distribution with parameters α ∈ (0, 2), σ ∈ [0, ∞), β ∈ [−1, 1], μ ∈ R if for any u ∈ R,

E(e^{iuX}) = exp{ −σ^α |u|^α (1 − iβ sgn(u) tan(πα/2)) + iμu }   if α ≠ 1,  or
E(e^{iuX}) = exp{ −σ |u| (1 + iβ (2/π) sgn(u) ln|u|) + iμu }   if α = 1

(see Definition 1.1.6 of [27]). We denote this distribution by S_α(σ, β, μ).

Lemma 2.3 Y(B) has a S_α(σ|B|^{1/α}, β, μ|B|) distribution with β = p − q,

σ^α = ∫_0^∞ x^{−α} sin x dx = { Γ(2−α)/(1−α) · cos(πα/2)  if α ≠ 1;   π/2  if α = 1 },

μ = { βα(α−1)^{−1}  if α ≠ 1;   c_0 β  if α = 1 },

and c_0 = ∫_0^∞ (sin z − z 1_{{z≤1}}) z^{−2} dz. If α > 1, then E(Y(B)) = μ|B|.


Proof: We first express the characteristic function (8) of Y(B) in Feller's canonical form (see Section XVII.2 of [9]):

E(e^{iuY(B)}) = exp{ iub|B| + |B| ∫_R [(e^{iuz} − 1 − iu sin z)/z²] M(dz) },

with M(dz) = z² ν(dz) and b = ∫_R (sin z − z 1_{{|z|≤1}}) ν(dz). Then the result follows from the calculations done in Example XVII.3.(g) of [9]. □

From Lemma 2.2 and Lemma 2.3, it follows that

Z = {Z(B) = Y(B) − μ|B|; B ∈ B_b(R_+ × R^d)}

is an α-stable random measure, in the sense of Definition 3.3.1 of [27], with control measure m(B) = σ^α |B| and constant skewness intensity β. In particular, Z(B) has a S_α(σ|B|^{1/α}, β, 0) distribution.

We say that Z is an α-stable Lévy noise. Coming back to the original construction (7) of Y(B) and noticing that

μ|B| = −|B| ∫_R z 1_{{|z|≤1}} ν(dz) = −Σ_{j≥1} E(L_j(B))  if α < 1,   and   μ|B| = |B| ∫_R z 1_{{|z|>1}} ν(dz) = E(L_0(B))  if α > 1,

it follows that Z(B) can be represented as:

Z(B) = Σ_{j≥0} L_j(B) =: ∫_{B×(R\{0})} z N(dt, dx, dz)   if α < 1,   (11)

Z(B) = Σ_{j≥0} (L_j(B) − E L_j(B)) =: ∫_{B×(R\{0})} z Ñ(dt, dx, dz)   if α > 1.   (12)

Here Ñ is the compensated Poisson measure associated to N, i.e. Ñ(A) = N(A) − E(N(A)) for any relatively compact set A in R_+ × R^d × (R\{0}).

In the case α = 1, we will assume that p = q, so that ν is symmetric around 0, E(L_j(B)) = 0 for all j ≥ 1, and Z(B) admits the same representation as in the case α < 1.


    3 The linear equation

As a preliminary investigation, we consider first equation (1) with σ = 1:

Lu(t, x) = Ż(t, x),   t > 0, x ∈ O   (13)

with zero initial conditions and Dirichlet boundary conditions. In this section O is a bounded domain in R^d or O = R^d.

By definition, the process {u(t, x); t ≥ 0, x ∈ O} given by:

u(t, x) = ∫_0^t ∫_O G(t − s, x, y) Z(ds, dy)   (14)

is a mild solution of (13), provided that the stochastic integral on the right-hand side of (14) is well-defined.

We define now the stochastic integral of a deterministic function φ:

Z(φ) = ∫_0^∞ ∫_{R^d} φ(t, x) Z(dt, dx).

If φ ∈ L^α(R_+ × R^d), this can be defined by approximation with simple functions, as explained in Section 3.4 of [27]. The process {Z(φ); φ ∈ L^α(R_+ × R^d)} has jointly α-stable finite dimensional distributions. In particular, each Z(φ) has a S_α(σ_φ, β, 0) distribution with scale parameter:

σ_φ = σ ( ∫_0^∞ ∫_{R^d} |φ(t, x)|^α dx dt )^{1/α}.

More generally, a measurable function φ: R_+ × R^d → R is integrable with respect to Z if there exists a sequence (φ_n)_{n≥1} of simple functions such that φ_n → φ a.e., and for any B ∈ B_b(R_+ × R^d), the sequence {Z(φ_n 1_B)}_n converges in probability (see [24]).

The next result shows that the condition φ ∈ L^α(R_+ × R^d) is also necessary for the integrability of φ with respect to Z. Due to Lemma 2.2, this follows immediately from the general theory of stochastic integration with respect to independently scattered random measures developed in [24].

Lemma 3.1 A deterministic function φ is integrable with respect to Z if and only if φ ∈ L^α(R_+ × R^d).


Proof: We write the characteristic function of Z(B) in the form used in [24]:

E(e^{iuZ(B)}) = exp{ ∫_B [ iua + ∫_R (e^{iuz} − 1 − iuτ(z)) ν(dz) ] dt dx },

with a = β − μ, τ(z) = z if |z| ≤ 1 and τ(z) = sgn(z) if |z| > 1. By Theorem 2.7 of [24], φ is integrable with respect to Z if and only if

∫_{R_+×R^d} |U(φ(t, x))| dt dx < ∞


is the generator of a Markov process with values in R^d, without jumps (a diffusion). Assume that O is a bounded domain in R^d or O = R^d. By Aronson's estimate (see e.g. Theorem 2.6 of [22]), under some assumptions on the coefficients a_{ij}, b_i, there exist some constants c_1, c_2 > 0 such that

G(t, x, y) ≤ c_1 t^{−d/2} exp( −|x − y|²/(c_2 t) )   (18)

for all t > 0 and x, y ∈ O. In this case, condition (15) is implied by (5).

Example 3.5 (Heat equation with fractional power of the Laplacian) Let L = ∂/∂t + (−Δ)^γ for some γ > 0. Assume that O is a bounded domain in R^d or O = R^d. Then (see e.g. Appendix B.5 of [22])

G_γ(t, x, y) = ∫_0^∞ G(s, x, y) g_{t,γ}(s) ds = ∫_0^∞ G(t^{1/γ} s, x, y) g_{1,γ}(s) ds,   (19)

where G(t, x, y) is the fundamental solution of ∂u/∂t − Δu = 0 on O and g_{t,γ} is the density of the measure μ_{t,γ}, (μ_{t,γ})_{t≥0} being a convolution semigroup of measures on [0, ∞) whose Laplace transform is given by:

∫_0^∞ e^{−us} g_{t,γ}(s) ds = exp(−t u^γ),   u > 0.

Note that if γ < 1, g_{t,γ} is the density of S_t, where (S_t)_{t≥0} is a γ-stable subordinator with Lévy measure ρ(dx) = [γ/Γ(1−γ)] x^{−γ−1} 1_{(0,∞)}(x) dx.

Assume first that O = R^d. Then G_γ(t, x, y) = G_γ(t, x − y), where

G_γ(t, x) = (2π)^{−d} ∫_{R^d} e^{iξ·x} e^{−t|ξ|^{2γ}} dξ.   (20)

If α ≤ 1, then (15) holds if and only if condition (21) holds. If α > 1, then (15) is implied by (21).


Example 3.6 (Cable equation in R) Let Lu = ∂u/∂t − ∂²u/∂x² + u and O = R. Then G(t, x, y) = G(t, x − y), where

G(t, x) = (4πt)^{−1/2} exp( −|x|²/(4t) − t ),

and condition (15) holds for any α ∈ (0, 2).
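Condition (15) is stated on a page that is not reproduced here; in Examples 3.4-3.7 it is used as an integrability condition on G. Assuming it is the condition ∫_0^T ∫_R G(t, x)^α dx dt < ∞, the claim in Example 3.6 follows from the explicit Gaussian integral

\[
\int_{\mathbb{R}} G(t,x)^\alpha\,dx
= (4\pi t)^{-\alpha/2} e^{-\alpha t}\int_{\mathbb{R}} e^{-\alpha x^2/(4t)}\,dx
= (4\pi t)^{-\alpha/2} e^{-\alpha t}\sqrt{\tfrac{4\pi t}{\alpha}}
= \alpha^{-1/2}\,(4\pi t)^{-(\alpha-1)/2}\,e^{-\alpha t},
\]

which is integrable on (0, T] since (α − 1)/2 < 1 for every α ∈ (0, 2).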

Example 3.7 (Wave equation in R^d with d = 1, 2) Let L = ∂²/∂t² − Δ and O = R^d with d = 1 or d = 2. Then G(t, x, y) = G(t, x − y), where

G(t, x) = (1/2) 1_{{|x| < t}}   if d = 1,   and   G(t, x) = (1/(2π)) (t² − |x|²)^{−1/2} 1_{{|x| < t}}   if d = 2.


with 0 = t_0 < t_1 < . . . < t_N < ∞.

Let L_α be the class of all predictable processes X such that

‖X‖^α_{α,T,B} := E ∫_0^T ∫_B |X(t, x)|^α dx dt < ∞

for any T > 0 and B ∈ B_b(R^d). Note that L_α is a linear space. Let (E_k)_{k≥1} be an increasing sequence of sets in B_b(R^d) such that ∪_k E_k = R^d. We define

‖X‖_α = Σ_{k≥1} 2^{−k} (1 ∧ ‖X‖_{α,k,E_k})   if α > 1,
‖X‖_α = Σ_{k≥1} 2^{−k} (1 ∧ ‖X‖^α_{α,k,E_k})   if α ≤ 1.

We identify two processes X and Y for which ‖X − Y‖_α = 0, i.e. X = Y a.e. with respect to the measure P × dt × dx. In particular, we identify two processes X and Y if X is a modification of Y, i.e. X(t, x) = Y(t, x) a.s. for all (t, x) ∈ R_+ × R^d.

The space L_α becomes a metric space endowed with the metric d:

d(X, Y) = ‖X − Y‖_α.

This follows using Minkowski's inequality if α > 1, and the inequality |a + b|^α ≤ |a|^α + |b|^α if α ≤ 1. The following result can be proved similarly to Proposition 2.3 of [29].

Proposition 4.2 For any X ∈ L_α there exists a sequence (X_n)_{n≥1} of bounded simple processes such that ‖X_n − X‖_α → 0 as n → ∞.


By Proposition 5.7 of [25], the α-stable Lévy process {Z(t, B) = Z([0, t] × B); t ≥ 0} has a càdlàg modification, for any B ∈ B_b(R^d). We work with these modifications. If X is a simple process given by (23), we define

I(X)(t, B) = Σ_{i=0}^{N−1} Σ_{j=1}^{m_i} Y_{ij} Z( (t_i ∧ t, t_{i+1} ∧ t] × (A_{ij} ∩ B) ).   (24)

Note that for any B ∈ B_b(R^d), I(X)(t, B) is F_t-measurable for any t ≥ 0, and {I(X)(t, B)}_{t≥0} is càdlàg. We write

I(X)(t, B) = ∫_0^t ∫_B X(s, x) Z(ds, dx).

The following result will be used for the construction of the integral. This result generalizes Lemma 3.3 of [11] to the case of random fields and non-symmetric measures ν.

Theorem 4.3 If X is a bounded simple process then

sup_{λ>0} λ^α P( sup_{t∈[0,T]} |I(X)(t, B)| > λ ) ≤ c_α E ∫_0^T ∫_B |X(t, x)|^α dx dt,   (25)

for any T > 0 and B ∈ B_b(R^d), where c_α is a constant depending only on α.

Proof: Suppose that X is of the form (23). Since {I(X)(t, B)}_{t∈[0,T]} is càdlàg, it is separable. Without loss of generality, we assume that its separating set D can be written as D = ∪_n F_n, where (F_n)_n is an increasing sequence of finite sets containing the points (t_k)_{k=0,...,N}. Hence,

P( sup_{t∈[0,T]} |I(X)(t, B)| > λ ) = lim_n P( max_{t∈F_n} |I(X)(t, B)| > λ ).   (26)

Fix n ≥ 1. Denote by 0 = s_0 < s_1 < . . . < s_m = T the points of the set F_n. Say t_k = s_{i_k} for some 0 = i_0 < i_1 < . . . < i_N. Then each interval (t_k, t_{k+1}] can be written as the union of some intervals of the form (s_i, s_{i+1}]:

(t_k, t_{k+1}] = ∪_{i∈I_k} (s_i, s_{i+1}],   (27)


where I_k = {i; i_k ≤ i < i_{k+1}}. By (24), for any k = 0, . . . , N − 1 and i ∈ I_k,

I(X)(s_{i+1}, B) − I(X)(s_i, B) = Σ_{j=1}^{m_k} Y_{kj} Z( (s_i, s_{i+1}] × (A_{kj} ∩ B) ).

For any i ∈ I_k, let N_i = m_k, and for any j = 1, . . . , N_i, define ξ_{ij} = Y_{kj}, H_{ij} = A_{kj} and Z_{ij} = Z( (s_i, s_{i+1}] × (H_{ij} ∩ B) ). With this notation, we have:

I(X)(s_{i+1}, B) − I(X)(s_i, B) = Σ_{j=1}^{N_i} ξ_{ij} Z_{ij}   for all i = 0, . . . , m − 1.

Consequently, for any l = 1, . . . , m,

I(X)(s_l, B) = Σ_{i=0}^{l−1} ( I(X)(s_{i+1}, B) − I(X)(s_i, B) ) = Σ_{i=0}^{l−1} Σ_{j=1}^{N_i} ξ_{ij} Z_{ij}.   (28)

Using (26) and (28), it is enough to prove that for any λ > 0,

λ^α P( max_{l=0,...,m−1} | Σ_{i=0}^{l} Σ_{j=1}^{N_i} ξ_{ij} Z_{ij} | > λ ) ≤ c_α E ∫_0^T ∫_B |X(s, x)|^α dx ds.   (29)

First, note that

E ∫_0^T ∫_B |X(s, x)|^α dx ds = Σ_{i=0}^{m−1} (s_{i+1} − s_i) Σ_{j=1}^{N_i} E|ξ_{ij}|^α |H_{ij} ∩ B|.

This follows from the definition (23) of X and (27), since X(t, x) = Σ_{k=0}^{N−1} Σ_{i∈I_k} 1_{(s_i,s_{i+1}]}(t) Σ_{j=1}^{N_i} ξ_{ij} 1_{H_{ij}}(x).

We now prove (29). Let W_i = Σ_{j=1}^{N_i} ξ_{ij} Z_{ij}. For the event on the left-hand side, we consider its intersection with the event {max_{0≤i≤m−1} |W_i| > λ} and its complement. Hence the probability of this event can be bounded by

Σ_{i=0}^{m−1} P(|W_i| > λ) + P( max_{0≤l≤m−1} | Σ_{i=0}^{l} W_i 1_{{|W_i|≤λ}} | > λ ) =: I + II.

We treat separately the two terms.


For the first term, we note that ξ_i = (ξ_{ij})_{1≤j≤N_i} is F_{s_i}-measurable and Z_i = (Z_{ij})_{1≤j≤N_i} is independent of F_{s_i}. By Fubini's theorem,

I = Σ_{i=0}^{m−1} ∫_{R^{N_i}} P( | Σ_{j=1}^{N_i} x_j Z_{ij} | > λ ) P_{ξ_i}(dx),

where x = (x_j)_{1≤j≤N_i} and P_{ξ_i} is the law of ξ_i.

We examine the tail of U_i = Σ_{j=1}^{N_i} x_j Z_{ij} for a fixed x ∈ R^{N_i}. By Lemma 2.3, Z_{ij} has a S_α(σ (s_{i+1} − s_i)^{1/α} |H_{ij} ∩ B|^{1/α}, β, 0) distribution. Since the sets (H_{ij})_{1≤j≤N_i} are disjoint, the variables (Z_{ij})_{1≤j≤N_i} are independent. Using elementary properties of the stable distribution (Properties 1.2.1 and 1.2.3 of [27]), it follows that U_i has a S_α(σ_i, β_i, 0) distribution with parameters:

σ_i^α = σ^α (s_{i+1} − s_i) Σ_{j=1}^{N_i} |x_j|^α |H_{ij} ∩ B|,
β_i = β [ Σ_{j=1}^{N_i} sgn(x_j) |x_j|^α |H_{ij} ∩ B| ] / [ Σ_{j=1}^{N_i} |x_j|^α |H_{ij} ∩ B| ].

By Lemma A.1 (Appendix A), there exists a constant c_α > 0 such that

P(|U_i| > λ) ≤ c_α λ^{−α} (s_{i+1} − s_i) Σ_{j=1}^{N_i} |x_j|^α |H_{ij} ∩ B|   (30)

for any λ > 0. Hence

I ≤ c_α λ^{−α} Σ_{i=0}^{m−1} (s_{i+1} − s_i) Σ_{j=1}^{N_i} E|ξ_{ij}|^α |H_{ij} ∩ B| = c_α λ^{−α} E ∫_0^T ∫_B |X(s, x)|^α dx ds.

We now treat II. We consider three cases. For the first two cases we deviate from the original argument of [11] since we do not require that β = 0.

Case 1. α < 1. We have

II ≤ P( max_{0≤l≤m−1} M_l > λ ),   (31)

where {M_l = Σ_{i=0}^{l} |W_i| 1_{{|W_i|≤λ}}, F_{s_{l+1}}; 0 ≤ l ≤ m − 1} is a submartingale. By the submartingale maximal inequality (Theorem 35.3 of [5]),

P( max_{0≤l≤m−1} M_l > λ ) ≤ (1/λ) E(M_{m−1}) = (1/λ) Σ_{i=0}^{m−1} E(|W_i| 1_{{|W_i|≤λ}}).   (32)


Using the independence between ξ_i and Z_i it follows that

E[|W_i| 1_{{|W_i|≤λ}}] = ∫_{R^{N_i}} E[ | Σ_{j=1}^{N_i} x_j Z_{ij} | 1_{{|Σ_{j=1}^{N_i} x_j Z_{ij}|≤λ}} ] P_{ξ_i}(dx).

Let U_i = Σ_{j=1}^{N_i} x_j Z_{ij}. Using (30) and Remark A.2 (Appendix A), we get:

E[|U_i| 1_{{|U_i|≤λ}}] ≤ c_α (1/(1 − α)) λ^{1−α} (s_{i+1} − s_i) Σ_{j=1}^{N_i} |x_j|^α |H_{ij} ∩ B|.

Hence

E[|W_i| 1_{{|W_i|≤λ}}] ≤ c_α (1/(1 − α)) λ^{1−α} (s_{i+1} − s_i) Σ_{j=1}^{N_i} E|ξ_{ij}|^α |H_{ij} ∩ B|.   (33)

From (31), (32) and (33), it follows that:

II ≤ c_α (1/(1 − α)) λ^{−α} E ∫_0^T ∫_B |X(s, x)|^α dx ds.

Case 2. α > 1. We have

II ≤ P( max_{0≤l≤m−1} | Σ_{i=0}^{l} X̃_i | > λ/2 ) + P( max_{0≤l≤m−1} Σ_{i=0}^{l} Ỹ_i > λ/2 ) =: II' + II'',

where X̃_i = W_i 1_{{|W_i|≤λ}} − E[W_i 1_{{|W_i|≤λ}} | F_{s_i}] and Ỹ_i = |E[W_i 1_{{|W_i|≤λ}} | F_{s_i}]|.

We first treat the term II'. Note that {M_l = Σ_{i=0}^{l} X̃_i, F_{s_{l+1}}; 0 ≤ l ≤ m − 1} is a zero-mean square integrable martingale, and

II' = P( max_{0≤l≤m−1} |M_l| > λ/2 ) ≤ (4/λ²) Σ_{i=0}^{m−1} E(X̃_i²) ≤ (4/λ²) Σ_{i=0}^{m−1} E[W_i² 1_{{|W_i|≤λ}}].

Let U_i = Σ_{j=1}^{N_i} x_j Z_{ij}. Using (30) and Remark A.2 (Appendix A), we get:

E[U_i² 1_{{|U_i|≤λ}}] ≤ 2 c_α (1/(2 − α)) λ^{2−α} (s_{i+1} − s_i) Σ_{j=1}^{N_i} |x_j|^α |H_{ij} ∩ B|.


As in Case 1, we obtain that:

E[W_i² 1_{{|W_i|≤λ}}] ≤ c_α (2/(2 − α)) λ^{2−α} (s_{i+1} − s_i) Σ_{j=1}^{N_i} E|ξ_{ij}|^α |H_{ij} ∩ B|,   (34)

and hence

II' ≤ 8 c_α (1/(2 − α)) λ^{−α} E ∫_0^T ∫_B |X(s, x)|^α dx ds.

We now treat II''. Note that {N_l = Σ_{i=0}^{l} Ỹ_i, F_{s_{l+1}}; 0 ≤ l ≤ m − 1} is a submartingale and hence, by the submartingale maximal inequality,

II'' ≤ (2/λ) E(N_{m−1}) = (2/λ) Σ_{i=0}^{m−1} E(Ỹ_i).

To evaluate E(Ỹ_i), we note that for almost all ω,

E[W_i 1_{{|W_i|≤λ}} | F_{s_i}](ω) = E[ Σ_{j=1}^{N_i} ξ_{ij}(ω) Z_{ij} 1_{{|Σ_{j=1}^{N_i} ξ_{ij}(ω) Z_{ij}|≤λ}} ],   (35)

due to the independence between ξ_i and Z_i. We let U_i = Σ_{j=1}^{N_i} x_j Z_{ij} with x_j = ξ_{ij}(ω). Since α > 1, E(U_i) = 0. Using (30) and Remark A.2, we obtain

|E[U_i 1_{{|U_i|≤λ}}]| = |E[U_i 1_{{|U_i|>λ}}]| ≤ E[|U_i| 1_{{|U_i|>λ}}] ≤ c_α α(α − 1)^{−1} λ^{1−α} (s_{i+1} − s_i) Σ_{j=1}^{N_i} |x_j|^α |H_{ij} ∩ B|.

Hence, E(Ỹ_i) ≤ c_α α(α − 1)^{−1} λ^{1−α} (s_{i+1} − s_i) Σ_{j=1}^{N_i} E|ξ_{ij}|^α |H_{ij} ∩ B| and

II'' ≤ 2 c_α α(α − 1)^{−1} λ^{−α} E ∫_0^T ∫_B |X(t, x)|^α dx dt.

Case 3. α = 1. In this case we assume that β = 0. Hence U_i = Σ_{j=1}^{N_i} x_j Z_{ij} has a symmetric distribution for any x ∈ R^{N_i}. Using (35), it follows that E[W_i 1_{{|W_i|≤λ}} | F_{s_i}] = 0 a.s. for all i = 0, . . . , m − 1. Hence, {M_l = Σ_{i=0}^{l} W_i 1_{{|W_i|≤λ}}, F_{s_{l+1}}; 0 ≤ l ≤ m − 1} is a zero-mean square integrable martingale. By the martingale maximal inequality,

II ≤ (1/λ²) E[M_{m−1}²] = (1/λ²) Σ_{i=0}^{m−1} E[W_i² 1_{{|W_i|≤λ}}].


The result follows using (34). □

We now proceed to the construction of the stochastic integral. If Y = {Y(t)}_{t≥0} is a jointly measurable random process, we define:

‖Y‖_{α,T} = sup_{λ>0} λ^α P( sup_{t∈[0,T]} |Y(t)| > λ ).

Let X ∈ L_α be arbitrary. By Proposition 4.2, there exists a sequence (X_n)_{n≥1} of simple processes such that ‖X_n − X‖_α → 0 as n → ∞. Let T > 0 and B ∈ B_b(R^d) be fixed. By linearity of the integral and Theorem 4.3,

‖I(X_n)(·, B) − I(X_m)(·, B)‖_{α,T} ≤ c_α ‖X_n − X_m‖^α_{α,T,B} → 0   (36)

as n, m → ∞. In particular, the sequence {I(X_n)(·, B)}_n is Cauchy in probability in the space D[0, T] equipped with the sup-norm. Therefore, there exists a random element Y(·, B) in D[0, T] such that for any ε > 0,

P( sup_{t∈[0,T]} |I(X_n)(t, B) − Y(t, B)| > ε ) → 0.

Moreover, there exists a subsequence (n_k)_k such that

sup_{t∈[0,T]} |I(X_{n_k})(t, B) − Y(t, B)| → 0   a.s.

as k → ∞. Hence Y(t, B) is F_t-measurable for any t ∈ [0, T]. The process Y(·, B) does not depend on the sequence (X_n)_n and can be extended to a càdlàg process on [0, ∞), which is unique up to indistinguishability. We denote this extension by I(X)(·, B) and we write

I(X)(t, B) = ∫_0^t ∫_B X(s, x) Z(ds, dx).

If A and B are disjoint sets in B_b(R^d), then I(X)(t, A ∪ B) = I(X)(t, A) + I(X)(t, B) a.s.   (37)

Lemma 4.4 Inequality (25) holds for any X ∈ L_α.

Proof: Let (X_n)_n be a sequence of simple processes such that ‖X_n − X‖_α → 0. For fixed B, we denote I(X) = I(X)(·, B). We let ‖·‖ be the sup norm on D[0, T]. For any ε > 0, we have:

P(‖I(X)‖ > λ) ≤ P(‖I(X) − I(X_n)‖ > ελ) + P(‖I(X_n)‖ > (1 − ε)λ).


Multiplying by λ^α, and using Theorem 4.3, we obtain:

sup_{λ>0} λ^α P(‖I(X)‖ > λ) ≤ sup_{λ>0} λ^α P(‖I(X) − I(X_n)‖ > ελ) + (1 − ε)^{−α} c_α ‖X_n‖^α_{α,T,B}.

Let n → ∞. Using (36) one can prove that sup_{λ>0} λ^α P(‖I(X_n) − I(X)‖ > λ) → 0. We obtain that sup_{λ>0} λ^α P(‖I(X)‖ > λ) ≤ (1 − ε)^{−α} c_α ‖X‖^α_{α,T,B}. The conclusion follows letting ε → 0. □

For an arbitrary Borel set O ⊂ R^d (possibly O = R^d), we assume in addition that X ∈ L_α satisfies the condition:

E ∫_0^T ∫_O |X(t, x)|^α dx dt < ∞ for any T > 0.   (38)

Then we can define I(X)(·, O) as follows. Let O_k = O ∩ E_k, where (E_k)_k is an increasing sequence of sets in B_b(R^d) such that ∪_k E_k = R^d. By (37), Lemma 4.4 and (38),

sup_{λ>0} λ^α P( sup_{t≤T} |I(X)(t, O_k) − I(X)(t, O_l)| > λ ) ≤ c_α E ∫_0^T ∫_{O_k\O_l} |X(t, x)|^α dx dt → 0

as k, l → ∞. This shows that {I(X)(·, O_k)}_k is a Cauchy sequence in probability in the space D[0, T] equipped with the sup norm. We denote by I(X)(·, O) its limit. As above, this process can be extended to [0, ∞) and I(X)(t, O) is F_t-measurable for any t > 0. We denote

I(X)(t, O) = ∫_0^t ∫_O X(s, x) Z(ds, dx).

Similarly to Lemma 4.4, one can prove that for any X ∈ L_α satisfying (38),

sup_{λ>0} λ^α P( sup_{t≤T} |I(X)(t, O)| > λ ) ≤ c_α E ∫_0^T ∫_O |X(t, x)|^α dx dt.

5 The truncated noise

For the study of non-linear equations, we need to develop a theory of stochastic integration with respect to another process Z_K which is defined by removing from Z the jumps whose modulus exceeds a fixed value K > 0. More precisely, for any B ∈ B_b(R_+ × R^d), we define

Z_K(B) = ∫_{B×{0<|z|≤K}} z N(ds, dx, dz)   if α < 1,   (39)


Z_K(B) = ∫_{B×{0<|z|≤K}} z Ñ(ds, dx, dz)   if α > 1.   (40)

We treat separately the cases α ≤ 1 and α > 1.

5.1 The case α ≤ 1

Note that {Z_K(B); B ∈ B_b(R_+ × R^d)} is an independently scattered random measure on R_+ × R^d with characteristic function given by:

E(e^{iuZ_K(B)}) = exp{ |B| ∫_{{|z|≤K}} (e^{iuz} − 1) ν(dz) },   u ∈ R.

We first examine the tail of Z_K(B).

Lemma 5.1 For any set B ∈ B_b(R_+ × R^d),

sup_{λ>0} λ^α P(|Z_K(B)| > λ) ≤ r_α |B|,   (41)

where r_α > 0 is a constant depending only on α (given by Lemma A.3).

Proof: This follows from Example 3.7 of [11]. We denote by ν_{α,K} the restriction of ν to {z ∈ R; 0 < |z| ≤ K}. Note that

ν_{α,K}({z ∈ R; |z| > t}) = t^{−α} − K^{−α} if 0 < t ≤ K,   and 0 if t > K,

and hence sup_{t>0} t^α ν_{α,K}({z ∈ R; |z| > t}) ≤ 1. Next we observe that we do not need to assume that the measure ν_{α,K} is symmetric, since we use a modified version of Lemma 2.1 of [10] given by Lemma A.3 (Appendix A). □

In fact, since the tail of ν_{α,K} vanishes if t > K, we can obtain another estimate for the tail of Z_K(B) which, together with (41), will allow us to control its p-th moment for p ∈ (α, 1). This new estimate is given below.

Lemma 5.2 If α < 1, then P(|Z_K(B)| > u) ≤ (α/(1 − α)) K^{1−α} |B| u^{−1} for all u > K. If α = 1, then P(|Z_K(B)| > u) ≤ K |B| u^{−2} for all u > K.


Proof: We use the same idea as in Example 3.7 of [11]. For each k ≥ 1, let Z_{k,K}(B) be a random variable with characteristic function:

E(e^{iuZ_{k,K}(B)}) = exp{ |B| ∫_{{ε_k<|z|≤K}} (e^{iuz} − 1) ν(dz) }.

Since Z_{k,K}(B) converges in distribution to Z_K(B) as k → ∞, it is enough to estimate the tail of Z_{k,K}(B). Denote by ν_k the restriction of ν to {z; ε_k < |z| ≤ K}. Since Z_{k,K}(B) has a compound Poisson distribution,

P(|Z_{k,K}(B)| > u) = Σ_{n≥0} e^{−|B|ν_k(R)} (|B|^n/n!) ν_k^{*n}({z; |z| > u}),   (42)

where ν_k^{*n} denotes the n-fold convolution. Note that

ν_k^{*n}({z; |z| > u}) = [ν_k(R)]^n P( | Σ_{i=1}^n ζ_i | > u ),

where (ζ_i)_{i≥1} are i.i.d. random variables with law ν_k/ν_k(R).

Assume first that α < 1. For the event {|Σ_{i=1}^n ζ_i| > u} we consider the intersection with the event {max_{1≤i≤n} |ζ_i| > u} and its complement. Note that P(|ζ_i| > u) = 0 for any u > K. Using this fact and Markov's inequality, we obtain that for any u > K,

P( | Σ_{i=1}^n ζ_i | > u ) ≤ P( | Σ_{i=1}^n ζ_i 1_{{|ζ_i|≤u}} | > u ) ≤ (1/u) Σ_{i=1}^n E(|ζ_i| 1_{{|ζ_i|≤u}}).

Note that P(|ζ_i| > s) ≤ (s^{−α} − K^{−α})/ν_k(R) if s ≤ K. Hence, for any u > K,

E(|ζ_i| 1_{{|ζ_i|≤u}}) ≤ ∫_0^u P(|ζ_i| > s) ds = ∫_0^K P(|ζ_i| > s) ds ≤ (1/ν_k(R)) (α/(1 − α)) K^{1−α}.

Combining all these facts, we get: for any u > K,

ν_k^{*n}({z; |z| > u}) ≤ [ν_k(R)]^{n−1} (α/(1 − α)) K^{1−α} n u^{−1},

and the conclusion follows from (42).

Assume now that α = 1. In this case, E(ζ_i 1_{{|ζ_i|≤u}}) = 0 since ζ_i has a symmetric distribution. Using Chebyshev's inequality this time, we obtain:

P( | Σ_{i=1}^n ζ_i | > u ) ≤ P( | Σ_{i=1}^n ζ_i 1_{{|ζ_i|≤u}} | > u ) ≤ (1/u²) Σ_{i=1}^n E(ζ_i² 1_{{|ζ_i|≤u}}).


The result follows as above, using the fact that for any u > K,

E(ζ_i² 1_{{|ζ_i|≤u}}) ≤ 2 ∫_0^u s P(|ζ_i| > s) ds = 2 ∫_0^K s P(|ζ_i| > s) ds ≤ (1/ν_k(R)) K. □

Lemma 5.3 If α ≤ 1, then for any p ∈ (α, 1) and B ∈ B_b(R_+ × R^d),

E|Z_K(B)|^p ≤ C_{α,p} K^{p−α} |B|,

where C_{α,p} is a constant depending only on α and p.

Proof: We have E|Z_K(B)|^p = ∫_0^∞ P(|Z_K(B)|^p > t) dt = p ∫_0^∞ P(|Z_K(B)| > u) u^{p−1} du.

We consider separately the integrals for u ≤ K and u > K. For the first integral we use (41):

∫_0^K P(|Z_K(B)| > u) u^{p−1} du ≤ r_α |B| ∫_0^K u^{−α+p−1} du = r_α |B| (1/(p − α)) K^{p−α}.

For the second one we use Lemma 5.2: if α < 1, then

∫_K^∞ P(|Z_K(B)| > u) u^{p−1} du ≤ (α/(1 − α)) K^{1−α} |B| ∫_K^∞ u^{p−2} du = (α/((1 − α)(1 − p))) |B| K^{p−α},

and if α = 1, then

∫_K^∞ P(|Z_K(B)| > u) u^{p−1} du ≤ K |B| ∫_K^∞ u^{p−3} du = |B| (1/(2 − p)) K^{p−1}. □

We now proceed to the construction of the stochastic integral with respect to Z_K. For this, we use the same method as for Z. Note that F_t^{Z_K} ⊂ F_t, where F_t^{Z_K} is the σ-field generated by Z_K([0, s] × A) for all s ∈ [0, t] and


A ∈ B_b(R^d). For any B ∈ B_b(R^d), we will work with a càdlàg modification of the Lévy process {Z_K(t, B) = Z_K([0, t] × B); t ≥ 0}. If X is a simple process given by (23), we define

I_K(X)(t, B) = ∫_0^t ∫_B X(s, x) Z_K(ds, dx)

by the same formula (24) with Z replaced by Z_K. The following result shows that I_K(X)(t, B) has the same tail behavior as I(X)(t, B).

Proposition 5.4 If X is a bounded simple process then

sup_{λ>0} λ^α P( sup_{t∈[0,T]} |I_K(X)(t, B)| > λ ) ≤ d_α E ∫_0^T ∫_B |X(t, x)|^α dx dt,   (43)

for any T > 0 and B ∈ B_b(R^d), where d_α is a constant depending only on α.

Proof: As in the proof of Theorem 4.3, it is enough to prove that

λ^α P( max_{l=0,...,m−1} | Σ_{i=0}^{l} Σ_{j=1}^{N_i} ξ_{ij} Z_{ij} | > λ ) ≤ d_α Σ_{i=0}^{m−1} (s_{i+1} − s_i) Σ_{j=1}^{N_i} E|ξ_{ij}|^α |H_{ij} ∩ B|,

where Z_{ij} = Z_K( (s_i, s_{i+1}] × (H_{ij} ∩ B) ). This reduces to showing that U_i = Σ_{j=1}^{N_i} x_j Z_{ij} satisfies an inequality similar to (30) for any x ∈ R^{N_i}, i.e.

λ^α P(|U_i| > λ) ≤ d_α (s_{i+1} − s_i) Σ_{j=1}^{N_i} |x_j|^α |H_{ij} ∩ B|,   (44)

for any λ > 0, for some d_α > 0. We first examine the tail of Z_{ij}. By (41),

P(|Z_{ij}| > λ) ≤ r_α λ^{−α} (s_{i+1} − s_i) K_{ij},

where K_{ij} = |H_{ij} ∩ B|. Letting η_{ij} = K_{ij}^{−1/α} Z_{ij}, we obtain that for any u > 0,

P(|η_{ij}| > u) ≤ r_α (s_{i+1} − s_i) u^{−α},   j = 1, . . . , N_i.

By Lemma A.3 (Appendix A), it follows that for any λ > 0,

λ^α P( | Σ_{j=1}^{N_i} b_j η_{ij} | > λ ) ≤ r_α² (s_{i+1} − s_i) Σ_{j=1}^{N_i} |b_j|^α,


    for any sequence (bj)j=1,...,Ni of real numbers. Inequality (44) (with d= r

    2)

    follows by applying this to bj =xjK1/

    ij .

In view of the previous result and Proposition 4.2, for any process X ∈ L_α we can construct the integral

I_K(X)(t, B) = ∫_0^t ∫_B X(s, x) Z_K(ds, dx)

in the same manner as I(X)(t, B), and this integral satisfies (43). If in addition the process X ∈ L_α satisfies (38), then we can define the integral I_K(X)(t, O) for an arbitrary Borel set O ⊂ R^d (possibly O = R^d). This integral will satisfy an inequality similar to (43) with B replaced by O.

The appealing feature of I_K(X)(t, B) is that we can control its moments, as shown by the next result.

Theorem 5.5 If α < 1, then for any p ∈ (α, 1), X ∈ L_p, t > 0 and B ∈ B_b(R^d),

E|I_K(X)(t, B)|^p ≤ C_{α,p} K^{p−α} E ∫_0^t ∫_B |X(s, x)|^p dx ds,   (45)

where C_{α,p} is a constant depending on α, p. If O ⊂ R^d is an arbitrary Borel set, and we assume in addition that the process X ∈ L_p satisfies:

E ∫_0^T ∫_O |X(s, x)|^p dx ds < ∞ for any T > 0,   (46)

then inequality (45) holds with B replaced by O.

Proof: Step 1. Suppose that X is an elementary process of the form (22). Then I_K(X)(t, B) = Y Z_K(H), where H = (a ∧ t, b ∧ t] × (A ∩ B). Note that Z_K(H) is independent of F_a. Hence, Z_K(H) is independent of Y. Let P_Y denote the law of Y. By Fubini's theorem,

E|Y Z_K(H)|^p = p ∫_0^∞ P(|Y Z_K(H)| > u) u^{p−1} du = p ∫_R ( ∫_0^∞ P(|y Z_K(H)| > u) u^{p−1} du ) P_Y(dy).


We evaluate the inner integral. We split this integral into two parts, for u ≤ K|y| and u > K|y| respectively. For the first integral, we use (41). For the second one, we use Lemma 5.2. Therefore, the inner integral is bounded by:

r_α |y|^α |H| ∫_0^{K|y|} u^{−α+p−1} du + (α/(1 − α)) |y| K^{1−α} |H| ∫_{K|y|}^∞ u^{p−2} du = C_{α,p} K^{p−α} |y|^p |H|,

and

E|Y Z_K(H)|^p ≤ p C_{α,p} K^{p−α} |H| E|Y|^p = C_{α,p} K^{p−α} E ∫_0^t ∫_B |X(s, x)|^p dx ds.

Step 2. Suppose now that X is a simple process of the form (23). Then X(t, x) = Σ_{i=0}^{N−1} Σ_{j=1}^{m_i} X_{ij}(t, x), where X_{ij}(t, x) = 1_{(t_i,t_{i+1}]}(t) 1_{A_{ij}}(x) Y_{ij}. Using the linearity of the integral, the inequality |a + b|^p ≤ |a|^p + |b|^p, and the result obtained in Step 1 for the elementary processes X_{ij}, we get:

E|I_K(X)(t, B)|^p ≤ E Σ_{i=0}^{N−1} Σ_{j=1}^{m_i} |I_K(X_{ij})(t, B)|^p
≤ C_{α,p} K^{p−α} E Σ_{i=0}^{N−1} Σ_{j=1}^{m_i} ∫_0^t ∫_B |X_{ij}(s, x)|^p dx ds = C_{α,p} K^{p−α} E ∫_0^t ∫_B |X(s, x)|^p dx ds.

Step 3. Let X ∈ L_p be arbitrary. By Proposition 4.2, there exists a sequence (X_n)_n of bounded simple processes such that ‖X_n − X‖_p → 0. Since α < p, it follows that ‖X_n − X‖_α → 0. By the definition of I_K(X)(t, B), there exists a subsequence {n_k}_k such that {I_K(X_{n_k})(t, B)}_k converges to I_K(X)(t, B) a.s. Using Fatou's lemma and the result obtained in Step 2 (for the simple processes X_{n_k}), we get:

E|I_K(X)(t, B)|^p ≤ liminf_k E|I_K(X_{n_k})(t, B)|^p ≤ C_{α,p} K^{p−α} liminf_k E ∫_0^t ∫_B |X_{n_k}(s, x)|^p dx ds = C_{α,p} K^{p−α} E ∫_0^t ∫_B |X(s, x)|^p dx ds.

Step 4. Suppose that X ∈ L_p satisfies (46). Let O_k = O ∩ E_k, where (E_k)_k is an increasing sequence of sets in B_b(R^d) such that ∪_{k≥1} E_k = R^d.


By the definition of I_K(X)(t, O), there exists a subsequence (k_i)_i such that {I_K(X)(t, O_{k_i})}_i converges to I_K(X)(t, O) a.s. Using Fatou's lemma, the result obtained in Step 3 (for B = O_{k_i}) and the monotone convergence theorem, we get:

E|I_K(X)(t, O)|^p ≤ liminf_i E|I_K(X)(t, O_{k_i})|^p ≤ C_{α,p} K^{p−α} liminf_i E ∫_0^t ∫_{O_{k_i}} |X(s, x)|^p dx ds = C_{α,p} K^{p−α} E ∫_0^t ∫_O |X(s, x)|^p dx ds. □

Remark 5.6 Finding a similar moment inequality for the case α = 1 and p ∈ (1, 2) remains an open problem. The argument used in Step 2 above relies on the fact that p ≤ 1.

5.2 The case α > 1

In this case, the construction of the integral with respect to Z_K relies on an integral with respect to Ñ which exists in the literature. We recall briefly the definition of this integral. For more details, see Section 1.2.2 of [26], Section 24.2 of [28] or Section 8.7 of [22].

Let E = R^d × (R\{0}) endowed with the measure dx ν(dz), and let B_b(E) be the class of bounded Borel sets in E. For a simple process Y = {Y(t, x, z); t ≥ 0, (x, z) ∈ E}, the integral I_Ñ(Y)(t, B) is defined in the usual way, for any t > 0, B ∈ B_b(E). The process I_Ñ(Y)(·, B) is a (càdlàg) zero-mean square-integrable martingale with quadratic variation

[I_Ñ(Y)(·, B)]_t = ∫_0^t ∫_B |Y(s, x, z)|² N(ds, dx, dz)

and predictable quadratic variation

⟨I_Ñ(Y)(·, B)⟩_t = ∫_0^t ∫_B |Y(s, x, z)|² ν(dz) dx ds.


By approximation, this integral can be extended to the class of all P × B(R^d) × B(R\{0})-measurable processes Y such that, for any T > 0 and B ∈ B_b(E),

‖Y‖²_{2,T,B} := E ∫_0^T ∫_B |Y(s, x, z)|² ν(dz) dx ds < ∞.


where C_{α,p} = C_p/(p − α). If in addition the process X ∈ L_p satisfies (46), then (49) holds with B replaced by O, for an arbitrary Borel set O ⊂ R^d. Note that (49) is the counterpart of (45) for the case α > 1. Together, these two inequalities will play a crucial role in Section 6.

The following table summarizes all the conditions:

                      | α ≤ 1                          | α > 1
  B is bounded        | X ∈ L_α                        | X ∈ L_p for some p ∈ (α, 2]
  B = O is unbounded  | X ∈ L_α and X satisfies (38)   | X ∈ L_p and X satisfies (46), for some p ∈ (α, 2]

Table 1: Conditions for I_K(X)(t, B) to be well-defined

6 The main result

In this section, we state and prove the main result regarding the existence of a mild solution of equation (1). For this result, O is a bounded domain in R^d. For any t > 0, we denote

J_p(t) = sup_{x∈O} ∫_O G(t, x, y)^p dy.

Theorem 6.1 Let α ∈ (0, 2), α ≠ 1. Assume that for any T > 0,

lim_{h→0} ∫_0^T ∫_O |G(t, x, y) − G(t + h, x, y)|^p dy dt = 0   for all x ∈ O,   (50)
lim_{|h|→0} ∫_0^T ∫_O |G(t, x, y) − G(t, x + h, y)|^p dy dt = 0   for all x ∈ O,   (51)
∫_0^T J_p(t) dt < ∞,   (52)

for some p ∈ (α, 1) if α < 1, respectively p ∈ (α, 2] if α > 1. Then equation (1) has a mild solution u. Moreover, there exists a sequence (τ_K)_{K≥1} of stopping times with τ_K → ∞ a.s. such that, for any T > 0 and K ≥ 1,

sup_{(t,x)∈[0,T]×O} E(|u(t, x)|^p 1_{{t≤τ_K}}) < ∞.


Example 6.2 (Heat equation) Let L = ∂/∂t − (1/2)Δ. Then G(t, x, y) = G(t, x − y), where G(t, x) is the fundamental solution of Lu = 0 on R^d. Condition (52) holds if p < 1 + 2/d. If α < 1, this condition holds for any p ∈ (α, 1). If α > 1, this condition holds for any p ∈ (α, 1 + 2/d], as long as α satisfies (5). Conditions (50) and (51) hold by the continuity of the function G in t and x, by applying the dominated convergence theorem. To justify the application of this theorem, we use the trivial bound (2πt)^{−dp/2} for both G(t + h, x, y)^p and G(t, x + h, y)^p, which introduces the extra condition dp < 2.
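For completeness, the Gaussian computation behind the threshold p < 1 + 2/d in Example 6.2 (with G(t, x) = (2πt)^{−d/2} e^{−|x|²/(2t)}) is

\[
\int_{\mathbb{R}^d} G(t,x)^p\,dx = (2\pi t)^{-dp/2}\int_{\mathbb{R}^d} e^{-p|x|^2/(2t)}\,dx
= (2\pi t)^{-dp/2}\Big(\frac{2\pi t}{p}\Big)^{d/2}
= p^{-d/2}\,(2\pi t)^{-d(p-1)/2},
\]

so J_p(t) ≤ p^{−d/2} (2πt)^{−d(p−1)/2}, which is integrable on (0, T] precisely when d(p − 1)/2 < 1, i.e. p < 1 + 2/d.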


The proof of Theorem 6.1 proceeds by first solving the equation with truncated noise Z_K, yielding a solution u_K, and then showing that for any t > 0, x ∈ O and L > K, u_K(t, x) = u_L(t, x) a.s. on the event {t ≤ τ_K}. The final step is to show that the process u defined by u(t, x) = u_K(t, x) on {t ≤ τ_K} is a mild solution of (1). A similar method can be found in Section 9.7 of [22], using an approach based on stochastic integration of operator-valued processes with respect to Hilbert-space-valued processes, which is different from our approach.

Since σ is a Lipschitz function, there exists a constant C_σ > 0 such that:

|σ(u) − σ(v)| ≤ C_σ |u − v|,   for all u, v ∈ R.   (53)

In particular, letting D_σ = C_σ ∨ |σ(0)|, we have:

|σ(u)| ≤ D_σ (1 + |u|),   for all u ∈ R.   (54)

For the proof of Theorem 6.1, we need a specific construction of the Poisson random measure N, taken from [21]. We review briefly this construction. Let (O_k)_{k≥1} be a partition of R^d with sets in B_b(R^d) and (U_j)_{j≥1} be a partition of R\{0} such that ν(U_j) < ∞ for all j. For any j, k ≥ 1, consider independent random variables (E_i^{j,k})_{i≥1}, (X_i^{j,k})_{i≥1}, (Z_i^{j,k})_{i≥1} with

P(E_i^{j,k} > t) = e^{−λ_{j,k} t},   P(X_i^{j,k} ∈ B) = |B ∩ O_k|/|O_k|,   P(Z_i^{j,k} ∈ Γ) = ν(Γ ∩ U_j)/ν(U_j),

where λ_{j,k} = |O_k| ν(U_j). Let T_i^{j,k} = Σ_{l=1}^{i} E_l^{j,k} for all i ≥ 1. Then

N = Σ_{i,j,k≥1} δ_{(T_i^{j,k}, X_i^{j,k}, Z_i^{j,k})}   (55)

is a Poisson random measure on R_+ × R^d × (R\{0}) with intensity dt dx ν(dz).

This section is organized as follows. In Section 6.1 we prove the existence of the solution of the equation with truncated noise Z_K. Sections 6.2 and 6.3 contain the proof of Theorem 6.1 when α < 1, respectively α > 1.

6.1 The equation with truncated noise

In this section, we fix K > 0 and we consider the equation:

Lu(t, x) = σ(u(t, x)) Ż_K(t, x),   t > 0, x ∈ O   (56)

with zero initial conditions and Dirichlet boundary conditions. A mild solution of (56) is a predictable process u which satisfies (2) with Z replaced by Z_K. For the next result, O can be a bounded domain in R^d or O = R^d (with no boundary conditions).


Theorem 6.5 Under the assumptions of Theorem 6.1, equation (56) has a unique mild solution u = {u(t, x); t ≥ 0, x ∈ O}. For any T > 0,

sup_{(t,x)∈[0,T]×O} E|u(t, x)|^p < ∞.   (57)

Proof: We use a Picard iteration scheme. Let u_0(t, x) = 0 and, for n ≥ 0, define

u_{n+1}(t, x) = ∫_0^t ∫_O G(t − s, x, y) σ(u_n(s, y)) Z_K(ds, dy).

We prove by induction on n that: (i) u_{n+1}(t, x) is well-defined; (ii) K_n(t) := sup_{(s,x)∈[0,t]×O} E|u_n(s, x)|^p < ∞ for any t > 0; (iii) u_n(t, x) is F_t-measurable for any t > 0 and x ∈ O; (iv) the map (t, x) → u_n(t, x) is continuous from [0, T] × O into L^p(Ω) for any T > 0.

The statement is trivial for n = 0. For the induction step, assume that the statement is true for n. By an extension to random fields of Theorem 30, Chapter IV of [8], u_n has a jointly measurable modification. Since this modification is (F_t)_t-adapted (in the sense of (iii)), it has a predictable modification (using an extension of Proposition 3.21 of [22] to random fields). We work with this modification, which we also call u_n.

We prove that (i)-(iv) hold for u_{n+1}. To show (i), it suffices to prove that X_n ∈ L_p, where X_n(s, y) = 1_{[0,t]}(s) G(t − s, x, y) σ(u_n(s, y)). By (54) and (52),

E ∫_0^t ∫_O |X_n(s, y)|^p dy ds ≤ D_σ^p 2^{p−1} (1 + K_n(t)) ∫_0^t J_p(t − s) ds < ∞.


To prove (iv), we first show the right continuity in t. Let h > 0. Writing the interval [0, t + h] as the union of [0, t] and (t, t + h], we obtain that E|u_{n+1}(t + h, x) − u_{n+1}(t, x)|^p ≤ 2^{p−1} (I_1(h) + I_2(h)), where

I_1(h) = E | ∫_0^t ∫_O (G(t + h − s, x, y) − G(t − s, x, y)) σ(u_n(s, y)) Z_K(ds, dy) |^p,
I_2(h) = E | ∫_t^{t+h} ∫_O G(t + h − s, x, y) σ(u_n(s, y)) Z_K(ds, dy) |^p.

Using again (54) and the moment inequality (45) (or (49)), we obtain:

I_1(h) ≤ D_σ^p 2^{p−1} (1 + K_n(t)) ∫_0^t ∫_O |G(s + h, x, y) − G(s, x, y)|^p dy ds,
I_2(h) ≤ D_σ^p 2^{p−1} (1 + K_n(t)) ∫_0^h ∫_O G(s, x, y)^p dy ds.

It follows that both I_1(h) and I_2(h) converge to 0 as h → 0, using (50) for I_1(h), respectively the dominated convergence theorem and (52) for I_2(h). The left continuity in t is similar, by writing the interval [0, t − h] as the difference between [0, t] and (t − h, t] for h > 0. For the continuity in x, similarly as above, we see that E|u_{n+1}(t, x + h) − u_{n+1}(t, x)|^p is bounded by:

D_σ^p 2^{p−1} (1 + K_n(t)) ∫_0^t ∫_O |G(s, x + h, y) − G(s, x, y)|^p dy ds,

which converges to 0 as |h| → 0 due to (51). This finishes the proof of (iv).

We denote M_n(t) = sup_{x∈O} E|u_n(t, x)|^p. Similarly to (58), we have:

M_n(t) ≤ C_1 ∫_0^t (1 + M_{n−1}(s)) J_p(t − s) ds,   n ≥ 1,

where C_1 = C_{α,p} K^{p−α} D_σ^p 2^{p−1}. By applying Lemma 15 of the Erratum to [6] with f_n = M_n, k_1 = 0, k_2 = 1 and g(s) = C_1 J_p(s), we obtain that:

sup_{n≥0} sup_{t∈[0,T]} M_n(t) < ∞.   (59)

We now prove that {u_n(t, x)}_n converges in L^p(Ω), uniformly in (t, x) ∈ [0, T] × O. To see this, let U_n(t) = sup_{x∈O} E|u_{n+1}(t, x) − u_n(t, x)|^p for n ≥ 0. Using the moment inequality (45) (or (49)) and (53), we have:

U_n(t) ≤ C_2 ∫_0^t U_{n−1}(s) J_p(t − s) ds,


where C_2 = C_{α,p} K^{p−α} C_σ^p. By Lemma 15 of the Erratum to [6], Σ_{n≥0} U_n(t)^{1/p} converges uniformly on [0, T]. (Note that this lemma is valid for all p > 0.) We denote by u(t, x) the limit of u_n(t, x) in L^p(Ω). One can show that u satisfies properties (ii)-(iv) listed above. So u has a predictable modification. This modification is a solution of (56). To prove uniqueness, let v be another solution and denote H(t) = sup_{x∈O} E|u(t, x) − v(t, x)|^p. Then

H(t) ≤ C_2 ∫_0^t H(s) J_p(t − s) ds.

Using (52), it follows that H(t) = 0 for all t > 0. □
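As a purely numerical illustration of the Picard scheme used in the proof of Theorem 6.5 (this sketch is not from the paper): the code below iterates the discretised mild equation for L = ∂_t − ½∂²_x on O = R (restricted to a finite window), α < 1, with jumps truncated to ε ≤ |z| ≤ K so that the noise can be sampled. The grid, the window, the small-jump cutoff ε and the choice of σ are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, p_jump, K, eps = 0.7, 0.5, 5.0, 1e-2     # alpha < 1, q = 1 - p_jump
sigma = lambda u: 1.0 + 0.5 * np.sin(u)          # an illustrative Lipschitz sigma
T, Lx, nt, nx = 0.5, 3.0, 50, 121
tgrid = np.linspace(0.0, T, nt + 1)
xgrid = np.linspace(-Lx, Lx, nx)
dt, dx = tgrid[1] - tgrid[0], xgrid[1] - xgrid[0]

def heat_kernel(t, x):                           # fundamental solution of d/dt - (1/2) d^2/dx^2
    return np.exp(-x ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

# One realisation of Z_K over grid cells: Poisson number of jumps with
# eps <= |z| <= K per cell (intensity = cell volume * nu({eps <= |z| <= K})),
# magnitudes drawn from nu restricted to [eps, K] by inverse transform.
lam = dt * dx * (eps ** (-alpha) - K ** (-alpha))
dZ = np.zeros((nt, nx))
for k in range(nt):
    for j in range(nx):
        n = rng.poisson(lam)
        if n == 0:
            continue
        u01 = rng.random(n)
        mags = (eps ** (-alpha) - u01 * (eps ** (-alpha) - K ** (-alpha))) ** (-1.0 / alpha)
        signs = np.where(rng.random(n) < p_jump, 1.0, -1.0)
        dZ[k, j] = np.sum(signs * mags)

def picard_step(u):
    """One Picard iteration of the discretised mild equation."""
    new = np.zeros_like(u)
    for i in range(1, nt + 1):
        for k in range(i):                       # noise cells with s in (t_k, t_{k+1}]
            w = heat_kernel(tgrid[i] - tgrid[k], xgrid[:, None] - xgrid[None, :])
            new[i] += w @ (sigma(u[k]) * dZ[k])  # sum_j G(t_i - t_k, x - x_j) sigma(u) dZ
    return new

u = np.zeros((nt + 1, nx))
for n in range(6):
    u_next = picard_step(u)
    print(f"iteration {n}: sup change = {np.max(np.abs(u_next - u)):.3e}")
    u = u_next
```

The printed sup-norm differences shrink rapidly, mirroring the contraction estimate U_n(t) ≤ C_2 ∫_0^t U_{n−1}(s) J_p(t − s) ds from the proof.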

6.2 Proof of Theorem 6.1: case α < 1

For any t > 0 and B ∈ B_b(R^d), we have (see (11)):

Z(t, B) = ∫_{[0,t]×B×(R\{0})} z N(ds, dx, dz).

The characteristic function of Z(t, B) is given by:

E(e^{iuZ(t,B)}) = exp{ t|B| ∫_{R\{0}} (e^{iuz} − 1) ν(dz) },   u ∈ R.

Note that {Z(t, B)}_{t≥0} is not a compound Poisson process since ν is infinite.

We introduce the stopping times (τ_K)_{K≥1}, as on page 239 of [21]:

τ_K(B) = inf{t > 0; |Z(t, B) − Z(t−, B)| > K},

where Z(t−, B) = lim_{s↑t} Z(s, B). Clearly, τ_L(B) ≥ τ_K(B) for all L > K.

We first investigate the relationship between Z and Z_K and the properties of τ_K(B). Using construction (55) of N and definition (39) of Z_K, we have:

Z(t, B) = Σ_{i,j,k≥1} Z_i^{j,k} 1_{{T_i^{j,k}≤t}} 1_{{X_i^{j,k}∈B}} =: Σ_{j,k≥1} Z^{j,k}(t, B),
Z_K(t, B) = Σ_{i,j,k≥1} Z_i^{j,k} 1_{{|Z_i^{j,k}|≤K}} 1_{{T_i^{j,k}≤t}} 1_{{X_i^{j,k}∈B}}.

We observe that {Z^{j,k}(t, B)}_{t≥0} is a compound Poisson process with

E(e^{iuZ^{j,k}(t,B)}) = exp{ t|O_k ∩ B| ∫_{U_j} (e^{iuz} − 1) ν(dz) },   u ∈ R.


Note that τ_K(B) > T means that all the jumps of {Z(t, B)}_{t≥0} in [0, T] are smaller than K in modulus, i.e. {τ_K(B) > T} = {ω; |Z_i^{j,k}(ω)| ≤ K for all i, j, k ≥ 1 for which T_i^{j,k}(ω) ≤ T and X_i^{j,k}(ω) ∈ B}. Hence, on {τ_K(B) > T},

Z([0, t] × A) = Z_K([0, t] × A) = Z_L([0, t] × A),

for any L > K, t ∈ [0, T], A ∈ B_b(R^d) with A ⊂ B. Using an approximation argument and the construction of the integrals I(X) and I_K(X), it follows that for any X ∈ L_α and for any L > K, a.s. on {τ_K(B) > T}, we have:

I(X)(T, B) = I_K(X)(T, B) = I_L(X)(T, B).   (60)

The next result gives the probability of the event {τ_K(B) > T}.

Lemma 6.6 For any T > 0 and B ∈ B_b(R^d),

P(τ_K(B) > T) = exp(−T|B| K^{−α}).

Consequently, lim_{K→∞} P(τ_K(B) > T) = 1 and lim_{K→∞} τ_K(B) = ∞ a.s.

Proof: Note that {τ_K(B) > T} = ∩_{j,k≥1} {τ_K^{j,k}(B) > T}, where

τ_K^{j,k}(B) = inf{t > 0; |Z^{j,k}(t, B) − Z^{j,k}(t−, B)| > K}.

Since ν({z; |z| > K}) = K^{−α} and the (τ_K^{j,k}(B))_{j,k≥1} are independent, it is enough to prove that for any j, k ≥ 1,

P(τ_K^{j,k}(B) > T) = exp{ −T |B ∩ O_k| ν({z; |z| > K} ∩ U_j) }.   (61)

Note that {τ_K^{j,k}(B) > T} = {ω; |Z_i^{j,k}(ω)| ≤ K for all i for which T_i^{j,k} ≤ T and X_i^{j,k} ∈ B}, and (T_n^{j,k})_{n≥1} are the jump times of a Poisson process with intensity λ_{j,k}. Hence

P(τ_K^{j,k}(B) > T) = Σ_{n≥0} Σ_{m=0}^{n} Σ_{I⊂{1,...,n}, card(I)=m} P(T_n^{j,k} ≤ T < T_{n+1}^{j,k}) P(∩_{i∈I}{X_i^{j,k} ∈ B}) P(∩_{i∈I}{|Z_i^{j,k}| ≤ K}) P(∩_{i∈I^c}{X_i^{j,k} ∉ B})
= Σ_{n≥0} e^{−λ_{j,k}T} ((λ_{j,k}T)^n/n!) [ 1 − P(X_1^{j,k} ∈ B) P(|Z_1^{j,k}| > K) ]^n
= exp{ −λ_{j,k} T P(X_1^{j,k} ∈ B) P(|Z_1^{j,k}| > K) },


which yields (61).

To prove the last statement, let A_K^{(n)} = {τ_K(B) > n}. Then P(lim_K A_K^{(n)}) ≥ lim_K P(A_K^{(n)}) = 1 for any n ≥ 1, and hence P(∩_{n≥1} lim_K A_K^{(n)}) = 1. Hence, with probability 1, for any n there exists some K_n such that τ_{K_n} > n. Since (τ_K)_K is non-decreasing, this proves that τ_K → ∞ with probability 1. □

Remark 6.7 The construction of τ_K(B) given above is due to [21] (in the case of a symmetric measure ν). This construction relies on the fact that B is a bounded set. Since Z(t, R^d) (and consequently τ_K(R^d)) is not well-defined, I could not see why this construction can also be used when B = R^d, as it is claimed in [21]. To avoid this difficulty, one could try to use an increasing sequence (E_n)_n of sets in B_b(R^d) with ∪_n E_n = R^d. Using (60) with B = E_n and letting n → ∞, we obtain that I(X)(t, R^d) = I_K(X)(t, R^d) a.s. on {t ≤ τ_K}, where τ_K = inf_{n≥1} τ_K(E_n). But P(τ_K > t) ≤ P(lim_n {τ_K(E_n) > t}) ≤ lim_n P(τ_K(E_n) > t) = lim_n exp(−t|E_n|K^{−α}) = 0 for any t > 0, which means that τ_K = 0 a.s. Finding a suitable sequence (τ_K)_K of stopping times which could be used in the case O = R^d remains an open problem.

In what follows, we denote τ_K = τ_K(O). Let u_K be the solution of equation (56), whose existence is guaranteed by Theorem 6.5.

Lemma 6.8 Under the assumptions of Theorem 6.1, for any t > 0, x ∈ O and L > K, u_K(t, x) = u_L(t, x) a.s. on {t ≤ τ_K}.

Proof: By the definition of u_L and (60),

u_L(t, x) = ∫_0^t ∫_O G(t − s, x, y) σ(u_L(s, y)) Z_L(ds, dy) = ∫_0^t ∫_O G(t − s, x, y) σ(u_L(s, y)) Z_K(ds, dy)

a.s. on the event {t ≤ τ_K}. Using the definition of u_K and Proposition C.1 (Appendix C), we obtain that, with probability 1,

(u_K(t, x) − u_L(t, x)) 1_{{t≤τ_K}} = 1_{{t≤τ_K}} ∫_0^t ∫_O G(t − s, x, y) (σ(u_K(s, y)) − σ(u_L(s, y))) 1_{{s≤τ_K}} Z_K(ds, dy).


Let M(t) = sup_{x∈O} E(|u_K(t, x) − u_L(t, x)|^p 1_{{t≤τ_K}}). Using the moment inequality (45) and the Lipschitz condition (53), we get:

M(t) ≤ C ∫_0^t J_p(t − s) M(s) ds,

where C = C_{α,p} K^{p−α} C_σ^p. Using (52), it follows that M(t) = 0 for all t > 0. □

For any t > 0, x ∈ O, let Ω⁰_{t,x} be the event that u_K(t, x) = u_L(t, x) for all positive integers L > K with t ≤ τ_K, and let Ω_{t,x} = Ω⁰_{t,x} ∩ {lim_{K→∞} τ_K = ∞}. By Lemma 6.6 and Lemma 6.8, P(Ω_{t,x}) = 1.

The next result concludes the proof of Theorem 6.1.

Proposition 6.9 Under the assumptions of Theorem 6.1, the process u = {u(t, x); t ≥ 0, x ∈ O} defined by:

u(ω, t, x) = u_K(ω, t, x) if ω ∈ Ω_{t,x} and t ≤ τ_K(ω);   u(ω, t, x) = 0 if ω ∉ Ω_{t,x}

is a mild solution of equation (1).

Proof: We first prove that u is predictable. Note that

u(t, x) = lim_{K→∞} ( u_K(t, x) 1_{{t≤τ_K}} ) 1_{Ω_{t,x}}.

The process X(ω, t, x) = 1_{{t≤τ_K}}(ω) is clearly predictable, being in the class C defined in Remark 4.1. By the definition of Ω_{t,x}, since u_K, u_L are predictable, it follows that (ω, t, x) → 1_{Ω_{t,x}}(ω) is P-measurable. Hence, u is predictable.

We now prove that u satisfies (2). Let t > 0 and x ∈ O be arbitrary. Using (60) and Proposition C.1 (Appendix C), with probability 1, we have:

1_{{t≤τ_K}} u(t, x) = 1_{{t≤τ_K}} u_K(t, x)
= 1_{{t≤τ_K}} ∫_0^t ∫_O G(t − s, x, y) σ(u_K(s, y)) Z_K(ds, dy)
= 1_{{t≤τ_K}} ∫_0^t ∫_O G(t − s, x, y) σ(u_K(s, y)) Z(ds, dy)
= 1_{{t≤τ_K}} ∫_0^t ∫_O G(t − s, x, y) σ(u_K(s, y)) 1_{{s≤τ_K}} Z(ds, dy)


= 1_{{t≤τ_K}} ∫_0^t ∫_O G(t − s, x, y) σ(u(s, y)) 1_{{s≤τ_K}} Z(ds, dy)
= 1_{{t≤τ_K}} ∫_0^t ∫_O G(t − s, x, y) σ(u(s, y)) Z(ds, dy).

For the second last equality, we used the fact that the processes X(s, y) = 1_{[0,t]}(s) G(t − s, x, y) σ(u_K(s, y)) 1_{{s≤τ_K}} and Y(s, y) = 1_{[0,t]}(s) G(t − s, x, y) σ(u(s, y)) 1_{{s≤τ_K}} are modifications of each other (i.e. X(s, y) = Y(s, y) a.s. for all s > 0, y ∈ O), and hence ‖X − Y‖_{α,t,O} = 0 and I(X)(t, O) = I(Y)(t, O) a.s. The conclusion follows letting K → ∞, since τ_K → ∞ a.s. □

6.3 Proof of Theorem 6.1: case α > 1

In this case, for any t > 0 and B ∈ B_b(R^d), we have (see (12)):

Z(t, B) = ∫_{[0,t]×B×(R\{0})} z Ñ(ds, dx, dz).

To introduce the stopping times (τ_K)_{K≥1} we use the same idea as in Section 9.7 of [22].

Let M(t, B) = Σ_{j≥1} (L_j(t, B) − E L_j(t, B)) and P(t, B) = L_0(t, B), where L_j(t, B) = L_j([0, t] × B) was defined in Section 2. Note that {M(t, B)}_{t≥0} is a zero-mean square-integrable martingale and {P(t, B)}_{t≥0} is a compound Poisson process with E[P(t, B)] = t|B|μ, where μ = ∫_{|z|>1} z ν(dz) = βα(α − 1)^{−1}. With this notation,

Z(t, B) = M(t, B) + P(t, B) − t|B|μ.   (62)

We let M_K(t, B) = P_K(t, B) − E[P_K(t, B)] = P_K(t, B) − t|B|μ_K, where

P_K(t, B) = ∫_{[0,t]×B×(R\{0})} z 1_{{1<|z|≤K}} N(ds, dx, dz)   and   μ_K = ∫_{1<|z|≤K} z ν(dz),

so that

Z_K(t, B) = M(t, B) + M_K(t, B).   (63)

We define

τ_K(B) = inf{t > 0; |P(t, B) − P(t−, B)| > K},


where P(t−, B) = lim_{s↑t} P(s, B). Lemma 6.6 holds again, but its proof is simpler than in the case α < 1, since

P(t, B) = Σ_{i,j,k≥1} Z_i^{j,k} 1_{{|Z_i^{j,k}|>1}} 1_{{T_i^{j,k}≤t}} 1_{{X_i^{j,k}∈B}},
P_K(t, B) = Σ_{i,j,k≥1} Z_i^{j,k} 1_{{1<|Z_i^{j,k}|≤K}} 1_{{T_i^{j,k}≤t}} 1_{{X_i^{j,k}∈B}}.

Note that on {τ_K(B) > T}, for any L > K, t ∈ [0, T], A ∈ B_b(R^d) with A ⊂ B,

P([0, t] × A) = P_K([0, t] × A) = P_L([0, t] × A).

Let b_K = μ − μ_K = ∫_{|z|>K} z ν(dz). Using (62) and (63), it follows that:

Z([0, t] × A) = Z_K([0, t] × A) − t|A| b_K = Z_L([0, t] × A) − t|A| b_L

for any L > K, t ∈ [0, T], A ∈ B_b(R^d) with A ⊂ B. Let p ∈ (α, 2] be fixed. Using an approximation argument and the construction of the integrals I(X) and I_K(X), it follows that for any X ∈ L_p and for any L > K, a.s. on {τ_K(B) > T}, we have:

I(X)(T, B) = I_K(X)(T, B) − b_K ∫_0^T ∫_B X(s, y) dy ds   (64)
= I_L(X)(T, B) − b_L ∫_0^T ∫_B X(s, y) dy ds.

We denote τ_K = τ_K(O). We consider the following equation:

Lu(t, x) = σ(u(t, x)) Ż_K(t, x) − b_K σ(u(t, x)),   t > 0, x ∈ O   (65)

with zero initial conditions and Dirichlet boundary conditions. A mild solution of (65) is a predictable process u which satisfies:

u(t, x) = ∫_0^t ∫_O G(t − s, x, y) σ(u(s, y)) Z_K(ds, dy) − b_K ∫_0^t ∫_O G(t − s, x, y) σ(u(s, y)) dy ds   a.s.

for any t > 0, x ∈ O. The existence and uniqueness of a mild solution of (65) can be proved similarly to Theorem 6.5. We omit these details. We denote this solution by v_K.


Lemma 6.10 Under the assumptions of Theorem 6.1, for any t > 0, x ∈ O and L > K, v_K(t, x) = v_L(t, x) a.s. on {t ≤ τ_K}.

Proof: By the definition of v_L and (64), a.s. on the event {t ≤ τ_K}, v_L(t, x) is equal to

∫_0^t ∫_O G(t − s, x, y) σ(v_L(s, y)) Z_L(ds, dy) − b_L ∫_0^t ∫_O G(t − s, x, y) σ(v_L(s, y)) dy ds
= ∫_0^t ∫_O G(t − s, x, y) σ(v_L(s, y)) Z_K(ds, dy) − b_K ∫_0^t ∫_O G(t − s, x, y) σ(v_L(s, y)) dy ds.

Using the definition of v_K and Proposition C.1 (Appendix C), we obtain that, with probability 1,

(v_K(t, x) − v_L(t, x)) 1_{{t≤τ_K}} = 1_{{t≤τ_K}} [ ∫_0^t ∫_O G(t − s, x, y) (σ(v_K(s, y)) − σ(v_L(s, y))) 1_{{s≤τ_K}} Z_K(ds, dy) − b_K ∫_0^t ∫_O G(t − s, x, y) (σ(v_K(s, y)) − σ(v_L(s, y))) 1_{{s≤τ_K}} dy ds ].

Letting M(t) = sup_{x∈O} E(|v_K(t, x) − v_L(t, x)|^p 1_{{t≤τ_K}}), we see that M(t) ≤ 2^{p−1} sup_{x∈O} (E|A(t, x)|^p + E|B(t, x)|^p), where

A(t, x) = ∫_0^t ∫_O G(t − s, x, y) (σ(v_K(s, y)) − σ(v_L(s, y))) 1_{{s≤τ_K}} Z_K(ds, dy),
B(t, x) = b_K ∫_0^t ∫_O G(t − s, x, y) (σ(v_K(s, y)) − σ(v_L(s, y))) 1_{{s≤τ_K}} dy ds.

We estimate separately the two terms. For the first term, we use the moment inequality (49) and the Lipschitz condition (53). We get:

sup_{x∈O} E|A(t, x)|^p ≤ C ∫_0^t J_p(t − s) M(s) ds,   (66)

where C = C_{α,p} K^{p−α} C_σ^p. For the second term, we use Hölder's inequality |∫ fg dμ| ≤ (∫ |f|^p dμ)^{1/p} (∫ |g|^q dμ)^{1/q} with f(s, y) = G(t − s, x, y)^{1/p} (σ(v_K(s, y)) − σ(v_L(s, y))) 1_{{s≤τ_K}} and g(s, y) = G(t − s, x, y)^{1/q}, where p^{−1} + q^{−1} = 1. Hence,

|B(t, x)|^p ≤ |b_K|^p C_σ^p K_t^{p/q} ∫_0^t ∫_O G(t − s, x, y) |v_K(s, y) − v_L(s, y)|^p 1_{{s≤τ_K}} dy ds,


where K_t = ∫_0^t J_1(s) ds < ∞, since J_1(s) ≤ |O|^{1/q} J_p(s)^{1/p} and ∫_0^t J_p(s)^{1/p} ds ≤ c_t (∫_0^t J_p(s) ds)^{1/p} < ∞ by Hölder's inequality. Hence sup_{x∈O} E|B(t, x)|^p ≤ |b_K|^p C_σ^p K_t^{p/q} ∫_0^t J_1(t − s) M(s) ds. Combining this bound with (66) and using (52), it follows as before that M(t) = 0 for all t > 0. □

For any t > 0 and x ∈ O, let Ω⁰_{t,x} be the event that v_K(t, x) = v_L(t, x) for all positive integers L > K with t ≤ τ_K, and let Ω_{t,x} = Ω⁰_{t,x} ∩ {lim_{K→∞} τ_K = ∞}. By Lemma 6.10, P(Ω_{t,x}) = 1.

Proposition 6.11 Under the assumptions of Theorem 6.1, the process u = {u(t, x); t ≥ 0, x ∈ O} defined by:

u(ω, t, x) = v_K(ω, t, x) if ω ∈ Ω_{t,x} and t ≤ τ_K(ω);   u(ω, t, x) = 0 if ω ∉ Ω_{t,x}

is a mild solution of equation (1).

Proof: We proceed as in the proof of Proposition 6.9. In this case, with probability 1, we have:

1_{{t≤τ_K}} u(t, x) = 1_{{t≤τ_K}} v_K(t, x)
= 1_{{t≤τ_K}} [ ∫_0^t ∫_O G(t − s, x, y) σ(u(s, y)) Z_K(ds, dy) − b_K ∫_0^t ∫_O G(t − s, x, y) σ(u(s, y)) dy ds ]
= 1_{{t≤τ_K}} ∫_0^t ∫_O G(t − s, x, y) σ(u(s, y)) Z(ds, dy),

using (64) for the last equality. The conclusion follows letting K → ∞, since τ_K → ∞ a.s. and b_K → 0. □

    A Some auxiliary results

    The following result is used in the proof of Theorem 4.3.


Lemma A.1 If X has a S_α(σ, β, 0) distribution then

P(|X| > λ) ≤ c_α σ^α λ^{−α}   for all λ > 0,

where c_α > 0 is a constant depending only on α.

Proof: Step 1. We first prove the result for σ = 1. We treat only the right tail, the left tail being similar. We denote X by X_β to emphasize the dependence on β. By Property 1.2.15 of [27],

lim_{λ→∞} λ^α P(X_β > λ) = C_α (1 + β)/2,

where C_α = (∫_0^∞ x^{−α} sin x dx)^{−1}. We use the fact that for any β ∈ [−1, 1], P(X_β > λ) ≤ P(X_1 > λ) for all λ > λ_0, for some λ_0 > 0 (see Property 1.2.14 of [27] or Section 1.5 of [19]). Since lim_{λ→∞} λ^α P(X_1 > λ) = C_α, there exists λ_1 > λ_0 such that

λ^α P(X_1 > λ) ≤ 2C_α   for all λ > λ_1.

It follows that λ^α P(X_β > λ) < 2C_α for all λ > λ_1 and β ∈ [−1, 1]. Clearly, for all λ ∈ (0, λ_1] and β ∈ [−1, 1], λ^α P(X_β > λ) ≤ λ_1^α.

Step 2. We now consider the general case. Since X/σ has a S_α(1, β, 0) distribution, by Step 1 it follows that P(|X| > λ) = P(|X|/σ > λ/σ) ≤ c_α σ^α λ^{−α} for any λ > 0. □
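A quick Monte Carlo illustration of Lemma A.1 (not part of the original argument) is sketched below: it estimates λ^α P(|X| > λ) for several β using scipy.stats.levy_stable. The identification of scipy's default parameterization with S_α(σ, β, 0) for α ≠ 1 (Definition 1.1.6 of [27]) is an assumption that should be checked for the scipy version at hand.

```python
import numpy as np
from scipy.stats import levy_stable

# Check that lambda^alpha * P(|X| > lambda) stays bounded in lambda and beta,
# as asserted by Lemma A.1, for X ~ S_alpha(1, beta, 0).
alpha, n = 1.5, 200_000
for beta in (-1.0, 0.0, 0.7, 1.0):
    x = levy_stable.rvs(alpha, beta, loc=0.0, scale=1.0, size=n, random_state=0)
    for lam in (2.0, 5.0, 20.0):
        tail = np.mean(np.abs(x) > lam)      # empirical P(|X| > lambda)
        print(f"beta={beta:+.1f}  lambda={lam:5.1f}  lambda^alpha * tail ~ {lam**alpha * tail:.3f}")
```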

In the proof of Theorem 4.3 and Lemma A.3 below, we use the following remark, due to Adam Jakubowski (personal communication).

Remark A.2 Let X be a random variable such that P(|X| > λ) ≤ K λ^{−α} for all λ > 0, for some K > 0 and α ∈ (0, 2). Then, for any A > 0,

E(|X| 1_{{|X|≤A}}) ≤ ∫_0^A P(|X| > t) dt ≤ K (1/(1 − α)) A^{1−α}   if α < 1,
E(|X| 1_{{|X|>A}}) ≤ ∫_A^∞ P(|X| > t) dt + A P(|X| > A) ≤ K α(α − 1)^{−1} A^{1−α}   if α > 1,
E(X² 1_{{|X|≤A}}) ≤ 2 ∫_0^A t P(|X| > t) dt ≤ K (2/(2 − α)) A^{2−α}   for any α ∈ (0, 2).

The next result is a generalization of Lemma 2.1 of [10] to the case of non-symmetric random variables. This result is used in the proofs of Lemma 5.1 and Proposition 5.4.


Lemma A.3 Let (ξ_k)_{k≥1} be independent random variables such that

sup_{λ>0} λ^α P(|ξ_k| > λ) ≤ K   for all k ≥ 1   (68)

for some K > 0 and α ∈ (0, 2). If α > 1, we assume that E(ξ_k) = 0 for all k, and if α = 1, we assume that ξ_k has a symmetric distribution for all k. Then for any sequence (a_k)_{k≥1} of real numbers, we have:

sup_{λ>0} λ^α P( | Σ_{k≥1} a_k ξ_k | > λ ) ≤ r_α K Σ_{k≥1} |a_k|^α,   (69)

where r_α > 0 is a constant depending only on α.

Proof: We consider the intersection of the event on the left-hand side of (69) with the event {sup_{k≥1} |a_k ξ_k| > λ} and its complement. Hence,

P( | Σ_{k≥1} a_k ξ_k | > λ ) ≤ Σ_{k≥1} P(|a_k ξ_k| > λ) + P( | Σ_{k≥1} a_k ξ_k 1_{{|a_kξ_k|≤λ}} | > λ ) =: I + II.

Using (68), we have I ≤ K λ^{−α} Σ_{k≥1} |a_k|^α. To treat II, we consider 3 cases.

Case 1. α < 1. By Markov's inequality and Remark A.2,

II ≤ λ^{−1} Σ_{k≥1} |a_k| E(|ξ_k| 1_{{|a_kξ_k|≤λ}}) ≤ K (1/(1 − α)) λ^{−α} Σ_{k≥1} |a_k|^α.

Case 2. α > 1. Let X = Σ_{k≥1} a_k ξ_k 1_{{|a_kξ_k|≤λ}}. Since E(Σ_{k≥1} a_k ξ_k) = 0,

|E(X)| = |E( Σ_{k≥1} a_k ξ_k 1_{{|a_kξ_k|>λ}} )| ≤ Σ_{k≥1} |a_k| E(|ξ_k| 1_{{|a_kξ_k|>λ}}) ≤ K α(α − 1)^{−1} λ^{1−α} Σ_{k≥1} |a_k|^α,

where we used Remark A.2 for the last inequality. From here, we infer that

|E(X)| < λ/2   for any λ > λ*,

where λ*^α = 2K α(α − 1)^{−1} Σ_{k≥1} |a_k|^α. By Chebyshev's inequality, for any λ > λ*,

II = P(|X| > λ) ≤ P(|X − E(X)| > λ − |E(X)|) ≤ (4/λ²) E|X − E(X)|²
≤ (4/λ²) Σ_{k≥1} a_k² E(ξ_k² 1_{{|a_kξ_k|≤λ}}) ≤ (8K/(2 − α)) λ^{−α} Σ_{k≥1} |a_k|^α,

using Remark A.2 for the last inequality. On the other hand, if λ ∈ (0, λ*],

II = P(|X| > λ) ≤ 1 ≤ λ^{−α} λ*^α = 2K α(α − 1)^{−1} λ^{−α} Σ_{k≥1} |a_k|^α.

Case 3. α = 1. Since ξ_k has a symmetric distribution, we can use the original argument of [10]. □

B Fractional power of the Laplacian

Let G_γ(t, x) be the fundamental solution of ∂u/∂t + (−Δ)^γ u = 0 on R^d, γ > 0.

Lemma B.1 For any p > 1, there exist some constants c_1, c_2 > 0, depending on d, p, γ, such that

c_1 t^{−(d/(2γ))(p−1)} ≤ ∫_{R^d} G_γ(t, x)^p dx ≤ c_2 t^{−(d/(2γ))(p−1)}.

Proof: The upper bound is given by Lemma B.23 of [22]. For the lower bound, we use the scaling property of the functions (g_{t,γ})_{t>0}. We have:

G_γ(t, x) = ∫_0^∞ (4πt^{1/γ}r)^{−d/2} exp( −|x|²/(4t^{1/γ}r) ) g_{1,γ}(r) dr
≥ ∫_1^∞ (4πt^{1/γ}r)^{−d/2} exp( −|x|²/(4t^{1/γ}r) ) g_{1,γ}(r) dr
≥ (4πt^{1/γ})^{−d/2} exp( −|x|²/(4t^{1/γ}) ) C_{d,γ},   with C_{d,γ} := ∫_1^∞ r^{−d/2} g_{1,γ}(r) dr < ∞.
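As a side remark (not part of the original proof), when O = R^d the order t^{−(d/(2γ))(p−1)} in Lemma B.1 can also be read off directly from the Fourier representation (20). Substituting ξ = t^{−1/(2γ)} η gives the self-similarity G_γ(t, x) = t^{−d/(2γ)} G_γ(1, t^{−1/(2γ)} x), and hence, provided ∫_{R^d} G_γ(1, y)^p dy < ∞,

\[
\int_{\mathbb{R}^d} G_\gamma(t,x)^p\,dx
= t^{-dp/(2\gamma)}\int_{\mathbb{R}^d} G_\gamma(1, t^{-1/(2\gamma)}x)^p\,dx
= t^{-dp/(2\gamma)}\, t^{d/(2\gamma)}\int_{\mathbb{R}^d} G_\gamma(1,y)^p\,dy
= t^{-\frac{d}{2\gamma}(p-1)}\int_{\mathbb{R}^d} G_\gamma(1,y)^p\,dy.
\]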


C A local property of the stochastic integral

Proposition C.1 Let T > 0 and O ⊂ R^d be a Borel set. Let X = {X(t, x); t ≥ 0, x ∈ R^d} be a predictable process such that X ∈ L_α if α ≤ 1, respectively X ∈ L_p for some p ∈ (α, 2] if α > 1. If O is unbounded, assume in addition that X satisfies (38) if α ≤ 1, respectively (46) if α > 1. Suppose that there exists an event A ∈ F_T such that

X(ω, t, x) = 0 for all ω ∈ A, t ∈ [0, T], x ∈ O.   (70)

Then for any K > 0, I(X)(T, O) = I_K(X)(T, O) = 0 a.s. on A.

Proof: We only prove the result for I(X), the proof for I_K(X) being the same. Moreover, we include only the argument for α ≤ 1, the case α > 1 being similar. The idea is to reduce the argument to the case when X is a simple process, as in the proof of Proposition 8.11 of [22].

Step 1. We show that the proof can be reduced to the case of a bounded set O. Let X_n(t, x) = X(t, x) 1_{O_n}(x), where O_n = O ∩ E_n and (E_n)_n is an increasing sequence of sets in B_b(R^d) such that ∪_n E_n = R^d. Then X_n ∈ L_α satisfies (70). By the dominated convergence theorem,

E ∫_0^T ∫_O |X_n(t, x) − X(t, x)|^α dx dt → 0.

By the construction of the integral, I(X_{n_k})(T, O) → I(X)(T, O) a.s. for a subsequence {n_k}. It suffices to show that I(X_n)(T, O) = 0 a.s. on A for all n. But I(X_n)(T, O) = I(X_n)(T, O_n) and O_n is bounded.

Step 2. We show that the proof can be reduced to the case of bounded processes. For this, let X_n(t, x) = X(t, x) 1_{{|X(t,x)|≤n}}. Clearly, X_n ∈ L_α is bounded and satisfies (70) for all n. By the dominated convergence theorem, ‖X_n − X‖_{α,T,O} → 0, and hence I(X_{n_k})(T, O) → I(X)(T, O) a.s. for a subsequence {n_k}. It suffices to show that I(X_n)(T, O) = 0 a.s. on A for all n.

Step 3. We show that the proof can be reduced to the case of bounded continuous processes. Assume that X ∈ L_α is bounded and satisfies (70). For any t > 0 and x ∈ R^d, we define

X_n(t, x) = n^{d+1} ∫_{(t−1/n)∨0}^{t} ∫_{(x−1/n, x]∩O} X(s, y) dy ds,

where (a, b] = {y ∈ R^d; a_i < y_i ≤ b_i for all i = 1, . . . , d}. Clearly, X_n is bounded and satisfies (70). We prove that X_n ∈ L_α. Since X_n is bounded,


X_n ∈ L_α. To prove that X_n is predictable, consider the process F(t, x) = ∫_0^t ∫_{{y∈O; y≤x}} X(s, y) dy ds. For any t > 0, the map (ω, s, x) → X(ω, s, x) from Ω × [0, t] × R^d to R is F_t ⊗ B([0, t]) ⊗ B(R^d)-measurable. Hence, F(t, ·) is F_t ⊗ B(R^d)-measurable for any t > 0. Since the map t → F(ω, t, x) is left-continuous for any ω, x ∈ R^d, it follows that F is predictable, being in the class C defined in Remark 4.1. Hence, X_n is predictable, being a sum of 2^{d+1} terms involving F.

Since F is continuous in (t, x), X_n is continuous in (t, x). By the Lebesgue differentiation theorem in R^{d+1}, X_n(ω, t, x) → X(ω, t, x) for any ω, t > 0, x ∈ O. By the bounded convergence theorem, ‖X_n − X‖_{α,T,O} → 0. Hence I(X_{n_k})(T, O) → I(X)(T, O) a.s. for a subsequence {n_k}. It suffices to show that I(X_n)(T, O) = 0 a.s. on A for all n.

Step 4. Assume that X ∈ L_α is bounded, continuous and satisfies (70). Let (U_j^{(n)})_{j=1,...,m_n} be a partition of O in Borel sets with Lebesgue measure smaller than 1/n. Let x_j^n ∈ U_j^{(n)} be arbitrary. Define

X_n(t, x) = Σ_{k=0}^{n−1} Σ_{j=1}^{m_n} X(kT/n, x_j^n) 1_{(kT/n, (k+1)T/n]}(t) 1_{U_j^{(n)}}(x).

Since X is continuous in (t, x), X_n(t, x) → X(t, x). By the bounded convergence theorem, ‖X_n − X‖_{α,T,O} → 0, and hence I(X_{n_k})(T, O) → I(X)(T, O) a.s. for a subsequence {n_k}. Since on the event A,

I(X_n)(T, O) = Σ_{k=0}^{n−1} Σ_{j=1}^{m_n} X(kT/n, x_j^n) Z( (kT/n, (k+1)T/n] × U_j^{(n)} ) = 0,

it follows that I(X)(T, O) = 0 a.s. on A. □

Acknowledgement. The author is grateful to Robert Dalang for suggesting this problem.

    References

    [1] Albeverio, S., Wu, J.-L. and Zhang, T.-S. (1998). Parabolic SPDEsdriven by Poisson white noise. Stoch. Proc. Appl. 74, 21-36.


[2] Albeverio, S., Mandrekar, V. and Rüdiger, B. (2009). Existence of mild solutions for SDEs and semilinear equations with non-Gaussian Lévy noise. Stoch. Proc. Appl. 119, 835-863.

    [3] Applebaum, D. and Wu, J.-L. (2000). Stochastic partial differentialequations driven by Levy space-time white noise. Random Oper. Stoch.Eq. 8, 245-259.

    [4] Azerad, P. and Mellouk, M. (2007). On a stochastic partial differentialequation with non-local diffusion.Potential Anal. 27, 183-197.

    [5] Billingsley, P. (1995).Probability and Measure. Third Edition. John Wi-ley, New York.

    [6] Dalang, R. C. (1999). Extending martingale measure stochastic integralwith applications to spatially homogenous s.p.d.e.s. Electr. J. Probab.4, no. 6, 29 pp. Erratum in Electr. J. Probab. 6 (2001), 5 pp.

[7] Dawson, D. (1992). Infinitely divisible random measures and superprocesses. In Stochastic Analysis and Related Topics, H. Korezlioglu and A. Üstünel, eds. Birkhäuser, Boston.

    [8] Dellacherie, C. and Meyer, P.-A. (1975).Probabilites et potentiel. Vol. I.Hermann, Paris.

    [9] Feller, W. (1971). An Introduction to Probability Theory and Its Appli-cations. Vol. II. Second Edition. John Wiley.

[10] Giné, E. and Marcus, M. B. (1982). Some results on the domain of attraction of stable measures on C(K). Probab. Math. Stat. 2, 125-147.

[11] Giné, E. and Marcus, M. B. (1983). The central limit theorem for stochastic integrals with respect to Lévy processes. Ann. Probab. 11, 58-77.

[12] Itô, K. (1944). Stochastic integral. Proc. Imp. Acad. Tokyo 20, 519-524.

[13] Itô, K. (1946). On a stochastic integral equation. Proc. Imp. Acad. Tokyo 22, 32-35.


    [14] Kallianpur, G., Xiong, J., Hardy, G. and Ramasubramanian, S. (1994).

    The existence and uniqueness of solutions to nuclear-space valuedSPDEs driven by Poisson random measures. Stoch. Stoch. Rep. 50, 85-122.

[15] Marinelli, C. and Röckner, M. (2010). Well-posedness and asymptotic behavior for stochastic reaction-diffusion equations with multiplicative Poisson noise. Electr. J. Probab. 15, 1528-1555.

    [16] Mueller, C. (1998). The heat equation with Levy noise. Stoch. Proc.Appl. 74, 67-82.

    [17] Mueller, C., Mytnik, L. and Stan, A. (2006). The heat equation with

    time-independent multiplicative stable noise. Stoch. Proc. Appl. 116,70-100.

    [18] Mytnik, L. (2002). Stochastic partial differential equations driven bystable noise.Probab. Th. Rel. Fields123, 157-201.

    [19] Nolan, J. P. (2013). Stable distributions. Models for heavy taileddata. Forthcoming book. Birkhauser, Boston. Chapter 1 available athttp://academic2.american.edu/ jpnolan/stable/chap1.pdf

[20] Øksendal, B. (2008). Stochastic partial differential equations driven by multi-parameter white noise of Lévy processes. Quarterly Appl. Math. Vol. LXVI, 521-537.

    [21] Peszat, S. and Zabczyk, J. (2006). Stochastic heat and wave equationsdriven by an impulsive noise. In: Stochastic Partial Differential Equa-tions and Applications VII, eds. Da Prato, G. and Tubaro, L., 229-242.

    [22] Peszat, S. and Zabczyk, J. (2007). Stochastic partial differential equa-tions with Levy noise. Cambridge University Press.

[23] Priola, E. and Zabczyk, J. (2011). Structural properties of semilinear SPDEs driven by cylindrical stable processes. Probab. Th. Rel. Fields 149, 97-137.

    [24] Rajput, B. S. and Rosinski, J. (1989). Spectral representations of in-finitely divisible processes. Probab. Th. Rel. Fields82, 451-487.


    [25] Resnick, S. I. (2007).Heavy Tail Phenomena: probabilistic and statistical

    modelling. Springer, New York.

[26] Saint Loubert Bié, É. (1998). Étude d'une EDPS conduite par un bruit poissonnien. Probab. Th. Rel. Fields 111, 287-321.

    [27] Samorodnitsky, G. and Taqqu, M. S. (1994).Stable non-Gaussian Ran-dom Processes. Chapman and Hall.

    [28] Truman, A. and Wu, J.-L. (2006). Fractal Burgers equation driven byLevy noise. In: Stochastic Partial Differential Equations and Applica-tions VII, eds. Da Prato, G. and Tubaro, L., 295-310.

[29] Walsh, J. B. (1986). An introduction to stochastic partial differential equations. École d'Été de Probabilités de Saint-Flour XIV. Lecture Notes in Math. 1180, 265-439. Springer-Verlag, Berlin.
