
A splitting method for a backward parabolic equation with time-dependent coefficients



Computers and Mathematics with Applications 65 (2013) 17–28


Nguyen Thi Ngoc Oanh
Faculty of Mathematics and Informatics, College of Science, Thai Nguyen University, Thai Nguyen, Viet Nam

Article info

Article history: Received 27 February 2012; received in revised form 30 August 2012; accepted 8 October 2012.

Keywords: Backward parabolic equation; Ill-posed problem; Regularization; Conjugate gradient method; Splitting method

Abstract

In this paper we propose a stable numerical method for an ill-posed backward parabolic equation with time-dependent coefficients in a parallelepiped. The problem is reformulated as an ill-posed least squares problem which is solved by the conjugate gradient method with an a posteriori stopping rule. The least squares problem is discretized by a splitting method which reduces the large dimensions of the discretized problem. We calculate the gradient of the objective functional of the discretized least squares problem with the aid of an adjoint discretized problem, which enhances its accuracy. The algorithm is tested on several examples, which demonstrate its efficiency.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Let $\Omega := (0, L_1) \times (0, L_2) \times \cdots \times (0, L_n)$ be an open parallelepiped in $\mathbb{R}^n$ ($n = 2, 3$). Denote by $\partial\Omega$ the boundary of $\Omega$, $Q_T := \Omega \times (0, T]$, and $S_T := \partial\Omega \times (0, T]$. Consider the initial-boundary value problem

$$\frac{\partial u}{\partial t} - \sum_{i=1}^{n}\frac{\partial}{\partial x_i}\left(a_i(x,t)\frac{\partial u}{\partial x_i}\right) + a(x,t)u = f, \quad (x,t)\in Q_T,$$
$$u(x,0) = g(x), \quad x\in\Omega,$$
$$u(x,t) = 0, \quad (x,t)\in S_T, \tag{1}$$

where
$$a_i, a \in C^1(Q_T), \quad f \in L^2(Q_T), \quad a_i(x,t) \ge \lambda > 0, \quad a(x,t) \ge 0 \quad \forall (x,t)\in Q_T,\ i = 1, 2, \ldots, n,$$
are given functions and $\lambda$ is a given constant.

The solution of this problem is understood in the weak sense as follows: a weak solution of problem (1) is a function $u(x,t)\in H^{1,1}_0(Q_T)$ satisfying the identity
$$\int_{Q_T}\left(\frac{\partial u}{\partial t}\eta + \sum_{i=1}^{n} a_i(x,t)\frac{\partial u}{\partial x_i}\frac{\partial \eta}{\partial x_i} + a(x,t)u\eta - f\eta\right)dx\,dt = 0, \quad \forall \eta\in H^{1,0}_0(Q_T), \tag{2}$$
and
$$u(x,0) = g(x), \quad x\in\Omega. \tag{3}$$

E-mail addresses: [email protected], [email protected].

0898-1221/$ – see front matter © 2012 Elsevier Ltd. All rights reserved. doi:10.1016/j.camwa.2012.10.005


18 N.T.N. Oanh / Computers and Mathematics with Applications 65 (2013) 17–28

Here we use the standard notation of the Sobolev spaces $H^1(\Omega)$, $H^{1,0}_0(Q_T)$ and $H^{1,1}_0(Q_T)$ as in [1]: the space $H^1(\Omega)$ consists of all elements $u(x)\in L^2(\Omega)$ having generalized derivatives $\partial u/\partial x_i \in L^2(\Omega)$. The scalar product is defined by
$$(u,v)_{H^1(\Omega)} = \int_{\Omega}\left(uv + \sum_{i=1}^{n}\frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_i}\right)dx.$$

The space $H^{1,0}(Q_T)$ is the set of all elements $u(x,t)\in L^2(Q_T)$ having generalized derivatives $\partial u/\partial x_i\in L^2(Q_T)$, with the scalar product
$$(u,v)_{H^{1,0}(Q_T)} = \int_{Q_T}\left(uv + \sum_{i=1}^{n}\frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_i}\right)dx\,dt.$$

The space $H^{1,1}(Q_T)$ is the set of all elements $u(x,t)\in L^2(Q_T)$ having generalized derivatives $\partial u/\partial x_i,\ \partial u/\partial t\in L^2(Q_T)$, with the scalar product
$$(u,v)_{H^{1,1}(Q_T)} = \int_{Q_T}\left(uv + \sum_{i=1}^{n}\frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_i} + \frac{\partial u}{\partial t}\frac{\partial v}{\partial t}\right)dx\,dt.$$

We also denote
$$H^{1,0}_0(Q_T) = \{u\in H^{1,0}(Q_T) : u|_{S_T} = 0\} \quad\text{and}\quad H^{1,1}_0(Q_T) = \{u\in H^{1,1}(Q_T) : u|_{S_T} = 0\}.$$

If $g\in H^1_0(\Omega)$ is given, the well-posedness of problem (2) and (3) is proved; see e.g. [1, Theorem 6.1]. In this paper, we consider the backward parabolic equation, i.e. the following inverse problem.

Inverse problem: reconstruct the initial condition $g\in H^1(\Omega)$ from the measurement of the solution $u$ at the final time instant, $u(x,T) = \xi(x)$, $x\in\Omega$.

This problem is severely ill-posed (see [2]) and, due to its importance in various practical situations, many papers have been devoted to it (see, e.g. [3–9,2,10,11] and the references therein). However, to our knowledge, not many papers treat the case of time-dependent coefficients [12,9,13,2]. Furthermore, numerical methods for this problem are not well developed [14,5,13,15], especially in the multi-dimensional case. The aim of this paper is to suggest a fast and stable numerical method for the multi-dimensional backward parabolic equation with time-dependent coefficients described above. We follow the least-squares approach for this purpose. Namely, denoting the solution of (2) and (3) by $u(x,t;g)$, we find $g\in H^1_0(\Omega)$ minimizing the misfit functional
$$J(g) := \frac12\|u(\cdot,T;g) - \xi\|^2_{L^2(\Omega)}. \tag{4}$$

The gradient of this functional can be proved (see the approach in [16–18]) to be $\psi(x,0)$, where $\psi$ is the solution of the adjoint problem
$$\frac{\partial\psi}{\partial t} + \sum_{i=1}^{n}\frac{\partial}{\partial x_i}\left(a_i(x,t)\frac{\partial\psi}{\partial x_i}\right) - a(x,t)\psi = 0, \quad (x,t)\in Q_T,$$
$$\psi(x,T) = u(x,T;g) - \xi(x), \quad x\in\Omega,$$
$$\psi(x,t) = 0, \quad (x,t)\in S_T. \tag{5}$$

Then we can make use of the conjugate gradient method with a stopping rule suggested by Nemirovskii [19] for stably solving the problem (4), (2), (3). However, to implement the method on a computer, we have to discretize the problem, for instance by the finite difference method. Doing so, we realize that the discretized formula for the gradient obtained from the continuous adjoint problem (5) and that obtained by the same technique for the discretized optimization problem are different. Therefore, to avoid inaccuracies in evaluating the gradients by applying some discretization methods to the direct and adjoint problems, we follow the approach of [20] by working directly with the discretized optimization problem rather than with the continuous one. Since our problem is multi-dimensional and in every iteration we have to solve one direct and one adjoint problem, we should use a fast solver for the direct problems. Exploiting the special structure of the domain $\Omega$, we use a splitting finite difference scheme (see e.g. [21–23]) to discretize the direct problem. One of the main features of this method is that it reduces a multi-dimensional problem to a sequence of one-dimensional ones, which can be solved very fast. The main contribution of this paper is to study the discretized version of the optimization problem (4), (2), (3) obtained by the splitting finite difference scheme and to derive the gradient for the discretized problem. The latter is solved by the conjugate gradient method with the a posteriori stopping rule proposed by Nemirovskii [19]. We note that the question of the convergence rate of the method



when the noise level and the space and time step sizes approach zero is, as for most general ill-posed problems, open and beyond the scope of this paper.

This paper is organized as follows. In Section 2 we describe the splitting finite difference scheme for problem (1), and in Section 3 we present the discretized variational problem and the conjugate gradient method. Finally, in Section 4 we test our algorithm on some concrete examples.

2. A splitting finite difference scheme for the direct problem

The idea of splitting schemes is to approximate a complex problem by a sequence of simpler ones. The main advantages of splitting schemes are: (i) they are stable regardless of the choice of the spatial and temporal grid sizes, and (ii) the resulting linear systems can be solved easily since they are tridiagonal. However, these methods are not easy to use for problems in complicated domains.
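Each one-dimensional implicit step produced by a splitting scheme leads to a linear system with a three-point stencil, which can be solved in O(N) operations by forward elimination and back substitution (the Thomas algorithm). The following sketch is in Python rather than the paper's Matlab, and the coefficients are illustrative, not taken from the scheme:

```python
# sketch: Thomas algorithm for the tridiagonal systems produced by each
# one-dimensional fractional step (coefficients below are illustrative)
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(b)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):   # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 3x3 example: [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1] has solution x = [1,1,1]
x = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0])
assert all(abs(v - 1.0) < 1e-12 for v in x)
```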

In this section, we describe a splitting finite difference scheme for the direct problem (1). Following the techniques of [21–23] (see also [20,24]), we subdivide the domain $\Omega$ into small cells by the rectangular uniform grid specified by
$$0 = x_i^0 < x_i^1 = h_i < \cdots < x_i^{N_i} = L_i, \quad i = 1, \ldots, n.$$
Here $h_i = L_i/N_i$ is the grid size in the $x_i$-direction, $i = 1,\ldots,n$. The grid vertices are denoted by $x^k := (x_1^{k_1},\ldots,x_n^{k_n})$, where $k := (k_1,\ldots,k_n)$, $0\le k_i\le N_i$. We also denote by $h := (h_1,\ldots,h_n)$ the vector of spatial grid sizes and $\Delta h := h_1\cdots h_n$. Let $e_i$ be the unit vector in the $x_i$-direction, $i = 1,\ldots,n$, i.e. $e_1 = (1,0,\ldots,0)$ and so on. Denote
$$\omega(k) := \{x\in\Omega : (k_i - 0.5)h_i \le x_i \le (k_i + 0.5)h_i,\ \forall i = 1,\ldots,n\}. \tag{6}$$

In the following, we denote the set of indices of internal grid points by $\Omega_h$, i.e.
$$\Omega_h := \{k = (k_1,\ldots,k_n) : 1\le k_i\le N_i - 1,\ \forall i = 1,\ldots,n\}. \tag{7}$$
We also make use of the sets
$$\Omega^i_h := \{k = (k_1,\ldots,k_n) : 0\le k_i\le N_i - 1,\ 1\le k_j\le N_j - 1\ \forall j\ne i\} \tag{8}$$

for $i = 1,\ldots,n$. For a function $u(x,t)$ defined in $Q_T$, we denote by $u^k(t)$ its approximate value at $(x^k,t)$. We define the forward finite difference quotient with respect to $x_i$
$$u^k_{x_i} := \frac{u^{k+e_i} - u^k}{h_i}.$$

Now, taking into account the homogeneous boundary condition, we approximate the integrals in (2) as follows:
$$\int_{Q_T}\frac{\partial u}{\partial t}\eta\,dx\,dt \approx \Delta h\int_0^T\sum_{k\in\Omega_h}\frac{du^k(t)}{dt}\eta^k(t)\,dt, \tag{9}$$
$$\int_{Q_T}a_i(x,t)\frac{\partial u}{\partial x_i}\frac{\partial\eta}{\partial x_i}\,dx\,dt \approx \Delta h\int_0^T\sum_{k\in\Omega^i_h}a_i^{k+e_i/2}(t)\,u^k_{x_i}(t)\,\eta^k_{x_i}(t)\,dt, \tag{10}$$
$$\int_{Q_T}a(x,t)u\eta\,dx\,dt \approx \Delta h\int_0^T\sum_{k\in\Omega_h}a^k(t)u^k(t)\eta^k(t)\,dt, \tag{11}$$
$$\int_{Q_T}f\eta\,dx\,dt \approx \Delta h\int_0^T\sum_{k\in\Omega_h}f^k(t)\eta^k(t)\,dt. \tag{12}$$

Here $f^k(t)$ is an approximation of the right-hand side $f$ at the grid point $x^k$ and $a_i^{k+e_i/2} := a_i\left(x^k + \frac{h_ie_i}{2}\right)$. With the approximations (9)–(12), we have the following discrete analogue of Eq. (2):
$$\int_0^T\left[\sum_{k\in\Omega_h}\left(\frac{du^k}{dt} + a^ku^k - f^k\right)\eta^k + \sum_{i=1}^{n}\sum_{k\in\Omega^i_h}a_i^{k+e_i/2}u^k_{x_i}\eta^k_{x_i}\right]dt = 0. \tag{13}$$

We note that, using the discrete analogue of integration by parts, we obtain
$$\sum_{k\in\Omega^i_h}a_i^{k+e_i/2}u^k_{x_i}\eta^k_{x_i} = \sum_{k\in\Omega_h}\left[a_i^{k-e_i/2}\frac{u^k - u^{k-e_i}}{h_i^2} - a_i^{k+e_i/2}\frac{u^{k+e_i} - u^k}{h_i^2}\right]\eta^k \tag{14}$$
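The identity above is a discrete Abel (summation-by-parts) rearrangement and can be checked numerically in one dimension. A small sketch with illustrative grid functions vanishing at the boundary:

```python
# 1-D numerical check of the discrete integration-by-parts identity (14):
# grid k = 0..N, u and eta vanish at the boundary, a is given at half-points
N, h = 5, 0.2
u   = [0.0, 1.0, -2.0, 3.0, 0.5, 0.0]        # u^0 = u^N = 0
eta = [0.0, 0.7,  1.5, -1.0, 2.0, 0.0]       # eta^0 = eta^N = 0
ah  = [1.0 + 0.1 * k for k in range(N)]      # a_{k+1/2}, k = 0..N-1 (illustrative)

# left-hand side: sum over k in Omega^i_h of a_{k+1/2} u_x eta_x
lhs = sum(ah[k] * (u[k + 1] - u[k]) / h * (eta[k + 1] - eta[k]) / h
          for k in range(N))
# right-hand side: sum over internal points as in (14)
rhs = sum((ah[k - 1] * (u[k] - u[k - 1]) - ah[k] * (u[k + 1] - u[k])) / h**2 * eta[k]
          for k in range(1, N))
assert abs(lhs - rhs) < 1e-10
```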



for any grid functions satisfying the homogeneous boundary conditions $u^k = 0$ and $\eta^k = 0$ for $k_i = 0$. Hence, substituting (14) into (13), we obtain the following system which approximates the original problem (1):
$$\frac{du}{dt} + (\Lambda_1 + \cdots + \Lambda_n)u - F = 0, \qquad u(0) = g, \tag{15}$$
with $u = \{u^k, k\in\Omega_h\}$ being the grid function. Here $g$ on the right-hand side is the grid function approximating the initial condition $g$, and

$$(\Lambda_i u)^k = \frac{a^ku^k}{n} + \begin{cases}
\dfrac{a_i^{k-e_i/2}}{h_i^2}\left(u^k - u^{k-e_i}\right) - \dfrac{a_i^{k+e_i/2}}{h_i^2}\left(u^{k+e_i} - u^k\right), & 2\le k_i\le N_i - 2,\\[1ex]
\dfrac{a_i^{k-e_i/2}}{h_i^2}u^k - \dfrac{a_i^{k+e_i/2}}{h_i^2}\left(u^{k+e_i} - u^k\right), & k_i = 1,\\[1ex]
\dfrac{a_i^{k-e_i/2}}{h_i^2}\left(u^k - u^{k-e_i}\right) + \dfrac{a_i^{k+e_i/2}}{h_i^2}u^k, & k_i = N_i - 1,
\end{cases} \tag{16}$$

for $k\in\Omega_h$. Moreover,
$$F = \{f^k, k\in\Omega_h\}, \tag{17}$$
where
$$f^k := \frac{1}{|\omega(k)|}\int_{\omega(k)}f(x)\,dx. \tag{18}$$
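The cell averages of the form (18) can be evaluated by any quadrature over the cell $\omega(k)$. A one-dimensional sketch with the midpoint rule; the integrand $x^2$ is a stand-in, not the paper's right-hand side:

```python
# sketch: averaging a function over one grid cell [xc - h/2, xc + h/2]
# with the composite midpoint rule (integrand and parameters illustrative)
def cell_average(f, xc, h, nq=200):
    s = 0.0
    for j in range(nq):
        s += f(xc - h / 2 + (j + 0.5) * h / nq)   # midpoint of sub-interval j
    return s / nq                                  # average, not integral

favg = cell_average(lambda x: x * x, xc=0.5, h=0.2, nq=400)
# exact average of x^2 over [0.4, 0.6] is (0.6^3 - 0.4^3) / (3 * 0.2)
assert abs(favg - (0.6**3 - 0.4**3) / (3 * 0.2)) < 1e-5
```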

We note that the coefficient matrices $\Lambda_i$ are positive semi-definite (see e.g. [24]). We now turn to the discretization in time of Eq. (15). We divide the interval $[0,T]$ into $M$ sub-intervals by the points $t_i$, $i = 0,\ldots,M$, with $t_0 = 0$, $t_1 = \Delta t$, ..., $t_M = M\Delta t = T$. To simplify the notation, we set $u^{k,m} := u^k(t_m)$. In the following, we drop the spatial index if there is no confusion. We also denote $F^m := F(t_m)$.

In order to obtain a splitting scheme for the Cauchy problem (15), we set $u^{m+\delta} := u(t_m + \delta\Delta t)$ and $\Lambda^m_i := \Lambda_i(t_m + \Delta t/2)$. We introduce the following implicit two-circle component-by-component splitting scheme [21]:
$$\frac{u^{m+\frac{i}{2n}} - u^{m+\frac{i-1}{2n}}}{\Delta t} + \Lambda^m_i\,\frac{u^{m+\frac{i}{2n}} + u^{m+\frac{i-1}{2n}}}{4} = 0, \quad i = 1, 2, \ldots, n-1,$$
$$\frac{u^{m+\frac12} - u^{m+\frac{n-1}{2n}}}{\Delta t} + \Lambda^m_n\,\frac{u^{m+\frac12} + u^{m+\frac{n-1}{2n}}}{4} = \frac{F^m}{2} + \frac{\Delta t}{8}\Lambda^m_nF^m,$$
$$\frac{u^{m+\frac{n+1}{2n}} - u^{m+\frac12}}{\Delta t} + \Lambda^m_n\,\frac{u^{m+\frac{n+1}{2n}} + u^{m+\frac12}}{4} = \frac{F^m}{2} - \frac{\Delta t}{8}\Lambda^m_nF^m,$$
$$\frac{u^{m+1-\frac{i-1}{2n}} - u^{m+1-\frac{i}{2n}}}{\Delta t} + \Lambda^m_i\,\frac{u^{m+1-\frac{i-1}{2n}} + u^{m+1-\frac{i}{2n}}}{4} = 0, \quad i = n-1, n-2, \ldots, 1,$$
$$u^0 = g. \tag{19}$$

Equivalently,
$$\left(E_i + \frac{\Delta t}{4}\Lambda^m_i\right)u^{m+\frac{i}{2n}} = \left(E_i - \frac{\Delta t}{4}\Lambda^m_i\right)u^{m+\frac{i-1}{2n}}, \quad i = 1, 2, \ldots, n-1,$$
$$\left(E_n + \frac{\Delta t}{4}\Lambda^m_n\right)\left(u^{m+\frac12} - \frac{\Delta t}{2}F^m\right) = \left(E_n - \frac{\Delta t}{4}\Lambda^m_n\right)u^{m+\frac{n-1}{2n}},$$
$$\left(E_n + \frac{\Delta t}{4}\Lambda^m_n\right)u^{m+\frac{n+1}{2n}} = \left(E_n - \frac{\Delta t}{4}\Lambda^m_n\right)\left(u^{m+\frac12} + \frac{\Delta t}{2}F^m\right),$$
$$\left(E_i + \frac{\Delta t}{4}\Lambda^m_i\right)u^{m+1-\frac{i-1}{2n}} = \left(E_i - \frac{\Delta t}{4}\Lambda^m_i\right)u^{m+1-\frac{i}{2n}}, \quad i = n-1, n-2, \ldots, 1,$$
$$u^0 = g, \tag{20}$$

where $E_i$ is the identity matrix corresponding to $\Lambda_i$, $i = 1,\ldots,n$. The splitting scheme (20) can be rewritten in the following compact form:
$$u^{m+1} = A^mu^m + \Delta tB^mF^m, \quad m = 0,\ldots,M-1, \qquad u^0 = g, \tag{21}$$



with
$$A^m = A^m_1\cdots A^m_n\,A^m_n\cdots A^m_1, \qquad B^m = A^m_1\cdots A^m_n, \tag{22}$$
where $A^m_i := \left(E_i + \frac{\Delta t}{4}\Lambda^m_i\right)^{-1}\left(E_i - \frac{\Delta t}{4}\Lambda^m_i\right)$, $i = 1,\ldots,n$.
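For commuting (e.g. simultaneously diagonalized) operators, the fractional steps of (20) collapse exactly to the one-step form (21)-(22). A scalar sketch for $n = 2$, with $\lambda_1$, $\lambda_2$ standing in for eigenvalues of $\Lambda_1$, $\Lambda_2$ (all values illustrative):

```python
# scalar sketch: the four fractional steps of (20) for n = 2 reproduce
# u^{m+1} = A u^m + dt * B * F with A = A1*A2*A2*A1 and B = A1*A2
dt = 0.1
lam1, lam2 = 3.0, 5.0       # stand-ins for eigenvalues of Lambda_1, Lambda_2
F = 2.0
c = dt / 4.0
A1 = (1 - c * lam1) / (1 + c * lam1)
A2 = (1 - c * lam2) / (1 + c * lam2)

u0 = 1.7
u14 = A1 * u0                     # first quarter step
u12 = A2 * u14 + dt / 2 * F       # (E+cL)(u - dt/2 F) = (E-cL) u_prev
u34 = A2 * (u12 + dt / 2 * F)     # (E+cL) u = (E-cL)(u_prev + dt/2 F)
u1  = A1 * u34                    # last quarter step

compact = (A1 * A2 * A2 * A1) * u0 + dt * (A1 * A2) * F
assert abs(u1 - compact) < 1e-12
```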

The stability of the splitting scheme (21) is given in the following theorem. For its proof, see [21, Section 4.3.1].

Theorem 2.1. The splitting scheme (21) is absolutely stable.
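Theorem 2.1 reflects the fact that each factor $A^m_i$ is a Cayley transform of a positive semi-definite matrix, so its eigenvalues have modulus at most one for every $\Delta t$. A quick numerical check using the known eigenvalues of the one-dimensional three-point difference operator (grid parameters illustrative):

```python
# sketch: amplification factors (1 - dt/4*mu)/(1 + dt/4*mu) stay in [-1, 1]
# for all eigenvalues mu >= 0, whatever the time step dt
import math

def laplacian_eigs(N, h):
    # eigenvalues of the 1-D three-point operator with Dirichlet conditions
    return [(2 - 2 * math.cos(j * math.pi / N)) / h**2 for j in range(1, N)]

for dt in (1e-3, 0.1, 10.0, 1e6):        # no restriction on the time step
    for mu in laplacian_eigs(50, 0.02):
        amp = (1 - dt / 4 * mu) / (1 + dt / 4 * mu)
        assert abs(amp) <= 1.0
```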

3. The discretized inverse problem and the conjugate gradient method

In this discrete setup, we consider the problem of estimating the discrete initial condition $g$ from the discrete measurement of the solution at the final time instant. Using the well-known least-squares approach, we estimate $g$ by minimizing the objective function
$$J^h_0(g) := \frac12\sum_{k\in\Omega_h}\left[u^{k,M}(g) - \xi^k\right]^2. \tag{23}$$

Here we use the notation $u^{k,M}(g)$ to indicate its dependence on the initial condition $g$; $M$ is the index of the final time instant.

3.1. Gradient of the objective function

In this work, we solve the minimization problem (23) by the conjugate gradient method. For this purpose, we need to calculate the gradient of the objective function $J^h_0$. In this subsection, we make use of the following inner product of two grid functions $u := \{u^k, k\in\Omega_h\}$ and $v := \{v^k, k\in\Omega_h\}$:
$$(u,v) := \sum_{k\in\Omega_h}u^kv^k. \tag{24}$$

The following theorem shows the way to calculate the gradient of the objective function (23).

Theorem 3.1. The gradient $\nabla J^h_0(g)$ of the objective function $J^h_0$ at $g$ is given by
$$\nabla J^h_0(g) = (A^0)^*\eta^0, \tag{25}$$
where $\eta$ satisfies the adjoint problem
$$\eta^{M-1} = u^M(g) - \xi, \qquad \eta^m = (A^{m+1})^*\eta^{m+1}, \quad m = M-2, M-3, \ldots, 0. \tag{26}$$

Here the matrix $(A^m)^*$ is given by
$$(A^m)^* = \left(E_1 - \frac{\Delta t}{4}\Lambda^m_1\right)\left(E_1 + \frac{\Delta t}{4}\Lambda^m_1\right)^{-1}\cdots\left(E_n - \frac{\Delta t}{4}\Lambda^m_n\right)\left(E_n + \frac{\Delta t}{4}\Lambda^m_n\right)^{-1}$$
$$\times\left(E_n - \frac{\Delta t}{4}\Lambda^m_n\right)\left(E_n + \frac{\Delta t}{4}\Lambda^m_n\right)^{-1}\cdots\left(E_1 - \frac{\Delta t}{4}\Lambda^m_1\right)\left(E_1 + \frac{\Delta t}{4}\Lambda^m_1\right)^{-1}. \tag{27}$$

Proof. For an infinitesimally small variation $\delta g$ of $g$, we have from (23) that
$$J^h_0(g+\delta g) - J^h_0(g) = \frac12\sum_{k\in\Omega_h}\left[u^{k,M}(g+\delta g) - \xi^k\right]^2 - \frac12\sum_{k\in\Omega_h}\left[u^{k,M}(g) - \xi^k\right]^2$$
$$= \frac12\sum_{k\in\Omega_h}\left(v^{k,M}\right)^2 + \sum_{k\in\Omega_h}v^{k,M}\left[u^{k,M}(g) - \xi^k\right] = \frac12\sum_{k\in\Omega_h}\left(v^{k,M}\right)^2 + \sum_{k\in\Omega_h}v^{k,M}\psi^k = \frac12\sum_{k\in\Omega_h}\left(v^{k,M}\right)^2 + (v^M,\psi), \tag{28}$$



where $v^{k,m} := u^{k,m}(g+\delta g) - u^{k,m}(g)$, $k\in\Omega_h$, $m = 0,\ldots,M$, $v^m := \{v^{k,m}, k\in\Omega_h\}$ and $\psi = u^M(g) - \xi$. It follows from (21) that $v$ is the solution to the problem
$$v^{m+1} = A^mv^m, \quad m = 0,\ldots,M-1, \qquad v^0 = \delta g. \tag{29}$$

Taking the inner product of both sides of the $m$th equation of (29) with an arbitrary vector $\eta^m\in\mathbb{R}^{N_1\times\cdots\times N_n}$ and summing the results over $m = 0,\ldots,M-1$, we obtain
$$\sum_{m=0}^{M-1}(v^{m+1},\eta^m) = \sum_{m=0}^{M-1}(A^mv^m,\eta^m) = \sum_{m=0}^{M-1}(v^m,(A^m)^*\eta^m). \tag{30}$$

Here $(A^m)^*$ is the adjoint matrix of $A^m$. Consider the following adjoint problem:
$$\eta^m = (A^{m+1})^*\eta^{m+1}, \quad m = M-2, M-3, \ldots, 0, \qquad \eta^{M-1} = \psi. \tag{31}$$

Taking the inner product of both sides of the first equation of (31) with an arbitrary vector $v^{m+1}$ and summing the results over $m = 0,\ldots,M-2$, we obtain
$$\sum_{m=0}^{M-2}(v^{m+1},\eta^m) = \sum_{m=0}^{M-2}(v^{m+1},(A^{m+1})^*\eta^{m+1}). \tag{32}$$

Taking the inner product of both sides of the second equation of (31) with the vector $v^M$, we have
$$(v^M,\eta^{M-1}) = (v^M,\psi). \tag{33}$$

From (32) and (33), we obtain
$$\sum_{m=0}^{M-2}(v^{m+1},\eta^m) + (v^M,\eta^{M-1}) = \sum_{m=0}^{M-2}(v^{m+1},(A^{m+1})^*\eta^{m+1}) + (v^M,\psi),$$
or, equivalently,
$$\sum_{m=0}^{M-1}(v^{m+1},\eta^m) = \sum_{m=1}^{M-1}(v^m,(A^m)^*\eta^m) + (v^M,\psi). \tag{34}$$

From (30) and (34), we have
$$(v^M,\psi) = (v^0,(A^0)^*\eta^0) = (\delta g,(A^0)^*\eta^0). \tag{35}$$

On the other hand, it can be proved by induction that $\sum_{k\in\Omega_h}\left(v^{k,M}\right)^2 = o(\delta g)$. Hence, it follows from (28) and (35) that
$$J^h_0(g+\delta g) - J^h_0(g) = (\delta g,(A^0)^*\eta^0) + o(\delta g). \tag{36}$$
Consequently, the gradient of the objective function $J^h_0$ can be written as
$$\frac{\partial J^h_0(g)}{\partial g} = (A^0)^*\eta^0. \tag{37}$$

Note that, since the coefficient matrices $\Lambda^m_i$, $i = 1,\ldots,n$, $m = 0,\ldots,M-1$, are symmetric, we have
$$(A^m_i)^* = \left[\left(E_i + \frac{\Delta t}{4}\Lambda^m_i\right)^{-1}\left(E_i - \frac{\Delta t}{4}\Lambda^m_i\right)\right]^* = \left(E_i - \frac{\Delta t}{4}\Lambda^m_i\right)^*\left[\left(E_i + \frac{\Delta t}{4}\Lambda^m_i\right)^{-1}\right]^* = \left(E_i - \frac{\Delta t}{4}\Lambda^m_i\right)\left(E_i + \frac{\Delta t}{4}\Lambda^m_i\right)^{-1}.$$



Hence,
$$(A^m)^* = \left(A^m_1\cdots A^m_nA^m_n\cdots A^m_1\right)^* = (A^m_1)^*\cdots(A^m_n)^*(A^m_n)^*\cdots(A^m_1)^*$$
$$= \left(E_1 - \frac{\Delta t}{4}\Lambda^m_1\right)\left(E_1 + \frac{\Delta t}{4}\Lambda^m_1\right)^{-1}\cdots\left(E_n - \frac{\Delta t}{4}\Lambda^m_n\right)\left(E_n + \frac{\Delta t}{4}\Lambda^m_n\right)^{-1}$$
$$\times\left(E_n - \frac{\Delta t}{4}\Lambda^m_n\right)\left(E_n + \frac{\Delta t}{4}\Lambda^m_n\right)^{-1}\cdots\left(E_1 - \frac{\Delta t}{4}\Lambda^m_1\right)\left(E_1 + \frac{\Delta t}{4}\Lambda^m_1\right)^{-1}. \tag{38}$$

The proof is complete.
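Theorem 3.1 can be sanity-checked on a toy problem: for any square matrices $A^m$, the adjoint recursion (26) followed by (25) must reproduce the finite-difference gradient of (23). A small Python sketch with $M = 3$ and hand-picked 2x2 matrices (all values illustrative):

```python
# toy check of Theorem 3.1: adjoint-based gradient (25)-(26) versus a
# finite-difference gradient of the objective (23); matrices illustrative
M = 3
A = [[[0.9, 0.1], [0.0, 0.8]],
     [[0.7, 0.2], [0.1, 0.9]],
     [[0.8, 0.0], [0.2, 0.6]]]
xi = [1.0, -2.0]

def matvec(B, x):
    return [B[0][0]*x[0] + B[0][1]*x[1], B[1][0]*x[0] + B[1][1]*x[1]]

def matvec_adj(B, x):   # multiply by the adjoint (transpose) of B
    return [B[0][0]*x[0] + B[1][0]*x[1], B[0][1]*x[0] + B[1][1]*x[1]]

def forward(g):         # u^{m+1} = A^m u^m, as in (21) with F = 0
    u = list(g)
    for m in range(M):
        u = matvec(A[m], u)
    return u

def J(g):               # objective (23)
    u = forward(g)
    return 0.5 * ((u[0] - xi[0])**2 + (u[1] - xi[1])**2)

def grad(g):            # gradient via (26) then (25)
    u = forward(g)
    eta = [u[0] - xi[0], u[1] - xi[1]]      # eta^{M-1} = u^M(g) - xi
    for m in range(M - 2, -1, -1):
        eta = matvec_adj(A[m + 1], eta)     # eta^m = (A^{m+1})^* eta^{m+1}
    return matvec_adj(A[0], eta)            # (A^0)^* eta^0

g0 = [0.3, -0.5]
ga = grad(g0)
eps = 1e-6
for j in range(2):
    gp = list(g0)
    gp[j] += eps
    assert abs((J(gp) - J(g0)) / eps - ga[j]) < 1e-4
```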

3.2. Conjugate gradient method

Denote by $\epsilon$ the noise level of the measured data. Using the well-known conjugate gradient method with the a posteriori stopping rule introduced by Nemirovskii [19], we reconstruct the initial condition $g$ from the measured final state $u^M = \xi$ by the following steps.

Step 1 (initialization). Given an initial guess $g^0$ and a scalar $\gamma > 1$, calculate the residual $r^0 = u^M(g^0) - \xi$ by solving the splitting scheme (19) with the initial condition $g$ replaced by the initial guess $g^0$. If $\|r^0\| \le \gamma\epsilon$, stop the algorithm. Otherwise, set $i = 0$, $d^{-1} = (0,\ldots,0)$ and go to Step 2.

Step 2. Calculate the gradient $r^i = \nabla J^h_0(g^i)$ given in (25) by solving the adjoint problem (26). Then set
$$d^i = -r^i + \beta^{i-1}d^{i-1}, \tag{39}$$
where
$$\beta^{i-1} = \frac{\|r^i\|^2}{\|r^{i-1}\|^2}\ \text{for}\ i\ge 1, \qquad \beta^{-1} = 0. \tag{40}$$

Step 3. Calculate the solution $u^i$ of the splitting scheme (19) with $g$ replaced by $d^i$, and put
$$\alpha^i = \frac{\|r^i\|^2}{\|(u^i)^M\|^2}. \tag{41}$$
Then set
$$g^{i+1} = g^i + \alpha^id^i. \tag{42}$$
The residual can be calculated by
$$r^{i+1} = r^i + \alpha^i(u^i)^M. \tag{43}$$

Step 4. If ∥r i+1∥ ≤ γ ϵ, stop the algorithm (Nemirovskii’s stopping rule). Otherwise, set i := i + 1, and go back to Step 2.

We note that (43) can be derived from the equality
$$r^{i+1} = u^M(g^{i+1}) - \xi = u^M(g^i + \alpha^id^i) - \xi = r^i + u^M(\alpha^id^i) = r^i + \alpha^i(u^i)^M. \tag{44}$$

We note that the conjugate gradient method with the above stopping rule has been proved to be convergent [19] and of optimal order. As the noise level $\epsilon$ converges to zero, the iterate produced by the algorithm converges to the minimum-norm solution of the problem (23), (21).
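Steps 1-4 can be sketched for a generic linear final-state map $u^M(g) = Kg$; here $K$ is a hypothetical 2x2 matrix, and the residual $u^M(g) - \xi$ and the gradient $K^*(u^M(g) - \xi)$, which the steps above both denote by $r$, are kept in separate variables:

```python
# CGLS-style sketch of Steps 1-4 with Nemirovskii's stopping rule for a
# toy linear final-state map u^M(g) = K g (K and the data are illustrative)
K = [[0.8, 0.3], [0.1, 0.9]]

def apply(Km, x):
    return [Km[0][0]*x[0] + Km[0][1]*x[1], Km[1][0]*x[0] + Km[1][1]*x[1]]

def apply_adj(Km, x):   # transpose of K
    return [Km[0][0]*x[0] + Km[1][0]*x[1], Km[0][1]*x[0] + Km[1][1]*x[1]]

def norm(x):
    return (x[0]**2 + x[1]**2) ** 0.5

g_true = [1.0, -1.0]
xi = apply(K, g_true)          # noise-free data; eps below is a pessimistic bound
eps, gamma = 1e-10, 1.05

g = [0.0, 0.0]                                   # Step 1: initial guess
r = [apply(K, g)[j] - xi[j] for j in range(2)]   # residual u^M(g) - xi
d = [0.0, 0.0]
s_old = None
for it in range(50):
    if norm(r) <= gamma * eps:                   # Nemirovskii stopping rule
        break
    s = apply_adj(K, r)                          # Step 2: gradient of J at g
    beta = 0.0 if s_old is None else (norm(s) / norm(s_old)) ** 2   # (40)
    d = [-s[j] + beta * d[j] for j in range(2)]                     # (39)
    Kd = apply(K, d)                             # Step 3: forward solve with d
    alpha = norm(s) ** 2 / norm(Kd) ** 2                            # (41)
    g = [g[j] + alpha * d[j] for j in range(2)]                     # (42)
    r = [r[j] + alpha * Kd[j] for j in range(2)]                    # (43)
    s_old = s

assert norm([g[0] - g_true[0], g[1] - g_true[1]]) < 1e-6
```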

4. Numerical results

To illustrate the performance of the proposed algorithm, we give in this section some two-dimensional numerical tests. The algorithm was implemented in Matlab and run on a personal laptop with an Intel Pentium dual-core processor at 2.13 GHz and 2 GB of RAM.

In the following examples, we consider the domain $\Omega = (0,1)\times(0,1)$. The grid sizes are chosen to be $h = (0.02, 0.02)$ and $\Delta t = 0.02$, resulting in 49 internal grid points in each spatial direction. In these tests, we add a random noise of magnitude 0.01 (about 2%) to the measured data to simulate noisy measurements. The parameter $\gamma$ in the stopping criterion is set to 1.05.



Fig. 1. Reconstruction result of Example 1: (a) the exact function g; (b) the estimated one; (c) the point-wise error; (d) comparison of g|_{x_1=1/2} and its reconstruction (dashed curve: the exact function; solid curve: the estimated function).

Example 1. In this example, the exact initial condition is given by
$$g(x) = \sin(\pi x_1)\sin(\pi x_2), \quad x = (x_1,x_2)\in(0,1)\times(0,1).$$
The coefficients are set to be $a_1(x,t) = 0.02\left(1 - \frac12(1-t)\cos(15\pi x_1)\cos(15\pi x_2)\right)$, $a_2(x,t) = 0.01\left(1 - \frac12(1-t)\cos(15\pi x_1)\cos(15\pi x_2)\right)$ and $a(x,t) = x_1^2 + x_2^2 + 2x_1t + 1$. The source term $f$ is given by
$$f(x,t) = -\frac{g(x)}{2}\left[1 - 0.06\left(1 - \frac{t}{2}\right)\left(1 - \frac{1-t}{2}\cos(15\pi x)\cos(15\pi y)\right) - (x^2 + y^2 + 2xt + 1)(2-t)\right]$$
$$-\,0.075\pi^2\left(1 - \frac{t}{2}\right)(1-t)\left(2\cos(\pi x)\sin(\pi y)\sin(15\pi x)\cos(15\pi y) - \sin(\pi x)\cos(\pi y)\cos(15\pi x)\sin(15\pi y)\right).$$

The measurement is taken at $T = 1$. We test the algorithm with the initial guess $g^0 = 4$. In this test, the algorithm stops after 12 iterations and the total computational time is approximately 35 s.

The reconstruction result and the exact initial condition $g$ are shown in Fig. 1. The point-wise error is depicted in Fig. 1(c), and the comparison between the exact function and the estimated one at $x_1 = 1/2$ is plotted in Fig. 1(d). The figure shows that the reconstruction is accurate even though we start from an initial guess rather far from the exact solution.

Example 2. Let us consider a case of a nonsmooth function $g$. In this example, $g$ is chosen as a multi-linear function given by
$$g(x) = \begin{cases}
2x_2 & \text{if } x_2\le 1/2,\ x_2\le x_1 \text{ and } x_1\le 1-x_2,\\
2(1-x_2) & \text{if } x_2\ge 1/2,\ x_2\ge x_1 \text{ and } x_1\ge 1-x_2,\\
2x_1 & \text{if } x_1\le 1/2,\ x_1\le x_2 \text{ and } x_2\le 1-x_1,\\
2(1-x_1) & \text{otherwise.}
\end{cases}$$
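The four branches fit together continuously; in fact they coincide with the single expression $2\min(x_1, 1-x_1, x_2, 1-x_2)$, which the following sketch verifies on a grid:

```python
# sketch of the piecewise-linear "pyramid" g of Example 2; the branches
# agree with 2*min(x1, 1-x1, x2, 1-x2), making the continuity evident
def g(x1, x2):
    if x2 <= 0.5 and x2 <= x1 and x1 <= 1 - x2:
        return 2 * x2
    if x2 >= 0.5 and x2 >= x1 and x1 >= 1 - x2:
        return 2 * (1 - x2)
    if x1 <= 0.5 and x1 <= x2 and x2 <= 1 - x1:
        return 2 * x1
    return 2 * (1 - x1)

for x1 in [i / 20 for i in range(21)]:
    for x2 in [j / 20 for j in range(21)]:
        assert abs(g(x1, x2) - 2 * min(x1, 1 - x1, x2, 1 - x2)) < 1e-12
```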



Fig. 2. Reconstruction result of Example 2: (a) the exact function g; (b) the estimated one; (c) the point-wise error; (d) comparison of g|_{x_1=1/2} and its reconstruction (dashed curve: the exact function; solid curve: the estimated function).

The graph of this function is shown in Fig. 2(a). The coefficients $a_1, a_2, a$ as well as the initial guess $g^0$ are chosen to be the same as in Example 1. The function $f$ is given by
$$f(x,t) = \begin{cases}
-y - (2-t)\left[\dfrac{10^{-2}}{2}(1-t)15\pi\cos(15\pi x)\sin(15\pi y) + (x^2+y^2+2xt+1)y\right] & \text{if } x_2\le 1/2,\ x_2\le x_1 \text{ and } x_1\le 1-x_2,\\[1ex]
y - 1 + (2-t)\left[\dfrac{10^{-2}}{2}(1-t)15\pi\cos(15\pi x)\sin(15\pi y) + (x^2+y^2+2xt+1)(1-y)\right] & \text{if } x_2\ge 1/2,\ x_2\ge x_1 \text{ and } x_1\ge 1-x_2,\\[1ex]
-x - (2-t)\left[10^{-2}(1-t)15\pi\sin(15\pi x)\cos(15\pi y) + (x^2+y^2+2xt+1)x\right] & \text{if } x_1\le 1/2,\ x_1\le x_2 \text{ and } x_2\le 1-x_1,\\[1ex]
x - 1 + (2-t)\left[10^{-2}(1-t)15\pi\sin(15\pi x)\cos(15\pi y) + (x^2+y^2+2xt+1)(1-x)\right] & \text{otherwise.}
\end{cases}$$

The reconstruction result is shown in Fig. 2(b)–(d). In this test, the reconstruction is still accurate except at the locations where the exact initial condition is not smooth. This is due to the fact that the direct problem smooths out the initial condition if the source function $f$ is smooth. This phenomenon is even more clearly visible for a discontinuous function $g$, as given in the next example.

In this example, the algorithm stops after 14 iterations and the computational time is 49 s.

Example 3. We now test the algorithm with the piecewise constant initial condition given by
$$g = \begin{cases}
1 & \text{if } 1/4\le x_1\le 3/4 \text{ and } 1/4\le x_2\le 3/4,\\
0 & \text{otherwise.}
\end{cases}$$



Fig. 3. Reconstruction result of Example 3: (a) the exact function g; (b) the estimated one; (c) the point-wise error; (d) comparison of g|_{x_1=1/2} and its reconstruction (dashed curve: the exact function; solid curve: the estimated function).

Note that this initial condition does not belong to $H^1(\Omega)$. Its behaviour is depicted in Fig. 3(a). The coefficients of the direct problem are set to be $a_1 = a_2 = a = 10^{-2}$ and the source function $f$ is given by
$$f(x,t) = \begin{cases}
-1/2 + 10^{-2}(1 - t/2) & \text{if } 1/4\le x_1\le 3/4 \text{ and } 1/4\le x_2\le 3/4,\\
0 & \text{otherwise.}
\end{cases}$$

The reconstructed function and the error are shown in Fig. 3(b)–(d). In this test, the CPU time is 60 s for 25 iterations.

Example 4. In this example, the initial condition $g$ is chosen to be the same as in Example 3. The coefficients are given as
$$a_1 = 2\cdot10^{-2}\big(1 - 0.5(1-t)\cos(15\pi x)\cos(15\pi y)\big), \quad a_2 = 10^{-2}\big(1 - 0.5(1-t)\cos(15\pi x)\cos(15\pi y)\big), \quad a = x_1^2 + x_2^2 + 2x_1t + 1,$$
and the function $f$ is given by
$$f(x,t) = \begin{cases}
-1/2 + (x_1^2 + x_2^2 + 2x_1t + 1)(1 - t/2) & \text{if } 1/4\le x_1\le 3/4 \text{ and } 1/4\le x_2\le 3/4,\\
0 & \text{otherwise.}
\end{cases}$$

Here, as in Example 2, we can also see that the reconstructed function is smooth even though the true one is discontinuous,so the approximation near the location of discontinuity is not so good. In order to preserve the discontinuity of the functiong , regularization methods based on total variation should be incorporated into the objective function (see Fig. 4).

The CPU time for this example is 38 s and the number of iterations is 14.



Fig. 4. Reconstruction result of Example 4: (a) the exact function g; (b) the estimated one; (c) the point-wise error; (d) comparison of g|_{x_1=1/2} and its reconstruction (dashed curve: the exact function; solid curve: the estimated function).

5. Conclusions

We studied backward parabolic equations with time-dependent coefficients in a parallelepiped. The problem is formulated as a least squares problem and discretized by a splitting method. The gradient of the discretized objective functional is calculated via an adjoint discretized problem rather than by the formula given by the continuous problem setting, which avoids approximation errors in its evaluation. The discretized optimization problem is then solved by the conjugate gradient method with an a posteriori stopping rule proposed by Nemirovskii, which has been proved to be of optimal order. The algorithm has been tested on several examples, which showed the efficiency of the method. The error estimates and convergence rates of the method when the noise level and the space and time step sizes approach zero have not been considered in this paper and remain open. Similar research on more complicated linear and nonlinear equations is in progress.

Acknowledgements

This work is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grantnumber 101.02-2011.50 and by the Grant B2011-TN08-01 of the Ministry of Education and Training.

References

[1] O.A. Ladyzhenskaya, V.A. Solonnikov, N.N. Ural'tseva, Linear and Quasilinear Equations of Parabolic Type, American Mathematical Society, 1968.
[2] M.M. Lavrent'ev, V.G. Romanov, S.P. Shishatskii, Ill-posed Problems of Mathematical Physics and Analysis, Amer. Math. Soc., Providence, RI, 1986.
[3] J. Baumeister, Stable Solution of Inverse Problems, Friedr. Vieweg & Sohn, Braunschweig, 1987.
[4] N. Boussetila, F. Rebbani, Optimal regularization method for ill-posed Cauchy problems, Electron. J. Differential Equations 147 (2006) 1–15.
[5] A. Carasso, The backward beam equation and the numerical computation of dissipative equations backwards in time, in: Improperly Posed Boundary Value Problems (Conf., Univ. New Mexico, Albuquerque, N.M., 1974), in: Res. Notes in Math., No. 1, Pitman, London, 1975, pp. 124–157.
[6] Dinh Nho Hào, Nguyen Van Duc, Stability results for the heat equation backward in time, J. Math. Anal. Appl. 353 (2) (2009) 627–641.



[7] Dinh Nho Hào, Nguyen Van Duc, D. Lesnic, Regularization of parabolic equations backward in time by a non-local boundary value problem method, IMA J. Appl. Math. 75 (2) (2010) 291–315.
[8] Dinh Nho Hào, Nguyen Van Duc, H. Sahli, A non-local boundary value problem method for parabolic equations backward in time, J. Math. Anal. Appl. 345 (2) (2008) 805–815.
[9] V. Isakov, Inverse Problems in Partial Differential Equations, Springer, New York, Berlin, 1998.
[10] K. Miller, Stabilized quasi-reversibility and other nearly-best-possible methods for non-well-posed problems, in: Symposium on Non-Well-Posed Problems and Logarithmic Convexity (Heriot-Watt Univ., Edinburgh, 1972), in: Lecture Notes in Math., vol. 316, Springer, Berlin, 1973, pp. 161–176.
[11] L. Payne, Improperly Posed Problems in Partial Differential Equations, SIAM, Philadelphia, 1975.
[12] S. Agmon, L. Nirenberg, Properties of solutions of ordinary differential equations in Banach spaces, Comm. Pure Appl. Math. 16 (1963) 121–239.
[13] Dinh Nho Hào, Nguyen Van Duc, Stability results for backward parabolic equations with time-dependent coefficients, Inverse Problems 27 (2) (2011) 025003.
[14] B.L. Buzbee, A. Carasso, On the numerical computation of parabolic problems for preceding times, Math. Comp. 27 (1973) 237–266.
[15] P. Manselli, K. Miller, Dimensionality reduction methods for efficient numerical solution, backward in time, of parabolic equations with variable coefficients, SIAM J. Math. Anal. 11 (1980) 147–159.
[16] Dinh Nho Hào, A noncharacteristic Cauchy problem for linear parabolic equations II: a variational method, Numer. Funct. Anal. Optim. 13 (5–6) (1992) 541–564.
[17] Dinh Nho Hào, A noncharacteristic Cauchy problem for linear parabolic equations III: a variational method and its approximation schemes, Numer. Funct. Anal. Optim. 13 (5–6) (1992) 565–583.
[18] Dinh Nho Hào, Methods for Inverse Heat Conduction Problems, Peter Lang Verlag, Frankfurt/Main, Bern, New York, Paris, 1998.
[19] A.S. Nemirovskii, The regularizing properties of the adjoint gradient method in ill-posed problems, Zh. Vychisl. Mat. Mat. Fiz. 26 (1986) 332–347; Engl. transl. in U.S.S.R. Comput. Math. Math. Phys. 26 (2) (1986) 7–16.
[20] Dinh Nho Hào, Nguyen Trung Thành, H. Sahli, Splitting-based gradient method for multi-dimensional inverse conduction problems, J. Comput. Appl. Math. 232 (2009) 361–377.
[21] G.I. Marchuk, Methods of Numerical Mathematics, Springer-Verlag, New York, 1975.
[22] G.I. Marchuk, Splitting and alternating direction methods, in: P.G. Ciarlet, J.L. Lions (Eds.), Handbook of Numerical Analysis, vol. 1: Finite Difference Methods, Elsevier Science Publishers B.V., North-Holland, Amsterdam, 1990.
[23] N.N. Yanenko, The Method of Fractional Steps, Springer-Verlag, Berlin, Heidelberg, New York, 1971.
[24] N.T. Thành, Infrared Thermography for the Detection and Characterization of Buried Objects, Ph.D. Thesis, Vrije Universiteit Brussel, Brussels, Belgium, 2007.