
APPLICABLE ANALYSIS
https://doi.org/10.1080/00036811.2020.1758308

Regularization of linear ill-posed problems involving multiplication operators

P. Mathé^a, M. T. Nair^b and B. Hofmann^c

^a Weierstraß Institute for Applied Analysis and Stochastics, Berlin, Germany; ^b Department of Mathematics, IIT Madras, Chennai, India; ^c Department of Mathematics, Chemnitz University of Technology, Chemnitz, Germany

ABSTRACT
We study regularization of ill-posed equations involving multiplication operators when the multiplier function is positive almost everywhere and zero is an accumulation point of the range of this function. Such equations naturally arise from equations based on non-compact self-adjoint operators in Hilbert space, after applying unitary transformations arising out of the spectral theorem. For classical regularization theory, when noisy observations are given and the noise is deterministic and bounded, non-compactness of the ill-posed equations is a minor issue. However, for statistical ill-posed equations with non-compact operators less is known if the data are blurred by white noise. We develop a theory for spectral regularization with emphasis on this case. In this context we highlight several aspects; in particular, we discuss the intrinsic degree of ill-posedness in terms of rearrangements of the multiplier function. Moreover, we address the modifications of classical regularization schemes required for non-compact statistical problems, and we introduce the concept of the effective ill-posedness of the operator equation under white noise. This study is concluded with prototypical examples of such equations, namely deconvolution equations and certain final value problems in evolution equations.

ARTICLE HISTORY
Received 15 August 2019
Accepted 8 April 2020

COMMUNICATED BY
Irwin Yousept

KEYWORDS
Statistical ill-posed problem; non-compact operator; regularization; degree of ill-posedness

2010 MATHEMATICS SUBJECT CLASSIFICATIONS
47A52; 62G08; 65J22

1. Introduction, background

This study is devoted to spectral regularization, with a focus on multiplication operators, for finding stable approximate solutions to ill-posed linear operator equations

A x = y (1)

from noisy data of the right-hand side y. In this context, the operator A : H → H, mapping in the separable real Hilbert space H and possessing a non-closed range R(A), is assumed to be bounded, self-adjoint and positive. Halmos' version of the Spectral Theorem relates the above problem (1) to multiplication equations.

Fact: For every bounded self-adjoint operator A : H → H there is a σ-finite measure space (S, Σ, μ), a real-valued essentially bounded function b ∈ L^∞(S, Σ, μ) and an isometry U : H → L^2(S, Σ, μ) such that UAU^{-1} = M_b, where M_b is the multiplication operator, assigning f ∈ L^2(S, Σ, μ) ↦ b · f ∈ L^2(S, Σ, μ).

CONTACT P. Mathé [email protected]

© 2020 Informa UK Limited, trading as Taylor & Francis Group


Therefore, we fix a measure space (S, Σ, μ) on the set S with a σ-finite measure μ defined on the σ-algebra Σ, and consider instead of (1) the equation

b(s) f(s) = g(s), s ∈ S, (2)

in the setting of the Hilbert space L^2(S, Σ, μ). Equation (2) is to hold μ-almost everywhere (μ-a.e.). A detailed discussion of this Fact can be found in [1].

Equations of type (2) are typically associated with equations with non-compact operators. In the context of inverse problems, these were considered in [2] (see also [3]), with applications in groundwater filtration, by using projection methods. However, in this work we confine ourselves to spectral regularization, given in terms of 'filter functions' g_α, parametrized by a parameter α > 0.

Within the area of regularization theory, this approach was used in [4]. In the application area of ultrafast transmission electron microscopy, the problem of reconstructing density matrices also leads to an equation of the type (2), see [5]. The authors in [6] use this representation to show the existence of source conditions in Hilbert spaces. Classical regularization approaches for solving (1) by means of variational regularization, or iterative regularization techniques, are outlined in various monographs, for example in [7] and [8–11].

Since within this study the linear operator A is assumed to be bounded, self-adjoint and positive, there is a constant b̄ > 0 such that the multiplier function b in (2) obeys the inequalities 0 < b(s) ≤ b̄ < ∞ for almost all s ∈ S. However, as a consequence of the ill-posedness of Equation (1), which implies that zero is an accumulation point of the spectrum of A, the function b must have essential zeros, which means that ess inf_{s∈S} b(s) = 0.

So, the recovery of the element x ∈ H in (1) from noisy data

y^δ := Ax + δη (3)

of the right-hand side y carries over to the reconstruction of the solution f(s), s ∈ S, of Equation (2) from noisy data

g^δ := U y^δ = b · (Ux) + δ(Uη) (4)

of the right-hand side g = U(Ax). The variable η turns into the noise ξ := Uη, and further properties will be given in Definitions 12 and 13 below. The analysis will be different for bounded deterministic noise and for statistical white noise.

Thus, we consider the reconstruction of the function f in the Hilbert space L^2(S, Σ, μ) from the knowledge of the noisy data

g^δ(s) := b(s) f(s) + δ ξ(s), s ∈ S, (5)

where b ∈ L^∞(S, Σ, μ) is given, and δ > 0 denotes the noise level.

Statistical inverse problems under white noise and the reduction to multiplication problems as in (5) were discussed in [12]. Multiplier equations as in (2) were studied in [13] and [14] from the regularization point of view for S = (0, 1) and μ being the Lebesgue measure, which we throughout denote by λ.

The outline of the remainder of this paper is as follows: In Section 2 we describe the framework for the analysis, and we also provide several auxiliary results that might be of independent interest. Section 3 is devoted to the error analysis. The main results, presented in Propositions 15 & 21, yield upper bounds for the regularization error under bounded deterministic and white noise, respectively. It is seen that the error bound for spectral regularization under bounded deterministic noise is similar to the case when the operator A is compact. In contrast, for statistical ill-posed problems non-compactness becomes an issue, and a suitable modification of spectral regularization schemes is required to bound the noise propagation. Finally, Section 4 exhibits that the well-known (and typically non-compact) deconvolution and final value problems fit the considered framework after turning from operator equations to the current setup by means of the Fourier transform. Moreover, it is shown that multiplication operators also occur in inverse problems in mathematical finance.


2. Notation and auxiliary results

In this section, we shall first discuss the impact of properties of the multiplier function b on the intrinsic difficulty of the inverse problem. Then we turn to the concept of regularization schemes. Finally, we introduce the concept of solution smoothness.

2.1. Degree of ill-posedness incurred by the multiplier function b

As was mentioned in Section 1, ill-posedness of the multiplication problem (2) is a consequence of having zero as an accumulation point of the essential range of b. For the character of the ill-posedness, however, the location of the essential zeros of the function b should not be relevant. Therefore, a normalization of the function b in (2) is desirable. In this context, the increasing rearrangement of b was considered in the study [13]. For such a setting, we let b^∗ denote the increasing rearrangement

b^∗(t) := sup {τ : μ({s : b(s) ≤ τ}) ≤ t}, t > 0.

However, this approach is limited to underlying S and μ with finite measure value μ(S).

Another normalization is the decreasing rearrangement b_∗ of the multiplier function b, which is based on the distribution function d_b, defined by d_b(t) := μ({s ∈ S : b(s) > t}) for t > 0. Then we let the decreasing rearrangement of b be given as

b_∗(t) := inf {τ > 0 : d_b(τ) ≤ t}, 0 < t < μ(S).

Note that b_∗ is defined on [0, μ(S)), equipped with the Lebesgue measure λ. In the context of ill-posed equations this normalization was first used in [15].

We also notice that for infinite measures, that is, for μ(S) = ∞, the function b_∗ may take infinite values. Therefore, we confine ourselves to functions b satisfying the following assumption.

Assumption 1: If μ(S) = ∞ then the function b is assumed to vanish at infinity, in the sense that μ({s ∈ S : b(s) > t}) is finite for every t > 0.

Remark 1: Assumption 1 is important to guarantee the existence of a decreasing rearrangement b_∗ which is equimeasurable with b, meaning that it has the same distribution function as b, i.e.

λ({τ : b_∗(τ) > t}) = μ({s ∈ S : b(s) > t}), t > 0.

The characterization of cases when the function b_∗ is equimeasurable with b was first given by Day in [16], and for infinite measures μ(S) = ∞ Assumption 1 is known to be sufficient to guarantee this; see [17, Chapt. VII] for details. For calculus with decreasing rearrangements we also refer to [18, Chapt. 2]. Moreover, functions vanishing at infinity are important in analysis, see [19, Chapt. 3].
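Both rearrangements can be approximated numerically. The following sketch (Python/NumPy; the uniform grid and the sample function b(s) = s(1 − s) are our own illustrative choices, not taken from the study) sorts sampled values of b, which on a uniform grid over a finite-measure S is a discrete analogue of the definitions via the distribution function:

```python
import numpy as np

# Discrete rearrangements on S = [0, a] with Lebesgue measure: on a uniform
# grid, sorting the sampled values of b gives the increasing rearrangement,
# and reversing that order gives the decreasing rearrangement.
def rearrangements(b, a=1.0, n=100_000):
    """Return grid s and sampled increasing/decreasing rearrangements of b."""
    s = (np.arange(n) + 0.5) / n * a      # midpoints of a uniform grid on [0, a]
    vals = b(s)
    b_inc = np.sort(vals)                  # discrete b^*(t), t = k a / n
    b_dec = b_inc[::-1]                    # discrete b_*(t)
    return s, b_inc, b_dec

# Example: b(s) = s(1 - s) has zeros at both endpoints of [0, 1].
s, b_inc, b_dec = rearrangements(lambda s: s * (1.0 - s))
assert np.all(np.diff(b_inc) >= 0) and np.all(np.diff(b_dec) <= 0)

# Equimeasurability (Remark 1): the distribution functions of b and b_* agree,
# here checked at the level t = 0.1 on the grid.
t = 0.1
d_b = np.mean(s * (1.0 - s) > t)           # fraction of S where b > t
d_bstar = np.mean(b_dec > t)               # fraction of [0, 1) where b_* > t
assert abs(d_b - d_bstar) < 1e-3
```

Since the sorted array is a permutation of the sampled values, the two empirical distribution functions agree exactly on the grid.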

For the subsequent analysis, we shall first assume that our focus is on the Lebesgue measure μ = λ, either on [0, ∞) for the decreasing rearrangement, or on some bounded interval [0, a] for the increasing rearrangement. In such a case, Σ denotes the corresponding Borel σ-algebra. In fact, one may take a := ‖b‖_∞. The corresponding analysis extends to measures μ ≪ λ with density dμ/dλ which obeys 0 < c ≤ dμ/dλ ≤ C < ∞. If this is the case then it is easily seen that the increasing rearrangements b^∗_μ and b^∗_λ of b corresponding to μ and λ, respectively, satisfy

b^∗_μ(ct) ≤ b^∗_λ(t) ≤ b^∗_μ(Ct), t > 0.

Similar arguments apply to the decreasing rearrangement. Thus, the asymptotic results, as these will be established for the Lebesgue measure λ, find their counterparts for other measures μ.


The first observation concerns the decreasing rearrangement. For M > 0 we assign the truncated function b_M(t) := b(t) χ_(M,∞)(t), and the shifted (to zero) version b^M(t) := b_M(t + M).

Proposition 1: We have that

(b_M)_∗(t) = (b^M)_∗(t) ≤ b_∗(t), t > 0.

Proof: The equality is a result of the translation invariance of the Lebesgue measure, and the inequality follows from the fact that the decreasing rearrangement is order-preserving. □

Thus, the decreasing rearrangement does not take into account any zeros which are present on bounded domains. Only the behavior at infinity is reflected.

For the increasing rearrangement we shall follow a constructive approach. Here we shall assume that the function b is piece-wise continuous and has finitely many zeros. Specifically, we assume a representation

b(s) = ∑_{j∈A_+} b_j(s − s_j) + ∑_{j∈A_−} b_j(s_j − s), 0 ≤ s ≤ a, (6)

where

(i) the reals s_j (j = 1, …, m) are the (distinct) locations of the zeros,
(ii) for each j = 1, …, m we have that b_j(0) = 0, and there is a neighborhood of zero [0, a_j) such that
  • b_j : [0, a_j] → R_+ is continuous and strictly increasing, and
  • ess inf_{s>a_j} b_j > 0,
(iii) the set A = {1, …, m} = A_+ ∪ A_− is decomposed into two disjoint subsets A_+ and A_−, possibly empty, and
(iv) there is one function, say b_k, such that its inverse b_k^{-1} dominates¹ all other functions b_j^{-1}, i.e. b_j^{-1} ≼ b_k^{-1}.

Thus, the function b is a superposition of a function bounded away from zero, and of increasing and decreasing parts. We also stress that domination as in item (iv) does not extend from functions f^{-1}, g^{-1} to the inverse functions f, g unless additional assumptions are made; we refer to [20] for a discussion.

Under the above assumptions, we state the following result.

Proposition 2: Let the function b be as in Equation (6), and let b_k be the function from item (iv) above. Then there is a constant C ≥ 1 such that

b_k(s) = (b_k)^∗(s) ≥ b^∗(s) ≥ (b_k)^∗(s/(Cm)) = b_k(s/(Cm)),

for sufficiently small s > 0.

Proof: Clearly, the function b_k is increasing near zero, such that it coincides with its increasing rearrangement, which explains the outer equalities.


To establish the inner inequalities we argue as follows. Recall that we need to control

λ(b ≤ τ) = λ({ s : ∑_{j∈A_+} b_j(s − s_j) + ∑_{j∈A_−} b_j(s_j − s) ≤ τ }).

If τ > 0 is small enough then, by item (ii), the contributions bounded away from zero can be neglected, and the sub-level sets {b_j ≤ τ} (j = 1, …, m) are disjoint intervals, which in turn yields

λ(b ≤ τ) = ∑_{j∈A_+} λ(b_j(s − s_j) ≤ τ) + ∑_{j∈A_−} λ(b_j(s_j − s) ≤ τ)
         = ∑_{j∈A_+} b_j^{-1}(τ) + ∑_{j∈A_−} b_j^{-1}(τ)
         = ∑_{j∈A} b_j^{-1}(τ).

By the domination assumption from item (iv) we find a constant C ≥ 1 such that

b_k^{-1}(τ) ≤ ∑_{j∈A} b_j^{-1}(τ) ≤ C m b_k^{-1}(τ). (7)

Now, taking the sup over all τ > 0 such that λ(b ≤ τ) ≤ s, we find that

b_k(s) ≥ b^∗(s) ≥ b_k(s/(mC)),

for sufficiently small s > 0. This completes the proof. □

The above proposition asserts (heuristically) that the part in the decomposition of b in (6) which has the highest-order zero determines the asymptotics of the increasing rearrangement.
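This heuristic can be checked numerically. In the following sketch (Python/NumPy; the multiplier b(s) = s(s − 1/2)^2, the grid size and the evaluation points are our own illustrative choices, not from the study), b has a first-order zero at s = 0 and a second-order zero at s = 1/2, so Proposition 2 predicts b^∗(t) ∼ const · t^2 for small t:

```python
import numpy as np

# Multiplier with zeros of orders 1 (at s = 0) and 2 (at s = 1/2) on [0, 1]:
n = 1_000_000
s = (np.arange(n) + 0.5) / n
b = s * (s - 0.5) ** 2
b_inc = np.sort(b)                     # discrete increasing rearrangement b^*(t), t = k/n

def bstar(t):
    """Sampled value of the increasing rearrangement at t."""
    return b_inc[int(t * n)]

# Estimated local order of b^* at small t: should be close to 2 (the highest
# zero order), not 1, since the order-2 zero dominates the sub-level measure.
slope = np.log(bstar(0.02) / bstar(0.005)) / np.log(0.02 / 0.005)
assert 1.8 < slope < 2.2
```

Indeed λ(b ≤ τ) ≈ 4τ + 2√(2τ) here, so the √τ contribution of the second-order zero dominates for small τ, and the fitted log-log slope of b^∗ comes out near 2.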

2.2. Regularization

For the reconstruction of f(s), s ∈ S, we shall use regularization schemes g_α : [0, ∞) → R_+, parametrized by α > 0, see e.g. [21].

Definition 3 (regularization scheme): A family (g_α) of real-valued Borel-measurable functions g_α(t), t ≥ 0, α > 0, is called a regularization if there are constants C_{-1} > 0 and C_0 ≥ 1 such that

(I) for each t > 0 we have t g_α(t) → 1 as α → 0,
(II) |g_α(t)| ≤ C_{-1}/α for α > 0, and
(III) the function R_α(t) := 1 − t g_α(t), which is called the residual function, satisfies |R_α(t)| ≤ C_0 for all t ≥ 0 and α > 0.

For the case of statistical noise, additional assumptions have to be made. These will be introduced and discussed later.

We apply a regularization (g_α) to a function b in the way

[g_α(b)](s) := [g_α ∘ b](s) = g_α(b(s)), s ∈ S.

Having chosen a regularization g_α, and given data g^δ, we consider the function

f_α^δ(s) := [g_α(b)](s) g^δ(s), s ∈ S, (8)

or in short f_α^δ := g_α(b) g^δ, as a candidate for the approximate solution.


For the subsequent error analysis, the following property of a regularization proves important; again we refer to [21, 22].

Definition 4 (qualification): Let ϕ be any index function. A regularization (g_α) is said to have qualification ϕ if there is a constant C_ϕ > 0 such that

sup_{t≥0} |R_α(t)| ϕ(t) ≤ C_ϕ ϕ(α), α > 0.

Example 5 (spectral cut-off): Let the regularization be given as

g_α^{c-o}(t) := { 1/t, for t > α;  0, else }.

This obeys the requirements of a regularization with C_{-1} = 1. It has arbitrary qualification; that is, for any index function, the requirement in Definition 4 will be satisfied. The corresponding residual function is R_α = χ_{{t : t ≤ α}}, so that for a function b on S, R_α(b) = χ_{{s : b(s) ≤ α}}.

Example 6 (Lavrent'ev regularization): This method corresponds to the function

g_α(t) := 1/(t + α), t ≥ 0, α > 0.

Lavrent'ev regularization is known to have at most 'linear' qualification. More generally, index functions ϕ(t) := t^ν, t > 0, are qualifications whenever the exponent ν satisfies 0 < ν ≤ 1.
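The two filter families and the defining properties (I)–(III) can be checked numerically. The following sketch (Python/NumPy; our own illustration under the stated assumptions, not code from the study) implements both filters and verifies the constants C_{-1} = C_0 = 1 and the linear qualification ϕ(t) = t on a grid:

```python
import numpy as np

def g_cutoff(t, alpha):
    """Spectral cut-off: g_alpha(t) = 1/t for t > alpha, 0 otherwise."""
    t = np.asarray(t, dtype=float)
    # inner where avoids division by zero at t = 0
    return np.where(t > alpha, 1.0 / np.where(t > alpha, t, 1.0), 0.0)

def g_lavrentiev(t, alpha):
    """Lavrent'ev: g_alpha(t) = 1/(t + alpha)."""
    return 1.0 / (np.asarray(t, dtype=float) + alpha)

def residual(g, t, alpha):
    """Residual function R_alpha(t) = 1 - t * g_alpha(t) from Definition 3."""
    return 1.0 - np.asarray(t, dtype=float) * g(t, alpha)

t = np.linspace(0.0, 1.0, 1001)
for g in (g_cutoff, g_lavrentiev):
    for alpha in (0.1, 0.01):
        assert np.max(np.abs(g(t, alpha))) <= 1.0 / alpha + 1e-12    # item (II), C_{-1} = 1
        assert np.max(np.abs(residual(g, t, alpha))) <= 1.0 + 1e-12  # item (III), C_0 = 1
        # linear qualification phi(t) = t with C_phi = 1:
        assert np.max(np.abs(residual(g, t, alpha)) * t) <= alpha + 1e-12
```

For the cut-off, R_α·t = t·χ_{t≤α} ≤ α; for Lavrent'ev, R_α(t)·t = αt/(t + α) ≤ α, which is the 'linear' qualification mentioned above.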

For infinite measures μ, and under white noise, it will be seen to be important that the regularization (g_α) vanishes for small arguments 0 ≤ t ≤ α. This is formalized in

Assumption 2: For each α > 0 the function g_α vanishes on the set {t ≥ 0 : t ≤ α}.

This assumption holds true for spectral cut-off, but it is not fulfilled for most other regularizations. However, we can modify any regularization to obey Assumption 2.

Lemma 7: Let (g_α) be any regularization with constants C_{-1} and C_0. Assign

g̃_α(t) := χ_(α,∞)(t) g_α(t), t > 0.

Then (g̃_α) is a regularization scheme with the same constants C_{-1} and C_0. Moreover, an index function ϕ is a qualification of (g_α) if and only if it is a qualification of (g̃_α), with constant C̃_ϕ = max{C_ϕ, C_0}.

Proof: We verify the properties. For t > α the regularizations g_α and g̃_α coincide, thus item (I) holds true. Also, |g̃_α(t)| ≤ |g_α(t)|, such that we can keep the constant C_{-1}. Next, it is easy to check that R̃_α(t) = χ_(α,∞)(t) R_α(t) + χ_(0,α](t), which allows us to prove the second assertion, after recalling that C_0 ≥ 1.

Finally, we bound |R̃_α(t)| ϕ(t). Plainly, if t ≤ α then |R̃_α(t)| ϕ(t) ≤ C_0 ϕ(α). Otherwise, for t > α, both functions R_α and R̃_α coincide. This completes the proof. □

Therefore, we may tacitly assume that the regularization of choice is accordingly modified to meet Assumption 2.
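The modification of Lemma 7 is a one-line truncation in code. The following sketch (Python/NumPy; our own illustration, applied to the Lavrent'ev filter from Example 6, not from the study) shows that the truncated filter meets Assumption 2 while keeping the residual bounded by C_0 ≥ 1:

```python
import numpy as np

def g_lavrentiev(t, alpha):
    """Lavrent'ev filter from Example 6."""
    return 1.0 / (np.asarray(t, dtype=float) + alpha)

def truncate(g):
    """Lemma 7 modification: t -> chi_(alpha, inf)(t) * g_alpha(t)."""
    def g_mod(t, alpha):
        t = np.asarray(t, dtype=float)
        return np.where(t > alpha, g(t, alpha), 0.0)
    return g_mod

g_mod = truncate(g_lavrentiev)
alpha = 0.05
t = np.linspace(0.0, 1.0, 2001)

assert np.all(g_mod(t[t <= alpha], alpha) == 0.0)   # Assumption 2 holds
# above the truncation level the filter is unchanged:
assert np.allclose(g_mod(t[t > alpha], alpha), g_lavrentiev(t[t > alpha], alpha))
# the residual of the modified filter equals 1 on [0, alpha], hence |R| <= C_0 = 1:
R_mod = 1.0 - t * g_mod(t, alpha)
assert np.max(np.abs(R_mod)) <= 1.0 + 1e-12
```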


2.3. Solution smoothness

In order to quantify the error bounds, we need to specify the way in which the solution smoothness will be expressed. In inverse problems, solution smoothness is typically related to the operator governing the equation; in our case, this is encoded in the multiplier function b ∈ L^∞(S, Σ, μ).

The strongest version, first used in [23], is given by the point-wise characteristics, i.e. the distribution function F_f : (0, ∞) → R_+, given through

F_f^2(t) := ∫_S χ_{{s : b(s) ≤ t}} |f(s)|^2 dμ(s), t > 0.

This is a bounded, non-decreasing function with F_f(t) → 0 as t → 0. The subsequent results can also be found in [23]; we refer to the study [24] for the references.

At least two points are worth mentioning. First, from [24, Proposition 1], we know that there is a constant 0 < c ≤ 1, depending on the chosen regularization g_α, for which the residual function is bounded from below as

‖R_α(b)f‖_{L^2(S,Σ,μ)} ≥ F_f(cα), 0 < α ≤ ‖b‖_{L^∞(S,Σ,μ)}.

Hence, the residual norm cannot decay faster than the distribution function. Secondly, Theorem 2 in [24] asserts identical asymptotics in the power-type case: Suppose that g_α is a regularization with qualification ϕ(t) = t^ν, t > 0. Then, for 0 < κ ≤ ν, we have F_f(t) = O(t^κ) as t → 0 if and only if ‖R_α(b)f‖_{L^2(S,Σ,μ)} = O(α^κ) as α → 0. For comprehensive studies of different kinds of solution smoothness and its impact on errors in regularization for linear operator equations, we refer to [25].

A more common, although less sharp, way to measure smoothness is given in terms of source conditions based on index functions. Here, and throughout, by an index function we mean a strictly increasing continuous function ϕ : (0, ∞) → [0, ∞) such that lim_{t→+0} ϕ(t) = 0.

Definition 8 (source condition): A function f ∈ L^2(S, Σ, μ) obeys a source condition with respect to the index function ϕ and the multiplier function b if

f(s) = [ϕ(b)](s) v(s) := ϕ(b(s)) v(s), s ∈ S, μ-a.e.,

with ‖v‖_{L^2(S,Σ,μ)} ≤ 1.

Remark 2: We briefly discuss the meaning of source conditions as given in Definition 8. Let the operator A and the multiplication operator M_b be related as described in the introduction. Then it is clear from the relation between the noisy data representations (3) and (4) that a source condition f = Ux = ϕ(b)v, ‖v‖_{L^2(S,Σ,μ)} ≤ 1, yields the representation x = ϕ(A)w with w := U^{-1}v ∈ H and ‖w‖_H = ‖v‖_{L^2(S,Σ,μ)} ≤ 1. This is the standard form of general smoothness in terms of source conditions, given with respect to the forward operator A from (3), see [8, 21].

The main point is as follows: If the solution admits a source condition as in Definition 8, with function ϕ, and if the chosen regularization has this as a qualification, then

‖R_α(b)f‖_{L^2(S,Σ,μ)} ≤ ‖R_α(b(s)) ϕ(b(s)) v(s)‖_{L^2(S,Σ,μ)} ≤ ‖R_α(b(s)) ϕ(b(s))‖_∞ ‖v‖_{L^2(S,Σ,μ)} ≤ C_ϕ ϕ(α). (9)

So, source conditions are a convenient way to bound ‖R_α(b)f‖_{L^2(S,Σ,μ)}, and we shall use this for the error bounds below.
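The bound (9) can be verified on a discretized example. In the following sketch (Python/NumPy; the choices S = (0, 1) with Lebesgue measure, b(s) = s, ϕ(t) = t, the spectral cut-off, and the random source element are ours, for illustration only), the residual term is computed for a solution f = ϕ(b)v with ‖v‖ = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
s = (np.arange(n) + 0.5) / n              # midpoint grid on S = (0, 1)
w = 1.0 / n                               # quadrature weight (Lebesgue measure)
b = s
phi = lambda t: t                         # index function, a qualification of the cut-off

v = rng.standard_normal(n)
v /= np.sqrt(np.sum(v ** 2) * w)          # normalize so that ||v||_{L^2} = 1
f = phi(b) * v                            # source condition of Definition 8

for alpha in (0.1, 0.01, 0.001):
    # for spectral cut-off the residual is R_alpha(b) = chi_{b <= alpha}:
    R_f = np.where(b <= alpha, f, 0.0)
    bias = np.sqrt(np.sum(R_f ** 2) * w)
    assert bias <= phi(alpha) + 1e-12     # the bound (9) with C_phi = 1
```

Pointwise, |R_α(b)f| = χ_{b≤α} ϕ(b) |v| ≤ ϕ(α) |v|, so the discrete computation reproduces the chain of inequalities in (9) exactly.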


It is interesting to relate source conditions to 'classical' smoothness in the sense of Hilbertian Sobolev spaces H^p(S, μ). Precisely, for a smoothness parameter p > 0 we let H^p(S, μ) be the Hilbert space of all functions f : S → R such that

‖f‖_p := ( ∫_S |f(s)|^2 (1 + |s|^2)^p dμ(s) )^{1/2} < ∞.

The question is under which conditions this type of smoothness can be expressed in terms of source conditions as in Definition 8. We start with the following technical result.

Lemma 9: Suppose that μ(S) = ∞ and that the function b vanishes at infinity (cf. Assumption 1). Moreover, let there exist positive constants M < ∞ and c > 0 such that

b(s) ≥ c for |s| ≤ M,

and

μ({x : b(x) > b(s)}) ≍ |s| for |s| > M. (10)

Then the function ϕ_∗, given for sufficiently small t > 0 as

ϕ_∗(t) := 1/μ({x : b(x) > t}), (11)

constitutes an index function. Moreover, we have the asymptotics

ϕ_∗(b(s)) ≍ 1/|s| as |s| → ∞. (12)

Example 10 (power-type decay on [0, ∞)): The assumptions of Lemma 9 are fulfilled if we consider, for κ > 0, the functions f ∈ L^2([0, ∞), μ) with Lebesgue measure μ, and

b(s) := 1/(1 + s^{1/κ}), 0 ≤ s < ∞.

For sufficiently small t > 0, we have in this case

μ({x : b(x) > t}) = ((1 − t)/t)^κ,

and hence ϕ_∗(t) ≍ t^κ as t → +0.
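The level-set computation of Example 10 is elementary to check in code. The following sketch (Python/NumPy; our own sanity check with κ = 2, not from the study) confirms the closed form of the level-set measure and the asymptotics ϕ_∗(t) ≍ t^κ:

```python
import numpy as np

kappa = 2.0
b = lambda s: 1.0 / (1.0 + s ** (1.0 / kappa))

for t in (0.1, 0.02, 0.005):
    # b(s) > t  <=>  s < ((1 - t)/t)^kappa, so the Lebesgue measure of the
    # level set is exactly the right endpoint of that interval:
    endpoint = ((1.0 - t) / t) ** kappa
    assert abs(b(endpoint) - t) < 1e-12          # b attains the level t there
    phi_star = 1.0 / endpoint                    # phi_*(t) from (11)
    # phi_*(t) / t^kappa = (1 - t)^{-kappa} -> 1 as t -> +0:
    assert abs(phi_star / t ** kappa - 1.0) < 3.0 * t
```

The ratio ϕ_∗(t)/t^κ equals (1 − t)^{-κ}, which tends to 1 and deviates from 1 by roughly κt for small t, matching the tolerance used above.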

Corollary 11: Under the assumptions of Lemma 9 consider the function ϕ_∗ as in (11). The function f belongs to H^p(S, μ) if and only if it obeys a source condition with respect to (a multiple of) the function ϕ_∗^p(t), t > 0.

Proof: Under the assumptions of Lemma 9 for the function b there are constants 0 < c < C < ∞ such that

c ≤ inf_{s∈S} (1 + |s|^2) ϕ_∗^2(b(s)) ≤ sup_{s∈S} (1 + |s|^2) ϕ_∗^2(b(s)) ≤ C. (13)

This is easily seen for |s| ≤ M, as given in Lemma 9. For |s| > M we see that

(1 + |s|^2) ϕ_∗^2(b(s)) ≍ (1 + |s|^2) |s|^{-2}.

But for |s| ≥ M we have that 1 ≤ (1 + |s|^2) |s|^{-2} ≤ (1 + M^2) M^{-2}, where the right-hand side bound follows from the monotonicity of x ↦ (1 + x)/x, x > 0, and this proves (13). Now, suppose that


f ∈ H^p(S, μ). Consider the element w(s) := f(s)/ϕ_∗^p(b(s)), where ϕ_∗ is as above. It is enough to show that w ∈ L^2(S, μ), i.e. that it serves as a source element. We have that

∫_S |w(s)|^2 dμ(s) ≤ ∫_S |f(s)|^2 (1 + |s|^2)^p dμ(s) · sup_{s∈S} 1/[(1 + |s|^2)^p |ϕ_∗^p(b(s))|^2]
= ‖f‖_p^2 sup_{s∈S} 1/[(1 + |s|^2) ϕ_∗^2(b(s))]^p,

and the latter is finite by (13). On the other hand, under a source condition for f, we can bound

∫_S |f(s)|^2 (1 + |s|^2)^p dμ(s) ≤ ‖w‖_{L^2(S,Σ,μ)}^2 sup_{s∈S} [(1 + |s|^2) ϕ_∗^2(b(s))]^p,

where the supremum is again bounded by (13). The proof is complete. □

3. Error analysis

As stated in the introduction, we shall discuss error bounds both for the classical setup of bounded deterministic noise and for statistical white noise, to be defined now.

Definition 12 (deterministic noise): The noise term ξ = ξ(s) is norm-bounded by one, i.e. ‖ξ‖_{L^2(S,Σ,μ)} ≤ 1.

We shall occasionally adopt the notation ξ_s := ξ(s).

Definition 13 (white noise): There is some probability space (Ω, F, P) such that the family {ξ_s}_{s∈S} constitutes a centered stochastic process² with E ξ_s = 0 for all s ∈ S, and E|ξ_s|^2 = 1, s ∈ S.

We return to the noisy Equation (5). Writing (the unknown) f_α(s) := [g_α(b)](s) g(s), s ∈ S, by (5) and (8) we obtain

f − f_α^δ = [f − g_α(b)g] − [g_α(b)g^δ − g_α(b)g]
         = [f − g_α(b)(bf)] − [g_α(b)g^δ − g_α(b)g]
         = [I − g_α(b)b]f − δ g_α(b)ξ.

Thus, we have the decomposition of the error of the reconstruction f_α^δ in a natural way, by using the residual function R_α, as

f − f_α^δ = R_α(b)f − δ g_α(b)ξ. (14)

The term R_α(b)f is completely deterministic; the noise properties are inherent in g_α(b)ξ only.

3.1. Bounding the bias

The (noise-free) term R_α(b)f in the decomposition (14) gives rise to the bias, defined as

b_f(α) := ‖R_α(b)f‖_{L^2(S,Σ,μ)},

and it is called the profile function in [26]. Bounds for the bias were already established in § 2.3, see (9), and we rely on those in the subsequent discussion.


We briefly highlight the case when μ is a finite measure. It is to be observed that if f ∈ L^∞(S, μ), then

‖R_α(b)f‖_{L^2(S,Σ,μ)} ≤ ‖R_α(b)‖_{L^2(S,Σ,μ)} ‖f‖_{L^∞(S,μ)}. (15)

Example 14: For Lavrent'ev regularization and spectral cut-off, with function b(s) := s^κ, s > 0, κ > 0, we see that

‖R_α(b)‖_{L^2(S,Σ,μ)}^2 = α^2 ∫ (α + s^κ)^{-2} dμ(s) for Lavrent'ev regularization, and
‖R_α(b)‖_{L^2(S,Σ,μ)}^2 = μ({s : s^κ ≤ α}) for spectral cut-off.

From this, we conclude that for Lavrent'ev regularization the bound in (15) is finite only if κ > 1/2, whereas for spectral cut-off it holds for all κ > 0. If μ is the Lebesgue measure λ on (0, 1), then we find that

‖R_α(b)‖_{L^2((0,1),Σ,λ)} ≤ C α^{1/(2κ)}

in either case. A similar bound, relying on Tikhonov regularization, was first given in [13, Theorem 4.5].
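For the spectral cut-off on (0, 1) the residual norm is even available in closed form, since ‖R_α(b)‖^2 = λ({s : s^κ ≤ α}) = α^{1/κ}. The following sketch (Python/NumPy; our own numerical check with κ = 2, not from the study) confirms this via quadrature:

```python
import numpy as np

kappa = 2.0
n = 1_000_000
s = (np.arange(n) + 0.5) / n          # midpoint grid on (0, 1), Lebesgue measure
b = s ** kappa

for alpha in (0.1, 0.01):
    # spectral cut-off: ||R_alpha(b)||^2 = measure of the sub-level set {b <= alpha}
    norm2 = np.mean(b <= alpha)       # midpoint-rule quadrature of the indicator
    assert abs(norm2 - alpha ** (1.0 / kappa)) < 1e-5
```

In particular ‖R_α(b)‖ = α^{1/(2κ)} exactly in this case, matching the rate displayed above with C = 1.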

3.2. Error under deterministic noise

Although the focus of this study is on statistical ill-posed problems, we briefly sketch the corresponding result for bounded deterministic noise as introduced in Definition 12. In this case, we bound the error, starting from the decomposition (14) and using the triangle inequality, for α > 0 as

‖f − f_α^δ‖_{L^2(S,Σ,μ)} ≤ ‖R_α(b)f‖_{L^2(S,Σ,μ)} + δ ‖g_α(b)ξ‖_{L^2(S,Σ,μ)}. (16)

Now, using item (II) in Definition 3, we obtain a bound for the noise term as

δ ‖g_α(b)ξ‖_{L^2(S,Σ,μ)} ≤ δ sup_{s∈S} |g_α(b(s))| ≤ C_{-1} δ/α, α > 0.

This, together with the estimate (9) for the noise-free term, gives the following result.

Proposition 15: Suppose that the solution f satisfies the source condition as in Definition 8, and that a regularization (g_α) is chosen with qualification ϕ. Then

‖f − f_α^δ‖_{L^2(S,Σ,μ)} ≤ C_ϕ ϕ(α) + C_{-1} δ/α, α > 0.

The a priori parameter choice α_∗ = α_∗(ϕ, δ) from solving the equation

α ϕ(α) = δ (17)

yields the error bound

‖f − f_{α_∗}^δ‖_{L^2(S,Σ,μ)} ≤ 2 max{C_ϕ, C_{-1}} ϕ(α_∗), (18)

uniformly for functions f which obey a source condition with respect to the index function ϕ.
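Since t ↦ t ϕ(t) is strictly increasing for an index function ϕ, the calibration equation (17) can be solved by bisection. The following sketch (Python/NumPy; the Hölder-type choice ϕ(t) = t^{1/2}, the bracketing interval and the iteration count are our own assumptions for illustration) computes α_∗ and checks it against the closed form available for power-type ϕ:

```python
import numpy as np

def a_priori_alpha(phi, delta, lo=1e-16, hi=1e4, iters=200):
    """Solve alpha * phi(alpha) = delta by bisection (left side increasing)."""
    f = lambda a: a * phi(a) - delta
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

phi = lambda t: np.sqrt(t)            # qualification phi(t) = t^{1/2}
delta = 1e-6
alpha_star = a_priori_alpha(phi, delta)

assert abs(alpha_star * phi(alpha_star) - delta) < 1e-12
# for phi(t) = t^{1/2}, (17) reads alpha^{3/2} = delta, i.e. alpha_* = delta^{2/3}:
assert abs(alpha_star - delta ** (2.0 / 3.0)) < 1e-10
```

The resulting error bound (18) then behaves like ϕ(α_∗) = δ^{1/3} for this choice of ϕ, the familiar Hölder rate.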


3.3. Error under white noise

Here we assume that the underlying noise is as in Definition 13. Thus, since ξ is a random variable, it is a function of ω ∈ Ω, so that f_α^δ is also a function of ω ∈ Ω. Hence, for each fixed ω ∈ Ω, from (14) we obtain

‖f − f_α^δ(ω)‖_{L^2(S,Σ,μ)}^2 = ‖R_α(b)f‖_{L^2(S,Σ,μ)}^2 − 2δ ⟨R_α(b)f, g_α(b)ξ⟩ + δ^2 ∫_S |g_α(b(s))|^2 |ξ_s(ω)|^2 dμ(s). (19)

The error of the regularization g_α under white noise is measured in the RMS sense, that is, it is defined as

e(f, g_α, δ)^2 := E ‖f − f_α^δ‖_{L^2(S,Σ,μ)}^2, (20)

where the expectation is with respect to the probability P governing the noise process. From the properties of the noise, we deduce from (19) the bias–variance decomposition

E ‖f − f_α^δ‖_{L^2(S,Σ,μ)}^2 = ‖R_α(b)f‖_{L^2(S,Σ,μ)}^2 + δ^2 E ∫_S |g_α(b(s))|^2 |ξ_s(ω)|^2 dμ(s). (21)

The first summand above, the squared bias, is treated as in § 3.1. It remains to bound the variance, that is, the second summand in (21). By interchanging expectation and integration we deduce that

E ∫_S |g_α(b(s))|^2 |ξ_s(ω)|^2 dμ(s) = ∫_S |g_α(b(s))|^2 E|ξ_s(ω)|^2 dμ(s) = ∫_S |g_α(b(s))|^2 dμ(s). (22)

For the above identity it is important to have the right-hand side finite, that is, g_α ∘ b ∈ L^2(S, Σ, μ). In the subsequent analysis we shall distinguish the case of a finite measure μ, i.e. when μ(S) < ∞, and the infinite case μ(S) = ∞.

Plainly, if the measure μ is finite then we have from Definition 3 the uniform bound

∫_S |g_α(b(s))|^2 dμ(s) ≤ (C_{-1}^2/α^2) μ(S), α > 0.
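In the finite-measure case the decomposition (21)/(22) can be illustrated by simulation. The following Monte Carlo sketch (Python/NumPy; the discretization of white noise as i.i.d. standard normal variables per grid point, and all concrete choices of b, f, α, δ, are our own modelling assumptions, not from the study) compares the averaged squared error with the predicted bias plus variance for the spectral cut-off on S = (0, 1):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, delta, alpha = 2_000, 2_000, 0.05, 0.1
s = (np.arange(n) + 0.5) / n
w = 1.0 / n                                   # quadrature weight
b = s
f = np.sqrt(b)                                # some fixed solution
# spectral cut-off filter g_alpha(b) = chi_{b > alpha} / b:
g_alpha = np.where(b > alpha, 1.0 / np.maximum(b, alpha), 0.0)
R_f = f - g_alpha * b * f                     # residual term R_alpha(b) f

bias2 = np.sum(R_f ** 2) * w                  # squared bias in (21)
var = delta ** 2 * np.sum(g_alpha ** 2) * w   # variance term via (22)

err2 = 0.0
for _ in range(reps):
    xi = rng.standard_normal(n)               # discretized white noise
    f_rec = g_alpha * (b * f + delta * xi)    # reconstruction f_alpha^delta from (8)
    err2 += np.sum((f - f_rec) ** 2) * w
err2 /= reps                                  # Monte Carlo estimate of (20)

assert abs(err2 - (bias2 + var)) < 0.02 * (bias2 + var)
```

For the cut-off, the residual R_α(b)f and the filter g_α(b) have disjoint supports, so the cross term in (19) vanishes pointwise here and the agreement is limited only by the Monte Carlo fluctuation of the variance term.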

Otherwise, this need not be the case, as the following example shows.

Example 16: Consider the multiplication operator

g := b · f, f ∈ L^2(R, λ),

where

b(s) := 0 for s < 0, b(s) := s for 0 ≤ s ≤ 1, and b(s) := 1 for s > 1,

with λ denoting the Lebesgue measure on R.


Let (g_α) be an arbitrary regularization. From Definition 3 we know that g_α(1) → 1 as α → 0, and hence there is α₀ > 0 such that g_α(1) ≥ 1/2 for 0 < α ≤ α₀. Therefore, for each s ≥ 1 and 0 < α ≤ α₀ we have that g_α(b(s)) = g_α(1) ≥ 1/2, and

∫_R |g_α(b(s))|² dλ(s) ≥ ∫_1^β |g_α(b(s))|² dλ(s) ≥ (β − 1)/4

for every β > 1, so that the integral ∫_R |g_α(b(s))|² dλ(s) is not finite. Consequently, the multiplication equation with the above b(·) cannot be solved with arbitrary accuracy (as δ → 0) under white noise by using any regularization.

Example 17 (Lavrent'ev regularization, continued): Suppose that μ(S) = ∞, and that b vanishes at infinity. For Lavrent'ev regularization we then see that

∫_S |g_α(b(s))|² dμ(s) ≥ ∫_{s: b(s)≤α} 1/(α + b(s))² dμ(s) ≥ (1/(4α²)) μ({s : b(s) ≤ α}) = ∞.

However, under Assumption 2, we have that

∫_S |g_α(b(s))|² dμ(s) = ∫_{s: b(s)>α} |g_α(b(s))|² dμ(s),

and this will be finite for functions b vanishing at infinity.

We recall the decreasing rearrangement b∗ of the multiplier function b. Since both b and b∗ share the same distribution function, we can use the transformation-of-measure formula to see, for any (measurable) function H : [0, ‖b‖_∞) → R, that

∫_{[0, μ(S))} |H(b∗(t))|² dλ(t) = ∫_S |H(b(s))|² dμ(s).

In particular this holds for the spectral cut-off as in Example 5, used as H(s) := g_{c-o}(b(s)) χ_{{b(s)>α}}, yielding

∫_{b∗>α} 1/|b∗(t)|² dλ(t) = ∫_{b>α} 1/|b(s)|² dμ(s). (23)

We now observe from the definition of regularization functions, see Definition 3, that for an arbitrary regularization g_α we have g_α(t) ≤ (C₀ + 1)/t, and hence that

∫_{b>α} |g_α(b(s))|² dμ(s) ≤ (C₀ + 1)² ∫_{b>α} 1/|b(s)|² dμ(s) = (C₀ + 1)² ∫_{b∗>α} 1/|b∗(t)|² dλ(t).

This gives rise to the following definition.

Definition 18 (statistical effective ill-posedness): Suppose that we are given the function b > 0 on the measure space (S, Σ, μ). For a function b that vanishes at infinity we call the function

D(α) := ( ∫_{b∗>α} 1/|b∗(t)|² dλ(t) )^{1/2}, α > 0, (24)

the statistical effective ill-posedness of the operator.


Example 19 (Spectral cut-off, counting measure): Suppose that S = N and μ is the counting measure assigning μ({j}) = 1, j ∈ N, and that the function j ↦ b(j) is non-increasing with lim_{j→∞} b(j) = 0. Then it vanishes at infinity, and for each α > 0 there will be a maximal finite number N_α with b(N_α) ≥ α > b(N_α + 1).

In this case the statistical effective ill-posedness evaluates as

D(α) = ( Σ_{j=1}^{N_α} 1/b_j² )^{1/2}, α > 0.

This corresponds to the 'degree of ill-posedness for statistical inverse problems' as given in [27]. The bias–variance decomposition from (19) is known to be order optimal, cf. [28].
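In the counting-measure setting of Example 19, D(α) can be computed by direct summation. A minimal sketch, assuming the illustrative multiplier b(j) = 1/j (this choice is not taken from the paper):

```python
import math

def D_alpha(b, alpha, j_max=10**6):
    """D(alpha) = (sum_{j=1}^{N_alpha} 1/b(j)^2)^{1/2}, where N_alpha is the
    largest j with b(j) >= alpha, for a non-increasing multiplier j -> b(j)."""
    total = 0.0
    for j in range(1, j_max + 1):
        if b(j) < alpha:      # j has exceeded N_alpha
            break
        total += 1.0 / b(j) ** 2
    return math.sqrt(total)

# Illustrative multiplier (an assumption): b(j) = 1/j, so N_alpha = floor(1/alpha)
# and D(alpha)^2 = 1^2 + 2^2 + ... + N_alpha^2 = N_alpha(N_alpha+1)(2N_alpha+1)/6.
print(D_alpha(lambda j: 1.0 / j, 0.01))
```

For this multiplier D(α) grows like α^{−3/2} as α → 0, quantifying the ill-posedness in the sense of Definition 18.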

The following bound simplifies the statistical effective ill-posedness, and can often be used.

Lemma 20: Let g_α be any regularization. Under Assumptions 1 and 2 we have that

D(α) ≤ (1/α) √(μ({s : b(s) > α})), α > 0,

and

∫_S |g_α(b(s))|² dμ(s) ≤ (C_{−1}²/α²) μ({s : b(s) > α}).

Proof: The result follows from Definition 3. □

We summarize the preceding discussion as follows. Suppose that μ(S) = ∞, and that Assumptions 1 and 2 hold true. The error decomposition (19) then yields

E ‖f − f_α^δ‖²_{L²(S,Σ,μ)} ≤ ‖R_α(b)f‖²_{L²(S,Σ,μ)} + δ² (C₀ + 1)² D²(α), α > 0. (25)

Using this we obtain the following analog of Proposition 15.

Proposition 21: Suppose that the solution f satisfies the source condition as in Definition 8, and that a regularization g_α is chosen with qualification ϕ. Suppose, in addition, that Assumptions 1 and 2 hold. Then, for the case of white noise ξ,

E ‖f − f_α^δ‖²_{L²(S,Σ,μ)} ≤ C_ϕ² ϕ(α)² + δ² (C₀ + 1)² D²(α), α > 0.

The a priori parameter choice α∗ = α∗(ϕ, D, δ) from solving the equation

ϕ(α) = δ D(α) (26)

yields the error bound

( E ‖f − f_α^δ‖²_{L²(S,Σ,μ)} )^{1/2} ≤ √2 max{C_ϕ, C₀ + 1} ϕ(α∗). (27)

Proof: Follows from Definitions 4 and 18, (21), and (22). □
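Since ϕ is increasing and D is decreasing, Equation (26) has a unique root that can be found by bisection. A sketch with hypothetical model functions ϕ(α) = α and D(α) = α^{−3/2} (both are assumptions chosen only for illustration):

```python
def choose_alpha(phi, D, delta, lo=1e-12, hi=1.0, iters=200):
    """Solve phi(alpha) = delta * D(alpha), cf. eq. (26), by bisection.
    Uses that alpha -> phi(alpha) - delta * D(alpha) is increasing."""
    f = lambda a: phi(a) - delta * D(a)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For phi(a) = a and D(a) = a**-1.5 equation (26) reads a**2.5 = delta,
# so the exact root is alpha* = delta**0.4.
delta = 1e-3
alpha_star = choose_alpha(lambda a: a, lambda a: a ** -1.5, delta)
print(alpha_star, delta ** 0.4)
```

Bisection is adequate here because only monotonicity of ϕ and D is needed, matching the assumptions of Proposition 21.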


4. Operator equations in Hilbert space

As outlined in the introduction, the setup of multiplication operators as analyzed here is prototypical for general bounded self-adjoint positive operators A : H → H mapping in the (separable) Hilbert space H, due to the associated Spectral Theorem (cf. [1]), stated as Fact. It is an advantage of our focus on multiplication operators that we can include compact linear operators as well as non-compact ones.

Example 22 (Compact operator): It was emphasized in [12] that the case of a compact positive self-adjoint operator A yields a multiplier version with S = N, Σ = P(N), and μ being the counting measure, i.e. L²(S,Σ,μ) = ℓ², and multiplier b := (b_j)_{j∈N}, where b_j denotes the jth eigenvalue, taking into account (finite) multiplicities. White noise in ℓ² is given by a sequence of i.i.d. random variables ξ₁, ξ₂, … with mean zero and variance one.

In the subsequent discussion, we shall highlight the impact of the previous results, presented for equations with a multiplication operator, on specific operator equations with a non-compact operator A.

4.1. Deconvolution

Suppose that the data y^δ are a real-valued function on R, given as

y^δ(t) = (r ∗ x)(t) + δ η(t), t ∈ R. (28)

In the above, (r ∗ x)(t) := ∫_R r(u − t) x(u) du for t ∈ R. The noise η is assumed to be symmetric around zero and (normalized) weighted white noise, η(t) := w(t) dW_t, t ∈ R, with a square-integrable weight normalized to ‖w‖_{L²(R)} = 1. The goal is to recover, approximately, the function x(t), t ∈ R, from the noisy data y^δ. This problem is usually called deconvolution.

4.1.1. Turning to multiplication in frequency space

In order to transfer the deconvolution task into the multiplication form (2) with noisy data (4), we use the Fourier transform to get

g^δ(s) := ŷ^δ(s) = r̂(s) x̂(s) + δ η̂(s), s ∈ R. (29)

We make the following assumptions. First, we assume that the kernel function r ∈ L¹(R) is non-negative, symmetric around zero, and that u ↦ r(u), u > 0, is non-increasing. In this case its Fourier transform b(s) := r̂(s) is non-negative and real-valued. Also, b ∈ C₀(R), and zero is an accumulation point of the essential range of b. Thereby the corresponding multiplication operator does not have closed range. We denote f(s) := x̂(s), s ∈ R. Then it is easily checked that the Fourier transform ξ(s) := η̂(s) is centered Gaussian, and E ξ(s)ξ(s′) = 0 whenever s ≠ s′. By the properties of the noise, as described before, the variance is given as

E|ξ(s)|² = ∫_R |w(u)|² du = 1.

Thus we arrive at the multiplication Equation (29) as in Section 1.

4.1.2. Relation to reconstruction of stationary time series

Historically, the deconvolution problem was first studied by Wiener in [29]. In that context the solution f in (5) is a stationary time series f_s(ω) with (constant) average signal strength S_f := E|f(s)|².


Then we may look for a (real-valued) multiplier h(s), s ∈ S, such that f^δ(s) := h(s) g^δ(s) is a MISE estimator, i.e. it minimizes (point-wise) the functional

E_f E_ξ |f^δ(s) − f(s)|², s ∈ S. (30)

Assuming that the noise ξ(s) is independent of the signal f(s), the above minimization problem can be rewritten as

E_f E_ξ |f^δ(s) − f(s)|² = |1 − h(s)b(s)|² S_f + δ² |h(s)|² E|ξ(s)|², s ∈ S.

The minimizing function h(s) (in the general complex-valued case, and with b̄ denoting the complex conjugate of b) has the form

h(s) := b̄(s) S_f / (|b(s)|² S_f + δ²) = b̄(s) / (|b(s)|² + δ²/S_f). (31)

This approach results in the classical Wiener filter, see [29]. Notice that the quotient √S_f / δ is the signal-to-noise ratio, a constant which is unknown; thus, replacing δ²/S_f by α, we arrive at the reconstruction formula

f^δ(s) := (b(s) / (α + |b(s)|²)) g^δ(s), s ∈ R,

being the analog of Tikhonov regularization. However, since here we assume b to be real and positive, one may propose the Lavrent'ev approach, resulting in

f^δ(s) := (1 / (α + b(s))) g^δ(s), s ∈ R, (32)

and hence the regularization scheme g_α(t) := 1/(α + t), α > 0, t > 0, as introduced in § 2. Other regularization schemes also apply.
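A discrete sketch of this procedure. The Gaussian kernel, the signal, the grid, and the noise/parameter levels below are illustrative assumptions; the reconstruction applies the Tikhonov-type filter b/(α + b²) in frequency space.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta, alpha = 1024, 1e-2, 1e-3
t = np.linspace(-10.0, 10.0, n, endpoint=False)

x = np.exp(-((t - 1.0) ** 2))        # unknown signal (illustrative)
r = np.exp(-t ** 2 / 0.5)            # convolution kernel (illustrative)
r /= r.sum()                         # normalize the kernel so that b(0) = 1

b = np.fft.fft(np.fft.ifftshift(r)).real   # multiplier: Fourier transform of r
g = b * np.fft.fft(x)                      # exact data in frequency space
noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g_delta = g + delta * noise                # noisy data, cf. (29)

# Tikhonov-type filter b/(alpha + b^2) (the Wiener-filter analog with
# delta^2/S_f replaced by alpha), then transform back to the time domain:
f_delta = b / (alpha + b ** 2) * g_delta
x_delta = np.fft.ifft(f_delta).real

print(np.max(np.abs(x_delta - x)))
```

Dividing by b directly would blow up the noise where b is close to zero; the filter caps the amplification at 1/(2√α), which is the point of the regularization.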

4.2. Final value problem

In the final value problem (FVP), also known as the backward heat conduction problem, associated with the heat equation

∂_t u(x,t) = c² Δu(x,t), x ∈ Ω, 0 < t < τ, (33)

one would like to determine the initial temperature f₀ := u(·, 0) from the knowledge of the final temperature f_τ := u(·, τ). Here, the domain Ω lies in R^d. This problem is known to be ill-posed. It can be considered as an operator equation with a multiplication operator. A similar FVP was considered in the recent study [30].


4.2.1. Ω = R^d

In this case, on taking the Fourier transform on both sides of Equation (33), we obtain

∂_t û(s,t) = −c² |s|² û(s,t), s ∈ R^d, 0 < t < τ.

For each fixed s ∈ R^d, the above equation is an ordinary differential equation, and hence the solution û(s,t) is given by

û(s,t) = e^{−c²t|s|²} f̂₀(s), s ∈ R^d,

where f₀(x) := u(x, 0), x ∈ R^d. In particular, with t = τ, we have

û(s,τ) = e^{−c²τ|s|²} f̂₀(s), s ∈ R^d.

Taking

f(s) := f̂₀(s), g(s) := f̂_τ(s), b(s) := e^{−c²τ|s|²},

the above equation takes the form

b(s) f(s) = g(s), s ∈ R^d. (34)

Here, one may assume that the actual data g(·) belong to L²(R^d). The problem is to determine the function f(·) ∈ L²(R^d) satisfying the multiplication operator Equation (34).

We may recall that the map h ↦ ĥ is a bijective linear isometry from L²(R^d) onto itself. Therefore, if f_τ^δ is the noisy data, then

‖f_τ − f_τ^δ‖_{L²(R^d)} = ‖g − g^δ‖_{L²(R^d)},

where g^δ := f̂_τ^δ. Hence, if f^δ is an approximate solution corresponding to the noisy data g^δ, and if f₀^δ is the inverse Fourier transform of f^δ, then we have

‖f₀ − f₀^δ‖_{L²(R^d)} = ‖f − f^δ‖_{L²(R^d)}.

Thus, in order to obtain error estimates for the regularized solutions corresponding to noisy measurements f_τ^δ, it is enough to consider the noisy equation as in (5), that is,

g^δ(s) = b(s) f(s) + δ ξ(s), s ∈ R^d.
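For this Gaussian multiplier the statistical effective ill-posedness of Definition 18 can be evaluated numerically. A sketch in dimension d = 1 with the illustrative normalization c²τ = 1, i.e. b(s) = e^{−s²}: since b is even and decreasing in |s|, the set {b > α} is the interval (−L, L) with L = √(log(1/α)), and by (23) D(α)² equals the integral of 1/b² over this set.

```python
import numpy as np

def D(alpha, m=200_000):
    """D(alpha)^2 = integral of exp(2 s^2) over {|s| < L}, L = sqrt(log(1/alpha)),
    approximated by a Riemann sum (here b(s) = exp(-s^2), so 1/b(s)^2 = exp(2 s^2))."""
    L = np.sqrt(np.log(1.0 / alpha))
    s = np.linspace(-L, L, m)
    return np.sqrt((s[1] - s[0]) * np.sum(np.exp(2.0 * s ** 2)))

alpha = 1e-2
# Lemma 20 gives the bound D(alpha) <= (1/alpha) * sqrt(mu({b > alpha})) = sqrt(2 L)/alpha:
L = np.sqrt(np.log(1.0 / alpha))
print(D(alpha), np.sqrt(2.0 * L) / alpha)
```

The computed D(α) stays well below the Lemma 20 bound, since 1/b² is much smaller than 1/α² in the interior of (−L, L).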

4.2.2. Ω is a bounded domain in R^d

For the purpose of illustration, let us assume that the temperature is kept at 0 at the boundary of Ω, that is,

u(x,t) = 0 for x ∈ ∂Ω.

Then the solution of Equation (33), along with the initial condition

u(x, 0) = f₀(x), x ∈ Ω,

is given by (see [8, § 4.1.2])

u(x,t) = Σ_{n=1}^∞ e^{−c²λ_n²t} ⟨f₀, v_n⟩ v_n(x).

Here (λ_n) is a non-decreasing sequence of non-negative real numbers such that λ_n → ∞ as n → ∞, and (v_n) is an orthonormal sequence of functions in L²(Ω). In fact, each λ_n is an eigenvalue of the


operator (−Δ) with corresponding eigenvector v_n. For t = τ, taking f_τ := u(·, τ), we have

f_τ(x) = Σ_{n=1}^∞ e^{−c²λ_n²τ} ⟨f₀, v_n⟩ v_n(x).

Equivalently,

⟨f_τ, v_n⟩ = e^{−c²λ_n²τ} ⟨f₀, v_n⟩, n ∈ N. (35)

Writing

g := (⟨f_τ, v_n⟩), f := (⟨f₀, v_n⟩), b := (e^{−c²λ_n²τ}),

the system of equations in (35) takes the form of a multiplication operator equation

b(n) f(n) = g(n), n ∈ N, (36)

where g and f are in L²(N) and b is in c₀(N), the space of all null sequences. As in § 4.2.1, we have

‖f_τ − f_τ^δ‖_{L²(Ω)} = ‖g − g^δ‖_{L²(N)} and ‖f₀ − f₀^δ‖_{L²(Ω)} = ‖f − f^δ‖_{L²(N)},

where g^δ ∈ L²(N) and f₀^δ ∈ L²(Ω) are constructed via the bijective linear isometry h ↦ (⟨h, v_n⟩) from L²(Ω) onto L²(N), that is,

g^δ(n) := ⟨f_τ^δ, v_n⟩ and f₀^δ := Σ_{n=1}^∞ ⟨f^δ, v_n⟩ v_n.
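The sequence model (36) invites truncated spectral regularization (spectral cut-off): invert g^δ(n)/b(n) only while b(n) > α. The sketch below is purely illustrative — the eigenvalues λ_n = n, the coefficients of f₀, and all parameter values are assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
c, tau, delta, alpha = 1.0, 0.1, 1e-4, 1e-3
n = np.arange(1, 201)

lam = n.astype(float)                       # assumed eigenvalues lambda_n = n
b = np.exp(-c ** 2 * lam ** 2 * tau)        # multiplier b(n) = exp(-c^2 lambda_n^2 tau)
f0 = 1.0 / n ** 2                           # hypothetical coefficients <f0, v_n>
g_delta = b * f0 + delta * rng.standard_normal(n.size)   # noisy data, cf. (36)

# Spectral cut-off: divide by b(n) only where b(n) > alpha, set the rest to zero
# (dividing everywhere would amplify the noise by 1/b(n), which underflows here).
keep = b > alpha
f_rec = np.where(keep, g_delta / np.where(keep, b, 1.0), 0.0)

err = np.sqrt(np.sum((f_rec - f0) ** 2))
print(err)
```

The error consists of the truncated tail of f₀ (bias) plus noise amplified by at most 1/α on the kept modes, mirroring the bias–variance split in (21).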

4.3. Inverse problem in option pricing

A simple benchmark problem of inverse option pricing was introduced in [31] and discussed in more detail in [32] and [33, Sect. 6]. Here we aim at the identification (calibration) of a purely maturity-dependent volatility square function a(t) = σ²(t) (0 ≤ t ≤ T) for some asset from a corresponding continuous family of maturity-dependent option prices u(t) (0 ≤ t ≤ T), at a (Black–Scholes) market with fixed prescribed strike price K > 0, interest rate r ≥ 0, and present asset price S > 0. Formulated in the Hilbert space H = L²(0,T), this is a non-linear and everywhere locally ill-posed inverse problem (cf. [31]). It can be written as an operator equation F(a) = u with a non-linear forward operator F mapping in H, where F = N ∘ J is the composition of the linear integration operator [Jh](t) := ∫_0^t h(τ) dτ (0 ≤ t ≤ T) and a non-linear Nemytskii operator N. Hence, the problem can be decomposed into the linear inner problem Ja = v and the non-linear outer equation N(v) = u. As in [20] we focus on the outer problem, but we consider its linearization at some point v₀ ∈ H. Since F is Fréchet-differentiable with Fréchet derivative F′(v₀) = G_{v₀} ∘ J at v₀ ∈ H, and since

[G_{v₀} h](t) = m_{v₀}(t) h(t) (0 ≤ t ≤ T),

for a multiplier function m_{v₀} ∈ L^∞(0,T) to be specified below, the linearization

m_{v₀}(t) f(t) = g(t) (0 ≤ t ≤ T) (37)

of the outer problem is of the form (2). The multiplier function is obtained from the Black–Scholes function U_{BS}(S, K, r, t, s) (for details see [31, p. 1321]) as

m_{v₀}(t) = (∂U_{BS}/∂s)(S, K, r, t, v₀(t)) (0 < t ≤ T).

Aside from the situation S = K (at-the-money), it can be shown (cf. [33, p. 1003]) that m_{v₀} is a continuous function on [0,T], positive for 0 < t ≤ T, with m_{v₀}(0) = lim_{t→+0} m_{v₀}(t) = 0. This fact, namely that zero is a limit point of the range of the multiplier function, indicates the ill-posedness of Equation (37). It is characterized by an exponential decay to zero of the function m_{v₀}.


Notes

1. We say that a (non-negative) function g dominates f, and we write f ⪯ g, if there are a neighborhood [0, ε) and a constant k > 0 such that f(t) ≤ k g(t), 0 ≤ t ≤ ε.

2. For each s ∈ S we have a random variable ξ_s : Ω → R.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

B. H. was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft) [grant number HO 1454/12-1].

ORCID

P. Mathé http://orcid.org/0000-0002-1208-1421

References

[1] Halmos PR. What does the spectral theorem say? Amer Math Mon. 1963;70:241–247. doi:10.2307/2313117.
[2] Vainikko G. On the discretization and regularization of ill-posed problems with noncompact operators. Numer Funct Anal Optim. 1992;13(3–4):381–396. doi:10.1080/01630569208816485.
[3] Plato R. The conjugate gradient method for linear ill-posed problems with operator perturbations. Numer Algorithms. 1999;20(1):1–22. doi:10.1023/A:1019139414435.
[4] Bissantz N, Hohage T, Munk A, et al. Convergence rates of general regularization methods for statistical inverse problems and applications. SIAM J Numer Anal. 2007;45(6):2610–2636. doi:10.1137/060651884.
[5] Shi C, Ropers C, Hohage T. Density matrix reconstructions in ultrafast transmission electron microscopy: uniqueness, stability, and convergence rates. Inverse Probl. 2020;36(2):025005, 17. doi:10.1088/1361-6420/ab539a.
[6] Hofmann B, Mathé P, von Weizsäcker H. Regularization in Hilbert space under unbounded operators and general source conditions. Inverse Probl. 2009;25(11):115013, 15. doi:10.1088/0266-5611/25/11/115013.
[7] Engl HW, Hanke M, Neubauer A. Regularization of inverse problems. Dordrecht: Kluwer Academic Publishers Group; 1996. (Mathematics and its applications; vol. 375).
[8] Nair MT. Linear operator equations – approximation and regularization. World Scientific Publishing Co. Pte. Ltd.; 2009. doi:10.1142/9789812835659.
[9] Scherzer O, Grasmair M, Grossauer H, et al. Variational methods in imaging. New York: Springer; 2009. (Applied mathematical sciences; vol. 167).
[10] Schuster T, Kaltenbacher B, Hofmann B. Regularization methods in Banach spaces. Berlin: Walter de Gruyter GmbH & Co. KG; 2012. (Radon Series on Computational and Applied Mathematics; vol. 10).
[11] Flemming J. Generalized Tikhonov regularization and modern convergence rate theory in Banach spaces. Aachen: Shaker Verlag; 2012.
[12] Cavalier L. Inverse problems with non-compact operators. J Stat Plann Inference. 2006;136(2):390–400. doi:10.1016/j.jspi.2004.06.063.
[13] Hofmann B, Fleischer G. Stability rates for linear ill-posed problems with compact and non-compact operators. Z Anal Anwendungen. 1999;18(2):267–286. doi:10.4171/ZAA/881.
[14] Hofmann B. Approximate source conditions in Tikhonov–Phillips regularization and consequences for inverse problems with multiplication operators. Math Methods Appl Sci. 2006;29(3):351–371. doi:10.1002/mma.686.
[15] Engl HW, Hofmann B, Zeisel H. A decreasing rearrangement approach for a class of ill-posed nonlinear integral equations. J Integral Equ Appl. 1993;5(4):443–463. doi:10.1216/jiea/1181075772.
[16] Day PW. Rearrangements of measurable functions [Ph.D. thesis]. California Institute of Technology. Ann Arbor, MI: ProQuest LLC; 1970.
[17] Chong KM, Rice NM. Equimeasurable rearrangements of functions. Kingston, Ont.: Queen's University; 1971. (Queen's papers in pure and applied mathematics; No. 28).
[18] Bennett C, Sharpley R. Interpolation of operators. Boston, MA: Academic Press, Inc.; 1988. (Pure and applied mathematics; vol. 129).
[19] Lieb EH, Loss M. Analysis. 2nd ed. Providence, RI: American Mathematical Society; 2001. (Graduate Studies in Mathematics; vol. 14). doi:10.1090/gsm/014.
[20] Krämer R, Mathé P. Modulus of continuity of Nemytskii operators with application to the problem of option pricing. J Inverse Ill-Posed Probl. 2008;16(5):435–461. doi:10.1515/JIIP.2008.024.


[21] Mathé P, Pereverzev SV. Geometry of linear ill-posed problems in variable Hilbert scales. Inverse Probl. 2003;19(3):789–803. doi:10.1088/0266-5611/19/3/319.
[22] Nair MT, Pereverzev SV, Tautenhahn U. Regularization in Hilbert scales under general smoothing conditions. Inverse Probl. 2005;21(6):1851–1869. doi:10.1088/0266-5611/21/6/003.
[23] Neubauer A. On converse and saturation results for Tikhonov regularization of linear ill-posed problems. SIAM J Numer Anal. 1997;34(2):517–527. doi:10.1137/S0036142993253928.
[24] Flemming J, Hofmann B, Mathé P. Sharp converse results for the regularization error using distance functions. Inverse Probl. 2011;27(2):025006, 18. doi:10.1088/0266-5611/27/2/025006.
[25] Albani V, Elbau P, de Hoop MV, et al. Optimal convergence rates results for linear inverse problems in Hilbert spaces. Numer Funct Anal Optim. 2016;37(5):521–540. doi:10.1080/01630563.2016.1144070.
[26] Hofmann B, Mathé P. Analysis of profile functions for general linear regularization methods. SIAM J Numer Anal. 2007;45(3):1122–1141. doi:10.1137/060654530.
[27] Mathé P. Degree of ill-posedness of statistical inverse problems. Preprint 954, WIAS Berlin; August 2004.
[28] Ding L, Mathé P. Minimax rates for statistical inverse problems under general source conditions. Comput Methods Appl Math. 2018;18(4):603–608. doi:10.1515/cmam-2017-0055.
[29] Wiener N. Extrapolation, interpolation, and smoothing of stationary time series. With engineering applications. Cambridge, MA: The Technology Press of the Massachusetts Institute of Technology; New York, NY: John Wiley & Sons, Inc.; London: Chapman & Hall, Ltd.; 1949.
[30] Nair MT. On truncated spectral regularization for an ill-posed evolution equation. arXiv e-prints. 2019 July; arXiv:1907.11076.
[31] Hein T, Hofmann B. On the nature of ill-posedness of an inverse problem arising in option pricing. Inverse Probl. 2003;19(6):1319–1338. doi:10.1088/0266-5611/19/6/006.
[32] Hofmann B, Krämer R. On maximum entropy regularization for a specific inverse problem of option pricing. J Inverse Ill-Posed Probl. 2005;13(1):41–63. doi:10.1163/1569394053583739.
[33] Hofmann B, Kaltenbacher B, Pöschl C, et al. A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators. Inverse Probl. 2007;23(3):987–1010. doi:10.1088/0266-5611/23/3/009.