

UPPSALA UNIVERSITY - Department of Mathematics
LULEÅ TECHNICAL UNIVERSITY - Department of Mathematics
Fredrik Strömberg, Johan Byström, Lars-Erik Persson

2005-06-14

Applied Mathematics

This book is based on the lecture notes of Professor Lars-Erik Persson, from a course in applied mathematics given at Luleå Technical University and Uppsala University. The course has been given to graduate students in areas outside mathematics, but the Internet version is aimed at "gymnasielärare" (approximately college teachers) in mathematics. The only prerequisites are the basic standard mathematics courses at the university level, i.e. basic algebra, linear algebra and analysis.

The blue links point to external sources outside this document, and we are not responsible for the availability and information content of those pages. At present there is only a very limited selection of exercises, but that will improve soon.

Fredrik Strömberg – Responsible for this document (and the lectures appearing here).

Johan Byström – Responsible for the rest of the lectures available on the web.

Lars-Erik Persson – Inventor of the course and source of inspiration for these notes.


CHAPTER 1

Introduction to Dimensional Analysis and Scaling


CHAPTER 2

Introduction to Perturbation Methods


CHAPTER 3

Introduction to the Calculus of Variations


CHAPTER 4

Introduction to Partial Differential Equations

4.1. Some Examples

Example 4.1. (The one-dimensional heat conduction equation) We consider the heat conduction problem (see Chapter 1) in an (infinitely) thin rod of length l (see Fig. 4.1.1). Let the heat at the point x and time t be given by u(x, t). Assume that the heat distribution in the rod at the time t = 0 is given by the function f (x), and that the heat at the endpoints x = 0 and x = l is given by the functions h(t) and g(t), respectively (in practice h and g are measured quantities). Then u(x, t) is described by the heat conduction equation:

u′t − ku′′xx = 0, 0 < x < l, t > 0,

u(x,0) = f (x), 0 < x < l,

u(0, t) = h(t), t > 0,

u(l, t) = g(t), t > 0.

FIGURE 4.1.1. One-dimensional heat conduction (a rod from x = 0 to x = l)

Example 4.2. (The inhomogeneous one-dimensional heat conduction equation) Suppose that we have the same system as in the previous example, but that we also add the heat v(x, t) at the point x and time t (see Fig. 4.1.2). In this case u(x, t) is described by the inhomogeneous heat conduction equation:

u′t − ku′′xx = v(x, t), 0 < x < l, t > 0,

u(x,0) = f (x), 0 < x < l,

u(0, t) = h(t), t > 0,

u(l, t) = g(t), t > 0.

FIGURE 4.1.2. One-dimensional inhomogeneous heat conduction


Example 4.3. (The two-dimensional inhomogeneous heat conduction equation) We now consider heat conduction in a two-dimensional region D. Let the heat at the point (x, y) ∈ D at the time t be given by u(x, y, t). Assume that the heat distribution at t = 0 is described by the function f (x, y), and that the heat at the boundary of D is constant over time and given by g(x, y) (in practice this is obtained by transfer of heat into or out of the system through the boundary). Assume also that the heat v(x, y, t) is added at the point (x, y) at the time t. Then u(x, y, t) is described by the two-dimensional inhomogeneous heat conduction equation:

u′t − k(u′′xx + u′′yy) = v(x, y, t), (x, y) ∈ D, t > 0,

u(x,y,0) = f (x,y), (x,y) ∈ D,

u(x,y, t) = g(x,y), (x,y) ∈ ∂D, t > 0.


Example 4.4. (The three-dimensional heat conduction equation) We now consider heat conduction in a three-dimensional region V. We use the same notation as above, with the addition of a z-coordinate. Then u(x, y, z, t) is described by the three-dimensional heat conduction equation:

(4.1.1) u′t − div(k grad u) = v(x, y, z, t), (x, y, z) ∈ V, t > 0,
u(x, y, z, 0) = f (x, y, z), (x, y, z) ∈ V,

u(x,y,z, t) = g(x,y,z), (x,y,z) ∈ ∂V, t > 0.

REMARK 1. Note that the gradient "grad" of the function u(x, y, z) is given by the vector

grad u = ∇u = (u′x, u′y, u′z) = (∂u/∂x, ∂u/∂y, ∂u/∂z) = (∂/∂x, ∂/∂y, ∂/∂z) u.

If ∇ is written as

∇ = (∂/∂x, ∂/∂y, ∂/∂z),

the divergence "div" of a vector field ~F = (Fx, Fy, Fz) is given by

div F = ∇ · ~F = ∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z.

Thus, the divergence of the gradient of u(x, y, z) is given by

div(grad u) = ∇ · ∇u = ∇²u = ∆u = u′′xx + u′′yy + u′′zz.

Hence, if k = k(x, y, z) = k0 is constant, (4.1.1) can be written as

u′t − k0(u′′xx + u′′yy + u′′zz) = v ⇔ u′t − k0∆u = v.

REMARK 2. Observe that the equation

u′t − κ∆u = v

in general describes a diffusion process. Heat conduction implies a diffusion (transport) of heat, and is one example of such a process. Some other examples of diffusion processes are


• Mixing of one liquid in another (e.g. milk in a cup of tea).
• Diffusion of a gaseous substance in air (e.g. a poisonous gas released into the air).
• Propagation of elementary particles in a solid material (e.g. neutrons in a nuclear reactor).

Since the equations are the same, all methods we consider here for solving the heat equation in various cases can also be applied to these alternative diffusion problems. Another PDE which is as important as the diffusion equation is the wave equation, which we now consider in some examples.

Example 4.5. (The one-dimensional wave equation) Consider a vibrating (elastic) string of length l which is fixed at both endpoints. Arrange the string along the x-axis and let u(x, t) describe the position (relative to the equilibrium) of the string at the coordinate x and time t. At the initial time t = 0 the position and velocity of the string are given by the functions f (x) and g(x) respectively. The vibrations of the string are described by the one-dimensional wave equation:

u′′tt − ku′′xx = 0, 0 < x < l, t > 0,

u(0, t) = u(l, t) = 0, t > 0,

u(x,0) = f (x),0 < x < l,

u′t(x,0) = g(x), 0 < x < l.

FIGURE 4.1.3. Vibrating string


Example 4.6. (The two-dimensional wave equation) Consider a vibrating membrane which is fixed at the boundary (e.g. a drum skin fastened in a drum). Arrange the membrane so that it covers a domain D in the xy-plane, and let u(x, y, t) describe the position (relative to the equilibrium) of the membrane at the point (x, y) at the time t (see Fig. 4.1.4). At the initial time t = 0 the position and velocity of the membrane are given by the functions f (x, y) and g(x, y) respectively. The vibrations of the membrane are then described by the two-dimensional wave equation:

u′′tt − k(u′′xx + u′′yy) = 0, (x, y) ∈ D, t > 0,

u(x,y, t) = 0, (x,y) ∈ ∂D, t > 0,

u(x,y,0) = f (x,y), (x,y) ∈ D,

u′t(x,y,0) = g(x,y), (x,y) ∈ D.


FIGURE 4.1.4. Vibrating membrane


Example 4.7. (The two-dimensional Laplace equation) Assume that we have a two-dimensional domain, as in Example 4.3, and that we want to investigate the heat distribution in the system at thermal equilibrium, i.e. after such a long time that the heat distribution no longer changes with time. Assume also that we do not add any heat. This means that we set u′t = 0 and v = 0 in Example 4.3, which gives us the Laplace equation, which we can write in the following three equivalent ways:

(4.1.2) u′′xx + u′′yy = 0 ⇔ ∇²u = 0 ⇔ ∆u = 0.

∆ is usually called the Laplace operator, or simply the Laplacian, and is of great importance in both pure and applied mathematics. The solution u(x, y) of (4.1.2) describes the heat at the point (x, y) at thermal equilibrium. This is usually called a stationary solution of the heat conduction problem.

Example 4.8. (The two-dimensional Poisson equation) The Poisson equation is an inhomogeneous Laplace equation, i.e. at all times t we add the heat v(x, y, t) = f (x, y) (independent of t) at the point (x, y). This equation can be written in the following three equivalent ways:

u′′xx + u′′yy = f ⇔ ∇²u = f ⇔ ∆u = f.


Here u′t = 0 and v(x, y, t) = −(1/k) f (x, y) in Example 4.3, so the Poisson equation can be interpreted as the inhomogeneous heat conduction equation at thermal equilibrium (u′t = 0), where at all times t we add the heat −(1/k) f (x, y) at the point (x, y).

Example 4.9. (The three-dimensional Poisson equation)

u′′xx + u′′yy + u′′zz = f ⇔ ∇²u = f ⇔ ∆u = f.

In this case we have u′t = 0 and v = −(1/k0) f in Example 4.4, and as in the two-dimensional case above, the three-dimensional Poisson equation can be interpreted as the heat conduction equation at thermal equilibrium, where at all times t we add the heat −(1/k0) f (x, y, z) at the point (x, y, z).

REMARK 3. If the added heat in the examples above has a negative sign, the obvious physical interpretation is that we are cooling down the system.

4.2. A General Partial Differential Equation of the Second Order

A general partial differential equation (PDE) can be written as

(4.2.1) G(x, t, u, u′x, u′t, u′′xx, u′′xt, u′′tt) = 0.

The basic questions we now ask ourselves are:

1. Does there exist a solution to the PDE?
2. Is the solution unique?
3. Is the solution stable under small perturbations?
4. Which methods are available to construct and illustrate solutions?

Example 4.10. The problems in Examples 4.1-4.6 have unique solutions, but the problems in Examples 4.7-4.9 do not have unique solutions.

REMARK 4. A PDE of the type (4.2.1) usually has an infinite number of solutions, and the general solution depends on a number of arbitrary functions (compare with the fact that solutions to ODEs usually depend on arbitrary constants).

Example 4.11. The equation

u′′tx = tx

has the solutions

u = (1/4) t²x² + g(t) + h(x),

where g and h are arbitrary (differentiable) functions.
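The claim is easy to check symbolically. The following small sketch (not part of the original notes; it assumes Python with sympy is available) differentiates u once with respect to t and once with respect to x and confirms that the arbitrary functions g and h drop out.

import sympy as sp

t, x = sp.symbols('t x')
g, h = sp.Function('g'), sp.Function('h')

# u = t^2 x^2 / 4 + g(t) + h(x) from Example 4.11
u = t**2 * x**2 / 4 + g(t) + h(x)

# mixed derivative u''_tx; g(t) and h(x) disappear after differentiation
print(sp.simplify(sp.diff(u, t, x)))   # prints t*x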


Example 4.12. The two-dimensional Laplace equation

u′′xx +u′′yy = 0,

has, for example, the solutions

u(x, y) = x² − y²,
u(x, y) = eˣ cos y,
u(x, y) = ln(x² + y²).

REMARK 5. A solution u(x, y) to the Laplace equation is called a harmonic function. To find harmonic functions one can use the fact that if f (z) = f (x + iy) is an analytic function (or, synonymously, holomorphic), i.e. if (d/dz) f (z) exists, then the real part u(x, y) = ℜ f (x + iy) and the imaginary part v(x, y) = ℑ f (x + iy) of f are both harmonic functions.

In the example above we used f (z) = z², e^z and log z² respectively.
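As a quick illustration of Remark 5 and Example 4.12, the following sketch (our addition, assuming sympy) computes the Laplacian u′′xx + u′′yy of the three candidate solutions and verifies that each one is harmonic.

import sympy as sp

x, y = sp.symbols('x y', real=True)
candidates = [x**2 - y**2, sp.exp(x) * sp.cos(y), sp.log(x**2 + y**2)]

for u in candidates:
    laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)
    # each Laplacian simplifies to 0, so u is harmonic
    print(u, '->', sp.simplify(laplacian))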

4.3. Linearity and Non-linearity

A partial differential equation can be written as

(*) Lu = f ,

where L is a so called differential operator.

Example 4.13. Let L = ∂/∂t − k ∂²/∂x². Then (*) becomes

u′t − ku′′xx = f,

which is a one-dimensional heat conduction equation (cf. Example 4.2).

Example 4.14. Consider the differential operator

L(u) = u ∂u/∂t + 2txu.

Then the equation (*) becomes

u ∂u/∂t + 2txu = f (x, t).

DEFINITION 4.1. We say that the PDE (*) is linear if the operator L has the properties

(1) L(u + v) = Lu + Lv,
(2) L(cu) = cLu.

If these conditions are not both satisfied we say that (*) is non-linear.

Example 4.15. The heat conduction equation in Example 4.13 is linear.

Proof: We must verify that L = ∂/∂t − k ∂²/∂x² satisfies (1) and (2) above.

(1) L(u + v) = ∂(u + v)/∂t − k ∂²(u + v)/∂x² = (∂u/∂t − k ∂²u/∂x²) + (∂v/∂t − k ∂²v/∂x²) = Lu + Lv.


(2) L(cu) = ∂(cu)/∂t − k ∂²(cu)/∂x² = c ∂u/∂t − kc ∂²u/∂x² = c(∂u/∂t − k ∂²u/∂x²) = cLu.

Hence, since L satisfies both (1) and (2), the equation

Lu = f

is linear.

Example 4.16. The PDE in Example 4.14 is non-linear.
Proof: We start by checking property (1):

L(u + v) = (u + v)(u + v)′t + 2tx(u + v) = uu′t + uv′t + vu′t + vv′t + 2txu + 2txv, and
Lu + Lv = uu′t + 2txu + vv′t + 2txv.

Since L(u + v) − (Lu + Lv) = uv′t + vu′t ≠ 0, property (1) is not satisfied and hence the equation is non-linear.

4.4. Classification of PDEs

A general linear second order PDE can be written as

(4.4.1) a(x, t)u′′tt + b(x, t)u′′xt + c(x, t)u′′xx + d(x, t)u′t + e(x, t)u′x + q(x, t)u = f (x, t), (x, t) ∈ D.

Set

D(x, t) = (b(x, t))² − 4a(x, t)c(x, t).

We say that the PDE (4.4.1) is

• elliptic if D(x, t) < 0 in D,
• parabolic if D(x, t) = 0 in D,
• hyperbolic if D(x, t) > 0 in D.

Example 4.17. Consider the two-dimensional Laplace equation

u′′xx +u′′yy = 0.

Here D(x, y) = 0² − 4 · 1 · 1 = −4 < 0, and hence the equation is elliptic. ♦

Example 4.18. Consider the heat conduction equation

u′t −u′′xx = 0.

Here D(x, t) = 0² − 4 · 0 · (−1) = 0, and hence the equation is parabolic. ♦

Example 4.19. Consider the one-dimensional wave equation

u′′tt −u′′xx = 0.

Here D(x, t) = 0² − 4 · 1 · (−1) = 4 > 0, and hence the equation is hyperbolic.
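The classification rule is mechanical, so it is easy to encode. The helper below is our own illustration (the function name classify is not from the notes); it evaluates the discriminant D = b² − 4ac at a point and returns the type, reproducing Examples 4.17-4.19 for constant coefficients.

def classify(a, b, c):
    # discriminant of a*u_tt + b*u_xt + c*u_xx + ... = f at one point (x, t)
    D = b**2 - 4*a*c
    if D < 0:
        return 'elliptic'
    if D == 0:
        return 'parabolic'
    return 'hyperbolic'

print(classify(1, 0, 1))    # Laplace equation u_xx + u_yy = 0  -> elliptic
print(classify(0, 0, -1))   # heat equation u_t - u_xx = 0      -> parabolic
print(classify(1, 0, -1))   # wave equation u_tt - u_xx = 0     -> hyperbolic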


4.5. The Superposition Principle

Consider a linear and homogeneous (i.e. the right hand side is 0) PDE:

(*) Lu = 0.

Suppose that u1,u2, . . . are solutions of (*) and that u is a finite linear combination of these:

u = c1u1 + c2u2 + · · ·+ cnun.

Then u is also a solution to (*) since

Lu = L(c1u1 + · · ·+ cnun) = c1Lu1 + · · ·+ cnLun = 0+ · · ·+0 = 0.

This is called the superposition principle and is true also for infinite sums:

u = c1u1 + c2u2 + · · ·+ cnun + · · · ,

provided that certain convergence properties hold1.

The continuous superposition principle:

Assume that uα(x, t) satisfies Luα = 0 for all α, a≤ α≤ b, and define

u(x, t) = ∫_a^b c(α) uα(x, t) dα,

where c(α) is an arbitrary (integrable) function. Then

Lu = 0.

Proof:

Lu = L( ∫_a^b c(α) uα(x, t) dα ) = ∫_a^b c(α) Luα(x, t) dα = ∫_a^b c(α) · 0 dα = 0.

Example 4.20. It is easy to verify that for each −∞ < α < ∞, the function

uα(x, t) = (1/√(4πkt)) exp(−(x − α)²/(4kt))

satisfies the heat conduction equation

u′t − ku′′xx = 0.

Hence this equation is also satisfied by the function

u(x, t) = (1/√(4πkt)) ∫_{−∞}^{∞} c(α) exp(−(x − α)²/(4kt)) dα,

for any arbitrary, integrable function c(α).
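The statement of Example 4.20 can also be verified symbolically. The sketch below (our addition, assuming sympy) forms uα and checks that u′t − ku′′xx simplifies to zero for t > 0.

import sympy as sp

x, alpha = sp.symbols('x alpha', real=True)
t, k = sp.symbols('t k', positive=True)

u = sp.exp(-(x - alpha)**2 / (4*k*t)) / sp.sqrt(4*sp.pi*k*t)
residual = sp.diff(u, t) - k*sp.diff(u, x, 2)
print(sp.simplify(residual))   # prints 0, so u_alpha solves the heat equation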

¹E.g. if we have uniform convergence in s_n(x) = ∑_{j=1}^{n} u_j(x) → u, s′_n(x) = ∑_{j=1}^{n} u′_j(x) → u′, etc., for all occurring derivatives.


4.6. Well-Posed Problems

A boundary or initial value problem is said to be well-posed if

(a) there exists a solution,
(b) the solution is unique, and
(c) the solution is stable.

A problem that is not well-posed is said to be ill-posed.

Example 4.21. Consider the initial-value problem which consists of the equation

u′′tt + u′′xx = 0, t > 0, −∞ < x < ∞,

together with the initial values

(4.6.1) u(x,0) = 0, u′t(x,0) = 0, −∞ < x < ∞.

The unique solution is given by the function which is constantly 0:

u(x, t) ≡ 0, t ≥ 0, −∞ < x < ∞.

Let us now make a small perturbation of the initial values (4.6.1):

(4.6.2) u(x,0) = 0, u′t(x,0) = 10⁻⁴ sin(10⁴x).

The solution to this new problem is given by

u(x, t) = 10⁻⁸ sin(10⁴x) sinh(10⁴t).

For large t we know that sinh(10⁴t) is approximately (1/2) exp(10⁴t). The tiny change in the initial values gave rise to a change in the solution from the constant 0 to a function which grows exponentially (from the sinh factor) and oscillates rapidly (from the sine factor). A really dramatic change! This implies that the solution is not stable, and hence the problem is ill-posed.

Example 4.22. Show that the boundary-value problem

u′t − ku′′xx = 0, 0 < x < l, 0 < t < T,
u(x,0) = f (x), 0 < x < l,
u(0, t) = g(t), u(l, t) = h(t), 0 < t < T,

where f ∈ C[0, l] and g, h ∈ C[0, T], has a unique solution, u(x, t), in the rectangle

R : 0 ≤ x ≤ l, 0 ≤ t ≤ T.

Solution: Later on we will construct a solution to this problem (in Example 5.9)! But for now, assume that we have two different solutions to the problem: u1(x, t) and u2(x, t). It is then clear that the function

w(x, t) = u1(x, t) − u2(x, t)

must satisfy the boundary-value problem:

w′t − kw′′xx = 0, 0 < x < l, 0 < t < T,
w(x,0) = 0, 0 < x < l,
w(0, t) = w(l, t) = 0, 0 < t < T.


We now form the "energy integral"

E(t) = ∫₀ˡ w²(x, t) dx.

Observe that E(t) ≥ 0, E(0) = 0, and

E′(t) = ∫₀ˡ 2ww′t dx = 2k ∫₀ˡ ww′′xx dx
      = [2kww′x]₀ˡ − 2k ∫₀ˡ (w′x)² dx
      = −2k ∫₀ˡ (w′x)² dx ≤ 0.

Hence, the function E is decreasing from E(0) = 0, and since E ≥ 0 we must have E(t) ≡ 0. This implies that also w(x, t) ≡ 0, i.e. u1(x, t) = u2(x, t) for all x, t. Since we assumed that the solutions u1 and u2 were different we have arrived at a contradiction! Hence the problem must have a unique solution! ♦

4.7. Some Remarks On Fourier Series

Consider a function f (x), −l < x < l. The Fourier coefficients of f are defined as

a0 = (1/2l) ∫_{−l}^{l} f (x) dx,
an = (1/l) ∫_{−l}^{l} f (x) cos(nπx/l) dx, n = 1, 2, . . . ,
bn = (1/l) ∫_{−l}^{l} f (x) sin(nπx/l) dx, n = 1, 2, . . . ,

and the Fourier series of f is defined by

S(x) = a0 + ∑_{n=1}^{∞} ( an cos(nπx/l) + bn sin(nπx/l) ).

For a more detailed discussion of Fourier series see Section 6.1. See also Fig. 4.7.

Assume that f (x) is infinitely many times differentiable in the interval −l < x < l, except for a numberof discontinuity points. Then we have:

(a) S(x) = S(x + 2l), for all x,
(b) S(x) = f (x) at the points where f is continuous,
(c) S(x) = (1/2)[ f (x+) + f (x−)] at points of discontinuity².

²Here f (x+) = lim_{y→x} f (y), where we keep y > x as we take the limit, and we define f (x−) similarly.


(Figure: (a) a discontinuous function f (x) on (−l, l); (b) its Fourier series S(x).)

When making a graph of a discontinuous function it is customary to indicate the value which is attainedby the function with a filled circle and the value which is not attained by an unfilled circle.

FIGURE 4.7.1. A square wave: f (x) = k for 0 < x < l and f (x) = −k for −l < x ≤ 0.

Example 4.23. Consider the function f (x) from Fig. 4.7.1:

f (x) = k, 0 < x < l, and f (x) = −k, −l < x ≤ 0.

Note that f (x) is odd, i.e. f (−x) = − f (x). Since cos x is even, the function f (x) cos(nπx/l) is odd, and we know that the integral of an odd function over a symmetric interval is always 0 (the "negative"


area cancels the “positive” area), hence a0 = an = 0 for all n. And we have

bn = (1/l) ∫_{−l}^{l} f (x) sin(nπx/l) dx
   = (1/l) [ ∫_{−l}^{0} (−k) sin(nπx/l) dx + ∫_{0}^{l} k sin(nπx/l) dx ]
   = (2k/l) ∫_{0}^{l} sin(nπx/l) dx
   = (2k/l) [ −(l/nπ) cos(nπx/l) ]₀ˡ
   = (2k/nπ)(1 − cos nπ)
   = (2k/nπ)(1 − (−1)ⁿ).

I.e.

b1 = 4k/π, b2 = 0, b3 = 4k/(3π), b4 = 0, b5 = 4k/(5π), . . . ,

and the Fourier series of f is

S(x) = ∑_{n=1}^{∞} bn sin(nπx/l) = (4k/π) ∑_{m=0}^{∞} (1/(2m+1)) sin((2m+1)πx/l)
     = (4k/π) ( sin(πx/l) + (1/3) sin(3πx/l) + (1/5) sin(5πx/l) + · · · ).

See Fig. 4.7.2 for an illustration of some of the partial sums (containing only a finite number of terms) of S(x).
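The closed form bn = (2k/nπ)(1 − (−1)ⁿ) can be checked numerically against the defining integral. The following sketch is our own illustration (it assumes numpy and scipy and takes k = 1, l = π for concreteness); it computes the first few coefficients by quadrature.

import numpy as np
from scipy.integrate import quad

k, l = 1.0, np.pi
for n in range(1, 6):
    # b_n = (1/l) * integral of f(x) sin(n pi x / l) over (-l, l),
    # split at x = 0 where f jumps from -k to k
    I1, _ = quad(lambda x: -k * np.sin(n * np.pi * x / l), -l, 0)
    I2, _ = quad(lambda x:  k * np.sin(n * np.pi * x / l),  0, l)
    bn = (I1 + I2) / l
    exact = 2 * k / (n * np.pi) * (1 - (-1)**n)
    print(n, round(bn, 6), round(exact, 6))   # the two columns agree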


FIGURE 4.7.2. Partial Fourier series of the square wave: (a) the first term S1(x); (b) more terms, S2(x) and S3(x) (adding (4k/3π) sin 3x and (4k/5π) sin 5x).

4.8. Separation of Variables

Separation of variables is a common method to solve certain types of PDEs. Since it originated from anidea of Fourier it is also sometimes called Fourier’s method.


Model example: Solve the problem

(1) u′t − ku′′xx = 0, 0 < x < l, t > 0,
(2) u(x,0) = f (x), 0 < x < l,
(3) u(0, t) = u(l, t) = 0, t > 0.

What we mean by separating the variables in (1) is to seek a solution u(x, t) which can be factored as

u(x, t) = X(x)T (t),

where X(x) and T (t) are functions depending only on x and t respectively. Assume now that we can write u in this way. If we differentiate u = XT we get u′t(x, t) = X(x)T ′(t) and u′′xx(x, t) = X ′′(x)T (t), and if we substitute these expressions into (1) we get the equation

X(x)T ′(t) − kX ′′(x)T (t) = 0,

which can be rewritten as

(1/k) T ′(t)/T (t) = X ′′(x)/X(x).

We see that the left hand side is a function of t only and the right hand side is a function of x only. Hence, the only possibility is that both sides equal a constant:

(1/k) T ′(t)/T (t) = X ′′(x)/X(x) = −λ,

for some constant λ (which we have to determine later). Instead of the PDE (1) we now have two ODEs:

T ′(t) = −λkT (t),
X ′′(x) = −λX(x),

with the general solutions

T (t) = Ce^{−λkt}, and X(x) = A sin(√λ x) + B cos(√λ x).

The boundary values (3) imply that either T ≡ 0 or X(0) = X(l) = 0. Since the first alternative only gives us the solution which is constantly 0, we see that X must satisfy the boundary conditions X(0) = X(l) = 0, i.e.

X(0) = B = 0,

which tells us that B = 0, and we also see that

X(l) = A sin(√λ l) = 0.

To once again avoid the trivial solution X ≡ 0 (i.e. with A = 0) we must have sin(√λ l) = 0, which implies that

√λ l = nπ, n ∈ Z+,

or equivalently

λ = n²π²/l²,

for some positive integer n.

We have shown that if a solution to (1) can be factored as X(x)T (t), then it can be written as

K sin(nπx/l) exp(−n²π²kt/l²),


where n is a positive integer and K a constant. By the superposition principle (Section 4.5) the general solution to (1) satisfying the boundary values (3) can be written as

u(x, t) = ∑_{n=1}^{∞} bn sin(nπx/l) exp(−n²π²kt/l²),

where the Fourier coefficients, {bn}_{n=1}^{∞}, are determined by the initial condition (2):

(*) u(x,0) = f (x) = ∑_{n=1}^{∞} bn sin(nπx/l).

Let us for simplicity assume that l = π and consider some examples of initial values f (x) in the above model example.

Example 4.24. Let f (x) = 2 sin x + 4 sin 3x. Then (*) is satisfied if b1 = 2, b2 = 0, b3 = 4, b4 = b5 = · · · = 0. Hence, the solution to the model example is

u(x, t) = 2 sin(x) e^{−kt} + 4 sin(3x) e^{−9kt}.

Example 4.25. Let f (x) = 1 = (4/π)( sin x + (1/3) sin 3x + (1/5) sin 5x + · · · ). Then (*) is satisfied if b1 = 4/π, b2 = 0, b3 = (4/π)(1/3), b4 = 0, b5 = (4/π)(1/5), b6 = 0, etc. In this case, the solution to the model example is given by

u(x, t) = (4/π)( sin(x) e^{−kt} + (1/3) sin(3x) e^{−9kt} + (1/5) sin(5x) e^{−25kt} + · · · )
        = (4/π) ∑_{n=1}^{∞} (1/(2n−1)) sin((2n−1)x) e^{−(2n−1)²kt}.

Example 4.26. If we have an arbitrary initial-value function f (x), 0 ≤ x ≤ π, the solution to the model example is given by

u(x, t) = ∑_{n=1}^{∞} bn sin(nx) exp(−n²kt),

where

bn = (1/π) ∫_{−π}^{π} fu(x) sin nx dx = (2/π) ∫₀^π f (x) sin nx dx.

Here fu(x) is the extension of f (x) to an odd function on the interval −π < x < π, i.e. fu(x) = f (x) if x > 0 and fu(x) = − f (−x) if x < 0 (cf. Fig. 4.8.1).
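For a concrete initial value the whole recipe of Examples 4.24-4.26 fits in a few lines. The sketch below is our own numerical illustration (assuming numpy and scipy, with l = π and k = 1): it computes the coefficients bn by quadrature and evaluates a truncated series solution; for f (x) = 2 sin x + 4 sin 3x it reproduces the exact solution of Example 4.24.

import numpy as np
from scipy.integrate import quad

def solve_heat(f, x, t, k=1.0, n_terms=50):
    # truncated series u(x,t) = sum b_n sin(n x) exp(-n^2 k t) on [0, pi]
    u = 0.0
    for n in range(1, n_terms + 1):
        bn = 2 / np.pi * quad(lambda s: f(s) * np.sin(n * s), 0, np.pi)[0]
        u += bn * np.sin(n * x) * np.exp(-n**2 * k * t)
    return u

f = lambda x: 2*np.sin(x) + 4*np.sin(3*x)       # initial value of Example 4.24
print(solve_heat(f, x=1.0, t=0.1))              # series value
print(2*np.sin(1.0)*np.exp(-0.1) + 4*np.sin(3.0)*np.exp(-0.9))   # exact value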


FIGURE 4.8.1. Construction of an odd extension fu(x) of a function f (x).

4.9. Exercises

4.1. [S] Determine, for each of the following differential equations, if it is linear or non-linear:
a) u′t(x, t) + x²u′′xx(x, t) = 0.
b) ∂²u/∂t² + u ∂u/∂x = f (x, t).
c) u∆u − u′t = 0.
d) ∂³u/∂t³ + ∂²u/∂t² + ∂u/∂t = u′x.

4.2.* Determine, for each of the following partial differential equations, the regions where it is hyperbolic, elliptic or parabolic:
a) u′′tt + xu′′xx + 2u′x = f (x, t), (x, t) ∈ R².
b) y²(u′′xx + u′′yy) = 0, x ∈ R, y > 0.
c) ∂²u/∂t² = c²( ∂²u/∂r² + (1/r) ∂u/∂r ), t > 0, r > 0, and c ∈ R a constant.
d) sin x (u′′tt + 2u′′xt) + cos x u′′xx = tan x, t ∈ R, |x| ≤ π.

4.3. [S] Let u(x, t), t > 0, x > 0 denote the temperature in an infinitely long rod with heat conductance coefficient k, which we heat up by increasing the temperature at the end point such that u(0, t) = t. Use the fact that uα(x, t) = (4πkt)^{−1/2} e^{−(x−α)²/(4kt)} is a solution of u′t − ku′′xx = 0 for each α ∈ R, together with the superposition principle, to determine u(x, t). I.e. solve the problem

u′t − ku′′xx = 0, x > 0, t > 0,
u(0, t) = t, t > 0.


4.4. Determine whether the following problems are well-posed or ill-posed:
a) u′′tt = u′′xx, u(0, t) = u(π, t) = u(x,0) = u(x,π), x, t ∈ [0,π].
b) u′t − ku′′xx = 0, u(0, t) = u(π, t) = 0, u(x,0) = sin x, x ∈ [0,π], t > 0.
c) u′t − ku′′xx = 0, u(0, t) = 0, u(x,0) = sin(x/2), x ∈ [0,π], t > 0.
d) u′t − ku′′xx = 0, u(0, t) = u(π, t) = 0, u(x,0) = sin(x/2), x ∈ [0,π], t > 0.

4.5. [S] a) Determine the Fourier series of the function f (x) which in the interval −π < x < π is given by f (x) = x².
b) Use a) to show that π²/12 = − ∑_{k=1}^{∞} (−1)ᵏ/k².

4.6.* Determine the Fourier series of f (t) = |sin t|.

4.7. [S] Consider a rod of length L = 1 with heat conduction coefficient k = 1. At the beginning the rod has the constant temperature 1. We then (instantaneously) cool down the ends of the rod to the temperature 0, where we then keep them during the rest of the experiment.
a) Formulate this problem mathematically.
b) Find an expression for the temperature of the rod at the point x at the time t.
(Hint: for the Fourier series expansion of the constant 1, use an odd periodic extension in the interval.)

4.8.* Solve the following problem by separation of variables:

u′t = u′′xx, 0 < x < 3, t > 0,
u(x,0) = sin(πx) − 2 sin(3x), 0 < x < 3,
u(0, t) = u(3, t) = 0, t > 0.

4.9. [S] Solve the following problem

u′t = u′′xx, 0 < x < π, t > 0,

u(x,0) = sin² x, 0 < x < π,

u′x(0, t) = u′x(π, t) = 0, t > 0.

4.10. Solve the following problem

u′′tt = u′′xx, 0 < x < π, t > 0,

u(x,0) = sinx, 0 < x < π,

u′t(x,0) = 1, 0 < x < π,

u(0, t) = u(π, t) = 0, t > 0.


CHAPTER 5

Introduction to Sturm-Liouville Theory and the Theory of Generalized Fourier Series

We start with some introductory examples.

5.1. Cauchy’s equation

The homogeneous Euler-Cauchy equation (Leonhard Euler and Augustin-Louis Cauchy) is a linear homogeneous ODE which can be written as

(*) x²y′′ + axy′ + by = 0.

Example 5.1. Solve the equation (*).

Solution: Set y(x) = x^r; then y′(x) = rx^{r−1} and y′′(x) = r(r−1)x^{r−2}. If we insert this into (*) we get

r(r−1)x^r + arx^r + bx^r = 0,

which gives us the equation

(**) r(r−1) + ar + b = 0.

This is the so-called characteristic equation corresponding to (*). Assume that the solutions of (**) are r1 and r2. We have three different cases:

1. If r1 and r2 are real and different, r1 ≠ r2, then
   y(x) = Ax^{r1} + Bx^{r2}.
2. If r1 and r2 are real and equal, r1 = r2 = r, then
   y(x) = Ax^r + Bx^r ln x.
3. If r1 and r2 are complex conjugates, r1 = α + iβ, r2 = α − iβ, then
   y(x) = Ax^{α+iβ} + Bx^{α−iβ}.

REMARK 6. Observe that

x^{α+iβ} = x^α e^{iβ ln x} = x^α (cos(β ln x) + i sin(β ln x))

and

x^{α−iβ} = x^α (cos(β ln x) − i sin(β ln x)),

hence we can write the solution of case 3 in the example above as

y(x) = x^α ((A + B) cos(β ln x) + i(A − B) sin(β ln x)).


If we only consider constants A and B such that C = A + B and D = i(A − B) are real numbers, then we see that

y(x) = x^α (C cos(β ln x) + D sin(β ln x))

is a real-valued solution to (*).

Example 5.2. Solve the differential equation

x²y′′ + 2xy′ − 6y = 0.

Solution: The characteristic equation is

r(r−1) + 2r − 6 = 0, i.e. r² + r − 6 = 0,

which has the solutions

r1 = 2, r2 = −3.

Since we have two different real solutions we are in case 1 above, and the general solution to the differential equation is given by

y(x) = Ax² + Bx⁻³.
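The result can be confirmed with a computer algebra system. The following sketch (our addition, assuming sympy) solves the same Euler-Cauchy equation directly.

import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

ode = sp.Eq(x**2 * y(x).diff(x, 2) + 2*x*y(x).diff(x) - 6*y(x), 0)
# dsolve returns something equivalent to y(x) = C1/x**3 + C2*x**2
print(sp.dsolve(ode))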

Example 5.3. Solve the equation

x²y′′ + 2xy′ + λy = 0, λ > 1/4.

Solution: The characteristic equation is

r² + r + λ = 0,

with solutions

r = −1/2 ± √(1/4 − λ) = −1/2 ± i√(λ − 1/4).

Since we now have two complex conjugate solutions r1 = α + iβ and r2 = α − iβ, we are in case 3 above, and the general solution to the differential equation is given by

y(x) = x^{−1/2} ( A sin(√(λ − 1/4) ln x) + B cos(√(λ − 1/4) ln x) ).

5.2. Examples of Sturm-Liouville Problems

In the next section we will describe in more detail what is meant by a Sturm-Liouville problem (Charles-François Sturm and Joseph Liouville), but first we will look at some examples.

Example 5.4. Solve y′′+λy = 0,

y(0) = y(l) = 0.


Solution: Previously (cf. Section 4.8) we saw that this problem can be solved if and only if

λ = λn = (nπ/l)², n = 1, 2, 3, . . . (eigenvalues)

with the corresponding solutions

yn(x) = an sin(nπx/l) (eigenfunctions).

Example 5.5. Solve

X ′′(x) − λX(x) = 0, 0 ≤ x ≤ 1,
X(0) = 0,
X ′(1) = −3X(1).

Solution: We have three different cases:

λ = 0: X(x) = Ax + B, X(0) = 0 ⇒ B = 0, and X ′(1) = −3X(1) ⇒ A = −3A ⇒ A = 0. Hence, we only get the trivial solution X(x) ≡ 0.

λ > 0: With λ = p² the solutions are given by X(x) = Ae^{px} + Be^{−px}. The boundary conditions X(0) = 0 and X ′(1) = −3X(1) give us the system

X(0) = A + B = 0,
X ′(1) + 3X(1) = A(pe^p + pe^{−p}) + 3A(e^p − e^{−p}) = 0,

i.e. B = −A, and A = 0 or e^p(p + 3) + e^{−p}(p − 3) = 0; but this expression is never 0 for p ≠ 0 (show this!), hence we must have A = −B = 0, and also in this case we only get the trivial solution X ≡ 0.

λ < 0: With λ = −p² we get the solution X(x) = A cos px + B sin px, and the boundary conditions are X(0) = A = 0 and X ′(1) = −3X(1), which gives pB cos p = −3B sin p ⇒

B(p cos p + 3 sin p) = 0,

hence either B = 0 (and we get the trivial solution), or

p cos p + 3 sin p = 0,

i.e. p must satisfy the equation tan p = −p/3.

Thus, we see that we only have non-trivial solutions when λ is an eigenvalue λ = λn = −pn², n = 1, 2, . . . , where pn is a solution of tan p = −p/3 (see Fig. 5.2.1), and then we have the corresponding eigenfunctions

Xn(x) = an sin pn x.
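The eigenvalue condition tan p = −p/3 has no closed-form solutions, but the roots pn are easy to locate numerically. The sketch below (our addition, assuming numpy and scipy) brackets each root between a pole of tan p and the next multiple of π, as suggested by Fig. 5.2.1.

import numpy as np
from scipy.optimize import brentq

g = lambda p: np.tan(p) + p / 3.0

roots = []
for n in range(5):
    a = np.pi/2 + n*np.pi + 1e-6   # just to the right of a pole of tan p (g < 0 there)
    b = (n + 1) * np.pi            # here tan p = 0, so g > 0
    roots.append(brentq(g, a, b))
print(roots)   # p_1, p_2, ... ; the eigenvalues are lambda_n = -p_n**2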

Example 5.6. Solve

x²X ′′(x) + 2xX ′(x) + λX = 0,
X(1) = 0, X(e) = 0.

Solution: The characteristic equation is

r(r−1) + 2r + λ = 0 ⇔ r² + r + λ = 0,


FIGURE 5.2.1. Solutions p1, p2, p3, . . . to tan p = −p/3: the intersections of y = tan p and y = −p/3.

which has the solutions

r = −1/2 ± √(1/4 − λ) = −1/2 ± i√(λ − 1/4),

hence the cases we must investigate are λ < 1/4, λ = 1/4 and λ > 1/4 (cf. Example 5.3).

λ < 1/4: With r1,2 = −1/2 ± √(1/4 − λ) (real and different) we get the solutions X(x) = Ax^{r1} + Bx^{r2}, and the boundary conditions give

X(1) = 0, X(e) = 0 ⇔ A + B = 0, Ae^{r1} + Be^{r2} = 0 ⇔ A = −B, A(e^{r1} − e^{r2}) = 0,

i.e. since e^{r1} ≠ e^{r2} we must have A = 0, and we only get the trivial solution X ≡ 0.

λ = 1/4: Now we get a double root r = −1/2 and the solutions are X(x) = Ax^{−1/2} + Bx^{−1/2} ln x. The boundary conditions give X(1) = A = 0 and X(e) = Be^{−1/2} = 0, i.e. A = B = 0, and we only get the trivial solution X ≡ 0.

λ > 1/4: The two complex roots r = −1/2 ± i√(λ − 1/4) give the solutions

X(x) = (A/√x) sin(√(λ − 1/4) ln x) + (B/√x) cos(√(λ − 1/4) ln x),

and we get X(1) = B = 0 and X(e) = (A/√e) sin(√(λ − 1/4)) = 0, hence λ must satisfy

√(λ − 1/4) = nπ,

for some positive integer n. We get the eigenvalues

λn = 1/4 + (nπ)², n ∈ Z+,


FIGURE 5.2.2. The Bessel function J0(x) and its zeros α1, α2, α3, α4, α5.

and the corresponding eigenfunctions

Xn(x) = (An/√x) sin(nπ ln x).

Example 5.7. (The Bessel equation) An important ordinary differential equation in mathematical physics is the Bessel equation (Wilhelm Bessel) of order m:

r²w′′ + rw′ + (r² − m²)w = 0.

The solutions (there are two linearly independent ones) to this equation are called Bessel functions of order m. (For more information see e.g. Bessel functions at engineering fundamentals.) Here we will only consider a special case.

Solve the following problem involving the Bessel equation of order 0:

d²w/dr² + (1/r) dw/dr + k²w = 0,
w(R) = 0, w′(r) < ∞.

Solution: A general solution is given by

w(r) = C1 J0(kr) + C2 Y0(kr),

where J0 and Y0 are the Bessel functions of the first and second kind of order 0. It is known that Y′0 is not bounded, and if we impose the condition that w′(r) must be bounded we get C2 = 0. The boundary condition implies that

w(R) = C1 J0(kR) = 0,

and if we want a non-trivial solution (C1 ≠ 0) then k and R must satisfy J0(kR) = 0. It is well known that J0 has infinitely many zeros αn (α1 = 2.4048 . . . , α2 = 5.5201 . . . , α3 = 8.6537 . . . , etc., see Fig. 5.2.2). Hence we only get non-trivial solutions for the eigenvalues

kn = αn/R, n ∈ Z+,

with the corresponding solutions being the eigenfunctions

wn(r) = J0(αn r/R), n ∈ Z+.


5.3. Inner Product and Norm

To construct an orthonormal basis in a vector space we must be able to measure lengths and angles. Hence we must introduce an inner product (a scalar product). With the help of an inner product we can easily determine which elements are orthogonal to each other. There are two examples of vector spaces and inner products that we will consider here: the plane R² together with the usual scalar product, and a vector space consisting of functions on an interval together with an inner product defined by an integral.

Vectors in R²

If we have two vectors ~x = (x1, x2) and ~y = (y1, y2), the inner product of ~x and ~y is defined by

~x · ~y = x1y1 + x2y2.

The norm of ~x, |~x|, is defined by

|~x|² = ~x · ~x = x1² + x2²,

and the distance between ~x and ~y, |~x − ~y|, is given by

|~x − ~y|² = (x1 − y1)² + (x2 − y2)².

The angle θ between ~x and ~y can now be computed using the relation

~x · ~y = |~x||~y| cos θ,

and we say that two vectors are orthogonal (perpendicular to each other), ~x ⊥ ~y, if θ = π/2, i.e. if

~x · ~y = 0.

A Function Space

We now consider the vector space consisting of functions f (x) defined on the interval [0, l] (for some l > 0), together with a positive weight function r(x). The generalizations of the concepts above are

〈 f , g〉 = ∫₀ˡ f (x)g(x)r(x) dx, (inner product)
‖ f ‖² = ∫₀ˡ | f (x)|² r(x) dx, (norm)
‖ f − g‖² = ∫₀ˡ | f (x) − g(x)|² r(x) dx, (distance)
〈 f , g〉 = ‖ f ‖‖g‖ cos θ, (angle)
f ⊥ g ⇔ 〈 f , g〉 = 0 ⇔ ∫₀ˡ f (x)g(x)r(x) dx = 0. (orthogonality)
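These definitions translate directly into a short numerical check. The sketch below is our own illustration (assuming numpy and scipy); it implements the weighted inner product with r(x) = 1 and l = 1 and confirms that sin(πx) and sin(2πx) are orthogonal, with squared norm l/2.

import numpy as np
from scipy.integrate import quad

def inner(f, g, r=lambda x: 1.0, l=1.0):
    # <f, g> = integral over [0, l] of f(x) g(x) r(x) dx
    return quad(lambda x: f(x) * g(x) * r(x), 0, l)[0]

y1 = lambda x: np.sin(1 * np.pi * x)
y2 = lambda x: np.sin(2 * np.pi * x)

print(inner(y1, y2))   # approximately 0: the functions are orthogonal
print(inner(y1, y1))   # approximately 0.5 = l/2: the squared norm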

5.4. Sturm-Liouville Problems

A general Sturm-Liouville problem can be written as

(P(x)y′)′ + (−q(x) + λr(x))y = 0, 0 < x < l,
c1y(0) + c2y′(0) = 0,
c3y(l) + c4y′(l) = 0.

Here r(x), q(x) and P(x) are given functions, c1, . . . , c4 given constants and λ a constant which can only take certain values, the eigenvalues corresponding to the problem. r(x) is usually called a weight function. It is also customary to assume that r(x) > 0.


If P(x) > 0 and c1, . . . , c4 ≠ 0 we say that the problem is regular, and if P or r is 0 at some endpoint we say that it is singular (note that there are other examples of both regular and singular SL problems; e.g. the following problem is regular).

Example 5.8. r(x) = 1, P(x) = 1, q(x) = 0, c1 = c3 = 1, c2 = c4 = 0:

y′′ + λy = 0,
y(0) = 0,
y(l) = 0.

(Cf. Example 5.4.) In this case we have

λn = (nπ/l)², n = 1, 2, 3, . . . , (eigenvalues)
yn = sin(nπx/l), (eigenfunctions)

and

〈yn, ym〉 = ∫₀ˡ sin(nπx/l) sin(mπx/l) dx = 0, if n ≠ m,
‖yn‖² = ∫₀ˡ |sin(nπx/l)|² dx = ∫₀ˡ (1/2)(1 − cos(2nπx/l)) dx = l/2.

If f is a function on the interval [0, l] we can define the Fourier series of f , S(x), by (cf. Section 6.1):

S(x) = ∑_{n=1}^{∞} cn sin(nπx/l), where
cn = (1/‖yn‖²) 〈 f , yn〉 = (2/l) ∫₀ˡ f (x) sin(nπx/l) dx.

REMARK 7. Examples 5.5-5.7 are also Sturm-Liouville problems.

For a regular Sturm-Liouville problem we have:

(i) The eigenvalues are real, and to every eigenvalue the corresponding eigenfunction is unique up to a constant multiple.
(ii) The eigenvalues form an infinite sequence λ1, λ2, . . . and they can be ordered as
     0 ≤ λ1 < λ2 < λ3 < · · · , with lim_{n→∞} λn = ∞.
(iii) If y1 and y2 are two eigenfunctions corresponding to two different eigenvalues, λi1 ≠ λi2, they are orthogonal with respect to the inner product defined by r(x), i.e.
     〈y1, y2〉 = ∫₀ˡ y1(x)y2(x)r(x) dx = 0.


5.5. Generalized Fourier Series

We will now see how we can generalize the concept of Fourier series from the usual trigonometric basisfunctions to an orthonormal basis consisting of eigenfunctions to a Sturm-Liouville problem.

Assume that we have an infinite linear combination

f (x) = ∑_{n=1}^{∞} cn yn(x),

where yn ⊥ ym for n ≠ m. Then

〈 f , ym〉 = 〈 ∑_{n=1}^{∞} cn yn, ym 〉 = ∑_{n=1}^{∞} cn 〈yn, ym〉 = cm 〈ym, ym〉 = cm ‖ym‖².

Let f be an arbitrary function on [0, l]. Then we define the generalized Fourier series of f as

S(x) = ∑_{n=1}^{∞} cn yn(x),

where

cn = (1/‖yn‖²) 〈 f , yn〉

are the generalized Fourier coefficients.

Let y1, y2, . . . be a set of orthogonal eigenfunctions of a regular Sturm-Liouville problem, and let f be a piecewise smooth function on [0, l]. Then, for each x in [0, l] we have that

(a) S(x) = f (x) if f is continuous at x, and
(b) S(x) = (1/2)( f (x+) + f (x−)) if f has a discontinuity at x.

5.6. Some Applications

Example 5.9. Consider a rod of length l, with constant density ρ, specific heat cv and thermal conductance κ. Let the temperature of the rod at the time t and the distance x (from, say, the left end point) be denoted by u(x, t). Assume that the temperature at the end points of the rod is given by

u(0, t) = u(l, t) = 0, t > 0,

and that the temperature distribution in the rod at the initial time t = 0 is given by

u(x,0) = f (x), 0 ≤ x ≤ l.

Determine u(x, t) for 0 ≤ x ≤ l and t ≥ 0.

Solution: We have seen (cf. Chapter 1) that the mathematical formulation of this problem is

(*)
u′t(x, t) − ku′′xx(x, t) = 0, 0 ≤ x ≤ l, t ≥ 0, k = κ/(cvρ),
u(0, t) = u(l, t) = 0, t > 0,
u(x,0) = f (x), 0 ≤ x ≤ l.

We begin by performing the following natural scaling of the problem (cf. Chapter 1):

(5.6.1) t̃ = (k/l²) t, x̃ = x/l.


Then we arrive at the following standard problem to solve (where, for simplicity, we again write x, t and u for the scaled quantities):

(1) u′t(x, t) − u′′xx(x, t) = 0, 0 ≤ x ≤ 1, t ≥ 0,
(2) u(0, t) = u(1, t) = 0, t > 0,
(3) u(x,0) = f̃ (x), 0 ≤ x ≤ 1,

where f̃ (x) = f (xl). We can now use Fourier's method to solve this problem (cf. Section 4.8).

Step 1: Try to find solutions of the type

u(x, t) = X(x)T (t).

If we insert this expression in (1) above we get

T ′(t)/T (t) = X ′′(x)/X(x) = −λ,

i.e. the two equations

(A) X ′′(x) + λX(x) = 0, and
(B) T ′(t) + λT (t) = 0.

The function u(x, t) = X(x)T (t) must also satisfy the boundary conditions (2):

X(0)T (t) = X(1)T (t) = 0, t ≥ 0,

and if we want a non-trivial solution (T ≢ 0) we conclude that

X(0) = X(1) = 0.

This boundary condition together with (A) gives us the Sturm-Liouville problem

(**) X ′′(x) + λX(x) = 0, X(0) = X(1) = 0.

Step 2: We get three cases depending on the value of λ: λ < 0, λ = 0, and λ > 0.

λ < 0: We get only the trivial solution X(x) ≡ 0.
λ = 0: We get only the trivial solution X(x) ≡ 0.
λ > 0: Then

X(x) = A sin(√λ x) + B cos(√λ x),

and X(0) = 0 ⇒ B = 0, and X(1) = 0 ⇒ A sin(√λ) = 0 ⇒ A = 0 or √λ = nπ, n ∈ Z+.

Thus the SL-problem (**) has the eigenvalues

λn = (nπ)², n ∈ Z+,

and the corresponding eigenfunctions

Xn(x) = sin(nπx).

Furthermore, for these values λ = λn, the equation (B) has the solution

T (t) = Tn(t) = e^{−(nπ)²t},

and we conclude that the general (separable) solution of (1) and (2) can be written as

un(x, t) = cn sin(nπx) e^{−(nπ)²t}.

Step 3: The superposition principle (cf. Section 4.5) tells us that the function

u(x, t) = ∑_{n=1}^{∞} bn sin(nπx) e^{−(nπ)²t}


also satisfies (1) and (2). We will now see that we can also make this function satisfy the initial condition (3) by choosing appropriate constants bn. It is clear that

u(x,0) = ∑_{n=1}^{∞} bn sin(nπx),

and if we choose the bn's as the Fourier coefficients of f̃ , i.e.

bn = 2 ∫₀¹ f̃ (x) sin(nπx) dx,

we actually get

u(x,0) = ∑_{n=1}^{∞} bn sin(nπx) = f̃ (x).

We conclude that the function

u(x, t) = ∑_{n=1}^{∞} bn sin(nπx) e^{−(nπ)²t},

with bn as above, satisfies (1), (2) and (3).

Final step: By using the scaling from (5.6.1) we see that the solution to the original problem is given by

u(x, t) = ∑_{n=1}^{∞} bn sin(nπx/l) e^{−(nπ/l)²kt},

where

bn = (2/l) ∫₀ˡ f (x) sin(nπx/l) dx.

Example 5.10. Consider a rod between x = 1 and x = e. Let u(x, t) denote the temperature of the rod at the point x and time t. Assume that the end points are kept at the constant temperature 0, that at the initial time t = 0 the rod has a heat distribution given by

u(x,0) = f (x), 1 < x < e,

that no heat is added, and that the rod has constant density ρ and specific heat cv. Assume also that the rod has heat conductance K which varies as K(x) = x². The equation which determines the temperature u(x, t) is in this case

(1) cvρ u′t = ∂/∂x (x²u′x), 1 < x < e, t > 0.

Determine u(x, t) for 1 ≤ x ≤ e and t > 0.

Solution: We apply Fourier's method of separating the variables and assume that we can find a solution of the form u(x, t) = X(x)T (t). Inserting this expression in (1) above we get

cvρ T ′/T = (1/X) d/dx (x²X ′) = −λ,

where λ is a constant and X satisfies the boundary condition

(2) X(1) = X(e) = 0.

Thus T satisfies the equation

(3) T ′ = −(λ/(cvρ)) T,


and X satisfies

(4) d/dx (x²X ′) + λX = 0, 1 < x < e, i.e. x²X ′′ + 2xX ′ + λX = 0, 1 < x < e.

The equation (4) together with the boundary condition (2) gives us a regular Sturm-Liouville problem on [1, e]. The characteristic equation is

r(r−1) + 2r + λ = 0,

with the roots

r1,2 = −1/2 ± √(1/4 − λ).

As in Example 5.6 we get three different cases depending on the value of λ:

λ = 1/4: We have a double root r = −1/2, and the solutions are given by X(x) = Ax^{−1/2} + Bx^{−1/2} ln x. The boundary condition (2) gives X(1) = A = 0 and X(e) = Be^{−1/2} = 0, i.e. we only get the trivial solution X ≡ 0.

λ < 1/4: The roots are now real and different, r1 ≠ r2, and the solutions are

X(x) = Ax^{r1} + Bx^{r2}.

The boundary conditions give us

X(1) = A + B = 0, X(e) = Ae^{r1} + Be^{r2} = 0 ⇒ A = −B, A(e^{r1} − e^{r2}) = 0,

and since r1 ≠ r2 we must have A = 0, and we only get the trivial solution X ≡ 0.

λ > 1/4: We have two complex roots r = −1/2 ± i√(λ − 1/4), and the general solution is

X(x) = (A/√x) sin(√(λ − 1/4) ln x) + (B/√x) cos(√(λ − 1/4) ln x).

The boundary conditions imply that X(1) = B = 0 and X(e) = Ae^{−1/2} sin(√(λ − 1/4)) = 0, which gives us that

√(λ − 1/4) = nπ, n ∈ Z+.

Observe that the case n = 0 is the same as λ = 1/4. Hence the eigenvalues of the Sturm-Liouville problem (4) and (2) are

λn = 1/4 + n²π², n ∈ Z+,

and the corresponding eigenfunctions are

Xn(x) = (1/√x) sin(nπ ln x), n ∈ Z+.

For every fixed n, the equation (3) is

T ′n = −(λn/(cvρ)) Tn,


with the solutions

Tn(t) = e^{−(λn/(cvρ)) t}, n ∈ Z+.

We conclude that the functions

un(x, t) = Tn(t)Xn(x) = (1/√x) sin(nπ ln x) e^{−(λn/(cvρ)) t}, n ∈ Z+,

are solutions to the original equation, and they also satisfy the boundary conditions. The superposition principle implies that the function

u(x, t) = ∑_{n=1}^{∞} an (1/√x) sin(nπ ln x) e^{−(λn/(cvρ)) t}

is also a solution of the equation which satisfies the boundary conditions. Finally, to accommodate the initial values we must choose the constants an so that

u(x,0) = ∑_{n=1}^{∞} an (1/√x) sin(nπ ln x) = f (x).

This holds if we choose the constants an as

an = (1/‖Xn‖²) ∫₁ᵉ f (x)Xn(x) dx = 2 ∫₁ᵉ f (x) (1/√x) sin(nπ ln x) dx.

(Note that ‖Xn‖² = ∫₁ᵉ (1/x) sin²(nπ ln x) dx = 1/2.) The wanted temperature distribution is thus given by

u(x, t) = ∑_{n=1}^{∞} an (1/√x) sin(nπ ln x) e^{−(λn/(cvρ)) t},

where

an = 2 ∫₁ᵉ ( f (x)/√x) sin(nπ ln x) dx.

Example 5.11. Solve the problem:

(1) ∂u/∂t = ∂²u/∂x²,
(2) u(0, t) = 0,
(3) u′x(1, t) = −3u(1, t),
(4) u(x,0) = f (x).

Solution: We use Fourier's method of separation of variables.

Step 1: Assume that u(x, t) = X(x)T (t) and insert this into (1). In the same way as in the previous examples we obtain the equation

T ′(t)/T (t) = X ′′(x)/X(x) = λ,

which gives us the two equations

(5) T ′(t) − λT (t) = 0,
(6) X ′′(x) − λX(x) = 0.

Step 2: There are three cases of λ we must study.


λ = 0: The solutions to (5) and (6) are then T = constant and X = Ax + B, i.e. u(x, t) = Ax + B for some constants A and B. The boundary value (2) gives u(0, t) = B = 0, and (3) gives u′x(1, t) = A = −3u(1, t) = −3A, i.e. A = 0 and we only get the trivial solution u(x, t) ≡ 0.

λ > 0: The solutions to (5) are T (t) = Ae^{λt} and the solutions to (6) are X(x) = Be^{√λ x} + Ce^{−√λ x}. The boundary value (2) gives u(0, t) = T (t)X(0) = Ae^{λt}(B + C) = 0, hence either A = 0 (which implies u ≡ 0) or B = −C. The condition (3) is now equivalent to

AB√λ(e^{√λ} + e^{−√λ}) = −3AB(e^{√λ} − e^{−√λ}) ⇔ ABe^{2√λ}(3 + √λ) = AB(3 − √λ),

which is only satisfied if AB = 0 (show this!), and in this case we only get the trivial solution u ≡ 0.

λ < 0: If we set λ = −p² we get (in the same manner as in Example 5.5) the solutions

(*) un(x, t) = Bn e^{−pn²t} sin pn x, n = 1, 2, 3, . . . ,

where pn are solutions to the equation

tan p = −p/3.

Step 3: All functions defined by (*) satisfy (1), (2) and (3). According to the superposition principle the function

u(x, t) = ∑_{n=1}^{∞} Bn e^{−pn²t} sin pn x

also satisfies (1), (2) and (3). Furthermore, (6) with the corresponding boundary conditions is a regular Sturm-Liouville problem, and the theory of generalized Fourier series implies that u(x, t) will satisfy (4):

u(x,0) = ∑_{n=1}^{∞} Bn sin pn x = f (x)

if we choose the constants Bn as

(**) Bn = 〈 f (x), sin pn x〉 / ‖sin pn x‖² = ( ∫₀¹ f (x) sin pn x dx ) / ( ∫₀¹ sin² pn x dx ).

Thus, the solution to the problem is

u(x, t) = ∑_{n=1}^{∞} Bn e^{−pn²t} sin pn x,

where pn are the positive solutions of tan p = −p/3, p1 < p2 < · · · (see Fig. 5.2.1), and Bn is defined by (**).


Example 5.12. (The wave equation) A vibrating circular membrane with radius R is described by the following equation together with boundary and initial values:

(1) u′′tt = c²(u′′xx + u′′yy), t > 0, r = √(x² + y²) ≤ R,
(2) u(R, t) = 0, t > 0, (fixed boundary)
(3) u(r,0) = f (r), r ≤ R, (initial position)
(4) ∂u/∂t (r,0) = g(r), r ≤ R. (initial velocity)

Observe that the initial conditions only depend on r = √(x² + y²), the distance from the center of the membrane to the point (x, y), and if we introduce polar coordinates

x = r cos θ, y = r sin θ,

we see that (1) can be written as

∂²u/∂t² = c²( ∂²u/∂r² + (1/r) ∂u/∂r + (1/r²) ∂²u/∂θ² ).

If we also make the assumption that u(r, θ, t) is radially symmetric (i.e. that u(r, θ, t) is independent of the angle θ) we can write (1) as

(1′) ∂²u/∂t² = c²( ∂²u/∂r² + (1/r) ∂u/∂r ).

To solve the problem we continue as before and use Fourier's method to separate the variables. With the function u(r, t) = W(r)G(t) inserted into (1′) we get the equations

(5) W ′′ + (1/r)W ′ + k²W = 0, 0 ≤ r ≤ R,
(6) G′′ + (ck)²G = 0, t > 0.

Furthermore, we get the following boundary value from (2):

(7) W(R) = 0,

and (5) together with (7) is a Sturm-Liouville problem which gives us the eigenfunctions

Wn(r) = J0(αn r/R),

where αn = kn R are solutions of J0(kR) = 0 (see Example 5.7). Observe that if we write (5) in the general form we see that we have the weight function r, i.e. the inner product is given by

〈 f , g〉 = ∫₀ᴿ f (r)g(r) r dr.

By solving (6) for k = kn and using the superposition principle we see that

(*) u(r, t) = ∑_{n=1}^{∞} ( An cos(cαn t/R) + Bn sin(cαn t/R) ) J0(αn r/R)

is a solution to (1) and (2). And we can also choose the constants An so that (3) is satisfied, i.e.

u(r,0) = ∑_{n=1}^{∞} An J0(αn r/R) = f (r),

if

(**) An = ( 1 / ∫₀ᴿ J0(αn r/R)² r dr ) ∫₀ᴿ f (r) J0(αn r/R) r dr.


In the same way we see that (4) is satisfied if we choose Bn so that

(***) Bn (cαn/R) = ( 1 / ∫₀ᴿ J0(αn r/R)² r dr ) ∫₀ᴿ g(r) J0(αn r/R) r dr.

Hence, the answer to the problem is given by (*), where An and Bn are chosen as in (**) and (***).

5.7. Exercises

5.1. [S] Solve the following S-L problems by determining the eigenvalues and eigenfunctions:
(a) (x²u′(x))′ + λu(x) = 0, 1 < x < e^L, u(1) = u(e^L) = 0,
(b) (x²u′(x))′ + λu(x) = 0, 1 < x < e^L, u(1) = u′(e) = 0.

5.2.* Solve the following S-L problems by determining the eigenvalues and eigenfunctions:
(a) u′′(x) + λu(x) = 0, 0 < x < l, u′(0) = u′(l) = 0,
(b) u′′(x) + λu(x) = 0, 0 < x < l, u′(0) = u(l) = 0.

5.3. [S] Use Fourier’s method to solve the following problem:

u′t = u′′xx, 0≤ x ≤ l, t > 0,

u′x(0, t) = u′x(l, t) = 0, t > 0,

u(x,0) = f (x), 0 < x < l.

5.4.* A rod between x = 1 and x = e has constant temperature 0 at the endpoints, and at the time t = 0 the heat distribution is given by √x, 1 < x < e. The rod has a constant density ρ and constant specific heat C, but its thermal conductance varies like K = x², 1 < x < e. Formulate an initial- and boundary-value problem for the temperature of the rod, u(x, t). Then use Fourier's method to solve the problem.

5.5.* (a) Solve the problem

u′t = 4u′′xx, 0 ≤ x ≤ 1, t > 0,
u(0, t) = 0, t > 0,
u′x(1, t) = −cu(1, t), t > 0,
u(x,0) = x, 0 ≤ x < 1/2; u(x,0) = 1 − x, x ≥ 1/2.

(b) Give a physical interpretation of the problem in (a).


5.6. [S] Consider an ideal liquid, flowing orthogonally towards an infinitely long cylinder of radius a. Since the problem is uniform in the axial coordinate we can treat the problem in plane polar coordinates.

The velocity of the liquid, ~v(r, θ), is then given by

~v(r, θ) = −grad ψ,

where ψ is a solution of the Laplace equation

∆ψ = 0.

At the surface of the cylinder we have the boundary condition

∂ψ/∂r |_{r=a} = 0,

and as r → ∞ we have the following asymptotic boundary condition

lim_{r→∞} ψ/x = lim_{r→∞} ψ/(r cos θ) = −v0,

where v0 is a constant.
a) Show, using separation of variables, that in polar coordinates the assumption ψ(r, θ) = R(r)Θ(θ) transforms the Laplace equation into the following two equations

Θ′′(θ) + m²Θ(θ) = 0,
R′′(r) + (1/r)R′(r) − (m²/r²)R(r) = 0,

where m is an integer.
b) Use a) to find ψ and ~v.


CHAPTER 6

Introduction to Transform Theory with Applications

6.1. Transforms of Fourier Series Type

Example 6.1. (The classical form) If f (t) is defined for t ∈ [−l, l] (or alternatively is periodic with period 2l) we can construct a (classical) Fourier series (Joseph Fourier) for f :

Fcl : f (t) → {a0, a1, b1, . . . , an, bn, . . .},

where

a0 = (1/2l) ∫_{−l}^{l} f (t) dt,
an = (1/l) ∫_{−l}^{l} f (t) cos(nΩt) dt, n = 1, 2, . . . , and
bn = (1/l) ∫_{−l}^{l} f (t) sin(nΩt) dt, n = 1, 2, . . . ,

are the Fourier coefficients (amplitudes). Here we have defined Ω = π/l. The "signal" f (t) can be reconstructed (in points of continuity) in the following way:

Fcl⁻¹ : f (t) = a0 + ∑_{n=1}^{∞} ( an cos(nΩt) + bn sin(nΩt) ).

DEFINITION. (Generalized form)

In the generalized form we use, for example, eigenfunctions from a Sturm-Liouville problem (Section 5.4) instead of the sine and cosine functions.

Let {yn(t)}_{n=1}^{∞} be an orthogonal system (basis functions), i.e.

〈yn, ym〉 = 0 if n ≠ m, and 〈yn, yn〉 = ‖yn‖².

We then define

Fd : f (t) → {a∗n}_{n=1}^{∞},

where

a∗n = (1/‖yn‖²) 〈 f , yn〉

are the Fourier coefficients. Under rather general assumptions we can reconstruct the signal f (t) (in points of continuity) by

Fd⁻¹ : f (t) = ∑_{n=1}^{∞} a∗n yn(t).


REMARK 8. The classical form in Example 6.1 is obtained by considering

{yn(t)}_{n=1}^{∞} = {1, cos Ωt, sin Ωt, . . . , cos nΩt, sin nΩt, . . .}.

Note that in this case we have

〈 f , cos nΩt〉 = ∫_{−l}^{l} f (t) cos nΩt dt, and
‖cos nΩt‖² = ∫_{−l}^{l} cos² nΩt dt = ∫_{−l}^{l} (1 + cos 2nΩt)/2 dt = l.

Observe also that the integrals can be taken over any period of f , e.g. [0, 2l].

Example 6.2. (Classical complex form)

Fc : f (t) → {cn}_{n=−∞}^{∞},

where

c0 = (1/2l) ∫_{−l}^{l} f (t) dt, and
cn = (1/2l) ∫_{−l}^{l} f (t) e^{−inΩt} dt, n = ±1, ±2, . . . .

Here we have the reconstruction formula:

Fc⁻¹ : f (t) = ∑_{n=−∞}^{∞} cn e^{inΩt}.

REMARK 9. The complex form in Example 6.2 can be deduced from the formulas in Example 6.1 and Euler's formulas:

sin t = (e^{it} − e^{−it})/(2i),
cos t = (e^{it} + e^{−it})/2,

or equivalently

(6.1.1) e^{it} = cos t + i sin t, e^{−it} = cos t − i sin t.

We have

f (t) = a0 + ∑_{n=1}^{∞} ( an cos nΩt + bn sin nΩt )
     = a0 + ∑_{n=1}^{∞} ( an (e^{inΩt} + e^{−inΩt})/2 + bn (e^{inΩt} − e^{−inΩt})/(2i) )
     = a0 + ∑_{n=1}^{∞} ( (an/2 + bn/(2i)) e^{inΩt} + (an/2 − bn/(2i)) e^{−inΩt} )
     = c0 + ∑_{n=1}^{∞} ( cn e^{inΩt} + c̄n e^{−inΩt} ),

where we let c0 = a0 and cn = an/2 + bn/(2i). (Observe that an and bn are real numbers.) If we additionally define

c−n = c̄n,

we get

f (t) = ∑_{n=−∞}^{∞} cn e^{inΩt}.


Moreover:

n > 0: cn = an/2 − i bn/2 = (1/2l) ∫_{−l}^{l} f (t) e^{−inΩt} dt,
n = 0: c0 = a0,
n < 0: cn = c̄_{−n} = a_{−n}/2 + i b_{−n}/2 = (1/2l) ∫_{−l}^{l} f (t) cos(−nΩt) dt + (i/2l) ∫_{−l}^{l} f (t) sin(−nΩt) dt = (1/2l) ∫_{−l}^{l} f (t) e^{−inΩt} dt.

6.2. The Laplace Transform

If f (t) is defined for t ≥ 0, the (unilateral) Laplace transform (Pierre-Simon Laplace) L and its inverse L⁻¹ are defined by:

L : f (t) ↦ F(s) = L{ f (t)}(s) = ∫₀^∞ e^{−st} f (t) dt,
L⁻¹ : F(s) ↦ f (t) = L⁻¹{F(s)}(t) = (1/2πi) ∫_{a−i∞}^{a+i∞} F(s) e^{st} ds.

Note that if f (t)e^{−σ0 t} → 0 as t → ∞, then the first integral converges for all complex numbers s with real part greater than σ0, and in the second integral we then demand that a > σ0.

REMARK 10. In applications the inverse transforms are usually computed by using a table (see e.g.Appendix A-1, p. 90). When computing the inverse transform it is sometimes also useful to rememberhow to compute partial fraction decompositions (see e.g. Appendix A-6, p. 99)

It is obvious that the Laplace transform is linear, i.e.

L{a f (t) + bg(t)} = aL{ f (t)} + bL{g(t)}.

Apart from computing the Laplace transform of a function by using the integral in the definition aboveone can also use the general properties stated below, which also illustrate some important properties ofthe Laplace transform.

Differentiation

L{ f ′(t)}(s) = sL{ f (t)}(s) − f (0),
L{ f ′′(t)}(s) = s²L{ f (t)}(s) − s f (0) − f ′(0),
...
L{ f ⁽ⁿ⁾(t)}(s) = sⁿL{ f }(s) − s^{n−1} f (0) − s^{n−2} f ′(0) − · · · − f ^{(n−1)}(0).

Convolution

The convolution product of two functions f and g, f ⋆ g, over a finite interval [0, t] is defined as

( f ⋆ g)(t) = ∫₀ᵗ f (u)g(t − u) du.

For the Laplace transform we then have

L{ f ⋆ g} = L{ f }L{g}.


In fact,

L{ f ⋆ g} = ∫₀^∞ ( ∫₀ᵗ f (u)g(t − u) du ) e^{−st} dt
         = ∫₀^∞ ∫ᵤ^∞ f (u)g(t − u) e^{−st} dt du
         = ∫₀^∞ f (u) e^{−su} ( ∫ᵤ^∞ g(t − u) e^{−s(t−u)} dt ) du
         = [x = t − u] = ∫₀^∞ f (u) e^{−su} ( ∫₀^∞ g(x) e^{−sx} dx ) du
         = L{ f }L{g}.

Observe that in the second equality we used the identity ∫₀^∞ ∫₀ᵗ du dt = ∫₀^∞ ∫ᵤ^∞ dt du, which follows from the fact that both sides represent an area integral in the (u, t)-plane over the octant between the positive t-axis and the line t = u.
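The convolution rule is easy to test on a concrete pair of functions. The following sketch (our addition, assuming sympy) takes f (t) = g(t) = t, computes the convolution t³/6 explicitly and compares its transform with L{ f }L{g} = 1/s⁴.

import sympy as sp

t, u, s = sp.symbols('t u s', positive=True)

conv = sp.integrate(u * (t - u), (u, 0, t))            # (f * g)(t) = t**3/6
lhs = sp.laplace_transform(conv, t, s, noconds=True)   # transform of the convolution
rhs = sp.laplace_transform(t, t, s, noconds=True)**2   # product of the transforms
print(conv, lhs, sp.simplify(lhs - rhs))               # t**3/6, s**(-4), 0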

Damping

By damping a "signal" f (t) exponentially, i.e. multiplying f (t) by e^{−at}, one obtains a translation of the Laplace transform of f :

L{e^{−at} f (t)}(s) = ∫₀^∞ e^{−at} f (t) e^{−st} dt = ∫₀^∞ f (t) e^{−(s+a)t} dt = L{ f }(s + a).

I.e. we have the following formula:

(6.2.1) L{e^{−at} f (t)}(s) = L{ f }(s + a).

Time delay

Heaviside’s function is defined by

θ(t) =

0, t < 0,

1, t ≥ 0,

and for a ∈R the function t 7→ θ(t−a) is a function which takes the value 0 when t < a and 1 when t ≥ a(see Fig. 6.2.1). The meaning of the function θ(t−a) is to switch on a signal at time t = a, and one canalso form the function θ(t−a)−θ(t−b) which switch on a signal at the time t = a and switch it off atthe time t = b:

f (t)(θ(t−a)−θ(t−b)) =

f (t), a≤ t ≤ b,

0, else.

Another use of the Heaviside’s function is time delay. To translate a function f (t) which is defined fort ≥ 0 (i.e. delay the signal) one can form the function t 7→ f (t−a)θ(t−a), the function which is 0 whent < a and f (t− a) when t ≥ a. The Laplace transform of this function is given in the following mannerby a damping at the transform side

L f (t−a)θ(t−a)(s) =Z

0f (t−a)θ(t−a)e−stdt

=Z

af (t−a)e−stdt = [u = t−a]

=Z

0f (u)e−s(u+a)du = e−asL f(s),

Page 47: Bystrom Applied Mathematics

6.2. THE LAPLACE TRANSFORM 47

FIGURE 6.2.1. Shifted Heaviside’s function

1θ(a− t)

i.e. we have the relation

(6.2.2) L f (t−a)θ(t−a)(s) = e−asL f(s).

Example 6.3. We have L 1(s) =1s, L t(s) =

1s2 , . . . ,L tn(s) =

n!sn+1 , since

L 1(s) =Z

01e−stdt =

[e−st

−s

]∞

0=

1s,

L t(s) =Z

0te−stdt =

[te−st

−s

]∞

0+

1s

Z∞

01e−stdt

= 0+1s· 1

s=

1s2 , etc.

Observe that when we calculate the integral from 0 to ∞ of tne−st each integration by parts will giveus an s in the denominator and a factor in the numerator, and since tke−st vanishes at both limits of

the integral (when k > 0) all terms will vanish except the last,n!sn

Z∞

0e−stdt.

By using this example and the dampening formula (6.2.1) we can easily compute for example

L

e−at(s) =1

s−a, L

te−at(s) =1

(s−a)2 , etc.

Example 6.4. Let f (t) = eiat , where a is a constant and t ≥ 0. The Laplace transform of f is then givenby:

L

eiat(s) =Z

0eiate−stdt =

Z∞

0e(ia−s)tdt

=

[e(ia−s)t

ia− s

]∞

0

=1

s− ia=

s+ ias2 +a2

=s

s2 +a2 + ia

s2 +a2 ,

and since L is linear Eulers formulas (6.1.1) implies that

L cosat(s) =s

s2 +a2 ,

L sinat(s) =a2

s2 +a2 .

Page 48: Bystrom Applied Mathematics

48 6. INTRODUCTION TO TRANSFORM THEORY WITH APPLICATIONS

Example 6.5. Solve the initial value problemy′′+ y = 1,

y(0) = y′(0) = 0.

Solution: Let L y(t)= Y (s). Then L

y′′(t)

= s2Y (s), and if we (Laplace-) transform the equa-tion above we get

L

y′′+ y

= s2Y (s)+Y (s) = L 1=1s,

i.e.

Y (s) =1s

1+ s2 =1

s(1+ s2)=

1s− s

1+ s2 .

If we apply the inverse Laplace transform we get

y(t) = L−1 Y (s)(t) = L−1

1s

(t)−L−1

s

1+ s2

(t)

= 1− cos t.

Example 6.6. (The heat conduction equation)Consider the boundary value problem

ut − kuxx = 0, t > 0, x > 0,

u(x,0) = 0, x > 0,

u(0, t) = 1, t > 0,

where u(x, t) is a bounded function (u(x, t) gives the heat in the point x at the time t). Now transformthe entire equation in the time variable, and let U(x,s) denote the Laplace transform of u(x, t). Theequation ut − kuxx = 0 can now be written as

sU(x,s)− kUxx(x,s) = 0,

and if we solve this ordinary differential equation (in the x-variable), we get

U(x,s) = Ae−√

s/kx +Be√

s/kx,

where A and B are functions of s. Since we assumed u to be bounded (in both variables) the termcontaining e

√s/kx must vanish, i.e. B = 0, and we get

U(x,s) = A(s)e−√

s/kx.

The boundary condition implies U(0,s) = L u(0, t)(s) = L 1(s) =1s, but we see that U(0,s) =

A(s) so A(s) =1s

and

U(x,s) =1s

e−√

s/kx.

Page 49: Bystrom Applied Mathematics

6.3. THE FOURIER TRANSFORM 49

To find u we must now apply the inverse transform on U. For this purpose it is convenient to use atable, and using Appendix 1, p. 90 we see that

u(x, t) = erfc(

x2√

kt

),

where erfc is the complementary error function (erf),

erfc(t) = 1− erf(t),

erfc(t) =2√π

Z t

0e−z2

dz.

6.3. The Fourier Transform

The counterpart of Fourier series for functions f (t) defined on R is the Fourier transform, F f, whichwe define as

F : f (t) 7→ f (ω) = F f (t)=Z

−∞

f (t)e−iωtdt,

for functions f (t) such that the integral converges. We also have an inverse transform

F −1 : f (t) =1

Z∞

−∞

f (ω)eiωtdω.

REMARK 11. We can still interpret the formula as if we reconstruct the signal f (t) as a sum of waves(basis functions) eiωt , with amplitudes f (ω).

REMARK 12. In applications it is customary to find the inverse transform using appropriate tables (seeAppendix 2, p. 91).

In the same manner as for the Laplace transform we can derive a number of useful general properties forthe Fourier transform.

• LinearityF a f (t)+bg(t)= aF f (t)+bF g(t) .

• Differentiation

F

f ′(t)

= iωF f (t) ,

F

f ′′(t)

= (iω)2F f (t) ,

...

F

f (n)(t)

= (iω)nF f (t) .

• ConvolutionF f ?g= F f (t)F g(t) ,

where the convolution over R is defined by

f ?g =Z

−∞

f (t−u)g(u)du.

• Frequency modulation

F

eiat f (t)

(ω) = F f (t)(ω−a) = f (ω−a).

Page 50: Bystrom Applied Mathematics

50 6. INTRODUCTION TO TRANSFORM THEORY WITH APPLICATIONS

• Time delayF f (t−a)= e−iωa f (ω).

Example 6.7. Let f (t) = θ(t)e−t , (θ(t) is defined as on p. 46) then

F

θ(t)e−t(ω) =Z

−∞

θ(t)e−te−iωtdt

=Z

0e−(1+iω)tdt

=

[−e−(1+iω)t

1+ iω

]∞

0

=1

1+ iω.

I.e. f (ω) =1

1+ iω.

Example 6.8. (Heat conduction equation with an initial temperature distribution)Assume that we have an infinitely long rod with temperature distribution in the point x at the

time t given by u(x, t), x ∈ R, t ≥ 0. Assume also that at the initial time t = 0 the temperature isdistributed according to the function f (x), i.e. u(x,0) = f (x). To determine u we must solve thefollowing initial value problem.

(6.3.1)

u′t − ku′′xx = 0, −∞ < x < ∞, t > 0,

u(x,0) = f (x), −∞ < x < ∞.

Solution: By using the Fourier transform in the same way as the Laplace transform in Example6.6 we get (after some calculations) that

u(x, t) =1√4πkt

Z∞

−∞

f (z)e−(x−z)2/4ktdz.

REMARK 13. The function G(y, t) =1√4πkt

e−(x−z)2/4kt is the so called Green’s function or the unit

impulse solution to the following problem:G′

t − kG′′xx = 0,

G(x,0) = δy(x).

Here δy(x) is the Dirac delta function (Paul Dirac), which is usually characterized by the property thatZ∞

−∞

g(x)δy(x)dx = g(y),

or alternatively formulatedg?δy(u) = g(u− y).

Green’s method: The solution to 6.3.1 is given by

u = f ?G.

Observe that δy(x) is not a function strictly speaking, but a distribution. If y = 0 we simply write δ0(x) =δ(x). I connection with applications δy(x) is usually called a unit impulse (in the point x = y). Whenconsidering a physical system, the occurrence of δ(t) should be viewed as that the system is subjected to

Page 51: Bystrom Applied Mathematics

6.3. THE FOURIER TRANSFORM 51

a short (momentary) force. (For example if you hit a pendulum with a hammer at the time 0 the systemwill be described by an equation of the type my+ay+by = cδ(t).)

Sampling

Sampling here means that we reconstruct a continuous function from a set of discrete (measured/sampled)function values.

S : f (t)→ f (nδ) , δ is the length of the sampling interval.

FIGURE 6.3.1. Samplingy

t−2δ−δ 0 δ 2δ 3δ

DEFINITION 6.1. A function f (t) is said to be band limited if the Fourier transform of f , F ( f ) onlycontains frequencies in a bounded interval, i.e. if f (ω) = 0 for |ω| ≥ c for some constant c. (Thecounterpart for periodic functions is of course that the Fourier series transform is a finite sum.)

THEOREM. The sampling theorem

A continuous band limited signal f (t) can be uniquely reconstructed from its values in a finite number ofuniformly distributed points (sampling points) if the distance between two neighboring points is at mostπ

c. In this case we have:

S−1 : f (t) =∞

∑k=−∞

f(

c

)sin(ct− kπ)

ct− kπ.

(Here the sampling is performed over the points xk =kπ

c.)

REMARK 14. In connection with the sampling theorem we should also mention two other discrete Fouriertransforms:

• The Discrete Fourier Transform (DFT).• The Fast Fourier Transform (FFT).

These transforms are very useful in many practical applications, but we do not have the time to go intomore details concerning these in this short introduction (in short one can say that practically the entireinformation society of today relies on the FFT). Some references:

• Mathematics of the DFT. A good and extensive online-book on DFT and applications,http://ccrma-www.stanford.edu/~jos/r320/.

Page 52: Bystrom Applied Mathematics

52 6. INTRODUCTION TO TRANSFORM THEORY WITH APPLICATIONS

• Fourier Transforms, DFTs, and FFTs. Another extensive text on mainly DFT and FFT withexamples and applications,http://www.me.psu.edu/me82/Learning/FFT/FFT.html.

6.4. The Z-transform

Consider discrete signals, xn∞

n=0 = x0,x1,x2, . . . , or xn∞

n=−∞= . . . ,x−2,x−1,x0,x1,x2, . . . . The

notation 1,2,

↓5,6,−1, . . .

implies that x0 = 5. The Z-transform of the sequence xn is defined by

Z : xn → X(z) =∞

∑n=0

xnz−n,

Z−1 : Z−1[X ] = xn∞

n=0 .

The Z-transform can be considered as a discrete version of the Laplace transform and therefore it is notsurprising that similar general properties hold. For example we have:

• LinearityZ [axn+byn] = aZ [xn]+bZ [yn] .

• Damping

Z [anxn] = X( z

a

).

• ConvolutionZ [xn?yn] = Z [xn] ·bZ [yn] ,

where (the discrete) convolution of two sequences is defined by

xn?yn= zn , with zn =n

∑k=1

xn−kyk, n = 0,1,2, . . . .

• DifferentiationX ′(z) = Z [0,0,−x1,−2x2,−3x3, . . .] .

• Forward shift

Z [0,x0,x1,x2,x3, . . .] = z−1X(z),

Z [0,0,x0,x1,x2,x3, . . .] = z−2X(z), etc.

• Backward shift

Z[x0,

↓x1,x2,x3, . . .

]= zX(z)− x0z,

Z[x0,x1,

↓x2,x3, . . .

]= z2X(z)− x0z2− x1z, etc.

When comparing with the formulas for the Laplace transform we se that the forward shift correspondsto time delay and backward shift corresponds to differentiation in the continuous case. Since the shift

Page 53: Bystrom Applied Mathematics

6.4. THE Z-TRANSFORM 53

operations might feel a little different as compared to their continuous counterparts we prove the secondlast equality:

Z[x0,

↓x1,x2,x3, . . .

]= x1 + x2z−1 + x3z−2 + · · ·

= x0z+ x1 + x2z−1 + x3z−2 + · · ·− x0z

= zX(z)− x0z.

Example 6.9. (Some examples on the Z-transform)

a) Unit step sequence. Let σn= 0,0,↓1,1,1, . . ., then

Z [σn] = 1+1z

+1z2 + · · ·= 1

1− 1z

=z

z−1, |z|> 1.

b) Unit impulse sequence. Let δn= . . . ,0,0,↓1,0,0, . . ., then

δn−2= . . . ,0,0,↓0,0,1,0,0 . . ., and we get

Z [δn] = 1,

Z [δn−2] =1z2 , etc.

c) Unit ramp sequence. Let rn= . . . ,0,0,↓0,1,2, . . .. Then

Z [rn] = 0+1z

+2z2 +

3z3 + · · · ,

11− z

= 1+ z+ z2 + z3 + · · · , |z|< 1,

and (differentiate both sides)1

(1− z)2 = 1+2z+3z2 + · · · ,

which givesz

(1− z)2 = z+2z2 +3z3 + · · · ,

and if we set1z

instead of z here we see that

Z [rn] =z

(z−1)2 , |z|> 1.

REMARK 15. The Z-transform is very useful for solving difference equations and for treating discretelinear systems.

REMARK 16. The discrete Fourier transform (DFT) that was mentioned earlier is a special case of theZ-transform with z = e−2kπ/N .

REMARK 17. More examples of useful transform pairs and general properties can be found in Appendix3.

Page 54: Bystrom Applied Mathematics

54 6. INTRODUCTION TO TRANSFORM THEORY WITH APPLICATIONS

6.5. Wavelet transforms

The idea of wavelets is relatively new, but it has already shown itself to be much more effective than manyother transforms, e.g. for applications in

• Signal processing, and• Image processing.

In these cases the story begins with what is now called the mother wavelet, ψ. Typically the function ψ

has the following properties:

*Z

−∞

ψ(t)dt = 0,

** ψ is well localized in both time and frequency, and in addition satisfies some further (techni-cal) conditions.

It can then be shown that the following systemψ j,k(t)

j,k=−∞,

whereψ j,k(t) = 2 j/k

ψ(2 jt− k

)are translations, dilatations and normalization of the original mother wavelet, is a (complete) orthogonalbasis. A signal f (t) can be reconstructed by using the usual (generalized) Fourier idea:

W −1 : f (t) =∞

∑j,k=−∞

⟨f ,ψ j,k

⟩ψ j,k(t).

and we also haveW : f (t)→

⟨f ,ψ j,k

⟩∞

j,k=−∞,

where the “Fourier coefficients” are given by the scalar products⟨

f ,ψ j,k⟩

=Z

−∞

f (t)ψ j,k(t)dt.

REMARK 18. A problem with the Fourier series transform is that a signal f (t) which is well localized intime results in an outgoing signal f (ω) which is dispersed in the frequency range (e.g. the Fourier seriesfor the delta function δ(t) contains all frequencies) and vice versa. The advantage with the wavelet trans-form is that you can “compromise” and obtain localization in both time and frequency simultaneously (atleast in certain cases).

REMARK 19. In Appendix 4 we have included a motivation and illustration which makes it easier tounderstand the terminology and formulas above. The motivation is obtained by a natural approximationprocedure, with the classical Haar wavelet as mother wavelet.

REMARK 20. The transform W above corresponds to the Fourier series transform, but there also existsa similar integral transform corresponding to the Fourier transform.

REMARK 21. The wavelet transforms are not so useful if you have to do all calculations by hand, butnowadays there are easily available computer programs which makes them very powerful for certainapplications. The following web adresses provide information about a few such programs:

• http://www.wavelet.org (Wavelet Digest+search engine+links+...)• http://www.finah.com/ (Many practical applications)• http://www.tyche.math.univie.ac.at/Gabor/index.html (Gabor analysis)• http://www.sm.luth.se/~grip/ (Licentiate and PhD thesis of Niklas Grip)

Some research groups in Sweden which are working with wavelets and applications (also industrially):

Page 55: Bystrom Applied Mathematics

6.7. CONTINUOUS LINEAR SYSTEMS 55

• KTH: Jan-Olov Strömberg ([email protected])• Chalmers: Jöran Bergh (math.chalmers.se)• LTU (and Uppsala): Lars-Erik Persson ([email protected])

Some books on wavelets• Wavelets, J. Bergh and F. Ekstedt and M. Lindberg ((1))• A Wavelet Tour of Signal Processing, S.G. Mallat ((6))• Introduction to Wavelets and Wavelet Transforms, A Primer, C.S. Burrus and R.A. Gopinath

and H. Guo ((2))• Foundations of Time-Frequency Analysis, K. Gröchenig ((4))

6.6. The General Transform Idea

Solution ofthe problem

Problem Transformed Problem

Solution of the trans-formedproblem

(Difficult orImpossible)

(easy)L−1-transform

(easy) Solve thetransformed problem

(easy)L-transform

f (t);an f (s);cn

When we want to solve a given problem the key to success is to chose a suitable transform for the problemin question. In this chapter we have presented some useful transforms but there are other examples in theliterature. In Appendix 5 we present some further transforms (mainly taken from (3)). In most caseswe have also included a formula for the inverse transform and the corresponding useful tables are alsoincluded.

6.7. Continuous Linear Systems

FIGURE 6.7.1. A schematic picture of a continuous linear system

x(t) y(t)

Page 56: Bystrom Applied Mathematics

56 6. INTRODUCTION TO TRANSFORM THEORY WITH APPLICATIONS

Many linear system, e.g. in technical applications, can be described by a linear differential equation:

(6.7.1) any(n)(t)+an−1y(n−1)(t)+ · · ·+a0y(t) = bkx(k)(t)+bk−1x(k−1)(t)+ · · ·+b0x(t),

together with initial valuesy(0) = y′(0) = · · ·= y(n)(0) = 0.

Set Y (s) = Ly(t)(s) and X(s) = Lx(t)(s) and transform (6.7.1). Using the initial values we get:

(ansn +an−1sn−1 + · · ·a0)Y (s) = (bksk +bk−1sk−1 + · · ·b0)X(s),

which givesY (s)X(s)

=bksk +bk−1sk−1 + · · ·b0

ansn +an−1sn−1 + · · ·a0.

We define the Transfer function, H(s), by Y (s) = H(s)X(s), i.e.

H(s) =bksk +bk−1sk−1 + · · ·b0

ansn +an−1sn−1 + · · ·a0.

For every incoming signal (with transform X(s)) we get the corresponding solution (outgoing signal)Y (s) = H(s)X(s), and if we invert the transform we see that

y(t) = h(t)? x(t).

How do we find H(s)?

For a unit impulse δ(t) we have

L δ(t)=Z

0δ(t)e−stdt = e0 = 1.

This implies that if we send in a unit impulse the system will respond in the following way:y(t) = h(t)?δ(t) = h(t),Y (s) = H(s).

In technical applications h(t) is usually called the unit impulse solution.

Example 6.10. (Driven harmonic oscillator)

FIGURE 6.7.2. Hanging spring

m

y(t)

x(t)

Page 57: Bystrom Applied Mathematics

6.7. CONTINUOUS LINEAR SYSTEMS 57

We consider the system illustrated in Figure 6.7.2, i.e. a weight m which is attached to the endof vertically suspended spring. The weight has an equilibrium point relative to a moving referencesystem (e.g. the point of attachment for the spring), and the distance from this equilibrium point isdenoted by y(t). The movement of the reference system (relative to some absolute reference system)is denoted by x(t).

(A concrete example of such a system with a moving reference system is obtained attaching thespring to a wooden board and then move that board up and down.)

It can be shown that the system can be described by the following linear differential equation:

my(t)+ cy(t)+ ky(t) = cx(t)+ax(t).

If we apply the Laplace transform to both sides of this equation we get(ms2 + cs+ k

)Y (s) = (cs+a)X(s),

and the transfer function isH(s) =

cs+ams2 + cs+ k

.

Suppose, for example, that we have the incoming signal x(t) = sinωt and that m = 1.00kg, c = 0,

k = a = 1000N/m and ω = 2π. Then X(s) =ω

s2 +ω2 ,

Y (s) = H(s)X(s) =cs+a

ms2 + cs+ k· ω

s2 +ω2 ,

and if we insert the values we get

Y (s) =1000

s2 +1000· 2π

s2 +4π2 =D

s2 +4π2 −D

s2 +1000,

where D =2000π

1000−4π2 . Thus

y(t) =D2π

sin2πt− D√1000

sin√

1000t ≈ 1.04sin6.28t−0.207sin31.6t.

It is sometimes also useful to compute the unit step solution, i.e. the reaction of the system on theincoming signal

θ(t) =

1, t > 0,

0, t ≤ 0.

We know that Lθ(t)=1s

hence Y (s) =1s·H(s).

Example 6.11. A system has the transfer function

H(s) =3

(s+1)(s+3).

Compute the unit step solution!Solution: We know that

Y (s) =1s

H(s) =3

s(s+1)(s+3)=

1s− 3

2(s+1)+

12(s+3)

,

and hence y(t) = 1− 32

e−t +12

e−3t for y≥ 0 (and y(t) = 0 for y < 0), i.e.

y(t) =(

1− 32

e−t +12

e−3t)

θ(t),

Page 58: Bystrom Applied Mathematics

58 6. INTRODUCTION TO TRANSFORM THEORY WITH APPLICATIONS

see Fig. 6.7.3.

FIGURE 6.7.3.y

t

1

1

y(t) =(1− 3

2 e−t + 12 e−3t

)θ(t)

6.8. Discrete Linear Systems

FIGURE 6.8.1. A schematic image of a discrete linear system

xn yn

A discrete linear system can be described by a linear difference equation:

(6.8.1) a0yn +a1yn−1 + · · ·+amyn−m = b0xn +b1xn−1 + · · ·+bkxn−k,

alternatively this equation can be formulated as

ak?yk= bk?xk .

Let Y (s) = Z [yn] (z), and X(s) = Z [xn] (z). A Z-transform of (6.8.1) gives the equation(a0 +a1

1z

+ · · ·+am1zm

)Y (z) =

(b0 +b1

1z

+ · · ·+bk1zk

)X(z),

i.e.Y (z)X(z)

=b0 +b1

1z + · · ·+bk

1zk

a0 +a11z + · · ·+am

1zm

,

and in the same way as before we can define a transfer function, H(z), by

H(z) =b0 +b1

1z + · · ·+bk

1zk

a0 +a11z + · · ·+am

1zm

.

For every incoming signal (with Z-transform X(z)) we get the solution (outgoing signal)

Y (z) = H(z)X(z),

which givesyn= hn?xn.

How do we find H(z)?

Page 59: Bystrom Applied Mathematics

6.9. FURTHER EXAMPLES 59

For the unit impulse sequence, δn, we have Z [yn] = 1+0 · 1z

+ · · ·= 1, which implies that the system

will respond in the following way:

yn= hn?δn= hn,

i.e.Y (z) = H(z).

In technical applications hn is called the unit impulse response.

Example 6.12. A linear discrete system has the transfer function H(z) =1

z+0.8. Compute the unit

step response!Solution: The unit step sequence is σn= 1,1,1, . . ., and we have

X(z) = Z [σn] =z

z−1,

and thus we get

Y (z) = H(z)X(z) =z

(z−1)(z+0.8)=

5z9

[1

(z−1)− 1

(z+0.8)

].

The inverse transform gives the answer, Z−1 [Y (z)] = yn, where

yn =59

(1− (−0.8)n) .

6.9. Further Examples

Example 6.13. Compute the integralZ

0

sinaxx(1+ x2)

dx for a > 0.

Solution: Consider

f (t) =Z

0

sin txx(1+ x2)

dx, t > 0

and its Laplace transform

L f (t)= f (s) =Z

0

(Z∞

0

sin txx(1+ x2)

dx)

e−stdt

=Z

0

(Z∞

0sin(tx)e−stdt

)1

x(1+ x2)dx

=Z

0L(sin tx)(s)

1x(1+ x2)

dx

=Z

0

x(x2 + s2)

1x(1+ x2)

dx

=1

s2−1

Z∞

0

11+ x2 −

1x2 + s2 dx

=1

s2−1

2− π

21s

)=

π

2

(1s− 1

s+1

).

By applying the inverse transform we see that

f (t) =π

2(1− e−t) ,

Page 60: Bystrom Applied Mathematics

60 6. INTRODUCTION TO TRANSFORM THEORY WITH APPLICATIONS

i.e. Z∞

0

sinaxx(1+ x2)

dx =π

2(1− e−a) , a > 0.

Example 6.14. (The Dirichlet problem (Lejeune Dirichlet) for a half-plane)Solve

(6.9.1)

u′′xx +u′′yy = 0,−∞ < x < ∞, y≥ 0,

u(x,0) = f (x),u(x,y)→ 0, when |x| → ∞, y→ ∞.

Solution: We start by applying the Fourier transform (with respect to x) to u. We denote thisoperation with Fx u= F x 7→ u(x,y) and we get

U = U(ω,y) = Fx u(ω) =Z

−∞

u(x,y)e−iωxdx,

and (6.9.1) is then transformed intod2Udy2 −ω

2U = 0,

U(ω,0) = f (ω),U(ω,y)→ 0, when y → ∞.

The solution to this transformed problem is given by

U(ω,y) = f (ω)e−|ω|y.

If we use the convolution property (F ( f ?g) = F ( f )F (g)) we see that

u(x,y) = F −1(U) = F −1

f (ω)e−|ω|y

= F −1 F ( f )F (gy)Z∞

−∞

f (z)gy(x− z)dz,

where gy(x) is the inverse Fourier transform of e−|ω|y, i.e.

gy(x) =1π

yx2 + y2 .

Hence the wanted solution is

u(x,y) =yπ

Z∞

−∞

f (z)(x− z)2 + y2 dz, y > 0.

(This is the famous Poisson integral formula).

Page 61: Bystrom Applied Mathematics

6.10. EXERCISES 61

6.10. Exercises

6.1. [S] a) Compute the inverse Laplace transform of

F(s) = e−2s 1s2 +8s+15

.

b) Find the unit step response to a system with the transfer function

H(s) =3

(s+1)(s+3).

6.2.* Use the Laplace transform to solve:u′t(x, t) = u′′xx(x, t), 0≤ x < 1, t > 0,

u(0, t) = u(1, t) = 1, t > 0,

u(x,0) = 1+ sinπx, 0 < x < 1.

6.3. [S] Use the Laplace transform to solve:

y′′+2y′+2y = u(t), y(0) = y′(0) = 0.

a) When u(t) = θ(t),

b) when u(t) = ρ(t) =

0, t ≤ 0,

1 t > 0.

6.4. Use Fourier series to solve:u′tt(x, t) = u′′xx(x, t), 0 < x < 1, t > 0,

u(0, t) = u(1, t) = 0, t > 0,

u(x,0) = sinπx, u′t(x,0) = sin3πx, 0 < x < 1.

6.5. [S] Compute the Fourier transform of the signal

f (t) = θ(t−3)e−(t−3).

6.6.*a) Prove the convolution formula, F f ?g= F fF g, for the Fourier transform.b) Define f (t) = θ(t)e−t , let f1(t) = f (t) and for n≥ 1 let fn(t) = ( fn−1 ? f )(t). Compute

fn(t).

6.7. [S] Compute the Fourier transform, F (ω), of

f (t) =

sinω0t, |t| ≤ a,

0 else.

Page 62: Bystrom Applied Mathematics

62 6. INTRODUCTION TO TRANSFORM THEORY WITH APPLICATIONS

6.8. Compute the Fourier transform, F (ω), of

f (t) =

cosω0t, |t| ≤ a,

0 else.

6.9. [S] Solve the following difference equation

y(n+2)− y(n+1)−2y(n) = 0, y(0) = 2, y(1) = 1.

6.10. Determine the sequence y(n), n≥ 0 which has the Z-transform Y (z) =1

z2 +1.

6.11. [S] Let f (x) = e−|x| and compute the convolutoin product ( f ? f )(x).

6.12. Use the function e−|t| toa) Compute the Fourier transform of f (t) =

11+ t2 .

b) Compute the Fourier transform of g(t) =α

α2 + t2 , α > 0.

c) Compute the Fourier transform of h(t) =t2−α2

(α2 + t2)2 , α 6= 0.

6.13. [S] Use the Laplace transform to solve the following system of differential equationsx′−2x+3y = 0y′− y+2x = 0

,

x(0) = 8y(0) = 3.

6.14.a) Define the Haar-scaling function ϕ and the Haar-wavelet function ψ.b) Illustrate ψ(t−2),ψ(4t),ψ(4t−1),ψ(4t−3) and 2ψ(4t−2) in the ty-plane.c) Explain how a signal f (t) can be represented by a system of basis functions constructed

by translating, dilating and normalizing the Haar wavelet.

6.15. [S] A continuous system has the transfer function

H(s) =1

1+ sT.

Compute the response, y(t) to the signal x(t) = sinωt.

6.16.* A discrete linear system has the transfer function

H(z) =1

2z+1.

Compute the unit impulse response.

6.17. [S] A discrete linear system has the unit impulse answer 0.7n. Compute the system’s responseto the signal an, a 6= 0.7.

Page 63: Bystrom Applied Mathematics

6.10. EXERCISES 63

6.18. Let f : R → R be a continuous function such that∞

∑−∞

f (n) is absolutely convergent, and that

there is a continuous function g(x) =∞

∑n=−∞

f (2πn+ x), x ∈ [−π,π].

a) Show that g(x) has the period 2π.b) Compute the Fourier series for g(x) and use this to show the following formula (the

Poisson summation formula):∞

∑n=−∞

f (n) = 2π

∑n=−∞

f (2πn).

c) Use the Fourier series for g from b) to show that if f (x) = 0 for |x| ≥ π then we havethe following formula

f (x) =

1

∑m=−∞

f (m)eimx, |x| ≤ π,

0, |x| ≥ π.

6.19. Use the previous exercise to show a version of the Sampling theorem. Suppose that f : R→ Chas a Fourier transform and is band-limited, i.e. f (ω) = 0 for |ω| ≥ c. Show that f is uniquely

determined by its values at (for example) the sequencekπ

c, k ∈Z according to the following formula

f (x) =∞

∑−∞

f(

c

)1

cx−mπsin(ct−nπ) .

6.20. [S] The dispersion of smoke from a smoke pipe with the height h as the wind direction and andwind speed is constant can be modelled by the following equation

v∂c∂x

= d(

∂2c∂x2 +

∂2c∂z2

),

where c(x,z) is the concentration of smoke at the height z conunted from the base of the pipe andthe distance x from the pipe in the direction of the wind. d is a diffusion coefficient and v is the windspeed (in m/s). If we also assume that the rate of change of c in the x-direction is much smaller thanthe rate of change in the z -direction we get the simplified equation

∂c∂x

= k(

∂2c∂z2

).

The rate of change in the concentration at ground level and infinitely high up can be viewed asnegligible which gives us the boundary values

∂c∂z

(x,0) = limz→∞

∂c∂z

(x,z) = 0.

The concentration of smoke can also be neglected infinitely far away in the x-direction. At thelocation of the pipe the concentration is 0 except for at the height h where the smoke drifts out of itwith a flowrate qkgm−2s−1. Thus we also get the boundary values:

c(0,z) =qv

δ(z−h) , limx→∞

c(x,z) = 0.

a) Rewrite the problem using dimensionless quantities.

Page 64: Bystrom Applied Mathematics

64 6. INTRODUCTION TO TRANSFORM THEORY WITH APPLICATIONS

b) Use the Laplace transform to find the concentration at ground level, c(x,0). (Hint: split intotwo cases, z ≷ 1 and observe that the derivative of the Laplace transform of c is not continuouseverywhere.)

c) At which range from the pipe is the concentrationen at ground level highest?

Page 65: Bystrom Applied Mathematics

CHAPTER 7

Introduktion till Hamiltonsk teori and isoperimetriska problem

65

Page 66: Bystrom Applied Mathematics
Page 67: Bystrom Applied Mathematics

CHAPTER 8

Integral Equations

8.1. Introduction

Integral equations appears in most applied areas and are as important as differential equations. In fact,as we will see, many problems can be formulated (equivalently) as either a differential or an integralequation.

Example 8.1. Examples of integral equations are:

(a) y(x) = x−Z x

0(x− t)y(t)dt.

(b) y(x) = f (x)+λ

Z x

0k(x− t)y(t)dt, where f (x) and k(x) are specified functions.

(c) y(x) = λ

Z 1

0k(x, t)y(t)dt, where

k(x, t) =

x(1− t), 0 ≤ x ≤ t,t(1− x), t ≤ x ≤ 1.

(d) y(x) = λ

Z 1

0(1−3xt)y(t)dt.

(e) y(x) = f (x)+λ

Z 1

0(1−3xt)y(t)dt.

♦A general integral equation for an unknown function y(x) can be written as

f (x) = a(x)y(x)+Z b

ak(x, t)y(t)dt,

where f (x),a(x) and k(x, t) are given functions (the function f (x) corresponds to an external force). Thefunction k(x, t) is called the kernel. There are different types of integral equations. We can classify agiven equation in the following three ways.

• The equation is said to be of the First kind if the unknown function only appears under theintegral sign, i.e. if a(x)≡ 0, and otherwise of the Second kind.

• The equation is said to be a Fredholm equation if the integration limits a and b are constants,and a Volterra equation if a and b are functions of x.

• The equation are said to be homogeneous if f (x)≡ 0 otherwise inhomogeneous.

Example 8.2. A Fredholm equation (Ivar Fredholm):Z b

ak(x, t)y(t)dt +a(x)y(x) = f (x).

67

Page 68: Bystrom Applied Mathematics

68 8. INTEGRAL EQUATIONS

Example 8.3. A Volterra equation (Vito Volterra):Z x

ak(x, t)y(t)dt +a(x)y(x) = f (x).

Example 8.4. The storekeeper’s control problem.To use the storage space optimally a storekeeper want to keep the stores stock of goods constant. Itcan be shown that to manage this there is actually an integral equation that has to be solved. Assumethat we have the following definitions:

a = number of products in stock at time t = 0,

k(t) = remainder of products in stock (in percent) at the time t,

u(t) = the velocity (products/time unit) with which new products are purchased,

u(τ)∆τ = the amount of purchased products during the time interval ∆τ.

The total amount of products at in stock at the time t is then given by:

ak(t)+Z t

0k(t− τ)u(τ)dτ,

and the amount of products in stock is constant if, for some constant c0, we have

ak(t)+Z t

0k(t− τ)u(τ)dτ = c0.

To find out how fast we need to purchase new products (i.e. u(t)) to keep the stock constant we thus needto solve the above Volterra equation of the first kind.

Example 8.5. (Potential)Let V (x,y,z) be the potential in the point (x,y,z) coming from a mass distribution ρ(ξ,η,ζ) in

Ω (see Fig. 8.1.1). Then

V (x,y,z) =−GZ Z Z

Ω

ρ(ξ,η,ζ)r

dξdηdζ.

The inverse problem, to determine ρ from a given potential V , gives rise to an integrated equation.Furthermore ρ and V are related via Poisson’s equation

∇2V = 4πGρ.

FIGURE 8.1.1. A potential from a mass distribution

r(x,y,z)

(ξ,η,ζ)

Ω

Page 69: Bystrom Applied Mathematics

8.2. INTEGRAL EQUATIONS OF CONVOLUTION TYPE 69

8.2. Integral Equations of Convolution Type

We will now consider integral equations of the following type:

y(x) = f (x)+Z x

0k(x− t)y(t)dt = f (x)+ k ? y(x),

where k ? y(x) is the convolution product of k and y (see p. 45). The most important technique whenworking with convolutions is the Laplace transform (see sec. 6.2).

Example 8.6. Solve the equation

y(x) = x−Z x

0(x− t)y(t)dt.

Solution: The equation is of convolution type with f (x) = x and k(x) = x. We observe that

L(x) =1s2 and Laplace transforming the equation gives us

L [y] =1s2 −L [x? y] =

1s2 −L [x]L [y] =

1s2 −

1s2 L [y] , i.e.

L [y] =1

1+ s2 ,

and thus y(x) = L−1[

11+ s2

]= sinx.

Answer: y(x) = sinx.

Example 8.7. Solve the equation

y(x) = f (x)+λ

Z x

0k(x− t)y(t)dt,

where f (x) and k(x) are fixed, given functions.Solution: The equation is of convolution type, and applying the Laplace transform yields

L [y] = L [ f ]+λL [k]L [y] , i.e.

L [y] =L [ f ]

1−λL [k].

Answer: y(x) = L−1[

L [ f ]1−λL [k]

].

Page 70: Bystrom Applied Mathematics

70 8. INTEGRAL EQUATIONS

8.3. The Connection Between Differential and Integral Equations (First-Order)

Example 8.8. Consider the differential equation (initial value problem)

(8.3.1)

y′(x) = f (x,y),y(x0) = y0.

By integrating from x0 to x we obtainZ x

x0

y′(t)dt =Z x

x0

f (t,y(t))dt,

i.e.

(8.3.2) y(x) = y0 +Z x

x0

f (t,y(t))dt.

On the other hand, if (8.3.2) holds we see that y(x0) = y0, and

y′(x) = f (x,y(x)),

which implies that (8.3.1) holds! Thus the problems (8.3.1) and (8.3.2) are equivalent.

♦In fact, it is possible to formulate many initial and boundary value problems as integral equations andvice versa. In general we have:

Initial value problem

Dynamical system

⇒ The Volterra equation,

Boundary value problem ⇒ The Fredholm equation.

Picard’s method (Emile Picard)

Problem: Solve the initial value problem y′ = f (x,y),y(x0) = A.

Or equivalently, solve the integral equation :

y(x) = A+Z x

x0

f (t,y(t))dt.

We will solve this integral equation by constructing a sequence of successive approximations to y(x).

First choose an initial approximation, y0(x) (it is common to use y0(x) = y(x0)), then define the sequence:y1(x), y2(x), . . . ,yn(x) by

y1(x) = A+Z x

x0

f (t,y0(t))dt,

y2(x) = A+Z x

x0

f (t,y1(t))dt,

......

yn(x) = A+Z x

x0

f (t,yn−1(t))dt.

Our hope is now thaty(x)≈ yn(x).

Page 71: Bystrom Applied Mathematics

8.3. THE CONNECTION BETWEEN DIFFERENTIAL AND INTEGRAL EQUATIONS (FIRST-ORDER) 71

By a famous theorem (Picard’s theorem) we know that under certain conditions on f (x,y) we have

y(x) = limn→∞

yn(x).

Example 8.9. Solve the equation y′(x) = 2x(1+ y),y(0) = 0.

Solution: (With Picard’s method) We have the integral equation

y(x) =Z x

02t(1+ y(t))dt,

and as the initial approximation we take y0(x)≡ 0. We then get

y1(x) =Z x

02t(1+ y0(t))dt =

Z x

02t(1+0)dt =

Z x

02tdt = x2,

y2(x) =Z x

02t(1+ y1(t))dt =

Z x

02t(1+ t2)dt =

Z x

02t +2t3dt = x2 +

12

x4,

y3(x) =Z x

02t(1+ t2 +

12

t4)dt = x2 +12

x4 +x6

6,

......

yn(x) = x2 +x4

2+

x6

6+ · · ·+ x2n

n!.

We see thatlimn→∞

yn(x) = ex2 −1.

REMARK 22. Observe that y(x) = ex2 −1 is the exact solution to the equation. (Show this!)

REMARK 23. In case one can can guess a general formula for yn(x) that formula can often be verified by,for example, induction.

LEMMA 8.1. If f (x) is continuous for x ≥ a then:Z x

a

Z s

af (y)dyds =

Z x

af (y)(x− y)dy.

PROOF. Let F(s) =Z s

af (y)dy. Then we see that:Z x

a

Z s

af (y)dyds =

Z x

aF(s)ds =

Z x

a1 ·F(s)ds

integration by parts = [sF(s)]xa−Z x

asF ′(s)ds

= xF(x)−aF(a)−Z x

as f (s)ds

= xZ x

af (y)dy−0−

Z s

ay f (y)dy

=Z s

af (y)(x− y)dy.

Page 72: Bystrom Applied Mathematics

72 8. INTEGRAL EQUATIONS

8.4. The Connection Between Differential and Integral Equations (Second-Order)

Example 8.10. Assume that we want to solve the initial value problem

(8.4.1)

u′′(x)+u(x)q(x) = f (x), x > a,

u(a) = u0, u′(a) = u1.

We integrate the equation from a to x and get

u′(x)−u1 =Z x

a[ f (y)−q(y)u(y)]dy,

and another integration yieldsZ x

au′(s)ds =

Z x

au1ds+

Z x

a

Z s

a[ f (y)−q(y)u(y)]dyds.

By Lemma 8.1 we get

u(x)−u0 = u1(x−a)+Z x

a[ f (y)−q(y)u(y)] (x− y)dy,

which we can write as

u(x) = u0 +u1(x−a)+Z x

af (y)(x− y)dy+

Z x

aq(y)(y− x)u(y)dy

= F(x)+Z x

ak(x,y)u(y)dy,

where

F(x) = u0 +u1(x−a)+Z x

af (y)(x− y)dy, and

k(x,y) = q(y)(y− x).

This implies that (8.4.1) can be written as Volterra equation:

u(x) = F(x)+Z x

ak(x,y)u(y)dy.

REMARK 24. Example 8.10 shows how an initial value problem can be transformed to an integral equa-tion. In example 8.12 below we will show that an integral equation can be transformed to a differentialequation, but first we need a lemma.

LEMMA 8.2. (Leibniz’s formula)

ddt

(Z b(t)

a(t)u(x, t)dx

)=

Z b(t)

a(t)u′t(x, t)dx+u(b(t), t)b′(t)−u(a(t), t)a′(t).

PROOF. Let

G(t,a,b) =Z b

au(x, t)dx,

where a = a(t),b = b(t).

Page 73: Bystrom Applied Mathematics

8.4. THE CONNECTION BETWEEN DIFFERENTIAL AND INTEGRAL EQUATIONS (SECOND-ORDER) 73

The chain rule now gives

ddt

G = G′t(t,a,b)+G′

a(t,a,b)a′(t)+G′b(t,a,b)b′(t)

=Z b

au′t(x, t)dx−u(a(t), t)a′(t)+u(b(t), t)b′(t).

Example 8.11. Let

F(t) =Z t2

√t

sin(xt)dx.

Then

F ′(t) =Z t2

√t

cos(xt)xdx+ sin t3 ·2t− sin t32 · 1

2√

t.

Example 8.12. Consider the equation

(*) y(x) = λ

Z 1

0k(x, t)y(t)dt,

where

k(x, t) =

x(1− t), x ≤ t ≤ 1,

t(1− x), 0 ≤ t ≤ x.

I.e. we have

y(x) = λ

Z x

0t(1− x)y(t)dt +λ

Z 1

xx(1− t)y(t)dt.

If we differentiate y(x) we get (using Leibniz’s formula)

y′(x) = λ

Z x

0−ty(t)dt +λx(1− x)y(x)+λ

Z 1

x(1− t)y(t)dt−λx(1− x)y(t)

= λ

Z x

0−ty(t)dt +λ

Z 1

x(1− t)y(t)dt,

and one further differentiation gives us

y′′(x) =−λxy(x)−λ(1− x)y(x) =−λy(x).

Furthermore we see that y(0) = y(1) = 0. Thus the integral equation (*) is equivalent to the boundaryvalue problem

y′′(x)+λy(x) = 0y(0) = y(1) = 0.

Page 74: Bystrom Applied Mathematics

74 8. INTEGRAL EQUATIONS

8.5. A General Technique to Solve a Fredholm Integral Equation of the Second Kind

We consider the equation:

(8.5.1) y(x) = f (x)+λ

Z b

ak(x,ξ)y(ξ)dξ.

Assume that the kernel k(x,ξ) is separable, which means that it can be written as

k(x,ξ) =n

∑j=1

α j(x)β j(ξ).

If we insert this into (8.5.1) we get

y(x) = f (x)+λ

n

∑j=1

α j(x)Z b

aβ j(ξ)y(ξ)dξ

= f (x)+λ

n

∑j=1

c jα j(x).(8.5.2)

Observe that y(x) as in (8.5.2) gives us a solution to (8.5.1) as soon as we know the coefficients c j. Howcan we find c j?

Multiplying (8.5.2) with βi(x) and integrating gives usZ b

ay(x)βi(x)dx =

Z b

af (x)βi(x)dx+λ

n

∑j=1

c j

Z b

aα j(x)βi(x)dx,

or equivalently

ci = fi +λ

n

∑j=1

c jai j.

Thus we have a linear system with n unknown variables: c1, . . . ,cn, and n equations ci = fi +λ

n

∑j=1

c jai j,

1≤ i≤ n. In matrix form we can write this as

(I−λA)~c = ~f ,

where

A =

a11 · · · a1n

.... . .

...

an1 · · · ann

, ~f =

f1...

fn

, and ~c =

c1...

cn

.

Some well-known facts from linear algebra

Suppose that we have a linear system of equations

(*) B~x = ~b.

Depending on whether the right hand side~b is the zero vector or not we get the following alternatives.

1. If~b =~0 then:a) detB 6= 0 ⇒~x =~0,b) detB = 0 ⇒ (*) has an infinite number of solutions~x.

2. If~b 6= 0 then:c) detB 6= 0 ⇒ (*) has a unique solution~x,d) detB = 0 ⇒ (*) has no solution or an infinite number of solutions.

Page 75: Bystrom Applied Mathematics

8.5. A GENERAL TECHNIQUE TO SOLVE A FREDHOLM INTEGRAL EQUATION OF THE SECOND KIND 75

The famous Fredholm Alternative Theorem is simply a reformulation of the fact stated above to the settingof a Fredholm equation.

Example 8.13. Consider the equation

(*) y(x) = λ

Z 1

0(1−3xξ)y(ξ)dξ.

Here we havek(x,ξ) = 1−3xξ = α1(x)β1(ξ)+α2(x)β2(ξ),

i.e. α1(x) = 1, α2(x) =−3x,β1(ξ) = 1, β2(ξ) = ξ.

We thus get

A =

Z 1

0β1(x)α1(x)dx

Z 1

0β1(x)α2(x)dxZ 1

0β2(x)α1(x)dx

Z 1

0β2(x)α2(x)dx

=

1 −32

12

−1

,

and

det(I−λA) = det

1−λ λ32

−λ12

1+λ

= 1− λ2

4= 0

⇔λ = ±2.

The Fredholm Alternative Theorem tells us that we have the following alternatives:

λ 6=±2 then (*) has only the trivial solution y(x) = 0, andλ = 2 then the system (I−λA)~c =~0 looks like

−c1 +3c2 = 0,

−c1 +3c2 = 0,

which has an infinite number of solutions: c2 = a and c3 = 3a, for any constant a. From(8.5.2) we see that the solutions y(x) are

y(x) = 0+2(3a ·1+a(−3x)) = 6a(1− x)= b(1− x).

We conclude that every function y(x) = b(1− x) is a solution of (*).λ =−2 Then the system (I−λA)~c =~0 looks like

3c1−3c2 = 0,

c1− c2 = 0,

which has an infinite number of solutions c1 = c2 = a for any constant a. From (8.5.2) weonce again see that the solutions y(x) are

y(x) = 0−2(a ·1+a(−3x)) =−2a(1−3x)= b(1−3x),

and we see that every function y(x) of the form y(x) = b(1−3x) is a solution of (*).

Page 76: Bystrom Applied Mathematics

76 8. INTEGRAL EQUATIONS

As always when solving a differential or integral equation one should test the solutions by inserting theminto the equation in question. If we insert y(x) = 1− x and y(x) = 1−3x in (*) we can confirm that theyare indeed solutions corresponding to λ = 2 and −2 respectively.

Example 8.14. Consider the equation

(*) y(x) = f (x)+λ

Z 1

0(1−3xξ)y(ξ)dξ.

Note that the basis functions α j and β j and hence the matrix A is the same as in the previous example,and hence det(I−λA) = 0 ⇔ λ = ±2. The Fredholm Alternative Theorem gives us the followingpossibilities:

1Z 1

0f (x) ·1dx 6= 0 or

Z 1

0f (x) · xdx 6= 0 and λ 6=±2. Then (∗) has a unique solution

y(x) = f (x)+λ

2

∑i=1

ciαi(x) = f (x)+λc1−3λc2x,

where c1 and c2 is (the unique) solution of the system(1−λ)c1 +

32

λc2 =Z 1

0f (x)dx,

−12

λc1 +(1+λ)c2 =Z 1

0x f (x)dx.

2Z 1

0f (x) ·1dx 6= 0 or

Z 1

0f (x) · xdx 6= 0 and λ =−2. We get the system

3c1−3c2 =Z 1

0f (x)dx,

c1− c2 =Z 1

0x f (x)dx.

Since the left hand side of the topmost equation is a multiple of the left hand side of the

bottom equation there are no solutions ifZ 1

0x f (x)dx 6= 3

Z 1

0f (x)dx, and there are an infinite

number of solutions ifZ 1

0x f (x)dx = 3

Z 1

0f (x)dx. Let 3c2 = a, then 3c1 = a +

Z 1

0f (x)dx,

which gives the solutions

y(x) = f (x)−2 [c1α1(x)+ c2α2(x)]

= f (x)−2[(

a3

+13

Z 1

0f (x)dx

)+

a3

(−3x)]

= f (x)− 23

Z 1

0f (x)dx−a

(23−2x

).

3Z 1

0f (x) ·1dx 6= 0 or

Z 1

0f (x) · xdx 6= 0 and λ = 2. We get the system−c1 +3c2 =

Z 1

0f (x)dx,

−c1 +3c2 =Z 1

0x f (x)dx.

Page 77: Bystrom Applied Mathematics

8.6. INTEGRAL EQUATIONS WITH SYMMETRICAL KERNELS 77

The left hand sides are identical so there are no solutions ifZ 1

0x f (x)dx 6=

Z 1

0f (x)dx, other-

wise we have an infinite number of solutions. Let c2 = a, c1 = 3a−Z 1

0f (x)dx, then we get

the solution

y(x) = f (x)+2[

3a−Z 1

0f (x)dx+a(−3x)

]= f (x)−2

Z 1

0f (x)dx+6a(1− x).

4Z 1

0x f (x)dx =

Z 1

0f (x)dx = 0, λ 6=±2. Then y(x) = f (x) is the unique solution.

5Z 1

0x f (x)dx =

Z 1

0f (x)dx = 0, λ =−2. We get the system

3c1−3c2 = 0,

c1− c2 = 0,⇔ c1 = c2 = a,

for an arbitrary constant a.We thus get an infinite number of solutions of the form

y(x) = f (x)−2 [a ·1+a(−3x)]= f (x)−2a(1−3x) .

6Z 1

0x f (x)dx =

Z 1

0f (x)dx = 0, λ = 2. We get the system

−c1 +3c2 = 0,

c1 +3c2 = 0,⇔

c2 = a,

c1 = 3a,

for an arbitrary constant a. We thus get an infinite number of solutions of the form

y(x) = f (x)+2 [3a ·1+a(−3x)]= f (x)+6a(1− x) .

8.6. Integral Equations with Symmetrical Kernels

Consider the equation

(*) y(x) = λ

Z b

ak(x,ξ)y(ξ)dξ,

wherek(x,ξ) = k(ξ,x)

is real and continuous. We will now see how we can adapt the theory from the previous sections to thecase when k(x,ξ) is not separable but instead is symmetrical, i.e. k(x,ξ) = k(ξ,x). If λ and y(x) satisfy(*) we say that λ is an eigenvalue and y(x) is the corresponding eigenfunction. We have the followingtheorem.

THEOREM 8.3. The following holds for eigenvalues and eigenfunctions of (*):

(i) if λm and λn are eigenvalues with corresponding eigenfunctions ym(x) and yn(x) then:

λn 6= λm ⇒Z b

aym(x)yn(x)dx = 0.

I.e. eigenfunctions corresponding to different eigenvalues are orthogonal (ym(x)⊥ yn(x)).(ii) The eigenvalues λ are real.

Page 78: Bystrom Applied Mathematics

78 8. INTEGRAL EQUATIONS

(iii) If the kernel k is not separable then there are infinitely many eigenvalues:

λ1,λ2, . . . ,λn, . . . ,

with 0 < |λ1| ≤ |λ2| ≤ · · · and limn→∞

|λn|= ∞.

(iv) To every eigenvalue corresponds at most a finite number of linearly independenteigenfunctions.

PROOF. (i). We have

ym(x) = λm

Z b

ak(x,ξ)ym(ξ)dξ, and

yn(x) = λn

Z b

ak(x,ξ)yn(ξ)dξ,

which gives Z b

aym(x)yn(x)dx = λm

Z b

ayn(x)

Z b

ak(x,ξ)ym(ξ)dξdx

= λm

Z b

a

(Z b

ayn(x)k(k,ξ)dx

)ym(ξ)dξ

[k(x,ξ) = k(ξ,x)] = λm

Z b

a

(Z b

ak(ξ,x)yn(x)dx

)ym(ξ)dξ

= λm

Z b

a

(1λn

yn(ξ))

ym(ξ)dξ

=λm

λn

Z b

aym(ξ)yn(ξ)dξ.

We conclude that (1− λm

λn

)Z b

aym(x)yn(x)dx = 0,

and if λm 6= λn then we must haveZ b

aym(x)yn(x)dx = 0.

Example 8.15. Solve the equation

y(x) = λ

Z 1

0k(x,ξ)y(ξ)dξ,

where

k(x,ξ) =

x(1−ξ), x ≤ t ≤ 1,

ξ(1− x), 0≤ ξ≤ x.From Example 8.12 we know that the integral equation is equivalent to

y′′(x)+λy(x) = 0,

y(0) = y(1) = 0.

If λ > 0 we have the solutions y(x) = c1 cos√

λx + c2 sin√

λx, y(0) = 0 ⇒ c1 = 0 and y(1) = 0⇒c2 sin

√λ = 0, hence either c2 = 0 (which only gives the trivial solution y ≡ 0) or

√λ = nπ for

some integer n, i.e. λ = n2π

2. Thus, the eigenvalues are

λn = n2π

2,

and the corresponding eigenfunctions are

yn(x) = sin(nπx).

Page 79: Bystrom Applied Mathematics

8.7. HILBERT-SCHMIDT THEORY TO SOLVE A FREDHOLM EQUATION 79

Observe that if m 6= n it is well-known thatZ 1

0sin(nπx)sin(mπx)dx = 0.

8.7. Hilbert-Schmidt Theory to Solve a Fredholm Equation

We will now describe a method for solving a Fredholm Equation of the type:

(*) y(x) = f (x)+λ

Z b

ak(x, t)y(t)dt.

LEMMA 8.4. (Hilbert-Schmidt’s Lemma) Assume that there is a continuous function g(x) such that

F(x) =Z b

ak(x, t)g(t)dt,

where k is symmetrical (i.e. k(x, t) = k(t,x)). Then F(x) can be expanded in a Fourier series as

F(x) =∞

∑n=1

cnyn(x),

where yn(x) are the normalized eigenfunctions to the equation

y(x) = λ

Z b

ak(x, t)y(t)dt.

(Cf. Thm. 8.3.)

THEOREM 8.5. (The Hilbert-Schmidt Theorem) Assume that λ is not an eigenvalue of (*) and that y(x)is a solution to (*). Then

y(x) = f (x)+λ

∑n=1

fn

λn−λyn(x),

where λn and yn(x) are eigenvalues and eigenfunctions to the corresponding homogeneous equation (i.e.

(*) with f ≡ 0) and fn =Z b

af (x)yn(x)dx.

PROOF. From (*) we see immediately that

y(x)− f (x) = λ

Z b

ak(x,ξ)y(ξ)dξ,

and according to H-S Lemma (8.4) we can expand y(x)− f (x) in a Fourier series:

y(x)− f (x) =∞

∑n=1

cnyn(x),

where

cn =Z b

a(y(x)− f (x))yn(x)dx =

Z b

ay(x)yn(x)dx− fn.

Page 80: Bystrom Applied Mathematics

80 8. INTEGRAL EQUATIONS

Hence Z b

ay(x)yn(x)dx = fn +

Z b

a(y(x)− f (x))yn(x)dx

= fn +λ

Z b

a

(Z b

ak(x,ξ)y(ξ)dξ

)yn(x)dx

k(x,ξ) = k(ξ,x) = fn +λ

Z b

a

(Z b

ak(ξ,x)yn(x)dx

)y(ξ)dξ

= fn +λ

λn

Z b

ayn(ξ)y(ξ)dξ.

Thus Z b

ay(x)yn(x)dx =

fn

1− λ

λn

=λn fn

λn−λ,

and we conclude that

cn =λn fn

λn−λ− fn =

λ fn

λn−λ,

i.e. we can write y(x) as

y(x) = f (x)+λ

∑n=1

fn

λn−λyn(x).

Example 8.16. Solve the equation

y(x) = x+λ

Z 1

0k(x,ξ)y(ξ)dξ,

where λ 6= n2π

2, n = 1,2, . . . , and

k(x,ξ) =

x(1−ξ), x ≤ ξ≤ 1,

ξ(1− x), 0≤ ξ≤ x.

Solution: From Example 8.15 we know that the normalized eigenfunctions to the homogeneousequation

y(x) = λ

Z 1

0k(x,ξ)y(x)dξ

areyn(x) =

√2sin(nπx) ,

corresponding to the eigenvalues λn = n2π

2, n = 1,2, . . . . In addition we see that

fn =Z 1

0f (x)yn(x)dx =

Z 1

0x√

2sin(nπx)dx =(−1)n+1√2

nπ,

hence

y(x) = x+√

π

∑n=1

(−1)n+1

n(n2π2−λ)sin(nπx) , λ 6= n2

π2.

Finally we observe that by using practically the same ideas as before we can also prove the followingtheorem (cf. (5, pp. 246-247)).

Page 81: Bystrom Applied Mathematics

8.7. HILBERT-SCHMIDT THEORY TO SOLVE A FREDHOLM EQUATION 81

THEOREM 8.6. Let f and k be continuous functions and define the operator K acting on the functiony(x) by

Ky(x) =Z x

ak(x,ξ)y(ξ)dξ,

and then define positive powers of K by

Kmy(x) = K(Km−1y)(x), m = 2,3, . . . .

Then the equation

y(x) = f (x)+λ

Z x

ak(x,ξ)y(ξ)dξ

has the solution

y(x) = f (x)+∞

∑n=1

λnKn( f ).

This type of series expansion is called a Neumann series.

Example 8.17. Solve the equation

y(x) = x+λ

Z x

0(x−ξ)y(ξ)dξ.

Solution: (by Neumann series):

K(x) =Z x

0(x−ξ)ξdξ =

x3

3!

K2(x) =Z x

0(x−ξ)

ξ3

3!dξ =

x5

5!...

Kn(x) =Z x

0(x−ξ)

ξ2n−1

(2n−1)!dξ =

x2n+1

(2n+1)!,

hence

y(x) = x+∞

∑n=1

λnKn(x)

= x+λx3

3!+λ

2 x5

5!+ · · ·+λ

n x2n+1

(2n+1)!+ · · · .

Solution (by the Laplace transform): We observe that the operator

K =Z x

0(x−ξ)y(ξ)dξ

is a convolution of the function y(x) with the identity function x 7→ x, i.e. K(x) = (t 7→ t ? y)(x),which implies that L [K(x)] = L [x]L [y], and since y(x) = x+λK(x) we get

L (y) = L (x)+λL (x)L (y) =1s2 +λ

1s2 L (y)

L (y) =1

s2−λ=

1

2√

λ

(1

s−√

λ− 1

s+√

λ

),

and by inverting the transform we get

y(x) =1

2√

λ

(e√

λx− e−√

λx)

.

Page 82: Bystrom Applied Mathematics

82 8. INTEGRAL EQUATIONS

Observe that we obtain the same solution independent of method. This is easiest seen by lookingat the Taylor expansion of the second solution. More precisely we have

e−√

λx = 1−√

λx+12

(√λx)2− 1

3!

(√λx)3

+ · · · ,

e√

λx = 1+√

λx+12

(√λx)2

+13!

(√λx)3

+ · · · ,

i.e.

y(x) =1

2√

λ

(e√

λx− e−√

λx)

=1

2√

λ

(2√

λx+23!

(√λx)3

+25!

(√λx)5

+ · · ·)

= x+λx3

3!+λ

2 x5

5!+ · · · .

8.8. Exercises

8.1. [S] Rewrite the following second order initial values problem as an integral equationu′′(x)+ p(x)u′(x)+q(x)u(x) = f (x), x > a,

u(a) = u0, u′(a) = u1.

8.2. Consider the initial values problemu′′(x)+ω

2u(x) = f (x), x > 0,

u(a) = 0, u′(0) = 1.

a) Rewrite this equation as an integral equation.b) Use the Laplace transform to give the solution for a general f (x) with Laplace transform

F(s).c) Give the solution u(x) for f (x) = sinat with a ∈ R, a 6= ω.

8.3. [S] Rewrite the initial values problem

y′′(x)+ω2y = 0, 0≤ x ≤ 1,

y(0) = 1, y′(0) = 0

as an integral equation of Volterra type and give those solutions which also satisfy y(1) = 0.

8.4. Rewrite the boundary values problem

y′′(x)+λp(x)y = q(x), a≤ x ≤ b,

y(a) = y(b) = 0

as an integral equation of Fredholm type. (Hint: Use y(b) = 0 to determine y′(a).)

Page 83: Bystrom Applied Mathematics

8.8. EXERCISES 83

8.5. [S] Let α ≥ 0 and consider the probability that a randomly choosen integer between 1 and x hasits largest prime factor ≤ xα. As x→ ∞ this probability distribution tends to a limit distribution withthe distribution function F(α), the so called Dickman function (note that F(α) = 1 for α ≥ 1). Thefunction F(α) is a solution of the following integral equation

F(α) =Z

α

0F(

t1− t

)1t

dt, 0≤ α ≤ 1.

Compute F(α) for12≤ α≤ 1.

8.6.* Consider the Volterra equation

u(x) = x+µZ x

0(x− y)u(y)dy.

a) Compute the first three non-zero terms in the Neumann series of the solution.b) Give the solution of the equation (for example by using a) to guess a solution and then verify

it).

8.7. [S] Solve the following integral equation:

x =Z x

0ex−ξy(ξ)dξ.

8.8. Use the Laplace transform to solve:

a) y(x) = f (x)+λ

Z x

0ex−ξy(ξ)dξ.

b) y(x) = 1+Z x

0ex−ξy(ξ)dξ

8.9. [S] Write a Neumann series for the solution of the integral equation

u(x) = f (x)+λ

Z 1

0u(t)dt,

and give the solution of the equation for f (x) = ex− e2

+12

and λ =12

.

8.10. Solve the following integral equations:

a) y(x) = x2 +Z 1

0(1−3xξ)y(ξ)dξ,

b) y(x) = x2 +λ

Z 1

0(1−3xξ)y(ξ)dξ for all values of λ.

8.11. [S] Solve the following integral equation

u(x) = f (x)+λ

0sin(x)sin(2y)u(x)dy

whena) f (x) = 0,b) f (x) = sinx,c) f (x) = sin2x.

Page 84: Bystrom Applied Mathematics

84 8. INTEGRAL EQUATIONS

8.12. Consider the equation

u(x) = f (x)+λ

Z x

0u(t)dt, 0 ≤ x ≤ 1.

a) Show that for f (x)≡ 0 the equationen has only the triviala solution in C2[0,1].b) Give a function f (x) such that the equation has a non-trivial solution for all values of λ

and compute this solution.

8.13. [S] Let a > 0 and consider the integral equation

u(x) = 1+λ

Z x−a

0θ(x− y+a)(x− y)u(y)dy, x ≥ a.

Use the Laplace transform to determine the eigenvalues and the corresponding eigenfunctions of thisequation.

8.14.* The current in an LRC-circuit with L = 3, R = 2, C = 0.2 (SI-units) and where we apply avoltage at the time t = 3 satisfies the following integral equation

I(t) = 6θ(t−1)(t−1)+2t +3−Z t

0(2+5(t− y)) I(y)dy.

Determine I(t) using the Laplace transform.

8.15. [S] Consider (again) the salesman’s control problem (Example 8.4). Assume that the number ofproducts in stock at the time t = 0 is a and that the products are sold at a constant rate such that allproducts are sold out in T (time units). Now let u(t) be the rate (products/time unit) with which wehave to purchase new products in order to have a constant number of a products in stock.a) Write the integral equation which is satisfied by u(t).b) Solve the equation from a) and find u(t).b) u(t) =

aT

et/T .

8.16.*a) Write the integral equation

(*) y(x) = λ

Z 1

0k(x,ξ)y(ξ)dξ,

where

k(x,ξ) =

x(1−ξ), x ≤ ξ≤ 1,

ξ(1− x), 0≤ ξ≤ x,

as a boundary value problem.b) Find the eigenvalues and the normalized eigenvectors to the problem in a).Solve the equation

y(x) = f (x)+λ

Z 1

0k(x,ξ)y(ξ)dξ,

where k(x,ξ) is as in a) and λ 6= n2π

2 forc) f (x) = sin(πkx), k ∈ Z, andd) f (x) = x2.

Page 85: Bystrom Applied Mathematics

8.8. EXERCISES 85

8.17. [S] Consider the Fredholm equation

u(x) = f (x)+λ

Z 2π

0cos(x+ t)u(t)dt.

Determine solutions for all values of λ and give sufficient conditions (if there are any) that f (x) hasto satisfy in order for solutions to exist.

8.18. Show that the equation

g(s) = λ

0(sinssin2t)g(t)dt

only has the trivial solution.

8.19. [S] Solve the integral equation

sins =1π

Z ∗∞

−∞

u(t)t− s

dt,

whereZ ∗

means that we consider the principal value of the integral (since the integrand has a

singularity at t = s). (Hint: Use the resiue theorem on the integralZ

−∞

eit

s− tdt.)

8.20. Give the Laplace transform of the non-trivial solution for the following integral equation

g(s) =Z s

0

(s2− t2)g(t)dt.

Hint: Rewrite the kernel in convolution form and use the differentiation rule.

Page 86: Bystrom Applied Mathematics
Page 87: Bystrom Applied Mathematics

CHAPTER 9

Introduction to the theory of Dynamical Systems, Chaos , Stabilityand Bifurcations

87

Page 88: Bystrom Applied Mathematics
Page 89: Bystrom Applied Mathematics

APPENDIX A

Appendices

89

Page 90: Bystrom Applied Mathematics

90 A. APPENDICES

A-1. General Properties of the Laplace Transform:

TABLE 1. General Properties of the Laplace Transform

f (t) F(s) = L [ f (t)](s)

Definition f (t)Z

0f (t)e−stdt

Inverse1

2πi

Z a+i∞

a−i∞F(s)estds F(s)

Linearity a f (t)+bg(t) aL [ f (t)](s)+bL [g(t)](s)

Scaling f (at)1|a|

F( s

a

)Sign change f (−t) F(−s)

Time delay f (t−a)θ(t−a) e−asF(s)

Ampl. modulation f (t)cosΩt12

(F (s− iΩ)+F (s+ iΩ))

Damping e−at f (t) F(s+a)

Convolution f ?g(t) =Z t

0f (τ)g(t− τ)dτ L [ f (t)]L [g(t)]

Differentiation f (n)(t) snF(s)− sn−1 f (0)−·· ·− f (n−1)(0)

Differentiation tn f (t) (−1)n F(n)(s)

Transformpar

Constant 1 s−1, s > 0

Exponential eat 1s−a

, s > a

Power tn, n ∈ Z+ n!sn+1 , s > 0

Trig. sinat and cosata

s2 +a2 , ands

s2 +a2 , s > 0

Hyp. trig. sinhat and coshata

s2−a2 , ands

s2−a2 , s > |a|

Exp.×trig. eat sinbtb

(s−a)2 +b2, s > a

Exp.×trig. eat cosbts−a

(s−a)2 +b2, s > a

Exp.×power tneat n!

(s−a)n+1 , s > a

Heaviside’s function θ(t−a) s−1e−as, s > 0

Delta function δ(t−a) e−as

Error function erf√

t =2√π

Z √t

0e−z2

dz1

s√

1+ s, s > 0

Normal dist./Gaussian.1√te−

a24t

√π

se−a

√s, s > 0

Compl. Erf. erfca

2√

t= 1− erf

a2√

t1s

e−a√

s, s > 0

1t×Normal dist.

a2t3/2 e−

a24t

√πe−a

√s, s > 0

Page 91: Bystrom Applied Mathematics

A-2. GENERAL PROPERTIES OF THE FOURIER TRANSFORM 91

A-2. General Properties of the Fourier Transform

TABLE 2. General Properties of the Fourier Transform

f (t) f (ω)

Definition f (t)Z

−∞

f (t)e−iωtdt

Inverse1

Z∞

−∞

f (ω)eiωtdω f (ω)

Linearity a f (t)+bg(t) a f (ω)+bg(ω)

Scaling f (at), a 6= 01|a|

f(

ω

a

)Sign change f (−t) f (−ω)

Complex conjugation f (t) f (−ω)

Time delay f (t−T ) e−iωT f (ω)

Freq. translations eiΩt f (t) f (ω−Ω)

Ampl. modulation f (t)cosΩt12(

f (ω−Ω)+ f (ω+Ω))

Ampl. modulation f (t)sinΩt12i

(f (ω−Ω)− f (ω+Ω)

)Symmetry f (t) 2π f (−ω)

Time differentiation f (n)(t) (iω)n f (ω)

Freq. differentiation (−it)n f (t) f (n)(ω)

Time convolution f (t)?g(t) f (ω)g(ω)

Freq. convolution f (t)g(t)1

2πf (ω)? g(ω)

Transform pairs

Delta function δ(t) 1

Derivative av Delta fn.. δ(n)(t) (iω)n

Exponential θ(t)e−at 1a+ iω

, a > 0

Exponential (1−θ(t))e−at 1a− iω

, a > 0

Exponential e−a|t|,a > 02a

a2 +ω2

Heaviside’s function θ(t) πδ(ω)+1iω

Constant 1 2πδ(ω)

Filtering (sinc)sinΩt

πtθ(ω+Ω)−θ(ω−Ω)

Normal dist./Gaussian.1√4πA

e−t2/(4A) e−Aω2, A > 0

Page 92: Bystrom Applied Mathematics

92 A. APPENDICES

A-3. General Properties of the Z-transform

TABLE 3. General Properties of the Z-transform

xn∞

n=0 X(z) = Z [xn] (z)

Definition xn

∑n=0

xnz−n

Linearity axn+byn aZ [xn]+bZ [yn]

Damping a−nxn X (az) , a > 0

nxn −zX ′(z)

Differentiation (1−n)xn−1σn−1 X ′(z)

Convolution xn?yn X(z)Y (z)

Forward translation xn−kσn−k (k ≥ 0) z−kX(z)

Backward translation xn+k (k ≥ 0) zkX(z)−k−1

∑j=0

x jzk− j

Transform pairs

Unit step σnz

z−1Unit pulse δn 1

Delayed unit pulse δn−k z−k

Exponential an zz−a

Ramp function rn = nσnz

(z−1)2

Sine sinnθzsinθ

z2−2zcosθ+1Damped sine an sinnθ

zsinθ

z2−2zacosθ+a2

Cosine cosnθz(z− cosθ)

z2−2zcosθ+1

Damped cosine an cosnθz(z−acosθ)

z2−2zacosθ+a2

Page 93: Bystrom Applied Mathematics

A-4. THE HAAR WAVELET 93

A-4. The Haar Wavelet

1.The Haar Wavelet. The mother wavelet ψ and scaling function ϕ are in this case very simplefunctions that take the values 0,1 and −1, and 0 and 1 (see Fig. 1.4.1):

ψ(t) =

1, 0≤ t ≤ 1

2,

−1,12

< t ≤ 1,

0, otherwise,

ϕ(t) =

1, 0 ≤ t < 1,

0, otherwise.

The different operations performed on the mother wavelet to construct a basis are illustrated in Figs. 1.4.2,1.4.3, 1.4.4 and 1.4.5.

FIGURE 1.4.1. The Haar wavelet and scaling functiony

t

1

1

ϕ(t) =

1, 0≤ t ≤ 1,

0, else

(a) The Haar scaling function

y

t

1

112

ψ(t) =

1 , 0≤ t ≤ 1

2 ,

−1 , 12 < t ≤ 1,

0 , else

(b) The Haar wavelet

FIGURE 1.4.2. Translations of ϕy

t

1

1

y = ϕ(t−1)

y

t

1

k k +1

y = ϕ(t− k)

trim =0 0 -10 0

Page 94: Bystrom Applied Mathematics

94 A. APPENDICES

FIGURE 1.4.3. Dilatations of ϕy

t

1

14

y = ϕ(22t

)y

t

1

12k

y = ϕ(2kt

)

FIGURE 1.4.4. Dilatations and translationsy

t

1

14

12

y = ϕ(22t−1

)y

t

1

134

y = ϕ(22t−3

)

FIGURE 1.4.5. Dilatations, translation and normalizationy

t

4

134

y = 22ϕ(22t−3

)

REMARK 25. Note that we have e.g.ϕ(t) = ϕ(2t)+ϕ(2t−1),ψ(t) = ϕ(2t)−ϕ(2t−1).


2. An Approximation Example. We will now see how to approximate a function by step functions. Observe that in the figures that illustrate the different cases we have used the function f(t) = t².

a) Approximation by the mean value (see Fig. 1.4.6):

f(t) ≈ A₀(t) = ( ∫₀¹ f(s) ds ) ϕ(t).

FIGURE 1.4.6. Approximation by the mean value: the graphs of y = f(t) and y = A₀(t).

b) Approximation by a step function (2 steps) (see Fig. 1.4.7):

f(t) ≈ A₁(t) = 2 ∫₀^(1/2) f(s) ds · ϕ(2t) + 2 ∫_(1/2)^1 f(s) ds · ϕ(2t − 1)

             = ( ∫₀¹ f(s) √2 ϕ(2s) ds ) √2 ϕ(2t) + ( ∫₀¹ f(s) √2 ϕ(2s − 1) ds ) √2 ϕ(2t − 1)

             = a₀ϕ₀(t) + a₁ϕ₁(t).

FIGURE 1.4.7. Approximation by a step function (2 steps): the graphs of y = f(t) and y = A₁(t).

c) Approximation by a step function (4 steps) (see Fig. 1.4.8):


f(t) ≈ A₂(t)

     = 4 ∫₀^(1/4) f(s) ds · ϕ(4t) + 4 ∫_(1/4)^(1/2) f(s) ds · ϕ(4t − 1) + 4 ∫_(1/2)^(3/4) f(s) ds · ϕ(4t − 2) + 4 ∫_(3/4)^1 f(s) ds · ϕ(4t − 3)

     = ( ∫₀¹ f(s) 2ϕ(4s) ds ) 2ϕ(4t) + ( ∫₀¹ f(s) 2ϕ(4s − 1) ds ) 2ϕ(4t − 1) + ( ∫₀¹ f(s) 2ϕ(4s − 2) ds ) 2ϕ(4t − 2) + ( ∫₀¹ f(s) 2ϕ(4s − 3) ds ) 2ϕ(4t − 3)

     = a₀ϕ₀(t) + a₁ϕ₁(t) + a₂ϕ₂(t) + a₃ϕ₃(t).

FIGURE 1.4.8. Approximation by a step function (4 steps): the graphs of y = f(t) and y = A₂(t).

d) Approximation by a step function (2ⁿ steps):

f(t) ≈ Σ_{k=0}^{2ⁿ−1} aₖ ϕₖ(t),

where the “Fourier coefficients” are

aₖ = ∫₀¹ f(s) 2^(n/2) ϕ(2ⁿs − k) ds,

and the “basis functions” are

ϕₖ(t) = 2^(n/2) ϕ(2ⁿt − k).
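
These coefficients and the resulting approximation are straightforward to compute numerically. Below is a minimal sketch in Python/NumPy (the helper names haar_phi and step_approx are ours, not from the notes), applied to f(t) = t², the function used in the figures:

    # Approximation of f(t) = t^2 on [0,1] by 2^n Haar step functions:
    #   f ~ sum_k a_k phi_k,  phi_k(t) = 2^(n/2) phi(2^n t - k),
    #   a_k = int_0^1 f(s) 2^(n/2) phi(2^n s - k) ds.
    import numpy as np

    def haar_phi(t):
        """The Haar scaling function: 1 on [0, 1), 0 elsewhere."""
        return np.where((t >= 0) & (t < 1), 1.0, 0.0)

    def step_approx(f, n, t):
        """Evaluate A_n(t) = sum_k a_k phi_k(t), with a_k computed by numerical quadrature."""
        s = np.linspace(0.0, 1.0, 4001)                  # quadrature grid on [0, 1]
        A = np.zeros_like(t, dtype=float)
        for k in range(2**n):
            phi_k_s = 2**(n / 2) * haar_phi(2**n * s - k)
            a_k = np.trapz(f(s) * phi_k_s, s)            # "Fourier coefficient" a_k
            A += a_k * 2**(n / 2) * haar_phi(2**n * t - k)
        return A

    f = lambda t: t**2
    t = np.linspace(0.0, 1.0, 8, endpoint=False)
    print(step_approx(f, 2, t))    # the 4-step approximation A_2(t), cf. Fig. 1.4.8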

3. Approximation by wavelets. The basic idea is that we can write f(t) as:

f(t) ≈ Aₙ(t) = (Aₙ(t) − Aₙ₋₁(t)) + (Aₙ₋₁(t) − Aₙ₋₂(t)) + … + (A₂(t) − A₁(t)) + (A₁(t) − A₀(t)) + A₀(t).

E.g. for n = 2 we have

f(t) ≈ A₂(t) = (A₂(t) − A₁(t)) + (A₁(t) − A₀(t)) + A₀(t),


where

A₁(t) − A₀(t) = 2 ∫₀¹ f(s) ϕ(2s) ds · ϕ(2t) + 2 ∫₀¹ f(s) ϕ(2s − 1) ds · ϕ(2t − 1) − ∫₀¹ f(s) ϕ(s) ds · ϕ(t)

              = [ ϕ(t) = ϕ(2t) + ϕ(2t − 1) ]

              = ∫₀¹ f(s) (ϕ(2s) − ϕ(2s − 1)) ds · ϕ(2t) − ∫₀¹ f(s) (ϕ(2s) − ϕ(2s − 1)) ds · ϕ(2t − 1)

              = ∫₀¹ f(s) ψ(s) ds · ψ(t),

where ψ(t) is the Haar wavelet as defined above. Similarly one can also show that

A₂(t) − A₁(t) = ∫₀¹ f(s) √2 ψ(2s) ds · √2 ψ(2t) + ∫₀¹ f(s) √2 ψ(2s − 1) ds · √2 ψ(2t − 1).

By continuing in this manner we find that f(t) can be approximated by Aₙ(t), which can be expressed as

Aₙ(t) = A₀(t) + Σ_{j=0}^{n−1} Σ_{k=0}^{2ʲ−1} ⟨ f, ψ_{j,k} ⟩ ψ_{j,k}(t),

where

ψ_{j,k}(t) = 2^(j/2) ψ(2ʲt − k),

and

⟨ f, ψ_{j,k} ⟩ = ∫₀¹ f(s) ψ_{j,k}(s) ds.
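
The same approximation can be computed in its wavelet form Aₙ = A₀ + Σ ⟨f, ψ_{j,k}⟩ ψ_{j,k}. A minimal NumPy sketch (the helper names are ours), again for f(t) = t²; its output should agree with the step-function approximation above:

    # Haar wavelet expansion A_n(t) = A_0(t) + sum_{j=0}^{n-1} sum_{k=0}^{2^j-1} <f, psi_jk> psi_jk(t).
    import numpy as np

    def phi(t):        # Haar scaling function
        return np.where((t >= 0) & (t < 1), 1.0, 0.0)

    def psi(t):        # Haar mother wavelet, via the refinement relation of Remark 25
        return phi(2*t) - phi(2*t - 1)

    def wavelet_approx(f, n, t):
        s = np.linspace(0.0, 1.0, 4001)
        A = np.trapz(f(s) * phi(s), s) * phi(t)          # A_0(t)
        for j in range(n):
            for k in range(2**j):
                psi_jk_s = 2**(j / 2) * psi(2**j * s - k)
                c = np.trapz(f(s) * psi_jk_s, s)         # <f, psi_{j,k}>
                A += c * 2**(j / 2) * psi(2**j * t - k)
        return A

    f = lambda t: t**2
    t = np.linspace(0.0, 1.0, 8, endpoint=False)
    print(wavelet_approx(f, 2, t))   # should agree with the 4-step approximation A_2(t)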

A-5. Additional Transforms

We present here some additional examples of transforms. For more information and applications cf., e.g., L. Debnath, Integral Transforms and Their Applications [3].

1 The Fourier Cosine Transform

   F_c : f(t) → f_c(ω) = √(2/π) ∫₀^∞ f(t) cos(ωt) dt,

   F_c⁻¹ : f(t) = √(2/π) ∫₀^∞ f_c(ω) cos(ωt) dω.

2 The Fourier Sine Transform

   F_s : f(t) → f_s(ω) = √(2/π) ∫₀^∞ f(t) sin(ωt) dt,

   F_s⁻¹ : f(t) = √(2/π) ∫₀^∞ f_s(ω) sin(ωt) dω.

3 The Hankel Transforms (defined by the Bessel functions Jₙ, n = 0, 1, …)

   Hₙ : f(r) → fₙ(y) = ∫₀^∞ f(r) Jₙ(yr) r dr,

   Hₙ⁻¹ : f(r) = ∫₀^∞ fₙ(y) Jₙ(yr) y dy.


4 The Mellin Transform

   M : f(x) → f(α) = ∫₀^∞ x^(α−1) f(x) dx,

   M⁻¹ : f(x) = (1/(2πi)) ∫_(c−i∞)^(c+i∞) x^(−α) f(α) dα,

   here α is complex and c is chosen such that the integral converges.

5 The Hilbert Transform

   H : f(t) → f_H(x) = (1/π) ∫_{−∞}^{∞} f(t)/(t − x) dt,

   H⁻¹ : f(t) = −(1/π) ∫_{−∞}^{∞} f_H(x)/(x − t) dx.

6 The Stieltjes Transform

   S : f(t) → f(z) = ∫₀^∞ f(t)/(t + z) dt,   |arg z| < π.

   Remark: This operation can be inverted, but we don't get any simple integral formula as before, hence we don't write the inverse transform explicitly here.

7 The Generalized Stieltjes Transform

   S_ρ : f(t) → f_ρ(z) = ∫₀^∞ f(t)/(t + z)^ρ dt,   |arg z| < π.

   The same remark as above applies.

8 The Legendre Transform

   L : f(x) → { f(n) },   f(n) = ∫_{−1}^{1} Pₙ(x) f(x) dx,

   L⁻¹ : f(x) = Σ_{n=0}^{∞} ((2n+1)/2) f(n) Pₙ(x).

   Here Pₙ(x) is the Legendre polynomial of degree n, which we can write explicitly as

   Pₙ(x) = 2⁻ⁿ Σ_{k=0}^{[n/2]} (−1)ᵏ (n choose k) (2n−2k choose n) x^(n−2k),   n = 0, 1, …,

   and the “Fourier coefficients” are aₙ = ((2n+1)/2) f(n).

9 The Jacobi Transform

   J : f(x) → { f^(α,β)(n) },   f^(α,β)(n) = ∫_{−1}^{1} (1 − x)^α (1 + x)^β Pₙ^(α,β)(x) f(x) dx,

   J⁻¹ : f(x) = Σ_{n=0}^{∞} (δₙ)⁻¹ f^(α,β)(n) Pₙ^(α,β)(x).

   Here Pₙ^(α,β)(x) is the Jacobi polynomial of degree n and order α, β, which can be written explicitly as

   Pₙ^(α,β)(x) = 2⁻ⁿ Σ_{k=0}^{n} (n+α choose k) (n+β choose n−k) (x − 1)^(n−k) (x + 1)ᵏ,   n = 0, 1, …,


   and the “Fourier coefficients” are aₙ = (δₙ)⁻¹ f^(α,β)(n), where

   δₙ = 2^(α+β+1) Γ(n+α+1) Γ(n+β+1) / ( n! (α+β+2n+1) Γ(n+α+β+1) ).

10 The Laguerre Transform

   L : f(x) → { f_α(n) },   f_α(n) = ∫₀^∞ e^(−x) x^α Lₙ^α(x) f(x) dx,

   L⁻¹ : f(x) = Σ_{n=0}^{∞} (δₙ)⁻¹ f_α(n) Lₙ^α(x).

   Here Lₙ^α(x) is the Laguerre polynomial of degree n ≥ 0 and order α > −1, and the “Fourier coefficients” are aₙ = (δₙ)⁻¹ f_α(n), where

   δₙ = Γ(n+α+1) / n!.

11 The Hermite Transform

   H* : f(x) → { f_H(n) },   f_H(n) = ∫_{−∞}^{∞} e^(−x²) Hₙ(x) f(x) dx,

   (H*)⁻¹ : f(x) = Σ_{n=0}^{∞} δₙ⁻¹ f_H(n) Hₙ(x).

   Here Hₙ(x) is the Hermite polynomial of degree n, and the “Fourier coefficients” are aₙ = δₙ⁻¹ f_H(n), where

   δₙ = n! 2ⁿ √π.

REMARK 26. Observe that the transforms 8–11 are special cases of the earlier theory for generalized Fourier series (cf. Def. 6.1).
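
As a small illustration of the first of these transforms, the following sketch (assuming SymPy is available) computes the Fourier cosine transform of f(t) = e^(−t) and then applies the inversion formula:

    # Fourier cosine transform of f(t) = e^(-t) and its inversion.
    from sympy import symbols, exp, cos, sqrt, pi, oo, integrate, simplify

    t, w = symbols('t omega', positive=True)

    f_c = sqrt(2/pi) * integrate(exp(-t) * cos(w*t), (t, 0, oo))
    print(simplify(f_c))          # expected: sqrt(2/pi)/(1 + omega**2)

    f_back = sqrt(2/pi) * integrate(f_c * cos(w*t), (w, 0, oo))
    print(simplify(f_back))       # expected: exp(-t), i.e. the original function is recovered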

A-6. Partial Fraction Decompositions

It is quite common, especially when dealing with the Laplace or Z-transform, that one wants to apply the inverse transform to a rational function

P(s)/Q(s).

If none of the standard rules apply directly, the standard approach is to first perform polynomial division if the degree of P is greater than or equal to the degree of Q. After this step the best approach is usually to make a partial fraction decomposition.

Suppose now that deg P < deg Q. We know that the polynomial Q can be factored (in R) into linear factors (s − a) and quadratic factors ((s − a)² + b²). Remember that a partial fraction decomposition is of the form

P(s)/Q(s) = p₁/q₁ + ··· + p_M/q_M,

where p₁, …, p_M are constants or linear polynomials, and q₁, …, q_M consist of the linear and quadratic factors of Q (with all multiplicities). The following two general rules apply:

• A linear factor (s − a) of multiplicity n contributes with

   A₁/(s − a) + A₂/(s − a)² + ··· + Aₙ/(s − a)ⁿ.


• A quadratic factor ((s − a)² + b²) of multiplicity n contributes with

   (A₁s + B₁)/((s − a)² + b²) + (A₂s + B₂)/((s − a)² + b²)² + ··· + (Aₙs + Bₙ)/((s − a)² + b²)ⁿ.

The coefficients of the polynomials p_j are usually computed by putting the right hand side on a common denominator and comparing the resulting coefficients with P(s).

Example 1.1. We consider the rational function

P(s)/Q(s) = (3s² + 1) / ( s (s² + 1)(s − 1)² ).

The factors of Q are the linear factors s and (s − 1) (the latter of multiplicity 2) and the quadratic factor (s² + 1). Hence the partial fraction decomposition is

P(s)/Q(s) = (3s² + 1) / ( s (s² + 1)(s − 1)² ) = A/s + B/(s − 1) + C/(s − 1)² + (Ds + E)/(s² + 1),

and if we put the right hand side on a common denominator we get

(3s² + 1) / ( s (s² + 1)(s − 1)² ) = ( A(s − 1)²(s² + 1) + Bs(s − 1)(s² + 1) + Cs(s² + 1) + (Ds + E)s(s − 1)² ) / ( s (s² + 1)(s − 1)² ),

and hence

3s² + 1 = A(s − 1)²(s² + 1) + Bs(s − 1)(s² + 1) + Cs(s² + 1) + (Ds + E)s(s − 1)².

We can solve for A and C immediately: if we set s = 1 we see that

3 + 1 = A · 0 + B · 0 + 2C + D · 0 + E · 0 = 2C,

hence C = 2, and if we set s = 0 we see that 1 = A. Hence we must have

3s² + 1 = (s − 1)²(s² + 1) + Bs(s − 1)(s² + 1) + 2s(s² + 1) + (Ds + E)s(s − 1)²

        = (s² + 1 − 2s)(s² + 1) + B(s² − s)(s² + 1) + 2s³ + 2s + (Ds² + Es)(s² − 2s + 1)

        = s⁴ − 2s³ + 2s² − 2s + 1 + B(s⁴ − s³ + s² − s) + 2s³ + 2s + Ds⁴ − 2Ds³ + Es³ − 2Es² + Ds² + Es

        = s⁴(1 + B + D) + s³(−2 − B + 2 − 2D + E) + s²(2 + B − 2E + D) + s(−2 − B + 2 + E) + 1.

And we get the following equations for the coefficients:

1 = 1,
E − B = 0,
B + D − 2E + 2 = 3,
E − 2D − B = 0,
B + D + 1 = 0,

and using standard linear algebra we see that B = E = −1 and D = 0. Hence the partial fraction decomposition becomes

P(s)/Q(s) = (3s² + 1) / ( s (s² + 1)(s − 1)² ) = 1/s − 1/(s − 1) + 2/(s − 1)² − 1/(s² + 1).

This can (and should) now also be verified by putting all terms of the right hand side over a common denominator again.
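
The decomposition in Example 1.1 can also be checked with a computer algebra system; a minimal sketch assuming SymPy (apart is SymPy's partial fraction routine, and together puts the terms back over a common denominator):

    # Partial fraction decomposition of (3s^2 + 1)/(s (s^2 + 1)(s - 1)^2) and a round-trip check.
    from sympy import symbols, apart, together, simplify

    s = symbols('s')
    expr = (3*s**2 + 1) / (s * (s**2 + 1) * (s - 1)**2)

    decomposition = apart(expr, s)
    print(decomposition)
    # expected (possibly in another order): 1/s - 1/(s - 1) + 2/(s - 1)**2 - 1/(s**2 + 1)

    print(simplify(together(decomposition) - expr))   # expected: 0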


Bibliography

[1] J. Bergh, F. Ekstedt, and M. Lindberg. Wavelets. Studentlitteratur, 1999.
[2] C.S. Burrus, R.A. Gopinath, and H. Guo. Introduction to Wavelets and Wavelet Transforms, A Primer. Prentice Hall, 1998.
[3] L. Debnath. Integral Transforms and Their Applications. CRC Press, 1995.
[4] K. Gröchenig. Foundations of Time-Frequency Analysis. Birkhäuser, 2000.
[5] J. D. Logan. Applied Mathematics. Wiley, 2nd edition, 1996.
[6] S.G. Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1999.


A-7. Answers to exercises

4.1. a) Linear. b) Non-linear. c) Non-linear. d) Linear.

4.3. u(x,t) = ∫_{−∞}^{∞} c(α) u_α(x,t) dα with c(α) = α²/(2k√π) gives u(x,t) = ( 1/(4√(k³π)) ) ∫_{−∞}^{∞} α² e^(−(x−α)²/(4kt)) dα.

4.5. a) S(t) = π²/3 + 4 Σ_{n=1}^{∞} ((−1)ⁿ/n²) cos(nt). b) Use f(0) = 0 = S(0).

4.7. a) We get the equation u′_t = u″_xx, 0 < x < 1, t > 0, with the initial value u(x,0) = 1 and the boundary values u(0,t) = u(1,t) = 0.

b) u(x,t) = (4/π) Σ_{k=0}^{∞} (1/(2k+1)) e^(−π²(2k+1)²t) sin(π(2k+1)x).

4.9. u(x,t) = (1/2)( 1 − e^(−4t) cos 2x ).

5.1. (a) The eigenvalues are λₙ = 1/4 + n²π²/L² and the eigenfunctions are uₙ(x) = (Aₙ/√x) cos((1/4 + nπ/L) ln x).

(b) The eigenvalues are λₙ = pₙ² + 1/4, where the pₙ are the solutions of tan pₙ = 2pₙ, and the eigenfunctions are uₙ(x) = Aₙ (1/√x) sin(pₙ ln x).

5.3. u(x,t) = Σ_{k=0}^{∞} aₖ cos((πk/l) x) e^(−(π²k²/l²) t) with aₖ = (2/l) ∫₀^l f(x) cos((πk/l) x) dx.

5.6. a) The reason that m must be an integer is the periodicity: Θ(θ+2π) = Θ(θ).

b) ψ(r,θ) = −v₀ [ r + a²/r ] cos θ, and v = (v_r, v_θ) with v_r = v₀ [ 1 − a²/r² ] cos θ and v_θ = v₀ [ 1 + a²/r² ] sin θ.

6.1. a) f(t) = ( (1/2) e^(−3(t−2)) − (1/2) e^(−5(t−2)) ) θ(t−2).

b) y(t) = 1 − (3/2) e^(−t) + (1/2) e^(−3t).

6.3. a) y(t) = (1/2) [ 1 − e^(−t)(cos t + sin t) ].   b) y(t) = (1/2)( e^(−t) cos t − 1 + t ).

6.5. f̂(ω) = e^(−3iω) · 1/(1 + iω).

6.7. F(ω) = i ( sin((ω+ω₀)a)/(ω+ω₀) − sin((ω−ω₀)a)/(ω−ω₀) ).

6.9. y(n) = 2ⁿ + (−1)ⁿ, n ≥ 1.

6.11. (f ⋆ f)(x) = (1 + |x|) e^(−|x|).

6.13. x(t) = 5e^(−t) + 3e^(4t)  ( X(s) = 5/(s+1) + 3/(s−4) ),   y(t) = 5e^(−t) − 2e^(4t)  ( Y(s) = 5/(s+1) − 2/(s−4) ).

6.15. y(t) = ( 1/(1 + ω²T²) ) ( ωT e^(−t/T) + sin ωt − ωT cos ωt ).


6.17. yₙ = ( 10/(10a − 7) ) ( a^(n+1) − 0.7^(n+1) ) σₙ.

6.20. a) With the dimensionless variables z̄ = z/h, x̄ = Dx/(Vh²) and c̄ = cV/q the equation becomes

∂c̄/∂x̄ = ∂²c̄/∂z̄²,

with the boundary conditions

∂c̄/∂z̄ |_{z̄=0} = ∂c̄/∂z̄ |_{z̄→∞} = 0,

and

c̄ |_{x̄=0} = δ(z̄ − 1),   c̄ |_{x̄→∞} = 0.

b) c̄(x̄, 0) = (1/√(πx̄)) e^(−1/(4x̄)), which gives c(x, 0) = ( qh/√(πDVx) ) e^(−Vh²/(4Dx)) kg m⁻³.

c) The maximum is attained at x̄ = 1/2, i.e. at x = Vh²/(2D) m.

8.1. We get a Volterra equation of the form u(x) = F(x) + ∫_a^x k(x,y) u(y) dy with F(x) = ∫_a^x (x − y) f(y) dy + (p(a)u₀ + u₁)(x − a) + u₀ and k(x,y) = p(y) + (x − y)[ p′(y) − q(y) ].

8.3. The integral equation is y(x) = −ω² ∫₀^x (x − t) y(t) dt + 1, and the solutions are y(x) = cos ωx, and if y(1) = 0 we must have ω = 2πn, and we thus get the eigenfunctions yₙ(x) = cos(2πnx), n ∈ Z.

8.5. For 1/2 ≤ α ≤ 1 we have: F(α) = ∫₀ F(t/(1 − t)) (1/t) dt = 1 − ∫_α^1 (1/t) dt = 1 + ln α.

8.7. y = 1 − x.

8.9. u(x) = f(x) + Σ_{n=1}^{∞} λⁿ f₁, where f₁ = ∫₀¹ f(t) dt. For the given function f(x) and λ = 1/2 we get

u(x) = eˣ − e/2 + 1/2 + Σ_{n=1}^{∞} (1/2ⁿ) · (e − 1)/2 = eˣ.

8.11. a) u(x) ≡ 0.   b) u(x) = sin x.   c) u(x) = sin 2x + (λπ/2) sin x.

8.13. The eigenvalues are λₙ = −(2πn/a)² and the eigenfunctions are uₙ(x) = cos((2πn/a) x), for n ∈ Z.

8.15. a) With k(t) = 1 − t/T for 0 ≤ t ≤ T we get the Volterra equation

a k(t) + ∫₀^t k(t − y) u(y) dy,   0 ≤ t ≤ T,

which can be solved either by rewriting it as a differential equation or by using the Laplace transform.


8.17. The eigenvalues are λ = ±1/π, and if we define f₁ = ∫₀^{2π} f(t) cos t dt and f₂ = ∫₀^{2π} f(t) sin t dt we get the different cases:

For λ = 1/π there are solutions if f(x) is odd (or ≡ 0), and these are then given by

u(x) = f(x) − (f₂/(2π)) sin x + c cos x

for some constant c.

For λ = −1/π there are solutions if f(x) is even (or ≡ 0), and these are then given by

u(x) = f(x) + (f₁/(2π)) cos x + d sin x,

where d is a constant.

For λ ≠ ±1/π we have the solutions

u(x) = f(x) + ( λf₁/(1 − λπ) ) cos x − ( λf₂/(1 + λπ) ) sin x.

8.19. u(t) = cos t.