
The University of Michigan

Industry Program of the College of Engineering

Notes on Mathematics For Engineers

Prepared by the American Nuclear Society.

University of Michigan Student Branch

October, 1959

IP-394

Page 2: Notes on Mathematics For Engineers

Acknowledgments

These notes were prepared by the University of Michigan Student Branch of the American Nuclear Society, Committee on Math Notes, which included R.W. Albrecht, J.M. Carpenter, D.L. Galbraith, E.H. Klevans, and R.J. Mack, all students in the University of Michigan Department of Nuclear Engineering. Assistance was provided by Professors William Kerr and Paul Zweifel.

The translation to LaTeX was done by Alison Chistopherson (class of 2013), with the encouragement of Alex Bielajew, over the Winter and Summer of 2012. Other than this paragraph, everything else is a verbatim transcription of the original document.

Foreword

This set of notes has been compiled with one primary objective in mind: to provide, in one volume, a handy reference for a large number of the commonly-used mathematical formulae, and to do so consistently with respect to notation, definition, and normalization. Many of us keep these results available to us in an excessive number of references, in which the notation or normalization varies, or formulae are so spread out that they are difficult to find, and their use is time-consuming.

Short explanations are included, with some examples, to serve two purposes: first, to recall to the user some of the ideas which may have slipped his mind since his detailed study of the material; second, for those who have never studied the material, to make its use at least plausible, and to help in his study of references.

No claim can be made that all results anyone ever uses are here, but it is hoped that a sufficient quantity of material is included to make necessary only infrequent use of other references, except for integral tables, etc., for elementary work. Of course, the user may find it desirable to add some pages of his own.

Finally, it is recommended that those unfamiliar with the theory at any point not blindly apply the formulae herein, for this is risky business at best; a text should be studied (and, of course, understood) first.

Contents

1 Orthogonal Functions
  1.1 Importance of Orthogonal Functions
  1.2 Generation of Orthogonal Functions
  1.3 Use of Orthogonal Functions
  1.4 Operational Properties of Some Common Sets of Orthogonal Functions
  1.5 Fourier Series, Range −ℓ ≤ x ≤ ℓ
    1.5.1 Boundary Value Problem Satisfied by Sines and Cosines
    1.5.2 Orthogonal Set
    1.5.3 Expansion in Fourier Series
    1.5.4 Normalization Factors
    1.5.5 Orthonormal Set
    1.5.6 Fourier Series, Range 0 ≤ x ≤ L
    1.5.7 Expansion in Half-range Series
    1.5.8 Full Range Series

2 Legendre Polynomials
  2.1 Generating Function
  2.2 Recurrence Relations
  2.3 Differential Equation Satisfied by P(x)
  2.4 Rodrigues' Formula
  2.5 Normalizing Factor
  2.6 Orthogonality
  2.7 Expansion in Legendre Polynomials
  2.8 Normalized Legendre Polynomials
  2.9 Expansion in Normalized Polynomials
    2.9.1 A Few Low-Degree Legendre Polynomials and Respective Norms
    2.9.2 Integral Representation of Pℓ(x)
    2.9.3 Bounds on Pℓ(x)

3 Associated Legendre Functions
  3.1 Definition
  3.2 Recurrence Relations
  3.3 Differential Equation Satisfied by Pℓm(x)
  3.4 Expression of Pℓ(cos θ) in terms of Pℓm(x)
  3.5 Normalizing Factor

4 Spherical Harmonics
  4.1 Definition of Ym(Ω)
  4.2 Expression of Pℓ(cos Θ) in terms of the Spherical Harmonics
  4.3 Orthonormality of Ym(Ω)
  4.4 Expansion in Spherical Harmonics
  4.5 Differential Equation Satisfied by Yℓm(Ω)
  4.6 Some Low-order Spherical Harmonics
  4.7 A Useful Relationship

5 Laguerre Polynomials
  5.1 Derivative Definition
  5.2 Generating Function
  5.3 Differential Equation Satisfied by Lαn(x)
  5.4 Orthogonality, Range 0 ≤ x ≤ ∞
  5.5 Expansion in Laguerre Polynomials
  5.6 Expansion of x^m in Laguerre Polynomials
  5.7 Recurrence Relations

6 Bessel Functions
  6.1 Differential Equation Satisfied by Bessel Functions
  6.2 General Solutions
  6.3 Series Representation
  6.4 Properties of Bessel Functions
  6.5 Generating Function
  6.6 Recursion Formulae
  6.7 Differential Formulae
  6.8 Orthogonality, Range
  6.9 Expansion in Bessel Functions
  6.10 Bessel Integral Form

7 Modified Bessel Functions
  7.1 Differential Equation Satisfied
  7.2 General Solutions
  7.3 Relation of Modified Bessel to Bessel
  7.4 Properties of In

8 The Laplace Transformation
  8.1 Introduction
    8.1.1 Description
    8.1.2 Definition
    8.1.3 Existence Conditions
    8.1.4 Analyticity
    8.1.5 Theorems
    8.1.6 Further Properties
  8.2 Examples
    8.2.1 Solving Simultaneous Equations
    8.2.2 Electric Circuit Example
    8.2.3 Transfer Functions
  8.3 Inverse Transformations
    8.3.1 Heaviside Methods
    8.3.2 The Inversion Integral
  8.4 Tables of Transforms
    8.4.1 Analyticity
    8.4.2 Cauchy's Integral Formula
    8.4.3 Regular and Singular Points
  8.5 References

9 Fourier Transforms
  9.1 Definitions
    9.1.1 Basic Definitions
    9.1.2 Range of Definition
    9.1.3 Existence Conditions
  9.2 Fundamental Properties
    9.2.1 Transforms of Derivatives
    9.2.2 Relations Among Infinite Range Transforms
    9.2.3 Transforms of Functions of Two Variables
    9.2.4 Fourier Exponential Transforms of Functions of Three Variables
  9.3 Summary of Fourier Transform Formulae
    9.3.1 Finite Transforms
    9.3.2 Infinite Transforms
  9.4 Types of Problems to which Fourier Transforms May be Applied
  9.5 Inversion of Fourier Transforms
    9.5.1 Finite Range
    9.5.2 Infinite Range
    9.5.3 Inversion of Fourier Exponential Transforms
  9.6 Table of Transforms
  9.7 References

10 Miscellaneous Identities, Definitions, Functions, and Notations
  10.1 Leibnitz Rule
  10.2 General Solution of First Order Linear Differential Equations
  10.3 Identities in Vector Analysis
  10.4 Coordinate Systems
  10.5 Index Notation
  10.6 Examples of Use of Index Notation
  10.7 The Dirac Delta Function
  10.8 Gamma Function
  10.9 Error Function

11 Notes and Conversion Factors
  11.1 Electrical Units
    11.1.1 Electrostatic CGS System
    11.1.2 Electromagnetic CGS System
    11.1.3 Practical System
  11.2 Tables of Transforms
    11.2.1 Energy Relationships
  11.3 Physical Constants and Conversion Factors; Dimensional Analysis


Chapter 1

Orthogonal Functions

In general, orthogonal functions arise in the solution of certain boundary-value problems. The use of the properties of orthogonal functions may often greatly simplify and systematize the solution to such a problem, in addition to providing a natural way of making approximate solutions. Let us first make clear the concept of orthogonality. We begin at what may seem an improbable starting point.

Two vectors, $\mathbf{A}$ and $\mathbf{B}$, in a three-dimensional space are said to be "orthogonal" if the dot (inner) product $\mathbf{A}\cdot\mathbf{B}$ vanishes:
$$\mathbf{A}\cdot\mathbf{B} = A_1B_1 + A_2B_2 + A_3B_3 = \sum_{i=1}^{3} A_iB_i = 0.$$

This is easily generalized to a space with more than three dimensions. In an $n$-dimensional space the concept of orthogonality is unchanged, except that the sum is then over $n$ terms $A_iB_i$, and we have for orthogonality
$$\mathbf{A}\cdot\mathbf{B} = A_1B_1 + A_2B_2 + \cdots + A_nB_n = \sum_{i=1}^{n} A_iB_i = 0.$$

Now one may think of the components of a vector, $A_1, A_2, A_3$, as the values of a (real) function at three values of its argument; say $A_1 = f(r_1)$, $A_2 = f(r_2)$, $A_3 = f(r_3)$, or, in terms which will make our efforts here more clear, $A_i = f(r_i)$ $(r_i = r_1, r_2, r_3)$. That is, $r$ has the values $r_1, r_2, r_3$, and to get $A_i$, put $r_i$ in $f(r)$.

We may now think of $r$ as having any number, say $n$, possible values in some range, so that $f(r)$ evaluated at the various $r$'s generates an $n$-dimensional vector. The step to considering a function as an infinitely-many-dimensional vector is now a natural one; we allow $n$ to increase without bound, $r$ taking all values in its range.

Let $r$ have some range $a \le r \le b$ in which it takes on $n$ values such that $r_j - r_{j-1} = \Delta r_j$, and suppose two such $n$-dimensional vectors $f(r_j)$ and $g(r_j)$ are thus generated. The inner product of $f$ with $g$ is generalized as
$$\sum_{j=1}^{n} f(r_j)\, g(r_j)\, \Delta r_j.$$

Above, for $\mathbf{A}\cdot\mathbf{B}$, $r_j$ takes on only the discrete values 1, 2, 3; so $\Delta r$ is always unity in this simple case. Now let us consider the case as $n \to \infty$, where $r$ takes all values between $a$ and $b$. The inner product, if it exists, is then
$$\lim_{n\to\infty} \sum_{j=1}^{n} f(r_j)\, g(r_j)\, \Delta r_j.$$

With proper restrictions on $\Delta r_j$ ($\max \Delta r_j \to 0$) and on the range of $r$ ($a \le r \le b$), this is just the limit occurring in the definition of the ordinary integral. Thus we say that the inner product of $f(r)$ with $g(r)$, which is often denoted $(f, g)$, is
$$(f, g) = \int_a^b f(r)\, g(r)\, dr.$$

Of course, the range could be infinite. This discussion constitutes the generalization of the dot, or inner, product, to functions. Two functions are then by definition orthogonal over the range $a \le r \le b$ when
$$(f, g) = \int_a^b f(r)\, g(r)\, dr = 0.$$

As we shall see, this definition is subject to generalization by the inclusion of a "weight function" $p(r)$, with which
$$(f, g) = \int_a^b p(r)\, f(r)\, g(r)\, dr.$$

Here $f$ and $g$ are said to be orthogonal "with respect to weight function $p$" over the range $a \le r \le b$. The function $p(r)$ may of course be unity.

Only the function $f(r) \equiv 0$, $a \le r \le b$, is orthogonal to itself. In general, we denote the inner product of a function with itself as
$$(f, f) = \int_a^b f(r)^2\, dr = N_f^2,$$
and call it the norm of the function.

Orthogonal sets may or may not possess two other properties, normality and completeness. A set of orthogonal functions $U_i(u)$ is said to be normal, or orthonormal, if $N_i^2 = 1$ for all $i$. A set of functions $U_i(u)$ orthogonal on the interval $u_1 \le u \le u_2$ is complete if there exists no other function orthogonal to all the $U_i$ on the same interval, with respect to the same weight function, if one is involved.

1.1 Importance of Orthogonal Functions

The importance of orthogonal sets in mathematical physics may perhaps be indicated by further consideration of their analogs: orthogonal coordinate vectors. It is true that any N-dimensional vector may be defined in terms of its components along N coordinates, provided that no more than two of the reference coordinates are coplanar. But if the reference coordinates are orthogonal, e.g., Cartesian coordinates, then the equations take a particularly simple form. The situation is somewhat similar when it is desired to expand a function in terms of a set of other functions – it is much simpler if the set is orthogonal.

Completeness is another important property. It is apparent that no two reference axes will suffice for the definition of a vector in 3-dimensional space. The set of two reference axes is not complete in ordinary space, since a third coordinate can be added which is orthogonal to both of them. Addition of this coordinate makes the set complete. The situation with orthogonal functions is exactly analogous. Some authors define a complete set as a set in terms of which any other function defined on the same interval can be expressed.

Some more common sets of orthogonal functions are the sines and cosines, Bessel functions, Legendre polynomials, associated Legendre functions, spherical harmonics, Laguerre and Hermite polynomials; operational properties of which are listed in these notes.

1.2 Generation of Orthogonal Functions

In the mathematical formulation of physical problems, one often encounters partial differential equations or integro-differential equations (which contain not only derivatives of functions but also integrals of functions), with which are associated a set of boundary conditions. If the equation and its boundary conditions are such as to be "separable" in one of the variables, one may attempt to apply the method of "separation of variables".

Suppose we (admittedly rather abstractly) represent our equation as
$$\bigoplus F(u, v, w, \cdots) = 0 \qquad (1.2.1)$$
where $\bigoplus$ is an operator involving the variables $u, v, w, \cdots$, applied to the function $F(u, v, w, \cdots)$. For example, $\bigoplus$ might be something like
$$\frac{\partial^2}{\partial u^2} + v + \int_{w_1}^{w_2} K(w, w')\, dw' + \Delta^2$$
so that when $\bigoplus$ is applied to a function $F$ the equation appears as
$$\bigoplus F = \frac{\partial^2 F}{\partial u^2} + vF + \int_{w_1}^{w_2} K(w, w')\, F(u, v, w')\, dw' + \Delta^2 F = 0.$$

The process of separation proceeds as follows. One attempts to find a variable, say $u$, such that if it is assumed that $F(u, v, w, \cdots)$ may be written
$$F(u, v, w) = U(u)\, \phi(v, w, \cdots)$$
then Equation (1.2.1) can be written as follows:
$$\bigoplus F = \bigoplus U(u)\, \phi(v, w) = 0. \qquad (1.2.2)$$


Now let
$$\Upsilon\, U(u) = \Phi\, \phi(v, w).$$
Here, $\Upsilon$ is an operator involving only $u$, and $\Phi$ is an operator involving only $v, w, \cdots$. Returning to the example, assume
$$F(u, v, w) = U(u)\, \phi(v, w).$$
Then,
$$\bigoplus F = \bigoplus U\phi = \frac{\partial^2 (U\phi)}{\partial u^2} + vU\phi + \Delta^2 U\phi + \int_{w_1}^{w_2} K(w, w')\, U\phi\, dw'
= \phi\,\frac{d^2 U}{du^2} + vU\phi + \Delta^2 U\phi + U\int_{w_1}^{w_2} K(w, w')\, \phi(v, w')\, dw' = 0.$$

We can, in this case, if $U\phi \ne 0$, divide by $U\phi$ to get
$$\frac{1}{U}\frac{d^2 U}{du^2} + v + \frac{1}{\phi}\int_{w_1}^{w_2} K(w, w')\, \phi(v, w')\, dw' + \Delta^2 = 0.$$

This can be rearranged to obtain the following result:
$$\frac{1}{U}\frac{d^2 U}{du^2} + \Delta^2 = -v - \frac{1}{\phi}\int_{w_1}^{w_2} K(w, w')\, \phi(v, w')\, dw'.$$
In this case the separation has been successful, for on the left are functions of $u$ only, while on the right stand only functions of $v$ and $w$. Now suppose we were to vary $u$, fixing $v$ and $w$. Then the right side would not change, since it does not involve $u$, and is therefore a constant. We therefore state this fact:
$$\frac{1}{U}\frac{d^2 U}{du^2} + \Delta^2 = \mu^2 = -v - \frac{1}{\phi}\int_{w_1}^{w_2} K(w, w')\, \phi(v, w')\, dw'$$
where $\mu^2$ is a constant called the "separation constant". We may choose it at our discretion. We now have two equations, where only one existed before:
$$0 = \frac{d^2 U}{du^2} + (\Delta^2 - \mu^2)\, U$$
$$0 = (v + \mu^2)\,\phi + \int_{w_1}^{w_2} K(w, w')\, \phi(v, w')\, dw'.$$
For $\phi \ne 0$, we can divide the boundary conditions of the example by $\phi$ to get
$$U(u_1) = 0, \qquad \frac{dU}{du}(u_2) = U(u_2).$$


These equations are separated; they do not involve $v$ nor $w$. By the process of separation of variables we have, from the original equation involving $u$, $v$, and $w$, generated a new set of equations, some of which (those involving $u$) are a complete problem:
$$0 = \frac{d^2 U}{du^2} + (\Delta^2 - \mu^2)\, U$$
$$0 = U(u_1), \qquad \frac{dU}{du}(u_2) = U(u_2)$$
$$0 = (v + \mu^2)\,\phi + \int_{w_1}^{w_2} K(w, w')\, \phi(v, w')\, dw'$$

This was our objective in applying the method of separation of variables. The process may be repeated on the remaining equation or performed on another variable.

Now if, after separation, the $u$-equation can be put in the form
$$\frac{d}{du}\left[ r(u)\frac{dU}{du} \right] - \big[ q(u) + \mu\, p(u) \big]\, U = 0, \qquad (1.2.3)$$
and the boundary conditions assume the form
$$a_1 U(u_1) + a_2 U'(u_1) + \alpha_1 U(u_2) + \alpha_2 U'(u_2) = 0$$
$$b_1 U(u_1) + b_2 U'(u_1) + \beta_1 U(u_2) + \beta_2 U'(u_2) = 0 \qquad (1.2.4)$$
where the $a$, $b$, $\alpha$, and $\beta$ are constants (some may be zero), then the system of differential equation and boundary conditions is called a "Sturm–Liouville system"¹. It will be noted that the Sturm–Liouville system is very general, and includes many important equations as special cases, for example the wave equation with those boundary conditions which are commonly applied.

This system, under quite general conditions, generates a complete set of orthogonal functions, one for each of an infinite, discrete set of values of the parameter $\mu$. One finds the values of $\mu$, called "eigenvalues", for which solutions exist, and the solution functions corresponding to these eigenvalues, called "eigenfunctions". If the eigenvalues are $\mu_1, \mu_2, \cdots$ and the corresponding eigenfunctions are $U_1(u), U_2(u), \cdots$, then in general the functions are orthogonal over the range $u_1$ to $u_2$, with respect to the weight function $p(u)$:
$$\int_{u_1}^{u_2} p(u)\, U_i(u)\, U_j(u)\, du = N_i^2\, \delta_{ij} \qquad (1.2.5)$$
Here $\delta_{ij}$ is the Kronecker delta, and $p(u)$ is the same as in equation (1.2.3). One must take care to find all possible eigenvalues; when the equation and boundary conditions are written exactly as in (1.2.3) and (1.2.4), they are all real. When all eigenvalues are found, the set of eigenfunctions is complete, and any function reasonably well behaved between $u_1$ and $u_2$ may be represented in terms of them. Say we seek an expansion of a function $f(u)$ in terms of our eigenfunctions,
$$f(u) = \sum_i f_i\, U_i(u)$$

¹Pronounced LEE-oo-vil, NOT Loo-i-vil.


where the $f_i$ are a set of constants. Multiply on the right and left by $U_j(u)\, p(u)$ and integrate with respect to $u$ over the range $u_1$ to $u_2$:
$$\int_{u_1}^{u_2} p(u)\, U_j(u)\, f(u)\, du = \int_{u_1}^{u_2} \sum_i p(u)\, U_j(u)\, f_i\, U_i(u)\, du = \sum_i f_i \int_{u_1}^{u_2} p(u)\, U_j(u)\, U_i(u)\, du$$
As a consequence of the orthogonality of the $U_i(u)$'s, defined in equation (1.2.5), this becomes
$$\sum_i f_i\, N_i^2\, \delta_{ij} = f_j\, N_j^2.$$

We then solve for the $f_j$'s;
$$f_j = \frac{1}{N_j^2} \int_{u_1}^{u_2} p(u)\, U_j(u)\, f(u)\, du = \frac{1}{N_j^2}\, (U_j, f).$$

In order to justify the switching of the order of summation and integration here, and to guarantee the existence of $(U_j, f)$, we usually require that $f(u)$ be absolutely integrable, i.e., that
$$\int_{u_1}^{u_2} |f(u)|\, du \ \text{exists.}$$

It is to be noted that the outline of procedure above requires the solution of an ordinary differential equation, perhaps not an easy task, but one hopes not as difficult as the problem of solving the partial differential equation. Very often the eigenfunctions which fit a given problem are known, and so this process can be bypassed.

Different sets of eigenfunctions have different sets of operational properties, or sets of relationships between members, which may be found useful.

We note in conclusion that sets of orthogonal functions are generated by means other than Sturm–Liouville systems: by sets of differential equations and boundary conditions which are not of Sturm–Liouville type, and by integral equations, to mention two.

1.3 Use of Orthogonal Functions

Let us consider a linear partial differential equation, outlining the elimination of one variable from the equation. Under rather general conditions, we may expand $F$ in an infinite series of orthogonal functions which we assume are known:
$$F(u, v, w, \cdots) = \sum_{i=0}^{\infty} f_i(v, w, \cdots)\, U_i(u)$$
where
$$f_i(v, w, \cdots) = \frac{1}{N_i^2} \int_{u_1}^{u_2} p(u)\, F(u, v, w, \cdots)\, U_i(u)\, du.$$


The formula for the coefficient $f_i$ follows immediately from multiplying the first equation by $p(u)\, U_i(u)$ and integrating. Let the original partial differential equation be represented abstractly as
$$\bigoplus F(u, v, w, \cdots) = S(u, v, w, \cdots)$$
where $\bigoplus$ is a linear differential operator and $\bigoplus F$ merely represents that part of the differential equation that involves $F$. Assume $F$ to be expanded in a series in $f_i U_i$; multiply the equation by $p(u)\, U_j(u)$ and integrate over $u$ from $u_1$ to $u_2$:
$$\bigoplus \sum_i f_i U_i = S$$
$$\int_{u_1}^{u_2} p(u)\, U_j(u)\, \bigoplus \sum_i f_i U_i\, du = \int_{u_1}^{u_2} S\, p(u)\, U_j(u)\, du = N_j^2\, S_j(v, w, \cdots)$$
where
$$S_j = \frac{1}{N_j^2}\int_{u_1}^{u_2} S(u, v, w, \cdots)\, p(u)\, U_j(u)\, du$$
so that
$$S(u, v, w, \cdots) = \sum_{j=0}^{\infty} S_j\, U_j.$$

Now, by using the operational properties of the $U_i$'s, one reduces the equation to a set of equations (an infinite set, one for each $i$) in the $f_i$'s and $S_i$'s. The particular steps taken depend upon the exact nature of the operator $\bigoplus$, and the set of equations may be coupled, i.e., $f$'s with several indices may appear in the same equation (for example, $f_{i-1}$, $f_i$, $f_{i+1}$). These equations do not involve derivatives with respect to the variable $u$, and we have gained in this respect. But we have to contend with the infinite set of equations.

All is not lost at this point, for, as it turns out, the series for $F$ (called a generalized Fourier series because of the manner in which the coefficients of the $U_i$'s are chosen in the expansion for $F$) is the most rapidly convergent series possible in the $U_i$'s. Thus solving for only the coefficients of the leading terms in the series may enable us to obtain a satisfactory approximation to $F$. Also, very often, we are interested only in one or two of the $f_i$'s on physical or other grounds.
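A minimal numerical sketch of the idea of a truncated generalized Fourier series, here using the Legendre polynomials of Chapter 2 as the orthogonal set on −1 ≤ x ≤ 1 (weight p = 1, norms 2/(2ℓ+1)). The test function and the number of retained terms are arbitrary illustrative choices, not anything prescribed by these notes.

import numpy as np
from numpy.polynomial import legendre as L

# Arbitrary test function to expand on -1 <= x <= 1
f = lambda x: np.exp(x) * np.cos(3 * x)

x = np.linspace(-1.0, 1.0, 2001)
N_terms = 8

# Generalized Fourier coefficients f_l = (2l+1)/2 * integral f(x) P_l(x) dx
coeffs = []
for l in range(N_terms):
    c = np.zeros(l + 1); c[l] = 1.0            # coefficient vector selecting P_l
    Pl = L.legval(x, c)
    coeffs.append((2 * l + 1) / 2.0 * np.trapz(f(x) * Pl, x))

# The leading terms already give a serviceable approximation to f
approx = L.legval(x, np.array(coeffs))
print("max |f - partial sum| with", N_terms, "terms:", np.max(np.abs(f(x) - approx)))

Increasing N_terms shows the rapid convergence referred to above.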

1.4 Operational Properties of Some Common Sets of Orthogonal Functions

There follow now a few pages on which are outlined, rather concisely, some basic operational properties of some commonly occurring sets of orthogonal functions. The familiar set of sines and cosines used in the construction of Fourier series is included as an example. The lists of properties contained in these notes are by no means complete, though they may suffice for the solution of many problems. The references listed with each section give detailed derivations, more extensive lists of properties, more discussion of the method and its limitations, or examples of the use of orthogonal functions. It is recommended that one unfamiliar with these functions read some of these references, in order to avoid the pitfalls of using mathematics beyond the realm of its applicability. These notes have been assembled mainly for reference. Of special interest may be the table in Margenau and Murphy, page 254, which lists twelve special cases of the Sturm–Liouville equation with the name of the orthogonal set which satisfies each one.

1.5 Fourier Series, Range −ℓ ≤ x ≤ ℓ

1.5.1 Boundary Value Problem Satisfied by Sines and Cosines

$$\frac{d^2 f}{dx^2} + k^2 f = 0, \qquad k \text{ real}$$
$$f(-\ell) = f(\ell)$$
$$f'(-\ell) = f'(\ell)$$

1.5.2 Orthogonal Set

$$\sin\left(\frac{n\pi x}{\ell}\right);\ \cos\left(\frac{n\pi x}{\ell}\right), \qquad -\ell \le x \le \ell,\ n = 0, 1, 2, \ldots$$

1.5.3 Expansion in Fourier Series

$$F(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[ a_n \cos\left(\frac{n\pi x}{\ell}\right) + b_n \sin\left(\frac{n\pi x}{\ell}\right)\right]$$
where
$$a_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right) dx \qquad n = 0, 1, 2, \ldots$$
$$b_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right) dx \qquad n = 1, 2, \ldots$$
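A minimal numerical check of these coefficient formulas, with the half-interval ℓ and the test function F chosen arbitrarily for illustration; the partial sum is compared against F on the full range.

import numpy as np

ell = 2.0                         # half-length of the expansion interval (illustrative)
F = lambda x: np.abs(x) - x**2    # arbitrary test function on -ell <= x <= ell

x = np.linspace(-ell, ell, 4001)

def a(n):   # cosine coefficients, n = 0, 1, 2, ...
    return np.trapz(F(x) * np.cos(n * np.pi * x / ell), x) / ell

def b(n):   # sine coefficients, n = 1, 2, ...
    return np.trapz(F(x) * np.sin(n * np.pi * x / ell), x) / ell

N = 20
series = a(0) / 2 + sum(a(n) * np.cos(n * np.pi * x / ell) +
                        b(n) * np.sin(n * np.pi * x / ell) for n in range(1, N + 1))
print("max |F - partial sum| with", N, "harmonics:", np.max(np.abs(F(x) - series)))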


1.5.4 Normalization Factors

$$N^2_{n,\cos} = N^2_{n,\sin} = \ell \qquad n = 1, 2, \ldots$$
$$N^2_{0,\cos} = 2\ell$$

1.5.5 Orthonormal Set

$$\frac{1}{\sqrt{2\ell}},\quad \frac{1}{\sqrt{\ell}}\sin\left(\frac{n\pi x}{\ell}\right),\quad \frac{1}{\sqrt{\ell}}\cos\left(\frac{n\pi x}{\ell}\right) \qquad -\ell \le x \le \ell,\ n = 1, 2, \ldots$$

1.5.6 Fourier Series, Range 0 ≤ x ≤ L

One may wish to expand a function defined on an interval $0 \le x \le L$ in a Fourier series. One may choose at his discretion either of two ways, expanding either in a series of sines or one of cosines. The sine-series expansion yields an odd function and the cosine series yields an even function when the series is considered as a continuation of the function outside the range $0 \le x \le L$. An "odd function" is one such that $F(x) = -F(-x)$ and an "even function" is one such that $F(x) = F(-x)$. Even functions are symmetric about the y axis.

Of course, either series represents the function on the interval $0 \le x \le L$. Note that on this range, either $\sin\left(\frac{n\pi x}{L}\right)$ or $\cos\left(\frac{n\pi x}{L}\right)$, $n = 0, 1, 2, \ldots$, forms a complete set; thus the two possible expansions.

1.5.7 Expansion in Half-range Series

$$F(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left(\frac{n\pi x}{L}\right)$$
or
$$F(x) = \sum_{n=1}^{\infty} b_n \sin\left(\frac{n\pi x}{L}\right)$$
where
$$a_n = \frac{2}{L}\int_0^L F(x)\cos\left(\frac{n\pi x}{L}\right) dx, \qquad n = 0, 1, 2, \ldots$$
$$b_n = \frac{2}{L}\int_0^L F(x)\sin\left(\frac{n\pi x}{L}\right) dx, \qquad n = 1, 2, \ldots$$


1.5.8 Full Range Series

Consider the coefficients in a full-range expansion for an even function, i.e., $F(x)$ such that $F(x) = F(-x)$.
$$a_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx
     = \frac{1}{\ell}\left[\int_{-\ell}^{0} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx\right]$$
Putting $-x$ in for $x$ in the first integral,
$$a_n = \frac{1}{\ell}\left[-\int_{\ell}^{0} F(-x)\cos\left(\frac{-n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx\right]$$
Use $F(-x) = F(x)$, $\cos\left(\frac{-n\pi x}{\ell}\right) = \cos\left(\frac{n\pi x}{\ell}\right)$, and switch the order of the limits of integration,
$$a_n = \frac{1}{\ell}\left[\int_{0}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx\right]
     = \frac{2}{\ell}\int_{0}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx \ne 0 \ \text{in general.}$$
$$b_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx
     = \frac{1}{\ell}\left[\int_{-\ell}^{0} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx\right]$$
Putting $-x$ in for $x$,
$$b_n = \frac{1}{\ell}\left[-\int_{\ell}^{0} F(-x)\sin\left(\frac{-n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx\right]$$
Using $F(x) = F(-x)$, $\sin\left(\frac{-n\pi x}{\ell}\right) = -\sin\left(\frac{n\pi x}{\ell}\right)$, and switching the order of the limits of integration,
$$b_n = \frac{1}{\ell}\left[-\int_{0}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx\right] = 0 \qquad \forall\, n \in \mathbb{N}$$

Thus, the coefficients of the sine terms in the full range expansion of an even function are all zero; the cosine coefficients do not, of course, vanish for all n. The situation is much the same with respect to coefficients in the full range expansion of odd functions, except that in that case it is the coefficients of the cosine terms which vanish, for n = 0, 1, 2, ....


Chapter 2

Legendre Polynomials

2.1 Generating Function

If
$$H(x, y) = \frac{1}{\sqrt{1 - 2xy + y^2}} = \sum_{\ell=0}^{\infty} P_\ell(x)\, y^\ell$$
then
$$P_\ell(x) = \frac{1}{\ell!}\,\frac{\partial^\ell}{\partial y^\ell} H(x, y)\bigg|_{y=0}.$$

2.2 Recurrence Relations

1. $(\ell + 1)\,P_{\ell+1}(x) - (2\ell + 1)\,x\,P_\ell(x) + \ell\,P_{\ell-1}(x) = 0$

2. $\ell\,P_{\ell-1}(x) - P'_\ell(x) + x\,P'_{\ell-1}(x) = 0$

2.3 Differential Equation Satisfied by P (x)

1. $(1 - x^2)\,P''_\ell(x) - 2x\,P'_\ell(x) + \ell(\ell + 1)\,P_\ell(x) = 0 \qquad \ell \in \mathbb{Z}$


Very often x = cos θ.

2. $\dfrac{d^2 P_\ell}{dx^2} + \left(\dfrac{\ell(\ell+1)(1 - x^2) + 1}{(1 - x^2)^2}\right) P_\ell = 0$

2.4 Rodrigues' Formula

$$P_\ell(x) = \frac{1}{2^\ell\, \ell!}\,\frac{d^\ell}{dx^\ell}\,(x^2 - 1)^\ell$$

2.5 Normalizing Factor

The norm of $P_\ell(x)$ is:
$$N_\ell^2 = \int_{-1}^{1} \big(P_\ell(x)\big)^2\, dx = \frac{2}{2\ell + 1}$$

2.6 Orthogonality

$$\int_{-1}^{1} P_\ell(x)\, P_{\ell'}(x)\, dx = \delta_{\ell\ell'}\, N_\ell^2$$
where $\delta_{\ell\ell'}$ is the Kronecker delta, defined as
$$\delta_{\ell\ell'} = \begin{cases} 0, & \ell \ne \ell' \\ 1, & \ell = \ell' \end{cases}$$

2.7 Expansion in Legendre Polynomials

Any function $f(x)$ which is defined over the range $-1 \le x \le 1$, and which is absolutely integrable over this range, may be expanded in an infinite series of Legendre polynomials;
$$f(x) = \sum_{\ell=0}^{\infty} f_\ell\, P_\ell(x)$$
where
$$f_\ell = \frac{2\ell + 1}{2}\int_{-1}^{1} f(x)\, P_\ell(x)\, dx = \frac{1}{N_\ell^2}\int_{-1}^{1} f(x)\, P_\ell(x)\, dx$$


2.8 Normalized Legendre Polynomials

$$\bar P_\ell(x) = \frac{P_\ell(x)}{N_\ell}$$

2.9 Expansion in Normalized Polynomials

$$g(x) = \sum_{\ell=0}^{\infty} g_\ell\, \bar P_\ell(x)$$
where
$$g_\ell = \int_{-1}^{1} g(x)\, \bar P_\ell(x)\, dx$$

2.9.1 A Few Low-Degree Legendre Polynomials and Respective Norms

$$P_0(x) = 1 \qquad N_0^2 = 2$$
$$P_1(x) = x \qquad N_1^2 = \tfrac{2}{3}$$
$$P_2(x) = \tfrac{1}{2}(3x^2 - 1) \qquad N_2^2 = \tfrac{2}{5}$$
$$P_3(x) = \tfrac{1}{2}(5x^3 - 3x) \qquad N_3^2 = \tfrac{2}{7}$$
$$P_4(x) = \tfrac{1}{8}(35x^4 - 30x^2 + 3) \qquad N_4^2 = \tfrac{2}{9}$$
$$P_5(x) = \tfrac{1}{8}(63x^5 - 70x^3 + 15x) \qquad N_5^2 = \tfrac{2}{11}$$
$$P_6(x) = \tfrac{1}{16}(231x^6 - 315x^4 + 105x^2 - 5) \qquad N_6^2 = \tfrac{2}{13}$$

2.9.2 Integral Representation of Pℓ(x)

$$P_\ell(x) = \frac{1}{\pi}\int_0^\pi \left[ x + \sqrt{x^2 - 1}\,\cos\varphi \right]^\ell d\varphi$$


2.9.3 Bounds on Pℓ(x)

For $-1 \le x \le 1$, $|P_\ell(x)| \le 1$, for all integers $\ell \ge 0$.

References on Legendre Polynomials

1. D. Jackson; "Fourier Series and Orthogonal Polynomials", Carus Math. Monographs, Number 6, 1941, pp. 45-68.

2. H. Margenau and G. M. Murphy; "The Mathematics of Physics and Chemistry", D. Van Nostrand, First Edition, 1943, pp. 94-109.

3. A. G. Webster; "Partial Differential Equations of Math. Phys.", G. E. Stechert and Company, 1927, pp. 302-320.

4. R. V. Churchill; "Fourier Series and Boundary Value Problems", McGraw Hill, 1941, pp. 175-201. (For discussion of the concept of orthogonality, see Chap. 3.)

5. E. Jahnke and F. Emde; "Tables of Functions", Dover Publications, 1945, pp. 107-125. (Lists some properties and tabulates functions.)


Chapter 3

Associated Legendre Functions

3.1 Definition

1. $P_\ell^m(x) = (1 - x^2)^{m/2}\,\dfrac{d^{|m|}}{dx^{|m|}}\,P_\ell(x) \qquad (0 \le m \le \ell)$

2. $P_\ell^m(x) = \dfrac{(1 - x^2)^{m/2}}{2^\ell\, \ell!}\,\dfrac{d^{\ell+m}}{dx^{\ell+m}}\,(x^2 - 1)^\ell$

3.2 Recurrence Relations

1. $P_\ell^{m+1}(x) - \dfrac{2mx}{\sqrt{1 - x^2}}\,P_\ell^m(x) + \big[\ell(\ell+1) - m(m-1)\big]\,P_\ell^{m-1}(x) = 0$

2. $x\,P_\ell^m(x) = \dfrac{(\ell+m)\,P_{\ell-1}^m(x) + (\ell-m+1)\,P_{\ell+1}^m(x)}{2\ell+1}$

3. $\sqrt{1 - x^2}\;P_\ell^m(x) = \dfrac{P_{\ell+1}^{m+1}(x) - P_{\ell-1}^{m+1}(x)}{2\ell+1}$

4. $P_\ell^{m+1}(x) = \dfrac{2m}{\sqrt{1 - x^2}\,(2\ell+1)}\Big[(\ell+m)\,P_{\ell-1}^m(x) + (\ell-m+1)\,P_{\ell+1}^m(x)\Big] - \big[\ell(\ell+1) - m(m-1)\big]\,P_\ell^{m-1}(x)$

3.3 Differential Equation Satisfied by Pmℓ (x)

$$(1 - x^2)\,\frac{d^2}{dx^2}P_\ell^m(x) - 2x\,\frac{d}{dx}P_\ell^m(x) + \left[\ell(\ell+1) - \frac{m^2}{1 - x^2}\right]P_\ell^m(x) = 0$$


Since $P_\ell^m(x)$ is defined on the interval $-1 \le x \le 1$, in physical applications $P_\ell^m(x)$ is often associated with an angle $\theta$ through the relation $x = \cos\theta$. Then the equation satisfied by $P_\ell^m(\cos\theta)$ may be found in the following form:
$$\frac{d^2}{d\theta^2}P_\ell^m(\cos\theta) + \cot\theta\,\frac{d}{d\theta}P_\ell^m(\cos\theta) + \left[\ell(\ell+1) - \frac{m^2}{\sin^2\theta}\right]P_\ell^m(\cos\theta) = 0$$
or
$$\left[\frac{1}{\sin\theta}\frac{d}{d\theta}\sin\theta\,\frac{d}{d\theta}\right]P_\ell^m(\cos\theta) + \left[\ell(\ell+1) - \frac{m^2}{\sin^2\theta}\right]P_\ell^m(\cos\theta) = 0$$

3.4 Expression of Pℓ(cos θ) in terms of Pmℓ (x)

1. Let $\theta_1, \varphi_1$ and $\theta_2, \varphi_2$ respectively denote the polar and azimuthal angles of two lines passing through the origin. Then $\Theta$, the angle between these two lines, is given by
$$\cos\Theta = \cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2\cos(\varphi_1 - \varphi_2).$$
With these definitions, $P_\ell(\cos\Theta)$ may be expressed as follows:
$$P_\ell(\cos\Theta) = P_\ell(\cos\theta_1)\,P_\ell(\cos\theta_2) + 2\sum_{m=1}^{\ell}\frac{(\ell-m)!}{(\ell+m)!}\,P_\ell^m(\cos\theta_1)\,P_\ell^m(\cos\theta_2)\cos\big(m(\varphi_1 - \varphi_2)\big)$$

2. If $P_\ell^m(x)$ is defined in a slightly different manner that allows negative values for $m$,
$$P_\ell^m(x) = (1 - x^2)^{|m|/2}\,\frac{d^{|m|}}{dx^{|m|}}\,P_\ell(x) \qquad (|m| \le \ell)$$
then the expansion may be written as follows:
$$P_\ell(\cos\Theta) = \sum_{m=-\ell}^{\ell}\frac{(\ell-|m|)!}{(\ell+|m|)!}\,P_\ell^m(\cos\theta_1)\,P_\ell^m(\cos\theta_2)\cos\big(m(\varphi_1 - \varphi_2)\big).$$

3.5 Normalizing Factor

$$(N_\ell^m)^2 = \int_{-1}^{1}\big[P_\ell^m(x)\big]^2\,dx = \frac{2}{2\ell+1}\,\frac{(\ell+m)!}{(\ell-m)!}$$
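A minimal numerical check of this normalizing factor. It uses scipy's associated Legendre function lpmv, whose convention differs from the definition above only by the Condon–Shortley phase (−1)^m, which drops out of the squared integrand.

import numpy as np
from numpy.polynomial import legendre as L
from scipy.special import lpmv
from math import factorial

x, w = L.leggauss(100)                     # Gauss-Legendre nodes/weights on [-1, 1]
for l in range(4):
    for m in range(l + 1):
        num = np.sum(w * lpmv(m, l, x)**2)
        exact = 2.0 / (2 * l + 1) * factorial(l + m) / factorial(l - m)
        print(l, m, round(num, 6), round(exact, 6))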

References

1. H. Margenau and G. M. Murphy; "The Mathematics of Physics and Chemistry", D. Van Nostrand, First Edition, 1943.

2. A. G. Webster; "Partial Differential Equations of Math. Phys.", G. E. Stechert and Company, 1927, pp. 302-320.

3. D. K. Holmes and R. V. Meghreblian, "Notes on Reactor Analysis, Part II, Theory", U.S.A.E.C. Document CF-4-7-88 (Part II), August 1955, pp. 164-165.

4. E. Jahnke and F. Emde; "Tables of Functions", Dover Publications, 1945, pp. 107-125. (Lists some properties and tabulates functions.)


Chapter 4

Spherical Harmonics

4.1 Definition of Y m(Ω)

The spherical harmonics are a complete, orthonormal set of complex functions of two variables, defined on the unit sphere. Below, the vector symbol $\Omega$ will be used to denote a pair of variables $\theta, \varphi$, here taken to be, respectively, the polar and azimuthal angles specifying a point on the unit sphere with reference to a coordinate system at its center. With these conventions, the functions $Y_\ell^m(\Omega)$ are defined as:
$$Y_\ell^m(\Omega) = \left[\frac{2\ell+1}{4\pi}\,\frac{(\ell-|m|)!}{(\ell+|m|)!}\right]^{\frac{1}{2}} P_\ell^m(\cos\theta)\, e^{im\varphi}.$$
Defining $\mu = \cos\theta$, this may also be written
$$Y_\ell^m(\Omega) = \left[\frac{2\ell+1}{4\pi}\,\frac{(\ell-|m|)!}{(\ell+|m|)!}\right]^{\frac{1}{2}}\frac{(1 - \mu^2)^{\frac{|m|}{2}}}{2^\ell\,\ell!}\,\frac{d^{\ell+|m|}}{d\mu^{\ell+|m|}}\,(\mu^2 - 1)^\ell\, e^{im\varphi}.$$
Note: $Y_\ell^m = (-1)^m\, Y_\ell^{-m\,*}$, where $*$ denotes the complex conjugate.

4.2 Expression of $P_\ell(\cos\Theta)$ in terms of the Spherical Harmonics

Define $\theta_1, \varphi_1$ (i.e., $\Omega_1$) and $\theta_2, \varphi_2$ (i.e., $\Omega_2$) as the polar and azimuthal angles specifying two points on the unit sphere, with respect to a coordinate system at its center. Denote by $\Theta$ the angle between the lines drawn from each point to the origin of the coordinates. Then:
$$P_\ell(\cos\Theta) = \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{m=\ell} Y_\ell^m(\Omega_1)\, Y_\ell^{m*}(\Omega_2)$$
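A minimal numerical check of this addition theorem at two arbitrarily chosen directions. It uses scipy's sph_harm, which takes its angular arguments in the order (azimuthal, polar) and includes the Condon–Shortley phase; that phase cancels between $Y$ and $Y^*$ in the sum, so the check is insensitive to it.

import numpy as np
from scipy.special import sph_harm, eval_legendre

theta1, phi1 = 0.7, 1.1      # polar, azimuthal of direction 1 (illustrative values)
theta2, phi2 = 2.1, 4.0      # polar, azimuthal of direction 2

# Angle Theta between the two directions
cosT = (np.cos(theta1) * np.cos(theta2) +
        np.sin(theta1) * np.sin(theta2) * np.cos(phi1 - phi2))

for l in range(5):
    # NOTE: scipy's sph_harm argument order is (m, l, azimuthal, polar)
    s = sum(sph_harm(m, l, phi1, theta1) * np.conj(sph_harm(m, l, phi2, theta2))
            for m in range(-l, l + 1))
    print(l, eval_legendre(l, cosT), (4 * np.pi / (2 * l + 1) * s).real)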


4.3 Orthonormality of Y m(Ω)

$$\int_\Omega Y_j^k(\Omega)\, Y_\ell^{m*}(\Omega)\, d\Omega = \delta_{j\ell}\,\delta_{km}$$
where the integral over the vector $\Omega$ indicates a double integration over the full ranges of $\theta$ and $\varphi$: $0 \le \theta \le \pi$, $0 \le \varphi \le 2\pi$.

4.4 Expansion in Spherical Harmonics

Any function, perhaps complex, of the variables $\theta$ and $\varphi$ (i.e., of $\Omega$), absolutely integrable over $\theta$ and $\varphi$, can be expanded in terms of the functions $Y_\ell^m$:
$$F(\Omega) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} F_\ell^m\, Y_\ell^m(\Omega)$$
where
$$F_\ell^m = \int_\Omega F(\Omega)\, Y_\ell^{m*}(\Omega)\, d\Omega$$

4.5 Differential Equation Satisfied by Y mℓ (Ω)

$$\frac{\partial}{\partial\theta}\left[\sin\theta\,\frac{\partial Y}{\partial\theta}\right] + \frac{1}{\sin\theta}\,\frac{\partial^2 Y}{\partial\varphi^2} + \ell(\ell+1)\sin\theta\, Y = 0$$
Assume $Y = \Phi(\varphi)\,\Theta(\theta)$ and let
$$\frac{\partial^2 Y}{\partial\varphi^2} = -m^2\, Y.$$
Imposing the condition that $Y$ is bounded at $\cos\theta = \pm 1$ makes $\ell$ an integer. Imposing that $Y$ is single-valued in $\varphi$ makes $m$ an integer.


4.6 Some Low-order Spherical Harmonics

$$Y_0^0(\Omega) = \frac{1}{\sqrt{4\pi}}$$
$$Y_1^{-1}(\Omega) = \sqrt{\frac{3}{8\pi}}\,\sin\theta\, e^{-i\varphi}$$
$$Y_1^0(\Omega) = \sqrt{\frac{3}{4\pi}}\,\cos\theta$$
$$Y_1^1(\Omega) = \sqrt{\frac{3}{8\pi}}\,\sin\theta\, e^{i\varphi}$$

4.7 A Useful Relationship

If the vector $\Omega$ is considered to represent a point on the unit sphere, its components can be represented as follows:
$$\Omega_x = \sin\theta\cos\varphi, \qquad \Omega_y = \sin\theta\sin\varphi, \qquad \Omega_z = \cos\theta$$
If a new set of components is constructed as follows:
$$\Omega_{-1} = \frac{1}{\sqrt{2}}(\Omega_x - i\Omega_y) = \frac{1}{\sqrt{2}}\sin\theta(\cos\varphi - i\sin\varphi)$$
$$\Omega_0 = \Omega_z = \cos\theta$$
$$\Omega_1 = \frac{1}{\sqrt{2}}(\Omega_x + i\Omega_y) = \frac{1}{\sqrt{2}}\sin\theta(\cos\varphi + i\sin\varphi)$$
then these are easily seen to be expressible in terms of the $\ell = 1$ spherical harmonics (see E),
$$\Omega_{-1} = \sqrt{\frac{4\pi}{3}}\; Y_1^{-1}, \qquad \Omega_0 = \sqrt{\frac{4\pi}{3}}\; Y_1^0, \qquad \Omega_1 = \sqrt{\frac{4\pi}{3}}\; Y_1^1$$

References

1. H. Margenau and G. M. Murphy; "The Mathematics of Physics and Chemistry", D. Van Nostrand, First Edition, 1943.

2. A. G. Webster; "Partial Differential Equations of Math. Phys.", G. E. Stechert and Company, 1927, pp. 302-320.

3. E. Jahnke and F. Emde; "Tables of Functions", Dover Publications, 1945, pp. 107-125. (Lists some properties and tabulates functions.)

4. D. K. Holmes and R. V. Meghreblian, "Notes on Reactor Analysis, Part II, Theory", U.S.A.E.C. Document CF-4-7-88 (Part II), August 1955, pp. 164-165.

5. L. I. Schiff, "Quantum Mechanics", 2nd Edition, McGraw Hill, 1955, p. 73.

6. Whittaker and Watson; "Modern Analysis", 4th Edition, Cambridge University Press (1927), pp. 391-396.


Chapter 5

Laguerre Polynomials

5.1 Derivative Definition

$$L_n^{(\alpha)}(x) = (-1)^n\, x^{-\alpha}\, e^x\, \frac{d^n}{dx^n}\big(x^{\alpha+n}\, e^{-x}\big)$$

5.2 Generating Function

$$H(x, t) = (1 - t)^{-(\alpha+1)}\, e^{-\frac{xt}{1-t}} = \sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\, L_n^{(\alpha)}(x)\, t^n$$
Thus
$$L_n^{(\alpha)}(x) = (-1)^n\left[\frac{d^n}{dt^n}\,(1 - t)^{-(\alpha+1)}\, e^{-\frac{xt}{1-t}}\right]_{t=0}$$

5.3 Differential Equation Satisfied by Lαn(x)

$$x\,\frac{d^2 y}{dx^2} + (\alpha - x + 1)\,\frac{dy}{dx} + n\, y = 0, \qquad n = 0, 1, 2, \ldots, \quad y = L_n^{(\alpha)}(x)$$

5.4 Orthogonality, Range 0 ≤ x ≤ ∞

$$\int_0^\infty x^\alpha\, e^{-x}\, L_m^{(\alpha)}(x)\, L_n^{(\alpha)}(x)\, dx = N_n^2\,\delta_{mn}$$
where
$$N_n^2 = n!\;\Gamma(n + \alpha + 1).$$

5.5 Expansion in Laguerre Polynomials

$$F(x) = \sum_{n=0}^{\infty} f_n\, L_n^{(\alpha)}(x)$$
where
$$f_n = \frac{1}{N_n^2}\int_0^\infty x^\alpha\, e^{-x}\, F(x)\, L_n^{(\alpha)}(x)\, dx$$

5.6 Expansion of $x^m$ in Laguerre Polynomials

$$x^m = \sum_{n=0}^{m}\frac{m!\;\Gamma(\alpha+m+1)}{n!\,(m-n)!\;\Gamma(n+\alpha+1)}\; L_n^{(\alpha)}(x)$$

5.7 Recurrence Relations

a. $(x - 2n - \alpha - 1)\, L_n^{(\alpha)}(x) = L_{n+1}^{(\alpha)}(x) + n(n+\alpha)\, L_{n-1}^{(\alpha)}(x)$

b. $L_{n+1}^{(\alpha)\prime}(x) = (n+1)\big[ L_n^{(\alpha)}(x) - L_n^{(\alpha)\prime}(x) \big]$


Chapter 6

Bessel Functions

6.1 Differential Equation Satisfied by Bessel Functions

1. $\dfrac{d^2y}{dx^2} + \dfrac{1}{x}\dfrac{dy}{dx} + \left(1 - \dfrac{\nu^2}{x^2}\right) y = 0$

or

2. $\dfrac{1}{x}\dfrac{d}{dx}\left(x\dfrac{dy}{dx}\right) + \left(1 - \dfrac{\nu^2}{x^2}\right) y = 0$

(An extensive listing of other equations satisfied by Bessel functions is given in Reference 2.)

6.2 General Solutions

Assuming that A and B are arbitrary constants, the general solutions to the above equations are:
$$y = A\,J_\nu(x) + B\,J_{-\nu}(x) \qquad (\nu \text{ non-integral})$$
$$y = A\,J_n(x) + B\,N_n(x) \qquad (n \text{ integral})$$
where $J_n(x)$ are the Bessel functions of the first kind of order $n$ and $N_n(x)$ are the Neumann¹ functions, the Bessel functions of the second kind. The Neumann functions are also frequently represented by the symbol $Y_n(x)$. For non-integral $\nu$,
$$N_\nu(x) = \frac{J_\nu(x)\cos(\nu\pi) - J_{-\nu}(x)}{\sin(\nu\pi)}$$

¹In reference 4, these are called Weber functions.


For $\nu = n$, $n \in \mathbb{Z}$, the above expression reduces to²
$$N_n(x) = \frac{2}{\pi}J_n(x)\log\frac{x}{2} - \frac{1}{\pi}\sum_{r=0}^{\infty}\frac{(-1)^r\left(\frac{x}{2}\right)^{n+2r}}{r!\,(n+r)!}\big[F(r) + F(n+r)\big] - \frac{1}{\pi}\sum_{r=0}^{n-1}\frac{(n-r-1)!}{r!}\left(\frac{x}{2}\right)^{2r-n}$$
where $F(r) = \sum_{s=1}^{r}\frac{1}{s}$. A third function which sometimes finds use is the Hankel function, or Bessel function of the third kind. There are two such functions, defined by
$$H_\nu^{(1)}(x) = J_\nu(x) + iN_\nu(x) \qquad (\nu \text{ unrestricted})$$
$$H_\nu^{(2)}(x) = J_\nu(x) - iN_\nu(x)$$
Then, for integral order, we see that a solution to Bessel's equation will be
$$y = A_1 H_n^{(1)}(x) + A_2 H_n^{(2)}(x)$$
where $A_1$ and $A_2$ are arbitrary constants that may be complex. These functions bear the same relation to the Bessel functions $J_\nu(x)$ and $N_\nu(x)$ as the functions $e^{\pm i\nu x}$ bear to $\cos\nu x$ and $\sin\nu x$. They satisfy the same differential equation and recursion relations as $J_\nu(x)$. Their importance results from the fact that they alone vanish for an infinite complex argument, viz. $H^{(1)}$ if the imaginary part of the argument is positive, $H^{(2)}$ if it is negative; i.e.,
$$\lim_{r\to\infty} H^{(1)}(re^{i\theta}) = 0, \qquad \lim_{r\to\infty} H^{(2)}(re^{-i\theta}) = 0, \qquad 0 \le \theta \le \pi.$$
From the above equations, we can also write
$$J_\nu(x) = \frac{1}{2}\big[ H_\nu^{(2)}(x) + H_\nu^{(1)}(x) \big] \qquad (\nu \text{ unrestricted})$$
$$N_\nu(x) = \frac{i}{2}\big[ H_\nu^{(2)}(x) - H_\nu^{(1)}(x) \big]$$

6.3 Series Representation

For Bessel functions of the first kind of order n, a series representation is given as follows:
$$J_n(x) = \sum_{r=0}^{\infty}\frac{(-1)^r\, x^{2r+n}}{2^{2r+n}\, r!\;\Gamma(1 + n + r)}$$

²This is shown in reference 8, pg. 577.


6.4 Properties of Bessel Functions

1. $J_n(x) = (-1)^n\, J_{-n}(x)$

2. $J_n(x) = (-1)^n\, J_n(-x)$

3. Bounds on $J_n(x)$ for integer $n \ge 0$ and real $x \ge 0$:

   a. $|J_n(x)| \le 1$

   b. $\left|\dfrac{d^k}{dx^k}J_n(x)\right| \le 1, \qquad k = 1, 2, \ldots$

4. Limits of $J_n(x)$ for integer $n \ge 0$:

   a. $\lim_{n\to\infty} J_n(x) = 0$

   b. $\lim_{x\to\infty} J_n(x) = 0$

   c. $J_n(x) \to \dfrac{x^n}{2^n\, n!}$ as $x \to 0$

5. Limits of $N_n(x)$ for integer $n \ge 0$ and real $x$:

   a. $\lim_{x\to\infty} N_n(x) = 0$

   b. $N_n(x) \to -\dfrac{(n-1)!}{\pi}\left(\dfrac{2}{x}\right)^n$ as $x \to 0$, for $n \ge 1$

   c. $N_0(x) \to -\dfrac{2}{\pi}\log\dfrac{2}{\gamma x}$ as $x \to 0 \qquad (\gamma = 1.781)$

6. Graphs of $J_n(x)$ and $N_n(x)$

6.5 Generating Function

$$\exp\left[\frac{x}{2}\left(t - \frac{1}{t}\right)\right] = \sum_{n=-\infty}^{\infty} J_n(x)\, t^n \qquad (n \text{ integral})$$

6.6 Recursion Formulae

a. $2J'_n(x) = J_{n-1}(x) - J_{n+1}(x)$

b. $\dfrac{2n}{x}J_n(x) = J_{n-1}(x) + J_{n+1}(x)$

c. $x\,J'_n(x) = x\,J_{n-1}(x) - n\,J_n(x) = n\,J_n(x) - x\,J_{n+1}(x)$


6.7 Differential Formulae

a. $\dfrac{d}{dx}\big[x^n J_n(x)\big] = x^n J_{n-1}(x)$

b. $\dfrac{d}{dx}\big[x^{-n} J_n(x)\big] = -x^{-n} J_{n+1}(x)$

6.8 Orthogonality, Range

For 0 ≤ x ≤ c,

1. $\displaystyle\int_0^c x\,J_n(\lambda_{nj}x)\,J_n(\lambda_{nk}x)\,dx = \delta_{jk}\,N_{nj}^2$

where the $\lambda_{nj}$ are the positive roots of the equation $J_n(\lambda c) = 0$ and $N_{nj}^2 = \dfrac{c^2}{2}\big[J_{n+1}(\lambda_{nj}c)\big]^2$.

2. $\displaystyle\int_0^c x\,J_n(\lambda_{n\ell}x)\,J_n(\lambda_{nm}x)\,dx = \delta_{\ell m}\,M_{n\ell}^2$

where the $\lambda_{n\ell}$ are the positive roots of the equation $(\lambda c)\,J'_n(\lambda c) = -h\,J_n(\lambda c)$, or its equivalent $(n+h)\,J_n(\lambda c) - \lambda c\,J_{n+1}(\lambda c) = 0$, where $h$ is some constant $> 0$ and $n = 0, 1, 2, \ldots$, and
$$M_{n\ell}^2 = \left[\frac{\lambda_{n\ell}^2 c^2 + h^2 - n^2}{2\lambda_{n\ell}^2}\right]\big[J_n(\lambda_{n\ell}c)\big]^2$$

6.9 Expansion in Bessel Functions

$$f(x) = \sum_{j} A_j\, J_n(\lambda_{nj}x)$$
where $A_j$ is given by either of the two following expressions:

a. $A_j = \dfrac{2}{c^2\big[J_{n+1}(\lambda_{nj}c)\big]^2}\displaystyle\int_0^c x\,f(x)\,J_n(\lambda_{nj}x)\,dx$ when $J_n(\lambda_{nj}c) = 0$

b. $A_j = \dfrac{2\lambda_{nj}^2}{(\lambda_{nj}^2 c^2 + h^2 - n^2)\big[J_n(\lambda_{nj}c)\big]^2}\displaystyle\int_0^c x\,f(x)\,J_n(\lambda_{nj}x)\,dx$ when $\lambda c\,J'_n(\lambda c) = -h\,J_n(\lambda c)$


6.10 Bessel Integral Form

$$J_n(x) = \frac{1}{\pi}\int_0^\pi \cos(n\theta - x\sin\theta)\, d\theta \qquad (n = 0, 1, 2, \ldots)$$


Chapter 7

Modified Bessel Functions

7.1 Differential Equation Satisfied

$$\frac{d^2y}{dx^2} + \frac{1}{x}\frac{dy}{dx} - \left(1 + \frac{\nu^2}{x^2}\right) y = 0$$

7.2 General Solutions

$$y = A\,I_\nu(x) + B\,I_{-\nu}(x) \qquad (\nu \text{ non-integral})$$
$$y = A\,I_n(x) + B\,K_n(x) \qquad (n \text{ integral})$$
where A and B are arbitrary constants,

$I_n(x)$ = Modified Bessel function of the first kind of order $n$,

$K_n(x)$ = Modified Bessel function of the second kind of order $n$.

For non-integral $\nu$,
$$K_\nu(x) = \frac{\pi}{2}\,\frac{I_{-\nu}(x) - I_\nu(x)}{\sin(\nu\pi)} = \frac{\pi}{2}\, i^{\nu+1} H_\nu^{(1)}(ix) = \frac{\pi}{2}\, i^{-\nu-1} H_\nu^{(2)}(-ix).$$

For integral $n$,
$$K_n(x) = \frac{2}{\pi}(-1)^{n+1} I_n(x)\log\frac{x}{2} + \frac{1}{\pi}\sum_{r=0}^{n-1}(-1)^r\,\frac{(n-r-1)!}{r!}\left(\frac{x}{2}\right)^{-n+2r} + (-1)^n\,\frac{1}{\pi}\sum_{r=0}^{\infty}\frac{\left(\frac{x}{2}\right)^{n+2r}}{r!\,(n+r)!}\big[F(r) + F(n+r)\big]$$
where
$$F(r) = \sum_{s=1}^{r}\frac{1}{s}.$$

7.3 Relation of Modified Bessel to Bessel

For $\nu$ unrestricted,
$$I_\nu(x) = i^{-\nu} J_\nu(ix) = \frac{i^{-\nu}}{2}\big[H_\nu^{(1)}(ix) + H_\nu^{(2)}(ix)\big]
= \frac{x^\nu}{\Gamma(\nu+1)\,2^\nu}\sum_{r=0}^{\infty}\frac{1}{r!\,(\nu+1)_r}\left(\frac{x^2}{4}\right)^r
= \sum_{r=0}^{\infty}\frac{1}{\Gamma(r+\nu+1)\,r!}\left(\frac{x}{2}\right)^{2r+\nu}$$
where $(\alpha)_r = \alpha(\alpha+1)\cdots(\alpha+r-1)$ is the rising factorial.

7.4 Properties of In

1. $I_{-n}(x) = I_n(x)$

2. $2I'_n(x) = I_{n-1}(x) + I_{n+1}(x)$

3. $\dfrac{2n}{x}I_n(x) = I_{n-1}(x) - I_{n+1}(x)$

4. $x\,I'_n(x) = x\,I_{n-1}(x) - n\,I_n(x)$

5. $x\,I'_n(x) = n\,I_n(x) + x\,I_{n+1}(x)$

6. $I'_0(x) = I_1(x)$

A few comments on the notation of Jahnke and Emde:

1. A general cylindrical function $Z_p(x)$ is defined on page 144 by $Z_p(x) = c_1 J_p(x) + c_2 N_p(x)$ for integral $p$, where $c_1$, $c_2$ are arbitrary (real or complex) constants. Thus $Z_p(x)$ can apply to $J_p(x)$ by letting $c_2 = 0$, to $N_p(x)$ for $c_1 = 0$, and to $H_p(x)$ by other choices of the constants. All formulae on the pages following use this definition of $Z_p(x)$.

2. The function $I_p(x)$ is not listed as such, but is found as $i^p J_p(ix)$ on pages 224-229.

3. The function $K_p(x)$ is not listed, but $\frac{2}{\pi}K_n(x) = i^{n+1}H_n^{(1)}(ix) = -i^{n+1}H_n^{(2)}(-ix)$. The functions $iH_0^{(1)}(ix) = -iH_0^{(2)}(-ix)$ and $-H_1^{(2)}(-ix)$ are tabulated on pages 236-243.


4. This reference is full of extremely interesting, beautiful, and helpful pictures of many functions, almost suitable for hanging in the living room.

References

1. G. N. Watson; "A Treatise on the Theory of Bessel Functions", 2nd Edition, Cambridge University Press, 1944. (Exhaustive treatment.)

2. E. Jahnke and F. Emde; "Tables of Functions", Dover Publications, 1945, pp. 107-125. (Lists some properties and tabulates functions.)

3. R. V. Churchill; "Fourier Series and Boundary Value Problems", McGraw Hill, 1941, pp. 175-201. (For discussion of the concept of orthogonality, see Chap. 3.)

4. I. N. Sneddon; "Special Functions of Mathematical Physics and Chemistry", University Math. Texts, 1956. (Thorough discussion for practical use. However, his use of subscripts is not always clear or general.)

5. D. Jackson; "Fourier Series and Orthogonal Polynomials", Carus Math. Monographs, Number 6, 1941, pp. 45-68.

6. Whittaker and Watson; "Modern Analysis", 4th Edition, Cambridge University Press, 1958. (More rigorous development.)

7. N. W. McLachlan; "Bessel Functions for Engineers", 2nd Edition, Oxford, Clarendon Press, 1955.

8. H. and B. S. Jeffreys; "Mathematical Physics", 3rd Edition, Cambridge University Press, 1955. (Compact, but rigorous presentation. They use a different notation, but it is clearly defined.)


Chapter 8

The Laplace Transformation

8.1 Introduction

8.1.1 Description

The Laplace transformation permits many relatively complicated operations upon a function, such as differentiation and integration for instance, to be replaced by simpler algebraic operations, such as multiplication or division, upon the transform. It is analogous to the way in which such operations as multiplication and division are replaced by the simpler processes of addition and subtraction when we work not with numbers themselves but with their logarithms.

8.1.2 Definition

The Laplace transformation applied to a function $f(t)$ associates a function of a new variable $s$ with $f(t)$. This function of $s$ is denoted by $\mathcal{L}[f(t)]$ or, where no confusion will result, simply by $\mathcal{L}(f)$ or $F(s)$; and the transform is defined by:
$$\mathcal{L}(f) \equiv \int_0^\infty f(t)\, e^{-st}\, dt$$
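A minimal symbolic sketch of this definition using sympy; the test function is an arbitrary illustrative choice.

import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.sin(3 * t) + sp.exp(-2 * t)

F = sp.laplace_transform(f, t, s, noconds=True)
print(F)            # expect 3/(s**2 + 9) + 1/(s + 2)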

8.1.3 Existence Conditions

For a Laplace transformation of $f(t)$ to exist and for $f(t)$ to be recoverable from its transform, it is sufficient that $f(t)$ be of exponential order, i.e., that there should exist a constant $a$ such that the product $e^{-at}|f(t)|$ is bounded for all values of $t$ greater than some finite number $T$; and that $f(t)$ should be at least piecewise continuous over every finite interval $0 \le t \le T$, for any finite number $T$. More formally,
$$\exists\, M, a, T : \quad e^{-at}|f(t)| \le M \quad \forall\, t > T$$
These conditions are usually met by functions occurring in physical problems. The number $a$ is called the exponential order of $f(t)$. If a number $a$ exists such that $e^{-at}|f(t)|$ is bounded, then $f$ is said to be of exponential order.

8.1.4 Analyticity

If $f(t)$ is piecewise continuous and of exponential order $a$, the transform of $f(t)$, i.e., $F(s)$, is an analytic function of $s$ for $\mathrm{Re}(s) > a$. Also, it is true that for $\mathrm{Re}(s) > a$, $\lim_{x\to\infty} F(s) = 0$ and $\lim_{y\to\infty} F(s) = 0$, where $s = x + iy$.

8.1.5 Theorems

Theorem I (Linearity)

The Laplace transform of a sum of functions is the sum of the transforms of the individual functions.
$$\mathcal{L}(f + g) = \mathcal{L}(f) + \mathcal{L}(g)$$

Theorem II (Linearity)

The Laplace transform of a constant times a function is the constant times the transform of the function.

L(cf) = cL(f)

Theorem III (Basic Operational Property)

If $f(t)$ is a continuous function of exponential order whose derivative is at least piecewise continuous over every finite interval $0 \le t \le t_2$, and $\lim_{t\to 0^+} f(t) = f(0^+)$, then the Laplace transform of the derivative of $f(t)$ is given by
$$\mathcal{L}(f') = s\,\mathcal{L}(f) - f(0^+)$$
and
$$\mathcal{L}(f'') = s^2\,\mathcal{L}(f) - s\,f(0^+) - f'(0^+).$$
The latter, of course, requires an extension of the continuity of $f(t)$ and its derivatives to include $f''(t)$, and may be formally shown by partial integration. More generally, if $f(t)$ and its first $n-1$ derivatives are continuous and $\frac{d^n f}{dt^n}$ is piecewise continuous, then
$$\mathcal{L}\left[\frac{d^n f}{dt^n}\right] = s^n\,\mathcal{L}(f) - s^{n-1}f(0^+) - s^{n-2}f'(0^+) - \ldots - f^{(n-1)}(0^+)$$


Theorem IV (Transforms of Integrals)

If $f(t)$ is of exponential order and at least piecewise continuous, then the transform of $\int_a^t f(t)\,dt$ is given by
$$\mathcal{L}\left[\int_a^t f(t)\,dt\right] = \frac{1}{s}\,\mathcal{L}(f) + \frac{1}{s}\int_a^0 f(t)\,dt$$

8.1.6 Further Properties

Below, let us assume all functions of the variable $t$ are piecewise continuous, $0 \le t \le T$, and of exponential order as $t$ approaches infinity. Then the following theorems hold:

Theorem V

$$\mathcal{L}\big[e^{at}f(t)\big] = F(s - a)$$

Theorem VI

If
$$f_b(t) = \begin{cases} f(t - b), & t \ge b \\ 0, & t < b \end{cases}$$
then
$$\mathcal{L}\big[f_b(t)\big] = e^{-bs}\, F(s)$$

Theorem VII (Convolution)

$$\mathcal{L}\left[\int_0^t f(t - \tau)\, g(\tau)\, d\tau\right] = F(s)\, G(s) = \mathcal{L}\left[\int_0^t g(t - \tau)\, f(\tau)\, d\tau\right]$$


Theorem VIII (Derivatives of Transforms)

$$\mathcal{L}\big[t\,f(t)\big] = -\frac{dF(s)}{ds}$$
More generally,
$$\mathcal{L}\big[t^n f(t)\big] = (-1)^n\,\frac{d^n F}{ds^n}$$

8.2 Examples

8.2.1 Solving Simultaneous Equations

The problem is to solve for y from the simultaneous equations:
$$y' + y + 3\int_0^t z\, dt = \cos t + 3\sin t$$
$$2y' + 3z' + 6z = 0 \qquad y_0 = -3,\ z_0 = 2$$
Taking the Laplace transform of each equation, we have:
$$\mathcal{L}(y') + \mathcal{L}(y) + 3\,\mathcal{L}\left[\int_0^t z\, dt\right] = \mathcal{L}(\cos t) + 3\,\mathcal{L}(\sin t)$$
$$2\,\mathcal{L}(y') + 3\,\mathcal{L}(z') + 6\,\mathcal{L}(z) = 0$$
This system of equations can be reduced as follows by using the theorems and definitions from the previous sections:

$$\big[s\,\mathcal{L}(y) + 3\big] + \mathcal{L}(y) + \frac{3}{s}\,\mathcal{L}(z) = \frac{s}{s^2+1} + \frac{3}{s^2+1}$$
$$2\big[s\,\mathcal{L}(y) + 3\big] + 3\big[s\,\mathcal{L}(z) - 2\big] + 6\,\mathcal{L}(z) = 0$$
Collecting the terms and transposing, we have
$$(s+1)\,\mathcal{L}(y) + \frac{3}{s}\,\mathcal{L}(z) = \frac{s+3}{s^2+1} - 3$$
$$2s\,\mathcal{L}(y) + 3(s+2)\,\mathcal{L}(z) = 0$$
The two original integro-differential equations have thus been reduced to two linear algebraic equations in $\mathcal{L}(y)$ and $\mathcal{L}(z)$. Applying Cramer's rule and solving for $\mathcal{L}(y)$, since it is y which we want:

$$\mathcal{L}(y) = \frac{\begin{vmatrix} \frac{s+3}{s^2+1} - 3 & \frac{3}{s} \\ 0 & 3(s+2) \end{vmatrix}}{\begin{vmatrix} s+1 & \frac{3}{s} \\ 2s & 3(s+2) \end{vmatrix}} = \frac{3(s+2)\left(\frac{s+3}{s^2+1} - 3\right)}{3s(s+3)}$$
where $3s(s+3)$ is the determinant of the matrix in the denominator and $3(s+2)\left(\frac{s+3}{s^2+1} - 3\right)$ is the determinant of the matrix in the numerator. $\mathcal{L}(y)$ can also be written as
$$\mathcal{L}(y) = \frac{s+2}{s(s^2+1)} - 3\left[\frac{s+2}{s(s+3)}\right].$$

Now applying the method of partial fractions,
$$\mathcal{L}(y) = \left(\frac{2}{s} + \frac{1 - 2s}{s^2+1}\right) - \left(\frac{1}{s+3} + \frac{2}{s}\right) = -2\left(\frac{s}{s^2+1}\right) + \frac{1}{s^2+1} - \frac{1}{s+3}$$
And, finding the inverse in the table of transforms, which are tables relating functions of $s$ to the corresponding functions of $t$, and will be found in section IV of this paper,
$$y = -2\cos t + \sin t - e^{-3t} \qquad (t > 0)$$
It should be noted that one of the inherent characteristics of solving differential equations by the use of Laplace transforms is that the initial conditions are included in the solution.
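A minimal symbolic check of this result: inverting the transform-domain expression for L(y) obtained above with sympy should reproduce y(t). The returned expression typically carries a Heaviside(t) factor, reflecting that the inverse is defined for t > 0.

import sympy as sp

t, s = sp.symbols('t s', positive=True)

Y = (s + 2) / (s * (s**2 + 1)) - 3 * (s + 2) / (s * (s + 3))
y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))   # expect -2*cos(t) + sin(t) - exp(-3*t), times Heaviside(t)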

8.2.2 Electric Circuit Example

Since Laplace transforms are widely used in the determination of the transient response of electric circuits, suppose a resistor R, inductor L, DC voltage source E, and an open switch are all connected in series. At time t = 0, the switch closes. Find the equation of current flow after the switch is closed.

1. Kirchhoff's Voltage Law
$$E = iR + L\frac{di}{dt}$$

2. Laplace Transformation
$$\frac{E}{s} = I(s)\,R + L\,s\,I(s) - L\,i(0^+)$$
$$t = 0 \ \Rightarrow\ i = 0 \ \Rightarrow\ i(0^+) = 0$$

3. Solving for I(s)
$$I(s) = \frac{E}{s(R + Ls)}$$

4. Taking the Inverse
$$i(t) = \frac{E}{R}\left(1 - e^{-\frac{Rt}{L}}\right)$$
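A minimal numerical cross-check of this result: integrating the circuit equation directly and comparing against the transform-domain answer. The component values are arbitrary illustrative choices.

import numpy as np
from scipy.integrate import solve_ivp

E, R, L = 12.0, 4.0, 0.5          # volts, ohms, henries (illustrative values)

# Integrate E = i*R + L*di/dt with i(0) = 0
sol = solve_ivp(lambda t, i: (E - R * i) / L, (0.0, 1.0), [0.0], dense_output=True)

t = np.linspace(0.0, 1.0, 6)
i_numeric = sol.sol(t)[0]
i_laplace = E / R * (1.0 - np.exp(-R * t / L))
print(np.max(np.abs(i_numeric - i_laplace)))   # should be small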


8.2.3 Transfer Functions

For certain control functions, and for representing the dynamic behavior of various devices such as reactors, heat exchangers, etc., it is advantageous to use a "transfer function" because of the convenience in manipulation it provides. The transfer functions of many elements of a system, when strung together in a block diagram, represent a convenient way of writing complicated system equations. The transfer function of a system may be defined as the ratio of the output to the input of the system in transform (s) space. The conditions for using transfer functions are:

1. Initial condition operator =0.

2. No loading between transfer functions.

3. Transfer function satisfies existence conditions for Laplace transformation.

4. Linear system.

Example of Transfer Function

Consider an input voltage source $e_i$ in series with a resistor $R$ and an inductor with inductance $L$. Suppose the potential drop across the inductor is $e_0$. The objective is to find the transfer function $\frac{E_0}{E_i}(s)$.

1. Equations
$$e_i = Ri + L\frac{di}{dt}, \qquad e_0 = L\frac{di}{dt}$$

2. Transforms
$$E_i(s) = R\,I(s) + sL\,I(s), \qquad E_0(s) = sL\,I(s)$$

3. Solving for the Transfer Function
$$\frac{E_0}{E_i}(s) = \frac{sL\,I(s)}{(R + sL)\,I(s)} = \frac{sL}{R + sL} = \frac{s\tau}{1 + s\tau}$$
(where $\tau = \frac{L}{R}$ is the circuit's time constant.)


8.3 Inverse Transformations

8.3.1 Heaviside Methods

When solving equations by the Laplace transform technique, it is frequently the most difficult part of the procedure to invert the transformed solution F(s) into the desired function f(t). A simple way of making this inversion, but unfortunately a method only applicable to special cases, is to reduce the answer to a number of simple expressions by using partial fractions, and then applying the Heaviside theorems as outlined below:

Theorem I

If $y(t) = \mathcal{L}^{-1}\left[\frac{p(s)}{q(s)}\right]$, where $p(s)$ and $q(s)$ are polynomials, and the order of $q(s)$ is greater than the order of $p(s)$, then the term in $y(t)$ corresponding to an unrepeated linear factor $(s-a)$ of $q(s)$ is $\frac{p(a)}{q'(a)}\,e^{at}$ or $\frac{p(a)}{Q(a)}\,e^{at}$, where $Q(s)$ is the product of all factors of $q(s)$ except $(s-a)$.

Example

If $\mathcal{L}(f(t)) = \frac{s^2+2}{s(s+1)(s+2)}$, then what is $f(t)$?

a. Roots of the denominator are $s = 0$, $s = -1$, $s = -2$.

b. $p(s) = s^2 + 2$

c. $q(s) = s^3 + 3s^2 + 2s$; $q'(s) = 3s^2 + 6s + 2$; $p(0) = 2$, $p(-1) = 3$, $p(-2) = 6$

d. $q'(0) = 2$, $q'(-1) = -1$, $q'(-2) = 2$

e. $f(t) = \dfrac{2}{2}e^{0} + \dfrac{3}{-1}e^{-t} + \dfrac{6}{2}e^{-2t} = 1 - 3e^{-t} + 3e^{-2t}$

Theorem II

If $y(t) = \mathcal{L}^{-1}\left[\frac{p(s)}{q(s)}\right]$, where $p(s)$ and $q(s)$ are polynomials and the order of $q(s)$ is greater than the order of $p(s)$, then the terms in $y(t)$ corresponding to the repeated linear factor $(s-a)^r$ of $q(s)$ are:
$$e^{at}\left[\frac{\phi^{(r-1)}(a)}{(r-1)!} + \frac{\phi^{(r-2)}(a)}{(r-2)!}\,\frac{t}{1!} + \ldots + \frac{\phi'(a)\,t^{r-2}}{1!\,(r-2)!} + \phi(a)\,\frac{t^{r-1}}{(r-1)!}\right],$$
where $\phi(s)$ is the quotient of $p(s)$ and all the factors of $q(s)$ except $(s-a)^r$.


Example

If $\mathcal{L}(f(t)) = \frac{s+3}{(s+2)^2(s+1)}$, then what is $f(t)$?

a. $\phi(s) = \dfrac{s+3}{s+1}$; $\quad\phi'(s) = \dfrac{(s+1) - (s+3)}{(s+1)^2} = \dfrac{-2}{(s+1)^2}$

b. $\phi(-2) = -1$; $\quad\phi'(-2) = -2$

The terms in $f(t)$ corresponding to $(s+2)^2$ are:
$$e^{-2t}\left[\frac{-2}{1!} + \frac{-1\cdot t}{0!\,1!}\right] = -e^{-2t}(2 + t).$$
Then, as in the example from Theorem I;
$$p(s) = s + 3; \qquad q(s) = s^3 + 5s^2 + 8s + 4$$
$$q'(s) = 3s^2 + 10s + 8$$
$$p(-1) = 2, \qquad q'(-1) = 1.$$
Thus,
$$f(t) = -e^{-2t}(2 + t) + 2e^{-t}$$

Theorem III

If y(t) = L−1[

p(s)q(s)

]

, where p(s) and q(s) are polynomials and the order of q(s) is greater

than the order of p(s), then the terms in y(t) corresponding to an unrepeated quadraticfactor

[

(s+ a)2 + b2]

of q(s) are e−at

b(φi cos bt + φr sin bt) where φr and φi are respectively,

the real and imaginary parts of φ(−a+ ib), and φ(s) is the quotient of p(s) and all the factorsof q(s) except

[

(s+ a)2 + b2]

.

Example

If L(f(t)) = s(s+2)2(s2+2s+2)

, then what is f(t)?

a. Considering the linear factor as in the example of Theorem II

φ(s) =s

(s2 + 2s+ 2); φ′(s) =

−s2 + 2

(s2 + 2s+ 2)2

φ(−2) = −1; φ′(−2) = −1

2

b. Considering the quadratic factor;

s2 + 2s+ 2 = (s+ 1)2 + 12

φ(s) =s

(s+ 2)2.

40

Page 47: Notes on Mathematics For Engineers

Therefore,

φ(−a + ib) = φ(−1 + i) =−1 + i

[

(−1 + i) + 2]2

=−1 + i

(1 + i)2=

−1 + i

2i=

1

2+i

2.

So φr = φi = 12

and the terms in f(t) corresponding to (s2 + 2s + 2) are e−t(cos t+sin t)2

. Byadding the two partial inverses, we get the following answer:

f(t) =−(1 + 2t)e−2t

2+e−t(cos t+ sin t)

2

8.3.2 The Inversion Integral

When the function cannot be reduced to a form amenable to inversion by tables oftransforms or Heaviside methods, there remains a most powerful method for the evaluationof inverse transformations. The inversion is given by an integral in the complex s-plane,

f(t) =1

2πi

∫ γ+i∞

γ−i∞estF (s)ds

where γ is some real number chosen such that F(s) is analytic (see Appendix A) for Re(s)≥ γ,and the Cauchy principle value of the integral is to be taken, i.e.

f(t) =1

2πilim

B→∞

∫ γ+iB

γ−iB

eStF (s)ds

Let us illustrate the formal origin of the inversion integral in the following way. In thecomplex plane, let φ(x) be a function of z, analytic on the line x=γ, and in the entire halfplane R to the right of this line. Moreover, let |φ(z)| approach zero uniformly as z becomesinfinite through this half plane. Then s0 is any point in the half plane R, we can choose asemi-circular contour c, composed of c1 and c2, as shown below, and apply Cauchy’s integralformula, (see Appendix B)

Φ(s0) =1

2πi

c

φ(z)

z − s0dz

Here, φ(z) is analytic within and on the boundary c of a simply connected region R and s0

is any point in the interior of R integration around c in a positive sense).Thus, Cauchy’s integral formula yields

φ(s) =1

2πi

c

φ(z)

z − sdz =

1

2πi

c2

φ(z)

z − sdz +

1

2πi

∫ γ−ib

γ+ib

φ(z)

z − sdz

41

Page 48: Notes on Mathematics For Engineers

Now, for values of z on the path of integration, c2, and for b sufficiently large,

|z − a| ≥ b− |a− γ| ≥ b− |a|

⇒∣

c2

φ(z)

z − sdz

≤∫

c2

|φ(z)||z − s| |dz|

≤ M

b− |s|

c2

|dz|

= πbM

b− |s|where M is the maximum value of |φ(z)| on c2. As b → ∞, the fraction b

b−|s| → 1, and Mapproaches zero. Hence,

limb→∞

c2

φ(z)

z − sdz = 0

and the contour integral reduces to

φ(s) = limb→∞

1

2πi

∫ γ−ib

γ+ib

φ(z)

z − sdz =

1

2πi

∫ γ+i∞

γ−i∞

φ(z)

z − sdz

Now taking the inverse transform of the above equation,

f(t) = L−1φ(s) = L−1

1

2πi

∫ γ+i∞

γ−i∞

φ(z)

z − sdz

=1

2πi

∫ γ+i∞

γ−i∞L−1

φ(z)

z − s

dz

=1

2πi

∫ γ+i∞

γ−i∞φ(z)L−1

1

z − s

dz

=1

2πi

∫ γ+i∞

γ−i∞φ(z)etzdz

Since from our table of transforms

L−1

1

s− z

= ezt

Our final equation after switching from z to s as dummy variable in the last integral is justthe inversion integral which we were establishing.

L−1

φ(z)

= f(t) =1

2πi

∫ γ+i∞

γ−i∞φ(z)estds

At this point, it would be advantageous to know how to evaluate the integral on the right.According to residue theorem, the integral of estφ(s) around a path enclosing the isolatedsingular points s)1, s2, · · · , sn, of estφ(s) has the value

2πi[

φ1(t) + ρ2(t) + . . .+ ρn(t)]

42

Page 49: Notes on Mathematics For Engineers

where ρn(t) is the residue of estφ(s) at s=sn. For discussion of residues, see Appendix C; forsingular points, Appendix D.

Let the path of integration be made up of the line segments γ− ib, γ + ib, and c3. Then,

1

2πi

∫ γ+ib

γ−ib

φ(s)estds+1

2πi

c3

φ(s)estds =

N∑

n=1

ρn(t)

If the second integral around c3 vanishes for b → ∞, as often happens, we are led to theimmediate result that

L−1

φ(s)

= f(t) =

N∑

n=1

ρn(t)

Note that in the formal derivation of the inversion formula, we assumed that φ(s) (andtherefore estφ(s)) is analytic for s ≥ γ, and that lims→∞ |φ(s)|=0 in that plane. In ourdiscussion of the residue form of the inversion, we work in the left half-plane. This isbecause Laplace transforms have the property that they are analytic in a right half-plane,and that in that plane, lims→∞ |φ(s)|=0.

Questions of the validity of the above procedures, alterations of contour, and applicationsto problems are not dealt with here, as they are presented in detail in the references.

43

Page 50: Notes on Mathematics For Engineers

8.4 Tables of Transforms

F(s) f(t)

1 Unit impulse at t=0, δ(t)

s Unit doublet impulse at t=0, δ2(t)

1s

Unit step at t=0, u(t)

1s+a

e−at

1(s+a)(s+b)

e−at−e−bt

b−a

s+c(s+a)(s+b)

(c−a)e−at−(c−b)e−bt

b−a

s+cs(s+a)(s+b)

cab

+ c−aa(a−b)

e−at + c−bb(b−a)

e−bt

1(s+a)(s+b)(s+c)

e−at

(c−a)(b−a)+ e−bt

(a−b)(c−b)+ e−ct

(a−c)(b−c)

s+d(s+a)(s+b)(s+c)

(d−a)e−at

(c−a)(b−a)+ (d−b)e−bt

(a−b)(c−b)+ (d−c)e−ct

(a−c)(b−c)

s2+es+d(s+a)(s+b)(s+c)

(a2−ea+d)e−at

(c−a)(b−a)+ (b2−eb+d)e−bt

(a−b)(c−b)+ (c2−ec+d)e−ct

(a−c)(b−c)

1a2+b2

sin btb

s+ds2+b2

√d2+b2

bsin (bt+ ϕ) ϕ = tan−1

(

bd

)

ss2+b2

cos (bt)

44

Page 51: Notes on Mathematics For Engineers

F(s) f(t)

1(s+a)2+b2

e−at

bsin (bt)

s+a(s+a)2+b2

e−at cos (bt)

s+d(s+a)2+b2

e−at√

(d−a)2+b2

bsin (bt+ ϕ) ϕ = tan−1

(

bd−a

)

1

s[(s+a)2+b2]1b20

+ e−at

b0bsin (bt− ϕ) ϕ = tan−1

(

b−a

)

b20 = a2 + b2

s+d

s[(s+a)2+b2]db20

+e−at

√(d−a)2+b2

b0bsin (bt+ ϕ) ϕ = tan−1

(

bd−a

)

− tan−1(

b−a

)

b20 = a2 + b2

1s2−b2

sinh (bt)b

ss2−b2

cosh (bt)

1sn

tn−1

(n−1)!∀n ∈ N

1sν

tν−1

Γ(ν)∀ν > 0

1s2(s+a)

e−at+ata2 − 1

s+ds2(s+a)

d−aa2 e

−at + dat+ a−d

a2

45

Page 52: Notes on Mathematics For Engineers

F(s) f(t)

s+d(s+a)2

[

(d− a)t+ 1]

e−at

1s(s+a)2

1−(at+1)e−at

a2

s+ds(s+a)2

da2 +

[

a−dat− d

a2

]

e−at

s2+bs+ds(s+a)2

da2 +

[

ba−a2−da

]

t+ a2−da2

e−at

1s2(s2+b2)

1b2t− 1

b3sin (bt)

1s3(s2+b2)

1b4

(cos (bt) − 1) + 12b2t2

1s2(s2−b2)

1b3

sinh (bt) − 1b2t

1s3(s2−b2)

1b4

(cosh (bt) − 1) − 12b2t2

1(s2+b2)2

12b3

(

sin (bt) − bt cos (bt))

s(s2+b2)2

12bt sin (bt)

s2

(s2+b2)212b

(

sin (bt) + bt cos (bt))

s2+b2

(s2+b2)2t cos (bt)

1

[(s+a)2+b2]2

e−at

2b3

(

sin (bt) − bt cos (bt))

s+a

[(s+a)2+b2]2

te−at

2bsin (bt)

(s+a)2−b2

[(s+a)2+b2]2 te−at cos (bt)

e−t1s

s2 (t− t1)u(t− t1)

(t1s+1)e−t1s

s2 tu(t− t1)

(t21s2+2t1s+2)e−t1s

s3 t2u(t− t1)

46

Page 53: Notes on Mathematics For Engineers

8.4.1 Analyticity

Let ω be a single valued complex function of z such that ω = f(z) = u(x, y) + iv(x, y)where u and v are real functions. The definition of the limit of f(z) as z approaches z0 andthe theorems on limits of sums, products, and quotients correspond to those in the theory offunctions of a real variable. The neighborhoods involved are not two-dimensional; however;and the condition

limz→z0

f(z) = u0 + iv0

is satisfied if and only if the two-dimensional limits of the real functions u(x, y) and v(x, y)as x → x0, y → y0 have the values u0 and v0 respectively. Also, f(z) is continuous whenz = z0 is and only if u(x, y) and v(x, y) are both continuous at x0, y0).

The derivative of ω at a point z is:

dz= f ′(z) = lim

∆z→0

∆ω

∆z= lim

∆z→0

f(z + ∆z) − f(z)

∆z,

provided that this limit exists (it must be independent of direction). Now suppose onechooses a path on which ∆y = 0 so that ∆z = ∆x. Then, since ∆ω = ∆u+ i∆v,

dz= lim

∆x→0

(

∆u

∆x+ i

∆v

∆x

)

=∂u

∂x+ i

∂v

∂x

The case when ∆x = 0 so that ∆z = i∆y is similar.

dz= lim

∆y→0

(

∆u

∆y+ i

∆v

∆y

)

=∂u

∂y+ i

∂v

∂y

Equating the real and imaginary parts of the above equations, since we insist that thederivative must be independent of direction, we get

∂u

∂x=∂v

∂y;

∂v

∂x= −∂u

∂y.

These are known as the “Cauchy-Reimann conditions”.Now, the definition of analyticity is that “a function f(z) is said to be analytic at a

point z0 is its derivative f ′(z) exists at every point of some neighborhood of z0”. And, it isnecessary and sufficient that f(z) = u+ iv satisfies the Cauchy-Reimann conditions in orderfor the function to have a derivative at point z.

8.4.2 Cauchy’s Integral Formula

Theorem I

If f(z) is analytic at all points within and on a closed curve c, then∮

cf(z)dz = 0.

Proof:

47

Page 54: Notes on Mathematics For Engineers

c

f(z)dz =

c

(u+ iv)(dx+ idy) =

c

(udx− vdy) + i

c

(vdx+ udy).

Applying Green’s lemma to each integral,∮

c

f(z)dz =

∫ ∫

R

(

−∂v∂x

− ∂u

∂y

)

dxdy + i

∫ ∫

R

(

−∂u∂x

− ∂v

∂y

)

dxdy.

Due to analyticity, the integrands on the right vanish identically, giving∫

c

f(z)dz = 0. QED

Theorem II

If f(z) is analytic within and on the boundary c of a simply connected region R and if z0is any point in the interior of R, then f(z0) = 1

2πi

c

f(z)z−z0

dz where the integration around c

is in the positive sense.

Proof:

Let c0 be a circle with center at z0 whose radius ρ is sufficiently small such that c0 liesentirely within R. The function f(z) is analytic everywhere within R, hence f(z)

z−z0is analytic

everywhere within R except at z = z0. By the “principle of deformation of contours” (seeany complex variable book), we have that

c

f(z)

z − z0dz =

c0

f(z)

z − z0dz =

c0

f(z0) + f(z) − f(z0)

z − z0dz

= f(z0)

c0

1

z − z0dz +

c0

f(z) − f(z0)

z − z0dz.

Considering the first integral∮

c0

1z−z0

dz and letting z − z0 = reiθ, dz = rieiθdθ, we get thefollowing:

∫ 2π

0

rieiθ

reiθdθ = i

∫ 2π

0

dθ = 2πi.

With that, we may also observe the following:

1.

c0

f(z) − f(z0)

z − z0dz

≤∮

c0

|f(z) − f(z0)||z − z0|

|dz|

2. On c0, z − z0 = ρ

3. |f(z) − f(z0)| < ǫ provided that |z − z0| = ρ < δ (Due to the continuity of f(z)).

48

Page 55: Notes on Mathematics For Engineers

Choosing ρ to be less than δ, we write∣

c0

f(z) − f(z0)

z − z0

dz <

c0

ǫ

ρ|dz| =

ǫ

ρ

c0

|dz| =ǫ

ρ2πρ = 2πǫ.

Since the integral on the left is independent of ǫ, yet cannot exceed 2πǫ, which can be madearbitrarily small, it follows that the absolute value of the integral is zero. We therefore havethat

c0

f(z)

z − z0dz = f(z0)2πi+ 0 or,

f(z0) =1

2πi

c0

f(z)

z − z0dz. QED

This is known as “Cauchy’s integral formula”.

Calculation of Residues

Laurent Series

Theorem I:

If f(z) is analytic throughout the closed region R, bounded by two concentric circles c1and c2, then at any point in the annular ring bounded by the circles, f(z) can be representedby a series of the following form where a is the common center of the circles:

f(z) =

∞∑

n=−∞an(z − a)n

an =1

2πi

c

f(ω)

(ω − a)n+1dω.

Each integral is taken in the counter-clockwise direction around any curve c lying within theannulus and encircling its inner boundary (for proof see any complex variable book). Thisseries is called the Laurent series.

Residues

The coefficient, a−1, of the term (z − a)−1 in the Laurent expansion of a function f(z),is related to the integral of the function through the formula

a−1 =1

2πi

c

f(z)dz.

In particular, the coefficient of (z−a)−1 in the expansion of f(z) about an “isolated singularpoint” is called the “residue” of f(z) at that point.

If we consider a simply closed curve c containing in its interior a number of isolatedsingularities of a function f(z), then it can be shown that

cf(z)dz = 2πi [r1 + r2 + · · ·+ rn]

where r1, r2, · · · , rn are the residues of f(z) at the singular points within c.

49

Page 56: Notes on Mathematics For Engineers

Determination of Residues

The determination of residues by the use of series expansions is often quite tedious. Analternative procedure for a simple or first order pole at z = a can be obtained by writing

f(z) =a− 1

z − a+ a0 + a1(z − a) + · · ·

and multiplying by z − a to get

(z − a)f(z) = a−1 + a0(z − a) + a1(z − a)2 + · · ·and letting z → a to get

a−1 = limz→a

[

(z − a)f(z)]

.

A general formula for the residue at a pole of order m is

(m− 1)!a−1 = limz→a

dm−1

dzm−1

[

(z − a)mf(z)]

.

For polynomials, the method of residues reduces to the Heaviside method for finding inverseLaplace transforms.

8.4.3 Regular and Singular Points

If ω = f(z) possesses a derivative at z − z0 and at every point in some neighborhood ofz0, then f(z) is said to be “analytic” at z = z0, and z0 is called a “regular point” of thefunction.

If a function is analytic at some point in every neighborhood of a point z0, but not at z0,then z0 is called a “singular point” of the function.

If a function is analytic at all points except z0, in some neighborhood of z0, then z0 is an“isolated singular point”.

About an isolated singular point z0, a function always has a Laurent series representation

f(z) = . . .+A−2

(z − z0)2+

A−1

z − z0+ A0 + A1(z − z0) + . . .

where 0 < |z − z0| < γ0 and γ0 is the radius of the neighborhood in which f(z) is analyticexcept at z0. This series of negative powers of (z − z0) is called the “principle part” of f(z)about the isolated singular point z0. The point z0 is an “essential singular point” of f(z) ifthe principle part has an infinite number of non-vanishing terms. It is a “pole of order m”if A−m 6= 0 and A−n = 0 when n > m. It is called a “simple pole” when m = 1.

8.5 References

1. Churchill; “Operational Mathematics” - A complete discourse on operational methods,well written and presented.

50

Page 57: Notes on Mathematics For Engineers

2. Wylie; “Advanced Engineering Mathematics” - A concise review of the highlights oftransformation calculus, both Fourier and Laplace transforms.

3. Murphy; “Basic Automatic Control Theory” - An exposition of Laplace transformswith many good electrical engineering examples. A fairly complete table of Laplacetransforms.

4. Erdelyi, A., et al; “Tables of Integral Transforms”, Vols. I and II, McGraw Hill, NewYork (1954). Very extensive compilation of transforms of many kinds.

5. Gardner and Barnes; “Transients in Linear Systems”, Extensive tables of transformsof ratios of polynomials.

51

Page 58: Notes on Mathematics For Engineers

Chapter 9

Fourier Transforms

9.1 Definitions

9.1.1 Basic Definitions

In addition to the Laplace transform there exists another commonly-used set of trans-forms called Fourier transforms. At least five different Fourier transforms may be distin-guished. Their definitions are as follows:

Finite Range Cosine Transform

Fc[n] = Cn[f ] ≡∫ π

0

f(x) cos (nx)dx (n = 0, 1, 2, . . .)

Finite Range Sine Transform

Fs[n] = Sn[f ] ≡∫ π

0

f(x) sin (nx)dx (n = 1, 2, 3, . . .)

Infinite Range Cosine Transform

Fc[r] = Cr[f ] ≡√

2

π

∫ ∞

0

f(x) cos (rx)dx (0 ≤ r ≤ ∞)

Infinite Range Sine Transform

Fs[r] = Sr[f ] ≡√

2

π

∫ ∞

0

f(x) sin (rx)dx (0 ≤ r ≤ ∞)

Infinite Range Exponential Transform

Fe[r] = Er[f ] ≡√

1

∫ ∞

−∞f(x)eirxdx (−∞ ≤ r ≤ ∞)

9.1.2 Range of Definition

In the infinite range transforms, the transform variable is continuous; in the case of thefinite range transforms, the variable takes only positive integer values or zero. Considering

52

Page 59: Notes on Mathematics For Engineers

the range of integration used in the definition of each transform, we see that the finite rangetransforms apply to functions defined on a finite interval, the infinite range sine and cosinetransforms to functions defined on a semi-infinite interval, while the exponential transformapplies to functions defined on the infinite interval.

9.1.3 Existence Conditions

As an existence condition for all these transforms, it is customarily required that thefunction be absolutely integrable over the range, i.e.

|f(x)|dx exists. Note that although forthe derivations to follow, the more stringent conditions of continuity or sectional continuityare imposed upon the function, absolute integrability is all that is required in the generalcase.

9.2 Fundamental Properties

9.2.1 Transforms of Derivatives

Consider the finite range cosine transforms of the derivative f ′ of the function f ,

Cn[f ′] =

∫ π

0

f ′(x) cos (nx)dx.

Integrating by parts,

Cn[f ] = f(x) cos (nx)

π

0

+ n

∫ π

0

f(x) sin (nx)dx

= f(π) cos (nπ) − f(0+) + nSn[f ].

Since n is an integer,

Cn[f ′] =[

(−1)nf(π) − f(0+)]

+ nSn[f ].

Consider also,

Sn[f ′] =

∫ π

0

f ′(x) sin (nx)dx

= f(x) sin (nx)

π

0

− n

∫ π

0

f(x) cos (nx)dx

= −nCn[f ].

53

Page 60: Notes on Mathematics For Engineers

Now take for f , f = g′; and by iteration we get the following:

Cn[f ′] = Cn[g′′] =[

(−1)ng′(π) − g′(0+)]

+ nSn[g′]

=[

(−1)ng′(π) − g′(0+)]

− n2Cn[g].

Similarily,

Sn[g′′] = −nCn[g′]

= −n[

(−1)ng(π) − g(0)]

− n2Sn[g]

= n[

g(0) − (−1)ng(π)]

− n2Sn[g].

Now consider the infinite range cosine transform

Cr[f′] =

2

π

∫ ∞

0

f ′(x) cos (rx)dx,

and again integrating by parts, and assuming limx→∞ f(x) = 0, which is a consequence ofour condition of absolute integrability, we get

Cr[f′] =

2

π[−f(0)] + r

2

π

∫ ∞

0

f(x) sin (rx)dx

= −√

2

πf(0) + rSr[f ].

Likewise,

Sr[f′] =

2

π

∫ ∞

0

f ′(x) sin (rx)dx

= −r√

2

π

∫ ∞

0

f(x) cos (rx)dx = −rCr[f ].

Iterating once, we find

Cr[f′′] = −

2

πf ′(0) + rSr[f

′]

= −√

2

πf ′(0) − r2Cr[f ].

Similarly,

Sr[f′′] = −rCr[f

′]

= −r√

2

πf(0) − r2Sr[f ].

54

Page 61: Notes on Mathematics For Engineers

Finally, consider

Er[f′] =

1

∫ ∞

−∞f ′(x)eirxdx

and assuming limx→∞ f(x) = 0,

Er[f′] = − ir√

∫ ∞

−∞f(x)eirxdx.

Iterating, it follows that

Er[f′′] = −r2Er[f ].

In each case, we have assumed continuity for f ′ and f ′′ in order to perform the indicatedparts integrations. One may proceed with the iterations, obtaining relations involving trans-forms of higher derivatives. Further properties are derivable with similar ease, the procedureusually involving an integration by parts.

9.2.2 Relations Among Infinite Range Transforms

It is interesting to note some relations among the infinite range transforms. RecallingEuler’s identity eirx = cos (rx) + i sin (rx), we find that

Er[f ] =1√2π

∫ ∞

−∞f(x) cos (rx)dx+

i√2π

∫ ∞

−∞f(x) sin (rx)dx

=1√2π

∫ 0

−∞f(x) cos (rx)dx+

1√2π

∫ ∞

0

f(x) cos (rx)dx

+i√2π

∫ 0

−∞f(x) sin (rx)dx+

i√2π

∫ ∞

0

f(x) sin (rx)dx

=1√2π

∫ 0

−∞f(−x) cos (rx)dx+

1√2π

∫ ∞

0

f(x) cos (rx)dx

+i√2π

∫ 0

−∞f(x) sin (rx)dx+

i√2π

∫ ∞

0

f(−x) sin (rx)dx

or

Er[f ] =1

2

Cr[f(−x)] + Cr[f(x)] + iSr[f(x)] − iSr[f(−x)]

,

which is not very interesting except when f(x) is either even or odd on the infinite interval;if even, i.e. f(x) = f(−x), then the exponential transform reduced to the cosine transform;if odd, i.e. f(x) = −f(−x), then the exponential transform reduces to the sine transform,with a factor

√−1.

55

Page 62: Notes on Mathematics For Engineers

9.2.3 Transforms of Functions of Two Variables

The transforms may also be used with functions of two or more variables; for example,if f is a function of x and y, defined for 0 ≤ x ≤ π, 0 ≤ y ≤ π, then

Sm[f ] =

∫ π

0

f(x, y) sin (mx)dx

Sn[f ] =

∫ π

0

f(x, y) sin (ny)dy

Sm,n[f ] =

∫ π

0

Sn[f ] sin (my)dy =

∫ π

0

Sm[f ] sin (nπ)dx

=

∫ π

0

∫ π

0

f(x, y) sin (mx) sin (ny)dxdy.

Furthermore,

Sm,n

[

∂2f

∂x2

]

= m

Sn

[

f(0, y)− (−1)mSn[f(π, y)]]

−m2Sm,n[f ]

so that if

f(0, y) = f(π, y) = f(x, 0) = f(x, π),

then

Sm,n

[

∂2f

∂x2

]

= −m2Sm,n[f ]

Sm,n

[

∂2f

∂x2+∂2f

∂y2

]

= −(m2 + n2)Sm,n[f ].

Similar formulae may be derived for Cm,n and extensions can be worked out in analogyto the single-variable properties. These transforms of more than one variable amount totransforms of transforms, obtained by taking the transform of the function with respect toa single variable, and subsequently taking the transform of this transformed function withrespect to another variable. In fact, if the boundary conditions in the various dimensionsare not all of the same type, more than one type of transformation may be used. In fact,one fairly common combination is the Fourier plus the Laplace transformation.

9.2.4 Fourier Exponential Transforms of Functions of Three Vari-

ables

Consider a function of three variables, f(x1, x2, x3), piecewise continuous and absolutelyintegrable over the infinite range with respect to each variable. We may apply the exponential

56

Page 63: Notes on Mathematics For Engineers

transform with respect to each variable, defining the three-times transformed function.

Ek[f ] = g(k1, k2, k3) =1

(2π)32

∫ ∞

−∞

∫ ∞

−∞

∫ ∞

−∞ei(k1x1+k2x2+k3x3)f(x1, x2, x3)dx1dx2dx3

Using vector notation, this may be rewritten as follows:

Ek[f ] = g(k) =1

(2π)32

x

eik·xf(x)d3x

where k has the components k1, k2, k3, x has the components x1, x2, x3, and d3x = dx1dx2dx3,and the integration is to be taken over the full range, −∞ to ∞ of each variable. The inversetransformation gives back f(x),

f(x) =1

(2π)32

k

e−ik·xg(k)d3k.

Properties: suppose Ek[f ] = g(k); Ek[F] = G(k), then

1. Ek[∇f ] = −ikg(k)

2. Ek[~∇ · F ] = −ik · G(k)

3. Ek[∇× F] = −ik ×G(k)

4. Ek[∇2f ] = −k2f

From a glance at formulae 1 to 4, we see that under this transformation, the vector operator∇ operating on a function transforms into the vector ik times the transformed function.

9.3 Summary of Fourier Transform Formulae

9.3.1 Finite Transforms

Functions defined on any finite range can be transformed into functions defined on theinterval 0 ≤ x ≤ π.

1. Definitions

a. Sn[y] =

2

π

∫ π

0

y(x) sin (nx)dx n = 1, 2, . . .

b. Cn[y] =

2

π

∫ π

0

y(x) cos (nx)dx n = 0, 1, . . .

57

Page 64: Notes on Mathematics For Engineers

2. Inversions (0 ≤ x ≤ π)

a. y(x) =

2

π

∞∑

n=1

Sn[y] sin (nx)

b. y(x) =

2

π

∞∑

n=1

Cn[y] cos (nx) +1

πC0[y]

3. Transforms of Derivatives

a. Sn[y′] = −nCn[y] n = 1, 2, . . .

b. Cn[y′] = nSn[y] − y(0) + (−1)ny(π) n = 0, 1, 2, . . .

c. Sn[y′′] = −n2Sn[y] + n[

y(0) + (−1)ny(π)]

n = 1, 2, . . .

d. Cn[y′′] = −n2Cn[y] − y′(0) + (−1)ny(π) n = 0, 1, 2, . . .

(Note that for b and c, the functions must be known on the boundaries and for d, thederivative must be known on the boundaries.)

4. Transforms of Integrals

a. Sn

[∫ x

0

y(ξ)dξ

]

=1

nCn[y] −

(−1)n

nC0[y] n = 1, 2, . . .

b. Cn

[∫ x

0

y(ξ)dξ

]

= −1

nSn[y]

C0

[∫ x

0

y(ξ)dξ

]

= πC0[y] − C0[xy]

5. Convolution Properties

a. Define convolution of f(x), g(x) for −π ≤ x ≤ π.

p ∗ q ≡∫ π

−π

p(x− ξ)q(ξ)dξ = q ∗ p

b. Transforms of Convolutions

Define extension of f(x), where f(x) is defined for 0 ≤ x ≤ π.

Odd extension:

f1(−x) = −f1(x)

f1(x+ 2π) = f1(x)

Even extension:

f2(−x) = f2(x)

f2(x+ 2π) = f2(x)

1. 2Sn[f ]Sn[g] = −Cn[f1 ∗ g1]

2. 2Sn[f ]Cn[g] = Sn[f1 ∗ g2]

3. 2Cn[f ]Cn[g] = Cn[f1 ∗ g2]

58

Page 65: Notes on Mathematics For Engineers

6. Derivatives of Transforms

a.d

dnSn[y] = Cn[xy]

b.d

dnCn[y] = −Sn[xy]

(Here the differentiated transforms must be in a form valid for n as a continuous variableinstead of only for integral n.)

9.3.2 Infinite Transforms

It must be true that∫∞0y(x)dx or

∫∞−∞ y(x)dx exists.

1. Definitions

a. Sr[y] =

2

π

∫ ∞

0

y(x) sin (rx)dx r ≥ 0

b. Cr[y] =

2

π

∫ ∞

0

y(x) cos (rx)dx r ≥ 0

c. Er[y] =1√2π

∫ ∞

−∞y(x)eirxdx −∞ ≤ r ≤ ∞

2. Inversions

a. y(x) =

2

π

∫ ∞

0

Sr[y] sin (rx)dx x > 0

=

2

πSx

[

Sr[y]]

b. y(x) =

2

π

∫ ∞

0

Cr[y] cos (rx)dx x > 0

=

2

πCx

[

Cr[y]]

c. y(x) =1√2π

∫ ∞

−∞Er[y]e

−irxdx

= E−x

[

Er[y]]

= Ex

[

E−r[y]]

(The Cauchy principal value of the integral is to be taken).

59

Page 66: Notes on Mathematics For Engineers

3. Transforms of Derivatives

a. Sr[y′] = −rCr[y]

b. Cr[y′] = rSr[y] − y(0)

c. Er[y′] = −irEr[y]

d. Sr[y′′] = −r2Sr[y] + ry(0)

e. Cr[y′′] = −r2Cr[y] − y′(0)

f. Er[y′′] = −r2Er[y]

g. Er

[

dny

dxn

]

= (−ir)nEr[y]

4. Transforms of Integrals

a. Sr

[∫ x

0

y(ξ)dξ

]

=1

rCr[y]

b. Cr

[∫ x

0

y(ξ)dξ

]

=−1

rSr[y]

(In a and b,∫ x

0y(ξ)dξ must be piecewise continuous and approaching zero as x→ ∞ ).

c. Er

[∫ x

c

y(ξ)dξ

]

=1

rEr[y]

(Where c is any lower limit and∫ x

−∞ y(ξ)dξ must be piecewise continuous and approachingzero as x→ ∞ ).

d. Sr

[

∫ x

0

∫ λ

0

y(ξ)dξdλ

]

=−1

r2Sr[y]

e. Cr

[

∫ x

0

∫ λ

0

y(ξ)dξdλ

]

=−1

r2Cr[y]

f. Er

[

∫ x

0

∫ λ

0

y(ξ)dξdλ

]

=−1

r2Er[y]

5. Some Relations Between Transforms for real y(x)

a. Cr[y] + iSr[y] = Er[y]

or

Cr[y] = ℜ(

Er[y])

,

Sr[y] = ℑ(

Er[y])

b. For y(x) = y(−x), y(x) = Θ(

e−ǫt)

ǫ > 0

Er[y] = 2L[y],

60

Page 67: Notes on Mathematics For Engineers

(where Laplace transform variable is taken as ir.)

6. Convolution Properties

a. 2Sr[f ]Sr[g] = Cr

[∫ ∞

0

g(ξ)(

f(x+ ξ) − f1(x− ξ))

]

b. 2Sr[f ]Cr[g] = Sr

[∫ ∞

0

g(ξ)(

f(x+ ξ) + f1(x− ξ))

]

= Cr

[∫ ∞

0

f(ξ)(

g2(x− ξ) − g(x+ ξ))

]

c. 2Cr[f ]Cr[g] = Cr

[∫ ∞

0

g(ξ)(

f2(x− ξ) + f(x+ ξ))

]

where extensions defined

y1(−x) = −y(x); y2(−x) = y(x), ∀x

d. Er[f ]Er[g] = Er

[

∫ ∞

−∞f(ξ)g(x− ξ)dξ

]

7. Derivatives of Transforms

a.d

drSr[y] = Cr[xy]

b.d

drCr[y] = −Sr[xy]

c.d

drEr[y] = iEr[xy]

9.4 Types of Problems to which Fourier Transforms

May be Applied

General Discussion

It is to our great advantage to have some inkling as to just which transform to use andwhere to use it. We have noted that finite-range transforms are useful on functions definedover a finite range, Fc[r] and Fs[r] are useful on functions defined over semi-infinite intervals,and Fe[r] on functions defined over the infinite range. Nevertheless, there is still more to besaid.

First, it goes almost without saying that if it can be avoided, it is undesirable to introducean unknown quantity into an equation. Now, if an equation in f defined over 0 ≤ x ≤ π

contains a differential operator which one wishes to reduce, say d2f

dx2 , and f(π) and f(0) are

61

Page 68: Notes on Mathematics For Engineers

known, while f ′(π) and f ′(0) are not, then clearly Sn is the best option because in doing so,we introduce f(π) and f(0) and need not know the value of f ′ at any point. We would notuse Cn because f ′(0) and f ′(π) are unknown and could not be solved for until much later inthe work. On the other hand, if f ′(0) and f ′(π) are known, then one uses Cn for the samereason. The situation is similar with respect to the infinite range transforms; use Fs[r] to

reduce d2f

dx2 when f(0) is known and use Fc[r] when f ′(0) is known. No such question ariseswith respect to Fe[r].

We have noted at the start that the functions to which the Fourier transforms are appli-cable are usually required to be absolutely integrable. This kind of knowledge of a functionis usually evident from the physical meaning of the function, before the function itself isknown. The Laplace transform, on the other hand, merely requires that the function be ofexponential order, i.e., |f(x)| < Meαx ∀M,α ∈ R.

Example

Consider the following steady-state heat conduction problem in a medium with no internalheat generation. Suppose we have a two-dimensional slab which is semi-infinite along the y-axis (0 ≤ y ≤ ∞) and finite along the x-axis (0 ≤ x ≤ π). The left face at x = 0 is insulatedfor y > a but has a heat flux q for 0 < y < a. The right face at x = π is insulated for ally. Furthermore, suppose that the face at y = 0 is held at temperature T0, 0 < x < π. Theposition-dependent temperature T (x, y) is modeled using Laplace’s equation with boundaryconditions.

∇2T = 0

−k∂T∂x

(0, y) =

q 0 < y < a

0 y > a

∂T

∂x(π, y) = 0

T (x, 0) = T0

We propose to do the problem by the method of Fourier transforms, but intuitively we knowlimy→∞ T ≡ T 6= 0 and that therefore, the transform of T does not exist. However, thefunction T − T ≡ Θ is such that limy→∞ Θ = limy→∞ T − T = T − T = 0 and the transformmay (in fact, does) exist. Let us, therefore, substitute T = Θ + T into the above problem toobtain the following:

∇2Θ = 0

−k∂Θ∂x

(0, y) =

q 0 < y < a

0 y > a

∂Θ

∂x(π, y) = 0

Θ(x, 0) + T = T0 ⇒ Θ(x, 0) = T0 − T ≡ Θ0

62

Page 69: Notes on Mathematics For Engineers

The structure of the problem is not essentially change, except that now Θ0 is not knownsince T is not known.

We must reduce the operators ∂2Θ∂x2 and ∂2Θ

∂y2 . In x, ∂Θ∂x

(0, y) and ∂Theta∂x

(π, y) are known.Thus, a finite range Fourier cosine transform is a good option here. In y, we know thatΘ(x, 0) = Θ0 and that limy→∞ Θ = 0. Therefore, an infinite range Fourier sine transform isa good option. We shall denote the x-transformed functions by the superscript fn and they-transformed functions by the superscript F . Recall that

(

∂2Θ

∂x2

)f

= −n2Θf + (−1)n∂Θ

∂x(π, y) − ∂Θ

∂x(0, y)

and

(

∂2Θ

∂y2

)F

= −r2ΘF + rΘ(x, 0)

It is irrelevant in which order the transformations are applied or inverted, although one ordermay prove nicer than another. Let us transform first with respect to x.

0 = −n2Θfn + (−1)n∂Θ

∂x(π, y)− ∂Θ

∂x(0, y) +

d2Θfn

dy2

∂Θ

∂x(0, y) =

− q

k0 < y < a

0 y > a

∂Θ

∂x(π, y) = 0

Θfn(x, 0) =

π n = 0

0 n = 1, 2, . . .

Then with respect to y ( Churchill pg. 300, formula 3):

0 = −n2Θfn + (−1)∂ΘF

∂x(π, y) − ∂ΘF

∂x(0, y) − r2ΘfnF + rΘfn(0)

∂ΘF

∂x(0, y) = − q

k

1 − cos (ar)

r

(See Erdelye, pg. 63, formula 1)

∂ΘF

∂x(π, y) = 0

Θfn(x, 0) = 0 n = 1, 2, 3, . . .

63

Page 70: Notes on Mathematics For Engineers

Making substitutions yields the single algebraic equation:

0 = −(n2 + r2)ΘfnF +q

k

(1 − cos (ar))

r; n = 1, 2, . . .

0 = −r2ΘfoF +q

k

(1 − cos (ar))

r+ πΘ0r; n = 0

Solving for ΘfnF :

ΘfoF =q

k

(1 − cos (ar))

r3+πΘ0

r; n = 0

ΘfnF =q

k

(1 − cos (ar))

r(r2 + n2); n = 1, 2, . . .

We propose to invert first with respect to r, but we would run into difficulties for n = 0. Letus, therefore, integrate the x-transformed equations directly for n = 0 to get Θfo. We have

d2Θfo

dy2+q

k= 0 0 < y < a

d2Θfo

dy2= 0 y > a

Θfo(x, 0) = πΘ0

limy→∞

Θfo = 0.

Integrating,

Θfo = − q

2ky2 + C1y + C2 0 < y < a

Θfo = C3y + C4 y > a

Now limy→∞

Θf = 0 ⇒ C3 = C4 = 0.

Also,

Θfo(x, 0) = πΘ0 = C2.

It is necessary to cook up another condition to get C1. In a problem of this type, we mustrequire ∂Θ

∂yand Θ to be continuous, therefore ∂Θfo

∂yand Θfo are continuous. Applying these

conditions at y = a,

dΘfo

dy(x, a−) =

dΘfo

dy(x, a+)

0 = −qak

+ C1; C1 =qa

k

Θfo(a−) = Θfo(a+)

0 = −qa2

2k+qa2

k+ πΘ0.

64

Page 71: Notes on Mathematics For Engineers

Somewhat surprisingly, applying this last condition yields

Θ0 = − qa2

2πk.

Thus

Θfo =

− qa

2k(y − a)2 y ≤ a

0 y ≥ a

We have Θfo. Let us now invert ΘfnF to get Θfn.

ΘfnF =q

k

(1 − cos (ar))

r(n2 + r2)n = 1, 2, . . .

The inverse of 1r(n2+r2)

is 1n2 (1 − e−ny) (See Erdelye, pg. 65, formula 20).

Also, a property of the Fourier sine and cosine transforms is

F−1[

g(r) cos (ar)]

=1

2

[

G(y + a) +G(y − a)]

.

It is also true that for the sine transform, if

F−1[gf ] = G(y),

then

G(−y) = −G(y).

Therefore,

F−1

[

1

r(r2 + n2)(1 − cos (ar))

]

=

=

1n2 (1 − e−ny) − 1

2n2

[

1 − e−n(y+1) + 1 − e−n(y−a)]

1n2 (1 − e−ny) − 1

2n2

[

1 − e−n(y+1) − 1 + e−n(a−y)]

=

1n2

[

1 − e−ny − 1 + e−ny (e−na+ena)2

]

y > a

1n2

[

1 − e−ny + (e−n(y+a)−e−n(a−y))2

]

y < a

=

e−ny

n2 (cosh (na) − 1) y > a1n2 (1 − e−ny − e−na sinh (ny)) y < a

65

Page 72: Notes on Mathematics For Engineers

Now, lacking a known inversion to invert with respect to n, we use the series form

Θ =1

πΘfo +

2

π

∞∑

n=1

Θfn cos (nx)

=

− qa

2πk(y − a)2 + 2q

πk

∑∞n=1

(1−e−ny−e−na sinh (ny))n2 cos (nx) y > a

2q

πk

∑∞n=1

e−ny

n2 (cosh (na) − 1) cos (nx) y < a

and recall that

T = Θ + T

T0 = Θ0 + T = − qa2

2πk+ T

T = T0 +qa2

2πk

T = Θ + Θ0 +qa2

2πk

9.5 Inversion of Fourier Transforms

9.5.1 Finite Range

Inversions of the finite range transforms are easily seen to be a consequence of the com-pleteness and orthogonality of the cosines in the case of the cosine transform and of the sinein the case of the sine transform on the interval of integration. Indeed, one sees that theintegrals defining Cn and Sn are just the Fourier coefficients for expansions of f in a cosineseries or a sine series. Thus, their inversions are given by

f(x) = C−1n [Cn] =

1

πC0[f ] +

2

π

∞∑

n=1

Cn[f ] cos (nx) (0 ≤ x ≤ π)

f(x) = S−1n [Sn] =

2

π

∞∑

n=1

Sn[f ] sin (nx) (0 ≤ x ≤ π).

Two facts, though obvious, should be noted with regard to these transforms. If thefunction is defined over some range other than 0 ≤ x ≤ π, say 0 ≤ x ≤ L, there arises nodifficulty since one can define a new variable, such as ξ = πx

L. This way, x = L ⇒ xi = π,

and f(x) = g(ξ) = f(Lπξ). If the function is extended out of the range 0 ≤ x ≤ π, the

inversion of the cosine transform is the even extensions, i.e.,

C−1n

[

Cn(x)]

= C−1n

[

Cn(−x)]

,

while the inversion of the sine transform is the odd extensions, i.e.,

S−1n

[

Sn(x)]

= −S−1n

[

Sn(−x)]

.

66

Page 73: Notes on Mathematics For Engineers

9.5.2 Infinite Range

Inversions of the infinite range transforms follow from the Fourier integral theorem invarious forms. The inversion of the cosine transform, for example, arises from the formula

f(x) =2

π

∫ ∞

0

cos (rx)

∫ ∞

0

f(y) cos (ry)dydr

=

2

π

∫ ∞

0

cos (rx)

2

π

∫ ∞

0

f(y) cos (ry)dydr.

The interior integral is just what we above defined as Cr[f ]. Therefore,

f(x) = C−1r [Cr] =

2

π

∫ ∞

0

Cr[f ] cos (rx)dr.

The other inversions follow immediately in the same way from other forms of the Fourierintegral theorem. It is to our great advantage to note that, with the normalization factor√

or 1√2π

inserted as above, the inversion integral is just the transform of the transform,

i.e.,

f(x) = C−1r [Cr] = Cx

[

Cr[f ]]

Similar formulae for the sine and exponential transforms are

f(x) = S−1r [Sr] = Sx

[

Sr[f ]]

f(x) = E−1r [Er] = E−x

[

Er[f ]]

.

Knowing this fact doubles the utility of a table of transforms since it can be used backwardsas well as forwards. That is, given a transform one wishes to invert, one may first look forit among tabulated transformed functions; not finding an inversions there, one may equallywell look for his transform among the tabulated functions. If it is found there, the inversionsof the given transform is the transform of the tabulated function.

There are tables of both the finite range transforms and the infinite range transforms,useful for the purpose of inverting these transforms. However, this is just one way of obtainingan inversions (the easiest, of course). In the case of the finite range transforms, where theinversion is a Fourier series, and one does not know how to sum it, or get the inversion inclosed form, then the truncated series is a useful approximation to the inverse.

In the case of the infinite range transforms, the inversion integral is subject to evaluationby the methods of complex integration and residue theory.

9.5.3 Inversion of Fourier Exponential Transforms

We have seen that if the transform of a function F (x) is

f(r) =1√2π

∫ ∞

−∞F (x)eirxdx,

67

Page 74: Notes on Mathematics For Engineers

then (under the proper conditions on F ), the function can be recovered from its transformthrough the inversion formula:

F (x) =1√2π

∫ ∞

−∞f(r)e−irxdr.

Note that in the inversion formula, r is a real variable. Let us change the variable r to anew (complex) variable, ir = s, and let φ(s) = f(r). Then

φ(s) =1√2π

∫ ∞

−∞F (x)esxdx,

and the inversions is idr = ds

F (x) = − i√2π

c

φ(s)e−sxds.

Where c is the path of integration in the complex s-plane (along the iy axis at x = 0) andthe Cauchy Principal value is to be taken such that

F (x) = − i√2π

limβ→∞

∫ iβ

−iβ

φ(s)e−sxds.

Suppose φ(s) is analytic in the left half plane ℜ(s) < 0, except at a finite number ofisolated singular points sn. Let us close the path cβ, −β ≤ ℑ(s) ≤ β, with a semicircle inthe left half-plane, choosing β so large as to included all finite singular points in the planeℜ(s) < 0. Therefore, cβ is a vertical line along the imaginary axis, extending from −iβ toiβ, and c′β is a semicircle with radius β which intersects the real axis at x = −β and theimaginary axis at iy = iβ and iy = −iβ. Every singular point is contained within this halfcircle. By Cauchy’s residue theorem, we then have that

e−sxφ(s)ds+

c′β

e−sxφ(s)ds = 2πik∑

j=1

ρj

where ρj denotes the residue of e−sxφ(s) at the singular points sj, and we have assumed thatthere are k such singular points.

Since we have hypothesized that β be so large that c′β include all finite singular points inthe left half plane, in the limit as β → ∞, the right side remains constant, and we have thefollowing:

limβ→∞

e−sxφ(s)ds+ limβ→∞

e−sxφ(s)ds = 2πik∑

j=1

ρj

68

Page 75: Notes on Mathematics For Engineers

or

limβ→∞

∫ iβ

−iβ

e−sxφ(s)ds = 2πik∑

j=1

ρj − limβ→∞

c′β

e−sxφ(s)ds.

Thus,

F (x) = − i√2π

limβ→∞

∫ iβ

−iβ

esxφ(s)ds

=√

k∑

j=1

ρj +i√2π

limβ→∞

c′β

e−sxφ(s)ds.

Many times, the limit on the right hand side is zero, or is easy to evaluate, so that the aboveformula is a useful tool for inverting the transform.

Reasoning in similar fashion, but completing the path with a semi-circle in the right halfplane ℜ(s) > 0, we obtain a similar formula:

F (x) = −√

k′∑

k=1

ρ′j −i√2π

limβ→∞

c′′β

e−sxφ(s)ds

where the curve c′′β is another semicircular path similar to c′β except it extends out into thepositive real axis (rather than the negative real axis). Hence, this curve intersects the realaxis at x = β. Another way to visualize the curve is to realize that combining semi-circle c′βwith semi-circle c′′β results in a full circle with radius β centered at the origin of the complexplane. For x < 0, one may find that the limit

limβ→∞

c′β

e−sxφ(s)ds = 0

so that the first formula becomes

F (x) =√

k∑

j=1

ρj .

(Recall that ρj are residues at singular points in the left half-plane ℜ(s) < 0). Again, onemay find that for x > 0, the limit

limβ→∞

c′′β

e−sxφ(s)ds = 0

69

Page 76: Notes on Mathematics For Engineers

so that the second formula becomes

F (x) = −√

2πk′∑

j=1

ρ′j .

At x = 0,

F (0) =1

2[F (0+) + F (0−)]

where

F (0+) = limǫ→0

F (ǫ)

F (0−) = limǫ→0

F (−ǫ)

9.6 Table of Transforms

y(x) Sn(y)

1 anπ

[

1 − (−1)n]

x a2

nπ(−1)n+1

x2 a3

nπ(−1)n+1 − 2

(

anπ

)3 [1 − (−1)n

]

ecx nπan2π2+c2a2

[

1 − (−1)neca]

sin (ωx)

a

2(n =

ωa

π)

nπa

n2π2 − ω2a2(−1)n+1 sin (ωa) (n 6= ωa

π)

cosωx0 (n =

ωa

π)

nπa

n2π2 − ω2a2

[

1 − (−1)n cos (ωa)]

(n 6= ωa

π)

sinh (cx) nπan2π2+c2a2 (−1)n+1 sinh (ca)

cosh (cx) nπan2π2+c2a2

[

1 − (−1)n cosh (ca)]

a− x a2

x(a− x) 2(

anπ

)3 [1 − (−1)n

]

sin (ω(a−x))sin (ωx)

nπan2π2−ω2a2 (n 6= ωa

π)

sin (c(a−x))sin (cx)

nπan2π2+c2a2

70

Page 77: Notes on Mathematics For Engineers

y(x) Cn(y)

1a (n = 0)

0 (n = 1, 2, . . .)

x

a2

2(n = 0)

(

a

)2

[(−1)n − 1] (n = 1, 2, . . .)

x2

a3

3(n = 0)

2a3

n2π2(01)n (n = 1, 2, . . .)

ecx a2cn2π2+c2a2 [(−1)neca − 1]

sin (ωx)

0 (n =ωa

π)

a2ω

n2π2 − ω2a2

[

(−1)n cos (ωa) − 1]

(n 6= ωa

π)

cosωx

a

2(n =

ωa

π)

a2ω

n2π2 − ω2a2(−1)n+1 sin (ωa) (n 6= ωa

π)

sinh (cx) a2cn2π2+c2a2

[

(−1)n cosh (ca) − 1]

cosh (cx) a2cn2π2+c2a2 (−1)n sinh (ca)

(x− a)2

4

3a3 (n = 0)

2a3

n2π2(n = 1, 2, . . .)

cos (ω(a−x))sin (ωa)

a2ωn2π2+ω2a2 (n 6= ωa

π)

cosh (c(a−x))sinh (ca)

a2cn2π2+c2a2

For a few additional transforms of this type, see Churchill or Sheddon (References 2 and3). For transforms of the form

∫∞−∞ y(x)einxdx,

∫∞0y(x) sin (nx)dx, and

∫∞0y(x) cos (nx)dx,

see Erdelyi (reference 1).

9.7 References

1. Sneddon, Ian N. “Fourier Transforms”, McGraw Hill, 1951.

2. Churchill, Ruel V., “Operational Mathematics”, McGraw Hill, 1958.

3. Erdelyi, Volume I, “Bateman Mathematical Tables”.

71

Page 78: Notes on Mathematics For Engineers

Chapter 10

Miscellaneous Identities, Definitions,Functions, and Notations

10.1 Leibnitz Rule

If

f(x) =

∫ b(x)

a(x)

g(x, y)dy,

Then

d

dxf(x) = g[x, b(x)]b′(x) − g[x, a(x)]a′(x) +

∫ b(x)

a(x)

2g(x, y)

2xdy.

10.2 General Solution of First Order Linear Differen-

tial Equations

The objective is to solve for y(x) which satisfied the following differential equation:

f(x) =dy

dx+ a(x)y

with boundary condition

y(x0) = y0.

72

Page 79: Notes on Mathematics For Engineers

The procedure is to find an integrating factor. Define h such that

dh

dx= a(x).

Thus

h =

∫ x

c

a(x′)dx′.

The integrating factor will be eh, since

d(ehy)

dx= eh dy

dx+ ehy

dh

dx= eh dy

dx+ eha(x)y = ehf(x).

Then

∫ yeh

y0eh0

d(ehy) =

∫ x

x0

eh′

f(x′)dx′

where

h0 =

∫ x0

c

a(x′)dx′

h′ =

∫ x0

c

a(x′′)dx′′.

Thus,

yeh − y0eh0 =

∫ x

x0

eh′

f(x′)dx′.

Hence

y = y0eh0−h +

∫ x

x0

eh′−hf(x′)dx′.

Recalling that

h =

∫ x

c

a(x′)dx′

h′ − h =

∫ x′

c

a(x′′)dx′′ −∫ x

c

a(x′′)dx′ = −∫ x

x′

a(x′′)dx′′.

73

Page 80: Notes on Mathematics For Engineers

Finally,

y = y0eh0−h +

∫ x

x0

f(x′)e−R x

x′ a(x′′)dx′′

dx′.

(Note that the constant c appearing as a lower limit in the integral of the integrating factoris not a boundary condition: it disappears in the final solution.)

10.3 Identities in Vector Analysis

Below, bold quantities are vectors, and V is the vector differential operator ∇ = i ∂∂x

+

j ∂∂y

+ k ∂∂x

, and i, j, k are unit vectors in the x, y, and z directions, respectively.

1. a · b × c = b · c × a = c · a× b

2. a× (b× c) = b(a · c) − c(a · b)

3. (a× b) · (c × d) = a · b × (c × d)

= a ·(

c(b · d) − d(b · c))

= (a · c)(b · d) − (a · d)(b · c)

4. (a× b) × (c × d) = (a× b · d)c − (a × b · c)d

5. ∇(φ+ ψ) = ∇φ+ ∇ψ6. ∇(φψ) = φ∇ψ + ψ∇φ7. ∇(a · b) = (a · ∇)b + (b · ∇)a + a × (∇× b) + b × (∇× a)

8. ∇ · (a + b) = ∇ · a + ∇ · b9. ∇× (a + b) = ∇× a + ∇× b

10. ∇ · (φa) = a · ∇φ+ φ∇ · a11. ∇× (φa) = (∇φ) × a + φ∇× a

12. ∇ · (a× b) = b · ∇ × a − a · ∇ × b

13. ∇× (a× b) = a(∇ · b) − b(∇ · a) + (b · ∇)a − (a · ∇)b

14. ∇×∇× a = ∇(∇ · a) −∇2a

15. ∇×∇φ = 0

16. ∇ · ∇× a = 0

If r = ix+ jy + kz

17. ∇ · r = 3; ∇× r = 0

74

Page 81: Notes on Mathematics For Engineers

If V represents a volume bounded by a closed surface S with unit vector n normal to S anddirected positive outwards, then,

18.

∫∫∫

V

(∇φ)dV =

∫∫

S

(φn)dS

19.

∫∫∫

V

(∇ · a)dV =

∫∫

S

(a · n)dS (Gauss’ Theorem)

20.

∫∫∫

V

(f∇ · g)dV =

∫∫

S

(fg · n)dS −∫∫∫

V

(g · ∇f)dV

21.

∫∫∫

V

(g · ∇f)dV =

∫∫

S

(fg · n)dS −∫∫∫

V

(f∇ · g)dV

22.

∫∫∫

V

(∇× a)dV =

∫∫

S

(n× a)dA

23.

∫∫∫

V

(

ϕ∇2ψ − ψ∇2φ)

dV =

∫∫

S

(

(ϕ∇ψ − ψ∇φ) · n)

dS (Green’s Theorem)

If S is an unclosed surface bounded by contour C, and ds is an increment of length along C,then

24.

∫∫

S

(n ×∇φ)dA =

C

φdS

25.

∫∫

S

(∇× a)dA =

C

a · dS (Stokes’ Theorem).

10.4 Coordinate Systems

Cartesian Coordinates

∇ =∂

∂xi +

∂yj +

∂zk

∇2φ =∂2φ

∂x2+∂2φ

∂y2+∂2φ

∂z2

d3r = dxdydz

Cylindrical Coordinates

∇ =∂

∂ρρ0 +

1

ρ

∂θθ0 +

∂zk

∇2ϕ =1

ρ

∂ρ

(

ρ∂ϕ

∂ρ

)

+1

ρ2

∂2ϕ

∂θ2+∂2ϕ

∂z2

d3r = ρdρdθdz

75

Page 82: Notes on Mathematics For Engineers

Spherical Coordinates

∇ =∂

∂rr0 +

1

r

∂θθ0 +

1

r sin (θ)

∂ϕϕ0

∇2ψ =1

r2

∂r

(

r2∂ψ

∂r

)

+1

r2 sin (θ)

∂θ

(

sin (θ)∂ψ

∂θ

)

+1

r2 sin2 (θ)

∂2ψ

∂ϕ2

d3r = r2 sin (θ)drdθdϕ

10.5 Index Notation

A short note will be given on this notation which greatly simplifies certain mathematicalproblems (to mention one advantage). A convention is adopted here which is sufficient forworking with rectangular Cartesian coordinates. For more general coordinate systems, amore elaborate convention is needed (see references 1, 2, and 3).

Consider a simple example which illustrates the utility and application of the indexnotation. Supposed one has a set of three equations

u = axx+ ayy + azz

v = bxx+ byy + bzz

w = cxx+ cyy + czz

By defining

u = u1 v = u1 w = u3;

x = x1 y = x2 z = x3;

ax = a11 ay = a12 az = a13;

bx = a21 by = a22 bz = a23;

cx = a31 cy = a32 cz = a33;

These may be written

u1 = a11x1 + a12x2 + a13x3

u2 = a21x1 + a22x2 + a23x3

u3 = a31x1 + a32x2 + a33x3

76

Page 83: Notes on Mathematics For Engineers

or

u1 =

3∑

α=1

a1αxα

u2 =

3∑

α=1

a2αxα

u3 =

3∑

α=1

a3αxα

or

ui =3∑

α=1

aiαxα (i = 1, 2, 3)

To this point we have effected considerable simplification of the original equations. By theintroduction of the “summation convention”, we can so even further. Notice that there aretwo kinds of indices on the right: i, which occurs once in the product, and α, which occurstwice. Index i is called a “single-occurring” index; α is called a “double-occurring” index.The convention to be introduced is:

1. Doubly-occurring indices are to be given all possible values and the results summedwithin the equation.

2. Single-occurring indices are to be assigned one value in an equation, but as manyequations are to be generated as there are available values for the index.

Thus from 1, we may drop the summation symbol, and by 2, we may drop the parenthesisdenoting values for i. With the summation convention in force, we have ui = aiαxα whichunambiguously represents the original equations, if α and i have the same range, which weshall assume.

A nice but unnecessary finishing touch can be put on the convention which seems to makethings clearer in the work: for all singly occurring indices use lower-case Roman letters; forall double-occurring indices, use lower-case Greek letters.

We remark that it is possible to have any number of indices on a quantity.

10.6 Examples of Use of Index Notation

A. Some Handy Symbols

1. δij =

1 i = j

0 i 6= j

77

Page 84: Notes on Mathematics For Engineers

This is the Kronecker delta.

2. εijk =

1 i 6= j, i 6= k, j 6= k, i, j, k in cyclic order− 1 i 6= j, i 6= k, j 6= k, i, j, k in anticylic order0 i = j, i = k, or j = k

This is called the Levy-Civita Tensor density.

B. Some Relationships Expressed in Index Notation

1. Dot Product (a scalar)

a · b = aαbα

2. Cross Product (a vector; consider ith component)

(a × b)i = εiαbaαbα = −εiβαbβaα

3. Triple Product (a scalar)

a · b× c = aαεαβγbβcγ

4. Gradient (ϕ is a scalar)

(∇ϕ)i =∂ϕ

∂xi

5. Divergence (V is a vector)

∇ · V =∂Vα

∂xα

6. Curl (V is a vector)

(∇× V)i = εiαβ

∂Vα

∂xβ

7. Laplacian

∇2φ =∂2φ

∂xα∂xα

8.∂xi

∂xj

= δij

C. Some Identities in δij and ǫijk in 3-D space

1. δαα = 3

2. εαβγεαβγ = 6

3. εiαβεjαβ = 2δij

4. εijβεkℓβ = δikδjℓ − δiℓδjk

5. εijkεℓmn = δiℓδjmδkn + δimδjnδkℓ

78

Page 85: Notes on Mathematics For Engineers

10.7 The Dirac Delta Function

The Dirac Delta “function” symbolizes an integration operation and in this sense is notstrictly a function (in the interpretation of Professor R. V. Churchill). It cannot be the endresult of a calculation, but is meaningful only if an integration is to be carried out over itsargument. we define the δ-“function” as follows:

δ(x) ≡ 0, x 6= 0∫ ǫ

−ǫ

δ(x)dx = 1, ǫ > 0

∫ ǫ

−ǫ

f(x)δ(x)dx = f(0)

Very often it is convenient to think of the δ-“function” as a function zero everywhereexcept where its argument is zero, but which is so large at that point where its argumentvanishes that its integral over any region of which that point is an interior point, is equalto unity. Mathematicians shudder at the idea of the δ-“function”, but physicists have usedthem for years (carefully), finding them of great utility.

Shiff’s “Quantum Mechanics” lists some properties of the Dirac δ:

δ(x) = δ(−x)

δ′(x) = −δ′(−x)(

δ′(x) =dδ(x)

dx

)

xδ(x) = 0

xδ′(x) = −δ(x)

δ(ax) =1

aδ(x) a > 0

δ(x2 − a2) =2

a

[

δ(x− a) + δ(x+ a)]

a > 0∫

δ(a− x)δ(x− b)dx = δ(a− b)

f(x)δ(x− a) = f(a)δ(x− a)

Professor Churchill uses as a δ-“function” the operation

limh→∞

∫ ǫ

−ǫ

f(x)U(h, x)dx; ǫ > 0

where U(h, x) is the function

U(h, x) =

1h; −h

2≤ x ≤ h

2

0; x < −h2

or x > h2

79

Page 86: Notes on Mathematics For Engineers

Here, note that the limit is taken after integration. Other such representations are common,like that given by Schiff:

δ(x) = limg→∞

sin (gx)

πx

which means not that the limit is to be taken exactly as show, but rather that it is takenafter integration, i.e., with this representation:

∫ ǫ

−ǫ

δ(x)dx = limg→∞

∫ ǫ

−ǫ

sin (gx)

πxdx

∫ ǫ

−ǫ

f(x)δ(x)dx = limg→∞

∫ ǫ

−ǫ

f(x)sin (gx)

πxdx

Schiff gives still another representation in terms of an integral:

δ(x) =1

∫ ∞

−∞eixξdξ = lim

g→∞

1

∫ g

−g

eixξdξ

= limg→∞

1

2 sin (gx)

x

= limg→∞

sin (gx)

πx

thus

∫ ǫ

−ǫ

f(x)δ(x)dx =

∫ ǫ

−ǫ

∫ ∞

−∞

1

2πeixξf(x)dξdx

10.8 Gamma Function

A. Definitions

Γ(x) =

∫ ∞

0

e−ttx−1dt

80

Page 87: Notes on Mathematics For Engineers

B. Properties

a. Γ(x+ 1) = xΓ(x)

b. Γ(n) = (n− 1)! (Γ(1) = 1) n ∈ N

c. Γ(x)Γ(1 − x) =π

sin (πx)

d. Γ(x)Γ(x+1

2) = 21−2x

√πΓ(2x)

e. Γ

(

1 − b

1 − a

)

=1 − a

1 − b· 2 − a

a− b· 3 − a

3 − b· · ·

f. Γ

(

1

2

)

=√π

Since by 1, one may reduce Γ(x) to a product involving Γ or some number between 1 and 2,a handy table for calculations is one like that found at the end of Chemical Rubber IntegralTables, for Γ(x), 1 ≤ x ≤ 2.

References

1. H. Margenau and G.M. Murphy; “The Mathematics of Physics and Chemistry”, D.Van Nostrand Company, inc., New York, 1956m pp. 93-98.

2. Whittacker and Watson, “Modern Analysis”, 4th Edition, Cambridge University Press(1927), Chapter VII.

10.9 Error Function

erf(x) =2√π

∫ x

0

e−λ2

erfc(x) = 1 − erf(x) =2√π

∫ ∞

x

e−λ2

81

Page 88: Notes on Mathematics For Engineers

Chapter 11

Notes and Conversion Factors

11.1 Electrical Units

11.1.1 Electrostatic CGS System

1. The electrostatic cgs units of charge, sometimes called the escoulomb or statcoulomb,is that “point” charge which repels an “equal” point charge at a distance of 1 cm in avacuum, with a force of 1 dyne.

2. The electrostatic cgs unit of field strength is that field in which 1 escoulomb experiencesa force of 1 dyne. It, therefore, is 1 dyne per escoulomb.

3. The electrostatic cgs unit of potential difference (or esvolt) is the difference of potentialbetween two points such that 1 erg of work is done in carrying 1 escoulomb from onepoint to the other. It is 1 erg per escoulomb.

11.1.2 Electromagnetic CGS System

1. The unit magnetic pole is a “point” pole which repels an equal pole at a distance of 1cm, in a vacuum, with a force of 1 dyne.

2. The unit magnetic field strength, the oersted, is that field in which a unit pole expe-riences a force of 1 dyne. It therefore is 1 dyne per unit pole.

3. The absolute unit of current (or abampere) is that current which in a circular wire of1-cm radius, produces a magnetic field of strength 2π dynes per unit pole at the centerof the circle. One abampere approximately equals 3 · 1010 esamperes, or 10 amp.

4. The electromagnetic cgs unit of charge (or abcoulomb) is the quantity of electricitypassing in 1 second through any cross section of a conductor carrying a steady currentof 1 abampere. One abcoulomb equals 10 coulombs.

82

Page 89: Notes on Mathematics For Engineers

5. The electromagnetic cgs unit of potential difference (or abvolt) is a potential differencebetween two points, such that 1 erg of work is done in transferring abcoulomb fromone point to the other. One abvolt = 10−8 volts which is approximately 1

3·10−10 esvolt.

11.1.3 Practical System

11.2 Tables of Transforms

Practical Electrostatic cgs Electromagnet cgs

Quantity 1 Coulomb 3 · 109 escoulombs 110

abcoulomb

Current 1 Ampere 3 · 109 esamperes 110

abcoulomb

Potential Difference 1 Volt 1300

esvolt 108 abvolts

Electrical Field Strength 1 Volt/cm 1300

dyne/escoulomb 108 abvolts/cm

11.2.1 Energy Relationships

• 1 esvolt x 1 escoulomb = 1 erg

• 1 abvolt x 1 abcoulomb = 1 erg

• 1 volt x 1 coulomb = 107 ergs = 1 joule

• The electron volt equals the work done when an electron is moved from one point toanother differing in potential by 1 volt.

• 1 electron volt = 4.80 · 10−10 escoulomb x 1300

esvolt = 1.60 · 10−12 erg

11.3 Physical Constants and Conversion Factors; Di-

mensional Analysis

See http://physics.nist.gov/cuu/Constants/index.html.

83