Lecture Notes – Contents

1. Introduction
2. First Order Linear Differential Equations
2a. Bernoulli's Differential Equation
3. Separable Differential Equations and some differences between linear and non-linear equations
4. Some applications of first order differential equations
5. Exact Equations, Integrating Factors, and Homogeneous Equations
5a. Exam-1 with answers
6. Linear Differential Equations of the Second Order – general properties and constant coefficients
7. Some special second order differential equations
8. Reduction of Order and more on complex roots
9. Particular Solutions – Undetermined Coefficients
10. Particular Solutions – Variation of Parameters
11. Some applications of second order differential equations
12. Forced Oscillations
13. Laplace Transform
14. Initial Value Problems and the Laplace Transform
14a. A supplemental Laplace Transform Table
15. Step Functions and initial value problems with discontinuous forcing
15a. Some Solutions of Problems using Laplace Transforms – 1
15b. Some Solutions of Problems using Laplace Transforms – 2
15c. Some Solutions of Problems using Laplace Transforms – 3
15d. Some Solutions of Problems using Laplace Transforms – 4
16. Systems of Differential Equations
17. Linear Homogeneous Systems with Constant Coefficients
17a. Supplement: Some notes
18. Geometry of two dimensional Linear Homogeneous Systems with Constant Coefficients
19. Higher dimensional linear homogeneous systems with constant coefficients
20. Variation of Parameters for Systems
21. Partial Differential Equations – the heat equation
– Exercises
December 24, 2008 1-1
1 Introduction
This course will cover basic material about ordinary differential equations.

Before we go into the course material itself, it is worth noting that there are many useful resources available on the internet for the study of differential equations. In particular, there are sample problem sets, sample tests, and useful software available. Among the most useful software are programs like Maxima, Mathematica, Maple, and Matlab. The first three of these programs enable one to solve almost all of the problems in the present course, so they can be very useful for checking answers.
Now we begin with the subject which will concern us in this course.

Differential Equations: equations involving functions and their derivatives.

The order of a differential equation is the highest order of derivative that occurs in the equation.

Examples:
1.
f(t, y, y′, . . . , y(n)) = 0.
where f : R^{n+2} → R is a real-valued function of n + 2 variables. Order = n
2.
y′′ + ty′ + y² = 0
order = 2
3.
y′ = (t² + 1)y
order = 1
The above differential equations are called scalar differential equations because they involve a single equation for a real-valued function and its derivatives.

We will later consider systems of differential equations, which involve more than one differential equation. For instance, an example of a two-dimensional system is the following.
x′ = x² + 2y + t²
y′ = 2x − y − sin(t)
We begin our study with first order scalar differential equations. Consider
y′ = f(t, y) (1)
A solution to (1) is a differentiable function t → y(t) defined on a real interval I = {t : α < t < β} such that, for all t ∈ I, we have
y′(t) = f(t, y(t)).
Examples:
1. y′ = 2y has solutions y(t) = a e^{2t}, where a is an arbitrary constant.

Note that y(0) = a. Thus, we have a one-parameter family of solutions, and we get a particular solution by specifying the value at a single real number.

If we know y at any fixed value, we can get the unique value of a by solving an algebraic equation.
If y(1) = 2, then we get

y(t) = a e^{2t}, y(1) = 2, a e² = 2, a = 2/e².
2. y = t² sin(t) is a solution of y′ = 2y/t + y cot(t).
3. y = a cos(t) + b sin(t) is a solution to y′′ = −y.
This is a second order equation, and we usually need two conditions to obtain a unique solution.
These can be expressed as
• initial conditions: y(0) = y0, y′(0) = y1 , or as
• boundary value conditions: y(0) = 1, y(2π) = 2.
• any two initial values y(0), y′(0), or boundary conditions y(α), y(β) with α ≠ β, suffice to uniquely determine the solution
Fact: For an n-th order scalar differential equation, one typically needs n conditions to uniquely determine the solution.
Given a differential equation (1) as above, the general solution to (1) is an expression y(t, c) involving an arbitrary constant c such that each function yc(t) = y(t, c), with c given a specific value, is a solution of (1), and any solution has this form for some c.

The constant c is determined by initial conditions. Many examples will follow.
We frequently write the pair of equations
y′ = f(t, y), y(t0) = y0
which we call an initial value problem.
The following important theorem states that, under mild conditions, an initial value problem has a unique solution.

Theorem (Existence-Uniqueness Theorem). Let f(t, y) be a C¹ function of the variables (t, y) defined in an open set D in the plane R². Then, for each (t0, y0) ∈ D there is a unique solution to the initial value problem
y′ = f(t, y), y(t0) = y0
Most of this course consists of
1. learning how to find solutions to various differential equations, and
2. studying applications of certain differential equations.
Direction Fields

It will turn out that many differential equations are difficult to solve. Some insight can be gained by the method of direction fields. Consider the equation y′ = f(t, y) and the curve (t, y(t)), where y(·) is a solution. At the point (t, y(t)) in the plane, the number f(t, y(t)) is the slope of the tangent line. Thus, if we draw a small line segment in the direction of slope f(t, y) at (t, y) and fit these together, we get approximations to the solutions of (1). Frequently, we can obtain information about the limiting behavior of solutions as t → ∞ in this way.

Examples. We use the program Mathematica to generate direction fields for various d.e.'s.
1. y′ = y
Note that we can determine that solutions y(t) with y(0) > 0 approach ∞ as t → ∞, while those with y(0) < 0 approach −∞ as t → ∞.

2. The other d.e.'s are y′ = (3 − y)/2, y′ = 1 − 2ty, and y′ = 2e^{−t} + y/2. Again, various behaviors as t → ∞ can be determined.
These are shown in the next figures
Figure 1: y′ = y, t ∈ [−1, 1], y ∈ [−1, 1]

Figure 2: y′ = (3 − y)/2, t ∈ [−1, 5], y ∈ [−1, 5]

Figure 3: y′ = 1 − 2ty, t ∈ [−1, 5], y ∈ [−1, 3]

Figure 4: y′ = 2e^{−t} + y/2, t ∈ [0, 8], y ∈ [−4, 4]
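These limiting behaviors can be checked numerically. The sketch below (Python; the helper name is our own, not from the notes) follows the direction field of y′ = (3 − y)/2 by chaining short tangent segments, i.e. Euler's method, and confirms that solutions starting above and below the equilibrium both approach y = 3 as t grows.

```python
def euler(f, t0, y0, t1, n=2000):
    """Follow the direction field of y' = f(t, y) from (t0, y0) to t1
    by stepping along short segments of slope f(t, y) (Euler's method)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)   # move along the tangent segment at (t, y)
        t += h
    return y

# y' = (3 - y)/2: the direction field suggests every solution tends to y = 3
f = lambda t, y: (3 - y) / 2
high = euler(f, 0.0, 5.0, 20.0)   # solution starting above the equilibrium
low = euler(f, 0.0, -1.0, 20.0)   # solution starting below it
```

Both end values come out within 10^{-3} of 3, matching what the direction field suggests.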
August 21, 2004 2-1
2. First Order Linear Equations
A differential equation of the form
y′ + p(t)y = g(t) (1)
is called a first order scalar linear differential equation. Here we assume that the functions p(t), g(t) are continuous on a real interval I = {t : α < t < β}.
We will discuss the reason for the name linear a bit later. Now, let us describe how to solve such differential equations.

There is a theorem which says that, under these continuity assumptions, if t0 ∈ (α, β), then, for any real number y0, there is a unique solution y(t) to the initial value problem
y′ + p(t)y = g(t), y(t0) = y0 (2)
which is defined on the whole interval I.

Now that we know there is a solution, we can use various methods to try to find it. There is a useful trick (or observation) for this. Assuming y is a non-zero solution to (1), suppose there were a non-zero function µ such that
(µy)′ = µg
Then, we would have
µ′y + µy′ = µg
µ′y + µ(g − py) = µg
µ′y = µpy
µ′ = µp
d(log µ)/dt = p
Since p = p(t) is a continuous function of t, we can integrate both sides to find log µ, and then take the exponential to find µ.
log µ(t) = ∫_{t0}^t p(s) ds

µ(t) = e^{∫_{t0}^t p(s) ds}.
Now, define µ(t) by the last formula. Going backwards through the previous equations, we obtain the formula
(µy)′ = µg.
Since the right side is now a known function of t, we can integrate again and get
(µy)(t) = ∫_{t0}^t µ(s)g(s) ds + c

for some constant c. This gives

y(t) = (1/µ(t)) [ ∫_{t0}^t µ(s)g(s) ds + c ]    (3)

Notice that µ(t0) = e⁰ = 1, and that if we evaluate y(t0), the integral vanishes and we get

y(t0) = c/µ(t0) = c.
To summarize, the solution to the initial value problem (2) is given by

y(t) = (1/µ(t)) [ ∫_{t0}^t µ(s)g(s) ds + y0 ]    (4)

This involves taking two integrals.

The general solution to (1) is given by leaving the constant c in the previous formula and taking the indefinite integral

y(t) = (1/µ(t)) [ ∫^t µ(s)g(s) ds + c ]    (5)
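Formula (4) can be turned into a small numerical procedure. The following Python sketch (the function name and the choice of trapezoidal quadrature are our own, not from the notes) approximates both integrals and checks the result against a known closed-form solution of y′ + y/2 = 3/2, y(0) = 2, which is solved in the examples that follow.

```python
import math

def solve_linear_ivp(p, g, t0, y0, t, n=20000):
    """Evaluate formula (4): y(t) = (1/mu(t)) (int_{t0}^t mu(s) g(s) ds + y0),
    where mu(t) = exp(int_{t0}^t p(s) ds); both integrals by the trapezoidal rule."""
    h = (t - t0) / n
    P = 0.0                  # running value of int_{t0}^s p
    I = 0.0                  # running value of int_{t0}^s mu * g
    f_prev = g(t0)           # mu(t0) = e^0 = 1
    for k in range(1, n + 1):
        s_prev = t0 + (k - 1) * h
        s = t0 + k * h
        P += 0.5 * h * (p(s_prev) + p(s))
        f = math.exp(P) * g(s)
        I += 0.5 * h * (f_prev + f)
        f_prev = f
    return (I + y0) / math.exp(P)

# Check: y' + y/2 = 3/2, y(0) = 2 has exact solution y = 3 - e^{-t/2}
approx = solve_linear_ivp(lambda s: 0.5, lambda s: 1.5, 0.0, 2.0, 4.0)
exact = 3 - math.exp(-2.0)
```

The numerical value agrees with the closed form to well under 10^{-4}, which is a useful way to catch sign errors in hand computations of µ.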
Examples:
1. Find the general solution to the d.e.

y′ + (1/2) y = 3/2

Here

µ(t) = e^{∫^t (1/2) dt} = e^{t/2},

so the general solution has the form

y(t) = (1/µ(t)) ( ∫^t µ(s)(3/2) ds + c )
     = e^{−t/2} (3 e^{t/2} + c)
     = 3 + c e^{−t/2}

2. In the preceding d.e., find the solution whose graph passes through the point (0, 2).

Here y(0) = 2, so

3 + c = 2, c = −1.
3. Find the solution of the initial value problem
y′ − y/2 = e^{−t}, y(0) = −1.

Let

µ = exp( ∫_0^t −(1/2) ds ) = e^{−t/2}.

The solution is

y(t) = e^{t/2} ( ∫_0^t e^{−s/2} e^{−s} ds − 1 )
     = e^{t/2} ( ∫_0^t e^{−3s/2} ds − 1 )
     = e^{t/2} ( [ −(2/3) e^{−3s/2} ]_0^t − 1 )
     = e^{t/2} ( −(2/3)(e^{−3t/2} − 1) − 1 )
     = −(2/3) e^{−t} − (1/3) e^{t/2}.
4. Find the solution of the IVP
y′ + 2ty = t, y(0) = 0
µ = e^{∫_0^t 2s ds} = e^{t²}

y(t) = e^{−t²} ∫_0^t s e^{s²} ds
     = e^{−t²} [ (1/2) e^{s²} ]_0^t
     = e^{−t²} (1/2)(e^{t²} − 1)
     = 1/2 − (1/2) e^{−t²}.
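As a sanity check on Examples 3 and 4, one can verify numerically that the claimed solutions satisfy their equations. In the Python sketch below (our own helper, not part of the notes), y′ is approximated by a centered difference and the residual of each equation is evaluated at a sample point.

```python
import math

def residual(y, ode_lhs, t, h=1e-6):
    """Centered-difference estimate of y'(t), then the ODE residual at t."""
    yp = (y(t + h) - y(t - h)) / (2 * h)
    return ode_lhs(t, y(t), yp)

# Example 3: y' - y/2 = e^{-t}; claimed solution y = -(2/3)e^{-t} - (1/3)e^{t/2}
y3 = lambda t: -(2 / 3) * math.exp(-t) - (1 / 3) * math.exp(t / 2)
r3 = residual(y3, lambda t, y, yp: yp - y / 2 - math.exp(-t), 1.0)

# Example 4: y' + 2ty = t; claimed solution y = 1/2 - (1/2)e^{-t^2}
y4 = lambda t: 0.5 - 0.5 * math.exp(-t * t)
r4 = residual(y4, lambda t, y, yp: yp + 2 * t * y - t, 1.0)
```

Both residuals vanish to numerical precision, and the initial conditions y3(0) = −1, y4(0) = 0 also check out.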
The reason for the name linear is as follows.

Consider the space C¹ = C¹(α, β) of continuously differentiable functions on the open interval I = (α, β), and let C⁰ = C⁰(I) be the space of continuous functions on I.

A function L from one function space to another is usually called an operator.

We can define operations of addition and scalar multiplication on the spaces C¹ and C⁰ as follows.
1. (f + g)(t) = f(t) + g(t) for all t (pointwise addition)
2. (c · f)(t) = c f(t) for all t (pointwise scalar multiplication)
An operator L : C¹ → C⁰ is called a linear operator if it preserves the operations of pointwise addition and scalar multiplication.
That is, for any two functions f, g ∈ C1 and c ∈ R, we have
1. L(f + g) = L(f) + L(g),
2. L(c · f) = cL(f)
Examples:
1. the operator L(f) = f ′ = Df is linear
2. the operator L(f) = f ′ + 1 is not linear
3. the operator L(f) = (f ′)2 is not linear
4. for any function p(t), the operator defined by
L(y)(t) = y′(t) + p(t)y(t) ∀t
is linear.
5. If V and W are any spaces of functions, we can similarly define linear operators from V to W.
6. Letting Cⁿ(I) denote the space of n-times continuously differentiable functions on the interval I, one can check that the n-th derivative operator y → y^{(n)} from Cⁿ(I) to C⁰(I) is a linear operator.
7. Given continuous functions
p0(t), p1(t), . . . , pn−1(t)
on an interval I, the operator
L(y)(t) = y^{(n)} + p_{n−1}(t) y^{(n−1)} + · · · + p_1(t) y′ + p_0(t) y
is a linear operator from Cn(I) to C0(I).
In general, a linear differential equation is one of the form
L(y)(t) = g(t)
where L(y) is a linear operator from Cⁿ to C⁰ involving sums of scalar multiples of D^j y for 0 ≤ j ≤ n.
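The two defining properties can be illustrated concretely. In the Python sketch below (our own construction, with derivatives approximated by finite differences), L(y) = y′ + p(t)y passes the additivity test pointwise, while the non-linear operator (y′)² from the examples above fails it.

```python
import math

h = 1e-6
def D(f):
    """Finite-difference approximation to the derivative operator."""
    return lambda t: (f(t + h) - f(t - h)) / (2 * h)

p = math.cos                       # any continuous coefficient p(t)
def L(f):                          # L(y) = y' + p(t) y  -- a linear operator
    return lambda t: D(f)(t) + p(t) * f(t)

def N(f):                          # N(y) = (y')^2  -- not linear
    return lambda t: D(f)(t) ** 2

f, g, t = math.sin, math.exp, 0.7
add = lambda u, v: (lambda s: u(s) + v(s))

lin_gap = L(add(f, g))(t) - (L(f)(t) + L(g)(t))      # should vanish
nonlin_gap = N(add(f, g))(t) - (N(f)(t) + N(g)(t))   # equals 2 f'(t) g'(t), not zero
```

The linear gap is zero up to rounding; the non-linear gap is 2f′(t)g′(t), visibly non-zero, which is exactly why (y′)² is excluded.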
Problems for sections 2.1, 2.2: p. 23 1, 3, 5, 13-19, p. 30 1,3,7
2a. Bernoulli’s Differential Equation
A differential equation of the form
y′ + p(t)y = g(t) yⁿ (1)
is called Bernoulli's differential equation. If n = 0 or n = 1, this is linear. If n ≠ 0, 1, we make the change of variable v = y^{1−n}. This transforms (1) into a linear equation.

Let us see this. We have
v = y^{1−n}

v′ = (1 − n) y^{−n} y′

y′ = (1/(1 − n)) yⁿ v′

and

y = yⁿ v
Hence,
y′ + py = g yⁿ

becomes

(1/(1 − n)) yⁿ v′ + p yⁿ v = g yⁿ

Dividing through by yⁿ and multiplying by 1 − n gives

v′ + (1 − n) p v = (1 − n) g.
We can then find v and, hence, y = v^{1/(1−n)}.

Example. Find the general solution to

y′ + ty = t y³.

We put v = y^{−2}.
We get

v′ = (−2) y^{−3} y′, y = y³ v.

So,

y′ + ty = t y³
(−1/2) y³ v′ + t y³ v = t y³
v′ − 2tv = −2t
µ = e^{−t²}
v = e^{t²} ( ∫^t e^{−t²} (−2t) dt + c )
  = e^{t²} ( e^{−t²} + c )
  = 1 + c e^{t²},

and

y = v^{−1/2} = [ 1 + c e^{t²} ]^{−1/2}.
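The closed form [1 + c e^{t²}]^{−1/2} can be compared against a direct numerical integration of the original non-linear equation. A Python sketch (the rk4 helper is our own, and the initial value y(0) = 1/2 is chosen purely for illustration):

```python
import math

def rk4(f, t0, y0, t1, n=4000):
    """Standard fourth-order Runge-Kutta integration of y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# Bernoulli equation from the example: y' + t y = t y^3, i.e. y' = t y^3 - t y.
# With y(0) = 1/2, the closed form y = (1 + c e^{t^2})^{-1/2} needs c = 1/y(0)^2 - 1 = 3.
f = lambda t, y: t * y ** 3 - t * y
numeric = rk4(f, 0.0, 0.5, 1.0)
closed = (1 + 3 * math.exp(1.0)) ** -0.5
```

The two values agree to many digits, confirming the substitution v = y^{−2} was carried through correctly.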
Problems: p. 33: 38, 39
3. Separable Differential Equations
A differential equation of the form
dy/dx = f(x, y)

is called separable if the function f(x, y) decomposes as a product f(x, y) = φ1(x) φ2(y) of two functions φ1 and φ2.

Proceeding formally, we can rewrite this as

dy/dx = φ1(x) φ2(y)

dy/φ2(y) = φ1(x) dx.

Using the second formula, we can integrate both sides to get

∫^y dy/φ2(y) = ∫^x φ1(x) dx + C
as the general solution. Note that this is an implicit relation between y(x) and x. Indeed, the last integral formula has the form

F(y(x)) = G(x) + C

for some functions F and G. To find y(x) as a function of x we would have to solve this implicit relationship. This is frequently hard to do, so we will leave the solution in implicit form.
A more general version of this is the d.e.
M(x)dx + N(y)dy = 0 (1)
We say that the general solution to this d.e. is an expression

f(x, y) = C

where fx = M(x) and fy = N(y).
August 21, 2004 3-2
Since the preceding equation defines a family of curves as C varies, one sometimes says that this is the family of integral curves for the d.e. (1).
Also, the initial value problem
dy/dx = φ1(x) φ2(y), y(x0) = y0

can be solved as

∫_{y0}^y dy/φ2(y) = ∫_{x0}^x φ1(x) dx
This picks out a specific curve in the family of integral curves.

Examples:
1. Find the general solution of the d.e.
dy/dx = x²/(1 − y²).
Write this as
−x² dx + (1 − y²) dy = 0
The general solution has the form
f(x, y) = C
where fx = −x² and fy = 1 − y².
Hence, we can take
f = ∫^x −x² dx + ∫^y (1 − y²) dy = −x³/3 + y − y³/3

and the general solution is

−x³/3 + y − y³/3 = C.
2. For the preceding d.e. find the integral curve passing through (1, 3).
We need to substitute x = 1, y = 3 in the above formula.
We get

−1/3 + 3 − 27/3 = C,

so the desired curve is

−x³/3 + y − y³/3 = 8/3 − 9 = −19/3.
3. Solve the IVP
dy/dx = (3x² + 4x + 2)/(2(y − 1)), y(0) = −1.
Write this as
−(3x² + 4x + 2) dx + 2(y − 1) dy = 0.
Integrate to get
−x³ − 2x² − 2x + y² − 2y = C,
and plug in x = 0, y = −1 to get C = 3.
So,
ANS: −x³ − 2x² − 2x + y² − 2y = 3.
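Implicit solutions like this one are easy to test numerically: along a numerically integrated solution curve, the combination −x³ − 2x² − 2x + y² − 2y should stay equal to C. A Python sketch (helper names are our own):

```python
def rk4(f, x0, y0, x1, n=4000):
    """Fourth-order Runge-Kutta for dy/dx = f(x, y)."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# The IVP of Example 3: dy/dx = (3x^2 + 4x + 2)/(2(y - 1)), y(0) = -1
f = lambda x, y: (3 * x ** 2 + 4 * x + 2) / (2 * (y - 1))
F = lambda x, y: -x ** 3 - 2 * x ** 2 - 2 * x + y ** 2 - 2 * y  # implicit combination

y1 = rk4(f, 0.0, -1.0, 1.0)
c0 = F(0.0, -1.0)   # value of the combination at the initial point
c1 = F(1.0, y1)     # should be unchanged along the solution
```

Both values equal 3, as the answer above asserts.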
Recall that the solution to the linear initial value problem

y′ + p(t)y = g(t), y(t0) = y0

exists on the whole interval I. This fails for non-linear equations. As an example, take
y′ = y2, y(0) = y0
We solve this equation as
dy/y² = dt

∫^y dy/y² = ∫_0^t dt

−1/y = t + C

y = −1/(t + C), y0 = −1/C.
This solution blows up at the point t = −C. The graphs of solutions looklike those in the following figure.
Problems for sections 2.3, 2.4: p. 38-39, 1-6, 9,11, p. 45, 1,3,5, 9,11
Plot: y = −1/(t + 1), t ∈ [−2, 2]

Plot: y = −1/(t − 1), t ∈ [−2, 2]
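The blow-up can also be observed numerically. In the Python sketch below (our own rk4 helper), the solution of y′ = y² with y(0) = 1, whose exact form is y = 1/(1 − t), is followed toward the blow-up time t = 1.

```python
def rk4(f, t0, y0, t1, n=20000):
    """Fourth-order Runge-Kutta for y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# y' = y^2 with y(0) = 1: exact solution y = 1/(1 - t), blowing up at t = 1
f = lambda t, y: y * y
y_half = rk4(f, 0.0, 1.0, 0.5)    # exact value is 2
y_close = rk4(f, 0.0, 1.0, 0.99)  # exact value is 100: already very large
```

The numerical solution tracks 1/(1 − t) closely and grows without bound as t approaches 1, in contrast to the linear case, where solutions exist on the whole interval.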
4. Some Applications of First Order Linear Differential Equations
The modeling problem

There are several steps required for modeling scientific phenomena:
1. Data collection (experimentation). Given a certain physical system, one has to run experiments and get some idea of how the observed data depend on time.

2. Setting up a scientific law to describe the time dependence. This may involve differential or difference equations. The idea is to find the correct equations whose solutions give the observed time dependence.

3. Analysis of solutions of the appropriate equations to describe the observed phenomena.
We will describe several known applications involving this process.

Radioactive Decay

It is known that certain radioactive substances exhibit spontaneous decay. That is, if Q(t) represents the amount of the substance at time t, then Q(t) satisfies the differential equation
dQ/dt = −rQ(t)    (1)
where r is a positive real number. This simply means that the rate of decay of the quantity at time t is proportional to the amount present at time t.
We know that the general solution to (1) is
Q(t) = Q(0) e^{−rt}
where Q(0) is the amount present at time 0. We can use this to solve various questions related to radioactive decay.
1. The element Thorium-234 (Th-234) exhibits radioactive decay. If 100 mg of Th-234 decays to 82.04 mg in one week, find an expression for the amount at any time t. Also, find the half-life of the element (the amount of time it takes to decay to half its original value).
Let Q(t) denote the amount at time t. Let Q0 = Q(0).
Then,
Q(t) = Q0 e^{−rt}.

If t is measured in units of days, and Q(t) is measured in units of milligrams (mg), then

Q0 = 100,
Q(7) = 100 e^{−7r} = 82.04,
e^{−7r} = 82.04/100,
r = −log(82.04/100)/7 ≈ 0.028.

So, Q(t) = 100 e^{−rt} = 100 e^{−0.028t}.
Let t_h denote the half-life.
Then,
Q(t_h) = Q0 e^{−r t_h} = Q0/2.

So,

1/2 = e^{−r t_h},
2 = e^{r t_h},
t_h = log(2)/r.
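These numbers are easy to reproduce; a quick Python check of the decay constant and half-life (variable names are our own):

```python
import math

# Th-234: 100 mg decays to 82.04 mg in 7 days, so Q(t) = 100 e^{-rt}
# with e^{-7r} = 0.8204.
r = -math.log(82.04 / 100) / 7          # decay constant, per day
half_life = math.log(2) / r             # t_h solving e^{-r t_h} = 1/2
week_amount = 100 * math.exp(-7 * r)    # recovers the measured 82.04 mg
```

This gives r ≈ 0.028 per day and a half-life of about 24.5 days for Th-234.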
2. Carbon Dating

All living beings contain roughly the same concentration in their cells of the radioactive element Carbon-14 (C-14). This substance decays at a certain rate, but gets replenished as living beings breathe from the atmosphere. When a being dies, it no longer replenishes its Carbon-14, so the concentration remaining in the cells is smaller than that which was originally there. Since the concentration satisfies

Q(t) = Q0 e^{−kt}

for some constants Q0, k > 0, and the half-life of C-14 is about 5560 years, one can use this for dating archaeological objects. See page 54, problem 6.
Compound Interest

If interest is compounded continuously, this gives an example of a model exhibiting exponential growth. Let us review interest calculations.

Suppose we are given a certain initial amount of money, called the principal amount P(0). If this is compounded annually at a rate of 5% and P(t) denotes the amount of money available after t years, we have

P(1) = P(0) + .05 P(0) = P(0)(1 + .05)
P(2) = P(1) + .05 P(1) = P(1)(1 + .05) = P(0)(1 + .05)²
P(t) = P(0)(1 + .05)^t
Now, suppose the interest is 5% per year, but compounded monthly. The interest per month is 5/12 %. In t years, we compound 12t times. So, we get

P(t) = P(0)(1 + (.05/12))^{12t}.

If the interest is at the rate of r%, we get

P(t) = P(0)(1 + r/(100 · 12))^{12t}
If the interest is compounded n times per year, we get
P(t) = P(0)(1 + r/(100 n))^{nt}
When n → ∞, we say we have interest compounded continuously. What is P(t)?

We take

H := lim_{n→∞} (1 + r/(100n))^{nt}

Taking logs, we get

log H = lim_{n→∞} nt log(1 + r/(100n)) = lim_{n→∞} log(1 + r/(100n)) / (1/(nt))

For small x, log(1 + x) ∼ x, so we get

log H = lim_{n→∞} (r/(100n)) / (1/(nt)) = rt/100

So,

H = exp(rt/100),

and we have

P(t) = P(0) e^{rt/100}.
It is sometimes of interest to estimate roughly how much time it will take for an initial principal amount to double in value for various interest rates.

Below we did the calculation assuming the interest rate varies from 1% to 10%, and compounding is done annually, monthly, daily, and continuously.

We used the program Mathematica to do the calculations. The output is given in the next few pages.
(* Compound Interest

Assume we have an initial principal P_0 and we compound at a rate of r % per year. After t years we will have

P(t) = P_0 (1 + r/100)^t

If we compound monthly, the amount of interest per year is r/(100*12). After t years, we have compounded 12t times. If we compound n times per year, the interest is r/(100*n) and, after t years, we have

P(t) = P_0 (1 + r/(100*n))^(n t)

The time t_d it takes for doubling is obtained from

2 P_0 = P_0 (1 + r/(100*n))^(n t)
or 2 = (1 + r/(100*n))^(n t)
or t = Log[2]/Log[1 + r/(100*n)]/n

Compounding continuously gives

P(t) = P_0 e^(r t/100)

So, the doubling time is given by

2 = e^(r t/100)
or r t = 100 Log[2], or
t = (100/r) Log[2] *)

(* So, let's do some calculations *)

DoublingTime[r_, n_] := N[Log[2]/Log[1 + r/(100*n)]/n]
CDT[r_] := N[100*Log[2]/r]

(* Here DoublingTime is the doubling time when compounded n times per year, and CDT is the continuous doubling time *)
In[32]:= n = 1; Table[Print["Rate = ", r, "%", " Compounding = ", n, " DT = ", DoublingTime[r, n], " CDT = ", CDT[r]], {r, 1, 10}];

Rate = 1% Compounding = 1 DT = 69.6607 CDT = 69.3147
Rate = 2% Compounding = 1 DT = 35.0028 CDT = 34.6574
Rate = 3% Compounding = 1 DT = 23.4498 CDT = 23.1049
Rate = 4% Compounding = 1 DT = 17.673 CDT = 17.3287
Rate = 5% Compounding = 1 DT = 14.2067 CDT = 13.8629
Rate = 6% Compounding = 1 DT = 11.8957 CDT = 11.5525
Rate = 7% Compounding = 1 DT = 10.2448 CDT = 9.9021
Rate = 8% Compounding = 1 DT = 9.00647 CDT = 8.66434
Rate = 9% Compounding = 1 DT = 8.04323 CDT = 7.70164
Rate = 10% Compounding = 1 DT = 7.27254 CDT = 6.93147
In[33]:= n = 12; Table[Print["Rate = ", r, "%", " Compounding = ", n, " DT = ", DoublingTime[r, n], " CDT = ", CDT[r]], {r, 1, 10}];

Rate = 1% Compounding = 12 DT = 69.3436 CDT = 69.3147
Rate = 2% Compounding = 12 DT = 34.6862 CDT = 34.6574
Rate = 3% Compounding = 12 DT = 23.1338 CDT = 23.1049
Rate = 4% Compounding = 12 DT = 17.3575 CDT = 17.3287
Rate = 5% Compounding = 12 DT = 13.8918 CDT = 13.8629
Rate = 6% Compounding = 12 DT = 11.5813 CDT = 11.5525
Rate = 7% Compounding = 12 DT = 9.93096 CDT = 9.9021
Rate = 8% Compounding = 12 DT = 8.69319 CDT = 8.66434
Rate = 9% Compounding = 12 DT = 7.73048 CDT = 7.70164
Rate = 10% Compounding = 12 DT = 6.96031 CDT = 6.93147
In[34]:= n = 365; Table[Print["Rate = ", r, "%", " Compounding = ", n, " DT = ", DoublingTime[r, n], " CDT = ", CDT[r]], {r, 1, 10}];

Rate = 1% Compounding = 365 DT = 69.3157 CDT = 69.3147
Rate = 2% Compounding = 365 DT = 34.6583 CDT = 34.6574
Rate = 3% Compounding = 365 DT = 23.1059 CDT = 23.1049
Rate = 4% Compounding = 365 DT = 17.3296 CDT = 17.3287
Rate = 5% Compounding = 365 DT = 13.8639 CDT = 13.8629
Rate = 6% Compounding = 365 DT = 11.5534 CDT = 11.5525
Rate = 7% Compounding = 365 DT = 9.90305 CDT = 9.9021
Rate = 8% Compounding = 365 DT = 8.66529 CDT = 8.66434
Rate = 9% Compounding = 365 DT = 7.70258 CDT = 7.70164
Rate = 10% Compounding = 365 DT = 6.93242 CDT = 6.93147
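The same doubling times can be reproduced without Mathematica. A Python sketch using the formulas derived above (function names are our own):

```python
import math

def doubling_time(r, n):
    """Years to double at r% compounded n times per year:
    solve 2 = (1 + r/(100 n))^(n t) for t."""
    return math.log(2) / (n * math.log(1 + r / (100 * n)))

def cdt(r):
    """Continuous-compounding doubling time: t = 100 log(2) / r."""
    return 100 * math.log(2) / r

annual_5 = doubling_time(5, 1)     # table above: 14.2067
daily_5 = doubling_time(5, 365)    # table above: 13.8639
cont_5 = cdt(5)                    # table above: 13.8629
```

The values match the tabulated ones to the printed precision; note how little difference daily versus continuous compounding makes.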
Mixing Problems

Suppose a tank contains a solution of Q0 lbs of salt dissolved in 100 gallons of water. Assume that a solution containing 1/4 lb of salt per gallon is poured into the tank at a rate of r gal/min. Assume that the solution mixes instantaneously and that the combined solution is drained from the tank at the same rate of r gal/min.
1. How much salt is there in the tank at time t > 0?
2. Find the limiting amount QL as t→∞.
3. If r = 3 and Q0 = 2QL, find the time T for Q(T) to be within 2% of QL.
4. What must r be for T to be no larger than 45 minutes?
Solution: Let Q(t) be the amount of salt in the tank at time t. We first find Q(t); then we will see that the other questions can be answered simply.

Let Q_in(t) denote the amount of salt that has flowed into the tank by time t, and Q_out(t) the amount that has flowed out of the tank by time t. Since the number of gallons flowing into the tank equals the number of gallons flowing out, the total number of gallons remains fixed at 100.
Then,
Q(t) = Q0 + Qin −Qout
and
Q′(t) = Q′_in(t) − Q′_out(t)
Now,
Q′_in = r/4 lb/min
and
Q′_out = (amount of salt per gallon) × (number of gallons flowing out per minute) = (Q(t)/100) × r

Hence, we get the d.e.

Q′ = r/4 − (Q(t)/100) r,

or

Q′ + (r/100) Q(t) = r/4.
This is a linear d.e., with solution obtained from

µ = e^{rt/100}

Q(t) = e^{−rt/100} [ ∫ (r/4) e^{rt/100} dt + C ]

Q(t) = 25 + C e^{−rt/100}

Q0 = 25 + C, C = Q0 − 25

QL = 25.

Q0 = 2QL ⇒ Q0 = 50, C = 25.

Find T such that Q(T) − QL < (.02) QL, i.e.

Q(T) < (1.02) · 25.
Plug into the above and get

Q(T) = 25 + 25 e^{−3T/100} < (1.02) · 25.
Then, solve for T.

Observe that if we had different rates r_in of input and r_out of output, and we let V(t) be the volume in the tank at time t, then we would get the relations
V(t) = V(0) + t(r_in − r_out),

and

Q′_in = (amount of salt per gal coming in) × (number of gallons per unit time coming in),

and

Q′_out = (amount of salt per gal going out) × (number of gallons per unit time going out) = (Q(t)/V(t)) × r_out.
So,

Q′ = Q′_in − Q′_out = (amount of salt per gal coming in) × r_in − (Q(t)/V(t)) × r_out.
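For the specific questions in this example, the numbers can be computed directly. A Python sketch (our own names), using Q(T) = 25 + 25 e^{−rT/100} from above:

```python
import math

# Within 2% of Q_L = 25 means 25 e^{-rT/100} = 0.02 * 25,
# i.e. T = (100/r) log(50).
T_r3 = (100 / 3) * math.log(50)      # question 3: r = 3 gal/min
r_min = (100 / 45) * math.log(50)    # question 4: smallest r with T <= 45 min

Q = lambda t, r: 25 + 25 * math.exp(-r * t / 100)
check = Q(T_r3, 3)                   # should equal 1.02 * 25 = 25.5
```

So with r = 3 it takes about 130.4 minutes to get within 2% of the limiting amount, and a rate of roughly r ≈ 8.7 gal/min is needed to do it within 45 minutes.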
Newton's Law of Cooling:

Assume a solid body B with initial temperature Θ0 (at time t = 0) is immersed in an ambient fluid whose temperature is kept at the constant value T. Let Θ(t) denote the temperature of the body at time t.

Newton's law of cooling states that
August 21, 2004 4-11
Θ′(t) = k(Θ(t)− T )
for some constant k. That is, the rate of change of the temperature of B at time t > 0 is proportional to the difference of the temperature of B and the temperature T of the ambient fluid.
Let us solve this d.e. We have

dΘ/dt = k(Θ − T)

dΘ/(Θ − T) = k dt

log(Θ − T) = kt + c

Θ − T = C e^{kt}

Θ = T + C e^{kt} = T + (Θ0 − T) e^{kt}
We have three parameters T, Θ0, k to determine.
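With measured data, the parameters can be fitted from a few temperature readings. The Python sketch below uses hypothetical numbers, a body at 100° placed in a 20° fluid and measuring 60° after 10 minutes; these values are our own illustration, not from the notes.

```python
import math

# Hypothetical data: Theta0 = 100, ambient T = 20, reading of 60 at t = 10 min
T, Theta0 = 20.0, 100.0
k = math.log((60.0 - T) / (Theta0 - T)) / 10.0        # fit k from one reading

Theta = lambda t: T + (Theta0 - T) * math.exp(k * t)  # Theta = T + (Theta0 - T) e^{kt}
at_20 = Theta(20.0)   # 20 + 80 * 0.25 = 40 degrees after 20 minutes
```

Note k comes out negative, as it must for cooling toward the ambient temperature in this sign convention.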
Problems for section 2.5: pp 54-57, 1,3,4,6,7,19,23,25,26
5. Exact Equations, Integrating Factors, and Homogeneous Equations
Exact Equations

A region D in the plane is a connected open set, that is, a subset which cannot be decomposed into two non-empty disjoint open subsets.

The region D is called simply connected if it contains no "holes"; alternatively, if any two continuous curves in D can be continuously deformed into one another. We do not make this precise here, but rely on standard intuition.
A differential equation of the form
M(x, y)dx + N(x, y)dy = 0 (1)
is called exact in a region D in the plane if we have equality of the partial derivatives

My(x, y) = Nx(x, y)

for all (x, y) ∈ D.

If the region D is simply connected, then we can find a function f(x, y) defined in D such that
defined in D such that
fx = M, and fy = N.
Then, we say that the general solution to (1) is the equation
f(x, y) = C.
This is because the differential equation can be written as
df = 0.
Here we will not develop the complete theory of exact equations, but will simply give examples of how they are dealt with.
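One way to check the exactness condition My = Nx without computing derivatives by hand is a finite-difference test. A Python sketch (our own helper name), applied to the example below and to the non-exact pair from the integrating-factor example later in this section:

```python
def exact_mismatch(M, N, x, y, h=1e-5):
    """Centered-difference estimate of M_y - N_x at (x, y);
    near zero everywhere means M dx + N dy = 0 is exact."""
    My = (M(x, y + h) - M(x, y - h)) / (2 * h)
    Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    return My - Nx

# Exact pair from the example below: M = 3x^2 y^2 - 3y^2, N = 2x^3 y - 6xy + 3y^2
M = lambda x, y: 3 * x ** 2 * y ** 2 - 3 * y ** 2
N = lambda x, y: 2 * x ** 3 * y - 6 * x * y + 3 * y ** 2
gap_exact = exact_mismatch(M, N, 1.3, 0.7)

# Non-exact pair from the integrating-factor example: M = 3xy + y^2, N = x^2 + xy
M2 = lambda x, y: 3 * x * y + y ** 2
N2 = lambda x, y: x ** 2 + x * y
gap_not = exact_mismatch(M2, N2, 1.3, 0.7)   # analytically My - Nx = x + y = 2.0
```

The first gap vanishes (exact), the second equals x + y (not exact), matching the hand computations in the examples.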
Example. Find the general solution to

(3x²y² − 3y²) dx + (2x³y − 6xy + 3y²) dy = 0.
Step 1: Check to see if My = Nx.

M = 3x²y² − 3y², N = 2x³y − 6xy + 3y²

My = 6x²y − 6y, Nx = 6x²y − 6y

So, it is exact. Then,

f = ∫ M dx + g(y) = x³y² − 3xy² + g(y)

fy = N: 2x³y − 6xy + g′(y) = 2x³y − 6xy + 3y²

3y² = g′(y), g(y) = y³

So, we get

x³y² − 3xy² + y³ = C
as the general solution.

Integrating Factors

Sometimes a d.e. M dx + N dy = 0 is not exact, but can be made exact by multiplying by a non-zero function. Let us see when this can be done with functions of x or y alone.

Consider a non-zero function µ(x), depending on x alone, such that
(µM)y = (µN)x
We get
µyM + µMy = µxN + µNx, µy = 0
So,
µMy = µxN + µNx
µ(My − Nx) = µx N

(My − Nx)/N = µx/µ
Now, if the left-hand side is a function of x alone, say h(x), we can solve for µ(x) by

µ(x) = e^{∫ h(x) dx},

and reverse the above arguments to get an integrating factor. Similarly, if

(Nx − My)/M = g(y)

is a function of y alone, we can find an integrating factor of the form

ν(y) = e^{∫ g(y) dy}.
Example: Consider the equation

(3xy + y²) dx + (x² + xy) dy = 0
M = 3xy + y², N = x² + xy
Step 1: Check if exact
My − Nx = 3x + 2y − 2x − y = x + y
So, not exact.

Step 2: Compute

(My − Nx)/N = (x + y)/(x² + xy) = 1/x

So, we get an integrating factor of the form

µ(x) = e^{∫(1/x) dx} = e^{log x} = x
So,
(3x²y + xy²) dx + (x³ + x²y) dy = 0
is exact.
fx = M = 3x²y + xy², so f = x³y + x²y²/2 + g(y). Then fy = x³ + x²y + g′(y) = N = x³ + x²y, so g′(y) = 0, and

f(x, y) = x³y + x²y²/2 = C

is the general solution.

Homogeneous Equations

A function f(x, y) is called homogeneous (of order p) if
f(tx, ty) = t^p f(x, y)

for all t > 0.

If M and N are homogeneous of the same degree, then the differential equation
equation
dy/dx = M(x, y)/N(x, y)

can be reduced to a separable equation for v(x) by the change of variable

y(x) = x v(x).
To see this we calculate:
y′ = v + x v′ = M(x, xv)/N(x, xv) = x^p M(1, v)/(x^p N(1, v)) = M(1, v)/N(1, v)
So,

v + x v′ = H(v), where H(v) = M(1, v)/N(1, v).

This gives

x v′ = H(v) − v,

which is separable.

Problems for Sections 2.8, 2.9:
p. 88, 89 1-10, 25-29 p. 93, 1,3,5,7,9
6. Linear Differential Equations of the Second Order
A differential equation of the form
L(y) = g
is called linear if L is a linear operator and g = g(t) is continuous. The most general second order linear differential equation has the form
P (t)y′′ + Q(t)y′ + R(t)y = G(t)
where P, Q, R, G are continuous functions defined on an interval I. Assuming that P(t) ≠ 0 for t ∈ I, we can divide through by P(t) and rewrite this d.e. as

y′′ + p(t)y′ + q(t)y = g(t) (1)
where p, q, g are all continuous on the interval I.Analogously, we write the IVP
y′′ + p(t)y′ + q(t)y = g(t), y(t0) = y0, y′(t0) = y′0 (2)
where p, q, g are all continuous on the interval I, t0 ∈ I, and y0, y′0 are
given constants.

The following is an important theorem, usually proved in a more advanced course.

Theorem (Existence-Uniqueness Theorem for Second Order Linear Differential Equations). Let p(t), q(t), g(t) be continuous functions on the interval I, let t0 ∈ I, and let y0, y′0 be given constants. Then, there is a unique solution y(t) to the IVP (2) which is defined on the whole interval I.

We are concerned with finding the general solution to (1), and solving initial value problems.

Given equation (1), the associated homogeneous equation is the d.e.
y′′ + p(t)y′ + q(t)y = 0 (3)
A consequence of the next result is that, in order to find the general solution to (1), it suffices to
1. find the general solution yh to (3), (4)
and
2. find a particular solution yp to (1). (5)
The general solution to (1) is then obtained as
y = yh + yp.
Theorem. Let yp(t) be a particular solution to (1). Then, every solution y(t) to (1) can be expressed as y(t) = y1(t) + yp(t), where y1(t) is a solution to (3). Conversely, for any solution y1(t) of (3), the function y(t) = y1(t) + yp(t) is a solution to (1).
Proof. Let yp(t) be a particular solution to (1), and let y(t) be any other solution to (1). Consider the function
to (1). Consider the function
y1(t) = y(t)− yp(t).
We clearly have y(t) = y1(t) + yp(t). Let us verify that
y1 is a solution to (3). (6)
By linearity,
L(y1) = L(y(t)− yp(t)) = L(y(t))− L(yp(t)) = g(t)− g(t) = 0,
which verifies (6).

Converse: Let y1(t) be a solution to (3), and let y(t) = y1(t) + yp(t). Then,
L(y) = L(y1 + yp) = L(y1) + L(yp) = 0 + g(t) = g(t),
so, y is a solution to (1). QED.

In view of the preceding theorem, we need to study methods to handle the problems (4) and (5). We begin with (4).
It turns out that to solve this problem, it suffices to find two solutions which satisfy a condition called linear independence.
Definition. A pair of functions y1(t), y2(t) defined on an interval I is called a linearly independent pair of functions (on I) if, whenever there are constants c1, c2 such that

c1 y1(t) + c2 y2(t) = 0, ∀t ∈ I,

we have c1 = c2 = 0. This means that if c1 y1 + c2 y2 is the zero function on I, it follows that c1 = c2 = 0.

We state some theorems which allow us to find the general solution to second order homogeneous linear differential equations. We will justify the theorems later.

Theorem. Let y1(t), y2(t) be a linearly independent pair of solutions to (3) on the interval I. Then, the general solution to (3) has the form
y(t) = c1y1(t) + c2y2(t).
Definition. A 2 × 2 matrix is an array of the form

A = ( a11  a12
      a21  a22 )

where the aij are real or complex numbers. When they are real, we say that A is a real matrix.
Definition. The determinant det(A) of the 2×2 matrix A is the number
det(A) = a11a22 − a12a21.
Definition. The Wronskian at t0 of the two functions y1, y2 is the determinant

W(y1, y2)(t0) = det ( y1(t0)   y2(t0)
                      y1′(t0)  y2′(t0) )

We also call the function W(y1, y2)(t) the Wronskian or Wronskian function of y1 and y2.
Theorem. Two solutions y1, y2 of the equation (3) are linearly independent on I if W(y1, y2)(t) ≠ 0 for some (or any) t ∈ I.
Second Order Linear Homogeneous Differential Equations with Constant Coefficients:
These have the form
ay′′ + by′ + cy = 0 (7)
where a, b, c are constants and a ≠ 0.

Let us first try to find a solution of the form
y = e^{rt} (8)
where r is a constant. Differentiating, we get
ay′′ + by′ + cy = a r² e^{rt} + b r e^{rt} + c e^{rt} = (a r² + b r + c) e^{rt} = 0
Since e^{rt} is never zero, the only way we could possibly get a solution of the form (8) is for r to be a root of the polynomial
q(r) = a r² + b r + c.
This last polynomial is called the characteristic polynomial of the d.e.(7), and the equation
q(r) = a r² + b r + c = 0 (9)
is called the characteristic equation of (7).Proceding in the reverse order, we also see that if r1 is a root of the
characteristic equation, then, indeed, y(t) = er1t is a solution of (7).Also, if the characteristic equation has two distinct real roots, r1, r2, then
we get two solutions of the form
y1(t) = e^{r1 t},   y2(t) = e^{r2 t}.
Let us see that these turn out to be linearly independent solutions.
We compute the Wronskian at t = 0:

W(y1, y2)(0) = det( y1(0)  y2(0); y1′(0)  y2′(0) ) = det( 1  1; r1  r2 ) = r2 − r1 ≠ 0.
Since the Wronskian is non-zero at t = 0, it is non-zero everywhere (indeed W(y1, y2)(t) = (r2 − r1)e^{(r1+r2)t}), so we do have linearly independent solutions.
Hence, the general solution in the case of real distinct roots r1, r2 of (9)is
y(t) = c1 e^{r1 t} + c2 e^{r2 t}.
Examples:
1. y′′ − 3y′ − 4y = 0.
Find the roots of r² − 3r − 4. Factoring the polynomial, we get

r² − 3r − 4 = (r − 4)(r + 1).

So, the roots are r1 = 4, r2 = −1.

General Solution: y = c1 e^{4t} + c2 e^{−t}.
2. y′′ + 3y′ + y = 0
Characteristic equation: r² + 3r + 1 = 0.

Use the quadratic formula:

r = (−3 ± √5)/2.
So, general solution:
y(t) = c1 e^{((−3+√5)/2) t} + c2 e^{((−3−√5)/2) t}.
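Root computations like these are easy to double-check with a computer algebra system, in the spirit of the software mentioned in the introduction. Here is a minimal sketch using Python's SymPy library (an assumption; the same check works in Maxima, Maple, or Mathematica):

```python
import sympy as sp

t, r = sp.symbols('t r')

# Characteristic equation of y'' + 3y' + y = 0
roots = sp.solve(r**2 + 3*r + 1, r)   # the two roots (-3 ± sqrt(5))/2

# Each root ri gives a candidate solution e^{ri t}; substitute to confirm.
for ri in roots:
    y = sp.exp(ri*t)
    residual = sp.simplify(sp.diff(y, t, 2) + 3*sp.diff(y, t) + y)
    assert residual == 0
```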
Now, we know that, given a second degree polynomial q(r), we have three possibilities for its roots r1, r2:

Case 1. r1 ≠ r2, both real;

Case 2. r1 = r2;

Case 3. r1 = a + bi, r2 = a − bi, where i = √−1.
So, finding the general solution to a homogeneous second order linear d.e. with constant coefficients also involves those three cases.
We have already dealt with Case 1.

Case 2: r1 = r2. That is, q(r) = a(r − r1)².

Here we already have one non-zero solution y1(t) = e^{r1 t}. We claim that the function y2(t) = t e^{r1 t} is a second, linearly independent solution. In proceeding to verify this, it will be useful to recall the formula for the second derivative of a product.
(fg)′′ = f′′g + 2f′g′ + fg′′.
Now, let us verify that y2 is a solution. Note that, since r1 is a root of multiplicity two, we have

a r1² + b r1 + c = 0, and 2a r1 + b = 0.

We have

a y2′′ + b y2′ + c y2 = a(2r1 e^{r1 t} + t r1² e^{r1 t}) + b(e^{r1 t} + t r1 e^{r1 t}) + c t e^{r1 t}

= (2a r1 + b)e^{r1 t} + (a r1² + b r1 + c)t e^{r1 t} = 0 · e^{r1 t} + 0 · t e^{r1 t} = 0.
Hence, y2 is a solution. Now, let us verify that the pair y1(t), y2(t) is a linearly independent pair. We compute the Wronskian:
W(y1, y2) = det( y1  y2; y1′  y2′ )

= det( e^{r1 t}   t e^{r1 t}; r1 e^{r1 t}   e^{r1 t} + t r1 e^{r1 t} )

= e^{2r1 t} + t r1 e^{2r1 t} − t r1 e^{2r1 t} = e^{2r1 t} ≠ 0.
Hence, the general solution is:
y(t) = c1 e^{r1 t} + c2 t e^{r1 t}.
Case 3: r1 = a + bi with b ≠ 0. Here we will make use of complex variables. Recall the formula

e^{a+bi} = e^a(cos(b) + i sin(b)).

We first verify that the complex valued function

y(t) = e^{(a+bi)t}

is a solution to our d.e. It turns out that the real and imaginary parts of this complex solution give linearly independent solutions to the d.e. What are these real and imaginary parts?

e^{(a+bi)t} = e^{at}(cos(bt) + i sin(bt)) (10)

So, the real part is

e^{at} cos(bt)

and the imaginary part is

e^{at} sin(bt).

Hence, the general solution is

y(t) = e^{at}(c1 cos(bt) + c2 sin(bt)).
Examples:
1. Find the general solution to
y′′ + 6y′ + 9y = 0.

Solution: r² + 6r + 9 = (r + 3)², so the answer is:

y(t) = c1 e^{−3t} + c2 t e^{−3t}.
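A quick symbolic check of the repeated-root recipe (a sketch assuming SymPy): the characteristic polynomial r² + 6r + 9 = (r + 3)² has the double root r = −3, so substituting (c1 + c2 t)e^{−3t} into the equation should give zero for arbitrary constants.

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

# r^2 + 6r + 9 = (r + 3)^2 has the double root r = -3.
y = (c1 + c2*t)*sp.exp(-3*t)
residual = sp.simplify(sp.diff(y, t, 2) + 6*sp.diff(y, t) + 9*y)
assert residual == 0
```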
2. Find the general solution to
y′′ + y′ + 3y = 0.
Solution:
Step 1: Roots of characteristic equation.
r² + r + 3 = 0.

r = (−1 ± √(1 − 12))/2 = −1/2 ± (√11/2)i.

General Solution:

y(t) = e^{−t/2}(c1 cos(√11 t/2) + c2 sin(√11 t/2)).
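The complex-root case can be checked the same way; here is a sketch (assuming SymPy) verifying that the real part e^{−t/2} cos(√11 t/2) solves the equation:

```python
import sympy as sp

t = sp.symbols('t')

# Roots of r^2 + r + 3 = 0 are -1/2 ± i*sqrt(11)/2; take the real part
# of e^{(-1/2 + i*sqrt(11)/2)t} as a trial solution.
y = sp.exp(-t/2)*sp.cos(sp.sqrt(11)*t/2)
residual = sp.simplify(sp.diff(y, t, 2) + sp.diff(y, t) + 3*y)
assert residual == 0
```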
Problems: p. 128: odd 1-15, p. 138: 1-6, p. 144: odd 1-7, p. 150: odd1-15, p. 159: odd 1-13
7. Some Special Second Order Equations
There are certain second order differential equations, even non-linear, which reduce to first order equations. We will describe some of these now.
Type 1:
y′′ = f(x, y′).
Here the variable y is missing from the right-hand side. We proceed as follows. Set v = y′. We get
y′′ = v′ = f(x, v)
Thus, we get a first order d.e. for v. If we can use our known methods to solve this, then we get y by integrating v.
Example 1:
y′′ = x(y′)²
Set v = y′. Then,
y′′ = v′ = xv²
is a separable d.e. We solve it.
dv/v² = x dx

−1/v = x²/2 + C

v = 1/(−x²/2 − C) = 1/(C − x²/2)   (renaming the constant C)

 = 2/(C1² − x²)   (writing C = C1²/2)

 = 1/(C1(C1 + x)) + 1/(C1(C1 − x))
So,
y′ = 1/(C1(C1 + x)) + 1/(C1(C1 − x)),

which gives

y = (1/C1) log(C1 + x) − (1/C1) log(C1 − x) + C2

as the general solution.

Type 2:
y′′ = f(y, y′).
Here the independent variable x is missing. Again, we set v = y′ and get
v′ = f(y, v).
We try to treat y as a new independent variable. Then,

v′ = dv/dx = (dv/dy)(dy/dx) = (dv/dy) v.
The equation becomes
y′′ = v′ = (dv/dy) v = f(y, v),

or

dv/dy = (1/v) f(y, v).
We solve this for v as a function of y, and then integrate y′ = v to get y.

Example:
y′′ = yy′
Setting y′ = v, we get

y′′ = (dv/dy) v = yv.

So,

(dv/dy) v = yv,

or,

dv = y dy

v = y²/2 + C

y′ = y²/2 + C

dy/(y²/2 + C) = dx
Then, we solve for y(x) as before.

Problems: p. 129: 28-31, p. 130: 34-37.
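Looking back at Example 1 of Type 1, the reduced first-order equation gives a convenient check on the answer: the function v = y′ found there must satisfy v′ = xv². A sketch with SymPy (assumed available):

```python
import sympy as sp

x, C1 = sp.symbols('x C1')

# v = y' from the Type-1 example y'' = x (y')^2, in partial-fraction form.
v = 1/(C1*(C1 + x)) + 1/(C1*(C1 - x))

# v must satisfy the reduced first-order equation v' = x v^2.
residual = sp.simplify(sp.diff(v, x) - x*v**2)
assert residual == 0
```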
8. Reduction of Order and more on complex roots
Reduction of Order:

Suppose we are given a general homogeneous second order d.e.
L(y) = y′′ + p(t)y′ + q(t)y = 0. (1)
We know that, in order to find the general solution, it suffices to find two linearly independent solutions. It turns out that, if we can find one non-zero solution, then a second independent solution can always be found (as usual, up to integration) by a method called reduction of order.

Here is how it works. Suppose y1 is one non-zero solution to (1). Let us try to find a second solution y2 = y1v, where v is a non-constant function. For y2 to be a solution, we have
(y1v)′′ + p(y1v)′ + qy1v = 0
or
y1′′v + 2y1′v′ + y1v′′ + p y1′v + p y1v′ + q y1v = 0

v(y1′′ + p y1′ + q y1) + v′(2y1′ + p y1) + y1v′′ = 0

v′(2y1′ + p y1) + y1v′′ = 0
since L(y1) = 0.

Now, y1 and p are known, so we get a first order linear d.e. for v′. We solve this for v′, then integrate to get v, and then go back to get an actual solution y2 = y1v of L(y) = 0.
Since v is not constant, we clearly get that y2 = y1v and y1 are linearly independent functions.
Example 1: The function y(t) = t³ + t is a solution to the d.e.
y′′ + 3ty′ − 9y = 0.
Find the general solution.

Write y2 = yv = (t³ + t)v for some unknown v. Then,
y2′′ + 3t y2′ − 9y2

= 6tv + 2(3t² + 1)v′ + (t³ + t)v′′ + 3t((3t² + 1)v + (t³ + t)v′) − 9(t³ + t)v

= (6t² + 2)v′ + (3t⁴ + 3t²)v′ + (t³ + t)v′′ = 0

or

v′(2 + 9t² + 3t⁴) + t(t² + 1)v′′ = 0

v′′/v′ = −(2 + 9t² + 3t⁴)/(t(t² + 1))

log(v′) = −∫ (2 + 9t² + 3t⁴)/(t(t² + 1)) dt

v′ = exp(−∫ (2 + 9t² + 3t⁴)/(t(t² + 1)) dt)

v = ∫ exp(−∫ (2 + 9t² + 3t⁴)/(t(t² + 1)) dt) dt
General solution:

y(t) = c1(t³ + t) + c2(t³ + t)v(t)
Example 2: Let us illustrate this with the constant coefficient case with a multiple root.
Consider
L2(y) = y′′ + py′ + qy = 0
where p, q are constants, p ≠ 0, and p² − 4q = 0. Then, if r1 = −p/2, we have

r1² + p r1 + q = 0 and 2r1 + p = 0.
We know that y1 = e^{r1 t} is a solution to L2(y) = 0. Let us try to find another solution of the form v e^{r1 t}. From the above computation, we get
v((e^{r1 t})′′ + p(e^{r1 t})′ + q e^{r1 t}) + v′(2(e^{r1 t})′ + p e^{r1 t}) + e^{r1 t} v′′ = 0,

which simplifies to e^{r1 t} v′′ = 0, or v = ct + d. We might as well take c = 1, d = 0, and get v(t) = t. Thus, we get the second linearly independent solution as y2(t) = t e^{r1 t} (which we only stated before).
Review of Complex Numbers:

A complex number is an expression of the form z = a + bi, where i = √−1.
The number a is called the real part of z, and the number b is called the imaginary part of z.
We define addition and multiplication of complex numbers as follows.
(a + bi) + (c + di) = (a + c) + (b + d)i
(a + bi)(c + di) = ac− bd + (ad + bc)i
The usual rules of arithmetic hold. Note that i² = −1.

Complex numbers may be thought of as vectors in the plane with a + bi
corresponding to the vector (a, b). Note that this makes i = (0, 1). Then, complex addition is simply vector addition. Complex multiplication is harder to see geometrically.

If z = a + bi, w = c + di are non-zero complex numbers, we can write the
vectors (a, b) and (c, d) in polar coordinates asz = r1(cos(θ1), sin(θ1)), w = r2(cos(θ2), sin(θ2)). Then, zw has the polar
form
zw = r1r2(cos(θ1 + θ2), sin(θ1 + θ2)).
In the polar coordinate representation z = r(cos(θ), sin(θ)), we call r the magnitude of z and θ the argument of z, denoted Arg(z). The magnitude |z| of z can be found by the formula

|a + bi| = √(a² + b²).
Another way to look at these expressions is the following. Write

e^{iy} = cos(y) + i sin(y).

Then, if z is a non-zero complex number, we write

z = re^{iθ} = |z| e^{i Arg(z)}.

We define the complex exponential by

e^{a+bi} = e^a e^{bi} = e^a(cos(b) + i sin(b)).

Then, we can remember the product geometry as follows. Let z = e^{a+bi}, w = e^{c+di}. Then,

zw = e^{a+bi} e^{c+di} = e^{a+c} e^{(b+d)i}.
We can differentiate and integrate complex valued functions just as we do for real-valued functions, keeping track of i² = −1 when necessary.
Let us apply this concept. Consider the d.e.
y′′ + py′ + qy = 0
where p, q are real constants.Assume that the characteristic polynomial z(r) = r2 + pr + c has roots
of the form r = α± iβ with β 6= 0.We get a complex valued solution of the form
yc(t) = e^{(α+iβ)t}.

Its real and imaginary parts are

y1(t) = e^{αt} cos(βt),   y2(t) = e^{αt} sin(βt).

In general, one can show that if L(y) is a linear second order differential operator with real coefficients, and yc is a non-zero complex solution to L(y) = 0, then the real and imaginary parts of yc are linearly independent solutions to L(y) = 0. This is how one justifies the solutions we wrote down in the case of characteristic equations with non-real roots.
Problems: p. 161, 23-25
9. Particular Solutions of Non-homogeneous second order equations–Undetermined Coefficients
We have seen that in order to find the general solution to the second order differential equation
y′′ + p(t)y′ + q(t)y = g(t) (1)
where p, q, g are continuous functions in an interval I, it suffices to find two linearly independent solutions to the associated homogeneous equation
y′′ + p(t)y′ + q(t)y = 0 (2)
and one particular solution to (1).

Here we will describe some methods for finding particular solutions.

Method 1: Undetermined coefficients

This method is useful when the differential equation has constant coefficients and the function g(t) has a special form: some linear combination of functions of the form tⁿ, e^{αt}, e^{αt} sin(βt), e^{αt} cos(βt). One needs to know the form of a solution to try. The correct forms are listed in the next table.
As an alternative to the last form

g_i(t) = e^{αt} cos(βt),

one may recognize that this is the real part of e^{(α+iβ)t}, assume the second form with complex coefficients and exponential, and then take the real part at the end. Similarly, if we are given

g_i(t) = e^{αt} sin(βt),

we recognize this as the imaginary part of e^{(α+iβ)t}. Then, we assume the second form with complex coefficients and take the imaginary part at the end.
Let us begin with some examples.
Example 1.
y′′ + 2y′ − 3y = 2t².

Let us try to find a solution of the form

y = At² + Bt + C.

Plugging in, we get

2A + 2(2At + B) − 3(At² + Bt + C) = 2t².
The coefficients of t², t, 1 have to be equal on both sides, so we get
−3A = 2, 4A− 3B = 0, 2A + 2B − 3C = 0.
Solving these equations for A, B, C we get
A = −2/3,   B = −8/9,   C = −4/9 − 16/27 = −28/27.
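The three coefficient-matching equations are linear in A, B, C, so they can be solved mechanically; a sketch with SymPy (assumed available):

```python
import sympy as sp

t, A, B, C = sp.symbols('t A B C')

# Trial solution for y'' + 2y' - 3y = 2t^2.
y = A*t**2 + B*t + C
residual = sp.expand(sp.diff(y, t, 2) + 2*sp.diff(y, t) - 3*y - 2*t**2)

# Matching coefficients of t^2, t, 1 gives three linear equations.
coeffs = sp.solve([residual.coeff(t, k) for k in range(3)], [A, B, C])
```

This reproduces A = −2/3, B = −8/9, C = −28/27.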
Example 2.
y′′ + 2y′ − 3y = 4e^{3t}.

Here we try y = Ae^{3t}. We get

9Ae^{3t} + 6Ae^{3t} − 3Ae^{3t} = 4e^{3t},

12A = 4,   A = 1/3.
Example 3.

y′′ + 2y′ − 3y = 2t² + 4e^{3t}. (3)
Here we use the superposition principle.
If we have the equation
L(y) = g1(t) + g2(t) (4)
and y1, y2 are solutions to L(y) = g1, L(y) = g2, respectively, then y1 + y2 is a solution to (4).
So, we get the particular solution

y(t) = (1/3)e^{3t} − (2/3)t² − (8/9)t − 4/9 − 16/27

to (3).
Example 4.

y′′ + 2y′ − 3y = 4e^{−3t}. (5)
We again try y = Ae^{−3t}. We get

9Ae^{−3t} − 6Ae^{−3t} − 3Ae^{−3t} = 4e^{−3t}.

This has no solution. The problem is that −3 is a root of the characteristic polynomial r² + 2r − 3.
In this case, we try y = Ate^{−3t}. We get

y′ = Ae^{−3t} − 3Ate^{−3t},

y′′ = −6Ae^{−3t} + 9Ate^{−3t},

so

(−6A + 2A)e^{−3t} + (9A − 6A − 3A)te^{−3t} = −4Ae^{−3t} = 4e^{−3t},

−4A = 4,   A = −1.

Hence, we get the solution y = −te^{−3t}.
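And a check of this resonant case (a sketch, assuming SymPy): y = −te^{−3t} should solve y′′ + 2y′ − 3y = 4e^{−3t}.

```python
import sympy as sp

t = sp.symbols('t')

# Because -3 is a root of r^2 + 2r - 3, the trial solution carries
# an extra factor of t: y = A t e^{-3t} with A = -1.
y = -t*sp.exp(-3*t)
residual = sp.simplify(
    sp.diff(y, t, 2) + 2*sp.diff(y, t) - 3*y - 4*sp.exp(-3*t))
assert residual == 0
```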
What is going on with the method?
Here it helps to go to the abstract situation.
Consider
y′′ + py′ + qy = Be^{αt}.

We try to find a solution of the form y = Ae^{αt}. We get

A(α² + pα + q)e^{αt} = Be^{αt}.

If α is not a root of z(r) = r² + pr + q, then we get

A = B/z(α).
If α is a root of multiplicity one, and we try y = Ate^{αt}, we find

Ate^{αt}(α² + pα + q) + Ae^{αt}(2α + p) = Be^{αt}.

This is the same as

Ate^{αt} z(α) + Ae^{αt} z′(α) = Be^{αt}.

Since z(α) = 0, we get

A = B/z′(α).
Similarly, if α is a root of multiplicity 2 of z(r), we get

y = At²e^{αt}

where A = B/2 = B/z′′(α).

In general, we try a function of the form y = A t^s e^{αt}, where s is the multiplicity of α as a root of z(r).
Example 5. Similar methods apply to right hand sides of the form

e^{αt} sin(βt),   e^{αt} cos(βt).

For instance, consider

y′′ + 2y′ − 3y = 3 cos(2t),   i.e.,   y′′ + 2y′ − 3y = Re(3e^{2it}).

Since 2i is not a root of z(r) = 0, we get a complex solution of the form y(t) = Ae^{2it}. Its real part is a real solution.
In applying this method it is usually necessary to divide by complex numbers and then take real or imaginary parts.
For instance, we may have the number

(a + bi)/(c + di),

and we have to take the real or imaginary part.
Here it is useful to transform the division into a multiplication using the formula for the inverse

1/(c + di) = (c − di)/(c² + d²).

This gives

(a + bi)/(c + di) = (a + bi)(c − di)/(c² + d²) = (ac + bd + (bc − ad)i)/(c² + d²),
and we can read the real and imaginary parts off easily.
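Python's built-in complex numbers carry out exactly this computation, which makes for an easy spot-check (a sketch; any language with complex arithmetic works):

```python
# (1 + 2i)/(3 + 4i) = (1 + 2i)(3 - 4i)/25 = (11 + 2i)/25
z = (1 + 2j)/(3 + 4j)

# Compare with the formula: real = (ac + bd)/(c^2 + d^2),
#                           imag = (bc - ad)/(c^2 + d^2).
assert abs(z.real - 11/25) < 1e-12
assert abs(z.imag - 2/25) < 1e-12
```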
Another method here is to assume a solution of the form A cos(2t) + B sin(2t), plug in, and solve for A and B. The advantage of complex arithmetic is that it allows us to keep track of coefficients two at a time, and thus do only half the number of calculations.
Problems: p. 171, 1-8, 13-16
10. Particular Solutions of Non-homogeneous second order equations–Variation of Parameters
As above, we consider the second order differential equation
L(y) = y′′ + p(t)y′ + q(t)y = g(t) (1)
where p, q, g are continuous functions in an interval I.

In the method called variation of parameters, we start with two linearly
independent solutions y1, y2 of L(y) = 0, and we look for a particular solution y(t) of L(y) = g of the form
y(t) = y1(t)v1(t) + y2(t)v2(t) (2)
where v1 and v2 are not constant functions.

It turns out that we can consider the system of linear equations
y1 v1′ + y2 v2′ = 0 (3)

y1′ v1′ + y2′ v2′ = g(t) (4)
We solve these equations for v1′ and v2′ and integrate to get v1 and v2.

Let us verify that if the equations (3) and (4) are satisfied, then (2) does indeed give us a particular solution. Let us begin by observing that differentiating the first equation above gives
y1′ v1′ + y2′ v2′ + y1 v1′′ + y2 v2′′ = 0. (5)
Hence, using (3), (4), and (5), we get
y′′ + py′ + qy = (y1v1 + y2v2)′′ + p(y1v1 + y2v2)′ + q(y1v1 + y2v2)

= y1′′v1 + 2y1′v1′ + y1v1′′ + y2′′v2 + 2y2′v2′ + y2v2′′ + p(y1′v1 + y1v1′ + y2′v2 + y2v2′) + q(y1v1 + y2v2)

= v1(y1′′ + py1′ + qy1) + v2(y2′′ + py2′ + qy2) + v1′(2y1′ + py1) + v2′(2y2′ + py2) + y1v1′′ + y2v2′′

= (y1′v1′ + y2′v2′) + p(y1v1′ + y2v2′) + (y1′v1′ + y2′v2′ + y1v1′′ + y2v2′′)

= g(t),

since y1, y2 solve the homogeneous equation, the p(· · ·) term vanishes by (3), the first group equals g(t) by (4), and the last group vanishes by (5).
We wish to put the solution of the linear system (3), (4) in a simple form. For this purpose, it is useful to discuss a formula for the solution of linear equations known as Cramer's rule. We will consider this here in the special case of two linear equations in two unknowns.
We first consider simple properties of the determinant as a function of the columns of the matrix.
Properties of Determinants of 2 × 2 matrices:

Let

det( a  b; c  d ) = ad − bc

denote the determinant of the matrix ( a  b; c  d ).
For any real numbers a, b, c, d, e, f, α we have

det( αa  b; αc  d ) = α det( a  b; c  d ),

det( a+e  b; c+f  d ) = det( a  b; c  d ) + det( e  b; f  d ),

and

det( a  b; c  d ) = −det( b  a; d  c ).

Assume that x, y satisfy the system
ax + by = e
cx + dy = f.
Then,
det( e  b; f  d ) = det( ax+by  b; cx+dy  d )

= det( ax  b; cx  d ) + det( by  b; dy  d )

= x det( a  b; c  d ) + y det( b  b; d  d )

= x det( a  b; c  d ).

Similarly,

det( a  e; c  f ) = y det( a  b; c  d ).

Now, if

det( a  b; c  d ) ≠ 0,

then we can solve for x and y from the above equations.

Applying this to the system (3), (4), we get
v1′ = det( 0  y2; g  y2′ ) / W(y1, y2) = −y2 g / W(y1, y2),   v1 = ∫ −y2 g / W(y1, y2) dt,

v2′ = det( y1  0; y1′  g ) / W(y1, y2) = y1 g / W(y1, y2),   v2 = ∫ y1 g / W(y1, y2) dt.
Example.
Find the general solution to
y′′ + 4y = 3 csc(t).
The general solution to the homogeneous equation is

y = A cos(2t) + B sin(2t).

We assume a particular solution of the form

y = cos(2t)v1 + sin(2t)v2,
and we get the system

cos(2t) v1′ + sin(2t) v2′ = 0

−2 sin(2t) v1′ + 2 cos(2t) v2′ = 3 csc(t),

so

W(cos(2t), sin(2t)) = 2,

and

v1′ = −3 sin(2t) csc(t)/2,   v2′ = 3 cos(2t) csc(t)/2.
This gives

v1 = −(3/2) ∫ sin(2t) csc(t) dt = −(3/2) ∫ sin(2t)/sin(t) dt = −(3/2) ∫ 2 sin(t) cos(t)/sin(t) dt = −3 sin(t)
v2 = (3/2) ∫ cos(2t) csc(t) dt = (3/2) ∫ (cos(t)² − sin(t)²)/sin(t) dt = (3/2) ∫ (1 − 2 sin(t)²)/sin(t) dt = (3/2) log(csc(t) − cot(t)) + 3 cos(t)
The general solution is

y = A cos(2t) + B sin(2t) + cos(2t)(−3 sin(t)) + sin(2t)((3/2) log(csc(t) − cot(t)) + 3 cos(t)).
Here is a second example.
y′′ − 3y′ + 2y = 3te^{2t}.

The characteristic polynomial r² − 3r + 2 = (r − 2)(r − 1) has roots r = 1, 2, so according to the table above, in the method of undetermined coefficients, we need to assume the particular solution yp has the form

yp = t(A + Bt)e^{2t}.
This involves a lot of computation to differentiate and plug into the d.e.to get the appropriate constants A, B.
Let us see what the method of variation of parameters gives us. We get a solution yp = e^{2t}v1 + e^t v2, where
e^{2t} v1′ + e^t v2′ = 0 (6)

2e^{2t} v1′ + e^t v2′ = 3te^{2t} (7)
This gives
v2′ = −e^t v1′

2e^{2t} v1′ + e^t(−e^t v1′) = 3te^{2t}

e^{2t} v1′ = 3te^{2t}

v1′ = 3t,   v1 = 3t²/2

v2′ = −3te^t,   v2 = −3(te^t − e^t).
So, we get

yp = e^{2t}(3t²/2) + e^t(−3)(te^t − e^t) = e^{2t}(3t²/2) − 3te^{2t} + 3e^{2t}.

Since 3e^{2t} is a solution of the homogeneous equation, the function

y3 = e^{2t}(3t²/2) − 3te^{2t} = te^{2t}(3t/2 − 3)
is also a particular solution. This has the form given by the method of undetermined coefficients, and was obtained with much less computation than using that method directly. In our case, we only want a particular solution anyway.
Thus, in the d.e.

y′′ + py′ + qy = Pn e^{αt}

with p, q constants, the method of undetermined coefficients seems to improve on the method of variation of parameters only if Pn is a constant.
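The Wronskian formulas for v1 and v2 translate directly into a few lines of SymPy; here is a sketch re-deriving the particular solution of the last example:

```python
import sympy as sp

t = sp.symbols('t')

# y'' - 3y' + 2y = 3t e^{2t}, with homogeneous solutions e^{2t} and e^{t}.
y1, y2, g = sp.exp(2*t), sp.exp(t), 3*t*sp.exp(2*t)
W = sp.simplify(y1*sp.diff(y2, t) - y2*sp.diff(y1, t))   # Wronskian

v1 = sp.integrate(sp.simplify(-y2*g/W), t)   # v1' = -y2 g / W
v2 = sp.integrate(sp.simplify(y1*g/W), t)    # v2' =  y1 g / W
yp = sp.simplify(y1*v1 + y2*v2)

# yp should satisfy the non-homogeneous equation.
residual = sp.simplify(sp.diff(yp, t, 2) - 3*sp.diff(yp, t) + 2*yp - g)
assert residual == 0
```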
Problems: p. 177 1-3, 5-9
11. Some applications of second order differential equations
The first application we consider is the motion of a mass on a spring. Consider an object of mass m on a spring suspended vertically as in the next figure.
Assume that the unstretched spring has length ℓ and that gravity pulls the mass down with a force equal to mg in magnitude, with g = 32 ft/sec². We also assume Hooke's law, which says that the force exerted on the object by the spring has magnitude kx, where k is a positive constant and x is the displacement of the spring from its unstretched state. If the spring is extended, the force is directed toward the spring, while it is directed away from the spring if the spring is compressed. Let L be the amount the spring is stretched when the mass is in equilibrium. Let u denote the displacement of the mass from equilibrium.
At equilibrium, the force upward due to the spring must equal the force downward due to gravity, so we get
kL = mg.
By Newton’s law of motion, we have that
mass times acceleration = total force at time t
on object at position u
If we take the downward direction as positive, and ignore frictional effects, then we get the differential equation of motion

mu′′ = mg − k(u + L) = mg − ku − kL = −ku,
or
mu′′ + ku = 0,   or   u′′ + (k/m)u = 0.

This is a second order d.e. with constant coefficients. If we set

ω0 = √(k/m),
then we have the general solution
u(t) = A cos(ω0t) + B sin(ω0t).
To graph this function, it is convenient to write it as
u(t) = R cos(ω0t− δ)
Here, we have
R cos(ω0t − δ) = R cos(ω0t) cos(δ) + R sin(ω0t) sin(δ).

Hence, we have

R cos(δ) = A,   R sin(δ) = B,

so,

R = √(A² + B²),   tan(δ) = B/A.
The quantities involved above have the following names
• ω0 = fundamental frequency, T = 2π/ω0 = period
• R = amplitude
• δ = phase angle or phase shift
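Converting between the (A, B) and (R, δ) forms is a small computation; a sketch in Python, using atan2 so the phase lands in the correct quadrant:

```python
import math

# u = A cos(w0 t) + B sin(w0 t)  rewritten as  R cos(w0 t - delta).
A, B = 3.0, 4.0
R = math.hypot(A, B)        # R = sqrt(A^2 + B^2) = 5
delta = math.atan2(B, A)    # tan(delta) = B/A, with the right quadrant

# Spot-check the identity at a few times.
w0 = 2.0
for t in (0.0, 0.4, 1.3):
    u1 = A*math.cos(w0*t) + B*math.sin(w0*t)
    u2 = R*math.cos(w0*t - delta)
    assert abs(u1 - u2) < 1e-12
```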
The graph of the solution is as in the next figure, where we plot ω0t horizontally and u vertically. This graph is a cosine function whose maximum height is R. Its period gets shorter as ω0 increases. The cosine function with maximum at t = 0 is translated to the right by δ units.
Next, we consider the d.e. for a spring with friction. For instance, the mass may encounter air resistance in its motion. A typical physical assumption is that the frictional force points opposite to the direction of motion and has magnitude proportional to the speed.
Without external forces, this leads to the d.e.
mu′′ + γu′ + ku = 0.
The characteristic equation is
mr² + γr + k = 0
with roots
r = (−γ ± √(γ² − 4km))/(2m).
We have three cases for the general solution.

Case 1: γ² > 4km. This is called overdamped. The general solution has the form
u(t) = A e^{r1 t} + B e^{r2 t}

where

r1 = (−γ − √(γ² − 4km))/(2m)   and   r2 = (−γ + √(γ² − 4km))/(2m).
Case 2: γ² = 4km. This is called critically damped. The general solution has the form
u(t) = A e^{r1 t} + B t e^{r1 t}

where

r1 = −γ/(2m).
Case 3: γ² < 4km. This is under-damped. If µ = √(4km − γ²)/(2m), then the general solution has the form

u(t) = e^{−γt/(2m)}(A cos(µt) + B sin(µt)) = R e^{−γt/(2m)} cos(µt − δ),

where R = √(A² + B²) and tan(δ) = B/A.
Typical Problem: A mass weighing 4 pounds stretches a spring two inches. The mass is in a medium which resists the motion with a force of 6 pounds when the mass has a speed of 3 feet per second. Write an equation describing the motion of the mass. If the spring is extended a distance of 2 feet and the mass is released from rest, how long does it take the mass to return to its lowest subsequent position? What height is that position?
We have mg = 4, and at equilibrium we have
4 = mg = kL
We use units of feet and pounds. Thus

L = two inches = 1/6 feet,

so

4 = k/6,

or k = 24. Also, the frictional force has magnitude γu′ = 3γ = 6 when u′ = 3, so γ = 2. The equation of motion is
mu′′ + 2u′ + 24u = 0.
Since g is 32 ft/sec², we have

m = 4/32 = 1/8.
Hence,
(1/8)u′′ + 2u′ + 24u = 0,

or

u′′ + 16u′ + 192u = 0.
The characteristic equation is r² + 16r + 192 = 0, and its roots are
r = (−16 ± √(256 − 768))/2 = −8 ± 8√2 i.
The general solution is
u = e^{−8t}(A cos(8√2 t) + B sin(8√2 t)).

For the initial conditions, we have u(0) = 2 and u′(0) = 0. The first gives A = 2; since u′(0) = −8A + 8√2 B = 0, the second gives B = √2. Thus the motion is

u(t) = e^{−8t}(2 cos(8√2 t) + √2 sin(8√2 t)).
The lowest position occurs at approximately the time tℓ equal to the period of the oscillation, which is

tℓ = 2π/(8√2).
The height is u(tℓ).

Note that, surprisingly, the differential equations for a simple electric
circuit turn out to be the same as for the mass-spring system.

Problems: p. 190, 1-7, 9-11
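The initial value problem above is also easy to integrate numerically as a sanity check; a sketch assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import solve_ivp

# u'' + 16u' + 192u = 0, u(0) = 2, u'(0) = 0, as a first-order system.
def rhs(t, y):
    u, up = y
    return [up, -16.0*up - 192.0*u]

T = 2*np.pi/(8*np.sqrt(2))          # one period of the oscillation
sol = solve_ivp(rhs, (0.0, T), [2.0, 0.0],
                rtol=1e-9, atol=1e-12, dense_output=True)

# The damping factor e^{-8t} shrinks the motion a lot over one period.
u_at_T = sol.sol(T)[0]
assert abs(u_at_T) < 0.05
```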
12. Forced Oscillations
We consider a mass-spring system in which there is an external oscillating force applied.

One model for this is that the support of the top of the spring is oscillating with a certain frequency.
The equation of motion becomes
mu′′ + γu′ + ku = F0 cos(ωt). (1)
Let us find the general solution using the complex function method. First assume that γ = 0; i.e., there is no friction. The given equation is the real part of the complex equation

mu′′ + ku = F0 e^{iωt}.
Let ω0 = √(k/m) be the natural frequency of the unforced equation. If ω ≠ ω0, then the general solution is

u(t) = c1 cos(ω0t) + c2 sin(ω0t) + Re(F0/(m(iω)² + k) · e^{iωt})

= R cos(ω0t − δ) + Re(F0/(k − mω²) · e^{iωt})

= R cos(ω0t − δ) + F0/(m(k/m − ω²)) · cos(ωt)

= R cos(ω0t − δ) + F0/(m(ω0² − ω²)) · cos(ωt).
Resonance:

Note that if ω is close to ω0, then the magnitude of the second term is large. This is the phenomenon of resonance. It is responsible for the breakdown of certain physical processes.

Note that if ω = ω0, then the amplitude of the particular solution becomes unbounded. In practice, one probably can't get the frequencies to be exactly equal, but even if they are close the physical system in question may break down. This can be observed when playing certain high notes breaks glass, when strong winds destroy bridges, etc.
See the links for resonance on the class notes web page.
Beats: Again suppose ω is close to ω0, γ = 0, and the motion begins with u(0) = 0 and u′(0) = 0.
Then, the constants c1, c2 satisfy
c1 = −F0/(m(ω0² − ω²)),   c2 = 0.
So, the solution is
u(t) = −F0/(m(ω0² − ω²)) cos(ω0t) + F0/(m(ω0² − ω²)) cos(ωt)

= F0/(m(ω0² − ω²)) (cos(ωt) − cos(ω0t)) (2)
which is the difference of two cosine functions with the same amplitude and different frequencies. To get an idea of the graph of such a function, let us rewrite it.
Recall the identities
cos(A + B) = cos(A) cos(B)− sin(A) sin(B),
cos(A−B) = cos(A) cos(B) + sin(A) sin(B).
Subtracting the first equation from the second gives
cos(A − B) − cos(A + B) = 2 sin(A) sin(B).
Let us put

A − B = ωt,   A + B = ω0t,

or,

A = (ω0 + ω)t/2,   B = (ω0 − ω)t/2,
so that (2) becomes

u(t) = F0/(m(ω0² − ω²)) · 2 sin((ω0 + ω)t/2) sin((ω0 − ω)t/2).
If ω0 − ω is small, then the function sin((ω0 − ω)t/2) oscillates much more slowly than sin((ω0 + ω)t/2). The graph of this product is illustrated in the figures below. This gives rise to the phenomenon of “beats” in music when two vibrating strings have very close frequencies.
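The product identity behind the beats formula is easy to spot-check numerically (a sketch in plain Python):

```python
import math

# cos(w t) - cos(w0 t) = 2 sin((w0+w)t/2) sin((w0-w)t/2)
w0, w = 10.0, 9.0
for t in (0.3, 1.7, 5.2):
    lhs = math.cos(w*t) - math.cos(w0*t)
    rhs = 2*math.sin((w0 + w)*t/2)*math.sin((w0 - w)*t/2)
    assert abs(lhs - rhs) < 1e-12
```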
Effect of Damping in Forced Oscillations:

If we have a forced oscillation of the form

mu′′ + γu′ + ku = F0 cos(ωt),

with γ ≠ 0, then the general solution has one of the following forms:

Case 1: γ² − 4km > 0
u(t) = c1 e^{r1 t} + c2 e^{r2 t} + A cos(ωt) + B sin(ωt)

where

r1, r2 = (−γ ± √(γ² − 4km))/(2m).
As t → ∞, the homogeneous part tends to zero. So, the solution approaches an oscillation with the same frequency as the external force. The functions

c1 e^{r1 t} + c2 e^{r2 t},   A cos(ωt) + B sin(ωt)
are called the transient and steady state motions of the system, respectively.
Case 2: γ² − 4km < 0. Here, if ω1 = √(4km − γ²)/(2m) and a = γ/(2m), then the general solution has the form

u(t) = e^{−at}(c1 cos(ω1t) + c2 sin(ω1t)) + A cos(ωt) + B sin(ωt).
Again the first part tends to zero as t → ∞, and the solution tends to an oscillation with the same frequency as the external force. The first and second parts are again called the transient and steady state solutions.
Problems: p. 199, 1-12

Some plots of solutions (see next page):
13. Laplace Transform
Review of Improper Integrals

An integral of the form

∫_a^b f(t) dt
is called an improper integral if at least one of the following conditions is satisfied.
1. a = −∞
2. b = +∞
3. lim_{t→a+} f(t) = ±∞

4. lim_{t→b−} f(t) = ±∞
At this time, we only need to consider the case where a is a finite real number and b = +∞. Thus, we consider expressions of the form

∫_a^∞ f(t) dt.
In this case, we define the value of this expression to be

lim_{b→∞} ∫_a^b f(t) dt

whenever the limit exists. This assumes that the numbers F(b) = ∫_a^b f(t) dt exist for each b > a, and that the limit of the numbers F(b) as b approaches plus infinity exists.
If this limit is finite, then we say the integral converges. Otherwise, we say the integral diverges.
Let us take some examples.

Example 1: Fix a positive real number p > 0. Let a > 0, and consider the improper integral

∫_a^∞ dt/t^p.
For a < b, we have
∫_a^b dt/t^p = [t^{1−p}/(1−p)]_{t=a}^{t=b} = b^{1−p}/(1−p) − a^{1−p}/(1−p)   (valid for p ≠ 1).
We have three cases:
1. 0 < p < 1.

lim_{b→∞} ∫_a^b dt/t^p = lim_{b→∞} (b^{1−p}/(1−p) − a^{1−p}/(1−p)) = +∞,

so the integral diverges.
2. p = 1.

lim_{b→∞} ∫_a^b dt/t = lim_{b→∞} (log(b) − log(a)) = +∞,

so the integral diverges again.
3. p > 1.

lim_{b→∞} ∫_a^b dt/t^p = lim_{b→∞} (b^{1−p}/(1−p) − a^{1−p}/(1−p)) = −a^{1−p}/(1−p),

so the integral converges.
Example 2. Consider the integral

∫_0^∞ e^{−ct} dt

where c > 0 is a positive real number. We have

∫_0^∞ e^{−ct} dt = lim_{b→∞} ∫_0^b e^{−ct} dt = lim_{b→∞} [−(1/c)e^{−ct}]_{t=0}^{t=b} = lim_{b→∞} (−(1/c)e^{−cb} + 1/c) = 1/c,
so, the integral always converges.

General Facts about improper integrals:
1. If

lim_{t→∞} f(t)/g(t) = L

exists and is finite and non-zero, then ∫_a^∞ f(t) dt converges if and only if ∫_a^∞ g(t) dt converges.
2. Assume 0 ≤ f(t) < g(t) for all t. If ∫_0^∞ g(t) dt converges, then so does ∫_0^∞ f(t) dt. On the other hand, if ∫_0^∞ f(t) dt diverges, then so does ∫_0^∞ g(t) dt.
3. If f(t) is a non-negative nonincreasing function on the infinite interval (0, ∞), then

∫_1^∞ f(t) dt

converges if and only if the infinite series

Σ_{n=1}^∞ f(n)
converges.
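These convergence facts can be spot-checked numerically; a sketch (assuming SciPy) evaluating the integral of Example 2, whose exact value is 1/c:

```python
import numpy as np
from scipy.integrate import quad

# ∫_0^∞ e^{-ct} dt = 1/c for c > 0; quad handles the infinite limit.
c = 3.0
value, abserr = quad(lambda t: np.exp(-c*t), 0, np.inf)
assert abs(value - 1/c) < 1e-8
```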
Now, we define the Laplace transform L(s) of the function f(t) to be the integral

L(s) = ∫_0^∞ e^{−st} f(t) dt.
This integral may converge only for some values of s, so the Laplace transform of f(t) will only be defined for those values of s.
Let us take some examples.
1. Consider the constant function f(t) = 1 for all t. We write its Laplace transform as L(1). Let's compute it.

L(1) = ∫_0^∞ e^{−st} dt = [−(1/s)e^{−st}]_{t=0}^{t=∞} = 1/s

for s > 0.
2. Consider the function f(t) = e^{at}. We have

L(e^{at}) = ∫_0^∞ e^{−st} e^{at} dt = ∫_0^∞ e^{−(s−a)t} dt = [−(1/(s−a))e^{−(s−a)t}]_{t=0}^{t=∞} = 1/(s − a)

for s > a.
3. For n ≥ 0, we have

L(t^n) = n!/s^{n+1}. (1)
Proof. This is by induction on n.
First we describe the Principle of Mathematical Induction.
August 21, 2004 13-5
This principle says the following.
Suppose n0 is an integer, and S(n) represents a statement about integers n ≥ n0. If one knows that
(a) S(n0) is true.
(b) Whenever S(n) is assumed true, it follows that S(n + 1) is alsotrue.
Then, one concludes that

S(n) is true for all integers greater than or equal to n0. (2)
The idea behind this is that if there were some integer m > n0 for which S(m) were false, one could take the least such integer; call it n1. Then, we know that S(n1 − 1) is true by the choice of n1. But, by (b), whenever S(n) is true, so is S(n + 1). This would imply that S(n1) is also true, which contradicts the earlier statement that it was false. Hence, the assumption that there is an m > n0 for which S(m) is false was wrong, and we conclude that (2) is true.
Let us return to the proof of (1).
We have already done the case n = 0. Assume it holds for n; we show it holds for n + 1.
Integrating by parts, and using the inductive assumption for n, we have

L(t^{n+1}) = ∫_0^∞ e^{−st} t^{n+1} dt

= [−(1/s)e^{−st} t^{n+1}]_{t=0}^{t=∞} + (1/s) ∫_0^∞ e^{−st} (n+1) t^n dt

= ((n+1)/s) L(t^n) = ((n+1)/s) · (n!/s^{n+1}) = (n+1)!/s^{n+2},
which is the statement for n + 1.
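The formula L(t^n) = n!/s^{n+1} can also be confirmed for small n with SymPy's built-in transform (a sketch; `laplace_transform` is assumed available):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Check L(t^n) = n!/s^{n+1} for n = 0, 1, 2, 3.
for n in range(4):
    F = sp.laplace_transform(t**n, t, s, noconds=True)
    assert sp.simplify(F - sp.factorial(n)/s**(n + 1)) == 0
```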
4.
L(sin(at)) = a/(s² + a²),   L(cos(at)) = s/(s² + a²).
These can be proved by integration by parts or by using complex variables. We present the second method.
Note that if f(t) = u(t) + iv(t) is a complex valued function of t, then

L(f(t)) = ∫_0^∞ e^{−st}(u(t) + iv(t)) dt = ∫_0^∞ e^{−st} u(t) dt + i ∫_0^∞ e^{−st} v(t) dt = L(u(t)) + iL(v(t)).
Using this, we have

L(sin(at)) = L(Im(e^{iat})) = Im(L(e^{iat})) = Im(1/(s − ai)) = Im((s + ai)/(s² + a²)) = a/(s² + a²).
Also,

L(cos(at)) = s/(s² + a²).
Here is a useful fact. If L(f(t)) = F(s), then L(e^{at}f(t)) = F(s − a). To verify this, notice that
F(s − a) = ∫_0^∞ e^{−(s−a)t} f(t) dt = ∫_0^∞ e^{−st} e^{at} f(t) dt = L(e^{at} f(t)).
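Combining the shift rule with the sine transform predicts L(e^{at} sin(bt)) = b/((s − a)² + b²); a sketch checking this with SymPy (assumed available):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b = 2, 5   # sample values, chosen for illustration

# Shift rule: L(e^{at} f(t)) = F(s - a), with F(s) = L(sin(bt)) = b/(s^2+b^2).
F = sp.laplace_transform(sp.exp(a*t)*sp.sin(b*t), t, s, noconds=True)
assert sp.simplify(F - b/((s - a)**2 + b**2)) == 0
```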
Properties of the Laplace transform.
1. Linearity: The function L is linear. That is, if F(s) = L(f(t)), G(s) = L(g(t)), and a, b are constants, then
L(af(t) + bg(t)) = aL(f(t)) + bL(g(t)) = aF(s) + bG(s).
2. Existence: Let us say that a function f defined on the closed interval [α, β] is piecewise continuous if there is a finite set of points α = α0 < α1 < . . . < αr = β such that f is continuous on each interval αi < t < αi+1. We say that f is piecewise continuous on the interval [0, ∞) if it is piecewise continuous on each finite subinterval [0, A] for A > 0.
If f is piecewise continuous on the interval [0, ∞) and there are constants C > 0 and a > 0 such that
|f(t)| ≤ Ce^{at}, ∀t ∈ [0, ∞), (3)
then L(f(t)) exists for s > a.
A piecewise continuous function f satisfying the inequality (3) is said to be of exponential order.
3. Uniqueness: if f and g are piecewise continuous and satisfy
|f(t)| ≤ Ce^{at} and |g(t)| ≤ Ce^{at}, ∀t ∈ [0, ∞)
and L(f(t)) = L(g(t)) for s > a, then f(t) = g(t) except for at most a sequence of points. Indeed, for any A > 0, there are at most finitely many points in [0, A] at which f(t) and g(t) can fail to be equal.
In the above case, if L(f(t)) = F(s), we say L^{−1}(F(s)) = f(t), and we call the operator L^{−1} the inverse Laplace transform. It is defined on those functions which are of exponential order.
Problems: p. 294, 1, 3, 5, 7, 9, 11-14, 15, 17, 19, 21, 23

We consider a mass-spring system in which there is an external oscillating force applied. One model for this is that the support of the top of the spring is oscillating with a certain frequency. The equation of motion becomes
mu′′ + γu′ + ku = F_0 cos(ωt). (4)
Let us find the general solution using the complex function method. First assume that γ = 0; i.e., there is no friction. The given equation is the real part of the complex equation
mu′′ + ku = F_0 e^{iωt}.
Let ω_0 = √(k/m) be the natural frequency of the unforced equation. If ω ≠ ω_0, then the general complex solution is
u(t) = c_1 cos(ω_0 t) + c_2 sin(ω_0 t) + Re(F_0/(m(iω)^2 + k) e^{iωt})
     = R cos(ω_0 t − δ) + Re(F_0/(k − mω^2) e^{iωt})
     = R cos(ω_0 t − δ) + F_0/(m(k/m − ω^2)) cos(ωt)
     = R cos(ω_0 t − δ) + F_0/(m(ω_0^2 − ω^2)) cos(ωt).
Resonance: Note that if ω is close to ω_0, then the magnitude of this last term is large. This is the phenomenon of resonance. It is responsible for the breakdown of certain physical processes.
Note that if ω = ω_0, then the amplitude of the particular solution becomes unbounded. In practice, one probably can't get the frequencies to be exactly equal, but even if they are close, the physical system in question may break down. This can be observed when certain high notes are played and glass breaks, in the destruction of bridges by strong winds, etc.
See the links for resonance on the class notes web page.
Beats: Again suppose that ω is close to ω_0, γ = 0, and the motion begins with u(0) = 0 and u′(0) = 0.
Then, the constants c1, c2 satisfy
c_1 = −F_0/(m(ω_0^2 − ω^2)),   c_2 = 0.
So, the solution is
u(t) = −F_0/(m(ω_0^2 − ω^2)) cos(ω_0 t) + F_0/(m(ω_0^2 − ω^2)) cos(ωt)
     = F_0/(m(ω_0^2 − ω^2)) (cos(ωt) − cos(ω_0 t)) (5)
which is the difference of two cosine functions with the same amplitude and different frequencies. To get an idea of the graph of such a function, let us rewrite it.
Recall the identities
cos(A + B) = cos(A) cos(B)− sin(A) sin(B),
cos(A−B) = cos(A) cos(B) + sin(A) sin(B).
Subtracting the first equation from the second gives
cos(A−B)− cos(A + B) = 2 sin(A) sin(B).
Let us put
A − B = ωt,   A + B = ω_0 t,
or
A = (ω_0 t + ωt)/2,   B = (ω_0 t − ωt)/2,
so that (5) becomes
u(t) = F_0/(m(ω_0^2 − ω^2)) · 2 sin((ω_0 t − ωt)/2) sin((ω_0 t + ωt)/2).
If ω_0 − ω is small, then the function sin((ω_0 t − ωt)/2) oscillates much more slowly than sin((ω_0 t + ωt)/2). The graph of this product is illustrated in the figures below. This gives rise to the phenomenon of “beats” in music when two vibrating strings have very close frequencies.
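The equivalence of the difference form (5) and the product form above can be checked pointwise; a small numerical sketch (the two sample frequencies are arbitrary, chosen close together so the slow beat envelope appears):

```python
import math

# Pointwise check: with close frequencies w0 and w,
#   cos(w t) - cos(w0 t) = 2 sin((w0 - w)t/2) sin((w0 + w)t/2).
w0, w = 3.0, 2.8
max_err = 0.0
for k in range(1000):
    t = 0.05 * k
    lhs = math.cos(w * t) - math.cos(w0 * t)
    rhs = 2.0 * math.sin((w0 - w) * t / 2.0) * math.sin((w0 + w) * t / 2.0)
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)
```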
Effect of Damping in Forced Oscillations: If we have a forced oscillation of the form
mu′′ + γu′ + ku = F_0 cos(ωt),
with γ ≠ 0, then the general solution has one of the following forms.
Case 1: γ^2 − 4km > 0.
u(t) = c_1 e^{r_1 t} + c_2 e^{r_2 t} + A cos(ωt) + B sin(ωt)
where
r_i = (−γ ± √(γ^2 − 4km))/(2m).
As t → ∞, the homogeneous part tends to zero. So, the solution approaches an oscillation with the same frequency as the external force. The functions
c_1 e^{r_1 t} + c_2 e^{r_2 t},   A cos(ωt) + B sin(ωt)
are called the transient and steady state motions of the system, respec-tively.
Case 2: γ^2 − 4km < 0. Here, if ω_1 = √(4km − γ^2)/(2m) and a = γ/(2m), then the general solution has the form

u(t) = e^{−at}(c_1 cos(ω_1 t) + c_2 sin(ω_1 t)) + A cos(ωt) + B sin(ωt)
Again the first part tends to zero as t → ∞, and the solution tends to an oscillation with the same frequency as the external force. The first and second parts are again called the transient and steady state solutions.
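The two damping cases can be checked numerically; the sketch below computes the roots of the characteristic polynomial m r^2 + γ r + k = 0 with the quadratic formula (the sample values of m, γ, k are arbitrary choices):

```python
import cmath

def characteristic_roots(m, gamma, k):
    # Roots of m r^2 + gamma r + k = 0 by the quadratic formula.
    disc = cmath.sqrt(gamma ** 2 - 4.0 * k * m)
    return (-gamma + disc) / (2.0 * m), (-gamma - disc) / (2.0 * m)

# Case 1 (gamma^2 - 4km > 0): two negative real roots.
r1, r2 = characteristic_roots(1.0, 3.0, 1.0)
# Case 2 (gamma^2 - 4km < 0): complex pair -a +/- i*w1 with
# a = gamma/(2m) and w1 = sqrt(4km - gamma^2)/(2m).
c1, c2 = characteristic_roots(1.0, 1.0, 1.0)
print(r1, r2)
print(c1, c2)
```

In both cases the real parts are negative, which is why the transient dies out.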
Problems: p. 199, 1-12

Some plots of solutions (figures omitted).
14. Initial Value Problems and the Laplace Transform
We first consider the relation between the Laplace transform of a function and that of its derivative.
Theorem. Suppose that f(t) is a continuously differentiable function onthe interval [0,∞). Then,
L(f ′(t)) = sL(f(t))− f(0). (1)
Proof. We integrate the Laplace transform of f(t) by parts to get
L(f(t)) = ∫_0^∞ e^{-st} f(t) dt
        = [−(1/s) e^{-st} f(t)]_{t=0}^{t=∞} − ∫_0^∞ (−1/s) e^{-st} f′(t) dt
        = (1/s) f(0) + (1/s) L(f′(t)).
Multiplying both sides by s and moving the first term on the right sideof this equation to the left side gives the theorem. QED.
Applying this formula twice to a C2 function f(t) gives the expression
L(f ′′) = sL(f ′)− f ′(0)
= s(sL(f)− f(0))− f ′(0)
= s2L(f)− sf(0)− f ′(0).
Repeating this for a Cn function gives
L(f^{(n)}) = s^n L(f) − s^{n−1} f(0) − s^{n−2} f′(0) − . . . − f^{(n−1)}(0).
Now, consider the following initial value problem.
ay′′ + by′ + cy = 0, y(0) = y0, y′(0) = y′0. (2)
with characteristic equation z(r) = ar^2 + br + c. Taking the Laplace transform, we get
aL(y′′) + bL(y′) + cL(y) = 0
or
a(s2L(y)− sy(0)− y′(0)) + b(L(y)− y(0)) + cL(y) = 0
z(s)L(y) = (as + b)y(0) + ay′(0)
L(y) = ((as + b)y_0 + ay′_0)/z(s).
If we knew how to take the inverse transform of the right-hand side, we would then have the solution to (2).
The table at the end of this section summarizes useful formulas for thispurpose.
Let's take an example.
Example 1. Use the Laplace transform to find the unique solution to
y′′ − y′ − 2y = 0, y(0) = 1, y′(0) = 2.
We have

L(y) = ((s − 1) + 2)/(s^2 − s − 2) = (s + 1)/((s − 2)(s + 1)) = 1/(s − 2)

and we need to find the inverse Laplace transform of the last term. This is just y(t) = e^{2t}.
Next, suppose that we have the same equation with the initial conditions
y(0) = 1, y′(0) = 0.
This gives
L(y) = (s − 1)/(s^2 − s − 2) = (s − 1)/((s − 2)(s + 1))
For this we use partial fractions.
(s − 1)/((s − 2)(s + 1)) = (1/3)(1/(s − 2)) + (2/3)(1/(s + 1)).
So,
y(t) = L^{-1}((1/3)(1/(s − 2)) + (2/3)(1/(s + 1)))
     = (1/3)e^{2t} + (2/3)e^{−t}
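This answer can be checked directly; a numerical sketch using central differences (the step size h is an arbitrary choice):

```python
import math

# Check (by central differences) that y(t) = (1/3)e^{2t} + (2/3)e^{-t}
# solves y'' - y' - 2y = 0 with y(0) = 1, y'(0) = 0.
def y(t):
    return math.exp(2.0 * t) / 3.0 + 2.0 * math.exp(-t) / 3.0

h = 1e-4

def yp(t):   # y' by central difference
    return (y(t + h) - y(t - h)) / (2.0 * h)

def ypp(t):  # y'' by central difference
    return (y(t + h) - 2.0 * y(t) + y(t - h)) / (h * h)

residual = max(abs(ypp(t) - yp(t) - 2.0 * y(t)) for t in (0.0, 0.5, 1.0, 2.0))
print(y(0.0), yp(0.0), residual)
```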
Example 2. Use the Laplace transform to find the unique solution to
y′′ − 3y′ + 2y = 3cos(3t), y(0) = 2, y′(0) = 3.
We have
L(y′′)− 3L(y′) + 2L(y) = 3L(cos(3t)), y(0) = 2, y′(0) = 3.
This gives rise to
(s^2 − 3s + 2)L(y) − 2(s − 3) − 3 = 3s/(s^2 + 9)
or,
(s^2 − 3s + 2)L(y) − 2s + 3 = 3s/(s^2 + 9)
(s^2 − 3s + 2)L(y) = 2s − 3 + 3s/(s^2 + 9)

L(y) = (2s − 3)/((s − 2)(s − 1)) + 3s/((s^2 + 9)(s − 2)(s − 1))
We now find

L^{-1}((2s − 3)/((s − 2)(s − 1))) = y_1(t)

and

L^{-1}(3s/((s^2 + 9)(s − 2)(s − 1))) = y_2(t),

and then add them to get y(t) = y_1(t) + y_2(t).
L^{-1}((2s − 3)/((s − 2)(s − 1))): Write

(2s − 3)/((s − 2)(s − 1)) = A/(s − 2) + B/(s − 1)
This gives the linear equations
A + B = 2
−A− 2B = −3
which has the solution A = 1, B = 1. Hence,
y1(t) = e2t + et.
L^{-1}(3s/((s^2 + 9)(s − 2)(s − 1))): We write the fraction as
(A + Bs)/(s^2 + 9) + C/(s − 2) + D/(s − 1) (3)

or

A/(s^2 + 9) + Bs/(s^2 + 9) + C/(s − 2) + D/(s − 1),

which we write as

(A/3)(3/(s^2 + 9)) + Bs/(s^2 + 9) + C/(s − 2) + D/(s − 1).
From this we recognize the inverse transform as

(A/3) sin(3t) + B cos(3t) + Ce^{2t} + De^{t}.
To find A, B, C, D, we put the fraction in (3) over the common denominator and note that the numerator must equal 3s. This gives the following system of linear equations for A, B, C, D.
2A− 9C − 18D = 0
−3A + 2B + 9C + 9D = 3
A− 3B − C − 2D = 0
B + C + D = 0
The solution of this system is
A = −81/130, B = −21/130, C = 6/13, D = −3/10
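These coefficients can be checked with exact rational arithmetic; the following sketch runs a small Gauss-Jordan elimination over the rationals using Python's fractions module (unknowns ordered A, B, C, D; each row holds the coefficients and the right-hand side of one equation above):

```python
from fractions import Fraction

rows = [
    [2, 0, -9, -18, 0],   # 2A - 9C - 18D = 0
    [-3, 2, 9, 9, 3],     # -3A + 2B + 9C + 9D = 3
    [1, -3, -1, -2, 0],   # A - 3B - C - 2D = 0
    [0, 1, 1, 1, 0],      # B + C + D = 0
]
M = [[Fraction(x) for x in row] for row in rows]
size = 4
for col in range(size):
    # Pick a row with a nonzero pivot, normalize it, and clear the column.
    piv = next(r for r in range(col, size) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [x / M[col][col] for x in M[col]]
    for r in range(size):
        if r != col and M[r][col] != 0:
            factor = M[r][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
A, B, C, D = (M[r][size] for r in range(size))
print(A, B, C, D)  # -81/130 -21/130 6/13 -3/10
```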
Remarks.
1. The use of the Laplace transform to solve initial value problems requires that the initial values y(0), y′(0) be taken at time t_0 = 0. If one is given the IVP
ay′′ + by′ + cy = g(t), y(t0) = y0, y′(t0) = y′0,
then one simply translates the t variable by defining the function v(t) =y(t + t0). Since y′′(t + t0) = v′′(t) and y′(t + t0) = v′(t), we get the newIVP
av′′(t) + bv′(t) + cv(t) = g(t + t0), v(0) = y0, v′(0) = y′0.
We can use the Laplace transform to find v(t). Then, we find y(t) usingthe formula y(t) = v(t− t0).
2. So far, the Laplace transform simply gives us another method with which we can solve initial value problems for linear differential equations with constant coefficients. A possible advantage is that we can solve initial value problems without first solving the homogeneous equation and then finding a particular solution. The Laplace transform takes the differential equation for a function y and forms an associated algebraic equation to be solved for L(y). Then, one has to take the inverse Laplace transform to get y. A possible disadvantage is that the computations may be cumbersome, since we need to find the inverse transforms at the end of the process.
The real power of the Laplace transform will be seen later when we dealwith differential equations with discontinuous right hand sides.
For convenience, we give a list of elementary functions and their Laplace transforms.
Problems for section 6.2: p. 303, 1-10, 11, 13, 15, 21, 22, 23
Formula for the use of Laplace Transforms to Solve Second Order Differential Equations.

Given the differential equation

ay′′ + by′ + cy = g(t),   y(0) = y_0,   y′(0) = y′_0,

we have

L(y) = ((as + b)y_0 + ay′_0 + L(g(t)))/(as^2 + bs + c).

We get the solution y(t) by taking the inverse Laplace transform.

The following table is useful for applying this technique.

Table of Laplace Transforms

Definition of the Laplace transform:
L{f(t)} = ∫_0^∞ e^{-st} f(t) dt,   f(t) = L^{-1}{F(s)},   F(s) = L{f(t)}.

Laplace transforms of elementary functions:
1        ↔ 1/s
t^n      ↔ n!/s^{n+1}
t^a      ↔ Γ(a + 1)/s^{a+1}
e^{at}   ↔ 1/(s − a)
sin(kt)  ↔ k/(s^2 + k^2)
cos(kt)  ↔ s/(s^2 + k^2)
sinh(kt) ↔ k/(s^2 − k^2)
cosh(kt) ↔ s/(s^2 − k^2)

First translation and derivative theorems:
e^{at} f(t) ↔ F(s − a)
t^n f(t)    ↔ (−1)^n (d^n/ds^n) F(s)
f′(t)       ↔ sF(s) − f(0)
f′′(t)      ↔ s^2 F(s) − sf(0) − f′(0)
f′′′(t)     ↔ s^3 F(s) − s^2 f(0) − sf′(0) − f′′(0)

Unit step and Dirac delta function:
u(t − a)         ↔ e^{-as}/s
f(t)u(t − a)     ↔ e^{-as} L{f(t + a)}
f(t − a)u(t − a) ↔ e^{-as} F(s)
δ(t − a)         ↔ e^{-as}

Convolution theorem and Laplace transform of a periodic function:
(f ∗ g)(t) = ∫_0^t f(τ)g(t − τ) dτ ↔ F(s)G(s)
f(t) = f(t + p) periodic ↔ (1/(1 − e^{-sp})) ∫_0^p e^{-st} f(t) dt
15. Step Functions and initial value problems with discontinuous forcing
In applications it is frequently useful to consider differential equations whose forcing terms are piecewise differentiable. Here we begin to explore techniques which enable us to deal with this situation.
Def. Let c > 0 be a positive real number. The Heaviside c function isthe function uc defined by
u_c(t) = 0 for t < c, and u_c(t) = 1 for t ≥ c.
There are some simple piecewise continuous functions which can be constructed by applying simple operations to various u_c's.
Example 1. Consider the graphs of the functions
uc(t), 1− uc(t), uc − u2c.
These are as in the next figures (figures omitted). We can easily compute the Laplace transform of u_c(t) as follows:
L(u_c(t)) = ∫_0^∞ e^{-st} u_c(t) dt
          = ∫_c^∞ e^{-st} dt
          = [−(1/s) e^{-st}]_{t=c}^{∞}
          = e^{-cs}/s   for s > 0.
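This formula is easy to confirm numerically; a sketch using the trapezoid rule on the truncated integral (c, s, T, and n are arbitrary sample choices; for s > 0 the tail beyond T is negligible):

```python
import math

# Numerical check of L(u_c)(s) = e^{-cs}/s: since u_c vanishes below c,
# the Laplace integral reduces to the integral of e^{-st} over [c, T].
c, s, T, n = 1.5, 2.0, 40.0, 100000
h = (T - c) / n
total = 0.5 * (math.exp(-s * c) + math.exp(-s * T))
for k in range(1, n):
    total += math.exp(-s * (c + k * h))
approx = total * h
exact = math.exp(-c * s) / s
print(approx, exact)
```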
For a function f(t) defined for t > 0, consider the function g(t) = u_c(t)f(t − c). The graph of g(t) is zero for 0 < t < c, and is the graph of f(t) translated to the right to start at c.
The Laplace Transform of g(t) is simply related to that of f .Theorem. If F (s) = L(f(t)) exists for s > a ≥ 0, and c is a positive
constant, then
L(uc(t)f(t− c)) = e−csL(f(t)) = e−csF (s)
for s > a. Also,
L−1(e−csF (s)) = uc(t)f(t− c).
Proof. Let F(s) = L(f(t)). We have, using the definitions and the substitution ξ = t − c,
L(u_c(t)f(t − c)) = ∫_0^∞ e^{-st} u_c(t) f(t − c) dt
                  = ∫_c^∞ e^{-st} f(t − c) dt
                  = ∫_0^∞ e^{-s(c+ξ)} f(ξ) dξ
                  = e^{-sc} ∫_0^∞ e^{-sξ} f(ξ) dξ
                  = e^{-sc} F(s).
QED. An alternative formulation of the above theorem is the following.
L(uc(t)f(t)) = e−csG(s)
where G(s) = L(f(t + c)). To see this, just write f(t) = f(t + c − c) and use the above theorem. Let us present some examples of the use of these formulas.
Example 2. Let f(t) be defined by
f(t) = sin(t) for 0 < t < π/4, and f(t) = sin(t) + cos(t − π/4) for t ≥ π/4.
Compute L(f(t)). We have f(t) = sin(t) + u_{π/4}(t) cos(t − π/4).
So,
L(f(t)) = L(sin(t)) + L(u_{π/4}(t) cos(t − π/4))
        = 1/(s^2 + 1) + s e^{-πs/4}/(s^2 + 1).
Example 3. Find the inverse transform of
F(s) = (1 − e^{-2s})/s^2.
We have
L^{-1}(F(s)) = L^{-1}(1/s^2) − L^{-1}(e^{-2s}/s^2)
             = t − u_2(t)(t − 2).
Example 4. Find the inverse transform of
G(s) = 1/(s^2 − 4s + 5).
We have
G(s) = 1/(s^2 − 4s + 5)
     = 1/((s^2 − 4s + 4) + 1)
     = 1/((s − 2)^2 + 1)
     = F(s − 2)

where

F(s) = 1/(s^2 + 1).
So,
F (s) = L(sin(t))
and
G(s) = L(e^{2t} sin(t)).
Example 5. Find L^{-1}(2s e^{-3s}/(s^2 + 5)).
We first find L^{-1}(2s/(s^2 + 5)):

L^{-1}(2s/(s^2 + 5)) = L^{-1}(2s/(s^2 + (√5)^2)) = 2 cos(√5 t)
Then, we get

L^{-1}(2s e^{-3s}/(s^2 + 5)) = 2u_3(t) cos(√5 (t − 3)).
Example 6. Find L(u_2(t) t^2).
We use the formula

L(u_c(t)f(t)) = e^{-cs} L(f(t + c)).

We get

L(u_2(t) t^2) = e^{-2s} L((t + 2)^2)
              = e^{-2s}(L(t^2) + 4L(t) + 4L(1))
              = e^{-2s}(2/s^3 + 4/s^2 + 4/s)
Differential Equations with discontinuous right-hand sides
We apply the above techniques to some initial value problems.
Example 7. Solve the initial value problem
2y′′ + y′ + 2y = u5(t)− u20(t), y(0) = 0, y′(0) = 0.
From our formulas we have
L(y) = (L(u_5(t)) − L(u_{20}(t)))/(2s^2 + s + 2)
     = (e^{-5s} − e^{-20s})/(s(2s^2 + s + 2))
and we have to take the inverse Laplace transform of the right side. This is
L^{-1}(e^{-5s}/(s(2s^2 + s + 2))) − L^{-1}(e^{-20s}/(s(2s^2 + s + 2))).
This is
u5(t)f(t− 5)− u20(t)f(t− 20)
where
f(t) = L^{-1}(1/(s(2s^2 + s + 2))).
We now compute this last inverse transform. Using partial fractions, we write
1/(s(2s^2 + s + 2)) = A/s + (B + Cs)/(2s^2 + s + 2)
We determine A, B, C from
2As^2 + As + 2A + Bs + Cs^2 = 1   for all s.
Setting terms of like powers equal we get a system of three equations inthe unknowns A, B, C, which we solve to get
A = 1/2,   B = −1/2,   C = −1.
Hence, we have

(1/2)(1/s) − (1/2 + s)/(2(s^2 + s/2 + 1)) = (1/2)(1/s) − (1/4 + (s + 1/4))/(2((s + 1/4)^2 + 15/16))
We compute the inverse Laplace transform of this as the sum of the terms
L^{-1}(1/(2s)) − L^{-1}((1/4)/(2((s + 1/4)^2 + 15/16))) − L^{-1}((s + 1/4)/(2((s + 1/4)^2 + 15/16)))
We have

L^{-1}(1/(2s)) = 1/2,

L^{-1}((1/4)/(2((s + 1/4)^2 + 15/16))) = L^{-1}((1/(8√(15/16))) √(15/16)/((s + 1/4)^2 + 15/16))
 = (1/(2√15)) e^{-t/4} sin(√(15/16) t),

and

L^{-1}((s + 1/4)/(2((s + 1/4)^2 + 15/16))) = (1/2) e^{-t/4} cos(√(15/16) t).
Problems: p. 311, 1-17; p. 318, 1-11

Impulse functions
In some cases one wants to consider a function which is very large for a short amount of time. One wants to take integrals of these functions and to consider them as forcing terms in differential equations.
The standard impulse function is the Dirac delta function. This is thought of as a function δ(t) which is infinite at t = 0, zero for t ≠ 0, and whose integral has value 1. There is no classical function with these properties, so it takes some work to make rigorous sense of this. The mathematical theory of distributions (invented by the French mathematician Laurent Schwartz) is the modern way to rigorously justify the concepts and calculations involving Dirac delta functions. That theory is studied in advanced analysis courses and is beyond the scope of this course.
Nevertheless, one can operate formally with delta functions by assuming that they have certain properties. So, we assume that the expression δ(t − t_0) represents a “generalized function” whose value is 0 for t ≠ t_0, infinite at t = t_0, and
∫_{−∞}^{∞} δ(t − t_0) f(t) dt = f(t_0)

for any continuous function f(t). We assume that the Laplace transform L(δ(t − t_0)) is defined by the formula
L(δ(t − t_0)) = e^{-st_0}.
Using this one can solve differential initial value problems of the form
ay′′ + by′ + cy = δ(t− t0), y(0) = 0, y′(0) = 0
as we did above with Laplace transform methods.One takes
L(y) = L(δ(t − t_0))/(as^2 + bs + c).
Using L(δ(t − t_0)) = e^{-st_0}, we can find the inverse Laplace transform and find y in terms of Heaviside functions as above.
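One way to make the formula L(δ(t − t_0)) = e^{-st_0} plausible is to replace δ by a narrow pulse of unit area and let its width shrink; a sketch (t_0, s, and the pulse widths are arbitrary choices, and the pulse's transform is evaluated in closed form):

```python
import math

# Approximate delta(t - t0) by a pulse of height 1/eps on [t0, t0 + eps]
# (total area 1).  Its Laplace transform has the closed form
#   (1/eps) * int_{t0}^{t0+eps} e^{-st} dt
#     = e^{-s t0} (1 - e^{-s eps}) / (s eps),
# which tends to e^{-s t0} as eps -> 0.
t0, s = 2.0, 1.5
exact = math.exp(-s * t0)
approx = None
for eps in (0.1, 0.01, 0.001):
    approx = math.exp(-s * t0) * (1.0 - math.exp(-s * eps)) / (s * eps)
    print(eps, abs(approx - exact))
```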
Convolutions.
It is sometimes desirable to compute the inverse Laplace transform of the product of two functions F(s) and G(s). This calculation requires an operation on functions called convolution. Given two piecewise continuous functions f(t), g(t) of exponential order a defined on (0, ∞), we define

(f ⋆ g)(t) = ∫_0^t f(t − τ)g(τ) dτ.
The function f ⋆ g is called the convolution of f and g. It will also have exponential order a.
There are the following properties.
1. f ? g = g ? f
2. f ? (g ? h) = (f ? g) ? h
3. f ⋆ (g_1 + g_2) = f ⋆ g_1 + f ⋆ g_2
4. f ? 0 = 0
5. If L(f(t)) = F (s) and L(g(t)) = G(s), then
L((f ? g)(t)) = F (s)G(s)
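The convolution definition can be exercised numerically; for instance, (sin ⋆ 1)(t) = ∫_0^t sin(t − τ) dτ = 1 − cos(t), consistent with the transform side, since L(sin(t))L(1) = 1/(s(s^2 + 1)) = L(1 − cos(t)). A sketch using the trapezoid rule (the value of t and the number of panels are arbitrary choices):

```python
import math

def convolve(f, g, t, n=2000):
    # Trapezoid approximation of (f * g)(t) = \int_0^t f(t - tau) g(tau) dtau.
    h = t / n
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        tau = k * h
        total += f(t - tau) * g(tau)
    return total * h

# (sin * 1)(t) should equal 1 - cos(t).
t = 2.0
approx = convolve(math.sin, lambda tau: 1.0, t)
exact = 1.0 - math.cos(t)
print(approx, exact)
```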
Some more examples of Laplace transforms:Find the Laplace transforms of the following functions.
1.
f(t) = 0 for t < 3, and f(t) = (t − 3)^2 for t ≥ 3.
2.
f(t) = 0 for t < 3, and f(t) = t^2 − 6t + 12 for t ≥ 3.
For the first function f(t), we have
f(t) = u3(t)(t− 3)2.
Using the formula for u_c(t)f(t − c), we get

e^{-3s} F(s)

where

F(s) = L(t^2) = 2/s^3.
For the second function f(t) we have
f(t) = u_3(t)((t − 3)^2 + 3) = u_3(t)(t − 3)^2 + 3u_3(t).

We get that the Laplace transform is

e^{-3s}(2/s^3) + 3e^{-3s}/s.
s.
16. Systems of Differential Equations
1 Matrices and Systems of Linear Equations
An n × m matrix is an array A = (a_{ij}) with n rows and m columns:

a_{11} · · · a_{1m}
a_{21} · · · a_{2m}
. . .
a_{n1} · · · a_{nm}

where each a_{ij} is a real or complex number. For 1 ≤ j ≤ m and 1 ≤ i ≤ n, the n × 1 matrix (a_{1j}, . . . , a_{nj})^T is called the j-th column of A, and the 1 × m matrix (a_{i1} . . . a_{im}) is called the i-th row of A.

We can add n × m matrices as follows. If A = (a_{ij}) and B = (b_{ij}), then
C = A + B is the matrix (cij) defined by
cij = aij + bij.
We can multiply an n × m matrix A = (a_{ij}) by an m × p matrix B = (b_{jk}) to get an n × p matrix C = (c_{ik}) defined by

c_{ik} = Σ_{j=1}^{m} a_{ij} b_{jk}.
Thus the element c_{ik} is the dot product of the i-th row of A and the k-th column of B.
Both the operations of matrix addition and matrix multiplication areassociative. That is,
(A + B) + C = A + (B + C), (AB)C = A(BC).
Multiplication of n × n matrices is not always commutative. For instance, if (writing a 2 × 2 matrix row by row, with rows separated by semicolons)

A = ( 1 1 ; 0 1 ) and B = ( 1 0 ; 1 1 ),

then

AB = ( 2 1 ; 1 1 ) and BA = ( 1 1 ; 1 2 ).
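The row-by-column rule and the non-commutativity example can be reproduced directly; a minimal sketch:

```python
def matmul(A, B):
    # c_ik = sum over j of a_ij * b_jk (row i of A dotted with column k of B)
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(p)]
            for i in range(n)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(matmul(A, B))  # [[2, 1], [1, 1]]
print(matmul(B, A))  # [[1, 1], [1, 2]]
```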
We will write vectors x = (x1, . . . , xn) in Rn both as row vectors andcolumn vectors.
Matrices are useful for dealing with systems of linear equations.Since our interest here is in treating systems of differential equations, we
will only consider linear systems of n equations in n unknowns.We can write the system
a11x1 + a12x2 + . . . + a1nxn = b1
a21x1 + a22x2 + . . . + a2nxn = b2...
an1x1 + an2x2 + . . . + annxn = bn
as a single vector equation
Ax = b
where A is the n × n matrix (a_{ij}), x is an unknown n-vector, and b is a known n-vector.
Let e_i be the n-vector with zeroes everywhere except in the i-th position, where it has a 1. The n × n matrix I whose i-th row is e_i is called the n × n identity matrix.
For any n× n matrix A we have
AI = IA = A.
An n × n matrix A is invertible if there is another n × n matrix B such that AB = BA = I. We also call such an A non-singular. A singular matrix is one that is not invertible.
The matrix B is unique and is called the inverse of A. It is usually written A^{-1}.
Let 0 denote the n−vector all of whose entries are 0.A collection u1,u2, . . . ,uk of vectors in Rn is called a linearly independent
set of vectors in Rn, if whenever we have a linear combination
α1u1 + . . . + αkuk = 0,
with the α_i constants (scalars), we must have α_i = 0 for every i.
Fact. The following conditions are equivalent for an n × n matrix A tobe invertible.
1. the rows of A form a linearly independent set of vectors
2. the columns of A form a linearly independent set of vectors
3. for every vector b, the system
Ax = b
has a unique solution
4. det(A) ≠ 0
We define the number det(A) inductively by

det(A) = Σ_{i=1}^{n} (−1)^{i+1} a_{i1} det(A[i | 1])

where A[i | 1] is the (n − 1) × (n − 1) matrix obtained by deleting the first column and i-th row of A.
2 Systems of Differential Equations
Let U be an open subset of R^n, let I be an open interval in R, and let f : I × R^n → R^n be a function from I × R^n to R^n.
The equation
x′ = f(t, x) (1)
is called a first order ordinary differential equation in Rn. We emphasizehere that x is an n−dimensional vector in Rn. We also consider the initialvalue problem
x′ = f(t, x),   x(t_0) = x_0 (2)
where t0 ∈ I and x0 ∈ U .A solution to the IVP (2) is a differentiable function x(t) from an open
subinterval J ⊂ I containing t0 such that
x′(t) = f(t, x(t))
for t ∈ J .The general solution to (1) is an expression
x(t, c) (3)
where c is an n−dimensional constant vector in Rn such that every solu-tion of (1) can be written in the form (3) for some choice of c.
If we write out the D.E. (1) in coordinates, we get a system of first orderdifferential equations as follows.
x′_1 = f_1(t, x_1, . . . , x_n)
...
x′_n = f_n(t, x_1, . . . , x_n)
(4)
Fact: The n−th order scalar D.E. is equivalent to a simple n−dimensionalsystem.
Consider
y(n) = f(t, y, y′, . . . , y(n−1)). (5)
Letting x_1 = y, x_2 = y′, . . . , x_n = y^{(n−1)}, we get
x′_1 = x_2
x′_2 = x_3
...
x′_n = f(t, x_1, . . . , x_n)
(6)
If we have a solution y(t) to (5), and set x_1(t) = y(t), x_2(t) = y′(t), . . . , x_n(t) = y^{(n−1)}(t), then x(t) = (x_1(t), . . . , x_n(t)) is a solution to the system (6). Conversely, if we have a solution x(t) = (x_1(t), . . . , x_n(t)) to the system (6), then putting y(t) = x_1(t) gives a solution to (5).
The following existence and uniqueness theorem is proved in more ad-vanced courses.
Theorem (Existence-Uniqueness Theorem for systems). Let U be an open set in R^n, and let I be an open interval in R. Let f(t, x) be a C^1 function of the variables (t, x) defined in I × U with values in R^n. Let (t_0, x_0) ∈ I × U. Then, there is a unique solution x(t) to the initial value problem
x′ = f(t, x),   x(t_0) = x_0.
If the right side of the system f(t,x) does not depend on time, then onecalls the system autonomous (or time-independent). Otherwise, one calls thesystem non-autonomous or time-dependent.
There is a simple geometric description of autonomous systems in Rn. Inthat case, we consider
x′ = f(x) (7)
where f is a C^1 function defined in an open subset U in R^n. We think of f as a vector field in U and solutions x(t) of (7) as curves in U which are everywhere tangent to f.
2.1 Linear Systems of Differential Equations: General Properties
The system
x′ = A(t)x + g(t) (8)
in which A(t) is a continuous n × n matrix-valued function of t and g(t) is a continuous n-vector-valued function of t is called a linear system of differential equations (or a linear differential equation) in R^n.
As in the case of scalar equations, one gets the general solution to (8) in two steps. First, one finds the general solution x_h(t) to the associated homogeneous system
x′ = A(t)x. (9)
Then, one finds a particular solution x_p(t) to (8) and gets the general solution to (8) as the sum
x(t) = xh(t) + xp(t).
Accordingly, we will examine ways of doing both tasks.
Let y_i(t) be a collection of R^n-valued functions for 1 ≤ i ≤ k. We say
that they form a linearly independent set of functions if whenever α1, . . . , αk
are k scalars such that
α1y1(t) + α2y2(t) + . . . + αkyk(t) = 0
for all t, we have that α_1 = . . . = α_k = 0.
An n × n matrix Φ(t) whose columns are n linearly independent solutions to the homogeneous linear system (9) is called a fundamental matrix for (9). A necessary and sufficient condition for a matrix of solutions Φ(t) to be a fundamental matrix is that det(Φ(t)) ≠ 0 for some (or any) t.
columns are the functions yi(t), then the function
W (t) = W (y1, . . . ,yn)(t) = det(Φ(t))
is called the Wronskian of the collection {y_1(t), . . . , y_n(t)} of solutions. It is then a fact that W(t) vanishes at some point t_0 if and only if it vanishes at all points t.
The general solution to (9) has the form
x(t) = Φ(t)c
where Φ(t) is any fundamental matrix for (9) and c is a constant vector.
Thus, we have to find fundamental matrices and particular solutions. We will do this explicitly below for n = 2, 3 and constant matrices A.
To close this section, we observe an analogy between systems
x′ = Ax
and the scalar equation x′ = ax.One can define the matrix exp(A) = eA by the power series
eA = I + A +A2
2!+
A3
3!+ . . . .
It can be shown that the matrix series on the right side of this equationconverges for any A. The series represents a matrix with many propertiesanalogous to the usual exponential function of a real variable.
In particular, for a real number t, the matrix function t → e^{tA} is a differentiable matrix-valued function and its derivative, computed by differentiating the series

I + tA + t^2A^2/2! + t^3A^3/3! + · · ·

term by term, satisfies
(d/dt) e^{tA} = A e^{tA}.
It follows that, for each vector x0, the vector function
x(t) = etAx0
is the unique solution to the IVP
x′ = Ax, x(0) = x0.
Hence, the matrix etA is a fundamental matrix for the system
x′ = Ax.
This observation is useful in certain circumstances, but, in general, it is hard to compute e^{tA} directly. In practice, the methods involving eigenvalues described in the next section are easier to use to find the general solution to
x′ = Ax.
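A truncated version of the series is nevertheless easy to compute for small matrices; the sketch below uses A = ( 0 1 ; −1 0 ), for which e^{tA} is the rotation matrix ( cos t, sin t ; −sin t, cos t ), so x(t) = e^{tA}x_0 solves x′ = Ax. The value of t and the number of series terms are arbitrary choices.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def expm(A, terms=30):
    # Truncated power series I + A + A^2/2! + ... ; adequate for
    # matrices of modest norm.
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)   # power is now A^k
        fact *= k                   # fact is now k!
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

t = 1.2
E = expm([[0.0, t], [-t, 0.0]])
print(E)  # approximately [[cos t, sin t], [-sin t, cos t]]
```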
17. Linear Homogeneous Systems with Constant Coefficients
Consider the system
x′ = Ax (1)
where A is a constant n × n matrix and x is an n-vector in R^n.
In the one-dimensional (scalar) case, we found solutions using exponential
functions, so it seems reasonable to try to find a solution of the form
x(t) = e^{rt} ξ
where r is a real constant and ξ is a non-zero constant vector. Plugging in, we get
x′(t) = re^{rt} ξ = Ae^{rt} ξ
for all t. Since e^{rt} is never zero, we can cancel it and get
rξ = Aξ
or
(rI − A)ξ = 0. (2)
Thus, r is a scalar such that there is a non-zero vector ξ which is a solution of the system of linear equations (2).
This only holds for special r's and special ξ's.
Definition. Given an n × n matrix A, we call a number r an eigenvalue of A if there is a non-zero vector ξ such that
Aξ = rξ.
The vector ξ is called an eigenvector for the eigenvalue r.Note that r is an eigenvalue of A if and only if det(rI − A) = 0. The
function z(r) = det(rI−A) is a polynomial of degree n in r and is called thecharacteristic polynomial of A. Thus, eigenvalues of A are the roots ofthe characteristic polynomial of A.
Remark.
1. Some general facts about eigenvalues and eigenvectors.
(a) Let A be an n× n matrix and let r1 be an eigenvalue of A. Let ξand η be eigenvectors associated to r1. Then, for arbitrary scalarsα, β, we have that αξ + βη is also an eigenvector associated to r1
provided that it is not the zero vector.
Proof.
Let v = αξ + βη and assume this is not 0.
We have
A(v) = A(αξ + βη)
= αAξ + βAη
= αr1ξ + βr1η
= r1(αξ + βη)
= r1v
Therefore v is also an eigenvector as required.
(b) Let r1 ≠ r2 be distinct eigenvalues of A with associated eigenvectors ξ, η, respectively. Then, ξ is not a multiple of η.
Proof.
Assume that ξ = αη for some α. Since both vectors are not 0, we must have α ≠ 0.
Now,
Aξ = r1ξ
= r1αη,
Aξ = Aαη = αAη = αr2η,
So,
r1αη = r2αη.
Since α ≠ 0 and η ≠ 0, we get r1 = r2, which is a contradiction.
2. A real matrix may not have any real eigenvalues, but it always has complex eigenvalues.
There is a simple formula for the characteristic polynomial of a 2 × 2 matrix. Let
A = ( a11  a12 ; a21  a22 ).
Then,
rI − A = ( r − a11   −a12 ; −a21   r − a22 ),
so
det(rI − A) = (r − a11)(r − a22) − a12a21
= r² − a11r − a22r + a11a22 − a12a21
= r² − tr(A)r + det(A),
where we define tr(A) = a11 + a22 and det(A) = a11a22 − a12a21.
Let us find some characteristic polynomials and eigenvalues for the following matrices.
Example 1.
A = ( 2  1 ; 1  1 ).
We have z(r) = r² − 3r + 1, so r = (3 ± √5)/2.
Example 2.
A = ( 3  1 ; −1  2 ).
We have z(r) = r² − 5r + 7, so
r = (5 ± √(25 − 28))/2 = (5 ± √(−3))/2 = (5 ± i√3)/2.
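A quick numerical cross-check of these two examples, using Python with NumPy (an assumed tool, in the spirit of the answer-checking software mentioned in the introduction):

```python
import numpy as np

# Example 1: [[2, 1], [1, 1]] should have eigenvalues (3 +/- sqrt(5))/2.
A1 = np.array([[2.0, 1.0], [1.0, 1.0]])
eig1 = np.sort(np.linalg.eigvals(A1).real)
expected1 = np.sort([(3 - np.sqrt(5)) / 2, (3 + np.sqrt(5)) / 2])
print(np.allclose(eig1, expected1))  # True

# Example 2: [[3, 1], [-1, 2]] should have eigenvalues (5 +/- i*sqrt(3))/2.
A2 = np.array([[3.0, 1.0], [-1.0, 2.0]])
eig2 = np.linalg.eigvals(A2)
expected2 = np.array([(5 + 1j * np.sqrt(3)) / 2, (5 - 1j * np.sqrt(3)) / 2])
print(np.allclose(np.sort_complex(eig2), np.sort_complex(expected2)))  # True
```

Both checks print True, confirming the roots computed by hand above.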
Next, we compute the eigenvectors associated to the eigenvalues.
Example 1a. Return to
A = ( 2  1 ; 1  1 ).
Let r1 = (3 + √5)/2, and let
ξ = (ξ1, ξ2)^T
be the associated eigenvector. Then, we have
(r1 − a11)ξ1 − a12ξ2 = 0.
The other equation is not necessary since the matrix (r1I − A) is singular. Thus, we have the condition
ξ2 = (r1 − a11)ξ1 / a12
for ξ to be an eigenvector for r1. We can take ξ1 = 1, and get
ξ = ( 1, (r1 − a11)/a12 )^T.
Similarly, for r2, we get
ξ = ( 1, (r2 − a11)/a12 )^T.
Example 2a. In this case, we have complex eigenvalues, so there is no real eigenvector. Let r1 = α + iβ and r2 = α − iβ be the two roots. We can get complex eigenvectors for r1 and r2 in a manner analogous to that of Example 1a. For r1 or r2, we simply look for a complex vector
ξ = (ξ1, ξ2)^T
with complex numbers ξ1 and ξ2 such that
Aξ = r1ξ  or  Aξ = r2ξ,
respectively. This gives the complex vector
ξ = ( 1, (r1 − a11)/a12 )^T
for r1, and the complex vector
ξ = ( 1, (r2 − a11)/a12 )^T
for r2.
We will see next how to use this for solving systems of two linear differential equations.
1 Two dimensional homogeneous systems of linear differential equations with constant coefficients
Consider the system
x′ = a11 x + a12 y
y′ = a21 x + a22 y
where
A = ( a11  a12 ; a21  a22 )
is a constant 2 × 2 real matrix. We compute the eigenvalues r1, r2. These are the roots of the characteristic polynomial
r² − tr(A)r + det(A).
Case 1: Both roots are real and distinct. Say these are r1 ≠ r2.
Step 1. Compute the eigenvectors v1 for r1 and v2 for r2, respectively. Then, we get solutions of the form
x1 = e^{r1 t} v1,   x2 = e^{r2 t} v2.
These turn out to be linearly independent, so the general solution is
x(t) = α1 e^{r1 t} v1 + α2 e^{r2 t} v2,
where α1 and α2 are constants.
Case 2: Both roots are real and equal. Say the common root is r1. We get one solution x1(t) of the form
x1(t) = e^{r1 t} v1,
where v1 is an eigenvector for r1. Next, we have two subcases.
Subcase 2a: There are two linearly independent eigenvectors, say ξ, η, for the eigenvalue r1. In this case the general solution is
x(t) = e^{r1 t} (α1 ξ + α2 η).
An example of this is the system
x′ = 2x
y′ = 2y
with general solution
x(t) = e^{2t} ( α1 (1, 0)^T + α2 (0, 1)^T ).
Subcase 2b: All eigenvectors for r1 are multiples of v1.
In this case we proceed as follows. Let us try to find another linearly independent solution of the form
x2(t) = e^{r1 t} v0 + t e^{r1 t} v1.
We get
x2′(t) = r1 e^{r1 t} v0 + t r1 e^{r1 t} v1 + e^{r1 t} v1 = A( e^{r1 t} v0 + t e^{r1 t} v1 ),
or
r1 v0 + t r1 v1 + v1 = A( v0 + t v1 ).
Setting the constant terms and the terms with t equal, we get that v1 is an eigenvector, and v0 satisfies the linear system
(A − r1 I) v0 = v1.   (3)
Finding the solution v0, we can in fact obtain a second linearly independent solution of the above form. Thus, the general solution has the form
x(t) = α1 e^{r1 t} v1 + α2 e^{r1 t} ( v0 + t v1 ),
where v1 is an eigenvector associated to r1 and v0 satisfies (3). Note that this involves solving the two systems of equations
(A − r1 I) v1 = 0,   (A − r1 I) v0 = v1.
Example. Consider the system
x′ = x
y′ = x + y.
The matrix is
A = ( 1  0 ; 1  1 ),
with characteristic equation
r² − 2r + 1 = (r − 1)².
Hence, r = 1 is a root of multiplicity two.
We see that v1 = (0, 1)^T is an eigenvector, so we get one non-zero solution as
x1(t) = e^t (0, 1)^T.
For the second independent solution we have
x2(t) = e^t v0 + t e^t (0, 1)^T,
where
(A − I) v0 = (0, 1)^T.   (4)
The solutions to this last equation are all vectors of the form (1, ξ2)^T with ξ2 arbitrary, so we can pick v0 = (1, 0)^T and get a second linearly independent solution as
x2(t) = e^t (1, 0)^T + t e^t (0, 1)^T.
The general solution is
x(t) = c1 x1(t) + c2 x2(t).
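The two conditions used here can be verified symbolically; the following SymPy sketch (an assumed tool choice) confirms that (A − I)v0 = v1 and that x2 really solves x′ = Ax:

```python
import sympy as sp

t = sp.symbols("t")
A = sp.Matrix([[1, 0], [1, 1]])

# x2(t) = e^t v0 + t e^t v1, with v1 = (0,1) an eigenvector and (A - I)v0 = v1.
v1 = sp.Matrix([0, 1])
v0 = sp.Matrix([1, 0])
x2 = sp.exp(t) * v0 + t * sp.exp(t) * v1

# Check the generalized-eigenvector equation (4) and the ODE itself.
print((A - sp.eye(2)) * v0 == v1)                          # True
print(sp.simplify(x2.diff(t) - A * x2) == sp.zeros(2, 1))  # True
```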
Remark. Note that we used the method above when there are not two linearly independent eigenvectors for the eigenvalue 1. We did not check whether this is the case, so why does this work? The answer is that if there were indeed two linearly independent eigenvectors for the eigenvalue 1, then the system (4) would not have had any solutions, so the fact that we could solve the system justifies the approach. (The proof of this requires more linear algebra and will have to be deferred to a more advanced course.)
This method generalizes to n dimensional systems with eigenvalues of multiplicity greater than one, although the linear algebra required is more complicated. We will see that the method of elimination described below is more efficient for two dimensional systems with a multiple root.
Case 3: The roots are α ± iβ, where β ≠ 0.
Here we use complex variables. We have that x_c(t) = e^{(α+iβ)t} ξ is a complex solution, where ξ is a complex eigenvector associated to the eigenvalue α + iβ. The real and imaginary parts give linearly independent solutions. Then the general real solution is a linear combination of these independent solutions.
Let us do an example. Consider the system
x′ = 2x − 3y
y′ = 2x + 4y.
The matrix A is given by
A = ( 2  −3 ; 2  4 ).
The characteristic polynomial is
r² − 6r + 14,
with roots
r = ( 6 ± √(36 − 56) )/2 = 3 ± i√5.
We seek a complex eigenvector ξ = (ξ1, ξ2)^T for the eigenvalue r = 3 + i√5. We get the equation
(rI − A)ξ = 0,
or
( r − 2   3 ; −2   r − 4 ) (ξ1, ξ2)^T = (0, 0)^T.
Because the matrix is singular, we need only consider the first row equation
(r − 2)ξ1 + 3ξ2 = 0.
Setting ξ1 = 1, we get
ξ2 = (2 − r)/3 = ( 2 − (3 + i√5) )/3 = ( −1 − i√5 )/3.
Thus, we get a complex solution of the form
x_c = e^{(3+i√5)t} ( 1, (−1 − i√5)/3 )^T
= e^{(3+i√5)t} [ ( 1, −1/3 )^T − i ( 0, √5/3 )^T ].
The real and imaginary parts are
R1 := e^{3t} [ cos(√5 t) ( 1, −1/3 )^T + sin(√5 t) ( 0, √5/3 )^T ],
I1 := e^{3t} [ sin(√5 t) ( 1, −1/3 )^T − cos(√5 t) ( 0, √5/3 )^T ].
The general real solution is
x(t) = α1 R1 + α2 I1.
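It is easy to check that R1 and I1 really are solutions; the following SymPy sketch (an assumed tool choice) verifies that both satisfy x′ = Ax:

```python
import sympy as sp

t = sp.symbols("t", real=True)
A = sp.Matrix([[2, -3], [2, 4]])
s5 = sp.sqrt(5)

# Real and imaginary parts of the complex solution found above.
u = sp.Matrix([1, sp.Rational(-1, 3)])
w = sp.Matrix([0, s5 / 3])
R1 = sp.exp(3 * t) * (sp.cos(s5 * t) * u + sp.sin(s5 * t) * w)
I1 = sp.exp(3 * t) * (sp.sin(s5 * t) * u - sp.cos(s5 * t) * w)

# Both should satisfy x' = Ax.
print(sp.simplify(R1.diff(t) - A * R1) == sp.zeros(2, 1))  # True
print(sp.simplify(I1.diff(t) - A * I1) == sp.zeros(2, 1))  # True
```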
2 An alternate method for 2 dimensional systems: Elimination and reduction to scalar equations
First, we give some examples to describe the elimination method to solve two dimensional systems.
Example 1: Consider the system
x′ = x + y
y′ = x − y.
We can write
y = x′ − x   (5)
from the first equation and substitute into the second equation, getting
y′ = x − y = x − (x′ − x) = 2x − x′.
Since, from (5), y′ = x″ − x′, this gives the second order scalar equation for x
x″ − 2x = 0.
We know how to solve this. The characteristic equation is r² − 2, with roots r = ±√2 and general solution
x(t) = c1 e^{√2 t} + c2 e^{−√2 t}.
Then, we get y from (5) as
y(t) = x′ − x = c1 √2 e^{√2 t} − c2 √2 e^{−√2 t} − c1 e^{√2 t} − c2 e^{−√2 t},
so, the general solution to the system is
( x(t), y(t) )^T = ( c1 e^{√2 t} + c2 e^{−√2 t},   c1 √2 e^{√2 t} − c2 √2 e^{−√2 t} − c1 e^{√2 t} − c2 e^{−√2 t} )^T
= c1 e^{√2 t} ( 1, √2 − 1 )^T + c2 e^{−√2 t} ( 1, −√2 − 1 )^T.
This method can most often be used for two dimensional homogeneous systems.
Which method is best? In my opinion, this method is best when the eigenvalue is real of multiplicity two, and the matrix method is best in the other cases.
Example. Let us apply the method of elimination to the system
x′ = x
y′ = x + y
we considered above. From the second equation,
x = y′ − y,
so
x′ = y″ − y′;  and since x′ = x = y′ − y, this gives
y″ − 2y′ + y = 0.
The general solution is
y(t) = c1 e^t + c2 t e^t.
This gives
x(t) = y′ − y
= c1 e^t + c2 e^t + c2 t e^t − ( c1 e^t + c2 t e^t )
= c2 e^t,
and the general solution
( x(t), y(t) )^T = ( c2 e^t,   c1 e^t + c2 t e^t )^T
= c1 e^t ( 0, 1 )^T + c2 e^t ( 1, t )^T.
Finally, we describe some general aspects of the method of elimination. We consider the system
x′ = ax + by
y′ = cx + dy.
We assume that either b or c is not 0. Otherwise, the system is diagonal and easily solvable.
Assuming b ≠ 0, we use the elimination method to find a second order equation for x(t). We will see that the characteristic polynomial for this second order equation is the same as the characteristic polynomial of
( a  b ; c  d ).
The latter characteristic polynomial is
r² − (a + d)r + ad − bc.
Now, using by = x′ − ax,
x″ = ax′ + by′
= ax′ + bcx + bdy
= ax′ + bcx + d(x′ − ax)
= (a + d)x′ − (ad − bc)x,
or
x″ − (a + d)x′ + (ad − bc)x = 0.
Now, we can find the general solution x(t) of this second order equation, and then get y(t) in the original system from
y = (x′ − ax)/b.
If b = 0 but c ≠ 0, we use the elimination method to find a second order equation for y(t). We get the general solution for y(t), and then get x(t) from
x = (y′ − dy)/c.
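The claim that elimination reproduces the characteristic polynomial of the matrix can be checked symbolically; a SymPy sketch (an assumed tool choice):

```python
import sympy as sp

a, b, c, d, r = sp.symbols("a b c d r")
A = sp.Matrix([[a, b], [c, d]])

# det(rI - A): the characteristic polynomial of the matrix.
p_matrix = sp.expand((r * sp.eye(2) - A).det())

# The polynomial of the scalar equation x'' - (a+d)x' + (ad - bc)x = 0.
p_scalar = sp.expand(r**2 - (a + d) * r + (a * d - b * c))

print(sp.simplify(p_matrix - p_scalar))  # 0
```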
April 17, 2004 18-1
18. Geometry of two dimensional homogeneous systems
The solutions of two dimensional homogeneous systems are curves in the plane. In this section we wish to describe the geometric structure of these curves. First, it will be convenient to obtain formulas for linear changes of coordinates.
Let e1 = (1, 0)^T and e2 = (0, 1)^T be the standard basis vectors in R², and let v1, v2 be two linearly independent vectors. We know that every vector x = (x1, x2)^T in R² can be written uniquely as
x = x1 e1 + x2 e2 = α1 v1 + α2 v2.
The numbers x1, x2 are simply the usual coordinates of x. How do we get α1, α2? Using 2 × 2 matrices, it is easy to get the answer. Let
v1 = (a, c)^T,   v2 = (b, d)^T,
and consider the matrix
Q = ( a  b ; c  d )
whose first column is v1 and whose second column is v2. Because the vectors v1, v2 are linearly independent, the matrix Q is invertible. Note that Q(e1) = v1 and Q(e2) = v2.
So,
x = x1 e1 + x2 e2 = α1 v1 + α2 v2,
which is the same as the matrix equation
( x1, x2 )^T = Q ( α1, α2 )^T.
Thus,
( α1, α2 )^T = Q^{−1} ( x1, x2 )^T,
and we can find the constants α1 and α2 in terms of x1 and x2 by multiplying ( x1, x2 )^T by the inverse of the matrix Q.
There is a simple formula for the inverse of a 2 × 2 matrix
Q = ( a  b ; c  d ),
which we now give. Write det(Q) = ad − bc as usual. Then,
Q^{−1} = (1/det(Q)) ( d  −b ; −c  a ).   (1)
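Formula (1) can be checked numerically for a sample matrix; a NumPy sketch (the entries a, b, c, d are hypothetical values chosen for the check):

```python
import numpy as np

# An arbitrary invertible 2x2 matrix for the check (hypothetical values).
a, b, c, d = 2.0, 1.0, 1.0, 1.0
Q = np.array([[a, b], [c, d]])

# Formula (1): Q^{-1} = (1/det Q) [[d, -b], [-c, a]].
detQ = a * d - b * c
Q_inv = (1.0 / detQ) * np.array([[d, -b], [-c, a]])

print(np.allclose(Q_inv @ Q, np.eye(2)))  # True
```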
Def. A linear change of coordinates in the plane R² is a change of coordinates of the form
( x1, x2 )^T = Q ( u1, u2 )^T
where Q is an invertible 2 × 2 matrix. Written out in coordinates, if Q = ( a  b ; c  d ), we get
x1 = a u1 + b u2
x2 = c u1 + d u2.
It is easy to see how this transforms a homogeneous linear system
x′ = Ax
where A is a 2 × 2 matrix. Letting x = ( x1, x2 )^T and u = ( u1, u2 )^T with x = Qu, we have
x′ = Q u′ = Ax = AQ u,
so we get
Q u′ = AQ u
or
u′ = Q^{−1} A Q u.
Hence, the differential equation gives rise to a new linear homogeneous differential equation in the u-coordinates whose matrix is
B = Q^{−1} A Q.
A similar formula holds for n dimensional matrices. In general, two matrices A and B which have the property that there is an invertible matrix Q such that Q^{−1}AQ = B are called similar. This is an important concept and is studied more deeply in courses in linear algebra. We want to use the concept to help in describing the geometric structure of the solutions of linear homogeneous 2 × 2 systems with constant coefficients.
Let us begin by considering the special system
x′ = r1 x
y′ = r2 y.
We have the general solution
( x(t), y(t) )^T = ( c1 e^{r1 t}, c2 e^{r2 t} )^T.
If either c1 or c2 is 0, then the solution is on a vertical or horizontal line, so we assume that both c1 and c2 are not 0. We have
y = c2 e^{r2 t}
= ( c2 / c1^{r2/r1} ) ( c1 e^{r1 t} )^{r2/r1}
= ( c2 / c1^{r2/r1} ) x^{r2/r1}
= A x^α,
where A is a constant depending on c1, c2, and α = r2/r1. We draw some of these curves for various r1, r2.
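The relation y = A x^α can be checked numerically along a trajectory; a NumPy sketch (the rates r1, r2 and constants c1, c2 are hypothetical sample values):

```python
import numpy as np

# Sample data for the diagonal system x' = r1 x, y' = r2 y
# (hypothetical values chosen for illustration; both c's nonzero).
r1, r2 = 1.0, 2.0
c1, c2 = 1.5, 0.5

t = np.linspace(0.0, 1.0, 50)
x = c1 * np.exp(r1 * t)
y = c2 * np.exp(r2 * t)

# The trajectory should lie on the power curve y = A x^alpha, alpha = r2/r1.
alpha = r2 / r1
A = c2 / c1**alpha
print(np.max(np.abs(y - A * x**alpha)) < 1e-10)  # True
```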
19. Higher dimensional linear homogeneous systems with constant coefficients
To get the explicit form of the solutions of higher dimensional linear systemswith constant coefficients requires methods of linear algebra which are beyondthe scope of this course. Basically, one has to compute what is called theJordan canonical form of the matrix.
We will discuss the methods one can use here, but we will not providethe rigorous justification.
Consider the system
x′ = Ax.
First, one finds the eigenvalues r1, r2, . . . , rs of the matrix A. We willconsider only real eigenvalues. When there are complex eigenvalues one pro-ceeds in a similar way to get complex solutions and then one takes the realand imaginary parts (taking into account multiplicities) as described abovein the two dimensional case.
So, assume that all eigenvalues of A are real. It turns out that every vector v in R^n can be uniquely expressed as a
linear combination
v = α1v1 + . . . + αsvs
where each vi satisfies
(A − r_i I)^m v_i = 0
for some positive integer m. In linear algebra terms, the vectors v_i are called generalized eigenvectors
of A, and the result just mentioned is the statement that Rn is the directsum of the generalized eigenspaces of A.
It then turns out that it is enough to find solutions in each generalizedeigenspace separately.
So, we assume that r is a single real eigenvalue for A, and that V is thegeneralized eigenspace for r; i.e., V is the set of vectors v such that there issome m > 0 such that (A− rI)mv = 0.
Procedure to find all solutions in V :
First find a maximal set of linearly independent eigenvectors v_1, v_2, v_3, ..., v_ℓ for the eigenvalue r. Once one knows r, this simply involves solving systems of linear equations.
One gets solutions of the form
x(t) = e^{rt} Σ_{k=0}^{m_i} (t^k / k!) v_{i,k},
where
v_{i,k+1} = (A − rI) v_{i,k}
and
v_{i,m_i} = v_i.
We find the v_{i,k} one at a time, solving systems of linear equations. Thus, we find v_{i,m_i−1} as a solution ξ to
(A − rI)ξ = v_i.
We repeat to find v_{i,m_i−2} as a solution ξ to
(A − rI)ξ = v_{i,m_i−1},
and we continue. We do this as long as we can find solutions of the systems of linear equations. The process will stop in q_i steps, where q_i is no larger than the multiplicity of the eigenvalue r as a root of the characteristic polynomial of A.
All solutions obtained in this way using all the eigenvectors associated tor will give a set of linearly independent solutions in the generalized eigenspaceof r.
As we have said above, this procedure is finding the Jordan canonicalform for the matrix A on the generalized eigenspace of r. We refer to linearalgebra textbooks (or more advanced differential equations textbooks) formore details on this.
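The chain construction can be illustrated on a small example; the following SymPy sketch (the 3 × 3 matrix is a hypothetical example, not one from the notes) builds a chain v_{k+1} = (A − rI)v_k ending in an eigenvector, and checks that the resulting x(t) solves x′ = Ax:

```python
import sympy as sp

t = sp.symbols("t")

# A hypothetical 3x3 example with the single eigenvalue r = 2 of multiplicity 3.
A = sp.Matrix([[2, 1, 0], [0, 2, 1], [0, 0, 2]])
r = 2
N = A - r * sp.eye(3)

# A chain v_{k+1} = (A - rI) v_k starting from a generalized eigenvector v0
# with (A - rI)^3 v0 = 0; the last vector v2 is a genuine eigenvector.
v0 = sp.Matrix([0, 0, 1])
v1 = N * v0
v2 = N * v1          # N * v2 = 0, so v2 is an eigenvector

# x(t) = e^{rt} (v0 + t v1 + t^2/2 v2) should solve x' = Ax.
x = sp.exp(r * t) * (v0 + t * v1 + t**2 / 2 * v2)
print(sp.simplify(x.diff(t) - A * x) == sp.zeros(3, 1))  # True
```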
20. Variation of Parameters for Systems
Now, we consider non-homogeneous linear systems. Thus, we consider the system
x′ = Ax + g(t) (1)
where g(t) is a continuous vector valued function, and A is an n × nmatrix.
Let Φ(t) be a fundamental matrix for the associated homogeneous system
x′ = Ax (2)
We try to find a particular solution of the form
x(t) = Φ(t)v(t)
where v(t) is a non-constant vector valued function of t. We have
x′ = Φ′v + Φv′
= AΦv + Φv′
and, since x = Φv is a solution of (1),
x′ = Ax(t) + g(t)
= AΦv + g(t)
which leads to
AΦv + Φv′ = AΦv + g(t),
or
Φv′ = g(t).
Since Φ = Φ(t) is a fundamental matrix, it is invertible. So, we get
v′ = Φ(t)^{−1} g(t).
That is, we get a system of linear equations to solve for v′. We then get v(t) by integrating. This is similar to what happened for the case of variation of parameters in second order scalar differential equations.
Examples.
Example 1 (p. 411, 1): Find the general solution to the system
x′ = ( 2  −1 ; 3  −2 ) x + ( e^t, t )^T.
Write x = (x, y)^T. We have
x′ = 2x − y + e^t
y′ = 3x − 2y + t.
Step 1. Two independent homogeneous solutions. The characteristic polynomial is z(r) = r² − 1, with roots r = ±1. For r, the eigenvector satisfies (r − 2)ξ1 + ξ2 = 0, or
ξ2 = (2 − r)ξ1.
Let us call an eigenvalue pair a pair (r, v) where r is an eigenvalue and v is an associated eigenvector. So, the eigenvalue pairs are
( 1, (1, 1)^T ),   ( −1, (1, 3)^T ).
So, two independent solutions are
x1(t) = e^t (1, 1)^T,   x2(t) = e^{−t} (1, 3)^T.
The general homogeneous solution is
x_h(t) = c1 e^t (1, 1)^T + c2 e^{−t} (1, 3)^T.
A fundamental matrix is
Φ(t) = ( e^t   e^{−t} ; e^t   3e^{−t} ).
To get a particular solution x_p(t) by the variation of parameters method, we proceed as follows:
x_p(t) = Φ(t) v(t),
where
Φ(t) v′ = ( e^t, t )^T.
We get v as follows:
e^t v1′ + e^{−t} v2′ = e^t
e^t v1′ + 3 e^{−t} v2′ = t
or, in matrix form,
( e^t   e^{−t} ; e^t   3e^{−t} ) ( v1′, v2′ )^T = ( e^t, t )^T.
Hence, using the formula for the inverse of a 2 × 2 matrix (formula (1) in section 18), we get
( v1′, v2′ )^T = ( e^t   e^{−t} ; e^t   3e^{−t} )^{−1} ( e^t, t )^T
= (1/2) ( 3e^{−t}   −e^{−t} ; −e^t   e^t ) ( e^t, t )^T.
This gives
v1′ = (1/2)( 3 − t e^{−t} )
v2′ = (1/2)( −e^{2t} + t e^t ).
Integrating, we get
v1 = (1/2)( 3t + e^{−t}(1 + t) )
v2 = (1/2)( −(1/2) e^{2t} + t e^t − e^t ).
Finally, we get
x_p(t) = ( e^t v1 + e^{−t} v2,   e^t v1 + 3 e^{−t} v2 )^T.
Example 2:
x′ = 2x + 2y + e^{2t}
y′ = 2x − y − e^t.
Characteristic polynomial:
r² − r − 6 = (r − 3)(r + 2).
Eigenpairs: (r − 2)ξ1 − 2ξ2 = 0, giving
( r, ( 1, (r − 2)/2 )^T ).
General homogeneous solution:
x_h(t) = c1 e^{3t} ( 1, 1/2 )^T + c2 e^{−2t} ( 1, −2 )^T.
Fundamental matrix:
Φ(t) = ( e^{3t}   e^{−2t} ; (1/2) e^{3t}   −2 e^{−2t} ).
Particular solution:
x_p(t) = Φ(t) ( v1, v2 )^T,
where
Φ(t) ( v1′, v2′ )^T = ( e^{2t}, −e^t )^T.
Finding v1, v2: We have
e^{3t} v1′ + e^{−2t} v2′ = e^{2t}
(1/2) e^{3t} v1′ − 2 e^{−2t} v2′ = −e^t.
Multiplying the second equation by 2 and subtracting the result from the first:
e^{3t} v1′ − 4 e^{−2t} v2′ = −2 e^t
5 e^{−2t} v2′ = e^{2t} + 2 e^t
v2′ = (1/5)( e^{4t} + 2 e^{3t} )
v2 = (1/20) e^{4t} + (2/15) e^{3t}.
Substituting back into the first equation:
e^{3t} v1′ + e^{−2t} (1/5)( e^{4t} + 2 e^{3t} ) = e^{2t}
e^{3t} v1′ + (1/5) e^{2t} + (2/5) e^t = e^{2t}
e^{3t} v1′ = (4/5) e^{2t} − (2/5) e^t
v1′ = (4/5) e^{−t} − (2/5) e^{−2t}
v1 = −(4/5) e^{−t} + (1/5) e^{−2t}.
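As with Example 1, the result can be verified symbolically; a SymPy sketch (an assumed tool choice):

```python
import sympy as sp

t = sp.symbols("t")
A = sp.Matrix([[2, 2], [2, -1]])
g = sp.Matrix([sp.exp(2 * t), -sp.exp(t)])

Phi = sp.Matrix([[sp.exp(3 * t), sp.exp(-2 * t)],
                 [sp.exp(3 * t) / 2, -2 * sp.exp(-2 * t)]])
v1 = -sp.Rational(4, 5) * sp.exp(-t) + sp.Rational(1, 5) * sp.exp(-2 * t)
v2 = sp.Rational(1, 20) * sp.exp(4 * t) + sp.Rational(2, 15) * sp.exp(3 * t)

# The particular solution x_p = Phi * v should satisfy x' = Ax + g.
xp = Phi * sp.Matrix([v1, v2])
print(sp.simplify(xp.diff(t) - A * xp - g) == sp.zeros(2, 1))  # True
```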
21. Partial Differential Equations – The heat equation
We consider the transfer of heat in a thin wire of length L. The heat flow at time t and position x is related to the change in temperature of position x at time t.
We assume the wire has coordinates 0 ≤ x ≤ L on the real line, and welet u(x, t) denote the temperature at position x and time t.
The laws of heat conduction in this physical system can be used to derivethe following partial differential equation for u(x, t).
α² u_xx = u_t  for all t.   (1)
The constant α2 depends on the conductive properties of the wire. Thus,for instance, it is different for copper or aluminum wires.
In general, we are interested in finding all solutions of (1). For mathematical convenience, we will impose other conditions to solve this problem. Thus, we assume that there is an initial temperature distribution u(x, 0) = f(x) in the wire and that the boundary points are kept at constant temperatures. This means that u(0, t) = T1 and u(L, t) = T2, where T1 and T2 are constants. Physically, the latter condition means that the ends of the wire are held at fixed temperatures, for instance by keeping them in contact with bodies of constant temperature.
General remarks.
1. (Principle of Superposition) If u(x, t) and v(x, t) are solutions to (1),and c1, c2 are constants, then z(x, t) = c1u(x, t) + c2v(x, t) is also asolution to (1).
Proof.
We have
α² z_xx = α² ( c1 u_xx + c2 v_xx )
= c1 α² u_xx + c2 α² v_xx
= c1 u_t + c2 v_t
= ( c1 u + c2 v )_t
= z_t.
QED.
2. Using Remark 1, we can reduce to the case in which the boundary constants are both 0. This is called homogeneous boundary conditions. Indeed, note that any linear time-independent function w(x, t) = ax + b is a solution to (1), so we simply choose w(x, t) so that w(0, t) = T1 and w(L, t) = T2. That is, we take
w(x, t) = T1 + ( (T2 − T1)/L ) x.
Then, ū = u − w is a solution to (1) such that ū(0, t) = ū(L, t) = 0, and we get u(x, t) = ū(x, t) + w(x, t).
We want to find all solutions u(x, t) to the problem (1) satisfying
u(x, 0) = f(x), (2)
and
u(0, t) = u(L, t) = 0  for all t.   (3)
Clearly the function u(x, t) = 0 is a solution, so we will only consider non-trivial solutions: u(x, t) ≠ 0.
As a preliminary guess, let us try to find solutions u(x, t) which decomposeas a product of a function of x and one of t.
That is,
u(x, t) = X(x)T (t).
This method is called the method of separation of variables. We get
α² X″ T = X T′,
or
( α² X″ )/X = T′/T.
Since X only depends on x and T only depends on t, we must have that there is a constant β such that
( α² X″ )/X = β  and  T′/T = β.
This gives the two ordinary differential equations
X″ − (β/α²) X = 0,   (4)
and
T′ = βT.   (5)
The last equation is easily solved:
T(t) = T(0) e^{βt}.
Claim 1: The homogeneous boundary conditions imply that β < 0.
Claim 2: Writing β = −σ with σ > 0, the homogeneous boundary conditions imply that σ must have the form
σ = σ_n = α² n² π² / L²
for some integer n.
Proof of Claim 1: If β > 0, then equation (4) has the form
X″ − λX = 0,
where λ = β/α² > 0. We may assume that T(0) ≠ 0, since we are assuming u(x, t) is not the trivial solution. The general solution is
X(x) = c1 e^{√λ x} + c2 e^{−√λ x}.
Using u(0, t) = 0 we get X(0) = 0, or
c1 + c2 = 0.
Using u(L, t) = 0 we get X(L) = 0, or
c1 e^{√λ L} + c2 e^{−√λ L} = 0.
This gives
c1 ( e^{√λ L} − e^{−√λ L} ) = 0.
If c1 ≠ 0, this gives
e^{√λ L} = e^{−√λ L}.
Since L ≠ 0, the first number above is greater than 1, but the second number is less than 1. Thus, c1 = c2 = 0, so X ≡ 0. This contradiction rules out β > 0. If β = 0, then X″ = 0, so X(x) = c1 + c2 x, and the boundary conditions again force c1 = c2 = 0. Hence β < 0, which proves the claim.
Now that we know β < 0, we write it as −σ, where σ > 0.
Proof of Claim 2: We have the two equations
X″ + (σ/α²) X = 0,   T(t) = T(0) e^{−σt}.
The general solution to the first equation is
X(x) = c1 cos( (√σ/α) x ) + c2 sin( (√σ/α) x ).
Using X(0) = 0 we get c1 = 0. Using X(L) = 0, and c2 ≠ 0, we get
sin( (√σ/α) L ) = 0,
or (√σ/α) L = nπ for some integer n. QED.
The considerations we have done so far give us that we can find solutions to (1) with homogeneous boundary conditions of the form
u_n(x, t) = e^{−σ_n t} sin( (nπ/L) x ),
where
σ_n = ( αnπ/L )².
These are called fundamental solutions to the heat equation with homogeneous boundary conditions.
By superposition, we can also get solutions of the form
u(x, t) = Σ_{n=1}^{m} c_n u_n(x, t)
for a finite integer m. It turns out that if the series
u(x, t) = Σ_{n=1}^{∞} c_n u_n(x, t)
actually converges, then it also represents a solution. Next, considering the effect of the initial condition u(x, 0) = f(x) on this kind of solution, we get
f(x) = Σ_{n=1}^{∞} c_n u_n(x, 0) = Σ_{n=1}^{∞} c_n sin( (nπ/L) x ),   (6)
and we are led to consider what functions f(x) satisfy this last condition.Note that such a function satisfies the following two properties.
1. f(−x) = −f(x) for all x. That is, f(x) is an odd function.
2. f(x + 2L) = f(x). That is, f is periodic of period 2L.
It is an amazing fact that every C¹ odd function f(x) of period 2L defined on 0 ≤ x ≤ L can be written in the form (6). The series converges to f(x) at all points except possibly at the boundary points 0, L.
We extend f(x) to [−L, L] by requiring that f(−x) = −f(x) for all x, and we get that the coefficients c_n can be computed from the formula
c_n = (1/L) ∫_{−L}^{L} f(x) sin( (nπ/L) x ) dx.   (7)
These considerations enable one to solve the heat equation for variousinitial values u(x, 0) = f(x) on 0 ≤ x ≤ L.
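Formulas (6) and (7) can be sketched numerically in Python with NumPy; here f(x) = x(L − x) is a hypothetical sample initial temperature, the coefficients are computed with the trapezoid rule, and the partial sum of the sine series is checked against f at t = 0:

```python
import numpy as np

# Sample initial temperature f(x) = x(L - x) on [0, L] (hypothetical choice).
L = 1.0
x = np.linspace(0.0, L, 2001)
fx = x * (L - x)

def c(n):
    # By oddness of the extension, (1/L) * integral over [-L, L] equals
    # (2/L) * integral over [0, L]; computed here with the trapezoid rule.
    vals = fx * np.sin(n * np.pi * x / L)
    return (2.0 / L) * np.sum(0.5 * (vals[1:] + vals[:-1])) * (x[1] - x[0])

# A partial sum of (6) at t = 0 should reproduce f(x) on the interior.
approx = sum(c(n) * np.sin(n * np.pi * x / L) for n in range(1, 40))
print(np.max(np.abs(approx - fx)) < 1e-3)  # True
```

The full solution then follows by attaching the decay factors e^{−σ_n t} to each term, as in the fundamental solutions above.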
Exercises
1. Problems for Section 1: p. 10, 1,3,5,7,9,15,17,19, 29,31,33
2. Problems for sections 2.1, 2.2: p. 23 1, 3, 5, 13-19, p. 30 1,3,7
3. Problems on Bernoulli’s equation: p. 33: 38, 39
4. Problems for sections 2.3, 2.4: p. 38-39, 1-6, 9,11, p. 45, 1,3,5, 9,11
5. Problems for section 2.5: pp 54-57, 1,3,4,6,7,19,23,25,26
6. Problems for Sections 2.8, 2.9: p. 88-89, 1-10, 25-29; p. 93, 1,3,5,7,9
Exam-1
7. First Problems for Sections 3.1-3.5: p. 128: odd 1-15, p. 138: 1-6, p.144: odd 1-7, p. 150: odd 1-15, p. 159: odd 1-13
8. Some extra problems from sections 3.1 and 3.5: p. 129: 28-31, p. 130:34-37. p. 161: 23-25
9. Problems on undetermined coefficients: p. 171, 1-8, 13-16
10. Problems on variation of parameters: p. 177 1-3, 5-9
Exam-2
11. Problems on Springs: p. 190, 1-7, 9-11
12. Problems on Forced Oscillations: p. 199, 1-12
13. Problems for section 6.1: p.294 1,3,5, 7,9,11-14, 15,17,19, 21, 23
14. Problems for section 6.2: p. 303 1-10, 11,13, 15, 21,22,23
15. Problems for section 6.3: p. 311, 1-17
16. Problems for section 6.4: p. 318, 1-11
17. Problems in section 7.1-7.9, p. 340 1,3,8-12, p. 364 15-20, p. 378 1-8,p. 387 1-6, p. 396 1-4, p.404 1-8, p.411, 1-5