
Dynamical Systems

Lecture Notes

Julien Arino

Department of Mathematics

University of Manitoba

February 18, 2011


Contents

1 A brief introduction to dynamical systems
  1.1 A first-order linear difference equation
  1.2 A first-order linear ordinary differential equation
  1.3 Dynamical systems
  1.4 Exercises and problems

2 Discrete time systems
  2.1 Types of equations/systems

3 General theory of ODEs
  3.1 ODEs, IVPs, solutions
    3.1.1 Ordinary differential equation, initial value problem
    3.1.2 Solutions to an ODE
    3.1.3 Geometric interpretation
  3.2 Existence and uniqueness theorems
    3.2.1 Successive approximations
    3.2.2 Local existence and uniqueness – Proof by fixed point
    3.2.3 Local existence and uniqueness – Proof by successive approximations
    3.2.4 Local existence (non-Lipschitz case)
  3.3 Continuation of solutions
    3.3.1 Maximal interval of existence
    3.3.2 Maximal and global solutions
  3.4 Continuous dependence on initial data, on parameters
  3.5 Generality of first order systems
  3.6 Generality of autonomous systems
  3.7 Suggested reading, Further problems
  3.8 Exercises and problems

4 Linear systems of equations
  4.1 Generality of linear systems of first-order equations
    4.1.1 Difference equations
    4.1.2 Differential equations
  4.2 Existence and uniqueness of solutions for linear ordinary differential equations
  4.3 Linear systems of low order
    4.3.1 First-order linear difference equation
    4.3.2 Second-order linear difference equation with constant coefficients
  4.4 Linear systems of difference equations
    4.4.1 Higher-order linear equations
    4.4.2 Nonhomogeneous equations
    4.4.3 Qualitative analysis

  4.5 Linear systems of differential equations
    4.5.1 The vector space of solutions
    4.5.2 Fundamental matrix solution
    4.5.3 Resolvent matrix
    4.5.4 Wronskian
    4.5.5 Autonomous linear systems
  4.6 Nonhomogeneous systems of ODEs
    4.6.1 The space of solutions
    4.6.2 Construction of solutions
    4.6.3 A variation of constants formula for a nonlinear system with a linear component
  4.7 Linear systems of ODEs with periodic coefficients
    4.7.1 Linear systems: Floquet theory
    4.7.2 Nonhomogeneous systems: the Fredholm alternative
  4.8 Further developments, bibliographical notes
  4.9 Exercises and problems

5 Stability
  5.1 Fixed points/equilibrium solutions
    5.1.1 Difference equations
    5.1.2 Ordinary differential equations
  5.2 Local stability of fixed points
    5.2.1 Discrete time systems
  5.3 Global stability of fixed points
    5.3.1 Discrete time systems
    5.3.2 Local stability in first-order equations
    5.3.3 Global stability in first-order equations
  5.4 Stability at fixed points
  5.5 Affine systems with small coefficients
  5.6 Liapunov functions approach

6 Linearization
  6.1 Systems of nonlinear difference equations
  6.2 Some linear stability theory
  6.3 The stable manifold theorem
  6.4 The Hartman-Grobman theorem
  6.5 Example of application
    6.5.1 A chemostat model
    6.5.2 A second example

7 Bifurcation theory
    7.0.3 Bifurcations
    7.0.4 Bifurcation diagrams
  7.1 General context
  7.2 Some bifurcations in discrete-time equations
  7.3 Some bifurcations in continuous equations
  7.4 Pitchfork
  7.5 Period doubling
  7.6 Hopf

8 Exponential dichotomy
  8.1 Exponential dichotomy
  8.2 Existence of exponential dichotomy
  8.3 First approximate theory
  8.4 Stability of exponential dichotomy
  8.5 Generality of exponential dichotomy

9 Introduction to control theory
  9.1 Problems of control theory
  9.2 Controllability
  9.3 Identifiability
  9.4 Observability

References

A Definitions and results
  A.1 Vector spaces, norms
    A.1.1 Norm
    A.1.2 Matrix norms
    A.1.3 Supremum (or operator) norm
  A.2 An inequality involving norms and integrals
  A.3 Types of convergences
  A.4 Asymptotic notation
  A.5 Types of continuities
  A.6 Lipschitz function
  A.7 Gronwall's lemma
  A.8 Fixed point theorems
  A.9 Jordan normal form
  A.10 Matrix exponentials
  A.11 Matrix logarithms
  A.12 Spectral theorems
  A.13 Matrix analysis
    A.13.1 Nonnegative matrices

B Solutions to exercises
  Homework sheet 1 – Solutions
  B.1 Exercises from Chapter 3
  B.2 Exercises from Chapter 4
  Final examination 1
  Final examination 1 – Solutions

Index

Introduction

These lecture notes are intended for several different courses. They deal with ordinary differential equations as well as difference equations, emphasizing the similarities and differences between the two types of objects.

If you are taking this course, you most likely know some results about ordinary differential equations. You know, for example, that in order for solutions to a system to exist and be unique, the system must have a C1 vector field. What you do not necessarily know is why that is. This is the object of Chapter 3, where we consider the general theory of existence and uniqueness of solutions. We also consider the continuation of solutions as well as continuous dependence on initial data and on parameters.

In Chapter 4, we explore linear systems. We first consider homogeneous linear systems, then linear systems in full generality. Homogeneous linear systems are linked to the theory for nonlinear systems by means of linearization, which we study in Chapter 6, in which we show that the behavior of nonlinear systems can be approximated, in the vicinity of a hyperbolic equilibrium point, by a homogeneous linear system. As for autonomous systems, nonautonomous nonlinear systems are linked to a linearized form, this time through exponential dichotomy, which is explained in Chapter 8.

Warning. These lecture notes are a work in progress and are unreviewed; they most likely contain many mistakes and typos. If you find mistakes, let me know; that will improve their quality for future students.


Lecture guide for MATH 4800 – Winter 2011 session

In the following, the chapters, sections and results that were covered in class are outlined, with some clarifications where needed.

• You must of course know how to deal with the simplest cases explained in Chapter 1. We have not yet introduced dynamical systems properly, so Section 1.3 and the following exercise are not yet relevant.

• We have covered the content of Chapter 2.

• In Chapter 3, Section 3.1 explains what solutions to ordinary differential equations are. It is essential that you understand the content of this section.

• Section 3.2 concerns existence and uniqueness theorems for ODEs. It is important to understand the use of the integral form of the solution to construct successive approximations to the solution, as explained in Section 3.2.1. You should also try to understand the proof of Picard’s theorem (Theorem 3.2.2) using the contraction mapping principle in Section 3.2.2. Do not worry about the proof by “explicit” successive approximations (Section 3.2.3), nor should you worry about the non-Lipschitz case (Section 3.2.4), except maybe for the statement of Theorem 3.2.5: if the vector field is not Lipschitz, then the results you get are much less useful than in the Lipschitz case.

• We have discussed the continuation of solutions (Section 3.3). Do not worry about the proofs in this section; just know what maximal solutions are, what we call the maximal interval of existence, etc.

• We briefly discussed Theorem 3.4.1 on the continuous dependence of solutions on initial conditions. Know the theorem, but do not worry about the proof. You may also wish to read the statement of Theorem 3.4.3, about the continuous dependence of solutions on parameters.

• Sections 3.5 and 3.6 discuss the generality of first-order systems and autonomous systems, respectively. They are to be understood.

• The generality of first-order systems is discussed again in Section 4.1 at the beginning of Chapter 4, both for discrete-time and continuous-time systems.

• Understanding why solutions of linear systems of ODEs exist and are unique is important. This is done in Section 4.2.

• Sections 4.3 and 4.4 need to be reworked. (The content is correct, I do not like the presentation.)

• A lot of the content of Section 4.5 is important.

• Section 4.6 is also important.

• We have not covered Section 4.7.


Chapter 1

A brief introduction to dynamical systems

In this chapter, we introduce the topic covered in the remainder of these lecture notes: dynamical systems. We focus on two types of systems: discrete-time systems and continuous-time systems described using ordinary differential equations. Both types are illustrated here using a very simple problem.

1.1 A first-order linear difference equation

Consider the sequence {x(t)}t∈N of real numbers, with t indicating “time”. A difference equation can be defined using the functional relationship

x(t+ 1) = f(x(t)),

i.e., by defining the next term in the sequence as a function of the previous one, with f : R → R. Of course, we need an initial term x(0) to initiate the sequence. This initial term is usually called the initial value.

Proposition 1.1.1. Consider the first-order linear homogeneous difference equation with constant coefficient a ∈ R,

x(t+ 1) = ax(t). (1.1)

If an initial value x0 ∈ R is known, then the solution to (1.1) is unique and given by

x(t) = a^t x(0). (1.2)

Proof. Given x(0), using (1.1), we have x(1) = ax(0), so in turn,

x(2) = ax(1) = a(ax(0)) = a^2 x(0).

It follows that

x(3) = ax(2) = a(a^2 x(0)) = a^3 x(0).

Continuing this process, we obtain the general expression

x(t) = a^t x(0).

We will see later that some difference equations can be solved using this type of idea, namely, those difference equations that are linear or affine.

In many cases, however, it will not be possible to obtain such an explicit solution, i.e., one where the general term x(t) can be expressed as a function of t and other parameters rather than through a functional relationship with the previous term(s). However, using qualitative analysis, it is still possible to gain quite a wealth of information about the behaviour of a difference equation.


In order to introduce the notions of qualitative analysis, inspect (1.2): clearly, this defines a geometric sequence with common ratio a. Therefore, the asymptotic behavior of the solution x(t), i.e., the behaviour when t → ∞, depends on the value of a:

• if |a| < 1, then limt→∞ x(t) = 0, i.e., x(t) converges to 0,

• if a = 1, then for all t ≥ 0, x(t) = x(0), i.e., x(t) remains constant,

• if a = −1, then for all t ≥ 0, x(t) = (−1)^t x(0), i.e., x(t) alternates,

• if |a| > 1, then x(t) diverges (it either approaches infinity if a > 1 or diverges with alternating signs if a < −1).

Note that we did not need to know that the general solution to (1.1) is given by x(t) = a^t x(0) to come to this conclusion.
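The behaviour of (1.1) is easy to verify numerically. The following sketch (not part of the original notes; the values of a and x(0) are arbitrary choices) iterates the difference equation directly and checks the iterates against the closed form a^t x(0):

```python
# Illustration of Proposition 1.1.1: iterate x(t+1) = a x(t) and
# compare with the closed-form solution x(t) = a^t x(0).

def iterate(a, x0, steps):
    """Return [x(0), x(1), ..., x(steps)] by direct iteration of (1.1)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1])
    return xs

a, x0 = 0.5, 8.0
xs = iterate(a, x0, 10)

# The iterates agree with the closed-form solution a^t x(0) ...
for t, x in enumerate(xs):
    assert abs(x - a**t * x0) < 1e-12

# ... and, since |a| < 1, the orbit is shrinking towards 0.
assert abs(xs[-1]) < abs(x0)
```

Changing a to a value with |a| > 1 (or to −1) reproduces the other asymptotic behaviours listed above.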

1.2 A first-order linear ordinary differential equation

Very similar to (1.1) is the ordinary differential equation

(d/dt) x(t) = ax(t),

which is often denoted, when no ambiguity can arise from doing so, by omitting the dependence of x(t) on t,

x′ = ax.

This must be considered with an initial value, say x(t0) = x0 ∈ R. As in the discrete-time case, the initial value problem (IVP) associated to the differential equation above then consists of the following:

x′ = ax, x(t0) = x0. (1.3)

Here, similarly as in Section 1.1, solving (1.3) is simple. First, we solve the differential equation by noting that we can write

x′/x = a.

The left-hand side integrates to ln |x(t)|, while the right-hand side integrates to at, so we find the solution

|x(t)| = Ce^(at),

where C is a real constant. To solve the IVP (1.3), we then set t = t0, giving

|x(t0)| = Ce^(at0),

which must equal x0. Doing a little algebra, we find that x(t) = x0 e^(a(t−t0)), i.e., (1.3) has an explicit solution.

As in Section 1.1, we can also use qualitative methods to understand the behaviour of x(t). First, note that if x(t0) = 0, then x(t) = 0 for all t ≥ t0. We will see later that, because of existence and uniqueness of the solutions to an ordinary differential equation, this means that x(t) can never become zero if it does not start out equal to zero. Because of that, we can assume for example that x(t0) > 0 (the case x(t0) < 0 is symmetric and is left as an exercise). The quantity x′(t) denotes the infinitesimal rate of change of the quantity x(t). Therefore, if x(t) > 0, then x(t) increases or decreases depending on whether a > 0 or a < 0.

So, again, it is not necessary to know the solution as a function of t to be able to deduce information about its behaviour.
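As a numerical sanity check (a sketch of ours, not part of the notes; the parameter values are arbitrary), one can approximate the IVP (1.3) with the explicit Euler scheme and compare against the exact solution x(t) = x0 e^(a(t−t0)):

```python
import math

# Approximate x' = a x, x(t0) = x0 with explicit Euler and compare
# against the exact solution x(t) = x0 * exp(a (t - t0)).

def euler(a, t0, x0, t_end, n):
    """Explicit Euler with n steps on [t0, t_end]: x <- x + h a x."""
    h = (t_end - t0) / n
    x = x0
    for _ in range(n):
        x += h * a * x
    return x

a, t0, x0, t_end = -1.0, 0.0, 2.0, 1.0
exact = x0 * math.exp(a * (t_end - t0))

# The Euler approximation improves as the number of steps grows.
errs = [abs(euler(a, t0, x0, t_end, n) - exact) for n in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]

# Qualitative behaviour: a < 0 and x0 > 0, so the solution decreases.
assert 0 < exact < x0
```

The qualitative conclusion (decay for a < 0, growth for a > 0) is visible in the computed values without ever using the explicit formula.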


1.3 Dynamical systems

Both (1.1) and (1.3) are examples of dynamical systems. Some authors emphasize similarities between both types of systems by omitting time dependence, writing both equations x′ = ax and denoting

x′ = x(t+ 1)

in the case of discrete-time systems and

x′ = (d/dt)x(t)

for continuous-time systems. In these lecture notes, we will keep the notation different for clarity. We will in fact further distinguish between discrete and continuous dynamical systems by denoting by xt the value x(t) in the former.

What these systems have in common is that they are both dynamical systems (more precisely, semi-dynamical systems, as time is assumed to only be positive). Formally, a dynamical system is defined as follows.

Definition 1.3.1 ([3]). Let X be a metric space. A dynamical system on X is the tuple (X,R, π), where π is a map from the product space X × R into the space X satisfying the following axioms:

i) π(x, 0) = x for every x ∈ X. [identity axiom]

ii) π(π(x, t1), t2) = π(x, t1 + t2) for every x ∈ X and t1, t2 ∈ R. [group axiom]

iii) π is continuous. [continuity axiom]
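These axioms can be checked concretely. The following sketch (ours, not from the notes) takes the flow of x′ = ax on X = R, namely π(x, t) = x e^(at), and verifies the identity and group axioms numerically for arbitrarily chosen values:

```python
import math

# The flow of x' = a x is pi(x, t) = x * exp(a t); we check the
# axioms of Definition 1.3.1 at a few sample points.

a = 0.7

def pi(x, t):
    return x * math.exp(a * t)

x, t1, t2 = 1.5, 0.3, -1.2

# i) identity axiom: pi(x, 0) = x
assert abs(pi(x, 0.0) - x) < 1e-12

# ii) group axiom: pi(pi(x, t1), t2) = pi(x, t1 + t2)
assert abs(pi(pi(x, t1), t2) - pi(x, t1 + t2)) < 1e-9

# iii) the continuity axiom holds because (x, t) -> x exp(a t)
# is a continuous function of both arguments.
```

Note that this flow is defined for all t ∈ R, so it is a genuine dynamical system, not merely a semi-dynamical one.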

1.4 Exercises and problems

Exercise 1.4.1. Show that (1.1.1) defines a dynamical system.


Chapter 2

Discrete time systems

Elementary discrete time systems such as the ones considered in these notes are much easier than ordinary differential equations as far as existence, uniqueness and other similar properties are concerned. So this brief chapter focuses on discussing the classification of the various types of difference equations.

In this chapter, we consider discrete-time systems of the form

xt+1 = f(xt), (2.1)

with initial condition given for t = 0 by x0, with x, x0 ∈ Rn and f : Rn → Rn. We also consider pth order equations of the form

f(xt+p, xt+p−1, . . . , xt+1, xt, t) = 0, (2.2)

where f is a real-valued function of the real variables xt through xt+p and t. We could also consider systems of nth order equations, but for the sake of brevity, we will limit ourselves to equations of the form (2.1) and (2.2).

Implicit in (2.1) and (2.2) is that the time interval is taken to be ∆t = 1. Also, the state of the system at time t is denoted xt. Formally, (2.1) should be written

x(t+ ∆t) = f(x(t)),

but this is cumbersome and will be used only when ambiguity could lead to misunderstandings.

Using (2.1), we see that

x1 = f(x0)

x2 = f(x1) = f(f(x0)) =: f^2(x0)

...

xk = f^k(x0).

The compositions

f^k = f ◦ f ◦ · · · ◦ f (k times)

are called the iterates of f . They define an infinite sequence of points

x0, x1, x2, . . . , xt, . . . ,

that constitute the solution to (2.1). The object of this chapter is to determine the behavior of this solution. For example, in the case of the logistic map (??), do solutions behave like they do for the continuous time logistic equation (??) and tend to the carrying capacity K?
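Computing iterates is straightforward in code. The sketch below (our own illustration; the map f(x) = rx(1 − x) is the standard logistic map, and the parameter r = 2.5 is our arbitrary choice) builds the orbit x0, f(x0), f^2(x0), ... and shows it settling onto a fixed point:

```python
# Iterates of the logistic map f(x) = r x (1 - x), as an example of
# the sequence x0, f(x0), f^2(x0), ... that solves (2.1).

def f(x, r=2.5):
    return r * x * (1.0 - x)

def orbit(x0, k, r=2.5):
    """Return [x0, f(x0), f^2(x0), ..., f^k(x0)]."""
    xs = [x0]
    for _ in range(k):
        xs.append(f(xs[-1], r))
    return xs

xs = orbit(0.1, 50)

# For r = 2.5 the nonzero fixed point is x* = 1 - 1/r = 0.6,
# and the orbit converges to it.
assert abs(xs[-1] - 0.6) < 1e-9
```

For other values of r the logistic map is known to exhibit periodic and chaotic orbits, which is exactly the kind of behaviour the qualitative theory of this chapter addresses.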


2.1 Types of equations/systems

Definition 2.1.1. The order of a difference equation (2.2) is the difference between the largest and the smallest time arguments appearing in it (here, p).

Note that, in biological terms, the order p of the equation is the number of previous generations that directly influence the value of x in a given generation.

Definition 2.1.2. The difference equation is called autonomous if f does not depend explicitly on t and it is callednonautonomous otherwise.

Definition 2.1.3. Let

xt+p + a1 xt+p−1 + a2 xt+p−2 + · · · + ap−1 xt+1 + ap xt = bt.

If the coefficients aj, j = 1, . . . , p, are constant or depend on t but do not depend on the state variables, then the difference equation is said to be linear; otherwise, it is nonlinear.

Definition 2.1.4. If the difference equation is linear and bt = 0 for all t, then it is said to be homogeneous;otherwise, it is said to be nonhomogeneous.

Definition 2.1.5. A solution of the difference equation

f(xt+k, xt+k−1, . . . , xt+1, xt, t) = 0

is a function xt, t = 0, 1, 2, . . ., that, when substituted into the equation, makes it a true statement.

Some characteristics of difference equations

• changes of states are described over discrete intervals; the length of the discrete interval is some fixed length ∆t: states of a system are modeled at the discrete times t = 0, ∆t, 2∆t, . . .

• recurrence relation

• evolutionary character or not

• suitable to describe populations whose generations do not overlap


Chapter 3

General theory of ODEs

We begin with the general theory of ordinary differential equations (ODEs). First, we define ODEs, initial value problems (IVPs) and solutions to ODEs and IVPs in Section 3.1. In Section 3.2, we discuss existence and uniqueness of solutions to IVPs.

3.1 ODEs, IVPs, solutions

3.1.1 Ordinary differential equation, initial value problem

Definition 3.1.1 (ODE). An nth order ordinary differential equation (ODE) is a functional relationship taking the form

F(t, x(t), (d/dt)x(t), (d^2/dt^2)x(t), . . . , (d^n/dt^n)x(t)) = 0,

that involves an independent variable t ∈ I ⊂ R, an unknown function x(t) ∈ D ⊂ Rn of the independent variable, its derivative and derivatives of order up to n. For simplicity, the time dependence of x is often omitted, and we in general write equations as

F(t, x, x′, x′′, . . . , x^(n)) = 0, (3.1)

where x^(n) denotes the nth order derivative of x. An equation such as (3.1) is said to be in general (or implicit) form.

An equation is said to be in normal (or explicit) form when it is written as

x^(n) = f(t, x, x′, x′′, . . . , x^(n−1)).

Note that it is not always possible to write a differential equation in normal form, as it can be impossible to solve F(t, x, . . . , x^(n)) = 0 in terms of x^(n).

Definition 3.1.2 (First-order ODE). In the following, we consider for simplicity the more restrictive case of a first-order ordinary differential equation in normal form

x′ = f(t, x). (3.2)

Note that the theory developed here usually holds for nth order equations; see Section 3.5. The function f is assumed continuous and real valued on a set U ⊂ R × Rn.

Definition 3.1.3 (Initial value problem). An initial value problem (IVP) for equation (3.2) is given by

x′ = f(t, x), x(t0) = x0, (3.3)

where f is continuous and real valued on a set U ⊂ R× Rn, with (t0, x0) ∈ U .


Remark – The assumption that f be continuous can be relaxed; piecewise continuity only is needed. However, this leads in general to much more complicated problems and is beyond the scope of this course. Hence, unless otherwise stated, we assume that f is at least continuous. The function f could also be complex valued, but this too is beyond the scope of this course. ◦

Remark – An IVP for an nth order differential equation takes the form

x^(n) = f(t, x, x′, . . . , x^(n−1)), x(t0) = x0, x′(t0) = x′0, . . . , x^(n−1)(t0) = x0^(n−1),

i.e., initial conditions have to be given for derivatives up to order n − 1. ◦

We have already seen that the order of an ODE is the order of the highest derivative involved in the equation. An equation is then classified according to its linearity. A linear equation is one in which the vector field f takes the form

f(t, x) = a(t)x(t) + b(t).

If b(t) = 0 for all t, the equation is linear homogeneous; otherwise it is linear nonhomogeneous. If the vector field f depends only on x, i.e., f(t, x) = f(x) for all t, then the equation is autonomous; otherwise, it is nonautonomous. Thus, a linear equation is autonomous if a(t) = a and b(t) = b for all t. Nonlinear equations are those that are not linear. They, too, can be autonomous or nonautonomous.

Other types of classifications exist for ODEs, which we shall not deal with here, the previous ones being the only ones we will need.

3.1.2 Solutions to an ODE

Definition 3.1.4 (Solution). A function φ(t) (or φ, for short) is a solution to the ODE (7.3) if it satisfies this equation, that is, if

φ′(t) = f(t, φ(t)),

for all t ∈ I ⊂ R, an open interval such that (t, φ(t)) ∈ U for all t ∈ I.

The notations φ and x are used interchangeably for the solution. However, in this chapter, to emphasize the difference between the equation and its solution, we will try as much as possible to use the notation x for the unknown and φ for the solution.

Definition 3.1.5 (Integral form of the solution). The function

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s)) ds (3.4)

is called the integral form of the solution to the IVP (3.3).

Let R = R((t0, x0), a, b) be the domain defined, for a > 0 and b > 0, by

R = {(t, x) : |t− t0| ≤ a, ‖x− x0‖ ≤ b},

where ‖·‖ is any appropriate norm on Rn. This domain is illustrated in Figure 3.1; it is sometimes called a security system, i.e., the union of a security interval (for the independent variable) and a security domain (for the dependent variables) [21].

Suppose that f is continuous on R, and let M = max_R ‖f(t, x)‖, which exists since f is continuous on the compact set R. In the following, existence of solutions will generally be obtained in relation to the domain R by considering a subset of the time interval |t− t0| ≤ a defined by |t− t0| ≤ α, with

α = a if M = 0, and α = min(a, b/M) if M > 0.
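As a concrete illustration (the vector field f(t, x) = t + x² and the rectangle below are my own choices, not from the notes), M and α can be estimated numerically by sampling f on a grid covering R:

```python
import numpy as np

# Hypothetical example: f(t, x) = t + x^2 on the rectangle
# R = {|t - t0| <= a, |x - x0| <= b} with (t0, x0) = (0, 0), a = b = 1.
a, b = 1.0, 1.0
t = np.linspace(-a, a, 201)
x = np.linspace(-b, b, 201)
T, X = np.meshgrid(t, x)

f = T + X**2                    # vector field sampled on a grid covering R
M = np.max(np.abs(f))           # M = max_R |f(t, x)|  (here M = 2, at a corner)
alpha = a if M == 0 else min(a, b / M)

print(M, alpha)                 # M = 2, alpha = min(1, 1/2) = 0.5
```

Here M = 2 is attained at a corner of R, so α = 1/2: the security interval can be strictly shorter than the interval on which f is defined.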


Figure 3.1: (Left) The domain R for D ⊂ R. (Right) The domain R for D ⊂ R2: “security tube”.

This choice of α = min(a, b/M) is natural. We endow f with specific properties (continuity, Lipschitz, etc.) on the domain R. Thus, in order to be able to use the definition of φ(t) as the solution of x′ = f(t, x), we must be working in R. So we require that |t− t0| ≤ a and ‖x− x0‖ ≤ b. The first condition is satisfied by choosing α ≤ a and working on |t− t0| ≤ α. The requirement that α ≤ b/M comes from the following argument. If we assume that φ(t) is a solution of (3.3) defined on [t0, t0 + α], then we have, for t ∈ [t0, t0 + α],

‖φ(t)− x0‖ = ‖∫_{t0}^{t} f(s, φ(s))ds‖ ≤ ∫_{t0}^{t} ‖f(s, φ(s))‖ ds ≤ M ∫_{t0}^{t} ds = M(t− t0),

where the first inequality is a consequence of the definition of the integral by Riemann sums (Lemma A.2.1 in Appendix A.2). Similarly, we have ‖φ(t)− x0‖ ≤ −M(t− t0) for all t ∈ [t0 − α, t0]. Thus, for |t− t0| ≤ α, ‖φ(t)− x0‖ ≤ M|t− t0|. Suppose now that α ≤ b/M. It follows that ‖φ− x0‖ ≤ M|t− t0| ≤ Mb/M = b. Taking α = min(a, b/M) then ensures that both |t− t0| ≤ a and ‖φ− x0‖ ≤ b hold simultaneously.

The following two theorems deal with the localization of the solutions to an IVP. They make the previous discussion more precise. Note that for the moment, the existence of a solution is only assumed. First, we establish that the security system described above performs properly, in the sense that a solution on a smaller time interval stays within the security domain.

Theorem 3.1.6. If φ(t) is a solution of the IVP (3.3) in an interval |t− t0| < ᾱ ≤ α, then ‖φ(t)− x0‖ < b in |t− t0| < ᾱ, i.e., (t, φ(t)) ∈ R((t0, x0), ᾱ, b) for |t− t0| < ᾱ.


Proof. Assume that φ is a solution with (t, φ(t)) ∉ R((t0, x0), ᾱ, b). Since φ is continuous, it follows that there exists 0 < β < ᾱ such that

(‖φ(t)− x0‖ < b for |t− t0| < β) and (‖φ(t0 + β)− x0‖ = b or ‖φ(t0 − β)− x0‖ = b), (3.5)

i.e., the solution escapes the security domain at t = t0 ± β. Since ᾱ ≤ α ≤ a, β < a. Thus

(t, φ(t)) ∈ R for |t− t0| ≤ β.

Thus ‖f(t, φ(t))‖ ≤ M for |t− t0| ≤ β. Since φ is a solution, we have φ′(t) = f(t, φ(t)) and φ(t0) = x0. Thus

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s))ds for |t− t0| ≤ β.

Hence

‖φ(t)− x0‖ = ‖∫_{t0}^{t} f(s, φ(s))ds‖ ≤ M|t− t0| for |t− t0| ≤ β.

As a consequence,

‖φ(t)− x0‖ ≤ Mβ < Mᾱ ≤ Mα ≤ M(b/M) = b for |t− t0| ≤ β.

In particular, ‖φ(t0 ± β)− x0‖ < b, which contradicts (3.5).

The following theorem is proved using the same sort of technique as in the proof of Theorem 3.1.6. It links the variation of the solution to the nature of the vector field.

Theorem 3.1.7. If φ(t) is a solution of the IVP (3.3) in an interval |t− t0| < ᾱ ≤ α, then ‖φ(t1)− φ(t2)‖ ≤ M|t1 − t2| whenever t1, t2 are in the interval |t− t0| < ᾱ.

Proof. Let us begin by considering t ≥ t0. On t0 ≤ t ≤ t0 + ᾱ,

φ(t1)− φ(t2) = x0 + ∫_{t0}^{t1} f(s, φ(s))ds − x0 − ∫_{t0}^{t2} f(s, φ(s))ds = −∫_{t1}^{t2} f(s, φ(s))ds,

whether t2 > t1 or t1 > t2. Taking norms and using ‖f(t, φ(t))‖ ≤ M on R then gives ‖φ(t1)− φ(t2)‖ ≤ M|t1 − t2|. The case t ≤ t0 is treated similarly.

Now we can see formally what is needed for a solution.

Theorem 3.1.8. Suppose f is continuous on an open set U ⊂ R× Rn. Let (t0, x0) ∈ U , and φ be a function defined on an open set I of R such that t0 ∈ I. Then φ is a solution of the IVP (3.3) if, and only if,

i) ∀t ∈ I, (t, φ(t)) ∈ U .

ii) φ is continuous on I.

iii) ∀t ∈ I, φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s))ds.


Proof. (⇒) Suppose that φ′ = f(t, φ) for all t ∈ I and that φ(t0) = x0. Then for all t ∈ I, (t, φ(t)) ∈ U, giving i). Also, φ is differentiable and thus continuous on I, giving ii). Finally,

φ′(s) = f(s, φ(s)),

so integrating both sides from t0 to t,

φ(t)− φ(t0) = ∫_{t0}^{t} f(s, φ(s))ds

and thus

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s))ds,

hence iii).

(⇐) Assume i), ii) and iii). Then φ is differentiable on I and φ′(t) = f(t, φ(t)) for all t ∈ I. From iii), φ(t0) = x0 + ∫_{t0}^{t0} f(s, φ(s))ds = x0.

Note that Theorem 3.1.8 only requires that φ be continuous, whereas the solution should of course be C1, since its derivative needs to be continuous. However, this is implied by point iii). In fact, more generally, the following result holds about the regularity of solutions.

Theorem 3.1.9 (Regularity). Let f : U → Rn, with U an open set of R× Rn. Suppose that f ∈ Ck. Then all solutions of (7.3) are of class Ck+1.

Proof. The result follows by bootstrapping: if φ is a solution, then φ′ = f(t, φ) is as smooth as f along the solution, so φ′ ∈ Ck and thus φ ∈ Ck+1.

3.1.3 Geometric interpretation

The function f is the vector field of the equation. At every point of (t, x) space, the tangent to a solution curve φ is given by the value of the vector field at that point. A particular consequence of this fact is the following theorem.

Theorem 3.1.10. Let x′ = f(x) be a scalar autonomous differential equation. Then the solutions of this equationare monotone.

Proof. The direction field of an autonomous scalar differential equation consists of vectors that are parallel for all t (since f(t, x) = f(x) for all t). Suppose that a solution φ of x′ = f(x) is not monotone. Then, given an initial point (t0, x0), one of the following two situations occurs, as illustrated in Figure 3.2.

i) f(x0) 6= 0 and there exists t1 such that φ(t1) = x0.

ii) f(x0) = 0 and there exists t1 such that φ(t1) 6= x0.

Suppose we are in case i), and assume f(x0) > 0 (the case f(x0) < 0 is similar). Thus, the solution curve φ is increasing at (t0, x0), i.e., φ′(t0) > 0. As φ is continuous, i) implies that there exists t2 ∈ (t0, t1) such that φ(t2) is a maximum, with φ increasing for t ∈ [t0, t2) and φ decreasing for t ∈ (t2, t1]. It follows that φ′(t1) < 0. But φ(t1) = x0, so φ′(t1) = f(x0) = φ′(t0) > 0, a contradiction.

Now assume that we are in case ii). Since φ(t1) ≠ x0, there exists t2 ∈ (t0, t1) with φ(t2) = x0 but φ′(t2) ≠ 0. This is a contradiction, since φ′(t2) = f(φ(t2)) = f(x0) = 0.

Remark – If we have uniqueness of solutions, it follows from this theorem that if φ1 and φ2 are two solutions of the scalar autonomous differential equation x′ = f(x), then φ1(t0) < φ2(t0) implies that φ1(t) < φ2(t) for all t. ◦

Remark – Be careful: Theorem 3.1.10 is only true for scalar equations. In higher dimensions, one can use the theory of monotone dynamical systems [22]. ◦


Figure 3.2: Situations that would lead to a scalar autonomous differential equation having nonmonotone solutions.

3.2 Existence and uniqueness theorems

Several approaches can be used to show existence and/or uniqueness of solutions. In Sections 3.2.2 and 3.2.3, we take a direct path: using either a fixed point method (Section 3.2.2) or an iterative approach (Section 3.2.3), we obtain existence and uniqueness of solutions under the assumption that the vector field is Lipschitz. In Section 3.2.4, the Lipschitz assumption is dropped and therefore a different approach must be used, namely that of approximate solutions, with which only existence can be established.

3.2.1 Successive approximations

Picard’s successive approximation method uses the integral form (3.4) of the solution to the IVP (3.3) to construct a sequence of approximations that converges to the solution. The approximating sequence is constructed as follows.

Step 1. Start with an initial estimate of the solution, say, the constant function φ0(t) = φ0 = x0, for |t− t0| ≤ h. Evidently, this function satisfies the initial condition.

Step 2. Use φ0 in (3.4) to define the second element in the sequence:

φ1(t) = x0 + ∫_{t0}^{t} f(s, φ0(s))ds.

Step 3. Use φ1 in (3.4) to define the third element in the sequence:

φ2(t) = x0 + ∫_{t0}^{t} f(s, φ1(s))ds.

. . .

Step n. Use φn−1 in (3.4) to define the nth element in the sequence:

φn(t) = x0 + ∫_{t0}^{t} f(s, φn−1(s))ds.

At this stage, there are two major ways to tackle the problem, both based on the same idea: if we can prove that the sequence {φn} converges, and that the limit happens to satisfy the differential equation, then we have the solution to the IVP (3.3). The first method (Section 3.2.2) uses a fixed point approach. The second method (Section 3.2.3) studies the limit explicitly.


3.2.2 Local existence and uniqueness – Proof by fixed point

Here are two slightly different formulations of the same theorem, which establishes that if the vector field is continuous and Lipschitz, then solutions exist and are unique. We prove the result in the second formulation. For the definition of a Lipschitz function, see Section A.6 in the Appendix.

Theorem 3.2.1 (Picard local existence and uniqueness). Assume f : U ⊂ R× Rn → D ⊂ Rn is continuous, and that f(t, x) satisfies a Lipschitz condition in U with respect to x. Then, given any point (t0, x0) ∈ U , there exists a unique solution of (3.3) on some interval containing t0 in its interior.

Theorem 3.2.2 (Picard local existence and uniqueness). Consider the IVP (3.3), and assume f is (piecewise) continuous in t and satisfies the Lipschitz condition

‖f(t, x1)− f(t, x2)‖ ≤ L‖x1 − x2‖,

for all x1, x2 ∈ D = {x : ‖x− x0‖ ≤ b} and all t such that |t− t0| ≤ a. Then there exists 0 < δ ≤ α = min(a, b/M) such that (3.3) has a unique solution in |t− t0| ≤ δ.

To set up the proof, we proceed as follows. Define the operator F by

F : x ↦ x0 + ∫_{t0}^{t} f(s, x(s))ds.

Note that the function (Fφ)(t) is a continuous function of t. Picard’s successive approximations then take the form φ1 = Fφ0, φ2 = Fφ1 = F^2 φ0, where F^2 represents F ◦ F. Iterating, the general term is given, for k = 0, 1, . . ., by

φk = F^k φ0.

Therefore, finding the limit limk→∞ φk is equivalent to finding the function φ, solution of the fixed point problem

x = Fx,

with x a continuously differentiable function. Thus, a solution of (3.3) is a fixed point of F , and we aim to use the contraction mapping principle to verify the existence (and uniqueness) of such a fixed point. We follow the proof of [16, p. 56-58].

Proof. We show the result on the interval t− t0 ≤ δ; the proof for the interval t0 − t ≤ δ is similar. Let X = C([t0, t0 + δ]) be the space of continuous functions defined on the interval [t0, t0 + δ], which we endow with the sup norm, i.e., for x ∈ X,

‖x‖c = max_{t∈[t0,t0+δ]} ‖x(t)‖.

Recall that this norm is the norm of uniform convergence. Let then

S = {x ∈ X : ‖x− x0‖c ≤ b}.

Of course, S ⊂ X. Furthermore, S is closed, and X with the sup norm is a complete metric space. Note that we have transformed the problem into a problem involving the space of continuous functions; hence we are now in an infinite dimensional setting. The proof proceeds in 3 steps.

Step 1. We begin by showing that F : S → S. From (3.4),

(Fφ)(t)− x0 = ∫_{t0}^{t} f(s, φ(s))ds = ∫_{t0}^{t} [f(s, φ(s))− f(s, x0) + f(s, x0)]ds.

Page 22: Dynamical Systems Lecture Notes Julien Arino … › ~jarino › courses › math...Dynamical Systems { Lecture Notes { J. Arino 1. A brief introduction to dynamical systems In order

16Dynamical Systems – Lecture Notes – J. Arino

3. General theory of ODEs

Therefore, by the triangle inequality,

‖(Fφ)(t)− x0‖ ≤ ∫_{t0}^{t} [‖f(s, φ(s))− f(s, x0)‖+ ‖f(s, x0)‖]ds.

As f is (piecewise) continuous, it is bounded on [t0, t1] and there exists M = max_{t∈[t0,t1]} ‖f(t, x0)‖. Thus

‖(Fφ)(t)− x0‖ ≤ ∫_{t0}^{t} [‖f(s, φ(s))− f(s, x0)‖+ M]ds ≤ ∫_{t0}^{t} [L‖φ(s)− x0‖+ M]ds,

since f is Lipschitz.

since f is Lipschitz. Since φ ∈ S for all ‖φ− x0‖ ≤ b, we have that for all φ ∈ S,

‖Fφ− x0‖ ≤∫ t

t0

Lb+Mds

≤ (t− t0)(Lb+M).

As t ∈ [t0, t0 + δ], (t− t0) ≤ δ, and thus

‖Fφ− x0‖c = max_{[t0,t0+δ]} ‖(Fφ)(t)− x0‖ ≤ (Lb+M)δ.

Choose then δ such that δ ≤ b/(Lb+M), i.e., t sufficiently close to t0. Then we have

‖Fφ− x0‖c ≤ b.

This implies that for φ ∈ S, Fφ ∈ S, i.e., F : S → S.

Step 2. We now show that F is a contraction. Let φ1, φ2 ∈ S. Then

‖(Fφ1)(t)− (Fφ2)(t)‖ = ‖∫_{t0}^{t} [f(s, φ1(s))− f(s, φ2(s))]ds‖
≤ ∫_{t0}^{t} ‖f(s, φ1(s))− f(s, φ2(s))‖ds
≤ ∫_{t0}^{t} L‖φ1(s)− φ2(s)‖ds
≤ L‖φ1 − φ2‖c ∫_{t0}^{t} ds,

and thus

‖Fφ1 − Fφ2‖c ≤ Lδ‖φ1 − φ2‖c ≤ ρ‖φ1 − φ2‖c for δ ≤ ρ/L.

Thus, choosing ρ < 1 and δ ≤ ρ/L, F is a contraction. Since, by Step 1, F : S → S, the contraction mapping principle (Theorem A.11) implies that F has a unique fixed point in S, and (3.3) has a unique solution in S.

Step 3. It remains to be shown that any solution in X is in fact in S (since it is on X that we want to show the result). Considering a solution starting at x0 at time t0, the solution leaves S if there exists a t > t0 such that ‖φ(t)− x0‖ = b, i.e., the solution crosses the border of D. Let τ > t0 be the first such t. For all t0 ≤ t ≤ τ ,

‖φ(t)− x0‖ ≤ ∫_{t0}^{t} [‖f(s, φ(s))− f(s, x0)‖+ ‖f(s, x0)‖]ds
≤ ∫_{t0}^{t} [L‖φ(s)− x0‖+ M]ds
≤ ∫_{t0}^{t} (Lb+M)ds.


As a consequence,

b = ‖φ(τ)− x0‖ ≤ (Lb+M)(τ − t0).

Writing τ = t0 + µ, with µ > 0, it follows that µ ≥ b/(Lb+M): the solution cannot leave D before time t0 + b/(Lb+M). Hence, taking δ ≤ b/(Lb+M), the solution φ is confined to D.
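The contraction property at the heart of the proof can be checked numerically. In the sketch below (my own illustration, not part of the notes), f(t, x) = −x, so the Lipschitz constant is L = 1, and the Picard operator F is applied to two arbitrary continuous functions on [t0, t0 + δ] with δ = 1/2; the sup-norm distance should contract by a factor of at most Lδ.

```python
import numpy as np

# Sketch: for f(t, x) = -x, the operator (F phi)(t) = x0 + int_{t0}^t -phi(s) ds
# is a contraction in the sup norm on [t0, t0 + delta] as soon as L*delta < 1.
t0, x0, delta = 0.0, 1.0, 0.5
t = np.linspace(t0, t0 + delta, 1001)

def F(phi):
    # cumulative trapezoidal quadrature approximates the integral
    integrand = -phi
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    return x0 + integral

phi1 = np.ones_like(t)          # two arbitrary continuous functions in S
phi2 = np.cos(t)
ratio = np.max(np.abs(F(phi1) - F(phi2))) / np.max(np.abs(phi1 - phi2))
print(ratio)                    # bounded by L*delta = 0.5
```

The observed ratio is well below Lδ = 0.5, consistent with the estimate ‖Fφ1 − Fφ2‖c ≤ Lδ‖φ1 − φ2‖c.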

Note that the condition x1, x2 ∈ D = {x : ‖x− x0‖ ≤ b} in the statement of the theorem refers to a local Lipschitz condition. If the function f is Lipschitz on all of U , then the following theorem holds.

Theorem 3.2.3 (Global existence). Suppose that f is piecewise continuous in t and is Lipschitz on U = I × D. Then (3.3) admits a unique solution on I.

3.2.3 Local existence and uniqueness – Proof by successive approximations

Using the method of successive approximations, we can prove the following theorem.

Theorem 3.2.4. Suppose that f is continuous on a domain R of the (t, x)-plane defined, for a, b > 0, by R = {(t, x) : |t− t0| ≤ a, ‖x− x0‖ ≤ b}, and that f is locally Lipschitz in x on R. Let then, as previously defined,

M = sup_{(t,x)∈R} ‖f(t, x)‖ < ∞

and

α = min(a, b/M).

Then the sequence defined by

φ0(t) = x0, |t− t0| ≤ α
φi(t) = x0 + ∫_{t0}^{t} f(s, φi−1(s))ds, i ≥ 1, |t− t0| ≤ α,

converges uniformly on the interval |t− t0| ≤ α to φ, the unique solution of (3.3).

Proof. We follow [23, p. 3-6].

Existence. Suppose that |t− t0| ≤ α. Then

‖φ1 − φ0‖ = ‖∫_{t0}^{t} f(s, φ0(s))ds‖ ≤ M|t− t0| ≤ αM ≤ b,

from the definitions of M and α, and thus ‖φ1 − φ0‖ ≤ b. So ∫_{t0}^{t} f(s, φ1(s))ds is defined for |t− t0| ≤ α, and, for |t− t0| ≤ α,

‖φ2(t)− φ0‖ = ‖∫_{t0}^{t} f(s, φ1(s))ds‖ ≤ ∫_{t0}^{t} ‖f(s, φ1(s))‖ds ≤ αM ≤ b.

All subsequent terms in the sequence can be similarly defined, and, by induction, for |t− t0| ≤ α,

‖φk(t)− φ0‖ ≤ αM ≤ b, k = 1, . . . , n.


Now, for |t− t0| ≤ α,

‖φk+1(t)− φk(t)‖ = ‖x0 + ∫_{t0}^{t} f(s, φk(s))ds − x0 − ∫_{t0}^{t} f(s, φk−1(s))ds‖
= ‖∫_{t0}^{t} [f(s, φk(s))− f(s, φk−1(s))]ds‖
≤ L ∫_{t0}^{t} ‖φk(s)− φk−1(s)‖ds,

where the inequality results from the fact that f is locally Lipschitz in x on R. We now prove that, for all k,

‖φk+1 − φk‖ ≤ b (L|t− t0|)^k / k! for |t− t0| ≤ α. (3.6)

Indeed, (3.6) holds for k = 0, as previously established. Assume that (3.6) holds for k = n. Then

‖φn+2 − φn+1‖ = ‖∫_{t0}^{t} [f(s, φn+1(s))− f(s, φn(s))]ds‖
≤ ∫_{t0}^{t} L‖φn+1(s)− φn(s)‖ds
≤ ∫_{t0}^{t} L b (L|s− t0|)^n / n! ds for |t− t0| ≤ α
= (b L^{n+1} / n!) · |t− t0|^{n+1} / (n+1)
= b (L|t− t0|)^{n+1} / (n+1)!,

and thus, by induction, (3.6) holds for all k. Thus, for N > n we have

‖φN(t)− φn(t)‖ ≤ Σ_{k=n}^{N−1} ‖φk+1(t)− φk(t)‖ ≤ Σ_{k=n}^{N−1} b (L|t− t0|)^k / k! ≤ b Σ_{k=n}^{N−1} (Lα)^k / k!.

The rightmost term in this expression is a tail of the convergent series for b e^{Lα}, and therefore tends to zero as n → ∞. Thus, {φk(t)} converges uniformly to a function φ(t) on the interval |t− t0| ≤ α. As the convergence is uniform, the limit function is continuous. Moreover, φ(t0) = x0, since φk(t0) = x0 for every k. Note also that φN(t) = φ0(t) + Σ_{k=1}^{N} (φk(t)− φk−1(t)), so φ(t) = φ0(t) + Σ_{k=1}^{∞} (φk(t)− φk−1(t)).

The fact that φ is a solution of (3.3) follows from the following result: if a sequence of continuous functions {φk(t)} converges uniformly on the interval |t− t0| ≤ α, then

lim_{n→∞} ∫_{t0}^{t} φn(s)ds = ∫_{t0}^{t} lim_{n→∞} φn(s)ds.

Hence,

φ(t) = lim_{n→∞} φn(t)
= x0 + lim_{n→∞} ∫_{t0}^{t} f(s, φn−1(s))ds
= x0 + ∫_{t0}^{t} lim_{n→∞} f(s, φn−1(s))ds
= x0 + ∫_{t0}^{t} f(s, φ(s))ds,


which is to say that

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s))ds for |t− t0| ≤ α.

As the integrand f(t, φ) is a continuous function, φ is differentiable (with respect to t), and φ′(t) = f(t, φ(t)), so φ is a solution to the IVP (3.3).

Uniqueness. Let φ and ψ be two solutions of (3.3), i.e., for |t− t0| ≤ α,

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s))ds
ψ(t) = x0 + ∫_{t0}^{t} f(s, ψ(s))ds.

Then, for |t− t0| ≤ α,

‖φ(t)− ψ(t)‖ = ‖∫_{t0}^{t} [f(s, φ(s))− f(s, ψ(s))]ds‖ ≤ L ∫_{t0}^{t} ‖φ(s)− ψ(s)‖ds. (3.7)

We now apply Gronwall’s lemma (Lemma A.7) to this inequality, with K = 0 and g(t) = ‖φ(t)− ψ(t)‖. First, applying the lemma for t0 ≤ t ≤ t0 + α, we get 0 ≤ ‖φ(t)− ψ(t)‖ ≤ 0, that is,

‖φ(t)− ψ(t)‖ = 0,

and thus φ(t) = ψ(t) for t0 ≤ t ≤ t0 + α. Similarly, for t0 − α ≤ t ≤ t0, ‖φ(t)− ψ(t)‖ = 0. Therefore, φ(t) = ψ(t) on |t− t0| ≤ α.
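For convenience, the form of Gronwall’s lemma being invoked here (assuming the standard statement of Lemma A.7, for a continuous nonnegative g and constants K, L ≥ 0) reads:

```latex
g(t) \le K + L \int_{t_0}^{t} g(s)\,ds
\quad \Longrightarrow \quad
g(t) \le K e^{L(t - t_0)}, \qquad t \ge t_0.
```

With K = 0, the right-hand side is identically zero, which is exactly how (3.7) forces ‖φ(t)− ψ(t)‖ = 0.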

Example – Let us consider the IVP x′ = −x, x(0) = x0 = c, c ∈ R. As initial estimate, we choose φ0(t) = c. Then

φ1(t) = x0 + ∫_{0}^{t} f(s, φ0(s))ds = c + ∫_{0}^{t} −φ0(s)ds = c − c ∫_{0}^{t} ds = c − ct.

To find φ2, we use φ1 in (3.4):

φ2(t) = x0 + ∫_{0}^{t} f(s, φ1(s))ds = c − ∫_{0}^{t} (c − cs)ds = c − ct + c t²/2.

Continuing this method, we find a general term of the form

φn(t) = Σ_{i=0}^{n} c(−1)^i t^i / i!.

This is the nth partial sum of the power series expansion of ce^{−t}, so φn → φ = ce^{−t} (and the approximation is valid on all of R), which is the solution of the initial value problem.
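The iterates of this example are easy to reproduce numerically. The sketch below (my own illustration, not from the notes) implements the Picard recursion with trapezoidal quadrature and compares the result with ce^{−t}:

```python
import numpy as np

# Picard iterates for x' = -x, x(0) = c, computed on a grid:
# phi_n(t) = c + int_0^t -phi_{n-1}(s) ds (trapezoidal quadrature).
c = 1.0
t = np.linspace(0.0, 1.0, 2001)
dt = np.diff(t)

phi = np.full_like(t, c)                       # phi_0(t) = c
for _ in range(25):
    integrand = -phi
    cumulative = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))
    phi = c + cumulative                       # next Picard iterate

err = np.max(np.abs(phi - c * np.exp(-t)))
print(err)   # small: the iterates approach c*e^{-t}
```

After a few iterations, the remaining error is dominated by the quadrature step rather than by the Picard truncation term b(Lα)^n/n!.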

Note that the method of successive approximations is a very general method that can be used in a much more general context; see [10, p. 264-269].


3.2.4 Local existence (non Lipschitz case)

The following theorem is often called Peano’s existence theorem. Because the vector field is not assumed to be Lipschitz, something is lost, namely, uniqueness.

Theorem 3.2.5 (Peano). Suppose that f is continuous on some region

R = {(t, x) : |t− t0| ≤ a, ‖x− x0‖ ≤ b},

with a, b > 0, and let M = max_R ‖f(t, x)‖. Then there exists a continuous function φ(t), differentiable on |t− t0| ≤ α, such that

i) φ(t0) = x0,

ii) φ′(t) = f(t, φ) on |t− t0| ≤ α, where

α = a if M = 0, and α = min(a, b/M) if M > 0.

Before we can prove this result, we need a number of preliminary notations and results. The definition of equicontinuity and a statement of the Ascoli lemma are given in Section A.5. To construct a solution without the Lipschitz condition, we approximate the differential equation by another one that does satisfy the Lipschitz condition. The unique solution of such an approximate problem is an ε-approximate solution. It is formally defined as follows [10, p. 285].

Definition 3.2.6 (ε-approximate solution). A differentiable mapping u of an open ball J ⊂ I into U is an approximate solution of x′ = f(t, x) with approximation ε (or an ε-approximate solution) if we have

‖u′(t)− f(t, u(t))‖ ≤ ε

for any t ∈ J .

Lemma 3.2.7. Suppose that f(t, x) is continuous on a region

R = {(t, x) : |t− t0| ≤ a, ‖x− x0‖ ≤ b}.

Then, for every positive number ε, there exists a function Fε(t, x) such that

i) Fε is continuous for |t− t0| ≤ a and all x,

ii) Fε has continuous partial derivatives of all orders with respect to x1, . . . , xn for |t− t0| ≤ a and all x,

iii) ‖Fε(t, x)‖ ≤ maxR ‖f(t, x)‖ = M for |t− t0| ≤ a and all x,

iv) ‖Fε(t, x)− f(t, x)‖ ≤ ε on R.

See a proof in [14, p. 10-12]; note that in this proof, the property that f defines a differential equation is not used. Hence Lemma 3.2.7 can be used in a more general context than that of differential equations. We now prove Theorem 3.2.5.

Proof of Theorem 3.2.5. The proof takes four steps.

1. We construct, for every positive number ε, a function Fε(t, x) that satisfies the requirements given in Lemma 3.2.7. Using an existence-uniqueness result in the Lipschitz case (such as Theorem 3.2.2), we construct a function φε(t) such that

(P1) φε(t0) = x0,

(P2) φ′ε(t) = Fε(t, φε(t)) on |t− t0| < α.


(P3) (t, φε(t)) ∈ R on |t− t0| ≤ α.

2. The set F = {φε : ε > 0} is bounded and equicontinuous on |t− t0| ≤ α. Indeed, property (P3) of φε implies that F is bounded on |t− t0| ≤ α and that ‖Fε(t, φε(t))‖ ≤ M on |t− t0| ≤ α. Hence property (P2) of φε implies that

‖φε(t1)− φε(t2)‖ ≤ M|t1 − t2|,

if |t1 − t0| ≤ α and |t2 − t0| ≤ α (this is a consequence of Theorem 3.1.7). Therefore, for a given positive number µ, we have ‖φε(t1)− φε(t2)‖ ≤ µ whenever |t1 − t0| ≤ α, |t2 − t0| ≤ α, and |t1 − t2| ≤ µ/M.

3. Using Lemma A.5, choose a sequence {εk : k = 1, 2, . . .} of positive numbers such that limk→∞ εk = 0 and such that the sequence {φεk : k = 1, 2, . . .} converges uniformly on |t− t0| ≤ α as k → ∞. Then set

φ(t) = lim_{k→∞} φεk(t) on |t− t0| ≤ α.

4. Observe that

φε(t) = x0 + ∫_{t0}^{t} Fε(s, φε(s))ds = x0 + ∫_{t0}^{t} f(s, φε(s))ds + ∫_{t0}^{t} [Fε(s, φε(s))− f(s, φε(s))]ds,

and that it follows from iv) in Lemma 3.2.7 that

‖∫_{t0}^{t} [Fε(s, φε(s))− f(s, φε(s))]ds‖ ≤ ε|t− t0| on |t− t0| ≤ α.

This is true for all ε > 0, so letting ε → 0, we obtain

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s))ds,

which completes the proof.

See [14, p. 13-14] for the outline of two other proofs of this result. A proof, by Hartman [13, p. 10-11], now follows.

Proof. Let δ > 0 and let φℓ(t) be a C1 n-dimensional vector-valued function on [t0 − δ, t0] satisfying φℓ(t0) = x0, φ′ℓ(t0) = f(t0, x0), ‖φℓ(t)− x0‖ ≤ b and ‖φ′ℓ(t)‖ ≤ M. For 0 < ε ≤ δ, define a function φε(t) on [t0 − δ, t0 + α] by putting φε(t) = φℓ(t) on [t0 − δ, t0] and

φε(t) = x0 + ∫_{t0}^{t} f(s, φε(s− ε))ds on [t0, t0 + α]. (3.8)

The function φε can indeed be thus defined on [t0 − δ, t0 + α]. To see this, remark first that this formula is meaningful and defines φε(t) for t0 ≤ t ≤ t0 + α1, α1 = min(α, ε), so that φε(t) is C1 on [t0 − δ, t0 + α1] and, on this interval,

‖φε(t)− x0‖ ≤ b, ‖φε(t)− φε(s)‖ ≤ M|t− s|. (3.9)

It then follows that (3.8) can be used to extend φε(t) as a C1 function over [t0 − δ, t0 + α2], where α2 = min(α, 2ε), satisfying relation (3.9). Continuing in this fashion, (3.8) serves to define φε(t) over [t0, t0 + α] so that φε(t) is a C0 function on [t0 − δ, t0 + α] satisfying relation (3.9).

Since ‖φ′ε(t)‖ ≤ M, M can be used as a Lipschitz constant for φε, giving uniform continuity of φε. It follows that the family of functions φε(t), 0 < ε ≤ δ, is equicontinuous. Thus, using Ascoli’s Lemma (Lemma A.5), there exists a sequence ε(1) > ε(2) > · · · , such that ε(n) → 0 as n → ∞ and

φ(t) = lim_{n→∞} φε(n)(t) exists uniformly


on [t0 − δ, t0 + α]. The continuity of f implies that f(t, φε(n)(t− ε(n))) tends uniformly to f(t, φ(t)) as n → ∞; thus term-by-term integration of (3.8) with ε = ε(n) gives

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s))ds,

and thus φ(t) is a solution of (3.3).

An important corollary follows.

Corollary 3.2.8. Let f(t, x) be continuous on an open set E and satisfy ‖f(t, x)‖ ≤ M. Let E0 be a compact subset of E. Then there exists an α = α(E, E0, M) > 0 with the property that if (t0, x0) ∈ E0, then the IVP (3.3) has a solution, and every solution exists on |t− t0| ≤ α.

In fact, the hypotheses can be relaxed a little. Coddington and Levinson [7] define an ε-approximate solution as follows.

Definition 3.2.9. An ε-approximate solution of the differential equation (7.3), where f is continuous, on a t-interval I is a function φ ∈ C on I such that

i) (t, φ(t)) ∈ U for t ∈ I;

ii) φ ∈ C1 on I, except possibly for a finite set of points S on I, where φ′ may have simple discontinuities (g has a simple discontinuity at c if the left and right limits of g at c exist but are not equal);

iii) ‖φ′(t)− f(t, φ(t))‖ ≤ ε for t ∈ I \ S.

Hence it is assumed that φ has a piecewise continuous derivative on I, which is denoted by φ ∈ C1p(I).

Theorem 3.2.10. Let f ∈ C on the rectangle

R = {(t, x) : |t− t0| ≤ a, ‖x− x0‖ ≤ b}.

Given any ε > 0, there exists an ε-approximate solution φ of (3.3) on |t− t0| ≤ α such that φ(t0) = x0.

Proof. Let ε > 0 be given. We construct an ε-approximate solution on the interval [t0, t0 + α]; the construction works in a similar way for [t0 − α, t0]. The ε-approximate solution that we construct is a polygonal path starting at (t0, x0).

Since f ∈ C on R, it is uniformly continuous on R, and therefore for the given value of ε, there exists δε > 0 such that

‖f(t, φ)− f(t̃, φ̃)‖ ≤ ε (3.10)

if

(t, φ) ∈ R, (t̃, φ̃) ∈ R, |t− t̃| ≤ δε and ‖φ− φ̃‖ ≤ δε.

Now divide the interval [t0, t0 + α] into n parts t0 < t1 < · · · < tn = t0 + α, in such a way that

max |tk − tk−1| ≤ min(δε, δε/M). (3.11)

From (t0, x0), construct a line segment with slope f(t0, x0) intercepting the line t = t1 at (t1, x1). From the definition of α and M, it is clear that this line segment lies inside the triangular region T bounded by the line segments with slopes ±M from (t0, x0) to their intercepts with the line t = t0 + α, and the line t = t0 + α itself. In particular, (t1, x1) ∈ T .

At the point (t1, x1), construct a line segment with slope f(t1, x1) until the line t = t2, obtaining the point (t2, x2). Continuing similarly, a polygonal path φ is constructed that meets the line t = t0 + α in a finite number of steps, and lies entirely in T .


The function φ, which can be expressed as

φ(t0) = x0,
φ(t) = φ(tk−1) + f(tk−1, φ(tk−1))(t− tk−1), t ∈ [tk−1, tk], k = 1, . . . , n, (3.12)

is the ε-approximate solution that we seek. Clearly, φ ∈ C1p([t0, t0 + α]) and

‖φ(t)− φ(t̃)‖ ≤ M|t− t̃| for t, t̃ ∈ [t0, t0 + α]. (3.13)

If t ∈ [tk−1, tk], then (3.13) together with (3.11) imply that ‖φ(t)− φ(tk−1)‖ ≤ δε. But then, from (3.12) and (3.10),

‖φ′(t)− f(t, φ(t))‖ = ‖f(tk−1, φ(tk−1))− f(t, φ(t))‖ ≤ ε.

Therefore, φ is an ε-approximate solution.
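The polygonal construction above is exactly the Euler method with step h = max|tk − tk−1|. A minimal numerical sketch (my own illustration, using the hypothetical IVP x′ = −x, x(0) = 1, which is Lipschitz with constant 1), showing that the defect of the polygonal path is of order Mh:

```python
import numpy as np

# Polygonal (Euler) path for x' = -x, x(0) = 1 on [0, 1]:
# x_k = x_{k-1} + h f(t_{k-1}, x_{k-1}) on each subinterval.
f = lambda t, x: -x
t0, x0, alpha, n = 0.0, 1.0, 1.0, 1000
tk, h = np.linspace(t0, t0 + alpha, n + 1, retstep=True)

xk = np.empty(n + 1)
xk[0] = x0
for k in range(n):
    xk[k + 1] = xk[k] + h * f(tk[k], xk[k])

# Defect ||phi' - f(t, phi(t))|| on each segment, measured at the right
# endpoint, where it is largest for this f.
defect = np.max(np.abs(f(tk[:-1], xk[:-1]) - f(tk[1:], xk[1:])))
print(defect)   # an eps-approximate solution with eps of order M*h
```

Halving h roughly halves the defect, which matches the role of the mesh-size condition (3.11) in controlling ε.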

We can now turn to their proof of Theorem 3.2.5.

Proof. Let {εn} be a monotone decreasing sequence of positive real numbers with εn → 0 as n → ∞. By Theorem 3.2.10, for each εn, there exists an εn-approximate solution φn of (3.3) on |t− t0| ≤ α such that φn(t0) = x0. Choose one such solution φn for each εn. From (3.13), it follows that

‖φn(t)− φn(t̃)‖ ≤ M|t− t̃|. (3.14)

Applying (3.14) with t̃ = t0, it is clear that the sequence {φn} is uniformly bounded by ‖x0‖+ b, since M|t− t0| ≤ Mα ≤ b. Moreover, (3.14) implies that {φn} is an equicontinuous set. By Ascoli’s lemma (Lemma A.5), there exists a subsequence {φnk}, k = 1, . . ., of {φn}, converging uniformly on [t0 − α, t0 + α] to a limit function φ, which must be continuous since each φn is continuous.

This limit function φ is a solution to (3.3) which meets the required specifications. To see this, write

φn(t) = x0 + ∫_{t0}^{t} [f(s, φn(s)) + ∆n(s)]ds, (3.15)

where ∆n(t) = φ′n(t)− f(t, φn(t)) at those points where φ′n exists, and ∆n(t) = 0 otherwise. Because φn is an εn-approximate solution, ‖∆n(t)‖ ≤ εn. Since f is uniformly continuous on R, and φnk → φ uniformly on [t0 − α, t0 + α] as k → ∞, it follows that f(t, φnk(t)) → f(t, φ(t)) uniformly on [t0 − α, t0 + α] as k → ∞.

Replacing n by nk in (3.15) and letting k → ∞ gives

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s))ds. (3.16)

Clearly, φ(t0) = x0 when evaluated using (3.16), and also φ′(t) = f(t, φ(t)) since f is continuous. Thus φ as defined by (3.16) is a solution to (3.3) on |t− t0| ≤ α.

3.3 Continuation of solutions

The results we have seen so far deal with the local existence (and uniqueness) of solutions to an IVP, in the sense that solutions are shown to exist in a neighborhood of the initial data. The continuation of solutions consists in studying criteria which allow solutions to be defined on possibly larger intervals.

Consider the IVP

x′ = f(t, x)
x(t0) = x0, (3.17)

with f continuous on a domain U of the (t, x) space, and the initial point (t0, x0) ∈ U .


Lemma 3.3.1. Let the function f(t, x) be continuous in an open set U in (t, x)-space, and assume that a function φ(t) satisfies the condition φ′(t) = f(t, φ(t)) and (t, φ(t)) ∈ U in an open interval I = {t1 < t < t2}. Under this assumption, if limj→∞(τj , φ(τj)) = (t1, η) ∈ U for some sequence {τj : j = 1, 2, . . .} of points in the interval I, then limτ→t1(τ, φ(τ)) = (t1, η). Similarly, if limj→∞(τj , φ(τj)) = (t2, η) ∈ U for some sequence {τj : j = 1, 2, . . .} of points in the interval I, then limτ→t2(τ, φ(τ)) = (t2, η).

Proof. Let W be an open neighborhood of (t1, η); we claim that (t, φ(t)) ∈ W on an interval t1 < t < τ(W) for some τ(W) determined by W. We may assume that the closure W̄ ⊂ U and that ‖f(t, x)‖ ≤ M in W for some positive number M. For every positive integer j and every positive number ε, consider the rectangular region

Rj(ε) = {(t, x) : |t − τj| ≤ ε, ‖x − φ(τj)‖ ≤ Mε}.

Since (τj, φ(τj)) → (t1, η) as j → ∞, there exist an ε > 0 and a j such that (t1, η) ∈ Rj(ε) ⊂ W and τj − ε ≤ t1. From Theorem 3.1.6 applied to the solution φ of the IVP x′ = f(t, x), x(τj) = φ(τj), we obtain that (τ, φ(τ)) ∈ Rj(ε) ⊂ W on the interval t1 < τ ≤ τj. Since W is an arbitrary open neighborhood of (t1, η), we conclude that limτ→t1(τ, φ(τ)) = (t1, η).

From the previous result, we can derive a result concerning the maximal interval over which a solution can be extended. To emphasize the fact that the solution φ of an ODE exists on some interval I, we denote it (φ, I). We need the notion of extension of a solution, defined in the classical manner (see Figure 3.3).

Definition 3.3.2 (Extension). Let (φ, I) and (φ̃, Ĩ) be two solutions of the same ODE. We say that (φ̃, Ĩ) is an extension of (φ, I) if, and only if,

I ⊂ Ĩ and φ̃|I = φ,

where |I denotes the restriction to I.

Figure 3.3: The extension φ̃ on the interval Ĩ of the solution φ (defined on the interval I).

Theorem 3.3.3. Let f(t, x) be continuous in an open set U in (t, x)-space, and let φ(t) be a function satisfying the condition φ′(t) = f(t, φ(t)) and (t, φ(t)) ∈ U in an open interval I = {t1 < t < t2}. If the following two conditions are satisfied:

i) φ(t) cannot be extended to the left of t1 (or, respectively, to the right of t2),


ii) limj→∞(τj, φ(τj)) = (t1, η) (or, respectively, (t2, η)) exists for some sequence {τj : j = 1, 2, . . .} of points in the interval I,

then the limit point (t1, η) (or, respectively, (t2, η)) must be on the boundary of U .

Proof. Suppose that the hypotheses of the theorem are satisfied, and that (t1, η) ∈ U (respectively, (t2, η) ∈ U). Then, from Lemma 3.3.1, it follows that

lim_{τ→t1}(τ, φ(τ)) = (t1, η),

or, respectively, limτ→t2(τ, φ(τ)) = (t2, η). Thus we can apply Theorem 3.2.5 (Peano’s Theorem) to the IVP

x′ = f(t, x)

x(t1) = η,

(or, respectively, x′ = f(t, x), x(t2) = η). This implies that the solution φ can be extended to the left of t1 (respectively, to the right of t2), since Theorem 3.2.5 implies existence in a neighborhood of t1. This is a contradiction.

A particularly important consequence of the previous theorem is the following corollary.

Corollary 3.3.4. Assume that f(t, x) is continuous for t1 < t < t2 and all x ∈ Rn. Also, assume that there exists a function φ(t) satisfying the following conditions:

a) φ and φ′ are continuous in a subinterval I of the interval t1 < t < t2,

b) φ′(t) = f(t, φ(t)) in I.

Then, either

i) φ(t) can be extended to the entire interval t1 < t < t2 as a solution of the differential equation x′ = f(t, x), or

ii) limt→τ ‖φ(t)‖ =∞ for some τ in the interval t1 < t < t2.

3.3.1 Maximal interval of existence

Another way of formulating these results is with the notion of maximal intervals of existence. Consider the differential equation

x′ = f(t, x). (3.18)

Let x = x(t) be a solution of (3.18) on an interval I.

Definition 3.3.5 (Right maximal interval of existence). The interval I is a right maximal interval of existence for x if there does not exist an extension of x(t) over an interval I1 so that x remains a solution of (3.18), with I ⊂ I1 (and I and I1 having different right endpoints). A left maximal interval of existence is defined in a similar way.

Definition 3.3.6 (Maximal interval of existence). An interval which is both a left and a right maximal interval ofexistence is called a maximal interval of existence.

Theorem 3.3.7. Let f(t, x) be continuous on an open set U and φ(t) be a solution of (3.18) on some interval. Then φ(t) can be extended (as a solution) over a maximal interval of existence (ω−, ω+). Also, if (ω−, ω+) is a maximal interval of existence, then φ(t) tends to the boundary ∂U of U as t → ω− and t → ω+.

Remark – The extension need not be unique, and ω± depends on the extension. Also, to say, for example, that φ → ∂U as t → ω+ is interpreted to mean that either ω+ = ∞, or ω+ < ∞ and, if U0 is any compact subset of U, then (t, φ(t)) ∉ U0 when t is near ω+. ◦

The following two corollaries, from [13], are of particular interest.


Corollary 3.3.8. Let f(t, x) be continuous on a strip t0 ≤ t ≤ t0 + a (< ∞), x ∈ Rn arbitrary. Let φ be a solution of (3.3) on a right maximal interval J. Then either

i) J = [t0, t0 + a],

ii) or J = [t0, δ), δ ≤ t0 + a, and ‖φ(t)‖ → ∞ as t→ δ.

Corollary 3.3.9. Let f(t, x) be continuous on the closure Ū of an open (t, x)-set U, and let (3.3) possess a solution φ on a maximal right interval J. Then either

i) J = [t0,∞),

ii) or J = [t0, δ), with δ <∞ and (δ, φ(δ)) ∈ ∂U ,

iii) or J = [t0, δ) with δ <∞ and ‖φ(t)‖ → ∞ as t→ δ.

3.3.2 Maximal and global solutions

Linked to the notion of maximal intervals of existence of solutions is the notion of maximal and global solutions.

Definition 3.3.10 (Maximal solution). Let I1 ⊂ R and I2 ⊂ R be two intervals such that I1 ⊂ I2. A solution (φ, I1) is maximal in I2 if φ has no extension (φ̃, Ĩ) solution of the ODE such that I1 ⊊ Ĩ ⊂ I2.

Definition 3.3.11 (Global solution). A solution (φ, I1) is global on I2 if φ admits an extension φ̃ defined on the whole interval I2.

Figure 3.4: φ1 is a global and maximal solution on I; φ2 is a maximal solution on I, but it is not global on I.

Every global solution on a given interval I is maximal on that same interval. The converse is false.

Example – Consider the equation x′ = −2tx² on R. If x ≠ 0, x′x^{−2} = −2t, which implies that x(t) = 1/(t² − c), with c ∈ R. Depending on c, there are several cases.

• if c < 0, then x(t) = 1/(t² − c) is a global solution on R,

• if c > 0, the solutions are defined on (−∞, −√c), (−√c, √c) and (√c, ∞). These are maximal solutions on R, but are not global solutions,

• if c = 0, then the maximal non-global solutions on R are defined on (−∞, 0) and (0, ∞).


Another solution is x ≡ 0, which is a global solution on R. �
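The case c > 0 can also be checked numerically. The sketch below (illustrative only, not part of the notes) verifies that x(t) = 1/(t² − c) satisfies x′ = −2tx² at an interior point of (−√c, √c), and that |x| is already very large just before t = √c, consistent with blow-up at the endpoint of the maximal interval:

```python
# Check that x(t) = 1/(t^2 - c) solves x' = -2 t x^2 for c = 1, and observe the
# blow-up of the branch defined on (-1, 1) as t approaches sqrt(c) = 1.
c = 1.0

def x(t):
    return 1.0 / (t * t - c)

def residual(t):
    # central-difference derivative minus the right-hand side -2 t x(t)^2
    h = 1e-6
    return (x(t + h) - x(t - h)) / (2 * h) - (-2.0 * t * x(t) ** 2)

r = abs(residual(0.5))    # ODE residual at an interior point: essentially 0
blow = abs(x(0.999))      # |x| is already large just before t = 1
```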

Lemma 3.3.12. Every solution φ of the differential equation x′ = f(t, x) is contained in a maximal solution φ̃.

The following theorem extends the uniqueness property to an interval of existence of the solution.

Theorem 3.3.13. Let φ1, φ2 : I → Rn be two solutions of the equation x′ = f(t, x), with f locally Lipschitz in x on U. If φ1 and φ2 coincide at a point t0 ∈ I, then φ1 = φ2 on I.

Proof. Under the assumptions of the theorem, φ1(t0) = φ2(t0). Suppose that there exists a t1, t1 ≠ t0, such that φ1(t1) ≠ φ2(t1). For simplicity, let us assume that t1 > t0.

By the local uniqueness of the solution, it follows from φ1(t0) = φ2(t0) that there exists a neighborhood N of t0 such that φ1(t) = φ2(t) for all t ∈ N. Let

E = {t ∈ [t0, t1] : φ1(t) ≠ φ2(t)}.

Since t1 ∈ E, E ≠ ∅. Let α = inf(E); we have α ∈ (t0, t1], and for all t ∈ [t0, α), φ1(t) = φ2(t). By continuity of φ1 and φ2, we thus have φ1(α) = φ2(α). By local uniqueness, this implies that there exists a neighborhood W of α on which φ1 = φ2. This contradicts the definition of α as the infimum of E, since E must then contain points arbitrarily close to α. Hence there exists no such t1, and φ1 = φ2 on I.

Corollary 3.3.14 (Global uniqueness). Let f(t, x) be locally Lipschitz in x on U. Then through any point (t0, x0) ∈ U there passes a unique maximal solution φ : I → Rn. If there exists a global solution on I, then it is unique.

3.4 Continuous dependence on initial data, on parameters

Let φ be a solution of (3.3). To emphasize the fact that this solution depends on the initial condition (t0, x0), we denote it φ_{t0,x0}. Let η be a parameter of (3.3). When we study the dependence of φ_{t0,x0} on η, we denote the solution φ_{t0,x0,η}.

We suppose that ‖f(t, x)‖ ≤ M and |∂f(t, x)/∂xi| ≤ K for i = 1, . . . , n, for (t, x) ∈ U, with U ⊂ R × Rn. Note that these conditions are automatically satisfied on a closed bounded region of the form R = {(t, x) : |t − t0| ≤ a, ‖x − x0‖ ≤ b}, where a, b > 0.

Our objective here is to characterize the nature of the dependence of the solution on the initial time t0 and the initial data x0.

Theorem 3.4.1. Suppose that f and ∂f/∂x are continuous and bounded in a given region U. Let φ_{t0,x0} be a solution of (3.3) passing through (t0, x0) and ψ_{t̄0,x̄0} be a solution of (3.3) passing through (t̄0, x̄0). Suppose that φ and ψ exist on some interval I.

Then, for each ε > 0, there exists δ > 0 such that if |t0 − t̄0| < δ, ‖x0 − x̄0‖ < δ and |t − t̄| < δ, then ‖φ(t) − ψ(t̄)‖ < ε, for t, t̄ ∈ I.

Proof. The proof is from [4, pp. 135–136]. Since φ is the solution of (3.3) through the point (t0, x0), we have, for all t ∈ I,

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s)) ds. (3.19)

As ψ is the solution of (3.3) through the point (t̄0, x̄0), we have, for all t ∈ I,

ψ(t) = x̄0 + ∫_{t̄0}^{t} f(s, ψ(s)) ds. (3.20)

Since

∫_{t0}^{t} f(s, φ(s)) ds = ∫_{t0}^{t̄0} f(s, φ(s)) ds + ∫_{t̄0}^{t} f(s, φ(s)) ds,


subtracting (3.20) from (3.19) gives

φ(t) − ψ(t) = x0 − x̄0 + ∫_{t0}^{t̄0} f(s, φ(s)) ds + ∫_{t̄0}^{t} [f(s, φ(s)) − f(s, ψ(s))] ds

and therefore

‖φ(t) − ψ(t)‖ ≤ ‖x0 − x̄0‖ + ‖∫_{t0}^{t̄0} f(s, φ(s)) ds‖ + ‖∫_{t̄0}^{t} [f(s, φ(s)) − f(s, ψ(s))] ds‖.

Using the boundedness assumptions on f and ∂f/∂x to evaluate the right-hand side of the latter inequality, we obtain

‖φ(t) − ψ(t)‖ ≤ ‖x0 − x̄0‖ + M|t0 − t̄0| + K ∣∫_{t̄0}^{t} ‖φ(s) − ψ(s)‖ ds∣.

If |t0 − t̄0| < δ and ‖x0 − x̄0‖ < δ, then we have

‖φ(t) − ψ(t)‖ ≤ δ + Mδ + K ∣∫_{t̄0}^{t} ‖φ(s) − ψ(s)‖ ds∣. (3.21)

Applying Gronwall's inequality (Appendix A.7) to (3.21) gives

‖φ(t) − ψ(t)‖ ≤ δ(1 + M) e^{K|t−t̄0|} ≤ δ(1 + M) e^{K(τ2−τ1)},

using the fact that |t − t̄0| < τ2 − τ1 if we denote I = (τ1, τ2). Since

‖ψ(t) − ψ(t̄)‖ ≤ ‖∫_{t̄}^{t} f(s, ψ(s)) ds‖ ≤ M|t − t̄| ≤ Mδ

if |t − t̄| < δ, we have

‖φ(t) − ψ(t̄)‖ ≤ ‖φ(t) − ψ(t)‖ + ‖ψ(t) − ψ(t̄)‖ ≤ δ(1 + M) e^{K(τ2−τ1)} + δM.

Now, given ε > 0, we need only choose δ < ε/[M + (1 + M)e^{K(τ2−τ1)}] to obtain the desired inequality, completing the proof.
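The role of the Gronwall bound can be observed numerically. The sketch below (illustrative, not from the notes) integrates x′ = sin x, for which we may take M = 1 and K = 1 on all of R, from two initial conditions δ apart at the same initial time; with the initial times equal, the bound reduces to ‖φ(t) − ψ(t)‖ ≤ δe^{Kt}:

```python
import math

def euler(x0, T=2.0, n=20000):
    """Integrate x' = sin(x) on [0, T] with Euler steps, from x(0) = x0."""
    x, h = x0, T / n
    for _ in range(n):
        x += h * math.sin(x)
    return x

delta = 1e-3
phi, psi = euler(1.0), euler(1.0 + delta)
gap = abs(phi - psi)
bound = delta * math.exp(1.0 * 2.0)   # δ e^{K T} with K = 1, T = 2
```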

What we have shown is that the solution passing through the point (t0, x0) is a continuous function of the triple (t, t0, x0). We now consider the case where the parameters also vary, comparing solutions to two different but "close" equations.

Theorem 3.4.2. Let f, g be defined in a domain U and satisfy the hypotheses of Theorem 3.4.1. Let φ and ψ be solutions of x′ = f(t, x) and x′ = g(t, x), respectively, such that φ(t0) = x0, ψ(t0) = x̄0, existing on a common interval α < t < β. Suppose that ‖f(t, x) − g(t, x)‖ ≤ ε for (t, x) ∈ U. Then the solutions φ and ψ satisfy

‖φ(t) − ψ(t)‖ ≤ ‖x0 − x̄0‖ e^{K|t−t0|} + ε(β − α) e^{K|t−t0|},

for all t, α < t < β.

The following theorem [8, p. 58] is less restrictive in its hypotheses than the previous one, requiring only uniqueness of the solution of the IVP.

Theorem 3.4.3. Let U be a domain of (t, x) space, Iµ the domain |µ − µ0| < c, with c > 0, and Uµ the set of all (t, x, µ) satisfying (t, x) ∈ U, µ ∈ Iµ. Suppose f is a continuous function on Uµ, bounded by a constant M there. For µ = µ0, let

x′ = f(t, x, µ)
x(t0) = x0 (3.22)


have a unique solution φ0 on the interval [a, b], where t0 ∈ [a, b]. Then there exists a δ > 0 such that, for any fixed µ such that |µ − µ0| < δ, every solution φµ of (3.22) exists over [a, b] and, as µ → µ0,

φµ → φ0

uniformly over [a, b].

Proof. We begin by considering t0 ∈ (a, b). First, choose an α > 0 small enough that the region R = {|t − t0| ≤ α, ‖x − x0‖ ≤ Mα} is in U; note that R is a slight modification of the usual security domain. All solutions of (3.22) with µ ∈ Iµ exist over [t0 − α, t0 + α] and remain in R. Let φµ denote a solution. Then the set of functions {φµ}, µ ∈ Iµ, is a uniformly bounded and equicontinuous set on |t − t0| ≤ α. This follows from the integral equation

φµ(t) = x0 + ∫_{t0}^{t} f(s, φµ(s), µ) ds (|t − t0| ≤ α) (3.23)

and the inequality ‖f‖ ≤ M.

Suppose that for some t̄ ∈ [t0 − α, t0 + α], φµ(t̄) does not tend to φ0(t̄). Then there exist a sequence {µk}, k = 1, 2, . . ., with µk → µ0, and corresponding solutions φµk such that φµk converges uniformly over [t0 − α, t0 + α] as k → ∞ to a limit function ψ, with ψ(t̄) ≠ φ0(t̄). From the fact that f ∈ C on Uµ, that ψ ∈ C on [t0 − α, t0 + α], and that φµk converges uniformly to ψ, (3.23) for the solutions φµk yields

ψ(t) = x0 + ∫_{t0}^{t} f(s, ψ(s), µ0) ds (|t − t0| ≤ α).

Thus ψ is a solution of (3.22) with µ = µ0. By the uniqueness hypothesis, it follows that ψ(t) = φ0(t) on |t − t0| ≤ α, contradicting ψ(t̄) ≠ φ0(t̄). Thus all solutions φµ on |t − t0| ≤ α tend to φ0 as µ → µ0. Because of the equicontinuity, the convergence is uniform.

Let us now prove that the result holds over [a, b]. For this, let us consider the interval [t0, b]. Let τ ∈ [t0, b), and suppose that the result is valid for every small h > 0 over [t0, τ − h] but not over [t0, τ + h]. It is clear that τ ≥ t0 + α. By the above assumption, for any small ε > 0, there exists a δε > 0 such that

‖φµ(τ − ε) − φ0(τ − ε)‖ < ε (3.24)

for |µ − µ0| < δε. Let H ⊂ U be defined as the region

H = {|t − τ| ≤ γ, ‖x − φ0(τ − γ)‖ ≤ γ + M|t − τ + γ|},

with γ small enough that H ⊂ U. Any solution of x′ = f(t, x, µ) starting on t = τ − γ with initial value ξ0, ‖ξ0 − φ0(τ − γ)‖ ≤ γ, will remain in H as t increases. Thus all solutions can be continued to τ + γ.

By choosing ε = γ in (3.24), it follows that for |µ − µ0| < δε, the solutions φµ can all be continued to τ + ε. Thus over [t0, τ + ε] these solutions are in U, so the argument that φµ → φ0, which has been given for |t − t0| ≤ α, also applies over [t0, τ + ε]. Thus the assumption about the existence of τ < b is false. The case τ = b is treated in similar fashion on τ − γ ≤ t ≤ τ.

A similar argument applies to the left of t0 and therefore the result is valid over [a, b].

Definition 3.4.4 (Well-posedness). A problem is said to be well-posed if solutions exist, are unique, and depend continuously on the initial conditions.

3.5 Generality of first order systems

Consider an nth order differential equation in normal form

x^(n) = f(t, x, x′, . . . , x^(n−1)). (3.25)


This equation can be reduced to a system of n first order ordinary differential equations as follows. Let y0 = x, y1 = x′, y2 = x′′, . . . , yn−1 = x^(n−1). Then (3.25) is equivalent to

y′ = F(t, y), (3.26)

with y = (y0, y1, . . . , yn−1)^T and

F(t, y) = (y1, y2, . . . , yn−1, f(t, y0, . . . , yn−1))^T.

Similarly, the IVP associated to (3.25),

x^(n) = f(t, x, x′, . . . , x^(n−1))
x(t0) = x0, x′(t0) = x1, . . . , x^(n−1)(t0) = xn−1, (3.27)

is equivalent to the IVP

y′ = F(t, y)
y(t0) = y0 = (x0, . . . , xn−1)^T. (3.28)

As a consequence, all results in this chapter are true for equations of order higher than 1.
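The reduction is mechanical, and can be sketched in code (the helper name make_first_order is hypothetical, not from the notes): given the right-hand side f of (3.25), the system map F shifts the components of y and appends f.

```python
import math

def make_first_order(f):
    """Given the right-hand side f(t, x, x', ..., x^(n-1)) of x^(n) = f(...),
    return F(t, y) for the equivalent system y' = F(t, y),
    with y = (y0, ..., y_{n-1}) = (x, x', ..., x^(n-1))."""
    def F(t, y):
        return y[1:] + [f(t, *y)]
    return F

# x'' = -x: f(t, y0, y1) = -y0, so with x(0) = 1, x'(0) = 0 the solution is cos t.
F = make_first_order(lambda t, y0, y1: -y0)
y, h = [1.0, 0.0], 0.001
for k in range(1000):                       # Euler steps over [0, 1]
    dy = F(k * h, y)
    y = [yi + h * di for yi, di in zip(y, dy)]
err = abs(y[0] - math.cos(1.0))             # small discretization error
```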

3.6 Generality of autonomous systems

The nonautonomous system

x′(t) = f(t, x(t))

can be transformed into an autonomous system of differential equations by setting an auxiliary variable, say y, equal to t, giving

x′ = f(y, x)
y′ = 1.

However, this transformation does not always make the system any easier to study.
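As a sketch (function names hypothetical, not from the notes), the change of variables is purely mechanical:

```python
def autonomize(f):
    """Turn the nonautonomous x' = f(t, x) into the autonomous z' = G(z),
    where z = (x, y) and the auxiliary variable y plays the role of t."""
    def G(z):
        x, y = z
        return (f(y, x), 1.0)   # x' = f(y, x), y' = 1
    return G

G = autonomize(lambda t, x: t * x)   # e.g. x' = t x
```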

3.7 Suggested reading, Further problems

Most of these results are treated one way or another in Coddington and Levinson [8] (first edition published in 1955), and the current text, as many others, does little but paraphrase them.

We have not seen here any results specific to complex-valued differential equations. As complex numbers are two-dimensional real vectors, the results carry through to the complex case: if, in (7.3), we consider an n-dimensional complex vector, then this is equivalent to a 2n-dimensional real problem. Furthermore, if f(t, x) is analytic in t and x, then analytic solutions can be constructed. See Section I-4 in [14], ..., for example.


3.8 Exercises and problems

Exercise 3.8.1. Consider the IVP

x′ = 3|x|^{2/3}
x(t0) = x0. (3.29)

Find the solution(s) to this IVP and discuss.

Problem 3.8.2. We consider here the equation

x′(t) = −αx(t) + f(x(t)) (3.30)

where α ∈ R is constant and f is continuous on R.

i) Show that x is a solution of (3.30) on R+ if, and only if, x is continuous on R+ and

x(t) = e^{−αt}x(0) + e^{−αt} ∫_0^t e^{αs} f(x(s)) ds, ∀t ∈ R+. (3.31)

Suppose now that α > 0 and that f is such that

∃a, k ∈ R, a > 0, 0 < k < α : ∀u ∈ R, |u| ≤ a⇒ |f(u)| ≤ k|u| (3.32)

Part I. Suppose that there exists a solution x of (3.30), defined on R+ and satisfying the inequality

|x(t)| ≤ a, t ∈ R+ (3.33)

i) Prove the inequality

|x(t)| ≤ |x(0)| e^{−(α−k)t}, t ∈ R+.

[Hint: Use Gronwall's lemma with the function g(t) = e^{αt}|x(t)|.]

ii) Deduce that x admits the limit 0 as t→∞.

Part II.

i) Show that any solution of (3.30) on R+ that satisfies |x(0)| < a, satisfies (3.33).

ii) Deduce from the preceding questions the two following properties.

a) Any solution x of (3.30) on R+ satisfying the condition |x(0)| < a, admits the limit 0 when t→∞.

b) The function x ≡ 0 is the unique solution of (3.30) on R+ such that x(0) = 0.

Part III. Application. Show that, for α > 1, all solutions of the equation

x′ = −αx + ln(1 + x²)

tend to zero when t → ∞.

Exercise 3.8.3. Let f : [0,+∞)→ R, f ∈ C1, and a ∈ R. We consider the initial value problems

x′(t) + ax(t) = f(t), t ≥ 0
x(0) = 0 (3.34)

and

x′(t) + ax(t) = f′(t), t ≥ 0
x(0) = 0. (3.35)

As these equations are linear, the initial value problems (3.34) and (3.35) admit unique solutions. We denote by φ the solution to (3.34) and by ψ the solution to (3.35). Find a necessary and sufficient condition on f such that φ′ = ψ.

[Hint: Use a variation of constants formula].


Problem 3.8.4. Let f : Rn → Rn be continuous. Consider the differential equation

x′(t) = f(x(t)) (3.36)

i) Let x be a solution of (3.36) defined on a bounded interval [0, α), with α > 0. Suppose that t ↦ f(x(t)) is bounded on [0, α).

a) Consider the sequence

zα,n = x(α − 1/n), n ∈ N∗.

Show that (zα,n)n∈N∗ is a Cauchy sequence.

b) Deduce that there exists xα ∈ Rn such that

‖x(t) − xα‖ ≤ M|t − α|,

with M = sup_{t∈[0,α)} ‖f(x(t))‖.

c) Show that x admits a finite limit when t → α, t < α.

ii) Show that there exists an extension of x that is a solution of (3.36) on the interval [0, α].

Problem 3.8.5. A differential equation of the form

x′ = f(t, x(t), x(t− ω)) (3.37)

for ω > 0, is called a delay differential equation (also a differential-difference equation, or an equation with deviating argument), and ω is called the delay. The basic initial value problem for (3.37) takes the form

x′ = f(t, x(t), x(t − ω))
x(t) = φ0(t), t0 − ω ≤ t ≤ t0. (3.38)

i) Use the method of steps to construct the solution to (3.38) on the interval [t0, t0 + ω], that is, find how to construct the solution to the non-delayed problem

x′ = f(t, x(t), φ0(t − ω))
x(t0) = φ0(t0), t0 ≤ t ≤ t0 + ω. (3.39)

ii) Discuss existence and uniqueness of the solution on the interval [t0, t0 + ω], depending on the nature of φ0 and f.

iii) Suppose that φ0 ∈ C0([t0 − ω, t0]). Discuss the regularity of the solution to (3.38) on the interval [t0 + kω, t0 + (k + 1)ω], k ∈ N.

Exercise 3.8.6. Consider the delay initial value problem

x′(t) = ax(t − ω)
x(t) = C, t ∈ [t0 − ω, t0] (3.40)

with a, C ∈ R, ω ∈ R∗+. Using the ideas of the previous exercise, find the solution to (3.40) on the interval [t0 + kω, t0 + (k + 1)ω], k ∈ N.
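As a numerical sketch of the method of steps (illustrative; it does not produce the closed form the exercise asks for), note that on the first interval [t0, t0 + ω] the right-hand side is the known constant aC, so x(t) = C + aC(t − t0) there. The code below integrates (3.40) with Euler steps and a stored history, and checks the first interval against this linear formula; all parameter values are arbitrary:

```python
a, C, omega, t0 = 0.5, 2.0, 1.0, 0.0
lag = 1000                 # number of steps per delay interval
h = omega / lag            # step size dividing the delay exactly
hist = [C] * (lag + 1)     # samples of x on [t0 - ω, t0], every h
x = C
for n in range(2 * lag):   # integrate over [t0, t0 + 2ω]
    x = x + h * a * hist[n]     # Euler step: x'(t_n) = a x(t_n - ω)
    hist.append(x)

x_num = hist[lag + lag // 2]       # numerical x(t0 + ω/2)
x_lin = C + a * C * (omega / 2)    # linear solution on the first interval
err = abs(x_num - x_lin)
```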

Problem 3.8.7. Periodic solutions of differential equations. In this problem, we study the solutions of some differential equations and, in particular, their periodic solutions.

Let T > 0 be a real number, P be the vector space of real-valued, continuous and T-periodic functions defined on R, and let a ∈ P. Define

A = ∫_0^T a(t) dt,  g(t) = exp(∫_0^t a(u) du),

and endow P with the norm

‖x‖ = sup_{t∈R} |x(t)|.


First part

1. For what value(s) of A does the differential equation

x′(t) = a(t)x(t) (E1)

admit non-trivial T-periodic solutions?

We now let b ∈ P, and consider the differential equation

x′(t) = a(t)x(t) + b(t). (E2)

2.a. Describe the set of maximal solutions to (E2) and the intervals of definition of these solutions.

2.b. Describe the set of maximal solutions to (E2) that are T-periodic, first assuming A ≠ 0, then A = 0.

Second part

In this part, we let H be a real-valued C1 function defined on R², and consider the differential equation

x′(t) = a(t)x(t) +H(x(t), t). (E3)

3. Check that a function x is a solution to (E3) if and only if it satisfies the condition

x(t) = g(t) (x(0) + ∫_0^t g(s)^{−1} H(x(s), s) ds).

4. Suppose that H is T-periodic with respect to its second argument, and that A ≠ 0. Show that, for all functions x ∈ P, the formula

U_H x(t) = (e^A/(1 − e^A)) g(t) ∫_t^{t+T} g(s)^{−1} H(x(s), s) ds

defines a function U_H x ∈ P, and that x is a solution to (E3) if and only if U_H x = x.

In the rest of the problem, we let F be a real-valued C1 function defined on R², T-periodic with respect to its second argument; for all ε > 0, define Hε = εF and Uε = U_{Hε}, so that the differential equation (E3) is written

x′(t) = a(t)x(t) + εF(x(t), t). (E4)

Assume that A ≠ 0. For all r > 0, we denote by Br the closed ball with centre 0 and radius r in the normed space P. We want to show the following assertion: for all r > 0, there exists ε1 > 0 such that, for all ε ≤ ε1, the differential equation (E4) has a unique solution x ∈ Br, which we will denote xε.

We denote by αr (resp. βr) the upper bound of the set of |F(v, s)| (resp. |∂F/∂v(v, s)|), where v ∈ [−r, r] and s ∈ [0, T].

5.a. Find a real ε0 > 0 such that, for all ε ≤ ε0, Uε(Br) ⊂ Br.

5.b. Find a real ε1 ≤ ε0 such that, for all ε ≤ ε1, the restriction of Uε to Br is a contraction of Br.

5.c. Conclude.

6. Study the behavior of the function xε when ε → 0, the number r being fixed.

7. We now suppose that the function a is a constant k ≠ 0 and that the function F takes the form F(v, s) = f(v). Determine the solution xε of (E4).

8. We now consider T = 1, k = −1 and f(v) = v², so that (E4) takes the form

x′(t) = −x(t) + εx(t)². (E5)

8.a. Give possible values of ε0 and ε1.

8.b. Determine the solution xε of (E5).

8.c. Let α ∈ R. Show that there exists a unique maximal solution ϕα of (E5) such that ϕα(0) = α. Determine precisely this solution, and graph several of these solutions.

Third part


Here, we consider the differential equation

x′(t) = kx(t) + εf(x(t)), (E6)

where k < 0, f is C1 and zero at zero. We let

λ = sup_{u∈[−1,1]} |f′(u)|,

and assume that ελ < −k.

We propose to show the following result: if x is a maximal solution of (E6) such that |x(0)| < 1, then it is defined on [0, ∞) and, for all t ≥ 0,

|x(t)| ≤ |x(0)| e^{(k+ελ)t}.

9. In this question, we suppose that the set of t such that |x(t)| > 1 is non-empty, and we denote its lower bound by θ. Show that, for all t ∈ [0, θ],

|x(t)| ≤ |x(0)| e^{(k+ελ)t}.

10. Conclude.

N.B. This result expresses the stability and the asymptotic stability of the trivial solution of (E6).


Chapter 4

Linear systems of equations

In this chapter, we study linear systems of difference equations and ordinary differential equations. Our aim throughout is to show the similarities between the results obtained for both types of systems. Therefore, each type of result is formulated first for difference equations, then for differential equations.

The aim is to study linear systems of first-order difference equations of the form

xt+1 = A(t)xt +B(t) (4.1)

and linear systems of first-order differential equations of the form

(d/dt) x(t) = A(t)x(t) + B(t). (4.2)

In both cases, we take I an interval of R, E a normed vector space over a field K (E = Kn, with K = R or C), and L(E) the space of continuous linear maps from E to E. Let ‖·‖ be a norm on E, and |||·||| be the induced supremum norm on L(E) (see Appendix A.1). Consider a map A : I → L(E) and a map B : I → E. We restrict ourselves to the finite-dimensional case (E = Kn). Hence we consider A ∈ Mn(K), the n × n matrices over the field K, and B ∈ Kn. We suppose that A and B have continuous entries. In most of what follows, we assume K = R.

In the case of (4.2), the unknown x is a map on I, taking values in E, defined and differentiable on a sub-interval of I.

Note that the name linear for systems (4.1) and (4.2) is an abuse of language. They should be called affine systems, with associated linear systems

xt+1 = A(t)xt (4.3)

and

x′(t) = A(t)x(t), (4.4)

respectively. However, in the context of difference or differential equations, we refer to (4.1) and (4.2) as nonhomogeneous linear systems, and to (4.3) and (4.4), respectively, as their corresponding homogeneous linear systems.

4.1 Generality of linear systems of first-order equations

As was established in Section 3.5, an nth-order ordinary differential equation can be transformed into a system of n first-order equations. This is true in particular with linear difference or differential equations.

4.1.1 Difference equations

Consider the mth-order linear nonhomogeneous equation

a0 xt+m + a1 xt+m−1 + · · · + am xt = b(t),

where we may take a0 = 1 without loss of generality (dividing through by a0 ≠ 0 if necessary).



For convenience, xt is now denoted x(t). Let Y(t) be an m-vector Y(t) = (y1(t), y2(t), . . . , ym(t)) that satisfies

y1(t) = x(t)

y2(t) = x(t+ 1)

y3(t) = x(t+ 2)

...

ym(t) = x(t+m− 1).

The first element y1(t) is the solution x(t). Hence a first-order system of difference equations in Y is

y1(t+ 1) = y2(t)

y2(t+ 1) = y3(t)

y3(t+ 1) = y4(t)

...

ym−1(t+ 1) = ym(t)

ym(t+ 1) = −a1ym(t)− · · · − am−1y2(t)− amy1(t) + b(t)

In matrix form,

Y(t + 1) = AY(t) + B,

where

A =
[  0      1      0     · · ·   0   ]
[  0      0      1     · · ·   0   ]
[  ⋮      ⋮      ⋮     ⋱       ⋮   ]
[  0      0      0     · · ·   1   ]
[ −am   −am−1  −am−2   · · ·  −a1  ],

B = (0, 0, . . . , 0, b(t))^T.

The matrix A has 1's along the superdiagonal and the coefficients −ai of the mth-order difference equation (with signs reversed) along the last row. The matrix A is called the companion matrix of the mth-order difference equation.
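As a quick check (a sketch, not from the notes), take the Fibonacci recursion x(t + 2) = x(t + 1) + x(t), i.e. m = 2 with a1 = a2 = −1 and b ≡ 0; its companion matrix is A = [[0, 1], [1, 1]], and iterating Y(t + 1) = AY(t) reproduces the scalar recursion:

```python
def companion_step(Y):
    # A = [[0, 1], [1, 1]]: 1 on the superdiagonal, last row (-a2, -a1) = (1, 1)
    y1, y2 = Y
    return (y2, y1 + y2)

# direct recursion x(t+2) = x(t+1) + x(t)
x = [0, 1]
for t in range(10):
    x.append(x[t] + x[t + 1])

# companion-matrix iteration from Y(0) = (x(0), x(1))
Y = (0, 1)
for t in range(10):
    Y = companion_step(Y)
# Y(10) should equal (x(10), x(11))
```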

4.1.2 Differential equations

The differential equation

x(n)(t) = an−1(t)x(n−1)(t) + · · ·+ a1(t)x′(t) + a0(t)x(t) + b(t)

is an nth order nonhomogeneous linear differential equation. Together with the initial conditions

x^(n−1)(t0) = x0^(n−1), . . . , x′(t0) = x′0, x(t0) = x0,

where x0, x′0, . . . , x0^(n−1) ∈ R, it forms an IVP. We can transform it to a system of linear first order equations by setting

y0 = x
y1 = x′
⋮
yn−1 = x^(n−1).

The nth order linear equation is then equivalent to the following system of n first order linear equations

y′0 = y1
y′1 = y2
⋮
y′n−2 = yn−1
y′n−1 = an−1(t)yn−1(t) + an−2(t)yn−2(t) + · · · + a1(t)y1(t) + a0(t)y0(t) + b(t),

under the initial conditions

yn−1(t0) = x0^(n−1), . . . , y1(t0) = x′0, y0(t0) = x0.

Example – Consider the second order IVP

x′′ = −2x′ + 4x − 3
x(0) = 2, x′(0) = 1.

To transform it into a system of first-order differential equations, we let y = x′. Substituting (where possible) y for x′ in the equation gives

y′ = −2y + 4x − 3.

The initial condition becomes x(0) = 2, y(0) = 1. So finally, the following IVP is equivalent to the original one:

x′ = y
y′ = 4x − 2y − 3
x(0) = 2, y(0) = 1.

Note that the linearity of the initial problem is preserved. �
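As a sanity check (a numerical sketch, not part of the notes), one can integrate the first-order system with a Runge–Kutta scheme and compare with the closed-form solution of the original equation, x(t) = c1 e^{r1 t} + c2 e^{r2 t} + 3/4 with r1,2 = −1 ± √5 from the characteristic equation r² + 2r − 4 = 0:

```python
import math

def rk4_step(F, z, t, h):
    """One classical Runge-Kutta step for z' = F(t, z)."""
    k1 = F(t, z)
    k2 = F(t + h / 2, [zi + h / 2 * ki for zi, ki in zip(z, k1)])
    k3 = F(t + h / 2, [zi + h / 2 * ki for zi, ki in zip(z, k2)])
    k4 = F(t + h, [zi + h * ki for zi, ki in zip(z, k3)])
    return [zi + h / 6 * (a + 2 * b + 2 * c + d)
            for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]

F = lambda t, z: [z[1], 4 * z[0] - 2 * z[1] - 3]   # z = (x, y), y = x'
z, t, h = [2.0, 1.0], 0.0, 0.001
for _ in range(1000):                               # integrate to t = 1
    z = rk4_step(F, z, t, h)
    t += h

r1, r2 = -1 + math.sqrt(5), -1 - math.sqrt(5)
c1 = (1 - 1.25 * r2) / (r1 - r2)   # from x(0) = 2, x'(0) = 1
c2 = 1.25 - c1
exact = c1 * math.exp(r1) + c2 * math.exp(r2) + 0.75
err = abs(z[0] - exact)
```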

4.2 Existence and uniqueness of solutions for linear ordinary differential equations

Theorem 4.2.1. Let A and B be defined and continuous on I ∋ t0. Then, for all x0 ∈ E, there exists a unique solution φt(x0) of (4.2) through (t0, x0), defined on the interval I.

Proof. Let k(t) = |||A(t)||| = sup_{‖x‖=1} ‖A(t)x‖. Then for all t ∈ I and all x1, x2 ∈ E,

‖f(t, x1) − f(t, x2)‖ = ‖A(t)(x1 − x2)‖ ≤ |||A(t)||| ‖x1 − x2‖ ≤ k(t)‖x1 − x2‖,

where the inequality

‖A(t)(x1 − x2)‖ ≤ |||A(t)||| ‖x1 − x2‖

results from the nature of the norm |||·||| (see Appendix A.1). Furthermore, k is continuous on I. Therefore the conditions of Theorem 3.2.2 hold, leading to existence and uniqueness on the interval I.

With linear systems, it is possible to extend solutions easily, as is shown by the next theorem.

Theorem 4.2.2. Suppose that the entries of A(t) and the entries of B(t) are continuous on an open interval I. Then every solution of (4.2) which is defined on a subinterval J of the interval I can be extended uniquely to the entire interval I as a solution of (4.2).


Proof. Suppose that I = (t1, t2), and that a solution φ of (4.2) is defined on J = (τ1, τ2), with J ⊊ I. Then

‖φ(t)‖ ≤ ‖φ(t0)‖ + ‖∫_{t0}^{t} [A(s)φ(s) + B(s)] ds‖,

for all t ∈ J, where t0 ∈ J. Let

K = ‖φ(t0)‖ + (τ2 − τ1) max_{τ1≤t≤τ2} ‖B(t)‖,
L = max_{τ1≤t≤τ2} ‖A(t)‖.

Then, for t0, t ∈ J,

‖φ(t)‖ ≤ K + L ‖∫_{t0}^{t} φ(s) ds‖ ≤ K + L ∣∫_{t0}^{t} ‖φ(s)‖ ds∣.

Thus, using Gronwall's Lemma (Lemma A.7), the following estimate holds in J:

‖φ(t)‖ ≤ K e^{L|t−t0|} ≤ K e^{L(τ2−τ1)} < ∞.

This implies that case ii) in Corollary 3.3.4 is ruled out, leaving only the possibility for φ to be extendable over I, since the vector field in (4.2) is Lipschitz.

4.3 Linear systems of low order

We begin by considering some linear equations of order ≤ 2 or, equivalently, by virtue of Section 4.1, systems of 2 equations or fewer (of dimensionality less than or equal to 2). This will allow us to better understand systems of higher order/dimensionality.

4.3.1 First-order linear difference equation

Recall that we introduced in Chapter 1 the equation

x_{t+1} = a x_t.

We now consider the more general case of an equation with varying coefficient.

Proposition 4.3.1. Consider the first-order linear homogeneous difference equation defined for t = 0, 1, 2, . . . by

x_{t+1} = a_t x_t.

If an initial value x_0 is known and a_t ∈ R for all t, then the solution is unique and given by

x_t = [∏_{i=0}^{t−1} a_i] x_0.

Proof. Let us prove by mathematical induction that the proposition P_t holds ∀t ∈ N \ {0}, with

P_t : x_t = [∏_{i=0}^{t−1} a_i] x_0.

First, we consider P_1. We have

x_1 = a_0 x_0,


hence P_1 is true. Then assume that P_t is true, i.e., x_t = [∏_{i=0}^{t−1} a_i] x_0. Now express x_{t+1}:

x_{t+1} = a_t x_t
        = a_t [∏_{i=0}^{t−1} a_i] x_0   (by induction hypothesis)
        = [∏_{i=0}^{t} a_i] x_0,

so P_{t+1} is true. By the principle of mathematical induction (PMI), we conclude that

x_t = [∏_{i=0}^{t−1} a_i] x_0, ∀t ∈ N \ {0}.
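A quick numerical sanity check of Proposition 4.3.1 (an illustrative sketch; the coefficients below are arbitrary choices, not taken from the notes):

```python
# Check x_t = (prod_{i=0}^{t-1} a_i) x_0 against direct iteration of x_{t+1} = a_t x_t.
import math

def solve_iteratively(a, x0):
    """Iterate x_{t+1} = a_t x_t; return the trajectory x_0, ..., x_T."""
    xs = [x0]
    for at in a:
        xs.append(at * xs[-1])
    return xs

def solve_closed_form(a, x0, t):
    """Closed form of Proposition 4.3.1: x_t = (prod_{i=0}^{t-1} a_i) x_0."""
    return math.prod(a[:t]) * x0

a = [0.5, -2.0, 3.0, 1.5, 0.25]   # arbitrary varying coefficients
x0 = 4.0
xs = solve_iteratively(a, x0)
for t in range(len(a) + 1):
    assert abs(xs[t] - solve_closed_form(a, x0, t)) < 1e-12
```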

Proposition 4.3.2. Consider the first-order linear nonhomogeneous difference equation defined for t = 0, 1, 2, . . . by

x_{t+1} = a_t x_t + b_t.

If an initial value x_0 is known, then the solution is unique and is given by

x_t = [∏_{i=0}^{t−1} a_i] x_0 + b_{t−1} + ∑_{i=0}^{t−2} [∏_{r=i+1}^{t−1} a_r] b_i.

In particular,

• If x_{t+1} = a x_t + b_t, then

x_t = a^t x_0 + ∑_{i=0}^{t−1} a^{t−i−1} b_i.

• If x_{t+1} = a x_t + b, then

x_t = a^t x_0 + b [(a^t − 1)/(a − 1)]   if a ≠ 1,
x_t = x_0 + b t                         if a = 1.

Proof. Let us prove by mathematical induction that

P_t : x_t = [∏_{i=0}^{t−1} a_i] x_0 + b_{t−1} + ∑_{i=0}^{t−2} [∏_{r=i+1}^{t−1} a_r] b_i

holds true for all t ∈ N \ {0}. At rank t = 2: since x_1 = a_0 x_0 + b_0,

x_2 = a_1 x_1 + b_1 = a_1 a_0 x_0 + a_1 b_0 + b_1,

which is P_2. Now assume that P_t holds true, i.e.,

x_t = [∏_{i=0}^{t−1} a_i] x_0 + b_{t−1} + ∑_{i=0}^{t−2} [∏_{r=i+1}^{t−1} a_r] b_i


and express x_{t+1}:

x_{t+1} = a_t x_t + b_t
        = a_t {[∏_{i=0}^{t−1} a_i] x_0 + b_{t−1} + ∑_{i=0}^{t−2} [∏_{r=i+1}^{t−1} a_r] b_i} + b_t
        = [a_t ∏_{i=0}^{t−1} a_i] x_0 + a_t b_{t−1} + ∑_{i=0}^{t−2} [a_t ∏_{r=i+1}^{t−1} a_r] b_i + b_t
        = [∏_{i=0}^{t} a_i] x_0 + b_t + a_t b_{t−1} + ∑_{i=0}^{t−2} [∏_{r=i+1}^{t} a_r] b_i
        = [∏_{i=0}^{t} a_i] x_0 + b_t + ∑_{i=0}^{t−1} [∏_{r=i+1}^{t} a_r] b_i.

Thus P_{t+1} holds. By the principle of mathematical induction, we conclude that, for all t ∈ N \ {0},

x_t = [∏_{i=0}^{t−1} a_i] x_0 + b_{t−1} + ∑_{i=0}^{t−2} [∏_{r=i+1}^{t−1} a_r] b_i.
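The closed form of Proposition 4.3.2 can likewise be checked numerically (a sketch with arbitrarily chosen coefficients):

```python
# Check the nonhomogeneous closed form against iteration of x_{t+1} = a_t x_t + b_t.
import math

def closed_form(a, b, x0, t):
    """x_t = (prod_{i=0}^{t-1} a_i) x_0 + b_{t-1} + sum_{i=0}^{t-2} (prod_{r=i+1}^{t-1} a_r) b_i."""
    if t == 0:
        return x0
    val = math.prod(a[:t]) * x0 + b[t - 1]
    for i in range(t - 1):
        val += math.prod(a[i + 1:t]) * b[i]
    return val

a = [2.0, -1.0, 0.5, 3.0]   # arbitrary coefficients
b = [1.0, 4.0, -2.0, 0.5]
x0 = 1.0
x = x0
for t in range(1, 5):
    x = a[t - 1] * x + b[t - 1]   # iterate the recurrence
    assert abs(x - closed_form(a, b, x0, t)) < 1e-12
```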

4.3.2 Second-order linear difference equation with constant coefficients

Consider the second-order linear homogeneous equation with constant coefficients:

a_0 x_{t+2} + a_1 x_{t+1} + a_2 x_t = 0. (4.5)

To find two linearly independent solutions x¹_t and x²_t, assume that solutions take the form x_t = λ^t with λ ≠ 0. Substituting into (4.5),

a_0 λ^{t+2} + a_1 λ^{t+1} + a_2 λ^t = 0,

and dividing by λ^t,

a_0 λ² + a_1 λ + a_2 = 0.

The equation a_0 λ² + a_1 λ + a_2 = 0 is the characteristic equation of (4.5). The 2 roots of the characteristic equation, λ_1 and λ_2, are the eigenvalues.

The general solution is a linear combination of the 2 solutions x¹_t = λ_1^t and x²_t = λ_2^t. The form of the general solution depends on the eigenvalues, and there are 3 cases:

Eigenvalues are real and distinct: λ_1 ≠ λ_2. The general solution is

x_t = c_1 λ_1^t + c_2 λ_2^t,

with c_1 and c_2 arbitrary constants.

Eigenvalues are real and equal: λ_1 = λ_2. Then the 2 linearly independent solutions are x¹_t = λ_1^t and x²_t = t λ_1^t. The general solution is

x_t = c_1 λ_1^t + c_2 t λ_1^t,

with c_1 and c_2 arbitrary constants.

Eigenvalues are complex conjugates: λ_{1,2} = A ± iB = r(cos φ ± i sin φ), where r = √(A² + B²) and φ = arctan(B/A). The two linearly independent solutions are x¹_t = r^t cos(tφ) and x²_t = r^t sin(tφ). Then the general solution is

x_t = c_1 r^t cos(tφ) + c_2 r^t sin(tφ),

with c_1 and c_2 arbitrary constants.
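A standard instance of the real-distinct case is the Fibonacci recurrence x_{t+2} = x_{t+1} + x_t, whose characteristic equation λ² − λ − 1 = 0 has roots λ = (1 ± √5)/2. The sketch below fits c_1, c_2 to the initial values and compares with iteration:

```python
import math

l1 = (1 + math.sqrt(5)) / 2    # dominant root (golden ratio)
l2 = (1 - math.sqrt(5)) / 2
# Initial values x_0 = 0, x_1 = 1 give c1 + c2 = 0 and c1*l1 + c2*l2 = 1.
c1 = 1 / (l1 - l2)
c2 = -c1

def fib_closed(t):
    """General solution x_t = c1*l1^t + c2*l2^t fitted to x_0 = 0, x_1 = 1."""
    return c1 * l1**t + c2 * l2**t

xs = [0, 1]
for _ in range(18):
    xs.append(xs[-1] + xs[-2])
for t, x in enumerate(xs):
    assert abs(fib_closed(t) - x) < 1e-6
```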


4.4 Linear systems of difference equations

4.4.1 Higher-order linear equations

Definition 4.4.1. The functions x¹_t, x²_t, . . . , x^k_t are said to be linearly independent for t ≥ t0 whenever, if

a_1 x¹_t + a_2 x²_t + · · · + a_k x^k_t = 0

for all t ≥ t0, then we must have a_1 = a_2 = · · · = a_k = 0.

Definition 4.4.2. The Casoratian of k functions x¹_t, x²_t, . . . , x^k_t is

C(x¹_t, x²_t, . . . , x^k_t) = det
[ x¹_t        x²_t        . . .  x^k_t
  x¹_{t+1}    x²_{t+1}    . . .  x^k_{t+1}
  x¹_{t+2}    x²_{t+2}    . . .  x^k_{t+2}
  ⋮
  x¹_{t+k−1}  x²_{t+k−1}  . . .  x^k_{t+k−1} ]

Proposition 4.4.3. If the Casoratian of x¹_t, x²_t, . . . , x^k_t satisfies

C(x¹_t, x²_t, . . . , x^k_t) ≠ 0 ∀t,

then x¹_t, x²_t, . . . , x^k_t are k linearly independent functions.

Definition 4.4.4. A set of k linearly independent solutions of a kth-order linear homogeneous difference equation is called a fundamental set of solutions.

Proposition 4.4.5 (Principle of superposition). If x¹_t, x²_t, . . . , x^k_t are solutions of a kth-order linear homogeneous difference equation, then

c_1 x¹_t + c_2 x²_t + · · · + c_k x^k_t

is also a solution of the kth-order linear homogeneous difference equation.

Definition 4.4.6. Let {x¹_t, x²_t, . . . , x^k_t} be a fundamental set of solutions of a kth-order linear homogeneous difference equation. Then the general solution of the kth-order linear homogeneous difference equation is given by

x_t = ∑_{i=1}^{k} c_i x^i_t,

for arbitrary constants c_i, i = 1, . . . , k.


An mth-order linear homogeneous equation with constant coefficients is defined as

a_0 x_{t+m} + a_1 x_{t+m−1} + · · · + a_m x_t = 0. (4.6)

Solutions are composed of linear superpositions of m solutions of the form x_t = λ^t, λ ≠ 0, where the λ are obtained by finding the roots (eigenvalues) of the characteristic equation

a_0 λ^m + a_1 λ^{m−1} + · · · + a_m = 0.

The characteristic equation has m eigenvalues λ_i, i = 1, . . . , m.

If the eigenvalues are all real and distinct, the general solution takes the form

x_t = c_1 λ_1^t + · · · + c_m λ_m^t,

where c_i, i = 1, . . . , m, are arbitrary.

For the other cases, general solutions depend on the existence of repeated or complex conjugate eigenvalues. If there is a real eigenvalue λ_1 of multiplicity k, then k linearly independent solutions can be formed by multiplying by powers of t:

λ_1^t, t λ_1^t, t² λ_1^t, . . . , t^{k−1} λ_1^t.

If there are complex eigenvalues λ_{1,2} = r(cos φ ± i sin φ) of multiplicity k, then there are 2k linearly independent solutions:

r^t cos(tφ), r^t sin(tφ), t r^t cos(tφ), t r^t sin(tφ), . . . , t^{k−1} r^t cos(tφ), t^{k−1} r^t sin(tφ).

4.4.2 Nonhomogeneous equations

An mth-order linear nonhomogeneous equation is defined as

a_0 x_{t+m} + a_1 x_{t+m−1} + · · · + a_m x_t = b(t). (4.7)

Theorem 4.4.7. The general solution of (4.7) is

x_t = x^p_t + ∑_{i=1}^{m} c_i x^i_t,

where x^p_t is a particular solution of the nonhomogeneous equation, the c_i are arbitrary constants, and {x¹_t, x²_t, . . . , x^m_t} is a fundamental set of solutions of the mth-order homogeneous equation (4.6).

To find a particular solution of a nonhomogeneous equation, there exist several methods:

Method of undetermined coefficients: make a guess as to the form of the particular solution, and then substitute this function in the difference equation. This method works if the nonhomogeneous term b(t) is a linear combination or product of terms having one of the forms

a^t, cos(ct), sin(ct), t^k.

See Table 4.1.

Method of variation of constants


b(t)                                  x^p_t
a^t                                   c_1 a^t
t^k                                   c_0 + c_1 t + c_2 t² + · · · + c_k t^k
t^k a^t                               c_0 a^t + c_1 t a^t + c_2 t² a^t + · · · + c_k t^k a^t
sin(ct), cos(ct)                      c_1 sin(ct) + c_2 cos(ct)
a^t sin(ct), a^t cos(ct)              (c_1 sin(ct) + c_2 cos(ct)) a^t
a^t t^k sin(ct), a^t t^k cos(ct)      (d_0 + d_1 t + · · · + d_k t^k) sin(ct) a^t + (c_0 + c_1 t + · · · + c_k t^k) cos(ct) a^t

Table 4.1: Particular solutions
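As an illustration of the method of undetermined coefficients (a sketch on an arbitrarily chosen equation): for x_{t+1} = 2x_t + 3^t, the table suggests the guess x^p_t = c·3^t; substituting gives 3c = 2c + 1, i.e. c = 1, so the general solution is x_t = c_h 2^t + 3^t.

```python
def general_solution(t, c_h):
    """Homogeneous part c_h * 2^t plus the particular solution 3^t."""
    return c_h * 2**t + 3**t

x0 = 5
c_h = x0 - 1              # fit c_h to the initial value: c_h + 3^0 = x_0
x = x0
for t in range(1, 12):
    x = 2 * x + 3**(t - 1)    # iterate x_{t+1} = 2 x_t + 3^t
    assert x == general_solution(t, c_h)
```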

4.4.3 Qualitative analysis

What is the long-term behaviour of the solutions? For linear difference equations, the asymptotic behavior depends on the eigenvalues: whether they are real or complex, and their magnitude.

Definition 4.4.8. Magnitude of an eigenvalue:

• If λ = A is real, |λ| = |A|.

• If λ = A + iB is complex, |λ| = |A + iB| = √(A² + B²).

Definition 4.4.9. An eigenvalue λ_i such that

|λ_i| ≥ |λ_j|

for all j ≠ i is called the dominant eigenvalue. If the inequality is strict, then λ_i is a strictly dominant eigenvalue.

Let the general solution of (4.6) be

x_t = ∑_{i=1}^{m} c_i λ_i^t.

The limiting behavior of the general solution is determined by the behavior of the dominant solution (corresponding to the dominant eigenvalue). Let λ_1 be the strictly dominant eigenvalue (|λ_1| > |λ_j| for all j ≠ 1); then

x_t = λ_1^t [c_1 + ∑_{i=2}^{m} c_i (λ_i/λ_1)^t].

Since |λ_i/λ_1| < 1 for all i ≠ 1, (λ_i/λ_1)^t → 0 as t → +∞. Then

lim_{t→+∞} x_t = lim_{t→+∞} c_1 λ_1^t.

Depending on the value of λ_1, there are different situations.

• λ_1 real:

– λ_1 > 1 : lim_{t→+∞} c_1 λ_1^t = ∞ (monotonic divergence ⇒ unstable system)

– λ_1 = 1 : constant

– 0 < λ_1 < 1 : lim_{t→+∞} c_1 λ_1^t = 0 (monotonically decreasing to 0 ⇒ stable system)

– −1 < λ_1 < 0 : lim_{t→+∞} c_1 λ_1^t = 0 (oscillating around zero and converging to 0 ⇒ stable system)

– λ_1 = −1 : system oscillating between the two values c_1 and −c_1

– λ_1 < −1 : system oscillating but increasing in magnitude (unstable system)


• λ_1 complex:

– |λ_1| > 1 : system oscillates but increases in magnitude (unstable system)

– |λ_1| = 1 : system oscillates with constant magnitude

– |λ_1| < 1 : system oscillates but converges to 0 (stable system)

The magnitude of the eigenvalues determines whether solutions are bounded or unbounded; whether the eigenvalues are real or complex determines whether or not solutions oscillate.

In the case of a nonhomogeneous difference equation with a constant nonhomogeneous term, if the system converges, it will converge to the equilibrium point x* (not to 0 as previously).
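The dominance argument can be watched numerically (a sketch on an arbitrarily chosen recurrence): for x_{t+2} = 2.5x_{t+1} − x_t the characteristic roots are 2 and 0.5, so x_t/2^t should settle to the coefficient c_1 of the dominant mode.

```python
xs = [1.0, 1.0]
for _ in range(40):
    xs.append(2.5 * xs[-1] - xs[-2])     # x_{t+2} = 2.5 x_{t+1} - x_t
ratios = [xs[t] / 2.0**t for t in range(len(xs))]
# the scaled sequence converges: the subdominant mode (0.5/2)^t dies out
assert abs(ratios[-1] - ratios[-2]) < 1e-9
```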

A solution to a first-order linear difference system X(t + 1) = AX(t) + B is the superposition of two solutions: the general solution X_h to the homogeneous system X_h(t + 1) = AX_h(t) and a particular solution X_p to the nonhomogeneous system X_p(t + 1) = AX_p(t) + B. The general solution to the nonhomogeneous system is

X(t) = X_h(t) + X_p(t).

The homogeneous system has m linearly independent solutions; there are direct and indirect methods to find them.

Indirect methods use the fact that the solution can be expressed as X(t) = A^t X(0). In [?], methods to compute A^t are presented, from which the general solution can be obtained.

Direct method to solve X(t + 1) = AX(t), where A = (a_ij) is an m × m constant matrix: assume that a solution has the form X(t) = λ^t V, where V is a nonzero m-column vector and λ is a constant. Substituting λ^t V into the linear system gives

λ^{t+1} V = A λ^t V,

then

(A − λI)V = 0, (4.8)

where I is the m × m identity matrix and 0 is the zero vector. The zero solution V = 0 is the trivial solution, and (4.8) admits only this solution if det(A − λI) ≠ 0. Hence, nonzero solutions V are obtained if and only if (A − λI) is singular, i.e., if and only if

det(A − λI) = 0. (4.9)

Thus we need to consider the eigenvalues and eigenvectors of the matrix A.
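A sketch of the direct method on an arbitrarily chosen 2 × 2 system: expand X(0) in the eigenvectors of A; then X(t) = ∑_i c_i λ_i^t V_i.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])        # arbitrary example; eigenvalues 1 and 2
lam, V = np.linalg.eig(A)          # columns of V are eigenvectors
X0 = np.array([1.0, 0.0])
c = np.linalg.solve(V, X0)         # coordinates of X(0) in the eigenbasis

def X_closed(t):
    """X(t) = sum_i c_i * lam_i^t * V_i, assembled as a matrix-vector product."""
    return V @ (c * lam**t)

X = X0.copy()
for t in range(1, 10):
    X = A @ X                      # iterate X(t+1) = A X(t)
    assert np.allclose(X, X_closed(t))
```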

4.5 Linear systems of differential equations

We begin our study of linear systems of ordinary differential equations by considering homogeneous systems of the form (4.4) (linear systems), with x ∈ Rn and A ∈ Mn(R), the set of square matrices over the field R, A having continuous entries on an interval I.

4.5.1 The vector space of solutions

Theorem 4.5.1 (Superposition principle). Let S0 be the set of solutions of (4.4) that are defined on some interval I ⊂ R. Let φ1, φ2 ∈ S0, and λ1, λ2 ∈ R. Then λ1φ1 + λ2φ2 ∈ S0.

Proof. Let φ1, φ2 ∈ S0 be two solutions of (4.4), and λ1, λ2 ∈ R. Then for all t ∈ I,

φ1′ = A(t)φ1
φ2′ = A(t)φ2,

from which it comes that

d/dt (λ1φ1 + λ2φ2) = A(t)[λ1φ1 + λ2φ2],

implying that λ1φ1 + λ2φ2 ∈ S0.


Thus the linear combination of any two solutions of (4.4) is in S0. This is a hint that S0 must be a vector space of dimension n over K. To show this, we need to find a basis of S0. We proceed in the classical manner, with the notable difference from classical linear algebra that the basis is here composed of time-dependent functions.

Definition 4.5.2 (Fundamental set of solutions). A set of n solutions of the linear differential equation (4.4), all defined on the same open interval I, is called a fundamental set of solutions on I if the solutions are linearly independent functions on I.

Proposition 4.5.3. If A(t) is defined and continuous on the interval I, then the system (4.4) has a fundamental set of solutions defined on I.

Proof. Let t0 ∈ I, and let e1, . . . , en denote the canonical basis of Kn. Then, from Theorem 4.2.1, for each i = 1, . . . , n there exists a unique solution φi such that φi(t0) = ei. Furthermore, from Theorem 4.2.1, each function φi is defined on the interval I.

Assume that {φi}, i = 1, . . . , n, is linearly dependent. Then there exist αi ∈ R, i = 1, . . . , n, not all zero, such that ∑_{i=1}^{n} αi φi(t) = 0 for all t. In particular, this is true for t = t0, and thus ∑_{i=1}^{n} αi φi(t0) = ∑_{i=1}^{n} αi ei = 0, which implies that the canonical basis of Kn is linearly dependent. Hence a contradiction, and the φi are linearly independent.

Proposition 4.5.4. If F is a fundamental set of solutions of the linear system (4.4) on the open interval I, then every solution defined on I can be expressed as a linear combination of the elements of F.

Let t0 ∈ I. We consider the application

Φ_{t0} : S0 → Kn
x ↦ Φ_{t0}(x) = x(t0).

Lemma 4.5.5. Φ_{t0} is a linear isomorphism.

Proof. Φ_{t0} is bijective. Indeed, let v ∈ Kn; from Theorem 4.2.1, there exists a unique solution passing through (t0, v), i.e.,

∀v ∈ Kn, ∃! x ∈ S0, x(t0) = v ⇒ Φ_{t0}(x) = v,

so Φ_{t0} is surjective. That Φ_{t0} is injective follows from uniqueness of solutions to an ODE. Furthermore, Φ_{t0}(λ1x1 + λ2x2) = λ1Φ_{t0}(x1) + λ2Φ_{t0}(x2), so Φ_{t0} is linear. Therefore dim S0 = dim Kn = n.

4.5.2 Fundamental matrix solution

Definition 4.5.6. An n × n matrix function t ↦ Φ(t), defined on an open interval I, is called a matrix solution of the homogeneous linear system (4.4) if each of its columns is a (vector) solution. A matrix solution Φ is called a fundamental matrix solution if its columns form a fundamental set of solutions. If in addition Φ(t0) = I, a fundamental matrix solution is called the principal fundamental matrix solution.

An important property of fundamental matrix solutions is the following, known as Abel's formula.

Theorem 4.5.7 (Abel's formula). Let A(t) be continuous on I and Φ ∈ Mn(K) be such that Φ′(t) = A(t)Φ(t) on I. Then det Φ satisfies on I the differential equation

(det Φ)′ = (tr A)(det Φ),

or, in integral form, for t, τ ∈ I,

det Φ(t) = det Φ(τ) exp(∫_τ^t tr A(s) ds). (4.10)


Proof. Writing the differential equation Φ′(t) = A(t)Φ(t) in terms of the elements ϕ_ij and a_ij of, respectively, Φ and A,

ϕ′_ij(t) = ∑_{k=1}^{n} a_ik(t) ϕ_kj(t), (4.11)

for i, j = 1, . . . , n. Regarding det Φ as a function of the n rows of Φ, we see that (det Φ)′ is the sum of the n determinants obtained by differentiating, in turn, one row of Φ while leaving the others unchanged. Indeed, write det Φ(t) = Γ(r1, r2, . . . , rn), where ri is the ith row in Φ(t). Γ is then a linear function of each of its arguments, if all other rows are held constant, which implies that

d/dt det Φ(t) = Γ(d/dt r1, r2, . . . , rn) + Γ(r1, d/dt r2, . . . , rn) + · · · + Γ(r1, r2, . . . , d/dt rn).

(To show this, use the definition of the derivative as a limit.) Using (4.11) on the first of the n determinants in (det Φ)′, its first row becomes

(∑_k a_1k ϕ_k1, ∑_k a_1k ϕ_k2, . . . , ∑_k a_1k ϕ_kn).

Adding −a_12 times the second row, −a_13 times the third row, . . . , −a_1n times the nth row, to the first row, does not change the determinant and reduces the first row to

(a_11 ϕ_11, a_11 ϕ_12, . . . , a_11 ϕ_1n),

so that the first determinant equals a_11 det Φ. Repeating this for each of the terms in (det Φ)′, we obtain (det Φ)′ = (a_11 + a_22 + · · · + a_nn) det Φ, giving finally (det Φ)′ = (tr A)(det Φ). Note that this equation takes the form u′ − α(t)u = 0, which implies that

u exp(−∫_τ^t α(s) ds) = constant,

which in turn implies the integral form of the formula.

Remark – Consider (4.10). Suppose that τ ∈ I is such that det Φ(τ) ≠ 0. Then, since e^a ≠ 0 for any a, it follows that det Φ(t) ≠ 0 for all t ∈ I. In short, linear independence of solutions for one t ∈ I is equivalent to linear independence of solutions for all t ∈ I. As a consequence, the column vectors of a fundamental matrix are linearly independent at every t ∈ I. ◦
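For constant A, Φ(t) = e^{tA} is a fundamental matrix solution and (4.10) reduces to det Φ(t) = exp(t tr A). A numerical sketch (the power-series exponential below is illustrative, not a robust implementation):

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via its power series (illustrative only)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n        # term now holds M^n / n!
        out = out + term
    return out

A = np.array([[1.0, 2.0],
              [0.5, -0.5]])        # arbitrary matrix, tr A = 0.5
for t in (0.0, 0.3, 1.0, 2.0):
    Phi = expm_series(t * A)
    assert abs(np.linalg.det(Phi) - np.exp(t * np.trace(A))) < 1e-8
```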

Theorem 4.5.8. A solution matrix Φ of (4.4) is a fundamental solution matrix on I if, and only if, det Φ(t) ≠ 0 for all t ∈ I.


Proof. Let Φ be a fundamental matrix with column vectors φi, and suppose that φ is any nontrivial solution of (4.4). Then there exist c1, . . . , cn, not all zero, such that

φ = ∑_{j=1}^{n} c_j φ_j,

or, writing this equation in terms of Φ, φ = Φc, where c = (c1, . . . , cn)^T. At any point t0 ∈ I, this is a system of n linear equations with n unknowns c1, . . . , cn. This system has a unique solution for any choice of φ(t0). Thus det Φ(t0) ≠ 0, and by the remark above, det Φ(t) ≠ 0 for all t ∈ I.

Conversely, let Φ be a solution matrix of (4.4), and suppose that det Φ(t) ≠ 0 for t ∈ I. Then the column vectors are linearly independent at every t ∈ I.

From the remark above, the condition "det Φ(t) ≠ 0 for all t ∈ I" in Theorem 4.5.8 is equivalent to the condition "there exists t ∈ I such that det Φ(t) ≠ 0". A frequent candidate for this role is t0.

To conclude on fundamental solution matrices, remark that there are infinitely many of them for a given linear system. However, since each fundamental solution matrix can provide a basis for the vector space of solutions, it is clear that the fundamental matrices associated to a given problem must be linked. Indeed, we have the following result.

Theorem 4.5.9. Let Φ be a fundamental matrix solution to (4.4). Let C ∈ Mn(K) be a constant nonsingular matrix. Then ΦC is a fundamental matrix solution to (4.4). Conversely, if Ψ is another fundamental matrix solution to (4.4), then there exists a constant nonsingular C ∈ Mn(K) such that Ψ(t) = Φ(t)C for all t ∈ I.

Proof. Since Φ is a fundamental matrix solution to (4.4), we have

(ΦC)′ = Φ′C = (A(t)Φ)C = A(t)(ΦC),

and thus ΦC is a matrix solution to (4.4). Since Φ is a fundamental matrix solution to (4.4), Theorem 4.5.8 implies that det Φ ≠ 0. Also, since C is nonsingular, det C ≠ 0. Thus, det(ΦC) = det Φ det C ≠ 0, and by Theorem 4.5.8, ΦC is a fundamental matrix solution to (4.4).

Conversely, assume that Φ and Ψ are two fundamental matrix solutions. Since ΦΦ⁻¹ = I, taking the derivative of this expression gives Φ′Φ⁻¹ + Φ(Φ⁻¹)′ = 0, and therefore (Φ⁻¹)′ = −Φ⁻¹Φ′Φ⁻¹. We now consider the product Φ⁻¹Ψ. There holds

(Φ⁻¹Ψ)′ = (Φ⁻¹)′Ψ + Φ⁻¹Ψ′
        = −Φ⁻¹Φ′Φ⁻¹Ψ + Φ⁻¹A(t)Ψ
        = −Φ⁻¹A(t)ΦΦ⁻¹Ψ + Φ⁻¹A(t)Ψ
        = −Φ⁻¹A(t)Ψ + Φ⁻¹A(t)Ψ
        = 0.

Therefore, integrating (Φ⁻¹Ψ)′ gives Φ⁻¹Ψ = C, where C ∈ Mn(K) is a constant matrix. Thus, Ψ = ΦC. Furthermore, as Φ and Ψ are fundamental matrix solutions, det Φ ≠ 0 and det Ψ ≠ 0, and therefore det C ≠ 0.

Remark – Note that if Φ is a fundamental matrix solution to (4.4) and C ∈ Mn(K) is a constant nonsingular matrix, then it is not necessarily true that CΦ is a fundamental matrix solution to (4.4). See Exercise 2.3. ◦

4.5.3 Resolvent matrix

If t ↦ Φ(t) is a matrix solution of (4.4) on the interval I, then Φ′(t) = A(t)Φ(t) on I. By Proposition 4.5.3, there exists a fundamental matrix solution.


Definition 4.5.10 (Resolvent matrix). Let t0 ∈ I and Φ(t) be a fundamental matrix solution of (4.4) on I. Since the columns of Φ are linearly independent, it follows that Φ(t0) is invertible. The resolvent (or state transition matrix) of (4.4) is then defined as

R(t, t0) = Φ(t)Φ(t0)⁻¹.

It is evident that R(t, t0) is the principal fundamental matrix solution at t0 (since R(t0, t0) = Φ(t0)Φ(t0)⁻¹ = I). Thus system (4.4) has a principal fundamental matrix solution at each point in I.

Proposition 4.5.11. The resolvent matrix satisfies the Chapman-Kolmogorov identities

1) R(t, t) = I,

2) R(t, s)R(s, u) = R(t, u),

as well as the identities

3) R(t, s)⁻¹ = R(s, t),

4) ∂/∂s R(t, s) = −R(t, s)A(s),

5) ∂/∂t R(t, s) = A(t)R(t, s).

Proof. First, for the Chapman-Kolmogorov identities: 1) is R(t, t) = Φ(t)Φ⁻¹(t) = I. Also, 2) gives

R(t, s)R(s, u) = Φ(t)Φ⁻¹(s)Φ(s)Φ⁻¹(u) = Φ(t)Φ⁻¹(u) = R(t, u).

The other equalities are equally easy to establish. Indeed,

R(t, s)⁻¹ = (Φ(t)Φ⁻¹(s))⁻¹ = (Φ⁻¹(s))⁻¹ Φ(t)⁻¹ = Φ(s)Φ⁻¹(t) = R(s, t),

whence 3). Also,

∂/∂s R(t, s) = ∂/∂s (Φ(t)Φ⁻¹(s)) = Φ(t) (∂/∂s Φ⁻¹(s)).

As Φ is a fundamental matrix solution, Φ′ exists and Φ is nonsingular, and differentiating ΦΦ⁻¹ = I gives

∂/∂s (Φ(s)Φ⁻¹(s)) = 0
⇔ (∂/∂s Φ(s)) Φ⁻¹(s) + Φ(s) (∂/∂s Φ⁻¹(s)) = 0
⇔ Φ(s) (∂/∂s Φ⁻¹(s)) = −(∂/∂s Φ(s)) Φ⁻¹(s)
⇔ ∂/∂s Φ⁻¹(s) = −Φ⁻¹(s) (∂/∂s Φ(s)) Φ⁻¹(s).

Therefore,

∂/∂s R(t, s) = −Φ(t)Φ⁻¹(s) (∂/∂s Φ(s)) Φ⁻¹(s) = −R(t, s) (∂/∂s Φ(s)) Φ⁻¹(s).

Now, since Φ(s) is a fundamental matrix solution, it follows that ∂Φ(s)/∂s = A(s)Φ(s), and thus

∂/∂s R(t, s) = −R(t, s)A(s)Φ(s)Φ⁻¹(s) = −R(t, s)A(s),

giving 4). Finally,

∂/∂t R(t, s) = ∂/∂t (Φ(t)Φ⁻¹(s)) = A(t)Φ(t)Φ⁻¹(s) = A(t)R(t, s),

since Φ is a fundamental matrix solution, giving 5).


The role of the resolvent matrix is the following. Recall that, from Lemma 4.5.5, Φ_{t0} defined by

Φ_{t0} : S0 → Kn
x ↦ x(t0)

is a K-linear isomorphism from the space S0 to the space Kn. Then R(t, t0) is an application from Kn to Kn,

R(t, t0) : Kn → Kn
v ↦ R(t, t0)v = w,

such that

R(t, t0) = Φ_t ∘ Φ_{t0}⁻¹,

i.e.,

(R(t, t0)v = w) ⇔ (∃x ∈ S0, w = x(t), v = x(t0)).

Since Φ_t and Φ_{t0} are K-linear isomorphisms, R(t, t0) is a K-linear isomorphism on Kn. Thus R(t, t0) ∈ Mn(K) and is invertible.

Proposition 4.5.12. R(t, t0) is the only solution in Mn(K) of the initial value problem

d/dt M(t) = A(t)M(t), M(t0) = I,

with M(t) ∈ Mn(K).

Proof. Since d(R(t, t0)v)/dt = A(t)R(t, t0)v,

(d/dt R(t, t0)) v = (A(t)R(t, t0)) v,

for all v ∈ Rn. Therefore, R(t, t0) is a solution to M′ = A(t)M. But, by Theorem 4.2.1, we know the solution to the associated IVP to be unique, hence the result.

From this, the following theorem follows immediately.

Theorem 4.5.13. The solution to the IVP consisting of the linear homogeneous nonautonomous system (4.4) with initial condition x(t0) = x0 is given by

φ(t) = R(t, t0)x0.

4.5.4 Wronskian

Definition 4.5.14. The Wronskian of a system {x1, . . . , xn} of solutions to (4.4) is given by

W(t) = det(x1(t), . . . , xn(t)).

Let vi = xi(t0). Then we have

xi(t) = R(t, t0)vi,

and it follows that

W(t) = det(R(t, t0)v1, . . . , R(t, t0)vn) = det R(t, t0) det(v1, . . . , vn).


The following formulae hold:

∆(t, t0) := det R(t, t0) = exp(∫_{t0}^{t} tr A(s) ds), (4.12a)

W(t) = exp(∫_{t0}^{t} tr A(s) ds) det(v1, . . . , vn). (4.12b)

4.5.5 Autonomous linear systems

At this point, we know that solutions to (4.4) take the form φ(t) = R(t, t0)x0, but this was obtained formally. We have no indication whatsoever as to the precise form of R(t, t0). Typically, finding R(t, t0) can be difficult, if not impossible. There are however cases where the resolvent can be explicitly computed. One such case is for autonomous linear systems, which take the form

x′(t) = Ax(t), (4.13)

that is, where A(t) ≡ A. Our objective here is to establish the following result.

Lemma 4.5.15. If A(t) ≡ A, then R(t, t0) = e^{(t−t0)A} for all t, t0 ∈ I.

This result is deduced easily as a corollary to another result developed below, namely Theorem 4.5.16. Note that in Lemma 4.5.15, the notation e^{(t−t0)A} involves the notion of the exponential of a matrix, which is detailed in Appendix A.10.
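Taking Lemma 4.5.15 for granted, the Chapman-Kolmogorov identity R(t, s)R(s, u) = R(t, u) becomes e^{(t−s)A} e^{(s−u)A} = e^{(t−u)A}, which is easy to check numerically (a sketch; A is an arbitrary choice and the series exponential is illustrative):

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via its power series (illustrative only)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])        # rotation generator

def R(t, s):
    return expm_series((t - s) * A)   # resolvent for constant A

t, s, u = 1.7, 0.4, -0.9
assert np.allclose(R(t, s) @ R(s, u), R(t, u))    # identity 2)
assert np.allclose(R(t, s) @ R(s, t), np.eye(2))  # identity 3)
```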

Because the reasoning used in constructing solutions to (4.13) is fairly straightforward, we now detail this derivation. Using the intuition from one-dimensional linear equations, we seek λ ∈ K such that φ_λ(t) = e^{λt}v is a solution to (4.13), with v ∈ Kn \ {0}. We have

φ_λ′ = λe^{λt}v,

and thus φ_λ is a solution if, and only if,

λe^{λt}v = Ae^{λt}v = e^{λt}Av ⇔ λv = Av ⇔ (A − λI)v = 0 (with I the identity matrix).

Since we require v ≠ 0, the matrix A − λI must not be invertible, and so

φ_λ is a solution ⇔ det(A − λI) = 0,

i.e., λ is an eigenvalue of A.

In the simple case where A is diagonalizable, there exists a basis (v1, . . . , vn) of Kn, with v1, . . . , vn the eigenvectors of A corresponding to the eigenvalues λ1, . . . , λn. We then obtain n linearly independent solutions φ_{λi}(t) = e^{λi(t−t0)}vi, i = 1, . . . , n. The general solution is given by

φ(t) = (e^{λ1(t−t0)}x01, . . . , e^{λn(t−t0)}x0n),

where x0i is the ith component of x0 (expressed in the basis of eigenvectors), i = 1, . . . , n. In the general case, we need the notion of matrix exponentials. Defining the exponential of the matrix A as

e^A = ∑_{n=0}^{∞} Aⁿ/n!

(see Appendix A.10), we have the following result.


Theorem 4.5.16. The global solution φ on R of (4.13) such that φ(t0) = x0 is given by

φ(t) = e^{(t−t0)A}x0.

Proof. Assume φ(t) = e^{(t−t0)A}x0. Then φ(t0) = e^{0A}x0 = Ix0 = x0. Also,

φ(t) = (∑_{n=0}^{∞} (1/n!)(t − t0)ⁿAⁿ) x0 = ∑_{n=0}^{∞} (1/n!)(t − t0)ⁿAⁿx0,

so φ is a power series with infinite radius of convergence. Therefore, φ is differentiable on R and

φ′(t) = ∑_{n=1}^{∞} (1/n!) n (t − t0)^{n−1}Aⁿx0
      = ∑_{n=0}^{∞} (1/(n+1)!)(n+1)(t − t0)ⁿA^{n+1}x0
      = ∑_{n=0}^{∞} (1/n!)(t − t0)ⁿA^{n+1}x0
      = A (∑_{n=0}^{∞} (1/n!)(t − t0)ⁿAⁿx0)
      = Aφ(t),

so φ is a solution of (4.13). Since (4.13) is linear, solutions are unique and global.

The problem is now to evaluate the matrix e^{tA}. We have seen that in the case where A is diagonalizable, solutions take the form

φ(t) = (e^{λ1(t−t0)}x01, . . . , e^{λn(t−t0)}x0n),

which implies that, in this case, the matrix R(t, t0) takes the diagonal form

R(t, t0) = diag(e^{λ1(t−t0)}, e^{λ2(t−t0)}, . . . , e^{λn(t−t0)}).

In the general case, we need the notion of generalized eigenvectors.

Definition 4.5.17 (Generalized eigenvectors). Let λ be an eigenvalue of the n × n matrix A, with multiplicity m ≤ n. Then, for k = 1, . . . , m, any nonzero solution v of

(A − λI)^k v = 0

is called a generalized eigenvector of A.

Theorem 4.5.18. Let A be a real n × n matrix with real eigenvalues λ1, . . . , λn repeated according to their multiplicity. Then there exists a basis of generalized eigenvectors for Rn. And if {v1, . . . , vn} is any basis of generalized eigenvectors for Rn, the matrix P = [v1 · · · vn] is invertible,

A = D + N,

where

P⁻¹DP = diag(λj),

the matrix N = A − D is nilpotent of order k ≤ n, and D and N commute.
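Theorem 4.5.18 can be seen on the Jordan block A = [[l, 1], [0, l]] (a minimal sketch): D = lI, N = A − D is nilpotent of order 2, D and N commute, and, because they commute, e^{tA} = e^{tD}e^{tN} = e^{lt}(I + tN).

```python
import numpy as np

l, t = -0.5, 2.0
A = np.array([[l, 1.0],
              [0.0, l]])
D = l * np.eye(2)
N = A - D
assert np.allclose(N @ N, 0)        # N is nilpotent of order 2
assert np.allclose(D @ N, N @ D)    # D and N commute

# exp(tN) truncates to I + tN, so e^{tA} = e^{lt} (I + tN)
exp_tA = np.exp(l * t) * (np.eye(2) + t * N)

def expm_series(M, terms=60):
    """Matrix exponential via its power series (illustrative only)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

assert np.allclose(exp_tA, expm_series(t * A))
```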


4.6 Nonhomogeneous systems of ODEs

We consider the general (nonhomogeneous) problem (4.2), which we restate here for convenience. Let x ∈ Rn, A : I → L(E) and B : I → E, where I ⊂ R and E is a normed vector space; we consider the system

x′(t) = A(t)x(t) + B(t). (4.2)

4.6.1 The space of solutions

The first problem that we are faced with when considering system (4.2) is that the set of solutions does not constitute a vector space; in particular, the superposition principle does not hold. However, we have the following result.

Proposition 4.6.1. Let x1, x2 be two solutions of (4.2). Then x1 − x2 is a solution of the associated homogeneous equation (4.4).

Proof. Since x1 and x2 are solutions of (4.2),

x1′ = A(t)x1 + B(t)
x2′ = A(t)x2 + B(t).

Therefore

d/dt (x1 − x2) = A(t)(x1 − x2).

Theorem 4.6.2. The global solutions of (4.2) that are defined on I form an n-dimensional affine subspace of the vector space of maps from I to Kn.

Theorem 4.6.3. Let V be the vector space over R of solutions to the linear system x′ = A(t)x. If ψ is a particular solution of the nonhomogeneous system (4.2), then the set of all solutions of (4.2) is precisely

{φ + ψ, φ ∈ V}.

Practical rules:

1. To obtain all solutions of (4.2), all solutions of (4.4) must be added to a particular solution of (4.2).

2. To obtain all solutions of (4.4), it is sufficient to know a basis of S0. Such a basis is called a fundamental system of solutions of (4.4).

4.6.2 Construction of solutions

We have the following variation of constants formula.

Theorem 4.6.4. Let R(t, t0) be the resolvent of the homogeneous equation x′ = A(t)x associated to (4.2). Then the solution x of (4.2) with x(t0) = x0 is given by

x(t) = R(t, t0)x0 + ∫_{t0}^{t} R(t, s)B(s) ds. (4.14)

Proof. Let R(t, t0) be the resolvent of x′ = A(t)x. Any solution of the latter equation is given by

x(t) = R(t, t0)v, v ∈ Rn.

Let us now seek a particular solution to (4.2) of the form x(t) = R(t, t0)v(t), i.e., using a variation of constants approach. Taking the derivative of this expression for x, we have

x′(t) = d/dt [R(t, t0)] v(t) + R(t, t0)v′(t)
      = A(t)R(t, t0)v(t) + R(t, t0)v′(t).


Thus x is a solution to (4.2) if

A(t)R(t, t0)v(t) + R(t, t0)v′(t) = A(t)R(t, t0)v(t) + B(t)
⇔ R(t, t0)v′(t) = B(t)
⇔ v′(t) = R(t0, t)B(t),

since R(t, s)^{-1} = R(s, t). Therefore, v(t) = ∫_{t0}^t R(t0, s)B(s) ds. A particular solution is given by

x(t) = R(t, t0) ∫_{t0}^t R(t0, s)B(s) ds
     = ∫_{t0}^t R(t, t0)R(t0, s)B(s) ds
     = ∫_{t0}^t R(t, s)B(s) ds.

Consider the nonhomogeneous equation (4.2) with the matrix A(t) ≡ A.

Theorem 4.6.5. The general solution to the IVP

x′(t) = Ax(t) + B(t)
x(t0) = x0
(4.15)

is given by

x(t) = e^{(t−t0)A} x0 + ∫_{t0}^t e^{(t−s)A} B(s) ds. (4.16)

Proof. Use Lemma 4.5.15 and the variation of constants formula (4.14).
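Formula (4.16) lends itself to a direct numerical check. The sketch below is not part of the original notes; it assumes NumPy and SciPy are available, and the matrix A, the forcing B and the initial data are arbitrary choices made for illustration. It compares the closed-form expression with a direct numerical integration of the ODE.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

# Arbitrary illustrative data: a constant matrix A and a continuous forcing B(t)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = lambda t: np.array([np.sin(t), 1.0])
x0 = np.array([1.0, 0.0])
t0, t1 = 0.0, 2.0

# Closed form (4.16): x(t) = e^{(t-t0)A} x0 + int_{t0}^{t} e^{(t-s)A} B(s) ds
integral, _ = quad_vec(lambda s: expm((t1 - s) * A) @ B(s), t0, t1)
x_formula = expm((t1 - t0) * A) @ x0 + integral

# Direct numerical integration of x' = Ax + B(t), x(t0) = x0
sol = solve_ivp(lambda t, x: A @ x + B(t), (t0, t1), x0, rtol=1e-10, atol=1e-12)
x_numeric = sol.y[:, -1]

assert np.allclose(x_formula, x_numeric, atol=1e-6)
```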

4.6.3 A variation of constants formula for a nonlinear system with a linear component

The variation of constants formula given in Theorem 4.6.4 can be extended.

Theorem 4.6.6 (Variation of constants formula). Consider the IVP

x′ = A(t)x+ g(t, x) (4.17a)

x(t0) = x0, (4.17b)

where g : R × Rn → Rn is a smooth function, and let R(t, t0) be the resolvent associated to the homogeneous system x′ = A(t)x, with R defined on some interval I ∋ t0. Then the solution φ of (4.17) is given by

φ(t) = R(t, t0)x0 + ∫_{t0}^t R(t, s)g(s, φ(s)) ds, (4.18)

on some subinterval of I.

Proof. We proceed using a variation of constants approach. It is known that the general solution to the homogeneous equation x′ = A(t)x associated to (4.17) is given by

φ(t) = R(t, t0)x0.


We seek a solution to (4.17) by assuming that φ(t) = R(t, t0)v(t). We have

φ′(t) = (d/dt R(t, t0)) v(t) + R(t, t0)v′(t)
      = A(t)R(t, t0)v(t) + R(t, t0)v′(t),

from Proposition 4.5.11. For φ to be a solution, it must satisfy the differential equation (4.17), and thus

φ′(t) = A(t)φ(t) + g(t, φ(t))
⇔ A(t)R(t, t0)v(t) + R(t, t0)v′(t) = A(t)R(t, t0)v(t) + g(t, φ(t))
⇔ R(t, t0)v′(t) = g(t, φ(t))
⇔ v′(t) = R(t, t0)^{-1} g(t, φ(t))
⇔ v′(t) = R(t0, t)g(t, φ(t))
⇔ v(t) = ∫_{t0}^t R(t0, s)g(s, φ(s)) ds + C,

using Proposition 4.5.11 again. Therefore,

φ(t) = R(t, t0) ( ∫_{t0}^t R(t0, s)g(s, φ(s)) ds + C ).

Evaluating this expression at t = t0 gives φ(t0) = C, so C = x0. Therefore,

φ(t) = R(t, t0)x0 + R(t, t0) ∫_{t0}^t R(t0, s)g(s, φ(s)) ds
     = R(t, t0)x0 + ∫_{t0}^t R(t, t0)R(t0, s)g(s, φ(s)) ds
     = R(t, t0)x0 + ∫_{t0}^t R(t, s)g(s, φ(s)) ds,

from Proposition 4.5.11.
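As a sanity check of (4.18) in the simplest setting, the sketch below (added here, not in the original notes; it assumes NumPy and SciPy) takes the scalar case A(t) ≡ −1 and g(t, x) = x², whose solution through x(0) = 1/2 is φ(t) = 1/(1 + e^t). The resolvent is then R(t, s) = e^{−(t−s)}, and φ indeed satisfies the integral equation:

```python
import numpy as np
from scipy.integrate import quad

# Scalar instance of (4.18): x' = -x + x^2, x(0) = 1/2, so A = -1, g(t, x) = x^2,
# R(t, s) = e^{-(t-s)}, and the exact solution is phi(t) = 1/(1 + e^t).
phi = lambda t: 1.0 / (1.0 + np.exp(t))
x0, t = 0.5, 1.2

# Right-hand side of (4.18): R(t, 0) x0 + int_0^t R(t, s) g(s, phi(s)) ds
integral, _ = quad(lambda s: np.exp(-(t - s)) * phi(s) ** 2, 0.0, t)
rhs = np.exp(-t) * x0 + integral

assert abs(rhs - phi(t)) < 1e-7
```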

4.7 Linear systems of ODEs with periodic coefficients

4.7.1 Linear systems: Floquet theory

We consider the linear system (4.4) in the following case:

x′ = A(t)x,
A(t + ω) = A(t), ∀t,
(4.19)

with the entries of A(t) continuous on R.

Definition 4.7.1 (Monodromy operator). Associated to system (4.19) is the resolvent R(t, s). For all s ∈ R, the operator

C(s) := R(s + ω, s)

is called the monodromy operator.

Theorem 4.7.2. If X(t) is a fundamental matrix for (4.19), then there exists a nonsingular constant matrix V such that, for all t,

X(t + ω) = X(t)V.

This matrix takes the form

V = X^{-1}(0)X(ω),

and is called the monodromy matrix.


Proof. Since X is a fundamental matrix solution, there holds that X′(t) = A(t)X(t) for all t. Therefore X′(t + ω) = A(t + ω)X(t + ω), and by periodicity of A(t), X′(t + ω) = A(t)X(t + ω), which implies that X(t + ω) is a fundamental matrix of (4.19). As a consequence, by Theorem 4.5.9, there exists a matrix V such that X(t + ω) = X(t)V.

Evaluating at t = 0 gives X(ω) = X(0)V, whence V = X^{-1}(0)X(ω).

Theorem 4.7.3 (Floquet’s theorem, complex case). Any fundamental matrix solution Φ of (4.19) takes the form

Φ(t) = P(t)e^{tB}, (4.20)

where P(t) and B are n × n complex matrices such that

i) P(t) is invertible, continuous, and periodic of period ω in t,

ii) B is a constant matrix such that e^{ωB} = Φ^{-1}(0)Φ(ω).

Proof. Let Φ be a fundamental matrix solution. From Theorem 4.7.2, the monodromy matrix V = Φ^{-1}(0)Φ(ω) is such that Φ(t + ω) = Φ(t)V. By Theorem A.11.1, there exists B ∈ Mn(C) such that e^{ωB} = V. Let P(t) = Φ(t)e^{-tB}, so Φ(t) = P(t)e^{tB}. It is clear that P is continuous and nonsingular. Also,

P(t + ω) = Φ(t + ω)e^{-(t+ω)B}
         = Φ(t)V e^{-(ω+t)B}
         = Φ(t)e^{ωB}e^{-ωB}e^{-tB}
         = Φ(t)e^{-tB}
         = P(t),

proving that P is ω-periodic.

Theorem 4.7.4 (Floquet’s theorem, real case). Any fundamental matrix solution Φ of (4.19) takes the form

Φ(t) = P(t)e^{tB}, (4.21)

where P(t) and B are n × n real matrices such that

i) P(t) is invertible, continuous, and periodic of period 2ω in t,

ii) B is a constant matrix such that e^{2ωB} = V², where V is the monodromy matrix.

Proof. The proof works similarly to the complex case, except that here, Theorem A.11.1 implies that there exists B ∈ Mn(R) such that e^{2ωB} = V². Let P(t) = Φ(t)e^{-tB}, so Φ(t) = P(t)e^{tB}. It is clear that P is continuous and nonsingular. Also,

P(t + 2ω) = Φ(t + 2ω)e^{-(t+2ω)B}
          = Φ(t + ω)V e^{-(t+2ω)B}
          = Φ(t)V² e^{-(t+2ω)B}
          = Φ(t)e^{2ωB}e^{-2ωB}e^{-tB}
          = Φ(t)e^{-tB}
          = P(t),

proving that P is 2ω-periodic.

See [14, p. 87-90], [6, p. 162-179].


Theorem 4.7.5 (Floquet’s theorem, [6]). If Φ(t) is a fundamental matrix solution of the ω-periodic system (4.19), then, for all t ∈ R,

Φ(t + ω) = Φ(t)Φ^{-1}(0)Φ(ω).

In addition, for each possibly complex matrix B such that

e^{ωB} = Φ^{-1}(0)Φ(ω),

there is a possibly complex ω-periodic matrix function t ↦ P(t) such that Φ(t) = P(t)e^{tB} for all t ∈ R. Also, there is a real matrix R and a real 2ω-periodic matrix function t ↦ Q(t) such that Φ(t) = Q(t)e^{tR} for all t ∈ R.

Definition 4.7.6 (Floquet normal form). The representation Φ(t) = P(t)e^{tB} is called a Floquet normal form.

In the case where Φ(t) = P(t)e^{tB}, differentiating Φ′ = A(t)Φ gives dP(t)/dt = A(t)P(t) − P(t)B. Therefore, letting x = P(t)z, we obtain

x′ = (dP(t)/dt) z + P(t)z′ = A(t)P(t)z − P(t)Bz + P(t)z′.

Since x′ = A(t)x = A(t)P(t)z, it follows that P(t)z′ = P(t)Bz, i.e., z′ = Bz: the periodic change of variables x = P(t)z reduces (4.19) to a linear system with constant coefficients.

Definition 4.7.7 (Characteristic multipliers). The eigenvalues λ1, . . . , λn of a monodromy matrix are called the characteristic multipliers of equation (4.19).

Definition 4.7.8 (Characteristic exponents). Numbers µ such that e^{µω} is a characteristic multiplier of (4.19) are called the Floquet exponents of (4.19).

Theorem 4.7.9 (Spectral mapping theorem). Let K = R or C. If C ∈ GLn(K) is written C = e^B, then the eigenvalues of C coincide with the exponentials of the eigenvalues of B, with the same multiplicities.

Definition 4.7.10 (Characteristic exponents). In the real case, the eigenvalues λ1, . . . , λn of the matrix B are called the characteristic exponents of equation (4.19). The numbers ρ1 = e^{2ωλ1}, . . . , ρn = e^{2ωλn}, the eigenvalues of the matrix Φ(ω)², are called the (Floquet) multipliers of (4.19).

Proposition 4.7.11. Suppose that X, Y are fundamental matrices for (4.19) and that X(t + ω) = X(t)V, Y(t + ω) = Y(t)U. Then the monodromy matrices U and V are similar.

Proof. Suppose that X(t + ω) = X(t)V and Y(t + ω) = Y(t)U. By Theorem 4.5.9, since X and Y are fundamental matrices for (4.19), there exists an invertible matrix C such that X(t) = Y(t)C for all t. Thus, in particular,

X(t + ω) = Y(t + ω)C = Y(t)UC = X(t)C^{-1}UC,

since Y(t) = X(t)C^{-1}. It follows that V = C^{-1}UC, so U and V are similar.

From this Proposition, it follows that monodromy matrices share the same spectrum.

Corollary 4.7.12. All solutions of (4.19) tend to 0 as t → ∞ if and only if |ρj| < 1 for all j (or, equivalently, ℜ(λj) < 0 for all j).

Let p be an eigenvector of Φ(ω)² associated with a multiplier ρ. Then the solution φ(t) = Φ(t)p of (4.19) satisfies the condition φ(t + 2ω) = ρφ(t). This is the origin of the term multiplier.
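The multipliers can be computed numerically by integrating the matrix equation X′ = A(t)X over one period and taking the eigenvalues of the resulting monodromy matrix. A minimal sketch (added here, not in the original notes; it assumes NumPy and SciPy) uses A(t) = f(t)M with a constant diagonal M, for which the multipliers are known in closed form because A(t) commutes with its integral:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Periodic system x' = A(t)x with A(t) = f(t) M, f of period omega = 2*pi.
# Since A(t) commutes with its integral, V = exp((int_0^omega f) M) exactly.
M = np.diag([-1.0, -2.0])
f = lambda t: 0.5 + np.cos(t)
omega = 2 * np.pi

# Integrate the matrix IVP X' = A(t)X, X(0) = I, over one period
rhs = lambda t, X: (f(t) * M @ X.reshape(2, 2)).ravel()
sol = solve_ivp(rhs, (0.0, omega), np.eye(2).ravel(), rtol=1e-11, atol=1e-13)
V = sol.y[:, -1].reshape(2, 2)                  # monodromy matrix

multipliers = np.sort(np.linalg.eigvals(V).real)
# int_0^{2pi} f = pi, so the exact multipliers are exp(pi * eigenvalues of M)
expected = np.sort(np.exp(np.pi * np.diag(M)))
assert np.allclose(multipliers, expected, rtol=1e-6)
```

Here both multipliers have modulus less than 1, so all solutions decay, consistent with Corollary 4.7.12.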

4.7.2 Nonhomogeneous systems: the Fredholm alternative

We discuss here an extension of a theorem that was proved implicitly in Exercise 4, Assignment 2. Let us start by stating the result in question. We consider here the system

x′ = A(t)x + b(t), (4.22)

where x ∈ Rn, A(t) ∈ Mn(R) and b(t) ∈ Rn, with A and b continuous and ω-periodic.


Theorem 4.7.13. If the homogeneous equation

x′ = A(t)x (4.23)

associated to (4.22) has no nonzero solution of period ω, then (4.22) has, for each continuous ω-periodic function b, a unique ω-periodic solution.

The Fredholm alternative concerns the case where there exists a nonzero periodic solution of (4.23). We give some needed results before going into details. Consider (4.23). Associated to this system is the so-called adjoint system, defined by the differential equation

y′ = −A^T(t)y. (4.24)

Proposition 4.7.14. The adjoint equation has the following properties.

i) Let R(t, t0) be the resolvent matrix of (4.23). Then the resolvent matrix of (4.24) is R^T(t0, t).

ii) There are as many independent periodic solutions of (4.23) as there are of (4.24).

iii) If x is a solution of (4.23) and y is a solution of (4.24), then the scalar product 〈x(t), y(t)〉 is constant.

Proof. i) We know that ∂/∂s R(t, s) = −R(t, s)A(s). Taking transposes, ∂/∂s R^T(t, s) = −A^T(s)R^T(t, s), and R^T(t, t) = I, so s ↦ R^T(t, s) is the matrix solution of (4.24) taking the value I at s = t. The first point is proved.

ii) The solution of (4.24) with initial value y0 is R^T(0, t)y0. The initial value of a periodic solution of (4.24) is a y0 such that

R^T(0, ω)y0 = y0.

This can also be written as

[R^T(0, ω) − I] y0 = 0,

or, taking the transpose,

y0^T [R(0, ω) − I] = 0.

Now, since R(0, ω) = R^{-1}(ω, 0), we have R(0, ω) − I = −R(0, ω)[R(ω, 0) − I], and R(0, ω) is invertible, so the rank of R(0, ω) − I equals that of R(ω, 0) − I. The initial values of the periodic solutions of (4.23) are the solutions of [R(ω, 0) − I]x0 = 0, so the two systems of equations have solution spaces of the same dimension, proving the second point.

iii) Recall that for differentiable functions a, b,

d/dt ⟨a(t), b(t)⟩ = ⟨(d/dt)a(t), b(t)⟩ + ⟨a(t), (d/dt)b(t)⟩.

Thus

d/dt ⟨x(t), y(t)⟩ = ⟨A(t)x(t), y(t)⟩ + ⟨x(t), −A^T(t)y(t)⟩ = 0.

Before we carry on to the actual Fredholm alternative in the context of ordinary differential equations, let us consider the problem in a more general setting. Let H be a Hilbert space. If A ∈ L(H, H), the adjoint operator A∗ of A is the element of L(H, H) such that

⟨Au, v⟩ = ⟨u, A∗v⟩ for all u, v ∈ H.

Let Img(A) be the image of A and Ker(A∗) the kernel of A∗. Then we have H = Img(A) ⊕ Ker(A∗).

Theorem 4.7.15 (Fredholm alternative). For the equation Af = g to have a solution, it is necessary and sufficient that g be orthogonal to every element of Ker(A∗).

We now use this very general setting to prove the following theorem, in the context of ODEs.


Theorem 4.7.16 (Fredholm alternative for ODEs). Consider (4.22) with A and b continuous and ω-periodic. Suppose that the homogeneous equation (4.23) has p independent solutions of period ω. Then the adjoint equation (4.24) also has p independent solutions of period ω, which we denote y1, . . . , yp. Then

i) If

∫_0^ω ⟨yk(t), b(t)⟩ dt = 0, k = 1, . . . , p, (4.25)

then there exist p independent solutions of (4.22) of period ω, and,

ii) if this condition is not fulfilled, (4.22) has no solution of period ω.

Proof. First, remark that x0 is the initial condition of a periodic solution of (4.22) if, and only if,

[R(0, ω) − I] x0 = ∫_0^ω R(0, s)b(s) ds. (4.26)

By Theorem 4.6.4, the solution of (4.22) through (0, x0) is given by

x(t) = R(t, 0)x0 + ∫_0^t R(t, s)b(s) ds.

Hence, at time ω,

x(ω) = R(ω, 0)x0 + ∫_0^ω R(ω, s)b(s) ds.

If x0 is the initial condition of an ω-periodic solution, then x(ω) = x0, and so

x0 − R(ω, 0)x0 = ∫_0^ω R(ω, s)b(s) ds;

multiplying both sides by R(0, ω) gives (4.26).

On the other hand, yk(0) is the initial condition of an ω-periodic solution yk if, and only if,

[R^T(0, ω) − I] yk(0) = 0.

Let C = R(0, ω) − I. We have that Rn = Img(C) ⊕ Ker(C^T). We now use the Fredholm alternative in this context. There exists x0 such that

Cx0 = ∫_0^ω R(0, s)b(s) ds

if, and only if,

∫_0^ω R(0, s)b(s) ds ∈ Img(C).

Indeed, from the Fredholm alternative, setting f = x0 and g = ∫_0^ω R(0, s)b(s) ds, we have that Cf = g has a solution if, and only if, g is orthogonal to every element of Ker(C^T), i.e., since Rn = Img(C) ⊕ Ker(C^T), if, and only if, g ∈ Img(C).

Now, y1(0), . . . , yp(0) is a basis of Ker(C^T). It follows that there exists a solution of (4.22) if, and only if,

∀k = 1, . . . , p, ⟨∫_0^ω R(0, s)b(s) ds, yk(0)⟩ = 0
⇔ ∀k = 1, . . . , p, ∫_0^ω ⟨R(0, s)b(s), yk(0)⟩ ds = 0
⇔ ∀k = 1, . . . , p, ∫_0^ω ⟨b(s), R^T(0, s)yk(0)⟩ ds = 0
⇔ ∀k = 1, . . . , p, ∫_0^ω ⟨b(s), yk(s)⟩ ds = 0.


If these relations are satisfied, the set of vectors v such that

Cv = ∫_0^ω R(0, s)b(s) ds

is of the form v0 + Ker(C), where v0 is one of these vectors; hence there exist p independent such vectors, which are the initial conditions of the p independent ω-periodic solutions of (4.22).

Example – The equation

x′′ = f(t), (4.27)

where f is ω-periodic, has solutions of period ω if, and only if,

∫_0^ω f(s) ds = 0.

Let y = x′. Then, differentiating and substituting into (4.27), we have

y′ = f(t).

Hence the system is

(x, y)′ = A (x, y)^T + (0, f(t))^T, with A = (0 1; 0 0).

Hence

A^T = (0 0; 1 0),

and the adjoint equation ξ′ = −A^T ξ has the periodic (constant) solutions (0, a)^T. Condition (4.25) then reads a ∫_0^ω f(s) ds = 0 for all a, i.e., ∫_0^ω f(s) ds = 0. □
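The example can also be checked numerically through condition (4.26). The sketch below (added here, not in the original notes; it assumes NumPy and SciPy, and the two forcings are arbitrary test choices) uses the explicit resolvent R(0, s) = (1 −s; 0 1) of the system above:

```python
import numpy as np
from scipy.integrate import quad

# x'' = f(t) as the first-order system (x, y)' = A (x, y)^T + (0, f(t))^T,
# with A = [[0, 1], [0, 0]] and resolvent R(0, s) = [[1, -s], [0, 1]].
omega = 2 * np.pi

def has_periodic_solution(f):
    # Right-hand side of (4.26): g = int_0^omega R(0, s) b(s) ds, b(s) = (0, f(s))
    g = np.array([quad(lambda s: -s * f(s), 0, omega)[0],
                  quad(lambda s: f(s), 0, omega)[0]])
    # Ker(C^T) for C = R(0, omega) - I = [[0, -omega], [0, 0]] is spanned by
    # (0, 1), so (4.26) is solvable iff the second component of g vanishes,
    # i.e. iff int_0^omega f = 0 -- exactly the condition of the example.
    return abs(g[1]) < 1e-8

assert has_periodic_solution(np.cos)                       # mean-zero forcing
assert not has_periodic_solution(lambda s: 1 + np.cos(s))  # nonzero mean
```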

4.8 Further developments, bibliographical notes


4.9 Exercises and problems

Exercise 4.9.1. Let A be a constant n× n matrix.

i) Show that ‖e^A‖ ≤ e^{‖A‖}.

ii) Show that det e^A = e^{tr A}.

iii) How should α be chosen so that lim_{t→∞} e^{−αt}e^{At} = 0?

Exercise 4.9.2. Let X(t) be a fundamental matrix for the system x′ = A(t)x, where A(t) is an n × n matrix with continuous entries on R. What conditions on A(t) and C guarantee that CX(t) is a fundamental matrix, where C is a constant matrix?

Exercise 4.9.3. Compute the solution of the differential equation

x′(t) = x(t) − y(t) − t²
y′(t) = x(t) + 3y(t) + 2t.
(4.28)

Problem 4.9.4. Consider the initial value problem

x′(t) = A(t)x(t)
x(t0) = x0.
(4.29)

We have seen that the solution of this initial value problem is given by

x(t) = R(t, t0)x0

where R(t, t0) is the resolvent matrix of the system. Suppose that we are in the case where the following condition holds:

∀t, s ∈ I, A(t)A(s) = A(s)A(t), (4.30)

with I ⊂ R.

i) Show that M(t) = exp(∫_{t0}^t A(s) ds) is a solution of the matrix initial value problem

M′(t) = A(t)M(t)
M(t0) = In,

where In is the n × n identity matrix. [Hint: use the formal definition of a derivative, i.e., M′(t) = lim_{h→0}(M(t + h) − M(t))/h.]

ii) Deduce that, when (4.30) holds,

R(t, t0) = exp(∫_{t0}^t A(s) ds).

iii) Deduce the following theorem.

Theorem 4.9.1. Let U, V be constant matrices that commute, and suppose that A(t) = f(t)U + g(t)V for scalar functions f, g. Then

R(t, t0) = exp(∫_{t0}^t f(s) ds · U) exp(∫_{t0}^t g(s) ds · V). (4.31)

Exercise 4.9.5. Find the resolvent matrix associated to the matrix

A(t) = (a(t) −b(t); b(t) a(t)), (4.32)

where a, b are continuous functions on R.


Exercise 4.9.6. Consider the linear system

x′ = (1/t)x + ty
y′ = y
(4.33)

with initial condition x(t0) = x0, y(t0) = y0.

i) Solve the initial value problem (4.33).

ii) Deduce the formula for the principal solution matrix R(t, t0).

iii) Show that in this case,

R(t, t0) ≠ exp(∫_{t0}^t A(s) ds)

with

A(t) = (1/t t; 0 1).

)Exercise 4.9.7. Consider the system

x′ = A(t)x (4.34)

where A(t) is a continuous n × n matrix on R, and A(t + ω) = A(t), ω > 0.

i) Show that P(ω), the set of ω-periodic solutions of (4.34), is a vector space.

ii) Let f be a continuous n × 1 matrix function on R with f(t + ω) = f(t). Show that, for the system

x′ = A(t)x+ f(t) (4.35)

the following conditions are equivalent.

a) System (4.35) has a unique ω-periodic solution,

b) [X^{-1}(ω) − X^{-1}(0)] is nonsingular,

c) dimP(ω) = 0.

Exercise 4.9.8. Compute A_i^n and e^{tA_i} for the following matrices.

A1 = (0 1; 1 0), A2 = (1 1; 0 1), A3 = (0 1 −sin(θ); −1 0 cos(θ); −sin(θ) cos(θ) 0).

Exercise 4.9.9. Compute e^{tA} for the matrix

A = (1 1 0; −1 0 −1; 0 −1 1).

Exercise 4.9.10. Let A ∈ Mn(R) be a matrix (independent of t), ‖·‖ be a norm on Rn and |||·||| the associated operator norm on Mn(R).

i) a) Show that for all t ∈ R and all k ∈ N∗, there exists Ck(t) ≥ 0 such that

|||e^{(t/k)A} − (I + (t/k)A)||| ≤ (1/k²) Ck(t),

with lim_{k→∞} Ck(t) = (t²/2) |||A²|||.


b) Show that for all t ∈ R and all k ∈ N∗,

|||I + (t/k)A||| ≤ e^{(|t|/k)|||A|||}.

c) Deduce that

e^{tA} = lim_{k→∞} (I + (t/k)A)^k.

ii) Suppose now that A is symmetric and that its eigenvalues are > −α, with α > 0.

a) Show by induction that, for k ≥ 0,

(αI + A)^{−(k+1)} = ∫_0^∞ e^{−t(αI+A)} (t^k / k!) dt.

b) Deduce that for all u > 0,

|||(I + uA)^{−k}||| ≤ M if, and only if, |||e^{−tA}||| ≤ M, ∀t > 0.

c) Show that

(∀t > 0, e^{−tA} ≥ 0) ⇔ (∃λ0, ∀λ > λ0, (λI + A)^{−1} ≥ 0),

where by convention, for B = (bij) ∈ Mn(R), writing that B ≥ 0 means that bij ≥ 0 for all i, j = 1, . . . , n.

iii) Do the results of part ii) hold true if A is a nonsymmetric matrix?

Exercise 4.9.11. Consider the system

(x1, x2)′ = (a b; c d) (x1, x2)^T,

where a, b, c, d ∈ R are constants such that ad − bc = 0. Discuss all possible behaviours of the solutions and sketch the corresponding phase plane trajectories.