On qualitative properties of generalized ODEs
Rogelio Grau Acuña
Rogelio Grau Acuña
On qualitative properties of generalized ODEs
Doctoral thesis submitted to the Instituto de Ciências Matemáticas e de Computação – ICMC-USP, in partial fulfillment of the requirements for the degree of the Doctorate Program in Mathematics. FINAL VERSION
Concentration Area: Mathematics
Advisor: Profa. Dra. Márcia Cristina Anderson Braz Federson
Co-advisor: Profa. Dra. Jaqueline Godoy Mesquita
USP – São Carlos September 2016
Ficha catalográfica elaborada pela Biblioteca Prof. Achille Bassi e Seção Técnica de Informática, ICMC/USP,
com os dados fornecidos pelo(a) autor(a)
Acuña, Rogelio Grau
A634o On qualitative properties of generalized ODEs / Rogelio Grau Acuña; orientadora Márcia Cristina Anderson Braz Federson; coorientador Jaqueline Godoy Mesquita. – São Carlos – SP, 2016.
134 p.
Tese (Doutorado – Programa de Pós-Graduação em Matemática) – Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, 2016.
1. Equações diferenciais em medida. 2. Equações diferenciais ordinárias generalizadas. 3. Equações dinâmicas em escalas temporais. 4. Estabilidade de Lyapunov. 5. Integral de Kurzweil-Henstock-Stieltjes. I. Federson, Márcia Cristina Anderson Braz, orient. II. Mesquita, Jaqueline Godoy, coorient. III. Título.
Rogelio Grau Acuña
Sobre propriedades qualitativas de EDOs generalizadas
Tese apresentada ao Instituto de Ciências Matemáticas e de Computação – ICMC-USP, como parte dos requisitos para obtenção do título de Doutor em Ciências – Matemática. VERSÃO REVISADA
Área de Concentração: Matemática
Orientadora: Profa. Dra. Márcia Cristina Anderson Braz Federson
Coorientadora: Profa. Dra. Jaqueline Godoy Mesquita
USP – São Carlos Setembro de 2016
Acknowledgment
First of all, I want to thank God for giving me the wisdom and strength to make this project a reality.

I would like to thank my advisor, professor Márcia Cristina Anderson Braz Federson, for all the opportunities that she provided me during my PhD course, encouraging me to improve my career as a researcher and showing me many opportunities.

I am grateful to my co-advisor, professor Jaqueline Godoy Mesquita, who gave me all the necessary support during my stays in Brasília and Ribeirão Preto. She taught me a lot and shared her knowledge with me. This was really important to me.

I am grateful to CNPq and CAPES for the financial support during my doctorate.

Finally, I would like to thank my beloved family: Sonia Acuña, Rosa Ortiz, Maria Carolina, Maria Camila, Osvaldo Grau, Clareth Grau and the rest of my family.
Resumo
Neste trabalho, nosso objetivo é provar resultados sobre prolongamento de soluções, limitação uniforme de soluções, estabilidade uniforme e estabilidade uniforme assintótica (no sentido clássico de Lyapunov) para equações diferenciais em medida e para equações dinâmicas em escalas temporais.

A fim de obter os nossos resultados, empregamos a teoria de EDOs generalizadas, uma vez que estas equações abrangem equações diferenciais em medida e equações dinâmicas em escalas temporais. Portanto, para obter nossos resultados, vamos começar por provar os resultados que queremos para EDOs generalizadas abstratas. Em seguida, usando a correspondência entre as soluções de EDOs generalizadas e soluções de equações diferenciais em medida (ver [38]), estenderemos os resultados para estas últimas equações. Depois disso, usando a correspondência entre as soluções de equações diferenciais em medida e as soluções de equações dinâmicas em escalas temporais (ver [21]), estenderemos todos os resultados para estas últimas equações.

Finalmente, investigamos EDOs generalizadas autônomas e mostramos que estas equações não aumentam a classe de EDOs autônomas clássicas, mesmo quando consideramos uma classe mais geral de funções nos lados direitos das equações.

Os novos resultados encontrados estão contidos em [16, 17, 18, 19].

Palavras-chave: Equações diferenciais em medida, equações diferenciais ordinárias generalizadas, equações dinâmicas em escalas temporais, limitação, estabilidade de Lyapunov, prolongamento, integral de Kurzweil-Henstock-Stieltjes, funcionais de Lyapunov.
Abstract
In this work, our goal is to prove results on prolongation of solutions, uniform boundedness of solutions, and uniform stability as well as uniform asymptotic stability (in the classical sense of Lyapunov) for measure differential equations and for dynamic equations on time scales.

In order to get our results, we employ the theory of generalized ODEs, since these equations encompass measure differential equations and dynamic equations on time scales. Therefore, we start by proving the desired results for abstract generalized ODEs. Then, using the correspondence between the solutions of these equations and the solutions of measure differential equations (see [38]), we extend all the results to the latter. After that, using the correspondence between the solutions of measure differential equations and the solutions of dynamic equations on time scales (see [21]), we extend all the results to these last equations.
Finally, we investigate autonomous generalized ODEs and show that these equations do
not enlarge the class of classical autonomous ODEs, even when we consider a more general
class of functions as right-hand sides.
All the new results presented in this work are contained in papers [16, 17, 18, 19].
Keywords: Measure differential equations, generalized ordinary differential equations,
dynamic equations on time scales, boundedness, Lyapunov stability, prolongation, Kurzweil-
Henstock-Stieltjes integral, Lyapunov functionals.
Contents
Introduction 1
1 Generalized ODE 5
1.1 The Kurzweil integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Generalized ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Dynamic equations on time scales 17
2.1 Time scales calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 Kurzweil-Henstock delta integrals . . . . . . . . . . . . . . . . . . . . . . . . 22
3 Correspondences between equations 25
3.1 Measure Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2 Measure differential equations and Generalized ODEs . . . . . . . . . . . . . 26
3.3 Dynamic equations on time scales and measure differential equations . . . . 36
4 Prolongation of solutions 47
4.1 Prolongation of the solutions of generalized ODEs . . . . . . . . . . . . . . . 47
4.2 Prolongation of solutions of measure differential equations . . . . . . . . . . 61
4.3 Prolongation of solutions of dynamic equations on time scales . . . . . . . . 66
5 Boundedness of solutions 79
5.1 Boundedness of solutions of generalized ODEs . . . . . . . . . . . . . . . . . 79
5.2 Boundedness of solutions of measure differential equations . . . . . . . . . . 91
5.3 Boundedness of solutions of dynamic equations on time scales . . . . . . . . 94
6 Lyapunov stability 99
6.1 Lyapunov stability for generalized ODEs . . . . . . . . . . . . . . . . . . . . 99
6.2 Lyapunov stability for measure differential equations . . . . . . . . . . . . . 107
6.3 Lyapunov stability for dynamic equations on time scales . . . . . . . . . . . 110
7 Remarks on autonomous generalized ODEs 119
7.1 Autonomous generalized ODEs . . . . . . . . . . . . . . . . . . . . . . . . . 119
7.2 Correspondence between F(Ω, h, ω) and F(Ω, h, ω) . . . . . . . . . . . . . . 122
7.3 The classes F(Ω∞, h, ω) and F(Ω∞, h, ω, E) . . . . . . . . . . . . . . . . . . 127
Introduction
The theory of generalized ordinary differential equations (we write generalized ODEs, for short) was introduced by J. Kurzweil in 1957 for Euclidean and Banach space-valued functions with the purpose of generalizing certain results on continuous dependence on parameters of the solutions of ordinary differential equations (see [30]).
Since then, these equations have been attracting the attention of many researchers, be-
cause they encompass several types of differential equations, such as ordinary and functional
differential equations (see [5, 33]), measure differential equations and measure functional dif-
ferential equations (see [20, 38]), dynamic equations on time scales and functional dynamic
equations on time scales (see [20, 42]), differential equations with impulses (see [1, 23]),
among others.
We point out that, by the correspondences between the solutions of a generalized ODE and the solutions of other types of equations, we are able to translate results from generalized ODEs to other classes of differential equations. The gain in doing so comes from the use of a nonabsolute integral in the integral form of a differential equation. Such an integral is the Kurzweil integral, whose main feature is to integrate highly oscillating functions. It also copes well with many discontinuities.
In this thesis, our goal is to present results on prolongation of solutions of measure dif-
ferential equations and of solutions of dynamic equations on time scales, as well as results
on boundedness of solutions and Lyapunov-type stability results for these two types of equa-
tions. In order to obtain our results, we employ the theory of generalized ODEs, since these
equations encompass measure differential equations and dynamic equations on time scales.
Therefore, we start by proving our results for abstract generalized ODEs and, then, by the
correspondence between the solutions of these equations and the solutions of measure differ-
ential equations, we extend our results to this last class of equations. After that, we use the
correspondence between the solutions of measure differential equations and the solutions of
dynamic equations on time scales to extend our results to these last equations.
Measure differential equations have been present in the literature since the 1960s, beginning with a work by W. Schmaedeke (see [37]) on control theory. Since then, several works on measure differential equations have been developed, concerning qualitative properties of their solutions, their asymptotic behavior and applications. See [1, 2, 6, 10, 11, 12, 13, 14, 34, 39, 40].
It is worth noticing that measure differential equations represent a very important tool for applications, since they describe impulsive systems with discontinuous solutions and, therefore, can be used to study evolutionary processes such as biological or physical phenomena, optimal control models in economics, among others. See, for instance, [1, 12, 34, 37].
In this work, we consider an integral form of a measure differential equation of the type

x(t) = x(τ0) + ∫_{τ0}^{t} f(x(s), s) dg(s),  t ≥ τ0,   (0.1)

where τ0 ≥ t0, f : O × [t0,+∞) → Rn, O ⊂ Rn is an open set, and g : [t0,+∞) → R is a nondecreasing function. The integral on the right-hand side of (0.1) is the Kurzweil-Henstock-Stieltjes integral. In order to obtain our main results, we assume conditions on the indefinite integral instead of conditions on the integrands. This choice enables us to deal with functions which may be highly oscillating or have many discontinuities.
On the other hand, dynamic equations on time scales also play an important role on
applications to several fields of knowledge. This is a recent theory, which was introduced
by Stefan Hilger in his doctoral thesis (see [27]), in order to unify the discrete and continuous
analysis and the cases “in between”. The main idea of this theory is to prove the results for
a type of equation, called dynamic equation, where the domain of the unknown function is a
time scale, which is an arbitrary closed and nonempty subset of the real numbers. Therefore,
depending on the chosen time scale, we obtain a result for a different type of equation. For
example, when the time scale is the set of real numbers, we obtain a result for differential
equations, and when the time scale is the set of integers, we obtain a result for difference
equations. Notice that we can obtain several results for different types of equations depending
on the chosen time scale.
Since dynamic equations on time scales have several applications to real-world models,
it is very important to understand their properties on unbounded intervals, that is, when
the time is sufficiently large, because then one can investigate properties such as asymptotic
behavior, stability, boundedness, attractors, bifurcation, among others. Therefore, the first
step in order to do that is to investigate existence and uniqueness of a maximal solution and
prolongation of solutions for these equations.
The present thesis is divided into seven chapters which are organized as follows. In the
first chapter, we recall the basic concepts and properties concerning the Kurzweil integral and
its applications to generalized ODEs. The second chapter is devoted to recalling the theory
of dynamic equations on time scales, and to presenting the main results and basic concepts
of this theory. In the third chapter, we recall the basis of the theory of measure differential
equations and its main properties. Also, we investigate the correspondence between the
solutions of measure differential equations and the solutions of generalized ODEs. Finally,
we present a connection between the solutions of measure differential equations and the
solutions of dynamic equations on time scales.
The fourth chapter is devoted to investigating the prolongation of solutions of generalized
ODEs, measure differential equations, as well as the solutions of dynamic equations on time
scales. We start by investigating the existence and uniqueness of maximal solutions of
generalized ODEs defined in a general Banach space. New results are obtained on this
subject. Then, using the correspondence between the solutions, we obtain analogous results
for measure differential equations and for dynamic equations on time scales. Most of the
results presented in this chapter are new.
On the other hand, we are also interested in investigating the boundedness of solutions of
measure differential equations and of dynamic equations on time scales. Then we prove new
results on boundedness of solutions of generalized ODEs which do not require a Lipschitz-type condition with respect to the second variable on the Lyapunov functional, improving
the results found in the literature (see [2]). After that, we extend our results to measure
differential equations, using the correspondence between the solutions of these equations
and the solutions of generalized ODEs, improving the results found in the literature for
measure differential equations. Furthermore, we prove a result on boundedness of solutions
of dynamic equations on time scales, using the fact that a dynamic equation on time scales
is a special case of measure differential equations. Moreover, we introduce new concepts of boundedness of solutions of dynamic equations on time scales, namely, quasi-uniform ultimate boundedness and uniform ultimate boundedness. All these results are new and are collected in Chapter 5.
In this work, we also investigate Lyapunov-type stability of the trivial solution of general-
ized ODEs in Banach spaces. The stability concepts presented here extend the concepts from
[44]. Also, we point out that our results on uniform stability and uniform asymptotic stabil-
ity of the trivial solution of a generalized ODE do not require any Lipschitz-type condition
on the Lyapunov functional, improving the results found in the literature (see [3, 22, 24]).
Using the correspondences between the solutions, we extend our results to measure differ-
ential equations and to dynamic equations on time scales. All these results are original and
are contained in Chapter 6.
Finally, we investigate autonomous generalized ODEs and prove that these equations do
not enlarge the class of classical autonomous ODEs.
All the new results presented here are contained in the papers [16, 17, 18, 19].
Chapter 1

Generalized ODE
In this chapter, we recall the concept of generalized ordinary differential equations (we
write generalized ODEs, for short) and present some basic results which play a fundamental
role throughout this work.
We divide this chapter into two sections. In the first section, we present the definition
of the Kurzweil integral which is the basis to define the concept of a generalized ODE. The
second section is devoted to presenting the basic theory of generalized ODEs. The main
references for this chapter are [31, 38].
1.1 The Kurzweil integral
We start this section by recalling some basic concepts concerning the Kurzweil integral.
Let [a, b] be an interval of R, −∞ < a < b < +∞. A tagged division of [a, b] is a finite
collection of point-interval pairs D = (τi, [si−1, si]), where a = s0 ≤ s1 ≤ . . . ≤ s|D| = b
is a division of [a, b] and τi ∈ [si−1, si], i = 1, 2, . . . , |D|, where the symbol |D| denotes the
number of subintervals in which [a, b] is divided.
A gauge on a set B ⊂ [a, b] is any function δ : B → (0,∞). Given a gauge δ on [a, b], we
say that a tagged division D = (τi, [si−1, si]) is δ-fine if, for every i, we have
[si−1, si] ⊂ (τi − δ(τi), τi + δ(τi)).
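The existence of δ-fine tagged divisions for an arbitrary gauge is guaranteed by Cousin's Lemma below, whose usual proof is a bisection argument. The following Python sketch is our own illustration of that argument (the gauge chosen here is arbitrary); it tags each subinterval with its left endpoint:

```python
def delta_fine_division(a, b, delta):
    """Build a delta-fine tagged division of [a, b] by repeated bisection.

    Returns a list of pairs (tag, (s0, s1)) with each subinterval
    [s0, s1] contained in (tag - delta(tag), tag + delta(tag)).
    """
    # Tagging with the left endpoint, [a, b] is delta-fine as soon as
    # b - a < delta(a); otherwise we bisect and recurse.
    if b - a < delta(a):
        return [(a, (a, b))]
    m = (a + b) / 2.0
    return delta_fine_division(a, m, delta) + delta_fine_division(m, b, delta)

# An illustrative gauge, smaller near 0, as one would use to resolve
# bad behaviour of an integrand at that point.
delta = lambda t: 0.05 + t / 4.0
D = delta_fine_division(0.0, 1.0, delta)

# The defining property of a delta-fine tagged division holds:
assert all(tag - delta(tag) < s0 and s1 < tag + delta(tag) for tag, (s0, s1) in D)
assert D[0][1][0] == 0.0 and D[-1][1][1] == 1.0
```

For a gauge bounded below by a positive constant, as in this example, the bisection clearly terminates; for an arbitrary gauge, the existence of a δ-fine tagged division is precisely the content of Cousin's Lemma.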
Using these definitions and notations, we are able to present the concept of a Kurzweil
integrable function defined on [a, b]× [a, b] taking values in an abstract space X.
Throughout this section, let us assume that X is a Banach space with norm ‖ · ‖ .
Definition 1.1. A function U : [a, b] × [a, b] → X is called Kurzweil integrable over [a, b], if there is an element I ∈ X such that, given ε > 0, there is a gauge δ on [a, b] such that

‖ ∑_{i=1}^{|D|} [U(τi, si) − U(τi, si−1)] − I ‖ < ε

for every δ-fine tagged division D = (τi, [si−1, si]) of [a, b]. In this case, I is called the Kurzweil integral of U over [a, b] and it is denoted by ∫_a^b DU(τ, t).
We use the notation S(U, D) = ∑_{i=1}^{|D|} [U(τi, si) − U(τi, si−1)] for the Riemann-type sum corresponding to the function U and to the tagged division D.
Analogously to the Riemann integral, the Kurzweil integral has the usual properties of
linearity, additivity with respect to adjacent intervals and integrability on subintervals.
If the integral ∫_a^b DU(τ, t) exists, then we define

∫_b^a DU(τ, t) = − ∫_a^b DU(τ, t), if a < b,

and

∫_a^b DU(τ, t) = 0, if a = b.
In particular, when U(τ, t) = f(τ)t, where f : [a, b] → Rn, we obtain the usual Kurzweil-Henstock integral of f. More generally, when U(τ, t) = f(τ)g(t), we obtain the Kurzweil-Henstock-Stieltjes (or Perron-Stieltjes) integral of f : [a, b] → Rn with respect to a function g : [a, b] → R, and we denote such an integral by ∫_a^b f(s) dg(s).
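As a purely numerical illustration of the Stieltjes case (our own toy example, not a general integration routine), the sketch below approximates ∫₀¹ f(s) dg(s) for f(s) = s² and a nondecreasing g with a unit jump at 1/2. The choice of tags mimics what an adapted gauge would force, namely that the jump point tags the subintervals touching it:

```python
def khs_sum(f, g, division):
    """Riemann-type sum  sum_i f(tag_i) * (g(s_i) - g(s_{i-1}))."""
    return sum(f(tag) * (g(s1) - g(s0)) for tag, (s0, s1) in division)

f = lambda s: s * s
# Nondecreasing integrator with a unit jump at 1/2: g(1/2-) = 1/2, g(1/2) = 3/2.
g = lambda s: s if s < 0.5 else s + 1.0

n = 1000  # even, so that 1/2 is a division point
pts = [i / n for i in range(n + 1)]
division = []
for s0, s1 in zip(pts, pts[1:]):
    # An adapted gauge would force 1/2 to be the tag of the subintervals
    # touching it; we mimic that, tagging the others with their midpoints.
    tag = 0.5 if s0 == 0.5 or s1 == 0.5 else (s0 + s1) / 2.0
    division.append((tag, (s0, s1)))

approx = khs_sum(f, g, division)
exact = 1.0 / 3.0 + f(0.5)  # ∫₀¹ s² ds plus the jump contribution f(1/2)·1
assert abs(approx - exact) < 1e-3
```

The jump contributes the term f(1/2)·1 to the integral; had the tags near 1/2 been chosen carelessly, the sums would not converge to the correct value, which is exactly why the gauge (rather than a mesh size) governs fineness.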
Clearly, Definition 1.1 makes sense only if, for a given gauge δ on [a, b], there exists at least one δ-fine tagged division D of [a, b]. This is the content of the next result. For more details, see [26, Theorem 4.1].
Lemma 1.2 (Cousin Lemma). Given a gauge δ of [a, b], there exists a δ-fine tagged division
D of [a, b].
The next result is known as the Saks-Henstock Lemma. A proof of it can be found in
[38, Lemma 1.13].
Lemma 1.3 (Saks-Henstock Lemma). Let U : [a, b] × [a, b] → X be Kurzweil integrable over [a, b]. Given ε > 0, let δ be a gauge on [a, b] corresponding to ε in the definition of the Kurzweil integral, so that

‖ ∑_{i=1}^{|D|} [U(τi, αi) − U(τi, αi−1)] − ∫_a^b DU(τ, t) ‖ < ε,   (1.1)

for every δ-fine tagged division D = (τi, [αi−1, αi]) of [a, b]. If

a ≤ β1 ≤ ξ1 ≤ γ1 ≤ β2 ≤ ξ2 ≤ γ2 ≤ … ≤ βm ≤ ξm ≤ γm ≤ b

represents a δ-fine tagged partial division {(ξi, [βi, γi]) : i = 1, 2, …, m}, that is,

ξi ∈ [βi, γi] ⊂ (ξi − δ(ξi), ξi + δ(ξi)), i = 1, 2, …, m,

but ⋃_{i=1}^{m} [βi, γi] does not need to equal [a, b], then

‖ ∑_{i=1}^{m} [U(ξi, γi) − U(ξi, βi) − ∫_{βi}^{γi} DU(τ, t)] ‖ ≤ ε.
The following result is an immediate consequence of the Saks-Henstock Lemma. See [38].
Corollary 1.4. Let U : [a, b] × [a, b] → X be Kurzweil integrable over [a, b]. Given ε > 0, let δ be a gauge on [a, b] corresponding to ε in the definition of the Kurzweil integral and let [γ, v] ⊂ [a, b]. Then the following assertions hold.

(i) If v − γ < δ(γ), then ‖ U(γ, v) − U(γ, γ) − ∫_γ^v DU(τ, t) ‖ < ε;

(ii) If v − γ < δ(v), then ‖ U(v, v) − U(v, γ) − ∫_γ^v DU(τ, t) ‖ < ε.
The next result is a generalization of [38, Theorem 1.14] to general Banach space-valued functions. We omit its proof here, since it is similar to the finite-dimensional case.
Theorem 1.5. If U : [a, b] × [a, b] → X is a function such that, for every c ∈ [a, b), U is Kurzweil integrable over [a, c] and the limit

lim_{c→b−} [ ∫_a^c DU(τ, t) − U(b, c) + U(b, b) ] = I ∈ X

exists, then the function U is Kurzweil integrable over [a, b] and

∫_a^b DU(τ, t) = I.

Similarly, if the function U is Kurzweil integrable over [c, b] for every c ∈ (a, b] and the limit

lim_{c→a+} [ ∫_c^b DU(τ, t) + U(a, c) − U(a, a) ] = I ∈ X

exists, then the function U is Kurzweil integrable over [a, b] and

∫_a^b DU(τ, t) = I.
The next result shows that the indefinite Kurzweil integral of U : [a, b] × [a, b] → X, that is, the function s ↦ ∫_a^s DU(τ, t), is not continuous in general. A proof of this fact can be found in [38, Theorem 1.16].
Theorem 1.6 (Hake-type Theorem). Let U : [a, b] × [a, b] → X be Kurzweil integrable over [a, b] and let c ∈ [a, b]. Then,

lim_{s→c} [ ∫_a^s DU(τ, t) − U(c, s) + U(c, c) ] = ∫_a^c DU(τ, t)

and

lim_{s→c} [ ∫_s^b DU(τ, t) + U(c, s) − U(c, c) ] = ∫_c^b DU(τ, t).
Remark 1.7. Notice that the indefinite Kurzweil integral is continuous at a point c ∈ [a, b] if and only if the function U(c, ·) : [a, b] → X, t ↦ U(c, t), is continuous at the point t = c.
We recall that a function f : [a, b] → X is called regulated, if the lateral limits

f(t−) = lim_{s→t−} f(s), t ∈ (a, b], and f(t+) = lim_{s→t+} f(s), t ∈ [a, b),

exist. On the other hand, given a function f : [a, b] → X, its variation var_{[a,b]} f on the interval [a, b] is defined by

var_{[a,b]} f := sup_{D ∈ D[a,b]} ∑_{i=1}^{|D|} ‖f(ti) − f(ti−1)‖,

where D[a, b] denotes the set of all divisions of [a, b]. If var_{[a,b]} f < +∞, we say that f is of bounded variation on the interval [a, b].
The space of all regulated functions f : [a, b] → X will be denoted by G([a, b], X), which is a Banach space when endowed with the usual supremum norm

‖f‖∞ = sup_{s ∈ [a,b]} ‖f(s)‖.
It is also known that functions of bounded variation in [a, b] are regulated. For more infor-
mation, see [25, 29].
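As a small numerical illustration (the function and divisions here are our own choices), the variation of f(t) = |t − 1/2| over [0, 1] equals 1, and sums over divisions approach this supremum from below:

```python
def variation_over(f, pts):
    """The sum  sum |f(t_i) - f(t_{i-1})|  over the division given by pts."""
    return sum(abs(f(t1) - f(t0)) for t0, t1 in zip(pts, pts[1:]))

f = lambda t: abs(t - 0.5)  # var of f over [0, 1] equals 1

# The trivial division misses the kink at 1/2 and gives 0, while any
# division containing 1/2 already attains the supremum, var f = 1.
coarse = variation_over(f, [0.0, 1.0])
fine = variation_over(f, [i / 100 for i in range(101)])
assert coarse == 0.0
assert abs(fine - 1.0) < 1e-12
```

This shows why the supremum over all divisions (and not the value on one fixed division) is needed in the definition of the variation.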
The following result, which describes some properties of the indefinite Kurzweil-Henstock-
Stieltjes integral, is a special case of Theorem 1.6.
Theorem 1.8. Let f : [a, b] → Rn and g : [a, b] → R be functions such that g is regulated and ∫_a^b f(s) dg(s) exists. Then the functions

h(t) = ∫_a^t f(s) dg(s), t ∈ [a, b], and k(t) = ∫_t^b f(s) dg(s), t ∈ [a, b],

are regulated on [a, b] and satisfy

h(t+) = h(t) + f(t)∆+g(t), t ∈ [a, b),
h(t−) = h(t) − f(t)∆−g(t), t ∈ (a, b],
k(t+) = k(t) − f(t)∆+g(t), t ∈ [a, b),
k(t−) = k(t) + f(t)∆−g(t), t ∈ (a, b],

where

∆+g(t) = g(t+) − g(t) and ∆−g(t) = g(t) − g(t−).
Also,

g(t+) = lim_{ξ→t+} g(ξ), t ∈ [a, b), and g(t−) = lim_{ξ→t−} g(ξ), t ∈ (a, b].

Analogously, one can define h(t+), h(t−), k(t+) and k(t−).
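The jump formulas of Theorem 1.8 can be checked numerically in a simple case. In the sketch below (our own toy example, not from the references), g has a jump from the right at a point c, so the indefinite integral h can be written in closed form and the identity h(t+) = h(t) + f(t)∆+g(t) verified directly:

```python
# Toy integrator with a jump from the right at c: delta+ g(c) = 1, delta- g(c) = 0.
c = 0.5
g = lambda s: 0.0 if s <= c else 1.0
f = lambda s: 2.0 + s  # any regulated integrand

# For this g, the indefinite integral h(t) = int_0^t f dg concentrates at the
# jump, so it has the closed form: h(t) = 0 for t <= c, and h(t) = f(c) after.
h = lambda t: 0.0 if t <= c else f(c)

eps = 1e-9
dg_plus = g(c + eps) - g(c)  # approximates delta+ g(c) = 1
h_plus = h(c + eps)          # approximates h(c+)
# Theorem 1.8:  h(t+) = h(t) + f(t) delta+ g(t)  at t = c
assert abs(h_plus - (h(c) + f(c) * dg_plus)) < 1e-12
```

The closed form for h above is derived by hand from this particular g; the point of the check is only to make the jump relation concrete.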
We finish this section by presenting a result which ensures the existence of the Kurzweil-Henstock-Stieltjes integral ∫_a^b f(s) dg(s). This result can be found in [20, Theorem 2.1].
Theorem 1.9. Let f : [a, b] → Rn be a regulated function on [a, b] and g : [a, b] → R be a nondecreasing function. Then, the following conditions hold:

(i) the integral ∫_a^b f(s) dg(s) exists;

(ii) | ∫_a^b f(s) dg(s) | ≤ ∫_a^b |f(s)| dg(s).
1.2 Generalized ODEs
In this section, we recall the definition of a generalized ODE and its main properties. The main reference is [38].
Consider a subset O ⊂ X, an interval [t0,+∞) ⊂ R and an X-valued function F : Ω→ X
defined for each (x, t) ∈ Ω, where Ω = O × [t0,+∞).
Definition 1.10. A function x : [α, β] → X is called a solution of the generalized ODE

dx/dτ = DF(x, t)   (1.2)

on the interval [α, β] ⊂ [t0,+∞), if (x(t), t) ∈ Ω for every t ∈ [α, β] and

x(s2) − x(s1) = ∫_{s1}^{s2} DF(x(τ), t),   (1.3)

for every s1, s2 ∈ [α, β].
The integral on the right-hand side of (1.3) can be understood as the Kurzweil integral
of U(τ, t) = F (x(τ), t) described in Definition 1.1.
Remark 1.11. It is worth mentioning that the notation in (1.2) is only symbolic. The letter D indicates that (1.2) is a generalized ODE, a concept which is defined via its solutions. Even the expression dx/dτ does not mean that the solution has a derivative.
We can also define a solution of the generalized ODE (1.2) with an initial condition, as the next definition shows.
Definition 1.12. The function x : [α, β] → X is a solution of the generalized ODE (1.2) with initial condition x(τ0) = x0 on the interval [α, β] ⊂ [t0,+∞), if τ0 ∈ [α, β], (x(t), t) ∈ Ω for every t ∈ [α, β] and

x(s) − x0 = ∫_{τ0}^{s} DF(x(τ), t)

for every s ∈ [α, β].
Although a solution x of the generalized ODE (1.2) is defined on a bounded interval [α, β] in Definition 1.10, it is possible to extend this definition to a nondegenerate interval I, not necessarily compact. This is the content of the next definition.
Definition 1.13. We say that x : I → X, where I is a nondegenerate subinterval of [t0,+∞), is a solution of the generalized ODE (1.2) on I, if (x(t), t) ∈ Ω for every t ∈ I and the equality

x(s2) − x(s1) = ∫_{s1}^{s2} DF(x(τ), t)

is satisfied for every s1, s2 ∈ I.
In what follows, we present a class of functions F : Ω → X which is crucial to obtaining important properties of the solutions of the generalized ODE (1.2).
Definition 1.14. Given a nondecreasing function h : [t0,+∞)→ R, we say that a function
F : Ω→ X belongs to the class F(Ω, h), if
‖F (x, t)− F (x, s)‖ ≤ |h(t)− h(s)| (1.4)
for all (x, t), (x, s) ∈ Ω, and if
‖F (x, t)− F (x, s)− F (y, t) + F (y, s)‖ ≤ ‖x− y‖ |h(t)− h(s)| (1.5)
for all (x, t), (x, s), (y, t), (y, s) ∈ Ω.
The following result is essential to our purposes. It can be found in [38, Lemma 3.9].
Lemma 1.15. Let F : Ω → X satisfy condition (1.4). If [α, β] ⊂ [t0,+∞), x : [α, β] → X is such that (x(t), t) ∈ Ω for every t ∈ [α, β] and the Kurzweil integral ∫_α^β DF(x(τ), t) exists, then, for every s1, s2 ∈ [α, β], we have

‖ ∫_{s1}^{s2} DF(x(τ), t) ‖ ≤ |h(s2) − h(s1)|.
The next result describes an important property of the solutions of the generalized ODE
(1.2). It can be found in [38, Lemma 3.10].
Lemma 1.16. Assume F : Ω → X satisfies condition (1.4). If [a, b] ⊂ [t0,+∞) and
x : [a, b]→ X is a solution of the generalized ODE (1.2), then the inequality
‖x(t)− x(s)‖ ≤ |h(t)− h(s)|
holds for all t, s ∈ [a, b]. Also, each point in [a, b] at which the function h is continuous is a
continuity point of the solution x : [a, b]→ X.
For the next result, let var_{[a,b]} x denote the variation of a function x : [a, b] → X. If the right-hand side of the generalized ODE (1.2) satisfies condition (1.4), then the next result ensures that its solutions are functions of bounded variation. A proof of it can be found in [38, Lemma 3.11].
Corollary 1.17. Let F : Ω → X satisfy condition (1.4). If [α, β] ⊂ [t0,+∞) and x :
[α, β] −→ X is a solution of the generalized ODE (1.2), then x is a function of bounded
variation (and, therefore, regulated) in [α, β] and
var[α,β]x ≤ h(β)− h(α) < +∞. (1.6)
Now, we present a result which ensures the existence of the integral involved in the
definition of the solution of the generalized ODE (1.2). This result is a consequence of [38,
Lemma 3.9, Corollary 3.16].
Proposition 1.18. Let F : Ω → X be an element of the class F(Ω, h). If [α, β] ⊂ [t0,+∞), x : [α, β] → X is regulated (in particular, a function of bounded variation) on [α, β] and (x(s), s) ∈ Ω for every s ∈ [α, β], then the Kurzweil integral ∫_α^β DF(x(τ), t) exists and the function s ↦ ∫_α^s DF(x(τ), t) is of bounded variation on [α, β].
The next result describes the discontinuities of a solution of the generalized ODE (1.2),
provided F satisfies (1.4). For a proof of it, see [38, Lemma 3.12].
Lemma 1.19. Let [α, β] ⊂ [t0,+∞), let x : [α, β] → X be a solution of the generalized ODE (1.2) and assume that F : Ω → X satisfies condition (1.4). Then, we have

x(t+) − x(t) = F(x(t), t+) − F(x(t), t), for every t ∈ [α, β),

and

x(t) − x(t−) = F(x(t), t) − F(x(t), t−), for every t ∈ (α, β],

where

F(x, t+) = lim_{s→t+} F(x, s), for every t ∈ [α, β),

and

F(x, t−) = lim_{s→t−} F(x, s), for every t ∈ (α, β].
The next result can be found in [38, Theorem 3.14].
Theorem 1.20. Let F ∈ F(Ω, h). If x : [a, b] → X is the uniform limit of a sequence (xk)_{k∈N} of step functions xk : [a, b] → X such that (x(s), s) ∈ Ω and (xk(s), s) ∈ Ω, for every k ∈ N and every s ∈ [a, b], then the integral ∫_a^b DF(x(τ), t) exists and

∫_a^b DF(x(τ), t) = lim_{k→∞} ∫_a^b DF(xk(τ), t).
The next result ensures the existence and uniqueness of a local solution for an initial
value problem of the generalized ODE (1.2). For a proof of it, see [23, Theorem 2.15].
Theorem 1.21 (Local existence and uniqueness). Let F : Ω → X be an element of the class F(Ω, h), where the function h : [t0,+∞) → R is nondecreasing and left-continuous. If (x0, τ0) ∈ Ω is such that x0 + F(x0, τ0+) − F(x0, τ0) ∈ O, then there exist ∆ > 0 and a function x : [τ0, τ0 + ∆] → X which is the unique solution of the generalized ODE (1.2) for which x(τ0) = x0.
Remark 1.22. Notice that the condition x0 + F(x0, τ0+) − F(x0, τ0) ∈ O from Theorem 1.21 is sufficient, but not necessary. The following example illustrates this claim.
Example 1.23. Let X = R with norm | · | (absolute value), B1 := B(0, 1) ⊂ R, [0, 1] ⊂ R and Ω := B1 × [0, 1], where B(0, 1) = {x ∈ R : |x| < 1}. Consider a function ϕ : [0, 1] → R given by

ϕ(t) := 0 for t = 0, and ϕ(t) := t − 1 for 0 < t ≤ 1,   (1.7)
and a function F : Ω → R given by F(x, t) := ϕ(t), for all (x, t) ∈ Ω. Notice that the function F is constant with respect to the first variable, that is, F(x, t) = F(y, t) for every
x, y ∈ B1. Also, define a function h : [0, 1] → R by h(t) := 0 for t = 0 and h(t) := 2t + 1 for 0 < t ≤ 1. Note that, by definition, the function h is left-continuous on (0, 1] and increasing on [0, 1]. Consider the generalized ODE

dx/dτ = DF(x, t) = D[ϕ(t)],  x(0) = 0.   (1.8)
Assertion 1. F ∈ F(Ω, h).

i) First, we show that |F(x, t2) − F(x, t1)| ≤ |h(t2) − h(t1)| for all (x, t2), (x, t1) ∈ Ω.

– Let 0 < t1 ≤ t2 ≤ 1 and x ∈ B1. Then

|F(x, t2) − F(x, t1)| = |ϕ(t2) − ϕ(t1)| = |t2 − 1 − (t1 − 1)| = |t2 − t1| ≤ |2(t2 − t1)| = |2t2 + 1 − (2t1 + 1)| = |h(t2) − h(t1)|.

– Let t1 = 0, 0 < t2 ≤ 1 and x ∈ B1. Then

|F(x, t2) − F(x, t1)| = |ϕ(t2) − ϕ(0)| = |t2 − 1| = 1 − t2 < 1 < 2t2 + 1 = h(t2) − h(0) = |h(t2) − h(t1)|.

– The case t1 = t2 = 0 follows trivially. Therefore, we have

|F(x, t2) − F(x, t1)| ≤ |h(t2) − h(t1)|, for every (x, t2), (x, t1) ∈ Ω.
ii) Also, the inequality

|F(x, t2) − F(x, t1) − F(y, t2) + F(y, t1)| ≤ |h(t2) − h(t1)| |x − y|

is satisfied for all (x, t2), (x, t1), (y, t2), (y, t1) ∈ Ω. Indeed, notice that

|F(x, t2) − F(x, t1) − F(y, t2) + F(y, t1)| = |ϕ(t2) − ϕ(t1) − ϕ(t2) + ϕ(t1)| = 0 ≤ |h(t2) − h(t1)| |x − y|,

for all (x, t2), (x, t1), (y, t2), (y, t1) ∈ Ω.
Assertion 2. For x0 = 0 and τ0 = 0, we have 0 + F(0, 0+) − F(0, 0) ∉ B1.

Indeed, notice that, by the definition of F, we have

F(0, 0+) − F(0, 0) = lim_{t→0+} F(0, t) − F(0, 0) = lim_{t→0+} ϕ(t) = −1,

which implies that

0 + F(0, 0+) − F(0, 0) = −1 ∉ B1.
Assertion 3. The function ϕ, given by (1.7), is the unique solution of the generalized ODE (1.8) on [0, 1].

Indeed, we start by proving that ϕ is a solution of the generalized ODE (1.8) on [0, 1]. Let s ∈ [0, 1]. Then

ϕ(s) − ϕ(0) = ∫_0^s D[ϕ(t)] = ∫_0^s DF(ϕ(τ), t).

Therefore, since ϕ(0) = 0, ϕ is a solution of (1.8) on [0, 1].
Now, suppose x : [0, 1] → R is also a solution of (1.8) on [0, 1]. Then, given s ∈ [0, 1], we have

x(s) = 0 + ∫_0^s DF(x(τ), t) = ∫_0^s D[ϕ(t)] = ϕ(s) − ϕ(0) = ϕ(s),

where the second equality holds because F(x, t) = ϕ(t) does not depend on x, and the last one holds because ϕ(0) = 0. That is, x(s) = ϕ(s) for all s ∈ [0, 1]. Notice that the equality ϕ(s) − ϕ(0) = ∫_0^s D[ϕ(t)] follows directly from the definition of the Kurzweil integral.
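The inequality in Assertion 1 can also be spot-checked numerically. The following Python sketch (ours, not part of the proof) assumes, as in the example, that ϕ(0) = 0 and ϕ(t) = t − 1 for 0 < t ≤ 1, and samples pairs of points in [0, 1]:

```python
# Numerical spot-check of Assertion 1 (a sketch; phi and h are taken from
# (1.7) and the definition of h above, under the stated assumptions).

def phi(t):
    return 0.0 if t == 0 else t - 1.0

def h(t):
    return 0.0 if t == 0 else 2.0 * t + 1.0

# Sample points of [0, 1], including the discontinuity at t = 0.
pts = [0.0] + [k / 100.0 for k in range(1, 101)]

# Since F(x, t) = phi(t) does not depend on x, condition i) reduces to
# |phi(t2) - phi(t1)| <= |h(t2) - h(t1)| for all sampled t1, t2.
assert all(abs(phi(t2) - phi(t1)) <= abs(h(t2) - h(t1)) + 1e-12
           for t1 in pts for t2 in pts)
print("inequality holds on all sampled pairs")
```

The check passes because h has slope 2 where ϕ has slope 1, and h jumps by at least 1 at t = 0, where ϕ jumps by exactly 1.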
Chapter 2

Dynamic equations on time scales

In this chapter, we will present the basic concepts concerning the theory of dynamic equations on time scales. This theory is recent and was introduced by Stefan Hilger in his doctoral thesis (see [27]) in order to unify discrete and continuous analysis and the cases “in between”.
This chapter is divided into three sections. In the first section, we present the basic
definitions and notations about the theory of dynamic equations on time scales. The second
section is dedicated to presenting the basic concepts and properties of delta derivatives. The
last section is devoted to the study of the Kurzweil-Henstock ∆-integrals.
The main references for this chapter are [7, 8, 27].
2.1 Time scales calculus
In this section, our goal is to present the basic concepts and results concerning the theory
of dynamic equations on time scales.
A time scale is an arbitrary nonempty closed subset of the real numbers (with the standard topology of R). Thus

R, [1, 2] ∪ N, Z, the Cantor set

are examples of time scales. On the other hand, open intervals, half-open intervals, R\Q and Q are not time scales, since they are not closed sets in R. Here, we denote a time scale by the symbol T.
Let T be a time scale. We define the forward jump operator σ : T → T by

σ(t) := inf{s ∈ T : s > t},

while the backward jump operator ρ : T → T is given by

ρ(t) := sup{s ∈ T : s < t}.

In the above definitions, we consider inf ∅ = sup T and sup ∅ = inf T. Then, it is not difficult to see that σ(t) = t, if T has a maximum t, and ρ(t) = t, if T has a minimum t. Since T is a closed set, it is clear from the definitions of the forward and backward jump operators that the values σ(t) and ρ(t) belong to T, for all t ∈ T. Also, we define the graininess function µ : T → [0, +∞) by

µ(t) := σ(t) − t.
Definition 2.1. Let T be a time scale and t ∈ T. We say that
(i) t is right-scattered, if σ(t) > t;
(ii) t is left-scattered, if ρ(t) < t;
(iii) t is isolated, if it is right-scattered and left-scattered.
Definition 2.2. Let T be a time scale and t ∈ T. We say that
(i) t is right-dense, if t < supT and σ(t) = t;
(ii) t is left-dense, if inf T < t and ρ(t) = t;
(iii) t is dense, if it is right-dense and left-dense.
In what follows, we present some examples to illustrate the above definitions. They are
borrowed from [7].
Example 2.3. Let T = Z. Then, for all t ∈ Z, we have

σ(t) = inf{s ∈ Z : s > t} = inf{t + 1, t + 2, . . .} = t + 1 > t,
ρ(t) = sup{s ∈ Z : s < t} = sup{t − 1, t − 2, . . .} = t − 1 < t.

Also, we have µ(t) = 1. Therefore, we can easily see that every point in Z is isolated.
Example 2.4. Let T = R. Then, for each t ∈ R, we have

σ(t) = inf{s ∈ R : s > t} = inf(t, +∞) = t,
ρ(t) = sup{s ∈ R : s < t} = sup(−∞, t) = t.

Thus, each t ∈ R is a right-dense and left-dense point; hence t is dense. Furthermore, µ(t) = 0, for all t ∈ T.
Example 2.5. Let T = q^Z ∪ {0} = {q^k : k ∈ Z} ∪ {0}, with q > 1. Then, for any t ∈ T\{0}, we have

σ(t) = inf{s ∈ T : s > t} = inf{qt, q²t, . . .} = qt > t,
ρ(t) = sup{s ∈ T : s < t} = sup{t/q, t/q², . . .} = t/q < t.

Thus, each t ∈ T\{0} is an isolated point. On the other hand, if t = 0, then σ(0) = 0, which implies that 0 is a right-dense point. Also, note that

µ(t) = (q − 1)t, for t ∈ T\{0}, and µ(0) = 0.
Example 2.6. Let T = hZ = {hk : k ∈ Z}, where h > 0. Then, for every t ∈ T, we have

σ(t) = inf{s ∈ T : s > t} = inf{t + nh : n ∈ N} = t + h > t,
ρ(t) = sup{s ∈ T : s < t} = sup{t − nh : n ∈ N} = t − h < t.

Hence, every point t ∈ T is an isolated point and

µ(t) = σ(t) − t = h, for any t ∈ T,

which implies that µ is constant.
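The jump operators are easy to compute directly. The following Python sketch (ours; it works on a finite truncation of a time scale stored as a sorted list, with the conventions inf ∅ = sup T and sup ∅ = inf T from the text) reproduces the values from Examples 2.5 and 2.6:

```python
# A sketch of the jump operators on a finite sample of a time scale.

def sigma(T, t):
    """Forward jump operator: inf{s in T : s > t} (with inf empty = max T)."""
    bigger = [s for s in T if s > t]
    return min(bigger) if bigger else max(T)

def rho(T, t):
    """Backward jump operator: sup{s in T : s < t} (with sup empty = min T)."""
    smaller = [s for s in T if s < t]
    return max(smaller) if smaller else min(T)

def mu(T, t):
    """Graininess function: sigma(t) - t."""
    return sigma(T, t) - t

# T = hZ with h = 0.5, truncated to a finite window (Example 2.6).
T = [0.5 * k for k in range(-4, 5)]
assert sigma(T, 1.0) == 1.5 and rho(T, 1.0) == 0.5 and mu(T, 1.0) == 0.5
# T = {q^k} with q = 2, truncated (Example 2.5): every point is isolated.
Q = [2.0 ** k for k in range(0, 5)]           # {1, 2, 4, 8, 16}
assert sigma(Q, 4.0) == 8.0 and rho(Q, 4.0) == 2.0 and mu(Q, 4.0) == 4.0
```

The second assertion confirms µ(t) = (q − 1)t from Example 2.5: for q = 2 and t = 4, µ(4) = 4.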
To conclude this section, we define the set Tκ, which is derived from the time scale T as follows:

Tκ = T\(ρ(sup T), sup T], if sup T < ∞, and Tκ = T, if sup T = ∞.   (2.1)

This definition is needed to define the ∆-derivative of a function f, which we will present in the next section.
2.2 Differentiation
In this section, we present the basic concepts and the main properties of the ∆-derivative of a function f : T → R.

Let f : T → R be a function. We define the so-called delta (or Hilger) derivative of f at a point t ∈ Tκ as follows.

Definition 2.7. Suppose that f : T → R is a function and t ∈ Tκ. Then, we define f∆(t) to be the number (provided it exists) with the property that, given any ε > 0, there is a neighborhood U of t (that is, U = (t − δ, t + δ) ∩ T for some δ > 0) such that

|(f(σ(t)) − f(s)) − f∆(t)(σ(t) − s)| ≤ ε|σ(t) − s|, for all s ∈ U.

We call f∆(t) the delta (or Hilger) derivative of f at t. Moreover, we say that f is delta differentiable on Tκ, provided f∆(t) exists for all t ∈ Tκ. The function f∆ : Tκ → R is called the ∆-derivative of f on Tκ.
It is also possible to extend the concept of ∆-derivative presented in Definition 2.7 to Rn-valued functions.

Definition 2.8. Let t ∈ Tκ and let f : T → Rn be an Rn-valued function, f = (f1, . . . , fn), where each component fi is a real function defined on T. We say that f is ∆-differentiable at t if each component fi is ∆-differentiable at t, and we define

f∆(t) = (f1∆(t), . . . , fn∆(t)).
In the sequel, we present some examples of the ∆-derivative of a function f : T → R.
Such examples are borrowed from [7].
Example 2.9. Let f : T → R be given by f(t) = c, for all t ∈ T, where c ∈ R is a constant. Then f∆(t) = 0 for any t ∈ Tκ. Indeed, for every ε > 0, we have

|(f(σ(t)) − f(s)) − 0 · (σ(t) − s)| = |c − c| = 0 ≤ ε|σ(t) − s|, for all s ∈ T.
Example 2.10. Let f : T → R be a function given by f(t) = t², for any t ∈ T. Then f∆(t) = t + σ(t), for all t ∈ Tκ. In fact, given ε > 0, let us take U = (t − δ, t + δ) ∩ T with 0 < δ ≤ ε. Then, for all s ∈ U, we obtain

|(f(σ(t)) − f(s)) − (t + σ(t))(σ(t) − s)| = |(σ(t))² − s² − tσ(t) + ts − (σ(t))² + sσ(t)|
= |σ(t)(s − t) − s(s − t)|
= |s − t| |σ(t) − s|
< δ|σ(t) − s| ≤ ε|σ(t) − s|.
The next result presents some important properties of the ∆-derivatives. For a proof of
it, see [7, Theorem 1.16].
Theorem 2.11. Let f : T → R be a function and t ∈ Tκ. Then the following assertions hold:

(i) If f is ∆-differentiable at t, then f is continuous at t.

(ii) If f is continuous at t and t is right-scattered, then f is ∆-differentiable at t, with

f∆(t) = (f(σ(t)) − f(t)) / µ(t).

(iii) If t is right-dense, then f is ∆-differentiable at t if, and only if, the limit

lim_{s→t} (f(s) − f(t)) / (s − t)

exists as a finite number. In this case, we have

f∆(t) = lim_{s→t} (f(s) − f(t)) / (s − t).

(iv) If f is ∆-differentiable at t, then

f(σ(t)) = f(t) + µ(t)f∆(t).
The next example illustrates the above properties. It is borrowed from [7].

Example 2.12. Let h > 0 and T = hZ = {hk : k ∈ Z}. Let f : T → R be a function. As was shown in Example 2.6, every t ∈ T is right-scattered. Also, note that f is continuous on T = hZ (since each point t ∈ T is isolated). Thus, by
statement (ii) from Theorem 2.11, we have
f∆(t) = (f(σ(t)) − f(t)) / µ(t) = (f(t + h) − f(t)) / h, for all t ∈ T.
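On hZ the formula from Theorem 2.11 (ii) is just a forward difference quotient, so it can be checked against the closed form from Example 2.10. The following Python sketch (ours; names are not from the text) does this for f(t) = t²:

```python
import math

# For a right-scattered point, Theorem 2.11 (ii) gives the Delta-derivative
# as a forward difference quotient. On T = hZ we have sigma(t) = t + h and
# mu(t) = h, so the quotient below is exactly (f(sigma(t)) - f(t)) / mu(t).

def delta_derivative(f, t, h):
    return (f(t + h) - f(t)) / h

h = 0.5
f = lambda t: t ** 2
t = 2.0
# Example 2.10: f(t) = t^2 has f^Delta(t) = t + sigma(t) = 2t + h on hZ.
assert math.isclose(delta_derivative(f, t, h), t + (t + h))
print(delta_derivative(f, t, h))  # 4.5 for t = 2, h = 0.5
```

As h → 0 the quotient recovers the ordinary derivative 2t, which is the T = R case of the theory.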
The next result describes algebraic properties of the ∆-derivatives. A proof of it can be
found in [7, Theorem 1.20].
Theorem 2.13. Suppose f, g : T→ R are ∆-differentiable at t ∈ Tκ. Then, we have
(i) f + g : T→ R is ∆-differentiable at t, with
(f + g)∆(t) = f∆(t) + g∆(t).
(ii) For every α ∈ R, αf : T→ R is ∆-differentiable at t, with
(αf)∆(t) = αf∆(t).
(iii) f · g : T→ R is ∆-differentiable at t, with
(f · g)∆(t) = f∆(t)g(t) + f(σ(t))g∆(t) = f(t)g∆(t) + f∆(t)g(σ(t)).
(iv) If g(t)g(σ(t)) ≠ 0, then f/g is ∆-differentiable at t, with

(f/g)∆(t) = (f∆(t)g(t) − f(t)g∆(t)) / (g(t)g(σ(t))).
Remark 2.14. By Theorem 2.13, it is not difficult to see that, if T = R, we recover the usual properties of the Fréchet derivative.
2.3 Kurzweil-Henstock delta integrals
In this section, we present the basic concepts and the main properties of the Kurzweil-
Henstock ∆-integrals. This section is based on [35].
Let T be a time scale. For each pair of numbers a, b ∈ T, a ≤ b, set [a, b]T = [a, b] ∩ T. The open and half-open time scale intervals are defined in a similar way.

We say that δ = (δL, δR) is a ∆-gauge on [a, b]T provided δL(t) > 0 on (a, b]T, δR(t) > 0 on [a, b)T, δL(a) ≥ 0, δR(b) ≥ 0 and δR(t) ≥ µ(t) for all t ∈ [a, b)T.
We say that P is a tagged division of [a, b]T if

P = {a = s0 ≤ ξ1 ≤ s1 ≤ ξ2 ≤ s2 ≤ · · · ≤ sn−1 ≤ ξn ≤ sn = b},

with si−1 < si for all i = 1, . . . , n and si, ξi ∈ T. We denote such a tagged division by P = {([si−1, si]T, ξi)}, where [si−1, si]T denotes an interval in T and ξi is the associated “tag point” in [si−1, si]T.
If δ is a ∆-gauge of [a, b]T, then a tagged division P is called δ-fine, whenever
ξi − δL(ξi) ≤ si−1 < si ≤ ξi + δR(ξi),
for i = 1, . . . , n.
In the sequel, we recall the concept of the Kurzweil-Henstock ∆-integral. This concept
was introduced for the first time by A. Peterson and B. Thompson in [35].
Definition 2.15. A function f : [a, b]T → R is called Kurzweil-Henstock ∆-integrable (or KH delta integrable) over [a, b]T if there is a number I such that, given ε > 0, there exists a ∆-gauge δ on [a, b]T such that

|I − Σ_{i=1}^n f(ξi)(si − si−1)| < ε

for every δ-fine tagged division P of [a, b]T. In this case, I is called the Kurzweil-Henstock ∆-integral of f over [a, b]T and it will be denoted by KH∫_a^b f(t)∆t.

Remark 2.16. When it is clear from the context, we write simply ∫_a^b f(t)∆t instead of KH∫_a^b f(t)∆t.
Notice that Definition 2.15 makes sense only if, for a given ∆-gauge δ on [a, b]T, there exists at least one δ-fine tagged division of [a, b]T. The next result ensures this existence; it is a generalization of the well-known Cousin Lemma to a ∆-gauge on a time scale interval. This result can be found in [35, Lemma 1.9].
Lemma 2.17. If δ is a ∆-gauge on [a, b]T, then there exists a δ-fine tagged division P of
[a, b]T.
Finally, we present the last result of this section. It describes a very important property
of the Kurzweil-Henstock ∆-integral.
Theorem 2.18. Let f : [a, b]T → R be a function and c ∈ [a, b]T. Then f is Kurzweil-Henstock ∆-integrable over [a, b]T if, and only if, f is Kurzweil-Henstock ∆-integrable over [a, c]T and [c, b]T. In this case,

∫_a^b f(t)∆t = ∫_a^c f(t)∆t + ∫_c^b f(t)∆t.
Also, if f, g : [a, b]T → R are Kurzweil-Henstock ∆-integrable functions over [a, b]T, then αf + βg is Kurzweil-Henstock ∆-integrable over [a, b]T and

∫_a^b (αf + βg)(t)∆t = α ∫_a^b f(t)∆t + β ∫_a^b g(t)∆t,

for all α, β ∈ R.
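On a time scale consisting only of isolated points, the ∆-integral reduces to the finite sum of f(t)µ(t) over the points of [a, b)T (a standard fact, used here only for a numerical sketch; the function names and the finite truncation are ours). This makes the additivity property of Theorem 2.18 easy to verify directly:

```python
# A sketch of the Delta-integral over an isolated time scale: the sum of
# f(t) * mu(t) over the points t in [a, b)_T, with mu read off from the
# sorted list of points.

def delta_integral(f, T, a, b):
    T = sorted(T)
    total = 0.0
    for t, t_next in zip(T, T[1:]):
        if a <= t < b:
            total += f(t) * (t_next - t)      # f(t) * mu(t)
    return total

T = [0.5 * k for k in range(0, 11)]            # [0, 5] inside hZ, h = 0.5
f = lambda t: t

# Additivity from Theorem 2.18: the integral over [0, 4] splits at c = 2.
whole = delta_integral(f, T, 0.0, 4.0)
parts = delta_integral(f, T, 0.0, 2.0) + delta_integral(f, T, 2.0, 4.0)
assert abs(whole - parts) < 1e-12
print(whole)  # prints 7.0: sum of t * 0.5 for t = 0, 0.5, ..., 3.5
```

Linearity of the integral follows from the same representation, since each term of the sum is linear in f.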
Chapter 3

Correspondences between equations

In this chapter, we present the correspondence between the solutions of a measure differential equation and the solutions of a generalized ODE. Furthermore, we prove that dynamic equations on time scales can be viewed as measure differential equations (we write MDE, for short). These correspondences are fundamental to our purposes.

We divide this chapter into three sections. In the first section, we discuss the main properties of measure differential equations (MDEs). In the second section, we present the correspondence between MDEs and generalized ODEs. In the third section, we remind the reader that dynamic equations on time scales are a special class of measure differential equations. The main references for this chapter are [10, 20, 21, 38, 42].
3.1 Measure Differential Equations
Throughout this section, let Rn be the n-dimensional Euclidean space with norm | · |. Consider an open set O ⊂ Rn. Also, let f : O × [t0,+∞) → Rn and g : [t0,+∞) → R be functions.

In this work, we consider the integral form of a measure differential equation of the type

x(t) = x(τ0) + ∫_{τ0}^t f(x(s), s)dg(s), t ≥ τ0,   (3.1)

where τ0 ≥ t0 and the integral on the right-hand side is in the sense of Kurzweil-Henstock-Stieltjes, taken with respect to g : [t0,+∞) → R, which is a nondecreasing and left-continuous function.
Since these equations are very important in applications, their qualitative properties have been investigated by different authors (see, for instance, [9, 10, 32, 40, 41]). It is a known fact that these equations encompass, in integral form, ordinary and impulsive differential equations.
Remark 3.1. Let I ⊂ [t0,+∞) be a nondegenerate interval. Then the function x : I → Rn is a solution of the measure differential equation (3.1) if, and only if, (x(t), t) ∈ O × I for all t ∈ I and

x(s2) − x(s1) = ∫_{s1}^{s2} f(x(s), s)dg(s),

for every s1, s2 ∈ I, where the integral on the right-hand side is in the sense of Kurzweil-Henstock-Stieltjes.
3.2 Measure differential equations and Generalized ODEs
In this section, we present a correspondence inspired by a result found in [38, Theorem 5.17]. Such a correspondence describes the relation between the solutions of a measure differential equation and the solutions of a generalized ODE. This correspondence is crucial to prove our main results.

We will show that, under certain assumptions, we can establish a correspondence between the solutions of the integral form of a measure differential equation of type

x(t) = x(τ0) + ∫_{τ0}^t f(x(s), s)dg(s), t ≥ τ0,

and the solutions of the generalized ODE

dx/dτ = DF(x, t),

where F is given by F(x, t) = ∫_{τ0}^t f(x, s)dg(s), for x ∈ O and t ∈ [t0,+∞).
We remind the reader that G([t0,+∞), Rn) denotes the vector space of functions x : [t0,+∞) → Rn such that x|[α,β] belongs to the space G([α, β], Rn), for all [α, β] ⊂ [t0,+∞). We use the symbol G0([t0,+∞), Rn) to denote the vector space of all functions x ∈ G([t0,+∞), Rn) such that sup_{s∈[t0,+∞)} e^{−(s−t0)}|x(s)| < +∞. This space is endowed with the norm

‖x‖[t0,+∞) = sup_{s∈[t0,+∞)} e^{−(s−t0)}|x(s)|, x ∈ G0([t0,+∞), Rn),
and it becomes a Banach space (see [28]).
We will use the notation x ∈ G([t0,+∞), O) for a function x ∈ G([t0,+∞),Rn) such that
x(s) ∈ O, for all s ∈ [t0,+∞). The notation x ∈ G0([t0,+∞), O) is defined in a similar way.
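The exponential weight in the norm above is what allows unbounded (at most exponentially growing) functions into G0. The following Python sketch (ours; it evaluates the supremum on a finite sample grid, so it is only an approximation of the true norm) illustrates this for a linearly growing function:

```python
import math

# A sketch of the weighted sup-norm on G0([t0, +inf), R^n), approximated on
# a finite grid. The weight e^{-(s - t0)} tames growth: the unbounded
# function x(s) = s still has finite weighted norm.

def g0_norm(x, t0, grid):
    return max(math.exp(-(s - t0)) * abs(x(s)) for s in grid)

t0 = 0.0
grid = [0.01 * k for k in range(0, 5001)]      # sample of [0, 50]
x = lambda s: s                                 # unbounded, but s * e^{-s} -> 0
print(g0_norm(x, t0, grid))                     # close to 1/e (max of s e^{-s} at s = 1)
```

The supremum is attained near s = 1, where s·e^{−s} peaks at 1/e ≈ 0.3679, so x belongs to G0 even though sup|x(s)| = +∞.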
In what follows, we say that γ : [t0,+∞) → R+ is locally Kurzweil-Henstock-Stieltjes
integrable with respect to g if, and only if, the function [s1, s2] 3 t 7→ γ(t) is Kurzweil-
Henstock-Stieltjes integrable with respect to g on [s1, s2], for every s1, s2 ∈ [t0,+∞).
Throughout this section, let us assume the following conditions on the functions f : O × [t0,+∞) → Rn and g : [t0,+∞) → R.

(A1) The function g : [t0,+∞) → R is nondecreasing and left-continuous on (t0,+∞).

(A2) The Kurzweil-Henstock-Stieltjes integral

∫_{s1}^{s2} f(x(s), s)dg(s)

exists, for all x ∈ G([t0,+∞), O) and all s1, s2 ∈ [t0,+∞).

(A3) There exists a function M : [t0,+∞) → R+, Kurzweil-Henstock-Stieltjes integrable with respect to g, such that

|∫_{s1}^{s2} f(x(s), s)dg(s)| ≤ ∫_{s1}^{s2} M(s)dg(s),

for all x ∈ G([t0,+∞), O) and all s1, s2 ∈ [t0,+∞), s1 ≤ s2.

(A4) There exists a function L : [t0,+∞) → R+, Kurzweil-Henstock-Stieltjes integrable with respect to g, such that

|∫_{s1}^{s2} [f(x(s), s) − f(z(s), s)]dg(s)| ≤ ‖x − z‖[t0,+∞) ∫_{s1}^{s2} L(s)dg(s),

for all x, z ∈ G0([t0,+∞), O) and all s1, s2 ∈ [t0,+∞), s1 ≤ s2.
The next result ensures that if f : O × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4), and g : [t0,+∞) → R satisfies condition (A1), then the function F given by F(x, t) = ∫_{τ0}^t f(x, s)dg(s), for (x, t) ∈ O × [t0,+∞), belongs to the class F(O × [t0,+∞), h), where h(t) = ∫_{τ0}^t (M(s) + L(s))dg(s), t ∈ [t0,+∞).

Theorem 3.2. Assume f : O × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4), and g : [t0,+∞) → R satisfies condition (A1). Choose an arbitrary τ0 ∈ [t0,+∞) and define
F : O × [t0,+∞) → Rn by

F(x, t) = ∫_{τ0}^t f(x, s)dg(s), (x, t) ∈ O × [t0,+∞).   (3.2)

Then F ∈ F(Ω, h), where Ω = O × [t0,+∞), and h : [t0,+∞) → R, given by

h(t) = ∫_{τ0}^t (M(s) + L(s))dg(s), t ∈ [t0,+∞),   (3.3)

is a nondecreasing function.
Proof. At first, we prove that F is well-defined. Indeed, let t ∈ [t0,+∞) and x ∈ O. Define the function

cx : [t0,+∞) → O, s ↦ cx(s) = x.   (3.4)

Note that cx ∈ G0([t0,+∞), O) (in particular, cx ∈ G([t0,+∞), O)). Thus, condition (A2) implies that ∫_{τ0}^t f(cx(s), s)dg(s) = ∫_{τ0}^t f(x, s)dg(s) exists and, therefore, F is well-defined. On the other hand, since M and L are Kurzweil-Henstock-Stieltjes integrable functions, h is well-defined. Also, by the left-continuity of g, h is a left-continuous function.
Now, for an arbitrary x ∈ O and for t0 ≤ s1 ≤ s2 < +∞, condition (A3) yields

|F(x, s2) − F(x, s1)| = |∫_{τ0}^{s2} f(x, s)dg(s) − ∫_{τ0}^{s1} f(x, s)dg(s)|
= |∫_{s1}^{s2} f(x, s)dg(s)| = |∫_{s1}^{s2} f(cx(s), s)dg(s)|
≤ ∫_{s1}^{s2} M(s)dg(s) ≤ ∫_{s1}^{s2} (M(s) + L(s))dg(s) = h(s2) − h(s1).
Analogously, condition (A4) implies that, if x, y ∈ O and t0 ≤ s1 ≤ s2 < +∞, then

|F(x, s2) − F(x, s1) − F(y, s2) + F(y, s1)|
= |∫_{s1}^{s2} [f(x, s) − f(y, s)]dg(s)| = |∫_{s1}^{s2} [f(cx(s), s) − f(cy(s), s)]dg(s)|
≤ ‖cx − cy‖[t0,+∞) ∫_{s1}^{s2} L(s)dg(s) ≤ |x − y| ∫_{s1}^{s2} L(s)dg(s) = |x − y|(h(s2) − h(s1)).

(Note that ‖cx − cy‖[t0,+∞) = sup_{s∈[t0,+∞)} e^{−(s−t0)}|cx(s) − cy(s)| = sup_{s∈[t0,+∞)} e^{−(s−t0)}|x − y| ≤ |x − y|.)
Remark 3.3. Since g is left-continuous, by relation (3.3), clearly h is left-continuous.
Remark 3.4. Given a compact interval [a, b] ⊂ [t0,+∞) and a function x ∈ G([a, b], O), we identify x with its constant prolongation to the whole interval [t0,+∞), that is, with the function that equals x(a) on [t0, a], equals x(t) on [a, b], and equals x(b) on [b,+∞).   (3.5)

It is clear that x ∈ G0([t0,+∞), O) and ‖x‖[t0,+∞) ≤ ‖x‖∞.
The next result brings an important property of regulated functions on [a, b]. It can be found in [29].

Theorem 3.5. Any function in G([a, b], X) (in particular, any function of bounded variation on [a, b]) can be written as the uniform limit of step functions.
Remark 3.6. Notice that, by Theorem 3.5, given x ∈ G([a, b], X) and ε > 0, there exists a step function ϕ : [a, b] → X such that ‖ϕ(t) − x(t)‖ < ε for every t ∈ [a, b]. It is important to point out that the result is still valid if we replace X by a set O ⊂ X, that is, given a regulated function x : [a, b] → O, there exists a step function φ : [a, b] → O such that ‖φ(t) − x(t)‖ < ε, for every t ∈ [a, b]. In particular, given a regulated function x : [a, b] → O, there exists a sequence of finite step functions φk : [a, b] → O, k = 1, 2, . . ., such that

‖φk − x‖∞ = sup_{s∈[a,b]} |φk(s) − x(s)| → 0, as k → +∞.

For more details, see [42].
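The uniform approximation of Remark 3.6 can be visualized numerically. The following Python sketch (ours; the particular regulated function, the right-endpoint sampling and the uniform mesh aligned with the jump are illustrative choices, not from the text) builds step approximants whose uniform error shrinks with the mesh:

```python
import math

# A sketch of Theorem 3.5 / Remark 3.6: a left-continuous regulated function
# with one jump, approximated uniformly by step functions that copy the
# value of x at the right endpoint of each uniform subinterval.

def step_approx(x, a, b, k):
    """Step function equal to x at the right endpoint of each of k uniform
    subintervals (a natural choice when x is left-continuous)."""
    w = (b - a) / k
    def phi(s):
        i = min(max(1, math.ceil((s - a) / w)), k)   # subinterval (a+(i-1)w, a+iw]
        return x(a + i * w)
    return phi

x = lambda s: s if s <= 0.5 else s + 1.0       # regulated: one jump at s = 0.5
a, b = 0.0, 1.0
errs = []
for k in [4, 16, 64]:                          # meshes aligned with the jump
    phi = step_approx(x, a, b, k)
    grid = [j / 1000.0 for j in range(1001)]
    errs.append(max(abs(phi(s) - x(s)) for s in grid))
assert errs[0] > errs[1] > errs[2]             # uniform error shrinks like (b-a)/k
print(errs)
```

Because the jump point is a mesh point, the uniform error is bounded by the mesh width; for a general regulated function one refines the mesh near each (of the at most countably many) jumps.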
The next result describes the relation between the Kurzweil-Henstock-Stieltjes integral and the Kurzweil integral. A similar version of this result was proved in [38, Proposition 5.12] for the case where f satisfies the usual Carathéodory conditions. Also, [38, Proposition 5.12] describes the relation between the Lebesgue-Stieltjes integral and the Kurzweil integral.

Theorem 3.7. Assume f : O × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4), and g : [t0,+∞) → R satisfies condition (A1). Let F : O × [t0,+∞) → Rn be defined by (3.2). If [a, b] ⊂ [t0,+∞) and x : [a, b] → O is a regulated function (in particular, a function of bounded variation), then both the Kurzweil integral ∫_a^b DF(x(τ), t) and the Kurzweil-Henstock-Stieltjes integral ∫_a^b f(x(s), s)dg(s) exist and have the same value.
Proof. Let F : O × [t0,+∞) → Rn be defined by (3.2). By Theorem 3.2, F ∈ F(Ω, h), where Ω = O × [t0,+∞) and h is defined by (3.3). Now, suppose x : [a, b] → O is a regulated function. Then, by Proposition 1.18, the Kurzweil integral ∫_a^b DF(x(τ), t) exists. Also, condition (A2) ensures that ∫_a^b f(x(s), s)dg(s) exists for the constant prolongation of x given in Remark 3.4; since this prolongation coincides with x on [a, b], the integral ∫_a^b f(x(s), s)dg(s) exists.

We will show that ∫_a^b DF(x(τ), t) = ∫_a^b f(x(s), s)dg(s).
Assertion 1. For any finite step function ϕ : [a, b] → O, we have

∫_a^b f(ϕ(s), s)dg(s) = ∫_a^b DF(ϕ(τ), t).

Indeed, let ϕ : [a, b] → O be a finite step function. Then ϕ is a regulated function on [a, b] and, therefore, the Kurzweil integral ∫_a^b DF(ϕ(τ), t) exists and the Kurzweil-Henstock-Stieltjes integral ∫_a^b f(ϕ(s), s)dg(s) also exists.

Since ϕ is a step function, there exists a division

a = s0 < s1 < . . . < sn = b

and vectors c1, c2, . . . , cn ∈ Rn such that

ϕ(s) = ck for every s ∈ (sk−1, sk), k = 1, 2, . . . , n.

It is easy to check that, if sk−1 < σ1 < σ2 < sk, then

∫_{σ1}^{σ2} DF(ϕ(τ), t) = ∫_{σ1}^{σ2} DF(ck, t) = F(ck, σ2) − F(ck, σ1).
Since

F(ck, σ2) − F(ck, σ1) = ∫_{τ0}^{σ2} f(ck, s)dg(s) − ∫_{τ0}^{σ1} f(ck, s)dg(s)
= ∫_{σ1}^{σ2} f(ck, s)dg(s) = ∫_{σ1}^{σ2} f(ϕ(s), s)dg(s),

we have

∫_{σ1}^{σ2} f(ϕ(s), s)dg(s) = F(ck, σ2) − F(ck, σ1) = ∫_{σ1}^{σ2} DF(ϕ(τ), t).
Let k ∈ {1, 2, . . . , n} and suppose σ ∈ (sk−1, sk) is given. By Theorem 1.8, we obtain

∫_{sk−1}^{σ} f(ϕ(s), s)dg(s) − f(ϕ(sk−1), sk−1)∆⁺g(sk−1) = lim_{ξ→sk−1+} ∫_{ξ}^{σ} f(ϕ(s), s)dg(s)
= lim_{ξ→sk−1+} [F(ck, σ) − F(ck, ξ)]
= F(ck, σ) − F(ck, sk−1+),

that is,

∫_{sk−1}^{σ} f(ϕ(s), s)dg(s) = F(ck, σ) − F(ck, sk−1+) + f(ϕ(sk−1), sk−1)∆⁺g(sk−1).   (3.6)
On the other hand, according to Theorem 1.6, we get

∫_{sk−1}^{σ} DF(ϕ(τ), t) = lim_{ξ→sk−1+} (∫_{ξ}^{σ} DF(ϕ(τ), t) + F(ϕ(sk−1), ξ) − F(ϕ(sk−1), sk−1))
= lim_{ξ→sk−1+} (∫_{ξ}^{σ} DF(ck, t) + ∫_{τ0}^{ξ} f(ϕ(sk−1), s)dg(s) − ∫_{τ0}^{sk−1} f(ϕ(sk−1), s)dg(s))
= lim_{ξ→sk−1+} (F(ck, σ) − F(ck, ξ) + ∫_{sk−1}^{ξ} f(ϕ(sk−1), s)dg(s))
= F(ck, σ) − F(ck, sk−1+) + f(ϕ(sk−1), sk−1)∆⁺g(sk−1).   (3.7)

Notice that the last equality follows directly from item (i) of Corollary 1.8. Hence, by (3.6) and (3.7),

∫_{sk−1}^{σ} f(ϕ(s), s)dg(s) = ∫_{sk−1}^{σ} DF(ϕ(τ), t),

for every k ∈ {1, 2, . . . , n} and for all σ ∈ (sk−1, sk).
Analogously, we can prove that

∫_{σ}^{sk} f(ϕ(s), s)dg(s) = ∫_{σ}^{sk} DF(ϕ(τ), t),

for each k ∈ {1, 2, . . . , n} and for each σ ∈ (sk−1, sk). Therefore, we get

∫_a^b f(ϕ(s), s)dg(s) = Σ_{k=1}^n ∫_{sk−1}^{sk} f(ϕ(s), s)dg(s)
= Σ_{k=1}^n ∫_{sk−1}^{sk} DF(ϕ(τ), t) = ∫_a^b DF(ϕ(τ), t),

that is,

∫_a^b f(ϕ(s), s)dg(s) = ∫_a^b DF(ϕ(τ), t).
This proves Assertion 1.
Since x : [a, b] → O is a regulated function, by Theorem 3.5 (see Remark 3.6), there exists a sequence of finite step functions ϕk : [a, b] → O, k = 1, 2, . . ., such that

‖ϕk − x‖∞ = sup_{s∈[a,b]} |ϕk(s) − x(s)| → 0, as k → +∞.

By Assertion 1,

∫_a^b f(ϕk(s), s)dg(s) = ∫_a^b DF(ϕk(τ), t).   (3.8)
Now, using Theorem 1.20, we have

lim_{k→∞} ∫_a^b DF(ϕk(τ), t) = ∫_a^b DF(x(τ), t).   (3.9)

Then, by (3.8) and (3.9), we obtain

lim_{k→∞} ∫_a^b f(ϕk(s), s)dg(s) = lim_{k→∞} ∫_a^b DF(ϕk(τ), t) = ∫_a^b DF(x(τ), t).   (3.10)
On the other hand, condition (A4) and Remark 3.4 imply

|∫_a^b [f(ϕk(s), s) − f(x(s), s)]dg(s)| ≤ ‖ϕk − x‖[t0,+∞) ∫_a^b L(s)dg(s) ≤ ‖ϕk − x‖∞ ∫_a^b L(s)dg(s) → 0, as k → +∞,

where ϕk and x are identified with their constant prolongations as in Remark 3.4. That is,

lim_{k→∞} ∫_a^b f(ϕk(s), s)dg(s) = ∫_a^b f(x(s), s)dg(s).   (3.11)
Finally, by (3.10) and (3.11),

∫_a^b DF(x(τ), t) = ∫_a^b f(x(s), s)dg(s),

obtaining the desired result.
The next result gives us a correspondence between the solutions of the integral form of an MDE of type

x(t) = x(τ0) + ∫_{τ0}^t f(x(s), s)dg(s), t ≥ τ0,   (3.12)

and the solutions of the generalized ODE

dx/dτ = DF(x, t),

where F is given by F(x, t) = ∫_{τ0}^t f(x, s)dg(s). This result is inspired by [38, Theorem 5.17]. Since it is crucial to our main results, we repeat its proof here, following the ideas of [38].
Theorem 3.8. Assume f : O × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4), and g : [t0,+∞) → R satisfies condition (A1). Then, the function x : I → Rn is a solution of the MDE (3.12) on I ⊂ [t0,+∞) if, and only if, x is a solution of the generalized ODE

dx/dτ = DF(x, t)

on I, where the function F is given by (3.2).
Proof. Let x : I → Rn be a solution of the MDE (3.12) on I ⊂ [t0,+∞). Then, given s1, s2 ∈ I, we have

x(t) − x(s1) = ∫_{s1}^t f(x(s), s)dg(s), for all t ∈ [s1, s2].   (3.13)

Now, from Theorem 1.8, [s1, s2] ∋ t ↦ ∫_{s1}^t f(x(s), s)dg(s) is a regulated function and, therefore, x|[s1,s2] is also a regulated function, for any s1, s2 ∈ I.

Let F : O × [t0,+∞) → Rn be defined by (3.2). By Theorem 3.7, given s1, s2 ∈ I, the integrals ∫_{s1}^{s2} DF(x(τ), t) and ∫_{s1}^{s2} f(x(s), s)dg(s) exist and

∫_{s1}^{s2} DF(x(τ), t) = ∫_{s1}^{s2} f(x(s), s)dg(s).
Thus,

x(s2) − x(s1) = ∫_{s1}^{s2} f(x(s), s)dg(s) = ∫_{s1}^{s2} DF(x(τ), t).

This shows that x is a solution of the generalized ODE

dx/dτ = DF(x, t).
Conversely, let x : I → Rn be a solution of the generalized ODE dx/dτ = DF(x, t) on I, with the function F given by (3.2).

Let s1, s2 ∈ I be such that s1 ≤ s2. According to Theorem 3.2, F ∈ F(Ω, h), where h is a nondecreasing function. Thus, by Corollary 1.17, x|[s1,s2] is a function of bounded variation on [s1, s2]. Again using Theorem 3.7, we have

x(s2) − x(s1) = ∫_{s1}^{s2} DF(x(τ), t) = ∫_{s1}^{s2} f(x(s), s)dg(s).

This shows that x is a solution of the MDE (3.12), which completes the proof.
We finish this section by presenting an example which illustrates the previous theorem.
Example 3.9. Let R be the 1-dimensional Euclidean space with norm | · | (absolute value). Consider the MDE in its integral form

x(t) = x(0) + ∫_0^t f(x(s), s)dg(s), t ≥ 0,   (3.14)

where f : R × [0,+∞) → R and g : [0,+∞) → R are given, respectively, by

f(x, t) = {t} sin x / (4^t + [|t|]),

for all (x, t) ∈ R × [0,+∞), and

g(t) = t, for t ∈ [0, 1], and g(t) = t + 1, for t ∈ (1,+∞),

where the symbol [|t|] denotes the integer part of t and the symbol {t} := t − [|t|] denotes the fractional part of t.
We will show that f satisfies conditions (A2), (A3), (A4), and g satisfies condition (A1). Indeed, clearly g satisfies condition (A1). Consider an arbitrary x ∈ G([0,+∞), R) and let s1, s2 ∈ [0,+∞) be such that s1 ≤ s2. Notice that [s1, s2] ∋ t ↦ f(x(t), t) is a regulated function on [s1, s2]. Thus, by Theorem 1.9 (item (i)), ∫_{s1}^{s2} f(x(s), s)dg(s) exists and, therefore, f satisfies (A2).

Now, define M : [0,+∞) → R by M(t) = {t}, for t ∈ [0,+∞). Evidently, M is locally Kurzweil-Henstock-Stieltjes integrable with respect to g and
|∫_{s1}^{s2} f(x(s), s)dg(s)| ≤ ∫_{s1}^{s2} |f(x(s), s)|dg(s)   (by Theorem 1.9)
= ∫_{s1}^{s2} |{s} sin(x(s)) / (4^s + [|s|])| dg(s)
≤ ∫_{s1}^{s2} {s} dg(s) = ∫_{s1}^{s2} M(s)dg(s),

for every x ∈ G([0,+∞), R) and s1, s2 ∈ [0,+∞), s1 ≤ s2. Thus, f satisfies condition (A3).
On the other hand, define L : [0,+∞) → R by L(t) := M(t), for t ∈ [0,+∞). Then, since 4^s + [|s|] ≥ 4^s ≥ e^s for s ≥ 0 and |sin a − sin b| ≤ |a − b|, we get

|∫_{s1}^{s2} [f(x(s), s) − f(y(s), s)]dg(s)| ≤ ∫_{s1}^{s2} |f(x(s), s) − f(y(s), s)|dg(s)   (by Theorem 1.9)
≤ ∫_{s1}^{s2} {s} e^{−s} |sin(x(s)) − sin(y(s))| dg(s)
≤ ∫_{s1}^{s2} L(s) e^{−s} |x(s) − y(s)| dg(s)
≤ ∫_{s1}^{s2} L(s) ‖x − y‖[0,+∞) dg(s)
= ‖x − y‖[0,+∞) ∫_{s1}^{s2} L(s)dg(s),

for all x, y ∈ G0([0,+∞), R) and s1, s2 ∈ [0,+∞), s1 ≤ s2. Hence, f and g fulfill all the hypotheses of Theorem 3.8.
To obtain the corresponding generalized ODE, we choose an arbitrary τ0 ∈ [0,+∞) and define

F(x, t) := ∫_{τ0}^t f(x, s)dg(s),   (3.15)

for (x, t) ∈ R × [0,+∞). Notice that

F(x, t) = ϕ(t) sin x,

where

ϕ(t) = ∫_{τ0}^t {s} / (4^s + [|s|]) dg(s),

for t ∈ [0,+∞).
On the other hand, if x : I → R, where 0 ∈ I ⊂ [0,+∞), is a solution of the MDE (3.14), then, by Theorem 3.8, x is also a solution of the generalized ODE dx/dτ = DF(x, t), with the function F given by (3.15), and, therefore,

x(ξ) = x(0) + ∫_0^ξ DF(x(τ), t) = x(0) + ∫_0^ξ D[ϕ(t) sin x(τ)] = x(0) + ∫_0^ξ sin x(s) dϕ(s),

for all ξ ∈ I.
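The coefficient ϕ from Example 3.9 can be approximated numerically. The Python sketch below (ours; the discretization, τ0 = 0 and the function names are illustrative) uses the structure of g: away from t = 1, dg = ds, and at t = 1 the integrand picks up a jump term w(1)·(g(1+) − g(1)), which here vanishes because {1} = 0:

```python
import math

# A numerical sketch of phi(t) = ∫_0^t {s} / (4^s + [|s|]) dg(s) from
# Example 3.9, with tau_0 = 0. Against this g, integration means: integrate
# against ds on [0, t], plus the jump term w(1) * 1 when t > 1 (zero here).

def w(s):
    """Integrand {s} / (4^s + [|s|]): fractional part over 4^s + integer part."""
    return (s - math.floor(s)) / (4.0 ** s + math.floor(s))

def phi(t, n=20000):
    step = t / n
    cont = sum(w((k + 0.5) * step) for k in range(n)) * step   # midpoint rule
    jump = w(1.0) * 1.0 if t > 1.0 else 0.0                    # unit jump of g at s = 1
    return cont + jump

print(phi(2.0))
```

Since w(1) = 0, the jump of g contributes nothing and ϕ is in fact continuous for this particular integrand; for an integrand not vanishing at s = 1, the jump term would produce a discontinuity of ϕ, and hence of the solution x, at t = 1.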
3.3 Dynamic equations on time scales and measure differential equations
In this section, we recall some definitions and concepts concerning dynamic equations on
time scales and prove that these equations are a special case of MDEs.
We recall that, given a time scale T, for each pair of numbers a, b ∈ T, a ≤ b, we define the closed time scale interval [a, b]T = [a, b] ∩ T. The open and half-open intervals are defined in a similar way.
Let T be a time scale. The next notation is borrowed from [42]. Given a real number t ≤ sup T, define

t* = inf{s ∈ T : s ≥ t}.   (3.16)

By this definition, t ≤ t*, for every real number t ≤ sup T, and t* = t for any t ∈ T. Note that t* may differ from σ(t). The following example illustrates this fact.
Example 3.10. Let T = N and t ∈ T. Then

t* = inf{s ∈ T : s ≥ t} = inf{t, t + 1, t + 2, . . .} = t

and

σ(t) = inf{s ∈ T : s > t} = inf{t + 1, t + 2, t + 3, . . .} = t + 1.

Hence, t* ≠ σ(t).
Coming back to time scales, since T is a closed set, we have t* ∈ T. Now, define the extension of T by

T* = (−∞, sup T], if sup T < ∞, and T* = (−∞,+∞), otherwise.   (3.17)
Given a function f : T→ Rn, we define its extension f ∗ : T∗ → Rn by
f ∗(t) = f(t∗), t ∈ T∗.
Analogously, for each function f : B × T→ Rn, B ⊂ Rn, we define f ∗ : B × T∗ → Rn by
f ∗(x, t) = f(x, t∗), x ∈ B, t ∈ T∗.
Note that, while the function f is defined on T, its extension f ∗ is defined on the whole
interval (−∞, supT], whenever supT < +∞, and (−∞,+∞), whenever supT = +∞.
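The operator t ↦ t* and the extension f* are straightforward to compute. The following Python sketch (ours; it uses a finite truncation of T = N, as in Example 3.10) makes the difference between t* and σ(t) concrete:

```python
# A sketch of t -> t* = inf{s in T : s >= t} and of the extension
# f*(t) = f(t*), on a finite sample of a time scale stored as a list.

def star(T, t):
    """t* = inf{s in T : s >= t}; requires t <= max(T)."""
    return min(s for s in T if s >= t)

T = [0, 1, 2, 3, 4, 5]                 # T = N, truncated (Example 3.10)
assert star(T, 2) == 2                 # t* = t for t in T, unlike sigma(2) = 3
assert star(T, 2.3) == 3               # t* rounds up to the next point of T

f = lambda t: t ** 2                   # f defined on T
f_star = lambda t: f(star(T, t))       # extension f* to (-inf, sup T]
assert f_star(2.3) == 9                # f*(2.3) = f(3)
```

Note how f* is constant on each gap of T and agrees with f on T, which is exactly what makes it regulated (Lemma 3.11) and makes g(t) = t* a nondecreasing, left-continuous integrator (Lemma 3.12).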
Let a, b ∈ T, a ≤ b, and f : [a, b]T → Rn be a function. Notice that if t ∈ [a, b], then t∗
is well-defined and t∗ ∈ [a, b]T, since a∗ = a ≤ t ≤ b = b∗ ≤ supT. Thus, we conclude that
f ∗ : [a, b]→ Rn given by f ∗(t) = f(t∗), for every t ∈ [a, b], is well-defined.
On the other hand, given t0 ∈ T and a function f : [t0,+∞)T → Rn, there may exist points t ∈ [t0,+∞) such that t > sup T, in which case t* is not well-defined. Thus, in order to ensure that the function t ↦ t* is well-defined for t ∈ [t0,+∞), it is necessary to require that sup T = +∞.
In this work, we consider a dynamic equation on time scales of the type

x∆(t) = f(x*, t),   (3.18)

where f : O × [t0,+∞)T → Rn and O ⊂ Rn is an open subset. Also, x∆ denotes the ∆-derivative of x, as defined in Chapter 2. The integral form of the dynamic equation on time scales (3.18) is given by

x(t) − x(t0) = ∫_{t0}^t f(x*(s), s)∆s, t ≥ t0,   (3.19)

where the integral on the right-hand side is the Kurzweil-Henstock ∆-integral.
The next result describes the properties of the extension f ∗, based on the properties of
f . This result can be found in [42, Lemma 4].
Lemma 3.11. If f : T → Rn is a regulated function, then f* : T* → Rn is also regulated. If f is left-continuous on T, then f* is left-continuous on T*. If f is right-continuous on T, then f* is right-continuous at right-dense points of T*.
Lemma 3.12. Let T be a time scale such that sup T = +∞, and let t0 ∈ T. Let g : [t0,+∞) → R be given by g(t) = t*, for all t ∈ [t0,+∞). Then g satisfies the following conditions:

(i) g is a nondecreasing function;

(ii) g is left-continuous on (t0,+∞).

Proof. Item (i) follows immediately from the definition of the function g. On the other hand, item (ii) is an immediate consequence of Lemma 3.11, applied to the identity function i : [t0,+∞)T → R given by i(t) = t.
The next result describes a correspondence between Kurzweil-Henstock-Stieltjes integrals
and Kurzweil-Henstock ∆-integrals. This result is very important to establish a correspon-
dence between dynamic equations on time scales and measure differential equations. It can
be found in [21, Theorem 4.2].
Theorem 3.13. Let T be a time scale and f : [a, b]T → Rn be a function. Define g(t) = t*, for every t ∈ [a, b]. Then, the Kurzweil-Henstock ∆-integral ∫_a^b f(t)∆t exists if, and only if, the Kurzweil-Henstock-Stieltjes integral ∫_a^b f*(t)dg(t) exists; in this case, both integrals have the same value.
The following result is a consequence of [21, Lemma 4.4]. We repeat its proof here,
following the ideas of [21].
Theorem 3.14. Let T be a time scale such that sup T = +∞, t0 ∈ T and g(t) = t* for every t ∈ [t0,+∞). If f : [t0,+∞) → Rn is such that the Kurzweil-Henstock-Stieltjes integral ∫_c^d f(t)dg(t) exists for every c, d ∈ [t0,+∞), then

∫_c^d f(t)dg(t) = ∫_{c*}^{d*} f(t)dg(t),

for every t0 ≤ c < d < ∞, where the integral on the right-hand side is in the sense of Kurzweil-Henstock-Stieltjes and c*, d* are defined as in (3.16).
3.3 Dynamic equation on time scales and measure differential equations 39
Proof. Let c, d ∈ [t0,+∞) be such that c < d. We will show that g is constant on [c, c∗] and [d, d∗]. Indeed, for any s ∈ [c, c∗] and t ∈ [d, d∗], statement (i) from Lemma 3.12 implies that

g(c) ≤ g(s) ≤ g(c∗) and g(d) ≤ g(t) ≤ g(d∗).

Since g(c) = g(c∗) = c∗ and g(d) = g(d∗) = d∗, we obtain g(s) = c∗, for all s ∈ [c, c∗], and g(t) = d∗, for all t ∈ [d, d∗].

Now, using the definition of the Kurzweil-Henstock-Stieltjes integral and the fact that g is constant on [c, c∗] and [d, d∗], we obtain ∫_c^{c∗} f(t)dg(t) = 0 and ∫_d^{d∗} f(t)dg(t) = 0. Hence

∫_c^d f(t)dg(t) = ∫_c^{c∗} f(t)dg(t) + ∫_{c∗}^d f(t)dg(t) + ∫_d^{d∗} f(t)dg(t) = ∫_{c∗}^{d∗} f(t)dg(t)

and this completes the proof.
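The endpoint-shifting identity of Theorem 3.14 can also be checked numerically for T = Z (again an assumption made only for this sketch): with g = ⌈·⌉, a left-tagged Riemann-Stieltjes sum with mesh smaller than 1 is exact, and moving the endpoints c, d to c∗ = ⌈c⌉, d∗ = ⌈d⌉ does not change the value, precisely because g is constant on [c, c∗] and [d, d∗].

```python
# Numerical illustration of Theorem 3.14 for T = Z, g(t) = t* = ceil(t).
import math
from fractions import Fraction

def f(k):
    return 1.0 / (1 + k * k)           # sample integrand (arbitrary choice)

def f_star(t):                         # extension: f*(t) = f(t*) = f(ceil(t))
    return f(math.ceil(t))

def stieltjes(c, d, m):
    # Left-tagged Riemann-Stieltjes sum for ∫_c^d f*(t) dg(t), g = ceil,
    # over a uniform partition with m subintervals; exact rational endpoints
    # keep ceil() free of floating-point artifacts.
    c, d = Fraction(c), Fraction(d)
    h = (d - c) / m
    total = 0.0
    for i in range(m):
        t0, t1 = c + i * h, c + (i + 1) * h
        total += f_star(t0) * (math.ceil(t1) - math.ceil(t0))
    return total

c, d = Fraction(1, 2), Fraction(5, 2)  # c* = 1, d* = 3
lhs = stieltjes(c, d, 40)              # mesh 1/20 < 1, so the sum is exact here
rhs = stieltjes(1, 3, 40)
assert abs(lhs - rhs) < 1e-12          # both equal f(1) + f(2)
```

Both sums pick up exactly the unit jumps of g located just after the integers 1 and 2, so shifting [1/2, 5/2] to [1, 3] changes nothing, in agreement with the theorem.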
The next result describes the relation between the Kurzweil-Henstock ∆-integral and the Kurzweil-Henstock-Stieltjes integral, where the function f is defined on an unbounded time scale interval. This result can be found in [21, Theorem 4.5].
Theorem 3.15. Let T be a time scale such that sup T = +∞, t0 ∈ T and f : [t0,+∞)T → Rn be a function such that the Kurzweil-Henstock ∆-integral

∫_a^b f(s)∆s

exists for every a, b ∈ [t0,+∞)T with a < b. Choose an arbitrary a ∈ [t0,+∞)T and define

F1(t) = ∫_a^t f(s)∆s, t ∈ [t0,+∞)T,

F2(t) = ∫_a^t f∗(s)dg(s), t ∈ [t0,+∞),

where g(s) = s∗, for every s ∈ [t0,+∞). Then F2 = F1∗. In particular, F2(t) = F1(t) for all t ∈ [t0,+∞)T.
Proof. Let t ∈ [t0,+∞). Then, by Theorems 3.13 and 3.14 and the fact that a = a∗, we have

F2(t) = ∫_a^t f∗(s)dg(s) = ∫_{a∗}^{t∗} f∗(s)dg(s) = ∫_a^{t∗} f∗(s)dg(s) = ∫_a^{t∗} f(s)∆s = F1(t∗) = F1∗(t)

and the proof is finished.
The next result is essential to prove that dynamic equations on time scales are a special
case of MDEs. A proof of it can be found in [20, Theorem 4.2].
Theorem 3.16. Let T be a time scale, g : [a, b] → R be defined by g(s) = s∗, for every s ∈ [a, b], and [a, b] ⊂ T∗. Consider a pair of functions f1, f2 : [a, b] → Rn such that f1(t) = f2(t) for every t ∈ [a, b] ∩ T. If the Kurzweil-Henstock-Stieltjes integral ∫_a^b f1(s)dg(s) exists, then the Kurzweil-Henstock-Stieltjes integral ∫_a^b f2(s)dg(s) also exists and both integrals have the same value.
Proof. Suppose the Kurzweil-Henstock-Stieltjes integral ∫_a^b f1(s)dg(s) exists. Given ε > 0, there is a gauge δ : [a, b] → R+ such that

‖ Σ_{i=1}^{|D|} f1(τi)(g(ti) − g(ti−1)) − ∫_a^b f1(s)dg(s) ‖ < ε,

for every δ-fine tagged division D = ([ti−1, ti], τi) of [a, b]. Let us define a gauge δ̃ : [a, b] → R+ by

δ̃(t) = δ(t), for t ∈ [a, b] ∩ T,
δ̃(t) = min{δ(t), (1/2) inf{|t − s| : s ∈ T}}, for t ∈ [a, b]\T.

Note that δ̃(t) ≤ δ(t) for any t ∈ [a, b]. Then, evidently, every δ̃-fine tagged division D of [a, b] is also δ-fine. Consider an arbitrary δ̃-fine tagged division D = ([ti−1, ti], τi) of [a, b]. For every i ∈ {1, 2, . . . , |D|}, there are two possibilities:

either [ti−1, ti] ∩ T = ∅ or τi ∈ T.
In the first case, g is constant on [ti−1, ti]. Consequently, g(ti−1) = g(ti) and, therefore,

f1(τi)(g(ti) − g(ti−1)) = 0 = f2(τi)(g(ti) − g(ti−1)).

In the second case, f1(τi) = f2(τi) and

f1(τi)(g(ti) − g(ti−1)) = f2(τi)(g(ti) − g(ti−1)).

In any case,

f1(τi)(g(ti) − g(ti−1)) = f2(τi)(g(ti) − g(ti−1)), for i = 1, 2, . . . , |D|.
Thus, we have

‖ Σ_{i=1}^{|D|} f2(τi)(g(ti) − g(ti−1)) − ∫_a^b f1(s)dg(s) ‖ = ‖ Σ_{i=1}^{|D|} f1(τi)(g(ti) − g(ti−1)) − ∫_a^b f1(s)dg(s) ‖ < ε.

Since ε > 0 can be arbitrarily small, we conclude that ∫_a^b f2(s)dg(s) exists and ∫_a^b f2(s)dg(s) = ∫_a^b f1(s)dg(s).
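The mechanism behind Theorem 3.16 — only tags lying in T are ever multiplied by a nonzero increment of g — can be emulated numerically for T = Z (an assumption for this sketch): two integrands that agree on Z but differ off it produce the same tagged sums, once the partition nodes include the integers.

```python
# Sketch of Theorem 3.16 for T = Z, g(t) = t* = ceil(t).
import math

def f1(t):
    return math.cos(t)

def f2(t):
    return math.cos(t) + math.sin(math.pi * t)  # sin(pi*k) = 0 on the integers

def tagged_sum(fn, a, b, m):
    # Left-tagged sum of fn dg on [a, b] (a, b integers), g = ceil, step 1/m.
    # On the jump subintervals the left tag is exactly the integer g0 in T,
    # mirroring the refined gauge delta-tilde used in the proof.
    total = 0.0
    for i in range((b - a) * m):
        g0 = a + (i + m - 1) // m       # ceil of the left node a + i/m
        g1 = a + (i + m) // m           # ceil of the right node a + (i+1)/m
        if g1 > g0:                     # jump subinterval: left node is g0 in Z
            total += fn(g0) * (g1 - g0)
    return total

s1 = tagged_sum(f1, 0, 6, 25)
s2 = tagged_sum(f2, 0, 6, 25)
assert abs(s1 - s2) < 1e-9              # f1 and f2 agree on Z, so sums agree
```

Changing f off T is invisible to these sums, which is exactly the content of the theorem.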
The next result establishes a correspondence between the solutions of the dynamic equation on time scales (3.18) and the solutions of the integral form of an MDE of type

x(t) = x(t0) + ∫_{t0}^t f∗(x(s), s)dg(s), t ≥ t0, (3.20)

where f : O × [t0,+∞)T → Rn and g : [t0,+∞) → R is given by g(s) = s∗. This result is a slightly modified version of [21, Theorem 5.2] (which is the special case where the domain of the solution is a compact interval). The proof from [21] can be carried out without any changes and we reproduce it here.
Theorem 3.17. Let T be a time scale such that sup T = +∞ and [t0,+∞)T be a time scale interval. Let O ⊂ Rn be an open subset and f : O × [t0,+∞)T → Rn be a function. Assume that, for every x ∈ G([t0,+∞)T,Rn), the function t ↦ f(x(t), t) is Kurzweil-Henstock ∆-integrable on [s1, s2]T, for every s1, s2 ∈ [t0,+∞)T. Define g : [t0,+∞) → R by g(s) = s∗, for every s ∈ [t0,+∞). Also, let J ⊂ [t0,+∞) be a nondegenerate interval such that J ∩ T is nonempty and, for each t ∈ J, we have t∗ ∈ J ∩ T. If x : J ∩ T → Rn is a solution of the initial value problem given by

x∆(t) = f(x∗, t), t ∈ J ∩ T,
x(s0) = x0, (3.21)

where x0 ∈ O and s0 ∈ J ∩ T, then x∗ : J → Rn is a solution of the initial value problem

y(t) = x0 + ∫_{s0}^t f∗(y(s), s)dg(s) = x0 + ∫_{s0}^t f(y(s), s∗)dg(s), t ∈ J. (3.22)

Conversely, if y : J → Rn satisfies the initial value problem (3.22), then it must have the form y = x∗, where x : J ∩ T → Rn is a solution of the initial value problem (3.21).
Proof. Assume that x : J ∩ T → Rn satisfies the dynamic equation on time scales (3.21). Then

x(t) − x0 = ∫_{s0}^t f(x∗(s), s)∆s, t ∈ J ∩ T.

By Theorem 3.15,

x∗(t) − x0 = ∫_{s0}^t f(x∗(s∗), s∗)dg(s), t ∈ J.

Since f(x∗(s∗), s∗) = f(x∗(s), s∗) for all s ∈ [s0, t] ∩ T, by Theorem 3.16, we get

x∗(t) − x0 = ∫_{s0}^t f(x∗(s), s∗)dg(s), t ∈ J,

that is, x∗ : J → Rn satisfies the initial value problem (3.22), which proves the first part of the theorem.

Conversely, suppose y : J → Rn satisfies the initial value problem (3.22). Then

y(t) = x0 + ∫_{s0}^t f(y(s), s∗)dg(s), t ∈ J.

Note that g is constant on every interval (α, β], where β ∈ T and α = sup{s ∈ T : s < β}. Thus, y has the same property and it follows that y = x∗ for some x : J ∩ T → Rn. By reversing the reasoning from the proof of the first part of the theorem, we conclude that x satisfies the dynamic equation on time scales (3.21).
Corollary 3.18. Let T be a time scale such that sup T = +∞ and [t0,+∞)T be a time scale interval. Let O ⊂ Rn be an open subset and f : O × [t0,+∞)T → Rn be a function. Assume that, for every x ∈ G([t0,+∞)T,Rn), the function t ↦ f(x(t), t) is Kurzweil-Henstock ∆-integrable on [s1, s2]T, for every s1, s2 ∈ [t0,+∞)T. Define g : [t0,+∞) → R by g(s) = s∗, for every s ∈ [t0,+∞). Also, let J ⊂ [t0,+∞) be a nondegenerate interval such that J ∩ T is nonempty and, for each t ∈ J, we have t∗ ∈ J ∩ T. If x : J ∩ T → Rn is a solution of the dynamic equation on time scales given by

x∆(t) = f(x∗, t), t ∈ J ∩ T, (3.23)

then x∗ : J → Rn is a solution of the MDE

y(t2) − y(t1) = ∫_{t1}^{t2} f∗(y(s), s)dg(s), t1, t2 ∈ J. (3.24)

Conversely, if y : J → Rn satisfies the MDE (3.24), then it must have the form y = x∗, where x : J ∩ T → Rn is a solution of (3.23).
Proof. Suppose J ∩ T is nonempty and x : J ∩ T → Rn is a solution of the dynamic equation on time scales x∆(t) = f(x∗, t). Since J ∩ T is nonempty, there exists τ0 ∈ J ∩ T. Hence, x : J ∩ T → Rn is a solution of the initial value problem

x∆(t) = f(x∗, t), t ∈ J ∩ T,
x(τ0) = z0, (3.25)

where z0 := x(τ0). Thus, by Theorem 3.17, x∗ : J → Rn is a solution of

y(t) = z0 + ∫_{τ0}^t f∗(y(s), s)dg(s), t ∈ J. (3.26)

Hence, by Remark 3.1, x∗ : J → Rn satisfies

y(t2) − y(t1) = ∫_{t1}^{t2} f∗(y(s), s)dg(s),

for all t1, t2 ∈ J.

The converse assertion can be proved similarly.
We finish this section by presenting an example which illustrates Theorem 3.16.
Example 3.19. Let R be the 1-dimensional Euclidean space with norm | · | (absolute value) and let T be a time scale such that sup T = +∞ and 0 ∈ T. Consider the dynamic equation on time scales given by

x∆(t) = f(x∗, t), t ∈ [0,+∞)T,
x(0) = x0, (3.27)

where x0 ∈ R and f : R × [0,+∞)T → R is defined by

f(x, t) = 5 cos(2x)/(e^t + t + 1), for all (x, t) ∈ R × [0,+∞)T.
We will show that f satisfies conditions (B1), (B2) and (B3). Indeed, consider an arbitrary y ∈ G([0,+∞)T,R) and let s1, s2 ∈ [0,+∞)T with s1 ≤ s2. Notice that the function

[s1, s2] ∋ t ↦ f(y(t∗), t∗) = 5 cos(2y(t∗))/(e^{t∗} + t∗ + 1) = 5 cos(2y∗(t))/(e^{g(t)} + g(t) + 1)

is regulated on [s1, s2], where g : [0,+∞) → R is given by g(t) = t∗. Thus, by Theorem 1.9 (item (i)), ∫_{s1}^{s2} f(y(t∗), t∗)dg(t) exists and, therefore, according to Theorem 3.13, ∫_{s1}^{s2} f(y(t), t)∆t exists and

∫_{s1}^{s2} f(y(t∗), t∗)dg(t) = ∫_{s1}^{s2} f(y(t), t)∆t.
Now, define M : [0,+∞) → R by M(t) = t + 5, for t ∈ [0,+∞). Evidently, M is a Kurzweil-Henstock-Stieltjes integrable function with respect to g and

|∫_{s1}^{s2} f(y(s), s)∆s| = |∫_{s1}^{s2} f(y(s∗), s∗)dg(s)| ≤ ∫_{s1}^{s2} |5 cos(2y∗(s))/(e^{g(s)} + g(s) + 1)| dg(s) ≤ ∫_{s1}^{s2} 5 dg(s) ≤ ∫_{s1}^{s2} M(s)dg(s),

for every y ∈ G([0,+∞),R) and s1, s2 ∈ [0,+∞)T, s1 ≤ s2. Thus f satisfies condition (B2).
On the other hand, define L : [0,+∞) → R by L(t) := 10, for t ∈ [0,+∞). Then

|∫_{s1}^{s2} [f(y(s), s) − f(w(s), s)]∆s| = |∫_{s1}^{s2} [f(y(s∗), s∗) − f(w(s∗), s∗)]dg(s)|
≤ ∫_{s1}^{s2} |f(y(s∗), s∗) − f(w(s∗), s∗)| dg(s) ≤ ∫_{s1}^{s2} 5e^{−g(s)} |cos(2y∗(s)) − cos(2w∗(s))| dg(s)
≤ ∫_{s1}^{s2} 10e^{−g(s)} |y∗(s) − w∗(s)| dg(s) = ∫_{s1}^{s2} L(s)e^{−s∗} |y(s∗) − w(s∗)| dg(s)
≤ ∫_{s1}^{s2} L(s) ‖y − w‖_{[0,+∞)T} dg(s) = ‖y − w‖_{[0,+∞)T} ∫_{s1}^{s2} L(s)dg(s),

for all y, w ∈ G0([0,+∞)T,R) and s1, s2 ∈ [0,+∞), s1 ≤ s2. Hence, f and g fulfill all the hypotheses of Theorem 3.17.
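The two estimates verified above for Example 3.19 can also be spot-checked numerically; the following sketch (an addition, not from the thesis) samples a grid of points and tests both bounds.

```python
# Spot-check of the estimates used in Example 3.19 for
# f(x, t) = 5 cos(2x) / (e^t + t + 1):
#   |f(x, t)| <= 5                               (used for (B2), M(t) = t + 5)
#   |f(x, t) - f(y, t)| <= 10 e^{-t} |x - y|     (used for (B3), L(t) = 10)
import itertools
import math

def f(x, t):
    return 5 * math.cos(2 * x) / (math.e ** t + t + 1)

xs = [0.37 * i - 4 for i in range(25)]   # sample points in x
ts = [0.5 * j for j in range(15)]        # sample points in t >= 0

# Boundedness: |f| <= 5 (in fact |f| <= 5 / (e^t + t + 1) <= 5/2).
assert all(abs(f(x, t)) <= 5 for x in xs for t in ts)

# Lipschitz-type estimate with weight 10 e^{-t}, since |cos a - cos b| <= |a - b|
# and e^t + t + 1 >= e^t.
for t in ts:
    for x, y in itertools.product(xs, xs):
        assert abs(f(x, t) - f(y, t)) <= 10 * math.exp(-t) * abs(x - y) + 1e-12
print("both estimates hold on the sample grid")
```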
Then the corresponding measure differential equation in integral form is given by

x(t) = x0 + ∫_0^t f∗(x(s), s)dg(s), t ≥ 0, (3.28)

where f∗(x, t) = f(x, t∗) = 5 cos(2x)/(e^{t∗} + t∗ + 1), for all (x, t) ∈ R × [0,+∞).
Now, let J ⊂ [0,+∞) be a nondegenerate interval such that J ∩ T is nonempty and, for each t ∈ J, we have t∗ ∈ J ∩ T. Theorem 3.17 says that if x : J ∩ T → R is a solution of the dynamic equation on time scales (3.27), then the function x∗ : J → R is a solution of the measure differential equation (3.28).

Conversely, every solution y : J → R of the measure differential equation (3.28) has the form y = x∗, where x : J ∩ T → R is a solution of the dynamic equation on time scales (3.27).
Chapter 4

Prolongation of solutions
In this chapter, we present results on the prolongation of solutions of generalized ODEs,
measure differential equations (MDEs) and dynamic equations on time scales.
It is important to mention that, in Section 4.1, we present some new results in the
theory of generalized ODEs. Furthermore, all the results presented in Sections 4.2 and 4.3
of this chapter are new and either they follow by using the correspondence between MDEs
and generalized ODEs or they follow by the correspondence between MDEs and dynamic
equations on time scales (see Chapter 3). These results are contained in [16].
4.1 Prolongation of the solutions of generalized ODEs
In this section, our goal is to prove a result on the prolongation of solutions of an abstract
generalized ODE.
Let X be a Banach space. Consider an open set O ⊂ X, an interval [t0,+∞) ⊂ R and Ω = O × [t0,+∞). Consider the generalized ODE

dx/dτ = DF(x, t), (4.1)

where F ∈ F(Ω, h) and the function h : [t0,+∞) → R is nondecreasing and left-continuous.
The next result ensures the prolongation of a solution of the generalized ODE (4.1). Its proof is inspired by the proof of [38, Proposition 4.15]. Our result generalizes [38, Lemma 4.4]; in fact, [38, Lemma 4.4] can be obtained as a consequence of our result (see Corollary 4.2 in the sequel).
Throughout this section, for t0 < β < ϑ < +∞, define Γ∞_{β,ϑ} := {[β, ϑ], [β, ϑ), [β,+∞)}.
Theorem 4.1. Let F ∈ F(Ω, h), where the function h : [t0,+∞) → R is nondecreasing and left-continuous. If x : [γ, β) → X and y : I → X, I ∈ Γ∞_{β,ϑ}, are solutions of the generalized ODE (4.1) on [γ, β) and I, respectively, where t0 ≤ γ < β < ϑ < +∞, and if the limit lim_{t→β−} x(t) exists and lim_{t→β−} x(t) = y(β), then the function z : [γ, β) ∪ I → X defined by

z(t) = x(t), t ∈ [γ, β),
z(t) = y(t), t ∈ I,

is a solution of the generalized ODE (4.1) on

[γ, β) ∪ I = [γ, ϑ], if I = [β, ϑ],
[γ, β) ∪ I = [γ, ϑ), if I = [β, ϑ),
[γ, β) ∪ I = [γ,+∞), if I = [β,+∞).
Proof. Suppose the limit lim_{t→β−} x(t) exists and

lim_{t→β−} x(t) = y(β). (4.2)

Define z : [γ, β) ∪ I → X by

z(t) = x(t), t ∈ [γ, β),
z(t) = y(t), t ∈ I.

Notice that the function z is well-defined and regulated since, by Lemma 1.17, x and y are regulated functions. We will show that z is a solution of the generalized ODE (4.1) on [γ, β) ∪ I.

Since β is an accumulation point of the set [γ, β), there exists a sequence {tn}n∈N ⊂ [γ, β) such that tn → β as n → ∞. Therefore, by (4.2), we have

lim_{n→∞} x(tn) = y(β). (4.3)
Let s1, s2 ∈ [γ, β) ∪ I be such that s1 ∈ [γ, β) and s2 ∈ I. Then, for n ∈ N sufficiently large, we have tn ∈ (s1, β) and

∫_{s1}^{s2} DF(z(τ), t) = ∫_{s1}^{β} DF(z(τ), t) + ∫_{β}^{s2} DF(z(τ), t)
= ∫_{s1}^{tn} DF(x(τ), t) + ∫_{tn}^{β} DF(z(τ), t) + ∫_{β}^{s2} DF(y(τ), t)
= x(tn) − x(s1) + ∫_{tn}^{β} DF(z(τ), t) + y(s2) − y(β),

that is,

∫_{s1}^{s2} DF(z(τ), t) = x(tn) − y(β) + y(s2) − x(s1) + ∫_{tn}^{β} DF(z(τ), t).

By the definition of z, we have

∫_{s1}^{s2} DF(z(τ), t) = x(tn) − y(β) + z(s2) − z(s1) + ∫_{tn}^{β} DF(z(τ), t). (4.4)

Thus, by Lemma 1.15, we get

‖∫_{tn}^{β} DF(z(τ), t)‖ ≤ |h(β) − h(tn)|, for sufficiently large n. (4.5)
On the other hand, since h is left-continuous and tn → β as n → ∞, with tn < β for all n ∈ N,

lim_{n→∞} |h(β) − h(tn)| = 0. (4.6)

Thus, by (4.5) and (4.6), we obtain

lim_{n→∞} ∫_{tn}^{β} DF(z(τ), t) = 0. (4.7)

Finally, letting n → ∞ in (4.4) and using (4.3) and (4.7), we have

∫_{s1}^{s2} DF(z(τ), t) = z(s2) − z(s1),

for every s1, s2 ∈ [γ, β) ∪ I such that s1 ∈ [γ, β) and s2 ∈ I.

The cases where s1, s2 ∈ [γ, β) or s1, s2 ∈ I follow immediately. Therefore z is a solution of the generalized ODE (4.1) on [γ, β) ∪ I.
The next result is inspired by [38, Lemma 4.4] for the case I = [β, ϑ]. Here, we present a different proof of it.
Corollary 4.2. Let F ∈ F(Ω, h), where the function h : [t0,+∞)→ R is nondecreasing and
left-continuous. If x : [γ, β] → X and y : I → X, I ∈ Γ∞β,ϑ, are solutions of the generalized
ODE (4.1) on [γ, β] and I, respectively, where t0 ≤ γ < β < ϑ < +∞ and x(β) = y(β), then the function z : [γ, β] ∪ I → X given by

z(t) = x(t), t ∈ [γ, β],
z(t) = y(t), t ∈ I, (4.8)

is a solution of the generalized ODE (4.1) on [γ, β] ∪ I.
Proof. Suppose
x(β) = y(β). (4.9)
Since F ∈ F(Ω, h) and h is a left-continuous function on [t0,+∞), by Lemma 1.16, x is
left-continuous on (γ, β]. In particular, we have
limt→β−
x(t) = x(β).
Thus, by (4.9), it follows that
limt→β−
x(t) = y(β).
On the other hand, since x : [γ, β] → X is a solution of the generalized ODE (4.1) on
[γ, β], x|[γ,β) is a solution of the equation (4.1) on [γ, β). Note that all the hypotheses of
Theorem 4.1 are satisfied. Then z defined by (4.8) is a solution of the generalized ODE (4.1)
on [γ, β] ∪ I and the proof is complete.
Now, our goal is to prove that, under certain conditions, the unique solution x : [τ0, τ0 + ∆] → X of (4.1) satisfying x(τ0) = x0, which is ensured by Theorem 1.21, can be extended to intervals larger than [τ0, τ0 + ∆], up to a maximal interval, at least while the graph of the solution does not reach the boundary of Ω.
In the sequel, we recall the concept of maximal solution of a generalized ODE. See [38].
Let F ∈ F(Ω, h) and (x0, τ0) ∈ Ω be such that

x0 + F(x0, τ0+) − F(x0, τ0) ∈ O,

where we recall that F(x0, τ0+) = lim_{t→τ0+} F(x0, t). Fix (x0, τ0) ∈ Ω and define the following set:

S_{τ0,x0} := {x : Ix ⊂ [t0,+∞) → X : Ix is an interval with left endpoint τ0 and x is a solution of the generalized ODE (4.1) with x(τ0) = x0}.
Definition 4.3. Given x : Ix → X and z : Iz → X in S_{τ0,x0}, we say that x is smaller than or equal to z (written x ≼ z) if, and only if, Ix ⊂ Iz and z|Ix = x.

The next result shows that the relation ≼ defines a partial order in S_{τ0,x0}.

Proposition 4.4. The relation ≼ defines a partial order in S_{τ0,x0}.
Proof. We must show that ≼ is reflexive, antisymmetric and transitive.

Reflexivity. Let x : Ix → X be in S_{τ0,x0}. Then Ix ⊂ Ix and x|Ix = x. Therefore, x ≼ x.

Antisymmetry. Let x : Ix → X and z : Iz → X belong to S_{τ0,x0}, with x ≼ z and z ≼ x. Since x ≼ z, we have

Ix ⊂ Iz and z|Ix = x. (4.10)

On the other hand, since z ≼ x, we have

Iz ⊂ Ix and x|Iz = z. (4.11)

Thus, by (4.10) and (4.11), we obtain Ix = Iz and x = z.

Transitivity. Let x : Ix → X, z : Iz → X and y : Iy → X belong to S_{τ0,x0} with x ≼ z and z ≼ y. Since x ≼ z, we have

Ix ⊂ Iz and z|Ix = x. (4.12)

From z ≼ y, we obtain

Iz ⊂ Iy and y|Iz = z. (4.13)

Thus, by (4.12) and (4.13), Ix ⊂ Iy and y|Ix = x, that is, x ≼ y.
Notice that, to be able to investigate the forward maximal solution, it is necessary to require

x0 + F(x0, τ0+) − F(x0, τ0) ∈ O.

Otherwise, it could happen that x(t) ∉ O for t > τ0, contradicting the definition of a solution of a generalized ODE. Therefore, throughout this chapter, we assume

x + F(x, t+) − F(x, t) ∈ O, for every x ∈ O and t ∈ [t0,+∞),

that is, we consider

Ω = ΩF := {(x, t) ∈ Ω : x + F(x, t+) − F(x, t) ∈ O}. (4.14)

This means that there are no points in Ω at which the solution of the generalized ODE (4.1) can escape from the set O.
Definition 4.5. (Prolongation of a solution) Let τ0 ≥ t0 and let x : I → X, I ⊂ [t0,+∞), be a solution of (4.1) on the interval I with left endpoint τ0. A solution y : J → X, J ⊂ [t0,+∞), with left endpoint τ0, of the generalized ODE (4.1) is called a prolongation to the right of x if I ⊂ J and x(t) = y(t) for all t ∈ I. If I ⊊ J, then y is called a proper prolongation of x to the right.
In the next definition, the concept of a maximal solution of a generalized ODE is presented.
Definition 4.6. (Maximal solution) Let (x0, τ0) ∈ Ω. We say that x : J → X is a maximal solution of the generalized ODE

dx/dτ = DF(x, t),
x(τ0) = x0, (4.15)

if x ∈ S_{τ0,x0} and, for every z : I → X in S_{τ0,x0} such that x ≼ z, we have x = z. In other words, x is a maximal solution of (4.15) whenever x ∈ S_{τ0,x0} and there is no proper prolongation of x to the right.
The following result is crucial to prove the existence and uniqueness of a maximal solution of the generalized ODE (4.15) for the case where X is a Banach space and Ω = O × [t0,+∞), with O an open subset of X.
Lemma 4.7. Let F ∈ F(Ω, h), where the function h : [t0,+∞) → R is nondecreasing and
left-continuous and Ω = ΩF , where ΩF is given by (4.14). Let (x0, τ0) ∈ Ω and consider the
generalized ODE (4.15). If y : Jy → X and z : Jz → X are solutions of the generalized
ODE (4.15), where Jy and Jz are intervals, both with left endpoint τ0, then y(t) = z(t) for
all t ∈ Jy ∩ Jz.
Proof. Let (x0, τ0) ∈ Ω be fixed. Suppose y : Jy → X and z : Jz → X are solutions of the
generalized ODE (4.15), where Jy and Jz are intervals with left endpoint τ0. Then,
y(τ0) = z(τ0) = x0. (4.16)
We will show that y(t) = z(t) for all t ∈ Jy ∩ Jz. Note that Jy ∩ Jz is an interval of the form

[τ0, d] or [τ0, d), d ≤ +∞.

Therefore, we will consider the following two cases.

Case 1. Jy ∩ Jz = [τ0, d]. We define

Λ := {t ∈ [τ0, d] : y(s) = z(s), for all s ∈ [τ0, t]}.

Note that Λ ≠ ∅, since τ0 ∈ Λ. We denote λ = sup Λ. It is clear that λ ≤ d and [τ0, λ) ⊂ Λ. Therefore,

y(t) = z(t), for all t ∈ [τ0, λ). (4.17)
On the other hand, since F ∈ F(Ω, h), h is a left-continuous function. Also, since, by the hypotheses, the functions y and z are solutions of the generalized ODE (4.15) on Jy ∩ Jz, it follows from Lemma 1.16 that y and z are left-continuous on (τ0, λ]. Then we obtain

y(λ) = lim_{t→λ−} y(t) = lim_{t→λ−} z(t) = z(λ),

that is,

y(λ) = z(λ). (4.18)

Thus, by (4.17) and (4.18), we get

λ ∈ Λ and [τ0, λ] ⊂ Λ. (4.19)
Finally, we will show that λ = d. Suppose λ < d. Then, since (y(λ), λ) ∈ ΩF = Ω, by Theorem 1.21, there exist δ > 0 (we can take, for instance, λ + δ < d) and a unique solution x : [λ, λ + δ] → X of the generalized ODE

dx/dτ = DF(x, t),
x(λ) = y(λ) = z(λ). (4.20)

On the other hand, y|[λ,λ+δ] and z|[λ,λ+δ] are solutions of (4.20). Then, by uniqueness,

x(t) = z(t) = y(t), for all t ∈ [λ, λ + δ]. (4.21)

Thus, by (4.19) and (4.21), λ + δ ∈ Λ, which contradicts the definition of λ. Then d = λ and y(t) = z(t) for all t ∈ Jy ∩ Jz.
Case 2. Jy ∩ Jz = [τ0, d), with d ≤ +∞.
In order to prove Case 2, we will prove two assertions.
Assertion 1. For every τ ∈ (τ0, d), y(s) = z(s) for all s ∈ [τ0, τ).

Define

Λ := {t ∈ (τ0, d) : y(s) = z(s), for all s ∈ [τ0, t)}.

Let us prove that Λ ≠ ∅. Since (x0, τ0) ∈ Ω = ΩF, Theorem 1.21 ensures the existence of η > 0 (we can take, for instance, τ0 + η < d) and of a unique solution x : [τ0, τ0 + η] → X of the generalized ODE

dx/dτ = DF(x, t),
x(τ0) = x0 = y(τ0) = z(τ0). (4.22)

On the other hand, y|[τ0,τ0+η] and z|[τ0,τ0+η] are solutions of (4.22). Then

x(t) = y(t) = z(t), for all t ∈ [τ0, τ0 + η]. (4.23)

In particular, y(t) = z(t) for all t ∈ [τ0, τ0 + η), that is, τ0 + η ∈ Λ. Thus Λ is a nonempty set. Denote λ = sup Λ. Note that λ ≤ d and (τ0, λ) ⊂ Λ. Let us prove that λ = d. Suppose the contrary, that is, λ < d. We will show that y(t) = z(t), for t ∈ [τ0, λ), and y(λ) = z(λ).
Indeed, let t ∈ [τ0, λ). If t = τ0, the equality follows immediately from (4.16). On the other hand, if t ∈ (τ0, λ), then t ∈ Λ (since (τ0, λ) ⊂ Λ). Thus, y(s) = z(s) for all s ∈ [τ0, t). Since y|(τ0,t] and z|(τ0,t] are left-continuous functions, we get

y(t) = lim_{s→t−} y(s) = lim_{s→t−} z(s) = z(t).
Therefore

y(t) = z(t), for all t ∈ [τ0, λ). (4.24)

Thus, by (4.24) and by the left continuity of the functions y|(τ0,λ] and z|(τ0,λ], we have

y(λ) = z(λ). (4.25)
Then, by (4.24) and (4.25), we obtain

y(t) = z(t), for all t ∈ [τ0, λ]. (4.26)

Since (y(λ), λ) ∈ ΩF = Ω, Theorem 1.21 implies that there exist δ > 0 (we can take, for instance, λ + δ < d) and a unique solution x : [λ, λ + δ] → X of the generalized ODE

dx/dτ = DF(x, t),
x(λ) = y(λ) = z(λ). (4.27)

On the other hand, since z|[λ,λ+δ] and y|[λ,λ+δ] are solutions of (4.27), the uniqueness of the solution yields

x(t) = z(t) = y(t), for all t ∈ [λ, λ + δ]. (4.28)

Thus, by (4.26) and (4.28), we have λ + δ ∈ Λ, which contradicts the definition of λ. Then d = λ and Λ = (τ0, d).
Assertion 2. For all t ∈ (τ0, d), y(t) = z(t).

Indeed, let t ∈ (τ0, d). Then, by Assertion 1, y(s) = z(s) for all s ∈ [τ0, t). Thus, by the left-continuity of the functions y|(τ0,t] and z|(τ0,t], we have

y(t) = lim_{s→t−} y(s) = lim_{s→t−} z(s) = z(t),

and Assertion 2 follows.
Finally, by (4.16) and Assertion 2, we get
y(t) = z(t), for all t ∈ [τ0, d) = Jy ∩ Jz,
proving the result.
The next result gives sufficient conditions to ensure the existence and uniqueness of a maximal solution of the generalized ODE (4.1) with initial condition x(τ0) = x0, that is,

dx/dτ = DF(x, t),
x(τ0) = x0.

A version of this result was proved in [38, Proposition 4.13] for the case X = Rn and Ω = O × (a, b), where O = B(0, c) ⊂ Rn and (a, b) ⊂ R. Here, we present a proof for the case where X is a Banach space and Ω = O × [t0,+∞), where O is an open subset of X. We remind the reader that ΩF = {(x, t) ∈ Ω : x + F(x, t+) − F(x, t) ∈ O}.
Theorem 4.8. Let F ∈ F(Ω, h), where the function h : [t0,+∞) → R is nondecreasing
and left-continuous. If Ω = ΩF , then, for every (x0, τ0) ∈ Ω, there exists a unique maximal
solution x : J → X of (4.1), where x(τ0) = x0 and J is an interval with left endpoint τ0.
Proof. Suppose Ω = ΩF. Let (x0, τ0) ∈ Ω be fixed. At first, we will show the existence of a maximal solution.

Existence. Consider the set

S := {x : Jx ⊂ [t0,+∞) → X : Jx is an interval with left endpoint τ0 and x is a solution of the generalized ODE (4.1) with x(τ0) = x0}.

The set S is nonempty by the local existence and uniqueness of a solution given by Theorem 1.21.

Define J := ⋃_{y∈S} Jy and x : J → X by the relation x(t) = y(t), where y ∈ S and t ∈ Jy. Note that if y and z belong to S, then y(s) = z(s) for all s ∈ Jy ∩ Jz, by Lemma 4.7. Thus, we conclude that x is well-defined. Note that J is an interval with left endpoint τ0 (since J is a union of intervals sharing the common point τ0) and x is a maximal solution of the generalized ODE (4.1), proving the existence of a maximal solution.
It remains to ensure the uniqueness of a maximal solution.
Uniqueness. Assume that x1 : J1 → X and x2 : J2 → X are two maximal solutions of the generalized ODE (4.1) with x1(τ0) = x2(τ0) = x0, where J1 and J2 are intervals with left endpoint τ0. By Lemma 4.7, x1(t) = x2(t) for all t ∈ J1 ∩ J2.

Define x3 : J3 → X, J3 := J1 ∪ J2, by

x3(t) = x1(t), t ∈ J1,
x3(t) = x2(t), t ∈ J2.

It is clear that x3 is a solution of the generalized ODE (4.1) with initial condition x3(τ0) = x0, J1, J2 ⊂ J3 and x3|J1 = x1, x3|J2 = x2. Since the solutions x1 and x2 are assumed to be maximal, J3 = J2 = J1 and x3(t) = x2(t) = x1(t) for all t ∈ J3, that is, x1 = x2.
The following theorem shows that the maximal interval J of definition of a maximal solution is half-open. This result was proved in [38, Proposition 4.14] for the case X = Rn and Ω = O × (a, b), where O = B(0, c) ⊂ Rn and (a, b) ⊂ R. Here, we generalize the result to the case where X is a Banach space and Ω = O × [t0,+∞), exhibiting a proof different from the one presented in [38].
Theorem 4.9. Let F ∈ F(Ω, h), where the function h : [t0,+∞) → R is nondecreasing and left-continuous, and Ω = ΩF. Suppose (x0, τ0) ∈ Ω and x : J → X is the maximal solution of the generalized ODE (4.1) with x(τ0) = x0, where J is an interval with left endpoint τ0. Then J = [τ0, ω) with ω ≤ +∞.
Proof. It is clear that the maximal interval J satisfies J ⊂ [t0,+∞). Define ω := sup J. Then clearly ω ≤ +∞. If ω = +∞, the result follows immediately. Assume that ω < +∞. Let us prove that ω ∉ J. Suppose the contrary, that is, ω ∈ J. Then J = [τ0, ω] and, by the definition of a solution, (x(ω), ω) ∈ Ω = ΩF. Therefore, by Theorem 1.21, there exists η > 0 such that, on [ω, ω + η], there is a unique solution z : [ω, ω + η] → X of the generalized ODE (4.1) such that z(ω) = x(ω). Thus, by Corollary 4.2,

y(t) := x(t), t ∈ J = [τ0, ω],
y(t) := z(t), t ∈ [ω, ω + η],

is a solution of the generalized ODE (4.1) with y(τ0) = x0. It is easy to see that y is a proper prolongation of x, which is assumed to be maximal. Therefore we have a contradiction. Hence ω ∉ J and J = [τ0, ω), obtaining the desired result.
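The half-open maximal interval of Theorem 4.9 already occurs for classical ODEs, which can be viewed as generalized ODEs locally, on bounded sets where the right-hand side admits a suitable majorant h. The following standard illustration is added here only for the reader's convenience; it is not part of the original text.

```latex
% Classical finite-time blow-up: the maximal interval is half-open.
\[
  \dot{x}(t) = x(t)^2, \qquad x(0) = 1,
  \qquad\Longrightarrow\qquad
  x(t) = \frac{1}{1-t}, \quad t \in J = [0, 1).
\]
% Here \omega = 1 < +\infty and x(t) \to +\infty as t \to 1^-, so the graph
% (x(t), t) eventually leaves every compact subset of \Omega, in accordance
% with Theorem 4.10, and no solution extends to t = 1.
```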
The following result is a generalization of [38, Proposition 4.15] for functions F taking values in a Banach space X. The proof of this result follows the same ideas as in [38].
Theorem 4.10. Let F ∈ F(Ω, h), where the function h : [t0,+∞) → R is nondecreasing and left-continuous, and Ω = ΩF. Suppose (x0, τ0) ∈ Ω and x : [τ0, ω) → X is the maximal solution of (4.1) with x(τ0) = x0. Then, for every compact set K ⊂ Ω, there exists tK ∈ [τ0, ω) such that (x(t), t) ∉ K for all t ∈ (tK, ω).
Proof. Suppose the contrary. Then there exist a compact set K ⊂ Ω and a sequence {tn}n∈N ⊂ [τ0, ω) such that

tn → ω as n → +∞ and (x(tn), tn) ∈ K, for all n ∈ N.
Let us consider two cases: ω = +∞ and ω < +∞.
Case 1. Suppose ω = +∞. Since K is compact, the sequence {(x(tn), tn)}n∈N contains a convergent subsequence which, without loss of generality, we denote again by {(x(tn), tn)}n∈N. Then there exists (y, τ) ∈ K such that

lim_{n→∞} (x(tn), tn) = (y, τ).

In particular, tn → τ as n → +∞, which contradicts tn → ω = +∞.
Case 2. Suppose ω < +∞. Since K is compact, the sequence {(x(tn), tn)}n∈N contains a convergent subsequence which, without loss of generality, we denote again by {(x(tn), tn)}n∈N. Then there exists (y, τ) ∈ K ⊂ Ω such that

lim_{n→∞} (x(tn), tn) = (y, τ).

In particular, tn → τ as n → +∞. By the uniqueness of the limit, τ = ω. Thus, since (y, ω) = (y, τ) ∈ Ω = ΩF, we have

y + F(y, ω+) − F(y, ω) ∈ O.

Therefore, by Theorem 1.21, there exists η > 0 such that, on [ω, ω + η], there is a unique solution z : [ω, ω + η] → X of the generalized ODE (4.1) satisfying z(ω) = y.

Define u : [τ0, ω + η] → X by

u(t) = x(t), t ∈ [τ0, ω),
u(t) = z(t), t ∈ [ω, ω + η].

Thus, by Theorem 4.1, u : [τ0, ω + η] → X is a solution of the generalized ODE (4.1) with initial condition u(τ0) = x0. But u is clearly a proper prolongation of the solution x, which is assumed to be maximal. This contradiction proves the result.
The following results are completely new in the theory of generalized ODEs, even for the
case X = Rn.
Corollary 4.11. Let F ∈ F(Ω, h), where the function h : [t0,+∞) → R is nondecreasing
and left-continuous, and Ω = ΩF . Let (x0, τ0) ∈ Ω and x : [τ0, ω) → X be the maximal
solution of (4.1) with x(τ0) = x0. If x(t) ∈ N for all t ∈ [τ0, ω), where N is a compact subset
of O, then ω = +∞.
Proof. Suppose the contrary, that is, ω < +∞. Then K := N × [τ0, ω] is a compact subset of Ω and, by the hypotheses, we get

(x(t), t) ∈ K, for all t ∈ [τ0, ω). (4.29)

Since all the hypotheses of Theorem 4.10 are satisfied, there exists tK ∈ [τ0, ω) such that

(x(t), t) ∉ K, for all t ∈ (tK, ω),
which contradicts (4.29).
Corollary 4.12. Let F ∈ F(Ω, h), where the function h : [t0,+∞) → R is nondecreasing
and left-continuous, and Ω = ΩF . Let (x0, τ0) ∈ Ω and x : [τ0, ω) → X be the maximal
solution of (4.1) with x(τ0) = x0. If ω < +∞, then the following conditions hold:

(i) the limit lim_{t→ω−} x(t) exists;

(ii) (y, ω) ∈ ∂Ω and (x(t), t) → (y, ω) as t → ω−, where y := lim_{t→ω−} x(t).
Proof. Suppose ω < +∞. Let ε > 0. Since ω ∈ (τ0,+∞) and h is left-continuous on (τ0,+∞), the limit lim_{t→ω−} h(t) exists. Thus, by the Cauchy condition, there exists δ > 0 (we can take τ0 < ω − δ) such that

|h(t) − h(s)| < ε, for all t, s ∈ (ω − δ, ω). (4.30)

Then, by (4.30) and Lemma 1.16, the following inequality holds:

‖x(t) − x(s)‖ ≤ |h(t) − h(s)| < ε,

for every t, s ∈ (ω − δ, ω). Then, again by the Cauchy condition, the limit lim_{t→ω−} x(t) exists, that is, there exists y ∈ X such that

y = lim_{t→ω−} x(t), (4.31)

which proves item (i).
Now, we will prove item (ii). By (4.31), (x(t), t) → (y, ω) as t → ω−, so it remains to verify that (y, ω) ∈ ∂Ω. Since ω is an accumulation point of the set [τ0, ω), there exists a sequence {tn}n∈N ⊂ [τ0, ω) such that tn → ω as n → ∞. Thus, by (4.31), we have x(tn) → y as n → ∞. Since {(x(tn), tn)}n∈N ⊂ Ω and (x(tn), tn) → (y, ω) as n → +∞, we obtain

(y, ω) ∈ cl(Ω). (4.32)
Let us prove that (y, ω) ∉ Ω. Suppose the contrary, that is, (y, ω) ∈ Ω = ΩF. Theorem 1.21 yields the existence of ∆ > 0 such that, on [ω, ω + ∆], there is a unique solution z : [ω, ω + ∆] → X of the generalized ODE (4.1) with z(ω) = y.

Define u : [τ0, ω + ∆] → X by

u(t) := x(t), t ∈ [τ0, ω),
u(t) := z(t), t ∈ [ω, ω + ∆].

Thus, by Theorem 4.1, u is a solution of the generalized ODE (4.1) with initial condition u(τ0) = x(τ0) = x0, which is clearly a proper prolongation of the solution x. But x is assumed to be maximal, hence we have a contradiction. Therefore we conclude that (y, ω) ∉ Ω, that is, (y, ω) ∈ Ωc, which implies that

(y, ω) ∈ cl(Ωc). (4.33)

By (4.32) and (4.33), we conclude that

(y, ω) ∈ cl(Ω) ∩ cl(Ωc) = ∂Ω, (4.34)

obtaining the desired result.
Even considering O = X, it is possible to ensure that the maximal solution is defined on
[τ0,+∞), when x(τ0) = x0. This is the content of the next result.
Corollary 4.13. If Ω = X × [t0,+∞) and F ∈ F(Ω, h), where h : [t0,+∞) → R is
nondecreasing and left-continuous, then for every (x0, τ0) ∈ Ω, there exists a unique maximal
solution defined on [τ0,+∞) of the generalized ODE (4.1), with x(τ0) = x0.
Proof. At first, we need to ensure that Ω = ΩF. Let (z0, s0) ∈ Ω. Since h is a nondecreasing function, the limit lim_{s→s0+} h(s) exists. Therefore, by the Cauchy condition, given ε > 0, there exists δ > 0 such that if s, t ∈ (s0, s0 + δ), then |h(t) − h(s)| < ε. Then, since F ∈ F(Ω, h), we have

‖F(z0, t) − F(z0, s)‖ ≤ |h(t) − h(s)| < ε,

for all t, s ∈ (s0, s0 + δ), that is, the limit lim_{s→s0+} F(z0, s) exists and we denote it by F(z0, s0+). Therefore

z0 + F(z0, s0+) − F(z0, s0) ∈ X = O,

that is, (z0, s0) ∈ ΩF.
Now, let (x0, τ0) ∈ Ω and x : [τ0, ω)→ X be the maximal solution of the generalized ODE
(4.1) with x(τ0) = x0. Note that the existence of such a solution is guaranteed by Theorems
4.8 and 4.9.
Suppose, by contradiction, that ω < +∞. Thus, by Corollary 4.12, the limit lim_{t→ω−} x(t) exists.
Let

y := lim_{t→ω−} x(t) ∈ X.   (4.35)

Since ω is an accumulation point of the set [τ0, ω), there exists a sequence {tn}n∈N ⊂ [τ0, ω)
such that tn → ω as n → ∞. Therefore, by (4.35), we have x(tn) → y as n → ∞.
It is easy to check that N := {x(tn) : n ∈ N} ∪ {y} is a compact subset of X. Thus N × [τ0, ω] is a
compact subset of Ω. By Theorem 4.10, there exists t̄ ∈ [τ0, ω) such that (x(t), t) /∈ N × [τ0, ω]
for all t ∈ (t̄, ω), which contradicts the fact that x(tn) ∈ N for all n ∈ N and tn → ω as n → ∞.
Therefore ω = +∞ and we obtain the desired result.
4.2 Prolongation of solutions of measure differential equations
In this section, our goal is to investigate the prolongation of solutions of measure differential
equations.
Let Rn be the n-dimensional Euclidean space with norm ‖·‖, and let O ⊂ Rn be an open
set. Consider the integral form of a measure differential equation (MDE, for short) of type
x(t) = x(τ0) + ∫_{τ0}^{t} f(x(s), s) dg(s),   t ≥ τ0,   (4.36)

where τ0 ≥ t0, f : O × [t0,+∞) → Rn and g : [t0,+∞) → R are functions.
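The Kurzweil-Henstock-Stieltjes integral in (4.36) lets the integrator g carry jumps, which is how impulsive behavior enters the dynamics. The following Python sketch is not from the thesis: it approximates the integral by a left-point Riemann-Stieltjes sum (a heuristic that ignores the finer gauge-based definition of the integral), with hypothetical data f and g:

```python
import numpy as np

def solve_mde(f, g, x0, t0, T, n=100000):
    """Left-point Euler-type scheme for x(t) = x0 + int_{t0}^t f(x(s), s) dg(s).

    Across a jump of the integrator g at a point s, the update reproduces
    the impulse x(s+) = x(s) + f(x(s), s) * (g(s+) - g(s)), mirroring the
    term z0 + f(z0, s0)(g(s0+) - g(s0)) appearing in the hypotheses below.
    """
    ts = np.linspace(t0, T, n + 1)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + f(x[k], ts[k]) * (g(ts[k + 1]) - g(ts[k]))
    return ts, x

# Hypothetical data: f(x, s) = -x, and g left-continuous with a unit jump
# at s = 1, so the state decays like e^{-s} and receives a kick at t = 1.
g = lambda s: s + (1.0 if s > 1.0 else 0.0)
ts, x = solve_mde(lambda v, s: -v, g, x0=1.0, t0=0.0, T=2.0)
```

Note how the jump of g at s = 1 produces exactly the quantity constrained by the hypothesis z0 + f(z0, s0)∆⁺g(s0) ∈ O that recurs throughout this section.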
We start by presenting a definition of prolongation to the right of a solution x : J → Rn
of the MDE (4.36).
Definition 4.14. (Prolongation to the right) Let τ0 ≥ t0 and x : J → Rn, J ⊂ [t0,+∞), be
a solution of the MDE (4.36) on the interval J with left endpoint τ0. A solution y : J̄ → Rn,
J̄ ⊂ [t0,+∞) with left endpoint τ0, of the MDE (4.36) is called a prolongation to the right
of x if J ⊂ J̄ and x(t) = y(t) for all t ∈ J. If J ⊊ J̄, then y is called a proper prolongation
of x to the right.
In what follows, we present a definition of maximal solution of the MDE (4.36).
62 Chapter 4 — Prolongation of solutions
Definition 4.15. (Maximal solution) Let (x0, τ0) ∈ O × [t0,+∞). The solution y : I → Rn,
with I ⊂ [t0,+∞) an interval with left endpoint τ0, of the MDE

y(t) = x0 + ∫_{τ0}^{t} f(y(s), s) dg(s),   t ≥ τ0,

is called maximal if there is no proper prolongation of y to the right.
In what follows, we present a result which ensures the existence and uniqueness of a
maximal solution of the MDE (4.36).
Theorem 4.16. Suppose f : O × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4),
and g : [t0,+∞) → R satisfies condition (A1). Also, assume that for all (z0, s0) ∈ O × [t0,+∞),
we have z0 + f(z0, s0)∆⁺g(s0) ∈ O. Then, for every (x0, τ0) ∈ O × [t0,+∞), there
exists a unique maximal solution x : J → Rn of the MDE (4.36) with x(τ0) = x0 and J (an
interval) with left endpoint τ0.
Proof. Let (x0, τ0) ∈ O × [t0,+∞) (arbitrary, but fixed). Define a function F : O × [t0,+∞) → Rn by

F(x, t) = ∫_{t0}^{t} f(x, s) dg(s),   (x, t) ∈ O × [t0,+∞).   (4.37)

Then, by Theorem 3.2 and Remark 3.3, there is a nondecreasing and left-continuous function
h : [t0,+∞) → R such that F ∈ F(Ω, h), where Ω = O × [t0,+∞) and

h(t) = ∫_{t0}^{t} (L(s) + M(s)) dg(s).
On the other hand, for each (z0, s0) ∈ Ω, we have

z0 + F(z0, s0⁺) − F(z0, s0) = z0 + lim_{s→s0⁺} [∫_{t0}^{s} f(z0, τ) dg(τ) − ∫_{t0}^{s0} f(z0, τ) dg(τ)]   (4.38)
                            = z0 + lim_{s→s0⁺} ∫_{s0}^{s} f(z0, τ) dg(τ)
                            = z0 + f(z0, s0)∆⁺g(s0) ∈ O,

by Theorem 1.8. Thus

z0 + F(z0, s0⁺) − F(z0, s0) ∈ O,
that is, Ω = ΩF. Thus all the hypotheses of Theorem 4.8 are satisfied. Therefore, there exists
a unique maximal solution x : J → Rn of the generalized ODE

dx/dτ = DF(x, t),   (4.39)

with x(τ0) = x0, where F is given by (4.37) and J is an interval with left endpoint τ0. Thus, by
Theorem 3.8, x : J → Rn is also a solution of the MDE (4.36) with x(τ0) = x0. Let us prove
that x (as a solution of (4.36)) does not admit a proper extension to the right. Suppose the
contrary, that is, there exists a solution x̄ : J̄ → Rn of the MDE (4.36) with x̄(τ0) = x0, where
J̄ is an interval with left endpoint τ0, such that x̄ extends x properly. Thus, by Theorem
3.8, x̄ is a solution of the generalized ODE

dx/dτ = DF(x, t),

with x̄(τ0) = x0, where F is given by (4.37), which contradicts the maximality of x (as a
solution of the generalized ODE (4.39)). Therefore x (as a solution of the MDE (4.36)) does
not admit a proper extension to the right, that is, x is a maximal solution of the MDE (4.36)
with x(τ0) = x0. Again by Theorems 4.8 and 3.8, we have the uniqueness of the maximal
solution x of the MDE (4.36), obtaining the desired result.
Remark 4.17. Notice that, by the proof of Theorem 4.16, x : J → Rn is the maximal
solution of the MDE (4.36) with x(τ0) = x0 if, and only if, x : J → Rn is the maximal
solution of the generalized ODE

dx/dτ = DF(x, t),

with x(τ0) = x0, where F is given by (4.37).
Theorem 4.18. Suppose f : O × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4),
and g : [t0,+∞) → R satisfies condition (A1). Also, assume that for all (z0, s0) ∈ O × [t0,+∞),
we have z0 + f(z0, s0)∆⁺g(s0) ∈ O. Suppose (x0, τ0) ∈ O × [t0,+∞) and x : J → Rn
is the maximal solution of (4.36) with x(τ0) = x0 and J an interval with left endpoint τ0.
Then J = [τ0, ω), with ω ≤ +∞.
Proof. Let (x0, τ0) ∈ O × [t0,+∞) and x : J → Rn be the maximal solution of the MDE
(4.36) with x(τ0) = x0 and J be an interval with left endpoint τ0. Note that the existence of
such a solution is guaranteed by Theorem 4.16.
On the other hand, define a function F : O × [t0,+∞)→ Rn by
F(x, t) = ∫_{t0}^{t} f(x, s) dg(s),   (x, t) ∈ O × [t0,+∞).   (4.40)
Then using the same arguments as in the proof of Theorem 4.16, we can prove that
Ω = ΩF and F ∈ F(Ω, h),
where Ω = O × [t0,+∞) and h : [t0,+∞) → R is a nondecreasing and left-continuous
function. Also, by Remark 4.17, x : J → Rn is the maximal solution of the generalized ODE
dx/dτ = DF(x, t),
with x(τ0) = x0, where F is given by (4.40). Thus, all the hypotheses of Theorem 4.9 are
satisfied. Then J = [τ0, ω), where ω ≤ +∞. This completes the proof.
Theorem 4.19. Suppose f : O × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4),
and g : [t0,+∞) → R satisfies condition (A1). Also, assume that for all (z0, s0) ∈ O × [t0,+∞),
we have z0 + f(z0, s0)∆⁺g(s0) ∈ O. Suppose (x0, τ0) ∈ O × [t0,+∞) and x :
[τ0, ω) → Rn is the maximal solution of the MDE (4.36) with x(τ0) = x0. Then, for every
compact set K ⊂ O × [t0,+∞), there exists tK ∈ [τ0, ω) such that (x(t), t) /∈ K, for all
t ∈ (tK, ω).
Proof. Let (x0, τ0) ∈ O × [t0,+∞) and x : [τ0, ω) → Rn be the maximal solution of the MDE
(4.36) with x(τ0) = x0. Also, assume that K is a compact subset of O × [t0,+∞).
On the other hand, define a function F : O × [t0,+∞)→ Rn by
F(x, t) = ∫_{t0}^{t} f(x, s) dg(s),   (x, t) ∈ O × [t0,+∞).   (4.41)
Similarly as in the proof of Theorem 4.16, we can prove that
Ω = ΩF and F ∈ F(Ω, h),
where Ω = O × [t0,+∞) and h : [t0,+∞) → R is a nondecreasing and left-continuous
function. Also, by Remark 4.17, x : [τ0, ω) → Rn is the maximal solution of the generalized
ODE

dx/dτ = DF(x, t),

with x(τ0) = x0, where F is given by (4.41). Hence all the hypotheses of Theorem 4.10 are
satisfied. Then there exists tK ∈ [τ0, ω) such that (x(t), t) /∈ K, for all t ∈ (tK, ω), obtaining
the desired result.
Corollary 4.20. Suppose f : O × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4),
and g : [t0,+∞) → R satisfies condition (A1). Also, assume that for all (z0, s0) ∈ O × [t0,+∞),
we have z0 + f(z0, s0)∆⁺g(s0) ∈ O. Suppose (x0, τ0) ∈ O × [t0,+∞) and x :
[τ0, ω) → Rn is the maximal solution of the MDE (4.36) with x(τ0) = x0. If x(t) ∈ N for all
t ∈ [τ0, ω), where N is a closed subset of O, then ω = +∞.
Proof. Let (x0, τ0) ∈ O× [t0,+∞) and x : [τ0, ω)→ Rn be the maximal solution of the MDE
(4.36) with x(τ0) = x0. Moreover, suppose x(t) ∈ N for all t ∈ [τ0, ω), where N is a closed
subset of O.
Define a function F : O × [t0,+∞)→ Rn by
F(x, t) = ∫_{t0}^{t} f(x, s) dg(s),   (x, t) ∈ O × [t0,+∞).   (4.42)
Then, using the same arguments as in the proof of Theorem 4.16, we can prove that
Ω = ΩF and F ∈ F(Ω, h),
where Ω = O × [t0,+∞) and h : [t0,+∞) → R is a nondecreasing and left-continuous
function. Also, by Remark 4.17, x : [τ0, ω) → Rn is the maximal solution of the generalized
ODE

dx/dτ = DF(x, t),
with x(τ0) = x0, where F is given by (4.42).
On the other hand, note that N is compact (since N is closed and bounded in Rn).
Thus, all the hypotheses of Corollary 4.11 are satisfied. Then ω = +∞ and the proof is
complete.
Corollary 4.21. Suppose f : O × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4),
and g : [t0,+∞) → R satisfies condition (A1). Also, assume that for all (z0, s0) ∈ O × [t0,+∞),
we have z0 + f(z0, s0)∆⁺g(s0) ∈ O. Suppose (x0, τ0) ∈ O × [t0,+∞) and x :
[τ0, ω) → Rn is the maximal solution of the MDE (4.36) with x(τ0) = x0. If ω < +∞, then
the following conditions hold:

(i) The limit lim_{t→ω−} x(t) exists;

(ii) (y, ω) ∈ ∂Ω and (x(t), t) → (y, ω) as t → ω−, where y := lim_{t→ω−} x(t).
Proof. Suppose ω < +∞. Let (x0, τ0) ∈ O × [t0,+∞) and x : [τ0, ω) → Rn be the maximal
solution of (4.36) with x(τ0) = x0.
Define a function F : O × [t0,+∞)→ Rn by
F(x, t) = ∫_{t0}^{t} f(x, s) dg(s),   (x, t) ∈ O × [t0,+∞).   (4.43)
Proceeding as in the proof of Theorem 4.16, we can prove that

Ω = ΩF and F ∈ F(Ω, h),

where Ω = O × [t0,+∞) and h : [t0,+∞) → R is a nondecreasing and left-continuous
function.
Also, by Remark 4.17, x : [τ0, ω) → Rn is the maximal solution of the generalized ODE

dx/dτ = DF(x, t),

with x(τ0) = x0, where F is given by (4.43). Therefore, all the hypotheses of Corollary 4.12
are satisfied. Then the limit y := lim_{t→ω−} x(t) exists, (y, ω) ∈ ∂Ω and (x(t), t) → (y, ω) as t → ω−.
Corollary 4.22. Suppose f : Rn × [t0,+∞) → Rn satisfies conditions (A2), (A3) and
(A4) (with O = Rn), and g : [t0,+∞) → R satisfies condition (A1). Then for every
(x0, τ0) ∈ Rn × [t0,+∞), the maximal solution of the MDE (4.36) with x(τ0) = x0 is defined
on [τ0,+∞).
Proof. Let (x0, τ0) ∈ Rn × [t0,+∞). Define a function F : Rn × [t0,+∞) → Rn by

F(x, t) = ∫_{t0}^{t} f(x, s) dg(s),   (x, t) ∈ Ω.   (4.44)

By Theorem 3.2 and Remark 3.3, there exists a nondecreasing and left-continuous function
h : [t0,+∞) → R such that F ∈ F(Ω, h), where Ω = Rn × [t0,+∞). Thus all the hypotheses
of Corollary 4.13 are satisfied. Then there exists a unique maximal solution x : [τ0,+∞) → Rn
of the generalized ODE

dx/dτ = DF(x, t),

with x(τ0) = x0, where F is given by (4.44). Now, the result follows from Remark 4.17.
4.3 Prolongation of solutions of dynamic equations on time scales
In this section, our goal is to prove results on prolongation of solutions of dynamic
equations on time scales.
Let T be a time scale and consider the dynamic equation on time scales given by

x^∆(t) = f(x∗, t),   (4.45)

where f : O × [t0,+∞)T → Rn is a function and O ⊂ Rn is an open set.
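Throughout this section, σ denotes the forward jump operator, µ the graininess, and t∗ = inf{s ∈ T : s ≥ t} the map used to extend functions from T to the real line (so that g(t) = t∗ in what follows). For readers less familiar with these operators, here is a small Python sketch on a toy finite time scale; the set T chosen below is purely illustrative and not taken from the thesis:

```python
import bisect

# Toy time scale: [0, 1] sampled at step 0.1, plus the isolated points 2 and 3.
# In the thesis T is an arbitrary nonempty closed subset of R; this finite
# sample only illustrates the operators sigma, mu and t -> t*.
T = [i / 10 for i in range(11)] + [2.0, 3.0]

def sigma(t):
    """Forward jump operator: the smallest point of T strictly greater than t
    (by convention, t itself when t = max T)."""
    i = bisect.bisect_right(T, t)
    return T[i] if i < len(T) else t

def mu(t):
    """Graininess: mu(t) = sigma(t) - t."""
    return sigma(t) - t

def star(t):
    """t* = inf{s in T : s >= t}; the extension map behind f*(x, t) = f(x, t*)
    and g(t) = t*. Assumes t <= max T."""
    return T[bisect.bisect_left(T, t)]
```

On this sample, the gap (1, 2) gives mu(1.0) = 1.0 and star(1.5) = 2.0, while star(t) = t for every t ∈ T; this is exactly why g(t) = t∗ is constant on real intervals lying outside T.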
Definition 4.23. Let x : IT → Rn, IT ⊂ [t0,+∞)T, be a solution of (4.45) on the interval
IT, with τ0 = min IT. The solution y : JT → Rn, with JT ⊂ [t0,+∞)T and τ0 = min JT, of the
dynamic equation on time scales (4.45) is called a prolongation of x to the right if IT ⊂ JT
and x(t) = y(t) for all t ∈ IT. If IT ⊊ JT, then y is called a proper prolongation of x to the
right.
Definition 4.24. (Maximal solution) Let (x0, τ0) ∈ O × [t0,+∞)T. The solution y : IT → Rn,
with IT ⊂ [t0,+∞)T and τ0 = min IT, of the dynamic equation on time scales

y^∆(t) = f(y∗, t),   y(τ0) = x0,

is called maximal if there is no proper prolongation of y to the right.
The symbol G([t0,+∞)T, Rn) denotes the set of all regulated functions x : [t0,+∞)T → Rn.
Also, the symbol G0([t0,+∞)T, Rn) denotes the set of all functions x ∈ G([t0,+∞)T, Rn)
such that sup_{s∈[t0,+∞)T} e^{−(s−t0)} |x(s)| is finite. This space is endowed with the norm

‖x‖[t0,+∞)T = sup_{s∈[t0,+∞)T} e^{−(s−t0)} |x(s)|,   x ∈ G0([t0,+∞)T, Rn),

and it becomes a Banach space.
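The exponential weight e^{−(s−t0)} is a Bielecki-type device: it keeps the supremum finite even for functions of subexponential growth, which is what allows G0 to carry a Banach-space norm. A minimal Python sketch of the computation (an approximation on a finite grid; the true norm is a supremum over the whole unbounded time scale):

```python
import math

def weighted_sup_norm(x, sample, t0):
    """Approximate ||x|| = sup_s e^{-(s - t0)} |x(s)| over a finite sample of
    the time scale. This is only an approximation: the norm in the text is a
    supremum over all of [t0, +infinity)_T."""
    return max(math.exp(-(s - t0)) * abs(x(s)) for s in sample)

# Example: x(s) = s is unbounded, yet its weighted norm is finite, since
# s * e^{-s} attains its maximum 1/e at s = 1.
sample = [k / 1000 for k in range(20001)]  # grid on [0, 20]
n = weighted_sup_norm(lambda s: s, sample, t0=0.0)
```

Here n equals 1/e up to grid resolution, even though x(s) = s itself is unbounded on [0,+∞).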
We use the notation x ∈ G([t0,+∞)T, O) for a function x ∈ G([t0,+∞)T, Rn) such that
x(s) ∈ O, for all s ∈ [t0,+∞)T. The notation x ∈ G0([t0,+∞)T, Bc) is defined in a similar
way.
Remark 4.25. Note that if x ∈ G0([t0,+∞), O), then y := x|[t0,+∞)T ∈ G0([t0,+∞)T, O)
and ‖y‖[t0,+∞)T ≤ ‖x‖[t0,+∞).
From now on, we assume the following conditions on the function f : O × [t0,+∞)T → Rn:

(B1) The Kurzweil-Henstock ∆-integral ∫_{t1}^{t2} f(y(t), t)∆t exists, for all y ∈ G([t0,+∞)T, O)
and all t1, t2 ∈ [t0,+∞)T.

(B2) There exists a Kurzweil-Henstock ∆-integrable function M : [t0,+∞)T → R such that

|∫_{t1}^{t2} f(y(t), t)∆t| ≤ ∫_{t1}^{t2} M(t)∆t,

for all y ∈ G([t0,+∞)T, O) and all t1, t2 ∈ [t0,+∞)T, t1 ≤ t2.

(B3) There exists a Kurzweil-Henstock ∆-integrable function L : [t0,+∞)T → R such that

|∫_{t1}^{t2} [f(y(t), t) − f(w(t), t)]∆t| ≤ ‖y − w‖[t0,+∞)T ∫_{t1}^{t2} L(t)∆t,

for all y, w ∈ G0([t0,+∞)T, O) and all t1, t2 ∈ [t0,+∞)T, t1 ≤ t2.
The next result describes a relation between the conditions on the Kurzweil-Henstock
∆-integrals and those on Kurzweil-Henstock-Stieltjes integrals. Such a result is essential for
our purposes. Our proof is inspired by [21], Lemma 5.3.
Theorem 4.26. Let T be a time scale such that sup T = +∞ and t0 ∈ T, and let f :
O × [t0,+∞)T → Rn be a function. Define g(t) = t∗ and f∗(y, t) = f(y, t∗) for every y ∈ O
and t ∈ [t0,+∞).

1. If f : O × [t0,+∞)T → Rn satisfies condition (B1), then the integral ∫_{t1}^{t2} f∗(x(t), t) dg(t)
exists, for all x ∈ G([t0,+∞), O) and for all t1, t2 ∈ [t0,+∞).

2. If f : O × [t0,+∞)T → Rn satisfies condition (B2), then f∗ : O × [t0,+∞) → Rn
satisfies the following condition:

|∫_{t1}^{t2} f∗(x(t), t) dg(t)| ≤ ∫_{t1}^{t2} M∗(t) dg(t),

for all t1, t2 ∈ [t0,+∞), t1 ≤ t2, and for all x ∈ G([t0,+∞), O), where g(t) = t∗.

3. If f : O × [t0,+∞)T → Rn satisfies condition (B3), then f∗ : O × [t0,+∞) → Rn
satisfies the following condition:

|∫_{t1}^{t2} [f∗(x(t), t) − f∗(z(t), t)] dg(t)| ≤ ‖x − z‖[t0,+∞) ∫_{t1}^{t2} L∗(t) dg(t),

for all t1, t2 ∈ [t0,+∞), t1 ≤ t2, and for all x, z ∈ G0([t0,+∞), O), where g(t) = t∗.
Proof. Consider an arbitrary x ∈ G([t0,+∞), O). Let t1, t2 ∈ [t0,+∞). Then t1∗, t2∗ ∈
[t0,+∞)T. According to Remark 4.25, y := x|[t0,+∞)T ∈ G([t0,+∞)T, O) and, therefore, by
hypothesis (B1), the Kurzweil-Henstock ∆-integral ∫_{s1}^{s2} f(y(t), t)∆t = ∫_{s1}^{s2} f(x(t), t)∆t
exists, for every s1, s2 ∈ [t0,+∞)T. In particular, ∫_{t1∗}^{t2∗} f(x(t), t)∆t exists. Then, by Theorems
3.13 and 3.14, we have

∫_{t1∗}^{t2∗} f(x(t), t)∆t = ∫_{t1∗}^{t2∗} f(x(t∗), t∗) dg(t)        (by Theorem 3.13)
                         = ∫_{t1∗}^{t2∗} f(x(t), t∗) dg(t)          (by Theorem 3.16)
                         = ∫_{t1}^{t2} f(x(t), t∗) dg(t)            (by Theorem 3.14)
                         = ∫_{t1}^{t2} f∗(x(t), t) dg(t),

that is, the last integral exists for every t1, t2 ∈ [t0,+∞) as well. This proves 1.
The remaining two statements also follow from Theorems 3.13 and 3.14. For 2., we have

|∫_{t1}^{t2} f∗(x(t), t) dg(t)| = |∫_{t1}^{t2} f(x(t∗), t∗) dg(t)|        (by Theorem 3.16)
                               = |∫_{t1∗}^{t2∗} f(x(t∗), t∗) dg(t)|       (by Theorem 3.14)
                               = |∫_{t1∗}^{t2∗} f(x(t), t)∆t|             (by Theorem 3.13)
                               = |∫_{t1∗}^{t2∗} f(y(t), t)∆t|
                               ≤ ∫_{t1∗}^{t2∗} M(t)∆t
                               = ∫_{t1∗}^{t2∗} M∗(t) dg(t) = ∫_{t1}^{t2} M∗(t) dg(t),

obtaining the desired bound for the function f∗.
For 3., we have

|∫_{t1}^{t2} [f∗(x(t), t) − f∗(z(t), t)] dg(t)| = |∫_{t1}^{t2} [f(x(t∗), t∗) − f(z(t∗), t∗)] dg(t)|   (by Theorem 3.16)
    = |∫_{t1∗}^{t2∗} [f(x(t∗), t∗) − f(z(t∗), t∗)] dg(t)|   (by Theorem 3.14)
    = |∫_{t1∗}^{t2∗} [f(x(t), t) − f(z(t), t)]∆t|           (by Theorem 3.13)
    = |∫_{t1∗}^{t2∗} [f(y(t), t) − f(w(t), t)]∆t|
    ≤ ‖y − w‖[t0,+∞)T ∫_{t1∗}^{t2∗} L(t)∆t
    = ‖y − w‖[t0,+∞)T ∫_{t1∗}^{t2∗} L∗(t) dg(t) = ‖y − w‖[t0,+∞)T ∫_{t1}^{t2} L∗(t) dg(t)
    ≤ ‖x − z‖[t0,+∞) ∫_{t1}^{t2} L∗(t) dg(t)                (by Remark 4.25)

for every x, z ∈ G0([t0,+∞), O), where y := x|[t0,+∞)T and w := z|[t0,+∞)T.
The next result ensures the existence and uniqueness of a maximal solution of dynamic
equations on time scales.
Theorem 4.27. Let T be a time scale such that sup T = +∞ and t0 ∈ T. Suppose f : O ×
[t0,+∞)T → Rn satisfies conditions (B1), (B2) and (B3). Also, assume that for all (z0, s0) ∈
O × [t0,+∞)T, we have z0 + f(z0, s0)µ(s0) ∈ O. Then, for all (x0, τ0) ∈ O × [t0,+∞)T, there
exists a unique maximal solution x : [τ0, ω)T → Rn, ω ≤ +∞, of the dynamic equation on
time scales given by

x^∆(t) = f(x∗, t),   x(τ0) = x0.   (4.46)

Also, if ω < +∞, then ω ∈ T and ω is left-dense.
Proof. Let (x0, τ0) ∈ O × [t0,+∞)T (arbitrary, but fixed).
Define f∗ : O × [t0,+∞) → Rn by f∗(x, t) = f(x, t∗), for all (x, t) ∈ O × [t0,+∞),
and g(t) = t∗, for every t ∈ [t0,+∞). Since f satisfies conditions (B1), (B2) and (B3), by
Theorem 4.26, f∗ satisfies conditions (A2), (A3) and (A4). Also, if (z0, s0) ∈ O × [t0,+∞)T,
we have

z0 + f∗(z0, s0)∆⁺g(s0) = z0 + f(z0, s0∗)(g(s0⁺) − g(s0))   (4.47)
                       = z0 + f(z0, s0∗)(σ(s0∗) − s0∗)
                       = z0 + f(z0, s0∗)µ(s0∗)
                       = z0 + f(z0, s0)µ(s0) ∈ O,

that is,

z0 + f∗(z0, s0)∆⁺g(s0) ∈ O,

for all (z0, s0) ∈ O × [t0,+∞)T. On the other hand, if (z0, s0) ∈ O × [t0,+∞), but s0 /∈ T,
then

z0 + f∗(z0, s0)∆⁺g(s0) = z0 + f(z0, s0∗)(g(s0⁺) − g(s0))   (4.48)
                       = z0 + f(z0, s0∗)(s0∗ − s0∗)
                       = z0 ∈ O.

This implies that, for each (z0, s0) ∈ O × [t0,+∞), we obtain z0 + f∗(z0, s0)∆⁺g(s0) ∈ O. Note
that, by Lemma 3.12, g is a nondecreasing and left-continuous function on (t0,+∞). Then
f∗ and g fulfill all the hypotheses of Theorems 4.16 and 4.18. Thus, there exists a unique
maximal solution y : [τ0, ω) → Rn, ω ≤ +∞, of the MDE

y(t) = x0 + ∫_{τ0}^{t} f∗(y(s), s) dg(s),   (4.49)

where g(s) = s∗.
Let us consider two cases.
Case 1. Suppose ω = +∞.
By Theorem 3.17, y : [τ0,+∞)→ Rn must have the form
y = x∗, (4.50)
where x : [τ0,+∞)T → Rn is a solution of the dynamic equation on time scales (4.46).
Evidently, x is a maximal solution of the dynamic equation on time scales (4.46).
Case 2. Now, assume ω < +∞.
In order to prove that ω ∈ T and ω is left-dense, we will prove two assertions.
Assertion 1. ω ∈ T.
Suppose the contrary, that is, ω /∈ T. We define

B := {s ∈ T : s < ω}.

Note that B is nonempty, because τ0 ∈ B. Since ω /∈ T, we have B = T ∩ (−∞, ω] and,
therefore, B is a closed subset of R. We denote β := sup B. Since B is closed, we have
β ∈ B. Also, notice that β ≤ ω. But ω /∈ T, hence β < ω.
Note that g is constant on (β, ω] and, therefore, ∫_{s}^{t} f∗(y(r), r) dg(r) = 0, for all s, t ∈ (β, ω].
Now, let σ ∈ (β, ω) be fixed, and define a function u : [τ0, ω] → Rn by

u(t) := y(t) for t ∈ [τ0, ω) and u(ω) := y(σ).   (4.51)

Note that u is well-defined and u|[τ0,ω) = y. We will show that u is a solution of the MDE
(4.49) on [τ0, ω]. Indeed, let s1, s2 ∈ [τ0, ω] be such that s1 ∈ [τ0, ω) and s2 = ω. Then

u(s2) − u(s1) = y(σ) − y(s1)
             = ∫_{s1}^{σ} f∗(y(s), s) dg(s)                                        (by (4.49))
             = ∫_{s1}^{σ} f∗(y(s), s) dg(s) + ∫_{σ}^{ω} f∗(y(s), s) dg(s)          (the second integral vanishes)
             = ∫_{s1}^{ω} f∗(y(s), s) dg(s)
             = ∫_{s1}^{s2} f∗(y(s), s) dg(s)                                       (since s2 = ω)
             = ∫_{s1}^{s2} f∗(u(s), s) dg(s),

that is,

u(s2) − u(s1) = ∫_{s1}^{s2} f∗(u(s), s) dg(s),
for all s1, s2 ∈ [τ0, ω] such that s1 ∈ [τ0, ω) and s2 = ω.
The case s1, s2 ∈ [τ0, ω) follows immediately by the definition of u. Then u is a solution
of the MDE (4.49) on [τ0, ω]. Note that u is clearly a proper prolongation of y : [τ0, ω)→ Rn
to the right, which contradicts the fact that y is the maximal solution of the MDE (4.49).
Therefore, we conclude that ω ∈ T.
Assertion 2. ω is left-dense, that is, ρ(ω) = ω.
Suppose the contrary, that is, ρ(ω) = sup{s ∈ T : s < ω} < ω. Thus, g is constant on
(ρ(ω), ω] and, therefore, using the same arguments as in the proof of Assertion 1 (with
β = ρ(ω)), we can prove that there exists a proper prolongation of y : [τ0, ω) → Rn to the
right, which contradicts the fact that y is the maximal solution of the MDE (4.49). Then, ω
is left-dense.
On the other hand, by Theorem 3.17, y : [τ0, ω) → Rn must have the form y = x∗,
where x : [τ0, ω)T → Rn is a solution of the dynamic equation on time scales (4.46). We
assert that x : [τ0, ω)T → Rn is a maximal solution of the dynamic equation on time scales
(4.46). Suppose the contrary, that is, let φ : JT → Rn be a proper prolongation of x :
[τ0, ω)T → Rn to the right. Then, without loss of generality, we can consider JT = [τ0, ω]T.
Since φ : [τ0, ω]T → Rn is a solution of the dynamic equation on time scales (4.46) (because
φ is a proper prolongation of x), by Theorem 3.17, φ∗ : [τ0, ω] → Rn is a solution of the MDE
(4.49). Also, note that φ∗|[τ0,ω) = y. Thus, φ∗ : [τ0, ω] → Rn is a proper prolongation of
y : [τ0, ω) → Rn, which contradicts the fact that y is the maximal solution of the MDE
(4.49). Thus, x : [τ0, ω)T → Rn is a maximal solution of the dynamic equation on time scales
(4.46).
Finally, we will show the uniqueness of the maximal solution x. Suppose ψ : LT → Rn is
also a maximal solution of the dynamic equation on time scales (4.46).
Statement 1. x(t) = ψ(t), for all t ∈ [τ0, ω)T ∩ LT.
Indeed, by Theorem 3.17, ψ∗ : L→ Rn is a solution of the MDE (4.49). But y : [τ0, ω)→ Rn
is the unique maximal solution of (4.49). Then y(t) = ψ∗(t), for every t ∈ [τ0, ω) ∩ L. In
particular, since
[τ0, ω)T ∩ LT = [τ0, ω) ∩ L ∩ T ⊂ [τ0, ω) ∩ L,
we have
y(t) = ψ∗(t), for all t ∈ [τ0, ω)T ∩ LT,
and, therefore, for t ∈ [τ0, ω)T ∩ LT, we get
x(t) = x(t∗) = x∗(t) = y(t) = ψ∗(t) = ψ(t∗) = ψ(t),

where the third equality follows from (4.50); that is, x(t) = ψ(t), for all t ∈ [τ0, ω)T ∩ LT.
This ends the proof of Statement 1.
Now, define λ : ET → Rn, E = [τ0, ω) ∪ L, by

λ(t) := x(t) for t ∈ [τ0, ω)T and λ(t) := ψ(t) for t ∈ LT.
By Statement 1, λ is well-defined. Notice that λ is a solution of (4.46) and the time scales
intervals [τ0, ω)T and LT are contained in ET and λ|[τ0,ω)T = x, λ|LT = ψ. Since x and ψ are
maximal solutions of (4.46), we have ET = [τ0, ω)T = LT and λ(t) = x(t) = ψ(t), for all
t ∈ ET, that is, x(t) = ψ(t), for all t ∈ ET = [τ0, ω)T = LT, proving the uniqueness of x.
Corollary 4.28. Let T be a time scale such that supT = +∞ and t0 ∈ T. Suppose f :
O × [t0,+∞)T → Rn satisfies conditions (B1), (B2) and (B3). Also, assume that for all
(z0, s0) ∈ O × [t0,+∞)T, we have z0 + f(z0, s0)µ(s0) ∈ O. Let (x0, τ0) ∈ O × [t0,+∞)T and
x : [τ0, ω)T → Rn be the maximal solution of the dynamic equation on time scales (4.46)
(which is ensured by Theorem 4.27). If each point of T is left-scattered, then ω = +∞.
Proof. Suppose the contrary, that is, ω < +∞. Thus, by Theorem 4.27, ω ∈ T and ω is left-
dense, which contradicts the fact that each point of T is left-scattered. Then ω = +∞.
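Corollary 4.28 covers, for instance, T = hZ with h > 0, where every point is left-scattered and µ ≡ h. There the ∆-derivative reduces to a forward difference, so (4.46) becomes the recursion x(t + h) = x(t) + h f(x(t), t), which can always be advanced one more step when O = Rn — a concrete reading of ω = +∞. A Python sketch (illustrative only; the example f is hypothetical):

```python
def solve_hZ(f, x0, t0, h, steps):
    """On T = hZ every point is left-scattered and mu = h, so the
    Delta-derivative x^Delta(t) = (x(sigma(t)) - x(t)) / mu(t) turns (4.46)
    into the recursion x(t + h) = x(t) + h * f(x(t), t). With O = R^n
    nothing ever obstructs the next step, matching omega = +infinity
    in Corollary 4.28."""
    t, x, traj = t0, x0, [(t0, x0)]
    for _ in range(steps):
        x = x + h * f(x, t)
        t = t + h
        traj.append((t, x))
    return traj

# Hypothetical example: f(x, t) = x on T = 0.5Z gives x(t + 0.5) = 1.5 x(t).
traj = solve_hZ(lambda x, t: x, x0=1.0, t0=0.0, h=0.5, steps=4)
```

Each iteration of the loop is precisely the "one more step" that rules out a finite ω on a time scale with no left-dense points.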
Remark 4.29. Notice that, in the proof of Theorem 4.27, y = x∗ : [τ0, ω) → Rn is the
maximal solution of the measure differential equation (4.49) if, and only if, x : [τ0, ω)T → Rn
is the maximal solution of the dynamic equation on time scales (4.46), where (x0, τ0) ∈ O × [t0,+∞)T.
Theorem 4.30. Let T be a time scale such that sup T = +∞ and t0 ∈ T. Suppose f :
O × [t0,+∞)T → Rn satisfies conditions (B1), (B2) and (B3). Also, assume that for all
(z0, s0) ∈ O × [t0,+∞)T, we have z0 + f(z0, s0)µ(s0) ∈ O. Let (x0, τ0) ∈ O × [t0,+∞)T
and x : [τ0, ω)T → Rn be the maximal solution of the dynamic equation on time scales
(4.46). Then, for every compact set K ⊂ O × [t0,+∞)T, there exists tK ∈ [τ0, ω) such that
(x(t), t) /∈ K, for all t ∈ (tK, ω) ∩ T.
Proof. Let (x0, τ0) ∈ O × [t0,+∞)T and x : [τ0, ω)T → Rn be the maximal solution of the
dynamic equation on time scales (4.46). Also, let K ⊂ O × [t0,+∞)T be a compact set. In
particular, K ⊂ O × [t0,+∞).
On the other hand, by Remark 4.29, x∗ : [τ0, ω) → Rn is the maximal solution of the
following integral form of the MDE
y(t) = x0 + ∫_{τ0}^{t} f∗(y(s), s) dg(s),   t ≥ τ0,
where f ∗ : O × [t0,+∞) → Rn is given by f ∗(x, t) = f(x, t∗), for all (x, t) ∈ O × [t0,+∞)
and g(t) = t∗, for all t ∈ [t0,+∞).
Since f satisfies conditions (B1), (B2) and (B3), by Theorem 4.26, f ∗ satisfies conditions
(A2), (A3) and (A4). Also, by Lemma 3.12, g is a nondecreasing and left-continuous function
on (t0,+∞). Note that, using the same arguments as in the proof of Theorem 4.27, we can
prove that
z0 + f∗(z0, s0)∆⁺g(s0) ∈ O,
for all (z0, s0) ∈ O × [t0,+∞). Thus, f ∗, g and x∗ fulfill all the hypotheses of Theorem
4.19 and, therefore, there exists tK ∈ [τ0, ω) such that (x∗(t), t) /∈ K, for all t ∈ (tK , ω). In
particular, since (tK , ω) ∩ T ⊂ (tK , ω), we get
(x(t), t) = (x(t∗), t) = (x∗(t), t) /∈ K, for all t ∈ (tK , ω) ∩ T,
that is,
(x(t), t) /∈ K, for all t ∈ (tK , ω) ∩ T.
This completes the proof of the theorem.
Remark 4.31. Note that in the proof of the previous theorem, the set (tK, ω) ∩ T is
nonempty. Indeed, if ω < +∞, then, according to Theorem 4.27, ω ∈ T and ω is left-dense
and, therefore, (tK, ω) ∩ T ≠ ∅. On the other hand, if ω = +∞, then (tK,+∞) ∩ T ≠ ∅,
because sup T = +∞.
Theorem 4.32. Let T be a time scale such that sup T = +∞ and t0 ∈ T. Suppose f :
O × [t0,+∞)T → Rn satisfies conditions (B1), (B2) and (B3). Also, assume that for all
(z0, s0) ∈ O × [t0,+∞)T, we have z0 + f(z0, s0)µ(s0) ∈ O. Suppose (x0, τ0) ∈ O × [t0,+∞)T
and x : [τ0, ω)T → Rn is the unique maximal solution of (4.45) with x(τ0) = x0. If x(t) ∈ N
for all t ∈ [τ0, ω)T, where N is a closed subset of O, then ω = +∞.
Proof. Let (x0, τ0) ∈ O× [t0,+∞)T and x : [τ0, ω)T → Rn be the unique maximal solution of
(4.45) with x(τ0) = x0. Note that the existence of such a solution is guaranteed by Theorem
4.27. Also, assume x(t) ∈ N for any t ∈ [τ0, ω)T, where N is a closed subset of O.
On the other hand, by Remark 4.29, the function x∗ : [τ0, ω)→ Rn is the unique maximal
solution of
y(t) = x0 + ∫_{τ0}^{t} f∗(y(s), s) dg(s),   (4.52)
where f∗ : O × [t0,+∞) → Rn is given by f∗(x, t) = f(x, t∗) for all (x, t) ∈ O × [t0,+∞)
and g(t) = t∗ for all t ∈ [t0,+∞).
Since x(t) ∈ N for all t ∈ [τ0, ω)T, for each s ∈ [τ0, ω), we have
x∗(s) = x(s∗) ∈ N
because s∗ ∈ [τ0, ω)T. Thus
x∗(s) ∈ N, for all s ∈ [τ0, ω).
Now, N is compact (since N is closed and bounded in Rn). Moreover, by Theorem 4.26, f ∗
satisfies conditions (A2), (A3) and (A4) because f satisfies conditions (B1), (B2) and (B3),
and by Lemma 3.12, g is a nondecreasing and left-continuous function on (t0,+∞). We can
prove, by the same arguments as in the proof of Theorem 4.27, that z0 + f∗(z0, s0)∆⁺g(s0) ∈ O,
for all (z0, s0) ∈ O × [t0,+∞). Thus, f∗ : O × [t0,+∞) → Rn, g : [t0,+∞) → R and
x∗ : [τ0, ω) → Rn fulfill all the hypotheses of Corollary 4.20. Then ω = +∞ and the proof
is complete.
Theorem 4.33. Let T be a time scale such that supT = +∞ and t0 ∈ T. Assume that
f : O × [t0,+∞)T → Rn satisfies conditions (B1), (B2) and (B3). Also, assume that for all
(z0, s0) ∈ O × [t0,+∞)T, we have z0 + f(z0, s0)µ(s0) ∈ O. Let (x0, τ0) ∈ O × [t0,+∞)T and
x : [τ0, ω)T → Rn be the maximal solution of the dynamic equation on time scales (4.46). If
ω < +∞, then the following conditions hold:

(i) The limit lim_{t→ω−} x(t) exists;

(ii) (y, ω) ∈ ∂ΩT and (x(t), t) → (y, ω) as t → ω−, where y := lim_{t→ω−} x(t) and ΩT = O × [t0,+∞)T.
Proof. Let (x0, τ0) ∈ O × [t0,+∞)T and x : [τ0, ω)T → Rn be the maximal solution of the
dynamic equation on time scales (4.46). Also, suppose ω < +∞.
Define a function f∗ : O × [t0,+∞) → Rn by f∗(x, t) = f(x, t∗), for all (x, t) ∈ O ×
[t0,+∞), and g : [t0,+∞) → R by g(t) = t∗, for all t ∈ [t0,+∞). Similarly as in the proof
of Theorem 4.27, we can prove that f∗ satisfies conditions (A2), (A3), (A4), and g satisfies
condition (A1). Also, z0 + f∗(z0, s0)∆⁺g(s0) ∈ O, for all (z0, s0) ∈ O × [t0,+∞).
On the other hand, by Remark 4.29, x∗ : [τ0, ω) → Rn is the maximal solution of the
integral form of a MDE of type

y(t) = x0 + ∫_{τ0}^{t} f∗(y(s), s) dg(s),   t ≥ τ0.

Since all the hypotheses of Corollary 4.21 are satisfied, the limit lim_{t→ω−} x∗(t) exists and the
following conditions hold:

(y, ω) ∈ ∂Ω and (x∗(t), t) → (y, ω) as t → ω−,   (4.53)

where y := lim_{t→ω−} x∗(t) and Ω = O × [t0,+∞).
Note that by Theorem 4.27, ω ∈ T and ω is left-dense.
In order to prove item (i), let {tk}k∈N be a sequence in [τ0, ω)T such that tk → ω as k → +∞.
Clearly tk = tk∗, for all k ∈ N, and, therefore,

x(tk) = x(tk∗) = x∗(tk) → y as k → +∞,

that is, the limit lim_{t→ω−} x(t) exists and

lim_{t→ω−} x(t) = y.   (4.54)

This proves (i).
Now, we will prove (ii). By (4.53), (y, ω) ∈ ∂Ω ⊂ cl(Ωc), where cl denotes the closure. But
cl(Ωc) ⊂ cl((ΩT)c), because ΩT ⊂ Ω. Thus

(y, ω) ∈ cl((ΩT)c).   (4.55)

On the other hand, since ω is left-dense, there exists a sequence {sk}k∈N in [τ0, ω)T such
that sk → ω as k → +∞. Thus, by (4.54), we have x(sk) → y as k → +∞.
Since {(x(sk), sk)}k∈N ⊂ ΩT and (x(sk), sk) → (y, ω) as k → +∞, we obtain

(y, ω) ∈ cl(ΩT).   (4.56)

Finally, since ∂ΩT = cl(ΩT) ∩ cl((ΩT)c), item (ii) follows from (4.55) and (4.56), obtaining the
desired result.
Theorem 4.34. Let T be a time scale such that supT = +∞ and t0 ∈ T. Suppose f :
Rn × [t0,+∞)T → Rn satisfies conditions (B1), (B2) and (B3) (with O = Rn). Then, for
every (x0, τ0) ∈ Rn × [t0,+∞)T, there exists a unique maximal solution of (4.45), defined on
[τ0,+∞)T, such that x(τ0) = x0.
Proof. Let (x0, τ0) ∈ Rn × [t0,+∞)T (arbitrary, but fixed). Then (x0, τ0) ∈ Rn × [t0,+∞).
Define f∗ : Rn × [t0,+∞) → Rn by f∗(x, t) = f(x, t∗) for all (x, t) ∈ Rn × [t0,+∞), and
g : [t0,+∞) → R by g(t) = t∗. Since f satisfies conditions (B1), (B2) and (B3) (with O = Rn),
by Theorem 4.26, f∗ satisfies conditions (A2), (A3) and (A4) (with O = Rn), and by Lemma
3.12, g is a nondecreasing and left-continuous function on (t0,+∞). Hence f∗ and g fulfill all
the hypotheses of Corollary 4.22. Then there exists a unique maximal solution y : [τ0,+∞) → Rn of
y(t) = x0 + ∫_{τ0}^{t} f∗(y(s), s) dg(s).
By Theorem 3.17, y must have the form y = x∗ : [τ0,+∞)→ Rn, where x : [τ0,+∞)T → Rn
is a solution of (4.45) with x(τ0) = x0. The result follows from Remark 4.29.
Chapter 5

Boundedness of solutions
In this chapter, we present results concerning the boundedness of the solutions of gen-
eralized ordinary differential equations (generalized ODEs), measure differential equations
(MDEs) and dynamic equations on time scales.
It is worth mentioning that all the results presented in Section 5.1 generalize the results
found in [2], and all the results presented in Sections 5.2 and 5.3 of this chapter improve
results found in the literature (see, for instance, [4, 15, 36]); the improvements are
obtained by using the correspondence between the solutions of the aforementioned equations (see
Chapter 3). These results are contained in [17].
5.1 Boundedness of solutions of generalized ODEs
In this section, our goal is to prove some results concerning the boundedness of the
solutions of generalized ODEs using Lyapunov functionals.
Throughout this section, let X be a Banach space with norm ‖·‖, and let Ω = X × [t0,+∞),
where t0 ≥ 0. Let F : Ω → X be a function and suppose F ∈ F(Ω, h), where the function
h : [t0,+∞) → R is left-continuous on (t0,+∞). Under these conditions, consider the
following generalized ODE

dz/dτ = DF(z, t)   (5.1)

with the initial condition

z(s0) = z0,   (5.2)
where (z0, s0) ∈ Ω.
From now on, we assume that for every (z0, s0) ∈ Ω, there exists a unique (maximal)
solution x : [s0,+∞) → X of the generalized ODE (5.1) with x(s0) = z0. The existence of
such a solution is ensured by Corollary 4.13.
In what follows, for every (z0, s0) ∈ Ω, we denote by x(s, s0, z0) the unique maximal
solution of the generalized ODE (5.1) with x(s0) = z0.
Next, we present the concepts of uniform boundedness for generalized ODEs. The basic
references for this subject are [2] and [43].
Definition 5.1. We say that the generalized ODE (5.1) is
• Uniformly bounded: if for every α > 0, there exists M = M(α) > 0 such that, for
all s0 ∈ [t0,+∞) and all z0 ∈ X, with ‖z0‖ < α, we have
‖x(s, s0, z0)‖ < M, for all s ≥ s0,
where x(s, s0, z0) is the maximal solution of (5.1) with x(s0) = z0.
• Quasi-uniformly ultimately bounded: if there exists B > 0 such that for all α > 0,
there exists T = T (α) > 0, such that for all s0 ∈ [t0,+∞) and for all z0 ∈ X, with ‖z0‖ < α,
we have
‖x(s, s0, z0)‖ < B, for all s ≥ s0 + T,
where x(s, s0, z0) is the maximal solution of (5.1) with x(s0) = z0.
• Uniformly ultimately bounded: if it is uniformly bounded and quasi-uniformly
ultimately bounded.
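To make the three notions concrete, consider the classical scalar ODE x′ = −x, a special case of (5.1) (this example and the Python sketch below are illustrative additions of ours, not part of the thesis). Its solutions satisfy ‖x(s, s0, z0)‖ = ‖z0‖e^{−(s−s0)}, so M(α) = α witnesses uniform boundedness and, for instance, B = 1 with T(α) = max{ln α, 0} + 1 witnesses quasi-uniform ultimate boundedness:

```python
import math

def exact_solution(x0, s0, s):
    """Classical solution of x' = -x: x(s) = x0 * exp(-(s - s0))."""
    return x0 * math.exp(-(s - s0))

def M_bound(alpha):
    # uniform boundedness: M(alpha) = alpha works, independently of s0
    return alpha

def T_bound(alpha, B=1.0):
    # quasi-uniform ultimate boundedness with B = 1: after time T the
    # solution has decayed below B for every |x0| < alpha
    return max(math.log(alpha / B), 0.0) + 1.0

alpha, s0, x0 = 10.0, 3.0, 9.9          # any |x0| < alpha
samples = [s0 + k * 0.1 for k in range(1000)]
assert all(abs(exact_solution(x0, s0, s)) < M_bound(alpha) for s in samples)
assert all(abs(exact_solution(x0, s0, s)) < 1.0
           for s in samples if s >= s0 + T_bound(alpha))
print("uniform and ultimate bounds verified for x' = -x")
```

The point of the sketch is only that both constants depend on α alone, never on s0, which is exactly the uniformity required by the definition.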
The following result can be found in [38], Proposition 10.11. It will be essential for our
purposes.
Proposition 5.2. Suppose −∞ < a < b < +∞ and f, g : [a, b] → R are left-continuous
functions on (a, b]. If for every σ ∈ [a, b), there exists δ = δ(σ) > 0 such that for all η ∈ (0, δ),
the following inequality holds
f(σ + η)− f(σ) ≤ g(σ + η)− g(σ),
then
f(s)− f(a) ≤ g(s)− g(a),
for every s ∈ [a, b].
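Proposition 5.2 is a "local increments control global increments" principle: the pointwise hypothesis telescopes into the global inequality. As a quick numerical sanity check (an illustrative sketch of ours, not part of the thesis), take f = sin and g = id on [0, 10]: since |sin′| ≤ 1, every local increment of f is dominated by the corresponding increment of g, and the global conclusion follows:

```python
import math

def local_increments_ok(f, g, a, b, eta=1e-3, n=1000):
    """Check the hypothesis f(s + eta) - f(s) <= g(s + eta) - g(s) on a grid."""
    for i in range(n):
        s = a + (b - a) * i / n
        if f(s + eta) - f(s) > g(s + eta) - g(s) + 1e-12:
            return False
    return True

def global_inequality_ok(f, g, a, b, n=1000):
    """Check the conclusion f(s) - f(a) <= g(s) - g(a) for s in [a, b]."""
    return all(f(a + (b - a) * i / n) - f(a) <= g(a + (b - a) * i / n) - g(a) + 1e-12
               for i in range(n + 1))

f, g = math.sin, lambda t: t   # |sin'| <= 1, so increments of sin never exceed those of t
print(local_increments_ok(f, g, 0.0, 10.0), global_inequality_ok(f, g, 0.0, 10.0))
```

Of course, the proposition itself is about left-continuous functions with possible jumps; the smooth pair above merely illustrates the telescoping mechanism.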
The next auxiliary result will be crucial to prove our main results.
Lemma 5.3. Let F ∈ F(Ω, h), where the function h : [t0,+∞) → R is nondecreasing and
left-continuous. Also, suppose V : [t0,+∞) × X → R is such that, for each function
z : [α, β] → X which is left-continuous on (α, β], the function [α, β] ∋ t ↦ V(t, z(t)) is
left-continuous on (α, β]. Moreover, suppose V satisfies the following conditions:

(V1) For all functions x, y : [α, β] → X, [α, β] ⊂ [t0,+∞), of bounded variation in [α, β],
the inequality

|V(t, x(t)) − V(t, y(t)) − V(s, x(s)) + V(s, y(s))| ≤ (h1(t) − h1(s)) sup_{ξ∈[α,β]} ‖x(ξ) − y(ξ)‖

holds for every α ≤ s < t ≤ β, where h1 : [t0,+∞) → R is a nondecreasing and
left-continuous function.

(V2) There exists a function Φ : X → R such that, for every solution z : [s0,+∞) → X,
s0 ≥ t0, of (5.1), we have

V(s, z(s)) − V(t, z(t)) ≤ (s − t)Φ(z(t)),

for every s0 ≤ t < s < +∞.

If x : [γ, v] → X, t0 ≤ γ < v < +∞, is left-continuous on (γ, v] and of bounded variation in
[γ, v], then

V(v, x(v)) − V(γ, x(γ)) ≤ (h1(v) − h1(γ)) sup_{s∈[γ,v]} ‖x(s) − x(γ) − ∫_γ^s DF(x(τ), t)‖ + (v − γ)K,

where K = sup{Φ(x(t)) : t ∈ [γ, v]}.
Proof. Let x : [γ, v] → X, [γ, v] ⊂ [t0,+∞), be a function which is left-continuous on (γ, v]
and of bounded variation in [γ, v], and set K := sup{Φ(x(t)) : t ∈ [γ, v]}. If K = +∞, the
desired inequality holds trivially.

Now, let us assume that K < +∞. Note that Proposition 1.18 implies the existence of
the integral ∫_γ^v DF(x(τ), t).

Take σ ∈ [γ, v]. Since (x(σ), σ) ∈ Ω = X × [t0,+∞), by Theorem 4.13, there exists
a unique maximal solution x̄ : [σ,+∞) → X of the generalized ODE (5.1) on [σ,+∞),
satisfying the initial condition x̄(σ) = x(σ).

Let η1 > 0 be fixed. Then x̄|[σ,σ+η1] is also a solution of the generalized ODE (5.1). Thus,
by Corollary 1.17 and Proposition 1.18, the integral ∫_σ^{σ+η1} DF(x̄(τ), t) exists.
Consider η2 > 0 such that η2 ≤ η1 and σ + η2 ≤ v. Then the integral ∫_σ^{σ+η2} DF(x̄(τ), t)
exists, and the integral ∫_σ^{σ+η2} D[F(x(τ), t) − F(x̄(τ), t)] also exists by the property of
integrability on subintervals of the Kurzweil integral. Therefore, given ε > 0, there exists a
gauge δ of the interval [σ, σ + η2] corresponding to ε in the definition of the last integral.
We can assume, without loss of generality, that η2 < δ(σ). By hypothesis (V2), there exists
a function Φ : X → R such that

V(s, x̄(s)) − V(t, x̄(t)) ≤ (s − t)Φ(x̄(t)),

for every σ ≤ t < s < +∞. In particular, for every 0 < η < η2, we have

V(σ + η, x̄(σ + η)) − V(σ, x̄(σ)) ≤ ηΦ(x̄(σ)),    (5.3)
where s = σ + η and t = σ. By Corollary 1.4, we have

‖F(x̄(σ), s) − F(x̄(σ), σ) − ∫_σ^s DF(x̄(τ), t)‖ < ηε/(2(h1(σ + η) − h1(σ)))    (5.4)

and

‖F(x(σ), s) − F(x(σ), σ) − ∫_σ^s DF(x(τ), t)‖ < ηε/(2(h1(σ + η) − h1(σ))),    (5.5)

for every s ∈ [σ, σ + η]. Notice that

sup_{s∈[σ,σ+η]} ‖∫_σ^s D[F(x(τ), t) − F(x̄(τ), t)]‖ − sup_{s∈[σ,σ+η]} ‖F(x(σ), s) − F(x(σ), σ) − F(x̄(σ), s) + F(x̄(σ), σ)‖

≤ sup_{s∈[σ,σ+η]} ‖∫_σ^s D[F(x(τ), t) − F(x̄(τ), t)] − (F(x(σ), s) − F(x(σ), σ) − F(x̄(σ), s) + F(x̄(σ), σ))‖

≤ sup_{s∈[σ,σ+η]} ‖F(x(σ), s) − F(x(σ), σ) − ∫_σ^s DF(x(τ), t)‖ + sup_{s∈[σ,σ+η]} ‖F(x̄(σ), s) − F(x̄(σ), σ) − ∫_σ^s DF(x̄(τ), t)‖.    (5.6)
On the other hand,

sup_{s∈[σ,σ+η]} ‖F(x(σ), s) − F(x(σ), σ) − F(x̄(σ), s) + F(x̄(σ), σ)‖ ≤ ‖x(σ) − x̄(σ)‖ sup_{s∈[σ,σ+η]} |h(s) − h(σ)| = 0,    (5.7)

where the inequality follows from the fact that F ∈ F(Ω, h), and the equality follows since
x̄(σ) = x(σ). Therefore, by (5.4), (5.5), (5.6) and (5.7), we have

sup_{s∈[σ,σ+η]} ‖∫_σ^s D[F(x(τ), t) − F(x̄(τ), t)]‖ ≤ ηε/(h1(σ + η) − h1(σ)).    (5.8)
Since F ∈ F(Ω, h) and the function h is nondecreasing, by Corollary 1.17, x̄ is of bounded
variation in [σ, σ + η]; moreover, x is of bounded variation in [σ, σ + η] ⊂ [γ, v]. Thus, by
hypothesis (V1) and by the relation x̄(σ) = x(σ), we get

V(σ + η, x(σ + η)) − V(σ + η, x̄(σ + η))
= V(σ + η, x(σ + η)) − V(σ + η, x̄(σ + η)) − V(σ, x(σ)) + V(σ, x̄(σ))
≤ |V(σ + η, x(σ + η)) − V(σ + η, x̄(σ + η)) − V(σ, x(σ)) + V(σ, x̄(σ))|
≤ (h1(σ + η) − h1(σ)) sup_{s∈[σ,σ+η]} ‖x(s) − x̄(s)‖
= (h1(σ + η) − h1(σ)) sup_{s∈[σ,σ+η]} ‖x(s) − x(σ) + x̄(σ) − x̄(s)‖
= (h1(σ + η) − h1(σ)) sup_{s∈[σ,σ+η]} ‖x(s) − x(σ) − ∫_σ^s DF(x̄(τ), t)‖,

which implies

V(σ + η, x(σ + η)) − V(σ + η, x̄(σ + η)) ≤ (h1(σ + η) − h1(σ)) sup_{s∈[σ,σ+η]} ‖x(s) − x(σ) − ∫_σ^s DF(x̄(τ), t)‖.    (5.9)
Therefore, by (5.3) and (5.9), we obtain

V(σ + η, x(σ + η)) − V(σ, x(σ))
= V(σ + η, x(σ + η)) − V(σ + η, x̄(σ + η)) + V(σ + η, x̄(σ + η)) − V(σ, x̄(σ))
≤ (h1(σ + η) − h1(σ)) sup_{s∈[σ,σ+η]} ‖x(s) − x(σ) − ∫_σ^s DF(x̄(τ), t)‖ + ηΦ(x(σ))
≤ (h1(σ + η) − h1(σ)) sup_{s∈[σ,σ+η]} ‖x(s) − x(σ) − ∫_σ^s DF(x̄(τ), t)‖ + ηK
≤ (h1(σ + η) − h1(σ)) sup_{s∈[σ,σ+η]} ‖x(s) − x(σ) − ∫_σ^s DF(x(τ), t)‖
+ (h1(σ + η) − h1(σ)) sup_{s∈[σ,σ+η]} ‖∫_σ^s D[F(x(τ), t) − F(x̄(τ), t)]‖ + ηK
≤ (h1(σ + η) − h1(σ)) sup_{s∈[σ,σ+η]} ‖x(s) − x(σ) − ∫_σ^s DF(x(τ), t)‖ + ηε + ηK,    (5.10)

where the last inequality follows from (5.8). Given s ∈ [γ, v], define

P(s) := x(s) − ∫_γ^s DF(x(τ), t).
Since x is a function of bounded variation in [γ, v] and (x(s), s) ∈ Ω for every s ∈ [γ, v],
it follows from Proposition 1.18 that the Kurzweil integral ∫_γ^v DF(x(τ), t) exists and that
the function s ↦ ∫_γ^s DF(x(τ), t) is of bounded variation in [γ, v]. Hence, for each s ∈ [γ, v],
the Kurzweil integral ∫_γ^s DF(x(τ), t) also exists, by the property of integrability on
subintervals of the Kurzweil integral. Then the function P is well-defined and of bounded
variation in [γ, v]. Moreover, by Lemma 1.15, P is left-continuous on (γ, v], since x and h
are left-continuous on (γ, v].
On the other hand, for s ∈ [γ, v], we have

P(s) − P(σ) = x(s) − x(σ) − ∫_γ^s DF(x(τ), t) + ∫_γ^σ DF(x(τ), t) = x(s) − x(σ) − ∫_σ^s DF(x(τ), t).    (5.11)

Now, we define the function f : [γ, v] → R by

f(t) = (h1(t) − h1(σ)) sup_{s∈[γ,t]} ‖P(s) − P(σ)‖ + εt + Kt, for t ∈ [γ, σ],

f(t) = (h1(t) − h1(σ)) sup_{s∈[σ,t]} ‖P(s) − P(σ)‖ + εt + Kt, for t ∈ [σ, v].
Clearly, f is well-defined. Moreover, by the left-continuity of the functions h1 and P, the
function f is left-continuous on (γ, v]. Also, since x : [γ, v] → X is left-continuous, it
follows from the hypotheses that the function [γ, v] ∋ t ↦ V(t, x(t)) is left-continuous on
(γ, v].

On the other hand, by (5.10) and (5.11), we have

V(σ + η, x(σ + η)) − V(σ, x(σ)) ≤ (h1(σ + η) − h1(σ)) sup_{s∈[σ,σ+η]} ‖P(s) − P(σ)‖ + ηε + ηK = f(σ + η) − f(σ).
Thus, the functions [γ, v] ∋ t ↦ V(t, x(t)) and [γ, v] ∋ t ↦ f(t) satisfy all the hypotheses of
Proposition 5.2. Hence

V(v, x(v)) − V(γ, x(γ)) ≤ f(v) − f(γ)
= (h1(v) − h1(σ)) sup_{s∈[σ,v]} ‖P(s) − P(σ)‖ + εv + Kv − (h1(γ) − h1(σ)) sup_{s∈[γ,γ]} ‖P(s) − P(σ)‖ − εγ − Kγ
= (h1(v) − h1(σ)) sup_{s∈[σ,v]} ‖P(s) − P(σ)‖ + εv + Kv + (h1(σ) − h1(γ)) sup_{s∈[γ,γ]} ‖P(s) − P(σ)‖ − εγ − Kγ
≤ (h1(v) − h1(σ)) sup_{s∈[γ,v]} ‖P(s) − P(σ)‖ + εv + Kv + (h1(σ) − h1(γ)) sup_{s∈[γ,v]} ‖P(s) − P(σ)‖ − εγ − Kγ
= (h1(v) − h1(γ)) sup_{s∈[γ,v]} ‖P(s) − P(σ)‖ + ε(v − γ) + K(v − γ)
= (h1(v) − h1(γ)) sup_{s∈[γ,v]} ‖x(s) − x(σ) − ∫_σ^s DF(x(τ), t)‖ + K(v − γ) + ε(v − γ).

Since ε > 0 is arbitrary, the result follows.
In the sequel, we present a result which ensures that the generalized ODE (5.1) is uni-
formly bounded. Our result generalizes the result found in [2].
Theorem 5.4. Let V : [t0,+∞) × X → R be a function such that, for each left-continuous
function z : [α, β] → X on (α, β], the function [α, β] ∋ t ↦ V(t, z(t)) is left-continuous on
(α, β]. Moreover, suppose V satisfies the following conditions:

(i) There are two monotone increasing functions p, b : R+ → R+ such that p(0) = b(0) = 0,

lim_{s→+∞} b(s) = +∞    (5.12)

and

b(‖z‖) ≤ V(t, z) ≤ p(‖z‖),    (5.13)

for every pair (t, z) ∈ [t0,+∞) × X.

(ii) For every solution z : [s0,+∞) → X, s0 ≥ t0, of the generalized ODE (5.1), we have

V(s, z(s)) − V(t, z(t)) ≤ 0,

for every s0 ≤ t < s < +∞.
Then the generalized ODE (5.1) is uniformly bounded.
Proof. Let α > 0 be fixed. Since p(α) > 0, by (5.12) there exists M = M(α) > 0 such that

p(α) < b(s), for all s ≥ M.

In particular, for s = M, we obtain

p(α) < b(M).    (5.14)

Now, let s0 ∈ [t0,+∞), z0 ∈ X with ‖z0‖ < α, and let x(·) = x(·, s0, z0) : [s0,+∞) → X
be the solution of the generalized ODE (5.1) with initial condition x(s0) = z0. We will
show that

‖x(s)‖ < M, for all s ≥ s0.

Indeed, by hypothesis (ii) and condition (5.13), for each s ≥ s0, we have

V(s, x(s)) ≤ V(s0, x(s0)) = V(s0, z0) ≤ p(‖z0‖) ≤ p(α) < b(M),    (5.15)

that is,

V(s, x(s)) < b(M), for all s ≥ s0.    (5.16)

Finally, we show that ‖x(s, s0, z0)‖ = ‖x(s)‖ < M for all s ≥ s0. Suppose the contrary,
that is, suppose there exists s̄ ∈ [s0,+∞) such that ‖x(s̄)‖ ≥ M. Then, by hypothesis
(5.13) and the fact that b is an increasing function, we have

V(s̄, x(s̄)) ≥ b(‖x(s̄)‖) ≥ b(M),

which contradicts (5.16). Therefore ‖x(s)‖ < M for all s ≥ s0, and the result follows.
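The mechanism of this proof can be traced numerically in the classical special case x′ = −x with V(t, x) = x², where one may take b(s) = p(s) = s² (these concrete choices are ours, for illustration only, and are not taken from the thesis). Condition (ii) holds because V decreases along solutions, and any M with p(α) < b(M), for example M = 2α, yields the uniform bound:

```python
def V(t, x):
    # Lyapunov functional V(t, x) = x**2; then b(s) = p(s) = s**2 satisfy (5.13)
    return x * x

def solve(x0, s0, t_end, dt=1e-3):
    """Euler approximation of the solution of x' = -x starting at (x0, s0)."""
    xs, x, t = [x0], x0, s0
    while t < t_end:
        x += dt * (-x)
        t += dt
        xs.append(x)
    return xs

def bound_from_proof(alpha):
    # any M with p(alpha) < b(M) works; with p = b = square, take M = 2*alpha
    return 2.0 * alpha

alpha = 1.5
traj = solve(1.4, 0.0, 30.0)   # |x0| = 1.4 < alpha
print(all(abs(x) < bound_from_proof(alpha) for x in traj))
print(all(V(0.0, traj[i + 1]) <= V(0.0, traj[i]) + 1e-15 for i in range(len(traj) - 1)))
```

The second check is hypothesis (ii) in discrete form: V never increases along the approximate solution, which is what forces the trajectory to stay below M.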
The next result establishes conditions which guarantee that the generalized ODE (5.1)
is uniformly ultimately bounded. Our result generalizes a result found in [2].
Theorem 5.5. Let V : [t0,+∞) × X → R be a function such that, for each left-continuous
function z : [α, β] → X on (α, β], the function [α, β] ∋ t ↦ V(t, z(t)) is left-continuous
on (α, β], and such that V satisfies condition (i) from Theorem 5.4. Moreover, suppose V
satisfies the following conditions:

(V1) For every x, y : [α, β] → X, [α, β] ⊂ [t0,+∞), of bounded variation in [α, β], we have

|V(t, x(t)) − V(t, y(t)) − V(s, x(s)) + V(s, y(s))| ≤ (h1(t) − h1(s)) sup_{ξ∈[α,β]} ‖x(ξ) − y(ξ)‖,

for every α ≤ s < t ≤ β, where h1 : [t0,+∞) → R is a nondecreasing and left-
continuous function.

(V2) There exists a continuous function Φ : X → R, with Φ(0) = 0 and Φ(x) > 0 for x ≠ 0,
such that for every solution z : [s0,+∞) → X, s0 ≥ t0, of (5.1), we have

V(s, z(s)) − V(t, z(t)) ≤ (s − t)(−Φ(z(t))),

for every s0 ≤ t < s < +∞.
Then the generalized ODE (5.1) is uniformly ultimately bounded.
Proof. At first, notice that, by hypothesis (V2),

V(s, z(s)) − V(t, z(t)) ≤ (s − t)(−Φ(z(t))) ≤ 0,

for every solution z : [s0,+∞) → X, s0 ∈ [t0,+∞), of the generalized ODE (5.1), with
s0 ≤ t < s < +∞. Hence all the hypotheses of Theorem 5.4 are satisfied and, consequently,
the generalized ODE (5.1) is uniformly bounded. It remains to show that equation (5.1) is
quasi-uniformly ultimately bounded.

By the uniform boundedness of the generalized ODE (5.1), there exists B = B(t0 + 1) > 0
such that, for every t̄ ∈ [t0,+∞) and every x̄ ∈ X with ‖x̄‖ < t0 + 1, we have

‖x(s, t̄, x̄)‖ < B, for all s ≥ t̄,    (5.17)

where x(s, t̄, x̄) is the maximal solution of the generalized ODE (5.1) with x(t̄) = x(t̄, t̄, x̄) =
x̄. Without loss of generality, we can take B ∈ (t0 + 1,+∞); indeed, if B ≤ t0 + 1, we may
replace B by any B′ ∈ (t0 + 1,+∞) with B′ > B, and (5.17) still holds.
Let α > 0, s0 ∈ [t0,+∞) and z0 ∈ X with ‖z0‖ < α, and let x(·) = x(·, s0, z0) : [s0,+∞) → X
be the solution of the generalized ODE (5.1) with initial condition (5.2). Since (5.1) is
uniformly bounded, there exists a positive number M1 = M1(α) (we can take M1 >
max{α, t0 + 1}) such that

‖x(s, s0, z0)‖ < M1, for all s ≥ s0.
On the other hand, using the same argument as in (5.14) from the proof of Theorem 5.4,
there exists M2 = M2(α) > 0 such that p(α) < b(M2).

Now, let M = M(α) := max{M1(α), M2(α)}. Notice that

‖x(s, s0, z0)‖ < M, for all s ≥ s0,    (5.18)

and

p(α) < b(M).    (5.19)

Define

N := sup{−Φ(z) : t0 + 1 ≤ ‖z‖ < M} < 0

and

T(α) := −2b(M)/N > 0.

We want to show that ‖x(s, s0, z0)‖ < B for all s ≥ s0 + T(α). Suppose the contrary, that
is, that there exists s̄ > s0 + T(α) such that

‖x(s̄, s0, z0)‖ ≥ B > t0 + 1.    (5.20)

Assertion 1. The following inequality holds:

‖x(s, s0, z0)‖ ≥ t0 + 1, for all s ∈ [s0, s̄].
Suppose the assertion is false, that is, there exists t̄ ∈ [s0, s̄] such that

‖x(t̄, s0, z0)‖ < t0 + 1.

Then it follows from (5.17) (with x̄ = x(t̄, s0, z0)) that

‖x(s, t̄, x̄)‖ < B, for all s ≥ t̄.    (5.21)

Also, we know that x(s, t̄, x̄), s ∈ [t̄,+∞), is the unique solution of the initial value
problem

dz/dτ = DF(z, t), z(t̄) = x̄ = x(t̄, s0, z0).    (5.22)

But x(·, s0, z0)|[t̄,+∞) is also a solution of (5.22). Therefore, we have

x(s, s0, z0) = x(s, t̄, x̄), for all s ∈ [t̄,+∞).

In particular, since s̄ ∈ [t̄,+∞), we obtain

x(s̄, s0, z0) = x(s̄, t̄, x̄).    (5.23)

Thus, (5.21) and (5.23) imply that ‖x(s̄, s0, z0)‖ < B, which contradicts (5.20). This
proves Assertion 1.
By Assertion 1, we have

‖x(s, s0, z0)‖ ≥ t0 + 1, for all s ∈ [s0 + T(α)/2, s0 + T(α)],    (5.24)

since [s0 + T(α)/2, s0 + T(α)] ⊂ [s0, s̄].

Moreover, since x(·) := x(·, s0, z0)|[s0+T(α)/2, s0+T(α)] is a solution of the generalized ODE
(5.1) and F ∈ F(Ω, h), where the function h is left-continuous and nondecreasing, the
function x(·, s0, z0)|[s0+T(α)/2, s0+T(α)] is left-continuous on (s0 + T(α)/2, s0 + T(α)] and of
bounded variation in Iα := [s0 + T(α)/2, s0 + T(α)], by Lemma 1.16 and Corollary 1.17. Thus,
by Lemma 5.3, it follows that

V(s0 + T(α), x(s0 + T(α))) ≤ V(s0 + T(α)/2, x(s0 + T(α)/2))
+ (h1(s0 + T(α)) − h1(s0 + T(α)/2)) sup_{s∈Iα} ‖x(s) − x(s0 + T(α)/2) − ∫_{s0+T(α)/2}^s DF(x(τ, s0, z0), t)‖
+ (T(α)/2) · sup{−Φ(x(s)) : s ∈ [s0 + T(α)/2, s0 + T(α)]},

and the supremum in the second term on the right-hand side is zero, since x(·, s0, z0) is a
solution of (5.1) on Iα. This implies

V(s0 + T(α), x(s0 + T(α))) ≤ V(s0 + T(α)/2, x(s0 + T(α)/2)) + (T(α)/2) · sup{−Φ(x(s)) : s ∈ [s0 + T(α)/2, s0 + T(α)]}
≤ V(s0 + T(α)/2, x(s0 + T(α)/2)) + (T(α)/2) · sup{−Φ(z) : t0 + 1 ≤ ‖z‖ < M},    (5.25)
where the last inequality follows from the relation

t0 + 1 ≤ ‖x(s)‖ = ‖x(s, s0, z0)‖ < M, for all s ∈ [s0 + T(α)/2, s0 + T(α)].

Also, by (5.19) and using the same argument as in (5.15) from the proof of Theorem 5.4,
we get

V(s0 + T(α)/2, x(s0 + T(α)/2)) < b(M).

Therefore, by (5.25), we have

V(s0 + T(α), x(s0 + T(α))) < b(M) + (T(α)/2) · sup{−Φ(z) : t0 + 1 ≤ ‖z‖ < M}
= b(M) + (T(α)/2) · N
= b(M) − (2b(M)/(2N)) · N
= 0,

which implies that

V(s0 + T(α), x(s0 + T(α))) < 0.    (5.26)

On the other hand, by condition (5.13) and by (5.24), we have

V(s0 + T(α), x(s0 + T(α))) ≥ b(‖x(s0 + T(α))‖) ≥ b(t0 + 1) > 0,

which contradicts (5.26). Therefore ‖x(s, s0, z0)‖ < B for all s ≥ s0 + T(α). Thus the
generalized ODE (5.1) is quasi-uniformly ultimately bounded and the proof is complete.
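The constants appearing in this proof can be computed explicitly in a simple classical case. For x′ = −x with V(t, x) = x², b(s) = p(s) = s², Φ(z) = 2z² and t0 = 0 (all of these choices are ours, purely for illustration, since for this ODE dV/dt = −2x² = −Φ(x)), the quantities M(α), N and T(α) of the proof become concrete numbers:

```python
import math

def b(s):
    return s * s   # b = p: both bounds in condition (i) are the square function here

def ultimate_bound_constants(alpha, t0=0.0):
    """Constants from the proof of Theorem 5.5 for the illustrative case
    x' = -x, V(t, x) = x**2, Phi(z) = 2*z**2."""
    M = 2.0 * alpha                    # any M with p(alpha) < b(M); here M = 2*alpha
    N = -2.0 * (t0 + 1.0) ** 2         # sup{-Phi(z) : t0+1 <= |z| < M}, attained at |z| = t0+1
    T = -2.0 * b(M) / N                # T(alpha) = -2*b(M)/N > 0
    return M, N, T

alpha = 3.0
M, N, T = ultimate_bound_constants(alpha)
B = 2.0                                # any B > t0 + 1 serves as ultimate bound for x' = -x
# the exact solution satisfies |x(s)| = |x0| * exp(-(s - s0)) < alpha * exp(-T) for s >= s0 + T
assert alpha * math.exp(-T) < B
print(M, N, T)
```

Note that the T(α) produced by the proof is far from optimal for this example; the proof trades sharpness for generality, since it must cover arbitrary generalized ODEs.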
5.2 Boundedness of solutions of measure differential equations
In this section, our goal is to present results concerning boundedness of solutions of
measure differential equations.
Let Rn be the n-dimensional Euclidean space with norm ‖ · ‖ .
Consider the integral form of a measure differential equation:

x(t) = x(τ0) + ∫_{τ0}^t f(x(s), s) dg(s), t ≥ τ0,    (5.27)

where τ0 ≥ t0, f : Rn × [t0,+∞) → Rn and g : [t0,+∞) → R.
From now on, we assume that for every (z0, s0) ∈ Rn × [t0,+∞), there exists a unique
(maximal) solution x : [s0,+∞) → Rn of the measure differential equation (5.27) with
x(s0) = z0. Corollary 4.13 ensures the existence and uniqueness of such a solution.
Also, assume f satisfies conditions (A2), (A3) and (A4), and g satisfies condition (A1)
defined in Chapter 3.
In what follows, for every (z0, s0) ∈ Rn × [t0,+∞), we denote by x(s, s0, z0) the unique
maximal solution of the measure differential equation (5.27) with x(s0) = z0.
Now, we present some concepts related to uniform boundedness for measure differential
equations.
Definition 5.6. We say that the measure differential equation (5.27) is
• Uniformly bounded: if for every α > 0, there exists M = M(α) > 0 such that, for
all s0 ∈ [t0,+∞) and all z0 ∈ Rn, where ‖z0‖ < α, we have
‖x(s, s0, z0)‖ < M, for all s ≥ s0.
• Quasi-uniformly ultimately bounded: if there exists B > 0 such that for every
α > 0, there exists T = T (α) > 0, such that for all s0 ∈ [t0,+∞) and all z0 ∈ Rn, where
‖z0‖ < α, we have
‖x(s, s0, z0)‖ < B, for all s ≥ s0 + T.
• Uniformly ultimately bounded: if it is uniformly bounded and quasi-uniformly
ultimately bounded.
The next result ensures that the measure differential equation (5.27) is uniformly bounded.
Theorem 5.7. Assume f : Rn × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4)
and the function g : [t0,+∞) → R satisfies condition (A1). Let U : [t0,+∞) × Rn → R
be a function such that, for each left-continuous function z : [α, β] → Rn on (α, β], the
function [α, β] ∋ t ↦ U(t, z(t)) is left-continuous on (α, β]. Moreover, suppose U satisfies
the following conditions:
(i) There are two monotone increasing functions p, b : R+ → R+ such that p(0) = b(0) = 0,

lim_{s→+∞} b(s) = +∞

and

b(‖z‖) ≤ U(t, z) ≤ p(‖z‖),

for every pair (t, z) ∈ [t0,+∞) × Rn.
(ii) For every solution z : [s0,+∞) → Rn, s0 ≥ t0, of the measure differential equation
(5.27), we have
U(s, z(s))− U(t, z(t)) ≤ 0,
for every s0 ≤ t < s < +∞.
Then the measure differential equation (5.27) is uniformly bounded.
Proof. Define a function F : Rn × [t0,+∞) → Rn by

F(x, t) = ∫_{t0}^t f(x, s) dg(s)    (5.28)

for all (x, t) ∈ Rn × [t0,+∞). Since f satisfies conditions (A2), (A3) and (A4) and g satisfies
condition (A1), by Theorem 3.2, F ∈ F(Ω, h), where the function h : [t0,+∞) → R is given
by

h(t) = ∫_{t0}^t (M(s) + L(s)) dg(s).

By Remark 3.3, h is left-continuous on (t0,+∞). Also, by Theorem 3.8 and the hypotheses
of the present theorem, it is not difficult to see that U : [t0,+∞) × Rn → R satisfies all the
hypotheses of Theorem 5.4. Hence the generalized ODE (5.1), with F given by (5.28), is
uniformly bounded.

Again by Theorem 3.8, it follows that the measure differential equation (5.27) is also
uniformly bounded, which is the desired result.
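A measure differential equation of the form (5.27) can be explored numerically with left Euler-Stieltjes sums. In the sketch below (an illustration of ours; the integrator g, the right-hand side f and the step scheme are all assumptions, not constructions from the thesis), g combines a continuous drift with unit jumps just past each positive integer, so each jump acts as an impulse x ↦ x + f(x)·1, and the resulting solution stays bounded:

```python
import math

def f(x):
    return -0.5 * x   # illustrative right-hand side; damping both between and at jumps

def g(s):
    """Nondecreasing, left-continuous integrator: drift s plus a unit jump
    immediately after each positive integer."""
    return s + max(math.ceil(s) - 1, 0)

def solve_measure_de(x0, t_end, dt=1e-3):
    """Left Euler-Stieltjes sums for x(t) = x0 + int_0^t f(x(s)) dg(s):
    each step advances x by f(x) times the increment of g over the step."""
    x, traj = x0, [x0]
    for k in range(int(round(t_end / dt))):
        t, t_next = k * dt, (k + 1) * dt
        x += (g(t_next) - g(t)) * f(x)   # dg = dt off the jumps, dt + 1 across a jump
        traj.append(x)
    return traj

traj = solve_measure_de(4.0, 10.0)
print(max(abs(x) for x in traj))   # never exceeds |x0| = 4.0: the solution stays bounded
```

Because the increments of g telescope, the scheme automatically applies each impulse exactly once, regardless of where the grid falls relative to the jump times.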
Finally, we present the last result of this section, which ensures that the measure
differential equation (5.27) is uniformly ultimately bounded.
Theorem 5.8. Assume f : Rn × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4)
and the function g : [t0,+∞) → R satisfies condition (A1). Let U : [t0,+∞) × Rn → R be
a function such that, for each left-continuous function z : [α, β] → Rn on (α, β], the function
[α, β] ∋ t ↦ U(t, z(t)) is left-continuous on (α, β] and satisfies condition (i) from Theorem
5.7. Moreover, suppose U satisfies the following conditions:
(U1) For every x, y : [α, β] → Rn, [α, β] ⊂ [t0,+∞), of bounded variation in [α, β], we
have
|U(t, x(t)) − U(t, y(t)) − U(s, x(s)) + U(s, y(s))| ≤ (∫_s^t K(τ) du(τ)) sup_{ξ∈[α,β]} ‖x(ξ) − y(ξ)‖,
for every α ≤ s < t ≤ β, where u : [t0,+∞) → R is a nondecreasing and left-
continuous function and K : [t0,+∞) → R is a locally Kurzweil-Henstock-Stieltjes
integrable function with respect to u.
(U2) There exists a continuous function φ : Rn → R, with φ(0) = 0 and φ(x) > 0, x 6= 0,
such that for every solution z : [s0,+∞) → Rn, s0 ≥ t0, of the measure differential
equation (5.27), we have
U(s, z(s)) − U(t, z(t)) ≤ (s − t)(−φ(z(t))),
for every s0 ≤ t < s < +∞.
Then, the measure differential equation (5.27) is uniformly ultimately bounded.
Proof. Define a function F : Rn × [t0,+∞) → Rn by

F(x, t) = ∫_{t0}^t f(x, s) dg(s),    (5.29)

for all (x, t) ∈ Rn × [t0,+∞). Since f satisfies conditions (A2), (A3) and (A4) and g satisfies
condition (A1), by Theorem 3.2, F ∈ F(Ω, h), where the function h : [t0,+∞) → R is given
by

h(t) = ∫_{t0}^t (M(s) + L(s)) dg(s).
Notice that the function h : [t0,+∞)→ R is left-continuous by Remark 3.3.
By Theorem 3.8, U satisfies hypothesis (i) from Theorem 5.4. Also, defining the function
h1 : [t0,+∞) → R by

h1(t) = ∫_{t0}^t K(τ) du(τ),
for every t ∈ [t0,+∞), it follows that h1 is a nondecreasing and left-continuous function.
Moreover, by (U1), U satisfies the condition

|U(t, x(t)) − U(t, y(t)) − U(s, x(s)) + U(s, y(s))| ≤ (h1(t) − h1(s)) sup_{ξ∈[α,β]} ‖x(ξ) − y(ξ)‖,

for every α ≤ s < t ≤ β and every x, y : [α, β] → Rn, [α, β] ⊂ [t0,+∞), of bounded
variation in [α, β].
Also, by Theorem 3.8 and hypothesis (U2), it is clear that U satisfies hypothesis (V2)
from Theorem 5.5. Therefore, all the hypotheses of Theorem 5.5 are fulfilled and the
generalized ODE (5.1) is uniformly ultimately bounded. By Theorem 3.8, the measure
differential equation (5.27) is also uniformly ultimately bounded, which is the desired result.
5.3 Boundedness of solutions of dynamic equations on time scales
In this section, our goal is to prove results concerning the boundedness of solutions of
dynamic equations on time scales.
Consider the dynamic equation on time scales given by

x^Δ(t) = f(x(t), t),    (5.30)

where f : Rn × [t0,+∞)T → Rn satisfies the following conditions:
(B1) The Kurzweil-Henstock Δ-integral ∫_{t1}^{t2} f(y(t), t) Δt exists for all y ∈ G([t0,+∞)T, Rn)
and all t1, t2 ∈ [t0,+∞)T.

(B2) There exists a locally Kurzweil-Henstock Δ-integrable function M : [t0,+∞)T → R
such that

‖∫_{t1}^{t2} f(y(t), t) Δt‖ ≤ ∫_{t1}^{t2} M(t) Δt,

for all y ∈ G([t0,+∞)T, Rn) and all t1, t2 ∈ [t0,+∞)T, t1 ≤ t2.

(B3) There exists a locally Kurzweil-Henstock Δ-integrable function L : [t0,+∞)T → R such
that

‖∫_{t1}^{t2} [f(y(t), t) − f(w(t), t)] Δt‖ ≤ ‖y − w‖_{[t0,+∞)T} ∫_{t1}^{t2} L(t) Δt,

for all y, w ∈ G0([t0,+∞)T, Rn) and all t1, t2 ∈ [t0,+∞)T, t1 ≤ t2.
Moreover, assume that for every (z0, s0) ∈ Rn×[t0,+∞)T, there exists a unique (maximal)
solution x : [s0,+∞)T → Rn of the dynamic equation on time scales (5.30) with x(s0) = z0.
The existence of such a solution is ensured by Theorem 4.34.
In what follows, for every (z0, s0) ∈ Rn × [t0,+∞)T, we denote by x(s, s0, z0) the unique
(maximal) solution of the dynamic equation on time scales (5.30) with x(s0) = z0 defined on
[s0,+∞)T.
In the sequel, we introduce the concepts concerning the boundedness of the solutions of
the dynamic equation on time scales (5.30).
Definition 5.9. Let T be a time scale such that supT = +∞. We say that the dynamic
equation on time scales (5.30) is
• Uniformly bounded: if for every α > 0, there exists M = M(α) > 0 such that, for
all s0 ∈ [t0,+∞)T and all z0 ∈ Rn, where ‖z0‖ < α, we have
‖x(s, s0, z0)‖ < M, for all s ∈ [s0,+∞)T,
where x is the maximal solution of (5.30) with x(s0) = z0.
• Quasi-uniformly ultimately bounded: if there exists a B > 0 such that for every
α > 0, there exists a T = T (α) > 0, such that for all s0 ∈ [t0,+∞)T and for all z0 ∈ Rn,
where ‖z0‖ < α, we have:
‖x(s, s0, z0)‖ < B, for all s ∈ [s0 + T,+∞) ∩ T,
where x is the maximal solution of (5.30) with x(s0) = z0.
• Uniformly ultimately bounded: if it is uniformly bounded and quasi-uniformly
ultimately bounded.
In what follows, we will prove a result which ensures that the dynamic equation on time
scales (5.30) is uniformly bounded.
Theorem 5.10. Let T be a time scale such that sup T = +∞ and let [t0,+∞)T be a time
scale interval. Suppose f : Rn × [t0,+∞)T → Rn satisfies conditions (B1), (B2) and (B3),
and let U : [t0,+∞)T × Rn → R be a function such that, for each left-continuous function
z : [α, β]T → Rn on (α, β]T, the function [α, β]T ∋ t ↦ U(t, z(t)) is left-continuous on
(α, β]T. Moreover, suppose the following conditions concerning U are satisfied:
(i) There are two monotone increasing functions p, b : R+ → R+ such that p(0) = b(0) = 0,

lim_{s→+∞} b(s) = +∞

and

b(‖z‖) ≤ U(t, z) ≤ p(‖z‖),

for every pair (t, z) ∈ [t0,+∞)T × Rn.
(ii) For every solution z : [s0,+∞) ∩ T → Rn, s0 ≥ t0, of the dynamic equation on time
scales (5.30), we have
U(s, z(s))− U(t, z(t)) ≤ 0,
for every s, t ∈ [s0,+∞) ∩ T with t ≤ s.
Then, the dynamic equation on time scales (5.30) is uniformly bounded.
Proof. Since f satisfies conditions (B1), (B2) and (B3), by Theorem 4.26, f* satisfies
conditions (A2), (A3) and (A4). Define a function U* : [t0,+∞) × Rn → R by

U*(t, x) = U(t*, x), t ∈ [t0,+∞), x ∈ Rn.

Then, by hypothesis (i), there are two monotone increasing functions p, b : R+ → R+ such
that p(0) = b(0) = 0,

lim_{s→+∞} b(s) = +∞

and

b(‖z‖) ≤ U*(t, z) = U(t*, z) ≤ p(‖z‖),

for every pair (t, z) ∈ [t0,+∞) × Rn, which implies that U* satisfies hypothesis (i) from
Theorem 5.7.
Now, let y : [s0,+∞) → Rn, s0 ≥ t0, be a solution of the measure differential equation
(5.27). By Corollary 3.18, y must have the form y = z*, where z : [s0,+∞) ∩ T → Rn is a
solution of the dynamic equation on time scales (5.30). Hence, for each s0 ≤ t < s < +∞,
we have

U*(s, y(s)) − U*(t, y(t)) = U*(s, z*(s)) − U*(t, z*(t)) = U(s*, z*(s)) − U(t*, z*(t)) = U(s*, z(s*)) − U(t*, z(t*)) ≤ 0,

by hypothesis (ii). Therefore all the hypotheses of Theorem 5.7 are satisfied, and the
measure differential equation (5.27) is uniformly bounded. By Corollary 3.18, the dynamic
equation on time scales (5.30) is uniformly bounded, which is the desired result.
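On the concrete time scale T = hZ the abstraction becomes transparent: the Δ-derivative is the forward difference quotient, so the dynamic equation reduces to a recurrence. The sketch below (our illustration; the choice f(x, t) = −x and the step h = 0.5 are assumptions, not taken from the thesis) checks uniform boundedness of x^Δ = −x on T = 0.5·Z, where the bound M(α) = α works independently of the starting point:

```python
def dynamic_solution(x0, n_steps, h=0.5):
    """On T = h*Z, x^Delta(t) = (x(t + h) - x(t)) / h, so the dynamic equation
    x^Delta = -x is the recurrence x(t + h) = (1 - h) * x(t)."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append((1.0 - h) * xs[-1])
    return xs

alpha = 5.0
# the same bound M = alpha works for every initial time s0 in T (uniformity),
# because the recurrence is autonomous and |1 - h| < 1
for x0 in (-4.9, 1.0, 4.9):
    assert all(abs(x) < alpha for x in dynamic_solution(x0, 200))
print(dynamic_solution(4.0, 2))
```

For h = 1 this is the classical difference equation x(n + 1) = 0, and as h → 0 it recovers the continuous ODE; the time-scale formulation treats all of these cases at once.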
Finally, we present the last result of this chapter, which ensures that the dynamic
equation on time scales (5.30) is uniformly ultimately bounded.
Theorem 5.11. Let T be a time scale such that sup T = +∞, t0 ∈ T and t0 ≥ 0. Suppose
f : Rn × [t0,+∞)T → Rn satisfies conditions (B1), (B2) and (B3), and let U : [t0,+∞)T × Rn →
R be a function such that, for each left-continuous function z : [α, β]T → Rn on (α, β]T, the
function [α, β]T ∋ t ↦ U(t, z(t)) is left-continuous on (α, β]T and satisfies condition (i) from
Theorem 5.10. Moreover, suppose U satisfies the following conditions:
(I) For all x, y, z, w ∈ Rn and all α, β ∈ [t0,+∞)T with α ≤ β, we have

|U(β, x) − U(β, y) − U(α, z) + U(α, w)| ≤ (∫_α^β K(τ) Δτ) max{‖x − y‖, ‖z − w‖},

where K : [t0,+∞)T → R is a locally Kurzweil-Henstock Δ-integrable function.
(II) There exists a continuous function φ : Rn → R, with φ(0) = 0 and φ(x) > 0 for x ≠ 0,
such that for every solution z : [s0,+∞) ∩ T → Rn, s0 ≥ t0, of the dynamic equation
on time scales (5.30), we have

U(s*, z(s*)) − U(t*, z(t*)) ≤ (s − t)*(−φ(z(t*))),

for every t, s ∈ [s0,+∞) ∩ T with t ≤ s.
Then, the dynamic equation on time scales (5.30) is uniformly ultimately bounded.
Proof. Since f satisfies conditions (B1), (B2) and (B3), by Theorem 4.26, f* satisfies
conditions (A2), (A3) and (A4), and by Lemma 3.12, g is a nondecreasing and left-continuous
function.

Also, define the functional U* : [t0,+∞) × Rn → R by

U*(t, x) = U(t*, x),

for every pair (t, x) ∈ [t0,+∞) × Rn.
Since K is a locally Kurzweil-Henstock Δ-integrable function on [t0,+∞)T, it follows from
Theorem 3.13 that the function K* : [t0,+∞) → R is locally Kurzweil-Henstock integrable
with respect to the nondecreasing function u : T* → R given by u(t) = t*, and

∫_α^β K(τ) Δτ = ∫_α^β K*(τ) du(τ),

for all α, β ∈ [t0,+∞)T. Thus, for all x, y : [υ, γ] → Rn, [υ, γ] ⊂ [t0,+∞), of bounded
variation in [υ, γ] and all υ ≤ s < t ≤ γ, we have
|U*(t, x(t)) − U*(t, y(t)) − U*(s, x(s)) + U*(s, y(s))|
= |U(t*, x(t)) − U(t*, y(t)) − U(s*, x(s)) + U(s*, y(s))|
≤ (∫_{s*}^{t*} K(τ) Δτ) max{‖x(t) − y(t)‖, ‖x(s) − y(s)‖}
= (∫_{s*}^{t*} K*(τ) du(τ)) max{‖x(t) − y(t)‖, ‖x(s) − y(s)‖}
≤ (∫_{s*}^{t*} K*(τ) du(τ)) sup_{ξ∈[υ,γ]} ‖x(ξ) − y(ξ)‖
= (∫_s^t K*(τ) du(τ)) sup_{ξ∈[υ,γ]} ‖x(ξ) − y(ξ)‖,

where Theorem 3.14 is used to establish the last equality. Thus, hypothesis (U1) from
Theorem 5.8 is fulfilled.
Now, let y : [s0,+∞) → Rn, s0 ≥ t0, be a solution of the measure differential equation
(5.27). By Corollary 3.18, y must have the form y = z*, where z : [s0,+∞) ∩ T → Rn is a
solution of the dynamic equation on time scales (5.30). Hence

U*(s, y(s)) − U*(t, y(t)) = U*(s, z*(s)) − U*(t, z*(t)) = U(s*, z*(s)) − U(t*, z*(t)) = U(s*, z(s*)) − U(t*, z(t*))
≤ (s − t)*(−φ(z(t*))) = (s − t)*(−φ(y(t))) ≤ (s − t)(−φ(y(t))),

for every s0 ≤ t < s < +∞. Thus, condition (U2) from Theorem 5.8 is fulfilled. Therefore,
all the hypotheses of Theorem 5.8 are satisfied and the measure differential equation
(5.27) is uniformly ultimately bounded. By Corollary 3.18, it follows that the dynamic
equation on time scales (5.30) is uniformly ultimately bounded, which is the desired result.
Chapter 6

Lyapunov stability
In this chapter, our goal is to investigate Lyapunov-type stability results for generalized
ODEs and, then, to obtain Lyapunov-type results for measure differential equations. Using
relations between the solutions of measure differential equations and the solutions of dynamic
equations on time scales, we also present Lyapunov-type results for these last equations.
The results presented in this chapter are new and they are contained in the paper [18].
6.1 Lyapunov stability for generalized ODEs
In this section, we will present Lyapunov-type stability results for generalized ODEs.
Let us assume that X is a Banach space with norm ‖·‖ and set Ω = Bc × [t0,+∞),
where Bc = {x ∈ X : ‖x‖ < c}, c > 0, and t0 ≥ 0.

Consider the generalized ODE given by

dx/dτ = DF(x, t).    (6.1)

We assume that F ∈ F(Ω, h), where the function h is nondecreasing and left-continuous,
and F(0, t) − F(0, s) = 0 for t, s ≥ t0. Then, for every [a, b] ⊂ [t0,+∞), we have

∫_a^b DF(0, t) = F(0, b) − F(0, a) = 0,

which implies that x ≡ 0 is a solution of the generalized ODE (6.1) on [t0,+∞).
We also assume that Ω = ΩF, that is, Ω = {(x, t) ∈ Ω : x + F(x, t+) − F(x, t) ∈ Bc}.
Then, by Theorems 4.8 and 4.9, for every (x0, s0) ∈ Ω, there exists a unique maximal
solution x : [s0, ω(s0, x0)) → X of the generalized ODE (6.1) such that x(s0) = x0. Here,
we denote by x(t) = x(t, s0, x0), t ∈ [s0, ω(s0, x0)), the unique maximal solution of the
generalized ODE (6.1) with x(s0) = x0.
Remark 6.1. To simplify the notation, when it is clear from the context, we write ω
instead of ω(s0, x0).
The following Lyapunov-type stability concepts of the trivial solution of a generalized
ODE extend the concepts presented in [44, Definition 4.1].
Definition 6.2. The trivial solution x ≡ 0 of the generalized ODE (6.1) is said to be
• Stable, if for every s0 ≥ t0, ε > 0, there exists δ = δ(ε, s0) > 0 such that if x0 ∈ Bc
with
‖x0‖ < δ,
then
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε,
for all t ∈ [s0, ω), where x is the maximal solution of (6.1) with x(s0) = x0.
• Uniformly stable, if it is stable with δ independent of s0.
• Uniformly asymptotically stable, if there exists δ0 > 0 and, for every ε > 0, there
exists T = T(ε) ≥ 0 such that if s0 ≥ t0 and x0 ∈ Bc with
‖x0‖ < δ0,
then
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε,
for all t ∈ [s0, ω)∩[s0+T,+∞), where x is the maximal solution of (6.1) with x(s0) = x0.
In the sequel, we present a concept of Lyapunov functional for generalized ODEs.
Definition 6.3. We say that V : [t0,+∞)×Bρ → R is a Lyapunov functional with respect
to the generalized ODE (6.1), 0 < ρ < c, if the following conditions are satisfied:
(i) V : [t0,+∞)×Bρ → R is left-continuous with respect to the first variable on (t0,+∞),
for all x ∈ Bρ;
(ii) There exists a continuous increasing function b : R+ → R+ satisfying b(0) = 0 (we say
that such a function is of Hahn class) such that
V (t, x) ≥ b(‖x‖),
for every (t, x) ∈ [t0,+∞)×Bρ;
(iii) The function [s0, ω) ∋ t ↦ V(t, x(t)) is nonincreasing along every maximal solution
x(t) = x(t, s0, x0), (x0, s0) ∈ Bρ × [t0,+∞), of the generalized ODE (6.1) with x(s0) =
x0.
In what follows, we present a result which ensures that the trivial solution of the generalized ODE (6.1) is uniformly stable. Such a result does not require any Lipschitz-type condition on the Lyapunov functional, thus improving the results found in the literature. See, for instance, [3, 22, 24, 38].
Theorem 6.4. Let F ∈ F(Ω, h), where h : [t0,+∞) → R is a left-continuous and nonde-
creasing function, and let V : [t0,+∞) × Bρ → R, 0 < ρ < c, be a Lyapunov functional.
Assume that V satisfies the following condition
(H1) There exists a continuous increasing function a : R+ → R+ satisfying a(0) = 0, such
that for every solution x : I → Bρ, I ⊂ [t0,+∞), of the generalized ODE (6.1), we have
V (t, x(t)) ≤ a(‖x(t)‖)
for all t ∈ I.
Then the trivial solution x ≡ 0 of the generalized ODE (6.1) is uniformly stable.
Proof. Since V is a Lyapunov functional, there exists a function of Hahn class b : R+ → R+
such that
V (t, x) ≥ b(‖x‖),
for every (t, x) ∈ [t0,+∞)×Bρ.
Let s0 ≥ t0 and ε > 0. Since a(0) = 0, a is increasing and a|[0,ε] is uniformly continuous,
there exists δ = δ(ε), 0 < δ < ε, such that a(δ) < b(ε).
Suppose x0 ∈ Bρ and the maximal solution x(t) = x(t, s0, x0) of the generalized ODE
(6.1) satisfies
‖x(s0)‖ = ‖x0‖ < δ.
We want to prove that
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε,
for all t ∈ [s0, ω).
Chapter 6 — Lyapunov stability
Since V is a Lyapunov functional, by (iii) from Definition 6.3, we have
V (t, x(t)) ≤ V (s0, x(s0)), for all t ∈ [s0, ω).
Therefore, for every t ∈ [s0, ω),
b(‖x(t)‖) ≤ V (t, x(t)) ≤ V (s0, x(s0)) ≤ a(‖x(s0)‖) < a(δ) < b(ε).
Since b is an increasing function, we obtain
‖x(t)‖ < ε,
for all t ∈ [s0, ω), which completes the proof.
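The δ–ε mechanism in the proof above can be illustrated numerically. In the sketch below, the Hahn-class bounds a(r) = 2r and b(r) = r are assumed choices (they do not come from the text), and δ = δ(ε) is located by bisection so that a(δ) < b(ε), exactly as in the second paragraph of the proof.

```python
# Illustration of the delta-epsilon choice in the proof of Theorem 6.4.
# Hahn-class bounds are assumed for this sketch: a(r) = 2r, b(r) = r.
def a(r): return 2.0 * r
def b(r): return r

def choose_delta(eps, steps=60):
    """Find delta in (0, eps) with a(delta) < b(eps) by bisection."""
    lo, hi = 0.0, eps
    for _ in range(steps):
        mid = (lo + hi) / 2
        if a(mid) < b(eps):
            lo = mid
        else:
            hi = mid
    return lo

eps = 0.1
delta = choose_delta(eps)
assert 0 < delta < eps and a(delta) < b(eps)
# Along any solution with ||x0|| < delta, the chain
#   b(||x(t)||) <= V(t, x(t)) <= V(s0, x0) <= a(||x0||) < b(eps)
# forces ||x(t)|| < eps, since b is increasing.
x0_norm = 0.9 * delta
assert a(x0_norm) < b(eps)
```

Any other pair of continuous increasing functions vanishing at 0 would serve equally well; only the inequality a(δ) < b(ε) matters.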
The next result establishes sufficient conditions so that the trivial solution of the gener-
alized ODE (6.1) is uniformly asymptotically stable.
Theorem 6.5. Suppose F ∈ F(Ω, h), where h : [t0,+∞) → R is left-continuous and non-
decreasing, V : [t0,+∞) × Bρ → R, 0 < ρ < c, satisfies conditions (i), (ii) from Defini-
tion 6.3 and (H1) from Theorem 6.4. Moreover, suppose there exists a continuous function
Φ : X → R satisfying Φ(0) = 0 and Φ(x) > 0 for x ≠ 0, such that for every maximal solution
x(t) = x(t, s0, x0), (x0, s0) ∈ Bρ × [t0,+∞), of the generalized ODE (6.1),
V(s, x(s)) − V(t, x(t)) ≤ (s − t)(−Φ(x(t))), (6.2)
for all t, s ∈ [s0, ω), with t ≤ s. Then the trivial solution x ≡ 0 of the generalized ODE (6.1)
is uniformly asymptotically stable.
Proof. From the inequality (6.2), it is clear that the function [s0, ω) ∋ t ↦ V(t, x(t)) is nonincreasing along every maximal solution x(t) = x(t, s0, x0), (x0, s0) ∈ Bρ × [t0,+∞), of the generalized
ODE (6.1). Since all the hypotheses from Theorem 6.4 are satisfied, the trivial solution x ≡ 0
of (6.1) is uniformly stable.
Let δ0 := ρ/2 and ε > 0. By the uniform stability of the trivial solution x ≡ 0 of the generalized ODE (6.1), there exists δ = δ(ε) > 0 (we can take δ < ρ) such that, if τ0 ≥ t0 and y0 ∈ Bρ
satisfies
‖y0‖ < δ,
then
‖x(t, τ0, y0)‖ < ε, for all t ∈ [τ0, ω(τ0, y0)). (6.3)
Let
N := sup{−Φ(ϑ) : δ(ε) ≤ ‖ϑ‖ < ρ} < 0
and
T(ε) := −a(δ0)/N > 0.
Suppose s0 ≥ t0 and x0 ∈ Bρ and consider the maximal solution x(·) = x(·, s0, x0) : [s0, ω(s0, x0)) → Bρ of (6.1) such that
‖x(s0)‖ = ‖x0‖ < δ0. (6.4)
We want to prove that
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε, for all t ∈ [s0, ω(s0, x0)) ∩ [s0 + T (ε),+∞). (6.5)
At first, assume that ω(s0, x0) < +∞. Then let us consider two cases.
Case 1. Suppose T (ε) ≥ ω(s0, x0)− s0, that is, ω(s0, x0) ≤ s0 + T (ε). Therefore,
[s0, ω(s0, x0)) ∩ [s0 + T (ε),+∞) = ∅.
Under these conditions, (6.5) holds trivially.
Case 2. Suppose T (ε) < ω(s0, x0)− s0, that is, s0 + T (ε) < ω(s0, x0). Therefore,
[s0, ω(s0, x0)) ∩ [s0 + T (ε),+∞) = [s0 + T (ε), ω(s0, x0)).
Now, let us prove that there exists t̄ ∈ [s0, s0 + T(ε)] such that ‖x(t̄, s0, x0)‖ = ‖x(t̄)‖ < δ(ε). Suppose the contrary, that is, ‖x(s, s0, x0)‖ = ‖x(s)‖ ≥ δ(ε), for every s ∈ [s0, s0 + T(ε)].
Thus,
δ(ε) ≤ ‖x(s)‖ < ρ, for all s ∈ [s0, s0 + T (ε)]. (6.6)
By (H1), (6.2), (6.4) and (6.6), we obtain
V(s0 + T(ε), x(s0 + T(ε))) ≤ V(s0, x(s0)) + T(ε)(−Φ(x(s0)))
≤ V(s0, x(s0)) + T(ε) sup_{s ∈ [s0, s0+T(ε)]} (−Φ(x(s)))
≤ a(‖x(s0)‖) + T(ε) sup{−Φ(ϑ) : δ(ε) ≤ ‖ϑ‖ < ρ} = a(‖x(s0)‖) + T(ε)N
< a(δ0) + T(ε)N = a(δ0) + (−a(δ0)/N)N = 0,
that is,
V (s0 + T (ε), x(s0 + T (ε))) < 0,
which contradicts the following inequality
V (s0 + T (ε), x(s0 + T (ε))) ≥ b(‖x(s0 + T (ε))‖) ≥ 0.
Then we conclude that there exists t̄ ∈ [s0, s0 + T(ε)] such that ‖x(t̄, s0, x0)‖ = ‖x(t̄)‖ < δ(ε).
Therefore, by (6.3) (with τ0 = t̄ and y0 = x(t̄) = x(t̄, s0, x0) =: y), we obtain
‖x(t, t̄, y)‖ < ε, for all t ∈ [t̄, ω(t̄, y)). (6.7)
Since
x(·, t̄, y) : [t̄, ω(t̄, y)) → Bρ
and the restriction
x|[t̄, ω(s0, x0))(·, s0, x0) : [t̄, ω(s0, x0)) → Bρ
are solutions of the generalized ODE
dx/dτ = DF(x, t), with x(t̄) = y = x(t̄, s0, x0),
by Lemma 4.7, it follows that
x(t, t̄, y) = x|[t̄, ω(s0, x0))(t, s0, x0), for all t ∈ [t̄, ω(t̄, y)) ∩ [t̄, ω(s0, x0)),
that is,
x(t, t̄, y) = x(t, s0, x0), for all t ∈ [t̄, ω(t̄, y)) ∩ [t̄, ω(s0, x0)). (6.8)
Now, we want to prove that ω(t̄, y) = ω(s0, x0). Without loss of generality, suppose ω(t̄, y) < ω(s0, x0). Thus, by (6.8), we have
x(t, t̄, y) = x(t, s0, x0), for all t ∈ [t̄, ω(t̄, y)). (6.9)
On the other hand, since ω(t̄, y) < +∞ (because ω(t̄, y) < ω(s0, x0) and, in this case, ω(s0, x0) < +∞), by Proposition 4.12, the limit lim_{t→ω(t̄,y)−} x(t, t̄, y) exists. Thus, by (6.9) and using the fact that x(·, s0, x0) is left-continuous at the point ω(t̄, y), we obtain
x(ω(t̄, y), s0, x0) = lim_{t→ω(t̄,y)−} x(t, s0, x0) = lim_{t→ω(t̄,y)−} x(t, t̄, y),
that is,
lim_{t→ω(t̄,y)−} x(t, t̄, y) = x(ω(t̄, y), s0, x0).
Therefore, by Lemma 4.1, the function z1 : [t̄, ω(s0, x0)) → X given by the relation
z1(t) = x(t, t̄, y), for t ∈ [t̄, ω(t̄, y)), and z1(t) = x(t, s0, x0), for t ∈ [ω(t̄, y), ω(s0, x0)),
is a solution of the generalized ODE (6.1) with z1(t̄) = x(t̄, t̄, y) = y. Notice that z1 is a proper right prolongation of x(·, t̄, y) : [t̄, ω(t̄, y)) → Bρ, which contradicts the fact that x(·, t̄, y) : [t̄, ω(t̄, y)) → Bρ is the maximal solution of the generalized ODE (6.1) with x(t̄) = y.
Analogously, if ω(s0, x0) < ω(t̄, y) (note that ω(s0, x0) < +∞ in the case we are considering), then the function z2 : [s0, ω(t̄, y)) → X given by the relation
z2(t) = x(t, s0, x0), for t ∈ [s0, ω(s0, x0)), and z2(t) = x(t, t̄, y), for t ∈ [ω(s0, x0), ω(t̄, y)),
is a proper right prolongation of x(·, s0, x0) : [s0, ω(s0, x0)) → Bρ, which contradicts the fact that x(·, s0, x0) : [s0, ω(s0, x0)) → Bρ is the maximal solution of the generalized ODE (6.1) with x(s0) = x0. Then,
ω(t̄, y) = ω(s0, x0). (6.10)
Thus, by (6.7), (6.8) and (6.10), we obtain
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε, for all t ∈ [t̄, ω(s0, x0)).
In particular,
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε, for all t ∈ [s0, ω(s0, x0)) ∩ [s0 + T(ε),+∞) = [s0 + T(ε), ω(s0, x0)),
since [s0 + T(ε), ω(s0, x0)) ⊂ [t̄, ω(s0, x0)) (because t̄ ≤ s0 + T(ε)). Hence (6.5) holds in Case 2.
Now, let us consider ω(s0, x0) = +∞. Then, we have
[s0, ω(s0, x0)) ∩ [s0 + T(ε),+∞) = [s0 + T(ε),+∞).
Since s0 + T(ε) < +∞, using the same arguments as in the first part of the proof of Case 2, there exists t̄ ∈ [s0, s0 + T(ε)] such that ‖x(t̄, s0, x0)‖ = ‖x(t̄)‖ < δ(ε).
Then, by (6.3) (with τ0 = t̄ and y0 = x(t̄) = x(t̄, s0, x0) =: y), we have
‖x(t, t̄, y)‖ < ε, for all t ∈ [t̄, ω(t̄, y)). (6.11)
Since
x(·, t̄, y) : [t̄, ω(t̄, y)) → Bρ
and
x|[t̄,+∞)(·, s0, x0) : [t̄,+∞) → Bρ
are solutions of the generalized ODE
dx/dτ = DF(x, t), with x(t̄) = y = x(t̄, s0, x0),
by Lemma 4.7, we obtain
x(t, t̄, y) = x|[t̄,+∞)(t, s0, x0), for all t ∈ [t̄, ω(t̄, y)) ∩ [t̄,+∞), (6.12)
that is,
x(t, t̄, y) = x(t, s0, x0), for all t ∈ [t̄, ω(t̄, y)) ∩ [t̄,+∞). (6.13)
Finally, we will prove that ω(t̄, y) = +∞. Suppose the contrary, that is, ω(t̄, y) < +∞. Then, using the same argument as before for z1, the function z3 : [t̄,+∞) → Bρ, given by the relation
z3(t) = x(t, t̄, y), for t ∈ [t̄, ω(t̄, y)), and z3(t) = x(t, s0, x0), for t ∈ [ω(t̄, y),+∞),
is a proper right prolongation of x(·, t̄, y) : [t̄, ω(t̄, y)) → Bρ, which contradicts the fact that x(·, t̄, y) : [t̄, ω(t̄, y)) → Bρ is the maximal solution of the generalized ODE (6.1) with x(t̄) = y. Therefore,
ω(t̄, y) = +∞. (6.14)
Thus, by (6.11), (6.13) and (6.14), we get
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε, for all t ∈ [t̄,+∞).
In particular,
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε, for all t ∈ [s0,+∞) ∩ [s0 + T(ε),+∞) = [s0 + T(ε),+∞),
because [s0 + T(ε),+∞) ⊂ [t̄,+∞) (since t̄ ≤ s0 + T(ε)). The proof is then complete.
6.2 Lyapunov stability for measure differential equations
In this section, our goal is to prove stability results for measure differential equations using
Lyapunov functionals. Throughout this section, let us consider that Rn is the n-dimensional
Euclidean space with norm ‖ · ‖, and set
Bc = {x ∈ Rn : ‖x‖ < c},
with c > 0.
We consider the integral form of a measure differential equation, namely
x(t) = x(τ0) + ∫_{τ0}^{t} f(x(s), s) dg(s), t ≥ τ0, (6.15)
where τ0 ≥ t0, f : Bc × [t0,+∞)→ Rn and g : [t0,+∞)→ R.
Throughout this section, we assume that the function f : Bc × [t0,+∞) → Rn satisfies conditions (A2), (A3), (A4) and that g : [t0,+∞) → R satisfies condition (A1) (all these conditions are presented in Chapter 3).
We also assume that f(0, t) = 0, for every t ∈ [t0,+∞). This condition implies that x ≡ 0 is a solution of (6.15) on [t0,+∞) and that the function F, given by F(x, t) = ∫_{t0}^{t} f(x, s) dg(s), satisfies F(0, s) − F(0, t) = 0, for all t, s ∈ [t0,+∞). Notice that, under this condition, it
makes sense to prove the stability of the trivial solution of the measure differential equation
(6.15).
Moreover, we assume that x0 + f(x0, s0)∆+g(s0) ∈ Bc, for all s0 ∈ [t0,+∞) and x0 ∈ Bc.
This condition ensures us that for all (x0, s0) ∈ Bc× [t0,+∞), there exists a unique maximal
solution x : [s0, ω) → Rn of the measure differential equation (6.15) with x(s0) = x0. Note that the existence of such a solution is guaranteed by Theorems 4.8 and 4.9. In what follows, for all (x0, s0) ∈ Bc × [t0,+∞), we denote by x(t, s0, x0), t ∈ [s0, ω(s0, x0)), the unique maximal solution of the measure differential equation (6.15) with x(s0) = x0.
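A sketch (not from the thesis) of an Euler-type scheme for the integral equation (6.15), using left-endpoint Riemann–Stieltjes sums. The choices f(x, s) = −x and a nondecreasing, left-continuous g with a unit jump at s = 1 are illustrative assumptions; they exhibit the jump condition x(t+) = x(t) + f(x(t), t)∆+g(t) discussed above.

```python
# Left-point Euler-Stieltjes scheme for x(t) = x0 + \int f(x(s), s) dg(s).
# Assumed data: f(x, s) = -x and g left-continuous with Delta^+ g(1) = 1.
def g(s):
    return s if s <= 1.0 else s + 1.0   # unit jump at s = 1

def f(x, s):
    return -x

def solve_measure_de(x0, t0, t1, n):
    """x_{k+1} = x_k + f(x_k, t_k) * (g(t_{k+1}) - g(t_k)) on a uniform grid."""
    h = (t1 - t0) / n
    x = x0
    path = [(t0, x)]
    for k in range(n):
        t = t0 + k * h
        x = x + f(x, t) * (g(t0 + (k + 1) * h) - g(t))
        path.append((t0 + (k + 1) * h, x))
    return path

path = solve_measure_de(1.0, 0.0, 2.0, 2000)
# Before the jump the solution follows x' = -x; across s = 1 it jumps by
# f(x(1), 1) * Delta^+ g(1) = -x(1), landing (approximately) at 0.
```

The left-endpoint sums mirror the left-continuity conventions used in this section; the concrete f, g and grid size are illustrative only.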
In the sequel, we present some concepts of stability of the trivial solution of the measure
differential equation (6.15).
Definition 6.6. The trivial solution x ≡ 0 of the measure differential equation (6.15) is said
to be
• Stable, if for every s0 ≥ t0 and ε > 0, there exists δ = δ(ε, s0) > 0 such that if x0 ∈ Bc
with
‖x0‖ < δ,
then
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε,
for all t ∈ [s0, ω(s0, x0)), where x is the maximal solution of (6.15) with x(s0) = x0.
• Uniformly stable, if it is stable and δ is independent of s0.
• Uniformly asymptotically stable, if there exists δ0 > 0 and for every ε > 0, there
exists T = T (ε) ≥ 0 such that if s0 ≥ t0 and x0 ∈ Bc with
‖x0‖ < δ0,
then
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε,
for all t ∈ [s0, ω(s0, x0)) ∩ [s0 + T,+∞), where x is the maximal solution of (6.15)
with x(s0) = x0.
In what follows, we introduce a concept of Lyapunov functional with respect to the
measure differential equation (6.15).
Definition 6.7. We say that U : [t0,+∞) × Bρ → R, 0 < ρ < c, is a Lyapunov functional
with respect to the measure differential equation (6.15), if the following conditions are satisfied
(I) U(·, x) : [t0,+∞)→ R is left-continuous on (t0,+∞) for all x ∈ Bρ;
(II) There exists a continuous increasing function b : R+ → R+ satisfying b(0) = 0 (we say
that such a function is of Hahn class) such that
U(t, x) ≥ b(‖x‖),
for every (t, x) ∈ [t0,+∞)×Bρ;
(III) The function [s0, ω) ∋ t ↦ U(t, x(t)) is nonincreasing along every maximal
solution x(t) = x(t, s0, x0), (x0, s0) ∈ Bρ×[t0,+∞), of the measure differential equation
(6.15).
The next result ensures us that the trivial solution x ≡ 0 of the measure differential
equation (6.15) is uniformly stable.
Theorem 6.8. Suppose f : Bc × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4),
g : [t0,+∞) → R satisfies condition (A1) and f(0, t) = 0 for every t ∈ [t0,+∞). Also, let
U : [t0,+∞) × Bρ → R, 0 < ρ < c, be a Lyapunov functional with respect to the measure
differential equation (6.15). Assume that U satisfies the condition
(H2) There exists a continuous increasing function a : R+ → R+ such that a(0) = 0 and for
all solutions x : I → Bρ, I ⊂ [t0,+∞) of (6.15), we have
U(t, x(t)) ≤ a(‖x(t)‖)
for all t ∈ I.
Then the trivial solution x ≡ 0 of the measure differential equation (6.15) is uniformly stable.
Proof. Define a function F : Bc × [t0,+∞) → Rn by
F(x, t) := ∫_{t0}^{t} f(x, s) dg(s) (6.16)
for all (x, t) ∈ Bc × [t0,+∞). Then, by Theorem 3.2, it follows that F ∈ F(Ω, h), where the
function h : [t0,+∞)→ R is given by
h(t) = ∫_{t0}^{t} (M(s) + L(s)) dg(s),
since the function f : Bc × [t0,+∞)→ Rn satisfies conditions (A2), (A3) and (A4) and the
function g : [t0,+∞)→ R satisfies condition (A1).
By Theorem 3.8, it is clear that U : [t0,+∞) × Bρ → R is a Lyapunov functional with
respect to the generalized ODE (6.1), where F is given by (6.16). Moreover, by condition
(H2), it is clear that U also satisfies condition (H1) from Theorem 6.4. Therefore, the trivial
solution x ≡ 0 of the generalized ODE (6.1) is uniformly stable. By Theorem 3.8, it follows
that the trivial solution x ≡ 0 of the measure differential equation (6.15) is also uniformly
stable, obtaining the desired result.
In the sequel, we present our last result of this section, which ensures that the trivial
solution x ≡ 0 of the measure differential equation (6.15) is uniformly asymptotically stable.
Theorem 6.9. Suppose f : Bc × [t0,+∞) → Rn satisfies conditions (A2), (A3) and (A4),
g : [t0,+∞) → R satisfies condition (A1) and f(0, t) = 0 for every t ∈ [t0,+∞). Suppose
U : [t0,+∞) × Bρ → R, 0 < ρ < c, satisfies conditions (I) and (II) from Definition 6.7
and condition (H2) from Theorem 6.8. Moreover, suppose there exists a continuous function
Φ : X → R satisfying Φ(0) = 0 and Φ(z) > 0 for z ≠ 0, such that for every maximal solution
x(t) = x(t, s0, x0), (x0, s0) ∈ Bρ × [t0,+∞), of the measure differential equation (6.15), we
have
U(s, x(s)) − U(t, x(t)) ≤ (s − t)(−Φ(x(t))), (6.17)
for all t, s ∈ [s0, ω) with t ≤ s. Then the trivial solution x ≡ 0 of (6.15) is uniformly
asymptotically stable.
Proof. Define a function F : Bc × [t0,+∞) → Rn by
F(x, t) = ∫_{t0}^{t} f(x, s) dg(s)
for all (x, t) ∈ Bc× [t0,+∞). Since f satisfies conditions (A2), (A3) and (A4) and g satisfies
condition (A1), by Theorem 3.2, F ∈ F(Ω, h), where the function h : [t0,+∞)→ R is given
by
h(t) = ∫_{t0}^{t} (M(s) + L(s)) dg(s).
By Theorem 3.8, it is easy to check that U : [t0,+∞)×Bρ → R satisfies all the hypotheses
of Theorem 6.5. Therefore, by Theorem 6.5, the trivial solution x ≡ 0 of the generalized ODE
(6.1) is uniformly asymptotically stable. Applying Theorem 3.8 again, the trivial solution
x ≡ 0 of the measure differential equation (6.15) is uniformly asymptotically stable and we
obtain the desired result.
6.3 Lyapunov stability for dynamic equations on time scales
In this section, our goal is to prove results concerning stability of solutions of dynamic
equations on time scales using Lyapunov functionals.
Consider the dynamic equation on time scales given by
x∆(t) = f(x∗, t), t ∈ [t0,+∞)T, (6.18)
where f : Bc × [t0,+∞)T → Rn. We remind the reader that x∗ denotes the extension of x defined in Chapter 3.
Throughout this section, we assume that the function f satisfies conditions (B1), (B2) and (B3) presented in Chapter 4.
From now on, let us assume that f(0, t) = 0 for every t ∈ [t0,+∞)T. This condition
implies that x ≡ 0 is a solution of the dynamic equation on time scales (6.18).
We also assume that for all (z0, τ0) ∈ Bc × [t0,+∞)T, we have z0 + f(z0, τ0)µ(τ0) ∈ Bc.
Then, by Theorem 4.27, for all (x0, s0) ∈ Bc × [t0,+∞)T, there exists a unique maximal
solution x : [s0, ω)T → Rn, ω ≤ +∞ of the dynamic equation on time scales (6.18) with
x(s0) = x0. Here, we will denote by x(t, s0, x0), t ∈ [s0, ω(s0, x0))T, the unique maximal
solution of the dynamic equation on time scales (6.18) with x(s0) = x0.
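Under the assumption T = N0 (a toy choice, not from the text), the ∆-derivative is the forward difference, so equation (6.18) reduces to the recursion x(t + 1) = x(t) + f(x(t), t)µ(t) with µ ≡ 1, and the condition z0 + f(z0, τ0)µ(τ0) ∈ Bc simply asks that each step stay inside Bc. A minimal sketch:

```python
# Dynamic equation on the time scale T = N_0 (assumed concrete choice):
#   x(t + 1) = x(t) + f(x(t), t) * mu(t),  mu(t) = 1.
def f(x, t):
    return -x / 2.0  # assumed right-hand side with f(0, t) = 0

def solve_on_integers(x0, steps):
    xs = [x0]
    for t in range(steps):
        xs.append(xs[-1] + f(xs[-1], t) * 1.0)  # mu(t) = 1 on T = N_0
    return xs

xs = solve_on_integers(1.0, 10)
# U(t, x) = |x| acts as a Lyapunov functional here: |x(t + 1)| = |x(t)| / 2,
# so the sequence of norms is nonincreasing, as in Definition 6.11 (iii).
assert all(abs(b) <= abs(a) for a, b in zip(xs, xs[1:]))
```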
In the sequel, let us present the concept of stability of the trivial solution of the dynamic equation on time scales (6.18).
Definition 6.10. Let T be a time scale such that supT = +∞. The trivial solution x ≡ 0
of the dynamic equation on time scales (6.18) is said to be
(i) Stable, if for every s0 ∈ T, with s0 ≥ t0 and ε > 0, there exists δ = δ(ε, s0) > 0 such
that if x0 ∈ Bc with
‖x0‖ < δ,
then
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε,
for all t ∈ [s0, ω(s0, x0))T, where x is the maximal solution of (6.18) with x(s0) = x0.
(ii) Uniformly stable, if it is stable with δ independent of s0.
(iii) Uniformly asymptotically stable, if there exists δ0 > 0 and for every ε > 0, there
exists T = T (ε) ≥ 0 such that if s0 ∈ T, s0 ≥ t0 and x0 ∈ Bc with
‖x0‖ < δ0,
then
‖x(t, s0, x0)‖ = ‖x(t)‖ < ε,
for all t ∈ [s0, ω(s0, x0)) ∩ [s0 + T,+∞) ∩ T, where x is the maximal solution of (6.18)
with x(s0) = x0.
In what follows, we will present the definition of a Lyapunov functional with respect to
the dynamic equation on time scales (6.18).
Definition 6.11. Let t0 ∈ T and c > 0. We say that U : [t0,+∞)T × Bρ → R, 0 < ρ < c,
is a Lyapunov functional with respect to the dynamic equation on time scales (6.18), if the
following conditions are satisfied
(i) U(·, x) : [t0,+∞)T → R is left-continuous on (t0,+∞)T for all x ∈ Bρ;
(ii) There exists a continuous increasing function b : R+ → R+ satisfying b(0) = 0 (we say
that such a function is of Hahn class) such that
U(t, x) ≥ b(‖x‖),
for every (t, x) ∈ [t0,+∞)T ×Bρ;
(iii) The function [τ0, ω)T ∋ t ↦ U(t, z(t)) is nonincreasing along every maximal solution z(t) = z(t, τ0, x0), (x0, τ0) ∈ Bρ × [t0,+∞)T, of the dynamic equation on time scales (6.18).
We remind the reader that the function g : [t0,+∞) → R given by g(t) = t∗, for all t ∈ [t0,+∞), is nondecreasing and left-continuous on (t0,+∞).
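For a concrete discrete time scale the map t ↦ t∗ can be computed directly. The sketch below assumes T = N0 and the usual definition t∗ = inf{s ∈ T : s ≥ t}, under which t∗ = ⌈t⌉; the assertions check the nondecreasing and left-continuity properties just quoted.

```python
# Sketch of the map g(t) = t* for the assumed time scale T = N_0, where
# t* = inf{ s in T : s >= t } = ceil(t).
import math

def star(t):
    """t* for T = N_0 (an assumed concrete choice of time scale)."""
    return math.ceil(t)

ts = [0.0, 0.2, 0.999, 1.0, 1.0001, 2.5]
vals = [star(t) for t in ts]
assert vals == sorted(vals)                # g(t) = t* is nondecreasing
assert star(1.0 - 1e-9) == star(1.0) == 1  # left-continuity at t = 1
```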
The following results ensure the uniform stability and the uniform asymptotic stability of the trivial solution of the dynamic equation on time scales (6.18) via Lyapunov functionals.
Theorem 6.12. Let T be a time scale such that supT = +∞ and [t0,+∞)T be a time
scale interval. Assume that f : Bc × [t0,+∞)T → Rn satisfies conditions (B1), (B2), (B3)
and f(0, t) = 0 for every t ∈ [t0,+∞)T. Let U : [t0,+∞)T × Bρ → R, 0 < ρ < c, be a
Lyapunov functional with respect to the dynamic equation on time scales (6.18). Assume
that U satisfies the following condition
(H3) There exists a continuous increasing function a : R+ → R+ such that a(0) = 0 and for
all (t, x) ∈ [t0,+∞)T ×Bc, we have
U(t, x) ≤ a(‖x‖). (6.19)
Then the trivial solution x ≡ 0 of the dynamic equation on time scales (6.18) is uniformly
stable.
Proof. Since f satisfies conditions (B1), (B2) and (B3), by Theorem 4.26, f ∗ satisfies con-
ditions (A2), (A3) and (A4). Also, since f(0, t) = 0, for every t ∈ [t0,+∞)T, clearly
f ∗(0, t) = 0, for every t ∈ [t0,+∞).
Define
U∗(t, x) := U(t∗, x),
for every (t, x) ∈ [t0,+∞) × Bρ. We claim that U∗ : [t0,+∞) × Bρ → R is a Lyapunov
functional with respect to the measure differential equation
y(t) = y(τ0) + ∫_{τ0}^{t} f∗(y(s), s) dg(s), for t, τ0 ∈ [t0,+∞), (6.20)
where g(t) = t∗. Indeed, by condition (i) from Definition 6.11, U(·, x) : [t0,+∞)T → R is left-continuous on (t0,+∞)T for all x ∈ Bρ. Since U∗(t, x) = U(t∗, x) = U(g(t), x) and g(t) = t∗ is left-continuous on (t0,+∞), it follows that U∗(·, x) : [t0,+∞) → R is left-continuous on (t0,+∞) for all x ∈ Bρ, that is, U∗ satisfies condition (I) from Definition 6.7.
On the other hand, by condition (ii) from Definition 6.11, there exists a continuous and
increasing function b : R+ → R+ satisfying b(0) = 0 such that
U(s, x) ≥ b(‖x‖),
for every (s, x) ∈ [t0,+∞)T ×Bρ. Then, for every (t, x) ∈ [t0,+∞)×Bρ, we obtain
U∗(t, x) = U(t∗, x) ≥ b(‖x‖),
that is, U∗ satisfies condition (II) from Definition 6.7.
Finally, we will prove that U∗ satisfies condition (III) from Definition 6.7. Indeed, let
(x0, s0) ∈ Bρ× [t0,+∞) and y = y(·, s0, x0) : [s0, ω)→ Bρ be the unique maximal solution of
the measure differential equation (6.20). Since y(t) ∈ Bρ ⊂ Bc, for t ∈ [s0, ω), by Corollary 4.20
(with N = Bρ), ω = +∞. Now, since y : [s0,+∞)→ Bρ is a solution of (6.20), we obtain
y(τ) = y(s0) + ∫_{s0}^{τ} f(y(s), s∗) dg(s), τ ∈ [s0,+∞),
where g(ξ) = ξ∗.
We consider two cases.
Case 1. Suppose s0 ∈ [t0,+∞)\T. Then g is constant on [s0, s0∗], which implies
ξ∗ = s0∗, for all ξ ∈ [s0, s0∗]. (6.21)
Therefore, ∫_{s0}^{τ} f(y(s), s∗) dg(s) = 0, for all τ ∈ [s0, s0∗], that is, y is constant on [s0, s0∗]. In particular,
y(τ) = y(s0∗), for all τ ∈ [s0, s0∗]. (6.22)
On the other hand, by Corollary 3.18, y|[s0∗,+∞) must have the form
y|[s0∗,+∞) = x∗, (6.23)
where x : [s0∗,+∞)T → Rn is a solution of the dynamic equation on time scales (6.18). Notice that x is also the maximal solution of the dynamic equation on time scales (6.18) through (x(s0∗), s0∗) and y(s0∗) = x∗(s0∗) = x(s0∗). Hence the following holds true:
• If s0 ≤ t < s ≤ s0∗, then, by (6.21), (6.22) and the fact that y(s0∗) = x(s0∗), we get
U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, y(s)) − U(t∗, y(t))
= U(s∗, x(s∗)) − U(t∗, x(t∗)) = 0.
• If s0 ≤ t ≤ s0∗ < s, then, by (6.23), (6.21) and (6.22), we have
U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, y(s)) − U(t∗, y(t))
= U(s∗, x∗(s)) − U(s0∗, y(s0∗))
= U(s∗, x(s∗)) − U(s0∗, x(s0∗))
= U(s∗, x(s∗)) − U(t∗, x(t∗)) ≤ 0.
• If s0∗ ≤ t < s, then, by (6.23), we obtain
U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, y(s)) − U(t∗, y(t))
= U(s∗, x∗(s)) − U(t∗, x∗(t))
= U(s∗, x(s∗)) − U(t∗, x(t∗)) ≤ 0.
Thus, U∗(s, y(s)) − U∗(t, y(t)) ≤ 0 for all s0 ≤ t < s < +∞, that is, the function t ↦ U∗(t, y(t)) is nonincreasing. This completes the proof of condition (III) in Case 1.
Case 2. Now, assume that s0 ∈ T. Then s0 = s0∗ ∈ T and, therefore, [s0,+∞)T = [s0∗,+∞)T. By Corollary 3.18, y : [s0,+∞) → Rn must have the form y = x∗, where x : [s0,+∞)T → Rn is a solution (more precisely, the maximal solution) of the dynamic equation on time scales (6.18) through (y(s0), s0). Hence
U∗(s, y(s)) − U∗(t, y(t)) = U∗(s, x∗(s)) − U∗(t, x∗(t))
= U(s∗, x∗(s)) − U(t∗, x∗(t))
= U(s∗, x(s∗)) − U(t∗, x(t∗)) ≤ 0,
for every s0 ≤ t < s < +∞ and, therefore, the function t ↦ U∗(t, y(t)) is nonincreasing. This completes the proof of condition (III) in Case 2.
Conditions (I), (II) and (III) proved previously imply that U∗ is a Lyapunov functional with respect to the measure differential equation (6.20).
Now, by hypothesis (H3), there exists a continuous increasing function a : R+ → R+ with a(0) = 0 satisfying condition (6.19). Then, for all solutions y : I → Bρ,
I ⊂ [t0,+∞) of the measure differential equation (6.20), we have
U∗(t, y(t)) = U(t∗, y(t)) ≤ a(‖y(t)‖)
for all t ∈ I. Therefore all the conditions of Theorem 6.8 are satisfied and, hence, the trivial
solution x∗ ≡ 0 of the measure differential equation (6.20) is uniformly stable. By Corollary
3.18, the trivial solution x ≡ 0 of the dynamic equation on time scales (6.18) is uniformly
stable and we obtain the desired result.
Theorem 6.13. Let T be a time scale such that supT = +∞ and [t0,+∞)T be a time
scale interval. Assume that f : Bc × [t0,+∞)T → Rn satisfies conditions (B1), (B2), (B3)
and f(0, t) = 0 for every t ∈ [t0,+∞)T. Suppose U : [t0,+∞)T × Bρ → R, 0 < ρ < c,
satisfies conditions (i) and (ii) from Definition 6.11 and condition (H3) from Theorem 6.12.
Moreover, suppose there exists a continuous function φ : X → R satisfying φ(0) = 0 and
φ(υ) > 0 for υ ≠ 0 such that, for every maximal solution x(t) = x(t, s0, x0), (x0, s0) ∈ Bρ × [t0,+∞)T, of the dynamic equation on time scales (6.18),
U(s∗, x(s∗))− U(t∗, x(t∗)) ≤ (s− t)∗(−φ(x(t∗))) (6.24)
for all t, s ∈ [s0, ω)T with t ≤ s. Then the trivial solution x ≡ 0 of the dynamic equation on
time scales (6.18) is uniformly asymptotically stable.
Proof. Since f satisfies conditions (B1), (B2) and (B3), by Theorem 4.26, f ∗ satisfies con-
ditions (A2), (A3) and (A4). Furthermore, f(0, t) = 0, for every t ∈ [t0,+∞)T, hence
f ∗(0, t) = 0, for every t ∈ [t0,+∞).
Define
U∗(t, x) := U(t∗, x),
for all (t, x) ∈ [t0,+∞)×Bρ. Then U∗ : [t0,+∞)×Bρ → R satisfies conditions (I) and (II)
from Definition 6.7, using the same arguments as in the proof of Theorem 6.12.
By hypothesis (H3) from Theorem 6.12, U∗ satisfies the hypothesis (H2) from Theorem
6.8, using the same arguments as in the proof of Theorem 6.12.
Finally, we will prove that U∗ satisfies the hypothesis (6.17). Indeed, let (x0, s0) ∈ Bρ × [t0,+∞) and y = y(·, s0, x0) : [s0, ω) → Bρ be the unique maximal solution of the measure differential equation (6.20). Since y(t) ∈ Bρ ⊂ Bc, for all t ∈ [s0, ω), by Corollary 4.20 (with N = Bρ), we have ω = +∞.
We consider two cases.
Case 1. Suppose s0 ∈ [t0,+∞)\T. Using the same argument as in Case 1 of the proof of Theorem 6.12, we can prove that y|[s0∗,+∞) must have the form
y|[s0∗,+∞) = x∗,
where x : [s0∗,+∞)T → Rn is the maximal solution of the dynamic equation on time scales (6.18) through (x(s0∗), s0∗). Hence
• If s0 ≤ t < s ≤ s0∗, we get
U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, y(s)) − U(t∗, y(t))
= U(s∗, x(s∗)) − U(t∗, x(t∗)) = 0. (6.25)
On the other hand, by (6.24),
U(s∗, x(s∗)) − U(t∗, x(t∗)) ≤ (s − t)∗(−φ(x(t∗))) ≤ 0. (6.26)
Then, by (6.25) and (6.26), we have (−φ(x(t∗))) = 0 and, therefore, x(t∗) = 0, that is, y(t) = 0 (since t∗ = s0∗ by (6.21), so x(t∗) = x(s0∗) = y(s0∗), and y(s0∗) = y(t) by (6.22)). Thus, we obtain
U∗(s, y(s)) − U∗(t, y(t)) = (s − t)(−φ(y(t))) = 0.
• If s0 ≤ t ≤ s0∗ < s, we have
U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, x(s∗)) − U(t∗, x(t∗))
≤ (s − t)∗(−φ(x(t∗)))
= (s − t)∗(−φ(x∗(t)))
= (s − t)∗(−φ(y(t))) ≤ (s − t)(−φ(y(t))).
• If s0∗ ≤ t < s, we obtain
U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, x(s∗)) − U(t∗, x(t∗))
≤ (s − t)∗(−φ(x(t∗)))
= (s − t)∗(−φ(x∗(t)))
= (s − t)∗(−φ(y(t))) ≤ (s − t)(−φ(y(t))).
Thus, U∗(s, y(s))− U∗(t, y(t)) ≤ (s− t)(−φ(y(t))) for all s0 ≤ t < s < +∞.
Case 2. Now, assume that s0 ∈ T. Then s0 = s0∗ ∈ T and, therefore, [s0,+∞)T = [s0∗,+∞)T. By Corollary 3.18, y : [s0,+∞) → Rn must have the form y = x∗, where x : [s0,+∞)T → Rn is a solution (more precisely, the maximal solution) of the dynamic equation on time scales (6.18) through (x(s0), s0). Hence
U∗(s, y(s)) − U∗(t, y(t)) = U∗(s, x∗(s)) − U∗(t, x∗(t))
= U(s∗, x∗(s)) − U(t∗, x∗(t))
= U(s∗, x(s∗)) − U(t∗, x(t∗))
≤ (s − t)∗(−φ(x(t∗)))
= (s − t)∗(−φ(y(t))) ≤ (s − t)(−φ(y(t))).
Thus U∗(s, y(s))− U∗(t, y(t)) ≤ (s− t)(−φ(y(t))), for all s0 ≤ t < s < +∞.
Therefore all the hypotheses of Theorem 6.9 are satisfied. Then the trivial solution x∗ ≡ 0
of the measure differential equation (6.20) is uniformly asymptotically stable. By Corollary
3.18 the trivial solution x ≡ 0 of the dynamic equation on time scales (6.18) is uniformly
asymptotically stable, obtaining the desired result.
Chapter 7
Remarks on autonomous generalized ODEs
In this chapter, our goal is to prove that autonomous generalized ODEs do not enlarge the class of classical autonomous ODEs. More precisely, we will prove that if a function H : Ω → X belongs to the class F(Ω, h, ω) (or to one of the classes F(Ω, h, ω), F(Ω∞, h, ω) or F(Ω, h, ω, E) defined in this chapter) and H(x, t) = F(x)t, for all (x, t) ∈ Ω, then F is a continuous function, and this fact implies that the classes of autonomous generalized ODEs dx/dτ = D[F(x)t] and of classical autonomous ODEs dx/dt = F(x(t)) coincide.
The results presented here are new and are contained in [19].
7.1 Autonomous generalized ODEs
We start this section by recalling the concept of an autonomous generalized ODE intro-
duced in [38] and used in [44].
Definition 7.1. Let X be a Banach space and O ⊂ X be open. An autonomous generalized
ODE is a generalized ODE of the form
dx
dτ= D[H(x, t)], (7.1)
where H : Ω→ X, Ω = O × [α, β], is given by
H(x, t) = F (x)t
with F : O → X and t ∈ [α, β].
At this moment, the reader may find the name “autonomous” awkward for an equation of the type
dx/dτ = D[F(x)t], (7.2)
with t “appearing” on its right-hand side. Recall that a solution of the generalized ODE (7.2) is any function x : [σ, η] → X, [σ, η] ⊂ [α, β], such that x(t) ∈ O for all t ∈ [σ, η] and
x(v) − x(γ) = ∫_{γ}^{v} D[F(x(τ))t] = (K) ∫_{γ}^{v} F(x(t)) dt (7.3)
for all γ, v ∈ [σ, η], where the last integral is precisely the Kurzweil integral (the prefix (K)
will be used to distinguish this integral from others). Thus, the integral form of (7.2) is given
by
x(t) = x(σ) + (K) ∫_{σ}^{t} F(x(s)) ds. (7.4)
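Identity (7.4) can be checked numerically when F is continuous, since the Kurzweil integral then reduces to a Riemann integral (cf. (7.3)). The example below assumes X = R, F(x) = x and the exact solution x(t) = e^t; these choices are illustrative only.

```python
# Numerical check of x(t) = x(sigma) + \int F(x(s)) ds for continuous F.
# Assumed example: F(x) = x on X = R, exact solution x(t) = e^t.
import math

def riemann_sum(func, a, b, n=100000):
    """Left-endpoint Riemann sum of func over [a, b]."""
    h = (b - a) / n
    return sum(func(a + k * h) for k in range(n)) * h

F = lambda x: x
x = math.exp            # x(t) = e^t solves dx/dt = F(x(t)), x(0) = 1
lhs = x(1.0) - x(0.0)   # x(t) - x(sigma) with sigma = 0, t = 1
rhs = riemann_sum(lambda s: F(x(s)), 0.0, 1.0)
assert abs(lhs - rhs) < 1e-4
```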
The next result ensures that all solutions of the autonomous generalized ODE (7.2) are
continuous.
Lemma 7.2. If x : [σ, η] → X, [σ, η] ⊂ [α, β], is a solution of the autonomous generalized
ordinary differential equation (7.2) on [σ, η], then x is a continuous function on [σ, η].
Proof. Suppose x : [σ, η] → X, [σ, η] ⊂ [α, β], is a solution of the autonomous generalized
ODE (7.2) on [σ, η]. Let a ∈ [σ, η]. We have
x(s) = x(a) + ∫_{a}^{s} D[F(x(τ))t], s ∈ [σ, η].
Now, define U : [σ, η]× [σ, η]→ X by U(τ, t) := F (x(τ))t, for every (τ, t) ∈ [σ, η]× [σ, η].
Notice that the function
U(a, ·) : [σ, η] → X
t ↦ U(a, t) = F(x(a))t
is continuous at the point t = a. Thus, by Remark 1.7, the function [σ, η] ∋ s ↦ ∫_{a}^{s} D[F(x(τ))t] is continuous at the point s = a and, therefore, x is continuous at a.
In the sequel, we recall an important class of right-hand sides of generalized ODEs. The
reader may want to consult [38]. Assume that h : [α, β] → R is a nondecreasing function
and that ω : [0,+∞)→ R is a continuous and increasing function with ω(0) = 0.
Definition 7.3. We say that a function H : Ω→ X belongs to the class F(Ω, h, ω), if
‖H(x, s2)−H(x, s1)‖ ≤ |h(s2)− h(s1)| (7.5)
for all (x, s2), (x, s1) ∈ Ω and
‖H(x, s2)−H(x, s1)−H(y, s2) +H(y, s1)‖ ≤ ω(‖x− y‖)|h(s2)− h(s1)| (7.6)
for all (x, s2), (x, s1), (y, s2), (y, s1) ∈ Ω.
Remark 7.4. When the function ω : [0,+∞)→ R is the identity, we will denote this class
simply by F(Ω, h).
We remind the reader that if F : O → X is a continuous function and x0 ∈ O, then
x : I → X is a solution of the autonomous ODE given by
dx/dt = F(x(t)) (7.7)
on I with x(t0) = x0 if, and only if, x is a continuous function on I, x(t) ∈ O for every t ∈ I and
x(t) = x(t0) + (R) ∫_{t0}^{t} F(x(s)) ds,
where the last integral is the Riemann integral on Banach spaces.
The following result ensures that if F : O → X is a continuous function, then the autonomous generalized ODE (7.2) and the autonomous ODE (7.7) coincide. Thus, all the theory in the literature concerning autonomous ODEs remains true also for autonomous generalized ODEs.
Proposition 7.5. Let F : O → X be a function. If F is a continuous function, then the
autonomous generalized ODE (7.2) and the autonomous ODE (7.7) coincide.
Proof. Suppose x : [σ, η] → X satisfies the autonomous generalized ODE (7.2). Then, by Lemma 7.2, x is a continuous function on [σ, η]. Thus, the function [σ, η] ∋ ξ ↦ F(x(ξ)) is continuous on [σ, η] and, therefore, the Kurzweil integral ∫_{σ}^{ξ} D[F(x(τ))t] = (K) ∫_{σ}^{ξ} F(x(s)) ds and the Riemann integral (R) ∫_{σ}^{ξ} F(x(s)) ds coincide. Then
x(ξ) = x(σ) + (K) ∫_{σ}^{ξ} F(x(s)) ds = x(σ) + (R) ∫_{σ}^{ξ} F(x(s)) ds, ξ ∈ [σ, η], (7.8)
that is, x satisfies the autonomous ODE (7.7).
Analogously, if x : [σ, η]→ X satisfies the autonomous ODE (7.7), by the equality (7.8),
x satisfies the autonomous generalized ODE (7.2).
Proposition 7.6. Let H ∈ F(Ω, h, ω) be such that
H(x, t) = F (x)t, (7.9)
for all (x, t) ∈ Ω, where F : O → X. Then
‖F (x)− F (y)‖ ≤ Cω(‖x− y‖), (7.10)
for every x, y ∈ O, with C = (h(β) − h(α))/(β − α). In particular, F is a continuous function.
Proof. Let x, y ∈ O. Then, by condition (7.6) of Definition 7.3, we have
‖F(x)(β − α) − F(y)(β − α)‖ = ‖H(x, β) − H(x, α) − H(y, β) + H(y, α)‖ ≤ (h(β) − h(α))ω(‖x − y‖),
that is,
‖F(x) − F(y)‖ ≤ Cω(‖x − y‖),
where C = (h(β) − h(α))/(β − α). Since Cω(‖x − y‖) → 0 as x → y, we get ‖F(x) − F(y)‖ → 0 as x → y. Thus F is a continuous function.
Corollary 7.7. Let H ∈ F(Ω, h, ω) be such that
H(x, t) = F (x)t, (7.11)
for all (x, t) ∈ Ω, where F : O → X. Then the autonomous generalized ODE (7.2) and the
autonomous ODE (7.7) coincide.
Proof. The result follows from Propositions 7.5 and 7.6.
7.2 Correspondence between F(Ω, h, ω) and 𝓕(Ω, h, ω)

In this section, we prove that if H ∈ 𝓕(Ω, h, ω) satisfies H(x, t) = F(x)t, then the autonomous generalized ODE (7.2) and the autonomous ODE (7.7) coincide. We also prove that the classes F(Ω, h, ω) and 𝓕(Ω, h, ω) coincide.

The next definition is a slightly modified version of [44, Definition 1.7] (which concerns the special case ω(t) = t).
Definition 7.8. A function H : Ω → X belongs to the class 𝓕(Ω, h, ω) if the integral ∫_{α}^{β} DH(z(τ), t) exists for all z ∈ G([α, β], O) and the conditions
‖∫_{s1}^{s2} DH(z(τ), t)‖ ≤ |h(s2) − h(s1)| (7.12)
and
‖∫_{s1}^{s2} D[H(z(τ), t) − H(w(τ), t)]‖ ≤ |h(s2) − h(s1)|ω(‖z − w‖∞) (7.13)
hold for all z, w ∈ G([α, β], O) and all s1, s2 ∈ [α, β].
Note that Proposition 1.18 ensures the existence of the integrals in (7.12) and (7.13).

The following statement is a slightly modified version of [44, Proposition 1.8] (which concerns the special case ω(t) = t). The proof from [44] carries over without any changes; we reproduce it here.

Proposition 7.9. F(Ω, h, ω) ⊂ 𝓕(Ω, h, ω).
Proof. Suppose H ∈ F(Ω, h, ω) and let z, w ∈ G([α, β], O) be given. By Proposition 1.18 and Lemma 1.15, we have (7.12). Thus it remains to prove that (7.13) also holds.
By the existence of the integrals ∫_{α}^{β} DH(z(τ), s) and ∫_{α}^{β} DH(w(τ), s), and by the integrability on subintervals, given s1, s2 ∈ [α, β] with s1 < s2 and given ε > 0, there is a gauge δ on [s1, s2] such that, for every δ-fine tagged division D = (τi, [ti−1, ti]) of [s1, s2], we have
‖∫_{s1}^{s2} DH(z(τ), t) − ∑_{i=1}^{|D|} [H(z(τi), ti) − H(z(τi), ti−1)]‖ < ε
and
‖∫_{s1}^{s2} DH(w(τ), t) − ∑_{i=1}^{|D|} [H(w(τi), ti) − H(w(τi), ti−1)]‖ < ε.
Thus,
‖∫_{s1}^{s2} D[H(z(τ), t) − H(w(τ), t)]‖
≤ ‖∫_{s1}^{s2} DH(z(τ), t) − ∑_{i=1}^{|D|} [H(z(τi), ti) − H(z(τi), ti−1)]‖
+ ‖∫_{s1}^{s2} DH(w(τ), t) − ∑_{i=1}^{|D|} [H(w(τi), ti) − H(w(τi), ti−1)]‖
+ ∑_{i=1}^{|D|} ‖H(z(τi), ti) − H(z(τi), ti−1) − H(w(τi), ti) + H(w(τi), ti−1)‖
< 2ε + ∑_{i=1}^{|D|} ω(‖z(τi) − w(τi)‖)[h(ti) − h(ti−1)] ≤ 2ε + ω(‖z − w‖∞)[h(s2) − h(s1)].
Hence, since ε > 0 is arbitrarily small, we obtain
‖∫_{s1}^{s2} D[H(z(τ), t) − H(w(τ), t)]‖ ≤ ω(‖z − w‖∞)[h(s2) − h(s1)],
which completes the proof.
Proposition 7.10. Let H ∈ 𝓕(Ω, h, ω) be such that
H(x, t) = F(x)t, (7.14)
for all (x, t) ∈ Ω, where F : O → X. Then
‖F(b) − F(d)‖ ≤ Kω(‖b − d‖), (7.15)
for every b, d ∈ O, with K = |h(β) − h(α)|/(β − α). In particular, F is a continuous function.
Proof. Let d, b ∈ O. Define the functions
z : [α, β] → O, z(s) = d, (7.16)
and
w : [α, β] → O, w(s) = b. (7.17)
Notice that z, w ∈ G([α, β], O). Thus, by condition (7.13) of Definition 7.8, we have
‖∫_{α}^{β} D[H(z(τ), t) − H(w(τ), t)]‖ ≤ |h(β) − h(α)|ω(‖z − w‖∞) = |h(β) − h(α)|ω(‖b − d‖). (7.18)
We assert that
∫_{α}^{β} DH(z(τ), t) = F(d)(β − α) and ∫_{α}^{β} DH(w(τ), t) = F(b)(β − α). (7.19)
We prove only the first equality; the second follows analogously. Let U(τ, t) = H(z(τ), t) for all (τ, t) ∈ [α, β] × [α, β]. By (7.16) and (7.14),
U(τ, t) = H(z(τ), t) = H(d, t) = F(d)t, for every (τ, t) ∈ [α, β] × [α, β]. (7.20)
Let D = (τk, [tk−1, tk]) be an arbitrary tagged division of the interval [α, β]. Then, by (7.20),
S(U, D) = ∑_{k=1}^{|D|} [U(τk, tk) − U(τk, tk−1)] = ∑_{k=1}^{|D|} [F(d)tk − F(d)tk−1] = F(d)(t|D| − t0) = F(d)(β − α),
since the sum telescopes. Hence we obtain
∫_{α}^{β} DH(z(τ), t) = F(d)(β − α),
which completes the proof of the assertion.
By (7.18) and (7.19), we obtain
‖F(b)(β − α) − F(d)(β − α)‖ ≤ |h(β) − h(α)|ω(‖b − d‖),
that is,
‖F(b) − F(d)‖ ≤ Kω(‖b − d‖),
where K = |h(β) − h(α)|/(β − α). This completes the proof of the proposition.
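The telescoping computation above can be checked mechanically: for U(τ, t) = F(d)t, the Riemann-type sum S(U, D) collapses to F(d)(β − α) for every tagged division D, no matter how the division is chosen. A small sketch in the scalar case (the value of F(d) and the division are hypothetical):

```python
# Sketch: S(U, D) = Σ_k [U(τ_k, t_k) − U(τ_k, t_{k−1})] telescopes exactly
# when U(τ, t) = F(d)·t, for ANY tagged division of [α, β].
from fractions import Fraction
import random

def riemann_type_sum(U, division):
    """S(U, D) for a tagged division D = [(tag, (t_{k-1}, t_k)), ...]."""
    return sum(U(tau, t1) - U(tau, t0) for tau, (t0, t1) in division)

alpha, beta = Fraction(0), Fraction(1)
Fd = Fraction(3, 7)                  # hypothetical scalar value of F(d)
U = lambda tau, t: Fd * t            # U(τ, t) = F(d) t, independent of τ

# build a random tagged division of [α, β] (tags at midpoints)
random.seed(0)
cuts = sorted(Fraction(random.randint(1, 99), 100) for _ in range(10))
points = [alpha] + cuts + [beta]
division = [((points[k] + points[k + 1]) / 2, (points[k], points[k + 1]))
            for k in range(len(points) - 1)]

# the sum telescopes exactly: S(U, D) = F(d)(β − α)
assert riemann_type_sum(U, division) == Fd * (beta - alpha)
```

Exact rational arithmetic (`Fraction`) makes the equality exact rather than approximate.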
As an immediate consequence of Proposition 7.10, we have the following result.
Corollary 7.11. Let H ∈ 𝓕(Ω, h, ω) be such that
H(x, t) = F (x)t, (7.21)
for all (x, t) ∈ Ω, where F : O → X. Then the autonomous generalized ODE (7.2) and the
autonomous ODE (7.7) coincide.
Proof. The proof is a consequence of Propositions 7.5 and 7.10.
Now, let us prove the reverse inclusion.

Proposition 7.12. 𝓕(Ω, h, ω) ⊂ F(Ω, h, ω).
Proof. Suppose H ∈ 𝓕(Ω, h, ω). Let x, y ∈ O and s1, s2 ∈ [α, β] be given. Define the functions
z : [α, β] → O, z(s) = x, (7.22)
and
w : [α, β] → O, w(s) = y. (7.23)
Using the same arguments as in the proof of Proposition 7.10, we obtain
∫_{s1}^{s2} DH(z(τ), t) = H(x, s2) − H(x, s1) and ∫_{s1}^{s2} DH(w(τ), t) = H(y, s2) − H(y, s1).
Thus, since H ∈ 𝓕(Ω, h, ω), we get
‖H(x, s2) − H(x, s1)‖ = ‖∫_{s1}^{s2} DH(z(τ), t)‖ ≤ |h(s2) − h(s1)|
and
‖H(y, s2) − H(y, s1) − H(x, s2) + H(x, s1)‖ = ‖∫_{s1}^{s2} DH(w(τ), t) − ∫_{s1}^{s2} DH(z(τ), t)‖
≤ |h(s2) − h(s1)|ω(‖w − z‖∞) = |h(s2) − h(s1)|ω(‖y − x‖),
that is, H ∈ F(Ω, h, ω), which is the desired result.
Corollary 7.13. The classes F(Ω, h, ω) and 𝓕(Ω, h, ω) coincide.
Proof. The result is a consequence of Propositions 7.9 and 7.12.
7.3 The classes 𝓕(Ω∞, h, ω) and 𝓕(Ω∞, h, ω, E)

In this section, we show that if H ∈ 𝓕(Ω∞, h, ω) satisfies H(x, t) = F(x)t, then the autonomous generalized ODE (7.2) and the autonomous ODE (7.7) coincide. Moreover, we prove a surprising consequence when H ∈ 𝓕(Ω∞, h, ω, E).
Let us assume that Ω∞ = O × [t0,+∞), where O ⊂ Rn is open and t0 ∈ R.
We recall that G([t0,+∞), Rn) is the vector space of functions ϕ : [t0,+∞) → Rn such that ϕ|[u,v] belongs to G([u, v], Rn) for all [u, v] ⊂ [t0,+∞). For every compact interval [u, v] ⊂ [t0,+∞), we can define a seminorm ‖ · ‖∞,[u,v] : G([t0,+∞), Rn) → R by
‖ψ‖∞,[u,v] := sup_{t∈[u,v]} |ψ(t)|,
for all ψ ∈ G([t0,+∞), Rn). The topology induced on G([t0,+∞), Rn) by the family of seminorms {‖ · ‖∞,K}_{K∈Γ}, where Γ := {[u, v] : [u, v] ⊂ [t0,+∞)}, is called the topology of locally uniform convergence on G([t0,+∞), Rn). We write x ∈ G([t0,+∞), O) for a function x ∈ G([t0,+∞), Rn) such that x(s) ∈ O for all s ∈ [t0,+∞).
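As an illustration of these seminorms (our own example; the step function below is hypothetical), ‖ψ‖∞,[u,v] of a piecewise-constant regulated ψ is simply the largest |ψ| attained on the pieces meeting [u, v]:

```python
# Sketch: the seminorm ‖ψ‖∞,[u,v] = sup_{t∈[u,v]} |ψ(t)| for a step function.
def step_function(breaks, values):
    """Right-continuous step function: values[i] on [breaks[i], breaks[i+1])."""
    def psi(t):
        for b, v in zip(reversed(breaks), reversed(values)):
            if t >= b:
                return v
        raise ValueError("t below the domain")
    return psi

def sup_seminorm(psi, u, v, samples=10_000):
    """Approximate ‖ψ‖∞,[u,v]; exact here because sampling hits every piece."""
    return max(abs(psi(u + (v - u) * k / samples)) for k in range(samples + 1))

# hypothetical regulated function on [0,+∞): 2 on [0,1), -5 on [1,3), 1 on [3,∞)
psi = step_function([0, 1, 3], [2, -5, 1])
assert sup_seminorm(psi, 0.0, 0.5) == 2   # only the first piece meets [0, 0.5]
assert sup_seminorm(psi, 0.0, 2.0) == 5   # the piece with value -5 dominates
```

Note that different compact intervals [u, v] give different values, which is exactly why the family of seminorms (rather than a single norm) is needed on the half-line.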
Assume that h : [t0,+∞) → R is a nondecreasing function defined on [t0,+∞) and
ω : [0,+∞)→ R is a continuous and increasing function with ω(0) = 0.
Definition 7.14. A function H : Ω∞ → X belongs to the class 𝓕(Ω∞, h, ω) if the integral ∫_{u}^{v} DH(z(τ), t) exists for every z ∈ G([t0,+∞), O) and every [u, v] ⊂ [t0,+∞), and
‖∫_{s1}^{s2} DH(z(τ), t)‖ ≤ |h(s2) − h(s1)| (7.24)
and
‖∫_{s1}^{s2} D[H(z(τ), t) − H(w(τ), t)]‖ ≤ |h(s2) − h(s1)|ω(‖z − w‖∞,[s1,s2]), (7.25)
for all z, w ∈ G([t0,+∞), O) and all s1, s2 ∈ [t0,+∞).
Remark 7.15. When the function ω is the identity, we write 𝓕(Ω∞, h) instead of 𝓕(Ω∞, h, ω).

In the sequel, we present a surprising result which follows from the definition of the class 𝓕(Ω∞, h, ω).
Proposition 7.16. Let H ∈ 𝓕(Ω∞, h, ω) be such that
H(x, t) = F(x)t, (7.26)
for all (x, t) ∈ Ω∞, where F : O → X. Then
‖F(b) − F(d)‖ ≤ Kω(‖b − d‖), (7.27)
for all b, d ∈ O, where K = |h(t0 + 1) − h(t0)|. In particular, F is a continuous function.
Proof. Let d, b ∈ O. Define the functions
z : [t0,+∞) → O, z(s) = d, (7.28)
and
w : [t0,+∞) → O, w(s) = b. (7.29)
Notice that z, w ∈ G([t0,+∞), O). Thus, by condition (7.25) of Definition 7.14, we have
‖∫_{t0}^{t0+1} D[H(z(τ), t) − H(w(τ), t)]‖ ≤ |h(t0 + 1) − h(t0)|ω(‖z − w‖∞,[t0,t0+1]) = |h(t0 + 1) − h(t0)|ω(‖b − d‖). (7.30)
Now, using the same arguments as in the proof of Proposition 7.10, we have
∫_{t0}^{t0+1} DH(z(τ), t) = F(d)(t0 + 1 − t0) = F(d) and ∫_{t0}^{t0+1} DH(w(τ), t) = F(b)(t0 + 1 − t0) = F(b).
Then (7.30) implies
‖F(b) − F(d)‖ ≤ Kω(‖b − d‖),
where K = |h(t0 + 1) − h(t0)|, and the proof is complete.
The next result is an immediate consequence of Propositions 7.5 and 7.16.
Corollary 7.17. Let H ∈ 𝓕(Ω∞, h, ω) be such that
H(x, t) = F(x)t, (7.31)
for all (x, t) ∈ Ω∞, where F : O → X. Then the autonomous generalized ODE (7.2) and the
autonomous ODE (7.7) coincide.
Note that the main idea behind the proofs of Propositions 7.10, 7.12 and 7.16 is to define constant functions z and w. Let us now define a class which does not contain such constant functions and show that, even in this class, our result remains true.
Consider the set
E := {ϕ ∈ G([t0,+∞), O) : ϕ is constant}.
Definition 7.18. Let h : [t0,+∞) → R be a nondecreasing function and let Ω∞ = O × [t0,+∞). A function H : Ω∞ → X belongs to the class 𝓕(Ω∞, h, ω, E) if the integral ∫_{α}^{β} DH(z(τ), t) exists for every z ∈ G([t0,+∞), O)\E and every interval [α, β] ⊂ [t0,+∞), and
‖∫_{s1}^{s2} DH(z(τ), t)‖ ≤ |h(s2) − h(s1)| (7.32)
and
‖∫_{s1}^{s2} D[H(z(τ), t) − H(w(τ), t)]‖ ≤ |h(s2) − h(s1)|ω(‖z − w‖∞,[s1,s2]), (7.33)
for all z, w ∈ G([t0,+∞), O)\E and all s1, s2 ∈ [t0,+∞).
Proposition 7.19. Let H ∈ 𝓕(Ω∞, h, ω, E) be such that
H(x, t) = F(x)t, (7.34)
for all (x, t) ∈ Ω∞, where F : O → X. Then
‖F(b) − F(d)‖ ≤ Kω(‖b − d‖), (7.35)
for every b, d ∈ O, with K = |h(t0 + 1) − h(t0)|. In particular, F is a continuous function.
Proof. Let d, b ∈ O. If d = b, then (7.35) holds trivially. Thus we suppose d ≠ b.
Define functions z, w : [t0,+∞) → O by
z(t) = d for t ∈ [t0, t0 + 1] and z(t) = b for t ∈ (t0 + 1,+∞),
and
w(t) = b for t ∈ [t0, t0 + 1] and w(t) = d for t ∈ (t0 + 1,+∞).
Notice that z, w ∈ G([t0,+∞), O)\E. Therefore, by condition (7.33) of Definition 7.18, we obtain
‖∫_{t0}^{t0+1} D[H(z(τ), t) − H(w(τ), t)]‖ ≤ |h(t0 + 1) − h(t0)|ω(‖z − w‖∞,[t0,t0+1]) = |h(t0 + 1) − h(t0)|ω(‖b − d‖). (7.36)
Now, using the same arguments as in the proof of Proposition 7.10, we have
∫_{t0}^{t0+1} DH(z(τ), t) = F(d)(t0 + 1 − t0) = F(d) and ∫_{t0}^{t0+1} DH(w(τ), t) = F(b)(t0 + 1 − t0) = F(b).
Therefore, (7.36) yields
‖F(b) − F(d)‖ ≤ Kω(‖b − d‖),
with K = |h(t0 + 1) − h(t0)|, and the proof is complete.
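The swap trick in this proof can be sketched numerically in the scalar case (the values d, b below are hypothetical): the step functions z and w are non-constant, hence avoid E, and yet their uniform distance on [t0, t0 + 1] is exactly |b − d|, which is what makes the estimate work.

```python
# Sketch: the non-constant "swapped" step functions from the proof above.
t0, d, b = 0.0, 2.0, 5.0                 # hypothetical data with d != b

def z(t):
    return d if t <= t0 + 1 else b       # d on [t0, t0+1], b afterwards

def w(t):
    return b if t <= t0 + 1 else d       # b on [t0, t0+1], d afterwards

# both functions take two distinct values, so neither belongs to E
assert {z(t0), z(t0 + 2)} == {d, b} and {w(t0), w(t0 + 2)} == {d, b}

# on [t0, t0+1] the difference z - w is constantly d - b, hence
# ‖z − w‖∞,[t0,t0+1] = |b − d|
samples = [t0 + k / 100 for k in range(101)]
assert max(abs(z(t) - w(t)) for t in samples) == abs(b - d)
```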
As an immediate consequence of Proposition 7.19, we have the following result.
Corollary 7.20. Let H ∈ 𝓕(Ω∞, h, ω, E) be such that
H(x, t) = F(x)t, (7.37)
for all (x, t) ∈ Ω∞, where F : O → X. Then the autonomous generalized ODE (7.2) and the
autonomous ODE (7.7) coincide.
Proof. The proof is a consequence of Propositions 7.5 and 7.19.
Bibliography
[1] S. M. Afonso, Equações diferenciais funcionais com retardamento e impulsos em tempo variável via equações diferenciais ordinárias generalizadas, PhD thesis, Universidade de São Paulo, ICMC-USP, 2011.
[2] S. M. Afonso, E. M. Bonotto, M. Federson and L. P. Gimenes, Boundedness of solutions of retarded functional differential equations with variable impulses via generalized ordinary differential equations, Mathematische Nachrichten, 285(5-6), 545-561, 2012.
[3] S. M. Afonso, E. Bonotto, M. Federson and L. P. Gimenes, Stability of functional differential equations with variable impulsive perturbations via generalized ordinary differential equations, Bulletin des Sciences Mathématiques, v. 13, 189-214, 2013.
[4] E. Akin-Bohner and Y. Raffoul, Boundedness in functional dynamic equations on time
scales, Advances in Difference Equations, p. 1-18, 2006.
[5] Z. Artstein, Topological dynamics of ordinary differential equations and Kurzweil equations, J. Differential Equations, 23, 224-243, 1977.
[6] S. S. Bellale and B. C. Dhage, Abstract measure integro-differential equations, Global
J. Math. Anal. 1, 91-108, 2007.
[7] M. Bohner and A. Peterson, Dynamic Equations on Time Scales: An Introduction with
Applications, Birkhauser, Boston, 2001.
[8] M. Bohner and A. Peterson, Advances in Dynamic Equations on Time Scales,
Birkhauser, Boston, 2003.
[9] D. N. Chate, B. C. Dhage and S. K. Ntouyas, Abstract measure differential equations, Dynam. Systems Appl., 13(1), 1005-1017, 2004.
[10] P. C. Das and R. R. Sharma, Existence and stability of measure differential equations,
Czech. Math. Journal. 22(97), 145-158, 1972.
[11] S. G. Deo and S. R. Joshi, On abstract measure delay differential equations, An. Stiint.
Univ. Al I. Cuza Iasi XXVI, 327-335, 1980.
[12] S. G. Deo and S. G. Pandit, Differential Systems Involving Impulses, Lecture Notes in Mathematics 954, Springer-Verlag, Berlin-New York, 1982.
[13] B. C. Dhage, On system of abstract measure integro-differential inequalities and appli-
cations, Bull. Inst. Math. Acad. Sinica 18, 65-75, 1989.
[14] B. C. Dhage and J. R. Graef, On stability of abstract measure delay integro-differential
equations, Dynam. Systems Appl. 19, 323-334, 2010.
[15] J. Diblík, M. Růžičková and B. Václavíková, Bounded solutions of dynamic equations on time scales, International Journal of Difference Equations, 3(1), 61-69, 2008.
[16] M. Federson, R. Grau and J. G. Mesquita, Prolongation of solutions of measure differ-
ential equations and dynamical equations on time scales, submitted.
[17] M. Federson, R. Grau, J. G. Mesquita and E.Toon, Boundedness of solutions of measure
differential equations and dynamic equations on time scales, submitted.
[18] M. Federson, R. Grau, J. G. Mesquita and E.Toon, Lyapunov stability for measure
differential equations and dynamic equations on time scales, submitted.
[19] M. Federson, R. Grau, J. G. Mesquita and E.Toon, Remark on autonomous generalized
ordinary differential equations, submitted.
[20] M. Federson, J. G. Mesquita and A. Slavík, Measure functional differential equations and functional dynamic equations on time scales, J. Differential Equations, 252(6), 3816-3847, 2012.
[21] M. Federson, J. G. Mesquita and A. Slavík, Basic results for functional differential and dynamic equations involving impulses, Math. Nachr., 286(2-3), 181-204, 2013.
[22] M. Federson, J. G. Mesquita and E. Toon, Lyapunov theorems for measure functional differential equations via Kurzweil equations, Mathematische Nachrichten, v. 218, 1487-1511, 2015.
[23] M. Federson and S. Schwabik, Generalized ODEs approach to impulsive retarded differential equations, Diff. Integral Equations, 19(11), 1201-1234, 2006.
[24] M. Federson and S. Schwabik, A new approach to impulsive retarded differential equa-
tions: stability results, Functional Differential Equations. 16(4), 583-607, 2009.
[25] D. Fraňková, Regulated functions, Math. Bohem., 116(1), 20-59, 1991.
[26] R. Henstock, Lectures on the Theory of Integration, World Scientific, Singapore, 1988.
[27] S. Hilger, Ein Maßkettenkalkül mit Anwendung auf Zentrumsmannigfaltigkeiten, PhD thesis, Universität Würzburg, 1988.
[28] Y. Hino, S. Murakami and T. Naito, Functional Differential Equations with Infinite Delay, Springer-Verlag, 1991.
[29] C. S. Hönig, Volterra Stieltjes-Integral Equations, North-Holland, Amsterdam, 1975.
[30] J. Kurzweil, Generalized ordinary differential equations and continuous dependence on
a parameter, Czechoslovak Math. J. 7(82), 418-448, 1957.
[31] J. Kurzweil, Generalized ordinary differential equations, Czechoslovak Math. J., 8(83), 360-388, 1958.
[32] S. Leela, Stability of measure differential equations, Pacific Journal of Mathematics.
Vol. 55. No. 2, 1974.
[33] F. Oliva and Z. Vorel, Functional equations and generalized ordinary differential equa-
tions, Bol. Soc. Mat. Mexicana, 11, 40-46, 1966.
[34] S. G. Pandit, Differential systems with impulsive perturbations, Pacific J. Math 86,
553-560, 1980.
[35] A. Peterson and B. Thompson, Henstock-Kurzweil delta and nabla integrals, J. Math.
Anal. Appl. 323, 162-178, 2006.
[36] A. Peterson and C. Tisdell, Boundedness and uniqueness of solutions to dynamic equations on time scales, Journal of Difference Equations and Applications, 10(13-15), 1295-1306, 2004.
[37] W. W. Schmaedeke, Optimal control theory for nonlinear vector differential equations
containing measures, SIAM J., Ser. A, Control 3, 231–280, 1965.
[38] S. Schwabik, Generalized Ordinary Differential Equations, Series in Real Analysis, vol.
5, World Scientific, Singapore, 1992.
[39] P. C. Das, R. R. Sharma, On optimal controls for measure delay-differential equations,
J. SIAM Control 9, 43-61, 1971.
[40] R. R. Sharma, An abstract measure differential equation, Proc. Amer. Math. Soc., 32, 503-510, 1972.
[41] R. R. Sharma, A measure differential inequality with applications, Proc. Amer. Math.
Soc., 48, 87-97, 1975.
[42] A. Slavík, Dynamic equations on time scales and generalized ordinary differential equations, J. Math. Anal. Appl., 385, 534-550, 2012.
[43] I. M. Stamova, Boundedness of impulsive functional differential equations with variable
impulsive perturbations, Bull Austral Math. Soc., 77, 331-345, 2008.
[44] E. Toon, Equações diferenciais ordinárias generalizadas e aplicações às equações diferenciais clássicas, PhD thesis, Universidade de São Paulo, ICMC-USP, 2012.