
University of Notre Dame

Senior Thesis

Submitted to the Department of Mathematics in partial fulfillment of the requirements for

graduation with Senior Thesis

Applications of

Brownian motion in finance

Author: Eric Eun Seuk Choi

Advisor: Dr. Nancy K. Stanton

Department of Mathematics
University of Notre Dame

Notre Dame, Indiana

July 28, 2016


Abstract

This thesis is an exposition of Brownian motion and its applications in finance. The concepts of random walks, stochastic processes, and probability spaces are discussed first, as they are the building blocks of Brownian motion. Along with the definition, distribution, and filtration of Brownian motion, properties of Brownian motion are analyzed. Finally, real-world examples of Brownian motion in finance are presented with the analysis of the Black-Scholes model and of geometric Brownian motion.


Contents

1 Acknowledgements
2 Introduction and motivation
3 Building blocks of Brownian motion
  3.1 Stochastic Processes
  3.2 Probability spaces, σ-algebra, filtration, and Lebesgue measure
  3.3 Symmetric Random Walks
  3.4 Increments of the Symmetric Random Walk
  3.5 Scaled Symmetric Random Walk
  3.6 Limiting Distribution of the Scaled Random Walk
4 Brownian motion
  4.1 Mathematical definition
  4.2 Distribution of Brownian motion
5 Properties of Brownian motion
  5.1 Filtration of Brownian motion
  5.2 Martingale property
  5.3 Markov property
  5.4 Nowhere Differentiable
6 Applications of Brownian motion in Finance
  6.1 Brief history of Brownian motion in Finance
  6.2 Black-Scholes Model
  6.3 Brownian motion in Black-Scholes Model
  6.4 Random walk with time step ∆t and space step ∆x
  6.5 Construction of Brownian motion as the limit of random walks
  6.6 Geometric Brownian motion
7 Conclusion


1 Acknowledgements

My work on Brownian motion began with Directed Readings with Professor Nancy Stanton, my research advisor. This thesis would not have been written without her instruction; she has guided me every week since Spring 2015. Starting from the concepts of Markov chains, symmetric random walks, and probability spaces, I was able to build up the necessary elements for this thesis under her guidance. She has been instrumental in helping me understand the relevant materials, choose a topic, and write the thesis. The topic was motivated by Professor Alex Himonas, who provided me with a list of possible thesis topics that fit my knowledge of Financial Mathematics. He also recommended Shreve's Stochastic Calculus for Finance book [5], which is used as the backbone of this thesis. In addition, his notes for the Mathematical Methods in Finance and Economics course were extremely helpful in writing about applications of Brownian motion in the Black-Scholes formula. I also want to thank the College of Science for supporting me with a Summer Undergraduate Research Fellowship. Finally, to all of my friends and family who have been with me during this project, my academic advisor, Professor Sonja Mapes, and the professors of the Notre Dame Mathematics department, thank you for your support throughout my undergraduate years.

2 Introduction and motivation

After taking the Introduction to Probability course, I started Directed Readings on Markov chains and random walks, picking up where that course left off. After taking the Mathematical Methods in Finance and Economics course, I became interested in mathematical concepts widely used in the finance industry. Brownian motion was the topic that could combine my interests in random walks and financial mathematics, since it not only is widely used in finance but also contains probabilistic concepts like symmetric random walks as its basis. It also stimulated my intellectual curiosity in that seemingly random motions of particles can be explained by mathematical concepts. Professor Shreve's Stochastic Calculus for Finance book was the resource I mostly used to learn the various concepts that comprise Brownian motion.

Brownian motion is the random motion of particles suspended in a fluid, resulting from their collisions with the fast-moving atoms or molecules in the gas or liquid. It was first discovered by the botanist Robert Brown in 1827. While he was looking through a microscope at particles trapped in cavities inside pollen grains in water, he noted that the particles moved through the water, but he was not able to determine the mechanism that caused this motion. He also observed that the motion of the particles never ceased, did not appear to have a tangent, and that the smaller the particle, the greater the motion. This phenomenon was finally explained by Albert Einstein, who published a paper in 1905 explaining in precise detail that it was caused by molecular motion: the grains of pollen are bombarded on all sides by the molecules of water, and the particles are so small that the imbalance of impacts on one side and the other produces a noticeable motion. He also formulated a diffusion equation for Brownian particles and related the diffusion coefficient to measurable physical quantities.

The organization of this thesis is simple and intuitive. Stochastic processes and symmetric random walks are discussed first to serve as building blocks of Brownian motion. Secondly, a formal and mathematical definition of Brownian motion is explained. Thirdly, the distribution, filtration, and properties of Brownian motion occupy a significant amount of this thesis. Finally, an application of Brownian motion in finance is discussed.

I hope this thesis can be easily read by students who have some background in probability and are curious about the concepts behind Brownian motion, and also by those who have little background in probability but are interested in the roles of Brownian motion in finance.

3 Building blocks of Brownian motion

3.1 Stochastic Processes

In probability theory, a stochastic process is a collection of random variables evolving randomly in time. Simple examples of stochastic processes are signals such as speech, audio, and video, and exchange-rate fluctuations in financial markets. We start with the concept of a stochastic process, since Brownian motion is a continuous-time stochastic process.

Definition A stochastic process is a family {X_t, t ∈ T} of random variables defined on the same probability space.

Here, T is a parameter set and a subset of (−∞, ∞). It can be anything: a stochastic process is just a collection of random variables. The definition is so general as to be almost all-encompassing, but we usually have something more specific. The process is called a continuous parameter process if T is an interval having positive length (e.g. T = [0, ∞)) and a discrete parameter process if T is a subset of the integers (e.g. T = {0, 1, 2, ...}). If T = {0, 1, 2, ...} it is usual to denote the process by {X_n, n ≥ 0}. Brownian motion is a continuous parameter process.

One example of a discrete-time stochastic process is a sequence X_1, X_2, ... of independent random variables. A typical example of a continuous-time stochastic process would be {D(t), t ≥ 0}, where D(t) is the Dow-Jones stock-market average at time t. There are other stochastic processes with larger parameter sets, such as {T(x, y, t) : t ≥ 0, 0 ≤ x ≤ 360, 0 ≤ y ≤ 180}, where T(x, y, t) is the temperature at longitude x, latitude y, and time t on the surface of the Earth.

Let {X_t, t ∈ T} be a stochastic process on (Ω, F, P) (this notation will be explained in the next section). Its finite-dimensional distributions are of the form P{(X_{t_1}, ..., X_{t_n}) ∈ A}, where n = 1, 2, ..., t_1, ..., t_n ∈ T, and A is a Borel set in R^n. Then:

Theorem [6, Theorem 7.2] The entire distribution of a stochastic process is determined by its finite-dimensional distributions.

3.2 Probability spaces, σ-algebra, filtration, and Lebesgue measure

Most of the topics in this paper are built on the basic mathematical model of probability: a probability space (also called a probability triple) (Ω, F, P), σ-algebras, filtrations, and Lebesgue measure. These concepts usually do not show up in an undergraduate-level probability textbook, but there is no need to worry about understanding them in advance, as this section goes over the basic definitions.

Here, Ω in a probability triple means the sample space, the set of all possible outcomes. For example, when you roll a fair die once, Ω will be {1, 2, 3, 4, 5, 6}, the possible outcomes of one roll. F is a σ-algebra of subsets of Ω. A class G of subsets of Ω is a σ-algebra (also called a σ-field) if

(i) ∅ ∈ G;
(ii) A ∈ G ⇒ A^c ∈ G;
(iii) A_1, A_2, ... ∈ G ⇒ ⋃_{i=1}^{∞} A_i ∈ G.

Some examples of σ-algebras are


(1) All subsets of Ω.
(2) {Ω, ∅}. (This is called the trivial σ-algebra.)
(3) All countable subsets of R together with their complements.

Note that every σ-algebra contains both ∅ and Ω, so the trivial σ-algebra is also the smallest possible σ-algebra.

The σ-algebra F is the collection of all the events (an event is a set of zero or more outcomes, namely a subset of the sample space) we would like to consider. Finally, P is a probability measure on F, the assignment of probabilities to the events, i.e. a function P from events to probabilities.

Definition Let F be a σ-field. A probability measure P on F is a real-valued function defined on F such that

(i) if A ∈ F, then P(A) ≥ 0;
(ii) P(Ω) = 1;
(iii) if A_1, A_2, ... is a finite or countably infinite sequence of disjoint elements of F (i.e., i ≠ j ⇒ A_i ∩ A_j = ∅), then

P(⋃_n A_n) = ∑_n P(A_n).  (1)

P(A) is called the probability of A.

Definition Let Ω be a nonempty set. Let T be a fixed positive number, and assume that for each t ∈ [0, T] there is a σ-algebra F(t). Assume further that if s ≤ t, then every set in F(s) is also in F(t). Then we call the collection of σ-algebras F(t), 0 ≤ t ≤ T, a filtration.

Intuitively, F(t) is what you know up to time t. For example, let X_i denote the outcome of the ith coin toss. If you toss a coin 5 times, you have information about the tosses up through X_5, but you do not yet have information about X_10. If A ∈ F(s) then A ∈ F(t) for t ≥ s, so F(t) is the bigger σ-algebra: it carries the information of more random variables than F(s). For instance, the information in X_1, X_2, ..., X_10 includes the information in X_1, ..., X_5. The concept of filtration will later be used when we prove the martingale property of Brownian motion.

Finally, in measure theory, Lebesgue measure is the standard way of assigning a measure to subsets of n-dimensional Euclidean space. For n = 1, 2, or 3, it coincides with the standard measure of length, area, or volume. The following is its definition. Given a subset E ⊆ R, with the length of an (open, closed, semi-open) interval I = [a, b] given by l(I) = b − a, the Lebesgue outer measure λ*(E) is defined as

λ*(E) = inf { ∑_{k=1}^{∞} l(I_k) : (I_k)_{k∈N} is a sequence of open intervals with E ⊆ ⋃_{k=1}^{∞} I_k }.

The Lebesgue measure of E is given by its Lebesgue outer measure, λ(E) = λ*(E), if, for every A ⊆ R,

λ∗(A) = λ∗(A ∩ E) + λ∗(A ∩ Ec).

Despite the complicated definition of Lebesgue measure, all you need to know about it to understand this thesis is that for any closed interval [a, b] of real numbers, its Lebesgue measure is the length b − a. In addition, the open interval (a, b) has the same measure, since the difference between the two sets [a, b] and (a, b) consists only of the end points a and b and has measure zero.

3.3 Symmetric Random Walks

A random walk is a mathematical formalization of a path that consists of a sequence of random steps. Examples of random walks are the path traced by a molecule as it travels in a liquid or a gas and the price of a fluctuating stock. In a symmetric random walk, the probabilities of jumping to each of the adjacent neighbors of the current location are the same. A simple case of a symmetric random walk is shown in Figure 1.

Figure 1. Five steps of a random walk

Brownian motion will be obtained as a limit of random walks. To construct a symmetric random walk, we repeatedly toss a fair coin (p, the probability of H on each toss, and q = 1 − p, the probability of T on each toss, are both equal to 1/2). Let's denote the successive outcomes of the tosses by ω = ω_1ω_2ω_3.... In other words, ω is the infinite sequence of tosses, and ω_n is the outcome of the nth toss. Let

X_j = 1 if ω_j = H, and X_j = −1 if ω_j = T,  (2)

and define M_0 = 0,

M_k = ∑_{j=1}^{k} X_j,  k = 1, 2, 3, ...  (3)

The process {M_k, k = 0, 1, 2, ...} is a symmetric random walk, because with each toss it either steps up one unit or down one unit, and each of the two possibilities is equally likely.
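The construction above is easy to check by simulation. The following minimal sketch is not part of the original text; the function name, seed, and step count are our own choices.

```python
import numpy as np

def symmetric_random_walk(k, rng):
    """Return M_0, M_1, ..., M_k built from k fair +/-1 coin tosses."""
    steps = rng.choice([1, -1], size=k)              # X_1, ..., X_k
    return np.concatenate(([0], np.cumsum(steps)))   # M_0 = 0, M_j = X_1 + ... + X_j

# Five steps of a random walk, as in Figure 1
print(symmetric_random_walk(5, np.random.default_rng(0)))
```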


3.4 Increments of the Symmetric Random Walk

A random walk has independent increments. Suppose we choose nonnegative integers 0 = k_0 < k_1 < ... < k_m. Each of the random variables

M_{k_{i+1}} − M_{k_i} = ∑_{j=k_i+1}^{k_{i+1}} X_j  (4)

is called an increment of the random walk: it is the change in the position of the random walk between times k_i and k_{i+1}. Increments over nonoverlapping time intervals are independent because the X_j's appearing in them are; intuitively, they depend on different coin tosses.

Finally, the random variables

M_{k_1} = (M_{k_1} − M_{k_0}), (M_{k_2} − M_{k_1}), ..., (M_{k_m} − M_{k_{m−1}})  (5)

are independent. Moreover, each increment M_{k_m} − M_{k_{m−1}} has expected value 0 and variance k_m − k_{m−1}. The expected value is 0 because each X_j appearing on the right-hand side of (4) has expected value 0 (the expected value of −1 occurring with probability 1/2 and 1 occurring with probability 1/2 is zero). In addition, E(X_j²) = (−1)²·(1/2) + 1²·(1/2) = 1 and Var(X_j) = E(X_j²) − (E(X_j))² = 1 − 0 = 1. Then from (4),

Var(M_{k_{i+1}} − M_{k_i}) = ∑_{j=k_i+1}^{k_{i+1}} Var(X_j) = ∑_{j=k_i+1}^{k_{i+1}} 1 = k_{i+1} − k_i.  (6)

The variance of the symmetric random walk accumulates at rate one per unit time, so that the variance of the increment over any time interval k to l for nonnegative integers k < l is l − k.

3.5 Scaled Symmetric Random Walk

To approximate a Brownian motion, we fix a positive integer n and define the scaled symmetric random walk

W^{(n)}(t) = (1/√n) M_{nt},  (7)

given that nt is an integer. Like the random walk, the scaled random walk has independent increments. If 0 = t_0 < t_1 < ... < t_m are such that each nt_j is an integer, then

(W^{(n)}(t_1) − W^{(n)}(t_0)), (W^{(n)}(t_2) − W^{(n)}(t_1)), ..., (W^{(n)}(t_m) − W^{(n)}(t_{m−1}))

are independent. For example, W^{(100)}(0.30) − W^{(100)}(0) depends on the first 30 coin tosses and W^{(100)}(0.70) − W^{(100)}(0.20) depends on the 50 tosses after the first 20. In addition, if s ≤ t are such that ns and nt are integers, then

E(W^{(n)}(t) − W^{(n)}(s)) = 0,  Var(W^{(n)}(t) − W^{(n)}(s)) = t − s.  (8)

The reasoning is the same as for the expected value of (4) and the result of (6). Equivalently, W^{(n)}(t) − W^{(n)}(s) is the sum of n(t − s) independent random variables, each with expected value zero and variance 1/n. (Note that Var(X_1 + X_2 + ... + X_n) = Var(X_1) + Var(X_2) + ... + Var(X_n) if X_1, X_2, ..., X_n are independent.) For example, W^{(100)}(0.70) − W^{(100)}(0.20) is the sum of 50 independent random variables, each of which takes the value 1/10 or −1/10. Each of these random variables has expected value zero and variance 1/100, so the variance of W^{(100)}(0.70) − W^{(100)}(0.20) is 50 · (1/100) = 0.5.
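Equation (8) and the example above can be verified numerically. The sketch below is ours (seed and sample size arbitrary); it estimates the mean and variance of W^{(100)}(0.70) − W^{(100)}(0.20).

```python
import numpy as np

rng = np.random.default_rng(1)
paths = 200_000
# The increment W^(100)(0.70) - W^(100)(0.20) is the sum of 50 independent +/-(1/10) steps.
heads = rng.binomial(50, 0.5, size=paths)        # number of heads among tosses 21, ..., 70
increments = (2 * heads - 50) / 10.0             # each head adds +1/10, each tail adds -1/10
print(increments.mean(), increments.var())       # close to 0 and to t - s = 0.5
```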

3.6 Limiting Distribution of the Scaled Random Walk

In a single path of the scaled random walk, we have fixed a sequence of coin tosses ω = ω_1ω_2ω_3... and drawn the path of the resulting process as time t varies. Another way to think about the scaled random walk is to fix the time t and consider the set of all possible paths evaluated at that time t. In other words, we can fix t and think about the scaled random walk corresponding to different values of ω, the sequence of coin tosses. For example, set t = 0.25 and consider the set of possible values of W^{(100)}(0.25) = (1/10) M_{25}. This random variable is generated by 25 coin tosses, and since the unscaled random walk M_{25} can take the value of any odd integer between −25 and 25 (think of the results of tossing a coin 25 times, with each toss contributing −1 or 1), the scaled random walk W^{(100)}(0.25) can take any of the values −2.5, −2.3, −2.1, ..., −0.3, −0.1, 0.1, 0.3, ..., 2.1, 2.3, 2.5.

In order for W^{(100)}(0.25) to take the value 0.1, we must obtain 13 heads and 12 tails in the 25 tosses. The probability of this is

P{W^{(100)}(0.25) = 0.1} = (25!/(13! 12!)) (1/2)^{25} = 0.1555.  (9)
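The probability in (9) is a binomial probability, so all of the bar areas that make up Figure 2 can be computed the same way. The following check is ours, not part of the thesis.

```python
from math import comb

# P{W^(100)(0.25) = (2h - 25)/10} = C(25, h) * (1/2)^25, where h is the number of heads
for heads in range(10, 16):
    value = (2 * heads - 25) / 10            # a possible value of W^(100)(0.25)
    area = comb(25, heads) * 0.5 ** 25       # area of the histogram bar centered at that value
    print(f"W = {value:+.1f}: bar area {area:.4f}")
```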


We plot this information in Figure 2 by drawing a histogram bar centered at 0.1 with area 0.1555. Since this bar has width 0.2, its height must be 0.1555/0.2 = 0.7775. Figure 2 shows similar histogram bars for all possible values of W^{(100)}(0.25) between −1.5 and 1.5.

Figure 2. Distribution of W^{(100)}(0.25) and normal curve y = (2/√(2π)) e^{−2x²}

The random variable W^{(100)}(0.25) has expected value zero and variance 0.25. Superimposed on the histogram in Figure 2 is the normal density with this mean and variance. We see that the distribution of W^{(100)}(0.25) is nearly normal. If we were given a continuous bounded function g(x) and asked to compute E g(W^{(100)}(0.25)), a good approximation would be obtained by multiplying g(x) by the normal density shown in Figure 2 and integrating:

E g(W^{(100)}(0.25)) ≈ (2/√(2π)) ∫_{−∞}^{∞} g(x) e^{−2x²} dx.  (10)

The Central Limit Theorem asserts that the approximation in (10) is valid. We provide the version of it that applies to our context.

Theorem (Central Limit Theorem). Fix t ≥ 0. As n → ∞, the distribution of the scaled random walk W^{(n)}(t) evaluated at time t converges to the normal distribution with mean zero and variance t.

Outline of proof [5, p. 89]: One can identify distributions by identifying their moment-generating functions (moment-generating functions will be used to identify the distribution of Brownian motion). For the normal density f(x) = (1/√(2πt)) e^{−x²/(2t)} with mean zero and variance t, the moment-generating function is

φ(u) = ∫_{−∞}^{∞} e^{ux} f(x) dx
     = (1/√(2πt)) ∫_{−∞}^{∞} exp{ux − x²/(2t)} dx
     = e^{u²t/2} (1/√(2πt)) ∫_{−∞}^{∞} exp{−(x − ut)²/(2t)} dx
     = e^{u²t/2},  (11)

because (1/√(2πt)) e^{−(x−ut)²/(2t)} is a normal density with mean ut and variance t and hence integrates to 1.
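The theorem can also be illustrated numerically. The sketch below is ours (the sample sizes and seed are arbitrary); it compares the empirical distribution of W^{(n)}(t) for a large n with the N(0, t) distribution function.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
t, n, paths = 0.25, 10_000, 200_000
nt = int(n * t)                                  # number of tosses contributing to W^(n)(t)
heads = rng.binomial(nt, 0.5, size=paths)        # M_{nt} = 2*heads - nt
W_n_t = (2 * heads - nt) / np.sqrt(n)            # W^(n)(t) = M_{nt} / sqrt(n)

for x in (-0.5, 0.0, 0.5, 1.0):
    print(x, (W_n_t <= x).mean(), norm.cdf(x, scale=np.sqrt(t)))   # empirical vs N(0, t)
```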

4 Brownian motion

4.1 Mathematical definition

We obtain Brownian motion as the limit of the scaled random walks W^{(n)}(t) of (7) as n → ∞. Brownian motion inherits properties from these random walks. This leads to the following mathematical definition.

Definition 1. Let (Ω, F, P) be a probability space. For each ω ∈ Ω, suppose there is a continuous function W(t) of t ≥ 0 that satisfies W(0) = 0 and that depends on ω. Then W(t), t ≥ 0, is a Brownian motion if for all 0 = t_0 < t_1 < ... < t_m the increments

W (t1) = W (t1)−W (t0),W (t2)−W (t1), ...,W (tm)−W (tm−1) (12)

are independent and each of these increments is normally distributed with

E[W (ti+1)−W (ti)] = 0 (13)

V ar[W (ti+1)−W (ti)] = ti+1 − ti. (14)

While existence is not obvious, it will be addressed in Section 6.


One difference between Brownian motion W(t) and the scaled random walk W^{(100)}(t) is that the scaled random walk has a natural time step 1/100 and is linear between these time steps, whereas the Brownian motion has no linear pieces. The other difference is that while the scaled random walk W^{(100)}(t) is only approximately normal for each t (Figure 2), Brownian motion is exactly normal. In fact, not only is W(t) = W(t) − W(0) normally distributed for each t, but the increments W(t) − W(s) are normally distributed for all 0 ≤ s < t.

There are two ways to understand ω in (12). One way is to think of ω as the Brownian motion path. A random experiment is performed, and its outcome is the path of the Brownian motion. Then W(t) is the value of this path at time t, and this value depends on which path resulted from the random experiment. Alternatively, one can think of ω as something more primitive than the path itself, akin to the outcome of a sequence of coin tosses, but now the coin is being tossed "infinitely fast". Once the sequence of coin tosses has been performed and the result ω obtained, the path of the Brownian motion can be drawn. If the tossing is done again and a different ω is obtained, then a different path will be drawn. In either case, the sample space Ω is the set of all possible outcomes of a random experiment, F is the σ-algebra of subsets of Ω whose probabilities are defined, and P is a probability measure. For each A ∈ F, the probability of A is a number P(A) between zero and one. The distributional statements about Brownian motion pertain to P. For example, we might wish to determine the probability of the set A containing all ω ∈ Ω that result in a Brownian motion path satisfying 0 ≤ W(0.25) ≤ 0.2. Let us first consider this for the scaled random walk W^{(100)}. If we were asked to determine the set A = {ω : 0 ≤ W^{(100)}(0.25) ≤ 0.2}, we would note that in order for the scaled random walk W^{(100)} to fall between 0 and 0.2 at time 0.25, the unscaled random walk M_{25} = 10 W^{(100)}(0.25) must fall between 0 and 2 after 25 tosses. Since M_{25} (the result of tossing a coin 25 times) can only be an odd number (±1, ±3, ..., ±25), it falls between 0 and 2 if and only if it is equal to 1, equivalently, if and only if W^{(100)}(0.25) = 0.1; in particular, M_{25} cannot be either 0 or 2. To achieve the value 1, the coin tossing must result in 13 heads and 12 tails in the first 25 tosses. Therefore, A is the set of all infinite sequences of coin tosses with the property that in the first 25 tosses there are 13 heads and 12 tails. The probability that one of these sequences occurs is

P{W^{(100)}(0.25) = 0.1} = (25!/(13! 12!)) (1/2)^{25} = 0.1555.  (15)


Therefore, P(A) = 0.1555. For the Brownian motion W, there is also a set of outcomes ω of the random experiment that result in a Brownian motion path satisfying 0 ≤ W(0.25) ≤ 0.2. This is a subset of Ω, and the probability of this set is

P{0 ≤ W(0.25) ≤ 0.2} = (2/√(2π)) ∫_0^{0.2} e^{−2x²} dx.  (16)

In place of the area of the histogram bar centered at 0.1 in Figure 2, which is 0.1555, we now have the area under the normal curve between 0 and 0.2 in that figure. These two areas are nearly the same.
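The near equality of the two areas is easy to confirm. The check below is ours; it evaluates the bar area with the binomial formula (15) and the right-hand side of (16) with the normal distribution function.

```python
from math import comb, sqrt
from scipy.stats import norm

bar_area = comb(25, 13) * 0.5 ** 25              # P{W^(100)(0.25) = 0.1}: 13 heads in 25 tosses
normal_area = norm.cdf(0.2, scale=sqrt(0.25)) - norm.cdf(0.0, scale=sqrt(0.25))
print(bar_area, normal_area)                     # both are approximately 0.155
```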

4.2 Distribution of Brownian motion

This section is heavily quoted from [5, pp. 95-97]. Since the increments W(t_1) = W(t_1) − W(t_0), W(t_2) − W(t_1), ..., W(t_m) − W(t_{m−1}) of (12) are independent and normally distributed, the random variables W(t_1), W(t_2), ..., W(t_m) are jointly normally distributed. The joint distribution of jointly normal random variables is determined by their means and covariances. Each of the random variables W(t_i) has mean zero. For any two times 0 ≤ s < t, the covariance of W(s) and W(t) is

E[W(s)W(t)] = E[W(s)(W(t) − W(s)) + W²(s)]
            = E[W(s)] · E[W(t) − W(s)] + E[W²(s)]
            = 0 + Var[W(s)] = s.  (17)

The second equality holds because E[W(s)(W(t) − W(s))] = E[W(s)] · E[W(t) − W(s)] by the independence of the increments of Brownian motion (here W(s) − W(0) and W(t) − W(s) are independent). The third equality holds because E[W(t) − W(s)] = 0 by (13) and E[W²(s)] = Var(W(s)) + (E[W(s)])² = Var(W(s)) + 0 = Var(W(s)).
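Equation (17) can be checked by simulation; the sketch below is ours (times, seed, and sample size arbitrary). It samples W(s) and W(t) using independent normal increments and estimates E[W(s)W(t)].

```python
import numpy as np

rng = np.random.default_rng(3)
s, t, paths = 0.4, 0.9, 200_000
W_s = rng.normal(0.0, np.sqrt(s), size=paths)               # W(s) ~ N(0, s)
W_t = W_s + rng.normal(0.0, np.sqrt(t - s), size=paths)     # add the independent increment W(t) - W(s)
print(np.mean(W_s * W_t))                                   # close to min(s, t) = s = 0.4
```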

The moment-generating function of the random vector (W(t_1), ..., W(t_m)) can be computed using the moment-generating function (11) for a zero-mean normal random variable with variance t and the independence of the increments in (12). To assist in this computation, we note first that

u_3 W(t_3) + u_2 W(t_2) + u_1 W(t_1) = u_3(W(t_3) − W(t_2)) + (u_2 + u_3)(W(t_2) − W(t_1)) + (u_1 + u_2 + u_3) W(t_1)

and more generally,

u_m W(t_m) + u_{m−1} W(t_{m−1}) + u_{m−2} W(t_{m−2}) + ... + u_1 W(t_1)
  = u_m(W(t_m) − W(t_{m−1})) + (u_{m−1} + u_m)(W(t_{m−1}) − W(t_{m−2}))
  + (u_{m−2} + u_{m−1} + u_m)(W(t_{m−2}) − W(t_{m−3})) + ... + (u_1 + u_2 + ... + u_m) W(t_1).

We use these facts to compute the moment-generating function of the random vector (W(t_1), W(t_2), ..., W(t_m)):

φ(u_1, u_2, ..., u_m)
  = E exp{u_m W(t_m) + u_{m−1} W(t_{m−1}) + ... + u_1 W(t_1)}
  = E exp{u_m(W(t_m) − W(t_{m−1})) + (u_{m−1} + u_m)(W(t_{m−1}) − W(t_{m−2})) + ... + (u_1 + u_2 + ... + u_m) W(t_1)}
  = E exp{u_m(W(t_m) − W(t_{m−1}))} · E exp{(u_{m−1} + u_m)(W(t_{m−1}) − W(t_{m−2}))} ··· E exp{(u_1 + u_2 + ... + u_m) W(t_1)}
  = exp{(1/2) u_m²(t_m − t_{m−1})} · exp{(1/2)(u_{m−1} + u_m)²(t_{m−1} − t_{m−2})} ··· exp{(1/2)(u_1 + u_2 + ... + u_m)² t_1}.

The final equality was derived by using (11) (the fact that each increment is normally distributed). In conclusion, the moment-generating function for Brownian motion (i.e., for the m-dimensional random vector (W(t_1), W(t_2), ..., W(t_m))) is

φ(u_1, u_2, ..., u_m) = exp{(1/2)(u_1 + u_2 + ... + u_m)² t_1 + (1/2)(u_2 + u_3 + ... + u_m)²(t_2 − t_1) + ...
  + (1/2)(u_{m−1} + u_m)²(t_{m−1} − t_{m−2}) + (1/2) u_m²(t_m − t_{m−1})}.  (18)

The distribution of the Brownian increments in (12) can be specified by specifying the joint density or the joint moment-generating function of the random variables W(t_1), W(t_2), ..., W(t_m). This leads to the following theorem.

Theorem (Alternative characterizations of Brownian motion). Let (Ω, F, P) be a probability space. For each ω ∈ Ω, suppose there is a continuous function W(t) of t ≥ 0 that satisfies W(0) = 0 and that depends on ω. The following are equivalent.

(i) For all 0 = t_0 < t_1 < ... < t_m, the increments W(t_1) = W(t_1) − W(t_0), W(t_2) − W(t_1), ..., W(t_m) − W(t_{m−1}) are independent and each of these increments is normally distributed with mean and variance given by (13) and (14).
(ii) For all 0 = t_0 < t_1 < ... < t_m, the random variables W(t_1), W(t_2), ..., W(t_m) have the joint moment-generating function (18).

These imply (iii):
(iii) For all 0 = t_0 < t_1 < ... < t_m, the random variables W(t_1), W(t_2), ..., W(t_m) are jointly normally distributed with means equal to zero.

If (i) or (ii) holds (and hence both hold), then W(t), t ≥ 0, is a Brownian motion.

5 Properties of Brownian motion

5.1 Filtration of Brownian motion

In addition to Brownian motion itself, we will need some notation for the amount of information available at each time. We do that with a filtration.

Definition. Let (Ω, F, P) be a probability space on which is defined a Brownian motion W(t), t ≥ 0. A filtration for the Brownian motion is a collection of σ-algebras F(t), t ≥ 0, satisfying:

(i) (Information accumulates) For 0 ≤ s < t, every set in F(s) is also in F(t). In other words, there is at least as much information available at the later time F(t) as there is at the earlier time F(s).
(ii) (Adaptivity) For each t ≥ 0, the Brownian motion W(t) at time t is F(t)-measurable. In other words, the information available at time t is sufficient to evaluate the Brownian motion W(t) at that time.
(iii) (Independence of future increments) For 0 ≤ t < u, the increment W(u) − W(t) is independent of F(t). In other words, any increment of the Brownian motion after time t is independent of the information available at time t.

Let ∆(t), t ≥ 0, be a stochastic process. We say that ∆(t) is adapted to the filtration F(t) if for each t ≥ 0 the random variable ∆(t) is F(t)-measurable.

Properties (i) and (ii) in the definition above guarantee that the information available at each time t is at least as much as one would learn from observing the Brownian motion up to time t. Property (iii) means that this information is of no use in predicting future movements of the Brownian motion.

There are two possibilities for the filtration F(t) for a Brownian motion. One is to let F(t) contain only the information obtained by observing the Brownian motion itself up to time t. The other is to include in F(t) information obtained by observing the Brownian motion and one or more other processes. However, if the information in F(t) includes observations of processes other than the Brownian motion W, this additional information is not allowed to give clues about the future increments of W because of property (iii). Therefore, we think of F(t) as containing only the information obtained by observing the Brownian motion itself up to time t.

5.2 Martingale property

A martingale is a mathematical model of a fair game, where knowledge of past events never helps predict the mean of the future winnings. It takes its name from the French "la grande martingale," the betting strategy in which one doubles the bet after each loss. In a fair game, the gambler should, on the average, come out even. Mathematically, the gambler's expected winnings and expected losses should cancel out. But it is not enough to have the overall expectation vanish. The expectation must vanish at the time of the bet. This can be mathematically explained by the following definition.

Definition Let (Ω, F, P) be a probability space, let T be a fixed positive number, and let F(t), 0 ≤ t ≤ T, be a filtration of sub-σ-algebras of F. Consider an adapted stochastic process M(t), 0 ≤ t ≤ T. If E[M(t)|F(s)] = M(s) for all 0 ≤ s ≤ t ≤ T, we say this process is a martingale. It has no tendency to rise or fall.

Here, E[X|F] is an F-measurable random variable satisfying ∫_A E[X|F] dP = ∫_A X dP for all A ∈ F. In the coin toss example, E[M(t)|F(s)] is the expected result after t tosses given that we have observed the results of the first s tosses, and M(s) is the result after s tosses. A simple example can explain the martingale property. Suppose we choose one of two coins at random and start flipping it. One of the coins has two heads, and the other has two tails. Bet a dollar on the outcome of each toss at even odds. The probability of heads on each toss is clearly the probability that we chose the two-headed coin, one-half, so the bet is fair for one toss. However, if we toss it twice, the bet on the second toss is not fair, because we have already seen the result of the first toss and know which coin is being flipped this time, so we can win the second toss by betting on whichever side came up on the first. Therefore, it is not the expectation which has to vanish, but the conditional expectation, given all that we know at the time we make the bet. In other words, the conditional expectation of the winnings should be zero at any time.

It is interesting to observe that Brownian motion satisfies the martingale property.

Theorem. Brownian motion is a martingale.

Proof: Let 0 ≤ s ≤ t be given. Then

E[W(t)|F(s)] = E[(W(t) − W(s)) + W(s)|F(s)]
             = E[(W(t) − W(s))|F(s)] + E[W(s)|F(s)]
             = E[W(t) − W(s)] + W(s)
             = W(s).  (19)

The second equality is by the linearity of conditional expectation (i.e. E(X + Y|S) = E(X|S) + E(Y|S)). The third equality holds because ∫_A E[W(s)|F(s)] dP = ∫_A W(s) dP for all A ∈ F(s), since W(s) is F(s)-measurable, and because W(t) − W(s) is independent of F(s), so its conditional expectation equals its unconditional expectation. The fourth equality is from (13): the increment has expected value zero.
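A small numerical illustration of (19), written by us: fix an observed value W(s) = w and average many independent continuations W(t) = w + (W(t) − W(s)).

```python
import numpy as np

rng = np.random.default_rng(4)
s, t, w = 1.0, 3.0, 0.7                                   # w plays the role of the observed W(s)
continuations = w + rng.normal(0.0, np.sqrt(t - s), size=500_000)
print(continuations.mean())                               # close to w, i.e. E[W(t)|F(s)] = W(s)
```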

5.3 Markov property

A Markov process is a stochastic model that has the Markov property. It can be used to model a random system that changes states according to a transition rule that only depends on the current state. Brownian motion is a Markov process, but let us first go over a basic example of the Markov property. Suppose that you start with 10 dollars and repeatedly wager 1 dollar on a fair coin toss, either indefinitely or until you lose all of your money. If I know that you have 12 dollars now, then it would be expected that, with even odds, you will have either 11 dollars or 13 dollars after the next toss. This guess is not improved by the additional knowledge that you started with 10 dollars, then went up to 11 dollars, down to 10 dollars, up to 11 dollars, and then to 12 dollars: the prediction only depends on your current amount. The process described here is a Markov chain on a countable state space that follows a random walk.

The mathematical definition of a Markov process follows.


Definition Let (Ω, F, P) be a probability space, let T be a fixed positive number, and let F(t), 0 ≤ t ≤ T, be a filtration of sub-σ-algebras of F. Consider an adapted stochastic process X(t), 0 ≤ t ≤ T. Assume that for all 0 ≤ s ≤ t ≤ T and for every nonnegative, Borel-measurable function f, there is another Borel-measurable function g such that

E[f(X(t))|F(s)] = g(X(s)). (20)

Then we say that X is a Markov process. Martingales and Markov processes are thus defined in similar ways, through conditional expectations given F(s). Now we can show that Brownian motion is a Markov process. In order to prove the Markov property of Brownian motion, the following lemma is needed.

Lemma (Independence) Let (Ω, F, P) be a probability space, and let G be a sub-σ-algebra of F. Suppose the random variables X_1, ..., X_K are G-measurable and the random variables Y_1, ..., Y_L are independent of G. Let f(x_1, ..., x_K, y_1, ..., y_L) be a function of the dummy variables x_1, ..., x_K and y_1, ..., y_L, and define

g(x1, ..., xK) = E[f(x1, ..., xK , Y1, ..., YL)]. (21)

Then

E[f(X1, ..., XK , Y1, ..., YL)|G] = g(X1, ..., XK).

Theorem Let W(t), t ≥ 0, be a Brownian motion and let F(t), t ≥ 0, be a filtration for this Brownian motion (see Section 5.1). Then W(t), t ≥ 0, is a Markov process.

Proof: [5, p. 107] According to (20), we must show that whenever 0 ≤ s ≤ t and f is a Borel-measurable function, there is another Borel-measurable function g such that

E[f(W (t))|F(s)] = g(W (s)). (22)

To prove this, we write

E[f(W(t))|F(s)] = E[f((W(t) − W(s)) + W(s))|F(s)].  (23)

The random variable W(t) − W(s) is independent of F(s), and the random variable W(s) is F(s)-measurable. This permits us to apply the Independence Lemma. In order to compute the expectation on the right-hand side of (23), we replace W(s) by a dummy variable x to hold it constant and then take the unconditional expectation of the remaining random variable (i.e., we define g(x) = E[f(W(t) − W(s) + x)]). Now, W(t) − W(s) is normally distributed with mean zero and variance t − s. Therefore,

g(x) = (1/√(2π(t − s))) ∫_{−∞}^{∞} f(w + x) e^{−w²/(2(t−s))} dw.  (24)

The Independence Lemma states that if we now take the function g(x) defined by (24) and replace the dummy variable x by the random variable W(s), then equation (22) holds.
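Formula (24) can be compared with a direct Monte Carlo estimate of E[f(W(t) − W(s) + x)]. The sketch below is ours and uses the example f(w) = w², for which the exact value of g(x) is x² + (t − s).

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(5)
s, t, x = 0.5, 2.0, 1.3
f = lambda w: w ** 2                                         # example Borel-measurable function

# g(x) from (24), by numerical integration against the normal density of W(t) - W(s)
density = lambda w: np.exp(-w ** 2 / (2 * (t - s))) / np.sqrt(2 * np.pi * (t - s))
g_quad, _ = quad(lambda w: f(w + x) * density(w), -np.inf, np.inf)

# Monte Carlo estimate of E[f(W(t) - W(s) + x)]
g_mc = np.mean(f(rng.normal(0.0, np.sqrt(t - s), size=500_000) + x))

print(g_quad, g_mc, x ** 2 + (t - s))                        # all three nearly equal
```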

5.4 Nowhere Differentiable

One of the important and interesting properties of Brownian motion is that almost all of its paths are nowhere differentiable. A nowhere differentiable function is a function that does not have a derivative at any point. The proof of nowhere differentiability is beyond the scope of this thesis, but it is provided in [6, §10.6].

6 Applications of Brownian motion in Finance

6.1 Brief history of Brownian motion in Finance

A stock market is characterized by its unpredictability. Since numerous macro- and micro-level factors affect stock markets, researchers have attempted to explain this apparent randomness with mathematical models. Their endeavors did not stop at stating that a stock market is unpredictable, so that it is useless to look into historical data and find patterns in it; they actually explained seemingly arbitrary phenomena in terms of precise mathematical models.

Asset pricing models originated in 1900, when the French mathematician Louis Bachelier completed a doctoral thesis, "Theory of Speculation," in which he worked out a model for the variation of the prices of assets, such as stocks and bonds. Sixty years later, his thesis was taken up by the economist Paul Samuelson and others, who realized that Bachelier had actually made a significant contribution to understanding asset prices. His model laid the groundwork for modern quantitative finance.

Bachelier's key insight was that if asset prices demonstrated any identifiable pattern, other than the long-term growth trend associated with macroeconomic expansion, one could anticipate that speculators would find it and exploit it, thereby eliminating it. Thus, if it were possible to know that a certain stock's price would rise in three months, speculators would act to smooth out that rise by trading on the knowledge they have now. After speculators have incorporated all available knowledge into their trades, one can expect the result to be prices showing unpredictable fluctuations, independent of their past history. Bachelier set out to explain these unpredictable fluctuations under a no-arbitrage assumption using his mathematical model. This is the starting point of the "random walk" and the "martingale," also represented as a fair game. Finally, the equation that Bachelier obtained from his model corresponds to Brownian motion.

6.2 Black-Scholes Model

The Black-Scholes model, a renowned financial model published by Fischer Black and Myron Scholes in 1973, is a mathematical model of a financial market including derivative investment instruments. From the model one can deduce the Black-Scholes formula, which provides a theoretical estimate of the price of European-style options. This formula generated an instant boom in options trading and other derivative activities at the Chicago Board Options Exchange and other option markets around the world. Up to now, many empirical tests have demonstrated that the Black-Scholes price is "fairly close" to observed prices.

The key concept of the formula is the partial differential equation they derived,

∂V/∂t + (1/2) σ² S² ∂²V/∂S² + rS ∂V/∂S − rV = 0,  t ∈ [0, T], S ≥ 0.  (25)

For t ≥ 0, the stock price S_t satisfies the stochastic differential equation (SDE)

dSt = µStdt+ σStdWt. (26)


Here r is the risk-free interest rate, V = V(S, t) is the value of the option, S is the price of the stock, µ is the expected rate of return (drift) of the stock, σ is the volatility of the stock, and W_t is a Brownian motion.

Equations (25) and (26) together determine the price of the option over time. The key idea behind the model is to hedge the option by buying and selling the underlying asset in just the right way and thus to eliminate risk. This type of hedging is called delta hedging and is the basis of other, more complex hedging techniques used in investment banks, hedge funds, and proprietary trading shops. In order to understand (25) and (26), we need to learn the three following topics:

(1) Brownian motion (also called a Wiener stochastic process),
(2) stochastic differential equations,
(3) Ito's formula.

Of these three primary building blocks of the Black-Scholes equation, this thesis focuses on (1), how Brownian motion is incorporated in the Black-Scholes formula, and (2), stochastic differential equations, as those equations contain Brownian motion.
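Although the closed-form solution of (25) is not derived in this thesis, it may help to see how the resulting Black-Scholes price of a European call is evaluated in practice. The sketch below assumes the standard call-price formula C = S N(d_1) − K e^{−rT} N(d_2), where N is the standard normal distribution function; this formula solves (25) with terminal payoff max(S − K, 0). The function name and sample parameter values are our own choices.

```python
import math
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Standard Black-Scholes price of a European call with strike K and maturity T."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

# Example: stock at 100, strike 100, one year to maturity, 5% rate, 20% volatility
print(black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.20))
```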

6.3 Brownian motion in Black-Scholes Model

In 1905, Albert Einstein analyzed Brownian motion and showed that the probability that a pollen particle is in an interval [a, b] at time t is given by the formula

P(a ≤ B_t ≤ b) = (1/√(2πt)) ∫_a^b e^{−x²/(2t)} dx.  (27)

The definition of Brownian motion in the Black-Scholes model is similar to the one in Section 4.1.

Definition A Brownian motion or Wiener process {W_t}_{t≥0} is a family of random variables W_t : Ω → R, where Ω is a probability space, satisfying the following properties:

(1) W(0) = 0.
(2) It has continuous paths; i.e., for each ω ∈ Ω, the map t → W(ω, t) is continuous from [0, ∞) into R with probability one.
(3) For t > s ≥ 0 we have that W(t) − W(s) is normally distributed with mean zero and variance t − s. That is,

P(a ≤ W(t) − W(s) ≤ b) = (1/√(2π(t − s))) ∫_a^b e^{−x²/(2(t−s))} dx.  (28)

(4) It has independent increments. That is, for all 0 = t_0 < t_1 < t_2 < ... < t_n, the increments

W(t_1) = W(t_1) − W(t_0), W(t_2) − W(t_1), ..., W(t_n) − W(t_{n−1})

are independent random variables.

6.4 Random walk with time step ∆t and space step ∆x

This section heavily quotes [1, pp. 215-218] with very little change. We will obtain Brownian motion as a limit of simple stochastic processes called random walks. While this section is analogous to Sections 3.1 and 3.3, we will go over a more specific case, divided into space steps and time steps. Let Ω denote the aforementioned sample space, and suppose that the coin is fair; that is, we have P(H) = P(T) = 1/2.

Let X_j : Ω → R be the random variable defined by

X_j = 1 if ω_j = H, and X_j = −1 if ω_j = T.  (29)

Recall that X_1, X_2, X_3, ... is a sequence of independent Bernoulli random variables with E(X_j) = 0 and Var(X_j) = 1. Next, let ∆x > 0 be a space step and ∆t > 0 be a time step, and imagine the following random walk on the x-axis.

Figure 3. A sample path of a random walk

At time t = 0, we are at the origin, x = 0, and flip a coin. If the outcome is heads, then we walk a distance of ∆x > 0 units in the positive direction, and if it is tails, then we walk ∆x > 0 units in the negative direction, both times at the constant speed ∆x/∆t. At time ∆t > 0, the coin is flipped (instantaneously) for the second time, and we do the same kind of walk again. That is, we walk ∆x > 0 units in the positive direction if the outcome is heads and ∆x > 0 units in the negative direction if the outcome is tails. The coin flipping continues in this way for the third, fourth, and subsequent steps. Figure 3 displays our path in the tx-plane if ω = HHTHTTTTHω_{10}ω_{11}..., where W_{j∆t} is our position at time j∆t. In terms of the random variables X_j, we have

W_0 = 0 and W_{k∆t} = ∑_{j=1}^{k} X_j ∆x,  k = 1, 2, ...  (30)

At a time t which may not be a multiple of ∆t, our position W_t is found by linear interpolation. More precisely, let k be the nonnegative integer such that k∆t ≤ t < (k + 1)∆t. Then

W_t = (k + 1 − t/∆t) W_{k∆t} + (t/∆t − k) W_{(k+1)∆t},  k∆t ≤ t < (k + 1)∆t.  (31)

Given k∆t ≤ t < (k + 1)∆t, you can understand (31) by thinking of p W_{k∆t} + q W_{(k+1)∆t}, where p + q = 1. The stochastic process {W_t}_{t≥0} defined by the formulas (30) and (31) is called a random walk with time step ∆t and space step ∆x. This random walk has four properties.

(1) W_0(ω) = 0 for all ω ∈ Ω.
(2) For every ω ∈ Ω, the path t ∈ [0, ∞) → W_t(ω) ∈ R is continuous.
(3) If t = k∆t, then

E(W_t) = 0 and Var(W_t) = ((∆x)²/∆t) t.  (32)

(4) If 0 = t_0 < t_1 < t_2 < ... < t_n and each t_j is a non-negative integer multiple of ∆t, then the increments

W_{t_1} = W_{t_1} − W_{t_0}, W_{t_2} − W_{t_1}, ..., W_{t_n} − W_{t_{n−1}}

are independent random variables.

Properties (1) and (2) are true by construction. To prove (3), we use the fact that the X_j are independent and (30). Therefore,

E(W_{k∆t}) = E(∑_{j=1}^{k} X_j ∆x) = ∑_{j=1}^{k} E(X_j) ∆x = ∑_{j=1}^{k} 0 · ∆x = 0.

Also, using the independence of the X_j and the fact that Var(X_j) = 1, we obtain

Var(W_{k∆t}) = Var(∑_{j=1}^{k} X_j ∆x) = ∑_{j=1}^{k} Var(X_j)(∆x)² = ∑_{j=1}^{k} 1 · (∆x)² = k(∆x)² = ((∆x)²/∆t) k∆t = ((∆x)²/∆t) t.

Finally, property (4) follows from the independence of Xj.
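Property (3) is also easy to confirm numerically. The sketch below is ours (step sizes, horizon, seed, and sample size arbitrary); it simulates W_t via (30) and compares the sample variance with ((∆x)²/∆t) t.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, dx, t, paths = 0.01, 0.05, 2.0, 200_000
k = int(t / dt)                                   # t = k * dt
heads = rng.binomial(k, 0.5, size=paths)          # number of heads among the k tosses
W_t = (2 * heads - k) * dx                        # W_{k dt} = (X_1 + ... + X_k) * dx
print(W_t.mean(), W_t.var(), dx ** 2 / dt * t)    # sample variance close to (dx^2/dt) * t = 0.5
```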

6.5 Construction of Brownian motion as the limit of random walks

Recall the limiting distribution of the scaled random walk in Section 3.6. In a similar way, this section will derive Brownian motion using the space step (∆x) and time step (∆t).


Since we are looking for a stochastic process having variance at time t equal to t, property (3) of the random walks constructed above suggests that we should choose the steps (∆x) and (∆t) so that

(∆x)²/∆t = 1.

Motivated by this relation, for each positive integer n we construct the random walk {W_t^n}_{t≥0} with steps

∆t = 1/n and ∆x = 1/√n.

For any t > 0 and fixed n, we shall define W_t^n by using formula (31). In particular, we have

t/∆t = nt.  (33)

As a result, let k in (31) be the largest integer which is less than or equal to nt, which we represent by ⌊nt⌋. Then,

W_t^n = (⌊nt⌋ + 1 − nt) W^n_{⌊nt⌋/n} + (nt − ⌊nt⌋) W^n_{(⌊nt⌋+1)/n}
      = W^n_{⌊nt⌋/n} + (nt − ⌊nt⌋)(W^n_{(⌊nt⌋+1)/n} − W^n_{⌊nt⌋/n})
      = (1/√n) ∑_{j=1}^{⌊nt⌋} X_j + ((nt − ⌊nt⌋)/√n) X_{⌊nt⌋+1}
      = (√⌊nt⌋/√n) · (1/√⌊nt⌋) ∑_{j=1}^{⌊nt⌋} X_j + ((nt − ⌊nt⌋)/√n) X_{⌊nt⌋+1}.  (34)

Now, observe that

lim_{n→∞} √⌊nt⌋/√n = √t,  lim_{n→∞} ((nt − ⌊nt⌋)/√n) X_{⌊nt⌋+1} = 0,


and by the Central Limit Theorem, we have

(1/√⌊nt⌋) ∑_{j=1}^{⌊nt⌋} X_j → Z as n → ∞ in distribution.

Here Z is a standard normal random variable. Therefore, by letting n go to ∞ in equation (34), we see that

W_t^n → √t Z = B_t as n → ∞ in distribution,

where B_t satisfies all the properties of a Brownian motion.

6.6 Geometric Brownian motion

A geometric Brownian motion is a continuous-time stochastic process in which the logarithm of the randomly varying quantity follows a Brownian motion with drift. This is an important example of a stochastic process satisfying a stochastic differential equation. A stochastic process S_t is said to follow a geometric Brownian motion if it satisfies the following stochastic differential equation:

dSt = µStdt+ σStdWt (35)

where W_t is a Brownian motion, µ is the percentage drift, and σ is the percentage volatility. For an arbitrary initial value S_0, the above stochastic differential equation has the analytic solution

S_t = S_0 exp((µ − σ²/2) t + σ W_t).  (36)

We can solve the stochastic differential equation using the Ito integral (a full treatment is beyond the scope of this thesis, but the basic calculation is provided below).

Given dS_t = µ S_t dt + σ S_t dW_t, dividing both sides by S_t leads to dS_t/S_t = µ dt + σ dW_t. Now, integrate both sides from 0 to t:

∫_0^t dS_u/S_u = ∫_0^t (µ du + σ dW_u) = µt + σ W_t,  (37)


assuming W_0 = 0. The term dS_t/S_t looks like the differential of ln S_t, but S_t is an Ito process, which requires the use of Ito's calculus. Applying Ito's formula,

d(ln S_t) = dS_t/S_t − (1/2)(1/S_t²) σ² S_t² dt = dS_t/S_t − (1/2) σ² dt.

Therefore, we have

dS_t/S_t = d(ln S_t) + (1/2) σ² dt.  (38)

Substituting (38) in (37), we have

∫_0^t (d(ln S_u) + (1/2) σ² du) = µt + σ W_t.

Evaluating the integral gives

ln(S_t/S_0) + (1/2) σ² t = µt + σ W_t.

Moving (1/2) σ² t to the right-hand side gives

ln(S_t/S_0) = (µ − σ²/2) t + σ W_t.

Finally, we have

S_t = S_0 exp((µ − σ²/2) t + σ W_t).

This proves that (36) is the analytic solution of the stochastic differential equation (35).
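The closed form (36) gives a direct way to simulate geometric Brownian motion. The sketch below is ours (parameter values arbitrary); it samples S_t from (36) and checks the sample mean against S_0 e^{µt}, which follows from (36) together with the moment-generating function (11) applied with u = σ.

```python
import numpy as np

rng = np.random.default_rng(7)
S0, mu, sigma, t, paths = 100.0, 0.08, 0.25, 1.0, 500_000

# Sample S_t directly from the analytic solution (36)
W_t = rng.normal(0.0, np.sqrt(t), size=paths)
S_t = S0 * np.exp((mu - 0.5 * sigma ** 2) * t + sigma * W_t)

print(S_t.mean(), S0 * np.exp(mu * t))            # sample mean is close to S_0 * exp(mu * t)
```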

7 Conclusion

This thesis focused on explaining Brownian motion. First, to introduce its mathematical definition, the necessary background, such as stochastic processes, probability spaces and σ-algebras, scaled random walks, and the limiting distribution of the scaled random walks, was explained. Those building blocks help to clarify what concepts comprise Brownian motion and how Brownian motion is constructed. Then, a mathematical definition and the distribution of Brownian motion were stated. After introducing all the background and a formal definition of Brownian motion, properties of Brownian motion, such as its filtration, the martingale and Markov properties, and nowhere differentiability, were explained in order to understand its interesting characteristics. In the last part of this thesis, a concise history of Brownian motion in finance was given first. The applications of Brownian motion in finance primarily focused on Brownian motion in the Black-Scholes model. This part described how Brownian motion is incorporated in the Black-Scholes formula, together with the construction of Brownian motion as the limit of random walks. Finally, geometric Brownian motion was introduced to briefly illustrate stochastic differential equations, since such an equation contains Brownian motion as a constituent. Through those applications, we could see that Brownian motion plays an important role in constructing the Black-Scholes model.

References

[1] Tom Cosimano and Alex Himonas. Mathematical Methods in Finance and Economics. Lecture Notes, 2015.

[2] Paul G. Hoel, Sidney C. Port, Charles J. Stone. Introduction to Probability Theory. Houghton Mifflin Company, 1971.

[3] Paul G. Hoel, Sidney C. Port, Charles J. Stone. Introduction to Probability Theory. Houghton Mifflin Company, 1972.

[4] Jeffrey S. Rosenthal. A First Look at Rigorous Probability Theory. World Scientific Publishing Co., 2006.

[5] Steven E. Shreve. Stochastic Calculus for Finance II: Continuous-Time Models. Springer Finance, 2011.

[6] John B. Walsh. Knowing the Odds: An Introduction to Probability. American Mathematical Society, 2012.

[7] Wikipedia contributors. Geometric Brownian motion. Wikipedia, The Free Encyclopedia, 2016. https://en.wikipedia.org/wiki/Geometric_Brownian_motion
