Probability Theory Presentation 08


BST 401 Probability Theory

Xing Qiu, Ha Youn Lee

Department of Biostatistics and Computational Biology, University of Rochester

September 30, 2009

Outline

1 Basic Properties of Integrals
2 Useful Inequalities
3 Convergence Theorems

Review

Random variables and Borel-measurable functions.

Simple functions.

Two ingredients make a Lebesgue-Stieltjes integral (for simple functions): the simple function being integrated and the measure it is integrated against.

An example of changing either one of them: in one case the measure is the Lebesgue measure, in the other a probability measure.

You can say that the Lebesgue integral w.r.t. a measure μ is just a weighted Riemann integral/summation.
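
To make the "weighted summation" picture concrete, here is a minimal Python sketch (not part of the original slides) that integrates a simple function against two different measures on a finite sample space: counting weights versus probability weights. The sample space, the function values, and both measures are made up for illustration.

```python
# Integral of a simple function f = sum_i a_i * 1_{A_i} w.r.t. a measure mu given
# by point masses on a finite sample space.  All values below are illustrative.

def integrate_simple(values, measure):
    """Lebesgue-Stieltjes integral of a simple function: sum of value * mu({atom})."""
    return sum(values[atom] * measure[atom] for atom in values)

omega = ["w1", "w2", "w3", "w4"]                      # hypothetical sample space
f = {"w1": 2.0, "w2": 2.0, "w3": 5.0, "w4": 0.0}      # a simple function (finitely many values)

counting = {atom: 1.0 for atom in omega}              # counting measure: every atom has mass 1
prob = {"w1": 0.1, "w2": 0.2, "w3": 0.3, "w4": 0.4}   # a probability measure (masses sum to 1)

print(integrate_simple(f, counting))   # 9.0 -- plain summation of the values
print(integrate_simple(f, prob))       # 2.1 -- probability-weighted summation, i.e. E(f)
```

Changing the measure changes the weights, which is all that distinguishes the two integrals above.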

Linearity

Assume h, g are two functions, both measurable w.r.t. the same measure μ.

$l(\omega) = c_1 h(\omega) + c_2 g(\omega)$ is called a linear combination of h and g ($c_1$, $c_2$ are two constants).

$\int (c_1 h + c_2 g)\,d\mu = c_1 \int h\,d\mu + c_2 \int g\,d\mu$.
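
A quick numerical sanity check of linearity via Monte Carlo integration w.r.t. a probability measure (a sketch with an arbitrary distribution, functions h, g, and constants chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)             # samples from a probability measure (standard normal)

h = np.sin(x)                            # two arbitrary measurable functions of x
g = x ** 2
c1, c2 = 3.0, -0.5                       # arbitrary constants

lhs = np.mean(c1 * h + c2 * g)           # sample mean ~ integral of the linear combination
rhs = c1 * np.mean(h) + c2 * np.mean(g)  # linear combination of the integrals
print(lhs, rhs)                          # agree up to Monte Carlo error
```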

Other properties

Theorem (4.7), page 458.

How to prove? First prove these properties are true for simple functions, then take limits to generalize them to measurable functions.

All these equalities/inequalities can be replaced by their almost-everywhere counterparts.

Jensen's inequality

A function $\varphi(x)$ is convex if for all $\lambda \in (0,1)$ and $x, y \in \mathbb{R}$,
$\lambda \varphi(x) + (1-\lambda)\varphi(y) \ge \varphi(\lambda x + (1-\lambda)y)$.

Show students a graph and explain why this definition is more general than a simpler definition via the second derivative.

Jensen's inequality. Denote by $X = X(\omega)$ a random variable defined on a probability space $(\Omega, \mathcal{F}, P)$ and let $E(X) := \int_\Omega X(\omega)\,dP(\omega)$, which is called the mathematical expectation of X. If X is integrable ($E|X| < \infty$), then
$\varphi(E(X)) \le E(\varphi(X))$.

If $\varphi$ is concave, then we have the opposite inequality.
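
A small Monte Carlo illustration (a sketch, not from the slides): for the convex function $\varphi(x) = x^2$ and an arbitrarily chosen random variable X, the sample estimates satisfy $\varphi(E X) \le E\,\varphi(X)$.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=200_000)   # X ~ Exponential with mean 2 (illustrative choice)

def phi(t):
    return t ** 2                               # a convex function

lhs = phi(np.mean(x))                           # phi(E X): roughly 2**2 = 4
rhs = np.mean(phi(x))                           # E phi(X) = E X**2: roughly 8 for this distribution
print(lhs, rhs, lhs <= rhs)                     # Jensen: phi(E X) <= E phi(X)
```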

Heuristics of Jensen's inequality

A graphical proof based on $\varphi(x) = x^2$, $\Omega = [a, b]$.

A convex transformation accentuates the extreme values of X, and a concave transformation attenuates these extreme values.

Modern microeconomics depends on several assumptions, one of which is the famous law of diminishing marginal returns of virtually everything. One example: you may choose from two investment portfolios, one more aggressive (high return, high risk) and the other more conservative (low risk, low return).

In this context, Jensen's inequality is the foundation of the price theory of the insurance industry and the financial market.

Hölder's inequality

We use $\|f\|_p$, $p \in (0, \infty)$, to denote $\left(\int |f(x)|^p\,d\mu\right)^{1/p}$, the $L^p$ norm of f w.r.t. the measure μ.

Later we will learn that this norm can be considered as the "length" of a measurable function f.

For $p, q \in (1, \infty)$ with $\frac{1}{p} + \frac{1}{q} = 1$, we have
$\int |fg|\,d\mu \le \|f\|_p \|g\|_q$.

A special case: p = q = 2. It is called the Cauchy-Schwarz inequality.

Hölder's inequality is a very important inequality in many different branches of mathematics. As an example, in probability theory it can be used to show that finite variance must imply finite expectation.
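
A quick numerical check (a sketch: arbitrary made-up values viewed as functions on a four-point space with counting measure, and one admissible pair of conjugate exponents):

```python
import numpy as np

f = np.array([1.0, -2.0, 3.0, 0.5])      # arbitrary illustrative "functions"
g = np.array([0.3, 1.5, -1.0, 2.0])

p, q = 3.0, 1.5                           # conjugate exponents: 1/p + 1/q = 1
assert abs(1.0 / p + 1.0 / q - 1.0) < 1e-12

lhs = np.sum(np.abs(f * g))                                                   # integral of |fg|
rhs = np.sum(np.abs(f) ** p) ** (1 / p) * np.sum(np.abs(g) ** q) ** (1 / q)   # ||f||_p * ||g||_q
print(lhs, rhs, lhs <= rhs)               # Hoelder's inequality; p = q = 2 gives Cauchy-Schwarz
```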

Minkowski's inequality

See the book. Left as homework.

Why are inequalities important?

Sometimes an equality is impossible to derive, so an inequality estimate is the next best thing.

Inequalities lay the foundation of function spaces, such as the $L^p$ spaces, which will be introduced later.

All the different types of convergence of random variables are described by inequalities (in their definitions).

Bounded convergence theorem

$E_n := \{x : f_n(x) \neq 0\}$, $E = \bigcup_n E_n$, $\mu(E) < \infty$ (bounded measure).

$|f_n(x)| \le M < \infty$ (uniformly bounded range).

Then we have $\lim_n \int f_n\,d\mu = \int \lim_n f_n\,d\mu$.
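
A numerical sketch of the theorem in action (an illustrative example, not from the slides): take $f_n(x) = x^n$ on [0, 1] with Lebesgue measure. Each $|f_n| \le 1$, every $f_n$ vanishes outside the finite-measure set [0, 1], and $f_n \to 0$ almost everywhere, so the integrals should tend to 0.

```python
import numpy as np

dx = 1e-5
x = np.arange(0.0, 1.0, dx)                # grid on [0, 1]

for n in (1, 5, 20, 100):
    riemann_sum = np.sum(x ** n) * dx      # numerical stand-in for the integral of f_n
    print(n, riemann_sum)                  # ~ 1/(n+1), tending to 0 = integral of the limit
```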

Monotone convergence theorem

Motivation: we already have monotone convergence for numbers, for sets, and for measurable functions. This is just another version of the same trick, for integrals.

$f_1, f_2, \ldots$ is a monotone sequence of nonnegative measurable functions with $f_n \uparrow f$ pointwise. Then $\int_B f_n\,d\mu \to \int_B f\,d\mu$ for all $B \in \mathcal{B}$; in particular, $\int f_n\,d\mu \to \int f\,d\mu$.

Remember the definition of the integral via a monotone sequence of simple functions? This theorem says the integral of the limit of measurable functions, not necessarily just simple functions, is the limit of the integrals.
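
A numerical sketch (illustrative, not from the slides): $f(x) = 1/\sqrt{x}$ on (0, 1] and the truncations $f_n = \min(f, n)$ form a nondecreasing sequence of nonnegative functions with $f_n \uparrow f$; the integrals should climb toward $\int_0^1 x^{-1/2}\,dx = 2$.

```python
import numpy as np

dx = 1e-6
x = np.arange(dx, 1.0, dx)                  # grid on (0, 1], avoiding x = 0
f = 1.0 / np.sqrt(x)

for n in (1, 10, 100, 1000):
    f_n = np.minimum(f, n)                  # truncated (hence bounded) approximation of f
    print(n, np.sum(f_n) * dx)              # increases toward 2, the integral of f
```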

Corollaries

$\int \left( \sum_{n=1}^{\infty} f_n \right) d\mu = \sum_{n=1}^{\infty} \int f_n\,d\mu$ holds for nonnegative $f_n$.

Can we exchange limit/integral in general? The answer is no.
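
A small worked check of the term-by-term identity (an illustrative choice, not from the slides): take $f_n(x) = x^n / n!$ on [0, 1], so $\sum_{n \ge 1} f_n(x) = e^x - 1$ and each term integrates to $1/(n+1)!$; both sides equal $e - 2$.

```python
import math

lhs = math.e - 2.0                                             # integral over [0, 1] of (exp(x) - 1)
rhs = sum(1.0 / math.factorial(n + 1) for n in range(1, 40))   # sum of the term-by-term integrals
print(lhs, rhs)                                                # both ~ 0.71828...
```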

Counter examples

A moving block: $A_n = (n, n+1)$, $f_n = 1_{A_n}$.

With Lebesgue measure, this sequence of integrals escapes to x-infinity.

With a probability measure, the above example won't be a counterexample. Why?

A sequence which leads to the Dirac delta function (escapes to y-infinity).

The above observation implies that for a probability measure μ, the only way to break the interchangeability of limit/integral is to escape to y-infinity.
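
Both escapes can be seen numerically (a sketch with illustrative grids, not from the slides): the moving block keeps integral 1 while sliding off to the right, and a Dirac-type spike keeps integral 1 while blowing up near 0; in each case the pointwise limit is 0, so the limit of the integrals (1) is not the integral of the limit (0).

```python
import numpy as np

# Escape to x-infinity: f_n = indicator of (n, n+1) under (approximate) Lebesgue measure.
dx = 1e-3
x = np.arange(0.0, 200.0, dx)                  # finite window standing in for the real line
for n in (0, 10, 50, 150):
    f_n = ((x > n) & (x < n + 1)).astype(float)
    print(n, np.sum(f_n) * dx)                 # ~ 1 for every n, yet f_n -> 0 pointwise

# Escape to y-infinity: g_n = n * indicator of (0, 1/n) on the probability space ([0, 1], uniform).
dy = 1e-6
y = np.arange(0.0, 1.0, dy)
for n in (10, 100, 1000):
    g_n = n * ((y > 0) & (y < 1.0 / n)).astype(float)
    print(n, np.sum(g_n) * dy)                 # ~ 1 for every n, yet g_n -> 0 a.e.
```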

Dominated convergence theorem

This is perhaps the single most useful convergence theorem. The two counterexamples prompt an idea: find a μ-integrable function g (i.e., $\int |g|\,d\mu$ is finite) that dominates the sequence $f_n$.

$f_1, f_2, \ldots$, f, g are all measurable functions.

$|f_n| \le g$ for all n (in other words, g dominates $|f_n|$).

g is a μ-integrable function.

Conclusion: $f_n \to f$ implies $\int f_n\,d\mu \to \int f\,d\mu$; that is, you can swap the limit and the integral.

The monotone convergence theorem is just a special case of this theorem.
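
A numerical sketch of the theorem (illustrative, not from the slides): $f_n(x) = \cos(x/n)\,e^{-x}$ on $[0, \infty)$ converges pointwise to $f(x) = e^{-x}$ and is dominated by the integrable function $g(x) = e^{-x}$, so the integrals should converge to $\int_0^\infty e^{-x}\,dx = 1$.

```python
import numpy as np

dx = 1e-4
x = np.arange(0.0, 50.0, dx)               # [0, 50] approximates [0, inf); exp(-x) decays fast
g = np.exp(-x)                             # dominating, integrable function

for n in (1, 5, 50, 500):
    f_n = np.cos(x / n) * g                # |f_n| <= g and f_n -> g pointwise
    print(n, np.sum(f_n) * dx)             # tends to ~ 1 = integral of the limit
```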

Corollary

This corollary is the foundation of $L^p$ convergence.

Conditions just as above, plus: $|g|^p$ is μ-integrable (p > 0 is a fixed constant).

Then: (a) $|f|^p$ is integrable; (b) $\int |f_n - f|^p\,d\mu \to 0$.

In practice, the most popular choices of p are 1 and 2.
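
Continuing the sketch above for p = 2 (illustrative, not from the slides): with the same $f_n(x) = \cos(x/n)\,e^{-x}$, limit $f(x) = e^{-x}$, and dominating $g(x) = e^{-x}$ (whose square is integrable), the quantity $\int |f_n - f|^2\,d\mu$ should shrink to 0.

```python
import numpy as np

dx = 1e-4
x = np.arange(0.0, 50.0, dx)
f = np.exp(-x)                                   # the limit function (also the dominating g)

for n in (1, 5, 50, 500):
    f_n = np.cos(x / n) * f
    print(n, np.sum(np.abs(f_n - f) ** 2) * dx)  # L^2 distance squared, decreasing toward 0
```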