Lecture 10 -- Part 1

    Modeling & Simulation

    Lecture 10

    Part 1

    Generating Random Variates

    Instructor:

    Eng. Ghada Al-Mashaqbeh
    The Hashemite University
    Computer Engineering Department

    Outline

    Introduction.

    Characteristics of random variate generation algorithms.

    General approaches for random variate generation:
        Inverse transform method.
        Composition.
        Convolution.
        Acceptance-rejection method.

    Generating Random Variates

    Say we have fitted an exponential distribution to the interarrival times of customers.

    Every time we anticipate a new customer arrival (place an arrival event on the event list), we need to generate a realization of the interarrival time.

    We know how to generate unit uniform random numbers.

    Can we use these to generate exponential (and other) variates?

    Generating Random Variates -- Algorithms

    Many algorithms exist in the literature to transform the generated random numbers into variates from a specified distribution.

    Which algorithm or approach to use depends on the distribution:
        Whether it is discrete or continuous.
        Whether the algorithm's requirements can be met.

    If more than one algorithm can be used, choose based on:
        Accuracy: whether the algorithm produces exact variates from the distribution (i.e., with high accuracy) or only an approximation.
        Efficiency: in terms of needed storage and execution time.
        Complexity: you must find a trade-off between complexity and performance (i.e., accuracy).
        Some specific technical issues: some algorithms need RNGs other than U(0,1), so pay attention.

    Two Types of Approaches

    First we will explore the general approaches for random variate generation, which can be applied to continuous, discrete, and mixed distributions.

    Two types:
        Direct: obtain an analytical expression.
            Inverse transform: requires the inverse of the distribution function.
            Composition & convolution: for special forms of distribution functions.
        Indirect:
            Acceptance-rejection method.

    Inverse-Transform Method

    Will be examined for three cases:
        Continuous distributions.
        Discrete distributions.
        Mixed distributions (combinations of continuous and discrete).

    The main requirement is:
        Continuous case: the underlying distribution's cdf F(x) must have an inverse, either in closed form or computable numerically with a good accuracy level.
        Discrete case: always applies, no restrictions.

    Inverse Transform Continuous Case I

    Conditions:
        F(x) must be continuous and strictly increasing (i.e., F(x1) < F(x2) if x1 < x2).
        F(x) must have an inverse F^-1(x).

    Algorithm:

    1. Generate U ~ U(0,1).
    2. Return X = F^-1(U).
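
    A minimal Python sketch of this algorithm (illustrative, not from the slides), assuming the inverse cdf is supplied as a function; the exponential case is shown as a usage example:

```python
import math
import random

def inverse_transform(F_inv):
    """Continuous inverse-transform method:
    1. Generate U ~ U(0,1).
    2. Return X = F_inv(U), where F_inv is the inverse cdf F^-1.
    """
    u = random.random()
    return F_inv(u)

# Usage: an exponential variate with rate lam (mean 1/lam);
# solving 1 - exp(-lam*x) = u gives x = -ln(1 - u) / lam.
lam = 2.0
x = inverse_transform(lambda u: -math.log(1.0 - u) / lam)
```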

    Inverse Transform Continuous Case II

    Proof that the returned X has the desired distribution F(x).
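
    In outline (the standard argument): since F is strictly increasing and U ~ U(0,1),

        P(X <= x) = P(F^-1(U) <= x) = P(U <= F(x)) = F(x),

    so the returned X indeed has cdf F(x).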

    Example: Weibull Distribution I

    Example: Weibull Distribution II

    -- Density function of the Weibull distribution.

    -- F(x) is shown on the next slide.

    Example: Weibull Distribution II

    Example: Weibull Distribution III
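
    The Weibull formulas above appear as figures; as a sketch of the resulting generator, assuming the common parametrization F(x) = 1 - exp(-(x/alpha)^beta) for x >= 0 with scale alpha and shape beta (the lecture's own symbols may differ):

```python
import math
import random

def weibull_variate(alpha, beta):
    """Inverse-transform Weibull generator (scale alpha, shape beta).

    Solving 1 - exp(-(x/alpha)**beta) = U for x gives
    X = alpha * (-ln(1 - U)) ** (1/beta).
    """
    u = random.random()
    return alpha * (-math.log(1.0 - u)) ** (1.0 / beta)
```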

    Inverse Transform Discrete Case I

    Here you have discrete values xi, a pmf p(xi), and a cdf F(x).

    Algorithm:
    1. Generate U ~ U(0,1).
    2. Return the smallest xi such that U <= F(xi).

    So, the above algorithm returns xi if and only if F(xi-1) < U <= F(xi).

    Proof: P(X = xi) = P(F(xi-1) < U <= F(xi)) = F(xi) - F(xi-1) = p(xi), since U is uniform on (0,1).

    Finding the right xi requires a search technique; several orderings can be used, e.g., start with the largest p(xi).

    Inverse Transform Discrete Case II

    -- Unlike the continuous case, the discrete inverse transform can be applied to any discrete distribution, but it may not be the most efficient method.
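
    A minimal sketch of the discrete search (illustrative names; assumes the values and their probabilities are given as parallel lists):

```python
import random

def discrete_inverse_transform(values, probs):
    """Return values[i] such that F(x(i-1)) < U <= F(xi).

    Linear search over the cumulative probabilities; ordering the values
    by decreasing probability first reduces the expected number of
    comparisons, as suggested on the earlier slide.
    """
    u = random.random()
    cumulative = 0.0
    for x, p in zip(values, probs):
        cumulative += p
        if u <= cumulative:
            return x
    return values[-1]   # guard against floating-point round-off
```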

    Inverse Transform Discrete Case -- Example

    Inverse Transform: Generalization

    An algorithm that can be used for both continuous and discrete distributions is:

    1. Generate U ~ U(0,1).
    2. Return X = min{x : F(x) >= U}.

    The above algorithm can also be used with mixed distributions whose F(x) has discontinuities. Have a look at the example shown in Figure 8.5 in the textbook (p. 430).
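
    When F has no closed-form inverse, min{x : F(x) >= U} can be located numerically. A sketch using bisection (the bracketing bounds lo, hi and the tolerance tol are illustrative assumptions, not from the slides):

```python
import random

def generalized_inverse(F, lo, hi, tol=1e-9):
    """Approximate min{x : F(x) >= U} for a nondecreasing cdf F,
    assuming F(lo) < U <= F(hi)."""
    u = random.random()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) >= u:
            hi = mid          # mid already satisfies F(x) >= U
        else:
            lo = mid
    return hi
```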

    Inverse Transform More!

    Disadvantages:
        Must evaluate the inverse of the distribution function:
            It may not exist in closed form.
            Could still use numerical methods.
        May not be the fastest way.

    Advantages:
        Needs only one random number to generate a random variate value.
        Ease of generating truncated distributions (redefine F(x) on a smaller finite interval of x).

    Composition

    It is based on having a weighted sum of distributions (called a convex combination).

    That is, a distribution whose cdf has the form (or can be decomposed as):

        F(x) = p1*F1(x) + p2*F2(x) + ...,   where each pj >= 0 and p1 + p2 + ... = 1.

    Most of the time in such cases it is very difficult to find the inverse of the distribution (i.e., you cannot use the inverse transform method).

    Each Fj in the above formulation is a distribution function, and the Fj's can be different from one another.

    You can also have a composed pdf f(x) written the same way as F(x), but remember we are working with F(x).

    So, the trick is to find Fj's that are easy to use in variate generation; you can apply geometry to decompose F(x).

    Composition Algorithm

    The composition method is performed in two steps:
        First: select one of the composing cdfs Fj to work with.
        Second: use the inverse transform to generate the variate from this cdf.

    The first step chooses Fj with probability pj. This selection can be done using the discrete inverse transform method, where P(selecting Fj) = pj.

    Algorithm:
    1. Generate a positive random integer J such that P(J = j) = pj.
    2. Return X with distribution FJ.

    The second step can also use the inverse transform method (or any other method you want).

    In practice, U1 is used to select Fj and U2 is used to obtain X from the selected Fj.

    As you see, you must generate at least two random numbers, U1 and U2, to obtain X using the composition method.
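
    A minimal sketch of the two-step composition method (illustrative names; assumes each component Fj is described by its weight pj and its inverse cdf):

```python
import random

def composition_variate(weights, inverses):
    """Composition method.

    U1 selects component j with probability weights[j] (discrete inverse
    transform); U2 is fed to that component's inverse cdf to produce X.
    """
    u1, u2 = random.random(), random.random()
    cumulative = 0.0
    for p, F_inv in zip(weights, inverses):
        cumulative += p
        if u1 <= cumulative:
            return F_inv(u2)
    return inverses[-1](u2)   # guard against round-off in the weights
```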

    How to decompose a complex F(x)?

    Using geometry to decompose F(x) includes two approaches:

        Either divide F(x) vertically, so you split the interval of x over which the whole F(x) is defined.
            Here you do not alter the Fj's at all over each interval.
            See example 1 on the next slides.

        Or divide F(x) horizontally, keeping the same x interval for all Fj's, which is the same as that of the original F(x).
            See example 2 on the next slides.

    Your next step is to define the pj associated with each Fj.
        Hint: always treat pj as the portion of the total area under F(x) that lies under Fj, but you must compensate for it (rescale each piece so that it is a proper distribution function).

    Composition Example 1

    We will solve it using two methods:

    -- inverse transform.

    -- composition.

    Composition Example 1 cont.

    Composition Example 1 cont.

    Composition Example 2

    Try to solve it using the inverse transform method and see how it is easier to solve it using composition.

    Composition Example 2 cont.

    Convolution

    Here X can be expressed as a sum X = Y1 + Y2 + ... + Ym of random variables Yj whose distributions are easy to generate from: generate each Yj and return their sum.

    So, you need one random number Uj for each Yj, which is then transformed into that Yj.

    Do not get confused between convolution and composition:
        In composition: you express the cdf of X as a weighted sum of other distribution functions.
        In convolution: you express X itself as a sum of other random variables.
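
    A minimal sketch (illustrative names): a generic convolution generator, with the m-Erlang distribution (the sum of m independent exponentials) as a typical usage example; this particular example is an assumption, not necessarily the one worked on the next slide:

```python
import math
import random

def convolution_variate(m, generate_y):
    """Convolution method: X = Y1 + ... + Ym, drawing a fresh Yj each time."""
    return sum(generate_y() for _ in range(m))

# Usage: an m-Erlang variate with overall mean `mean`, i.e. the sum of
# m independent exponentials each with mean mean/m (inverse transform).
def erlang_variate(m, mean):
    return convolution_variate(
        m, lambda: -(mean / m) * math.log(1.0 - random.random()))
```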

    Convolution -- Example

    Acceptance-Rejection Method I

    Specify a function t(x) that majorizes the density f(x):

        t(x) >= f(x) for all x.

    Now work with the new density function r(x):

        r(x) = t(x) / c,   where c = integral of t(x) dx (the total area under t).

    Algorithm:
    1. Generate Y with density r.
    2. Generate U ~ U(0,1), independent of Y.
    3. If U <= f(Y) / t(Y), return X = Y. Otherwise, go back to Step 1.
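
    A minimal sketch of this algorithm (illustrative names; f, t, and a sampler for r(x) are assumed to be supplied by the caller):

```python
import random

def acceptance_rejection(f, t, sample_r):
    """Acceptance-rejection: loop until U <= f(Y)/t(Y), then return Y.

    f        -- target density
    t        -- majorizing function, t(x) >= f(x) for all x
    sample_r -- generates Y with density r(x) = t(x)/c
    """
    while True:
        y = sample_r()            # Step 1: Y ~ r
        u = random.random()       # Step 2: U ~ U(0,1), independent of Y
        if u <= f(y) / t(y):      # Step 3: accept or reject
            return y

# Illustrative use (not the slide's example): f(x) = 6x(1-x) on [0,1],
# majorized by the constant t(x) = 1.5, so r(x) is simply U(0,1).
x = acceptance_rejection(lambda x: 6*x*(1 - x), lambda x: 1.5, random.random)
```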

    Acceptance-Rejection Method II

    Note the following:
        f(x) and r(x) are density functions, since they integrate to 1 over their whole interval.
        t(x) is not a density function.

    The acceptance-rejection method is an indirect method, since it works on f(x) indirectly through both r(x) and t(x).

    Step 1 in the previous algorithm uses one of the direct methods for random variate generation that we have learned before.

    Acceptance-Rejection Method III

    The probability of accepting Y in Step 3 is 1/c.

    So, as c decreases (i.e., the area under t(x) gets smaller), the expected number of iterations of this algorithm is reduced.

    A smaller c means that t(x) is very close to f(x) (has a very similar shape).

    So, your task is to find a t(x) that resembles f(x) closely, but at the same time is not too complex to generate Y from (i.e., from r(x)).

    Acceptance-Rejection -- Example

    Acceptance-Rejection Example cont.

    Exercise: Generate 3 random variates for this example using any LCG you want.

    Acceptance-Rejection Example -- Better t(x)

    -- See the complete solution of the example in your textbook.

    -- Now, how do we generate Y from r(x)? We need composition, then inverse transform.

    -- So, three methods are involved here.

    Additional Notes

    The lecture covers the following sections from the textbook:

    Chapter 8

    Sections:

    8.1,

    8.2 (8.2.1 - 8.2.4)