
# Hastings paper discussion


DESCRIPTION

Talk given by Donia Skanji at the "Reading classics seminar" at Paris-Dauphine.

### Text of Hastings paper discussion

#### Title

**Monte Carlo Sampling Methods Using Markov Chains and Their Applications** (Hastings, University of Toronto)

Reading seminar on classics: C. P. Robert. Presented by Donia Skanji, December 3, 2012.

#### Outline

1. Introduction
2. Monte Carlo Principle
3. Markov Chain Theory
4. MCMC
5. Conclusion

#### Introduction

Several numerical problems, such as integral computation and maximum evaluation, arise in large-dimensional spaces. Monte Carlo methods are often applied to solve such integration and optimisation problems. Markov chain Monte Carlo (MCMC) is one of the best-known Monte Carlo methods; MCMC methods form a large class of sampling algorithms that have had a great influence on the development of science.

#### Study objective

- To expose some relevant theory and techniques of application related to MCMC methods.
- To present a generalization of the Metropolis sampling method.

Next steps: to introduce Markov chains, MCMC methods, and MCMC algorithms.
#### Monte Carlo Principle

The idea of Monte Carlo simulation is to draw an i.i.d. set of samples $\{x^{(i)}\}_{i=1}^{N}$ from a target density $\pi$. These $N$ samples can be used to approximate the target density with the following empirical point-mass function:

$$\hat{\pi}_N(x) = \frac{1}{N}\sum_{i=1}^{N} \delta_{x^{(i)}}(x).$$

For independent samples, by the Law of Large Numbers, one can approximate the integral $I(f)$ with the tractable sum $I_N(f)$, which converges almost surely:

$$I_N(f) = \frac{1}{N}\sum_{i=1}^{N} f(x^{(i)}) \longrightarrow I(f) = \int f(x)\,\pi(x)\,dx.$$

But independent sampling from $\pi$ may be difficult, especially in a high-dimensional space. It turns out that

$$\frac{1}{N}\sum_{i=1}^{N} f(x^{(i)}) \longrightarrow \int f(x)\,\pi(x)\,dx \quad (N \to \infty)$$

still applies if we generate the samples using a Markov chain (dependent samples). The idea of MCMC is to use Markov chain convergence properties to overcome the dimensionality problems met by regular Monte Carlo methods. But first, some revision of Markov chains on a discrete state space.
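The empirical-average approximation above can be sketched in a few lines of Python. This is a hypothetical example (the target $\pi$, the choice $f(x) = x^2$, and the function names are illustrative, not from the paper): with $\pi$ a standard normal and $f(x) = x^2$, the true value is $I(f) = E[X^2] = 1$.

```python
import random

def monte_carlo_estimate(f, sampler, n):
    """Approximate I(f) = integral of f(x) pi(x) dx by the empirical
    average I_N(f) = (1/N) * sum_i f(x_i), with x_i drawn i.i.d. from pi."""
    return sum(f(sampler()) for _ in range(n)) / n

random.seed(0)
# Target pi: standard normal; f(x) = x^2, so I(f) = E[X^2] = 1.
estimate = monte_carlo_estimate(lambda x: x * x,
                                lambda: random.gauss(0.0, 1.0),
                                100_000)
```

By the Law of Large Numbers the estimate concentrates around 1 as $N$ grows, and the error shrinks like $O(1/\sqrt{N})$ regardless of the dimension of the space, which is the appeal of the method.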
#### Markov Chain Theory

**Definition (finite Markov chain).** A Markov chain is a mathematical system that undergoes transitions from one state to another, among a finite or countable number of possible states. It is a random process usually characterized as memoryless:

$$P(X^{(t+1)} \mid X^{(0)}, X^{(1)}, \dots, X^{(t)}) = P(X^{(t+1)} \mid X^{(t)}).$$

**Transition matrix.** Let $P = \{P_{ij}\}$ be the transition matrix of a Markov chain with states $0, 1, \dots, S$. If $X^{(t)}$ denotes the state occupied by the process at time $t$, we have

$$\Pr(X^{(t+1)} = j \mid X^{(t)} = i) = P_{ij}, \qquad \pi^{(t+1)} = \pi^{(t)} P,$$

where $\pi^{(t)}$ is the row vector of state probabilities at time $t$.

**Stationarity.** As $t \to \infty$, the Markov chain converges to its stationary (invariant) distribution: $\pi = \pi P$.

**Irreducibility.** Irreducible means that any state can be reached from any other state in a finite number of moves (for every $i$ and $j$, $p^{(n)}(i, j) > 0$ for some $n$).
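The stationarity property $\pi = \pi P$ can be checked numerically. The sketch below (using a hypothetical $3$-state transition matrix; the numbers are illustrative) iterates the update $\pi \leftarrow \pi P$ from a deterministic start until the distribution stops changing:

```python
def step(dist, P):
    """One Markov transition: pi^(t+1) = pi^(t) P (row vector times matrix)."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# A hypothetical irreducible, aperiodic chain on 3 states.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

dist = [1.0, 0.0, 0.0]   # start deterministically in state 0
for _ in range(200):     # iterate pi <- pi P until it stops changing
    dist = step(dist, P)
```

At the fixed point, one more application of `step` leaves `dist` unchanged, which is exactly the stationarity condition $\pi = \pi P$.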
#### MCMC

The idea of Markov chain Monte Carlo is to choose the transition matrix $P$ so that $\pi$ (the target density, which is very difficult to sample from) is its unique stationary distribution. Assume the Markov chain has stationary distribution $\pi(X)$ and is irreducible and aperiodic. Then we have an ergodic theorem:

**Theorem (Ergodic Theorem).** If the Markov chain $(x_t)$ is irreducible, aperiodic and stationary, then for any function $h$ with $E_\pi|h| < \infty$,

$$\frac{1}{N}\sum_{i=1}^{N} h(x_i) \longrightarrow \int h(x)\,\pi(x)\,dx \quad \text{as } N \to \infty.$$

**Summary.** Recall that our goal is to build a Markov chain $(X^t)$ using a transition matrix $P$ so that the limiting distribution of $(X^t)$ is the target density $\pi$, and integrals can be approximated using the ergodic theorem.

**Question.** How do we construct a Markov chain whose stationary distribution is the target distribution? Metropolis et al. (1953) showed how; the method was generalized by Hastings (1970).

#### Construction of the transition matrix

In order to construct a Markov chain with $\pi$ as its stationary distribution, we consider a transition matrix $P$ that satisfies the reversibility condition: for all $i$ and $j$,

$$\pi_i\, p_{ij} = \pi_j\, p_{ji}.$$

This property ensures that $\sum_i \pi_i\, p_{ij} = \pi_j$ (the definition of a stationary distribution), and hence that $\pi$ is a stationary distribution of $P$.
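As a numeric illustration of why reversibility implies stationarity, the sketch below uses a hypothetical birth-death chain on three states (the matrix entries are illustrative). Such a chain is reversible, so detailed balance $\pi_i\, p_{i,i+1} = \pi_{i+1}\, p_{i+1,i}$ determines $\pi$ up to normalization, and summing $\pi_i\, p_{ij}$ over $i$ then recovers $\pi_j$:

```python
# A hypothetical birth-death chain on states {0, 1, 2}; such chains are
# reversible, so detailed balance pi_i * p[i][i+1] = pi_{i+1} * p[i+1][i]
# pins down the stationary distribution.
P = [[0.7, 0.3, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.4, 0.6]]

# Solve detailed balance recursively: pi_{i+1} = pi_i * p[i][i+1] / p[i+1][i].
pi = [1.0]
pi.append(pi[0] * P[0][1] / P[1][0])
pi.append(pi[1] * P[1][2] / P[2][1])
total = sum(pi)
pi = [p / total for p in pi]   # normalize so the weights sum to 1
```

Once detailed balance holds pairwise, stationarity follows term by term: $\sum_i \pi_i\, p_{ij} = \sum_i \pi_j\, p_{ji} = \pi_j$.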
How do we choose the transition matrix $P$ so that the reversibility condition $\pi_i\, P_{ij} = \pi_j\, P_{ji}$ is verified?

Suppose that we have a proposal matrix, denoted $Q$, with $\sum_j q_{ij} = 1$. If it happens that $Q$ itself satisfies the reversibility condition $\pi_i\, q_{ij} = \pi_j\, q_{ji}$ for all $i$ and $j$, then our search is over; but most likely it will not. We might find, for example, that for some $i$ and $j$,

$$\pi_i\, q_{ij} > \pi_j\, q_{ji}.$$

A convenient way to correct this is to reduce the number of moves from $i$ to $j$ by introducing a probability $\alpha_{ij}$ that the move is made.

**The choice of the transition matrix.** We assume that the transition matrix $P$ has the form

$$P_{ij} = q_{ij}\,\alpha_{ij} \quad \text{if } i \neq j, \qquad P_{ii} = 1 - \sum_{j \neq i} P_{ij},$$

where $Q = \{q_{ij}\}$ is the proposal (or jumping) matrix of an arbitrary Markov chain on the states $0, 1, \dots, S$, which suggests a new sample value $j$ given a sample value $i$, and $\alpha_{ij}$ is the acceptance probability of the move from state $i$ to state $j$.

In order to obtain the reversibility condition, we have to verify

$$\pi_i\, p_{ij} = \pi_j\, p_{ji} \iff \pi_i\,\alpha_{ij}\, q_{ij} = \pi_j\,\alpha_{ji}\, q_{ji}. \quad (*)$$

The probabilities $\alpha_{ij}$ and $\alpha_{ji}$ are introduced to ensure that the two sides of $(*)$ are in balance. In his paper, Hastings defined a generic form of the acceptance probability:

$$\alpha_{ij} = \frac{s_{ij}}{1 + \dfrac{\pi_i\, q_{ij}}{\pi_j\, q_{ji}}},$$

where $s_{ij}$ is a symmetric function of $i$ and $j$ ($s_{ij} = s_{ji}$) chosen so that $0 \leq \alpha_{ij} \leq 1$ for all $i$ and $j$. With this form of $P_{ij}$ and $\alpha_{ij}$ suggested by Hastings, the reversibility condition is readily verified.
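One standard choice of $s_{ij}$ turns Hastings' generic form into the familiar rule $\alpha_{ij} = \min\!\left(1, \dfrac{\pi_j\, q_{ji}}{\pi_i\, q_{ij}}\right)$, the Metropolis-Hastings acceptance probability. The sketch below implements this special case on a hypothetical 3-state target (the target weights, the proposal matrix, and the function name are all illustrative assumptions, not taken from the paper):

```python
import random

def metropolis_hastings_step(i, pi, Q):
    """One move of the chain on a finite state space: propose j ~ Q[i],
    accept with alpha_ij = min(1, pi_j q_ji / (pi_i q_ij)); otherwise stay."""
    states = list(range(len(pi)))
    j = random.choices(states, weights=Q[i])[0]
    alpha = min(1.0, (pi[j] * Q[j][i]) / (pi[i] * Q[i][j]))
    return j if random.random() < alpha else i

# Hypothetical target on 3 states and a strictly positive proposal matrix.
pi = [0.2, 0.5, 0.3]
Q = [[0.4, 0.3, 0.3],
     [0.3, 0.4, 0.3],
     [0.3, 0.3, 0.4]]

random.seed(1)
n = 100_000
counts = [0, 0, 0]
state = 0
for _ in range(n):
    state = metropolis_hastings_step(state, pi, Q)
    counts[state] += 1
freq = [c / n for c in counts]   # empirical visit frequencies
```

By the ergodic theorem, the visit frequencies converge to the target weights `pi`, even though the samples are dependent. Note that `pi` only enters through the ratio $\pi_j/\pi_i$, so the target never needs to be normalized.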
#### The choice of the acceptance probability

Recall that in this paper, Hastings defined the acceptance probability …
