Page 1

Monte Carlo Integration I

Digital Image Synthesis
Yung-Yu Chuang

with slides by Pat Hanrahan and Torsten Moller

Page 2

Introduction

• The integral equations generally don’t have analytic solutions, so we must turn to numerical methods.

• Standard methods like Trapezoidal integration or Gaussian quadrature are not effective for high-dimensional and discontinuous integrals.

For example, the rendering equation:
$$L_o(\mathrm{p},\omega_o)=L_e(\mathrm{p},\omega_o)+\int_{S^2} f(\mathrm{p},\omega_o,\omega_i)\,L_i(\mathrm{p},\omega_i)\,|\cos\theta_i|\,d\omega_i$$

Page 3

Numerical quadrature

• Suppose we want to calculate $I=\int_a^b f(x)\,dx$, but we can't solve it analytically. Approximations through quadrature rules have the form
$$\hat I=\sum_{i=1}^{n} w_i\,f(x_i),$$
which is essentially a weighted sum of samples of the function at various points.

Page 4

Midpoint rule

(figure: piecewise-constant approximation of f)
convergence: $O(n^{-2})$ for smooth integrands

Page 5

Trapezoid rule

(figure: piecewise-linear approximation of f)
convergence: $O(n^{-2})$ for smooth integrands

Page 6

Simpson’s rule

• Similar to the trapezoid rule, but using a quadratic polynomial approximation.

convergence: $O(n^{-4})$, assuming f has a continuous fourth derivative.

Page 7

Curse of dimensionality and discontinuity

• For an s-dimensional function f: if the 1D rule has a convergence rate of $O(n^{-r})$, the s-dimensional rule requires a much larger number ($n^s$) of samples to work as well as the 1D one. Thus, the convergence rate is only $O(n^{-r/s})$.

• If f is discontinuous, convergence is $O(n^{-1/s})$ in s dimensions.

Page 8

Randomized algorithms

• Las Vegas vs. Monte Carlo
• Las Vegas: always gives the right answer by using randomness.
• Monte Carlo: gives the right answer on average. Results depend on the random numbers used, but are statistically likely to be close to the right answer.

Page 9

Monte Carlo integration

• Monte Carlo integration uses sampling to estimate the values of integrals. It only requires the ability to evaluate the integrand at arbitrary points, making it easy to implement and applicable to many problems.

• If n samples are used, it converges at the rate of $O(n^{-1/2})$. That is, to cut the error in half, it is necessary to evaluate four times as many samples.

• Images by Monte Carlo methods are often noisy. Most current methods try to reduce noise.

Page 10

Monte Carlo methods

• Advantages
– Easy to implement
– Easy to think about (but be careful of statistical bias)
– Robust when used with complex integrands and domains (shapes, lights, …)
– Efficient for high-dimensional integrals
• Disadvantages
– Noisy
– Slow (many samples needed for convergence)

Page 11

Basic concepts

• X is a random variable
• Applying a function to a random variable gives another random variable, Y=f(X).
• CDF (cumulative distribution function): $P(x)=\Pr\{X\le x\}$
• PDF (probability density function): nonnegative and integrates to 1,
$$p(x)=\frac{dP(x)}{dx}$$
• Canonical uniform random variable ξ (provided by the standard library and easy to transform to other distributions)

Page 12

Discrete probability distributions

• Discrete events $X_i$ with probability $p_i$: $p_i\ge 0$ and $\sum_{i=1}^{n}p_i=1$

• Cumulative PDF (distribution): $P_j=\sum_{i=1}^{j}p_i$

• Construction of samples: to randomly select an event, draw a uniform random variable $U\in[0,1)$ and select $X_i$ if $P_{i-1}<U\le P_i$

(figure: the unit interval partitioned at the $P_i$; a uniform U falling in the third segment selects $X_3$)
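For illustration, a minimal C++ sketch of this discrete CDF walk (the function name and the use of std::vector are ours, not from the slides):

#include <vector>

// Select index i with probability p[i]: accumulate the CDF and return the
// first i whose cumulative probability P_i reaches the uniform sample U.
int SampleDiscrete(const std::vector<float> &p, float U) {
    float cdf = 0.f;
    for (size_t i = 0; i < p.size(); ++i) {
        cdf += p[i];               // cdf now equals P_i
        if (U <= cdf) return (int)i;
    }
    return (int)p.size() - 1;      // guard against floating-point round-off
}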

Page 13

Continuous probability distributions

• PDF: $p(x)\ge 0$; for example, the uniform distribution on [0,1] has $p(x)=1$ for $x\in[0,1]$ and 0 otherwise.
$$\Pr(\alpha\le X\le\beta)=\int_\alpha^\beta p(x)\,dx=P(\beta)-P(\alpha)$$

• CDF:
$$P(x)=\int_0^x p(x')\,dx'=\Pr(X\le x)$$
For the uniform distribution, $P(x)=x$ and $P(1)=1$.

Page 14

Expected values

• The average value of a function f(x) over some distribution of values p(x) over its domain D:
$$E_p[f(x)]=\int_D f(x)\,p(x)\,dx$$

• Example: the cos function over [0,π], with p uniform:
$$E_p[\cos x]=\int_0^\pi\cos x\cdot\frac{1}{\pi}\,dx=\frac{1}{\pi}\big[\sin x\big]_0^\pi=0$$

Page 15

Variance

• The expected deviation from the expected value; a fundamental concept for quantifying the error in Monte Carlo methods:
$$V[f(x)]=E\big[(f(x)-E[f(x)])^2\big]$$

Page 16

Properties

$$E[af(x)]=aE[f(x)]$$
$$E\Big[\sum_i f(X_i)\Big]=\sum_i E[f(X_i)]$$
$$V[af(x)]=a^2V[f(x)]$$
$$V[f(x)]=E\big[f(x)^2\big]-E[f(x)]^2$$

Page 17

Monte Carlo estimator

• Assume that we want to evaluate the integral of f(x) over [a,b].
• Given uniform random variables $X_i$ over [a,b], the Monte Carlo estimator
$$F_N=\frac{b-a}{N}\sum_{i=1}^{N}f(X_i)$$
says that the expected value $E[F_N]$ of the estimator $F_N$ equals the integral, since
$$E[F_N]=\frac{b-a}{N}\sum_{i=1}^{N}E[f(X_i)]=\frac{b-a}{N}\sum_{i=1}^{N}\int_a^b f(x)\frac{1}{b-a}\,dx=\int_a^b f(x)\,dx$$
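A minimal C++ sketch of this estimator applied to $\int_0^\pi\sin x\,dx=2$ (RandomFloat() stands in for a canonical uniform generator and is an assumption of this sketch):

#include <cmath>
#include <cstdlib>

float RandomFloat() { return rand() / (RAND_MAX + 1.f); }  // canonical uniform in [0,1)

// F_N = (b-a)/N * sum f(X_i), with X_i uniform on [a,b];
// here f = sin on [0, pi], whose true integral is 2.
float UniformMC(int N) {
    const float a = 0.f, b = 3.14159265f;
    float sum = 0.f;
    for (int i = 0; i < N; ++i) {
        float x = a + (b - a) * RandomFloat();
        sum += sinf(x);
    }
    return (b - a) * sum / N;
}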

Page 18

General Monte Carlo estimator

• Given random variables $X_i$ drawn from an arbitrary PDF p(x), the estimator is
$$F_N=\frac{1}{N}\sum_{i=1}^{N}\frac{f(X_i)}{p(X_i)}$$

• Although the convergence rate of the MC estimator, $O(N^{-1/2})$, is slower than that of other integration methods, it is independent of the dimension, making Monte Carlo the only practical method for high-dimensional integrals.
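In code, the only change from the uniform case is dividing each sample's contribution by its PDF; a sketch (f, p, and sample_p are placeholders for an integrand, its sampling density, and a routine that draws from that density):

// F_N = (1/N) * sum f(X_i)/p(X_i), with X_i drawn from the pdf p.
float GeneralMC(int N, float (*f)(float), float (*p)(float), float (*sample_p)()) {
    float sum = 0.f;
    for (int i = 0; i < N; ++i) {
        float x = sample_p();   // X_i ~ p
        sum += f(x) / p(x);     // requires p(x) > 0 wherever f(x) != 0
    }
    return sum / N;
}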

Page 19

Convergence of Monte Carlo

• Chebyshev’s inequality: let X be a random variable with expected value μ and variance σ². For any real number k>0,
$$\Pr\{|X-\mu|\ge k\sigma\}\le\frac{1}{k^2}$$

• For example, for $k=\sqrt2$, it shows that at least half of the values lie in the interval $(\mu-\sqrt2\,\sigma,\ \mu+\sqrt2\,\sigma)$.

• Let $Y_i=f(X_i)/p(X_i)$; the MC estimate $F_N$ becomes
$$F_N=\frac{1}{N}\sum_{i=1}^{N}Y_i$$

Page 20

Convergence of Monte Carlo

• The variance of the estimator shrinks linearly in N:
$$V[F_N]=V\Big[\frac{1}{N}\sum_{i=1}^{N}Y_i\Big]=\frac{1}{N^2}V\Big[\sum_{i=1}^{N}Y_i\Big]=\frac{1}{N^2}\sum_{i=1}^{N}V[Y_i]=\frac{1}{N}V[Y]$$

• Plugging into Chebyshev’s inequality with $k=\delta^{-1/2}$:
$$\Pr\bigg\{|F_N-E[F_N]|\ge\Big(\frac{V[F_N]}{\delta}\Big)^{1/2}\bigg\}\le\delta
\quad\Rightarrow\quad
\Pr\bigg\{|F_N-I|\ge\frac{1}{\sqrt\delta}\Big(\frac{V[Y]}{N}\Big)^{1/2}\bigg\}\le\delta$$

• So, for a fixed threshold δ, the error decreases at the rate $N^{-1/2}$.

Page 21

Properties of estimators

• An estimator $F_N$ is called unbiased if, for all N,
$$E[F_N]=Q$$
That is, the expected value is independent of N.

• Otherwise, the bias of the estimator is defined as
$$\beta[F_N]=E[F_N]-Q$$

• If the bias goes to zero as N increases, the estimator is called consistent:
$$\lim_{N\to\infty}\beta[F_N]=0\quad\Leftrightarrow\quad\lim_{N\to\infty}E[F_N]=Q$$

Page 22

Example of a biased consistent estimator

• Suppose we are doing antialiasing on a 1D pixel. To determine the pixel value, we need to evaluate $I=\int_0^1 w(x)f(x)\,dx$, where w(x) is the filter function with $\int_0^1 w(x)\,dx=1$.

• A common way to evaluate this is
$$F_N=\frac{\sum_{i=1}^{N}w(X_i)f(X_i)}{\sum_{i=1}^{N}w(X_i)}$$

• When N=1, we have
$$E[F_1]=E\bigg[\frac{w(X_1)f(X_1)}{w(X_1)}\bigg]=E[f(X_1)]=\int_0^1 f(x)\,dx\ne I$$

Page 23

Example of a biased consistent estimator

• When N=2, we have
$$E[F_2]=\int_0^1\!\!\int_0^1\frac{w(x_1)f(x_1)+w(x_2)f(x_2)}{w(x_1)+w(x_2)}\,dx_1\,dx_2\ne I$$

• However, as N grows very large, the bias approaches zero. Rewriting
$$F_N=\frac{\frac{1}{N}\sum_{i=1}^{N}w(X_i)f(X_i)}{\frac{1}{N}\sum_{i=1}^{N}w(X_i)}$$
both averages converge to their expectations, so
$$\lim_{N\to\infty}E[F_N]=\frac{\int_0^1 w(x)f(x)\,dx}{\int_0^1 w(x)\,dx}=I$$

Page 24

Choosing samples

• Carefully choosing the PDF from which samples are drawn is an important technique for reducing variance: in
$$F_N=\frac{1}{N}\sum_{i=1}^{N}\frac{f(X_i)}{p(X_i)}$$
we want f/p to have low variance. Hence, it is necessary to be able to draw samples from a chosen PDF.

• How do we sample an arbitrary distribution from a uniform random variable?
– Inversion
– Rejection
– Transformation

)(1

Page 25

Inversion method

• Cumulative probability distribution function: $P(x)=\Pr(X\le x)$

• Construction of samples: draw U uniformly from [0,1) and solve for $X=P^{-1}(U)$
(figure: U is chosen on the vertical axis of the CDF and X is read off the horizontal axis)

• Must know:
1. The integral of p(x)
2. The inverse function $P^{-1}(x)$

Pages 26-27

Proof for the inversion method

• Let U be a uniform random variable with CDF $P_u(x)=x$. We will show that $Y=P^{-1}(U)$ has the CDF P(x):
$$\Pr\{Y\le x\}=\Pr\{P^{-1}(U)\le x\}=\Pr\{U\le P(x)\}=P_u(P(x))=P(x)$$
where the second equality holds because P is monotonic: $x_1\le x_2\Leftrightarrow P(x_1)\le P(x_2)$.

Thus, Y’s CDF is exactly P(x).

Page 28

Inversion method

• Compute the CDF P(x)
• Compute $P^{-1}(x)$
• Obtain ξ
• Compute $X_i=P^{-1}(\xi)$

Pages 29-30

Example: power function

• $p(x)\propto x^n$; it is used in sampling Blinn’s microfacet model.

• Assume $p(x)=(n+1)x^n$; the constant follows from
$$\int_0^1 x^n\,dx=\left[\frac{x^{n+1}}{n+1}\right]_0^1=\frac{1}{n+1}$$

• Then $P(x)=x^{n+1}$, so
$$X\sim p(x)\ \Rightarrow\ X=P^{-1}(U)=U^{\frac{1}{n+1}}$$

• Trick (it only works for sampling a power distribution): let
$$Y=\max(U_1,U_2,\ldots,U_{n+1})$$
$$\Pr(Y\le x)=\prod_{i=1}^{n+1}\Pr(U_i\le x)=x^{n+1}$$
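Both routes in a short C++ sketch (RandomFloat() is the same assumed canonical uniform generator as before):

#include <cmath>

float RandomFloat();  // canonical uniform in [0,1)

// Inversion: X = U^{1/(n+1)} samples p(x) = (n+1) x^n on [0,1].
float SamplePowerInversion(int n) {
    return powf(RandomFloat(), 1.f / (n + 1));
}

// Trick: the maximum of n+1 canonical uniforms has the same distribution.
float SamplePowerMax(int n) {
    float y = 0.f;
    for (int i = 0; i < n + 1; ++i) {
        float u = RandomFloat();
        if (u > y) y = u;
    }
    return y;
}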

Pages 31-32

Example: exponential distribution

• $p(x)=ce^{-ax}$; useful for rendering participating media.

• Compute the CDF P(x), compute $P^{-1}(x)$, obtain ξ, and compute $X_i=P^{-1}(\xi)$:
$$\int_0^\infty ce^{-ax}\,dx=\frac{c}{a}=1\ \Rightarrow\ c=a$$
$$P(x)=\int_0^x ae^{-as}\,ds=1-e^{-ax}$$
$$P^{-1}(x)=-\frac{1}{a}\ln(1-x)$$
$$X=-\frac{1}{a}\ln(1-\xi)=-\frac{1}{a}\ln\xi'$$
(the last step uses the fact that $\xi'=1-\xi$ is also a canonical uniform random variable)
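The resulting sampler is one line of C++ (again with the assumed RandomFloat()):

#include <cmath>

float RandomFloat();  // canonical uniform in [0,1)

// X = -ln(1 - U)/a samples p(x) = a e^{-ax} on [0, infinity);
// since 1 - U is itself uniform, -ln(U)/a works equally well.
float SampleExponential(float a) {
    return -logf(1.f - RandomFloat()) / a;
}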

Page 33

Rejection method

• Sometimes, we can’t integrate p(x) into a CDF or invert the CDF.

• Rejection treats the integral as the area under the curve:
$$I=\int_0^1 f(x)\,dx=\int_0^1\!\!\int_0^{f(x)}dy\,dx$$

• Algorithm:
Pick $U_1$ and $U_2$
Accept $U_1$ if $U_2<f(U_1)$

• Wasteful? Efficiency = area under f / area of the rectangle.
(figure: darts thrown into the unit square; those below y=f(x) are accepted)

Page 34

Rejection method

• The rejection method is a dart-throwing method that requires no integration and no inversion:

1. Find q(x) and a constant M so that $p(x)<Mq(x)$
2. Dart throwing:
a. Choose a pair (X, ξ), where X is sampled from q(x)
b. If $\xi<p(X)/(Mq(X))$, return X

• Equivalently, we pick a point $(X,\ \xi Mq(X))$. If it lies beneath p(X), then we are fine.

Page 35

Why it works

• In each iteration, we generate $X_i$ from q. The sample is returned if $\xi<p(X)/(Mq(X))$, which happens with probability $p(X)/(Mq(X))$.

• So, the density of returning x is
$$q(x)\,\frac{p(x)}{Mq(x)}=\frac{p(x)}{M}$$
whose integral over the domain is 1/M.

• Thus, when a sample is returned (which happens with probability 1/M), $X_i$ is distributed according to p(x).

Page 36

Example: sampling a unit disk

void RejectionSampleDisk(float *x, float *y) {
    float sx, sy;
    do {
        sx = 1.f - 2.f * RandomFloat();
        sy = 1.f - 2.f * RandomFloat();
    } while (sx*sx + sy*sy > 1.f);
    *x = sx; *y = sy;
}

π/4 ≈ 78.5% of the samples are good; this gets worse in higher dimensions: for a sphere, only π/6 ≈ 52.4%.

Page 37

Transformation of variables

• Given a random variable X with distribution $p_x(x)$ and a random variable Y=y(X), where y is one-to-one (i.e. monotonic), we want to derive the distribution of Y, $p_y(y)$.

• CDFs (figure: $P_x(x)$ and $P_y(y)$ related through y(x)):
$$P_y(y(x))=\Pr\{Y\le y(x)\}=\Pr\{X\le x\}=P_x(x)$$

• PDF:
$$p_y(y)\,\frac{dy}{dx}=\frac{dP_y(y(x))}{dx}=\frac{dP_x(x)}{dx}=p_x(x)
\quad\Rightarrow\quad
p_y(y)=\Big(\frac{dy}{dx}\Big)^{-1}p_x(x)$$

Page 38

Example

$$Y=\sin X,\qquad p_x(x)=2x\ \text{over}\ [0,1]$$
$$p_y(y)=\Big(\frac{dy}{dx}\Big)^{-1}p_x(x)=\frac{p_x(x)}{\cos x}=\frac{2x}{\cos x}=\frac{2\sin^{-1}y}{\sqrt{1-y^2}}$$

Page 39

Transformation method

• A problem with applying the above method is that we usually have some PDF we want to sample from, not a given transformation.

• Given a source random variable X with $p_x(x)$ and a target distribution $p_y(y)$, we try to transform X into another random variable Y so that Y has the distribution $p_y(y)$.

• We first have to find a transformation y(x) so that $P_x(x)=P_y(y(x))$. Thus,
$$y(x)=P_y^{-1}(P_x(x))$$

Page 40

Transformation method

• Let’s prove that the above transform works. We first prove that the random variable $Z=P_x(X)$ has a uniform distribution:
$$\Pr\{Z\le x\}=\Pr\{P_x(X)\le x\}=\Pr\{X\le P_x^{-1}(x)\}=P_x(P_x^{-1}(x))=x$$
Thus, Z is uniform, and by the inversion method $Y=P_y^{-1}(Z)$ has the distribution $P_y$, so the transformation works.

• It is an obvious generalization of the inversion method, in which X is uniform and $P_x(x)=x$.

Pages 41-42

Example

$$p_x(x)=x\ \text{over}\ [0,\sqrt2],\qquad p_y(y)=e^{y}\ \text{over}\ (-\infty,0]$$
$$P_x(x)=\frac{x^2}{2},\qquad P_y(y)=e^{y},\qquad P_y^{-1}(y)=\ln y$$
$$y(x)=P_y^{-1}(P_x(x))=\ln\frac{x^2}{2}=2\ln x-\ln 2$$

Thus, if X has the distribution $p_x(x)=x$, then the random variable $Y=2\ln X-\ln 2$ has the distribution $p_y(y)=e^{y}$.

Page 43

Multiple dimensions

• In multiple dimensions, for a one-to-one transformation Y=T(X), the density transforms by the absolute value of the determinant of T’s Jacobian:
$$p_y(T(x))=\frac{p_x(x)}{|J_T(x)|}$$

• We often need the other way around: we know the target density we want on Y and use the Jacobian determinant to find the density to sample on X.

Page 44

Spherical coordinates

• The spherical coordinate representation of directions is
$$x=r\sin\theta\cos\phi,\qquad y=r\sin\theta\sin\phi,\qquad z=r\cos\theta$$

• The Jacobian of this transformation has $|J_T|=r^2\sin\theta$, hence
$$p(r,\theta,\phi)=r^2\sin\theta\;p(x,y,z)$$

Page 45

Spherical coordinates

• Now, look at the relation between spherical directions and solid angles:
$$d\omega=\sin\theta\,d\theta\,d\phi,\qquad p(\theta,\phi)\,d\theta\,d\phi=p(\omega)\,d\omega$$

• Hence, the density in terms of $(\theta,\phi)$ is
$$p(\theta,\phi)=\sin\theta\;p(\omega)$$

Page 46

Multidimensional sampling

• Separable case: independently sample X from $p_x$ and Y from $p_y$; $p(x,y)=p_x(x)p_y(y)$.

• Often, this is not possible. Compute the marginal density function p(x) first:
$$p(x)=\int p(x,y)\,dy$$

• Then, compute the conditional density function:
$$p(y|x)=\frac{p(x,y)}{p(x)}$$

• Use 1D sampling with p(x) and p(y|x).

Pages 47-48

Sampling a hemisphere

• Sample a hemisphere uniformly, i.e. $p(\omega)=c$:
$$1=\int_{H^2}p(\omega)\,d\omega=2\pi c\ \Rightarrow\ c=\frac{1}{2\pi}\ \Rightarrow\ p(\theta,\phi)=\sin\theta\,p(\omega)=\frac{\sin\theta}{2\pi}$$

• Sample θ first:
$$p(\theta)=\int_0^{2\pi}p(\theta,\phi)\,d\phi=\int_0^{2\pi}\frac{\sin\theta}{2\pi}\,d\phi=\sin\theta$$

• Now, sample φ:
$$p(\phi|\theta)=\frac{p(\theta,\phi)}{p(\theta)}=\frac{1}{2\pi}$$

Page 49

Sampling a hemisphere

• Now, we use the inversion technique to sample these PDFs:
$$P(\theta)=\int_0^\theta\sin\theta'\,d\theta'=1-\cos\theta$$
$$P(\phi|\theta)=\int_0^\phi\frac{1}{2\pi}\,d\phi'=\frac{\phi}{2\pi}$$

• Inverting these (and using the fact that $1-\xi_1$ is also a canonical uniform variable):
$$\theta=\cos^{-1}\xi_1,\qquad\phi=2\pi\xi_2$$

Page 50

Sampling a hemisphere

• Convert these to Cartesian coordinates:
$$x=\sin\theta\cos\phi=\cos(2\pi\xi_2)\sqrt{1-\xi_1^2}$$
$$y=\sin\theta\sin\phi=\sin(2\pi\xi_2)\sqrt{1-\xi_1^2}$$
$$z=\cos\theta=\xi_1$$

• A similar derivation works for the full sphere.
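Putting the three equations above into code, in the style of the deck’s other snippets (Vector, the u1/u2 canonical uniforms, and max are assumed pbrt-like helpers):

Vector UniformSampleHemisphere(float u1, float u2) {
    float z = u1;                             // cos(theta) = xi_1
    float r = sqrtf(max(0.f, 1.f - z * z));   // sin(theta)
    float phi = 2.f * M_PI * u2;              // phi = 2 pi xi_2
    return Vector(r * cosf(phi), r * sinf(phi), z);
}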

Pages 51-52

Sampling a disk

WRONG (not equi-areal): $r=\xi_1,\ \theta=2\pi\xi_2$

RIGHT (equi-areal): $r=\sqrt{\xi_1},\ \theta=2\pi\xi_2$

(figure: the wrong mapping clumps samples near the center of the disk; the right one distributes them uniformly by area)

Page 53

Sampling a disk

• Uniform means $p(x,y)=\frac{1}{\pi}$; in polar coordinates, $p(r,\theta)=r\,p(x,y)=\frac{r}{\pi}$.

• Sample r first:
$$p(r)=\int_0^{2\pi}p(r,\theta)\,d\theta=2r$$

• Then, sample θ:
$$p(\theta|r)=\frac{p(r,\theta)}{p(r)}=\frac{1}{2\pi}$$

• Invert the CDFs:
$$P(r)=r^2,\qquad P(\theta|r)=\frac{\theta}{2\pi}\ \Rightarrow\ r=\sqrt{\xi_1},\qquad\theta=2\pi\xi_2$$
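A direct C++ sketch of this inversion (pbrt provides a UniformSampleDisk along these lines):

void UniformSampleDisk(float u1, float u2, float *x, float *y) {
    float r = sqrtf(u1);             // r = sqrt(xi_1)
    float theta = 2.f * M_PI * u2;   // theta = 2 pi xi_2
    *x = r * cosf(theta);
    *y = r * sinf(theta);
}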

Page 54

Shirley’s mapping

• Shirley’s concentric mapping warps concentric squares into concentric circles, giving lower distortion than the polar mapping above; in the first wedge,
$$r=U_1,\qquad\theta=\frac{\pi}{4}\,\frac{U_2}{U_1}$$
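One common formulation of the concentric map in C++, a sketch rather than pbrt’s exact code: map [0,1]² to [-1,1]², then pick the wedge pair by comparing |sx| and |sy|.

void ConcentricSampleDisk(float u1, float u2, float *dx, float *dy) {
    float sx = 2.f * u1 - 1.f;
    float sy = 2.f * u2 - 1.f;
    if (sx == 0.f && sy == 0.f) { *dx = *dy = 0.f; return; }
    float r, theta;
    if (fabsf(sx) > fabsf(sy)) {                       // left/right wedges
        r = sx;
        theta = (M_PI / 4.f) * (sy / sx);
    } else {                                           // top/bottom wedges
        r = sy;
        theta = (M_PI / 2.f) - (M_PI / 4.f) * (sx / sy);
    }
    *dx = r * cosf(theta);                             // r may be negative,
    *dy = r * sinf(theta);                             // which mirrors the point
}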

Page 55

Sampling a triangle

(figure: the unit right triangle with vertices (0,0), (1,0), and (0,1); for a given u, v ranges from 0 to 1−u, and the hypotenuse u+v=1 passes through (u, 1−u))
$$A=\int_0^1\int_0^{1-u}dv\,du=\int_0^1(1-u)\,du=\frac{1}{2}$$
$$p(u,v)=\frac{1}{A}=2$$

Page 56: Monte Carlo Integration I Digital Image Synthesis Yung-Yu Chuang with slides by Pat Hanrahan and Torsten Moller

)(

),()|(

vp

vupvup

Sampling a triangle

• Here u and v are not independent!• Conditional probability

( , ) 2p u v

00 0 00 0

0 0

1( | ) ( | )

(1 ) (1 )o ov v v

P v u p v u dv dvu u

1( | )

(1 )p v u

u

0 20 00

( ) 2(1 ) (1 )u

P u u du u

1

0

( ) 2 2(1 )u

p u dv u

0 11u U

0 1 2v U U

dvvupup ),()(
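As code (pbrt’s UniformSampleTriangle is essentially this):

// Barycentric coordinates (u, v) = (1 - sqrt(U1), U2 * sqrt(U1))
// sample the triangle uniformly.
void UniformSampleTriangle(float u1, float u2, float *u, float *v) {
    float su1 = sqrtf(u1);
    *u = 1.f - su1;
    *v = u2 * su1;
}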

Page 57

Cosine weighted hemisphere

$$p(\omega)\propto\cos\theta$$
$$1=\int_{H^2}c\,\cos\theta\,d\omega=\int_0^{2\pi}\!\!\int_0^{\pi/2}c\cos\theta\sin\theta\,d\theta\,d\phi=2\pi c\int_0^{\pi/2}\cos\theta\sin\theta\,d\theta=\pi c\ \Rightarrow\ c=\frac{1}{\pi}$$
(using $d\omega=\sin\theta\,d\theta\,d\phi$)
$$p(\theta,\phi)=\frac{1}{\pi}\cos\theta\sin\theta$$

Page 58

Cosine weighted hemisphere

$$p(\theta)=\int_0^{2\pi}\frac{1}{\pi}\cos\theta\sin\theta\,d\phi=2\cos\theta\sin\theta=\sin2\theta$$
$$p(\phi|\theta)=\frac{p(\theta,\phi)}{p(\theta)}=\frac{1}{2\pi}$$
$$P(\theta)=\frac{1-\cos2\theta}{2},\qquad P(\phi|\theta)=\frac{\phi}{2\pi}$$

• Inverting these:
$$\cos2\theta=1-2\xi_1\ \Rightarrow\ \theta=\frac{1}{2}\cos^{-1}(1-2\xi_1),\qquad\phi=2\pi\xi_2$$

Page 59

Cosine weighted hemisphere

• Malley’s method: uniformly generate points on the unit disk, then generate directions by projecting them up to the hemisphere above it.
(figure: disk samples lifted vertically onto the hemisphere)

Vector CosineSampleHemisphere(float u1, float u2) {
    Vector ret;
    ConcentricSampleDisk(u1, u2, &ret.x, &ret.y);
    ret.z = sqrtf(max(0.f, 1.f - ret.x*ret.x - ret.y*ret.y));
    return ret;
}

Page 60

Cosine weighted hemisphere

• Why does Malley’s method work?
• Unit disk sampling gives $p(r,\phi)=\frac{r}{\pi}$.
• The map to the hemisphere sets $r=\sin\theta$. With $Y=(r,\phi)=T(\theta,\phi)=T(X)$,
$$p_y(T(x))=\frac{p_x(x)}{|J_T(x)|},\qquad J_T(x)=\begin{pmatrix}\cos\theta&0\\0&1\end{pmatrix},\qquad|J_T|=\cos\theta$$

Page 61

Cosine weighted hemisphere

• Therefore,
$$p(\theta,\phi)=|J_T|\,p(r,\phi)=\cos\theta\,\frac{\sin\theta}{\pi}=\frac{1}{\pi}\cos\theta\sin\theta$$
which is exactly the cosine-weighted density derived before.

Page 62

Sampling Phong lobe

$$p(\omega)\propto\cos^n\theta\ \Rightarrow\ p(\omega)=c\cos^n\theta$$
$$1=\int_0^{2\pi}\!\!\int_0^{\pi/2}c\cos^n\theta\sin\theta\,d\theta\,d\phi=2\pi c\int_0^{\pi/2}\cos^n\theta\sin\theta\,d\theta=\frac{2\pi c}{n+1}\ \Rightarrow\ c=\frac{n+1}{2\pi}$$
$$p(\theta,\phi)=\frac{n+1}{2\pi}\cos^n\theta\sin\theta$$

Page 63

Sampling Phong lobe

$$p(\theta)=\int_0^{2\pi}\frac{n+1}{2\pi}\cos^n\theta\sin\theta\,d\phi=(n+1)\cos^n\theta\sin\theta$$
$$P(\theta')=\int_0^{\theta'}(n+1)\cos^n\theta\sin\theta\,d\theta=1-\cos^{n+1}\theta'$$

• Inverting (and replacing $1-\xi$ by the equally uniform ξ):
$$\cos\theta=\xi_1^{\frac{1}{n+1}}$$

Page 64

Sampling Phong lobe

$$p(\phi|\theta)=\frac{p(\theta,\phi)}{p(\theta)}=\frac{\frac{n+1}{2\pi}\cos^n\theta\sin\theta}{(n+1)\cos^n\theta\sin\theta}=\frac{1}{2\pi}$$
$$P(\phi'|\theta)=\int_0^{\phi'}\frac{1}{2\pi}\,d\phi=\frac{\phi'}{2\pi}\ \Rightarrow\ \phi=2\pi\xi_2$$

Page 65

Sampling Phong lobe

$$(\theta,\phi)=\Big(\cos^{-1}\xi_1^{\frac{1}{n+1}},\ 2\pi\xi_2\Big)$$

• When n=1, it is actually equivalent to the cosine-weighted hemisphere, $(\theta,\phi)=\big(\frac{1}{2}\cos^{-1}(1-2\xi_1),\ 2\pi\xi_2\big)$: there,
$$\xi_1=\frac{1}{2}(1-\cos2\theta)=\frac{1}{2}\big(1-(1-2\sin^2\theta)\big)=\sin^2\theta=1-\cos^2\theta$$
so $\cos\theta=\sqrt{1-\xi_1}\sim\sqrt{\xi_1}$, which is the n=1 case above.
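A sketch of the sampler as a direction about the +z axis (the function name is ours; Vector and max as in the deck’s other snippets):

Vector SamplePhongLobe(float u1, float u2, float n) {
    float cosTheta = powf(u1, 1.f / (n + 1.f));                   // inverted P(theta)
    float sinTheta = sqrtf(max(0.f, 1.f - cosTheta * cosTheta));
    float phi = 2.f * M_PI * u2;                                  // inverted P(phi|theta)
    return Vector(sinTheta * cosf(phi), sinTheta * sinf(phi), cosTheta);
}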

Page 66

Piecewise-constant 2D distributions

• Sample from discrete 2D distributions; useful for texture maps and environment lights.

• Consider f(u,v) defined by a set of $n_u\times n_v$ values $f[u_i,v_j]$.

• Given a continuous (u,v), we use (u',v') to denote the corresponding discrete $(u_i,v_j)$ indices.

Page 67

Piecewise-constant 2D distributions

• Integral:
$$I_f=\iint f(u,v)\,du\,dv=\frac{1}{n_u n_v}\sum_{i}\sum_{j}f[u_i,v_j]$$

• PDF:
$$p(u,v)=\frac{f(u,v)}{\iint f(u,v)\,du\,dv}=\frac{f[u',v']}{I_f}$$

• Marginal density:
$$p(v)=\int p(u,v)\,du=\frac{\frac{1}{n_u}\sum_i f[u_i,v']}{I_f}$$

• Conditional probability:
$$p(u|v)=\frac{p(u,v)}{p(v)}=\frac{f[u',v']/I_f}{p(v)}$$

Page 68

Piecewise-constant 2D distributions

(figure: an example f(u,v) image, its low-res marginal density p(v), and its low-res conditional probability p(u|v); implemented in pbrt as Distribution2D)
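A sketch of the 1D building block (pbrt’s Distribution1D is similar in spirit, though its interface differs): precompute the CDF of n piecewise-constant values, then invert it by searching for the segment containing u.

#include <vector>

struct Distribution1D {
    std::vector<float> func, cdf;
    float funcInt;  // integral of the function over [0,1]
    Distribution1D(const float *f, int n) : func(f, f + n), cdf(n + 1) {
        cdf[0] = 0.f;
        for (int i = 1; i <= n; ++i)      // running sum of f / n
            cdf[i] = cdf[i - 1] + func[i - 1] / n;
        funcInt = cdf[n];
        for (int i = 1; i <= n; ++i)      // normalize so cdf[n] == 1
            cdf[i] /= funcInt;
    }
    // Map a uniform u in [0,1) to x in [0,1) with density func/funcInt.
    float Sample(float u, float *pdf) const {
        int i = 0, n = (int)func.size();
        while (i < n - 1 && cdf[i + 1] <= u) ++i;   // find the CDF segment
        *pdf = func[i] / funcInt;                   // p at the returned x
        float du = (u - cdf[i]) / (cdf[i + 1] - cdf[i]);
        return (i + du) / n;
    }
};

A 2D distribution then stores one such 1D distribution per row for p(u|v), plus one more for the marginal p(v).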

Page 69

Metropolis sampling

• Metropolis sampling can efficiently generate a set of samples from any non-negative function f, requiring only the ability to evaluate f.

• Disadvantage: successive samples in the sequence are often correlated. It is not possible to ensure that a small number of samples generated by Metropolis is well distributed over the domain; there is no technique like stratified sampling for Metropolis.

Page 70

Metropolis sampling

• Problem: given an arbitrary function f, assuming $f(x)\ge 0$ with a finite integral, generate a set of samples $\{x_i\}$ whose density is proportional to f’s value.

Page 71

Metropolis sampling

• Steps:
– Generate an initial sample $x_0$
– Mutate the current sample $x_i$ to propose x'
– If it is accepted, $x_{i+1}=x'$; otherwise, $x_{i+1}=x_i$

• The acceptance probability guarantees that the resulting distribution is the stationary distribution f.

Page 72

Metropolis sampling

• Mutations propose x' given $x_i$.
• T(x→x') is the tentative transition probability density of proposing x' from x.
• Being able to calculate the tentative transition probability is the only restriction on the choice of mutations.
• a(x→x') is the acceptance probability of accepting the transition.
• By defining a(x→x') carefully, we ensure that the samples’ stationary distribution is proportional to f.

Page 73

Metropolis sampling

• Detailed balance: the expected transition densities between two states must match,
$$f(x)\,T(x\to x')\,a(x\to x')=f(x')\,T(x'\to x)\,a(x'\to x)$$
which is satisfied by choosing
$$a(x\to x')=\min\Big(1,\ \frac{f(x')\,T(x'\to x)}{f(x)\,T(x\to x')}\Big)$$
and makes f the stationary distribution.

Page 74

Pseudo code
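A C++ sketch of the basic Metropolis loop described on the previous slides (f, mutate, T, and Record are application-supplied placeholders):

#include <algorithm>

float RandomFloat();           // canonical uniform in [0,1)
float f(float x);              // target function, f >= 0
float mutate(float x);         // propose x' from x
float T(float x, float xp);    // tentative transition density T(x -> x')
void  Record(float x);         // accumulate the sample

void Metropolis(float x0, int N) {
    float x = x0;
    for (int i = 0; i < N; ++i) {
        float xp = mutate(x);
        float a = std::min(1.f, (f(xp) * T(xp, x)) / (f(x) * T(x, xp)));
        if (RandomFloat() < a)
            x = xp;            // accept the mutation
        Record(x);             // record the current sample either way
    }
}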

Page 75

Pseudo code (expected value)

• Expected-value variant: rather than recording only the surviving state, record both x and x' at every step, weighted by 1−a and a respectively.

Page 76

Binary example I

Page 77

Binary example II

Page 78

Acceptance probability

• It does not affect unbiasedness, just variance.
• We want transitions to happen, because transitions are often heading where f is large.
• Maximizing the acceptance probability:
– Explores the state space better
– Reduces correlation

Page 79

Mutation strategy

• Very free and flexible; the only requirement is being able to calculate the transition probability.
• Based on applications and experience.
• The more kinds of mutations, the better.
• Their relative frequency is not so important.

Page 80

Start-up bias

• Using an initial sample not drawn from f’s distribution leads to a problem called start-up bias.
• Solution #1: run Metropolis sampling for a while and use the current sample as the initial sample to re-start the process.
– Expensive start-up cost
– Need to guess when to re-start
• Solution #2: use another available sampling method to start.

Page 81

1D example

Page 82

1D example (mutation)

Page 83

1D example

(figure: mutation 1 vs. mutation 2, 10,000 iterations)

Page 84

1D example

(figure: mutation 1 vs. mutation 2, 300,000 iterations)

Page 85

1D example

(figure: mutation 1 vs. 90% mutation 2 + 10% mutation 1)
Periodically using uniform mutations increases ergodicity.

Page 86

2D example (image copy)

Page 87

2D example (image copy)

Page 88

2D example (image copy)

(figure: results with 1, 8, and 256 samples per pixel)