
Monte Carlo Integration

Digital Image Synthesis, Yung-Yu Chuang

with slides by Pat Hanrahan and Torsten Moller

Introduction

• The integral equations generally don’t have analytic solutions, so we must turn to numerical methods.

• Standard methods like Trapezoidal integration or Gaussian quadrature are not effective for high-dimensional and discontinuous integrals.

$L_o(\mathrm{p}, \omega_o) = L_e(\mathrm{p}, \omega_o) + \int_{S^2} f(\mathrm{p}, \omega_o, \omega_i)\, L_i(\mathrm{p}, \omega_i)\, |\cos\theta_i|\, d\omega_i$

Numerical quadrature

• Suppose we want to calculate $I = \int_a^b f(x)\,dx$, but can't solve it analytically. The approximations through quadrature rules have the form

$\hat{I} = \sum_{i=1}^{n} w_i\, f(x_i)$

which is essentially a weighted sum of samples of the function at various points.

Midpoint rule

• Approximates $f$ on each subinterval by its value at the midpoint; $O(n^{-2})$ convergence.

Trapezoid rule

• Approximates $f$ on each subinterval by a linear function; $O(n^{-2})$ convergence.

Simpson's rule

• Similar to the trapezoid rule but using a quadratic polynomial approximation; $O(n^{-4})$ convergence, assuming $f$ has a continuous fourth derivative.

Curse of dimensionality and discontinuity

• For an $s$-dimensional function $f$: if the 1-D rule has a convergence rate of $O(n^{-r})$, the $s$-dimensional rule requires a much larger number ($n^s$) of samples to work as well as the 1-D one. Thus, the convergence rate is only $O(n^{-r/s})$.

• If $f$ is discontinuous, convergence is $O(n^{-1/s})$ in $s$ dimensions.

Randomized algorithms

• Las Vegas vs. Monte Carlo

• Las Vegas: always gives the right answer by using randomness.

• Monte Carlo: gives the right answer on average. Results depend on the random numbers used, but are statistically likely to be close to the right answer.

Monte Carlo integration

• Monte Carlo integration uses sampling to estimate the values of integrals. It only requires the ability to evaluate the integrand at arbitrary points, making it easy to implement and applicable to many problems.

• If $n$ samples are used, it converges at the rate of $O(n^{-1/2})$. That is, to cut the error in half, it is necessary to evaluate four times as many samples.

• Images produced by Monte Carlo methods are often noisy. Most current methods try to reduce this noise.

Monte Carlo methods

• Advantages
– Easy to implement
– Easy to think about (but be careful of statistical bias)
– Robust when used with complex integrands and domains (shapes, lights, …)
– Efficient for high-dimensional integrals

• Disadvantages
– Noisy
– Slow (many samples needed for convergence)

Basic concepts

• $X$ is a random variable.

• Applying a function to a random variable gives another random variable, $Y = f(X)$.

• CDF (cumulative distribution function): $P(x) = \Pr\{X \le x\}$

• PDF (probability density function): nonnegative and integrates to 1, $p(x) = \dfrac{dP(x)}{dx}$

• Canonical uniform random variable $\xi$ (provided by the standard library and easy to transform to other distributions).

Discrete probability distributions

• Discrete events $X_i$ with probability $p_i$, where $p_i \ge 0$ and $\sum_{i=1}^{n} p_i = 1$.

• Cumulative PDF (distribution): $P_i = \sum_{j=1}^{i} p_j$

• Construction of samples: to randomly select an event, draw a uniform random variable $U$ in $[0,1)$ and select $X_i$ if $P_{i-1} < U \le P_i$.
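As an illustration, here is a minimal C++ sketch of this selection rule; the function name and interface are illustrative, not from the slides, and it assumes a caller-supplied uniform number $u$ in $[0,1)$.

#include <vector>

// Select an index i with probability p[i]: walk the running sum
// (the cumulative distribution P_i) until it first reaches u,
// i.e. return X_i such that P_{i-1} < u <= P_i.
int SampleDiscrete(const std::vector<double> &p, double u) {
  double cdf = 0.0;
  for (size_t i = 0; i < p.size(); ++i) {
    cdf += p[i];                  // cdf now equals P_i = p_1 + ... + p_i
    if (u <= cdf) return (int)i;
  }
  return (int)p.size() - 1;       // guard against floating-point round-off
}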

Continuous probability distributions

• PDF $p(x) \ge 0$; for the canonical uniform distribution, $p(x) = 1$ for $0 \le x \le 1$ and $p(x) = 0$ otherwise.

• CDF $P(x) = \int_0^x p(x')\,dx' = \Pr(X \le x)$; for the uniform distribution, $P(x) = x$ on $[0,1]$ and $P(1) = 1$.

• $\Pr(\alpha \le X \le \beta) = \int_\alpha^\beta p(x)\,dx = P(\beta) - P(\alpha)$

Expected values

• Average value of a function f(x) over some distribution of values p(x) over its domain D

• Example: cos function over [0, π], p is uniform

$E_p[f(x)] = \int_D f(x)\, p(x)\, dx$

$E_p[\cos x] = \int_0^\pi \cos x \cdot \frac{1}{\pi}\, dx = \frac{1}{\pi}\left(\sin\pi - \sin 0\right) = 0$

Variance

• Expected squared deviation from the expected value.

• A fundamental concept for quantifying the error in Monte Carlo methods.

$V[f(x)] = E\!\left[\big(f(x) - E[f(x)]\big)^2\right]$

Properties

$E[a f(x)] = a\, E[f(x)]$

$E\!\left[\sum_i f(X_i)\right] = \sum_i E[f(X_i)]$

$V[a f(x)] = a^2\, V[f(x)]$

$V[f(x)] = E[f(x)^2] - E[f(x)]^2$

Monte Carlo estimator

• Assume that we want to evaluate the integral of f(x) over [a,b]

• Given uniform random variables $X_i$ over $[a,b]$, the Monte Carlo estimator

$F_N = \frac{b-a}{N} \sum_{i=1}^{N} f(X_i)$

has the property that the expected value $E[F_N]$ of the estimator $F_N$ equals the integral.
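As a minimal sketch (assuming a caller-supplied integrand f and C++'s <random>; the function name is illustrative):

#include <functional>
#include <random>

// Basic Monte Carlo estimator of the integral of f over [a,b]:
// F_N = (b-a)/N * sum_i f(X_i), with X_i ~ Uniform[a,b].
double EstimateUniform(const std::function<double(double)> &f,
                       double a, double b, int N, std::mt19937 &rng) {
  std::uniform_real_distribution<double> dist(a, b);
  double sum = 0.0;
  for (int i = 0; i < N; ++i)
    sum += f(dist(rng));
  return (b - a) * sum / N;
}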

General Monte Carlo estimator

• Given random variables $X_i$ drawn from an arbitrary PDF $p(x)$, the estimator is

$F_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}$

• Although the convergence rate of the MC estimator is $O(N^{-1/2})$, slower than other integration methods, its convergence rate is independent of the dimension, making it the only practical method for high-dimensional integrals.
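And a sketch of the general estimator, where the caller supplies both a routine that samples from $p$ and the ability to evaluate $p(x)$; again the interface is illustrative:

#include <functional>
#include <random>

// General Monte Carlo estimator: F_N = (1/N) * sum_i f(X_i)/p(X_i),
// where each X_i is drawn from the PDF p by sampleP.
double EstimateGeneral(const std::function<double(double)> &f,
                       const std::function<double(double)> &p,
                       const std::function<double(std::mt19937 &)> &sampleP,
                       int N, std::mt19937 &rng) {
  double sum = 0.0;
  for (int i = 0; i < N; ++i) {
    double x = sampleP(rng);
    sum += f(x) / p(x);   // assumes p(x) > 0 wherever f(x) != 0
  }
  return sum / N;
}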

Convergence of Monte Carlo

• Chebyshev's inequality: let $X$ be a random variable with expected value $\mu$ and variance $\sigma^2$. For any real number $k > 0$,

$\Pr\{|X - \mu| \ge k\sigma\} \le \frac{1}{k^2}$

• For example, for $k = \sqrt{2}$, it shows that at least half of the values lie in the interval $(\mu - \sqrt{2}\sigma,\ \mu + \sqrt{2}\sigma)$.

• Let $Y_i = f(X_i)/p(X_i)$; the MC estimate $F_N$ becomes

$F_N = \frac{1}{N} \sum_{i=1}^{N} Y_i$

Convergence of Monte Carlo

• According to Chebyshev's inequality (writing $\delta = 1/k^2$),

$\Pr\left\{ |F_N - E[F_N]| \ge \left(\frac{V[F_N]}{\delta}\right)^{1/2} \right\} \le \delta$

• The variance of the estimator is

$V[F_N] = V\!\left[\frac{1}{N}\sum_{i=1}^{N} Y_i\right] = \frac{1}{N^2}\sum_{i=1}^{N} V[Y_i] = \frac{1}{N}\, V[Y]$

• Plugging into Chebyshev's inequality,

$\Pr\left\{ |F_N - I| \ge \frac{1}{\sqrt{N}}\left(\frac{V[Y]}{\delta}\right)^{1/2} \right\} \le \delta$

So, for a fixed threshold $\delta$, the error decreases at the rate $N^{-1/2}$.

Properties of estimators

• An estimator $F_N$ is called unbiased if, for all $N$,

$E[F_N] = Q$

That is, the expected value is independent of $N$.

• Otherwise, the bias of the estimator is defined as

$\beta[F_N] = E[F_N] - Q$

• If the bias goes to zero as $N$ increases, the estimator is called consistent:

$\lim_{N\to\infty} \beta[F_N] = 0 \quad\Leftrightarrow\quad \lim_{N\to\infty} E[F_N] = Q$

Example of a biased consistent estimator

• Suppose we are doing antialiasing on a 1-D pixel. To determine the pixel value, we need to evaluate $I = \int_0^1 w(x) f(x)\,dx$, where $w(x)$ is the filter function with $\int_0^1 w(x)\,dx = 1$.

• A common way to evaluate this is

$F_N = \frac{\sum_{i=1}^{N} w(X_i)\, f(X_i)}{\sum_{i=1}^{N} w(X_i)}$

• When $N = 1$, we have

$E[F_1] = E\!\left[\frac{w(X_1) f(X_1)}{w(X_1)}\right] = E[f(X_1)] = \int_0^1 f(x)\,dx \ne I$

Example of a biased consistent estimator

• When $N = 2$, we have

$E[F_2] = \int_0^1\!\!\int_0^1 \frac{w(x_1) f(x_1) + w(x_2) f(x_2)}{w(x_1) + w(x_2)}\, dx_1\, dx_2 \ne I$

• However, when $N$ is very large, the bias approaches zero:

$F_N = \frac{\frac{1}{N}\sum_{i=1}^{N} w(X_i) f(X_i)}{\frac{1}{N}\sum_{i=1}^{N} w(X_i)}$

$\lim_{N\to\infty} E[F_N] = \frac{\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} w(X_i) f(X_i)}{\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} w(X_i)} = \frac{\int_0^1 w(x) f(x)\,dx}{\int_0^1 w(x)\,dx} = I$

Choosing samples

• Carefully choosing the PDF from which samples are drawn is an important technique for reducing variance: we want $f/p$ to have low variance. Hence, it is necessary to be able to draw samples from the chosen PDF.

$F_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}$

• How do we sample an arbitrary distribution from a variable with a uniform distribution?
– Inversion
– Rejection
– Transformation

Inversion method

• Cumulative probability distribution function: $P(x) = \Pr(X \le x)$

• Construction of samples: draw a uniform random variable $U$ and solve for $X = P^{-1}(U)$.

• Must know:
1. The integral of $p(x)$, i.e. the CDF
2. The inverse function $P^{-1}(x)$

Proof for the inversion method

• Let U be an uniform random variable and its CDF is Pu(x)=x. We will show that Y=P-1(U) has the CDF P(x).

Because $P$ is monotonic, $P(x_1) \le P(x_2)$ if $x_1 \le x_2$, so

$\Pr\{Y \le x\} = \Pr\{P^{-1}(U) \le x\} = \Pr\{U \le P(x)\} = P_u(P(x)) = P(x)$

Thus, $Y$'s CDF is exactly $P(x)$.

Inversion method

• Compute the CDF $P(x)$
• Compute $P^{-1}(x)$
• Obtain $\xi$
• Compute $X_i = P^{-1}(\xi)$

Example: power function

It is used in sampling Blinn's microfacet model: $p(x) \propto x^n$.

• Assume $p(x) = (n+1)\,x^n$, which is properly normalized since

$\int_0^1 x^n\,dx = \frac{x^{n+1}}{n+1}\bigg|_0^1 = \frac{1}{n+1}$

• The CDF is $P(x) = x^{n+1}$, so

$X \sim p(x) \quad\Rightarrow\quad X = P^{-1}(U) = U^{\frac{1}{n+1}}$

Trick

• Alternatively, take the maximum of $n+1$ uniform random variables (this only works for sampling the power distribution):

$Y = \max(U_1, U_2, \ldots, U_n, U_{n+1})$

$\Pr(Y \le x) = \prod_{i=1}^{n+1} \Pr(U_i \le x) = x^{n+1}$


Example: exponential distribution

This is useful for rendering participating media: $p(x) = c\,e^{-ax}$.

• Compute the normalization constant: $\int_0^\infty c\,e^{-ax}\,dx = \frac{c}{a} = 1 \;\Rightarrow\; c = a$

• Compute the CDF: $P(x) = \int_0^x a\,e^{-as}\,ds = 1 - e^{-ax}$

• Compute the inverse: $P^{-1}(x) = -\frac{1}{a}\ln(1 - x)$

• Obtain $\xi$ and compute $X = -\frac{1}{a}\ln(1 - \xi) = -\frac{1}{a}\ln\xi$ (since $1-\xi$ is also uniformly distributed)
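In code this inversion is a one-liner; a sketch in the style of the other snippets in these notes, assuming a RandomFloat() that returns a uniform number in [0,1):

// Sample X with density p(x) = a*exp(-a*x) on [0, infinity)
// by inverting the CDF: X = -ln(1 - xi)/a.
float SampleExponential(float a) {
  float xi = RandomFloat();       // uniform in [0,1)
  return -logf(1.f - xi) / a;     // 1 - xi is in (0,1], so the log is safe
}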

Rejection method

• Algorithm (dart throwing over the unit square, for $I = \int_0^1 f(x)\,dx$ with $0 \le f(x) \le 1$):
Pick uniform $U_1$ and $U_2$
Accept $U_1$ if $U_2 < f(U_1)$

• Wasteful? Efficiency = area under $y = f(x)$ / area of the rectangle.

• Sometimes we can't integrate $p(x)$ into a CDF or invert the CDF in closed form.

Rejection method

• The rejection method is a dart-throwing method that avoids integration and inversion.

1. Find $q(x)$ and $M$ so that $p(x) < M q(x)$

2. Dart throwing:
a. Choose a pair $(X, \xi)$, where $X$ is sampled from $q(x)$
b. If $\xi < p(X)/(M q(X))$, return $X$

• Equivalently, we pick a point $(X, \xi M q(X))$. If it lies beneath $p(X)$, then we accept it.

Why it works

• For each iteration, we generate $X_i$ from $q$. The sample is returned if $\xi < p(X_i)/(M q(X_i))$, which happens with probability $p(X_i)/(M q(X_i))$.

• So, the probability density of returning $x$ is

$q(x)\,\frac{p(x)}{M q(x)} = \frac{p(x)}{M}$

whose integral over the domain is $1/M$.

• Thus, when a sample is returned (which happens with probability $1/M$ per trial), $X_i$ is distributed according to $p(x)$.
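A generic rejection-sampling sketch, assuming the caller provides $p$, the proposal $q$ with its sampler, and a bound $M$ with $p(x) < M q(x)$; the names are illustrative:

#include <functional>
#include <random>

// Draw one sample from p(x) by rejection against the proposal q(x).
// Requires p(x) < M*q(x) everywhere. Each trial succeeds with
// probability 1/M, so the loop runs M times on average.
double RejectionSample(const std::function<double(double)> &p,
                       const std::function<double(double)> &q,
                       const std::function<double(std::mt19937 &)> &sampleQ,
                       double M, std::mt19937 &rng) {
  std::uniform_real_distribution<double> uni(0.0, 1.0);
  for (;;) {
    double x  = sampleQ(rng);               // X ~ q
    double xi = uni(rng);                   // xi ~ Uniform[0,1)
    if (xi < p(x) / (M * q(x))) return x;   // accept
  }
}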

Example: sampling a unit disk

void RejectionSampleDisk(float *x, float *y) {
  float sx, sy;
  do {
    sx = 1.f - 2.f * RandomFloat();
    sy = 1.f - 2.f * RandomFloat();
  } while (sx*sx + sy*sy > 1.f);
  *x = sx; *y = sy;
}

π/4 ≈ 78.5% of the samples are accepted; this gets worse in higher dimensions: for a sphere, only π/6 ≈ 52.4%.

Transformation of variables

• Given a random variable $X$ with distribution $p_x(x)$ and a one-to-one (i.e. monotonic) function $y$, let $Y = y(X)$. We want to derive the distribution of $Y$, $p_y(y)$.

• CDF: $P_y(y(x)) = \Pr\{Y \le y(x)\} = \Pr\{X \le x\} = P_x(x)$

• PDF: differentiating both sides with respect to $x$,

$p_y(y)\,\frac{dy}{dx} = \frac{dP_y(y(x))}{dx} = \frac{dP_x(x)}{dx} = p_x(x)$

$\Rightarrow\quad p_y(y) = \left(\frac{dy}{dx}\right)^{-1} p_x(x)$

Example

$Y = \sin X, \qquad p_x(x) = \frac{2x}{\pi^2}$

$p_y(y) = \left(\frac{dy}{dx}\right)^{-1} p_x(x) = \frac{p_x(x)}{\cos x} = \frac{2x}{\pi^2\cos x} = \frac{2\sin^{-1} y}{\pi^2\sqrt{1 - y^2}}$

Transformation method

• A problem with applying the above method is that we usually have some PDF we want to sample from, not a given transformation.

• Given a source random variable $X$ with $p_x(x)$ and a target distribution $p_y(y)$, try to transform $X$ into another random variable $Y$ so that $Y$ has the distribution $p_y(y)$.

• We first have to find a transformation $y(x)$ so that $P_x(x) = P_y(y(x))$. Thus,

$y(x) = P_y^{-1}(P_x(x))$

Transformation method

• Let's prove that the above transform works. We first prove that the random variable $Z = P_x(X)$ has a uniform distribution; if so, then $Y = P_y^{-1}(Z)$ has the distribution $p_y(y)$ by the inversion method.

$\Pr\{Z \le x\} = \Pr\{P_x(X) \le x\} = \Pr\{X \le P_x^{-1}(x)\} = P_x(P_x^{-1}(x)) = x$

Thus, $Z$ is uniform and the transformation works.

• It is an obvious generalization of the inversion method, in which $X$ is uniform and $P_x(x) = x$.

Example

$p_x(x) = x \qquad p_y(y) = e^{-y}$

$P_x(x) = \frac{x^2}{2} \qquad P_y(y) = e^{-y} \qquad P_y^{-1}(y) = -\ln y$

$y(x) = P_y^{-1}(P_x(x)) = -\ln\!\left(\frac{x^2}{2}\right) = \ln 2 - 2\ln x$

Thus, if $X$ has the distribution $p_x(x) = x$, then the random variable $Y = \ln 2 - 2\ln X$ has the distribution $p_y(y) = e^{-y}$.

Multiple dimensions

• The generalization to multiple dimensions uses the determinant of the Jacobian $J_T$ of the transformation $T$:

$p_y(T(x)) = \frac{p_x(x)}{|J_T(x)|}$

• We often need the other way around: the density in the original variables expressed through the transformed ones, $p_x(x) = |J_T(x)|\, p_y(T(x))$.

Spherical coordinates

• The spherical coordinate representation of directions is

$x = r\sin\theta\cos\phi$
$y = r\sin\theta\sin\phi$
$z = r\cos\theta$

• The Jacobian determinant of this transformation is $|J_T| = r^2\sin\theta$, so

$p(r, \theta, \phi) = r^2\sin\theta\; p(x, y, z)$

Spherical coordinates

• Now, look at the relation between spherical directions and solid angles:

$d\omega = \sin\theta\, d\theta\, d\phi \qquad p(\theta, \phi)\, d\theta\, d\phi = p(\omega)\, d\omega$

• Hence, the density in terms of $(\theta, \phi)$ is

$p(\theta, \phi) = \sin\theta\; p(\omega)$

Multidimensional sampling

• Separable case: if $p(x, y) = p_x(x)\,p_y(y)$, independently sample $X$ from $p_x$ and $Y$ from $p_y$.

• Often this is not possible. Compute the marginal density function $p(x)$ first:

$p(x) = \int p(x, y)\, dy$

• Then compute the conditional density function:

$p(y \mid x) = \frac{p(x, y)}{p(x)}$

• Use 1-D sampling with $p(x)$ and $p(y \mid x)$.

Sampling a hemisphere

• Sample a hemisphere uniformly, i.e. $p(\omega) = c$. Since $\int_{H^2} p(\omega)\,d\omega = 1$, we get $c = \frac{1}{2\pi}$, and in spherical coordinates

$p(\theta, \phi) = \frac{\sin\theta}{2\pi}$

• Sample $\theta$ first, using the marginal density:

$p(\theta) = \int_0^{2\pi} p(\theta, \phi)\, d\phi = \sin\theta$

• Now sample $\phi$, using the conditional density:

$p(\phi \mid \theta) = \frac{p(\theta, \phi)}{p(\theta)} = \frac{1}{2\pi}$

Sampling a hemisphere

• Now we use the inversion technique to sample these PDFs:

$P(\theta) = \int_0^\theta \sin\theta'\, d\theta' = 1 - \cos\theta$

$P(\phi \mid \theta) = \int_0^\phi \frac{1}{2\pi}\, d\phi' = \frac{\phi}{2\pi}$

• Inverting these:

$\theta = \cos^{-1}\xi_1 \qquad \phi = 2\pi\,\xi_2$

Sampling a hemisphere

• Convert these to Cartesian coordinates:

$\theta = \cos^{-1}\xi_1 \qquad \phi = 2\pi\,\xi_2$

$x = \sin\theta\cos\phi = \cos(2\pi\xi_2)\sqrt{1 - \xi_1^2}$
$y = \sin\theta\sin\phi = \sin(2\pi\xi_2)\sqrt{1 - \xi_1^2}$
$z = \cos\theta = \xi_1$

• A similar derivation works for the full sphere.
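Putting the formulas together, a sketch in the style of the CosineSampleHemisphere snippet later in these notes (a Vector type with x, y, z members and a three-argument constructor is assumed; the function name is illustrative):

// Uniformly sample a direction on the hemisphere around +z:
// z = xi1, and (x,y) lies on the circle of radius sqrt(1 - xi1^2)
// at angle phi = 2*pi*xi2.
Vector UniformSampleHemisphere(float u1, float u2) {
  float z = u1;
  float r = sqrtf(max(0.f, 1.f - z * z));
  float phi = 2.f * M_PI * u2;
  return Vector(r * cosf(phi), r * sinf(phi), z);
}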

Sampling a disk

• WRONG (not equi-areal): $r = \xi_1, \quad \theta = 2\pi\,\xi_2$

• RIGHT (equi-areal): $r = \sqrt{\xi_1}, \quad \theta = 2\pi\,\xi_2$

Sampling a disk

• Uniform density over the unit disk: $p(x, y) = \frac{1}{\pi}$, which in polar coordinates becomes $p(r, \theta) = r\, p(x, y) = \frac{r}{\pi}$

• Sample $r$ first: $p(r) = \int_0^{2\pi} p(r, \theta)\, d\theta = 2r$

• Then, sample $\theta$: $p(\theta \mid r) = \frac{p(r, \theta)}{p(r)} = \frac{1}{2\pi}$

• Invert the CDFs: $P(r) = r^2$ and $P(\theta \mid r) = \frac{\theta}{2\pi}$, giving

$r = \sqrt{\xi_1} \qquad \theta = 2\pi\,\xi_2$
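The corresponding code sketch, mirroring the interface of RejectionSampleDisk above; no rejection loop is needed:

// Sample a point uniformly on the unit disk via the inverted CDFs:
// r = sqrt(xi1), theta = 2*pi*xi2.
void UniformSampleDisk(float u1, float u2, float *x, float *y) {
  float r = sqrtf(u1);
  float theta = 2.f * M_PI * u2;
  *x = r * cosf(theta);
  *y = r * sinf(theta);
}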

Shirley’s mapping

$r = \xi_1 \qquad \theta = \frac{\pi}{4}\,\frac{\xi_2}{\xi_1}$

(This concentric mapping sends concentric squares to concentric circles; the formula above covers one wedge of the square, and the other wedges follow by symmetry.)
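A sketch of the full concentric mapping, an illustrative version of what a ConcentricSampleDisk-style routine does (not necessarily the exact library code):

// Shirley's concentric map: send [0,1]^2 to the unit disk so that
// concentric squares map to concentric circles (low distortion).
void ConcentricMapDisk(float u1, float u2, float *dx, float *dy) {
  // Map to [-1,1]^2 and handle the degenerate center point.
  float sx = 2.f * u1 - 1.f;
  float sy = 2.f * u2 - 1.f;
  if (sx == 0.f && sy == 0.f) { *dx = 0.f; *dy = 0.f; return; }
  float r, theta;
  if (fabsf(sx) > fabsf(sy)) {            // wedges around the x axis
    r = sx;
    theta = (M_PI / 4.f) * (sy / sx);
  } else {                                // wedges around the y axis
    r = sy;
    theta = (M_PI / 2.f) - (M_PI / 4.f) * (sx / sy);
  }
  *dx = r * cosf(theta);
  *dy = r * sinf(theta);
}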

Sampling a triangle

• Consider the triangle with vertices $(0,0)$, $(1,0)$ and $(0,1)$, i.e. the region $u \ge 0$, $v \ge 0$, $u + v \le 1$. Its area is

$A = \int_0^1\!\!\int_0^{1-u} dv\, du = \int_0^1 (1 - u)\, du = \frac{1}{2}$

• Sampling it uniformly therefore means

$p(u, v) = 2$

Sampling a triangle

• Here $u$ and $v$ are not independent! Compute the marginal density and its CDF first:

$p(u) = \int_0^{1-u} 2\, dv = 2(1 - u)$

$P(u) = \int_0^u 2(1 - u')\, du' = 1 - (1 - u)^2$

• Conditional probability:

$p(v \mid u) = \frac{p(u, v)}{p(u)} = \frac{1}{1 - u}$

$P(v \mid u) = \int_0^v \frac{dv'}{1 - u} = \frac{v}{1 - u}$

• Inverting these:

$u = 1 - \sqrt{\xi_1} \qquad v = \xi_2\sqrt{\xi_1}$
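A sketch that returns the barycentric coordinates (u, v) given two uniform samples; the function name is illustrative:

// Sample (u,v) uniformly over the triangle u >= 0, v >= 0, u + v <= 1,
// using u = 1 - sqrt(xi1), v = xi2 * sqrt(xi1).
void UniformSampleTriangle(float u1, float u2, float *u, float *v) {
  float su1 = sqrtf(u1);
  *u = 1.f - su1;
  *v = u2 * su1;
}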

Cosine weighted hemisphere

• We want $p(\omega) \propto \cos\theta$, i.e. $p(\omega) = c\cos\theta$. Normalizing over the hemisphere with $d\omega = \sin\theta\, d\theta\, d\phi$:

$1 = \int_{H^2} p(\omega)\, d\omega = \int_0^{2\pi}\!\!\int_0^{\pi/2} c\cos\theta\sin\theta\, d\theta\, d\phi = 2\pi c \int_0^{\pi/2} \cos\theta\sin\theta\, d\theta = \pi c$

$\Rightarrow\quad c = \frac{1}{\pi} \qquad p(\theta, \phi) = \frac{1}{\pi}\cos\theta\sin\theta$

Cosine weighted hemisphere

$p(\theta, \phi) = \frac{1}{\pi}\cos\theta\sin\theta$

• Marginal and conditional densities:

$p(\theta) = \int_0^{2\pi} p(\theta, \phi)\, d\phi = 2\cos\theta\sin\theta = \sin 2\theta$

$p(\phi \mid \theta) = \frac{p(\theta, \phi)}{p(\theta)} = \frac{1}{2\pi}$

• CDFs:

$P(\theta) = \frac{1 - \cos 2\theta}{2} = \sin^2\theta \qquad P(\phi \mid \theta) = \frac{\phi}{2\pi}$

• Inverting these:

$\theta = \frac{1}{2}\cos^{-1}(1 - 2\xi_1) \qquad \phi = 2\pi\,\xi_2$

Cosine weighted hemisphere

• Malley’s method: uniformly generates points on the unit disk and then generates directions by projecting them up to the hemisphere above it.

Vector CosineSampleHemisphere(float u1, float u2) {
  Vector ret;
  ConcentricSampleDisk(u1, u2, &ret.x, &ret.y);
  ret.z = sqrtf(max(0.f, 1.f - ret.x*ret.x - ret.y*ret.y));
  return ret;
}

Cosine weighted hemisphere

• Why does Malley's method work?

• Unit disk sampling gives the density $p(r, \phi) = \frac{r}{\pi}$.

• Projecting up to the hemisphere maps $r = \sin\theta$. Writing $Y = (r, \phi) = T(X)$ with $X = (\theta, \phi)$, the multidimensional transformation rule gives

$p_y(T(x)) = |J_T(x)|^{-1}\, p_x(x)$

$J_T(x) = \begin{pmatrix} \cos\theta & 0 \\ 0 & 1 \end{pmatrix} \qquad |J_T(x)| = \cos\theta$

• Therefore

$p(\theta, \phi) = |J_T|\; p(r, \phi) = \cos\theta\,\frac{\sin\theta}{\pi}$

which is exactly the cosine-weighted density we wanted.

Sampling Phong lobe

• We want $p(\theta) \propto \cos^n\theta$, i.e. $p(\omega) = c\cos^n\theta$. Normalizing:

$\int_0^{2\pi}\!\!\int_0^{\pi/2} c\cos^n\theta\sin\theta\, d\theta\, d\phi = 1$

$2\pi c \int_0^1 \cos^n\theta\, d(\cos\theta) = 1 \quad\Rightarrow\quad \frac{2\pi c}{n+1} = 1 \quad\Rightarrow\quad c = \frac{n+1}{2\pi}$

$p(\theta, \phi) = \frac{n+1}{2\pi}\cos^n\theta\sin\theta$

Sampling Phong lobe

• Marginal density and CDF for $\theta$:

$p(\theta) = \int_0^{2\pi} \frac{n+1}{2\pi}\cos^n\theta\sin\theta\, d\phi = (n+1)\cos^n\theta\sin\theta$

$P(\theta') = \int_0^{\theta'} (n+1)\cos^n\theta\sin\theta\, d\theta = 1 - \cos^{n+1}\theta'$

• Inverting:

$\cos\theta = \xi_1^{\frac{1}{n+1}} \quad\Rightarrow\quad \theta = \cos^{-1}\!\left(\xi_1^{\frac{1}{n+1}}\right)$

Sampling Phong lobe

• Conditional density and CDF for $\phi$:

$p(\phi \mid \theta) = \frac{p(\theta, \phi)}{p(\theta)} = \frac{\frac{n+1}{2\pi}\cos^n\theta\sin\theta}{(n+1)\cos^n\theta\sin\theta} = \frac{1}{2\pi}$

$P(\phi' \mid \theta) = \int_0^{\phi'} \frac{1}{2\pi}\, d\phi = \frac{\phi'}{2\pi}$

• Inverting: $\phi = 2\pi\,\xi_2$
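A sketch combining the two inversions, in the style of the earlier hemisphere snippets (Vector and the function name are illustrative assumptions):

// Sample a direction around +z with density proportional to cos^n(theta):
// cos(theta) = xi1^(1/(n+1)), phi = 2*pi*xi2.
Vector SamplePhongLobe(float u1, float u2, float n) {
  float cosTheta = powf(u1, 1.f / (n + 1.f));
  float sinTheta = sqrtf(max(0.f, 1.f - cosTheta * cosTheta));
  float phi = 2.f * M_PI * u2;
  return Vector(sinTheta * cosf(phi), sinTheta * sinf(phi), cosTheta);
}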

Sampling Phong lobe

• In summary, the Phong lobe is sampled by

$(\theta, \phi) = \left(\cos^{-1}\!\left(\xi_1^{\frac{1}{n+1}}\right),\; 2\pi\,\xi_2\right)$

while the cosine-weighted hemisphere is sampled by

$(\theta, \phi) = \left(\tfrac{1}{2}\cos^{-1}(1 - 2\xi_1),\; 2\pi\,\xi_2\right)$

• When $n = 1$, it is actually equivalent to the cosine-weighted hemisphere:

$P(\theta) = 1 - \cos^{n+1}\theta = 1 - \cos^2\theta = \sin^2\theta$, and for the cosine-weighted case $P(\theta) = \frac{1 - \cos 2\theta}{2} = \frac{1}{2}\big(1 - (1 - 2\sin^2\theta)\big) = \sin^2\theta$

so the two CDFs are identical.
