
Page 1: Overview (graphics.stanford.edu/.../lectures/11_mc2/11_mc2_slides.pdf)

CS348b Lecture 11 Pat Hanrahan / Matt Pharr, Spring 2017

Overview

Earlier lectures

■ Monte Carlo I: Integration

■ Signal processing and sampling

Today: Monte Carlo II

■ Noise and variance

■ Efficiency

■ Stratified sampling

■ Importance sampling

Page 2

Review: Basic MC Estimator

$$X_i \in [0,1)^n \qquad E\!\left[\frac{1}{N}\sum_{i=1}^{N} f(X_i)\right] = \int f(x)\,dx$$
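A minimal Python sketch of this estimator; the integrand x^2 and the sample count are illustrative choices, not from the slides:

```python
import random

def mc_estimate(f, n_samples, dim=1):
    """Basic Monte Carlo estimator of the integral of f over [0, 1)^dim."""
    total = 0.0
    for _ in range(n_samples):
        x = [random.random() for _ in range(dim)]
        total += f(x)
    return total / n_samples

# Example: integrate f(x) = x^2 over [0, 1); exact value is 1/3.
random.seed(0)
estimate = mc_estimate(lambda x: x[0] ** 2, 100_000)
```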

Page 3

High-Dimensional Integration

Complete set of samples:

$$N = \underbrace{n \times n \times \cdots \times n}_{d} = n^d$$

■ 'The curse of dimensionality'

Random sampling error:

$$E \sim V^{1/2} \sim \frac{1}{\sqrt{N}}$$

Numerical integration error:

$$E \sim \frac{1}{n} = \frac{1}{N^{1/d}}$$

In high dimensions, Monte Carlo requires fewer samples than quadrature-based numerical integration for the same error

Page 4

A Tale of Two Functions

[Plots of the two functions over [0, 1]]

$$f(x) = e^{-5000(x-\frac{1}{2})^2} \qquad\qquad f(x) = e^{-20(x-\frac{1}{2})^2}$$

Page 5

Monte Carlo Estimates, n=32

Estimates for $f(x) = e^{-20(x-\frac{1}{2})^2}$: 0.419, 0.379, 0.377, 0.361, 0.332, 0.380, 0.476, 0.463

$$\int f(x)\,dx = 0.39571\ldots$$

Estimates for $f(x) = e^{-5000(x-\frac{1}{2})^2}$: 0.0100, 0.0520, 0.0236, 0.0001, 0.0078, 0.0356, 0.0296, 0.0270

$$\int f(x)\,dx = 0.025067\ldots$$

Page 6

Random Sampling Introduces Noise

1 shadow ray

Center vs. Random

Page 7

Less Noise with More Rays

1 shadow ray vs. 16 shadow rays

Page 8

Variance

Definition

$$V[Y] \equiv E[(Y - E[Y])^2] = E[Y^2] - E[Y]^2$$

$$V[aY] = a^2 V[Y]$$

Variance decreases linearly with sample size

$$V\!\left[\frac{1}{N}\sum_{i=1}^{N} Y_i\right] = \frac{1}{N^2}\sum_{i=1}^{N} V[Y_i] = \frac{1}{N^2}\,N\,V[Y] = \frac{1}{N}\,V[Y]$$
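The 1/N falloff can be checked empirically. This sketch (trial counts and seed are arbitrary choices, not from the slides) measures the variance of the sample mean for two values of N:

```python
import random

def estimator_variance(n_samples, n_trials=2000):
    """Empirical variance of the mean of n_samples uniform draws."""
    means = []
    for _ in range(n_trials):
        means.append(sum(random.random() for _ in range(n_samples)) / n_samples)
    mu = sum(means) / n_trials
    return sum((m - mu) ** 2 for m in means) / n_trials

random.seed(1)
v1 = estimator_variance(4)    # variance of the mean with N = 4
v2 = estimator_variance(16)   # variance of the mean with N = 16
# Since V scales as 1/N, v1 / v2 should be roughly 16 / 4 = 4.
```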

Page 9

Comparing Different Techniques

Efficiency measure

Comparing two sampling techniques A and B

If A has twice the variance as B, then it takes twice as many samples from A to achieve the same variance as B

If A has twice the cost per sample of B, then it takes twice as much time to reach a given variance using A compared to using B

The product of variance and cost is a constant independent of the number of samples

Recall: Variance goes as 1/N, time goes as C*N

$$\text{Efficiency} \propto \frac{1}{\text{Variance} \cdot \text{Cost}}$$

Page 10

(Live Demo of Variance)

Page 11

High Variance From Narrow Function

[Plot of the narrow Gaussian with random sample positions]

Estimate of integral: $6.1418 \times 10^{-12}$

$$\int f(x)\,dx = 0.025067\ldots$$

Page 12

Stratified Sampling

Allocate samples per region

Estimate each region separately

$$F_N = \frac{1}{N}\sum_{i=1}^{N} F_i$$

New variance

$$V[F_N] = \frac{1}{N^2}\sum_{i=1}^{N} V[F_i]$$

If the variance in each region is the same, then total variance goes as 1/N

Page 13

Stratified Sampling

Sample a polygon

If the variance in some regions is smaller, then the overall variance will be reduced

$$V[F_N] = \frac{1}{N^2}\sum_{i} V[F_i] = \frac{1}{N}\,E_j\!\left[V[F_j]\right] \sim N^{-1.5}$$

Page 14

Stratified Samples, n=32

[Plot of the narrow Gaussian with 32 stratified sample positions]

$$\int f(x)\,dx = 0.025067\ldots$$

Variance:
  Uniform     3.99E-04
  Stratified  2.42E-04
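A sketch reproducing this comparison for the narrow Gaussian from the slides; the exact numbers will differ from the table above, since the seed and trial count here are arbitrary choices:

```python
import math, random

def f(x):
    # The narrow Gaussian from the slides.
    return math.exp(-5000.0 * (x - 0.5) ** 2)

def uniform_estimate(n):
    return sum(f(random.random()) for _ in range(n)) / n

def stratified_estimate(n):
    # One jittered sample per stratum [i/n, (i+1)/n).
    return sum(f((i + random.random()) / n) for i in range(n)) / n

def variance(estimates):
    mu = sum(estimates) / len(estimates)
    return sum((e - mu) ** 2 for e in estimates) / len(estimates)

random.seed(2)
trials = 2000
v_uniform = variance([uniform_estimate(32) for _ in range(trials)])
v_stratified = variance([stratified_estimate(32) for _ in range(trials)])
# Stratification reduces variance here, as in the table above.
```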

Page 15

Jittered Sampling

Add uniform random jitter to each sample
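A possible implementation of this in 2D (the grid size is an illustrative choice):

```python
import random

def jittered_samples_2d(nx, ny):
    """One uniformly jittered sample in each cell of an nx-by-ny grid over [0,1)^2."""
    samples = []
    for j in range(ny):
        for i in range(nx):
            x = (i + random.random()) / nx   # uniform within cell i in x
            y = (j + random.random()) / ny   # uniform within cell j in y
            samples.append((x, y))
    return samples

random.seed(3)
pts = jittered_samples_2d(4, 4)
```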

Page 16

Jittered vs. Uniform Supersampling

4x4 Jittered Sampling vs. 4x4 Uniform

Page 17

Theory: Analysis of Jitter

Sampling signal:

$$s(x) = \sum_{n=-\infty}^{\infty} \delta(x - x_n), \qquad x_n = nT + j_n, \qquad j_n \sim j(x)$$

Jitter distribution and its Fourier transform:

$$j(x) = \begin{cases} 1, & |x| \le 1/2 \\ 0, & |x| > 1/2 \end{cases} \qquad J(\omega) = \mathrm{sinc}(\omega)$$

Power spectrum, non-uniform sampling:

$$S(\omega) = \frac{1}{T}\left[1 - |J(\omega)|^2\right] + \frac{2\pi}{T^2}\,|J(\omega)|^2 \sum_{n=-\infty}^{\infty} \delta\!\left(\omega - \frac{2\pi n}{T}\right)$$

Jittered sampling:

$$S(\omega) = \frac{1}{T}\left[1 - \mathrm{sinc}^2(T\omega)\right] + \frac{1}{T}\,\delta(\omega)$$

Page 18

Poisson Disk Sampling

Dart-throwing algorithm; power spectrum has less energy near the origin

Page 19

Distribution of Extrafoveal Cones

Monkey eye cone distribution Fourier transform

Yellott

Page 20

Fourier Radial Power Spectrum

[Radial power spectra (power vs. frequency) for Jitter and Poisson Disk sampling]

Pilleboue et al. [2015]


Page 22

Theoretical Convergence Rates in 2D

Sampler        Worst Case      Best Case
Random         $O(N^{-1})$     $O(N^{-1})$
Poisson Disk   $O(N^{-1})$     $O(N^{-1})$
Jitter         $O(N^{-1.5})$   $O(N^{-2})$

Page 23

Measuring Power at a Pixel

[Figure: film plane and lens aperture; point p on the pixel (of area $A_{\text{pixel}}$), point p′ on the lens, separated by distance r, with angles θ and θ′]

Page 24

Measuring Power at a Pixel

[Figure as on the previous slide]

$$E(p) = \int_{A_{\text{lens}}} L(p' \to p)\, \frac{\cos\theta \cos\theta'}{\|p' - p\|^2}\, dA'$$

Page 25

Measuring Power at a Pixel

[Figure as on the previous slide]

$$E(p) = \int_{A_{\text{lens}}} L(p' \to p)\, \frac{\cos\theta \cos\theta'}{\|p' - p\|^2}\, dA'$$

$$W = \int_{A_{\text{pixel}}} \int_{A_{\text{lens}}} L(p' \to p)\, \frac{\cos\theta \cos\theta'}{\|p' - p\|^2}\, dA'\, dA$$

Page 26

Lens Exit Pupils

In Focus vs. Out of Focus

Page 27

How to Stratify?

n strata in d dimensions: $O(n^d)$ samples

■ 8 strata in 4 dimensions: 4096 samples!

Page 28

How to Stratify?

n strata in d dimensions: $O(n^d)$ samples

■ 8 strata in 4 dimensions: 4096 samples!

Solution: padding

■ Generate stratified lower-dimensional point sets, randomly associate pairs of samples

[Two 2D stratified point sets, paired into samples $(p_0, p_1, p_2, p_3)$]
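A sketch of the padding idea, under the assumption that a jittered 2D set is used for each pair of dimensions (the function names are my own):

```python
import random

def jittered_2d(n):
    """n*n jittered points in [0,1)^2, one per grid cell."""
    return [((i + random.random()) / n, (j + random.random()) / n)
            for j in range(n) for i in range(n)]

def padded_4d(n):
    """Pair two independent stratified 2D sets into 4D samples."""
    a = jittered_2d(n)
    b = jittered_2d(n)
    random.shuffle(b)  # random association between the two point sets
    return [(p0, p1, p2, p3) for (p0, p1), (p2, p3) in zip(a, b)]

random.seed(4)
samples = padded_4d(8)  # 64 four-dimensional samples instead of 8^4 = 4096
```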

Page 29

(Live Demo of Stratification)

Page 30

“Biased” Sampling is “Unbiased”

Probability

$$X_i \sim p(x)$$

Estimator

$$Y_i = \frac{f(X_i)}{p(X_i)}$$

Proof

$$E[Y_i] = E\!\left[\frac{f(X_i)}{p(X_i)}\right] = \int \frac{f(x)}{p(x)}\,p(x)\,dx = \int f(x)\,dx$$
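A small numerical check of this property; the density p(x) = 2x and integrand x^2 are illustrative choices (not from the slides), with X drawn by inverting the CDF:

```python
import math, random

def importance_estimate(n):
    """Estimate ∫_0^1 x^2 dx with X_i ~ p(x) = 2x, sampled via X = sqrt(U)."""
    total = 0.0
    for _ in range(n):
        x = math.sqrt(random.random())  # inverse CDF of p(x) = 2x
        total += (x * x) / (2.0 * x)    # f(X) / p(X)
    return total / n

random.seed(5)
est = importance_estimate(50_000)
# The estimator stays unbiased even though p is non-uniform; exact value is 1/3.
```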

Page 31

Importance Sampling

Sample according to f:

$$\tilde{p}(x) = \frac{f(x)}{E[f]}, \qquad \tilde{f}(x) = \frac{f(x)}{\tilde{p}(x)}$$

(where $E[f] = \int f(x)\,dx$)

Page 32

Importance Sampling

Sample according to f:

$$\tilde{p}(x) = \frac{f(x)}{E[f]}, \qquad \tilde{f}(x) = \frac{f(x)}{\tilde{p}(x)}$$

Variance

$$V[\tilde{f}] = E[\tilde{f}^2] - E[\tilde{f}]^2$$

Page 33

Importance Sampling

Sample according to f:

$$\tilde{p}(x) = \frac{f(x)}{E[f]}, \qquad \tilde{f}(x) = \frac{f(x)}{\tilde{p}(x)}, \qquad V[\tilde{f}] = E[\tilde{f}^2] - E[\tilde{f}]^2$$

$$E[\tilde{f}^2] = \int \left(\frac{f(x)}{\tilde{p}(x)}\right)^2 \tilde{p}(x)\,dx = \int \frac{f(x)\,f(x)}{f(x)/E[f]}\,dx = E[f]\int f(x)\,dx = E[f]^2$$

Page 34

Importance Sampling

Sample according to f:

$$\tilde{p}(x) = \frac{f(x)}{E[f]}, \qquad \tilde{f}(x) = \frac{f(x)}{\tilde{p}(x)}$$

$$E[\tilde{f}^2] = E[f]^2 \quad\Rightarrow\quad V[\tilde{f}] = E[\tilde{f}^2] - E[\tilde{f}]^2 = 0$$

Zero variance!

Page 35

Importance Sampling

Sample according to f: with $\tilde{p}(x) = f(x)/E[f]$, the estimator $\tilde{f}(x) = f(x)/\tilde{p}(x) = E[f]$ is constant, so

$$V[\tilde{f}] = E[\tilde{f}^2] - E[\tilde{f}]^2 = 0$$

Zero variance!

Gotcha? Normalizing $\tilde{p}$ requires knowing $E[f] = \int f(x)\,dx$, which is exactly the integral we set out to compute.

Page 36

Importance Sampling

[Plots: the narrow Gaussian and a sample density concentrated near x = 1/2]

Sample distribution

$$f(x) = e^{-5000(x-\frac{1}{2})^2}$$


Page 38

Importance Sampling

[Plot of f with importance-distributed sample positions]

Sample distribution

$$f(x) = e^{-5000(x-\frac{1}{2})^2}$$

MC Estimates, n=16: 0.02502, 0.02490, 0.02534, 0.02229, 0.02443, 0.02592, 0.02678, 0.02612

$$\int f(x)\,dx = 0.025067\ldots$$

Page 39

Importance Sampling

Sample distribution

$$f(x) = e^{-5000(x-\frac{1}{2})^2}$$

Variance:
  Uniform     3.99E-04
  Stratified  2.42E-04
  Importance  3.75E-07

Page 40

Importance Sampling

Sample distribution?

$$f(x) = e^{-5000(x-\frac{1}{2})^2}$$

[Plots: f and a candidate sample distribution]


Page 42

Importance Sampling: Area

Solid Angle (100 shadow rays) vs. Area (100 shadow rays)

Page 43

Ambient Occlusion

$$\int_{\Omega} V(\omega) \cos\theta\, d\omega$$

Landis, McGaugh, Koch @ ILM

Page 44

Importance Sampling

Uniform hemisphere sampling:

$$p(\omega) = \frac{1}{2\pi}$$

$$(\xi_1, \xi_2) \to \left(\sqrt{1-\xi_1^2}\,\cos(2\pi\xi_2),\ \sqrt{1-\xi_1^2}\,\sin(2\pi\xi_2),\ \xi_1\right)$$

[Plot of sampled directions]

$$f(\omega) = L_i(\omega)\cos\theta$$
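A sketch of this mapping used as an estimator; as a sanity check it integrates cos θ over the hemisphere (exact value π). The sample count is an illustrative choice:

```python
import math, random

def uniform_hemisphere():
    """Map (ξ1, ξ2) to a direction on the unit hemisphere (z >= 0), p(ω) = 1/(2π)."""
    xi1, xi2 = random.random(), random.random()
    z = xi1
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(2 * math.pi * xi2), r * math.sin(2 * math.pi * xi2), z)

random.seed(6)
n = 50_000
# Estimate ∫_Ω cosθ dω = π with f/p = cosθ / (1/2π) = 2π cosθ, where cosθ = z.
est = sum(2 * math.pi * uniform_hemisphere()[2] for _ in range(n)) / n
```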

Page 45

Importance Sampling

Uniform hemisphere sampling:

$$p(\omega) = \frac{1}{2\pi} \qquad f(\omega) = L_i(\omega)\cos\theta$$

$$\int_{\Omega} f(\omega)\,d\omega \approx \frac{1}{N}\sum_{i}^{N}\frac{f(\omega_i)}{p(\omega_i)} = \frac{1}{N}\sum_{i}^{N}\frac{L_i(\omega_i)\cos\theta_i}{1/2\pi} = \frac{2\pi}{N}\sum_{i}^{N} L_i(\omega_i)\cos\theta_i$$

Page 46

Importance Sampling

Cosine-weighted hemisphere sampling:

$$p(\omega) = \frac{\cos\theta}{\pi} \qquad f(\omega) = L_i(\omega)\cos\theta$$

$$\int_{\Omega} f(\omega)\,d\omega \approx \frac{1}{N}\sum_{i}^{N}\frac{f(\omega_i)}{p(\omega_i)} = \frac{1}{N}\sum_{i}^{N}\frac{L_i(\omega_i)\cos\theta_i}{\cos\theta_i/\pi} = \frac{\pi}{N}\sum_{i}^{N} L_i(\omega_i)$$
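One common way to realize p(ω) = cos θ/π is Malley's method: sample the unit disk uniformly and project up to the hemisphere. The slides don't specify a construction, so this choice is an assumption. As a check, E[cos θ] under this density is 2/3:

```python
import math, random

def cosine_weighted_hemisphere():
    """Malley's method: uniform disk sample projected up; p(ω) = cosθ/π."""
    r = math.sqrt(random.random())
    phi = 2 * math.pi * random.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))  # cosθ = z
    return (x, y, z)

random.seed(7)
n = 50_000
# Under p(ω) = cosθ/π, E[cosθ] = (1/π) ∫ cos²θ dω = 2/3.
est = sum(cosine_weighted_hemisphere()[2] for _ in range(n)) / n
```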

Page 47

Ambient Occlusion

$$\int_{\Omega} V(\omega) \cos\theta\, d\omega$$

Page 48

Reference: 4096 samples

Page 49

Uniform, 32 samples. Variance 0.0160

Page 50

Cosine, 32 samples. Variance 0.0103

Page 51

Uniform, 72 samples. Variance 0.00931

Page 52

Sampling a Circle

$$\theta = 2\pi U_1 \qquad r = \sqrt{U_2}$$

Equi-areal
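A sketch of this equi-areal mapping; the inner-circle count below checks that the density is uniform (about 1/4 of samples should fall inside radius 1/2). Sample counts and seed are illustrative:

```python
import math, random

def sample_disk():
    """Equi-areal unit-disk sample: θ = 2π U1, r = sqrt(U2)."""
    theta = 2 * math.pi * random.random()
    r = math.sqrt(random.random())
    return (r * math.cos(theta), r * math.sin(theta))

random.seed(8)
pts = [sample_disk() for _ in range(20_000)]
# Area of the inner circle of radius 1/2 is 1/4 of the disk.
frac_inner = sum(1 for x, y in pts if x * x + y * y < 0.25) / len(pts)
```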

Page 53

Shirley's Mapping: Better Strata

For the first wedge:

$$r = U_1 \qquad \theta = \frac{\pi}{4}\,\frac{U_2}{U_1}$$
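The slide shows only one wedge of the map; a full version of Shirley's concentric mapping, as it is commonly implemented (a sketch under that assumption, not the slides' code), might look like:

```python
import math, random

def concentric_disk(u1, u2):
    """Shirley-Chiu concentric mapping from [0,1)^2 to the unit disk.
    Preserves strata: adjacent squares map to adjacent disk regions."""
    # Map to [-1, 1]^2 and handle the degenerate center point.
    ox, oy = 2 * u1 - 1, 2 * u2 - 1
    if ox == 0 and oy == 0:
        return (0.0, 0.0)
    if abs(ox) > abs(oy):
        r, theta = ox, (math.pi / 4) * (oy / ox)
    else:
        r, theta = oy, (math.pi / 2) - (math.pi / 4) * (ox / oy)
    return (r * math.cos(theta), r * math.sin(theta))

random.seed(9)
pts = [concentric_disk(random.random(), random.random()) for _ in range(20_000)]
# Uniform density check: about 1/4 of samples inside radius 1/2.
frac_inner = sum(1 for x, y in pts if x * x + y * y < 0.25) / len(pts)
```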

Page 54

Views of Sampling

1. Numerical integration

■ Quadrature/Integration rules

■ Efficient for smooth functions

■ "Curse of dimensionality"

2. Statistical sampling (Monte Carlo integration)

■ Unbiased estimate of integral

■ High-dimensional sampling: error $\sim 1/N^{1/2}$

3. Signal processing

■ Sampling and reconstruction

■ Aliasing and antialiasing

■ Blue noise good

4. Quasi-Monte Carlo

■ Bound error using discrepancy

■ Asymptotic efficiency in high dimensions