


ACCELERATING PATH TRACING BY EYE PATH REPROJECTION

N. Henrich, University of Koblenz, [email protected]
J. Bärz, University of Koblenz, [email protected]
T. Grosch, University of Magdeburg, [email protected]
S. Müller, University of Koblenz, [email protected]

ABSTRACT

Recently, path tracing has gained interest for real-time global illumination since it computes an unbiased estimate of the rendering equation. However, path tracing is still too slow for real-time applications and shows noise when displayed at interactive frame rates. The radiance of a pixel is computed by tracing a path, starting from the eye, and connecting each point on the path with the light source. While conventional path tracing uses the information of a path for a single pixel only, we demonstrate how to distribute the intermediate results along the path to other pixels in the image. We show that this reprojection of an eye path can be implemented efficiently on graphics hardware with only a small overhead. This improves the whole image, since the number of paths per pixel increases. The method is especially useful for many indirections, which are often omitted to save computation time. Furthermore, instead of improving the quality of the rendering, our method can increase the rendering speed by reusing the reprojected paths instead of tracing new paths while maintaining the same quality. Our solution remains a correct solution of the rendering equation up to a user-defined discretization threshold.

This is the author's version of the paper. The definitive version has been published in the GRVR 2011 conference proceedings.

1 Introduction

Solving the rendering equation in real time is still an open problem in computer graphics. Many real-time methods for global illumination exist, mostly implemented on the GPU, but they display only approximations. In contrast, path tracing is a Monte Carlo technique that computes an unbiased estimate of the rendering equation, and there is a current trend to use path tracing for interactive global illumination as well. However, path tracing is still too slow for real-time applications and shows noise when displayed at interactive frame rates. To improve this, we introduce a method that reduces the noise in the image by reusing information computed along a path for other pixels in the image. While conventional path tracing computes a path for a single pixel only, we show how to reproject the radiance computed at each point along the path into the image. This reprojection of a path can be implemented efficiently on graphics hardware with only a small overhead (see Fig. 1).


Figure 1. During progressive rendering, eye path reprojection improves the image. Panels: 4 paths; 16 paths; 4 paths + reprojection; 16 paths + reprojection.

2 Related Work

Recursive ray tracing, the first algorithm to accurately simulate perfectly specular reflections, refractions, and shadows based on geometric optics, was presented by Whitted [21]. Cook et al. [4] proposed to stochastically distribute the directions of the rays over space and time, allowing the computation of non-singular effects like depth of field, soft shadows, motion blur, and glossy reflections. By introducing the rendering equation, Kajiya [11] provided a common mathematical basis from which to derive rendering methods. In the same publication, path tracing was proposed to simulate all possible light paths: starting from the eye, the rendering equation is recursively solved by generating a random path to the light sources using Monte Carlo sampling. To reduce the noise of path tracing simulations, important advancements like bidirectional path tracing [12, 18], which starts paths from the light sources as well, were presented. By randomly modifying existing paths, Metropolis light transport (MLT) [18] effectively handles complicated light paths. Combining MLT mutation strategies with path tracing was done in [3].

Another approach to increase the efficiency of Monte Carlo methods involves storing illumination information in an intermediate data structure. Arvo [1] and Heckbert [9] proposed to accumulate energy from an initial light pass in textures, introducing a discretization error. By storing flux from a light pass in a multi-dimensional search tree, Jensen [10] introduced photon mapping, decoupling the illumination from the underlying geometry. While providing a re-usable, view-independent solution for global illumination, photon mapping is prone to bias. Post-processing filters can be applied to reduce the noise in the image, e.g. [5]. While these filters create visually pleasing images, they often remove small details and introduce bias to the rendering.

Besides aiming for higher quality, ray tracing research has focused on increasing rendering performance over the last decades. Initial publications relied on distributing the workload on parallel supercomputers [13]. The development of faster acceleration data structures and traversal techniques, especially in the context of SIMD instruction sets and multi-threading, enabled interactive frame rates on desktop computers [7]. An overview of real-time ray tracing on the CPU is given in [19]. With the advent of CUDA, a number of approaches to efficiently port ray tracing to the GPU [15, 14] have been suggested.

Several approaches deal with reusing paths to reduce noise and computation time. Bekaert et al. [2] reuse paths by combining the path of one pixel with the start point of a neighboring pixel. This approach reduces the noise in the image, but artifacts like clusters of bright pixels can appear, which was further addressed by Xu and Sbert [22]. In contrast to our approach, no reprojection of a path to a spatially different pixel is used there. Path reprojection is also used in bidirectional path tracing [12], where the light path is directly reprojected into a so-called light image. We extend this reprojection to a reprojection of the eye path, complementing the set of possible combinations of light and eye paths. Yue et al. [23] pre-compute paths with an arbitrary starting point and connect them to the light source for rendering. Walter et al. [20] show how to reproject illuminated points for a moving camera. Sbert et al. [16] and Szécsi et al. [17] reuse paths for light animations: paths can be reused for a moving light source, since most of them remain valid if the movement of the light source is small. Furthermore, the starting point of a path can be reused by different pixels of several frames in an animation rendering [6, 8]. These methods only reuse the complete path for different frames, whereas our method is capable of reusing every point along a path for spatially varying pixels.

3 Our Approach

Our method is a GPU-assisted extension of path tracing. To describe our approach, we first review standard path tracing.

3.1 Path Tracing

To synthesize images with path tracing, a set of measurements of the incident radiation on a virtual two-dimensional image plane has to be computed from a well-defined perspective. The average radiance L_p incident on the p-th picture element with area A_p on the image plane equals

    L_p = \frac{1}{A_p} \int_{A_p} L_i(q, \omega_q) \, dq,


Figure 2. Path tracing creates a path of N points x_n to compute the radiance L_p at pixel p. During the computation of the final pixel radiance, N − 1 intermediate radiance values L_{n,o} are computed. Reprojecting these points into the image yields radiance estimates for other pixels p_j.

where L_i is the incident radiance at point q from direction ω_q, given by the optical path within the receiving aperture. To solve the integral, the Monte Carlo method is applied to generate M sample positions q_s from a probability density function pdf(q_s) over the surface of the pixel:

    L_p \approx \frac{1}{A_p} \frac{1}{M} \sum_{s=1}^{M} \frac{L_i(q_s, \omega_{q,s})}{\mathrm{pdf}(q_s)}    (1)
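As a small illustration, the estimator of Eq. 1 can be sketched in Python. The names `incident_radiance` and `sample_pixel_pos` are hypothetical stand-ins for a full path-tracing evaluation and the pixel sampling routine, respectively; this is a sketch, not the paper's implementation.

```python
import random

def estimate_pixel_radiance(incident_radiance, sample_pixel_pos,
                            pixel_area, num_samples):
    """Monte Carlo estimator of Eq. 1: average the incident radiance
    over the pixel area, weighting each sample by 1/pdf."""
    total = 0.0
    for _ in range(num_samples):
        q, pdf = sample_pixel_pos()        # sample position and its density
        total += incident_radiance(q) / pdf
    return total / (pixel_area * num_samples)

# Uniform sampling of a unit-area pixel: pdf(q) = 1 everywhere.
def sample_uniform():
    return (random.random(), random.random()), 1.0
```

A handy sanity check: for a constant incident radiance and a unit-area pixel, the estimator returns that constant exactly, independent of the sample count.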

In the context of path tracing, the incident radiance L_i is more conveniently expressed in terms of exitant radiance using the geometrical invariance of radiance. The exitant radiance for a point x_n is given by

    L_{n,o}(x_n, \omega_o) = \int_{S^2} f(x_n, \omega_o, \omega_i) \, L_i(x_n, \omega_i) \cos\theta \, d\omega_i,    (2)

where f(x_n, ω_o, ω_i) is the bidirectional reflectance distribution function at x_n for the incoming direction ω_i and the outgoing direction ω_o, and θ is the angle between the receiver normal and ω_i. To efficiently compute L_{n,o}(x_n, ω_o), Monte Carlo integration is applied, splitting the integral into the sum of direct and indirect lighting. As is customary, the direct illumination L_{n,d}(x_n, ω_o) is computed by expressing the hemispherical integral in Eq. 2 as an integral over the surface of the light sources, and the indirect illumination L_{n,id}(x_n, ω_o) by choosing a single direction ω_s using importance sampling of the BRDF f(x_n, ω_o, ω_i) at x_n.

Thus, a path (x_1, .., x_N) is generated recursively, starting at the first intersection of the eye ray. Without introducing bias, the random path is terminated with a probability of 1 − r, r ∈ ]0, 1], applying Russian roulette. Once the recursion is stopped, the computed values for L_{n,o} are propagated backwards along the path, contributing to picture element p (see Fig. 2). Computing M different paths effectively solves Eq. 1.
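The recursion of Sec. 3.1 with Russian roulette can be sketched as follows. `direct_light` and `brdf_weight` are hypothetical stand-ins for the scene's direct-lighting estimate and the throughput of the sampled indirect bounce; dividing the recursive term by the continuation probability r is what keeps the estimator unbiased.

```python
import random

def path_radiance(direct_light, brdf_weight, rng=random.random, r=0.9):
    """Sketch of the path-tracing recursion: at each vertex, add the
    direct lighting, then continue the path with probability r
    (Russian roulette); the continued estimate is divided by r."""
    def recurse(n):
        L = direct_light(n)                  # direct term L_{n,d}
        if rng() < r:                        # survive Russian roulette
            L += brdf_weight(n) * recurse(n + 1) / r
        return L
    return recurse(1)
```

With unit direct lighting and a bounce weight of 0.5 at every vertex, the expected value satisfies E = 1 + 0.5·E/0.5·r·(1/r)… i.e. E = 1 + 0.5·E, so the estimator should converge to 2 regardless of r, which is easy to verify numerically.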

3.2 Eye Path Reprojection

The basic idea of our method is to reuse the L_{n,o} values collected along a path for other pixels. Suppose we have a path (x_1, .., x_N), starting from the eye, for a pixel p. In conventional path tracing we only compute L_{1,o} for p and discard the intermediate values L_{2,o}, ..., L_{N,o}. However, along the path, we computed the radiance L_{n,o} for each of the points x_n. If we reproject a point x_n on a diffuse surface to a point q_m on the image plane (contributing to another pixel p_j), we get an additional measurement L_{p_j} (see Eq. 1), arising from a path of length N + 1 − n:

    L_{p_j} = \frac{1}{A_{p_j}} \frac{L_i(q_m, \omega_{q,m})}{\mathrm{pdf}(q_m)}    (3)

Here, pdf(q_m) is the density function that describes the distribution of the reprojected samples over the area of the pixel p_j. Since this distribution is initially unknown, we assume a uniform distribution here. The radiance L_{p_j} corresponds to a path which starts at the light source, propagates in descending order to x_n, and is finally redirected towards the viewer. In this way, a measurement for pixel p_j can be obtained with only a single intersection test for the reprojection; no new path computation is required (see Fig. 2). Please note that Russian roulette is necessary to acquire an unbiased image, especially since the path lengths N + 1 − n can vary significantly around the average of ≈ N/2. Otherwise, the reprojection image would be darker due to the shorter path lengths. The final measurement for each pixel is then computed as the weighted sum of the M path tracing samples (see Eq. 1) and the total number of M′ reprojection samples arising from Eq. 3.
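As a rough illustration of this scatter step, the sketch below projects path vertices through an assumed pinhole camera (at the origin, looking down +z, unit focal length) and bins their radiance estimates per pixel. Occlusion testing is deferred to Sec. 3.3, and all names and the camera model are illustrative, not the paper's implementation.

```python
def reproject_path(vertices, radiances, width, height, focal=1.0):
    """Scatter the intermediate radiance estimates L_{n,o} collected
    along one eye path into other pixels. Returns a dict
    {pixel: (radiance_sum, sample_count)} so the caller can normalize
    by the number M' of reprojections per pixel (Eq. 3)."""
    hits = {}
    for (x, y, z), L in zip(vertices, radiances):
        if z <= 0.0:                       # behind the camera
            continue
        u = 0.5 + focal * x / z            # perspective divide onto
        v = 0.5 + focal * y / z            # the unit image plane
        if not (0.0 <= u < 1.0 and 0.0 <= v < 1.0):
            continue                       # outside the viewing frustum
        px, py = int(u * width), int(v * height)
        s, c = hits.get((px, py), (0.0, 0))
        hits[(px, py)] = (s + L, c + 1)
    return hits
```

Vertices behind the camera or outside the frustum are simply dropped, mirroring the culled samples reported in the results section.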

3.3 GPU Reprojection

Instead of tracing a ray to test if a reprojected point is visible inside the image, occlusions can be determined on the GPU with a simple depth comparison operation (Fig. 3). We first generate a deep framebuffer that contains the 3D position and normal for each pixel. During path generation, each point x_n (n = 2..N) is placed inside a Vertex Buffer Object (VBO) with its radiance estimate as pixel color. The reprojection is performed by drawing the VBO as a set of points with an occlusion test enabled that discards all points which are not approximately inside the plane defined by the pixel position and normal (see Fig. 4). This is more accurate than a simple depth comparison. The GPU thus performs the reprojection, clipping, and the occlusion test. For all visible points, a fragment shader accumulates the reprojected radiance L_{p_j} (Eq. 3) and counts the number M′ of reprojections that occur for each pixel to correctly normalize the accumulated radiance. Each pixel in the final image is then composed of the M original paths and the M′ reprojected paths. For anti-aliasing, the reprojection image is computed at a higher resolution and downsampled before it is combined with the path tracing image (for details see Sec. 4.2).

Figure 3. Algorithm overview: First, a high resolution deep framebuffer is rendered (1). During path tracing (2), all points x_n and the corresponding radiance values L_{n,o} along a path are stored in a vertex buffer object (VBO). The points in the VBO are then reprojected into the image (3), using the deep framebuffer for correct occlusion. The final image (4) is then composed of the original path tracing image and the reprojection image. All CPU steps are shown in yellow, the GPU steps in blue. The GPU operations require only a small overhead. For proper anti-aliasing, the reprojection is performed at a higher resolution and downsampled before it is combined with the path tracing image.

Since GPU memory is limited, there is an upper bound on the size of the VBO. We therefore run our method progressively: after one path per pixel is generated, the VBO is reprojected and combined with the path tracing image, then the process is repeated and the path tracing starts again. To save memory, we do not store normals for each point, although they could be included to detect backfacing points.
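In CPU pseudocode, the plane-based acceptance test of Sec. 3.3 could look like the following sketch. The deep-framebuffer entries `pixel_pos` and `pixel_normal` are assumed given per pixel, with `pixel_normal` of unit length; the shader-side clipping and accumulation are omitted.

```python
def accept_sample(point, pixel_pos, pixel_normal, eps):
    """A reprojected point is kept only if it lies within the depth
    tolerance eps of the plane through the deep-framebuffer position
    with the stored normal (cf. Fig. 4). Misclassifications such as
    x5/x6 in Fig. 4 are inherent to this approximation."""
    d = sum((p - q) * n for p, q, n in zip(point, pixel_pos, pixel_normal))
    return abs(d) <= eps
```

The signed plane distance `d` replaces a plain depth comparison, which makes the test robust for surfaces seen at an angle.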

3.4 Pixel Skipping

Instead of improving the quality of an image, path reprojection can also be applied to significantly reduce the rendering time. If we aim at generating at least P samples per pixel and the reprojection has already generated M′ samples for a pixel, we only need to trace P − M′ conventional paths for that pixel. Furthermore, we can skip pixels entirely if enough reprojected paths are available. During rendering, the path tracer checks for each pixel whether the conventional eye paths plus the reprojected paths M′ exceed P. In that case, no new paths must be created for this pixel and the computation time is saved. As there are still at least P paths generated per pixel, we maintain an appropriate quality for the final rendering.

Figure 4. Reprojection of several samples on a pixel p: To verify whether a sample point is valid, we compute the distance to the plane at the pixel center. Point x_1 is correctly rejected because it is behind the plane. The points x_2, x_3, and x_4 are correctly accepted because they are all located inside the depth tolerance region ε. Point x_5 is also accepted because it is inside the region, but in fact it is occluded. Point x_6 is visible inside the pixel but rejected because it is outside the tolerance region. The misclassifications of x_5 and x_6 cannot be resolved at this image resolution; using a higher resolution reduces the problem.
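The per-pixel skipping rule of Sec. 3.4 amounts to a clamped difference; a minimal sketch (pixel keys and counts are illustrative):

```python
def paths_to_trace(target_P, reprojected_counts):
    """For each pixel, trace only the paths that the reprojection has
    not already supplied; pixels with M' >= P are skipped entirely."""
    return {pix: max(0, target_P - m) for pix, m in reprojected_counts.items()}
```

A real renderer would evaluate this per pixel during the progressive loop rather than over a dictionary, but the decision logic is the same.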

4 Results and Discussion

We tested our algorithm with a set of test scenes, shown in Fig. 5. The scenes are of varying size and demonstrate the strengths and weaknesses of our approach. For each scene, we show the original path tracing image, the reprojection image, and the combined image. Additionally, statistics images show the distribution of reprojected samples.

The reprojection adds new information to the image. For a path of length N, we may end up with N − 1 sub-paths of shorter length that can be used for other pixels. Effectively, we use intermediate results from one pixel's path for other pixels, but the image remains unbiased up to a user-defined discretization error. In the worst case, all N − 1 points x_n are outside the viewing frustum or occluded and do not improve the image.

For our results we used a 512² image resolution and a 2048² reprojection resolution. Russian roulette started after 8 intersections and terminated the paths with a probability of 0.1. With a quad-core CPU and an NVIDIA 8800 GTX GPU, it takes approximately 1 second to reproject 45 million samples. Please note that these samples contain the information of incoherent paths, which is often the reason for slowdowns.

The Conference Room is a good example for our technique since it is a closed scene and most parts of it are visible. The largest improvements can be observed in distant regions, because many reprojected samples fall within one pixel. In contrast, nearby regions are only slightly improved and the combined image is almost identical to the path tracing image. Note also the improved soft shadows of the area lights, since the direct lighting contribution is explicitly sampled for each reprojected intersection point.

In the textured Sponza scene, the viewer is located such that about half of the scene is inside the viewing frustum. Consequently, approximately every second point is outside the viewing frustum. Additionally, many points inside the viewing frustum are hidden behind the columns, so that finally only every fifth sample contributes to the reprojection image. The overall improvement is therefore smaller than in the Conference Room. A worst-case scenario for the reprojection would be a viewer located directly in front of a wall inside a large hall.

4.1 Pixel Skipping

Pixel skipping is additionally used in Fig. 6 to significantly reduce the rendering time. The leftmost image is a conventional path tracing image; the second image from the left combines reprojection and path tracing using pixel skipping. Due to the reprojection, only half of the eye paths are required for an overall similar image quality. Since the total rendering time depends largely on the number of eye paths, pixel skipping leads to a significantly lower computation time. In this example, the rendering time was reduced by one half.

4.2 Anti Aliasing

If the same resolution is used for the path tracing and the reprojection image, we get several misclassified samples, since a planar surface is assumed inside a pixel. As a consequence, edges appear aliased: a pixel along an edge contains several polygons with different planes, and many samples are erroneously rejected. A better image quality can be achieved with a high resolution reprojection image. Afterwards, a downsampling step is required to combine the reprojection image with the path tracing image. Fig. 7 shows reprojections with different resolutions.

The GPU rendering time is only slightly affected by these modifications: some additional time is required to render the deep framebuffer at a higher resolution and for the downsampling render pass. In contrast to path tracing, which requires at least seconds to converge, these additional GPU steps take only a few milliseconds. The time for rendering the VBO point set is not affected by the higher resolution. On current GPUs, the upper limit for the reprojection image is 8192². For high resolution path tracing images, tiling techniques can be used in combination with a geometry shader that directs each point to the corresponding camera.

Figure 5. Eye path reprojection achieves a higher image quality in a similar time. From left to right, the columns show: conventional path tracing, the reprojected samples, and a combination of both. The last column displays the distribution of the accepted reprojected samples. For the Conference Room, 16 paths per pixel were traced. A total of 60.9M samples were reprojected, with 34% accepted, 28% occluded, and 38% culled. The overhead with respect to conventional path tracing amounted to 0.8%. For the Sponza scene, 8 paths per pixel were traced, resulting in 27.9M reprojected samples; 19% of the samples were accepted, 32% were occluded, and 49% were outside the viewing frustum and thus culled. Again, the overhead with respect to conventional path tracing was 0.7%. Timings are given for an unoptimized CPU path tracer.

Figure 6. Combining path reprojection with pixel skipping achieves a similar image quality as path tracing with fewer paths. The left image shows a conventional path tracing simulation with 256 paths per pixel. The right image shows a combination of reprojection and path tracing using only 47% of the eye paths due to pixel skipping, which likewise reduces the computation time by one half. The statistics images showing the distribution of reprojected samples and eye paths reveal that more paths are generated in regions with fewer reprojections. Timings are given for an unoptimized CPU path tracer.

Figure 7. Edges in the reprojection image appear aliased when using the same resolution as in the path tracing image (left), because only a single depth value is available per pixel and many valid samples are rejected. Anti-aliasing is achieved by downsampling a higher resolution reprojection image (center: 2 × 2, right: 4 × 4).

An important issue is the correct normalization of the reprojection image. The simple solution is to directly apply Eq. 3 for each pixel in the reprojection image, assuming a uniform distribution of samples inside the pixel. However, we do not know the actual distribution of reprojected samples, and in a few cases this assumption leads to visible artifacts. This mainly affects edges where one of the polygons is viewed from a grazing angle; the sampling density can then be very high on this polygon and low in the remaining region. As shown in Fig. 8, the affected edges may appear aliased and shifted by one pixel. A more reasonable choice is to first compute an individual radiance value for each subpixel using Eq. 3 and then compute the pixel radiance as the average of the subpixel radiances. This only assumes a uniform sampling distribution inside each subpixel. Although this is not necessarily correct, and subpixels with only a few samples are weighted equally in comparison to already converged subpixels, we did not notice visible artifacts when using at least 4 × 4 subpixels. Since the downsampling requires only sums and averages, fast techniques like mipmapping and separable filters can be used.
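The two normalization strategies of Sec. 4.2 can be contrasted in a small sketch, where `samples` holds the reprojected radiance values grouped by subpixel (the data is illustrative, not from the paper):

```python
def downsample_naive(samples):
    """Weight every reprojected sample equally. With a strongly varying
    sampling density this biases the pixel towards the densely sampled
    region (the shifted, aliased edges of Fig. 8)."""
    flat = [s for sub in samples for s in sub]
    return sum(flat) / len(flat)

def downsample_subpixel(samples):
    """Average within each subpixel first, then average the subpixel
    radiances (the fix of Sec. 4.2). Empty subpixels are skipped here;
    a real implementation would fall back to the path-traced value."""
    means = [sum(sub) / len(sub) for sub in samples if sub]
    return sum(means) / len(means)

# One bright, densely sampled subpixel (e.g. the top of the box in
# Fig. 8) and three sparsely sampled dark ones.
samples = [[1.0] * 90, [0.0], [0.0], [0.0]]
```

On this data the naive average is dominated by the dense subpixel (about 0.97), while the per-subpixel average gives the intended 0.25.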

Figure 8. Downsampling aliasing problems occur if every sample is weighted equally: As shown in the statistics image, the sampling density is high on top of the box and low on the wall. Therefore, the enlarged edge appears aliased, because most of the samples for the average radiance are taken from the top of the box (center). Computing an average radiance value for each subpixel prior to averaging the radiances of all subpixels reduces the problem (right); the appearance is very similar to the path tracing image shown on the left.

To verify the correctness of our method, we computed a converged image of pure path tracing and the combined image, shown in Fig. 9.

Figure 9. Left: converged path tracing image (512 paths per pixel). Center: converged path tracing image with reprojection (4 × 4 subpixel resolution). Right: inverted absolute difference image, ×10. The differences are more pronounced at the edges due to aliasing artifacts.

4.3 Depth Tolerance

When using a high depth tolerance value ε, more samples are accepted, including samples which are not on the plane. Furthermore, occluded samples are also accepted, so objects may appear transparent; see Fig. 10 for examples. We found 0.03% of the scene extent to be a good value for ε. However, even setting ε = 0 results in an image improvement. To avoid the discretization error entirely, the reprojection can also be implemented on the CPU using a regular ray intersection. This is slower than the GPU-based method, but still faster than computing a complete path for a pixel.

Figure 10. When using a too-high tolerance value ε, edges of objects may appear transparent (left). In the center image, the tolerance value is correct. The right image shows CPU-based reprojection, tracing a visibility ray towards the camera.

4.4 Russian Roulette

To obtain a consistent reprojection image, Russian roulette must be applied. Given a path of length N, the reprojected paths have lengths 1..N − 1. Assuming a uniform distribution of reprojections, we get an average path length of ≈ N/2 in the reprojection image. Due to this shorter path length, the reprojection image is slightly darker than the original path tracing image if Russian roulette is omitted. Fig. 11 shows the difference.

Figure 11. The upper half of each image shows the path tracing image, the lower half the reprojection image. Due to the shorter path length of reprojected samples, the reprojection image can be slightly darker than the original path tracing image (left). Using Russian roulette (right), the reprojection image is unbiased and consistent with the original path tracing image.

The image improvement due to path reprojection largely depends on the path length. If longer paths are generated, the improvement becomes more pronounced, as more points are reprojected. Fig. 12 shows a comparison of different path length settings.

Figure 12. The lower half of each image shows the path tracing image, the upper half the combined result. For all images, the albedo of the surface was used as the termination probability for Russian roulette. For the images on the left, Russian roulette started after 5 intersections; on the right, after 10 intersections. The image improvement due to path reprojection increases with longer paths.

4.5 Bidirectional Path Tracing

Since we reuse existing paths and do not compute new ones, the problems of original path tracing remain: convergence is slow in scenes with caustics and with multiple bounces of light before the light reaches the eye. Bidirectional path tracing is a technique to overcome these problems, and our method can be applied there as well. The idea of path reprojection is also used in bidirectional path tracing: if the length of the eye path is set to zero, the light path is directly reprojected into the image (the so-called light image). Our technique corresponds to a light path of length zero and complements the set of possible combinations of light path and eye path, as shown in Fig. 13.

Figure 13. Bidirectional path tracing: two paths, starting from the eye and from the light, are computed and then connected at arbitrary positions. Using an eye path of length zero results in a reprojection of the light path (red lines). A reprojection of the eye path (green lines) is not part of bidirectional path tracing. As shown in the image, this complements the possible combinations of light and eye path.
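The complementarity can be made concrete by enumerating the subpath splits: a transport path with n vertices can be assembled from a light subpath with s vertices and an eye subpath with t = n - s vertices (Veach's formulation [18]). The enumeration below is only an illustration; boundary cases such as t = 0 are usually excluded in practical implementations:

```python
def bdpt_strategies(n_vertices):
    """Enumerate all (s, t) splits of a transport path into a light
    subpath with s vertices and an eye subpath with t vertices.

    t == 1 corresponds to reprojecting the light subpath into the
    image (the "light image"); s == 0 is a pure eye path connected
    to the light source, the case that eye-path reprojection reuses
    for several pixels at once.
    """
    return [(s, n_vertices - s) for s in range(n_vertices + 1)]
```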

5 Conclusion and Future Work

We demonstrated how a path tracer can use its own intermediate results to improve the image. This is accomplished by reprojecting the radiance collected at each point of the eye paths into the image. This reprojection can easily be implemented on the GPU with only a small overhead.
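The reprojection step can be sketched on the CPU as follows. The function and parameter names, the NumPy formulation, and the depth-tolerance visibility test are our illustrative assumptions; the paper's implementation runs on the GPU:

```python
import numpy as np

def reproject_vertices(vertices, radiances, view_proj, depth_buffer, res):
    """Splat the radiance collected at each eye-path vertex into the image.

    `vertices` are world-space points, `radiances` the outgoing radiance
    toward the eye estimated at each vertex, `view_proj` a 4x4
    view-projection matrix, and `depth_buffer` the primary-hit depths in
    NDC used as a simple visibility test.
    """
    w, h = res
    image = np.zeros((h, w, 3))
    counts = np.zeros((h, w), dtype=int)
    for p, radiance in zip(vertices, radiances):
        clip = view_proj @ np.append(p, 1.0)
        if clip[3] <= 0.0:
            continue                          # behind the camera
        ndc = clip[:3] / clip[3]
        if np.any(np.abs(ndc[:2]) > 1.0):
            continue                          # outside the view frustum
        x = min(int((ndc[0] * 0.5 + 0.5) * w), w - 1)
        y = min(int((ndc[1] * 0.5 + 0.5) * h), h - 1)
        # Accept the sample only if this vertex is the surface visible
        # at the pixel (up to a small depth tolerance).
        if abs(ndc[2] - depth_buffer[y, x]) < 1e-3:
            image[y, x] += radiance
            counts[y, x] += 1
    return image, counts
```

Each eye path can thus contribute to as many pixels as it has visible vertices, instead of to a single pixel only.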

As future work, we would like to integrate our reprojection technique into a real-time ray tracing system. It would be interesting to see whether the coherence of primary rays is affected by pixel skipping. We have not yet tested our method with bidirectional path tracing, environment maps, camera animations, and additional image-space filters; all of these extensions should work properly.

Glossy materials are still an open problem, since the contribution in the reprojection is very small and the BRDF has to be re-evaluated when reprojecting the point. Here, multiple importance sampling [18] could be applied to combine the conventional and the reprojected samples more efficiently.

If the camera sees only a small fraction of a large scene, the number of reprojections can be small. To increase the number of valid reprojected samples, unbiased methods that create paths preferably inside the viewing frustum would be an interesting extension. Moreover, we think that multi-pass GPU methods could be used to reproject samples for depth of field and motion blur. Due to its high speed and simple implementation, we hope that our method will find its way into many existing path tracing systems.

References

[1] James Arvo. Backward ray tracing. In ACM SIGGRAPH 1986 Course Notes - Developments in Ray Tracing, pages 259–263, 1986.

[2] Philippe Bekaert, Mateu Sbert, and John H. Halton. Accelerating path tracing by re-using paths. In Rendering Techniques, pages 125–134, 2002.

[3] David Cline, Justin Talbot, and Parris Egbert. Energy redistribution path tracing. In ACM SIGGRAPH 2005 Papers, SIGGRAPH '05, pages 1186–1195, New York, NY, USA, 2005. ACM.

[4] Robert L. Cook, Thomas Porter, and Loren Carpenter. Distributed ray tracing. SIGGRAPH Comput. Graph., 18(3):137–145, 1984.

[5] Holger Dammertz, Daniel Sewtz, Johannes Hanika, and Hendrik Lensch. Edge-avoiding a-trous wavelet transform for fast global illumination filtering. In Proceedings of HPG 2010, pages 67–75, 2010.

[6] A. Mendez Feliu, Mateu Sbert, and Laszlo Szirmay-Kalos. Reusing frames in camera animation. In 14th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2006.

[7] Iliyan Georgiev and Philipp Slusallek. RTfact: Generic concepts for flexible and high performance ray tracing. In IEEE/EG Symposium on Interactive Ray Tracing 2008, August 2008.

[8] Vlastimil Havran, Cyrille Damez, Karol Myszkowski, and Hans-Peter Seidel. An efficient spatio-temporal architecture for animation rendering. In 14th Eurographics Workshop on Rendering, 2003.

[9] Paul S. Heckbert. Adaptive radiosity textures for bidirectional ray tracing. SIGGRAPH Comput. Graph., 24(4):145–154, 1990.

[10] Henrik Wann Jensen and Niels Jørgen Christensen. Photon maps in bidirectional Monte Carlo ray tracing of complex objects. Computers & Graphics, 19(2):215–224, 1995.

[11] James T. Kajiya. The rendering equation. SIGGRAPH Comput. Graph., 20(4):143–150, 1986.

[12] Eric Lafortune and Yves Willems. Bi-directional path tracing. In Proceedings of CompuGraphics '93, pages 145–153, Alvor, Portugal, December 1993.

[13] Steven Parker, William Martin, Peter-Pike J. Sloan, Peter Shirley, Brian Smits, and Charles Hansen. Interactive ray tracing. In Symposium on Interactive 3D Graphics, pages 119–126, 1999.

[14] Steven G. Parker, James Bigler, Andreas Dietrich, Heiko Friedrich, Jared Hoberock, David Luebke, David McAllister, Morgan McGuire, Keith Morley, Austin Robison, and Martin Stich. OptiX: a general purpose ray tracing engine. ACM Trans. Graph., 29(4):1–13, 2010.

[15] Stefan Popov, Johannes Gunther, Hans-Peter Seidel, and Philipp Slusallek. Stackless kd-tree traversal for high performance GPU ray tracing. Computer Graphics Forum, 26(3):415–424, September 2007. (Proceedings of Eurographics).

[16] Mateu Sbert, Laszlo Szecsi, and Laszlo Szirmay-Kalos. Real-time light animation. Comput. Graph. Forum, 23(3):291–300, 2004.

[17] Laszlo Szecsi, Laszlo Szirmay-Kalos, and Mateu Sbert. Light animation with precomputed light paths on the GPU. In Graphics Interface, pages 187–194, 2006.

[18] Eric Veach. Robust Monte Carlo Methods for Light Transport Simulation. PhD thesis, Stanford University, Stanford, California, December 1997.

[19] Ingo Wald, William R. Mark, Johannes Gunther, Solomon Boulos, Thiago Ize, Warren Hunt, Steven G. Parker, and Peter Shirley. State of the Art in Ray Tracing Animated Scenes. In Eurographics 2007 State of the Art Reports, 2007.

[20] Bruce Walter, George Drettakis, and Steven G. Parker. Interactive rendering using the render cache. In Rendering Techniques, pages 19–30, 1999.

[21] Turner Whitted. An improved illumination model for shaded display. SIGGRAPH Comput. Graph., 13(2):14, 1979.

[22] Qing Xu and Mateu Sbert. A new way to re-using paths. In Proceedings of ICCSA - Volume Part II, pages 741–750, 2007.

[23] Yonghao Yue, Kei Iwasaki, Yoshinori Dobashi, and Tomoyuki Nishita. Global illumination for interactive lighting design using light path pre-computation and hierarchical histogram estimation. In Pacific Conference on Computer Graphics and Applications, pages 87–98, 2007.