Ray Tracing Tutorial


by the Codermind team


Contents:

1. Introduction: What is ray tracing?
2. Part I: First rays
3. Part II: Phong, Blinn, supersampling, sRGB and exposure
4. Part III: Procedural textures, bump mapping, cube environment map
5. Part IV: Depth of field, Fresnel, blobs


This article is the foreword of a series of articles about ray tracing. It's probably a term that you may have heard without really knowing what it represents, or without an idea of how to implement a raytracer in a programming language. This article and the following ones will try to fill that in for you. Note that this introduction covers generalities about ray tracing and will not go into much detail about our implementation. If you're interested in the actual implementation, formulas and code, please skip this and go to part one, right after this introduction.

What is ray tracing?

Ray tracing is one of the numerous techniques that exist to render images with computers. The idea behind ray tracing is that physically correct images are composed by light, and that light usually comes from a light source and bounces around a scene as light rays (following a broken line path) before hitting our eyes or a camera. By being able to reproduce in a computer simulation the path followed from a light source to our eye, we would then be able to determine what our eye sees.

Of course it's not as simple as it sounds. We need some method to follow these rays: nature has an infinite amount of computation available, but we do not. One of the most natural ideas of ray tracing is that we only care about the rays that hit our eyes directly, or after a few rebounds.

The second idea is that our generated images will usually be grids of pixels with a limited resolution. Those two ideas together form the basis of most basic raytracers. We place our point of view in a 3D scene, and we shoot rays exclusively from this point of view towards a representation of our 2D grid in space. We then try to evaluate the number of rebounds needed to go from the light source to our eye. This is mostly okay, because the light simulation does not need to take the direction of travel of a ray into account to be accurate. Of course it is a simplification; we'll see later why.

How are raytracers used?

The ideas behind ray tracing (in its most basic form) are so simple that we would at first like to use it everywhere. But it's not used everywhere.

Ray tracing has been used in production environments for off-line rendering for a few decades now; that is, rendering that doesn't need to finish the whole scene in less than a few milliseconds. Of course we should not generalize: several raytracer implementations have been able to hit the interactive mark. So-called real-time ray tracing is a very active field right now, as it's been seen as the next big thing that 3D


accelerators need to be accelerating. Raytracers are really liked in areas where the quality of reflections is important. A lot of effects that seem hard to achieve with other techniques are very natural using a raytracer: reflection, refraction, depth of field, high quality shadows. Of course that doesn't necessarily mean they are fast.

Graphics cards, on the other hand, generate the majority of images these days, but they are very limited at ray tracing. Nobody can say if that limitation will be removed in the future, but it is a strong one today. The alternative to ray tracing that graphics cards use is rasterization.

Rasterization has a different view on image generation. Its primitive is not the ray but the triangle. For each triangle in the scene you estimate its coverage on the screen, and then for each visible pixel touched by a triangle you compute its actual color. Graphics cards are very good at rasterization because they can do a lot of optimizations related to this. Each triangle is drawn independently of the previous one, in what we call an immediate mode. This immediate mode knows only what the triangle is made of, and computes its color based on a series of attributes like the shader program, global constants, interpolated attributes and textures. A rasterizer would, for example, typically draw reflections using an intermediate pass called render-to-texture: a previous rasterizer pass feeds into itself, but with the same original limitations, which then causes all kinds of precision issues. This amnesia and several other optimizations are what allow the triangle drawing to be fast. Ray tracing, on the other hand, doesn't forget about the whole geometry of the scene after the ray is launched. In fact it doesn't necessarily know in advance what triangles or objects it will hit, and because of interreflection they may not be constrained to

a single portion of space. Ray tracing tends to be global; rasterization tends to be local. There is no branching besides simple decisions in rasterization; branching is everywhere in ray tracing.

Ray tracing is not used everywhere in off-line rendering either. The speed advantage of rasterization and other techniques (scan line rendering, mesh subdivision and fast rendering of micro facets) has often been held against true ray tracing solutions, especially for primary rays. Primary rays are the ones that hit the eye directly, instead of having rebounded. Those primary rays are coherent, and they can be accelerated with projective math (the attribute interpolation that fast rasterizers rely upon). For secondary rays (after a surface has reflected or refracted them), all bets are off, because those nice properties disappear.

In the interest of full disclosure, it should be noted that ray tracing math can be used to generate data for rasterizers, for example. Global illumination simulation may need to shoot rays to determine local properties such as ambient occlusion or light bleeding. As long as the data is made local in the process, it will be able to help a strict immediate renderer.

Complications, infinity and recursion

Raytracers cannot and will not be a complete solution. No solution exists that can deal with true random infinity. It's often a trade-off, trying to concentrate on what makes or breaks an image and its apparent realism. Even for off-line renderers, performance is important: it's hard to decide to shoot billions of rays per millions of pixels, even if that's what the simulation seems to require. What trade-offs do we need to make? We need to decide not to follow every possible path.

Global illumination is a good example. In global illumination techniques such as photon mapping, we try to shorten the path between the light and the surface we're on. In all generality, doing full global illumination would require casting an infinite number of rays in all directions and seeing what percentage hits the light. We can do that, but it's going to be very slow; instead, we first cast photons (using the exact same algorithm as ray tracing, but in reverse, from the light's point of view) and see what surfaces they hit. Then we use that information to compute the lighting as a first approximation at each surface point. We only follow a new ray if we can get a good idea using a couple of rays (perfect reflection needs only one additional ray). Even then it can be expensive, as the tree of rays expands at a geometric rate; we often have to limit ourselves to a maximum recursion depth.

Acceleration structures

Even then, we'll want to go faster. Making intersection tests becomes the bottleneck if we have thousands or millions of intersectable objects in the scene and go for a linear search of intersections for each ray. Instead we definitely need an acceleration structure. Often used are hierarchical representations of a scene: something like a KD-tree or an octree would come to mind. The structure

    "

  • 8/17/2019 RayTracing Tutorial

    5/38

that we'll use may depend on the type of scene that we intend to render, or on constraints such as the need to update it for dynamic objects, or a limit on how much memory we can address, etc. Acceleration structures can also hold photons for photon mapping, and various other things (collisions for physics).

We can also exploit the coherence of primary rays. Some people will go as far as implementing two solutions: one that uses rasterization or a similar technique for primary rays, and ray tracing for anything beyond that. We of course need a good means of communication between both stages, and hardware that is sufficiently competent at both.

«First rays»

This is the first part of the ray tracing series of tutorials. We've seen in the introduction what a raytracer is and how it differs from other solutions. Now let's concentrate on what it would require to implement one in C/C++.

Our raytracer will have the following functionality: it will not try to be real time, and so will not take shortcuts for performance's sake only. Its code will try to stay as straightforward as possible; that means we'll try not to introduce complex algorithms if we can avoid it. We'll make sure the basic concepts explained here are visible in the code itself. The raytracer will work as a command line executable that will take a scene file such as this one and output an image such as this one:

What does the raytracer do?

First, let's start with a simple pseudo-language description of the basic ray tracing algorithm:

for each pixel of the screen
{
  final color = 0;
  repeat
  {
    for each object in the scene
    {
      determine closest ray/object intersection;
    }
    if intersection exists
    {
      for each light in the scene
      {
        if the light is not in shadow of another object
        {
          add this light's contribution to the computed color;
        }
      }
    }
    final color = final color + computed color * previous reflection factor;
    reflection factor = reflection factor * surface reflection property;
    increment depth;
  } until reflection factor is 0 or maximum depth is reached;
}

For this first part, we'll limit ourselves to spheres as objects (a center point and a radius) and point lights as light sources (a center point and an intensity).
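Before looking at real code, here is a minimal sketch of the data structures this pseudocode manipulates. The field names pos, size, start and dir match the ones used by the intersection code later in this part; the diffuse and reflection members are plausible additions of our own, not the tutorial's exact layout:

struct point   { float x, y, z; };              // a position in the 3D scene
struct vecteur { float x, y, z; };              // a direction ("vecteur" is the tutorial's French spelling)
struct color   { float red, green, blue; };     // linear RGB, one float per component
struct ray     { point start; vecteur dir; };   // dir is kept normalized
struct sphere  { point pos; float size; color diffuse; float reflection; };
struct light   { point pos; color intensity; };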

Scanning the screen

We have to cast a ray at each one of our pixels to determine its color. This translates to the following C++ code:

for (int y = 0; ...
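Only the first line of that listing survives in this scan. Here is a minimal reconstruction of what the full loop plausibly looks like, assuming scene dimensions sizex/sizey, an imageBuffer output array, and the addRay tracing function that appears by name in a later part; in this first, orthographic version, every ray is parallel to the z axis:

for (int y = 0; y < myScene.sizey; ++y)
{
  for (int x = 0; x < myScene.sizex; ++x)
  {
    // One ray per pixel: start on a plane well behind the scene and
    // point straight along +z (orthographic projection).
    ray viewRay = { { float(x), float(y), -1000.0f }, { 0.0f, 0.0f, 1.0f } };
    imageBuffer[y * myScene.sizex + x] = addRay(viewRay, myScene);
  }
}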


Types of projection change the way the rays relate to each other. But it should be noted that the projection doesn't change anything about the behavior of a single ray. So you'll see that even later, when we change our code to handle conic perspective, the rest of the code, intersection and lighting, will NOT change the least bit. That's a nice property of ray tracing compared to other methods; for example, rasterization hardware relies heavily on the properties of orthographic/conic perspective.

We set the starting point at an arbitrary position that we hope will enclose the whole visible scene. Ideally there should be no clipping, so the starting point will depend on the scene. But it should not be too far either, as too big a floating point number will cause floating point precision errors in the rendering.

When we go conic, this problem of determining the starting point will disappear, because the positions behind the observer are then ILLEGAL.

The closest intersection

The ray is defined by its starting point and its (normalized) direction, and is then parameterized by an arithmetic distance called t. It denotes the distance from any point of the ray to the starting point of the ray.

We only move in one direction on the ray, from the smaller t to the bigger t. We iterate through each object, determine if there is an intersection point, find the corresponding t parameter of this intersection, and find the closest one by taking the smallest t that is positive (which means it's in front of our starting point, not behind).

Sphere-ray intersection

Since we're only dealing with spheres, our code only contains a sphere-ray intersection function. It is one of the fastest and simplest intersection routines that exists; that's why we start with spheres here. The math itself is not that interesting: we introduce our parameter into the sphere's equation, which gives us a second degree equation with one unknown, t.
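Spelled out (our own write-up, with "." as the dot product and dir normalized): a point of the ray is P(t) = start + t*dir, and it lies on the sphere when |P(t) - pos|^2 = size^2. With dist = pos - start, this expands to

t^2 - 2*(dir . dist)*t + (dist . dist) - size^2 = 0

so with B = dir . dist, the (quarter) discriminant is D = B^2 - dist.dist + size^2, exactly the quantities the code below computes, and the two roots are t = B ± sqrt(D).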


We compute the delta (discriminant) of that equation: if the delta is less than zero there is no solution, hence no intersection; if it's greater than zero there are at most two solutions, and we take the closest one. If you want more details on how we got there, please consult our Answers section ("What is the intersection of a ray and a sphere?"). Here is the code of that function:

vecteur dist = s.pos - r.start;
double B = r.dir * dist;
double D = B*B - dist * dist + s.size * s.size;
if (D < 0.0f) ...

We take the closest one, but it has to be further than the starting point; that is why we compare t0 and t1 to a small positive threshold instead of zero.
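The listing above is cut off by the page break. Here is a reconstruction of the complete function, built from the surviving fragment; the 0.01f epsilon is our own choice for the "small positive threshold", and "*" between two vecteurs is the dot product, as in the fragment:

bool hitSphere(const ray &r, const sphere &s, float &t)
{
   vecteur dist = s.pos - r.start;        // from the ray origin to the sphere center
   float B = r.dir * dist;                // projection of dist onto the ray direction
   float D = B * B - dist * dist + s.size * s.size;
   if (D < 0.0f)
      return false;                       // the ray misses the sphere
   float t0 = B - sqrtf(D);               // nearer root
   float t1 = B + sqrtf(D);               // farther root
   bool retvalue = false;
   if ((t0 > 0.01f) && (t0 < t)) { t = t0; retvalue = true; }
   if ((t1 > 0.01f) && (t1 < t)) { t = t1; retvalue = true; }
   return retvalue;                       // t now holds the closest valid hit
}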


To compute shadows, we cast a new ray from the intersection point towards each light source and test it against the objects of the scene, up to the light's distance. As soon as we find an intersection point between this new ray and any object in the scene, we stop the iteration, because we know that the object is in the light's shadow.

Lambert

Lambert was a European mathematician of the 18th century. Lambert's cosine law is a relation between the angle an incoming light ray makes with a surface, the light intensity of this ray, and the amount of light energy received per unit of surface. In short, if the angle is low (grazing light), the received energy is low, and if the angle is high (zenith), the energy is high. That law will, for example, in an unrelated topic, explain why the temperature at the equator is higher than the temperature at the earth's poles.

How is that important to us? If we are given a perfectly diffusing surface, that's the only thing that matters to compute the lighting. The property of diffusion is that the energy received from one direction is absorbed and partially re-emitted in all directions. A perfectly diffuse surface will emit the same amount of energy in all directions. That important property is why the perceived lighting of a perfectly diffuse surface depends only on the angle of the incident light with that surface: it is totally independent of the viewing angle. Such a material is also called Lambertian, because its lighting is only determined by Lambert's law described above (that's the only type of material that we support in this first chapter; for more complex models see the following chapters). On the illustration below, you can see the incoming light and the redistributed energy; it is constant in all directions, which is why it's represented as a perfect circle.

When we hit such a surface in our raytracer code, we compute the cosine of the angle theta that the incoming ray makes with the surface (via the normal):

float lambert = (lightRay.dir * n) * coef;

Then we multiply that Lambertian coefficient by the diffuse color property of the surface; that gives us the perceived lighting for the current viewing ray.
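As a concrete sketch of that multiplication (our own reconstruction: the red/green/blue accumulators and the current light variable are assumptions, while currentMat follows the name used later in the bump mapping code):

red   += lambert * current.intensity.red   * currentMat.diffuse.red;
green += lambert * current.intensity.green * currentMat.diffuse.green;
blue  += lambert * current.intensity.blue  * currentMat.diffuse.blue;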

    Reflection

What would ray tracing be without that good old perfect (though unrealistic) reflection? In addition to the diffuse color attribute described above, we'll give our surface a reflection coefficient. If this coefficient is zero the surface reflects nothing; if it is one, it behaves as a perfect mirror. When a reflective surface is hit, we relaunch the ray along the reflected direction and keep iterating. Since two facing mirrors could bounce a ray forever, we limit the number of


iterations per pixel (for example, no more than ten iterations in the current code). Of course when we hit that limitation the lighting will be wrong, but as long as we have no perfect reflection (less than a hundred percent), each iteration contributes less than the previous one to the overall lighting. So hopefully cutting off after the tenth iteration will not cause too many visual issues. We could alter that if we were to render a special scene that requires more (a hall-of-mirrors type of scene, for example).

Digression: computing successive reflections seems to be a natural fit for a recursive algorithm straight from the book, but in practice, and in our case, we preferred to go for an iterative algorithm. We save on C stack usage, but it will also require us to make some trade-offs in flexibility later.
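Concretely, the end of each iteration of that loop looks something like this sketch (our reconstruction; n is the surface normal at the hit point, while ptHitPoint and coef follow names that appear in later parts of the tutorial):

float reflet = 2.0f * (viewRay.dir * n);   // twice the projection of the ray onto the normal
viewRay.start = ptHitPoint;                // restart the ray at the surface...
viewRay.dir = viewRay.dir - reflet * n;    // ...along the mirrored direction
coef *= currentMat.reflection;             // attenuate every later contribution
level++;                                   // loop again until coef is 0 or level hits the maximum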

The lighting models that we use here (Lambert + perfect reflection) are way too limited to render complex natural materials. Other types of models (Phong, Blinn, anisotropic, BRDF theory...) need to be introduced to account for more. We'll come back to this later.

As we saw previously in our first part, a perfectly diffuse material sends incident light back uniformly in all directions. At the other end of the spectrum, a perfect mirror only sends light in the direction opposite the incident ray.

Reality is not as black and white as that; real object surfaces are often complex and difficult to model simply. I will describe here two empirical models of shiny surfaces that are not pure mirrors.

This is the second part of our series of articles about ray tracing in C++. It follows the part called First rays.

Phong

Bui Tuong Phong was a student at the University of Utah when he invented the process that took his name (Phong lighting). It is a quick and dirty way to render objects that reflect light in a privileged direction without a full-blown reflection. In most cases that privileged direction is that of the light vector, reflected by the normal vector of the surface.

The common usage is to describe that as the specular term, as opposed to the diffuse term from the Lambert formula. The specular term varies based on the position of the observer, as you can see below in the equation that takes viewRay.dir into account:

float reflet = 2. ...
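The listing is cut off after its first characters in this scan. Here is a reconstruction of the usual Phong term under the tutorial's conventions (specvalue and specpower are assumed material field names; std::max needs <algorithm>):

float reflet = 2.0f * (lightRay.dir * n);
vecteur phongDir = lightRay.dir - reflet * n;              // light direction mirrored by the normal
float phongTerm = std::max(phongDir * viewRay.dir, 0.0f);  // cosine with the view direction, clamped
phongTerm = currentMat.specvalue * powf(phongTerm, currentMat.specpower) * coef;
red   += phongTerm * current.intensity.red;
green += phongTerm * current.intensity.green;
blue  += phongTerm * current.intensity.blue;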


Digression: an amusing story about Phong lighting is that in the papers Bui Tuong Phong wrote, his name appears in this order, which is the traditional order in Vietnam (and sometimes also in other countries where the given name is quoted after the family name). But for most English-speaking readers that order is reversed, and so people assumed his last name was Phong. Otherwise his work would have been known as the Bui Tuong lighting.

Blinn-Phong

A possible variation on the specular term is the derived version based on the work of Jim Blinn, who was at the time a professor for Bui Tuong Phong in Salt Lake City and is now working for Microsoft Research.

Jim Blinn added some physical considerations to the initial empirical result. This goes through the computation of an intermediate vector at the midpoint between the light direction and the viewer's (Blinn's vector), then through the computation of the dot product of this Blinn vector with the normal to the surface. A comparison with the previous formula shows that the Blinn term is likewise maximal when the view ray is the reflection of the light ray: it is a specular term.

vecteur blinnDir = lightRay.dir - viewRay.dir;
float temp = sqrtf(blinnDir * blinnDir);
if (temp != ...
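Again the scan truncates the listing; here is a hedged completion. Both lightRay.dir (towards the light) and viewRay.dir (away from the eye) are unit vectors, so their difference points halfway between the light and the viewer:

vecteur blinnDir = lightRay.dir - viewRay.dir;
float temp = sqrtf(blinnDir * blinnDir);
if (temp != 0.0f)
{
   blinnDir = (1.0f / temp) * blinnDir;               // normalized halfway (Blinn) vector
   float blinnTerm = std::max(blinnDir * n, 0.0f);    // cosine with the normal, clamped
   blinnTerm = currentMat.specvalue * powf(blinnTerm, currentMat.specpower) * coef;
   red   += blinnTerm * current.intensity.red;
   green += blinnTerm * current.intensity.green;
   blue  += blinnTerm * current.intensity.blue;
}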


    Antialiasing

There are numerous methods aimed at reducing aliasing. This article is not meant to be a complete description of what anti-aliasing does, or a complete directory of all the methods. For now we'll just describe our current solution, which is based on basic supersampling.

Supersampling is the idea that you can take more individual color samples per pixel in order to reduce aliasing problems (mostly the staircase effect and, to some extent, the moiré effect). In our case, for a final image of X by Y resolution, we render at the higher resolution of 2X by 2Y, then take the average of four samples to get the color of one pixel. This is effectively a 4x supersampling, because we're computing four times more rays per pixel. It is not necessary that those four extra samples be within the boundary of the original pixel, or that the averaging be a regular arithmetic average. But more on that later.

You can see in the following code how it goes. For each pixel that we have to compute we effectively launch four rays, but each ray sees its contribution reduced to a quarter. The relative position of each sample is free, but for simplicity's sake, for now we just order them on a regular grid (as if we had computed a four times larger image).

for (int y = 0; ...
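The listing again survives only as its first line. Here is a reconstruction of a regular-grid 4x supersampler, reusing the conventions of the earlier screen loop (half-pixel steps produce the four samples, each weighted by a quarter):

for (int y = 0; y < myScene.sizey; ++y)
{
  for (int x = 0; x < myScene.sizex; ++x)
  {
    color output = {0.0f, 0.0f, 0.0f};
    for (float fragmentx = float(x); fragmentx < x + 1.0f; fragmentx += 0.5f)
    {
      for (float fragmenty = float(y); fragmenty < y + 1.0f; fragmenty += 0.5f)
      {
        ray viewRay = { { fragmentx, fragmenty, -1000.0f }, { 0.0f, 0.0f, 1.0f } };
        color sample = addRay(viewRay, myScene);
        output.red   += 0.25f * sample.red;    // each of the four samples
        output.green += 0.25f * sample.green;  // contributes a quarter
        output.blue  += 0.25f * sample.blue;
      }
    }
    imageBuffer[y * myScene.sizex + x] = output;
  }
}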


Gamma function

Originally, CRTs didn't have a good restitution curve; that is, the relation between a color stored in a frame buffer and its perceived intensity on the screen would not be linear. Usually you would instead have something like Iscreen = Pow(I, gamma), with gamma greater than 1.

In order to correctly reproduce linear gradients or anything that relies on that linear relation (for example dithering and anti-aliasing), you would then have to take this function into account when doing all your computations. In the case above, for example, you would have to apply the inverse function Pow(I, 1/gamma). Nowadays that correction can be done at any point in the visualization pipeline; modern PCs even have dedicated hardware in their DAC (digital to analog converter) that can apply this inverse function to the color stored in the frame buffer just before sending it to the monitor. But that's not the whole story.

Providing a file format calibrated in advance for all kinds of configurations (monitors, applications, graphics cards) that could exist out there is the squaring of the circle: it's impossible. Instead we rely on conventions and explicit tables (the jpeg format has profiles that describe what kind of response the monitor should give to the encoded colors; the application layer can then match that to the profile provided for the monitor). For the world wide web and Windows platforms a standard has been defined, called sRGB. Any file or buffer that is sRGB encoded has had its color raised to the power of 1/2.2 before being written out to the file. In our raytracer we deal with a 32 bit float per color component, so we have a lot of extra precision; that way we can apply the sRGB transform just before downconverting to 8 bit integers per component.

Here is an image that is sRGB encoded. Your monitor should have been calibrated to minimize the perceived difference between the solid gray parts and the dithered black and white pattern. (It will be harder to calibrate your LCD than your CRT because, on typical LCDs, the viewing angle impacts the perceived relation between intensities, but you should hopefully be able to find a spot that is good enough.)

What would be the interest of encoding images in sRGB format, rather than having them encoded in a linear color space and letting the application/graphics card apply the necessary corrections at the end? Well, other than the fact that it is the standard, and that having your png or tga file in an sRGB space ensures that a correctly calibrated viewer will see them as you would expect, sRGB can also be seen as a form of compression. The human eye is more sensitive to lower intensities, so when reducing the 32 bit color to an 8 bit value you have less risk of visible


banding when reproducing them, because the pow(1/2.2) transform gives more precision to the lower intensities. (Of course that's a poor compression; anything other than TGA is better, like jpeg or png, but since we output a tga file, the losses are already factored in.)

Here is the corresponding code:

float srgbEncode(float c)
{
  if (c <= ...
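The fragment appears to be the standard piecewise sRGB transfer function (a linear toe below a small threshold, then a 1/2.4 power law; the often-quoted 1/2.2 is an approximation of the combined curve). A reconstruction:

float srgbEncode(float c)
{
   if (c <= 0.0031308f)
      return 12.92f * c;                               // linear segment near black
   else
      return 1.055f * powf(c, 0.4166667f) - 0.055f;    // 0.4166667f = 1.0f / 2.4f
}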


The exposure step maps our high dynamic range intensities into the displayable range before the sRGB conversion. Here is the corresponding code:

float exposure = -1. ...
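Only the first characters of the listing survive. Judging from the fragment, and from the inverse transform applied later to the cube map textures (-logf(1 - x)), the exposure most plausibly maps linear intensities into [0, 1) with an exponential curve. A hedged reconstruction:

float exposure = -1.00f;                       // strength of the simulated film response
output.red   = 1.0f - expf(output.red   * exposure);
output.green = 1.0f - expf(output.green * exposure);
output.blue  = 1.0f - expf(output.blue  * exposure);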


Such halos can appear because of color leaking in the whole photographic process, or because of small defects in our eyes, but most of the time they are added for dramatic effect. We won't use those in our program, but you could add them as an exercise. Here is the output of the program with the notions brought up in this page:

Now that we've seen how to treat some simple surface properties, we will try to add a few more details to our scenes. One of the simplest ways to add details is via the use of textures.

I won't cover texturing in its full, exhaustive detail, but I will detail here two concepts: one is the procedural texture, and the other is the environment texture, or cubemap.

This is the third part of our series of articles about ray tracing in C++. It follows the part titled Specularity, supersampling, gamma correction and photo exposure.

Perlin noise

Ken Perlin is one of the pioneers of the image synthesis field. He contributed to movies such as Tron, and his famous noise is one of the most used formulas in the world of graphics. You can visit his webpage to learn more about his current projects: http://mrl.nyu.edu/~perlin/ . It's also on that website that you can find the source code for his noise algorithm, which is remarkably simple. Here is the C++ version that I used, directly translated from the original version in Java:

struct perlin
{
   int p[512];
   perlin(void);
   static perlin & getInstance() { static perlin instance; return instance; }
};


static int permutation[256] = { 151, ...


  lerp(u, grad(myPerlin.p[AB+1], x, y-1, z-1),
          grad(myPerlin.p[BB+1], x-1, y-1, z-1))));
}

perlin::perlin(void)
{
   for (int i = 0; i < 256; ++i)
      p[256 + i] = p[i] = permutation[i];
}

What this code does is create a kind of pseudo-random noise that can be visualized in one, two or three dimensions. Noise that looks random enough is indispensable to break the visible patterns we would otherwise get by relying on hand-made content only. Ken Perlin also contributed some nice procedural models, as we'll see with the following examples.

Procedural textures

If we were to populate our 3D world with objects, we would have to create geometry, but also textures. Textures contain spatial data that we use to modulate the existing appearance of our geometry. To have a sphere represent the Earth, there is no need to create geometric detail with the right color properties on the sphere. We can keep our simple sphere and instead map a diffuse texture on the surface that modulates the lighting with the local properties of the Earth (blue in the oceans, green or brown on firm ground). This is good enough as a first approximation (as long as we don't zoom in too much). The Earth textures would have to be authored, most likely by a texture artist. This is generally true for all objects in a scene: the detail needed to make it lifelike is too important to rely on geometric detail alone (and textures can also be used to compress geometric details, see displacement mapping).

Procedural textures are a subset of textures: ones that don't have to be authored by a texture artist. Actually, somebody still has to design them and place them on objects, but he wouldn't have to decide the color of each of the texture's components, the texels. That's because a procedural texture is generated from an implicit formula rather than fully described. Why would we want to do that? Well, there are a few advantages. Because the image is not fully described, we can represent a lot more detail without using a lot of storage. Some formulas (such as fractals) actually allow for infinite levels of detail, something that a hand-drawing artist, even working full-time, cannot achieve. There is also no limitation on the dimensions that the texture can have: for our Perlin noise, it's as efficient to have three dimensional textures as it is to have two dimensional ones. For a fully described texture, storage requirements would grow as N^3, and the authoring would be made much more difficult. Procedural textures can achieve cheap, infinite detail that does not suffer from mapping problems (because we can texture three dimensional objects with a three dimensional texture, instead of having to figure out a UV mapping of a two dimensional image onto the surface).

We can't use procedural textures everywhere. We are limited by the formulas that we have implemented, which can try to mimic an existing material; but as we've seen, if we wanted to represent the Earth, we would be better off with somebody working directly from a known image of the Earth. Also, an advantage and an inconvenience of procedural texturing is that everything is computed in place. You don't compute what is not needed, but accelerating things by computing them offline can prove more difficult (you can do so, but you lose the quality-over-storage advantage). Procedural content still has a lot to say in the near/far future, because labor intensive methods can only get you as far as your access to that cheap labor. Also, you shouldn't base your impression of what procedural generation can do on these examples alone, as the field has led to many applications that are better looking and more true to nature than what I'm presenting in this limited tutorial. But let's stop the digression now.

Turbulent texture


Our code now contains a basic implementation of an effect that Ken Perlin presented as an application of his noise: the turbulent texture. The basic idea of this texture, and of several similar ones, is that you can easily add detail to the basic noise by combining several versions of it at different resolutions. Those are called harmonics, like the multiple frequencies that compose a sound. Each version is attenuated so that the result is not pure noise, but rather adds some additional shape to the previous, lower resolution texture.

case material::turbulence:
{
   for (int level = 1; level < ...
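The rest of the listing is lost to the page break. A hedged reconstruction (diffuse2, output and the overloaded color arithmetic are assumptions of this sketch):

case material::turbulence:
{
   float noiseCoef = 0.0f;
   // Ten harmonics: the level factor raises the spatial frequency while
   // the 1/level factor attenuates the amplitude of each octave.
   for (int level = 1; level < 10; level++)
   {
      noiseCoef += (1.0f / level)
         * fabsf(float(noise(level * 0.05 * ptHitPoint.x,
                             level * 0.05 * ptHitPoint.y,
                             level * 0.05 * ptHitPoint.z)));
   }
   // Blend two material colors by the accumulated noise value.
   output += coef * (noiseCoef * currentMat.diffuse
                     + (1.0f - noiseCoef) * currentMat.diffuse2);
   break;
}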


Bump mapping

The lighting reacts to the normal directions, so even if the bumps are not really there, we can still alter the normals themselves. The code that follows reads the scale of the bumps from our scene file, then displaces the normal of the affected surface material by a vector computed from the Perlin noise. Before doing the actual lighting computation, we make sure our normal is normalized, that is, that its length is one, so that we don't get distortion in the lighting (we don't want amplification, just perturbation). If the length isn't one, we correct it by taking the current length and dividing the normal (all three coordinates) by it. By definition the result has a length of one (and it keeps the same direction as the original).

if (currentMat.bump)
{
   float noiseCoefx =
      float(noise( ...
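A hedged reconstruction of that bump listing, following the renormalization described in the paragraph above (the 0.1 noise scale is our own guess):

if (currentMat.bump != 0.0f)
{
   // Three decorrelated noise reads, one per normal component.
   float noiseCoefx = float(noise(0.1 * ptHitPoint.x, 0.1 * ptHitPoint.y, 0.1 * ptHitPoint.z));
   float noiseCoefy = float(noise(0.1 * ptHitPoint.y, 0.1 * ptHitPoint.z, 0.1 * ptHitPoint.x));
   float noiseCoefz = float(noise(0.1 * ptHitPoint.z, 0.1 * ptHitPoint.x, 0.1 * ptHitPoint.y));

   vNormal.x = (1.0f - currentMat.bump) * vNormal.x + currentMat.bump * noiseCoefx;
   vNormal.y = (1.0f - currentMat.bump) * vNormal.y + currentMat.bump * noiseCoefy;
   vNormal.z = (1.0f - currentMat.bump) * vNormal.z + currentMat.bump * noiseCoefz;

   // Renormalize: we want perturbation, not amplification.
   float temp = vNormal * vNormal;
   if (temp != 0.0f)
      vNormal = (1.0f / sqrtf(temp)) * vNormal;
}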


Cubic environment mapping

The image synthesis world is all about illusions and shortcuts. Instead of entertaining the idea of rendering the whole world that surrounds our scene, here is a much better one. We can imagine that, instead of being in a world full of geometry, light sources and so on, we are inside a big sphere (or any surrounding object), big enough to contain our whole scene. This sphere has some image projected onto it, and this image is the view of the world in all existing directions. From our rendering point of view, it doesn't matter where we are, as long as the following properties hold: the environment is far enough away that small translation movements in the scene won't alter the view in a significant way (no parallax effect), and the outside environment is static and doesn't move by itself. That certainly wouldn't fit every situation, but there are still some cases where this applies.

Well, in my case, I'm just interested in introducing the concept to you and getting cheap backgrounds without having to trace rays into them. The question is: how do you map the environment onto the big sphere? There are several answers to that question, but a possible one is cube mapping. So instead of texturing a sphere, we'll texture a cube.

Why a cube? There are multiple reasons. The first is that it doesn't really matter what shape the environment has. It could be a cube, a sphere, a dual paraboloid; in the end, all that matters is that if you send a ray towards that structure, you get the right color/light intensity, as if you were in the environment simulated by that structure. Second, rendering to a cube is very simple: each face is similar to our planar projection, the cube being constituted of six different planar projections. Also, storage is easy: the faces of the cube are two dimensional arrays that can be stored easily and with limited distortion. Here are the six faces of the cube put end to end:


Not only that, but pointing a ray at that cube environment and finding the corresponding color is dead easy. We first have to find which face we're pointing at, but that's pretty simple with basic math, as seen in the code below:

color readCubemap(const cubemap & cm, ray myRay)
{
   color * currentColor;
   color outputColor = { ...


   if (myRay.dir.z > ...


   // Clamping is the only implemented addressing type for now.
   // Clamping is done by bringing anything below zero
   // to the coordinate zero,
   // and everything beyond one, to one.
   umin = min(max(umin, 0.0f), 1.0f);
   ...
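The scan keeps only scraps of the face-selection logic. As a rough sketch of the idea (our own reconstruction, not the tutorial's exact listing): pick the axis with the largest absolute component of the ray direction, then divide the two other components by it to get (u, v) inside that face. The sign conventions per face vary between cubemap layouts, and sampleFace is a hypothetical bilinear fetch:

color readCubemapSketch(const cubemap &cm, const ray &myRay)
{
   float x = myRay.dir.x, y = myRay.dir.y, z = myRay.dir.z;
   float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
   int face; float u, v;
   if (ax >= ay && ax >= az) {          // +X or -X face
      face = (x > 0.0f) ? 0 : 1;
      u = 0.5f * (1.0f + z / ax);  v = 0.5f * (1.0f + y / ax);
   } else if (ay >= az) {               // +Y or -Y face
      face = (y > 0.0f) ? 2 : 3;
      u = 0.5f * (1.0f + x / ay);  v = 0.5f * (1.0f + z / ay);
   } else {                             // +Z or -Z face
      face = (z > 0.0f) ? 4 : 5;
      u = 0.5f * (1.0f + x / az);  v = 0.5f * (1.0f + y / az);
   }
   return sampleFace(cm, face, u, v);   // hypothetical texture fetch in [0,1]^2
}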

Annex: Corrections on texture read

A texture like our cubic environment map is a light source like any other, and should be controlled just as finely. First, the storage of our texture usually happens after the original data has been exposed (if it is an 8 bits per component image) and encoded as an sRGB image (because it is adapted to the viewing conditions on your PC). But internally, all our lighting computations happen in a linear space, before any exposure takes place. If we don't take those elements into account, reading those images and integrating them into our rendering will be slightly off.

So in order to take that into account, each sample read from our cube map is corrected like this:

if (cm.bsRGB)
{
   // We make sure the data that was in sRGB storage mode is brought back to a
   // linear format. We don't need the full accuracy of the srgbEncode function,
   // so a powf should be sufficient enough.
   outputColor.blue  = powf(outputColor.blue,  2.2f);
   outputColor.red   = powf(outputColor.red,   2.2f);
   outputColor.green = powf(outputColor.green, 2.2f);
}

if (cm.bExposed)
{
   // The LDR (low dynamic range) images were supposedly already
   // exposed, but we need to make the inverse transformation
   // so that we can expose them a second time.
   outputColor.blue  = -logf(1.001f - outputColor.blue);   // 1.001f avoids log(0) on pure white
   outputColor.red   = -logf(1.001f - outputColor.red);
   outputColor.green = -logf(1.001f - outputColor.green);
}


For this page and the following ones you will also need the extra files in the archive called textures.rar. It contains the cube map files used to render those images.

After dealing with surface properties in our previous articles, we'll go a bit more in depth into what a raytracer allows us to do easily.

One of the interesting properties of ray tracing is that it is independent of the camera model, which makes it possible to simulate advanced camera models with only slight code modifications. The model we'll explore in this page is that of the pinhole camera with a conic projection and a depth of field. Another interesting property is that, contrary to a lot of other rendering methods such as rasterization, adding implicit objects is a matter of defining the object's equation and intersection; in some cases (spheres), it is easier to compute an intersection with the implicit object than with the equivalent triangle based one. Spheres are one possible implicitly defined primitive; we'll add a new one, which we'll call blobs. Finally, ray tracing is good for reflections, as we've seen in our previous articles, but it can just as easily simulate another physical effect called refraction. Refraction is a property of transparent objects, allowing us to draw glass-like or water-like surfaces.

This is the fourth part of our ray tracing in C++ article series. It follows the one titled Procedural textures, bump mapping, cube environment map.

    Camera model and depth of field

What follows is an extreme simplification of what exists in our eye or in a camera.

A simplified camera is made of an optical center, that is, the point where all the light rays converge, and a projection plane. This projection plane can be placed anywhere, even inside the scene, since this happens to be a simulation on a computer and not a true camera (in a true camera, the projection plane would be on the film or CCD). The camera also contains an optical system whose job is to converge the rays that pass through the diaphragm. One important thing to understand about the physical model of cameras is that they have to let a full beam of light pass, big enough that the film or the retina is inscribed in it. The optical system must be configured so that the point of interest in the scene is sharp on the projection plane. This is often mutually exclusive: there can't be two points, one far and one near, that are both sharp at the same time. In photography we call that the depth of field. Our simplified camera model allows us to simulate the same effect for improved photo-realism. But the main advantage we have, compared to a real life photographer, is that we can easily do without the constraints of the physical system.

The blur caused by the depth of field is a bit expensive to simulate because of the added computation. The reason is that in our case we'll simulate that effect with a stochastic method: we'll send a lot of pseudo-random rays to try to accommodate the complexity of the problem. The idea behind this technique is to imagine we shoot a photon ray through our optical system; that ray will be slightly deviated from the optical center because of the aperture of the diaphragm. The larger the hole at the diaphragm, the bigger the deviation can be. But for simplicity's sake, we'll consider the optical system as a black box. Some people


have successfully managed to render images with accurate representations of lenses, etc., but that is not our goal here in this limited tutorial. The only things we care about from now on are at what distance from our optical center the points of the scene will be seen as sharp, and from which direction our photon was hitting our projection plane, so that we can follow its reverse path.

We need to use the geometrical properties of optical systems. Those are called the invariants. First, any ray passing through the optical center itself is not deviated from its course. Second, two rays crossing the sharp plane at the same point will converge to the same point on the projection plane; this is by definition of the sharp plane. Those two properties will allow us to determine the reverse path of each sampled photon.


   }
   color rayResult = addRay(viewRay, myScene, context::getDefaultAir());
   fTotalWeight += 1.0f;
   temp = ...
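Only scraps of that listing survive the scan. Here is a hedged sketch of the idea: for each pixel, jitter the ray origin inside the aperture and re-aim every sample at the same point of the sharp plane, so that only that plane stays in focus. The x, y and ptSharpPoint values come from the enclosing pixel loop, and uniformRand and rayThroughSharpPlane are hypothetical helpers of this sketch:

color output = {0.0f, 0.0f, 0.0f};
float fTotalWeight = 0.0f;
for (int sample = 0; sample < 128; ++sample)     // more samples, less noise
{
   // uniformRand() returns a pseudo-random float in [0, 1).
   float dx = (uniformRand() - 0.5f) * myScene.aperture;
   float dy = (uniformRand() - 0.5f) * myScene.aperture;
   // rayThroughSharpPlane builds the ray that starts at the jittered
   // origin and passes through ptSharpPoint, the unjittered ray's
   // intersection with the sharp plane.
   ray viewRay = rayThroughSharpPlane(x + dx, y + dy, ptSharpPoint);
   color rayResult = addRay(viewRay, myScene, context::getDefaultAir());
   output.red   += rayResult.red;
   output.green += rayResult.green;
   output.blue  += rayResult.blue;
   fTotalWeight += 1.0f;
}
output.red   /= fTotalWeight;   // average of all the samples
output.green /= fTotalWeight;
output.blue  /= fTotalWeight;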


And here is the result when the sharp plane is put at a farther distance. If the distance of the sharp plane is very large compared to the size of the aperture, then only the landscape (at infinity) is seen as sharp. For our eye and for a camera, infinity can be only a few feet away; it's only a matter of perception.


To get this level of quality without too much noise in the blur, we need to send a large number of rays, more than one hundred per pixel in the image above. The time needed to render scales almost linearly with the number of rays: if it takes one minute with one ray per pixel, then it will take one hundred minutes with one hundred rays per pixel. Of course this image took less than that (around several minutes at the time). Your mileage may vary; it all depends on the complexity of the scene, the amount of optimization that you could put into the renderer, and of course the performance of the CPU you're running it on.

The contributions of all the rays for a pixel are added together before the exposure and gamma correction take place. This is unlike the antialiasing computation. The main reason is that the blur is the result of the addition of all photons before they hit the eye or the photosensitive area. In practice, a very bright but blurry point leaves a bigger trace on the image than a darker one. This is not a recent discovery: painters like Vermeer already exploited such small details to improve the rendering of their paintings. Vermeer, like other people of his time, had access to the dark chamber (camera obscura), which profoundly changed the perception of lighting itself.

Isosurfaces (Blobs)

An iso-surface is a non planar surface that is defined mathematically as the set of points where a field of varying potential has the exact same value. The field has to vary continuously, or the definition does not constitute a surface. We suppose this scalar field has the following properties:

- The fields that we consider are additive. That means that if we have the field created by a source A and the field created by a source B, and we put those two sources together, then the resulting field is simply the addition, at each point of space, of the fields of A and B. In practice, a lot of fields in real life act this way. This is true for the magnetic field, the electric field and the


gravitational field, so the domain is well known. So if we wanted to represent an iso-gravitational surface, we could use this method or a similar one.

- The potential field created by a single point source is well known: the potential is simply the inverse of the square distance to the source point. Thanks to this and the property above, we can easily define the potential field created by N source points.

- This isn't a property but a mere empirical observation: we don't need a perfectly accurate representation of the iso-surface. We allow approximations in the rendering.

Below is a representation of 2D iso-potential curves (in red). This image actually represents the field of three identical attractors in a gravitational field. Each single red line represents the iso-potential for a different value; in blue, we have the direction of the gradient of the scalar field. The gradient, in that case, is also the direction of the gravitational force.

What are the justifications for the above properties? There are none. This is just one possible definition of a blob; you will probably be able to find other ones. Also, you should note that for a single given source point, the iso-potential surfaces form a spherical shape. Only if there is a second (third, etc.) source in the area is the sphere deformed. That's why we'll sometimes see our blobs called meta-balls: they are like spherical balls until they get close to each other, in which case they tend to fuse, as their potential fields add up.

Now let's examine our code implementation. Adding each potential and trying to solve the potential = constant equation directly would be a bit difficult. We'd actually end up solving polynomial equations of the nth degree, which would exceed what we want to do for our series of articles. What we'll do instead is go the approximate way, but hopefully one close enough for our limited scenes.

If we take the potential function f(r) = 1/r^2, we can approximate this rational function as a set of linear functions, each of them operating on a different subdivision of space. If we divide the space around each source point with concentric spheres, then between two concentric spheres we can treat 1/r^2 as close to a linear function f(r^2) = a*r^2 + b. Plus, as we go further along the radius, at some point the potential for that source point becomes so close to zero that it has no chance of contributing to the iso-surface. Outside of that sphere, it's as if only the N-1 other source points contributed. This is an easy optimization that we can apply to quickly cull source points. The image below illustrates how our 1/r^2 function gets approximated by a series of linear functions in r^2.


The image below illustrates how our view ray has to go through the field of potential. The potential function is null at the start, so the coefficients a, b, c are zero, zero, zero. Then the ray hits the first outer sphere. At that point, with our approximation from above, the 1/r^2 potential becomes a polynomial of the first degree in r^2, r being defined as the distance between a point on our ray and the first source point. This squared distance can be expressed in terms of t, the arithmetic distance along our ray. The potential then becomes a polynomial function of the second degree in t: a*t^2 + b*t + c. We can then solve the equation a*t^2 + b*t + c = constant to see if the ray has hit the iso-surface in that zone. Luckily for us, it's a second degree equation with t as an unknown, so it's very easy to solve (same complexity as our ray/sphere intersection described in the first part). We have to do that same intersection test for each new zone that the ray encounters. Each zone is delimited by a sphere; that sphere will modify the equation by adding or subtracting a few terms, but never by changing the shape of the equation. In fact, this is by design: it guarantees that the solution to our equation will be a continuous iso-potential surface.
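To make the change of variable explicit (our own write-up, with "." as the dot product and dir normalized): for a source point S,

r^2(t) = |start + t*dir - S|^2
       = t^2 + 2*(dir . (start - S))*t + |start - S|^2

so a zone where the potential is approximated by a*r^2 + b contributes

a*t^2 + 2*a*(dir . (start - S))*t + (a*|start - S|^2 + b)

which is exactly the a*t^2 + b*t + c form that is solved against the iso-value above.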

So here follows the source code. Before doing the intersection test with the iso-surface, we have to do the intersection tests with the concentric spheres. We'll have to determine how each intersection is sorted along the view ray, and how much each one contributes to the final equation. We'll have precomputed the contribution of each zone in a static array to accelerate the computation (those values stay constant for the whole scene).

for (unsigned int i = 0; ...


   vecteur vDist = currentPoint - r.start;
   const float A = ...


      {
         t = t0;
         bResult = true;
      }
      if (bResult)
         return true;
   }
   return false;
}

Once the intersection point is found, we of course have to compare it to the intersection points of the other objects in the scene. If it's the one closest to the starting point, we'll then have to do our lighting computation at that point. We have the point coordinates, but we still need the normal to the surface in order to do the lighting, reflection and refraction. I've seen some people suggest taking the weighted average of the normals of the spheres around the sources, but we can do way better than that, thanks to a nice property of iso-potential surfaces: for a surface defined by the points where potential = constant, the normal to that surface points in the same direction as the gradient of the potential field at that point. Expressed mathematically: normal ∝ grad(potential).

The gradient of a 1/r^2 potential field being a well known entity, we have the following code to compute the gradient of the sum of a couple of 1/r^2 fields:

vecteur gradient = { ...

After all those computations, here is what we obtain when we combine three source points:
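The listing itself is lost to the scan; here is a hedged sketch of that gradient computation. Each source at center C contributes a vector along (P - C) falling off as 1/r^4 (up to constant factors), and the gradients add up like the fields themselves; centerList is our assumed name for the blob's source points:

vecteur gradient = {0.0f, 0.0f, 0.0f};
for (unsigned int i = 0; i < b.centerList.size(); ++i)
{
   vecteur radial = ptHitPoint - b.centerList[i];
   float fDistSquare = radial * radial;
   if (fDistSquare <= 0.001f)
      continue;                        // avoid the singularity at the source itself
   float fDistFour = fDistSquare * fDistSquare;
   gradient = gradient + (1.0f / fDistFour) * radial;
}
// 'gradient' is then normalized and used as the surface normal.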


Fresnel and transparent volumes

Some materials have the property that they transmit light really well (glass, water, etc.) but affect its progression, which leads to two phenomena. The first phenomenon is refraction: the ray of light is propagated, but deviated from its original trajectory. The deviation is governed by a simple law that was first formulated by Descartes and Snell.

In short, if ThetaI is the incident angle that the view ray (or the light ray) makes with the normal to the surface, then ThetaT, the angle that the transmitted ray makes with the normal, is given by the following equation: n2 * sin(ThetaT) = n1 * sin(ThetaI). This is the famous Snell-Descartes law.

n is called the refraction index of the medium. It is a unitless quantity, measured for each medium relative to the void itself; the void is given a refraction index of 1. n1 is on the incident side and n2 is on the transmission side. Of course, since the problem is totally symmetrical, you can easily swap the values of ThetaI and ThetaT, n1 and n2. In classical optics there is no difference between an incoming ray and a ray that goes away; that's why we can treat light rays and view rays as similar objects.

The above relation cannot work for all values of ThetaI and ThetaT, though. If n1 > n2, then there are some possible ThetaI values for which there is no corresponding ThetaT (because sin(ThetaT) cannot be bigger than one). What it means in practice is that in the real world, when ThetaI exceeds a certain value, there is simply no transmission possible: this is total internal reflection.

The second effect that those transparent surfaces cause is to reflect a part of their incident light. This reflection is similar to the mirror reflection we described in the previous pages; that is, the reflected ray is the ray symmetrical to the incident ray, with the normal as the axis of symmetry. But what's different is that the intensity of the reflected ray varies according to the amount of light that was transmitted by refraction.

What is the amount of light that is transmitted, and the amount that is reflected? That's where Fresnel comes into play. The Fresnel effect is a complex mechanism that needs to take into account the wave nature of light. Until now we've treated all surfaces as perfect mirrors. The surface of a perfect mirror has charged particles that are excited by the light and generate a perfect counter wave, which seemingly causes the ray to go in the reflected direction.

But most surfaces are not perfect mirrors. Most of the time they will reflect light a little bit at some angles, and be more reflective at other angles. That's the case for materials such as water and glass. But it's also the case for a lot of materials that are not even transparent: a paper sheet, for example, will reflect light that comes at a grazing angle and will absorb and re-diffuse the rest (according to Lambert's formula). Wood or ceramic would behave similarly. The amount of light that is reflected in one direction, depending on the angle of incidence, is called the reflectance.

For a surface that reacts to an incident electromagnetic field, there are two types of reflectance, depending on the polarity of the light. Polarity is the orientation along which the electric and magnetic fields oscillate. A given incident ray can basically be separated into two rays: one where the photons are oriented along the axis parallel to the surface, and one where the photons are oriented along a perpendicular axis. For light that was not previously


polarized, the intensity is divided half and half between the two kinds. For simplification reasons, we forget about the polarity of light and simply take the average of the reflectance computed for parallel and perpendicular photons.

float fViewProjection = viewRay.dir * vNormal;
fCosThetaI = fabsf(fViewProjection);
fSinThetaI = sqrtf(1.0f - fCosThetaI * fCosThetaI);
fSinThetaT = (fDensity1 / fDensity2) * fSinThetaI;

if (fSinThetaT * fSinThetaT > ...
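The listing is cut off here. A hedged completion of the Fresnel computation, under the fragment's conventions (fDensity1 and fDensity2 are the refraction indices on the incident and transmitted sides):

float fCosThetaT;
float fReflectance;
if (fSinThetaT * fSinThetaT > 0.9999f)
{
   // Total internal reflection: everything is reflected.
   fReflectance = 1.0f;
   fCosThetaT   = 0.0f;
}
else
{
   fCosThetaT = sqrtf(1.0f - fSinThetaT * fSinThetaT);
   // Fresnel equations for the two polarizations...
   float fReflectanceOrtho = (fDensity2 * fCosThetaT - fDensity1 * fCosThetaI)
                           / (fDensity2 * fCosThetaT + fDensity1 * fCosThetaI);
   float fReflectanceParal = (fDensity1 * fCosThetaT - fDensity2 * fCosThetaI)
                           / (fDensity1 * fCosThetaT + fDensity2 * fCosThetaI);
   // ...then the unpolarized average described in the text above.
   fReflectance = 0.5f * (fReflectanceOrtho * fReflectanceOrtho
                        + fReflectanceParal * fReflectanceParal);
}
float fTransmittance = 1.0f - fReflectance;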

Then we have to slightly modify the code that computes the lighting and the rest. First, if the reflection and refraction coefficients are both equal to one, then we forget about any other lighting, because in that case all the light intensity is either reflected or transmitted. Then we have to follow the view ray after it has hit the surface. Before this point we had a very limited choice: the ray was either reflected or stopped. Here a third choice appears.

The C++ language and computer execution being sequential in nature, we cannot follow all choices at the same time. We can use recursion and store temporary choices on the C++ stack, or do something else. In our code we decided to do something else, but it's a pretty arbitrary decision, so you shouldn't think it's the only solution. Instead of following the full tree of solutions, I only follow one ray at a time, even if the choices made diverge from another possible path. How can we account for BOTH reflection and refraction in this case? We'll use the fact that we've already shot a lot of rays for the depth of field effect, and that we can evaluate a lot of different paths by shooting rays and varying, randomly (or not), the path taken by each ray. I call that the Russian roulette: at each divergence I pick randomly, based on some importance heuristic, which road my ray will take.

We need a lot of samples per pixel, but we have to acknowledge that some samples contribute more to the final image than others, because they contribute more light. This leads to our implementation of importance sampling: the reflectance and transmittance we computed above have to be taken as the probabilities that a given ray takes one path versus the other.

Here's what we get when we rewrite those ideas in C++:

float fRoulette = (1.0f / RAND_MAX) * rand();

if (fRoulette <= fReflectance)
{
   // ..the ray is reflected.. (this branch is reconstructed from the
   // reflection code of the earlier parts; the scan omits it)
   coef *= currentMat.reflection;
   float fReflection = 2.0f * fViewProjection;
   viewRay.start = ptHitPoint;
   viewRay.dir = viewRay.dir - fReflection * vNormal;
}
else if (fRoulette <= fReflectance + fTransmittance)
{
   // ..transmitted..
   coef *= currentMat.refraction;
   float fOldRefractionCoef = myContext.fRefractionCoef;
   if (bInside)
      myContext.fRefractionCoef = context::getDefaultAir().fRefractionCoef;
   else
      myContext.fRefractionCoef = currentMat.density;
   viewRay.start = ptHitPoint;
   viewRay.dir = viewRay.dir + fCosThetaI * vNormal;
   viewRay.dir = (fOldRefractionCoef / myContext.fRefractionCoef) * viewRay.dir;
   viewRay.dir += (-fCosThetaT) * vNormal;
}
else
{
   // .. or diffused
   bDiffuse = true;
}

So here's what you get with a refractive index of 2.0, combined with the blob rendering from above:

One of the disadvantages of shooting rays with Russian roulette is that you risk ending up with a noisy look; this is often the case for renderers that rely on randomness in their algorithms. It is not too visible in the above image because we shot enough rays to hide that effect, but hiding it can prove expensive.

Here is the output of the program with the notions that we explained in this page and the previous ones:
