CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI 6360/4360


Page 1: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

CG Summary:Lighting and Shading

“From Vertices to Fragments”Discrete Techniques

Angel, Chapters 5, 6, 7; “Red Book”

slides from AW, red book, etc.

CSCI 6360/4360

Page 2: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Overview (essential concepts)

• The rest of the essential concepts … for your projects

• Shading and illumination– Basic models and OpenGL use (30)

• “Vertices to fragments” (42)– Drawing algorithms and hidden surface removal, e.g., z-buffer (55)– Color (80)

• Discrete techniques (90)– The pixel pipeline (98)– Texture mapping (103)– Alpha blending and antialiasing– Depth cueing, fog, motion blur, stencil …

Page 3: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Illumination and Shading Overview

• Photorealism and complexity– Light interactions with a solid

• Rendering equation– “infinite scattering and absorption of light”– Local vs. global illumination

• Local techniques– Flat, Gouraud, Phong

• An illumination model
– “describes the inputs, assumptions, and outputs that we will use to calculate illumination of surface elements”
– Light
– Reflection characteristics of surfaces

• Diffuse reflection, Lambert’s law, specular reflection– Phong approximation – interpolated vector shading

Page 4: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Photorealism and Complexity

• Recall, from 1st lecture … examples below exhibit range of “realism”

• In general, trade off realism for speed – interactive computer graphics
– Wireframe – just the outline
– Local illumination models, polygon based
• Flat shading – same illumination value for all of each polygon
• Smooth shading (Gouraud and Phong) – different values across polygons
– Global illumination models
• E.g., ray tracing – consider “all” interactions of light with object

[Figure labels: Wireframe; Polygons – flat shading; Polygons – smooth shading; Ray tracing]

Page 5: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Shading

• So far, just used OpenGL pipeline for vertices

• Polygons have all had constant color– glColor(_)– Not “realistic” – or computationally complex

• Of course, OpenGL can (efficiently) provide more realistic images

• Light-material interactions cause each point to have a different color or shade

• Need to consider :– Light sources– Material properties– Location of viewer– Surface orientation

• Terminology– “Lighting”

• modeling light sources, surfaces, and their interaction– “Shading”

• how lighting is done with polygon

Page 6: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Rendering Equation

• Light travels …– Light strikes A

• Some scattered, some absorbed, …

– Some of scattered light strikes B• Some scattered, some absorbed, …• Some of this scattered light strikes A• And so on …

• Infinite scattering and absorption of light can be described by rendering equation
– Bidirectional reflection distribution function
– Cannot be solved in general
– Ray tracing is a special case for perfectly reflecting surfaces

• Rendering equation is global, includes:– Shadows– Multiple scattering from object to object– … and everything

[Figure: light transport paths illustrating a translucent surface, a shadow, and multiple reflection]

Page 7: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Light – Material Interactions for CG(quick look)

• Will examine each of these in detail

• Diffuse surface– Matte, dull finish– Light “scattered” in all directions

• Specular surface
– Shiny surface
– Light reflected (scattered) in narrow range of directions

• Translucent surface
– Light penetrates surface and emerges in another location on object

Page 8: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

“Surface Elements” for Interactive CG(a big idea)

• A computer graphics issue/orientation:– Consider everything or just “sampling a scene”?

• Again, global view considers all light coming to viewer:– From each point on each surface in scene - object precision

– Points are smallest units of scene

– Can think of points having no area or infinitesimal area

• i.e., there are an infinite number of visible points.

• Of course, computationally intractable

• Alternatively, consider surface elements– Finite number of differential pieces of surface

• E.g., polygon

– Figure out how much light comes to viewer from each of these pieces of surface

– Often, relatively few (vs. infinite) is enough

• Reduction of computation through use of surface elements is at core of tractable/interactive CG

Page 9: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Surface Elements and Illumination, 1

• Tangent Plane Approximation for Objects– Most surfaces are curved: not flat– Surface element is area on that surface

• Imagine breaking up into very small pieces • Each of those pieces is still curved,

– but if we make the pieces small enough, – then we can make them arbitrarily close to being flat

– Can approximate this small area with a tiny flat area

• Surface Normals
– Each surface element lies in a plane.
– To describe plane, need a point and a normal
– Area around each of these vertices is a surface element where we calculate “illumination”

• Illumination

Page 10: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Surface Elements and Illumination, 2

• Tangent Plane Approximation for Objects

• Surface Normals

• Illumination
– Again, light rays coming from rest of scene strike surface element and head out in different directions

– Light that goes in direction of viewer from that surface element

• If viewer moves, light will change

– This is “illumination” of that surface element– Will see model for cg later

Page 11: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Light-Material Interaction, 1

• Light that strikes an object is partially absorbed and partially scattered (reflected)

• Amount reflected determines color and brightness of object

– Surface appears red under white light because the red component of light is reflected and rest is absorbed

– Can specify both light and surface colors

rough surface

Livingstone, “Vision and Art”

Page 12: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Light-Material Interaction, 2

• Diffuse surfaces
– Rough (flat, matte) surface scatters light in all directions
– Appears same from different viewing angles

• Specular surfaces
– Smoother surface: more reflected light is concentrated in the direction a perfect mirror would reflect the light
– Light emerges at single angle
– … to varying degrees – Phong shading will model this

• Reflected light is scattered in a manner that depends on the smoothness and orientation of surface to light source

Page 13: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Light Sources

• General light sources are difficult to work with because must integrate light coming from all points on the source

• Use “simple” light sources

• Point source– Model with position and color– Distant source = infinite distance away (parallel)

• Spotlight– Restrict light from ideal point source

• Ambient light
– A real convenience – recall the “rendering equation” – but very useful
– Same amount of light everywhere in scene
– Can model (in a good-enough way) contribution of many sources and reflecting surfaces

Page 14: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Overview: Local Rendering Techniques

• Will consider– Illumination (light) models focusing on following elements:

• Ambient• Diffuse• Attenuation• Specular Reflection

– Interpolated shading models:• Flat, Gouraud, Phong, modified/interpolated Phong (Blinn-Phong)

Page 15: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

About (Local) Polygon Mesh Shading (it’s all in how many polygons there are)

• Angel example of approximation of a sphere…

• Recall, any surface can be illuminated/shaded/lighted (in principle) by:

1. calculating surface normal at each visible point and

2. applying illumination model

… or, recall surface model of cg!

• Again, where efficiency is a consideration, e.g., for interactivity (vs. photorealism), approximations are used

– Fine, because polygons themselves are an approximation

– And just as a circle can be considered as being made of “an infinite number of line segments”,

• so, it’s all in how many polygons there are!

Page 16: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

About (Local) Polygon Mesh Shading (interpolation)

• Interpolation of illumination values is widely used for speed – and can be applied using any illumination model

• Will see three methods - each treats a single polygon independently of others (non-global)

– Constant (flat)– Gouraud (intensity interpolation)– Interpolated Phong (normal-vector interpolation)

• Each uses interpolation differently

Page 17: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Flat/Constant Shading, About

• Single illumination value per polygon
– Illumination model evaluated just once for each polygon
• 1 value for the whole polygon, which is as fast as it gets!

– As “sampling” value of illumination equation (at just 1 point)– Right is flat vs. smooth (Gouraud) shading

• If polygon mesh is an approximation to curved surface,
– faceted look is a problem
– Also, facets exaggerated by Mach band effect

• For fast, can (and do) store normal with each surface– Or can, of course, compute from vertices

• But, interestingly, approach is valid, if:
– Light source is at infinity (is constant on polygon)
– Viewer is at infinity (is constant on polygon)
– Polygon represents actual surface being modeled (is not an approximation)!

Page 18: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Gouraud Shading, About

• Recall, for flat/constant shading, single illumination value per polygon

• Gouraud (or smooth, or interpolated intensity) shading overcomes problem of discontinuity at edge exacerbated by Mach banding

– “Smooths” where polygons meet
– H. Gouraud, “Continuous shading of curved surfaces,” IEEE Transactions on Computers, 20(6):623–628, 1971.

• Linearly interpolate intensity along scan lines
– Eliminates intensity discontinuities at polygon edges
– Still have gradient discontinuities,
• So Mach banding is improved, but not eliminated
– Must differentiate desired creases from tessellation artifacts
• (edges of a cube vs. edges on a tessellated sphere)

Page 19: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Gouraud Shading, About

• To find illumination intensity, need intensity of illumination and angle of reflection
– Flat shading uses 1 angle
– Gouraud estimates … interpolates

1. Use polygon surface normals to calculate “approximation” to vertex normals

– Average of surrounding polygons’ normals
– Since neighboring polygons sharing vertices and edges are approximations to smoothly curved surfaces
– So, won’t have greatly differing surface normals
• Approximation is a reasonable one

2. Interpolate intensity along polygon edges

3. Interpolate along scan lines
– i.e., find:
• Ia, as interpolated value between I1 and I2
• Ib, as interpolated value between I1 and I3
• Ip, as interpolated value between Ia and Ib

– formulaically, as below
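The interpolation just described has the following standard form (a hedged reconstruction, since the formula slide itself did not survive extraction; $y_s$ is the current scan line, $x_p$ the pixel position, and $x_a$, $x_b$ the edge crossings):

$I_a = I_1 - (I_1 - I_2)\,\dfrac{y_1 - y_s}{y_1 - y_2},\qquad I_b = I_1 - (I_1 - I_3)\,\dfrac{y_1 - y_s}{y_1 - y_3},\qquad I_p = I_a + (I_b - I_a)\,\dfrac{x_p - x_a}{x_b - x_a}$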

Page 20: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Simple Illumination Model

• One of first models of illumination that “looked good” and could be calculated efficiently

– simple, non-physical, non-global illumination model– describes some observable reflection characteristics of surfaces– came out of work done at the University of Utah in the early 1970’s– still used today, as it is easy to do in software and can be optimized in hardware

• Later, will put all together with normal interpolation

• Components of a simple model– Reflection characteristics of surfaces

• Diffuse Reflection• Ambient Reflection• Specular Reflection

• Model not physically-based, and does not attempt to accurately calculate global illumination

– does attempt to simulate some of important observable effects of common light interactions

– can be computed quickly and efficiently

Page 21: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Reflection Characteristics of Surfaces, Diffuse Reflection (1/7)

• Diffuse Reflection– Diffuse (Lambertian) reflection

• typical of dull, matte surfaces – e.g. carpet, chalk plastic

• independent of viewer position• dependent on light source position

– (in this case a point source, again a non-physical abstraction)

• Vectors L, N used to determine reflection – value from Lambert’s cosine law … next slide

Page 22: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Reflection Characteristics of Surfaces, Lambert’s Law (2/7)

• Lambert’s cosine law:
– Specifies how much energy/light reflects toward some point
– Computational form used in equation for illumination model

• Now, have intensity (I) calculated from:
– Intensity from point source
– Diffuse reflection coefficient (arbitrary!)
– With cos θ calculated using normalized vectors N and L
• For computational efficiency

• Again, formulaically:
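A hedged reconstruction of the formula the bullet refers to – Lambert’s cosine law in its usual computational form, with $I_p$ the point-source intensity, $k_d$ the diffuse reflection coefficient, and $\hat{N}$, $\hat{L}$ the normalized surface normal and light direction:

$I_{\text{diffuse}} = k_d\, I_p \cos\theta = k_d\, I_p\,(\hat{N}\cdot\hat{L})$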

Page 23: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Reflection Characteristics of Surfaces, Energy Density Falloff (3/7)

• Less light as things are farther away from light source

• Reflection – Energy Density Falloff
– Should also model inverse square law energy density falloff

• Formula often creates harsh effects

– However, this makes surfaces with equal reflectance differ in appearance – important if two surfaces overlap

– Do not often see objects illuminated by point lights

• Can instead use the heuristic formula below
– Experimentally-defined constants
– Heuristic
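A hedged reconstruction of that heuristic (the figure with the formula did not survive extraction); $d_L$ is the distance to the light and $c_1, c_2, c_3$ are the experimentally chosen constants:

$f_{\text{att}} = \min\!\left(\dfrac{1}{c_1 + c_2\, d_L + c_3\, d_L^{\,2}},\; 1\right),\qquad I = f_{\text{att}}\, k_d\, I_p\,(\hat{N}\cdot\hat{L})$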

Page 24: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Reflection Characteristics of Surfaces, Ambient Reflection (4/7)

• Ambient Reflection

• Diffuse surfaces reflect light

• Some light goes to eye, some to scene
– Light bounces off of other objects and eventually reaches this surface element
– This is expensive to keep track of accurately
– Instead, we use another heuristic

• Ambient reflection– Independent of object and viewer position– Again, a constant – “experimentally determined”– Exists in most environments

• some light hits surface from all directions
• Approx. indirect lighting/global illumination
– A total convenience
• but images without some form of ambient lighting look stark; they have too much contrast
– Light Intensity = Ambient + Attenuation * Diffuse

Page 25: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Reflection Characteristics of Surfaces, Color (5/7)

• Colored Lights and Surfaces

• Write separate equation for each component of color model

– Lambda (λ) – wavelength
– Represent an object’s diffuse color by one value of the diffuse coefficient for each component
• e.g., in RGB: kdR, kdG, kdB
• Components of the incident light are reflected in proportion to these coefficients
• e.g., for the red component: IR = kdR IpR (N · L)

• Wavelength-dependent equation
– Evaluating the illumination equation at only 3 points in the spectrum is wrong, but often yields acceptable pictures.
– To avoid restricting ourselves to one color sampling space, indicate wavelength dependence with λ (lambda).

Page 26: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Reflection Characteristics of Surfaces, Specular Reflection (6/7)

• Specular Reflection

• Directed reflection from shiny surfaces– typical of bright, shiny surfaces, e.g. metals– color depends on material and how it scatters light energy

• in plastics: color of point source, in metal: color of metal• in others: combine color of light and material color

– dependent on light source position and viewer position
– Early model by Phong neglected effect of material color on specular highlight
• made all surfaces look plastic
– for a perfect reflector, light is seen only when the viewer lies along the reflection direction
– for a real reflector, reflected light falls off as the angle away from that direction increases
– Below, “shiny spot” size decreases as viewing angle increases

Page 27: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Reflection Characteristics of Surfaces, Specular Reflection (7a/7)

• Phong Approximation
– Again, non-physical, but works
– Deals with differential “glossiness” in a computationally efficient manner

• Below shows increasing n, left to right
– “Tightness” of specular highlight
– n in formula below (k, etc., next slide)

Page 28: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Reflection Characteristics of Surfaces, Specular Reflection (7b/7)

• Yet again, constant, k, for specular component

• Vectors R and V express viewing angle and so amount of illumination

• n is the exponent to which the cosine of the viewing angle is raised

– Measure of how “tight”/small the specular highlight is
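Putting those pieces together, a hedged reconstruction of the missing specular formula (the usual Phong term), where $\hat{R}$ is the mirror-reflection direction, $\hat{V}$ the normalized direction to the viewer, $k_s$ the specular coefficient, and $n$ the “shininess” exponent:

$I_{\text{spec}} = k_s\, I_p \cos^{n}\!\phi = k_s\, I_p\,(\hat{R}\cdot\hat{V})^{\,n}$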

Page 29: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Putting it all together: A Simple Illumination Model

• Non-Physical Lighting Equation
– Energy from a single light reflected by a single surface element

• For multiple point lights, simply sum contributions

• An easy-to-evaluate equation that gives useful results

– It is used in most graphics systems,

• but it has no basis in theory and does not model reflections correctly!
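In its usual textbook form, the non-physical equation described above reads, per wavelength λ (a hedged reconstruction – the formula itself did not survive extraction; sum the bracketed term over all point lights):

$I_\lambda = I_{a\lambda}\, k_a\, O_{d\lambda} \;+\; f_{\text{att}}\, I_{p\lambda}\left[\,k_d\, O_{d\lambda}\,(\hat{N}\cdot\hat{L}) \;+\; k_s\,(\hat{R}\cdot\hat{V})^{\,n}\right]$

where $O_{d\lambda}$ is the object’s diffuse color component and the remaining symbols are as defined on the preceding slides.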

Page 30: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

OpenGL Shading Functions

Page 31: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

OpenGL Shading Functions

• Polygonal shading– Flat– Smooth– Gouraud

• Steps in OpenGL shading (sketched after this list)
1. Enable shading and select model

2. Specify normals

3. Specify material properties

4. Specify lights
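A minimal sketch of the four steps in fixed-function OpenGL (the values are illustrative only; materials and lights are detailed on the following slides):

glEnable(GL_LIGHTING);       /* 1. enable shading ...                    */
glShadeModel(GL_SMOOTH);     /*    ... and select the shading model      */
glEnable(GL_LIGHT0);         /* 4. lights: see the glLight* slides       */
                             /* 3. materials: see the glMaterial* slides */

/* 2. specify normals: one per vertex (or per face), before each vertex */
glBegin(GL_TRIANGLES);
    glNormal3f(0.0, 0.0, 1.0);
    glVertex3f(0.0, 0.0, 0.0);
    glVertex3f(1.0, 0.0, 0.0);
    glVertex3f(0.0, 1.0, 0.0);
glEnd();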

Page 32: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Enabling Shading

• Shading calculations are enabled by– glEnable(GL_LIGHTING)– Once lighting is enabled, glColor() ignored

• Must enable each light source individually– glEnable(GL_LIGHTi) i=0,1…..

• Can choose light model parameters:– glLightModeli(parameter, GL_TRUE)

• GL_LIGHT_MODEL_LOCAL_VIEWER – do not use simplifying distant viewer assumption in calculation

• GL_LIGHT_MODEL_TWO_SIDED – shades both sides of polygons independently

Page 33: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Defining a Point Light Source

• For each light source, can set an RGBA (A for alpha channel)

– For diffuse, specular, and ambient components, and – For the position– Code below from Angel, other ways to do it (of course)

GLfloat diffuse0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat ambient0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat specular0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat light0_pos[] = {1.0, 2.0, 3.0, 1.0};

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_POSITION, light0_pos);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambient0);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse0);
glLightfv(GL_LIGHT0, GL_SPECULAR, specular0);

Page 34: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Distance and Direction

• Position is given in homogeneous coordinates– If w =1.0, we are specifying a finite location– If w =0.0, we are specifying a parallel source with the given direction vector

• Coefficients in distance terms are by default a=1.0 (constant terms), b=c=0.0 (linear and quadratic terms)

– Change by:
• a = 0.80;
• glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, a);

GLfloat diffuse0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat ambient0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat specular0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat light0_pos[] = {1.0, 2.0, 3.0, 1.0};

Page 35: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Spotlights

• Use glLightfv / glLightf to set
– Direction: GL_SPOT_DIRECTION
– Cutoff: GL_SPOT_CUTOFF
– Attenuation: GL_SPOT_EXPONENT

• Intensity within the cone is proportional to cos^α φ, where φ is the angle from the spot direction and α is the spot exponent
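A hedged sketch of those calls for GL_LIGHT0 (the direction and angles are illustrative):

GLfloat spot_dir[] = {0.0, 0.0, -1.0};
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spot_dir);  /* cone axis                   */
glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 30.0);          /* half-angle of cone, degrees */
glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, 2.0);         /* falloff within the cone     */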

Page 36: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Global Ambient Light

• Ambient light depends on color of light sources
– A red light in a white room will cause a red ambient term that disappears when the light is turned off

• OpenGL also allows a global ambient term that is often helpful for testing– glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient)

Page 37: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Moving Light Sources

• Light sources are geometric objects whose positions or directions are affected by the model-view matrix

• Depending on where we place the position (direction) setting function, we can

– Move the light source(s) with the object(s)– Fix the object(s) and move the light source(s)– Fix the light source(s) and move the object(s)– Move the light source(s) and object(s) independently
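A hedged sketch of two of these placements; the key point is that the position is transformed by whatever model-view matrix is current when glLightfv(..., GL_POSITION, ...) is called (names and values are illustrative):

GLfloat light0_pos[] = {1.0, 2.0, 3.0, 1.0};

/* Light fixed in the world: set its position right after the viewing transform */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
glLightfv(GL_LIGHT0, GL_POSITION, light0_pos);

/* Light moving with an object: set its position under that object’s transform */
glPushMatrix();
    glRotatef(theta, 0.0, 1.0, 0.0);   /* theta: the object’s (hypothetical) angle */
    glLightfv(GL_LIGHT0, GL_POSITION, light0_pos);
    /* ... draw the object here ... */
glPopMatrix();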

Page 38: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Material Properties

• Material properties are also part of the OpenGL state and match the terms in the modified Phong model

• Set by glMaterialfv() / glMaterialf()

GLfloat ambient[] = {0.2, 0.2, 0.2, 1.0};
GLfloat diffuse[] = {1.0, 0.8, 0.0, 1.0};
GLfloat specular[] = {1.0, 1.0, 1.0, 1.0};
GLfloat shine = 100.0;

glMaterialfv(GL_FRONT, GL_AMBIENT, ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, specular);
glMaterialf(GL_FRONT, GL_SHININESS, shine);

Page 39: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Emissive Term

• We can simulate a light source in OpenGL by giving a material an emissive component

• This component is unaffected by any light sources or transformations
– GLfloat emission[] = {0.0, 0.3, 0.3, 1.0};
– glMaterialfv(GL_FRONT, GL_EMISSION, emission);

Page 40: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Transparency

• Material properties are specified as RGBA values

• The A value can be used to make the surface translucent

• The default is that all surfaces are opaque regardless of A

• Later we will enable blending and use this feature
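A hedged preview of the blending setup meant by “later”:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  /* weight the fragment by its A value */
glDepthMask(GL_FALSE);   /* common practice: don’t write depth for translucent surfaces */
/* ... draw translucent geometry, typically back to front ... */
glDepthMask(GL_TRUE);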

Page 41: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Polygonal Shading

• Shading calculations are done for each vertex– Vertex colors become vertex shades

• By default, vertex shades are interpolated across the polygon– glShadeModel(GL_SMOOTH);

• If we use glShadeModel(GL_FLAT); the color at the first vertex will determine the shade of the whole polygon

Page 42: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

“From Vertices to Fragments”


Page 43: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Implementation

• “From Vertices to Fragments”

• Next steps in viewing pipeline:– Clipping

• Eliminating objects that lie outside view volume– and, so, not visible in image

– Rasterization• Produces fragments from remaining objects

– Hidden surface removal (visible surface determination)
• Determines which object fragments are visible
• Show objects (surfaces) not blocked by objects closer to camera

• Will consider above in some detail in order to give feel for computational cost of these elements

• “in some detail” = algorithms for implementing– … algorithms that are efficient– Same algorithms for any standard API

• Whether implemented by pipeline, raytracing, etc.– Will see different algorithms for same basic tasks

Page 44: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Tasks to Render a Geometric Entity, 1

• Angel introduces more general terms and ideas, than just for OpenGL pipeline…

– Recall, chapter title “From Vertices to Fragments” … and even pixels– From definition in user program to (possible) display on output device– Modeling, geometry processing, rasterization, fragment processing

• Modeling– Performed by application program, e.g., create sphere polygons (vertices)– Angel example of spheres and creating data structure for OpenGL use– Product is vertices (and their connections)– Application might even reduce “load”, e.g., no back-facing polygons

Page 45: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Tasks to Render a Geometric Entity, 2

• Geometry Processing– Works with vertices– Determine which geometric objects appear on display– 1. Perform clipping to view volume

• Changes object coordinates to eye coordinates

• Transforms vertices to normalized view volume using projection transformation

– 2. Primitive assembly
• Clipping an object (and its surfaces) can result in new surfaces (e.g., shorter line, polygon of different shape)
• Working with these “new” elements to “re-form” (clipped) objects is primitive assembly
• Necessary for, e.g., shading

– 3. Assignment of color to vertex

• Modeling and geometry processing called “front-end processing”– All involve 3-d calculations and require floating-point arithmetic

Page 46: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Tasks to Render a Geometric Entity, 3

• Rasterization
– Only x, y values needed for (2-d) frame buffer
– Rasterization (scan conversion) determines which fragments are displayed (put in frame buffer)
• For polygons, rasterization determines which pixels lie inside the 2-d polygon determined by projected vertices
– Colors

• Most simply, fragments (and their pixels) are determined by interpolation of vertex shades & put in frame buffer

– Output of rasterizer is in units of the display (window coordinates)

Page 47: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Tasks to Render a Geometric Entity, 4

• Fragment Processing– Colors

• OpenGL can merge color (and lighting) results of rasterization stage with geometric pipeline

• E.g., shaded, texture mapped polygon (next chapter)– Lighting/shading values of vertex merged with texture map

• For translucence, must allow light to pass through fragment
• Blending of colors uses a combination of fragment colors with colors already in the frame buffer
– e.g., multiple translucent objects

– Hidden surface removal performed fragment by fragment using depth information– Anti-aliasing also dealt with

Page 48: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Efficiency and Algorithms

• For cg illumination/shading, saw how role of efficiency drove algorithms

– Phong shading is “good enough” to be perceived as “close enough” to real world– Close attention to algorithmic efficiency

• Similarly, for frequently performed geometric processing, efficiency is a prime consideration

• Will consider efficient algorithms for:– Clipping– Line drawing– Visible surface drawing

Page 49: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Recall, Clipping …

• Scene's objects are clipped against clip space bounding box– Eliminates objects (and pieces of objects) not visible in image– Efficient clipping algorithms for homogeneous clip space

• Perspective division divides all by homogeneous coordinates, w

• Clip space becomes Normalized Device Coordinate (NDC) space after perspective division

Page 50: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Rasterization

Page 51: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Rasterization

• End of pipeline (processing)
– Putting values in the frame buffer (or raster)
– write_pixel(x, y, color)

• At this stage, fragments – clipped, colored, etc. at level of vertices, are turned into values to be displayed

– (deferring for a moment the question of hidden surfaces and colors)

• Essential question is “how to go from vertices to display elements?”– E.g., lines

• Algorithmic efficiency is a continuing theme

Page 52: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Drawing Algorithms

• As noted, implemented in graphics processor– Used bazillions of times per second– Line, curve, … algorithms

• Line is paradigm example– most common 2D primitive - done 100s or 1000s or 10s of 1000s of times each

frame– even 3D wireframes are eventually 2D lines– optimized algorithms contain numerous tricks/techniques that help in designing

more advanced algorithms

• Will develop a series of strategies, towards efficiency

Page 53: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Drawing Lines: Overview

• Recall, fundamental “challenge” of computer graphics:

– Representing the analog (physical) world on a discrete (digital) device

– Consider a very low resolution display:

• Sampling a continuous line on a discrete grid introduces sampling errors: the “jaggies”

– For horizontal, vertical and diagonal lines all pixels lie on the ideal line: special case

• For lines at arbitrary angle, pick pixels closest to the ideal line

• Will consider several approaches (the classic “fast” one is sketched below)
– But, “fast” will be best
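As a preview of where the “fast” strategies end up, a sketch of the classic integer midpoint/Bresenham loop for the first octant (0 ≤ slope ≤ 1); this is the standard formulation and not necessarily the exact derivation the course slides follow. write_pixel is the frame-buffer write used elsewhere in these notes.

void line_octant1(int x0, int y0, int x1, int y1, int color)
{
    int dx = x1 - x0, dy = y1 - y0;
    int d      = 2 * dy - dx;        /* decision variable            */
    int incrE  = 2 * dy;             /* step east:       (x+1, y)    */
    int incrNE = 2 * (dy - dx);      /* step north-east: (x+1, y+1)  */
    int x = x0, y = y0;

    write_pixel(x, y, color);
    while (x < x1) {
        if (d <= 0) { d += incrE; }          /* choose the E pixel  */
        else        { d += incrNE; y++; }    /* choose the NE pixel */
        x++;
        write_pixel(x, y, color);
    }
}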

Page 54: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Hidden Surface Removal

• Or, Visible Surface Determination (VSD)

Page 55: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Recall, Projection …

• Projectors

• View plane (or film plane)

• Direction of projection

• Center of projection– Eye, projection reference point

Page 56: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

About Visible Surface Determination, 1

• Have been considering models, and how to create images from models

– e.g., when viewpoint/eye/COP changes, transform locations of vertices (polygon edges) of model to form image

• In fact, projectors are extended from front and back of all polygons– Though only concerned with “front” polygons

Projectors from front (visible) surface only

Page 57: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

About Visible Surface Determination, 2

• To form image, must determine which objects in scene obscured by other objects

– Why might objects not be visible? – Occlusion, and also clipping

• Definition of visible surface determination (VSD):
– Given a set of 3-D objects and a view specification (camera), determine which lines or surfaces of the objects are visible
– Also called Hidden Surface Removal (HSR)

Page 58: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Visible Surface Determination: Historical notes

• Problem first posed for wireframe rendering

– doesn’t look too “real” (and in fact is ambiguous)

• Solution called “hidden-line (or surface) removal”

– Lines themselves don’t hide lines
• Lines must be edges of opaque surfaces that hide other lines

– Some techniques show hidden lines as dotted or dashed lines for more info

• Hidden surface removal often appears as one stage in other models

Page 59: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Classes of VSD Algorithms

• Will see different VSD algorithms have advantages and disadvantages:

0. “Conservative” visibility testing: – only trivial reject - does not give final answer

• e.g., back-face culling, canonical view volume clipping

• have to feed results to algorithms mentioned below

1. Image precision– resolve visibility at discrete points in image

• Z-buffer, scan-line (both in hardware), ray-tracing

2. Object precision

– resolve for all possible view directions from a given eye point

Page 60: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Painter’s Algorithm

• To start at the beginning …
– Way to resolve visibility exactly
– Create a drawing order; each polygon overwriting the previous ones guarantees correct visibility at any pixel resolution

• Strategy is to work back to front (sketched below)
– find a way to sort polygons by depth (z), then draw them in that order
• do a rough sort of polygons by smallest (farthest) z-coordinate in each polygon
• draw most distant polygon first, then work forward towards the viewpoint (“painter’s algorithm”)
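A hedged sketch of that strategy in C, assuming a hypothetical Polygon type that records its farthest (smallest) z and a draw_polygon routine supplied elsewhere:

#include <stdlib.h>

typedef struct { float zmin; /* ... vertices, color, etc. ... */ } Polygon;
void draw_polygon(const Polygon *p);   /* assumed rasterization routine */

static int far_to_near(const void *a, const void *b)
{
    const Polygon *p = a, *q = b;
    /* smallest (farthest) z first, per the rough sort described above */
    return (p->zmin > q->zmin) - (p->zmin < q->zmin);
}

void painters_algorithm(Polygon *polys, int n)
{
    qsort(polys, n, sizeof(Polygon), far_to_near);
    for (int i = 0; i < n; i++)
        draw_polygon(&polys[i]);   /* later (nearer) polygons overwrite earlier ones */
}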

Page 61: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Back-Face Culling: Overview

• Back-face culling directly eliminates polygons not facing the viewer

• Makes sense given constraint of convex (no “inward” face) polygons

• Computationally, can eliminate back faces by:

– Line of sight calculations– Plane half-spaces

• In practice,
– surface (and vertex) normals often stored with vertex list representations
– Normals used both in back-face culling and illumination/shading models

Page 62: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Back-Face Culling: Line of Sight Interpretation

• Line of Sight Interpretation

• Use outward normal (ON) of polygon to test for rejection

• LOS = Line of Sight, – The projector from the center of projection (COP) to any point P on the polygon. – (note: For parallel projections LOS = DOP = direction of projection)

• If normal is facing in same direction as LOS, it’s a back face:
– Use the dot product (sketched below)
– if LOS · ON >= 0, then polygon is invisible – discard
– if LOS · ON < 0, then polygon may be visible
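A small sketch of that test (Vec3, dot, and is_back_face are illustrative names, not from the slides):

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* los: projector from the COP to any point on the polygon; n: outward normal */
int is_back_face(Vec3 los, Vec3 n)
{
    return dot(los, n) >= 0.0f;   /* >= 0: faces away from the viewer, cull it */
}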

Page 63: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Back-Face Culling - OpenGL

• OpenGL automatically computes an outward normal from the cross product of two consecutive screen-space edges and culls back-facing polygons– just checks the sign of the resulting z component
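A hedged sketch of enabling this (culling is off by default):

glFrontFace(GL_CCW);       /* counter-clockwise winding = front face (the default) */
glCullFace(GL_BACK);       /* discard back faces (the default)                     */
glEnable(GL_CULL_FACE);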

Page 64: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Z-Buffer Algorithm, 1

• Recall, frame/refresh buffer:
– Screen is refreshed one scan line at a time, from pixel information held in a refresh or frame buffer

• Additional buffers can be used to store other pixel information– E.g., double buffering for animation

• 2nd frame buffer to which to draw an image (which takes a while)
• then, when drawn, switch to this 2nd frame/refresh buffer and start drawing again in the 1st

• Also, a z-buffer in which z-values (depth of points on a polygon) stored for VSD

Page 65: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Z-Buffer Algorithm, 2

• Init Z-buffer to background value
– furthest plane of view volume, e.g., 255 for 8 bits

• Polygons scan-converted in arbitrary order– When pixels overlap, use Z-buffer to decide which polygon “gets” that pixel

• If a new point has a z-value less than the previous one (i.e., closer to the eye), its z-value is placed in the z-buffer and its color placed in the frame buffer at the same (x, y)

• Otherwise the previous z-value and frame buffer color are unchanged
– Below shows numeric z-values and colors to represent frame-buffer values

• Just draw every polygon
– If we find a piece (one or more pixels) of a polygon that is closer to the front than what is there already, draw over it

Page 66: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Z-Buffer Algorithm, 3

• Polygons scan-converted in arbitrary order

• After 1st polygon scan-converted, at depth 127

• After 2nd polygon, at depth 63 – in front of some of 1st polygon

Page 67: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Z-Buffer Algorithm, 4

• Algorithm again:
– Draw every polygon that we can’t reject trivially
– “If we find a piece of polygon closer to the front, paint over whatever was behind it”

void zBuffer() {
    // Initialize frame buffer and z-buffer to “far”
    for (y = 0; y < YMAX; y++)
        for (x = 0; x < XMAX; x++) {
            WritePixel(x, y, BACKGROUND_VALUE);
            WriteZ(x, y, ZMAX);                 // farthest representable depth
        }

    // Go through polygons
    for each polygon
        for each pixel in polygon’s projection {
            pz = polygon’s z-value at pixel (x, y);
            if (pz < ReadZ(x, y)) {             // new point is closer to front of view
                WritePixel(x, y, polygon’s color at pixel (x, y));
                WriteZ(x, y, pz);
            }
        }
}

Frame buffer holds values of polygons’ colors:

Z buffer holds z values of polygons:

Page 68: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

FYI - Z-Buffer Algorithm, 5

• How to compute this efficiently?– incrementally

• As in polygon filling– As we moved along the Y-axis, we tracked an x position where each edge intersected the current scan-line

• Can do same thing for z coord. using simple “remainder” calculations with y-z slope

• Once we have za and zb for each edge, can incrementally calculate zp as scan across

• Do similar when calculating color per pixel... (Gouraud shading)
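A sketch of that inner loop, reusing the ReadZ / WriteZ / WritePixel names from the pseudocode above; za, zb (depths at the edge crossings xa, xb of scan line y) and color_at are assumed to be available:

float dz = (zb - za) / (xb - xa);      /* z varies linearly along the scan line */
float z  = za;
for (int x = (int)xa; x <= (int)xb; x++) {
    if (z < ReadZ(x, y)) {             /* closer than what is already there     */
        WritePixel(x, y, color_at(x, y));
        WriteZ(x, y, z);
    }
    z += dz;                           /* one addition per pixel, no re-solving */
}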

Page 69: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Z-Buffer Pros

• Simplicity lends itself well to hardware implementations - fast– ubiquitous

• Polygons do not have to be compared in any particular order: – no presorting in z is necessary

• Only consider one polygon at a time– ...even though occlusion is a global problem!– brute force, but it is fast!

• Z-buffer can be stored with an image– allows you to correctly composite multiple images (easy!) – w/o having to merge the models (hard!)– great for incremental addition to a complex scene– all VSD algorithms could produce a Z-buffer for this

• Can be used for non-polygonal surfaces, CSGs– (intersect, union, difference), any z = f (x,y)

Page 70: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Z-Buffer Problems

• Can’t do anti-aliasing
– Requires knowing all polygons involved in a given pixel

• Perspective foreshortening
– Compression in z axis caused in post-perspective space
– Objects originally far away from camera end up having Z-values that are very close to each other

• Depth information loses precision rapidly, which gives Z-ordering bugs (artifacts) for distant objects

– Co-planar polygons exhibit “z-fighting” - offset back polygon – Floating-point values won’t completely cure this problem
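The usual OpenGL way to “offset the back polygon” is glPolygonOffset; a hedged sketch:

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0, 1.0);   /* push this geometry slightly deeper in the z-buffer */
/* ... draw the coplanar polygon that should lose depth ties ... */
glDisable(GL_POLYGON_OFFSET_FILL);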

Page 71: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Z – Fighting, 1

• Because of limited z-buffer precision (e.g. only 16 bits), z-values must be rounded

– Due to floating point rounding errors, z-values end up in different “bins”

• Z-fighting occurs when two primitives have similar values in the z-buffer

– Coplanar polygons (two polygons occupy the same space)

– One is arbitrarily chosen over the other

– Behavior is deterministic: the same camera position gives the same z-fighting pattern

Two intersecting cubes

Page 72: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Z – Fighting, 2

• Lack of precision in z-buffer leads to artifacts

Van Dam, 2010

Page 73: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

The Aliasing Problem

Page 74: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

The Aliasing Problem

• Aliasing is caused by the finite addressability of the display

• Approximation of lines and circles with discrete points often gives a staircase appearance or "Jaggies“

• Recall, ideal line and turning on pixels to approximate

• Fundamental “challenge” of computer graphics:– Representing the analog (physical) world on a discrete (digital) device

Page 75: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Aliasing

• Ideal rasterized line should be 1 pixel wide– But, of course, not possible with discrete display

• Color multiple pixels for each x depending on, e.g., per cent coverage by ideal line

Page 76: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Aliasing / Antialiasing Examples


(C) Doug Bowman, Virginia Tech, 2002

Page 77: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Antialiasing – Solutions

• More later …

• Aliasing can be smoothed out by using higher addressability

• If addressability is fixed but intensity is variable, use the intensity to control the address of a "virtual pixel"

– Two adjacent pixels can be used to give impression of point part way between them.

– The perceived location of the point is dependent upon the ratio of the intensities used at each.

– The impression of a pixel located halfway between two addressable points can be given by having two adjacent pixels at half intensity.

• An antialiased line has a series of virtual pixels each located at the proper address.
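OpenGL’s built-in primitive smoothing uses exactly this coverage-to-intensity idea; a hedged sketch (blending must be enabled for it to have any effect):

glEnable(GL_LINE_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);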

Page 78: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Color and Displays

Page 79: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Display Considerations

• Color Systems

• Color cube and tristimulus theory

• Gamuts

• “XYZ” system – CIE

• Hue-lightness-saturation

Page 80: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Chromatic Color: Introduction

• Hue – distinguishes among colors such as red, green, purple, and yellow

• Saturation – refers to how pure the color is, how much white/gray is mixed with it

• red is highly saturated; pink is relatively unsaturated• royal blue is highly saturated; sky blue is relatively unsaturated• pastels are less vivid, less intense

• Lightness – embodies the achromatic notion of perceived intensity of a reflecting object

• Brightness – is used instead of lightness to refer to the perceived intensity of a self-luminous (i.e., emitting rather than reflecting light) object, such as a light bulb, the sun, or a CRT

• Humans can distinguish ~7 million colors
– when samples placed side-by-side (JNDs)
– when differences are only in hue, the λ difference of JND colors is ~2 nm in the central part of the visible spectrum, 10 nm at the extremes – non-uniformity!
– about 128 fully saturated hues are distinct
– eye less discriminating for less saturated light (from 16 to 23 saturation steps for fixed hue and lightness), and less sensitive for less bright light

Page 81: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Trichromacy Theory of Color Perception

• “Sensation of light entering eye leads to perception of color”

– Sensation: “firing” of photosensitive receptors
– Perception: subjective “experience” of color

• Trichromacy theory is one account of human color perception

– Follows naturally from human physiology

• 2 types of retinal receptors:
– Rods: low light, monochrome
• So overstimulated at all but low light levels that they contribute little
– Cones: high light, color
– Not evenly distributed on retina

[Figure: distribution of receptors across the retina, left eye shown; the cones are concentrated in the fovea, which is ringed by a dense concentration of rods]
http://www.handprint.com/HP/WCL/color1.html#oppmodel
Wandell, Foundations of Vision

Page 82: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Trichromacy Theory of Color Perception

• Again, cones responsible for sensation at all but lowest light levels

• 3 types of cones
– Differentially sensitive (fire in response) to wavelengths
– Hence, “trichromacy”
– No accident there are 3 colors in a monitor
• Red, green, blue
• Printer: cyan, magenta, yellow
• Can match colors perceived with 3 colors

– Cone receptors are least sensitive to (give least output for) blue

• Will return to human after some computer graphics fleshing out

Cone sensitivity functions

Page 83: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

RGB Color Cube

• Again, can specify color with 3– Will see other way

• RGB Color Cube– Neutral Gradient Line– Edge of Saturated Hues

http://graphics.csail.mit.edu/classes/6.837/F01/Lecture02/ http://www.photo.net/photo/edscott/vis00020.htm

Page 84: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Color Gamut

• Gamut is the colors that a device can display, or a receptor can sense

• Figure at right:
– CIE standard describing human perception
– E.g., a color printer cannot reproduce all the colors visible on a color monitor

• From figure at right, neither film, monitor, nor printer reproduces all the colors humans can see!

Page 85: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

CIE System of Color Standards

• CIE color standards
– Commission Internationale de l’Éclairage (CIE)
– Standard observer
– Lights vs. surfaces
– Often used for calibration

• Uses “abstract” primaries
– Do not correspond to eye, etc.
– Y axis is luminance

• Gamuts– Perceivable colors, gray cone– Produced by monitor, RGB axes

Page 86: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

CIE Standard and Colorimetric Properties

• Chromaticity coordinates:– x, y coordinates– Correspond to wave length

1. If 2 colored lights are represented by 2 points, color of mixture lies on line between points

2. Any set of 3 lights specifies a triangle, and within it lie the realizable colors – the gamut

3. Spectrum locus
– Chromaticity coordinates of pure monochromatic (single wavelength) lights
– E.g., (0.1, 0.1) ≈ 480 nm, “blue”

4. Purple boundary
– line connecting chromaticity coordinates of longest visible red (~700 nm) and blue (~400 nm)

Page 87: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

CIE Standard and Colorimetric Properties

5. White light
– Has equal mixture of all wavelengths
– ~0.33, ~0.33
– Incandescent tungsten source: ~0.45, ~0.47

• More yellow than daylight

6. Excitation purity
– Distance along a line between a pure spectral wavelength and the white point
– = (distance from point to white point) / (distance from white point to spectrum locus)
– Vividness or saturation of color

7. Complementary wavelength of a color
– Draw a line from that color through white and extrapolate to the opposite side of the spectrum locus
– Adding a color to its complement produces white

Page 88: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Gamuts for Color Cube and CIE

• Again, monitor gamut lies within gamut of human perception

– In figure below CIE and color cube within– In figure at right, other gamuts

http://graphics.csail.mit.edu/classes/6.837/F01/Lecture02/

Page 89: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

End


Page 90: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Discrete Techniques Overview

• Geometry path and pixel path

• Buffers

• Digital images
– Color indexing for pseudocolor

• Bitmaps and bitblt

• Mappings – texture and environment– Spinning cube example– Multipass rendering

• Accumulation buffer– Blending, scene antialiasing, depth cue, motion blur, fog

• Antialiasing and sampling theory

Page 91: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Introduction

• Historically, graphics APIs have provided access to geometric objects

– Lines, polygons, polyhedra, …– … which were rasterized into frame buffer– APIs not oriented to direct manipulation of frame buffer

• However, many techniques rely on direct interactions with frame buffer– Texture mapping, antialiasing, compositing, alpha blending, …

• OpenGL allows API access to both “geometry path” and “pixel path”– Bitmaps, pixel rectangles to rasterization and texture mapping to fragment processing

[Figure: the geometry path and the pixel path through the OpenGL pipeline]

Page 92: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

OpenGL Buffers

• In practice, buffer depth typically a few hundred bits and can be much more

– Color• Front: from which image is scanned out • Back: “next” image written into• 2 x 32-color + 8-alpha

– Depth
• Hidden surface removal using z-buffer algorithm
• 16 bits minimum, 32 for floating-point arithmetic

– Color indices• Lookup tables

– Accumulation• Blending, compositing, etc.

• The pixel at location (m_i, n_j) consists of all k bits at that location, one from each of the k bit-planes

Page 93: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Creating an OpenGL Image

• In fact, can directly form images in OpenGL
– Mostly simple images, as shown below
– Just assign a value, here, for each of R, G, and B

GLubyte check[512][512][3];   // RGB image
int i, j, k;

for (i = 0; i < 512; i++)
    for (j = 0; j < 512; j++)
    {
        for (k = 0; k < 3; k++)
            if ((8*i + j/64) % 64)
                check[i][j][k] = 255;
            else
                check[i][j][k] = 0;
    }

• What does this form?
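One hedged way to get the array above onto the screen is the pixel path, e.g., the legacy glDrawPixels call (the raster position is illustrative):

glRasterPos2f(0.0, 0.0);   /* where the image’s lower-left corner lands */
glDrawPixels(512, 512, GL_RGB, GL_UNSIGNED_BYTE, check);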

Page 94: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Image Formats and OpenGL: Not Good News

• Large number of formats poses problems for graphics APIs

• Angel:– “The OpenGL API avoids the problem by not including any image formats”

• Nice feature! (or not)
– “… it is the application programmer’s responsibility to read any formatted images into processor memory and write them out to formatted files.”

• So, no support in OGL for reading even “standard” format images!– Code available on Web– opengl.org

• Code for .bmp is available at a number of places
– This is an easy format that can be uncompressed
– When using, be sure image and reader are of the same .bmp format

Page 95: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Writing in Buffers

• Have seen this, when discussed rubber banding and XOR writes …

• Conceptually, can consider all of memory large two-dimensional array of pixels

– Frame buffer is part of this memory

• Can read and write:– Rectangular block of pixels

• Bit block transfer (bitblt) operations– Individual values

• How “write” to memory depends on writing mode

– Read destination pixel before writing source– Not just write over old with new– Might have new value be, e.g., old XOR new

Page 96: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Also, Can Combine Images/Values – OpenGL Writing Modes

• Accumulation, stencil, etc. buffers can be sources and destinations

– Use OpenGL Writing Modes

• Source and destination bits combined bitwise

• 16 possible functions

e.g.: replace, OR, XOR
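• A minimal sketch (not from the slides) of selecting the XOR writing mode, the mode used earlier for rubber banding, since drawing the same thing twice restores the original pixels:

glEnable(GL_COLOR_LOGIC_OP);   /* enable logical-operation writing          */
glLogicOp(GL_XOR);             /* new pixel = source XOR destination        */
/* ... draw (and later redraw) the rubber-band outline ... */
glLogicOp(GL_COPY);            /* back to the default replace mode          */
glDisable(GL_COLOR_LOGIC_OP);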

Page 97: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

OpenGL – bit block transfer: glBitmap and Example

• From Blue book …

• glBitmap -- draw a bitmap– Draws at current raster position

• void glBitmap(GLsizei width, GLsizei height, GLfloat xorig, GLfloat yorig, GLfloat xmove, GLfloat ymove, const GLubyte *bitmap)

• PARAMETERS
– width, height – width and height of the bitmap image
– xorig, yorig – location of the origin of the bitmap image
– xmove, ymove – x and y offsets to be added to the current raster position after the bitmap is drawn
– bitmap – address of the bitmap image

• Example - Create bitmap directly and sent to current buffer at current raster position

GLubyte wb[2] = {0x00, 0xff};   // all-black byte, all-white byte
GLubyte check[512];             // 64 x 64 bits = 512 bytes
int i, j;

for (i = 0; i < 64; i++)        // 64 rows ...
    for (j = 0; j < 8; j++)     // ... of 8 bytes (64 bits) each
        check[i*8 + j] = wb[(i/8 + j) % 2];   // 8 x 8 checkerboard of 8-pixel squares

glBitmap(64, 64, 0.0, 0.0, 0.0, 0.0, check);
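• For context, a hedged usage sketch: the bitmap is drawn in the current raster color at the current raster position, so both are typically set first (the position px, py is a placeholder for whatever location is wanted):

glColor3f(1.0, 0.0, 0.0);        /* 1-bits of the bitmap will be drawn in red */
glRasterPos2f(px, py);           /* px, py: placeholder raster position       */
glBitmap(64, 64, 0.0, 0.0, 0.0, 0.0, check);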

Page 98: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

The Pixel Pipeline – What? Another pipeline?

Page 99: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

The Pixel Pipeline – What? Another pipeline?

• So far, have seen vertex pipeline– Focus on coordinate systems

• OGL has separate pipeline for pixels!– Its use can in practice be costly

• Writing pixels involves (next slide)– Moving pixels from processor memory to frame buffer– Format conversions– Mapping, lookups, tests, …

• Reading pixels– Format conversion

Geometry Path

Pixel Path

Page 100: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

The Pixel Pipeline

• Writing pixels involves – Moving pixels from processor memory to frame buffer– Format conversions– Mapping, lookups, tests, …

Geometry Path

Pixel Path

Page 101: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Buffer Selection

• OpenGL can draw into or read from any of the color buffers (front, back, auxiliary)

– Default to the back buffer– Have used “swapbuffer”– Change with glDrawBuffer and glReadBuffer

• Later, will discuss use of several buffers

• Again, format of pixels in frame buffer is different from that of processor memory and these two types of memory reside in different places

– Need packing and unpacking– Drawing and reading can be slow
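• A minimal sketch (array size and buffer choices are illustrative) of selecting buffers and reading pixels back into processor memory:

GLubyte pixels[512][512][4];      /* destination in processor memory            */

glReadBuffer(GL_FRONT);           /* read from the front color buffer           */
glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glDrawBuffer(GL_BACK);            /* subsequent drawing goes to the back buffer */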

Page 102: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Mapping – Uses Buffers

• Use of discrete data for surface rendering is powerful – Allows efficient means for producing “realistic” images, where algorithmic techniques are too slow

– E.g., “realistic” image of an orange

• Texture mapping

• Environment mapping

Page 103: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Texture Mapping

Page 104: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Texture Mapping

• Responsible for much of today’s photorealistic cg

• Puts an image on a facet (polygon)– Using some geometry– Will see example

• Lots of variations

• OpenGL texture functions and options

[Figure: an image is mapped onto geometry (x, y, z), which is then rendered to the display]

Page 105: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Texture Mapping: Simple, but …1

• Angel: “Conceptually, the texture-mapping process is simple. …

• Textures are patterns (created or read-image)– Brought in (or created) as array– 1,2,3, or 4 D – consider just 2-D

• A texture has texels (texture elements), as a display has pixels – the elements of the array– Texture is T(s,t), and s and t are texture coordinates

• Texture map associates a texel with each point on a geometric object– And that object is mapped to window coordinates on the display

[Figure: texture space, with s and t in [0, 1] – a texel such as (0.8, 0.4) is associated with a point on triangle ABC in object space]

Page 106: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Texture Mapping: Simple, but …2

• Angel: “On closer examination, we face a number of difficulties.”

– In fact working with all of screen, object, texture, and parametric coordinates

– From texture coordinates to object coordinates

– Actually, because rendering proceeds on a pixel-by-pixel basis, we are more interested in the inverse map from screen coordinates to texture coordinates (than in the easier texture-to-screen map)

– Actually, need to map not texture points to screen points, but texture areas to screen areas

• Wealth of information available about this and other pragmatic issues

– Yet again, all just a “survival guide”

Page 107: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Textures in OpenGL

• Here, texture is 256 x 256 image that has been mapped to a rectangular polygon which is viewed in perspective

• Three steps to applying a texture:

1. Specify the texture• A. read or generate image• B. assign to texture• C. enable texturing

2. Assign texture coordinates to vertices• Mapping function is set in application

3. Specify texture parameters• wrapping, filtering

• Will work through an example

Page 108: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Texture Mapping and OpenGL Pipeline

• Again, big idea …

• Images and geometry flow through separate pipelines that join during rasterization and fragment processing

– “Complex” textures do not affect geometric complexity– Texture application is done “at last minute”

• Can have gigabytes of textures in texture memory– Moore’s law is good

Geometry Path

Pixel Path

Page 109: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

• Define texture image from array of texels (texture elements) in memory– GLubyte my_texels[512][512][3];

• Defined as any other pixel map– Scanned image, generate by application code, etc.

• Enable texture mapping– OpenGL supports 1-4 dimensional texture maps– glEnable(GL_TEXTURE_2D)

• Define image as texture:

Applying a texture:

1. Specifying a Texture Image

glTexImage2D( target, level, components, w, h, border, format, type, texels );

target: type of texture, e.g. GL_TEXTURE_2D
level: used for mipmapping (discussed later)
components: elements per texel
w, h: width and height of the texel array, in pixels
border: used for smoothing (discussed later)
format and type: describe texels
texels: pointer to texel array

glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, my_texels);

Page 110: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Applying a texture:

May Need to Convert Texture Image …• OpenGL requires texture dimensions to be powers of 2

• If dimensions of image are not powers of 2– gluScaleImage(

format, w_in, h_in, type_in, *data_in, w_out, h_out, type_out, *data_out );

• data_in is source image• data_out is for destination image

• Image interpolated and filtered during scaling
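• A hedged example (the 300 x 200 source size is made up) of scaling an RGB image to the nearest powers of 2:

GLubyte in_image[200][300][3];    /* source image: 300 wide, 200 high       */
GLubyte out_image[256][256][3];   /* destination:  256 x 256 (powers of 2)  */

gluScaleImage(GL_RGB,
              300, 200, GL_UNSIGNED_BYTE, in_image,
              256, 256, GL_UNSIGNED_BYTE, out_image);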

Page 111: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

• “Put the texture on the polygon”– Map texture to polygon

• Based on texture coordinates

• glTexCoord*() specified at each vertex

• Extraordinary flexibility

[Figure: texture coordinates (s, t) assigned at the vertices, e.g. (0.2, 0.8), (0.4, 0.2), (0.8, 0.4), map triangle abc in texture space (s, t in [0, 1]) onto triangle ABC in object space]

Applying a texture:

2. Mapping Texture -> Polygon

glBegin(GL_POLYGON);
    glColor3f(r0, g0, b0);      // used when no shading
    glNormal3f(u0, v0, w0);     // used when shading
    glTexCoord2f(s0, t0);
    glVertex3f(x0, y0, z0);
    glColor3f(r1, g1, b1);
    glNormal3f(u1, v1, w1);
    glTexCoord2f(s1, t1);
    glVertex3f(x1, y1, z1);
    ...
glEnd();
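• A common special case (a sketch, not from the slides): pinning the corners of the texture to the corners of a quadrilateral so the whole texture covers the polygon:

glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, -1.0, 0.0);
    glTexCoord2f(1.0, 0.0); glVertex3f( 1.0, -1.0, 0.0);
    glTexCoord2f(1.0, 1.0); glVertex3f( 1.0,  1.0, 0.0);
    glTexCoord2f(0.0, 1.0); glVertex3f(-1.0,  1.0, 0.0);
glEnd();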

Page 112: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Applying a texture:

3. Texture Parameters• OpenGL has variety of parameters that determine how texture applied:

– Wrapping parameters • determine what happens if s and t are outside the (0,1) range

– Filter modes • Allow use of area averaging instead of point samples

– Mipmapping • Allows use of textures at multiple resolutions

– Environment parameters • Determine how texture mapping interacts with shading

– glTexEnv{fi}[v]( GL_TEXTURE_ENV, prop, param )

– GL_TEXTURE_ENV_MODE– GL_MODULATE: modulates with computed shade– GL_BLEND: blends with an environmental color– GL_REPLACE: use only texture color
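• A hedged sketch of typical parameter settings (these particular choices are illustrative, not required):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);      /* wrap s outside [0,1]     */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);      /* wrap t outside [0,1]     */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);  /* area averaging           */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);       /* modulate computed shade  */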

Page 113: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Applying a texture: Magnification and Minification

[Figure: relative size of texture and polygon in each case]

• Magnification: more than one pixel can cover a texel

• Minification: more than one texel can cover a pixel
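• Minification artifacts are usually reduced with mipmapping; a hedged sketch using the GLU helper (my_texels is the array defined earlier):

gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, 512, 512,
                  GL_RGB, GL_UNSIGNED_BYTE, my_texels);    /* builds all resolution levels */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);                   /* trilinear minification       */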

Page 114: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Spinning Textured Cube Code

Page 115: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Spinning Textured Cube Code

• Angel demo

• Spinning cube

• Mouse to select which axis about which to rotate

• Change of rotation during idle

• Now, – Generate textures– Map textures on to cube faces

Page 116: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Environment Maps1

Page 117: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Environment Maps• Theme:

– Global models are slow for interactive cg, so use efficient and “good enough” models

• But, have seen that many elements of photorealism can’t be captured by fast, local techniques

– Interactive cg models limited by just following projectors from object to eye/cop

– Global models, e.g., follow rays from light source, through all of its interactions with objects, to eye/cop

• Ok, actually other way around

• Environment, or reflection, maps improve on simple object-to-eye projections

• Use multipass rendering (a new big idea)• To capture some elements of global model, e.g. reflection from mirror– Use texture mapping

• To (re)introduce into image that which is not captured at first, but on later pass

• Map mirror reflection onto object

Page 118: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Environment Maps – 2 steps to construct

• E.g., want to have mirror in scene – so far, local models can’t handle this

– Polygon with a highly specular surface

• Pos. of viewer and poly (mirror) orientation known– So, can calculate angle of reflection

• Can follow angle until intersect environment– And obtain shade that is reflected in mirror– Shade is the result of a shading process involving light sources and materials in scene

• Two rendering passes to do this:– 1. Render scene without mirror polygon, with camera placed at center of the mirror, pointed in the direction of the normal of the mirror

• “What the mirror sees” (or, is seen by an observer of mirror)• A map of the environment

– 2. Use this image to obtain shades (texture values) to place on mirror polygon for a 2nd rendering with mirror in scene

• Place texture map from 1st pass onto mirror in 2nd pass

[Figure: the scene COP and the “mirror COP” – the first-pass camera is placed at the mirror’s center, looking along its normal]
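• A hedged outline of the two passes (the camera and drawing routines, and the 256 x 256 texture size, are placeholders, not from the slides):

/* Pass 1: render what the mirror "sees" and copy it into a texture                          */
set_camera_at_mirror();         /* placeholder: COP at mirror center, looking along its normal */
draw_scene_without_mirror();    /* placeholder scene-drawing routine                           */
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 256, 256, 0);

/* Pass 2: render normally, with the mirror drawn as a textured polygon                      */
set_scene_camera();             /* placeholder: back to the real COP                           */
draw_scene_without_mirror();
draw_textured_mirror_polygon(); /* placeholder: mirror polygon using the pass-1 texture        */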

Page 119: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Compositing Techniques - Blending

Page 120: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Compositing Techniques - Blending

• So far, concerned with forming single image using only opaque polygons

• Compositing techniques allow combining of objects (fragments, pixels)

• Translucent (and transparent) objects possible as well

– And done in hardware, so efficient– Makes use of pixel operations using raster operations

• Alpha (α) blending– α is the 4th channel in RGBA (or RGBα)– Can control α for each pixel– If blending enabled, value of α controls how RGB values are written into frame buffer

– Objects are “blended” or “composited” together

Page 121: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Writing Model for Blending - 2

• Use A component of RGBA (or RGBα) color to store opacity (transparency)

• During rendering can expand writing model to use RGBA values

• Opacity– measure of how much light penetrates– Opacity of 1 (α = 1) for an opaque surface that blocks all light, and α = 0 for transparent

• Subtractive model (more later)

Page 122: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Writing Model for Blending

• Use A component of RGBA (or RGBα) color to store opacity

• During rendering can expand writing model to use RGBA values

[Figure: blending writing model – the source component, scaled by the source blending factor, is combined with the destination component, scaled by the destination blending factor, and the result is written to the color buffer]
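• A minimal sketch of the usual source-over-destination choice of factors:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   /* source factor = alpha, destination factor = 1 - alpha */
glColor4f(0.0, 0.0, 1.0, 0.5);                       /* a half-transparent blue                               */
/* ... draw the translucent polygon after the opaque geometry ... */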

Page 123: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Antialiasing and Blending

• Major use of α-channel is antialiasing

• Default width of line is 1 pixel– But, unless horizontal or vertical, line covers a number of pixels

• During geometry processing of fragment, might set α-value for corresponding pixel

– Range 0 – 1 reflecting amount of pixel covered by fragment– Use α to modulate color as rendered to frame buffer– Destination blending factor of 1 − α and source factor of α

Page 124: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Antialiasing and Blending - Overlap

• No overlap, opaque background w/frame buffer value C0– Can set α = 0,

• as no part of pixel yet covered with fragments from polygons– Polygon rendered, and

• Color of dest pixel: Cd = (1 − α1)C0 + α1C1 • α-value: αd = α1

– Fragment that covers entire pixel (α1 = 1) • will have its color assigned to the destination pixel, • and dest pixel will be opaque

• If fragments overlap, must blend colors– Cd = (1 − α2)((1 − α1)C0 + α1C1) + α2C2

– αd = α1 + α2(1 − α1)

• To have OpenGL take care of antialiasing:– glEnable(GL_POINT_SMOOTH)– glEnable(GL_LINE_SMOOTH)– glEnable(GL_POLYGON_SMOOTH)– glEnable(GL_BLEND)– glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)

Page 125: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

More Buffer Uses

• Depth cueing
• Fog
• Accumulation buffer
• Scene antialiasing
• Motion blur
• Depth of field
• Stencil buffer

Page 126: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Depth Cueing

• Depth cueing is an early cg technique– Human depth perception in fact arises from many elements• Occlusion, motion parallax, stereopsis, etc.

• Drawing lines farther from viewer dimmer

• Also, atmospheric haze

Page 127: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Depth Cueing and Fog

• Extend basic ideas– Create illusion of partially translucent space between object and viewer– Blend in a distance-dependent color

• Technique: let,– f = fog factor– z = distance from viewer to fragment– Cs = color of fragment– Cf = color of fog

• Then, – Cs’ = fCs + (1-f)Cf

• How f varies determines how perceived– Linearly, depth cueing– Exponentially, more like fog

• OpenGL supports linear, exponential and Gaussian fog densities and user specified color

Page 128: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Fog - OpenGL Example

• For Cs’ = fCs + (1-f)Cf

• OpenGL has straightforward calls to set parameters

– linear, exponential and Gaussian fog densities

• To set up a fog density function, f = e^(-0.5 z^2)

– GLfloat fcolor[4] = {…};

– glEnable(GL_FOG)

– glFogfv(GL_FOG_COLOR, fcolor)– glFogf(GL_FOG_MODE, GL_EXP)– glFogf(GL_FOG_DENSITY, 0.5)
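• For linear (depth-cue style) fog one would instead select GL_LINEAR and give the range over which the fog ramps; a hedged sketch (the start and end distances are made up):

glFogi(GL_FOG_MODE, GL_LINEAR);
glFogf(GL_FOG_START,  1.0);    /* no fog closer than this distance */
glFogf(GL_FOG_END,   10.0);    /* full fog beyond this distance    */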

Page 129: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Motion Blur

• Motion blur– Objects in scene are blurred along the path they are taking

• Use accumulation buffer– Jitter an object and render it multiple times– Leaving positions of other objects unchanged, get dimmer copies of the jittered object in the final image– Each copy is at a slightly different location, so the sum at any one location is less, and so the illumination level is lower

• If object moved along a path, rather than randomly jittered, see the trail of the object

• Can adjust constant in glAccum to render object final position with greater opacity or to create the impression of speed differences
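• A hedged sketch of the accumulation-buffer loop (n, the object-moving routine, and draw_scene are placeholders):

int i, n = 10;                           /* number of sub-frames to accumulate               */

glClear(GL_ACCUM_BUFFER_BIT);
for (i = 0; i < n; i++) {
    move_object_along_path(i);           /* placeholder: advance the moving object           */
    draw_scene();                        /* placeholder: render frame into the back buffer   */
    glAccum(GL_ACCUM, 1.0 / n);          /* add 1/n of this frame to the accumulation buffer */
}
glAccum(GL_RETURN, 1.0);                 /* copy the accumulated image to the color buffer   */
glutSwapBuffers();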

http://www.sgi.com/products/software/opengl/examples/glut/advanced/

Page 130: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Stencil Buffer – Last One!

• Often used for masking

• When writing to the frame buffer, stencil buffer values are tested to decide whether the write takes place

– Sounds like accumulation buffer use– Provides an alternative way

• Can use to mask off part of viewport– E.g., dashboard panel in car - on scene– Just 0’s and 1’s in stencil

• Just enable and write to it

• Can also draw into stencil buffer, then use what drawn in as mask

– E.g., just replace what would be drawn at some locations with what is already in the stencil
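• A hedged sketch of using the stencil buffer as a mask (the dashboard example; the shape- and scene-drawing routines are placeholders):

/* Step 1: draw the mask shape into the stencil buffer only                         */
glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 1, 1);                   /* always pass, reference value 1   */
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);  /* write 1s where the mask is drawn */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
draw_dashboard_shape();                           /* placeholder mask geometry        */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* Step 2: draw the scene only where the stencil is still 0                          */
glStencilFunc(GL_EQUAL, 0, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
draw_scene();                                     /* placeholder scene                */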

Page 131: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Aliasing and Sampling

Page 132: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

Aliasing and Sampling1

• Term “aliasing” comes from sampling theory

• Consider trying to discover nature of sinusoidal wave (solid line) of some frequency

• Measure at a number of different times

• Use to infer shape/frequency of wave form

• If we sample very densely (many times in each period of what we are measuring), we can determine frequency and amplitude

http://www.daqarta.com/dw_0haa.htm

Page 133: CG Summary: Lighting and Shading “From Vertices to Fragments” Discrete Techniques Angel, Chapters 5, 6, 7; “Red Book” slides from AW, red book, etc. CSCI

End