CSCE 552 Spring 2010
Graphics
By Jijun Tang
Fundamentals
Frame and Back Buffer
Visibility and Depth Buffer
Stencil Buffer
Triangles
Vertices
Coordinate Spaces
Textures
Shaders
Materials
Frame and Back Buffer
Both hold pixel colors
Frame buffer is displayed on screen
Back buffer is just a region of memory
Image is rendered to the back buffer
Half-drawn images are very distracting
You do not want to see the screen drawn pixel by pixel
Two Buffers
Switching
Back buffer format can be fairly standard; the frame buffer format must match what the display hardware expects
Back buffer is swapped to the frame buffer
May be a swap, or may be a copy
Copy is actually preferred, because you can still modify the back buffer's format
Back buffer is larger if anti-aliasing is used
Some hardware may not support anti-aliased frame buffers
Shrink and filter down to the frame buffer
Anti-aliasing
In Game Difference
Visibility and Depth Buffer
Depth buffer is a buffer that holds the depth of each pixel in the scene, and is the same size as the back buffer
Holds a depth or “Z” value (often called the “Z buffer”)
Pixels test their depth against the existing value
If greater, the new pixel is further than the existing pixel
Therefore hidden by the existing pixel – rejected
Otherwise, it is in front, and therefore visible
Overwrites the value in the depth buffer and the color in the back buffer
No useful units for the depth value
By convention, nearer means lower value
Non-linear mapping from world space
Object Depth Is Difficult
Basic Depth Buffer Algorithm
for each object in the scene, in any order
    for each pixel in the object being rendered
        if the pixel has not been rendered yet,
           or the new object pixel is closer than the existing one
            replace the displayed pixel (and stored depth) with the new one
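A minimal C++ sketch of this loop, assuming the depth buffer is cleared to the far value so a single comparison handles both cases; the Buffers struct and names here are illustrative, not from the slides:

    #include <cstdint>
    #include <limits>
    #include <vector>

    // Minimal z-buffer sketch. Color and depth are flat arrays;
    // drawPixel() is called for every pixel each object covers, in any order.
    struct Buffers {
        int width, height;
        std::vector<uint32_t> color;  // back buffer (packed color)
        std::vector<float>    depth;  // depth buffer, cleared to "far"

        Buffers(int w, int h)
            : width(w), height(h),
              color(w * h, 0),
              depth(w * h, std::numeric_limits<float>::max()) {}

        void drawPixel(int x, int y, float z, uint32_t rgba) {
            int i = y * width + x;
            if (z < depth[i]) {   // nearer than what is already there?
                depth[i] = z;     // overwrite depth...
                color[i] = rgba;  // ...and color; otherwise the pixel is rejected
            }
        }
    };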
Stencil Buffer
The stencil buffer is a general-purpose per-pixel buffer that controls which pixels get drawn to the back buffer; how it is used is application specific
It is per pixel, usually one byte in size, and usually interleaved with a 24-bit depth buffer (shared memory)
Can write to the stencil buffer
Can reject pixels based on a comparison between the existing value and a reference value
Most useful for adding shadows and planar reflections (sketched below)
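The slides do not tie this to an API; as one illustration, a desktop OpenGL fragment for the planar-reflection case, where drawMirrorQuad() and drawSceneReflected() are hypothetical helpers:

    // Pass 1: mark the mirror's pixels in the stencil buffer.
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);          // always pass, reference value 1
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // write 1 where the mirror draws
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // stencil writes only
    drawMirrorQuad();                           // hypothetical helper

    // Pass 2: draw the reflected scene only where the stencil holds 1.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);           // reject pixels off the mirror
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawSceneReflected();                       // hypothetical helper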
Stencil Example
Frustum
The viewing frustum is the region of space in the modeled world that may appear on the screen
Frustum culling is the process of removing objects that lie completely outside the viewing frustum from the rendering process
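A common way to implement this is to test each object's bounding sphere against the six frustum planes; a minimal C++ sketch, assuming plane normals point into the frustum:

    struct Plane  { float a, b, c, d; };  // ax + by + cz + d = 0, normal points inward
    struct Sphere { float x, y, z, r; };

    // Returns false if the sphere is completely outside any frustum plane.
    bool sphereInFrustum(const Plane planes[6], const Sphere& s) {
        for (int i = 0; i < 6; ++i) {
            float dist = planes[i].a * s.x + planes[i].b * s.y +
                         planes[i].c * s.z + planes[i].d;
            if (dist < -s.r)   // fully behind this plane: cull
                return false;
        }
        return true;           // intersects or inside: keep for rendering
    }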
Frustum and Frustum Culling
Coordinate Spaces
World space
Arbitrary global game space
Object space
Child of world space
Origin at entity’s position and orientation
Vertex positions and normals stored in this space
Camera space
Camera’s version of “object” space
Screen (image) space
Clip space vertices projected to screen space
Actual pixels, ready for rendering
Relationship
Clip Space
Distorted version of camera space
Edges of screen make four side planes
Near and far planes needed to control precision of the depth buffer
Total of six clipping planes
Distorted to make a cube in 4D clip space
Makes clipping hardware simpler
Clip Space
[Figure: camera-space visible frustum (eye at the apex) vs. clip-space frustum; triangles outside will be clipped]
Another View
Tangent Space
Defined at each point on surface of mesh
Usually smoothly interpolated over surface
Normal of surface is one axis
“Tangent” and “binormal” axes lie along the surface
Tangent direction is controlled by artist
Useful for lighting calculations
Tangent Space
Textures
Array of texels
Same as pixel, but for a texture
Nominally R,G,B,A but can mean anything
1D, 2D, 3D and “cube map” arrays
2D is by far the most common
Basically just a 2D image bitmap
Often square and power-of-2 in size
Cube map: six 2D arrays make a hollow cube
Approximates a hollow sphere of texels
Used for environment mapping
Cube Map
Texture Example
Shaders
A program run at each vertex or pixel
Generates pixel colors or vertex positions
Relatively small programs
Usually tens or hundreds of instructions
Explicit parallelism
No direct communication between shaders
“Extreme SIMD” programming model
Hardware capabilities evolving rapidly
Example
Textures
Texture Formats
Texture Mapping
Texture Filtering
Rendering to Textures
Texture Formats
Textures made of texels
Texels have R,G,B,A components
Often do mean red, green, blue colors
Really just a labelling convention
Shader decides what the numbers “mean”
Not all formats have all components
Different formats have different bit widths for components
Trade off storage space and speed for fidelity
Accuracy concerns
Common formats
A8R8G8B8 (RGBA8): 32-bit RGB with alpha, 8 bits per component, 32 bits total
R5G6B5: 5 or 6 bits per component, 16 bits total
A32f: single 32-bit floating-point component
A16R16G16B16f: four 16-bit floats, 64 bits total
DXT1: compressed 4x4 RGB block, 64 bits
Stores 16 input pixels in 64 bits of output (4 bits per pixel)
Consists of two 16-bit R5G6B5 color values and a 4x4 two-bit lookup table
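A decoder for this layout is short; a C++ sketch of the opaque (four-color) mode, omitting DXT1's 1-bit-alpha mode for brevity:

    #include <cstdint>

    struct RGB { uint8_t r, g, b; };

    static RGB expand565(uint16_t c) {   // R5G6B5 -> 8 bits per channel
        RGB o;
        o.r = (uint8_t)(((c >> 11) & 31) * 255 / 31);
        o.g = (uint8_t)(((c >> 5)  & 63) * 255 / 63);
        o.b = (uint8_t)(( c        & 31) * 255 / 31);
        return o;
    }

    // Decode one 64-bit DXT1 block into a 4x4 RGB tile.
    void decodeDXT1Block(const uint8_t block[8], RGB out[16]) {
        uint16_t c0 = (uint16_t)(block[0] | (block[1] << 8));
        uint16_t c1 = (uint16_t)(block[2] | (block[3] << 8));
        RGB pal[4];
        pal[0] = expand565(c0);
        pal[1] = expand565(c1);
        // Two interpolated colors: 2/3 c0 + 1/3 c1, and 1/3 c0 + 2/3 c1.
        pal[2] = { (uint8_t)((2 * pal[0].r + pal[1].r) / 3),
                   (uint8_t)((2 * pal[0].g + pal[1].g) / 3),
                   (uint8_t)((2 * pal[0].b + pal[1].b) / 3) };
        pal[3] = { (uint8_t)((pal[0].r + 2 * pal[1].r) / 3),
                   (uint8_t)((pal[0].g + 2 * pal[1].g) / 3),
                   (uint8_t)((pal[0].b + 2 * pal[1].b) / 3) };
        // 32 bits of indices: 2 bits per texel, row by row, LSB first.
        for (int i = 0; i < 16; ++i) {
            int bits = (block[4 + i / 4] >> ((i % 4) * 2)) & 3;
            out[i] = pal[bits];
        }
    }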
Texels Arrangement
Texels arranged in a variety of ways
1D linear array of texels
2D rectangle/square of texels
3D solid cube of texels
Six 2D squares of texels in a hollow cube
All the above can have mipmap chains
Each mipmap is half the size of the previous in each dimension
Mipmap chain: all mipmaps down to size 1
MIP Maps
Pre-calculated, optimized collections of bitmap images that accompany a main texture
MIP stands for “multum in parvo”, Latin for “much in a small space”
Each bitmap image is a version of the main texture, but at a certain reduced level of detail (like LOD in 3D models)
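A full chain halves each dimension until 1x1, so a 64x64 texture has 7 levels (64, 32, 16, 8, 4, 2, 1); a small C++ helper (a sketch, not from the slides) that counts the levels:

    #include <algorithm>

    // Number of levels in a full mipmap chain: halve the larger
    // dimension until it reaches 1.
    int mipLevelCount(int width, int height) {
        int levels = 1;
        int size = std::max(width, height);
        while (size > 1) {
            size /= 2;
            ++levels;
        }
        return levels;
    }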
MIPMapping Example
Illustration
8x8 2D texture with mipmap chain
4x4 cube map (shown with sides expanded)
Texture Mapping
Texture coordinates called U, V, W
Only need U for 1D; U,V for 2D
U,V,W typically stored in vertices
Or can be computed by shaders
Ranges from 0 to 1 across the texture, no matter how many texels the texture contains (64x64, 128x128, etc.)
Except for cube maps: range is -1 to +1
Texture Space
Wrap mode controls values outside 0-1
Modes: Wrap, Mirror, Mirror once (mirror+clamp), Clamp, Border color
[Figure: the original texture under each wrap mode; black edges shown for illustration only]
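The first three modes are simple coordinate remappings; a C++ sketch of how each maps an arbitrary coordinate back into [0, 1] (border color instead returns a fixed color when the coordinate falls outside the range):

    #include <cmath>

    float wrapU(float u)   { return u - std::floor(u); }             // repeat
    float clampU(float u)  { return u < 0 ? 0 : (u > 1 ? 1 : u); }   // clamp
    float mirrorU(float u) {                                         // mirror
        // Fold the coordinate back and forth with period 2.
        float t = std::fabs(u - 2.0f * std::floor(u / 2.0f) - 1.0f);
        return 1.0f - t;
    }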
Texture Filtering for Resize
Point sampling resizes without filtering
When magnified, individual texels become obvious
When minified, the texture is “sparkly”
Bilinear Filtering
Used to smooth textures when displayed larger or smaller than they actually are
Blends edges of texels
Texel only specifies color at its centre
Magnification looks better
Minification still sparkles a lot
Bilinear Filtering Procedure
When drawing a textured shape on the screen, the texture is not displayed exactly as it is stored, and most pixels will end up needing to use a point on the texture that's 'between' texels
Bilinear filtering uses these points to perform bilinear interpolation between the four texels nearest to the point that the pixel represents
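A C++ sketch of that interpolation, assuming a single-channel texture with texel centers at half-texel offsets and clamp-to-edge addressing:

    #include <algorithm>
    #include <cmath>

    // Bilinear sample of a grayscale texture at (u, v) in [0, 1].
    float bilinearSample(const float* tex, int w, int h, float u, float v) {
        // Map to texel space so texel centers land on integer coordinates.
        float x = u * w - 0.5f;
        float y = v * h - 0.5f;
        int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
        float fx = x - x0, fy = y - y0;   // fractional position between texels
        auto at = [&](int xi, int yi) {   // clamp to edge
            xi = std::min(std::max(xi, 0), w - 1);
            yi = std::min(std::max(yi, 0), h - 1);
            return tex[yi * w + xi];
        };
        // Blend the four nearest texels: first across x, then across y.
        float top = at(x0, y0)     * (1 - fx) + at(x0 + 1, y0)     * fx;
        float bot = at(x0, y0 + 1) * (1 - fx) + at(x0 + 1, y0 + 1) * fx;
        return top * (1 - fy) + bot * fy;
    }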
Bilinear filtering Effect
Point sampling vs. bilinear filtering
Mipmap and Trilinear Filtering
Mipmap chains help minification
Pre-filters a texture to half size
Multiple mipmaps, each smaller than the last
Rendering selects the appropriate level to use
Transitions between levels are obvious
Change is visible as a moving line
Use trilinear filtering to blend between mipmaps smoothly (see the sketch below)
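A C++ sketch of that blend, reusing bilinearSample from the earlier example; the MipChain layout here is illustrative, and lod is assumed non-negative:

    #include <algorithm>
    #include <vector>

    // A texture as a chain of mip levels; level 0 is the full-size image.
    struct MipChain {
        std::vector<std::vector<float>> levels;  // grayscale texel data
        std::vector<int> widths, heights;
    };

    float bilinearSample(const float* tex, int w, int h, float u, float v); // as above

    // Trilinear: bilinear samples from the two nearest mip levels, blended
    // by the fractional part of the level of detail chosen by the renderer.
    float trilinearSample(const MipChain& t, float u, float v, float lod) {
        int last = (int)t.levels.size() - 1;
        int l0 = std::min((int)lod, last);
        int l1 = std::min(l0 + 1, last);
        float f = lod - (float)l0;   // blend factor between the two levels
        float a = bilinearSample(t.levels[l0].data(), t.widths[l0], t.heights[l0], u, v);
        float b = bilinearSample(t.levels[l1].data(), t.widths[l1], t.heights[l1], u, v);
        return a * (1 - f) + b * f;
    }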
Trilinear Filtering
Trilinear can over-blur textures
When triangles are edge-on to the camera
Especially roads and walls
Anisotropic filtering solves this
Takes multiple samples in one direction
Averages them together
Quite expensive in current hardware
Trilinear and Anisotropic Filtering
Rendering to Textures
Textures usually made in an art package
Photoshop/Paint/GIMP/Paint Shop
Loaded from disk
But any 2D image can be a texture
Can set a texture as the target for rendering
Render scene 1 to the texture
Then set the backbuffer as the target again
Render scene 2 using the texture
Cube map needs six renders, one per face
Lighting
Components
Lighting Environment
Multiple Lights
Diffuse Material Lighting
Normal Maps
Pre-computed Radiance Transfer
Specular Material Lighting
Environment Maps
Lighting and Approaches
Processes to determine:
the amount and direction of light incident on a surface
how that light is absorbed, re-emitted, and reflected
which of those outgoing light rays eventually reach the eye
Approaches:
Forward tracing: trace every photon from the light source
Backward tracing: trace a photon backward from the eye
Middle-out: compromise and trace the important rays
Components
Lighting is in three stages:
What light shines on the surface?
How does the material interact with light?
What part of the result is visible to the eye?
Real-time rendering merges the last two
Occurs in vertex and/or pixel shader
Many algorithms can be in either
Lighting Environment
Answers the first question: what light shines on the surface?
Standard model is infinitely small lights
Position
Intensity
Color
Physical model uses the inverse square rule:
brightness = light brightness / distance²
Practical Ways
But this gives a huge range of brightnesses
Monitors have limited range
In practice it looks terrible
Most people use inverse distance instead:
brightness = light brightness / distance
Add a min distance to stop over-brightening
Except where you want over-brightening!
Add a max distance to cull lights
Reject very dim lights for performance
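The three tweaks combine into a few lines; a C++ sketch of this practical falloff (names are illustrative):

    #include <algorithm>

    // Inverse-distance attenuation with the practical tweaks above:
    // a minimum distance to stop over-brightening, and a maximum
    // distance beyond which the light is culled entirely.
    float lightBrightness(float sourceBrightness, float distance,
                          float minDistance, float maxDistance) {
        if (distance > maxDistance)
            return 0.0f;                             // cull the light
        float d = std::max(distance, minDistance);   // clamp near the source
        return sourceBrightness / d;                 // 1/d, not the physical 1/d^2
    }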
Multiple Lights
Environments have tens or hundreds of lights
Too slow to consider every one at every pixel
Approximate the less significant ones
Ambient light
Attributable to the reflection of light off of surfaces in the environment
Single color added to all lighting
Washes contrasts out of the scene
Acceptable for overcast daylight scenes
Ambient Lights
Hemisphere lighting
Models two broad light sources: the sky is light blue, the ground is dark green or brown
Dot-product normal with the “up vector”
Blend between the two colors
Good for brighter outdoor daylight scenes
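A C++ sketch of that blend, assuming a unit normal and up = (0, 1, 0), so the dot product with "up" is just normal.y:

    struct Vec3  { float x, y, z; };
    struct Color { float r, g, b; };

    // Hemisphere lighting: blend sky and ground colors by how much
    // the surface normal points up.
    Color hemisphereLight(const Vec3& normal,
                          const Color& sky, const Color& ground) {
        float t = 0.5f * (normal.y + 1.0f);   // remap dot in [-1,1] to [0,1]
        return { ground.r + (sky.r - ground.r) * t,
                 ground.g + (sky.g - ground.g) * t,
                 ground.b + (sky.b - ground.b) * t };
    }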
Example
Other Multiple Lights
Cube map of irradiance
Stores incoming light from each direction
Look up the value that the normal points at
Can represent any lighting environment
Spherical harmonic irradiance
Store the irradiance cube map in frequency space
10 color values give at most 6% error
Calculation instead of a cube-map lookup
Mainly for diffuse lighting
Cube Map
Lightmaps
Usually store the result of the lighting calculation
But can just store irradiance
Lighting still done at each pixel
Allows lower-frequency light maps
Still high-frequency normal maps
Still view-dependent specular
Lightmap Example
Diffuse Material Lighting
Light is absorbed and re-emitted
Re-emitted in all directions equally
So it does not matter where the eye is
Same amount of light hits the pupil
“Lambert” diffuse model is common
Brightness is the dot-product between the surface normal and the incident light vector
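The Lambert term is one line of math; a C++ sketch, assuming unit-length vectors and clamping so surfaces facing away from the light receive none:

    #include <algorithm>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Lambert diffuse: dot product of the unit surface normal and the
    // unit vector from the surface toward the light, clamped at zero.
    float lambert(const Vec3& normal, const Vec3& toLight) {
        return std::max(0.0f, dot(normal, toLight));
    }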
Example
Normal Maps
Surface normal vector stored in vertices
Changes slowly, so surfaces look smooth
Real surfaces are rough
Lots of variation in surface normal
Would require lots more vertices
Normal maps store the normal in a texture
Look up the normal at each pixel
Perform the lighting calculation in the pixel shader
Normal Mapping Example
Radiance Transfer
Surface usually represented by:
Normal
Color
Roughness
But all we need is how it responds to light from a certain direction
The data above is just an approximation
Why not store the response data directly?
Pre-computed Radiance Transfer
Can include the effects of:
Local self-shadowing
Local scattering of light
Internal structure (e.g. skin layers)
But the data size is huge
Color response for every direction
Different for each part of the surface
A cube map per texel would be crazy!
Pre-computed Radiance Transfer
Store the cube maps as spherical harmonics
One SH per texel
Further compression by other methods
But:
Difficult to do animated meshes
Still lots of memory
Lots of computation
Poor at specular materials
Results
Specular Material Lighting
Light bounces off the surface
How much light bounced into the eye?
Other light did not hit the eye – so it is not visible!
Common model is “Blinn” lighting
Surface made of “microfacets”
They have random orientation, with some type of distribution
Example
Specular Material Lighting (2)
Light comes in along the incident light vector
…reflects off a microfacet
…into the eye
Eye and light vectors are fixed for the scene
So we know the microfacet normal required
Called the “half vector”
half vector = (incident + eye) / 2
How many microfacets have that normal?
Specular Material Lighting (3)
Microfacets distributed around the surface normal according to a “smoothness” value
Take the dot-product of the half vector and the normal
Then raise it to the power of “smoothness”
Gives a bright spot where normal = half vector
Tails off quicker when the material is smoother
Specular Material Lighting (4)
half = (light + eye) / 2
alignment = max(0, dot(half, normal))
brightness = alignment^smoothness
[Figure: light vector, normal, half vector, and eye vector at the surface]
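A C++ sketch of those three formulas; toLight and toEye are assumed to be unit vectors from the surface point, and the half vector is normalized rather than divided by 2, which yields the same direction:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    // Blinn specular, following the slide's formulas.
    float blinnSpecular(const Vec3& normal, const Vec3& toLight,
                        const Vec3& toEye, float smoothness) {
        Vec3 half = normalize({ toLight.x + toEye.x,
                                toLight.y + toEye.y,
                                toLight.z + toEye.z });
        float alignment = std::max(0.0f, dot(half, normal));
        return std::pow(alignment, smoothness);   // bright spot, sharp falloff
    }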
Environment Maps
Blinn is used for slightly rough materials
Only models bright lights
Light from normal objects is ignored
Smooth surfaces can reflect everything
No microfacets for smooth surfaces
Only care about one source of light: the one that reflects to hit the eye
Example
Environment Maps - 2
Put a picture of the environment in a cube map
Reflect the eye vector in the surface normal
Look up the result in the cube map
Can take the normal from a normal map
Bumpy chrome
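The reflection step is the standard formula r = e - 2(n·e)n; a C++ sketch, assuming a unit normal and an eye vector pointing from the eye toward the surface (the result indexes the cube map directly):

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Reflect the eye-to-surface vector about the surface normal.
    Vec3 reflect(const Vec3& eye, const Vec3& normal) {
        float k = 2.0f * dot(normal, eye);
        return { eye.x - k * normal.x,
                 eye.y - k * normal.y,
                 eye.z - k * normal.z };
    }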
Environment Maps - 3
Environment map can be static
Generic sky + hills + ground
Often hard to notice that it’s not correct
Very cheap, very effective
Or render it every frame with the real scene
Render to the cube map sides
Selection of the scene centre can be tricky
Expensive to render the scene six times
Hardware Rendering Pipe
Input Assembly
Vertex Shading
Primitive Assembly, Cull, Clip
Project, Rasterize
Pixel Shading
Z, Stencil, Framebuffer Blend
Shader Characteristics
Shader Languages
Hardware Rendering Pipe
Current outline of the rendering pipeline
Can only be very general
Hardware moves at a rapid pace
Hardware varies significantly in details
Functional view only
Not representative of performance
Many stages move around in actual hardware
Input Assembly
State changes handled
Textures, shaders, blend modes
Streams of input data read
Vertex buffers
Index buffers
Constant data
Combined into primitives
Vertex Shading
Vertex data fed to the vertex shader
Also misc. states and constant data
Program runs until completion
One vertex in, one vertex out
Shader cannot see multiple vertices
Shader cannot see triangle structure
Output stored in the vertex cache
Output position must be in clip space
Primitive Assembly, Cull, Clip
Vertices read from the cache
Combined to form triangles
Cull triangles:
Frustum cull
Back face (clockwise ordering of vertices)
Clipping performed on non-culled tris
Produces tris that do not go off-screen
Project, Rasterize
Vertices projected to screen space
Actual pixel coordinates
Triangle is rasterized
Finds the pixels it actually affects
Finds the depth values for those pixels
Finds the interpolated attribute data
Texture coordinates
Anything else held in vertices
Feeds results to the pixel shader
Pixel Shading
Program run once for each pixel
Given interpolated vertex data
Can read textures
Outputs the resulting pixel color
May optionally output a new depth value
May kill the pixel
Prevents it from being rendered
Z, Stencil, Framebuffer Blend
Z and stencil tests performed
Pixel may be killed by the tests
If not, new Z and stencil values are written
If no framebuffer blend, write the new pixel color to the backbuffer
Otherwise, blend the existing value with the new one
Shader Characteristics
Shaders rely on massive parallelism
Breaking parallelism breaks speed
Can be thousands of times slower
Shaders may be executed in any order
So restrictions are placed on what a shader can do:
Write to exactly one place
No persistent data
No communication with other shaders
Shader Languages
Many different shader capabilities
Early languages looked like assembly
Different assembly for each shader version
Now have C-like compilers
Hides a lot of implementation details
Works with multiple versions of hardware
Still the same fundamental restrictions
Don’t break parallelism!
Expected to keep evolving rapidly
Conclusions
Traverse scene nodes
Reject or ignore invisible nodes
Draw objects in visible nodes
Vertices transformed to screen space
Using vertex shader programs
Deform mesh according to animation
Make triangles from them
Rasterize into pixels
Conclusions (2)
Lighting done by a combination
Some part vertex shader
Some part pixel shader
Results in a new color for each pixel
Reject pixels that are invisible
Write or blend to the backbuffer
Recommended