Photo-realistic Rendering and Global Illumination in Computer Graphics
Spring 2012
Texturing
K. H. Ko
School of Mechatronics, Gwangju Institute of Science and Technology
Overview
Texturing is a process that takes a surface and modifies its appearance at each location using some image, function, or other data source. Huge modeling, memory, and speed savings are obtained by combining images and surfaces. Color image texturing also provides a way to use photographic images and animations on surfaces.
Generalized Texturing
Texturing is a technique for efficiently modeling a surface's properties.
The Generalized Texture Pipeline (figure).

Generalized Texturing
A location in space is the starting point for the texturing process. This location is most often given in the model's frame of reference, so when the model moves, the texture moves along with it.
Projector Function
The goal of the projector function is to generate texture coordinates. It is a function that yields parameter-space values, which will be used for accessing the texture. It typically works by converting a three-dimensional point in space into texture coordinates: a projection of a point in 3D to a point in 2D. Spherical, cylindrical, and planar projections are common, as are natural projections that come with the surface itself.
Generalized Texturing
Projector Function
Spherical projection casts points onto an imaginary sphere centered around some point. Cylindrical projection computes the u texture coordinate the same way as spherical projection, with the v texture coordinate computed as the distance along the cylinder's axis. Planar projection is like an x-ray slide projector: it projects along a single direction and applies the texture to all surfaces, using an orthographic projection.
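As a rough sketch of these three projectors (the function names, the z-aligned cylinder axis, and the axis conventions are assumptions, not from the slides; the point must not coincide with the projection center):

```python
import math

def spherical_uv(p, center=(0.0, 0.0, 0.0)):
    # Cast the point onto an imaginary sphere around `center`:
    # u from the azimuth angle, v from the polar angle.
    x, y, z = (p[i] - center[i] for i in range(3))
    u = (math.atan2(y, x) + math.pi) / (2.0 * math.pi)
    v = math.acos(z / math.sqrt(x * x + y * y + z * z)) / math.pi
    return u, v

def cylindrical_uv(p, center=(0.0, 0.0, 0.0), height=1.0):
    # u is computed as in the spherical case; v is the distance
    # along the cylinder's axis (here the z axis).
    x, y, z = (p[i] - center[i] for i in range(3))
    u = (math.atan2(y, x) + math.pi) / (2.0 * math.pi)
    v = z / height
    return u, v

def planar_uv(p):
    # Orthographic "slide projector" along -z: drop the z coordinate.
    return p[0], p[1]
```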
Generalized Texturing
Corresponder Functions
They convert parameter-space values to texture-space locations and provide flexibility in applying textures to surfaces. One class is an optional matrix transformation, which can translate, rotate, scale, shear, and even project the texture on the surface. Another class of corresponder functions controls the way an image is applied, determining the behavior when values fall outside the [0, 1] range: wrap, mirror, clamp, or border.
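Three of these out-of-range modes might be sketched as follows (a minimal, hypothetical illustration; the border mode, which returns a separate border color, is omitted):

```python
def wrap(u):
    # Repeat the image across the surface: keep only the fractional part.
    return u % 1.0

def mirror(u):
    # Repeat, but reflect the image on every other repetition.
    t = u % 2.0
    return t if t <= 1.0 else 2.0 - t

def clamp(u):
    # Values outside [0,1] use the edge of the image.
    return min(max(u, 0.0), 1.0)
```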
Generalized Texturing
Once the texture values have been retrieved, they may be used directly or further transformed, and the resulting values are used to modify one or more surface attributes. Almost all real-time systems use Gouraud shading, meaning that only certain values are interpolated across a surface, so these are the only values that the texture can modify. Typically we modify the RGB result of the lighting equation, since this equation is evaluated at each vertex and the color is then interpolated.
Generalized Texturing
Combine Functions (Texture Blending Operations)
These operations glue an image texture onto a surface.
Replace: simply replace the original surface color with the texture color.
Decal: like replace, but when an alpha texture value is available, the texture is alpha-blended with the underlying color; where the texture is fully transparent, the original color is left unmodified.
Modulate: multiply the surface color by the texture color.
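A minimal sketch of the three combine functions, assuming colors are RGB tuples of floats in [0, 1] (the names are mine, not the slides'):

```python
def replace(surface, tex, alpha=1.0):
    # Replace: ignore the surface color entirely.
    return tex

def decal(surface, tex, alpha):
    # Decal: alpha-blend the texture over the surface color.
    return tuple(t * alpha + s * (1.0 - alpha) for t, s in zip(tex, surface))

def modulate(surface, tex, alpha=1.0):
    # Modulate: multiply the surface color by the texture color.
    return tuple(s * t for s, t in zip(surface, tex))
```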
Generalized Texturing
Summary with a brick wall texture example:
A modeler sets the (u,v) parameter values once in advance for the wall model vertices.
The texture is read into the renderer, and the wall polygons are sent down the rendering pipeline.
A white material is used in computing the illumination at each vertex.
This color and the (u,v) values are interpolated across the surface.
At each pixel, the proper brick image’s texel is retrieved and modulated by the illumination color and displayed.
Image Texturing
In image texturing, a two-dimensional image is effectively glued onto the surface of a polygon and rendered.
Issues: suppose we have an image of size 256×256 and want to use it as a texture on a square.
Case 1: The projected square on the screen is roughly the same size as the texture.
Case 2: The projected square on the screen is larger than the texture (magnification).
Case 3: The projected square on the screen is smaller than the texture (minification).
The solution depends on what kind of sampling and filtering methods are used.
Magnification
Nearest Neighbor
This method takes the value of the nearest texel to each pixel center when magnifying, resulting in a blocky appearance; a characteristic of this magnification technique is that the individual texels may become apparent.
Magnification
Bilinear Interpolation
Find the four neighboring texels and linearly interpolate in two dimensions to find a blended value for the pixel. The result is blurrier, but much of the jaggedness from using the nearest neighbor method disappears.
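A sketch of bilinear sampling over a row-major grid of scalar texels (a simplified stand-in for an RGB texture; the clamp addressing and half-texel center offset are my assumptions):

```python
import math

def bilinear_sample(tex, u, v):
    # `tex` is a row-major 2D grid of scalar texel values; (u, v) in [0,1].
    h, w = len(tex), len(tex[0])
    x = u * w - 0.5            # texel centers sit at half-integer positions
    y = v * h - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0

    def texel(i, j):
        # Clamp addressing at the texture edges.
        return tex[min(max(j, 0), h - 1)][min(max(i, 0), w - 1)]

    top = texel(x0, y0) * (1 - fx) + texel(x0 + 1, y0) * fx
    bottom = texel(x0, y0 + 1) * (1 - fx) + texel(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bottom * fy
```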
Minification
When a texture is minified, several texels may cover a pixel's cell. To get a correct color value for each pixel, one must integrate the effect of the texels influencing the pixel. However, it is difficult to determine precisely the exact influence of all texels near a particular pixel, and effectively impossible to do so perfectly in real time.
Minification
One method is to use the nearest neighbor: select the texel visible at the very center of the pixel's cell. This filter may cause severe aliasing problems, including temporal aliasing, because only one of the many texels influencing the pixel is chosen to represent the surface.
Minification
Bilinear Interpolation
The same as the magnification filter. It is only slightly better than the nearest neighbor approach for minification: it blends four texels instead of using just one, so when a pixel is influenced by more than four texels, the filter soon fails and produces aliasing.
Minification
To avoid aliasing, increase the pixel's sampling frequency or decrease the texture frequency.
The Nyquist limit: we need to make sure that the texture's signal frequency is no greater than half the sample frequency. In general, for textures, this means there should be at most one texel per pixel to avoid aliasing.
All texture antialiasing algorithms work the same way: preprocess the texture and create a data structure that helps compute a quick approximation of the effect of a set of texels on a pixel, so that a single sample retrieves the effect of one or more texels.
Minification
Mipmapping
The most popular antialiasing method for textures.
The texture (level zero) is downsampled to a quarter of the original area; each new texel value is often computed as the average of the four neighboring texels. The new, level-one texture is called a subtexture of the original texture. The reduction is performed recursively until one or both of the dimensions of the texture equals 1 texel.
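A sketch of building such a pyramid by box filtering, assuming a square, power-of-two texture stored as a row-major grid of scalar texels:

```python
def build_mipmaps(level0):
    # level0: square, power-of-two grid of scalar texel values.
    levels = [level0]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        # Each new texel is the average of its four "parent" texels.
        nxt = [[(prev[2 * j][2 * i] + prev[2 * j][2 * i + 1] +
                 prev[2 * j + 1][2 * i] + prev[2 * j + 1][2 * i + 1]) / 4.0
                for i in range(n)] for j in range(n)]
        levels.append(nxt)
    return levels
```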
Minification
Mipmapping
The goal is to determine roughly how much of the texture influences the pixel, so we compute a coordinate d for the mipmap. The intent of computing d is to determine where to sample along the mipmap pyramid's axis, keeping the pixel-to-texel ratio at least 1:1 to achieve the Nyquist rate. As the pixel cell comes to include more texels, d increases, and a smaller, blurrier version of the texture is accessed.
Minification
Mipmapping
The result of mipmapping is that, instead of trying to sum all the texels that affect the pixel individually, precombined sets of texels are accessed and interpolated. This causes overblurring.
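One plausible sketch of level selection and blending between adjacent levels (the derivative-based computation of d that hardware actually uses is not shown; sample2d stands for a 2D filter such as the bilinear sampler above):

```python
import math

def mip_level_d(texels_per_pixel):
    # d = log2 of the texel-to-pixel footprint ratio; 0 means 1:1.
    return max(0.0, math.log2(max(texels_per_pixel, 1.0)))

def trilinear(levels, u, v, d, sample2d):
    # Interpolate between the two mipmap levels that bracket d,
    # sampling each level with the supplied 2D filter.
    lo = min(int(d), len(levels) - 1)
    hi = min(lo + 1, len(levels) - 1)
    f = d - int(d)
    return (sample2d(levels[lo], u, v) * (1.0 - f) +
            sample2d(levels[hi], u, v) * f)
```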
Minification
Ripmapping
To avoid some or all of the overblurring, the idea is to extend the mipmap to include downsampled rectangular areas as subtextures that can be accessed. Four coordinates are needed for access.
Minification
Summed-Area Table
Create an array that is the size of the texture but contains more bits of precision for the color stored. At each location in this array, compute and store the sum of all the texture's texels in the rectangle formed by that location and texel (0,0). During texturing, the pixel cell's projection onto the texture is bounded by a rectangle, and the summed-area table is accessed to determine the average color of this rectangle, which is passed back as the texture's color for the pixel.
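A sketch with scalar texels (real tables store per-channel color sums at higher precision):

```python
def build_sat(tex):
    # sat[j][i] = sum of all texels in the rectangle (0,0)..(i,j).
    h, w = len(tex), len(tex[0])
    sat = [[0.0] * w for _ in range(h)]
    for j in range(h):
        row = 0.0
        for i in range(w):
            row += tex[j][i]
            sat[j][i] = row + (sat[j - 1][i] if j > 0 else 0.0)
    return sat

def rect_average(sat, x0, y0, x1, y1):
    # Average over the inclusive texel rectangle (x0,y0)..(x1,y1),
    # using the standard four-corner inclusion-exclusion lookup.
    def at(i, j):
        return sat[j][i] if i >= 0 and j >= 0 else 0.0
    total = at(x1, y1) - at(x0 - 1, y1) - at(x1, y0 - 1) + at(x0 - 1, y0 - 1)
    return total / ((x1 - x0 + 1) * (y1 - y0 + 1))
```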
Minification
Unconstrained Anisotropic Filtering
On current graphics hardware, the most common method to further improve texture filtering is to reuse the existing mipmap hardware.
The basic idea: the pixel cell is back-projected onto the texture, the resulting quadrilateral is sampled a number of times, and the samples are combined. Each mipmap sample has a location and a squarish area associated with it. Instead of using a single mipmap sample to approximate this quad's coverage, the algorithm uses a number of squares to cover the quad.
Minification
Unconstrained Anisotropic Filtering
A line of anisotropy is formed between the longer sides of the quadrilateral, and the mipmap samples are taken along it.
Multipass Texture Rendering
The various parts of the lighting equation can be evaluated in separate passes, with each successive pass modifying the previous results: motion blur, antialiasing, shadows, etc.
Example: suppose the diffuse color should be modulated by a texture, but the specular highlight left unmodified by it. In the first pass, compute and interpolate the diffuse illumination contribution and modulate it by the texture. Then compute and interpolate the specular part and render the scene again; this result is added to the existing diffusely lit, textured image.
Multipass Texture Rendering
The basic idea behind multipass rendering is that each pass computes a piece of the lighting equation, and the frame buffer is used to store intermediate results. On the fastest machines, up to 10 passes are done on some objects to render a single frame:
Passes 1-4: accumulate bump map
Pass 5: diffuse lighting
Pass 6: base texture with specular component
Pass 7: specular lighting
Pass 8: emissive lighting
Pass 9: volumetric/atmospheric effects
Pass 10: screen flashes
Multitexturing
Most graphics hardware today allows two or more textures to be applied in a single rendering pass; this is called multitexturing.
To combine the results of these texture accesses, a texture blending cascade is defined, made up of a series of texture stages, or texture units. The first texture stage combines two texture values, typically RGB and alpha, and this result is then passed on to the next texture stage. The second and successive stages blend another texture's or interpolant's values with the previous result.
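The cascade is essentially a left fold over the stages; a minimal, hypothetical sketch (the stage blend functions could be the replace/decal/modulate operations from earlier):

```python
def texture_cascade(base, stages):
    # `stages` is a sequence of (blend_fn, texture_value) pairs; each
    # stage blends its texture with the result of the previous stage.
    result = base
    for blend_fn, tex_value in stages:
        result = blend_fn(result, tex_value)
    return result
```

For example, texture_cascade(lit_color, [(modulate, base_tex), (modulate, light_map)]) would apply a base texture and then a light map in one conceptual pass.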
Texturing Methods
We will cover various other forms of texturing beyond gluing simple color images onto surfaces:
Alpha blending in texturing
Reflections via environment mapping
Rough surface simulation using bump mapping
Etc.
Alpha Mapping
The alpha value can be used for many interesting effects.
Decaling: You wish to put a picture of a flower on a teapot. By properly setting the decal texture's alpha, you can replace or blend the underlying surface with the decal. By assigning an alpha of 0 to a texel, you make it transparent so that it has no effect.
Making cutouts: You make a decal image of a tree, but you do not want the background of this image to affect the scene at all. If a texel's alpha is fully transparent, the textured surface itself does not affect that pixel. A sketch of this test follows below.
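A minimal sketch of the cutout test combined with decal blending (RGBA tuples of floats; returning None to mean "pixel untouched" is my own convention):

```python
def shade_with_cutout(tex_rgba, surface_rgb):
    # Fully transparent texels leave the pixel untouched (cutout).
    r, g, b, a = tex_rgba
    if a <= 0.0:
        return None
    # Otherwise blend the decal over the surface by its alpha.
    return (r * a + surface_rgb[0] * (1.0 - a),
            g * a + surface_rgb[1] * (1.0 - a),
            b * a + surface_rgb[2] * (1.0 - a))
```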
Alpha Mapping
Combining alpha blending and texture animation can produce convincing special effects: flickering torches, plant growth, explosions, atmospheric effects, etc.
Light Mapping
For static lighting in an environment, the diffuse component on any surface remains the same from any viewing angle. Because of this view independence, the contribution of light to a surface can be captured in a texture structure attached to the surface. By using a separate, precomputed texture that captures the lighting contribution, and multiplying it with the underlying surface, one can achieve Phong-like shading.
Light Mapping
If the lighting will never change, or will change only in overall brightness, the light texture can simply be multiplied by the surface's material texture during the modeling stage, and the single resulting texture can be used.
Textures that represent shadows and projective textures are both related to light maps.
Gloss Mapping
Not all objects are uniformly shiny over their surface. This can be simulated by a technique called gloss mapping; the texture that makes this happen is called a gloss map. A gloss map is a texture that varies the contribution of the specular component over the surface.
The key idea is that all material properties can be supplied by textures rather than by constants or per-vertex values.
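In per-pixel terms the idea is tiny; a hedged sketch, assuming RGB tuples and a per-texel gloss scalar in [0, 1]:

```python
def shade_with_gloss(diffuse, specular, gloss):
    # Scale the specular contribution by the gloss map value
    # fetched at this texel, then add it to the diffuse term.
    return tuple(d + gloss * s for d, s in zip(diffuse, specular))
```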
Environment Mapping
Environment mapping (EM), also called reflection mapping, is a simple yet powerful method of generating approximations of reflections in curved surfaces. All EM methods start with a ray from the viewer to a point on the reflector. This ray is then reflected with respect to the normal at that point. Instead of finding the intersection with the closest surface, EM uses the direction of the reflection vector as an index into an image containing the environment.
Environment Mapping
The steps of an EM algorithm are as follows (a sketch of the reflection step appears after the list):
Generate or load a two-dimensional image representing the environment.
For each pixel that contains a reflective object, compute the normal at the location on the surface of the object.
Compute the reflection vector from the view vector and the normal.
Use the reflection vector to compute an index into the environment map that represents the objects in the reflection direction.
Use the texel data from the environment map to color the current pixel.
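The reflection step is the standard formula r = v - 2(v · n)n; a sketch, assuming n is a unit normal and both vectors are 3-tuples:

```python
def reflect(v, n):
    # r = v - 2 (v . n) n, with n a unit-length surface normal
    # and v the direction from the viewer to the surface point.
    d = 2.0 * sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - d * ni for vi, ni in zip(v, n))
```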
Cubic Environment Mapping
Far and away the most popular EM method implemented in modern graphics hardware, due to its speed and flexibility.
The cubic environment map is obtained by placing the camera at the center of the environment and then projecting the environment onto the sides of a cube positioned with its center at the camera's location. The images of the cube are then used as the environment map. In practice, the scene is rendered six times with the camera at the center of the cube, looking at each cube face with a 90-degree view angle.
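Indexing the map means choosing the face hit by a direction vector and converting to face-local (u, v); a sketch following the OpenGL cube-map orientation convention (other APIs differ, and the direction must be nonzero):

```python
def cube_face_uv(d):
    # Pick the dominant axis of direction d = (x, y, z), then map the
    # remaining two components to (u, v) in [0,1] on that face.
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, ma, sc, tc = ('+x', ax, -z, -y) if x > 0 else ('-x', ax, z, -y)
    elif ay >= az:
        face, ma, sc, tc = ('+y', ay, x, z) if y > 0 else ('-y', ay, x, -z)
    else:
        face, ma, sc, tc = ('+z', az, x, -y) if z > 0 else ('-z', az, -x, -y)
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)
```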
Cubic Environment Mapping
Cubic maps have no singularities and are view-independent: they can be used for any view direction.
Sphere Mapping
The texture image is derived from the appearance of the environment as viewed orthographically in a perfectly reflective sphere; this texture is called a sphere map. One way to make a sphere map of a real environment is to take a photograph of a shiny sphere. Sphere map textures for synthetic scenes can be generated using ray tracing or by warping the images generated for a cubic environment map.
Lighting Using Environment Mapping
An important use of EM techniques is generating specular reflections and refractions. Gouraud shading can miss highlights (reflections of lights); EM can solve this problem by representing the lights in the texture. We can then simulate highlights on a per-pixel basis for any number of lights at a fixed cost.
Lighting Using Environment Mapping
Recursive reflections of objects in a scene can be performed using EM: compute only one environment map per frame, using the environment maps from the previous frame.
Bump Mapping
Bump mapping is a technique that makes a surface appear uneven in some manner: bumpy, wrinkled, wavy, etc. Bump maps can simulate features that would otherwise take many polygons to model.
The basic idea is that instead of using a texture to change a color component in the illumination equation, we access a texture to modify the surface normal. The geometric normal of the surface remains the same; we merely modify the normal used in the lighting equation. The surface itself thus remains smooth in the geometric sense.
Bump Mapping
Method 1 (offset vector bump map, or offset map): Store two signed values, bu and bv, at each point of a texture. These are used to offset the normal, changing its direction.
Method 2: Use a heightfield to modify the surface normal's direction; the signed u and v values are derived from the heightfield, as sketched below.
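One common way to derive the offsets is central differences of neighboring heights; a sketch (the sign convention and the scale parameter are assumptions, and edges are clamped):

```python
def bump_offsets(height, i, j, scale=1.0):
    # Central differences of the heightfield give the signed slopes
    # (bu, bv) used to tilt the normal at texel (i, j).
    h, w = len(height), len(height[0])
    bu = (height[j][min(i + 1, w - 1)] - height[j][max(i - 1, 0)]) * 0.5 * scale
    bv = (height[min(j + 1, h - 1)][i] - height[max(j - 1, 0)][i]) * 0.5 * scale
    return bu, bv
```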
Bump Mapping
Two drawbacks:
The illusion breaks down around the silhouettes of objects, where the viewer notices that there are no real bumps, just smooth outlines.
The bumps do not cast shadows onto their own surface, which can look unrealistic. More advanced real-time rendering methods can be used to provide self-shadowing effects.
Bump Mapping
Emboss Bump Mapping
A way to give a chiseled look to an image:
Render the surface with the heightfield applied as a diffuse monochrome texture.
Shift all the vertex (u,v) coordinates in the direction of the light.
Render this surface with the heightfield again applied as a diffuse texture, subtracting from the first-pass result. This gives the emboss effect.
Render the surface again with no heightfield, diffusely illuminated and Gouraud-shaded. Add this shaded image to the result.
Bump Mapping
Dot Product Bump Mapping
Instead of storing heights or slopes, the actual normals for the surface are stored as (x,y,z) vectors in a normal map. The bump texture, which consists of normals, is then combined with the interpolated light vector at each pixel. They are combined by taking their dot product, a special texture-blending function provided precisely for this purpose.
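A sketch of that per-pixel dot product, assuming the normal map stores components remapped from [-1, 1] into the [0, 1] color range and the light vector is unit length and in the same space as the normals:

```python
def dot3_diffuse(normal_texel, light_vec):
    # Unpack the normal-map texel from color range back to [-1, 1],
    # then take N . L as the per-pixel diffuse term.
    n = tuple(2.0 * c - 1.0 for c in normal_texel)
    return max(0.0, sum(ni * li for ni, li in zip(n, light_vec)))
```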
Bump Mapping
Environment Map Bump Mapping (EMBM)
One way to give the appearance of bumpiness to a shiny surface. The idea is to perturb the (u,v) environment-mapping coordinates by the u and v differentials found in the bump texture. This gives the effect of wobbling the reflection vector, thereby distorting the look of the reflected surface.