Perceptually Driven Simplification of Lit Polygonal Models
A Thesis Presented to
The University of Virginia
by
Nathaniel Custis Williams
a
Bachelor of Arts Candidate in
the Cognitive Science Program
April 19th, 2002
Charlottesville, Virginia
Acknowledgements
I feel very fortunate to have worked in two research laboratories as an undergraduate in
the Cognitive Science Program. Through the kindness and guidance of Psychology Professor
Peter Brunjes and Computer Science Professor David Luebke I’ve matured a great deal as a
researcher in the last two years. I am very honored to consider these men my friends as well
as my advisors. I would also like to acknowledge the assistance of Professor Jonathan Cohen
of Johns Hopkins University and fellow undergraduates Mike Kelley and Brenden Schubert
for their role in development of this system.
Table of Contents
Acknowledgements................................................................................................................... ii
Table of Contents..................................................................................................................... iii
List of Figures .......................................................................................................................... iv
Glossary of Terms..................................................................................................................... v
Abstract ................................................................................................................................... vii
Chapter 1 : Introduction........................................................................................................... 1
1.1 Driving Problem.............................................................................................................. 1
1.2 Literature Review............................................................................................................ 1
1.3 Contribution .................................................................................................................... 6
1.4 Thesis Organization......................................................................................................... 7
Chapter 2 : Perceptual Model .................................................................................................. 8
2.1 Spatial Frequency............................................................................................................ 8
2.2 Contrast ......................................................................................................................... 10
2.3 The Contrast Sensitivity Function................................................................................. 13
2.4 Imperceptibility Distance .............................................................................................. 15
Chapter 3 : A Perceptually Driven Simplification System ................................................... 16
3.1 The Multi-Triangulation................................................................................................ 16
3.2 Preprocess...................................................................................................................... 17
3.3 Budget-Based Simplification ........................................................................................ 18
3.4 Simplification of Textured Models ............................................................................... 19
3.5 Simplification of Dynamically Illuminated Models...................................................... 20
Chapter 4 : Results ................................................................................................................. 24
4.1 Comparison to the Luebke-Hallen Approach................................................................ 26
4.2 Comparison to Appearance Preserving Simplification ................................................. 26
Chapter 5 : Conclusions ......................................................................................................... 30
5.1 Summary ....................................................................................................................... 30
5.2 Future Work .................................................................................................................. 31
Bibliography ........................................................................................................................... 35
List of Figures
Figure 1.1 Several LODs for the Stanford bunny ..................................................................... 2
Figure 1.2 LODs rendered at increasing distance from the viewer .......................................... 2
Figure 1.3 A pair of vertices is merged..................................................................................... 3
Figure 1.4 Silhouette preservation ............................................................................................ 4
Figure 1.5 Bumpy torus model demonstrating two surface normal representations ................ 6
Figure 2.1 Two contrast gratings with equal contrast............................................................... 8
Figure 2.2 Bounding the lowest spatial frequency affected by geometry................................. 9
Figure 2.3 Texture deviation................................................................................................... 10
Figure 2.4 Calculation of silhouette status with normal cones ............................................... 13
Figure 2.5 The Contrast Sensitivity Function ......................................................................... 14
Figure 3.1 The Multi-Triangulation........................................................................................ 17
Figure 3.2 Contrast visualization of a texture mapped torus .................................................. 20
Figure 3.3 Calculation of normal vectors with shading cones................................................ 22
Figure 3.4 Contrast visualization of an illuminated torus....................................................... 23
Figure 4.1 Simplification of an illuminated torus drawn in wireframe .................................. 24
Figure 4.2 Difference image used for RMS calculation ......................................................... 25
Figure 4.3 Comparison of our approach to Luebke & Hallen's algorithm ............................. 26
Figure 4.4 Simplification comparison of a normal mapped red armadillo ............................. 27
Figure 4.5 Simplification comparison of a vertex illuminated red armadillo......................... 28
Figure 4.6 Simplification comparison of a lit torus with a puzzle texture.............................. 29
Figure 4.7 Models used for simplification comparison .......................................................... 29
Figure 5.1 Perceptual image difference comparing two armadillo representations................ 33
Glossary of Terms
Arc – an element of the multi-triangulation that indicates a dependency between two nodes.

Bounding Sphere – a data structure for conservatively bounding the error introduced by performing a simplification operation (raising a node above the cut). This sphere bounds all geometry that is affected by the simplification. The bounded geometry consists of all triangles contained in the parent and child arcs of the node.

Budget-Based Simplification – an approach to view-dependent simplification where the level of detail size is constrained by a triangle budget.

Computer Graphics – the study of digital information visualization.

Contrast – a measure of the luminance variation across a stimulus.

Contrast Grating – a sinusoidally varying pattern of luminance used to detect the limits of low-level visual perception.

Contrast Sensitivity – sensitivity to luminance differences between two areas of a stimulus. Contrast sensitivity is the inverse of threshold contrast.

Contrast Sensitivity Function – a model of the low-level perceptibility of visual stimuli developed through contrast grating experiments.

Cut – a set of arcs from a multi-triangulation that represents a unique level of detail.

Dynamic Level of Detail – a data structure that encodes a continuum of model simplifications, allowing for runtime extraction of the required level of detail.

Imperceptibility Distance – the distance a user must be from a stimulus such that it will not be perceptible. This is determined by inversion of the contrast sensitivity function.

Just-Noticeable Differences – a measure of the perceptual error between two images, given in multiples of the signal threshold.

Level of Detail – a technique for increasing rendering speed by displaying a simplification of a complex geometric model. Also, the name given to a model’s simplification.

Luminance – the achromatic measure of a stimulus’s brightness.

Multi-Triangulation – a directed acyclic graph representation of a polygonal model that permits view-dependent simplification.

Node – an element of the multi-triangulation that encodes a local simplification operation.

Normal Cone – a data structure for bounding the normals associated with a patch of surface. The representation consists of a cone axis vector and a sub-angle that indicates the cone’s breadth with respect to the main axis. We use a normal cone for shading calculations, where the cone bounds normals from the normal map. We also use a normal cone for silhouette detection, where the normals come from the triangles of a surface patch.
Normal Maps – a method of assigning surface normals using a parameterization to link between the model and the normal map. This technique decouples the surface normals from underlying geometry, generally allowing for higher simplification quality.
Our System – the unnamed view-dependent visualization system that I have designed and implemented with David Luebke, Jonathan Cohen, Mike Kelley, and Brenden Schubert. In this thesis I speak of the system in the first person plural to acknowledge their contributions.
Root – the highest node in a multi-triangulation. A cut directly below the root represents the coarsest level of detail for the model.
Root-Mean-Squared Error – a measure of error between two images. A difference image is first computed as the pixel-wise subtraction of one image from another. The Root-Mean-Squared Error is the square root of the mean of the squared pixel values in the difference image.
Sink – the lowest node in a multi-triangulation. A cut directly above the sink represents the original model.
Spatial Frequency – a measure of contrast grating frequency in cycles per degree of visual arc. This measure depends both on absolute grating frequency and a viewer’s distance from the grating.
Static Level of Detail – an approach to simplification where multiple representations of a model are produced prior to visualization. At runtime an LOD is selected to meet quality and speed requirements of the system.
Suprathreshold Sensitivity – the visual system’s sensitivity to stimuli above the threshold for detection, i.e. how perceptible a visible stimulus is.
Texture Deviation – a measure of the distance between two parametrically corresponding points, used to gauge possible distortion in a texture mapped or normal mapped level of detail.
Threshold Contrast – the minimum contrast that can be perceived for a stimulus of known spatial frequency. It is often measured using contrast gratings.
Vertex Normals – a representation in which a surface normal is stored with every vertex. These normals are generally used for illumination calculations. Since vertex normals are tied to the geometry, simplification of these models often produces visible artifacts.
View-Dependent Simplification – an approach to simplification based on dynamic level of detail. A novel level of detail is generated before every frame with regard to the viewer’s location.
Visual Masking – the modulation of a stimulus’s perceptibility due to interference from adjacent stimuli.
Abstract
For over twenty years computer graphics practitioners have designed systems for the
interactive rendering of complex models. The method of polygonal simplification trades off
visual fidelity for speed by rendering a simpler representation of a complex model. We
present a visualization system that drives polygonal simplification with a model of low-level
human vision. We consider the perceptual effect of simplification so that we may produce a
new representation that is difficult to distinguish from the original. We improve on previous
perceptually based algorithms by accounting for dynamic lighting and texture mapping. Our
perceptual model automatically weighs the relative importance of these effects on
simplification. As a result, our visualization system requires little user input and still
produces high fidelity simplifications at interactive rates. Our system could be applied to
tasks such as medical visualization, architectural walkthroughs, and video games – anywhere
that a complex polygonal model must be simplified to permit interactive visualization.
Chapter 1: Introduction
1.1 Driving Problem
Computer graphics is concerned with the presentation of digital information to a user.
As the complexity of the digital information grows, it becomes increasingly difficult to
present the information at interactive rates. In certain applications of computer graphics,
such as the creation of computer-generated films, slow rendering of frames is a minor issue
since the movie is created before it is viewed. However, in cases such as medical
visualization, architectural walkthroughs, and modern video games, the user needs to interact
with the computer-generated scene. These applications must draw enough frames per second
and remain responsive to input, or the user may become distracted.
Simplification is an established method for rendering complex data at interactive rates.
Given a complex polygonal model, a simplification process may be applied to create a
simpler model, which can then be shown to the user instead of the original. This simple
model can be rendered more quickly, resulting in a higher frame rate. Unfortunately, this
simplification process may result in decreased visual fidelity of the model. Perceptual
psychologists are concerned with how humans perceive their environment and have
constructed models that determine how perceptible a stimulus is. By applying a perceptual
model to the simplification process, we can reduce model complexity in ways that will be as
inconspicuous as possible.
1.2 Literature Review
The computer graphics community has developed level of detail (LOD) techniques as a
means of displaying complex models at interactive rates [5]. Early LOD approaches involve
an artist’s construction of several discrete LODs for a model; the system chooses one of these
static LODs at run time and displays it on screen. A coarse LOD with few polygons can be
rendered more quickly, but is noticeably cruder than the original model (Figure 1.1).
However, as the displayed size of the model is decreased, it becomes increasingly difficult to
distinguish between the representations (Figure 1.2). Many early level of detail systems
choose an LOD based on the distance from the object to the viewer.
Figure 1.1 Several LODs for the Stanford bunny. The LOD on the right provides the worst approximation to the original model (left), but can be rendered fastest. Courtesy of David Luebke, University of Virginia.
Figure 1.2 LODs rendered at increasing distance from the viewer. As the bunny’s projected size is decreased, increasingly coarse LODs can be substituted without perceptible effect. Courtesy of David Luebke, University of Virginia.
Over the last ten years, researchers have developed sophisticated algorithms to
automate the process of LOD creation. The vertex-merging techniques reduce model
complexity by collapsing several vertices in a region of the original model to a single vertex
in the resulting LOD. The Rossignac-Borrel vertex-merging algorithm begins by associating
a weight with every vertex in the model [28]. A vertex is given more weight if it is in a
region of high curvature or if the vertex is incident on large polygons. A uniform grid is then
overlaid on the model, and all vertices in a grid cell are collapsed into the vertex with the
highest weight in that cell. The quality and size of the resulting LOD are determined by the
resolution of the overlaid grid. In another vertex-merging approach, quadric error metrics,
pairs of vertices are iteratively collapsed together resulting in a model with one less vertex
after every step (Figure 1.3) [15]. The algorithm selects the vertex pair with the least error to
collapse, where error is defined as the sum of squared distances from the vertex to the
planes of the polygons incident on it. This polygonal simplification technique preserves areas of high
curvature. Quadric error metrics is a fast way of calculating a static LOD of specified size
and recently has been extended to account for more than geometric error. New
implementations also weigh vertex attributes such as surface normals, color, and textures in
the creation of coarser LODs [14, 18]. One disadvantage of these extensions is that the
relative importance of these attributes must be hand-tuned in a trial and error process, since
there is no intuitive way to automatically weight the value of different attributes.
Figure 1.3 A pair of vertices is merged. The vertex pair (v1, v2) is merged into vertex v, resulting in a simplification from 10 to 8 triangles. This vertex contraction is also known as an edge collapse. Courtesy of Michael Garland, University of Illinois Urbana-Champaign.
Dynamic level of detail techniques represent a different approach to the LOD selection
process [16]. These approaches involve preprocess construction of a data structure that
encompasses a full range of possible simplifications for a given model. At run time the
visualization system extracts an LOD from the data structure that uses no more polygons than
necessary, generally resulting in a better balance between visual fidelity and rendering speed.
A natural use of dynamic LOD is view-dependent simplification, which allows a single model
to be dynamically simplified far from the viewpoint while maintaining fidelity nearby [17,
23]. This approach is especially practical for terrain visualization, since the distant areas of
large terrain models can be aggressively simplified.
Rendering the model’s silhouette regions at a higher resolution than the non-silhouette
regions is a common heuristic in view-dependent frameworks (Figure 1.4) [23]. The
importance of silhouettes was initially recognized in many of the static LOD approaches that
prioritize simplification according to surface curvature. High curvature regions are likely to
be on the silhouette during visualization. View-dependent simplification allows explicit
calculation of a model’s silhouette for every frame, which results in better guarantees on
LOD quality.
Figure 1.4 Silhouette preservation. This sphere has been dynamically simplified for a user viewing from the right. The finely tessellated stripe represents silhouette preservation for that user.
Although researchers have developed useful tools for measuring the geometric quality
of a simplification, the fundamental goal of LOD techniques is to create simplifications that
look like the original [4, 24]. Funkhouser and Sequin were among the first to develop a
system that accounted for perceptual factors [13]. Their system employed static LODs,
which limited possible model size reduction. They also weighted the importance of
perceptual criteria such as visual eccentricity in an ad-hoc manner that required user
manipulation of parameters to tune properly.
Reddy was the first to drive all aspects of LOD selection with a unified perceptual
model of the human visual system [27]. His virtual reality system attempted to choose static
LODs based on pre-analyzed data of the LOD’s frequency content. Lindstrom and Turk
developed an image-based approach that evaluates the perceptual error associated with a
simplification operation in a preprocess [22]. Their algorithm compares images of the
original and simplified model from multiple viewpoints and so predicts the error associated
with using a particular LOD at runtime. However, since it is impossible to pre-render the
model from every viewing direction, both of these approaches could fail if insufficient data is
analyzed. Luebke and Hallen developed a system that guarantees imperceptible
simplification within a view-dependent visualization system [24]. Their approach makes
conservative judgments of the perceptual effect of local simplification operations. However,
by requiring imperceptible simplification, their system does not always simplify models
enough to permit interactive visualization. Furthermore, they are unable to handle the classes
of textured and dynamically lit models.
1.3 Contribution
Our research applies a perceptual model to the simplification of textured and
dynamically illuminated models. We have developed a view-dependent, polygonal
simplification system that renders simplified models with as little perceptible difference from
the original model as possible, while maintaining interactive frame rates. We utilize a
budget-based approach to simplification that allows the user to select a comfortable trade off
between interactivity and quality of simplification.
We handle a more general class of models than the similar Luebke-Hallen approach,
which only supported models with colors assigned at each vertex. Our system is able to
simplify models according to their texture content and shading due to standard per-vertex
lighting or per-pixel normal map lighting. Normal mapping is the calculation of surface
normals for lighting from a map of normals, much as color is extracted from a texture map
[6, 12]. Since these normals aren’t as closely tied to the geometry as vertex normals are,
normal mapped LODs generally look more like the original model when illuminated (Figure
1.5). These extensions make our system more applicable to practical use.
Figure 1.5 Bumpy torus model demonstrating two surface normal representations. Left: 44,252 triangles, full resolution mesh. Middle: 5,531 triangles with vertex normals. Right: 5,531 triangles with normal maps. Notice that the normal map approach produces a higher fidelity simplification. Courtesy of Jonathan Cohen, Johns Hopkins University.
There are multiple advantages to driving simplification with a perceptual metric.
Unlike approaches developed for Quadric Error Metrics to handle non-geometric vertex
attributes, our perceptually based algorithm is automatic (Section 1.2). Our system
automatically weighs the relative importance of the vertex attributes such as silhouette status,
texture content, and feature size using a perceptual metric. LOD techniques generally strive
to create simplifications that look like the original by using heuristics. Our approach to
simplification directly seeks high quality LODs through application of a perceptual metric to
the selection process.
1.4 Thesis Organization
In the following chapters, we describe our perceptually based model simplification
system and some results that it has generated. Chapter two examines how we judge the
perceptibility of performing a local simplification. In chapter three we describe our system
and provide some technical implementation details. Chapter four presents data comparing
our system to similar approaches. Finally, in chapter five we present conclusions and
motivate future developments to the system.
Chapter 2: Perceptual Model
We base our polygonal simplification scheme on research into the low-level
perceptibility of visual stimuli. The perceptibility of a stimulus is fundamentally determined
by its contrast and relative size [2]. A contrast grating is a pattern in which luminance varies
sinusoidally across the image (Figure 2.1). Historically, contrast gratings have served as the
basis for measuring human visual acuity in the perceptual psychology literature. Below we
describe how conservative approximations regarding the size and contrast associated with a
simplification operation allow us to apply a perceptual model to LOD selection.
Figure 2.1 Two contrast gratings with equal contrast. The contrast grating on the left has a lower spatial frequency than the one on the right. Below, the two waves demonstrate how luminance varies sinusoidally in the gratings. Courtesy of Martin Reddy, SRI International.
2.1 Spatial Frequency
The perceptibility of a simple stimulus is partially determined by its spatial frequency,
which is a measure of size in cycles per degree of visual arc. In our standard approach, we
measure spatial frequency by computing a bounding sphere that surrounds the geometry
affected by a simplification operation. The lowest spatial frequency that can be affected by a
bounding sphere subtending Θ° of visual arc has one cycle per 2Θ° (see Figure 2.2).
Conservatively bounding the model surface patch with a sphere has data storage benefits and
frees us from dealing with the perceptual effect of orientation [11]. Although higher
frequency changes may occur within this simplified region, we rely on Campbell’s finding
through Fourier analysis that the perceptibility of complex signals can be determined through
decomposition of stimuli into their pure sinusoidal components [2]. We find the worst-case
spatial frequency introduced by a simplification operation by calculating the spatial extent of
the affected region.
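This worst-case bound can be sketched in a few lines of Python. The sketch below is illustrative only (the function names and the handling of a viewer inside the sphere are assumptions, not our system's actual code); it assumes the sphere radius and viewer distance are in the same world units.

```python
import math

def subtended_angle_deg(sphere_radius, distance_to_viewer):
    """Degrees of visual arc subtended by a bounding sphere."""
    if distance_to_viewer <= sphere_radius:
        return 180.0  # viewer inside the sphere: assume maximal extent
    # A sphere of radius r viewed from distance d subtends 2*asin(r/d).
    return math.degrees(2.0 * math.asin(sphere_radius / distance_to_viewer))

def lowest_spatial_frequency(sphere_radius, distance_to_viewer):
    """Lowest spatial frequency (cycles per degree) that a simplification
    bounded by this sphere can affect: one cycle per 2*theta degrees."""
    theta = subtended_angle_deg(sphere_radius, distance_to_viewer)
    return 1.0 / (2.0 * theta)
```

For example, a unit-radius sphere viewed from about 114.6 units away subtends roughly one degree of visual arc, giving a worst-case frequency of 0.5 cycles per degree.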
Figure 2.2 Bounding the lowest spatial frequency affected by geometry. The lowest spatial frequency that can be affected by a simplification operation spanning Θ° of visual arc has one cycle per 2Θ°. Shown in orange is the sphere-bounded geometry. Courtesy of David Luebke, University of Virginia.
Our system uses a conservative bounding sphere in the standard spatial frequency
calculation, but for texture mapped models we calculate the spatial frequency associated with
a simplification operation by examining texture deviation. Texture deviation is the distance
measured between a pair of corresponding points, where the mapping is established in the
model’s parameter space [6]. In Figure 2.3 the surface patch Mi-1 is re-triangulated into the
patch Mi and a correspondence between all points in Mi-1 and Mi is established through
parameter space P. For every point Xi-1 in Mi-1 we find the point Xi in patch Mi with
identical parametric coordinates and then measure the distance between Xi-1 and Xi in the
model’s coordinate system. The final texture deviation associated with a simplification
operation is the maximum of all measured deviations.
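The deviation measurement reduces to a maximum over sampled correspondences. A minimal sketch follows, assuming corresponding surface points have already been sampled at identical parametric coordinates (the sampling itself, performed through the mapping P, is omitted, and the function name is illustrative):

```python
import math

def texture_deviation(points_before, points_after):
    """Maximum distance between parametrically corresponding points.

    points_before / points_after: (x, y, z) positions on the patch
    before and after a simplification operation, sampled at identical
    (u, v) parametric coordinates.
    """
    return max(math.dist(p, q) for p, q in zip(points_before, points_after))
```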
Figure 2.3 Texture deviation. The surface patch Mi-1 is simplified into the patch Mi by collapse of the green edge. The texture deviation associated with this simplification is measured through a parameter space mapping P. See text for details. Courtesy of Jonathan Cohen, Johns Hopkins University.
High frequency textures are likely to hide perceptible simplification artifacts [29]. Our
use of texture deviation to calculate spatial frequency in textured models is motivated by this
finding, as it relates to the perceptual effect of visual masking. Visual masking is the
suppression or enhancement of a stimulus’s perceptibility due to interference from stimuli of
different spatial frequencies [11]. Although driving simplification with visual masking
effects is too computationally expensive in our interactive framework, it is a motivating
factor behind our use of texture deviation.
2.2 Contrast
The other main influence on the perceptibility of a stimulus is contrast. As with most
contrast grating studies, we measure the Michelson contrast associated with a stimulus whose
luminance varies between Lmin and Lmax. The Michelson contrast CM of a stimulus is
defined as:
CM = (Lmax - Lmin) / (Lmax + Lmin) (2.1)
The luminances Lmin and Lmax are pre-calculated for the patch of surface associated with
every simplification operation and used in the determination of stimulus perceptibility
(Section 3.2). The threshold contrast associated with a stimulus of known spatial frequency
is the minimum contrast that can be perceived for a stimulus of that spatial extent. The
contrast sensitivity to such a stimulus is the inverse of threshold contrast.
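Equation 2.1 translates directly into code; the sketch below is illustrative, and the guard for an all-black stimulus is our assumed convention for the degenerate case:

```python
def michelson_contrast(l_min, l_max):
    """Michelson contrast C_M = (Lmax - Lmin) / (Lmax + Lmin), equation 2.1.

    Returns a value in [0, 1]: 0 for a uniform patch, approaching 1 as
    the darkest luminance approaches zero.
    """
    if l_min + l_max == 0.0:
        return 0.0  # degenerate all-black stimulus (assumed convention)
    return (l_max - l_min) / (l_max + l_min)
```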
We begin the contrast calculation by determining luminance values from the red, green,
and blue values used to model color in our display system. We ignore chromatic effects
associated with contrast sensitivity and calculate a color’s luminance Y with the following
weighting [26]:
Y = 0.2126R + 0.7152G + 0.0722B (2.2)
This weighting was developed for use with modern cathode ray tube (CRT) monitors. Once
we have calculated luminance in the context of our display system representation, we must
convert this to physical luminance, which is the measure given by a photometer. We convert
system luminance Y to physical luminance L through a non-linear transformation called
gamma correction [26]:
L = Lblack + Lwhite · Y^γ (2.3)
In this representation, γ represents the gamma correction factor, Lblack indicates the physical
black level luminance of the monitor, and Lwhite represents the physical white level luminance
of the monitor. We model our function after Lindstrom and use values γ = 1.4, Lblack = 5
cd/m², and Lwhite = 100 cd/m², which are values calibrated for modern monitors [21].
Gamma correction is necessary to convert system level luminance values to the same
representation used by contrast grating studies.
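Equations 2.2 and 2.3 chain together to convert display RGB into physical luminance. A sketch using the calibration values quoted above (function names are illustrative; RGB components are assumed normalized to [0, 1]):

```python
def system_luminance(r, g, b):
    """Equation 2.2: system luminance Y from normalized RGB."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def physical_luminance(y, gamma=1.4, l_black=5.0, l_white=100.0):
    """Equation 2.3: gamma-correct system luminance Y to physical
    luminance (cd/m^2), with the monitor calibration values from the text."""
    return l_black + l_white * (y ** gamma)
```

A full-white pixel thus maps to 5 + 100 = 105 cd/m², and full black to the monitor's 5 cd/m² black level.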
We also take the model’s boundary contour (silhouette) into account in determining the
contrast associated with every surface patch. We detect silhouette status by storing a
silhouette normal cone with every simplification operation [19, 23]. The normal cone
bounds the triangle normals of all triangles involved in the operation (Figure 2.4). At
runtime our silhouette test quickly compares the silhouette normal cone, the viewing
direction vector, and the stored bounding sphere. Silhouettes represent a special place in the
model where we can’t know the contrast associated with a simplification operation. In
principle, a simplification induced surface distortion could uncover the darkest or brightest
portion of the scene. This uncertainty compels us to conservatively assume maximal contrast
for operations on surfaces that we calculate to be on the model’s silhouette. Silhouette
preservation, long used as a heuristic in view-dependent simplification schemes, naturally
falls out of our perceptual framework since regions flagged as having high contrast are
unlikely to be simplified.
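The flavor of the runtime silhouette test can be conveyed with a small sketch. This is a simplified illustration, not our exact implementation: we assume the spread of viewing directions toward the bounding sphere has already been collapsed into a single half-angle, and that all vectors are unit length.

```python
import math

def may_be_silhouette(cone_axis, cone_half_angle, view_dir, sphere_half_angle):
    """Conservative silhouette test for a surface patch.

    cone_axis: unit axis of the normal cone bounding the patch's normals.
    cone_half_angle: breadth of the normal cone (radians).
    view_dir: unit vector from the eye toward the patch.
    sphere_half_angle: half-angle (radians) of the cone from the eye
        enclosing the patch's bounding sphere.

    The patch may contain a silhouette if some bounded normal could be
    perpendicular to some viewing ray, i.e. the angle between the cone
    axis and the view direction lies within the combined slack of 90 degrees.
    """
    dot = sum(a * v for a, v in zip(cone_axis, view_dir))
    angle = math.acos(max(-1.0, min(1.0, dot)))
    slack = cone_half_angle + sphere_half_angle
    return abs(angle - math.pi / 2.0) <= slack
```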
Figure 2.4 Calculation of silhouette status with normal cones. Left: the surface geometry is bounded by the sphere and the surface normals are bounded by the cone. Right: a cone is projected from the eye point to enclose the bounding sphere so that silhouette detection may be performed on this surface patch. Courtesy of David Johnson, University of Utah.
2.3 The Contrast Sensitivity Function
The contrast sensitivity function (CSF) is a model of the low-level perceptibility of
stimuli and was developed through contrast grating experiments. The CSF plots contrast
sensitivity against spatial frequency and so provides a model of the threshold detection
capabilities of the visual system (Figure 2.5). We maintain our assumption that the worst-
case spatial frequency is associated with a stimulus’s bounding sphere by clamping the CSF
to its peak contrast sensitivity for the lowest spatial frequencies. This clamping generally
will occur for spatial frequencies below 2-4 cycles per degree, depending on the perceptual
model.
Figure 2.5 The Contrast Sensitivity Function. The red line indicates how we clamp the CSF (see text for details). Courtesy of Martin Reddy, SRI International.
We have chosen a variant of the Daly CSF model developed by Rushmeier as our
underlying perceptual model [7, 30]:
(2.4)
Here contrast sensitivity AD is a function of the spatial frequency f (in cycles per degree).
Daly developed a sophisticated model of the visual system by fitting a curve to empirical
data from contrast grating studies. Rushmeier’s simplification of the Daly model (equation
2.4) assumes a weak dependence on the light adaptation level and uses a constant value of 50
cd/m2. The light adaptation level indicates the amount of ambient light that the user became
adapted to before the contrast grating study began.
2.4 Imperceptibility Distance
Ideally, our perceptual metric would take the contrast and spatial frequency of a
simplification operation as input and indicate the perceptibility of performing the operation.
This would require a suprathreshold model of the visual system capable of indicating the
relative perceptibility of stimuli below the threshold for stimulus detection. Unfortunately,
we know of no suprathreshold model in the literature that is fast enough to apply to our
interactive system.
Since we lack a proper suprathreshold model, we apply the well-understood model of
threshold level perception, the contrast sensitivity function. We begin by inverting the CSF
to take contrast as input such that it indicates the spatial frequency of a just-perceptible
stimulus of that contrast [24]. We compute the imperceptibility distance of a simplification
operation using this calculated spatial frequency and the screen-space size of the operation.
We define the imperceptibility distance of a stimulus as the shortest distance from which it is
imperceptible. Occluded features on a model have an imperceptibility distance of zero, since
they are not visible regardless of the viewer's distance from the screen.
The LOD imperceptibility distance of a model is the maximum of all imperceptibility
distances associated with the simplification operations used to generate the LOD. Due to our
conservative calculations of contrast and spatial frequency, our system can say that a user at
LOD imperceptibility distance will not be able to distinguish the LOD from the original
model. We would be able to guarantee this characteristic if not for visual masking effects.
Due to suprathreshold effects, we can make no guarantees about the perceptibility of stimuli
inside the LOD imperceptibility distance. However, we find that using imperceptibility
distance as a measure of perceptual error seems to work well in practice (Chapter 4).
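The calculation can be sketched as follows, with hypothetical names: the CSF is assumed to return contrast sensitivity (the reciprocal of threshold contrast), and a feature is treated as one half-cycle of a grating, so its angular size at threshold is 1/(2f*) degrees:

```python
import math

def highest_detectable_frequency(contrast, csf, f_lo, f_hi):
    """Invert the falling branch of the CSF: binary-search for the highest
    spatial frequency (cycles/degree) at which a grating of the given contrast
    is still detectable, i.e. where csf(f) drops to 1/contrast. Assumes csf is
    decreasing on [f_lo, f_hi], above the target at f_lo and below it at f_hi."""
    target = 1.0 / contrast
    for _ in range(60):
        mid = 0.5 * (f_lo + f_hi)
        if csf(mid) >= target:
            f_lo = mid
        else:
            f_hi = mid
    return f_lo

def imperceptibility_distance(feature_size, contrast, csf, f_lo=1.0, f_hi=60.0):
    """Shortest viewer distance from which a feature of the given world-space
    size and contrast is imperceptible: the distance at which the feature
    subtends the just-detectable angular size of 1/(2 f*) degrees."""
    f_star = highest_detectable_frequency(contrast, csf, f_lo, f_hi)
    half_angle = math.radians(1.0 / (2.0 * f_star)) / 2.0
    return feature_size / (2.0 * math.tan(half_angle))
```

Higher-contrast features remain detectable at higher spatial frequencies, so their imperceptibility distance is larger, exactly the behavior the LOD imperceptibility distance aggregates over all simplification operations.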
Chapter 3: A Perceptually Driven Simplification System
3.1 The Multi-Triangulation
We represent polygonal models in our view-dependent visualization system with the
multi-triangulation (MT) data structure [8, 9]. The MT is a directed acyclic graph consisting
of a set of nodes N that are connected by a set of arcs A. The lowest node in the graph is
called the sink and the highest node in the MT is known as the root. Every node in the MT
represents a single simplification operation and arcs represent dependencies among the nodes
(see Figure 3.1). These node-based operations represent the replacement of a set of triangles
with a smaller set of triangles, where the two triangle sets always share a common border.
The triangles that the simplification operation removes from the original model are stored in
the child arcs of a node and the replacement triangles are stored with the parent arcs of a
node. The sink node has no child arcs.
To extract an LOD of the original model we generate a cut of the MT. A cut is a set of
arcs that partitions the MT into a set of nodes above the cut, which always contains the root,
and a set below the cut that always contains the sink (Figure 3.1). To extract the original model from the MT we would take a cut A' consisting of the sink's parent arcs (these are
drawn as dashed lines in Figure 3.1). A’ precisely contains every triangle in the original
model and we can examine this set of arcs to discover which triangles to render.
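The cut mechanics can be illustrated with a toy sketch (hypothetical classes, not the actual MT implementation). Arcs carry triangles, an LOD is the union of the triangles on the arcs of a cut, and refining a region moves a node above the cut:

```python
class Arc:
    def __init__(self, triangles):
        self.triangles = triangles

class Node:
    def __init__(self, parent_arcs, child_arcs):
        self.parent_arcs = parent_arcs  # hold the coarse replacement triangles
        self.child_arcs = child_arcs    # hold the finer triangles this op removes

def extract_lod(cut):
    """The LOD induced by a cut is the union of triangles on its arcs."""
    return [t for arc in cut for t in arc.triangles]

def raise_node(cut, node):
    """Refine by moving `node` above the cut: its parent arcs (coarse
    triangles) leave the cut and its child arcs (fine triangles) join it.
    All parent arcs must already lie on the cut; otherwise the node's
    ancestors must be raised first."""
    if not all(a in cut for a in node.parent_arcs):
        raise ValueError("dependencies not satisfied; raise ancestors first")
    cut.difference_update(node.parent_arcs)
    cut.update(node.child_arcs)
```

Raising every node yields the cut of the sink's parent arcs, i.e. the original model.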
Figure 3.1 The Multi-Triangulation
The multi-triangulation consists of a set of nodes (rounded squares) connected by arcs (arrows). The cut drawn generates the mesh shown in the upper right. Courtesy of Leila De Floriani, Università di Genova.
3.2 Preprocess
We create the MT representation of a model from the sink up through a series of edge
collapse simplification operations, which we prioritize to minimize texture deviation (Figures
1.3 & 2.3). Given the MT representation of a model, our system performs a preprocessing
step to gather data needed for runtime simplification. We first calculate a correspondence
between nodes in the MT and triangles in the original model using the texture
parameterization. This process is similar to the mapping used to calculate a node’s texture
deviation (Figure 2.3).
We gather information on a node’s surface region from the triangles of the original
model so that runtime simplification decisions will be made with reference to the original
model. For spatial frequency calculations, we compute a bounding sphere of the surface
patch geometry and we compute the texture deviation associated with a node. If the model
has a normal map we calculate a normal cone for use in lighting calculations. We also build
a normal cone using triangle normals for silhouette detection. In general, the normal cone for
silhouette detection has a slightly smaller angular extent than the normal cone for lighting.
For runtime contrast evaluations, we calculate the extreme luminance values Ymin and Ymax
corresponding to the texture covering the node.
In order to efficiently perform this mapping from nodes to the original model we utilize
the image pyramid paradigm. An image pyramid represents the full texture parameter space
at its lowest level and increasingly coarse representations towards the pyramid’s apex. This
hierarchical representation allows for faster information retrieval and speeds up the
preprocessing step considerably. An MT representation of our 499,850 triangle armadillo
model with 102 textures takes about 3 minutes to complete the preprocessing step.
3.3 Budget-Based Simplification
Our system uses budget-based simplification to minimize perceptual error in an LOD
while staying within a user-specified triangle budget. In the computationally intensive top-
down approach to budget-based simplification we re-initialize the MT’s cut to rest just below the root node after every frame [23]. While the triangle budget has not been exceeded, we
iteratively raise the node with the greatest perceptual error above the cut. If a node cannot
immediately be raised due to dependency problems, we recursively raise all of its ancestor
nodes to allow the cut to drop below it. This method generally performs poorly, as traversal from the root often involves unnecessary evaluation of many nodes near the MT’s root.
We have also implemented a dual-queue approach to MT simplification similar to the
ROAM terrain visualization algorithm [10]. This approach begins each frame with the cut
from the previous frame and with the re-evaluation of perceptual error for all nodes adjacent
to the cut. One priority queue stores nodes directly below the cut that are candidates for
being raised (the lift queue) and another contains nodes just above the cut that may be
lowered (the drop queue). The algorithm iteratively pulls nodes with the greatest perceptual
error off of the two queues until the same node is at the top of both queues. This event
indicates that a cut has been found that minimizes perceptual error for an LOD with the user-
specified number of triangles. To facilitate faster simplification we amortize the cost of
updating a node’s perceptual error across several frames [10, 17].
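The core of the greedy budget loop can be sketched as follows. This is a simplification that ignores node dependencies; each hypothetical entry pairs a node's perceptual error with the extra triangles its refinement would add:

```python
import heapq

def budget_simplify(nodes, budget):
    """Greedy top-down refinement sketch (hypothetical structures; node
    dependencies are ignored). `nodes` is a list of
    (perceptual_error, extra_triangles_if_refined) pairs; starting from the
    coarsest cut we refine the highest-error regions first, stopping when the
    next refinement would exceed the triangle budget."""
    heap = [(-err, i) for i, (err, _extra) in enumerate(nodes)]
    heapq.heapify(heap)
    used = 0
    refined = []
    while heap:
        _neg_err, i = heapq.heappop(heap)
        extra = nodes[i][1]
        if used + extra > budget:
            break
        used += extra
        refined.append(i)
    return refined, used
```

The dual-queue variant avoids rebuilding this priority structure every frame by keeping last frame's cut and maintaining lift and drop queues around it.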
3.4 Simplification of Textured Models
Simplification of textured models is relatively simple once the data is in place to
calculate every node’s perceptual error. We calculate the contrast of visible nodes based on
the stored luminance range Ymin to Ymax (see Sections 2.2 & 3.2). We can color the model’s
nodes with the texture-induced contrast for visualization purposes (Figure 3.2). Notice that
in the contrast visualization, patches with both blue and orange sections have higher contrast,
which we indicate by rendering them more brightly. However, if a node is on the boundary
silhouette we ignore texture effects and conservatively use maximal contrast in the perceptual
error calculation (not shown in Figure 3.2). Spatial frequency calculations for textured
models use the stored texture deviation error projected into the screen space (Section 2.1).
This projection accounts for the translation of measurements in the model’s coordinate
system to the screen’s coordinate system. This step is necessary because the viewer
experiences the monitor’s screen as a window into the computer generated environment.
Figure 3.2 Contrast visualization of a texture mapped torus
Left: a wavy torus rendered with a spotted texture. Right: a contrast visualization of the model demonstrating high contrast at the border between blue and orange regions. Here we color the lowest-level nodes of the MT according to their stored contrast value.
3.5 Simplification of Dynamically Illuminated Models
Our system can also calculate the contrast of nodes that are dynamically illuminated by
white directional lights. We support the standard per-vertex lighting model by storing a
normal with every vertex in the MT. Vertex normals are generated by bilinear interpolation
of the model’s normal map to provide smooth illumination across the model. We calculate
spatial frequency in models with vertex normals by using the node bounding sphere (Section
2.1). We also support lighting with normal maps; we render the model using a texture
combiner program that we wrote for a modern graphics card, the nVidia GeForce3.
Illumination with normal mapping generally allows far more drastic simplification than
standard vertex based lighting (Figure 1.5) [6]. As with texture mapping, the most
perceptible spatial frequency associated with a normal mapped node is related to the texture
deviation. Both of our lighting modes require a textured model; the texture can be set to a
single color if a uniformly colored model is desired.
We calculate the luminance of a textured surface that is lit by a white directional light
and has a viewer located infinitely far away. These requirements are in place to improve the
performance of our system; other lighting models could easily be accommodated with more
extensive computation. The luminance Y of a textured pixel in our formulation is the sum of
ambient, diffuse, and specular effects:
Y = Ka*T + Kd*T*(Nd•L) + (Ns•H)^n (3.1)
The ambient contribution of a light is the product of a global ambient coefficient Ka and the
texture luminance T. Diffuse reflection of light off of a surface occurs independent of the
viewing direction and is modeled as a product of the texture luminance, a diffuse coefficient
Kd for intensity control, and the dot product of the normalized light direction vector L and the
normalized surface normal Nd. We calculate specular reflection using the halfway vector
formulation designed for infinite view position and directional lights. H can be computed
once per frame as H = (L+V) / |L+V| where V is the normalized viewing vector. White
specular reflection is simply the dot product of Ns and H raised to a specular exponent n,
which models the reflection falloff. Note that for the luminance calculation at a single
surface point Nd equals Ns; we only use different values for Nd and Ns when performing
calculations on contrast associated with a node.
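Equation 3.1 can be evaluated directly. In the sketch below the coefficients Ka and Kd and the exponent n are hypothetical defaults, and the dot products are clamped to zero as is conventional in lighting code, though equation 3.1 omits the clamp:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def luminance(T, N, L, V, Ka=0.1, Kd=0.8, n=32):
    """Equation 3.1: luminance of a textured pixel under a white directional
    light as ambient + diffuse + white specular. T is the texture luminance;
    N, L, V are unit normal, light, and view vectors."""
    H = normalize(tuple(l + v for l, v in zip(L, V)))  # halfway vector, once per frame
    diffuse = max(0.0, dot(N, L))
    specular = max(0.0, dot(N, H)) ** n
    return Ka * T + Kd * T * diffuse + specular
```

With N, L, and V all aligned, the three terms reach their maxima simultaneously and Y = Ka*T + Kd*T + 1.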
We find the contrast of a simplification operation (a node in the MT) by examining the
luminance range, Ymin to Ymax, across its surface patch. Since we generate vertex
normals from the normal map, we can use the same contrast calculation for both the vertex
normal and normal map approaches to lighting. To evaluate a node’s contrast we use a
shading normal cone, which bounds all normals associated with the node. We find the range
of diffuse lighting’s effect by calculating the vector that is closest to the light direction vector
L and the vector that is furthest from L, but both must still lie within the normal cone (see
Figure 3.3 Left). Lighting a patch with these normals will generate the brightest and darkest
diffuse lighting possible for the whole node. The system performs a similar calculation to
determine the effect of specular lighting, but it uses the halfway vector instead of the light’s
direction vector (Figure 3.3 Right). Ymin and Ymax are found using equation 3.1 and contrast
is calculated with equation 2.1. As with textured models, we have developed a visualization
of contrast due to dynamic lighting (Figure 3.4).
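The diffuse part of this calculation can be sketched as follows, with hypothetical names; equation 2.1 is assumed here to be the Michelson contrast, the coefficients are illustrative defaults, and the analogous specular step is omitted for brevity:

```python
import math

def cone_dot_range(axis, half_angle, L):
    """Extremes of N.L over all unit normals N inside a shading cone with the
    given axis and half-angle: the in-cone vector closest to L yields the
    brightest diffuse shading, the furthest yields the darkest."""
    cos_t = max(-1.0, min(1.0, sum(a * l for a, l in zip(axis, L))))
    theta = math.acos(cos_t)
    d_max = math.cos(max(0.0, theta - half_angle))
    d_min = math.cos(min(math.pi, theta + half_angle))
    return d_min, d_max

def node_contrast(axis, half_angle, L, Ka=0.1, Kd=0.8, T=1.0):
    """Michelson contrast of a node's diffuse luminance range Ymin to Ymax."""
    d_min, d_max = cone_dot_range(axis, half_angle, L)
    y_min = Ka * T + Kd * T * max(0.0, d_min)
    y_max = Ka * T + Kd * T * max(0.0, d_max)
    return (y_max - y_min) / (y_max + y_min)
```

A degenerate cone (half-angle zero) shades uniformly and has zero contrast; widening the cone widens the luminance range and raises the contrast, which is why flat, uniformly lit regions simplify most aggressively.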
Figure 3.3 Calculation of normal vectors with shading cones
Left: the shading cone with axis N and a light vector L are used to find the impact of diffuse lighting on contrast. The light blue vector corresponds to the vector bounded by the normal cone that is closest to L and the yellow vector indicates the vector in the normal cone that is furthest from L. Right: the influence of specular lighting is calculated by comparing the shading cone with the halfway vector H, which is halfway between the lighting direction vector L and the viewing vector V. Courtesy of Mike Kelley, University of Virginia.
Figure 3.4 Contrast visualization of an illuminated torus
Left: an illuminated torus with per-vertex surface normals and a uniform blue texture map. Right: a contrast visualization of the model. We color the lowest-level nodes of the MT according to the contrast induced by dynamic lighting. Notice how shadowed regions and the center of wide specular highlights are low contrast areas; we can simplify aggressively in those regions.
Chapter 4: Results
Our perceptual metric automatically weighs the importance of silhouette preservation,
texture sensitive simplification, and illumination sensitive simplification and allocates more
triangles to important regions (see Figure 4.1). In addition to visual inspection, we
quantitatively examine our system’s performance using two metrics that compare images of
the original model with images of a generated LOD. Our simple image comparison method
is root-mean-squared error (RMS), which is the square root of the mean of the squared pixel values. This operation is always performed on a difference image, the pixel-wise difference
of one image from another (Figure 4.2).
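For reference, the RMS computation on a pair of equally sized grayscale images (represented as flat lists of pixel values) is simply:

```python
import math

def rms_error(img_a, img_b):
    """Root-mean-squared error between two equally sized images, computed
    from their pixel-wise difference image."""
    diff = [a - b for a, b in zip(img_a, img_b)]
    return math.sqrt(sum(d * d for d in diff) / len(diff))
```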
Figure 4.1 Simplification of an illuminated torus drawn in wireframe
This torus has been simplified from 57760 triangles to 24664 triangles with regard to effects of per-vertex illumination and the spotted texture map. Notice simplification in the shadow regions and within the specular highlights.
Figure 4.2 Difference image used for RMS calculation
Left: 499,850 triangle armadillo model with per-vertex illumination. Center: inversion of the pixel-wise difference between the two representations. Right: the armadillo model simplified to 12,496 triangles.
We have also tested our system using the publicly available application DCTune, a
program designed to compress JPEG images using a perceptual metric [31]. One feature of
DCTune is the ability to compare the perceptual difference between two images, measured in
just-noticeable differences (JND). JNDs provide an indication of perceptual error in
multiples of the signal threshold. Although DCTune is not as sophisticated as some
commercial applications for computing perceptual error, it does account for some luminance
and contrast masking effects [32].
We provide no quantitative comparison of system responsiveness or frame rate, but as
expected, our system is slower on illuminated models where we have to perform a
calculation on the lighting induced contrast. We have increased rendering speed by using
vertex arrays; vertex arrays allow transmission of model geometry to the graphics hardware
with less function overhead. We find that amortized updates of the perceptual error
associated with a node dramatically improve system speed.
4.1 Comparison to the Luebke-Hallen Approach
Luebke and Hallen developed a view-dependent system for guaranteed imperceptible
simplification of models with per-vertex colors. A comparison to their system is difficult
because they do not support dynamic lighting. However, we have implemented the Luebke-
Hallen approach with our MT framework and extended it to support textured models. The
major difference between these two approaches is that the Luebke-Hallen algorithm
calculates the perceptual error of a simplification operation using the node bounding sphere,
while our approach uses the node’s texture deviation. With all other parameters controlled,
texture deviation provides a better estimation of spatial frequency in generating an LOD
(Figure 4.3).
Figure 4.3 Comparison of our approach to Luebke & Hallen's algorithm
These curves plot perceptibility (RMS) against the number of triangles, measured with an unlit textured face model. The results show that simplifying according to texture deviation size (our approach) produces better LODs than simplification with regard to node size. (JND data not shown.)
4.2 Comparison to Appearance Preserving Simplification
We also compare our approach to appearance-preserving simplification (APS), the first system to use the texture deviation representation in LOD selection. We have re-implemented APS within a view-dependent MT framework to facilitate comparison [6]. This
new APS implementation only differs from our system in the error measurement – our
system uses a perceptual model and APS uses screen space projection of a node’s texture
deviation. APS was designed to use normal maps for lighting, but we include comparative
data for both a normal mapped and vertex normal based armadillo (Figures 4.4 & 4.5). Since
APS does not natively support silhouette detection, we also implement an APS variant that
increases the error associated with simplification by a factor of 10 if a node is found to rest
on the boundary contour. Note that there is nothing magical about this factor of 10; one
advantage of our approach is that the relative importance of silhouettes is automatically
determined by our perceptual metric. In these data we also compare to a view-independent
approach, which simply returns texture deviation as the error associated with a simplification.
Figure 4.4 Simplification comparison of a normal mapped red armadillo
These curves plot perceptibility (RMS) against the number of triangles for the view-independent, APS, perceptual metric, and APS-with-silhouettes approaches. We find that our perceptual metric performs only slightly better than other approaches when the armadillo is normal mapped. The cyan line represents the APS approach with 10x priority given to silhouette regions. (JND data not shown.)
Figure 4.5 Simplification comparison of a vertex illuminated red armadillo
These curves plot perceptibility (RMS) against the number of triangles for the view-independent, APS, perceptual metric, and APS-with-silhouettes approaches. The perceptually based approach produces visibly better LODs on a complex model illuminated with per-vertex surface normals. (JND data not shown.)
We provide several examples comparing our system’s performance to APS and its
variants on different models and with our two lighting schemes (Figures 4.4, 4.5, 4.6, & 4.7).
In practice, our system performs best when it encounters specular highlights in models using
vertex normals for lighting. Notice that we only perform slightly better on the normal
mapped armadillo, a very complex model in which normal maps hide simplification artifacts
(Figure 4.4). We also find that both the RMS and JND error associated with a particular
triangle budget vary greatly with the viewing position. If this weren’t the case, we would
have included more data and some statistical measures of result significance.
Figure 4.6 Simplification comparison of a lit torus with a puzzle texture
These curves plot perceptibility (JND) against the number of triangles for the view-independent, APS, perceptual metric, and APS-with-silhouettes approaches. The perceptual approach produces better LODs on this simple model – a torus with per-vertex lighting and a puzzle texture. (RMS data not shown.)
Figure 4.7 Models used for simplification comparison
Top Left: a Cyberware laser face scan with 454,096 triangles. Top Right: a per-vertex illuminated red armadillo model with 499,850 triangles. Lower Left: a normal map illuminated red armadillo model with 499,850 triangles. Lower Right: a per-vertex illuminated torus with puzzle texture and 57,660 triangles.
Chapter 5: Conclusions
5.1 Summary
I have presented a view-dependent simplification system that evaluates the
perceptibility associated with local polygonal simplifications. Our system aims to minimize
perceptual differences between an original model and an LOD whose size is constrained by a
triangle budget. Perceptual error is evaluated for a region undergoing simplification with
regard to texturing, illumination, and silhouette status of the region. One contribution of this
work is to account for these competing influences within a view-dependent framework.
One drawback of our approach is that our perceptual error calculation is more
computationally intensive than the average view-dependent simplification heuristic. For a
given triangle budget the frame rate of our system is worse than for our MT based
implementations of view-independent simplification and APS. Also, as with other dynamic
LOD systems, there are costs associated with maintenance of the underlying data structure
(an MT in our system). Within a set amount of time, a static LOD system can render a larger
LOD than our system, which must extract an LOD from the MT before rendering it. It is
arguable whether more intelligent use of triangles in our system makes up for this difference,
since comparison is difficult. However, we believe that as the model size grows, the
perceptually guided approach will make up for computational overhead with well-chosen
LODs.
Previous attempts to guide dynamic simplification as it is affected by illumination,
texture, and silhouette status have been ad hoc [20, 23]. The old approach to LOD
generation involved user-tunable parameters for control of the relative importance of vertex
attributes like color and texture. Unfortunately, these methods don’t scale well as every
additional feature necessitates another control parameter and optimal parameter selection
may not transfer between models. In our system the relative importance of factors
influencing simplification is quantified automatically by a unified perceptual model of the
visual system.
5.2 Future Work
Although our system already demonstrates texture and illumination sensitive
simplification, many possibilities for refining and extending this work still exist. We could
easily supplement our system with a model of visual system eccentricity. Eccentricity
models the falloff of visual acuity that occurs with increased deviation from the viewing
direction [13, 24]. However, this extension has limited applicability as it necessitates a head
or eye tracker. We may also improve results by accounting for chromatic contrast sensitivity
or visual masking effects. Our system is especially well positioned to accept a more
sophisticated perceptual model that can quickly account for suprathreshold contrast
sensitivity. Our current measure of perceptual error as imperceptibility distance may not be
valid at the suprathreshold level, though it seems to be a reasonable approximation in
practice.
Current graphics hardware performs MIP-map filtering of textures whereby several
copies of a texture are created at decreasing resolutions in a preprocess. Due to blending, the
contrast of these low-resolution copies is often lower than the contrast of the original texture.
At runtime a low-resolution copy of the texture is chosen to cover an area of the scene that is
far from the user. By accounting for MIP-map filtering, a perceptually driven system should
be able to perform more aggressive simplification in these lower contrast regions and so
make a more informed determination of perceptual error.
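The contrast reduction down a MIP chain is easy to see in a sketch (hypothetical helper names; Michelson contrast is assumed as the contrast measure):

```python
def downsample(level, size):
    """One box-filter MIP step: average each 2x2 block of a square grayscale
    image stored as a flat row-major list of side `size`."""
    half = size // 2
    out = []
    for y in range(half):
        for x in range(half):
            i = 2 * y * size + 2 * x
            out.append((level[i] + level[i + 1] +
                        level[i + size] + level[i + size + 1]) / 4.0)
    return out

def michelson(level):
    """Michelson contrast of a texture level from its luminance extremes."""
    lo, hi = min(level), max(level)
    return (hi - lo) / (hi + lo) if hi + lo else 0.0
```

Averaging can only shrink the range between the minimum and maximum luminance, so each coarser level has contrast no higher than the one below it, which is the headroom a MIP-aware perceptual metric could exploit.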
One way to improve applicability of our system is to handle more rendering modes and
model types. Our algorithm currently requires parameterized models to provide a means of
mapping perceptual effects in the LOD to the original model. It would be useful to explore
automatic texture generation algorithms to see whether they can produce the sort of clean
model parameterizations that we require. In addition to including support for
unparameterized models, we could also increase the number of supported rendering modes.
To save on computation, our system currently requires directional white lights for both
normal mapped and vertex normal approaches to lighting. However, we could easily extend
the system to support more lighting and shading styles, if necessary.
We would like to perform further validation of our results using other image
comparison programs, user studies, and statistical analysis. We recently obtained
Lindstrom’s perceptual difference program ltdiff. Like DCTune, ltdiff will quantify the
perceptual error between a test and reference image pair and it can produce perceptual
difference images. Although we received ltdiff too late in the study to include results here,
we do provide an example difference image indicating simplification errors in the armadillo
(Figure 5.1). We would also like to perform user studies to measure the success of our
approach. Since our system is intended for human use, we shouldn’t rely entirely on
computational models of the visual system for validation. Even the best computational
model of perception is unlikely to model all aspects of human vision. We would also like to
perform rigorous statistical analysis of our results. However, for a given triangle budget, the
perceptual error associated with simplification varies greatly across the viewing space,
complicating any statistical comparison.
Figure 5.1 Perceptual image difference comparing two armadillo representations
Left: original armadillo model with per-vertex illumination. Center: inversion of the ltdiff perceptual image comparison. Dark spots indicate perceptible errors. Right: armadillo model simplified to 50,000 triangles.
Finally, we return to the measure of LOD system performance which has been largely
ignored in this paper: speed. We have improved system performance by building look-up
tables that avoid costly function evaluations and by amortizing the perceptual error
calculation. However, for our system to be truly useful to the end user, it needs to always
render the high quality LODs at interactive rates. We provide the triangle budget as a means
for the user to control the quality/speed tradeoff. As an alternative, we could predict the next
frame’s creation time based on previous frames and automatically choose a triangle budget to
meet some interactive frame rate. This would remove the final barrier preventing completely
automatic LOD selection in our system.
In the end, it is also worth realizing that the future of graphics may not lie in fast
triangle rasterization. Interactive ray tracing is becoming an increasingly practical method
for rendering large geometric data sets [25]. A perceptually based interactive ray tracer
might generate an LOD by shooting rays according to a model of the visual system [1].
Also, non-polygonal model representations such as subdivision surfaces, which model
piecewise smooth surfaces using relatively few parameters, may prove useful for complex
interactive data visualization [3]. The real future of perceptually based model visualization
may be based upon a fundamentally different approach to digital image generation.
Bibliography
1. Bolin, M.R. and Meyer, G.W., A Frequency Based Ray Tracer. in SIGGRAPH 95, (1995), 409-418.
2. Campbell, F.W. and Robson, J.G. Application of Fourier Analysis to the Visibility of Gratings. Journal of Physiology, 197. 551-566.
3. Catmull, E.E. A Subdivision Algorithm for Computer Display of Curved Surfaces. Department of Computer Science, University of Utah, Salt Lake City, Utah, 1974.
4. Cignoni, P., Rocchini, C. and Scopigno, R. Metro: Measuring Error on Simplified Surfaces. Computer Graphics Forum, 17 (2). 167-174.
5. Clark, J.H. Hierarchical Geometric Models for Visible Surface Algorithms. Communications of the ACM, 19 (10). 547-554.
6. Cohen, J., Olano, M. and Manocha, D., Appearance-Preserving Simplification. in SIGGRAPH 98, (Orlando, FL, 1998), Addison Wesley, 115-122.
7. Daly, S. The Visible Differences Predictor: An Algorithm for the Assessment of Image Fidelity. in Watson, A. ed. Digital Images and Human Vision, MIT Press, Cambridge, MA, 1993, 179-206.
8. DeFloriani, L., Magillo, P. and Puppo, E., Building and Traversing a Surface at Variable Resolution. in IEEE Visualization '97, (1997), 103-110.
9. DeFloriani, L., Magillo, P. and Puppo, E., Efficient Implementation of Multi-Triangulations. in IEEE Visualization '98, (1998), 43-50.
10. Duchaineau, M., Wolinsky, M., Sigeti, D.E., Miller, M.C., Aldrich, C. and Mineev-Weinstein, M.B., ROAMing Terrain: Real-time Optimally Adapting Meshes. in Visualization '97, (1997), 81-88.
11. Ferwerda, J.A., Pattanaik, S., Shirley, P. and Greenberg, D.P., A Model of Visual Masking for Computer Graphics. in SIGGRAPH 97, (1997), 143-152.
12. Fournier, A., Normal Distribution Functions and Multiple Surfaces. in Graphics Interface '92 Workshop on Local Illumination, (1992), 45-52.
13. Funkhouser, T.A. and Sequin, C.H., Adaptive Display Algorithm for Interactive Frame Rates During Visualization of Complex Virtual Environments. in SIGGRAPH 93, (1993), 247-254.
14. Garland, M. and Heckbert, P., Simplifying Surfaces with Color and Texture using Quadric Error Metrics. in IEEE Visualization '98, (1998), 263-270.
15. Garland, M. and Heckbert, P., Surface Simplification Using Quadric Error Metrics. in SIGGRAPH 97, (1997), 209-216.
16. Hoppe, H., Progressive Meshes. in SIGGRAPH 96, (1996), ACM SIGGRAPH, 99-108.
17. Hoppe, H., View-Dependent Refinement of Progressive Meshes. in SIGGRAPH 97, (1997), 189-198.
18. Hoppe, H.H., New Quadric Metric for Simplifying Meshes with Appearance Attributes. in IEEE Visualization '99, (1999), 59-66.
19. Johnson, D. and Cohen, E. Spatialized Normal Cone Hierarchies. 2001 Symposium on Interactive 3D Graphics. 129-134.
20. Klein, R. and Schilling, A. Efficient Rendering of Multiresolution Meshes with Guaranteed Image Quality. The Visual Computer, 15 (9). 443-452.
21. Lindstrom, P. Model Simplification using Image and Geometry-Based Metrics. Computer Science, Georgia Institute of Technology, 2000.
22. Lindstrom, P. and Turk, G. Image-driven Simplification. ACM Transactions on Graphics, 19 (3). 204-241.
23. Luebke, D. and Erikson, C., View-Dependent Simplification of Arbitrary Polygonal Environments. in SIGGRAPH 97, (1997), 199-208.
24. Luebke, D. and Hallen, B., Perceptually Driven Simplification for Interactive Rendering. in 2001 Eurographics Rendering Workshop, (2001).
25. Parker, S., Parker, M., Livnat, Y., Sloan, P.-P., Hansen, C. and Shirley, P. Interactive Ray Tracing for Volume Visualization. IEEE Transactions on Visualization and Computer Graphics, 5 (3).
26. Poynton, C., The Rehabilitation of Gamma. in SPIE/IS&T, (San Jose, California, 1998), 232-249.
27. Reddy, M. Perceptually Modulated Level of Detail for Virtual Environments, University of Edinburgh, 1997.
28. Rossignac, J. and Borrel, P. Multi-Resolution 3D Approximations for Rendering Complex Scenes. in Modeling in Computer Graphics: Methods and Applications, Springer-Verlag, 1993, 455-465.
29. Rushmeier, H., Rogowitz, B. and Piatko, C., Perceptual Issues in Substituting Texture for Geometry. in SPIE 2000, (2000), 372-383.
30. Rushmeier, H., Ward, G., Piatko, C., Sanders, P. and Rust, B., Comparing Real and Synthetic Images: Some Ideas About Metrics. in 6th Eurographics Workshop on Rendering (1995), (1995), 82-91.
31. Watson, A. and Ahumada, A., DCTune. http://vision.arc.nasa.gov/dctune/
32. Watson, A.B., DCTune: A Technique for Visual Optimization of DCT Quantization Matrices for Individual Images. in Society for Information Display Digest of Technical Papers, (1993), 946-949.