Modeling and Rendering of Points with Local Geometry

Aravind Kalaiah and Amitabh Varshney

Abstract—We present a novel rendering primitive that combines the modeling brevity of points with the rasterization efficiency of polygons. The surface is represented by a sampled collection of Differential Points (DPs), each with embedded curvature information that captures the local differential geometry in the vicinity of that point. This is a more general point representation that, for the cost of a few additional bytes, packs much more information per point than traditional point-based models. This information is used to efficiently render the surface as a collection of local geometries. To use hardware acceleration, the DPs are quantized into 256 different types and each sampled point is approximated by the closest quantized DP and rendered as a normal-mapped rectangle. The advantages of this representation are: 1) the surface can be represented more sparsely compared to other point primitives, 2) it achieves robust hardware-accelerated per-pixel shading, even with no connectivity information, and 3) it offers a novel point-based simplification technique that factors in the complexity of the local geometry. The number of primitives being equal, DPs produce a much better quality of rendering than a pure splat-based approach. Visual appearances being similar, DPs are about two times faster and require about 75 percent less disk space in comparison to splatting primitives.

Index Terms—Differential geometry, simplification, point sample rendering, per-pixel shading.


1 INTRODUCTION

POINT-BASED rendering schemes [1], [2], [3], [4], [5], [6] have evolved as a viable alternative to triangle-based representations. They promise benefits over polygon-based rendering in many areas: 1) modeling and rendering complex environments, 2) a seamless hierarchical structure to balance frame rates with visual quality, and 3) efficient streaming over the network for remote rendering [7].

Current point primitives store only limited information about their immediate locality, such as the normal, the sphere of influence [5], and the disk of influence on the tangent plane [4]. These primitives are then rasterized with flat shading and, in some cases, followed up with screen-space filtering [1], [4]. Since the primitives are flat shaded, such representations require very high sampling to obtain good rendering quality. In other words, the rendering algorithm dictates the sampling frequency in the modeling stage. This is clearly undesirable as it may prescribe very high sampling, even in areas of low spatial frequency, causing two significant drawbacks: 1) slower rendering due to an increase in rendering computation and the related CPU-memory bus activity and 2) large disk and memory utilization.

In this work, we propose an approach of storing local differential geometric information with every point. This information gives a good approximation of the surface distribution in the vicinity of each sampled point, which is then used for rendering the point and its approximated vicinity. The total surface area that a point is allowed to approximate is bounded by the characteristics of the surface at that point. If the point is in a flat or low-curvature region of the surface, then the differential information at that point can well approximate a large area of the surface around it. Alternatively, if the point is located in a high-frequency area of the surface, then we limit the approximation to a smaller vicinity. This scheme offers the potential of controlling the local sampling density based on the local surface curvature characteristics. We present a simplification algorithm that takes the initial supersampled point-based representation and returns a sparse subset whose local sampling density reflects the local surface variation. Our rendering algorithm uses per-pixel shading when a DP is rendered with its neighborhood approximation. A sample rendering of our algorithm is shown in Fig. 1.

Our approach has many benefits to offer:

1. Rendering: The surface can be rendered with fewer (point) primitives by pushing more computation into each primitive. This reduces the CPU-memory bus bandwidth and the overall amount of computation, resulting in a significant speedup. As the processor-memory speed gap increases, we expect this method to become even more attractive.

2. Storage: The reduction in the number of primitives more than compensates for the extra bytes of information stored with each point primitive. This leads to an overall reduction in the storage requirements. This reduction also enables faster streaming of information over the network.

3. Generality: The information stored with our point primitives is sufficient to derive (directly or indirectly) the requisite information for prior point primitives. Our work is primarily focused on the efficiency of per-point rendering computation. It can potentially be used in conjunction with larger point-based organization structures: hierarchical (bounding balls hierarchy [5], Layered Depth Cube (LDC) tree [4], Layered Depth Image (LDI) tree [8]) or otherwise (LDC [3], LDI [6]).

4. Simplification: Recently proposed point representations have a width (or a region of influence) associated with each point, which can differ from one point to another. This can lead to a significant overlap in the surface representation by the points. Our point primitive is amenable to a simplification scheme that significantly reduces this redundancy in surface representation.

In the following sections, we first discuss related work and then outline the terminology from the differential geometry literature that will be used to describe our approach. This is followed by a discussion of our sampling and simplification schemes, which output sparse point representations. We then describe our rendering algorithm and compare it with some splatting schemes. We conclude the paper with a discussion of this approach and its possible extensions.

2 PREVIOUS WORK

2.1 Differential Geometry and Curvature Estimation

Our approach of rendering points with their local geometry requires knowledge of the surface variation at any given point. Classical differential geometry gives us a mathematical model for understanding the surface variation at a point. There is a collection of excellent literature on this subject and, in this paper, we follow the terminology of do Carmo [9].

Curvature computation on parametric surfaces has a robust mathematical model. Various techniques have been designed to estimate curvature from discrete sampled representations [10], [11]. Taubin [12] estimates curvature at a mesh vertex by using the eigenvalues of an approximation matrix constructed using the incident edges. Desbrun et al. [13] define discrete operators (normal, curvature, etc.) of differential geometry using Voronoi cells and finite-element/finite-volume methods. Their discrete operators respect the intrinsic properties of the continuous versions and can be used at the vertices of a triangle mesh.

2.2 Acquisition and Processing

Point samples of real-world environments are acquired using several acquisition techniques [14], [15], [16], [17], [18], with the choice depending on the environment being sampled. This information is processed by surface reconstruction algorithms [19], [20] and subsequently denoised [21]. The sampled points can also be processed directly using spectral processing techniques [22]. Alternatively, the coarse triangle mesh can be fitted with parametric surfaces [23], [24] for denoising and to aid other higher-level applications.

Point samples from synthetic environments are popularly acquired by shooting sampling rays from base image plane(s) [1], [6]. In this work, we support point sampling from two kinds of surface representations: NURBS and polygonal meshes. If the input is a NURBS surface, then we sample uniformly in the parametric domain of the surface. If the input is a polygonal mesh, then we use its vertices as the sample points.

2.3 Simplification and Organization

The initial set of point samples may have significant redundancy in representing the surface due to supersampling. The problem of pruning this set has not been given enough attention so far, but various hierarchical organization schemes have been used that develop lower-frequency versions of the original set of point samples [4], [5], [8]. We use a simplification process to prune an initial set of points to obtain a sparse point representation. Turk [25] uses a point-placement approach with the point density being controlled by the local curvature properties of the surface. Witkin and Heckbert [26] use the physical properties of a particle system to place points on an implicit surface.

Simplification methods have been studied extensively for triangle meshes. They can be broadly classified into local and global approaches. Local approaches work by pruning vertices, edges, or triangles using various metrics. Global approaches work by replacing subsets of the mesh with simplified versions or by using the morphological operators of erosion and dilation. Cignoni et al. [27] and Cohen et al. [28] document various simplification techniques. More recently, Lindstrom [29] uses error quadrics in a vertex clustering scheme to simplify complex datasets that are too large to fit into main memory.

Image-assisted organization of points [1], [3], [6] is efficient at three-dimensional transformations as it uses the implicit relationship among pixels to achieve fast incremental computations. It is also attractive because of its efficiency at representing complex real-world environments. The multiresolution organizations [4], [5], [8] are designed with rendering efficiency in mind. They use the hierarchical structure to achieve block culling, to control depth traversals to meet image-quality or frame-rate constraints, and for efficient streaming of large datasets across the network [7].

2.4 Rendering

Darsa et al. [30], Mark et al. [31], and Pulli et al. [32] use a triangle mesh overlaid on the image sampling plane for rendering. It can be followed by a screen-space compositing process. However, such systems can be expensive in computation and storage if high resolutions are desired. Levoy and Whitted [2] introduced points as a rendering primitive. Points have since been used by Shade et al. [6] and Grossman and Dally [1] for synthetic environments. However, raw point primitives suffer from aliasing problems and holes. Lischinski and Rappoport [3] raytrace a point dataset. Oliveira et al. [33] use image-space transformations to render point samples. Rusinkiewicz and Levoy [5] use splatting of rectangle primitives for rendering. Chen and Nguyen [34] build a bounding ball hierarchy on top of a triangle mesh to render small triangles as points. Alexa et al. [35] derive a polynomial surface at each point, which is then rendered by generating additional display points in a view-dependent fashion. Pfister et al. [4] follow up the splatting with a screen-space filtering process to cover up holes and to deal with aliasing problems. Zwicker et al. [36] derive a screen-space formulation of the EWA filter to render high-detail textured point samples with support for transparency. Mueller et al. [37] achieve per-pixel shading for rectilinear grids using gradient splats.

Fig. 1. A rendering of the human head model using differential points.

The main focus of this paper is a novel rendering primitive that uses surface curvature properties to efficiently render an approximation of its local geometry. The main advantage of this approach is its sparse representation of the surface, which leads to a significant reduction in the computational cost and CPU-memory bus traffic.

3 DIFFERENTIAL GEOMETRY FOR SURFACE REPRESENTATION

Classical differential geometry is a study of the local properties of curves and surfaces [9]. It uses differential calculus of first and higher orders for this study. In this work, we use the regular surface model, which captures surface attributes such as continuity, smoothness, and degree of local surface variation. To quantify the surface variation around a point, we use the directional curvature metric. This is a second-order description of how fast the surface varies along any tangential direction at a point on the surface. In this section, we present a brief working introduction to regular surfaces with an outline of the terminology and the equations that will be used to explain our work in subsequent sections.

A regular surface is a differentiable surface that is non-self-intersecting and has a unique tangent plane at each point on the surface. It is defined as a set S of points in ℝ³ such that every element p of this set has a neighborhood V ⊂ ℝ³ and a parameterizing map x : U → V ∩ S (where U is an open subset of ℝ²) satisfying the following conditions [9]:

1. x is differentiable.
2. x is a homeomorphism.
3. Each map x : U → S is a regular patch.

Consider the unit normal, N_p, at any point p on the surface. As one moves away from p along the surface, the normal may change. This variation can be expressed as a function of the characteristics of p and the tangential direction of motion, t, by a linear map, dN_p : ℝ³ → ℝ³, called the differential of the normal at p. dN_p(t) gives the first-order normal variation at the point p along the direction t, and this vector is tangential to the surface at p.

The Jacobian matrix of dN_p has two eigenvectors, u_p and v_p, together called the principal directions. The direction of maximum curvature, u_p, is the tangential direction along which the normal shows maximal variation at p. The eigenvalue associated with u_p, written −κ_{u_p}, is a measure of the normal variation along u_p. The term κ_{u_p} is the curvature at p along the direction u_p and is called the maximum normal curvature. Similarly, v_p is called the direction of minimum curvature and has an associated eigenvalue −κ_{v_p}. These attributes are related as follows:

$$|\kappa_{u_p}| \geq |\kappa_{v_p}|, \quad \langle u_p, v_p \rangle = 0, \quad u_p \times v_p = N_p, \quad dN_p(u_p) = -\kappa_{u_p}\, u_p, \quad dN_p(v_p) = -\kappa_{v_p}\, v_p,$$

where ⟨·,·⟩ is the vector dot product and × is the vector cross product operator. These relationships are illustrated in Fig. 2. The normal variation (gradient) along any unit tangent, t (= u·u_p + v·v_p), at p can be computed as:

$$dN_p(t) = dN_p(u\, u_p + v\, v_p) = u\, dN_p(u_p) + v\, dN_p(v_p) = -(\kappa_{u_p} u\, u_p + \kappa_{v_p} v\, v_p). \quad (1)$$

Similarly, it can be shown that the normal curvature along t, κ_p(t), is given by [9]:

$$\kappa_p(t) = \kappa_{u_p} u^2 + \kappa_{v_p} v^2. \quad (2)$$

The normal variation and the normal curvature terms give us second-order information about the behavior of the regular surface around the point p.
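To make these relations concrete, the following minimal sketch (our own illustration, not code from the paper) evaluates the normal gradient of (1) and the normal curvature of (2) for a tangent given by its components (u, v) in the principal frame; the names kappa_u, kappa_v, u_dir, and v_dir are hypothetical.

```python
import numpy as np

def normal_gradient(kappa_u, kappa_v, u_dir, v_dir, u, v):
    """Eq. (1): dN_p(t) = -(kappa_u * u * u_p + kappa_v * v * v_p)
    for a unit tangent t = u*u_p + v*v_p (u^2 + v^2 = 1)."""
    return -(kappa_u * u * u_dir + kappa_v * v * v_dir)

def normal_curvature(kappa_u, kappa_v, u, v):
    """Eq. (2): kappa_p(t) = kappa_u * u^2 + kappa_v * v^2."""
    return kappa_u * u**2 + kappa_v * v**2

# Example: a cylinder-like point with kappa_u = 1, kappa_v = 0.
u_dir = np.array([1.0, 0.0, 0.0])          # direction of maximum curvature
v_dir = np.array([0.0, 1.0, 0.0])          # direction of minimum curvature
t = np.array([np.cos(0.3), np.sin(0.3)])   # unit tangent in the (u_p, v_p) frame
print(normal_curvature(1.0, 0.0, *t))                 # curvature along t
print(normal_gradient(1.0, 0.0, u_dir, v_dir, *t))    # first-order normal change
```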

A salient feature of the regular surface model is that it can give a local description at a point without indexing any global properties. This gives complete independence to a point, which defines its own local geometry without any reliance, explicit or implicit, on the immediate sampled neighborhood or on any other global property. We use this feature to render the surface as a set of locally defined geometries of varying surface area. This feature is also exploited for a variety of other applications in computer graphics. Interrante [38] uses it for visualizing isosurfaces. Heckbert and Garland [39], among several others, use it for mesh simplification. Guskov et al. [21] use curvature information for surface denoising.

Most of today's virtual environments can be characterized as a collection of smooth surfaces meeting at points, curves, or areas of varying orders of discontinuity. This is in contrast to a regular surface which, by definition, is differentiable and thus has continuous partial derivatives of all orders. However, as explained in the next section, we need the surface to be only second-order continuous to extract the properties that will be used for rendering. Discontinuities of third or higher order are, in most instances, not easily discernible and, thus, we make no effort toward reproducing them visually here. However, discontinuities of the zeroth, first, and second order are visually noticeable. While a regular surface model cannot represent these points explicitly, we use an implicit procedure to represent them. If such discontinuities are a collection of points or curves, then we sample in the second-order continuous neighborhood of these points and the discontinuities will be maintained implicitly by the intersection of the influence of these sampled points. If, however, such points of discontinuity cover an area on the surface, or no such neighborhood exists, then they can be represented by point primitives that do not use second-order information [1], [4], [5], [6] or by a polygonal model.

Fig. 2. Local geometry of a differential point.

4 SAMPLING AND PROCESSING

Our fundamental rendering primitive is a point with differential geometry information. Virtual environments, however, are modeled using other representations such as NURBS or a triangle mesh. We derive our rendering information by sampling points on these models and extracting differential geometry information per sampled point. To ensure that the information stored in each point collectively represents the whole surface area, we oversample initially and follow it up with a simplification process guided by a user-specified error bound. This reduces the overlap of the areas of influence of the sampled points while still ensuring that the surface area is fully represented. This is a preprocess and the output is saved in a render-ready format that is the input to our rendering algorithm outlined in Section 5.

4.1 Differential Points

We call our rendering primitive a differential point (DP). A DP, p, is constructed from a sample point and has the following parameters: x_p (the position of the point), κ_{u_p} and κ_{v_p} (the principal curvatures), and u_p and v_p (the principal directions). From these, we derive the unit normal, n_p, and the tangent plane, Π_p, of p. This information represents a coordinate frame and second-order information at each DP. We extrapolate this information to define a surface, S_p, that will be used to approximate the local geometry of x_p. The surface S_p is defined implicitly as follows: Given any tangent t, the intersection of S_p with the normal plane of p that is coplanar with t is a semicircle with a radius of 1/|κ_p(t)|, with the center of the circle located at x_p + n_p/κ_p(t) and oriented such that it is cut in half by x_p (if κ_p(t) is 0, then the intersection is a line along t). These terms are illustrated in Fig. 3.

To aid our simplification and rendering algorithms, we define a coordinate system on Π_p and S_p. The tangent plane Π_p is parameterized by (u, v) coordinates in the vector space of (u_p, v_p). A point on Π_p is denoted by x_p(u, v) and t(u, v) denotes the tangent at p in the direction of x_p(u, v). We parameterize S_p with the same (u, v) coordinates as Π_p, with X_p(u, v) denoting a point on S_p. The points X_p(u, v) and x_p(u, v) are related by a homeomorphic mapping, P_p, with x_p(u, v) being the orthographic projection of X_p(u, v) onto Π_p along n_p. The arc length between X_p(0, 0) and X_p(u, v) is denoted by s(u, v) and is measured along the semicircle of S_p in the direction t(u, v). The (unnormalized) normal vector at X_p(u, v) is denoted by N_p(u, v). Note that x_p = X_p(0, 0) = x_p(0, 0) and n_p = N_p(0, 0). We use lowercase characters or symbols for terms related to Π_p and uppercase characters or symbols for terms related to S_p. A notable exception to this rule is the arc length s(u, v).

The surface S_p is used to describe the spatial distribution around x_p. We derive the normal distribution, N_p(u, v), around x_p using S_p and the curvature properties of the surface. To derive N_p(u, v), we express it in terms of its orthogonal components as follows:

$$N_p(u,v) = \sum_{e = u_p, v_p, n_p} \langle N_p(u,v), e \rangle\, e. \quad (3)$$

Consider the semicircle of S_p in the direction t(u, v). As one moves out of x_p along this curve, the normal change per unit arc length of the curve is given by the normal gradient dN_p(t(u, v)). So, for an arc length of s(u, v), the normal can be obtained by using a Taylor expansion on each individual component of (3) as follows:

$$N_p(u,v) = \sum_{e = u_p, v_p, n_p} \big( \langle N_p(0,0), e \rangle + s(u,v)\, \langle dN_p(t(u,v)), e \rangle + \text{Remainder Term} \big)\, e \approx N_p(0,0) + s(u,v)\, dN_p(t(u,v)). \quad (4)$$
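As an illustration of (4) (a sketch of ours, not the paper's code), the function below estimates the normal at a tangent-plane offset (u, v) of a DP: it forms the unit tangent t(u, v), evaluates the normal gradient dN_p(t), computes the arc length s(u, v) along the semicircle of S_p, and adds the first-order term to n_p. All names are hypothetical.

```python
import numpy as np

def estimate_normal(n_p, u_dir, v_dir, kappa_u, kappa_v, u, v):
    """Approximate N_p(u, v) ~ N_p(0,0) + s(u, v) * dN_p(t(u, v))  (Eq. 4)."""
    d = np.hypot(u, v)
    if d == 0.0:
        return n_p
    a, b = u / d, v / d                                  # unit tangent components
    kappa_t = kappa_u * a * a + kappa_v * b * b          # normal curvature, Eq. (2)
    dN = -(kappa_u * a * u_dir + kappa_v * b * v_dir)    # normal gradient, Eq. (1)
    # Arc length along the semicircle of S_p in the direction t(u, v);
    # it reduces to d when kappa_t -> 0 (flat direction).
    if abs(kappa_t * d) < 1e-8:
        s = d
    else:
        s = np.arcsin(np.clip(kappa_t * d, -1.0, 1.0)) / kappa_t
    return n_p + s * dN     # unnormalized; normalize before shading

# Example: a sphere-like DP with kappa_u = kappa_v = 1.
n = estimate_normal(np.array([0., 0., 1.]), np.array([1., 0., 0.]),
                    np.array([0., 1., 0.]), 1.0, 1.0, 0.2, 0.1)
print(n / np.linalg.norm(n))
```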

The surface S_p and the normals N_p(u, v) give an approximation of the spatial and the normal distribution around x_p. (Note that N_p(u, v) is not necessarily the normal distribution of S_p, but is just an approximation of the normals around x_p.) Since it is only an approximation, there is a cost associated with it: the larger the arc length, the larger the error in approximation and, thus, the bigger the compromise in visual quality after rendering. However, an advantage of extrapolating to a larger neighborhood is that a smaller set of sampled DPs suffices to cover the whole surface, thus improving the rendering speed.

Fig. 3. Tangent plane parameterization of a differential point: The tangent plane Π_p is parameterized by the (u, v) coordinates. S_p uses the same parameterization, which relates it to Π_p under the projection P_p. u_{ε,p} is defined by the constraint ‖X_p(u, 0) − x_p(u, 0)‖ ≤ ε/|κ_{u_p}|.

We let the user resolve this tradeoff according to her needs by specifying two error tolerances that clamp the extent of the extrapolation:

1. Maximum principal error (ε): This error metric is used to set point sizes according to their curvatures. It specifies a curvature-scaled maximum orthographic deviation of S_p along the principal directions. We lay down this constraint as:

$$|\kappa_{u_p}\, (X_p(u, 0) - x_p(u, 0))| \leq \epsilon \quad \text{and} \quad |\kappa_{v_p}\, (X_p(0, v) - x_p(0, v))| \leq \epsilon.$$

Since S_p is defined by semicircles, we have that ‖X_p(u, 0) − x_p(u, 0)‖ ≤ 1/|κ_{u_p}|. It follows that ε ≤ 1. In other words, the extrapolation is bounded by the constraints |u| ≤ u_{ε,p} = √(2ε − ε²)/|κ_{u_p}| and |v| ≤ v_{ε,p} = √(2ε − ε²)/|κ_{v_p}|, as shown in Fig. 3. This defines a rectangle r_p on Π_p and bounds S_p accordingly, since S_p uses the same parameterization. The ε constraint ensures that points of high curvature are extrapolated to a smaller area and that "flatter" points are extrapolated to a larger area.

2. Maximum principal width (δ): If κ_{u_p} is close to 0, then u_{ε,p} can be very large. To deal with such cases, we impose a maximum width constraint δ. So, u_{ε,p} is computed as min(δ, √(2ε − ε²)/|κ_{u_p}|). Similarly, v_{ε,p} is min(δ, √(2ε − ε²)/|κ_{v_p}|).

We call the surface S_p (bounded by the ε and δ constraints), the normal distribution N_p(u, v) (bounded by the same constraints), and the rectangle r_p together a differential point because all of these are constructed from just the second-order information at a sampled point. While it would be desirable to render p using S_p, such a rendering primitive is not supported by current graphics hardware. Instead, p is rendered using r_p. This is explained in more detail in Section 5.
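The extent computation above reduces to a few lines; the sketch below (our illustration, with hypothetical names) computes the half-widths u_{ε,p} and v_{ε,p} of the rectangle r_p from the principal curvatures and the two tolerances.

```python
import math

def rectangle_extents(kappa_u, kappa_v, eps, delta):
    """Half-widths of r_p: u_eps = min(delta, sqrt(2*eps - eps^2) / |kappa_u|),
    and similarly for v_eps (with eps <= 1). Flat directions fall back to delta."""
    assert 0.0 < eps <= 1.0
    r = math.sqrt(2.0 * eps - eps * eps)
    u_eps = delta if kappa_u == 0.0 else min(delta, r / abs(kappa_u))
    v_eps = delta if kappa_v == 0.0 else min(delta, r / abs(kappa_v))
    return u_eps, v_eps

# A high-curvature direction gets a small extent; a flat one is capped by delta.
# Tolerances here are the head-model values quoted in Fig. 6 (eps = 0.012, delta = 2.0).
print(rectangle_extents(kappa_u=10.0, kappa_v=0.001, eps=0.012, delta=2.0))
```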

4.2 Sampling

Given a 3D model, it is first sampled by points. Using the inherent properties of the surface representation, we extract differential geometry information at each of these sampled points. If the surface is a parametric one, such as NURBS, it is sampled uniformly in the parametric domain and standard techniques outlined in the differential geometry literature [9] are used to extract the relevant information at each sampled point. The principal directions cannot be uniquely determined at umbilical points, where the surface is either flat or spherical (κ_{u_p} = κ_{v_p}). At such points, the direction of maximum curvature, u_p, is assigned to be the best of the projections of the x, y, and z axes onto the tangent plane. The direction of minimum curvature, v_p, can then be computed from u_p and n_p.

If the surface is a triangle mesh, then a NURBS surface can be fit to the triangle mesh [23], [24] and points can be sampled from this representation. We use a more direct approach, using the vertices of the mesh as the sampled points and extracting differential information for each point using the techniques developed by Taubin [12]. The DPs thus obtained from the triangle mesh have the same properties as the ones obtained by sampling a NURBS surface. However, in some areas of the surface, the samples may be spaced far apart even though the surface curvature of that region is high. If the points in such areas were assigned sizes using the criterion described in Section 4.1, then there might be gaps in the surface coverage. This is because the ε and δ values might prescribe small sizes for the surfaces S_p, leaving holes in the surface coverage because the points may not be close enough for the assigned sizes to overlap. To deal with this issue, we also factor in the distance of the neighbors from a sample point (mesh vertex) when determining the point size, as follows: For each point p, the midpoint of every incident edge is projected along the average of its adjacent triangle normals onto the tangent plane Π_p of the point. Then, a rectangle is sized to contain these points such that: 1) it is axis aligned with respect to the (u_p, v_p) directions, 2) it is symmetric with respect to the origin (point x_p), and 3) its size is the smallest possible. The rectangle r_p is set to the smallest rectangle that encloses both the rectangle computed this way and the rectangle determined using the steps outlined in Section 4.1.
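A minimal sketch of this sizing step for the mesh case (ours, with hypothetical names; the exact details of the authors' implementation are not given in the paper) is shown below: each incident-edge midpoint is projected along its averaged triangle normal onto the tangent plane, the smallest symmetric axis-aligned extents containing the projections are taken, and the result is merged with the curvature-based extents of Section 4.1.

```python
import numpy as np

def mesh_rectangle_extents(x_p, n_p, u_dir, v_dir, edge_midpoints, avg_normals,
                           u_eps, v_eps):
    """Half-widths of r_p covering the projected edge midpoints and at least
    the curvature-based extents (u_eps, v_eps)."""
    u_max, v_max = u_eps, v_eps
    for m, d in zip(edge_midpoints, avg_normals):
        denom = np.dot(d, n_p)
        if abs(denom) < 1e-12:          # projection direction parallel to Pi_p; skip
            continue
        t = np.dot(x_p - m, n_p) / denom
        q = m + t * d - x_p             # projection of the midpoint onto Pi_p
        u_max = max(u_max, abs(np.dot(q, u_dir)))
        v_max = max(v_max, abs(np.dot(q, v_dir)))
    return u_max, v_max
```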

4.3 Simplification

Initially, the surface is supersampled so that the rectangle of each differential point overlaps sufficiently with its neighbors without leaving any holes in the surface coverage. While this gives a complete surface representation, it also contains redundant samples. So, we follow up the point sampling with a simplification process that prunes redundant DPs.

Simplification works by pruning those DPs whose geometric information is represented by the cumulative information of their neighbors within the error margins ε and δ. For this purpose, we first define a projection set, O(p), for each point p. It denotes the set of all points of the original surface in the vicinity of x_p that fall within the surface area covered by the orthographic projection of the rectangle r_p onto the original surface along the direction n_p. do Carmo [9] shows that, for a vicinity around the point position x_p, this projection (mapping) is a homeomorphism. We define an overlap relation between differential points as follows: A differential point p is said to overlap another differential point q iff O(p) ∩ O(q) ≠ ∅. It follows from the definition that overlap is a symmetric relation.

Simplification involves comparing a point with its "neighbors," denoted by the set N(p). N(p) is initialized to include all the immediately surrounding DPs that overlap p. If the DPs are sampled from a parametric surface, then N(p) is chosen from the eight or the 24 nearest samples on the sampling grid of the parametric domain (DPs on the boundary can have fewer than eight immediately surrounding DPs). Since overlap is a symmetric relation, we have that q ∈ N(p) iff p ∈ N(q). If the differential points are sampled from a triangle mesh, then N(p) is chosen from the vertices with which p shares edges. Later, when the simplification algorithm is in progress, in the event of any q_i ∈ N(p) being pruned, N(p) is updated as follows:

$$N(p) \leftarrow N(p) - \{q_i\} \cup \{\, q_j \mid q_j \in N(q_i) \text{ and } q_j \neq p \text{ and } O(q_j) \cap O(p) \neq \emptyset \,\}. \quad (5)$$

This operation updates N(p) by deleting q_i from it and adding to it all the neighbors of q_i that overlap with p. Last, we define a term that will act as the prunability criterion of a DP. A differential point p is said to be enclosed iff O(p) ⊆ ∪_{q ∈ N(p)} O(q). In other words, p is said to be enclosed iff each point in its projection set O(p) is also in the projection set of at least one of its neighbors. During simplification, we check to make sure that only enclosed DPs are pruned.

Our simplification algorithm assumes a greedy heuristic of pruning the most redundant point first. A DP's redundancy is a measure of how similar it is to its neighbors. It is quantified by a metric, called the redundancy factor, R(p), which quantifies the ability of p to approximate the normals of its neighbors and vice versa. The higher the value of R(p), the greater the redundancy of p. R(p) is computed as follows:

$$R(p) = \sum_{q \in N(p)} \frac{|\langle N_q, N_p(u_{q,p}, v_{q,p}) \rangle| + |\langle N_p, N_q(u_{p,q}, v_{p,q}) \rangle|}{2\,|N(p)|}, \quad (6)$$

where (u_{q,p}, v_{q,p}) are the coordinates of the point on Π_p obtained by the orthographic projection of x_q onto Π_p, and N_p(u_{q,p}, v_{q,p}) is the normal estimated at these coordinates using (4). The dot product |⟨N_q, N_p(u_{q,p}, v_{q,p})⟩| in (6) is a measure of how close the actual normal at q is to the normal estimated at x_q using the curvature information at p. If R(p) is close to 1, then p is redundant because all of its geometric information is already represented by its neighbors.
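A small sketch of the redundancy computation (ours, not the authors' code) is given below; it assumes an estimate_normal(dp, u, v) helper implementing (4), such as the one sketched in Section 4.1, and a simple DP record with fields x, u_dir, v_dir, and n (all hypothetical names).

```python
import numpy as np

def project_coords(p, q):
    """(u, v) coordinates on Pi_p of the orthographic projection of x_q."""
    d = q.x - p.x
    return np.dot(d, p.u_dir), np.dot(d, p.v_dir)

def redundancy_factor(p, neighbors, estimate_normal):
    """Eq. (6): average agreement between actual and estimated unit normals."""
    total = 0.0
    for q in neighbors:
        u_qp, v_qp = project_coords(p, q)          # x_q seen from p's frame
        u_pq, v_pq = project_coords(q, p)          # x_p seen from q's frame
        n_est_p = estimate_normal(p, u_qp, v_qp)   # normal p predicts at x_q
        n_est_q = estimate_normal(q, u_pq, v_pq)   # normal q predicts at x_p
        total += (abs(np.dot(q.n, n_est_p / np.linalg.norm(n_est_p))) +
                  abs(np.dot(p.n, n_est_q / np.linalg.norm(n_est_q))))
    return total / (2.0 * len(neighbors))
```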

After R(p) has been computed, p is inserted into a binary heap with R(p) as the key. After all the DPs are represented in the heap, an iterative process is started that pops the top of the heap and checks if pruning that DP will leave any holes in the surface representation. If not, then the point is pruned and the neighborhoods and the redundancy factors of all its ex-neighbors are updated using (5) and (6), respectively. Otherwise, the point is marked for output. A pseudocode of the simplification is shown in Fig. 5.
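A compact sketch of this greedy loop is given below (ours; the redundancy, enclosed, and overlaps callables stand in for (6) and the routines described next, and the neighbor bookkeeping follows (5)).

```python
import heapq

def simplify(points, neighbors, redundancy, enclosed, overlaps):
    """Greedy pruning of the most redundant enclosed DPs (a sketch).
    neighbors: dict index -> set of indices of overlapping DPs (mutated in place);
    redundancy(i): R(p_i) per Eq. (6); enclosed(i): pruning p_i leaves no hole;
    overlaps(i, j): the projection sets of p_i and p_j intersect."""
    heap = [(-redundancy(i), i) for i in range(len(points))]
    heapq.heapify(heap)                              # max-heap on R via negation
    status = {}                                      # index -> "pruned" or "kept"
    while heap:
        _, i = heapq.heappop(heap)
        if i in status:                              # already decided (stale entry)
            continue
        if not enclosed(i):
            status[i] = "kept"                       # pruning would leave a hole
            continue
        status[i] = "pruned"
        for j in list(neighbors[i]):                 # update ex-neighbors, Eq. (5)
            if j in status:
                continue
            neighbors[j] = (neighbors[j] - {i}) | {
                k for k in neighbors[i] if k != j and overlaps(k, j)}
            heapq.heappush(heap, (-redundancy(j), j))   # refresh its key lazily
    return [points[i] for i in range(len(points)) if status.get(i) != "pruned"]
```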

For p to be pruned, it has to pass a correctness check: that p is an enclosed point or, in other words, that the pruning of p does not leave a hole in the surface representation. This check is done by the routine Enclosed(p) of the simplification pseudocode. Testing for the enclosure of the surfaces S_p can be very expensive. Instead, we approximate the original surface by Π_p and test for the enclosure of p within this framework. This test is done by an approximation method that samples line segments on r_p (as shown in Fig. 5) and tests if they are fully covered by the rectangles of the DPs in N(p). A pseudocode of this test is as follows:

Enclosed(p)
    TestLines ← sampled line segments on r_p
    for each q ∈ N(p)
        for each l ∈ TestLines
            delete l from TestLines
            project l along n_p onto Π_q
            clip it against r_q
            project the leftover segments back along n_p onto Π_p
            add them to TestLines
    if TestLines is empty
        return true
    return false

Since the coverage of line segments does not guarantee coverage of the entire area of r_p, we see infrequent sliver gaps left between rectangles. We make the coverage test more conservative by scaling down the rectangles for simplification (the original rectangle sizes are used for rendering). For all our test models, a scale-down factor of 15 percent produced a hole-free and effective simplification.

The simplification algorithm also involves a test for the overlap of q_1 and q_2. An approximate test for this is done by the routine Overlaps(q_1, q_2) of the simplification pseudocode, which tests if r_{q_1} overlaps r_{q_2} when Π_{q_2} is assumed to be the original surface, and vice versa. An approximation algorithm for this test is as follows:

Overlaps(q_1, q_2)
    return OverlapTest(q_1, q_2) or OverlapTest(q_2, q_1)

OverlapTest(q_1, q_2)
    if the orthographic projection of x_{q_1} onto Π_{q_2} falls within the bounds of r_{q_2}, return true
    if the orthographic projection of any of the end points of r_{q_1} onto Π_{q_2} falls within the bounds of r_{q_2}, return true
    if the orthographic projection of any of the edges of r_{q_1} onto Π_{q_2} intersects r_{q_2}, return true
    if all the above tests fail, return false

All the approximation algorithms work well in our case owing to the similarity of neighboring rectangles in position, width, and orientation. Fig. 4b shows the rectangles left over after simplification in an area where curvature-related features change very fast. A desirable feature of this simplification process is that the error metrics it uses also control the quality of the final rendered images. This allows the user to first decide on the image quality and then get as much simplification as possible without any loss in perceptual quality. Fig. 6 shows sample models rendered with and without simplification.

5 RENDERING

While the differential information in a DP can be extrapolated to define a continuous spatial neighborhood S_p, current graphics hardware does not support such a rendering primitive. We note that the main functionality of the spatial distribution is that it derives the normal distribution around the differential point. However, it is not necessary for the rendering algorithm to use an accurate spatial distribution, given the relatively small neighborhoods of extrapolation. So, r_p is used as an approximation to S_p when rasterizing p. Since shading artifacts are readily discernible to the human eye, the screen-space normal distribution around p has to mimic the normal variation around p on the original surface. This is done by projecting the normal distribution N_p(u, v) onto r_p and rasterizing r_p with a normal map of this distribution.

5.1 Normal Distribution

Consider the projection of N_p(u, v) onto Π_p using the projection P_p discussed in Section 4.1. The resulting (unnormalized) normal distribution, n_p(u, v), on the tangent plane can be expressed using (4) as:

$$n_p(u,v) \approx N_p(0,0) + s(u,v)\, dN_p(t(u,v)). \quad (7)$$

The tangent t(u, v) and the arc-length s(u, v) terms can be expanded as follows:

$$t(u,v) = \frac{u\, u_p + v\, v_p}{\sqrt{u^2 + v^2}} \quad (8)$$

$$s(u,v) = \frac{\arcsin\!\big( \kappa_p(t(u,v))\, \sqrt{u^2 + v^2} \big)}{\kappa_p(t(u,v))}. \quad (9)$$

Using substitutions from (1), (2), (8), and (9) in (7), we get:

$$n_p(u,v) \approx N_p(0,0) - \frac{\kappa_{u_p} u\, u_p + \kappa_{v_p} v\, v_p}{(\kappa_{u_p} u^2 + \kappa_{v_p} v^2)/\sqrt{u^2+v^2}}\, \arcsin\!\left( \frac{\kappa_{u_p} u^2 + \kappa_{v_p} v^2}{\sqrt{u^2+v^2}} \right).$$

It can be expressed in the local coordinate system (e_x, e_y, e_z) of (u_p, v_p, n_p) as:

$$n_p(u,v) \approx e_z - \frac{\kappa_{u_p} u\, e_x + \kappa_{v_p} v\, e_y}{(\kappa_{u_p} u^2 + \kappa_{v_p} v^2)/\sqrt{u^2+v^2}}\, \arcsin\!\left( \frac{\kappa_{u_p} u^2 + \kappa_{v_p} v^2}{\sqrt{u^2+v^2}} \right), \quad (10)$$

where e_x = (1, 0, 0), e_y = (0, 1, 0), and e_z = (0, 0, 1) are the canonical basis of ℝ³. Note that, when specified in the local coordinate frame, the normal distribution is independent of u_p, v_p, and N_p(0, 0). So, the rendering algorithm uses a local coordinate system for the shading so that the normal distribution can be computed once for a given combination of κ_u and κ_v and the same distribution can be reused to render all DPs with that combination of principal curvatures.

Fig. 4. Effectiveness of simplification: (a) Without simplification: wireframe rectangles corresponding to the initial (supersampled) collection of differential points from the surface of the teapot. (b) With simplification: wireframe rectangles of the differential points that are not pruned by the simplification algorithm. Simplification is done within a patch and not between patches; the strips of rectangles represent the differential points on the patch boundaries. (c) Rendering with simplification: a rendering of the simplified differential point representation.

Fig. 5. The line segments AE, CG, BF, HD, BH, HF, FD, DB comprise the initial set of test line segments for the routine Enclosed(p).

To shade a DP on a per-pixel basis, we would want the normal distribution to be available in screen space. The only hardware support for specifying such a normal distribution is normal mapping. It would be very expensive to compute such a normal map at runtime for each combination of κ_u and κ_v. So, a normal map can be precomputed in the local coordinate frame for various quantized values of the principal curvatures κ_u and κ_v and, at runtime, r_p can be normal-mapped with the closest resembling normal map. However, a drawback of such a quantization scheme is that, since κ_u and κ_v are unbounded quantities, it is impossible to compute all possible normal maps. To get around this problem, we introduce a new term, ρ_p = κ_{v_p}/κ_{u_p}, and note that −1 ≤ ρ_p ≤ 1 because |κ_{u_p}| ≥ |κ_{v_p}|. The local normal distribution from (10) can be rewritten using ρ_p as:

$$n_p(u,v) \approx e_z - (u\, e_x + \rho_p\, v\, e_y)\, \frac{\arcsin\!\big(\kappa_{u_p}\, \gamma_p(u,v)\big)}{\gamma_p(u,v)}, \quad (11)$$

where γ_p(u, v) = (u² + ρ_p v²)/√(u² + v²). Now, consider a normal distribution for a differential point m whose κ_{u_m} = 1:

$$n_m(u,v) \approx e_z - (u\, e_x + \rho_m\, v\, e_y)\, \frac{\arcsin\!\big(\gamma_m(u,v)\big)}{\gamma_m(u,v)}.$$

The only external parameter to n_m(u, v) is ρ_m. Since ρ_m is bounded, we precompute a set, M, of normal distributions for discrete values of ρ and store them as normal maps. Later, at render time, r_m is normal-mapped with the normal map whose ρ value is closest to ρ_m. To normal-map a general differential point p using the same set of normal maps, M, we use the following lemma:

Lemma 1. When expressed in their respective local coordinate frames, n_p(u, v) ≈ n_m(κ_{u_p} u, κ_{u_p} v), where m is any DP with κ_{u_m} = 1 and ρ_m = ρ_p.

Proof. First, we observe that κ_{u_p} γ_p(u, v) = γ_p(κ_{u_p} u, κ_{u_p} v). Using this observation, the tangent-plane normal distribution at p (11) can be rewritten as:

$$n_p(u,v) \approx e_z - \big((\kappa_{u_p} u)\, e_x + \rho_p (\kappa_{u_p} v)\, e_y\big)\, \frac{\arcsin\!\big(\gamma_p(\kappa_{u_p} u, \kappa_{u_p} v)\big)}{\gamma_p(\kappa_{u_p} u, \kappa_{u_p} v)} = e_z - \big((\kappa_{u_p} u)\, e_x + \rho_m (\kappa_{u_p} v)\, e_y\big)\, \frac{\arcsin\!\big(\gamma_m(\kappa_{u_p} u, \kappa_{u_p} v)\big)}{\gamma_m(\kappa_{u_p} u, \kappa_{u_p} v)} \approx n_m(\kappa_{u_p} u, \kappa_{u_p} v). \qquad \square$$

Using Lemma 1, a general r_p is normal-mapped with an appropriate normal map n_m(·, ·) and a scaling factor of κ_{u_p}.
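The quantized normal-map construction described above can be sketched as follows (our illustration; RES, NUM_MAPS, the helper names, and the map extent are hypothetical choices): n_m(u, v) from (11) is tabulated with κ_u = 1 for 256 uniformly spaced values of ρ, and a DP then picks the nearest map and scales its texture coordinates by κ_{u_p} per Lemma 1.

```python
import numpy as np

RES, NUM_MAPS = 32, 256                     # map resolution and |M|
RHOS = np.linspace(-1.0, 1.0, NUM_MAPS)     # quantized values of rho

def normal_map(rho, res=RES, extent=1.0):
    """Tabulate the unit-curvature distribution n_m(u, v) of Eq. (11) over
    [-extent, extent]^2 (kappa_u = 1)."""
    u, v = np.meshgrid(np.linspace(-extent, extent, res),
                       np.linspace(-extent, extent, res))
    d = np.where(np.hypot(u, v) == 0, 1e-12, np.hypot(u, v))
    gamma = (u * u + rho * v * v) / d                      # gamma_m(u, v)
    safe = np.where(np.abs(gamma) < 1e-8, 1.0, gamma)
    scale = np.where(np.abs(gamma) < 1e-8, 1.0,
                     np.arcsin(np.clip(gamma, -1.0, 1.0)) / safe)
    n = np.stack([-u * scale, -rho * v * scale, np.ones_like(u)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)   # stored as a normal map

M = [normal_map(r) for r in RHOS]           # precomputed once, reused for all DPs

def map_for(kappa_u, kappa_v):
    """Closest precomputed map for a DP; its (u, v) texture coordinates are
    scaled by kappa_u before lookup (Lemma 1)."""
    rho = kappa_v / kappa_u
    return M[int(np.argmin(np.abs(RHOS - rho)))], kappa_u
```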

5.2 Shading

For specular shading, apart from the local normal distribution, we also need a local half-vector distribution. For this we use the cube vector mapping [40] functionality offered in the nVIDIA GeForce series of GPUs, which allows one to specify unnormalized vectors at each vertex of a polygon and obtain linearly interpolated and normalized versions of these on a per-pixel basis. We use the cube vector map to specify an unnormalized half vector at each vertex of r_p, which delivers a normalized half vector at each pixel that r_p occupies. Per-pixel specular shading is achieved by using the per-pixel normal (from the normal map) and half vector (from the cube vector map) for illumination computations in the register combiners. A similar technique is used for diffuse shading. Fig. 7 shows the illumination of the Utah teapot and the camera models with our method.

Let h_p denote the (normalized) half (halfway) vector at the point position x_p and let H_p(u, v) denote the (unnormalized) half vector at a point X_p(u, v) on the surface S_p, with H_p(0, 0) = h_p. Let h_p(u, v) be the (unnormalized) half-vector distribution on the tangent plane Π_p obtained as a result of applying the projection P_p to H_p(u, v). Similarly, let l_p denote the (normalized) light vector at x_p and let l_p(u, v) and L_p(u, v) denote the (unnormalized) light-vector distributions on Π_p and S_p, respectively, with L_p(0, 0) = l_p. Also, let w_p denote the (normalized) view vector at x_p and let w_p(u, v) and W_p(u, v) denote the (unnormalized) view-vector distributions on Π_p and S_p, respectively, with W_p(0, 0) = w_p.

Applying a method similar to the one used in the derivation of (7), h_p(u, v) can be written as:

$$h_p(u,v) \approx H_p(0,0) + s(u,v)\, dH_p(t(u,v)) \approx H_p(0,0) + \sqrt{u^2 + v^2}\; dH_p(t(u,v)), \quad (12)$$

where √(u² + v²) is substituted as an approximation for s(u, v). The half-vector gradient can be expanded as:

$$dH_p(t(u,v)) = \frac{\partial H_p(u,v)}{\partial u}\bigg|_{u=v=0} \frac{u}{\sqrt{u^2+v^2}} + \frac{\partial H_p(u,v)}{\partial v}\bigg|_{u=v=0} \frac{v}{\sqrt{u^2+v^2}}$$

and, using this result, (12) can be rewritten as:

$$h_p(u,v) \approx H_p(0,0) + u\, \frac{\partial H_p(u,v)}{\partial u}\bigg|_{u=v=0} + v\, \frac{\partial H_p(u,v)}{\partial v}\bigg|_{u=v=0}. \quad (13)$$

Fig. 6. Rendering quality with and without simplification. Head model (ε = 0.012, δ = 2.0): (a) without simplification, (b) with simplification. Venus model (ε = 10^{-6}, δ = 0.05): (c) without simplification, (d) with simplification.

Let a be the position of the light and b be the position of the eye. The partial differential term of (13) can then be rewritten as follows:

$$\frac{\partial H_p(u,v)}{\partial u}\bigg|_{u=v=0} = \frac{\partial}{\partial u}\!\left( \frac{L_p(u,v)}{\|L_p(u,v)\|} + \frac{W_p(u,v)}{\|W_p(u,v)\|} \right)\!\bigg|_{u=v=0} = \frac{\partial}{\partial u}\!\left( \frac{a - X_p(u,v)}{\|a - X_p(u,v)\|} \right)\!\bigg|_{u=v=0} + \frac{\partial}{\partial u}\!\left( \frac{b - X_p(u,v)}{\|b - X_p(u,v)\|} \right)\!\bigg|_{u=v=0}$$

$$= \frac{\Big( \big\langle \frac{L_p(u,v)}{\|L_p(u,v)\|}, \frac{\partial X_p(u,v)}{\partial u} \big\rangle \frac{L_p(u,v)}{\|L_p(u,v)\|} - \frac{\partial X_p(u,v)}{\partial u} \Big)\Big|_{u=v=0}}{\|a - x_p\|} + \frac{\Big( \big\langle \frac{W_p(u,v)}{\|W_p(u,v)\|}, \frac{\partial X_p(u,v)}{\partial u} \big\rangle \frac{W_p(u,v)}{\|W_p(u,v)\|} - \frac{\partial X_p(u,v)}{\partial u} \Big)\Big|_{u=v=0}}{\|b - x_p\|}$$

$$= \frac{\langle l_p, u_p \rangle\, l_p - u_p}{\|a - x_p\|} + \frac{\langle w_p, u_p \rangle\, w_p - u_p}{\|b - x_p\|}.$$

When expressed in the local coordinate frame, we get:

$$\frac{\partial H_p(u,v)}{\partial u}\bigg|_{u=v=0} = \frac{\langle l_p, e_x \rangle\, l_p - e_x}{\|a - x_p\|} + \frac{\langle w_p, e_x \rangle\, w_p - e_x}{\|b - x_p\|}; \quad (14)$$

the other partial differential term of (13) can be computed similarly to get the tangent-plane half-vector distribution. The subtraction and the dot products in (14) are simple operations and can be done fast. However, the square root and the division operations are expensive. Both of these operations are combined by the fast inverse square root approximation [41] and, in practice, we have found that this approximation causes no compromise in visual quality.

The light-vector distribution on Π_p can be derived similarly. It is given by:

$$l_p(u,v) \approx l_p - u\, e_x - v\, e_y.$$
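As a rough illustration of the per-vertex CPU work implied by (13), (14), and the light-vector distribution above (a sketch of ours; the function and parameter names are hypothetical, with the light at a and the eye at b), the unnormalized half and light vectors at a vertex offset (u, v) of r_p can be computed as follows.

```python
import numpy as np

def half_vector_at_vertex(x_p, u_dir, v_dir, a, b, u, v):
    """Unnormalized half vector h_p(u, v) at a vertex of r_p via Eq. (13), using
    the gradients in the form preceding Eq. (14)."""
    l = (a - x_p) / np.linalg.norm(a - x_p)        # unit light vector l_p
    w = (b - x_p) / np.linalg.norm(b - x_p)        # unit view vector w_p
    inv_la = 1.0 / np.linalg.norm(a - x_p)
    inv_wb = 1.0 / np.linalg.norm(b - x_p)
    dHdu = ((np.dot(l, u_dir) * l - u_dir) * inv_la +
            (np.dot(w, u_dir) * w - u_dir) * inv_wb)
    dHdv = ((np.dot(l, v_dir) * l - v_dir) * inv_la +
            (np.dot(w, v_dir) * w - v_dir) * inv_wb)
    return (l + w) + u * dHdu + v * dHdv           # normalized later, per pixel

def light_vector_at_vertex(x_p, u_dir, v_dir, a, u, v):
    """Unnormalized light vector l_p(u, v) at a vertex of r_p (the simpler
    distribution given above)."""
    l = (a - x_p) / np.linalg.norm(a - x_p)
    return l - u * u_dir - v * v_dir
```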

So far, we have discussed the tangent-plane normal, half-vector, and light-vector distributions around p. They are used for shading p. The overall rendering algorithm is given in Fig. 8.

Shading p essentially involves two kinds of computation: 1) computing the relevant vectors (coordinates) for texture mapping (CPU-end computation) and 2) per-pixel computation (GPU-end computation). The rectangle r_p is mapped by two textures: the normal map and the half-vector (or light-vector) map. Normal mapping involves choosing the best approximation to the normal distribution from the set of precomputed normal maps M and computing the normal-map coordinates (u, v) for the vertices of r_p. Half-vector mapping involves computing the unnormalized half vectors at the vertices of r_p using (13) and using them as the texture coordinates of the cube vector map that is mapped onto r_p. The cube vector mapping hardware delivers a per-pixel (normalized) half vector obtained as a result of a linear interpolation between the half vectors specified at the vertices of r_p. Per-pixel shading is achieved at the hardware register-combiner level using the (per-pixel) normal and half vectors [40]. If both diffuse and specular shading are desired, then shading is done in two passes, with the accumulation buffer being used to buffer the results of the first pass. We use a two-pass scheme because the nVIDIA GeForce2 allows only two textures at the register combiners. If three textures are accessible at the combiners (as in the GeForce3), then both the diffuse and specular illumination can be computed in one pass. In the presence of multiple light sources, we do a separate rendering pass for each light source, with the accumulation buffer being used to buffer intermediate results.

6 IMPLEMENTATION AND RESULTS

All the test cases were run on a PC with an 866 MHz Pentium 3 processor and 512 MB of RDRAM, with an nVIDIA GeForce2 GTS GPU supported by 32 MB of DDR RAM. All the test windows were 800 × 600 in size. We used 256 normal maps (|M| = 256) corresponding to uniformly sampled values of ρ and built a linear mip-map on each of these, with the highest resolution being 32 × 32. The resolution of the cube vector map was 512 × 512 × 6.

Fig. 7. Illumination and per-pixel shading. (a) Diffuse illumination. (b) Specular illumination. (c) Diffuse and specular illumination.

We demonstrate our work on five models: the Utah teapot, a human head model, a camera prototype (all NURBS models), the Stanford bunny, and the Cyberware Venus model (triangle mesh models). In the case of a NURBS surface, the component patches are sampled uniformly in the parametric domain and simplified independently of each other. ε is the main parameter of the sampling process: a smaller ε requires a higher sampling frequency. The main role of δ is in areas where curvature changes fast. On such surfaces, δ ensures that the rectangles from a low-curvature region do not block the nearby rectangles in the higher-curvature regions. δ also ensures that the rectangles do not overrun the boundary significantly. We used a simple binary heap for the heap operations of the simplification process. The main functional bottleneck in the preprocessing stage is the test for enclosure in the simplification process. Since every DP popped from the heap is tested for enclosure, the number of enclosure tests is equal to the number of sampled DPs. Irrespective of the amount of supersampling of a model, simplification yielded similar results on all attempts that shared the same error metrics (ε and δ). The effectiveness of simplification is summarized in Table 1. While simplification does not cause any loss of visual quality, it can lead to an order-of-magnitude speedup in rendering and can save substantial storage space.

Each DP uses 62 bytes of storage without any encoding and about 13 bytes of storage with encoding. The following data is stored per DP without any encoding: 6 bytes for the diffuse and specular colors, 12 floats (48 bytes) for the point location, principal directions, and normal, and two floats (8 bytes) for the two curvature values. A simple encoding scheme is used to represent a DP with just 13 bytes: three shorts (6 bytes) for the position x_p, 2 bytes for each of the principal directions u_p and v_p, 2 bytes for the maximum principal curvature κ_{u_p}, and 1 byte for the value ρ_p. The normal n_p need not be stored explicitly as it can be computed as the cross product of the principal directions. If the DPs were sampled from a triangle mesh then, as explained in Section 4.2, some of them would have a size bigger than what is prescribed by the curvatures. This additional information is encoded in one or two bytes, depending on the nature of the point. If the size of the rectangle r_p can be computed solely from the curvature values (as in Section 4.1), then a zero byte is written to the file after the first 13 bytes of the DP have been written. Otherwise, the width and the height of r_p are encoded in 1 byte each (the bytes being nonzero) and written to the file after the first 13 bytes of the DP. We do not save the color for each DP, but group together DPs with the same color and write the color information once for the group. Fig. 9 shows the bunny rendered with and without encoding.
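As an illustration of the 13-byte layout (a sketch of ours; the exact quantization of directions, position, and curvature used by the authors is not spelled out in the paper, so the quantizers below are hypothetical stand-ins), the record can be packed as follows.

```python
import math
import struct

def quantize_direction(d):
    """Hypothetical 16-bit code: 8 bits each for the two spherical angles of d."""
    theta = math.acos(max(-1.0, min(1.0, d[2])))            # [0, pi]
    phi = math.atan2(d[1], d[0]) % (2.0 * math.pi)          # [0, 2*pi)
    return (int(theta / math.pi * 255) << 8) | int(phi / (2.0 * math.pi) * 255)

def encode_dp(x_p, u_dir, v_dir, kappa_u, rho, scale=1000.0):
    """Pack one DP into 13 bytes: 3 shorts for the position, 2 bytes per principal
    direction, 2 bytes for kappa_u, and 1 byte for rho in [-1, 1]."""
    qx = [int(round(c * scale)) for c in x_p]               # position quantized to shorts
    qk = max(0, min(65535, int(round(kappa_u * scale))))    # crude 16-bit curvature code
    qrho = int(round((rho + 1.0) * 127.5))                  # map [-1, 1] to [0, 255]
    return struct.pack("<3h2HHB", *qx, quantize_direction(u_dir),
                       quantize_direction(v_dir), qk, qrho)

blob = encode_dp([0.1, -0.2, 0.3], [1, 0, 0], [0, 1, 0], 2.5, -0.4)
print(len(blob))   # 13
```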

The results reported in Table 1 are with dynamic illumination (the light and half vectors are computed for each DP in each frame). Both the specular and diffuse shading are done at the hardware level. However, the nVIDIA GeForce2 does not support a hardware implementation of the accumulation buffer. Instead, the accumulation buffer is implemented in software by the OpenGL drivers. So, the case of both diffuse and specular illumination can be slow. Hardware support for the accumulation buffer is available on other GPUs, such as the Voodoo5 6000 AGP from 3dfx. However, this should not be a problem with the nVIDIA GeForce3 GPU, as it allows access to four textures at the register combiners, making it possible to compute both diffuse and specular shading in one pass.

On average, about 330,000 DPs can be rendered per second with diffuse illumination. Both the diffuse and specular illumination passes take around the same time. The main bottlenecks in rendering are the bus bandwidth and the pixel-fill rate. This can be seen by noting that specular and diffuse illumination give around the same frame rates, even though the cost of computing the half vectors is higher than the cost of computing the light vectors and the specular illumination pass has more per-pixel computation than the diffuse illumination pass.

The main focus of this paper is the efficiency of DPs as rendering primitives. Previous works on point sample rendering have orthogonal benefits such as faster transformation [6] and multiresolution [4], [5], [8]. Potentially, DPs can be used in conjunction with these schemes. To demonstrate the benefits of DPs, we designed experiments to compare performance against the splatting approach to rendering. A naive OpenGL point rendering was not considered because it is prone to holes and aliasing problems. We compare the rendering performance of an unsimplified differential point representation of a teapot to the splatting of unsimplified and unstructured versions of the sampled points. For the splatting test cases, we take the original point samples from which the DPs were constructed and associate each of them with a bounding ball whose radius is determined by comparing its distance to its sampled neighbors. This ensures that there are no holes in the surface coverage by the splatting primitives. We consider three kinds of test rendering primitives for splatting:

1. Square Primitive: They are squares parallel to the view plane with a width equal to the radius of the bounding sphere [5]. They are rendered with Z-buffering enabled but without any blending.


Fig. 8. The Rendering Algorithm.


2. Rectangle Primitive: Consider a disc on the tangent plane of the point, with a radius equal to the radius of the bounding ball. Also consider a plane parallel to the view plane and located at the position of the point. An orthogonal projection of the disc onto this plane results in an ellipse. The rectangle primitive is obtained by fitting a rectangle around the ellipse, with the sides of the rectangle parallel to the principal axes of the ellipse [4]. The rectangle primitives are rendered with Z-buffering but without any blending.

3. Elliptical Primitive: We initialize 256 texture maps representing ellipses (with a unit radius along the semimajor axis) varying from a circle to a nearly "flat" ellipse. The texture maps are not Gaussian; they just have an alpha value of 0 in the interior of the ellipse and 1 elsewhere. At runtime, the rectangle primitive is texture mapped with a scaled version of the closest approximation of its ellipse. The texture-mapped rectangles are then rendered with a small depth offset and blending [5]. This is implemented in hardware using the register combiners (a texture-setup sketch follows this list).
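A minimal OpenGL sketch of building such ellipse alpha masks is given below; the texture resolution N, the eccentricity schedule, the function name, and the use of the legacy GL_ALPHA format are assumptions for illustration:

// Hedged sketch of generating 256 ellipse alpha masks (alpha 0 inside the
// ellipse, 1 outside, following the convention in the text).
#include <GL/gl.h>
#include <vector>

void buildEllipseMasks(GLuint tex[256], int N = 64) {
    glGenTextures(256, tex);
    std::vector<GLubyte> alpha(N * N);
    for (int t = 0; t < 256; ++t) {
        float b = (t + 1) / 256.0f;                    // semiminor axis: near-flat ... circle
        for (int y = 0; y < N; ++y) {
            for (int x = 0; x < N; ++x) {
                float u = 2.0f * x / (N - 1) - 1.0f;   // map texel to [-1, 1]
                float v = 2.0f * y / (N - 1) - 1.0f;
                bool inside = u * u + (v * v) / (b * b) <= 1.0f;
                alpha[y * N + x] = inside ? 0 : 255;   // alpha 0 inside, 1 outside
            }
        }
        glBindTexture(GL_TEXTURE_2D, tex[t]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, N, N, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, alpha.data());
    }
}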

DPs were compared with the splatting primitives for three test cases: 1) the same number of rendering primitives, 2) approximately similar visual quality of rendering, and 3) the same frame rates. Table 2 summarizes the results of the first test case. DPs were found to deliver a much better rendering quality for the same number of primitives, as seen in Fig. 10. DPs fared especially well in high-curvature areas, which are not well modeled and rendered by the splat primitives. Moreover, DPs had nearly the same frame rates as the elliptical primitive, but they were slower than the square and rectangle primitives and required more disk space.

Table 2 also summarizes the results of the second test case. Sample renderings for this test are shown in Fig. 10. For this case, the number of square, rectangle, and elliptical primitives was increased by increasing the sampling frequency of the uniformly sampled model used for the DPs. In this test case, DPs clearly outperformed the splatting primitives, both in frame rates and in storage space requirements. The third test case shows that, for the same frame rates, DPs produced better rendering quality using fewer rendering primitives.

7 CONCLUSIONS AND FUTURE WORK

The results and the test comparisons clearly demonstrate the efficiency of DPs as rendering primitives. The ease of simplification gives DPs the added advantage of a significant speedup. The high quality of rendering is attributed to the inherent ability of DPs to closely approximate the local surface normal distribution. The rendering efficiency of DPs is attributed to the sparse surface representation, which reduces bus bandwidth, and to its amenability to per-pixel shading.

One shortcoming of DPs is that the complexity of the borders limits the maximum width of the interior DPs through the δ constraint. This leads to increased sampling in the interior, even though these DPs have enough room to expand within the bounds laid down by the ε constraint. A width-determination approach that uses third-order differential information (such as the variation of the surface curvature) should be able to deal with this more efficiently.

A multiresolution scheme for DPs can be explored that would efficiently render lower-frequency versions of the original surface. Under such a scheme, a rendering algorithm that blends the DPs is a promising prospect. Another line of future work concerns simplification: currently, we use a simple heuristic with some approximation algorithms that do not guarantee a hole-free representation, and there is considerable scope for improvement here. Compression of point samples is also a promising prospect.


TABLE 1: Summary of Results

The teapot, camera, and head models are derived from NURBS surfaces; the Stanford bunny and the Cyberware Venus are derived from triangle meshes.

Fig. 9. Rendering quality with and without encoding. (a) The bunny rendered without encoding. (b) The bunny rendered with encoding.


ACKNOWLEDGMENTS

The authors would like to thank their colleagues at the Graphics and Visual Informatics Laboratory at the University of Maryland, College Park, and at the Center for Visual Computing at the State University of New York, Stony Brook. They would like to thank Robert McNeel & Associates for the head model, the camera model, and the openNURBS code. Thanks also to the Stanford Graphics Laboratory for the bunny model and to Cyberware Inc. for the Venus model. Last, but not least, the authors would like to acknowledge US National Science Foundation funding grants IIS00-81847, ACR-98-12572, and DMI-98-00690.

REFERENCES

[1] J.P. Grossman and W.J. Dally, "Point Sample Rendering," Proc. Rendering Techniques '98, G. Drettakis and N. Max, eds., pp. 181-192, 1998.

[2] M. Levoy and T. Whitted, "The Use of Points as a Display Primitive," Technical Report 85-022, Computer Science Dept., Univ. of North Carolina at Chapel Hill, Jan. 1985.

[3] D. Lischinski and A. Rappoport, "Image-Based Rendering for Non-Diffuse Synthetic Scenes," Proc. Rendering Techniques '98, G. Drettakis and N. Max, eds., pp. 301-314, 1998.

[4] H. Pfister, M. Zwicker, J. van Baar, and M. Gross, "Surfels: Surface Elements as Rendering Primitives," Proc. SIGGRAPH 2000, pp. 335-342, July 2000.

[5] S. Rusinkiewicz and M. Levoy, "QSplat: A Multiresolution Point Rendering System for Large Meshes," Proc. SIGGRAPH 2000, pp. 343-352, July 2000.

[6] J. Shade, S. Gortler, L. He, and R. Szeliski, "Layered Depth Images," Proc. SIGGRAPH '98, pp. 231-242, Aug. 1998.


TABLE 2: Comparison with Splatting Primitives

Test 1: same number of rendering primitives, Test 2: approximately similar rendering quality, Test 3: similar frame rates. DP = Differential Points, SP = Square Primitive, RP = Rectangle Primitive, and EP = Elliptical Primitive.

Fig. 10. Selected areas of rendering of the teapot model for the three test cases. Top row, Test 1: Comparison of rendering quality for the same number of rendering primitives (157K pts.). (a) Differential points (2.13 fps). (b) Square primitive (11.76 fps). (c) Rectangle primitive (10.52 fps). (d) Elliptical primitive (2.35 fps). Middle row, Test 2: Comparison of primitives for similar rendering quality. (e) Differential points (157K pts., 2.13 fps). (f) Square primitive (1,411K pts., 1.61 fps). (g) Rectangle primitive (1,155K pts., 1.49 fps). (h) Elliptical primitive (320K pts., 1.16 fps). Bottom row, Test 3: Comparison of rendering quality for a rendering speed of about 2.1 fps. (i) Differential points (157K pts.). (j) Square primitive (1,037K pts.). (k) Rectangle primitive (819K pts.). (l) Elliptical primitive (180K pts.).


[7] S. Rusinkiewicz and M. Levoy, "Streaming QSplat: A Viewer for Networked Visualization of Large, Dense Models," Proc. Symp. Interactive 3D Graphics, pp. 63-68, Mar. 2001.

[8] C.F. Chang, G. Bishop, and A. Lastra, "LDI Tree: A Hierarchical Representation for Image-Based Rendering," Proc. SIGGRAPH '99, pp. 291-298, 1999.

[9] M.P. do Carmo, Differential Geometry of Curves and Surfaces. Englewood Cliffs, N.J.: Prentice Hall, 1976.

[10] A. Hilton, J. Illingworth, and T. Windeatt, "Statistics of Surface Curvature Estimates," Pattern Recognition, vol. 28, no. 8, pp. 1201-1222, 1995.

[11] E.M. Stockely and S.Y. Wu, "Surface Parameterization and Curvature Measurement of Arbitrary 3-D Objects: Five Practical Methods," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 8, pp. 833-840, Aug. 1992.

[12] G. Taubin, "Estimating the Tensor of Curvature of a Surface from a Polyhedral Approximation," Proc. Fifth Int'l Conf. Computer Vision, pp. 902-907, 1995.

[13] M. Desbrun, M. Meyer, P. Schroder, and A.H. Barr, "Discrete Differential-Geometry Operations in nD," http://www.multires.caltech.edu/pubs/DiffGeoOperators.pdf, 2000.

[14] J.A. Beraldin, F. Blais, L. Cournoyer, M. Rioux, S.F. El-Hakim, R. Rodell, F. Bernier, and N. Harrison, "Digital 3D Imaging for Rapid Response on Remote Sites," Proc. Second Int'l Conf. 3D Imaging and Modelling, pp. 34-43, 1999.

[15] C. Fermuller, Y. Aloimonos, and A. Brodsky, "New Eyes for Building Models from Video," CGTA: Computational Geometry: Theory and Applications, vol. 15, pp. 3-23, 2000.

[16] M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, and D. Fulk, "The Digital Michelangelo Project: 3D Scanning of Large Statues," Proc. SIGGRAPH 2000, pp. 131-144, July 2000.

[17] P. Rademacher and G. Bishop, "Multiple-Center-of-Projection Images," Proc. SIGGRAPH '98, pp. 199-206, Aug. 1998.

[18] H. Rushmeier, G. Taubin, and A. Gueziec, "Applying Shape from Lighting Variation to Bump Map Capture," Proc. Rendering Techniques '97, J. Dorsey and P. Slusallek, eds., pp. 35-44, June 1997.

[19] C.L. Bajaj, F. Bernardini, and G. Xu, "Automatic Reconstruction of Surfaces and Scalar Fields from 3D Scans," Proc. SIGGRAPH '95, pp. 109-118, Aug. 1995.

[20] F. Bernardini, J. Mittleman, H. Rushmeier, C. Silva, and G. Taubin, "The Ball-Pivoting Algorithm for Surface Reconstruction," IEEE Trans. Visualization and Computer Graphics, vol. 5, no. 4, pp. 349-359, Oct.-Dec. 1999.

[21] I. Guskov, W. Sweldens, and P. Schroder, "Multiresolution Signal Processing for Meshes," Proc. SIGGRAPH '99, pp. 325-334, 1999.

[22] M. Pauly and M. Gross, "Spectral Processing of Point-Sampled Geometry," Proc. SIGGRAPH '01, pp. 379-386, Aug. 2001.

[23] M. Eck and H. Hoppe, "Automatic Reconstruction of B-Spline Surfaces of Arbitrary Topological Type," Proc. SIGGRAPH '96, pp. 325-334, Aug. 1996.

[24] V. Krishnamurthy and M. Levoy, "Fitting Smooth Surfaces to Dense Polygon Meshes," Proc. SIGGRAPH '96, pp. 313-324, Aug. 1996.

[25] G. Turk, "Re-Tiling Polygonal Surfaces," Proc. SIGGRAPH '92, pp. 55-64, July 1992.

[26] A.P. Witkin and P.S. Heckbert, "Using Particles to Sample and Control Implicit Surfaces," Proc. SIGGRAPH '94, pp. 269-278, July 1994.

[27] P. Cignoni, C. Montani, and R. Scopigno, "A Comparison of Mesh Simplification Algorithms," Computers & Graphics, vol. 22, no. 1, pp. 37-54, 1998.

[28] J. Cohen, D. Luebke, M. Reddy, A. Varshney, and B. Watson, "Advanced Issues in Level of Detail," Course Notes (41) of SIGGRAPH 2000, July 2000.

[29] P. Lindstrom, "Out-of-Core Simplification of Large Polygonal Models," Proc. SIGGRAPH 2000, pp. 259-262, July 2000.

[30] L. Darsa, B.C. Silva, and A. Varshney, "Navigating Static Environments Using Image-Space Simplification and Morphing," Proc. 1997 Symp. Interactive 3D Graphics, M. Cohen and D. Zeltzer, eds., pp. 25-34, Apr. 1997.

[31] W.R. Mark, L. McMillan, and G. Bishop, "Post-Rendering 3D Warping," Proc. 1997 Symp. Interactive 3D Graphics, pp. 7-16, Apr. 1997.

[32] K. Pulli, M. Cohen, T. Duchamp, H. Hoppe, L. Shapiro, and W. Stuetzle, "View-Based Rendering: Visualizing Real Objects from Scanned Range and Color Data," Proc. Rendering Techniques '97, J. Dorsey and P. Slusallek, eds., pp. 23-34, June 1997.

[33] M.M. Oliveira, G. Bishop, and D. McAllister, "Relief Texture Mapping," Proc. SIGGRAPH 2000, pp. 359-368, July 2000.

[34] B. Chen and M.X. Nguyen, "POP: A Hybrid Point and Polygon Rendering System for Large Data," Proc. IEEE Visualization '01, Oct. 2001.

[35] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, C. Silva, and D. Levin, "Point Set Surfaces," Proc. IEEE Visualization 2001, Oct. 2001.

[36] M. Zwicker, H. Pfister, J. van Baar, and M. Gross, "Surface Splatting," Proc. SIGGRAPH 2001, pp. 371-378, Aug. 2001.

[37] K. Mueller, T. Möller, and R. Crawfis, "Splatting without the Blur," Proc. IEEE Visualization '99, pp. 363-370, Oct. 1999.

[38] V.L. Interrante, "Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution," Proc. SIGGRAPH '97, pp. 109-116, Aug. 1997.

[39] P.S. Heckbert and M. Garland, "Optimal Triangulation and Quadric-Based Surface Simplification," Computational Geometry, vol. 14, pp. 49-65, 1999.

[40] M.J. Kilgard, "A Practical and Robust Bump-Mapping Technique for Today's GPUs," Game Developers Conf., July 2000, http://www.nvidia.com.

[41] K. Turkowski, "Computing the Inverse Square Root," Graphics Gems, A. Paeth, ed., vol. 5, pp. 16-21, Academic Press, 1995.

Aravind Kalaiah is a doctoral candidate in the Computer Science Department of the University of Maryland, College Park. He received the BTech degree in computer science from the Indian Institute of Technology, Bombay, in 1998 and the MS degree in computer science from the State University of New York at Stony Brook in 2000. His primary research interests are in real-time visualization, geometric modeling, and photorealism.

Amitabh Varshney received the BTech degree in computer science from the Indian Institute of Technology, Delhi, in 1989 and the MS and PhD degrees from the University of North Carolina at Chapel Hill in 1991 and 1994. He is currently an associate professor of computer science at the University of Maryland at College Park. He was an assistant professor of computer science at the State University of New York at Stony Brook from 1994 to 2000. His current research interests are in interactive 3D graphics, scientific visualization, geometric modeling, and molecular graphics. He received the US National Science Foundation's CAREER award in 1995.

For more information on this or any computing topic, please visit our Digital Library at http://computer.org/publications/dlib.
