
Computers & Graphics 36 (2012) 930–944


Technical Section

Pencil rendering on 3D meshes using convolution

Yunmi Kwon, Heekyung Yang, Kyungha Min

Sangmyung University, Republic of Korea

Article info

Article history:

Received 27 February 2012

Received in revised form 1 August 2012

Accepted 6 August 2012

Available online 30 August 2012

Keywords:

Pencil drawing

Triangular mesh

Convolution

Temporal coherence

Non-photorealistic rendering

0097-8493/$ - see front matter © 2012 Elsevier Ltd. All rights reserved.

http://dx.doi.org/10.1016/j.cag.2012.08.002

This study was supported by a research grant from Sangmyung University in 2010.

This article was recommended for publication by Tobias Isenberg.

Corresponding author. Tel.: +82 2 2287 5377; fax: +82 2 2287 0072. E-mail address: [email protected] (K. Min).

Abstract

We produce various styles of pencil drawings from a 3D triangular mesh using a new two-phase approach based on convolution. First, we generate the noise particles and integration directions required for convolution on the mesh and project them onto the image space. We then use the improved convolution algorithm to integrate the projected noise along the generated integration directions. This scheme produces pencil drawings in different styles, including feature-conveying, monochrome tone-depicting and smooth color-depicting styles, examples of which are provided in this study. This rendering process is temporally coherent. Therefore, it can be used to create drawing-styled animations.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Pencil drawing is one of the most attractive ways of depicting shape. Extensive research on non-photorealistic rendering (NPR) has been concerned with techniques for producing pencil drawing effects from photographs [5,6,9,11–13,17,20,23,24,27,28,31,32,34,39] or from 3D meshes [10,15,29,38]. Other schemes produce related depictions, such as line illustrations [25,33,35] or hatchings [16,19,37], from meshes. Most existing schemes mimic pencil strokes and produce monochrome results.

In this paper, we present a novel method for mimicking a range of styles of pencil drawing from a 3D triangular mesh. To achieve this objective, we present a convolution-based approach. Convolution is already widely used for producing pencil drawing effects from photographs [17,23,24,32,39]; however, it has not been used for triangular meshes.

In other fields, convolution is used to visualize the shape of objects in the form of 3D volume data by producing hatching patterns or painting strokes. Interrante [8] has visualized the shape of objects in 3D volume data, which is captured by medical imaging devices, by producing hatching strokes along the principal directions, with the use of convolution. Lee [26] has presented a convolution-based scheme for producing painting effects on triangular meshes. He converted the meshes into implicit volume data, where convolution produced painting strokes along the principal directions. Convolution in these studies is a volume-space convolution that operates on 3D volume data. Our convolution scheme is distinguished from this type of convolution in that it is applied in an image space where the noise and directions are computed from a 3D mesh.

As shown in Fig. 2, our technique involves two stages. In the preprocessing stage, we compute smoothly varying directions on a triangular mesh as weighted averages of the principal directions. In the first stage, we generate noise particles on the mesh. We then determine three different types of noise values that control the degree of emphasis of the three styles of feature-conveying, tone-depicting and color-depicting. This noise is generated from the noise particles, whose distribution and attached values are controlled to preserve the coherence required between the frames of an animation. In the second stage, the directions are discretized into several stroke directions along which convolution is then performed. The result is a simulated pencil drawing in the image space.

The contributions of this paper are listed as follows:

1. A convolution-based scheme for producing pencil drawings from 3D meshes: The convolution-based approach enables us to produce a range of styles of pencil drawings as shown in Fig. 1, including the color pencil drawing and feature-conveying styles, which have not been presented by existing studies.

2. Control of style expression by the noise value: The various styles of pencil drawing shown in Fig. 1 are expressed by controlling the noise value, which is determined according to the strengths of view-dependent and view-independent features, the light intensity estimated on a vertex, and the color from the light sources. The noise value from the strength of a feature is used for the feature-conveying style; the value from the intensity, for the tone-depicting style; and the value from the color, for the color-depicting style.

3. The use of features in a pencil drawing for effective shape expression: Most existing studies have not used features in their pencil drawing schemes. As shown in Fig. 1(a), the features can be used to convey the salient shape of an object. Furthermore, as shown in Fig. 17, the features obtained by combining tone and color are very effective in expressing the shape of an object.

4. Preservation of the temporal coherence of pencil drawings: The temporal coherence of the pencil drawings during an animation is an important challenge in using convolution to produce pencil drawings from meshes. To preserve the temporal coherence of the drawing, we design a coherent control scheme for the distribution of noise particles and the value of each particle.

Fig. 1. Results from our algorithm (note that this David model contains some holes): (a) input mesh; (b) pencil drawing conveying important features; (c) pencil drawing depicting smooth tone by cross-hatching and (d) pencil drawing in which uni-directional hatching is used to render smooth color produced by four different light sources.

Fig. 2. Overview of our algorithm.

The rest of this paper is organized as follows: in Section 2 we review related work on pencil drawing techniques. In Section 3 we suggest how to compute smoothly varying directions from a 3D mesh. We show how to create noise particles in Section 4, and then we explain the convolution algorithm in Section 5. We present our results in Section 6 and draw conclusions as well as suggest directions for future work in Section 7.

2. Related work

2.1. Pencil rendering from 3D models

Schemes to produce monochrome pencil drawings or similar effects on 3D meshes have been proposed by many researchers [10,14–16,19,25,29,33,35].

In an early work, Elber [10] presented a line illustration scheme applicable to parametric and implicit surfaces. By approximating a small differential area of a surface, this scheme generates a uniform point distribution on the surface. At each point, a matching scheme along the gradient direction of the surface is executed to produce line illustration effects. This scheme varies the widths and lengths of the lines on the surface to present various line illustration styles. However, its application is limited to mathematical surfaces. Moreover, the scheme cannot produce pencil stroke effects.

Hertzmann and Zorin [14] proposed a method of producing line illustrations from 3D triangular meshes. They segment a 3D mesh into several regions, and the principal or isophote directions are used to produce hatches on each region. The hatches are equally spaced smooth lines. Parallel and orthogonal hatches depict the tone of the mesh. They also present a duality-based method for computing and segmenting silhouettes. The result is an illustration of a 3D mesh with smooth, long and equally spaced parallel or orthogonal lines.

Lake et al. [15] produced pencil textures of different tones by overlapping five types of pencil stroke in different densities. Therefore, the shading of objects is represented by the tone of the pencil stroke textures. However, they do not consider the direction of the pencil strokes. All the pencil textures are applied to the mesh in a uniform direction.

Praun et al. [16] developed a tonal art map to produce thick and short pen-and-ink styled hatching strokes on a 3D mesh in real time. They overlapped a simple hatching stroke with various densities to create a series of textures and then applied these textures to a mesh. Webb et al. [19] extended this technique to achieve hatching effects that give finer control of tone. In both studies, the details of the tone of an object are depicted through the density of the hatching texture. However, these schemes do not consider the salient features of the objects. Consequently, they have limitations in depicting details and salient features of a mesh.

Zander et al. [25] introduced another hatching scheme for 3D triangular meshes, in which smooth and evenly distributed streamlines are derived from the mesh along the principal directions. The results of the hatching process are stored as vectors and are hence independent of resolution. These authors also consider the tapering of the hatching lines, which increases the realism of the results.

Lee et al. [29] rendered a 3D mesh in pencil drawing style by applying pencil textures of various tones in the principal directions. The tonal map of pencil textures is generated by overlapping a single pencil stroke texture with a decremental method. Kim et al. [33] extended this scheme to produce monochrome line art illustrations of diverse styles on dynamic 3D objects with highly reflective surfaces. To this end, they introduced a real-time image-space algorithm for estimating principal directions from objects. Unfortunately, the authors do not provide an anchoring scheme that fixes the positions in the 3D mesh to the textures in the image space. This results in unwanted shower-door effects in their animations.

Paiva et al. [35] presented a fluid-based hatching scheme to render triangular meshes of arbitrary topology and complicated geometry. They used the smoothed particle hydrodynamics (SPH) approximation method to determine the density of the fluid on the surface of the mesh and placed hatching textures that follow the directions of the fluid. This scheme, however, does not consider how to preserve the temporal coherence of its effects during an animation. Furthermore, it does not include feature information in its noise generation and line illustration.

Umenhoffer et al. [37] presented a real-time hatching algorithm for dynamic 3D objects used in motion pictures. They produce a wide range of hatching styles, from very fine to very rough hatching. They control the density of the hatching patterns in the image space and preserve the temporal coherence by using particles generated on the meshes. Like other studies, their scheme does not consider the salient features of the objects.

Wang and Hu [38] proposed a user-assisted pencil drawing scheme on a 3D mesh. Their scheme involves a two-path approach. In the hatching path, they produce hatching patterns on a mesh according to the reference color and illumination. Users can edit their desired reference color patterns on the mesh. In the feature path, the contours and silhouettes of a mesh are extracted. By combining the results of both paths, pencil drawing effects on a 3D mesh are produced. Their scheme has limitations such as heavy user-assistance and the loss of temporal coherence.

We summarize the existing research as follows:

1. Most studies use texture overlapping schemes to produce pencil drawings or similar effects.

2. Most studies produce monochrome effects.

3. Most studies use principal directions as their stroke directions; however, they do not consider the use of feature curves extracted from the meshes.

2.2. Pencil rendering from images

Early attempts to produce pencil drawing effects in NPR were based on physical models, which simulate the physical properties of materials such as graphite and paper [11–13]. Murakami et al. [28] simulated drawing materials such as charcoal, crayon and pencil with a stroke-based scheme. The effect of a stroke depends on the paper texture, which they illuminated in 12 different directions. This approach can produce a limited range of colored graphite effects. More recently, Al Meraj et al. [34] captured artists' hand motions to mimic line drawing techniques with pencils. Unfortunately, these techniques are incapable of mimicking a wide range of drawing styles. They are also difficult to implement and require considerable computation.

Approaches based on individual strokes can create texture by overlapping strokes in directions that are either user-defined [9] or content-determined [31]. However, the amount of effort required to create strokes one at a time implies that these schemes are unsuitable for producing the large number of strokes needed for tonal hatching. Techniques based on individual strokes are, therefore, mainly used to produce line illustration effects such as the pen-and-ink effect [5,6]. However, Matsui et al. [27] have introduced a stroke-based algorithm that produces color pencil drawing effects. In their work, strokes are aligned along curves that are offset from edges detected in an input image. The color of each stroke is obtained by sampling the corresponding pixels in the image.

Line integral convolution (LIC) was originally developed to visualize flow embedded in an image by integrating random noise along the flow [4]. Mao et al. [17] used LIC to produce a pencil drawing effect by superimposing noise on an image and then integrating along a fixed direction. Li and Huang [20] extended this scheme by introducing a feature-guided approach, in which the features are gradient vectors at each pixel. Yamamoto et al. [23] improved Mao et al.'s scheme by rendering contours and modeling paper effects. They also overlapped the LIC results obtained from different layers of an image. Yamamoto et al. [24] proposed an LIC-based color pencil drawing scheme in which a user assigns two dominant colors to each region of an image. Noise is added and LIC is applied to each color separately; the two monochrome results are then combined by using the Kubelka-Munk method [2]. Xie et al. [32] have implemented an LIC scheme by using a graphics processing unit (GPU) and applied it to a video. Yang and Min [39] combined LIC with a determination of edge tangent flow (ETF) and produced smooth pencil drawing effects; however, this technique is poor at reproducing details and tone.

3. Preprocessing

In the preprocessing stage of our technique, we generate stroke directions in image space by projecting smooth principal directions on the mesh.


3.1. Estimating principal directions

There are various schemes for obtaining principal directions and curvatures [7,18,21,22,36]. Among them, we use the one developed by Rusinkiewicz [22], which provides the most stable and efficient way of estimating principal directions from a 3D triangular mesh.

3.2. Smoothing principal directions

We smooth the input mesh before estimating principal directions. Nevertheless, the resulting principal directions are not smooth enough to be used in the convolution. Therefore, we extend the image-space weighted averaging filter by Kang et al. [30] to the principal directions on the mesh-space in order to generate a flow of smooth principal directions. Our weighted averaging filter obtains the smoothed principal direction d'(v) at a vertex v as follows:

$$ d'(v) = \frac{1}{s}\left( d(v) + \frac{1}{n}\sum_{i=0}^{n-1} \alpha_i \beta_i \gamma_i \, d_v(w_i) \right), \qquad (1) $$

where s is a normalization term, d(v) is the principal direction at a vertex v, and w_i, i = 0, ..., n−1, are the n one-ring neighborhood vertices of v; also d_v(w_i) is the principal direction at w_i, which is projected onto the tangent plane at v. We will now explain the weight terms in Eq. (1) in detail: α_i is a correction term for reversed directions. Since the principal directions are not orientable, the principal directions of two neighboring vertices can have opposite directions. If d(v) · d(w_i) < 0, then α_i = −1; otherwise α_i = 1. The similarity term β_i = |d(v) · d(w_i)| gives increased influence to neighbors with similar principal directions. The curvature term γ_i = (κ(w_i) − κ(v) + 1)/2 increases the influence of the vertices with greater curvature. The normalized curvature κ is defined as (k − k_m)/(k_M − k_m), where k_m and k_M respectively are the minimum and the maximum principal curvatures over all the vertices on the mesh. Since κ is in the range 0–1, γ_i is too. We repeat the smoothing process several times to achieve a sufficiently smooth principal flow. The effect of this process is shown in Fig. 4.

Fig. 4. Pencil effects obtained by using (a) raw and (b) smoothed principal directions.
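To make the filter concrete, the following is a minimal Python/NumPy sketch of one pass of Eq. (1) over all vertices. The helpers one_ring and project_to_tangent are hypothetical, and reading the normalization term s as a renormalization to unit length is our assumption, not the authors' stated implementation.

```python
import numpy as np

def smooth_directions(d, kappa, one_ring, project_to_tangent):
    """One pass of the weighted averaging filter of Eq. (1).

    d[v]     -- unit principal direction at vertex v (array of 3-vectors)
    kappa[v] -- normalized curvature of v, in [0, 1]
    one_ring(v)              -- indices of the one-ring neighbors of v
    project_to_tangent(w, v) -- d[w] projected onto the tangent plane at v
    """
    d_new = np.empty_like(d)
    for v in range(len(d)):
        nbrs = one_ring(v)
        acc = d[v].copy()
        for w in nbrs:
            dvw = project_to_tangent(w, v)
            alpha = -1.0 if np.dot(d[v], dvw) < 0 else 1.0   # reversal fix
            beta = abs(np.dot(d[v], dvw))                    # similarity term
            gamma = (kappa[w] - kappa[v] + 1.0) / 2.0        # curvature term
            acc += alpha * beta * gamma * dvw / len(nbrs)
        d_new[v] = acc / np.linalg.norm(acc)  # 1/s: restore unit length
    return d_new
```

Repeating smooth_directions several times yields the sufficiently smooth principal flow described above.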

3.3. Estimating the smoothness

We define the smoothness of the flow on a 3D mesh by averaging the smoothness of the flow at each vertex. The smoothness s(v) of the flow at a vertex v is defined by the following formula:

$$ s(v) = \frac{1}{n}\sum_{i=1}^{n} \left( 1 - |p_v \cdot p_i| \right), $$

where i = 1, ..., n indexes the one-ring neighbor vertices of v. We define S(V), the smoothness of the flow over the vertices of a mesh, as the average of s(v) for all v's in V. As the smoothing process executes, S(V) decreases. In Fig. 3, we present the results of the smoothing process for three models.

Fig. 3. The results of the smoothing process: (a) David, (b) Bunny and (c) Dragon.
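As a sketch, the smoothness measure can be evaluated directly from the direction field; we assume p_v and p_i denote the unit principal directions at v and at its one-ring neighbors:

```python
import numpy as np

def flow_smoothness(d, one_ring):
    """S(V): the average over all vertices of
    s(v) = (1/n) * sum_i (1 - |p_v . p_i|).  Smaller means smoother."""
    s = []
    for v in range(len(d)):
        nbrs = one_ring(v)
        s.append(np.mean([1.0 - abs(np.dot(d[v], d[w])) for w in nbrs]))
    return float(np.mean(s))
```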



3.4. Projecting the directions

In projection, we locate the position on the mesh that will be projected into a pixel in the image space. The smooth direction at this position is estimated using barycentric coordinates. We then project the estimated direction by multiplying it by the projection matrix.
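The text projects a direction by multiplying it by the projection matrix. A common, numerically robust reading, sketched below under our own assumptions, is to project two nearby points and normalize their difference in screen space:

```python
import numpy as np

def project_direction(p, d, view_proj, eps=1e-3):
    """Project a surface point p and its (unit) direction d into a 2D
    stroke direction, using a 4x4 combined view-projection matrix."""
    def to_screen(x):
        h = view_proj @ np.append(x, 1.0)   # homogeneous transform
        return h[:2] / h[3]                 # perspective divide
    v = to_screen(p + eps * d) - to_screen(p)
    return v / (np.linalg.norm(v) + 1e-12)  # unit 2D stroke direction
```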

Fig. 5. The relations between the noise types and the corresponding styles: (a) tone noise (top), feature noise (middle) and color noise (bottom); (b) their corresponding styles: tone-depicting style (top), feature-conveying style (middle), and color-depicting style (bottom) and (c) feature noise is used to emphasize the tone-depicting style (upper) and color-depicting style (lower). Note that the noise value in (a) is represented using a decrement model.

Fig. 6. How noise particles are used in the pencil drawing process. During the rendering of each frame, the projected area is calculated to determine the number of particles to project. Then the noise values are computed. When the correct noise values have been computed, we can project the particles into image space.

4. Noise

To express the style of pencil drawing using noise, we introduce three types of noise values: feature noise is used to convey the salient features of a mesh; tone noise is used in depicting the intensity; and color noise is used for color. Fig. 5 shows the relations between the types of noise and their corresponding styles. We describe how to determine the proper number of noise particles to project and how to compute the noise value at each rendering step. Fig. 6 illustrates the overall process of noise control. We also explain how we preserve the temporal coherence of the pencil drawing when managing the distribution of noise particles and computing the noise value of each particle.

4.1. Controlling the number of noise particles

4.1.1. Initial distribution of noise particles

At the initial distribution, we generate a maximum number of noise particles on a face using the dart throwing algorithm [1], recording the order of generation (see Fig. 7(a)). If the area of a face f is A_f, then the maximum number of noise particles n_max is determined using the following formula:

$$ n_{max} = k_{max} A_f + n_{base}, $$

where k_max is a control parameter, and n_base ensures that a face for which k_max A_f is less than 1 receives a minimum number of particles. We set n_base = 3 and k_max = 4. k_max denotes the maximum scale of zoom-in for the mesh.

The dart throwing algorithm generates random points sequentially, discarding a new point if it is within a certain distance of a previously generated point. Our scheme is similar to that of McCool and Fiume [3], which gradually reduces the threshold distance as more points are placed. The dart throwing algorithm determines coordinates (x_i, y_i) within a unit right triangle, which are converted into a position on a face with vertices A, B and C by using barycentric coordinates.

Fig. 7. (a) All the particles on a face are indexed by their order of generation. In this example, we distribute 18 particles and (b) among the particles generated, we project the first n particles to use in the convolution. In this example, n = 8, and the particles shown in red are projected. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

Fig. 8. Features conveyed with different values of k_d and k_i: (a) k_d varies from 1 (leftmost image) to 4 (rightmost image) and (b) k_i varies from 1 (leftmost image) to 4 (rightmost image).
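A minimal sketch of the ordered dart throwing in barycentric coordinates, in the spirit of McCool and Fiume [3]; the initial radius r0, the shrink factor and the retry count are illustrative assumptions:

```python
import math
import random

def dart_throw_on_face(n_max, r0=0.25, shrink=0.9, tries=200):
    """Generate up to n_max Poisson-disk samples in the unit right
    triangle, recording their order of generation; the threshold
    distance is relaxed as the face fills up."""
    pts = []
    r = r0
    while len(pts) < n_max:
        for _ in range(tries):
            u, w = random.random(), random.random()
            if u + w > 1.0:                    # reflect into the triangle
                u, w = 1.0 - u, 1.0 - w
            if all(math.hypot(u - pu, w - pw) >= r for pu, pw in pts):
                pts.append((u, w))             # list index = generation order
                break
        else:
            r *= shrink                        # no room at radius r: relax it
    return pts  # map to the face as P = u*A + w*B + (1 - u - w)*C
```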

4.1.2. Determining the number of noise particles to project

The number of noise particles to project is determined by calculating the projected area P_f of a face f using the current projection matrix at each rendering step. We suggest that at least one noise particle should be projected into every pixel; thus we need to project at least P_f particles onto a face. Experiments suggest that this scheme fills at least 85% of the pixels of the image space occupied by the model. The number of noise particles that we actually project is k_f P_f, where k_f is a control factor. We discard any subsequent noise particle projected into a pixel that already contains a particle. With k_f set to 1.1, more than 90% of the pixels are filled, and this produces convolution effects of adequate quality.

The number n of noise particles to project from a face f is estimated using the following formula:

$$ n = \begin{cases} 1 & \text{if } k_f P_f < 1, \\ k_f P_f & \text{else if } k_f P_f < n_{max}, \\ n_{max} & \text{otherwise.} \end{cases} $$
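This rule reads as a small clamp; a sketch, assuming P_f is the projected face area in pixels and n_max comes from the initial distribution above:

```python
def particles_to_project(P_f, n_max, k_f=1.1):
    """Number of noise particles to project from a face (Section 4.1.2)."""
    n = k_f * P_f
    if n < 1.0:
        return 1               # every visible face contributes one particle
    return min(int(n), n_max)  # never exceed the particles generated on the face
```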

4.1.3. Preserving temporal coherence

During a rendering process, the first n noise particles of the distribution are projected for the convolution in image space (see Fig. 7). The first n particles also preserve a Poisson-disk distribution, since we generate the particles using the dart throwing algorithm, which guarantees a Poisson-disk distribution for any prefix of the particles. This strategy, which minimizes the change of the particles on a face during a rendering process, is effective in preserving temporal coherence.

4.2. Computing the noise value

4.2.1. Feature noise

Feature noise ω_f is black or white, and depends on the extent to which a feature is present: this may be a view-dependent feature such as a contour, or a view-independent feature such as a ridge or valley. When features overlap, it is natural to draw the stronger one. Therefore, the stronger feature determines the feature value of that face.


Fig. 9. We visualize the process of computing the value of feature noise as follows: (a) the intensity inside this triangle represents the feature strength of the face, which is max(u, w), and the intensity inside a particle represents α; (b) the center figure shows how ω_f is computed: if the tone inside a particle is brighter than that of the triangle, α < max(u, w), then ω_f is 1, but otherwise 0. The α–ω_f graph is presented: if α < ω₀, ω_f = 1, otherwise 0. Increasing the strength of the feature from ω₀ to ω₁ is presented in the right figure; the ω_f of the red particles whose α satisfies ω₀ ≤ α ≤ ω₁ is changed from 0 to 1. Reducing the strength of the feature from ω₀ to ω₁ is presented in the left figure; the ω_f of the red particles whose α satisfies ω₁ ≤ α ≤ ω₀ is changed from 1 to 0. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

The strength of a view-dependent feature for a face is computed as follows:

$$ u = (1 - |n \cdot v|)^{k_d}, \qquad (2) $$

where n is the vector normal to the face, v is the view vector, and k_d is a parameter that controls the range of this effect (see Fig. 8(a)). The strength w of a view-independent feature for a face is estimated using the maximum principal curvature κ₁, which is the average of the principal curvatures of the vertices, as follows:

$$ w = |g(\kappa_1)|^{k_i}, \quad \text{where } g(x) = \begin{cases} sx & \text{if } sx < 1, \\ 1 & \text{otherwise,} \end{cases} \qquad (3) $$

where k_i is a control parameter whose effect is illustrated in Fig. 8(b). Since the magnitude of the principal curvature is likely to vary widely across a mesh, we apply the scaling and clamping function g(·) to improve the consistency of the results. The scaling factor s is given in Fig. 13.

We determine the value of feature noise, as illustrated in Fig. 9, by assigning a characteristic value α to each noise particle. This α is obtained from white noise in the range (0, 1). The feature noise value ω_f is computed from α as follows:

$$ \omega_f = \begin{cases} 1 & \text{if } \alpha < \max(u, w), \\ 0 & \text{otherwise.} \end{cases} $$

Changing the strength of a feature causes the noise value to be updated, which may result in the loss of temporal coherence. To preserve the temporal coherence, we have to minimize the number of particles whose values change, and the above formula is effective in accomplishing this purpose: if max(u, w), the strength of the feature, changes from ω₀ to ω₁, only the noise particles for which ω₀ ≤ α ≤ ω₁ change their values. This process is illustrated in Fig. 9(b).
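For one particle, the feature-noise test of Eqs. (2) and (3) reduces to a few lines; the default parameter values here are illustrative, and the scaling factor s is the per-model constant from Fig. 13:

```python
import numpy as np

def feature_noise(alpha, normal, view, kappa1, s, k_d=2.0, k_i=2.0):
    """Binary feature noise: alpha is the particle's fixed white-noise
    value in (0, 1); normal and view are unit 3-vectors."""
    u = (1.0 - abs(np.dot(normal, view))) ** k_d   # view-dependent, Eq. (2)
    g = min(s * abs(kappa1), 1.0)                  # scale and clamp
    w = g ** k_i                                   # view-independent, Eq. (3)
    return 1.0 if alpha < max(u, w) else 0.0
```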

4.2.2. Tone noise

The value of tone noise ω_t is determined by the intensity of the light reaching the face at the position of the particle. Tone noise is grayscale, and its value is determined by perturbing the local intensity t, which is obtained from the intensities at the vertices of the face using barycentric interpolation. This perturbation is produced by β, which is white noise in the range (−δ, δ); we set δ to 0.4. The perturbed value of the tone noise is ω_t = t + β. This process is illustrated in Fig. 10.

When the intensity changes from t₀ to t₁ with t₁ > t₀, the mean of the distribution is shifted from t₀ to t₁; this is the left case of Fig. 10(b). The case t₁ < t₀ is illustrated on the right. In both cases, the shape of the distribution is maintained, and the temporal coherence of the pencil drawing effects produced by the noise is therefore preserved.

4.2.3. Color noise

The value of a color noise (ω_r, ω_g, ω_b) is determined by the color of the light or a designated color at the position of the particle. Each channel of the color noise is grayscale, and its value is estimated as a perturbation of a base value, which is obtained by barycentric interpolation of the colors at the vertices of the face:

$$ (\omega_r, \omega_g, \omega_b) = (r + \beta,\; g + \beta,\; b + \beta). $$

Temporal coherence of the effect of the color noise is obtained in a similar way to that of tone noise.
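Tone and color noise share the same perturbation; a sketch follows. In the paper the offset β belongs to the particle and stays fixed across frames for coherence, so in practice it would be generated once and stored with the particle rather than redrawn every frame:

```python
import random

def make_beta(delta=0.4):
    """White-noise offset in (-delta, delta), stored once per particle."""
    return random.uniform(-delta, delta)

def tone_and_color_noise(t, rgb, beta):
    """Perturb the barycentrically interpolated intensity t and color rgb."""
    omega_t = t + beta
    omega_rgb = tuple(c + beta for c in rgb)
    return omega_t, omega_rgb
```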

Fig. 10. We visualize the process of determining the tone noise value as follows: (a) the intensity inside this triangle represents the base value t₀, and the intensity inside each particle represents β, which is in the range (−δ, δ) and (b) for each particle, ω_t is determined by adding its base value to its β. Red particles have positive values and blue have negative values. The β–ω_t graph for (b) illustrates the relation between β and ω_t. Decreasing the tone from t₀ to t₁ is illustrated in the right figure; the value of the tone noise becomes darker. Increasing the tone from t₀ to t₁ is illustrated in the left figure; the value of the tone noise becomes brighter. The β–ω_t graphs show that the shape of the distribution of ω_t is preserved. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

5. Convolution for pencil drawing

5.1. Convolution

The convolution we use to produce pencil drawing effects is similar to line integral convolution (LIC), which was originally designed to visualize vector fields embedded in an image [4]. In LIC, the convolution value L(x) at a pixel x can be estimated by summing the intensities of the noise in the pixels that lie in the direction of integration, as follows:

$$ L(x, d) = \frac{1}{N} \int_{-l}^{+l} I(y)\, dt, $$

where y = x + t d, the range of the integration is (−l, l), its direction is d, N is the number of pixels containing a projected noise particle, and I(·) is the value of the noise particle in a pixel.

We improve this formula in three ways, in order to achieve (i) control of the integration range, (ii) curvature-dependent perturbation, and (iii) discrete directions of integration. Our new convolution formula is

$$ L(x, d) = \frac{1}{N} \int_{-l}^{+l} I(y)\, h(I(x) - I(y), T_l)\, dt, \qquad (4) $$

where y = x + t R(q(d), θ), q(·) denotes the discretization of the direction, R(d, θ) denotes the perturbation, and h(·) controls the range of the integration. We maintain l at 20 for integrating feature noise, and modify l according to the mesh for integrating tone and color noise. The values of l are given in Fig. 13. In this new equation, N is again the number of pixels containing projected noise particles. To produce a pencil drawing, we perform individual convolutions for each type of noise and merge the results.
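A discrete, per-pixel reading of Eq. (4); the NaN convention for empty pixels and the rounding of sample positions are implementation assumptions:

```python
import numpy as np

def convolve(noise, x, y, direction, l=20, T_l=0.25):
    """Average the noise along +/- l pixels in `direction`, where h(.)
    drops samples whose value differs from the center by more than T_l.
    `noise` holds projected noise values per pixel; NaN marks pixels
    that received no particle."""
    h_img, w_img = noise.shape
    center = noise[y, x]
    if np.isnan(center):
        center = 0.0          # assumption: treat an empty center as background
    total, count = 0.0, 0
    for t in range(-l, l + 1):
        xi = int(round(x + t * direction[0]))
        yi = int(round(y + t * direction[1]))
        if not (0 <= xi < w_img and 0 <= yi < h_img):
            continue          # integration curve left the image
        val = noise[yi, xi]
        if np.isnan(val) or abs(center - val) > T_l:
            continue          # empty pixel, or h(I(x) - I(y), T_l) = 0
        total += val
        count += 1
    return total / count if count else 0.0
```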

5.2. Control of the integration range

LIC integrates noise along curves which may cross the boundaries of objects, and these boundaries can then appear smeared in the resulting rendering. To avoid this, many existing techniques segment the image and then execute an independent LIC process on each region [17,23,32]. We address this problem by changing l in response to differences in intensity. If the difference between I(x), the intensity at x, and I(x + t d), the intensity at x + t d, is greater than a predefined threshold T_l, then we exclude the noise at x + t d when computing L(x). We set T_l to 0.25. This strategy is implemented using a function h(x, T_l) as follows:

$$ h(x, T_l) = \begin{cases} 1 & \text{if } x \le T_l, \\ 0 & \text{otherwise.} \end{cases} $$

5.3. Curvature-dependent perturbation

The direction of integration d can be perturbed to produce different pencil drawing effects. This perturbation is achieved by using a rotation function R(d, θ) (see Fig. 11(a)), where θ, the amount of perturbation, is determined by the magnitude of the principal curvatures of the mesh:

$$ \theta = \begin{cases} \delta \left( 2 - \dfrac{|\kappa_1| + |\kappa_2|}{T_c} \right) \theta_0 & \text{if } |\kappa_1| + |\kappa_2| < T_c, \\ \theta_0 & \text{otherwise,} \end{cases} \qquad (5) $$

where θ₀ is the default perturbation, T_c is a threshold, and δ is a control parameter. We maintain θ₀ at 5° for feature noise, but apply different values for tone and color noise. The values of θ₀ are given in Fig. 13. This results in a smoother tonal depiction of a mesh (see Fig. 11(b)). We set T_c to 0.05 and δ to 2.

Fig. 11. Perturbation of the integration direction: (a) θ = 0°, 15°, 30°, 45° (from left to right) and (b) varying the perturbation in inverse proportion to the curvature removes unwanted stroke patterns.

Fig. 12. A comparison of the discretization: (a) smooth flow, (b) m in Eq. (6) = 2, (c) m = 3 and (d) m = 6.
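Eq. (5) as a small function; angles are in degrees, and the defaults are the feature-noise settings quoted above:

```python
def perturbation_angle(kappa1, kappa2, theta0=5.0, T_c=0.05, delta=2.0):
    """Curvature-dependent perturbation of the integration direction,
    Eq. (5): flatter regions (small |k1| + |k2|) receive a larger
    perturbation, which breaks up the regular stroke patterns there."""
    c = abs(kappa1) + abs(kappa2)
    if c < T_c:
        return delta * (2.0 - c / T_c) * theta0
    return theta0
```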

Fig. 13 lists the parameters we use in the convolution: the integration length l, the perturbation range θ₀, and the threshold T_c.

5.4. Discretizing the direction

In mimicking artistic pencil drawing techniques, straight pencil strokes may be preferred to curved strokes. We can produce straight strokes by discretizing the smooth principal flow into a small number of directions. Each of these directions is represented by a scalar d̃, which is an angle in the range (0°, 180°). The level of discretization is m, and {d̃₀, ..., d̃_{m−1}} are the discrete directions.

The discretization of the convolution effect L(x, d), denoted L̄(x, d), is computed using the following formula:

$$ \bar{L}(x, d) = L(x, \tilde{d}) = \frac{\tilde{d}_i - \tilde{d}}{\tilde{d}_i - \tilde{d}_{i-1}}\, L(x, \tilde{d}_{i-1}) + \frac{\tilde{d} - \tilde{d}_{i-1}}{\tilde{d}_i - \tilde{d}_{i-1}}\, L(x, \tilde{d}_i), \qquad (6) $$

where d̃ satisfies d̃_{i−1} < d̃ < d̃_i, so that the two convolutions along the bracketing discrete directions are blended linearly. Fig. 12 illustrates the results of the discretization.
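A sketch of the blending in Eq. (6), assuming the m discrete directions are evenly spaced over (0°, 180°); L is a callable that evaluates the convolution of Eq. (4) along a given angle:

```python
def discretized_convolution(L, x, d_angle, m):
    """Blend the convolutions along the two discrete directions that
    bracket d_angle (in degrees), following Eq. (6)."""
    step = 180.0 / m
    i = int(d_angle // step) + 1          # so that d_lo < d_angle <= d_hi
    d_lo, d_hi = (i - 1) * step, i * step
    t = (d_angle - d_lo) / (d_hi - d_lo)  # linear blending weight
    return (1.0 - t) * L(x, d_lo) + t * L(x, d_hi)
```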

6. Implementation and results

We implemented our algorithm on a PC with a Pentium i7 CPU and 4 GB of main memory. Our software environment includes Microsoft Visual Studio 2010 with the OpenGL libraries. The graphics processing unit of the PC is an nVidia GTX 460. To accelerate the performance, we have implemented our algorithm using CUDA, the GPU-based programming environment supported by nVidia.

We applied our algorithm to various models and obtained the pencil drawings shown in Figs. 14–16. These illustrate three pencil drawing styles: a feature-conveying style (Fig. 14), a monochrome tone-depicting style (Fig. 15) and a color-depicting style (Fig. 16). The parameters used and the details of the input models are listed in Fig. 13.

The feature noise clearly enhances the tone-based or color-based depiction of shape. This is also shown in Fig. 17.

Fig. 13. Models and results: mesh data (green columns), parameters (violet columns) and computation time (orange columns). (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

Fig. 14. Results in a feature-emphasizing style: (a) view-independent features are emphasized more than view-dependent ones, (b) view-dependent features are emphasized more than view-independent ones, (c) and (d) both types of features are of similar strength.

Fig. 15. Results in a monochrome tone-depicting style: (a) cross-hatching with discrete stroke directions, (b) uni-directional hatching with smooth stroke directions, (c) uni-directional hatching with highly perturbed smooth stroke directions and (d) cross-hatching with smooth stroke directions.

6.1. Comparison with other schemes

6.1.1. Overall comparison

We compare our scheme to existing schemes on the following points and highlight our contributions.

1. Most existing schemes that render a 3D mesh in pencil drawing or similar styles aim to mimic the tone of the mesh by overlapping textures of various styles, such as thin and long pencil strokes [15], thick and short pen-and-ink hatching strokes [16,19], perturbed pencil hatching strokes [29], line illustrations through strokes of various thicknesses and lengths [33], and hatching patterns of various thicknesses [37]. Instead of applying textures, our scheme uses a convolution that produces pencil drawing effects in image space by using the noise and directions computed on the mesh. This strategy enables us to produce a range of pencil drawing styles, such as the color pencil and feature-conveying styles, which have not been produced by existing 3D mesh schemes.

2. Most existing studies concentrate on depicting the tone of a 3D mesh. They do not provide a scheme that presents the salient shape of a 3D mesh in pencil drawing style. We present a scheme that conveys both view-independent and view-dependent features in pencil drawing style. In cooperation with the tone-depicting and color pencil drawing styles, this scheme emphasizes the salient features of objects effectively.

3. The existing convolution-based volume rendering schemes that produce hatching strokes [8] or painting effects [26] execute the convolution in the 3D volume space by using the direction fields and densities defined in each voxel. Our scheme executes convolution in a 2D image space by using the directions and noise generated on the faces of a 3D mesh. We also present a scheme that generates and controls the noise on the faces of a 3D mesh so that the temporal coherence of the pencil drawing effects produced by convolution in 2D is preserved.

6.1.2. Visual comparison

We also compare our scheme with Lee et al.'s scheme [29], which presents monochrome pencil rendering effects on a 3D mesh, in Fig. 18. For the comparison, we prepare models similar to those used in [29]. Among the styles we produce, the tone-depicting style is similar to the results from [29]. Our scheme produces various styles that cannot be produced by Lee et al.'s scheme.

In comparing our results with those from [29] for the grenade model, we notice an interesting difference in the determination of stroke directions. Since raw principal directions are used in [29], the stroke directions at flat triangles, such as those at the center of the lower image, become uniform. Our scheme, which uses the smoothed principal directions, determines smooth stroke directions at flat triangles. For the Venus model, our scheme produces denser stroke patterns for enlarged models, while Lee et al.'s scheme produces thicker stroke patterns. Note that our scheme controls the density of the noise distribution according to the scaling of the mesh, which produces the denser stroke patterns.

Fig. 16. Results in a color-depicting style: (a) cross-hatching with smooth stroke directions, (b) uni-directional hatching with smooth stroke directions, (c) uni-directional hatching with discrete stroke directions and (d) cross-hatching with discrete directions. In (a) there are yellow specular and diffuse reflections. In (b)–(d) there are only diffuse reflections. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

Fig. 17. Enhancement of a color pencil drawing result by features: (a) color noise alone and (b) color and feature noise.

6.1.3. Comparison to hand drawn images

We compare our result images with the color pencil drawings created by an artist in Fig. 19. The purpose of this comparison is to show that our scheme can mimic artistic color pencil drawings. Unfortunately, many factors influence artistic pencil drawings. For a proper comparison, we attempt to set these factors to match our scheme. Therefore, we show the artist the directions of convolution, which correspond to the stroke directions, and ask him to follow them. Furthermore, we request that he use pencils with sharp tips, which correspond to the kernel of the convolution filter. With these restrictions, our scheme can produce results similar to artistic pencil drawings.

Fig. 18. Comparison of our work with Lee et al.'s [29]. The images in the leftmost column are from [29] and the right images are rendered by our scheme. We present three styles for the grenade model and two styles for the Venus model.

Fig. 19. Comparison of our results with pencil drawings by artists. The thumbnail images are given to the artists. The left image is hand-drawn and the right one is ours: (a) David and (b) Dragon.

The artist required 12 h to draw Fig. 19(a) and 13 h to draw Fig. 19(b).

Even though the hand drawn images and our results look similar, there are still some differences between them, which reflect the limitations of our scheme. The most significant difference is the contrast of the tone: in the hand drawn images, the difference between the bright tone and the dark tone is exaggerated to present the artistic style. Our scheme cannot reproduce this exaggeration. Another difference is the stroke patterns: in the hand drawn image, the stroke patterns vary according to the content of the drawing, while ours preserves similar stroke patterns over the whole content of the drawing. In future work, we will address these limitations to present an improved scheme that mimics artistic pencil drawing techniques.

6.2. Drawbacks

Drawbacks of our scheme are as follows:

1. Like other convolution-based schemes, our scheme has limitations in controlling the stroke width. Fig. 20 shows some of our attempts at controlling the stroke width by varying the size of the noise. A convolution with larger noise produces thicker stroke effects. Unfortunately, the thicker stroke effects look smeared.

2. The level of temporal coherence achieved by our scheme matches that achieved by existing techniques [29,33]. However, we have so far limited our experiments to rigid transformations of the models.

Fig. 20. Stroke widths of 1 in (a), 2 in (b) and 3 in (c). We can produce thicker strokes by using larger noise particles, but the stroke effects become compromised as the width increases.

7. Conclusions and suggested future work

We have presented a scheme for producing simulated pencil drawings from a 3D triangular mesh. Our aims were to produce different pencil drawing styles with visually pleasing qualities, while obtaining temporal coherence between the frames of an animation. We have achieved these aims by extending existing convolution schemes in a way that controls the generation of noise and stroke directions coherently. We generate noise particles of various styles and determine smoothly varying integration directions from the mesh to be rendered. These integration directions are projected into image space, where an extended convolution scheme is used to produce pencil strokes.

One of our future projects is to modify our scheme to run on a GPU. Our algorithm is largely localized, suggesting that parallel computation will make a dramatic improvement in performance. Another aim is to study artistic pencil drawing techniques more carefully in order to improve the realism of our scheme. We are also planning to add motion effects to pencil drawing animations by animating the noise particles. We expect to obtain interesting results by putting the pencil drawing simulation and motion effects into a common framework.

References

[1] Cook RL. Stochastic sampling in computer graphics. ACM Trans Graph 1986;5(1):51–72.

[2] Haase C, Meyer G. Modeling pigmented materials for realistic image synthesis. ACM Trans Graph 1992;11(4):305–35.

[3] McCool M, Fiume E. Hierarchical Poisson disk sampling distributions. In: Proceedings of Graphics Interface 92, 1992. p. 94–105.

[4] Cabral B, Leedom C. Imaging vector fields using line integral convolution. In: Proceedings of SIGGRAPH 1993, 1993. p. 263–70.

[5] Salisbury M, Anderson S, Barzel R, Salesin D. Interactive pen-and-ink illustration. In: Proceedings of SIGGRAPH 94, 1994. p. 101–8.

[6] Winkenbach G, Salesin D. Computer generated pen-and-ink illustration. In: Proceedings of SIGGRAPH 1994, 1994. p. 91–100.

[7] Taubin G. Estimating the tensor of curvature of a surface from a polyhedral approximation. In: Proceedings of ICCV 1995, 1995. p. 902–7.

[8] Interrante V. Illustrating surface shape in volume data via principal direction-driven 3D line integral convolution. In: Proceedings of SIGGRAPH 1997, 1997. p. 109–16.

[9] Salisbury M, Wong M, Hughes J, Salesin D. Orientable textures for image-based pen-and-ink illustration. In: Proceedings of SIGGRAPH 1997, 1997. p. 401–6.

[10] Elber G. Line art illustrations of parametric and implicit forms. IEEE Trans Vis Comput Graph 1998;4(1):71–81.

[11] Sousa MC, Buchanan J. Computer-generated graphite pencil rendering of 3D polygonal models. In: Proceedings of Eurographics 1999, 1999. p. 195–207.

[12] Sousa MC, Buchanan J. Observational model of blenders and erasers in computer-generated pencil rendering. In: Proceedings of Graphics Interface 1999, 1999. p. 157–66.

[13] Takagi S, Nakajima M, Fujishiro I. Volumetric modeling of colored pencil drawing. In: Proceedings of Pacific Graphics 1999, 1999. p. 250–8.

[14] Hertzmann A, Zorin D. Illustrating smooth surfaces. In: Proceedings of SIGGRAPH 2000, 2000. p. 517–26.

[15] Lake A, Marshall C, Harris M, Blackstein M. Stylized rendering techniques for scalable real-time 3D animation. In: Proceedings of NPAR 2000, 2000. p. 13–20.

[16] Praun E, Hoppe H, Webb M, Finkelstein A. Real-time hatching. In: Proceedings of SIGGRAPH 2001, 2001. p. 579–84.

[17] Mao X, Nagasaka Y, Imamiya A. Automatic generation of pencil drawing using LIC. In: Proceedings of ACM SIGGRAPH 2002 abstractions and applications, 2002. p. 149.

[18] Meyer M, Desbrun M, Schroder P, Barr A. Discrete differential-geometry operators for triangulated 2-manifolds. In: Proceedings of Visualization and Mathematics III, 2002. p. 35–57.

[19] Webb M, Praun E, Finkelstein A, Hoppe H. Fine tone control in hardware hatching. In: Proceedings of NPAR 2002, 2002. p. 53–8.

[20] Li N, Huang Z. A feature-based pencil drawing method. In: Proceedings of the 1st international conference on computer graphics and interactive techniques in Australasia and South East Asia, 2003. p. 135–40.

[21] Goldfeather J, Interrante V. A novel cubic-order algorithm for approximating principal direction vectors. ACM Trans Graph 2004;23(1):45–63.

[22] Rusinkiewicz S. Estimating curvatures and their derivatives on triangular meshes. In: Proceedings of the symposium on 3D data processing, visualization and transmission, 2004. p. 486–93.

[23] Yamamoto S, Mao X, Imamiya A. Enhanced LIC pencil filter. In: Proceedings of the international conference on computer graphics, imaging and visualization, 2004. p. 251–6.

[24] Yamamoto S, Mao X, Imamiya A. Colored pencil filter with custom colors. In: Proceedings of Pacific Graphics 2004, 2004. p. 329–38.

[25] Zander J, Isenberg T, Schlechtweg S, Strothotte T. High quality hatching. Comput Graph Forum 2004;23(3):421–30.

[26] Lee J. Volume painting: incorporating volumetric rendering with line integral convolution. Master's thesis, Texas A&M University; 2005.

[27] Matsui H, Johan H, Nishita T. Creating colored pencil images by drawing strokes based on boundaries of regions. In: Proceedings of Computer Graphics International 2005, 2005. p. 148–55.

[28] Murakami K, Tsuruno R, Genda E. Multiple illuminated paper textures for drawing strokes. In: Proceedings of Computer Graphics International 2005, 2005. p. 156–61.

[29] Lee H, Kwon S, Lee S. Real-time pencil rendering. In: Proceedings of NPAR 2006, 2006. p. 37–45.

[30] Kang H, Lee S, Chui C. Coherent line drawing. In: Proceedings of NPAR 2007, 2007. p. 43–50.

[31] Melikhov K, Tian F, Xie X, Seah HS. DBSC-based pencil style simulation for line drawings. In: Proceedings of the 2006 international conference on game research and development, 2006. p. 17–24.

[32] Xie D, Zhao Y, Xu D, Yang X. Convolution filter based pencil drawing and its implementation on GPU. Lecture Notes in Computer Science, vol. 4847, 2007. p. 723–32.

[33] Kim Y, Yu J, Yu H, Lee S. Line-art illustration of dynamic and specular surfaces. ACM Trans Graph 2008;27(5), Article No. 156.

[34] Al Meraj Z, Wyvill B, Isenberg T, Gooch A, Richard G. Automatically mimicking unique hand-drawn pencil lines. Comput Graph 2009;33(4):496–508.

[35] Paiva A, Brazil E, Petronetto F, Sousa M. Fluid-based hatching for tone mapping in line illustrations. Vis Comput 2009;25(5–7):519–27.

[36] Min K. Estimating principal properties on triangular meshes. In: Proceedings of ICHIT, 2011. p. 614–21.

[37] Umenhoffer T, Szecsi L, Szirmay-Kalos L. Hatching for motion picture production. Comput Graph Forum 2011;30(2):533–42.

[38] Wang N, Hu B-G. IdiotPencil: an interactive system for generating pencil drawings from 3D polygonal models. In: Proceedings of the international conference on computer-aided design and computer graphics (CAD/Graphics), 2011. p. 367–74.

[39] Yang H, Min K. Feature-guided convolution for pencil rendering. KSII Trans Internet Inf Syst 2011;5(7):1311–28.