Mesh Color Textures

Cem Yuksel

University of Utah
[email protected]

Figure 1: Example models with mesh color textures we used in our tests. The mesh color values are sampled from 2D texture maps. The images are generated on the GPU using our trilinear filtering method with our 4D texture coordinate representation.

ABSTRACT

The fundamental limitations of texture mapping have been a long-standing problem in computer graphics. The cost of defining and maintaining texture coordinates and the seams that introduce various filtering inconsistencies lead some graphics applications to adopt alternative techniques that directly address these problems, such as mesh colors. However, alternatives to texture mapping introduce run-time costs that conflict with the performance constraints of real-time graphics applications. In this paper we introduce mesh color textures, which offer all benefits of mesh colors to real-time graphics applications with strict performance constraints. Mesh color textures convert the mesh color data to a format that can be efficiently used by the texture filtering hardware on current GPUs. Utilizing a novel 4D texture coordinate formulation, mesh color textures can provide correct filtering for all mipmap levels and eliminate artifacts due to seams. We show that mesh color textures introduce negligible run-time cost with no discontinuity in texture filtering. We also discuss potential future modifications to graphics hardware and APIs that would further simplify the use of mesh color textures in real-time graphics applications.

HPG 2017, July 28-30, Los Angeles, CA, USA
© 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery.
This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of HPG 2017, July 28-30, https://doi.org/10.1145/3105762.3105780.

CCS CONCEPTS

• Computing methodologies → Texturing;

KEYWORDS

Mesh colors, texture mapping, texture filtering

ACM Reference format:
Cem Yuksel. 2017. Mesh Color Textures. In Proceedings of HPG 2017, Los Angeles, CA, USA, July 28-30, 11 pages.
https://doi.org/10.1145/3105762.3105780

1 INTRODUCTION

In virtually all computer graphics applications, texture mapping is the essential means by which high-frequency surface details are represented. Textures typically store surface colors, but they can also be used for storing other information, such as various material properties, precomputed lighting, surface normals, and displacement. Yet, texture mapping has severe fundamental flaws that inflate the manual labor cost, negatively impact the final image quality, and limit the use of advanced features like tessellation and displacement mapping. We group the problems of texture mapping into two categories: parameterization and filtering.

Parameterization is the process of defining a mapping from the model space to the (typically 2D) texture space. While researchers continue to develop ingenious ways to automate this process [Ray et al. 2010; Smith and Schaefer 2015; Tarini 2016], in practice it still requires extensive manual effort. Though the texture resolution on a part of the surface is often directly tied to its size, in reality the desired resolution also depends on the deformation that a surface undergoes through an animation sequence and the desired amount of detail an artist would like to add to a certain part of a model (for example, an artist may want to have a higher-resolution texture representation for the face of a character). Unfortunately, these require user intervention and preclude the extensive use of fully automated methods in practice. The parameterization process is further complicated by the fact that 3D model creation and texture preparation are virtually separated. Thus, texture mapping is often completely excluded from the 3D modeling process, and post-iterations on the model geometry and topology after texture mapping may require redefining the parameterization and subsequently repainting the texture (partially or completely). Therefore, parameterization has been a tedious process even for highly skilled and experienced artists.

The second group of problems relates to the filtering operations that are used for computing the value of a texture at a point (or an area) on the model surface. Barring limited special cases, seams are unavoidable with texture mapping, and it is extremely difficult, if possible at all, to make sure that the interpolated color values on either side of a seam agree. Thus, seams lead to visible discontinuities on the surface. The process of hiding seams is typically left to the artist who paints the texture and her ability to manually match the interpolated colors (and causes additional manual labor). Even though minute differences in color values can be tolerated due to the limitations of the human visual system, they can lead to unacceptable results when textures are used for geometric information, such as cracks on surfaces with displacement maps. In fact, this very problem was attributed as the primary reason that prevents widespread use of displacement maps in video games [Tatarchuk 2016], thereby limiting the use of the existing tessellation hardware on GPUs. Moreover, mipmapping, the primary method by which minification filtering is computed, is broken because of seams: manual efforts of hiding seams cannot work beyond a few mipmap levels (if that), as the mipmap levels store wrong information by filtering the texture values based on proximity in texture space, rather than model space. Also, texels near seams can be shared by unrelated parts of the model. Similarly, anisotropic filtering near seams can produce incorrect results, which are also unavoidable.

In offline computer graphics applications these severe problems of texture mapping can be completely avoided by using alternatives to texture mapping that directly address them, like mesh colors [Yuksel et al. 2010] or Ptex [Burley and Lacewell 2008]. However, these custom solutions require custom filtering operations that are not supported by current GPUs. Given that hardware texture filtering can be more than an order of magnitude faster than software implementations, custom texture filtering is entirely unacceptable for video games and other real-time graphics applications with performance constraints. In fact, Intel's Larrabee architecture [Seiler et al. 2008], which proposed removing almost all fixed-function units from the GPU, still kept the texture filtering unit for this very reason. Therefore, the problems of texture mapping are still sources of real concern for real-time computer graphics applications, and they continue to occupy a significant portion of artist time, which dominates the monetary cost of AAA video game production, as recently stated by developers from Bungie and Activision studios [Tatarchuk et al. 2015].

An ideal solution to the problems of texture mapping for real-time graphics applications must use the existing GPU hardware and should impose minimal run-time computation cost. In this paper, we introduce such a solution that converts mesh colors to mesh color textures with 4D encoded texture coordinates, a format that can be efficiently used on existing GPUs. We also provide solutions to the limitations of mesh colors and offer relatively simple modifications to graphics hardware and API that would further simplify the use of mesh color textures in real-time graphics applications. In particular, this paper contains the following technical contributions:

• A method that converts mesh colors to a 2D texture representation with correct bilinear filtering,
• A novel 4D texture coordinate formulation that provides correct bilinear filtering for any mipmap level,
• A texture layout generation method and an optimization scheme that allow mipmap levels beyond vertex colors,
• A formulation that permits nonuniform mesh color resolutions on elongated triangles, and
• A solution for preventing anisotropic filtering from sampling unacceptable texture locations in future hardware.

As a result, we provide a comprehensive solution that directly addresses the problems of texture mapping in real-time graphics applications. The solution we provide adds negligible computation cost during rendering by utilizing existing texture filtering hardware, completely eliminates the need for model parameterization, and produces no filtering inconsistency in any mipmap level without any manual effort. Therefore, we not only minimize the need for manual labor, but also allow high-quality filtering operations with mipmaps and resolve the issues that lead to limited use of tessellation and displacement mapping. Most importantly, our solution provides the authoring benefits of mesh colors without introducing noticeable run-time overhead. Some example images of models with mesh color textures captured from our implementation are shown in Figure 1.

2 BACKGROUND

There is a large body of work on surface parameterization [Floater and Hormann 2005; Hormann et al. 2007; Sheffer et al. 2006]. However, a parameterization suitable for texture mapping should ideally be bijective, or at least injective, which is guaranteed by few methods. Some of them repeatedly split the surface until each piece can have a bijective parameterization [Lévy et al. 2002; Zhou et al. 2004] or grow (potentially) multiple regions starting from seed triangles [Sorkine et al. 2002]. Global parameterization methods can guarantee local injectivity by preventing triangle flips [Hormann and Greiner 1999; Sander et al. 2001; Schüller et al. 2013] and bound the triangle distortion [Aigerman et al. 2014; Lipman 2012; Poranne and Lipman 2014]. Bijectivity can be achieved through non-linear optimization to planarize either separate parts of a model [Zhang et al. 2005] or an entire model [Smith and Schaefer 2015]. Skeleton texture mapping [Madaras and Ďurikovič 2012] first computes a global skeleton for the model and then separates the surface into rectangular pieces. Volume-encoded UV-maps [Tarini 2016] can automatically parameterize the surface by mapping each point using its 3D position, thereby directly supporting multi-resolution representations. However, since a desirable parameterization for texture mapping is not always the one that minimizes triangle distortions, and it can depend on the artist's intentions and how the model will be used, the parameterization process involves manual intervention in practice. Also, regardless of the parameterization method used, seams are unavoidable and they lead to problems in filtering.

Figure 2: Mesh colors on triangles and quadrilaterals with different resolutions R = 2^r − 1, for r = 0, 1, 2, 3: (blue) vertex colors, (green) edge colors, and (red) face colors.

For hiding the seams, researchers proposed quad segmentation that maps each surface piece to a square portion of the texture space with a half-pixel boundary [Nießner and Loop 2013; Purnomo et al. 2004]. This allows a limited number of mipmap levels by adjusting the texture coordinate accordingly for each level. The resolution of each quad piece can also be independently specified [Carr and Hart 2004]. Using a grid-preserving parameterization avoids the need for enforcing a square-shaped boundary for each piece [Ray et al. 2010]. Color discontinuities near the seams can also be hidden by triangulating the seams in texture space with additional computation in the fragment/pixel shader [González and Patow 2009]. The artifacts of displacement mapping can be fixed using analytically computed normals and a tile-based texture format [Nießner and Loop 2013].

Other approaches aim to avoid the problems of 2D parameterization by redefining the texture representation. Octree textures [Benson and Davis 2002; DeBry et al. 2002] and brick maps [Christensen and Batali 2004] store colors efficiently in a 3D embedding, but they may require too many levels to avoid color bleeding across geometrically close surfaces, and GPU implementations [Kniss et al. 2005; Lefebvre et al. 2005; Lefohn et al. 2006] must use multiple dependent texture lookups. Hash textures [Lefebvre and Hoppe 2006] avoid the color bleeding problem using a hash function in a 3D embedding, but nonuniform resolution adjustment is not permitted. Polycube maps [Tarini et al. 2004] use a parameterization onto the faces of multiple cubes that approximate the shape of the model, but they are not general enough to handle any model geometry. TileTrees [Lefebvre and Dachsbacher 2007] store texture values on 2D elements placed at the leaf nodes of an octree, and thereby share the limitations of octree textures.

2.1 Mesh Colors

Mesh colors [Yuksel et al. 2010] eliminate the fundamental problems of texture mapping by defining the colors directly on the model surface. Using the existing topology of a polygonal mesh, mesh colors allow generating detailed textures on arbitrary surfaces without parameterization. Therefore, any model (containing triangles and quadrilaterals) can be painted with mesh colors directly, without any manual or automatic parameterization effort.

Figure 3: An example polygonal mesh and its low-resolution canvas mesh used for defining mesh colors.

Mesh colors extend the concept of vertex colors by introducing edge colors and face colors, as shown in Figure 2. The color positions are evenly distributed in the barycentric space for triangles and the bilinear space for quadrilaterals. Therefore, no additional data structure is needed beyond the resolutions of each face, which can be adjusted independently as desired. This permits simultaneous model editing and texture painting. Furthermore, by placing the colors exactly on the vertices and along the edges, mesh colors avoid discontinuities in filtering operations.

It is often advantageous to define mesh colors on a low-resolution version of a mesh that uses a subset of its vertices. We refer to the mesh that is used for defining mesh colors as the canvas mesh (Figure 3). Mesh colors of a canvas mesh can be directly used by any arbitrary tessellation of the canvas mesh. Remeshing is not directly supported, but since each color value has a well-defined 3D position (defined by its embedding), resampling is trivial. Mesh colors also permit simultaneous model editing and texture painting.

All these properties make mesh colors an ideal alternative to texture mapping for a production pipeline that uses 3D painting [Lambert 2015]. For the purposes of real-time rendering, however, mesh colors share the same limitations as most other methods: a custom shader is required, which is often a deal-breaker. Moreover, while mesh colors provide correct mipmap treatment, the lowest-resolution mipmap level is the vertex colors of the canvas mesh, which is an important limitation, especially when the sizes of faces vary significantly, as is often the case. Furthermore, since the resolution is defined uniformly on each triangle, elongated triangles can lead to extra color storage. The mesh color textures we introduce in this paper directly address all of these problems and make mesh colors a real alternative to texture mapping for real-time rendering as well.

2.2 Per-face Textures (Ptex)

Ptex [Burley and Lacewell 2008] avoids the problems of texture mapping by simply using a separate texture for each face of a model. While this may sound inefficient at first glance, it is far preferable to the problems of texture mapping in a production environment. That is why Ptex is becoming increasingly popular for offline rendering applications, led by Disney Animation and Pixar studios.

Ptex and mesh colors are very closely related. In terms of the color placement topologies, Ptex and mesh colors are dual pairs. In other words, the color locations of mesh colors are exactly at the centers of the color locations used by Ptex (and vice versa), as shown in Figure 4. Mesh colors guarantee texture filtering continuity by placing colors exactly on the vertices and along the edges. Ptex, on the other hand, accesses the colors of neighboring faces during texture filtering to achieve continuity (by storing adjacency data per polygon).



Figure 4: Color locations of Ptex (left) and mesh colors (right) on two square-shaped quadrilateral faces.

However, this theoretically minor difference in color placement makes Ptex less suitable for real-time graphics applications. First of all, Ptex cannot properly filter near extraordinary vertices, i.e. vertices that are not 4-valence (where more or fewer than 4 edges meet), without special treatment during filtering. Unfortunately, extraordinary vertices are commonplace. More importantly, Ptex was designed for quad faces, not triangles, which are heavily used in real-time graphics applications, though it is possible to extend Ptex to equilateral triangles (by packing them as quads). Also, converting triangles to quadrilaterals via subdivision produces extraordinary vertices.

Therefore, in this paper we concentrate on mesh colors instead of Ptex. Nonetheless, most of the methods we introduce in this paper can be trivially adapted to Ptex as well (with quad-only meshes).

3 MESH COLOR TEXTURES

Since we are targeting real-time graphics applications, we must convert mesh colors to 2D textures, which can be efficiently used by existing GPUs. This way, the GPU can directly handle bilinear texture filtering and we benefit from all existing hardware optimizations, including caching. Converting mesh colors to a 2D mesh color texture for bilinear filtering (Section 3.1) is relatively simple, but care must be taken to avoid seams. Mipmaps (Section 3.2) require special treatment that we efficiently handle at run-time using a 4D texture coordinate formulation (Section 3.3). Supporting mipmap levels beyond vertex colors (Section 3.4) is trivial at run-time, but requires additional considerations for generating the mipmap levels. Anisotropic filtering (Section 3.5) could also be achieved, but it would require hardware modifications to avoid software filtering. At the end of this section we also provide an extension to mesh colors that allows nonuniform mesh color resolutions on skinny triangles (Section 3.6).

3.1 Bilinear Filtering

For bilinear filtering it is sufficient to generate a single 2D texture from the mesh color values. We first assign a set of 2D texture coordinates to the vertices of each face separately to form our texture layout. Then, we copy the mesh color values to a 2D texture using these coordinates. Finally, we assign colors to some texels between faces that are used during bilinear filtering. Note that our treatment of this process bears similarities to prior work [Carr and Hart 2002; Purnomo et al. 2004], but our method differs in its ability to handle correct bilinear filtering for triangles.

To generate the texture layout, we place the vertices of each face exactly at texel centers, such that the edges lie precisely horizontally, vertically, or diagonally, with as many texels in between the vertices as specified by the face resolution (Figure 5). This way, all mesh color values (for vertices, edges, and faces) correspond to the center of a unique texel and assigning the texel color values becomes trivial.

Figure 5: Texture layout of mesh colors on a 2D texture.

In the mesh color representation, the color value along an edge only depends on the edge colors (and the vertex colors). This is how filtering continuity across an edge is guaranteed. In our mesh color texture representation, filtering continuity is guaranteed automatically for horizontally and vertically placed edges. However, diagonally placed edges require some extra effort. The colors along these edges not only depend on the edge colors (such as c_1 and c_2 in Figure 5) as desired, but also on two other nearby colors, only one of which is associated with the face (c_0), while the other is on the opposite side of the edge (c_3). Fortunately, we can ensure that the color along the edge only depends on c_1 and c_2 by simply setting

c_3 = c_1 + c_2 − c_0 .    (1)

As a result, bilinear filtering of these four color values using parameters s, t ∈ [0, 1] simplifies down to s c_1 + (1 − s) c_2 along the edge, where s = 1 − t. Thus, we merely need to add a single-texel-wide border near diagonally placed edges for storing the interpolation colors c_3 on the opposite side of the edge. This way, an edge that is placed diagonally for one face can be placed vertically or horizontally for the other face that shares it, without causing filtering discontinuity. This is an important property for supporting arbitrary triangular meshes.

However, most color formats used for texture storage only support values between 0 and 1. Though it is likely that c_3 would be within those limits when c_0, c_1, and c_2 have similar values, there is no guarantee. If c_3 cannot be represented correctly, we cannot achieve filtering continuity. Therefore, when we detect that a color component of c_3 is negative, we modify the original mesh colors by adding δ = −c_3/3 to c_1 and c_2, and subtracting δ from c_0. This provides the minimum required change in color values to set the new value c′_3 = 0. Similarly, if a color component of c_3 is greater than 1, we use δ = (1 − c_3)/3. It is possible that some of these colors may be modified again due to similar constraints for different faces. Yet, since the modification brings color values closer, it is guaranteed to converge after a few iterations.

Note that in the absolute worst case, when the edge colors are black (c_1 = c_2 = 0) and the face color is white (c_0 = 1), the color values would be modified by 0.33. In a more typical case, the modification (if any) would be minor, as it is only needed when the face color is at least twice as bright as the average of the edge colors, and the maximum modification of c_0/3 only happens when both edge colors are black. Nonetheless, in practice it might be a good idea to apply the color modification as the texture is painted, so that the user can immediately respond if a modification occurs.
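The border color of Equation 1 and the clamping adjustment above can be sketched as follows; the helper names and the 2×2 texel labeling are our assumptions. The second helper lets one check that, with c_3 so defined, bilinear filtering along the diagonal indeed reduces to interpolating only the two edge colors.

```python
# Sketch of Equation 1 with the clamping fix described above.
# (Helper names are illustrative assumptions, not the paper's code.)

def border_color(c0, c1, c2):
    """Return possibly-modified (c0, c1, c2) and c3 = c1 + c2 - c0,
    with the originals nudged minimally when c3 leaves [0, 1]."""
    c3 = c1 + c2 - c0
    if c3 < 0.0:
        d = -c3 / 3.0           # minimal change making the new c3 = 0
    elif c3 > 1.0:
        d = (1.0 - c3) / 3.0    # negative: makes the new c3 = 1
    else:
        d = 0.0
    c0, c1, c2 = c0 - d, c1 + d, c2 + d
    return c0, c1, c2, c1 + c2 - c0

def bilerp(c0, c1, c2, c3, s, t):
    """Bilinear filter of a 2x2 texel block, with c0 at (0,0), c1 at
    (1,0), c2 at (0,1), and c3 at (1,1) -- our labeling of the
    arrangement near a diagonal edge."""
    return (1-s)*(1-t)*c0 + s*(1-t)*c1 + (1-s)*t*c2 + s*t*c3

# Worst case from the text: black edge colors, white face color.
worst = border_color(1.0, 0.0, 0.0)  # colors shift by 1/3; new c3 is ~0
```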

Another issue to consider is that Equation 1 effectively converts the bilinear filtering near the diagonal edge to barycentric filtering that interpolates the three nearest colors. This is indeed the desired behavior and avoids filtering discontinuities, but it results in a different filtering behavior near edges that are placed diagonally on the texture layout. If the two triangular faces of an edge have the same resolution and if they both place this edge diagonally, the switch to barycentric filtering near the edge might visually reveal the edge, as shown in Figure 6b (even though it would not cause discontinuity). That is why, in this special case, it is a better idea to copy the color for c_3 directly from the corresponding color of the opposing face, so that we can get continuous bilinear interpolation across the diagonal edge (Figure 6c). This also eliminates the need for Equation 1 for the copied texels and the possibility of any color modification. Alternatively, two such triangles can be packed as a single quad, virtually eliminating the diagonal edge altogether (along with the wasted space for duplicating the edge colors and the borders).

Figure 6: Mesh colors on (a) two triangles that form a square, filtered using (b) a barycentric filter near diagonal edges, revealing the diagonal edge without causing discontinuity, and (c) a bilinear filter that filters across the diagonal edge.

Obviously, converting mesh colors to a mesh color texture using this procedure copies each edge color (at most) twice and each vertex color multiple times. Yet, the minimal wasted space (mostly due to color duplication) is a small price we pay for achieving hardware-accelerated bilinear filtering. Note that the number of duplicated colors can be minimized by using a lower-resolution canvas mesh and by overlapping the shared edges of neighboring faces on the texture layout, as shown in Figure 5 (right).

Since mesh color textures generated in this form guarantee continuous bilinear filtering, they are suitable for displacement mapping. However, it is important to note that even though the color values are copied without resampling (i.e. interpolation), since the currently supported hardware tessellation pattern does not align with the mesh color positions, a tessellated vertex inside a triangular face does not correspond to a single mesh color value and its value must be interpolated. A variant of mesh colors addresses this issue by placing face colors using the current hardware tessellation pattern [Schäfer et al. 2012], but mesh colors defined with this pattern cannot be converted to a mesh color texture without resampling. Quadrilateral faces, however, do not require interpolation as long as the tessellation resolution matches the resolution of the face.

3.2 Generating Mipmaps

We assume that the mesh color resolutions use powers of 2. This assumption has limited practical consequence, and it also helps when two neighboring faces have different resolutions. Let R_i denote the resolution of a face i (i.e. the number of colors along an edge). When R_i = 2^(r_i) − 1 for r_i ∈ Z*, if two faces i and j sharing an edge have different resolutions r_i < r_j, the edge colors for i can be freely painted and the additional edge colors for j are interpolated. Therefore, when using mesh colors, it is almost always a good idea to restrict the face resolutions to powers of 2 (minus one).
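As an illustration of this rule, the additional edge colors for the higher-resolution neighbor can be obtained by interpolating the freely painted samples of the lower-resolution face; the helper below is our own sketch (assuming linear interpolation), not code from the paper.

```python
# Sketch: an edge painted at resolution r_lo (2**r_lo + 1 samples,
# including both vertex colors) is upsampled to a neighbor's higher
# resolution r_hi. The painted samples fall exactly on every
# 2**(r_hi - r_lo)-th position, and the positions in between are
# linearly interpolated. (Illustrative helper, not from the paper.)

def upsample_edge(colors, r_lo, r_hi):
    step = 2 ** (r_hi - r_lo)
    out = []
    for k in range(len(colors) - 1):
        a, b = colors[k], colors[k + 1]
        out.extend(a + (b - a) * m / step for m in range(step))
    out.append(colors[-1])
    return out

# Vertex colors only (r_lo = 0) upsampled to a neighbor with r_hi = 2:
upsample_edge([0.0, 1.0], 0, 2)  # [0.0, 0.25, 0.5, 0.75, 1.0]
```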

We follow the mipmap generation process of the original mesh colors method [Yuksel et al. 2010], which is different from mipmap generation for standard 2D textures. Therefore, we cannot simply create the mipmap levels from the mesh color texture directly. Instead, the mipmap levels for mesh colors must be computed first, and then each level must be converted to a mesh color texture separately. Better results can be achieved using a geometry-aware filtering scheme [Manson and Schaefer 2012].

Note that the lowest resolution for mesh colors is vertex colors (i.e. r_i = 0 and so R_i = 0). Therefore, mipmap levels can be generated with decreasing resolution down to vertex colors. For generating higher mipmap levels we continue filtering the vertex colors, but the storage resolution is not reduced. While this is certainly a limitation, it is a substantial improvement in comparison to standard 2D textures with seams, for which we cannot even expect to have more than a few reliable mipmap levels in general.¹
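A minimal sketch of this per-face behavior (our own helper, assuming the power-of-2 resolutions above): the stored resolution halves with each mipmap level until it reaches vertex colors, after which it stays fixed while the values continue to be filtered.

```python
# Sketch (our helper, not from the paper): stored resolution of a face
# with full resolution R = 2**r - 1 at a given mipmap level, clamped at
# the vertex-color level R = 0.

def face_resolution_at_level(r, level):
    return max(2 ** (r - level) - 1, 0)

# A face with r = 3 (R = 7) across six mipmap levels:
[face_resolution_at_level(3, l) for l in range(6)]  # [7, 3, 1, 0, 0, 0]
```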

3.3 Trilinear Filtering with Mipmaps

The main difficulty with trilinear filtering is that the 2D texture coordinates for each mipmap level of mesh color textures are different. Unfortunately, naïvely storing a separate texture coordinate for each mipmap level would be too expensive. Therefore, we introduce a simple 4D texture coordinate representation that allows quickly computing the 2D texture coordinates for any mipmap level.

Let uℓ be the non-normalized 2D texture coordinates for mipmaplevel ℓ. At runtime we can compute it using

uℓ = us/2ℓ + uδ , (2)

where us is a 2D scalable coordinate representing the portion of the texture coordinate that changes for each mipmap level and uδ is a 2D constant offset, forming our 4D texture coordinate representation. In this formulation u0 = us + uδ is the texture coordinate for the highest resolution mipmap level with ℓ = 0. Note that in practice the division in Equation 2 can be replaced with multiplication by storing us / 2^ℓmax instead of us, where ℓmax is the maximum mipmap level, though we have not included this optimization in our tests.
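In code, the per-level lookup of Equation 2 is a single scale and add. A minimal Python sketch (function names are ours, not from the paper):

```python
def mip_uv(us, ud, level):
    """Per-level 2D texture coordinate from a 4D coordinate (Equation 2):
    u_level = us / 2**level + ud."""
    return (us[0] / 2 ** level + ud[0], us[1] / 2 ** level + ud[1])

def mip_uv_premul(us_pre, ud, level, level_max):
    """Division-free variant mentioned in the text: us_pre stores
    us / 2**level_max, so the per-level coordinate becomes
    us_pre * 2**(level_max - level) + ud (a multiply-add per component)."""
    s = 2 ** (level_max - level)
    return (us_pre[0] * s + ud[0], us_pre[1] * s + ud[1])
```

In a shader both variants compile to a fused multiply-add per component.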

For the sake of simplicity, in this section we only consider mipmap levels up to vertex colors (determined separately for each face based on its resolution). We refer to these mipmap levels as valid mipmap levels in this section. We discuss how to handle higher level (lower resolution) mipmap levels in the next section.

The 4D texture coordinate formulation in Equation 2 can be used in numerous ways. To create a desirable layout for our mesh color textures, however, we must follow three simple rules:

(1) uℓ of each canvas mesh vertex for each valid mipmap level ℓ must be at the center of a texel. This means that us must be an even multiple of 2^ri (where Ri = 2^ri − 1 is the face resolution, as mentioned above) and the uδ values must be

¹Texture mapping by tiling a small texture over a surface can have proper mipmap levels down to a single pixel, but this approach is seldom useful for texturing complex models (such as characters).


Figure 7: An example texture layout for 5 quad faces (each colored differently) on 3 mipmap levels (Level 0, Level 1, and Level 2) and the list of 4D texture coordinates (us and uδ) for each vertex of each face.

some integer plus 0.5 to move the texture coordinate to the texel center (Figure 7). Thus, us / 2^ℓ remains an integer for all valid mipmap levels ℓ.

(2) The offset uδ must be the same for all vertices of a face and the us values must vary by exactly 2^ri (obeying the first rule). This places a quad face onto a (2^(ri−ℓ) + 1)-texel square region for all valid mipmap levels ℓ, allocating exactly as many texels as needed for copying vertex, edge, and face colors, as shown in Figure 7. Note that since we map each canvas mesh face independently, we assign each vertex as many separate coordinates as the number of faces sharing it. Also, because all offsets uδ of all vertices of a face must be the same, if desired, it is possible to store uδ per face, instead of storing them separately for each vertex of each face.

(3) The mapped positions of separate faces must not overlap for any valid mipmap level. This can be accomplished by simply making sure that the uδ values of a face are no smaller than those of the preceding faces (placed on the left or above) in the texture layout.

As long as these three rules are followed, any packing algorithm can be used for generating the texture layout. In our experiments we have used the packing algorithm outlined in Algorithm 1 (where b = 1). This algorithm begins with pairing triangles into rectangles that contain two triangles with 2-texel spacing between them to make room for the extra texels needed during bilinear filtering (such as the yellow texels in Figure 5). Each quad face is placed in a separate rectangle. Then, we sort the rectangles based on their sizes (using first height, then width). Starting with the largest rectangle, we place them on the mesh color texture in scanline order (left to right, then top to bottom). If placing a rectangle on the current line would make the length of the line (the width of the texture) larger than a desirable value (taken as the square root of the total number of colors to be copied), we move to the next line (past the last non-empty texel of the line). uδ,y, the vertical coordinate of uδ, is the same for all rectangles placed on the same row and it is taken as the number of preceding rows (when b = 1). uδ,x, the

Algorithm 1: Texture layout generation with 4D coordinates.

 1: Form rectangles from faces.
 2: Sort rectangles from large to small.
 3: us ← [0 0]
 4: uδ ← [0.5 0.5]
 5: foreach rectangle do
 6:   if reached end of row then
 7:     Find next row
 8:     uδ ← uδ + [0 b]
 9:     us ← next row starting position − uδ
10:   end
11:   Make sure that us is an even multiple of 2^ri.
12:   Place rectangle using the four texture coordinates:
13:     us , uδ
14:     us + [0 2^ri] , uδ
15:     us + [2^ri 0] , uδ
16:     us + [2^ri 2^ri] , uδ
17:   us,x ← us,x + 2^ri
18:   uδ,x ← uδ,x + Xi + b
19: end

Figure 8: A mesh color texture and its mipmap levels.

horizontal coordinate of uδ, is the number of rectangles on the left side (when b = 1) plus an additional offset Xi, where Xi = 0 for rectangles containing quad faces and Xi = 2 for rectangles containing triangle pairs (to account for the constant separation between triangles). Before placing a rectangle on a new row, we make sure that the horizontal component of us is an even multiple of 2^ri (and skip texels if necessary).² An example layout generated using this procedure is shown in Figure 8. Note that this is just one way of generating a valid layout. We leave the exploration of optimal layout generation (producing optimal packing) to future work.
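As a concrete illustration of this placement, the following is our own simplified, quad-only Python sketch of the scanline layout (triangle pairing, the exact row-width heuristic, and restarting rows past non-empty texels are omitted; function and variable names are ours):

```python
def layout_quads(rs, b=1, target_w=16):
    """Simplified scanline layout for quad faces only. rs[i] is the
    exponent r_i of face i (face resolution R_i = 2**r_i - 1). Returns,
    per face, the 4D coordinate (us, ud) of its first vertex; the other
    three vertices add [0, 2**r_i], [2**r_i, 0], and [2**r_i, 2**r_i]
    to us while sharing the same ud."""
    order = sorted(range(len(rs)), key=lambda i: rs[i], reverse=True)
    coords = [None] * len(rs)
    us = [0, 0]
    ud = [0.5, 0.5]
    row_h = 0
    for i in order:
        w = 2 ** rs[i]
        if us[0] + ud[0] + w + b > target_w and us[0] > 0:
            us = [0, us[1] + row_h]   # next row: scalable part grows by
            ud = [0.5, ud[1] + b]     # the tallest rectangle in the row
            row_h = 0
        # Rule 1: us must be a multiple of 2**r_i (skip texels if needed).
        us = [((c + w - 1) // w) * w for c in us]
        coords[i] = (tuple(us), tuple(ud))
        row_h = max(row_h, w)
        us = [us[0] + w, us[1]]   # advance along the row:
        ud = [ud[0] + b, ud[1]]   # us,x += 2**r_i and ud,x += X_i + b (X_i = 0)
    return coords
```

Under these assumptions the returned coordinates satisfy the three rules: us components are multiples of 2^ri, uδ is half-integer and non-decreasing, and neighboring rectangles are separated by b constant texels.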

While forming rectangles from faces, it is possible to group neighboring faces into larger rectangles (as in Figure 5 (right)). In our tests, however, we have not used this extra optimization, which would reduce the number of duplicated color values.

²In most cases us is an even multiple of 2^ri, but exceptions are possible, in which case we waste some pixels to ensure that the first rule is satisfied. Notice the two rectangular white holes near the bottom of the textures in Figure 8.


Figure 9: Options for trilinear and anisotropic filtering using mipmaps. Panels: (a) trilinear, max(∂us/∂x, ∂us/∂y); (b) trilinear, (∂us/∂x + ∂us/∂y)/2; (c) trilinear, min(∂us/∂x, ∂us/∂y); (d) anisotropic (hardware); (e) anisotropic (software); (f) anisotropic (software & MSAA). For trilinear filtering we can use (a) the maximum screen space derivatives, (b) the average derivatives, or (c) the minimum derivatives. For anisotropic filtering the minimum derivatives are used. (d) Hardware anisotropic filtering introduces inaccuracies that reveal the edges of the canvas mesh. (e) Software anisotropic filtering can resolve this, but software filtering introduces run-time computation cost. (f) For correct anisotropic filtering, anti-aliasing is needed to consider the contributions of each face that corresponds to a pixel.

Using this method we can generate a texture layout with 4D coordinates and use it for generating a mesh color texture for each mipmap level. Yet, this procedure only supports mipmap levels up to vertex colors. Since faces with higher resolution mesh colors are placed first near the top part of the texture, the bottom/right part for higher mipmap levels can be simply cropped (as in Figure 7), as the bottom part would correspond to invalid mipmap levels (beyond vertex colors) for faces with lower-resolution mesh colors.

During rendering, we determine the mipmap level using the screen space derivatives of us. The filtering results using different trilinear filtering options are shown in Figure 9. We perform two hardware-accelerated bilinear filtering operations using the two closest mipmap levels and linearly interpolate the result to achieve trilinear filtering. In our implementation we store each mipmap level in a different texture.
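The per-fragment work can be sketched in software as follows (a minimal Python model under our own naming; on the GPU the two bilinear fetches are hardware texture samples and `mips` would be two bound textures):

```python
import math

def bilinear(tex, u, v):
    """Bilinear fetch from a list-of-rows texture, with (u, v) in
    non-normalized texel units and texel centers at half-integers.
    Out-of-range indices are clamped to the edge."""
    x, y = u - 0.5, v - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    def t(i, j):
        return tex[min(max(j, 0), len(tex) - 1)][min(max(i, 0), len(tex[0]) - 1)]
    top = t(x0, y0) * (1 - fx) + t(x0 + 1, y0) * fx
    bot = t(x0, y0 + 1) * (1 - fx) + t(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

def trilinear(mips, us, ud, lod):
    """Trilinear filtering with per-level mesh color textures: compute
    u_level = us / 2**level + ud for the two closest mipmap levels,
    bilinearly sample each, and blend by the fractional LOD."""
    l0 = min(int(lod), len(mips) - 1)
    l1 = min(l0 + 1, len(mips) - 1)
    f = lod - int(lod)
    def sample(level):
        u = us[0] / 2 ** level + ud[0]
        v = us[1] / 2 ** level + ud[1]
        return bilinear(mips[level], u, v)
    return sample(l0) * (1 - f) + sample(l1) * f
```

Scalar texels are used for brevity; a real implementation operates on RGBA vectors.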

3.4 Mipmaps beyond Vertex Colors

Mipmap levels up to vertex colors can arguably be enough for many applications, since higher mipmap levels are needed only when the faces of the canvas mesh appear smaller than a pixel on the rendered image. Nonetheless, the procedure explained above can be extended to support mipmap levels beyond vertex colors.

These higher mipmap levels are stored as vertex colors, though they are generated via additional filtering operations. Therefore, if all faces have the same mesh color resolution, there is a simple solution for supporting higher mipmap levels. In that case, we can use the same method explained above and merely replace 2^ℓ in Equation 2 with 2^min(ℓ,m), where m is the number of mipmap levels up to vertex colors (such that m = ri ∀i). However, for more typical cases where faces can have different mesh color resolutions, this simple modification is not enough.

Note that our 4D texture coordinate representation places the vertices of the canvas mesh at the exact centers of texels for all mipmap levels up to vertex colors. For higher mipmap levels, however, the scalable portion of the texture coordinate (us / 2^ℓ in Equation 2) may not be a whole number, meaning that the vertices of the canvas mesh would no longer be at the exact centers of some texels. This leads to two problems:

(1) Mesh color values for the higher mipmap levels beyond vertex colors cannot be directly copied to texels. Instead, the mesh color values must be represented as bilinear combinations of texel colors.

(2) At higher levels, the texel regions that correspond to neighboring rectangles of the layout can overlap, as shown in Figure 10a.

The second problem is easy to solve. All we need to do is add a single-texel separation between rectangles while generating the layout, as shown in Figure 10b. This can be accomplished by merely using b = 2 in Algorithm 1. Since this separation is encoded in the constant (uδ) portion of the texture coordinate, the separation is preserved for all mipmap levels. This separation ensures that for any mipmap level ℓ there is a 2-texel difference between the uℓ values of neighboring rectangles. At higher mipmap levels, these new separation texels can be included in the bilinear interpolation. This simple solution guarantees that each texel corresponds to no more than a single rectangle. Thus, we guarantee that the texel regions of neighboring rectangles never overlap.

The first problem, however, is not as easy to solve. When texture coordinates of vertices do not correspond to texel centers, we cannot simply copy the mesh color values. Instead, we must assign colors to the texture, such that the interpolated values would correspond to the mesh color values we aim to represent. Unfortunately, accurately reproducing the mesh color values from the interpolated texel colors might require storing texel values that are negative or greater than one. Though the texel colors are easy to compute using bilinear extrapolation of the original mesh color values, there is no guarantee that the extrapolated color values would be within the limits [0, 1]. In fact, for higher mipmap levels where the texture coordinates uℓ for the vertices of a face are close,


Figure 10: Example texture layouts generated (a) without and (b) with separation, showing which texels are associated with which faces at mipmap levels 0 through 4. Notice that without separation the texels that correspond to different faces begin to overlap at higher mipmap levels (beyond vertex colors). Adding a single-texel separation ensures that each face is mapped to a distinct set of texels.

Figure 11: Two mipmap levels beyond vertex colors, generated using clamping, which can produce severe artifacts, and our optimization, which resolves this problem.

even minute differences between the original mesh color values can lead to extrapolated texel color values that are far outside of the limits [0, 1]. If the texture format does not permit storing values outside of [0, 1], we cannot ignore this problem by simply clamping the extrapolated color values, because that might easily introduce a substantial amount of error in the interpolated mesh color values computed during filtering, as shown in Figure 11.

Instead of simply clamping the extrapolated texel color values that exceed the limits [0, 1], we propose modifying the original mesh color values of a face, such that the extrapolated texel color values reside within [0, 1]. Thus, we convert the problem of computing the extrapolated texel colors into an optimization problem that computes the minimal changes to the original mesh color values of the mipmap level that would produce extrapolated texel color values within the limits [0, 1].

We perform the optimization separately for each color channel (red, green, and blue) and for each face. Let x denote the change in the mesh color values that would make the extrapolated value at a point k with extrapolation weights wk equal to a limit value Lk ∈ {0, 1} (whichever limit is violated), and let Ck be the extrapolated value at point k using unmodified mesh color values. Our goal is to solve for x, and the points k we consider are texel centers where the extrapolated values are out of bounds [0, 1]. Note that the vectors x and wk are four dimensional (d = 4) for quadrilateral faces (using bilinear interpolation) and three dimensional (d = 3) for triangles (using barycentric interpolation). If adding x to the original mesh color values would make the extrapolated value at k equal to the limit value Lk, then we can write

wkᵀ x = Lk − Ck .    (3)

We can solve for x using a least squares minimization of the form

min_x ∥b − Ax∥² ,    (4)

where the vector b and the matrix A are

b = S [ L1 − C1   L2 − C2   ⋯   Ln − Cn ]ᵀ  (an n × 1 vector)  and  A = S [ w1   w2   ⋯   wn ]ᵀ  (an n × d matrix) ,    (5)

where n ≤ 9 is the number of texels that correspond to the face and S is an n × n diagonal binary matrix, such that Sk,k = 1 only if Ck ∉ [0, 1] and Sk,k = 0 otherwise. Thus, matrix S determines which ones of the n texels are considered as constraints in the optimization. When only a few points k are out of bounds, the optimization problem is under-constrained, but we can still find a solution to AᵀAx = Aᵀb numerically using conjugate gradients that would minimize ∥x∥. Since this is a very small system, the solution can be computed quickly.

However, after the optimization, the modification of the mesh color values x might cause other texels of the face to be out of bounds [0, 1] (even if they were previously within bounds). Therefore, we perform this optimization in multiple steps by introducing the constraints one by one. At each step h we add the constraint for the point k with the largest |Lk^h − Ck^h| value, where the values for Lk^h and Ck^h are determined incorporating the x^(h−1) values computed in the previous step (where x^0 = 0). The maximum number of steps needed is bounded by n. Using this optimization we substantially reduce the interpolation error during bilinear filtering (Figure 11).

Even though each step of this optimization can be solved quickly, since this process is repeated for (potentially) each face of the canvas


mesh, the cumulative processing time can be significant: it was up to several seconds in our tests.

Also, it is possible that the modifications of the mesh color values for a particular face can cause interpolated texel colors of other faces to fall out of bounds. To ensure that the modification of a vertex color would not lead to any discontinuity, this optimization problem must be solved globally, considering all interpolated texels and all related vertex colors. This is certainly possible to do using a much larger system of equations. However, this optimization is needed only for higher mipmap levels (and only if the image format is restricted to [0, 1]). Therefore, in practice, depending on the target application, minor discontinuities due to this modification might be acceptable and the colors used for each face can be optimized independently. Without this optimization, however, the resulting color differences can be severe (Figure 11).

3.5 Anisotropic Filtering

Hardware anisotropic filtering can be enabled while using mesh color textures. However, current GPUs automatically determine the sample locations for anisotropic filtering and they offer no mechanism for restricting these sample locations. Therefore, some of the mesh color texture samples can be taken outside of the face area on the texture layout. As a result, some anisotropic filtering samples may not correspond to the face being shaded, leading to inaccuracies in filtering and revealing the edges of the canvas mesh, as shown in Figure 9d.

Unfortunately, we can offer no solution for this using existing GPUs, besides handling the anisotropy in software. However, this can potentially be remedied in future GPU hardware by introducing a relatively simple test. To avoid the anisotropic filtering issues, all we need to do is make sure that the texture sample locations are contained within the texture-space footprint of the shaded fragment's triangle. The texture samples that are outside can be discarded to avoid invalid samples. Even though the triangle information may not exist during fragment shading, it is easy to perform this test using the barycentric coordinates of the fragment and their screen space derivatives. Eliminating these invalid samples, along with multi-sampling for antialiasing, would produce the desired anisotropic texture filtering result (Figure 9f).
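The proposed test is simple enough to sketch directly (a Python model under our assumptions; on hardware, the barycentric coordinates and their derivatives would come from the rasterizer):

```python
def sample_inside_face(bary, dbdx, dbdy, off_x, off_y):
    """Footprint test for an anisotropic sample: shift the fragment's
    barycentric coordinates by the sample's screen-space offset using
    their derivatives, and accept the sample only if it stays inside
    the triangle (all shifted barycentrics non-negative)."""
    shifted = [b + off_x * dx + off_y * dy
               for b, dx, dy in zip(bary, dbdx, dbdy)]
    return all(b >= 0.0 for b in shifted)
```

Samples failing this test would be discarded (and their filter weights renormalized over the accepted samples).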

However, if antialiasing is not used, limiting anisotropic filter sample locations may lead to inaccuracies near the edges of the canvas mesh. Yet, the severity of these inaccuracies depends on the mesh color values. If the mesh color values on the other side of a canvas edge are not substantially different, the inaccuracies would be minimal. Otherwise, the resulting image might contain aliasing artifacts in shading, which would be similar to the typical pixelation artifacts that appear with no antialiasing (Figure 9e). Nonetheless, if anisotropic filter sample locations are not limited (as on current GPUs), the inaccuracies would likely be more severe (Figure 9d).

3.6 Nonuniform Mesh Colors

One limitation of the original mesh colors method is that a single resolution value is used for determining the resolution of each face. As a result, elongated triangles (with two long edges and one short edge) may occupy more color samples than necessary (along the

Figure 12: Texture layout for converting nonuniform mesh colors to a 2D texture: (a) elongated quadrilaterals, (b) elongated triangles.

short edge). While this does not lead to any problems in filtering, it causes a needless increase in color storage.

It is trivial to address this problem for elongated quadrilaterals by simply defining a secondary resolution value per face, as shown in Figure 12a. In some cases, pairs of elongated triangles can be represented as elongated quadrilaterals in the canvas mesh, but this is not always possible. Fortunately, we can apply the same solution to elongated triangles by introducing a secondary resolution. In this case, the primary resolution determines the resolution of the two long edges and the secondary resolution is for the shorter edge. The face color locations are determined by two of the edges (with different resolutions), similar to elongated quadrilaterals, as shown in Figure 12b.

While converting such triangles to mesh color textures, the texel pairs that correspond to the diagonal edge colors must be computed via interpolation (Figure 12b). In this case, if the interpolated colors are negative or greater than 1, we can simply modify the corresponding diagonal edge color.

Note that since we restrict the resolutions to powers of 2 (minus one), there is no need to consider three different resolutions for the three edges of a triangle. The only cases that would not collapse a triangle (in texture space) have the same resolution along two of the edges and an equal or lower resolution along the third edge (i.e., an elongated isosceles triangle).

4 RESULTS

We tested the performance of mesh color textures in comparison to standard 2D textures on multiple GPUs spanning multiple generations, using different models and different texture and screen resolutions up to 4K. Some of the models we used in our tests are shown in Figure 1.

Our experiments revealed that mesh color textures with mipmap filtering using the 4D texture coordinates introduce a negligible overhead as compared to standard 2D textures. Figure 13 summarizes our experimental results. Each one of our experiments contained a single model with different resolutions rotating in front of the camera at varying distances, using different resolutions of mesh color textures and standard 2D textures with matching resolutions, rendered using different viewport resolutions. In these experiments a single texture lookup was performed per fragment in an extremely simple fragment shader with no other operations. Thus, the experiments directly measure the difference in texture lookup performance between mesh color textures and standard 2D textures.

On relatively new GPUs we observed an average overhead of less than 0.01 milliseconds per frame. In fact, for some camera angles, mesh color textures provided even faster render times than completely hardware-accelerated standard 2D textures. While we could observe a more pronounced average overhead on older GPUs, it is still negligible, especially considering the benefits of mesh colors over 2D textures for both authoring (without parameterization) and filtering quality (without seam artifacts).

Figure 13: Performance overhead of mesh color textures (in milliseconds, from 0 to 0.06 ms) as compared to standard 2D textures, showing the range of overhead values we observed in different experiments (minimum to maximum), measured on different graphics hardware: NVIDIA GeForce GTX 750 Ti, GTX TITAN, GTX 980, GTX TITAN X, GTX 1070, and GTX 1080.

It is not surprising that mesh color textures produce performance similar to 2D textures. Even though our trilinear filtering explicitly samples two separate textures, trilinear filtering with standard 2D textures also involves sampling two separate mipmap levels of a texture. Therefore, we attribute the minor overhead we observed to the few extra operations we need to perform in software and the difference in texture coordinates that could impact cache behavior. The difference in cache behavior also explains why some camera angles happen to provide slightly faster results with mesh color textures.

In terms of quality, since we generated our mesh color textures directly from previously painted 2D textures, the rendered results with mesh color textures and standard 2D textures are virtually indistinguishable. However, since standard 2D textures contain seams, when using standard 2D textures with mipmapping, the seams are revealed as seam artifacts even within the first few mipmap levels in our examples. The results with mesh color textures, on the other hand, contain no seam artifacts. This highlights why it is important to use mesh color textures at run time, rather than converting 3D-painted mesh colors to standard 2D textures for real-time rendering.

The layout generation method we used (Algorithm 1) can produce tightly packed rectangles with limited wasted space for the highest resolution mipmap level. At higher mipmap levels with lower resolution, however, a larger percentage of texels is wasted due to imperfect packing (see Figure 8). Yet, since lower resolution mipmap levels have fewer pixels, the total percentage of the texture data wasted with unused texels is comparable to typical uses of standard 2D textures, as shown in Table 1. However, this analysis does not include duplicated vertex and edge colors for mesh color textures or the duplicated colors along seams for standard 2D textures. It also does not consider wasted space due to mapping distortion. On the other hand, since the width and height of a mipmap level with mesh color textures are typically larger than half of the preceding level, mesh color textures occupy more memory for storing the mipmap levels, as compared to a standard

Table 1: Wasted texture space due to packing

                        Alien   Lizard   Head   Nyra
Standard 2D Texture      26%     39%     19%    17%
Mesh Color Texture       15%     15%     15%    16%
  Mipmap level ℓ = 0     12%      8%      7%     8%
  Mipmap level ℓ = 1     18%     19%     17%    18%
  Mipmap level ℓ = 2     28%     33%     31%    33%
  Mipmap level ℓ = 3     35%     50%     48%    50%
  Mipmap level ℓ = 4     42%     66%     63%    62%
  Mipmap level ℓ = 5     48%     76%     71%    68%
  Mipmap level ℓ = 6     48%     78%     66%    67%
  Mipmap level ℓ = 7     45%     78%     62%    63%
  Mipmap level ℓ = 8     46%     78%     61%    61%
  Mipmap level ℓ = 9     48%     78%     62%    59%

2D texture with the same highest resolution. For the examples inFigure 1 this storage overhead varies between 8% and 15% with 10mipmap levels.

5 LIMITATIONS AND FUTURE WORK

One disadvantage of using mesh color textures is that trilinear filtering adds some extra lines of code to the shader, in comparison to standard 2D textures that can be filtered using a single API call. Even though this may lead to a negligible performance penalty, considering that shader complexity can be an important concern in practice, it would be helpful to hide this complexity within the graphics API. In particular, if future graphics APIs allowed specifying custom resolutions for mipmap levels, all mipmap levels of a mesh color texture could be handled within a single texture resource. Furthermore, a new texture sampling function that directly uses our 4D coordinates would hide the extra complexity of mesh color textures from the programmer, and parts of the computation could be accelerated to eliminate the overhead using an optimized low-level implementation of our method.

Resolving the limitations regarding anisotropic filtering, however, would likely require hardware modification. We have described a relatively simple modification that could prevent the GPU from sampling invalid texture locations. Such a feature would also resolve the anisotropic filtering problems of standard texture mapping. However, since the seams of standard texture mapping also induce other types of texture filtering artifacts, resolving the anisotropic filtering problems alone would not be an important improvement for standard texture mapping.

Another problem arises with multi-sample anti-aliasing (MSAA). Since MSAA avoids shading each sub-pixel sample and instead calls the fragment shader using the extrapolated (or interpolated) values at the pixel center, the extrapolated 4D texture coordinate can easily fall outside of the texture layout footprint of a face. Similar to anisotropic filtering, this reveals the edges of the canvas mesh. Unlike anisotropic filtering, however, in this case we have direct control over the texture sample location. Therefore, this can be


easily fixed by performing a corrected barycentric interpolation for computing the texture coordinate that avoids extrapolation. The software anisotropic filtering example using MSAA in Figure 9f is generated using this approach. Alternatively, this issue can be easily avoided using centroid sampling, which ensures that the 4D texture coordinate is not extrapolated.
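One simple form of such a corrected interpolation, sketched under our assumptions (the paper does not prescribe this exact formula), clamps the extrapolated barycentric weights back into the triangle and renormalizes them before evaluating the 4D texture coordinate:

```python
def clamp_barycentric(bary):
    """Pull extrapolated barycentric weights back inside the triangle:
    clamp negatives to zero and renormalize so the weights sum to one.
    Weights already inside the triangle are returned unchanged."""
    clamped = [max(b, 0.0) for b in bary]
    s = sum(clamped)
    return [b / s for b in clamped]
```

Interpolating the per-vertex 4D coordinates with the clamped weights keeps the resulting texture coordinate inside the face's layout footprint.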

Mesh color textures assign multiple texture coordinates per canvas mesh vertex. While traditional 2D textures also assign multiple texture coordinates to some vertices, mesh color textures would typically require more storage for texture coordinates.

Another possible future hardware modification could be triangle tessellation using the tessellation pattern of mesh colors. This way, displacement mapping with tessellation could be done without the need for resampling the colors.

Mesh color textures also inherit some limitations of mesh colors. Since mesh colors rely on the topology of the canvas mesh, arbitrary remeshing would require resampling the mesh colors. Even though mesh color textures convert the mesh color data into 2D textures and thereby provide the option of using existing 2D image processing tools, 3D painting is still preferable with mesh colors.

Finally, since converting mesh colors to a mesh color texture may require modifying a small percentage of mesh color values, not all of the original mesh colors may be exactly reproduced from the mesh color texture.

6 CONCLUSION

We have shown how mesh colors can be converted to a format that is ideal for current GPUs, making mesh colors a viable choice for real-time graphics applications with strict performance constraints. Thus, the content creation pipeline for real-time graphics applications can begin enjoying the benefits of mesh colors, which are no longer exclusive to offline graphics applications. Furthermore, since mesh color textures resolve the filtering problems of texture mapping, mipmaps can be used without incorrect filtering concerns, and hardware features like tessellation and displacement mapping can be utilized more broadly.

ACKNOWLEDGEMENTS

We thank Christer Sveen for the alien character, Murat Afşar for the lizard model, Lee Perry-Smith for the head model, and Paul Tosca for the Nyra model. We also thank the anonymous reviewers for their valuable feedback and suggestions on improving this paper.

REFERENCESNoam Aigerman, Roi Poranne, and Yaron Lipman. 2014. Lifted Bijections for Low

Distortion Surface Mappings. ACM Trans. Graph. 33, 4, Article 69 (July 2014),12 pages.

David Benson and Joel Davis. 2002. Octree Textures. Proc. SIGGRAPH ’02, ACM Transactions on Graphics 21, 3 (2002), 785–790.

Brent Burley and Dylan Lacewell. 2008. Ptex: Per-Face Texture Mapping for Production Rendering. In Eurographics Symposium on Rendering 2008. 1155–1164.

Nathan A. Carr and John C. Hart. 2002. Meshed Atlases for Real-time Procedural Solid Texturing. ACM Trans. Graph. 21, 2 (2002), 106–131.

Nathan A. Carr and John C. Hart. 2004. Painting Detail. Proc. SIGGRAPH ’04, ACM Trans. on Graphics 23, 3 (2004), 845–852.

Per H. Christensen and Dana Batali. 2004. An Irradiance Atlas for Global Illumination in Complex Production Scenes. In Proc. of Rendering Techniques (EGSR’04). 133–141.

David DeBry, Jonathan Gibbs, Devorah DeLeon Petty, and Nate Robins. 2002. Painting and Rendering Textures on Unparameterized Models. Proc. SIGGRAPH ’02, ACM Trans. on Graphics 21, 3 (2002), 763–768.

Michael S. Floater and Kai Hormann. 2005. Surface Parameterization: a Tutorial and Survey. In Advances in Multiresolution for Geometric Modelling. 157–186.

Francisco González and Gustavo Patow. 2009. Continuity Mapping for Multi-chart Textures. ACM Trans. Graph. 28, 5, Article 109 (Dec. 2009), 8 pages.

K. Hormann and G. Greiner. 1999. MIPS: An Efficient Global Parametrization Method. In Curve and Surface Design Conference Proceedings 1999. 153–162.

Kai Hormann, Bruno Lévy, and Alla Sheffer. 2007. Mesh Parameterization: Theory and Practice. In ACM SIGGRAPH 2007 Courses (SIGGRAPH ’07). Article 1.

Joe Kniss, Aaron Lefohn, Robert Strzodka, Shubhabrata Sengupta, and John D. Owens. 2005. Octree Textures on Graphics Hardware. In SIGGRAPH ’05: ACM SIGGRAPH 2005 Sketches. 16.

Thibault Lambert. 2015. From 2D to 3D Painting with Mesh Colors. In ACM SIGGRAPH 2015 Talks (SIGGRAPH ’15). Article 72, 1 page.

Sylvain Lefebvre and Carsten Dachsbacher. 2007. TileTrees. In I3D ’07. 25–31.

Sylvain Lefebvre and Hugues Hoppe. 2006. Perfect Spatial Hashing. Proc. SIGGRAPH ’06, ACM Trans. on Graphics 25, 3 (2006), 579–588.

Sylvain Lefebvre, Samuel Hornus, and Fabrice Neyret. 2005. Texture Sprites: Texture Elements Splatted on Surfaces. In I3D ’05: Proc. Symposium on Interactive 3D Graphics and Games. 163–170.

Aaron E. Lefohn, Shubhabrata Sengupta, Joe Kniss, Robert Strzodka, and John D. Owens. 2006. Glift: Generic, Efficient, Random-access GPU Data Structures. ACM Trans. on Graphics 25, 1 (2006), 60–99.

Bruno Lévy, Sylvain Petitjean, Nicolas Ray, and Jérome Maillot. 2002. Least Squares Conformal Maps for Automatic Texture Atlas Generation. Proc. SIGGRAPH ’02, ACM TOG 21, 3 (2002), 362–371.

Yaron Lipman. 2012. Bounded Distortion Mapping Spaces for Triangular Meshes. ACM Trans. Graph. 31, 4, Article 108 (July 2012), 13 pages.

Martin Madaras and Roman Ďurikovič. 2012. Skeleton Texture Mapping. In Proceedings of the 28th Spring Conference on Computer Graphics (SCCG ’12). 121–127.

Josiah Manson and Scott Schaefer. 2012. Parameterization-Aware MIP-Mapping. Comput. Graph. Forum 31, 4 (June 2012), 1455–1463.

Matthias Nießner and Charles Loop. 2013. Analytic Displacement Mapping Using Hardware Tessellation. ACM Trans. Graph. 32, 3, Article 26 (July 2013), 9 pages.

Roi Poranne and Yaron Lipman. 2014. Provably Good Planar Mappings. ACM Trans. Graph. 33, 4, Article 76 (July 2014), 11 pages.

Budirijanto Purnomo, Jonathan D. Cohen, and Subodh Kumar. 2004. Seamless Texture Atlases. In SGP ’04: Proc. Eurographics/ACM SIGGRAPH Symposium on Geometry Processing. 65–74.

Nicolas Ray, Vincent Nivoliers, Sylvain Lefebvre, and Bruno Lévy. 2010. Invisible Seams. Computer Graphics Forum (2010).

Pedro V. Sander, John Snyder, Steven J. Gortler, and Hugues Hoppe. 2001. Texture Mapping Progressive Meshes. In Proceedings of SIGGRAPH ’01. 409–416.

Henry Schäfer, Magdalena Prus, Quirin Meyer, Jochen Süßmuth, and Marc Stamminger. 2012. Multiresolution Attributes for Tessellated Meshes. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D ’12). 175–182.

Christian Schüller, Ladislav Kavan, Daniele Panozzo, and Olga Sorkine-Hornung. 2013. Locally Injective Mappings. Computer Graphics Forum (proceedings of Symposium on Geometry Processing) 32, 5 (2013).

Larry Seiler, Doug Carmean, Eric Sprangle, Tom Forsyth, Michael Abrash, Pradeep Dubey, Stephen Junkins, Adam Lake, Jeremy Sugerman, Robert Cavin, Roger Espasa, Ed Grochowski, Toni Juan, and Pat Hanrahan. 2008. Larrabee: A Many-core x86 Architecture for Visual Computing. ACM Trans. Graph. (Proc. of SIGGRAPH 2008) 27, 3, Article 18 (Aug. 2008), 15 pages.

Alla Sheffer, Emil Praun, and Kenneth Rose. 2006. Mesh Parameterization Methods and Their Applications. Foundations and Trends in Computer Graphics and Vision 2, 2 (2006), 105–171.

Jason Smith and Scott Schaefer. 2015. Bijective Parameterization with Free Boundaries. ACM Trans. Graph. 34, 4, Article 70 (July 2015), 9 pages.

Olga Sorkine, Daniel Cohen-Or, Rony Goldenthal, and Dani Lischinski. 2002. Bounded-distortion Piecewise Mesh Parameterization. In Proceedings of the Conference on Visualization ’02 (VIS ’02). 355–362.

Marco Tarini. 2016. Volume-encoded UV-maps. (2016).

Marco Tarini, Kai Hormann, Paolo Cignoni, and Claudio Montani. 2004. PolyCube-Maps. Proc. SIGGRAPH ’04, ACM Trans. Graph. (TOG) 23, 3 (2004), 853–860.

Natalya Tatarchuk. 2016. Open Problems in Real-time Rendering. In ACM SIGGRAPH 2016 Courses (SIGGRAPH ’16). Article 20.

Natalya Tatarchuk, Aaron Lefohn, and Peter-Pike Sloan. 2015. Frontiers in Real Time Rendering. In ACM SIGGRAPH 2015 Courses (SIGGRAPH ’15). Article 13.

Cem Yuksel, John Keyser, and Donald H. House. 2010. Mesh Colors. ACM Transactions on Graphics 29, 2, Article 15 (2010), 11 pages.

Eugene Zhang, Konstantin Mischaikow, and Greg Turk. 2005. Feature-based Surface Parameterization and Texture Mapping. ACM Trans. Graph. 24, 1 (2005), 1–27.

Kun Zhou, John Snyder, Baining Guo, and Heung-Yeung Shum. 2004. Iso-charts: Stretch-driven Mesh Parameterization Using Spectral Analysis. In Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing (SGP ’04). 45–54.