
Deliverable 2.2
Report/publication(s) on Algorithms for Surface Synthesis and Visual Artifact Reduction

CR-PLAY “Capture-Reconstruct-Play: an innovative mixed pipeline for videogames development”

Grant Agreement ICT-611089-CR-PLAY
Start Date 01/11/2013
End Date 31/10/2016


Document Information

Deliverable number: 2.2
Deliverable title: Report/publication(s) on Algorithms for Surface Synthesis and Visual Artifact Reduction
Deliverable due date: 31/10/2015
Actual date of delivery: 31/10/2015
Main author(s): Gabriel Brostow (UCL), Stefan Guthe (TUD)
Main contributor(s): George Drettakis (INRIA)
Version: 1.0

Versions Information
Version | Date | Description of changes

Dissemination Level PU Public

PP Restricted to other programme participants (including the Commission Services)

RE Restricted to a group specified by the consortium (including the Commission Services) X [for now]

CO Confidential, only for members of the consortium (including the Commission Services)

Deliverable Nature

R Report X

P Prototype

D Demonstrator

O Other

CR-PLAY Project Information
The CR-PLAY project is funded by the European Commission, Directorate General for Communications Networks, Content and Technology, under the FP7-ICT programme. The CR-PLAY Consortium consists of:

Participant Number | Participant Organisation Name | Participant Short Name | Country
1 (Coordinator) | Testaluna S.R.L. | TL | Italy
2 | Institut National de Recherche en Informatique et en Automatique | INRIA | France
3 | University College London | UCL | UK
4 | Technische Universitaet Darmstadt | TUD | Germany
5 | Miniclip UK Limited | MC | UK
6 | University of Patras | UPAT | Greece
7 | Cursor Oy | CUR | Finland

Participants 2-7 are Other Beneficiaries.


1 Summary

In this deliverable we present the results of our work on Surface Synthesis and Visual Artifact Reduction. In particular, we present a new surface synthesis approach with the first results in this research area, first results towards an automatic visual artifact detector, and finally two techniques for reducing visual artifacts.

2 Contents

1 Summary
2 Contents
3 Introduction
4 Surface Synthesis
  4.1 Introduction and Context
  4.2 Related Work
    4.2.1 2D Texture Synthesis
    4.2.2 3D Texture Synthesis
    4.2.3 Geometry Synthesis
  4.3 The Repetitive Geometry Algorithm
    4.3.1 Use of Blender
    4.3.2 Geometry synthesis algorithm with complex shapes
  4.4 Evaluation & Experiments
    4.4.1 Evaluating the practice example
    4.4.2 Evaluating the repetitive geometry algorithm with simple shapes
  4.5 Conclusions & Future Work
    4.5.1 Conclusions
    4.5.2 Future Work
5 Reducing visual artifacts
6 Luminance Harmonization
7 Visual Artifact Reduction with Bayesian Selective IBR
8 Conclusions
9 Bibliography


3 Introduction

In this deliverable we present the results of our work on Surface Synthesis and Visual Artifact Reduction. In particular, we present a new surface synthesis approach with the first results in this research area, first results towards an automatic visual artifact detector, and finally two techniques for reducing visual artifacts.

Section 4 describes an approach to synthesize surface detail. We start with a short presentation of the state of the art in this relatively young area, then present our new algorithm for synthesizing repetitive geometric structures, together with initial ideas on how to do this from image examples. This work is mainly the result of a Masters project by L. Legaye at UCL under the supervision of G. Brostow. Section 5 describes our first approach to developing an automatic detector for visual artifacts. Two important visual artifacts in Image-Based Rendering are ghosting (repetition of image features due to blending of two or more input images) and popping (a sudden change when switching pixels from one input image to another). Identifying when these artifacts appear is a major challenge which we address here. The following two sections (6 and 7) present two smaller contributions which nonetheless affect final quality. The first addresses the fact that the colours of the same surfaces vary slightly across input images, so blending between images creates artifacts; we present a simple method to overcome this problem. The second demonstrates the reduction of some visual artifacts by the new Bayesian Selective Algorithm (see also Deliverable 2.1). Even though the algorithm was described there, a more complete illustration of the artifact reduction is briefly presented here.

NOTE: In the original description of work, this Deliverable corresponded to 17 person-months (PMs) of effort. As explained in the periodic report, there has been a shift of 7 PMs from WP2 to WP3, which corresponds to a reduction of 5 PMs for this deliverable; the work described here therefore corresponds to 12 PMs.


4 Surface Synthesis

We have built our first working prototype for synthesizing pre-specified surfaces. In the long term, such synthesis methods will allow the creation of detailed (typically repetitive) geometry from example images using a simple initial “proxy”, which could then potentially be used as traditional or image-based assets in the CR-PLAY pipeline. This is a very ambitious goal which requires extensive research to become usable in a practical context. In this section we present the initial prototype resulting from our research in this direction.

As input, the user first builds a 3D geometric proxy shape, usually by stretching one or more cubes. This step is meant to be fast, and defines the general proportions of a shape, whose outer “skin” will be refined. The user then chooses, from a menu, one of the available classes of surface, such as a football goal, tennis racket, archway, brick wall, or ladder. They can modify the script, influencing some parameters, but the chosen class-specific surface will be self-consistent. For example, the spacing of holes is automatically adjusted so that the pattern lines up at corners, and gaps are equally distributed. Mechanically, one can think of the subsequent process as using a large piercing object (such as the arch shown in the middle of Figure 3) to drill holes through the proxy cube. Boolean operations on 3D geometry serve a related purpose, but the constraint-optimization that avoids stretching or clipping of shapes is the main novelty here, building on previous methods.

This report mostly covers the background works that relate to 3D texture synthesis, before giving a brief summary of the proposed synthesis technique. Broadly, this report addresses the context of the problem, and only one of several possible solutions we had considered. Notably, we have not yet explored the prospects for capturing 3D surface details from specific real-world environments/settings. That formidable challenge remains at the edge of our scope, because it requires both good capture of surface details “in the wild,” and novel algorithms that can generalize parametric patterns from only a few samples. For now, the parametric class-specific surface details are pre-created by us, then applied for content creation by non-expert users.

4.1 Introduction and Context

Texture synthesis and inpainting are now established problem domains for image manipulation, but equivalent synthesis problems have remained almost entirely unexplored in 3D. 3D geometric assets, as used in video games, are usually painstakingly crafted by hand in their entirety. These elements are usually one-off creations, so each time a new object needs to be constructed, it is modelled from the ground up, with previous assets providing no benefit. The leading existing tool for adding 3D surface details to “simple” base meshes is ZBrush from Pixologic, pictured in Figure 1. The CR-PLAY project is largely, though not entirely, about capturing and harnessing real-world scenery and dynamics. The aims of this particular sub-project are to parameterize some of that realism and apply it on top of manually modelled base meshes.


Figure 1 Example object model whose details were created using the ZBrush commercial software. Note the basket-woven pattern of the hat and the details of the eyebrows, both of which had to be created manually with great effort. Image from 3DVF, by Magdalena Dadela.

We observed that users/artists are happy to model a cube or other primitive 3D shape in Blender, but their productivity drops quickly when they need to augment the 3D shape with specific surface details. We take on the classes of objects that have regular patterns of holes, because these occur frequently in man-made scenes and would otherwise require careful planning by the artist to achieve the desired effect. Figure 2 below shows the goal of this project: the artist models and stretches a cube, then applies an automatic “football goal” procedure (in script form) to it, which cuts holes and re-shapes parts of the cube so that spacing is nearly uniform and both corners and proportions are respected.

Figure 2 A simple cube was stretched by the user, and then the "football goal" surface-generation script was executed. Our system takes care to synthesize sides of a goal with holes that line up at the corners, adapting to the initial cube's dimensions.

The case of the arch in Figure 3 also demonstrates a more general capability of our system: creation of the piercing object from a bitmap image. By supplying or painting a different 2D binary image, the user obtains a 3D shape whose holes have a different profile. The existing prototype has a front-end interface accessible from the 3D modelling software Blender. The system is available with source code and the original documentation in the MSc thesis of [Legaye 2015].

The following sections cover the related work, then explore and demonstrate the proposed method. Beyond the current incarnation of this project, the next steps should focus on simplifying the creation of new classes. Presently, the constraints within each class and the script design are hand-specified. While this already allows users to share their classes as a global (and ever-growing) library, it requires more expertise than snapping and annotating a colour or RGB-Depth image sequence. Simplifying that process would be a major research contribution as well.

Figure 3 Pont du Gard, as photographed by Edouard Baldus (1860), and generated in 3D from a stretched cube, a bitmap defining the shape of an arch, and a bridge-specific script that defines the arch layout.

4.2 Related Work

This section gives an overview of what has been done in the areas of 2D texture synthesis (Section 4.2.1), 3D texture synthesis (Section 4.2.2) and geometry synthesis (Section 4.2.3). Taken together, it shows the different breakthroughs that have been made in surface synthesis research.

4.2.1 2D Texture Synthesis

Numerous works have explored the design of non-repetitive textures. Such textures are especially useful for filling a larger 2D domain: the goal is to create a larger image from a small one, without simply replicating the image in tiled fashion. Tiling would reveal boundaries between the constituent images, broadcasting that the texture is artificial. One nice improvement on tiling is Image Quilting for Texture Synthesis and Transfer [EF01], which synthesizes a new image by stitching together patches of the input image. This was one of the first successful non-parametric texture synthesis methods based on copying parts of the input.

Figure 4 Square blocks from an input texture are patched together to form a larger texture sample: (a) random blocks; (b) each block is chosen to match its neighbour at the boundaries; (c) the boundary is computed to minimize the cost at the overlap. Courtesy of Alexei A. Efros and William T. Freeman [EF01].


Another example is Graphcut Textures: Image and Video Synthesis Using Graph Cuts [KSE+03]. There, the authors not only select blocks from the input image but whole patch regions that share patterns, and then choose an optimal sub-region to keep. By selecting regions where the costs at the boundaries are small, they improve the results, obtaining more complex textures with invisible seams.

Figure 5 From a sample texture, stitching together patch regions along optimal seams to generate a larger output, courtesy of Vivek Kwatra, Arno Schoedl, Irfan Essa, Greg Turk and Aaron Bobick [KSE+03].

These articles were the foundation and inspiration for Wang tiles, named after the mathematician Hao Wang, who introduced them in 1961. Each tile is defined by a colour on each edge, and Wang studied tiling an infinite plane with a given set of tiles under the constraint that abutting edges of adjacent tiles must have the same colour. We can do exactly the same thing with textures: in Tile-Based Texture Mapping on Graphics Hardware [Wei04], an output texture is generated from a very small sample using the Wang tiles method.
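To make the edge-matching constraint concrete, here is a minimal sketch (ours, not code from [Wei04]) that tiles a plane with Wang tiles over two edge colours; the tile set and colour names are illustrative:

```python
import itertools
import random

# A tile is a tuple of (north, east, south, west) edge colours. Using the
# complete set over two colours guarantees that a matching tile always exists.
TILES = list(itertools.product("rg", repeat=4))

def tile_plane(rows, cols):
    """Fill a grid so that adjacent tiles always share the same edge colour."""
    grid = [[None] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            fits = [t for t in TILES
                    if (x == 0 or grid[y][x - 1][1] == t[3])    # west edge matches neighbour's east
                    and (y == 0 or grid[y - 1][x][2] == t[0])]  # north edge matches neighbour's south
            grid[y][x] = random.choice(fits)
    return grid
```

With a complete tile set a match always exists; the art in [Wei04] and [CSHD03] is to pack a much smaller set of texture tiles that still preserves this property.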

Figure 6 Overview of texture mapping based on the Wang tiles method: (a) the original texture; (b) identification of the boundaries of the different patches of the original texture; (c) how the tiles can be stored; (d) what can be generated from the packed tiles. Courtesy of Li-Yi Wei [Wei04].

The same kind of procedure can be seen in the paper Wang Tiles for Image and Texture Generation [CSHD03].


4.2.2 3D Texture Synthesis

The same concepts can be extended to 3D. Papers such as Lazy Solid Texture Synthesis [DLTD08] already show very good results: from a small texture and bump-map sample, they can recreate a non-repetitive texture over an entire mesh. The difficulty here is that the number of neighbours can become very high if the mesh is complex. With their algorithm, each voxel depends only on a small number of voxels; the more the dependency on other voxels can be reduced, the more computation speed is gained. The algorithm therefore pre-computes a small set of candidate 3D neighbourhood textures so that it can choose quickly when computing the surface of the mesh.

Figure 7 Synthesis of a solid texture on a surface from a sample texture, courtesy of Yue Dong, Sylvain Lefebvre, Xin Tong and George Drettakis [DLTD08].

Other articles address the same problems, such as Synthesis of Bidirectional Texture Functions on Arbitrary Surfaces [TZL+02]. Again, using their Bidirectional Texture Function, they fill an entire 3D surface from a small sample image. To create their texture they use the same kind of algorithm as Image Quilting for Texture Synthesis and Transfer [EF01]; in fact, the problem is almost the same in 2D and in 3D, the complexity simply increases in 3D because we deal with voxels, edges and faces rather than only pixels.

Adapting textures to meshes is also quite interesting, with several good papers on the subject. A few projects build on the idea that, even as a mesh changes, the texture and surface detail can adapt to the transformation. The paper Detail-Replicating Shape Stretching [AZL12] focuses on the stretching operation: to generate a new stretched surface for a mesh, the detail is transformed with an optimization that avoids distortion artifacts. This shape-stretching method shares some aims with our own: when we generate the 3D base geometry, we always adapt to the dimensions of the input object. The user chooses the dimensions of the final object, and our algorithm must take this into account to fulfil the user's expectations.

Figure 8 On the left, the original mesh; in the middle, the stretched surface over the transformed mesh; on the right, the result of the detail-replication algorithm, courtesy of Alhashim et al. [AZL12].

4.2.3 Geometry Synthesis

A lot of work has tried to synthesize patterns on real-world objects. This is not simple to do, and harder still to generalize, because the situations are always different: it can be easy to build something that generates a road, say, yet very difficult to generalize the approach to other examples. The article Geometric Texture Synthesis by Example [BIT04] shows some original results on synthesizing geometry.

Figure 9 The cylinder on top is used to modify the surface of a seahorse, courtesy of Bhat et al. [BIT04].

As mentioned above when discussing 3D texture synthesis (Section 4.2.2), the key idea is to compute values for each voxel and compare them to its neighbourhood. In that article, a coordinate frame (T, N, B) is computed for each voxel: T is built from direction vectors on the surface, defined by the user and then diffused across the surface using an interpolation technique described by Turk [Tur01]; N is the normal vector, and B is the cross product of the two. Then, given a voxel with its coordinate frame (T, N, B), neighbouring values are fetched along that frame rather than along the (x, y, z) axes. This lets the algorithm follow the shape of the object exactly and modify the geometry with the right values. This idea of creating an object by looking at each voxel is very interesting, and it can be generalized to a larger scale: Paul Merrell and Dinesh Manocha [MM11] have used the same approach, but directly on objects.

Figure 10 From an input (a), a larger road network is generated by using the connectivity constraint (b), courtesy of Merrell & Manocha [MM11].

Here we can see that from a very small input they are capable of generating a huge road network simply by using the connectivity constraint we already saw with the Wang tiles [CSHD03] (Section 4.2.1). Moreover, to ensure the algorithm does not generate any isolated loops in the road, they choose a region-growing approach in which everything starts from a road seed and grows outward from it. And as the procedural-modelling work associated with Peter Wonka shows, this kind of generation can scale to very large scenes [MWH+06].

Figure 11 Various views of the procedural Pompeii model, courtesy of Peter Wonka [MWH+06].

Obviously, this is an advanced algorithm, and our goal in this project is not to generate such complex topology. What we try to do instead is generate simpler geometry while making it more responsive to what the user really wants. With this aim in mind, we want to implement something that lets the user be part of the process, making the system more interactive. In this respect, the work of Reichl is interesting because the output of his algorithm depends on the shape of the input [Rei13].

Figure 12 Roofs generated with procedural modelling; each roof has different edges. Courtesy of Pavel Reichl [Rei13].

As we can see, the results are quite good and adapt to the global shape of the original mesh. This way, the output stays in line with what the user wants, and the algorithm remains flexible.


4.3 The Repetitive Geometry Algorithm

This section describes our method for creating the repetitive geometry and the different algorithmic choices made to improve performance. As explained in Section 4.3.1 below, we decided to use Blender because of its Python interface: the Python scripting interface is very simple to use, and the fact that Blender is free and open-source makes our algorithm easy to deploy on every platform. As described in this section, we added several features during development. First, the user can choose the dimensions of the object, so the algorithm adapts the creation of the geometry to the lengths of the user's object. Second, we added a way to create repetitive geometry either by cutting holes in the object or by creating the geometry directly. Finally, we added a 2D contour-detection feature that lets the user choose the shape of the repetitive geometry to create, by converting 2D contours into a 3D object in Blender; this part was coded in C++ with the OpenCV library.

4.3.1 Use of Blender

The Blender interface is used as the front-end to our system. A custom stand-alone software system could have been built, but the import/export of geometry, the familiar interface and the plugin functionality of Python scripting all pointed to using Blender instead. 3D previsualization of scenes allows a user to quickly design a rough version of a scene, including the relative or absolute proportions of important objects and props. Additionally, the Boolean operations engine within Blender allows for 3D differencing, which we leverage to pierce objects with desired shapes to make patterns.
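As a minimal sketch of how this piercing step can be driven from Blender's Python API (written against a recent bpy API; the project used Blender 2.75 [Fou15a], whose operator arguments differ slightly, and all object names and dimensions here are illustrative):

```python
import bpy

def pierce(base, piercer):
    # Cut `piercer` out of `base` with a Boolean DIFFERENCE modifier,
    # then apply the modifier so the hole becomes real geometry.
    mod = base.modifiers.new(name="pierce", type='BOOLEAN')
    mod.operation = 'DIFFERENCE'
    mod.object = piercer
    bpy.context.view_layer.objects.active = base
    bpy.ops.object.modifier_apply(modifier=mod.name)

# Illustrative usage: stretch a cube into a wall, then drill one cylinder through it.
bpy.ops.mesh.primitive_cube_add(size=1.0)
wall = bpy.context.object
wall.scale = (4.0, 0.2, 2.0)              # user-chosen proportions of the proxy

bpy.ops.mesh.primitive_cylinder_add(radius=0.15, depth=1.0)
hole = bpy.context.object
hole.rotation_euler = (1.5708, 0.0, 0.0)  # point the cylinder through the wall

pierce(wall, hole)
```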

Geometry synthesis algorithm with simple shapes

As the previous part showed, the main components of the code have already been presented; now we gather the different parts together to obtain a tool that can generate repetitive geometry. The user needs to specify some values for the creation of the geometry. First, as before, the user creates an object of the desired shape, which will contain the generated repetitive geometry. Then the user specifies the shape of the pattern, its dimensions, and the distance between repetitions. An example input could be: a cube of given dimensions, with three circular holes of 3 cm along the x-axis, 1 cm between each of them, and the same along the y-axis. Note that the algorithm creates the repetitive geometry on a single face (here, the face in the plane orthogonal to the z-axis). It is possible to apply the algorithm several times, as we will see later (Section 4.3.2.2).

Positioning the different parts of the geometry

As soon as the bounding box of the user-supplied primitive is computed (often itself a box), we can compute the positions of the repeated parts of the geometry. Even if the user fixes some aspects, such as the number of repetitions along each axis or their dimensions, our optimization can still compute the distance between parts. This approach is based on the Procrustes method. Once the layout and repetitions of the piercing object have been computed, a Boolean operation cuts holes in the base geometry, either one piercing-replica at a time or all at once; we found it safer to loop over the one-at-a-time approach.
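The corner-aware optimization itself is more involved, but a simplified 1D version of the spacing computation illustrates the constraint that all gaps, including those at the two ends, come out equal (the function name and interface are ours):

```python
def hole_centres(length, n_holes, hole_size):
    """Place n_holes of width hole_size along an edge of the given length so
    that all gaps (including those at both ends) are equal. Returns the centre
    coordinate of each hole, or [] if the holes cannot fit."""
    if n_holes * hole_size > length:
        return []  # invalid input: holes would overlap or exceed the edge
    gap = (length - n_holes * hole_size) / (n_holes + 1)
    return [gap * (i + 1) + hole_size * i + hole_size / 2.0
            for i in range(n_holes)]

# Example from the text: three 3 cm holes along a 13 cm edge give 1 cm gaps,
# with hole centres at 2.5, 6.5 and 10.5 cm.
print(hole_centres(13.0, 3, 3.0))
```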


4.3.2 Geometry synthesis algorithm with complex shapes

After creating different kinds of repetitive geometry with the simple shapes Blender already provides, as in the previous section, we now describe code that generates repetitive geometry whose shape is created from a 2D binary image. We wrote a C++ program using the OpenCV library [Fou15b] that computes the contours of a 2D binary image, and implemented a function in Blender that imports those contours and creates a 3D object from them.

4.3.2.1 Contour extraction from a 2D binary image

There are two parts in the code for contour extraction. First, we need to find the contour in the 2D binary image; for this we adapted C++ code from the OpenCV library [Fou15b], using two different functions. The first, findContours, detects every contour by following the borders of the object [SA85]. This keeps a large number of border points, which can cause problems for large images. To illustrate, consider a curved arch: it can be modelled as a very large number of short piecewise-linear segments or, at the loss of only a little accuracy, as longer segments that still give the arched look. Such geometry simplification is often used to reduce the memory/vertex budget of game assets. Specifically, if a straight line is detected on the object, we do not need to keep all the points on that line; only the two extremities are necessary. We therefore approximate the contour by reducing the number of points: the second function approximates a curve or polygon with a simpler one with fewer vertices, using the Douglas-Peucker algorithm [DP73]. Once the contour points are identified, we store their values in a format Blender can read: a .csv file containing the x and y values of each contour point. To run the C++ program, we call the executable from a console with the name of the image; the result of the analysis is stored in the file result.csv. Finally, we read this file in Blender line by line, create a vertex for each line, and create a face from all the vertices. The last step is to place the origin of the object at the centre of mass, for easier use by the rest of the code.
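A Python transcription of the described pipeline might look as follows (the project's tool is written in C++ against the same OpenCV API; the file names are illustrative, and the findContours return signature shown is that of OpenCV 4):

```python
import csv
import cv2

img = cv2.imread("arch.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Border following [SA85]: one point list per detected contour.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Douglas-Peucker simplification [DP73]: straight runs keep only their endpoints.
largest = max(contours, key=cv2.contourArea)
epsilon = 0.002 * cv2.arcLength(largest, True)
simplified = cv2.approxPolyDP(largest, epsilon, True)

# Store x,y per vertex; the Blender script reads this back to build a face.
with open("result.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for point in simplified.reshape(-1, 2):
        writer.writerow([int(point[0]), int(point[1])])
```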

4.3.2.2 Combining the algorithms in concrete examples

This time, all we want to do is replace the simple shape used for the repetitive geometry of Section 4.3.1 with the new kind of geometry created by the code described just above. We already have the code that generates the repetitive geometry and the code that creates a 3D object from a 2D binary image; all we need to do is combine the two pieces of code to create the geometry we want.

The Pont du Gard

Here, we would like to recreate a well-known French bridge with repetitive geometry: the Pont du Gard. To reproduce its arches, we created a 2D binary image of the same shape; this allowed us to extract the contours and use the resulting 3D object to cut holes in the user's input object. The user's input is a cube stretched to the dimensions of a real bridge. Obviously, the user can choose any dimensions for the cube, but for a good-looking result certain proportions have to be respected. Then, as soon as we have the bounding box of the object, we can begin to synthesize the geometry. By adapting the code, we create two rows of big arches and, on top, a row of small arches with dimensions relative to those of the input object.


All we did for the bridge example was to adapt the code we had already written in terms of repetition and size. An experienced user of our algorithm can create such an object in a few minutes.

4.4 Evaluation & Experiments

Now that we have explained the algorithm in detail, we need to evaluate its performance. To do so, we test it with different input objects and repetitions and see whether it is robust enough to produce good results. As explained in Section 4.3, creating something that adapts to what the user wants is essential: we want our algorithm to produce reasonable results whether the user creates a very small object or, at the opposite extreme, a much larger one than usual. This evaluates the creation of the repetitive geometry, but we also need to evaluate the precision of our contour detection; for that, we examine the impact of the input image on the result obtained in Blender. First, Section 4.4.1 briefly covers the behaviour of our example code with the barrier; then we look at the repetitive geometry algorithm (Section 4.4.2) and finally at the evaluation of the contour detection and the objects created with it (Section 4.4.2.3).

4.4.1 Evaluating the practice example

As described in Section 4.3.1, the goal of this algorithm is to create a simple barrier. Here, the only thing the user can change is the size of the input object. As a quick reminder, the input object is a cube which embodies the bounding box of the future barrier.

Figure 13 Input object in Blender; the user can choose its dimensions. On the right: three possible outputs.

We tested this simple case successfully, as shown in Figure 13 above.

4.4.2 Evaluating the repetitive geometry algorithm with simple shapes

In this part, we examine how our algorithm deals with different shapes and repetitive geometry. First we test the general algorithm with different sizes and shapes (Section 4.4.2.1); then we look, across several examples, at what kinds of problems may arise with our algorithm and where it does well (Section 4.4.2.2).

4.4.2.1 The general algorithm

In this first part, we test our general algorithm with different shapes and sizes to see whether it behaves as planned. Recall that the general algorithm can deal with any shape of input object and any number of repetitions. To evaluate it, we pierce different kinds of objects with several simple shapes and check that the algorithm does not crash.


Moreover, the user can choose the number of repetitions along each axis and also define the size of the holes. Nevertheless, if the size of the holes times the number of repetitions is larger than the bounding box of the object, the algorithm should do nothing, because this means the user did not provide correct input.

We need to try the algorithm with different shapes and distances. To do this, we pierce a sphere with several parameter settings to see how the algorithm behaves, and we measure the computation times to see the impact of the number of repetitions.

As we can see, the number of repetitions influences the computation time. After an initial overhead cost, the increase is linear, as shown in the figure below.

Figure 14 Graph of computation time as a function of the number of repetitions.

It is noteworthy that the shape of the piercing object makes a difference: angular shapes can have fewer vertices, costing the algorithm less time.

Figure 15 Different views of the results obtained with the generalized code.

As the Figure 14 graph shows, the computation time is five times larger with a cylinder as the repeated shape than with a cube.

4.4.2.2 The concrete examples

In this part, we examine, for each example described in Section 4.3.2.2, how well the algorithm adapts to the user's input. The most important check is whether everything is in the right place for several dimensions of the input object.


Wall of bricks

Here, we evaluate the performance of the algorithm that creates a brick wall. As above, we test it simply by changing the dimensions of the input object.

But, as we can see, the results are disappointing for very small and very large input objects. This is because the user needs to adapt the code to the input object: he cannot simply run the algorithm and have everything computed for him. In fact, he needs to decide the depth of the holes cut into the wall and the distance between bricks, which are obviously related to the original size of the mesh.

Figure 16 An output of a small brick wall with the adapted code (left) and the original output (right).

As we can see, by adapting the algorithm we can obtain good results for other dimensions too. Nevertheless, the user must be aware that he has to adapt the code to what he wants.

The goal

For the goal, the same kind of procedure is followed to see how the algorithm adapts to the user's input. This time, the goal is produced by applying the algorithm repeatedly.

Figure 17 Output of the soccer goal for different input values.

As we can see, the algorithm respects the user's input and creates a soccer goal of the right size. Nevertheless, the Blender implementation may have difficulty with very complex meshes.


Nevertheless, the general algorithm works very well, and the conditions at the intersection between two faces are respected. Indeed, two strings that arrive at an edge meet at the same position; this means there is no misalignment and the algorithm works correctly.

Figure 18 Several junctions between faces of the soccer goal; the strings meet at the same point.

Tennis racket

Finally, the last example is the racket. The conclusions are much the same as for the previous examples, demonstrating that our prototype functions as required.

Figure 19 Output of the tennis racket for different input values.

4.4.2.3 Evaluating the repetitive geometry algorithm with complex shapes

In this part, we only evaluate the contour detection and the creation of the 3D object in Blender, because the rest of the algorithm is just a combination of those two parts.

After many runs with different objects, it appears that the binary image has to be very clean, that is to say free of noise, for the contours to be detected correctly.

Finally, we can see through all the examples that the algorithm works well and produces good results. It is easy and fast to create many different kinds of repetitive geometry; there are many possibilities with this algorithm, and we only experimented with a few in this project, but a user who wants to create repetitive geometry has here a genuinely good tool offering simplicity, efficiency, scalability, and overall time-saving.

Figure 20 Different views of the results obtained with the Pont du Gard bridge script.

As we can see, the results are also quite good for the contour detection and, consequently, for the repetitive geometry part. Again, the computation time does not exceed 30 seconds, so the algorithm is quite fast.

4.5 Conclusions & Future Work

4.5.1 Conclusions

This project proposed a way of creating repetitive geometry in a simple and fast fashion, while working within the Blender framework and UI. This approach can save artists significant time if they are trying to add a kind of surface detail that has already been parameterized, such as holes or indentations that appear in some repeating pattern. We have already created an initial set of semantic classes, and this set will grow as more users contribute.

4.5.2 Future Work

One remaining drawback of our algorithm is the contour detection part: at present we can only extract contours from 2D binary images. One big improvement would be to extract contours from natural images, without pre-processing or segmentation. As with depth-map hallucination, we would then be able to recreate surface details from 2D images, which would let us create more difficult objects such as gargoyles or window frames. Another concern is the time needed to add a new type of surface detail to the library. We would like to explore ways in which fine-scale 3D scanning of surface details could feed a generative model of surface details.


5 Reducing visual artifacts

Since the number of input camera images for IBR rendering is limited by a user-selected maximum asset size, we have to select a subset of images for the IBR asset; this is particularly important in the case of video-based input, where we wish to select a small subset of frames to use for IBR. Choosing the optimal set of cameras to maximize the visual quality of the IBR during this process is a challenging task. As a first step towards a dedicated selection strategy, we have defined a metric based on finding the most objectionable artifacts. These artifacts can be classified into two main categories: popping and ghosting. Popping is the rapid change of luminance or colour of one or more pixels without any apparent motion. Ghosting is a similar effect, except that the change is continuous over a small number of frames, e.g. an object fading in and out instead of moving continuously. To detect the artifacts for a specific subset of input images, we first set up a camera path representing a typical path during gameplay and create a video using IBR. The video is then analysed for occurrences of ghosting and popping with a specific detection algorithm developed during this reporting period. The number and severity of the artifacts found then corresponds to the quality of the rendering, both overall and for any given frame, allowing us to optimize the rendering, for example by selecting a different subset of input images. This use case is a no-reference setting (i.e. we must estimate the quality without a “reference” solution to compare to), since we only have rendered images for each camera position of the user-selected gameplay path. This approach therefore falls into the category of no-reference video quality assessment.

Figure 21 For each patch in an IBR rendered image, the algorithm performs an edge detection, followed by a Laplacian operator. Connected components are grouped into sets C and for each group c, the mean colour value of the corresponding input pixels is calculated. Finally a least squares problem is solved for each group of three components. If a set of weights, e.g. λ1 and λ2, sums up to one, we detect a ghosting artifact.

Ghosting detection exploits the fact that this artifact produces a region between object and background that is simply a linear combination of their luminance and colour. First the image is split into small regions that are analysed individually. Then all lines inside each region are found using a Canny edge detector (see Figure 21). The lines are filtered using a Laplacian filter to produce pairs of lines that each lie either completely inside or completely outside an object when no ghosting is present. In the presence of a ghosting artifact, however, one of these lines lies entirely inside a ghosting region. The algorithm therefore looks for a third line such that the luminance and colour of the line lying between the other two can be represented as a linear combination of the luminance and colour of those two. If a sufficiently long such line is found in a region, the region is marked as potentially containing a ghosting artifact. If a region is detected in multiple consecutive frames, with the same set of lines and linearly varying interpolation weights, we can be almost certain that the effect is due to ghosting in that region.
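The final least-squares test can be sketched numerically as follows (assuming the per-line mean RGB values have already been extracted by the Canny/Laplacian steps of Figure 21; the names and values are illustrative):

```python
import numpy as np

def ghosting_weights(c_inside, c_outside, c_between):
    """Solve min || [c_inside c_outside] @ lam - c_between || for the blending
    weights; ghosting is plausible when lam[0] + lam[1] is close to 1."""
    A = np.stack([c_inside, c_outside], axis=1)   # 3x2: mean RGB of the two outer lines
    lam, *_ = np.linalg.lstsq(A, c_between, rcond=None)
    return lam

# Hypothetical mean colours of the three detected lines in one patch:
obj = np.array([0.8, 0.2, 0.2])
bg = np.array([0.1, 0.1, 0.6])
mid = 0.4 * obj + 0.6 * bg   # a perfect 40/60 blend, as ghosting would produce
lam = ghosting_weights(obj, bg, mid)
print(lam, lam.sum())        # ~[0.4, 0.6], summing to ~1 -> flagged as ghosting
```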

Popping detection, on the other hand, works by computing the optical flow between consecutive rendered frames. In general, a popping artifact is most visible when a large, low-texture region of one luminance and colour is suddenly replaced by a region of different luminance and colour. Since the calculation of optical flow is based on luminance and colour gradients over space and time, it is not designed to robustly handle this kind of popping artifact, i.e. extremely large gradients in time combined with small gradients in space. The flow computation therefore simply fails to produce a flow vector for the most extreme cases of popping. Even when the popping region contains high-frequency textures, the calculated flow is very inconsistent and therefore of low confidence. Thus the confidence of the flow field correlates directly with the amount of popping artifacts in an image.
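Standard dense flow (e.g. Farneback's method in OpenCV) does not return a confidence value directly; one common proxy, used here purely as an illustration and not necessarily the project's choice, is forward-backward consistency:

```python
import cv2
import numpy as np

def popping_score(prev_gray, next_gray, tol=1.0):
    """Fraction of pixels whose forward and backward Farneback flows disagree;
    large inconsistent regions suggest popping between the two frames."""
    fwd = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(next_gray, prev_gray, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Follow the forward flow, then sample the backward flow at the target pixel.
    map_x = np.clip(xs + fwd[..., 0], 0, w - 1)
    map_y = np.clip(ys + fwd[..., 1], 0, h - 1)
    bwd_at_target = cv2.remap(bwd, map_x, map_y, cv2.INTER_LINEAR)
    # Consistent motion returns (approximately) to the starting pixel.
    err = np.linalg.norm(fwd + bwd_at_target, axis=2)
    return float((err > tol).mean())
```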

Figure 22 Detected popping (left) and ghosting (right) artifacts in an IBR sequence without reference video.

Since we can detect both the number and the severity of the rendering artifacts, we can define a quality metric that tells us which of the rendered images has the worst quality and in which region. With this information, we can make an initial guess as to which camera should be added to the set of previously used cameras to obtain the largest increase in rendering quality. We then need to explore similar configurations with the same number of cameras, since adding one camera might diminish the positive effect of previously selected ones. The reason for using a local approach rather than searching for the optimal subset of at most k out of n input cameras is the sheer number of possible combinations, i.e. the binomial coefficient (n choose k), which is already 184,756 combinations for selecting 10 out of 20 cameras.
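A sketch of this greedy-plus-local-refinement loop follows; the render_quality callback, which would render the test path and score it with the artifact detector, is hypothetical:

```python
def select_cameras(all_cams, budget, render_quality):
    """Greedily add the camera that most improves the quality metric, then
    locally swap single cameras while any swap still helps. `render_quality`
    is a hypothetical callback returning the metric for a camera subset."""
    chosen = []
    while len(chosen) < budget:
        best = max((c for c in all_cams if c not in chosen),
                   key=lambda c: render_quality(chosen + [c]))
        chosen.append(best)
    improved = True
    while improved:  # local exploration of similar configurations
        improved = False
        for i in range(len(chosen)):
            for new in all_cams:
                if new in chosen:
                    continue
                trial = chosen[:i] + [new] + chosen[i + 1:]
                if render_quality(trial) > render_quality(chosen):
                    chosen, improved = trial, True
    return chosen
```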

If the capture was done using video, we can use the actual camera paths instead of having the user select a typical gameplay path. Assuming these capture paths also represent typical gameplay paths, this has several advantages beyond not requiring manual path creation. First, it simplifies the detection of rendering artifacts, since we now have a reference video to compare against: popping becomes a sudden change in the difference between reference and rendering, while ghosting appears as a gradual increase in difference over several frames, followed by a decrease. This difference can, for example, be calculated using the structural similarity index. It also makes selecting an initial set of cameras easier, since we can simply choose the first and last frame of each path and add the camera positions that show the most disturbing artifacts. This simple greedy strategy will not, however, produce the optimal set of cameras, and it will need to be refined afterwards, similarly to the no-reference case.


6 Luminance Harmonization

At render time, IBR algorithms directly use content from multiple images to render novel views. However, colour consistency is not guaranteed during image acquisition, and this inconsistency produces blending artifacts. We implement an image harmonization procedure that consists in applying a global transformation M_i that maps each input image I_i to an output image Î_i in a median colour space:

Î_i = M_i I_i

The median colour targets Y_i are built from the multi-view colour information of the reconstructed points visible in I_i. The parameters of M_i are found by solving for X_i in the linear system Y_i = A_i X_i, where A_i contains the colours of the reconstructed pixels in I_i. This operation is performed independently for each colour channel, resulting in the new set of images Î_i with harmonized colours. We then use Î_i instead of I_i as input to the IBR algorithm, avoiding significant visual artifacts caused by blending inconsistent colours.
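As a numerical sketch of the per-channel fit (we model M_i as an affine gain/bias per channel, which is one plausible reading; the deliverable only specifies a global linear system):

```python
import numpy as np

def harmonize_channel(observed, target):
    """Fit a per-channel global gain/bias: `observed` holds the channel values
    of the reconstructed pixels in image I_i, `target` their median-colour-space
    values aggregated over all views."""
    A = np.stack([observed, np.ones_like(observed)], axis=1)
    (gain, bias), *_ = np.linalg.lstsq(A, target, rcond=None)
    return gain, bias

# Hypothetical red-channel samples from one image vs. the multi-view medians:
obs = np.array([0.20, 0.45, 0.62, 0.80])
med = np.array([0.24, 0.47, 0.60, 0.76])
gain, bias = harmonize_channel(obs, med)
corrected = gain * obs + bias  # applied to every pixel of the input image
```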

Below we see an example of the application of the harmonization approach, including the case of a red pixel and the differences between the two views before and after harmonization. In the original dataset, differences in RGB values were up to 14 (average 11), while after harmonization the maximum difference is 7 and the average 5.6: we have roughly halved the difference.

Even though the effect is subtle in the images above, the difference becomes much clearer during rendering, where these images are blended. This can clearly be seen in Figure 23 below, where the severe artifacts in the sky are greatly diminished after harmonization.

The current method is restricted to a single transformation for the entire image, which has proven sufficient in some cases but may need to be refined in more difficult ones. For these, a per-region (e.g., superpixel) approach may be appropriate, followed by a filtering step (e.g., bilateral filtering), similar to the approach developed in [Okura2015].


Figure 23 Left: rendered image before harmonization, with large artifacts visible in the sky. Right: after harmonization, artifacts are reduced.


7 Visual Artifact Reduction with Bayesian Selective IBR

As described in detail in Deliverable 2.1 and the subsequent publication [Ortiz-Cayon2015], which contains a more complete version of the algorithm as actually published, our new Bayesian Selective IBR method, in addition to its main benefit of reduced computation time, can also improve rendering quality. The following images show two cases of such quality improvements compared to the previous superpixel-warp approach [Chaurasia 2013].

Figure 24 Left: a novel view far from the input images, rendered with the new Selective Bayesian IBR. Right: the same novel view rendered with [Chaurasia 2013]. Notice how the new method improves the quality of the rendering of the curved balcony (top) and the metallic structure (bottom).


8 Conclusions

In this deliverable, we have described the work performed in the tasks of WP2 related to surface synthesis and artifact reduction. Specifically, we have described our surface synthesis approach, which is based on simple operators allowing the fast generation of repetitive geometric structures, together with initial ideas on how to extend it using example images. We have also described our first results on identifying the two most common visual artifacts in image-based rendering, namely ghosting and popping. Finally, we have presented two smaller improvements in the reduction of visual artifacts: luminance harmonization and the use of the selective IBR algorithm.


9 Bibliography

[AZL12] Ibraheem Alhashim, Hao Zhang, and Ligang Liu. Detail-replicating shape stretching. The Visual Computer, 28(12):1153–1166, 2012.
[BIT04] Pravin Bhat, Stephen Ingram, and Greg Turk. Geometric texture synthesis by example. In Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, SGP '04, pages 41–44, New York, NY, USA, 2004. ACM.
[BL08] Brent Burley and Dylan Lacewell. Ptex: Per-face texture mapping for production rendering. In Proceedings of the Nineteenth Eurographics Conference on Rendering, EGSR '08, pages 1155–1164, Aire-la-Ville, Switzerland, 2008. Eurographics Association.
[Chaurasia 2013] Gaurav Chaurasia, Sylvain Duchene, Olga Sorkine-Hornung, and George Drettakis. Depth synthesis and local warps for plausible image-based navigation. ACM Transactions on Graphics (TOG), 32(3):30, 2013.
[CSHD03] Michael F. Cohen, Jonathan Shade, Stefan Hiller, and Oliver Deussen. Wang tiles for image and texture generation. In ACM SIGGRAPH 2003 Papers, SIGGRAPH '03, pages 287–294, New York, NY, USA, 2003. ACM.
[DLTD08] Yue Dong, Sylvain Lefebvre, Xin Tong, and George Drettakis. Lazy solid texture synthesis. In Proceedings of the Eurographics Symposium on Rendering, EGSR '08, 2008.
[DP73] David Douglas and Thomas Peucker. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. The Canadian Cartographer, 1973.
[EF01] Alexei A. Efros and William T. Freeman. Image quilting for texture synthesis and transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '01, pages 341–346, New York, NY, USA, 2001. ACM.
[Fou15a] Blender Foundation. Blender 2.75, July 2015. Retrieved from https://www.blender.org/.
[Fou15b] OpenCV Foundation. OpenCV 3.0, June 2015. Retrieved from http://opencv.org/.
[KSE+03] Vivek Kwatra, Arno Schödl, Irfan Essa, Greg Turk, and Aaron Bobick. Graphcut textures: Image and video synthesis using graph cuts. In ACM SIGGRAPH 2003 Papers, SIGGRAPH '03, pages 277–286, New York, NY, USA, 2003. ACM.
[Legaye 2015] L. Legaye. Surface Synthesis of Details. MSc CGVI Thesis, University College London, 2015.
[MM11] Paul Merrell and Dinesh Manocha. Model synthesis: A general procedural modeling algorithm. IEEE Transactions on Visualization and Computer Graphics, 17(6):715–728, June 2011.
[MWH+06] Pascal Müller, Peter Wonka, Simon Haegler, Andreas Ulmer, and Luc Van Gool. Procedural modeling of buildings. In ACM SIGGRAPH 2006 Papers, SIGGRAPH '06, pages 614–623, New York, NY, USA, 2006. ACM.
[Okura2015] Fumio Okura, Kenneth Vanhoey, Adrien Bousseau, Alexei Efros, and George Drettakis. Unifying color and texture transfer for predictive appearance manipulation. Computer Graphics Forum (Proceedings of the Eurographics Symposium on Rendering), 34(4), 2015.
[Ortiz-Cayon2015] Rodrigo Ortiz-Cayon, Abdelaziz Djelouah, and George Drettakis. A Bayesian approach for selective image-based rendering using superpixels. In 3DV, 2015.
[Poi10] Pointwise. Automating small repetitive tasks, 2010. Retrieved from http://www.pointwise.com/library/Pointwise_Pointer_Fall10.pdf.
[Rei13] Pavel Reichl. Procedural modeling of buildings. Masaryk University (Masarykova univerzita), 2013.
[SA85] Satoshi Suzuki and Keiichi Abe. Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics, and Image Processing, 30(1):32–46, 1985.
[Tur01] Greg Turk. Texture synthesis on surfaces. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '01, pages 347–354, New York, NY, USA, 2001. ACM.
[TZL+02] Xin Tong, Jingdan Zhang, Ligang Liu, Xi Wang, Baining Guo, and Heung-Yeung Shum. Synthesis of bidirectional texture functions on arbitrary surfaces. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '02, pages 665–672, New York, NY, USA, 2002. ACM.
[Wei04] Li-Yi Wei. Tile-based texture mapping on graphics hardware. In ACM SIGGRAPH 2004 Sketches, SIGGRAPH '04, page 67, New York, NY, USA, 2004. ACM.
[WYZG09] Lvdi Wang, Yizhou Yu, Kun Zhou, and Baining Guo. Example-based hair geometry synthesis. In ACM SIGGRAPH 2009 Papers, SIGGRAPH '09, pages 56:1–56:9, New York, NY, USA, 2009. ACM.