Using Voxelization and Ray-Tracing to Identify Wall Thinness of Polygonal Models
Eric Fickenscher
Computing & Software Systems
Institute of Technology
University of Washington, Tacoma
Tacoma, WA 98402
MS CSS Capstone Design Project in Computing and Software Systems
Committee Chair: Isabelle Bichindaritz
Committee Member: Wayne Warren
Date of Submission: December 2008
Abstract
New technologies make it possible to print models in three-dimensions. Printers capable of 3D
printing take a 3D model described as a polygonal mesh as input and then build an actual
physical representation of that shape as output. One of these fabrication processes is called rapid
prototyping or layered manufacturing, in which models are built layer by layer from the bottom
up. The biggest limitation of layered manufacturing is a loss of accuracy: certain details are
liable to get lost during the printing process due to the thinness of the slices used, the size of the
nozzle head, and material warpage that is a byproduct of the heating and cooling necessary to
create each layer. Furthermore, sections of the final printed model that are too thin are at risk for
being structurally unstable. This paper presents a novel application of voxelization and ray-
tracing to identify areas of the model that fall below these minimum thickness requirements. The
algorithm classifies voxels according to their wall thickness; this classification provides a
visualization of the model that highlights areas that are too thin to print with accuracy or that risk
structural collapse. This technique can be used to quickly identify issues with models before
sending them to the 3D printer, which can prevent troublesome prints, saving both material and
time. This paper focuses on 3D models written in the ISO standard XML-based X3D file format.
Keywords: Voxelization, ray tracing, wall thickness, layer thickness, rapid prototyping, layered
manufacturing
Table of Contents
1. Introduction
2. Voxels and Voxelization
3. Ray Tracing
4. X3D, Computer Graphics, and Triangles
5. Octrees
6. Algorithm Design and Implementation
a. Triangle-cube intersections and the Separating Axis Theorem
b. Ray-triangle intersections and the manifold property
c. Classification of voxels using a flooding algorithm
7. Results
8. Conclusion
9. Acknowledgments
10. References
1. Introduction
There are several new models of three-dimensional printers currently on the market, and as the
technology is refined the 3-D printing marketplace will only continue to grow. Though today's
printers are not yet sophisticated enough to perfectly reproduce machinery at the smallest scale,
they are sufficiently accurate and precise to print 3-D objects at a resolution that many artistic 3-
D modelers require. It is very easy to imagine that as this technology advances, it could someday
be used to replace industrial manufacturing. Printers could be used to generate pieces that are no
longer fabricated on a wide scale, perhaps used to replicate legacy parts, such as screws for a
vintage automobile.
Three-dimensional printing is known by a variety of names including rapid prototyping,
layered manufacturing, or solid free-form fabrication. The rapid prototyping methodology is
rather simple: first, a 3D input file is converted so that the printer can 'perceive' the input model
as a stack of very thin layers. The printer then builds the bottommost slice of the object, spraying
a powder or polymer that hardens into the first layer. Next, the printer builds the layer above that,
and so on, spraying and bonding material to existing material, aggregating the model slice by
slice.
Despite the usage of the word 'rapid', this fabrication process can take many hours.
Nonetheless, the many hours it can take to print a prototype, or a model that has never before
been created, is actually much less than the time required to produce the same prototype with a
conventional manufacturing technology, such as die casting. Though die casting or injection
molding can be used to manufacture large quantities of parts quickly, the initial preparation
needed to build the necessary mold is high in terms of both cost and time. Rapid prototyping is
thus a relatively quick, competitive choice for production of 'one-off' models compared to
traditional manufacturing methods.
Rapid prototyping has other advantages as well. It can produce hollow shapes easily, as
well as more complicated usages of hollow space, including interlocking or nested parts à la a
ship-in-a-bottle or a set of Russian dolls. This technology, though readily available, is not exactly
at one's fingertips: unlike their ubiquitous two-dimensional counterparts, three-dimensional
printers are costly enough to put ownership of one out of reach for most. However, people from all over
the world can make use of one by sending model data to a 3-D printing company. These
companies will make a physical object out of your 3D model by converting the model into a
suitable file format, sending the information to the printer and then mailing you a copy of your
personal creation.
As is the case with any developing technology, rapid prototyping is not without its
imperfections. For instance, the materials used for 3D printing are relatively flexible and
resistant to breakage, but they lack the strength of the materials used in conventional fabrication
processes. As technology advances, metal alloys may eventually be used for rapid prototyping
which would remedy this problem, yielding parts with greater strength. For now, objects
produced by 3D printing are more suited for display than they are as machinery parts.
Perhaps the biggest drawbacks to rapid prototyping are related to printer resolution and
accuracy. For one thing, the printed models are in some sense an accumulation of fused particles,
and this means the finished product can have a somewhat granulated surface. More importantly,
like their two-dimensional counterparts, 3-D printers have a minimum resolution below which
detail is lost. The limiting factor is the size of the printer nozzle and the accuracy of the printer
head's movements. The printer simply cannot create details below a certain size. Furthermore,
depending on the resolution of the printer and thickness of each slice, a staircase effect can be
prominently visible. If significant portions of the input model were below the size threshold, the
resulting printed model would be of indeterminate quality: it would suffer from loss of detail and
it might be structurally unstable.
Thus, it is valuable for 3-D printing companies to analyze model input before sending it
to a 3-D printer. Clearly, it is advantageous to avoid any misprints; it is easier (at the very least, it
is considerably less expensive) to analyze and then re-scale a polygonal model than it is to print a
poor model, re-scale it, and print it again. It is important from a customer-satisfaction standpoint
that the finished, printed model is as detailed as the customer expects, and not 'lossy'. It is also
important from a financial standpoint not to waste any time or material on structurally unstable
prints.
This paper presents a novel application of voxelization and ray-tracing to identify areas
of the model that fall below these minimum thickness requirements. The algorithm classifies
voxels according to their wall thickness; this classification provides a visualization of the model
that highlights areas that are too thin to print with accuracy or that risk structural collapse. The
visualization can thus be used to evaluate the integrity of a model and determine if it is suitable
for printing. This technique can be used to quickly identify issues with models before sending
them to the 3D printer, which can prevent troublesome prints, saving both material and time.
The algorithm uses a series of three repeating steps. First it voxelizes a portion of the
input model. Next it uses ray-casting to define the voxels that are interior to the model. Finally, it
uses a sort of flooding technique to characterize each voxel according to its distance from the
'center' of the nearest interior wall. These steps repeat until the entire model has been processed,
and then a highly customizable visualization of the result is printed to an X3D file. This file
highlights overly narrow sections of the input model that are possibly structurally unstable and
that could lose detail in the printing process, compromising the customer's vision of the finished
product.
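The third step, the flooding technique, can be previewed with a minimal two-dimensional sketch. The grid, the breadth-first flood, and the distances below are illustrative stand-ins for the paper's actual voxel classification (described in section 6c), not its implementation:

```python
from collections import deque

def flood_distances(grid):
    """grid[y][x] is True for interior cells; return each cell's
    distance (in cells) to the nearest exterior cell."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    queue = deque()
    # exterior cells seed the flood at distance 0
    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                dist[y][x] = 0
                queue.append((x, y))
    # breadth-first flood: each step moves one cell further from the exterior
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))
    return dist

# a 3x3 interior block: its center is two cells from the exterior,
# while the ring around it is only one cell deep (a 'thin' wall)
interior = [
    [False, False, False, False, False],
    [False, True,  True,  True,  False],
    [False, True,  True,  True,  False],
    [False, True,  True,  True,  False],
    [False, False, False, False, False],
]
d = flood_distances(interior)
```

Cells whose distance falls below a chosen threshold would be the ones highlighted as too thin to print.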
To save space, the algorithm utilizes an octree implementation. It is very efficient in
terms of memory consumption, accepting input models of varying sizes and processing them to a
user-defined level of resolution. It allows scaling up to and beyond volumes of 1024x1024x1024
voxels. It is computationally efficient as well, traversing the geometric model just once.
Additionally, the method does not depend on GPU (graphics processing unit) hardware in order to
perform its computation, so it can be used on any personal computer.
Figure 1: Example of a 3-D model created by a 3D printer [1]
Figure 2: Example of a model printed by a 3D printer [1]
2. Voxels and Voxelization
What is a voxel?
Just as a pixel represents a certain coordinate in 2-dimensional space, a voxel represents a
certain coordinate in 3-dimensional space. The actual 'dimensions' of a voxel are somewhat
arbitrary, but given the resolution of today's 3-D printers, it makes sense for this application to
think of voxels as units 1 millimeter cubed or smaller. The ratio of maximum model volume to
minimum voxel volume is thus, as of this writing, on the order of 1,000,000,000 : 1. That is, printers
are capable of printing models of sizes up to 1 meter × 1 meter × 1 meter with layer thickness as
fine-grained as 1 millimeter, and a cubic meter contains one billion cubic millimeters.
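The arithmetic behind this ratio can be checked directly; the figures below simply restate the numbers above:

```python
# 1 m maximum model edge and 1 mm minimum voxel edge, as stated above
model_size_mm = 1000
voxel_size_mm = 1

voxels_per_axis = model_size_mm // voxel_size_mm   # 1000 voxels per axis
total_voxels = voxels_per_axis ** 3                # cubed for three dimensions
```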
What is voxelization?
The paper “Volume Graphics” describes voxelization as a process that represents objects
as a set of voxels [2]. In it, the authors write: “This stage, which is called voxelization, is
concerned with converting geometric objects from their continuous geometric representation into
a set of voxels that best approximates the continuous object. As this process mimics the scan-
conversion process that pixelizes (rasterizes) 2D geometric objects, it is also referred to as 3D
scan-conversion.” It is important to recognize that voxelization only approximates an object; it
does not replicate exactly, though it becomes more and more accurate as final voxel size is
reduced. The example in the following figure illustrates the effect in two dimensions.
Figure 3: Comparison of pixelization accuracy when pixel height and width are reduced 50%.
One can extrapolate from this two-dimensional example to predict that as voxel size is
reduced, the voxelization becomes more true to the input model. Though voxelization cannot be
an exact representation of an input object (aside from perfectly cubic input), it offers the
programmer great flexibility. One can voxelize a model to various voxel sizes, trading accuracy
for speed as desired. The capability of voxelization to offer varying degrees of accuracy makes it
a highly desirable tool for scalable algorithms.
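This accuracy-for-speed trade-off can be demonstrated with a small two-dimensional experiment (illustrative only, not taken from the paper): rasterize a unit disk at two cell sizes and compare the covered area against the true area.

```python
import math

def rasterized_area(radius, cell):
    """Approximate a disk's area by summing cells whose centers lie inside it."""
    n = int(math.ceil(radius / cell))
    count = 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            x, y = (i + 0.5) * cell, (j + 0.5) * cell
            if x * x + y * y <= radius * radius:
                count += 1
    return count * cell * cell

true_area = math.pi                                    # area of the unit disk
coarse_error = abs(rasterized_area(1.0, 0.2) - true_area)
fine_error = abs(rasterized_area(1.0, 0.1) - true_area)
# halving the cell size brings the approximation closer to the true area
```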
Voxelization is also convenient to use due to the simplicity of the underlying voxels. One
can think conceptually of voxels as cubes, which are an extremely simple and recognizable
three-dimensional shape. Voxels also provide a very useful volumetric representation that is easy
to interpret both mathematically (since it is easy to determine the volume of a cube) and
visually (since a quick glance at a voxelization gives the viewer a sense of the
structure's proportions). For these reasons of utility and practicality we felt that voxelization was
the tool of choice for highlighting areas of a model that were too thin to print with accuracy
and/or were at risk for structural collapse.
3. Ray Tracing
Ray-tracing is little more than a series of successive ray-triangle intersection tests. Put more
abstractly, ray-tracing throws a ray at a 3D model and counts exactly how many
times the ray touches the surface of the model. An example can be seen in the following figure:
Figure 4: Jordan Curve Theorem, illustrated
The two-dimensional ray-casting that occurs in the above figure shows a line passing
through a complicated concave shape. Thinking of the surface of the model as the black line
around the grey shape, the ray (the red line, appearing dark grey in black and white print)
intersects the surface of the model no fewer than ten times.
What exactly does that information mean to us? The answer is this: for any given point
along the ray, if we know the number of intersections that have occurred, then we know whether
we are within the interior of the object, or whether we are outside the object. Namely, an odd
number of intersections indicates that we are inside the model, whereas an even number of
intersections means we are exterior to the model. Referring back to the previous figure for an
example, take a point almost at the midpoint of the red line, somewhere between the “5” and the
“6”. Since we know we have encountered five intersections thus far, the point must be inside the
convoluted grey shape. On the other hand, there is no way to travel along the ray to a point
between the “6” and the “7” without encountering an even number of intersections (whether
passing through six intersections when starting from the left or encountering four intersections
when traveling from the right). Therefore the area between the “6” and the “7” must be exterior
to the model.
This technique makes use of a mathematical property known as the Jordan curve
theorem. The Jordan curve theorem is very intuitive; most people understand and embrace it
even prior to learning of its formal existence. At its simplest, it states that every two-dimensional
circle has an inside and an outside. In somewhat more formal terms, it defines
the interior and exterior for every simple (non-intersecting) closed loop: the interior is bounded
by the curve and the exterior is unbounded. This property can be extended naturally enough to
higher dimensions: for every non-self-intersecting closed 'loop' (spheres for example), the shape
has an interior and an exterior.
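The even-odd rule can be written down directly in two dimensions. The sketch below is an illustrative crossing-number test, not the paper's three-dimensional implementation: it casts a horizontal ray from a query point and counts edge crossings.

```python
def point_in_polygon(px, py, vertices):
    """Even-odd test: cast a horizontal ray to the right of (px, py)
    and count how many polygon edges it crosses."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > py) != (y2 > py):           # edge straddles the ray's height
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:                 # the crossing lies on the ray
                inside = not inside          # parity flips at each crossing
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

A point such as (2, 2) crosses one edge on its way out of the square (odd, hence inside), while (5, 2) crosses none (even, hence outside).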
The following figure demonstrates an example of using ray-casting specifically for the
purposes of a pixelization:
Figure 5: Pixelization and application of ray-casting
The previous figure shows a shape that has been (roughly) gridded up into pixels. By
casting multiple rays through each pixel in the shape, we can calculate, for each pixel, the
number of ray-shape intersections needed to reach it. For some pixels, it is
inarguable that an even number of intersections are needed to reach the pixel (zero intersections,
for instance, in the case of pixels along the outside border). These pixels are clearly exterior to
the model. For other pixels, however, it is clear that an odd number of ray-shape intersections are
needed to reach the pixel. In the above figure, we have colored these pixels green (or dark grey
in black and white print): they are interior to the model. The remaining pixels, unfortunately,
cannot be classified as easily. Counting the number of ray-shape intersections to reach these
pixels is more indeterminate. While it takes an even number of intersections to reach certain
sections of these 'surface' pixels, it takes an odd number of intersections to reach other sections
of these pixels. They are not clearly interior to or exterior to the model.
The above example can be extrapolated to three dimensions, where it operates in the
same essential manner. By casting a large number of rays through a three-dimensional model,
and counting the number of ray-shape intersections needed to reach each voxel, we can classify
each voxel as interior to the model, exterior to the model, or on the surface of the model. This
forms the basis for the 'flooding' technique used later on to classify voxels according to their wall
thickness. To understand why it is even necessary to take the step to classify voxels as inside or
outside, one must first understand some computer graphics basics, explained in the following
section.
4. X3D, Computer Graphics, and Triangles
Users interested in developing three-dimensional content first have to decide which tool or
technology to use. This is no easy task, for there is a considerably long list of capable 3D tools
and technologies. Of course, one could develop 3D modeling, animation, and rendering entirely
in assembly language, but with the complexity of today's 3D environments it would be an
exercise in tedium. A more likely solution is to make use of a 3D graphics application
programming interface (or API), such as OpenGL or Direct3D. OpenGL and Direct3D (a
component of DirectX) are perhaps the most well-known 3D graphics APIs, although there are
many others. These interfaces provide programmers with convenient ways to access low-level
graphics hardware.
For our purposes, we wanted to use a cross-platform 3D API that was suitable for
integration with Web Services. We found X3D ideal for these purposes. X3D is a royalty-free
open standards XML-based file format supported by the Web3D Consortium. With it, one can
render and animate 3D environments across the Web. In the words of the Web3D Consortium,
X3D “is an ISO ratified standard that provides a system for the storage, retrieval and playback of
real time graphics content embedded in applications, all within an open architecture to support a
wide array of domains and user scenarios. X3D has a rich set of componentized features that can
be tailored for use in engineering and scientific visualization, Computer-Aided Design (CAD), and
architecture, medical visualization, training and simulation, multimedia, entertainment,
education, and more” [3].
It is easy to find an X3D-capable browser, since the language has been around in one
form or another for over a decade. The predecessor to X3D was introduced in 1994 as the Virtual
Reality Modeling Language, or VRML. Though the VRML specification is still supported,
“…the development of real-time communication of 3D data across all applications and network
applications has evolved from its beginnings as VRML to the considerably more mature and
refined X3D standard” [3]. At one point, the top web browsers came bundled with VRML
support. Both Netscape Navigator and Internet Explorer were embedded with VRML browsers
[4]. In today's world of aggressive browser competition, native 3D browsing support has been
cut. Still, one can readily find an X3D browser available as a downloadable extension.
The advantage of using XML-based X3D is how easy it is to integrate the 3D model into
a Web Services-based system. For instance, any company that is familiar with XML will be able
to parse X3D easily. This also makes it relatively easy to convert X3D into other file formats as
needed. Because X3D is a specification rather than a particular implementation, it leaves it
entirely up to the browser to decide how to implement it. In other words, one 3D browser might
render X3D content using the OpenGL graphics API. Another browser
might choose to use DirectX to make the low-level calls instead. This allows vendors to optimize
different X3D browsers according to their users' needs.
Another interesting aspect of X3D might be described as its modularity. The Web3D
Consortium refers to this as the ability to be “componentized” [3]. What this means is that the
API itself is segmented into modular blocks of functionality. Browsers can then choose to
support one component of the X3D API and not another. This means developers and consumers
of X3D content can select or build an X3D browser that is perfectly suited to their needs. For
instance, engineers using X3D for modeling a pipe system would most likely want different
capabilities from their 3D browser than would an educator. If a group never uses a particular part
of the X3D specification, then it might benefit them to find a more lightweight browser that does
not support that part of the specification. Given that X3D is often meant for viewing 3D content
on the web, providing a way to streamline a browser via the profile level is very beneficial. An
example of a profile declaration is shown in the file below.
Figure 6: X3D file containing a sphere as a high-level abstraction
The previous figure shows an example of a short X3D file. The very first lines of the file
specify which version of X3D is used. Together with the profile level, the specification number
gives the 3D browser a chance to recognize if it is able to support this file. Due to the sheer size
and scope of the specification, not all browsers provide complete support for each and every
profile level. As with most technologies, X3D continues to evolve and so the specification
continues to grow as well. However, unlike some other technologies, X3D is not rapidly
changing. Stability and longevity are key; one might describe X3D as a conservative, non-hasty
language. In an effort to avoid deprecating feature sets, the Web3D Consortium makes it a goal
to minimize language bloat and is slow to introduce a rash of new functionality. This gives
emerging 3D concepts a chance to sort themselves out before they are absorbed into the next
X3D iteration.
The specification, being an ISO-ratified standard, is very well documented. The
X3D itself is also easy to parse: the format of <Element parameter='value'> is highly
recognizable to those familiar with XML encoding. The data itself is nothing more complicated
than tags and strings, as demonstrated in Figure 6. X3D gives us the ability to describe geometry
at a very high level of abstraction (should we so desire). For instance, to describe sphere
geometry, one need only write: <Sphere radius='2.0'></Sphere>. The standard unit of measure in
the specification is the meter, so this would describe a sphere with a radius of 2 meters. Just as
one would expect, when viewed in a browser one would see a round sphere (though one couldn't
necessarily determine the sphere's size without seeing its size relative to another object in the 3D
scene).
The following figure shows another X3D file, which would produce nearly identical
results when viewed in the same browser: a round sphere would appear on the screen. Note the
radical difference in the two files, however, and realize that the image below has been truncated
to save space:
Figure 7: X3D file containing a sphere, defined as a low-level Indexed Triangle Set
As has been mentioned, the idea of a sphere is a very high-level abstraction. The
computer does not really know the meaning of 'sphere'. Due to the grid-like lattice of pixels in a
computer display, there are no curved shapes or true circles. There are only illusions and
approximations thereof, with the use of straight lines to fool the eye. All a 3D browser does is
convert complex 3D shapes to the most basic of 3D shapes, the triangle, and then it determines
how to render the triangle information to a 2D monitor surface. Triangles are generally the most
basic, elemental 3D primitives because they are the smallest 'piece' of any other
three-dimensional shape. Any three non-collinear points define a plane; this is why a three-legged
stool will never be wobbly, whereas a four-legged stool can rock unevenly.
The above example is therefore a much better description of the knowledge of the
browser: it has no understanding of 'shape', caring only about the numbers involved. In this case,
an IndexedTriangleSet is used to describe a shape as a sequence of triangles. Every three
consecutive values in the “index” field constitute the three vertices of a triangle. Every three
consecutive values in the “point” field constitute the x, y, and z coordinates of a single vertex. To explain this with
an example, the first three index values are “17 0 1”. Since we start counting with zero, this
defines a triangle by the eighteenth point, the first point, and the second point in the Coordinate's
set of points.
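The decoding just described can be sketched as follows; the index and point data here are made up for illustration and are not the sphere mesh of Figure 7.

```python
def decode_triangle_set(index, point):
    """index: flat list of vertex indices, three per triangle;
    point: flat list of floats, three coordinates per vertex."""
    vertices = [tuple(point[i:i + 3]) for i in range(0, len(point), 3)]
    return [(vertices[index[i]], vertices[index[i + 1]], vertices[index[i + 2]])
            for i in range(0, len(index), 3)]

# two triangles sharing an edge, forming a unit square in the z = 0 plane
index = [0, 1, 2, 2, 1, 3]
point = [0.0, 0.0, 0.0,
         1.0, 0.0, 0.0,
         0.0, 1.0, 0.0,
         1.0, 1.0, 0.0]
triangles = decode_triangle_set(index, point)
```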
Note that there is no further information in the X3D file than the description of the
indices and a description of the coordinates. There is no mention of physics. There is no mention
of mass. From a computer graphics standpoint, nothing beyond the visual details of the model
need defining, because there is no inherent meaning to the data other than something to display
with appropriate lighting and reflection. If a user zooms a 3D browser to the middle of the
sphere, the browser can feign some understanding of the properties of 'inside' and
'outside'. But in reality, this is merely a choice of which side of the triangles to display, as the
following figure suggests.
Figure 8: Triangle winding using the right-hand-rule
The 'sidedness' of a triangle is typically described by the order in which the triangle's
vertices are declared. In an application utilizing the common “right-hand-rule”, the 'front' of the
triangle is seen when the vertices A-B-C appear counter-clockwise to the viewer. In the case that
the viewer observes a triangle's vertices reading clockwise, the viewer is looking at the 'back' of
the triangle. In other words, when the viewing point is placed in the center of the sphere, all of
the triangles forming the sphere will read clockwise to the viewer. The browser will recognize
this and can then choose not to display anything. Once the viewing point is back outside the
sphere, the triangles will read counterclockwise again and the browser will display them again,
making the outer shell of the sphere visible. Doing so creates the illusion that the computer
knows more about the 'inside' and 'outside' of the shape than it really does.
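After projection to screen coordinates, the winding test described above reduces to the sign of a cross product. The sketch below is illustrative; real browsers perform this back-face culling in hardware.

```python
def is_front_facing(a, b, c):
    """a, b, c: (x, y) screen positions of the triangle's vertices A, B, C.
    The z-component of (B - A) x (C - A) is positive exactly when
    A-B-C reads counter-clockwise to the viewer (the right-hand rule)."""
    cross_z = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return cross_z > 0

front = is_front_facing((0, 0), (1, 0), (0, 1))   # counter-clockwise: front
back = is_front_facing((0, 0), (0, 1), (1, 0))    # swapped order: back
```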
Understanding the limitations of graphics data gives us a better idea of why ray-tracing is
needed in the first place. Basically, we need a way to determine if a voxel, or unit
in space, is interior or exterior to the model so we can begin building on that knowledge to
determine wall thickness. It is also important to get a sense of the size and scope of data needed
to render any model. As the comparison between the <Sphere> file and the
<IndexedTriangleSet> file shows, a seemingly innocuous shape may merely be masking a rather
large set of data underneath. In fact, as the capabilities of rendering hardware improve, 3D
modelers and 3D artists will be using larger and larger polygonal meshes to model their data.
They desire greater numbers of polygons because a shape becomes more realistic
and less 'boxy' the more polygons are used to render the object. We see 3D content creators using
vast numbers of polygons as is, and the numbers of polygons used will only continue to grow.
Therefore, when performing computations on 3D models now and in the future, dealing with the
enormous memory costs is the biggest challenge. Indeed, one might need a specialized data
structure, such as an octree, to handle all the data.
5. Octrees
Voxelization, to reiterate, is a representation of a geometric shape as a set of voxels, or cubes. It
is natural when approaching voxelization to think only of the smallest voxel-cubes used in the
process. After all, the smallest voxels are what determine the accuracy of the final voxelization;
as their size is reduced, the voxelization becomes more 'true'. With this in mind, one (poor)
implementation of a voxelization algorithm would be to first make a flat list of every minimally-
sized voxel needed to represent the final model. Given a flat list, one could then evaluate which
voxels contain triangle geometry by performing a triangle-voxel intersection test for every
available triangle against every available minimally-sized voxel.
Calculating the algorithmic cost of such an operation shows us just how computationally
expensive that would be. Where M is the number of minimally-sized voxels and N is the number
of triangles in the triangle mesh, our intersection tests would be O(M*N). Note that M is
dependent upon the size of the input model. One calculates the number of minimally-sized
voxels needed to represent the model by first taking the bounds of the model and calculating
model width, height, and depth. Then the maximum number of minimally-sized voxels possibly
needed is equal to (model width/smallest voxel size)*(model height/smallest voxel size)*(model
depth / smallest voxel size). As the size of the model increases and the size of the smallest voxels
decreases, M grows cubically with the ratio of model size to voxel size.
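Plugging illustrative numbers into the O(M*N) estimate shows how quickly the brute-force cost explodes; the model size and triangle count below are hypothetical, not measurements from the paper.

```python
# hypothetical input: a 0.5 m cube voxelized at 1 mm resolution
model_w = model_h = model_d = 500        # millimeters
voxel = 1                                # smallest voxel size, millimeters
n_triangles = 100_000                    # hypothetical mesh size

m_voxels = (model_w // voxel) * (model_h // voxel) * (model_d // voxel)
brute_force_tests = m_voxels * n_triangles     # the O(M*N) term
```

Even this modest input yields over a trillion triangle-voxel tests, which is why the flat-list approach is abandoned in favor of the octree.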
Memory costs for storing all the triangle intersection data with this method are even
worse – one might call them prohibitively expensive. Best case, of course, would be a model
designed purposely with the intent to minimize the number of triangle-voxel intersections. Even
in that case, each triangle is bound to intersect at least a few minimally-sized voxels. Conversely,
the worst case would be for every triangle to intersect with a very large number of voxels.
Imagine, for instance, some model composed of hundreds of thousands of very large
triangles, ones that are practically the same width as the model they describe. It might be difficult
to model such an object, since it would necessitate a very convoluted structure, but it could be
done. This intersection data would be crippling. Triangles of such size would intersect tens of
thousands of minimally-sized voxels, each of which would need to remember that fact for the
ray-tracing step to follow. A situation would arise where each of the hundreds of thousands of
voxels was remembering each of the thousands of triangles which intersected it! In other words,
this brute 'track all the data at one time' method is not sufficient. A more intelligent method of
processing triangle-voxel intersections is needed.
Enter the octree. The octree gives us a way of partitioning three-dimensional space so
that we can process the model in well-defined pieces.
What is an octree?
An octree is a relatively basic data structure that is enormously useful for the elegant way
it maps to 3-dimensional space. Simply put, it is a k-ary tree with k equal to 8 (meaning each
internal node in the tree has eight child nodes). The octree used in this algorithm is a full tree, where every node
in the tree has either zero children (i.e., it is a leaf node) or eight children. Of course, there is no
reason an octree used in another application would need this property.
The reason an octree becomes a useful data structure for three-dimensional applications is
because each child node can represent a separate octant of the space bounded by its parent node.
This is similar to the idea of four quadrants in two-dimensional space, only requiring twice as
many children to account for the z-axis. For example, imagine that our input model is a sphere
with a diameter of one meter centered at the origin (0, 0, 0). The root node of the octree would
encompass the 'box' of space also centered at the origin with a width, height, and depth all equal
to one meter.
The eight child nodes would be arranged as shown in Figure 9:
Figure 9: A representation of a full octree with depth equal to one.
Child 1 contains points where x is negative, y is negative, and z is positive.
Child 2 contains points where x is negative, y is negative, and z is negative.
Child 3 contains points where x is negative, y is positive, and z is positive.
Child 4 contains points where x is negative, y is positive, and z is negative.
Child 5 contains points where x is positive, y is negative, and z is positive.
Child 6 contains points where x is positive, y is negative, and z is negative.
Child 7 contains points where x is positive, y is positive, and z is positive.
Child 8 contains points where x is positive, y is positive, and z is negative.
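The octant numbering above can be captured in a small selection function. Note that this sketch arbitrarily assigns points lying exactly on a dividing plane to the non-negative side, a convention the paper does not specify.

```python
def octant_of(x, y, z):
    """Return the child number (1-8), following the listing above, of the
    octant containing (x, y, z) for a node centered at the origin.
    Boundary points are arbitrarily assigned to the non-negative side."""
    if x < 0:
        if y < 0:
            return 1 if z >= 0 else 2
        return 3 if z >= 0 else 4
    if y < 0:
        return 5 if z >= 0 else 6
    return 7 if z >= 0 else 8
```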
One voxelization paper suggests the use of an octree even though its authors do
not use one. In “A Low Cost Antialiased Space Filled Voxelization of Polygonal
Objects,” the authors use quadtrees to space-partition a 3D model. They acknowledge this
limitation, writing: “The presented method only computes uniform 3D grids of voxels.
However, it could easily be extended to the creation of adaptive structures, such as an octree, in
order to spare memory” [5]. As suggested, instead of using a quadtree, we implement an octree
structure for the subdivision of the 3D model.
The first advantage of the octree is that it greatly reduces the number of triangle-voxel
intersection tests needed. Instead of taking one triangle and testing its intersection against every
minimally-sized voxel, one can test it only against the voxels it is likely to intersect. How
does this work? The test starts at the root of the octree and then trickles down to smaller voxels. If a
triangle intersects a particular octant, that octant is divided so that it has eight children. We then
evaluate which of the octant's eight 'child' voxels also intersect the triangle. This process
continues down each level of the octree, filtering through until the minimally-sized voxels are
reached. As an example of this in practice, imagine testing a triangle against all eight octants in
the previous figure and finding that it intersects none except the third. Without the
octree, one would have to test the triangle's intersection against every single minimally-sized
voxel. But using the octree, after only eight intersection tests, about 87.5% (seven-eighths) of the
remaining tests have been eliminated!
A simple recursive algorithm for iterating through the octree might look like this:
    add to the octree( Octree node, Triangle t ){
        if( intersection between( node, t ) ){
            if( node == minimumSize )
                mark intersection( node, t );
            else
                for( each of node's children )
                    add to the octree( child, t );
        }
    }
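A runnable version of this recursion might look like the following Python sketch, which conservatively uses axis-aligned bounding-box overlap as a stand-in for the full triangle-cube intersection test (all names here are illustrative, not taken from the actual implementation):

```python
def tri_aabb(tri):
    """Axis-aligned bounding box of a triangle: (min_corner, max_corner)."""
    return ([min(v[i] for v in tri) for i in range(3)],
            [max(v[i] for v in tri) for i in range(3)])

def boxes_overlap(center, half, lo, hi):
    """Does the cube (center, half-width) overlap the box [lo, hi]?"""
    return all(center[i] - half <= hi[i] and center[i] + half >= lo[i]
               for i in range(3))

def add_to_octree(center, half, tri, min_half, hits):
    """Recursively collect the minimally-sized cells whose cubes overlap
    the triangle's bounding box -- a conservative stand-in for the
    triangle-cube test in the pseudocode above."""
    lo, hi = tri_aabb(tri)
    if not boxes_overlap(center, half, lo, hi):
        return
    if half <= min_half:                    # minimally-sized voxel reached
        hits.append(tuple(center))
        return
    q = half / 2.0                          # child half-width
    for dx in (-q, q):
        for dy in (-q, q):
            for dz in (-q, q):
                child = [center[0] + dx, center[1] + dy, center[2] + dz]
                add_to_octree(child, q, tri, min_half, hits)
```

A small triangle near one corner of the root cube descends into only one child per level, illustrating how the octree prunes the other seven octants at each step.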
A simple algorithm for remembering all the intersection data might look like this:
    for( all the triangles in the input triangle mesh ){
        add to the octree( octreeRoot, current triangle );
    }
    run main algorithm( octreeRoot );
We found implementing and utilizing the octree went smoothest when the dimensions of
the structure were related to powers of two. Ensuring this property holds makes it easy to grasp
the size of a node at any level in the octree. The algorithm starts with the minimally-sized voxels.
Whatever their actual real-world dimensions, we treat them as one unit of voxel space. This
ensures that every step up in the octree is a convenient power of two in 'units of voxel space'. For
example, envision a model whose resolution is one millimeter. The minimally-sized
voxels in this example would all be .001 meters across. Traveling up the octree structure, the
next larger voxel is two voxel-units, or .002 meters, in width; these voxels contain eight of the
minimally-sized voxels. The next level up in the octree contains voxels that are four voxel-units
in width, or .004 meters, and so on.
The most important step in preparing the octree for the 'powers of two' approach
is to ensure the root node is the correct size. To do this, we first perform a bounds check of the
input 3D model and determine its largest dimension, in voxel units: width, height, or
depth. The first power of two larger than this should be the width of the outermost, root voxel in
the octree. For example, imagine a voxelization where one voxel unit corresponds to .001 meters.
Now picture an input triangle mesh with width, height, and depth all equal to .49 meters. This
converts to 490 voxel units. The first power of two larger than this is 512. Thus, if we create the
root node of the octree with a width of 512 voxel units (or .512 meters), it will divide evenly
until it reaches the minimally-sized voxels.
To keep the octree as conceptually simple as possible, it should also be centered on the
(0, 0, 0) point. This can pose a problem if the input mesh is positioned elsewhere in 3D space,
because parts of the input mesh might not intersect an octree centered at the origin. But the
solution is simple: the input triangle data is quickly pre-processed, and the mesh is translated so
that it is centered on the origin. Even though this changes the worldspace coordinates of the 3D
mesh, the underlying geometric shapes are still exactly the same relative to one another, making
the translation a negligible operation.
In addition to reducing the overall number of triangle-voxel intersection tests, the octree
also provides a way to process the model in sections as large as memory will allow. In fact,
one of the biggest advantages of the octree is that it acts as a map of three-dimensional space,
letting us examine different pieces of the model in turn. Compare the difference between the next
algorithm and the previous two; it is, in a sense, a combination of both.
An algorithm that processes the input model piecewise might call the following method, passing
a pointer or reference to the root node of the octree as a parameter:
    break octree into pieces( Octree node ){
        if( node == memory-manageable size ){
            for( all the triangles in the input triangle mesh ){
                add to the octree( node, current triangle );
            }
            run main algorithm( node );
        } else {
            for( each of node's children )
                break octree into pieces( child );
        }
    }
What separates this algorithm from the previous example is that it no longer begins by iterating
through the triangle list, dropping each triangle into the octree. Instead, conceptually speaking, it
iterates through the octree as the first step. Essentially, it breaks the octree down into smaller
voxels, then iterates through the triangle list once for each of these sub-octrees. This allows the
algorithm to evaluate the 3D input model in memory-manageable pieces. In theory, if a model
were small enough and the printer resolution poor enough, this approach would not be necessary.
However, iterating through the octree structure allows the program to scale down to finer
resolutions and up to larger input models.
6. Algorithm Design and Implementation
The full algorithm has three main steps. The first is to test and save all the triangle-voxel
intersection data. The second is to perform ray-tracing to classify voxels as interior to the model,
exterior to the model, or as 'containing geometry' and along the surface of the model. The final
part of the process is a flooding algorithm used to classify voxels according to their distance
from the interior of the model. Once this is completed, the visualization of the results can be
produced. Written out in pseudocode, the algorithm appears straightforward:
    PrepareVisualizationUsingVoxelizationAndRayTracing(){
        for( each memory-manageable branch of the octree ){
            mark every triangle-cube intersection();
            perform ray casting();
            additional classification of voxels();
        }
    }
It is important to point out that this algorithm does not actually calculate an exact maximum,
minimum, or average wall thickness. It locates all the areas of the model that it cannot guarantee
are above a minimum thickness level. In other words, if we are unable to verify that a particular
region of the model meets minimum thickness requirements (as determined by the resolution of
the printer), then that region will be flagged by this program.
The 'SubCube'
Before going into greater detail about each step in the algorithm, let us introduce the
term 'SubCube', used throughout the rest of this paper. A SubCube is merely shorthand for the idea
appearing earlier in this paper as “memory-manageable branch of the octree”. Use of the
SubCube was pivotal, since we encountered many memory errors when trying to run the
algorithm on the root of the octree for all but the most basic toy models. Originally, we tried to
fit the entire model into memory at once, storing all the voxel information in the leaves
of the octree. This proved infeasible; the memory demands were simply too great to overcome.
Minimizing memory usage was an absolute necessity. Processing the model by iterating through
chunks of the octree did just that, allowing us to scale the algorithm dramatically. A
SubCube is nothing more than a convenient way to refer to one of those cubic 'chunks' of
space.
Applying the octree and the idea of a SubCube gave us a way to process the model
systematically through well-defined blocks of space. We found that the ideal SubCube size
varied, depending primarily on model size and memory limitations, and to a lesser degree on the
idiosyncrasies of each input model. For instance, if the minimally-sized voxels are defined as
one voxel unit in width, and the root node of the octree is 1024 or 2048 voxel units in width, the
SubCube might be either 128 or 256 voxel units in width, depending on available memory.
Thankfully, the size of the SubCube can be adjusted rapidly – the program simply processes the
model in larger or smaller pieces, as needed. In addition to the high ratio of model dimensions
to printer resolution, the input 3D models can be very complex, composed of hundreds of
thousands of triangles. For this reason the adaptability of the SubCube size could prove quite
valuable: it can easily be tailored to suit a particular batch of incoming 3D models.
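As a rough illustration of the sizing arithmetic described above, the next-power-of-two root width and a memory-driven SubCube width might be computed as follows (these helper functions are our own sketch, not the project's actual heuristic):

```python
def root_width_in_voxel_units(model_extent_m, voxel_size_m):
    """Smallest power-of-two voxel-unit width covering the model's
    largest dimension, e.g. 0.49 m at 0.001 m resolution -> 512 units."""
    units = model_extent_m / voxel_size_m
    width = 1
    while width < units:
        width *= 2
    return width

def pick_subcube_width(root_width, max_voxels):
    """Largest power-of-two SubCube width (<= root_width, in voxel units)
    whose width**3 grid of minimally-sized voxels fits within max_voxels."""
    width = root_width
    while width > 1 and width ** 3 > max_voxels:
        width //= 2
    return width
```

With a 1024-unit root and a memory budget of 256³ voxels, for example, the SubCube width comes out to 256 units; halving the budget drops it to 128.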
Triangle-Cube Intersections and the Separating Axis Theorem
After building the octree structure, we decompose the model's triangle data into the
leaves of the octree. This process takes each triangle of the 3D model and performs an
intersection test to determine which branches of the octree contain the triangle. Each node in the
octree represents a voxel, or 'cube' of space. We thus need to evaluate a series of triangle-cube
intersections; if an intersection occurs in the upper levels of the octree, we recursively travel
down the branches of the octree to see which of the child voxels also intersect the
triangle.
The algorithm we used for the triangle-cube intersection test comes from “Real-Time
Collision Detection” by Christer Ericson. It uses the separating axis theorem to determine
whether the objects intersect [6]. The separating axis theorem is demonstrated (in two
dimensions) in the figure below.
Figure 10: Separating Axis Theorem
The essence of the separating axis theorem is that if one can draw a straight
line between two convex objects, they are disjoint. (The separating axis itself is perpendicular to the
separating line.) As shown on the right of figure 10, if the objects are not convex, the theorem
does not apply. The separating axis theorem extends from the two-dimensional realm into
the three-dimensional, in which case it becomes the separating plane theorem. That is, if one can
describe a plane between two three-dimensional convex objects, they are disjoint.
We want to determine whether a triangle intersects a particular cube of space defined by a node
in the octree. Thus, we apply the separating axis / separating plane theorem to a triangle and
axis-aligned bounding box intersection. The candidate axes are the three face normals of the
axis-aligned bounding box, the one face normal of the triangle, and the nine axes given by the
cross products of the triangle's edges with the bounding box's edge directions – thirteen axes in
total. This process calculates the (possible) separating axes rather than the separating planes
themselves. If any separating axis is found, the triangle and the cube do not
intersect.
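For reference, the thirteen-axis test can be written quite compactly. The sketch below follows the standard triangle versus axis-aligned-cube formulation (after Ericson [6]); it is illustrative rather than a copy of our implementation:

```python
def tri_box_intersect(tri, box_center, box_half):
    """Separating-axis test between a triangle and an axis-aligned cube
    (center box_center, half-width box_half). Returns False as soon as
    any of the 13 candidate axes separates the two objects."""
    # Translate the triangle so the box is centered at the origin.
    v = [[tri[i][k] - box_center[k] for k in range(3)] for i in range(3)]
    edges = [[v[(i + 1) % 3][k] - v[i][k] for k in range(3)] for i in range(3)]

    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    axes = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # three box face normals
    axes.append(cross(edges[0], edges[1]))     # triangle face normal
    for e in edges:                            # nine edge-cross-edge axes
        for a in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
            axes.append(cross(e, a))

    for axis in axes:
        if all(abs(c) < 1e-12 for c in axis):  # degenerate axis, skip
            continue
        # Project the triangle vertices and the box onto the axis.
        p = [sum(v[i][k] * axis[k] for k in range(3)) for i in range(3)]
        r = sum(box_half * abs(c) for c in axis)
        if min(p) > r or max(p) < -r:          # interval gap => separated
            return False
    return True
```

If no candidate axis yields a gap between the projected intervals, the triangle and cube intersect.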
After iterating through the triangle list and recursing down the octree structure,
eventually we know, for every minimally-sized voxel in the SubCube, exactly what triangles
intersect with that voxel. We therefore have finished a voxelization of the surface of the model,
and can characterize all voxels in the SubCube as 'voxels containing geometry' or 'empty voxels'.
Pay careful attention to the semantics: though the computer knows which voxels are empty, it
does not know at this stage which empty voxels are interior to the model and which voxels are
exterior to the model. For that information, we use ray-tracing.
Ray-triangle intersections and the manifold property
This stage of the algorithm is where the ray-casting discussed earlier takes place. The
ideas of ray casting and voxelization often go hand in hand. For instance, in the paper, “A Low
Cost Antialiased Space Filled Voxelization of Polygonal Objects” [5], the authors use a ray-
casting approach to perform a voxelization with a low level of aliasing problems. What sets that
paper apart is the author's use of oversampling during the ray-tracing process in order to avoid
aliasing problems. In their ray-tracing step, they cast up to four rays at each voxel. This allows
the authors to categorize each voxel according to the ratio of intersecting rays. For visualization
purposes, each voxel is colored in greyscale according to the number of intersecting rays. The
authors of this paper use voxelization “in the context of virtual sculpting”, declaring “for our
sculpture application, it is important to obtain space filled voxelization, and not only a
discretization of the object surface.” Previous papers on voxelization only focused on surface
representation, and this paper was a step forward because it also fills the space inside the
resulting voxelized object. So the techniques advanced in this paper were useful to understand as
a basis for identifying voxels as interior and exterior.
The basics of ray-casting were explained earlier in this paper: by tracking the number
of intersections modulo two along each ray, we know that an odd running count
corresponds to the interior and an even count corresponds to the exterior. What has not yet
been mentioned is that ray casting depends entirely on a manifold input object. That
is, every ray that intersects the model must intersect it an even number of times in total. If there
were an odd number of intersections, the program would be unable to distinguish the interior
from the exterior of the model. To see this in action, see figure 11.
Figure 11: An odd number of intersections causing interior/exterior identification issues
Imagine that the solid black line represents the surface of the model defined by the
triangle mesh. The upper ray, cast from left to right, begins in the exterior, since there is
an even number (zero) of ray-triangle intersections at the origin of the ray. However, the lower
ray, cast from right to left, defines this same region of space as interior to the model,
since it has encountered one ray-triangle intersection along its path.
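The parity rule itself is easy to express in code. The sketch below pairs a standard Möller–Trumbore ray-triangle test with a modulo-two crossing count; it is an illustration of the principle, not our production ray-caster:

```python
def ray_hits_triangle(orig, dirn, tri, eps=1e-9):
    """Moller-Trumbore ray-triangle intersection; True for a hit with t > 0."""
    def sub(a, b):
        return [a[i] - b[i] for i in range(3)]
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]
    def dot(a, b):
        return sum(a[i] * b[i] for i in range(3))

    e1, e2 = sub(tri[1], tri[0]), sub(tri[2], tri[0])
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle's plane
        return False
    t_vec = sub(orig, tri[0])
    u = dot(t_vec, p) / det
    if u < 0 or u > 1:
        return False
    q = cross(t_vec, e1)
    v = dot(dirn, q) / det
    if v < 0 or u + v > 1:
        return False
    return dot(e2, q) / det > eps      # hit only if in front of the origin

def point_is_interior(point, triangles, dirn=(1.0, 0.0, 0.0)):
    """An odd number of crossings along the ray => the point is inside."""
    hits = sum(ray_hits_triangle(point, list(dirn), t) for t in triangles)
    return hits % 2 == 1
```

Against a closed tetrahedron, a point inside yields one crossing (odd, so interior) while a point far outside yields zero.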
To prevent this problematic issue, our current software specification accepts two-
manifold objects only. Manifoldness is checked by carefully examining each triangle described
by the coordinates in the mesh. We proceed through the set of triangles, cataloging every edge
that appears, and place the results into a collection of edges. The object is accepted as two-
manifold only if every edge is placed into the collection twice and only twice – that is, if every
edge is attached to exactly two triangles.
See Figure 12 for more information:
Figure 12: Visualization of the 'manifold' property
Manifold detection consists of the following test: if an edge between two vertices A and B is
placed in the collection of edges exactly twice, that edge is considered two-manifold, meaning it
is shared by two and only two triangles. If any edge is placed in the collection a different number
of times, the mesh is outside specifications and the 3D input model is not processed by our
algorithm. For example, if an edge appears in the edge container three times, it is three-manifold,
meaning the same edge is shared by three triangles. A k-manifold object with k greater than two
is highly likely to contain self-intersecting loops. An example of this is shown in the following
figure:
Figure 13: Example of a model failing the manifold test
If a 3D artist creates a model by 'pushing' two objects together without removing the
inner wall, a self-intersecting loop occurs. Though independently the circle and the square are
both manifold, if an object is described where one intersects the other, the results are undefined.
The manifold check is one way to identify if such a situation occurs.
Figure 11 demonstrates the other potential mesh problem: an unclosed, non-manifold
shape. This problem is detected if an edge only appears in the collection once. By ensuring that
all models meet the two-manifold test we can safely assume that the input model is both closed
and non-self-intersecting.
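The edge-counting test reduces to a few lines; here is a sketch, assuming each triangle is given as a triple of vertex indices:

```python
from collections import Counter

def is_two_manifold(triangles):
    """A mesh passes if every undirected edge is shared by exactly two
    triangles. `triangles` is a list of (i, j, k) vertex-index triples."""
    edges = Counter()
    for a, b, c in triangles:
        for edge in ((a, b), (b, c), (c, a)):
            # Sort so (a, b) and (b, a) count as the same undirected edge.
            edges[tuple(sorted(edge))] += 1
    return all(count == 2 for count in edges.values())
```

A tetrahedron passes (every edge is shared by exactly two faces), while a lone triangle fails because each of its edges appears only once.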
Classification of voxels using a flooding algorithm
After the ray-tracing step, each voxel in the SubCube is initially defined in one of three
ways:
1. Interior Voxel: entirely interior to the model (containing no geometry whatsoever)
2. Surface Voxel: contains geometry; it is therefore located along some surface of the model
3. External Voxel: exterior to the model (containing no geometry)
After the initial classification of every voxel, we use a flooding technique to push
outward from the interior voxels toward the exterior voxels, redefining each surface voxel with a
more descriptive label.
The idea to use a flooding technique to propagate through the voxelization is discussed in
the paper “A Complete Distance Field Representation” [7]. This paper proposes a “complete
distance field representation (CDFR) that does not rely on Nyquist's sampling theory.” Nyquist's
sampling theorem concerns the rate at which a signal must be sampled in order to reconstruct it
without aliasing. Applied to distance fields, it implies that sampling can be done adaptively,
using a lower rate for smooth areas and a higher rate for more complicated regions, in order to
reconstruct the original field. In other words, complicated surface meshes (areas with sharp,
jagged features) require expensive multi-sampling operations for volumetric analysis, whereas
smooth areas require far lower resolution. To avoid the expense of over-sampling in order to
capture corners and edges, the
authors present a new distance representation for distance fields. They call it the complete
distance definition (CDD) which consists of tuples describing both the distance from a 3D point
to a surface geometry primitive and the geometry primitive itself. After eliminating exterior
voxels with outside flooding, the authors propagate contour depth towards the model interior.
This means that all twenty-six of a voxel's neighbors update their Euclidean distances relative to
the surface.
Unfortunately, the authors of this paper do not specify an ideal flooding algorithm with
any precision; in fact, they do not even present pseudocode for such a technique.
Perhaps it is taken for granted that readers can intuit the meaning of flooding. Here we shall be a
bit more descriptive about our own interpretation of a flooding algorithm. First, voxel adjacency
relationships are illustrated in figure 14.
Figure 14: Top - “The set of 2D pixels that are N-adjacent to the dark pixel, where N ∈ {4, 8}.”
Bottom - “The set of 3D voxels that are N-adjacent to the voxel at the center, where N ∈ {6, 18,
26}”.
Note that the above figure is taken directly from “An Accurate Method for Voxelizing
Polygon Meshes” [8]. This paper is quite useful for its definitions of the different groupings of
voxel connectivity. Indeed, it provides very formal descriptions of voxels themselves: “A 3D
grid point is a zero dimensional object defined by its Cartesian coordinate (x, y, z). The Voronoi
neighborhood of grid point p is the set of all points in the Euclidean space that are closer to p
than to any other grid point. The Voronoi neighborhood of a 3D grid point is a unit cube around
it, known also as a voxel.” Formality aside, this paper is an excellent resource on the basics of
voxelization.
For any given voxel, the set of voxel adjacency relationships is defined as follows:
Six voxels share a cube face with it. These are known as the 6-adjacent voxels.
Twelve voxels share an edge with it, but not a face.
Eighteen voxels share either an edge or a face with it. This set is simply an aggregate of
the previous two sets; these voxels are known as the 18-adjacent voxels.
Eight voxels share a vertex with it, but neither a face nor an edge; these are the 'corner'
voxels.
Twenty-six voxels share a vertex, an edge, or a face with it. This set is an
aggregate of the previous two sets; it contains all 26 possible voxel neighbors.
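These neighborhoods can be generated directly from offset vectors by counting how many coordinates are nonzero; a small illustrative sketch:

```python
from itertools import product

def neighborhood(n):
    """Offsets for the 6-, 18-, or 26-adjacent voxel neighborhoods.
    An offset with k nonzero coordinates shares a face (k = 1),
    only an edge (k = 2), or only a vertex (k = 3) with the center."""
    max_nonzero = {6: 1, 18: 2, 26: 3}[n]
    return [d for d in product((-1, 0, 1), repeat=3)
            if 1 <= sum(c != 0 for c in d) <= max_nonzero]
```

The face-sharing offsets number six, adding the edge-sharing offsets gives eighteen, and adding the corner offsets gives the full twenty-six.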
Flooding, then, is to start at a known voxel and 'push' outward into the 26-neighborhood,
and from there into the next 26-neighborhood, and so on. For our purposes, we begin at each
'safe' voxel within the SubCube and then flood outward, classifying each voxel with the proper
definition.
Definition 1 – SAFE – Safe voxels are entirely interior to the model and thus correspond to
completely solid blocks of material. For this reason, we know that they meet the minimum
thickness level, and furthermore, that all voxels directly adjacent to them also meet the
minimum thickness requirements. This brings us to our next definition:
Definition 2 – SAFE_BY_PROXY – SafeByProxy voxels are in the 26-neighborhood of a SAFE
voxel. Even though these voxels do contain geometry, we can safely assume that they are safe to
print by dint of their proximity to the SAFE voxel.
An example may demonstrate why. Assume that the big "X" marks the SAFE voxel, that the
little "x" marks the voxels-containing-geometry, and that an "o" represents empty space
|ox|X|xo|
The above diagram shows three voxels in a row. Because the SAFE voxel is in the center, the
two outer voxels are marked SAFE_BY_PROXY. Note that without the SAFE voxel we would
just have two voxels-containing-geometry next to one another, like so:
|ox|xo|
We cannot guarantee that the above model meets minimum thickness requirements.
What if the SAFE voxel was on the end of our pair of voxels-containing-geometry, instead of in
the center? Our diagram would look like so:
|X|xo|ox|
In this situation we can reasonably conclude that the middlemost voxel does meet minimum
thickness requirements, but we cannot claim that the right-most voxel meets minimum thickness
requirements with any degree of certainty. In this case, the right-most voxel needs another
definition.
Let us introduce a third concept: the 'borderline safe' voxel.
Think of a non-empty voxel bordering a SAFE_BY_PROXY voxel. Since it is not directly
adjacent to the solid block of material marked by a SAFE voxel, we cannot conclude that it
meets minimum thickness requirements. However, we need to establish its 'degree of safety'
which is a really rough way of asking: how much geometry does this voxel really contain?
Definition 3 – BORDERLINE_SAFE – BorderlineSafe voxels are in the 26-neighborhood of a
SAFE_BY_PROXY voxel. Though they do contain geometry, they 'barely' contain geometry. To
clarify any ambiguity, 'barely contains geometry' means that the triangle geometry extends less
than 5% of the way into the voxel. [Side note: this number (5%) was chosen somewhat
arbitrarily and can be adjusted to suit one's needs.]
We need the 'borderlineSafe' definition because, in cases where a voxel borders a
SAFE_BY_PROXY voxel and 'barely contains geometry', the odds are extremely high that
that geometry legitimately belongs with the SAFE_BY_PROXY voxel. Imagine, if you will, a
triangle positioned so that its tip just barely extends into a voxel. That tip would most likely be
identified as BORDERLINE_SAFE. On the other hand, what if the triangle tip projected deep
into the voxel? This brings us to the fourth voxel definition:
Definition 4 – UNSAFE – Unsafe voxels contain geometry and are not in the 26-neighborhood
of any SAFE voxels. If an UNSAFE voxel is in the 26-neighborhood of a SAFE_BY_PROXY
voxel, the triangle geometry contained by the UNSAFE voxel extends more than 5% of the way
into the voxel.
Figures 15 and 16 demonstrate the distinction between the unsafe and borderline-safe
definitions:
Figure 15: A two-dimensional example of voxel classifications
Figure 16: Another two-dimensional example of voxel classifications
After studying the initial results stemming from these voxel classifications, we
determined that an additional distinction needed to be made to classify unsafe voxels according
to depth. The term we chose was 'distance-k'. To clarify, a distance-1 voxel is an unsafe voxel
that has one voxel-unit of separation from a safe voxel. It is in the 26-neighborhood of a safe-by-
proxy voxel, which is in turn in the 26-neighborhood of a safe voxel. Correspondingly, a
distance-2 unsafe voxel has two voxel-units of separation from a safe voxel: it is in the 26-
neighborhood of a distance-1 voxel. The choice of flooding, though far and away the most
computationally expensive part of our algorithm, made it easy to characterize voxels according
to any chosen depth, providing very customizable output.
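Put together, the flood can be sketched as a breadth-first pass over 26-neighborhoods: SAFE voxels seed the search, geometry voxels reached in the first layer correspond to SAFE_BY_PROXY, and each later layer corresponds to a growing distance-k. The sketch below simplifies our implementation (in particular, it omits the BORDERLINE_SAFE geometry-fraction check):

```python
from itertools import product

NEIGHBORS_26 = [d for d in product((-1, 0, 1), repeat=3) if any(d)]

def flood_classify(safe, geometry):
    """Breadth-first flood from SAFE voxels through geometry voxels.
    Returns {voxel: label}, where label 0 means SAFE_BY_PROXY and
    label k >= 1 means distance-k unsafe."""
    labels = {}
    frontier = set(safe)
    distance = 0
    while frontier:
        next_frontier = set()
        for (x, y, z) in frontier:
            for dx, dy, dz in NEIGHBORS_26:
                v = (x + dx, y + dy, z + dz)
                if v in geometry and v not in labels and v not in safe:
                    labels[v] = distance
                    next_frontier.add(v)
        distance += 1
        frontier = next_frontier
    return labels
```

Run on the earlier |X|xo|ox| example, the geometry voxel adjacent to the SAFE voxel is labeled 0 (safe-by-proxy) and the one beyond it is labeled 1 (distance-1 unsafe).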
Use of the flooding technique to classify voxels was made easier by converting the octree
SubCube into a three-dimensional array. Though it would be possible to do a 26-neighborhood
flood within an octree structure, at the time we felt more comfortable tackling the flooding
algorithm within the familiar array. This array of voxel information was fundamental to the
flooding operations: we could easily index into any voxel's 26-neighborhood to 'push'
information outward, and so on and so on, to the edges of the SubCube. Perhaps more
importantly, converting the short octree SubCube into a 3D array made conceptualization of the
evolving voxel classification an easy task. However, this later proved to be a very
computationally expensive sequence of operations. A more efficient approach would be to trust
the octree implementation and stick with it, avoiding the memory churn and unnecessary data
copying.
Aside from producing visualizations (examples of which are just ahead), output also
includes a count of the number of voxels considered SAFE, SAFE_BY_PROXY,
BORDERLINE_SAFE, and UNSAFE. It also prints out other volumetric information about the
input triangle mesh, shown below. Twelve numbers are recorded, representing, in order:
1. volume of the model (float). Volume will be in cubic centimeters, assuming that all model
measurements are in true size and each unit of the model coordinate system corresponds to the
X3D convention of 1 meter in size.
2. min bounds x-axis of the input model (float).
3. max bounds x-axis (float).
4. min bounds y-axis (float).
5. max bounds y-axis (float).
6. min bounds z-axis (float).
7. max bounds z-axis (float).
8. total number of assessed volume voxels (integer).
9. number of safe voxels (integer).
10. number of voxels that are safe due to proximity of other safe voxels (integer).
11. number of distance-1 unsafe voxels and borderline voxels that may be unsafe for printing
(integer).
12. number of voxels at distance 2 or greater that are decidedly unsafe for printing (integer).
7. Results
Five values are calculated as a result of the wall thickness voxelization algorithm. These
numbers, as previously mentioned, include the total number of non-exterior voxels, the number
of safe voxels, the number of voxels that are safe due to proximity of other safe voxels, the
number of voxels that are 'distance one' from a safe voxel, and the number of voxels that are
'distance two' from a safe voxel.
One nice feature of the algorithm is that it allows the user to highlight any combination of
these voxel classes. This allows the user to examine various surface features of the input model
quite easily. For instance, the following figure shows the output that results from processing a
multi-shape cubic object and highlighting the 'safe-by-proxy' voxels in pink (shown in light
grey in black-and-white print) and the 'unsafe' voxels in red – of which there are none:
Figure 17: Voxelization performed with the Safe-by-Proxy voxels highlighted in pink (or light
grey in black-and-white print)
As one might expect, due to the cubic nature of the object and the cubic nature of the
voxelization, the majority of the voxels are safe. In fact, in this case no red unsafe voxels are
present, because all voxels are either safe or they are safe-by-proxy (that is, they border a safe
voxel).
Inherent Limitations of the Distance-1 Unsafe Voxels
Originally, we tried to cut algorithmic run-time cost by calculating only the safe-by-
proxy voxels, defining 'unsafe' to cover all voxels one unit or farther away from a safe voxel.
However, this process was found to be overly restrictive. Areas of the model were flagged as
unsafe to print even though they very clearly were part of a printable base. For instance, when
highlighting all voxels of the 'Vase' model that are distance-1 or greater from a safe voxel, we
generate the following image:
Figure 18: Risky areas that may not be suitable for printing highlighted in red (or dark grey in
black-and-white print)
In figure 18, we see that the curved outlines of the leaves are being flagged as unsafe.
Clearly, these border the solid wall of the vase; it seems they would definitely be printable. Why
does the curved geometry produce such results? A close-up view of these so-called unsafe voxels
and a simple line drawing provides some explanation:
Figure 19: Many voxels are flagged as unsafe when the unsafe cutoff is only one voxel
In many cases, highlighting the voxels that were one unit of voxel distance
away from a safe voxel only served to emphasize the edges of an object. This information is
interesting, but it turned up too many false positives: voxels that were highlighted as unsafe
were, in fact, safe to print. Though this did indeed catch overly-narrow areas of geometry, we
needed a way to reduce the number of false positives and make the final output more relevant.
To reduce the number of false positives, we tried shifting the model slightly, running the
algorithm again, and comparing the results. Shifting the model in space slightly
while keeping our voxelization grid centered on the x=0, y=0, z=0 point caused the triangle
geometry to fall into different bounding voxels, thus altering the final results. However, overall
we found that this did not produce any significant effect on the number of false voxels. Though
rotating the model or shifting the model would often eliminate false positives in one area, it
would introduce them in a new area, thereby canceling out any gain. The net effect was roughly
the same, regardless of model positioning. Thankfully, from the perspective of a 3D printer, we
would rather have false positives than false negatives. If there was a problem area in a model and
the algorithm did not catch it, that would be greater cause for concern.
Another interesting byproduct that occurred when flagging all the distance-1 unsafe
voxels was the introduction of artifacts. Figure 20 demonstrates this effect.
Figure 20: Around the curved rim of the vase model, eight pairs of unsafe-voxel columns
Observe that the unsafe-voxel columns in Figure 20 occur at regular intervals around the
rim. When the model is rotated about the y-axis, the unsafe-voxel columns align in
different locations, but they are still present. A closer look at these voxels gives a better idea of
why this effect occurs.
Figure 21: Closeup of two columns of unsafe voxels
Figure 22: The two columns of unsafe voxels with edge of the model highlighted in blue and voxel
placement outlined in black
Figures 21 and 22 demonstrate the limitations of highlighting distance-1 unsafe voxels.
Any time a relatively narrow curve is introduced, the likelihood of a fully 'safe' voxel decreases.
Since they only border safe-by-proxy voxels, the voxels in the red columns are flagged as
distance-1 unsafe voxels, and the user is presented with a confusing set of unsafe voxels.
Floating Point Problems
Another problematic issue occurs during ray-casting: what if a ray should intersect a
triangle, but misses? An instance of this is shown in Figure 23.
Figure 23: Mysterious column of distance-1 unsafe voxels
When the voxels are analyzed, it is apparent that this column of unsafe voxels borders a
safe column. Specifically, there is a safe area of solid material, visible in the image to the upper
right of the red column. Since the voxels in the red column border this safe area, the column
should actually be categorized as safe-by-proxy rather than distance-1 unsafe. What is going on
here? A look at the mesh geometry instead of the shaded geometry provides a clue.
Figure 24 is an image of the unshaded mesh geometry, in which the triangles of the
polygonal object are visible. To help the viewer make sense of the image, Figure 25 adds an
overlay of the voxelization on top.
Figure 24: Unshaded mesh geometry showing triangulated input model
Figure 25: Voxelization overlay on top of triangulated input model
In Figure 25, the triangles of the input model have been highlighted in orange. The
voxelization has been drawn in blue. The circle corresponds to the ray that was cast during the
ray-tracing step. Observe how it falls neatly on the boundaries of two triangles. In this case,
despite the triangles sharing two points, the line actually 'fell between' the two triangles and was
found to intersect neither triangle. This result meant that the column of voxels which appears to
the human eye to be solid material was instead interpreted by the program not as interior voxels,
but as voxels containing geometry defining the surface edge of the model. The program thus
categorized the voxels as safe-by-proxy rather than safe, and a neighboring column of voxels
was erroneously flagged as distance-1 unsafe instead of safe-by-proxy. Such an error is a false
negative: the ray-casting tests for intersection, and the ray-triangle intersection was mistakenly
reported as absent.
These errors can be fixed by casting multiple rays per voxel to perform the interior-exterior
test. The odds of multiple rays all falling in between triangles are very low, although the
probability of such an event remains non-zero.
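A multisampled interior test along these lines might look like the following sketch, which uses the standard Möller-Trumbore ray/triangle test. This is an illustration of the idea in Python, not the project's implementation:

```python
EPS = 1e-9

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_hits_triangle(orig, d, tri):
    # Standard Moller-Trumbore ray/triangle intersection test.
    v0, v1, v2 = tri
    e1 = tuple(v1[i] - v0[i] for i in range(3))
    e2 = tuple(v2[i] - v0[i] for i in range(3))
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < EPS:              # ray parallel to the triangle plane
        return False
    inv = 1.0 / det
    t0 = tuple(orig[i] - v0[i] for i in range(3))
    u = dot(t0, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = cross(t0, e1)
    v = dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return dot(e2, q) * inv > EPS   # intersection in front of the origin

def is_interior(point, triangles, samples=3, jitter=1e-3):
    # Cast several slightly jittered +x rays and take a parity vote.
    # A single ray grazing an edge shared by two triangles can miss both,
    # but it is very unlikely that every jittered ray does so.
    odd = 0
    for k in range(samples):
        o = (point[0], point[1] + k * jitter, point[2] - k * jitter)
        hits = sum(ray_hits_triangle(o, (1.0, 0.0, 0.0), t)
                   for t in triangles)
        odd += hits % 2
    return 2 * odd > samples
```

The parity vote makes the classification robust: one ray slipping exactly between two triangles flips one vote, but the majority still reports the correct result.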
The following tables show the runtime of our algorithm when run on cones and spheres of
varying sizes and triangle counts:
Table 1: Cone results at 1 mm resolution (cone height: 3 cm; cone radius: 2 cm)
# of Indices | Total # of voxels | Safe voxels | Safe-by-proxy voxels | Borderline safe | Unsafe voxels | Unsafe distance-2 | Runtime (secs)
48 11481046 11069974 406536 982 3548 6 64.896
96 12422240 11995934 422390 968 2934 14 64.911
192 12663341 12230691 428998 1179 2457 16 64.990
384 12729912 12291342 434798 1249 2511 12 65.114
768 12746772 12298354 444196 1783 2427 12 64.786
1536 12757579 12290748 460762 3530 2527 12 65.130
3072 12773920 12275130 484748 11397 2635 10 66.113
6144 12795952 12253664 503368 35933 2969 18 65.973
12288 12810140 12241402 506974 58404 3326 34 68.344
24576 12815318 12236026 511369 64431 3458 34 69.342
Table 2: Simple sphere results at 1 mm resolution
Radius | # of Indices | Total # of voxels | Safe voxels | Safe-by-proxy voxels | Borderline safe | Unsafe voxels | Unsafe distance-2 | Runtime (secs)
1mm 192 10668 7320 3109 31 208 0 57.255
2mm 192 29356 22474 6486 134 262 0 53.149
4mm 192 189056 164532 23908 96 520 0 54.241
1mm 768 12732 9104 3512 76 40 0 54.700
2mm 768 35276 27928 7232 80 36 0 53.835
4mm 768 229320 202720 26272 136 192 0 54.444
1mm 3072 13188 9512 3576 28 72 0 53.259
2mm 3072 36756 29312 7376 20 48 0 53.820
4mm 3072 239944 212912 26912 72 48 0 54.390
1mm 12288 13380 9704 3624 28 24 0 53.399
2mm 12288 37248 29744 7456 36 12 0 54.397
4mm 12288 242440 215360 27008 72 0 0 54.163
2mm 19200 37248 29744 7456 36 12 0 54.756
1mm 49152 13380 9704 3624 28 24 0 53.305
Table 3: Complex sphere results at 1 mm resolution
Radius | # of Triangles | Total # of voxels | Safe voxels | Safe-by-proxy voxels | Borderline safe | Unsafe voxels | Unsafe distance-2 | Runtime (secs)
10cm 10000 4276368 4088104 188192 64 8 0 57.320
20cm 10000 33739360 32986112 748574 508 4166 0 73.800
30cm 10000 113363830 111668462 1685766 674 8928 0 101.720
40cm 10000 268091638 265077798 2997486 1246 15108 0 137.610
10cm 20164 4280550 4092214 188268 64 4 0 56.847
20cm 20164 33767788 33014284 749038 320 4146 0 73.258
30cm 20164 113470294 111774582 1686520 454 8738 0 101.182
40cm 20164 268335879 265321299 2998982 1098 14500 0 134.752
10cm 50176 4281936 4093528 188336 72 0 0 56.830
20cm 50176 33777817 33024177 749162 262 4216 0 73.336
30cm 50176 113513704 111817630 1686724 520 8830 0 102.242
40cm 50176 268490537 265475313 2999544 1128 14552 0 133.177
10cm 101124 4282400 4094000 188350 48 2 0 57.657
20cm 101124 33788151 33034487 749212 220 4232 0 72.806
30cm 101124 113531453 111835317 1686760 528 8848 0 102.508
40cm 101124 268516860 265501452 2999702 1032 14674 0 134.566
10cm 200704 4283468 4095060 188352 56 0 0 57.533
20cm 200704 33794806 33041094 749286 234 4192 0 73.507
30cm 200704 113532380 111836180 1686726 560 8914 0 102.913
40cm 200704 268574553 265558961 2999827 1106 14659 0 134.347
10cm 300304 4282512 4094104 188352 56 0 0 58.141
20cm 300304 33802566 33048886 749360 192 4128 0 74.584
30cm 300304 113535241 111839109 1686779 506 8847 0 103.740
40cm 300304 268580767 265565127 2999859 1122 14659 0 137.686
Table 4: Sphere results at 0.5 mm resolution
Radius | # of Triangles | Total # of voxels | Safe voxels | Safe-by-proxy voxels | Borderline safe | Unsafe voxels | Unsafe distance-2 | Runtime (secs)
10cm 10000 33742755 32989507 748570 508 4170 0 430.841
20cm 10000 268100817 265086977 2997490 1244 15106 0 492.524
30cm 10000 904216581 897435203 6744076 2841 34460 1 616.356
40cm 10000 2139962700 2127907452 11987554 5054 62640 0 797.900
10cm 20164 33768872 33015368 749038 320 4146 0 430.264
20cm 20164 268349533 265334953 2999022 1092 14466 0 492.300
30cm 20164 905006877 898223840 6746808 2418 33809 2 613.897
40cm 20164 2141705359 2129646691 11991576 4437 62623 32 791.757
10cm 50176 33779009 33025369 749174 260 4206 0 428.891
20cm 50176 268506348 265491124 2999544 1128 14552 0 491.963
30cm 50176 905350606 898565912 6748014 2435 34244 1 606.325
40cm 50176 2142791504 2130729584 11994780 5152 61988 0 783.728
10cm 101124 33795016 33041352 749204 220 4240 0 426.500
20cm 101124 268535011 265519603 2999708 1028 14672 0 489.154
30cm 101124 905490427 898705274 6748734 2040 34379 0 608.166
40cm 101124 2143028408 2130965900 11995180 4787 62529 12 784.337
10cm 200704 33801385 33047673 749276 236 4200 0 428.111
20cm 200704 268596315 265580723 2999830 1106 14656 0 486.829
30cm 200704 905647554 898861993 6749042 2250 34269 0 603.969
40cm 200704 2143367865 2131304937 11995805 4422 62701 0 792.714
10cm 300304 33809661 33055981 749364 186 4130 0 430.326
20cm 300304 268619550 265603910 2999865 1120 1120 0 488.670
30cm 300304 905626414 898840953 6748944 2161 34356 0 604.406
40cm 300304 2143429699 2131366507 11996323 4000 62869 0 785.550
These results were produced on a Dell Inspiron 1720 with an Intel® Core™2 Duo CPU
T8300 @ 2.40 GHz and 3.0 GB of RAM, running the 32-bit Windows Vista operating system.
What is most interesting about these results is the consistency of the algorithm's runtime
regardless of triangle count. For instance, all of the spheres with a ten-centimeter radius took
roughly a minute to process. Clearly, the limiting factor in our voxel-based approach is not the
number of triangles but the number of voxels. As the tables show, the algorithm ran for about 70
seconds on spheres with a 20-centimeter radius, about 100 seconds at 30 centimeters, and about
135 seconds at 40 centimeters. The consistency of this runtime increase strongly indicates the
correlation between voxel count and runtime.
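The cubic relationship between model size and voxel count can be checked directly against the 10000-triangle rows of Table 3. The short Python sketch below (written for illustration only) compares each total-voxel count to the 10 cm baseline:

```python
# Total-voxel counts for the 10000-triangle sphere at 1 mm resolution,
# copied from Table 3 (radius in cm -> total voxels).
TOTAL_VOXELS = {10: 4276368, 20: 33739360, 30: 113363830, 40: 268091638}

def growth_vs_cubic(counts, base_radius=10):
    # For each radius, pair the measured growth over the baseline with the
    # (r / base)**3 growth expected if voxel count alone drives the cost.
    base = counts[base_radius]
    return {r: (n / base, (r / base_radius) ** 3)
            for r, n in counts.items()}
```

The measured growth factors (roughly 7.9x, 26.5x, and 62.7x over the 10 cm baseline) track the ideal 8x, 27x, and 64x closely, mirroring the runtime trend described above.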
These results mean that this approach would be particularly valuable for identifying
narrow areas of small yet highly complex 3D models. The application is ideal for processing
complex, highly tessellated input since large numbers of vertices do not have a significant effect
on runtime. However, the algorithm is less suited to processing models with large physical
dimensions. Note that the current solution breaks every area of model space down into a
SubCube, splitting each branch of the octree down to the minimally-sized voxels. This is
certainly the reason that runtime slows for larger models. An improved version of the algorithm
would not break down the octree unnecessarily and would thereby achieve significant runtime
savings. In particular, there is no reason that completely safe, interior regions need to be split;
subdivision in such cases is redundant. A smarter implementation would leave these large
regions 'whole' and would use the octree structure itself, rather than a 3D array, to determine 26-
neighbor adjacency for the flooding part of the algorithm.
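One way to realize this improvement is sketched below: a SubCube-style octree node (the name follows the paper, but the code is illustrative, and `contains_surface` is a hypothetical stand-in for the real triangle-in-cell test) that subdivides only where surface geometry is present.

```python
class SubCube:
    # Sketch of an octree node that leaves homogeneous regions whole
    # instead of always descending to minimum-size voxels.
    def __init__(self, origin, size):
        self.origin, self.size = origin, size
        self.children = []

    def subdivide(self, contains_surface, min_size):
        # Split only cells that actually intersect surface geometry;
        # fully interior or fully exterior cells stay whole.
        if self.size <= min_size or not contains_surface(self.origin, self.size):
            return
        half = self.size / 2
        ox, oy, oz = self.origin
        for dx in (0, half):
            for dy in (0, half):
                for dz in (0, half):
                    child = SubCube((ox + dx, oy + dy, oz + dz), half)
                    child.subdivide(contains_surface, min_size)
                    self.children.append(child)

def node_count(node):
    # Total nodes in the tree, including the root.
    return 1 + sum(node_count(c) for c in node.children)
```

With the surface confined to one corner of a 4-unit cube, this sketch creates 17 nodes where unconditional subdivision to unit voxels would create 73.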
These results also point to the scalability and accuracy of the solution. The tips of the
cones, which were purposely chosen to be very thin, were correctly identified as the risky areas
of the models. The spheres, being round, thick objects, had no such thin areas. The algorithm
identified only a negligible number of false positives when running at half-millimeter resolution,
as shown in Table 4. This is attributable to floating-point ray-casting errors; these results were
produced by running the algorithm without multisampling rays during ray-casting.
Below is a screenshot of the X3D image produced by running the algorithm on a cone:
Figure 26: Results with different voxel classifications shaded by color. Distance-2 unsafe voxels
are colored red. Distance-1 unsafe voxels are colored yellow. Borderline-safe voxels are colored
pink, and safe-by-proxy voxels are colored blue.
Figure 26 is an image demonstrating the full spectrum of voxel classification. Users can
select which voxels they want to appear in the final visualization, however. The output of the
algorithm can be quickly and easily tailored to highlight any set or sets of voxels. For instance,
Figure 27 shows an image with only the borderline-safe voxels colored:
Figure 27: Example of 1mm-resolution output with only borderline-safe voxels colored
(highlighted in pink, or grey in black-and-white print).
An example of the standard visualization (which highlights just the distance-2 unsafe voxels) is
presented in Figure 28. For comparison purposes, Figure 29 shows an image with both the
distance-2 and the distance-1 voxels present.
Figure 28: Default visualization: 1mm-resolution with distance-2 unsafe voxels colored yellow.
Figure 29: Alternate visualization: distance-1 unsafe voxels are colored yellow, and distance-2
unsafe voxels in red.
Also of note is the customizability of visualization colors. In Figure 28 the distance-2
unsafe voxels are yellow whereas in Figure 29 it is the distance-1 unsafe voxels that are
highlighted yellow.
The previous four figures were produced by voxelizing with 1 millimeter resolution.
Observe how the results change when voxelizing at higher resolutions in the following two
figures:
Figure 30: 0.5mm resolution with distance-2 voxels highlighted in red.
At the original one millimeter resolution the entire leaf stem was flagged as unsafe. In
contrast, at one-half millimeter resolution only the narrowest parts of the stem are flagged as
unsafe. Figure 31 is a screenshot of the visualization produced when we increase the resolution
yet again:
Figure 31: 0.25 mm resolution with distance-2 voxels highlighted in red
These visualizations demonstrate that the algorithm can be scaled to suit a 3D printer's
resolution. If, for instance, a 3D printer or rapid prototyping machine had poor resolution, one
could run the algorithm at a correspondingly coarse resolution for quick results; yet the
algorithm can also identify narrow model geometry at a very fine-grained resolution if desired.
It is easily matched to any particular printer, producing a suitable visualization regardless of
printer resolution.
8. Conclusion
The algorithm identifies areas of a 3D model that are not of sufficient thickness to be printed
properly. It does this in three steps: first, the model geometry is divided into an octree down to
voxels of a specified depth; second, ray casting determines which voxels are interior or exterior
to the model; finally, voxel depth information is flooded outward from the interior, safe-to-print
voxels. When the algorithm finishes processing the entire model, it produces an X3D
representation of its voxel classifications.
This process produces customizable output that represents the 3D model and identifies
overly narrow segments. The algorithm could also easily be adapted to identify overly wide
segments. In many cases it is financially important to minimize the use of material, and the
flooding algorithm could be applied to that purpose, marking voxels of sufficient depth as
suitable for being hollowed away.
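That adaptation could reuse the flood step almost unchanged. The sketch below is illustrative Python, using 6-connected flooding for brevity where the paper's flooding uses 26-neighbor adjacency; it marks interior voxels beyond a depth threshold as hollowing candidates.

```python
from collections import deque

def hollow_candidates(interior, surface, min_depth=3):
    # Flood a depth value inward from the surface voxels; interior voxels
    # at least min_depth steps from any surface voxel could be hollowed
    # away to save material.  (6-connectivity here, for brevity.)
    depth = {v: 0 for v in surface}
    queue = deque(surface)
    while queue:
        x, y, z = queue.popleft()
        for n in ((x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                  (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)):
            if n in interior and n not in depth:
                depth[n] = depth[(x, y, z)] + 1
                queue.append(n)
    return {v for v in interior if depth.get(v, 0) >= min_depth}
```

Raising `min_depth` corresponds to leaving a thicker printed shell around the hollowed region.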
Additionally, its speed, coupled with its use of the X3D file format, makes this algorithm
highly suitable for Web Services applications: a user could send 3D triangle mesh data over the
web, and the algorithm could return a visualization highlighting the weakest areas of the model.
Furthermore, this algorithm produces relatively rapid results. Other voxelization
efforts, such as that of the paper "Real-time Voxelization for Complex Polygonal Models" [9],
perform even faster: by utilizing the GPU (graphics processing unit), the authors were able to
perform real-time voxelization of polygonal surfaces into 2D textures in video memory. Their
paper presents a very quick, powerful solution for volumetric representation. However, it does
not present a method to evaluate areas of minimal thickness, as my algorithm does. It does not
take into account any volume calculations and does not highlight voxels that fail the
minimum-thickness threshold; the paper is solely about straight voxelization.
In their introduction the authors claim, “As voxelization is basically a 3D scan conversion
process, it is natural to make use of rasterization graphics hardware in parts of or the whole
voxelization pipeline” [9]. They do so by dividing the volumetric representation into three tasks:
rasterization, texelization, and synthesis. During the rasterization step, the triangles of the surface
mesh are rasterized to the discrete voxel space. During the texelization step, volume space is
converted to a 'worksheet' that records all voxelization information. The final stage is the
synthesis stage, during which three directional 'sheet buffers', each representing a part of the
discrete voxel space, are transformed and reformulated to the worksheet representing final
volume. My algorithm, on the other hand, does not rely on any special GPU hardware and can be
used with only common computational resources.
Finally, I feel the scalability of my algorithm is a resounding success. Many voxelization
papers test exclusively on 3D models composed of few triangles, or they test with very low
resolutions, such as 256x256x256 [10]. My algorithm has scaled to resolutions as high as
4096x4096x4096 and evaluated models with many hundreds of thousands of triangles.
Despite these successes, I predict that the algorithm could be improved yet further. For
future work in this area, I would suggest that significant time savings could be obtained by not
converting the SubCube to a 3D array during the voxel classification/flooding part of the
algorithm. By sticking strictly to the octree implementation, a large amount of memory thrashing
would be avoided and runtime speed would undoubtedly improve.
9. Acknowledgments
Many thanks to Dr. Isabelle Bichindaritz for her consistently insightful feedback and
suggestions, and to Wayne Warren for his advice and proofreading.
10. References
1. “3-D Printing for the Masses: A rapid-prototyping service opens up technology to
hobbyist designers.” Technology Review, Thursday, July 31, 2008.
http://www.technologyreview.com/Infotech/21152/?a=f
2. A. Kaufman, D. Cohen, and R. Yagel. "Volume Graphics". In IEEE Computer, Volume
26, Issue 7, July 1993, pp. 51-64.
3. “What is X3D?” Web3D Consortium, http://www.web3d.org/about/overview/
4. A. E. Walsh and M. Bourges-Sevenier. “Core Web3D”. Prentice Hall, September 14,
2000, Chapter 20.
5. S. Thon, G. Gesquiere, and R. Raffin. “A Low Cost Antialiased Space Filled
Voxelization of Polygonal Objects”. International Conference Graphicon 2004, Moscow,
Russia.
6. C. Ericson. “Real-Time Collision Detection”. Part of the Morgan Kaufmann Series in
Interactive 3-D Technology. Morgan Kaufmann, January 5, 2005.
7. J. Huang, Y. Li, R. Crawfis, S.C. Lu, and S. Y. Liou. “A complete distance field
representation.” In Proceedings of the conference on Visualization '01 (San Diego,
California, October 21-26, 2001). VISUALIZATION. IEEE Computer Society,
Washington, DC, 247-254.
8. J. Huang, R. Yagel, V. Filippov, and Y. Kurzion. “An Accurate Method for Voxelizing
Polygon Meshes.” Proc. 1998 IEEE Symp.Volume Visualization, pp. 119-126, Oct. 1998.
9. Z. Dong, W. Chen, H. Bao, H. Zhang, and Q. Peng. "Real-time Voxelization for
Complex Polygonal Models." In Proceedings of the 12th Pacific Conference on Computer
Graphics and Applications (Oct. 6-8, 2004), pp. 43-50, 2004.
10. G. Varadhan, S. Krishnan, Y. J. Kim, S. Diggavi, and D. Manocha. "Efficient max-norm
distance computation and reliable voxelization." Proceedings of the 2003
Eurographics/ACM SIGGRAPH Symposium on Geometry Processing (Aachen, Germany,
June 23-25, 2003). ACM International Conference Proceeding Series, vol. 43.
Eurographics Association, Aire-la-Ville, Switzerland, 116-126.