Rapid prototyping models generated from machine vision data
Colin Bradley*
Department of Mechanical Engineering, University of Victoria, PO Box 3055, Victoria, BC, Canada V8W 3P6
Received 1 December 1999; accepted 28 September 2000
Abstract
This paper presents the development of both hardware and software systems suitable for the three-dimensional (3D)
digitization and computer modeling of objects intended for manufacture via computer numerically controlled (CNC)
machining or rapid prototyping and tooling systems. The hardware sub-system is comprised of a 3D machine vision sensor
integrated with a CNC machine tool. The software sub-system is comprised of modules capable of modeling very large 3D
data sets (termed cloud data) using a unified, non-redundant triangular mesh. This representation is accomplished from the 3D
data points by a novel triangulation process. Several case studies are presented that illustrate the efficacy of the technique for
rapid manufacture from an initial designer's model. © 2001 Elsevier Science B.V. All rights reserved.
Keywords: Rapid manufacturing; Vision system; Surface modeling
1. Introduction
A large percentage of products are manufactured
through processes such as die-casting and injection
molding. There are approximately 15,000 mold and
die shops in the United States, with an annual sales
volume of US$ 20 billion (see Altan et al. [1]). This
industry is employing new technologies to decrease
manufacturing costs, minimize production times and
quickly produce short prototype runs for product
testing. Technologies that promise to partially fulfill
these demands are rapid prototyping, rapid tooling and
reverse engineering employing 3D machine vision as a
digitizer.
The process of capturing object form through sur-
face digitization and generation of a computer model
of the part is termed reverse engineering. The process
follows the opposite sequence of events to conventional
design, in which a prototype object is generated from a
solid model. In the context of general manufacturing
methods, reverse engineering is an important process
for instances where a product initially exists as a
designer's model in a medium such as styling foam
or modeling clay. The model's surface form must be
digitized and the data transformed to a computer-
based representation (compatible with current manu-
facturing methods). The digitization process can be
achieved through spatial measurements taken manu-
ally by a coordinate measuring machine (CMM) or
touch probe mounted on a CNC machine tool. The
manual process, while accurate, is laborious and
ill-suited to defining the intricate, free-form
surface patches that are common in many modern
consumer products.
Computer vision systems, capable of measuring 3D
points on a surface, have many beneficial features that
can increase the efficiency of the reverse engineering
process. Compared to touch probes, 3D vision systems
have the advantages of high data collection speed and
Computers in Industry 44 (2001) 159–173
* Tel.: +1-250-721-6031; fax: +1-250-721-6051.
E-mail address: [email protected] (C. Bradley).
0166-3615/01/$ – see front matter © 2001 Elsevier Science B.V. All rights reserved.
PII: S0166-3615(00)00083-X
non-contact measurement. The primary limitation is
the trade-off between the sensor's accuracy and depth
of field and, in some cases, the cost.
A suitable CAD model can be utilized for genera-
tion of a CNC machine tool path or, if a rapid tooling
system is to be employed in manufacturing the part,
the 3D data can also be employed to produce the
necessary manufacturing data file (termed an ".stl"
file). A master pattern is produced in the rapid
manufacturing system (through stereolithography or
selective laser sintering) and employed to build a
silicone room temperature vulcanization (RTV) mold
from which short runs of finished parts can be
produced through vacuum casting. Furthermore, recent
advances in materials and processes permit the pro-
duction of both bridge and hard tooling using rapid
manufacturing methods. The CAD data format neces-
sary for all rapid manufacturing systems is a poly-
hedral representation consisting of a mesh of planar
triangular facets completely covering the object. As
detailed in this work, generating the polyhedral trian-
gular mesh directly from the 3D digitized data can
further reduce product manufacturing times.
2. Modeling 3D digitized data
Data points generated by a 3D vision system can be
structured in a variety of ways depending on the
speci®c sensor used. A highly structured data format
is a range map consisting of a single z-value (height or
range) for each (x, y) pixel location in the image.
Range sensors utilizing a CCD camera generate single
view range maps. Line-scanning devices produce
partially structured data, whereas sophisticated sensors
(mounted on coordinate measuring machines or CNC
machine tools) generate unstructured cloud data files
compiled from multiple views around the object. For
example, see the cloud data in Fig. 1 containing
110,000 data points collected from four views. Sur-
face data point resolution may also vary from sparse to
dense, which adds to the difficulty of constructing
an accurate mesh. Construction of a unified triangular
mesh from cloud data is a particularly challenging problem and
one focus of this research. Many algorithms have been
presented for planar data, or 3D data possessing some
degree of initial structure, but not for completely
unstructured data.
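The contrast between a structured range map and unstructured cloud data can be made concrete with a short sketch. The conversion below, including the `pixel_pitch` lateral spacing, is purely illustrative and does not reflect the actual sensor calibration.

```python
import numpy as np

def range_map_to_points(z, pixel_pitch=0.1):
    """Convert a structured range map (one z-value per (x, y) pixel) into
    an unstructured N x 3 cloud of 3D points.  pixel_pitch is the assumed
    lateral spacing between pixels (illustrative, in mm)."""
    rows, cols = z.shape
    xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
    return np.column_stack([xs.ravel() * pixel_pitch,
                            ys.ravel() * pixel_pitch,
                            z.ravel()])

# A tiny 2 x 2 range map becomes four unordered 3D points; once points
# from several views are merged, this grid structure is lost entirely.
z = np.array([[1.0, 1.2],
              [1.1, 1.3]])
pts = range_map_to_points(z)
```

Merging several such view-dependent point sets into one file is exactly what produces the unstructured cloud data the meshing algorithm must handle.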
Parametric modeling entities, such as non-uniform
rational B-spline (NURB) curves and surfaces, are
ideal for computer-based design but are limited for
reverse engineering applications: (i) the parameteriza-
tion of 3D data is not robust enough to deliver smooth
and accurate surfaces, (ii) the overall 3D data set must
be manually segmented into discrete patches prior to
performing the surface fitting, and (iii) the surface
patch continuity conditions (spatial and slope),
between all patches, must be ensured [2]. The trian-
gular meshing approach eliminates these problems:
it can represent general object shapes, removes the
need for manual data segmentation, and possesses
good computational efficiency.
Barnhill's [3] method for meshing planar data,
{(xi, yi) ∈ R2}, employs a visibility criterion. As
shown in Fig. 2, an edge is visible from a point if
all possible lines connecting the point to the edge do
not touch or cross the existing mesh. In Fig. 2, all
visible edges, shown in bold, are connected to the
point. The process continues with the next nearest
point until all points are connected to the mesh. At
every stage in the triangulation, the mesh is closed,
containing no holes, and the boundary is convex,
containing no indentations. A Delaunay triangulation
Nomenclature
B patch boundary
Be = {be1, ..., ben} edges of a patch boundary
Bv = {bv1, ..., bvn} vertices of a patch boundary
D = {d1, ..., dn} digitized points representing So
E = {e1, ..., en} edges of a patch
Eti = {eti1, eti2, eti3} edges of ti; Eti ⊂ E
G = {g1, g2, g3} the patch growth parameters
Mi initial 3D triangular mesh interpolating R
Mo optimized 3D triangular mesh
P = {p1, ..., pn} patches which collectively form a mesh
R = {r1, ..., rn} reduced subset of digitized points
Rs = {rs1, ..., rsn} the sorted set R
So the original object surface
T = {t1, ..., tn} the triangles that compose a patch
V = {v1, ..., vn} the vertices of a patch
Vti = {vti1, vti2, vti3} the vertices of a triangle ti
is subsequently achieved by making every triangular
facet in the mesh approximately equilateral. This
method cannot be extended to triangulate 3D cloud
data because any point that is not coplanar with a
triangle is visible to every edge of the triangle. The
2D Delaunay method was modi®ed and extended to
accommodate 3D data by a number of researchers:
Lawson [4], Choi et al. [5], Fang and Piegl [6] and
Cignoni et al. [7]. Rather than using the Euclidean
distance between 2D points to determine data linkage,
these methods examined the data arrangement in 3D
space and employed geometric reasoning to construct
the triangular facet mesh. For example,
Choi et al. [5] analyzed a vector angle between the
current point and a candidate point to determine
inclusion in the mesh.
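The visibility criterion for planar data can be sketched with a simple orientation test: for a convex, counter-clockwise boundary, an edge is visible from an exterior point when the point lies on the outward side of that edge. This is a minimal illustration of the idea, not Barnhill's published algorithm.

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; its sign gives orientation."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def visible_edges(boundary, p):
    """Return the edges of a convex, counter-clockwise boundary that are
    visible from an exterior point p: those for which p lies strictly on
    the outward (clockwise) side of the edge."""
    edges = []
    n = len(boundary)
    for i in range(n):
        a, b = boundary[i], boundary[(i + 1) % n]
        if cross(a, b, p) < 0:   # p is to the right of edge a->b: outside
            edges.append((a, b))
    return edges

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # CCW boundary of a unit square
print(visible_edges(square, (2, 0.5)))       # only the right-hand edge
```

Connecting the new point to every visible edge, as in Fig. 2, keeps the boundary convex at every step, which is precisely the property that fails to generalize to non-coplanar 3D points.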
A method for meshing a set of 3D cloud data points,
X = {x1, ..., xn} ⊂ R3, on an unknown surface, U,
was proposed by Hoppe [8]. A surface normal vector
is calculated at every data point from a local approx-
imating plane derived from neighboring points. A
uniform and regular 3D grid of data is generated from
the normal vectors and data points. A triangular mesh
is created by connecting the points in adjacent rows
and columns present in the regularized grid. The
method is limited by the requirement that the input
data points xi must be uniformly and densely distrib-
uted across the given surface, U. The method is also
computationally intensive, owing to the large number of
calculations performed for each point, and cannot be
used for sparse or irregularly distributed cloud data:
interpolating between sparse data induces erroneous
points that do not accurately model the surface.
Other researchers have
developed 3D cloud data meshing algorithms (Milroy
[9]; Turk and Levoy [10]); however, these capitalize
on connectivity information inherent in the 3D data
generated by a specific machine vision system.
3. Three-dimensional vision system
The reverse engineering system consists of a Moiré
interferometer range sensor head (electro-optical
Fig. 1. Example of a 3D cloud data set containing 110,000 3D data points.
Fig. 2. Barnhill's method for triangulating a data set Ð the
visibility criterion.
information systems) mounted on a 3-axis CNC
milling machine. The milling machine mount includes
two stepper motors that rotate the sensor head in an
additional 2 degrees of freedom. Overall control is
provided by a PC-based programmable motion control
(PMC) board. The system can position the sensor head,
around an object placed on the machine tool deck,
within a total work volume of 75 cm × 50 cm × 24 cm.
3.1. Range sensor head
The sensor, shown in the photograph of Fig. 3,
utilizes a camera with a pixel resolution of
700 × 500 and is positioned at a stand-off distance
of approximately 150 mm from the surface during
data collection. The field of view is 50 mm × 50 mm
and data can be collected over a depth of field of
±15 mm from the stand-off point. The resolution of
the range data is determined by the pitch of the
projection grating; the pitch of two line pairs per
millimeter used here permits a resolution of
25.4 μm, which is suitable for objects with surface
detail. A coarser pitch would lower the range data
resolution, due to the larger projected fringe spacing,
which is more suitable for objects (e.g. turbine blades)
possessing gradual changes in surface curvature. The
accuracy specification is in the range 25.4–50.8 μm
for the above configuration. As discussed by Besl [11],
this digital Moiré (with reference) system is a
technologically more sophisticated version of the
traditional shadow Moiré method.
Prior to collecting patch data from an object, the
sensor digitizes an image of a flat reference plate that
is etched with the same fringe pattern. This image is
stored and the range from the sensor to the target
surface is calculated, at each pixel location, by calcu-
lation of the phase shift between each fringe in the
projected image and the corresponding fringe in the
reference image. A more detailed description of data
point calculation is provided in the sensor's user
manual [12]. Other data patches are similarly acquired
and the process repeated until the entire object surface
is digitized. An on-line wire frame representation of
the patch data is available to guide the operator's
selection of the next sensor head position. Previous
research, for example Milroy [9] and Maver and
Bajcsy [13], has examined algorithms for computing
the minimum number of sensor head positions neces-
sary to digitize an object. In this instance, the sensor
head is positioned under operator control and the
software developed for modeling the overall data
set can accept overlapping patches.
3.2. Sensor positioning system
The milling machine has three computer-controlled
axes of motion: x, y, and z. The x and y axes are
physically realized in a cast iron deck which can move
in a plane measuring 75 cm × 50 cm. The z-axis is
normal to the planar deck and has a 24 cm travel.
Closed-loop positioning for each machine axis is achieved
through a dc servo motor and lead screw combination,
controlled by the PMC card, as shown in the diagram
of Fig. 4. As shown in Figs. 3 and 4, the PMC card also
controls the 8 oz-in and 12 oz-in torque stepper
motors, located on the sensor mount. These motors
provide additional sensor positioning capabilities and
each motor's driver is connected to a PMC digital
output port. In combination, the two motors can aim
the sensor in any direction below the horizontal plane
passing through the mount's axis and parallel to the
deck.
Supervisory control software was developed and
implemented on the PC to perform the following
functions:
• Integration of the patch data, generated by the
sensor in each viewing location, with the absolute
position obtained from the three CNC machine tool
motor encoders and the two rotational stepper
motors on the sensor head mount.

Fig. 3. Photograph of the 3D-range sensor head mounted on the
CNC machine tool.

The motor
control software was written in the PMC card native
language and generates a cloud data file in one
unified global reference frame. Fig. 4 further illus-
trates the various component inter-connections.
• Providing a user interface for activating the sensor,
providing location feedback and controlling data
file storage. Digitized data from an object can be
displayed in a number of formats including ortho-
graphic and isometric views.
• Providing motor control and accepting required
sensor position through the user interface. Position
commands are passed to the custom motor control
program that calculates the number of motor shaft
turns, for each axis, to correctly position the sensor
at the desired 3D spatial coordinate. The PMC
control board executes this information.
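Integrating the sensor-frame patch data with the encoder and stepper-motor positions amounts to a rigid-body transform into one global frame. The sketch below assumes a pan-tilt rotation order and axis assignment chosen purely for illustration; the mount's actual kinematic convention is not given in the text.

```python
import numpy as np

def patch_to_global(points, xyz, pan, tilt):
    """Map sensor-frame patch points (N x 3) into the machine's global
    frame.  xyz is the sensor position from the three axis encoders (mm);
    pan and tilt are the two stepper-motor rotations (radians).  The
    names, axes and rotation order here are assumptions, not the paper's
    documented convention."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])   # pan about z
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])   # tilt about x
    # Rotate each point into the global orientation, then translate.
    return points @ (Rz @ Rx).T + np.asarray(xyz, dtype=float)

# A point at the 150 mm stand-off distance, with the head un-rotated,
# simply translates by the encoder position.
patch = np.array([[0.0, 0.0, 150.0]])
print(patch_to_global(patch, (100, 50, 20), 0.0, 0.0))
```

Applying this transform patch by patch, with each patch's own pose, is what yields the single unified cloud data file described above.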
4. Cloud data meshing algorithm
A flowchart illustrating the sequence of steps necessary
to form a complete mesh of a cloud data file is
presented in Fig. 5. The multiple patches of x–y–z
points, comprising the cloud data, are transferred from
the vision system to a workstation for geometric
modeling. The cloud data is visualized on the screen
and extraneous data points, erroneously collected
from the object fixture or the machine tool deck,
are identified and manually deleted. The cloud data
set is reduced (see Section 4.1) to a more manageable
size and the resulting file displayed on the workstation
screen. The part topology is examined to determine
the number of surface patches present and the first
seed point for initial mesh generation (Section 4.2) is
selected employing the workstation user interface.
The initial mesh generation is repeated for each sur-
face patch utilizing an appropriate seed point. Upon
completion of this phase, the group of individual
meshes, each modeling a surface patch, are merged
to form one global mesh covering the entire object
surface. The final step (outlined in Section 4.3) opti-
mizes the mesh in order to improve the local modeling
of physical edges and smooth the intervening mesh
surfaces.
The digitized data, D = {d1, ..., dn}, is a set of 3D
coordinates of points, di = (xi, yi, zi) ∈ R3, on the
surface of the object, So. The data set D forms the
sampled representation of So; D can emanate from any
type of digitizing device and can be gathered from an
object of any shape. The triangular mesh, Mo, is
constructed from D by the steps itemized above and
presented below in detail.
Fig. 4. Schematic diagram of the inter-connection between major system components.
4.1. Data reduction
Most 3D vision systems generate copious data sets;
the mini-Moiré sensor can gather up to 12,000 points
per square centimeter. Very large data sets are not
always required; therefore, a voxel binning method
(see Weir [14]) is used to reduce the data. Typical raw
data sets are reduced from 300,000–30,000
points to 3000–1000 points. Voxel binning
reduces D, retaining a regular distribution of points
R = {r1, ..., rm} ⊂ D, by creating a maximum
bounding box around D, aligned with the three principal axes.
The volume is subdivided into uniform cubes, termed
voxel bins, and the data set D is allocated to the bins
and the point, di, closest to each bin's center is
retained. Data reduction results in a uniform distribu-
tion of points ri across the object surface, So, as shown
in Fig. 6, and is controlled through adjustment of two
parameters:
• Bin size: the size can be set explicitly or calculated
automatically by setting the desired number of bins.
Approximate spacing between the reduced data
points, R = {r1, ..., rn}, is equal to the bin edge
length.
• Sparse data location: for a bin containing very few
points, the closest point to the bin center, ri, may
still be located in one corner of the bin. Such points
are discarded, to preserve the regular data distribu-
tion. Two criteria are optionally available to remove
irregular points: the maximum distance, c1, from ri
to the bin center, or the minimum distance, c2, from
ri to all neighboring retained points, r. The criteria
are expressed as a fraction of the voxel bin size, i.e.
0 ≤ (c1, c2) ≤ 1. If the point ri fails to meet the
chosen criterion, the point is discarded and the bin
remains empty. The choice of criterion depends on
the shape of the surface, So, and the density of the
digitized points, D. Each case must be evaluated
individually, but a criterion of c2 = 0.5 works well
for many digitized surfaces.
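The voxel binning reduction described above can be sketched as follows. This is a minimal illustration implementing only the bin-center selection and an optional c1 distance criterion; it is not Weir's implementation.

```python
import numpy as np

def voxel_bin(D, bin_size, c1=None):
    """Reduce cloud data D (N x 3) by voxel binning: for each occupied
    cubic bin, retain the single point nearest the bin centre.  If c1 is
    given (as a fraction of the bin size), a retained point farther than
    c1 * bin_size from its bin centre is discarded and the bin is left
    empty, preserving a regular point distribution."""
    D = np.asarray(D, dtype=float)
    origin = D.min(axis=0)
    idx = np.floor((D - origin) / bin_size).astype(int)   # bin index per point
    centres = (idx + 0.5) * bin_size + origin             # centre of each point's bin
    dist = np.linalg.norm(D - centres, axis=1)
    best = {}                                             # bin index -> nearest point
    for i, key in enumerate(map(tuple, idx)):
        if key not in best or dist[i] < dist[best[key]]:
            best[key] = i
    keep = [i for i in best.values()
            if c1 is None or dist[i] <= c1 * bin_size]
    return D[sorted(keep)]

# Two points fall in one bin and one in another: one survivor per bin.
D = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2], [1.6, 1.6, 1.6]])
R = voxel_bin(D, bin_size=1.0)
print(len(R))  # 2
```

The approximate spacing of the surviving points equals the bin edge length, matching the behaviour described in the bullet points above.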
Fig. 5. Flowchart of the sequence of operations for generating a
global optimized mesh.
Fig. 6. Cloud data file size reduction utilizing voxel binning; initial
3D data set size 30,000 points, resulting 3D data set size 3000
points.
4.2. Initial mesh generation
The objective of mesh generation is to cover the
reduced point set, R, with an initial surface mesh of
triangular facets, Mi. A triangular mesh surface is
comprised of one, or several, mesh patches, pi, and
each patch is grown over R from a starting seed point,
rs ∈ R. Any point can be selected as the seed, and the
first patch, p1, expands until all points in R are meshed.
Complex objects, see Fig. 7, require each distinct
region, on the object, to be individually meshed and
all patches subsequently joined together to form a
continuous surface, Mi. The six patch mesh models a
surface possessing many complex features. The seed
point, for each patch, is typically placed at a distinct
feature, such as the tip of the nose.
The meshing process, illustrated in the flowchart of
Fig. 8, is initiated with the selection of a seed point, rs,
and all remaining points of R form the initial set of valid
points available for triangulation. The set R is sorted in
increasing Euclidean distance from rs, forming the
sorted subset Rs = {rs1, ..., rsn} ⊆ R. The seed point
rs becomes rs1, the next nearest point becomes rs2, and
so on, as shown in Fig. 9(a). Data from previously
meshed patches are removed from R and only the
boundary points of existing patches remain. The
meshing process follows a sequence of steps.
• The first vertices {v1, v2, v3}, edges {e1, e2, e3},
and triangle {t1} are created as shown in Fig. 8.
The first three vertices, defining the first triangle t1,
are the first three sorted points, i.e. {v1, v2, v3} =
{rs1, rs2, rs3}. The triangle t1 contains the edges
Et1 = {et11, et12, et13} = {e1, e2, e3} and vertices
Vt1 = {vt11, vt12, vt13} = {v1, v2, v3}, as shown in Fig. 9(b).
• A second triangle, t2, is added to the patch, consisting
of t1, by connecting the next nearest point,
rs4, to t1. The patch now contains t1 and
t2, as shown in Fig. 10(a), and the outside edges
form the patch boundary Be = {be1, be2, be3, be4} =
{e1, e2, e3, e4}. The boundary can be described in
terms of the edges, Be, or the vertices Bv =
{bv1, bv2, bv3, bv4} = {v1, v2, v3, v4}, so B = {Be, Bv}.
• The remaining points rsi ∈ Rs are considered in
their sorted order, and each rsi can be connected
to the expanding patch by creating a vertex vn = rsi,
then creating triangles between vn and the patch
boundary, B, as shown in Fig. 10(b). This process
begins for vn by finding the nearest patch boundary
vertex, bvn ∈ Bv, and the adjacent patch boundary
vertex, bvn+1. The first potential triangle is formed
containing {vn, bvn, bvn+1}. If this triangle is valid,
meshing continues in a forward direction around the
boundary. The next triangle to be considered contains
the vertices {vn, bvn+1, bvn+2}. When a triangle
is rejected, meshing returns to bvn and continues in
the reverse direction with {vn, bvn, bvn−1}. When
a second triangle is rejected, then meshing for vn
is finished, and the patch boundary B is updated,
as shown in Fig. 11. Meshing continues with
the next valid point, vn = rsi+1, until the patch
growth is limited by pre-defined boundary constraints.
• A patch grows until any combination of the following
constraints is met: (i) the outer edge of the
digitized data is reached; (ii) a distinct edge within
the digitized data is reached, or an existing patch is
encountered.
Specific patch growth control parameters have been
devised to limit further expansion, as mentioned in the
last point above. The parameters prevent invalid triangles
from becoming part of the mesh. The growth
parameters G = {g1, g2, g3} are as follows.
Fig. 7. The result of applying the meshing algorithm to a 3D-range
data set: six contiguous surface mesh patches define the object's
form.
• Range parameter, g1: the distance between vn and
the patch boundary must be less than the range
parameter, i.e. |vn − bvj| ≤ g1. The range g1 is
typically twice the bin size employed in the voxel
binning process. This value ensures that neighboring
vertices on a surface will be connected, but
triangles will not span large gaps in the digitized
data.
• Minimum triangle angle, g2: the smallest angle
within the triangle must be larger than the minimum
angle parameter, g2; typically, g2 = 5°. This pre-
vents nearly co-linear vertices from forming long,
narrow triangles. Such triangles are unwanted
because their surface normal may not accurately
match the object's local surface curvature.
• Maximum normal angle, g3: the angle between the
surface normal of the new triangle, ñt, and the local
surface normal of the patch, ñp, must be below the
maximum allowable angle parameter, g3. New triangles
are only valid if they form a sufficiently
smooth surface with the existing patch. This prevents
erroneous triangles from forming a sharp
corner where the digitized object does not contain
one. It also prevents the patch from growing around
sharp corners on the digitized object. The local
surface normal of the patch, ñp, is calculated at
the boundary edge, bej, that connects the boundary
vertices, bvj and bvj+1. Each boundary edge is
assigned a surface normal which is the average
normal of all the triangles which touch that edge.
As such, ñp is a fair approximation of the local patch
surface normal. The dot product operation is used to
compare the normals, i.e. the new triangle is valid if
(ñt · ñp) ≥ g3. Typical values for g3 are between 0
and 0.9.

Fig. 8. Flowchart of the algorithm for growing a mesh patch from a seed point.
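The three growth-parameter tests can be collected into one validity check. The function below is a hedged sketch with illustrative defaults (g1 in the data's units, g2 in degrees, g3 as a dot-product threshold made orientation-independent here via an absolute value); it is not the paper's code.

```python
import numpy as np

def triangle_valid(vn, bv1, bv2, n_patch, g1=2.0, g2=5.0, g3=0.9):
    """Check a candidate triangle {vn, bv1, bv2} against the three growth
    parameters: range g1 (maximum distance from vn to the boundary
    vertices), minimum interior angle g2 (degrees), and normal agreement
    g3 (dot product of unit normals must be at least g3)."""
    vn, bv1, bv2 = (np.asarray(p, dtype=float) for p in (vn, bv1, bv2))
    # g1: the candidate vertex must lie within range of the boundary edge
    if max(np.linalg.norm(vn - bv1), np.linalg.norm(vn - bv2)) > g1:
        return False
    # g2: reject long, narrow triangles via the smallest interior angle
    def angle(a, b, c):
        u, v = b - a, c - a
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    if min(angle(vn, bv1, bv2), angle(bv1, bv2, vn), angle(bv2, vn, bv1)) < g2:
        return False
    # g3: the new facet normal must agree with the local patch normal
    # (abs() makes the test independent of vertex winding in this sketch)
    n_t = np.cross(bv1 - vn, bv2 - vn)
    n_t = n_t / np.linalg.norm(n_t)
    return bool(abs(np.dot(n_t, n_patch)) >= g3)

# A well-shaped right triangle in the plane of the patch normal passes.
print(triangle_valid([0, 1, 0], [0, 0, 0], [1, 0, 0], n_patch=[0, 0, 1]))
```

Running this check for the forward and reverse candidate triangles around the boundary, as described in Section 4.2, is what terminates patch growth at data gaps and sharp edges.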
Fig. 9. (a) Sorting of 3D data points in increasing geometric distance from the seed point rs1; (b) the creation of the first triangular facet t1.
Fig. 10. (a) The creation of the mesh boundary B from triangular facets t1 and t2; (b) the growth of the mesh by adding a triangular facet
between the vertex vn = rs1 and the boundary.
Fig. 11. Condition for concluding the patch growth at a vertex vn
and updating the patch boundary.
If no valid triangles can be created between vn and
the boundary, B, then the vertex vn is deleted, and the
point rsi does not become a vertex of the growing
patch.
Many complex object surfaces cannot be covered
with a single triangular mesh, and require the selection
of several seed points at the center of each discrete
patch, as shown in Fig. 5. A mesh is then grown
independently from each seed point and a global
mesh, Mo, is finally constructed from the set. The
set of meshes is combined so that all boundary vertices
and edges are included with no overlapping regions at
the boundaries of the constituent patches. This tech-
nique removes the need to segment the entire cloud
data set at the outset; instead, a seed point on each
patch is selected and the mesh grows to the patch
boundary.
4.3. Mesh optimization
A triangular mesh surface patch must be optimized
to further improve the object representation. The
optimized mesh, Mo, contains the same vertices and
the same number of triangles as the initial mesh, Mi.
Edges, however, are spatially relocated to either
increase the mesh smoothness or enhance object edges
present in the mesh. The optimization algorithm itera-
tively examines every edge in the mesh surface, and
applies an appropriate criterion for ensuring mesh
improvement. Each non-boundary edge is shared by
two triangles, which together form a quadrilateral,
having the target edge as a diagonal. If the quadri-
lateral is convex, the target edge can be swapped to the
opposite diagonal to create two different triangles
between the same four vertices, as shown in Fig. 12.
The two optimization criteria are as follows.
1. Mesh smoothness: the edge location is chosen that
maximizes the smallest interior angle of the two
triangles, resulting in a mesh containing approximately
equilateral triangles.
2. Mesh edge enhancement: mesh edges are aligned
with the edges in the object surface to enhance
sharp features. Neighboring triangles are examined
and the final orientation of the target edge is
the one which minimizes the variation in surface
normals between the quadrilateral and the neigh-
boring triangles.
For either criterion, the set of edges, E, is examined
on each iteration of the algorithm until no further edge
adjustment is necessary. The best results have been
obtained by applying the criteria sequentially; the
initial mesh, Mi, is first optimized for smoothness,
then optimized to enhance the features. This prevents
artificial features that exist in the initial mesh, but not
on the object surface, from being erroneously
enhanced.
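The smoothness criterion's edge swap can be sketched as a max-min angle comparison between the two diagonals of the convex quadrilateral; the 2D coordinates below are illustrative stand-ins for mesh vertices.

```python
import numpy as np

def min_angle(tri):
    """Smallest interior angle (degrees) of a triangle given as 3 points."""
    a, b, c = (np.asarray(p, dtype=float) for p in tri)
    def ang(p, q, r):
        u, v = q - p, r - p
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return min(ang(a, b, c), ang(b, c, a), ang(c, a, b))

def should_swap(a, b, c, d):
    """For a convex quadrilateral a-b-c-d whose current diagonal is a-c,
    return True if swapping to diagonal b-d raises the smallest interior
    angle of the two triangles: the smoothness criterion above."""
    current = min(min_angle((a, b, c)), min_angle((a, c, d)))
    swapped = min(min_angle((a, b, d)), min_angle((b, c, d)))
    return swapped > current

# A thin quadrilateral whose a-c diagonal makes two sliver triangles:
# swapping to the short b-d diagonal improves the worst angle.
print(should_swap((0, 0), (1, 0.05), (2, 0), (1, -0.05)))
```

The feature-enhancement criterion uses the same swap operation but scores the two diagonals by surface-normal variation against the neighboring triangles instead of by interior angle.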
5. Application examples
Two case studies are presented to show the utility of
this technique for producing parts through manufac-
turing methods such as injection molding. Injection
mold tooling can be produced by CNC machining
from the surface definition generated by this technique.
Computer aided manufacturing (CAM) packages
(such as the DelCAM Duct program) utilize the
polyhedral surface representation created here. Alternatively,
the triangular facet model can be employed
to create an "STL" file compatible with all available
rapid prototyping systems. These systems, as
described by Jacobs [15], are now being employed
Fig. 12. The edge-swapping process that is the basis of the feature-enhancement process.
to quickly produce short run and bridge tooling from
stereolithography master models. The models are then
used to create the tooling for production of prototypes
in the end use material, typically an engineering plastic.
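A triangular facet model maps directly onto the STL format mentioned above. The sketch below writes a minimal ASCII ".stl" file from a list of facets; production rapid-prototyping files typically use the binary variant of the format.

```python
import numpy as np

def write_ascii_stl(path, triangles, name="mesh"):
    """Write facets (an iterable of three 3D vertices each) as an ASCII
    STL file: one per-facet normal followed by a three-vertex loop."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            v0, v1, v2 = (np.asarray(v, dtype=float) for v in tri)
            # Facet normal from the cross product of two edge vectors.
            n = np.cross(v1 - v0, v2 - v0)
            norm = np.linalg.norm(n)
            n = n / norm if norm > 0 else n
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# A single facet is enough to show the file structure.
write_ascii_stl("patch.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

Because the optimized mesh Mo is already a set of planar triangular facets, this export requires no further surface fitting, which is the time saving claimed in Section 1.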
5.1. Toy model
The figurine head, shown in Fig. 7, is typical of
intricate parts injection molded for the toy industry.
The object occupies a volume of 60 mm × 40 mm ×
30 mm and has a complex surface form that would be
ideal for production through stereolithography as
opposed to CNC machining. Machining would require
multiple part set-ups on a 3-axis mill, whereas stereolithography
can produce the part, layer-by-layer, with
little difficulty. The Pluto figurine is a difficult object
to model for three reasons.
1. The surface is too complex to be mapped to a 2D
planar domain (or any simple 3D geometric
Fig. 13. The six patches of Fig. 7 optimized with respect to a smoothness criterion (on the left) and subsequently with respect to feature
enhancement (on the right).
Fig. 14. Five mesh patches modeling the data obtained from the telephone receiver handset; the model contains no gaps in the mesh.
domain) thereby excluding the simple triangular
meshing algorithms discussed in Section 2.
2. The surface is not readily divisible into a
set of simple four-sided patches, as would be
required for fitting by tensor product surfaces.
Furthermore, the tensor product surface patches
would need to be merged together ensuring C0 and
C1 continuity across all patch boundaries, for
example see Milroy [2].
3. The surface has large smooth regions interrupted
by sharp features with high surface curvature that
are problematic for any B-spline surface fitting
due to difficulty in knot vector creation, data point
parameterization, and control point location.
The 3D triangular meshing algorithm commenced
with voxel binning the data; the original cloud file of
29,798 points was reduced to 3134 points employing a
bin dimension of 1.00 mm. The figurine was modeled
with six patches, as shown in Fig. 7. The six patches,
in the order they were grown, are: nose, top of head,
right cheek, left cheek, chin, and back of head and neck.
Patch growth parameters were selected that allow
large patches that join without gaps and without
forming erroneous triangles. The seed vertices were
selected at the regions of highest curvature, e.g. top of
the head, tip of the nose, bottom of the chin, and points
on the cheeks. Trial and error revealed that patch
growth is more efficient over high curvature
regions if the seed point is located at the center
of the region. A final patch was located at the back of
the head to fill a gap in the original data set D.
The six component patches were merged into a
single mesh prior to optimization. The optimized
meshes are shown in Fig. 13. The most accurate mesh
was obtained by optimizing first with the smoothing
criterion, shown on the left, then with the feature-enhancing
criterion, shown on the right. The final
mesh is smooth at regions of low curvature
(e.g. the bulbous nose) but still possesses clearly-defined
features (e.g. the eyebrows, nose wrinkles,
and cheeks). After some initial experimentation, optimum
growth parameters for each patch were selected.
The largest apparent gap in the mesh, on the throat
under the chin, was due to incomplete digitizing of
the figurine. This gap in the data points was too large
for the mesh to span. The error between the optimized
mesh and the digitized points tended to be largest at
the boundaries between patches; however, if necessary,
the fitting error could be improved by manually
inserting or removing individual triangles. The maximum
fitting error between the cloud data and the mesh
was 0.50 mm. Therefore, the overall worst-case error
between the object's surface and the fitted triangular
mesh, based on the sensor's stated worst-case accuracy
specification, was 0.55 mm.
The initial digitized surface of 29,798 points was
fitted with an optimized mesh of 6240 triangles in
approximately 3 h. The breakdown of time for each
stage of the process was: 10 s to import and reduce the
initial data set, approximately 2 h to grow the initial
mesh over the object's surface, 21 min to perform
optimization with respect to the smoothness criterion,
and 17 min to optimize with respect to the feature
enhancement criterion.
Fig. 15. (a) Five patch mesh optimized with respect to smoothness; (b) same mesh optimized with respect to feature enhancement.
C. Bradley / Computers in Industry 44 (2001) 159–173
5.2. Telephone handset
The telephone handset is an example of an industrial-type object with smooth surfaces, fillets and sharp edges. The difficulty in fitting a surface to the handset is retaining the sharp edges without distorting the rounded corners or flat faces. The telephone handset is fitted with five patches. One possible meshing scheme would be to fit separate patches on each
distinct region of the surface. The difficulty with this approach is that the rounded corners do not provide a clear boundary between surface regions; it would be very difficult to select patch growth parameters that constrain a patch to a single surface region. Instead, three large patches were grown to cover several surface faces and two smaller patches were grown to fill the gaps, as illustrated in Fig. 14. The resulting mesh has no gaps or erroneous triangles.

Fig. 16. Cross-sections through the telephone receiver handset illustrating the accuracy of the surface mesh model fit: (a) location of the cutting planes through the object; (b) longitudinal cross-section; (c) lateral cross-section through ear-piece; (d) lateral cross-section through mouthpiece.
The telephone handset was optimized first for smoothness, then to enhance the features. The smooth mesh, shown in Fig. 15(a), predictably results in all the sharp features of the surface becoming bevelled. The feature enhancement shown in Fig. 15(b) causes the triangle edges to align with the surface features, thus sharpening the edges between the flat faces. This enhancement is clearly seen around the edges of the earpiece and mouthpiece of the handset.
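A simple way to identify the sharp features that the enhancement criterion preserves is to examine the dihedral angle across each mesh edge: where the normals of the two adjacent triangles differ strongly, the edge lies on a feature such as the boundary between two flat faces. The sketch below is illustrative, not the paper's algorithm; the function name and the 40° threshold are assumptions, and a consistently wound mesh is assumed:

```python
import numpy as np

def sharp_edges(vertices, triangles, angle_deg=40.0):
    """Flag mesh edges whose dihedral angle exceeds a threshold.

    Edges shared by two triangles whose unit normals differ by more than
    `angle_deg` are treated as sharp features; a feature-enhancing pass
    would keep triangle edges aligned with them instead of bevelling them.
    """
    V, T = np.asarray(vertices, float), np.asarray(triangles, int)
    # unit normal of each triangle (consistent winding assumed)
    n = np.cross(V[T[:, 1]] - V[T[:, 0]], V[T[:, 2]] - V[T[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # map each undirected edge to the triangles sharing it
    edge_faces = {}
    for fi, (a, b, c) in enumerate(T):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    cos_t = np.cos(np.radians(angle_deg))
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and np.dot(n[fs[0]], n[fs[1]]) < cos_t]
```

On the handset this kind of test would flag the edges around the earpiece and mouthpiece while leaving the flat faces and gentle fillets untouched.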
The fitting errors in the handset mesh are best illustrated with cross-sections comparing the feature-enhanced mesh with the original digitized surface. Three cross-sections are examined, as shown in Fig. 16. The longitudinal cross-section shows the maximum error, at the sharp edge in the top left corner of Fig. 16. The magnitude of this error is approximately 0.8 mm, which is roughly half of the voxel bin size of 2 mm. The overall worst-case error between the actual part and the fitted mesh was 0.85 mm. The flat bottom of the handset was not digitized, so the mesh does not wrap around the handset.
The telephone handset mesh is also suitable for manufacture by either CNC machining or rapid prototyping and tooling. Since a triangular mesh is a common method for describing a surface, the mesh can be translated into any suitable computer file format, such as IGES, DXF, or STL.
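The translation to STL in particular is direct, because the ASCII STL format stores exactly what the fitted mesh already carries: one unit normal and three vertices per facet. The following writer is a generic sketch of that mapping, not the authors' software; `write_stl` is a hypothetical name:

```python
import numpy as np

def write_stl(path, vertices, triangles, name="mesh"):
    """Write a triangular mesh to an ASCII STL file."""
    V = np.asarray(vertices, float)
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in triangles:
            # facet normal from the triangle's edge vectors
            n = np.cross(V[b] - V[a], V[c] - V[a])
            norm = np.linalg.norm(n)
            n = n / norm if norm > 0 else n
            f.write(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n")
            f.write("    outer loop\n")
            for v in (V[a], V[b], V[c]):
                f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# demo: write a single facet and read the result back
import tempfile, os
path = os.path.join(tempfile.mkdtemp(), "tri.stl")
write_stl(path, [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
stl_text = open(path).read()
```

An IGES or DXF export would require more bookkeeping (entity records, headers), but the underlying geometry transferred is the same facet list.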
6. Summary
This research project has resulted in a flexible hardware and software system that enables the rapid reverse engineering of objects typically encountered in the manufacturing arena. The 3D vision system mounted on a CNC machine tool is an alternative to expensive coordinate measuring machines. The meshing algorithm provides a means of accurately modeling complex objects in a form that is compatible with the requirements of rapid tooling and prototyping machines and CNC milling machines.
The meshing software has several advantages com-
pared to other 3D data modeling techniques.
• The data can be generated by any digitizing device and the algorithm does not depend on any structure in the data; that is, it functions with true 3D cloud data.

• The algorithm's operation is not limited by the complexity of the object's surface form.

• The optimization criteria ensure that important engineering features on an object, such as edges, are preserved in their initial "crisp" form.

• Compared to parametric surface fitting, the algorithm is much less reliant on operator interaction, such as the requirement to manually define and segment patches of surface data.
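The accuracy figures quoted in the case studies (0.50 mm for the figurine, 0.8 mm for the handset) are maximum distances from the digitized cloud to the fitted mesh. A sketch of how such an error could be computed is given below; it is illustrative, not the paper's code, and the function names are assumptions. For each cloud point, take its distance to the nearest triangle, then take the maximum over all points:

```python
import numpy as np

def point_triangle_dist(p, a, b, c):
    """Euclidean distance from point p to triangle (a, b, c)."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    q = p - np.dot(p - a, n) * n          # projection of p onto the plane

    def same_side(p1, p2, x, y):
        # p1 and p2 lie on the same side of the line x-y within the plane
        return np.dot(np.cross(y - x, p1 - x), np.cross(y - x, p2 - x)) >= 0

    if same_side(q, c, a, b) and same_side(q, a, b, c) and same_side(q, b, c, a):
        return abs(np.dot(p - a, n))      # projection falls inside the triangle

    def seg_dist(x, y):
        # distance from p to the closest point of segment x-y
        t = np.clip(np.dot(p - x, y - x) / np.dot(y - x, y - x), 0.0, 1.0)
        return np.linalg.norm(p - (x + t * (y - x)))

    return min(seg_dist(a, b), seg_dist(b, c), seg_dist(c, a))

def max_fitting_error(points, vertices, triangles):
    """Worst-case distance from any cloud point to the fitted mesh."""
    V = np.asarray(vertices, float)
    return max(min(point_triangle_dist(np.asarray(p, float), V[a], V[b], V[c])
                   for a, b, c in triangles)
               for p in points)
```

Adding the sensor's stated worst-case error to this mesh-fitting error gives the overall object-to-model error bounds reported above (0.55 mm and 0.85 mm).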
References
[1] T. Altan, B.W. Lilly, J.P. Kruth, W. König, H.K. Tönshoff, C.A. van Luttervelt, A.B. Khairy, Advanced techniques for die and mold manufacturing, Annals of the CIRP 42 (2) (1993) 707–715.
[2] M. Milroy, C. Bradley, G.W. Vickers, G1 continuity of B-spline surface patches in reverse engineering, Computer Aided Design 27 (6) (1995) 471–478.
[3] R.E. Barnhill, Representation and approximation of surfaces, in: J.R. Rice (Ed.), Mathematical Software III, Academic Press, New York, 1977, pp. 69–120.
[4] C.L. Lawson, Software for C1 surface interpolation, in: J.R. Rice (Ed.), Mathematical Software III, Academic Press, New York, 1977.
[5] B.K. Choi, H.Y. Shin, Y.I. Yoon, J.W. Lee, Triangulation of scattered data in 3D space, Computer Aided Design 20 (5) (1988) 239–248.
[6] T.P. Fang, L. Piegl, Delaunay triangulation in three dimensions, IEEE Computer Graphics and Applications 15 (5) (1995) 62–69.
[7] P. Cignoni, C. Montani, R. Scopigno, A fast divide and conquer Delaunay triangulation algorithm in E^d, Computer Aided Design 30 (5) (1998) 333–341.
[8] H. Hoppe, Surface Reconstruction from Unorganized Points, Ph.D. Thesis, University of Washington, 1994.
[9] M. Milroy, C. Bradley, G.W. Vickers, Automated laser scanning based on orthogonal cross-sections, Machine Vision and Applications 9 (1996) 106–118.
[10] G. Turk, M. Levoy, Zippered polygon meshes from range images, in: Proceedings of SIGGRAPH '94, Computer Graphics, Vol. 28, No. 3, 1994, pp. 311–318.
[11] P.J. Besl, Range imaging sensors, Research Report GMR-6090, General Motors Research Laboratories, Michigan, 1988, pp. 62–64.
[12] J.M. Fitts, High-speed non-contact X–Y–Z gauging and part mapping with Moiré interferometry, Electro-Optical Information Systems, Santa Monica, CA, 1993.
[13] J. Maver, R. Bajcsy, Occlusions as a guide for planning the next view, IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (5) (1993) 417–433.
[14] J. Weir, M. Milroy, C. Bradley, G.W. Vickers, Reverse engineering physical models employing wrap-around B-spline surfaces and quadrics, Proceedings of the Institution of Mechanical Engineers Part B 210 (1996) 147–157.
[15] P. Jacobs, Recent advances in rapid tooling from stereolithography, in: The Seventh International Conference on Rapid Prototyping, San Francisco, 1997, pp. 338–354.
Colin Bradley is an Associate Professor
in the Department of Mechanical Engi-
neering at the University of Victoria. He
teaches and performs research in the
design and manufacturing area. The
research focuses on the application of
emerging technologies to assist the
improvement of the design and manufac-
turing processes. Dr. Bradley received a
BASc from the University of British
Columbia, MSc from Heriot-Watt University and a PhD from
the University of Victoria. He has been an ASI Fellow since
1994 and won the NSERC Doctoral Prize in 1993. From 1997 to
1999, Dr. Bradley was a Senior Research Fellow at the National
University of Singapore.