Screwing Assembly Oriented Interactive Model Segmentation ...eprints.bournemouth.ac.uk › 33099 › 1 › CASA_2019_paper_15.pdf · Section 2, we provide the overview of our work

Screwing Assembly Oriented Interactive Model Segmentation in HMD VR Environment

Abstract

Although different approaches to segmenting and assembling geometric models for 3D printing have been proposed, little research has investigated model segmentation and assembly in HMD (Head-Mounted Display) VR environments for 3D printing. In this work, we propose a novel interactive segmentation method for screwing assembly in such environments to tackle this problem. Our approach divides a large model into semantic parts with a screwing interface for repeated tight assembly. With our method, non-professional users can intuitively segment models larger than a printer's workspace into several components based on a single VR handle cut and robust Boolean operations. Specifically, after a user places the cutting interface, a bounding box of the current part is computed automatically for subsequent Boolean segmentations, which results in multi-component semantic parts with planar cuts. Afterwards, the bolt is positioned with an improved K3M image thinning algorithm and is used for merging paired components with union and subtraction Boolean operations respectively. Moreover, we introduce a swept-Boolean-based rotation collision detection and location method to guarantee a collision-free screwing assembly. Experiments show that our approach provides a new interactive multi-component semantic segmentation tool, which supports not only repeated installation and disassembly but also tight and aligned assembly.

Keywords: 3D Segmentation, Virtual Reality, 3D Printing, HCI

1 Introduction

Popular digital manufacturing facilities such as 3D printers make it easy for non-professionals to convert virtual digital models into physical objects. In recent years they have been transforming the manufacturing and entertainment industries, both of which are spreading to ordinary consumers. However, only a small portion of ordinary users can afford a desktop 3D printer. The main reasons are: i) It is non-trivial for an ordinary user to create and edit a model, which requires mastering one of the available professional 3D packages; moreover, such software often only provides 2D design interfaces to assist end users in creating 3D models. ii) The small dimensions of domestic desktop 3D printers limit the sizes of the printable models (Fig. 1). Although gluing, connecting, or interlocking could solve the problem to a certain extent [1, 2, 3, 4], it is difficult to satisfy both repeated assembly and seamless tightness. As a result, this emerging problem in 3D printing has attracted more and more attention in the community.

The intuitive user interface in the VR environment provides a possible solution to the above-mentioned challenges. By integrating VR and 3D printing, we present a novel interactive model segmentation and assembly approach in VR environments for printing large models. To the best of our knowledge, this is the first work on segmenting and assembling models for 3D printing in HMD VR environments.

The main contributions of the paper include:

• A fastener-based 3D model segmentation method that supports not only repeated disassembly but also tight and aligned assembly;


Figure 1: A model exceeding the printing dimensions.

• A new VR-based user interface for segmentation, through which users can segment and assemble models according to their needs in an intuitive way;

• A new rotation collision detection and avoidance method based on a cube cage and Boolean operations, which prevents segmented components from colliding with each other during later assembly.

The remainder of this paper is organized as follows. After introducing the related work on model segmentation, especially for 3D printing, and on shape design in HMD VR environments in Section 2, we provide an overview of our work in Section 3. Then, we describe an automatic bounding box generation method in Section 4, followed by the description of model segmentation based on Boolean operations in Section 5. In Sections 6 and 7, we detail rotation collision avoidance and the bolt-nut configuration. Finally, we present experimental results in Section 8 and provide concluding remarks in Section 9.

2 Related Work

In this section, we briefly review the research efforts most related to model segmentation, especially for 3D printing, and to shape design in HMD VR environments.

3D print-oriented segmentation. 3D shape analysis and component reuse require efficient semantic 3D shape segmentation, and tremendous progress in this area has been made in the past decade [5, 6, 7, 8, 9]. With the recent popularity of 3D printing, some segmentation approaches specifically designed for decomposing 3D models have been proposed. Due to transportation requirements and the limited printing volume, large models must first be separated into small components for printing and then assembled later. In the work of [10], an optimization algorithm for the partitioning and packing of a printable model was proposed in a multi-phase level-set framework, in which no specific way of assembly is discussed. The Chopper algorithm proposed by Luo et al. [3] decomposes large objects into sub-components automatically, and then places joint rivets between the components automatically through a simulated annealing algorithm. Song et al. [4] proposed a sub-module interlocking assembly, which supports multiple assemblies and disassemblies. Their recent CofiFab system [11] incorporates 2D laser cutting and 3D printing to produce large-scale 3D models, in which the internal structure is quickly generated by laser cutting, the surface details are obtained by fine 3D printing, and the two types of components are finally assembled together. Jadoon et al. [12] presented a flexible interactive tool for partitioning 3D models, which optimized the Chopper framework to allow more segmentation freedom, while detailed assembly was not discussed.

3D model processing in immersive VR environments. Due to the development of head-mounted devices such as the Oculus Rift, HTC Vive, and other similar VR devices, more and more applications have started to use such VR systems. The adoption of immersive 3D interfaces for model processing dates back decades. The early immersive 3DM [13] and FreeDrawer [14] allow users to create polyline or spline based surfaces with simple 3D inputs. In the more recent CavePainting [15] and TiltBrush systems, users can create colorful art productions. Both “Drawing on Air” [16] and “Lift-Off” [17] study intuitive 3D curve inputs, and the latter also allows users to import a reference image. In the work of [18], visual and haptic feedback is used to provide the sensation of painting on virtual three-dimensional objects using the MAI Painting Brush++.

To assist young people with disabilities, McLoughlin et al. [19] proposed a method to create an artistic experience through virtual sculpting and 3D printing. Mendes et al. use novel mid-air metaphors to model complex 3D models with Boolean operations [20] and to select out-of-reach objects with iterative refinements [21] in virtual reality. As sketch-based modeling relieves users from tedious operations in professional packages, it has been extended into HMD environments. In the work of [22], Arora et al. analyzed various factors that could affect the human ability to sketch freely in a 3D VR environment. Giunchi et al. presented an approach to search for 3D models based on free-form sketches within a virtual environment [23]. Different from the above works, the focus of our work is to develop an immersive user interface in VR for model segmentation and assembly for 3D printing.

3 Approach Overview

As shown in Fig. 2, given an input mesh M that needs to be divided, the user first places a section P for segmentation using the dominant-hand (usually the right hand) handle in a VR environment. If the user is not satisfied with the position of P, the non-dominant-hand (usually the left hand) handle can be used to adjust it. Then, a bounding box V is automatically computed on one side of the section according to the segmentation plane C, which completely surrounds the connected part of the model on that side. After that, two components, M1 and M2, are generated with the section as the segmentation interface, and the entire M is composed of M1 and M2: M = M1 ∪ M2. They are produced with Boolean intersection/subtraction operations with M and V as input primitives (Equation 1):

M1 = M ∩ V,  M2 = M − V.  (1)
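At the voxel level, the two Boolean operations in Equation 1 reduce to plain set intersection and difference. A minimal sketch using hypothetical occupancy sets of voxel coordinates (the paper itself operates on triangle meshes, not on these sets):

```python
# Voxel-occupancy sketch of Equation 1: M1 = M ∩ V, M2 = M − V.
# M is the full model, V the bounding volume on one side of the cut.
# Occupancy is modeled as sets of integer voxel coordinates (hypothetical).

def segment(model_voxels, box_voxels):
    """Split a voxelized model by a voxelized bounding box."""
    m1 = model_voxels & box_voxels   # M1 = M ∩ V
    m2 = model_voxels - box_voxels   # M2 = M − V
    return m1, m2

# A 1x1x4 "bar" cut so that the first two voxels fall inside the box.
model = {(0, 0, z) for z in range(4)}
box = {(0, 0, z) for z in range(2)}
m1, m2 = segment(model, box)
assert m1 | m2 == model and not (m1 & m2)  # partition: M = M1 ∪ M2, disjoint
```

The partition property asserted at the end is exactly what makes the pair reusable for later bolt/nut merging.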

Once M is successfully segmented into M1 and M2, the bolt template B will be placed at the segmentation plane C for the component M1, and the screwing simulation can be previewed. At the same time, a swept volume S of M1 is generated for rotation collision detection. If S intersects with the component M2, the position and orientation of the section need to be optimized. If not, union and subtraction Boolean operations will be applied to M1 and M2 respectively, and a pair of screwable components M′i (i = 1, 2) is created.

Figure 2: System workflow of a single segmentation.

4 Automatic Bounding Box Generation

4.1 Initial Section Placement and Adjustment

For the input mesh M, as shown in Fig. 3a, the user can edit the scale of the brush and place the section P directly with the dominant-hand handle. Although the computation of the segmentation plane C is independent of the brush size, the user can tune the brush scale for convenience. In order to obtain a robust segmentation contour, the section center should be as close as possible to the intended segmentation interface. Therefore, the non-dominant-hand handle can be used to edit the position and orientation of the section P, as shown in Fig. 3b.

Figure 3: Place the section with both hands. (a) Initial placement of the section (circular purple plate); (b) section adjustment (circular red plate).

4.2 K3M-based Section Center Calculation

Both the bounding box generation and the bolt location depend on the center and the normal of the segmentation plane C, which coincides with its planar polygonal contour in 3D. Therefore, the normal vector can be obtained as the cross product of two normalized edges of its contour polygon, and its central position can be determined as follows.

The three-dimensional planar polygon is first transformed into a two-dimensional polygon and discretized into an image (Fig. 4a). Then the approximated center is calculated with the simplified K3M algorithm [24] (refer to Fig. 4b). Although the K3M algorithm has certain advantages, including retaining right angles at linear interconnections and producing a single-pixel-wide skeleton, it is mainly used to generate a single-pixel line skeleton. In this work we extend it so that it can be applied to the bolt-nut placement at the segmentation interface in our system.

Similar to onion peeling, the center of C can be obtained by gradually eroding the segmentation interface from the outside to the inside. The boundary pixels are peeled off layer by layer, and the last remaining pixel serves as the center. The original K3M algorithm requires seven steps per iteration, during which it counts the neighborhood points of the boundary pixels. It can be simplified into two phases: the image boundary points are obtained in the first step (Phase 0), and the boundary points are deleted (converted to the background) in the second step (Phase 1). Finally, there is only one pixel left in the image, which is the center.

Figure 4: Segmentation center location. (a) Segmentation interface; (b) simplified K3M.

Phase 0: Traverse all the foreground pixels in the image, and mark them as boundary points when they have 1∼7 neighboring pixels.

Phase 1: Traverse all the boundary points. When a boundary point has 2∼7 neighboring pixels, its mark is removed and it is converted into a background pixel.

Ending condition: Count all the foreground pixels in the image, and stop the iterations when only one pixel is left; otherwise, return to Phase 0.
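The two-phase peeling above can be sketched on a binary pixel set. This is a simplified stand-in for the authors' reduced K3M variant, assuming an 8-connected neighborhood:

```python
# Simplified two-phase peeling to locate the center of a discretized
# segmentation interface (stand-in for the paper's reduced K3M variant).

def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def peel_center(foreground):
    """Erode boundary pixels layer by layer until one pixel remains."""
    fg = set(foreground)
    while len(fg) > 1:
        # Phase 0: mark pixels with 1..7 foreground neighbors as boundary.
        boundary = {p for p in fg
                    if 1 <= sum(n in fg for n in neighbors(p)) <= 7}
        # Phase 1: delete boundary points that still have 2..7 neighbors.
        removable = {p for p in boundary
                     if 2 <= sum(n in fg for n in neighbors(p)) <= 7}
        if not removable:          # guard against a non-shrinking loop
            break
        fg -= removable
    return next(iter(fg))

# A filled 5x5 square erodes to its central pixel.
square = {(x, y) for x in range(5) for y in range(5)}
print(peel_center(square))  # (2, 2)
```

Unlike the full K3M pipeline, this sketch deletes all removable boundary pixels at once, which is sufficient for the convex, blob-like cross-sections used here.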

4.3 Flooding-based Bounding Box Generation

In our approach, Boolean operations are utilized to segment the initial model, and the second primitive in a Boolean operation is the half-mesh agent which consists of the surface triangles of a compact bounding box.

It is nontrivial for a user to manually place a suitable bounding box for a complex model to be segmented. Therefore, we present an automatic component bounding box creation method applied after the placement of a cut plane. The procedure takes the cut position and its normal vector as input, and performs a flooding algorithm to recursively produce a compact bounding surface, which avoids complicated user interactions.

Our automatic generation approach is inspired by the work of [25]. First, the bounding box enclosing the model M is initialized and voxelized with a preset resolution. Then, the voxels are classified into three categories: the outer voxels outside M, the feature voxels intersecting the mesh surface, and the inner voxels inside M. Finally, the bounding box V is generated by extracting the outer faces of the feature voxels.

The classification of voxels is derived from the signed distance field [26] of the voxel vertices to M. For each voxel corner c, there is a closest point p on M. When p lies on some facet of M, the normal at p can be directly calculated. Otherwise, if p is at a vertex or edge of M, an angle-weighted pseudonormal nA is applied [26]. Therefore, the sign of c can be computed by Equation 2. If all vertices of a voxel are inside M, the voxel is an inner voxel with sign value −1. If only a part of the vertices of a voxel are inside M, it is a feature voxel with sign value 0. If all vertices of a voxel are outside M, it is an outer voxel with sign value 1. Together they make up the tri-value distance field [25]:

c is outside M if nA · (c − p) > 0; inside M if nA · (c − p) < 0; on M if nA · (c − p) = 0.  (2)

Actually, the feature voxels F of the model

component M1 are the main source for producing the bounding box, and a voxel flooding algorithm is utilized to find all inner voxels and feature voxels Y of M1, from which F is selected. First, a local coordinate system is constructed by taking the center of the segmentation interface C as the origin and its normal vector as the y-axis (Fig. 5). After that, we search for inner voxels and feature voxels of the first layer by starting from the center voxel of C as a seed. This is followed by recursive searching in the x-axis and z-axis directions, from which all inner voxels and feature voxels Y1 of the first layer right above C are found and marked to avoid repeated lookups. Then, we use the inner voxels and feature voxels right above Y1 as seeds and search for inner and feature voxels recursively in the x-axis, y-axis, and z-axis directions, finding all other inner voxels and feature voxels Y2 of M1 from the remaining layers, Y = Y1 ∪ Y2. Therefore, F is finally obtained by combining the feature voxels F2 in Y2 with Y1, that is, F = Y1 ∪ F2.

Figure 5: Flooding algorithm.

Figure 6: The final bounding box.

The bounding box V is essentially the outer faces of F, which are adjacent to both an outer voxel and a feature voxel. So the extraction and generation of V can be carried out by checking the voxel values adjacent to each face of each feature voxel based on the tri-value distance field (Fig. 6).
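The corner sign test of Equation 2 and the layered flooding can be sketched together. The grid encoding is hypothetical, and the paper's two-stage layered search is collapsed here into a single breadth-first flood:

```python
from collections import deque

# Corner classification (Equation 2): the sign of n_A · (c − p) tells
# whether a voxel corner c lies outside (+1), inside (−1), or on (0) M,
# where p is the closest surface point and n_A its (pseudo)normal.
def corner_sign(c, p, n_a):
    d = sum(ni * (ci - pi) for ni, ci, pi in zip(n_a, c, p))
    return (d > 0) - (d < 0)

# BFS flooding over a tri-value voxel field (1 outer, 0 feature, -1 inner):
# collect the inner and feature voxels Y of one component, starting from a
# seed voxel at the interface center.
def flood(field, seed):
    visited, queue = set(), deque([seed])
    while queue:
        v = queue.popleft()
        if v in visited or field.get(v, 1) == 1:
            continue                     # stop at outer voxels / grid edge
        visited.add(v)
        x, y, z = v
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            queue.append((x + dx, y + dy, z + dz))
    return visited

# A corner above a surface point with normal +z is outside M.
assert corner_sign((0, 0, 1), (0, 0, 0), (0, 0, 1)) == 1

# A 3-voxel column: an inner voxel sandwiched between two feature voxels.
field = {(0, 0, 0): 0, (0, 0, 1): -1, (0, 0, 2): 0}
assert flood(field, (0, 0, 0)) == set(field)
```

Restricting the seed's flood to one side of the cut (as the paper does with its first-layer x/z search) is what keeps the result bounded to the component M1 rather than the whole model.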

5 Model Segmentation based on Boolean Operations

5.1 Model Segmentation Principle

Although users can segment the input model M at any position and in any direction, the following principles should be observed to ensure the validity of the model segmentation and the convenience of subsequent assembly:

1. Segmentation position: The segmentation position (i.e., the segmentation center) of the model M should be as close as possible to its skeleton center, preventing the model from being segmented into too many parts, which would also hinder assembly. At the same time, sections should be placed at positions with locally near-cylindrical shapes, which benefits bolt-based assembly and reinforcement.

Figure 7: Boolean operation.

2. Segmentation direction: Although the model can be segmented in any direction in principle, the two segmented components M1 and M2 should be located in different half-spaces with the segmentation interface as the boundary, to reduce the possibility of collision during screwing.

5.2 Robust Boolean Operations

Three phases of our system are based on Boolean operations: the Boolean-based segmentation of the model (Fig. 7), the use of a Boolean operation to test whether the swept volume S of one component intersects the other component for collision detection, and the integration of bolts and nuts into the segmented components in the subsequent sections.

Due to the importance of Boolean operations in our system, we chose a rather robust method for mesh intersection calculation and Boolean operations in libigl [27]. To perform the Boolean operations of two triangle meshes TriMeshA and TriMeshB, the unified “mesh configuration” [28] is first calculated to find the intersections of all triangles, and new edges and vertices are added exactly at the intersection lines. Then the cells bounded by the configured surface are marked according to their “winding number vectors”. Finally, the boundaries of the corresponding cells are extracted using the specific Boolean operation (union, intersection, etc.).

Figure 8: Screwing process simulation.
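The winding-number classification used to mark cells can be illustrated in 2D: the winding number of a point around a closed polygon is nonzero exactly when the point is enclosed. This angle-summation sketch is only an illustration of the concept, not libigl's exact-arithmetic implementation:

```python
import math

# 2D winding number by signed-angle summation: w != 0 means the point is
# enclosed by the (closed) polygon. Robust Boolean pipelines apply the same
# idea, with exact arithmetic, to label cells of the mesh arrangement.

def winding_number(point, polygon):
    px, py = point
    total = 0.0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i][0] - px, polygon[i][1] - py
        x2, y2 = polygon[(i + 1) % n][0] - px, polygon[(i + 1) % n][1] - py
        # Signed angle between consecutive edge endpoints, seen from point.
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total / (2 * math.pi))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(winding_number((1, 1), square))  # 1  (inside)
print(winding_number((3, 1), square))  # 0  (outside)
```

Cells with winding number 1 with respect to both inputs belong to the intersection; this is what lets a single arrangement serve union, intersection, and difference alike.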

6 Collision detection and optimization

6.1 Rotation simulation and collision detection

An improperly placed segmentation may cause two of the components to collide with each other when screwing. To this end, we provide a solution for rotation collision detection and optimization. Moreover, the process of screwing the bolt is displayed as a preview animation.

In order to simulate the screwing process of the components and to detect collisions, the bolt needs to be placed at the expected position temporarily. For components M1 and M2 sharing the same segmentation interface, the bolts and nuts can be assembled on either side. But in general, users tend to cut off the smaller part of the model, so we place the bolt B at the segmentation center O of M1 by default, with the normal vector of the bolt aligned with the normal vector nP of the segmentation interface. The component M1 with a bolt is denoted M′1.

The simulation of the component screwing process rotates and moves M′1 with time t forth and back along the normal of the segmentation interface, similar to screwing a bolt by hand in reality, as shown in Fig. 8.

Figure 9: Collision detection. (a) Swept volume; (b) collision region.

The swept volume S is generated by screwing M1. Using the swept volume algorithm provided in libigl [27], S is essentially the union of a moving solid object M1 [29] (Equation 3):

S = ⋃_{t∈[0,1]} f(t) M1,  (3)

where f(t) is a rigid motion of M1 over time t. Since the surface of the swept volume generated by M1 and f(t) is a piecewise-ruled surface, it cannot be represented exactly by a triangle mesh. An approximate swept volume can instead be computed based on signed distances [29] (Fig. 9a), with an offset parameter controlling how closely the exact swept volume is approximated.

Whether an intersection I between S and the other component M2 exists can be determined by a Boolean intersection operation (Equation 4), where I is the potential 3D collision region (Fig. 9b):

I = S ∩ M2.  (4)

The existence of I means that the position of the section is improper and the segmented components could not be assembled after 3D printing; therefore, the position and orientation of the section need to be optimized. If there is no collision between the segmented components during the screwing process, the bolt can be placed at the segmentation interface directly for 3D printing.
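Equations 3 and 4 can be sketched discretely: sample the screwing motion f(t), accumulate the transformed copies of M1 into an approximate swept set S, and intersect with M2. The sketch below uses 2D integer point sets and pure rotation as a hypothetical stand-in for the mesh-based swept volume:

```python
import math

# Discrete sketch of S = ∪_{t∈[0,1]} f(t)·M1 (Equation 3) and the
# collision test I = S ∩ M2 (Equation 4), on 2D integer point sets.

def swept_set(m1, steps=36):
    """Union of rotated copies of m1, sampled over a full turn."""
    s = set()
    for k in range(steps):
        a = 2 * math.pi * k / steps          # f(t): pure rotation here
        ca, sa = math.cos(a), math.sin(a)
        for x, y in m1:
            s.add((round(x * ca - y * sa), round(x * sa + y * ca)))
    return s

def collides(m1, m2):
    """Nonempty I = S ∩ M2 means the section must be re-optimized."""
    return bool(swept_set(m1) & m2)

bar = {(2, 0), (3, 0)}                       # protruding part of M1
assert collides(bar, {(0, 3)})               # swept annulus hits this point
assert not collides(bar, {(10, 10)})         # far away: no collision
```

A finer `steps` sampling plays the same role as the offset parameter of the signed-distance swept volume: both trade accuracy against computation time.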

6.2 Segmentation optimization

As mentioned above, if the collision region I exists, the position and orientation of the section P need to be optimized. We provide two optimization schemes: an intelligent one and an interactive one.

The intelligent optimization scheme focuses on optimizing the orientation of the section P. The normal vector of the section is the same as the normal vector of the segmentation interface, nP.

Figure 10: Section optimization diagram.

Figure 11: Optimized section.

As shown in Fig. 10, a four-step algorithm is employed to rectify the failed section direction:

1. Compute the line l passing through the center O of the segmentation interface with nP as its direction.

2. Find the closest point pc to and the farthest point pf from the line l in the collision region I.

3. Calculate the angle γ between the vectors Opc and Opf as the rotation angle of P, taking v = Opc − Opf as the rotation direction of P.

4. Translate the section center to the center O of the segmentation interface, and rotate the section by angle γ along v, obtaining an optimized section P′, as shown in Fig. 11.
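Steps 2–3 reduce to plain vector arithmetic: from the collision region's closest and farthest points pc, pf relative to O, derive the rotation angle γ and direction v. A sketch with illustrative 3D vectors (not the full pipeline):

```python
import math

# Sketch of steps 2-3: rotation angle γ between O→pc and O→pf, and
# rotation direction v = O→pc − O→pf, for rectifying the section P.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

def rectification(o, pc, pf):
    opc, opf = sub(pc, o), sub(pf, o)
    gamma = math.acos(dot(opc, opf) / (norm(opc) * norm(opf)))
    v = sub(opc, opf)
    return gamma, v

# Perpendicular offsets from O give a 90-degree correction angle.
gamma, v = rectification((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(round(math.degrees(gamma)))  # 90
```

In the real system, pc and pf come from the collision region I computed by the Boolean intersection of the previous section.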

Since the intelligent optimization scheme above can only avoid the current collision, meaning the segmentation with a newly optimized section may still cause a collision in other regions, the intelligent optimization can be performed iteratively until no collision occurs any more.

However, if no collision-free orientation can be found within a predefined maximum number of iterations, an interactive operation is required, and the collision positions are visualized for reference. Users can then manually place the section at a new location along the skeleton with the non-dominant-hand handle to satisfy the requirements.

Figure 12: Bolt and nut placement of a dog model. (a) Leg with a bolt; (b) body with a nut.

7 Bolt-nut configuration

Based on the segmentation interface center O, we can get the minimum distance R from O to the interface boundary. The radius r of the bolt and the nut should satisfy

r/R ∈ [α, β],  (5)

where α is used to avoid creating bolts that are too small, and β is used to avoid walls of the nut hole that are too thin, which may collapse during screwing. In practical applications, we set r/R = 2/3 in most cases. Users can adjust the value according to different requirements. In our experiments we empirically set α = 1/5 and β = 4/5, which give satisfactory results.

Attaching bolts and nuts to the components Mi (i = 1, 2) by Boolean union and subtraction operations creates a pair of screwable components (Equation 6):

M′1 = M1 ∪ B,  M′2 = M2 − B.  (6)

An example of a dog model with r/R = 1/3 is shown in Fig. 12.
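The radius constraint of Equation 5 amounts to clamping a preferred ratio into [α, β] and scaling by R. A small helper with the paper's values as defaults (the function name is hypothetical):

```python
# Bolt radius from Equation 5: r/R must stay in [alpha, beta].
# Defaults follow the paper: preferred ratio 2/3, alpha = 1/5, beta = 4/5.

def bolt_radius(R, preferred=2/3, alpha=1/5, beta=4/5):
    """Clamp the preferred radius ratio into [alpha, beta], scale by R."""
    ratio = min(max(preferred, alpha), beta)
    return ratio * R

print(round(bolt_radius(3.0), 6))                 # 2.0
print(round(bolt_radius(3.0, preferred=0.9), 6))  # 2.4  (clamped to beta)
```

The lower clamp α guards against bolts too thin to print, and the upper clamp β guards against nut-hole walls too thin to survive screwing.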

(a) Partial sequence of segmentation

(b) Segmentation interface with its center

(c) Segmented components

Figure 13: Armadillo segmentation.

8 Experiments and Discussion

To validate the effectiveness of the proposed algorithm, we developed a prototype system based on OpenVR using an HTC Vive headset. Our experiments were performed on a desktop PC with a 4.0 GHz Intel i7-6700K CPU, 8 GB of memory, and an Nvidia GTX 1070 graphics card. In addition to the dog model above, another example, the Armadillo model segmented using our system, is shown in Fig. 13. Fig. 13a shows a part of the segmentation sequence. After each pair of components is segmented, the center of the segmentation interface is calculated automatically (Fig. 13b) and is used for the placement of bolts and nuts. The components after segmentation can be inspected in MeshLab [30], as shown in Fig. 13c.

As shown in Fig. 14, a single model can be segmented multiple times as needed, and all 3D-printed models can be repeatedly disassembled and assembled. The assembled components have very tiny gaps and are firmly connected with each other. According to our tests, if the model is too small, the screw threads of the generated bolts and nuts become so fine that they may collapse easily during the screwing assembly.


(a) Dog components (b) Assembled dog

(c) Armadillo components (d) Assembled Armadillo

Figure 14: 3D printing and assembly of model components.

The larger the model is, the better the result obtained with our system, which matches the requirement of segmenting large models before printing. Since we use robust Boolean operations based on “mesh configuration” and the swept volume algorithm of the libigl library, our method is not real-time. This matters little in our interactive system, as a single Boolean operation takes only about 1∼2 seconds. Although the calculation of the swept volume may take several seconds, it is computed only once, so it has little effect on the user experience.

9 Conclusion

We have presented a VR-based segmentation and assembly approach for printing 3D models, which is suitable for dividing large models into small components and printing them separately. Pairs of bolt fasteners are generated at the segmentation interfaces, which supports repeated seamless and firm assembly. In the segmentation procedure, users wear a VR helmet providing a highly immersive experience and convenient interaction. Non-professional users can segment a 3D model on demand directly with the VR handles.

Some steps in our system are based on Boolean operations using the libigl library, which may affect the performance of our proposed approach. In order to achieve real-time performance, an approximate GPU-parallel Boolean operation [31] could be used for interactive display in a future version of our system, with the final processing of the model still using robust Boolean operations based on “mesh configuration”.

Our system also provides collision detection and optimization of the segmented components, which guarantees successful assembly of component pairs with bolts and nuts.

However, there are still some limitations in our current approach, which can be addressed in the future. First of all, to increase the robustness of the bolt assembly, a mechanical analysis could be used to determine the wall thickness and bolt diameter, preventing the bolt from breaking during the screwing process. Secondly, the optimization schemes for the section in the case of collision can be improved. For the intelligent optimization scheme, computing the swept volume takes too long; therefore, the optimal orientation and position of the section should be searched for automatically in a local neighborhood. For collision detection during screwing, we can refer to the 2D projection method of [32].

References[1] Juraj Vanek, Jorge A. Garcia Galicia, Bedrich Benes,

Radomı́r Mech, Nathan A. Carr, Ondrej Stava, andGavin S. P. Miller. Packmerger: A 3d print volumeoptimizer. Comput. Graph. Forum, 33(6):322–332,2014.

[2] Ruizhen Hu, Honghua Li, Hao Zhang, and Daniel Cohen-Or. Approximate pyramidal shape decomposition. ACM Trans. Graph., 33(6):213:1–213:12, 2014.

[3] L. Luo, I. Baran, S. Rusinkiewicz, and W. Matusik. Chopper: partitioning models into 3d-printable parts. ACM Trans. Graph., 31:1–9, 2012.

[4] P. Song, Z. Fu, L. Liu, and C. W. Fu. Printing 3d objects with interlocking parts. Computer Aided Geometric Design, 35-36:137–148, 2015.

[5] Rui S. V. Rodrigues, José F. M. Morgado, and Abel J. P. Gomes. Part-based mesh segmentation: A survey. Computer Graphics Forum, 37(6):235–274, 2018.

[6] Zhenyu Shu, Chengwu Qi, Shi-Qing Xin, Chao Hu, Li Wang, Yu Zhang, and Ligang Liu. Unsupervised 3d shape segmentation and co-segmentation via deep learning. Computer Aided Geometric Design, 43:39–52, 2016.

[7] Truc Le, Giang Bui, and Ye Duan. A multi-view recurrent neural network for 3d mesh segmentation. Computers & Graphics, 66:103–112, 2017.

[8] Li Yi, Vladimir G. Kim, Duygu Ceylan, I-Chao Shen, Mengyan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Sheffer, and Leonidas J. Guibas. A scalable active framework for region annotation in 3d shape collections. ACM Trans. Graph., 35(6):210:1–210:12, 2016.

[9] David George, Xianghua Xie, and Gary K. L. Tam. 3d mesh segmentation via multi-branch 1d convolutional neural networks. Graphical Models, 96:1–10, 2018.

[10] Miaojun Yao, Zhili Chen, Linjie Luo, Rui Wang, and Huamin Wang. Level-set-based partitioning and packing optimization of a printable model. ACM Trans. Graph., 34(6):214:1–214:11, 2015.

[11] P. Song, B. Deng, Z. Wang, Z. Dong, W. Li, C. W. Fu, and L. Liu. Cofifab: coarse-to-fine fabrication of large 3d objects. ACM Trans. Graph., 35:1–11, 2016.

[12] Aamir Khan Jadoon, Chenming Wu, Yong-Jin Liu, Ying He, and Charlie C. L. Wang. Interactive partitioning of 3d models into printable parts. IEEE Computer Graphics and Applications, 38(4):38–53, 2018.

[13] Jeff Butterworth, Andrew Davidson, Stephen Hench, and Marc Olano. 3dm: A three dimensional modeler using a head-mounted display. In Proceedings of the 1992 Symposium on Interactive 3D Graphics, SI3D '92, pages 135–138, 1992.

[14] Gerold Wesche and Hans-Peter Seidel. Freedrawer: a free-form sketching system on the responsive workbench. In VRST, pages 167–174, 2001.

[15] Daniel F. Keefe, Daniel Acevedo Feliz, Tomer Moscovich, David H. Laidlaw, and Joseph J. LaViola Jr. Cavepainting: a fully immersive 3d artistic medium and interactive experience. In Proceedings of the 2001 Symposium on Interactive 3D Graphics, SI3D, pages 85–93, 2001.

[16] D. F. Keefe, R. C. Zeleznik, and D. H. Laidlaw. Drawing on air: input techniques for controlled 3d line illustration. IEEE Trans. Vis. Comput. Graph., 13:1067–1081, 2007.

[17] Bret Jackson and Daniel F. Keefe. Lift-off: Using reference imagery and freehand sketching to create 3d models in VR. IEEE Transactions on Visualization and Computer Graphics, 22(4):1442–1451, 2016.

[18] Mai Otsuki, Kenji Sugihara, Azusa Toda, Fumihisa Shibata, and Asako Kimura. A brush device with visual and haptic feedback for virtual painting of 3d virtual objects. Virtual Reality, 22(2):167–181, 2018.

[19] Leigh McLoughlin, Oleg Fryazinov, Mark Moseley, Mathieu Sanchez, Valery Adzhiev, Peter Comninos, and Alexander A. Pasko. Virtual sculpting and 3d printing for young people with disabilities. IEEE Computer Graphics and Applications, 36(1):22–28, 2016.

[20] Daniel Mendes, Daniel Medeiros, Maurício Sousa, Ricardo Ferreira, Alberto Raposo, Alfredo Ferreira, and Joaquim A. Jorge. Mid-air modeling with boolean operations in VR. In 2017 IEEE Symposium on 3D User Interfaces, 3DUI 2017, Los Angeles, CA, USA, March 18-19, 2017, pages 154–157, 2017.

[21] Daniel Mendes, Daniel Medeiros, Maurício Sousa, Eduardo Cordeiro, Alfredo Ferreira, and Joaquim A. Jorge. Design and evaluation of a novel out-of-reach selection technique for VR using iterative refinement. Computers & Graphics, 67:95–102, 2017.

[22] Rahul Arora, Rubaiat Habib Kazi, Fraser Anderson, Tovi Grossman, Karan Singh, and George W. Fitzmaurice. Experimental evaluation of sketching on surfaces in VR. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, May 06-11, 2017, pages 5643–5654, 2017.

[23] Daniele Giunchi, Stuart James, and Anthony Steed. 3d sketching for interactive model retrieval in virtual reality. In Proceedings of the Joint Symposium on Computational Aesthetics and Sketch-Based Interfaces and Modeling and Non-Photorealistic Animation and Rendering, Expressive '18, pages 1:1–1:12, 2018.

[24] K. Saeed, M. Tabedzki, M. Rybnik, and M. Adamski. K3m: a universal algorithm for image skeletonization and a review of thinning techniques. Int. J. Appl. Math. Comput. Sci., 20:317–335, 2010.

[25] C. Xian, H. Lin, and S. Gao. Automatic generation of coarse bounding cages from dense meshes. In IEEE International Conference on Shape Modeling & Applications, pages 21–27, 2009.

[26] J. A. Baerentzen and H. Aanaes. Signed distance computation using the angle weighted pseudonormal. IEEE Trans. Vis. Comput. Graph., 11:243–253, 2005.

[27] A. Jacobson, D. Panozzo, C. Schüller, O. Diamanti, Q. Zhou, S. Koch, et al. libigl: a simple C++ geometry processing library. http://libigl.github.io/libigl/, 2017.

[28] Q. Zhou, E. Grinspun, D. Zorin, and A. Jacobson. Mesh arrangements for solid geometry. ACM Trans. Graph., 35:1–15, 2016.

[29] A. Garg, A. Jacobson, and E. Grinspun. Computational design of reconfigurables. ACM Trans. Graph., 35:1–14, 2016.

[30] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia. Meshlab: an open-source mesh processing tool. In Eurographics Italian Chapter Conference, pages 129–136, 2008.

[31] H. Zhao, C. C. L. Wang, Y. Chen, and X. Jin. Parallel and efficient boolean on polygonal solids. Visual Computer, 27:507–517, 2011.

[32] T. Sun and C. Zheng. Computational design of twisty joints and puzzles. ACM Trans. Graph., 34:1–11, 2015.