A Virtual Environment for Collaborative Assembly


Xiaowu Chen, Nan Xu, Ying Li
The Key Laboratory of Virtual Reality Technology, Ministry of Education
School of Computer Science and Engineering, Beihang University, Beijing 100083, P.R. China
chen@buaa.edu.cn

Abstract

To allow geographically dispersed engineers to perform an assembly task together, a Virtual Environment for Collaborative Assembly (VECA) has been developed to build a typical collaborative virtual assembly system. This paper presents the key parts of VECA: the system architecture, HLA-based (High Level Architecture) communication and collaboration, motion guidance based on collision detection and assembly constraint recognition, data translation from CAD to the virtual environment, and reference resolution in multimodal interaction.

1. Introduction

Virtual reality (VR) is a technology often regarded as a natural extension of 3D computer graphics with advanced input and output devices. VR has now matured enough to warrant serious engineering applications [1]. As one of the important application domains of VR, virtual assembly (VA) provides design flexibility by replacing physical objects with virtual representations of machinery parts and by offering advanced user interfaces for designing and generating product prototypes [2]. VA can be used to evaluate and analyze product assembly and disassembly during the product design stage, with the goals of reducing assembly costs, improving quality, and shortening time to market [3]. As companies and research organizations become increasingly distributed, much design, assembly, manufacturing, and analysis work requires the collaboration of geographically dispersed engineers.

(This paper is supported by the A.S.T. Fund (VEADAM), the National Natural Science Foundation of China (60503066), the National Research Fund (51404040305HK01015), the China Education and Research Grid (ChinaGrid) Program (CG2003-GA004), the National 863 Program (2004AA104280), and SIMBRIDGE.)

A collaborative virtual environment (CVE) is a computer system that allows remote users to work together in a shared virtual reality space [4]. As an important category of Computer-Supported Cooperative Work (CSCW) and VR, CVE systems have been applied to military training, telepresence, collaborative design and engineering, distance training, entertainment, and many other personal and industrial applications [5]. Because of the advantages of CVEs and the actual requirements of industry, collaborative virtual assembly (CVA) is becoming one of the research emphases of VA.

There have been many research efforts to develop virtual assembly systems [6][7][8]. A representative system is the Virtual Assembly Design Environment (VADE) [9], which allows engineers to evaluate, analyze, and plan the assembly of mechanical systems. VADE simulates inter-part interaction for planar and axisymmetric parts using constrained motions along axes/planes obtained from the parametric CAD system. In this system, direct interaction is supported through a CyberGlove.

In the field of CVA, several systems have been created. For example, the National Center for Supercomputing Applications (NCSA) and Germany's National Research Center for Information Technology (GMD) developed a collaborative virtual prototyping system over an ATM network [10]. The system integrated real-time video and audio transmissions that let engineers see other participants in a shared virtual environment at each remote site's viewpoint position and orientation. Jianzhong Mo et al. developed a virtual-reality-based software tool, Motive3D [11], which supports collaborative assembly/disassembly over the Internet, and presented a systematic methodology for disassembly relation modeling, automatic path/sequence generation, and evaluation independent of any commercial CAD system. Shyamsundar et al. developed an Internet-based collaborative product assembly design (cPAD) tool [12][13] whose architecture adopts a three-tier client/server model.

In this system, a new Assembly Representation (AREP) scheme was introduced to improve assembly modeling efficiency. The AREP model at the server side can be accessed by many designers at different locations through client browsers implemented using Java3D. However, most of the CVA systems above are based on C/S or B/S architectures, and little effort has been put into research on CVA based on a distributed architecture. The design of certain industrial products calls for a distributed virtual environment to support collaborative assembly, such as the Virtual Environment for Collaborative Assembly (VECA) presented in this paper.

VECA can build a collaborative virtual assembly system that allows geographically dispersed engineers to perform an assembly task together. VECA mainly includes HLA-based (High Level Architecture) communication and collaboration, motion guidance based on collision detection and assembly constraint recognition, data translation from CAD to the virtual environment, reference resolution in multimodal interaction, and so on.

The rest of this paper is organized as follows. Section 2 presents the system architecture of VECA. The communication and collaboration, data translation, motion guidance, and multimodal interaction modules are elaborated in sections 3, 4, 5, and 6, respectively. Section 7 covers the implementation and application of VECA, and section 8 ends the paper with conclusions and future work.

2. System Architecture

The system architecture of VECA is illustrated in Fig. 1. Once an engineer has finished designing assemblies or subassemblies using a parametric CAD system such as Pro/Engineer, he or she uses a plug-in for Pro/Engineer to translate the CAD models into triangle mesh models (Multigen OpenFlight) that include assembly constraints and geometry feature information. Other engineers download these models and can then assemble the product collaboratively in a multimodal shared VE to find design defects or to obtain a feasible assembly sequence. The system currently consists of five key parts:

1. Communication and collaboration: connects geographically dispersed nodes to form a distributed collaborative VE based on HLA.
2. Motion guidance: helps the user translate or rotate parts in the VE freely and precisely using collision detection and assembly constraint recognition.
3. Data translation: translates models and extracts information from CAD (Pro/Engineer); it is a plug-in for Pro/Engineer developed with Pro/Toolkit and the Multigen OpenFlight API.
4. Constraint manager: dynamically maintains assembly constraint information. The design of this module and a constraint-based distributed virtual assembly model were already introduced in [14][15].
5. Multimodal interaction: processes combined natural input modes, such as speech and hand gestures, in a coordinated manner.

Figure 1. System architecture of VECA: input/output devices (mouse, keyboard, CyberGlove, microphone, display, 3D shutter glasses), the virtual environment (scene graph management, multimodal interaction), and the collaborative virtual assembly modules (data translation, motion guidance, constraint manager, communication and collaboration).

3. HLA-based Communication and Collaboration

The HLA standard [16] is a general architecture for simulation reuse and interoperability developed by the US Department of Defense. The conceptualization of HLA led to the development of the Run-Time Infrastructure (RTI), software that implements an interface specification representing one of the tangible products of the HLA. Some HLA concepts and terms are introduced as follows. A federation is defined as a group of federates forming a community. Federates may be simulation models, data collectors, simulators, autonomous agents, or passive viewers. A simulation session in which a number of federates participate is called a federation execution. Simulated entities, such as tanks, aircraft, or subassemblies, are referred to as objects. An attribute, which represents object state, can be passed from one object to another. Objects interact with each other and with the environment via interactions, which may be viewed as unique events, such as the manipulation of an object or a collision between objects. Initially, an attribute is controlled by the federate that instantiated the object; however, attribute ownership may change during the execution. This mechanism allows users to co-manipulate the same object, for example. All possible interactions among the federates of a federation are defined in a so-called Federation Object Model (FOM). The capabilities of a federate are defined in a so-called Simulation Object Model (SOM).
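One HLA mechanism that is easy to make concrete is attribute ownership transfer, which is what lets several VECA users take turns manipulating the same part. The following self-contained C++ sketch models the idea with plain data structures; the type and function names (PartObject, updatePose, transferOwnership) are illustrative placeholders rather than the RTI API or VECA code, and in real HLA the transfer is negotiated through the RTI instead of being performed directly.

#include <iostream>
#include <string>

// Illustrative model of HLA attribute ownership (names are hypothetical):
// an object's attributes are updated only by the federate that currently
// owns them, and ownership can be transferred at run time, which is what
// allows several users to take turns manipulating one part.
struct Pose { double x, y, z; };          // simplified attribute set

struct PartObject {
    std::string name;
    Pose pose;              // object state (attribute)
    int  ownerFederate;     // federate that may update the attribute
};

// A federate may update an attribute only if it owns it.
bool updatePose(PartObject& part, int federateId, const Pose& newPose) {
    if (part.ownerFederate != federateId) return false;  // not the owner
    part.pose = newPose;                                  // owner publishes new state
    return true;
}

// Ownership transfer: in real HLA this is negotiated through the RTI;
// here it is reduced to a direct handover for illustration.
void transferOwnership(PartObject& part, int newOwner) {
    part.ownerFederate = newOwner;
}

int main() {
    PartObject bolt{"bolt_01", {0, 0, 0}, /*owner*/ 1};

    std::cout << updatePose(bolt, 2, {1, 0, 0}) << "\n";  // 0: federate 2 is not the owner
    transferOwnership(bolt, 2);                            // ownership changes during execution
    std::cout << updatePose(bolt, 2, {1, 0, 0}) << "\n";  // 1: now federate 2 may manipulate it
}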

The SOM is introduced to encourage reuse of simulation models [17].

DVE_FM [18] is an application framework based on the HLA interface specification. It provides the common functions of distributed interactive simulation and insulates the application from direct RTI invocation. DVE_FM reduces the complexity of developing HLA-based simulation applications, so developers can focus on the application domain rather than on the complex interoperation between the simulation application and the RTI. The HLA-based simulation framework of VECA is shown in Fig. 2: each VA node joins the federation as a federate, and each federate is developed on top of DVE_FM.

Figure 2. HLA-based simulation framework: VM federates 1 to n, each built on DVE_FM, communicate through the Run-Time Infrastructure (RTI) over TCP/IP.
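Since the DVE_FM interface itself is not detailed here, the following C++ sketch only suggests the shape of a VA federate built on such a wrapper: joining the federation, registering the parts it owns, and advancing simulation time. Every name in it (DveFmSession, joinFederation, registerObject, advanceTime) is a hypothetical stand-in, not the actual DVE_FM or RTI API.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-in for a DVE_FM-style wrapper: it only sketches the
// role such a framework plays, i.e. hiding RTI calls behind a few
// high-level operations.
class DveFmSession {
public:
    void joinFederation(const std::string& federation, const std::string& federate) {
        std::cout << federate << " joined federation '" << federation << "'\n";
    }
    void registerObject(const std::string& objectName) {
        objects_.push_back(objectName);
        std::cout << "registered object " << objectName << "\n";
    }
    void advanceTime(double step) {
        // In a real federate this would drive RTI time management and
        // deliver attribute updates / interactions to callbacks.
        std::cout << "advanced simulation time by " << step << "s\n";
    }
private:
    std::vector<std::string> objects_;
};

// One VA node of VECA acts as one federate: it joins the federation,
// registers the parts/subassemblies it owns, then runs the simulation loop.
int main() {
    DveFmSession session;
    session.joinFederation("VECA", "VA_node_1");
    session.registerObject("gearbox_housing");
    session.registerObject("input_shaft");
    for (int i = 0; i < 3; ++i)
        session.advanceTime(0.05);   // fixed-step loop for illustration
}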

During a federation execution, each federate updates the attributes of the objects (parts or subassemblies) that belong to it. VECA defines two types of interactions: CCooperationInteraction and CConstraintFeedbackInteraction. A CCooperationInteraction represents a federate's assembly/disassembly request. A federate that owns a part or subassembly may receive more than one CCooperationInteraction for the same part (subassembly) at the same time; it selects one of them according to a set of scheduling rules, which resolves the problem of competition and collaboration among multiple users. If the request satisfies the assembly constraints, the federate updates the position or attitude of the part (subassembly) and then informs the other federates through the RTI. Otherwise, the federate sends a CConstraintFeedbackInteraction to declare that the current operation is not allowed.
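As one way to picture this request handling, the self-contained C++ sketch below lets the owning federate collect competing CCooperationInteraction requests for a part, keep one of them, and either apply the move or answer with a CConstraintFeedbackInteraction. The interaction names come from the paper, but the field layout, the earliest-request-wins scheduling rule, and the constraint test are assumptions, and the RTI send/update calls are omitted.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// CCooperationInteraction: an assembly/disassembly request from some federate
// for a specific part. Field names are illustrative, not the real FOM layout.
struct CCooperationInteraction {
    int         requestingFederate;
    std::string partName;
    double      timestamp;            // when the request was issued
    double      dx, dy, dz;           // requested translation of the part
};

// CConstraintFeedbackInteraction: tells a requester its operation is rejected.
struct CConstraintFeedbackInteraction {
    int         targetFederate;
    std::string partName;
    std::string reason;
};

// Assumed constraint check: stands in for VECA's assembly-constraint test.
bool satisfiesAssemblyConstraints(const CCooperationInteraction& req) {
    return req.dz >= 0.0;   // placeholder rule for the sketch
}

// The owning federate receives several competing requests for the same part
// and keeps exactly one of them (here: the earliest request wins).
void handleRequests(std::vector<CCooperationInteraction> pending) {
    if (pending.empty()) return;
    auto chosen = *std::min_element(pending.begin(), pending.end(),
        [](const auto& a, const auto& b) { return a.timestamp < b.timestamp; });

    if (satisfiesAssemblyConstraints(chosen)) {
        // Update the part's position/attitude locally, then publish the new
        // attribute values to the other federates (RTI call omitted).
        std::cout << "apply move of " << chosen.partName
                  << " for federate " << chosen.requestingFederate << "\n";
    } else {
        CConstraintFeedbackInteraction fb{chosen.requestingFederate,
                                          chosen.partName,
                                          "violates assembly constraints"};
        // Send the feedback interaction through the RTI (call omitted).
        std::cout << "reject " << fb.partName << ": " << fb.reason << "\n";
    }
}

int main() {
    handleRequests({{2, "input_shaft", 10.2, 0, 0, 1.0},
                    {3, "input_shaft", 10.1, 0, 0, -1.0}});   // federate 3 asked first
}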

4. Motion Guidance

In VA, due to the lack of physical constraint information and the limited precision of location tracking, it is difficult for the user to control the motion of the parts being assembled precisely with current virtual reality I/O devices [19]. Therefore, it is necessary to explore a technique that helps the user manipulate the target part freely and precisely during VA.

VECA adopts motion guidance based on collision detection and assembly constraint recognition [20] to accomplish precise location of parts (subassemblies). Collision detection is used to solve the interference problem between assembled parts, while assembly constraint recognition assists in capturing the user's intention. The axis orientation constraint and the face match constraint are the two assembly constraints mainly investigated, because other assembly constraints can be converted into combinations of these two, and a pair of assembly units needs to satisfy each of them only once to form a new subassembly. To simulate the actual assembly procedure, motion guidance first recognizes the axis orientation constraint and then the face match constraint during assembly constraint recognition.

The above describes the logical sequence of a single assembly from the angle of constraint satisfaction. Fig. 3 describes the functions of motion guidance and precise location from the angle of a part's (subassembly's) motion state transitions during a single assembly (disassembly). A single assembly process for a pair of assembly units can be divided into three phases: free motion state A, the axial motion state, and free motion state B, as shown in Fig. 3. Free motion state A is the initial state of the assembly units; the axial motion state means the assembly motion unit can only move along or rotate around the orientation axis under the axis orientation constraint; and free motion state B means the assembly motion unit and the assembly base unit have composed a new subassembly and returned to free motion. Note that the initial state of assembly is free motion state A while that of disassembly is free motion state B, because disassembly is the reverse course of assembly.

During assembly, the two assembly units are first in the free motion state, where the collision detection algorithm guarantees that parts (subassemblies) do not collide with each other and likewise prevents penetration between users and between users and objects. The axis orientation constraint recognition algorithm detects whether the axis orientation constraint is satisfied between the two assembly units; if so, they perform an axis alignment operation via axial precise location and then enter the axial motion state, in which the axial collision detection algorithm guarantees interference-free axial motion and detects possible design defects. Axial collision detection extends general collision detection, which cannot distinguish between contact and collision, by examining the normals of the pieces where the collision happens. The axial motion solving algorithm maps the user's inputs to a certain length of translation along the orientation axis.
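The following self-contained C++ fragment illustrates the kind of test an axis orientation constraint recognizer can perform and how, once the axial motion state is entered, a 3D input can be reduced to a translation along the orientation axis. The angular and distance tolerances and all names are assumptions for illustration, not VECA's actual algorithm.

#include <cmath>
#include <iostream>

struct Vec3 {
    double x, y, z;
};

// Basic vector helpers used by the constraint test below.
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3   sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3   cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

// An assembly unit's orientation axis: a point on the axis plus a unit direction.
struct Axis {
    Vec3 point;
    Vec3 dir;   // assumed normalized
};

// Axis orientation constraint recognition (illustrative): the two axes are
// considered aligned when their directions are nearly parallel and the
// distance between the lines is within a capture tolerance.
bool axisOrientationSatisfied(const Axis& a, const Axis& b,
                              double maxAngleRad, double maxDist) {
    double cosAngle = std::fabs(dot(a.dir, b.dir));          // |cos| handles both directions
    if (cosAngle < std::cos(maxAngleRad)) return false;
    Vec3 w = sub(b.point, a.point);
    double dist = norm(cross(w, a.dir));                      // distance of b.point to line a
    return dist <= maxDist;
}

// Axial motion solving (illustrative): once in the axial motion state, a 3D
// input displacement is reduced to its component along the orientation axis.
double axialTranslation(const Vec3& userInput, const Vec3& axisDir) {
    return dot(userInput, axisDir);
}

int main() {
    Axis hole{{0, 0, 0}, {0, 0, 1}};
    Axis shaft{{0.002, 0.001, 0.3}, {0, 0.01, 0.99995}};
    std::cout << "aligned: "
              << axisOrientationSatisfied(hole, shaft, 0.05 /*rad*/, 0.01 /*m*/) << "\n";
    std::cout << "axial step: " << axialTranslation({0.3, 0.1, -0.5}, hole.dir) << "\n";
}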

The axis orientation constraint relief algorithm detects the user's intention to release the axis orientation constraint. The face match constraint recognition algorithm detects whether the face match constraint is satisfied between the two assembly units; if so, they are assembled together successfully and compose a new subassembly, and then enter free motion state B, in which the face match constraint relief algorithm detects the user's intention to release the face match constraint (namely the intention to disassemble). Fig. 4-Fig. 6 show an example of motion guidance.
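To complement the axis-alignment sketch above, this self-contained C++ fragment shows one plausible face match test (opposing mating faces with a small remaining gap along the axis) and the free motion A / axial motion / free motion B transition described in this section. Thresholds, names, and the relief handling are assumptions rather than VECA's implementation.

#include <cmath>
#include <iostream>

// Simplified planar mating face: unit outward normal and a signed offset
// along the assembly's orientation axis (1D is enough for this sketch).
struct MatingFace {
    double normalAlongAxis;   // +1 or -1: which way the face points on the axis
    double offsetAlongAxis;   // position of the face on the axis
};

// Face match constraint (illustrative): the faces point toward each other and
// the remaining gap along the axis is within a small capture tolerance.
bool faceMatchSatisfied(const MatingFace& a, const MatingFace& b, double maxGap) {
    bool opposing = a.normalAlongAxis * b.normalAlongAxis < 0.0;
    double gap = std::fabs(a.offsetAlongAxis - b.offsetAlongAxis);
    return opposing && gap <= maxGap;
}

// The three phases of a single assembly described in the text.
enum class MotionState { FreeA, Axial, FreeB };

// One step of the state machine: axis alignment moves A -> Axial, a satisfied
// face match moves Axial -> FreeB, and constraint relief moves back toward
// the previous state (disassembly being the reverse course of assembly).
MotionState step(MotionState s, bool axisAligned, bool faceMatched, bool reliefRequested) {
    switch (s) {
        case MotionState::FreeA:
            return axisAligned ? MotionState::Axial : s;
        case MotionState::Axial:
            if (faceMatched)     return MotionState::FreeB;
            if (reliefRequested) return MotionState::FreeA;   // axis constraint relieved
            return s;
        case MotionState::FreeB:
            return reliefRequested ? MotionState::Axial : s;  // face match relieved
    }
    return s;
}

int main() {
    MatingFace shoulder{+1.0, 0.0}, housingRim{-1.0, 0.0005};
    bool matched = faceMatchSatisfied(shoulder, housingRim, 0.001);

    MotionState s = MotionState::FreeA;
    s = step(s, /*axisAligned*/ true, /*faceMatched*/ false,   /*relief*/ false);  // -> Axial
    s = step(s, /*axisAligned*/ true, /*faceMatched*/ matched, /*relief*/ false);  // -> FreeB
    std::cout << "final state is FreeB: " << (s == MotionState::FreeB) << "\n";
}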

5. Data Translation

This module provides two functions: model translation and information extraction. The former translates the parametric model created in Pro/Engineer into a triangle mesh model (Multigen OpenFlight) that can be used in the VR environment. The latter extracts the orientation axes, match faces, and assembly constraints needed for motion guidance. The module is a plug-in for Pro/Engineer developed with Pro/Toolkit and the Multigen OpenFlight API. Th...