


IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, VOL. 22, NO. 1, JANUARY/FEBRUARY 1992 115

A Geometric Feature Relation Graph Formulation for Consistent Sensor Fusion

Y. C. Tang and C. S. George Lee, Senior Member, IEEE

Abstract- To efficiently utilize multiple sensors in intelligent tasks, a unified and reliable framework is required to represent and consistently fuse versatile sensory information. A generic framework is proposed that employs a sensor-independent, feature-based relational model, called the geometric feature relation graph (GFRG), to represent information acquired by various sensors. A GFRG consists of nodes representing 3-D geometric features and arcs denoting spatial relations between features. Sensor fusion is then accomplished by integrating multiple irregular GFRG's constructed by various sensors into a regular GFRG. In the integration process, two inherent problems must be solved. The correspondence problem deals with how to identify and fuse corresponding measurements of features from different sensors, and the other problem concerns how to maintain consistency in the network of relations after those corresponding measurements are fused. The paper presents an effective and reliable procedure, based on both geometric and topological constraints, for identifying corresponding measurements of features in the presence of sensory uncertainty, and a nonlinear programming formulation for maintaining consistency in a network of relations is proposed. The Dempster-Shafer theory of belief functions is applied to utilize topological constraints in achieving reliable identification. Optimal as well as heuristic solutions for maintaining consistency are presented. The heuristic solution was shown to have satisfactory near-optimal performance with less computational complexity. Computer simulations were conducted to verify the validity and the performance of the proposed sensor fusion framework.

I. INTRODUCTION

Sensors play an important role in extending the capability of industrial machines to an uncertain or unknown environment. Recent advances in hardware and software for physical sensors have further spurred the interest in using more and more sensors in a wide range of industrial tasks. An advantage of using multiple heterogeneous sensors is that versatile information can be extracted from sensing due to the various capabilities of the sensors. Another advantage is that uncertainty and noise embedded in sensory information can be reduced through fusion of multiple sensory information. Furthermore, cooperative utilization of various sensors to extract complementary information about an environment can accomplish the desired task more quickly and efficiently. Thus, a multifunctional and reliable integrated-sensor system can be utilized to satisfy a variety of requirements arising in assembly and manufacturing tasks.

Manuscript received September 18, 1990; revised July 1, 1991. This work was supported in part by a grant from the Ford Fund.

Y. C. Tang is with the Department of Computer Science and Engineering, School of Electrical Engineering, Arizona State University, Tempe, AZ 85287.

C. S. G. Lee is with Purdue University, West Lafayette, IN 47907. IEEE Log Number 9104107.

Research on integrated sensors has mainly focused on the areas of establishing the system model [11], [20], developing strategies for sensing tasks [9], [14], [15], and consistently fusing sensory data acquired by various sensors [1], [2], [4]-[6], [10], [21]-[23]. The system model provides the fundamental architecture of an integrated-sensor system. Sensing strategies conduct an efficient and cooperative utilization of sensors for accomplishing a specific task. Sensory data fusion generates the most consistent interpretation of an environment from similar, complementary, or contradictory sensory observations of the environment. Among these areas, sensory data fusion has received the most attention from different perspectives. Shekhar et al. [2] used the concept of "good measurement" in pruning sensory estimates of the position and orientation (i.e., pose) of an object. The best interpretation of the pose is then obtained by solving a weighted linear system of the good measurements in a least-squares sense. Luo et al. [6] presented a hierarchical framework for multisensor fusion using templates in four phases: "far away," "near to," "touching," and "manipulation." In each phase, sensory measurements are combined based on supporting evidence among these measurements. Durrant-Whyte [5] also proposed a team decision model for fusing sensory information. For different applications, Allen [21] and Stansfield [22] both implemented integrated-sensor systems using passive vision and active tactile sensors, and Flynn [23] built a structure for combining sonar and infrared sensors for mobile robot navigation.

Two major deficiencies exist in previous work. First, most existing sensor fusion approaches are applied to sensory data at a low level of abstraction. They are inadequate since sensory data processed in low-level details are unable to provide information at a higher level of abstraction. For instance, symbolic information such as geometric features of objects is obscured in low-level sensor fusion. Furthermore, due to the various input characteristics of sensors, sensory information described by low-level readings is represented in heterogeneous form. This makes it difficult to fuse sensory information obtained from different sensors.

Second, most of the previous work assumed that the set of sensory readings to be fused is always homogeneous; that is, they are measurements of an identical entity. This is not the case in real situations, since sensory readings of various entities are usually acquired simultaneously in a frame (such as from cameras), and it is necessary to classify these readings into groups of identical entities before those fusing mechanisms for homogeneous sensory measurements can be applied. This constitutes the correspondence problem that must be solved when

0018-9472/92$03.00 © 1992 IEEE



frames of sensory measurements are fused. In addition, in the presence of uncertainty, fusion of frames of sensory measurements requires consistent integration of the topological and geometric relations among the measurements in the frames. To account for this requirement, Durrant-Whyte [1], [4] has established a structured, constrained network of geometric features in which uncertain measurements of features are fused and propagated to preserve the topological relations among these measurements. However, the correspondence problem was not mentioned and has yet to be solved.

This paper presents the development of a generic framework for representing and consistently fusing frames of sensory measurements acquired by various sensors. These sensory measurements provide geometric features of 3-D objects at a higher level of abstraction. A sensor-independent, feature-based relational model, called the geometric feature relation graph (GFRG), is developed and discussed in Section II to represent sensory measurements of geometric features and their spatial relations observed on objects in an environment. This GFRG representation facilitates framewise sensor fusion, in which an irregular GFRG is constructed by each sensor to represent the frame of sensory information acquired by the sensor, and these frames of sensory information are fused by integrating multiple irregular GFRG's into a regular GFRG. In integrating irregular GFRG's, corresponding (coincident and compatible) measurements of features from different sensors need to be identified and fused, and consistent measurements of the geometric relations among features must be maintained after those corresponding measurements of features are fused. This paper presents an effective and reliable procedure for identifying coincident measurements of features in the presence of sensory uncertainty, and a nonlinear programming formulation for maintaining consistency in a network of relations is developed. The identifying procedure is established in algorithm IDENT based on both geometric and topological constraints. In the algorithm, an affinity function is employed to estimate the closeness between two probability distributions that characterize uncertain measurements of features. According to the value of the affinity function, the confidence that two measurements of features are coincident is defined as a Dempster-Shafer belief function.
The belief functions are then refined, based on knowledge of topological constraints, using Dempster's rule of combination, and the coincident measurements of features are determined based on the updated belief functions; this is discussed in Section III. The optimal solution for maintaining consistency is derived using the Newton method. To reduce the computational complexity, an efficient heuristic solution is derived with satisfactory near-optimal performance. This is discussed in Section IV. Finally, computer simulations were conducted to verify the validity and the performance of the proposed sensor fusion framework.
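The combination step mentioned above relies on Dempster's rule. As a rough illustration of that rule only (not of the paper's algorithm IDENT), the sketch below combines two basic probability assignments over a hypothetical two-element frame {C, N} (coincident / not coincident); the mass values are invented for illustration.

```python
# Hypothetical illustration of Dempster's rule of combination over the
# frame {C, N}; focal-element masses below are made up for the example.

def combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule.
    Focal elements are frozensets; mass on the empty intersection
    (conflict) is normalized away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

C, N = frozenset({"C"}), frozenset({"N"})
theta = C | N  # the whole frame (ignorance)

# Geometric evidence: mass 0.6 that two measurements are coincident.
m_geo = {C: 0.6, theta: 0.4}
# Topological evidence: mass 0.7 that they are coincident.
m_top = {C: 0.7, theta: 0.3}

m = combine(m_geo, m_top)
belief_coincident = m[C]  # combined belief exceeds either single source
```

Combining the two sources yields a belief in coincidence of 0.88, higher than either source alone, which is the qualitative effect the refinement step exploits.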

II. GEOMETRIC FEATURE RELATION GRAPH

This section presents the GFRG for representing 3-D geometric sensory information. The advantages of feature-based representation are the manipulation of symbolic information at a higher level of abstraction and a unified representation of diverse sensory data. Though there have been many definitions of features given in the literature [3], we define a geometric feature as a characteristic portion of an object that describes a local geometric property of the object, such as a vertex, an edge, or a surface. Two types of special features, the null feature and a sensor feature, are of interest. The null feature implicitly represents the world coordinate frame that is common to all sensors. A sensor feature, however, represents the local coordinate frame attached to a sensor. Each geometric feature observed by the sensor is measured with respect to this local coordinate frame. All geometric features except these two special features are referred to as normal features. Table I contains the set of normal features considered in the following discussion. They can be obtained from a taxonomy of visual structures [19] and tactile sensors [18]. It should be noted that the proposed GFRG is not necessarily restricted to the set of normal features in Table I; it is generic for representing a general class of geometric features. Objects composed of a different set of normal features can be represented by the GFRG as well.

A GFRG is essentially a directed graph, in which each node represents one 3-D geometric feature and each arc denotes the spatial relation between two features. A formal definition of a GFRG follows.

Definition 1: A GFRG is defined as a four-tuple, (V, A, F, H), with two operations φ_f and φ_h, in which (V, A) forms a directed graph with a set of nodes V and a set of arcs A, where A = A_g ∪ A_t, with A_g representing the set of geometric relations and A_t representing the set of topological relations; F is the space of geometric features, with F = M × P, where M is the space of feature types and P is the space of parametric vectors; H is the space of homogeneous transforms; and φ_f : V → F and φ_h : A_g → H are two one-to-one functions, attributing each node in V with a geometric feature in F and each arc in A_g with a homogeneous transform in H.
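Definition 1 can be sketched as a small data structure. The following is a minimal sketch, assuming numpy 4x4 arrays for homogeneous transforms; the class and field names are ours for illustration, not the paper's.

```python
# Sketch of the GFRG four-tuple (V, A, F, H) with operations phi_f, phi_h.
# Names are illustrative; the paper defines the structure only abstractly.
import numpy as np

class GFRG:
    def __init__(self):
        self.features = {}        # phi_f : node -> (feature type, parametric vector)
        self.geometric = {}       # phi_h : arc (u, v) in A_g -> 4x4 homogeneous transform
        self.topological = set()  # A_t : topological arcs, e.g., adjacency pairs

    def add_feature(self, node, ftype, params):
        self.features[node] = (ftype, np.asarray(params, dtype=float))

    def add_geometric_relation(self, u, v, H):
        H = np.asarray(H, dtype=float)
        assert H.shape == (4, 4)  # geometric relations are homogeneous transforms
        self.geometric[(u, v)] = H

    def add_adjacency(self, u, v):
        self.topological.add((u, v))

# A toy graph: the null feature (world frame) and one vertex feature.
g = GFRG()
g.add_feature("null", "NULL", [])
g.add_feature("v1", "VERTEX", [0.0, 1.0, 2.0])
g.add_geometric_relation("null", "v1", np.eye(4))
g.add_adjacency("v1", "null")
```

The two dictionaries play the roles of φ_f and φ_h; keeping geometric and topological arcs separate mirrors the A = A_g ∪ A_t split in the definition.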

In the current discussion, adjacency is employed as the only topological relation in a GFRG, and each geometric relation, on the other hand, is represented by a 4 × 4 homogeneous transform. As an example, the construction of the GFRG of an environment (in Fig. 1(a)) by a sensor is illustrated. The sensory observation of each geometric feature (e.g., the cylindrical surface s1 and the circular edge c1) in the construction of the GFRG incorporates three pieces of information (see Fig. 1(b)). They are a feature type (e.g., f_s1 and f_c1), usually described by a parametric function; a uniquely defined feature-associated coordinate frame (FACOF) (e.g., F_s1 and F_c1), with respect to which the parametric function is defined; and a parametric vector (e.g., p_s1 and p_c1) for instantiating the parametric function. The feature type is recognized from the pattern of sensory data (for example, image arrays or tactile pressure patterns) of the geometric feature. The feature-associated coordinate frame represents the pose of the geometric feature, and the parametric vector describes the shape and dimension of the geometric feature. Table I gives the description of feature types and their associated parametric vectors, and the principles of assigning the FACOF to each feature type. The geometric relation




TABLE I. Types, parametric vectors, and principles of assigning the FACOF for normal features. The recoverable entries are: NULL feature (no parametric vector; FACOF is the world coordinate frame); SENSOR feature (no parametric vector; FACOF is the sensor coordinate frame); VERTEX; LINEAR EDGE (l: length); CIRCULAR EDGE (x² + y² = r², z = 0; r: radius, θ: swipe angle; origin at the center of the circular edge); PLANAR SURFACE (n: number of vertices, v_i = (x_i, y_i), i = 1 to n; origin at the center of vertices, with the vertices of the surface on the xy-plane of the FACOF); CYLINDRICAL SURFACE (x² + y² = r²; h: height, r: radius, θ: swipe angle; origin at the axis and height center of the surface).

Fig. 1. Construction of the GFRG of an environment.

between each pair of geometric features is then derived as the homogeneous transform from the FACOF of one feature to that of another.

Fig. 1(c) shows the irregular GFRG of the environment in Fig. 1(a). An irregular GFRG contains measurements of normal features, the sensor feature, the null feature, geometric relations between normal features and the sensor feature, and the geometric relation between the sensor feature and the null feature, which is determined according to the configuration of the sensor in the world coordinate frame. In an irregular GFRG, the value of the parametric vector of each normal feature is sensor-independent because it is measured with respect to the feature's own FACOF rather than the sensor's coordinate frame. This has an obvious advantage for the integration of irregular GFRG's, since sensory measurements acquired by various sensors can be fused directly without first transforming them to a common coordinate frame. This greatly facilitates sensor fusion based on multiple irregular GFRG's.

Since the irregular GFRG representation still comprises sensor-dependent substance, such as the sensor feature and its adjacent arcs of relations, it is required to further transform an irregular GFRG into a completely sensor-independent structure. A regular GFRG can thus be generated from an irregular GFRG by deriving composite geometric relations among normal features and the null feature, and eliminating the sensor feature and its adjacent arcs of relations from the irregular GFRG. Regular GFRG's are not unique. Fig. 1(d) shows two such regular GFRG's that can be derived from the irregular GFRG in Fig. 1(c). It is also apparent that the regular GFRG constructed by a sensor is invariant for any configuration of the sensor. Therefore, the regular GFRG is a well-defined, sensor-independent representation of 3-D geometric features and relations in an environment.
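The regularization step above can be sketched as composing each sensor-to-feature transform with the world-to-sensor transform and then dropping the sensor node. This is a simplified sketch under the assumption that relations are stored as a dictionary of 4x4 transforms; the function names are ours.

```python
# Sketch of deriving composite world-frame relations from an irregular
# GFRG and eliminating the sensor node; representation is assumed.
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform for a pure translation."""
    H = np.eye(4)
    H[:3, 3] = [x, y, z]
    return H

def regularize(H_null_sensor, H_sensor_feature):
    """H_sensor_feature maps feature name -> transform w.r.t. the sensor
    frame. Returns feature name -> transform w.r.t. the world (null)
    frame, with the sensor node composed away."""
    return {f: H_null_sensor @ H for f, H in H_sensor_feature.items()}

# Sensor sits 1 m along world x; feature v1 is 2 m along sensor x.
H_ws = translation(1.0, 0.0, 0.0)
regular = regularize(H_ws, {"v1": translation(2.0, 0.0, 0.0)})
# Composite relation places v1 at 3 m along world x.
assert np.allclose(regular["v1"][:3, 3], [3.0, 0.0, 0.0])
```

Because the sensor transform is composed into every relation and then discarded, the resulting relations no longer mention the sensor, which is why the regular GFRG is invariant to the sensor's configuration.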

When multiple sensors are employed to observe an environment, an irregular GFRG is constructed by each sensor. To obtain a consistent interpretation of the environment, the multiple irregular GFRG's must be integrated. This integration process, together with the GFRG representation model, constitutes a generic framework for sensor fusion. The block diagram of this sensor fusion framework is illustrated in Fig. 2. In this framework, low-level sensory information acquired by each sensor is processed, and the geometric features in the environment are obtained and represented by an irregular GFRG. In achieving consistent integration of irregular GFRG's, two inherent problems are identified. To combine multiple observations of a geometric feature taken by different sensors, corresponding measurements of the feature from different irregular GFRG's must be identified and fused. This constitutes the correspondence problem. After corresponding


Fig. 2. Schematic diagram of the generic framework for multisensor fusion.

measurements of features are fused, conflicting measurements of homogeneous transforms in the irregular GFRG's may be present. The consistency of the network of relations therefore needs to be maintained. This constitutes the problem of maintaining consistency. The correspondence problem is discussed in Section III, followed by the investigation of maintaining consistency in Section IV.

III. CORRESPONDENCE PROBLEM

We shall first explore the correspondence problem with an example. In Fig. 3(a), two objects are observed by two stereo vision sensors and one tactile sensor. The irregular GFRG's constructed by each sensor are given in Fig. 3(b). Two types of corresponding measurements, coincident and compatible measurements, are identified. Coincident measurements are complete measurements of one geometric feature that are acquired by more than one sensor due to overlapped sensing areas. For instance, in Fig. 3(b), the pairs of measurements of v2, v8, and e12 taken by different sensors (linked by solid lines) are sets of coincident measurements of those features. Compatible measurements, on the other hand, are incomplete and disparate measurements of one geometric feature that are taken by different sensors due to occlusions, sensing incapabilities, and/or spatial restrictions. In Fig. 3(b), the pairs of measurements of e9 and e5 (linked by dashed lines) are sets of compatible measurements of those features. Since each compatible measurement describes only partial information about the object, the fusion of these compatible measurements will render a better description for recognizing the object. The correspondence problem therefore deals with how to identify and fuse coincident as well as compatible measurements of features from different frames of sensory measurements.

Fig. 3. An example of GFRG-based sensor fusion. (a) Multiple sensory observations. (b) Coincident and compatible measurements of features. (c) Maintenance of consistency.

In general, the identification and fusion of compatible measurements of features are more difficult and require common geometric knowledge as well as additional domain-specific knowledge regarding the objects. In the remainder of this section, we shall explore how to identify coincident measurements of features only. Fusion of coincident measurements thereafter is not discussed but can be achieved using existing fusion approaches [10]. Although a complete solution of the correspondence problem is not given in this paper, we elaborate the simpler problem of coincident features with consideration of sensory uncertainty in order to obtain reliable identification. These results are substantial and will be applied to future work that deals with compatible measurements of features in general.

A. Geometry

Based on the geometric information comprised in sensory measurements, ideally a set of measurements of normal features is coincident if and only if the contents of feature types in these measurements are identical with one another, and the contents of parametric vectors and FACOF's are also identical with one another. The comparisons of feature types and parametric vectors measured at different sensors are straightforward since their observations are sensor-independent. However, sensory measurements of FACOF's in irregular GFRG's are sensor-dependent, and comparisons of FACOF's require transforming these measurements of FACOF's with respect to a common coordinate frame. In this development, for simplicity, the null feature coordinate frame is chosen to be the common coordinate frame for this transformation.

Fig. 4. Basic operations of homogeneous transforms. (a) Compounding: H13 = H12 H23. (b) Inverting: H21 = H12^-1. (c) Compound-inverting: H23 = H12^-1 H13. (d) Merging: the fused estimate of H12 is obtained from two measurements of H12.

Due to the existence of uncertainty in real-world applications, the identification of coincident measurements of features must be able to handle uncertain sensory measurements of parametric vectors and FACOF's. In transforming measurements of FACOF's with respect to the common null feature for comparison, the uncertainties associated with the measurements of FACOF's (homogeneous transforms) must be propagated appropriately under the compounding operation (Fig. 4(a)). In addition, there are other basic operations (Fig. 4(b) to (d)) that need to be applied to sensory measurements of homogeneous transforms in the GFRG-based integration process. The propagation of uncertainties when these basic operations are applied to measurements of homogeneous transforms has been analyzed in [8]. These results will be employed in this paper for the development of the sensor fusion framework.
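For the noise-free case, the first three basic operations of Fig. 4 can be sketched directly on 4x4 matrices. The uncertainty propagation analyzed in [8] is not reproduced here; this is only a sketch of the deterministic operations, with function names of our choosing.

```python
# The basic operations of Fig. 4 on (noise-free) homogeneous transforms.
import numpy as np

def compound(H12, H23):            # Fig. 4(a): H13 = H12 H23
    return H12 @ H23

def invert(H12):                   # Fig. 4(b): H21 = H12^-1
    R, t = H12[:3, :3], H12[:3, 3]
    H21 = np.eye(4)
    H21[:3, :3] = R.T              # inverse of a rotation is its transpose
    H21[:3, 3] = -R.T @ t
    return H21

def compound_invert(H12, H13):     # Fig. 4(c): H23 = H12^-1 H13
    return invert(H12) @ H13

# A 90-degree rotation about z plus a translation, as a test transform.
H12 = np.array([[0.0, -1.0, 0.0, 1.0],
                [1.0,  0.0, 0.0, 2.0],
                [0.0,  0.0, 1.0, 3.0],
                [0.0,  0.0, 0.0, 1.0]])
assert np.allclose(compound(H12, invert(H12)), np.eye(4))
```

The closed-form inverse avoids a general matrix inversion by exploiting the rigid-body structure of the transform, which also matters when covariances are later propagated through these operations.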

We now consider how to identify coincident measurements of features based on uncertain geometric information. Suppose each uncertain measurement of a parametric vector or a FACOF (in terms of the equivalent pose vector of the homogeneous transform) is characterized by a multivariate probability distribution, with its mean vector representing the measurement and its covariance matrix describing the uncertainty in the measurement. To distinguish between uncertain measurements of geometric features is then a problem of discriminating between multivariate distributions based on a metric function defined on the distribution space. A suitable metric function for discriminating between multivariate distributions is the distance function proposed by Matusita [7], which is formulated as

d(p_1, p_2) = [ ∫_{R^k} ( [p_1(x)]^{1/2} − [p_2(x)]^{1/2} )² dx ]^{1/2}   (1)

where p_1(x) and p_2(x) are the density functions of two k-dimensional multivariate distributions defined on the space R^k. This distance function can be alternatively expressed in terms of the affinity function [7], given by

ρ(p_1, p_2) = ∫_{R^k} [ p_1(x) p_2(x) ]^{1/2} dx   (2)

with

d²(p_1, p_2) = 2(1 − ρ(p_1, p_2)).   (3)

This affinity function can be interpreted as a measure of the closeness between two multivariate distributions, with the property that 0 ≤ ρ(p_1, p_2) ≤ 1 and ρ(p_1, p_2) = 1 when the two distributions p_1 and p_2 are identical. As we represent each uncertain measurement of parametric vectors and FACOF's by a certain probability distribution, the coincidence between measurements of features can be determined based on the affinity function measured on the corresponding distributions.

For the circumstances where uncertain measurements of geometric features are characterized by multivariate normal distributions, the affinity function in (2) can be further derived as follows. Let μ_1 and μ_2 be the measurements of two parametric vectors (or FACOF's) characterized by the multivariate normal distributions p_1 and p_2 with covariance matrices Σ_1 and Σ_2, respectively. The affinity function between p_1 and p_2, according to (2), is then given by [7]

ρ(p_1, p_2) = 2^{k/2} |Σ_1|^{1/4} |Σ_2|^{1/4} |Σ_1 + Σ_2|^{−1/2} exp[ −(1/4)(μ_1 − μ_2)^T (Σ_1 + Σ_2)^{−1} (μ_1 − μ_2) ]   (4)

where |Σ_1| and |Σ_2| denote the determinants of the covariance matrices Σ_1 and Σ_2, respectively, and the superscript "T" denotes the transpose operation of a vector/matrix. Note that in distinguishing between uncertain measurements of FACOF's, each measurement of a FACOF derived with respect to the null feature through the compounding operations may not satisfy the assumed normality of distributions. In this case, the affinity function in (4) still serves as a good measure of closeness between any two probability distributions based on their means and covariance matrices only. The distinguishing property of this affinity function is apparent in two extreme cases. That is, if Σ_1 = Σ_2 = Σ, the affinity function in (4) becomes

ρ(p_1, p_2) = exp[ −(1/8)(μ_1 − μ_2)^T Σ^{−1} (μ_1 − μ_2) ]   (5)

and if μ_1 = μ_2, the affinity function in (4) becomes

ρ(p_1, p_2) = 2^{k/2} |Σ_1|^{1/4} |Σ_2|^{1/4} |Σ_1 + Σ_2|^{−1/2}.   (6)

In (5), when the covariance matrices are identical, the closeness between two distributions is decided by the difference between the mean vectors, and in (6), when the mean vectors are identical, it is determined by the similarity of the covariance matrices.
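The closed-form Gaussian affinity (4) and its limiting cases are easy to implement. The sketch below is written from the reconstructed equations (4)-(6), so treat it as an illustrative implementation under that reconstruction rather than the paper's verbatim expression; the function name is ours.

```python
# Closed-form affinity (4) for two multivariate normal measurements,
# with sanity checks against the limiting cases (5) and rho = 1.
import numpy as np

def affinity_normal(mu1, S1, mu2, S2):
    """rho(p1, p2) for N(mu1, S1) and N(mu2, S2), per the reconstructed (4)."""
    k = len(mu1)
    S = S1 + S2
    pref = (2.0 ** (k / 2.0)) * (np.linalg.det(S1) ** 0.25) \
           * (np.linalg.det(S2) ** 0.25) / np.sqrt(np.linalg.det(S))
    d = np.asarray(mu1, dtype=float) - np.asarray(mu2, dtype=float)
    return pref * np.exp(-0.25 * d @ np.linalg.solve(S, d))

mu = np.zeros(2)
S = np.eye(2)
# Identical distributions: rho = 1 (perfect coincidence).
assert abs(affinity_normal(mu, S, mu, S) - 1.0) < 1e-12
# Equal covariances: (4) reduces to (5), exp(-(1/8) d^T S^-1 d).
rho = affinity_normal(np.array([1.0, 0.0]), S, mu, S)
assert abs(rho - np.exp(-1.0 / 8.0)) < 1e-12
```

A thresholding rule on this value (closer to 1 meaning more likely coincident) is the kind of geometric evidence the identification procedure converts into a belief function.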

B. Topology

In addition to the geometric information, the knowledge of topological constraints can be used to help identifying coinci- dent measurements of features and improving the reliability of the identification procedure in the presence of sensory uncertainty. For the set of normal features in Table I, the topological constraints for identification can be addressed in three cases.

1) With reference to Fig. 5(a), if two measurements of edges (el and ez) in different irregular GFRG’s have equivalent feature types (linear or circular) and identical contents of parametric vectors, and the measurements of adjacent vertices are coincident in pairs (adjacency is

Page 6: A geometric feature relation graph formulation for consistent sensor fusion

120 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, VOL. 22, NO. 1, JANUARYIFEBRUARY 1992

shown by dashed lines in Fig. 5) , then the measurements of edges are coincident. On the other hand, if two measurements of edges in different irregular GFRG’s are coincident, the measurements of adjacent vertices are coincident in pairs.

2) With reference to Fig. 5(b), two measurements of cylin- drical surfaces (SI and s2) in different irregular GFRG’s are coincident if the measurements of one pair of cir- cular edges adjacent to different cylindrical surfaces ( e.g., ell and ezl) are coincident. On the contrary, if two measurements of cylindrical surfaces in different irregular GFRG’s are coincident, the measurements of adjacent circular edges are coincident in pairs.

3) With reference to Figures 5(c), define an edge set as the measurements of a set of adjacent linear edges that forms the complete or partial contour of a planar surface (e.g., {ell, e12,. . . , el,}). Any two measurements of edges, each from an edge set in a different irregular GFRG, can form a hypothesized pair. Also define that any two hypothesized pairs of measurements in two edge sets are in a coherent order if the number of adjacent edges between the measurements of edges in one edge set is identical to that in the other edge set (e.g., (e11,ezl)

and (e12, e22) are in a coherent order). Since two distinct planar surfaces can have at most one edge in common, if two hypothesized pairs of measurements in two edge sets are coincident in a coherent order, the two measurements of planar surfaces that are adjacent to the edge sets are coincident. Instead, if two measurements of planar surfaces in different irregular GFRG’s are coincident and each of them is adjacent to an edge set, the edge measurements in the two edge sets are coincident in pairs in a coherent order. If we define a vertex set as the measurements of a set of vertices that continuously decides the complete or partial contour of a surface, similar results can be concluded for the case of vertex sets.

Our next step is to apply the above topological constraints to enforce reliable identification of coincident measurements. In the following derivation, based on uncertain geometric information, the confidence in that two measurements of features are coincident is defined as a Dempster-Shafer’s belief function [16], [17], whose value is computed using the affinity function given in (4). A knowledge fusing mechanism is then provided to refine the derived belief functions based on the topological constraints using Dempster’s rule of com- bination. This mechanism provides an environment in which knowledge about coincidence measurements obtained from different geometric information can be propagated and fused.

Suppose f1 and fz are two uncertain measurements of features in different irregular GFRG’s with identical feature types. Each measurement fi, i = 1,2, contains the uncertain measurements of parametric vector and FACOF characterized by the multivariate distributions p , , and p f , % , respectively. Let f1 - f2 denote the proposition that the measurements f1 and f2 are coincident. A belief function defined on the frame of discernment including propositions f1 - f2 and f l N f2 is

1

Fig. 5. Identification of coincident measurements of features based on knowledge of topological constraints.

given by the basic probability assignment

m u 1 - f 2 ) = TP(P,l,Pr,Z) + (1 - T)P(Pf,l,Pf,2) (7)

where the affinity metric p is given by (4) and 0 5 T 5 1 is the weighting factor. Note that 0 5 m(f1 N f2 ) 5 1 for any measurements f l and f ~ . For the case of vertex, no parametric vector is concerned and the basic probability assignment is

In a knowledge fusing mechanism, the topological con- straints provided in cases (1) to (3) can be applied to refine the derived belief functions for coincident measurements. To reduce the complexity of the fusing mechanism, this is per- formed at two levels of updating: the vertex-edge level and the edge-surface level. At the vertex-edge level, belief functions associated with coincident measurements of vertices and edges are updated using the topological constraints provided in case (1). At the edge-surface level, belief functions associated with coincident measurements of edges and surfaces are updated according to cases (2) and (3).

Fig. 5(a) shows the update of belief functions at the vertex- edge level. Let el be the measurement of an edge and ‘ull and 1112 be the measurements of the edge’s adjacent vertices in one irregular GFRG. Similarly, e2, wpl, and 1122 are the measure- ments of the same edge and its adjacent vertices obtained in another irregular GFRG. Suppose from the uncertain geometric information comprised in these measurements, we have the belief functions given by ml(’u11 - 021) = x1,m2(1112 -

simply defined as P ( P ~ J , P ~ , z ) .

Page 7: A geometric feature relation graph formulation for consistent sensor fusion

TANG AND LEE: GEOMETRIC FEATURE RELATION GRAPH FORMULATION 121

U Z Z ) = %m3(u12 - 2121) = 23,m4(u11 - U Z Z ) = 2 4 , and mo(e1 - ez) = y. For convenience, the graph in Fig. 5(a) is defined as the vertex-edge graph, G,,, which contains nodes of vertex and edge measurements, dashed links of adjacency, and solid links of coincidence attributed with the above belief functions. A number of functions are defined on G,,, l ( x J ) denotes the link of coincidence with which the belief xJ is attributed. L( G,,) contains the links of adjacency and the links Qf coincidence in G,,. K(G,,) is the factor of contradiction of G,,, containing the total belief associated with contradictory propositions that are asserted by the belief functions in G,, . The vertex-edge graph G,, is noncoherent in the sense that it may comprises contradictory belief functions. For example, the propositions of coincidence asserted by the beliefs 2 1 and 2 3 in G,, are contradictory. Let y‘ and 2,’ be the updated beliefs from y and z,, respectively, for i = 1 , 2 , 3 , 4 . Based on the topological constraint in case (l), the proposition of coincidence e l - e2 can be implied from the propositions ull - 1 1 2 1 ~ ~ 1 2 - u22,u12 - 1121, and m. Hence using Dempster’s rule of combination, the product of the beliefs associated with the above propositions, i.e., x122(1- z3)(l - 24), is derived as part of the belief in the proposition e l - e2. Similarly, the product x3x4(l - q ) ( 1 - 2 2 ) can be derived as a partial belief in the proposition el - e2. The belief y is then refined by these partial beliefs as

Y’ =y + (1 - y)[21z2(1 - 23)(1 - 2 4 )

+ 2324(1- x i ) ( 1 - 22)]/(1 - K,O,) (8.a)

where K:, = K ( G v e ) is a factor of contradiction. By similar reasoning, the other belief functions can be updated by

2 1 ‘ = [xi(1 - K;,) + (1 - zi)Yzz(l - 23)(1 - z4)]

2 2 ’ = [ ~ ( 1 - Kt,) + (1 - zz)~z i ( l - 23)(1 - Q)] / (I - K,Oe) (8b)

/(I - K,Oe) (8c)

where K:, = K(Gve - { l (x , ) } ) , i = 1,2, are factors of contradiction. For the graph G,, in Fig. S(a), the factors of contradiction are generated by

topological constraints in case (2) , the beliefs in G,, can be updated by

Y’ = Y + (1 - Y>[(Zl + 2 2 - 21z2)(1- 23)(1 - 241

+ ( 2 3 + 2 4 - X3X4)(1 - 21)(1 - 22)1/(1 - e,) (9a)

/ (I - K,o,) (9b)

/ (I - K 2 C ) ( 9 4

21’ = [xi(1 - K:,) + (1 - 2 1 ) X z ( l - 23)(1 - 2 4 ) ]

22’ [xz(1 - K:,) + (1 - 2 ~ ) 2 1 ( 1 - 23)(1 - 2 4 ) ]

where K,“, = K ( G e C ) and K:, = K(Gec - { l ( X a ) } ) , i = 1,2, are the factors of contradiction that are given by (8dH8f) with the substitution of ue by ec. Likewise, 23‘ and 2 4 ‘ can be achieved from the symmetry of G,,, and the results can be extended to the cases where multiple edge-cylindrical-surface graphs are overlapped.

The second situation of updating at the edge-surface level with planar surfaces is more complicated. For this, the edge- planar-surface graph is defined as a graph containing the measurements of two planar surfaces and their adjacent linear edges, links of adjacency, and links of coincidence attributed with belief functions. A simple case in Fig. 5(c) is first investigated where the edge-planar-surface graph, G:,, is co- herent without contradictory belief functions. This coherence is satisfied when the links of coincidence in the graph connect measurements of edges in pairs in a coherent order. Based on the topological constraints in case (3), the coincident measurements of surfaces can be evidenced by coincident measurements of edges in a coherent order and vice versa. The belief functions in G:, are therefore updated by

n. \

Due to the symmetry of G,,, the beliefs x3 and 2 4 can be updated by interchanging the subscript/superscript 1 with 3 and 2 with 4 in (8b), (8c), (se), and (8f). The equations of updating when two or more vertex-edge graphs are overlapped can be obtained from the generalization of the above results.

At the edge-surface level of updating, we are concerned with two different situations where cylindrical and planar surfaces are involved. In the first situation, measurements of cylindrical surfaces and their adjacent circular edges are considered for coincidence. These measurements of features and associated belief functions for coincidence are similarly represented by the edge-cylindrical-surface graph, G,,, in Fig. 5(b). Note that the graph G,, is also noncoherent. According to the

I

for i = 1,2 , . . . , n. The factor of contradiction is zero for (loa) and (lob) due to the coherence of the graph G:,. The term inside the curly brackets in (loa) is recognized as the partial belief of coincidence for refining the belief y that is implied by the beliefs other than y in G:,. Similarly, the term inside the curly brackets in (lob) is the partial belief for refining x, that is supported by the beliefs other than x, in GE,.

Next, we consider the general edge-planar-surface graph, G e p , which may contain contradictory beliefs. Fig. 5(d) shows the G,, with generally unequal numbers of edge measurements ( nl # n2) in two irregular GFRG’s and complete links of coincidence between each pair of edge measurements. A coherent subgraph of G,, can be defined as a subgraph of

Page 8: A geometric feature relation graph formulation for consistent sensor fusion

122 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, VOL. 22, NO. 1 , JANUARYFEBRUARY 1992

G,,, which is also a coherent edge-planar-surface graph. Consequently, a set of orthogonal coherent subgraphs, G & k , k = 1 ,2 , . . . , m, of Gep can be determined such that each Gzp,k is a coherent subgraph of G,, as well as no other coherent subgraph of G,, can be a supergraph of G$>, except itself, and the union of G & k , k = 1,2, . . . , m, constitutes the graph Gep. An example of the orthogonal coherent subgraphs of G,, for the case of n1 = n2 = 3 is given in Fig. 5(e). It can be observed that for any belief x, in Gep, there exists a unique orthogonal coherent subgraph G & k such that Z(x,) E L(Gzp,k). Accordingly, two beliefs x, and xJ in G,, are coherent if and only if the links of coincidence Z(x,) and l ( x J ) are contained in the same orthogonal coherent subgraph of Gep. Otherwise, x, and xJ are contradictory beliefs.

Based on the topological constraint in case (3), the partial belief of coincidence for refining the belief y in G,, can only come from the coherent beliefs that are comprised in the same orthogonal coherent subgraph of G,, . Therefore, the belief y is updated by

with

Q k = {jll(.j) E L(Gep) l ( x j ) 6 L(GEp,k)}l k = 1 , 2 , ... , m (lib)

containing the indices of the links of coincidence that are not contained in G & k , and KZ, = K(Gep) is the factor of contradiction that can be found to be

where N is the total number of the links of coincidence except l(y) in G,,. In (l la), b k is the partial belief of coincidence that is implied by the coherent beliefs in G",,,. On applying the previous results derived for coherent edge-planar-surface graphs, the partial belief b k can be given by

nk r nk 1 nk

b k = 1 - (1 - x j q ) ] - n ( 1 - x j p ) ( l l d ) p=l n=Ln#P p=l

where x jp , p = 1,2, . . . , nk, are the nk coherent beliefs in G&. Similarly, the belief xi, i = 1 ,2 , . . . , N , in G,, can be updated by

j E R k

when 1(x2) E L(G&!) for some unique k . In (l le), b,,k is the partial belief of coincidence for refining the belief xi that is supported by the coherent beliefs in G & k . Likewise, this partial belief can be given by

- n where x J P , p = 1 , 2 , . . . , nk, are the nk coherent beliefs in G ; p 3 k .

C. Algorithm for Identifiting Coincident Measurements With the formulation of belief functions in (7) from uncer-

tain geometric information and the fusion of belief functions in (8H11) based on the topological constraints, an algorithm has been established for effectively and reliably identifying all coincident measurements of features.

Algorithm IDENTqidentification Of Coincident Measure- ments of Features): Let S = { S I , S2, . . . , s,} be a set of p sensors and I? = {rl, r2,. . . , I?,} be the set of p irregular GFRG's generated by the sensors, respectively. Then all coincident measurements of geometric features in r can be identified in the following steps.

S1: For each feature type f, do steps S2 and S3. S2: Determine all the measurements of features of type f

from I?. For i = 1 ,2 , . . .p, let V, be the set of the nodes in I?, with measurements of features' of type f . A p-partite connected graph G f = (U, E ) can be constructed with the set of nodes U = VI U V2 U. . . U Vp and the set of edges E = { ( U k , V I ) I U k E K , 211 E

S3: For each edge (link of coincidence) in G f , which is adjacent to two measurements of features, f l and f2, compute the belief m(f1 - f2) according to (7) as the weight associated with the edge.

S4: Let GF be the union of all G f . Update the weights associated with edges in GF using the knowledge fusing mechanism given in ( 8 H l l ) .

S5: Given a threshold of belief, E , eliminate all edges in GF with weights less than E . For each connected component in G F , find the maximally matched cliques in the component that contains maximum number of nodes and the complete subgraph induced is maximum- weighted. Also determine the disjointly matched cliques from the component, if any, that are cliques disjoint with one another as well as disjoint with the maximally matched clique.

S6: The set of measurements in the maximally matched clique as well as the set of measurements in each disjointly matched clique found in step S5 are identified as coincident.

V,, i # j } .

End IDENT. The deduction of the maximally matched clique discovers

the largest set of feature measurements in the component that is most likely to be coincident in the sense that the total belief in the coincidence of these measurements is maximal. Other cliques can also be identified for providing coincident measurements of features as long as they do not overlap with the maximally matched clique. A simple example of iden- tifying coincident measurements of features using algorithm IDENT is given in Fig. 6. Fig. 6(a) shows that an object is observed by four sensors. The vertices and edges of the

Page 9: A geometric feature relation graph formulation for consistent sensor fusion

TANG AND LEE: GEOMETRIC FEATURE RELATION GRAPH FORMULATION 123

must be resolved. Previously, maintenance of consistency was conceptually discussed by Durrant-Whyte [4] in his formulation of a directed constrained graph. Here, we propose a two-step procedure for resolving the inconsistency in a network of measurements of relations. In the first step, the compromise between conflicting measurements of relations is achieved by the fusion of these measurements using existing sensory fusing mechanisms [lo]. Then in the second step, measurements of relations in the network are modified to be consistent with this compromise optimally based on the assumed normal distributions of the uncertain measurements of the relations. As an example, in Fig. 3(c) the compromise between the conflicting HwlHll and Hw2H21 on separate chains is obtained as the fusion of HwlHll and Hw2H21. Then the measurements of relations on each chain are modified

Sensor 4

It should be noted that very often more than one instance ( 4

Fig. 6. An example of identifying coincident measurements of features using algorithm INDENT. (a) The object. (b) Links of coincidence. (c) Update of beliefs. (d) Matched cliques.

of inconsistency may occur at the Same time. F~~ example, in Fig. 3(c) the chains of relations, (Hwl, HII), (HW1,H12),

(HW1,H13), and (Hw1,H14), in the irregular GFRG rl object are denoted by e, and v,, respectively. By executing steps S2 and S3 in algorithm IDENT for the feature types vertex and edge, links of coincidence between measurements of features and their associated beliefs are given in Fig. 6(b). Measurements of features taken by the ith sensor (1 5 i 5 4) are denoted by superscript i. For simplicity, without changing the result, only links of coincidence with significant amounts of belief are illustrated in Fig. 6(b). Continuing in step S4, the associated beliefs are updated in Fig. 6(c). With the threshold of belief [ given by 0.25 in step S5, edges with beliefs less than 0.25 are eliminated in Fig. 6(d), and the coincident measurements of features in each connected component are determined by the maximally matched clique consisting of squared nodes.

Algorithm IDENT is effective and robust against sensory noises based on the knowledge fusing mechanism. In the update of belief functions, contradicting or noncoherent belief functions due to sensory noises are depressed as they are not supported by the topological constraints while belief functions that satisfy proper topological constraints are raised. As a result, undesired links of coincidence caused by sensory noises are easily rejected by a threshold of belief, and the possibility of determining incorrect coincident measurements due to sensory noises is greatly reduced. We shall further illustrate this robust performance by computer simulations in Section V.

Iv. MAINTENANCE OF CONSISTENCY

After solving the correspondence problem, a network of geometric relations is formed (see Fig. 3(c)). Due to the fusion of corresponding measurements, konflicting measurements of relations may be present in the network. For example, in Fig. 3(c), an instance of inconsistency happens among the homogeneous transforms Hwl, H11, Hw2, and H21 when HwlHll # Hw2H21 after the coincident measurements of feature 212 are fused into U;. To integrate and achieve consis- tent measurements of the geometric relations, this confliction

overlap on the first stage Hwl, and each of the chains has to compromise with a chain of relations in another irregular GFRG. Hence, measurements of relations need to be modified to reach proper compromises simultaneously. In the following derivation, we begin with the case of single instance of inconsistency, and then extend the results to the case of multiple instances of inconsistency.

A. Single Instance of Inconsistency Fig. 7(a) describes the problem of modifying measurements

of relations for reaching a compromise in an n-stage chain of arbitrary features and relations. Suppose each relation from feature fi to feature f,+l in the chain is represented by a relation vector and the true value of the vector occurs with a probability distribution function P, with a mean vector r, and a covariance matrix C,. Typically, r, is given by the sensory measurement of the relation and C, specifies the uncertainty of the measurement. It is assumed that each relation is independently distributed. A binary operator, 8, is defined on the relations such that r, 8 r,+l describes the composite relation from feature fi to f,+2. Let s, denoting the composite relation from feature fl to f,+l, be a compromise with which the measurements of relations must be consistent. Then an instance of inconsistency happens in the chain of relations when rl 8 r 2 8 . . . 8 r , # s. The optimal solution for resolving this inconsistency is to find the most likely true values of the relations, which satisfy the requirement of consistency, as the modified measurements of the relations. That is, find a new set of n relations, rT,rz,. . . ,rE, which maximizes the joint distribution function

Page 10: A geometric feature relation graph formulation for consistent sensor fusion

124 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, VOL. 22, NO. 1, JANUARYlFEBRUARY 1992

where R; and R; are the rotation matrices generated from the orientation vectors v; and vf, respectively, and S is the rotation matrix generated from the orientation vector U. Note that in (19) the orientation variables are essentially confined to three independent constraints since the rotation matrix has only three degrees of freedom. This is more clear if we express (18) and (19) by a 6 x 1 vector function of r; and rf,

(a) bh(r; , r ; ) = [ @-l(R;%) R ; p ; + p ; - q ] - U = o (20)

where W1 is the inverse transform from a rotation matrix to its corresponding row, pitch, and yaw rotation angles. Consequently, the nonlinear programming for maintaining consistency becomes minimizing the objective function in (16) subject to the nonlinear constrains in (20). The necessary condition for the optimal solution of this problem is

(b) Fig. 7. Formulations of maintenance of consistency. (a) General case.

(b) GRFG case with multiple instances of inconsistency.

Assuming that each P, is normally distributed, the objective function in (12) can be simplified into a quadratic form

n

C{r:TC,lr: - 2r:T~,1r,} (14) 2=1

which must be minimized subject to the nonlinear constraint given in (13). For the relations of homogeneous transforms that are comprised in GFRG’s, each relation in (13) can be expressed as an equivalent 6 x 1 pose vector. That is, r: = ( P : ~ , u : ~ ) for z = 1 , 2 , . . . ,n and s = (qT,uT)T where p:, q are position vectors and U:, U are orientation vectors of row, pitch, and yaw angles [13]. The composition of relations in (13) is hence expressed by the multiplication of homogeneous transforms

T

T ( r ; ) T ( r ; ) . . . T(r:) = T(s) (15)

where T(r , ) denotes the equivalent homogeneous transform from the pose vector r,. Since an instance of inconsistency in irregular GFRG’s is always confronted in two-stage chains of homogeneous transforms, we focus on the case when n = 2. The problem of maintaining consistency is thus formulated as a nonlinear programming that minimizes

2

f ( r ; , r f ) = x{r:TC;’r : - 2r:TC,’r,} (16) Z = 1

I subject to the constraint

T(r;)T(r;) = T ( s ) .

Decomposing the multiplication of homogeneous transforms in (17) into rotation matrices and position vectors, the con- straint equation in (17) is split into position and orientation constraints that must be satisfied simultaneously:

R;P; + P ; = q

V f ( r ; , r ; ) + XTVh(r;,r;) = o (21) h(r7, r ; ) = o (22)

where X is a vector of Lagrange multipliers and Of and V h are the gradients of the functions f and h, respectively. Using the Newton method, r; and rf in (21) and (22) can be solved by an iterative algorithm [12]. The Newton method requires the computations of the gradients as well as the Hessian matrices of f and h, which can be derived explicitly or implicitly from (16) and (20), respectively.

Since the optimal solution requires a large amount of computations and its performance depends on the rate of convergence of the iterative algorithm, a heuristic solution that provides a fast but suboptimal solution to the nonlin- ear programming has been developed. As the computational complexity of the optimal solution mainly comes from the nonlinearity of the rotation matrices R; and Rg, a heuristic solution is proposed that utilizes the linearization of R; and R; by assuming differential changes of the orientation vectors v; and vf from u1 and 02, respectively. The orientation constraint in (19) becomes a set of overconstraining linear equations after the linearization of R; and R;. The orientation vectors U; and vf are therefore solved independently in (19) by With these solutions of U; and vf, the objective function in (16) is minimized subject to the position constraint in (18) for achieving the position vectors p ; and p; . Since the position and orientation constraints are satisfied sequentially, the heuristic solution is suboptimal; it is, however, noniterative and requires less computations. A more detailed description of the sequential optimization procedure is discussed next.

Assume that the inconsistency between T(rl)T(r2) and T(s ) is small enough and the consistency can be “recovered” by the differential changes of orientation vectors. That is, v: = U, + AV, for i = 1,2, where U, = ( ~ , , B , , T ~ ) ~ and AV, = (Aa,, AB,, AY,)^ is a vector of differential orientation angles. Using the approximations sin(8 + AO) M sin(O)+cos(O)AB and cos(8+A8) M cos(8) -sin(O)AO when A8 NN 0, and neglecting all the second and higher-order terms of the differential orientation angles, the rotation matrices R; and R; in (18) and (19) can be approximated by

Page 11: A geometric feature relation graph formulation for consistent sensor fusion

TANG AND LEE: GEOMETRIC FEATURE RELATION GRAPH FORMULATION

where the components of Da = [ d i k l a X 3 are given by

= -catsptAPz - SatCpzAat (24a) d;, = -sa%sp,APa + catCptAya (24b) 4, = -Cp,AP, (244 4 2 = (Cat SP% c7, + sa% s , )AY% + CO, CO, Sy, AA

- (SCYtS&S7, + ca,c7t)Aa, ( 2 4 4 4 2 = (SatS~tCy, - Ca,sy,)A~a + SazCptS,A@a

+ (Ca, sp, S7% - sat C T t )Aaz (24e) 4 2 = CP, c7, AYZ - sp, sy, w z (249 4 3 = (-CatS~zSyt + SatCyz)A~a + C~,CO~C~,APZ

+ (Ca&, - sa%sptc7t)Affz (24g) 4 3 = -(Sa, SO, sy" + ca, c y z )A-Y, + sat c p z c7* apt

+ (G%sp*C,* + sa%sy,)Aff , (24h) dj3 = - cp, s, AY, - sp, cy, A h (24i)

where Se sin(0) and CO cos(0). The elements of the matrix D, are linear combinations of the differential angles. Substituting (23) into (18) and (19), we obtain, respectively,

Dip; + Rip; + P; q (25)

(26) A

RlD2 + DlR2 S - R1R2=W.

In deriving (26), we neglect the second-order matrix D I D 2 since all its components are second-order terms of the differ- ential orientation angles.

Using the vector AV, of differential orientation angles and the approximations in (23), the nonlinear rotation matrix constraint in (19) is linearized into (26). The position vector constraint in (18), however, becomes quadratic in (25) after linearization. Due to the linearization, the variables of differ- ential orientation angles in (19) are overconstrained by a set of nine linear equations in (26). The optimal solution to (26) can be achieved by the least-square solution of the set of linear equations in (26 . Let the jkth component of D, be denoted by dik = ( U J k ) AV,, 2 = 1,2, where a,\ is a 3 x 1 constant vector that can be easily found from (24aH24i). The set of linear equations in (26) is then given by

F y = e

4

where F =

h 3 3 1 T h12 h13 h2l h22 h23 h31 h32 9 1 1 g12 g13 g21 g22 923 -931 932 g33

(27)

y = [ ""'1 and e = [ zi]. (28) 4 AV2

F is a 9 x 6 coefficient matrix with 3 3

g j k = Ti1UFk, hjk = ~ U ~ ~ T ~ ~ , for j = 1 ,2 ,3 , k 1 1 , 2 , 3 1=1 1=1

(29) where rj'[ is the j l th component of R1 and rFk is the llcth component of R2. e is a 9 x 1 vector with wi being the ith

125

1x3 row vector of the matrix W in (26). The least-square solution of (27) can be found to be

y = (FTF)- lFTe (30)

where (FTF)-'FT is the pseudoinverse of F. Once the least-square solutions of the differential orientation

angles are obtained, they are substituted into the objective function in (16) and into the position vector constraint in (25). The problem left for solving p; and p ; then becomes a quadratic programming that can be easily solved by intro- ducing Lagrange multipliers and solving for a set of linear equations [ 121. Because the iterative computations of first- order gradients and second-order Hessian matrices are not required in the heuristic solution, the amount of computations needed in the heuristic solution is greatly reduced.

Computer simulations have been conducted to verify the performance of the heuristic solution as compared to the optimal solution. The heuristic solution is close to the optimal one when the degree of inconsistency on the two-stage chain of relations is small. The degree of inconsistency is defined as

where rc is the composition of the chain of relations that must be modified to be consistent with the compromise s. Different values of K for a two-stage chain of relations are simulated. The performance of the heuristic solution is evaluated based on its ratio of deviation, c, from the optimal solution. That is,

where r ; and r ; denote the optimal solutions and rl' and r2' denote the heuristic solutions of the relations on the chain. For each value of K , the optimal and the heuristic solutions of the relations are computed for a total of 250 random two-stage chains of relations. Table I1 shows the near- optimal performance of the heuristic solution. The second to fourth columns give the values of € 1 , € 2 , and E averaged more than 250 samples for each instance of K . The fifth to seventh columns then provide the percentage of the samples for which E is less than 1%, 5%, and lo%, respectively. For example, heuristic solutions with E less than 5% can be achieved for 96.80% of the samples when K = 0.5%. Note that the near-optimal performance degrades when the degree of inconsistency increases.

B. Multiple Instances of Inconsistency Next, we consider the case in which several instances of

inconsistency occur at the same time. A general description of such occasion is given in Fig. 7(b). The optimal and a heuristic solutions for resolving multiple instances of inconsistency can be derived in a similar way as in single instance of inconsistency. This resolving procedure is briefly summarized below.

Suppose a set of m overlapped two-stage chains of relations is given in Fig. 7(b). The true values of the relations occur with independent normal distributions Pi with mean vector ri and

Page 12: A geometric feature relation graph formulation for consistent sensor fusion

126 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, VOL. 22, NO. 1, IANUARYIFEBRUARY 1992

TABLE I1 NEAR OITIMAL PERFORMANCE OF THE HEURISTIC SOLUTION FOR MAINTENANCE OF CONSISTENCY ILLUSTRATED BY COMPUTER SIMULATIONS

Degree of Deviation in Deviation in Averaged Percentage of Percentage of Percentage of Consistency 1st Link 2nd Link Deviation Samples with Samples with Samples with

0.5% 1.43% 1.30% 1.37% 76.00% 96.80% 98.80% 1.0% 2.35% 2.42% 2.38% 41.20% 91.60% 96.40% 2.0% 5.93% 5.77% 5.85% 6.80% 77.20% 88.40% 3.0% 9.01% 8.61% 8.81% 0.40% 62.80% 82.00% 4.0% 10.16% 10.84% 10.50% 1.20% 51.20% 76.40%

IC € 1 €2 € F < 1 % E 5 5% F 5 10%

covariance matrix C, for z = 0 , 1 , . . . , m. The task is to find a set of new relations r:, z = 0 , 1 , . . . , m, which maximizes the joint distribution function

Po(~G)PI(~;). . + Pm(r&) (33)

(34)

subject to

rT, @ r : = s,,z = 1 ,2 , . . . ,m

where s, is the composite relation with which the ith chain of relations must be consistent. In our case, the relations are represented by pose vectors, i.e., r , = ( p , ' , ~ , ' ) ~ , i = 0 ,1 , ..., m, and s, = (qT,uy)T, z = 1 , 2 , . . . ,m, and the binary operator @ is interpreted as the multiplication of homogeneous transforms. The problem of resolving inconsis- tency is therefore formulated as a nonlinear programming that minimizes

    \sum_{i=0}^{m} (r_i^* - r_i)^T C_i^{-1} (r_i^* - r_i)              (35)

subject to

    R_0^* p_i^* + p_0^* = q_i
    R_0^* R_i^* = S_i,    i = 1, 2, \ldots, m                          (36)

where S_i is the rotation matrix generated from the orientation vector v_i.
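The composition constraint in (36) can be checked numerically. The following is a minimal Python sketch, assuming a roll-pitch-yaw parameterization of the orientation part of a pose vector r = (p, u); the function and variable names are illustrative, not from the paper:

```python
# Sketch of the consistency constraint r_0 (+) r_i = s_i in (36),
# with poses r = (p, u): position p and roll-pitch-yaw angles u (assumed).
import numpy as np

def rpy_to_rot(u):
    """Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll) from u = (roll, pitch, yaw)."""
    r, p, y = u
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def compose(r0, ri):
    """r0 (+) ri: composition of two (p, u) poses via homogeneous transforms."""
    p0, u0 = r0
    pi, ui = ri
    R0, Ri = rpy_to_rot(u0), rpy_to_rot(ui)
    q = R0 @ pi + p0          # position part of (36)
    S = R0 @ Ri               # rotation part of (36)
    return q, S

def is_consistent(r0, ri, si, tol=1e-9):
    """Check whether the two-stage chain r0 (+) ri agrees with the composite relation si."""
    q, S = compose(r0, ri)
    qi, vi = si
    return np.allclose(q, qi, atol=tol) and np.allclose(S, rpy_to_rot(vi), atol=tol)
```

Each chain that fails this check contributes one instance of inconsistency to be resolved by the optimization above.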

Similarly, the optimal solution to (35) subject to (36) can be obtained by the Newton method, while a heuristic solution is derived using the differential orientation vectors \Delta u_i = (\Delta\alpha_i, \Delta\beta_i, \Delta\gamma_i)^T, i = 0, 1, ..., m. In the heuristic solution, the nonlinear programming problem of (35) and (36) is simplified to minimize

    \sum_{i=0}^{m} (r_i^* - r_i)^T C_i^{-1} (r_i^* - r_i)              (37)

subject to

    D_0 p_i^* + R_0 p_i^* + p_0^* = q_i                                (38)
    R_0 D_i + D_0 R_i = S_i - R_0 R_i,    i = 1, 2, \ldots, m          (39)

where the components of D_i are given by (24a)-(24i). The constraint in (39) now contains a set of 9m equations in 3(m + 1) unknowns. By using the same approach, the least-squares solutions of the differential orientation vectors \Delta u_i in (39) can be found. After substituting \Delta u_i into (37) and (38), the problem left for obtaining the position vectors p_i^* (3(m + 1) unknowns) is confined to the minimization of a quadratic function subject to a set of 3m linear equations.
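The final step, minimizing a quadratic function subject to linear equality constraints, has a closed-form solution via the KKT system. A generic sketch under the assumption of a positive-definite weight matrix W standing in for the inverse covariances; the names are illustrative:

```python
# Sketch: min (x - x0)^T W (x - x0)  subject to  A x = b,
# solved in closed form from the KKT (Lagrangian stationarity) system.
# W plays the role of the inverse covariance weights (assumed positive definite).
import numpy as np

def constrained_quadratic_min(W, x0, A, b):
    """Return the minimizer of (x-x0)^T W (x-x0) subject to A x = b."""
    n, m = W.shape[0], A.shape[0]
    # KKT system: [[2W, A^T], [A, 0]] [x; lam] = [2W x0; b]
    K = np.block([[2 * W, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([2 * W @ x0, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]          # discard the Lagrange multipliers
```

For example, projecting x0 = 0 onto the constraint x1 + x2 = 2 with W = I yields the nearest feasible point (1, 1).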

V. SIMULATION RESULTS

Computer simulations were conducted to verify the proposed GFRG-based sensor fusion framework. In the simulation, the object in Fig. 8(a) is observed by three sensors from different perspectives, as illustrated in Fig. 8(b). The portion of the object observed by each sensor is outlined in Fig. 8(b). The vertices, edges, and surfaces of the object are denoted by v_i, e_i, and s_i, respectively. The sensory measurements of these features are simulated from the CAD model of the object perturbed by random noise with a normal distribution, and their uncertainties are characterized by a given covariance matrix. We selected the belief weighting factor T in (7) as 0.1 and the threshold of belief \epsilon in algorithm IDENT as 0.2.

An irregular GFRG of the object is generated by each sensor. According to algorithm IDENT, measurements of features in these irregular GFRG's are connected by links of coincidence, some of which are exhibited in Fig. 9(a). The belief functions associated with these links of coincidence are partially listed in Table III; they are initially derived from uncertain geometric information (second column) and refined according to topological constraints at the vertex-edge level (third column) and at the edge-surface level (fourth column). It can be seen that the identification process is effective and robust against sensory noise, as the spurious links of coincidence caused by sensory noise are surpassed by those supported by the topological constraints. Subject to the threshold of belief, links of coincidence with insufficient beliefs are eliminated, and the maximally and disjointedly matched cliques of coincident measurements of features are determined. Some of these results are illustrated in Fig. 9(b), with determined coincident measurements connected by solid lines.
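The refinement of beliefs by topological constraints rests on Dempster's rule of combination. Below is a minimal sketch over a two-element frame {coincident, not coincident} plus the ignorance set Theta; the paper's actual mass assignments over links are richer than this illustration:

```python
# Dempster's rule of combination for two mass functions over the frame
# {C, N} (coincident / not coincident) plus the ignorance set Theta.
# A minimal illustration; masses are given as dicts over 'C', 'N', 'Theta'.
def dempster_combine(m1, m2):
    """Combine two mass functions; conflicting mass (C vs N) is normalized away."""
    combined = {'C': 0.0, 'N': 0.0, 'Theta': 0.0}
    conflict = 0.0
    # intersection table for the focal elements; C and N are disjoint
    meet = {('C', 'C'): 'C', ('C', 'Theta'): 'C', ('Theta', 'C'): 'C',
            ('N', 'N'): 'N', ('N', 'Theta'): 'N', ('Theta', 'N'): 'N',
            ('Theta', 'Theta'): 'Theta'}
    for a, ma in m1.items():
        for b, mb in m2.items():
            if (a, b) in meet:
                combined[meet[(a, b)]] += ma * mb
            else:                      # C intersect N is empty: conflicting mass
                conflict += ma * mb
    k = 1.0 - conflict                 # normalization constant
    return {s: v / k for s, v in combined.items()}
```

Combining a geometric support of 0.6 for coincidence with an independent topological support of 0.5, for instance, raises the combined belief in coincidence to 0.8, mirroring how weakly supported links fall below the threshold while mutually supported links are reinforced.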

The identified coincident measurements of features are fused in the simulation: parametric vectors are averaged and FACOF's are merged by Kalman filtering. Fig. 10 shows the network of geometric relations after the fusion of coincident measurements. The consistency of the network is then maintained using the optimal solution proposed in Section IV. As a result, measurements of geometric relations in the network are integrated. The performance of this integration is evaluated by the sensor recovery ratio defined by \psi = |r_f - r_0| / \sigma_r. In this definition, r_0 is the ideal pose vector of a geometric feature from the CAD model, which is then perturbed by random noise with deviation \sigma_r to simulate the sensory measurement of the pose vector, and r_f is the measurement of the pose vector after the integration. Clearly, the smaller \psi is, the better the measurements of relations are integrated.


TABLE III
THE BELIEFS ASSOCIATED WITH THE LINKS OF COINCIDENCE IN FIG. 9(a)

(Columns: Link of Coincidence; Initial Belief; Belief Updated at the Vertex-Edge Level; Belief Updated at the Edge-Surface Level. Links eliminated by the threshold of belief are marked with a dash.)


TABLE IV
REDUCTION OF UNCERTAINTIES IN SENSORY MEASUREMENTS OF GEOMETRIC RELATIONS THROUGH MAINTENANCE OF CONSISTENCY IN COMPUTER SIMULATIONS

         Set 1  Set 2  Set 3  Set 4  Set 5  Set 6  Set 7  Set 8  Set 9  Set 10  Average
\psi_p    0.26   0.35   0.69   0.38   0.74   0.46   0.63   0.56   0.71   0.35    0.51
\psi_o    8.02   6.43  11.26   5.78  13.54   8.40  10.46  14.20  16.66   7.91   10.23

Table IV shows the evaluation of \psi averaged over all geometric features for positional uncertainty (\psi_p = |[r_f]_p - [r_0]_p| / \sigma_p) and orientational uncertainty (\psi_o = |[r_f]_o - [r_0]_o| / \sigma_o), respectively. The averaged values of \psi_p and \psi_o were taken for each of 10 sets of random noise, as illustrated in Table IV. The low values of \psi_p show that maintaining consistency in the proposed sensor fusion framework has a significant effect in recovering the correct positions of geometric features from noisy sensory measurements.
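The recovery ratios can be computed directly from the definitions above. A small Python sketch with illustrative names, assuming 6-D pose vectors whose first three components are position and last three orientation:

```python
# Sensor recovery ratio psi = |r_f - r_0| / sigma, evaluated separately for
# the position and orientation parts of a 6-D pose vector (assumed layout).
import numpy as np

def recovery_ratio(rf, r0, sigma):
    """psi for one (sub)vector of a pose: deviation after integration over noise scale."""
    return np.linalg.norm(np.asarray(rf) - np.asarray(r0)) / sigma

def pose_recovery_ratios(rf, r0, sigma_p, sigma_o):
    """Return (psi_p, psi_o) for a pose split into position rf[:3] and orientation rf[3:]."""
    psi_p = recovery_ratio(rf[:3], r0[:3], sigma_p)
    psi_o = recovery_ratio(rf[3:], r0[3:], sigma_o)
    return psi_p, psi_o
```

A ratio below 1 means the integrated measurement is closer to the ideal pose than the scale of the injected noise, which is the behavior reported for \psi_p in Table IV.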

VI. CONCLUSION

A sensor-independent, feature-based geometric feature relation graph (GFRG) for representing the geometric features and their spatial relations in an environment was presented. Based on the GFRG, a sensor fusion framework was proposed in which multiple irregular GFRG's constructed by various sensors can be integrated into a regular GFRG as a consistent interpretation of the environment. Two major problems, the correspondence problem and the maintenance of consistency, were addressed and solved in the integration process. For the correspondence problem, algorithm IDENT was established to identify coincident measurements of features in the presence of sensory uncertainty based on both geometric and topological constraints. This algorithm takes advantage of topological constraints for identification through the proposed knowledge-fusing mechanism using the Dempster-Shafer theory of belief functions. The maintenance of consistency in a network of relations was formulated as a nonlinear programming problem. The optimal solution of this problem was obtained using the Newton method. A fast but suboptimal heuristic solution, shown to have near-optimal performance, was also proposed. Computer simulations were conducted to verify the performance of the proposed sensor fusion framework. It was shown that algorithm IDENT is effective and robust against sensory noise, and that maintaining consistency has a significant effect in recovering the correct positions of geometric features from noisy sensory measurements. Applications of this sensor fusion framework can be found in sensor-based tasks such as robotic


Fig. 8. Environment for computer simulations. (a) The object. (b) Sensing configurations.

Fig. 9. Identification of coincident measurements of features in computer simulations.

assembly and recognition of objects where multiple sensors are used. Intelligent recognition of objects employing this GFRG-based fusion framework is being conducted [8].

Fig. 10. Network of geometric relations after fusion of coincident measurements of features in computer simulations.

A limitation exists, however, in the application of the currently developed fusion framework: the current framework requires complete sensory measurements of features. When a number of features observed on objects are occluded and incomplete, the fusion framework cannot be effectively applied. More work is therefore required to extend the current fusion framework to the identification and fusion of compatible measurements of features, which are unexplored in the foregoing discussions. The results presented in this paper provide a foundation for this future research.

REFERENCES

[1] H. F. Durrant-Whyte, "Consistent integration and propagation of disparate sensor observations," in Proc. 1986 IEEE Int. Conf. Robotics Automat., San Francisco, CA, Apr. 1986, pp. 1623-1628.
[2] S. Shekhar et al., "Sensor fusion and object localization," in Proc. 1986 IEEE Int. Conf. Robotics Automat., San Francisco, CA, Apr. 1986, pp. 1623-1628.
[3] M. J. Wozny, Ed., Geometric Modeling for CAD Applications. Amsterdam: North-Holland, 1988.
[4] H. F. Durrant-Whyte, "Uncertain geometry in robotics," IEEE J. Robotics Automat., vol. 4, pp. 23-31, 1988.
[5] —, Integration, Coordination and Control of Multi-Sensor Robot Systems. Norwell, MA: Kluwer Academic, 1988.
[6] R. C. Luo, "Dynamic multi-sensor data fusion system for intelligent robots," IEEE J. Robotics Automat., vol. 4, no. 4, pp. 386-396, 1988.
[7] K. Matusita, "A distance and related statistics in multivariate analysis," in Multivariate Analysis. New York: Academic, 1966, pp. 187-200.
[8] Y. C. Tang, "Integrated sensors in robotic assembly tasks," Ph.D. dissertation, School Elec. Eng., Purdue Univ., West Lafayette, IN, Nov. 1990.
[9] C. K. Cowan and P. D. Kovesi, "Automatic sensor placement from vision task requirements," IEEE Trans. Pattern Anal. Machine Intell., vol. 10, pp. 407-416, May 1988.
[10] R. C. Luo and M. G. Kay, "Multisensor integration and fusion in intelligent systems," IEEE Trans. Syst., Man, Cybern., vol. 19, no. 5, pp. 901-931, Sept. 1989.
[11] T. Henderson and E. Shilcrat, "Logical sensor systems," J. Robotic Syst., pp. 169-193, 1984.
[12] D. G. Luenberger, Linear and Nonlinear Programming. Reading, MA: Addison-Wesley, 1984.
[13] R. P. Paul, Robot Manipulators: Mathematics, Programming, and Control. Cambridge, MA: MIT Press, 1981.
[14] W. E. L. Grimson, "Sensing strategies for disambiguating among multiple objects in known poses," IEEE J. Robotics Automat., vol. RA-2, no. 4, pp. 196-213, Dec. 1986.
[15] S. A. Hutchinson, R. L. Cromwell, and A. C. Kak, "Planning sensing strategies in a robot work cell with multi-sensor capabilities," in Proc. 1988 IEEE Int. Conf. Robotics Automat., Philadelphia, PA, 1988, pp. 1068-1075.
[16] G. Shafer, A Mathematical Theory of Evidence. Princeton, NJ: Princeton Univ. Press, 1976.
[17] A. P. Dempster, "A generalization of Bayesian inference," J. Royal Statist. Soc., ser. B, vol. 30, pp. 205-247.
[18] A. D. Berger, "On using a tactile sensor for real-time feature extraction," Master's thesis, Carnegie-Mellon Univ., Pittsburgh, PA, Dec. 1988.
[19] D. Nitzan, "Three-dimensional vision structure for robot applications," IEEE Trans. Pattern Anal. Machine Intell., vol. 10, pp. 291-309, May 1988.
[20] T. C. Henderson, W. S. Fai, and C. Hansen, "MKS: A multisensor kernel system," IEEE Trans. Syst., Man, Cybern., vol. SMC-14, no. 5, pp. 784-791, 1984.
[21] P. K. Allen, "Integrating vision and touch for object recognition tasks," Int. J. Robotics Res., vol. 7, no. 6, pp. 15-33, Dec. 1988.
[22] S. A. Stansfield, "A robotic perceptual system utilizing passive vision and active touch," Int. J. Robotics Res., vol. 7, no. 6, pp. 138-161, Dec. 1988.
[23] A. M. Flynn, "Combining sonar and infrared sensors for mobile robot navigation," Int. J. Robotics Res., vol. 7, no. 6, pp. 5-14, Dec. 1988.

C. S. George Lee (S'71-S'78-M'78-SM'86) received the B.S. and M.S. degrees in electrical engineering from Washington State University in 1973 and 1974, respectively, and the Ph.D. degree from Purdue University, West Lafayette, IN, in 1978.

In 1978-1979, he taught at Purdue University, and in 1979-1985, at the University of Michigan. Since 1985, he has been with the School of Electrical Engineering, Purdue University, where he is currently a Professor of Electrical Engineering. His current research interests include computational algorithms and architectures in robotics, intelligent multirobot assembly systems, and neural networks.

Dr. Lee was an IEEE Computer Society Distinguished Visitor in 1983-1986, the Organizer and Chairman of the 1988 NATO Advanced Research Workshop on Sensor-Based Robots: Algorithms and Architectures, and the Secretary of the IEEE Robotics and Automation Society in 1988-1990. He is Vice-President of Technical Affairs of the IEEE Robotics and Automation Society, a Technical Editor of the IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, an Associate Editor of the International Journal of Robotics and Automation, a co-author of Robotics: Control, Sensing, Vision, and Intelligence (McGraw-Hill), and a co-editor of Tutorial on Robotics (Second Edition) (IEEE Computer Society Press). He is a senior member of the IEEE, and a member of Sigma Xi and Tau Beta Pi.

Y. C. Tang received the B.S. degree in 1983 from National Taiwan University, Taiwan, R.O.C., the M.S. degree in 1986 from the University of Florida, Gainesville, FL, and the Ph.D. degree in 1990 from Purdue University, West Lafayette, IN, all in electrical engineering.

Since 1990 he has been a faculty research member with the Center for Advanced Research in Transportation and the Department of Computer Science and Engineering at Arizona State University. His research interests are in the areas of intelligent multisensor systems, artificial intelligence, computer vision, geometric model-based reasoning, and sensor-based robotics.