
Separate-Bias Kalman Estimator with Bias State Noise

M. B. IGNAGNI

Abstract-The problem of estimating a set of dynamic states and a set of slowly varying bias-like states employing a decoupled Kalman estimator approach is addressed. A decoupled estimator structure, suitably modified from that developed by Friedland in dealing with the constant-bias case, is shown to have the potential of essentially optimal performance when the bias vector undergoes limited random variation.

Manuscript received October 21, 1988; revised March 20, 1989. The author is with the Honeywell Systems and Research Center, Minneapolis, MN 55418. IEEE Log Number 8933237.

INTRODUCTION




A unique procedure for estimating the dynamic states of a linear system in the presence of a set of constant but unknown bias states, employing two separate, decoupled Kalman estimators (one for each set of states), was proposed by Friedland in [1]. An alternate derivation of the decoupled Kalman estimator structure, given by Ignagni in [2], addressed the more general case where the dynamic and bias states are initially correlated.

The usefulness of the decoupled Kalman estimator structure is, in its original form, clearly limited since in many applications the bias states will undergo some random variation with time. The present work addresses the class of problems in which the bias vector, although not strictly constant, experiences only limited random variation. When such is the case, a suitably modified decoupled Kalman estimator is possible that has the potential of essentially optimal performance. The resultant decoupled estimator structure provides both increased computational efficiency and improved stability relative to the generalized Kalman estimator, and also lends itself naturally to a fully partitioned and thus computationally efficient U-D implementation [3].

STATEMENT OF THE PROBLEM

The problem of interest is taken to be defined by the discretized equations

x_n = A_n x_{n-1} + B_n b_{n-1} + ξ_n    (1)

b_n = b_{n-1} + β_n    (2)

y_n = H_n x_n + C_n b_n + ν_n    (3)

where

x_n = vector of dynamic states at nth update point


b_n = vector of bias states at nth update point
y_n = measurement vector at nth update point

and A_n, B_n, H_n, and C_n are time-varying coefficient matrices, with the quantities ξ_n, β_n, and ν_n being zero-mean uncorrelated random sequences governed by

E(ξ_j ξ_k^T) = Q_x δ_jk

E(β_j β_k^T) = Q_b δ_jk

E(ν_j ν_k^T) = R δ_jk

E(ξ_j β_k^T) = E(ξ_j ν_k^T) = E(β_j ν_k^T) = 0.

It is assumed, as in [2], that x and b are initially correlated through a general dependency of the form

x_0 = x'_0 + M_0 b_0    (4)


where x_0 is the initial value of x, x'_0 is the component of x_0 which is uncorrelated with the initial bias b_0, and M_0 is a known matrix. This leads to the following initial error covariance relationships:

P_x0 = P'_x0 + M_0 P_b0 M_0^T    (5)

P_xb0 = M_0 P_b0    (6)

where P_x0, P'_x0, and P_b0 are the covariance matrices of the initial estimation errors in x_0, x'_0, and b_0 respectively, and P_xb0 is the cross-covariance matrix of the estimation errors in x_0 and b_0. The bias variation model defined by (2) constitutes a random-walk process for each element of b, but is also applicable when the actual random variations are first-order exponentially correlated processes with correlation times much larger than the time interval between successive measurement updates.
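To make the model concrete, the sketch below simulates the discretized equations (1)-(3) and sets up the initial-correlation relationships (4)-(6). All numerical values, the dimensions, and the helper name simulate_step are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

nx, nb, ny = 2, 1, 1          # assumed dimensions for illustration

# Time-invariant stand-ins for the time-varying A_n, B_n, H_n, C_n
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
H = np.array([[1.0, 0.0]])
C = np.array([[1.0]])

Qx = 1e-4 * np.eye(nx)        # E(xi xi^T)
Qb = 1e-6 * np.eye(nb)        # E(beta beta^T), small: slowly varying bias
R  = 1e-2 * np.eye(ny)        # E(nu nu^T)

# Initial correlation per (4): x0 = x0' + M0 b0
M0   = np.array([[0.5], [0.0]])
Pb0  = 1e-2 * np.eye(nb)
Pxp0 = 1e-1 * np.eye(nx)      # covariance of x0'
Px0  = Pxp0 + M0 @ Pb0 @ M0.T # (5)
Pxb0 = M0 @ Pb0               # (6)

b0 = rng.multivariate_normal(np.zeros(nb), Pb0)
x0 = rng.multivariate_normal(np.zeros(nx), Pxp0) + M0 @ b0   # (4)

def simulate_step(x, b):
    """One step of (1)-(3): returns (x_next, b_next, y_next)."""
    b_next = b + rng.multivariate_normal(np.zeros(nb), Qb)          # (2)
    x_next = A @ x + B @ b + rng.multivariate_normal(np.zeros(nx), Qx)  # (1)
    y_next = H @ x_next + C @ b_next + rng.multivariate_normal(np.zeros(ny), R)  # (3)
    return x_next, b_next, y_next
```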

GENERALIZED KALMAN ESTIMATOR

The generalized Kalman estimator of the dynamic and bias states is formulated as

ẑ_n^- = Φ_n ẑ_{n-1}    (7)




P_n^- = Φ_n P_{n-1} Φ_n^T + Q_n    (8)

K_n = P_n^- L_n^T (L_n P_n^- L_n^T + R_n)^{-1}    (9)

P_n = (I - K_n L_n) P_n^-    (10)

ẑ_n = ẑ_n^- + K_n (y_n - L_n ẑ_n^-)    (11)

where ẑ is the estimate of z, P the estimation error covariance matrix of z, Φ_n the transition matrix from the (n-1)th to the nth measurement update point, K the Kalman gain matrix, L the observation matrix, Q the process noise covariance matrix, R the measurement noise covariance matrix, and the superscript "-" is used to denote the a priori values of ẑ and P. The following partitioned form specializes the generalized Kalman estimator to the problem of interest:

z = [x; b],    Φ = [A  B; 0  I],    L = [H  C].

The partitioned Kalman estimator will provide a basis of comparison to the decoupled estimator in the subsequent development.
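For reference, one cycle of the generalized (partitioned) Kalman estimator (7)-(11), specialized with the partitioned z, Φ, and L above, might be coded as in the sketch below; the function name and argument ordering are assumptions.

```python
import numpy as np

def partitioned_cycle(z, P, A, B, H, C, Qx, Qb, R, y):
    """One cycle of the generalized (partitioned) Kalman estimator (7)-(11).

    z = [x; b] is the augmented state estimate, P its error covariance.
    """
    nx, nb = A.shape[0], Qb.shape[0]

    # Augmented transition, observation, and process-noise matrices
    Phi = np.block([[A, B], [np.zeros((nb, nx)), np.eye(nb)]])
    L   = np.hstack([H, C])
    Q   = np.block([[Qx, np.zeros((nx, nb))],
                    [np.zeros((nb, nx)), Qb]])

    z_pred = Phi @ z                                 # (7)
    P_pred = Phi @ P @ Phi.T + Q                     # (8)
    S_inn  = L @ P_pred @ L.T + R                    # innovation covariance
    K      = P_pred @ L.T @ np.linalg.inv(S_inn)     # (9)
    P_new  = (np.eye(nx + nb) - K @ L) @ P_pred      # (10)
    z_new  = z_pred + K @ (y - L @ z_pred)           # (11)
    return z_new, P_new, K
```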

DECOUPLED ESTIMATOR

When the bias vector b is ignored, a Kalman estimator can be defined which produces estimates of x utilizing a recursive scheme defined by

x̄_n^- = A_n x̄_{n-1}    (12)

P̄_x^-(n) = A_n P̄_x(n-1) A_n^T + B_n Q_b(n) B_n^T + Q_x(n)    (13)

K_x(n) = P̄_x^-(n) H_n^T [H_n P̄_x^-(n) H_n^T + R_n]^{-1}    (14)

P̄_x(n) = [I - K_x(n) H_n] P̄_x^-(n)    (15)

x̄_n = x̄_n^- + K_x(n) (y_n - H_n x̄_n^-)    (16)

with initial condition P̄_x(0) = P'_x0, and where x̄ is the estimate of x when the bias is ignored (referred to as the bias-free estimate of x), P̄_x is the computed error covariance matrix of x̄, K_x is the Kalman gain matrix, and Q_x and R are as defined for the partitioned Kalman estimator. The recursive algorithm given by (12)-(16) defines the first stage of a two-stage decoupled estimator, and differs from the form given in [1] and [2] insofar as the term B_n Q_b(n) B_n^T has been added to (13).
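A minimal sketch of the bias-free first stage (12)-(16), including the B_n Q_b(n) B_n^T term added to (13), is given below; the function name and the choice of returned quantities are assumptions, made so that the second stage can reuse the residual and the a priori covariance.

```python
import numpy as np

def bias_free_stage(x_bar, Px_bar, A, B, H, Qx, Qb, R, y):
    """Bias-free estimator (12)-(16): propagates x as if no bias were present.

    Returns the updated estimate, its covariance, the gain Kx, and the
    a priori quantities needed by the bias (second-stage) filter.
    """
    x_pred  = A @ x_bar                                        # (12)
    Px_pred = A @ Px_bar @ A.T + B @ Qb @ B.T + Qx             # (13)
    S_inn   = H @ Px_pred @ H.T + R                            # innovation covariance
    Kx      = Px_pred @ H.T @ np.linalg.inv(S_inn)             # (14)
    Px_new  = (np.eye(x_bar.size) - Kx @ H) @ Px_pred          # (15)
    resid   = y - H @ x_pred
    x_new   = x_pred + Kx @ resid                              # (16)
    return x_new, Px_new, Kx, x_pred, Px_pred, resid
```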

A separate Kalman estimator may be utilized to estimate the bias vector from the residual sequence of the bias-free estimator according to

b̂_n^- = b̂_{n-1}    (17)

P_b^-(n) = P_b(n-1) + Q_b(n)    (18)

K_b(n) = P_b^-(n) S_n^T [S_n P_b^-(n) S_n^T + H_n P̄_x^-(n) H_n^T + R_n]^{-1}    (19)

P_b(n) = [I - K_b(n) S_n] P_b^-(n)    (20)

b̂_n = b̂_n^- + K_b(n) (y_n - H_n x̄_n^- - S_n b̂_n^-)    (21)

with initial condition P_b(0) = P_b0, and where S_n is defined by the recursive sequence

U_n = A_n V_{n-1} + B_n    (22)

S_n = H_n U_n + C_n    (23)

V_n = U_n - K_x(n) S_n.    (24)

The bias estimator differs from that given in [2] insofar as the term Q_b(n) has been added in (18).
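The second-stage bias filter (17)-(21), driven by the bias-free residual and carrying the sensitivity recursions (22)-(24) alongside it, might then be sketched as follows; the function name and argument packing are assumptions.

```python
import numpy as np

def bias_stage(b_hat, Pb, V_prev, A, B, H, C, Qb, Px_pred, R, Kx, resid):
    """Bias estimator (17)-(21) plus the U, V, S recursions (22)-(24).

    V_prev is V_{n-1}; resid = y_n - H_n x_bar_n^- comes from the
    bias-free stage, and Px_pred is its a priori covariance.
    """
    # Sensitivity recursions (22)-(23)
    U = A @ V_prev + B                                         # (22)
    S = H @ U + C                                              # (23)

    b_pred  = b_hat                                            # (17)
    Pb_pred = Pb + Qb                                          # (18)
    S_inn   = S @ Pb_pred @ S.T + H @ Px_pred @ H.T + R
    Kb      = Pb_pred @ S.T @ np.linalg.inv(S_inn)             # (19)
    Pb_new  = (np.eye(b_hat.size) - Kb @ S) @ Pb_pred          # (20)
    b_new   = b_pred + Kb @ (resid - S @ b_pred)               # (21)

    V = U - Kx @ S                                             # (24)
    return b_new, Pb_new, Kb, U, V, S
```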

Given the sequence of estimates of x̄ and b̂, the adjusted estimates of x are defined by

x̂_n^- = x̄_n^- + U_n b̂_n^-    (25)

x̂_n = x̄_n + V_n b̂_n    (26)

where x̂ adjusts the bias-free estimate x̄ utilizing the current estimate of the bias vector b̂. A set of recursion equations for x̂ and b̂ may be expressed in a form in which the bias-free estimate x̄ does not appear explicitly. This proceeds in the following manner:

b̂_n = b̂_n^- + K_b(n) (y_n - H_n x̂_n^- - C_n b̂_n^-)    (27)

x̂_n^- = A_n x̂_{n-1} + B_n b̂_{n-1}    (28)

x̂_n = x̂_n^- + K̂_x(n) (y_n - H_n x̂_n^- - C_n b̂_n^-)    (29)

where the relationships expressed by (22), (23), and (24) were utilized in the last step, and the gain K̂_x is defined by

K̂_x(n) = K_x(n) + V_n K_b(n).    (30)

Comparison of (27), (28), and (29) with the analogous equations for the partitioned Kalman estimator reveals that the estimation equations for x and b have exactly the same structure for both estimators. It is also clear that this depends only on the form of the state estimation equations [equations (12), (16), (17), (21), (25), and (26)] and that of the functions U, V, and S, and holds for both the fixed- and variable-bias cases.
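Under the same assumptions as the sketches above, blending the two stages through (25), (26), and (30) reduces to a few matrix operations:

```python
import numpy as np

def blend_stages(x_bar_pred, x_bar, b_pred, b_hat, U, V, Kx, Kb):
    """Adjusted estimates and composite gain per (25), (26), and (30)."""
    x_hat_pred = x_bar_pred + U @ b_pred     # (25): a priori adjusted estimate
    x_hat      = x_bar + V @ b_hat           # (26): a posteriori adjusted estimate
    Kx_hat     = Kx + V @ Kb                 # (30): composite gain for the adjusted estimate
    return x_hat_pred, x_hat, Kx_hat
```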

RELATIONSHIP BETWEEN THE PARTITIONED AND DECOUPLED KALMAN ESTIMATORS

For the constant-bias case the gain matrices K̂_x and K_b are shown inductively in [2] to be exactly the same as the corresponding matrices K_1 and K_2 for the partitioned estimator on each iterative cycle, and the estimates x̂ and b̂ provided by the two estimators are shown to be identical. Another property of the constant-bias case demonstrated in [2] is that, after a measurement update, the following covariance relationships between the decoupled and partitioned estimators hold:

P_11(n) = P_x(n) = P̄_x(n) + V_n P_b(n) V_n^T    (31)

P_12(n) = P_xb(n) = V_n P_b(n)    (32)

P_22(n) = P_b(n)    (33)

where P_x is the covariance matrix of x̂, P_xb the cross-covariance matrix between x̂ and b̂, P_b the covariance matrix of b̂, and the P_ij are the covariance matrices associated with the partitioned estimator. It is further shown that the equivalency between the two estimators requires that the initial value of the function V be chosen equal to the matrix M_0 appearing in (4), so that initially as well as at all subsequent times the relationships expressed by (31), (32), and (33) hold.

The case where the bias vector is not constant can be treated as the natural extension of the case where it undergoes a single jump at some instant of time. Therefore, consider the case where the bias vector, after a period of constancy, randomly jumps at the nth update point by an amount β having covariance matrix Q_b and, thereafter, remains constant at this new value. The estimate of the bias vector is the same immediately after the jump as before, and this is also true of the bias-free estimate x̄, from which it is clear that the adjusted estimate x̂, as constructed from (26), remains unchanged across the bias jump. Therefore, the two equivalencies expressed by (31) and (32) can be preserved by reinitializing the bias-free covariance matrix P̄_x and the function V immediately after the jump as P̄_x^+(n) = P̄_x(n) and V_n^+ = V_n, where the superscript




"+" is used to denote the instant of time immediately after the jump. The covariance matrix of the bias estimation error is, as a result of the jump, augmented by Q_b to reflect the increased uncertainty associated with the jump or, explicitly, P_b^+(n) = P_b(n) + Q_b, from which it may be seen that the set of equivalencies expressed by (31), (32), and (33) is no longer completely satisfied since the covariance matrix of b before the jump occurs in (31) and (32), and the value after the jump in (33). This invalidates the equivalency between the decoupled and partitioned estimators immediately after the jump as well as at all subsequent times, from which it follows that the defined reset of the decoupled estimator will lead in general to suboptimal performance.

The variation of the bias vector, as expressed by (2), constitutes a series of discrete jumps occurring over successive update intervals, and is the natural extension of the single-jump case; consequently, the defined reset relationships are applicable at each update point. As for the single-jump case, the resulting estimator is suboptimal but, depending on the magnitude of Q_b, its performance may be nearly optimal in a given application. The addition of the term B_n Q_b(n) B_n^T in (13) further enhances the performance of the decoupled estimator by accounting for the short-term effect of the bias noise on the propagation of x̄ in the bias-free estimator; however, the decoupled estimator remains suboptimal in spite of this refinement.

PERFORMANCE EVALUATION OF THE DECOUPLED ESTIMATOR

The performance associated with the decoupled estimator may be precisely determined in a given application by taking advantage of its structural equivalency to the partitioned Kalman estimator. The procedure is to compute the gain matrices K_x and K_b at each measurement update point for the decoupled estimator, from which the gain matrix K̂_x for the adjusted estimate x̂ may be computed from (30). The gain matrices K̂_x and K_b are then stored at each measurement update point for the subsequent second stage of the evaluation, during which a covariance analysis characterizing the actual performance of the decoupled estimator is carried out utilizing the stored gain matrices K̂_x and K_b in place of K_1 and K_2 in the partitioned Kalman estimator defined by (7)-(11). However, Joseph's form of the a posteriori covariance matrix, defined by


P_n = (I - K_n L_n) P_n^- (I - K_n L_n)^T + K_n R_n K_n^T    (34)

must be used in place of (10). The same evaluation procedure is also applicable when the decoupled estimator is a reduced-state estimator, with one or more bias states ignored. When such is the case, the ignored states can be initialized in the first step with zero uncertainties and noise variances which, in turn, causes the gains computed for these states to be identically zero, and their uncertainties to thus remain zero. Then, in the second step, the previously ignored bias states can be activated by setting their initial uncertainties and noise variances to the true values, allowing the effect of these states to be properly accounted for.
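A sketch of one step of this fixed-gain covariance analysis, with the stored gains K̂_x and K_b stacked in place of K_1 and K_2 and the Joseph form (34) replacing (10), is given below; the stacking convention and the function name are assumptions.

```python
import numpy as np

def covariance_analysis_step(P, Phi, L, Q, R, Kx_hat, Kb):
    """One step of the fixed-gain covariance analysis of the decoupled estimator.

    The stored gains Kx_hat and Kb replace K1 and K2 of the partitioned
    estimator; the Joseph form (34) is used for the a posteriori update so
    that suboptimal gains still yield a valid covariance.
    """
    K = np.vstack([Kx_hat, Kb])                  # stacked gain in place of [K1; K2]
    P_pred = Phi @ P @ Phi.T + Q                 # (8)
    IKL = np.eye(P.shape[0]) - K @ L
    P_new = IKL @ P_pred @ IKL.T + K @ R @ K.T   # (34)
    return P_new
```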

AN APPLICATION OF THE DECOUPLED ESTIMATOR

The use of the decoupled estimator will be demonstrated in a practical application consisting of the laboratory calibration of a strapdown inertial navigation system. The x vector for this application consists of nine navigation error states (three attitude errors, three velocity errors, three position errors), with the b vector consisting of 21 states (three biases, three scale factor errors, and three nonorthogonality errors associated with the gyros; three biases, three scale factor errors, three nonorthogonality errors, and three misalignment errors associated with the accelerometers). The salient features of the calibration procedure are as follows. 1) The inertial system is mounted on a two-axis rate table and subjected to a rotational motion pattern specifically designed to excite the navigation and sensor calibration errors. 2) The system's initial orientation relative to a North/East/Down reference frame has been approximately established. 3) While freely navigating, the difference between each of the three system-indicated position coordinates and its known value is made available as a measurement every 20 s. The random errors associated with the sensors and the measurement process are as defined in Table I. The initial 1σ errors associated with the navigation and sensor calibration errors are shown in the first column of Table II.

TABLE I
DEFINITION OF RANDOM ERRORS FOR CALIBRATION EXAMPLE

Gyro Errors
  Quantization (arc sec): (value not legible)
  Random Growth at 1 Hr
    bias (deg/hr): (value not legible)
    scale factor (ppm): (value not legible)

Accelerometer Errors
  Quantization (ft/sec): 0.005
  Random Growth at 1 Hr
    bias (micro-g): 20
    scale factor (ppm): 20
    nonorthogonality (arc sec): 10
    misalignment (arc sec): 10

Measurement Error
  Position Measurement (ft): (value not legible)

The 1σ navigation and sensor calibration errors for the decoupled estimator, as determined by the performance evaluation procedure outlined above, are given at t = 1/2 h and t = 2 h in the next two columns of Table II. The comparable results at each time point for the partitioned Kalman estimator are defined by the second set of values shown in parentheses. Both sets of results are seen to be substantially the same, with each exhibiting an excellent parameter tracking capability, which clearly demonstrates the effectiveness of the decoupled estimator. (Even though the navigation errors are of no ultimate interest, since the objective is sensor calibration, the estimation accuracy comparison associated with these states is included to demonstrate the overall effectiveness of the decoupled estimator.)

U-D IMPLEMENTATION OF THE DECOUPLED ESTIMATOR

An approach that substantially improves the numerical stability characteristics of the Kalman estimator, as described by Thornton and Bierman



in [3], avoids the propagation of the P matrix as a distinct entity, but propagates its U and D factors instead, where U and D combine to form P according to P = U D U^T, in which U is a unit upper triangular matrix and D a diagonal matrix. Application of the U-D approach assumes a sequence of scalar measurements, with measurement matrix h and measurement noise variance r. Simultaneously occurring measurements with uncorrelated measurement errors may be processed by repeated application of the U-D measurement update equations.

As applied to the decoupled estimator, which lends itself naturally to scalar measurement updates, application of the U-D approach is straightforward. The bias-free estimator can be implemented utilizing the basic U-D update algorithms defined by (11)-(24) in [3], with the exception that the noise covariance matrix Q_x + B Q_b B^T appearing in (13) will generally require conversion to U-D form by application of Cholesky's decomposition [4]. The U-D implementation of the bias estimator defined by (17)-(21), with measurement matrix s, is also straightforward and only requires that the measurement noise variance for the bias estimator be obtained from the bias-free estimator as the innovations variance h P̄_x^- h^T + r, which is directly available as a natural byproduct of the U-D formulation, and can be provided by the U-D bias-free estimator to the U-D bias estimator. The bias estimator, when formulated in U-D terms, allows simplifications in the U-D time propagation step as a result of Q_b being naturally diagonal and Φ U being upper triangular.
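The one nonstandard step noted above is the conversion of Q_x + B Q_b B^T to U-D form. A small UDU^T factorization routine of the kind alluded to is sketched below; the routine name udu_factor and the example matrices are assumptions, and any equivalent Cholesky-based factorization would serve.

```python
import numpy as np

def udu_factor(P):
    """Factor a symmetric positive-definite matrix as P = U D U^T,
    with U unit upper triangular and D diagonal (returned as a vector)."""
    P = np.array(P, dtype=float, copy=True)
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        # Diagonal element, then the column above it
        d[j] = P[j, j] - np.sum(U[j, j + 1:] ** 2 * d[j + 1:])
        for i in range(j):
            U[i, j] = (P[i, j] - np.sum(U[i, j + 1:] * d[j + 1:] * U[j, j + 1:])) / d[j]
    return U, d

# Example: convert a bias-free process noise Qx + B Qb B^T to U-D form
# (matrices here are arbitrary stand-ins, not taken from the paper).
Qx = np.diag([1e-4, 2e-4])
B  = np.array([[0.005], [0.1]])
Qb = np.array([[1e-6]])
U, d = udu_factor(Qx + B @ Qb @ B.T)
assert np.allclose(U @ np.diag(d) @ U.T, Qx + B @ Qb @ B.T)
```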

CONCLUSION

A modified decoupled Kalman estimator suitable for use when the bias vector varies as a random-walk process has been defined, and demonstrated in a practical application consisting of the calibration of a strapdown inertial navigation system, the results of which showed the estimation accuracy associated with the modified estimator to be essentially the same as that of the generalized partitioned Kalman estimator. Considering that the sensor error random growth rates assumed in the example are on the order of 5 to 10 times greater than normally associated with contemporary strapdown systems, it may be inferred that inertial navigation systems possessing more typical sensor error random growth characteristics should be amenable to a decoupled estimator approach in a broad spectrum of aided-navigation system applications. This should also be true in a variety of other applications in which the bias vector experiences only limited random variation.

REFERENCES

[1] B. Friedland, "Treatment of bias in recursive filtering," IEEE Trans. Automat. Contr., vol. AC-14, Aug. 1969.
[2] M. Ignagni, "An alternate derivation and extension of Friedland's two-stage Kalman estimator," IEEE Trans. Automat. Contr., vol. AC-26, June 1981.
[3] C. L. Thornton and G. L. Bierman, "Filtering and error analysis via the UDU^T covariance factorization," IEEE Trans. Automat. Contr., vol. AC-23, no. 5, Oct. 1978.
[4] G. Bierman, Factorization Methods for Discrete Sequential Estimation. New York: Academic, 1977.
