9
Numerical Methods in Kinematics

As the number of links increases, analytic calculation in robotics becomes a tedious task and numerical methods are needed. We review the numerical analyses most frequently needed in robotics.
9.1 Linear Algebraic Equations
In robotic analysis there exist problems and situations, such as inverse kinematics, in which we need to solve a set of coupled linear or nonlinear algebraic equations. Every numerical method for solving nonlinear equations also works by iteratively solving a set of linear equations. Consider a system of n linear algebraic equations with real constant coefficients,

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
\vdots
a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n    (9.1)

which can also be written in matrix form,

[A] x = b.    (9.2)
There are numerous methods for solving this set of equations. Among the most efficient is the LU factorization method. For every nonsingular matrix [A] there exist an upper triangular matrix [U] with nonzero diagonal elements and a lower triangular matrix [L] with unit diagonal elements, such that

[A] = [L][U]    (9.3)

[A] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}    (9.4)

[L] = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ l_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & 1 \end{bmatrix}    (9.5)
R.N. Jazar, Theory of Applied Robotics, 2nd ed., DOI 10.1007/978-1-4419-1750-8_9, © Springer Science+Business Media, LLC 2010
486 9. Numerical Methods in Kinematics
[U] = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix}.    (9.6)
The process of factoring [A] into [L][U] is called LU factorization. Once the [L] and [U] matrices are obtained, the equation

[L][U] x = b    (9.7)

can be solved by transforming it into

[L] y = b    (9.8)

and

[U] x = y.    (9.9)

Equations (9.8) and (9.9) are both triangular sets of equations, and their solutions are easy to obtain by forward and backward substitution, respectively.
Proof. To show how [A] can be transformed into [L][U], we consider a 4 \times 4 matrix.

\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ l_{31} & l_{32} & 1 & 0 \\ l_{41} & l_{42} & l_{43} & 1 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{bmatrix}    (9.10)
Employing a dummy matrix [B], we may combine the elements of [L] and [U] as

[B] = \begin{bmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ l_{21} & u_{22} & u_{23} & u_{24} \\ l_{31} & l_{32} & u_{33} & u_{34} \\ l_{41} & l_{42} & l_{43} & u_{44} \end{bmatrix}.    (9.11)

The elements of [B] will be calculated one by one, in the following order:

[B] = \begin{bmatrix} (1) & (2) & (3) & (4) \\ (5) & (8) & (9) & (10) \\ (6) & (11) & (13) & (14) \\ (7) & (12) & (15) & (16) \end{bmatrix}    (9.12)
The process for generating the matrix [B] associated to an n \times n matrix [A] is performed in n - 1 iterations. After i - 1 iterations, the matrix is in the following form:

[B] = \begin{bmatrix} u_{1,1} & u_{1,2} & \cdots & u_{1,i-1} & \cdots & u_{1,n} \\ l_{2,1} & u_{2,2} & \cdots & & \cdots & u_{2,n} \\ \vdots & \vdots & \ddots & & & \vdots \\ \vdots & \vdots & & \ddots & & u_{i-1,n} \\ \vdots & \vdots & & & [D_i] & \\ l_{n,1} & l_{n,2} & \cdots & l_{n,i-1} & & \end{bmatrix}    (9.13)

The unprocessed (n-i+1) \times (n-i+1) submatrix in the lower right corner is denoted by [D_i] and has the same elements as [A]. In the ith step, the LU factorization method converts [D_i],

[D_i] = \begin{bmatrix} d_{ii} & r_i^T \\ s_i & [H_{i+1}] \end{bmatrix}    (9.14)

to a new form

[D_i] = \begin{bmatrix} u_{ii} & u_i^T \\ l_i & [D_{i+1}] \end{bmatrix}.    (9.15)
Direct multiplication shows that

u_{11} = a_{11}    u_{12} = a_{12}    u_{13} = a_{13}    u_{14} = a_{14}    (9.16)

l_{21} = \frac{a_{21}}{u_{11}}    l_{31} = \frac{a_{31}}{u_{11}}    l_{41} = \frac{a_{41}}{u_{11}}    (9.17)

u_{22} = a_{22} - l_{21}u_{12}    u_{23} = a_{23} - l_{21}u_{13}    u_{24} = a_{24} - l_{21}u_{14}    (9.18)

l_{32} = \frac{a_{32} - l_{31}u_{12}}{u_{22}}    l_{42} = \frac{a_{42} - l_{41}u_{12}}{u_{22}}    (9.19)

u_{33} = a_{33} - (l_{31}u_{13} + l_{32}u_{23})    u_{34} = a_{34} - (l_{31}u_{14} + l_{32}u_{24})    (9.20)

l_{43} = \frac{a_{43} - (l_{41}u_{13} + l_{42}u_{23})}{u_{33}}    (9.21)

u_{44} = a_{44} - (l_{41}u_{14} + l_{42}u_{24} + l_{43}u_{34}).    (9.22)
Therefore, the general formula for the elements of [L] and [U] corresponding to an n \times n coefficient matrix [A] can be written as

u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik} u_{kj}    i \le j,  j = 1, \cdots, n    (9.23)

l_{ij} = \frac{a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj}}{u_{jj}}    j \le i,  i = 1, \cdots, n.    (9.24)

For i = 1, the rule for u reduces to

u_{1j} = a_{1j}    (9.25)

and for j = 1, the rule for l reduces to

l_{i1} = \frac{a_{i1}}{u_{11}}.    (9.26)
The calculation of element (k) of the dummy matrix [B], which is an element of [L] or [U], involves only the element of [A] in the same position and some previously calculated elements of [B].
The LU factorization technique can be set up in an algorithm for easiernumerical calculations.
Algorithm 9.1. LU factorization technique for an n \times n matrix [A].

1- Set the initial counter i = 1.
2- Set [D_1] = [A].
3- Calculate [D_{i+1}] from [D_i] according to

u_{ii} = d_{ii}    (9.27)
u_i^T = r_i^T    (9.28)
l_i = \frac{1}{u_{ii}} s_i    (9.29)
[D_{i+1}] = [H_{i+1}] - l_i u_i^T.    (9.30)

4- Set i = i + 1. If i = n, then the LU factorization is complete. Otherwise return to step 3.
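Algorithm 9.1 can be sketched in Python as follows. The function name `lu_factor` is a hypothetical choice, exact rationals are used for clarity, and the sketch assumes no zero pivot is encountered (pivoting is treated in Example 258).

```python
from fractions import Fraction

def lu_factor(A):
    """LU factorization by repeated Schur complements (Algorithm 9.1).
    Returns (L, U) with L unit lower triangular and U upper triangular.
    Assumes every pivot d_ii is nonzero (no pivoting)."""
    n = len(A)
    # work on a copy; B ends up holding L strictly below and U on/above the diagonal
    B = [[Fraction(x) for x in row] for row in A]
    for i in range(n):                  # step 3 of Algorithm 9.1
        for r in range(i + 1, n):
            B[r][i] /= B[i][i]          # l_i = s_i / u_ii, Equation (9.29)
            for c in range(i + 1, n):   # [D_{i+1}] = [H_{i+1}] - l_i u_i^T, (9.30)
                B[r][c] -= B[r][i] * B[i][c]
    L = [[B[r][c] if r > c else (Fraction(1) if r == c else Fraction(0))
          for c in range(n)] for r in range(n)]
    U = [[B[r][c] if r <= c else Fraction(0) for c in range(n)] for r in range(n)]
    return L, U
```

Applied to the coefficient matrix of Example 257 below, it reproduces the matrices (9.55) and (9.56).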
After decomposing the matrix [A] into the matrices [L] and [U ], the setof equations can be solved based on the following algorithm.
Algorithm 9.2. LU solution technique.

1- Calculate y from [L] y = b by

y_1 = b_1
y_2 = b_2 - y_1 l_{21}
y_3 = b_3 - y_1 l_{31} - y_2 l_{32}
\vdots
y_i = b_i - \sum_{j=1}^{i-1} y_j l_{ij}.    (9.31)

2- Calculate x from [U] x = y by

x_n = \frac{y_n}{u_{n,n}}
x_{n-1} = \frac{y_{n-1} - x_n u_{n-1,n}}{u_{n-1,n-1}}
\vdots
x_i = \frac{1}{u_{ii}} \left( y_i - \sum_{j=i+1}^{n} x_j u_{ij} \right).    (9.32)
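A minimal Python sketch of Algorithm 9.2, assuming [L] is unit lower triangular and [U] is upper triangular with nonzero diagonal elements; `solve_lu` is a hypothetical name.

```python
from fractions import Fraction

def solve_lu(L, U, b):
    """Algorithm 9.2: forward substitution for [L]y = b (Equation 9.31),
    then backward substitution for [U]x = y (Equation 9.32)."""
    n = len(b)
    y = []
    for i in range(n):                      # y_i = b_i - sum_{j<i} l_ij y_j
        y.append(Fraction(b[i]) - sum(L[i][j] * y[j] for j in range(i)))
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):          # x_i = (y_i - sum_{j>i} u_ij x_j) / u_ii
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

Applied to the [L], [U], and b of Example 257 below, it returns x = [-5/2, 3/2, -1/2, -2].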
Example 257 Solution of a set of four equations.
Consider a set of four linear algebraic equations,

[A] x = b    (9.33)

where

[A] = \begin{bmatrix} 2 & 1 & 3 & -3 \\ 1 & 0 & -1 & -2 \\ 0 & 2 & 2 & 1 \\ 3 & 1 & 0 & -2 \end{bmatrix}    (9.34)

and

b = \begin{bmatrix} 1 \\ 2 \\ 0 \\ -2 \end{bmatrix}.    (9.35)

Following the LU factorization algorithm, we first set

i = 1,  [D_1] = [A]    (9.36)

to find

d_{11} = 2,  r_1^T = \begin{bmatrix} 1 & 3 & -3 \end{bmatrix}    (9.37)

s_1 = \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix},  [H_2] = \begin{bmatrix} 0 & -1 & -2 \\ 2 & 2 & 1 \\ 1 & 0 & -2 \end{bmatrix}    (9.38)

and calculate

u_{11} = d_{11} = 2    (9.39)

u_1^T = r_1^T = \begin{bmatrix} 1 & 3 & -3 \end{bmatrix},  l_1 = \frac{1}{u_{11}} s_1 = \begin{bmatrix} 1/2 \\ 0 \\ 3/2 \end{bmatrix}    (9.40)

[D_2] = [H_2] - l_1 u_1^T = \begin{bmatrix} -1/2 & -5/2 & -1/2 \\ 2 & 2 & 1 \\ -1/2 & -9/2 & 5/2 \end{bmatrix}.    (9.41)

In the second step we have

i = 2    (9.42)

and

d_{22} = -\frac{1}{2},  r_2^T = \begin{bmatrix} -5/2 & -1/2 \end{bmatrix}    (9.43)

s_2 = \begin{bmatrix} 2 \\ -1/2 \end{bmatrix},  [H_3] = \begin{bmatrix} 2 & 1 \\ -9/2 & 5/2 \end{bmatrix}    (9.44)

and calculate

u_{22} = d_{22} = -\frac{1}{2}    (9.45)

u_2^T = r_2^T = \begin{bmatrix} -5/2 & -1/2 \end{bmatrix},  l_2 = \frac{1}{u_{22}} s_2 = \begin{bmatrix} -4 \\ 1 \end{bmatrix}    (9.46)

[D_3] = [H_3] - l_2 u_2^T = \begin{bmatrix} -8 & -1 \\ -2 & 3 \end{bmatrix}.    (9.47)

In the third step we set

i = 3    (9.48)

and find

d_{33} = -8,  r_3^T = [-1]    (9.49)

s_3 = [-2],  [H_4] = [3]    (9.50)

and therefore,

u_{33} = d_{33} = -8    (9.51)

u_3^T = r_3^T = [-1],  l_3 = \frac{1}{u_{33}} s_3 = \left[ \frac{1}{4} \right]    (9.52)

[D_4] = [H_4] - l_3 u_3^T = \left[ \frac{13}{4} \right].    (9.53)

After these calculations, the matrices [B], [L], and [U] become

[B] = \begin{bmatrix} 2 & 1 & 3 & -3 \\ 1/2 & -1/2 & -5/2 & -1/2 \\ 0 & -4 & -8 & -1 \\ 3/2 & 1 & 1/4 & 13/4 \end{bmatrix}    (9.54)

[L] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 1 & 0 & 0 \\ 0 & -4 & 1 & 0 \\ 3/2 & 1 & 1/4 & 1 \end{bmatrix}    (9.55)

[U] = \begin{bmatrix} 2 & 1 & 3 & -3 \\ 0 & -1/2 & -5/2 & -1/2 \\ 0 & 0 & -8 & -1 \\ 0 & 0 & 0 & 13/4 \end{bmatrix}.    (9.56)

Now a vector y can be found to satisfy

[L] y = b    (9.57)

y = \begin{bmatrix} 1 \\ 3/2 \\ 6 \\ -13/2 \end{bmatrix}    (9.58)

and finally the unknown vector x should be found to satisfy

[U] x = y    (9.59)

x = \begin{bmatrix} -5/2 \\ 3/2 \\ -1/2 \\ -2 \end{bmatrix}.    (9.60)
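As a quick check, a sketch that substitutes the solution of Example 257 back into [A]x = b using exact arithmetic:

```python
from fractions import Fraction as F

# Data of Example 257 and the solution obtained by LU factorization
A = [[2, 1, 3, -3], [1, 0, -1, -2], [0, 2, 2, 1], [3, 1, 0, -2]]
b = [1, 2, 0, -2]
x = [F(-5, 2), F(3, 2), F(-1, 2), F(-2)]

# Check [A]x - b = 0 exactly, row by row
residual = [sum(A[i][j] * x[j] for j in range(4)) - b[i] for i in range(4)]
print(residual)  # [Fraction(0, 1), Fraction(0, 1), Fraction(0, 1), Fraction(0, 1)]
```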
Example 258 LU factorization with pivoting.
In the process of LU factorization, the situation u_{ii} = 0 generates a division by zero, which must be avoided. In this situation, pivoting must be applied. By pivoting, we change the order of the equations so that the coefficient matrix has the largest elements, in absolute value, as diagonal elements. The largest such element is called the pivot element. As an example, consider the following set of equations:

[A] x = b    (9.61)

\begin{bmatrix} 2 & 1 & 3 & -3 \\ 1 & 0 & -1 & 2 \\ 0 & 2 & 0 & 1 \\ 3 & 1 & 4 & -2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 0 \\ -2 \end{bmatrix}    (9.62)

We move the largest element to d_{11} by interchanging row 1 with row 4, and column 1 with column 3:

\begin{bmatrix} 3 & 1 & 4 & -2 \\ 1 & 0 & -1 & 2 \\ 0 & 2 & 0 & 1 \\ 2 & 1 & 3 & -3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} -2 \\ 2 \\ 0 \\ 1 \end{bmatrix}    (9.63)

\begin{bmatrix} 4 & 1 & 3 & -2 \\ -1 & 0 & 1 & 2 \\ 0 & 2 & 0 & 1 \\ 3 & 1 & 2 & -3 \end{bmatrix} \begin{bmatrix} x_3 \\ x_2 \\ x_1 \\ x_4 \end{bmatrix} = \begin{bmatrix} -2 \\ 2 \\ 0 \\ 1 \end{bmatrix}    (9.64)
Then the largest element, in absolute value, of the 3 \times 3 submatrix in the lower right corner is moved to d_{22}:

\begin{bmatrix} 4 & -2 & 3 & 1 \\ -1 & 2 & 1 & 0 \\ 0 & 1 & 0 & 2 \\ 3 & -3 & 2 & 1 \end{bmatrix} \begin{bmatrix} x_3 \\ x_4 \\ x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -2 \\ 2 \\ 0 \\ 1 \end{bmatrix}    (9.65)

\begin{bmatrix} 4 & -2 & 3 & 1 \\ 3 & -3 & 2 & 1 \\ 0 & 1 & 0 & 2 \\ -1 & 2 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_3 \\ x_4 \\ x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \\ 0 \\ 2 \end{bmatrix}    (9.66)

Finally, the largest element of the 2 \times 2 submatrix in the lower right corner is moved to d_{33}:

\begin{bmatrix} 4 & -2 & 1 & 3 \\ 3 & -3 & 1 & 2 \\ 0 & 1 & 2 & 0 \\ -1 & 2 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_3 \\ x_4 \\ x_2 \\ x_1 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \\ 0 \\ 2 \end{bmatrix}    (9.67)

To apply the LU factorization and LU solution algorithms, we define a new set of equations,

[A'] x' = b'    (9.68)

\begin{bmatrix} 4 & -2 & 1 & 3 \\ 3 & -3 & 1 & 2 \\ 0 & 1 & 2 & 0 \\ -1 & 2 & 0 & 1 \end{bmatrix} \begin{bmatrix} x'_1 \\ x'_2 \\ x'_3 \\ x'_4 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \\ 0 \\ 2 \end{bmatrix}    (9.69)
Based on the LU factorization algorithm, in the first step we set

i = 1    (9.70)

and find

[D_1] = [A'],  d_{11} = 4,  r_1^T = \begin{bmatrix} -2 & 1 & 3 \end{bmatrix}    (9.71)

s_1 = \begin{bmatrix} 3 \\ 0 \\ -1 \end{bmatrix},  [H_2] = \begin{bmatrix} -3 & 1 & 2 \\ 1 & 2 & 0 \\ 2 & 0 & 1 \end{bmatrix}    (9.72)

to calculate

u_{11} = d_{11} = 4,  u_1^T = r_1^T = \begin{bmatrix} -2 & 1 & 3 \end{bmatrix}    (9.73)

l_1 = \frac{1}{u_{11}} s_1 = \begin{bmatrix} 3/4 \\ 0 \\ -1/4 \end{bmatrix}    (9.74)

[D_2] = [H_2] - l_1 u_1^T = \begin{bmatrix} -3/2 & 1/4 & -1/4 \\ 1 & 2 & 0 \\ 3/2 & 1/4 & 7/4 \end{bmatrix}.    (9.75)

For the second step, we have

i = 2    (9.76)

and

d_{22} = -\frac{3}{2},  r_2^T = \begin{bmatrix} 1/4 & -1/4 \end{bmatrix}    (9.77)

s_2 = \begin{bmatrix} 1 \\ 3/2 \end{bmatrix},  [H_3] = \begin{bmatrix} 2 & 0 \\ 1/4 & 7/4 \end{bmatrix}    (9.78)

and then

u_{22} = d_{22} = -\frac{3}{2}    (9.79)

u_2^T = r_2^T = \begin{bmatrix} 1/4 & -1/4 \end{bmatrix},  l_2 = \frac{1}{u_{22}} s_2 = \begin{bmatrix} -2/3 \\ -1 \end{bmatrix}    (9.80)

[D_3] = [H_3] - l_2 u_2^T = \begin{bmatrix} 13/6 & -1/6 \\ 1/2 & 3/2 \end{bmatrix}.    (9.81)

In the third step, we set

i = 3    (9.82)

and find

d_{33} = \frac{13}{6},  r_3^T = \left[ -\frac{1}{6} \right]    (9.83)

s_3 = \left[ \frac{1}{2} \right],  [H_4] = \left[ \frac{3}{2} \right]    (9.84)

and calculate

u_{33} = d_{33} = \frac{13}{6}    (9.85)

u_3^T = r_3^T = \left[ -\frac{1}{6} \right],  l_3 = \frac{1}{u_{33}} s_3 = \left[ \frac{3}{13} \right]    (9.86)

[D_4] = [H_4] - l_3 u_3^T = \left[ \frac{20}{13} \right].    (9.87)

Therefore, the matrices [L] and [U] are

[L] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 3/4 & 1 & 0 & 0 \\ 0 & -2/3 & 1 & 0 \\ -1/4 & -1 & 3/13 & 1 \end{bmatrix}    (9.88)

[U] = \begin{bmatrix} 4 & -2 & 1 & 3 \\ 0 & -3/2 & 1/4 & -1/4 \\ 0 & 0 & 13/6 & -1/6 \\ 0 & 0 & 0 & 20/13 \end{bmatrix}    (9.89)

and now we can find the vector y from

[L] y = b'    (9.90)

y = \begin{bmatrix} -2 \\ 5/2 \\ 5/3 \\ 47/13 \end{bmatrix}.    (9.91)

The unknown vector x' can then be calculated,

[U] x' = y    (9.92)

x' = \begin{bmatrix} -69/20 \\ -19/10 \\ 19/20 \\ 47/20 \end{bmatrix} = \begin{bmatrix} x_3 \\ x_4 \\ x_2 \\ x_1 \end{bmatrix}    (9.93)

and therefore,

x = \begin{bmatrix} 47/20 \\ 19/20 \\ -69/20 \\ -19/10 \end{bmatrix}.    (9.94)
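The full row-and-column pivoting of this example is chosen for illustration; numerical libraries usually apply partial (row) pivoting only. A minimal sketch of that variant, with `lu_pivot` a hypothetical name and floating-point data assumed:

```python
def lu_pivot(A):
    """LU factorization with partial row pivoting: P A = L U.
    Returns (perm, LU) where perm maps factored rows to original rows
    and LU stores L strictly below the diagonal and U on/above it."""
    n = len(A)
    LU = [row[:] for row in A]
    perm = list(range(n))
    for i in range(n):
        # choose the largest |pivot| in column i, rows i..n-1
        p = max(range(i, n), key=lambda r: abs(LU[r][i]))
        if LU[p][i] == 0:
            raise ValueError("matrix is singular")
        LU[i], LU[p] = LU[p], LU[i]
        perm[i], perm[p] = perm[p], perm[i]
        for r in range(i + 1, n):
            LU[r][i] /= LU[i][i]
            for c in range(i + 1, n):
                LU[r][c] -= LU[r][i] * LU[i][c]
    return perm, LU
```

With the [A] of this example, the permuted product L U reproduces the rows of [A] in pivot order.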
Example 259 ★ Uniqueness of solution.
Consider a set of n linear equations, [A] x = b. If [A] is square and nonsingular, then there exists a unique solution x = [A]^{-1} b. However, if the linear system involves n variables and m equations,

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
\vdots
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m    (9.95)

then three classes of solutions are possible.

1. A unique solution exists and the system is called consistent.

2. No solution exists and the system is called inconsistent.

3. Multiple solutions exist and the system is called underdetermined.
Example 260 ★ Ill conditioned and well conditioned.
A system of equations, [A] x = b, is considered well conditioned if a small change in [A] or b results in a small change in the solution vector x, and ill conditioned if a small change in [A] or b results in a large change in x. The system of equations is ill conditioned when [A] has rows or columns that are nearly dependent on each other. Consider the following set of equations:

[A] x = b,  \begin{bmatrix} 2 & 3.99 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1.99 \\ 1 \end{bmatrix}    (9.96)

The solution of this set of equations is

\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -1.0 \\ 1.0 \end{bmatrix}.    (9.97)
Let us make a small change in the b vector,

\begin{bmatrix} 2 & 3.99 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1.98 \\ 1.01 \end{bmatrix}    (9.98)

and see how the solution changes:

\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -6.99 \\ 4.0 \end{bmatrix}    (9.99)

Now we make a small change in the [A] matrix,

\begin{bmatrix} 2.01 & 3.98 \\ 0.99 & 2.01 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1.99 \\ 1 \end{bmatrix}    (9.100)

and solve the equations:

\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0.1988 \\ 0.3993 \end{bmatrix}.    (9.101)

Therefore, the set of equations (9.96) is ill conditioned and is sensitive to perturbations in [A] and b. However, the set of equations

\begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}    (9.102)

is well conditioned because small changes in [A] or b cannot change the solution drastically. The sensitivity of the solution x to small perturbations in [A] and b is measured in terms of the condition number of [A] by

\frac{\|\Delta x\|}{\|x + \Delta x\|} \le \mathrm{con}(A) \, \frac{\|\Delta A\|}{\|A\|}    (9.103)

where

\mathrm{con}(A) = \|A^{-1}\| \, \|A\|    (9.104)

and \|A\| is a norm of [A]. If con(A) = 1, then [A] is called perfectly conditioned. Because con(A) \ge 1 for every induced matrix norm, the matrix [A] is well conditioned when con(A) is close to 1 and ill conditioned when con(A) is large. The relative change in the norm of the coefficient matrix [A], amplified by con(A), sets the upper limit of the relative change in the norm of the solution vector x.
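A sketch of this comparison for the two systems above, using the infinity norm of Equation (9.104); the helper names `inv2`, `norm_inf`, and `cond_inf` are illustrative:

```python
def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def norm_inf(A):
    """Row-sum (infinity) norm of a matrix."""
    return max(sum(abs(x) for x in row) for row in A)

def cond_inf(A):
    """Condition number con(A) = ||A^-1|| * ||A|| in the infinity norm."""
    return norm_inf(inv2(A)) * norm_inf(A)

# Ill-conditioned system (9.96) versus the well-conditioned system (9.102)
print(cond_inf([[2, 3.99], [1, 2]]))  # about 3588: ill conditioned
print(cond_inf([[2, 3], [1, 2]]))     # 25.0: far better conditioned
```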
Proof. Start with a set of equations

[A] x = b    (9.105)

and change the matrix [A] to [A']. Then the solution x changes to x' such that

[A'] x' = b.    (9.106)

Therefore,

[A] x = [A'] x' = ([A] + \Delta A)(x + \Delta x)    (9.107)

where

\Delta A = [A'] - [A]    (9.108)

\Delta x = x' - x.    (9.109)

Expanding (9.107),

[A] x = [A] x + [A] \Delta x + \Delta A \, (x + \Delta x)    (9.110)

and simplifying,

\Delta x = -A^{-1} \Delta A \, (x + \Delta x)    (9.111)

shows that

\|\Delta x\| \le \|A^{-1}\| \, \|\Delta A\| \, \|x + \Delta x\|.    (9.112)

Multiplying both sides of (9.112) by the norm \|A\| leads to Equation (9.103).
Example 261 ★ Norm of a matrix.
The norm of a matrix is a nonnegative scalar, and it is defined for every kind of matrix: square, rectangular, invertible, and noninvertible. There are several definitions for the norm of a matrix. The most important ones are

\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}|    (9.113)

\|A\|_2 = \sqrt{\lambda_{\max}\left(A^T A\right)}    (9.114)

\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|    (9.115)

\|A\|_F = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}^2}.    (9.116)

The norm-infinity, \|A\|_\infty, is the one we accept to calculate con(A) in Equation (9.104). The norm-infinity is also called the row sum norm or the uniform norm. To calculate \|A\|_\infty, we find the sum of the absolute values of the elements of each row of the matrix [A] and pick the largest sum. As an example, the norm of

[A] = \begin{bmatrix} 1 & 3 & -3 \\ -1 & -1 & 2 \\ 2 & 4 & -2 \end{bmatrix}    (9.117)

is

\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|
= \max \{ (|1| + |3| + |-3|), (|-1| + |-1| + |2|), (|2| + |4| + |-2|) \}
= \max \{7, 4, 8\} = 8.    (9.118)

We may check the following relations between norms of matrices:

\|[A] + [B]\| \le \|[A]\| + \|[B]\|    (9.119)

\|[A][B]\| \le \|[A]\| \, \|[B]\|    (9.120)
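The elementwise norms above can be sketched directly; the function names are illustrative:

```python
import math

def norm_one(A):
    """Column-sum norm, Equation (9.113)."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def norm_inf(A):
    """Row-sum norm, Equation (9.115)."""
    return max(sum(abs(x) for x in row) for row in A)

def norm_fro(A):
    """Frobenius norm, Equation (9.116)."""
    return math.sqrt(sum(x * x for row in A for x in row))

A = [[1, 3, -3], [-1, -1, 2], [2, 4, -2]]
print(norm_inf(A), norm_one(A), norm_fro(A))  # 8 8 7.0
```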
9.2 Matrix Inversion
There are numerous techniques for matrix inversion. However, the method based on LU factorization simplifies our numerical calculations, since we have already applied the method to solve a set of linear algebraic equations. Assume a matrix [A] can be decomposed into

[A] = [L][U]    (9.121)

where

[A] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}    (9.122)

[L] = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ l_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & 1 \end{bmatrix}    (9.123)

[U] = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix}    (9.124)

then its inverse is

[A]^{-1} = [U]^{-1} [L]^{-1}.    (9.125)
Proof. Because [L] and [U] are triangular matrices, their inverses are also triangular. The elements of the matrix [M],

[M] = [L]^{-1} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ m_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ m_{n1} & m_{n2} & \cdots & 1 \end{bmatrix}    (9.126)

are

m_{ij} = -l_{ij} - \sum_{k=j+1}^{i-1} l_{ik} m_{kj}    j < i,  i = 2, 3, \cdots, n    (9.127)

and the elements of the matrix [V],

[V] = [U]^{-1} = \begin{bmatrix} v_{11} & v_{12} & v_{13} & v_{14} \\ 0 & v_{22} & v_{23} & v_{24} \\ 0 & 0 & v_{33} & v_{34} \\ 0 & 0 & 0 & v_{44} \end{bmatrix}    (9.128)

are

v_{ij} = \begin{cases} \dfrac{1}{u_{ii}} & j = i,  i = n, n-1, \cdots, 1 \\[1ex] -\dfrac{1}{u_{ii}} \displaystyle\sum_{k=i+1}^{j} u_{ik} v_{kj} & j > i,  i = n-1, \cdots, 1. \end{cases}    (9.129)
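The recurrences (9.127) and (9.129) can be sketched in Python with exact rationals; the function names are illustrative, and [U] is assumed to have nonzero diagonal elements:

```python
from fractions import Fraction as F

def inv_unit_lower(L):
    """[M] = [L]^-1 via Equation (9.127); L must be unit lower triangular."""
    n = len(L)
    M = [[F(1) if i == j else F(0) for j in range(n)] for i in range(n)]
    for i in range(1, n):
        for j in range(i):
            M[i][j] = -L[i][j] - sum(L[i][k] * M[k][j] for k in range(j + 1, i))
    return M

def inv_upper(U):
    """[V] = [U]^-1 via Equation (9.129); U must be upper triangular, u_ii != 0."""
    n = len(U)
    V = [[F(0)] * n for _ in range(n)]
    for i in range(n - 1, -1, -1):
        V[i][i] = 1 / F(U[i][i])
        for j in range(i + 1, n):
            V[i][j] = -sum(U[i][k] * V[k][j] for k in range(i + 1, j + 1)) / F(U[i][i])
    return V
```

Applied to the [L] and [U] of Example 262 below, these reproduce the inverses of Equation (9.134).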
Example 262 Solution of a set of equations by matrix inversion.
Consider a set of four linear algebraic equations,

[A] x = b    (9.130)

\begin{bmatrix} 2 & 1 & 3 & -3 \\ 1 & 0 & -1 & -2 \\ 0 & 2 & 2 & 1 \\ 3 & 1 & 0 & -2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 0 \\ -2 \end{bmatrix}    (9.131)

Following the LU factorization algorithm, we can decompose the coefficient matrix into

[A] = [L][U]    (9.132)

where

[L] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 1 & 0 & 0 \\ 0 & -4 & 1 & 0 \\ 3/2 & 1 & 1/4 & 1 \end{bmatrix},  [U] = \begin{bmatrix} 2 & 1 & 3 & -3 \\ 0 & -1/2 & -5/2 & -1/2 \\ 0 & 0 & -8 & -1 \\ 0 & 0 & 0 & 13/4 \end{bmatrix}.    (9.133)

The inverses of the matrices [L] and [U] are

[L]^{-1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -1/2 & 1 & 0 & 0 \\ -2 & 4 & 1 & 0 \\ -1/2 & -2 & -1/4 & 1 \end{bmatrix},  [U]^{-1} = \begin{bmatrix} 1/2 & 1 & -1/8 & 15/26 \\ 0 & -2 & 5/8 & -3/26 \\ 0 & 0 & -1/8 & -1/26 \\ 0 & 0 & 0 & 4/13 \end{bmatrix}    (9.134)

and therefore the solution of the equations is

x = [U]^{-1} [L]^{-1} b = \begin{bmatrix} -5/2 \\ 3/2 \\ -1/2 \\ -2 \end{bmatrix}.    (9.135)
Example 263 ★ LU factorization method compared to other methods.
Every nonsingular matrix [A] can be decomposed into lower and upper triangular matrices, [A] = [L][U]. Then, the solution of a set of equations [A] x = b is equivalent to

[L][U] x = b.    (9.136)

Multiplying both sides by L^{-1} shows that

[U] x = [L]^{-1} b    (9.137)

and the problem is broken into two new sets of equations,

[L] y = b    (9.138)

and

[U] x = y.    (9.139)
The computational time required to decompose [A] into [L][U] is proportional to n^3/3, where n is the number of equations. The computational time for solving each of [L] y = b and [U] x = y is proportional to n^2/2. Therefore, the total computational time for solving a set of equations by the LU factorization method is proportional to n^2 + n^3/3. The Gaussian elimination method takes a computational time proportional to n^2/2 + n^3/3: forward elimination takes a time proportional to n^3/3, and back substitution takes a time proportional to n^2/2. On the other hand, the total computational time required to invert a matrix using the LU factorization method is proportional to 4n^3/3, whereas the Gaussian elimination method needs n^4/3 + n^3/2, and

\frac{n^4}{3} + \frac{n^3}{2} > \frac{4n^3}{3}    for n > 2.    (9.140)

FIGURE 9.1. The number of calculations for the Gaussian elimination method minus that of the LU factorization method, as a function of the size of the matrix.

Figure 9.1 depicts a plot of the function G - LU = n^4/3 + n^3/2 - 4n^3/3 and shows how fast the number of calculations for Gaussian elimination increases compared to the LU factorization method. As an example, for a 6 \times 6 matrix inversion, we need 540 calculations with the Gaussian elimination method, compared to 288 calculations with the LU factorization method.
Example 264 ★ Partitioning inverse method.
Assume that a matrix [T] can be partitioned into

[T] = \begin{bmatrix} A & B \\ C & D \end{bmatrix}    (9.141)

then T^{-1} can be calculated by

T^{-1} = \begin{bmatrix} E & F \\ G & H \end{bmatrix}    (9.142)

where

[E] = \left[ A - B D^{-1} C \right]^{-1}    (9.143)

[H] = \left[ D - C A^{-1} B \right]^{-1}    (9.144)

[F] = -A^{-1} B H    (9.145)

[G] = -D^{-1} C E.    (9.146)

Sometimes this is a shortcut inverse method.
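A sketch of the partitioned inverse with 2 \times 2 blocks in exact arithmetic; all helper names are illustrative, and the formulas assume A, D, and both Schur complements are invertible:

```python
from fractions import Fraction as F

def inv2(M):
    """Exact inverse of a 2x2 block via the adjugate."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matsub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def neg(X):
    return [[-x for x in row] for row in X]

def partitioned_inverse(A, B, C, D):
    """Inverse of [[A, B], [C, D]] by the block formulas (9.143)-(9.146)."""
    E = inv2(matsub(A, matmul(matmul(B, inv2(D)), C)))   # (A - B D^-1 C)^-1
    H = inv2(matsub(D, matmul(matmul(C, inv2(A)), B)))   # (D - C A^-1 B)^-1
    Fb = neg(matmul(matmul(inv2(A), B), H))              # -A^-1 B H
    G = neg(matmul(matmul(inv2(D), C), E))               # -D^-1 C E
    return E, Fb, G, H
```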
Example 265 ★ Analytic inversion method.
If the n \times n matrix [A] = [a_{ij}] is nonsingular, that is \det(A) \ne 0, we may compute the inverse A^{-1} by dividing the adjoint matrix A^a by the determinant of [A]:

A^{-1} = \frac{A^a}{\det(A)}    (9.147)

The adjoint, or adjugate, matrix of the matrix [A] is the transpose of the cofactor matrix of [A],

A^a = A^{cT}.    (9.148)

The cofactor matrix of [A], denoted by A^c, is made from [A] by replacing each of its elements by its cofactor. The cofactor associated with the element a_{ij} is defined by

A^c_{ij} = (-1)^{i+j} A_{ij}    (9.149)

where A_{ij} is the ij-minor of [A]. Associated with each element a_{ij} of the matrix [A], there exists a minor A_{ij}, which is the value of the determinant of the submatrix obtained by deleting row i and column j of [A]. The determinant of [A] is calculated by

\det(A) = \sum_{j=1}^{n} a_{ij} A^c_{ij}.    (9.150)

Therefore, if

[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}    (9.151)

then the elements of the adjoint matrix A^a are

A^a_{11} = A^c_{11} = (-1)^2 \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}    (9.152)

A^a_{21} = A^c_{12} = (-1)^3 \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}    (9.153)

\vdots

A^a_{33} = A^c_{33} = (-1)^6 \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}    (9.154)

and the determinant of [A] is

\det(A) = a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} + a_{12}a_{31}a_{23} + a_{21}a_{13}a_{32} - a_{13}a_{22}a_{31}.    (9.155)

As an example, consider the 3 \times 3 matrix

[A] = \begin{bmatrix} 3 & 4 & 8 \\ 7 & 2 & 5 \\ 9 & 6 & 1 \end{bmatrix}.    (9.156)

The associated adjoint matrix for [A] is

A^a = A^{cT} = \begin{bmatrix} -28 & 38 & 24 \\ 44 & -69 & 18 \\ 4 & 41 & -22 \end{bmatrix}^T = \begin{bmatrix} -28 & 44 & 4 \\ 38 & -69 & 41 \\ 24 & 18 & -22 \end{bmatrix}    (9.157)

and the determinant of [A] is

\det[A] = 260    (9.158)

and therefore,

[A]^{-1} = \frac{A^a}{\det(A)} = \begin{bmatrix} -7/65 & 11/65 & 1/65 \\ 19/130 & -69/260 & 41/260 \\ 6/65 & 9/130 & -11/130 \end{bmatrix}.    (9.159)
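Equations (9.147) through (9.150) can be sketched as follows; `inv3_adjugate` is an illustrative name and the sketch is limited to 3 \times 3 matrices:

```python
from fractions import Fraction as F

def det3(A):
    """Determinant of a 3x3 matrix, expanded as in Equation (9.155)."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = A
    return (a11*a22*a33 - a11*a23*a32 - a12*a21*a33
            + a12*a31*a23 + a21*a13*a32 - a13*a22*a31)

def inv3_adjugate(A):
    """Inverse of a 3x3 matrix as adjugate / determinant, Equation (9.147)."""
    d = det3(A)
    idx = range(3)
    # cofactor: signed determinant of the minor deleting row i, column j
    def cof(i, j):
        rows = [r for r in idx if r != i]
        cols = [c for c in idx if c != j]
        m = (A[rows[0]][cols[0]] * A[rows[1]][cols[1]]
             - A[rows[0]][cols[1]] * A[rows[1]][cols[0]])
        return (-1) ** (i + j) * m
    # adjugate = transpose of the cofactor matrix, then divide by det
    return [[F(cof(j, i), d) for j in idx] for i in idx]

A = [[3, 4, 8], [7, 2, 5], [9, 6, 1]]
print(det3(A))              # 260
print(inv3_adjugate(A)[0])  # [Fraction(-7, 65), Fraction(11, 65), Fraction(1, 65)]
```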
Example 266 ★ Cayley-Hamilton matrix inversion.
The Cayley-Hamilton theorem states that every square matrix satisfies its own characteristic equation. The characteristic equation of an n \times n matrix [A] = [a_{ij}] is

\det(A - \lambda I) = |A - \lambda I| = P(\lambda) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0 = 0.    (9.160)

Hence, the characteristic equation of an n \times n matrix is a polynomial of degree n. Based on the Cayley-Hamilton theorem, we have

P(A) = A^n + a_{n-1}A^{n-1} + \cdots + a_1 A + a_0 I = 0.    (9.161)

Multiplying both sides of this polynomial by A^{-1}, which exists when a_0 \ne 0 (that is, when [A] is nonsingular), and solving for A^{-1} provides

A^{-1} = -\frac{1}{a_0} \left[ A^{n-1} + a_{n-1}A^{n-2} + \cdots + a_2 A + a_1 I \right].    (9.162)

Therefore, if

[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}    (9.163)

and

\det(A) = \sum_{j=1}^{n} a_{ij} A^c_{ij}    (9.164)

then the characteristic equation of [A] is

P(\lambda) = \lambda^3 + (-a_{11} - a_{22} - a_{33})\lambda^2
+ (a_{11}a_{22} - a_{12}a_{21} + a_{11}a_{33} - a_{13}a_{31} + a_{22}a_{33} - a_{23}a_{32})\lambda
+ a_{11}a_{23}a_{32} + a_{12}a_{21}a_{33} + a_{13}a_{22}a_{31} - a_{11}a_{22}a_{33} - a_{12}a_{31}a_{23} - a_{21}a_{13}a_{32} = 0.    (9.165)

As an example, consider the 3 \times 3 matrix

[A] = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 6 & 7 \\ 5 & 8 & 9 \end{bmatrix}    (9.166)

with the characteristic equation

\lambda^3 - 16\lambda^2 - 10\lambda - 2 = 0.    (9.167)

Because [A] satisfies its own characteristic equation, we have

A^3 - 16A^2 - 10A - 2I = 0.    (9.168)

Multiplying both sides by A^{-1},

A^{-1}A^3 - 16A^{-1}A^2 - 10A^{-1}A = 2A^{-1}    (9.169)

provides the inverse matrix:

A^{-1} = \frac{1}{2} \left( A^2 - 16A - 10I \right)
= \frac{1}{2} \left[ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 6 & 7 \\ 5 & 8 & 9 \end{bmatrix}^2 - 16 \begin{bmatrix} 1 & 2 & 3 \\ 4 & 6 & 7 \\ 5 & 8 & 9 \end{bmatrix} - 10 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \right]
= \begin{bmatrix} -1 & 3 & -2 \\ -1/2 & -3 & 5/2 \\ 1 & 1 & -1 \end{bmatrix}    (9.170)
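A sketch of Cayley-Hamilton inversion for a 3 \times 3 matrix, computing the characteristic coefficients from Equation (9.165); `inv3_cayley` is an illustrative name and the sketch assumes a_0 \ne 0:

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def inv3_cayley(A):
    """Inverse of a 3x3 matrix from lambda^3 + a2 lambda^2 + a1 lambda + a0 = 0
    via A^-1 = -(1/a0)(A^2 + a2 A + a1 I), Equation (9.162)."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = A
    a2 = -(a11 + a22 + a33)                        # minus the trace
    a1 = (a11*a22 - a12*a21 + a11*a33 - a13*a31    # sum of principal 2x2 minors
          + a22*a33 - a23*a32)
    a0 = (a11*a23*a32 + a12*a21*a33 + a13*a22*a31  # minus det(A)
          - a11*a22*a33 - a12*a31*a23 - a21*a13*a32)
    A2 = matmul(A, A)
    return [[F(-(A2[i][j] + a2*A[i][j] + a1*(i == j)), a0) for j in range(3)]
            for i in range(3)]

A = [[1, 2, 3], [4, 6, 7], [5, 8, 9]]
print(inv3_cayley(A)[0])  # [Fraction(-1, 1), Fraction(3, 1), Fraction(-2, 1)]
```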
9.3 Nonlinear Algebraic Equations
The inverse kinematics problem leads to a set of nonlinear coupled algebraic equations. Consider a set of nonlinear algebraic equations

f(q) = 0    (9.171)

or

f_1(q_1, q_2, \cdots, q_n) = 0
f_2(q_1, q_2, \cdots, q_n) = 0
\vdots
f_n(q_1, q_2, \cdots, q_n) = 0    (9.172)

where the function and variable vectors are

f = \begin{bmatrix} f_1(q) \\ f_2(q) \\ \vdots \\ f_n(q) \end{bmatrix},  q = \begin{bmatrix} q_1 \\ q_2 \\ \vdots \\ q_n \end{bmatrix}.    (9.173)

To solve the set of equations (9.171), we begin with a guess solution vector q^{(0)} and employ the following iteration formula to search for a better solution:

q^{(i+1)} = q^{(i)} - J^{-1}(q^{(i)}) \, f(q^{(i)})    (9.174)

where J(q^{(i)}) is the Jacobian matrix of the system of equations evaluated at q = q^{(i)},

[J] = \left[ \frac{\partial f_i}{\partial q_j} \right].    (9.175)
Utilizing the iteration formula (9.174), we can approach an exact solution as closely as desired. This iteration method, based on a guessed solution, is called the Newton-Raphson method; it is the most common method for solving a set of nonlinear algebraic equations. A set of nonlinear equations usually has multiple solutions, and the main disadvantage of the Newton-Raphson method is that the solution it finds may not be the solution of interest. The solution that the method provides depends strongly on the initial estimate. Hence, a good initial estimate helps to find the proper solution.
Proof. Let us define the increment \delta^{(i)} as

\delta^{(i)} = q^{(i+1)} - q^{(i)}    (9.176)

and expand the set of equations around q^{(i)} to first order:

f(q^{(i+1)}) \approx f(q^{(i)}) + J(q^{(i)}) \, \delta^{(i)}.    (9.177)

Assume that q^{(i+1)} is the exact solution of Equation (9.171). Then f(q^{(i+1)}) = 0 and we may use

J(q^{(i)}) \, \delta^{(i)} = -f(q^{(i)})    (9.178)

to find the increment \delta^{(i)},

\delta^{(i)} = -J^{-1}(q^{(i)}) \, f(q^{(i)})    (9.179)

and determine the solution

q^{(i+1)} = q^{(i)} + \delta^{(i)} = q^{(i)} - J^{-1}(q^{(i)}) \, f(q^{(i)}).    (9.180)
The Newton-Raphson iteration method can be set up as an algorithm for easier application.

Algorithm 9.3. Newton-Raphson iteration method for f(q) = 0.

1. Set the initial counter i = 0.

2. Evaluate an estimated solution q = q^{(i)}.

3. Calculate the Jacobian matrix [J] = [\partial f_i / \partial q_j] at q = q^{(i)}.

4. Solve for \delta^{(i)} from the set of linear equations J(q^{(i)}) \delta^{(i)} = -f(q^{(i)}).

5. If |\delta^{(i)}| < \epsilon, where \epsilon is an arbitrary tolerance, then q^{(i)} is the solution. Otherwise calculate q^{(i+1)} = q^{(i)} + \delta^{(i)}.

6. Set i = i + 1 and return to step 3.
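Algorithm 9.3 can be sketched for a pair of equations in two unknowns; the linear step 4 is solved by Cramer's rule, and the test system x^2 + y^2 = 4, xy = 1 is purely illustrative:

```python
import math

def newton_raphson(f, jac, q0, tol=1e-10, itmax=50):
    """Algorithm 9.3 for two equations in two unknowns:
    solve J(q) delta = -f(q) and update q until |delta| < tol."""
    q = list(q0)
    for _ in range(itmax):
        f1, f2 = f(q)
        (j11, j12), (j21, j22) = jac(q)
        det = j11 * j22 - j12 * j21          # Cramer solve of J delta = -f
        d1 = (-f1 * j22 + f2 * j12) / det
        d2 = (-f2 * j11 + f1 * j21) / det
        q = [q[0] + d1, q[1] + d2]
        if math.hypot(d1, d2) < tol:
            return q
    raise RuntimeError("did not converge")

# illustrative system: x^2 + y^2 = 4 and x*y = 1, with analytic Jacobian
f = lambda q: (q[0]**2 + q[1]**2 - 4, q[0] * q[1] - 1)
jac = lambda q: ((2 * q[0], 2 * q[1]), (q[1], q[0]))
x, y = newton_raphson(f, jac, [2.0, 0.3])
print(round(x * y, 9), round(x**2 + y**2, 9))  # 1.0 4.0
```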
Example 267 Inverse kinematics problem for a 2R planar robot.
The endpoint of a 2R planar manipulator can be described by two nonlinear algebraic equations,

\begin{bmatrix} f_1(\theta_1, \theta_2) \\ f_2(\theta_1, \theta_2) \end{bmatrix} = \begin{bmatrix} l_1 \cos\theta_1 + l_2 \cos(\theta_1 + \theta_2) - X \\ l_1 \sin\theta_1 + l_2 \sin(\theta_1 + \theta_2) - Y \end{bmatrix} = 0    (9.181)

Assuming

l_1 = l_2 = 1    (9.182)

and the endpoint at

\begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}    (9.183)

we are looking for the associated joint variables

\theta = \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}    (9.184)

that provide the desired position of the endpoint. Due to the simplicity of this system of equations, the Jacobian of the equations and its inverse can be found in closed form:

J(\theta) = \left[ \frac{\partial f_i}{\partial \theta_j} \right] = \begin{bmatrix} \partial f_1 / \partial \theta_1 & \partial f_1 / \partial \theta_2 \\ \partial f_2 / \partial \theta_1 & \partial f_2 / \partial \theta_2 \end{bmatrix} = \begin{bmatrix} -l_1 \sin\theta_1 - l_2 \sin(\theta_1 + \theta_2) & -l_2 \sin(\theta_1 + \theta_2) \\ l_1 \cos\theta_1 + l_2 \cos(\theta_1 + \theta_2) & l_2 \cos(\theta_1 + \theta_2) \end{bmatrix}    (9.185)

J^{-1} = \frac{-1}{l_1 l_2 \sin\theta_2} \begin{bmatrix} -l_2 \cos(\theta_1 + \theta_2) & -l_2 \sin(\theta_1 + \theta_2) \\ l_1 \cos\theta_1 + l_2 \cos(\theta_1 + \theta_2) & l_1 \sin\theta_1 + l_2 \sin(\theta_1 + \theta_2) \end{bmatrix}    (9.186)

The Newton-Raphson iteration algorithm may now be started by setting i = 0 and evaluating an estimated solution,

q^{(0)} = \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}^{(0)} = \begin{bmatrix} \pi/3 \\ \pi/3 \end{bmatrix}.    (9.187)

Therefore,

J\left(\frac{\pi}{3}, \frac{\pi}{3}\right) = \begin{bmatrix} -\sqrt{3} & -\frac{1}{2}\sqrt{3} \\ 0 & -\frac{1}{2} \end{bmatrix}    (9.188)

f\left(\frac{\pi}{3}, \frac{\pi}{3}\right) = \begin{bmatrix} -1 \\ \sqrt{3} - 1 \end{bmatrix}    (9.189)

\delta^{(0)} = -J^{-1}(\theta^{(0)}) \, f(\theta^{(0)}) = \begin{bmatrix} -1.3094 \\ 1.4641 \end{bmatrix}    (9.190)

and a better solution is

\begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}^{(1)} = \begin{bmatrix} \pi/3 \\ \pi/3 \end{bmatrix} + \begin{bmatrix} -1.3094 \\ 1.4641 \end{bmatrix} = \begin{bmatrix} -0.2622 \\ 2.5113 \end{bmatrix}.    (9.191)
In the next iterations we find

\begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}^{(2)} = \begin{bmatrix} -0.2622 \\ 2.5113 \end{bmatrix} + \begin{bmatrix} -0.06952 \\ -0.80337 \end{bmatrix} = \begin{bmatrix} -0.3317 \\ 1.7079 \end{bmatrix}    (9.192)

\begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}^{(3)} = \begin{bmatrix} -0.3317 \\ 1.7079 \end{bmatrix} + \begin{bmatrix} 0.31414 \\ -0.068348 \end{bmatrix} = \begin{bmatrix} -0.0176 \\ 1.63958 \end{bmatrix}    (9.193)

\begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}^{(4)} = \begin{bmatrix} -0.0176 \\ 1.63958 \end{bmatrix} + \begin{bmatrix} 0.016275 \\ -0.06739 \end{bmatrix} = \begin{bmatrix} -0.0013 \\ 1.5722 \end{bmatrix}    (9.194)

\begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}^{(5)} = \begin{bmatrix} -0.0013 \\ 1.5722 \end{bmatrix} + \begin{bmatrix} 0.1304 \times 10^{-2} \\ -0.139 \times 10^{-2} \end{bmatrix} = \begin{bmatrix} -0.295 \times 10^{-8} \\ 1.571 \end{bmatrix}    (9.195)

\begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}^{(6)} = \begin{bmatrix} -0.3 \times 10^{-8} \\ 1.571 \end{bmatrix} + \begin{bmatrix} 0.29 \times 10^{-8} \\ -0.85 \times 10^{-6} \end{bmatrix} = \begin{bmatrix} -0.49 \times 10^{-10} \\ 1.571 \end{bmatrix}    (9.196)

\begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}^{(7)} = \begin{bmatrix} -0.49 \times 10^{-10} \\ 1.571 \end{bmatrix} + \begin{bmatrix} -0.41 \times 10^{-19} \\ -0.2 \times 10^{-9} \end{bmatrix} = \begin{bmatrix} -0.49 \times 10^{-10} \\ 1.571 \end{bmatrix}    (9.197)

and this answer is close enough to the exact elbow-down answer

\begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix} = \begin{bmatrix} 0 \\ \pi/2 \end{bmatrix}.    (9.198)
Example 268 ★ Alternative and expanded proof of the Newton-Raphson iteration method.
Consider the following set of equations, in which we are searching for the exact solutions q_j:

y_i = f_i(q_j),    i = 1, \cdots, n,  j = 1, \cdots, m    (9.199)

where m is the number of unknowns and n is the number of equations. Assume that, for a given y_i, an approximate solution q^F_j is available. The difference between the exact solution q_j and the approximate solution q^F_j is

\delta_j = q_j - q^F_j    (9.200)

where the value of the equations for the approximate solution q^F_j is denoted by

Y_i = f_i(q^F_j).    (9.201)

The iteration method is based on the minimization of \delta_j so that the solution of

y_i = f_i(q^F_j + \delta_j)    (9.202)

is as close as possible to the exact solution. A first-order Taylor expansion of this equation is

y_i = f_i(q^F_j) + \sum_{j=1}^{m} \frac{\partial f_i}{\partial q_j} \delta_j + O(\delta_j^2).    (9.203)

We may define

Y_i = f_i(q^F_j)    (9.204)

and the residual quantity

r_i = y_i - Y_i    (9.205)

to write

r = J \delta + O(\delta^2)    (9.206)

where J is the Jacobian matrix of the set of equations,

J = \left[ \frac{\partial f_i}{\partial q_j} \right].    (9.207)

The method of solution depends on the relative values of m and n; three cases must be considered.

1- m = n
Provided that the Jacobian matrix remains nonsingular, the linearized equation

r = J \delta    (9.208)

possesses a unique solution, and the Newton-Raphson technique may then be utilized to solve Equation (9.199). The stepwise procedure is illustrated in Figure 9.2. The effectiveness of the procedure depends on the number of iterations to be performed, which depends on the initial estimate q^F_j and on the dimension of the Jacobian matrix. Since the solution of nonlinear equations is not unique, the method may generate different sets of solutions depending on the
FIGURE 9.2. Newton-Raphson iteration method for solving a set of nonlinear algebraic equations: starting from q^{(0)} = q^F, compute the residual r^{(k)} = y - f(q^{(k)}), solve r^{(k)} = J \delta^{(k)}, update q^{(k+1)} = q^{(k)} + \delta^{(k)}, and stop when |\delta^{(k)}| < \epsilon.
initial guess. Furthermore, convergence may not occur if the initial estimate of the solution falls outside the convergence domain of the algorithm. In this case, much effort is needed to attain a numerical solution.

2- m < n
This is the overdetermined case, for which no solution exists in general, because the number of unknowns (such as the number of joints in a robot) is not sufficient to generate a solution (such as an arbitrary configuration of the end-effector). A solution can, however, be generated that minimizes an error (such as a position error). Consider the problem

\min \left( F = \frac{1}{2} \sum_{i=1}^{n} w_i \left[ y_i - f_i(q_j) \right]^2 \right)    (9.209)

or, in matrix form,

\min \left( F = \frac{1}{2} \left[ y - f(q) \right]^T W \left[ y - f(q) \right] \right)    (9.210)

where

W = \mathrm{diag}(w_1 \cdots w_n)    (9.211)

is a set of weighting factors giving a relative importance to each of the kinematic equations. The error is minimum when

\frac{\partial F}{\partial q_j} = -\sum_i \frac{\partial f_i}{\partial q_j} w_i \left[ y_i - f_i(q_j) \right] = 0    (9.212)

or, in matrix form,

J^T W \left[ y - f(q) \right] = 0.    (9.213)

A Taylor expansion of the bracketed factor shows that the linear correction to an estimated solution q^F satisfies

J^T W \left[ y - f(q^F) \right] - J^T W J \, \delta = 0.    (9.214)

The correction equation is

J^T W J \, \delta = J^T W r    (9.215)

where r is the residual vector defined by Equation (9.205). The weighting matrix W is positive definite and diagonal; therefore, the matrix J^T W J is symmetric and, provided J has full rank, invertible. It provides the generalized inverse of the Jacobian matrix,

J^{-1} = \left[ J^T W J \right]^{-1} J^T W    (9.216)

which verifies the property

J^{-1} J = I.    (9.217)
When the Jacobian is invertible, the solution of (9.210) is the solution of the nonlinear system (9.199).

3- m > n
This is the redundant case, for which an infinity of solutions is generally available. An appropriate solution can be selected under the condition that it is optimal in some sense. For example, let us find a solution of (9.199) that minimizes the deviation from a given reference configuration q^{(0)}. The problem may then be formulated as finding the minimum of a constrained function,

\min \left( F = \frac{1}{2} \left[ q - q^{(0)} \right]^T W \left[ q - q^{(0)} \right] \right)    (9.218)

subject to

y - f(q) = 0.    (9.219)

Using the technique of Lagrange multipliers, the problem (9.218)-(9.219) may be replaced by the equivalent problem

\frac{\partial G}{\partial q} = 0    (9.220)

\frac{\partial G}{\partial \lambda} = 0    (9.221)

with the definition of the functional

G(q, \lambda) = \frac{1}{2} \left[ q - q^{(0)} \right]^T W \left[ q - q^{(0)} \right] + \lambda^T \left[ y - f(q) \right].    (9.222)

It leads to a system of m + n equations with m + n unknowns,

W \left[ q - q^{(0)} \right] - J^T \lambda = 0    (9.223)

y - f(q) = 0.    (9.224)

Linearization of Equations (9.223) and (9.224) provides the system of equations for the displacement corrections and the variations of the Lagrange multipliers,

W \delta - J^T \lambda' = 0    (9.225)

J \delta = r    (9.226)

where \lambda' is the increment of \lambda. Substitution of the solution \delta obtained from the first equation, (9.225), into the second one, (9.226), yields

J W^{-1} J^T \lambda' = r    (9.227)

or, in terms of the displacement correction,

\delta q = W^{-1} J^T \left( J W^{-1} J^T \right)^{-1} r.    (9.228)

The matrix

J^+ = W^{-1} J^T \left( J W^{-1} J^T \right)^{-1}    (9.229)

has the meaning of a pseudo-inverse of the singular Jacobian matrix J. It verifies the identity

J J^+ = I    (9.230)

and, whenever J is invertible,

J^+ = J^{-1}.    (9.231)
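For the redundant case, the correction of Equation (9.228) with W = I reduces to the minimum-norm step \delta q = J^T (J J^T)^{-1} r. A sketch for a single equation in two unknowns; the constraint q_1 + 2 q_2 = y is purely illustrative:

```python
def min_norm_correction(J, r):
    """Minimum-norm joint correction delta = J+ r with J+ = J^T (J J^T)^-1,
    for one equation in two unknowns (the W = I case of Equation 9.229)."""
    j1, j2 = J
    s = j1 * j1 + j2 * j2      # J J^T is the scalar j1^2 + j2^2
    return [j1 * r / s, j2 * r / s]

# one constraint q1 + 2*q2 = y with residual r = y - f(q) = 1
delta = min_norm_correction([1.0, 2.0], 1.0)
print(delta)  # [0.2, 0.4]
```

Among all corrections satisfying J \delta = r, this one has the smallest Euclidean norm, which is the sense in which the redundancy is resolved.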
9.4 ★ Jacobian Matrix From Link Transformation Matrices
In robot motion, we need to calculate the Jacobian matrix in a very short time for every configuration of the robot. The Jacobian matrix of a robot can be found more easily, and in an algorithmic way, by evaluating the columns of the Jacobian:

J = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix} = \begin{bmatrix} {}^0\tilde{k}_0 \, {}^0_0 d_n & {}^0\tilde{k}_1 \, {}^0_1 d_n & \cdots & {}^0\tilde{k}_{n-1} \, {}^0_{n-1} d_n \\ {}^0 k_0 & {}^0 k_1 & \cdots & {}^0 k_{n-1} \end{bmatrix}    (9.232)

= \begin{bmatrix} {}^0 k_0 \times {}^0_0 d_n & {}^0 k_1 \times {}^0_1 d_n & \cdots & {}^0 k_{n-1} \times {}^0_{n-1} d_n \\ {}^0 k_0 & {}^0 k_1 & \cdots & {}^0 k_{n-1} \end{bmatrix}    (9.233)

where c_i is called the Jacobian generating vector,

c_i = \begin{bmatrix} {}^0\tilde{k}_{i-1} \, {}^0_{i-1} d_n \\ {}^0 k_{i-1} \end{bmatrix} = \begin{bmatrix} {}^0 k_{i-1} \times {}^0_{i-1} d_n \\ {}^0 k_{i-1} \end{bmatrix}    (9.234)

and {}^0 k_{i-1} is the vector associated with the skew-symmetric matrix {}^0\tilde{k}_{i-1}. This method is based solely on the link transformation matrices found in forward kinematics and does not involve differentiation. The matrix {}^0\tilde{k}_{i-1} is

{}^0\tilde{k}_{i-1} = {}^0 R_{i-1} \, {}^{i-1}\tilde{k}_{i-1} \, {}^0 R_{i-1}^T    (9.235)

which means {}^0 k_{i-1} is a unit vector in the direction of joint axis i, expressed in the global coordinate frame. For a revolute joint, we have

c_i = \begin{bmatrix} {}^0\tilde{k}_{i-1} \, {}^0_{i-1} d_n \\ {}^0 k_{i-1} \end{bmatrix}    (9.236)

and for a prismatic joint we have

c_i = \begin{bmatrix} {}^0 k_{i-1} \\ 0 \end{bmatrix}.    (9.237)
Proof. The transformation between two coordinate frames,

{}^G r = {}^G T_B \, {}^B r    (9.238)

is based on a transformation matrix that is a combination of a rotation matrix R and a position vector d,

T = \begin{bmatrix} R & d \\ 0 & 1 \end{bmatrix}.    (9.239)

Introducing the infinitesimal transformation matrix

\delta T = \begin{bmatrix} \delta R & \delta d \\ 0 & 0 \end{bmatrix}    (9.240)

leads to

\delta T \, T^{-1} = \begin{bmatrix} \widetilde{\delta\theta} & \delta v \\ 0 & 0 \end{bmatrix}    (9.241)

where

T^{-1} = \begin{bmatrix} R^T & -R^T d \\ 0 & 1 \end{bmatrix}    (9.242)

and therefore \widetilde{\delta\theta} is the matrix of infinitesimal rotations,

\widetilde{\delta\theta} = \delta R \, R^T = \begin{bmatrix} 0 & -\delta\theta_z & \delta\theta_y \\ \delta\theta_z & 0 & -\delta\theta_x \\ -\delta\theta_y & \delta\theta_x & 0 \end{bmatrix}    (9.243)

and \delta v is a vector related to infinitesimal displacements,

\delta v = \delta d - \widetilde{\delta\theta} \, d.    (9.244)
Let us define a \(6\times1\) coordinate vector describing the rotational and translational coordinates of the end-effector,
\[
\mathbf{X} = \begin{bmatrix} \mathbf{d} \\ \boldsymbol{\theta} \end{bmatrix},
\tag{9.245}
\]
whose variation is
\[
\delta\mathbf{X} = \begin{bmatrix} \delta\mathbf{d} \\ \delta\boldsymbol{\theta} \end{bmatrix}.
\tag{9.246}
\]
The Jacobian matrix \(\mathbf{J}\) is then the matrix that maps differential joint variables to differential end-effector motion,
\[
\delta\mathbf{X} = \frac{\partial\mathbf{T}(\mathbf{q})}{\partial\mathbf{q}}\,\delta\mathbf{q} = \mathbf{J}\,\delta\mathbf{q}.
\tag{9.247}
\]
The transformation matrix \(T\), generated in forward kinematics, is a function of the joint coordinates,
\[
{}^{0}T_n = T(\mathbf{q}) = {}^{0}T_1(q_1)\,{}^{1}T_2(q_2)\,{}^{2}T_3(q_3)\,{}^{3}T_4(q_4)\cdots{}^{n-1}T_n(q_n);
\tag{9.248}
\]
therefore, the infinitesimal transformation matrix is
\[
\delta T = \sum_{i=1}^{n} {}^{0}T_1(q_1)\,{}^{1}T_2(q_2)\cdots
\frac{\delta\left({}^{i-1}T_i\right)}{\delta q_i}
\cdots{}^{n-1}T_n(q_n)\;\delta q_i.
\tag{9.249}
\]
Interestingly, the partial derivative of a link transformation matrix can be arranged in the form
\[
\frac{\delta\left({}^{i-1}T_i\right)}{\delta q_i} = {}^{i-1}\Delta_{i-1}\;{}^{i-1}T_i
\tag{9.250}
\]
where, according to the DH transformation matrix (5.11),
\[
{}^{i-1}T_i =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}
= \begin{bmatrix} {}^{i-1}R_i & {}^{i-1}\mathbf{d}_i \\ 0 & 1 \end{bmatrix},
\tag{9.251}
\]
we can find the velocity coefficient matrix \(\Delta\) for a revolute joint to be
\[
{}^{i-1}\Delta_{i-1} = \Delta_R =
\begin{bmatrix} {}^{i-1}\tilde{k}_{i-1} & 0 \\ 0 & 0 \end{bmatrix}
= \begin{bmatrix}
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
\tag{9.252}
\]
and for a prismatic joint to be
\[
{}^{i-1}\Delta_{i-1} = \Delta_P =
\begin{bmatrix} 0 & {}^{i-1}\hat{k}_{i-1} \\ 0 & 0 \end{bmatrix}
= \begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0
\end{bmatrix}.
\tag{9.253}
\]
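Relation (9.250) with the revolute coefficient matrix (9.252) can be checked numerically: the derivative of the DH matrix (9.251) with respect to its joint angle must equal the constant matrix times the DH matrix itself. A minimal sketch, with sample DH parameters assumed for illustration:

```python
from math import cos, sin

def dh(theta, alpha, a, d):
    """DH link transformation matrix, eq. (9.251)."""
    ct, st, ca, sa = cos(theta), sin(theta), cos(alpha), sin(alpha)
    return [[ct, -st*ca,  st*sa, a*ct],
            [st,  ct*ca, -ct*sa, a*st],
            [0.0,    sa,     ca,     d],
            [0.0,   0.0,    0.0,  1.0]]

DELTA_R = [[0.0, -1.0, 0.0, 0.0],   # revolute coefficient matrix, eq. (9.252)
           [1.0,  0.0, 0.0, 0.0],
           [0.0,  0.0, 0.0, 0.0],
           [0.0,  0.0, 0.0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

theta, alpha, a, d, h = 0.6, 0.4, 0.5, 0.2, 1e-6
# central finite difference of the DH matrix with respect to theta
Tp, Tm = dh(theta + h, alpha, a, d), dh(theta - h, alpha, a, d)
numeric  = [[(Tp[i][j] - Tm[i][j]) / (2*h) for j in range(4)] for i in range(4)]
analytic = matmul(DELTA_R, dh(theta, alpha, a, d))
err = max(abs(numeric[i][j] - analytic[i][j])
          for i in range(4) for j in range(4))
```

The error is at the level of the finite-difference truncation, confirming that premultiplication by \(\Delta_R\) replaces differentiation.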
We may now express each term of (9.249) in the form
\[
{}^{0}T_1(q_1)\,{}^{1}T_2(q_2)\cdots
\frac{\delta\left({}^{i-1}T_i\right)}{\delta q_i}
\cdots{}^{n-1}T_n(q_n) = C_i\,T
\tag{9.254}
\]
where
\[
\begin{aligned}
C_i &= \left[{}^{0}T_1\,{}^{1}T_2\cdots{}^{i-2}T_{i-1}\right]
\frac{\delta\left({}^{i-1}T_i\right)}{\delta q_i}
\left[{}^{0}T_1\,{}^{1}T_2\cdots{}^{i-1}T_i\right]^{-1} \\
&= \left[{}^{0}T_1\,{}^{1}T_2\cdots{}^{i-2}T_{i-1}\right]
{}^{i-1}\Delta_{i-1}\;{}^{i-1}T_i
\left[{}^{0}T_1\,{}^{1}T_2\cdots{}^{i-1}T_i\right]^{-1} \\
&= \left[{}^{0}T_1\,{}^{1}T_2\cdots{}^{i-2}T_{i-1}\right]
{}^{i-1}\Delta_{i-1}
\left[{}^{0}T_1\,{}^{1}T_2\cdots{}^{i-2}T_{i-1}\right]^{-1} \\
&= {}^{0}T_1\,{}^{1}T_2\cdots{}^{i-2}T_{i-1}\;{}^{i-1}\Delta_{i-1}\;
{}^{i-2}T_{i-1}^{-1}\cdots{}^{1}T_2^{-1}\,{}^{0}T_1^{-1}.
\end{aligned}
\tag{9.255}
\]
The matrix \(C_i\) can be rearranged, for a revolute joint, in the form
\[
C_i = \begin{bmatrix}
{}^{0}\tilde{k}_{i-1} & {}^{0}\tilde{k}_{i-1}\,{}^{0}\mathbf{d}_n - {}^{0}\tilde{k}_{i-1}\,{}^{0}\mathbf{d}_{i-1} \\
0 & 0
\end{bmatrix}
\tag{9.256}
\]
and, for a prismatic joint, in the form
\[
C_i = \begin{bmatrix} 0 & {}^{0}\hat{k}_{i-1} \\ 0 & 0 \end{bmatrix}.
\tag{9.257}
\]
\(C_i\) has six independent terms that can be combined in a \(6\times1\) vector. This vector makes the \(i\)th column of the Jacobian matrix and is called the generating vector \(\mathbf{c}_i\). The Jacobian generating vector for a revolute joint is
\[
\mathbf{c}_i = \begin{bmatrix} {}^{0}\tilde{k}_{i-1}\,{}^{0}_{i-1}\mathbf{d}_n \\ {}^{0}\hat{k}_{i-1} \end{bmatrix}
\tag{9.258}
\]
and for a prismatic joint is
\[
\mathbf{c}_i = \begin{bmatrix} {}^{0}\hat{k}_{i-1} \\ 0 \end{bmatrix}.
\tag{9.259}
\]
The position vector \({}^{0}\mathbf{d}_i\) indicates the origin of the coordinate frame \(B_i\) in the base frame \(B_0\). Hence, \({}^{0}_{i-1}\mathbf{d}_n\) indicates the origin of the end-effector coordinate frame \(B_n\) with respect to the coordinate frame \(B_{i-1}\), expressed in the base frame \(B_0\).
Therefore, the Jacobian matrix describing the instantaneous kinematics of the robot can be obtained from
\[
\mathbf{J} = \begin{bmatrix}
{}^{0}\hat{k}_0 \times {}^{0}_{0}\mathbf{d}_n & {}^{0}\hat{k}_1 \times {}^{0}_{1}\mathbf{d}_n & \cdots & {}^{0}\hat{k}_{n-1} \times {}^{0}_{n-1}\mathbf{d}_n \\
{}^{0}\hat{k}_0 & {}^{0}\hat{k}_1 & \cdots & {}^{0}\hat{k}_{n-1}
\end{bmatrix}.
\tag{9.260}
\]
Example 269 Jacobian matrix for articulated robots.
The forward and inverse kinematics of the articulated robot have been analyzed in Example 186 with the following individual transformation matrices:
\[
{}^{0}T_1 = \begin{bmatrix}
c\theta_1 & 0 & s\theta_1 & 0 \\
s\theta_1 & 0 & -c\theta_1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad
{}^{1}T_2 = \begin{bmatrix}
c\theta_2 & -s\theta_2 & 0 & l_2 c\theta_2 \\
s\theta_2 & c\theta_2 & 0 & l_2 s\theta_2 \\
0 & 0 & 1 & d_2 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
\[
{}^{2}T_3 = \begin{bmatrix}
c\theta_3 & 0 & s\theta_3 & 0 \\
s\theta_3 & 0 & -c\theta_3 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad
{}^{3}T_4 = \begin{bmatrix}
c\theta_4 & 0 & -s\theta_4 & 0 \\
s\theta_4 & 0 & c\theta_4 & 0 \\
0 & -1 & 0 & l_3 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
\[
{}^{4}T_5 = \begin{bmatrix}
c\theta_5 & 0 & s\theta_5 & 0 \\
s\theta_5 & 0 & -c\theta_5 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad
{}^{5}T_6 = \begin{bmatrix}
c\theta_6 & -s\theta_6 & 0 & 0 \\
s\theta_6 & c\theta_6 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{9.261}
\]
The articulated robot has 6 DOF and therefore its Jacobian matrix is a \(6\times6\) matrix,
\[
\mathbf{J}(\mathbf{q}) = \begin{bmatrix} \mathbf{c}_1(\mathbf{q}) & \mathbf{c}_2(\mathbf{q}) & \cdots & \mathbf{c}_6(\mathbf{q}) \end{bmatrix},
\tag{9.262}
\]
that relates the translational and angular velocities of the end-effector to the joint velocities \(\dot{\mathbf{q}}\),
\[
\begin{bmatrix} \mathbf{v} \\ \boldsymbol{\omega} \end{bmatrix} = \mathbf{J}(\mathbf{q})\,\dot{\mathbf{q}}.
\tag{9.263}
\]
The \(i\)th column vector \(\mathbf{c}_i(\mathbf{q})\) for a revolute joint is given by
\[
\mathbf{c}_i(\mathbf{q}) = \begin{bmatrix} {}^{0}\hat{k}_{i-1} \times {}^{0}_{i-1}\mathbf{d}_6 \\ {}^{0}\hat{k}_{i-1} \end{bmatrix}
\tag{9.264}
\]
and for a prismatic joint by
\[
\mathbf{c}_i(\mathbf{q}) = \begin{bmatrix} {}^{0}\hat{k}_{i-1} \\ 0 \end{bmatrix}.
\tag{9.265}
\]
Column 1. The first column of the Jacobian matrix has the simplest calculation, since it is based on the contribution of the \(z_0\)-axis and the position of the end-effector frame, \({}^{0}\mathbf{d}_6\). The direction of the \(z_0\)-axis in the base coordinate frame is
\[
{}^{0}\hat{k}_0 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\tag{9.266}
\]
and the position vector of the end-effector frame \(B_6\) is given by \({}^{0}\mathbf{d}_6\), directly determined from \({}^{0}T_6\):
\[
{}^{0}T_6 = {}^{0}T_1\,{}^{1}T_2\,{}^{2}T_3\,{}^{3}T_4\,{}^{4}T_5\,{}^{5}T_6
= \begin{bmatrix} {}^{0}R_6 & {}^{0}\mathbf{d}_6 \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix}
t_{11} & t_{12} & t_{13} & t_{14} \\
t_{21} & t_{22} & t_{23} & t_{24} \\
t_{31} & t_{32} & t_{33} & t_{34} \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{9.267}
\]
\[
{}^{0}\mathbf{d}_6 = \begin{bmatrix} t_{14} \\ t_{24} \\ t_{34} \end{bmatrix}
\tag{9.268}
\]
where
\[
\begin{aligned}
t_{14} ={}& d_6\bigl(s\theta_1 s\theta_4 s\theta_5 + c\theta_1\bigl(c\theta_4 s\theta_5\,c(\theta_2+\theta_3) + c\theta_5\,s(\theta_2+\theta_3)\bigr)\bigr) \\
&+ l_3 c\theta_1\,s(\theta_2+\theta_3) + d_2 s\theta_1 + l_2 c\theta_1 c\theta_2
\end{aligned}
\tag{9.269}
\]
\[
\begin{aligned}
t_{24} ={}& d_6\bigl(-c\theta_1 s\theta_4 s\theta_5 + s\theta_1\bigl(c\theta_4 s\theta_5\,c(\theta_2+\theta_3) + c\theta_5\,s(\theta_2+\theta_3)\bigr)\bigr) \\
&+ l_3 s\theta_1\,s(\theta_2+\theta_3) - d_2 c\theta_1 + l_2 c\theta_2 s\theta_1
\end{aligned}
\tag{9.270}
\]
\[
t_{34} = d_6\bigl(c\theta_4 s\theta_5\,s(\theta_2+\theta_3) - c\theta_5\,c(\theta_2+\theta_3)\bigr)
+ l_2 s\theta_2 - l_3\,c(\theta_2+\theta_3).
\tag{9.271}
\]
Therefore,
\[
{}^{0}\hat{k}_0 \times {}^{0}\mathbf{d}_6 = {}^{0}\tilde{k}_0\,{}^{0}\mathbf{d}_6
= \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} t_{14} \\ t_{24} \\ t_{34} \end{bmatrix}
= \begin{bmatrix} -t_{24} \\ t_{14} \\ 0 \end{bmatrix}
\tag{9.272}
\]
and the first Jacobian generating vector is
\[
\mathbf{c}_1 = \begin{bmatrix} -t_{24} \\ t_{14} \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}.
\tag{9.273}
\]
Column 2. The \(z_1\)-axis in the base frame can be found by
\[
{}^{0}\hat{k}_1 = {}^{0}R_1 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix}
c\theta_1 & 0 & s\theta_1 \\
s\theta_1 & 0 & -c\theta_1 \\
0 & 1 & 0
\end{bmatrix}
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix} \sin\theta_1 \\ -\cos\theta_1 \\ 0 \end{bmatrix}.
\tag{9.274}
\]
The translational half of \(\mathbf{c}_2\) needs the cross product of \({}^{0}\hat{k}_1\) and the position vector \({}^{1}\mathbf{d}_6\). The vector \({}^{1}\mathbf{d}_6\) is the position of the end-effector in the coordinate frame \(B_1\); however, it must be described in the base frame for the cross product to give a base-frame result. An easier method is to find \({}^{1}\hat{k}_1 \times {}^{1}\mathbf{d}_6\) and transform the result into the base frame:
\[
\begin{aligned}
{}^{0}\hat{k}_1 \times {}^{0}_{1}\mathbf{d}_6
&= {}^{0}R_1\left({}^{1}\hat{k}_1 \times {}^{1}\mathbf{d}_6\right)
= {}^{0}R_1\left(
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \times
\begin{bmatrix}
l_2\cos\theta_2 + l_3\sin(\theta_2+\theta_3) \\
l_2\sin\theta_2 - l_3\cos(\theta_2+\theta_3) \\
d_2
\end{bmatrix}\right) \\
&= \begin{bmatrix}
\cos\theta_1\bigl(-l_2\sin\theta_2 + l_3\cos(\theta_2+\theta_3)\bigr) \\
\sin\theta_1\bigl(-l_2\sin\theta_2 + l_3\cos(\theta_2+\theta_3)\bigr) \\
l_2\cos\theta_2 + l_3\sin(\theta_2+\theta_3)
\end{bmatrix}
\end{aligned}
\tag{9.275}
\]
Therefore, \(\mathbf{c}_2\) is found as a \(6\times1\) vector,
\[
\mathbf{c}_2 = \begin{bmatrix}
\cos\theta_1\bigl(-l_2\sin\theta_2 + l_3\cos(\theta_2+\theta_3)\bigr) \\
\sin\theta_1\bigl(-l_2\sin\theta_2 + l_3\cos(\theta_2+\theta_3)\bigr) \\
l_2\cos\theta_2 + l_3\sin(\theta_2+\theta_3) \\
\sin\theta_1 \\
-\cos\theta_1 \\
0
\end{bmatrix}.
\tag{9.276}
\]
Column 3. The \(z_2\)-axis in the base frame can be found using the same method:
\[
{}^{0}\hat{k}_2 = {}^{0}R_2\,{}^{2}\hat{k}_2 = {}^{0}R_1\,{}^{1}R_2
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix} \sin\theta_1 \\ -\cos\theta_1 \\ 0 \end{bmatrix}.
\tag{9.277}
\]
The translational half of \(\mathbf{c}_3\) can be found by computing \({}^{2}\hat{k}_2 \times {}^{2}\mathbf{d}_6\) and transforming the result into the base coordinate frame:
\[
{}^{2}\hat{k}_2 \times {}^{2}\mathbf{d}_6 = \begin{bmatrix} l_3\cos\theta_3 \\ l_3\sin\theta_3 \\ 0 \end{bmatrix}
\tag{9.278}
\]
\[
{}^{0}R_2\left({}^{2}\hat{k}_2 \times {}^{2}\mathbf{d}_6\right)
= \begin{bmatrix}
l_3\cos\theta_1\cos(\theta_2+\theta_3) \\
l_3\sin\theta_1\cos(\theta_2+\theta_3) \\
l_3\sin(\theta_2+\theta_3)
\end{bmatrix}
\tag{9.279}
\]
Therefore, \(\mathbf{c}_3\) is
\[
\mathbf{c}_3 = \begin{bmatrix}
l_3\cos\theta_1\cos(\theta_2+\theta_3) \\
l_3\sin\theta_1\cos(\theta_2+\theta_3) \\
l_3\sin(\theta_2+\theta_3) \\
\sin\theta_1 \\
-\cos\theta_1 \\
0
\end{bmatrix}.
\tag{9.280}
\]
Column 4. The \(z_3\)-axis in the base frame is
\[
{}^{0}\hat{k}_3 = {}^{0}R_1\,{}^{1}R_2\,{}^{2}R_3
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix}
\cos\theta_1\sin(\theta_2+\theta_3) \\
\sin\theta_1\sin(\theta_2+\theta_3) \\
-\cos(\theta_2+\theta_3)
\end{bmatrix}
\tag{9.281}
\]
and the translational half of \(\mathbf{c}_4\) can be found by computing \({}^{3}\hat{k}_3 \times {}^{3}\mathbf{d}_6\) and transforming the result into the base coordinate frame:
\[
{}^{0}R_3\left({}^{3}\hat{k}_3 \times {}^{3}\mathbf{d}_6\right)
= {}^{0}R_3\left(
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \times
\begin{bmatrix} 0 \\ 0 \\ l_3 \end{bmatrix}\right)
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\tag{9.282}
\]
Therefore, \(\mathbf{c}_4\) is
\[
\mathbf{c}_4 = \begin{bmatrix}
0 \\ 0 \\ 0 \\
\cos\theta_1\sin(\theta_2+\theta_3) \\
\sin\theta_1\sin(\theta_2+\theta_3) \\
-\cos(\theta_2+\theta_3)
\end{bmatrix}.
\tag{9.283}
\]
Column 5. The \(z_4\)-axis in the base frame is
\[
{}^{0}\hat{k}_4 = {}^{0}R_4 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix}
c\theta_4 s\theta_1 - c\theta_1 s\theta_4\,c(\theta_2+\theta_3) \\
-c\theta_1 c\theta_4 - s\theta_1 s\theta_4\,c(\theta_2+\theta_3) \\
-s\theta_4\,s(\theta_2+\theta_3)
\end{bmatrix}
\tag{9.284}
\]
and the translational half of \(\mathbf{c}_5\) can be found by computing \({}^{4}\hat{k}_4 \times {}^{4}\mathbf{d}_6\) and transforming the result into the base coordinate frame:
\[
{}^{0}R_4\left({}^{4}\hat{k}_4 \times {}^{4}\mathbf{d}_6\right)
= {}^{0}R_4\left(
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \times
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}\right)
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\tag{9.285}
\]
Therefore, \(\mathbf{c}_5\) is
\[
\mathbf{c}_5 = \begin{bmatrix}
0 \\ 0 \\ 0 \\
\cos\theta_4\sin\theta_1 - \cos\theta_1\sin\theta_4\cos(\theta_2+\theta_3) \\
-\cos\theta_1\cos\theta_4 - \sin\theta_1\sin\theta_4\cos(\theta_2+\theta_3) \\
-\sin\theta_4\sin(\theta_2+\theta_3)
\end{bmatrix}.
\tag{9.286}
\]
Column 6. The \(z_5\)-axis in the base frame is
\[
{}^{0}\hat{k}_5 = {}^{0}R_5 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix}
s\theta_5\bigl(s\theta_1 s\theta_4 + c\theta_1 c\theta_4\,c(\theta_2+\theta_3)\bigr) + c\theta_1 c\theta_5\,s(\theta_2+\theta_3) \\
s\theta_5\bigl(-c\theta_1 s\theta_4 + s\theta_1 c\theta_4\,c(\theta_2+\theta_3)\bigr) + s\theta_1 c\theta_5\,s(\theta_2+\theta_3) \\
c\theta_4 s\theta_5\,s(\theta_2+\theta_3) - c\theta_5\,c(\theta_2+\theta_3)
\end{bmatrix}
\tag{9.287}
\]
which, as a check, matches the coefficient of \(d_6\) in (9.269)-(9.271). The translational half of \(\mathbf{c}_6\) can be found by computing \({}^{5}\hat{k}_5 \times {}^{5}\mathbf{d}_6\) and transforming the result into the base coordinate frame:
\[
{}^{0}R_5\left({}^{5}\hat{k}_5 \times {}^{5}\mathbf{d}_6\right)
= {}^{0}R_5\left(
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \times
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}\right)
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\tag{9.288}
\]
Therefore, \(\mathbf{c}_6\) is
\[
\mathbf{c}_6 = \begin{bmatrix}
0 \\ 0 \\ 0 \\
s\theta_5\bigl(s\theta_1 s\theta_4 + c\theta_1 c\theta_4\,c(\theta_2+\theta_3)\bigr) + c\theta_1 c\theta_5\,s(\theta_2+\theta_3) \\
s\theta_5\bigl(-c\theta_1 s\theta_4 + s\theta_1 c\theta_4\,c(\theta_2+\theta_3)\bigr) + s\theta_1 c\theta_5\,s(\theta_2+\theta_3) \\
c\theta_4 s\theta_5\,s(\theta_2+\theta_3) - c\theta_5\,c(\theta_2+\theta_3)
\end{bmatrix}.
\tag{9.289}
\]
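The first column (9.273) can be cross-checked against a finite difference of the forward kinematics: the translational part of c1 must equal the partial derivative of the end-effector position with respect to θ1. The sketch below multiplies out the matrices of (9.261) with assumed sample dimensions l2 = 0.5, l3 = 0.4, d2 = 0.1 (since the printed 5T6 carries no translation, the tool offset d6 is effectively zero here):

```python
from math import cos, sin

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

L2, L3, D2 = 0.5, 0.4, 0.1   # assumed sample dimensions

def fk(q):
    """0T6 for the articulated robot, from the matrices of (9.261)."""
    q1, q2, q3, q4, q5, q6 = q
    Ts = [
        [[cos(q1), 0, sin(q1), 0], [sin(q1), 0, -cos(q1), 0],
         [0, 1, 0, 0], [0, 0, 0, 1]],
        [[cos(q2), -sin(q2), 0, L2*cos(q2)], [sin(q2), cos(q2), 0, L2*sin(q2)],
         [0, 0, 1, D2], [0, 0, 0, 1]],
        [[cos(q3), 0, sin(q3), 0], [sin(q3), 0, -cos(q3), 0],
         [0, 1, 0, 0], [0, 0, 0, 1]],
        [[cos(q4), 0, -sin(q4), 0], [sin(q4), 0, cos(q4), 0],
         [0, -1, 0, L3], [0, 0, 0, 1]],
        [[cos(q5), 0, sin(q5), 0], [sin(q5), 0, -cos(q5), 0],
         [0, 1, 0, 0], [0, 0, 0, 1]],
        [[cos(q6), -sin(q6), 0, 0], [sin(q6), cos(q6), 0, 0],
         [0, 0, 1, 0], [0, 0, 0, 1]],
    ]
    T = Ts[0]
    for M in Ts[1:]:
        T = matmul(T, M)
    return T

q = [0.3, 0.5, -0.4, 0.7, 0.2, 0.1]
T = fk(q)
t14, t24 = T[0][3], T[1][3]
c1_trans = [-t24, t14, 0.0]     # translational part of c1, eq. (9.272)
# central finite difference of the end-effector position w.r.t. theta1
h = 1e-7
fd = [(fk([q[0] + h] + q[1:])[i][3] - fk([q[0] - h] + q[1:])[i][3]) / (2*h)
      for i in range(3)]
```

The agreement between `fd` and `c1_trans` confirms that the generating vector reproduces the differential kinematics without symbolic differentiation.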
9.5 Summary

Some general numerical calculations are needed throughout robot kinematics. The most important of these is the solution of sets of linear and nonlinear algebraic equations, which arises in matrix inversion and in Jacobian-based computations. A practical solution method for a set of linear equations is LU factorization, and a practical method for a set of nonlinear equations is the Newton-Raphson method. Both of these methods are cast in applied algorithms.
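As a concrete illustration of the LU approach summarized above, here is a minimal Doolittle-style factorization with forward and backward substitution (no pivoting, so nonzero pivots are assumed; the sample matrix and names are my own, not from the text):

```python
def lu_decompose(A):
    """Doolittle LU factorization: A = L U, with unit diagonal in L.
    No pivoting -- assumes nonzero pivots are encountered."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve L U x = b by forward then backward substitution."""
    n = len(b)
    y = [0.0] * n                      # forward substitution: L y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n                      # backward substitution: U x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[2.0, 1.0, 1.0],
     [4.0, -6.0, 0.0],
     [-2.0, 7.0, 2.0]]
b = [7.0, -8.0, 18.0]
L, U = lu_decompose(A)
x = lu_solve(L, U, b)                  # solution of A x = b
```

Once `L` and `U` are stored, additional right-hand sides can be solved by substitution alone, which is the main practical advantage of the factorization.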
9.6 Key Symbols

a           turn vector of end-effector frame
A           coefficient matrix
aij         the element of row i and column j of A
b           the vector of known values in a set of linear equations
B           body coordinate frame; dummy matrix with upper U and lower L
c           cos
ci          Jacobian generating vector
con         condition number
dx, dy, dz  elements of d
det         determinant
d           translation vector, displacement vector
D           lower-right submatrix of B
f           a set of nonlinear algebraic equations
G, B0       global coordinate frame, base coordinate frame
H           dummy matrix to calculate D
I = [I]     identity matrix
J           Jacobian
lij         the element of row i and column j of L
L           lower triangular submatrix of A
m           number of independent equations
n           number of rows and columns of A
q           the vector of unknowns of f, vector of joint variables
r           position vector, homogeneous position vector
ri          the element i of r
rij         the element of row i and column j of a matrix
R           rotation transformation matrix
s           sin
sT, lT      nondiagonal first column of D
sgn         signum function
T           homogeneous transformation matrix
T           a set of nonlinear algebraic equations of q
uij         the element of row i and column j of U
U           upper triangular submatrix of A
uT, rT      nondiagonal first row of D
W           weight factor matrix
x, y, z     local coordinate axes
x           vector of unknowns
X, Y, Z     global coordinate axes
y           dummy vector of unknowns

Greek
δ           small increment of a parameter
δq          difference in q between two steps of iteration
θ           rotary joint angle
θ           vector of θi
θijk        θi + θj + θk

Symbol
‖ ‖         norm of the matrix [ ]
[ ]−1       inverse of the matrix [ ]
[ ]T        transpose of the matrix [ ]
[ ]+        pseudo-inverse of the matrix [ ]
≡           equivalent
⊢           orthogonal
(i)         link number i
∥           parallel sign
⊥           perpendicular
×           vector cross product
q★          a guess value for q
△           perturbation in a vector or a matrix
Exercises

1. Notation and symbols.

Describe the meaning of

a- [L]  b- [U]  c- [B]  d- [Di]  e- uii  f- dii
g- con(A)  h- ‖A‖∞  i- ‖A‖1  j- ‖A‖2  k- ci  l- X
m- J  n- q  o- i−1∆i−1  p- T  q- V  r- θ.

2. LU factorization method.

Use the LU factorization method and find the associated [L] and [U] for the following matrices.

(a)
\[
[A] = \begin{bmatrix} 1 & 4 & 8 \\ 5 & 2 & 7 \\ 9 & 6 & 3 \end{bmatrix}
\]
(b)
\[
[B] = \begin{bmatrix}
2 & -1 & 3 & -3 \\
1 & 3 & -1 & -2 \\
0 & 2 & 2 & 4 \\
3 & 1 & 5 & -2
\end{bmatrix}
\]
(c)
\[
[C] = \begin{bmatrix}
-2 & -1 & 3 & -3 & 6 \\
1 & 3 & -1 & -2 & 0 \\
1 & 2 & 2 & 4 & -2 \\
3 & 1 & 5 & -2 & -1 \\
7 & -5 & 2 & 1 & 1
\end{bmatrix}
\]

3. LU inversion method.

Use the LU inversion method and find the inverses of the matrices in Exercise 2.

4. LU calculations.

Use the LU inversion method and calculate the inverses of the following matrices, based on the matrices in Exercise 2.

D = AB    E = AB−1    F = A−1B    G = A−1B−1
5. A set of linear equations.

Use the LU factorization method to solve the following set of equations and show that the solution is x1 = 4, x2 = 1, x3 = 2.

−3x1 + 8x2 + 5x3 = 6
2x1 − 7x2 + 4x3 = 9
x1 + 9x2 − 6x3 = 1
6. A set of six equations.

Use the LU factorization method to solve the following set of equations and show that the solution is approximately x1 = 75, x2 = 52, x3 = 40, x4 = 31, x5 = 22, x6 = 10.

11x1 − 5x2 − x6 = 500
−20x1 + 41x2 − 15x3 − 6x5 = 0
−3x2 + 7x3 − 4x4 = 0
−x3 + 2x4 − x5 = 0
−2x1 − 15x5 + 47x6 = 0
−3x2 − 10x4 + 28x5 − 15x6 = 0
7. A set of nonlinear equations.

Solve the following set of equations.

x1x2 − 2x1 − x2 = 0
x1²x2 − 2x1x2 + x2 − 2x1² + 4x1 = 2

8. ★ Gaussian elimination method.

There are two situations where the Gaussian elimination method fails: division by zero and round-off errors. Examine the LU factorization method for the possibility of division by zero.

9. ★ Number of subtractions as a source of round-off error.

Round-off error is common in numerical techniques, and it grows as the number of subtractions increases. Apply the Gaussian elimination and LU factorization methods for solving the set of four equations
\[
\begin{bmatrix}
2 & 1 & 3 & -3 \\
1 & 0 & -1 & -2 \\
0 & 2 & 2 & 1 \\
3 & 1 & 0 & -2
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
= \begin{bmatrix} 1 \\ 2 \\ 0 \\ -2 \end{bmatrix}
\]
and count the number of subtractions in each method.
10. ★ Jacobian matrix from transformation matrices.

Use the Jacobian matrix technique based on the links' transformation matrices and find the Jacobian matrix of the R∥R∥R planar manipulator shown in Figure 5.21. Choose a set of sample data for the dimensions and kinematics of the manipulator and find the inverse of the Jacobian matrix.

11. ★ Jacobian matrix for a spherical wrist.

Use the Jacobian matrix technique based on the links' transformation matrices and find the Jacobian matrix of the spherical wrist shown in Figure 5.26. Assume that the frame B3 is the base frame.

12. Jacobian matrix for a SCARA manipulator.

Use the Jacobian matrix technique based on the links' transformation matrices and find the Jacobian matrix of the R∥R∥R∥P robot shown in Figure 5.23.

13. Jacobian matrix for an R⊢R∥R articulated manipulator.

Figure 5.22 illustrates a 3 DOF R⊢R∥R manipulator. Use the Jacobian matrix technique based on the links' transformation matrices and find the Jacobian matrix for the manipulator.

14. ★ Partitioning inverse method.

Calculate the matrix inversion for the matrices in Exercise 2 using the partitioning inverse method.

15. ★ Analytic matrix inversion.

Use the analytic and LU factorization methods and find the inverse of
\[
[A] = \begin{bmatrix} 1 & 4 & 8 \\ 5 & 2 & 7 \\ 9 & 6 & 3 \end{bmatrix}
\]
or an arbitrary 3 × 3 matrix. Count and compare the number of arithmetic operations.

16. ★ Cayley-Hamilton matrix inversion.

Use the Cayley-Hamilton and LU factorization methods and find the inverse of
\[
[A] = \begin{bmatrix} 1 & 4 & 8 \\ 5 & 2 & 7 \\ 9 & 6 & 3 \end{bmatrix}
\]
or an arbitrary 3 × 3 matrix. Count and compare the number of arithmetic operations.
17. ★ Norms of matrices.

Calculate the following norms of the matrices in Exercise 2.
\[
\|A\|_1 = \max_{1\le j\le n} \sum_{i=1}^{n} |a_{ij}|
\qquad
\|A\|_2 = \sqrt{\lambda_{\max}\left(A^{T}A\right)}
\]
\[
\|A\|_\infty = \max_{1\le i\le n} \sum_{j=1}^{n} |a_{ij}|
\qquad
\|A\|_F = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}^{2}}
\]