



Research Article
Error Modeling in Distance and Rotation for Self-Calibration of Space Robots on Orbit

Qingxuan Jia, Shiwei Wang, Gang Chen, and Hanxu Sun

Automation School, Beijing University of Posts and Telecommunications, Beijing 100876, China

Correspondence should be addressed to Gang Chen; [email protected]

Received 1 September 2018; Revised 22 October 2018; Accepted 22 November 2018; Published 17 February 2019

Academic Editor: Paolo Gasbarri

Copyright © 2019 Qingxuan Jia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Vibration and impact of launching, inner and outer pressure difference, and thermal deformation of the space capsule will change the transformation between the pose measurement system and the space robot base. It will be complicated, even hard, to measure and calculate this transformation accurately. Therefore, an error modeling method considering both the distance error and the rotation error of the end-effector is proposed for self-calibration of the space robot on orbit, in order to avoid the drawback of frame transformation. Moreover, according to the linear correlation of the columns of the identification matrix, unrecognizable parameters in the distance and rotation error model are removed to eliminate singularity in robot kinematic calibration. Finally, simulation tests on a 7-DOF space robot are conducted to verify the effectiveness of the proposed method.

1. Introduction

Space robots can assist astronauts to reach and expand their maintenance work areas, improving operational efficiency and safety [1], and can even complete on-orbit missions such as spacecraft rendezvous and docking, satellite fault maintenance, and satellite capture independently [2]. Some of these precise space missions require the space robot to have high end positioning accuracy. Nevertheless, defects in manufacturing and assembly lead to differences between the actual kinematic parameters and the nominal ones, generally regarded as systematic errors. Additionally, the end positioning accuracy is also affected by random errors, such as environmental changes, gear transmission, and mechanical deformation. Calibration on the ground can remedy the positioning deficiencies caused by these inherent kinematic errors [3, 4]. However, in contrast to traditional industrial robots, space robots are subjected to strong vibration and impact during the launch of the spacecraft and are then confronted with extreme temperatures on orbit. These factors will inevitably cause the kinematic parameters of the space robot to change, resulting in a decrease in the end positioning accuracy. Therefore, it is necessary to perform on-orbit kinematic calibration [5].

The actual pose of the space robot end-effector can hardly be measured by an external measuring device due to the extreme orbital environment, so the internal sensing system mounted on its end-effector is adopted for measurement during self-calibration. Many researchers have devoted efforts to kinematic self-calibration of robot manipulators. Angulo and Torras [6] developed a neural-network method to automatically recalibrate a commercial robot manipulator after it undergoes wear or damage, which has been applied to the REIS robot included in the space station mock-up at Daimler-Benz Aerospace. Liang et al. [7] developed an adaptive self-calibration of hand-eye systems in which a visual-feedback-based self-learning process is used for dynamically and continuously learning the hand-eye transformation through repetitive operation trials. Liu et al. [8] proposed a self-calibration method based on hand-eye vision, which establishes the relative pose error model of the space robot and uses the particle swarm optimization algorithm to identify the kinematic parameters. Yin et al. [9] proposed a vision-based robot self-calibration method, eliminating the need for robot-base-frame and hand-to-eye calibrations. Du et al. [10–13] introduced an inertial measurement unit to estimate the end posture, attached a position marker to the end-effector to measure the actual position,

Hindawi, International Journal of Aerospace Engineering, Volume 2019, Article ID 8349048, 16 pages. https://doi.org/10.1155/2019/8349048


and identified kinematic parameters with different filters to overcome the impact of sensing noise. In particular, for the sake of more accurate and reliable estimation from the sensors, various filter tools such as the Kalman filter and the particle filter are used in the estimation process, and the position estimation is always combined with the orientation estimation [14–16]. Zhang et al. [17] realized kinematic calibration based on the local product-of-exponentials formula by measuring the end position of the robot with a fixed camera and a plane mark mounted on the end-effector.

Works on self-calibration above adopted the absolute pose/position error model for kinematic calibration. They have to describe the end-effector pose errors under the robot base frame, making it inevitable to identify the transformation matrix between the measurement system frame and the robot base frame before calibration. However, this transformation matrix is very complicated to measure and calculate accurately, and hardly possible to obtain in unmanned environments such as on orbit [18]. To avoid the drawback of frame transformation, the distance error of any two positions in the robot workspace is applied to calibrate the robot position accuracy indirectly [19, 20]. Roning and Korzun [21] used the criterion of equal distances between the points in the robot space and the task space to perform calibration on the GM Fanuc S-10 robot. Gong et al. [22] used a hybrid noncontact optical sensor mounted on the end-effector, calibrating a 6 degree-of-freedom (DOF) robot based on distance error. Tan et al. [23] made use of screw theory and the distance error model, considering the initial orientation errors. Gao et al. [24] obtained the linearized equation describing the relationship between the positioning errors and the kinematic errors by differentiating the kinematic equation. Zhang et al. [25] derived a linear model from link parameter errors to the squared range difference of the robot end-effector. Zhang et al. [26] proposed a method to directly establish parameter error equations based on relative distance error and identified the parameter errors by employing a hybrid genetic algorithm. Mu et al. [27] synthesized the hand-eye transformation parameters and the robot kinematic parameters during calibration of the system parameters of a flexible measurement system based on spheres' centre-to-centre distance errors. Shi et al. [28] established a distance error model to connect the robot distance error and the absolute positioning error and validated the error model on a handling robot. Li et al. [29] used two error models, including the position error model and the distance error model, for calibrating Selective Compliance Assembly Robot Arm (SCARA) robots. Yao et al. [30] measured the distance between two points in space to conduct kinematic calibration on a service robot. As for measurement of the distance errors, a laser tracker is typically used [31–33], while a CMM (coordinate measuring machine) can also be employed [34]. Joubair and Bonev [35] developed a kinematic calibration method to improve the accuracy of a six-axis serial industrial robot in a specific target workspace, using distance error and sphere constraints. The works above obtained the distance error model by ignoring the linearizing error; however, they lack the rotation error of the robot end-effector, which degrades calibration performance to a certain extent.

In this paper, we propose an error modeling method considering both the distance error and the rotation error of the end-effector of the space robot. The remainder of this paper is organized as follows. In Section 2, the kinematic self-calibration system for a 7-DOF space robot is elaborated. In Section 3, based on the absolute pose error model, we build the mappings from the kinematic errors to the rotation error of the robot end-effector, obtaining the distance and rotation error model. The redundant parameters in this error model are then analysed theoretically in Section 4. Finally, Sections 5 and 6 give the results of our experiments and conclude this paper, respectively.

2. Kinematic Self-Calibration System

Different from kinematic calibration of robot manipulators on the ground, the space robot has a large structural size and an extreme working environment, making it impossible to measure the end pose of the space robot using external measuring equipment, so its own hand-eye vision system [36–38] with a checkerboard calibration plate is adopted for pose measurement. As shown in Figure 1, a 7-DOF space robot is mounted on the outside of the space capsule, with a hand-eye camera attached to its end. A checkerboard calibration plate is placed on the outside of the space capsule, away from the space robot base, as a target for the hand-eye camera.

As illustrated in Figure 2, {b} denotes the base frame of the space robot, and {i}, i = 1, 2, …, 7, denotes the reference frame attached to each joint, respectively. {t} is the end-effector reference frame, which coincides exactly with the reference frame {7}. {c} is the hand-eye camera frame, namely, the measurement system frame, and its transformation matrix with respect to the base frame {b} is assumed to be ^{b}T_{c}. Therefore, once the pose of the checkerboard in the hand-eye camera frame {c} is measured, the transformation matrix of the end-effector frame {t} with respect to the hand-eye camera frame {c} is obtained, denoted as ^{c}T_{t}. Further, the transformation matrix from the base frame {b} to the end-effector frame {t} is calculated as

^{b}T_{t} = ^{b}T_{c} · ^{c}T_{t}    (1)

However, vibration and impact of launching, inner and outer pressure difference, and thermal deformation of the space capsule will change the pose of the checkerboard with respect to the robot base frame, making it complicated to measure and calculate the transformation matrix ^{b}T_{c} between the camera frame and the robot base frame accurately. Therefore, the relative errors, including the distance and rotation errors of the robot end-effector, are analysed.
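Equation (1) is an ordinary chain of 4 × 4 homogeneous transforms. A minimal numeric sketch (the frame values below are illustrative placeholders, not taken from the paper):

```python
import numpy as np

def homogeneous(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical example values: camera pose in the base frame (b_T_c) and the
# end-effector pose measured in the camera frame (c_T_t).
b_T_c = homogeneous(rot_z(np.pi / 2), np.array([1.0, 0.5, 0.2]))
c_T_t = homogeneous(rot_z(-np.pi / 4), np.array([0.3, 0.0, 0.1]))

# Equation (1): chain the transforms to express the end-effector in the base frame.
b_T_t = b_T_c @ c_T_t
print(b_T_t[:3, 3])  # end-effector position in the base frame
```

The product is exactly the composition the paper relies on; the difficulty on orbit is that b_T_c itself drifts and cannot be measured reliably, which motivates the relative-error models below.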

3. Error Modeling in Distance and Rotation

3.1. Distance Error Modeling. As the basis of error modeling, kinematic modeling is aimed at describing the relation between any two adjoining link coordinate frames with as few parameters as possible. But inappropriate kinematic


parameters might not meet three fundamental principles, namely, completeness, model continuity, and minimality [39]. Various methods of kinematic modeling for robot manipulators have been proposed; e.g., the classic DH method [40] uses only 4 parameters to describe the relation between any two adjoining link coordinate frames, while the MDH (modified DH) method [41] uses 5 parameters. There are more, such as the CPC (complete and parametrically continuous) method [42] and the zero reference model method [43].

Without loss of generality, we assume that the number of kinematic parameters used to describe the relation between any two adjoining link coordinate frames is p, and the DOF of the space robot is n; then all the kinematic parameters of the space robot can be recorded as Ω ∈ R^{np×1}, while one robot configuration is denoted as Θ ∈ R^{n}.

Definition 1. Kinematic model: the kinematic model of the space robot relates the configuration Θ and the kinematic

Figure 1: Kinematic self-calibration system. The space robot and the checkerboard calibration plate are mounted on the outside of the space capsule, and a hand-eye camera is adopted to measure the pose of the checkerboard.

Figure 2: Skeleton of the coordinate frames for self-calibration. The coordinate frames of the robot base, the checkerboard, and all the joints are defined, and the related link lengths are marked.


parameters Ω to the end-effector pose Ψ ∈ R^{6×1} through a function f as

Ψ = f(Ω, Θ)    (2)

The robot configurations are not exactly known due to the existence of sensor noise. Similarly, defects in manufacturing and assembly result in kinematic errors. The difference between the nominal value Ω and the exact one Ω* is assumed to be δΩ, such that Ω* = Ω + δΩ. It is noteworthy that errors in Θ can also be included in the kinematic errors δΩ.

Proposition 1. Absolute pose error model [39]: the differential error of the end-effector pose is the product of the generalized Jacobian matrix and the kinematic errors, such as

δΨ = J(Ω, Θ) δΩ,    (3)

where J(Ω, Θ) ∈ R^{6×np} is the generalized Jacobian matrix, also termed the identification matrix, determined by the kinematic parameters Ω and the configuration Θ. δΨ ∈ R^{6×1} denotes the differential error of the end-effector pose, composed of 3 differential translations along the x-y-z axes and 3 differential rotations around the x-y-z axes, respectively.

In Figure 3, as the robot configuration changes from Θ_A to Θ_B, the vectors AB→ and A′B′→ are the corresponding nominal and actual displacements, respectively, whereas AA′→ and BB′→ are the end-effector position offset displacements under these two configurations. According to the absolute error model in Equation (3),

AA′→ = J_d(Ω, Θ_A) δΩ,
BB′→ = J_d(Ω, Θ_B) δΩ,    (4)

where J_d ∈ R^{3×np} is the position-related part of the identification matrix.

Proposition 2. Distance error model [20]: as the robot configuration changes from Θ_A to Θ_B, the measurable distance error scalar is the product of the identification matrix and the kinematic errors, such as

Δl_AB = (AB→/‖AB→‖) · (J_d(Ω, Θ_B) − J_d(Ω, Θ_A)) · δΩ,    (5)

where Δl_AB = ‖A′B′→‖ − ‖AB→‖ denotes the measurable distance error scalar. ‖AB→‖ and ‖A′B′→‖ are the norms of the vectors AB→ and A′B′→, respectively; the former is calculated by nominal forward kinematics, while the latter can be measured by the hand-eye camera system. Both values are independent of their reference frames. The identification matrix of the distance error model is obtained as (AB→/‖AB→‖) · (J_d(Ω, Θ_B) − J_d(Ω, Θ_A)).
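Proposition 2 can be sanity-checked numerically. The sketch below uses a hypothetical planar two-link arm, not the paper's 7-DOF robot; its kinematic parameters are just the two link lengths, and J_d is approximated by finite differences:

```python
import numpy as np

# Hypothetical planar 2-link arm: kinematic parameters are the link lengths (l1, l2).
def fk(params, q):
    l1, l2 = params
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y, 0.0])

def position_jacobian(params, q, eps=1e-7):
    """Numeric J_d: derivative of the end position w.r.t. the kinematic parameters."""
    J = np.zeros((3, len(params)))
    for k in range(len(params)):
        dp = np.array(params, dtype=float)
        dp[k] += eps
        J[:, k] = (fk(dp, q) - fk(params, q)) / eps
    return J

params = np.array([1.0, 0.8])          # nominal link lengths
d_omega = np.array([2e-4, -1e-4])      # small kinematic errors
qa, qb = np.array([0.2, 0.4]), np.array([1.1, -0.3])

A, B = fk(params, qa), fk(params, qb)
Ap, Bp = fk(params + d_omega, qa), fk(params + d_omega, qb)

# Measured distance error scalar (left side of Equation (5)).
dl_measured = np.linalg.norm(Bp - Ap) - np.linalg.norm(B - A)

# Model prediction: unit vector AB/|AB| dotted with (J_d(B) - J_d(A)) * dOmega.
u = (B - A) / np.linalg.norm(B - A)
dl_model = u @ (position_jacobian(params, qb) - position_jacobian(params, qa)) @ d_omega
print(dl_measured, dl_model)  # agree to first order
```

To first order the measured and modelled distance errors agree, which is exactly the linearization Equation (5) relies on; no frame transformation between a measurement system and the base is needed anywhere in the computation.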

3.2. Propositions of Differential Rotation

Definition 2. General rotation transformation: suppose that K = (k_x, k_y, k_z) is a unit vector through the origin; the rotation matrix around K by the angle θ is

          ⎡ k_x k_x versθ + cθ      k_y k_x versθ − k_z sθ   k_z k_x versθ + k_y sθ ⎤
R(K, θ) = ⎢ k_x k_y versθ + k_z sθ  k_y k_y versθ + cθ       k_z k_y versθ − k_x sθ ⎥
          ⎣ k_x k_z versθ − k_y sθ  k_y k_z versθ + k_x sθ   k_z k_z versθ + cθ     ⎦    (6)

where sθ = sin θ, cθ = cos θ, and versθ = 1 − cos θ.
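Equation (6) is Rodrigues' rotation formula and transcribes directly into code (the test axis below is arbitrary):

```python
import numpy as np

def rot_axis_angle(K, theta):
    """General rotation transformation of Equation (6) (Rodrigues' formula)."""
    kx, ky, kz = K
    s, c, v = np.sin(theta), np.cos(theta), 1.0 - np.cos(theta)
    return np.array([
        [kx*kx*v + c,    ky*kx*v - kz*s, kz*kx*v + ky*s],
        [kx*ky*v + kz*s, ky*ky*v + c,    kz*ky*v - kx*s],
        [kx*kz*v - ky*s, ky*kz*v + kx*s, kz*kz*v + c   ],
    ])

K = np.array([1.0, 2.0, 2.0]) / 3.0   # arbitrary unit axis
R = rot_axis_angle(K, 0.7)
print(np.allclose(R @ R.T, np.eye(3)), np.allclose(R @ K, K))  # True True
```

The checks confirm R is a proper rotation and that the axis K is left fixed, as the definition requires.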

Proposition 3. Equivalent angle and axis of rotation [40]: any rotation matrix R can be expressed as a rotation around a certain axis K with a certain angle θ. The axis K is termed the equivalent axis of rotation, while θ is the equivalent angle of rotation.

It should be noted that the equivalent angle and axis ofrotation have the following three important properties.

(a) For a certain rotation matrix R, there may be more than one set of equivalent angle and axis of rotation. Actually, (K, θ) is equivalent to (−K, −θ) and even to (K, θ + 2nπ), n ∈ ℤ. Therefore, the value of θ is always restricted to [0, π].

(b) When θ is small, the rotation axis K is hard to compute because of singularity.

(c) K · Rᵀ = K, where Rᵀ denotes the transpose of R. The proof is simple and omitted.
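Conversely, the equivalent angle and axis can be extracted from a rotation matrix: θ from the trace and K from the skew-symmetric part of R. The sketch below also reflects property (b), returning no axis when sin θ is numerically zero:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

def rot_axis_angle(K, theta):
    # Rodrigues' formula, equivalent to Equation (6)
    S = skew(np.asarray(K, dtype=float))
    return np.eye(3) + np.sin(theta) * S + (1.0 - np.cos(theta)) * (S @ S)

def axis_angle_from_rot(R, tol=1e-8):
    """Recover (K, theta) with theta in [0, pi] (property (a)).
    When sin(theta) ~ 0 the axis is undefined (property (b))."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.sin(theta) < tol:
        return None, theta  # axis cannot be recovered reliably
    K = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return K / (2.0 * np.sin(theta)), theta

K0 = np.array([2.0, 1.0, 2.0]) / 3.0
R = rot_axis_angle(K0, 1.2)
K1, th1 = axis_angle_from_rot(R)
print(np.allclose(K1, K0), np.isclose(th1, 1.2), np.allclose(K0 @ R.T, K0))  # property (c) also holds
```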

Proposition 4. Differential rotation by equivalent angle and axis of rotation: any differential rotation transformation ΔR can always be regarded as a differential rotation around a certain axis K = (k_x, k_y, k_z) with the rotation angle δθ, such as

             ⎡ 1        −k_z δθ   k_y δθ  ⎤
ΔR(K, δθ) =  ⎢ k_z δθ   1         −k_x δθ ⎥
             ⎣ −k_y δθ  k_x δθ    1       ⎦    (7)

Proof. δθ is so small that sin δθ ≈ δθ, cos δθ ≈ 1, and versδθ ≈ 0. Then, Equation (7) is obtained by substituting K and δθ into Equation (6).

Proposition 5. Differential rotation by 3-dimensional differential rotation angles [44]: any differential rotation transformation ΔR can always be regarded as differential rotations around the axes x, y, and z in turn. Suppose that


the 3-dimensional differential rotation angles are ξ = (ξ_x, ξ_y, ξ_z)ᵀ; then

ΔR(ξ) = skew(ξ) + I_3,    (8)

where I_3 denotes a 3 × 3 identity matrix, and the function skew is used to create a skew-symmetric matrix, defined as

          ⎡ 0     −v_3   v_2  ⎤
skew(v) = ⎢ v_3   0      −v_1 ⎥
          ⎣ −v_2  v_1    0    ⎦    (9)

Proposition 6. If the equivalent angle and axis of rotation of a certain differential rotation matrix ΔR are δθ and K = (k_x, k_y, k_z), and its 3-dimensional differential rotation angles are ξ = (ξ_x, ξ_y, ξ_z)ᵀ, then

δθ = K · ξ    (10)

Proof. From Equations (7) and (8),

ξ_x = k_x δθ,  ξ_y = k_y δθ,  ξ_z = k_z δθ    (11)

Considering that K is a unit vector,

k_x² + k_y² + k_z² = 1    (12)

Thus, Equation (10) is obtained by

δθ = (k_x² + k_y² + k_z²) δθ = k_x (k_x δθ) + k_y (k_y δθ) + k_z (k_z δθ) = k_x ξ_x + k_y ξ_y + k_z ξ_z = K · ξ    (13)

Proposition 7. Differential rotation by the rotation matrix [44]: if the rotation matrix R′ is the rotation matrix R transformed by a differential rotation ΔR, then

ΔR = R′ · Rᵀ − I_3    (14)

It should be noted that ΔR is expressed with respect to the reference frame in which R is defined. The differential rotation with respect to the frame of R itself, termed ᵀΔR, can be obtained as

ᵀΔR = Rᵀ · R′ − I_3    (15)

The two kinds of differential rotation matrices satisfy

ΔR · R = R · ᵀΔR    (16)
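Equations (14)–(16) can be verified with concrete rotations; Equation (16) is in fact an exact identity, since both sides equal R′ − R. A sketch with arbitrary example rotations:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

def rodrigues(K, th):
    S = skew(np.asarray(K, dtype=float))
    return np.eye(3) + np.sin(th) * S + (1.0 - np.cos(th)) * (S @ S)

R = rodrigues([0.0, 0.0, 1.0], 0.9)
R_prime = rodrigues([1.0, 0.0, 0.0], 1e-4) @ R  # R perturbed by a small rotation

dR = R_prime @ R.T - np.eye(3)    # Equation (14): w.r.t. the reference frame
TdR = R.T @ R_prime - np.eye(3)   # Equation (15): w.r.t. the frame of R
print(np.allclose(dR @ R, R @ TdR))  # Equation (16): True (exact identity)
```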

3.3. Rotation Error Modeling. We can avoid identifying the transformation matrix between the measurement system frame and the robot base frame by using the distance error. The same holds when the equivalent angle of rotation is used to describe the variation of the end-effector orientation.

As shown in Figure 3, the matrices ^{A}R_{B}, ^{A′}R_{B′}, ^{A}R_{A′}, and ^{B}R_{B′} denote the rotations between any two of the frames {A}, {B}, {A′}, and {B′}. Since kinematic errors are small, ^{A}R_{B} and ^{A′}R_{B′} are very close to each other. In other words, (^{A}R_{B})ᵀ · ^{A′}R_{B′} meets the definition of differential rotation.

Figure 3: Schematic of the relative pose error model. The robot moves from the configuration Θ_A to Θ_B, and its nominal end-effector pose changes from the frame {A} to {B}.


Proposition 8. Suppose that ξ_A, ξ_B, and ξ_R represent the 3-dimensional differential rotation angles of ^{A}R_{A′}, ^{B}R_{B′}, and (^{A}R_{B})ᵀ · ^{A′}R_{B′}, respectively; then

ξ_R = ξ_B − (^{A}R_{B})ᵀ ξ_A    (17)

Proof. According to Proposition 7, we can obtain the differential rotation matrices

ᵀΔ_A = (^{0}R_{A})ᵀ · ^{0}R_{A′} − I_3,
ᵀΔ_B = (^{0}R_{B})ᵀ · ^{0}R_{B′} − I_3    (18)

Then, the differential rotation matrix between ^{A}R_{B} and ^{A′}R_{B′} can be calculated as

ᵀΔ_R = (^{A}R_{B})ᵀ · ^{A′}R_{B′} − I_3
     = (^{A}R_{B})ᵀ · ^{A′}R_{A} · ^{A}R_{B} · ^{B}R_{B′} − I_3
     = (^{A}R_{B})ᵀ · (ᵀΔ_A + I_3)ᵀ · ^{A}R_{B} · (ᵀΔ_B + I_3) − I_3
     = (^{A}R_{B})ᵀ · (ᵀΔ_A)ᵀ · ^{A}R_{B} · ᵀΔ_B + (^{A}R_{B})ᵀ · (ᵀΔ_A)ᵀ · ^{A}R_{B} + ᵀΔ_B    (19)

Equation (17) can be obtained by substituting ξ_A, ξ_B, and ξ_R into Equation (19), dropping the second-order term, and simplifying.
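Proposition 8 can be confirmed numerically with small random perturbations (the rotation and perturbation magnitudes below are arbitrary):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

def rodrigues(K, th):
    S = skew(np.asarray(K, dtype=float))
    return np.eye(3) + np.sin(th) * S + (1.0 - np.cos(th)) * (S @ S)

rng = np.random.default_rng(1)
R_AB = rodrigues([0.0, 1.0, 0.0], 0.8)    # nominal A -> B rotation
xi_A = 1e-6 * rng.standard_normal(3)      # small frame perturbations
xi_B = 1e-6 * rng.standard_normal(3)

R_AAp = np.eye(3) + skew(xi_A)      # A_R_A' = I + skew(xi_A), Equation (8)
R_BBp = np.eye(3) + skew(xi_B)
R_ApBp = R_AAp.T @ R_AB @ R_BBp     # A'_R_B' = A'_R_A . A_R_B . B_R_B'

TdR = R_AB.T @ R_ApBp - np.eye(3)   # differential rotation between A_R_B and A'_R_B'
xi_R = np.array([TdR[2, 1], TdR[0, 2], TdR[1, 0]])
print(np.allclose(xi_R, xi_B - R_AB.T @ xi_A))  # Equation (17): True
```

The residual is second order in the perturbation magnitude, which is exactly the term dropped in Equation (19).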

Suppose that the equivalent angle and axis of rotation corresponding to ^{A}R_{B} are ^{A}θ_{B} and ^{A}K_{B}, while those of ^{A′}R_{B′} are ^{A′}θ_{B′} and ^{A′}K_{B′}. Both ^{A}K_{B} and ^{A′}K_{B′} are 1 × 3 vectors. According to Equation (3), we can obtain

ξ_A = J_r(Ω, Θ_A) δΩ,
ξ_B = J_r(Ω, Θ_B) δΩ,    (20)

where J_r ∈ R^{3×np} is the orientation-related part of the identification matrix. It should be noted that the identification matrix in Equation (20) is calculated with respect to the end-effector reference frame.

Proposition 9. Rotation error model: as the robot configuration changes from Θ_A to Θ_B, the measurable rotation error scalar is the product of the identification matrix and the kinematic errors, such as

Δθ_r = ^{A}K_{B} · (J_r(Ω, Θ_B) − J_r(Ω, Θ_A)) · δΩ,    (21)

where Δθ_r = ^{A′}θ_{B′} − ^{A}θ_{B} is the measurable rotation error scalar, and the identification matrix of the rotation error model is ^{A}K_{B} · (J_r(Ω, Θ_B) − J_r(Ω, Θ_A)).

Proof. Substituting Equation (20) into Equation (17), we can obtain

ξ_R = (J_r(Ω, Θ_B) − (^{A}R_{B})ᵀ · J_r(Ω, Θ_A)) · δΩ    (22)

Similar to the derivation of the distance error, we make the rotational axis ^{A}K_{B} coincide with ^{A′}K_{B′} and the starting point of the rotation matrix ^{A}R_{B} coincide with that of ^{A′}R_{B′}, as shown in Figure 4, where the point H is the projection of B′ onto the plane AOB, while the point G satisfies ∠AOG = ∠A′OB′. The measurable equivalent rotation error can be obtained as

Δθ_r = ^{A′}θ_{B′} − ^{A}θ_{B} = ∠A′OB′ − ∠AOB = ∠AOG − ∠AOB = ∠GOB    (23)

∠HOB is the projection of ∠BOB′ onto the plane AOB, which is actually the equivalent angle of rotation δθ_r of the differential rotation ᵀΔ_R around the axis ^{A}K_{B}, such that ∠HOB = δθ_r. Ignoring the linearizing error, we obtain ∠GOB ≈ ∠HOB, and then

Δθ_r = δθ_r    (24)

According to Proposition 6, substituting Equation (22) into (24) gives

Δθ_r = δθ_r = ^{A}K_{B} · ξ_R = ^{A}K_{B} · (J_r(Ω, Θ_B) − (^{A}R_{B})ᵀ · J_r(Ω, Θ_A)) · δΩ    (25)

According to the third property of the equivalent angle and axis of rotation,

^{A}K_{B} · (^{A}R_{B})ᵀ = ^{A}K_{B}    (26)

Finally, the rotation error model in Equation (21) can be obtained by substituting Equation (26) into (25).

3.4. Distance and Rotation Error Modeling. In summary, the distance and rotation error model is obtained from Equations (5) and (21) as

Δl_AB = (AB→/‖AB→‖) · (J_d(Ω, Θ_B) − J_d(Ω, Θ_A)) · δΩ,
Δθ_r = ^{A}K_{B} · (J_r(Ω, Θ_B) − J_r(Ω, Θ_A)) · δΩ    (27)

So we can obtain the identification matrix of the distance and rotation error model as

J_dr = ⎡ (AB→/‖AB→‖) · (J_d(Ω, Θ_B) − J_d(Ω, Θ_A)) ⎤
       ⎣ ^{A}K_{B} · (J_r(Ω, Θ_B) − J_r(Ω, Θ_A))   ⎦ ,  J_dr ∈ R^{2×np}    (28)
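The structure of Equation (28) and the least-squares identification it enables can be sketched with synthetic stand-ins: below, J_d, J_r, the unit vector AB→/‖AB→‖, and the axis ^{A}K_{B} are random placeholders rather than quantities derived from the robot model, and noise-free error measurements are simulated from a known δΩ:

```python
import numpy as np

rng = np.random.default_rng(0)
n_params = 6
d_omega_true = 1e-3 * rng.standard_normal(n_params)  # ground-truth kinematic errors

def jdr_row_pair(rng):
    """One 2 x n_params block of Equation (28) for a configuration pair (A, B)."""
    Jd_A, Jd_B = rng.standard_normal((3, n_params)), rng.standard_normal((3, n_params))
    Jr_A, Jr_B = rng.standard_normal((3, n_params)), rng.standard_normal((3, n_params))
    u = rng.standard_normal(3); u /= np.linalg.norm(u)   # stand-in for AB / |AB|
    K = rng.standard_normal(3); K /= np.linalg.norm(K)   # stand-in for A_K_B
    return np.vstack([u @ (Jd_B - Jd_A),                 # distance row
                      K @ (Jr_B - Jr_A)])                # rotation row

J_stack = np.vstack([jdr_row_pair(rng) for _ in range(10)])  # 10 pairs -> 20 rows
y = J_stack @ d_omega_true   # simulated distance/rotation error measurements

# Identification: least-squares solve for the kinematic errors.
d_omega_est, *_ = np.linalg.lstsq(J_stack, y, rcond=None)
print(np.allclose(d_omega_est, d_omega_true))  # True: full column rank, identifiable
```

Each configuration pair contributes two scalar measurements, so enough pairs must be collected to make the stacked matrix full column rank; when it is rank deficient, the next section's removal of redundant parameters is required.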

When there are redundant parameters in the error model, the identification matrix is rank deficient, and measurement noise will seriously affect the accuracy and robustness of parameter identification, implying the necessity of removing redundant parameters. Redundant parameters in the error model are discussed in the next section.

4. Parameter Independence Analysis

4.1. Kinematic Model by the MDH Method. A modified DH method, termed the MDH method [41], is used for kinematic modeling in this paper; it describes the deviation between two adjacent parallel joint axes with an additional rotation transformation to remedy the incompleteness. The MDH method is a good selection for verifying the proposed error model, because its modeling process is simple and it partially overcomes the singularity.

The coordinate frames are established by the MDH method as shown in Figure 5. The MDH method uses five parameters (let p = 5), Ω_i = (a_i, α_i, d_i, θ_i, β_i)ᵀ, to describe two adjoining link coordinate frames, and the homogeneous transformation between them is

^{i−1}A_i = trans(x, a_i) · rot(x, α_i) · trans(z, d_i) · rot(z, θ_i) · rot(y, β_i)

  ⎡ cθ_i cβ_i                    −sθ_i      cθ_i sβ_i                    a_i       ⎤
= ⎢ cα_i sθ_i cβ_i + sα_i sβ_i   cα_i cθ_i  cα_i sθ_i sβ_i − sα_i cβ_i   −d_i sα_i ⎥
  ⎢ sα_i sθ_i cβ_i − cα_i sβ_i   sα_i cθ_i  sα_i sθ_i sβ_i + cα_i cβ_i   d_i cα_i  ⎥
  ⎣ 0                            0          0                            1         ⎦    (29)

where trans and rot denote the translation and rotation matrices, respectively, and ^{i−1}A_i denotes the homogeneous transformation of joint i with respect to joint i − 1. sX means sin X and cX means cos X. The lengths of the rods shown in Figure 2 are listed in Table 1.

According to the modeling rules of the MDH method, we obtain the nominal kinematic parameters of the 7-DOF space robot as shown in Table 2.

Since the relation between the camera frame and the end-effector frame has been calibrated on the ground and is assumed to be unchanged on orbit, the coordinate frame of the end-effector coincides with that of the last joint. Therefore, the end-effector pose of a robot manipulator with n DOFs can be calculated as

Ψ = g(^{0}A_n) = g(∏_{i=1}^{n} {}^{i−1}A_i),    (30)

where the function g denotes the transformation from a homogeneous matrix to its corresponding 3-dimensional position and Z-Y-X Euler angles.
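Equations (29) and (30) translate directly into code. The sketch below uses the nominal parameters of Table 2 and assumes the joint variables enter as offsets added to the θ column:

```python
import numpy as np

def mdh_transform(a, alpha, d, theta, beta):
    """Equation (29): trans(x,a) . rot(x,alpha) . trans(z,d) . rot(z,theta) . rot(y,beta)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    cb, sb = np.cos(beta), np.sin(beta)
    return np.array([
        [ct*cb,            -st,    ct*sb,            a     ],
        [ca*st*cb + sa*sb,  ca*ct, ca*st*sb - sa*cb, -d*sa ],
        [sa*st*cb - ca*sb,  sa*ct, sa*st*sb + ca*cb,  d*ca ],
        [0.0,               0.0,   0.0,               1.0  ],
    ])

# Nominal MDH parameters of the 7-DOF space robot (Table 2), angles in radians.
deg = np.deg2rad
params = [  # (a, alpha, d, theta_offset, beta)
    (0.0, 0.0,      2.3, 0.0, 0.0),
    (0.0, deg(-90), 0.5, 0.0, 0.0),
    (0.0, deg(-90), 0.5, 0.0, 0.0),
    (5.0, 0.0,      0.5, 0.0, 0.0),
    (5.0, 0.0,      0.5, 0.0, 0.0),
    (0.0, deg(90),  0.5, 0.0, 0.0),
    (0.0, deg(90),  0.5, 0.0, 0.0),
]

def forward_kinematics(q):
    """Equation (30): chain (multiply) the per-joint transforms 0A_n = prod(i-1A_i)."""
    T = np.eye(4)
    for (a, alpha, d, th0, beta), qi in zip(params, q):
        T = T @ mdh_transform(a, alpha, d, th0 + qi, beta)
    return T

T = forward_kinematics(np.zeros(7))
print(T[:3, 3])  # nominal end-effector position at the zero configuration: (10, 1, 1.3)
```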

Figure 5: Coordinate frames established by the MDH method. The angle between the y-axes of adjacent coordinate frames is defined as β, overcoming the singularity in the DH method.

Figure 4: Derivation of the rotation error model. The rotational axis ^{A}K_{B} is forced to coincide with ^{A′}K_{B′}, and the starting points of the rotation matrices ^{A}R_{B} and ^{A′}R_{B′} are also forced to coincide.

Table 1: Nominal lengths of the rods in the kinematic self-calibration system of the space robot.

Rod         l0    l1, l2, l3, l6, l7    l4, l5    lcy    lcz
Length (m)  2.3   0.5                   5         6.5    2.55

Table 2: Nominal kinematic parameters of the 7-DOF space robot.

Joint    a (m)    α (°)    d (m)    θ (°)    β (°)
1        0        0        2.3      0        0
2        0        −90      0.5      0        0
3        0        −90      0.5      0        0
4        5        0        0.5      0        0
5        5        0        0.5      0        0
6        0        90       0.5      0        0
7        0        90       0.5      0        0


4.2. Absolute Pose Error Model and Its Identifiability. It should be noted that nonsingularity of an error model indicates that its identification matrix has full column rank. That is to say, the columns of the identification matrix are linearly independent. The kinematic parameters of the robot can be sorted into three groups by their corresponding columns in the identification matrix.

(a) Independent parameters, whose corresponding columns are linearly independent of each other

(b) Relative parameters, whose corresponding column is linearly dependent on another one

(c) Ineffective parameters, whose corresponding column is the zero vector, indicating that they have no effect on the pose error of the robot end-effector

We differentiate Equation (30) with respect to the kinematic parameters to obtain the absolute pose error model as

δΨ = Σ_{i=1}^{n} J_i δΩ_i,    (31)

where δΩ_i denotes the kinematic parameter errors of the ith joint, and J_i denotes its corresponding identification matrix.
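The three groups above can be observed numerically. The toy model below is hypothetical (not the paper's robot): its end position depends on two parameters only through their sum, so a finite-difference identification matrix exposes them as relative parameters via a rank deficiency:

```python
import numpy as np

# Hypothetical toy model: the end position depends on d1 and d2 only through
# their sum, so their columns in the identification matrix are identical
# ("relative parameters" in the classification above).
def fk(params, q):
    l1, d1, d2 = params
    return np.array([l1 * np.cos(q), l1 * np.sin(q), d1 + d2])

def identification_matrix(params, configs, eps=1e-7):
    """Stack the numeric Jacobians dPsi/dOmega over several configurations."""
    params = np.asarray(params, dtype=float)
    blocks = []
    for q in configs:
        J = np.zeros((3, params.size))
        for k in range(params.size):
            dp = params.copy()
            dp[k] += eps
            J[:, k] = (fk(dp, q) - fk(params, q)) / eps
        blocks.append(J)
    return np.vstack(blocks)

J = identification_matrix([1.0, 0.3, 0.2], [0.1, 0.7, 1.5])
print(J.shape, np.linalg.matrix_rank(J))  # (9, 3) 2
```

The rank stays at 2 no matter how many configurations are stacked, which is why such parameters must be removed (or merged) before identification.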

Definition 3. Local transfer matrix of the MDH method [44]:the local transfer matrix Gi as illustrated in Figure 6 is used totransfer δΩi to the local pose error in its own frame i . It canbe calculated as

\[
G_i = \begin{bmatrix}
c\theta_i c\beta_i & -d_i s\theta_i c\beta_i & -s\beta_i & 0 & 0 \\
-s\theta_i & -d_i c\theta_i & 0 & 0 & 0 \\
c\theta_i s\beta_i & -d_i s\theta_i s\beta_i & c\beta_i & 0 & 0 \\
0 & c\theta_i c\beta_i & 0 & -s\beta_i & 0 \\
0 & -s\theta_i & 0 & 0 & 1 \\
0 & c\theta_i s\beta_i & 0 & c\beta_i & 0
\end{bmatrix}. \tag{32}
\]
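Equation (32) can be transcribed directly. The sketch below is a minimal NumPy version (the function name is ours); with β_i = 0 it reduces to the simplified matrix of Equation (37).

```python
import numpy as np

def local_transfer_matrix(d, theta, beta):
    """Local transfer matrix G_i of Equation (32): maps the parameter
    errors [da, d-alpha, dd, d-theta, d-beta] of joint i to the 6x1
    local pose error expressed in frame {i}."""
    ct, st = np.cos(theta), np.sin(theta)
    cb, sb = np.cos(beta), np.sin(beta)
    return np.array([
        [ct * cb, -d * st * cb, -sb, 0.0, 0.0],
        [-st,     -d * ct,      0.0, 0.0, 0.0],
        [ct * sb, -d * st * sb,  cb, 0.0, 0.0],
        [0.0,      ct * cb,     0.0, -sb, 0.0],
        [0.0,     -st,          0.0, 0.0, 1.0],
        [0.0,      ct * sb,     0.0,  cb, 0.0],
    ])
```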

Definition 4. Global transfer matrix [44]: the local pose error is then passed to the next coordinate frame until the last one, and we obtain the pose error of the end-effector with respect to the last coordinate frame. The global transfer matrix from the ith frame to the (i+1)th one is written as

\[
{}^{i-1}F_i = \begin{bmatrix}
{}^{i-1}R_i^{\,T} & {}^{i-1}R_i^{\,T}\,\operatorname{skew}(-{}^{i-1}P_i) \\
0_{3\times 3} & {}^{i-1}R_i^{\,T}
\end{bmatrix}, \tag{33}
\]

where {}^{i-1}R_i denotes the rotation matrix in the homogeneous matrix {}^{i-1}A_i and {}^{i-1}P_i is its translation vector. The error transferring matrix between the two adjoining coordinate frames satisfies

\[
{}^{i-1}F_{i+1} = {}^{i}F_{i+1}\,{}^{i-1}F_i, \qquad {}^{i}F_i = I_6. \tag{34}
\]
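Equations (33) and (34) can be sketched as follows. We write the top-right block as R^T skew(−P), the sign convention that keeps Equation (33) consistent with the later Equation (41); the helper names are ours.

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix so that skew(p) @ v == np.cross(p, v)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def frame_transfer(R, P):
    """Error-transfer matrix {i-1}F_i of Equation (33) for the adjoining
    homogeneous transform with rotation R and translation P."""
    F = np.zeros((6, 6))
    F[:3, :3] = R.T
    F[:3, 3:] = R.T @ skew(-P)   # couples rotation error into position error
    F[3:, 3:] = R.T
    return F
```

With this convention, `frame_transfer` of the identity transform returns I_6, and chaining two transforms reproduces the recursion of Equation (34).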

Parameter independence in the error model can be determined just by analysing the redundant parameters of adjacent joints [42]. In other words, the matrix [J_{i−1}  J_i] is required to be of full column rank to ensure that all parameters are identifiable.

Proposition 10. The full column rank of [J_{i−1}  J_i] is equivalent to the full column rank of [{}^{i−1}F_i G_{i−1}  G_i].

Proof. According to Equations (32), (33), and (34), we can obtain the identification matrices with respect to δΩ_{i−1} and δΩ_i as

\[
J_{i-1} = {}^{i}F_n\,{}^{i-1}F_i\,G_{i-1}, \qquad J_i = {}^{i}F_n\,G_i. \tag{35}
\]

Since {}^iF_n has nothing to do with Ω_{i−1} and Ω_i, the full column rank of [J_{i−1}  J_i] is equivalent to the full column rank of [{}^{i−1}F_i G_{i−1}  G_i] according to Equation (35). Considering that the expression of [{}^{i−1}F_i G_{i−1}  G_i] is simpler and depends only on Ω_{i−1} and Ω_i, it becomes easier to analyze the parameter independence in the error model.

Suppose H_{i−1} = {}^{i−1}F_i G_{i−1}, and then

\[
\begin{aligned}
H_{i-1} &= \begin{bmatrix} H^a_{i-1} & H^\alpha_{i-1} & H^d_{i-1} & H^\theta_{i-1} & H^\beta_{i-1} \end{bmatrix}, \\
G_i &= \begin{bmatrix} G^a_i & G^\alpha_i & G^d_i & G^\theta_i & G^\beta_i \end{bmatrix}.
\end{aligned} \tag{36}
\]

Considering that θ_{i−1} and θ_i are variables and that the initial values of β_{i−1} and β_i are generally set to zero, we can obtain by Equation (36)

\[
G_i = \begin{bmatrix}
c\theta_i & -d_i s\theta_i & 0 & 0 & 0 \\
-s\theta_i & -d_i c\theta_i & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & c\theta_i & 0 & 0 & 0 \\
0 & -s\theta_i & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0
\end{bmatrix}. \tag{37}
\]

Next, three typical singularities are discussed to analyze the parameter independence between H_{i−1} and G_i.

(1) The two adjacent joints are parallel but not collinear, indicating that α_i = 0 or π, a_i ≠ 0. H_{i−1} can be rewritten as


According to Equations (37) and (38), H^d_{i−1} = ±G^d_i, indicating that there is a linear interrelationship between d_{i−1} and d_i.

(2) The two adjacent joints are parallel and collinear, indicating that α_i = 0 or π, a_i = 0. H_{i−1} can be rewritten as

According to Equations (37) and (39), H^d_{i−1} = ±G^d_i and H^θ_{i−1} = ±G^θ_i, indicating that there is a linear interrelationship between d_{i−1} and d_i, and the same holds for θ_{i−1} and θ_i.

(3) The two adjacent joints are orthogonal, with α_i = ±π/2, a_i = 0. H_{i−1} can be rewritten as

\[
H_{i-1} = \begin{bmatrix}
c\theta_i c\theta_{i-1} + s\theta_i s\theta_{i-1} & (d_i - d_{i-1})(c\theta_i s\theta_{i-1} - s\theta_i c\theta_{i-1}) & 0 & \pm a\,s\theta_i & \pm d\,c\theta_i \\
c\theta_i s\theta_{i-1} - s\theta_i c\theta_{i-1} & (d_i - d_{i-1})(s\theta_i s\theta_{i-1} - c\theta_i c\theta_{i-1}) & 0 & \pm a\,c\theta_i & \mp d\,s\theta_i \\
0 & -a\,s\theta_{i-1} & \pm 1 & 0 & \mp a \\
0 & c\theta_i c\theta_{i-1} + s\theta_i s\theta_{i-1} & 0 & 0 & \pm s\theta_i \\
0 & c\theta_i s\theta_{i-1} - s\theta_i c\theta_{i-1} & 0 & 0 & \pm c\theta_i \\
0 & 0 & 0 & \pm 1 & 0
\end{bmatrix}. \tag{38}
\]

\[
H_{i-1} = \begin{bmatrix}
c\theta_i c\theta_{i-1} + s\theta_i s\theta_{i-1} & (d_i - d_{i-1})(c\theta_i s\theta_{i-1} - s\theta_i c\theta_{i-1}) & 0 & 0 & \pm d\,c\theta_i \\
c\theta_i s\theta_{i-1} - s\theta_i c\theta_{i-1} & (d_i - d_{i-1})(s\theta_i s\theta_{i-1} - c\theta_i c\theta_{i-1}) & 0 & 0 & \mp d\,s\theta_i \\
0 & 0 & \pm 1 & 0 & 0 \\
0 & c\theta_i c\theta_{i-1} + s\theta_i s\theta_{i-1} & 0 & 0 & \pm s\theta_i \\
0 & c\theta_i s\theta_{i-1} - s\theta_i c\theta_{i-1} & 0 & 0 & \pm c\theta_i \\
0 & 0 & 0 & \pm 1 & 0
\end{bmatrix}. \tag{39}
\]
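The degeneracy claimed for case (2) can be checked numerically. The sketch below (helper names ours) builds G_i from Equation (37) and H_{i−1} from Equation (39) taking the upper signs, stacks them over several random joint angles as a calibration run would, and observes that the d and θ columns of H_{i−1} duplicate those of G_i, so the stacked matrix cannot reach full column rank.

```python
import numpy as np

def g_matrix(d, th):
    """G_i of Equation (37) (beta = 0)."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -d * s, 0., 0., 0.],
                     [-s, -d * c, 0., 0., 0.],
                     [0., 0., 1., 0., 0.],
                     [0., c, 0., 0., 0.],
                     [0., -s, 0., 0., 1.],
                     [0., 0., 0., 1., 0.]])

def h_matrix_case2(d_prev, d, th_prev, th):
    """H_{i-1} of Equation (39): adjacent joints parallel and collinear
    (alpha_i = 0, a_i = 0); upper signs taken."""
    c0, s0 = np.cos(th_prev), np.sin(th_prev)
    c1, s1 = np.cos(th), np.sin(th)
    dd = d - d_prev
    return np.array([[c1*c0 + s1*s0, dd*(c1*s0 - s1*c0), 0., 0., d*c1],
                     [c1*s0 - s1*c0, dd*(s1*s0 - c1*c0), 0., 0., -d*s1],
                     [0., 0., 1., 0., 0.],
                     [0., c1*c0 + s1*s0, 0., 0., s1],
                     [0., c1*s0 - s1*c0, 0., 0., c1],
                     [0., 0., 0., 1., 0.]])

rng = np.random.default_rng(1)
blocks = []
for _ in range(6):  # stack several configurations, as calibration does
    t0, t1 = rng.uniform(-np.pi, np.pi, size=2)
    blocks.append(np.hstack([h_matrix_case2(0.4, 0.7, t0, t1),
                             g_matrix(0.7, t1)]))
M = np.vstack(blocks)
# the d and theta columns of H duplicate those of G, so rank(M) < 10
rank = np.linalg.matrix_rank(M)
```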

Figure 6: Schematic of error transferring. Kinematic errors δΩ_i are transferred to the end-effector pose by the local transfer matrix G_i and the global transfer matrix ^iF_n; that is, J_i = ^iF_n · G_i.


According to Equations (37) and (40), H^β_{i−1} = ∓G^θ_i, so β_{i−1} is linearly related to θ_i.

Based on the analysis above, the identifiability of kinematic parameters in the absolute pose error model of the 7-DOF space robot can be obtained, as shown in Table 3.

Only the relative parameters θ_2, θ_3, d_5, θ_6, θ_7 and all the independent parameters can be identified.

4.3. Parameter Independence Analysis of the Distance and Rotation Error Model. Equation (28) shows that the identification matrix of the distance and rotation error model is calculated based on that of the absolute pose error model. Specifically, the distance-related identification matrix is based on the position-related one, whereas the rotation-related one is based on the orientation-related one. However, it should be noted that the two parts have different coefficients, i.e., \(\overrightarrow{AB}/\lVert\overrightarrow{AB}\rVert\) and \({}^AK_B\), and that the position-related identification matrix is expressed in the end-effector frame while the orientation-related one is expressed in the robot base frame. Accordingly, we discuss the two kinds of identification matrix separately in the following section.

(1) The distance-related identification matrix: we can obtain the distance-related identification matrix with respect to Ω_i by Equations (28) and (33) as

\[
J^d_i = {}^0R_n\,{}^iF^{\,r1\text{--}r3}_n\,G_i
= {}^0R_n \begin{bmatrix} {}^iR_n^{\,T} & {}^iR_n^{\,T}\,\operatorname{skew}(-{}^iP_n) \end{bmatrix} G_i
= \begin{bmatrix} {}^0R_i & {}^0R_i\,\operatorname{skew}(-{}^iP_n) \end{bmatrix} G_i, \tag{41}
\]

where {}^iF^{\,r1\text{--}r3}_n denotes the first three rows of {}^iF_n, and {}^0R_n is used to convert the reference frame of the position-related identification matrix from the end-effector frame to the robot base frame.

The position of the end-effector can be written as

\[
{}^0P_n = {}^0P_i + {}^0R_i\,{}^iP_n. \tag{42}
\]

For the first joint, i = 1, we have

\[
J^d_1 = {}^0R_n \begin{bmatrix} {}^1R_n^{\,T} & {}^1R_n^{\,T}\,\operatorname{skew}(-{}^1P_n) \end{bmatrix} G_1, \tag{43}
\]

\[
{}^0P_n = {}^0P_1 + {}^0R_1\,{}^1P_n. \tag{44}
\]

Substituting Equation (44) into Equation (43), Equation (43) can be rewritten as

\[
J^d_1 = {}^0R_1 \begin{bmatrix} I_3 & \operatorname{skew}(-{}^1P_n) \end{bmatrix} G_1, \tag{45}
\]

where {}^0R_1 can be calculated by Equation (29) and {}^1P_n is independent of Ω_1.

We assume that the space robot moves from the configuration Θ_A to Θ_B, and the first joint rotates from θ_{1A} to θ_{1B}, i.e., from Ω_{1A} = [a_1, α_1, d_1, θ_{1A}, β_1]^T to Ω_{1B} = [a_1, α_1, d_1, θ_{1B}, β_1]^T. The end-effector position changes from {}^1P_{nA} = [x_A, y_A, z_A]^T to {}^1P_{nB} = [x_B, y_B, z_B]^T. Using Wolfram Mathematica, we can obtain the distance-related identification matrix as Equation (46).

Table 3: Identifiability of kinematic parameters in the absolute pose error model of the 7-DOF space robot.

Independent parameters | a1, α1, d1, θ1, a2, α2, d2, a3, α3, β3, a4, α4, θ4, β4, a5, α5, θ5, a6, α6, d6, a7, α7, d7, β7
Relative parameters    | β1 (θ2), β2 (θ3), d3 (d4), d4 (d5), β5 (θ6), β6 (θ7)
Ineffective parameters | None

\[
H_{i-1} = \begin{bmatrix}
c\theta_i c\theta_{i-1} & -d_i s\theta_i c\theta_{i-1} - d_{i-1} c\theta_i s\theta_{i-1} & s\theta_i & d_i c\theta_i & 0 \\
-s\theta_i c\theta_{i-1} & -d_i c\theta_i c\theta_{i-1} + d_{i-1} s\theta_i s\theta_{i-1} & c\theta_i & -d_i s\theta_i & 0 \\
0 & d_{i-1} c\theta_{i-1} & 0 & 0 & 0 \\
0 & c\theta_i c\theta_{i-1} & 0 & s\theta_i & 0 \\
0 & -s\theta_i c\theta_{i-1} & 0 & c\theta_i & 0 \\
0 & 0 & 0 & 0 & \mp 1
\end{bmatrix}. \tag{40}
\]


Obviously, the kinematic parameters a_1, α_1, d_1, θ_1 are ineffective, since their corresponding columns, shown in Equation (46), are zero.

For the last joint, i = n and {}^nP_n = 0_{3×1}, we have

\[
J^d_n = {}^0R_n \begin{bmatrix} I_3 & 0_{3\times 3} \end{bmatrix} G_n = {}^0R_n\,G^{\,r1\text{--}r3}_n, \tag{47}
\]

where {}^0R_n depends on all the kinematic parameters, and so does the coefficient ({}^0P_n)_B − ({}^0P_n)_A, which indicates that the identifiability of J^d_n(Θ_A → Θ_B) depends on G^{r1–r3}_n. Obviously, from Equation (37), the kinematic parameters θ_n, β_n are ineffective.

For the ith joint (i = 2, 3, …, n − 1), the coefficients \(\overrightarrow{AB}/\lVert\overrightarrow{AB}\rVert\) in Equation (28) and [{}^0R_i  {}^0R_i skew(−{}^iP_n)] in Equation (41) both depend on all the kinematic parameters, so the parameter identifiability of the distance error model is the same as that of the position error model.

(2) The rotation-related identification matrix

We can obtain the rotation-related identification matrix by Equations (28) and (33) as

\[
J^r_i = {}^iF^{\,r4\text{--}r6}_n\,G_i = \begin{bmatrix} 0_{3\times 3} & {}^iR_n^{\,T} \end{bmatrix} G_i = {}^iR_n^{\,T}\,G^{\,r4\text{--}r6}_i. \tag{48}
\]

For the first joint, i = 1, we have

\[
J^r_1 = {}^1R_n^{\,T}\,G^{\,r4\text{--}r6}_1 = ({}^1R_0\,{}^0R_n)^T\,G^{\,r4\text{--}r6}_1 = {}^0R_n^{\,T}\,{}^0R_1\,G^{\,r4\text{--}r6}_1, \tag{49}
\]

where

\[
{}^0R_1\,G^{\,r4\text{--}r6}_1(\Omega_1) = \begin{bmatrix}
0 & 1 & 0 & 0 & -s\theta_1 \\
0 & 0 & 0 & -s\alpha_1 & c\alpha_1 c\theta_1 \\
0 & 0 & 0 & c\alpha_1 & s\alpha_1 c\theta_1
\end{bmatrix}. \tag{50}
\]

By Equations (28), (49), and (50), we can obtain

\[
J^r_1(\Theta_A \to \Theta_B) = {}^AK_B \left[ J^r_1(\theta_{1B}) - J^r_1(\theta_{1A}) \right]
= {}^AK_B\,{}^0R_n^{\,T} \left[ R_G(\theta_{1B}) - R_G(\theta_{1A}) \right]
= {}^AK_B\,{}^0R_n^{\,T} \begin{bmatrix}
0 & 0 & 0 & 0 & s\theta_{1A} - s\theta_{1B} \\
0 & 0 & 0 & 0 & (c\theta_{1B} - c\theta_{1A})\,c\alpha_1 \\
0 & 0 & 0 & 0 & (c\theta_{1B} - c\theta_{1A})\,s\alpha_1
\end{bmatrix}, \tag{51}
\]

where R_G(θ_1) denotes the matrix of Equation (50), {}^AK_B {}^0R_n^T depends on all the kinematic parameters, and the kinematic parameters a_1, α_1, d_1, θ_1 are ineffective.

For the last joint, i = n, we have

\[
J^r_n = {}^nR_n^{\,T}\,G^{\,r4\text{--}r6}_n = \begin{bmatrix}
0 & c\theta_n & 0 & 0 & 0 \\
0 & -s\theta_n & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0
\end{bmatrix}. \tag{52}
\]

By Equations (28) and (52), we can obtain

\[
J^r_n(\Theta_A \to \Theta_B) = {}^AK_B \left[ J^r_n(\theta_{nB}) - J^r_n(\theta_{nA}) \right]
= {}^AK_B\,{}^0R_n^{\,T} \left[ R_G(\theta_{nB}) - R_G(\theta_{nA}) \right]
= {}^AK_B\,{}^0R_n^{\,T} \begin{bmatrix}
0 & c\theta_{nB} - c\theta_{nA} & 0 & 0 & 0 \\
0 & s\theta_{nA} - s\theta_{nB} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}. \tag{53}
\]

By Equation (53), the kinematic parameters a_n, d_n, θ_n, β_n are ineffective. Actually, if both the distance and the rotation error are taken into consideration, the ineffective parameters are θ_n, β_n.

For the ith joint (i = 2, 3, …, n − 1), the coefficient {}^AK_B in Equation (28) and {}^iR_n^T in Equation (48) both depend on all the kinematic parameters, so the parameter identifiability of the rotation error model is the same as that of the orientation error model.

\[
J^d_1(\Theta_A \to \Theta_B) = \left[({}^0P_n)_B - ({}^0P_n)_A\right]^T \left[J^d_1(\theta_{1B}) - J^d_1(\theta_{1A})\right]
= \begin{bmatrix}
0 \\ 0 \\ 0 \\ 0 \\
2\big[(x_B z_A + x_A z_B)\cos 2\beta_1 + (z_A z_B - x_A x_B)\sin 2\beta_1\big]\sin^2\dfrac{\theta_{1A} - \theta_{1B}}{2} + \big[(y_A z_B - y_B z_A)\cos\beta_1 + (x_A y_B - x_B y_A)\sin\beta_1\big]\sin(\theta_{1A} - \theta_{1B})
\end{bmatrix}^{T}. \tag{46}
\]


(3) The distance and rotation error model: in summary, the identifiability of kinematic parameters in the distance and rotation error model of the 7-DOF space robot can be obtained, as shown in Table 4.

Since β_6 is linearly related to θ_7, and θ_7 is ineffective, β_6 is also ineffective. Therefore, only the relative parameters θ_2, θ_3, d_5, θ_6 and all the independent parameters can be identified.

5. Method Verification

The process of the calibration simulation is shown in Figure 7. Firstly, the measurement configurations are selected from the permissible operating range of the joints with the given number of configurations; then, the actual end-effector poses are calculated with the actual kinematic parameters; finally, taking the encoder noise and the measurement noise into account, the least squares method is adopted to identify the kinematic parameters. It is worth noting that the actual parameters are obtained by artificially adding parameter errors to the nominal parameters, so all these parameters are known for analysis and comparison.

Whether for the absolute pose error model or for the proposed distance and rotation error model, the least squares method is a powerful tool to identify the kinematic errors in the presence of sensor noise. The least squares method is applied as

\[
\begin{aligned}
\delta\Psi_i &= \Psi^* - f(\Omega_i, \Theta), \\
\delta\Omega_{i+1} &= J_\Omega(\Omega_i, \Theta)^{+}\,\delta\Psi_i, \\
\Omega_{i+1} &= \Omega_i + \delta\Omega_{i+1},
\end{aligned} \tag{54}
\]

where i = 0, 1, 2, …, N, N denotes the total number of iterations, and Ω_0 is the initial nominal kinematic parameters. (·)^+ is the Moore-Penrose inverse of a matrix. The process is iterated until δΩ_i converges below a small threshold. Finally, the identified kinematic parameters Ω_c can be obtained as

\[
\Omega_c = \Omega_0 + \sum_{i=1}^{N} \delta\Omega_i. \tag{55}
\]

For the distance and rotation error model, Ψ* is the measured value of the distance and rotation of the space robot, so it is inaccurate due to the existence of sensor noise. The work [39] indicates that a sufficient number of measurements can guarantee the convergence of the above process. For the distance and rotation error model of the space robot, the least number of measurements is the number of identifiable kinematic parameters.
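The iteration of Equations (54) and (55) is ordinary Gauss-Newton with a pseudoinverse. A minimal sketch follows (the function and its toy arguments are ours; `f` and `jac` stand in for the error model and its stacked identification matrix):

```python
import numpy as np

def identify_parameters(f, jac, omega0, configs, psi_meas, iters=50, tol=1e-12):
    """Iterative least-squares identification, Equations (54)-(55):
    linearize the error model around the current estimate and update
    with the Moore-Penrose pseudoinverse until the step is tiny."""
    omega = np.array(omega0, dtype=float)
    for _ in range(iters):
        d_psi = psi_meas - f(omega, configs)                   # Eq (54), line 1
        d_omega = np.linalg.pinv(jac(omega, configs)) @ d_psi  # Eq (54), line 2
        omega = omega + d_omega                                # Eq (54), line 3
        if np.linalg.norm(d_omega) < tol:
            break
    return omega  # Omega_c of Eq (55): initial guess plus accumulated steps
```

In the paper's setting, `f` would predict the distance and rotation measurements from Ω and the measurement configurations, and `jac` would stack the identification matrices of Equation (28) over all configuration pairs.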

5.1. Selection of Measurement Configurations. The end-effector poses of the space robot are measured by a hand-eye camera, so the camera has to point at the target checkerboard. Besides, the end-effector of the space robot under the selected measurement configurations has to be close to the checkerboard in order to ensure measurement accuracy.

For the sake of convenience and economy, we select one configuration for comparison as Θ_A and another 51 configurations as Θ_B. The end-effector position corresponding to the configuration Θ_A is located 0.5 m above the centre of the checkerboard, while the end-effector positions corresponding to the other 51 configurations are distributed uniformly on a hemispherical surface whose centre is the end-effector position of the configuration Θ_A and whose radius is 0.5 m. Additionally, the z-axis of the coordinate frame corresponding to each of these end-effector poses points to the centre of the checkerboard, as shown in Figure 8.
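The layout described above can be generated programmatically. The sketch below (names ours) places n positions on the upper hemisphere of radius 0.5 m around the Θ_A hover point using a Fibonacci spiral, an illustrative near-uniform scheme rather than the paper's exact one, and aims each frame's z-axis at the checkerboard centre.

```python
import numpy as np

def hemisphere_viewpoints(board_center, radius=0.5, n=51):
    """Place n end-effector positions on the upper hemisphere of the
    given radius around the Theta_A hover point (0.5 m above the
    checkerboard centre), each frame's z-axis aimed at the centre."""
    hover = board_center + np.array([0.0, 0.0, radius])  # Theta_A position
    golden = np.pi * (3.0 - np.sqrt(5.0))                # Fibonacci spiral step
    poses = []
    for k in range(n):
        w = (k + 0.5) / n            # height fraction in (0, 1)
        r = np.sqrt(1.0 - w * w)
        phi = golden * k
        p = hover + radius * np.array([r * np.cos(phi), r * np.sin(phi), w])
        z_axis = (board_center - p) / np.linalg.norm(board_center - p)
        poses.append((p, z_axis))
    return poses
```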

5.2. Sensor Noises and Kinematic Parameter Errors. In practical applications, the actual end-effector pose of a robot can be obtained by the hand-eye camera, while the actual robot configurations can be measured by the encoders. However, in simulation, measuring noises and encoder noises are added according to their respective distributions in order to imitate the influence of sensor errors. These measuring noises follow the normal distributions shown in Table 5.
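Injecting the noise of Table 5 into simulated measurements is a one-liner per channel; a hedged sketch (function name ours):

```python
import numpy as np

def add_sensor_noise(position_m, orientation_deg, joints_deg, rng):
    """Corrupt simulated measurements with the zero-mean Gaussian noise
    of Table 5: sigma = 0.001 m (end-effector position), 0.01 deg
    (end-effector orientation), 0.005 deg (encoder joint readings)."""
    return (position_m + rng.normal(0.0, 0.001, np.shape(position_m)),
            orientation_deg + rng.normal(0.0, 0.01, np.shape(orientation_deg)),
            joints_deg + rng.normal(0.0, 0.005, np.shape(joints_deg)))
```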

According to Section 4, the transformation matrix between the robot base frame and frame {1} cannot be identified, so we have to make the assumption that this transformation matrix will not change on orbit or can be calibrated in another way. Therefore, errors are added to all the kinematic parameters except Ω_1.

Table 4: Identifiability of kinematic parameters in the distance and rotation error model of the 7-DOF space robot.

Independent parameters | a2, α2, d2, a3, α3, β3, a4, α4, θ4, β4, a5, α5, θ5, a6, α6, d6, a7, α7, d7
Relative parameters    | β1 (θ2), β2 (θ3), d3 (d4), d4 (d5), β5 (θ6)
Ineffective parameters | a1, α1, d1, θ1, β6, θ7, β7

Figure 7: Flowchart of calibration simulation. Encoder noise and pose measurement noise are added to achieve more reliable simulation results. (Steps: generate configurations from the reachable configuration set; compute the actual tool poses with the actual parameters and encoder noise; run least squares parameter identification with the nominal parameters and measurement noise; output the identified parameters.)


All the kinematic parameter errors δΩ are shown in Table 6.

5.3. Result and Analysis. The purpose of robot calibration is to obtain the accurate kinematic parameters representing the robot structure and an exact estimate of the end-effector pose. The least squares method is adopted to obtain the calibrated kinematic parameters Ω_c, and then the calibration residuals of the different error models are calculated as

\[
\varsigma_\Omega = \Omega^* - \Omega_c. \tag{56}
\]

The calibration residuals by the distance error model, by the distance and rotation error model, and by the absolute pose error model are shown in Tables 7-9.

Some of the calibration residuals in Tables 7-9 are relatively large; these belong to relative parameters, so the residuals should be summed up accordingly.

For the distance and rotation error model,

\[
\begin{aligned}
\varsigma_{\beta_1} + \varsigma_{\theta_2} &= 0.00000 + 0.00176 = 0.00176, \\
\varsigma_{\beta_2} + \varsigma_{\theta_3} &= -0.04111 + 0.03701 = -0.00410, \\
\varsigma_{d_3} + \varsigma_{d_4} + \varsigma_{d_5} &= 0.00015 + (-0.00861) + 0.01005 = 0.00159, \\
\varsigma_{\beta_5} - \varsigma_{\theta_6} &= -0.04298 - (-0.05903) = 0.01605.
\end{aligned} \tag{57}
\]

For the absolute pose error model,

\[
\begin{aligned}
\varsigma_{\beta_1} + \varsigma_{\theta_2} &= 0.00000 + 0.00247 = 0.00247, \\
\varsigma_{\beta_2} + \varsigma_{\theta_3} &= -0.04111 + 0.04160 = 0.00049, \\
\varsigma_{d_3} + \varsigma_{d_4} + \varsigma_{d_5} &= 0.00015 + (-0.00861) + 0.00890 = 0.00044, \\
\varsigma_{\beta_5} - \varsigma_{\theta_6} &= -0.04298 - (-0.04436) = 0.00138, \\
\varsigma_{\beta_6} - \varsigma_{\theta_7} &= -0.02700 - (-0.02722) = 0.00022.
\end{aligned} \tag{58}
\]

Equations (57) and (58) illustrate that, once summed up, the calibration residuals of these relative parameters counteract each other, indicating that only some of the relative parameters need to be identified. This is consistent with the analysis of parameter independence, which proves the correctness of the analysis results.

Moreover, we come to the conclusion that the distance and rotation error model does better than the distance error

Table 6: Kinematic parameter errors of the 7-DOF space robot.

Joint δa (m) δα (°) δd (m) δθ (°) δβ (°)

1 0 0 0 0 0

2 0.01 0.05 0.01 0.05 0.05

3 0.01 0.05 0.01 0.05 0.05

4 0.01 0.05 0.01 0.05 0.05

5 0.01 0.05 0.01 0.05 0.05

6 0.01 0.05 0.01 0.05 0.05

7 0.01 0.05 0.01 0.05 0.05

Table 7: Calibration residuals by the distance error model.

Joint | ςa (m)   | ςα (°)   | ςd (m)   | ςθ (°)   | ςβ (°)
1     | 0.00000  | 0.00000  | 0.00000  | 0.00000  | 0.00000
2     | 0.00054  | 0.00445  | 0.00094  | 0.00038  | −0.04111
3     | 0.00063  | 0.01097  | 0.00015  | 0.03997  | 0.11269
4     | 0.00355  | 0.00632  | −0.00861 | 0.01106  | 0.15176
5     | 0.00442  | −0.07475 | 0.02887  | 0.05620  | −0.04298
6     | 0.00012  | −0.20839 | −0.00284 | −0.01352 | −0.02700
7     | −0.00194 | −0.29798 | 0.00152  | −0.00042 | −0.03749

Table 8: Calibration residuals by the distance and rotation error model.

Joint | ςa (m)   | ςα (°)   | ςd (m)   | ςθ (°)   | ςβ (°)
1     | 0.00000  | 0.00000  | 0.00000  | 0.00000  | 0.00000
2     | 0.00056  | 0.00050  | 0.00051  | 0.00176  | −0.04111
3     | 0.00041  | −0.00313 | 0.00015  | 0.03701  | 0.00586
4     | 0.00081  | 0.00485  | −0.00861 | 0.01164  | 0.01368
5     | 0.00053  | −0.00637 | 0.01005  | 0.00310  | −0.04298
6     | 0.00048  | −0.01963 | 0.00047  | −0.05903 | −0.02700
7     | −0.00159 | −0.01215 | 0.00045  | −0.00042 | −0.03749

Table 5: Normal distribution of measuring noises.

Measuring noise    | End-effector position (m) | End-effector orientation (°) | Robot configuration (°)
Mean               | 0                         | 0                            | 0
Standard deviation | 0.001                     | 0.01                         | 0.005

Figure 8: The corresponding end-effector frames of the measurement configurations. A total of 52 configurations are selected as measurement configurations, one of which is defined as Θ_A, while the other configurations are treated as Θ_B.


model in the identification accuracy of the kinematic parameters, though it is slightly weaker than the absolute pose error model.

Finally, 500 robot configurations are selected randomly to serve as the validation group. The end-effector position estimate errors of the validation configurations are calculated with the different error models, and the maximum and average of these estimate errors are analysed to compare calibration performance.

Figure 9 gives the end-effector position estimate errors of the validation configurations before calibration and after calibration by the three kinds of error models. Figure 10 gives the corresponding histograms of these position errors. The maximum and average of these 500 position errors are shown in Table 10.

Whether in the identification accuracy of the kinematic parameters shown in Tables 7-9 or in the statistical analysis of the validation tests in Figures 9 and 10 and Table 10, the results verify the effectiveness of the proposed distance and rotation error model. The rotation error of the robot end-effector is included so as to improve calibration performance. However, it should be pointed out that, due to the lack of an absolute reference, errors in the kinematic parameters of the first (root) joint of the space robot cannot be identified.

6. Conclusions

Proposed in this paper is an error model involving both the distance and the rotation error of the space robot end-effector. The error model avoids identifying the transformation matrix between the measurement system frame and the robot base frame, which makes it suitable for self-calibration of the space robot on orbit. Besides, the identifiable parameters in the distance and rotation error model are confirmed so as to eliminate singularity in robot kinematic calibration. Finally, we conduct calibration simulations and compare the differences in calibration performance between these models. Statistical results indicate that the proposed error model does better than the distance-only error model in the accuracy of both the robot end-effector position estimate and the kinematic parameter identification. However, the observability of the distance and rotation error model still needs to be studied as an indicator for measurement configuration optimization, which would significantly reduce the number of configurations required for calibration. Besides, information fusion provides a powerful tool to deal with uncertainty and external disturbance in pose measurement, and the application of filtering

algorithms in robot calibration is worthy of attention. From the operational point of view, the lighting conditions under which calibration is carried out should also be taken into

Figure 10: Histograms of the end-effector position errors of the 500 validation configurations (number of robot configurations versus position error (m); series: before calibration, by distance error, by distance + rotation error, by absolute error).

Table 10: The statistical characteristics of the end-effector position errors of the 500 validation configurations.

Error model                    | Max (mm) | Mean (mm)
Before calibration             | 38       | 19
By distance error              | 21       | 11
By distance and rotation error | 4.8      | 2.5
By absolute error              | 1.1      | 0.5

Figure 9: The end-effector position errors of the 500 validation configurations (position error (m) versus configuration number; series: before calibration, by distance error, by distance + rotation error, by absolute error).

Table 9: Calibration residuals by the absolute pose error model.

Joint | ςa (m)   | ςα (°)   | ςd (m)   | ςθ (°)   | ςβ (°)
1     | 0.00000  | 0.00000  | 0.00000  | 0.00000  | 0.00000
2     | 0.00017  | 0.00140  | −0.00024 | 0.00247  | −0.04111
3     | 0.00016  | 0.00262  | 0.00015  | 0.04160  | 0.00187
4     | −0.00005 | 0.00014  | −0.00861 | −0.00283 | 0.00287
5     | −0.00009 | 0.00569  | 0.00890  | 0.00251  | −0.04298
6     | 0.00005  | −0.00458 | −0.00003 | −0.04436 | −0.02700
7     | −0.00006 | 0.00457  | −0.00003 | −0.02722 | 0.00050


consideration. In summary, there remain future works and challenges in adopting the proposed method in practical applications.

Data Availability

The data used to support the findings of this study areincluded within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China (61573066 and 61327806).

References

[1] C. Sallaberger, “Canadian space robotic activities,” Acta Astronautica, vol. 41, no. 4–10, pp. 239–246, 1997.

[2] P. J. Staritz, S. Skaff, C. Urmson, and W. Whittaker, “Skyworker: a robot for assembly, inspection and maintenance of large scale orbital facilities,” in Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No.01CH37164), vol. 4, pp. 4180–4185, Seoul, South Korea, May 2001.

[3] Z. Roth, B. Mooring, and B. Ravani, “An overview of robot calibration,” IEEE Journal on Robotics and Automation, vol. 3, no. 5, pp. 377–385, 1987.

[4] Chen-Gang, Li-Tong, Chu-Ming, J.-Q. Xuan, and S.-H. Xu, “Review on kinematics calibration technology of serial robots,” International Journal of Precision Engineering and Manufacturing, vol. 15, no. 8, pp. 1759–1774, 2014.

[5] R. P. Judd and A. B. Knasinski, “A technique to calibrate industrial robots with experimental verification,” IEEE Transactions on Robotics and Automation, vol. 6, no. 1, pp. 20–30, 1990.

[6] V. R. de Angulo and C. Torras, “Self-calibration of a space robot,” IEEE Transactions on Neural Networks, vol. 8, no. 4, pp. 951–963, 1997.

[7] P. Liang, Y. L. Chang, and S. Hackwood, “Adaptive self-calibration of vision-based robot systems,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 4, pp. 811–824, 1989.

[8] Y. Liu, H. Liu, F. L. Ni, and W. F. Xu, “New self-calibration approach to space robots based on hand-eye vision,” Journal of Central South University, vol. 18, no. 4, pp. 1087–1096, 2011.

[9] S. Yin, Y. Ren, J. Zhu, S. Yang, and S. Ye, “A vision-based self-calibration method for robotic visual inspection systems,” Sensors, vol. 13, no. 12, pp. 16565–16582, 2013.

[10] G. Du and P. Zhang, “Online robot calibration based on vision measurement,” Robotics and Computer-Integrated Manufacturing, vol. 29, no. 6, pp. 484–492, 2013.

[11] G. Du and P. Zhang, “IMU-based online kinematic calibration of robot manipulator,” The Scientific World Journal, vol. 2013, Article ID 139738, 10 pages, 2013.

[12] G. Du and P. Zhang, “Online serial manipulator calibration based on multisensory process via extended Kalman and particle filters,” IEEE Transactions on Industrial Electronics, vol. 61, no. 12, pp. 6852–6859, 2014.

[13] G. Du, P. Zhang, and D. Li, “Online robot calibration based on hybrid sensors using Kalman filters,” Robotics and Computer-Integrated Manufacturing, vol. 31, pp. 91–100, 2015.

[14] G. Du, P. Zhang, and D. Li, “Human–manipulator interface based on multisensory process via Kalman filters,” IEEE Transactions on Industrial Electronics, vol. 61, no. 10, pp. 5411–5418, 2014.

[15] G. Du and P. Zhang, “A markerless human–robot interface using particle filter and Kalman filter for dual robots,” IEEE Transactions on Industrial Electronics, vol. 62, no. 4, pp. 2257–2264, 2015.

[16] G. Du, P. Zhang, and X. Liu, “Markerless human–manipulator interface using leap motion with interval Kalman filter and improved particle filter,” IEEE Transactions on Industrial Informatics, vol. 12, no. 2, pp. 694–704, 2016.

[17] X. Zhang, Y. Song, Y. Yang, and H. Pan, “Stereo vision based autonomous robot calibration,” Robotics and Autonomous Systems, vol. 93, pp. 43–51, 2017.

[18] Y. J. Ren, Z. Ji-gui, Y. Xue-you, and Y. Sheng-hua, “Measurement robot calibration model and algorithm based on distance accuracy,” Acta Metrologica Sinica, vol. 3, no. 29, pp. 198–202, 2008.

[19] Z. Xuecai, Z. Qixian, and Z. Shixiong, “A new model with compensation algorithm for distance errors of robot mechanisms,” Robot, vol. 3, no. 1, 1991.

[20] X. Zhou and Q. Zhang, “Distance error model in the study on the positioning accuracy of robots,” Robot, vol. 17, no. 1, 1995.

[21] J. Roning and A. Korzun, “A method for industrial robot calibration,” in Proceedings of International Conference on Robotics and Automation, vol. 4, pp. 3184–3190, Albuquerque, NM, USA, April 1997.

[22] C. Gong, J. Yuan, and J. Ni, “Nongeometric error identification and compensation for robotic system by inverse calibration,” International Journal of Machine Tools & Manufacture, vol. 40, no. 14, pp. 2119–2137, 2000.

[23] Y. Tan, H. Sun, and Z. Shao, “New manipulator calibration method based on screw theory and distance error,” Journal of Beijing University of Aeronautics & Astronautics, vol. 32, no. 9, pp. 1104–1108, 2006.

[24] W. Gao, H. Wang, Y. Jiang, and X. Pan, “Kinematic calibration method of robots based on distance error,” Robot, vol. 35, no. 5, p. 600, 2013.

[25] T. Zhang, X. Dai, and D. Liang, “Robot error calibration based on distance measurement with parameter selection,” Journal of Beijing University of Aeronautics and Astronautics, vol. 40, no. 5, pp. 585–590, 2014.

[26] Y. G. Zhang and H. Zhang, “An approach of robotic kinematics parameters calibration,” Advanced Materials Research, vol. 655–657, no. 5, pp. 1023–1028, 2013.

[27] N. Mu, K. Wang, Z. Xie, and P. Ren, “Calibration of a flexible measurement system based on industrial articulated robot and structured light sensor,” Optical Engineering, vol. 56, no. 5, article 054103, 2017.

[28] Y. Shi, J. Fang, and Z. Weng, “Research on kinematic parameter calibration of handling robot,” in 2017 13th IEEE International Conference on Electronic Measurement & Instruments (ICEMI), pp. 224–228, Yangzhou, China, 2017.

[29] X. Li, H. Hu, and W. Ding, “Two error models for calibrating SCARA robots based on the MDH model,” MATEC Web of Conferences, vol. 95, article 08008, 2017.

[30] X. Yao, W. Shi, L. Zhang, D. Xu, and J. Zuo, “Research on kinematic calibration of service robot based on distance error,” Modern Manufacturing Engineering, vol. 9, no. 1, 2017.

[31] I.-C. Ha, “Kinematic parameter calibration method for industrial robot manipulator using the relative position,” Journal of Mechanical Science and Technology, vol. 22, no. 6, pp. 1084–1090, 2008.

[32] W. Zhenhua, X. Hui, C. Guodong, S. Rongchuan, and L. Sun, “A distance error based industrial robot kinematic calibration method,” Industrial Robot: An International Journal, vol. 41, no. 5, pp. 439–446, 2014.

[33] M. John, “Kinematic calibration of Delta robot using distance measurements,” Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, vol. 39, no. 8, pp. 55–60, 2015.

[34] T. Zhang, “Kinematic calibration of robot based on distance error,” Journal of South China University of Technology, vol. 39, no. 11, pp. 98–103, 2011.

[35] A. Joubair and I. A. Bonev, “Kinematic calibration of a six-axis serial robot using distance and sphere constraints,” International Journal of Advanced Manufacturing Technology, vol. 77, no. 1–4, pp. 515–523, 2015.

[36] G. Flandin, F. Chaumette, and E. Marchand, “Eye-in-hand/eye-to-hand cooperation for visual servoing,” in Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), vol. 3, pp. 2741–2746, San Francisco, CA, USA, April 2000.

[37] M. Sabatini, R. Monti, P. Gasbarri, and G. B. Palmerini, “Adaptive and robust algorithms and tests for visual-based navigation of a space robotic manipulator,” Acta Astronautica, vol. 83, pp. 65–84, 2013.

[38] M. Carpentiero, M. Sabatini, and G. B. Palmerini, “Capabilities of stereo vision systems for future space missions,” in Proceedings of the 67th International Astronautical Congress, Guadalajara, Mexico, 2016.

[39] K. Schröer, S. L. Albright, and M. Grethlein, “Complete, minimal and model-continuous kinematic models for robot calibration,” Robotics and Computer-Integrated Manufacturing, vol. 13, no. 1, pp. 73–85, 1997.

[40] J. Denavit and R. S. Hartenberg, “A kinematic notation for lower-pair mechanisms based on matrices,” ASME Journal of Applied Mechanics, vol. 22, pp. 215–221, 1955.

[41] S. A. Hayati, “Robot arm geometric link parameter estimation,” in The 22nd IEEE Conference on Decision and Control, pp. 1477–1483, San Antonio, TX, USA, December 1983.

[42] H. Zhuang, Z. S. Roth, and F. Hamano, “A complete and parametrically continuous kinematic model for robot manipulators,” IEEE Transactions on Robotics & Automation, vol. 8, no. 4, pp. 451–463, 1992.

[43] B. W. Mooring and G. R. Tang, “An improved method for identifying the kinematic parameters in a six-axis robot,” in Computers in Engineering, Proceedings of the International Computers in Engineering Conference and Exhibit, vol. 1, pp. 79–84, Chicago, IL, USA, 1984.

[44] Y. L. Xiong, Robotics, China Machine Press, 1993.
